* [gentoo-commits] proj/linux-patches:5.17 commit in: /
From: Mike Pagano @ 2022-04-29 15:22 UTC
To: gentoo-commits
commit: 89cbf34f7a3343b7ca51d87e6d8a04c9c726b619
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 29 15:20:58 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 29 15:20:58 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=89cbf34f
Fix ov2640 patch
Thanks to kveremitz for reporting
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch | 27 ++++--------------------
1 file changed, 4 insertions(+), 23 deletions(-)
diff --git a/2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch b/2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch
index 6fb67ffa..58e461b1 100644
--- a/2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch
+++ b/2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch
@@ -1,29 +1,10 @@
-From 58d23260ea259cb1734d3074be56aabe4c90bae4 Mon Sep 17 00:00:00 2001
-From: Mike Pagano <mpagano@gentoo.org>
-Date: Wed, 27 Apr 2022 17:13:01 -0400
-Subject: [PATCH 1/1] Add V4L2_ASYNC as a dependency to match other drivers and
- prevent failures when compile testing.
-Cc: mpagano@gentoo.org
-
-Fixes: ff3cc65cadb5 ("media: v4l: async, fwnode: Improve module organisation")
-
-Signed-off-by: Mike Pagano <mpagano@gentoo.org>
----
- drivers/media/i2c/Kconfig | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
-index fae2baabb773..2b20aa6c37b1 100644
---- a/drivers/media/i2c/Kconfig
-+++ b/drivers/media/i2c/Kconfig
-@@ -372,6 +372,7 @@ config VIDEO_OV13B10
+--- a/drivers/media/i2c/Kconfig 2022-04-29 10:36:23.601027092 -0400
++++ b/drivers/media/i2c/Kconfig 2022-04-29 10:37:04.764398401 -0400
+@@ -915,6 +915,7 @@ config VIDEO_OV02A10
config VIDEO_OV2640
tristate "OmniVision OV2640 sensor support"
- depends on VIDEO_DEV && I2C
+ depends on VIDEO_V4L2 && I2C
+ select V4L2_ASYNC
help
This is a Video4Linux2 sensor driver for the OmniVision
OV2640 camera.
---
-2.35.1
-
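For reference, once the corrected 2910 patch applies, the OV2640 entry in
drivers/media/i2c/Kconfig on 5.17 should read roughly as below (a sketch
reconstructed from the hunk above; surrounding entries omitted):

config VIDEO_OV2640
	tristate "OmniVision OV2640 sensor support"
	depends on VIDEO_V4L2 && I2C
	select V4L2_ASYNC
	help
	  This is a Video4Linux2 sensor driver for the OmniVision
	  OV2640 camera.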
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
From: Mike Pagano @ 2022-06-14 17:10 UTC
To: gentoo-commits
commit: 62bc1330f547822379dba11fd7cb23f8f889df8f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 14 17:10:22 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun 14 17:10:22 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=62bc1330
Linux patch 5.17.15
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +
1014_linux-5.17.15.patch | 11727 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 11735 insertions(+)
diff --git a/0000_README b/0000_README
index 675c8bcb..4e6ae24e 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,14 @@ Patch: 1013_linux-5.17.14.patch
From: http://www.kernel.org
Desc: Linux 5.17.14
+Patch: 1014_linux-5.17.15.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.15
+
+Patch: 1015_linux-5.17.16.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-5.17.15.patch b/1014_linux-5.17.15.patch
new file mode 100644
index 00000000..8f27641c
--- /dev/null
+++ b/1014_linux-5.17.15.patch
@@ -0,0 +1,11727 @@
+diff --git a/Documentation/ABI/testing/sysfs-ata b/Documentation/ABI/testing/sysfs-ata
+index 2f726c9147522..3daecac48964f 100644
+--- a/Documentation/ABI/testing/sysfs-ata
++++ b/Documentation/ABI/testing/sysfs-ata
+@@ -107,13 +107,14 @@ Description:
+ described in ATA8 7.16 and 7.17. Only valid if
+ the device is not a PM.
+
+- pio_mode: (RO) Transfer modes supported by the device when
+- in PIO mode. Mostly used by PATA device.
++ pio_mode: (RO) PIO transfer mode used by the device.
++ Mostly used by PATA devices.
+
+- xfer_mode: (RO) Current transfer mode
++ xfer_mode: (RO) Current transfer mode. Mostly used by
++ PATA devices.
+
+- dma_mode: (RO) Transfer modes supported by the device when
+- in DMA mode. Mostly used by PATA device.
++ dma_mode: (RO) DMA transfer mode used by the device.
++ Mostly used by PATA devices.
+
+ class: (RO) Device class. Can be "ata" for disk,
+ "atapi" for packet device, "pmp" for PM, or
+diff --git a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+index 5d2d989de893c..37402c370fbbc 100644
+--- a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+@@ -55,7 +55,7 @@ examples:
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+ regulator-enable-ramp-delay = <256>;
+- regulator-allowed-modes = <0 1 2 4>;
++ regulator-allowed-modes = <0 1 2>;
+ };
+
+ vbuck3 {
+@@ -63,7 +63,7 @@ examples:
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+ regulator-enable-ramp-delay = <256>;
+- regulator-allowed-modes = <0 1 2 4>;
++ regulator-allowed-modes = <0 1 2>;
+ };
+ };
+ };
+diff --git a/Documentation/devicetree/bindings/remoteproc/mtk,scp.yaml b/Documentation/devicetree/bindings/remoteproc/mtk,scp.yaml
+index d21a25ee96e62..8b2c0f1f85500 100644
+--- a/Documentation/devicetree/bindings/remoteproc/mtk,scp.yaml
++++ b/Documentation/devicetree/bindings/remoteproc/mtk,scp.yaml
+@@ -22,11 +22,13 @@ properties:
+
+ reg:
+ description:
+- Should contain the address ranges for memory regions SRAM, CFG, and
+- L1TCM.
++ Should contain the address ranges for memory regions SRAM, CFG, and,
++ on some platforms, L1TCM.
++ minItems: 2
+ maxItems: 3
+
+ reg-names:
++ minItems: 2
+ items:
+ - const: sram
+ - const: cfg
+@@ -46,16 +48,30 @@ required:
+ - reg
+ - reg-names
+
+-if:
+- properties:
+- compatible:
+- enum:
+- - mediatek,mt8183-scp
+- - mediatek,mt8192-scp
+-then:
+- required:
+- - clocks
+- - clock-names
++allOf:
++ - if:
++ properties:
++ compatible:
++ enum:
++ - mediatek,mt8183-scp
++ - mediatek,mt8192-scp
++ then:
++ required:
++ - clocks
++ - clock-names
++
++ - if:
++ properties:
++ compatible:
++ enum:
++ - mediatek,mt8183-scp
++ - mediatek,mt8186-scp
++ then:
++ properties:
++ reg:
++ maxItems: 2
++ reg-names:
++ maxItems: 2
+
+ additionalProperties:
+ type: object
+@@ -75,10 +91,10 @@ additionalProperties:
+
+ examples:
+ - |
+- #include <dt-bindings/clock/mt8183-clk.h>
++ #include <dt-bindings/clock/mt8192-clk.h>
+
+ scp@10500000 {
+- compatible = "mediatek,mt8183-scp";
++ compatible = "mediatek,mt8192-scp";
+ reg = <0x10500000 0x80000>,
+ <0x10700000 0x8000>,
+ <0x10720000 0xe0000>;
+diff --git a/Documentation/tools/rtla/Makefile b/Documentation/tools/rtla/Makefile
+index 9f2b84af1a6c7..093af6d7a0e93 100644
+--- a/Documentation/tools/rtla/Makefile
++++ b/Documentation/tools/rtla/Makefile
+@@ -17,9 +17,21 @@ DOC_MAN1 = $(addprefix $(OUTPUT),$(_DOC_MAN1))
+ RST2MAN_DEP := $(shell command -v rst2man 2>/dev/null)
+ RST2MAN_OPTS += --verbose
+
++TEST_RST2MAN = $(shell sh -c "rst2man --version > /dev/null 2>&1 || echo n")
++
+ $(OUTPUT)%.1: %.rst
+ ifndef RST2MAN_DEP
+- $(error "rst2man not found, but required to generate man pages")
++ $(info ********************************************)
++ $(info ** NOTICE: rst2man not found)
++ $(info **)
++ $(info ** Consider installing the latest rst2man from your)
++ $(info ** distribution, e.g., 'dnf install python3-docutils' on Fedora,)
++ $(info ** or from source:)
++ $(info **)
++ $(info ** https://docutils.sourceforge.io/docs/dev/repository.html )
++ $(info **)
++ $(info ********************************************)
++ $(error NOTICE: rst2man required to generate man pages)
+ endif
+ rst2man $(RST2MAN_OPTS) $< > $@
+
+diff --git a/Makefile b/Makefile
+index 5450a2c9efa67..c3676b04ca386 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/boot/dts/aspeed-ast2600-evb.dts b/arch/arm/boot/dts/aspeed-ast2600-evb.dts
+index b7eb552640cbf..788448cdd6b3f 100644
+--- a/arch/arm/boot/dts/aspeed-ast2600-evb.dts
++++ b/arch/arm/boot/dts/aspeed-ast2600-evb.dts
+@@ -103,7 +103,7 @@
+ &mac0 {
+ status = "okay";
+
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-rxid";
+ phy-handle = <&ethphy0>;
+
+ pinctrl-names = "default";
+@@ -114,7 +114,7 @@
+ &mac1 {
+ status = "okay";
+
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-rxid";
+ phy-handle = <&ethphy1>;
+
+ pinctrl-names = "default";
+diff --git a/arch/arm/mach-ep93xx/clock.c b/arch/arm/mach-ep93xx/clock.c
+index 28e0ae6e890e5..00c2db101ce55 100644
+--- a/arch/arm/mach-ep93xx/clock.c
++++ b/arch/arm/mach-ep93xx/clock.c
+@@ -345,9 +345,10 @@ static struct clk_hw *clk_hw_register_ddiv(const char *name,
+ psc->hw.init = &init;
+
+ clk = clk_register(NULL, &psc->hw);
+- if (IS_ERR(clk))
++ if (IS_ERR(clk)) {
+ kfree(psc);
+-
++ return ERR_CAST(clk);
++ }
+ return &psc->hw;
+ }
+
+@@ -452,9 +453,10 @@ static struct clk_hw *clk_hw_register_div(const char *name,
+ psc->hw.init = &init;
+
+ clk = clk_register(NULL, &psc->hw);
+- if (IS_ERR(clk))
++ if (IS_ERR(clk)) {
+ kfree(psc);
+-
++ return ERR_CAST(clk);
++ }
+ return &psc->hw;
+ }
+
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index cbc41e261f1e7..c679c57ec76e9 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1120,6 +1120,7 @@ skip_init_ctx:
+ bpf_jit_binary_free(header);
+ prog->bpf_func = NULL;
+ prog->jited = 0;
++ prog->jited_len = 0;
+ goto out_off;
+ }
+ bpf_jit_binary_lock_ro(header);
+diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine
+index eeab4f3e6c197..946853a08502e 100644
+--- a/arch/m68k/Kconfig.machine
++++ b/arch/m68k/Kconfig.machine
+@@ -335,6 +335,7 @@ comment "Machine Options"
+
+ config UBOOT
+ bool "Support for U-Boot command line parameters"
++ depends on COLDFIRE
+ help
+ If you say Y here kernel will try to collect command
+ line parameters from the initial u-boot stack.
+diff --git a/arch/m68k/include/asm/pgtable_no.h b/arch/m68k/include/asm/pgtable_no.h
+index 87151d67d91e7..bce5ca56c3883 100644
+--- a/arch/m68k/include/asm/pgtable_no.h
++++ b/arch/m68k/include/asm/pgtable_no.h
+@@ -42,7 +42,8 @@ extern void paging_init(void);
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+-#define ZERO_PAGE(vaddr) (virt_to_page(0))
++extern void *empty_zero_page;
++#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+
+ /*
+ * All 32bit addresses are effectively valid for vmalloc...
+diff --git a/arch/m68k/kernel/setup_mm.c b/arch/m68k/kernel/setup_mm.c
+index 49e573b943268..811c7b85db878 100644
+--- a/arch/m68k/kernel/setup_mm.c
++++ b/arch/m68k/kernel/setup_mm.c
+@@ -87,15 +87,8 @@ void (*mach_sched_init) (void) __initdata = NULL;
+ void (*mach_init_IRQ) (void) __initdata = NULL;
+ void (*mach_get_model) (char *model);
+ void (*mach_get_hardware_list) (struct seq_file *m);
+-/* machine dependent timer functions */
+-int (*mach_hwclk) (int, struct rtc_time*);
+-EXPORT_SYMBOL(mach_hwclk);
+ unsigned int (*mach_get_ss)(void);
+-int (*mach_get_rtc_pll)(struct rtc_pll_info *);
+-int (*mach_set_rtc_pll)(struct rtc_pll_info *);
+ EXPORT_SYMBOL(mach_get_ss);
+-EXPORT_SYMBOL(mach_get_rtc_pll);
+-EXPORT_SYMBOL(mach_set_rtc_pll);
+ void (*mach_reset)( void );
+ void (*mach_halt)( void );
+ void (*mach_power_off)( void );
+diff --git a/arch/m68k/kernel/setup_no.c b/arch/m68k/kernel/setup_no.c
+index 5e4104f07a443..19eea73d3c170 100644
+--- a/arch/m68k/kernel/setup_no.c
++++ b/arch/m68k/kernel/setup_no.c
+@@ -50,7 +50,6 @@ char __initdata command_line[COMMAND_LINE_SIZE];
+
+ /* machine dependent timer functions */
+ void (*mach_sched_init)(void) __initdata = NULL;
+-int (*mach_hwclk) (int, struct rtc_time*);
+
+ /* machine dependent reboot functions */
+ void (*mach_reset)(void);
+diff --git a/arch/m68k/kernel/time.c b/arch/m68k/kernel/time.c
+index 340ffeea0a9dc..a97600b2af502 100644
+--- a/arch/m68k/kernel/time.c
++++ b/arch/m68k/kernel/time.c
+@@ -63,6 +63,15 @@ void timer_heartbeat(void)
+ #endif /* CONFIG_HEARTBEAT */
+
+ #ifdef CONFIG_M68KCLASSIC
++/* machine dependent timer functions */
++int (*mach_hwclk) (int, struct rtc_time*);
++EXPORT_SYMBOL(mach_hwclk);
++
++int (*mach_get_rtc_pll)(struct rtc_pll_info *);
++int (*mach_set_rtc_pll)(struct rtc_pll_info *);
++EXPORT_SYMBOL(mach_get_rtc_pll);
++EXPORT_SYMBOL(mach_set_rtc_pll);
++
+ #if !IS_BUILTIN(CONFIG_RTC_DRV_GENERIC)
+ void read_persistent_clock64(struct timespec64 *ts)
+ {
+diff --git a/arch/mips/kernel/mips-cpc.c b/arch/mips/kernel/mips-cpc.c
+index 17aff13cd7ce6..3e386f7e15450 100644
+--- a/arch/mips/kernel/mips-cpc.c
++++ b/arch/mips/kernel/mips-cpc.c
+@@ -28,6 +28,7 @@ phys_addr_t __weak mips_cpc_default_phys_base(void)
+ cpc_node = of_find_compatible_node(of_root, NULL, "mti,mips-cpc");
+ if (cpc_node) {
+ err = of_address_to_resource(cpc_node, 0, &res);
++ of_node_put(cpc_node);
+ if (!err)
+ return res.start;
+ }
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index b779603978e10..e49ef5b1857ef 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -211,7 +211,6 @@ config PPC
+ select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
+ select HAVE_HW_BREAKPOINT if PERF_EVENTS && (PPC_BOOK3S || PPC_8xx)
+ select HAVE_IOREMAP_PROT
+- select HAVE_IRQ_EXIT_ON_IRQ_STACK
+ select HAVE_IRQ_TIME_ACCOUNTING
+ select HAVE_KERNEL_GZIP
+ select HAVE_KERNEL_LZMA if DEFAULT_UIMAGE
+@@ -764,7 +763,6 @@ config THREAD_SHIFT
+ range 13 15
+ default "15" if PPC_256K_PAGES
+ default "14" if PPC64
+- default "14" if KASAN
+ default "13"
+ help
+ Used to define the stack size. The default is almost always what you
+diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
+index d6e649b3c70b6..bc3e1de9d08b0 100644
+--- a/arch/powerpc/include/asm/thread_info.h
++++ b/arch/powerpc/include/asm/thread_info.h
+@@ -14,10 +14,16 @@
+
+ #ifdef __KERNEL__
+
+-#if defined(CONFIG_VMAP_STACK) && CONFIG_THREAD_SHIFT < PAGE_SHIFT
++#ifdef CONFIG_KASAN
++#define MIN_THREAD_SHIFT (CONFIG_THREAD_SHIFT + 1)
++#else
++#define MIN_THREAD_SHIFT CONFIG_THREAD_SHIFT
++#endif
++
++#if defined(CONFIG_VMAP_STACK) && MIN_THREAD_SHIFT < PAGE_SHIFT
+ #define THREAD_SHIFT PAGE_SHIFT
+ #else
+-#define THREAD_SHIFT CONFIG_THREAD_SHIFT
++#define THREAD_SHIFT MIN_THREAD_SHIFT
+ #endif
+
+ #define THREAD_SIZE (1 << THREAD_SHIFT)
+diff --git a/arch/powerpc/kernel/ptrace/ptrace-fpu.c b/arch/powerpc/kernel/ptrace/ptrace-fpu.c
+index 5dca19361316e..09c49632bfe59 100644
+--- a/arch/powerpc/kernel/ptrace/ptrace-fpu.c
++++ b/arch/powerpc/kernel/ptrace/ptrace-fpu.c
+@@ -17,9 +17,13 @@ int ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data)
+
+ #ifdef CONFIG_PPC_FPU_REGS
+ flush_fp_to_thread(child);
+- if (fpidx < (PT_FPSCR - PT_FPR0))
+- memcpy(data, &child->thread.TS_FPR(fpidx), sizeof(long));
+- else
++ if (fpidx < (PT_FPSCR - PT_FPR0)) {
++ if (IS_ENABLED(CONFIG_PPC32))
++ // On 32-bit the index we are passed refers to 32-bit words
++ *data = ((u32 *)child->thread.fp_state.fpr)[fpidx];
++ else
++ memcpy(data, &child->thread.TS_FPR(fpidx), sizeof(long));
++ } else
+ *data = child->thread.fp_state.fpscr;
+ #else
+ *data = 0;
+@@ -39,9 +43,13 @@ int ptrace_put_fpr(struct task_struct *child, int index, unsigned long data)
+
+ #ifdef CONFIG_PPC_FPU_REGS
+ flush_fp_to_thread(child);
+- if (fpidx < (PT_FPSCR - PT_FPR0))
+- memcpy(&child->thread.TS_FPR(fpidx), &data, sizeof(long));
+- else
++ if (fpidx < (PT_FPSCR - PT_FPR0)) {
++ if (IS_ENABLED(CONFIG_PPC32))
++ // On 32-bit the index we are passed refers to 32-bit words
++ ((u32 *)child->thread.fp_state.fpr)[fpidx] = data;
++ else
++ memcpy(&child->thread.TS_FPR(fpidx), &data, sizeof(long));
++ } else
+ child->thread.fp_state.fpscr = data;
+ #endif
+
+diff --git a/arch/powerpc/kernel/ptrace/ptrace.c b/arch/powerpc/kernel/ptrace/ptrace.c
+index c43f77e2ac310..6d45fa2880156 100644
+--- a/arch/powerpc/kernel/ptrace/ptrace.c
++++ b/arch/powerpc/kernel/ptrace/ptrace.c
+@@ -445,4 +445,7 @@ void __init pt_regs_check(void)
+ * real registers.
+ */
+ BUILD_BUG_ON(PT_DSCR < sizeof(struct user_pt_regs) / sizeof(unsigned long));
++
++ // ptrace_get/put_fpr() rely on PPC32 and VSX being incompatible
++ BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC32) && IS_ENABLED(CONFIG_VSX));
+ }
+diff --git a/arch/riscv/kernel/efi.c b/arch/riscv/kernel/efi.c
+index 0241592982314..1aa540350abd3 100644
+--- a/arch/riscv/kernel/efi.c
++++ b/arch/riscv/kernel/efi.c
+@@ -65,7 +65,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
+
+ if (md->attribute & EFI_MEMORY_RO) {
+ val = pte_val(pte) & ~_PAGE_WRITE;
+- val = pte_val(pte) | _PAGE_READ;
++ val |= _PAGE_READ;
+ pte = __pte(val);
+ }
+ if (md->attribute & EFI_MEMORY_XP) {
+diff --git a/arch/riscv/kernel/machine_kexec.c b/arch/riscv/kernel/machine_kexec.c
+index cbef0fc73afa8..df8e24559035c 100644
+--- a/arch/riscv/kernel/machine_kexec.c
++++ b/arch/riscv/kernel/machine_kexec.c
+@@ -65,7 +65,9 @@ machine_kexec_prepare(struct kimage *image)
+ if (image->segment[i].memsz <= sizeof(fdt))
+ continue;
+
+- if (copy_from_user(&fdt, image->segment[i].buf, sizeof(fdt)))
++ if (image->file_mode)
++ memcpy(&fdt, image->segment[i].buf, sizeof(fdt));
++ else if (copy_from_user(&fdt, image->segment[i].buf, sizeof(fdt)))
+ continue;
+
+ if (fdt_check_header(&fdt))
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index 54c7536f2482d..1023e9d43d443 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -701,7 +701,7 @@ static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw,
+ unsigned int nbytes)
+ {
+ gw->walk_bytes_remain -= nbytes;
+- scatterwalk_unmap(&gw->walk);
++ scatterwalk_unmap(gw->walk_ptr);
+ scatterwalk_advance(&gw->walk, nbytes);
+ scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
+ gw->walk_ptr = NULL;
+@@ -776,7 +776,7 @@ static int gcm_out_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
+ goto out;
+ }
+
+- scatterwalk_unmap(&gw->walk);
++ scatterwalk_unmap(gw->walk_ptr);
+ gw->walk_ptr = NULL;
+
+ gw->ptr = gw->buf;
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 01bae1d51113b..3bf8aeeec96f5 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -264,6 +264,10 @@ ENTRY(sie64a)
+ BPEXIT __SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+ .Lsie_entry:
+ sie 0(%r14)
++# Let the next instruction be NOP to avoid triggering a machine check
++# and handling it in a guest as result of the instruction execution.
++ nopr 7
++.Lsie_leave:
+ BPOFF
+ BPENTER __SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+ .Lsie_skip:
+@@ -563,7 +567,7 @@ ENTRY(mcck_int_handler)
+ jno .Lmcck_panic
+ #if IS_ENABLED(CONFIG_KVM)
+ OUTSIDE %r9,.Lsie_gmap,.Lsie_done,6f
+- OUTSIDE %r9,.Lsie_entry,.Lsie_skip,4f
++ OUTSIDE %r9,.Lsie_entry,.Lsie_leave,4f
+ oi __LC_CPU_FLAGS+7, _CIF_MCCK_GUEST
+ j 5f
+ 4: CHKSTG .Lmcck_panic
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index dfee0ebb2fac4..88f6d923ee456 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2601,6 +2601,18 @@ static int __s390_enable_skey_pte(pte_t *pte, unsigned long addr,
+ return 0;
+ }
+
++/*
++ * Give a chance to schedule after setting a key to 256 pages.
++ * We only hold the mm lock, which is a rwsem and the kvm srcu.
++ * Both can sleep.
++ */
++static int __s390_enable_skey_pmd(pmd_t *pmd, unsigned long addr,
++ unsigned long next, struct mm_walk *walk)
++{
++ cond_resched();
++ return 0;
++}
++
+ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
+ unsigned long hmask, unsigned long next,
+ struct mm_walk *walk)
+@@ -2623,12 +2635,14 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
+ end = start + HPAGE_SIZE - 1;
+ __storage_key_init_range(start, end);
+ set_bit(PG_arch_1, &page->flags);
++ cond_resched();
+ return 0;
+ }
+
+ static const struct mm_walk_ops enable_skey_walk_ops = {
+ .hugetlb_entry = __s390_enable_skey_hugetlb,
+ .pte_entry = __s390_enable_skey_pte,
++ .pmd_entry = __s390_enable_skey_pmd,
+ };
+
+ int s390_enable_skey(void)
+diff --git a/arch/um/drivers/chan_kern.c b/arch/um/drivers/chan_kern.c
+index 62997055c4547..26a702a065154 100644
+--- a/arch/um/drivers/chan_kern.c
++++ b/arch/um/drivers/chan_kern.c
+@@ -133,7 +133,7 @@ static void line_timer_cb(struct work_struct *work)
+ struct line *line = container_of(work, struct line, task.work);
+
+ if (!line->throttled)
+- chan_interrupt(line, line->driver->read_irq);
++ chan_interrupt(line, line->read_irq);
+ }
+
+ int enable_chan(struct line *line)
+@@ -195,9 +195,9 @@ void free_irqs(void)
+ chan = list_entry(ele, struct chan, free_list);
+
+ if (chan->input && chan->enabled)
+- um_free_irq(chan->line->driver->read_irq, chan);
++ um_free_irq(chan->line->read_irq, chan);
+ if (chan->output && chan->enabled)
+- um_free_irq(chan->line->driver->write_irq, chan);
++ um_free_irq(chan->line->write_irq, chan);
+ chan->enabled = 0;
+ }
+ }
+@@ -215,9 +215,9 @@ static void close_one_chan(struct chan *chan, int delay_free_irq)
+ spin_unlock_irqrestore(&irqs_to_free_lock, flags);
+ } else {
+ if (chan->input && chan->enabled)
+- um_free_irq(chan->line->driver->read_irq, chan);
++ um_free_irq(chan->line->read_irq, chan);
+ if (chan->output && chan->enabled)
+- um_free_irq(chan->line->driver->write_irq, chan);
++ um_free_irq(chan->line->write_irq, chan);
+ chan->enabled = 0;
+ }
+ if (chan->ops->close != NULL)
+diff --git a/arch/um/drivers/line.c b/arch/um/drivers/line.c
+index 8febf95da96e1..02b0befd67632 100644
+--- a/arch/um/drivers/line.c
++++ b/arch/um/drivers/line.c
+@@ -139,7 +139,7 @@ static int flush_buffer(struct line *line)
+ count = line->buffer + LINE_BUFSIZE - line->head;
+
+ n = write_chan(line->chan_out, line->head, count,
+- line->driver->write_irq);
++ line->write_irq);
+ if (n < 0)
+ return n;
+ if (n == count) {
+@@ -156,7 +156,7 @@ static int flush_buffer(struct line *line)
+
+ count = line->tail - line->head;
+ n = write_chan(line->chan_out, line->head, count,
+- line->driver->write_irq);
++ line->write_irq);
+
+ if (n < 0)
+ return n;
+@@ -195,7 +195,7 @@ int line_write(struct tty_struct *tty, const unsigned char *buf, int len)
+ ret = buffer_data(line, buf, len);
+ else {
+ n = write_chan(line->chan_out, buf, len,
+- line->driver->write_irq);
++ line->write_irq);
+ if (n < 0) {
+ ret = n;
+ goto out_up;
+@@ -215,7 +215,7 @@ void line_throttle(struct tty_struct *tty)
+ {
+ struct line *line = tty->driver_data;
+
+- deactivate_chan(line->chan_in, line->driver->read_irq);
++ deactivate_chan(line->chan_in, line->read_irq);
+ line->throttled = 1;
+ }
+
+@@ -224,7 +224,7 @@ void line_unthrottle(struct tty_struct *tty)
+ struct line *line = tty->driver_data;
+
+ line->throttled = 0;
+- chan_interrupt(line, line->driver->read_irq);
++ chan_interrupt(line, line->read_irq);
+ }
+
+ static irqreturn_t line_write_interrupt(int irq, void *data)
+@@ -260,19 +260,23 @@ int line_setup_irq(int fd, int input, int output, struct line *line, void *data)
+ int err;
+
+ if (input) {
+- err = um_request_irq(driver->read_irq, fd, IRQ_READ,
+- line_interrupt, IRQF_SHARED,
++ err = um_request_irq(UM_IRQ_ALLOC, fd, IRQ_READ,
++ line_interrupt, 0,
+ driver->read_irq_name, data);
+ if (err < 0)
+ return err;
++
++ line->read_irq = err;
+ }
+
+ if (output) {
+- err = um_request_irq(driver->write_irq, fd, IRQ_WRITE,
+- line_write_interrupt, IRQF_SHARED,
++ err = um_request_irq(UM_IRQ_ALLOC, fd, IRQ_WRITE,
++ line_write_interrupt, 0,
+ driver->write_irq_name, data);
+ if (err < 0)
+ return err;
++
++ line->write_irq = err;
+ }
+
+ return 0;
+diff --git a/arch/um/drivers/line.h b/arch/um/drivers/line.h
+index bdb16b96e76fd..f15be75a3bf3b 100644
+--- a/arch/um/drivers/line.h
++++ b/arch/um/drivers/line.h
+@@ -23,9 +23,7 @@ struct line_driver {
+ const short minor_start;
+ const short type;
+ const short subtype;
+- const int read_irq;
+ const char *read_irq_name;
+- const int write_irq;
+ const char *write_irq_name;
+ struct mc_device mc;
+ struct tty_driver *driver;
+@@ -35,6 +33,8 @@ struct line {
+ struct tty_port port;
+ int valid;
+
++ int read_irq, write_irq;
++
+ char *init_str;
+ struct list_head chan_list;
+ struct chan *chan_in, *chan_out;
+diff --git a/arch/um/drivers/ssl.c b/arch/um/drivers/ssl.c
+index 41eae2e8fb652..8514966778d53 100644
+--- a/arch/um/drivers/ssl.c
++++ b/arch/um/drivers/ssl.c
+@@ -47,9 +47,7 @@ static struct line_driver driver = {
+ .minor_start = 64,
+ .type = TTY_DRIVER_TYPE_SERIAL,
+ .subtype = 0,
+- .read_irq = SSL_IRQ,
+ .read_irq_name = "ssl",
+- .write_irq = SSL_WRITE_IRQ,
+ .write_irq_name = "ssl-write",
+ .mc = {
+ .list = LIST_HEAD_INIT(driver.mc.list),
+diff --git a/arch/um/drivers/stdio_console.c b/arch/um/drivers/stdio_console.c
+index e8b762f4d8c25..489d5a746ed33 100644
+--- a/arch/um/drivers/stdio_console.c
++++ b/arch/um/drivers/stdio_console.c
+@@ -53,9 +53,7 @@ static struct line_driver driver = {
+ .minor_start = 0,
+ .type = TTY_DRIVER_TYPE_CONSOLE,
+ .subtype = SYSTEM_TYPE_CONSOLE,
+- .read_irq = CONSOLE_IRQ,
+ .read_irq_name = "console",
+- .write_irq = CONSOLE_WRITE_IRQ,
+ .write_irq_name = "console-write",
+ .mc = {
+ .list = LIST_HEAD_INIT(driver.mc.list),
+diff --git a/arch/um/include/asm/irq.h b/arch/um/include/asm/irq.h
+index e187c789369d3..749dfe8512e84 100644
+--- a/arch/um/include/asm/irq.h
++++ b/arch/um/include/asm/irq.h
+@@ -4,19 +4,15 @@
+
+ #define TIMER_IRQ 0
+ #define UMN_IRQ 1
+-#define CONSOLE_IRQ 2
+-#define CONSOLE_WRITE_IRQ 3
+-#define UBD_IRQ 4
+-#define UM_ETH_IRQ 5
+-#define SSL_IRQ 6
+-#define SSL_WRITE_IRQ 7
+-#define ACCEPT_IRQ 8
+-#define MCONSOLE_IRQ 9
+-#define WINCH_IRQ 10
+-#define SIGIO_WRITE_IRQ 11
+-#define TELNETD_IRQ 12
+-#define XTERM_IRQ 13
+-#define RANDOM_IRQ 14
++#define UBD_IRQ 2
++#define UM_ETH_IRQ 3
++#define ACCEPT_IRQ 4
++#define MCONSOLE_IRQ 5
++#define WINCH_IRQ 6
++#define SIGIO_WRITE_IRQ 7
++#define TELNETD_IRQ 8
++#define XTERM_IRQ 9
++#define RANDOM_IRQ 10
+
+ #ifdef CONFIG_UML_NET_VECTOR
+
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index 1261842d006c7..49a3b122279e5 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -51,7 +51,7 @@ extern const char * const x86_power_flags[32];
+ extern const char * const x86_bug_flags[NBUGINTS*32];
+
+ #define test_cpu_cap(c, bit) \
+- test_bit(bit, (unsigned long *)((c)->x86_capability))
++ arch_test_bit(bit, (unsigned long *)((c)->x86_capability))
+
+ /*
+ * There are 32 bits/features in each mask word. The high bits
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 1c14bcce88f27..729ecf1e546ce 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -466,7 +466,7 @@ do { \
+ [ptr] "+m" (*_ptr), \
+ [old] "+a" (__old) \
+ : [new] ltype (__new) \
+- : "memory", "cc"); \
++ : "memory"); \
+ if (unlikely(__err)) \
+ goto label; \
+ if (unlikely(!success)) \
+diff --git a/block/bio.c b/block/bio.c
+index 342b1cf5d713c..dc6940621d7d8 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -668,6 +668,7 @@ static void bio_alloc_cache_destroy(struct bio_set *bs)
+ bio_alloc_cache_prune(cache, -1U);
+ }
+ free_percpu(bs->cache);
++ bs->cache = NULL;
+ }
+
+ /**
+@@ -1308,10 +1309,12 @@ void bio_copy_data_iter(struct bio *dst, struct bvec_iter *dst_iter,
+ struct bio_vec src_bv = bio_iter_iovec(src, *src_iter);
+ struct bio_vec dst_bv = bio_iter_iovec(dst, *dst_iter);
+ unsigned int bytes = min(src_bv.bv_len, dst_bv.bv_len);
+- void *src_buf;
++ void *src_buf = bvec_kmap_local(&src_bv);
++ void *dst_buf = bvec_kmap_local(&dst_bv);
+
+- src_buf = bvec_kmap_local(&src_bv);
+- memcpy_to_bvec(&dst_bv, src_buf);
++ memcpy(dst_buf, src_buf, bytes);
++
++ kunmap_local(dst_buf);
+ kunmap_local(src_buf);
+
+ bio_advance_iter_single(src, src_iter, bytes);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 779b4a1f66ac2..45d750eb26283 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -992,7 +992,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
+ if (current->plug)
+ blk_flush_plug(current->plug, false);
+
+- if (blk_queue_enter(q, BLK_MQ_REQ_NOWAIT))
++ if (bio_queue_enter(bio))
+ return 0;
+ if (WARN_ON_ONCE(!queue_is_mq(q)))
+ ret = 0; /* not yet implemented, should not happen */
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 0aa20df31e369..f18e1c9c3f4a8 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -132,7 +132,8 @@ static bool blk_mq_check_inflight(struct request *rq, void *priv,
+ {
+ struct mq_inflight *mi = priv;
+
+- if ((!mi->part->bd_partno || rq->part == mi->part) &&
++ if (rq->part && blk_do_io_stat(rq) &&
++ (!mi->part->bd_partno || rq->part == mi->part) &&
+ blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT)
+ mi->inflight[rq_data_dir(rq)]++;
+
+@@ -2114,8 +2115,7 @@ static bool blk_mq_has_sqsched(struct request_queue *q)
+ */
+ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
+ {
+- struct blk_mq_hw_ctx *hctx;
+-
++ struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
+ /*
+ * If the IO scheduler does not respect hardware queues when
+ * dispatching, we just don't bother with multiple HW queues and
+@@ -2123,8 +2123,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
+ * just causes lock contention inside the scheduler and pointless cache
+ * bouncing.
+ */
+- hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
+- raw_smp_processor_id());
++ struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
++
+ if (!blk_mq_hctx_stopped(hctx))
+ return hctx;
+ return NULL;
+diff --git a/block/genhd.c b/block/genhd.c
+index 9d9d702d07787..c284c1cf33967 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -380,6 +380,8 @@ int disk_scan_partitions(struct gendisk *disk, fmode_t mode)
+
+ if (disk->flags & (GENHD_FL_NO_PART | GENHD_FL_HIDDEN))
+ return -EINVAL;
++ if (test_bit(GD_SUPPRESS_PART_SCAN, &disk->state))
++ return -EINVAL;
+ if (disk->open_partitions)
+ return -EBUSY;
+
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 760c0d81d1482..a3a547ed0eb03 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2003,16 +2003,16 @@ retry:
+ return err_mask;
+ }
+
+-static bool ata_log_supported(struct ata_device *dev, u8 log)
++static int ata_log_supported(struct ata_device *dev, u8 log)
+ {
+ struct ata_port *ap = dev->link->ap;
+
+ if (dev->horkage & ATA_HORKAGE_NO_LOG_DIR)
+- return false;
++ return 0;
+
+ if (ata_read_log_page(dev, ATA_LOG_DIRECTORY, 0, ap->sector_buf, 1))
+- return false;
+- return get_unaligned_le16(&ap->sector_buf[log * 2]) ? true : false;
++ return 0;
++ return get_unaligned_le16(&ap->sector_buf[log * 2]);
+ }
+
+ static bool ata_identify_page_supported(struct ata_device *dev, u8 page)
+@@ -2448,15 +2448,20 @@ static void ata_dev_config_cpr(struct ata_device *dev)
+ struct ata_cpr_log *cpr_log = NULL;
+ u8 *desc, *buf = NULL;
+
+- if (ata_id_major_version(dev->id) < 11 ||
+- !ata_log_supported(dev, ATA_LOG_CONCURRENT_POSITIONING_RANGES))
++ if (ata_id_major_version(dev->id) < 11)
++ goto out;
++
++ buf_len = ata_log_supported(dev, ATA_LOG_CONCURRENT_POSITIONING_RANGES);
++ if (buf_len == 0)
+ goto out;
+
+ /*
+ * Read the concurrent positioning ranges log (0x47). We can have at
+- * most 255 32B range descriptors plus a 64B header.
++ * most 255 32B range descriptors plus a 64B header. This log varies in
++ * size, so use the size reported in the GPL directory. Reading beyond
++ * the supported length will result in an error.
+ */
+- buf_len = (64 + 255 * 32 + 511) & ~511;
++ buf_len <<= 9;
+ buf = kzalloc(buf_len, GFP_KERNEL);
+ if (!buf)
+ goto out;
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index ed8be585a98f7..673f95a21eb21 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2119,7 +2119,7 @@ static unsigned int ata_scsiop_inq_b9(struct ata_scsi_args *args, u8 *rbuf)
+
+ /* SCSI Concurrent Positioning Ranges VPD page: SBC-5 rev 1 or later */
+ rbuf[1] = 0xb9;
+- put_unaligned_be16(64 + (int)cpr_log->nr_cpr * 32 - 4, &rbuf[3]);
++ put_unaligned_be16(64 + (int)cpr_log->nr_cpr * 32 - 4, &rbuf[2]);
+
+ for (i = 0; i < cpr_log->nr_cpr; i++, desc += 32) {
+ desc[0] = cpr_log->cpr[i].num;
+diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c
+index ca129854a88c7..c380278874990 100644
+--- a/drivers/ata/libata-transport.c
++++ b/drivers/ata/libata-transport.c
+@@ -196,7 +196,7 @@ static struct {
+ { XFER_PIO_0, "XFER_PIO_0" },
+ { XFER_PIO_SLOW, "XFER_PIO_SLOW" }
+ };
+-ata_bitfield_name_match(xfer,ata_xfer_names)
++ata_bitfield_name_search(xfer, ata_xfer_names)
+
+ /*
+ * ATA Port attributes
+diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c
+index 05c2ab3757568..a2abf6c9a0853 100644
+--- a/drivers/ata/pata_octeon_cf.c
++++ b/drivers/ata/pata_octeon_cf.c
+@@ -856,12 +856,14 @@ static int octeon_cf_probe(struct platform_device *pdev)
+ int i;
+ res_dma = platform_get_resource(dma_dev, IORESOURCE_MEM, 0);
+ if (!res_dma) {
++ put_device(&dma_dev->dev);
+ of_node_put(dma_node);
+ return -EINVAL;
+ }
+ cf_port->dma_base = (u64)devm_ioremap(&pdev->dev, res_dma->start,
+ resource_size(res_dma));
+ if (!cf_port->dma_base) {
++ put_device(&dma_dev->dev);
+ of_node_put(dma_node);
+ return -EINVAL;
+ }
+@@ -871,6 +873,7 @@ static int octeon_cf_probe(struct platform_device *pdev)
+ irq = i;
+ irq_handler = octeon_cf_interrupt;
+ }
++ put_device(&dma_dev->dev);
+ }
+ of_node_put(dma_node);
+ }
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 97936ec49bde0..7ca47e5b3c1f4 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -617,7 +617,7 @@ int bus_add_driver(struct device_driver *drv)
+ if (drv->bus->p->drivers_autoprobe) {
+ error = driver_attach(drv);
+ if (error)
+- goto out_unregister;
++ goto out_del_list;
+ }
+ module_add_driver(drv->owner, drv);
+
+@@ -644,6 +644,8 @@ int bus_add_driver(struct device_driver *drv)
+
+ return 0;
+
++out_del_list:
++ klist_del(&priv->knode_bus);
+ out_unregister:
+ kobject_put(&priv->kobj);
+ /* drv->p is freed in driver_release() */
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 7e079fa3795b1..86fd2ea356565 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -257,7 +257,6 @@ DEFINE_SHOW_ATTRIBUTE(deferred_devs);
+
+ int driver_deferred_probe_timeout;
+ EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout);
+-static DECLARE_WAIT_QUEUE_HEAD(probe_timeout_waitqueue);
+
+ static int __init deferred_probe_timeout_setup(char *str)
+ {
+@@ -312,7 +311,6 @@ static void deferred_probe_timeout_work_func(struct work_struct *work)
+ list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe)
+ dev_info(p->device, "deferred probe pending\n");
+ mutex_unlock(&deferred_probe_mutex);
+- wake_up_all(&probe_timeout_waitqueue);
+ }
+ static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
+
+@@ -720,9 +718,6 @@ int driver_probe_done(void)
+ */
+ void wait_for_device_probe(void)
+ {
+- /* wait for probe timeout */
+- wait_event(probe_timeout_waitqueue, !driver_deferred_probe_timeout);
+-
+ /* wait for the deferred probe workqueue to finish */
+ flush_work(&deferred_probe_work);
+
+@@ -945,6 +940,7 @@ out_unlock:
+ static int __device_attach(struct device *dev, bool allow_async)
+ {
+ int ret = 0;
++ bool async = false;
+
+ device_lock(dev);
+ if (dev->p->dead) {
+@@ -983,7 +979,7 @@ static int __device_attach(struct device *dev, bool allow_async)
+ */
+ dev_dbg(dev, "scheduling asynchronous probe\n");
+ get_device(dev);
+- async_schedule_dev(__device_attach_async_helper, dev);
++ async = true;
+ } else {
+ pm_request_idle(dev);
+ }
+@@ -993,6 +989,8 @@ static int __device_attach(struct device *dev, bool allow_async)
+ }
+ out_unlock:
+ device_unlock(dev);
++ if (async)
++ async_schedule_dev(__device_attach_async_helper, dev);
+ return ret;
+ }
+
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index d46a3d5d0c2ec..3411d3c0a5b0f 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1067,7 +1067,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
+ lo->lo_flags |= LO_FLAGS_PARTSCAN;
+ partscan = lo->lo_flags & LO_FLAGS_PARTSCAN;
+ if (partscan)
+- lo->lo_disk->flags &= ~GENHD_FL_NO_PART;
++ clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
+
+ loop_global_unlock(lo, is_loop);
+ if (partscan)
+@@ -1186,7 +1186,7 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
+ */
+ lo->lo_flags = 0;
+ if (!part_shift)
+- lo->lo_disk->flags |= GENHD_FL_NO_PART;
++ set_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
+ mutex_lock(&lo->lo_mutex);
+ lo->lo_state = Lo_unbound;
+ mutex_unlock(&lo->lo_mutex);
+@@ -1296,7 +1296,7 @@ out_unfreeze:
+
+ if (!err && (lo->lo_flags & LO_FLAGS_PARTSCAN) &&
+ !(prev_lo_flags & LO_FLAGS_PARTSCAN)) {
+- lo->lo_disk->flags &= ~GENHD_FL_NO_PART;
++ clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
+ partscan = true;
+ }
+ out_unlock:
+@@ -2028,7 +2028,7 @@ static int loop_add(int i)
+ * userspace tools. Parameters like this in general should be avoided.
+ */
+ if (!part_shift)
+- disk->flags |= GENHD_FL_NO_PART;
++ set_bit(GD_SUPPRESS_PART_SCAN, &disk->state);
+ atomic_set(&lo->lo_refcnt, 0);
+ mutex_init(&lo->lo_mutex);
+ lo->lo_number = i;
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 2845570413360..151264a4be364 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -404,13 +404,14 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
+ if (!mutex_trylock(&cmd->lock))
+ return BLK_EH_RESET_TIMER;
+
+- if (!__test_and_clear_bit(NBD_CMD_INFLIGHT, &cmd->flags)) {
++ if (!test_bit(NBD_CMD_INFLIGHT, &cmd->flags)) {
+ mutex_unlock(&cmd->lock);
+ return BLK_EH_DONE;
+ }
+
+ if (!refcount_inc_not_zero(&nbd->config_refs)) {
+ cmd->status = BLK_STS_TIMEOUT;
++ __clear_bit(NBD_CMD_INFLIGHT, &cmd->flags);
+ mutex_unlock(&cmd->lock);
+ goto done;
+ }
+@@ -479,6 +480,7 @@ static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
+ dev_err_ratelimited(nbd_to_dev(nbd), "Connection timed out\n");
+ set_bit(NBD_RT_TIMEDOUT, &config->runtime_flags);
+ cmd->status = BLK_STS_IOERR;
++ __clear_bit(NBD_CMD_INFLIGHT, &cmd->flags);
+ mutex_unlock(&cmd->lock);
+ sock_shutdown(nbd);
+ nbd_config_put(nbd);
+@@ -746,7 +748,7 @@ static struct nbd_cmd *nbd_handle_reply(struct nbd_device *nbd, int index,
+ cmd = blk_mq_rq_to_pdu(req);
+
+ mutex_lock(&cmd->lock);
+- if (!__test_and_clear_bit(NBD_CMD_INFLIGHT, &cmd->flags)) {
++ if (!test_bit(NBD_CMD_INFLIGHT, &cmd->flags)) {
+ dev_err(disk_to_dev(nbd->disk), "Suspicious reply %d (status %u flags %lu)",
+ tag, cmd->status, cmd->flags);
+ ret = -ENOENT;
+@@ -855,8 +857,16 @@ static void recv_work(struct work_struct *work)
+ }
+
+ rq = blk_mq_rq_from_pdu(cmd);
+- if (likely(!blk_should_fake_timeout(rq->q)))
+- blk_mq_complete_request(rq);
++ if (likely(!blk_should_fake_timeout(rq->q))) {
++ bool complete;
++
++ mutex_lock(&cmd->lock);
++ complete = __test_and_clear_bit(NBD_CMD_INFLIGHT,
++ &cmd->flags);
++ mutex_unlock(&cmd->lock);
++ if (complete)
++ blk_mq_complete_request(rq);
++ }
+ percpu_ref_put(&q->q_usage_counter);
+ }
+
+@@ -1424,7 +1434,7 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
+ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
+ struct block_device *bdev)
+ {
+- sock_shutdown(nbd);
++ nbd_clear_sock(nbd);
+ __invalidate_device(bdev, true);
+ nbd_bdev_reset(bdev);
+ if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+@@ -1523,15 +1533,20 @@ static struct nbd_config *nbd_alloc_config(void)
+ {
+ struct nbd_config *config;
+
++ if (!try_module_get(THIS_MODULE))
++ return ERR_PTR(-ENODEV);
++
+ config = kzalloc(sizeof(struct nbd_config), GFP_NOFS);
+- if (!config)
+- return NULL;
++ if (!config) {
++ module_put(THIS_MODULE);
++ return ERR_PTR(-ENOMEM);
++ }
++
+ atomic_set(&config->recv_threads, 0);
+ init_waitqueue_head(&config->recv_wq);
+ init_waitqueue_head(&config->conn_wait);
+ config->blksize_bits = NBD_DEF_BLKSIZE_BITS;
+ atomic_set(&config->live_connections, 0);
+- try_module_get(THIS_MODULE);
+ return config;
+ }
+
+@@ -1558,12 +1573,13 @@ static int nbd_open(struct block_device *bdev, fmode_t mode)
+ mutex_unlock(&nbd->config_lock);
+ goto out;
+ }
+- config = nbd->config = nbd_alloc_config();
+- if (!config) {
+- ret = -ENOMEM;
++ config = nbd_alloc_config();
++ if (IS_ERR(config)) {
++ ret = PTR_ERR(config);
+ mutex_unlock(&nbd->config_lock);
+ goto out;
+ }
++ nbd->config = config;
+ refcount_set(&nbd->config_refs, 1);
+ refcount_inc(&nbd->refs);
+ mutex_unlock(&nbd->config_lock);
+@@ -1970,13 +1986,14 @@ again:
+ nbd_put(nbd);
+ return -EINVAL;
+ }
+- config = nbd->config = nbd_alloc_config();
+- if (!nbd->config) {
++ config = nbd_alloc_config();
++ if (IS_ERR(config)) {
+ mutex_unlock(&nbd->config_lock);
+ nbd_put(nbd);
+ printk(KERN_ERR "nbd: couldn't allocate config\n");
+- return -ENOMEM;
++ return PTR_ERR(config);
+ }
++ nbd->config = config;
+ refcount_set(&nbd->config_refs, 1);
+ set_bit(NBD_RT_BOUND, &config->runtime_flags);
+
+@@ -2534,6 +2551,12 @@ static void __exit nbd_cleanup(void)
+ struct nbd_device *nbd;
+ LIST_HEAD(del_list);
+
++ /*
++ * Unregister netlink interface prior to waiting
++ * for the completion of netlink commands.
++ */
++ genl_unregister_family(&nbd_genl_family);
++
+ nbd_dbg_close();
+
+ mutex_lock(&nbd_index_mutex);
+@@ -2543,6 +2566,9 @@ static void __exit nbd_cleanup(void)
+ while (!list_empty(&del_list)) {
+ nbd = list_first_entry(&del_list, struct nbd_device, list);
+ list_del_init(&nbd->list);
++ if (refcount_read(&nbd->config_refs))
++ printk(KERN_ERR "nbd: possibly leaking nbd_config (ref %d)\n",
++ refcount_read(&nbd->config_refs));
+ if (refcount_read(&nbd->refs) != 1)
+ printk(KERN_ERR "nbd: possibly leaking a device\n");
+ nbd_put(nbd);
+@@ -2552,7 +2578,6 @@ static void __exit nbd_cleanup(void)
+ destroy_workqueue(nbd_del_wq);
+
+ idr_destroy(&nbd_index_idr);
+- genl_unregister_family(&nbd_genl_family);
+ unregister_blkdev(NBD_MAJOR, "nbd");
+ }
+
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 7a1b1f9e49333..70d00cea9d22a 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3395,7 +3395,9 @@ static int sysc_remove(struct platform_device *pdev)
+ struct sysc *ddata = platform_get_drvdata(pdev);
+ int error;
+
+- cancel_delayed_work_sync(&ddata->idle_work);
++ /* Device can still be enabled, see deferred idle quirk in probe */
++ if (cancel_delayed_work_sync(&ddata->idle_work))
++ ti_sysc_idle(&ddata->idle_work.work);
+
+ error = pm_runtime_resume_and_get(ddata->dev);
+ if (error < 0) {
+diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
+index e856df7e285c7..a6f3a8a2aca6d 100644
+--- a/drivers/char/hw_random/virtio-rng.c
++++ b/drivers/char/hw_random/virtio-rng.c
+@@ -159,6 +159,8 @@ static int probe_common(struct virtio_device *vdev)
+ goto err_find;
+ }
+
++ virtio_device_ready(vdev);
++
+ /* we always have a pending entropy request */
+ request_entropy(vi);
+
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index d6aa4b57d9858..634b980c47b7c 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -789,8 +789,8 @@ static void __cold _credit_init_bits(size_t bits)
+ *
+ **********************************************************************/
+
+-static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
+-static bool trust_bootloader __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER);
++static bool trust_cpu __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
++static bool trust_bootloader __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER);
+ static int __init parse_trust_cpu(char *arg)
+ {
+ return kstrtobool(arg, &trust_cpu);
+@@ -813,7 +813,7 @@ early_param("random.trust_bootloader", parse_trust_bootloader);
+ int __init random_init(const char *command_line)
+ {
+ ktime_t now = ktime_get_real();
+- unsigned int i, arch_bytes;
++ unsigned int i, arch_bits;
+ unsigned long entropy;
+
+ #if defined(LATENT_ENTROPY_PLUGIN)
+@@ -821,12 +821,12 @@ int __init random_init(const char *command_line)
+ _mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed));
+ #endif
+
+- for (i = 0, arch_bytes = BLAKE2S_BLOCK_SIZE;
++ for (i = 0, arch_bits = BLAKE2S_BLOCK_SIZE * 8;
+ i < BLAKE2S_BLOCK_SIZE; i += sizeof(entropy)) {
+ if (!arch_get_random_seed_long_early(&entropy) &&
+ !arch_get_random_long_early(&entropy)) {
+ entropy = random_get_entropy();
+- arch_bytes -= sizeof(entropy);
++ arch_bits -= sizeof(entropy) * 8;
+ }
+ _mix_pool_bytes(&entropy, sizeof(entropy));
+ }
+@@ -838,7 +838,7 @@ int __init random_init(const char *command_line)
+ if (crng_ready())
+ crng_reseed();
+ else if (trust_cpu)
+- credit_init_bits(arch_bytes * 8);
++ _credit_init_bits(arch_bits);
+
+ return 0;
+ }
+@@ -886,13 +886,12 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+ * Handle random seed passed by bootloader, and credit it if
+ * CONFIG_RANDOM_TRUST_BOOTLOADER is set.
+ */
+-void __cold add_bootloader_randomness(const void *buf, size_t len)
++void __init add_bootloader_randomness(const void *buf, size_t len)
+ {
+ mix_pool_bytes(buf, len);
+ if (trust_bootloader)
+ credit_init_bits(len * 8);
+ }
+-EXPORT_SYMBOL_GPL(add_bootloader_randomness);
+
+ struct fast_pool {
+ struct work_struct mix;
+diff --git a/drivers/char/xillybus/xillyusb.c b/drivers/char/xillybus/xillyusb.c
+index dc3551796e5ed..39bcbfd908b46 100644
+--- a/drivers/char/xillybus/xillyusb.c
++++ b/drivers/char/xillybus/xillyusb.c
+@@ -549,6 +549,7 @@ static void cleanup_dev(struct kref *kref)
+ if (xdev->workq)
+ destroy_workqueue(xdev->workq);
+
++ usb_put_dev(xdev->udev);
+ kfree(xdev->channels); /* Argument may be NULL, and that's fine */
+ kfree(xdev);
+ }
+diff --git a/drivers/clocksource/timer-oxnas-rps.c b/drivers/clocksource/timer-oxnas-rps.c
+index 56c0cc32d0ac6..d514b44e67dd1 100644
+--- a/drivers/clocksource/timer-oxnas-rps.c
++++ b/drivers/clocksource/timer-oxnas-rps.c
+@@ -236,7 +236,7 @@ static int __init oxnas_rps_timer_init(struct device_node *np)
+ }
+
+ rps->irq = irq_of_parse_and_map(np, 0);
+- if (rps->irq < 0) {
++ if (!rps->irq) {
+ ret = -EINVAL;
+ goto err_iomap;
+ }
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index 1767f8bf20133..593d5a957b69d 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -34,7 +34,7 @@ static int riscv_clock_next_event(unsigned long delta,
+ static unsigned int riscv_clock_event_irq;
+ static DEFINE_PER_CPU(struct clock_event_device, riscv_clock_event) = {
+ .name = "riscv_timer_clockevent",
+- .features = CLOCK_EVT_FEAT_ONESHOT,
++ .features = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP,
+ .rating = 100,
+ .set_next_event = riscv_clock_next_event,
+ };
+diff --git a/drivers/clocksource/timer-sp804.c b/drivers/clocksource/timer-sp804.c
+index 401d592e85f5a..e6a87f4af2b50 100644
+--- a/drivers/clocksource/timer-sp804.c
++++ b/drivers/clocksource/timer-sp804.c
+@@ -259,6 +259,11 @@ static int __init sp804_of_init(struct device_node *np, struct sp804_timer *time
+ struct clk *clk1, *clk2;
+ const char *name = of_get_property(np, "compatible", NULL);
+
++ if (initialized) {
++ pr_debug("%pOF: skipping further SP804 timer device\n", np);
++ return 0;
++ }
++
+ base = of_iomap(np, 0);
+ if (!base)
+ return -ENXIO;
+@@ -270,11 +275,6 @@ static int __init sp804_of_init(struct device_node *np, struct sp804_timer *time
+ writel(0, timer1_base + timer->ctrl);
+ writel(0, timer2_base + timer->ctrl);
+
+- if (initialized || !of_device_is_available(np)) {
+- ret = -EINVAL;
+- goto err;
+- }
+-
+ clk1 = of_clk_get(np, 0);
+ if (IS_ERR(clk1))
+ clk1 = NULL;
+diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c
+index bfff59617d047..f9ed550117756 100644
+--- a/drivers/dma/idxd/dma.c
++++ b/drivers/dma/idxd/dma.c
+@@ -87,6 +87,27 @@ static inline void idxd_prep_desc_common(struct idxd_wq *wq,
+ hw->completion_addr = compl;
+ }
+
++static struct dma_async_tx_descriptor *
++idxd_dma_prep_interrupt(struct dma_chan *c, unsigned long flags)
++{
++ struct idxd_wq *wq = to_idxd_wq(c);
++ u32 desc_flags;
++ struct idxd_desc *desc;
++
++ if (wq->state != IDXD_WQ_ENABLED)
++ return NULL;
++
++ op_flag_setup(flags, &desc_flags);
++ desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
++ if (IS_ERR(desc))
++ return NULL;
++
++ idxd_prep_desc_common(wq, desc->hw, DSA_OPCODE_NOOP,
++ 0, 0, 0, desc->compl_dma, desc_flags);
++ desc->txd.flags = flags;
++ return &desc->txd;
++}
++
+ static struct dma_async_tx_descriptor *
+ idxd_dma_submit_memcpy(struct dma_chan *c, dma_addr_t dma_dest,
+ dma_addr_t dma_src, size_t len, unsigned long flags)
+@@ -193,10 +214,12 @@ int idxd_register_dma_device(struct idxd_device *idxd)
+ INIT_LIST_HEAD(&dma->channels);
+ dma->dev = dev;
+
++ dma_cap_set(DMA_INTERRUPT, dma->cap_mask);
+ dma_cap_set(DMA_PRIVATE, dma->cap_mask);
+ dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask);
+ dma->device_release = idxd_dma_release;
+
++ dma->device_prep_dma_interrupt = idxd_dma_prep_interrupt;
+ if (idxd->hw.opcap.bits[0] & IDXD_OPCAP_MEMMOVE) {
+ dma_cap_set(DMA_MEMCPY, dma->cap_mask);
+ dma->device_prep_dma_memcpy = idxd_dma_submit_memcpy;
+diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c
+index 7aa63b6520272..3ffa7f37c7017 100644
+--- a/drivers/dma/xilinx/zynqmp_dma.c
++++ b/drivers/dma/xilinx/zynqmp_dma.c
+@@ -229,7 +229,7 @@ struct zynqmp_dma_chan {
+ bool is_dmacoherent;
+ struct tasklet_struct tasklet;
+ bool idle;
+- u32 desc_size;
++ size_t desc_size;
+ bool err;
+ u32 bus_width;
+ u32 src_burst_len;
+@@ -486,7 +486,8 @@ static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan)
+ }
+
+ chan->desc_pool_v = dma_alloc_coherent(chan->dev,
+- (2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS),
++ (2 * ZYNQMP_DMA_DESC_SIZE(chan) *
++ ZYNQMP_DMA_NUM_DESCS),
+ &chan->desc_pool_p, GFP_KERNEL);
+ if (!chan->desc_pool_v)
+ return -ENOMEM;
+diff --git a/drivers/extcon/extcon-axp288.c b/drivers/extcon/extcon-axp288.c
+index 7c6d5857ff25e..180be768c2157 100644
+--- a/drivers/extcon/extcon-axp288.c
++++ b/drivers/extcon/extcon-axp288.c
+@@ -394,8 +394,8 @@ static int axp288_extcon_probe(struct platform_device *pdev)
+ if (adev) {
+ info->id_extcon = extcon_get_extcon_dev(acpi_dev_name(adev));
+ put_device(&adev->dev);
+- if (!info->id_extcon)
+- return -EPROBE_DEFER;
++ if (IS_ERR(info->id_extcon))
++ return PTR_ERR(info->id_extcon);
+
+ dev_info(dev, "controlling USB role\n");
+ } else {
+diff --git a/drivers/extcon/extcon-ptn5150.c b/drivers/extcon/extcon-ptn5150.c
+index 5b9a3cf8df268..2a7874108df87 100644
+--- a/drivers/extcon/extcon-ptn5150.c
++++ b/drivers/extcon/extcon-ptn5150.c
+@@ -194,6 +194,13 @@ static int ptn5150_init_dev_type(struct ptn5150_info *info)
+ return 0;
+ }
+
++static void ptn5150_work_sync_and_put(void *data)
++{
++ struct ptn5150_info *info = data;
++
++ cancel_work_sync(&info->irq_work);
++}
++
+ static int ptn5150_i2c_probe(struct i2c_client *i2c)
+ {
+ struct device *dev = &i2c->dev;
+@@ -284,6 +291,10 @@ static int ptn5150_i2c_probe(struct i2c_client *i2c)
+ if (ret)
+ return -EINVAL;
+
++ ret = devm_add_action_or_reset(dev, ptn5150_work_sync_and_put, info);
++ if (ret)
++ return ret;
++
+ /*
+ * Update current extcon state if for example OTG connection was there
+ * before the probe
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index a09e704fd0fa1..97e35c32bfa55 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -851,6 +851,8 @@ EXPORT_SYMBOL_GPL(extcon_set_property_capability);
+ * @extcon_name: the extcon name provided with extcon_dev_register()
+ *
+ * Return the pointer of extcon device if success or ERR_PTR(err) if fail.
++ * NOTE: This function returns -EPROBE_DEFER so it may only be called from
++ * probe() functions.
+ */
+ struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
+ {
+@@ -864,7 +866,7 @@ struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
+ if (!strcmp(sd->name, extcon_name))
+ goto out;
+ }
+- sd = NULL;
++ sd = ERR_PTR(-EPROBE_DEFER);
+ out:
+ mutex_unlock(&extcon_dev_list_lock);
+ return sd;
+@@ -1218,19 +1220,14 @@ int extcon_dev_register(struct extcon_dev *edev)
+ edev->dev.type = &edev->extcon_dev_type;
+ }
+
+- ret = device_register(&edev->dev);
+- if (ret) {
+- put_device(&edev->dev);
+- goto err_dev;
+- }
+-
+ spin_lock_init(&edev->lock);
+- edev->nh = devm_kcalloc(&edev->dev, edev->max_supported,
+- sizeof(*edev->nh), GFP_KERNEL);
+- if (!edev->nh) {
+- ret = -ENOMEM;
+- device_unregister(&edev->dev);
+- goto err_dev;
++ if (edev->max_supported) {
++ edev->nh = kcalloc(edev->max_supported, sizeof(*edev->nh),
++ GFP_KERNEL);
++ if (!edev->nh) {
++ ret = -ENOMEM;
++ goto err_alloc_nh;
++ }
+ }
+
+ for (index = 0; index < edev->max_supported; index++)
+@@ -1241,6 +1238,12 @@ int extcon_dev_register(struct extcon_dev *edev)
+ dev_set_drvdata(&edev->dev, edev);
+ edev->state = 0;
+
++ ret = device_register(&edev->dev);
++ if (ret) {
++ put_device(&edev->dev);
++ goto err_dev;
++ }
++
+ mutex_lock(&extcon_dev_list_lock);
+ list_add(&edev->entry, &extcon_dev_list);
+ mutex_unlock(&extcon_dev_list_lock);
+@@ -1248,6 +1251,9 @@ int extcon_dev_register(struct extcon_dev *edev)
+ return 0;
+
+ err_dev:
++ if (edev->max_supported)
++ kfree(edev->nh);
++err_alloc_nh:
+ if (edev->max_supported)
+ kfree(edev->extcon_dev_type.groups);
+ err_alloc_groups:
+@@ -1308,6 +1314,7 @@ void extcon_dev_unregister(struct extcon_dev *edev)
+ if (edev->max_supported) {
+ kfree(edev->extcon_dev_type.groups);
+ kfree(edev->cables);
++ kfree(edev->nh);
+ }
+
+ put_device(&edev->dev);
+diff --git a/drivers/firmware/dmi-sysfs.c b/drivers/firmware/dmi-sysfs.c
+index 3a353776bd344..66727ad3361b9 100644
+--- a/drivers/firmware/dmi-sysfs.c
++++ b/drivers/firmware/dmi-sysfs.c
+@@ -604,7 +604,7 @@ static void __init dmi_sysfs_register_handle(const struct dmi_header *dh,
+ "%d-%d", dh->type, entry->instance);
+
+ if (*ret) {
+- kfree(entry);
++ kobject_put(&entry->kobj);
+ return;
+ }
+
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index c4bf934e3553e..33ab24cfd6002 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -941,17 +941,17 @@ EXPORT_SYMBOL_GPL(stratix10_svc_allocate_memory);
+ void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr)
+ {
+ struct stratix10_svc_data_mem *pmem;
+- size_t size = 0;
+
+ list_for_each_entry(pmem, &svc_data_mem, node)
+ if (pmem->vaddr == kaddr) {
+- size = pmem->size;
+- break;
++ gen_pool_free(chan->ctrl->genpool,
++ (unsigned long)kaddr, pmem->size);
++ pmem->vaddr = NULL;
++ list_del(&pmem->node);
++ return;
+ }
+
+- gen_pool_free(chan->ctrl->genpool, (unsigned long)kaddr, size);
+- pmem->vaddr = NULL;
+- list_del(&pmem->node);
++ list_del(&svc_data_mem);
+ }
+ EXPORT_SYMBOL_GPL(stratix10_svc_free_memory);
+
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 8726921a11294..33683295a0bfe 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1108,20 +1108,21 @@ static int pca953x_regcache_sync(struct device *dev)
+ {
+ struct pca953x_chip *chip = dev_get_drvdata(dev);
+ int ret;
++ u8 regaddr;
+
+ /*
+ * The ordering between direction and output is important,
+ * sync these registers first and only then sync the rest.
+ */
+- ret = regcache_sync_region(chip->regmap, chip->regs->direction,
+- chip->regs->direction + NBANK(chip));
++ regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0);
++ ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip));
+ if (ret) {
+ dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret);
+ return ret;
+ }
+
+- ret = regcache_sync_region(chip->regmap, chip->regs->output,
+- chip->regs->output + NBANK(chip));
++ regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0);
++ ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip));
+ if (ret) {
+ dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret);
+ return ret;
+@@ -1129,16 +1130,18 @@ static int pca953x_regcache_sync(struct device *dev)
+
+ #ifdef CONFIG_GPIO_PCA953X_IRQ
+ if (chip->driver_data & PCA_PCAL) {
+- ret = regcache_sync_region(chip->regmap, PCAL953X_IN_LATCH,
+- PCAL953X_IN_LATCH + NBANK(chip));
++ regaddr = pca953x_recalc_addr(chip, PCAL953X_IN_LATCH, 0);
++ ret = regcache_sync_region(chip->regmap, regaddr,
++ regaddr + NBANK(chip));
+ if (ret) {
+ dev_err(dev, "Failed to sync INT latch registers: %d\n",
+ ret);
+ return ret;
+ }
+
+- ret = regcache_sync_region(chip->regmap, PCAL953X_INT_MASK,
+- PCAL953X_INT_MASK + NBANK(chip));
++ regaddr = pca953x_recalc_addr(chip, PCAL953X_INT_MASK, 0);
++ ret = regcache_sync_region(chip->regmap, regaddr,
++ regaddr + NBANK(chip));
+ if (ret) {
+ dev_err(dev, "Failed to sync INT mask registers: %d\n",
+ ret);
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+index 299de1d131d82..5ac1a43a16b4a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+@@ -535,6 +535,10 @@ void jpeg_v2_0_dec_ring_emit_ib(struct amdgpu_ring *ring,
+ {
+ unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+
++ amdgpu_ring_write(ring, PACKETJ(mmUVD_JPEG_IH_CTRL_INTERNAL_OFFSET,
++ 0, 0, PACKETJ_TYPE0));
++ amdgpu_ring_write(ring, (vmid << JPEG_IH_CTRL__IH_VMID__SHIFT));
++
+ amdgpu_ring_write(ring, PACKETJ(mmUVD_LMI_JRBC_IB_VMID_INTERNAL_OFFSET,
+ 0, 0, PACKETJ_TYPE0));
+ amdgpu_ring_write(ring, (vmid | (vmid << 4)));
+@@ -768,7 +772,7 @@ static const struct amdgpu_ring_funcs jpeg_v2_0_dec_ring_vm_funcs = {
+ 8 + /* jpeg_v2_0_dec_ring_emit_vm_flush */
+ 18 + 18 + /* jpeg_v2_0_dec_ring_emit_fence x2 vm fence */
+ 8 + 16,
+- .emit_ib_size = 22, /* jpeg_v2_0_dec_ring_emit_ib */
++ .emit_ib_size = 24, /* jpeg_v2_0_dec_ring_emit_ib */
+ .emit_ib = jpeg_v2_0_dec_ring_emit_ib,
+ .emit_fence = jpeg_v2_0_dec_ring_emit_fence,
+ .emit_vm_flush = jpeg_v2_0_dec_ring_emit_vm_flush,
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h
+index 1a03baa597557..654e43e83e2c4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.h
+@@ -41,6 +41,7 @@
+ #define mmUVD_JRBC_RB_REF_DATA_INTERNAL_OFFSET 0x4084
+ #define mmUVD_JRBC_STATUS_INTERNAL_OFFSET 0x4089
+ #define mmUVD_JPEG_PITCH_INTERNAL_OFFSET 0x401f
++#define mmUVD_JPEG_IH_CTRL_INTERNAL_OFFSET 0x4149
+
+ #define JRBC_DEC_EXTERNAL_REG_WRITE_ADDR 0x18000
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index 2ec1ffb36b1fc..07fc591a65656 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -170,6 +170,7 @@ static const struct amdgpu_video_codec_info yc_video_codecs_decode_array[] = {
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
+ };
+
+ static const struct amdgpu_video_codecs yc_video_codecs_decode = {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 24db2297857b4..edb5e72aeb663 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -767,7 +767,7 @@ static void dm_dmub_outbox1_low_irq(void *interrupt_params)
+
+ do {
+ dc_stat_get_dmub_notification(adev->dm.dc, &notify);
+- if (notify.type > ARRAY_SIZE(dm->dmub_thread_offload)) {
++ if (notify.type >= ARRAY_SIZE(dm->dmub_thread_offload)) {
+ DRM_ERROR("DM: notify type %d invalid!", notify.type);
+ continue;
+ }
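[Editor's note] The amdgpu_dm hunk is a textbook off-by-one: valid indices for an N-element array are 0..N-1, so the reject condition must be `>= ARRAY_SIZE(...)`; with `>`, an index equal to N slips through and reads past the end. In miniature:

#include <linux/kernel.h>
#include <linux/errno.h>

static int handlers[8];

static int dispatch(unsigned int type)
{
	/* '>' would wrongly accept type == 8 and index out of bounds */
	if (type >= ARRAY_SIZE(handlers))
		return -EINVAL;

	return handlers[type];
}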
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+index 2c7eb982eabca..054823d12403d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+@@ -1013,9 +1013,12 @@ static bool get_pixel_clk_frequency_100hz(
+ * not be programmed equal to DPREFCLK
+ */
+ modulo_hz = REG_READ(MODULO[inst]);
+- *pixel_clk_khz = div_u64((uint64_t)clock_hz*
+- clock_source->ctx->dc->clk_mgr->dprefclk_khz*10,
+- modulo_hz);
++ if (modulo_hz)
++ *pixel_clk_khz = div_u64((uint64_t)clock_hz*
++ clock_source->ctx->dc->clk_mgr->dprefclk_khz*10,
++ modulo_hz);
++ else
++ *pixel_clk_khz = 0;
+ } else {
+ /* NOTE: There is agreement with VBIOS here that MODULO is
+ * programmed equal to DPREFCLK, in which case PHASE will be
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+index 4e9e2cf398591..5fcbec9dd45dd 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+@@ -777,7 +777,7 @@ int smu_v11_0_set_allowed_mask(struct smu_context *smu)
+ goto failed;
+ }
+
+- bitmap_copy((unsigned long *)feature_mask, feature->allowed, 64);
++ bitmap_to_arr32(feature_mask, feature->allowed, 64);
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetAllowedFeaturesMaskHigh,
+ feature_mask[1], NULL);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+index 4885c4ae78b73..27a54145d352b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+@@ -1643,6 +1643,7 @@ static const struct throttling_logging_label {
+ uint32_t feature_mask;
+ const char *label;
+ } logging_label[] = {
++ {(1U << THROTTLER_TEMP_GPU_BIT), "GPU"},
+ {(1U << THROTTLER_TEMP_MEM_BIT), "HBM"},
+ {(1U << THROTTLER_TEMP_VR_GFX_BIT), "VR of GFX rail"},
+ {(1U << THROTTLER_TEMP_VR_MEM_BIT), "VR of HBM rail"},
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index b54790d3483ef..4a5e91b59f0cd 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -726,7 +726,7 @@ int smu_v13_0_set_allowed_mask(struct smu_context *smu)
+ if (bitmap_empty(feature->allowed, SMU_FEATURE_MAX) || feature->feature_num < 64)
+ goto failed;
+
+- bitmap_copy((unsigned long *)feature_mask, feature->allowed, 64);
++ bitmap_to_arr32(feature_mask, feature->allowed, 64);
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetAllowedFeaturesMaskHigh,
+ feature_mask[1], NULL);
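[Editor's note] Both the smu_v11_0 and smu_v13_0 hunks replace bitmap_copy() with bitmap_to_arr32(). Kernel bitmaps are arrays of unsigned long, so casting a u32[2] to unsigned long * and copying 64 bits is only safe on 32-bit layouts; on 64-bit big-endian machines the two halves land in the wrong words. bitmap_to_arr32() performs the conversion portably. A small sketch:

#include <linux/bitmap.h>
#include <linux/types.h>

static void export_mask(const unsigned long *allowed)
{
	u32 mask[2];

	/* portable, unlike bitmap_copy((unsigned long *)mask, allowed, 64) */
	bitmap_to_arr32(mask, allowed, 64);

	/* mask[0] now holds bits 0-31, mask[1] bits 32-63, on any arch */
}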
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+index d0715927b07f6..13461e28aeedb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+@@ -190,6 +190,9 @@ static int yellow_carp_fini_smc_tables(struct smu_context *smu)
+ kfree(smu_table->watermarks_table);
+ smu_table->watermarks_table = NULL;
+
++ kfree(smu_table->gpu_metrics_table);
++ smu_table->gpu_metrics_table = NULL;
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 956c8982192ba..ab52efb15670e 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -471,7 +471,10 @@ static void ast_set_color_reg(struct ast_private *ast,
+ static void ast_set_crtthd_reg(struct ast_private *ast)
+ {
+ /* Set Threshold */
+- if (ast->chip == AST2300 || ast->chip == AST2400 ||
++ if (ast->chip == AST2600) {
++ ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa7, 0xe0);
++ ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa6, 0xa0);
++ } else if (ast->chip == AST2300 || ast->chip == AST2400 ||
+ ast->chip == AST2500) {
+ ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa7, 0x78);
+ ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa6, 0x60);
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 235c7ee5258ff..873cf6882bd34 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1268,6 +1268,25 @@ static int analogix_dp_bridge_attach(struct drm_bridge *bridge,
+ return 0;
+ }
+
++static
++struct drm_crtc *analogix_dp_get_old_crtc(struct analogix_dp_device *dp,
++ struct drm_atomic_state *state)
++{
++ struct drm_encoder *encoder = dp->encoder;
++ struct drm_connector *connector;
++ struct drm_connector_state *conn_state;
++
++ connector = drm_atomic_get_old_connector_for_encoder(state, encoder);
++ if (!connector)
++ return NULL;
++
++ conn_state = drm_atomic_get_old_connector_state(state, connector);
++ if (!conn_state)
++ return NULL;
++
++ return conn_state->crtc;
++}
++
+ static
+ struct drm_crtc *analogix_dp_get_new_crtc(struct analogix_dp_device *dp,
+ struct drm_atomic_state *state)
+@@ -1448,14 +1467,16 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
+ {
+ struct drm_atomic_state *old_state = old_bridge_state->base.state;
+ struct analogix_dp_device *dp = bridge->driver_private;
+- struct drm_crtc *crtc;
++ struct drm_crtc *old_crtc, *new_crtc;
++ struct drm_crtc_state *old_crtc_state = NULL;
+ struct drm_crtc_state *new_crtc_state = NULL;
++ int ret;
+
+- crtc = analogix_dp_get_new_crtc(dp, old_state);
+- if (!crtc)
++ new_crtc = analogix_dp_get_new_crtc(dp, old_state);
++ if (!new_crtc)
+ goto out;
+
+- new_crtc_state = drm_atomic_get_new_crtc_state(old_state, crtc);
++ new_crtc_state = drm_atomic_get_new_crtc_state(old_state, new_crtc);
+ if (!new_crtc_state)
+ goto out;
+
+@@ -1464,6 +1485,19 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge,
+ return;
+
+ out:
++ old_crtc = analogix_dp_get_old_crtc(dp, old_state);
++ if (old_crtc) {
++ old_crtc_state = drm_atomic_get_old_crtc_state(old_state,
++ old_crtc);
++
++ /* When moving from PSR to fully disabled, exit PSR first. */
++ if (old_crtc_state && old_crtc_state->self_refresh_active) {
++ ret = analogix_dp_disable_psr(dp);
++ if (ret)
++ DRM_ERROR("Failed to disable psr (%d)\n", ret);
++ }
++ }
++
+ analogix_dp_bridge_disable(bridge);
+ }
+
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+index 314a84ffcea3d..1b7eeefe67842 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -560,7 +560,7 @@ static int sn65dsi83_parse_dt(struct sn65dsi83 *ctx, enum sn65dsi83_model model)
+ ctx->host_node = of_graph_get_remote_port_parent(endpoint);
+ of_node_put(endpoint);
+
+- if (ctx->dsi_lanes < 0 || ctx->dsi_lanes > 4) {
++ if (ctx->dsi_lanes <= 0 || ctx->dsi_lanes > 4) {
+ ret = -EINVAL;
+ goto err_put_node;
+ }
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 9603193d2fa13..987e4b212e9fb 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1011,9 +1011,19 @@ crtc_needs_disable(struct drm_crtc_state *old_state,
+ return drm_atomic_crtc_effectively_active(old_state);
+
+ /*
+- * We need to run through the crtc_funcs->disable() function if the CRTC
+- * is currently on, if it's transitioning to self refresh mode, or if
+- * it's in self refresh mode and needs to be fully disabled.
++ * We need to disable bridge(s) and CRTC if we're transitioning out of
++ * self-refresh and changing CRTCs at the same time, because the
++ * bridge tracks self-refresh status via CRTC state.
++ */
++ if (old_state->self_refresh_active &&
++ old_state->crtc != new_state->crtc)
++ return true;
++
++ /*
++ * We also need to run through the crtc_funcs->disable() function if
++ * the CRTC is currently on, if it's transitioning to self refresh
++ * mode, or if it's in self refresh mode and needs to be fully
++ * disabled.
+ */
+ return old_state->active ||
+ (old_state->self_refresh_active && !new_state->active) ||
+diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
+index 9c8829f945b23..f7863d6dea804 100644
+--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
+@@ -69,7 +69,7 @@ static void ipu_crtc_disable_planes(struct ipu_crtc *ipu_crtc,
+ drm_atomic_crtc_state_for_each_plane(plane, old_crtc_state) {
+ if (plane == &ipu_crtc->plane[0]->base)
+ disable_full = true;
+- if (&ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base)
++ if (ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base)
+ disable_partial = true;
+ }
+
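[Editor's note] The ipuv3 one-liner deserves a pause: `&ipu_crtc->plane[1]` takes the address of an array slot inside the structure, which is never NULL, so the old condition was always true (compilers typically warn about this). The intended test is on the pointer stored in the slot. Reduced to essentials:

struct plane;

struct crtc {
	struct plane *plane[2];	/* array of pointers */
};

static int has_second_plane(const struct crtc *c)
{
	/* &c->plane[1] is a valid address by construction -- always true.
	 * The stored pointer is what may legitimately be NULL: */
	return c->plane[1] != NULL;
}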
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 6eb176872a17b..7ae74bd05924a 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1373,8 +1373,13 @@ void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable)
+
+ dp_catalog_ctrl_reset(ctrl->catalog);
+
+- if (enable)
+- dp_catalog_ctrl_enable_irq(ctrl->catalog, enable);
++ /*
++ * The DP controller's programmable registers are not reset to
++ * their default values by DP_SW_RESET, so the interrupt mask
++ * bits have to be updated to enable/disable interrupts.
++ */
++ dp_catalog_ctrl_enable_irq(ctrl->catalog, enable);
+ }
+
+ void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl)
+diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
+index 96bb5a4656278..012af6eaaf620 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -233,6 +233,7 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
+ struct drm_file *file)
+ {
+ struct panfrost_device *pfdev = dev->dev_private;
++ struct panfrost_file_priv *file_priv = file->driver_priv;
+ struct drm_panfrost_submit *args = data;
+ struct drm_syncobj *sync_out = NULL;
+ struct panfrost_job *job;
+@@ -262,12 +263,12 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
+ job->jc = args->jc;
+ job->requirements = args->requirements;
+ job->flush_id = panfrost_gpu_get_latest_flush_id(pfdev);
+- job->file_priv = file->driver_priv;
++ job->mmu = file_priv->mmu;
+
+ slot = panfrost_job_get_slot(job);
+
+ ret = drm_sched_job_init(&job->base,
+- &job->file_priv->sched_entity[slot],
++ &file_priv->sched_entity[slot],
+ NULL);
+ if (ret)
+ goto out_put_job;
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
+index 908d79520853d..016bec72b7ce4 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -201,7 +201,7 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
+ return;
+ }
+
+- cfg = panfrost_mmu_as_get(pfdev, job->file_priv->mmu);
++ cfg = panfrost_mmu_as_get(pfdev, job->mmu);
+
+ job_write(pfdev, JS_HEAD_NEXT_LO(js), lower_32_bits(jc_head));
+ job_write(pfdev, JS_HEAD_NEXT_HI(js), upper_32_bits(jc_head));
+@@ -431,7 +431,7 @@ static void panfrost_job_handle_err(struct panfrost_device *pfdev,
+ job->jc = 0;
+ }
+
+- panfrost_mmu_as_put(pfdev, job->file_priv->mmu);
++ panfrost_mmu_as_put(pfdev, job->mmu);
+ panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
+
+ if (signal_fence)
+@@ -452,7 +452,7 @@ static void panfrost_job_handle_done(struct panfrost_device *pfdev,
+ * happen when we receive the DONE interrupt while doing a GPU reset).
+ */
+ job->jc = 0;
+- panfrost_mmu_as_put(pfdev, job->file_priv->mmu);
++ panfrost_mmu_as_put(pfdev, job->mmu);
+ panfrost_devfreq_record_idle(&pfdev->pfdevfreq);
+
+ dma_fence_signal_locked(job->done_fence);
+diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h
+index 77e6d0e6f612f..8becc1ba0eb95 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_job.h
++++ b/drivers/gpu/drm/panfrost/panfrost_job.h
+@@ -17,7 +17,7 @@ struct panfrost_job {
+ struct kref refcount;
+
+ struct panfrost_device *pfdev;
+- struct panfrost_file_priv *file_priv;
++ struct panfrost_mmu *mmu;
+
+ /* Fence to be signaled by IRQ handler when the job is complete. */
+ struct dma_fence *done_fence;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 1546abcadacf4..d157bb9072e86 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -473,6 +473,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode
+ native_mode->vdisplay != 0 &&
+ native_mode->clock != 0) {
+ mode = drm_mode_duplicate(dev, native_mode);
++ if (!mode)
++ return NULL;
+ mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ drm_mode_set_name(mode);
+
+@@ -487,6 +489,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode
+ * simpler.
+ */
+ mode = drm_cvt_mode(dev, native_mode->hdisplay, native_mode->vdisplay, 60, true, false, false);
++ if (!mode)
++ return NULL;
+ mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+ DRM_DEBUG_KMS("Adding cvt approximation of native panel mode %s\n", mode->name);
+ }
+diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+index 8845ec4b4402a..1874df7c6a736 100644
+--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+@@ -380,9 +380,10 @@ static int debug_notifier_call(struct notifier_block *self,
+ int cpu;
+ struct debug_drvdata *drvdata;
+
+- mutex_lock(&debug_lock);
++ /* Bail out if we can't acquire the mutex or the functionality is off */
++ if (!mutex_trylock(&debug_lock))
++ return NOTIFY_DONE;
+
+- /* Bail out if the functionality is disabled */
+ if (!debug_enable)
+ goto skip_dump;
+
+@@ -401,7 +402,7 @@ static int debug_notifier_call(struct notifier_block *self,
+
+ skip_dump:
+ mutex_unlock(&debug_lock);
+- return 0;
++ return NOTIFY_DONE;
+ }
+
+ static struct notifier_block debug_notifier = {
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index 805c77143a0f9..b4c1ad19cdaec 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -760,7 +760,7 @@ static void cdns_i2c_master_reset(struct i2c_adapter *adap)
+ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg,
+ struct i2c_adapter *adap)
+ {
+- unsigned long time_left;
++ unsigned long time_left, msg_timeout;
+ u32 reg;
+
+ id->p_msg = msg;
+@@ -785,8 +785,16 @@ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg,
+ else
+ cdns_i2c_msend(id);
+
++ /* Minimal time to execute this message */
++ msg_timeout = msecs_to_jiffies((1000 * msg->len * BITS_PER_BYTE) / id->i2c_clk);
++ /* Plus some wiggle room */
++ msg_timeout += msecs_to_jiffies(500);
++
++ if (msg_timeout < adap->timeout)
++ msg_timeout = adap->timeout;
++
+ /* Wait for the signal of completion */
+- time_left = wait_for_completion_timeout(&id->xfer_done, adap->timeout);
++ time_left = wait_for_completion_timeout(&id->xfer_done, msg_timeout);
+ if (time_left == 0) {
+ cdns_i2c_master_reset(adap);
+ dev_err(id->adap.dev.parent,
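[Editor's note] The cadence i2c change derives a floor for the completion timeout from the message itself: len bytes at roughly i2c_clk bits per second take at least len * 8 / i2c_clk seconds on the wire (ignoring start/stop/ACK overhead), plus slack, and the result should never undercut the adapter's configured timeout. A sketch of the arithmetic, assuming the same field meanings as the hunk:

#include <linux/i2c.h>
#include <linux/jiffies.h>
#include <linux/bits.h>
#include <linux/minmax.h>

static unsigned long msg_timeout_jiffies(const struct i2c_msg *msg,
					 unsigned long bus_hz,
					 unsigned long adap_timeout)
{
	/* wire time in ms: bytes * 8 bits * 1000 / (bits per second) */
	unsigned long t =
		msecs_to_jiffies((1000 * msg->len * BITS_PER_BYTE) / bus_hz);

	t += msecs_to_jiffies(500);	/* wiggle room */

	return max(t, adap_timeout);	/* never shrink the default */
}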
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 0b66e25c0e2d5..b2565bb05c791 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -109,6 +109,18 @@ static unsigned int mwait_substates __initdata;
+ #define flg2MWAIT(flags) (((flags) >> 24) & 0xFF)
+ #define MWAIT2flg(eax) ((eax & 0xFF) << 24)
+
++static __always_inline int __intel_idle(struct cpuidle_device *dev,
++ struct cpuidle_driver *drv, int index)
++{
++ struct cpuidle_state *state = &drv->states[index];
++ unsigned long eax = flg2MWAIT(state->flags);
++ unsigned long ecx = 1; /* break on interrupt flag */
++
++ mwait_idle_with_hints(eax, ecx);
++
++ return index;
++}
++
+ /**
+ * intel_idle - Ask the processor to enter the given idle state.
+ * @dev: cpuidle device of the target CPU.
+@@ -129,16 +141,19 @@ static unsigned int mwait_substates __initdata;
+ static __cpuidle int intel_idle(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int index)
+ {
+- struct cpuidle_state *state = &drv->states[index];
+- unsigned long eax = flg2MWAIT(state->flags);
+- unsigned long ecx = 1; /* break on interrupt flag */
++ return __intel_idle(dev, drv, index);
++}
+
+- if (state->flags & CPUIDLE_FLAG_IRQ_ENABLE)
+- local_irq_enable();
++static __cpuidle int intel_idle_irq(struct cpuidle_device *dev,
++ struct cpuidle_driver *drv, int index)
++{
++ int ret;
+
+- mwait_idle_with_hints(eax, ecx);
++ raw_local_irq_enable();
++ ret = __intel_idle(dev, drv, index);
++ raw_local_irq_disable();
+
+- return index;
++ return ret;
+ }
+
+ /**
+@@ -1583,6 +1598,9 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
+ /* Structure copy. */
+ drv->states[drv->state_count] = cpuidle_state_table[cstate];
+
++ if (cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IRQ_ENABLE)
++ drv->states[drv->state_count].enter = intel_idle_irq;
++
+ if ((disabled_states_mask & BIT(drv->state_count)) ||
+ ((icpu->use_acpi || force_use_acpi) &&
+ intel_idle_off_by_default(mwait_hint) &&
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index b400bbe291aa4..c22d8dcaa1001 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -188,7 +188,6 @@ static const struct iio_chan_spec ad7124_channel_template = {
+ .sign = 'u',
+ .realbits = 24,
+ .storagebits = 32,
+- .shift = 8,
+ .endianness = IIO_BE,
+ },
+ };
+diff --git a/drivers/iio/adc/sc27xx_adc.c b/drivers/iio/adc/sc27xx_adc.c
+index 00098caf6d9ee..cfe003cc4f0b6 100644
+--- a/drivers/iio/adc/sc27xx_adc.c
++++ b/drivers/iio/adc/sc27xx_adc.c
+@@ -36,8 +36,8 @@
+
+ /* Bits and mask definition for SC27XX_ADC_CH_CFG register */
+ #define SC27XX_ADC_CHN_ID_MASK GENMASK(4, 0)
+-#define SC27XX_ADC_SCALE_MASK GENMASK(10, 8)
+-#define SC27XX_ADC_SCALE_SHIFT 8
++#define SC27XX_ADC_SCALE_MASK GENMASK(10, 9)
++#define SC27XX_ADC_SCALE_SHIFT 9
+
+ /* Bits definitions for SC27XX_ADC_INT_EN registers */
+ #define SC27XX_ADC_IRQ_EN BIT(0)
+@@ -103,14 +103,14 @@ static struct sc27xx_adc_linear_graph small_scale_graph = {
+ 100, 341,
+ };
+
+-static const struct sc27xx_adc_linear_graph big_scale_graph_calib = {
+- 4200, 856,
+- 3600, 733,
++static const struct sc27xx_adc_linear_graph sc2731_big_scale_graph_calib = {
++ 4200, 850,
++ 3600, 728,
+ };
+
+-static const struct sc27xx_adc_linear_graph small_scale_graph_calib = {
+- 1000, 833,
+- 100, 80,
++static const struct sc27xx_adc_linear_graph sc2731_small_scale_graph_calib = {
++ 1000, 838,
++ 100, 84,
+ };
+
+ static int sc27xx_adc_get_calib_data(u32 calib_data, int calib_adc)
+@@ -130,11 +130,11 @@ static int sc27xx_adc_scale_calibration(struct sc27xx_adc_data *data,
+ size_t len;
+
+ if (big_scale) {
+- calib_graph = &big_scale_graph_calib;
++ calib_graph = &sc2731_big_scale_graph_calib;
+ graph = &big_scale_graph;
+ cell_name = "big_scale_calib";
+ } else {
+- calib_graph = &small_scale_graph_calib;
++ calib_graph = &sc2731_small_scale_graph_calib;
+ graph = &small_scale_graph;
+ cell_name = "small_scale_calib";
+ }
+diff --git a/drivers/iio/adc/stmpe-adc.c b/drivers/iio/adc/stmpe-adc.c
+index d2d4053884991..83e0ac4467ca0 100644
+--- a/drivers/iio/adc/stmpe-adc.c
++++ b/drivers/iio/adc/stmpe-adc.c
+@@ -61,7 +61,7 @@ struct stmpe_adc {
+ static int stmpe_read_voltage(struct stmpe_adc *info,
+ struct iio_chan_spec const *chan, int *val)
+ {
+- long ret;
++ unsigned long ret;
+
+ mutex_lock(&info->lock);
+
+@@ -79,7 +79,7 @@ static int stmpe_read_voltage(struct stmpe_adc *info,
+
+ ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT);
+
+- if (ret <= 0) {
++ if (ret == 0) {
+ stmpe_reg_write(info->stmpe, STMPE_REG_ADC_INT_STA,
+ STMPE_ADC_CH(info->channel));
+ mutex_unlock(&info->lock);
+@@ -96,7 +96,7 @@ static int stmpe_read_voltage(struct stmpe_adc *info,
+ static int stmpe_read_temp(struct stmpe_adc *info,
+ struct iio_chan_spec const *chan, int *val)
+ {
+- long ret;
++ unsigned long ret;
+
+ mutex_lock(&info->lock);
+
+@@ -114,7 +114,7 @@ static int stmpe_read_temp(struct stmpe_adc *info,
+
+ ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT);
+
+- if (ret <= 0) {
++ if (ret == 0) {
+ mutex_unlock(&info->lock);
+ return -ETIMEDOUT;
+ }
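[Editor's note] The two stmpe-adc hunks (and the vl53l0x one further down) fix the same misuse: wait_for_completion_timeout() returns unsigned long -- 0 on timeout, otherwise the jiffies remaining -- so it is never negative, and `ret < 0` / `ret <= 0` checks are dead or misleading. The idiomatic shape:

#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

static int wait_sample(struct completion *done)
{
	unsigned long time_left;

	time_left = wait_for_completion_timeout(done, msecs_to_jiffies(100));
	if (time_left == 0)		/* the only failure mode */
		return -ETIMEDOUT;

	return 0;	/* completed, with time_left jiffies to spare */
}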
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index eb452d0c423c8..5c7e4e725819f 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -71,16 +71,18 @@ st_sensors_match_odr_error:
+
+ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr)
+ {
+- int err;
++ int err = 0;
+ struct st_sensor_odr_avl odr_out = {0, 0};
+ struct st_sensor_data *sdata = iio_priv(indio_dev);
+
++ mutex_lock(&sdata->odr_lock);
++
+ if (!sdata->sensor_settings->odr.mask)
+- return 0;
++ goto unlock_mutex;
+
+ err = st_sensors_match_odr(sdata->sensor_settings, odr, &odr_out);
+ if (err < 0)
+- goto st_sensors_match_odr_error;
++ goto unlock_mutex;
+
+ if ((sdata->sensor_settings->odr.addr ==
+ sdata->sensor_settings->pw.addr) &&
+@@ -103,7 +105,9 @@ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr)
+ if (err >= 0)
+ sdata->odr = odr_out.hz;
+
+-st_sensors_match_odr_error:
++unlock_mutex:
++ mutex_unlock(&sdata->odr_lock);
++
+ return err;
+ }
+ EXPORT_SYMBOL(st_sensors_set_odr);
+@@ -361,6 +365,8 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ struct st_sensors_platform_data *of_pdata;
+ int err = 0;
+
++ mutex_init(&sdata->odr_lock);
++
+ /* If OF/DT pdata exists, it will take precedence of anything else */
+ of_pdata = st_sensors_dev_probe(indio_dev->dev.parent, pdata);
+ if (IS_ERR(of_pdata))
+@@ -554,18 +560,24 @@ int st_sensors_read_info_raw(struct iio_dev *indio_dev,
+ err = -EBUSY;
+ goto out;
+ } else {
++ mutex_lock(&sdata->odr_lock);
+ err = st_sensors_set_enable(indio_dev, true);
+- if (err < 0)
++ if (err < 0) {
++ mutex_unlock(&sdata->odr_lock);
+ goto out;
++ }
+
+ msleep((sdata->sensor_settings->bootime * 1000) / sdata->odr);
+ err = st_sensors_read_axis_data(indio_dev, ch, val);
+- if (err < 0)
++ if (err < 0) {
++ mutex_unlock(&sdata->odr_lock);
+ goto out;
++ }
+
+ *val = *val >> ch->scan_type.shift;
+
+ err = st_sensors_set_enable(indio_dev, false);
++ mutex_unlock(&sdata->odr_lock);
+ }
+ out:
+ mutex_unlock(&indio_dev->mlock);
+diff --git a/drivers/iio/dummy/iio_simple_dummy.c b/drivers/iio/dummy/iio_simple_dummy.c
+index c0b7ef9007354..c24f609c2ade6 100644
+--- a/drivers/iio/dummy/iio_simple_dummy.c
++++ b/drivers/iio/dummy/iio_simple_dummy.c
+@@ -575,10 +575,9 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+ */
+
+ swd = kzalloc(sizeof(*swd), GFP_KERNEL);
+- if (!swd) {
+- ret = -ENOMEM;
+- goto error_kzalloc;
+- }
++ if (!swd)
++ return ERR_PTR(-ENOMEM);
++
+ /*
+ * Allocate an IIO device.
+ *
+@@ -590,7 +589,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+ indio_dev = iio_device_alloc(parent, sizeof(*st));
+ if (!indio_dev) {
+ ret = -ENOMEM;
+- goto error_ret;
++ goto error_free_swd;
+ }
+
+ st = iio_priv(indio_dev);
+@@ -616,6 +615,10 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+ * indio_dev->name = spi_get_device_id(spi)->name;
+ */
+ indio_dev->name = kstrdup(name, GFP_KERNEL);
++ if (!indio_dev->name) {
++ ret = -ENOMEM;
++ goto error_free_device;
++ }
+
+ /* Provide description of available channels */
+ indio_dev->channels = iio_dummy_channels;
+@@ -632,7 +635,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name)
+
+ ret = iio_simple_dummy_events_register(indio_dev);
+ if (ret < 0)
+- goto error_free_device;
++ goto error_free_name;
+
+ ret = iio_simple_dummy_configure_buffer(indio_dev);
+ if (ret < 0)
+@@ -649,11 +652,12 @@ error_unconfigure_buffer:
+ iio_simple_dummy_unconfigure_buffer(indio_dev);
+ error_unregister_events:
+ iio_simple_dummy_events_unregister(indio_dev);
++error_free_name:
++ kfree(indio_dev->name);
+ error_free_device:
+ iio_device_free(indio_dev);
+-error_ret:
++error_free_swd:
+ kfree(swd);
+-error_kzalloc:
+ return ERR_PTR(ret);
+ }
+
+diff --git a/drivers/iio/proximity/vl53l0x-i2c.c b/drivers/iio/proximity/vl53l0x-i2c.c
+index cf38144b6f954..13a87d3e3544f 100644
+--- a/drivers/iio/proximity/vl53l0x-i2c.c
++++ b/drivers/iio/proximity/vl53l0x-i2c.c
+@@ -104,6 +104,7 @@ static int vl53l0x_read_proximity(struct vl53l0x_data *data,
+ u16 tries = 20;
+ u8 buffer[12];
+ int ret;
++ unsigned long time_left;
+
+ ret = i2c_smbus_write_byte_data(client, VL_REG_SYSRANGE_START, 1);
+ if (ret < 0)
+@@ -112,10 +113,8 @@ static int vl53l0x_read_proximity(struct vl53l0x_data *data,
+ if (data->client->irq) {
+ reinit_completion(&data->completion);
+
+- ret = wait_for_completion_timeout(&data->completion, HZ/10);
+- if (ret < 0)
+- return ret;
+- else if (ret == 0)
++ time_left = wait_for_completion_timeout(&data->completion, HZ/10);
++ if (time_left == 0)
+ return -ETIMEDOUT;
+
+ vl53l0x_clear_irq(data);
+diff --git a/drivers/input/mouse/bcm5974.c b/drivers/input/mouse/bcm5974.c
+index 59a14505b9cd1..ca150618d32f1 100644
+--- a/drivers/input/mouse/bcm5974.c
++++ b/drivers/input/mouse/bcm5974.c
+@@ -942,17 +942,22 @@ static int bcm5974_probe(struct usb_interface *iface,
+ if (!dev->tp_data)
+ goto err_free_bt_buffer;
+
+- if (dev->bt_urb)
++ if (dev->bt_urb) {
+ usb_fill_int_urb(dev->bt_urb, udev,
+ usb_rcvintpipe(udev, cfg->bt_ep),
+ dev->bt_data, dev->cfg.bt_datalen,
+ bcm5974_irq_button, dev, 1);
+
++ dev->bt_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
++ }
++
+ usb_fill_int_urb(dev->tp_urb, udev,
+ usb_rcvintpipe(udev, cfg->tp_ep),
+ dev->tp_data, dev->cfg.tp_datalen,
+ bcm5974_irq_trackpad, dev, 1);
+
++ dev->tp_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
++
+ /* create bcm5974 device */
+ usb_make_path(udev, dev->phys, sizeof(dev->phys));
+ strlcat(dev->phys, "/input0", sizeof(dev->phys));
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index f60381cdf1c48..a009a3dd762bc 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -3780,6 +3780,8 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+
+ /* Base address */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!res)
++ return -EINVAL;
+ if (resource_size(res) < arm_smmu_resource_size(smmu)) {
+ dev_err(dev, "MMIO region too small (%pr)\n", res);
+ return -EINVAL;
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index 4bc75c4ce402d..324e8f32962ac 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -2090,11 +2090,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ if (err)
+ return err;
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- ioaddr = res->start;
+- smmu->base = devm_ioremap_resource(dev, res);
++ smmu->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(smmu->base))
+ return PTR_ERR(smmu->base);
++ ioaddr = res->start;
+ /*
+ * The resource size should effectively match the value of SMMU_TOP;
+ * stash that temporarily until we know PAGESIZE to validate it with.
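[Editor's note] The two SMMU hunks attack the same hazard from opposite directions: platform_get_resource() may return NULL, so either check it explicitly (arm-smmu-v3) or switch to devm_platform_get_and_ioremap_resource(), which folds the lookup, the NULL check and the ioremap into one call while still handing back the resource for later use (arm-smmu). Probe-time sketch:

#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/err.h>

static int foo_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
	if (IS_ERR(base))
		return PTR_ERR(base);	/* covers the missing-resource case */

	/* res->start is safe to use from here on */
	return 0;
}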
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 43c5890dc9f35..79d64640240be 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -7962,17 +7962,22 @@ EXPORT_SYMBOL(md_register_thread);
+
+ void md_unregister_thread(struct md_thread **threadp)
+ {
+- struct md_thread *thread = *threadp;
+- if (!thread)
+- return;
+- pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
+- /* Locking ensures that mddev_unlock does not wake_up a
++ struct md_thread *thread;
++
++ /*
++ * Locking ensures that mddev_unlock does not wake_up a
+ * non-existent thread
+ */
+ spin_lock(&pers_lock);
++ thread = *threadp;
++ if (!thread) {
++ spin_unlock(&pers_lock);
++ return;
++ }
+ *threadp = NULL;
+ spin_unlock(&pers_lock);
+
++ pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
+ kthread_stop(thread->tsk);
+ kfree(thread);
+ }
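[Editor's note] The md fix closes a time-of-check/time-of-use window: *threadp was read before taking pers_lock, so a concurrent writer could change it in between. The repaired shape -- read and clear the shared pointer in one locked step, then tear the object down outside the lock -- is a general pattern:

#include <linux/spinlock.h>

struct work_ctx;

static DEFINE_SPINLOCK(ptr_lock);

/* Detach *slot under the lock; the caller owns the result afterwards. */
static struct work_ctx *detach(struct work_ctx **slot)
{
	struct work_ctx *ctx;

	spin_lock(&ptr_lock);
	ctx = *slot;		/* read... */
	*slot = NULL;		/* ...and clear atomically w.r.t. other takers */
	spin_unlock(&ptr_lock);

	return ctx;		/* may be NULL; caller checks */
}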
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index bccc741b9382a..8495045eb989b 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -128,21 +128,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ pr_debug("md/raid0:%s: FINAL %d zones\n",
+ mdname(mddev), conf->nr_strip_zones);
+
+- if (conf->nr_strip_zones == 1) {
+- conf->layout = RAID0_ORIG_LAYOUT;
+- } else if (mddev->layout == RAID0_ORIG_LAYOUT ||
+- mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) {
+- conf->layout = mddev->layout;
+- } else if (default_layout == RAID0_ORIG_LAYOUT ||
+- default_layout == RAID0_ALT_MULTIZONE_LAYOUT) {
+- conf->layout = default_layout;
+- } else {
+- pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
+- mdname(mddev));
+- pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n");
+- err = -ENOTSUPP;
+- goto abort;
+- }
+ /*
+ * now since we have the hard sector sizes, we can make sure
+ * chunk size is a multiple of that sector size
+@@ -273,6 +258,22 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
+ (unsigned long long)smallest->sectors);
+ }
+
++ if (conf->nr_strip_zones == 1 || conf->strip_zone[1].nb_dev == 1) {
++ conf->layout = RAID0_ORIG_LAYOUT;
++ } else if (mddev->layout == RAID0_ORIG_LAYOUT ||
++ mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) {
++ conf->layout = mddev->layout;
++ } else if (default_layout == RAID0_ORIG_LAYOUT ||
++ default_layout == RAID0_ALT_MULTIZONE_LAYOUT) {
++ conf->layout = default_layout;
++ } else {
++ pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n",
++ mdname(mddev));
++ pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n");
++ err = -EOPNOTSUPP;
++ goto abort;
++ }
++
+ pr_debug("md/raid0:%s: done.\n", mdname(mddev));
+ *private_conf = conf;
+
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index 59eda55d92a38..1ef9b61077c44 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -667,6 +667,7 @@ static int rtsx_usb_probe(struct usb_interface *intf,
+ return 0;
+
+ out_init_fail:
++ usb_set_intfdata(ucr->pusb_intf, NULL);
+ usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf,
+ ucr->iobuf_dma);
+ return ret;
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index aa1682b94a23b..45aaf54a75604 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1353,17 +1353,18 @@ static int fastrpc_req_munmap_impl(struct fastrpc_user *fl,
+ struct fastrpc_req_munmap *req)
+ {
+ struct fastrpc_invoke_args args[1] = { [0] = { 0 } };
+- struct fastrpc_buf *buf, *b;
++ struct fastrpc_buf *buf = NULL, *iter, *b;
+ struct fastrpc_munmap_req_msg req_msg;
+ struct device *dev = fl->sctx->dev;
+ int err;
+ u32 sc;
+
+ spin_lock(&fl->lock);
+- list_for_each_entry_safe(buf, b, &fl->mmaps, node) {
+- if ((buf->raddr == req->vaddrout) && (buf->size == req->size))
++ list_for_each_entry_safe(iter, b, &fl->mmaps, node) {
++ if ((iter->raddr == req->vaddrout) && (iter->size == req->size)) {
++ buf = iter;
+ break;
+- buf = NULL;
++ }
+ }
+ spin_unlock(&fl->lock);
+
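[Editor's note] The fastrpc hunk fixes a well-known list_for_each_entry() pitfall: when the loop runs to completion, the iterator does not become NULL -- it points at a bogus entry computed from the list head -- so "reset it inside the loop" logic is fragile. The robust idiom keeps a separate result variable that is only assigned on a match:

#include <linux/list.h>
#include <linux/types.h>

struct mapping {
	struct list_head node;
	u64 raddr;
};

static struct mapping *find_mapping(struct list_head *mmaps, u64 raddr)
{
	struct mapping *found = NULL, *iter;

	list_for_each_entry(iter, mmaps, node) {
		if (iter->raddr == raddr) {
			found = iter;	/* iter is only trusted on a hit */
			break;
		}
	}

	return found;	/* NULL when the list was exhausted */
}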
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index f21854ac5cc2b..8cb342c562af6 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -327,6 +327,11 @@ void lkdtm_ARRAY_BOUNDS(void)
+
+ not_checked = kmalloc(sizeof(*not_checked) * 2, GFP_KERNEL);
+ checked = kmalloc(sizeof(*checked) * 2, GFP_KERNEL);
++ if (!not_checked || !checked) {
++ kfree(not_checked);
++ kfree(checked);
++ return;
++ }
+
+ pr_info("Array access within bounds ...\n");
+ /* For both, touch all bytes in the actual member size. */
+@@ -346,7 +351,10 @@ void lkdtm_ARRAY_BOUNDS(void)
+ kfree(not_checked);
+ kfree(checked);
+ pr_err("FAIL: survived array bounds overflow!\n");
+- pr_expected_config(CONFIG_UBSAN_BOUNDS);
++ if (IS_ENABLED(CONFIG_UBSAN_BOUNDS))
++ pr_expected_config(CONFIG_UBSAN_TRAP);
++ else
++ pr_expected_config(CONFIG_UBSAN_BOUNDS);
+ }
+
+ void lkdtm_CORRUPT_LIST_ADD(void)
+diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
+index d6137c70ebbe6..cc76ebcca4c7d 100644
+--- a/drivers/misc/lkdtm/lkdtm.h
++++ b/drivers/misc/lkdtm/lkdtm.h
+@@ -9,19 +9,19 @@
+ extern char *lkdtm_kernel_info;
+
+ #define pr_expected_config(kconfig) \
+-{ \
++do { \
+ if (IS_ENABLED(kconfig)) \
+ pr_err("Unexpected! This %s was built with " #kconfig "=y\n", \
+ lkdtm_kernel_info); \
+ else \
+ pr_warn("This is probably expected, since this %s was built *without* " #kconfig "=y\n", \
+ lkdtm_kernel_info); \
+-}
++} while (0)
+
+ #ifndef MODULE
+ int lkdtm_check_bool_cmdline(const char *param);
+ #define pr_expected_config_param(kconfig, param) \
+-{ \
++do { \
+ if (IS_ENABLED(kconfig)) { \
+ switch (lkdtm_check_bool_cmdline(param)) { \
+ case 0: \
+@@ -52,7 +52,7 @@ int lkdtm_check_bool_cmdline(const char *param);
+ break; \
+ } \
+ } \
+-}
++} while (0)
+ #else
+ #define pr_expected_config_param(kconfig, param) pr_expected_config(kconfig)
+ #endif
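[Editor's note] The lkdtm.h change applies the standard multi-statement-macro rule: a bare { ... } body breaks under `if (x) MACRO(); else ...`, because the caller's trailing semicolon after the block orphans the else. Wrapping the body in do { ... } while (0) makes the macro expand to exactly one statement:

#include <linux/printk.h>

/* Without do/while(0):
 *	if (cond)
 *		{ pr_info(...); pr_info(...); };   <- stray ';'
 *	else                                       <- no matching if
 */
#define report_pair(a, b)		\
do {					\
	pr_info("a = %d\n", (a));	\
	pr_info("b = %d\n", (b));	\
} while (0)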
+diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c
+index 9161ce7ed47a6..3fead5efe523a 100644
+--- a/drivers/misc/lkdtm/usercopy.c
++++ b/drivers/misc/lkdtm/usercopy.c
+@@ -30,12 +30,12 @@ static const unsigned char test_text[] = "This is a test.\n";
+ */
+ static noinline unsigned char *trick_compiler(unsigned char *stack)
+ {
+- return stack + 0;
++ return stack + unconst;
+ }
+
+ static noinline unsigned char *do_usercopy_stack_callee(int value)
+ {
+- unsigned char buf[32];
++ unsigned char buf[128];
+ int i;
+
+ /* Exercise stack to avoid everything living in registers. */
+@@ -43,7 +43,12 @@ static noinline unsigned char *do_usercopy_stack_callee(int value)
+ buf[i] = value & 0xff;
+ }
+
+- return trick_compiler(buf);
++ /*
++ * Put the target buffer in the middle of stack allocation
++ * so that we don't step on future stack users regardless
++ * of stack growth direction.
++ */
++ return trick_compiler(&buf[(128/2)-32]);
+ }
+
+ static noinline void do_usercopy_stack(bool to_user, bool bad_frame)
+@@ -66,6 +71,12 @@ static noinline void do_usercopy_stack(bool to_user, bool bad_frame)
+ bad_stack -= sizeof(unsigned long);
+ }
+
++#ifdef ARCH_HAS_CURRENT_STACK_POINTER
++ pr_info("stack : %px\n", (void *)current_stack_pointer);
++#endif
++ pr_info("good_stack: %px-%px\n", good_stack, good_stack + sizeof(good_stack));
++ pr_info("bad_stack : %px-%px\n", bad_stack, bad_stack + sizeof(good_stack));
++
+ user_addr = vm_mmap(NULL, 0, PAGE_SIZE,
+ PROT_READ | PROT_WRITE | PROT_EXEC,
+ MAP_ANONYMOUS | MAP_PRIVATE, 0);
+diff --git a/drivers/misc/pvpanic/pvpanic.c b/drivers/misc/pvpanic/pvpanic.c
+index 4b8f1c7d726d1..049a120063489 100644
+--- a/drivers/misc/pvpanic/pvpanic.c
++++ b/drivers/misc/pvpanic/pvpanic.c
+@@ -34,7 +34,9 @@ pvpanic_send_event(unsigned int event)
+ {
+ struct pvpanic_instance *pi_cur;
+
+- spin_lock(&pvpanic_lock);
++ if (!spin_trylock(&pvpanic_lock))
++ return;
++
+ list_for_each_entry(pi_cur, &pvpanic_list, list) {
+ if (event & pi_cur->capability & pi_cur->events)
+ iowrite8(event, pi_cur->base);
+@@ -55,9 +57,13 @@ pvpanic_panic_notify(struct notifier_block *nb, unsigned long code, void *unused
+ return NOTIFY_DONE;
+ }
+
++/*
++ * Call our notifier very early on panic, deferring the
++ * action taken to the hypervisor.
++ */
+ static struct notifier_block pvpanic_panic_nb = {
+ .notifier_call = pvpanic_panic_notify,
+- .priority = 1, /* let this called before broken drm_fb_helper() */
++ .priority = INT_MAX,
+ };
+
+ static void pvpanic_remove(void *param)
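[Editor's note] pvpanic moves to spin_trylock() for the same reason the coresight notifier above moved to mutex_trylock(): panic and die notifiers can fire from any context, possibly on a CPU that already holds the lock, so blocking there would wedge the machine during the very crash it is meant to report. Raising the priority to INT_MAX runs the hypervisor notification ahead of less critical notifiers. The shape of the pattern:

#include <linux/spinlock.h>
#include <linux/notifier.h>
#include <linux/limits.h>

static DEFINE_SPINLOCK(evt_lock);

static int on_panic(struct notifier_block *nb, unsigned long code, void *unused)
{
	/* never block in a panic notifier; give up instead */
	if (!spin_trylock(&evt_lock))
		return NOTIFY_DONE;

	/* ... report the event ... */

	spin_unlock(&evt_lock);
	return NOTIFY_DONE;
}

static struct notifier_block panic_nb = {
	.notifier_call	= on_panic,
	.priority	= INT_MAX,	/* run before less critical notifiers */
};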
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 52e4106bcb6e5..57f5e0a4d7d79 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1482,8 +1482,7 @@ void mmc_blk_cqe_recovery(struct mmc_queue *mq)
+ err = mmc_cqe_recovery(host);
+ if (err)
+ mmc_blk_reset(mq->blkdata, host, MMC_BLK_CQE_RECOVERY);
+- else
+- mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
++ mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY);
+
+ pr_debug("%s: CQE recovery done\n", mmc_hostname(host));
+ }
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 28f55f9cf7153..053ab52668e8b 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -97,6 +97,33 @@ out:
+ return e;
+ }
+
++/*
++ * has_enough_free_count - whether ubi has enough free pebs to fill fm pools
++ * @ubi: UBI device description object
++ * @is_wl_pool: whether UBI is filling wear leveling pool
++ *
++ * This helper function checks whether there are enough free pebs (after
++ * deducting the fastmap pebs) to fill fm_pool and fm_wl_pool; the rule
++ * applies once at least one free peb has been filled into fm_wl_pool.
++ * For the wear leveling pool, UBI should also reserve free pebs for bad
++ * peb handling, because producing new bad pebs may leave too few free
++ * pebs for user volumes.
++ */
++static bool has_enough_free_count(struct ubi_device *ubi, bool is_wl_pool)
++{
++ int fm_used = 0; // fastmap non anchor pebs.
++ int beb_rsvd_pebs;
++
++ if (!ubi->free.rb_node)
++ return false;
++
++ beb_rsvd_pebs = is_wl_pool ? ubi->beb_rsvd_pebs : 0;
++ if (ubi->fm_wl_pool.size > 0 && !(ubi->ro_mode || ubi->fm_disabled))
++ fm_used = ubi->fm_size / ubi->leb_size - 1;
++
++ return ubi->free_count - beb_rsvd_pebs > fm_used;
++}
++
+ /**
+ * ubi_refill_pools - refills all fastmap PEB pools.
+ * @ubi: UBI device description object
+@@ -120,21 +147,17 @@ void ubi_refill_pools(struct ubi_device *ubi)
+ wl_tree_add(ubi->fm_anchor, &ubi->free);
+ ubi->free_count++;
+ }
+- if (ubi->fm_next_anchor) {
+- wl_tree_add(ubi->fm_next_anchor, &ubi->free);
+- ubi->free_count++;
+- }
+
+- /* All available PEBs are in ubi->free, now is the time to get
++ /*
++ * All available PEBs are in ubi->free, now is the time to get
+ * the best anchor PEBs.
+ */
+ ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
+- ubi->fm_next_anchor = ubi_wl_get_fm_peb(ubi, 1);
+
+ for (;;) {
+ enough = 0;
+ if (pool->size < pool->max_size) {
+- if (!ubi->free.rb_node)
++ if (!has_enough_free_count(ubi, false))
+ break;
+
+ e = wl_get_wle(ubi);
+@@ -147,8 +170,7 @@ void ubi_refill_pools(struct ubi_device *ubi)
+ enough++;
+
+ if (wl_pool->size < wl_pool->max_size) {
+- if (!ubi->free.rb_node ||
+- (ubi->free_count - ubi->beb_rsvd_pebs < 5))
++ if (!has_enough_free_count(ubi, true))
+ break;
+
+ e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF);
+@@ -286,20 +308,26 @@ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi)
+ int ubi_ensure_anchor_pebs(struct ubi_device *ubi)
+ {
+ struct ubi_work *wrk;
++ struct ubi_wl_entry *anchor;
+
+ spin_lock(&ubi->wl_lock);
+
+- /* Do we have a next anchor? */
+- if (!ubi->fm_next_anchor) {
+- ubi->fm_next_anchor = ubi_wl_get_fm_peb(ubi, 1);
+- if (!ubi->fm_next_anchor)
+- /* Tell wear leveling to produce a new anchor PEB */
+- ubi->fm_do_produce_anchor = 1;
++ /* Do we already have an anchor? */
++ if (ubi->fm_anchor) {
++ spin_unlock(&ubi->wl_lock);
++ return 0;
+ }
+
+- /* Do wear leveling to get a new anchor PEB or check the
+- * existing next anchor candidate.
+- */
++ /* See if we can find an anchor PEB on the list of free PEBs */
++ anchor = ubi_wl_get_fm_peb(ubi, 1);
++ if (anchor) {
++ ubi->fm_anchor = anchor;
++ spin_unlock(&ubi->wl_lock);
++ return 0;
++ }
++
++ ubi->fm_do_produce_anchor = 1;
++ /* No luck, trigger wear leveling to produce a new anchor PEB. */
+ if (ubi->wl_scheduled) {
+ spin_unlock(&ubi->wl_lock);
+ return 0;
+@@ -381,11 +409,6 @@ static void ubi_fastmap_close(struct ubi_device *ubi)
+ ubi->fm_anchor = NULL;
+ }
+
+- if (ubi->fm_next_anchor) {
+- return_unused_peb(ubi, ubi->fm_next_anchor);
+- ubi->fm_next_anchor = NULL;
+- }
+-
+ if (ubi->fm) {
+ for (i = 0; i < ubi->fm->used_blocks; i++)
+ kfree(ubi->fm->e[i]);
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 6b5f1ffd961b9..6e95c4b1473e6 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -1230,17 +1230,6 @@ static int ubi_write_fastmap(struct ubi_device *ubi,
+ fm_pos += sizeof(*fec);
+ ubi_assert(fm_pos <= ubi->fm_size);
+ }
+- if (ubi->fm_next_anchor) {
+- fec = (struct ubi_fm_ec *)(fm_raw + fm_pos);
+-
+- fec->pnum = cpu_to_be32(ubi->fm_next_anchor->pnum);
+- set_seen(ubi, ubi->fm_next_anchor->pnum, seen_pebs);
+- fec->ec = cpu_to_be32(ubi->fm_next_anchor->ec);
+-
+- free_peb_count++;
+- fm_pos += sizeof(*fec);
+- ubi_assert(fm_pos <= ubi->fm_size);
+- }
+ fmh->free_peb_count = cpu_to_be32(free_peb_count);
+
+ ubi_for_each_used_peb(ubi, wl_e, tmp_rb) {
+diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
+index 7c083ad58274a..078112e23dfd5 100644
+--- a/drivers/mtd/ubi/ubi.h
++++ b/drivers/mtd/ubi/ubi.h
+@@ -489,8 +489,7 @@ struct ubi_debug_info {
+ * @fm_work: fastmap work queue
+ * @fm_work_scheduled: non-zero if fastmap work was scheduled
+ * @fast_attach: non-zero if UBI was attached by fastmap
+- * @fm_anchor: The new anchor PEB used during fastmap update
+- * @fm_next_anchor: An anchor PEB candidate for the next time fastmap is updated
++ * @fm_anchor: The next anchor PEB to use for fastmap
+ * @fm_do_produce_anchor: If true produce an anchor PEB in wl
+ *
+ * @used: RB-tree of used physical eraseblocks
+@@ -601,7 +600,6 @@ struct ubi_device {
+ int fm_work_scheduled;
+ int fast_attach;
+ struct ubi_wl_entry *fm_anchor;
+- struct ubi_wl_entry *fm_next_anchor;
+ int fm_do_produce_anchor;
+
+ /* Wear-leveling sub-system's stuff */
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 1bc7b3a056046..6ea95ade4ca6b 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -309,7 +309,6 @@ out_mapping:
+ ubi->volumes[vol_id] = NULL;
+ ubi->vol_count -= 1;
+ spin_unlock(&ubi->volumes_lock);
+- ubi_eba_destroy_table(eba_tbl);
+ out_acc:
+ spin_lock(&ubi->volumes_lock);
+ ubi->rsvd_pebs -= vol->reserved_pebs;
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 8455f1d47f3c9..afcdacb9d0e99 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -689,16 +689,16 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+ e1 = find_anchor_wl_entry(&ubi->used);
+- if (e1 && ubi->fm_next_anchor &&
+- (ubi->fm_next_anchor->ec - e1->ec >= UBI_WL_THRESHOLD)) {
++ if (e1 && ubi->fm_anchor &&
++ (ubi->fm_anchor->ec - e1->ec >= UBI_WL_THRESHOLD)) {
+ ubi->fm_do_produce_anchor = 1;
+- /* fm_next_anchor is no longer considered a good anchor
+- * candidate.
++ /*
++ * fm_anchor is no longer considered a good anchor.
+ * NULL assignment also prevents multiple wear level checks
+ * of this PEB.
+ */
+- wl_tree_add(ubi->fm_next_anchor, &ubi->free);
+- ubi->fm_next_anchor = NULL;
++ wl_tree_add(ubi->fm_anchor, &ubi->free);
++ ubi->fm_anchor = NULL;
+ ubi->free_count++;
+ }
+
+@@ -1085,12 +1085,13 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
+ if (!err) {
+ spin_lock(&ubi->wl_lock);
+
+- if (!ubi->fm_disabled && !ubi->fm_next_anchor &&
++ if (!ubi->fm_disabled && !ubi->fm_anchor &&
+ e->pnum < UBI_FM_MAX_START) {
+- /* Abort anchor production, if needed it will be
++ /*
++ * Abort anchor production, if needed it will be
+ * enabled again in the wear leveling started below.
+ */
+- ubi->fm_next_anchor = e;
++ ubi->fm_anchor = e;
+ ubi->fm_do_produce_anchor = 0;
+ } else {
+ wl_tree_add(e, &ubi->free);
+diff --git a/drivers/net/amt.c b/drivers/net/amt.c
+index fb774d568baab..83e5fe784f5c6 100644
+--- a/drivers/net/amt.c
++++ b/drivers/net/amt.c
+@@ -51,6 +51,7 @@ static char *status_str[] = {
+ };
+
+ static char *type_str[] = {
++ "", /* Type 0 is not defined */
+ "AMT_MSG_DISCOVERY",
+ "AMT_MSG_ADVERTISEMENT",
+ "AMT_MSG_REQUEST",
+@@ -2220,8 +2221,7 @@ static bool amt_advertisement_handler(struct amt_dev *amt, struct sk_buff *skb)
+ struct amt_header_advertisement *amta;
+ int hdr_size;
+
+- hdr_size = sizeof(*amta) - sizeof(struct amt_header);
+-
++ hdr_size = sizeof(*amta) + sizeof(struct udphdr);
+ if (!pskb_may_pull(skb, hdr_size))
+ return true;
+
+@@ -2251,19 +2251,27 @@ static bool amt_multicast_data_handler(struct amt_dev *amt, struct sk_buff *skb)
+ struct ethhdr *eth;
+ struct iphdr *iph;
+
++ hdr_size = sizeof(*amtmd) + sizeof(struct udphdr);
++ if (!pskb_may_pull(skb, hdr_size))
++ return true;
++
+ amtmd = (struct amt_header_mcast_data *)(udp_hdr(skb) + 1);
+ if (amtmd->reserved || amtmd->version)
+ return true;
+
+- hdr_size = sizeof(*amtmd) + sizeof(struct udphdr);
+ if (iptunnel_pull_header(skb, hdr_size, htons(ETH_P_IP), false))
+ return true;
++
+ skb_reset_network_header(skb);
+ skb_push(skb, sizeof(*eth));
+ skb_reset_mac_header(skb);
+ skb_pull(skb, sizeof(*eth));
+ eth = eth_hdr(skb);
++
++ if (!pskb_may_pull(skb, sizeof(*iph)))
++ return true;
+ iph = ip_hdr(skb);
++
+ if (iph->version == 4) {
+ if (!ipv4_is_multicast(iph->daddr))
+ return true;
+@@ -2274,6 +2282,9 @@ static bool amt_multicast_data_handler(struct amt_dev *amt, struct sk_buff *skb)
+ } else if (iph->version == 6) {
+ struct ipv6hdr *ip6h;
+
++ if (!pskb_may_pull(skb, sizeof(*ip6h)))
++ return true;
++
+ ip6h = ipv6_hdr(skb);
+ if (!ipv6_addr_is_multicast(&ip6h->daddr))
+ return true;
+@@ -2306,8 +2317,7 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
+ struct iphdr *iph;
+ int hdr_size, len;
+
+- hdr_size = sizeof(*amtmq) - sizeof(struct amt_header);
+-
++ hdr_size = sizeof(*amtmq) + sizeof(struct udphdr);
+ if (!pskb_may_pull(skb, hdr_size))
+ return true;
+
+@@ -2315,22 +2325,27 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
+ if (amtmq->reserved || amtmq->version)
+ return true;
+
+- hdr_size = sizeof(*amtmq) + sizeof(struct udphdr) - sizeof(*eth);
++ hdr_size -= sizeof(*eth);
+ if (iptunnel_pull_header(skb, hdr_size, htons(ETH_P_TEB), false))
+ return true;
++
+ oeth = eth_hdr(skb);
+ skb_reset_mac_header(skb);
+ skb_pull(skb, sizeof(*eth));
+ skb_reset_network_header(skb);
+ eth = eth_hdr(skb);
++ if (!pskb_may_pull(skb, sizeof(*iph)))
++ return true;
++
+ iph = ip_hdr(skb);
+ if (iph->version == 4) {
+- if (!ipv4_is_multicast(iph->daddr))
+- return true;
+ if (!pskb_may_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS +
+ sizeof(*ihv3)))
+ return true;
+
++ if (!ipv4_is_multicast(iph->daddr))
++ return true;
++
+ ihv3 = skb_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
+ skb_reset_transport_header(skb);
+ skb_push(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
+@@ -2345,15 +2360,17 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
+ ip_eth_mc_map(iph->daddr, eth->h_dest);
+ #if IS_ENABLED(CONFIG_IPV6)
+ } else if (iph->version == 6) {
+- struct ipv6hdr *ip6h = ipv6_hdr(skb);
+ struct mld2_query *mld2q;
++ struct ipv6hdr *ip6h;
+
+- if (!ipv6_addr_is_multicast(&ip6h->daddr))
+- return true;
+ if (!pskb_may_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS +
+ sizeof(*mld2q)))
+ return true;
+
++ ip6h = ipv6_hdr(skb);
++ if (!ipv6_addr_is_multicast(&ip6h->daddr))
++ return true;
++
+ mld2q = skb_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
+ skb_reset_transport_header(skb);
+ skb_push(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
+@@ -2389,23 +2406,23 @@ static bool amt_update_handler(struct amt_dev *amt, struct sk_buff *skb)
+ {
+ struct amt_header_membership_update *amtmu;
+ struct amt_tunnel_list *tunnel;
+- struct udphdr *udph;
+ struct ethhdr *eth;
+ struct iphdr *iph;
+- int len;
++ int len, hdr_size;
+
+ iph = ip_hdr(skb);
+- udph = udp_hdr(skb);
+
+- if (__iptunnel_pull_header(skb, sizeof(*udph), skb->protocol,
+- false, false))
++ hdr_size = sizeof(*amtmu) + sizeof(struct udphdr);
++ if (!pskb_may_pull(skb, hdr_size))
+ return true;
+
+- amtmu = (struct amt_header_membership_update *)skb->data;
++ amtmu = (struct amt_header_membership_update *)(udp_hdr(skb) + 1);
+ if (amtmu->reserved || amtmu->version)
+ return true;
+
+- skb_pull(skb, sizeof(*amtmu));
++ if (iptunnel_pull_header(skb, hdr_size, skb->protocol, false))
++ return true;
++
+ skb_reset_network_header(skb);
+
+ list_for_each_entry_rcu(tunnel, &amt->tunnel_list, list) {
+@@ -2423,9 +2440,12 @@ static bool amt_update_handler(struct amt_dev *amt, struct sk_buff *skb)
+ }
+ }
+
+- return false;
++ return true;
+
+ report:
++ if (!pskb_may_pull(skb, sizeof(*iph)))
++ return true;
++
+ iph = ip_hdr(skb);
+ if (iph->version == 4) {
+ if (ip_mc_check_igmp(skb)) {
+@@ -2679,6 +2699,7 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
+ amt = rcu_dereference_sk_user_data(sk);
+ if (!amt) {
+ err = true;
++ kfree_skb(skb);
+ goto out;
+ }
+
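[Editor's note] The amt hunks all enforce one rule: validate with pskb_may_pull() for the full header you are about to read before dereferencing it, because skb payload may be non-linear and the linear area shorter than the header. pskb_may_pull() may also reallocate the linear buffer, so header pointers must be fetched (or refetched) only after it succeeds -- which is why the patch moves the ip_hdr()/ipv6_hdr() loads below the checks. Condensed:

#include <linux/skbuff.h>
#include <linux/ip.h>

static bool inner_is_ipv4(struct sk_buff *skb)
{
	struct iphdr *iph;

	/* validate first; a short or fragmented skb would otherwise
	 * be read out of bounds */
	if (!pskb_may_pull(skb, sizeof(*iph)))
		return false;

	/* fetch the pointer only after the pull, which may reallocate */
	iph = ip_hdr(skb);

	return iph->version == 4;
}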
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 8acec33a47027..9d8db457599c6 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -2021,8 +2021,10 @@ static int gswip_gphy_fw_list(struct gswip_priv *priv,
+ for_each_available_child_of_node(gphy_fw_list_np, gphy_fw_np) {
+ err = gswip_gphy_fw_probe(priv, &priv->gphy_fw[i],
+ gphy_fw_np, i);
+- if (err)
++ if (err) {
++ of_node_put(gphy_fw_np);
+ goto remove_gphy;
++ }
+ i++;
+ }
+
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index cf7754dddad78..283ae376f4695 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3482,6 +3482,7 @@ static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip,
+ */
+ child = of_get_child_by_name(np, "mdio");
+ err = mv88e6xxx_mdio_register(chip, child, false);
++ of_node_put(child);
+ if (err)
+ return err;
+
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
+index 2b05ead515cdc..6ae7a0ed9e0ba 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.c
++++ b/drivers/net/dsa/mv88e6xxx/serdes.c
+@@ -50,22 +50,17 @@ static int mv88e6390_serdes_write(struct mv88e6xxx_chip *chip,
+ }
+
+ static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip,
+- u16 ctrl, u16 status, u16 lpa,
++ u16 bmsr, u16 lpa, u16 status,
+ struct phylink_link_state *state)
+ {
+ state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);
++ state->an_complete = !!(bmsr & BMSR_ANEGCOMPLETE);
+
+ if (status & MV88E6390_SGMII_PHY_STATUS_SPD_DPL_VALID) {
+ /* The Speed and Duplex Resolved register is 1 if AN is enabled
+ * and complete, or if AN is disabled. So with disabled AN we
+- * still get here on link up. But we want to set an_complete
+- * only if AN was enabled, thus we look at BMCR_ANENABLE.
+- * (According to 802.3-2008 section 22.2.4.2.10, we should be
+- * able to get this same value from BMSR_ANEGCAPABLE, but tests
+- * show that these Marvell PHYs don't conform to this part of
+- * the specificaion - BMSR_ANEGCAPABLE is simply always 1.)
++ * still get here on link up.
+ */
+- state->an_complete = !!(ctrl & BMCR_ANENABLE);
+ state->duplex = status &
+ MV88E6390_SGMII_PHY_STATUS_DUPLEX_FULL ?
+ DUPLEX_FULL : DUPLEX_HALF;
+@@ -191,12 +186,12 @@ int mv88e6352_serdes_pcs_config(struct mv88e6xxx_chip *chip, int port,
+ int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,
+ int lane, struct phylink_link_state *state)
+ {
+- u16 lpa, status, ctrl;
++ u16 bmsr, lpa, status;
+ int err;
+
+- err = mv88e6352_serdes_read(chip, MII_BMCR, &ctrl);
++ err = mv88e6352_serdes_read(chip, MII_BMSR, &bmsr);
+ if (err) {
+- dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
++ dev_err(chip->dev, "can't read Serdes BMSR: %d\n", err);
+ return err;
+ }
+
+@@ -212,7 +207,7 @@ int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,
+ return err;
+ }
+
+- return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
++ return mv88e6xxx_serdes_pcs_get_state(chip, bmsr, lpa, status, state);
+ }
+
+ int mv88e6352_serdes_pcs_an_restart(struct mv88e6xxx_chip *chip, int port,
+@@ -915,13 +910,13 @@ int mv88e6390_serdes_pcs_config(struct mv88e6xxx_chip *chip, int port,
+ static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,
+ int port, int lane, struct phylink_link_state *state)
+ {
+- u16 lpa, status, ctrl;
++ u16 bmsr, lpa, status;
+ int err;
+
+ err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+- MV88E6390_SGMII_BMCR, &ctrl);
++ MV88E6390_SGMII_BMSR, &bmsr);
+ if (err) {
+- dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
++ dev_err(chip->dev, "can't read Serdes PHY BMSR: %d\n", err);
+ return err;
+ }
+
+@@ -939,7 +934,7 @@ static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,
+ return err;
+ }
+
+- return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
++ return mv88e6xxx_serdes_pcs_get_state(chip, bmsr, lpa, status, state);
+ }
+
+ static int mv88e6390_serdes_pcs_get_state_10g(struct mv88e6xxx_chip *chip,
+diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
+index 993b2fb429612..36bf3ce545c9b 100644
+--- a/drivers/net/ethernet/altera/altera_tse_main.c
++++ b/drivers/net/ethernet/altera/altera_tse_main.c
+@@ -163,7 +163,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
+ mdio = mdiobus_alloc();
+ if (mdio == NULL) {
+ netdev_err(dev, "Error allocating MDIO bus\n");
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto put_node;
+ }
+
+ mdio->name = ALTERA_TSE_RESOURCE_NAME;
+@@ -180,6 +181,7 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
+ mdio->id);
+ goto out_free_mdio;
+ }
++ of_node_put(mdio_node);
+
+ if (netif_msg_drv(priv))
+ netdev_info(dev, "MDIO bus %s: created\n", mdio->id);
+@@ -189,6 +191,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id)
+ out_free_mdio:
+ mdiobus_free(mdio);
+ mdio = NULL;
++put_node:
++ of_node_put(mdio_node);
+ return ret;
+ }
+
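
Note on the altera_tse hunk: of_get_child_by_name() takes a reference on
the node it returns, and the fix adds the matching of_node_put() on both
the success path and the new put_node error path. A condensed sketch of
the pattern, self-contained apart from the OF core itself:

#include <linux/of.h>

static int mdio_node_check_sketch(struct device_node *parent)
{
        struct device_node *mdio_node;
        int ret;

        mdio_node = of_get_child_by_name(parent, "mdio"); /* takes a ref */
        if (!mdio_node)
                return -ENODEV;

        ret = of_device_is_available(mdio_node) ? 0 : -ENODEV;
        of_node_put(mdio_node); /* dropped on every exit path */
        return ret;
}
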
+diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c b/drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c
+index 086739e4f40a9..9b83d53616994 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c
++++ b/drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c
+@@ -234,6 +234,7 @@ struct mii_bus *bcma_mdio_mii_register(struct bgmac *bgmac)
+ np = of_get_child_by_name(core->dev.of_node, "mdio");
+
+ err = of_mdiobus_register(mii_bus, np);
++ of_node_put(np);
+ if (err) {
+ dev_err(&core->dev, "Registration of mii bus failed\n");
+ goto err_free_bus;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+index 66cc79500c10d..af9c88e714525 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+@@ -830,8 +830,6 @@ void i40e_free_tx_resources(struct i40e_ring *tx_ring)
+ i40e_clean_tx_ring(tx_ring);
+ kfree(tx_ring->tx_bi);
+ tx_ring->tx_bi = NULL;
+- kfree(tx_ring->xsk_descs);
+- tx_ring->xsk_descs = NULL;
+
+ if (tx_ring->desc) {
+ dma_free_coherent(tx_ring->dev, tx_ring->size,
+@@ -1433,13 +1431,6 @@ int i40e_setup_tx_descriptors(struct i40e_ring *tx_ring)
+ if (!tx_ring->tx_bi)
+ goto err;
+
+- if (ring_is_xdp(tx_ring)) {
+- tx_ring->xsk_descs = kcalloc(I40E_MAX_NUM_DESCRIPTORS, sizeof(*tx_ring->xsk_descs),
+- GFP_KERNEL);
+- if (!tx_ring->xsk_descs)
+- goto err;
+- }
+-
+ u64_stats_init(&tx_ring->syncp);
+
+ /* round up to nearest 4K */
+@@ -1463,8 +1454,6 @@ int i40e_setup_tx_descriptors(struct i40e_ring *tx_ring)
+ return 0;
+
+ err:
+- kfree(tx_ring->xsk_descs);
+- tx_ring->xsk_descs = NULL;
+ kfree(tx_ring->tx_bi);
+ tx_ring->tx_bi = NULL;
+ return -ENOMEM;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+index bfc2845c99d1c..f6d91fa1562ee 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+@@ -390,7 +390,6 @@ struct i40e_ring {
+ u16 rx_offset;
+ struct xdp_rxq_info xdp_rxq;
+ struct xsk_buff_pool *xsk_pool;
+- struct xdp_desc *xsk_descs; /* For storing descriptors in the AF_XDP ZC path */
+ } ____cacheline_internodealigned_in_smp;
+
+ static inline bool ring_uses_build_skb(struct i40e_ring *ring)
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index e5e72b5bb6196..c1d25b0b0ca26 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -470,11 +470,11 @@ static void i40e_set_rs_bit(struct i40e_ring *xdp_ring)
+ **/
+ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
+ {
+- struct xdp_desc *descs = xdp_ring->xsk_descs;
++ struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
+ u32 nb_pkts, nb_processed = 0;
+ unsigned int total_bytes = 0;
+
+- nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, descs, budget);
++ nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
+ if (!nb_pkts)
+ return true;
+
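
Note on the i40e hunks: the per-ring xsk_descs scratch array is gone; the
batch buffer now lives in the xsk_buff_pool itself as tx_descs, and
xsk_tx_peek_release_desc_batch() no longer takes a descriptor pointer. A
sketch of the transmit-loop shape after the change, with post_descriptor()
a hypothetical stand-in for the hardware-specific descriptor write:

#include <net/xdp_sock_drv.h>

static void xmit_zc_sketch(struct xsk_buff_pool *pool, u32 budget)
{
        struct xdp_desc *descs = pool->tx_descs; /* pool-owned batch buffer */
        u32 i, nb_pkts;

        nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget);
        for (i = 0; i < nb_pkts; i++)
                post_descriptor(&descs[i]); /* hypothetical HW post */
}
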
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 214a38de3f415..aaebdae8b5fff 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -1157,9 +1157,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter,
+
+ switch (xcast_mode) {
+ case IXGBEVF_XCAST_MODE_NONE:
+- disable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |
++ disable = IXGBE_VMOLR_ROMPE |
+ IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
+- enable = 0;
++ enable = IXGBE_VMOLR_BAM;
+ break;
+ case IXGBEVF_XCAST_MODE_MULTI:
+ disable = IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
+@@ -1181,9 +1181,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter,
+ return -EPERM;
+ }
+
+- disable = 0;
++ disable = IXGBE_VMOLR_VPE;
+ enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |
+- IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
++ IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE;
+ break;
+ default:
+ return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index a73a8017e0ee9..e3a317442c8c2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -579,7 +579,7 @@ static bool is_valid_offset(struct rvu *rvu, struct cpt_rd_wr_reg_msg *req)
+
+ blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
+ if (blkaddr < 0)
+- return blkaddr;
++ return false;
+
+ /* Registers that can be accessed from PF/VF */
+ if ((offset & 0xFF000) == CPT_AF_LFX_CTL(0) ||
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index f02d07ec5ccbf..a50090e62c8f9 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -1949,6 +1949,9 @@ static int mtk_hwlro_get_fdir_entry(struct net_device *dev,
+ struct ethtool_rx_flow_spec *fsp =
+ (struct ethtool_rx_flow_spec *)&cmd->fs;
+
++ if (fsp->location >= ARRAY_SIZE(mac->hwlro_ip))
++ return -EINVAL;
++
+ /* only tcp dst ipv4 is meaningful, others are meaningless */
+ fsp->flow_type = TCP_V4_FLOW;
+ fsp->h_u.tcp_ip4_spec.ip4dst = ntohl(mac->hwlro_ip[fsp->location]);
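
Note on the mtk_eth_soc hunk: fsp->location comes straight from the
ethtool ioctl, so it must be range-checked against the hwlro_ip[] bound
before being used as an index. The added guard follows the standard
pattern, sketched here with TABLE_SIZE as a hypothetical stand-in for
ARRAY_SIZE(mac->hwlro_ip):

#include <linux/errno.h>
#include <linux/types.h>

#define TABLE_SIZE 3 /* hypothetical bound */

static int lookup_sketch(const u32 *table, u32 location, u32 *out)
{
        if (location >= TABLE_SIZE) /* reject user-supplied index first */
                return -EINVAL;
        *out = table[location];
        return 0;
}
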
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index ed5038d98ef6e..6400a827173cf 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -2110,7 +2110,7 @@ static int mlx4_en_get_module_eeprom(struct net_device *dev,
+ en_err(priv,
+ "mlx4_get_module_info i(%d) offset(%d) bytes_to_read(%d) - FAILED (0x%x)\n",
+ i, offset, ee->len - i, ret);
+- return 0;
++ return ret;
+ }
+
+ i += ret;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+index ba6dad97e308d..c4ca4e157ff7d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+@@ -555,12 +555,9 @@ static u32 mlx5_gen_pci_id(const struct mlx5_core_dev *dev)
+ PCI_SLOT(dev->pdev->devfn));
+ }
+
+-static int next_phys_dev(struct device *dev, const void *data)
++static int _next_phys_dev(struct mlx5_core_dev *mdev,
++ const struct mlx5_core_dev *curr)
+ {
+- struct mlx5_adev *madev = container_of(dev, struct mlx5_adev, adev.dev);
+- struct mlx5_core_dev *mdev = madev->mdev;
+- const struct mlx5_core_dev *curr = data;
+-
+ if (!mlx5_core_is_pf(mdev))
+ return 0;
+
+@@ -574,22 +571,51 @@ static int next_phys_dev(struct device *dev, const void *data)
+ return 1;
+ }
+
+-/* Must be called with intf_mutex held */
+-struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev)
++static void *pci_get_other_drvdata(struct device *this, struct device *other)
+ {
+- struct auxiliary_device *adev;
+- struct mlx5_adev *madev;
++ if (this->driver != other->driver)
++ return NULL;
++
++ return pci_get_drvdata(to_pci_dev(other));
++}
++
++static int next_phys_dev_lag(struct device *dev, const void *data)
++{
++ struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
++
++ mdev = pci_get_other_drvdata(this->device, dev);
++ if (!mdev)
++ return 0;
++
++ if (!MLX5_CAP_GEN(mdev, vport_group_manager) ||
++ !MLX5_CAP_GEN(mdev, lag_master) ||
++ MLX5_CAP_GEN(mdev, num_lag_ports) != MLX5_MAX_PORTS)
++ return 0;
++
++ return _next_phys_dev(mdev, data);
++}
++
++static struct mlx5_core_dev *mlx5_get_next_dev(struct mlx5_core_dev *dev,
++ int (*match)(struct device *dev, const void *data))
++{
++ struct device *next;
+
+ if (!mlx5_core_is_pf(dev))
+ return NULL;
+
+- adev = auxiliary_find_device(NULL, dev, &next_phys_dev);
+- if (!adev)
++ next = bus_find_device(&pci_bus_type, NULL, dev, match);
++ if (!next)
+ return NULL;
+
+- madev = container_of(adev, struct mlx5_adev, adev);
+- put_device(&adev->dev);
+- return madev->mdev;
++ put_device(next);
++ return pci_get_drvdata(to_pci_dev(next));
++}
++
++/* Must be called with intf_mutex held */
++struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev)
++{
++ lockdep_assert_held(&mlx5_intf_mutex);
++ return mlx5_get_next_dev(dev, &next_phys_dev_lag);
+ }
+
+ void mlx5_dev_list_lock(void)
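
Note on the mlx5 dev.c hunk: peer lookup for LAG now walks the PCI bus for
devices bound to the same driver instead of searching the auxiliary bus,
which avoids depending on auxiliary devices that may not exist yet. A
condensed sketch of the mechanism, using only the driver-core APIs that
appear in the hunk:

#include <linux/device.h>
#include <linux/pci.h>

static int match_same_driver(struct device *dev, const void *data)
{
        const struct device *self = data;

        return dev != self && dev->driver == self->driver;
}

static void *find_peer_drvdata_sketch(struct device *self)
{
        struct device *next;

        next = bus_find_device(&pci_bus_type, NULL, self, match_same_driver);
        if (!next)
                return NULL;
        put_device(next); /* bus_find_device() holds a reference */
        return pci_get_drvdata(to_pci_dev(next));
}
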
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index eae9aa9c08118..978a2bb8e1220 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -675,6 +675,9 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
+ if (!tracer->owner)
+ return;
+
++ if (unlikely(!tracer->str_db.loaded))
++ goto arm;
++
+ block_count = tracer->buff.size / TRACER_BLOCK_SIZE_BYTE;
+ start_offset = tracer->buff.consumer_index * TRACER_BLOCK_SIZE_BYTE;
+
+@@ -732,6 +735,7 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work)
+ &tmp_trace_block[TRACES_PER_BLOCK - 1]);
+ }
+
++arm:
+ mlx5_fw_tracer_arm(dev);
+ }
+
+@@ -1136,8 +1140,7 @@ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void
+ queue_work(tracer->work_queue, &tracer->ownership_change_work);
+ break;
+ case MLX5_TRACER_SUBTYPE_TRACES_AVAILABLE:
+- if (likely(tracer->str_db.loaded))
+- queue_work(tracer->work_queue, &tracer->handle_traces_work);
++ queue_work(tracer->work_queue, &tracer->handle_traces_work);
+ break;
+ default:
+ mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n",
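
Note on the fw_tracer hunks: the str_db.loaded test moves from the event
handler into the worker, so a TRACES_AVAILABLE event arriving before the
strings database is ready still re-arms the tracer instead of being
dropped. The control flow, sketched with hypothetical types and helpers:

static void handle_traces_sketch(struct fw_tracer_sketch *tracer) /* hypothetical type */
{
        if (!tracer->str_db_loaded)
                goto arm; /* skip parsing, but do not lose the event */

        process_trace_blocks(tracer); /* hypothetical */
arm:
        rearm_tracer(tracer); /* hypothetical: request the next event */
}
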
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 5ccd6c634274b..4c8c8e4c1ef36 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -730,6 +730,7 @@ struct mlx5e_rq {
+ u8 wq_type;
+ u32 rqn;
+ struct mlx5_core_dev *mdev;
++ struct mlx5e_channel *channel;
+ u32 umr_mkey;
+ struct mlx5e_dma_info wqe_overflow;
+
+@@ -1044,6 +1045,9 @@ void mlx5e_close_cq(struct mlx5e_cq *cq);
+ int mlx5e_open_locked(struct net_device *netdev);
+ int mlx5e_close_locked(struct net_device *netdev);
+
++void mlx5e_trigger_napi_icosq(struct mlx5e_channel *c);
++void mlx5e_trigger_napi_sched(struct napi_struct *napi);
++
+ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ struct mlx5e_channels *chs);
+ void mlx5e_close_channels(struct mlx5e_channels *chs);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+index 678ffbb48a25f..e3e8c1c3ff242 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+@@ -12,6 +12,7 @@ struct mlx5e_post_act;
+ enum {
+ MLX5E_TC_FT_LEVEL = 0,
+ MLX5E_TC_TTC_FT_LEVEL,
++ MLX5E_TC_MISS_LEVEL,
+ };
+
+ struct mlx5e_tc_table {
+@@ -20,6 +21,7 @@ struct mlx5e_tc_table {
+ */
+ struct mutex t_lock;
+ struct mlx5_flow_table *t;
++ struct mlx5_flow_table *miss_t;
+ struct mlx5_fs_chains *chains;
+ struct mlx5e_post_act *post_act;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+index 82baafd3c00c5..fdb82f2b01308 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
+@@ -737,6 +737,7 @@ void mlx5e_ptp_activate_channel(struct mlx5e_ptp *c)
+ if (test_bit(MLX5E_PTP_STATE_RX, c->state)) {
+ mlx5e_ptp_rx_set_fs(c->priv);
+ mlx5e_activate_rq(&c->rq);
++ mlx5e_trigger_napi_sched(&c->napi);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+index 2684e9da9f412..fc366e66d0b0f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+@@ -123,6 +123,8 @@ static int mlx5e_rx_reporter_err_icosq_cqe_recover(void *ctx)
+ xskrq->stats->recover++;
+ }
+
++ mlx5e_trigger_napi_icosq(icosq->channel);
++
+ mutex_unlock(&icosq->channel->icosq_recovery_lock);
+
+ return 0;
+@@ -166,6 +168,10 @@ static int mlx5e_rx_reporter_err_rq_cqe_recover(void *ctx)
+ clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
+ mlx5e_activate_rq(rq);
+ rq->stats->recover++;
++ if (rq->channel)
++ mlx5e_trigger_napi_icosq(rq->channel);
++ else
++ mlx5e_trigger_napi_sched(rq->cq.napi);
+ return 0;
+ out:
+ clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 9028e9958c72d..cf9d48d934efc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -692,7 +692,7 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
+ struct mlx5_flow_attr *attr,
+ struct flow_rule *flow_rule,
+ struct mlx5e_mod_hdr_handle **mh,
+- u8 zone_restore_id, bool nat)
++ u8 zone_restore_id, bool nat_table, bool has_nat)
+ {
+ DECLARE_MOD_HDR_ACTS_ACTIONS(actions_arr, MLX5_CT_MIN_MOD_ACTS);
+ DECLARE_MOD_HDR_ACTS(mod_acts, actions_arr);
+@@ -708,11 +708,12 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
+ &attr->ct_attr.ct_labels_id);
+ if (err)
+ return -EOPNOTSUPP;
+- if (nat) {
+- err = mlx5_tc_ct_entry_create_nat(ct_priv, flow_rule,
+- &mod_acts);
+- if (err)
+- goto err_mapping;
++ if (nat_table) {
++ if (has_nat) {
++ err = mlx5_tc_ct_entry_create_nat(ct_priv, flow_rule, &mod_acts);
++ if (err)
++ goto err_mapping;
++ }
+
+ ct_state |= MLX5_CT_STATE_NAT_BIT;
+ }
+@@ -727,7 +728,7 @@ mlx5_tc_ct_entry_create_mod_hdr(struct mlx5_tc_ct_priv *ct_priv,
+ if (err)
+ goto err_mapping;
+
+- if (nat) {
++ if (nat_table && has_nat) {
+ attr->modify_hdr = mlx5_modify_header_alloc(ct_priv->dev, ct_priv->ns_type,
+ mod_acts.num_actions,
+ mod_acts.actions);
+@@ -795,7 +796,9 @@ mlx5_tc_ct_entry_add_rule(struct mlx5_tc_ct_priv *ct_priv,
+
+ err = mlx5_tc_ct_entry_create_mod_hdr(ct_priv, attr, flow_rule,
+ &zone_rule->mh,
+- zone_restore_id, nat);
++ zone_restore_id,
++ nat,
++ mlx5_tc_ct_entry_has_nat(entry));
+ if (err) {
+ ct_dbg("Failed to create ct entry mod hdr");
+ goto err_mod_hdr;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c
+index a55b066746cb7..6dd36e3cf425d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c
+@@ -172,6 +172,7 @@ static void mlx5e_activate_trap(struct mlx5e_trap *trap)
+ {
+ napi_enable(&trap->napi);
+ mlx5e_activate_rq(&trap->rq);
++ mlx5e_trigger_napi_sched(&trap->napi);
+ }
+
+ void mlx5e_deactivate_trap(struct mlx5e_priv *priv)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
+index 279cd8f4e79f7..2c520394aa1d6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c
+@@ -117,6 +117,7 @@ static int mlx5e_xsk_enable_locked(struct mlx5e_priv *priv,
+ goto err_remove_pool;
+
+ mlx5e_activate_xsk(c);
++ mlx5e_trigger_napi_icosq(c);
+
+ /* Don't wait for WQEs, because the newer xdpsock sample doesn't provide
+ * any Fill Ring entries at the setup stage.
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+index 25eac9e203423..5a2cd15e245d9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+@@ -64,6 +64,7 @@ static int mlx5e_init_xsk_rq(struct mlx5e_channel *c,
+ rq->clock = &mdev->clock;
+ rq->icosq = &c->icosq;
+ rq->ix = c->ix;
++ rq->channel = c;
+ rq->mdev = mdev;
+ rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
+ rq->xdpsq = &c->rq_xdpsq;
+@@ -179,10 +180,6 @@ void mlx5e_activate_xsk(struct mlx5e_channel *c)
+ mlx5e_reporter_icosq_resume_recovery(c);
+
+ /* TX queue is created active. */
+-
+- spin_lock_bh(&c->async_icosq_lock);
+- mlx5e_trigger_irq(&c->async_icosq);
+- spin_unlock_bh(&c->async_icosq_lock);
+ }
+
+ void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 531fffe1abe3a..352b5c8ae24e3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -477,6 +477,7 @@ static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *param
+ rq->clock = &mdev->clock;
+ rq->icosq = &c->icosq;
+ rq->ix = c->ix;
++ rq->channel = c;
+ rq->mdev = mdev;
+ rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
+ rq->xdpsq = &c->rq_xdpsq;
+@@ -1070,13 +1071,6 @@ err_free_rq:
+ void mlx5e_activate_rq(struct mlx5e_rq *rq)
+ {
+ set_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
+- if (rq->icosq) {
+- mlx5e_trigger_irq(rq->icosq);
+- } else {
+- local_bh_disable();
+- napi_schedule(rq->cq.napi);
+- local_bh_enable();
+- }
+ }
+
+ void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
+@@ -2218,6 +2212,20 @@ static int mlx5e_channel_stats_alloc(struct mlx5e_priv *priv, int ix, int cpu)
+ return 0;
+ }
+
++void mlx5e_trigger_napi_icosq(struct mlx5e_channel *c)
++{
++ spin_lock_bh(&c->async_icosq_lock);
++ mlx5e_trigger_irq(&c->async_icosq);
++ spin_unlock_bh(&c->async_icosq_lock);
++}
++
++void mlx5e_trigger_napi_sched(struct napi_struct *napi)
++{
++ local_bh_disable();
++ napi_schedule(napi);
++ local_bh_enable();
++}
++
+ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ struct mlx5e_params *params,
+ struct mlx5e_channel_param *cparam,
+@@ -2299,6 +2307,8 @@ static void mlx5e_activate_channel(struct mlx5e_channel *c)
+
+ if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+ mlx5e_activate_xsk(c);
++
++ mlx5e_trigger_napi_icosq(c);
+ }
+
+ static void mlx5e_deactivate_channel(struct mlx5e_channel *c)
+@@ -4521,6 +4531,11 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
+
+ unlock:
+ mutex_unlock(&priv->state_lock);
++
++ /* Need to fix some features. */
++ if (!err)
++ netdev_update_features(netdev);
++
+ return err;
+ }
+
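
Note on the en_main.c hunks: activating an RQ no longer kicks NAPI inline;
the kick is centralized in two helpers and issued once per channel after
everything is activated. The local_bh_disable()/enable() pair around
napi_schedule() matters because the schedule only raises NET_RX softirq
work, and the bracket guarantees it runs promptly from process context.
A minimal sketch of the helper introduced above:

#include <linux/netdevice.h>

static void trigger_napi_sketch(struct napi_struct *napi)
{
        local_bh_disable();
        napi_schedule(napi);
        local_bh_enable();
}
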
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index e0f45cef97c34..deff6698f395e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -4284,6 +4284,33 @@ static int mlx5e_tc_nic_get_ft_size(struct mlx5_core_dev *dev)
+ return tc_tbl_size;
+ }
+
++static int mlx5e_tc_nic_create_miss_table(struct mlx5e_priv *priv)
++{
++ struct mlx5_flow_table **ft = &priv->fs.tc.miss_t;
++ struct mlx5_flow_table_attr ft_attr = {};
++ struct mlx5_flow_namespace *ns;
++ int err = 0;
++
++ ft_attr.max_fte = 1;
++ ft_attr.autogroup.max_num_groups = 1;
++ ft_attr.level = MLX5E_TC_MISS_LEVEL;
++ ft_attr.prio = 0;
++ ns = mlx5_get_flow_namespace(priv->mdev, MLX5_FLOW_NAMESPACE_KERNEL);
++
++ *ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
++ if (IS_ERR(*ft)) {
++ err = PTR_ERR(*ft);
++ netdev_err(priv->netdev, "failed to create tc nic miss table err=%d\n", err);
++ }
++
++ return err;
++}
++
++static void mlx5e_tc_nic_destroy_miss_table(struct mlx5e_priv *priv)
++{
++ mlx5_destroy_flow_table(priv->fs.tc.miss_t);
++}
++
+ int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
+ {
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
+@@ -4316,19 +4343,23 @@ int mlx5e_tc_nic_init(struct mlx5e_priv *priv)
+ }
+ tc->mapping = chains_mapping;
+
++ err = mlx5e_tc_nic_create_miss_table(priv);
++ if (err)
++ goto err_chains;
++
+ if (MLX5_CAP_FLOWTABLE_NIC_RX(priv->mdev, ignore_flow_level))
+ attr.flags = MLX5_CHAINS_AND_PRIOS_SUPPORTED |
+ MLX5_CHAINS_IGNORE_FLOW_LEVEL_SUPPORTED;
+ attr.ns = MLX5_FLOW_NAMESPACE_KERNEL;
+ attr.max_ft_sz = mlx5e_tc_nic_get_ft_size(dev);
+ attr.max_grp_num = MLX5E_TC_TABLE_NUM_GROUPS;
+- attr.default_ft = mlx5e_vlan_get_flowtable(priv->fs.vlan);
++ attr.default_ft = priv->fs.tc.miss_t;
+ attr.mapping = chains_mapping;
+
+ tc->chains = mlx5_chains_create(dev, &attr);
+ if (IS_ERR(tc->chains)) {
+ err = PTR_ERR(tc->chains);
+- goto err_chains;
++ goto err_miss;
+ }
+
+ tc->post_act = mlx5e_tc_post_act_init(priv, tc->chains, MLX5_FLOW_NAMESPACE_KERNEL);
+@@ -4351,6 +4382,8 @@ err_reg:
+ mlx5_tc_ct_clean(tc->ct);
+ mlx5e_tc_post_act_destroy(tc->post_act);
+ mlx5_chains_destroy(tc->chains);
++err_miss:
++ mlx5e_tc_nic_destroy_miss_table(priv);
+ err_chains:
+ mapping_destroy(chains_mapping);
+ err_mapping:
+@@ -4391,6 +4424,7 @@ void mlx5e_tc_nic_cleanup(struct mlx5e_priv *priv)
+ mlx5e_tc_post_act_destroy(tc->post_act);
+ mapping_destroy(tc->mapping);
+ mlx5_chains_destroy(tc->chains);
++ mlx5e_tc_nic_destroy_miss_table(priv);
+ }
+
+ int mlx5e_tc_esw_init(struct rhashtable *tc_ht)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index cebfa8565c9d9..c70cefbcd9ea4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -49,6 +49,7 @@
+ #include "en_tc.h"
+ #include "en/mapping.h"
+ #include "devlink.h"
++#include "lag/lag.h"
+
+ #define mlx5_esw_for_each_rep(esw, i, rep) \
+ xa_for_each(&((esw)->offloads.vport_reps), i, rep)
+@@ -2749,9 +2750,6 @@ static int mlx5_esw_offloads_devcom_event(int event,
+
+ switch (event) {
+ case ESW_OFFLOADS_DEVCOM_PAIR:
+- if (mlx5_get_next_phys_dev(esw->dev) != peer_esw->dev)
+- break;
+-
+ if (mlx5_eswitch_vport_match_metadata_enabled(esw) !=
+ mlx5_eswitch_vport_match_metadata_enabled(peer_esw))
+ break;
+@@ -2803,6 +2801,9 @@ static void esw_offloads_devcom_init(struct mlx5_eswitch *esw)
+ if (!MLX5_CAP_ESW(esw->dev, merged_eswitch))
+ return;
+
++ if (!mlx5_is_lag_supported(esw->dev))
++ return;
++
+ mlx5_devcom_register_component(devcom,
+ MLX5_DEVCOM_ESW_OFFLOADS,
+ mlx5_esw_offloads_devcom_event,
+@@ -2820,6 +2821,9 @@ static void esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
+ if (!MLX5_CAP_ESW(esw->dev, merged_eswitch))
+ return;
+
++ if (!mlx5_is_lag_supported(esw->dev))
++ return;
++
+ mlx5_devcom_send_event(devcom, MLX5_DEVCOM_ESW_OFFLOADS,
+ ESW_OFFLOADS_DEVCOM_UNPAIR, esw);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 298c614c631b0..add55195335c7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -116,7 +116,7 @@
+ #define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
+
+ #define KERNEL_NIC_TC_NUM_PRIOS 1
+-#define KERNEL_NIC_TC_NUM_LEVELS 2
++#define KERNEL_NIC_TC_NUM_LEVELS 3
+
+ #define ANCHOR_NUM_LEVELS 1
+ #define ANCHOR_NUM_PRIOS 1
+@@ -1560,9 +1560,22 @@ static struct mlx5_flow_rule *find_flow_rule(struct fs_fte *fte,
+ return NULL;
+ }
+
+-static bool check_conflicting_actions(u32 action1, u32 action2)
++static bool check_conflicting_actions_vlan(const struct mlx5_fs_vlan *vlan0,
++ const struct mlx5_fs_vlan *vlan1)
+ {
+- u32 xored_actions = action1 ^ action2;
++ return vlan0->ethtype != vlan1->ethtype ||
++ vlan0->vid != vlan1->vid ||
++ vlan0->prio != vlan1->prio;
++}
++
++static bool check_conflicting_actions(const struct mlx5_flow_act *act1,
++ const struct mlx5_flow_act *act2)
++{
++ u32 action1 = act1->action;
++ u32 action2 = act2->action;
++ u32 xored_actions;
++
++ xored_actions = action1 ^ action2;
+
+ /* if one rule only wants to count, it's ok */
+ if (action1 == MLX5_FLOW_CONTEXT_ACTION_COUNT ||
+@@ -1579,6 +1592,22 @@ static bool check_conflicting_actions(u32 action1, u32 action2)
+ MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2))
+ return true;
+
++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT &&
++ act1->pkt_reformat != act2->pkt_reformat)
++ return true;
++
++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&
++ act1->modify_hdr != act2->modify_hdr)
++ return true;
++
++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH &&
++ check_conflicting_actions_vlan(&act1->vlan[0], &act2->vlan[0]))
++ return true;
++
++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2 &&
++ check_conflicting_actions_vlan(&act1->vlan[1], &act2->vlan[1]))
++ return true;
++
+ return false;
+ }
+
+@@ -1586,7 +1615,7 @@ static int check_conflicting_ftes(struct fs_fte *fte,
+ const struct mlx5_flow_context *flow_context,
+ const struct mlx5_flow_act *flow_act)
+ {
+- if (check_conflicting_actions(flow_act->action, fte->action.action)) {
++ if (check_conflicting_actions(flow_act, &fte->action)) {
+ mlx5_core_warn(get_dev(&fte->node),
+ "Found two FTEs with conflicting actions\n");
+ return -EEXIST;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+index 4ddf6b330a442..d4629f9bdab15 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+@@ -804,12 +804,7 @@ static int __mlx5_lag_dev_add_mdev(struct mlx5_core_dev *dev)
+ struct mlx5_lag *ldev = NULL;
+ struct mlx5_core_dev *tmp_dev;
+
+- if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
+- !MLX5_CAP_GEN(dev, lag_master) ||
+- MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_MAX_PORTS)
+- return 0;
+-
+- tmp_dev = mlx5_get_next_phys_dev(dev);
++ tmp_dev = mlx5_get_next_phys_dev_lag(dev);
+ if (tmp_dev)
+ ldev = tmp_dev->priv.lag;
+
+@@ -854,6 +849,11 @@ void mlx5_lag_add_mdev(struct mlx5_core_dev *dev)
+ {
+ int err;
+
++ if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
++ !MLX5_CAP_GEN(dev, lag_master) ||
++ MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_MAX_PORTS)
++ return;
++
+ recheck:
+ mlx5_dev_list_lock();
+ err = __mlx5_lag_dev_add_mdev(dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+index e5d231c31b544..b5edf0c8b0ed7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+@@ -56,6 +56,16 @@ struct mlx5_lag {
+ struct mlx5_lag_port_sel port_sel;
+ };
+
++static inline bool mlx5_is_lag_supported(struct mlx5_core_dev *dev)
++{
++ if (!MLX5_CAP_GEN(dev, vport_group_manager) ||
++ !MLX5_CAP_GEN(dev, lag_master) ||
++ MLX5_CAP_GEN(dev, num_lag_ports) < 2 ||
++ MLX5_CAP_GEN(dev, num_lag_ports) > MLX5_MAX_PORTS)
++ return false;
++ return true;
++}
++
+ static inline struct mlx5_lag *
+ mlx5_lag_dev(struct mlx5_core_dev *dev)
+ {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index 2d2150fc7a0f3..83b52bc5cefd4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -209,7 +209,7 @@ int mlx5_attach_device(struct mlx5_core_dev *dev);
+ void mlx5_detach_device(struct mlx5_core_dev *dev);
+ int mlx5_register_device(struct mlx5_core_dev *dev);
+ void mlx5_unregister_device(struct mlx5_core_dev *dev);
+-struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev);
++struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev);
+ void mlx5_dev_list_lock(void);
+ void mlx5_dev_list_unlock(void);
+ int mlx5_dev_list_trylock(void);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+index 05393fe11132b..caeaa3c293535 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+@@ -44,11 +44,10 @@ static int set_miss_action(struct mlx5_flow_root_namespace *ns,
+ err = mlx5dr_table_set_miss_action(ft->fs_dr_table.dr_table, action);
+ if (err && action) {
+ err = mlx5dr_action_destroy(action);
+- if (err) {
+- action = NULL;
+- mlx5_core_err(ns->dev, "Failed to destroy action (%d)\n",
+- err);
+- }
++ if (err)
++ mlx5_core_err(ns->dev,
++ "Failed to destroy action (%d)\n", err);
++ action = NULL;
+ }
+ ft->fs_dr_table.miss_action = action;
+ if (old_miss_action) {
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+index fee148bbf13ea..b0cb3b65cd5b7 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+@@ -948,8 +948,13 @@ static int lan966x_probe(struct platform_device *pdev)
+ lan966x->ports[p]->fwnode = fwnode_handle_get(portnp);
+
+ serdes = devm_of_phy_get(lan966x->dev, to_of_node(portnp), NULL);
+- if (!IS_ERR(serdes))
+- lan966x->ports[p]->serdes = serdes;
++ if (PTR_ERR(serdes) == -ENODEV)
++ serdes = NULL;
++ if (IS_ERR(serdes)) {
++ err = PTR_ERR(serdes);
++ goto cleanup_ports;
++ }
++ lan966x->ports[p]->serdes = serdes;
+
+ lan966x_port_init(lan966x->ports[p]);
+ }
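
Note on the lan966x hunk: the old code silently ignored every
devm_of_phy_get() failure, including -EPROBE_DEFER. The fix treats -ENODEV
as "no serdes described in DT", which is legitimately optional, and
propagates anything else. A sketch of the pattern:

#include <linux/phy/phy.h>

static int get_optional_serdes_sketch(struct device *dev,
                                      struct device_node *np,
                                      struct phy **out)
{
        struct phy *serdes = devm_of_phy_get(dev, np, NULL);

        if (PTR_ERR(serdes) == -ENODEV)
                serdes = NULL;          /* absent in DT: optional */
        if (IS_ERR(serdes))
                return PTR_ERR(serdes); /* real error, e.g. -EPROBE_DEFER */
        *out = serdes;
        return 0;
}
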
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
+index bfd7d1c350767..7e9fcc16286e2 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/conntrack.c
+@@ -442,6 +442,11 @@ nfp_fl_calc_key_layers_sz(struct nfp_fl_key_ls in_key_ls, uint16_t *map)
+ key_size += sizeof(struct nfp_flower_ipv6);
+ }
+
++ if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_QINQ) {
++ map[FLOW_PAY_QINQ] = key_size;
++ key_size += sizeof(struct nfp_flower_vlan);
++ }
++
+ if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_GRE) {
+ map[FLOW_PAY_GRE] = key_size;
+ if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_TUN_IPV6)
+@@ -450,11 +455,6 @@ nfp_fl_calc_key_layers_sz(struct nfp_fl_key_ls in_key_ls, uint16_t *map)
+ key_size += sizeof(struct nfp_flower_ipv4_gre_tun);
+ }
+
+- if (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_QINQ) {
+- map[FLOW_PAY_QINQ] = key_size;
+- key_size += sizeof(struct nfp_flower_vlan);
+- }
+-
+ if ((in_key_ls.key_layer & NFP_FLOWER_LAYER_VXLAN) ||
+ (in_key_ls.key_layer_two & NFP_FLOWER_LAYER2_GENEVE)) {
+ map[FLOW_PAY_UDP_TUN] = key_size;
+@@ -693,6 +693,17 @@ static int nfp_fl_ct_add_offload(struct nfp_fl_nft_tc_merge *m_entry)
+ }
+ }
+
++ if (NFP_FLOWER_LAYER2_QINQ & key_layer.key_layer_two) {
++ offset = key_map[FLOW_PAY_QINQ];
++ key = kdata + offset;
++ msk = mdata + offset;
++ for (i = 0; i < _CT_TYPE_MAX; i++) {
++ nfp_flower_compile_vlan((struct nfp_flower_vlan *)key,
++ (struct nfp_flower_vlan *)msk,
++ rules[i]);
++ }
++ }
++
+ if (key_layer.key_layer_two & NFP_FLOWER_LAYER2_GRE) {
+ offset = key_map[FLOW_PAY_GRE];
+ key = kdata + offset;
+@@ -733,17 +744,6 @@ static int nfp_fl_ct_add_offload(struct nfp_fl_nft_tc_merge *m_entry)
+ }
+ }
+
+- if (NFP_FLOWER_LAYER2_QINQ & key_layer.key_layer_two) {
+- offset = key_map[FLOW_PAY_QINQ];
+- key = kdata + offset;
+- msk = mdata + offset;
+- for (i = 0; i < _CT_TYPE_MAX; i++) {
+- nfp_flower_compile_vlan((struct nfp_flower_vlan *)key,
+- (struct nfp_flower_vlan *)msk,
+- rules[i]);
+- }
+- }
+-
+ if (key_layer.key_layer & NFP_FLOWER_LAYER_VXLAN ||
+ key_layer.key_layer_two & NFP_FLOWER_LAYER2_GENEVE) {
+ offset = key_map[FLOW_PAY_UDP_TUN];
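
Note on the nfp flower hunks: the match key is laid out linearly, with
each layer starting where the previous one ended, so the QinQ block must
be placed and later compiled before the tunnel layers; otherwise every
subsequent offset shifts and the firmware misparses the key. The offset
bookkeeping, sketched with hypothetical names:

/* Record where a layer starts and return where the next one begins. */
static u16 place_layer_sketch(u16 *map, int layer_id, u16 key_size,
                              u16 layer_size)
{
        map[layer_id] = key_size;
        return key_size + layer_size;
}
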
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
+index 9d86eea4dc169..fb8bd2135c63a 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
+@@ -602,6 +602,14 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
+ msk += sizeof(struct nfp_flower_ipv6);
+ }
+
++ if (NFP_FLOWER_LAYER2_QINQ & key_ls->key_layer_two) {
++ nfp_flower_compile_vlan((struct nfp_flower_vlan *)ext,
++ (struct nfp_flower_vlan *)msk,
++ rule);
++ ext += sizeof(struct nfp_flower_vlan);
++ msk += sizeof(struct nfp_flower_vlan);
++ }
++
+ if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_GRE) {
+ if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_TUN_IPV6) {
+ struct nfp_flower_ipv6_gre_tun *gre_match;
+@@ -637,14 +645,6 @@ int nfp_flower_compile_flow_match(struct nfp_app *app,
+ }
+ }
+
+- if (NFP_FLOWER_LAYER2_QINQ & key_ls->key_layer_two) {
+- nfp_flower_compile_vlan((struct nfp_flower_vlan *)ext,
+- (struct nfp_flower_vlan *)msk,
+- rule);
+- ext += sizeof(struct nfp_flower_vlan);
+- msk += sizeof(struct nfp_flower_vlan);
+- }
+-
+ if (key_ls->key_layer & NFP_FLOWER_LAYER_VXLAN ||
+ key_ls->key_layer_two & NFP_FLOWER_LAYER2_GENEVE) {
+ if (key_ls->key_layer_two & NFP_FLOWER_LAYER2_TUN_IPV6) {
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+index e0c27471bcdb5..5e2631aafdb6d 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+@@ -287,8 +287,6 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
+
+ /* Init to unknowns */
+ ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
+- ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
+- ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+ cmd->base.port = PORT_OTHER;
+ cmd->base.speed = SPEED_UNKNOWN;
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+@@ -296,6 +294,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev,
+ port = nfp_port_from_netdev(netdev);
+ eth_port = nfp_port_get_eth_port(port);
+ if (eth_port) {
++ ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);
++ ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);
+ cmd->base.autoneg = eth_port->aneg != NFP_ANEG_DISABLED ?
+ AUTONEG_ENABLE : AUTONEG_DISABLE;
+ nfp_net_set_fec_link_mode(eth_port, cmd);
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index eec0db76d888c..8ab9358a1c3d7 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -309,6 +309,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ efx->n_channels = 1;
+ efx->n_rx_channels = 1;
+ efx->n_tx_channels = 1;
++ efx->tx_channel_offset = 0;
+ efx->n_xdp_channels = 0;
+ efx->xdp_channel_offset = efx->n_channels;
+ rc = pci_enable_msi(efx->pci_dev);
+@@ -329,6 +330,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
+ efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
+ efx->n_rx_channels = 1;
+ efx->n_tx_channels = 1;
++ efx->tx_channel_offset = 1;
+ efx->n_xdp_channels = 0;
+ efx->xdp_channel_offset = efx->n_channels;
+ efx->legacy_irq = efx->pci_dev->irq;
+@@ -957,10 +959,6 @@ int efx_set_channels(struct efx_nic *efx)
+ struct efx_channel *channel;
+ int rc;
+
+- efx->tx_channel_offset =
+- efx_separate_tx_channels ?
+- efx->n_channels - efx->n_tx_channels : 0;
+-
+ if (efx->xdp_tx_queue_count) {
+ EFX_WARN_ON_PARANOID(efx->xdp_tx_queues);
+
+diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
+index cc15ee8812d9e..8a9eedec177ac 100644
+--- a/drivers/net/ethernet/sfc/net_driver.h
++++ b/drivers/net/ethernet/sfc/net_driver.h
+@@ -1533,7 +1533,7 @@ static inline bool efx_channel_is_xdp_tx(struct efx_channel *channel)
+
+ static inline bool efx_channel_has_tx_queues(struct efx_channel *channel)
+ {
+- return true;
++ return channel && channel->channel >= channel->efx->tx_channel_offset;
+ }
+
+ static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 6f87e296a410f..502fbbc082fb8 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -1073,13 +1073,11 @@ static int intel_eth_pci_probe(struct pci_dev *pdev,
+
+ ret = stmmac_dvr_probe(&pdev->dev, plat, &res);
+ if (ret) {
+- goto err_dvr_probe;
++ goto err_alloc_irq;
+ }
+
+ return 0;
+
+-err_dvr_probe:
+- pci_free_irq_vectors(pdev);
+ err_alloc_irq:
+ clk_disable_unprepare(plat->stmmac_clk);
+ clk_unregister_fixed_rate(plat->stmmac_clk);
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 8251d7eb001b3..eda91336c9f62 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -1802,6 +1802,7 @@ static int am65_cpsw_init_cpts(struct am65_cpsw_common *common)
+ if (IS_ERR(cpts)) {
+ int ret = PTR_ERR(cpts);
+
++ of_node_put(node);
+ if (ret == -EOPNOTSUPP) {
+ dev_info(dev, "cpts disabled\n");
+ return 0;
+@@ -2669,9 +2670,9 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ if (!node)
+ return -ENOENT;
+ common->port_num = of_get_child_count(node);
++ of_node_put(node);
+ if (common->port_num < 1 || common->port_num > AM65_CPSW_MAX_PORTS)
+ return -ENOENT;
+- of_node_put(node);
+
+ common->rx_flow_id_base = -1;
+ init_completion(&common->tdown_complete);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 3d08743317634..b901acca098b1 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -99,6 +99,7 @@ struct pcpu_secy_stats {
+ * struct macsec_dev - private data
+ * @secy: SecY config
+ * @real_dev: pointer to underlying netdevice
++ * @dev_tracker: refcount tracker for @real_dev reference
+ * @stats: MACsec device stats
+ * @secys: linked list of SecY's on the underlying device
+ * @gro_cells: pointer to the Generic Receive Offload cell
+@@ -107,6 +108,7 @@ struct pcpu_secy_stats {
+ struct macsec_dev {
+ struct macsec_secy secy;
+ struct net_device *real_dev;
++ netdevice_tracker dev_tracker;
+ struct pcpu_secy_stats __percpu *stats;
+ struct list_head secys;
+ struct gro_cells gro_cells;
+@@ -3459,6 +3461,9 @@ static int macsec_dev_init(struct net_device *dev)
+ if (is_zero_ether_addr(dev->broadcast))
+ memcpy(dev->broadcast, real_dev->broadcast, dev->addr_len);
+
++ /* Get macsec's reference to real_dev */
++ dev_hold_track(real_dev, &macsec->dev_tracker, GFP_KERNEL);
++
+ return 0;
+ }
+
+@@ -3704,6 +3709,8 @@ static void macsec_free_netdev(struct net_device *dev)
+ free_percpu(macsec->stats);
+ free_percpu(macsec->secy.tx_sc.stats);
+
++ /* Get rid of the macsec's reference to real_dev */
++ dev_put_track(macsec->real_dev, &macsec->dev_tracker);
+ }
+
+ static void macsec_setup(struct net_device *dev)
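
Note on the macsec hunks: the device now holds a tracked reference on its
lower netdevice for the lifetime of the macsec interface, taken in the
ndo_init path and dropped in the netdev destructor. The tracker cookie is
what lets CONFIG_NET_DEV_REFCNT_TRACKER attribute each hold to its
matching put when chasing refcount leaks. A minimal sketch of the pairing:

#include <linux/netdevice.h>

struct lower_ref_sketch {
        struct net_device *real_dev;
        netdevice_tracker dev_tracker;
};

static void take_lower_ref(struct lower_ref_sketch *p, struct net_device *lower)
{
        p->real_dev = lower;
        dev_hold_track(lower, &p->dev_tracker, GFP_KERNEL);
}

static void drop_lower_ref(struct lower_ref_sketch *p)
{
        dev_put_track(p->real_dev, &p->dev_tracker);
}
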
+diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
+index 8561f2d4443bf..13dafe7a29bde 100644
+--- a/drivers/net/phy/dp83867.c
++++ b/drivers/net/phy/dp83867.c
+@@ -137,6 +137,7 @@
+ #define DP83867_DOWNSHIFT_2_COUNT 2
+ #define DP83867_DOWNSHIFT_4_COUNT 4
+ #define DP83867_DOWNSHIFT_8_COUNT 8
++#define DP83867_SGMII_AUTONEG_EN BIT(7)
+
+ /* CFG3 bits */
+ #define DP83867_CFG3_INT_OE BIT(7)
+@@ -855,6 +856,32 @@ static int dp83867_phy_reset(struct phy_device *phydev)
+ DP83867_PHYCR_FORCE_LINK_GOOD, 0);
+ }
+
++static void dp83867_link_change_notify(struct phy_device *phydev)
++{
++ /* There is a limitation in DP83867 PHY device where SGMII AN is
++ * only triggered once after the device is booted up. Even after the
++ * PHY TPI is down and up again, SGMII AN is not triggered and
++ * hence no new in-band message from PHY to MAC side SGMII.
++ * This could cause an issue during power up, when PHY is up prior
++ * to MAC. At this condition, once MAC side SGMII is up, MAC side
++ * SGMII wouldn't receive new in-band message from TI PHY with
++ * correct link status, speed and duplex info.
++ * Thus, implemented a SW solution here to retrigger SGMII Auto-Neg
++ * whenever there is a link change.
++ */
++ if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
++ int val = 0;
++
++ val = phy_clear_bits(phydev, DP83867_CFG2,
++ DP83867_SGMII_AUTONEG_EN);
++ if (val < 0)
++ return;
++
++ phy_set_bits(phydev, DP83867_CFG2,
++ DP83867_SGMII_AUTONEG_EN);
++ }
++}
++
+ static struct phy_driver dp83867_driver[] = {
+ {
+ .phy_id = DP83867_PHY_ID,
+@@ -879,6 +906,8 @@ static struct phy_driver dp83867_driver[] = {
+
+ .suspend = genphy_suspend,
+ .resume = genphy_resume,
++
++ .link_change_notify = dp83867_link_change_notify,
+ },
+ };
+ module_phy_driver(dp83867_driver);
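
Note on the dp83867 hunk: because this PHY only starts SGMII autoneg once
per boot, the driver now bounces the autoneg-enable bit on every link
change to force a fresh in-band status exchange with the MAC. Using the
register and bit names from the hunk, the retrigger reduces to:

/* Sketch: clear then re-set the enable bit to restart SGMII autoneg. */
static void retrigger_sgmii_aneg_sketch(struct phy_device *phydev)
{
        if (phy_clear_bits(phydev, DP83867_CFG2, DP83867_SGMII_AUTONEG_EN) < 0)
                return;
        phy_set_bits(phydev, DP83867_CFG2, DP83867_SGMII_AUTONEG_EN);
}
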
+diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
+index 58d602985877b..8a2dbe849866d 100644
+--- a/drivers/net/phy/mdio_bus.c
++++ b/drivers/net/phy/mdio_bus.c
+@@ -1046,7 +1046,6 @@ int __init mdio_bus_init(void)
+
+ return ret;
+ }
+-EXPORT_SYMBOL_GPL(mdio_bus_init);
+
+ #if IS_ENABLED(CONFIG_PHYLIB)
+ void mdio_bus_exit(void)
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index 7e213f8ddc98b..df8d27cf2956b 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -300,6 +300,8 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ int r = 0;
+ struct device *dev = &hdev->ndev->dev;
+ struct nfc_evt_transaction *transaction;
++ u32 aid_len;
++ u8 params_len;
+
+ pr_debug("connectivity gate event: %x\n", event);
+
+@@ -308,43 +310,48 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host,
+ r = nfc_se_connectivity(hdev->ndev, host);
+ break;
+ case ST21NFCA_EVT_TRANSACTION:
+- /*
+- * According to specification etsi 102 622
++ /* According to specification etsi 102 622
+ * 11.2.2.4 EVT_TRANSACTION Table 52
+ * Description Tag Length
+ * AID 81 5 to 16
+ * PARAMETERS 82 0 to 255
++ *
++ * The key differences are aid storage length is variably sized
++ * in the packet, but fixed in nfc_evt_transaction, and that the aid_len
++ * is u8 in the packet, but u32 in the structure, and the tags in
++ * the packet are not included in nfc_evt_transaction.
++ *
++ * size in bytes: 1 1 5-16 1 1 0-255
++ * offset: 0 1 2 aid_len + 2 aid_len + 3 aid_len + 4
++ * member name: aid_tag(M) aid_len aid params_tag(M) params_len params
++ * example: 0x81 5-16 X 0x82 0-255 X
+ */
+- if (skb->len < NFC_MIN_AID_LENGTH + 2 &&
+- skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG)
++ if (skb->len < 2 || skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG)
+ return -EPROTO;
+
+- transaction = devm_kzalloc(dev, skb->len - 2, GFP_KERNEL);
+- if (!transaction)
+- return -ENOMEM;
+-
+- transaction->aid_len = skb->data[1];
++ aid_len = skb->data[1];
+
+- /* Checking if the length of the AID is valid */
+- if (transaction->aid_len > sizeof(transaction->aid))
+- return -EINVAL;
++ if (skb->len < aid_len + 4 || aid_len > sizeof(transaction->aid))
++ return -EPROTO;
+
+- memcpy(transaction->aid, &skb->data[2],
+- transaction->aid_len);
++ params_len = skb->data[aid_len + 3];
+
+- /* Check next byte is PARAMETERS tag (82) */
+- if (skb->data[transaction->aid_len + 2] !=
+- NFC_EVT_TRANSACTION_PARAMS_TAG)
++ /* Verify PARAMETERS tag is (82), and final check that there is enough
++ * space in the packet to read everything.
++ */
++ if ((skb->data[aid_len + 2] != NFC_EVT_TRANSACTION_PARAMS_TAG) ||
++ (skb->len < aid_len + 4 + params_len))
+ return -EPROTO;
+
+- transaction->params_len = skb->data[transaction->aid_len + 3];
++ transaction = devm_kzalloc(dev, sizeof(*transaction) + params_len, GFP_KERNEL);
++ if (!transaction)
++ return -ENOMEM;
+
+- /* Total size is allocated (skb->len - 2) minus fixed array members */
+- if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction)))
+- return -EINVAL;
++ transaction->aid_len = aid_len;
++ transaction->params_len = params_len;
+
+- memcpy(transaction->params, skb->data +
+- transaction->aid_len + 4, transaction->params_len);
++ memcpy(transaction->aid, &skb->data[2], aid_len);
++ memcpy(transaction->params, &skb->data[aid_len + 4], params_len);
+
+ r = nfc_se_transaction(hdev->ndev, host, transaction);
+ break;
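
Note on the st21nfca hunk: the old handler trusted attacker-controlled
length bytes before validating them, allowing out-of-bounds reads and a
too-small allocation. The fixed ordering checks every length against the
remaining packet before any indexing or allocation, then sizes the
allocation from the validated values. Condensed to its essentials, with
tag values per ETSI TS 102 622:

#include <linux/errno.h>
#include <linux/types.h>

static int parse_evt_transaction_sketch(const u8 *data, size_t len)
{
        u32 aid_len;
        u8 params_len;

        if (len < 2 || data[0] != 0x81)         /* AID tag */
                return -EPROTO;
        aid_len = data[1];
        if (len < aid_len + 4 || aid_len > 16)  /* 16 = max AID bytes */
                return -EPROTO;
        if (data[aid_len + 2] != 0x82)          /* PARAMETERS tag */
                return -EPROTO;
        params_len = data[aid_len + 3];
        if (len < (size_t)aid_len + 4 + params_len)
                return -EPROTO;
        /* only now allocate sizeof(*transaction) + params_len and copy */
        return 0;
}
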
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 1bbce48e301ed..a84216f4cd583 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1230,12 +1230,6 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ goto err_disable_clocks;
+ }
+
+- ret = clk_prepare_enable(res->pipe_clk);
+- if (ret) {
+- dev_err(dev, "cannot prepare/enable pipe clock\n");
+- goto err_disable_clocks;
+- }
+-
+ /* configure PCIe to RC mode */
+ writel(DEVICE_TYPE_RC, pcie->parf + PCIE20_PARF_DEVICE_TYPE);
+
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index 375c0c40bbf8d..e61058e138182 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -24,7 +24,6 @@
+ #include <linux/pci.h>
+ #include <linux/pci-ecam.h>
+ #include <linux/printk.h>
+-#include <linux/regulator/consumer.h>
+ #include <linux/reset.h>
+ #include <linux/sizes.h>
+ #include <linux/slab.h>
+@@ -196,8 +195,6 @@ static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie,
+ static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val);
+ static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val);
+ static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val);
+-static int brcm_pcie_linkup(struct brcm_pcie *pcie);
+-static int brcm_pcie_add_bus(struct pci_bus *bus);
+
+ enum {
+ RGR1_SW_INIT_1,
+@@ -286,14 +283,6 @@ static const struct pcie_cfg_data bcm2711_cfg = {
+ .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic,
+ };
+
+-struct subdev_regulators {
+- unsigned int num_supplies;
+- struct regulator_bulk_data supplies[];
+-};
+-
+-static int pci_subdev_regulators_add_bus(struct pci_bus *bus);
+-static void pci_subdev_regulators_remove_bus(struct pci_bus *bus);
+-
+ struct brcm_msi {
+ struct device *dev;
+ void __iomem *base;
+@@ -331,9 +320,6 @@ struct brcm_pcie {
+ u32 hw_rev;
+ void (*perst_set)(struct brcm_pcie *pcie, u32 val);
+ void (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val);
+- bool refusal_mode;
+- struct subdev_regulators *sr;
+- bool ep_wakeup_capable;
+ };
+
+ static inline bool is_bmips(const struct brcm_pcie *pcie)
+@@ -450,99 +436,6 @@ static int brcm_pcie_set_ssc(struct brcm_pcie *pcie)
+ return ssc && pll ? 0 : -EIO;
+ }
+
+-static void *alloc_subdev_regulators(struct device *dev)
+-{
+- static const char * const supplies[] = {
+- "vpcie3v3",
+- "vpcie3v3aux",
+- "vpcie12v",
+- };
+- const size_t size = sizeof(struct subdev_regulators)
+- + sizeof(struct regulator_bulk_data) * ARRAY_SIZE(supplies);
+- struct subdev_regulators *sr;
+- int i;
+-
+- sr = devm_kzalloc(dev, size, GFP_KERNEL);
+- if (sr) {
+- sr->num_supplies = ARRAY_SIZE(supplies);
+- for (i = 0; i < ARRAY_SIZE(supplies); i++)
+- sr->supplies[i].supply = supplies[i];
+- }
+-
+- return sr;
+-}
+-
+-static int pci_subdev_regulators_add_bus(struct pci_bus *bus)
+-{
+- struct device *dev = &bus->dev;
+- struct subdev_regulators *sr;
+- int ret;
+-
+- if (!dev->of_node || !bus->parent || !pci_is_root_bus(bus->parent))
+- return 0;
+-
+- if (dev->driver_data)
+- dev_err(dev, "dev.driver_data unexpectedly non-NULL\n");
+-
+- sr = alloc_subdev_regulators(dev);
+- if (!sr)
+- return -ENOMEM;
+-
+- dev->driver_data = sr;
+- ret = regulator_bulk_get(dev, sr->num_supplies, sr->supplies);
+- if (ret)
+- return ret;
+-
+- ret = regulator_bulk_enable(sr->num_supplies, sr->supplies);
+- if (ret) {
+- dev_err(dev, "failed to enable regulators for downstream device\n");
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+-static int brcm_pcie_add_bus(struct pci_bus *bus)
+-{
+- struct device *dev = &bus->dev;
+- struct brcm_pcie *pcie = (struct brcm_pcie *) bus->sysdata;
+- int ret;
+-
+- if (!dev->of_node || !bus->parent || !pci_is_root_bus(bus->parent))
+- return 0;
+-
+- ret = pci_subdev_regulators_add_bus(bus);
+- if (ret)
+- return ret;
+-
+- /* Grab the regulators for suspend/resume */
+- pcie->sr = bus->dev.driver_data;
+-
+- /*
+- * If we have failed linkup there is no point to return an error as
+- * currently it will cause a WARNING() from pci_alloc_child_bus().
+- * We return 0 and turn on the "refusal_mode" so that any further
+- * accesses to the pci_dev just get 0xffffffff
+- */
+- if (brcm_pcie_linkup(pcie) != 0)
+- pcie->refusal_mode = true;
+-
+- return 0;
+-}
+-
+-static void pci_subdev_regulators_remove_bus(struct pci_bus *bus)
+-{
+- struct device *dev = &bus->dev;
+- struct subdev_regulators *sr = dev->driver_data;
+-
+- if (!sr || !bus->parent || !pci_is_root_bus(bus->parent))
+- return;
+-
+- if (regulator_bulk_disable(sr->num_supplies, sr->supplies))
+- dev_err(dev, "failed to disable regulators for downstream device\n");
+- dev->driver_data = NULL;
+-}
+-
+ /* Limits operation to a specific generation (1, 2, or 3) */
+ static void brcm_pcie_set_gen(struct brcm_pcie *pcie, int gen)
+ {
+@@ -858,18 +751,6 @@ static void __iomem *brcm_pcie_map_conf(struct pci_bus *bus, unsigned int devfn,
+ /* Accesses to the RC go right to the RC registers if slot==0 */
+ if (pci_is_root_bus(bus))
+ return PCI_SLOT(devfn) ? NULL : base + where;
+- if (pcie->refusal_mode) {
+- /*
+- * At this point we do not have link. There will be a CPU
+- * abort -- a quirk with this controller --if Linux tries
+- * to read any config-space registers besides those
+- * targeting the host bridge. To prevent this we hijack
+- * the address to point to a safe access that will return
+- * 0xffffffff.
+- */
+- writel(0xffffffff, base + PCIE_MISC_RC_BAR2_CONFIG_HI);
+- return base + PCIE_MISC_RC_BAR2_CONFIG_HI + (where & 0x3);
+- }
+
+ /* For devices, write to the config space index register */
+ idx = PCIE_ECAM_OFFSET(bus->number, devfn, 0);
+@@ -898,8 +779,6 @@ static struct pci_ops brcm_pcie_ops = {
+ .map_bus = brcm_pcie_map_conf,
+ .read = pci_generic_config_read,
+ .write = pci_generic_config_write,
+- .add_bus = brcm_pcie_add_bus,
+- .remove_bus = pci_subdev_regulators_remove_bus,
+ };
+
+ static struct pci_ops brcm_pcie_ops32 = {
+@@ -1047,9 +926,16 @@ static inline int brcm_pcie_get_rc_bar2_size_and_offset(struct brcm_pcie *pcie,
+
+ static int brcm_pcie_setup(struct brcm_pcie *pcie)
+ {
++ struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
+ u64 rc_bar2_offset, rc_bar2_size;
+ void __iomem *base = pcie->base;
+- int ret, memc;
++ struct device *dev = pcie->dev;
++ struct resource_entry *entry;
++ bool ssc_good = false;
++ struct resource *res;
++ int num_out_wins = 0;
++ u16 nlw, cls, lnksta;
++ int i, ret, memc;
+ u32 tmp, burst, aspm_support;
+
+ /* Reset the bridge */
+@@ -1139,40 +1025,6 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
+ if (pcie->gen)
+ brcm_pcie_set_gen(pcie, pcie->gen);
+
+- /* Don't advertise L0s capability if 'aspm-no-l0s' */
+- aspm_support = PCIE_LINK_STATE_L1;
+- if (!of_property_read_bool(pcie->np, "aspm-no-l0s"))
+- aspm_support |= PCIE_LINK_STATE_L0S;
+- tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+- u32p_replace_bits(&tmp, aspm_support,
+- PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK);
+- writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+-
+- /*
+- * For config space accesses on the RC, show the right class for
+- * a PCIe-PCIe bridge (the default setting is to be EP mode).
+- */
+- tmp = readl(base + PCIE_RC_CFG_PRIV1_ID_VAL3);
+- u32p_replace_bits(&tmp, 0x060400,
+- PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK);
+- writel(tmp, base + PCIE_RC_CFG_PRIV1_ID_VAL3);
+-
+- return 0;
+-}
+-
+-static int brcm_pcie_linkup(struct brcm_pcie *pcie)
+-{
+- struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
+- struct device *dev = pcie->dev;
+- void __iomem *base = pcie->base;
+- struct resource_entry *entry;
+- struct resource *res;
+- int num_out_wins = 0;
+- u16 nlw, cls, lnksta;
+- bool ssc_good = false;
+- u32 tmp;
+- int ret, i;
+-
+ /* Unassert the fundamental reset */
+ pcie->perst_set(pcie, 0);
+
+@@ -1223,6 +1075,24 @@ static int brcm_pcie_linkup(struct brcm_pcie *pcie)
+ num_out_wins++;
+ }
+
++ /* Don't advertise L0s capability if 'aspm-no-l0s' */
++ aspm_support = PCIE_LINK_STATE_L1;
++ if (!of_property_read_bool(pcie->np, "aspm-no-l0s"))
++ aspm_support |= PCIE_LINK_STATE_L0S;
++ tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
++ u32p_replace_bits(&tmp, aspm_support,
++ PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK);
++ writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
++
++ /*
++ * For config space accesses on the RC, show the right class for
++ * a PCIe-PCIe bridge (the default setting is to be EP mode).
++ */
++ tmp = readl(base + PCIE_RC_CFG_PRIV1_ID_VAL3);
++ u32p_replace_bits(&tmp, 0x060400,
++ PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK);
++ writel(tmp, base + PCIE_RC_CFG_PRIV1_ID_VAL3);
++
+ if (pcie->ssc) {
+ ret = brcm_pcie_set_ssc(pcie);
+ if (ret == 0)
+@@ -1351,21 +1221,9 @@ static void brcm_pcie_turn_off(struct brcm_pcie *pcie)
+ pcie->bridge_sw_init_set(pcie, 1);
+ }
+
+-static int pci_dev_may_wakeup(struct pci_dev *dev, void *data)
+-{
+- bool *ret = data;
+-
+- if (device_may_wakeup(&dev->dev)) {
+- *ret = true;
+- dev_info(&dev->dev, "disable cancelled for wake-up device\n");
+- }
+- return (int) *ret;
+-}
+-
+ static int brcm_pcie_suspend(struct device *dev)
+ {
+ struct brcm_pcie *pcie = dev_get_drvdata(dev);
+- struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
+ int ret;
+
+ brcm_pcie_turn_off(pcie);
+@@ -1383,25 +1241,6 @@ static int brcm_pcie_suspend(struct device *dev)
+ return ret;
+ }
+
+- if (pcie->sr) {
+- /*
+- * Now turn off the regulators, but if at least one
+- * downstream device is enabled as a wake-up source, do not
+- * turn off regulators.
+- */
+- pcie->ep_wakeup_capable = false;
+- pci_walk_bus(bridge->bus, pci_dev_may_wakeup,
+- &pcie->ep_wakeup_capable);
+- if (!pcie->ep_wakeup_capable) {
+- ret = regulator_bulk_disable(pcie->sr->num_supplies,
+- pcie->sr->supplies);
+- if (ret) {
+- dev_err(dev, "Could not turn off regulators\n");
+- reset_control_reset(pcie->rescal);
+- return ret;
+- }
+- }
+- }
+ clk_disable_unprepare(pcie->clk);
+
+ return 0;
+@@ -1419,28 +1258,9 @@ static int brcm_pcie_resume(struct device *dev)
+ if (ret)
+ return ret;
+
+- if (pcie->sr) {
+- if (pcie->ep_wakeup_capable) {
+- /*
+- * We are resuming from a suspend. In the suspend we
+- * did not disable the power supplies, so there is
+- * no need to enable them (and falsely increase their
+- * usage count).
+- */
+- pcie->ep_wakeup_capable = false;
+- } else {
+- ret = regulator_bulk_enable(pcie->sr->num_supplies,
+- pcie->sr->supplies);
+- if (ret) {
+- dev_err(dev, "Could not turn on regulators\n");
+- goto err_disable_clk;
+- }
+- }
+- }
+-
+ ret = reset_control_reset(pcie->rescal);
+ if (ret)
+- goto err_regulator;
++ goto err_disable_clk;
+
+ ret = brcm_phy_start(pcie);
+ if (ret)
+@@ -1461,10 +1281,6 @@ static int brcm_pcie_resume(struct device *dev)
+ if (ret)
+ goto err_reset;
+
+- ret = brcm_pcie_linkup(pcie);
+- if (ret)
+- goto err_reset;
+-
+ if (pcie->msi)
+ brcm_msi_set_regs(pcie->msi);
+
+@@ -1472,9 +1288,6 @@ static int brcm_pcie_resume(struct device *dev)
+
+ err_reset:
+ reset_control_rearm(pcie->rescal);
+-err_regulator:
+- if (pcie->sr)
+- regulator_bulk_disable(pcie->sr->num_supplies, pcie->sr->supplies);
+ err_disable_clk:
+ clk_disable_unprepare(pcie->clk);
+ return ret;
+@@ -1606,17 +1419,7 @@ static int brcm_pcie_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, pcie);
+
+- ret = pci_host_probe(bridge);
+- if (!ret && !brcm_pcie_link_up(pcie))
+- ret = -ENODEV;
+-
+- if (ret) {
+- brcm_pcie_remove(pdev);
+- return ret;
+- }
+-
+- return 0;
+-
++ return pci_host_probe(bridge);
+ fail:
+ __brcm_pcie_remove(pcie);
+ return ret;
+@@ -1625,8 +1428,8 @@ fail:
+ MODULE_DEVICE_TABLE(of, brcm_pcie_match);
+
+ static const struct dev_pm_ops brcm_pcie_pm_ops = {
+- .suspend_noirq = brcm_pcie_suspend,
+- .resume_noirq = brcm_pcie_resume,
++ .suspend = brcm_pcie_suspend,
++ .resume = brcm_pcie_resume,
+ };
+
+ static struct platform_driver brcm_pcie_driver = {
+diff --git a/drivers/pcmcia/Kconfig b/drivers/pcmcia/Kconfig
+index ab53eab635f6a..1740a63b814da 100644
+--- a/drivers/pcmcia/Kconfig
++++ b/drivers/pcmcia/Kconfig
+@@ -151,7 +151,7 @@ config TCIC
+
+ config PCMCIA_ALCHEMY_DEVBOARD
+ tristate "Alchemy Db/Pb1xxx PCMCIA socket services"
+- depends on MIPS_ALCHEMY && PCMCIA
++ depends on MIPS_DB1XXX && PCMCIA
+ help
+ Enable this driver if you want PCMCIA support on your Alchemy
+ Db1000, Db/Pb1100, Db/Pb1500, Db/Pb1550, Db/Pb1200, DB1300
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
+index c2b878128e2c0..7493fd634c1dc 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
+@@ -5246,7 +5246,7 @@ static int qcom_qmp_phy_power_on(struct phy *phy)
+
+ ret = reset_control_deassert(qmp->ufs_reset);
+ if (ret)
+- goto err_lane_rst;
++ goto err_pcs_ready;
+
+ qcom_qmp_phy_configure(pcs_misc, cfg->regs, cfg->pcs_misc_tbl,
+ cfg->pcs_misc_tbl_num);
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+index eca77e44a4c1b..cba5c32cbaeed 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+@@ -940,8 +940,14 @@ static irqreturn_t rockchip_usb2phy_irq(int irq, void *data)
+ if (!rport->phy)
+ continue;
+
+- /* Handle linestate irq for both otg port and host port */
+- ret = rockchip_usb2phy_linestate_irq(irq, rport);
++ switch (rport->port_id) {
++ case USB2PHY_PORT_OTG:
++ ret |= rockchip_usb2phy_otg_mux_irq(irq, rport);
++ break;
++ case USB2PHY_PORT_HOST:
++ ret |= rockchip_usb2phy_linestate_irq(irq, rport);
++ break;
++ }
+ }
+
+ return ret;
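+
The rockchip-inno-usb2 hunk above replaces the single line-state handler with per-port dispatch and OR-accumulates the per-port results, so the shared handler still reports IRQ_HANDLED when any port claims the interrupt. A standalone C sketch of that dispatch-and-accumulate shape; the port and handler names here are illustrative, not the driver's:

#include <stdio.h>

enum irqreturn { IRQ_NONE = 0, IRQ_HANDLED = 1 };
enum port_id { PORT_OTG, PORT_HOST };

struct port { enum port_id id; int pending; };

/* Hypothetical per-port handlers standing in for the PHY-specific ones. */
static enum irqreturn handle_otg(struct port *p)  { return p->pending ? IRQ_HANDLED : IRQ_NONE; }
static enum irqreturn handle_host(struct port *p) { return p->pending ? IRQ_HANDLED : IRQ_NONE; }

static enum irqreturn shared_irq(struct port *ports, int n)
{
    enum irqreturn ret = IRQ_NONE;
    int i;

    for (i = 0; i < n; i++) {
        /* Dispatch on port type; OR the results so one claimed
         * interrupt is enough for the shared handler. */
        switch (ports[i].id) {
        case PORT_OTG:
            ret |= handle_otg(&ports[i]);
            break;
        case PORT_HOST:
            ret |= handle_host(&ports[i]);
            break;
        }
    }
    return ret;
}

int main(void)
{
    struct port ports[] = { { PORT_OTG, 0 }, { PORT_HOST, 1 } };

    printf("%d\n", shared_irq(ports, 2)); /* prints 1 (IRQ_HANDLED) */
    return 0;
}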
+diff --git a/drivers/platform/x86/barco-p50-gpio.c b/drivers/platform/x86/barco-p50-gpio.c
+index f5c72e33f9ae3..f8b796820ef43 100644
+--- a/drivers/platform/x86/barco-p50-gpio.c
++++ b/drivers/platform/x86/barco-p50-gpio.c
+@@ -406,11 +406,14 @@ MODULE_DEVICE_TABLE(dmi, dmi_ids);
+ static int __init p50_module_init(void)
+ {
+ struct resource res = DEFINE_RES_IO(P50_GPIO_IO_PORT_BASE, P50_PORT_CMD + 1);
++ int ret;
+
+ if (!dmi_first_match(dmi_ids))
+ return -ENODEV;
+
+- platform_driver_register(&p50_gpio_driver);
++ ret = platform_driver_register(&p50_gpio_driver);
++ if (ret)
++ return ret;
+
+ gpio_pdev = platform_device_register_simple(DRIVER_NAME, PLATFORM_DEVID_NONE, &res, 1);
+ if (IS_ERR(gpio_pdev)) {
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index 88f0bfd6ecf1a..c573bfd9ab0e5 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -38,6 +38,7 @@ MODULE_ALIAS("wmi:5FB7F034-2C63-45e9-BE91-3D44E2C707E4");
+ #define HPWMI_EVENT_GUID "95F24279-4D7B-4334-9387-ACCDC67EF61C"
+ #define HPWMI_BIOS_GUID "5FB7F034-2C63-45e9-BE91-3D44E2C707E4"
+ #define HP_OMEN_EC_THERMAL_PROFILE_OFFSET 0x95
++#define zero_if_sup(tmp) (zero_insize_support?0:sizeof(tmp)) // use when zero insize is required
+
+ /* DMI board names of devices that should use the omen specific path for
+ * thermal profiles.
+@@ -200,6 +201,7 @@ static struct input_dev *hp_wmi_input_dev;
+ static struct platform_device *hp_wmi_platform_dev;
+ static struct platform_profile_handler platform_profile_handler;
+ static bool platform_profile_support;
++static bool zero_insize_support;
+
+ static struct rfkill *wifi_rfkill;
+ static struct rfkill *bluetooth_rfkill;
+@@ -347,7 +349,7 @@ static int hp_wmi_read_int(int query)
+ int val = 0, ret;
+
+ ret = hp_wmi_perform_query(query, HPWMI_READ, &val,
+- sizeof(val), sizeof(val));
++ zero_if_sup(val), sizeof(val));
+
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+@@ -383,7 +385,8 @@ static int hp_wmi_get_tablet_mode(void)
+ return -ENODEV;
+
+ ret = hp_wmi_perform_query(HPWMI_SYSTEM_DEVICE_MODE, HPWMI_READ,
+- system_device_mode, 0, sizeof(system_device_mode));
++ system_device_mode, zero_if_sup(system_device_mode),
++ sizeof(system_device_mode));
+ if (ret < 0)
+ return ret;
+
+@@ -449,7 +452,7 @@ static int hp_wmi_fan_speed_max_get(void)
+ int val = 0, ret;
+
+ ret = hp_wmi_perform_query(HPWMI_FAN_SPEED_MAX_GET_QUERY, HPWMI_GM,
+- &val, 0, sizeof(val));
++ &val, zero_if_sup(val), sizeof(val));
+
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+@@ -461,7 +464,7 @@ static int __init hp_wmi_bios_2008_later(void)
+ {
+ int state = 0;
+ int ret = hp_wmi_perform_query(HPWMI_FEATURE_QUERY, HPWMI_READ, &state,
+- 0, sizeof(state));
++ zero_if_sup(state), sizeof(state));
+ if (!ret)
+ return 1;
+
+@@ -472,7 +475,7 @@ static int __init hp_wmi_bios_2009_later(void)
+ {
+ u8 state[128];
+ int ret = hp_wmi_perform_query(HPWMI_FEATURE2_QUERY, HPWMI_READ, &state,
+- 0, sizeof(state));
++ zero_if_sup(state), sizeof(state));
+ if (!ret)
+ return 1;
+
+@@ -550,7 +553,7 @@ static int hp_wmi_rfkill2_refresh(void)
+ int err, i;
+
+ err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+- 0, sizeof(state));
++ zero_if_sup(state), sizeof(state));
+ if (err)
+ return err;
+
+@@ -952,7 +955,7 @@ static int __init hp_wmi_rfkill2_setup(struct platform_device *device)
+ int err, i;
+
+ err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+- 0, sizeof(state));
++ zero_if_sup(state), sizeof(state));
+ if (err)
+ return err < 0 ? err : -EINVAL;
+
+@@ -1410,11 +1413,15 @@ static int __init hp_wmi_init(void)
+ {
+ int event_capable = wmi_has_guid(HPWMI_EVENT_GUID);
+ int bios_capable = wmi_has_guid(HPWMI_BIOS_GUID);
+- int err;
++ int err, tmp = 0;
+
+ if (!bios_capable && !event_capable)
+ return -ENODEV;
+
++ if (hp_wmi_perform_query(HPWMI_HARDWARE_QUERY, HPWMI_READ, &tmp,
++ sizeof(tmp), sizeof(tmp)) == HPWMI_RET_INVALID_PARAMETERS)
++ zero_insize_support = true;
++
+ if (event_capable) {
+ err = hp_wmi_input_setup();
+ if (err)
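+
The recurring zero_if_sup() edits above all feed the same idea into hp_wmi_perform_query(): newer firmware wants a zero-sized input buffer, older firmware wants insize == sizeof(arg), and which one applies is probed once at init. A runnable sketch of that probe-once, branch-in-a-macro shape; perform_query() is a stand-in, not the real WMI call:

#include <stdio.h>
#include <stddef.h>

static int zero_insize_support;

#define zero_if_sup(tmp) (zero_insize_support ? 0 : sizeof(tmp))

static int perform_query(int query, void *buf, size_t insize, size_t outsize)
{
    (void)buf;
    printf("query %d: insize=%zu outsize=%zu\n", query, insize, outsize);
    return 0;
}

int main(void)
{
    int val = 0;

    zero_insize_support = 0;                               /* old firmware */
    perform_query(1, &val, zero_if_sup(val), sizeof(val)); /* insize=4 */

    zero_insize_support = 1;                               /* new firmware */
    perform_query(1, &val, zero_if_sup(val), sizeof(val)); /* insize=0 */
    return 0;
}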
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index 09a4cbd69676a..23adcb597ff96 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -2995,13 +2995,6 @@ static int ab8500_fg_bind(struct device *dev, struct device *master,
+ {
+ struct ab8500_fg *di = dev_get_drvdata(dev);
+
+- /* Create a work queue for running the FG algorithm */
+- di->fg_wq = alloc_ordered_workqueue("ab8500_fg_wq", WQ_MEM_RECLAIM);
+- if (di->fg_wq == NULL) {
+- dev_err(dev, "failed to create work queue\n");
+- return -ENOMEM;
+- }
+-
+ di->bat_cap.max_mah_design = di->bm->bi->charge_full_design_uah;
+ di->bat_cap.max_mah = di->bat_cap.max_mah_design;
+ di->vbat_nom_uv = di->bm->bi->voltage_max_design_uv;
+@@ -3025,8 +3018,7 @@ static void ab8500_fg_unbind(struct device *dev, struct device *master,
+ if (ret)
+ dev_err(dev, "failed to disable coulomb counter\n");
+
+- destroy_workqueue(di->fg_wq);
+- flush_scheduled_work();
++ flush_workqueue(di->fg_wq);
+ }
+
+ static const struct component_ops ab8500_fg_component_ops = {
+@@ -3070,6 +3062,13 @@ static int ab8500_fg_probe(struct platform_device *pdev)
+ ab8500_fg_charge_state_to(di, AB8500_FG_CHARGE_INIT);
+ ab8500_fg_discharge_state_to(di, AB8500_FG_DISCHARGE_INIT);
+
++ /* Create a work queue for running the FG algorithm */
++ di->fg_wq = alloc_ordered_workqueue("ab8500_fg_wq", WQ_MEM_RECLAIM);
++ if (di->fg_wq == NULL) {
++ dev_err(dev, "failed to create work queue\n");
++ return -ENOMEM;
++ }
++
+ /* Init work for running the fg algorithm instantly */
+ INIT_WORK(&di->fg_work, ab8500_fg_instant_work);
+
+@@ -3181,6 +3180,8 @@ static int ab8500_fg_remove(struct platform_device *pdev)
+ int ret = 0;
+ struct ab8500_fg *di = platform_get_drvdata(pdev);
+
++ destroy_workqueue(di->fg_wq);
++ flush_scheduled_work();
+ component_del(&pdev->dev, &ab8500_fg_component_ops);
+ list_del(&di->node);
+ ab8500_fg_sysfs_exit(di);
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 19746e658a6a8..15219ed43ce95 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -865,17 +865,20 @@ static int axp288_charger_probe(struct platform_device *pdev)
+ info->regmap_irqc = axp20x->regmap_irqc;
+
+ info->cable.edev = extcon_get_extcon_dev(AXP288_EXTCON_DEV_NAME);
+- if (info->cable.edev == NULL) {
+- dev_dbg(dev, "%s is not ready, probe deferred\n",
+- AXP288_EXTCON_DEV_NAME);
+- return -EPROBE_DEFER;
++ if (IS_ERR(info->cable.edev)) {
++ dev_err_probe(dev, PTR_ERR(info->cable.edev),
++ "extcon_get_extcon_dev(%s) failed\n",
++ AXP288_EXTCON_DEV_NAME);
++ return PTR_ERR(info->cable.edev);
+ }
+
+ if (acpi_dev_present(USB_HOST_EXTCON_HID, NULL, -1)) {
+ info->otg.cable = extcon_get_extcon_dev(USB_HOST_EXTCON_NAME);
+- if (info->otg.cable == NULL) {
+- dev_dbg(dev, "EXTCON_USB_HOST is not ready, probe deferred\n");
+- return -EPROBE_DEFER;
++ if (IS_ERR(info->otg.cable)) {
++ dev_err_probe(dev, PTR_ERR(info->otg.cable),
++ "extcon_get_extcon_dev(%s) failed\n",
++ USB_HOST_EXTCON_NAME);
++ return PTR_ERR(info->otg.cable);
+ }
+ dev_info(dev, "Using " USB_HOST_EXTCON_HID " extcon for usb-id\n");
+ }
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index ce8ffd0a41b5a..68595897e72db 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -600,7 +600,6 @@ static const struct dmi_system_id axp288_no_battery_list[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "T3 MRD"),
+ DMI_MATCH(DMI_CHASSIS_TYPE, "3"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
+- DMI_MATCH(DMI_BIOS_VERSION, "5.11"),
+ },
+ },
+ {}
+diff --git a/drivers/power/supply/charger-manager.c b/drivers/power/supply/charger-manager.c
+index d67edb760c948..92db79400a6ad 100644
+--- a/drivers/power/supply/charger-manager.c
++++ b/drivers/power/supply/charger-manager.c
+@@ -985,13 +985,10 @@ static int charger_extcon_init(struct charger_manager *cm,
+ cable->nb.notifier_call = charger_extcon_notifier;
+
+ cable->extcon_dev = extcon_get_extcon_dev(cable->extcon_name);
+- if (IS_ERR_OR_NULL(cable->extcon_dev)) {
++ if (IS_ERR(cable->extcon_dev)) {
+ pr_err("Cannot find extcon_dev for %s (cable: %s)\n",
+ cable->extcon_name, cable->name);
+- if (cable->extcon_dev == NULL)
+- return -EPROBE_DEFER;
+- else
+- return PTR_ERR(cable->extcon_dev);
++ return PTR_ERR(cable->extcon_dev);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(extcon_mapping); i++) {
+diff --git a/drivers/power/supply/max8997_charger.c b/drivers/power/supply/max8997_charger.c
+index 25207fe2aa68e..bfa7a576523df 100644
+--- a/drivers/power/supply/max8997_charger.c
++++ b/drivers/power/supply/max8997_charger.c
+@@ -248,10 +248,10 @@ static int max8997_battery_probe(struct platform_device *pdev)
+ dev_info(&pdev->dev, "couldn't get charger regulator\n");
+ }
+ charger->edev = extcon_get_extcon_dev("max8997-muic");
+- if (IS_ERR_OR_NULL(charger->edev)) {
+- if (!charger->edev)
+- return -EPROBE_DEFER;
+- dev_info(charger->dev, "couldn't get extcon device\n");
++ if (IS_ERR(charger->edev)) {
++ dev_err_probe(charger->dev, PTR_ERR(charger->edev),
++ "couldn't get extcon device: max8997-muic\n");
++ return PTR_ERR(charger->edev);
+ }
+
+ if (!IS_ERR(charger->reg) && !IS_ERR_OR_NULL(charger->edev)) {
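+
The extcon conversions above (axp288_charger, charger-manager, max8997) all track one API change: extcon_get_extcon_dev() now returns an ERR_PTR-encoded errno instead of NULL, so callers test IS_ERR() and propagate PTR_ERR(). A portable re-implementation of that pointer-encoded-errno convention, just to show what the new checks decode; the lookup and values are illustrative:

#include <stdio.h>
#include <stdint.h>

#define MAX_ERRNO    4095
#define EPROBE_DEFER 517

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
    return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

static void *get_extcon_dev(int ready)
{
    static int dev; /* stand-in for a real device object */
    return ready ? (void *)&dev : ERR_PTR(-EPROBE_DEFER);
}

int main(void)
{
    void *edev = get_extcon_dev(0);

    if (IS_ERR(edev)) {
        printf("lookup failed: %ld\n", PTR_ERR(edev)); /* -517 */
        return 1;
    }
    return 0;
}

The trick behind the convention is that the top 4095 values of the address space are never valid kernel pointers, so they are free to carry an errno.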
+diff --git a/drivers/pwm/pwm-lp3943.c b/drivers/pwm/pwm-lp3943.c
+index ea17d446a6276..2bd04ecb508cf 100644
+--- a/drivers/pwm/pwm-lp3943.c
++++ b/drivers/pwm/pwm-lp3943.c
+@@ -125,6 +125,7 @@ static int lp3943_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ if (err)
+ return err;
+
++ duty_ns = min(duty_ns, period_ns);
+ val = (u8)(duty_ns * LP3943_MAX_DUTY / period_ns);
+
+ return lp3943_write_byte(lp3943, reg_duty, val);
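+
The one-line lp3943 fix clamps duty_ns to period_ns before scaling into the 8-bit duty register; without the clamp, duty > period overflows the (u8) cast and wraps. A runnable check of both cases; MAX_DUTY is assumed to be 255 here, standing in for the driver's LP3943_MAX_DUTY:

#include <stdio.h>
#include <stdint.h>

#define MAX_DUTY 255 /* assumed 8-bit duty register */

static uint8_t duty_to_reg(long duty_ns, long period_ns)
{
    if (duty_ns > period_ns)        /* the added min(duty_ns, period_ns) */
        duty_ns = period_ns;
    return (uint8_t)(duty_ns * MAX_DUTY / period_ns);
}

int main(void)
{
    printf("%d\n", duty_to_reg(5000, 10000));  /* 127 */
    printf("%d\n", duty_to_reg(15000, 10000)); /* 255; unclamped it would wrap to 126 */
    return 0;
}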
+diff --git a/drivers/pwm/pwm-raspberrypi-poe.c b/drivers/pwm/pwm-raspberrypi-poe.c
+index 579a15240e0a8..c877de37734d9 100644
+--- a/drivers/pwm/pwm-raspberrypi-poe.c
++++ b/drivers/pwm/pwm-raspberrypi-poe.c
+@@ -66,7 +66,7 @@ static int raspberrypi_pwm_get_property(struct rpi_firmware *firmware,
+ u32 reg, u32 *val)
+ {
+ struct raspberrypi_pwm_prop msg = {
+- .reg = reg
++ .reg = cpu_to_le32(reg),
+ };
+ int ret;
+
+diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
+index 7a096f1891e61..91eb037089ef8 100644
+--- a/drivers/remoteproc/imx_rproc.c
++++ b/drivers/remoteproc/imx_rproc.c
+@@ -423,6 +423,9 @@ static int imx_rproc_prepare(struct rproc *rproc)
+ if (!strcmp(it.node->name, "vdev0buffer"))
+ continue;
+
++ if (!strcmp(it.node->name, "rsc-table"))
++ continue;
++
+ rmem = of_reserved_mem_lookup(it.node);
+ if (!rmem) {
+ dev_err(priv->dev, "unable to acquire memory-region\n");
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index 540e027f08c4b..4ad90945518f1 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1404,9 +1404,9 @@ static int qcom_smd_parse_edge(struct device *dev,
+ edge->name = node->name;
+
+ irq = irq_of_parse_and_map(node, 0);
+- if (irq < 0) {
++ if (!irq) {
+ dev_err(dev, "required smd interrupt missing\n");
+- ret = irq;
++ ret = -EINVAL;
+ goto put_node;
+ }
+
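+
The qcom_smd fix is a signedness bug: irq_of_parse_and_map() returns an unsigned int and 0 on failure, so the old "irq < 0" test was dead code. A small demonstration; parse_and_map() is a stand-in:

#include <stdio.h>

static unsigned int parse_and_map(int ok) { return ok ? 42 : 0; }

int main(void)
{
    unsigned int irq = parse_and_map(0);

    if (irq < 0)  /* always false for an unsigned type: the old, dead test */
        puts("never reached");

    if (!irq) {   /* the corrected failure test */
        puts("required interrupt missing");
        return 1; /* the driver returns -EINVAL here */
    }
    return 0;
}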
+diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
+index ac764e04c898f..ed26e92268344 100644
+--- a/drivers/rpmsg/virtio_rpmsg_bus.c
++++ b/drivers/rpmsg/virtio_rpmsg_bus.c
+@@ -851,7 +851,7 @@ static struct rpmsg_device *rpmsg_virtio_add_ctrl_dev(struct virtio_device *vdev
+
+ err = rpmsg_chrdev_register_device(rpdev_ctrl);
+ if (err) {
+- kfree(vch);
++		/* vch will be freed in virtio_rpmsg_release_device() */
+ return ERR_PTR(err);
+ }
+
+@@ -862,7 +862,7 @@ static void rpmsg_virtio_del_ctrl_dev(struct rpmsg_device *rpdev_ctrl)
+ {
+ if (!rpdev_ctrl)
+ return;
+- kfree(to_virtio_rpmsg_channel(rpdev_ctrl));
++ device_unregister(&rpdev_ctrl->dev);
+ }
+
+ static int rpmsg_probe(struct virtio_device *vdev)
+@@ -973,7 +973,8 @@ static int rpmsg_probe(struct virtio_device *vdev)
+
+ err = rpmsg_ns_register_device(rpdev_ns);
+ if (err)
+- goto free_vch;
++			/* vch will be freed in virtio_rpmsg_release_device() */
++ goto free_ctrldev;
+ }
+
+ /*
+@@ -997,8 +998,6 @@ static int rpmsg_probe(struct virtio_device *vdev)
+
+ return 0;
+
+-free_vch:
+- kfree(vch);
+ free_ctrldev:
+ rpmsg_virtio_del_ctrl_dev(rpdev_ctrl);
+ free_coherent:
+diff --git a/drivers/rtc/rtc-ftrtc010.c b/drivers/rtc/rtc-ftrtc010.c
+index 53bb08fe1cd46..25c6e7d9570f0 100644
+--- a/drivers/rtc/rtc-ftrtc010.c
++++ b/drivers/rtc/rtc-ftrtc010.c
+@@ -137,26 +137,34 @@ static int ftrtc010_rtc_probe(struct platform_device *pdev)
+ ret = clk_prepare_enable(rtc->extclk);
+ if (ret) {
+ dev_err(dev, "failed to enable EXTCLK\n");
+- return ret;
++ goto err_disable_pclk;
+ }
+ }
+
+ rtc->rtc_irq = platform_get_irq(pdev, 0);
+- if (rtc->rtc_irq < 0)
+- return rtc->rtc_irq;
++ if (rtc->rtc_irq < 0) {
++ ret = rtc->rtc_irq;
++ goto err_disable_extclk;
++ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- if (!res)
+- return -ENODEV;
++ if (!res) {
++ ret = -ENODEV;
++ goto err_disable_extclk;
++ }
+
+ rtc->rtc_base = devm_ioremap(dev, res->start,
+ resource_size(res));
+- if (!rtc->rtc_base)
+- return -ENOMEM;
++ if (!rtc->rtc_base) {
++ ret = -ENOMEM;
++ goto err_disable_extclk;
++ }
+
+ rtc->rtc_dev = devm_rtc_allocate_device(dev);
+- if (IS_ERR(rtc->rtc_dev))
+- return PTR_ERR(rtc->rtc_dev);
++ if (IS_ERR(rtc->rtc_dev)) {
++ ret = PTR_ERR(rtc->rtc_dev);
++ goto err_disable_extclk;
++ }
+
+ rtc->rtc_dev->ops = &ftrtc010_rtc_ops;
+
+@@ -172,9 +180,15 @@ static int ftrtc010_rtc_probe(struct platform_device *pdev)
+ ret = devm_request_irq(dev, rtc->rtc_irq, ftrtc010_rtc_interrupt,
+ IRQF_SHARED, pdev->name, dev);
+ if (unlikely(ret))
+- return ret;
++ goto err_disable_extclk;
+
+ return devm_rtc_register_device(rtc->rtc_dev);
++
++err_disable_extclk:
++ clk_disable_unprepare(rtc->extclk);
++err_disable_pclk:
++ clk_disable_unprepare(rtc->pclk);
++ return ret;
+ }
+
+ static int ftrtc010_rtc_remove(struct platform_device *pdev)
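+
The ftrtc010 probe rework is the classic goto-unwind pattern: every failure after a clock has been enabled must walk back through the acquisitions in reverse order instead of returning directly. A generic runnable sketch of that ladder; the resource names are placeholders for the driver's pclk/extclk/irq:

#include <stdio.h>
#include <stdlib.h>

static int acquire(const char *what, int ok)
{
    if (!ok) { printf("failed to acquire %s\n", what); return -1; }
    printf("acquired %s\n", what);
    return 0;
}

static void release(const char *what) { printf("released %s\n", what); }

static int probe(int ok_a, int ok_b, int ok_c)
{
    int ret;

    ret = acquire("pclk", ok_a);
    if (ret)
        return ret;           /* nothing to unwind yet */
    ret = acquire("extclk", ok_b);
    if (ret)
        goto err_release_pclk;
    ret = acquire("irq", ok_c);
    if (ret)
        goto err_release_extclk;
    return 0;

err_release_extclk:              /* unwind in reverse acquisition order */
    release("extclk");
err_release_pclk:
    release("pclk");
    return ret;
}

int main(void)
{
    return probe(1, 1, 0) ? EXIT_FAILURE : EXIT_SUCCESS;
}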
+diff --git a/drivers/rtc/rtc-mt6397.c b/drivers/rtc/rtc-mt6397.c
+index 80dc479a6ff02..1d297af80f878 100644
+--- a/drivers/rtc/rtc-mt6397.c
++++ b/drivers/rtc/rtc-mt6397.c
+@@ -269,6 +269,8 @@ static int mtk_rtc_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!res)
++ return -EINVAL;
+ rtc->addr_base = res->start;
+
+ rtc->data = of_device_get_match_data(&pdev->dev);
+diff --git a/drivers/scsi/myrb.c b/drivers/scsi/myrb.c
+index 71585528e8db9..e885c1dbf61f9 100644
+--- a/drivers/scsi/myrb.c
++++ b/drivers/scsi/myrb.c
+@@ -1239,7 +1239,8 @@ static void myrb_cleanup(struct myrb_hba *cb)
+ myrb_unmap(cb);
+
+ if (cb->mmio_base) {
+- cb->disable_intr(cb->io_base);
++ if (cb->disable_intr)
++ cb->disable_intr(cb->io_base);
+ iounmap(cb->mmio_base);
+ }
+ if (cb->irq)
+@@ -3413,9 +3414,13 @@ static struct myrb_hba *myrb_detect(struct pci_dev *pdev,
+ mutex_init(&cb->dcmd_mutex);
+ mutex_init(&cb->dma_mutex);
+ cb->pdev = pdev;
++ cb->host = shost;
+
+- if (pci_enable_device(pdev))
+- goto failure;
++ if (pci_enable_device(pdev)) {
++ dev_err(&pdev->dev, "Failed to enable PCI device\n");
++ scsi_host_put(shost);
++ return NULL;
++ }
+
+ if (privdata->hw_init == DAC960_PD_hw_init ||
+ privdata->hw_init == DAC960_P_hw_init) {
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 8b5d2a4076c21..b0922c521d613 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3171,7 +3171,7 @@ static void sd_read_cpr(struct scsi_disk *sdkp)
+ goto out;
+
+ /* We must have at least a 64B header and one 32B range descriptor */
+- vpd_len = get_unaligned_be16(&buffer[2]) + 3;
++ vpd_len = get_unaligned_be16(&buffer[2]) + 4;
+ if (vpd_len > buf_len || vpd_len < 64 + 32 || (vpd_len & 31)) {
+ sd_printk(KERN_ERR, sdkp,
+ "Invalid Concurrent Positioning Ranges VPD page\n");
+@@ -3605,7 +3605,6 @@ static int sd_probe(struct device *dev)
+ out_put:
+ put_disk(gd);
+ out_free:
+- sd_zbc_release_disk(sdkp);
+ kfree(sdkp);
+ out:
+ scsi_autopm_put_device(sdp);
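+
The sd_read_cpr() fix corrects off-by-one length math: a VPD page's PAGE LENGTH field (bytes 2-3, big-endian) counts the bytes following the 4-byte page header, so the full page size is length + 4, not + 3. A runnable arithmetic check; the payload bytes are illustrative, and 96 matches the 64B-header-plus-32B-descriptor minimum the driver enforces:

#include <stdio.h>
#include <stdint.h>

static unsigned int get_unaligned_be16(const uint8_t *p)
{
    return (unsigned int)(p[0] << 8) | p[1];
}

int main(void)
{
    /* 64B header + one 32B descriptor: PAGE LENGTH = 96 - 4 = 92 = 0x005c */
    uint8_t vpd[4] = { 0xb9, 0x00, 0x00, 0x5c };
    unsigned int vpd_len = get_unaligned_be16(&vpd[2]) + 4;

    printf("full page: %u bytes\n", vpd_len); /* 96, a multiple of 32 */
    return 0;
}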
+diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c
+index 494cf2b5bf7b6..343ff61ccccbb 100644
+--- a/drivers/soc/rockchip/grf.c
++++ b/drivers/soc/rockchip/grf.c
+@@ -148,12 +148,14 @@ static int __init rockchip_grf_init(void)
+ return -ENODEV;
+ if (!match || !match->data) {
+ pr_err("%s: missing grf data\n", __func__);
++ of_node_put(np);
+ return -EINVAL;
+ }
+
+ grf_info = match->data;
+
+ grf = syscon_node_to_regmap(np);
++ of_node_put(np);
+ if (IS_ERR(grf)) {
+ pr_err("%s: could not get grf syscon\n", __func__);
+ return PTR_ERR(grf);
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 63101f1ba2713..32e5fdb823c40 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -1293,6 +1293,9 @@ static int intel_link_probe(struct auxiliary_device *auxdev,
+ /* use generic bandwidth allocation algorithm */
+ sdw->cdns.bus.compute_params = sdw_compute_params;
+
++ /* avoid resuming from pm_runtime suspend if it's not required */
++ dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND);
++
+ ret = sdw_bus_master_add(bus, dev, dev->fwnode);
+ if (ret) {
+ dev_err(dev, "sdw_bus_master_add fail: %d\n", ret);
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 54813417ef8ea..249f1b69ec94e 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -99,7 +99,7 @@
+
+ #define SWRM_SPECIAL_CMD_ID 0xF
+ #define MAX_FREQ_NUM 1
+-#define TIMEOUT_MS (2 * HZ)
++#define TIMEOUT_MS 100
+ #define QCOM_SWRM_MAX_RD_LEN 0x1
+ #define QCOM_SDW_MAX_PORTS 14
+ #define DEFAULT_CLK_FREQ 9600000
+diff --git a/drivers/spi/spi-fsi.c b/drivers/spi/spi-fsi.c
+index d403a7a3021d0..72ab066ce5523 100644
+--- a/drivers/spi/spi-fsi.c
++++ b/drivers/spi/spi-fsi.c
+@@ -319,12 +319,12 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+
+ end = jiffies + msecs_to_jiffies(SPI_FSI_STATUS_TIMEOUT_MS);
+ do {
++ if (time_after(jiffies, end))
++ return -ETIMEDOUT;
++
+ rc = fsi_spi_status(ctx, &status, "TX");
+ if (rc)
+ return rc;
+-
+- if (time_after(jiffies, end))
+- return -ETIMEDOUT;
+ } while (status & SPI_FSI_STATUS_TDR_FULL);
+
+ sent += nb;
+@@ -337,12 +337,12 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ while (transfer->len > recv) {
+ end = jiffies + msecs_to_jiffies(SPI_FSI_STATUS_TIMEOUT_MS);
+ do {
++ if (time_after(jiffies, end))
++ return -ETIMEDOUT;
++
+ rc = fsi_spi_status(ctx, &status, "RX");
+ if (rc)
+ return rc;
+-
+- if (time_after(jiffies, end))
+- return -ETIMEDOUT;
+ } while (!(status & SPI_FSI_STATUS_RDR_FULL));
+
+ rc = fsi_spi_read_reg(ctx, SPI_FSI_DATA_RX, &in);
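+
Both polling loops in the spi-fsi hunk move the deadline test to the top of the iteration, so the loop gives up before issuing yet another status read once time is up. A runnable skeleton of that deadline-first shape; read_status() fakes hardware that goes idle on the third poll, and coarse time() stands in for jiffies:

#include <stdio.h>
#include <time.h>

static int read_status(int *busy)
{
    static int polls;

    *busy = ++polls < 3; /* pretend the FIFO drains on the third poll */
    return 0;
}

static int wait_not_busy(int timeout_s)
{
    time_t end = time(NULL) + timeout_s;
    int busy, rc;

    do {
        if (time(NULL) > end)  /* deadline first, as in the fix */
            return -1;         /* -ETIMEDOUT in the driver */
        rc = read_status(&busy);
        if (rc)
            return rc;
    } while (busy);

    return 0;
}

int main(void)
{
    printf("%d\n", wait_not_busy(2)); /* 0: went idle before the deadline */
    return 0;
}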
+diff --git a/drivers/staging/fieldbus/anybuss/host.c b/drivers/staging/fieldbus/anybuss/host.c
+index a344410e48fe9..cd86b9c9e3458 100644
+--- a/drivers/staging/fieldbus/anybuss/host.c
++++ b/drivers/staging/fieldbus/anybuss/host.c
+@@ -1384,7 +1384,7 @@ anybuss_host_common_probe(struct device *dev,
+ goto err_device;
+ return cd;
+ err_device:
+- device_unregister(&cd->client->dev);
++ put_device(&cd->client->dev);
+ err_kthread:
+ kthread_stop(cd->qthread);
+ err_reset:
+diff --git a/drivers/staging/greybus/audio_codec.c b/drivers/staging/greybus/audio_codec.c
+index b589cf6b1d034..e19b91e7a72ef 100644
+--- a/drivers/staging/greybus/audio_codec.c
++++ b/drivers/staging/greybus/audio_codec.c
+@@ -599,8 +599,8 @@ static int gbcodec_mute_stream(struct snd_soc_dai *dai, int mute, int stream)
+ break;
+ }
+ if (!data) {
+- dev_err(dai->dev, "%s:%s DATA connection missing\n",
+- dai->name, module->name);
++ dev_err(dai->dev, "%s DATA connection missing\n",
++ dai->name);
+ mutex_unlock(&codec->lock);
+ return -ENODEV;
+ }
+diff --git a/drivers/staging/r8188eu/core/rtw_xmit.c b/drivers/staging/r8188eu/core/rtw_xmit.c
+index 8503059edc46b..f4e9f61025392 100644
+--- a/drivers/staging/r8188eu/core/rtw_xmit.c
++++ b/drivers/staging/r8188eu/core/rtw_xmit.c
+@@ -179,7 +179,12 @@ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter)
+
+ pxmitpriv->free_xmit_extbuf_cnt = num_xmit_extbuf;
+
+- rtw_alloc_hwxmits(padapter);
++ res = rtw_alloc_hwxmits(padapter);
++ if (res) {
++ res = _FAIL;
++ goto exit;
++ }
++
+ rtw_init_hwxmits(pxmitpriv->hwxmits, pxmitpriv->hwxmit_entry);
+
+ for (i = 0; i < 4; i++)
+@@ -1496,7 +1501,7 @@ exit:
+ return res;
+ }
+
+-void rtw_alloc_hwxmits(struct adapter *padapter)
++int rtw_alloc_hwxmits(struct adapter *padapter)
+ {
+ struct hw_xmit *hwxmits;
+ struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+@@ -1504,6 +1509,8 @@ void rtw_alloc_hwxmits(struct adapter *padapter)
+ pxmitpriv->hwxmit_entry = HWXMIT_ENTRY;
+
+ pxmitpriv->hwxmits = kzalloc(sizeof(struct hw_xmit) * pxmitpriv->hwxmit_entry, GFP_KERNEL);
++ if (!pxmitpriv->hwxmits)
++ return -ENOMEM;
+
+ hwxmits = pxmitpriv->hwxmits;
+
+@@ -1520,6 +1527,8 @@ void rtw_alloc_hwxmits(struct adapter *padapter)
+ hwxmits[3] .sta_queue = &pxmitpriv->bk_pending;
+ } else {
+ }
++
++ return 0;
+ }
+
+ void rtw_free_hwxmits(struct adapter *padapter)
+diff --git a/drivers/staging/r8188eu/include/rtw_xmit.h b/drivers/staging/r8188eu/include/rtw_xmit.h
+index b2df1480d66b3..e73632972900d 100644
+--- a/drivers/staging/r8188eu/include/rtw_xmit.h
++++ b/drivers/staging/r8188eu/include/rtw_xmit.h
+@@ -341,7 +341,7 @@ s32 rtw_txframes_sta_ac_pending(struct adapter *padapter,
+ void rtw_init_hwxmits(struct hw_xmit *phwxmit, int entry);
+ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter);
+ void _rtw_free_xmit_priv(struct xmit_priv *pxmitpriv);
+-void rtw_alloc_hwxmits(struct adapter *padapter);
++int rtw_alloc_hwxmits(struct adapter *padapter);
+ void rtw_free_hwxmits(struct adapter *padapter);
+ s32 rtw_xmit(struct adapter *padapter, struct sk_buff **pkt);
+
+diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c
+index 4b6c2295a3cf7..b5a38f0a8d79b 100644
+--- a/drivers/staging/rtl8192e/rtllib_softmac.c
++++ b/drivers/staging/rtl8192e/rtllib_softmac.c
+@@ -651,9 +651,9 @@ static void rtllib_beacons_stop(struct rtllib_device *ieee)
+ spin_lock_irqsave(&ieee->beacon_lock, flags);
+
+ ieee->beacon_txing = 0;
+- del_timer_sync(&ieee->beacon_timer);
+
+ spin_unlock_irqrestore(&ieee->beacon_lock, flags);
++ del_timer_sync(&ieee->beacon_timer);
+
+ }
+
+diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
+index 1a43979939a8a..79f3fbe25556a 100644
+--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
++++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
+@@ -528,9 +528,9 @@ static void ieee80211_beacons_stop(struct ieee80211_device *ieee)
+ spin_lock_irqsave(&ieee->beacon_lock, flags);
+
+ ieee->beacon_txing = 0;
+- del_timer_sync(&ieee->beacon_timer);
+
+ spin_unlock_irqrestore(&ieee->beacon_lock, flags);
++ del_timer_sync(&ieee->beacon_timer);
+ }
+
+ void ieee80211_stop_send_beacons(struct ieee80211_device *ieee)
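+
Both beacons_stop() fixes above move del_timer_sync() outside the beacon_lock: a synchronous cancel must not run under a lock the timer handler itself takes, or the cancel waits on the handler while the handler waits on the lock. A userspace analogue with a mutex and a joinable worker; the deadlock would occur if pthread_join() ran while the mutex was still held and the worker was blocked on it:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int running = 1;

/* Stands in for the timer handler, which takes beacon_lock. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    printf("worker saw running=%d\n", running);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;

    if (pthread_create(&t, NULL, worker, NULL))
        return 1;

    pthread_mutex_lock(&lock);
    running = 0;                 /* the state change stays under the lock */
    pthread_mutex_unlock(&lock); /* drop the lock first... */

    pthread_join(t, NULL);       /* ...then wait synchronously, like del_timer_sync() */
    return 0;
}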
+diff --git a/drivers/staging/rtl8712/os_intfs.c b/drivers/staging/rtl8712/os_intfs.c
+index d15d52c0d1a74..003e972051240 100644
+--- a/drivers/staging/rtl8712/os_intfs.c
++++ b/drivers/staging/rtl8712/os_intfs.c
+@@ -332,7 +332,6 @@ void r8712_free_drv_sw(struct _adapter *padapter)
+ r8712_free_evt_priv(&padapter->evtpriv);
+ r8712_DeInitSwLeds(padapter);
+ r8712_free_mlme_priv(&padapter->mlmepriv);
+- r8712_free_io_queue(padapter);
+ _free_xmit_priv(&padapter->xmitpriv);
+ _r8712_free_sta_priv(&padapter->stapriv);
+ _r8712_free_recv_priv(&padapter->recvpriv);
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index ee4c61f85a076..1ff3e2658e77e 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -265,6 +265,7 @@ static uint r8712_usb_dvobj_init(struct _adapter *padapter)
+
+ static void r8712_usb_dvobj_deinit(struct _adapter *padapter)
+ {
++ r8712_free_io_queue(padapter);
+ }
+
+ void rtl871x_intf_stop(struct _adapter *padapter)
+@@ -302,9 +303,6 @@ void r871x_dev_unload(struct _adapter *padapter)
+ rtl8712_hal_deinit(padapter);
+ }
+
+- /*s6.*/
+- if (padapter->dvobj_deinit)
+- padapter->dvobj_deinit(padapter);
+ padapter->bup = false;
+ }
+ }
+@@ -538,13 +536,13 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf,
+ } else {
+ AutoloadFail = false;
+ }
+- if (((mac[0] == 0xff) && (mac[1] == 0xff) &&
++ if ((!AutoloadFail) ||
++ ((mac[0] == 0xff) && (mac[1] == 0xff) &&
+ (mac[2] == 0xff) && (mac[3] == 0xff) &&
+ (mac[4] == 0xff) && (mac[5] == 0xff)) ||
+ ((mac[0] == 0x00) && (mac[1] == 0x00) &&
+ (mac[2] == 0x00) && (mac[3] == 0x00) &&
+- (mac[4] == 0x00) && (mac[5] == 0x00)) ||
+- (!AutoloadFail)) {
++ (mac[4] == 0x00) && (mac[5] == 0x00))) {
+ mac[0] = 0x00;
+ mac[1] = 0xe0;
+ mac[2] = 0x4c;
+@@ -607,6 +605,8 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf)
+ /* Stop driver mlme relation timer */
+ r8712_stop_drv_timers(padapter);
+ r871x_dev_unload(padapter);
++ if (padapter->dvobj_deinit)
++ padapter->dvobj_deinit(padapter);
+ r8712_free_drv_sw(padapter);
+ free_netdev(pnetdev);
+
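+
The reordered condition in r871xu_drv_init() short-circuits on !AutoloadFail before poking the EEPROM bytes; the all-zero and all-0xff tests themselves are what the kernel's is_zero_ether_addr()/is_broadcast_ether_addr() helpers express. A compact runnable version of the check; the first three fallback bytes (00:e0:4c) match the hunk, the rest is made up:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static int is_zero_ether_addr(const uint8_t *a)
{
    static const uint8_t z[6];
    return !memcmp(a, z, 6);
}

static int is_broadcast_ether_addr(const uint8_t *a)
{
    static const uint8_t b[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
    return !memcmp(a, b, 6);
}

int main(void)
{
    uint8_t mac[6] = { 0 };
    int autoload_fail = 1;

    /* cheap flag first, byte inspection only if the EEPROM loaded */
    if (autoload_fail || is_zero_ether_addr(mac) ||
        is_broadcast_ether_addr(mac)) {
        uint8_t fallback[6] = { 0x00, 0xe0, 0x4c, 0x00, 0x00, 0x00 };
        memcpy(mac, fallback, 6);
    }
    printf("%02x:%02x:%02x\n", mac[0], mac[1], mac[2]);
    return 0;
}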
+diff --git a/drivers/staging/rtl8712/usb_ops.c b/drivers/staging/rtl8712/usb_ops.c
+index e64845e6adf3d..af9966d03979c 100644
+--- a/drivers/staging/rtl8712/usb_ops.c
++++ b/drivers/staging/rtl8712/usb_ops.c
+@@ -29,7 +29,8 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr)
+ u16 wvalue;
+ u16 index;
+ u16 len;
+- __le32 data;
++ int status;
++ __le32 data = 0;
+ struct intf_priv *intfpriv = intfhdl->pintfpriv;
+
+ request = 0x05;
+@@ -37,8 +38,10 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr)
+ index = 0;
+ wvalue = (u16)(addr & 0x0000ffff);
+ len = 1;
+- r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len,
+- requesttype);
++ status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index,
++ &data, len, requesttype);
++ if (status < 0)
++ return 0;
+ return (u8)(le32_to_cpu(data) & 0x0ff);
+ }
+
+@@ -49,7 +52,8 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr)
+ u16 wvalue;
+ u16 index;
+ u16 len;
+- __le32 data;
++ int status;
++ __le32 data = 0;
+ struct intf_priv *intfpriv = intfhdl->pintfpriv;
+
+ request = 0x05;
+@@ -57,8 +61,10 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr)
+ index = 0;
+ wvalue = (u16)(addr & 0x0000ffff);
+ len = 2;
+- r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len,
+- requesttype);
++ status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index,
++ &data, len, requesttype);
++ if (status < 0)
++ return 0;
+ return (u16)(le32_to_cpu(data) & 0xffff);
+ }
+
+@@ -69,7 +75,8 @@ static u32 usb_read32(struct intf_hdl *intfhdl, u32 addr)
+ u16 wvalue;
+ u16 index;
+ u16 len;
+- __le32 data;
++ int status;
++ __le32 data = 0;
+ struct intf_priv *intfpriv = intfhdl->pintfpriv;
+
+ request = 0x05;
+@@ -77,8 +84,10 @@ static u32 usb_read32(struct intf_hdl *intfhdl, u32 addr)
+ index = 0;
+ wvalue = (u16)(addr & 0x0000ffff);
+ len = 4;
+- r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len,
+- requesttype);
++ status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index,
++ &data, len, requesttype);
++ if (status < 0)
++ return 0;
+ return le32_to_cpu(data);
+ }
+
+diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme.c b/drivers/staging/rtl8723bs/core/rtw_mlme.c
+index 9202223ebc0ca..c29e3c68e61ee 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_mlme.c
++++ b/drivers/staging/rtl8723bs/core/rtw_mlme.c
+@@ -751,7 +751,9 @@ void rtw_surveydone_event_callback(struct adapter *adapter, u8 *pbuf)
+ }
+
+ if (check_fwstate(pmlmepriv, _FW_UNDER_SURVEY)) {
++ spin_unlock_bh(&pmlmepriv->lock);
+ del_timer_sync(&pmlmepriv->scan_to_timer);
++ spin_lock_bh(&pmlmepriv->lock);
+ _clr_fwstate_(pmlmepriv, _FW_UNDER_SURVEY);
+ }
+
+@@ -1238,8 +1240,10 @@ void rtw_joinbss_event_prehandle(struct adapter *adapter, u8 *pbuf)
+
+ spin_unlock_bh(&pmlmepriv->scanned_queue.lock);
+
++ spin_unlock_bh(&pmlmepriv->lock);
+ /* s5. Cancel assoc_timer */
+ del_timer_sync(&pmlmepriv->assoc_timer);
++ spin_lock_bh(&pmlmepriv->lock);
+ } else {
+ spin_unlock_bh(&(pmlmepriv->scanned_queue.lock));
+ }
+@@ -1545,7 +1549,7 @@ void _rtw_join_timeout_handler(struct timer_list *t)
+ if (adapter->bDriverStopped || adapter->bSurpriseRemoved)
+ return;
+
+- spin_lock_bh(&pmlmepriv->lock);
++ spin_lock_irq(&pmlmepriv->lock);
+
+ if (rtw_to_roam(adapter) > 0) { /* join timeout caused by roaming */
+ while (1) {
+@@ -1573,7 +1577,7 @@ void _rtw_join_timeout_handler(struct timer_list *t)
+
+ }
+
+- spin_unlock_bh(&pmlmepriv->lock);
++ spin_unlock_irq(&pmlmepriv->lock);
+ }
+
+ /*
+@@ -1586,11 +1590,11 @@ void rtw_scan_timeout_handler(struct timer_list *t)
+ mlmepriv.scan_to_timer);
+ struct mlme_priv *pmlmepriv = &adapter->mlmepriv;
+
+- spin_lock_bh(&pmlmepriv->lock);
++ spin_lock_irq(&pmlmepriv->lock);
+
+ _clr_fwstate_(pmlmepriv, _FW_UNDER_SURVEY);
+
+- spin_unlock_bh(&pmlmepriv->lock);
++ spin_unlock_irq(&pmlmepriv->lock);
+
+ rtw_indicate_scan_done(adapter, true);
+ }
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index cbd0ad85ffb1d..a1f5d1842811d 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -867,7 +867,7 @@ static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in)
+
+ static void tb_tunnel_dp(struct tb *tb)
+ {
+- int available_up, available_down, ret;
++ int available_up, available_down, ret, link_nr;
+ struct tb_cm *tcm = tb_priv(tb);
+ struct tb_port *port, *in, *out;
+ struct tb_tunnel *tunnel;
+@@ -912,6 +912,20 @@ static void tb_tunnel_dp(struct tb *tb)
+ return;
+ }
+
++ /*
++ * This is only applicable to links that are not bonded (so
++ * when Thunderbolt 1 hardware is involved somewhere in the
++ * topology). For these try to share the DP bandwidth between
++ * the two lanes.
++ */
++ link_nr = 1;
++ list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
++ if (tb_tunnel_is_dp(tunnel)) {
++ link_nr = 0;
++ break;
++ }
++ }
++
+ /*
+ * DP stream needs the domain to be active so runtime resume
+ * both ends of the tunnel.
+@@ -943,7 +957,8 @@ static void tb_tunnel_dp(struct tb *tb)
+ tb_dbg(tb, "available bandwidth for new DP tunnel %u/%u Mb/s\n",
+ available_up, available_down);
+
+- tunnel = tb_tunnel_alloc_dp(tb, in, out, available_up, available_down);
++ tunnel = tb_tunnel_alloc_dp(tb, in, out, link_nr, available_up,
++ available_down);
+ if (!tunnel) {
+ tb_port_dbg(out, "could not allocate DP tunnel\n");
+ goto err_reclaim;
+diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c
+index 1f69bab236ee9..66b6e665e96f0 100644
+--- a/drivers/thunderbolt/test.c
++++ b/drivers/thunderbolt/test.c
+@@ -1348,7 +1348,7 @@ static void tb_test_tunnel_dp(struct kunit *test)
+ in = &host->ports[5];
+ out = &dev->ports[13];
+
+- tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ tunnel = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+ KUNIT_EXPECT_EQ(test, tunnel->type, TB_TUNNEL_DP);
+ KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+@@ -1394,7 +1394,7 @@ static void tb_test_tunnel_dp_chain(struct kunit *test)
+ in = &host->ports[5];
+ out = &dev4->ports[14];
+
+- tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ tunnel = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+ KUNIT_EXPECT_EQ(test, tunnel->type, TB_TUNNEL_DP);
+ KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+@@ -1444,7 +1444,7 @@ static void tb_test_tunnel_dp_tree(struct kunit *test)
+ in = &dev2->ports[13];
+ out = &dev5->ports[13];
+
+- tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ tunnel = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+ KUNIT_EXPECT_EQ(test, tunnel->type, TB_TUNNEL_DP);
+ KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+@@ -1509,7 +1509,7 @@ static void tb_test_tunnel_dp_max_length(struct kunit *test)
+ in = &dev6->ports[13];
+ out = &dev12->ports[13];
+
+- tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ tunnel = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+ KUNIT_EXPECT_EQ(test, tunnel->type, TB_TUNNEL_DP);
+ KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
+@@ -1627,7 +1627,7 @@ static void tb_test_tunnel_port_on_path(struct kunit *test)
+ in = &dev2->ports[13];
+ out = &dev5->ports[13];
+
+- dp_tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ dp_tunnel = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, dp_tunnel != NULL);
+
+ KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, in));
+@@ -2009,7 +2009,7 @@ static void tb_test_credit_alloc_dp(struct kunit *test)
+ in = &host->ports[5];
+ out = &dev->ports[14];
+
+- tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ tunnel = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, tunnel != NULL);
+ KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
+
+@@ -2245,7 +2245,7 @@ static struct tb_tunnel *TB_TEST_DP_TUNNEL1(struct kunit *test,
+
+ in = &host->ports[5];
+ out = &dev->ports[13];
+- dp_tunnel1 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ dp_tunnel1 = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, dp_tunnel1 != NULL);
+ KUNIT_ASSERT_EQ(test, dp_tunnel1->npaths, (size_t)3);
+
+@@ -2282,7 +2282,7 @@ static struct tb_tunnel *TB_TEST_DP_TUNNEL2(struct kunit *test,
+
+ in = &host->ports[6];
+ out = &dev->ports[14];
+- dp_tunnel2 = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
++ dp_tunnel2 = tb_tunnel_alloc_dp(NULL, in, out, 1, 0, 0);
+ KUNIT_ASSERT_TRUE(test, dp_tunnel2 != NULL);
+ KUNIT_ASSERT_EQ(test, dp_tunnel2->npaths, (size_t)3);
+
+diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
+index a473cc7d9a8da..500a0afe30732 100644
+--- a/drivers/thunderbolt/tunnel.c
++++ b/drivers/thunderbolt/tunnel.c
+@@ -848,6 +848,7 @@ err_free:
+ * @tb: Pointer to the domain structure
+ * @in: DP in adapter port
+ * @out: DP out adapter port
++ * @link_nr: Preferred lane adapter when the link is not bonded
+ * @max_up: Maximum available upstream bandwidth for the DP tunnel (%0
+ * if not limited)
+ * @max_down: Maximum available downstream bandwidth for the DP tunnel
+@@ -859,8 +860,8 @@ err_free:
+ * Return: Returns a tb_tunnel on success or NULL on failure.
+ */
+ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
+- struct tb_port *out, int max_up,
+- int max_down)
++ struct tb_port *out, int link_nr,
++ int max_up, int max_down)
+ {
+ struct tb_tunnel *tunnel;
+ struct tb_path **paths;
+@@ -884,21 +885,21 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
+ paths = tunnel->paths;
+
+ path = tb_path_alloc(tb, in, TB_DP_VIDEO_HOPID, out, TB_DP_VIDEO_HOPID,
+- 1, "Video");
++ link_nr, "Video");
+ if (!path)
+ goto err_free;
+ tb_dp_init_video_path(path);
+ paths[TB_DP_VIDEO_PATH_OUT] = path;
+
+ path = tb_path_alloc(tb, in, TB_DP_AUX_TX_HOPID, out,
+- TB_DP_AUX_TX_HOPID, 1, "AUX TX");
++ TB_DP_AUX_TX_HOPID, link_nr, "AUX TX");
+ if (!path)
+ goto err_free;
+ tb_dp_init_aux_path(path);
+ paths[TB_DP_AUX_PATH_OUT] = path;
+
+ path = tb_path_alloc(tb, out, TB_DP_AUX_RX_HOPID, in,
+- TB_DP_AUX_RX_HOPID, 1, "AUX RX");
++ TB_DP_AUX_RX_HOPID, link_nr, "AUX RX");
+ if (!path)
+ goto err_free;
+ tb_dp_init_aux_path(path);
+diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h
+index 03e56076b5bcf..bb4d1f1d6d0b0 100644
+--- a/drivers/thunderbolt/tunnel.h
++++ b/drivers/thunderbolt/tunnel.h
+@@ -71,8 +71,8 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
+ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
+ bool alloc_hopid);
+ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
+- struct tb_port *out, int max_up,
+- int max_down);
++ struct tb_port *out, int link_nr,
++ int max_up, int max_down);
+ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
+ struct tb_port *dst, int transmit_path,
+ int transmit_ring, int receive_path,
+diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c
+index 10c13b93ed526..9355d97ff5913 100644
+--- a/drivers/tty/goldfish.c
++++ b/drivers/tty/goldfish.c
+@@ -405,6 +405,7 @@ static int goldfish_tty_probe(struct platform_device *pdev)
+ err_tty_register_device_failed:
+ free_irq(irq, qtty);
+ err_dec_line_count:
++ tty_port_destroy(&qtty->port);
+ goldfish_tty_current_line_count--;
+ if (goldfish_tty_current_line_count == 0)
+ goldfish_tty_delete_driver();
+@@ -426,6 +427,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
+ iounmap(qtty->base);
+ qtty->base = NULL;
+ free_irq(qtty->irq, pdev);
++ tty_port_destroy(&qtty->port);
+ goldfish_tty_current_line_count--;
+ if (goldfish_tty_current_line_count == 0)
+ goldfish_tty_delete_driver();
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index efc72104c8400..bdc314aeab886 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -1975,6 +1975,35 @@ static bool canon_copy_from_read_buf(struct tty_struct *tty,
+ return ldata->read_tail != canon_head;
+ }
+
++/*
++ * If we finished a read at the exact location of an
++ * EOF (special EOL character that's a __DISABLED_CHAR)
++ * in the stream, silently eat the EOF.
++ */
++static void canon_skip_eof(struct tty_struct *tty)
++{
++ struct n_tty_data *ldata = tty->disc_data;
++ size_t tail, canon_head;
++
++ canon_head = smp_load_acquire(&ldata->canon_head);
++ tail = ldata->read_tail;
++
++ // No data?
++ if (tail == canon_head)
++ return;
++
++ // See if the tail position is EOF in the circular buffer
++ tail &= (N_TTY_BUF_SIZE - 1);
++ if (!test_bit(tail, ldata->read_flags))
++ return;
++ if (read_buf(ldata, tail) != __DISABLED_CHAR)
++ return;
++
++ // Clear the EOL bit, skip the EOF char.
++ clear_bit(tail, ldata->read_flags);
++ smp_store_release(&ldata->read_tail, ldata->read_tail + 1);
++}
++
+ /**
+ * job_control - check job control
+ * @tty: tty
+@@ -2045,7 +2074,14 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file,
+ */
+ if (*cookie) {
+ if (ldata->icanon && !L_EXTPROC(tty)) {
+- if (canon_copy_from_read_buf(tty, &kb, &nr))
++ /*
++ * If we have filled the user buffer, see
++ * if we should skip an EOF character before
++ * releasing the lock and returning done.
++ */
++ if (!nr)
++ canon_skip_eof(tty);
++ else if (canon_copy_from_read_buf(tty, &kb, &nr))
+ return kb - kbuf;
+ } else {
+ if (copy_from_read_buf(tty, &kb, &nr))
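+
canon_skip_eof() above walks a power-of-two ring where a parallel bitmap (read_flags) marks line endings, and an EOL slot holding __DISABLED_CHAR is the EOF marker to be consumed silently when the reader stops exactly on it. A toy ring with the same three-step test; sizes and names are simplified, and the kernel's acquire/release ordering is omitted:

#include <stdio.h>

#define BUF_SIZE 8        /* power of two, like N_TTY_BUF_SIZE */
#define DISABLED_CHAR 0

static char buf[BUF_SIZE];
static unsigned char eol_flag[BUF_SIZE]; /* stands in for read_flags */
static unsigned long tail, head;

static void skip_eof(void)
{
    unsigned long t = tail & (BUF_SIZE - 1);

    if (tail == head)            /* no data */
        return;
    if (!eol_flag[t])            /* tail is not an EOL position */
        return;
    if (buf[t] != DISABLED_CHAR) /* EOL, but a real line end */
        return;
    eol_flag[t] = 0;             /* eat the EOF marker */
    tail++;
}

int main(void)
{
    buf[0] = DISABLED_CHAR;
    eol_flag[0] = 1;
    head = 1;
    tail = 0;

    skip_eof();
    printf("tail=%lu head=%lu\n", tail, head); /* 1 1: EOF consumed */
    return 0;
}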
+diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+index c2cecc6f47db4..179bb1375636b 100644
+--- a/drivers/tty/serial/8250/8250_aspeed_vuart.c
++++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+@@ -429,6 +429,8 @@ static int aspeed_vuart_probe(struct platform_device *pdev)
+ timer_setup(&vuart->unthrottle_timer, aspeed_vuart_unthrottle_exp, 0);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!res)
++ return -EINVAL;
+
+ memset(&port, 0, sizeof(port));
+ port.port.private_data = vuart;
+diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
+index 251f0018ae8ca..dba5950b8d0e2 100644
+--- a/drivers/tty/serial/8250/8250_fintek.c
++++ b/drivers/tty/serial/8250/8250_fintek.c
+@@ -200,12 +200,12 @@ static int fintek_8250_rs485_config(struct uart_port *port,
+ if (!pdata)
+ return -EINVAL;
+
+- /* Hardware do not support same RTS level on send and receive */
+- if (!(rs485->flags & SER_RS485_RTS_ON_SEND) ==
+- !(rs485->flags & SER_RS485_RTS_AFTER_SEND))
+- return -EINVAL;
+
+ if (rs485->flags & SER_RS485_ENABLED) {
++		/* Hardware does not support the same RTS level on send and receive */
++ if (!(rs485->flags & SER_RS485_RTS_ON_SEND) ==
++ !(rs485->flags & SER_RS485_RTS_AFTER_SEND))
++ return -EINVAL;
+ memset(rs485->padding, 0, sizeof(rs485->padding));
+ config |= RS485_URA;
+ } else {
+diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+index d6d3db9c3b1f8..db07d6a5d764d 100644
+--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c
++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c
+@@ -1247,7 +1247,7 @@ static int cpm_uart_init_port(struct device_node *np,
+ }
+
+ #ifdef CONFIG_PPC_EARLY_DEBUG_CPM
+-#ifdef CONFIG_CONSOLE_POLL
++#if defined(CONFIG_CONSOLE_POLL) && defined(CONFIG_SERIAL_CPM_CONSOLE)
+ if (!udbg_port)
+ #endif
+ udbg_putc = NULL;
+diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c
+index c7f81aa1ce912..5fea9bf86e85e 100644
+--- a/drivers/tty/serial/digicolor-usart.c
++++ b/drivers/tty/serial/digicolor-usart.c
+@@ -309,6 +309,8 @@ static void digicolor_uart_set_termios(struct uart_port *port,
+ case CS8:
+ default:
+ config |= UA_CONFIG_CHAR_LEN;
++ termios->c_cflag &= ~CSIZE;
++ termios->c_cflag |= CS8;
+ break;
+ }
+
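+
The digicolor change, together with the matching rda-uart, serial_txx9, sh-sci, sifive and st-asc hunks later in this patch (and the stm32-usart CS8 fallback), applies one rule: when the hardware forces a word length, write the real value back into termios->c_cflag so userspace is not told a lie about the configuration. A sketch with the usual Linux CSIZE/CS7/CS8 bit values; they differ on some architectures:

#include <stdio.h>

#define CSIZE 0x30
#define CS7   0x20
#define CS8   0x30

/* Hardware in this sketch only does 8-bit words; report that back. */
static unsigned int fix_up_cflag(unsigned int c_cflag)
{
    if ((c_cflag & CSIZE) != CS8) {
        c_cflag &= ~CSIZE;
        c_cflag |= CS8;
    }
    return c_cflag;
}

int main(void)
{
    unsigned int req = CS7;

    printf("requested %#x, got %#x\n", req, fix_up_cflag(req)); /* 0x20 -> 0x30 */
    return 0;
}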
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index d32c25bc973b4..b1307ef344686 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -239,8 +239,6 @@
+ /* IMX lpuart has four extra unused regs located at the beginning */
+ #define IMX_REG_OFF 0x10
+
+-static DEFINE_IDA(fsl_lpuart_ida);
+-
+ enum lpuart_type {
+ VF610_LPUART,
+ LS1021A_LPUART,
+@@ -276,7 +274,6 @@ struct lpuart_port {
+ int rx_dma_rng_buf_len;
+ unsigned int dma_tx_nents;
+ wait_queue_head_t dma_wait;
+- bool id_allocated;
+ };
+
+ struct lpuart_soc_data {
+@@ -2711,23 +2708,18 @@ static int lpuart_probe(struct platform_device *pdev)
+
+ ret = of_alias_get_id(np, "serial");
+ if (ret < 0) {
+- ret = ida_simple_get(&fsl_lpuart_ida, 0, UART_NR, GFP_KERNEL);
+- if (ret < 0) {
+- dev_err(&pdev->dev, "port line is full, add device failed\n");
+- return ret;
+- }
+- sport->id_allocated = true;
++ dev_err(&pdev->dev, "failed to get alias id, errno %d\n", ret);
++ return ret;
+ }
+ if (ret >= ARRAY_SIZE(lpuart_ports)) {
+ dev_err(&pdev->dev, "serial%d out of range\n", ret);
+- ret = -EINVAL;
+- goto failed_out_of_range;
++ return -EINVAL;
+ }
+ sport->port.line = ret;
+
+ ret = lpuart_enable_clks(sport);
+ if (ret)
+- goto failed_clock_enable;
++ return ret;
+ sport->port.uartclk = lpuart_get_baud_clk_rate(sport);
+
+ lpuart_ports[sport->port.line] = sport;
+@@ -2775,10 +2767,6 @@ failed_reset:
+ uart_remove_one_port(&lpuart_reg, &sport->port);
+ failed_attach_port:
+ lpuart_disable_clks(sport);
+-failed_clock_enable:
+-failed_out_of_range:
+- if (sport->id_allocated)
+- ida_simple_remove(&fsl_lpuart_ida, sport->port.line);
+ return ret;
+ }
+
+@@ -2788,9 +2776,6 @@ static int lpuart_remove(struct platform_device *pdev)
+
+ uart_remove_one_port(&lpuart_reg, &sport->port);
+
+- if (sport->id_allocated)
+- ida_simple_remove(&fsl_lpuart_ida, sport->port.line);
+-
+ lpuart_disable_clks(sport);
+
+ if (sport->dma_tx_chan)
+@@ -2920,7 +2905,6 @@ static int __init lpuart_serial_init(void)
+
+ static void __exit lpuart_serial_exit(void)
+ {
+- ida_destroy(&fsl_lpuart_ida);
+ platform_driver_unregister(&lpuart_driver);
+ uart_unregister_driver(&lpuart_reg);
+ }
+diff --git a/drivers/tty/serial/icom.c b/drivers/tty/serial/icom.c
+index 03a2fe9f4c9a9..02b375ba2f078 100644
+--- a/drivers/tty/serial/icom.c
++++ b/drivers/tty/serial/icom.c
+@@ -1501,7 +1501,7 @@ static int icom_probe(struct pci_dev *dev,
+ retval = pci_read_config_dword(dev, PCI_COMMAND, &command_reg);
+ if (retval) {
+ dev_err(&dev->dev, "PCI Config read FAILED\n");
+- return retval;
++ goto probe_exit0;
+ }
+
+ pci_write_config_dword(dev, PCI_COMMAND,
+diff --git a/drivers/tty/serial/meson_uart.c b/drivers/tty/serial/meson_uart.c
+index 45e00d928253e..54a6b488bc8cb 100644
+--- a/drivers/tty/serial/meson_uart.c
++++ b/drivers/tty/serial/meson_uart.c
+@@ -253,6 +253,14 @@ static const char *meson_uart_type(struct uart_port *port)
+ return (port->type == PORT_MESON) ? "meson_uart" : NULL;
+ }
+
++/*
++ * This function is called only from probe() using a temporary io mapping
++ * in order to perform a reset before setting up the device. Since the
++ * temporarily mapped region was successfully requested, there can be no
++ * console on this port at this time. Hence it is not necessary for this
++ * function to acquire the port->lock. (Since there is no console on this
++ * port at this time, the port->lock is not initialized yet.)
++ */
+ static void meson_uart_reset(struct uart_port *port)
+ {
+ u32 val;
+@@ -267,9 +275,12 @@ static void meson_uart_reset(struct uart_port *port)
+
+ static int meson_uart_startup(struct uart_port *port)
+ {
++ unsigned long flags;
+ u32 val;
+ int ret = 0;
+
++ spin_lock_irqsave(&port->lock, flags);
++
+ val = readl(port->membase + AML_UART_CONTROL);
+ val |= AML_UART_CLEAR_ERR;
+ writel(val, port->membase + AML_UART_CONTROL);
+@@ -285,6 +296,8 @@ static int meson_uart_startup(struct uart_port *port)
+ val = (AML_UART_RECV_IRQ(1) | AML_UART_XMIT_IRQ(port->fifosize / 2));
+ writel(val, port->membase + AML_UART_MISC);
+
++ spin_unlock_irqrestore(&port->lock, flags);
++
+ ret = request_irq(port->irq, meson_uart_interrupt, 0,
+ port->name, port);
+
+diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c
+index 23c94b9277760..e676ec761f18c 100644
+--- a/drivers/tty/serial/msm_serial.c
++++ b/drivers/tty/serial/msm_serial.c
+@@ -1599,6 +1599,7 @@ static inline struct uart_port *msm_get_port_from_line(unsigned int line)
+ static void __msm_console_write(struct uart_port *port, const char *s,
+ unsigned int count, bool is_uartdm)
+ {
++ unsigned long flags;
+ int i;
+ int num_newlines = 0;
+ bool replaced = false;
+@@ -1616,6 +1617,8 @@ static void __msm_console_write(struct uart_port *port, const char *s,
+ num_newlines++;
+ count += num_newlines;
+
++ local_irq_save(flags);
++
+ if (port->sysrq)
+ locked = 0;
+ else if (oops_in_progress)
+@@ -1661,6 +1664,8 @@ static void __msm_console_write(struct uart_port *port, const char *s,
+
+ if (locked)
+ spin_unlock(&port->lock);
++
++ local_irq_restore(flags);
+ }
+
+ static void msm_console_write(struct console *co, const char *s,
+diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c
+index 91f1eb0058d7e..9a6611cfc18e9 100644
+--- a/drivers/tty/serial/owl-uart.c
++++ b/drivers/tty/serial/owl-uart.c
+@@ -731,6 +731,7 @@ static int owl_uart_probe(struct platform_device *pdev)
+ owl_port->port.uartclk = clk_get_rate(owl_port->clk);
+ if (owl_port->port.uartclk == 0) {
+ dev_err(&pdev->dev, "clock rate is zero\n");
++ clk_disable_unprepare(owl_port->clk);
+ return -EINVAL;
+ }
+ owl_port->port.flags = UPF_BOOT_AUTOCONF | UPF_IOREMAP | UPF_LOW_LATENCY;
+diff --git a/drivers/tty/serial/rda-uart.c b/drivers/tty/serial/rda-uart.c
+index d550d8fa2fabf..a8fe1c3ebcd98 100644
+--- a/drivers/tty/serial/rda-uart.c
++++ b/drivers/tty/serial/rda-uart.c
+@@ -262,6 +262,8 @@ static void rda_uart_set_termios(struct uart_port *port,
+ fallthrough;
+ case CS7:
+ ctrl &= ~RDA_UART_DBITS_8;
++ termios->c_cflag &= ~CSIZE;
++ termios->c_cflag |= CS7;
+ break;
+ default:
+ ctrl |= RDA_UART_DBITS_8;
+diff --git a/drivers/tty/serial/sa1100.c b/drivers/tty/serial/sa1100.c
+index 697b6a002a16e..4ddcc985621a8 100644
+--- a/drivers/tty/serial/sa1100.c
++++ b/drivers/tty/serial/sa1100.c
+@@ -446,6 +446,8 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios,
+ baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16);
+ quot = uart_get_divisor(port, baud);
+
++ del_timer_sync(&sport->timer);
++
+ spin_lock_irqsave(&sport->port.lock, flags);
+
+ sport->port.read_status_mask &= UTSR0_TO_SM(UTSR0_TFS);
+@@ -476,8 +478,6 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios,
+ UTSR1_TO_SM(UTSR1_ROR);
+ }
+
+- del_timer_sync(&sport->timer);
+-
+ /*
+ * Update the per-port timeout.
+ */
+diff --git a/drivers/tty/serial/serial_txx9.c b/drivers/tty/serial/serial_txx9.c
+index aaca4fe38486a..1f8362d5e3b97 100644
+--- a/drivers/tty/serial/serial_txx9.c
++++ b/drivers/tty/serial/serial_txx9.c
+@@ -644,6 +644,8 @@ serial_txx9_set_termios(struct uart_port *port, struct ktermios *termios,
+ case CS6: /* not supported */
+ case CS8:
+ cval |= TXX9_SILCR_UMODE_8BIT;
++ termios->c_cflag &= ~CSIZE;
++ termios->c_cflag |= CS8;
+ break;
+ }
+
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 968967d722d49..e55895f0a4ffd 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2379,8 +2379,12 @@ static void sci_set_termios(struct uart_port *port, struct ktermios *termios,
+ int best_clk = -1;
+ unsigned long flags;
+
+- if ((termios->c_cflag & CSIZE) == CS7)
++ if ((termios->c_cflag & CSIZE) == CS7) {
+ smr_val |= SCSMR_CHR;
++ } else {
++ termios->c_cflag &= ~CSIZE;
++ termios->c_cflag |= CS8;
++ }
+ if (termios->c_cflag & PARENB)
+ smr_val |= SCSMR_PE;
+ if (termios->c_cflag & PARODD)
+diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c
+index b79900d0e91a6..cba44483cb03e 100644
+--- a/drivers/tty/serial/sifive.c
++++ b/drivers/tty/serial/sifive.c
+@@ -666,12 +666,16 @@ static void sifive_serial_set_termios(struct uart_port *port,
+ int rate;
+ char nstop;
+
+- if ((termios->c_cflag & CSIZE) != CS8)
++ if ((termios->c_cflag & CSIZE) != CS8) {
+ dev_err_once(ssp->port.dev, "only 8-bit words supported\n");
++ termios->c_cflag &= ~CSIZE;
++ termios->c_cflag |= CS8;
++ }
+ if (termios->c_iflag & (INPCK | PARMRK))
+ dev_err_once(ssp->port.dev, "parity checking not supported\n");
+ if (termios->c_iflag & BRKINT)
+ dev_err_once(ssp->port.dev, "BREAK detection not supported\n");
++ termios->c_iflag &= ~(INPCK|PARMRK|BRKINT);
+
+ /* Set number of stop bits */
+ nstop = (termios->c_cflag & CSTOPB) ? 2 : 1;
+@@ -998,7 +1002,7 @@ static int sifive_serial_probe(struct platform_device *pdev)
+ /* Set up clock divider */
+ ssp->clkin_rate = clk_get_rate(ssp->clk);
+ ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE;
+- ssp->port.uartclk = ssp->baud_rate * 16;
++ ssp->port.uartclk = ssp->clkin_rate;
+ __ssp_update_div(ssp);
+
+ platform_set_drvdata(pdev, ssp);
+diff --git a/drivers/tty/serial/st-asc.c b/drivers/tty/serial/st-asc.c
+index 87e480cc8206d..5a45633aaea8d 100644
+--- a/drivers/tty/serial/st-asc.c
++++ b/drivers/tty/serial/st-asc.c
+@@ -535,10 +535,14 @@ static void asc_set_termios(struct uart_port *port, struct ktermios *termios,
+ /* set character length */
+ if ((cflag & CSIZE) == CS7) {
+ ctrl_val |= ASC_CTL_MODE_7BIT_PAR;
++ cflag |= PARENB;
+ } else {
+ ctrl_val |= (cflag & PARENB) ? ASC_CTL_MODE_8BIT_PAR :
+ ASC_CTL_MODE_8BIT;
++ cflag &= ~CSIZE;
++ cflag |= CS8;
+ }
++ termios->c_cflag = cflag;
+
+ /* set stop bit */
+ ctrl_val |= (cflag & CSTOPB) ? ASC_CTL_STOP_2BIT : ASC_CTL_STOP_1BIT;
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index 9570002d07e75..9bc970be59bab 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -1037,13 +1037,22 @@ static void stm32_usart_set_termios(struct uart_port *port,
+ * CS8 or (CS7 + parity), 8 bits word aka [M1:M0] = 0b00
+ * M0 and M1 already cleared by cr1 initialization.
+ */
+- if (bits == 9)
++ if (bits == 9) {
+ cr1 |= USART_CR1_M0;
+- else if ((bits == 7) && cfg->has_7bits_data)
++ } else if ((bits == 7) && cfg->has_7bits_data) {
+ cr1 |= USART_CR1_M1;
+- else if (bits != 8)
++ } else if (bits != 8) {
+ dev_dbg(port->dev, "Unsupported data bits config: %u bits\n"
+ , bits);
++ cflag &= ~CSIZE;
++ cflag |= CS8;
++ termios->c_cflag = cflag;
++ bits = 8;
++ if (cflag & PARENB) {
++ bits++;
++ cr1 |= USART_CR1_M0;
++ }
++ }
+
+ if (ofs->rtor != UNDEF_REG && (stm32_port->rx_ch ||
+ (stm32_port->fifoen &&
+diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c
+index e1fa52d31474f..7c788c697f3e1 100644
+--- a/drivers/tty/serial/uartlite.c
++++ b/drivers/tty/serial/uartlite.c
+@@ -321,7 +321,8 @@ static void ulite_set_termios(struct uart_port *port, struct ktermios *termios,
+ struct uartlite_data *pdata = port->private_data;
+
+ /* Set termios to what the hardware supports */
+- termios->c_cflag &= ~(BRKINT | CSTOPB | PARENB | PARODD | CSIZE);
++ termios->c_iflag &= ~BRKINT;
++ termios->c_cflag &= ~(CSTOPB | PARENB | PARODD | CSIZE);
+ termios->c_cflag |= pdata->cflags & (PARENB | PARODD | CSIZE);
+ tty_termios_encode_baud_rate(termios, pdata->baud, pdata->baud);
+
+diff --git a/drivers/tty/synclink_gt.c b/drivers/tty/synclink_gt.c
+index 25c558e65ece0..9bc2a92652772 100644
+--- a/drivers/tty/synclink_gt.c
++++ b/drivers/tty/synclink_gt.c
+@@ -1746,6 +1746,8 @@ static int hdlcdev_init(struct slgt_info *info)
+ */
+ static void hdlcdev_exit(struct slgt_info *info)
+ {
++ if (!info->netdev)
++ return;
+ unregister_hdlc_device(info->netdev);
+ free_netdev(info->netdev);
+ info->netdev = NULL;
+diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
+index bbfd004449b5b..34cfdda4aff5d 100644
+--- a/drivers/tty/sysrq.c
++++ b/drivers/tty/sysrq.c
+@@ -232,8 +232,10 @@ static void showacpu(void *dummy)
+ unsigned long flags;
+
+ /* Idle CPUs have no interesting backtrace. */
+- if (idle_cpu(smp_processor_id()))
++ if (idle_cpu(smp_processor_id())) {
++ pr_info("CPU%d: backtrace skipped as idling\n", smp_processor_id());
+ return;
++ }
+
+ raw_spin_lock_irqsave(&show_lock, flags);
+ pr_info("CPU%d:\n", smp_processor_id());
+@@ -260,10 +262,13 @@ static void sysrq_handle_showallcpus(int key)
+
+ if (in_hardirq())
+ regs = get_irq_regs();
+- if (regs) {
+- pr_info("CPU%d:\n", smp_processor_id());
++
++ pr_info("CPU%d:\n", smp_processor_id());
++ if (regs)
+ show_regs(regs);
+- }
++ else
++ show_stack(NULL, NULL, KERN_INFO);
++
+ schedule_work(&sysrq_showallcpus);
+ }
+ }
+diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
+index d630cccd2e6ea..5af810cd8a58f 100644
+--- a/drivers/usb/core/hcd-pci.c
++++ b/drivers/usb/core/hcd-pci.c
+@@ -616,10 +616,10 @@ const struct dev_pm_ops usb_hcd_pci_pm_ops = {
+ .suspend_noirq = hcd_pci_suspend_noirq,
+ .resume_noirq = hcd_pci_resume_noirq,
+ .resume = hcd_pci_resume,
+- .freeze = check_root_hub_suspended,
++ .freeze = hcd_pci_suspend,
+ .freeze_noirq = check_root_hub_suspended,
+ .thaw_noirq = NULL,
+- .thaw = NULL,
++ .thaw = hcd_pci_resume,
+ .poweroff = hcd_pci_suspend,
+ .poweroff_noirq = hcd_pci_suspend_noirq,
+ .restore_noirq = hcd_pci_resume_noirq,
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index eee3504397e6e..fe2a58c758610 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -4544,7 +4544,6 @@ static int dwc2_hsotg_udc_start(struct usb_gadget *gadget,
+
+ WARN_ON(hsotg->driver);
+
+- driver->driver.bus = NULL;
+ hsotg->driver = driver;
+ hsotg->gadget.dev.of_node = hsotg->dev->of_node;
+ hsotg->gadget.speed = USB_SPEED_UNKNOWN;
+diff --git a/drivers/usb/dwc3/drd.c b/drivers/usb/dwc3/drd.c
+index f148b0370f829..81ff21bd405a8 100644
+--- a/drivers/usb/dwc3/drd.c
++++ b/drivers/usb/dwc3/drd.c
+@@ -454,13 +454,8 @@ static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
+ * This device property is for kernel internal use only and
+ * is expected to be set by the glue code.
+ */
+- if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) {
+- edev = extcon_get_extcon_dev(name);
+- if (!edev)
+- return ERR_PTR(-EPROBE_DEFER);
+-
+- return edev;
+- }
++ if (device_property_read_string(dev, "linux,extcon-name", &name) == 0)
++ return extcon_get_extcon_dev(name);
+
+ /*
+ * Try to get an extcon device from the USB PHY controller's "port"
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index f08b2178fd32d..9c8887615701f 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -256,7 +256,7 @@ static void dwc3_pci_resume_work(struct work_struct *work)
+ int ret;
+
+ ret = pm_runtime_get_sync(&dwc3->dev);
+- if (ret) {
++ if (ret < 0) {
+ pm_runtime_put_sync_autosuspend(&dwc3->dev);
+ return;
+ }
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 78ec6af79c7f1..5c1ae0d0ed47d 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1978,10 +1978,10 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
+ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep)
+ {
+ struct dwc3_request *req;
+- struct dwc3_request *tmp;
+ struct dwc3 *dwc = dep->dwc;
+
+- list_for_each_entry_safe(req, tmp, &dep->cancelled_list, list) {
++ while (!list_empty(&dep->cancelled_list)) {
++ req = next_request(&dep->cancelled_list);
+ dwc3_gadget_ep_skip_trbs(dep, req);
+ switch (req->status) {
+ case DWC3_REQUEST_STATUS_DISCONNECTED:
+@@ -1998,6 +1998,12 @@ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep)
+ dwc3_gadget_giveback(dep, req, -ECONNRESET);
+ break;
+ }
++ /*
++ * The endpoint is disabled, let the dwc3_remove_requests()
++ * handle the cleanup.
++ */
++ if (!dep->endpoint.desc)
++ break;
+ }
+ }
+
+@@ -3288,15 +3294,21 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep,
+ const struct dwc3_event_depevt *event, int status)
+ {
+ struct dwc3_request *req;
+- struct dwc3_request *tmp;
+
+- list_for_each_entry_safe(req, tmp, &dep->started_list, list) {
++ while (!list_empty(&dep->started_list)) {
+ int ret;
+
++ req = next_request(&dep->started_list);
+ ret = dwc3_gadget_ep_cleanup_completed_request(dep, event,
+ req, status);
+ if (ret)
+ break;
++ /*
++ * The endpoint is disabled, let the dwc3_remove_requests()
++ * handle the cleanup.
++ */
++ if (!dep->endpoint.desc)
++ break;
+ }
+ }
+
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index eda871973d6cc..f56c30cf151e4 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -7,7 +7,6 @@
+ * Authors: Felipe Balbi <balbi@ti.com>,
+ */
+
+-#include <linux/acpi.h>
+ #include <linux/irq.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+@@ -83,7 +82,6 @@ int dwc3_host_init(struct dwc3 *dwc)
+ }
+
+ xhci->dev.parent = dwc->dev;
+- ACPI_COMPANION_SET(&xhci->dev, ACPI_COMPANION(dwc->dev));
+
+ dwc->xhci = xhci;
+
+diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c
+index 8835f6bd528e1..8c7f0991c21b5 100644
+--- a/drivers/usb/host/isp116x-hcd.c
++++ b/drivers/usb/host/isp116x-hcd.c
+@@ -1541,10 +1541,12 @@ static int isp116x_remove(struct platform_device *pdev)
+
+ iounmap(isp116x->data_reg);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+- release_mem_region(res->start, 2);
++ if (res)
++ release_mem_region(res->start, 2);
+ iounmap(isp116x->addr_reg);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- release_mem_region(res->start, 2);
++ if (res)
++ release_mem_region(res->start, 2);
+
+ usb_put_hcd(hcd);
+ return 0;
+diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c
+index e82ff2a49672d..4f5498521e1b1 100644
+--- a/drivers/usb/host/oxu210hp-hcd.c
++++ b/drivers/usb/host/oxu210hp-hcd.c
+@@ -3909,8 +3909,10 @@ static int oxu_bus_suspend(struct usb_hcd *hcd)
+ }
+ }
+
++ spin_unlock_irq(&oxu->lock);
+ /* turn off now-idle HC */
+ del_timer_sync(&oxu->watchdog);
++ spin_lock_irq(&oxu->lock);
+ ehci_halt(oxu);
+ hcd->state = HC_STATE_SUSPENDED;
+
+diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
+index d2b7e613eb34f..f571a65ae6ee2 100644
+--- a/drivers/usb/musb/omap2430.c
++++ b/drivers/usb/musb/omap2430.c
+@@ -362,6 +362,7 @@ static int omap2430_probe(struct platform_device *pdev)
+ control_node = of_parse_phandle(np, "ctrl-module", 0);
+ if (control_node) {
+ control_pdev = of_find_device_by_node(control_node);
++ of_node_put(control_node);
+ if (!control_pdev) {
+ dev_err(&pdev->dev, "Failed to get control device\n");
+ ret = -EINVAL;
+diff --git a/drivers/usb/phy/phy-omap-otg.c b/drivers/usb/phy/phy-omap-otg.c
+index ee0863c6553ed..6e6ef8c0bc7ed 100644
+--- a/drivers/usb/phy/phy-omap-otg.c
++++ b/drivers/usb/phy/phy-omap-otg.c
+@@ -95,8 +95,8 @@ static int omap_otg_probe(struct platform_device *pdev)
+ return -ENODEV;
+
+ extcon = extcon_get_extcon_dev(config->extcon);
+- if (!extcon)
+- return -EPROBE_DEFER;
++ if (IS_ERR(extcon))
++ return PTR_ERR(extcon);
+
+ otg_dev = devm_kzalloc(&pdev->dev, sizeof(*otg_dev), GFP_KERNEL);
+ if (!otg_dev)
+diff --git a/drivers/usb/storage/karma.c b/drivers/usb/storage/karma.c
+index 05cec81dcd3f2..38ddfedef629c 100644
+--- a/drivers/usb/storage/karma.c
++++ b/drivers/usb/storage/karma.c
+@@ -174,24 +174,25 @@ static void rio_karma_destructor(void *extra)
+
+ static int rio_karma_init(struct us_data *us)
+ {
+- int ret = 0;
+ struct karma_data *data = kzalloc(sizeof(struct karma_data), GFP_NOIO);
+
+ if (!data)
+- goto out;
++ return -ENOMEM;
+
+ data->recv = kmalloc(RIO_RECV_LEN, GFP_NOIO);
+ if (!data->recv) {
+ kfree(data);
+- goto out;
++ return -ENOMEM;
+ }
+
+ us->extra = data;
+ us->extra_destructor = rio_karma_destructor;
+- ret = rio_karma_send_command(RIO_ENTER_STORAGE, us);
+- data->in_storage = (ret == 0);
+-out:
+- return ret;
++ if (rio_karma_send_command(RIO_ENTER_STORAGE, us))
++ return -EIO;
++
++ data->in_storage = 1;
++
++ return 0;
+ }
+
+ static struct scsi_host_template karma_host_template;
+diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c
+index c8340de0ed495..d2aaf294b6493 100644
+--- a/drivers/usb/typec/mux.c
++++ b/drivers/usb/typec/mux.c
+@@ -131,8 +131,11 @@ typec_switch_register(struct device *parent,
+ sw->dev.class = &typec_mux_class;
+ sw->dev.type = &typec_switch_dev_type;
+ sw->dev.driver_data = desc->drvdata;
+- dev_set_name(&sw->dev, "%s-switch",
+- desc->name ? desc->name : dev_name(parent));
++ ret = dev_set_name(&sw->dev, "%s-switch", desc->name ? desc->name : dev_name(parent));
++ if (ret) {
++ put_device(&sw->dev);
++ return ERR_PTR(ret);
++ }
+
+ ret = device_add(&sw->dev);
+ if (ret) {
+@@ -338,8 +341,11 @@ typec_mux_register(struct device *parent, const struct typec_mux_desc *desc)
+ mux->dev.class = &typec_mux_class;
+ mux->dev.type = &typec_mux_dev_type;
+ mux->dev.driver_data = desc->drvdata;
+- dev_set_name(&mux->dev, "%s-mux",
+- desc->name ? desc->name : dev_name(parent));
++ ret = dev_set_name(&mux->dev, "%s-mux", desc->name ? desc->name : dev_name(parent));
++ if (ret) {
++ put_device(&mux->dev);
++ return ERR_PTR(ret);
++ }
+
+ ret = device_add(&mux->dev);
+ if (ret) {
+diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c
+index 72f9001b07921..96c55eaf3f808 100644
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -1708,8 +1708,8 @@ static int fusb302_probe(struct i2c_client *client,
+ */
+ if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) {
+ chip->extcon = extcon_get_extcon_dev(name);
+- if (!chip->extcon)
+- return -EPROBE_DEFER;
++ if (IS_ERR(chip->extcon))
++ return PTR_ERR(chip->extcon);
+ }
+
+ chip->vbus = devm_regulator_get(chip->dev, "vbus");
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index d8d3892e5a69a..3c6d452e3bf40 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -393,7 +393,6 @@ static int stub_probe(struct usb_device *udev)
+
+ err_port:
+ dev_set_drvdata(&udev->dev, NULL);
+- usb_put_dev(udev);
+
+ /* we already have busid_priv, just lock busid_lock */
+ spin_lock(&busid_priv->busid_lock);
+@@ -408,6 +407,7 @@ call_put_busid_priv:
+ put_busid_priv(busid_priv);
+
+ sdev_free:
++ usb_put_dev(udev);
+ stub_device_free(sdev);
+
+ return rc;
+diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c
+index 325c22008e536..5dd41e8215e0f 100644
+--- a/drivers/usb/usbip/stub_rx.c
++++ b/drivers/usb/usbip/stub_rx.c
+@@ -138,7 +138,9 @@ static int tweak_set_configuration_cmd(struct urb *urb)
+ req = (struct usb_ctrlrequest *) urb->setup_packet;
+ config = le16_to_cpu(req->wValue);
+
++ usb_lock_device(sdev->udev);
+ err = usb_set_configuration(sdev->udev, config);
++ usb_unlock_device(sdev->udev);
+ if (err && err != -ENODEV)
+ dev_err(&sdev->udev->dev, "can't set config #%d, error %d\n",
+ config, err);
+diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
+index d1a6b5ab543c3..474c6120c9558 100644
+--- a/drivers/vdpa/ifcvf/ifcvf_main.c
++++ b/drivers/vdpa/ifcvf/ifcvf_main.c
+@@ -514,7 +514,6 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
+ }
+
+ ifcvf_mgmt_dev->adapter = adapter;
+- pci_set_drvdata(pdev, ifcvf_mgmt_dev);
+
+ vf = &adapter->vf;
+ vf->dev_type = get_dev_type(pdev);
+@@ -629,6 +628,8 @@ static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ goto err;
+ }
+
++ pci_set_drvdata(pdev, ifcvf_mgmt_dev);
++
+ return 0;
+
+ err:
+diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
+index 1ea525433a5ca..4e7351110e43e 100644
+--- a/drivers/vdpa/vdpa.c
++++ b/drivers/vdpa/vdpa.c
+@@ -756,14 +756,19 @@ static int vdpa_nl_cmd_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
+ goto mdev_err;
+ }
+ err = vdpa_dev_fill(vdev, msg, info->snd_portid, info->snd_seq, 0, info->extack);
+- if (!err)
+- err = genlmsg_reply(msg, info);
++ if (err)
++ goto mdev_err;
++
++ err = genlmsg_reply(msg, info);
++ put_device(dev);
++ mutex_unlock(&vdpa_dev_mutex);
++ return err;
++
+ mdev_err:
+ put_device(dev);
+ err:
+ mutex_unlock(&vdpa_dev_mutex);
+- if (err)
+- nlmsg_free(msg);
++ nlmsg_free(msg);
+ return err;
+ }
+
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index f85d1a08ed87c..160e40d030847 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -1344,9 +1344,9 @@ static int vduse_create_dev(struct vduse_dev_config *config,
+
+ dev->minor = ret;
+ dev->msg_timeout = VDUSE_MSG_DEFAULT_TIMEOUT;
+- dev->dev = device_create(vduse_class, NULL,
+- MKDEV(MAJOR(vduse_major), dev->minor),
+- dev, "%s", config->name);
++ dev->dev = device_create_with_groups(vduse_class, NULL,
++ MKDEV(MAJOR(vduse_major), dev->minor),
++ dev, vduse_dev_groups, "%s", config->name);
+ if (IS_ERR(dev->dev)) {
+ ret = PTR_ERR(dev->dev);
+ goto err_dev;
+@@ -1595,7 +1595,6 @@ static int vduse_init(void)
+ return PTR_ERR(vduse_class);
+
+ vduse_class->devnode = vduse_devnode;
+- vduse_class->dev_groups = vduse_dev_groups;
+
+ ret = alloc_chrdev_region(&vduse_major, 0, VDUSE_DEV_MAX, "vduse");
+ if (ret)
+diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
+index 14e2043d76852..eab55accf381f 100644
+--- a/drivers/vhost/vringh.c
++++ b/drivers/vhost/vringh.c
+@@ -292,7 +292,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ int (*copy)(const struct vringh *vrh,
+ void *dst, const void *src, size_t len))
+ {
+- int err, count = 0, up_next, desc_max;
++ int err, count = 0, indirect_count = 0, up_next, desc_max;
+ struct vring_desc desc, *descs;
+ struct vringh_range range = { -1ULL, 0 }, slowrange;
+ bool slow = false;
+@@ -349,7 +349,12 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ continue;
+ }
+
+- if (count++ == vrh->vring.num) {
++ if (up_next == -1)
++ count++;
++ else
++ indirect_count++;
++
++ if (count > vrh->vring.num || indirect_count > desc_max) {
+ vringh_bad("Descriptor loop in %p", descs);
+ err = -ELOOP;
+ goto fail;
+@@ -411,6 +416,7 @@ __vringh_iov(struct vringh *vrh, u16 i,
+ i = return_from_indirect(vrh, &up_next,
+ &descs, &desc_max);
+ slow = false;
++ indirect_count = 0;
+ } else
+ break;
+ }
+diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
+index c8e0ea27caf1d..58c304a3b7c41 100644
+--- a/drivers/video/fbdev/hyperv_fb.c
++++ b/drivers/video/fbdev/hyperv_fb.c
+@@ -1009,7 +1009,6 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ struct pci_dev *pdev = NULL;
+ void __iomem *fb_virt;
+ int gen2vm = efi_enabled(EFI_BOOT);
+- resource_size_t pot_start, pot_end;
+ phys_addr_t paddr;
+ int ret;
+
+@@ -1060,23 +1059,7 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ dio_fb_size =
+ screen_width * screen_height * screen_depth / 8;
+
+- if (gen2vm) {
+- pot_start = 0;
+- pot_end = -1;
+- } else {
+- if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM) ||
+- pci_resource_len(pdev, 0) < screen_fb_size) {
+- pr_err("Resource not available or (0x%lx < 0x%lx)\n",
+- (unsigned long) pci_resource_len(pdev, 0),
+- (unsigned long) screen_fb_size);
+- goto err1;
+- }
+-
+- pot_end = pci_resource_end(pdev, 0);
+- pot_start = pot_end - screen_fb_size + 1;
+- }
+-
+- ret = vmbus_allocate_mmio(&par->mem, hdev, pot_start, pot_end,
++ ret = vmbus_allocate_mmio(&par->mem, hdev, 0, -1,
+ screen_fb_size, 0x100000, true);
+ if (ret != 0) {
+ pr_err("Unable to allocate framebuffer memory\n");
+diff --git a/drivers/video/fbdev/pxa3xx-gcu.c b/drivers/video/fbdev/pxa3xx-gcu.c
+index 4279e13a3b58d..9421d14d0eb02 100644
+--- a/drivers/video/fbdev/pxa3xx-gcu.c
++++ b/drivers/video/fbdev/pxa3xx-gcu.c
+@@ -650,6 +650,7 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev)
+ for (i = 0; i < 8; i++) {
+ ret = pxa3xx_gcu_add_buffer(dev, priv);
+ if (ret) {
++ pxa3xx_gcu_free_buffers(dev, priv);
+ dev_err(dev, "failed to allocate DMA memory\n");
+ goto err_disable_clk;
+ }
+@@ -666,15 +667,15 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev)
+ SHARED_SIZE, irq);
+ return 0;
+
+-err_free_dma:
+- dma_free_coherent(dev, SHARED_SIZE,
+- priv->shared, priv->shared_phys);
++err_disable_clk:
++ clk_disable_unprepare(priv->clk);
+
+ err_misc_deregister:
+ misc_deregister(&priv->misc_dev);
+
+-err_disable_clk:
+- clk_disable_unprepare(priv->clk);
++err_free_dma:
++ dma_free_coherent(dev, SHARED_SIZE,
++ priv->shared, priv->shared_phys);
+
+ return ret;
+ }
+@@ -687,6 +688,7 @@ static int pxa3xx_gcu_remove(struct platform_device *pdev)
+ pxa3xx_gcu_wait_idle(priv);
+ misc_deregister(&priv->misc_dev);
+ dma_free_coherent(dev, SHARED_SIZE, priv->shared, priv->shared_phys);
++ clk_disable_unprepare(priv->clk);
+ pxa3xx_gcu_free_buffers(dev, priv);
+
+ return 0;
+diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
+index e8b3ff2b9fbc2..6f6187fe8893e 100644
+--- a/drivers/virtio/virtio_pci_modern_dev.c
++++ b/drivers/virtio/virtio_pci_modern_dev.c
+@@ -340,6 +340,7 @@ err_map_notify:
+ err_map_isr:
+ pci_iounmap(pci_dev, mdev->common);
+ err_map_common:
++ pci_release_selected_regions(pci_dev, mdev->modern_bars);
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(vp_modern_probe);
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index db843f8258602..00ebeffc674fd 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -226,7 +226,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+
+ pm_runtime_enable(dev);
+ ret = pm_runtime_get_sync(dev);
+- if (ret) {
++ if (ret < 0) {
+ pm_runtime_put_noidle(dev);
+ pm_runtime_disable(&pdev->dev);
+ return dev_err_probe(dev, ret, "runtime pm failed\n");
+diff --git a/drivers/watchdog/rzg2l_wdt.c b/drivers/watchdog/rzg2l_wdt.c
+index 6b426df34fd6f..88274704b260d 100644
+--- a/drivers/watchdog/rzg2l_wdt.c
++++ b/drivers/watchdog/rzg2l_wdt.c
+@@ -43,6 +43,8 @@ struct rzg2l_wdt_priv {
+ struct reset_control *rstc;
+ unsigned long osc_clk_rate;
+ unsigned long delay;
++ struct clk *pclk;
++ struct clk *osc_clk;
+ };
+
+ static void rzg2l_wdt_wait_delay(struct rzg2l_wdt_priv *priv)
+@@ -53,7 +55,7 @@ static void rzg2l_wdt_wait_delay(struct rzg2l_wdt_priv *priv)
+
+ static u32 rzg2l_wdt_get_cycle_usec(unsigned long cycle, u32 wdttime)
+ {
+- u64 timer_cycle_us = 1024 * 1024 * (wdttime + 1) * MICRO;
++ u64 timer_cycle_us = 1024 * 1024ULL * (wdttime + 1) * MICRO;
+
+ return div64_ul(timer_cycle_us, cycle);
+ }
+@@ -86,7 +88,6 @@ static int rzg2l_wdt_start(struct watchdog_device *wdev)
+ {
+ struct rzg2l_wdt_priv *priv = watchdog_get_drvdata(wdev);
+
+- reset_control_deassert(priv->rstc);
+ pm_runtime_get_sync(wdev->parent);
+
+ /* Initialize time out */
+@@ -106,7 +107,7 @@ static int rzg2l_wdt_stop(struct watchdog_device *wdev)
+ struct rzg2l_wdt_priv *priv = watchdog_get_drvdata(wdev);
+
+ pm_runtime_put(wdev->parent);
+- reset_control_assert(priv->rstc);
++ reset_control_reset(priv->rstc);
+
+ return 0;
+ }
+@@ -118,7 +119,9 @@ static int rzg2l_wdt_restart(struct watchdog_device *wdev,
+
+ /* Reset the module before we modify any register */
+ reset_control_reset(priv->rstc);
+- pm_runtime_get_sync(wdev->parent);
++
++ clk_prepare_enable(priv->pclk);
++ clk_prepare_enable(priv->osc_clk);
+
+ /* smallest counter value to reboot soon */
+ rzg2l_wdt_write(priv, WDTSET_COUNTER_VAL(1), WDTSET);
+@@ -151,12 +154,11 @@ static const struct watchdog_ops rzg2l_wdt_ops = {
+ .restart = rzg2l_wdt_restart,
+ };
+
+-static void rzg2l_wdt_reset_assert_pm_disable_put(void *data)
++static void rzg2l_wdt_reset_assert_pm_disable(void *data)
+ {
+ struct watchdog_device *wdev = data;
+ struct rzg2l_wdt_priv *priv = watchdog_get_drvdata(wdev);
+
+- pm_runtime_put(wdev->parent);
+ pm_runtime_disable(wdev->parent);
+ reset_control_assert(priv->rstc);
+ }
+@@ -166,7 +168,6 @@ static int rzg2l_wdt_probe(struct platform_device *pdev)
+ struct device *dev = &pdev->dev;
+ struct rzg2l_wdt_priv *priv;
+ unsigned long pclk_rate;
+- struct clk *wdt_clk;
+ int ret;
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -178,22 +179,20 @@ static int rzg2l_wdt_probe(struct platform_device *pdev)
+ return PTR_ERR(priv->base);
+
+ /* Get watchdog main clock */
+- wdt_clk = clk_get(&pdev->dev, "oscclk");
+- if (IS_ERR(wdt_clk))
+- return dev_err_probe(&pdev->dev, PTR_ERR(wdt_clk), "no oscclk");
++ priv->osc_clk = devm_clk_get(&pdev->dev, "oscclk");
++ if (IS_ERR(priv->osc_clk))
++ return dev_err_probe(&pdev->dev, PTR_ERR(priv->osc_clk), "no oscclk");
+
+- priv->osc_clk_rate = clk_get_rate(wdt_clk);
+- clk_put(wdt_clk);
++ priv->osc_clk_rate = clk_get_rate(priv->osc_clk);
+ if (!priv->osc_clk_rate)
+ return dev_err_probe(&pdev->dev, -EINVAL, "oscclk rate is 0");
+
+ /* Get Peripheral clock */
+- wdt_clk = clk_get(&pdev->dev, "pclk");
+- if (IS_ERR(wdt_clk))
+- return dev_err_probe(&pdev->dev, PTR_ERR(wdt_clk), "no pclk");
++ priv->pclk = devm_clk_get(&pdev->dev, "pclk");
++ if (IS_ERR(priv->pclk))
++ return dev_err_probe(&pdev->dev, PTR_ERR(priv->pclk), "no pclk");
+
+- pclk_rate = clk_get_rate(wdt_clk);
+- clk_put(wdt_clk);
++ pclk_rate = clk_get_rate(priv->pclk);
+ if (!pclk_rate)
+ return dev_err_probe(&pdev->dev, -EINVAL, "pclk rate is 0");
+
+@@ -206,11 +205,6 @@ static int rzg2l_wdt_probe(struct platform_device *pdev)
+
+ reset_control_deassert(priv->rstc);
+ pm_runtime_enable(&pdev->dev);
+- ret = pm_runtime_resume_and_get(&pdev->dev);
+- if (ret < 0) {
+- dev_err(dev, "pm_runtime_resume_and_get failed ret=%pe", ERR_PTR(ret));
+- goto out_pm_get;
+- }
+
+ priv->wdev.info = &rzg2l_wdt_ident;
+ priv->wdev.ops = &rzg2l_wdt_ops;
+@@ -222,7 +216,7 @@ static int rzg2l_wdt_probe(struct platform_device *pdev)
+
+ watchdog_set_drvdata(&priv->wdev, priv);
+ ret = devm_add_action_or_reset(&pdev->dev,
+- rzg2l_wdt_reset_assert_pm_disable_put,
++ rzg2l_wdt_reset_assert_pm_disable,
+ &priv->wdev);
+ if (ret < 0)
+ return ret;
+@@ -235,12 +229,6 @@ static int rzg2l_wdt_probe(struct platform_device *pdev)
+ dev_warn(dev, "Specified timeout invalid, using default");
+
+ return devm_watchdog_register_device(&pdev->dev, &priv->wdev);
+-
+-out_pm_get:
+- pm_runtime_disable(dev);
+- reset_control_assert(priv->rstc);
+-
+- return ret;
+ }
+
+ static const struct of_device_id rzg2l_wdt_ids[] = {
+diff --git a/drivers/watchdog/ts4800_wdt.c b/drivers/watchdog/ts4800_wdt.c
+index c137ad2bd5c31..0ea554c7cda57 100644
+--- a/drivers/watchdog/ts4800_wdt.c
++++ b/drivers/watchdog/ts4800_wdt.c
+@@ -125,13 +125,16 @@ static int ts4800_wdt_probe(struct platform_device *pdev)
+ ret = of_property_read_u32_index(np, "syscon", 1, &reg);
+ if (ret < 0) {
+ dev_err(dev, "no offset in syscon\n");
++ of_node_put(syscon_np);
+ return ret;
+ }
+
+ /* allocate memory for watchdog struct */
+ wdt = devm_kzalloc(dev, sizeof(*wdt), GFP_KERNEL);
+- if (!wdt)
++ if (!wdt) {
++ of_node_put(syscon_np);
+ return -ENOMEM;
++ }
+
+ /* set regmap and offset to know where to write */
+ wdt->feed_offset = reg;
+diff --git a/drivers/watchdog/wdat_wdt.c b/drivers/watchdog/wdat_wdt.c
+index 195c8c004b69d..4fac8148a8e62 100644
+--- a/drivers/watchdog/wdat_wdt.c
++++ b/drivers/watchdog/wdat_wdt.c
+@@ -462,6 +462,7 @@ static int wdat_wdt_probe(struct platform_device *pdev)
+ return ret;
+
+ watchdog_set_nowayout(&wdat->wdd, nowayout);
++ watchdog_stop_on_reboot(&wdat->wdd);
+ return devm_watchdog_register_device(dev, &wdat->wdd);
+ }
+
+diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
+index 34742c6e189e3..f17c4c03db30c 100644
+--- a/drivers/xen/xlate_mmu.c
++++ b/drivers/xen/xlate_mmu.c
+@@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
+
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages);
+
+ struct remap_pfn {
+ struct mm_struct *mm;
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index da9b4f8577a1a..7f1cb3b73874e 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -462,8 +462,11 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode,
+ }
+
+ /* skip if starts before the current position */
+- if (offset < curr)
++ if (offset < curr) {
++ if (next > curr)
++ ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent);
+ continue;
++ }
+
+ /* found the next entry */
+ if (!dir_emit(ctx, dire->u.name, nlen,
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index b34d6286ee903..7744db8f5bb0a 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4709,15 +4709,17 @@ void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc)
+ }
+
+ /*
+- * wait for all write mds requests to flush.
++ * flush the mdlog and wait for all write mds requests to flush.
+ */
+-static void wait_unsafe_requests(struct ceph_mds_client *mdsc, u64 want_tid)
++static void flush_mdlog_and_wait_mdsc_unsafe_requests(struct ceph_mds_client *mdsc,
++ u64 want_tid)
+ {
+ struct ceph_mds_request *req = NULL, *nextreq;
++ struct ceph_mds_session *last_session = NULL;
+ struct rb_node *n;
+
+ mutex_lock(&mdsc->mutex);
+- dout("wait_unsafe_requests want %lld\n", want_tid);
++ dout("%s want %lld\n", __func__, want_tid);
+ restart:
+ req = __get_oldest_req(mdsc);
+ while (req && req->r_tid <= want_tid) {
+@@ -4729,14 +4731,32 @@ restart:
+ nextreq = NULL;
+ if (req->r_op != CEPH_MDS_OP_SETFILELOCK &&
+ (req->r_op & CEPH_MDS_OP_WRITE)) {
++ struct ceph_mds_session *s = req->r_session;
++
++ if (!s) {
++ req = nextreq;
++ continue;
++ }
++
+ /* write op */
+ ceph_mdsc_get_request(req);
+ if (nextreq)
+ ceph_mdsc_get_request(nextreq);
++ s = ceph_get_mds_session(s);
+ mutex_unlock(&mdsc->mutex);
+- dout("wait_unsafe_requests wait on %llu (want %llu)\n",
++
++ /* send flush mdlog request to MDS */
++ if (last_session != s) {
++ send_flush_mdlog(s);
++ ceph_put_mds_session(last_session);
++ last_session = s;
++ } else {
++ ceph_put_mds_session(s);
++ }
++ dout("%s wait on %llu (want %llu)\n", __func__,
+ req->r_tid, want_tid);
+ wait_for_completion(&req->r_safe_completion);
++
+ mutex_lock(&mdsc->mutex);
+ ceph_mdsc_put_request(req);
+ if (!nextreq)
+@@ -4751,7 +4771,8 @@ restart:
+ req = nextreq;
+ }
+ mutex_unlock(&mdsc->mutex);
+- dout("wait_unsafe_requests done\n");
++ ceph_put_mds_session(last_session);
++ dout("%s done\n", __func__);
+ }
+
+ void ceph_mdsc_sync(struct ceph_mds_client *mdsc)
+@@ -4780,7 +4801,7 @@ void ceph_mdsc_sync(struct ceph_mds_client *mdsc)
+ dout("sync want tid %lld flush_seq %lld\n",
+ want_tid, want_flush);
+
+- wait_unsafe_requests(mdsc, want_tid);
++ flush_mdlog_and_wait_mdsc_unsafe_requests(mdsc, want_tid);
+ wait_caps_flush(mdsc, want_flush);
+ }
+
+diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c
+index fcf7dfdecf966..e41b22bd66ced 100644
+--- a/fs/ceph/xattr.c
++++ b/fs/ceph/xattr.c
+@@ -366,6 +366,14 @@ static ssize_t ceph_vxattrcb_auth_mds(struct ceph_inode_info *ci,
+ }
+ #define XATTR_RSTAT_FIELD(_type, _name) \
+ XATTR_NAME_CEPH(_type, _name, VXATTR_FLAG_RSTAT)
++#define XATTR_RSTAT_FIELD_UPDATABLE(_type, _name) \
++ { \
++ .name = CEPH_XATTR_NAME(_type, _name), \
++ .name_size = sizeof (CEPH_XATTR_NAME(_type, _name)), \
++ .getxattr_cb = ceph_vxattrcb_ ## _type ## _ ## _name, \
++ .exists_cb = NULL, \
++ .flags = VXATTR_FLAG_RSTAT, \
++ }
+ #define XATTR_LAYOUT_FIELD(_type, _name, _field) \
+ { \
+ .name = CEPH_XATTR_NAME2(_type, _name, _field), \
+@@ -404,7 +412,7 @@ static struct ceph_vxattr ceph_dir_vxattrs[] = {
+ XATTR_RSTAT_FIELD(dir, rsubdirs),
+ XATTR_RSTAT_FIELD(dir, rsnaps),
+ XATTR_RSTAT_FIELD(dir, rbytes),
+- XATTR_RSTAT_FIELD(dir, rctime),
++ XATTR_RSTAT_FIELD_UPDATABLE(dir, rctime),
+ {
+ .name = "ceph.dir.pin",
+ .name_size = sizeof("ceph.dir.pin"),
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index a6363d362c489..12829b62fcc91 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -1084,7 +1084,7 @@ struct file_system_type cifs_fs_type = {
+ };
+ MODULE_ALIAS_FS("cifs");
+
+-static struct file_system_type smb3_fs_type = {
++struct file_system_type smb3_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "smb3",
+ .init_fs_context = smb3_init_fs_context,
+diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
+index 15a5c5db038b8..6e1791afcd874 100644
+--- a/fs/cifs/cifsfs.h
++++ b/fs/cifs/cifsfs.h
+@@ -38,7 +38,7 @@ static inline unsigned long cifs_get_time(struct dentry *dentry)
+ return (unsigned long) dentry->d_fsdata;
+ }
+
+-extern struct file_system_type cifs_fs_type;
++extern struct file_system_type cifs_fs_type, smb3_fs_type;
+ extern const struct address_space_operations cifs_addr_ops;
+ extern const struct address_space_operations cifs_addr_ops_smallbuf;
+
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 7f28fe8f6ba75..bf6cc128436a1 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1915,11 +1915,13 @@ extern mempool_t *cifs_mid_poolp;
+
+ /* Operations for different SMB versions */
+ #define SMB1_VERSION_STRING "1.0"
++#define SMB20_VERSION_STRING "2.0"
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ extern struct smb_version_operations smb1_operations;
+ extern struct smb_version_values smb1_values;
+-#define SMB20_VERSION_STRING "2.0"
+ extern struct smb_version_operations smb20_operations;
+ extern struct smb_version_values smb20_values;
++#endif /* CIFS_ALLOW_INSECURE_LEGACY */
+ #define SMB21_VERSION_STRING "2.1"
+ extern struct smb_version_operations smb21_operations;
+ extern struct smb_version_values smb21_values;
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index faf4587804d95..9dc0c7ccd5e86 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -97,6 +97,10 @@ static int reconn_set_ipaddr_from_hostname(struct TCP_Server_Info *server)
+ if (!server->hostname)
+ return -EINVAL;
+
++ /* if server hostname isn't populated, there's nothing to do here */
++ if (server->hostname[0] == '\0')
++ return 0;
++
+ len = strlen(server->hostname) + 3;
+
+ unc = kmalloc(len, GFP_KERNEL);
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 5a803d6861464..4abf36b3b3450 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -1209,18 +1209,23 @@ static struct super_block *__cifs_get_super(void (*f)(struct super_block *, void
+ .data = data,
+ .sb = NULL,
+ };
++ struct file_system_type **fs_type = (struct file_system_type *[]) {
++ &cifs_fs_type, &smb3_fs_type, NULL,
++ };
+
+- iterate_supers_type(&cifs_fs_type, f, &sd);
+-
+- if (!sd.sb)
+- return ERR_PTR(-EINVAL);
+- /*
+- * Grab an active reference in order to prevent automounts (DFS links)
+- * of expiring and then freeing up our cifs superblock pointer while
+- * we're doing failover.
+- */
+- cifs_sb_active(sd.sb);
+- return sd.sb;
++ for (; *fs_type; fs_type++) {
++ iterate_supers_type(*fs_type, f, &sd);
++ if (sd.sb) {
++ /*
++ * Grab an active reference in order to prevent automounts (DFS links)
++ * of expiring and then freeing up our cifs superblock pointer while
++ * we're doing failover.
++ */
++ cifs_sb_active(sd.sb);
++ return sd.sb;
++ }
++ }
++ return ERR_PTR(-EINVAL);
+ }
+
+ static void __cifs_put_super(struct super_block *sb)
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 1a0995bb5d90c..822da56891661 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -274,7 +274,10 @@ cifs_ses_add_channel(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses,
+ /* Auth */
+ ctx.domainauto = ses->domainAuto;
+ ctx.domainname = ses->domainName;
+- ctx.server_hostname = ses->server->hostname;
++
++ /* no hostname for extra channels */
++ ctx.server_hostname = "";
++
+ ctx.username = ses->user_name;
+ ctx.password = ses->password;
+ ctx.sectype = ses->sectype;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ab74a678fb939..e0a1cb4b9f190 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -4297,11 +4297,13 @@ smb3_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ }
+ }
+
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ static bool
+ smb2_is_read_op(__u32 oplock)
+ {
+ return oplock == SMB2_OPLOCK_LEVEL_II;
+ }
++#endif /* CIFS_ALLOW_INSECURE_LEGACY */
+
+ static bool
+ smb21_is_read_op(__u32 oplock)
+@@ -5400,7 +5402,7 @@ out:
+ return rc;
+ }
+
+-
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ struct smb_version_operations smb20_operations = {
+ .compare_fids = smb2_compare_fids,
+ .setup_request = smb2_setup_request,
+@@ -5499,6 +5501,7 @@ struct smb_version_operations smb20_operations = {
+ .is_status_io_timeout = smb2_is_status_io_timeout,
+ .is_network_name_deleted = smb2_is_network_name_deleted,
+ };
++#endif /* CIFS_ALLOW_INSECURE_LEGACY */
+
+ struct smb_version_operations smb21_operations = {
+ .compare_fids = smb2_compare_fids,
+@@ -5830,6 +5833,7 @@ struct smb_version_operations smb311_operations = {
+ .is_network_name_deleted = smb2_is_network_name_deleted,
+ };
+
++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ struct smb_version_values smb20_values = {
+ .version_string = SMB20_VERSION_STRING,
+ .protocol_id = SMB20_PROT_ID,
+@@ -5850,6 +5854,7 @@ struct smb_version_values smb20_values = {
+ .signing_required = SMB2_NEGOTIATE_SIGNING_REQUIRED,
+ .create_lease_size = sizeof(struct create_lease),
+ };
++#endif /* ALLOW_INSECURE_LEGACY */
+
+ struct smb_version_values smb21_values = {
+ .version_string = SMB21_VERSION_STRING,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 55e6879ef18be..4cdc54ffa3661 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -288,6 +288,9 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ mutex_unlock(&ses->session_mutex);
+ rc = -EHOSTDOWN;
+ goto failed;
++ } else if (rc) {
++ mutex_unlock(&ses->session_mutex);
++ goto out;
+ }
+ } else {
+ mutex_unlock(&ses->session_mutex);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index bf3ba85cf325b..1438ae53c73c8 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -151,7 +151,7 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
+ f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
+ blkaddr, exist);
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+- WARN_ON(1);
++ dump_stack();
+ }
+ return exist;
+ }
+@@ -189,7 +189,7 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+ f2fs_warn(sbi, "access invalid blkaddr:%u",
+ blkaddr);
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+- WARN_ON(1);
++ dump_stack();
+ return false;
+ } else {
+ return __is_bitmap_valid(sbi, blkaddr, type);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 29cd65f7c9eb8..a2b7dead68e0b 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2679,6 +2679,7 @@ do_map:
+ }
+
+ set_page_dirty(page);
++ set_page_private_gcing(page);
+ f2fs_put_page(page, 1);
+
+ idx++;
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index d6cb4b52758b8..69d1d8ad0903c 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -120,6 +120,7 @@ static bool inode_io_list_move_locked(struct inode *inode,
+ struct list_head *head)
+ {
+ assert_spin_locked(&wb->list_lock);
++ assert_spin_locked(&inode->i_lock);
+
+ list_move(&inode->i_io_list, head);
+
+@@ -1402,9 +1403,9 @@ static int move_expired_inodes(struct list_head *delaying_queue,
+ inode = wb_inode(delaying_queue->prev);
+ if (inode_dirtied_after(inode, dirtied_before))
+ break;
++ spin_lock(&inode->i_lock);
+ list_move(&inode->i_io_list, &tmp);
+ moved++;
+- spin_lock(&inode->i_lock);
+ inode->i_state |= I_SYNC_QUEUED;
+ spin_unlock(&inode->i_lock);
+ if (sb_is_blkdev_sb(inode->i_sb))
+@@ -1420,7 +1421,12 @@ static int move_expired_inodes(struct list_head *delaying_queue,
+ goto out;
+ }
+
+- /* Move inodes from one superblock together */
++ /*
++ * Although inode's i_io_list is moved from 'tmp' to 'dispatch_queue',
++ * we don't take inode->i_lock here because it is just a pointless overhead.
++ * Inode is already marked as I_SYNC_QUEUED so writeback list handling is
++ * fully under our control.
++ */
+ while (!list_empty(&tmp)) {
+ sb = wb_inode(tmp.prev)->i_sb;
+ list_for_each_prev_safe(pos, node, &tmp) {
+@@ -1863,8 +1869,8 @@ static long writeback_sb_inodes(struct super_block *sb,
+ * We'll have another go at writing back this inode
+ * when we completed a full scan of b_io.
+ */
+- spin_unlock(&inode->i_lock);
+ requeue_io(inode, wb);
++ spin_unlock(&inode->i_lock);
+ trace_writeback_sb_inodes_requeue(inode);
+ continue;
+ }
+@@ -2400,6 +2406,7 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ {
+ struct super_block *sb = inode->i_sb;
+ int dirtytime = 0;
++ struct bdi_writeback *wb = NULL;
+
+ trace_writeback_mark_inode_dirty(inode, flags);
+
+@@ -2451,6 +2458,17 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ inode->i_state &= ~I_DIRTY_TIME;
+ inode->i_state |= flags;
+
++ /*
++ * Grab inode's wb early because it requires dropping i_lock and we
++ * need to make sure following checks happen atomically with dirty
++ * list handling so that we don't move inodes under flush worker's
++ * hands.
++ */
++ if (!was_dirty) {
++ wb = locked_inode_to_wb_and_lock_list(inode);
++ spin_lock(&inode->i_lock);
++ }
++
+ /*
+ * If the inode is queued for writeback by flush worker, just
+ * update its dirty state. Once the flush worker is done with
+@@ -2458,7 +2476,7 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ * list, based upon its state.
+ */
+ if (inode->i_state & I_SYNC_QUEUED)
+- goto out_unlock_inode;
++ goto out_unlock;
+
+ /*
+ * Only add valid (hashed) inodes to the superblock's
+@@ -2466,22 +2484,19 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ */
+ if (!S_ISBLK(inode->i_mode)) {
+ if (inode_unhashed(inode))
+- goto out_unlock_inode;
++ goto out_unlock;
+ }
+ if (inode->i_state & I_FREEING)
+- goto out_unlock_inode;
++ goto out_unlock;
+
+ /*
+ * If the inode was already on b_dirty/b_io/b_more_io, don't
+ * reposition it (that would break b_dirty time-ordering).
+ */
+ if (!was_dirty) {
+- struct bdi_writeback *wb;
+ struct list_head *dirty_list;
+ bool wakeup_bdi = false;
+
+- wb = locked_inode_to_wb_and_lock_list(inode);
+-
+ inode->dirtied_when = jiffies;
+ if (dirtytime)
+ inode->dirtied_time_when = jiffies;
+@@ -2495,6 +2510,7 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ dirty_list);
+
+ spin_unlock(&wb->list_lock);
++ spin_unlock(&inode->i_lock);
+ trace_writeback_dirty_inode_enqueue(inode);
+
+ /*
+@@ -2509,6 +2525,9 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ return;
+ }
+ }
++out_unlock:
++ if (wb)
++ spin_unlock(&wb->list_lock);
+ out_unlock_inode:
+ spin_unlock(&inode->i_lock);
+ }
+diff --git a/fs/inode.c b/fs/inode.c
+index 63324df6fa273..a5b45dac00830 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -27,7 +27,7 @@
+ * Inode locking rules:
+ *
+ * inode->i_lock protects:
+- * inode->i_state, inode->i_hash, __iget()
++ * inode->i_state, inode->i_hash, __iget(), inode->i_io_list
+ * Inode LRU list locks protect:
+ * inode->i_sb->s_inode_lru, inode->i_lru
+ * inode->i_sb->s_inode_list_lock protects:
+diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c
+index 71f03a5d36ed2..f83a468b64883 100644
+--- a/fs/jffs2/fs.c
++++ b/fs/jffs2/fs.c
+@@ -604,6 +604,7 @@ out_root:
+ jffs2_free_raw_node_refs(c);
+ kvfree(c->blocks);
+ jffs2_clear_xattr_subsystem(c);
++ jffs2_sum_exit(c);
+ out_inohash:
+ kfree(c->inocache_list);
+ out_wbuf:
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 4096953390b44..52f3afeed612a 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -18,7 +18,15 @@
+ #include "kernfs-internal.h"
+
+ static DEFINE_SPINLOCK(kernfs_rename_lock); /* kn->parent and ->name */
+-static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by rename_lock */
++/*
++ * Don't use rename_lock to piggy back on pr_cont_buf. We don't want to
++ * call pr_cont() while holding rename_lock. Because sometimes pr_cont()
++ * will perform wakeups when releasing console_sem. Holding rename_lock
++ * will introduce deadlock if the scheduler reads the kernfs_name in the
++ * wakeup path.
++ */
++static DEFINE_SPINLOCK(kernfs_pr_cont_lock);
++static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by pr_cont_lock */
+ static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */
+
+ #define rb_to_kn(X) rb_entry((X), struct kernfs_node, rb)
+@@ -229,12 +237,12 @@ void pr_cont_kernfs_name(struct kernfs_node *kn)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&kernfs_rename_lock, flags);
++ spin_lock_irqsave(&kernfs_pr_cont_lock, flags);
+
+- kernfs_name_locked(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf));
++ kernfs_name(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf));
+ pr_cont("%s", kernfs_pr_cont_buf);
+
+- spin_unlock_irqrestore(&kernfs_rename_lock, flags);
++ spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags);
+ }
+
+ /**
+@@ -248,10 +256,10 @@ void pr_cont_kernfs_path(struct kernfs_node *kn)
+ unsigned long flags;
+ int sz;
+
+- spin_lock_irqsave(&kernfs_rename_lock, flags);
++ spin_lock_irqsave(&kernfs_pr_cont_lock, flags);
+
+- sz = kernfs_path_from_node_locked(kn, NULL, kernfs_pr_cont_buf,
+- sizeof(kernfs_pr_cont_buf));
++ sz = kernfs_path_from_node(kn, NULL, kernfs_pr_cont_buf,
++ sizeof(kernfs_pr_cont_buf));
+ if (sz < 0) {
+ pr_cont("(error)");
+ goto out;
+@@ -265,7 +273,7 @@ void pr_cont_kernfs_path(struct kernfs_node *kn)
+ pr_cont("%s", kernfs_pr_cont_buf);
+
+ out:
+- spin_unlock_irqrestore(&kernfs_rename_lock, flags);
++ spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags);
+ }
+
+ /**
+@@ -823,13 +831,12 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent,
+
+ lockdep_assert_held_read(&kernfs_root(parent)->kernfs_rwsem);
+
+- /* grab kernfs_rename_lock to piggy back on kernfs_pr_cont_buf */
+- spin_lock_irq(&kernfs_rename_lock);
++ spin_lock_irq(&kernfs_pr_cont_lock);
+
+ len = strlcpy(kernfs_pr_cont_buf, path, sizeof(kernfs_pr_cont_buf));
+
+ if (len >= sizeof(kernfs_pr_cont_buf)) {
+- spin_unlock_irq(&kernfs_rename_lock);
++ spin_unlock_irq(&kernfs_pr_cont_lock);
+ return NULL;
+ }
+
+@@ -841,7 +848,7 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent,
+ parent = kernfs_find_ns(parent, name, ns);
+ }
+
+- spin_unlock_irq(&kernfs_rename_lock);
++ spin_unlock_irq(&kernfs_pr_cont_lock);
+
+ return parent;
+ }
+diff --git a/fs/ksmbd/smbacl.c b/fs/ksmbd/smbacl.c
+index 6ecf55ea1fed5..38f23bf981ac9 100644
+--- a/fs/ksmbd/smbacl.c
++++ b/fs/ksmbd/smbacl.c
+@@ -1261,6 +1261,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, struct path *path,
+ if (!access_bits)
+ access_bits =
+ SET_MINIMUM_RIGHTS;
++ posix_acl_release(posix_acls);
+ goto check_access_bits;
+ }
+ }
+diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
+index ba5a22bc2e6d8..d3b60b833a816 100644
+--- a/fs/ksmbd/transport_rdma.c
++++ b/fs/ksmbd/transport_rdma.c
+@@ -569,6 +569,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ }
+ t->negotiation_requested = true;
+ t->full_packet_received = true;
++ t->status = SMB_DIRECT_CS_CONNECTED;
+ enqueue_reassembly(t, recvmsg, 0);
+ wake_up_interruptible(&t->wait_status);
+ break;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 1db686509a3e1..e9761f55ac9c1 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3101,6 +3101,10 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ }
+
+ out:
++ if (opendata->lgp) {
++ nfs4_lgopen_release(opendata->lgp);
++ opendata->lgp = NULL;
++ }
+ if (!opendata->cancelled)
+ nfs4_sequence_free_slot(&opendata->o_res.seq_res);
+ return ret;
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index 63e09caf19b82..61c1d0bc22e79 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -1694,11 +1694,6 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
+ sbi->s_mount_opts = ZONEFS_MNTOPT_ERRORS_RO;
+ sbi->s_max_open_zones = bdev_max_open_zones(sb->s_bdev);
+ atomic_set(&sbi->s_open_zones, 0);
+- if (!sbi->s_max_open_zones &&
+- sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) {
+- zonefs_info(sb, "No open zones limit. Ignoring explicit_open mount option\n");
+- sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
+- }
+
+ ret = zonefs_read_super(sb);
+ if (ret)
+@@ -1717,6 +1712,12 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent)
+ zonefs_info(sb, "Mounting %u zones",
+ blkdev_nr_zones(sb->s_bdev->bd_disk));
+
++ if (!sbi->s_max_open_zones &&
++ sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) {
++ zonefs_info(sb, "No open zones limit. Ignoring explicit_open mount option\n");
++ sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN;
++ }
++
+ /* Create root directory inode */
+ ret = -ENOMEM;
+ inode = new_inode(sb);
+diff --git a/include/linux/export.h b/include/linux/export.h
+index 27d848712b90b..5910ccb66ca2d 100644
+--- a/include/linux/export.h
++++ b/include/linux/export.h
+@@ -2,6 +2,8 @@
+ #ifndef _LINUX_EXPORT_H
+ #define _LINUX_EXPORT_H
+
++#include <linux/stringify.h>
++
+ /*
+ * Export symbols from the kernel to modules. Forked from module.h
+ * to reduce the amount of pointless cruft we feed to gcc when only
+@@ -154,7 +156,6 @@ struct kernel_symbol {
+ #endif /* CONFIG_MODULES */
+
+ #ifdef DEFAULT_SYMBOL_NAMESPACE
+-#include <linux/stringify.h>
+ #define _EXPORT_SYMBOL(sym, sec) __EXPORT_SYMBOL(sym, sec, __stringify(DEFAULT_SYMBOL_NAMESPACE))
+ #else
+ #define _EXPORT_SYMBOL(sym, sec) __EXPORT_SYMBOL(sym, sec, "")
+@@ -162,8 +163,8 @@ struct kernel_symbol {
+
+ #define EXPORT_SYMBOL(sym) _EXPORT_SYMBOL(sym, "")
+ #define EXPORT_SYMBOL_GPL(sym) _EXPORT_SYMBOL(sym, "_gpl")
+-#define EXPORT_SYMBOL_NS(sym, ns) __EXPORT_SYMBOL(sym, "", #ns)
+-#define EXPORT_SYMBOL_NS_GPL(sym, ns) __EXPORT_SYMBOL(sym, "_gpl", #ns)
++#define EXPORT_SYMBOL_NS(sym, ns) __EXPORT_SYMBOL(sym, "", __stringify(ns))
++#define EXPORT_SYMBOL_NS_GPL(sym, ns) __EXPORT_SYMBOL(sym, "_gpl", __stringify(ns))
+
+ #endif /* !__ASSEMBLY__ */
+
+diff --git a/include/linux/extcon.h b/include/linux/extcon.h
+index 0c19010da77fa..685401d94d398 100644
+--- a/include/linux/extcon.h
++++ b/include/linux/extcon.h
+@@ -296,7 +296,7 @@ static inline void devm_extcon_unregister_notifier_all(struct device *dev,
+
+ static inline struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
+ {
+- return ERR_PTR(-ENODEV);
++ return NULL;
+ }
+
+ static inline struct extcon_dev *extcon_find_edev_by_node(struct device_node *node)
+diff --git a/include/linux/genhd.h b/include/linux/genhd.h
+index 6906a45bc761a..2cb105f120282 100644
+--- a/include/linux/genhd.h
++++ b/include/linux/genhd.h
+@@ -110,6 +110,7 @@ struct gendisk {
+ #define GD_READ_ONLY 1
+ #define GD_DEAD 2
+ #define GD_NATIVE_CAPACITY 3
++#define GD_SUPPRESS_PART_SCAN 5
+
+ struct mutex open_mutex; /* open/close mutex */
+ unsigned open_partitions; /* number of open partitions */
+diff --git a/include/linux/iio/common/st_sensors.h b/include/linux/iio/common/st_sensors.h
+index 22f67845cdd36..db4a1b260348c 100644
+--- a/include/linux/iio/common/st_sensors.h
++++ b/include/linux/iio/common/st_sensors.h
+@@ -237,6 +237,7 @@ struct st_sensor_settings {
+ * @hw_irq_trigger: if we're using the hardware interrupt on the sensor.
+ * @hw_timestamp: Latest timestamp from the interrupt handler, when in use.
+ * @buffer_data: Data used by buffer part.
++ * @odr_lock: Local lock for preventing concurrent ODR accesses/changes
+ */
+ struct st_sensor_data {
+ struct iio_trigger *trig;
+@@ -261,6 +262,8 @@ struct st_sensor_data {
+ s64 hw_timestamp;
+
+ char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] ____cacheline_aligned;
++
++ struct mutex odr_lock;
+ };
+
+ #ifdef CONFIG_IIO_BUFFER
+diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
+index 48b9b2a82767d..019e55c13248b 100644
+--- a/include/linux/jump_label.h
++++ b/include/linux/jump_label.h
+@@ -261,9 +261,9 @@ extern void static_key_disable_cpuslocked(struct static_key *key);
+ #include <linux/atomic.h>
+ #include <linux/bug.h>
+
+-static inline int static_key_count(struct static_key *key)
++static __always_inline int static_key_count(struct static_key *key)
+ {
+- return atomic_read(&key->enabled);
++ return arch_atomic_read(&key->enabled);
+ }
+
+ static __always_inline void jump_label_init(void)
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 49a48d7709ac1..4cd54277d5d92 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -5175,12 +5175,11 @@ struct mlx5_ifc_query_qp_out_bits {
+
+ u8 syndrome[0x20];
+
+- u8 reserved_at_40[0x20];
+- u8 ece[0x20];
++ u8 reserved_at_40[0x40];
+
+ u8 opt_param_mask[0x20];
+
+- u8 reserved_at_a0[0x20];
++ u8 ece[0x20];
+
+ struct mlx5_ifc_qpc_bits qpc;
+
+diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
+index c6199dbe25913..0f233b76c9cec 100644
+--- a/include/linux/nodemask.h
++++ b/include/linux/nodemask.h
+@@ -42,11 +42,11 @@
+ * void nodes_shift_right(dst, src, n) Shift right
+ * void nodes_shift_left(dst, src, n) Shift left
+ *
+- * int first_node(mask) Number lowest set bit, or MAX_NUMNODES
+- * int next_node(node, mask) Next node past 'node', or MAX_NUMNODES
+- * int next_node_in(node, mask) Next node past 'node', or wrap to first,
++ * unsigned int first_node(mask) Number lowest set bit, or MAX_NUMNODES
++ * unsigned int next_node(node, mask) Next node past 'node', or MAX_NUMNODES
++ * unsigned int next_node_in(node, mask) Next node past 'node', or wrap to first,
+ * or MAX_NUMNODES
+- * int first_unset_node(mask) First node not set in mask, or
++ * unsigned int first_unset_node(mask) First node not set in mask, or
+ * MAX_NUMNODES
+ *
+ * nodemask_t nodemask_of_node(node) Return nodemask with bit 'node' set
+@@ -153,7 +153,7 @@ static inline void __nodes_clear(nodemask_t *dstp, unsigned int nbits)
+
+ #define node_test_and_set(node, nodemask) \
+ __node_test_and_set((node), &(nodemask))
+-static inline int __node_test_and_set(int node, nodemask_t *addr)
++static inline bool __node_test_and_set(int node, nodemask_t *addr)
+ {
+ return test_and_set_bit(node, addr->bits);
+ }
+@@ -200,7 +200,7 @@ static inline void __nodes_complement(nodemask_t *dstp,
+
+ #define nodes_equal(src1, src2) \
+ __nodes_equal(&(src1), &(src2), MAX_NUMNODES)
+-static inline int __nodes_equal(const nodemask_t *src1p,
++static inline bool __nodes_equal(const nodemask_t *src1p,
+ const nodemask_t *src2p, unsigned int nbits)
+ {
+ return bitmap_equal(src1p->bits, src2p->bits, nbits);
+@@ -208,7 +208,7 @@ static inline int __nodes_equal(const nodemask_t *src1p,
+
+ #define nodes_intersects(src1, src2) \
+ __nodes_intersects(&(src1), &(src2), MAX_NUMNODES)
+-static inline int __nodes_intersects(const nodemask_t *src1p,
++static inline bool __nodes_intersects(const nodemask_t *src1p,
+ const nodemask_t *src2p, unsigned int nbits)
+ {
+ return bitmap_intersects(src1p->bits, src2p->bits, nbits);
+@@ -216,20 +216,20 @@ static inline int __nodes_intersects(const nodemask_t *src1p,
+
+ #define nodes_subset(src1, src2) \
+ __nodes_subset(&(src1), &(src2), MAX_NUMNODES)
+-static inline int __nodes_subset(const nodemask_t *src1p,
++static inline bool __nodes_subset(const nodemask_t *src1p,
+ const nodemask_t *src2p, unsigned int nbits)
+ {
+ return bitmap_subset(src1p->bits, src2p->bits, nbits);
+ }
+
+ #define nodes_empty(src) __nodes_empty(&(src), MAX_NUMNODES)
+-static inline int __nodes_empty(const nodemask_t *srcp, unsigned int nbits)
++static inline bool __nodes_empty(const nodemask_t *srcp, unsigned int nbits)
+ {
+ return bitmap_empty(srcp->bits, nbits);
+ }
+
+ #define nodes_full(nodemask) __nodes_full(&(nodemask), MAX_NUMNODES)
+-static inline int __nodes_full(const nodemask_t *srcp, unsigned int nbits)
++static inline bool __nodes_full(const nodemask_t *srcp, unsigned int nbits)
+ {
+ return bitmap_full(srcp->bits, nbits);
+ }
+@@ -260,15 +260,15 @@ static inline void __nodes_shift_left(nodemask_t *dstp,
+ > MAX_NUMNODES, then the silly min_ts could be dropped. */
+
+ #define first_node(src) __first_node(&(src))
+-static inline int __first_node(const nodemask_t *srcp)
++static inline unsigned int __first_node(const nodemask_t *srcp)
+ {
+- return min_t(int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES));
++ return min_t(unsigned int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES));
+ }
+
+ #define next_node(n, src) __next_node((n), &(src))
+-static inline int __next_node(int n, const nodemask_t *srcp)
++static inline unsigned int __next_node(int n, const nodemask_t *srcp)
+ {
+- return min_t(int,MAX_NUMNODES,find_next_bit(srcp->bits, MAX_NUMNODES, n+1));
++ return min_t(unsigned int, MAX_NUMNODES, find_next_bit(srcp->bits, MAX_NUMNODES, n+1));
+ }
+
+ /*
+@@ -276,7 +276,7 @@ static inline int __next_node(int n, const nodemask_t *srcp)
+ * the first node in src if needed. Returns MAX_NUMNODES if src is empty.
+ */
+ #define next_node_in(n, src) __next_node_in((n), &(src))
+-int __next_node_in(int node, const nodemask_t *srcp);
++unsigned int __next_node_in(int node, const nodemask_t *srcp);
+
+ static inline void init_nodemask_of_node(nodemask_t *mask, int node)
+ {
+@@ -296,9 +296,9 @@ static inline void init_nodemask_of_node(nodemask_t *mask, int node)
+ })
+
+ #define first_unset_node(mask) __first_unset_node(&(mask))
+-static inline int __first_unset_node(const nodemask_t *maskp)
++static inline unsigned int __first_unset_node(const nodemask_t *maskp)
+ {
+- return min_t(int,MAX_NUMNODES,
++ return min_t(unsigned int, MAX_NUMNODES,
+ find_first_zero_bit(maskp->bits, MAX_NUMNODES));
+ }
+
+@@ -435,11 +435,11 @@ static inline int num_node_state(enum node_states state)
+
+ #define first_online_node first_node(node_states[N_ONLINE])
+ #define first_memory_node first_node(node_states[N_MEMORY])
+-static inline int next_online_node(int nid)
++static inline unsigned int next_online_node(int nid)
+ {
+ return next_node(nid, node_states[N_ONLINE]);
+ }
+-static inline int next_memory_node(int nid)
++static inline unsigned int next_memory_node(int nid)
+ {
+ return next_node(nid, node_states[N_MEMORY]);
+ }
+diff --git a/include/linux/random.h b/include/linux/random.h
+index 917470c4490ac..3feafab498ad9 100644
+--- a/include/linux/random.h
++++ b/include/linux/random.h
+@@ -13,7 +13,7 @@
+ struct notifier_block;
+
+ void add_device_randomness(const void *buf, size_t len);
+-void add_bootloader_randomness(const void *buf, size_t len);
++void __init add_bootloader_randomness(const void *buf, size_t len);
+ void add_input_randomness(unsigned int type, unsigned int code,
+ unsigned int value) __latent_entropy;
+ void add_interrupt_randomness(int irq) __latent_entropy;
+diff --git a/include/linux/xarray.h b/include/linux/xarray.h
+index 66e28bc1a023f..9409a3d26dfd6 100644
+--- a/include/linux/xarray.h
++++ b/include/linux/xarray.h
+@@ -1506,6 +1506,7 @@ void *xas_find_marked(struct xa_state *, unsigned long max, xa_mark_t);
+ void xas_init_marks(const struct xa_state *);
+
+ bool xas_nomem(struct xa_state *, gfp_t);
++void xas_destroy(struct xa_state *);
+ void xas_pause(struct xa_state *);
+
+ void xas_create_range(struct xa_state *);
+diff --git a/include/net/ax25.h b/include/net/ax25.h
+index 8221af1811df5..5253692db9eb8 100644
+--- a/include/net/ax25.h
++++ b/include/net/ax25.h
+@@ -240,6 +240,7 @@ typedef struct ax25_dev {
+ ax25_dama_info dama;
+ #endif
+ refcount_t refcount;
++ bool device_up;
+ } ax25_dev;
+
+ typedef struct ax25_cb {
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 4524920e4895d..f397b0a3d6310 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -155,21 +155,18 @@ struct bdaddr_list_with_irk {
+ u8 local_irk[16];
+ };
+
++/* Bitmask of connection flags */
+ enum hci_conn_flags {
+- HCI_CONN_FLAG_REMOTE_WAKEUP,
+- HCI_CONN_FLAG_DEVICE_PRIVACY,
+-
+- __HCI_CONN_NUM_FLAGS,
++ HCI_CONN_FLAG_REMOTE_WAKEUP = 1,
++ HCI_CONN_FLAG_DEVICE_PRIVACY = 2,
+ };
+-
+-/* Make sure number of flags doesn't exceed sizeof(current_flags) */
+-static_assert(__HCI_CONN_NUM_FLAGS < 32);
++typedef u8 hci_conn_flags_t;
+
+ struct bdaddr_list_with_flags {
+ struct list_head list;
+ bdaddr_t bdaddr;
+ u8 bdaddr_type;
+- DECLARE_BITMAP(flags, __HCI_CONN_NUM_FLAGS);
++ hci_conn_flags_t flags;
+ };
+
+ struct bt_uuid {
+@@ -567,7 +564,7 @@ struct hci_dev {
+ struct rfkill *rfkill;
+
+ DECLARE_BITMAP(dev_flags, __HCI_NUM_FLAGS);
+- DECLARE_BITMAP(conn_flags, __HCI_CONN_NUM_FLAGS);
++ hci_conn_flags_t conn_flags;
+
+ __s8 adv_tx_power;
+ __u8 adv_data[HCI_MAX_EXT_AD_LENGTH];
+@@ -763,7 +760,7 @@ struct hci_conn_params {
+
+ struct hci_conn *conn;
+ bool explicit_connect;
+- DECLARE_BITMAP(flags, __HCI_CONN_NUM_FLAGS);
++ hci_conn_flags_t flags;
+ u8 privacy_mode;
+ };
+
+diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
+index 5b8c54eb7a6b8..7a10e4ed55407 100644
+--- a/include/net/flow_offload.h
++++ b/include/net/flow_offload.h
+@@ -591,5 +591,6 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
+ enum tc_setup_type type, void *data,
+ struct flow_block_offload *bo,
+ void (*cleanup)(struct flow_block_cb *block_cb));
++bool flow_indr_dev_exists(void);
+
+ #endif /* _NET_FLOW_OFFLOAD_H */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index c4c0861deac12..c3fdd9f71c05d 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1089,7 +1089,6 @@ struct nft_stats {
+
+ struct nft_hook {
+ struct list_head list;
+- bool inactive;
+ struct nf_hook_ops ops;
+ struct rcu_head rcu;
+ };
+diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h
+index 7971478439580..3568b6a2f5f0f 100644
+--- a/include/net/netfilter/nf_tables_offload.h
++++ b/include/net/netfilter/nf_tables_offload.h
+@@ -92,7 +92,7 @@ int nft_flow_rule_offload_commit(struct net *net);
+ NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg) \
+ memset(&(__reg)->mask, 0xff, (__reg)->len);
+
+-int nft_chain_offload_priority(struct nft_base_chain *basechain);
++bool nft_chain_offload_support(const struct nft_base_chain *basechain);
+
+ int nft_offload_init(void);
+ void nft_offload_exit(void);
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 472843eedbaee..6764fc2657451 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -187,37 +187,17 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ if (spin_trylock(&qdisc->seqlock))
+ return true;
+
+- /* Paired with smp_mb__after_atomic() to make sure
+- * STATE_MISSED checking is synchronized with clearing
+- * in pfifo_fast_dequeue().
++ /* No need to insist if the MISSED flag was already set.
++ * Note that test_and_set_bit() also gives us memory ordering
++ * guarantees wrt potential earlier enqueue() and below
++ * spin_trylock(), both of which are necessary to prevent races
+ */
+- smp_mb__before_atomic();
+-
+- /* If the MISSED flag is set, it means other thread has
+- * set the MISSED flag before second spin_trylock(), so
+- * we can return false here to avoid multi cpus doing
+- * the set_bit() and second spin_trylock() concurrently.
+- */
+- if (test_bit(__QDISC_STATE_MISSED, &qdisc->state))
++ if (test_and_set_bit(__QDISC_STATE_MISSED, &qdisc->state))
+ return false;
+
+- /* Set the MISSED flag before the second spin_trylock(),
+- * if the second spin_trylock() return false, it means
+- * other cpu holding the lock will do dequeuing for us
+- * or it will see the MISSED flag set after releasing
+- * lock and reschedule the net_tx_action() to do the
+- * dequeuing.
+- */
+- set_bit(__QDISC_STATE_MISSED, &qdisc->state);
+-
+- /* spin_trylock() only has load-acquire semantic, so use
+- * smp_mb__after_atomic() to ensure STATE_MISSED is set
+- * before doing the second spin_trylock().
+- */
+- smp_mb__after_atomic();
+-
+- /* Retry again in case other CPU may not see the new flag
+- * after it releases the lock at the end of qdisc_run_end().
++ /* Try to take the lock again to make sure that we will either
++ * grab it or the CPU that still has it will see MISSED set
++ * when testing it in qdisc_run_end()
+ */
+ return spin_trylock(&qdisc->seqlock);
+ }
+@@ -229,6 +209,12 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
+ if (qdisc->flags & TCQ_F_NOLOCK) {
+ spin_unlock(&qdisc->seqlock);
+
++ /* spin_unlock() only has store-release semantic. The unlock
++ * and test_bit() ordering is a store-load ordering, so a full
++ * memory barrier is needed here.
++ */
++ smp_mb();
++
+ if (unlikely(test_bit(__QDISC_STATE_MISSED,
+ &qdisc->state)))
+ __netif_schedule(qdisc);
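An aside on the sch_generic.h change above: a single test_and_set_bit() replaces the old test_bit()/set_bit() pair and both explicit barriers, because an atomic read-modify-write already carries full ordering, while qdisc_run_end() gains the smp_mb() that the store-load ordering between the unlock and the MISSED test requires. A minimal userspace sketch of the lock-miss pattern, using C11 atomics as stand-ins for the kernel primitives (not part of the patch):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MISSED 1UL

static atomic_ulong state;
static atomic_flag seqlock = ATOMIC_FLAG_INIT;

static bool trylock(void)       /* models spin_trylock(&qdisc->seqlock) */
{
        return !atomic_flag_test_and_set(&seqlock);
}

static bool run_begin(void)
{
        if (trylock())
                return true;
        /* one atomic RMW both tests and sets MISSED; its implied full
         * ordering stands in for the old smp_mb__before/after_atomic() */
        if (atomic_fetch_or(&state, MISSED) & MISSED)
                return false;   /* another CPU already recorded the miss */
        return trylock();       /* retry; the lock owner will see MISSED */
}

int main(void)
{
        printf("%d\n", run_begin());    /* uncontended: prints 1 */
        return 0;
}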
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index a3fe2f9bc01ce..818ac80773816 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1215,9 +1215,20 @@ static inline unsigned int tcp_packets_in_flight(const struct tcp_sock *tp)
+
+ #define TCP_INFINITE_SSTHRESH 0x7fffffff
+
++static inline u32 tcp_snd_cwnd(const struct tcp_sock *tp)
++{
++ return tp->snd_cwnd;
++}
++
++static inline void tcp_snd_cwnd_set(struct tcp_sock *tp, u32 val)
++{
++ WARN_ON_ONCE((int)val <= 0);
++ tp->snd_cwnd = val;
++}
++
+ static inline bool tcp_in_slow_start(const struct tcp_sock *tp)
+ {
+- return tp->snd_cwnd < tp->snd_ssthresh;
++ return tcp_snd_cwnd(tp) < tp->snd_ssthresh;
+ }
+
+ static inline bool tcp_in_initial_slowstart(const struct tcp_sock *tp)
+@@ -1243,8 +1254,8 @@ static inline __u32 tcp_current_ssthresh(const struct sock *sk)
+ return tp->snd_ssthresh;
+ else
+ return max(tp->snd_ssthresh,
+- ((tp->snd_cwnd >> 1) +
+- (tp->snd_cwnd >> 2)));
++ ((tcp_snd_cwnd(tp) >> 1) +
++ (tcp_snd_cwnd(tp) >> 2)));
+ }
+
+ /* Use define here intentionally to get WARN_ON location shown at the caller */
+@@ -1286,7 +1297,7 @@ static inline bool tcp_is_cwnd_limited(const struct sock *sk)
+
+ /* If in slow start, ensure cwnd grows to twice what was ACKed. */
+ if (tcp_in_slow_start(tp))
+- return tp->snd_cwnd < 2 * tp->max_packets_out;
++ return tcp_snd_cwnd(tp) < 2 * tp->max_packets_out;
+
+ return tp->is_cwnd_limited;
+ }
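The tcp.h hunk above is the pivot for the long run of tcp_* conversions that follow: every read of tp->snd_cwnd goes through tcp_snd_cwnd() and every write through tcp_snd_cwnd_set(), so a single WARN_ON_ONCE() at the write site catches a zero or negative window before it can cause trouble downstream. A self-contained sketch of the accessor pattern — the stub struct and assert() are stand-ins, not the kernel's types:

#include <assert.h>
#include <stdio.h>

struct tp_stub { unsigned int snd_cwnd; };      /* stand-in for tcp_sock */

static inline unsigned int snd_cwnd_get(const struct tp_stub *tp)
{
        return tp->snd_cwnd;
}

static inline void snd_cwnd_set(struct tp_stub *tp, unsigned int val)
{
        assert((int)val > 0);                   /* models WARN_ON_ONCE() */
        tp->snd_cwnd = val;
}

int main(void)
{
        struct tp_stub tp = { 0 };

        snd_cwnd_set(&tp, 10);
        printf("cwnd=%u\n", snd_cwnd_get(&tp));
        return 0;
}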
+diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
+index 443d459515648..4aa0318496688 100644
+--- a/include/net/xdp_sock_drv.h
++++ b/include/net/xdp_sock_drv.h
+@@ -13,7 +13,7 @@
+
+ void xsk_tx_completed(struct xsk_buff_pool *pool, u32 nb_entries);
+ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc);
+-u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *desc, u32 max);
++u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max);
+ void xsk_tx_release(struct xsk_buff_pool *pool);
+ struct xsk_buff_pool *xsk_get_pool_from_qid(struct net_device *dev,
+ u16 queue_id);
+@@ -142,8 +142,7 @@ static inline bool xsk_tx_peek_desc(struct xsk_buff_pool *pool,
+ return false;
+ }
+
+-static inline u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *desc,
+- u32 max)
++static inline u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max)
+ {
+ return 0;
+ }
+diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
+index ddeefc4a10405..647722e847b41 100644
+--- a/include/net/xsk_buff_pool.h
++++ b/include/net/xsk_buff_pool.h
+@@ -60,6 +60,7 @@ struct xsk_buff_pool {
+ */
+ dma_addr_t *dma_pages;
+ struct xdp_buff_xsk *heads;
++ struct xdp_desc *tx_descs;
+ u64 chunk_mask;
+ u64 addrs_cnt;
+ u32 free_list_cnt;
+@@ -96,6 +97,7 @@ int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *dev,
+ u16 queue_id, u16 flags);
+ int xp_assign_dev_shared(struct xsk_buff_pool *pool, struct xdp_umem *umem,
+ struct net_device *dev, u16 queue_id);
++int xp_alloc_tx_descs(struct xsk_buff_pool *pool, struct xdp_sock *xs);
+ void xp_destroy(struct xsk_buff_pool *pool);
+ void xp_get_pool(struct xsk_buff_pool *pool);
+ bool xp_put_pool(struct xsk_buff_pool *pool);
+diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h
+index 521059d8dc0a6..edcd6369de102 100644
+--- a/include/trace/events/tcp.h
++++ b/include/trace/events/tcp.h
+@@ -279,7 +279,7 @@ TRACE_EVENT(tcp_probe,
+ __entry->data_len = skb->len - __tcp_hdrlen(th);
+ __entry->snd_nxt = tp->snd_nxt;
+ __entry->snd_una = tp->snd_una;
+- __entry->snd_cwnd = tp->snd_cwnd;
++ __entry->snd_cwnd = tcp_snd_cwnd(tp);
+ __entry->snd_wnd = tp->snd_wnd;
+ __entry->rcv_wnd = tp->rcv_wnd;
+ __entry->ssthresh = tcp_current_ssthresh(sk);
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 64c44eed8c078..10c4a2028e071 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1671,6 +1671,11 @@ out:
+ CONT; \
+ LDX_MEM_##SIZEOP: \
+ DST = *(SIZE *)(unsigned long) (SRC + insn->off); \
++ CONT; \
++ LDX_PROBE_MEM_##SIZEOP: \
++ bpf_probe_read_kernel(&DST, sizeof(SIZE), \
++ (const void *)(long) (SRC + insn->off)); \
++ DST = *((SIZE *)&DST); \
+ CONT;
+
+ LDST(B, u8)
+@@ -1678,15 +1683,6 @@ out:
+ LDST(W, u32)
+ LDST(DW, u64)
+ #undef LDST
+-#define LDX_PROBE(SIZEOP, SIZE) \
+- LDX_PROBE_MEM_##SIZEOP: \
+- bpf_probe_read_kernel(&DST, SIZE, (const void *)(long) (SRC + insn->off)); \
+- CONT;
+- LDX_PROBE(B, 1)
+- LDX_PROBE(H, 2)
+- LDX_PROBE(W, 4)
+- LDX_PROBE(DW, 8)
+-#undef LDX_PROBE
+
+ #define ATOMIC_ALU_OP(BOP, KOP) \
+ case BOP: \
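The bpf/core.c hunk folds the separate LDX_PROBE() table into the existing LDST() macro, so the probed-load size comes from sizeof(SIZE) instead of a second hand-maintained list of byte counts. A compilable sketch of that sizeof-driven macro style, with memcpy() standing in for bpf_probe_read_kernel() (illustrative only):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void probe_read(void *dst, size_t n, const void *src)
{
        memcpy(dst, src, n);    /* stand-in for bpf_probe_read_kernel() */
}

/* one expansion per size; the byte count falls out of the type */
#define LDST(SIZEOP, SIZE)                              \
static void load_##SIZEOP(SIZE *dst, const void *src)   \
{                                                       \
        probe_read(dst, sizeof(SIZE), src);             \
}

LDST(B, uint8_t)
LDST(W, uint32_t)

int main(void)
{
        uint32_t src = 0xdeadbeef, w = 0;
        uint8_t b = 0;

        load_W(&w, &src);
        load_B(&b, &src);
        printf("%#x %#x\n", w, b);
        return 0;
}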
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 8bc7beea10c70..7ebf426075056 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2827,7 +2827,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb,
+ }
+ EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve);
+
+-static DEFINE_SPINLOCK(tracepoint_iter_lock);
++static DEFINE_RAW_SPINLOCK(tracepoint_iter_lock);
+ static DEFINE_MUTEX(tracepoint_printk_mutex);
+
+ static void output_printk(struct trace_event_buffer *fbuffer)
+@@ -2855,14 +2855,14 @@ static void output_printk(struct trace_event_buffer *fbuffer)
+
+ event = &fbuffer->trace_file->event_call->event;
+
+- spin_lock_irqsave(&tracepoint_iter_lock, flags);
++ raw_spin_lock_irqsave(&tracepoint_iter_lock, flags);
+ trace_seq_init(&iter->seq);
+ iter->ent = fbuffer->entry;
+ event_call->event.funcs->trace(iter, 0, event);
+ trace_seq_putc(&iter->seq, 0);
+ printk("%s", iter->seq.buffer);
+
+- spin_unlock_irqrestore(&tracepoint_iter_lock, flags);
++ raw_spin_unlock_irqrestore(&tracepoint_iter_lock, flags);
+ }
+
+ int tracepoint_printk_sysctl(struct ctl_table *table, int write,
+@@ -6324,12 +6324,18 @@ static void tracing_set_nop(struct trace_array *tr)
+ tr->current_trace = &nop_trace;
+ }
+
++static bool tracer_options_updated;
++
+ static void add_tracer_options(struct trace_array *tr, struct tracer *t)
+ {
+ /* Only enable if the directory has been created already. */
+ if (!tr->dir)
+ return;
+
++ /* Only create trace option files after update_tracer_options() finishes */
++ if (!tracer_options_updated)
++ return;
++
+ create_trace_option_files(tr, t);
+ }
+
+@@ -9139,6 +9145,7 @@ static void __update_tracer_options(struct trace_array *tr)
+ static void update_tracer_options(struct trace_array *tr)
+ {
+ mutex_lock(&trace_types_lock);
++ tracer_options_updated = true;
+ __update_tracer_options(tr);
+ mutex_unlock(&trace_types_lock);
+ }
+diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
+index f755bde42fd07..b69e207012c99 100644
+--- a/kernel/trace/trace_syscalls.c
++++ b/kernel/trace/trace_syscalls.c
+@@ -154,7 +154,7 @@ print_syscall_enter(struct trace_iterator *iter, int flags,
+ goto end;
+
+ /* parameter types */
+- if (tr->trace_flags & TRACE_ITER_VERBOSE)
++ if (tr && tr->trace_flags & TRACE_ITER_VERBOSE)
+ trace_seq_printf(s, "%s ", entry->types[i]);
+
+ /* parameter values */
+@@ -296,9 +296,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
+ struct trace_event_file *trace_file;
+ struct syscall_trace_enter *entry;
+ struct syscall_metadata *sys_data;
+- struct ring_buffer_event *event;
+- struct trace_buffer *buffer;
+- unsigned int trace_ctx;
++ struct trace_event_buffer fbuffer;
+ unsigned long args[6];
+ int syscall_nr;
+ int size;
+@@ -321,20 +319,16 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
+
+ size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
+
+- trace_ctx = tracing_gen_ctx();
+-
+- event = trace_event_buffer_lock_reserve(&buffer, trace_file,
+- sys_data->enter_event->event.type, size, trace_ctx);
+- if (!event)
++ entry = trace_event_buffer_reserve(&fbuffer, trace_file, size);
++ if (!entry)
+ return;
+
+- entry = ring_buffer_event_data(event);
++ entry = ring_buffer_event_data(fbuffer.event);
+ entry->nr = syscall_nr;
+ syscall_get_arguments(current, regs, args);
+ memcpy(entry->args, args, sizeof(unsigned long) * sys_data->nb_args);
+
+- event_trigger_unlock_commit(trace_file, buffer, event, entry,
+- trace_ctx);
++ trace_event_buffer_commit(&fbuffer);
+ }
+
+ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
+@@ -343,9 +337,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
+ struct trace_event_file *trace_file;
+ struct syscall_trace_exit *entry;
+ struct syscall_metadata *sys_data;
+- struct ring_buffer_event *event;
+- struct trace_buffer *buffer;
+- unsigned int trace_ctx;
++ struct trace_event_buffer fbuffer;
+ int syscall_nr;
+
+ syscall_nr = trace_get_syscall_nr(current, regs);
+@@ -364,20 +356,15 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
+ if (!sys_data)
+ return;
+
+- trace_ctx = tracing_gen_ctx();
+-
+- event = trace_event_buffer_lock_reserve(&buffer, trace_file,
+- sys_data->exit_event->event.type, sizeof(*entry),
+- trace_ctx);
+- if (!event)
++ entry = trace_event_buffer_reserve(&fbuffer, trace_file, sizeof(*entry));
++ if (!entry)
+ return;
+
+- entry = ring_buffer_event_data(event);
++ entry = ring_buffer_event_data(fbuffer.event);
+ entry->nr = syscall_nr;
+ entry->ret = syscall_get_return_value(current, regs);
+
+- event_trigger_unlock_commit(trace_file, buffer, event, entry,
+- trace_ctx);
++ trace_event_buffer_commit(&fbuffer);
+ }
+
+ static int reg_event_syscall_enter(struct trace_event_file *file,
+diff --git a/lib/Makefile b/lib/Makefile
+index 300f569c626b0..4c0220cb43290 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -279,7 +279,7 @@ $(foreach file, $(libfdt_files), \
+ $(eval CFLAGS_$(file) = -I $(srctree)/scripts/dtc/libfdt))
+ lib-$(CONFIG_LIBFDT) += $(libfdt_files)
+
+-lib-$(CONFIG_BOOT_CONFIG) += bootconfig.o
++obj-$(CONFIG_BOOT_CONFIG) += bootconfig.o
+
+ obj-$(CONFIG_RBTREE_TEST) += rbtree_test.o
+ obj-$(CONFIG_INTERVAL_TREE_TEST) += interval_tree_test.o
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 6dd5330f7a995..0b64695ab632f 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1434,7 +1434,7 @@ static ssize_t iter_xarray_get_pages(struct iov_iter *i,
+ {
+ unsigned nr, offset;
+ pgoff_t index, count;
+- size_t size = maxsize, actual;
++ size_t size = maxsize;
+ loff_t pos;
+
+ if (!size || !maxpages)
+@@ -1461,13 +1461,7 @@ static ssize_t iter_xarray_get_pages(struct iov_iter *i,
+ if (nr == 0)
+ return 0;
+
+- actual = PAGE_SIZE * nr;
+- actual -= offset;
+- if (nr == count && size > 0) {
+- unsigned last_offset = (nr > 1) ? 0 : offset;
+- actual -= PAGE_SIZE - (last_offset + size);
+- }
+- return actual;
++ return min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+ }
+
+ /* must be done on non-empty ITER_IOVEC one */
+@@ -1602,7 +1596,7 @@ static ssize_t iter_xarray_get_pages_alloc(struct iov_iter *i,
+ struct page **p;
+ unsigned nr, offset;
+ pgoff_t index, count;
+- size_t size = maxsize, actual;
++ size_t size = maxsize;
+ loff_t pos;
+
+ if (!size)
+@@ -1631,13 +1625,7 @@ static ssize_t iter_xarray_get_pages_alloc(struct iov_iter *i,
+ if (nr == 0)
+ return 0;
+
+- actual = PAGE_SIZE * nr;
+- actual -= offset;
+- if (nr == count && size > 0) {
+- unsigned last_offset = (nr > 1) ? 0 : offset;
+- actual -= PAGE_SIZE - (last_offset + size);
+- }
+- return actual;
++ return min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+ }
+
+ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
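Both iter_xarray hunks replace the stepwise "actual" arithmetic with a single clamp: the mapped pages span nr * PAGE_SIZE - offset bytes, and the return value must never exceed what the caller asked for. The corrected computation in miniature (PAGE_SZ is a stand-in constant, not the kernel macro):

#include <stdio.h>

#define PAGE_SZ 4096UL                  /* stand-in for PAGE_SIZE */

static size_t bytes_covered(unsigned nr, unsigned offset, size_t maxsize)
{
        size_t span = (size_t)nr * PAGE_SZ - offset;

        return span < maxsize ? span : maxsize;   /* min_t(size_t, ...) */
}

int main(void)
{
        /* two pages, starting 100 bytes in, caller asked for 10000 */
        printf("%zu\n", bytes_covered(2, 100, 10000));  /* prints 8092 */
        return 0;
}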
+diff --git a/lib/nodemask.c b/lib/nodemask.c
+index 3aa454c54c0de..e22647f5181b3 100644
+--- a/lib/nodemask.c
++++ b/lib/nodemask.c
+@@ -3,9 +3,9 @@
+ #include <linux/module.h>
+ #include <linux/random.h>
+
+-int __next_node_in(int node, const nodemask_t *srcp)
++unsigned int __next_node_in(int node, const nodemask_t *srcp)
+ {
+- int ret = __next_node(node, srcp);
++ unsigned int ret = __next_node(node, srcp);
+
+ if (ret == MAX_NUMNODES)
+ ret = __first_node(srcp);
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 32e1669d5b649..276d56bd19a20 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -264,9 +264,10 @@ static void xa_node_free(struct xa_node *node)
+ * xas_destroy() - Free any resources allocated during the XArray operation.
+ * @xas: XArray operation state.
+ *
+- * This function is now internal-only.
++ * Most users will not need to call this function; it is called for you
++ * by xas_nomem().
+ */
+-static void xas_destroy(struct xa_state *xas)
++void xas_destroy(struct xa_state *xas)
+ {
+ struct xa_node *next, *node = xas->xa_alloc;
+
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index fb91636917057..33be0864aebae 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2736,8 +2736,7 @@ out_unlock:
+ if (mapping)
+ i_mmap_unlock_read(mapping);
+ out:
+- /* Free any memory we didn't use */
+- xas_nomem(&xas, 0);
++ xas_destroy(&xas);
+ count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
+ return ret;
+ }
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 363d47f945324..289f355e18531 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -62,12 +62,12 @@ static void ax25_free_sock(struct sock *sk)
+ */
+ static void ax25_cb_del(ax25_cb *ax25)
+ {
++ spin_lock_bh(&ax25_list_lock);
+ if (!hlist_unhashed(&ax25->ax25_node)) {
+- spin_lock_bh(&ax25_list_lock);
+ hlist_del_init(&ax25->ax25_node);
+- spin_unlock_bh(&ax25_list_lock);
+ ax25_cb_put(ax25);
+ }
++ spin_unlock_bh(&ax25_list_lock);
+ }
+
+ /*
+@@ -81,6 +81,7 @@ static void ax25_kill_by_device(struct net_device *dev)
+
+ if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
+ return;
++ ax25_dev->device_up = false;
+
+ spin_lock_bh(&ax25_list_lock);
+ again:
+@@ -91,6 +92,7 @@ again:
+ spin_unlock_bh(&ax25_list_lock);
+ ax25_disconnect(s, ENETUNREACH);
+ s->ax25_dev = NULL;
++ ax25_cb_del(s);
+ spin_lock_bh(&ax25_list_lock);
+ goto again;
+ }
+@@ -103,6 +105,7 @@ again:
+ dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+ ax25_dev_put(ax25_dev);
+ }
++ ax25_cb_del(s);
+ release_sock(sk);
+ spin_lock_bh(&ax25_list_lock);
+ sock_put(sk);
+@@ -995,9 +998,11 @@ static int ax25_release(struct socket *sock)
+ if (sk->sk_type == SOCK_SEQPACKET) {
+ switch (ax25->state) {
+ case AX25_STATE_0:
+- release_sock(sk);
+- ax25_disconnect(ax25, 0);
+- lock_sock(sk);
++ if (!sock_flag(ax25->sk, SOCK_DEAD)) {
++ release_sock(sk);
++ ax25_disconnect(ax25, 0);
++ lock_sock(sk);
++ }
+ ax25_destroy_socket(ax25);
+ break;
+
+@@ -1053,11 +1058,13 @@ static int ax25_release(struct socket *sock)
+ ax25_destroy_socket(ax25);
+ }
+ if (ax25_dev) {
+- del_timer_sync(&ax25->timer);
+- del_timer_sync(&ax25->t1timer);
+- del_timer_sync(&ax25->t2timer);
+- del_timer_sync(&ax25->t3timer);
+- del_timer_sync(&ax25->idletimer);
++ if (!ax25_dev->device_up) {
++ del_timer_sync(&ax25->timer);
++ del_timer_sync(&ax25->t1timer);
++ del_timer_sync(&ax25->t2timer);
++ del_timer_sync(&ax25->t3timer);
++ del_timer_sync(&ax25->idletimer);
++ }
+ dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+ ax25_dev_put(ax25_dev);
+ }
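A common thread in the ax25 hunks: tests that used to run before taking the relevant lock (hlist_unhashed() in ax25_cb_del(), SOCK_DEAD in ax25_release()) move inside the critical section, so the state can no longer change between the check and the action. Roughly the shape of the ax25_cb_del() fix, with a pthread mutex standing in for the bh spinlock and a bare hlist-style node (a sketch, not the kernel code):

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

struct node { struct node *next, **pprev; };

static void cb_del(struct node *n)
{
        pthread_mutex_lock(&list_lock);         /* lock first ...        */
        if (n->pprev) {                         /* ... then test, unlink */
                *n->pprev = n->next;
                n->pprev = NULL;                /* hlist_del_init()      */
        }
        pthread_mutex_unlock(&list_lock);
}

int main(void)
{
        struct node n2 = { NULL, NULL };
        struct node n1 = { &n2, NULL };
        struct node *head = &n1;

        n1.pprev = &head;
        n2.pprev = &n1.next;
        cb_del(&n2);    /* unlinks n2 */
        cb_del(&n2);    /* safe no-op: pprev already NULL */
        return 0;
}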
+diff --git a/net/ax25/ax25_dev.c b/net/ax25/ax25_dev.c
+index d2a244e1c260f..5451be15e072b 100644
+--- a/net/ax25/ax25_dev.c
++++ b/net/ax25/ax25_dev.c
+@@ -62,6 +62,7 @@ void ax25_dev_device_up(struct net_device *dev)
+ ax25_dev->dev = dev;
+ dev_hold_track(dev, &ax25_dev->dev_tracker, GFP_ATOMIC);
+ ax25_dev->forward = NULL;
++ ax25_dev->device_up = true;
+
+ ax25_dev->values[AX25_VALUES_IPDEFMODE] = AX25_DEF_IPDEFMODE;
+ ax25_dev->values[AX25_VALUES_AXDEFMODE] = AX25_DEF_AXDEFMODE;
+diff --git a/net/ax25/ax25_subr.c b/net/ax25/ax25_subr.c
+index 3a476e4f6cd0b..9ff98f46dc6be 100644
+--- a/net/ax25/ax25_subr.c
++++ b/net/ax25/ax25_subr.c
+@@ -268,7 +268,7 @@ void ax25_disconnect(ax25_cb *ax25, int reason)
+ del_timer_sync(&ax25->t3timer);
+ del_timer_sync(&ax25->idletimer);
+ } else {
+- if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
++ if (ax25->sk && !sock_flag(ax25->sk, SOCK_DESTROY))
+ ax25_stop_heartbeat(ax25);
+ ax25_stop_t1timer(ax25);
+ ax25_stop_t2timer(ax25);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 9e9713f7ddb84..f1feb92040637 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2153,7 +2153,7 @@ int hci_bdaddr_list_add_with_flags(struct list_head *list, bdaddr_t *bdaddr,
+
+ bacpy(&entry->bdaddr, bdaddr);
+ entry->bdaddr_type = type;
+- bitmap_from_u64(entry->flags, flags);
++ entry->flags = flags;
+
+ list_add(&entry->list, list);
+
+@@ -2633,7 +2633,7 @@ int hci_register_dev(struct hci_dev *hdev)
+ * callback.
+ */
+ if (hdev->wakeup)
+- set_bit(HCI_CONN_FLAG_REMOTE_WAKEUP, hdev->conn_flags);
++ hdev->conn_flags |= HCI_CONN_FLAG_REMOTE_WAKEUP;
+
+ hci_sock_dev_event(hdev, HCI_DEV_REG);
+ hci_dev_hold(hdev);
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index f4afe482e3004..95689982eedbc 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -482,7 +482,7 @@ static int add_to_accept_list(struct hci_request *req,
+
+ /* During suspend, only wakeable devices can be in accept list */
+ if (hdev->suspended &&
+- !test_bit(HCI_CONN_FLAG_REMOTE_WAKEUP, params->flags))
++ !(params->flags & HCI_CONN_FLAG_REMOTE_WAKEUP))
+ return 0;
+
+ *num_entries += 1;
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 13600bf120b02..351c2390164d0 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1637,7 +1637,7 @@ static int hci_le_set_privacy_mode_sync(struct hci_dev *hdev,
+ * indicates that LL Privacy has been enabled and
+ * HCI_OP_LE_SET_PRIVACY_MODE is supported.
+ */
+- if (!test_bit(HCI_CONN_FLAG_DEVICE_PRIVACY, params->flags))
++ if (!(params->flags & HCI_CONN_FLAG_DEVICE_PRIVACY))
+ return 0;
+
+ irk = hci_find_irk_by_addr(hdev, &params->addr, params->addr_type);
+@@ -1664,20 +1664,19 @@ static int hci_le_add_accept_list_sync(struct hci_dev *hdev,
+ struct hci_cp_le_add_to_accept_list cp;
+ int err;
+
++ /* During suspend, only wakeable devices can be in acceptlist */
++ if (hdev->suspended &&
++ !(params->flags & HCI_CONN_FLAG_REMOTE_WAKEUP))
++ return 0;
++
+ /* Select filter policy to accept all advertising */
+ if (*num_entries >= hdev->le_accept_list_size)
+ return -ENOSPC;
+
+ /* Accept list can not be used with RPAs */
+ if (!use_ll_privacy(hdev) &&
+- hci_find_irk_by_addr(hdev, &params->addr, params->addr_type)) {
++ hci_find_irk_by_addr(hdev, ¶ms->addr, params->addr_type))
+ return -EINVAL;
+- }
+-
+- /* During suspend, only wakeable devices can be in acceptlist */
+- if (hdev->suspended &&
+- !test_bit(HCI_CONN_FLAG_REMOTE_WAKEUP, params->flags))
+- return 0;
+
+ /* Attempt to program the device in the resolving list first to avoid
+ * having to rollback in case it fails since the resolving list is
+@@ -4857,7 +4856,7 @@ static int hci_update_event_filter_sync(struct hci_dev *hdev)
+ hci_clear_event_filter_sync(hdev);
+
+ list_for_each_entry(b, &hdev->accept_list, list) {
+- if (!test_bit(HCI_CONN_FLAG_REMOTE_WAKEUP, b->flags))
++ if (!(b->flags & HCI_CONN_FLAG_REMOTE_WAKEUP))
+ continue;
+
+ bt_dev_dbg(hdev, "Adding event filters for %pMR", &b->bdaddr);
+@@ -4881,10 +4880,28 @@ static int hci_update_event_filter_sync(struct hci_dev *hdev)
+ return 0;
+ }
+
++/* This function disables scanning (BR and LE) and marks it as paused */
++static int hci_pause_scan_sync(struct hci_dev *hdev)
++{
++ if (hdev->scanning_paused)
++ return 0;
++
++ /* Disable page scan if enabled */
++ if (test_bit(HCI_PSCAN, &hdev->flags))
++ hci_write_scan_enable_sync(hdev, SCAN_DISABLED);
++
++ hci_scan_disable_sync(hdev);
++
++ hdev->scanning_paused = true;
++
++ return 0;
++}
++
+ /* This function performs the HCI suspend procedures in the following order:
+ *
+ * Pause discovery (active scanning/inquiry)
+ * Pause Directed Advertising/Advertising
++ * Pause Scanning (passive scanning in case discovery was not active)
+ * Disconnect all connections
+ * Set suspend_status to BT_SUSPEND_DISCONNECT if hdev cannot wakeup
+ * otherwise:
+@@ -4910,15 +4927,11 @@ int hci_suspend_sync(struct hci_dev *hdev)
+ /* Pause other advertisements */
+ hci_pause_advertising_sync(hdev);
+
+- /* Disable page scan if enabled */
+- if (test_bit(HCI_PSCAN, &hdev->flags))
+- hci_write_scan_enable_sync(hdev, SCAN_DISABLED);
+-
+ /* Suspend monitor filters */
+ hci_suspend_monitor_sync(hdev);
+
+ /* Prevent disconnects from causing scanning to be re-enabled */
+- hdev->scanning_paused = true;
++ hci_pause_scan_sync(hdev);
+
+ /* Soft disconnect everything (power off) */
+ err = hci_disconnect_all_sync(hdev, HCI_ERROR_REMOTE_POWER_OFF);
+@@ -4989,6 +5002,22 @@ static void hci_resume_monitor_sync(struct hci_dev *hdev)
+ }
+ }
+
++/* This function resumes scanning and resets the paused flag */
++static int hci_resume_scan_sync(struct hci_dev *hdev)
++{
++ if (!hdev->scanning_paused)
++ return 0;
++
++ hci_update_scan_sync(hdev);
++
++ /* Reset passive scanning to normal */
++ hci_update_passive_scan_sync(hdev);
++
++ hdev->scanning_paused = false;
++
++ return 0;
++}
++
+ /* This function performs the HCI resume procedures in the following order:
+ *
+ * Restore event mask
+@@ -5011,10 +5040,9 @@ int hci_resume_sync(struct hci_dev *hdev)
+
+ /* Clear any event filters and restore scan state */
+ hci_clear_event_filter_sync(hdev);
+- hci_update_scan_sync(hdev);
+
+- /* Reset passive scanning to normal */
+- hci_update_passive_scan_sync(hdev);
++ /* Resume scanning */
++ hci_resume_scan_sync(hdev);
+
+ /* Resume monitor filters */
+ hci_resume_monitor_sync(hdev);
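Worth noting in the hci_sync.c change: both new helpers are guarded by scanning_paused, so pausing or resuming twice is a no-op, and suspend now stops passive scanning even when discovery was never active. The guard pattern in miniature (bare globals instead of hci_dev state; a sketch only):

#include <stdbool.h>

static bool scanning_paused;

static int pause_scan(void)
{
        if (scanning_paused)            /* already paused: nothing to do */
                return 0;
        /* ... disable page scan and LE scan here ... */
        scanning_paused = true;
        return 0;
}

static int resume_scan(void)
{
        if (!scanning_paused)           /* never paused: nothing to undo */
                return 0;
        /* ... re-enable scanning here ... */
        scanning_paused = false;
        return 0;
}

int main(void)
{
        pause_scan();
        pause_scan();                   /* second call falls through */
        return resume_scan();
}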
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 15eab8b968ce8..943cdc9ec7630 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -4009,10 +4009,10 @@ static int exp_ll_privacy_feature_changed(bool enabled, struct hci_dev *hdev,
+ memcpy(ev.uuid, rpa_resolution_uuid, 16);
+ ev.flags = cpu_to_le32((enabled ? BIT(0) : 0) | BIT(1));
+
+ if (enabled && privacy_mode_capable(hdev))
+- set_bit(HCI_CONN_FLAG_DEVICE_PRIVACY, hdev->conn_flags);
++ hdev->conn_flags |= HCI_CONN_FLAG_DEVICE_PRIVACY;
+ else
+- clear_bit(HCI_CONN_FLAG_DEVICE_PRIVACY, hdev->conn_flags);
++ hdev->conn_flags &= ~HCI_CONN_FLAG_DEVICE_PRIVACY;
+
+ return mgmt_limited_event(MGMT_EV_EXP_FEATURE_CHANGED, hdev,
+ &ev, sizeof(ev),
+@@ -4431,8 +4432,7 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+
+ hci_dev_lock(hdev);
+
+- bitmap_to_arr32(&supported_flags, hdev->conn_flags,
+- __HCI_CONN_NUM_FLAGS);
++ supported_flags = hdev->conn_flags;
+
+ memset(&rp, 0, sizeof(rp));
+
+@@ -4443,8 +4443,7 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ if (!br_params)
+ goto done;
+
+- bitmap_to_arr32(¤t_flags, br_params->flags,
+- __HCI_CONN_NUM_FLAGS);
++ current_flags = br_params->flags;
+ } else {
+ params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
+ le_addr_type(cp->addr.type));
+@@ -4452,8 +4451,7 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ if (!params)
+ goto done;
+
+- bitmap_to_arr32(¤t_flags, params->flags,
+- __HCI_CONN_NUM_FLAGS);
++ current_flags = params->flags;
+ }
+
+ bacpy(&rp.addr.bdaddr, &cp->addr.bdaddr);
+@@ -4498,8 +4496,7 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ &cp->addr.bdaddr, cp->addr.type,
+ __le32_to_cpu(current_flags));
+
+- bitmap_to_arr32(&supported_flags, hdev->conn_flags,
+- __HCI_CONN_NUM_FLAGS);
++ supported_flags = hdev->conn_flags;
+
+ if ((supported_flags | current_flags) != supported_flags) {
+ bt_dev_warn(hdev, "Bad flag given (0x%x) vs supported (0x%0x)",
+@@ -4515,7 +4513,7 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ cp->addr.type);
+
+ if (br_params) {
+- bitmap_from_u64(br_params->flags, current_flags);
++ br_params->flags = current_flags;
+ status = MGMT_STATUS_SUCCESS;
+ } else {
+ bt_dev_warn(hdev, "No such BR/EDR device %pMR (0x%x)",
+@@ -4525,14 +4523,26 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
+ le_addr_type(cp->addr.type));
+ if (params) {
+- bitmap_from_u64(params->flags, current_flags);
++ /* Devices using RPAs can only be programmed in the
++ * acceptlist if LL Privacy has been enabled, otherwise they
++ * cannot set HCI_CONN_FLAG_REMOTE_WAKEUP.
++ */
++ if ((current_flags & HCI_CONN_FLAG_REMOTE_WAKEUP) &&
++ !use_ll_privacy(hdev) &&
++ hci_find_irk_by_addr(hdev, &params->addr,
++ params->addr_type)) {
++ bt_dev_warn(hdev,
++ "Cannot set wakeable for RPA");
++ goto unlock;
++ }
++
++ params->flags = current_flags;
+ status = MGMT_STATUS_SUCCESS;
+
+ /* Update passive scan if HCI_CONN_FLAG_DEVICE_PRIVACY
+ * has been set.
+ */
+- if (test_bit(HCI_CONN_FLAG_DEVICE_PRIVACY,
+- params->flags))
++ if (params->flags & HCI_CONN_FLAG_DEVICE_PRIVACY)
+ hci_update_passive_scan(hdev);
+ } else {
+ bt_dev_warn(hdev, "No such LE device %pMR (0x%x)",
+@@ -4541,6 +4551,7 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data,
+ }
+ }
+
++unlock:
+ hci_dev_unlock(hdev);
+
+ done:
+@@ -7132,8 +7143,7 @@ static int add_device(struct sock *sk, struct hci_dev *hdev,
+ params = hci_conn_params_lookup(hdev, &cp->addr.bdaddr,
+ addr_type);
+ if (params)
+- bitmap_to_arr32(¤t_flags, params->flags,
+- __HCI_CONN_NUM_FLAGS);
++ current_flags = params->flags;
+ }
+
+ err = hci_cmd_sync_queue(hdev, add_device_sync, NULL, NULL);
+@@ -7142,8 +7152,7 @@ static int add_device(struct sock *sk, struct hci_dev *hdev,
+
+ added:
+ device_added(sk, hdev, &cp->addr.bdaddr, cp->addr.type, cp->action);
+- bitmap_to_arr32(&supported_flags, hdev->conn_flags,
+- __HCI_CONN_NUM_FLAGS);
++ supported_flags = hdev->conn_flags;
+ device_flags_changed(NULL, hdev, &cp->addr.bdaddr, cp->addr.type,
+ supported_flags, current_flags);
+
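The conn_flags hunks running through hci_core.c, hci_request.c, hci_sync.c and mgmt.c all follow from one type change: conn_flags stops being a DECLARE_BITMAP() walked with set_bit()/test_bit() and becomes a plain integer bitmask (hci_conn_flags_t), so whole flag sets can be assigned, OR-ed and compared directly, as the supported_flags checks now do. The two idioms side by side, with stand-in flag values rather than the kernel's:

#include <stdio.h>

typedef unsigned int conn_flags_t;              /* like hci_conn_flags_t */
#define FLAG_REMOTE_WAKEUP      (1U << 0)
#define FLAG_DEVICE_PRIVACY     (1U << 1)

int main(void)
{
        conn_flags_t supported = FLAG_REMOTE_WAKEUP | FLAG_DEVICE_PRIVACY;
        conn_flags_t cur = 0;

        cur |= FLAG_DEVICE_PRIVACY;             /* was: set_bit()   */
        if (cur & FLAG_REMOTE_WAKEUP)           /* was: test_bit()  */
                cur &= ~FLAG_REMOTE_WAKEUP;     /* was: clear_bit() */

        /* whole-set validity check, one comparison */
        printf("%s\n", (supported | cur) == supported ? "ok" : "bad flag");
        return 0;
}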
+diff --git a/net/core/filter.c b/net/core/filter.c
+index f8fbb5fa74f35..4210b127c5f57 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -4937,7 +4937,7 @@ static int _bpf_setsockopt(struct sock *sk, int level, int optname,
+ if (val <= 0 || tp->data_segs_out > tp->syn_data)
+ ret = -EINVAL;
+ else
+- tp->snd_cwnd = val;
++ tcp_snd_cwnd_set(tp, val);
+ break;
+ case TCP_BPF_SNDCWND_CLAMP:
+ if (val <= 0) {
+diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c
+index 73f68d4625f32..929f6379a2798 100644
+--- a/net/core/flow_offload.c
++++ b/net/core/flow_offload.c
+@@ -595,3 +595,9 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch,
+ return (bo && list_empty(&bo->cb_list)) ? -EOPNOTSUPP : count;
+ }
+ EXPORT_SYMBOL(flow_indr_dev_setup_offload);
++
++bool flow_indr_dev_exists(void)
++{
++ return !list_empty(&flow_block_indr_dev_list);
++}
++EXPORT_SYMBOL(flow_indr_dev_exists);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index ec0bf737b076a..a252f4090d758 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1579,7 +1579,7 @@ static void neigh_managed_work(struct work_struct *work)
+ list_for_each_entry(neigh, &tbl->managed_list, managed_list)
+ neigh_event_send_probe(neigh, NULL, false);
+ queue_delayed_work(system_power_efficient_wq, &tbl->managed_work,
+- NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME));
++ max(NEIGH_VAR(&tbl->parms, DELAY_PROBE_TIME), HZ));
+ write_unlock_bh(&tbl->lock);
+ }
+
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index a5d57fa679caa..55654e335d43d 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -917,10 +917,12 @@ void __init inet_hashinfo2_init(struct inet_hashinfo *h, const char *name,
+ init_hashinfo_lhash2(h);
+
+ /* this one is used for source ports of outgoing connections */
+- table_perturb = kmalloc_array(INET_TABLE_PERTURB_SIZE,
+- sizeof(*table_perturb), GFP_KERNEL);
+- if (!table_perturb)
+- panic("TCP: failed to alloc table_perturb");
++ table_perturb = alloc_large_system_hash("Table-perturb",
++ sizeof(*table_perturb),
++ INET_TABLE_PERTURB_SIZE,
++ 0, 0, NULL, NULL,
++ INET_TABLE_PERTURB_SIZE,
++ INET_TABLE_PERTURB_SIZE);
+ }
+
+ int inet_hashinfo2_init_mod(struct inet_hashinfo *h)
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 8cf86e42c1d1c..65b6d4c1698e2 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -629,21 +629,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
+ }
+
+ if (dev->header_ops) {
+- const int pull_len = tunnel->hlen + sizeof(struct iphdr);
+-
+ if (skb_cow_head(skb, 0))
+ goto free_skb;
+
+ tnl_params = (const struct iphdr *)skb->data;
+
+- if (pull_len > skb_transport_offset(skb))
+- goto free_skb;
+-
+ /* Pull skb since ip_tunnel_xmit() needs skb->data pointing
+ * to gre header.
+ */
+- skb_pull(skb, pull_len);
++ skb_pull(skb, tunnel->hlen + sizeof(struct iphdr));
+ skb_reset_mac_header(skb);
++
++ if (skb->ip_summed == CHECKSUM_PARTIAL &&
++ skb_checksum_start(skb) < skb->data)
++ goto free_skb;
+ } else {
+ if (skb_cow_head(skb, dev->needed_headroom))
+ goto free_skb;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 28ff2a820f7c9..c9ad372f8edbf 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -429,7 +429,7 @@ void tcp_init_sock(struct sock *sk)
+ * algorithms that we must have the following bandaid to talk
+ * efficiently to them. -DaveM
+ */
+- tp->snd_cwnd = TCP_INIT_CWND;
++ tcp_snd_cwnd_set(tp, TCP_INIT_CWND);
+
+ /* There's a bubble in the pipe until at least the first ACK. */
+ tp->app_limited = ~0U;
+@@ -3033,7 +3033,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ icsk->icsk_rto_min = TCP_RTO_MIN;
+ icsk->icsk_delack_max = TCP_DELACK_MAX;
+ tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+- tp->snd_cwnd = TCP_INIT_CWND;
++ tcp_snd_cwnd_set(tp, TCP_INIT_CWND);
+ tp->snd_cwnd_cnt = 0;
+ tp->window_clamp = 0;
+ tp->delivered = 0;
+@@ -3744,7 +3744,7 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
+ info->tcpi_max_pacing_rate = rate64;
+
+ info->tcpi_reordering = tp->reordering;
+- info->tcpi_snd_cwnd = tp->snd_cwnd;
++ info->tcpi_snd_cwnd = tcp_snd_cwnd(tp);
+
+ if (info->tcpi_state == TCP_LISTEN) {
+ /* listeners aliased fields :
+@@ -3915,7 +3915,7 @@ struct sk_buff *tcp_get_timestamping_opt_stats(const struct sock *sk,
+ rate64 = tcp_compute_delivery_rate(tp);
+ nla_put_u64_64bit(stats, TCP_NLA_DELIVERY_RATE, rate64, TCP_NLA_PAD);
+
+- nla_put_u32(stats, TCP_NLA_SND_CWND, tp->snd_cwnd);
++ nla_put_u32(stats, TCP_NLA_SND_CWND, tcp_snd_cwnd(tp));
+ nla_put_u32(stats, TCP_NLA_REORDERING, tp->reordering);
+ nla_put_u32(stats, TCP_NLA_MIN_RTT, tcp_min_rtt(tp));
+
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index ec5550089b4d2..aefe12b7dbf7a 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -276,7 +276,7 @@ static void bbr_init_pacing_rate_from_rtt(struct sock *sk)
+ } else { /* no RTT sample yet */
+ rtt_us = USEC_PER_MSEC; /* use nominal default RTT */
+ }
+- bw = (u64)tp->snd_cwnd * BW_UNIT;
++ bw = (u64)tcp_snd_cwnd(tp) * BW_UNIT;
+ do_div(bw, rtt_us);
+ sk->sk_pacing_rate = bbr_bw_to_pacing_rate(sk, bw, bbr_high_gain);
+ }
+@@ -323,9 +323,9 @@ static void bbr_save_cwnd(struct sock *sk)
+ struct bbr *bbr = inet_csk_ca(sk);
+
+ if (bbr->prev_ca_state < TCP_CA_Recovery && bbr->mode != BBR_PROBE_RTT)
+- bbr->prior_cwnd = tp->snd_cwnd; /* this cwnd is good enough */
++ bbr->prior_cwnd = tcp_snd_cwnd(tp); /* this cwnd is good enough */
+ else /* loss recovery or BBR_PROBE_RTT have temporarily cut cwnd */
+- bbr->prior_cwnd = max(bbr->prior_cwnd, tp->snd_cwnd);
++ bbr->prior_cwnd = max(bbr->prior_cwnd, tcp_snd_cwnd(tp));
+ }
+
+ static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+@@ -482,7 +482,7 @@ static bool bbr_set_cwnd_to_recover_or_restore(
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct bbr *bbr = inet_csk_ca(sk);
+ u8 prev_state = bbr->prev_ca_state, state = inet_csk(sk)->icsk_ca_state;
+- u32 cwnd = tp->snd_cwnd;
++ u32 cwnd = tcp_snd_cwnd(tp);
+
+ /* An ACK for P pkts should release at most 2*P packets. We do this
+ * in two steps. First, here we deduct the number of lost packets.
+@@ -520,7 +520,7 @@ static void bbr_set_cwnd(struct sock *sk, const struct rate_sample *rs,
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct bbr *bbr = inet_csk_ca(sk);
+- u32 cwnd = tp->snd_cwnd, target_cwnd = 0;
++ u32 cwnd = tcp_snd_cwnd(tp), target_cwnd = 0;
+
+ if (!acked)
+ goto done; /* no packet fully ACKed; just apply caps */
+@@ -544,9 +544,9 @@ static void bbr_set_cwnd(struct sock *sk, const struct rate_sample *rs,
+ cwnd = max(cwnd, bbr_cwnd_min_target);
+
+ done:
+- tp->snd_cwnd = min(cwnd, tp->snd_cwnd_clamp); /* apply global cap */
++ tcp_snd_cwnd_set(tp, min(cwnd, tp->snd_cwnd_clamp)); /* apply global cap */
+ if (bbr->mode == BBR_PROBE_RTT) /* drain queue, refresh min_rtt */
+- tp->snd_cwnd = min(tp->snd_cwnd, bbr_cwnd_min_target);
++ tcp_snd_cwnd_set(tp, min(tcp_snd_cwnd(tp), bbr_cwnd_min_target));
+ }
+
+ /* End cycle phase if it's time and/or we hit the phase's in-flight target. */
+@@ -856,7 +856,7 @@ static void bbr_update_ack_aggregation(struct sock *sk,
+ bbr->ack_epoch_acked = min_t(u32, 0xFFFFF,
+ bbr->ack_epoch_acked + rs->acked_sacked);
+ extra_acked = bbr->ack_epoch_acked - expected_acked;
+- extra_acked = min(extra_acked, tp->snd_cwnd);
++ extra_acked = min(extra_acked, tcp_snd_cwnd(tp));
+ if (extra_acked > bbr->extra_acked[bbr->extra_acked_win_idx])
+ bbr->extra_acked[bbr->extra_acked_win_idx] = extra_acked;
+ }
+@@ -914,7 +914,7 @@ static void bbr_check_probe_rtt_done(struct sock *sk)
+ return;
+
+ bbr->min_rtt_stamp = tcp_jiffies32; /* wait a while until PROBE_RTT */
+- tp->snd_cwnd = max(tp->snd_cwnd, bbr->prior_cwnd);
++ tcp_snd_cwnd_set(tp, max(tcp_snd_cwnd(tp), bbr->prior_cwnd));
+ bbr_reset_mode(sk);
+ }
+
+@@ -1093,7 +1093,7 @@ static u32 bbr_undo_cwnd(struct sock *sk)
+ bbr->full_bw = 0; /* spurious slow-down; reset full pipe detection */
+ bbr->full_bw_cnt = 0;
+ bbr_reset_lt_bw_sampling(sk);
+- return tcp_sk(sk)->snd_cwnd;
++ return tcp_snd_cwnd(tcp_sk(sk));
+ }
+
+ /* Entering loss recovery, so save cwnd for when we exit or undo recovery. */
+diff --git a/net/ipv4/tcp_bic.c b/net/ipv4/tcp_bic.c
+index f5f588b1f6e9d..58358bf92e1b8 100644
+--- a/net/ipv4/tcp_bic.c
++++ b/net/ipv4/tcp_bic.c
+@@ -150,7 +150,7 @@ static void bictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ if (!acked)
+ return;
+ }
+- bictcp_update(ca, tp->snd_cwnd);
++ bictcp_update(ca, tcp_snd_cwnd(tp));
+ tcp_cong_avoid_ai(tp, ca->cnt, acked);
+ }
+
+@@ -166,16 +166,16 @@ static u32 bictcp_recalc_ssthresh(struct sock *sk)
+ ca->epoch_start = 0; /* end of epoch */
+
+ /* Wmax and fast convergence */
+- if (tp->snd_cwnd < ca->last_max_cwnd && fast_convergence)
+- ca->last_max_cwnd = (tp->snd_cwnd * (BICTCP_BETA_SCALE + beta))
++ if (tcp_snd_cwnd(tp) < ca->last_max_cwnd && fast_convergence)
++ ca->last_max_cwnd = (tcp_snd_cwnd(tp) * (BICTCP_BETA_SCALE + beta))
+ / (2 * BICTCP_BETA_SCALE);
+ else
+- ca->last_max_cwnd = tp->snd_cwnd;
++ ca->last_max_cwnd = tcp_snd_cwnd(tp);
+
+- if (tp->snd_cwnd <= low_window)
+- return max(tp->snd_cwnd >> 1U, 2U);
++ if (tcp_snd_cwnd(tp) <= low_window)
++ return max(tcp_snd_cwnd(tp) >> 1U, 2U);
+ else
+- return max((tp->snd_cwnd * beta) / BICTCP_BETA_SCALE, 2U);
++ return max((tcp_snd_cwnd(tp) * beta) / BICTCP_BETA_SCALE, 2U);
+ }
+
+ static void bictcp_state(struct sock *sk, u8 new_state)
+diff --git a/net/ipv4/tcp_cdg.c b/net/ipv4/tcp_cdg.c
+index 709d238018239..ddc7ba0554bdd 100644
+--- a/net/ipv4/tcp_cdg.c
++++ b/net/ipv4/tcp_cdg.c
+@@ -161,8 +161,8 @@ static void tcp_cdg_hystart_update(struct sock *sk)
+ LINUX_MIB_TCPHYSTARTTRAINDETECT);
+ NET_ADD_STATS(sock_net(sk),
+ LINUX_MIB_TCPHYSTARTTRAINCWND,
+- tp->snd_cwnd);
+- tp->snd_ssthresh = tp->snd_cwnd;
++ tcp_snd_cwnd(tp));
++ tp->snd_ssthresh = tcp_snd_cwnd(tp);
+ return;
+ }
+ }
+@@ -180,8 +180,8 @@ static void tcp_cdg_hystart_update(struct sock *sk)
+ LINUX_MIB_TCPHYSTARTDELAYDETECT);
+ NET_ADD_STATS(sock_net(sk),
+ LINUX_MIB_TCPHYSTARTDELAYCWND,
+- tp->snd_cwnd);
+- tp->snd_ssthresh = tp->snd_cwnd;
++ tcp_snd_cwnd(tp));
++ tp->snd_ssthresh = tcp_snd_cwnd(tp);
+ }
+ }
+ }
+@@ -252,7 +252,7 @@ static bool tcp_cdg_backoff(struct sock *sk, u32 grad)
+ return false;
+ }
+
+- ca->shadow_wnd = max(ca->shadow_wnd, tp->snd_cwnd);
++ ca->shadow_wnd = max(ca->shadow_wnd, tcp_snd_cwnd(tp));
+ ca->state = CDG_BACKOFF;
+ tcp_enter_cwr(sk);
+ return true;
+@@ -285,14 +285,14 @@ static void tcp_cdg_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ }
+
+ if (!tcp_is_cwnd_limited(sk)) {
+- ca->shadow_wnd = min(ca->shadow_wnd, tp->snd_cwnd);
++ ca->shadow_wnd = min(ca->shadow_wnd, tcp_snd_cwnd(tp));
+ return;
+ }
+
+- prior_snd_cwnd = tp->snd_cwnd;
++ prior_snd_cwnd = tcp_snd_cwnd(tp);
+ tcp_reno_cong_avoid(sk, ack, acked);
+
+- incr = tp->snd_cwnd - prior_snd_cwnd;
++ incr = tcp_snd_cwnd(tp) - prior_snd_cwnd;
+ ca->shadow_wnd = max(ca->shadow_wnd, ca->shadow_wnd + incr);
+ }
+
+@@ -331,15 +331,15 @@ static u32 tcp_cdg_ssthresh(struct sock *sk)
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ if (ca->state == CDG_BACKOFF)
+- return max(2U, (tp->snd_cwnd * min(1024U, backoff_beta)) >> 10);
++ return max(2U, (tcp_snd_cwnd(tp) * min(1024U, backoff_beta)) >> 10);
+
+ if (ca->state == CDG_NONFULL && use_tolerance)
+- return tp->snd_cwnd;
++ return tcp_snd_cwnd(tp);
+
+- ca->shadow_wnd = min(ca->shadow_wnd >> 1, tp->snd_cwnd);
++ ca->shadow_wnd = min(ca->shadow_wnd >> 1, tcp_snd_cwnd(tp));
+ if (use_shadow)
+- return max3(2U, ca->shadow_wnd, tp->snd_cwnd >> 1);
+- return max(2U, tp->snd_cwnd >> 1);
++ return max3(2U, ca->shadow_wnd, tcp_snd_cwnd(tp) >> 1);
++ return max(2U, tcp_snd_cwnd(tp) >> 1);
+ }
+
+ static void tcp_cdg_cwnd_event(struct sock *sk, const enum tcp_ca_event ev)
+@@ -357,7 +357,7 @@ static void tcp_cdg_cwnd_event(struct sock *sk, const enum tcp_ca_event ev)
+
+ ca->gradients = gradients;
+ ca->rtt_seq = tp->snd_nxt;
+- ca->shadow_wnd = tp->snd_cwnd;
++ ca->shadow_wnd = tcp_snd_cwnd(tp);
+ break;
+ case CA_EVENT_COMPLETE_CWR:
+ ca->state = CDG_UNKNOWN;
+@@ -380,7 +380,7 @@ static void tcp_cdg_init(struct sock *sk)
+ ca->gradients = kcalloc(window, sizeof(ca->gradients[0]),
+ GFP_NOWAIT | __GFP_NOWARN);
+ ca->rtt_seq = tp->snd_nxt;
+- ca->shadow_wnd = tp->snd_cwnd;
++ ca->shadow_wnd = tcp_snd_cwnd(tp);
+ }
+
+ static void tcp_cdg_release(struct sock *sk)
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index db5831e6c136a..f43db30a7195d 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -395,10 +395,10 @@ int tcp_set_congestion_control(struct sock *sk, const char *name, bool load,
+ */
+ u32 tcp_slow_start(struct tcp_sock *tp, u32 acked)
+ {
+- u32 cwnd = min(tp->snd_cwnd + acked, tp->snd_ssthresh);
++ u32 cwnd = min(tcp_snd_cwnd(tp) + acked, tp->snd_ssthresh);
+
+- acked -= cwnd - tp->snd_cwnd;
+- tp->snd_cwnd = min(cwnd, tp->snd_cwnd_clamp);
++ acked -= cwnd - tcp_snd_cwnd(tp);
++ tcp_snd_cwnd_set(tp, min(cwnd, tp->snd_cwnd_clamp));
+
+ return acked;
+ }
+@@ -412,7 +412,7 @@ void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked)
+ /* If credits accumulated at a higher w, apply them gently now. */
+ if (tp->snd_cwnd_cnt >= w) {
+ tp->snd_cwnd_cnt = 0;
+- tp->snd_cwnd++;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ }
+
+ tp->snd_cwnd_cnt += acked;
+@@ -420,9 +420,9 @@ void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked)
+ u32 delta = tp->snd_cwnd_cnt / w;
+
+ tp->snd_cwnd_cnt -= delta * w;
+- tp->snd_cwnd += delta;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + delta);
+ }
+- tp->snd_cwnd = min(tp->snd_cwnd, tp->snd_cwnd_clamp);
++ tcp_snd_cwnd_set(tp, min(tcp_snd_cwnd(tp), tp->snd_cwnd_clamp));
+ }
+ EXPORT_SYMBOL_GPL(tcp_cong_avoid_ai);
+
+@@ -447,7 +447,7 @@ void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ return;
+ }
+ /* In dangerous area, increase slowly. */
+- tcp_cong_avoid_ai(tp, tp->snd_cwnd, acked);
++ tcp_cong_avoid_ai(tp, tcp_snd_cwnd(tp), acked);
+ }
+ EXPORT_SYMBOL_GPL(tcp_reno_cong_avoid);
+
+@@ -456,7 +456,7 @@ u32 tcp_reno_ssthresh(struct sock *sk)
+ {
+ const struct tcp_sock *tp = tcp_sk(sk);
+
+- return max(tp->snd_cwnd >> 1U, 2U);
++ return max(tcp_snd_cwnd(tp) >> 1U, 2U);
+ }
+ EXPORT_SYMBOL_GPL(tcp_reno_ssthresh);
+
+@@ -464,7 +464,7 @@ u32 tcp_reno_undo_cwnd(struct sock *sk)
+ {
+ const struct tcp_sock *tp = tcp_sk(sk);
+
+- return max(tp->snd_cwnd, tp->prior_cwnd);
++ return max(tcp_snd_cwnd(tp), tp->prior_cwnd);
+ }
+ EXPORT_SYMBOL_GPL(tcp_reno_undo_cwnd);
+
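tcp_cong_avoid_ai() above only changes notation, but it is the routine most of the following modules lean on: grow cwnd by one segment per w segments acked, carrying the remainder in snd_cwnd_cnt. A self-contained rendition of that accounting (plain globals in place of the socket fields):

#include <stdio.h>

static unsigned int cwnd = 10, cwnd_cnt, clamp = 65535;

static void cong_avoid_ai(unsigned int w, unsigned int acked)
{
        if (cwnd_cnt >= w) {    /* credits accumulated at a higher w */
                cwnd_cnt = 0;
                cwnd++;
        }
        cwnd_cnt += acked;
        if (cwnd_cnt >= w) {
                unsigned int delta = cwnd_cnt / w;

                cwnd_cnt -= delta * w;
                cwnd += delta;
        }
        if (cwnd > clamp)
                cwnd = clamp;
}

int main(void)
{
        for (int i = 0; i < 100; i++)
                cong_avoid_ai(cwnd, 1); /* Reno: w == cwnd, one ACK each */
        printf("cwnd=%u\n", cwnd);      /* prints the grown window */
        return 0;
}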
+diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
+index e07837e23b3fd..f0a240792ff98 100644
+--- a/net/ipv4/tcp_cubic.c
++++ b/net/ipv4/tcp_cubic.c
+@@ -334,7 +334,7 @@ static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ if (!acked)
+ return;
+ }
+- bictcp_update(ca, tp->snd_cwnd, acked);
++ bictcp_update(ca, tcp_snd_cwnd(tp), acked);
+ tcp_cong_avoid_ai(tp, ca->cnt, acked);
+ }
+
+@@ -346,13 +346,13 @@ static u32 cubictcp_recalc_ssthresh(struct sock *sk)
+ ca->epoch_start = 0; /* end of epoch */
+
+ /* Wmax and fast convergence */
+- if (tp->snd_cwnd < ca->last_max_cwnd && fast_convergence)
+- ca->last_max_cwnd = (tp->snd_cwnd * (BICTCP_BETA_SCALE + beta))
++ if (tcp_snd_cwnd(tp) < ca->last_max_cwnd && fast_convergence)
++ ca->last_max_cwnd = (tcp_snd_cwnd(tp) * (BICTCP_BETA_SCALE + beta))
+ / (2 * BICTCP_BETA_SCALE);
+ else
+- ca->last_max_cwnd = tp->snd_cwnd;
++ ca->last_max_cwnd = tcp_snd_cwnd(tp);
+
+- return max((tp->snd_cwnd * beta) / BICTCP_BETA_SCALE, 2U);
++ return max((tcp_snd_cwnd(tp) * beta) / BICTCP_BETA_SCALE, 2U);
+ }
+
+ static void cubictcp_state(struct sock *sk, u8 new_state)
+@@ -413,13 +413,13 @@ static void hystart_update(struct sock *sk, u32 delay)
+ ca->found = 1;
+ pr_debug("hystart_ack_train (%u > %u) delay_min %u (+ ack_delay %u) cwnd %u\n",
+ now - ca->round_start, threshold,
+- ca->delay_min, hystart_ack_delay(sk), tp->snd_cwnd);
++ ca->delay_min, hystart_ack_delay(sk), tcp_snd_cwnd(tp));
+ NET_INC_STATS(sock_net(sk),
+ LINUX_MIB_TCPHYSTARTTRAINDETECT);
+ NET_ADD_STATS(sock_net(sk),
+ LINUX_MIB_TCPHYSTARTTRAINCWND,
+- tp->snd_cwnd);
+- tp->snd_ssthresh = tp->snd_cwnd;
++ tcp_snd_cwnd(tp));
++ tp->snd_ssthresh = tcp_snd_cwnd(tp);
+ }
+ }
+ }
+@@ -438,8 +438,8 @@ static void hystart_update(struct sock *sk, u32 delay)
+ LINUX_MIB_TCPHYSTARTDELAYDETECT);
+ NET_ADD_STATS(sock_net(sk),
+ LINUX_MIB_TCPHYSTARTDELAYCWND,
+- tp->snd_cwnd);
+- tp->snd_ssthresh = tp->snd_cwnd;
++ tcp_snd_cwnd(tp));
++ tp->snd_ssthresh = tcp_snd_cwnd(tp);
+ }
+ }
+ }
+@@ -469,7 +469,7 @@ static void cubictcp_acked(struct sock *sk, const struct ack_sample *sample)
+
+ /* hystart triggers when cwnd is larger than some threshold */
+ if (!ca->found && tcp_in_slow_start(tp) && hystart &&
+- tp->snd_cwnd >= hystart_low_window)
++ tcp_snd_cwnd(tp) >= hystart_low_window)
+ hystart_update(sk, delay);
+ }
+
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index 0d7ab3cc7b614..d0bf7bb4b1405 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -106,8 +106,8 @@ static u32 dctcp_ssthresh(struct sock *sk)
+ struct dctcp *ca = inet_csk_ca(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+- ca->loss_cwnd = tp->snd_cwnd;
+- return max(tp->snd_cwnd - ((tp->snd_cwnd * ca->dctcp_alpha) >> 11U), 2U);
++ ca->loss_cwnd = tcp_snd_cwnd(tp);
++ return max(tcp_snd_cwnd(tp) - ((tcp_snd_cwnd(tp) * ca->dctcp_alpha) >> 11U), 2U);
+ }
+
+ static void dctcp_update_alpha(struct sock *sk, u32 flags)
+@@ -148,8 +148,8 @@ static void dctcp_react_to_loss(struct sock *sk)
+ struct dctcp *ca = inet_csk_ca(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+- ca->loss_cwnd = tp->snd_cwnd;
+- tp->snd_ssthresh = max(tp->snd_cwnd >> 1U, 2U);
++ ca->loss_cwnd = tcp_snd_cwnd(tp);
++ tp->snd_ssthresh = max(tcp_snd_cwnd(tp) >> 1U, 2U);
+ }
+
+ static void dctcp_state(struct sock *sk, u8 new_state)
+@@ -211,8 +211,9 @@ static size_t dctcp_get_info(struct sock *sk, u32 ext, int *attr,
+ static u32 dctcp_cwnd_undo(struct sock *sk)
+ {
+ const struct dctcp *ca = inet_csk_ca(sk);
++ struct tcp_sock *tp = tcp_sk(sk);
+
+- return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
++ return max(tcp_snd_cwnd(tp), ca->loss_cwnd);
+ }
+
+ static struct tcp_congestion_ops dctcp __read_mostly = {
+diff --git a/net/ipv4/tcp_highspeed.c b/net/ipv4/tcp_highspeed.c
+index 349069d6cd0aa..c6de5ce79ad3c 100644
+--- a/net/ipv4/tcp_highspeed.c
++++ b/net/ipv4/tcp_highspeed.c
+@@ -127,22 +127,22 @@ static void hstcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ * snd_cwnd <=
+ * hstcp_aimd_vals[ca->ai].cwnd
+ */
+- if (tp->snd_cwnd > hstcp_aimd_vals[ca->ai].cwnd) {
+- while (tp->snd_cwnd > hstcp_aimd_vals[ca->ai].cwnd &&
++ if (tcp_snd_cwnd(tp) > hstcp_aimd_vals[ca->ai].cwnd) {
++ while (tcp_snd_cwnd(tp) > hstcp_aimd_vals[ca->ai].cwnd &&
+ ca->ai < HSTCP_AIMD_MAX - 1)
+ ca->ai++;
+- } else if (ca->ai && tp->snd_cwnd <= hstcp_aimd_vals[ca->ai-1].cwnd) {
+- while (ca->ai && tp->snd_cwnd <= hstcp_aimd_vals[ca->ai-1].cwnd)
++ } else if (ca->ai && tcp_snd_cwnd(tp) <= hstcp_aimd_vals[ca->ai-1].cwnd) {
++ while (ca->ai && tcp_snd_cwnd(tp) <= hstcp_aimd_vals[ca->ai-1].cwnd)
+ ca->ai--;
+ }
+
+ /* Do additive increase */
+- if (tp->snd_cwnd < tp->snd_cwnd_clamp) {
++ if (tcp_snd_cwnd(tp) < tp->snd_cwnd_clamp) {
+ /* cwnd = cwnd + a(w) / cwnd */
+ tp->snd_cwnd_cnt += ca->ai + 1;
+- if (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
+- tp->snd_cwnd_cnt -= tp->snd_cwnd;
+- tp->snd_cwnd++;
++ if (tp->snd_cwnd_cnt >= tcp_snd_cwnd(tp)) {
++ tp->snd_cwnd_cnt -= tcp_snd_cwnd(tp);
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ }
+ }
+ }
+@@ -154,7 +154,7 @@ static u32 hstcp_ssthresh(struct sock *sk)
+ struct hstcp *ca = inet_csk_ca(sk);
+
+ /* Do multiplicative decrease */
+- return max(tp->snd_cwnd - ((tp->snd_cwnd * hstcp_aimd_vals[ca->ai].md) >> 8), 2U);
++ return max(tcp_snd_cwnd(tp) - ((tcp_snd_cwnd(tp) * hstcp_aimd_vals[ca->ai].md) >> 8), 2U);
+ }
+
+ static struct tcp_congestion_ops tcp_highspeed __read_mostly = {
+diff --git a/net/ipv4/tcp_htcp.c b/net/ipv4/tcp_htcp.c
+index 55adcfcf96fea..52b1f2665dfae 100644
+--- a/net/ipv4/tcp_htcp.c
++++ b/net/ipv4/tcp_htcp.c
+@@ -124,7 +124,7 @@ static void measure_achieved_throughput(struct sock *sk,
+
+ ca->packetcount += sample->pkts_acked;
+
+- if (ca->packetcount >= tp->snd_cwnd - (ca->alpha >> 7 ? : 1) &&
++ if (ca->packetcount >= tcp_snd_cwnd(tp) - (ca->alpha >> 7 ? : 1) &&
+ now - ca->lasttime >= ca->minRTT &&
+ ca->minRTT > 0) {
+ __u32 cur_Bi = ca->packetcount * HZ / (now - ca->lasttime);
+@@ -225,7 +225,7 @@ static u32 htcp_recalc_ssthresh(struct sock *sk)
+ const struct htcp *ca = inet_csk_ca(sk);
+
+ htcp_param_update(sk);
+- return max((tp->snd_cwnd * ca->beta) >> 7, 2U);
++ return max((tcp_snd_cwnd(tp) * ca->beta) >> 7, 2U);
+ }
+
+ static void htcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+@@ -242,9 +242,9 @@ static void htcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ /* In dangerous area, increase slowly.
+ * In theory this is tp->snd_cwnd += alpha / tp->snd_cwnd
+ */
+- if ((tp->snd_cwnd_cnt * ca->alpha)>>7 >= tp->snd_cwnd) {
+- if (tp->snd_cwnd < tp->snd_cwnd_clamp)
+- tp->snd_cwnd++;
++ if ((tp->snd_cwnd_cnt * ca->alpha)>>7 >= tcp_snd_cwnd(tp)) {
++ if (tcp_snd_cwnd(tp) < tp->snd_cwnd_clamp)
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ tp->snd_cwnd_cnt = 0;
+ htcp_alpha_update(ca);
+ } else
+diff --git a/net/ipv4/tcp_hybla.c b/net/ipv4/tcp_hybla.c
+index be39327e04e6c..abd7d91807e54 100644
+--- a/net/ipv4/tcp_hybla.c
++++ b/net/ipv4/tcp_hybla.c
+@@ -54,7 +54,7 @@ static void hybla_init(struct sock *sk)
+ ca->rho2_7ls = 0;
+ ca->snd_cwnd_cents = 0;
+ ca->hybla_en = true;
+- tp->snd_cwnd = 2;
++ tcp_snd_cwnd_set(tp, 2);
+ tp->snd_cwnd_clamp = 65535;
+
+ /* 1st Rho measurement based on initial srtt */
+@@ -62,7 +62,7 @@ static void hybla_init(struct sock *sk)
+
+ /* set minimum rtt as this is the 1st ever seen */
+ ca->minrtt_us = tp->srtt_us;
+- tp->snd_cwnd = ca->rho;
++ tcp_snd_cwnd_set(tp, ca->rho);
+ }
+
+ static void hybla_state(struct sock *sk, u8 ca_state)
+@@ -137,31 +137,31 @@ static void hybla_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ * as long as increment is estimated as (rho<<7)/window
+ * it already is <<7 and we can easily count its fractions.
+ */
+- increment = ca->rho2_7ls / tp->snd_cwnd;
++ increment = ca->rho2_7ls / tcp_snd_cwnd(tp);
+ if (increment < 128)
+ tp->snd_cwnd_cnt++;
+ }
+
+ odd = increment % 128;
+- tp->snd_cwnd += increment >> 7;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + (increment >> 7));
+ ca->snd_cwnd_cents += odd;
+
+ /* check when fractions goes >=128 and increase cwnd by 1. */
+ while (ca->snd_cwnd_cents >= 128) {
+- tp->snd_cwnd++;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ ca->snd_cwnd_cents -= 128;
+ tp->snd_cwnd_cnt = 0;
+ }
+ /* check when cwnd has not been incremented for a while */
+- if (increment == 0 && odd == 0 && tp->snd_cwnd_cnt >= tp->snd_cwnd) {
+- tp->snd_cwnd++;
++ if (increment == 0 && odd == 0 && tp->snd_cwnd_cnt >= tcp_snd_cwnd(tp)) {
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ tp->snd_cwnd_cnt = 0;
+ }
+ /* clamp down slowstart cwnd to ssthresh value. */
+ if (is_slowstart)
+- tp->snd_cwnd = min(tp->snd_cwnd, tp->snd_ssthresh);
++ tcp_snd_cwnd_set(tp, min(tcp_snd_cwnd(tp), tp->snd_ssthresh));
+
+- tp->snd_cwnd = min_t(u32, tp->snd_cwnd, tp->snd_cwnd_clamp);
++ tcp_snd_cwnd_set(tp, min(tcp_snd_cwnd(tp), tp->snd_cwnd_clamp));
+ }
+
+ static struct tcp_congestion_ops tcp_hybla __read_mostly = {
+diff --git a/net/ipv4/tcp_illinois.c b/net/ipv4/tcp_illinois.c
+index 00e54873213e8..c0c81a2c77fae 100644
+--- a/net/ipv4/tcp_illinois.c
++++ b/net/ipv4/tcp_illinois.c
+@@ -224,7 +224,7 @@ static void update_params(struct sock *sk)
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct illinois *ca = inet_csk_ca(sk);
+
+- if (tp->snd_cwnd < win_thresh) {
++ if (tcp_snd_cwnd(tp) < win_thresh) {
+ ca->alpha = ALPHA_BASE;
+ ca->beta = BETA_BASE;
+ } else if (ca->cnt_rtt > 0) {
+@@ -284,9 +284,9 @@ static void tcp_illinois_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ * tp->snd_cwnd += alpha/tp->snd_cwnd
+ */
+ delta = (tp->snd_cwnd_cnt * ca->alpha) >> ALPHA_SHIFT;
+- if (delta >= tp->snd_cwnd) {
+- tp->snd_cwnd = min(tp->snd_cwnd + delta / tp->snd_cwnd,
+- (u32)tp->snd_cwnd_clamp);
++ if (delta >= tcp_snd_cwnd(tp)) {
++ tcp_snd_cwnd_set(tp, min(tcp_snd_cwnd(tp) + delta / tcp_snd_cwnd(tp),
++ (u32)tp->snd_cwnd_clamp));
+ tp->snd_cwnd_cnt = 0;
+ }
+ }
+@@ -296,9 +296,11 @@ static u32 tcp_illinois_ssthresh(struct sock *sk)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct illinois *ca = inet_csk_ca(sk);
++ u32 decr;
+
+ /* Multiplicative decrease */
+- return max(tp->snd_cwnd - ((tp->snd_cwnd * ca->beta) >> BETA_SHIFT), 2U);
++ decr = (tcp_snd_cwnd(tp) * ca->beta) >> BETA_SHIFT;
++ return max(tcp_snd_cwnd(tp) - decr, 2U);
+ }
+
+ /* Extract info for Tcp socket info provided via netlink. */
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 96c25c97ee567..c2552b6e93224 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -414,7 +414,7 @@ static void tcp_sndbuf_expand(struct sock *sk)
+ per_mss = roundup_pow_of_two(per_mss) +
+ SKB_DATA_ALIGN(sizeof(struct sk_buff));
+
+- nr_segs = max_t(u32, TCP_INIT_CWND, tp->snd_cwnd);
++ nr_segs = max_t(u32, TCP_INIT_CWND, tcp_snd_cwnd(tp));
+ nr_segs = max_t(u32, nr_segs, tp->reordering + 1);
+
+ /* Fast Recovery (RFC 5681 3.2) :
+@@ -909,12 +909,12 @@ static void tcp_update_pacing_rate(struct sock *sk)
+ * If snd_cwnd >= (tp->snd_ssthresh / 2), we are approaching
+ * end of slow start and should slow down.
+ */
+- if (tp->snd_cwnd < tp->snd_ssthresh / 2)
++ if (tcp_snd_cwnd(tp) < tp->snd_ssthresh / 2)
+ rate *= sock_net(sk)->ipv4.sysctl_tcp_pacing_ss_ratio;
+ else
+ rate *= sock_net(sk)->ipv4.sysctl_tcp_pacing_ca_ratio;
+
+- rate *= max(tp->snd_cwnd, tp->packets_out);
++ rate *= max(tcp_snd_cwnd(tp), tp->packets_out);
+
+ if (likely(tp->srtt_us))
+ do_div(rate, tp->srtt_us);
+@@ -2147,12 +2147,12 @@ void tcp_enter_loss(struct sock *sk)
+ !after(tp->high_seq, tp->snd_una) ||
+ (icsk->icsk_ca_state == TCP_CA_Loss && !icsk->icsk_retransmits)) {
+ tp->prior_ssthresh = tcp_current_ssthresh(sk);
+- tp->prior_cwnd = tp->snd_cwnd;
++ tp->prior_cwnd = tcp_snd_cwnd(tp);
+ tp->snd_ssthresh = icsk->icsk_ca_ops->ssthresh(sk);
+ tcp_ca_event(sk, CA_EVENT_LOSS);
+ tcp_init_undo(tp);
+ }
+- tp->snd_cwnd = tcp_packets_in_flight(tp) + 1;
++ tcp_snd_cwnd_set(tp, tcp_packets_in_flight(tp) + 1);
+ tp->snd_cwnd_cnt = 0;
+ tp->snd_cwnd_stamp = tcp_jiffies32;
+
+@@ -2458,7 +2458,7 @@ static void DBGUNDO(struct sock *sk, const char *msg)
+ pr_debug("Undo %s %pI4/%u c%u l%u ss%u/%u p%u\n",
+ msg,
+ &inet->inet_daddr, ntohs(inet->inet_dport),
+- tp->snd_cwnd, tcp_left_out(tp),
++ tcp_snd_cwnd(tp), tcp_left_out(tp),
+ tp->snd_ssthresh, tp->prior_ssthresh,
+ tp->packets_out);
+ }
+@@ -2467,7 +2467,7 @@ static void DBGUNDO(struct sock *sk, const char *msg)
+ pr_debug("Undo %s %pI6/%u c%u l%u ss%u/%u p%u\n",
+ msg,
+ &sk->sk_v6_daddr, ntohs(inet->inet_dport),
+- tp->snd_cwnd, tcp_left_out(tp),
++ tcp_snd_cwnd(tp), tcp_left_out(tp),
+ tp->snd_ssthresh, tp->prior_ssthresh,
+ tp->packets_out);
+ }
+@@ -2492,7 +2492,7 @@ static void tcp_undo_cwnd_reduction(struct sock *sk, bool unmark_loss)
+ if (tp->prior_ssthresh) {
+ const struct inet_connection_sock *icsk = inet_csk(sk);
+
+- tp->snd_cwnd = icsk->icsk_ca_ops->undo_cwnd(sk);
++ tcp_snd_cwnd_set(tp, icsk->icsk_ca_ops->undo_cwnd(sk));
+
+ if (tp->prior_ssthresh > tp->snd_ssthresh) {
+ tp->snd_ssthresh = tp->prior_ssthresh;
+@@ -2599,7 +2599,7 @@ static void tcp_init_cwnd_reduction(struct sock *sk)
+ tp->high_seq = tp->snd_nxt;
+ tp->tlp_high_seq = 0;
+ tp->snd_cwnd_cnt = 0;
+- tp->prior_cwnd = tp->snd_cwnd;
++ tp->prior_cwnd = tcp_snd_cwnd(tp);
+ tp->prr_delivered = 0;
+ tp->prr_out = 0;
+ tp->snd_ssthresh = inet_csk(sk)->icsk_ca_ops->ssthresh(sk);
+@@ -2629,7 +2629,7 @@ void tcp_cwnd_reduction(struct sock *sk, int newly_acked_sacked, int newly_lost,
+ }
+ /* Force a fast retransmit upon entering fast recovery */
+ sndcnt = max(sndcnt, (tp->prr_out ? 0 : 1));
+- tp->snd_cwnd = tcp_packets_in_flight(tp) + sndcnt;
++ tcp_snd_cwnd_set(tp, tcp_packets_in_flight(tp) + sndcnt);
+ }
+
+ static inline void tcp_end_cwnd_reduction(struct sock *sk)
+@@ -2642,7 +2642,7 @@ static inline void tcp_end_cwnd_reduction(struct sock *sk)
+ /* Reset cwnd to ssthresh in CWR or Recovery (unless it's undone) */
+ if (tp->snd_ssthresh < TCP_INFINITE_SSTHRESH &&
+ (inet_csk(sk)->icsk_ca_state == TCP_CA_CWR || tp->undo_marker)) {
+- tp->snd_cwnd = tp->snd_ssthresh;
++ tcp_snd_cwnd_set(tp, tp->snd_ssthresh);
+ tp->snd_cwnd_stamp = tcp_jiffies32;
+ }
+ tcp_ca_event(sk, CA_EVENT_COMPLETE_CWR);
+@@ -2706,12 +2706,15 @@ static void tcp_mtup_probe_success(struct sock *sk)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct inet_connection_sock *icsk = inet_csk(sk);
++ u64 val;
+
+- /* FIXME: breaks with very large cwnd */
+ tp->prior_ssthresh = tcp_current_ssthresh(sk);
+- tp->snd_cwnd = tp->snd_cwnd *
+- tcp_mss_to_mtu(sk, tp->mss_cache) /
+- icsk->icsk_mtup.probe_size;
++
++ val = (u64)tcp_snd_cwnd(tp) * tcp_mss_to_mtu(sk, tp->mss_cache);
++ do_div(val, icsk->icsk_mtup.probe_size);
++ WARN_ON_ONCE((u32)val != val);
++ tcp_snd_cwnd_set(tp, max_t(u32, 1U, val));
++
+ tp->snd_cwnd_cnt = 0;
+ tp->snd_cwnd_stamp = tcp_jiffies32;
+ tp->snd_ssthresh = tcp_current_ssthresh(sk);
+@@ -3034,7 +3037,7 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
+ tp->snd_una == tp->mtu_probe.probe_seq_start) {
+ tcp_mtup_probe_failed(sk);
+ /* Restores the reduction we did in tcp_mtup_probe() */
+- tp->snd_cwnd++;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ tcp_simple_retransmit(sk);
+ return;
+ }
+@@ -5420,7 +5423,7 @@ static bool tcp_should_expand_sndbuf(struct sock *sk)
+ return false;
+
+ /* If we filled the congestion window, do not expand. */
+- if (tcp_packets_in_flight(tp) >= tp->snd_cwnd)
++ if (tcp_packets_in_flight(tp) >= tcp_snd_cwnd(tp))
+ return false;
+
+ return true;
+@@ -5991,9 +5994,9 @@ void tcp_init_transfer(struct sock *sk, int bpf_op, struct sk_buff *skb)
+ * retransmission has occurred.
+ */
+ if (tp->total_retrans > 1 && tp->undo_marker)
+- tp->snd_cwnd = 1;
++ tcp_snd_cwnd_set(tp, 1);
+ else
+- tp->snd_cwnd = tcp_init_cwnd(tp, __sk_dst_get(sk));
++ tcp_snd_cwnd_set(tp, tcp_init_cwnd(tp, __sk_dst_get(sk)));
+ tp->snd_cwnd_stamp = tcp_jiffies32;
+
+ bpf_skops_established(sk, bpf_op, skb);
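
Note on the series above: nearly every hunk in the tcp_* files is a
mechanical conversion of raw tp->snd_cwnd reads and writes to accessor
calls. Per the upstream snd_cwnd-debugging series, the accessors in
include/net/tcp.h look roughly like this (a sketch from memory, not part
of this patch):

	static inline u32 tcp_snd_cwnd(const struct tcp_sock *tp)
	{
		return tp->snd_cwnd;
	}

	static inline void tcp_snd_cwnd_set(struct tcp_sock *tp, u32 val)
	{
		/* catch a zero or negative cwnd at the single write site */
		WARN_ON_ONCE((int)val <= 0);
		tp->snd_cwnd = val;
	}

Funneling every write through one helper is also what enables the one
non-mechanical fix above: tcp_mtup_probe_success() now scales cwnd in
64 bits with do_div(), warns on truncation, and clamps to at least 1,
retiring the old "FIXME: breaks with very large cwnd" comment.
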
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index fec656f5a39ee..79f9a6187a012 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2665,7 +2665,7 @@ static void get_tcp4_sock(struct sock *sk, struct seq_file *f, int i)
+ jiffies_to_clock_t(icsk->icsk_rto),
+ jiffies_to_clock_t(icsk->icsk_ack.ato),
+ (icsk->icsk_ack.quick << 1) | inet_csk_in_pingpong_mode(sk),
+- tp->snd_cwnd,
++ tcp_snd_cwnd(tp),
+ state == TCP_LISTEN ?
+ fastopenq->max_qlen :
+ (tcp_in_initial_slowstart(tp) ? -1 : tp->snd_ssthresh));
+diff --git a/net/ipv4/tcp_lp.c b/net/ipv4/tcp_lp.c
+index 82b36ec3f2f82..ae36780977d27 100644
+--- a/net/ipv4/tcp_lp.c
++++ b/net/ipv4/tcp_lp.c
+@@ -297,7 +297,7 @@ static void tcp_lp_pkts_acked(struct sock *sk, const struct ack_sample *sample)
+ lp->flag &= ~LP_WITHIN_THR;
+
+ pr_debug("TCP-LP: %05o|%5u|%5u|%15u|%15u|%15u\n", lp->flag,
+- tp->snd_cwnd, lp->remote_hz, lp->owd_min, lp->owd_max,
++ tcp_snd_cwnd(tp), lp->remote_hz, lp->owd_min, lp->owd_max,
+ lp->sowd >> 3);
+
+ if (lp->flag & LP_WITHIN_THR)
+@@ -313,12 +313,12 @@ static void tcp_lp_pkts_acked(struct sock *sk, const struct ack_sample *sample)
+ /* happened within inference
+ * drop snd_cwnd into 1 */
+ if (lp->flag & LP_WITHIN_INF)
+- tp->snd_cwnd = 1U;
++ tcp_snd_cwnd_set(tp, 1U);
+
+ /* happened after inference
+ * cut snd_cwnd into half */
+ else
+- tp->snd_cwnd = max(tp->snd_cwnd >> 1U, 1U);
++ tcp_snd_cwnd_set(tp, max(tcp_snd_cwnd(tp) >> 1U, 1U));
+
+ /* record this drop time */
+ lp->last_drop = now;
+diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
+index 0588b004ddac1..7029b0e98edb2 100644
+--- a/net/ipv4/tcp_metrics.c
++++ b/net/ipv4/tcp_metrics.c
+@@ -388,15 +388,15 @@ void tcp_update_metrics(struct sock *sk)
+ if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
+ !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) {
+ val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
+- if (val && (tp->snd_cwnd >> 1) > val)
++ if (val && (tcp_snd_cwnd(tp) >> 1) > val)
+ tcp_metric_set(tm, TCP_METRIC_SSTHRESH,
+- tp->snd_cwnd >> 1);
++ tcp_snd_cwnd(tp) >> 1);
+ }
+ if (!tcp_metric_locked(tm, TCP_METRIC_CWND)) {
+ val = tcp_metric_get(tm, TCP_METRIC_CWND);
+- if (tp->snd_cwnd > val)
++ if (tcp_snd_cwnd(tp) > val)
+ tcp_metric_set(tm, TCP_METRIC_CWND,
+- tp->snd_cwnd);
++ tcp_snd_cwnd(tp));
+ }
+ } else if (!tcp_in_slow_start(tp) &&
+ icsk->icsk_ca_state == TCP_CA_Open) {
+@@ -404,10 +404,10 @@ void tcp_update_metrics(struct sock *sk)
+ if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save &&
+ !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH))
+ tcp_metric_set(tm, TCP_METRIC_SSTHRESH,
+- max(tp->snd_cwnd >> 1, tp->snd_ssthresh));
++ max(tcp_snd_cwnd(tp) >> 1, tp->snd_ssthresh));
+ if (!tcp_metric_locked(tm, TCP_METRIC_CWND)) {
+ val = tcp_metric_get(tm, TCP_METRIC_CWND);
+- tcp_metric_set(tm, TCP_METRIC_CWND, (val + tp->snd_cwnd) >> 1);
++ tcp_metric_set(tm, TCP_METRIC_CWND, (val + tcp_snd_cwnd(tp)) >> 1);
+ }
+ } else {
+ /* Else slow start did not finish, cwnd is non-sense,
+diff --git a/net/ipv4/tcp_nv.c b/net/ipv4/tcp_nv.c
+index ab552356bdba8..a60662f4bdf92 100644
+--- a/net/ipv4/tcp_nv.c
++++ b/net/ipv4/tcp_nv.c
+@@ -197,10 +197,10 @@ static void tcpnv_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ }
+
+ if (ca->cwnd_growth_factor < 0) {
+- cnt = tp->snd_cwnd << -ca->cwnd_growth_factor;
++ cnt = tcp_snd_cwnd(tp) << -ca->cwnd_growth_factor;
+ tcp_cong_avoid_ai(tp, cnt, acked);
+ } else {
+- cnt = max(4U, tp->snd_cwnd >> ca->cwnd_growth_factor);
++ cnt = max(4U, tcp_snd_cwnd(tp) >> ca->cwnd_growth_factor);
+ tcp_cong_avoid_ai(tp, cnt, acked);
+ }
+ }
+@@ -209,7 +209,7 @@ static u32 tcpnv_recalc_ssthresh(struct sock *sk)
+ {
+ const struct tcp_sock *tp = tcp_sk(sk);
+
+- return max((tp->snd_cwnd * nv_loss_dec_factor) >> 10, 2U);
++ return max((tcp_snd_cwnd(tp) * nv_loss_dec_factor) >> 10, 2U);
+ }
+
+ static void tcpnv_state(struct sock *sk, u8 new_state)
+@@ -257,7 +257,7 @@ static void tcpnv_acked(struct sock *sk, const struct ack_sample *sample)
+ return;
+
+ /* Stop cwnd growth if we were in catch up mode */
+- if (ca->nv_catchup && tp->snd_cwnd >= nv_min_cwnd) {
++ if (ca->nv_catchup && tcp_snd_cwnd(tp) >= nv_min_cwnd) {
+ ca->nv_catchup = 0;
+ ca->nv_allow_cwnd_growth = 0;
+ }
+@@ -371,7 +371,7 @@ static void tcpnv_acked(struct sock *sk, const struct ack_sample *sample)
+ * if cwnd < max_win, grow cwnd
+ * else leave the same
+ */
+- if (tp->snd_cwnd > max_win) {
++ if (tcp_snd_cwnd(tp) > max_win) {
+ /* there is congestion, check that it is ok
+ * to make a CA decision
+ * 1. We should have at least nv_dec_eval_min_calls
+@@ -398,20 +398,20 @@ static void tcpnv_acked(struct sock *sk, const struct ack_sample *sample)
+ ca->nv_allow_cwnd_growth = 0;
+ tp->snd_ssthresh =
+ (nv_ssthresh_factor * max_win) >> 3;
+- if (tp->snd_cwnd - max_win > 2) {
++ if (tcp_snd_cwnd(tp) - max_win > 2) {
+ /* gap > 2, we do exponential cwnd decrease */
+ int dec;
+
+- dec = max(2U, ((tp->snd_cwnd - max_win) *
++ dec = max(2U, ((tcp_snd_cwnd(tp) - max_win) *
+ nv_cong_dec_mult) >> 7);
+- tp->snd_cwnd -= dec;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) - dec);
+ } else if (nv_cong_dec_mult > 0) {
+- tp->snd_cwnd = max_win;
++ tcp_snd_cwnd_set(tp, max_win);
+ }
+ if (ca->cwnd_growth_factor > 0)
+ ca->cwnd_growth_factor = 0;
+ ca->nv_no_cong_cnt = 0;
+- } else if (tp->snd_cwnd <= max_win - nv_pad_buffer) {
++ } else if (tcp_snd_cwnd(tp) <= max_win - nv_pad_buffer) {
+ /* There is no congestion, grow cwnd if allowed*/
+ if (ca->nv_eval_call_cnt < nv_inc_eval_min_calls)
+ return;
+@@ -444,8 +444,8 @@ static void tcpnv_acked(struct sock *sk, const struct ack_sample *sample)
+ * (it wasn't before, if it is now is because nv
+ * decreased it).
+ */
+- if (tp->snd_cwnd < nv_min_cwnd)
+- tp->snd_cwnd = nv_min_cwnd;
++ if (tcp_snd_cwnd(tp) < nv_min_cwnd)
++ tcp_snd_cwnd_set(tp, nv_min_cwnd);
+ }
+ }
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 0b5eab6851549..2adff4877cd6e 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -143,7 +143,7 @@ void tcp_cwnd_restart(struct sock *sk, s32 delta)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ u32 restart_cwnd = tcp_init_cwnd(tp, __sk_dst_get(sk));
+- u32 cwnd = tp->snd_cwnd;
++ u32 cwnd = tcp_snd_cwnd(tp);
+
+ tcp_ca_event(sk, CA_EVENT_CWND_RESTART);
+
+@@ -152,7 +152,7 @@ void tcp_cwnd_restart(struct sock *sk, s32 delta)
+
+ while ((delta -= inet_csk(sk)->icsk_rto) > 0 && cwnd > restart_cwnd)
+ cwnd >>= 1;
+- tp->snd_cwnd = max(cwnd, restart_cwnd);
++ tcp_snd_cwnd_set(tp, max(cwnd, restart_cwnd));
+ tp->snd_cwnd_stamp = tcp_jiffies32;
+ tp->snd_cwnd_used = 0;
+ }
+@@ -1014,7 +1014,7 @@ static void tcp_tsq_write(struct sock *sk)
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ if (tp->lost_out > tp->retrans_out &&
+- tp->snd_cwnd > tcp_packets_in_flight(tp)) {
++ tcp_snd_cwnd(tp) > tcp_packets_in_flight(tp)) {
+ tcp_mstamp_refresh(tp);
+ tcp_xmit_retransmit_queue(sk);
+ }
+@@ -1861,9 +1861,9 @@ static void tcp_cwnd_application_limited(struct sock *sk)
+ /* Limited by application or receiver window. */
+ u32 init_win = tcp_init_cwnd(tp, __sk_dst_get(sk));
+ u32 win_used = max(tp->snd_cwnd_used, init_win);
+- if (win_used < tp->snd_cwnd) {
++ if (win_used < tcp_snd_cwnd(tp)) {
+ tp->snd_ssthresh = tcp_current_ssthresh(sk);
+- tp->snd_cwnd = (tp->snd_cwnd + win_used) >> 1;
++ tcp_snd_cwnd_set(tp, (tcp_snd_cwnd(tp) + win_used) >> 1);
+ }
+ tp->snd_cwnd_used = 0;
+ }
+@@ -2035,7 +2035,7 @@ static inline unsigned int tcp_cwnd_test(const struct tcp_sock *tp,
+ return 1;
+
+ in_flight = tcp_packets_in_flight(tp);
+- cwnd = tp->snd_cwnd;
++ cwnd = tcp_snd_cwnd(tp);
+ if (in_flight >= cwnd)
+ return 0;
+
+@@ -2188,12 +2188,12 @@ static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb,
+ in_flight = tcp_packets_in_flight(tp);
+
+ BUG_ON(tcp_skb_pcount(skb) <= 1);
+- BUG_ON(tp->snd_cwnd <= in_flight);
++ BUG_ON(tcp_snd_cwnd(tp) <= in_flight);
+
+ send_win = tcp_wnd_end(tp) - TCP_SKB_CB(skb)->seq;
+
+ /* From in_flight test above, we know that cwnd > in_flight. */
+- cong_win = (tp->snd_cwnd - in_flight) * tp->mss_cache;
++ cong_win = (tcp_snd_cwnd(tp) - in_flight) * tp->mss_cache;
+
+ limit = min(send_win, cong_win);
+
+@@ -2207,7 +2207,7 @@ static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb,
+
+ win_divisor = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_tso_win_divisor);
+ if (win_divisor) {
+- u32 chunk = min(tp->snd_wnd, tp->snd_cwnd * tp->mss_cache);
++ u32 chunk = min(tp->snd_wnd, tcp_snd_cwnd(tp) * tp->mss_cache);
+
+ /* If at least some fraction of a window is available,
+ * just use it.
+@@ -2337,7 +2337,7 @@ static int tcp_mtu_probe(struct sock *sk)
+ if (likely(!icsk->icsk_mtup.enabled ||
+ icsk->icsk_mtup.probe_size ||
+ inet_csk(sk)->icsk_ca_state != TCP_CA_Open ||
+- tp->snd_cwnd < 11 ||
++ tcp_snd_cwnd(tp) < 11 ||
+ tp->rx_opt.num_sacks || tp->rx_opt.dsack))
+ return -1;
+
+@@ -2373,7 +2373,7 @@ static int tcp_mtu_probe(struct sock *sk)
+ return 0;
+
+ /* Do we need to wait to drain cwnd? With none in flight, don't stall */
+- if (tcp_packets_in_flight(tp) + 2 > tp->snd_cwnd) {
++ if (tcp_packets_in_flight(tp) + 2 > tcp_snd_cwnd(tp)) {
+ if (!tcp_packets_in_flight(tp))
+ return -1;
+ else
+@@ -2442,7 +2442,7 @@ static int tcp_mtu_probe(struct sock *sk)
+ if (!tcp_transmit_skb(sk, nskb, 1, GFP_ATOMIC)) {
+ /* Decrement cwnd here because we are sending
+ * effectively two packets. */
+- tp->snd_cwnd--;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) - 1);
+ tcp_event_new_data_sent(sk, nskb);
+
+ icsk->icsk_mtup.probe_size = tcp_mss_to_mtu(sk, nskb->len);
+@@ -2699,7 +2699,7 @@ repair:
+ else
+ tcp_chrono_stop(sk, TCP_CHRONO_RWND_LIMITED);
+
+- is_cwnd_limited |= (tcp_packets_in_flight(tp) >= tp->snd_cwnd);
++ is_cwnd_limited |= (tcp_packets_in_flight(tp) >= tcp_snd_cwnd(tp));
+ if (likely(sent_pkts || is_cwnd_limited))
+ tcp_cwnd_validate(sk, is_cwnd_limited);
+
+@@ -2809,7 +2809,7 @@ void tcp_send_loss_probe(struct sock *sk)
+ if (unlikely(!skb)) {
+ WARN_ONCE(tp->packets_out,
+ "invalid inflight: %u state %u cwnd %u mss %d\n",
+- tp->packets_out, sk->sk_state, tp->snd_cwnd, mss);
++ tp->packets_out, sk->sk_state, tcp_snd_cwnd(tp), mss);
+ inet_csk(sk)->icsk_pending = 0;
+ return;
+ }
+@@ -3293,7 +3293,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
+ if (!hole)
+ tp->retransmit_skb_hint = skb;
+
+- segs = tp->snd_cwnd - tcp_packets_in_flight(tp);
++ segs = tcp_snd_cwnd(tp) - tcp_packets_in_flight(tp);
+ if (segs <= 0)
+ break;
+ sacked = TCP_SKB_CB(skb)->sacked;
+@@ -4100,8 +4100,8 @@ int tcp_rtx_synack(const struct sock *sk, struct request_sock *req)
+ res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL,
+ NULL);
+ if (!res) {
+- __TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
+- __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
++ TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
++ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS);
+ if (unlikely(tcp_passive_fastopen(sk)))
+ tcp_sk(sk)->total_retrans++;
+ trace_tcp_retransmit_synack(sk, req);
+diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
+index 9a8e014d9b5b9..a8f6d9d06f2eb 100644
+--- a/net/ipv4/tcp_rate.c
++++ b/net/ipv4/tcp_rate.c
+@@ -200,7 +200,7 @@ void tcp_rate_check_app_limited(struct sock *sk)
+ /* Nothing in sending host's qdisc queues or NIC tx queue. */
+ sk_wmem_alloc_get(sk) < SKB_TRUESIZE(1) &&
+ /* We are not limited by CWND. */
+- tcp_packets_in_flight(tp) < tp->snd_cwnd &&
++ tcp_packets_in_flight(tp) < tcp_snd_cwnd(tp) &&
+ /* All lost packets have been retransmitted. */
+ tp->lost_out <= tp->retrans_out)
+ tp->app_limited =
+diff --git a/net/ipv4/tcp_scalable.c b/net/ipv4/tcp_scalable.c
+index 5842081bc8a25..862b96248a92d 100644
+--- a/net/ipv4/tcp_scalable.c
++++ b/net/ipv4/tcp_scalable.c
+@@ -27,7 +27,7 @@ static void tcp_scalable_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ if (!acked)
+ return;
+ }
+- tcp_cong_avoid_ai(tp, min(tp->snd_cwnd, TCP_SCALABLE_AI_CNT),
++ tcp_cong_avoid_ai(tp, min(tcp_snd_cwnd(tp), TCP_SCALABLE_AI_CNT),
+ acked);
+ }
+
+@@ -35,7 +35,7 @@ static u32 tcp_scalable_ssthresh(struct sock *sk)
+ {
+ const struct tcp_sock *tp = tcp_sk(sk);
+
+- return max(tp->snd_cwnd - (tp->snd_cwnd>>TCP_SCALABLE_MD_SCALE), 2U);
++ return max(tcp_snd_cwnd(tp) - (tcp_snd_cwnd(tp)>>TCP_SCALABLE_MD_SCALE), 2U);
+ }
+
+ static struct tcp_congestion_ops tcp_scalable __read_mostly = {
+diff --git a/net/ipv4/tcp_vegas.c b/net/ipv4/tcp_vegas.c
+index c8003c8aad2c0..786848ad37ea8 100644
+--- a/net/ipv4/tcp_vegas.c
++++ b/net/ipv4/tcp_vegas.c
+@@ -159,7 +159,7 @@ EXPORT_SYMBOL_GPL(tcp_vegas_cwnd_event);
+
+ static inline u32 tcp_vegas_ssthresh(struct tcp_sock *tp)
+ {
+- return min(tp->snd_ssthresh, tp->snd_cwnd);
++ return min(tp->snd_ssthresh, tcp_snd_cwnd(tp));
+ }
+
+ static void tcp_vegas_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+@@ -217,14 +217,14 @@ static void tcp_vegas_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ * This is:
+ * (actual rate in segments) * baseRTT
+ */
+- target_cwnd = (u64)tp->snd_cwnd * vegas->baseRTT;
++ target_cwnd = (u64)tcp_snd_cwnd(tp) * vegas->baseRTT;
+ do_div(target_cwnd, rtt);
+
+ /* Calculate the difference between the window we had,
+ * and the window we would like to have. This quantity
+ * is the "Diff" from the Arizona Vegas papers.
+ */
+- diff = tp->snd_cwnd * (rtt-vegas->baseRTT) / vegas->baseRTT;
++ diff = tcp_snd_cwnd(tp) * (rtt-vegas->baseRTT) / vegas->baseRTT;
+
+ if (diff > gamma && tcp_in_slow_start(tp)) {
+ /* Going too fast. Time to slow down
+@@ -238,7 +238,8 @@ static void tcp_vegas_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ * truncation robs us of full link
+ * utilization.
+ */
+- tp->snd_cwnd = min(tp->snd_cwnd, (u32)target_cwnd+1);
++ tcp_snd_cwnd_set(tp, min(tcp_snd_cwnd(tp),
++ (u32)target_cwnd + 1));
+ tp->snd_ssthresh = tcp_vegas_ssthresh(tp);
+
+ } else if (tcp_in_slow_start(tp)) {
+@@ -254,14 +255,14 @@ static void tcp_vegas_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ /* The old window was too fast, so
+ * we slow down.
+ */
+- tp->snd_cwnd--;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) - 1);
+ tp->snd_ssthresh
+ = tcp_vegas_ssthresh(tp);
+ } else if (diff < alpha) {
+ /* We don't have enough extra packets
+ * in the network, so speed up.
+ */
+- tp->snd_cwnd++;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ } else {
+ /* Sending just as fast as we
+ * should be.
+@@ -269,10 +270,10 @@ static void tcp_vegas_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ }
+ }
+
+- if (tp->snd_cwnd < 2)
+- tp->snd_cwnd = 2;
+- else if (tp->snd_cwnd > tp->snd_cwnd_clamp)
+- tp->snd_cwnd = tp->snd_cwnd_clamp;
++ if (tcp_snd_cwnd(tp) < 2)
++ tcp_snd_cwnd_set(tp, 2);
++ else if (tcp_snd_cwnd(tp) > tp->snd_cwnd_clamp)
++ tcp_snd_cwnd_set(tp, tp->snd_cwnd_clamp);
+
+ tp->snd_ssthresh = tcp_current_ssthresh(sk);
+ }
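
The Vegas estimates above are easier to see with numbers. A minimal
userspace model of the two quantities (values invented for illustration;
the kernel does the u64 division with do_div() so it stays cheap on
32-bit targets):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* illustrative values; RTTs in arbitrary time units */
		uint64_t cwnd = 100, base_rtt = 100, rtt = 125;

		/* window the base-RTT path could carry at the current rate */
		uint64_t target_cwnd = cwnd * base_rtt / rtt;        /* 80 */
		/* "Diff": segments estimated to be queued in the network */
		uint64_t diff = cwnd * (rtt - base_rtt) / base_rtt;  /* 25 */

		printf("target_cwnd=%llu diff=%llu\n",
		       (unsigned long long)target_cwnd,
		       (unsigned long long)diff);
		return 0;
	}

With the module defaults of alpha=2 and beta=4, a diff of 25 is far past
beta, so the sender pulls cwnd back toward target_cwnd + 1 as in the
hunk above.
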
+diff --git a/net/ipv4/tcp_veno.c b/net/ipv4/tcp_veno.c
+index cd50a61c9976d..366ff6f214b2e 100644
+--- a/net/ipv4/tcp_veno.c
++++ b/net/ipv4/tcp_veno.c
+@@ -146,11 +146,11 @@ static void tcp_veno_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+
+ rtt = veno->minrtt;
+
+- target_cwnd = (u64)tp->snd_cwnd * veno->basertt;
++ target_cwnd = (u64)tcp_snd_cwnd(tp) * veno->basertt;
+ target_cwnd <<= V_PARAM_SHIFT;
+ do_div(target_cwnd, rtt);
+
+- veno->diff = (tp->snd_cwnd << V_PARAM_SHIFT) - target_cwnd;
++ veno->diff = (tcp_snd_cwnd(tp) << V_PARAM_SHIFT) - target_cwnd;
+
+ if (tcp_in_slow_start(tp)) {
+ /* Slow start. */
+@@ -164,15 +164,15 @@ static void tcp_veno_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ /* In the "non-congestive state", increase cwnd
+ * every rtt.
+ */
+- tcp_cong_avoid_ai(tp, tp->snd_cwnd, acked);
++ tcp_cong_avoid_ai(tp, tcp_snd_cwnd(tp), acked);
+ } else {
+ /* In the "congestive state", increase cwnd
+ * every other rtt.
+ */
+- if (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
++ if (tp->snd_cwnd_cnt >= tcp_snd_cwnd(tp)) {
+ if (veno->inc &&
+- tp->snd_cwnd < tp->snd_cwnd_clamp) {
+- tp->snd_cwnd++;
++ tcp_snd_cwnd(tp) < tp->snd_cwnd_clamp) {
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) + 1);
+ veno->inc = 0;
+ } else
+ veno->inc = 1;
+@@ -181,10 +181,10 @@ static void tcp_veno_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+ tp->snd_cwnd_cnt += acked;
+ }
+ done:
+- if (tp->snd_cwnd < 2)
+- tp->snd_cwnd = 2;
+- else if (tp->snd_cwnd > tp->snd_cwnd_clamp)
+- tp->snd_cwnd = tp->snd_cwnd_clamp;
++ if (tcp_snd_cwnd(tp) < 2)
++ tcp_snd_cwnd_set(tp, 2);
++ else if (tcp_snd_cwnd(tp) > tp->snd_cwnd_clamp)
++ tcp_snd_cwnd_set(tp, tp->snd_cwnd_clamp);
+ }
+ /* Wipe the slate clean for the next rtt. */
+ /* veno->cntrtt = 0; */
+@@ -199,10 +199,10 @@ static u32 tcp_veno_ssthresh(struct sock *sk)
+
+ if (veno->diff < beta)
+ /* in "non-congestive state", cut cwnd by 1/5 */
+- return max(tp->snd_cwnd * 4 / 5, 2U);
++ return max(tcp_snd_cwnd(tp) * 4 / 5, 2U);
+ else
+ /* in "congestive state", cut cwnd by 1/2 */
+- return max(tp->snd_cwnd >> 1U, 2U);
++ return max(tcp_snd_cwnd(tp) >> 1U, 2U);
+ }
+
+ static struct tcp_congestion_ops tcp_veno __read_mostly = {
+diff --git a/net/ipv4/tcp_westwood.c b/net/ipv4/tcp_westwood.c
+index b2e05c4cea00f..c6e97141eef25 100644
+--- a/net/ipv4/tcp_westwood.c
++++ b/net/ipv4/tcp_westwood.c
+@@ -244,7 +244,8 @@ static void tcp_westwood_event(struct sock *sk, enum tcp_ca_event event)
+
+ switch (event) {
+ case CA_EVENT_COMPLETE_CWR:
+- tp->snd_cwnd = tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk);
++ tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk);
++ tcp_snd_cwnd_set(tp, tp->snd_ssthresh);
+ break;
+ case CA_EVENT_LOSS:
+ tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk);
+diff --git a/net/ipv4/tcp_yeah.c b/net/ipv4/tcp_yeah.c
+index 07c4c93b9fdb6..18b07ff5d20e6 100644
+--- a/net/ipv4/tcp_yeah.c
++++ b/net/ipv4/tcp_yeah.c
+@@ -71,11 +71,11 @@ static void tcp_yeah_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+
+ if (!yeah->doing_reno_now) {
+ /* Scalable */
+- tcp_cong_avoid_ai(tp, min(tp->snd_cwnd, TCP_SCALABLE_AI_CNT),
++ tcp_cong_avoid_ai(tp, min(tcp_snd_cwnd(tp), TCP_SCALABLE_AI_CNT),
+ acked);
+ } else {
+ /* Reno */
+- tcp_cong_avoid_ai(tp, tp->snd_cwnd, acked);
++ tcp_cong_avoid_ai(tp, tcp_snd_cwnd(tp), acked);
+ }
+
+ /* The key players are v_vegas.beg_snd_una and v_beg_snd_nxt.
+@@ -130,7 +130,7 @@ do_vegas:
+ /* Compute excess number of packets above bandwidth
+ * Avoid doing full 64 bit divide.
+ */
+- bw = tp->snd_cwnd;
++ bw = tcp_snd_cwnd(tp);
+ bw *= rtt - yeah->vegas.baseRTT;
+ do_div(bw, rtt);
+ queue = bw;
+@@ -138,20 +138,20 @@ do_vegas:
+ if (queue > TCP_YEAH_ALPHA ||
+ rtt - yeah->vegas.baseRTT > (yeah->vegas.baseRTT / TCP_YEAH_PHY)) {
+ if (queue > TCP_YEAH_ALPHA &&
+- tp->snd_cwnd > yeah->reno_count) {
++ tcp_snd_cwnd(tp) > yeah->reno_count) {
+ u32 reduction = min(queue / TCP_YEAH_GAMMA ,
+- tp->snd_cwnd >> TCP_YEAH_EPSILON);
++ tcp_snd_cwnd(tp) >> TCP_YEAH_EPSILON);
+
+- tp->snd_cwnd -= reduction;
++ tcp_snd_cwnd_set(tp, tcp_snd_cwnd(tp) - reduction);
+
+- tp->snd_cwnd = max(tp->snd_cwnd,
+- yeah->reno_count);
++ tcp_snd_cwnd_set(tp, max(tcp_snd_cwnd(tp),
++ yeah->reno_count));
+
+- tp->snd_ssthresh = tp->snd_cwnd;
++ tp->snd_ssthresh = tcp_snd_cwnd(tp);
+ }
+
+ if (yeah->reno_count <= 2)
+- yeah->reno_count = max(tp->snd_cwnd>>1, 2U);
++ yeah->reno_count = max(tcp_snd_cwnd(tp)>>1, 2U);
+ else
+ yeah->reno_count++;
+
+@@ -176,7 +176,7 @@ do_vegas:
+ */
+ yeah->vegas.beg_snd_una = yeah->vegas.beg_snd_nxt;
+ yeah->vegas.beg_snd_nxt = tp->snd_nxt;
+- yeah->vegas.beg_snd_cwnd = tp->snd_cwnd;
++ yeah->vegas.beg_snd_cwnd = tcp_snd_cwnd(tp);
+
+ /* Wipe the slate clean for the next RTT. */
+ yeah->vegas.cntRTT = 0;
+@@ -193,16 +193,16 @@ static u32 tcp_yeah_ssthresh(struct sock *sk)
+ if (yeah->doing_reno_now < TCP_YEAH_RHO) {
+ reduction = yeah->lastQ;
+
+- reduction = min(reduction, max(tp->snd_cwnd>>1, 2U));
++ reduction = min(reduction, max(tcp_snd_cwnd(tp)>>1, 2U));
+
+- reduction = max(reduction, tp->snd_cwnd >> TCP_YEAH_DELTA);
++ reduction = max(reduction, tcp_snd_cwnd(tp) >> TCP_YEAH_DELTA);
+ } else
+- reduction = max(tp->snd_cwnd>>1, 2U);
++ reduction = max(tcp_snd_cwnd(tp)>>1, 2U);
+
+ yeah->fast_count = 0;
+ yeah->reno_count = max(yeah->reno_count>>1, 2U);
+
+- return max_t(int, tp->snd_cwnd - reduction, 2);
++ return max_t(int, tcp_snd_cwnd(tp) - reduction, 2);
+ }
+
+ static struct tcp_congestion_ops tcp_yeah __read_mostly = {
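
tcp_yeah's queue estimate above is the same queuing-delay idea as the
Vegas "Diff" worked through earlier, just normalized by rtt instead of
baseRTT, and computed via do_div() on a u64 for the same 32-bit-safety
reason.
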
+diff --git a/net/ipv4/xfrm4_protocol.c b/net/ipv4/xfrm4_protocol.c
+index 2fe5860c21d6e..b146ce88c5d0c 100644
+--- a/net/ipv4/xfrm4_protocol.c
++++ b/net/ipv4/xfrm4_protocol.c
+@@ -304,4 +304,3 @@ void __init xfrm4_protocol_init(void)
+ {
+ xfrm_input_register_afinfo(&xfrm4_input_afinfo);
+ }
+-EXPORT_SYMBOL(xfrm4_protocol_init);
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
+index 29bc4e7c3046e..6de01185cc68f 100644
+--- a/net/ipv6/seg6_hmac.c
++++ b/net/ipv6/seg6_hmac.c
+@@ -399,7 +399,6 @@ int __init seg6_hmac_init(void)
+ {
+ return seg6_hmac_init_algo();
+ }
+-EXPORT_SYMBOL(seg6_hmac_init);
+
+ int __net_init seg6_hmac_net_init(struct net *net)
+ {
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 075ee8a2df3b7..29a4fc92580ec 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -2074,7 +2074,7 @@ static void get_tcp6_sock(struct seq_file *seq, struct sock *sp, int i)
+ jiffies_to_clock_t(icsk->icsk_rto),
+ jiffies_to_clock_t(icsk->icsk_ack.ato),
+ (icsk->icsk_ack.quick << 1) | inet_csk_in_pingpong_mode(sp),
+- tp->snd_cwnd,
++ tcp_snd_cwnd(tp),
+ state == TCP_LISTEN ?
+ fastopenq->max_qlen :
+ (tcp_in_initial_slowstart(tp) ? -1 : tp->snd_ssthresh)
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 339d95df19d32..d93bde6573593 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2826,10 +2826,12 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb
+ void *ext_hdrs[SADB_EXT_MAX];
+ int err;
+
+- err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+- BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
+- if (err)
+- return err;
++ /* Non-zero return value of pfkey_broadcast() does not always signal
++ * an error and even on an actual error we may still want to process
++ * the message so rather ignore the return value.
++ */
++ pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
++ BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
+
+ memset(ext_hdrs, 0, sizeof(ext_hdrs));
+ err = parse_exthdrs(skb, hdr, ext_hdrs);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 42cc703a68e50..8eac1915ec730 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -544,6 +544,7 @@ static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type,
+ if (msg_type == NFT_MSG_NEWFLOWTABLE)
+ nft_activate_next(ctx->net, flowtable);
+
++ INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
+ nft_trans_flowtable(trans) = flowtable;
+ nft_trans_commit_list_add_tail(ctx->net, trans);
+
+@@ -1835,7 +1836,6 @@ static struct nft_hook *nft_netdev_hook_alloc(struct net *net,
+ goto err_hook_dev;
+ }
+ hook->ops.dev = dev;
+- hook->inactive = false;
+
+ return hook;
+
+@@ -2087,7 +2087,7 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family,
+ chain->flags |= NFT_CHAIN_BASE | flags;
+ basechain->policy = NF_ACCEPT;
+ if (chain->flags & NFT_CHAIN_HW_OFFLOAD &&
+- nft_chain_offload_priority(basechain) < 0)
++ !nft_chain_offload_support(basechain))
+ return -EOPNOTSUPP;
+
+ flow_block_init(&basechain->flow_block);
+@@ -7247,7 +7247,7 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
+ nf_unregister_net_hook(net, &hook->ops);
+ if (release_netdev) {
+ list_del(&hook->list);
+- kfree_rcu(hook);
++ kfree_rcu(hook, rcu);
+ }
+ }
+ }
+@@ -7348,11 +7348,15 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
+
+ if (nla[NFTA_FLOWTABLE_FLAGS]) {
+ flags = ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS]));
+- if (flags & ~NFT_FLOWTABLE_MASK)
+- return -EOPNOTSUPP;
++ if (flags & ~NFT_FLOWTABLE_MASK) {
++ err = -EOPNOTSUPP;
++ goto err_flowtable_update_hook;
++ }
+ if ((flowtable->data.flags & NFT_FLOWTABLE_HW_OFFLOAD) ^
+- (flags & NFT_FLOWTABLE_HW_OFFLOAD))
+- return -EOPNOTSUPP;
++ (flags & NFT_FLOWTABLE_HW_OFFLOAD)) {
++ err = -EOPNOTSUPP;
++ goto err_flowtable_update_hook;
++ }
+ } else {
+ flags = flowtable->data.flags;
+ }
+@@ -7533,6 +7537,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ {
+ const struct nlattr * const *nla = ctx->nla;
+ struct nft_flowtable_hook flowtable_hook;
++ LIST_HEAD(flowtable_del_list);
+ struct nft_hook *this, *hook;
+ struct nft_trans *trans;
+ int err;
+@@ -7548,7 +7553,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ err = -ENOENT;
+ goto err_flowtable_del_hook;
+ }
+- hook->inactive = true;
++ list_move(&hook->list, &flowtable_del_list);
+ }
+
+ trans = nft_trans_alloc(ctx, NFT_MSG_DELFLOWTABLE,
+@@ -7561,6 +7566,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ nft_trans_flowtable(trans) = flowtable;
+ nft_trans_flowtable_update(trans) = true;
+ INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans));
++ list_splice(&flowtable_del_list, &nft_trans_flowtable_hooks(trans));
+ nft_flowtable_hook_release(&flowtable_hook);
+
+ nft_trans_commit_list_add_tail(ctx->net, trans);
+@@ -7568,13 +7574,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
+ return 0;
+
+ err_flowtable_del_hook:
+- list_for_each_entry(this, &flowtable_hook.list, list) {
+- hook = nft_hook_list_find(&flowtable->hook_list, this);
+- if (!hook)
+- break;
+-
+- hook->inactive = false;
+- }
++ list_splice(&flowtable_del_list, &flowtable->hook_list);
+ nft_flowtable_hook_release(&flowtable_hook);
+
+ return err;
+@@ -8244,6 +8244,9 @@ static void nft_commit_release(struct nft_trans *trans)
+ nf_tables_chain_destroy(&trans->ctx);
+ break;
+ case NFT_MSG_DELRULE:
++ if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
++ nft_flow_rule_destroy(nft_trans_flow_rule(trans));
++
+ nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+ break;
+ case NFT_MSG_DELSET:
+@@ -8480,17 +8483,6 @@ void nft_chain_del(struct nft_chain *chain)
+ list_del_rcu(&chain->list);
+ }
+
+-static void nft_flowtable_hooks_del(struct nft_flowtable *flowtable,
+- struct list_head *hook_list)
+-{
+- struct nft_hook *hook, *next;
+-
+- list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
+- if (hook->inactive)
+- list_move(&hook->list, hook_list);
+- }
+-}
+-
+ static void nf_tables_module_autoload_cleanup(struct net *net)
+ {
+ struct nftables_pernet *nft_net = nft_pernet(net);
+@@ -8745,6 +8737,9 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ nf_tables_rule_notify(&trans->ctx,
+ nft_trans_rule(trans),
+ NFT_MSG_NEWRULE);
++ if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD)
++ nft_flow_rule_destroy(nft_trans_flow_rule(trans));
++
+ nft_trans_destroy(trans);
+ break;
+ case NFT_MSG_DELRULE:
+@@ -8835,8 +8830,6 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ break;
+ case NFT_MSG_DELFLOWTABLE:
+ if (nft_trans_flowtable_update(trans)) {
+- nft_flowtable_hooks_del(nft_trans_flowtable(trans),
+- &nft_trans_flowtable_hooks(trans));
+ nf_tables_flowtable_notify(&trans->ctx,
+ nft_trans_flowtable(trans),
+ &nft_trans_flowtable_hooks(trans),
+@@ -8917,7 +8910,6 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ struct nftables_pernet *nft_net = nft_pernet(net);
+ struct nft_trans *trans, *next;
+ struct nft_trans_elem *te;
+- struct nft_hook *hook;
+
+ if (action == NFNL_ABORT_VALIDATE &&
+ nf_tables_validate(net) < 0)
+@@ -9048,8 +9040,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ break;
+ case NFT_MSG_DELFLOWTABLE:
+ if (nft_trans_flowtable_update(trans)) {
+- list_for_each_entry(hook, &nft_trans_flowtable(trans)->hook_list, list)
+- hook->inactive = false;
++ list_splice(&nft_trans_flowtable_hooks(trans),
++ &nft_trans_flowtable(trans)->hook_list);
+ } else {
+ trans->ctx.table->use++;
+ nft_clear(trans->ctx.net, nft_trans_flowtable(trans));
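
The recurring change in the flowtable hunks above replaces the
hook->inactive flag with list ownership: hooks slated for deletion are
moved onto a private list that either travels with the transaction
(commit) or is spliced back unchanged (abort). A simplified sketch of
the pattern, with should_delete() standing in for the real match logic:

	static void del_flowtable_hooks(struct nft_flowtable *ft,
					struct list_head *trans_hooks)
	{
		struct nft_hook *hook, *next;
		LIST_HEAD(del_list);

		list_for_each_entry_safe(hook, next, &ft->hook_list, list)
			if (should_delete(hook))  /* placeholder predicate */
				list_move(&hook->list, &del_list);

		/* commit: the transaction now owns exactly these hooks */
		list_splice(&del_list, trans_hooks);

		/* abort would instead restore them:
		 *	list_splice(&del_list, &ft->hook_list);
		 */
	}

Expressing the pending deletion as list membership removes the window in
which other walkers of hook_list could observe half-marked inactive
flags.
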
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 2d36952b13920..910ef881c3b85 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -208,7 +208,7 @@ static int nft_setup_cb_call(enum tc_setup_type type, void *type_data,
+ return 0;
+ }
+
+-int nft_chain_offload_priority(struct nft_base_chain *basechain)
++static int nft_chain_offload_priority(const struct nft_base_chain *basechain)
+ {
+ if (basechain->ops.priority <= 0 ||
+ basechain->ops.priority > USHRT_MAX)
+@@ -217,6 +217,27 @@ int nft_chain_offload_priority(struct nft_base_chain *basechain)
+ return 0;
+ }
+
++bool nft_chain_offload_support(const struct nft_base_chain *basechain)
++{
++ struct net_device *dev;
++ struct nft_hook *hook;
++
++ if (nft_chain_offload_priority(basechain) < 0)
++ return false;
++
++ list_for_each_entry(hook, &basechain->hook_list, list) {
++ if (hook->ops.pf != NFPROTO_NETDEV ||
++ hook->ops.hooknum != NF_NETDEV_INGRESS)
++ return false;
++
++ dev = hook->ops.dev;
++ if (!dev->netdev_ops->ndo_setup_tc && !flow_indr_dev_exists())
++ return false;
++ }
++
++ return true;
++}
++
+ static void nft_flow_cls_offload_setup(struct flow_cls_offload *cls_flow,
+ const struct nft_base_chain *basechain,
+ const struct nft_rule *rule,
+diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
+index be1595d6979d8..db8f9116eeb43 100644
+--- a/net/netfilter/nft_nat.c
++++ b/net/netfilter/nft_nat.c
+@@ -334,7 +334,8 @@ static void nft_nat_inet_eval(const struct nft_expr *expr,
+ {
+ const struct nft_nat *priv = nft_expr_priv(expr);
+
+- if (priv->family == nft_pf(pkt))
++ if (priv->family == nft_pf(pkt) ||
++ priv->family == NFPROTO_INET)
+ nft_nat_eval(expr, regs, pkt);
+ }
+
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 8955f31fa47e9..aca6e2b599c86 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -373,6 +373,7 @@ static void set_ip_addr(struct sk_buff *skb, struct iphdr *nh,
+ update_ip_l4_checksum(skb, nh, *addr, new_addr);
+ csum_replace4(&nh->check, *addr, new_addr);
+ skb_clear_hash(skb);
++ ovs_ct_clear(skb, NULL);
+ *addr = new_addr;
+ }
+
+@@ -420,6 +421,7 @@ static void set_ipv6_addr(struct sk_buff *skb, u8 l4_proto,
+ update_ipv6_checksum(skb, l4_proto, addr, new_addr);
+
+ skb_clear_hash(skb);
++ ovs_ct_clear(skb, NULL);
+ memcpy(addr, new_addr, sizeof(__be32[4]));
+ }
+
+@@ -660,6 +662,7 @@ static int set_nsh(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ static void set_tp_port(struct sk_buff *skb, __be16 *port,
+ __be16 new_port, __sum16 *check)
+ {
++ ovs_ct_clear(skb, NULL);
+ inet_proto_csum_replace2(check, skb, *port, new_port, false);
+ *port = new_port;
+ }
+@@ -699,6 +702,7 @@ static int set_udp(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ uh->dest = dst;
+ flow_key->tp.src = src;
+ flow_key->tp.dst = dst;
++ ovs_ct_clear(skb, NULL);
+ }
+
+ skb_clear_hash(skb);
+@@ -761,6 +765,8 @@ static int set_sctp(struct sk_buff *skb, struct sw_flow_key *flow_key,
+ sh->checksum = old_csum ^ old_correct_csum ^ new_csum;
+
+ skb_clear_hash(skb);
++ ovs_ct_clear(skb, NULL);
++
+ flow_key->tp.src = sh->source;
+ flow_key->tp.dst = sh->dest;
+
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 4a947c13c813a..4e70df91d0f2a 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -1342,7 +1342,9 @@ int ovs_ct_clear(struct sk_buff *skb, struct sw_flow_key *key)
+
+ nf_ct_put(ct);
+ nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
+- ovs_ct_fill_key(skb, key, false);
++
++ if (key)
++ ovs_ct_fill_key(skb, key, false);
+
+ return 0;
+ }
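
The NULL-key tolerance added here is what the actions.c hunks above rely
on: after rewriting addresses or ports, the datapath only needs the
skb's conntrack state cleared, not the flow key refilled, so those
callers pass ovs_ct_clear(skb, NULL).
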
+diff --git a/net/sched/act_police.c b/net/sched/act_police.c
+index 0923aa2b8f8a7..899fe025df776 100644
+--- a/net/sched/act_police.c
++++ b/net/sched/act_police.c
+@@ -239,6 +239,20 @@ release_idr:
+ return err;
+ }
+
++static bool tcf_police_mtu_check(struct sk_buff *skb, u32 limit)
++{
++ u32 len;
++
++ if (skb_is_gso(skb))
++ return skb_gso_validate_mac_len(skb, limit);
++
++ len = qdisc_pkt_len(skb);
++ if (skb_at_tc_ingress(skb))
++ len += skb->mac_len;
++
++ return len <= limit;
++}
++
+ static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a,
+ struct tcf_result *res)
+ {
+@@ -261,7 +275,7 @@ static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a,
+ goto inc_overlimits;
+ }
+
+- if (qdisc_pkt_len(skb) <= p->tcfp_mtu) {
++ if (tcf_police_mtu_check(skb, p->tcfp_mtu)) {
+ if (!p->rate_present && !p->pps_present) {
+ ret = p->tcfp_result;
+ goto end;
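
With this helper the policer's mtu check becomes GSO-aware (validating
the on-wire MAC length per segment) and counts the MAC header on ingress
instead of comparing qdisc_pkt_len() directly; the police_mtu_rx/tx
selftests added at the end of this patch exercise exactly this with
"action police mtu 1042 conform-exceed drop/ok".
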
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index b9fe31834354d..4bc6b16669f35 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1973,6 +1973,7 @@ static void smc_find_rdma_v2_device_serv(struct smc_sock *new_smc,
+
+ not_found:
+ ini->smcr_version &= ~SMC_V2;
++ ini->smcrv2.ib_dev_v2 = NULL;
+ ini->check_smcrv2 = false;
+ }
+
+diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
+index 9d5a971689695..93042ef6869ba 100644
+--- a/net/smc/smc_cdc.c
++++ b/net/smc/smc_cdc.c
+@@ -72,7 +72,7 @@ int smc_cdc_get_free_slot(struct smc_connection *conn,
+ /* abnormal termination */
+ if (!rc)
+ smc_wr_tx_put_slot(link,
+- (struct smc_wr_tx_pend_priv *)pend);
++ (struct smc_wr_tx_pend_priv *)(*pend));
+ rc = -EPIPE;
+ }
+ return rc;
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index df194cc070350..b57cf9df4de89 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -979,7 +979,11 @@ static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr,
+ */
+ xdr->p = (void *)p + frag2bytes;
+ space_left = xdr->buf->buflen - xdr->buf->len;
+- xdr->end = (void *)p + min_t(int, space_left, PAGE_SIZE);
++ if (space_left - nbytes >= PAGE_SIZE)
++ xdr->end = (void *)p + PAGE_SIZE;
++ else
++ xdr->end = (void *)p + space_left - frag1bytes;
++
+ xdr->buf->page_len += frag2bytes;
+ xdr->buf->len += nbytes;
+ return p;
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index 281ddb87ac8d5..190a4de239c85 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -1121,6 +1121,7 @@ static bool
+ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
+ {
++ struct rpc_xprt *xprt = &r_xprt->rx_xprt;
+ struct xdr_stream *xdr = &rep->rr_stream;
+ __be32 *p;
+
+@@ -1144,6 +1145,10 @@ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep)
+ if (*p != cpu_to_be32(RPC_CALL))
+ return false;
+
++ /* No bc service. */
++ if (xprt->bc_serv == NULL)
++ return false;
++
+ /* Now that we are sure this is a backchannel call,
+ * advance to the RPC header.
+ */
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+index 5f0155fdefc7b..11cf7c6466443 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+@@ -478,10 +478,10 @@ svc_rdma_build_writes(struct svc_rdma_write_info *info,
+ unsigned int write_len;
+ u64 offset;
+
+- seg = &info->wi_chunk->ch_segments[info->wi_seg_no];
+- if (!seg)
++ if (info->wi_seg_no >= info->wi_chunk->ch_segcount)
+ goto out_overflow;
+
++ seg = &info->wi_chunk->ch_segments[info->wi_seg_no];
+ write_len = min(remaining, seg->rs_length - info->wi_seg_off);
+ if (!write_len)
+ goto out_overflow;
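
The svc_rdma_build_writes() fix is a classic: the address of an array
element is never NULL, so the old NULL check could not catch a runaway
segment index. A standalone illustration of why the bounds test must
come first (not kernel code):

	#include <stdio.h>

	#define SEGCOUNT 4

	int main(void)
	{
		int segments[SEGCOUNT] = {0};
		unsigned int seg_no = 100;	/* bogus on purpose */

		/* Old pattern: "seg = &segments[seg_no]; if (!seg)" -
		 * address-of never yields NULL, so the check is dead code
		 * (and forming the address is already out of bounds).
		 */

		/* Fixed pattern: validate the index before touching memory */
		if (seg_no >= SEGCOUNT) {
			puts("out-of-range segment caught");
			return 1;
		}
		printf("%d\n", segments[seg_no]);
		return 0;
	}
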
+diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c
+index a2f9c96407161..91d9c815a4067 100644
+--- a/net/tipc/bearer.c
++++ b/net/tipc/bearer.c
+@@ -259,9 +259,8 @@ static int tipc_enable_bearer(struct net *net, const char *name,
+ u32 i;
+
+ if (!bearer_name_validate(name, &b_names)) {
+- errstr = "illegal name";
+ NL_SET_ERR_MSG(extack, "Illegal name");
+- goto rejected;
++ return res;
+ }
+
+ if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) {
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 1e7ed5829ed51..99c56922abf51 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -490,7 +490,7 @@ static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other)
+ * -ECONNREFUSED. Otherwise, if we haven't queued any skbs
+ * to other and its full, we will hang waiting for POLLOUT.
+ */
+- if (unix_recvq_full(other) && !sock_flag(other, SOCK_DEAD))
++ if (unix_recvq_full_lockless(other) && !sock_flag(other, SOCK_DEAD))
+ return 1;
+
+ if (connected)
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 39a82bfb5caa3..d6bcdbfd0fc58 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -343,9 +343,9 @@ out:
+ }
+ EXPORT_SYMBOL(xsk_tx_peek_desc);
+
+-static u32 xsk_tx_peek_release_fallback(struct xsk_buff_pool *pool, struct xdp_desc *descs,
+- u32 max_entries)
++static u32 xsk_tx_peek_release_fallback(struct xsk_buff_pool *pool, u32 max_entries)
+ {
++ struct xdp_desc *descs = pool->tx_descs;
+ u32 nb_pkts = 0;
+
+ while (nb_pkts < max_entries && xsk_tx_peek_desc(pool, &descs[nb_pkts]))
+@@ -355,8 +355,7 @@ static u32 xsk_tx_peek_release_fallback(struct xsk_buff_pool *pool, struct xdp_d
+ return nb_pkts;
+ }
+
+-u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *descs,
+- u32 max_entries)
++u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, u32 max_entries)
+ {
+ struct xdp_sock *xs;
+ u32 nb_pkts;
+@@ -365,7 +364,7 @@ u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *
+ if (!list_is_singular(&pool->xsk_tx_list)) {
+ /* Fallback to the non-batched version */
+ rcu_read_unlock();
+- return xsk_tx_peek_release_fallback(pool, descs, max_entries);
++ return xsk_tx_peek_release_fallback(pool, max_entries);
+ }
+
+ xs = list_first_or_null_rcu(&pool->xsk_tx_list, struct xdp_sock, tx_list);
+@@ -374,7 +373,8 @@ u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *
+ goto out;
+ }
+
+- nb_pkts = xskq_cons_peek_desc_batch(xs->tx, descs, pool, max_entries);
++ max_entries = xskq_cons_nb_entries(xs->tx, max_entries);
++ nb_pkts = xskq_cons_read_desc_batch(xs->tx, pool, max_entries);
+ if (!nb_pkts) {
+ xs->tx->queue_empty_descs++;
+ goto out;
+@@ -386,11 +386,11 @@ u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *
+ * packets. This avoids having to implement any buffering in
+ * the Tx path.
+ */
+- nb_pkts = xskq_prod_reserve_addr_batch(pool->cq, descs, nb_pkts);
++ nb_pkts = xskq_prod_reserve_addr_batch(pool->cq, pool->tx_descs, nb_pkts);
+ if (!nb_pkts)
+ goto out;
+
+- xskq_cons_release_n(xs->tx, nb_pkts);
++ xskq_cons_release_n(xs->tx, max_entries);
+ __xskq_cons_release(xs->tx);
+ xs->sk.sk_write_space(&xs->sk);
+
+@@ -968,6 +968,19 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+
+ xp_get_pool(umem_xs->pool);
+ xs->pool = umem_xs->pool;
++
++ /* If underlying shared umem was created without Tx
++ * ring, allocate Tx descs array that Tx batching API
++ * utilizes
++ */
++ if (xs->tx && !xs->pool->tx_descs) {
++ err = xp_alloc_tx_descs(xs->pool, xs);
++ if (err) {
++ xp_put_pool(xs->pool);
++ sockfd_put(sock);
++ goto out_unlock;
++ }
++ }
+ }
+
+ xdp_get_umem(umem_xs->umem);
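
Threading pool->tx_descs through the batching path changes the
driver-facing contract: xsk_tx_peek_release_desc_batch() no longer takes
a descriptor array, it fills the pool-owned one. A hypothetical
zero-copy driver Tx poll under the new signature (my_ring and
my_ring_post_tx are invented for illustration):

	static int drv_xsk_xmit(struct my_ring *ring,
				struct xsk_buff_pool *pool, u32 budget)
	{
		u32 i, nb_pkts;

		nb_pkts = xsk_tx_peek_release_desc_batch(pool, budget);
		for (i = 0; i < nb_pkts; i++)
			my_ring_post_tx(ring, &pool->tx_descs[i]);

		return nb_pkts;
	}

The xsk_bind() hunk covers the shared-umem corner of the same change: a
socket created with a Tx ring against a pool that never had one must
allocate tx_descs at bind time.
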
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index 0202a90b65e3a..87bdd71c7bb66 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -37,10 +37,21 @@ void xp_destroy(struct xsk_buff_pool *pool)
+ if (!pool)
+ return;
+
++ kvfree(pool->tx_descs);
+ kvfree(pool->heads);
+ kvfree(pool);
+ }
+
++int xp_alloc_tx_descs(struct xsk_buff_pool *pool, struct xdp_sock *xs)
++{
++ pool->tx_descs = kvcalloc(xs->tx->nentries, sizeof(*pool->tx_descs),
++ GFP_KERNEL);
++ if (!pool->tx_descs)
++ return -ENOMEM;
++
++ return 0;
++}
++
+ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ struct xdp_umem *umem)
+ {
+@@ -58,6 +69,10 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ if (!pool->heads)
+ goto out;
+
++ if (xs->tx)
++ if (xp_alloc_tx_descs(pool, xs))
++ goto out;
++
+ pool->chunk_mask = ~((u64)umem->chunk_size - 1);
+ pool->addrs_cnt = umem->size;
+ pool->heads_cnt = umem->chunks;
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index e9aa2c2363561..4d092e7a33d10 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -205,11 +205,11 @@ static inline bool xskq_cons_read_desc(struct xsk_queue *q,
+ return false;
+ }
+
+-static inline u32 xskq_cons_read_desc_batch(struct xsk_queue *q,
+- struct xdp_desc *descs,
+- struct xsk_buff_pool *pool, u32 max)
++static inline u32 xskq_cons_read_desc_batch(struct xsk_queue *q, struct xsk_buff_pool *pool,
++ u32 max)
+ {
+ u32 cached_cons = q->cached_cons, nb_entries = 0;
++ struct xdp_desc *descs = pool->tx_descs;
+
+ while (cached_cons != q->cached_prod && nb_entries < max) {
+ struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
+@@ -282,14 +282,6 @@ static inline bool xskq_cons_peek_desc(struct xsk_queue *q,
+ return xskq_cons_read_desc(q, desc, pool);
+ }
+
+-static inline u32 xskq_cons_peek_desc_batch(struct xsk_queue *q, struct xdp_desc *descs,
+- struct xsk_buff_pool *pool, u32 max)
+-{
+- u32 entries = xskq_cons_nb_entries(q, max);
+-
+- return xskq_cons_read_desc_batch(q, descs, pool, entries);
+-}
+-
+ /* To improve performance in the xskq_cons_release functions, only update local state here.
+ * Reflect this to global state when we get new entries from the ring in
+ * xskq_cons_get_entries() and whenever Rx or Tx processing are completed in the NAPI loop.
+diff --git a/scripts/gdb/linux/config.py b/scripts/gdb/linux/config.py
+index 90e1565b19671..8843ab3cbaddc 100644
+--- a/scripts/gdb/linux/config.py
++++ b/scripts/gdb/linux/config.py
+@@ -24,9 +24,9 @@ class LxConfigDump(gdb.Command):
+ filename = arg
+
+ try:
+- py_config_ptr = gdb.parse_and_eval("kernel_config_data + 8")
+- py_config_size = gdb.parse_and_eval(
+- "sizeof(kernel_config_data) - 1 - 8 * 2")
++ py_config_ptr = gdb.parse_and_eval("&kernel_config_data")
++ py_config_ptr_end = gdb.parse_and_eval("&kernel_config_data_end")
++ py_config_size = py_config_ptr_end - py_config_ptr
+ except gdb.error as e:
+ raise gdb.GdbError("Can't find config, enable CONFIG_IKCONFIG?")
+
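
Deriving the blob size from the kernel_config_data/kernel_config_data_end
symbol pair tracks the actual IKCONFIG layout; the old pointer
arithmetic baked in the 8-byte magic framing ("+ 8", "- 1 - 8 * 2") and
had gone stale against how the config blob is emitted now.
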
+diff --git a/scripts/get_abi.pl b/scripts/get_abi.pl
+index 6212f58b69c61..0cf5012852043 100755
+--- a/scripts/get_abi.pl
++++ b/scripts/get_abi.pl
+@@ -980,11 +980,11 @@ __END__
+
+ =head1 NAME
+
+-abi_book.pl - parse the Linux ABI files and produce a ReST book.
++get_abi.pl - parse the Linux ABI files and produce a ReST book.
+
+ =head1 SYNOPSIS
+
+-B<abi_book.pl> [--debug <level>] [--enable-lineno] [--man] [--help]
++B<get_abi.pl> [--debug <level>] [--enable-lineno] [--man] [--help]
+ [--(no-)rst-source] [--dir=<dir>] [--show-hints]
+ [--search-string <regex>]
+ <COMAND> [<ARGUMENT>]
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index e04ae56931e2e..eae6ae9d3c3ba 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -1260,7 +1260,8 @@ static int secref_whitelist(const struct sectioncheck *mismatch,
+
+ static inline int is_arm_mapping_symbol(const char *str)
+ {
+- return str[0] == '$' && strchr("axtd", str[1])
++ return str[0] == '$' &&
++ (str[1] == 'a' || str[1] == 'd' || str[1] == 't' || str[1] == 'x')
+ && (str[2] == '\0' || str[2] == '.');
+ }
+
+@@ -1986,7 +1987,7 @@ static char *remove_dot(char *s)
+
+ if (n && s[n]) {
+ size_t m = strspn(s + n + 1, "0123456789");
+- if (m && (s[n + m] == '.' || s[n + m] == 0))
++ if (m && (s[n + m + 1] == '.' || s[n + m + 1] == 0))
+ s[n] = 0;
+
+ /* strip trailing .lto */
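
The remove_dot() change is a one-character off-by-one that is easy to
verify by hand. For "foo.123": n = strcspn(s, ".") = 3 and
m = strspn(s + n + 1, "0123456789") = 3, so the suffix's terminator sits
at s[n + m + 1]; s[n + m] is the last digit, which is why the old test
never stripped a plain numeric suffix. A standalone check:

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *s = "foo.123";
		size_t n = strcspn(s, ".");
		size_t m = strspn(s + n + 1, "0123456789");

		printf("old test looked at s[n+m]   = '%c'\n", s[n + m]);
		printf("new test looks at  s[n+m+1] = %d (0 ends the string)\n",
		       s[n + m + 1]);
		return 0;
	}
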
+diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
+index 0165da386289c..2b2c8eb258d5b 100644
+--- a/security/keys/trusted-keys/trusted_tpm2.c
++++ b/security/keys/trusted-keys/trusted_tpm2.c
+@@ -283,8 +283,8 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ /* key properties */
+ flags = 0;
+ flags |= options->policydigest_len ? 0 : TPM2_OA_USER_WITH_AUTH;
+- flags |= payload->migratable ? (TPM2_OA_FIXED_TPM |
+- TPM2_OA_FIXED_PARENT) : 0;
++ flags |= payload->migratable ? 0 : (TPM2_OA_FIXED_TPM |
++ TPM2_OA_FIXED_PARENT);
+ tpm_buf_append_u32(&buf, flags);
+
+ /* policy */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 0515137a75b0f..bce2cef80000b 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1052,6 +1052,13 @@ static int patch_conexant_auto(struct hda_codec *codec)
+ snd_hda_pick_fixup(codec, cxt5051_fixup_models,
+ cxt5051_fixups, cxt_fixups);
+ break;
++ case 0x14f15098:
++ codec->pin_amp_workaround = 1;
++ spec->gen.mixer_nid = 0x22;
++ spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO;
++ snd_hda_pick_fixup(codec, cxt5066_fixup_models,
++ cxt5066_fixups, cxt_fixups);
++ break;
+ case 0x14f150f2:
+ codec->power_save_node = 1;
+ fallthrough;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 112ecc256b148..4a9cddc040450 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9056,6 +9056,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x89c3, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++ SND_PCI_QUIRK(0x103c, 0x8a78, "HP Dev One", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -9255,6 +9256,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
++ SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index e7a82565b905e..f078463346e84 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -2097,12 +2097,14 @@ EXPORT_SYMBOL_GPL(rt5640_sel_asrc_clk_src);
+ void rt5640_enable_micbias1_for_ovcd(struct snd_soc_component *component)
+ {
+ struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component);
++ struct rt5640_priv *rt5640 = snd_soc_component_get_drvdata(component);
+
+ snd_soc_dapm_mutex_lock(dapm);
+ snd_soc_dapm_force_enable_pin_unlocked(dapm, "LDO2");
+ snd_soc_dapm_force_enable_pin_unlocked(dapm, "MICBIAS1");
+ /* OVCD is unreliable when used with RCCLK as sysclk-source */
+- snd_soc_dapm_force_enable_pin_unlocked(dapm, "Platform Clock");
++ if (rt5640->use_platform_clock)
++ snd_soc_dapm_force_enable_pin_unlocked(dapm, "Platform Clock");
+ snd_soc_dapm_sync_unlocked(dapm);
+ snd_soc_dapm_mutex_unlock(dapm);
+ }
+@@ -2111,9 +2113,11 @@ EXPORT_SYMBOL_GPL(rt5640_enable_micbias1_for_ovcd);
+ void rt5640_disable_micbias1_for_ovcd(struct snd_soc_component *component)
+ {
+ struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component);
++ struct rt5640_priv *rt5640 = snd_soc_component_get_drvdata(component);
+
+ snd_soc_dapm_mutex_lock(dapm);
+- snd_soc_dapm_disable_pin_unlocked(dapm, "Platform Clock");
++ if (rt5640->use_platform_clock)
++ snd_soc_dapm_disable_pin_unlocked(dapm, "Platform Clock");
+ snd_soc_dapm_disable_pin_unlocked(dapm, "MICBIAS1");
+ snd_soc_dapm_disable_pin_unlocked(dapm, "LDO2");
+ snd_soc_dapm_sync_unlocked(dapm);
+@@ -2538,6 +2542,9 @@ static void rt5640_enable_jack_detect(struct snd_soc_component *component,
+ rt5640->jd_gpio_irq_requested = true;
+ }
+
++ if (jack_data && jack_data->use_platform_clock)
++ rt5640->use_platform_clock = jack_data->use_platform_clock;
++
+ ret = request_irq(rt5640->irq, rt5640_irq,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ "rt5640", rt5640);
+diff --git a/sound/soc/codecs/rt5640.h b/sound/soc/codecs/rt5640.h
+index 9e49b9a0ccaad..505c93514051a 100644
+--- a/sound/soc/codecs/rt5640.h
++++ b/sound/soc/codecs/rt5640.h
+@@ -2155,11 +2155,13 @@ struct rt5640_priv {
+ bool jd_inverted;
+ unsigned int ovcd_th;
+ unsigned int ovcd_sf;
++ bool use_platform_clock;
+ };
+
+ struct rt5640_set_jack_data {
+ int codec_irq_override;
+ struct gpio_desc *jd_gpio;
++ bool use_platform_clock;
+ };
+
+ int rt5640_dmic_enable(struct snd_soc_component *component,
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 9aaf231bc0244..93da86009c750 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -80,8 +80,8 @@
+ #define FSL_SAI_xCR3(tx, ofs) (tx ? FSL_SAI_TCR3(ofs) : FSL_SAI_RCR3(ofs))
+ #define FSL_SAI_xCR4(tx, ofs) (tx ? FSL_SAI_TCR4(ofs) : FSL_SAI_RCR4(ofs))
+ #define FSL_SAI_xCR5(tx, ofs) (tx ? FSL_SAI_TCR5(ofs) : FSL_SAI_RCR5(ofs))
+-#define FSL_SAI_xDR(tx, ofs) (tx ? FSL_SAI_TDR(ofs) : FSL_SAI_RDR(ofs))
+-#define FSL_SAI_xFR(tx, ofs) (tx ? FSL_SAI_TFR(ofs) : FSL_SAI_RFR(ofs))
++#define FSL_SAI_xDR0(tx) (tx ? FSL_SAI_TDR0 : FSL_SAI_RDR0)
++#define FSL_SAI_xFR0(tx) (tx ? FSL_SAI_TFR0 : FSL_SAI_RFR0)
+ #define FSL_SAI_xMR(tx) (tx ? FSL_SAI_TMR : FSL_SAI_RMR)
+
+ /* SAI Transmit/Receive Control Register */
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index b5ac226c59e1d..754cc4fc706da 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -1191,12 +1191,14 @@ static int byt_rt5640_init(struct snd_soc_pcm_runtime *runtime)
+ {
+ struct snd_soc_card *card = runtime->card;
+ struct byt_rt5640_private *priv = snd_soc_card_get_drvdata(card);
++ struct rt5640_set_jack_data *jack_data = &priv->jack_data;
+ struct snd_soc_component *component = asoc_rtd_to_codec(runtime, 0)->component;
+ const struct snd_soc_dapm_route *custom_map = NULL;
+ int num_routes = 0;
+ int ret;
+
+ card->dapm.idle_bias_off = true;
++ jack_data->use_platform_clock = true;
+
+ /* Start with RC clk for jack-detect (we disable MCLK below) */
+ if (byt_rt5640_quirk & BYT_RT5640_MCLK_EN)
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index b470404a5376c..e692ae04436a5 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -291,6 +291,9 @@ int snd_usb_audioformat_set_sync_ep(struct snd_usb_audio *chip,
+ bool is_playback;
+ int err;
+
++ if (fmt->sync_ep)
++ return 0; /* already set up */
++
+ alts = snd_usb_get_host_interface(chip, fmt->iface, fmt->altsetting);
+ if (!alts)
+ return 0;
+@@ -304,7 +307,7 @@ int snd_usb_audioformat_set_sync_ep(struct snd_usb_audio *chip,
+ * Generic sync EP handling
+ */
+
+- if (altsd->bNumEndpoints < 2)
++ if (fmt->ep_idx > 0 || altsd->bNumEndpoints < 2)
+ return 0;
+
+ is_playback = !(get_endpoint(alts, 0)->bEndpointAddress & USB_DIR_IN);
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 78eb41b621d63..4f56e1784932a 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2658,7 +2658,12 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .nr_rates = 2,
+ .rate_table = (unsigned int[]) {
+ 44100, 48000
+- }
++ },
++ .sync_ep = 0x82,
++ .sync_iface = 0,
++ .sync_altsetting = 1,
++ .sync_ep_idx = 1,
++ .implicit_fb = 1,
+ }
+ },
+ {
+diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
+index 75564a7df15be..68f681ad54c1e 100644
+--- a/tools/perf/arch/x86/util/evlist.c
++++ b/tools/perf/arch/x86/util/evlist.c
+@@ -3,6 +3,7 @@
+ #include "util/pmu.h"
+ #include "util/evlist.h"
+ #include "util/parse-events.h"
++#include "topdown.h"
+
+ #define TOPDOWN_L1_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound}"
+ #define TOPDOWN_L2_EVENTS "{slots,topdown-retiring,topdown-bad-spec,topdown-fe-bound,topdown-be-bound,topdown-heavy-ops,topdown-br-mispredict,topdown-fetch-lat,topdown-mem-bound}"
+@@ -25,12 +26,12 @@ struct evsel *arch_evlist__leader(struct list_head *list)
+
+ first = list_first_entry(list, struct evsel, core.node);
+
+- if (!pmu_have_event("cpu", "slots"))
++ if (!topdown_sys_has_perf_metrics())
+ return first;
+
+ /* If there is a slots event and a topdown event then the slots event comes first. */
+ __evlist__for_each_entry(list, evsel) {
+- if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") && evsel->name) {
++ if (evsel->pmu_name && !strncmp(evsel->pmu_name, "cpu", 3) && evsel->name) {
+ if (strcasestr(evsel->name, "slots")) {
+ slots = evsel;
+ if (slots == first)
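
The strncmp(pmu_name, "cpu", 3) relaxation here is deliberate: on hybrid
parts the core PMUs are named cpu_core and cpu_atom, so the old exact
strcmp against "cpu" skipped the slots/topdown leader reordering exactly
where it is needed.
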
+diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
+index 0c9e56ab07b5b..3501399cef350 100644
+--- a/tools/perf/arch/x86/util/evsel.c
++++ b/tools/perf/arch/x86/util/evsel.c
+@@ -5,6 +5,7 @@
+ #include "util/env.h"
+ #include "util/pmu.h"
+ #include "linux/string.h"
++#include "evsel.h"
+
+ void arch_evsel__set_sample_weight(struct evsel *evsel)
+ {
+@@ -31,10 +32,29 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
+ free(env.cpuid);
+ }
+
++/* Check whether the evsel's PMU supports the perf metrics */
++bool evsel__sys_has_perf_metrics(const struct evsel *evsel)
++{
++ const char *pmu_name = evsel->pmu_name ? evsel->pmu_name : "cpu";
++
++ /*
++ * The PERF_TYPE_RAW type is the core PMU type, e.g., "cpu" PMU
++ * on a non-hybrid machine, "cpu_core" PMU on a hybrid machine.
++ * The slots event is only available for the core PMU, which
++ * supports the perf metrics feature.
++ * Checking both the PERF_TYPE_RAW type and the slots event
++ * should be good enough to detect the perf metrics feature.
++ */
++ if ((evsel->core.attr.type == PERF_TYPE_RAW) &&
++ pmu_have_event(pmu_name, "slots"))
++ return true;
++
++ return false;
++}
++
+ bool arch_evsel__must_be_in_group(const struct evsel *evsel)
+ {
+- if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
+- !pmu_have_event("cpu", "slots"))
++ if (!evsel__sys_has_perf_metrics(evsel))
+ return false;
+
+ return evsel->name &&
+diff --git a/tools/perf/arch/x86/util/evsel.h b/tools/perf/arch/x86/util/evsel.h
+new file mode 100644
+index 0000000000000..19ad1691374dc
+--- /dev/null
++++ b/tools/perf/arch/x86/util/evsel.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _EVSEL_H
++#define _EVSEL_H 1
++
++bool evsel__sys_has_perf_metrics(const struct evsel *evsel);
++
++#endif
+diff --git a/tools/perf/arch/x86/util/topdown.c b/tools/perf/arch/x86/util/topdown.c
+index 2f3d96aa92a58..f81a7cfe4d633 100644
+--- a/tools/perf/arch/x86/util/topdown.c
++++ b/tools/perf/arch/x86/util/topdown.c
+@@ -3,6 +3,32 @@
+ #include "api/fs/fs.h"
+ #include "util/pmu.h"
+ #include "util/topdown.h"
++#include "topdown.h"
++#include "evsel.h"
++
++/* Check whether there is a PMU which supports the perf metrics. */
++bool topdown_sys_has_perf_metrics(void)
++{
++ static bool has_perf_metrics;
++ static bool cached;
++ struct perf_pmu *pmu;
++
++ if (cached)
++ return has_perf_metrics;
++
++ /*
++ * The perf metrics feature is a core PMU feature.
++ * The PERF_TYPE_RAW type is the type of a core PMU.
++ * The slots event is only available when the core PMU
++ * supports the perf metrics feature.
++ */
++ pmu = perf_pmu__find_by_type(PERF_TYPE_RAW);
++ if (pmu && pmu_have_event(pmu->name, "slots"))
++ has_perf_metrics = true;
++
++ cached = true;
++ return has_perf_metrics;
++}
+
+ /*
+ * Check whether we can use a group for top down.
+@@ -30,33 +56,19 @@ void arch_topdown_group_warn(void)
+
+ #define TOPDOWN_SLOTS 0x0400
+
+-static bool is_topdown_slots_event(struct evsel *counter)
+-{
+- if (!counter->pmu_name)
+- return false;
+-
+- if (strcmp(counter->pmu_name, "cpu"))
+- return false;
+-
+- if (counter->core.attr.config == TOPDOWN_SLOTS)
+- return true;
+-
+- return false;
+-}
+-
+ /*
+ * Check whether a topdown group supports sample-read.
+ *
+- * Only Topdown metic supports sample-read. The slots
++ * Only Topdown metric supports sample-read. The slots
+ * event must be the leader of the topdown group.
+ */
+
+ bool arch_topdown_sample_read(struct evsel *leader)
+ {
+- if (!pmu_have_event("cpu", "slots"))
++ if (!evsel__sys_has_perf_metrics(leader))
+ return false;
+
+- if (is_topdown_slots_event(leader))
++ if (leader->core.attr.config == TOPDOWN_SLOTS)
+ return true;
+
+ return false;
+diff --git a/tools/perf/arch/x86/util/topdown.h b/tools/perf/arch/x86/util/topdown.h
+new file mode 100644
+index 0000000000000..46bf9273e572f
+--- /dev/null
++++ b/tools/perf/arch/x86/util/topdown.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _TOPDOWN_H
++#define _TOPDOWN_H 1
++
++bool topdown_sys_has_perf_metrics(void);
++
++#endif
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index d8ec683b06a5e..4e0c385427a4a 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -924,8 +924,8 @@ percent_rmt_hitm_cmp(struct perf_hpp_fmt *fmt __maybe_unused,
+ double per_left;
+ double per_right;
+
+- per_left = PERCENT(left, lcl_hitm);
+- per_right = PERCENT(right, lcl_hitm);
++ per_left = PERCENT(left, rmt_hitm);
++ per_right = PERCENT(right, rmt_hitm);
+
+ return per_left - per_right;
+ }
+diff --git a/tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c b/tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c
+index 36a707e7c7a7a..0c4426592a260 100644
+--- a/tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c
++++ b/tools/testing/selftests/bpf/progs/test_stacktrace_build_id.c
+@@ -39,16 +39,8 @@ struct {
+ __type(value, stack_trace_t);
+ } stack_amap SEC(".maps");
+
+-/* taken from /sys/kernel/debug/tracing/events/random/urandom_read/format */
+-struct random_urandom_args {
+- unsigned long long pad;
+- int got_bits;
+- int pool_left;
+- int input_left;
+-};
+-
+-SEC("tracepoint/random/urandom_read")
+-int oncpu(struct random_urandom_args *args)
++SEC("kprobe/urandom_read_iter")
++int oncpu(struct pt_regs *args)
+ {
+ __u32 max_len = sizeof(struct bpf_stack_build_id)
+ * PERF_MAX_STACK_DEPTH;
+diff --git a/tools/testing/selftests/net/forwarding/tc_police.sh b/tools/testing/selftests/net/forwarding/tc_police.sh
+index 4f9f17cb45d64..0a51eef21b9ef 100755
+--- a/tools/testing/selftests/net/forwarding/tc_police.sh
++++ b/tools/testing/selftests/net/forwarding/tc_police.sh
+@@ -37,6 +37,8 @@ ALL_TESTS="
+ police_tx_mirror_test
+ police_pps_rx_test
+ police_pps_tx_test
++ police_mtu_rx_test
++ police_mtu_tx_test
+ "
+ NUM_NETIFS=6
+ source tc_common.sh
+@@ -346,6 +348,56 @@ police_pps_tx_test()
+ tc filter del dev $rp2 egress protocol ip pref 1 handle 101 flower
+ }
+
++police_mtu_common_test() {
++ RET=0
++
++ local test_name=$1; shift
++ local dev=$1; shift
++ local direction=$1; shift
++
++ tc filter add dev $dev $direction protocol ip pref 1 handle 101 flower \
++ dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \
++ action police mtu 1042 conform-exceed drop/ok
++
++ # to count "conform" packets
++ tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \
++ dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \
++ action drop
++
++ mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \
++ -t udp sp=12345,dp=54321 -p 1001 -c 10 -q
++
++ mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \
++ -t udp sp=12345,dp=54321 -p 1000 -c 3 -q
++
++ tc_check_packets "dev $dev $direction" 101 13
++ check_err $? "wrong packet counter"
++
++ # "exceed" packets
++ local overlimits_t0=$(tc_rule_stats_get ${dev} 1 ${direction} .overlimits)
++ test ${overlimits_t0} = 10
++ check_err $? "wrong overlimits, expected 10 got ${overlimits_t0}"
++
++ # "conform" packets
++ tc_check_packets "dev $h2 ingress" 101 3
++ check_err $? "forwarding error"
++
++ tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower
++ tc filter del dev $dev $direction protocol ip pref 1 handle 101 flower
++
++ log_test "$test_name"
++}
++
++police_mtu_rx_test()
++{
++ police_mtu_common_test "police mtu (rx)" $rp1 ingress
++}
++
++police_mtu_tx_test()
++{
++ police_mtu_common_test "police mtu (tx)" $rp2 egress
++}
++
+ setup_prepare()
+ {
+ h1=${NETIFS[p1]}
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index eb8543b9a5c40..924ecb3f1f737 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -374,6 +374,45 @@ EOF
+ return $lret
+ }
+
++test_local_dnat_portonly()
++{
++ local family=$1
++ local daddr=$2
++ local lret=0
++ local sr_s
++ local sr_r
++
++ip netns exec "$ns0" nft -f /dev/stdin <<EOF
++table $family nat {
++ chain output {
++ type nat hook output priority 0; policy accept;
++ meta l4proto tcp dnat to :2000
++
++ }
++}
++EOF
++ if [ $? -ne 0 ]; then
++ if [ $family = "inet" ];then
++ echo "SKIP: inet port test"
++ test_inet_nat=false
++ return
++ fi
++ echo "SKIP: Could not add $family dnat hook"
++ return
++ fi
++
++ echo SERVER-$family | ip netns exec "$ns1" timeout 5 socat -u STDIN TCP-LISTEN:2000 &
++ sc_s=$!
++
++ result=$(ip netns exec "$ns0" timeout 1 socat TCP:$daddr:2000 STDOUT)
++
++ if [ "$result" = "SERVER-inet" ];then
++ echo "PASS: inet port rewrite without l3 address"
++ else
++ echo "ERROR: inet port rewrite"
++ ret=1
++ fi
++}
+
+ test_masquerade6()
+ {
+@@ -1148,6 +1187,10 @@ fi
+ reset_counters
+ test_local_dnat ip
+ test_local_dnat6 ip6
++
++reset_counters
++test_local_dnat_portonly inet 10.0.1.99
++
+ reset_counters
+ $test_inet_nat && test_local_dnat inet
+ $test_inet_nat && test_local_dnat6 inet
+diff --git a/tools/tracing/rtla/Makefile b/tools/tracing/rtla/Makefile
+index 4b635d4de0185..32ed2e7535c55 100644
+--- a/tools/tracing/rtla/Makefile
++++ b/tools/tracing/rtla/Makefile
+@@ -58,6 +58,41 @@ else
+ DOCSRC = $(SRCTREE)/../../../Documentation/tools/rtla/
+ endif
+
++LIBTRACEEVENT_MIN_VERSION = 1.5
++LIBTRACEFS_MIN_VERSION = 1.3
++
++TEST_LIBTRACEEVENT = $(shell sh -c "$(PKG_CONFIG) --atleast-version $(LIBTRACEEVENT_MIN_VERSION) libtraceevent > /dev/null 2>&1 || echo n")
++ifeq ("$(TEST_LIBTRACEEVENT)", "n")
++.PHONY: warning_traceevent
++warning_traceevent:
++ @echo "********************************************"
++ @echo "** NOTICE: libtraceevent version $(LIBTRACEEVENT_MIN_VERSION) or higher not found"
++ @echo "**"
++ @echo "** Consider installing the latest libtraceevent from your"
++ @echo "** distribution, e.g., 'dnf install libtraceevent' on Fedora,"
++ @echo "** or from source:"
++ @echo "**"
++ @echo "** https://git.kernel.org/pub/scm/libs/libtrace/libtraceevent.git/ "
++ @echo "**"
++ @echo "********************************************"
++endif
++
++TEST_LIBTRACEFS = $(shell sh -c "$(PKG_CONFIG) --atleast-version $(LIBTRACEFS_MIN_VERSION) libtracefs > /dev/null 2>&1 || echo n")
++ifeq ("$(TEST_LIBTRACEFS)", "n")
++.PHONY: warning_tracefs
++warning_tracefs:
++ @echo "********************************************"
++ @echo "** NOTICE: libtracefs version $(LIBTRACEFS_MIN_VERSION) or higher not found"
++ @echo "**"
++ @echo "** Consider installing the latest libtracefs from your"
++ @echo "** distribution, e.g., 'dnf install libtracefs' on Fedora,"
++ @echo "** or from source:"
++ @echo "**"
++ @echo "** https://git.kernel.org/pub/scm/libs/libtrace/libtracefs.git/ "
++ @echo "**"
++ @echo "********************************************"
++endif
++
+ .PHONY: all
+ all: rtla
+
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-06-09 18:31 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-06-09 18:31 UTC (permalink / raw)
To: gentoo-commits
commit: 3c727b15fe293a179ccf32e872d6933cf6f57406
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 9 18:30:52 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 9 18:30:52 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3c727b15
cifs: fix minor compile warning
Bug: https://bugs.gentoo.org/850763
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
1950_cifs-fix-minor-compile-warning.patch | 33 +++++++++++++++++++++++++++++++
2 files changed, 37 insertions(+)
diff --git a/0000_README b/0000_README
index e0e0b2de..675c8bcb 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
+Patch: 1950_cifs-fix-minor-compile-warning.patch
+From: https://git.kernel.org/
+Desc: cifs: fix minor compile warning
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1950_cifs-fix-minor-compile-warning.patch b/1950_cifs-fix-minor-compile-warning.patch
new file mode 100644
index 00000000..65787238
--- /dev/null
+++ b/1950_cifs-fix-minor-compile-warning.patch
@@ -0,0 +1,33 @@
+From 93ed91c020aa4f021600a633f1f87790a5e50b91 Mon Sep 17 00:00:00 2001
+From: Steve French <stfrench@microsoft.com>
+Date: Sun, 22 May 2022 21:25:24 -0500
+Subject: cifs: fix minor compile warning
+
+Add ifdef around nodfs variable from patch:
+ "cifs: don't call cifs_dfs_query_info_nonascii_quirk() if nodfs was set"
+which is unused when CONFIG_DFS_UPCALL is not set.
+
+Signed-off-by: Steve French <stfrench@microsoft.com>
+---
+ fs/cifs/connect.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+(limited to 'fs/cifs/connect.c')
+
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 44dc66f21d832..0b08693d1af8f 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -3433,7 +3433,9 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ struct cifs_tcon *tcon = mnt_ctx->tcon;
+ struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ char *full_path;
++#ifdef CONFIG_CIFS_DFS_UPCALL
+ bool nodfs = cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS;
++#endif
+
+ if (!server->ops->is_path_accessible)
+ return -EOPNOTSUPP;
+--
+cgit 1.2.3-1.el7
+
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-06-09 11:25 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-06-09 11:25 UTC (permalink / raw)
To: gentoo-commits
commit: 090f876a4dfcf3029f7ea8bd9a0afc38def9badf
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 9 11:25:37 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun 9 11:25:37 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=090f876a
Linux patch 5.17.14
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1013_linux-5.17.14.patch | 32376 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 32380 insertions(+)
diff --git a/0000_README b/0000_README
index aa088efb..e0e0b2de 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-5.17.13.patch
From: http://www.kernel.org
Desc: Linux 5.17.13
+Patch: 1013_linux-5.17.14.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.14
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-5.17.14.patch b/1013_linux-5.17.14.patch
new file mode 100644
index 00000000..b3c64ba2
--- /dev/null
+++ b/1013_linux-5.17.14.patch
@@ -0,0 +1,32376 @@
+diff --git a/Documentation/accounting/psi.rst b/Documentation/accounting/psi.rst
+index 860fe651d6453..5e40b3f437f90 100644
+--- a/Documentation/accounting/psi.rst
++++ b/Documentation/accounting/psi.rst
+@@ -37,11 +37,7 @@ Pressure interface
+ Pressure information for each resource is exported through the
+ respective file in /proc/pressure/ -- cpu, memory, and io.
+
+-The format for CPU is as such::
+-
+- some avg10=0.00 avg60=0.00 avg300=0.00 total=0
+-
+-and for memory and IO::
++The format is as such::
+
+ some avg10=0.00 avg60=0.00 avg300=0.00 total=0
+ full avg10=0.00 avg60=0.00 avg300=0.00 total=0
+@@ -58,6 +54,9 @@ situation from a state where some tasks are stalled but the CPU is
+ still doing productive work. As such, time spent in this subset of the
+ stall state is tracked separately and exported in the "full" averages.
+
++CPU full is undefined at the system level, but has been reported
++since 5.13, so it is set to zero for backward compatibility.
++
+ The ratios (in %) are tracked as recent trends over ten, sixty, and
+ three hundred second windows, which gives insight into short term events
+ as well as medium and long term trends. The total absolute stall time
+diff --git a/Documentation/conf.py b/Documentation/conf.py
+index f07f2e9b9f2c8..624ec729f71ac 100644
+--- a/Documentation/conf.py
++++ b/Documentation/conf.py
+@@ -161,7 +161,7 @@ finally:
+ #
+ # This is also used if you do content translation via gettext catalogs.
+ # Usually you set "language" from the command line for these cases.
+-language = None
++language = 'en'
+
+ # There are two options for replacing |today|: either, you set today to some
+ # non-false value, then it is used:
+diff --git a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
+index 0cebaaefda032..419c3b2ac5a6f 100644
+--- a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
++++ b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
+@@ -72,6 +72,7 @@ examples:
+ dc-gpios = <&gpio 43 GPIO_ACTIVE_HIGH>;
+ reset-gpios = <&gpio 80 GPIO_ACTIVE_HIGH>;
+ rotation = <270>;
++ backlight = <&backlight>;
+ };
+ };
+
+diff --git a/Documentation/devicetree/bindings/gpio/gpio-altera.txt b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
+index 146e554b3c676..2a80e272cd666 100644
+--- a/Documentation/devicetree/bindings/gpio/gpio-altera.txt
++++ b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
+@@ -9,8 +9,9 @@ Required properties:
+ - The second cell is reserved and is currently unused.
+ - gpio-controller : Marks the device node as a GPIO controller.
+ - interrupt-controller: Mark the device node as an interrupt controller
+-- #interrupt-cells : Should be 1. The interrupt type is fixed in the hardware.
++- #interrupt-cells : Should be 2. The interrupt type is fixed in the hardware.
+ - The first cell is the GPIO offset number within the GPIO controller.
++ - The second cell is the interrupt trigger type and level flags.
+ - interrupts: Specify the interrupt.
+ - altr,interrupt-type: Specifies the interrupt trigger type the GPIO
+ hardware is synthesized. This field is required if the Altera GPIO controller
+@@ -38,6 +39,6 @@ gpio_altr: gpio@ff200000 {
+ altr,interrupt-type = <IRQ_TYPE_EDGE_RISING>;
+ #gpio-cells = <2>;
+ gpio-controller;
+- #interrupt-cells = <1>;
++ #interrupt-cells = <2>;
+ interrupt-controller;
+ };
+diff --git a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+index 61dd5af80db67..5d2d989de893c 100644
+--- a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+@@ -31,7 +31,7 @@ properties:
+ $ref: "regulator.yaml#"
+
+ properties:
+- regulator-name:
++ regulator-compatible:
+ pattern: "^vbuck[1-4]$"
+
+ additionalProperties: false
+diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
+index b32457c2fc0b0..3361218e278f3 100644
+--- a/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
++++ b/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
+@@ -34,6 +34,7 @@ properties:
+ - qcom,rpm-ipq6018
+ - qcom,rpm-msm8226
+ - qcom,rpm-msm8916
++ - qcom,rpm-msm8936
+ - qcom,rpm-msm8953
+ - qcom,rpm-msm8974
+ - qcom,rpm-msm8976
+diff --git a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
+index 055524fe83273..116f3746c1e6e 100644
+--- a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
++++ b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
+@@ -49,6 +49,7 @@ properties:
+ maxItems: 2
+
+ interconnect-names:
++ minItems: 1
+ items:
+ - const: qspi-config
+ - const: qspi-memory
+diff --git a/Documentation/sound/alsa-configuration.rst b/Documentation/sound/alsa-configuration.rst
+index 34888d4fc4a83..21ab5e6f7062f 100644
+--- a/Documentation/sound/alsa-configuration.rst
++++ b/Documentation/sound/alsa-configuration.rst
+@@ -2246,7 +2246,7 @@ implicit_fb
+ Apply the generic implicit feedback sync mode. When this is set
+ and the playback stream sync mode is ASYNC, the driver tries to
+ tie an adjacent ASYNC capture stream as the implicit feedback
+- source.
++ source. This is equivalent with quirk_flags bit 17.
+ use_vmalloc
+ Use vmalloc() for allocations of the PCM buffers (default: yes).
+ For architectures with non-coherent memory like ARM or MIPS, the
+@@ -2288,6 +2288,8 @@ quirk_flags
+ * bit 14: Ignore errors for mixer access
+ * bit 15: Support generic DSD raw U32_BE format
+ * bit 16: Set up the interface at first like UAC1
++ * bit 17: Apply the generic implicit feedback sync mode
++ * bit 18: Don't apply implicit feedback sync mode
+
+ This module supports multiple devices, autoprobe and hotplugging.
+
+diff --git a/Documentation/userspace-api/landlock.rst b/Documentation/userspace-api/landlock.rst
+index f35552ff19ba8..b68e7a51009f8 100644
+--- a/Documentation/userspace-api/landlock.rst
++++ b/Documentation/userspace-api/landlock.rst
+@@ -267,8 +267,8 @@ restrict such paths with dedicated ruleset flags.
+ Ruleset layers
+ --------------
+
+-There is a limit of 64 layers of stacked rulesets. This can be an issue for a
+-task willing to enforce a new ruleset in complement to its 64 inherited
++There is a limit of 16 layers of stacked rulesets. This can be an issue for a
++task willing to enforce a new ruleset in complement to its 16 inherited
+ rulesets. Once this limit is reached, sys_landlock_restrict_self() returns
+ E2BIG. It is then strongly suggested to carefully build rulesets once in the
+ life of a thread, especially for applications able to launch other applications
+diff --git a/Makefile b/Makefile
+index d38228d336bf6..5450a2c9efa67 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
+index 18f48a6f2ff6d..8f3f5eecba28b 100644
+--- a/arch/alpha/include/asm/page.h
++++ b/arch/alpha/include/asm/page.h
+@@ -18,7 +18,7 @@ extern void clear_page(void *page);
+ #define clear_user_page(page, vaddr, pg) clear_page(page)
+
+ #define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+- alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr)
++ alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+ #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
+
+ extern void copy_page(void * _to, void * _from);
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+index 1b63d6b19750b..25d87212cefd3 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+@@ -53,18 +53,17 @@
+ "GPIO18",
+ "NC", /* GPIO19 */
+ "NC", /* GPIO20 */
+- "GPIO21",
++ "CAM_GPIO0",
+ "GPIO22",
+ "GPIO23",
+ "GPIO24",
+ "GPIO25",
+ "NC", /* GPIO26 */
+- "CAM_GPIO0",
+- /* Binary number representing build/revision */
+- "CONFIG0",
+- "CONFIG1",
+- "CONFIG2",
+- "CONFIG3",
++ "GPIO27",
++ "GPIO28",
++ "GPIO29",
++ "GPIO30",
++ "GPIO31",
+ "NC", /* GPIO32 */
+ "NC", /* GPIO33 */
+ "NC", /* GPIO34 */
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+index 243236bc1e00b..8b043ab62dc83 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+@@ -74,16 +74,18 @@
+ "GPIO27",
+ "SDA0",
+ "SCL0",
+- "NC", /* GPIO30 */
+- "NC", /* GPIO31 */
+- "NC", /* GPIO32 */
+- "NC", /* GPIO33 */
+- "NC", /* GPIO34 */
+- "NC", /* GPIO35 */
+- "NC", /* GPIO36 */
+- "NC", /* GPIO37 */
+- "NC", /* GPIO38 */
+- "NC", /* GPIO39 */
++ /* Used by BT module */
++ "CTS0",
++ "RTS0",
++ "TXD0",
++ "RXD0",
++ /* Used by Wifi */
++ "SD1_CLK",
++ "SD1_CMD",
++ "SD1_DATA0",
++ "SD1_DATA1",
++ "SD1_DATA2",
++ "SD1_DATA3",
+ "CAM_GPIO1", /* GPIO40 */
+ "WL_ON", /* GPIO41 */
+ "NC", /* GPIO42 */
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+index e12938baaf12c..c263f5b48b96b 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+@@ -45,7 +45,7 @@
+ #gpio-cells = <2>;
+ gpio-line-names = "BT_ON",
+ "WL_ON",
+- "STATUS_LED_R",
++ "PWR_LED_R",
+ "LAN_RUN",
+ "",
+ "CAM_GPIO0",
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
+index 588d9411ceb61..3dfce4312dfc4 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
+@@ -63,8 +63,8 @@
+ "GPIO43",
+ "GPIO44",
+ "GPIO45",
+- "GPIO46",
+- "GPIO47",
++ "SMPS_SCL",
++ "SMPS_SDA",
+ /* Used by eMMC */
+ "SD_CLK_R",
+ "SD_CMD_R",
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 603c700c706f2..65f8a759f1e31 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -455,7 +455,7 @@
+ reg = <0x180 0x4>;
+ };
+
+- pinctrl: pin-controller@1c0 {
++ pinctrl: pinctrl@1c0 {
+ compatible = "brcm,bcm4708-pinmux";
+ reg = <0x1c0 0x24>;
+ reg-names = "cru_gpio_control";
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index f042954bdfa5d..e4861415a0fe5 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -129,7 +129,7 @@
+ samsung,i2c-max-bus-freq = <20000>;
+
+ eeprom@50 {
+- compatible = "samsung,s524ad0xd1";
++ compatible = "samsung,s524ad0xd1", "atmel,24c128";
+ reg = <0x50>;
+ };
+
+@@ -289,7 +289,7 @@
+ samsung,i2c-max-bus-freq = <20000>;
+
+ eeprom@51 {
+- compatible = "samsung,s524ad0xd1";
++ compatible = "samsung,s524ad0xd1", "atmel,24c128";
+ reg = <0x51>;
+ };
+
+diff --git a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
+index b4a9523e325b4..864dc5018451f 100644
+--- a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
++++ b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
+@@ -297,7 +297,11 @@
+ phy-mode = "rmii";
+ phy-reset-gpios = <&gpio1 18 GPIO_ACTIVE_LOW>;
+ phy-handle = <&phy>;
+- clocks = <&clks IMX6QDL_CLK_ENET>, <&clks IMX6QDL_CLK_ENET>, <&rmii_clk>;
++ clocks = <&clks IMX6QDL_CLK_ENET>,
++ <&clks IMX6QDL_CLK_ENET>,
++ <&rmii_clk>,
++ <&clks IMX6QDL_CLK_ENET_REF>;
++ clock-names = "ipg", "ahb", "ptp", "enet_out";
+ status = "okay";
+
+ mdio {
+diff --git a/arch/arm/boot/dts/imx6qdl-colibri.dtsi b/arch/arm/boot/dts/imx6qdl-colibri.dtsi
+index 4e2a309c93fa8..1e86b38147080 100644
+--- a/arch/arm/boot/dts/imx6qdl-colibri.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-colibri.dtsi
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0+ OR MIT
+ /*
+- * Copyright 2014-2020 Toradex
++ * Copyright 2014-2022 Toradex
+ * Copyright 2012 Freescale Semiconductor, Inc.
+ * Copyright 2011 Linaro Ltd.
+ */
+@@ -132,7 +132,7 @@
+ clock-frequency = <100000>;
+ pinctrl-names = "default", "gpio";
+ pinctrl-0 = <&pinctrl_i2c2>;
+- pinctrl-0 = <&pinctrl_i2c2_gpio>;
++ pinctrl-1 = <&pinctrl_i2c2_gpio>;
+ scl-gpios = <&gpio2 30 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ sda-gpios = <&gpio3 16 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ status = "okay";
+@@ -488,7 +488,7 @@
+ >;
+ };
+
+- pinctrl_i2c2_gpio: i2c2grp {
++ pinctrl_i2c2_gpio: i2c2gpiogrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_EB2__GPIO2_IO30 0x4001b8b1
+ MX6QDL_PAD_EIM_D16__GPIO3_IO16 0x4001b8b1
+diff --git a/arch/arm/boot/dts/ox820.dtsi b/arch/arm/boot/dts/ox820.dtsi
+index 90846a7655b49..dde4364892bf0 100644
+--- a/arch/arm/boot/dts/ox820.dtsi
++++ b/arch/arm/boot/dts/ox820.dtsi
+@@ -287,7 +287,7 @@
+ clocks = <&armclk>;
+ };
+
+- gic: gic@1000 {
++ gic: interrupt-controller@1000 {
+ compatible = "arm,arm11mp-gic";
+ interrupt-controller;
+ #interrupt-cells = <3>;
+diff --git a/arch/arm/boot/dts/qcom-sdx65.dtsi b/arch/arm/boot/dts/qcom-sdx65.dtsi
+index 796641d30e06c..0c3f93603adc8 100644
+--- a/arch/arm/boot/dts/qcom-sdx65.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx65.dtsi
+@@ -202,7 +202,7 @@
+ <WAKE_TCS 2>,
+ <CONTROL_TCS 1>;
+
+- rpmhcc: clock-controller@1 {
++ rpmhcc: clock-controller {
+ compatible = "qcom,sdx65-rpmh-clk";
+ #clock-cells = <1>;
+ clock-names = "xo";
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 2f57100a011a3..b6d55a782c208 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -564,7 +564,6 @@
+ reset-gpios = <&mp05 5 GPIO_ACTIVE_LOW>;
+ vdd3-supply = <&ldo7_reg>;
+ vci-supply = <&ldo17_reg>;
+- spi-cs-high;
+ spi-max-frequency = <1200000>;
+
+ pinctrl-names = "default";
+@@ -636,7 +635,7 @@
+ };
+
+ &i2s0 {
+- dmas = <&pdma0 9>, <&pdma0 10>, <&pdma0 11>;
++ dmas = <&pdma0 10>, <&pdma0 9>, <&pdma0 11>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index 353ba7b09a0c0..c5265f3ae31d6 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -239,8 +239,8 @@
+ reg = <0xeee30000 0x1000>;
+ interrupt-parent = <&vic2>;
+ interrupts = <16>;
+- dma-names = "rx", "tx", "tx-sec";
+- dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>;
++ dma-names = "tx", "rx", "tx-sec";
++ dmas = <&pdma1 10>, <&pdma1 9>, <&pdma1 11>;
+ clock-names = "iis",
+ "i2s_opclk0",
+ "i2s_opclk1";
+@@ -259,8 +259,8 @@
+ reg = <0xe2100000 0x1000>;
+ interrupt-parent = <&vic2>;
+ interrupts = <17>;
+- dma-names = "rx", "tx";
+- dmas = <&pdma1 12>, <&pdma1 13>;
++ dma-names = "tx", "rx";
++ dmas = <&pdma1 13>, <&pdma1 12>;
+ clock-names = "iis", "i2s_opclk0";
+ clocks = <&clocks CLK_I2S1>, <&clocks SCLK_AUDIO1>;
+ pinctrl-names = "default";
+@@ -274,8 +274,8 @@
+ reg = <0xe2a00000 0x1000>;
+ interrupt-parent = <&vic2>;
+ interrupts = <18>;
+- dma-names = "rx", "tx";
+- dmas = <&pdma1 14>, <&pdma1 15>;
++ dma-names = "tx", "rx";
++ dmas = <&pdma1 15>, <&pdma1 14>;
+ clock-names = "iis", "i2s_opclk0";
+ clocks = <&clocks CLK_I2S2>, <&clocks SCLK_AUDIO2>;
+ pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/sama7g5.dtsi b/arch/arm/boot/dts/sama7g5.dtsi
+index 22520cdd37fc5..46c96a3d79924 100644
+--- a/arch/arm/boot/dts/sama7g5.dtsi
++++ b/arch/arm/boot/dts/sama7g5.dtsi
+@@ -626,7 +626,6 @@
+ #interrupt-cells = <3>;
+ #address-cells = <0>;
+ interrupt-controller;
+- interrupt-parent;
+ reg = <0xe8c11000 0x1000>,
+ <0xe8c12000 0x2000>;
+ };
+diff --git a/arch/arm/boot/dts/socfpga.dtsi b/arch/arm/boot/dts/socfpga.dtsi
+index 7c1d6423d7f8c..b8c5dd7860cb2 100644
+--- a/arch/arm/boot/dts/socfpga.dtsi
++++ b/arch/arm/boot/dts/socfpga.dtsi
+@@ -46,7 +46,7 @@
+ <0xff113000 0x1000>;
+ };
+
+- intc: intc@fffed000 {
++ intc: interrupt-controller@fffed000 {
+ compatible = "arm,cortex-a9-gic";
+ #interrupt-cells = <3>;
+ interrupt-controller;
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 3ba431dfa8c94..f1e50d2e623a3 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -38,7 +38,7 @@
+ <0xff113000 0x1000>;
+ };
+
+- intc: intc@ffffd000 {
++ intc: interrupt-controller@ffffd000 {
+ compatible = "arm,cortex-a9-gic";
+ #interrupt-cells = <3>;
+ interrupt-controller;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 6885948f3024e..8eb51d84b6988 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -141,6 +141,7 @@
+ compatible = "snps,dwmac-mdio";
+ reset-gpios = <&gpioz 2 GPIO_ACTIVE_LOW>;
+ reset-delay-us = <1000>;
++ reset-post-delay-us = <1000>;
+
+ phy0: ethernet-phy@7 {
+ reg = <7>;
+diff --git a/arch/arm/boot/dts/suniv-f1c100s.dtsi b/arch/arm/boot/dts/suniv-f1c100s.dtsi
+index 6100d3b75f613..def8301014487 100644
+--- a/arch/arm/boot/dts/suniv-f1c100s.dtsi
++++ b/arch/arm/boot/dts/suniv-f1c100s.dtsi
+@@ -104,8 +104,10 @@
+
+ wdt: watchdog@1c20ca0 {
+ compatible = "allwinner,suniv-f1c100s-wdt",
+- "allwinner,sun4i-a10-wdt";
++ "allwinner,sun6i-a31-wdt";
+ reg = <0x01c20ca0 0x20>;
++ interrupts = <16>;
++ clocks = <&osc32k>;
+ };
+
+ uart0: serial@1c25000 {
+diff --git a/arch/arm/include/asm/arch_gicv3.h b/arch/arm/include/asm/arch_gicv3.h
+index 413abfb42989e..f82a819eb0dbb 100644
+--- a/arch/arm/include/asm/arch_gicv3.h
++++ b/arch/arm/include/asm/arch_gicv3.h
+@@ -48,6 +48,7 @@ static inline u32 read_ ## a64(void) \
+ return read_sysreg(a32); \
+ } \
+
++CPUIF_MAP(ICC_EOIR1, ICC_EOIR1_EL1)
+ CPUIF_MAP(ICC_PMR, ICC_PMR_EL1)
+ CPUIF_MAP(ICC_AP0R0, ICC_AP0R0_EL1)
+ CPUIF_MAP(ICC_AP0R1, ICC_AP0R1_EL1)
+@@ -63,12 +64,6 @@ CPUIF_MAP(ICC_AP1R3, ICC_AP1R3_EL1)
+
+ /* Low-level accessors */
+
+-static inline void gic_write_eoir(u32 irq)
+-{
+- write_sysreg(irq, ICC_EOIR1);
+- isb();
+-}
+-
+ static inline void gic_write_dir(u32 val)
+ {
+ write_sysreg(val, ICC_DIR);
+diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
+index c532a60410667..8111147f7c425 100644
+--- a/arch/arm/kernel/signal.c
++++ b/arch/arm/kernel/signal.c
+@@ -708,6 +708,7 @@ static_assert(offsetof(siginfo_t, si_upper) == 0x18);
+ static_assert(offsetof(siginfo_t, si_pkey) == 0x14);
+ static_assert(offsetof(siginfo_t, si_perf_data) == 0x10);
+ static_assert(offsetof(siginfo_t, si_perf_type) == 0x14);
++static_assert(offsetof(siginfo_t, si_perf_flags) == 0x18);
+ static_assert(offsetof(siginfo_t, si_band) == 0x0c);
+ static_assert(offsetof(siginfo_t, si_fd) == 0x10);
+ static_assert(offsetof(siginfo_t, si_call_addr) == 0x0c);
+diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c
+index a56cc64deeb8f..9ce93e0b6cdc3 100644
+--- a/arch/arm/mach-hisi/platsmp.c
++++ b/arch/arm/mach-hisi/platsmp.c
+@@ -67,14 +67,17 @@ static void __init hi3xxx_smp_prepare_cpus(unsigned int max_cpus)
+ }
+ ctrl_base = of_iomap(np, 0);
+ if (!ctrl_base) {
++ of_node_put(np);
+ pr_err("failed to map address\n");
+ return;
+ }
+ if (of_property_read_u32(np, "smp-offset", &offset) < 0) {
++ of_node_put(np);
+ pr_err("failed to find smp-offset property\n");
+ return;
+ }
+ ctrl_base += offset;
++ of_node_put(np);
+ }
+ }
+
+@@ -160,6 +163,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ if (WARN_ON(!node))
+ return -1;
+ ctrl_base = of_iomap(node, 0);
++ of_node_put(node);
+
+ /* set the secondary core boot from DDR */
+ remap_reg_value = readl_relaxed(ctrl_base + REG_SC_CTRL);
+diff --git a/arch/arm/mach-mediatek/Kconfig b/arch/arm/mach-mediatek/Kconfig
+index 9e0f592d87d8e..35a3430c7942d 100644
+--- a/arch/arm/mach-mediatek/Kconfig
++++ b/arch/arm/mach-mediatek/Kconfig
+@@ -30,6 +30,7 @@ config MACH_MT7623
+ config MACH_MT7629
+ bool "MediaTek MT7629 SoCs support"
+ default ARCH_MEDIATEK
++ select HAVE_ARM_ARCH_TIMER
+
+ config MACH_MT8127
+ bool "MediaTek MT8127 SoCs support"
+diff --git a/arch/arm/mach-omap1/clock.c b/arch/arm/mach-omap1/clock.c
+index 9d4a0ab50a468..d63d5eb8d8fdf 100644
+--- a/arch/arm/mach-omap1/clock.c
++++ b/arch/arm/mach-omap1/clock.c
+@@ -41,7 +41,7 @@ static DEFINE_SPINLOCK(clockfw_lock);
+ unsigned long omap1_uart_recalc(struct clk *clk)
+ {
+ unsigned int val = __raw_readl(clk->enable_reg);
+- return val & clk->enable_bit ? 48000000 : 12000000;
++ return val & 1 << clk->enable_bit ? 48000000 : 12000000;
+ }
+
+ unsigned long omap1_sossi_recalc(struct clk *clk)
+diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c
+index 2e35354b61f56..167e871f059ef 100644
+--- a/arch/arm/mach-pxa/cm-x300.c
++++ b/arch/arm/mach-pxa/cm-x300.c
+@@ -354,13 +354,13 @@ static struct platform_device cm_x300_spi_gpio = {
+ static struct gpiod_lookup_table cm_x300_spi_gpiod_table = {
+ .dev_id = "spi_gpio",
+ .table = {
+- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_SCL,
++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_SCL - GPIO_LCD_BASE,
+ "sck", GPIO_ACTIVE_HIGH),
+- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DIN,
++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_DIN - GPIO_LCD_BASE,
+ "mosi", GPIO_ACTIVE_HIGH),
+- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DOUT,
++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_DOUT - GPIO_LCD_BASE,
+ "miso", GPIO_ACTIVE_HIGH),
+- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_CS,
++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_CS - GPIO_LCD_BASE,
+ "cs", GPIO_ACTIVE_HIGH),
+ { },
+ },
+diff --git a/arch/arm/mach-pxa/magician.c b/arch/arm/mach-pxa/magician.c
+index cd9fa465b9b2a..9aee8e0f2bb1d 100644
+--- a/arch/arm/mach-pxa/magician.c
++++ b/arch/arm/mach-pxa/magician.c
+@@ -681,7 +681,7 @@ static struct platform_device bq24022 = {
+ static struct gpiod_lookup_table bq24022_gpiod_table = {
+ .dev_id = "gpio-regulator",
+ .table = {
+- GPIO_LOOKUP("gpio-pxa", EGPIO_MAGICIAN_BQ24022_ISET2,
++ GPIO_LOOKUP("htc-egpio-0", EGPIO_MAGICIAN_BQ24022_ISET2 - MAGICIAN_EGPIO_BASE,
+ NULL, GPIO_ACTIVE_HIGH),
+ GPIO_LOOKUP("gpio-pxa", GPIO30_MAGICIAN_BQ24022_nCHARGE_EN,
+ "enable", GPIO_ACTIVE_LOW),
+diff --git a/arch/arm/mach-pxa/tosa.c b/arch/arm/mach-pxa/tosa.c
+index 431709725d02b..ded5e343e1984 100644
+--- a/arch/arm/mach-pxa/tosa.c
++++ b/arch/arm/mach-pxa/tosa.c
+@@ -296,9 +296,9 @@ static struct gpiod_lookup_table tosa_mci_gpio_table = {
+ .table = {
+ GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_nSD_DETECT,
+ "cd", GPIO_ACTIVE_LOW),
+- GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_SD_WP,
++ GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_SD_WP - TOSA_SCOOP_GPIO_BASE,
+ "wp", GPIO_ACTIVE_LOW),
+- GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_PWR_ON,
++ GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_PWR_ON - TOSA_SCOOP_GPIO_BASE,
+ "power", GPIO_ACTIVE_HIGH),
+ { },
+ },
+diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
+index a0554d7d04f7c..e1adc098f89ac 100644
+--- a/arch/arm/mach-vexpress/dcscb.c
++++ b/arch/arm/mach-vexpress/dcscb.c
+@@ -144,6 +144,7 @@ static int __init dcscb_init(void)
+ if (!node)
+ return -ENODEV;
+ dcscb_base = of_iomap(node, 0);
++ of_node_put(node);
+ if (!dcscb_base)
+ return -EADDRNOTAVAIL;
+ cfg = readl_relaxed(dcscb_base + DCS_CFG_R);
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index 21697449d762f..14602e0e024ea 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -253,6 +253,7 @@ config ARCH_INTEL_SOCFPGA
+
+ config ARCH_SYNQUACER
+ bool "Socionext SynQuacer SoC Family"
++ select IRQ_FASTEOI_HIERARCHY_HANDLERS
+
+ config ARCH_TEGRA
+ bool "NVIDIA Tegra SoC Family"
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
+index c5eb3604dd5b7..119db6b541b7b 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
+@@ -71,10 +71,6 @@
+
+ &spi0 {
+ flash@0 {
+- spi-max-frequency = <108000000>;
+- spi-rx-bus-width = <4>;
+- spi-tx-bus-width = <4>;
+-
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+@@ -112,7 +108,6 @@
+
+ &usb3 {
+ usb-phy = <&usb3_phy>;
+- status = "disabled";
+ };
+
+ &mdio {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 53d790c335f99..cd010185753ea 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -470,7 +470,7 @@
+ clock-names = "spi", "sf", "axi";
+ #address-cells = <1>;
+ #size-cells = <0>;
+- status = "disable";
++ status = "disabled";
+ };
+
+ audsys: clock-controller@11210000 {
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+index 218a2b32200f8..4f0e51f1a3430 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+@@ -1366,8 +1366,9 @@
+ <&tegra_car TEGRA210_CLK_DFLL_REF>,
+ <&tegra_car TEGRA210_CLK_I2C5>;
+ clock-names = "soc", "ref", "i2c";
+- resets = <&tegra_car TEGRA210_RST_DFLL_DVCO>;
+- reset-names = "dvco";
++ resets = <&tegra_car TEGRA210_RST_DFLL_DVCO>,
++ <&tegra_car 155>;
++ reset-names = "dvco", "dfll";
+ #clock-cells = <0>;
+ clock-output-names = "dfllCPU_out";
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index e6cc261201efe..d505c4d8830c1 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -13,7 +13,7 @@
+ clocks {
+ sleep_clk: sleep_clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32768>;
+ #clock-cells = <0>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 215f56daa26c2..d76e93cff478d 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -183,8 +183,8 @@
+ no-map;
+ };
+
+- cont_splash_mem: memory@3800000 {
+- reg = <0 0x03800000 0 0x2400000>;
++ cont_splash_mem: memory@3401000 {
++ reg = <0 0x03401000 0 0x2200000>;
+ no-map;
+ };
+
+@@ -498,7 +498,7 @@
+ #dma-cells = <1>;
+ qcom,ee = <0>;
+ qcom,controlled-remotely;
+- num-channels = <18>;
++ num-channels = <24>;
+ qcom,num-ees = <4>;
+ };
+
+@@ -634,7 +634,7 @@
+ #dma-cells = <1>;
+ qcom,ee = <0>;
+ qcom,controlled-remotely;
+- num-channels = <18>;
++ num-channels = <24>;
+ qcom,num-ees = <4>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+index 845eb7a6bf92e..0e63f707b9115 100644
+--- a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
++++ b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+@@ -29,7 +29,7 @@
+ };
+
+ /* Fixed crystal oscillator dedicated to MCP2518FD */
+- clk40M: can_clock {
++ clk40M: can-clock {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <40000000>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+index d623d71d8bd47..dd6dac0e17842 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+@@ -462,10 +462,13 @@
+
+ &qup_uart7_cts {
+ /*
+- * Configure a pull-down on CTS to match the pull of
+- * the Bluetooth module.
++ * Configure a bias-bus-hold on CTS to lower power
++ * usage when Bluetooth is turned off. Bus hold will
++ * maintain a low power state regardless of whether
++ * the Bluetooth module drives the pin in either
++ * direction or leaves the pin fully unpowered.
+ */
+- bias-pull-down;
++ bias-bus-hold;
+ };
+
+ &qup_uart7_rts {
+@@ -516,10 +519,13 @@
+ pins = "gpio28";
+ function = "gpio";
+ /*
+- * Configure a pull-down on CTS to match the pull of
+- * the Bluetooth module.
++ * Configure a bias-bus-hold on CTS to lower power
++ * usage when Bluetooth is turned off. Bus hold will
++ * maintain a low power state regardless of whether
++ * the Bluetooth module drives the pin in either
++ * direction or leaves the pin fully unpowered.
+ */
+- bias-pull-down;
++ bias-bus-hold;
+ };
+
+ qup_uart7_sleep_rts: qup-uart7-sleep-rts {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts
+index 367389526b418..a97f5e89e1d0f 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts
+@@ -218,7 +218,7 @@
+ panel@0 {
+ compatible = "tianma,fhd-video";
+ reg = <0>;
+- vddi0-supply = <&vreg_l14a_1p8>;
++ vddio-supply = <&vreg_l14a_1p8>;
+ vddpos-supply = <&lab>;
+ vddneg-supply = <&ibb>;
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 080457a68e3c7..88f26d89eea1a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1534,6 +1534,7 @@
+ reg = <0xf780 0x24>;
+ clocks = <&sdhci>;
+ clock-names = "emmcclk";
++ drive-impedance-ohm = <50>;
+ #phy-cells = <0>;
+ status = "disabled";
+ };
+@@ -1544,7 +1545,6 @@
+ clock-names = "refclk";
+ #phy-cells = <1>;
+ resets = <&cru SRST_PCIEPHY>;
+- drive-impedance-ohm = <50>;
+ reset-names = "phy";
+ status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi b/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
+index 2bb5c9ff172c9..02d4285acbb8d 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
+@@ -10,7 +10,6 @@
+ compatible = "ti,am64-uart", "ti,am654-uart";
+ reg = <0x00 0x04a00000 0x00 0x100>;
+ interrupts = <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>;
+- clock-frequency = <48000000>;
+ current-speed = <115200>;
+ power-domains = <&k3_pds 149 TI_SCI_PD_EXCLUSIVE>;
+ clocks = <&k3_clks 149 0>;
+@@ -21,7 +20,6 @@
+ compatible = "ti,am64-uart", "ti,am654-uart";
+ reg = <0x00 0x04a10000 0x00 0x100>;
+ interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
+- clock-frequency = <48000000>;
+ current-speed = <115200>;
+ power-domains = <&k3_pds 160 TI_SCI_PD_EXCLUSIVE>;
+ clocks = <&k3_clks 160 0>;
+diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
+index 4ad22c3135dbb..5a0f792492af0 100644
+--- a/arch/arm64/include/asm/arch_gicv3.h
++++ b/arch/arm64/include/asm/arch_gicv3.h
+@@ -26,12 +26,6 @@
+ * sets the GP register's most significant bits to 0 with an explicit cast.
+ */
+
+-static inline void gic_write_eoir(u32 irq)
+-{
+- write_sysreg_s(irq, SYS_ICC_EOIR1_EL1);
+- isb();
+-}
+-
+ static __always_inline void gic_write_dir(u32 irq)
+ {
+ write_sysreg_s(irq, SYS_ICC_DIR_EL1);
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index 6f41b65f99628..25c0bb5b8faaa 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -380,12 +380,10 @@ long get_tagged_addr_ctrl(struct task_struct *task);
+ * of header definitions for the use of task_stack_page.
+ */
+
+-#define current_top_of_stack() \
+-({ \
+- struct stack_info _info; \
+- BUG_ON(!on_accessible_stack(current, current_stack_pointer, 1, &_info)); \
+- _info.high; \
+-})
++/*
++ * The top of the current task's task stack
++ */
++#define current_top_of_stack() ((unsigned long)current->stack + THREAD_SIZE)
+ #define on_thread_stack() (on_task_stack(current, current_stack_pointer, 1, NULL))
+
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 3d66fba69016f..305a1f387cbf3 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -1012,6 +1012,7 @@ static_assert(offsetof(siginfo_t, si_upper) == 0x28);
+ static_assert(offsetof(siginfo_t, si_pkey) == 0x20);
+ static_assert(offsetof(siginfo_t, si_perf_data) == 0x18);
+ static_assert(offsetof(siginfo_t, si_perf_type) == 0x20);
++static_assert(offsetof(siginfo_t, si_perf_flags) == 0x24);
+ static_assert(offsetof(siginfo_t, si_band) == 0x10);
+ static_assert(offsetof(siginfo_t, si_fd) == 0x18);
+ static_assert(offsetof(siginfo_t, si_call_addr) == 0x10);
+diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
+index d984282b979f8..4700f8522d27b 100644
+--- a/arch/arm64/kernel/signal32.c
++++ b/arch/arm64/kernel/signal32.c
+@@ -487,6 +487,7 @@ static_assert(offsetof(compat_siginfo_t, si_upper) == 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_pkey) == 0x14);
+ static_assert(offsetof(compat_siginfo_t, si_perf_data) == 0x10);
+ static_assert(offsetof(compat_siginfo_t, si_perf_type) == 0x14);
++static_assert(offsetof(compat_siginfo_t, si_perf_flags) == 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_band) == 0x0c);
+ static_assert(offsetof(compat_siginfo_t, si_fd) == 0x10);
+ static_assert(offsetof(compat_siginfo_t, si_call_addr) == 0x0c);
+diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
+index db5159a3055fc..b88a52f7188fc 100644
+--- a/arch/arm64/kernel/sys_compat.c
++++ b/arch/arm64/kernel/sys_compat.c
+@@ -114,6 +114,6 @@ long compat_arm_syscall(struct pt_regs *regs, int scno)
+ addr = instruction_pointer(regs) - (compat_thumb_mode(regs) ? 2 : 4);
+
+ arm64_notify_die("Oops - bad compat syscall(2)", regs,
+- SIGILL, ILL_ILLTRP, addr, scno);
++ SIGILL, ILL_ILLTRP, addr, 0);
+ return 0;
+ }
+diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
+index b5447e53cd73e..0dea80bf6de46 100644
+--- a/arch/arm64/mm/copypage.c
++++ b/arch/arm64/mm/copypage.c
+@@ -16,8 +16,8 @@
+
+ void copy_highpage(struct page *to, struct page *from)
+ {
+- struct page *kto = page_address(to);
+- struct page *kfrom = page_address(from);
++ void *kto = page_address(to);
++ void *kfrom = page_address(from);
+
+ copy_page(kto, kfrom);
+
+diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
+index 42920f25e73c8..34ba684d5962b 100644
+--- a/arch/csky/kernel/probes/kprobes.c
++++ b/arch/csky/kernel/probes/kprobes.c
+@@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
+ struct csky_insn_patch *param = priv;
+ unsigned int addr = (unsigned int)param->addr;
+
+- if (atomic_inc_return(&param->cpu_count) == 1) {
++ if (atomic_inc_return(&param->cpu_count) == num_online_cpus()) {
+ *(u16 *) addr = cpu_to_le16(param->opcode);
+ dcache_wb_range(addr, addr + 2);
+ atomic_inc(&param->cpu_count);
+diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
+index 0d00ef5117dce..97bb4ce45e109 100644
+--- a/arch/m68k/Kconfig.cpu
++++ b/arch/m68k/Kconfig.cpu
+@@ -327,7 +327,7 @@ comment "Processor Specific Options"
+
+ config M68KFPU_EMU
+ bool "Math emulation support"
+- depends on MMU
++ depends on M68KCLASSIC && FPU
+ help
+ At some point in the future, this will cause floating-point math
+ instructions to be emulated by the kernel on machines that lack a
+diff --git a/arch/m68k/include/asm/raw_io.h b/arch/m68k/include/asm/raw_io.h
+index 80eb2396d01eb..3ba40bc1dfaa9 100644
+--- a/arch/m68k/include/asm/raw_io.h
++++ b/arch/m68k/include/asm/raw_io.h
+@@ -80,14 +80,14 @@
+ ({ u16 __v = le16_to_cpu(*(__force volatile u16 *) (addr)); __v; })
+
+ #define rom_out_8(addr, b) \
+- ({u8 __maybe_unused __w, __v = (b); u32 _addr = ((u32) (addr)); \
++ (void)({u8 __maybe_unused __w, __v = (b); u32 _addr = ((u32) (addr)); \
+ __w = ((*(__force volatile u8 *) ((_addr | 0x10000) + (__v<<1)))); })
+ #define rom_out_be16(addr, w) \
+- ({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
++ (void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
+ __w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v & 0xFF)<<1)))); \
+ __w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v >> 8)<<1)))); })
+ #define rom_out_le16(addr, w) \
+- ({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
++ (void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
+ __w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v >> 8)<<1)))); \
+ __w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v & 0xFF)<<1)))); })
+
+diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
+index 338817d0cb3fb..74ee1e3013d70 100644
+--- a/arch/m68k/kernel/signal.c
++++ b/arch/m68k/kernel/signal.c
+@@ -625,6 +625,7 @@ static inline void siginfo_build_tests(void)
+ /* _sigfault._perf */
+ BUILD_BUG_ON(offsetof(siginfo_t, si_perf_data) != 0x10);
+ BUILD_BUG_ON(offsetof(siginfo_t, si_perf_type) != 0x14);
++ BUILD_BUG_ON(offsetof(siginfo_t, si_perf_flags) != 0x18);
+
+ /* _sigpoll */
+ BUILD_BUG_ON(offsetof(siginfo_t, si_band) != 0x0c);
+diff --git a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
+index 58f829c9b6c70..79d6fd249583f 100644
+--- a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
+@@ -26,7 +26,6 @@
+ #define cpu_has_3k_cache 0
+ #define cpu_has_4k_cache 1
+ #define cpu_has_tx39_cache 0
+-#define cpu_has_fpu 1
+ #define cpu_has_nofpuex 0
+ #define cpu_has_32fpr 1
+ #define cpu_has_counter 1
+diff --git a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
+index 49a93e82c2528..2635b6ba1cb54 100644
+--- a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
+@@ -29,7 +29,6 @@
+ #define cpu_has_3k_cache 0
+ #define cpu_has_4k_cache 1
+ #define cpu_has_tx39_cache 0
+-#define cpu_has_fpu 1
+ #define cpu_has_nofpuex 0
+ #define cpu_has_32fpr 1
+ #define cpu_has_counter 1
+diff --git a/arch/mips/include/asm/mach-ralink/spaces.h b/arch/mips/include/asm/mach-ralink/spaces.h
+index f7af11ea2d612..a9f0570d0f044 100644
+--- a/arch/mips/include/asm/mach-ralink/spaces.h
++++ b/arch/mips/include/asm/mach-ralink/spaces.h
+@@ -6,7 +6,9 @@
+ #define PCI_IOSIZE SZ_64K
+ #define IO_SPACE_LIMIT (PCI_IOSIZE - 1)
+
++#ifdef CONFIG_PCI_DRIVERS_GENERIC
+ #define pci_remap_iospace pci_remap_iospace
++#endif
+
+ #include <asm/mach-generic/spaces.h>
+ #endif
+diff --git a/arch/openrisc/include/asm/timex.h b/arch/openrisc/include/asm/timex.h
+index d52b4e536e3f9..5487fa93dd9be 100644
+--- a/arch/openrisc/include/asm/timex.h
++++ b/arch/openrisc/include/asm/timex.h
+@@ -23,6 +23,7 @@ static inline cycles_t get_cycles(void)
+ {
+ return mfspr(SPR_TTCR);
+ }
++#define get_cycles get_cycles
+
+ /* This isn't really used any more */
+ #define CLOCK_TICK_RATE 1000
+diff --git a/arch/openrisc/kernel/head.S b/arch/openrisc/kernel/head.S
+index 15f1b38dfe03b..871f4c8588595 100644
+--- a/arch/openrisc/kernel/head.S
++++ b/arch/openrisc/kernel/head.S
+@@ -521,6 +521,15 @@ _start:
+ l.ori r3,r0,0x1
+ l.mtspr r0,r3,SPR_SR
+
++ /*
++ * Start the TTCR as early as possible, so that the RNG can make use of
++ * measurements of boot time from the earliest opportunity. Especially
++ * important is that the TTCR does not return zero by the time we reach
++ * rand_initialize().
++ */
++ l.movhi r3,hi(SPR_TTMR_CR)
++ l.mtspr r0,r3,SPR_TTMR
++
+ CLEAR_GPR(r1)
+ CLEAR_GPR(r2)
+ CLEAR_GPR(r3)
+diff --git a/arch/parisc/include/asm/fb.h b/arch/parisc/include/asm/fb.h
+index c4cd6360f9964..d63a2acb91f2b 100644
+--- a/arch/parisc/include/asm/fb.h
++++ b/arch/parisc/include/asm/fb.h
+@@ -12,9 +12,13 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
+ pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
+ }
+
++#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI)
++int fb_is_primary_device(struct fb_info *info);
++#else
+ static inline int fb_is_primary_device(struct fb_info *info)
+ {
+ return 0;
+ }
++#endif
+
+ #endif /* _ASM_FB_H_ */
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index f2c5c26869f1a..03ae544eb6cc4 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -216,6 +216,9 @@ static inline bool pfn_valid(unsigned long pfn)
+ #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
+ #else
+ #ifdef CONFIG_PPC64
++
++#define VIRTUAL_WARN_ON(x) WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))
++
+ /*
+ * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
+ * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
+@@ -223,13 +226,13 @@ static inline bool pfn_valid(unsigned long pfn)
+ */
+ #define __va(x) \
+ ({ \
+- VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \
++ VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET); \
+ (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET); \
+ })
+
+ #define __pa(x) \
+ ({ \
+- VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \
++ VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET); \
+ (unsigned long)(x) & 0x0fffffffffffffffUL; \
+ })
+
+diff --git a/arch/powerpc/include/asm/vas.h b/arch/powerpc/include/asm/vas.h
+index 57573d9c1e091..56834a8a14654 100644
+--- a/arch/powerpc/include/asm/vas.h
++++ b/arch/powerpc/include/asm/vas.h
+@@ -112,7 +112,7 @@ static inline void vas_user_win_add_mm_context(struct vas_user_win_ref *ref)
+ * Receive window attributes specified by the (in-kernel) owner of window.
+ */
+ struct vas_rx_win_attr {
+- void *rx_fifo;
++ u64 rx_fifo;
+ int rx_fifo_size;
+ int wcreds_max;
+
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 9581906b5ee9c..da18f83ef8834 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -330,22 +330,22 @@ _GLOBAL(enter_rtas)
+ clrldi r4,r4,2 /* convert to realmode address */
+ mtlr r4
+
+- li r0,0
+- ori r0,r0,MSR_EE|MSR_SE|MSR_BE|MSR_RI
+- andc r0,r6,r0
+-
+- li r9,1
+- rldicr r9,r9,MSR_SF_LG,(63-MSR_SF_LG)
+- ori r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI|MSR_LE
+- andc r6,r0,r9
+-
+ __enter_rtas:
+- sync /* disable interrupts so SRR0/1 */
+- mtmsrd r0 /* don't get trashed */
+-
+ LOAD_REG_ADDR(r4, rtas)
+ ld r5,RTASENTRY(r4) /* get the rtas->entry value */
+ ld r4,RTASBASE(r4) /* get the rtas->base value */
++
++ /*
++ * RTAS runs in 32-bit big endian real mode, but leave MSR[RI] on as we
++ * may hit NMI (SRESET or MCE) while in RTAS. RTAS should disable RI in
++ * its critical regions (as specified in PAPR+ section 7.2.1). MSR[S]
++ * is not impacted by RFI_TO_KERNEL (only urfid can unset it). So if
++ * MSR[S] is set, it will remain when entering RTAS.
++ */
++ LOAD_REG_IMMEDIATE(r6, MSR_ME | MSR_RI)
++
++ li r0,0
++ mtmsrd r0,1 /* disable RI before using SRR0/1 */
+
+ mtspr SPRN_SRR0,r5
+ mtspr SPRN_SRR1,r6
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index d03e488cfe9ca..a32d6871de759 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -861,7 +861,6 @@ static int fadump_alloc_mem_ranges(struct fadump_mrange_info *mrange_info)
+ sizeof(struct fadump_memory_range));
+ return 0;
+ }
+-
+ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
+ u64 base, u64 end)
+ {
+@@ -880,7 +879,12 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
+ start = mem_ranges[mrange_info->mem_range_cnt - 1].base;
+ size = mem_ranges[mrange_info->mem_range_cnt - 1].size;
+
+- if ((start + size) == base)
++ /*
++ * Boot memory area needs separate PT_LOAD segment(s) as it
++ * is moved to a different location at the time of crash.
++ * So, fold only if the region is not boot memory area.
++ */
++ if ((start + size) == base && start >= fw_dump.boot_mem_top)
+ is_adjacent = true;
+ }
+ if (!is_adjacent) {
+diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
+index 4ad79eb638c62..77cd4c5a2d631 100644
+--- a/arch/powerpc/kernel/idle.c
++++ b/arch/powerpc/kernel/idle.c
+@@ -37,7 +37,7 @@ static int __init powersave_off(char *arg)
+ {
+ ppc_md.power_save = NULL;
+ cpuidle_disable = IDLE_POWERSAVE_OFF;
+- return 0;
++ return 1;
+ }
+ __setup("powersave=off", powersave_off);
+
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 1f42aabbbab3a..6bc89d9ccf635 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -49,6 +49,15 @@ void enter_rtas(unsigned long);
+
+ static inline void do_enter_rtas(unsigned long args)
+ {
++ unsigned long msr;
++
++ /*
++ * Make sure MSR[RI] is currently enabled as it will be forced later
++ * in enter_rtas.
++ */
++ msr = mfmsr();
++ BUG_ON(!(msr & MSR_RI));
++
+ enter_rtas(args);
+
+ srr_regs_clobbered(); /* rtas uses SRRs, invalidate */
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 316f61a4cb599..85a3b756e0b05 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4238,13 +4238,13 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
+ start_wait = ktime_get();
+
+ vc->vcore_state = VCORE_SLEEPING;
+- trace_kvmppc_vcore_blocked(vc, 0);
++ trace_kvmppc_vcore_blocked(vc->runner, 0);
+ spin_unlock(&vc->lock);
+ schedule();
+ finish_rcuwait(&vc->wait);
+ spin_lock(&vc->lock);
+ vc->vcore_state = VCORE_INACTIVE;
+- trace_kvmppc_vcore_blocked(vc, 1);
++ trace_kvmppc_vcore_blocked(vc->runner, 1);
+ ++vc->runner->stat.halt_successful_wait;
+
+ cur = ktime_get();
+@@ -4624,9 +4624,9 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ if (kvmppc_vcpu_check_block(vcpu))
+ break;
+
+- trace_kvmppc_vcore_blocked(vc, 0);
++ trace_kvmppc_vcore_blocked(vcpu, 0);
+ schedule();
+- trace_kvmppc_vcore_blocked(vc, 1);
++ trace_kvmppc_vcore_blocked(vcpu, 1);
+ }
+ finish_rcuwait(wait);
+ }
+@@ -5289,6 +5289,10 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
+ kvm->arch.host_lpcr = lpcr = mfspr(SPRN_LPCR);
+ lpcr &= LPCR_PECE | LPCR_LPES;
+ } else {
++ /*
++ * The L2 LPES mode will be set by the L0 according to whether
++ * or not it needs to take external interrupts in HV mode.
++ */
+ lpcr = 0;
+ }
+ lpcr |= (4UL << LPCR_DPFD_SH) | LPCR_HDICE |
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 9d373f8963ee9..58e05a9122acf 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -261,8 +261,7 @@ static void load_l2_hv_regs(struct kvm_vcpu *vcpu,
+ /*
+ * Don't let L1 change LPCR bits for the L2 except these:
+ */
+- mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
+- LPCR_LPES | LPCR_MER;
++ mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD | LPCR_MER;
+
+ /*
+ * Additional filtering is required depending on hardware
+diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
+index 830a126e095d8..9c56da21a0754 100644
+--- a/arch/powerpc/kvm/trace_hv.h
++++ b/arch/powerpc/kvm/trace_hv.h
+@@ -408,9 +408,9 @@ TRACE_EVENT(kvmppc_run_core,
+ );
+
+ TRACE_EVENT(kvmppc_vcore_blocked,
+- TP_PROTO(struct kvmppc_vcore *vc, int where),
++ TP_PROTO(struct kvm_vcpu *vcpu, int where),
+
+- TP_ARGS(vc, where),
++ TP_ARGS(vcpu, where),
+
+ TP_STRUCT__entry(
+ __field(int, n_runnable)
+@@ -420,8 +420,8 @@ TRACE_EVENT(kvmppc_vcore_blocked,
+ ),
+
+ TP_fast_assign(
+- __entry->runner_vcpu = vc->runner->vcpu_id;
+- __entry->n_runnable = vc->n_runnable;
++ __entry->runner_vcpu = vcpu->vcpu_id;
++ __entry->n_runnable = vcpu->arch.vcore->n_runnable;
+ __entry->where = where;
+ __entry->tgid = current->tgid;
+ ),
+diff --git a/arch/powerpc/mm/nohash/fsl_book3e.c b/arch/powerpc/mm/nohash/fsl_book3e.c
+index dfe715e0f70ac..388f7c7dabd30 100644
+--- a/arch/powerpc/mm/nohash/fsl_book3e.c
++++ b/arch/powerpc/mm/nohash/fsl_book3e.c
+@@ -287,22 +287,19 @@ void __init adjust_total_lowmem(void)
+
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+ void mmu_mark_rodata_ro(void)
+-{
+- /* Everything is done in mmu_mark_initmem_nx() */
+-}
+-#endif
+-
+-void mmu_mark_initmem_nx(void)
+ {
+ unsigned long remapped;
+
+- if (!strict_kernel_rwx_enabled())
+- return;
+-
+ remapped = map_mem_in_cams(__max_low_memory, CONFIG_LOWMEM_CAM_NUM, false, false);
+
+ WARN_ON(__max_low_memory != remapped);
+ }
++#endif
++
++void mmu_mark_initmem_nx(void)
++{
++ /* Everything is done in mmu_mark_rodata_ro() */
++}
+
+ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size)
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index 4037ea652522a..abc4c5187dbc9 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -108,7 +108,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
+ *mmcra |= MMCRA_SDAR_MODE_TLB;
+ }
+
+-static u64 p10_thresh_cmp_val(u64 value)
++static int p10_thresh_cmp_val(u64 value)
+ {
+ int exp = 0;
+ u64 result = value;
+@@ -139,7 +139,7 @@ static u64 p10_thresh_cmp_val(u64 value)
+ * exponent is also zero.
+ */
+ if (!(value & 0xC0) && exp)
+- result = 0;
++ result = -1;
+ else
+ result = (exp << 8) | value;
+ }
+@@ -187,7 +187,7 @@ static bool is_thresh_cmp_valid(u64 event)
+ unsigned int cmp, exp;
+
+ if (cpu_has_feature(CPU_FTR_ARCH_31))
+- return p10_thresh_cmp_val(event) != 0;
++ return p10_thresh_cmp_val(event) >= 0;
+
+ /*
+ * Check the mantissa upper two bits are not zero, unless the
+@@ -502,12 +502,14 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp,
+ value |= CNST_THRESH_CTL_SEL_VAL(event >> EVENT_THRESH_SHIFT);
+ mask |= p10_CNST_THRESH_CMP_MASK;
+ value |= p10_CNST_THRESH_CMP_VAL(p10_thresh_cmp_val(event_config1));
+- }
++ } else if (event_is_threshold(event))
++ return -1;
+ } else if (cpu_has_feature(CPU_FTR_ARCH_300)) {
+ if (event_is_threshold(event) && is_thresh_cmp_valid(event)) {
+ mask |= CNST_THRESH_MASK;
+ value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
+- }
++ } else if (event_is_threshold(event))
++ return -1;
+ } else {
+ /*
+ * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,
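Returning int instead of u64 lets p10_thresh_cmp_val() report failure as -1, so the caller's ">= 0" test no longer confuses a legitimately encoded zero with "invalid". The same sentinel idea in self-contained form (encode_threshold is a made-up stand-in, not the kernel helper):

  #include <stdint.h>

  /* Returns the encoding (>= 0) or -1 on failure, so 0 stays a
   * valid result instead of doubling as the error value. */
  static int encode_threshold(uint64_t value)
  {
          if (value > 0x3ff)              /* out of range: reject */
                  return -1;
          return (int)value;              /* 0 is a legal encoding */
  }

  int main(void)
  {
          return encode_threshold(0) >= 0 ? 0 : 1;   /* valid: exit 0 */
  }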
+diff --git a/arch/powerpc/platforms/4xx/cpm.c b/arch/powerpc/platforms/4xx/cpm.c
+index 2571841625a23..1d3bc35ee1a7d 100644
+--- a/arch/powerpc/platforms/4xx/cpm.c
++++ b/arch/powerpc/platforms/4xx/cpm.c
+@@ -327,6 +327,6 @@ late_initcall(cpm_init);
+ static int __init cpm_powersave_off(char *arg)
+ {
+ cpm.powersave_off = 1;
+- return 0;
++ return 1;
+ }
+ __setup("powersave=off", cpm_powersave_off);
+diff --git a/arch/powerpc/platforms/8xx/cpm1.c b/arch/powerpc/platforms/8xx/cpm1.c
+index c58b6f1c40e35..3ef5e9fd3a9b6 100644
+--- a/arch/powerpc/platforms/8xx/cpm1.c
++++ b/arch/powerpc/platforms/8xx/cpm1.c
+@@ -280,6 +280,7 @@ cpm_setbrg(uint brg, uint rate)
+ out_be32(bp, (((BRG_UART_CLK_DIV16 / rate) - 1) << 1) |
+ CPM_BRG_EN | CPM_BRG_DIV16);
+ }
++EXPORT_SYMBOL(cpm_setbrg);
+
+ struct cpm_ioport16 {
+ __be16 dir, par, odr_sor, dat, intr;
+diff --git a/arch/powerpc/platforms/powernv/opal-fadump.c b/arch/powerpc/platforms/powernv/opal-fadump.c
+index c8ad057c72210..9d74d3950a523 100644
+--- a/arch/powerpc/platforms/powernv/opal-fadump.c
++++ b/arch/powerpc/platforms/powernv/opal-fadump.c
+@@ -60,7 +60,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ addr = be64_to_cpu(addr);
+ pr_debug("Kernel metadata addr: %llx\n", addr);
+ opal_fdm_active = (void *)addr;
+- if (opal_fdm_active->registered_regions == 0)
++ if (be16_to_cpu(opal_fdm_active->registered_regions) == 0)
+ return;
+
+ ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_BOOT_MEM, &addr);
+@@ -95,17 +95,17 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf);
+ static void opal_fadump_update_config(struct fw_dump *fadump_conf,
+ const struct opal_fadump_mem_struct *fdm)
+ {
+- pr_debug("Boot memory regions count: %d\n", fdm->region_cnt);
++ pr_debug("Boot memory regions count: %d\n", be16_to_cpu(fdm->region_cnt));
+
+ /*
+ * The destination address of the first boot memory region is the
+ * destination address of boot memory regions.
+ */
+- fadump_conf->boot_mem_dest_addr = fdm->rgn[0].dest;
++ fadump_conf->boot_mem_dest_addr = be64_to_cpu(fdm->rgn[0].dest);
+ pr_debug("Destination address of boot memory regions: %#016llx\n",
+ fadump_conf->boot_mem_dest_addr);
+
+- fadump_conf->fadumphdr_addr = fdm->fadumphdr_addr;
++ fadump_conf->fadumphdr_addr = be64_to_cpu(fdm->fadumphdr_addr);
+ }
+
+ /*
+@@ -126,9 +126,9 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ fadump_conf->boot_memory_size = 0;
+
+ pr_debug("Boot memory regions:\n");
+- for (i = 0; i < fdm->region_cnt; i++) {
+- base = fdm->rgn[i].src;
+- size = fdm->rgn[i].size;
++ for (i = 0; i < be16_to_cpu(fdm->region_cnt); i++) {
++ base = be64_to_cpu(fdm->rgn[i].src);
++ size = be64_to_cpu(fdm->rgn[i].size);
+ pr_debug("\t[%03d] base: 0x%lx, size: 0x%lx\n", i, base, size);
+
+ fadump_conf->boot_mem_addr[i] = base;
+@@ -143,7 +143,7 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ * Start address of reserve dump area (permanent reservation) for
+ * re-registering FADump after dump capture.
+ */
+- fadump_conf->reserve_dump_area_start = fdm->rgn[0].dest;
++ fadump_conf->reserve_dump_area_start = be64_to_cpu(fdm->rgn[0].dest);
+
+ /*
+ * Rarely, but it can so happen that system crashes before all
+@@ -155,13 +155,14 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ * Hope the memory that could not be preserved only has pages
+ * that are usually filtered out while saving the vmcore.
+ */
+- if (fdm->region_cnt > fdm->registered_regions) {
++ if (be16_to_cpu(fdm->region_cnt) > be16_to_cpu(fdm->registered_regions)) {
+ pr_warn("Not all memory regions were saved!!!\n");
+ pr_warn(" Unsaved memory regions:\n");
+- i = fdm->registered_regions;
+- while (i < fdm->region_cnt) {
++ i = be16_to_cpu(fdm->registered_regions);
++ while (i < be16_to_cpu(fdm->region_cnt)) {
+ pr_warn("\t[%03d] base: 0x%llx, size: 0x%llx\n",
+- i, fdm->rgn[i].src, fdm->rgn[i].size);
++ i, be64_to_cpu(fdm->rgn[i].src),
++ be64_to_cpu(fdm->rgn[i].size));
+ i++;
+ }
+
+@@ -170,7 +171,7 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ }
+
+ fadump_conf->boot_mem_top = (fadump_conf->boot_memory_size + hole_size);
+- fadump_conf->boot_mem_regs_cnt = fdm->region_cnt;
++ fadump_conf->boot_mem_regs_cnt = be16_to_cpu(fdm->region_cnt);
+ opal_fadump_update_config(fadump_conf, fdm);
+ }
+
+@@ -178,35 +179,38 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ static void opal_fadump_init_metadata(struct opal_fadump_mem_struct *fdm)
+ {
+ fdm->version = OPAL_FADUMP_VERSION;
+- fdm->region_cnt = 0;
+- fdm->registered_regions = 0;
+- fdm->fadumphdr_addr = 0;
++ fdm->region_cnt = cpu_to_be16(0);
++ fdm->registered_regions = cpu_to_be16(0);
++ fdm->fadumphdr_addr = cpu_to_be64(0);
+ }
+
+ static u64 opal_fadump_init_mem_struct(struct fw_dump *fadump_conf)
+ {
+ u64 addr = fadump_conf->reserve_dump_area_start;
++ u16 reg_cnt;
+ int i;
+
+ opal_fdm = __va(fadump_conf->kernel_metadata);
+ opal_fadump_init_metadata(opal_fdm);
+
+ /* Boot memory regions */
++ reg_cnt = be16_to_cpu(opal_fdm->region_cnt);
+ for (i = 0; i < fadump_conf->boot_mem_regs_cnt; i++) {
+- opal_fdm->rgn[i].src = fadump_conf->boot_mem_addr[i];
+- opal_fdm->rgn[i].dest = addr;
+- opal_fdm->rgn[i].size = fadump_conf->boot_mem_sz[i];
++ opal_fdm->rgn[i].src = cpu_to_be64(fadump_conf->boot_mem_addr[i]);
++ opal_fdm->rgn[i].dest = cpu_to_be64(addr);
++ opal_fdm->rgn[i].size = cpu_to_be64(fadump_conf->boot_mem_sz[i]);
+
+- opal_fdm->region_cnt++;
++ reg_cnt++;
+ addr += fadump_conf->boot_mem_sz[i];
+ }
++ opal_fdm->region_cnt = cpu_to_be16(reg_cnt);
+
+ /*
+ * Kernel metadata is passed to f/w and retrieved in capture kernel.

+ * So, use it to save fadump header address instead of calculating it.
+ */
+- opal_fdm->fadumphdr_addr = (opal_fdm->rgn[0].dest +
+- fadump_conf->boot_memory_size);
++ opal_fdm->fadumphdr_addr = cpu_to_be64(be64_to_cpu(opal_fdm->rgn[0].dest) +
++ fadump_conf->boot_memory_size);
+
+ opal_fadump_update_config(fadump_conf, opal_fdm);
+
+@@ -269,18 +273,21 @@ static u64 opal_fadump_get_bootmem_min(void)
+ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ {
+ s64 rc = OPAL_PARAMETER;
++ u16 registered_regs;
+ int i, err = -EIO;
+
+- for (i = 0; i < opal_fdm->region_cnt; i++) {
++ registered_regs = be16_to_cpu(opal_fdm->registered_regions);
++ for (i = 0; i < be16_to_cpu(opal_fdm->region_cnt); i++) {
+ rc = opal_mpipl_update(OPAL_MPIPL_ADD_RANGE,
+- opal_fdm->rgn[i].src,
+- opal_fdm->rgn[i].dest,
+- opal_fdm->rgn[i].size);
++ be64_to_cpu(opal_fdm->rgn[i].src),
++ be64_to_cpu(opal_fdm->rgn[i].dest),
++ be64_to_cpu(opal_fdm->rgn[i].size));
+ if (rc != OPAL_SUCCESS)
+ break;
+
+- opal_fdm->registered_regions++;
++ registered_regs++;
+ }
++ opal_fdm->registered_regions = cpu_to_be16(registered_regs);
+
+ switch (rc) {
+ case OPAL_SUCCESS:
+@@ -291,7 +298,8 @@ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ case OPAL_RESOURCE:
+ /* If MAX regions limit in f/w is hit, warn and proceed. */
+ pr_warn("%d regions could not be registered for MPIPL as MAX limit is reached!\n",
+- (opal_fdm->region_cnt - opal_fdm->registered_regions));
++ (be16_to_cpu(opal_fdm->region_cnt) -
++ be16_to_cpu(opal_fdm->registered_regions)));
+ fadump_conf->dump_registered = 1;
+ err = 0;
+ break;
+@@ -312,7 +320,7 @@ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ * If some regions were registered before OPAL_MPIPL_ADD_RANGE
+ * OPAL call failed, unregister all regions.
+ */
+- if ((err < 0) && (opal_fdm->registered_regions > 0))
++ if ((err < 0) && (be16_to_cpu(opal_fdm->registered_regions) > 0))
+ opal_fadump_unregister(fadump_conf);
+
+ return err;
+@@ -328,7 +336,7 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf)
+ return -EIO;
+ }
+
+- opal_fdm->registered_regions = 0;
++ opal_fdm->registered_regions = cpu_to_be16(0);
+ fadump_conf->dump_registered = 0;
+ return 0;
+ }
+@@ -563,19 +571,20 @@ static void opal_fadump_region_show(struct fw_dump *fadump_conf,
+ else
+ fdm_ptr = opal_fdm;
+
+- for (i = 0; i < fdm_ptr->region_cnt; i++) {
++ for (i = 0; i < be16_to_cpu(fdm_ptr->region_cnt); i++) {
+ /*
+ * Only regions that are registered for MPIPL
+ * would have dump data.
+ */
+ if ((fadump_conf->dump_active) &&
+- (i < fdm_ptr->registered_regions))
+- dumped_bytes = fdm_ptr->rgn[i].size;
++ (i < be16_to_cpu(fdm_ptr->registered_regions)))
++ dumped_bytes = be64_to_cpu(fdm_ptr->rgn[i].size);
+
+ seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ",
+- fdm_ptr->rgn[i].src, fdm_ptr->rgn[i].dest);
++ be64_to_cpu(fdm_ptr->rgn[i].src),
++ be64_to_cpu(fdm_ptr->rgn[i].dest));
+ seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n",
+- fdm_ptr->rgn[i].size, dumped_bytes);
++ be64_to_cpu(fdm_ptr->rgn[i].size), dumped_bytes);
+ }
+
+ /* Dump is active. Show reserved area start address. */
+@@ -624,6 +633,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ {
+ const __be32 *prop;
+ unsigned long dn;
++ __be64 be_addr;
+ u64 addr = 0;
+ int i, len;
+ s64 ret;
+@@ -680,13 +690,13 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ if (!prop)
+ return;
+
+- ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &addr);
+- if ((ret != OPAL_SUCCESS) || !addr) {
++ ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &be_addr);
++ if ((ret != OPAL_SUCCESS) || !be_addr) {
+ pr_err("Failed to get Kernel metadata (%lld)\n", ret);
+ return;
+ }
+
+- addr = be64_to_cpu(addr);
++ addr = be64_to_cpu(be_addr);
+ pr_debug("Kernel metadata addr: %llx\n", addr);
+
+ opal_fdm_active = __va(addr);
+@@ -697,14 +707,14 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ }
+
+ /* Kernel regions not registered with f/w for MPIPL */
+- if (opal_fdm_active->registered_regions == 0) {
++ if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) {
+ opal_fdm_active = NULL;
+ return;
+ }
+
+- ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &addr);
+- if (addr) {
+- addr = be64_to_cpu(addr);
++ ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &be_addr);
++ if (be_addr) {
++ addr = be64_to_cpu(be_addr);
+ pr_debug("CPU metadata addr: %llx\n", addr);
+ opal_cpu_metadata = __va(addr);
+ }
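Nearly every hunk in this file wraps accesses to the firmware-shared metadata in be16/be64 converters: OPAL keeps the structure big-endian, while the kernel may run little-endian, so a bare load of region_cnt on LE silently byte-swaps the count. A self-contained model of the round-trip (the kernel's cpu_to_be16()/be16_to_cpu() additionally carry the __be16 sparse annotations added in the header hunk below):

  #include <stdint.h>
  #include <stdio.h>

  /* Unconditional byte swap; on a little-endian host this is what
   * cpu_to_be16()/be16_to_cpu() compile down to. */
  static uint16_t swab16(uint16_t v)
  {
          return (uint16_t)((v << 8) | (v >> 8));
  }

  int main(void)
  {
          uint16_t wire = swab16(3);      /* store in BE wire format */

          printf("raw %#x, decoded %u\n", wire, swab16(wire));
          return 0;                       /* decoded prints 3 */
  }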
+diff --git a/arch/powerpc/platforms/powernv/opal-fadump.h b/arch/powerpc/platforms/powernv/opal-fadump.h
+index f1e9ecf548c5d..3f715efb0aa6e 100644
+--- a/arch/powerpc/platforms/powernv/opal-fadump.h
++++ b/arch/powerpc/platforms/powernv/opal-fadump.h
+@@ -31,14 +31,14 @@
+ * OPAL FADump kernel metadata
+ *
+ * The address of this structure will be registered with f/w for retrieving
+- * and processing during crash dump.
++ * in the capture kernel to process the crash dump.
+ */
+ struct opal_fadump_mem_struct {
+ u8 version;
+ u8 reserved[3];
+- u16 region_cnt; /* number of regions */
+- u16 registered_regions; /* Regions registered for MPIPL */
+- u64 fadumphdr_addr;
++ __be16 region_cnt; /* number of regions */
++ __be16 registered_regions; /* Regions registered for MPIPL */
++ __be64 fadumphdr_addr;
+ struct opal_mpipl_region rgn[FADUMP_MAX_MEM_REGS];
+ } __packed;
+
+@@ -135,7 +135,7 @@ static inline void opal_fadump_read_regs(char *bufp, unsigned int regs_cnt,
+ for (i = 0; i < regs_cnt; i++, bufp += reg_entry_size) {
+ reg_entry = (struct hdat_fadump_reg_entry *)bufp;
+ val = (cpu_endian ? be64_to_cpu(reg_entry->reg_val) :
+- reg_entry->reg_val);
++ (u64)(reg_entry->reg_val));
+ opal_fadump_set_regval_regnum(regs,
+ be32_to_cpu(reg_entry->reg_type),
+ be32_to_cpu(reg_entry->reg_num),
+diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
+index 105d889abd51a..824c3ad7a0faf 100644
+--- a/arch/powerpc/platforms/powernv/setup.c
++++ b/arch/powerpc/platforms/powernv/setup.c
+@@ -96,6 +96,15 @@ static void __init init_fw_feat_flags(struct device_node *np)
+
+ if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
+ security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
++
++ if (fw_feature_is("enabled", "no-need-l1d-flush-msr-pr-1-to-0", np))
++ security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY);
++
++ if (fw_feature_is("enabled", "no-need-l1d-flush-kernel-on-user-access", np))
++ security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS);
++
++ if (fw_feature_is("enabled", "no-need-store-drain-on-priv-state-switch", np))
++ security_ftr_clear(SEC_FTR_STF_BARRIER);
+ }
+
+ static void __init pnv_setup_security_mitigations(void)
+diff --git a/arch/powerpc/platforms/powernv/ultravisor.c b/arch/powerpc/platforms/powernv/ultravisor.c
+index e4a00ad06f9d3..67c8c4b2d8b17 100644
+--- a/arch/powerpc/platforms/powernv/ultravisor.c
++++ b/arch/powerpc/platforms/powernv/ultravisor.c
+@@ -55,6 +55,7 @@ static int __init uv_init(void)
+ return -ENODEV;
+
+ uv_memcons = memcons_init(node, "memcons");
++ of_node_put(node);
+ if (!uv_memcons)
+ return -ENOENT;
+
+diff --git a/arch/powerpc/platforms/powernv/vas-fault.c b/arch/powerpc/platforms/powernv/vas-fault.c
+index a7aabc18039eb..c1bfad56447d4 100644
+--- a/arch/powerpc/platforms/powernv/vas-fault.c
++++ b/arch/powerpc/platforms/powernv/vas-fault.c
+@@ -216,7 +216,7 @@ int vas_setup_fault_window(struct vas_instance *vinst)
+ vas_init_rx_win_attr(&attr, VAS_COP_TYPE_FAULT);
+
+ attr.rx_fifo_size = vinst->fault_fifo_size;
+- attr.rx_fifo = vinst->fault_fifo;
++ attr.rx_fifo = __pa(vinst->fault_fifo);
+
+ /*
+ * Max creds is based on the number of CRBs that can fit in the FIFO.
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 0f8d39fbf2b21..0072682531d80 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -404,7 +404,7 @@ static void init_winctx_regs(struct pnv_vas_window *window,
+ *
+ * See also: Design note in function header.
+ */
+- val = __pa(winctx->rx_fifo);
++ val = winctx->rx_fifo;
+ val = SET_FIELD(VAS_PAGE_MIGRATION_SELECT, val, 0);
+ write_hvwc_reg(window, VREG(LFIFO_BAR), val);
+
+@@ -739,7 +739,7 @@ static void init_winctx_for_rxwin(struct pnv_vas_window *rxwin,
+ */
+ winctx->fifo_disable = true;
+ winctx->intr_disable = true;
+- winctx->rx_fifo = NULL;
++ winctx->rx_fifo = 0;
+ }
+
+ winctx->lnotify_lpid = rxattr->lnotify_lpid;
+diff --git a/arch/powerpc/platforms/powernv/vas.h b/arch/powerpc/platforms/powernv/vas.h
+index 8bb08e395de05..08d9d3d5a22b0 100644
+--- a/arch/powerpc/platforms/powernv/vas.h
++++ b/arch/powerpc/platforms/powernv/vas.h
+@@ -376,7 +376,7 @@ struct pnv_vas_window {
+ * is a container for the register fields in the window context.
+ */
+ struct vas_winctx {
+- void *rx_fifo;
++ u64 rx_fifo;
+ int rx_fifo_size;
+ int wcreds_max;
+ int rsvd_txbuf_count;
+diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c
+index be6b99b1b3523..9a02aed886a0d 100644
+--- a/arch/powerpc/sysdev/dart_iommu.c
++++ b/arch/powerpc/sysdev/dart_iommu.c
+@@ -404,9 +404,10 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops)
+ }
+
+ /* Initialize the DART HW */
+- if (dart_init(dn) != 0)
++ if (dart_init(dn) != 0) {
++ of_node_put(dn);
+ return;
+-
++ }
+ /*
+ * U4 supports a DART bypass, we use it for 64-bit capable devices to
+ * improve performance. However, that only works for devices connected
+@@ -419,6 +420,7 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops)
+
+ /* Setup pci_dma ops */
+ set_pci_dma_ops(&dma_iommu_ops);
++ of_node_put(dn);
+ }
+
+ #ifdef CONFIG_PM
+diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c
+index ff7906b48ca1e..1bfc9afa8a1a1 100644
+--- a/arch/powerpc/sysdev/fsl_rio.c
++++ b/arch/powerpc/sysdev/fsl_rio.c
+@@ -505,8 +505,10 @@ int fsl_rio_setup(struct platform_device *dev)
+ if (rc) {
+ dev_err(&dev->dev, "Can't get %pOF property 'reg'\n",
+ rmu_node);
++ of_node_put(rmu_node);
+ goto err_rmu;
+ }
++ of_node_put(rmu_node);
+ rmu_regs_win = ioremap(rmu_regs.start, resource_size(&rmu_regs));
+ if (!rmu_regs_win) {
+ dev_err(&dev->dev, "Unable to map rmu register window\n");
+diff --git a/arch/powerpc/sysdev/xics/icp-opal.c b/arch/powerpc/sysdev/xics/icp-opal.c
+index bda4c32582d97..4dae624b9f2f4 100644
+--- a/arch/powerpc/sysdev/xics/icp-opal.c
++++ b/arch/powerpc/sysdev/xics/icp-opal.c
+@@ -196,6 +196,7 @@ int __init icp_opal_init(void)
+
+ printk("XICS: Using OPAL ICP fallbacks\n");
+
++ of_node_put(np);
+ return 0;
+ }
+
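The dart_iommu, fsl_rio, and icp-opal hunks all plug the same leak: OF lookup helpers such as of_find_compatible_node() return a device_node with its refcount raised, and every exit path, error or success, owes one of_node_put(). A userspace model of the get/put discipline:

  #include <assert.h>

  struct node { int refcount; };

  static struct node *node_get(struct node *n) { n->refcount++; return n; }

  static void node_put(struct node *n)
  {
          n->refcount--;
          assert(n->refcount >= 0);
  }

  /* Both the failure and the success path drop the reference. */
  static int demo(struct node *n, int fail)
  {
          struct node *dn = node_get(n);

          if (fail) {
                  node_put(dn);
                  return -1;
          }
          node_put(dn);
          return 0;
  }

  int main(void)
  {
          struct node n = { .refcount = 0 };

          demo(&n, 0);
          demo(&n, 1);
          return n.refcount;      /* 0: balanced on every path */
  }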
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index 928f95004501f..503f544d28e29 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -67,6 +67,17 @@ static int __init xive_irq_bitmap_add(int base, int count)
+ return 0;
+ }
+
++static void xive_irq_bitmap_remove_all(void)
++{
++ struct xive_irq_bitmap *xibm, *tmp;
++
++ list_for_each_entry_safe(xibm, tmp, &xive_irq_bitmaps, list) {
++ list_del(&xibm->list);
++ kfree(xibm->bitmap);
++ kfree(xibm);
++ }
++}
++
+ static int __xive_irq_bitmap_alloc(struct xive_irq_bitmap *xibm)
+ {
+ int irq;
+@@ -803,7 +814,7 @@ bool __init xive_spapr_init(void)
+ u32 val;
+ u32 len;
+ const __be32 *reg;
+- int i;
++ int i, err;
+
+ if (xive_spapr_disabled())
+ return false;
+@@ -819,32 +830,35 @@ bool __init xive_spapr_init(void)
+ /* Resource 1 is the OS ring TIMA */
+ if (of_address_to_resource(np, 1, &r)) {
+ pr_err("Failed to get thread mgmnt area resource\n");
+- return false;
++ goto err_put;
+ }
+ tima = ioremap(r.start, resource_size(&r));
+ if (!tima) {
+ pr_err("Failed to map thread mgmnt area\n");
+- return false;
++ goto err_put;
+ }
+
+ if (!xive_get_max_prio(&max_prio))
+- return false;
++ goto err_unmap;
+
+ /* Feed the IRQ number allocator with the ranges given in the DT */
+ reg = of_get_property(np, "ibm,xive-lisn-ranges", &len);
+ if (!reg) {
+ pr_err("Failed to read 'ibm,xive-lisn-ranges' property\n");
+- return false;
++ goto err_unmap;
+ }
+
+ if (len % (2 * sizeof(u32)) != 0) {
+ pr_err("invalid 'ibm,xive-lisn-ranges' property\n");
+- return false;
++ goto err_unmap;
+ }
+
+- for (i = 0; i < len / (2 * sizeof(u32)); i++, reg += 2)
+- xive_irq_bitmap_add(be32_to_cpu(reg[0]),
+- be32_to_cpu(reg[1]));
++ for (i = 0; i < len / (2 * sizeof(u32)); i++, reg += 2) {
++ err = xive_irq_bitmap_add(be32_to_cpu(reg[0]),
++ be32_to_cpu(reg[1]));
++ if (err < 0)
++ goto err_mem_free;
++ }
+
+ /* Iterate the EQ sizes and pick one */
+ of_property_for_each_u32(np, "ibm,xive-eq-sizes", prop, reg, val) {
+@@ -855,10 +869,19 @@ bool __init xive_spapr_init(void)
+
+ /* Initialize XIVE core with our backend */
+ if (!xive_core_init(np, &xive_spapr_ops, tima, TM_QW1_OS, max_prio))
+- return false;
++ goto err_mem_free;
+
++ of_node_put(np);
+ pr_info("Using %dkB queues\n", 1 << (xive_queue_shift - 10));
+ return true;
++
++err_mem_free:
++ xive_irq_bitmap_remove_all();
++err_unmap:
++ iounmap(tima);
++err_put:
++ of_node_put(np);
++ return false;
+ }
+
+ machine_arch_initcall(pseries, xive_core_debug_init);
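The reworked xive_spapr_init() failure handling is the kernel's standard goto-unwind ladder: each later failure jumps to a label that releases, in reverse order, exactly what was acquired before it (bitmaps, the TIMA mapping, the node reference). The shape of the pattern, modelled in plain C:

  #include <errno.h>
  #include <stdlib.h>

  static int demo_init(int fail_late)
  {
          char *a, *b;

          a = malloc(16);
          if (!a)
                  return -ENOMEM;

          b = malloc(16);
          if (!b)
                  goto err_free_a;

          if (fail_late)
                  goto err_free_b;        /* undo everything so far */

          free(b);                        /* a real init would keep these */
          free(a);
          return 0;

  err_free_b:
          free(b);
  err_free_a:
          free(a);
          return -EIO;
  }

  int main(void) { return demo_init(0); }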
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 7d81102cffd48..c6ca1b9cbf712 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -154,3 +154,7 @@ PHONY += rv64_randconfig
+ rv64_randconfig:
+ $(Q)$(MAKE) KCONFIG_ALLCONFIG=$(srctree)/arch/riscv/configs/64-bit.config \
+ -f $(srctree)/Makefile randconfig
++
++PHONY += rv32_defconfig
++rv32_defconfig:
++ $(Q)$(MAKE) -f $(srctree)/Makefile defconfig 32-bit.config
+diff --git a/arch/riscv/include/asm/alternative-macros.h b/arch/riscv/include/asm/alternative-macros.h
+index 67406c3763890..0377ce0fcc726 100644
+--- a/arch/riscv/include/asm/alternative-macros.h
++++ b/arch/riscv/include/asm/alternative-macros.h
+@@ -23,9 +23,9 @@
+ 888 :
+ \new_c
+ 889 :
+- .previous
+ .org . - (889b - 888b) + (887b - 886b)
+ .org . - (887b - 886b) + (889b - 888b)
++ .previous
+ .endif
+ .endm
+
+@@ -60,9 +60,9 @@
+ "888 :\n" \
+ new_c "\n" \
+ "889 :\n" \
+- ".previous\n" \
+ ".org . - (887b - 886b) + (889b - 888b)\n" \
+ ".org . - (889b - 888b) + (887b - 886b)\n" \
++ ".previous\n" \
+ ".endif\n"
+
+ #define __ALTERNATIVE_CFG(old_c, new_c, vendor_id, errata_id, enable) \
+diff --git a/arch/riscv/include/asm/irq_work.h b/arch/riscv/include/asm/irq_work.h
+index d6c277992f76a..b53891964ae03 100644
+--- a/arch/riscv/include/asm/irq_work.h
++++ b/arch/riscv/include/asm/irq_work.h
+@@ -4,7 +4,7 @@
+
+ static inline bool arch_irq_work_has_interrupt(void)
+ {
+- return true;
++ return IS_ENABLED(CONFIG_SMP);
+ }
+ extern void arch_irq_work_raise(void);
+ #endif /* _ASM_RISCV_IRQ_WORK_H */
+diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
+index 6c316093a1e59..977ee6181dabf 100644
+--- a/arch/riscv/include/asm/unistd.h
++++ b/arch/riscv/include/asm/unistd.h
+@@ -9,7 +9,6 @@
+ */
+
+ #define __ARCH_WANT_SYS_CLONE
+-#define __ARCH_WANT_MEMFD_SECRET
+
+ #include <uapi/asm/unistd.h>
+
+diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
+index 8062996c2dfd0..d95fbf5846b0b 100644
+--- a/arch/riscv/include/uapi/asm/unistd.h
++++ b/arch/riscv/include/uapi/asm/unistd.h
+@@ -21,6 +21,7 @@
+ #endif /* __LP64__ */
+
+ #define __ARCH_WANT_SYS_CLONE3
++#define __ARCH_WANT_MEMFD_SECRET
+
+ #include <asm-generic/unistd.h>
+
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index ec07f991866a5..38678a33e7512 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -316,6 +316,7 @@ clear_bss_done:
+ REG_S a0, (a2)
+
+ /* Initialize page tables and relocate to virtual addresses */
++ la tp, init_task
+ la sp, init_thread_union + THREAD_SIZE
+ XIP_FIXUP_OFFSET sp
+ #ifdef CONFIG_BUILTIN_DTB
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index b42bfdc674823..c8dc327de967d 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -189,7 +189,7 @@ static void __init init_resources(void)
+ res = &mem_res[res_idx--];
+
+ res->name = "Reserved";
+- res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
++ res->flags = IORESOURCE_MEM | IORESOURCE_EXCLUSIVE;
+ res->start = __pfn_to_phys(memblock_region_reserved_base_pfn(region));
+ res->end = __pfn_to_phys(memblock_region_reserved_end_pfn(region)) - 1;
+
+@@ -214,7 +214,7 @@ static void __init init_resources(void)
+
+ if (unlikely(memblock_is_nomap(region))) {
+ res->name = "Reserved";
+- res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
++ res->flags = IORESOURCE_MEM | IORESOURCE_EXCLUSIVE;
+ } else {
+ res->name = "System RAM";
+ res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 697a9aed4f77f..e42c511969957 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -591,7 +591,7 @@ static __init pgprot_t pgprot_from_va(uintptr_t va)
+ }
+ #endif /* CONFIG_STRICT_KERNEL_RWX */
+
+-#ifdef CONFIG_64BIT
++#if defined(CONFIG_64BIT) && !defined(CONFIG_XIP_KERNEL)
+ static void __init disable_pgtable_l4(void)
+ {
+ pgtable_l4_enabled = false;
+diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
+index 1effac6a01520..1c4f585dd39b6 100644
+--- a/arch/s390/include/asm/cio.h
++++ b/arch/s390/include/asm/cio.h
+@@ -369,7 +369,7 @@ void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
+ struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);
+
+ /* Function from drivers/s390/cio/chsc.c */
+-int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
++int chsc_sstpc(void *page, unsigned int op, u16 ctrl, long *clock_delta);
+ int chsc_sstpi(void *page, void *result, size_t size);
+ int chsc_stzi(void *page, void *result, size_t size);
+ int chsc_sgib(u32 origin);
+diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
+index 7f3c9ac34bd8d..63098df81c9f2 100644
+--- a/arch/s390/include/asm/kexec.h
++++ b/arch/s390/include/asm/kexec.h
+@@ -9,6 +9,8 @@
+ #ifndef _S390_KEXEC_H
+ #define _S390_KEXEC_H
+
++#include <linux/module.h>
++
+ #include <asm/processor.h>
+ #include <asm/page.h>
+ #include <asm/setup.h>
+@@ -83,4 +85,12 @@ struct kimage_arch {
+ extern const struct kexec_file_ops s390_kexec_image_ops;
+ extern const struct kexec_file_ops s390_kexec_elf_ops;
+
++#ifdef CONFIG_KEXEC_FILE
++struct purgatory_info;
++int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
++ Elf_Shdr *section,
++ const Elf_Shdr *relsec,
++ const Elf_Shdr *symtab);
++#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++#endif
+ #endif /*_S390_KEXEC_H */
+diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
+index d9d5350cc3ec3..bf15da0fedbca 100644
+--- a/arch/s390/include/asm/preempt.h
++++ b/arch/s390/include/asm/preempt.h
+@@ -46,10 +46,17 @@ static inline bool test_preempt_need_resched(void)
+
+ static inline void __preempt_count_add(int val)
+ {
+- if (__builtin_constant_p(val) && (val >= -128) && (val <= 127))
+- __atomic_add_const(val, &S390_lowcore.preempt_count);
+- else
+- __atomic_add(val, &S390_lowcore.preempt_count);
++ /*
++ * With some obscure config options and CONFIG_PROFILE_ALL_BRANCHES
++ * enabled, gcc 12 fails to handle __builtin_constant_p().
++ */
++ if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) {
++ if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) {
++ __atomic_add_const(val, &S390_lowcore.preempt_count);
++ return;
++ }
++ }
++ __atomic_add(val, &S390_lowcore.preempt_count);
+ }
+
+ static inline void __preempt_count_sub(int val)
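The s390 change works around a gcc 12 issue by only trusting __builtin_constant_p() when CONFIG_PROFILE_ALL_BRANCHES is off; since IS_ENABLED() folds to a compile-time 0/1, the guard costs nothing in the common configuration. What __builtin_constant_p() actually answers, in runnable form (gcc/clang builtin):

  #include <stdio.h>

  int main(void)
  {
          int n = 5;

          /* 1 when the compiler can prove the argument constant,
           * 0 otherwise -- typically prints "1 0" at -O2. */
          printf("%d %d\n", __builtin_constant_p(5),
                            __builtin_constant_p(n));
          return 0;
  }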
+diff --git a/arch/s390/kernel/perf_event.c b/arch/s390/kernel/perf_event.c
+index ea7729bebaa07..a7f8db73984b0 100644
+--- a/arch/s390/kernel/perf_event.c
++++ b/arch/s390/kernel/perf_event.c
+@@ -30,7 +30,7 @@ static struct kvm_s390_sie_block *sie_block(struct pt_regs *regs)
+ if (!stack)
+ return NULL;
+
+- return (struct kvm_s390_sie_block *) stack->empty1[0];
++ return (struct kvm_s390_sie_block *)stack->empty1[1];
+ }
+
+ static bool is_in_guest(struct pt_regs *regs)
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index 326cb8f75f58e..f0a1484ee00b0 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -364,7 +364,7 @@ static inline int check_sync_clock(void)
+ * Apply clock delta to the global data structures.
+ * This is called once on the CPU that performed the clock sync.
+ */
+-static void clock_sync_global(unsigned long delta)
++static void clock_sync_global(long delta)
+ {
+ unsigned long now, adj;
+ struct ptff_qto qto;
+@@ -400,7 +400,7 @@ static void clock_sync_global(unsigned long delta)
+ * Apply clock delta to the per-CPU data structures of this CPU.
+ * This is called for each online CPU after the call to clock_sync_global.
+ */
+-static void clock_sync_local(unsigned long delta)
++static void clock_sync_local(long delta)
+ {
+ /* Add the delta to the clock comparator. */
+ if (S390_lowcore.clock_comparator != clock_comparator_max) {
+@@ -424,7 +424,7 @@ static void __init time_init_wq(void)
+ struct clock_sync_data {
+ atomic_t cpus;
+ int in_sync;
+- unsigned long clock_delta;
++ long clock_delta;
+ };
+
+ /*
+@@ -544,7 +544,7 @@ static int stpinfo_valid(void)
+ static int stp_sync_clock(void *data)
+ {
+ struct clock_sync_data *sync = data;
+- u64 clock_delta, flags;
++ long clock_delta, flags;
+ static int first;
+ int rc;
+
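The u64-to-long conversions matter because an STP sync can step the TOD clock backwards: a negative delta stored in an unsigned type becomes an enormous forward adjustment. A two-line illustration:

  #include <stdio.h>

  int main(void)
  {
          long delta = -5;                        /* clock stepped back */
          unsigned long as_u64 = (unsigned long)delta;

          /* prints 18446744073709551611: a ~2^64 jump forward, not -5 */
          printf("%ld vs %lu\n", delta, as_u64);
          return 0;
  }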
+diff --git a/arch/sparc/kernel/signal32.c b/arch/sparc/kernel/signal32.c
+index 6cc124a3bb98a..90ff7ff94ea7f 100644
+--- a/arch/sparc/kernel/signal32.c
++++ b/arch/sparc/kernel/signal32.c
+@@ -780,5 +780,6 @@ static_assert(offsetof(compat_siginfo_t, si_upper) == 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_pkey) == 0x14);
+ static_assert(offsetof(compat_siginfo_t, si_perf_data) == 0x10);
+ static_assert(offsetof(compat_siginfo_t, si_perf_type) == 0x14);
++static_assert(offsetof(compat_siginfo_t, si_perf_flags) == 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_band) == 0x0c);
+ static_assert(offsetof(compat_siginfo_t, si_fd) == 0x10);
+diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c
+index 2a78d2af12655..6eeb766987d1a 100644
+--- a/arch/sparc/kernel/signal_64.c
++++ b/arch/sparc/kernel/signal_64.c
+@@ -590,5 +590,6 @@ static_assert(offsetof(siginfo_t, si_upper) == 0x28);
+ static_assert(offsetof(siginfo_t, si_pkey) == 0x20);
+ static_assert(offsetof(siginfo_t, si_perf_data) == 0x18);
+ static_assert(offsetof(siginfo_t, si_perf_type) == 0x20);
++static_assert(offsetof(siginfo_t, si_perf_flags) == 0x24);
+ static_assert(offsetof(siginfo_t, si_band) == 0x10);
+ static_assert(offsetof(siginfo_t, si_fd) == 0x14);
+diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c
+index 6040817c036f3..25727ed648b72 100644
+--- a/arch/um/drivers/chan_user.c
++++ b/arch/um/drivers/chan_user.c
+@@ -220,7 +220,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ unsigned long *stack_out)
+ {
+ struct winch_data data;
+- int fds[2], n, err;
++ int fds[2], n, err, pid;
+ char c;
+
+ err = os_pipe(fds, 1, 1);
+@@ -238,8 +238,9 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ * problem with /dev/net/tun, which if held open by this
+ * thread, prevents the TUN/TAP device from being reused.
+ */
+- err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
+- if (err < 0) {
++ pid = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
++ if (pid < 0) {
++ err = pid;
+ printk(UM_KERN_ERR "fork of winch_thread failed - errno = %d\n",
+ -err);
+ goto out_close;
+@@ -263,7 +264,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ goto out_close;
+ }
+
+- return err;
++ return pid;
+
+ out_close:
+ close(fds[1]);
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index ba562d68dc048..82ff3785bf69f 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -63,6 +63,7 @@ struct virtio_uml_device {
+
+ u8 config_changed_irq:1;
+ uint64_t vq_irq_vq_map;
++ int recv_rc;
+ };
+
+ struct virtio_uml_vq_info {
+@@ -148,14 +149,6 @@ static int vhost_user_recv(struct virtio_uml_device *vu_dev,
+
+ rc = vhost_user_recv_header(fd, msg);
+
+- if (rc == -ECONNRESET && vu_dev->registered) {
+- struct virtio_uml_platform_data *pdata;
+-
+- pdata = vu_dev->pdata;
+-
+- virtio_break_device(&vu_dev->vdev);
+- schedule_work(&pdata->conn_broken_wk);
+- }
+ if (rc)
+ return rc;
+ size = msg->header.size;
+@@ -164,6 +157,21 @@ static int vhost_user_recv(struct virtio_uml_device *vu_dev,
+ return full_read(fd, &msg->payload, size, false);
+ }
+
++static void vhost_user_check_reset(struct virtio_uml_device *vu_dev,
++ int rc)
++{
++ struct virtio_uml_platform_data *pdata = vu_dev->pdata;
++
++ if (rc != -ECONNRESET)
++ return;
++
++ if (!vu_dev->registered)
++ return;
++
++ virtio_break_device(&vu_dev->vdev);
++ schedule_work(&pdata->conn_broken_wk);
++}
++
+ static int vhost_user_recv_resp(struct virtio_uml_device *vu_dev,
+ struct vhost_user_msg *msg,
+ size_t max_payload_size)
+@@ -171,8 +179,10 @@ static int vhost_user_recv_resp(struct virtio_uml_device *vu_dev,
+ int rc = vhost_user_recv(vu_dev, vu_dev->sock, msg,
+ max_payload_size, true);
+
+- if (rc)
++ if (rc) {
++ vhost_user_check_reset(vu_dev, rc);
+ return rc;
++ }
+
+ if (msg->header.flags != (VHOST_USER_FLAG_REPLY | VHOST_USER_VERSION))
+ return -EPROTO;
+@@ -369,6 +379,7 @@ static irqreturn_t vu_req_read_message(struct virtio_uml_device *vu_dev,
+ sizeof(msg.msg.payload) +
+ sizeof(msg.extra_payload));
+
++ vu_dev->recv_rc = rc;
+ if (rc)
+ return IRQ_NONE;
+
+@@ -412,7 +423,9 @@ static irqreturn_t vu_req_interrupt(int irq, void *data)
+ if (!um_irq_timetravel_handler_used())
+ ret = vu_req_read_message(vu_dev, NULL);
+
+- if (vu_dev->vq_irq_vq_map) {
++ if (vu_dev->recv_rc) {
++ vhost_user_check_reset(vu_dev, vu_dev->recv_rc);
++ } else if (vu_dev->vq_irq_vq_map) {
+ struct virtqueue *vq;
+
+ virtio_device_for_each_vq((&vu_dev->vdev), vq) {
+diff --git a/arch/um/include/asm/Kbuild b/arch/um/include/asm/Kbuild
+index e5a7b552bb384..a8c763c296b48 100644
+--- a/arch/um/include/asm/Kbuild
++++ b/arch/um/include/asm/Kbuild
+@@ -4,6 +4,7 @@ generic-y += bug.h
+ generic-y += compat.h
+ generic-y += current.h
+ generic-y += device.h
++generic-y += dma-mapping.h
+ generic-y += emergency-restart.h
+ generic-y += exec.h
+ generic-y += extable.h
+diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h
+index 1395cbd7e340d..c7b4b49826a2a 100644
+--- a/arch/um/include/asm/thread_info.h
++++ b/arch/um/include/asm/thread_info.h
+@@ -60,6 +60,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_RESTORE_SIGMASK 7
+ #define TIF_NOTIFY_RESUME 8
+ #define TIF_SECCOMP 9 /* secure computing */
++#define TIF_SINGLESTEP 10 /* single stepping userspace */
+
+ #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
+@@ -68,5 +69,6 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_MEMDIE (1 << TIF_MEMDIE)
+ #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP (1 << TIF_SECCOMP)
++#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
+
+ #endif
+diff --git a/arch/um/kernel/exec.c b/arch/um/kernel/exec.c
+index c85e40c72779f..58938d75871af 100644
+--- a/arch/um/kernel/exec.c
++++ b/arch/um/kernel/exec.c
+@@ -43,7 +43,7 @@ void start_thread(struct pt_regs *regs, unsigned long eip, unsigned long esp)
+ {
+ PT_REGS_IP(regs) = eip;
+ PT_REGS_SP(regs) = esp;
+- current->ptrace &= ~PT_DTRACE;
++ clear_thread_flag(TIF_SINGLESTEP);
+ #ifdef SUBARCH_EXECVE1
+ SUBARCH_EXECVE1(regs->regs);
+ #endif
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 4a420778ed87b..703a5e1eab2d5 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -335,7 +335,7 @@ int singlestepping(void * t)
+ {
+ struct task_struct *task = t ? t : current;
+
+- if (!(task->ptrace & PT_DTRACE))
++ if (!test_thread_flag(TIF_SINGLESTEP))
+ return 0;
+
+ if (task->thread.singlestep_syscall)
+diff --git a/arch/um/kernel/ptrace.c b/arch/um/kernel/ptrace.c
+index b425f47bddbb3..d37802ced5636 100644
+--- a/arch/um/kernel/ptrace.c
++++ b/arch/um/kernel/ptrace.c
+@@ -12,7 +12,7 @@
+
+ void user_enable_single_step(struct task_struct *child)
+ {
+- child->ptrace |= PT_DTRACE;
++ set_tsk_thread_flag(child, TIF_SINGLESTEP);
+ child->thread.singlestep_syscall = 0;
+
+ #ifdef SUBARCH_SET_SINGLESTEPPING
+@@ -22,7 +22,7 @@ void user_enable_single_step(struct task_struct *child)
+
+ void user_disable_single_step(struct task_struct *child)
+ {
+- child->ptrace &= ~PT_DTRACE;
++ clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+ child->thread.singlestep_syscall = 0;
+
+ #ifdef SUBARCH_SET_SINGLESTEPPING
+@@ -121,7 +121,7 @@ static void send_sigtrap(struct uml_pt_regs *regs, int error_code)
+ }
+
+ /*
+- * XXX Check PT_DTRACE vs TIF_SINGLESTEP for singlestepping check and
++ * XXX Check TIF_SINGLESTEP for singlestepping check and
+ * PT_PTRACED vs TIF_SYSCALL_TRACE for syscall tracing check
+ */
+ int syscall_trace_enter(struct pt_regs *regs)
+@@ -145,7 +145,7 @@ void syscall_trace_leave(struct pt_regs *regs)
+ audit_syscall_exit(regs);
+
+ /* Fake a debug trap */
+- if (ptraced & PT_DTRACE)
++ if (test_thread_flag(TIF_SINGLESTEP))
+ send_sigtrap(&regs->regs, 0);
+
+ if (!test_thread_flag(TIF_SYSCALL_TRACE))
+diff --git a/arch/um/kernel/signal.c b/arch/um/kernel/signal.c
+index 88cd9b5c1b744..ae4658f576ab7 100644
+--- a/arch/um/kernel/signal.c
++++ b/arch/um/kernel/signal.c
+@@ -53,7 +53,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+ unsigned long sp;
+ int err;
+
+- if ((current->ptrace & PT_DTRACE) && (current->ptrace & PT_PTRACED))
++ if (test_thread_flag(TIF_SINGLESTEP) && (current->ptrace & PT_PTRACED))
+ singlestep = 1;
+
+ /* Did we come from a system call? */
+@@ -128,7 +128,7 @@ void do_signal(struct pt_regs *regs)
+ * on the host. The tracing thread will check this flag and
+ * PTRACE_SYSCALL if necessary.
+ */
+- if (current->ptrace & PT_DTRACE)
++ if (test_thread_flag(TIF_SINGLESTEP))
+ current->thread.singlestep_syscall =
+ is_syscall(PT_REGS_IP(&current->thread.regs));
+
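Replacing the ptrace-private PT_DTRACE bit with TIF_SINGLESTEP moves the state into the per-thread flag word, where the generic set/clear/test helpers (and the other architectures) already expect it. A userspace model of that flag word:

  #include <stdbool.h>

  enum { TIF_SINGLESTEP = 10 };           /* matches the new define */

  struct task { unsigned long flags; };

  static void set_flag(struct task *t, int f)   { t->flags |= 1UL << f; }
  static void clear_flag(struct task *t, int f) { t->flags &= ~(1UL << f); }
  static bool test_flag(struct task *t, int f)  { return t->flags & (1UL << f); }

  int main(void)
  {
          struct task t = { 0 };

          set_flag(&t, TIF_SINGLESTEP);
          clear_flag(&t, TIF_SINGLESTEP);
          return test_flag(&t, TIF_SINGLESTEP);   /* 0: cleared */
  }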
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index d0ecc4005df33..380c16ff5078b 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1332,7 +1332,7 @@ config MICROCODE
+
+ config MICROCODE_INTEL
+ bool "Intel microcode loading support"
+- depends on MICROCODE
++ depends on CPU_SUP_INTEL && MICROCODE
+ default MICROCODE
+ help
+ This option enables microcode patch loading support for Intel
+@@ -1344,7 +1344,7 @@ config MICROCODE_INTEL
+
+ config MICROCODE_AMD
+ bool "AMD microcode loading support"
+- depends on MICROCODE
++ depends on CPU_SUP_AMD && MICROCODE
+ help
+ If you select this option, microcode patch loading support for AMD
+ processors will be enabled.
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 466df3e502760..f7a0cc261b140 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -499,6 +499,7 @@ SYM_CODE_START(\asmsym)
+ call vc_switch_off_ist
+ movq %rax, %rsp /* Switch to new stack */
+
++ ENCODE_FRAME_POINTER
+ UNWIND_HINT_REGS
+
+ /* Update pt_regs */
+diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
+index 235a5794296ac..1000d457c3321 100644
+--- a/arch/x86/entry/vdso/vma.c
++++ b/arch/x86/entry/vdso/vma.c
+@@ -438,7 +438,7 @@ bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+ static __init int vdso_setup(char *s)
+ {
+ vdso64_enabled = simple_strtoul(s, NULL, 0);
+- return 0;
++ return 1;
+ }
+ __setup("vdso=", vdso_setup);
+
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 9739019d4b67a..2704ec1e42a30 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -304,6 +304,16 @@ static int perf_ibs_init(struct perf_event *event)
+ hwc->config_base = perf_ibs->msr;
+ hwc->config = config;
+
++ /*
++ * rip recorded by IbsOpRip will not be consistent with rsp and rbp
++ * recorded as part of interrupt regs. Thus we need to use rip from
++ * interrupt regs while unwinding call stack. Setting _EARLY flag
++ * makes sure we unwind call-stack before perf sample rip is set to
++ * IbsOpRip.
++ */
++ if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
++ event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
++
+ return 0;
+ }
+
+@@ -687,6 +697,14 @@ fail:
+ data.raw = &raw;
+ }
+
++ /*
++ * rip recorded by IbsOpRip will not be consistent with rsp and rbp
++ * recorded as part of interrupt regs. Thus we need to use rip from
++ * interrupt regs while unwinding call stack.
++ */
++ if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
++ data.callchain = perf_callchain(event, iregs);
++
+ throttle = perf_event_overflow(event, &data, &regs);
+ out:
+ if (throttle) {
+@@ -759,9 +777,10 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
+ return ret;
+ }
+
+-static __init void perf_event_ibs_init(void)
++static __init int perf_event_ibs_init(void)
+ {
+ struct attribute **attr = ibs_op_format_attrs;
++ int ret;
+
+ /*
+ * Some chips fail to reset the fetch count when it is written; instead
+@@ -773,7 +792,9 @@ static __init void perf_event_ibs_init(void)
+ if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
+ perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
+
+- perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
++ ret = perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
++ if (ret)
++ return ret;
+
+ if (ibs_caps & IBS_CAPS_OPCNT) {
+ perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
+@@ -786,15 +807,35 @@ static __init void perf_event_ibs_init(void)
+ perf_ibs_op.cnt_mask |= IBS_OP_MAX_CNT_EXT_MASK;
+ }
+
+- perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
++ ret = perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
++ if (ret)
++ goto err_op;
++
++ ret = register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
++ if (ret)
++ goto err_nmi;
+
+- register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
+ pr_info("perf: AMD IBS detected (0x%08x)\n", ibs_caps);
++ return 0;
++
++err_nmi:
++ perf_pmu_unregister(&perf_ibs_op.pmu);
++ free_percpu(perf_ibs_op.pcpu);
++ perf_ibs_op.pcpu = NULL;
++err_op:
++ perf_pmu_unregister(&perf_ibs_fetch.pmu);
++ free_percpu(perf_ibs_fetch.pcpu);
++ perf_ibs_fetch.pcpu = NULL;
++
++ return ret;
+ }
+
+ #else /* defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD) */
+
+-static __init void perf_event_ibs_init(void) { }
++static __init int perf_event_ibs_init(void)
++{
++ return 0;
++}
+
+ #endif
+
+@@ -1064,9 +1105,7 @@ static __init int amd_ibs_init(void)
+ x86_pmu_amd_ibs_starting_cpu,
+ x86_pmu_amd_ibs_dying_cpu);
+
+- perf_event_ibs_init();
+-
+- return 0;
++ return perf_event_ibs_init();
+ }
+
+ /* Since we need the pci subsystem to init ibs we can't do this earlier: */
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index d87c9b246a8fa..26c3f3c721337 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -255,7 +255,7 @@ static struct event_constraint intel_icl_event_constraints[] = {
+ INTEL_EVENT_CONSTRAINT_RANGE(0x03, 0x0a, 0xf),
+ INTEL_EVENT_CONSTRAINT_RANGE(0x1f, 0x28, 0xf),
+ INTEL_EVENT_CONSTRAINT(0x32, 0xf), /* SW_PREFETCH_ACCESS.* */
+- INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x54, 0xf),
++ INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x56, 0xf),
+ INTEL_EVENT_CONSTRAINT_RANGE(0x60, 0x8b, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x04a3, 0xff), /* CYCLE_ACTIVITY.STALLS_TOTAL */
+ INTEL_UEVENT_CONSTRAINT(0x10a3, 0xff), /* CYCLE_ACTIVITY.CYCLES_MEM_ANY */
+diff --git a/arch/x86/include/asm/acenv.h b/arch/x86/include/asm/acenv.h
+index 9aff97f0de7fd..d937c55e717e6 100644
+--- a/arch/x86/include/asm/acenv.h
++++ b/arch/x86/include/asm/acenv.h
+@@ -13,7 +13,19 @@
+
+ /* Asm macros */
+
+-#define ACPI_FLUSH_CPU_CACHE() wbinvd()
++/*
++ * ACPI_FLUSH_CPU_CACHE() flushes caches on entering sleep states.
++ * It is required to prevent data loss.
++ *
++ * While running inside virtual machine, the kernel can bypass cache flushing.
++ * Changing sleep state in a virtual machine doesn't affect the host system
++ * sleep state and cannot lead to data loss.
++ */
++#define ACPI_FLUSH_CPU_CACHE() \
++do { \
++ if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) \
++ wbinvd(); \
++} while (0)
+
+ int __acpi_acquire_global_lock(unsigned int *lock);
+ int __acpi_release_global_lock(unsigned int *lock);
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index 11b7c06e2828c..6ad8d946cd3eb 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -186,6 +186,14 @@ extern int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages,
+ extern void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages);
+ #define arch_kexec_pre_free_pages arch_kexec_pre_free_pages
+
++#ifdef CONFIG_KEXEC_FILE
++struct purgatory_info;
++int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
++ Elf_Shdr *section,
++ const Elf_Shdr *relsec,
++ const Elf_Shdr *symtab);
++#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++#endif
+ #endif
+
+ typedef void crash_vmclear_fn(void);
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index ff0f2d90338a1..648be0bd20dfa 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -88,56 +88,4 @@ void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc);
+
+ extern int kernel_set_to_readonly;
+
+-#ifdef CONFIG_X86_64
+-/*
+- * Prevent speculative access to the page by either unmapping
+- * it (if we do not require access to any part of the page) or
+- * marking it uncacheable (if we want to try to retrieve data
+- * from non-poisoned lines in the page).
+- */
+-static inline int set_mce_nospec(unsigned long pfn, bool unmap)
+-{
+- unsigned long decoy_addr;
+- int rc;
+-
+- /* SGX pages are not in the 1:1 map */
+- if (arch_is_platform_page(pfn << PAGE_SHIFT))
+- return 0;
+- /*
+- * We would like to just call:
+- * set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
+- * but doing that would radically increase the odds of a
+- * speculative access to the poison page because we'd have
+- * the virtual address of the kernel 1:1 mapping sitting
+- * around in registers.
+- * Instead we get tricky. We create a non-canonical address
+- * that looks just like the one we want, but has bit 63 flipped.
+- * This relies on set_memory_XX() properly sanitizing any __pa()
+- * results with __PHYSICAL_MASK or PTE_PFN_MASK.
+- */
+- decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
+-
+- if (unmap)
+- rc = set_memory_np(decoy_addr, 1);
+- else
+- rc = set_memory_uc(decoy_addr, 1);
+- if (rc)
+- pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
+- return rc;
+-}
+-#define set_mce_nospec set_mce_nospec
+-
+-/* Restore full speculative operation to the pfn. */
+-static inline int clear_mce_nospec(unsigned long pfn)
+-{
+- return set_memory_wb((unsigned long) pfn_to_kaddr(pfn), 1);
+-}
+-#define clear_mce_nospec clear_mce_nospec
+-#else
+-/*
+- * Few people would run a 32-bit kernel on a machine that supports
+- * recoverable errors because they have too much memory to boot 32-bit.
+- */
+-#endif
+-
+ #endif /* _ASM_X86_SET_MEMORY_H */
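The inline being deleted here appears to be relocated rather than dropped: the mce/core.c hunk below switches all callers to a one-argument set_mce_nospec(). Its trick is worth noting: it aliases the poisoned page through a non-canonical "decoy" address, built by flipping bit 63 of the 1:1-map address, so the real kernel virtual address never sits in a register inviting speculative access. The arithmetic, with an illustrative PAGE_OFFSET:

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT  12
  #define PAGE_OFFSET 0xffff888000000000UL   /* illustrative value */

  int main(void)
  {
          uint64_t pfn = 0x1234;
          /* PAGE_OFFSET has bit 63 set; XOR clears it, yielding a
           * non-canonical alias of the same 1:1-map offset. */
          uint64_t decoy = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ (1UL << 63));

          printf("decoy %#llx\n", (unsigned long long)decoy);
          return 0;
  }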
+diff --git a/arch/x86/include/asm/suspend_32.h b/arch/x86/include/asm/suspend_32.h
+index 7b132d0312ebf..a800abb1a9925 100644
+--- a/arch/x86/include/asm/suspend_32.h
++++ b/arch/x86/include/asm/suspend_32.h
+@@ -19,7 +19,6 @@ struct saved_context {
+ u16 gs;
+ unsigned long cr0, cr2, cr3, cr4;
+ u64 misc_enable;
+- bool misc_enable_saved;
+ struct saved_msrs saved_msrs;
+ struct desc_ptr gdt_desc;
+ struct desc_ptr idt;
+@@ -28,6 +27,7 @@ struct saved_context {
+ unsigned long tr;
+ unsigned long safety;
+ unsigned long return_address;
++ bool misc_enable_saved;
+ } __attribute__((packed));
+
+ /* routines for saving/restoring kernel state */
+diff --git a/arch/x86/include/asm/suspend_64.h b/arch/x86/include/asm/suspend_64.h
+index 35bb35d28733e..54df06687d834 100644
+--- a/arch/x86/include/asm/suspend_64.h
++++ b/arch/x86/include/asm/suspend_64.h
+@@ -14,9 +14,13 @@
+ * Image of the saved processor state, used by the low level ACPI suspend to
+ * RAM code and by the low level hibernation code.
+ *
+- * If you modify it, fix arch/x86/kernel/acpi/wakeup_64.S and make sure that
+- * __save/__restore_processor_state(), defined in arch/x86/kernel/suspend_64.c,
+- * still work as required.
++ * If you modify it, check how it is used in arch/x86/kernel/acpi/wakeup_64.S
++ * and make sure that __save/__restore_processor_state(), defined in
++ * arch/x86/power/cpu.c, still work as required.
++ *
++ * Because the structure is packed, make sure to avoid unaligned members. For
++ * optimisation purposes but also because tools like kmemleak only search for
++ * pointers that are aligned.
+ */
+ struct saved_context {
+ struct pt_regs regs;
+@@ -36,7 +40,6 @@ struct saved_context {
+
+ unsigned long cr0, cr2, cr3, cr4;
+ u64 misc_enable;
+- bool misc_enable_saved;
+ struct saved_msrs saved_msrs;
+ unsigned long efer;
+ u16 gdt_pad; /* Unused */
+@@ -48,6 +51,7 @@ struct saved_context {
+ unsigned long tr;
+ unsigned long safety;
+ unsigned long return_address;
++ bool misc_enable_saved;
+ } __attribute__((packed));
+
+ #define loaddebug(thread,register) \
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index b70344bf66008..ed7d9cf71f68d 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -170,7 +170,7 @@ static __init int setup_apicpmtimer(char *s)
+ {
+ apic_calibrate_pmtmr = 1;
+ notsc_setup(NULL);
+- return 0;
++ return 1;
+ }
+ __setup("apicpmtimer", setup_apicpmtimer);
+ #endif
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index f5a48e66e4f54..a6e9c2794ef56 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -199,7 +199,13 @@ static void __init uv_tsc_check_sync(void)
+ int mmr_shift;
+ char *state;
+
+- /* Different returns from different UV BIOS versions */
++ /* UV5 guarantees synced TSCs; do not zero TSC_ADJUST */
++ if (!is_uv(UV2|UV3|UV4)) {
++ mark_tsc_async_resets("UV5+");
++ return;
++ }
++
++ /* UV2,3,4, UV BIOS TSC sync state available */
+ mmr = uv_early_read_mmr(UVH_TSC_SYNC_MMR);
+ mmr_shift =
+ is_uv2_hub() ? UVH_TSC_SYNC_SHIFT_UV2K : UVH_TSC_SYNC_SHIFT;
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index f7a5370a9b3b8..2c87d62f191e2 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -91,7 +91,7 @@ static bool ring3mwait_disabled __read_mostly;
+ static int __init ring3mwait_disable(char *__unused)
+ {
+ ring3mwait_disabled = true;
+- return 0;
++ return 1;
+ }
+ __setup("ring3mwait=disable", ring3mwait_disable);
+
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 9f4b508886dde..644f97b189630 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -1293,10 +1293,23 @@ out_free:
+ kfree(bank);
+ }
+
++static void __threshold_remove_device(struct threshold_bank **bp)
++{
++ unsigned int bank, numbanks = this_cpu_read(mce_num_banks);
++
++ for (bank = 0; bank < numbanks; bank++) {
++ if (!bp[bank])
++ continue;
++
++ threshold_remove_bank(bp[bank]);
++ bp[bank] = NULL;
++ }
++ kfree(bp);
++}
++
+ int mce_threshold_remove_device(unsigned int cpu)
+ {
+ struct threshold_bank **bp = this_cpu_read(threshold_banks);
+- unsigned int bank, numbanks = this_cpu_read(mce_num_banks);
+
+ if (!bp)
+ return 0;
+@@ -1307,13 +1320,7 @@ int mce_threshold_remove_device(unsigned int cpu)
+ */
+ this_cpu_write(threshold_banks, NULL);
+
+- for (bank = 0; bank < numbanks; bank++) {
+- if (bp[bank]) {
+- threshold_remove_bank(bp[bank]);
+- bp[bank] = NULL;
+- }
+- }
+- kfree(bp);
++ __threshold_remove_device(bp);
+ return 0;
+ }
+
+@@ -1350,15 +1357,14 @@ int mce_threshold_create_device(unsigned int cpu)
+ if (!(this_cpu_read(bank_map) & (1 << bank)))
+ continue;
+ err = threshold_create_bank(bp, cpu, bank);
+- if (err)
+- goto out_err;
++ if (err) {
++ __threshold_remove_device(bp);
++ return err;
++ }
+ }
+ this_cpu_write(threshold_banks, bp);
+
+ if (thresholding_irq_en)
+ mce_threshold_vector = amd_threshold_interrupt;
+ return 0;
+-out_err:
+- mce_threshold_remove_device(cpu);
+- return err;
+ }
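Factoring the loop into __threshold_remove_device() lets the create path unwind a partial failure directly, instead of bouncing through mce_threshold_remove_device(), which also reads and clears the per-CPU pointer that create has not published yet. The refactor's shape in miniature:

  #include <stdlib.h>

  #define NBANKS 4

  /* One teardown helper, shared by the remove path and by the error
   * path of create; it never touches per-CPU state. */
  static void destroy_banks(void **banks)
  {
          for (int i = 0; i < NBANKS; i++) {
                  free(banks[i]);
                  banks[i] = NULL;
          }
          free(banks);
  }

  int main(void)
  {
          void **banks = calloc(NBANKS, sizeof(*banks));

          if (!banks)
                  return 1;
          banks[0] = malloc(8);           /* partial setup... */
          destroy_banks(banks);           /* ...cleanly undone */
          return 0;
  }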
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 2d719e0d2e404..9ed616e7e1cfa 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -613,7 +613,7 @@ static int uc_decode_notifier(struct notifier_block *nb, unsigned long val,
+
+ pfn = mce->addr >> PAGE_SHIFT;
+ if (!memory_failure(pfn, 0)) {
+- set_mce_nospec(pfn, whole_page(mce));
++ set_mce_nospec(pfn);
+ mce->kflags |= MCE_HANDLED_UC;
+ }
+
+@@ -1350,7 +1350,7 @@ static void kill_me_maybe(struct callback_head *cb)
+
+ ret = memory_failure(p->mce_addr >> PAGE_SHIFT, flags);
+ if (!ret) {
+- set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
++ set_mce_nospec(p->mce_addr >> PAGE_SHIFT);
+ sync_core();
+ return;
+ }
+@@ -1374,7 +1374,7 @@ static void kill_me_never(struct callback_head *cb)
+ p->mce_count = 0;
+ pr_err("Kernel accessed poison in user space at %llx\n", p->mce_addr);
+ if (!memory_failure(p->mce_addr >> PAGE_SHIFT, 0))
+- set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
++ set_mce_nospec(p->mce_addr >> PAGE_SHIFT);
+ }
+
+ static void queue_task_work(struct mce *m, char *msg, void (*func)(struct callback_head *))
+diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
+index 3c24e6124d955..19876ebfb5044 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.c
++++ b/arch/x86/kernel/cpu/sgx/encl.c
+@@ -152,7 +152,7 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+
+ page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
+
+- ret = sgx_encl_get_backing(encl, page_index, &b);
++ ret = sgx_encl_lookup_backing(encl, page_index, &b);
+ if (ret)
+ return ret;
+
+@@ -718,7 +718,7 @@ static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
+ * 0 on success,
+ * -errno otherwise.
+ */
+-int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
++static int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ struct sgx_backing *backing)
+ {
+ pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
+@@ -743,6 +743,107 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ return 0;
+ }
+
++/*
++ * When called from ksgxd, returns the mem_cgroup of a struct mm stored
++ * in the enclave's mm_list. When not called from ksgxd, just returns
++ * the mem_cgroup of the current task.
++ */
++static struct mem_cgroup *sgx_encl_get_mem_cgroup(struct sgx_encl *encl)
++{
++ struct mem_cgroup *memcg = NULL;
++ struct sgx_encl_mm *encl_mm;
++ int idx;
++
++ /*
++ * If called from normal task context, return the mem_cgroup
++ * of the current task's mm. The remainder of the handling is for
++ * ksgxd.
++ */
++ if (!current_is_ksgxd())
++ return get_mem_cgroup_from_mm(current->mm);
++
++ /*
++ * Search the enclave's mm_list to find an mm associated with
++ * this enclave to charge the allocation to.
++ */
++ idx = srcu_read_lock(&encl->srcu);
++
++ list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
++ if (!mmget_not_zero(encl_mm->mm))
++ continue;
++
++ memcg = get_mem_cgroup_from_mm(encl_mm->mm);
++
++ mmput_async(encl_mm->mm);
++
++ break;
++ }
++
++ srcu_read_unlock(&encl->srcu, idx);
++
++ /*
++ * In the rare case that there isn't an mm associated with
++ * the enclave, set memcg to the current active mem_cgroup.
++ * This will be the root mem_cgroup if there is no active
++ * mem_cgroup.
++ */
++ if (!memcg)
++ return get_mem_cgroup_from_mm(NULL);
++
++ return memcg;
++}
++
++/**
++ * sgx_encl_alloc_backing() - allocate a new backing storage page
++ * @encl: an enclave pointer
++ * @page_index: enclave page index
++ * @backing: data for accessing backing storage for the page
++ *
++ * When called from ksgxd, sets the active memcg from one of the
++ * mms in the enclave's mm_list prior to any backing page allocation,
++ * in order to ensure that shmem page allocations are charged to the
++ * enclave.
++ *
++ * Return:
++ * 0 on success,
++ * -errno otherwise.
++ */
++int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
++ struct sgx_backing *backing)
++{
++ struct mem_cgroup *encl_memcg = sgx_encl_get_mem_cgroup(encl);
++ struct mem_cgroup *memcg = set_active_memcg(encl_memcg);
++ int ret;
++
++ ret = sgx_encl_get_backing(encl, page_index, backing);
++
++ set_active_memcg(memcg);
++ mem_cgroup_put(encl_memcg);
++
++ return ret;
++}
++
++/**
++ * sgx_encl_lookup_backing() - retrieve an existing backing storage page
++ * @encl: an enclave pointer
++ * @page_index: enclave page index
++ * @backing: data for accessing backing storage for the page
++ *
++ * Retrieve a backing page for loading data back into an EPC page with ELDU.
++ * It is the caller's responsibility to ensure that it is appropriate to use
++ * sgx_encl_lookup_backing() rather than sgx_encl_alloc_backing(). If lookup is
++ * not used correctly, this will cause an allocation which is not accounted for.
++ *
++ * Return:
++ * 0 on success,
++ * -errno otherwise.
++ */
++int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
++ struct sgx_backing *backing)
++{
++ return sgx_encl_get_backing(encl, page_index, backing);
++}
++
+ /**
+ * sgx_encl_put_backing() - Unpin the backing storage
+ * @backing: data for accessing backing storage for the page
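The new sgx_encl_alloc_backing()/sgx_encl_get_mem_cgroup() pair above boils down to a save/override/restore pattern around the shmem allocation, so that pages reclaimed by ksgxd are charged to the enclave's owner rather than to ksgxd itself. A toy model of just that pattern (the cgroup type and helpers are stand-ins, not kernel API):

#include <stdio.h>

/* Stand-ins modelling the set_active_memcg() save/override/restore
 * discipline used by sgx_encl_alloc_backing() above. */
struct cgroup { const char *name; };

static struct cgroup *active;

static struct cgroup *set_active(struct cgroup *cg)
{
        struct cgroup *old = active;

        active = cg;
        return old;
}

static void do_alloc(void)
{
        printf("allocation charged to %s\n",
               active ? active->name : "(none)");
}

int main(void)
{
        struct cgroup root = { "root" }, encl = { "enclave-owner" };
        struct cgroup *old;

        set_active(&root);
        old = set_active(&encl);  /* override for the allocation */
        do_alloc();               /* -> "enclave-owner" */
        set_active(old);          /* restore caller's context */
        do_alloc();               /* -> "root" */
        return 0;
}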
+diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
+index d44e7372151f0..332ef3568267e 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.h
++++ b/arch/x86/kernel/cpu/sgx/encl.h
+@@ -103,10 +103,13 @@ static inline int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
+ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
+ unsigned long end, unsigned long vm_flags);
+
++bool current_is_ksgxd(void);
+ void sgx_encl_release(struct kref *ref);
+ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
+-int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+- struct sgx_backing *backing);
++int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
++ struct sgx_backing *backing);
++int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
++ struct sgx_backing *backing);
+ void sgx_encl_put_backing(struct sgx_backing *backing);
+ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
+ struct sgx_encl_page *page);
+diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
+index ab4ec54bbdd94..a78652d43e61b 100644
+--- a/arch/x86/kernel/cpu/sgx/main.c
++++ b/arch/x86/kernel/cpu/sgx/main.c
+@@ -313,7 +313,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
+ sgx_encl_put_backing(backing);
+
+ if (!encl->secs_child_cnt && test_bit(SGX_ENCL_INITIALIZED, &encl->flags)) {
+- ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
++ ret = sgx_encl_alloc_backing(encl, PFN_DOWN(encl->size),
+ &secs_backing);
+ if (ret)
+ goto out;
+@@ -384,7 +384,7 @@ static void sgx_reclaim_pages(void)
+ page_index = PFN_DOWN(encl_page->desc - encl_page->encl->base);
+
+ mutex_lock(&encl_page->encl->lock);
+- ret = sgx_encl_get_backing(encl_page->encl, page_index, &backing[i]);
++ ret = sgx_encl_alloc_backing(encl_page->encl, page_index, &backing[i]);
+ if (ret) {
+ mutex_unlock(&encl_page->encl->lock);
+ goto skip;
+@@ -475,6 +475,11 @@ static bool __init sgx_page_reclaimer_init(void)
+ return true;
+ }
+
++bool current_is_ksgxd(void)
++{
++ return current == ksgxd_tsk;
++}
++
+ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
+ {
+ struct sgx_numa_node *node = &sgx_numa_nodes[nid];
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index f5da4a18070ae..1f0eb0eed5467 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -374,9 +374,6 @@ void machine_kexec(struct kimage *image)
+ #ifdef CONFIG_KEXEC_FILE
+ void *arch_kexec_kernel_image_load(struct kimage *image)
+ {
+- vfree(image->elf_headers);
+- image->elf_headers = NULL;
+-
+ if (!image->fops || !image->fops->load)
+ return ERR_PTR(-ENOEXEC);
+
+@@ -512,6 +509,15 @@ overflow:
+ (int)ELF64_R_TYPE(rel[i].r_info), value);
+ return -ENOEXEC;
+ }
++
++int arch_kimage_file_post_load_cleanup(struct kimage *image)
++{
++ vfree(image->elf_headers);
++ image->elf_headers = NULL;
++ image->elf_headers_sz = 0;
++
++ return kexec_image_post_load_cleanup_default(image);
++}
+ #endif /* CONFIG_KEXEC_FILE */
+
+ static int
+diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
+index b52407c56000e..879ef8c72f5c0 100644
+--- a/arch/x86/kernel/signal_compat.c
++++ b/arch/x86/kernel/signal_compat.c
+@@ -149,8 +149,10 @@ static inline void signal_compat_build_tests(void)
+
+ BUILD_BUG_ON(offsetof(siginfo_t, si_perf_data) != 0x18);
+ BUILD_BUG_ON(offsetof(siginfo_t, si_perf_type) != 0x20);
++ BUILD_BUG_ON(offsetof(siginfo_t, si_perf_flags) != 0x24);
+ BUILD_BUG_ON(offsetof(compat_siginfo_t, si_perf_data) != 0x10);
+ BUILD_BUG_ON(offsetof(compat_siginfo_t, si_perf_type) != 0x14);
++ BUILD_BUG_ON(offsetof(compat_siginfo_t, si_perf_flags) != 0x18);
+
+ CHECK_CSI_OFFSET(_sigpoll);
+ CHECK_CSI_SIZE (_sigpoll, 2*sizeof(int));
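The signal_compat.c hunk extends the compile-time ABI checks to the new si_perf_flags field. In userspace C the same guard looks like this; the struct is illustrative, not the real siginfo_t, the point is that the new field lands at a fixed, checkable offset:

#include <stddef.h>
#include <stdint.h>

struct demo_perf_info {
        uint64_t si_perf_data;   /* offset 0x00 */
        uint32_t si_perf_type;   /* offset 0x08 */
        uint32_t si_perf_flags;  /* offset 0x0c */
};

_Static_assert(offsetof(struct demo_perf_info, si_perf_flags) == 0x0c,
               "si_perf_flags moved; ABI would break");

int main(void)
{
        return 0;
}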
+diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
+index 0f3c307b37b3a..8e2b2552b5eea 100644
+--- a/arch/x86/kernel/step.c
++++ b/arch/x86/kernel/step.c
+@@ -180,8 +180,7 @@ void set_task_blockstep(struct task_struct *task, bool on)
+ *
+ * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if
+ * task is current or it can't be running, otherwise we can race
+- * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but
+- * PTRACE_KILL is not safe.
++ * with __switch_to_xtra(). We rely on ptrace_freeze_traced().
+ */
+ local_irq_disable();
+ debugctl = get_debugctlmsr();
+diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
+index 660b78827638f..8cc653ffdccd7 100644
+--- a/arch/x86/kernel/sys_x86_64.c
++++ b/arch/x86/kernel/sys_x86_64.c
+@@ -68,9 +68,6 @@ static int __init control_va_addr_alignment(char *str)
+ if (*str == 0)
+ return 1;
+
+- if (*str == '=')
+- str++;
+-
+ if (!strcmp(str, "32"))
+ va_align.flags = ALIGN_VA_32;
+ else if (!strcmp(str, "64"))
+@@ -80,11 +77,11 @@ static int __init control_va_addr_alignment(char *str)
+ else if (!strcmp(str, "on"))
+ va_align.flags = ALIGN_VA_32 | ALIGN_VA_64;
+ else
+- return 0;
++ pr_warn("invalid option value: 'align_va_addr=%s'\n", str);
+
+ return 1;
+ }
+-__setup("align_va_addr", control_va_addr_alignment);
++__setup("align_va_addr=", control_va_addr_alignment);
+
+ SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+ unsigned long, prot, unsigned long, flags,
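Two things change in the align_va_addr hunk above: the registered prefix now ends in '=', so the handler receives the bare value and no longer has to skip the '=' itself, and the handler returns 1 even for malformed values, since (as I read the kernel's obsolete __setup() matching) returning 0 tells the early parser the option was not consumed and the string then leaks through to init. A small mimic of the prefix matching, where match_setup() is a stand-in for the kernel's matcher:

#include <stdio.h>
#include <string.h>

/* The handler gets the command-line text that follows the registered
 * prefix, so a prefix ending in '=' hands the handler the bare value. */
static const char *match_setup(const char *arg, const char *prefix)
{
        size_t n = strlen(prefix);

        return strncmp(arg, prefix, n) ? NULL : arg + n;
}

int main(void)
{
        printf("%s\n", match_setup("align_va_addr=32", "align_va_addr"));  /* "=32" */
        printf("%s\n", match_setup("align_va_addr=32", "align_va_addr=")); /* "32"  */
        return 0;
}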
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 970d5c740b00b..dd12c15b69e65 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1508,6 +1508,7 @@ static void cancel_apic_timer(struct kvm_lapic *apic)
+ if (apic->lapic_timer.hv_timer_in_use)
+ cancel_hv_timer(apic);
+ preempt_enable();
++ atomic_set(&apic->lapic_timer.pending, 0);
+ }
+
+ static void apic_update_lvtt(struct kvm_lapic *apic)
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 3237d804564b1..d795ac816aecc 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3695,12 +3695,34 @@ vmcs12_guest_cr4(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ }
+
+ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
+- struct vmcs12 *vmcs12)
++ struct vmcs12 *vmcs12,
++ u32 vm_exit_reason, u32 exit_intr_info)
+ {
+ u32 idt_vectoring;
+ unsigned int nr;
+
+- if (vcpu->arch.exception.injected) {
++ /*
++ * Per the SDM, VM-Exits due to double and triple faults are never
++ * considered to occur during event delivery, even if the double/triple
++ * fault is the result of an escalating vectoring issue.
++ *
++ * Note, the SDM qualifies the double fault behavior with "The original
++ * event results in a double-fault exception". It's unclear why the
++ * qualification exists since exits due to double fault can occur only
++ * while vectoring a different exception (injected events are never
++ * subject to interception), i.e. there's _always_ an original event.
++ *
++ * The SDM also uses NMI as a confusing example for the "original event
++ * causes the VM exit directly" clause. NMI isn't special in any way,
++ * the same rule applies to all events that cause an exit directly.
++ * NMI is an odd choice for the example because NMIs can only occur on
++ * instruction boundaries, i.e. they _can't_ occur during vectoring.
++ */
++ if ((u16)vm_exit_reason == EXIT_REASON_TRIPLE_FAULT ||
++ ((u16)vm_exit_reason == EXIT_REASON_EXCEPTION_NMI &&
++ is_double_fault(exit_intr_info))) {
++ vmcs12->idt_vectoring_info_field = 0;
++ } else if (vcpu->arch.exception.injected) {
+ nr = vcpu->arch.exception.nr;
+ idt_vectoring = nr | VECTORING_INFO_VALID_MASK;
+
+@@ -3733,6 +3755,8 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
+ idt_vectoring |= INTR_TYPE_EXT_INTR;
+
+ vmcs12->idt_vectoring_info_field = idt_vectoring;
++ } else {
++ vmcs12->idt_vectoring_info_field = 0;
+ }
+ }
+
+@@ -4202,12 +4226,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ if (to_vmx(vcpu)->exit_reason.enclave_mode)
+ vmcs12->vm_exit_reason |= VMX_EXIT_REASONS_SGX_ENCLAVE_MODE;
+ vmcs12->exit_qualification = exit_qualification;
+- vmcs12->vm_exit_intr_info = exit_intr_info;
+-
+- vmcs12->idt_vectoring_info_field = 0;
+- vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+- vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+
++ /*
++ * On VM-Exit due to a failed VM-Entry, the VMCS isn't marked launched
++ * and only EXIT_REASON and EXIT_QUALIFICATION are updated, all other
++ * exit info fields are unmodified.
++ */
+ if (!(vmcs12->vm_exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY)) {
+ vmcs12->launch_state = 1;
+
+@@ -4219,7 +4243,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ * Transfer the event that L0 or L1 may wanted to inject into
+ * L2 to IDT_VECTORING_INFO_FIELD.
+ */
+- vmcs12_save_pending_event(vcpu, vmcs12);
++ vmcs12_save_pending_event(vcpu, vmcs12,
++ vm_exit_reason, exit_intr_info);
++
++ vmcs12->vm_exit_intr_info = exit_intr_info;
++ vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
++ vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+
+ /*
+ * According to spec, there's no need to store the guest's
+diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
+index e325c290a8162..2b9d7a7e83f77 100644
+--- a/arch/x86/kvm/vmx/vmcs.h
++++ b/arch/x86/kvm/vmx/vmcs.h
+@@ -104,6 +104,11 @@ static inline bool is_breakpoint(u32 intr_info)
+ return is_exception_n(intr_info, BP_VECTOR);
+ }
+
++static inline bool is_double_fault(u32 intr_info)
++{
++ return is_exception_n(intr_info, DF_VECTOR);
++}
++
+ static inline bool is_page_fault(u32 intr_info)
+ {
+ return is_exception_n(intr_info, PF_VECTOR);
+diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c
+index 65d15df6212d6..0e65d00e2339f 100644
+--- a/arch/x86/lib/delay.c
++++ b/arch/x86/lib/delay.c
+@@ -54,8 +54,8 @@ static void delay_loop(u64 __loops)
+ " jnz 2b \n"
+ "3: dec %0 \n"
+
+- : /* we don't need output */
+- :"a" (loops)
++ : "+a" (loops)
++ :
+ );
+ }
+
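The delay.c change is a classic inline-asm constraint fix: the asm body decrements the register, so it must be declared read-write. With the old input-only "a" constraint the compiler was free to assume the register still held loops afterwards. A standalone illustration of the read-write form (GNU C, x86):

#include <stdio.h>

static unsigned long countdown(unsigned long loops)
{
        asm volatile(
                "1:     dec %0          \n"
                "       jnz 1b          \n"
                : "+r" (loops)          /* read-write: asm clobbers it */
                :
        );
        return loops;   /* provably 0 here, and the compiler knows it */
}

int main(void)
{
        printf("%lu\n", countdown(1000000));
        return 0;
}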
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index 4ba2a3ee4bce1..d5ef64ddd35e9 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -101,7 +101,7 @@ int pat_debug_enable;
+ static int __init pat_debug_setup(char *str)
+ {
+ pat_debug_enable = 1;
+- return 0;
++ return 1;
+ }
+ __setup("debugpat", pat_debug_setup);
+
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index b4072115c8ef6..1f10181044889 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -19,6 +19,7 @@
+ #include <linux/vmstat.h>
+ #include <linux/kernel.h>
+ #include <linux/cc_platform.h>
++#include <linux/set_memory.h>
+
+ #include <asm/e820/api.h>
+ #include <asm/processor.h>
+@@ -29,7 +30,6 @@
+ #include <asm/pgalloc.h>
+ #include <asm/proto.h>
+ #include <asm/memtype.h>
+-#include <asm/set_memory.h>
+ #include <asm/hyperv-tlfs.h>
+ #include <asm/mshyperv.h>
+
+@@ -1816,7 +1816,7 @@ static inline int cpa_clear_pages_array(struct page **pages, int numpages,
+ }
+
+ /*
+- * _set_memory_prot is an internal helper for callers that have been passed
++ * __set_memory_prot is an internal helper for callers that have been passed
+ * a pgprot_t value from upper layers and a reservation has already been taken.
+ * If you want to set the pgprot to a specific page protocol, use the
+ * set_memory_xx() functions.
+@@ -1925,6 +1925,51 @@ int set_memory_wb(unsigned long addr, int numpages)
+ }
+ EXPORT_SYMBOL(set_memory_wb);
+
++/* Prevent speculative access to a page by marking it not-present */
++#ifdef CONFIG_X86_64
++int set_mce_nospec(unsigned long pfn)
++{
++ unsigned long decoy_addr;
++ int rc;
++
++ /* SGX pages are not in the 1:1 map */
++ if (arch_is_platform_page(pfn << PAGE_SHIFT))
++ return 0;
++ /*
++ * We would like to just call:
++ * set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
++ * but doing that would radically increase the odds of a
++ * speculative access to the poison page because we'd have
++ * the virtual address of the kernel 1:1 mapping sitting
++ * around in registers.
++ * Instead we get tricky. We create a non-canonical address
++ * that looks just like the one we want, but has bit 63 flipped.
++ * This relies on set_memory_XX() properly sanitizing any __pa()
++ * results with __PHYSICAL_MASK or PTE_PFN_MASK.
++ */
++ decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
++
++ rc = set_memory_np(decoy_addr, 1);
++ if (rc)
++ pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
++ return rc;
++}
++
++static int set_memory_present(unsigned long *addr, int numpages)
++{
++ return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 0);
++}
++
++/* Restore full speculative operation to the pfn. */
++int clear_mce_nospec(unsigned long pfn)
++{
++ unsigned long addr = (unsigned long) pfn_to_kaddr(pfn);
++
++ return set_memory_present(&addr, 1);
++}
++EXPORT_SYMBOL_GPL(clear_mce_nospec);
++#endif /* CONFIG_X86_64 */
++
+ int set_memory_x(unsigned long addr, int numpages)
+ {
+ if (!(__supported_pte_mask & _PAGE_NX))
+diff --git a/arch/x86/pci/irq.c b/arch/x86/pci/irq.c
+index 97b63e35e1528..21c4bc41741fe 100644
+--- a/arch/x86/pci/irq.c
++++ b/arch/x86/pci/irq.c
+@@ -253,6 +253,15 @@ static void write_pc_conf_nybble(u8 base, u8 index, u8 val)
+ pc_conf_set(reg, x);
+ }
+
++/*
++ * FinALi pirq rules are as follows:
++ *
++ * - bit 0 selects between INTx Routing Table Mapping Registers,
++ *
++ * - bit 3 selects the nibble within the INTx Routing Table Mapping Register,
++ *
++ * - bits 7:4 map to bits 3:0 of the PCI INTx Sensitivity Register.
++ */
+ static int pirq_finali_get(struct pci_dev *router, struct pci_dev *dev,
+ int pirq)
+ {
+@@ -260,11 +269,13 @@ static int pirq_finali_get(struct pci_dev *router, struct pci_dev *dev,
+ 0, 9, 3, 10, 4, 5, 7, 6, 0, 11, 0, 12, 0, 14, 0, 15
+ };
+ unsigned long flags;
++ u8 index;
+ u8 x;
+
++ index = (pirq & 1) << 1 | (pirq & 8) >> 3;
+ raw_spin_lock_irqsave(&pc_conf_lock, flags);
+ pc_conf_set(PC_CONF_FINALI_LOCK, PC_CONF_FINALI_LOCK_KEY);
+- x = irqmap[read_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, pirq - 1)];
++ x = irqmap[read_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, index)];
+ pc_conf_set(PC_CONF_FINALI_LOCK, 0);
+ raw_spin_unlock_irqrestore(&pc_conf_lock, flags);
+ return x;
+@@ -278,13 +289,15 @@ static int pirq_finali_set(struct pci_dev *router, struct pci_dev *dev,
+ };
+ u8 val = irqmap[irq];
+ unsigned long flags;
++ u8 index;
+
+ if (!val)
+ return 0;
+
++ index = (pirq & 1) << 1 | (pirq & 8) >> 3;
+ raw_spin_lock_irqsave(&pc_conf_lock, flags);
+ pc_conf_set(PC_CONF_FINALI_LOCK, PC_CONF_FINALI_LOCK_KEY);
+- write_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, pirq - 1, val);
++ write_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, index, val);
+ pc_conf_set(PC_CONF_FINALI_LOCK, 0);
+ raw_spin_unlock_irqrestore(&pc_conf_lock, flags);
+ return 1;
+@@ -293,7 +306,7 @@ static int pirq_finali_set(struct pci_dev *router, struct pci_dev *dev,
+ static int pirq_finali_lvl(struct pci_dev *router, struct pci_dev *dev,
+ int pirq, int irq)
+ {
+- u8 mask = ~(1u << (pirq - 1));
++ u8 mask = ~((pirq & 0xf0u) >> 4);
+ unsigned long flags;
+ u8 trig;
+
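The FinALi hunks replace the old pirq - 1 indexing with decoding per the new comment: bit 0 picks the INTx routing register, bit 3 the nibble inside it, and bits 7:4 carry the sensitivity-register bit. A quick standalone decode of a few cookie values, reusing the exact expressions from the hunk:

#include <stdio.h>

int main(void)
{
        static const unsigned int pirqs[] = { 0x01, 0x08, 0x09, 0x42 };
        unsigned int i;

        for (i = 0; i < sizeof(pirqs) / sizeof(pirqs[0]); i++) {
                unsigned int pirq  = pirqs[i];
                unsigned int index = (pirq & 1) << 1 | (pirq & 8) >> 3;
                unsigned int mask  = 0xffu & ~((pirq & 0xf0u) >> 4);

                printf("pirq %#04x -> nybble index %u, level mask %#04x\n",
                       pirq, index, mask);
        }
        return 0;
}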
+diff --git a/arch/x86/um/ldt.c b/arch/x86/um/ldt.c
+index 3ee234b6234dd..255a44dd415a9 100644
+--- a/arch/x86/um/ldt.c
++++ b/arch/x86/um/ldt.c
+@@ -23,9 +23,11 @@ static long write_ldt_entry(struct mm_id *mm_idp, int func,
+ {
+ long res;
+ void *stub_addr;
++
++ BUILD_BUG_ON(sizeof(*desc) % sizeof(long));
++
+ res = syscall_stub_data(mm_idp, (unsigned long *)desc,
+- (sizeof(*desc) + sizeof(long) - 1) &
+- ~(sizeof(long) - 1),
++ sizeof(*desc) / sizeof(long),
+ addr, &stub_addr);
+ if (!res) {
+ unsigned long args[] = { func,
+diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
+index a1029a5b6a1d9..ee08238099f49 100644
+--- a/arch/xtensa/kernel/entry.S
++++ b/arch/xtensa/kernel/entry.S
+@@ -442,7 +442,6 @@ KABI_W or a3, a3, a0
+ moveqz a3, a0, a2 # a3 = LOCKLEVEL iff interrupt
+ KABI_W movi a2, PS_WOE_MASK
+ KABI_W or a3, a3, a2
+- rsr a2, exccause
+ #endif
+
+ /* restore return address (or 0 if return to userspace) */
+@@ -469,19 +468,27 @@ KABI_W or a3, a3, a2
+
+ save_xtregs_opt a1 a3 a4 a5 a6 a7 PT_XTREGS_OPT
+
++#ifdef CONFIG_TRACE_IRQFLAGS
++ rsr abi_tmp0, ps
++ extui abi_tmp0, abi_tmp0, PS_INTLEVEL_SHIFT, PS_INTLEVEL_WIDTH
++ beqz abi_tmp0, 1f
++ abi_call trace_hardirqs_off
++1:
++#endif
++
+ /* Go to second-level dispatcher. Set up parameters to pass to the
+ * exception handler and call the exception handler.
+ */
+
+- rsr a4, excsave1
+- addx4 a4, a2, a4
+- l32i a4, a4, EXC_TABLE_DEFAULT # load handler
+- mov abi_arg1, a2 # pass EXCCAUSE
++ l32i abi_arg1, a1, PT_EXCCAUSE # pass EXCCAUSE
++ rsr abi_tmp0, excsave1
++ addx4 abi_tmp0, abi_arg1, abi_tmp0
++ l32i abi_tmp0, abi_tmp0, EXC_TABLE_DEFAULT # load handler
+ mov abi_arg0, a1 # pass stack frame
+
+ /* Call the second-level handler */
+
+- abi_callx a4
++ abi_callx abi_tmp0
+
+ /* Jump here for exception exit */
+ .global common_exception_return
+diff --git a/arch/xtensa/kernel/ptrace.c b/arch/xtensa/kernel/ptrace.c
+index bb3f4797d212b..db6cdea471d83 100644
+--- a/arch/xtensa/kernel/ptrace.c
++++ b/arch/xtensa/kernel/ptrace.c
+@@ -226,12 +226,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+
+ void user_enable_single_step(struct task_struct *child)
+ {
+- child->ptrace |= PT_SINGLESTEP;
++ set_tsk_thread_flag(child, TIF_SINGLESTEP);
+ }
+
+ void user_disable_single_step(struct task_struct *child)
+ {
+- child->ptrace &= ~PT_SINGLESTEP;
++ clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+ }
+
+ /*
+diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c
+index f6c949895b3eb..4847578f912b7 100644
+--- a/arch/xtensa/kernel/signal.c
++++ b/arch/xtensa/kernel/signal.c
+@@ -473,7 +473,7 @@ static void do_signal(struct pt_regs *regs)
+ /* Set up the stack frame */
+ ret = setup_frame(&ksig, sigmask_to_save(), regs);
+ signal_setup_done(ret, &ksig, 0);
+- if (current->ptrace & PT_SINGLESTEP)
++ if (test_thread_flag(TIF_SINGLESTEP))
+ task_pt_regs(current)->icountlevel = 1;
+
+ return;
+@@ -499,7 +499,7 @@ static void do_signal(struct pt_regs *regs)
+ /* If there's no signal to deliver, we just restore the saved mask. */
+ restore_saved_sigmask();
+
+- if (current->ptrace & PT_SINGLESTEP)
++ if (test_thread_flag(TIF_SINGLESTEP))
+ task_pt_regs(current)->icountlevel = 1;
+ return;
+ }
+diff --git a/arch/xtensa/kernel/traps.c b/arch/xtensa/kernel/traps.c
+index 9345007d474d3..5f86208c67c87 100644
+--- a/arch/xtensa/kernel/traps.c
++++ b/arch/xtensa/kernel/traps.c
+@@ -242,12 +242,8 @@ DEFINE_PER_CPU(unsigned long, nmi_count);
+
+ void do_nmi(struct pt_regs *regs)
+ {
+- struct pt_regs *old_regs;
++ struct pt_regs *old_regs = set_irq_regs(regs);
+
+- if ((regs->ps & PS_INTLEVEL_MASK) < LOCKLEVEL)
+- trace_hardirqs_off();
+-
+- old_regs = set_irq_regs(regs);
+ nmi_enter();
+ ++*this_cpu_ptr(&nmi_count);
+ check_valid_nmi();
+@@ -269,12 +265,9 @@ void do_interrupt(struct pt_regs *regs)
+ XCHAL_INTLEVEL6_MASK,
+ XCHAL_INTLEVEL7_MASK,
+ };
+- struct pt_regs *old_regs;
++ struct pt_regs *old_regs = set_irq_regs(regs);
+ unsigned unhandled = ~0u;
+
+- trace_hardirqs_off();
+-
+- old_regs = set_irq_regs(regs);
+ irq_enter();
+
+ for (;;) {
+diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c
+index 8eb6ad1a3a1de..49f6260a8c1da 100644
+--- a/arch/xtensa/platforms/iss/simdisk.c
++++ b/arch/xtensa/platforms/iss/simdisk.c
+@@ -211,12 +211,18 @@ static ssize_t proc_read_simdisk(struct file *file, char __user *buf,
+ struct simdisk *dev = pde_data(file_inode(file));
+ const char *s = dev->filename;
+ if (s) {
+- ssize_t n = simple_read_from_buffer(buf, size, ppos,
+- s, strlen(s));
+- if (n < 0)
+- return n;
+- buf += n;
+- size -= n;
++ ssize_t len = strlen(s);
++ char *temp = kmalloc(len + 2, GFP_KERNEL);
++
++ if (!temp)
++ return -ENOMEM;
++
++ len = scnprintf(temp, len + 2, "%s\n", s);
++ len = simple_read_from_buffer(buf, size, ppos,
++ temp, len);
++
++ kfree(temp);
++ return len;
+ }
+ return simple_read_from_buffer(buf, size, ppos, "\n", 1);
+ }
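The simdisk fix above is about partial reads: previously the filename and the trailing newline came from two separate simple_read_from_buffer() calls, so a reader resuming at an offset past the name never saw the '\n' (the second call's 1-byte buffer is exhausted at any nonzero offset). Composing both into one buffer first sidesteps that, roughly:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build "<name>\n" in one heap buffer before copying a window of it
 * out; with the newline part of a single buffer, offset-based partial
 * reads can still reach it. */
static char *format_name(const char *name, size_t *lenp)
{
        size_t len = strlen(name);
        char *buf = malloc(len + 2);

        if (!buf)
                return NULL;
        *lenp = snprintf(buf, len + 2, "%s\n", name);
        return buf;
}

int main(void)
{
        size_t len;
        char *buf = format_name("/tmp/disk.img", &len);

        if (!buf)
                return 1;
        fwrite(buf, 1, len, stdout);
        free(buf);
        return 0;
}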
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 809bc612d96b3..8b3368095bfe0 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
+ */
+ bfqg->bfqd = bfqd;
+ bfqg->active_entities = 0;
++ bfqg->online = true;
+ bfqg->rq_pos_tree = RB_ROOT;
+ }
+
+@@ -585,28 +586,11 @@ static void bfq_group_set_parent(struct bfq_group *bfqg,
+ entity->sched_data = &parent->sched_data;
+ }
+
+-static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd,
+- struct blkcg *blkcg)
++static void bfq_link_bfqg(struct bfq_data *bfqd, struct bfq_group *bfqg)
+ {
+- struct blkcg_gq *blkg;
+-
+- blkg = blkg_lookup(blkcg, bfqd->queue);
+- if (likely(blkg))
+- return blkg_to_bfqg(blkg);
+- return NULL;
+-}
+-
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+- struct blkcg *blkcg)
+-{
+- struct bfq_group *bfqg, *parent;
++ struct bfq_group *parent;
+ struct bfq_entity *entity;
+
+- bfqg = bfq_lookup_bfqg(bfqd, blkcg);
+-
+- if (unlikely(!bfqg))
+- return NULL;
+-
+ /*
+ * Update chain of bfq_groups as we might be handling a leaf group
+ * which, along with some of its relatives, has not been hooked yet
+@@ -623,8 +607,24 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+ bfq_group_set_parent(curr_bfqg, parent);
+ }
+ }
++}
+
+- return bfqg;
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio)
++{
++ struct blkcg_gq *blkg = bio->bi_blkg;
++ struct bfq_group *bfqg;
++
++ while (blkg) {
++ bfqg = blkg_to_bfqg(blkg);
++ if (bfqg->online) {
++ bio_associate_blkg_from_css(bio, &blkg->blkcg->css);
++ return bfqg;
++ }
++ blkg = blkg->parent;
++ }
++ bio_associate_blkg_from_css(bio,
++ &bfqg_to_blkg(bfqd->root_group)->blkcg->css);
++ return bfqd->root_group;
+ }
+
+ /**
+@@ -706,25 +706,15 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ * Move bic to blkcg, assuming that bfqd->lock is held; which makes
+ * sure that the reference to cgroup is valid across the call (see
+ * comments in bfq_bic_update_cgroup on this issue)
+- *
+- * NOTE: an alternative approach might have been to store the current
+- * cgroup in bfqq and getting a reference to it, reducing the lookup
+- * time here, at the price of slightly more complex code.
+ */
+-static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+- struct bfq_io_cq *bic,
+- struct blkcg *blkcg)
++static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic,
++ struct bfq_group *bfqg)
+ {
+ struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
+ struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
+- struct bfq_group *bfqg;
+ struct bfq_entity *entity;
+
+- bfqg = bfq_find_set_group(bfqd, blkcg);
+-
+- if (unlikely(!bfqg))
+- bfqg = bfqd->root_group;
+-
+ if (async_bfqq) {
+ entity = &async_bfqq->entity;
+
+@@ -735,9 +725,39 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ }
+
+ if (sync_bfqq) {
+- entity = &sync_bfqq->entity;
+- if (entity->sched_data != &bfqg->sched_data)
+- bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
++ if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
++ /* We are the only user of this bfqq, just move it */
++ if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
++ bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
++ } else {
++ struct bfq_queue *bfqq;
++
++ /*
++ * The queue was merged to a different queue. Check
++ * that the merge chain still belongs to the same
++ * cgroup.
++ */
++ for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
++ if (bfqq->entity.sched_data !=
++ &bfqg->sched_data)
++ break;
++ if (bfqq) {
++ /*
++ * Some queue changed cgroup so the merge is
++ * not valid anymore. We cannot easily just
++ * cancel the merge (by clearing new_bfqq) as
++ * there may be other processes using this
++ * queue and holding refs to all queues below
++ * sync_bfqq->new_bfqq. Similarly if the merge
++ * already happened, we need to detach from
++ * bfqq now so that we cannot merge bio to a
++ * request from the old cgroup.
++ */
++ bfq_put_cooperator(sync_bfqq);
++ bfq_release_process_ref(bfqd, sync_bfqq);
++ bic_set_bfqq(bic, NULL, 1);
++ }
++ }
+ }
+
+ return bfqg;
+@@ -746,20 +766,24 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ {
+ struct bfq_data *bfqd = bic_to_bfqd(bic);
+- struct bfq_group *bfqg = NULL;
++ struct bfq_group *bfqg = bfq_bio_bfqg(bfqd, bio);
+ uint64_t serial_nr;
+
+- rcu_read_lock();
+- serial_nr = __bio_blkcg(bio)->css.serial_nr;
++ serial_nr = bfqg_to_blkg(bfqg)->blkcg->css.serial_nr;
+
+ /*
+ * Check whether blkcg has changed. The condition may trigger
+ * spuriously on a newly created cic but there's no harm.
+ */
+ if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr))
+- goto out;
++ return;
+
+- bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio));
++ /*
++ * New cgroup for this process. Make sure it is linked to bfq internal
++ * cgroup hierarchy.
++ */
++ bfq_link_bfqg(bfqd, bfqg);
++ __bfq_bic_change_cgroup(bfqd, bic, bfqg);
+ /*
+ * Update blkg_path for bfq_log_* functions. We cache this
+ * path, and update it here, for the following
+@@ -812,8 +836,6 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ */
+ blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path));
+ bic->blkcg_serial_nr = serial_nr;
+-out:
+- rcu_read_unlock();
+ }
+
+ /**
+@@ -941,6 +963,7 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+
+ put_async_queues:
+ bfq_put_async_queues(bfqd, bfqg);
++ bfqg->online = false;
+
+ spin_unlock_irqrestore(&bfqd->lock, flags);
+ /*
+@@ -1430,7 +1453,7 @@ void bfq_end_wr_async(struct bfq_data *bfqd)
+ bfq_end_wr_async_queues(bfqd, bfqd->root_group);
+ }
+
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, struct blkcg *blkcg)
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio)
+ {
+ return bfqd->root_group;
+ }
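bfq_bio_bfqg() above introduces a fallback walk: if the bio's own blkg maps to a bfq group that has gone offline, climb the parent chain and re-associate the bio with the first ancestor still online, bottoming out at the root group. The shape of that walk in generic form (all types here are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct group {
        struct group *parent;
        bool online;
        const char *name;
};

/* Nearest online ancestor, else a known-good root. */
static struct group *nearest_online(struct group *g, struct group *root)
{
        for (; g; g = g->parent)
                if (g->online)
                        return g;
        return root;
}

int main(void)
{
        struct group root = { NULL,  true,  "root" };
        struct group mid  = { &root, false, "mid"  };
        struct group leaf = { &mid,  false, "leaf" };

        printf("%s\n", nearest_online(&leaf, &root)->name); /* "root" */
        return 0;
}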
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 963f9f549232b..5d2d3fe65a9d9 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2133,9 +2133,7 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ if (!bfqd->last_completed_rq_bfqq ||
+ bfqd->last_completed_rq_bfqq == bfqq ||
+ bfq_bfqq_has_short_ttime(bfqq) ||
+- bfqq->dispatched > 0 ||
+- now_ns - bfqd->last_completion >= 4 * NSEC_PER_MSEC ||
+- bfqd->last_completed_rq_bfqq == bfqq->waker_bfqq)
++ now_ns - bfqd->last_completion >= 4 * NSEC_PER_MSEC)
+ return;
+
+ /*
+@@ -2210,7 +2208,7 @@ static void bfq_add_request(struct request *rq)
+ bfqq->queued[rq_is_sync(rq)]++;
+ bfqd->queued++;
+
+- if (RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_sync(bfqq)) {
++ if (bfq_bfqq_sync(bfqq) && RQ_BIC(rq)->requests <= 1) {
+ bfq_check_waker(bfqd, bfqq, now_ns);
+
+ /*
+@@ -2463,10 +2461,17 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
+
+ spin_lock_irq(&bfqd->lock);
+
+- if (bic)
++ if (bic) {
++ /*
++ * Make sure cgroup info is uptodate for current process before
++ * considering the merge.
++ */
++ bfq_bic_update_cgroup(bic, bio);
++
+ bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
+- else
++ } else {
+ bfqd->bio_bfqq = NULL;
++ }
+ bfqd->bio_bic = bic;
+
+ ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
+@@ -2496,8 +2501,6 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
+ return ELEVATOR_NO_MERGE;
+ }
+
+-static struct bfq_queue *bfq_init_rq(struct request *rq);
+-
+ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ enum elv_merge type)
+ {
+@@ -2506,7 +2509,7 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ blk_rq_pos(req) <
+ blk_rq_pos(container_of(rb_prev(&req->rb_node),
+ struct request, rb_node))) {
+- struct bfq_queue *bfqq = bfq_init_rq(req);
++ struct bfq_queue *bfqq = RQ_BFQQ(req);
+ struct bfq_data *bfqd;
+ struct request *prev, *next_rq;
+
+@@ -2558,8 +2561,8 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
+ struct request *next)
+ {
+- struct bfq_queue *bfqq = bfq_init_rq(rq),
+- *next_bfqq = bfq_init_rq(next);
++ struct bfq_queue *bfqq = RQ_BFQQ(rq),
++ *next_bfqq = RQ_BFQQ(next);
+
+ if (!bfqq)
+ goto remove;
+@@ -2764,6 +2767,14 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ if (process_refs == 0 || new_process_refs == 0)
+ return NULL;
+
++ /*
++ * Make sure merged queues belong to the same parent. Parents could
++ * have changed since the time we decided the two queues are suitable
++ * for merging.
++ */
++ if (new_bfqq->entity.parent != bfqq->entity.parent)
++ return NULL;
++
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
+ new_bfqq->pid);
+
+@@ -2901,9 +2912,12 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ struct bfq_queue *new_bfqq =
+ bfq_setup_merge(bfqq, stable_merge_bfqq);
+
+- bic->stably_merged = true;
+- if (new_bfqq && new_bfqq->bic)
+- new_bfqq->bic->stably_merged = true;
++ if (new_bfqq) {
++ bic->stably_merged = true;
++ if (new_bfqq->bic)
++ new_bfqq->bic->stably_merged =
++ true;
++ }
+ return new_bfqq;
+ } else
+ return NULL;
+@@ -5310,7 +5324,7 @@ static void bfq_put_stable_ref(struct bfq_queue *bfqq)
+ bfq_put_queue(bfqq);
+ }
+
+-static void bfq_put_cooperator(struct bfq_queue *bfqq)
++void bfq_put_cooperator(struct bfq_queue *bfqq)
+ {
+ struct bfq_queue *__bfqq, *next;
+
+@@ -5716,14 +5730,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq;
+ struct bfq_group *bfqg;
+
+- rcu_read_lock();
+-
+- bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio));
+- if (!bfqg) {
+- bfqq = &bfqd->oom_bfqq;
+- goto out;
+- }
+-
++ bfqg = bfq_bio_bfqg(bfqd, bio);
+ if (!is_sync) {
+ async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
+ ioprio);
+@@ -5769,8 +5776,6 @@ out:
+
+ if (bfqq != &bfqd->oom_bfqq && is_sync && !respawn)
+ bfqq = bfq_do_or_sched_stable_merge(bfqd, bfqq, bic);
+-
+- rcu_read_unlock();
+ return bfqq;
+ }
+
+@@ -6117,6 +6122,8 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
+ unsigned int cmd_flags) {}
+ #endif /* CONFIG_BFQ_CGROUP_DEBUG */
+
++static struct bfq_queue *bfq_init_rq(struct request *rq);
++
+ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ bool at_head)
+ {
+@@ -6132,18 +6139,15 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ bfqg_stats_update_legacy_io(q, rq);
+ #endif
+ spin_lock_irq(&bfqd->lock);
++ bfqq = bfq_init_rq(rq);
+ if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
+ spin_unlock_irq(&bfqd->lock);
+ blk_mq_free_requests(&free);
+ return;
+ }
+
+- spin_unlock_irq(&bfqd->lock);
+-
+ trace_block_rq_insert(rq);
+
+- spin_lock_irq(&bfqd->lock);
+- bfqq = bfq_init_rq(rq);
+ if (!bfqq || at_head) {
+ if (at_head)
+ list_add(&rq->queuelist, &bfqd->dispatch);
+@@ -6563,6 +6567,7 @@ static void bfq_finish_requeue_request(struct request *rq)
+ bfq_completed_request(bfqq, bfqd);
+ }
+ bfq_finish_requeue_request_body(bfqq);
++ RQ_BIC(rq)->requests--;
+ spin_unlock_irqrestore(&bfqd->lock, flags);
+
+ /*
+@@ -6796,6 +6801,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+
+ bfqq_request_allocated(bfqq);
+ bfqq->ref++;
++ bic->requests++;
+ bfq_log_bfqq(bfqd, bfqq, "get_request %p: bfqq %p, %d",
+ rq, bfqq, bfqq->ref);
+
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 07288b9da3895..4149e1be19182 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -469,6 +469,7 @@ struct bfq_io_cq {
+ struct bfq_queue *stable_merge_bfqq;
+
+ bool stably_merged; /* non splittable if true */
++ unsigned int requests; /* Number of requests this process has in flight */
+ };
+
+ /**
+@@ -929,6 +930,8 @@ struct bfq_group {
+
+ /* reference counter (see comments in bfq_bic_update_cgroup) */
+ int ref;
++ /* Is bfq_group still online? */
++ bool online;
+
+ struct bfq_entity entity;
+ struct bfq_sched_data sched_data;
+@@ -980,6 +983,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ bool compensate, enum bfqq_expiration reason);
+ void bfq_put_queue(struct bfq_queue *bfqq);
++void bfq_put_cooperator(struct bfq_queue *bfqq);
+ void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
+ void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+ void bfq_schedule_dispatch(struct bfq_data *bfqd);
+@@ -1007,8 +1011,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg);
+ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio);
+ void bfq_end_wr_async(struct bfq_data *bfqd);
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+- struct blkcg *blkcg);
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio);
+ struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
+ struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
+ struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 87a1c0c3fa401..b19c252a5530b 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1888,12 +1888,8 @@ EXPORT_SYMBOL_GPL(bio_associate_blkg);
+ */
+ void bio_clone_blkg_association(struct bio *dst, struct bio *src)
+ {
+- if (src->bi_blkg) {
+- if (dst->bi_blkg)
+- blkg_put(dst->bi_blkg);
+- blkg_get(src->bi_blkg);
+- dst->bi_blkg = src->bi_blkg;
+- }
++ if (src->bi_blkg)
++ bio_associate_blkg_from_css(dst, &bio_blkcg(src)->css);
+ }
+ EXPORT_SYMBOL_GPL(bio_clone_blkg_association);
+
+diff --git a/block/blk-ia-ranges.c b/block/blk-ia-ranges.c
+index 18c68d8b9138e..56ed48d2954e6 100644
+--- a/block/blk-ia-ranges.c
++++ b/block/blk-ia-ranges.c
+@@ -54,13 +54,8 @@ static ssize_t blk_ia_range_sysfs_show(struct kobject *kobj,
+ container_of(attr, struct blk_ia_range_sysfs_entry, attr);
+ struct blk_independent_access_range *iar =
+ container_of(kobj, struct blk_independent_access_range, kobj);
+- ssize_t ret;
+
+- mutex_lock(&iar->queue->sysfs_lock);
+- ret = entry->show(iar, buf);
+- mutex_unlock(&iar->queue->sysfs_lock);
+-
+- return ret;
++ return entry->show(iar, buf);
+ }
+
+ static const struct sysfs_ops blk_ia_range_sysfs_ops = {
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 24d70e0555ddb..5dce97ea904cb 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -87,7 +87,17 @@ struct iolatency_grp;
+ struct blk_iolatency {
+ struct rq_qos rqos;
+ struct timer_list timer;
+- atomic_t enabled;
++
++ /*
++ * ->enabled is the master enable switch gating the throttling logic and
++ * inflight tracking. The number of cgroups which have iolat enabled is
++ * tracked in ->enable_cnt, and ->enable is flipped on/off accordingly
++ * from ->enable_work with the request_queue frozen. For details, See
++ * blkiolatency_enable_work_fn().
++ */
++ bool enabled;
++ atomic_t enable_cnt;
++ struct work_struct enable_work;
+ };
+
+ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
+@@ -95,11 +105,6 @@ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
+ return container_of(rqos, struct blk_iolatency, rqos);
+ }
+
+-static inline bool blk_iolatency_enabled(struct blk_iolatency *blkiolat)
+-{
+- return atomic_read(&blkiolat->enabled) > 0;
+-}
+-
+ struct child_latency_info {
+ spinlock_t lock;
+
+@@ -464,7 +469,7 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
+ struct blkcg_gq *blkg = bio->bi_blkg;
+ bool issue_as_root = bio_issue_as_root_blkg(bio);
+
+- if (!blk_iolatency_enabled(blkiolat))
++ if (!blkiolat->enabled)
+ return;
+
+ while (blkg && blkg->parent) {
+@@ -594,7 +599,6 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ u64 window_start;
+ u64 now;
+ bool issue_as_root = bio_issue_as_root_blkg(bio);
+- bool enabled = false;
+ int inflight = 0;
+
+ blkg = bio->bi_blkg;
+@@ -605,8 +609,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ if (!iolat)
+ return;
+
+- enabled = blk_iolatency_enabled(iolat->blkiolat);
+- if (!enabled)
++ if (!iolat->blkiolat->enabled)
+ return;
+
+ now = ktime_to_ns(ktime_get());
+@@ -645,6 +648,7 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos)
+ struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
+
+ del_timer_sync(&blkiolat->timer);
++ flush_work(&blkiolat->enable_work);
+ blkcg_deactivate_policy(rqos->q, &blkcg_policy_iolatency);
+ kfree(blkiolat);
+ }
+@@ -716,6 +720,44 @@ next:
+ rcu_read_unlock();
+ }
+
++/**
++ * blkiolatency_enable_work_fn - Enable or disable iolatency on the device
++ * @work: enable_work of the blk_iolatency of interest
++ *
++ * iolatency needs to keep track of the number of in-flight IOs per cgroup. This
++ * is relatively expensive as it involves walking up the hierarchy twice for
++ * every IO. Thus, if iolatency is not enabled in any cgroup for the device, we
++ * want to disable the in-flight tracking.
++ *
++ * We have to make sure that the counting is balanced - we don't want to leak
++ * the in-flight counts by disabling accounting in the completion path while IOs
++ * are in flight. This is achieved by ensuring that no IO is in flight by
++ * freezing the queue while flipping ->enabled. As this requires a sleepable
++ * context, ->enabled flipping is punted to this work function.
++ */
++static void blkiolatency_enable_work_fn(struct work_struct *work)
++{
++ struct blk_iolatency *blkiolat = container_of(work, struct blk_iolatency,
++ enable_work);
++ bool enabled;
++
++ /*
++ * There can only be one instance of this function running for @blkiolat
++ * and it's guaranteed to be executed at least once after the latest
++ * ->enabled_cnt modification. Acting on the latest ->enable_cnt is
++ * sufficient.
++ *
++ * Also, we know @blkiolat is safe to access as ->enable_work is flushed
++ * in blkcg_iolatency_exit().
++ */
++ enabled = atomic_read(&blkiolat->enable_cnt);
++ if (enabled != blkiolat->enabled) {
++ blk_mq_freeze_queue(blkiolat->rqos.q);
++ blkiolat->enabled = enabled;
++ blk_mq_unfreeze_queue(blkiolat->rqos.q);
++ }
++}
++
+ int blk_iolatency_init(struct request_queue *q)
+ {
+ struct blk_iolatency *blkiolat;
+@@ -741,17 +783,15 @@ int blk_iolatency_init(struct request_queue *q)
+ }
+
+ timer_setup(&blkiolat->timer, blkiolatency_timer_fn, 0);
++ INIT_WORK(&blkiolat->enable_work, blkiolatency_enable_work_fn);
+
+ return 0;
+ }
+
+-/*
+- * return 1 for enabling iolatency, return -1 for disabling iolatency, otherwise
+- * return 0.
+- */
+-static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
++static void iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+ {
+ struct iolatency_grp *iolat = blkg_to_lat(blkg);
++ struct blk_iolatency *blkiolat = iolat->blkiolat;
+ u64 oldval = iolat->min_lat_nsec;
+
+ iolat->min_lat_nsec = val;
+@@ -759,13 +799,15 @@ static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+ iolat->cur_win_nsec = min_t(u64, iolat->cur_win_nsec,
+ BLKIOLATENCY_MAX_WIN_SIZE);
+
+- if (!oldval && val)
+- return 1;
++ if (!oldval && val) {
++ if (atomic_inc_return(&blkiolat->enable_cnt) == 1)
++ schedule_work(&blkiolat->enable_work);
++ }
+ if (oldval && !val) {
+ blkcg_clear_delay(blkg);
+- return -1;
++ if (atomic_dec_return(&blkiolat->enable_cnt) == 0)
++ schedule_work(&blkiolat->enable_work);
+ }
+- return 0;
+ }
+
+ static void iolatency_clear_scaling(struct blkcg_gq *blkg)
+@@ -797,7 +839,6 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ u64 lat_val = 0;
+ u64 oldval;
+ int ret;
+- int enable = 0;
+
+ ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, buf, &ctx);
+ if (ret)
+@@ -832,41 +873,12 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ blkg = ctx.blkg;
+ oldval = iolat->min_lat_nsec;
+
+- enable = iolatency_set_min_lat_nsec(blkg, lat_val);
+- if (enable) {
+- if (!blk_get_queue(blkg->q)) {
+- ret = -ENODEV;
+- goto out;
+- }
+-
+- blkg_get(blkg);
+- }
+-
+- if (oldval != iolat->min_lat_nsec) {
++ iolatency_set_min_lat_nsec(blkg, lat_val);
++ if (oldval != iolat->min_lat_nsec)
+ iolatency_clear_scaling(blkg);
+- }
+-
+ ret = 0;
+ out:
+ blkg_conf_finish(&ctx);
+- if (ret == 0 && enable) {
+- struct iolatency_grp *tmp = blkg_to_lat(blkg);
+- struct blk_iolatency *blkiolat = tmp->blkiolat;
+-
+- blk_mq_freeze_queue(blkg->q);
+-
+- if (enable == 1)
+- atomic_inc(&blkiolat->enabled);
+- else if (enable == -1)
+- atomic_dec(&blkiolat->enabled);
+- else
+- WARN_ON_ONCE(1);
+-
+- blk_mq_unfreeze_queue(blkg->q);
+-
+- blkg_put(blkg);
+- blk_put_queue(blkg->q);
+- }
+ return ret ?: nbytes;
+ }
+
+@@ -1007,14 +1019,8 @@ static void iolatency_pd_offline(struct blkg_policy_data *pd)
+ {
+ struct iolatency_grp *iolat = pd_to_lat(pd);
+ struct blkcg_gq *blkg = lat_to_blkg(iolat);
+- struct blk_iolatency *blkiolat = iolat->blkiolat;
+- int ret;
+
+- ret = iolatency_set_min_lat_nsec(blkg, 0);
+- if (ret == 1)
+- atomic_inc(&blkiolat->enabled);
+- if (ret == -1)
+- atomic_dec(&blkiolat->enabled);
++ iolatency_set_min_lat_nsec(blkg, 0);
+ iolatency_clear_scaling(blkg);
+ }
+
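The iolatency rework above converts the old tri-state return protocol into edge-triggered counting: only the 0->1 and 1->0 transitions of enable_cnt schedule the work item that freezes the queue and flips the cached ->enabled flag. The counting discipline in miniature (C11 atomics; kick_worker() simulates scheduling the work item):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int enable_cnt;

static void kick_worker(const char *why)
{
        printf("schedule enable work: %s\n", why);
}

/* atomic_fetch_add() returns the old value, so old == 0 here is the
 * same edge as the kernel's atomic_inc_return(...) == 1. */
static void set_limit(int old_val, int new_val)
{
        if (!old_val && new_val) {
                if (atomic_fetch_add(&enable_cnt, 1) == 0)
                        kick_worker("first user enabled");
        } else if (old_val && !new_val) {
                if (atomic_fetch_sub(&enable_cnt, 1) == 1)
                        kick_worker("last user disabled");
        }
}

int main(void)
{
        set_limit(0, 100);   /* kicks */
        set_limit(0, 50);    /* no kick */
        set_limit(50, 0);    /* no kick */
        set_limit(100, 0);   /* kicks */
        return 0;
}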
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 87769b337fc55..e1b253775a56d 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -2167,13 +2167,14 @@ again:
+ }
+
+ out_unlock:
+- spin_unlock_irq(&q->queue_lock);
+ bio_set_flag(bio, BIO_THROTTLED);
+
+ #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
+ if (throttled || !td->track_bio_latency)
+ bio->bi_issue.value |= BIO_ISSUE_THROTL_SKIP_LATENCY;
+ #endif
++ spin_unlock_irq(&q->queue_lock);
++
+ rcu_read_unlock();
+ return throttled;
+ }
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index a1bea0f4baa88..668095eca0faf 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -39,6 +39,10 @@ struct cryptd_cpu_queue {
+ };
+
+ struct cryptd_queue {
++ /*
++ * Protected by disabling BH to allow enqueueing from softinterrupt and
++ * dequeuing from kworker (cryptd_queue_worker()).
++ */
+ struct cryptd_cpu_queue __percpu *cpu_queue;
+ };
+
+@@ -125,28 +129,28 @@ static void cryptd_fini_queue(struct cryptd_queue *queue)
+ static int cryptd_enqueue_request(struct cryptd_queue *queue,
+ struct crypto_async_request *request)
+ {
+- int cpu, err;
++ int err;
+ struct cryptd_cpu_queue *cpu_queue;
+ refcount_t *refcnt;
+
+- cpu = get_cpu();
++ local_bh_disable();
+ cpu_queue = this_cpu_ptr(queue->cpu_queue);
+ err = crypto_enqueue_request(&cpu_queue->queue, request);
+
+ refcnt = crypto_tfm_ctx(request->tfm);
+
+ if (err == -ENOSPC)
+- goto out_put_cpu;
++ goto out;
+
+- queue_work_on(cpu, cryptd_wq, &cpu_queue->work);
++ queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work);
+
+ if (!refcount_read(refcnt))
+- goto out_put_cpu;
++ goto out;
+
+ refcount_inc(refcnt);
+
+-out_put_cpu:
+- put_cpu();
++out:
++ local_bh_enable();
+
+ return err;
+ }
+@@ -162,15 +166,10 @@ static void cryptd_queue_worker(struct work_struct *work)
+ cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
+ /*
+ * Only handle one request at a time to avoid hogging crypto workqueue.
+- * preempt_disable/enable is used to prevent being preempted by
+- * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
+- * cryptd_enqueue_request() being accessed from software interrupts.
+ */
+ local_bh_disable();
+- preempt_disable();
+ backlog = crypto_get_backlog(&cpu_queue->queue);
+ req = crypto_dequeue_request(&cpu_queue->queue);
+- preempt_enable();
+ local_bh_enable();
+
+ if (!req)
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 123e98a765de7..a7facd4b4ca89 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -100,6 +100,16 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
+ (cpc)->cpc_entry.reg.space_id == \
+ ACPI_ADR_SPACE_PLATFORM_COMM)
+
++/* Check if a CPC register is in SystemMemory */
++#define CPC_IN_SYSTEM_MEMORY(cpc) ((cpc)->type == ACPI_TYPE_BUFFER && \
++ (cpc)->cpc_entry.reg.space_id == \
++ ACPI_ADR_SPACE_SYSTEM_MEMORY)
++
++/* Check if a CPC register is in SystemIo */
++#define CPC_IN_SYSTEM_IO(cpc) ((cpc)->type == ACPI_TYPE_BUFFER && \
++ (cpc)->cpc_entry.reg.space_id == \
++ ACPI_ADR_SPACE_SYSTEM_IO)
++
+ /* Evaluates to True if reg is a NULL register descriptor */
+ #define IS_NULL_REG(reg) ((reg)->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY && \
+ (reg)->address == 0 && \
+@@ -1441,6 +1451,9 @@ EXPORT_SYMBOL_GPL(cppc_set_perf);
+ * transition latency for performance change requests. The closest we have
+ * is the timing information from the PCCT tables which provides the info
+ * on the number and frequency of PCC commands the platform can handle.
++ *
++ * If desired_reg is in the SystemMemory or SystemIo ACPI address space,
++ * then assume there is no latency.
+ */
+ unsigned int cppc_get_transition_latency(int cpu_num)
+ {
+@@ -1466,7 +1479,9 @@ unsigned int cppc_get_transition_latency(int cpu_num)
+ return CPUFREQ_ETERNAL;
+
+ desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
+- if (!CPC_IN_PCC(desired_reg))
++ if (CPC_IN_SYSTEM_MEMORY(desired_reg) || CPC_IN_SYSTEM_IO(desired_reg))
++ return 0;
++ else if (!CPC_IN_PCC(desired_reg))
+ return CPUFREQ_ETERNAL;
+
+ if (pcc_ss_id < 0)
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 3fceb4681ec9f..2da5e7cd28134 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -433,6 +433,16 @@ void acpi_init_properties(struct acpi_device *adev)
+ acpi_extract_apple_properties(adev);
+ }
+
++static void acpi_free_device_properties(struct list_head *list)
++{
++ struct acpi_device_properties *props, *tmp;
++
++ list_for_each_entry_safe(props, tmp, list, list) {
++ list_del(&props->list);
++ kfree(props);
++ }
++}
++
+ static void acpi_destroy_nondev_subnodes(struct list_head *list)
+ {
+ struct acpi_data_node *dn, *next;
+@@ -445,22 +455,18 @@ static void acpi_destroy_nondev_subnodes(struct list_head *list)
+ wait_for_completion(&dn->kobj_done);
+ list_del(&dn->sibling);
+ ACPI_FREE((void *)dn->data.pointer);
++ acpi_free_device_properties(&dn->data.properties);
+ kfree(dn);
+ }
+ }
+
+ void acpi_free_properties(struct acpi_device *adev)
+ {
+- struct acpi_device_properties *props, *tmp;
+-
+ acpi_destroy_nondev_subnodes(&adev->data.subnodes);
+ ACPI_FREE((void *)adev->data.pointer);
+ adev->data.of_compatible = NULL;
+ adev->data.pointer = NULL;
+- list_for_each_entry_safe(props, tmp, &adev->data.properties, list) {
+- list_del(&props->list);
+- kfree(props);
+- }
++ acpi_free_device_properties(&adev->data.properties);
+ }
+
+ /**
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index d4fbea91ab6b8..8a5cb115fea87 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -373,6 +373,18 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"),
+ },
+ },
++ /*
++ * ASUS B1400CEAE hangs on resume from suspend (see
++ * https://bugzilla.kernel.org/show_bug.cgi?id=215742).
++ */
++ {
++ .callback = init_default_s3,
++ .ident = "ASUS B1400CEAE",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK B1400CEAE"),
++ },
++ },
+ {},
+ };
+
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 60c38f9cf1a75..c0d501a3a7140 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -634,10 +634,9 @@ int register_memory(struct memory_block *memory)
+ }
+ ret = xa_err(xa_store(&memory_blocks, memory->dev.id, memory,
+ GFP_KERNEL));
+- if (ret) {
+- put_device(&memory->dev);
++ if (ret)
+ device_unregister(&memory->dev);
+- }
++
+ return ret;
+ }
+
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index 87acc47e89515..7b8368bc20005 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -682,6 +682,7 @@ static int register_node(struct node *node, int num)
+ */
+ void unregister_node(struct node *node)
+ {
++ compaction_unregister_node(node);
+ hugetlb_unregister_node(node); /* no-op, if memoryless node */
+ node_remove_accesses(node);
+ node_remove_caches(node);
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 7e8039d1884cc..0f2e42f368519 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -1978,6 +1978,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
+ genpd->device_count = 0;
+ genpd->max_off_time_ns = -1;
+ genpd->max_off_time_changed = true;
++ genpd->next_wakeup = KTIME_MAX;
+ genpd->provider = NULL;
+ genpd->has_provider = false;
+ genpd->accounting_time = ktime_get();
+diff --git a/drivers/base/property.c b/drivers/base/property.c
+index e6497f6877ee8..4d5bac0a490d4 100644
+--- a/drivers/base/property.c
++++ b/drivers/base/property.c
+@@ -47,12 +47,14 @@ bool fwnode_property_present(const struct fwnode_handle *fwnode,
+ {
+ bool ret;
+
++ if (IS_ERR_OR_NULL(fwnode))
++ return false;
++
+ ret = fwnode_call_bool_op(fwnode, property_present, propname);
+- if (ret == false && !IS_ERR_OR_NULL(fwnode) &&
+- !IS_ERR_OR_NULL(fwnode->secondary))
+- ret = fwnode_call_bool_op(fwnode->secondary, property_present,
+- propname);
+- return ret;
++ if (ret)
++ return ret;
++
++ return fwnode_call_bool_op(fwnode->secondary, property_present, propname);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_property_present);
+
+@@ -232,15 +234,16 @@ static int fwnode_property_read_int_array(const struct fwnode_handle *fwnode,
+ {
+ int ret;
+
++ if (IS_ERR_OR_NULL(fwnode))
++ return -EINVAL;
++
+ ret = fwnode_call_int_op(fwnode, property_read_int_array, propname,
+ elem_size, val, nval);
+- if (ret == -EINVAL && !IS_ERR_OR_NULL(fwnode) &&
+- !IS_ERR_OR_NULL(fwnode->secondary))
+- ret = fwnode_call_int_op(
+- fwnode->secondary, property_read_int_array, propname,
+- elem_size, val, nval);
++ if (ret != -EINVAL)
++ return ret;
+
+- return ret;
++ return fwnode_call_int_op(fwnode->secondary, property_read_int_array, propname,
++ elem_size, val, nval);
+ }
+
+ /**
+@@ -371,14 +374,16 @@ int fwnode_property_read_string_array(const struct fwnode_handle *fwnode,
+ {
+ int ret;
+
++ if (IS_ERR_OR_NULL(fwnode))
++ return -EINVAL;
++
+ ret = fwnode_call_int_op(fwnode, property_read_string_array, propname,
+ val, nval);
+- if (ret == -EINVAL && !IS_ERR_OR_NULL(fwnode) &&
+- !IS_ERR_OR_NULL(fwnode->secondary))
+- ret = fwnode_call_int_op(fwnode->secondary,
+- property_read_string_array, propname,
+- val, nval);
+- return ret;
++ if (ret != -EINVAL)
++ return ret;
++
++ return fwnode_call_int_op(fwnode->secondary, property_read_string_array, propname,
++ val, nval);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_property_read_string_array);
+
+@@ -480,15 +485,19 @@ int fwnode_property_get_reference_args(const struct fwnode_handle *fwnode,
+ {
+ int ret;
+
++ if (IS_ERR_OR_NULL(fwnode))
++ return -ENOENT;
++
+ ret = fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop,
+ nargs, index, args);
++ if (ret == 0)
++ return ret;
+
+- if (ret < 0 && !IS_ERR_OR_NULL(fwnode) &&
+- !IS_ERR_OR_NULL(fwnode->secondary))
+- ret = fwnode_call_int_op(fwnode->secondary, get_reference_args,
+- prop, nargs_prop, nargs, index, args);
++ if (IS_ERR_OR_NULL(fwnode->secondary))
++ return ret;
+
+- return ret;
++ return fwnode_call_int_op(fwnode->secondary, get_reference_args, prop, nargs_prop,
++ nargs, index, args);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_property_get_reference_args);
+
+@@ -635,12 +644,13 @@ EXPORT_SYMBOL_GPL(fwnode_count_parents);
+ struct fwnode_handle *fwnode_get_nth_parent(struct fwnode_handle *fwnode,
+ unsigned int depth)
+ {
+- unsigned int i;
+-
+ fwnode_handle_get(fwnode);
+
+- for (i = 0; i < depth && fwnode; i++)
++ do {
++ if (depth-- == 0)
++ break;
+ fwnode = fwnode_get_next_parent(fwnode);
++ } while (fwnode);
+
+ return fwnode;
+ }
+@@ -659,17 +669,17 @@ EXPORT_SYMBOL_GPL(fwnode_get_nth_parent);
+ bool fwnode_is_ancestor_of(struct fwnode_handle *test_ancestor,
+ struct fwnode_handle *test_child)
+ {
+- if (!test_ancestor)
++ if (IS_ERR_OR_NULL(test_ancestor))
+ return false;
+
+ fwnode_handle_get(test_child);
+- while (test_child) {
++ do {
+ if (test_child == test_ancestor) {
+ fwnode_handle_put(test_child);
+ return true;
+ }
+ test_child = fwnode_get_next_parent(test_child);
+- }
++ } while (test_child);
+ return false;
+ }
+
+@@ -698,7 +708,7 @@ fwnode_get_next_available_child_node(const struct fwnode_handle *fwnode,
+ {
+ struct fwnode_handle *next_child = child;
+
+- if (!fwnode)
++ if (IS_ERR_OR_NULL(fwnode))
+ return NULL;
+
+ do {
+@@ -722,16 +732,16 @@ struct fwnode_handle *device_get_next_child_node(struct device *dev,
+ const struct fwnode_handle *fwnode = dev_fwnode(dev);
+ struct fwnode_handle *next;
+
++ if (IS_ERR_OR_NULL(fwnode))
++ return NULL;
++
+ /* Try to find a child in primary fwnode */
+ next = fwnode_get_next_child_node(fwnode, child);
+ if (next)
+ return next;
+
+ /* When no more children in primary, continue with secondary */
+- if (fwnode && !IS_ERR_OR_NULL(fwnode->secondary))
+- next = fwnode_get_next_child_node(fwnode->secondary, child);
+-
+- return next;
++ return fwnode_get_next_child_node(fwnode->secondary, child);
+ }
+ EXPORT_SYMBOL_GPL(device_get_next_child_node);
+
+@@ -798,6 +808,9 @@ EXPORT_SYMBOL_GPL(fwnode_handle_put);
+ */
+ bool fwnode_device_is_available(const struct fwnode_handle *fwnode)
+ {
++ if (IS_ERR_OR_NULL(fwnode))
++ return false;
++
+ if (!fwnode_has_op(fwnode, device_is_available))
+ return true;
+
+@@ -959,14 +972,14 @@ fwnode_graph_get_next_endpoint(const struct fwnode_handle *fwnode,
+ parent = fwnode_graph_get_port_parent(prev);
+ else
+ parent = fwnode;
++ if (IS_ERR_OR_NULL(parent))
++ return NULL;
+
+ ep = fwnode_call_ptr_op(parent, graph_get_next_endpoint, prev);
++ if (ep)
++ return ep;
+
+- if (IS_ERR_OR_NULL(ep) &&
+- !IS_ERR_OR_NULL(parent) && !IS_ERR_OR_NULL(parent->secondary))
+- ep = fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+-
+- return ep;
++ return fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_graph_get_next_endpoint);
+
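
The property.c hunks above share one refactor: validate the fwnode handle once at the top, try the primary node, return early unless the lookup reports the property as absent (-EINVAL), and only then consult fwnode->secondary. A minimal userspace sketch of that early-return shape -- lookup() is an invented stub standing in for fwnode_call_int_op(), which in the kernel also hides the NULL/error handling on the handle itself:

#include <errno.h>

struct node {
        struct node *secondary;
};

/* Stub for fwnode_call_int_op(): 0 on success, -EINVAL when the node
 * simply lacks the property, another negative errno on real failure. */
static int lookup(struct node *n, const char *prop)
{
        (void)prop;
        return n->secondary ? -EINVAL : 0;
}

static int read_prop(struct node *n, const char *prop)
{
        int ret;

        if (!n)                         /* reject bad handles up front */
                return -ENOENT;

        ret = lookup(n, prop);
        if (ret != -EINVAL)             /* success, or a real error */
                return ret;

        if (!n->secondary)              /* nothing to fall back to */
                return ret;

        return lookup(n->secondary, prop);
}

int main(void)
{
        struct node sec = { 0 };
        struct node pri = { &sec };

        return read_prop(&pri, "compatible");   /* falls back, returns 0 */
}

Flattening the branches this way also drops the repeated IS_ERR_OR_NULL() tests the old nested conditions re-did on every path.
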
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 416f4f48f69b0..8d17dd647187c 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -3609,9 +3609,8 @@ const char *cmdname(enum drbd_packet cmd)
+ * when we want to support more than
+ * one PRO_VERSION */
+ static const char *cmdnames[] = {
++
+ [P_DATA] = "Data",
+- [P_WSAME] = "WriteSame",
+- [P_TRIM] = "Trim",
+ [P_DATA_REPLY] = "DataReply",
+ [P_RS_DATA_REPLY] = "RSDataReply",
+ [P_BARRIER] = "Barrier",
+@@ -3622,7 +3621,6 @@ const char *cmdname(enum drbd_packet cmd)
+ [P_DATA_REQUEST] = "DataRequest",
+ [P_RS_DATA_REQUEST] = "RSDataRequest",
+ [P_SYNC_PARAM] = "SyncParam",
+- [P_SYNC_PARAM89] = "SyncParam89",
+ [P_PROTOCOL] = "ReportProtocol",
+ [P_UUIDS] = "ReportUUIDs",
+ [P_SIZES] = "ReportSizes",
+@@ -3630,6 +3628,7 @@ const char *cmdname(enum drbd_packet cmd)
+ [P_SYNC_UUID] = "ReportSyncUUID",
+ [P_AUTH_CHALLENGE] = "AuthChallenge",
+ [P_AUTH_RESPONSE] = "AuthResponse",
++ [P_STATE_CHG_REQ] = "StateChgRequest",
+ [P_PING] = "Ping",
+ [P_PING_ACK] = "PingAck",
+ [P_RECV_ACK] = "RecvAck",
+@@ -3640,23 +3639,25 @@ const char *cmdname(enum drbd_packet cmd)
+ [P_NEG_DREPLY] = "NegDReply",
+ [P_NEG_RS_DREPLY] = "NegRSDReply",
+ [P_BARRIER_ACK] = "BarrierAck",
+- [P_STATE_CHG_REQ] = "StateChgRequest",
+ [P_STATE_CHG_REPLY] = "StateChgReply",
+ [P_OV_REQUEST] = "OVRequest",
+ [P_OV_REPLY] = "OVReply",
+ [P_OV_RESULT] = "OVResult",
+ [P_CSUM_RS_REQUEST] = "CsumRSRequest",
+ [P_RS_IS_IN_SYNC] = "CsumRSIsInSync",
++ [P_SYNC_PARAM89] = "SyncParam89",
+ [P_COMPRESSED_BITMAP] = "CBitmap",
+ [P_DELAY_PROBE] = "DelayProbe",
+ [P_OUT_OF_SYNC] = "OutOfSync",
+- [P_RETRY_WRITE] = "RetryWrite",
+ [P_RS_CANCEL] = "RSCancel",
+ [P_CONN_ST_CHG_REQ] = "conn_st_chg_req",
+ [P_CONN_ST_CHG_REPLY] = "conn_st_chg_reply",
+ [P_PROTOCOL_UPDATE] = "protocol_update",
++ [P_TRIM] = "Trim",
+ [P_RS_THIN_REQ] = "rs_thin_req",
+ [P_RS_DEALLOCATED] = "rs_deallocated",
++ [P_WSAME] = "WriteSame",
++ [P_ZEROES] = "Zeroes",
+
+ /* enum drbd_packet, but not commands - obsoleted flags:
+ * P_MAY_IGNORE
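
The drbd reordering above is cosmetic to the compiler: C designated initializers may appear in any textual order, and the resulting array is identical either way. Keeping the entries in enum order matters because a duplicated index is legal C in which the last initializer silently wins, and an enum-ordered table makes such duplicates (and gaps -- note the newly added P_ZEROES entry) easy to spot. A standalone illustration with invented names:

#include <stdio.h>

enum cmd { A_CMD, B_CMD, C_CMD, CMD_MAX };

/* Initializer order is irrelevant; a repeated index is valid C where the
 * last entry wins (GCC only warns with -Woverride-init). */
static const char *names[CMD_MAX] = {
        [B_CMD] = "B",
        [A_CMD] = "A",
        [C_CMD] = "first",
        [C_CMD] = "second",     /* overrides "first" without any error */
};

int main(void)
{
        printf("%s %s %s\n", names[A_CMD], names[B_CMD], names[C_CMD]);
        return 0;               /* prints: A B second */
}
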
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 5a1f98494dddf..2845570413360 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -947,11 +947,15 @@ static int wait_for_reconnect(struct nbd_device *nbd)
+ struct nbd_config *config = nbd->config;
+ if (!config->dead_conn_timeout)
+ return 0;
+- if (test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags))
++
++ if (!wait_event_timeout(config->conn_wait,
++ test_bit(NBD_RT_DISCONNECTED,
++ &config->runtime_flags) ||
++ atomic_read(&config->live_connections) > 0,
++ config->dead_conn_timeout))
+ return 0;
+- return wait_event_timeout(config->conn_wait,
+- atomic_read(&config->live_connections) > 0,
+- config->dead_conn_timeout) > 0;
++
++ return !test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags);
+ }
+
+ static int nbd_handle_cmd(struct nbd_cmd *cmd, int index)
+@@ -2082,6 +2086,7 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ mutex_lock(&nbd->config_lock);
+ nbd_disconnect(nbd);
+ sock_shutdown(nbd);
++ wake_up(&nbd->config->conn_wait);
+ /*
+ * Make sure recv thread has finished, we can safely call nbd_clear_que()
+ * to cancel the inflight I/Os.
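
The nbd rework above closes a race: the old wait_for_reconnect() sampled NBD_RT_DISCONNECTED once and then waited only for a live connection, so a disconnect arriving mid-wait went unnoticed until the timeout expired. Folding both predicates into one wait -- together with the wake_up() added to nbd_disconnect_and_put() -- lets the waiter return as soon as either condition becomes true. A pthread sketch of the same two-predicate timed wait (userspace stand-in, not the kernel's wait_event_timeout() machinery):

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool disconnected;
static int live_connections;

/* Block until *either* predicate flips or the timeout expires; succeed
 * only if a live connection appeared and no disconnect was requested. */
static bool wait_for_reconnect(int timeout_sec)
{
        struct timespec ts;
        bool ok;
        int err = 0;

        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += timeout_sec;

        pthread_mutex_lock(&lock);
        while (!disconnected && live_connections == 0 && err == 0)
                err = pthread_cond_timedwait(&cond, &lock, &ts);
        ok = !disconnected && live_connections > 0;
        pthread_mutex_unlock(&lock);
        return ok;
}

/* The disconnect path must signal the waiters -- the counterpart of the
 * wake_up(&nbd->config->conn_wait) added above. */
static void disconnect(void)
{
        pthread_mutex_lock(&lock);
        disconnected = true;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        disconnect();
        return wait_for_reconnect(1) ? 0 : 1;   /* returns immediately: 1 */
}
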
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 8c415be867327..907eba23cb2fc 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -905,11 +905,12 @@ static int virtblk_probe(struct virtio_device *vdev)
+ blk_queue_io_opt(q, blk_size * opt_io_size);
+
+ if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
+- q->limits.discard_granularity = blk_size;
+-
+ virtio_cread(vdev, struct virtio_blk_config,
+ discard_sector_alignment, &v);
+- q->limits.discard_alignment = v ? v << SECTOR_SHIFT : 0;
++ if (v)
++ q->limits.discard_granularity = v << SECTOR_SHIFT;
++ else
++ q->limits.discard_granularity = blk_size;
+
+ virtio_cread(vdev, struct virtio_blk_config,
+ max_discard_sectors, &v);
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index ecf29cfa7d792..411676ad0c6b3 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -368,6 +368,7 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+ struct hci_event_hdr *hdr = (void *)skb->data;
++ u8 evt = hdr->evt;
+ int err;
+
+ /* When someone waits for the WMT event, the skb is being cloned
+@@ -385,7 +386,7 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ if (err < 0)
+ goto err_free_skb;
+
+- if (hdr->evt == HCI_EV_WMT) {
++ if (evt == HCI_EV_WMT) {
+ if (test_and_clear_bit(BTMTKSDIO_TX_WAIT_VND_EVT,
+ &bdev->tx_state)) {
+ /* Barrier to sync with other CPUs */
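
The btmtksdio change above is a use-after-free fix: the delivery call consumes the skb, so dereferencing hdr->evt afterwards reads freed memory; copying the one needed byte into a local before handing the buffer on is the standard cure. A self-contained sketch of the pattern, where deliver() is an invented stand-in that takes ownership of the packet:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct pkt {
        uint8_t evt;
};

/* Takes ownership of (and frees) the packet, like hci_recv_frame(). */
static int deliver(struct pkt *p)
{
        free(p);
        return 0;
}

static void handle(struct pkt *p)
{
        uint8_t evt = p->evt;   /* snapshot before ownership is handed off */

        if (deliver(p) < 0)
                return;

        /* p is gone now; the saved copy avoids a use-after-free. */
        if (evt == 0x3e)
                printf("vendor event\n");
}

int main(void)
{
        struct pkt *p = malloc(sizeof(*p));

        if (!p)
                return 1;
        p->evt = 0x3e;
        handle(p);
        return 0;
}
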
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 42234d5f602dd..304351d2cfdf9 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -3317,6 +3317,12 @@ static int btusb_setup_qca(struct hci_dev *hdev)
+ return err;
+ }
+
++	/* Mark HCI_OP_ENHANCED_SETUP_SYNC_CONN as broken, since it doesn't
++	 * seem to work with the likes of HSP/HFP mSBC.
++	 */
++ set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
++
+ return 0;
+ }
+
+diff --git a/drivers/char/hw_random/cn10k-rng.c b/drivers/char/hw_random/cn10k-rng.c
+index 35001c63648bb..a01e9307737c5 100644
+--- a/drivers/char/hw_random/cn10k-rng.c
++++ b/drivers/char/hw_random/cn10k-rng.c
+@@ -31,26 +31,23 @@ struct cn10k_rng {
+
+ #define PLAT_OCTEONTX_RESET_RNG_EBG_HEALTH_STATE 0xc2000b0f
+
+-static int reset_rng_health_state(struct cn10k_rng *rng)
++static unsigned long reset_rng_health_state(struct cn10k_rng *rng)
+ {
+ struct arm_smccc_res res;
+
+ /* Send SMC service call to reset EBG health state */
+ arm_smccc_smc(PLAT_OCTEONTX_RESET_RNG_EBG_HEALTH_STATE, 0, 0, 0, 0, 0, 0, 0, &res);
+- if (res.a0 != 0UL)
+- return -EIO;
+-
+- return 0;
++ return res.a0;
+ }
+
+ static int check_rng_health(struct cn10k_rng *rng)
+ {
+ u64 status;
+- int err;
++ unsigned long err;
+
+ /* Skip checking health */
+ if (!rng->reg_base)
+- return 0;
++ return -ENODEV;
+
+ status = readq(rng->reg_base + RNM_PF_EBG_HEALTH);
+ if (status & BIT_ULL(20)) {
+@@ -58,7 +55,9 @@ static int check_rng_health(struct cn10k_rng *rng)
+ if (err) {
+ dev_err(&rng->pdev->dev, "HWRNG: Health test failed (status=%llx)\n",
+ status);
+- dev_err(&rng->pdev->dev, "HWRNG: error during reset\n");
++ dev_err(&rng->pdev->dev, "HWRNG: error during reset (error=%lx)\n",
++ err);
++ return -EIO;
+ }
+ }
+ return 0;
+@@ -90,6 +89,7 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
+ {
+ struct cn10k_rng *rng = (struct cn10k_rng *)hwrng->priv;
+ unsigned int size;
++ u8 *pos = data;
+ int err = 0;
+ u64 value;
+
+@@ -102,17 +102,20 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
+ while (size >= 8) {
+ cn10k_read_trng(rng, &value);
+
+- *((u64 *)data) = (u64)value;
++ *((u64 *)pos) = value;
+ size -= 8;
+- data += 8;
++ pos += 8;
+ }
+
+- while (size > 0) {
++ if (size > 0) {
+ cn10k_read_trng(rng, &value);
+
+- *((u8 *)data) = (u8)value;
+- size--;
+- data++;
++ while (size > 0) {
++ *pos = (u8)value;
++ value >>= 8;
++ size--;
++ pos++;
++ }
+ }
+
+ return max - size;
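
Two things change in cn10k_rng_read() above: pointer arithmetic is done on a typed u8 pointer instead of the void * parameter, and the tail no longer burns one 64-bit hardware sample per remaining byte while keeping only its low byte -- a single final sample is read and its bytes are shifted out. A userspace sketch of the tail handling, with read_trng64() as a fixed-value stand-in for the hardware read (byte-wise stores here, so no alignment is assumed):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t read_trng64(void)
{
        return 0x1122334455667788ull;   /* stand-in, not random */
}

/* Fill buf[0..len) eight bytes at a time, then spread one final 64-bit
 * sample across the tail instead of one sample per byte. */
static void fill_random(uint8_t *buf, size_t len)
{
        while (len >= 8) {
                uint64_t v = read_trng64();

                for (int i = 0; i < 8; i++)
                        buf[i] = v >> (8 * i);
                buf += 8;
                len -= 8;
        }
        if (len) {
                uint64_t v = read_trng64();

                while (len--) {
                        *buf++ = (uint8_t)v;
                        v >>= 8;        /* reuse the same sample */
                }
        }
}

int main(void)
{
        uint8_t out[11];

        fill_random(out, sizeof(out));
        for (size_t i = 0; i < sizeof(out); i++)
                printf("%02x", out[i]);
        printf("\n");
        return 0;
}
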
+diff --git a/drivers/char/hw_random/omap3-rom-rng.c b/drivers/char/hw_random/omap3-rom-rng.c
+index e0d77fa048fb6..f06e4f95114f9 100644
+--- a/drivers/char/hw_random/omap3-rom-rng.c
++++ b/drivers/char/hw_random/omap3-rom-rng.c
+@@ -92,7 +92,7 @@ static int __maybe_unused omap_rom_rng_runtime_resume(struct device *dev)
+
+ r = ddata->rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT);
+ if (r != 0) {
+- clk_disable(ddata->clk);
++ clk_disable_unprepare(ddata->clk);
+ dev_err(dev, "HW init failed: %d\n", r);
+
+ return -EIO;
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index f1827257ef0e0..2610e809c802b 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -11,8 +11,8 @@
+ * Copyright 2002 MontaVista Software Inc.
+ */
+
+-#define pr_fmt(fmt) "%s" fmt, "IPMI message handler: "
+-#define dev_fmt pr_fmt
++#define pr_fmt(fmt) "IPMI message handler: " fmt
++#define dev_fmt(fmt) pr_fmt(fmt)
+
+ #include <linux/module.h>
+ #include <linux/errno.h>
+diff --git a/drivers/char/ipmi/ipmi_poweroff.c b/drivers/char/ipmi/ipmi_poweroff.c
+index bc3a18daf97a6..62e71c46ac5f7 100644
+--- a/drivers/char/ipmi/ipmi_poweroff.c
++++ b/drivers/char/ipmi/ipmi_poweroff.c
+@@ -94,9 +94,7 @@ static void dummy_recv_free(struct ipmi_recv_msg *msg)
+ {
+ atomic_dec(&dummy_count);
+ }
+-static struct ipmi_smi_msg halt_smi_msg = {
+- .done = dummy_smi_free
+-};
++static struct ipmi_smi_msg halt_smi_msg = INIT_IPMI_SMI_MSG(dummy_smi_free);
+ static struct ipmi_recv_msg halt_recv_msg = {
+ .done = dummy_recv_free
+ };
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 48aab77abebf1..588610236de1d 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -814,6 +814,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ break;
+
+ case SSIF_GETTING_EVENTS:
++ if (!msg) {
++ /* Should never happen, but just in case. */
++ dev_warn(&ssif_info->client->dev,
++ "No message set while getting events\n");
++ ipmi_ssif_unlock_cond(ssif_info, flags);
++ break;
++ }
++
+ if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
+ /* Error getting event, probably done. */
+ msg->done(msg);
+@@ -838,6 +846,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ break;
+
+ case SSIF_GETTING_MESSAGES:
++ if (!msg) {
++ /* Should never happen, but just in case. */
++ dev_warn(&ssif_info->client->dev,
++ "No message set while getting messages\n");
++ ipmi_ssif_unlock_cond(ssif_info, flags);
++ break;
++ }
++
+ if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
+ /* Error getting event, probably done. */
+ msg->done(msg);
+@@ -861,6 +877,13 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ deliver_recv_msg(ssif_info, msg);
+ }
+ break;
++
++ default:
++ /* Should never happen, but just in case. */
++ dev_warn(&ssif_info->client->dev,
++ "Invalid state in message done handling: %d\n",
++ ssif_info->ssif_state);
++ ipmi_ssif_unlock_cond(ssif_info, flags);
+ }
+
+ flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index 883b4a3410122..8e536ce0a5d21 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -354,9 +354,7 @@ static void msg_free_recv(struct ipmi_recv_msg *msg)
+ complete(&msg_wait);
+ }
+ }
+-static struct ipmi_smi_msg smi_msg = {
+- .done = msg_free_smi
+-};
++static struct ipmi_smi_msg smi_msg = INIT_IPMI_SMI_MSG(msg_free_smi);
+ static struct ipmi_recv_msg recv_msg = {
+ .done = msg_free_recv
+ };
+@@ -475,9 +473,8 @@ static void panic_recv_free(struct ipmi_recv_msg *msg)
+ atomic_dec(&panic_done_count);
+ }
+
+-static struct ipmi_smi_msg panic_halt_heartbeat_smi_msg = {
+- .done = panic_smi_free
+-};
++static struct ipmi_smi_msg panic_halt_heartbeat_smi_msg =
++ INIT_IPMI_SMI_MSG(panic_smi_free);
+ static struct ipmi_recv_msg panic_halt_heartbeat_recv_msg = {
+ .done = panic_recv_free
+ };
+@@ -516,9 +513,8 @@ static void panic_halt_ipmi_heartbeat(void)
+ atomic_sub(2, &panic_done_count);
+ }
+
+-static struct ipmi_smi_msg panic_halt_smi_msg = {
+- .done = panic_smi_free
+-};
++static struct ipmi_smi_msg panic_halt_smi_msg =
++ INIT_IPMI_SMI_MSG(panic_smi_free);
+ static struct ipmi_recv_msg panic_halt_recv_msg = {
+ .done = panic_recv_free
+ };
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 92428bfdc1431..d6aa4b57d9858 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -79,8 +79,7 @@ static enum {
+ CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
+ CRNG_READY = 2 /* Fully initialized with POOL_READY_BITS collected */
+ } crng_init __read_mostly = CRNG_EMPTY;
+-static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
+-#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY)
++#define crng_ready() (likely(crng_init >= CRNG_READY))
+ /* Various types of waiters for crng_init->CRNG_READY transition. */
+ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
+ static struct fasync_struct *fasync;
+@@ -110,11 +109,6 @@ bool rng_is_initialized(void)
+ }
+ EXPORT_SYMBOL(rng_is_initialized);
+
+-static void __cold crng_set_ready(struct work_struct *work)
+-{
+- static_branch_enable(&crng_is_ready);
+-}
+-
+ /* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
+ static void try_to_generate_entropy(void);
+
+@@ -268,7 +262,7 @@ static void crng_reseed(void)
+ ++next_gen;
+ WRITE_ONCE(base_crng.generation, next_gen);
+ WRITE_ONCE(base_crng.birth, jiffies);
+- if (!static_branch_likely(&crng_is_ready))
++ if (!crng_ready())
+ crng_init = CRNG_READY;
+ spin_unlock_irqrestore(&base_crng.lock, flags);
+ memzero_explicit(key, sizeof(key));
+@@ -711,7 +705,6 @@ static void extract_entropy(void *buf, size_t len)
+
+ static void __cold _credit_init_bits(size_t bits)
+ {
+- static struct execute_work set_ready;
+ unsigned int new, orig, add;
+ unsigned long flags;
+
+@@ -727,7 +720,6 @@ static void __cold _credit_init_bits(size_t bits)
+
+ if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
+ crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
+- execute_in_process_context(crng_set_ready, &set_ready);
+ process_random_ready_list();
+ wake_up_interruptible(&crng_init_wait);
+ kill_fasync(&fasync, SIGIO, POLL_IN);
+diff --git a/drivers/char/tpm/tpm_tis_i2c_cr50.c b/drivers/char/tpm/tpm_tis_i2c_cr50.c
+index f6c0affbb4567..bf608b6af3395 100644
+--- a/drivers/char/tpm/tpm_tis_i2c_cr50.c
++++ b/drivers/char/tpm/tpm_tis_i2c_cr50.c
+@@ -768,8 +768,8 @@ static int tpm_cr50_i2c_remove(struct i2c_client *client)
+ struct device *dev = &client->dev;
+
+ if (!chip) {
+- dev_err(dev, "Could not get client data at remove\n");
+- return -ENODEV;
++ dev_crit(dev, "Could not get client data at remove, memory corruption ahead\n");
++ return 0;
+ }
+
+ tpm_chip_unregister(chip);
+diff --git a/drivers/clk/tegra/clk-dfll.c b/drivers/clk/tegra/clk-dfll.c
+index 6144447f86c63..62238dca9a534 100644
+--- a/drivers/clk/tegra/clk-dfll.c
++++ b/drivers/clk/tegra/clk-dfll.c
+@@ -271,6 +271,7 @@ struct tegra_dfll {
+ struct clk *ref_clk;
+ struct clk *i2c_clk;
+ struct clk *dfll_clk;
++ struct reset_control *dfll_rst;
+ struct reset_control *dvco_rst;
+ unsigned long ref_rate;
+ unsigned long i2c_clk_rate;
+@@ -1464,6 +1465,7 @@ static int dfll_init(struct tegra_dfll *td)
+ return -EINVAL;
+ }
+
++ reset_control_deassert(td->dfll_rst);
+ reset_control_deassert(td->dvco_rst);
+
+ ret = clk_prepare(td->ref_clk);
+@@ -1509,6 +1511,7 @@ di_err1:
+ clk_unprepare(td->ref_clk);
+
+ reset_control_assert(td->dvco_rst);
++ reset_control_assert(td->dfll_rst);
+
+ return ret;
+ }
+@@ -1530,6 +1533,7 @@ int tegra_dfll_suspend(struct device *dev)
+ }
+
+ reset_control_assert(td->dvco_rst);
++ reset_control_assert(td->dfll_rst);
+
+ return 0;
+ }
+@@ -1548,6 +1552,7 @@ int tegra_dfll_resume(struct device *dev)
+ {
+ struct tegra_dfll *td = dev_get_drvdata(dev);
+
++ reset_control_deassert(td->dfll_rst);
+ reset_control_deassert(td->dvco_rst);
+
+ pm_runtime_get_sync(td->dev);
+@@ -1951,6 +1956,12 @@ int tegra_dfll_register(struct platform_device *pdev,
+
+ td->soc = soc;
+
++ td->dfll_rst = devm_reset_control_get_optional(td->dev, "dfll");
++ if (IS_ERR(td->dfll_rst)) {
++ dev_err(td->dev, "couldn't get dfll reset\n");
++ return PTR_ERR(td->dfll_rst);
++ }
++
+ td->dvco_rst = devm_reset_control_get(td->dev, "dvco");
+ if (IS_ERR(td->dvco_rst)) {
+ dev_err(td->dev, "couldn't get dvco reset\n");
+@@ -2087,6 +2098,7 @@ struct tegra_dfll_soc_data *tegra_dfll_unregister(struct platform_device *pdev)
+ clk_unprepare(td->i2c_clk);
+
+ reset_control_assert(td->dvco_rst);
++ reset_control_assert(td->dfll_rst);
+
+ return td->soc;
+ }
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 80f535cc8a757..fbaa8e6c7d232 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -28,6 +28,7 @@
+ #include <linux/suspend.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/tick.h>
++#include <linux/units.h>
+ #include <trace/events/power.h>
+
+ static LIST_HEAD(cpufreq_policy_list);
+@@ -1707,6 +1708,16 @@ static unsigned int cpufreq_verify_current_freq(struct cpufreq_policy *policy, b
+ return new_freq;
+
+ if (policy->cur != new_freq) {
++ /*
++ * For some platforms, the frequency returned by hardware may be
++ * slightly different from what is provided in the frequency
++ * table, for example hardware may return 499 MHz instead of 500
++ * MHz. In such cases it is better to avoid getting into
++ * unnecessary frequency updates.
++ */
++ if (abs(policy->cur - new_freq) < HZ_PER_MHZ)
++ return policy->cur;
++
+ cpufreq_out_of_sync(policy, new_freq);
+ if (update)
+ schedule_work(&policy->update);
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index 866163883b48d..bfe240c726e34 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -44,6 +44,8 @@ struct mtk_cpu_dvfs_info {
+ bool need_voltage_tracking;
+ };
+
++static struct platform_device *cpufreq_pdev;
++
+ static LIST_HEAD(dvfs_info_list);
+
+ static struct mtk_cpu_dvfs_info *mtk_cpu_dvfs_info_lookup(int cpu)
+@@ -547,7 +549,6 @@ static int __init mtk_cpufreq_driver_init(void)
+ {
+ struct device_node *np;
+ const struct of_device_id *match;
+- struct platform_device *pdev;
+ int err;
+
+ np = of_find_node_by_path("/");
+@@ -571,16 +572,23 @@ static int __init mtk_cpufreq_driver_init(void)
+ * and the device registration codes are put here to handle defer
+ * probing.
+ */
+- pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
+- if (IS_ERR(pdev)) {
++ cpufreq_pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
++ if (IS_ERR(cpufreq_pdev)) {
+ pr_err("failed to register mtk-cpufreq platform device\n");
+ platform_driver_unregister(&mtk_cpufreq_platdrv);
+- return PTR_ERR(pdev);
++ return PTR_ERR(cpufreq_pdev);
+ }
+
+ return 0;
+ }
+-device_initcall(mtk_cpufreq_driver_init);
++module_init(mtk_cpufreq_driver_init)
++
++static void __exit mtk_cpufreq_driver_exit(void)
++{
++ platform_device_unregister(cpufreq_pdev);
++ platform_driver_unregister(&mtk_cpufreq_platdrv);
++}
++module_exit(mtk_cpufreq_driver_exit)
+
+ MODULE_DESCRIPTION("MediaTek CPUFreq driver");
+ MODULE_AUTHOR("Pi-Cheng Chen <pi-cheng.chen@linaro.org>");
+diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
+index b51b5df084500..540105ca0781f 100644
+--- a/drivers/cpuidle/cpuidle-psci.c
++++ b/drivers/cpuidle/cpuidle-psci.c
+@@ -23,6 +23,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
++#include <linux/syscore_ops.h>
+
+ #include <asm/cpuidle.h>
+
+@@ -131,6 +132,49 @@ static int psci_idle_cpuhp_down(unsigned int cpu)
+ return 0;
+ }
+
++static void psci_idle_syscore_switch(bool suspend)
++{
++ bool cleared = false;
++ struct device *dev;
++ int cpu;
++
++ for_each_possible_cpu(cpu) {
++ dev = per_cpu_ptr(&psci_cpuidle_data, cpu)->dev;
++
++ if (dev && suspend) {
++ dev_pm_genpd_suspend(dev);
++ } else if (dev) {
++ dev_pm_genpd_resume(dev);
++
++ /* Account for userspace having offlined a CPU. */
++ if (pm_runtime_status_suspended(dev))
++ pm_runtime_set_active(dev);
++
++ /* Clear domain state to re-start fresh. */
++ if (!cleared) {
++ psci_set_domain_state(0);
++ cleared = true;
++ }
++ }
++ }
++}
++
++static int psci_idle_syscore_suspend(void)
++{
++ psci_idle_syscore_switch(true);
++ return 0;
++}
++
++static void psci_idle_syscore_resume(void)
++{
++ psci_idle_syscore_switch(false);
++}
++
++static struct syscore_ops psci_idle_syscore_ops = {
++ .suspend = psci_idle_syscore_suspend,
++ .resume = psci_idle_syscore_resume,
++};
++
+ static void psci_idle_init_cpuhp(void)
+ {
+ int err;
+@@ -138,6 +182,8 @@ static void psci_idle_init_cpuhp(void)
+ if (!psci_cpuidle_use_cpuhp)
+ return;
+
++ register_syscore_ops(&psci_idle_syscore_ops);
++
+ err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
+ "cpuidle/psci:online",
+ psci_idle_cpuhp_up,
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 554e400d41cad..70e2e6e373897 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -93,6 +93,68 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
+ return err;
+ }
+
++static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
++{
++ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
++ struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
++ struct sun8i_ss_dev *ss = op->ss;
++ struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
++ struct scatterlist *sg = areq->src;
++ unsigned int todo, offset;
++ unsigned int len = areq->cryptlen;
++ unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++ struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
++ int i = 0;
++ u32 a;
++ int err;
++
++ rctx->ivlen = ivsize;
++ if (rctx->op_dir & SS_DECRYPTION) {
++ offset = areq->cryptlen - ivsize;
++ scatterwalk_map_and_copy(sf->biv, areq->src, offset,
++ ivsize, 0);
++ }
++
++	/* we need to copy all IVs from the source in case DMA is bidirectional */
++ while (sg && len) {
++ if (sg_dma_len(sg) == 0) {
++ sg = sg_next(sg);
++ continue;
++ }
++ if (i == 0)
++ memcpy(sf->iv[0], areq->iv, ivsize);
++ a = dma_map_single(ss->dev, sf->iv[i], ivsize, DMA_TO_DEVICE);
++ if (dma_mapping_error(ss->dev, a)) {
++ memzero_explicit(sf->iv[i], ivsize);
++ dev_err(ss->dev, "Cannot DMA MAP IV\n");
++ err = -EFAULT;
++ goto dma_iv_error;
++ }
++ rctx->p_iv[i] = a;
++		/* we only need to set up the other IVs when decrypting */
++ if (rctx->op_dir & SS_ENCRYPTION)
++ return 0;
++ todo = min(len, sg_dma_len(sg));
++ len -= todo;
++ i++;
++ if (i < MAX_SG) {
++ offset = sg->length - ivsize;
++ scatterwalk_map_and_copy(sf->iv[i], sg, offset, ivsize, 0);
++ }
++ rctx->niv = i;
++ sg = sg_next(sg);
++ }
++
++ return 0;
++dma_iv_error:
++ i--;
++ while (i >= 0) {
++ dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
++ memzero_explicit(sf->iv[i], ivsize);
++ }
++ return err;
++}
++
+ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ {
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
+@@ -101,9 +163,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
+ struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+ struct sun8i_ss_alg_template *algt;
++ struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
+ struct scatterlist *sg;
+ unsigned int todo, len, offset, ivsize;
+- void *backup_iv = NULL;
+ int nr_sgs = 0;
+ int nr_sgd = 0;
+ int err = 0;
+@@ -134,30 +196,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
+
+ ivsize = crypto_skcipher_ivsize(tfm);
+ if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
+- rctx->ivlen = ivsize;
+- rctx->biv = kzalloc(ivsize, GFP_KERNEL | GFP_DMA);
+- if (!rctx->biv) {
+- err = -ENOMEM;
++ err = sun8i_ss_setup_ivs(areq);
++ if (err)
+ goto theend_key;
+- }
+- if (rctx->op_dir & SS_DECRYPTION) {
+- backup_iv = kzalloc(ivsize, GFP_KERNEL);
+- if (!backup_iv) {
+- err = -ENOMEM;
+- goto theend_key;
+- }
+- offset = areq->cryptlen - ivsize;
+- scatterwalk_map_and_copy(backup_iv, areq->src, offset,
+- ivsize, 0);
+- }
+- memcpy(rctx->biv, areq->iv, ivsize);
+- rctx->p_iv = dma_map_single(ss->dev, rctx->biv, rctx->ivlen,
+- DMA_TO_DEVICE);
+- if (dma_mapping_error(ss->dev, rctx->p_iv)) {
+- dev_err(ss->dev, "Cannot DMA MAP IV\n");
+- err = -ENOMEM;
+- goto theend_iv;
+- }
+ }
+ if (areq->src == areq->dst) {
+ nr_sgs = dma_map_sg(ss->dev, areq->src, sg_nents(areq->src),
+@@ -243,21 +284,19 @@ theend_sgs:
+ }
+
+ theend_iv:
+- if (rctx->p_iv)
+- dma_unmap_single(ss->dev, rctx->p_iv, rctx->ivlen,
+- DMA_TO_DEVICE);
+-
+ if (areq->iv && ivsize > 0) {
+- if (rctx->biv) {
+- offset = areq->cryptlen - ivsize;
+- if (rctx->op_dir & SS_DECRYPTION) {
+- memcpy(areq->iv, backup_iv, ivsize);
+- kfree_sensitive(backup_iv);
+- } else {
+- scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
+- ivsize, 0);
+- }
+- kfree(rctx->biv);
++ for (i = 0; i < rctx->niv; i++) {
++ dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
++ memzero_explicit(sf->iv[i], ivsize);
++ }
++
++ offset = areq->cryptlen - ivsize;
++ if (rctx->op_dir & SS_DECRYPTION) {
++ memcpy(areq->iv, sf->biv, ivsize);
++ memzero_explicit(sf->biv, ivsize);
++ } else {
++ scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
++ ivsize, 0);
+ }
+ }
+
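
The new sun8i_ss_setup_ivs() above exists because CBC-style decryption of a multi-segment request needs one IV per DMA chunk -- the last ciphertext block of chunk i is the IV of chunk i+1 -- and those blocks must be snapshotted before the bidirectional DMA can overwrite the source in place. A stripped-down sketch of the IV harvesting over plain buffers (no DMA mapping; struct seg is invented for the demo):

#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define IVSIZE   16
#define MAX_SEGS 8

struct seg {
        const uint8_t *buf;
        size_t len;
};

/* Chunk i+1 is decrypted with the last ciphertext block of chunk i, so
 * save those blocks *before* anything rewrites the source in place. */
static int setup_ivs(const struct seg *segs, int nsegs,
                     const uint8_t *iv0, uint8_t ivs[][IVSIZE])
{
        memcpy(ivs[0], iv0, IVSIZE);    /* first chunk uses the request IV */
        for (int i = 0; i + 1 < nsegs && i + 1 < MAX_SEGS; i++) {
                if (segs[i].len < IVSIZE)
                        return -1;      /* segment too short to carry an IV */
                memcpy(ivs[i + 1], segs[i].buf + segs[i].len - IVSIZE, IVSIZE);
        }
        return 0;
}

int main(void)
{
        static const uint8_t c0[32], c1[32];
        const struct seg segs[2] = { { c0, 32 }, { c1, 32 } };
        uint8_t iv0[IVSIZE] = { 0 };
        uint8_t ivs[MAX_SEGS][IVSIZE];

        return setup_ivs(segs, 2, iv0, ivs);
}
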
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 319fe3279a716..6575305786436 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -66,6 +66,7 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
+ const char *name)
+ {
+ int flow = rctx->flow;
++ unsigned int ivlen = rctx->ivlen;
+ u32 v = SS_START;
+ int i;
+
+@@ -104,15 +105,14 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
+ mutex_lock(&ss->mlock);
+ writel(rctx->p_key, ss->base + SS_KEY_ADR_REG);
+
+- if (i == 0) {
+- if (rctx->p_iv)
+- writel(rctx->p_iv, ss->base + SS_IV_ADR_REG);
+- } else {
+- if (rctx->biv) {
+- if (rctx->op_dir == SS_ENCRYPTION)
+- writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG);
++ if (ivlen) {
++ if (rctx->op_dir == SS_ENCRYPTION) {
++ if (i == 0)
++ writel(rctx->p_iv[0], ss->base + SS_IV_ADR_REG);
+ else
+- writel(rctx->t_src[i - 1].addr + rctx->t_src[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG);
++ writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - ivlen, ss->base + SS_IV_ADR_REG);
++ } else {
++ writel(rctx->p_iv[i], ss->base + SS_IV_ADR_REG);
+ }
+ }
+
+@@ -464,7 +464,7 @@ static void sun8i_ss_free_flows(struct sun8i_ss_dev *ss, int i)
+ */
+ static int allocate_flows(struct sun8i_ss_dev *ss)
+ {
+- int i, err;
++ int i, j, err;
+
+ ss->flows = devm_kcalloc(ss->dev, MAXFLOW, sizeof(struct sun8i_ss_flow),
+ GFP_KERNEL);
+@@ -474,6 +474,18 @@ static int allocate_flows(struct sun8i_ss_dev *ss)
+ for (i = 0; i < MAXFLOW; i++) {
+ init_completion(&ss->flows[i].complete);
+
++ ss->flows[i].biv = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
++ GFP_KERNEL | GFP_DMA);
++ if (!ss->flows[i].biv)
++ goto error_engine;
++
++ for (j = 0; j < MAX_SG; j++) {
++ ss->flows[i].iv[j] = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
++ GFP_KERNEL | GFP_DMA);
++ if (!ss->flows[i].iv[j])
++ goto error_engine;
++ }
++
+ ss->flows[i].engine = crypto_engine_alloc_init(ss->dev, true);
+ if (!ss->flows[i].engine) {
+ dev_err(ss->dev, "Cannot allocate engine\n");
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index 1a71ed49d2333..ca4f280af35d2 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -380,13 +380,21 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ }
+
+ len = areq->nbytes;
+- for_each_sg(areq->src, sg, nr_sgs, i) {
++ sg = areq->src;
++ i = 0;
++ while (len > 0 && sg) {
++ if (sg_dma_len(sg) == 0) {
++ sg = sg_next(sg);
++ continue;
++ }
+ rctx->t_src[i].addr = sg_dma_address(sg);
+ todo = min(len, sg_dma_len(sg));
+ rctx->t_src[i].len = todo / 4;
+ len -= todo;
+ rctx->t_dst[i].addr = addr_res;
+ rctx->t_dst[i].len = digestsize / 4;
++ sg = sg_next(sg);
++ i++;
+ }
+ if (len > 0) {
+ dev_err(ss->dev, "remaining len %d\n", len);
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+index 28188685b9100..57ada86538550 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+@@ -121,11 +121,15 @@ struct sginfo {
+ * @complete: completion for the current task on this flow
+ * @status: set to 1 by interrupt if task is done
+ * @stat_req: number of request done by this flow
++ * @iv: list of IVs to use for each step
++ * @biv: buffer which contains the backed-up IV
+ */
+ struct sun8i_ss_flow {
+ struct crypto_engine *engine;
+ struct completion complete;
+ int status;
++ u8 *iv[MAX_SG];
++ u8 *biv;
+ #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
+ unsigned long stat_req;
+ #endif
+@@ -164,28 +168,28 @@ struct sun8i_ss_dev {
+ * @t_src: list of mapped SGs with their size
+ * @t_dst: list of mapped SGs with their size
+ * @p_key: DMA address of the key
+- * @p_iv: DMA address of the IV
++ * @p_iv: DMA address of the IVs
++ * @niv: Number of IVs DMA mapped
+ * @method: current algorithm for this request
+ * @op_mode: op_mode for this request
+ * @op_dir: direction (encrypt vs decrypt) for this request
+ * @flow: the flow to use for this request
+- * @ivlen: size of biv
++ * @ivlen: size of IVs
+ * @keylen: keylen for this request
+- * @biv: buffer which contain the IV
+ * @fallback_req: request struct for invoking the fallback skcipher TFM
+ */
+ struct sun8i_cipher_req_ctx {
+ struct sginfo t_src[MAX_SG];
+ struct sginfo t_dst[MAX_SG];
+ u32 p_key;
+- u32 p_iv;
++ u32 p_iv[MAX_SG];
++ int niv;
+ u32 method;
+ u32 op_mode;
+ u32 op_dir;
+ int flow;
+ unsigned int ivlen;
+ unsigned int keylen;
+- void *biv;
+ struct skcipher_request fallback_req; // keep at the end
+ };
+
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 6ab93dfd478a9..3aefb177715e9 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -23,6 +23,7 @@
+ #include <linux/gfp.h>
+ #include <linux/cpufeature.h>
+ #include <linux/fs.h>
++#include <linux/fs_struct.h>
+
+ #include <asm/smp.h>
+
+@@ -170,6 +171,31 @@ static void *sev_fw_alloc(unsigned long len)
+ return page_address(page);
+ }
+
++static struct file *open_file_as_root(const char *filename, int flags, umode_t mode)
++{
++ struct file *fp;
++ struct path root;
++ struct cred *cred;
++ const struct cred *old_cred;
++
++ task_lock(&init_task);
++ get_fs_root(init_task.fs, &root);
++ task_unlock(&init_task);
++
++ cred = prepare_creds();
++ if (!cred)
++ return ERR_PTR(-ENOMEM);
++ cred->fsuid = GLOBAL_ROOT_UID;
++ old_cred = override_creds(cred);
++
++ fp = file_open_root(&root, filename, flags, mode);
++ path_put(&root);
++
++ revert_creds(old_cred);
++
++ return fp;
++}
++
+ static int sev_read_init_ex_file(void)
+ {
+ struct sev_device *sev = psp_master->sev_data;
+@@ -181,7 +207,7 @@ static int sev_read_init_ex_file(void)
+ if (!sev_init_ex_buffer)
+ return -EOPNOTSUPP;
+
+- fp = filp_open(init_ex_path, O_RDONLY, 0);
++ fp = open_file_as_root(init_ex_path, O_RDONLY, 0);
+ if (IS_ERR(fp)) {
+ int ret = PTR_ERR(fp);
+
+@@ -217,7 +243,7 @@ static void sev_write_init_ex_file(void)
+ if (!sev_init_ex_buffer)
+ return;
+
+- fp = filp_open(init_ex_path, O_CREAT | O_WRONLY, 0600);
++ fp = open_file_as_root(init_ex_path, O_CREAT | O_WRONLY, 0600);
+ if (IS_ERR(fp)) {
+ dev_err(sev->dev,
+ "SEV: could not open file for write, error %ld\n",
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index 11e0278c8631d..6140e49273226 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -356,12 +356,14 @@ void cc_unmap_cipher_request(struct device *dev, void *ctx,
+ req_ctx->mlli_params.mlli_dma_addr);
+ }
+
+- dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
+- dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
+-
+ if (src != dst) {
+- dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_BIDIRECTIONAL);
++ dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_TO_DEVICE);
++ dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_FROM_DEVICE);
+ dev_dbg(dev, "Unmapped req->dst=%pK\n", sg_virt(dst));
++ dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
++ } else {
++ dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
++ dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
+ }
+ }
+
+@@ -377,6 +379,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ u32 dummy = 0;
+ int rc = 0;
+ u32 mapped_nents = 0;
++ int src_direction = (src != dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
+
+ req_ctx->dma_buf_type = CC_DMA_BUF_DLLI;
+ mlli_params->curr_pool = NULL;
+@@ -399,7 +402,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ }
+
+ /* Map the src SGL */
+- rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
++ rc = cc_map_sg(dev, src, nbytes, src_direction, &req_ctx->in_nents,
+ LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
+ if (rc)
+ goto cipher_exit;
+@@ -416,7 +419,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ }
+ } else {
+ /* Map the dst sg */
+- rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL,
++ rc = cc_map_sg(dev, dst, nbytes, DMA_FROM_DEVICE,
+ &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
+ &dummy, &mapped_nents);
+ if (rc)
+@@ -456,6 +459,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+ struct cc_drvdata *drvdata = dev_get_drvdata(dev);
++ int src_direction = (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
+
+ if (areq_ctx->mac_buf_dma_addr) {
+ dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr,
+@@ -514,13 +518,11 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
+ areq_ctx->assoclen, req->cryptlen);
+
+- dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents,
+- DMA_BIDIRECTIONAL);
++ dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, src_direction);
+ if (req->src != req->dst) {
+ dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
+ sg_virt(req->dst));
+- dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents,
+- DMA_BIDIRECTIONAL);
++ dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_FROM_DEVICE);
+ }
+ if (drvdata->coherent &&
+ areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT &&
+@@ -843,7 +845,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ else
+ size_for_map -= authsize;
+
+- rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL,
++ rc = cc_map_sg(dev, req->dst, size_for_map, DMA_FROM_DEVICE,
+ &areq_ctx->dst.mapped_nents,
+ LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
+ &dst_mapped_nents);
+@@ -1056,7 +1058,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
+ size_to_map += authsize;
+ }
+
+- rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
++ rc = cc_map_sg(dev, req->src, size_to_map,
++ (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL),
+ &areq_ctx->src.mapped_nents,
+ (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
+ LLI_MAX_NUM_OF_DATA_ENTRIES),
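
The ccree hunks above tighten DMA directions: when source and destination differ, the device only ever reads the source and only writes the destination, so DMA_TO_DEVICE/DMA_FROM_DEVICE suffice and spare the cache maintenance that DMA_BIDIRECTIONAL implies on non-coherent systems; in-place requests still need the bidirectional mapping. A tiny sketch of the selection logic alone -- the enum below is a local stand-in, not the kernel's enum dma_data_direction:

enum dir { BIDIRECTIONAL, TO_DEVICE, FROM_DEVICE };

/* In-place: the device both reads and writes the one buffer.
 * Out-of-place: read-only source, write-only destination. */
static enum dir src_direction(const void *src, const void *dst)
{
        return src == dst ? BIDIRECTIONAL : TO_DEVICE;
}

static enum dir dst_direction(const void *src, const void *dst)
{
        return src == dst ? BIDIRECTIONAL : FROM_DEVICE;
}

int main(void)
{
        char a[16], b[16];

        return (src_direction(a, a) == BIDIRECTIONAL &&
                dst_direction(a, b) == FROM_DEVICE) ? 0 : 1;
}
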
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index b739d3b873dcf..c6f2fa753b7c0 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -624,7 +624,6 @@ struct skcipher_alg mv_cesa_ecb_des3_ede_alg = {
+ .decrypt = mv_cesa_ecb_des3_ede_decrypt,
+ .min_keysize = DES3_EDE_KEY_SIZE,
+ .max_keysize = DES3_EDE_KEY_SIZE,
+- .ivsize = DES3_EDE_BLOCK_SIZE,
+ .base = {
+ .cra_name = "ecb(des3_ede)",
+ .cra_driver_name = "mv-ecb-des3-ede",
+diff --git a/drivers/crypto/nx/nx-common-powernv.c b/drivers/crypto/nx/nx-common-powernv.c
+index 32a036ada5d0a..f418817c0f43e 100644
+--- a/drivers/crypto/nx/nx-common-powernv.c
++++ b/drivers/crypto/nx/nx-common-powernv.c
+@@ -827,7 +827,7 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id,
+ goto err_out;
+
+ vas_init_rx_win_attr(&rxattr, coproc->ct);
+- rxattr.rx_fifo = (void *)rx_fifo;
++ rxattr.rx_fifo = rx_fifo;
+ rxattr.rx_fifo_size = fifo_size;
+ rxattr.lnotify_lpid = lpid;
+ rxattr.lnotify_pid = pid;
+diff --git a/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c b/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c
+index 588352de1ef0e..d17318d3f63a4 100644
+--- a/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c
++++ b/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c
+@@ -154,7 +154,7 @@ static struct pfvf_message handle_blkmsg_req(struct adf_accel_vf_info *vf_info,
+ if (FIELD_GET(ADF_VF2PF_BLOCK_CRC_REQ_MASK, req.data)) {
+ dev_dbg(&GET_DEV(vf_info->accel_dev),
+ "BlockMsg of type %d for CRC over %d bytes received from VF%d\n",
+- blk_type, blk_byte, vf_info->vf_nr);
++ blk_type, blk_byte + 1, vf_info->vf_nr);
+
+ if (!adf_pf2vf_blkmsg_get_data(vf_info, blk_type, blk_byte,
+ byte_max, &resp_data,
+diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+index 09599fe4d2f3f..61d5467e0d92b 100644
+--- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
++++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+@@ -58,17 +58,24 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+
+ capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC |
+ ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
+- ICP_ACCEL_CAPABILITIES_AUTHENTICATION;
++ ICP_ACCEL_CAPABILITIES_AUTHENTICATION |
++ ICP_ACCEL_CAPABILITIES_CIPHER |
++ ICP_ACCEL_CAPABILITIES_COMPRESSION;
+
+ /* Read accelerator capabilities mask */
+ pci_read_config_dword(pdev, ADF_DEVICE_LEGFUSE_OFFSET, &legfuses);
+
+- if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE)
++ /* A set bit in legfuses means the feature is OFF in this SKU */
++ if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) {
+ capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC;
++ capabilities &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
++ }
+ if (legfuses & ICP_ACCEL_MASK_PKE_SLICE)
+ capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC;
+- if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE)
++ if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) {
+ capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION;
++ capabilities &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
++ }
+ if (legfuses & ICP_ACCEL_MASK_COMPRESS_SLICE)
+ capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION;
+
+diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
+index 293857ebfd75d..538e8dc74f40a 100644
+--- a/drivers/devfreq/rk3399_dmc.c
++++ b/drivers/devfreq/rk3399_dmc.c
+@@ -477,6 +477,8 @@ static int rk3399_dmcfreq_remove(struct platform_device *pdev)
+ {
+ struct rk3399_dmcfreq *dmcfreq = dev_get_drvdata(&pdev->dev);
+
++ devfreq_event_disable_edev(dmcfreq->edev);
++
+ /*
+ * Before remove the opp table we need to unregister the opp notifier.
+ */
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index b9b2b4a4124ee..033df43db0cec 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -369,10 +369,16 @@ int idxd_cdev_register(void)
+ rc = alloc_chrdev_region(&ictx[i].devt, 0, MINORMASK,
+ ictx[i].name);
+ if (rc)
+- return rc;
++ goto err_free_chrdev_region;
+ }
+
+ return 0;
++
++err_free_chrdev_region:
++ for (i--; i >= 0; i--)
++ unregister_chrdev_region(ictx[i].devt, MINORMASK);
++
++ return rc;
+ }
+
+ void idxd_cdev_remove(void)
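
idxd_cdev_register() above gains the canonical partial-initialization unwind: when the i-th registration fails, walk back over entries i-1..0 and release them rather than returning with the earlier ones leaked. The shape in miniature, with malloc()/free() standing in for alloc_chrdev_region()/unregister_chrdev_region():

#include <stdlib.h>

#define N 4

static void *res[N];

/* On failure at step i, release steps i-1..0 in reverse order -- the
 * same "for (i--; i >= 0; i--)" unwind the fix adds. */
static int register_all(void)
{
        int i;

        for (i = 0; i < N; i++) {
                res[i] = malloc(64);
                if (!res[i])
                        goto err_unwind;
        }
        return 0;

err_unwind:
        for (i--; i >= 0; i--)
                free(res[i]);
        return -1;
}

int main(void)
{
        int i;

        if (register_all())
                return 1;
        for (i = 0; i < N; i++)         /* normal teardown */
                free(res[i]);
        return 0;
}
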
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index 6f57ff0e7b37b..f8c8b9d76aadc 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -34,7 +34,6 @@
+ #include "virt-dma.h"
+
+ #define STM32_MDMA_GISR0 0x0000 /* MDMA Int Status Reg 1 */
+-#define STM32_MDMA_GISR1 0x0004 /* MDMA Int Status Reg 2 */
+
+ /* MDMA Channel x interrupt/status register */
+ #define STM32_MDMA_CISR(x) (0x40 + 0x40 * (x)) /* x = 0..62 */
+@@ -168,7 +167,7 @@
+
+ #define STM32_MDMA_MAX_BUF_LEN 128
+ #define STM32_MDMA_MAX_BLOCK_LEN 65536
+-#define STM32_MDMA_MAX_CHANNELS 63
++#define STM32_MDMA_MAX_CHANNELS 32
+ #define STM32_MDMA_MAX_REQUESTS 256
+ #define STM32_MDMA_MAX_BURST 128
+ #define STM32_MDMA_VERY_HIGH_PRIORITY 0x3
+@@ -1317,26 +1316,16 @@ static void stm32_mdma_xfer_end(struct stm32_mdma_chan *chan)
+ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid)
+ {
+ struct stm32_mdma_device *dmadev = devid;
+- struct stm32_mdma_chan *chan = devid;
++ struct stm32_mdma_chan *chan;
+ u32 reg, id, ccr, ien, status;
+
+ /* Find out which channel generates the interrupt */
+ status = readl_relaxed(dmadev->base + STM32_MDMA_GISR0);
+- if (status) {
+- id = __ffs(status);
+- } else {
+- status = readl_relaxed(dmadev->base + STM32_MDMA_GISR1);
+- if (!status) {
+- dev_dbg(mdma2dev(dmadev), "spurious it\n");
+- return IRQ_NONE;
+- }
+- id = __ffs(status);
+- /*
+- * As GISR0 provides status for channel id from 0 to 31,
+- * so GISR1 provides status for channel id from 32 to 62
+- */
+- id += 32;
++ if (!status) {
++		dev_dbg(mdma2dev(dmadev), "spurious interrupt\n");
++ return IRQ_NONE;
+ }
++ id = __ffs(status);
+
+ chan = &dmadev->chan[id];
+ if (!chan) {
+diff --git a/drivers/edac/dmc520_edac.c b/drivers/edac/dmc520_edac.c
+index b8a7d9594afd4..1fa5ca57e9ec1 100644
+--- a/drivers/edac/dmc520_edac.c
++++ b/drivers/edac/dmc520_edac.c
+@@ -489,7 +489,7 @@ static int dmc520_edac_probe(struct platform_device *pdev)
+ dev = &pdev->dev;
+
+ for (idx = 0; idx < NUMBER_OF_IRQS; idx++) {
+- irq = platform_get_irq_byname(pdev, dmc520_irq_configs[idx].name);
++ irq = platform_get_irq_byname_optional(pdev, dmc520_irq_configs[idx].name);
+ irqs[idx] = irq;
+ masks[idx] = dmc520_irq_configs[idx].mask;
+ if (irq >= 0) {
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 14f900047ac0c..44300dbcc643d 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -582,7 +582,7 @@ static int ffa_partition_info_get(const char *uuid_str,
+ return -ENODEV;
+ }
+
+- count = ffa_partition_probe(&uuid_null, &pbuf);
++ count = ffa_partition_probe(&uuid, &pbuf);
+ if (count <= 0)
+ return -ENOENT;
+
+@@ -688,8 +688,6 @@ static void ffa_setup_partitions(void)
+ __func__, tpbuf->id);
+ continue;
+ }
+-
+- ffa_dev_set_drvdata(ffa_dev, drv_info);
+ }
+ kfree(pbuf);
+ }
+diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
+index f5219334fd3a5..3fe172c03c247 100644
+--- a/drivers/firmware/arm_scmi/base.c
++++ b/drivers/firmware/arm_scmi/base.c
+@@ -197,7 +197,7 @@ scmi_base_implementation_list_get(const struct scmi_protocol_handle *ph,
+ break;
+
+ loop_num_ret = le32_to_cpu(*num_ret);
+- if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) {
++ if (loop_num_ret > MAX_PROTOCOLS_IMP - tot_num_ret) {
+ dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP");
+ break;
+ }
+diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
+index 2c3dac5ecb36d..243882f5e5f99 100644
+--- a/drivers/firmware/efi/Kconfig
++++ b/drivers/firmware/efi/Kconfig
+@@ -284,3 +284,18 @@ config EFI_CUSTOM_SSDT_OVERLAYS
+
+ See Documentation/admin-guide/acpi/ssdt-overlays.rst for more
+ information.
++
++config EFI_DISABLE_RUNTIME
++ bool "Disable EFI runtime services support by default"
++ default y if PREEMPT_RT
++ help
++	  Allow disabling EFI runtime services support by default. This can
++	  already be achieved with the efi=noruntime option, but it can be
++	  useful to have this default without any kernel command line parameter.
++
++	  EFI runtime services are disabled by default when PREEMPT_RT is
++	  enabled, because measurements have shown that some EFI function
++	  calls can take too long to complete, causing large latencies,
++	  which is a problem for real-time kernels.
++
++ This default can be overridden by using the efi=runtime option.
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 5502e176d51be..ff57db8f8d059 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -66,7 +66,7 @@ struct mm_struct efi_mm = {
+
+ struct workqueue_struct *efi_rts_wq;
+
+-static bool disable_runtime = IS_ENABLED(CONFIG_PREEMPT_RT);
++static bool disable_runtime = IS_ENABLED(CONFIG_EFI_DISABLE_RUNTIME);
+ static int __init setup_noefi(char *arg)
+ {
+ disable_runtime = true;
+diff --git a/drivers/gpio/gpio-rockchip.c b/drivers/gpio/gpio-rockchip.c
+index 099e358d24915..bcf5214e35866 100644
+--- a/drivers/gpio/gpio-rockchip.c
++++ b/drivers/gpio/gpio-rockchip.c
+@@ -19,6 +19,7 @@
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
+ #include <linux/of_irq.h>
++#include <linux/pinctrl/pinconf-generic.h>
+ #include <linux/regmap.h>
+
+ #include "../pinctrl/core.h"
+@@ -706,7 +707,7 @@ static int rockchip_gpio_probe(struct platform_device *pdev)
+ struct device_node *pctlnp = of_get_parent(np);
+ struct pinctrl_dev *pctldev = NULL;
+ struct rockchip_pin_bank *bank = NULL;
+- struct rockchip_pin_output_deferred *cfg;
++ struct rockchip_pin_deferred *cfg;
+ static int gpio;
+ int id, ret;
+
+@@ -747,15 +748,22 @@ static int rockchip_gpio_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- while (!list_empty(&bank->deferred_output)) {
+- cfg = list_first_entry(&bank->deferred_output,
+- struct rockchip_pin_output_deferred, head);
++ while (!list_empty(&bank->deferred_pins)) {
++ cfg = list_first_entry(&bank->deferred_pins,
++ struct rockchip_pin_deferred, head);
+ list_del(&cfg->head);
+
+- ret = rockchip_gpio_direction_output(&bank->gpio_chip, cfg->pin, cfg->arg);
+- if (ret)
+- dev_warn(dev, "setting output pin %u to %u failed\n", cfg->pin, cfg->arg);
+-
++ switch (cfg->param) {
++ case PIN_CONFIG_OUTPUT:
++ ret = rockchip_gpio_direction_output(&bank->gpio_chip, cfg->pin, cfg->arg);
++ if (ret)
++ dev_warn(dev, "setting output pin %u to %u failed\n", cfg->pin,
++ cfg->arg);
++ break;
++ default:
++ dev_warn(dev, "unknown deferred config param %d\n", cfg->param);
++ break;
++ }
+ kfree(cfg);
+ }
+
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index 41c31b10ae848..98109839102fb 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -314,8 +314,8 @@ static int gpio_sim_setup_sysfs(struct gpio_sim_chip *chip)
+
+ for (i = 0; i < num_lines; i++) {
+ attr_group = devm_kzalloc(dev, sizeof(*attr_group), GFP_KERNEL);
+- attrs = devm_kcalloc(dev, sizeof(*attrs),
+- GPIO_SIM_NUM_ATTRS, GFP_KERNEL);
++ attrs = devm_kcalloc(dev, GPIO_SIM_NUM_ATTRS, sizeof(*attrs),
++ GFP_KERNEL);
+ val_attr = devm_kzalloc(dev, sizeof(*val_attr), GFP_KERNEL);
+ pull_attr = devm_kzalloc(dev, sizeof(*pull_attr), GFP_KERNEL);
+ if (!attr_group || !attrs || !val_attr || !pull_attr)
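
The gpio-sim fix above is argument order only: devm_kcalloc(), like calloc(), takes the element count first and the element size second. Both orders allocate the same number of bytes and the overflow check multiplies the two either way, so nothing misbehaves at runtime -- the swap keeps the call consistent with the calloc() convention and quiets static checkers. Illustrated with plain calloc():

#include <stdlib.h>

struct attr {
        void *a, *b;
};

int main(void)
{
        /* calloc(nmemb, size): count first, element size second.  The
         * swapped call allocates the same byte count, which is exactly
         * why this bug is invisible at runtime. */
        struct attr *ok  = calloc(8, sizeof(struct attr));  /* idiomatic   */
        struct attr *odd = calloc(sizeof(struct attr), 8);  /* wrong order */

        free(ok);
        free(odd);
        return 0;
}
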
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 775a7dadf9a39..1fcb759062c46 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -933,6 +933,11 @@ static int of_gpiochip_add_pin_range(struct gpio_chip *chip)
+ if (!np)
+ return 0;
+
++ if (!of_property_read_bool(np, "gpio-ranges") &&
++ chip->of_gpio_ranges_fallback) {
++ return chip->of_gpio_ranges_fallback(chip, np);
++ }
++
+ group_names = of_find_property(np, group_names_propname, NULL);
+
+ for (;; index++) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index a34be65c9eaac..78ae27be1872d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -115,7 +115,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
+ int ret;
+
+ if (cs->in.num_chunks == 0)
+- return 0;
++ return -EINVAL;
+
+ chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
+ if (!chunk_array)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index e2c422a6825c7..06adb66fd99ab 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1955,6 +1955,7 @@ static const struct pci_device_id pciidlist[] = {
+ {0x1002, 0x7421, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+ {0x1002, 0x7422, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+ {0x1002, 0x7423, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
++ {0x1002, 0x7424, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+ {0x1002, 0x743F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+
+ { PCI_DEVICE(0x1002, PCI_ANY_ID),
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index dee17a0e11870..786518f7e37b2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -331,7 +331,39 @@ static int psp_sw_init(void *handle)
+ }
+ }
+
++ ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
++ amdgpu_sriov_vf(adev) ?
++ AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
++ &psp->fw_pri_bo,
++ &psp->fw_pri_mc_addr,
++ &psp->fw_pri_buf);
++ if (ret)
++ return ret;
++
++ ret = amdgpu_bo_create_kernel(adev, PSP_FENCE_BUFFER_SIZE, PAGE_SIZE,
++ AMDGPU_GEM_DOMAIN_VRAM,
++ &psp->fence_buf_bo,
++ &psp->fence_buf_mc_addr,
++ &psp->fence_buf);
++ if (ret)
++ goto failed1;
++
++ ret = amdgpu_bo_create_kernel(adev, PSP_CMD_BUFFER_SIZE, PAGE_SIZE,
++ AMDGPU_GEM_DOMAIN_VRAM,
++ &psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
++ (void **)&psp->cmd_buf_mem);
++ if (ret)
++ goto failed2;
++
+ return 0;
++
++failed2:
++ amdgpu_bo_free_kernel(&psp->fw_pri_bo,
++ &psp->fw_pri_mc_addr, &psp->fw_pri_buf);
++failed1:
++ amdgpu_bo_free_kernel(&psp->fence_buf_bo,
++ &psp->fence_buf_mc_addr, &psp->fence_buf);
++ return ret;
+ }
+
+ static int psp_sw_fini(void *handle)
+@@ -361,6 +393,13 @@ static int psp_sw_fini(void *handle)
+ kfree(cmd);
+ cmd = NULL;
+
++ amdgpu_bo_free_kernel(&psp->fw_pri_bo,
++ &psp->fw_pri_mc_addr, &psp->fw_pri_buf);
++ amdgpu_bo_free_kernel(&psp->fence_buf_bo,
++ &psp->fence_buf_mc_addr, &psp->fence_buf);
++ amdgpu_bo_free_kernel(&psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
++ (void **)&psp->cmd_buf_mem);
++
+ return 0;
+ }
+
+@@ -2391,51 +2430,18 @@ static int psp_load_fw(struct amdgpu_device *adev)
+ struct psp_context *psp = &adev->psp;
+
+ if (amdgpu_sriov_vf(adev) && amdgpu_in_reset(adev)) {
+- psp_ring_stop(psp, PSP_RING_TYPE__KM); /* should not destroy ring, only stop */
+- goto skip_memalloc;
+- }
+-
+- if (amdgpu_sriov_vf(adev)) {
+- ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
+- AMDGPU_GEM_DOMAIN_VRAM,
+- &psp->fw_pri_bo,
+- &psp->fw_pri_mc_addr,
+- &psp->fw_pri_buf);
++ /* should not destroy ring, only stop */
++ psp_ring_stop(psp, PSP_RING_TYPE__KM);
+ } else {
+- ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
+- AMDGPU_GEM_DOMAIN_GTT,
+- &psp->fw_pri_bo,
+- &psp->fw_pri_mc_addr,
+- &psp->fw_pri_buf);
+- }
+-
+- if (ret)
+- goto failed;
+-
+- ret = amdgpu_bo_create_kernel(adev, PSP_FENCE_BUFFER_SIZE, PAGE_SIZE,
+- AMDGPU_GEM_DOMAIN_VRAM,
+- &psp->fence_buf_bo,
+- &psp->fence_buf_mc_addr,
+- &psp->fence_buf);
+- if (ret)
+- goto failed;
+-
+- ret = amdgpu_bo_create_kernel(adev, PSP_CMD_BUFFER_SIZE, PAGE_SIZE,
+- AMDGPU_GEM_DOMAIN_VRAM,
+- &psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
+- (void **)&psp->cmd_buf_mem);
+- if (ret)
+- goto failed;
++ memset(psp->fence_buf, 0, PSP_FENCE_BUFFER_SIZE);
+
+- memset(psp->fence_buf, 0, PSP_FENCE_BUFFER_SIZE);
+-
+- ret = psp_ring_init(psp, PSP_RING_TYPE__KM);
+- if (ret) {
+- DRM_ERROR("PSP ring init failed!\n");
+- goto failed;
++ ret = psp_ring_init(psp, PSP_RING_TYPE__KM);
++ if (ret) {
++ DRM_ERROR("PSP ring init failed!\n");
++ goto failed;
++ }
+ }
+
+-skip_memalloc:
+ ret = psp_hw_start(psp);
+ if (ret)
+ goto failed;
+@@ -2553,13 +2559,6 @@ static int psp_hw_fini(void *handle)
+ psp_tmr_terminate(psp);
+ psp_ring_destroy(psp, PSP_RING_TYPE__KM);
+
+- amdgpu_bo_free_kernel(&psp->fw_pri_bo,
+- &psp->fw_pri_mc_addr, &psp->fw_pri_buf);
+- amdgpu_bo_free_kernel(&psp->fence_buf_bo,
+- &psp->fence_buf_mc_addr, &psp->fence_buf);
+- amdgpu_bo_free_kernel(&psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
+- (void **)&psp->cmd_buf_mem);
+-
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+index ca33505026189..aebafbc327fb2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+@@ -714,8 +714,7 @@ int amdgpu_ucode_create_bo(struct amdgpu_device *adev)
+
+ void amdgpu_ucode_free_bo(struct amdgpu_device *adev)
+ {
+- if (adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT)
+- amdgpu_bo_free_kernel(&adev->firmware.fw_buf,
++ amdgpu_bo_free_kernel(&adev->firmware.fw_buf,
+ &adev->firmware.fw_buf_mc,
+ &adev->firmware.fw_buf_ptr);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index f0638db57111d..66b6b175ae908 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -772,8 +772,8 @@ static void sdma_v4_0_ring_set_wptr(struct amdgpu_ring *ring)
+
+ DRM_DEBUG("Using doorbell -- "
+ "wptr_offs == 0x%08x "
+- "lower_32_bits(ring->wptr) << 2 == 0x%08x "
+- "upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
++ "lower_32_bits(ring->wptr << 2) == 0x%08x "
++ "upper_32_bits(ring->wptr << 2) == 0x%08x\n",
+ ring->wptr_offs,
+ lower_32_bits(ring->wptr << 2),
+ upper_32_bits(ring->wptr << 2));
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index 81e033549dda3..6982735e88baf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -400,8 +400,8 @@ static void sdma_v5_0_ring_set_wptr(struct amdgpu_ring *ring)
+ if (ring->use_doorbell) {
+ DRM_DEBUG("Using doorbell -- "
+ "wptr_offs == 0x%08x "
+- "lower_32_bits(ring->wptr) << 2 == 0x%08x "
+- "upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
++ "lower_32_bits(ring->wptr << 2) == 0x%08x "
++ "upper_32_bits(ring->wptr << 2) == 0x%08x\n",
+ ring->wptr_offs,
+ lower_32_bits(ring->wptr << 2),
+ upper_32_bits(ring->wptr << 2));
+@@ -780,9 +780,9 @@ static int sdma_v5_0_gfx_resume(struct amdgpu_device *adev)
+
+ if (!amdgpu_sriov_vf(adev)) { /* only bare-metal use register write for wptr */
+ WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR),
+- lower_32_bits(ring->wptr) << 2);
++ lower_32_bits(ring->wptr << 2));
+ WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI),
+- upper_32_bits(ring->wptr) << 2);
++ upper_32_bits(ring->wptr << 2));
+ }
+
+ doorbell = RREG32_SOC15_IP(GC, sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL));
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index d3d6d5b045b83..ce3a3d1bdaa8b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -287,8 +287,8 @@ static void sdma_v5_2_ring_set_wptr(struct amdgpu_ring *ring)
+ if (ring->use_doorbell) {
+ DRM_DEBUG("Using doorbell -- "
+ "wptr_offs == 0x%08x "
+- "lower_32_bits(ring->wptr) << 2 == 0x%08x "
+- "upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
++ "lower_32_bits(ring->wptr << 2) == 0x%08x "
++ "upper_32_bits(ring->wptr << 2) == 0x%08x\n",
+ ring->wptr_offs,
+ lower_32_bits(ring->wptr << 2),
+ upper_32_bits(ring->wptr << 2));
+@@ -664,8 +664,8 @@ static int sdma_v5_2_gfx_resume(struct amdgpu_device *adev)
+ WREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_MINOR_PTR_UPDATE), 1);
+
+ if (!amdgpu_sriov_vf(adev)) { /* only bare-metal use register write for wptr */
+- WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr) << 2);
+- WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr) << 2);
++ WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr << 2));
++ WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr << 2));
+ }
+
+ doorbell = RREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL));
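
All three sdma hunks above fix the same operand-order bug: lower_32_bits(ring->wptr) << 2 truncates to 32 bits before shifting, so the carry into bit 32 is lost and the matching upper_32_bits() expression never sees it; shifting the full 64-bit value first and splitting afterwards preserves it. A worked example, with the two helper macros redefined locally so it compiles standalone:

#include <stdint.h>
#include <stdio.h>

#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffffULL))
#define upper_32_bits(n) ((uint32_t)(((uint64_t)(n)) >> 32))

int main(void)
{
        uint64_t wptr = 0x40000001ULL;  /* shifting by 2 carries into bit 32 */

        /* Buggy order: truncate, then shift -- the carry is dropped and
         * the upper half never sees it. */
        printf("buggy: hi=0x%08x lo=0x%08x\n",
               upper_32_bits(wptr) << 2, lower_32_bits(wptr) << 2);
        /* -> hi=0x00000000 lo=0x00000004 (bit 32 lost) */

        /* Fixed order: shift the full 64-bit value, then split it. */
        printf("fixed: hi=0x%08x lo=0x%08x\n",
               upper_32_bits(wptr << 2), lower_32_bits(wptr << 2));
        /* -> hi=0x00000001 lo=0x00000004 */
        return 0;
}
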
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index f3933c9f57468..6966b72eefc64 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -1030,6 +1030,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .afmt = true,
+ }
+ },
++ .disable_z10 = true,
+ .optimize_edp_link_rate = true,
+ .enable_sw_cntl_psr = true,
+ .apply_vendor_specific_lttpr_wa = true,
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+index bcae42cef3743..6ba4c2ae69a63 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c
+@@ -1609,19 +1609,7 @@ static int kv_update_samu_dpm(struct amdgpu_device *adev, bool gate)
+
+ static u8 kv_get_acp_boot_level(struct amdgpu_device *adev)
+ {
+- u8 i;
+- struct amdgpu_clock_voltage_dependency_table *table =
+- &adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table;
+-
+- for (i = 0; i < table->count; i++) {
+- if (table->entries[i].clk >= 0) /* XXX */
+- break;
+- }
+-
+- if (i >= table->count)
+- i = table->count - 1;
+-
+- return i;
++ return 0;
+ }
+
+ static void kv_update_acp_boot_level(struct amdgpu_device *adev)
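The deleted loop could only ever return 0: assuming the table's clk field is unsigned (the /* XXX */ marker suggests the author knew the test was suspect), entries[i].clk >= 0 is always true, so the loop broke at i == 0 on every run. Returning 0 directly is behaviourally identical. A toy reproduction of the always-true comparison:

#include <stdio.h>

struct entry { unsigned int clk; };	/* assumed u32, as in the real table */

int main(void)
{
	struct entry entries[3] = { {100}, {200}, {300} };
	unsigned char i;

	for (i = 0; i < 3; i++)
		if (entries[i].clk >= 0)	/* always true for unsigned */
			break;
	printf("acp boot level = %d\n", i);	/* prints 0, always */
	return 0;
}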
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+index 81f82aa05ec28..66fc63f1f1c17 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c
+@@ -7247,17 +7247,15 @@ static int si_parse_power_table(struct amdgpu_device *adev)
+ if (!adev->pm.dpm.ps)
+ return -ENOMEM;
+ power_state_offset = (u8 *)state_array->states;
+- for (i = 0; i < state_array->ucNumEntries; i++) {
++ for (adev->pm.dpm.num_ps = 0, i = 0; i < state_array->ucNumEntries; i++) {
+ u8 *idx;
+ power_state = (union pplib_power_state *)power_state_offset;
+ non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ &non_clock_info_array->nonClockInfo[non_clock_array_index];
+ ps = kzalloc(sizeof(struct si_ps), GFP_KERNEL);
+- if (ps == NULL) {
+- kfree(adev->pm.dpm.ps);
++ if (ps == NULL)
+ return -ENOMEM;
+- }
+ adev->pm.dpm.ps[i].ps_priv = ps;
+ si_parse_pplib_non_clock_info(adev, &adev->pm.dpm.ps[i],
+ non_clock_info,
+@@ -7279,8 +7277,8 @@ static int si_parse_power_table(struct amdgpu_device *adev)
+ k++;
+ }
+ power_state_offset += 2 + power_state->v2.ucNumDPMLevels;
++ adev->pm.dpm.num_ps++;
+ }
+- adev->pm.dpm.num_ps = state_array->ucNumEntries;
+
+ /* fill in the vce power states */
+ for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
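This hunk apparently resolves a double free: the old error path kfree'd adev->pm.dpm.ps even though the caller's teardown frees it as well, and counting num_ps up only after each entry is fully parsed tells that teardown exactly how many per-entry ps_priv allocations exist. A hedged userspace sketch of the same ownership rule (names invented):

#include <stdio.h>
#include <stdlib.h>

struct ps { void *priv; };

/* advance *num only once an entry is fully initialized */
static int parse_states(struct ps *arr, int want, int *num)
{
	for (*num = 0; *num < want; (*num)++) {
		arr[*num].priv = malloc(16);
		if (!arr[*num].priv)
			return -1;	/* no freeing here; that is the caller's job */
	}
	return 0;
}

static void teardown(struct ps *arr, int num)
{
	while (num--)
		free(arr[num].priv);	/* frees exactly what was made */
}

int main(void)
{
	struct ps arr[4];
	int n;
	int ret = parse_states(arr, 4, &n);

	teardown(arr, n);	/* single owner: runs on success and failure */
	printf("ret=%d n=%d\n", ret, n);
	return 0;
}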
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index 25c4b135f8303..5e7c9e6d8125b 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -1119,6 +1119,39 @@ static int renoir_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
++static void renoir_get_ss_power_percent(SmuMetrics_t *metrics,
++ uint32_t *apu_percent, uint32_t *dgpu_percent)
++{
++ uint32_t apu_boost = 0;
++ uint32_t dgpu_boost = 0;
++ uint16_t apu_limit = 0;
++ uint16_t dgpu_limit = 0;
++ uint16_t apu_power = 0;
++ uint16_t dgpu_power = 0;
++
++ apu_power = metrics->ApuPower;
++ apu_limit = metrics->StapmOriginalLimit;
++ if (apu_power > apu_limit && apu_limit != 0)
++ apu_boost = ((apu_power - apu_limit) * 100) / apu_limit;
++ apu_boost = (apu_boost > 100) ? 100 : apu_boost;
++
++ dgpu_power = metrics->dGpuPower;
++ if (metrics->StapmCurrentLimit > metrics->StapmOriginalLimit)
++ dgpu_limit = metrics->StapmCurrentLimit - metrics->StapmOriginalLimit;
++ if (dgpu_power > dgpu_limit && dgpu_limit != 0)
++ dgpu_boost = ((dgpu_power - dgpu_limit) * 100) / dgpu_limit;
++ dgpu_boost = (dgpu_boost > 100) ? 100 : dgpu_boost;
++
++ if (dgpu_boost >= apu_boost)
++ apu_boost = 0;
++ else
++ dgpu_boost = 0;
++
++ *apu_percent = apu_boost;
++ *dgpu_percent = dgpu_boost;
++}
++
++
+ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+ MetricsMember_t member,
+ uint32_t *value)
+@@ -1127,6 +1160,9 @@ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+
+ SmuMetrics_t *metrics = (SmuMetrics_t *)smu_table->metrics_table;
+ int ret = 0;
++ uint32_t apu_percent = 0;
++ uint32_t dgpu_percent = 0;
++
+
+ mutex_lock(&smu->metrics_lock);
+
+@@ -1175,26 +1211,18 @@ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+ *value = metrics->Voltage[1];
+ break;
+ case METRICS_SS_APU_SHARE:
+- /* return the percentage of APU power with respect to APU's power limit.
+- * percentage is reported, this isn't boost value. Smartshift power
+- * boost/shift is only when the percentage is more than 100.
++ /* return the percentage of APU power boost
++ * with respect to APU's power limit.
+ */
+- if (metrics->StapmOriginalLimit > 0)
+- *value = (metrics->ApuPower * 100) / metrics->StapmOriginalLimit;
+- else
+- *value = 0;
++ renoir_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++ *value = apu_percent;
+ break;
+ case METRICS_SS_DGPU_SHARE:
+- /* return the percentage of dGPU power with respect to dGPU's power limit.
+- * percentage is reported, this isn't boost value. Smartshift power
+- * boost/shift is only when the percentage is more than 100.
++ /* return the percentage of dGPU power boost
++ * with respect to dGPU's power limit.
+ */
+- if ((metrics->dGpuPower > 0) &&
+- (metrics->StapmCurrentLimit > metrics->StapmOriginalLimit))
+- *value = (metrics->dGpuPower * 100) /
+- (metrics->StapmCurrentLimit - metrics->StapmOriginalLimit);
+- else
+- *value = 0;
++ renoir_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++ *value = dgpu_percent;
+ break;
+ default:
+ *value = UINT_MAX;
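The new helper changes what the SmartShift metrics report: instead of raw power as a percentage of the STAPM limit, it reports only the boost above the limit, clamps it to 100, and zeroes whichever of the APU/dGPU values is smaller so a shift is attributed to one side at a time. The arithmetic, extracted into a runnable sketch with invented sample values:

#include <stdint.h>
#include <stdio.h>

static uint32_t boost_pct(uint16_t power, uint16_t limit)
{
	uint32_t b = 0;

	if (power > limit && limit != 0)
		b = ((uint32_t)(power - limit) * 100) / limit;
	return b > 100 ? 100 : b;	/* clamp, as the hunk does */
}

int main(void)
{
	uint32_t apu = boost_pct(30, 25);	/* 20% over its limit */
	uint32_t dgpu = boost_pct(12, 10);	/* also 20% over */

	/* only the larger boost is reported; ties go to the dGPU */
	if (dgpu >= apu)
		apu = 0;
	else
		dgpu = 0;
	printf("apu=%u dgpu=%u\n", apu, dgpu);	/* apu=0 dgpu=20 */
	return 0;
}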
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+index 0bc84b709a935..d0715927b07f6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+@@ -296,6 +296,42 @@ static int yellow_carp_mode2_reset(struct smu_context *smu)
+ return yellow_carp_mode_reset(smu, SMU_RESET_MODE_2);
+ }
+
++
++static void yellow_carp_get_ss_power_percent(SmuMetrics_t *metrics,
++ uint32_t *apu_percent, uint32_t *dgpu_percent)
++{
++ uint32_t apu_boost = 0;
++ uint32_t dgpu_boost = 0;
++ uint16_t apu_limit = 0;
++ uint16_t dgpu_limit = 0;
++ uint16_t apu_power = 0;
++ uint16_t dgpu_power = 0;
++
++ /* APU and dGPU power values are reported in milli Watts
++ * and STAPM power limits are in Watts */
++ apu_power = metrics->ApuPower/1000;
++ apu_limit = metrics->StapmOpnLimit;
++ if (apu_power > apu_limit && apu_limit != 0)
++ apu_boost = ((apu_power - apu_limit) * 100) / apu_limit;
++ apu_boost = (apu_boost > 100) ? 100 : apu_boost;
++
++ dgpu_power = metrics->dGpuPower/1000;
++ if (metrics->StapmCurrentLimit > metrics->StapmOpnLimit)
++ dgpu_limit = metrics->StapmCurrentLimit - metrics->StapmOpnLimit;
++ if (dgpu_power > dgpu_limit && dgpu_limit != 0)
++ dgpu_boost = ((dgpu_power - dgpu_limit) * 100) / dgpu_limit;
++ dgpu_boost = (dgpu_boost > 100) ? 100 : dgpu_boost;
++
++ if (dgpu_boost >= apu_boost)
++ apu_boost = 0;
++ else
++ dgpu_boost = 0;
++
++ *apu_percent = apu_boost;
++ *dgpu_percent = dgpu_boost;
++
++}
++
+ static int yellow_carp_get_smu_metrics_data(struct smu_context *smu,
+ MetricsMember_t member,
+ uint32_t *value)
+@@ -304,6 +340,8 @@ static int yellow_carp_get_smu_metrics_data(struct smu_context *smu,
+
+ SmuMetrics_t *metrics = (SmuMetrics_t *)smu_table->metrics_table;
+ int ret = 0;
++ uint32_t apu_percent = 0;
++ uint32_t dgpu_percent = 0;
+
+ mutex_lock(&smu->metrics_lock);
+
+@@ -356,26 +394,18 @@ static int yellow_carp_get_smu_metrics_data(struct smu_context *smu,
+ *value = metrics->Voltage[1];
+ break;
+ case METRICS_SS_APU_SHARE:
+- /* return the percentage of APU power with respect to APU's power limit.
+- * percentage is reported, this isn't boost value. Smartshift power
+- * boost/shift is only when the percentage is more than 100.
++ /* return the percentage of APU power boost
++ * with respect to APU's power limit.
+ */
+- if (metrics->StapmOpnLimit > 0)
+- *value = (metrics->ApuPower * 100) / metrics->StapmOpnLimit;
+- else
+- *value = 0;
++ yellow_carp_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++ *value = apu_percent;
+ break;
+ case METRICS_SS_DGPU_SHARE:
+- /* return the percentage of dGPU power with respect to dGPU's power limit.
+- * percentage is reported, this isn't boost value. Smartshift power
+- * boost/shift is only when the percentage is more than 100.
++ /* return the percentage of dGPU power boost
++ * with respect to dGPU's power limit.
+ */
+- if ((metrics->dGpuPower > 0) &&
+- (metrics->StapmCurrentLimit > metrics->StapmOpnLimit))
+- *value = (metrics->dGpuPower * 100) /
+- (metrics->StapmCurrentLimit - metrics->StapmOpnLimit);
+- else
+- *value = 0;
++ yellow_carp_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++ *value = dgpu_percent;
+ break;
+ default:
+ *value = UINT_MAX;
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
+index d63d83800a8a3..517b94c3bcaf9 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
+@@ -265,6 +265,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms,
+
+ formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl,
+ layer->layer_type, &n_formats);
++ if (!formats) {
++ kfree(kplane);
++ return -ENOMEM;
++ }
+
+ err = drm_universal_plane_init(&kms->base, plane,
+ get_possible_crtcs(kms, c->pipeline),
+@@ -275,8 +279,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms,
+
+ komeda_put_fourcc_list(formats);
+
+- if (err)
+- goto cleanup;
++ if (err) {
++ kfree(kplane);
++ return err;
++ }
+
+ drm_plane_helper_add(plane, &komeda_plane_helper_funcs);
+
+diff --git a/drivers/gpu/drm/arm/malidp_crtc.c b/drivers/gpu/drm/arm/malidp_crtc.c
+index 494075ddbef68..b5928b52e2791 100644
+--- a/drivers/gpu/drm/arm/malidp_crtc.c
++++ b/drivers/gpu/drm/arm/malidp_crtc.c
+@@ -487,7 +487,10 @@ static void malidp_crtc_reset(struct drm_crtc *crtc)
+ if (crtc->state)
+ malidp_crtc_destroy_state(crtc, crtc->state);
+
+- __drm_atomic_helper_crtc_reset(crtc, &state->base);
++ if (state)
++ __drm_atomic_helper_crtc_reset(crtc, &state->base);
++ else
++ __drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+
+ static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 77118c3395bf0..320cbd5d90b82 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1313,6 +1313,7 @@ err_unregister_audio:
+ adv7511_audio_exit(adv7511);
+ drm_bridge_remove(&adv7511->bridge);
+ err_unregister_cec:
++ cec_unregister_adapter(adv7511->cec_adap);
+ i2c_unregister_device(adv7511->i2c_cec);
+ clk_disable_unprepare(adv7511->cec_clk);
+ err_i2c_unregister_packet:
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index b7d2e4449cfaa..235c7ee5258ff 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1632,8 +1632,19 @@ static ssize_t analogix_dpaux_transfer(struct drm_dp_aux *aux,
+ struct drm_dp_aux_msg *msg)
+ {
+ struct analogix_dp_device *dp = to_dp(aux);
++ int ret;
++
++ pm_runtime_get_sync(dp->dev);
++
++ ret = analogix_dp_detect_hpd(dp);
++ if (ret)
++ goto out;
+
+- return analogix_dp_transfer(dp, msg);
++ ret = analogix_dp_transfer(dp, msg);
++out:
++ pm_runtime_put(dp->dev);
++
++ return ret;
+ }
+
+ struct analogix_dp_device *
+@@ -1698,8 +1709,10 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+
+ dp->reg_base = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(dp->reg_base))
+- return ERR_CAST(dp->reg_base);
++ if (IS_ERR(dp->reg_base)) {
++ ret = PTR_ERR(dp->reg_base);
++ goto err_disable_clk;
++ }
+
+ dp->force_hpd = of_property_read_bool(dev->of_node, "force-hpd");
+
+@@ -1711,7 +1724,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ if (IS_ERR(dp->hpd_gpiod)) {
+ dev_err(dev, "error getting HDP GPIO: %ld\n",
+ PTR_ERR(dp->hpd_gpiod));
+- return ERR_CAST(dp->hpd_gpiod);
++ ret = PTR_ERR(dp->hpd_gpiod);
++ goto err_disable_clk;
+ }
+
+ if (dp->hpd_gpiod) {
+@@ -1731,7 +1745,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+
+ if (dp->irq == -ENXIO) {
+ dev_err(&pdev->dev, "failed to get irq\n");
+- return ERR_PTR(-ENODEV);
++ ret = -ENODEV;
++ goto err_disable_clk;
+ }
+
+ ret = devm_request_threaded_irq(&pdev->dev, dp->irq,
+@@ -1740,11 +1755,15 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ irq_flags, "analogix-dp", dp);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request irq\n");
+- return ERR_PTR(ret);
++ goto err_disable_clk;
+ }
+ disable_irq(dp->irq);
+
+ return dp;
++
++err_disable_clk:
++ clk_disable_unprepare(dp->clock);
++ return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_probe);
+
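Both analogix_dp changes are about lifetime: the AUX transfer now holds a runtime-PM reference and verifies HPD before touching the hardware, and every failure in probe after the clock is prepared now funnels through err_disable_clk instead of returning directly. A generic sketch of that unwind pattern (plain C, not the driver's API):

#include <stdio.h>

static int acquire_clk(void) { return 0; }
static void release_clk(void) { puts("clk released"); }
static int map_regs(void) { return -1; }	/* simulate a failure */

static int probe(void)
{
	int ret = acquire_clk();

	if (ret)
		return ret;	/* nothing held yet, plain return is fine */

	ret = map_regs();
	if (ret)
		goto err_disable_clk;	/* was: return directly, leaking clk */

	return 0;

err_disable_clk:
	release_clk();
	return ret;
}

int main(void) { return probe() ? 1 : 0; }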
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index e596cacce9e3e..ce04b17c0d3a1 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1201,12 +1201,12 @@ static void anx7625_dp_adjust_swing(struct anx7625_data *ctx)
+ for (i = 0; i < ctx->pdata.dp_lane0_swing_reg_cnt; i++)
+ anx7625_reg_write(ctx, ctx->i2c.tx_p1_client,
+ DP_TX_LANE0_SWING_REG0 + i,
+- ctx->pdata.lane0_reg_data[i] & 0xFF);
++ ctx->pdata.lane0_reg_data[i]);
+
+ for (i = 0; i < ctx->pdata.dp_lane1_swing_reg_cnt; i++)
+ anx7625_reg_write(ctx, ctx->i2c.tx_p1_client,
+ DP_TX_LANE1_SWING_REG0 + i,
+- ctx->pdata.lane1_reg_data[i] & 0xFF);
++ ctx->pdata.lane1_reg_data[i]);
+ }
+
+ static void dp_hpd_change_handler(struct anx7625_data *ctx, bool on)
+@@ -1313,8 +1313,8 @@ static int anx7625_get_swing_setting(struct device *dev,
+ num_regs = DP_TX_SWING_REG_CNT;
+
+ pdata->dp_lane0_swing_reg_cnt = num_regs;
+- of_property_read_u32_array(dev->of_node, "analogix,lane0-swing",
+- pdata->lane0_reg_data, num_regs);
++ of_property_read_u8_array(dev->of_node, "analogix,lane0-swing",
++ pdata->lane0_reg_data, num_regs);
+ }
+
+ if (of_get_property(dev->of_node,
+@@ -1323,8 +1323,8 @@ static int anx7625_get_swing_setting(struct device *dev,
+ num_regs = DP_TX_SWING_REG_CNT;
+
+ pdata->dp_lane1_swing_reg_cnt = num_regs;
+- of_property_read_u32_array(dev->of_node, "analogix,lane1-swing",
+- pdata->lane1_reg_data, num_regs);
++ of_property_read_u8_array(dev->of_node, "analogix,lane1-swing",
++ pdata->lane1_reg_data, num_regs);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.h b/drivers/gpu/drm/bridge/analogix/anx7625.h
+index 3d79b6fb13c88..8759d9441f4e4 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.h
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.h
+@@ -370,9 +370,9 @@ struct anx7625_platform_data {
+ int mipi_lanes;
+ int audio_en;
+ int dp_lane0_swing_reg_cnt;
+- int lane0_reg_data[DP_TX_SWING_REG_CNT];
++ u8 lane0_reg_data[DP_TX_SWING_REG_CNT];
+ int dp_lane1_swing_reg_cnt;
+- int lane1_reg_data[DP_TX_SWING_REG_CNT];
++ u8 lane1_reg_data[DP_TX_SWING_REG_CNT];
+ u32 low_power_mode;
+ struct device_node *mipi_host_node;
+ };
+diff --git a/drivers/gpu/drm/bridge/chipone-icn6211.c b/drivers/gpu/drm/bridge/chipone-icn6211.c
+index a6151db955868..d7eedf35e8415 100644
+--- a/drivers/gpu/drm/bridge/chipone-icn6211.c
++++ b/drivers/gpu/drm/bridge/chipone-icn6211.c
+@@ -14,8 +14,19 @@
+ #include <linux/of_device.h>
+ #include <linux/regulator/consumer.h>
+
+-#include <video/mipi_display.h>
+-
++#define VENDOR_ID 0x00
++#define DEVICE_ID_H 0x01
++#define DEVICE_ID_L 0x02
++#define VERSION_ID 0x03
++#define FIRMWARE_VERSION 0x08
++#define CONFIG_FINISH 0x09
++#define PD_CTRL(n) (0x0a + ((n) & 0x3)) /* 0..3 */
++#define RST_CTRL(n) (0x0e + ((n) & 0x1)) /* 0..1 */
++#define SYS_CTRL(n) (0x10 + ((n) & 0x7)) /* 0..4 */
++#define RGB_DRV(n) (0x18 + ((n) & 0x3)) /* 0..3 */
++#define RGB_DLY(n) (0x1c + ((n) & 0x1)) /* 0..1 */
++#define RGB_TEST_CTRL 0x1e
++#define ATE_PLL_EN 0x1f
+ #define HACTIVE_LI 0x20
+ #define VACTIVE_LI 0x21
+ #define VACTIVE_HACTIVE_HI 0x22
+@@ -23,9 +34,101 @@
+ #define HSYNC_LI 0x24
+ #define HBP_LI 0x25
+ #define HFP_HSW_HBP_HI 0x26
++#define HFP_HSW_HBP_HI_HFP(n) (((n) & 0x300) >> 4)
++#define HFP_HSW_HBP_HI_HS(n) (((n) & 0x300) >> 6)
++#define HFP_HSW_HBP_HI_HBP(n) (((n) & 0x300) >> 8)
+ #define VFP 0x27
+ #define VSYNC 0x28
+ #define VBP 0x29
++#define BIST_POL 0x2a
++#define BIST_POL_BIST_MODE(n) (((n) & 0xf) << 4)
++#define BIST_POL_BIST_GEN BIT(3)
++#define BIST_POL_HSYNC_POL BIT(2)
++#define BIST_POL_VSYNC_POL BIT(1)
++#define BIST_POL_DE_POL BIT(0)
++#define BIST_RED 0x2b
++#define BIST_GREEN 0x2c
++#define BIST_BLUE 0x2d
++#define BIST_CHESS_X 0x2e
++#define BIST_CHESS_Y 0x2f
++#define BIST_CHESS_XY_H 0x30
++#define BIST_FRAME_TIME_L 0x31
++#define BIST_FRAME_TIME_H 0x32
++#define FIFO_MAX_ADDR_LOW 0x33
++#define SYNC_EVENT_DLY 0x34
++#define HSW_MIN 0x35
++#define HFP_MIN 0x36
++#define LOGIC_RST_NUM 0x37
++#define OSC_CTRL(n) (0x48 + ((n) & 0x7)) /* 0..5 */
++#define BG_CTRL 0x4e
++#define LDO_PLL 0x4f
++#define PLL_CTRL(n) (0x50 + ((n) & 0xf)) /* 0..15 */
++#define PLL_CTRL_6_EXTERNAL 0x90
++#define PLL_CTRL_6_MIPI_CLK 0x92
++#define PLL_CTRL_6_INTERNAL 0x93
++#define PLL_REM(n) (0x60 + ((n) & 0x3)) /* 0..2 */
++#define PLL_DIV(n) (0x63 + ((n) & 0x3)) /* 0..2 */
++#define PLL_FRAC(n) (0x66 + ((n) & 0x3)) /* 0..2 */
++#define PLL_INT(n) (0x69 + ((n) & 0x1)) /* 0..1 */
++#define PLL_REF_DIV 0x6b
++#define PLL_REF_DIV_P(n) ((n) & 0xf)
++#define PLL_REF_DIV_Pe BIT(4)
++#define PLL_REF_DIV_S(n) (((n) & 0x7) << 5)
++#define PLL_SSC_P(n) (0x6c + ((n) & 0x3)) /* 0..2 */
++#define PLL_SSC_STEP(n) (0x6f + ((n) & 0x3)) /* 0..2 */
++#define PLL_SSC_OFFSET(n) (0x72 + ((n) & 0x3)) /* 0..3 */
++#define GPIO_OEN 0x79
++#define MIPI_CFG_PW 0x7a
++#define MIPI_CFG_PW_CONFIG_DSI 0xc1
++#define MIPI_CFG_PW_CONFIG_I2C 0x3e
++#define GPIO_SEL(n) (0x7b + ((n) & 0x1)) /* 0..1 */
++#define IRQ_SEL 0x7d
++#define DBG_SEL 0x7e
++#define DBG_SIGNAL 0x7f
++#define MIPI_ERR_VECTOR_L 0x80
++#define MIPI_ERR_VECTOR_H 0x81
++#define MIPI_ERR_VECTOR_EN_L 0x82
++#define MIPI_ERR_VECTOR_EN_H 0x83
++#define MIPI_MAX_SIZE_L 0x84
++#define MIPI_MAX_SIZE_H 0x85
++#define DSI_CTRL 0x86
++#define DSI_CTRL_UNKNOWN 0x28
++#define DSI_CTRL_DSI_LANES(n) ((n) & 0x3)
++#define MIPI_PN_SWAP 0x87
++#define MIPI_PN_SWAP_CLK BIT(4)
++#define MIPI_PN_SWAP_D(n) BIT((n) & 0x3)
++#define MIPI_SOT_SYNC_BIT_(n) (0x88 + ((n) & 0x1)) /* 0..1 */
++#define MIPI_ULPS_CTRL 0x8a
++#define MIPI_CLK_CHK_VAR 0x8e
++#define MIPI_CLK_CHK_INI 0x8f
++#define MIPI_T_TERM_EN 0x90
++#define MIPI_T_HS_SETTLE 0x91
++#define MIPI_T_TA_SURE_PRE 0x92
++#define MIPI_T_LPX_SET 0x94
++#define MIPI_T_CLK_MISS 0x95
++#define MIPI_INIT_TIME_L 0x96
++#define MIPI_INIT_TIME_H 0x97
++#define MIPI_T_CLK_TERM_EN 0x99
++#define MIPI_T_CLK_SETTLE 0x9a
++#define MIPI_TO_HS_RX_L 0x9e
++#define MIPI_TO_HS_RX_H 0x9f
++#define MIPI_PHY_(n) (0xa0 + ((n) & 0x7)) /* 0..5 */
++#define MIPI_PD_RX 0xb0
++#define MIPI_PD_TERM 0xb1
++#define MIPI_PD_HSRX 0xb2
++#define MIPI_PD_LPTX 0xb3
++#define MIPI_PD_LPRX 0xb4
++#define MIPI_PD_CK_LANE 0xb5
++#define MIPI_FORCE_0 0xb6
++#define MIPI_RST_CTRL 0xb7
++#define MIPI_RST_NUM 0xb8
++#define MIPI_DBG_SET_(n) (0xc0 + ((n) & 0xf)) /* 0..9 */
++#define MIPI_DBG_SEL 0xe0
++#define MIPI_DBG_DATA 0xe1
++#define MIPI_ATE_TEST_SEL 0xe2
++#define MIPI_ATE_STATUS_(n) (0xe3 + ((n) & 0x1)) /* 0..1 */
++#define MIPI_ATE_STATUS_1 0xe4
++#define ICN6211_MAX_REGISTER MIPI_ATE_STATUS(1)
+
+ struct chipone {
+ struct device *dev;
+@@ -65,14 +168,15 @@ static void chipone_enable(struct drm_bridge *bridge)
+ {
+ struct chipone *icn = bridge_to_chipone(bridge);
+ struct drm_display_mode *mode = bridge_to_mode(bridge);
++ u16 hfp, hbp, hsync;
+
+- ICN6211_DSI(icn, 0x7a, 0xc1);
++ ICN6211_DSI(icn, MIPI_CFG_PW, MIPI_CFG_PW_CONFIG_DSI);
+
+ ICN6211_DSI(icn, HACTIVE_LI, mode->hdisplay & 0xff);
+
+ ICN6211_DSI(icn, VACTIVE_LI, mode->vdisplay & 0xff);
+
+- /**
++ /*
+ * lsb nibble: 2nd nibble of hdisplay
+ * msb nibble: 2nd nibble of vdisplay
+ */
+@@ -80,13 +184,18 @@ static void chipone_enable(struct drm_bridge *bridge)
+ ((mode->hdisplay >> 8) & 0xf) |
+ (((mode->vdisplay >> 8) & 0xf) << 4));
+
+- ICN6211_DSI(icn, HFP_LI, mode->hsync_start - mode->hdisplay);
+-
+- ICN6211_DSI(icn, HSYNC_LI, mode->hsync_end - mode->hsync_start);
+-
+- ICN6211_DSI(icn, HBP_LI, mode->htotal - mode->hsync_end);
++ hfp = mode->hsync_start - mode->hdisplay;
++ hsync = mode->hsync_end - mode->hsync_start;
++ hbp = mode->htotal - mode->hsync_end;
+
+- ICN6211_DSI(icn, HFP_HSW_HBP_HI, 0x00);
++ ICN6211_DSI(icn, HFP_LI, hfp & 0xff);
++ ICN6211_DSI(icn, HSYNC_LI, hsync & 0xff);
++ ICN6211_DSI(icn, HBP_LI, hbp & 0xff);
++ /* Top two bits of Horizontal Front porch/Sync/Back porch */
++ ICN6211_DSI(icn, HFP_HSW_HBP_HI,
++ HFP_HSW_HBP_HI_HFP(hfp) |
++ HFP_HSW_HBP_HI_HS(hsync) |
++ HFP_HSW_HBP_HI_HBP(hbp));
+
+ ICN6211_DSI(icn, VFP, mode->vsync_start - mode->vdisplay);
+
+@@ -95,21 +204,21 @@ static void chipone_enable(struct drm_bridge *bridge)
+ ICN6211_DSI(icn, VBP, mode->vtotal - mode->vsync_end);
+
+ /* dsi specific sequence */
+- ICN6211_DSI(icn, MIPI_DCS_SET_TEAR_OFF, 0x80);
+- ICN6211_DSI(icn, MIPI_DCS_SET_ADDRESS_MODE, 0x28);
+- ICN6211_DSI(icn, 0xb5, 0xa0);
+- ICN6211_DSI(icn, 0x5c, 0xff);
+- ICN6211_DSI(icn, MIPI_DCS_SET_COLUMN_ADDRESS, 0x01);
+- ICN6211_DSI(icn, MIPI_DCS_GET_POWER_SAVE, 0x92);
+- ICN6211_DSI(icn, 0x6b, 0x71);
+- ICN6211_DSI(icn, 0x69, 0x2b);
+- ICN6211_DSI(icn, MIPI_DCS_ENTER_SLEEP_MODE, 0x40);
+- ICN6211_DSI(icn, MIPI_DCS_EXIT_SLEEP_MODE, 0x98);
++ ICN6211_DSI(icn, SYNC_EVENT_DLY, 0x80);
++ ICN6211_DSI(icn, HFP_MIN, hfp & 0xff);
++ ICN6211_DSI(icn, MIPI_PD_CK_LANE, 0xa0);
++ ICN6211_DSI(icn, PLL_CTRL(12), 0xff);
++ ICN6211_DSI(icn, BIST_POL, BIST_POL_DE_POL);
++ ICN6211_DSI(icn, PLL_CTRL(6), PLL_CTRL_6_MIPI_CLK);
++ ICN6211_DSI(icn, PLL_REF_DIV, 0x71);
++ ICN6211_DSI(icn, PLL_INT(0), 0x2b);
++ ICN6211_DSI(icn, SYS_CTRL(0), 0x40);
++ ICN6211_DSI(icn, SYS_CTRL(1), 0x98);
+
+ /* icn6211 specific sequence */
+- ICN6211_DSI(icn, 0xb6, 0x20);
+- ICN6211_DSI(icn, 0x51, 0x20);
+- ICN6211_DSI(icn, 0x09, 0x10);
++ ICN6211_DSI(icn, MIPI_FORCE_0, 0x20);
++ ICN6211_DSI(icn, PLL_CTRL(1), 0x20);
++ ICN6211_DSI(icn, CONFIG_FINISH, 0x10);
+
+ usleep_range(10000, 11000);
+ }
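The rewritten sequence replaces misused MIPI DCS opcodes with named ICN6211 registers, and the new HFP_HSW_HBP_HI macros pack bits 9:8 of the three 10-bit horizontal timings into one register: HFP at bits [5:4], HSYNC at [3:2], HBP at [1:0], with the low bytes written to their own registers. The packing, verified standalone (macros copied from the hunk, sample timings invented):

#include <stdint.h>
#include <stdio.h>

#define HFP_HSW_HBP_HI_HFP(n)	(((n) & 0x300) >> 4)
#define HFP_HSW_HBP_HI_HS(n)	(((n) & 0x300) >> 6)
#define HFP_HSW_HBP_HI_HBP(n)	(((n) & 0x300) >> 8)

int main(void)
{
	uint16_t hfp = 0x210, hsync = 0x110, hbp = 0x310;

	printf("lo: hfp=0x%02x hs=0x%02x hbp=0x%02x hi=0x%02x\n",
	       hfp & 0xff, hsync & 0xff, hbp & 0xff,
	       HFP_HSW_HBP_HI_HFP(hfp) |
	       HFP_HSW_HBP_HI_HS(hsync) |
	       HFP_HSW_HBP_HI_HBP(hbp));
	/* prints lo: hfp=0x10 hs=0x10 hbp=0x10 hi=0x27 */
	return 0;
}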
+diff --git a/drivers/gpu/drm/bridge/ite-it66121.c b/drivers/gpu/drm/bridge/ite-it66121.c
+index 06b59b422c696..64912b770086f 100644
+--- a/drivers/gpu/drm/bridge/ite-it66121.c
++++ b/drivers/gpu/drm/bridge/ite-it66121.c
+@@ -227,7 +227,7 @@ static const struct regmap_range_cfg it66121_regmap_banks[] = {
+ .selector_mask = 0x1,
+ .selector_shift = 0,
+ .window_start = 0x00,
+- .window_len = 0x130,
++ .window_len = 0x100,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/drm_bridge_connector.c b/drivers/gpu/drm/drm_bridge_connector.c
+index 791379816837d..4f20137ef21d5 100644
+--- a/drivers/gpu/drm/drm_bridge_connector.c
++++ b/drivers/gpu/drm/drm_bridge_connector.c
+@@ -369,8 +369,10 @@ struct drm_connector *drm_bridge_connector_init(struct drm_device *drm,
+ connector_type, ddc);
+ drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs);
+
+- if (bridge_connector->bridge_hpd)
++ if (bridge_connector->bridge_hpd) {
+ connector->polled = DRM_CONNECTOR_POLL_HPD;
++ drm_bridge_connector_enable_hpd(connector);
++ }
+ else if (bridge_connector->bridge_detect)
+ connector->polled = DRM_CONNECTOR_POLL_CONNECT
+ | DRM_CONNECTOR_POLL_DISCONNECT;
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 83e5c115e7547..502ef71dac689 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -2029,9 +2029,6 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
+
+ connector_bad_edid(connector, edid, edid[0x7e] + 1);
+
+- edid[EDID_LENGTH-1] += edid[0x7e] - valid_extensions;
+- edid[0x7e] = valid_extensions;
+-
+ new = kmalloc_array(valid_extensions + 1, EDID_LENGTH,
+ GFP_KERNEL);
+ if (!new)
+@@ -2048,6 +2045,9 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
+ base += EDID_LENGTH;
+ }
+
++ new[EDID_LENGTH - 1] += new[0x7e] - valid_extensions;
++ new[0x7e] = valid_extensions;
++
+ kfree(edid);
+ edid = new;
+ }
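Moving the two fixup lines after the copy makes them operate on the filtered buffer that is actually kept. The arithmetic itself preserves the EDID base-block invariant: all 128 bytes sum to 0 mod 256, so lowering the extension-count byte at offset 0x7e by some delta must raise the checksum byte at 0x7f by the same delta. A self-contained check of that invariant (synthetic block):

#include <stdint.h>
#include <stdio.h>

#define EDID_LENGTH 128

static uint8_t csum(const uint8_t *b)
{
	uint8_t s = 0;

	for (int i = 0; i < EDID_LENGTH; i++)
		s += b[i];
	return s;	/* 0 for a valid block */
}

int main(void)
{
	uint8_t blk[EDID_LENGTH] = { 0 };
	int valid_extensions = 1;

	blk[0x7e] = 3;				/* claims 3 extensions */
	blk[0x7f] = (uint8_t)-csum(blk);	/* make the block valid */

	/* the fixup pair from the hunk, applied to this block */
	blk[EDID_LENGTH - 1] += blk[0x7e] - valid_extensions;
	blk[0x7e] = valid_extensions;		/* keep only 1 */
	printf("checksum after fixup: %u\n", csum(blk));	/* 0 */
	return 0;
}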
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 82afb854141b2..fd0bf90fb4c28 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -249,6 +249,13 @@ static int __drm_universal_plane_init(struct drm_device *dev,
+ if (WARN_ON(config->num_total_plane >= 32))
+ return -EINVAL;
+
++ /*
++ * First driver to need more than 64 formats needs to fix this. Each
++ * format is encoded as a bit and the current code only supports a u64.
++ */
++ if (WARN_ON(format_count > 64))
++ return -EINVAL;
++
+ WARN_ON(drm_drv_uses_atomic_modeset(dev) &&
+ (!funcs->atomic_destroy_state ||
+ !funcs->atomic_duplicate_state));
+@@ -270,13 +277,6 @@ static int __drm_universal_plane_init(struct drm_device *dev,
+ return -ENOMEM;
+ }
+
+- /*
+- * First driver to need more than 64 formats needs to fix this. Each
+- * format is encoded as a bit and the current code only supports a u64.
+- */
+- if (WARN_ON(format_count > 64))
+- return -EINVAL;
+-
+ if (format_modifiers) {
+ const uint64_t *temp_modifiers = format_modifiers;
+
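Hoisting the format_count check above the format_types allocation closes a leak: the old placement could return -EINVAL after the kmalloc had already succeeded. The general shape, as a minimal sketch (illustrative names, not the DRM entry point):

#include <stdio.h>
#include <stdlib.h>

static int plane_init(unsigned int format_count)
{
	unsigned int *formats;

	/* validate before allocating; the old order leaked on -EINVAL */
	if (format_count > 64)
		return -1;

	formats = malloc(format_count * sizeof(*formats));
	if (!formats)
		return -1;
	/* ... register the plane ... */
	free(formats);
	return 0;
}

int main(void)
{
	printf("%d\n", plane_init(65));	/* -1, and nothing leaked */
	return 0;
}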
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+index 9fb1a2aadbcb0..aabb997a74eb4 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+@@ -286,6 +286,12 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context,
+
+ mutex_lock(&context->lock);
+
++ /* Bail if the mapping has been reaped by another thread */
++ if (!mapping->context) {
++ mutex_unlock(&context->lock);
++ return;
++ }
++
+ /* If the vram node is on the mm, unmap and remove the node */
+ if (mapping->vram_node.mm == &context->mm)
+ etnaviv_iommu_remove_mapping(context, mapping);
+diff --git a/drivers/gpu/drm/gma500/psb_intel_display.c b/drivers/gpu/drm/gma500/psb_intel_display.c
+index d5f95212934e2..42d1a733e1249 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_display.c
++++ b/drivers/gpu/drm/gma500/psb_intel_display.c
+@@ -535,14 +535,15 @@ void psb_intel_crtc_init(struct drm_device *dev, int pipe,
+
+ struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev, int pipe)
+ {
+- struct drm_crtc *crtc = NULL;
++ struct drm_crtc *crtc;
+
+ list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
++
+ if (gma_crtc->pipe == pipe)
+- break;
++ return crtc;
+ }
+- return crtc;
++ return NULL;
+ }
+
+ int gma_connector_clones(struct drm_device *dev, int type_mask)
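The gma500 fix is the classic list_for_each_entry() pitfall: when the loop runs to completion without a match, the cursor is not NULL, it is a bogus pointer computed from the list head, so returning it after the loop returned garbage. Returning inside the loop and an explicit NULL afterwards is the safe shape; a plain-C analog:

#include <stddef.h>
#include <stdio.h>

struct node { int pipe; struct node *next; };

static struct node *find_pipe(struct node *head, int pipe)
{
	for (struct node *n = head; n; n = n->next)
		if (n->pipe == pipe)
			return n;	/* match found inside the loop */
	return NULL;			/* explicit miss, never the cursor */
}

int main(void)
{
	struct node b = { 1, NULL }, a = { 0, &b };

	printf("%p\n", (void *)find_pipe(&a, 2));	/* prints a null pointer */
	return 0;
}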
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+index 0da91849efde4..79f3544881a59 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+@@ -122,9 +122,25 @@ struct i2c_adapter_lookup {
+ #define ICL_GPIO_DDPA_CTRLCLK_2 8
+ #define ICL_GPIO_DDPA_CTRLDATA_2 9
+
+-static enum port intel_dsi_seq_port_to_port(u8 port)
++static enum port intel_dsi_seq_port_to_port(struct intel_dsi *intel_dsi,
++ u8 seq_port)
+ {
+- return port ? PORT_C : PORT_A;
++ /*
++ * If single link DSI is being used on any port, the VBT sequence block
++ * send packet apparently always has 0 for the port. Just use the port
++ * we have configured, and ignore the sequence block port.
++ */
++ if (hweight8(intel_dsi->ports) == 1)
++ return ffs(intel_dsi->ports) - 1;
++
++ if (seq_port) {
++ if (intel_dsi->ports & PORT_B)
++ return PORT_B;
++ else if (intel_dsi->ports & PORT_C)
++ return PORT_C;
++ }
++
++ return PORT_A;
+ }
+
+ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
+@@ -146,15 +162,10 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
+
+ seq_port = (flags >> MIPI_PORT_SHIFT) & 3;
+
+- /* For DSI single link on Port A & C, the seq_port value which is
+- * parsed from Sequence Block#53 of VBT has been set to 0
+- * Now, read/write of packets for the DSI single link on Port A and
+- * Port C will based on the DVO port from VBT block 2.
+- */
+- if (intel_dsi->ports == (1 << PORT_C))
+- port = PORT_C;
+- else
+- port = intel_dsi_seq_port_to_port(seq_port);
++ port = intel_dsi_seq_port_to_port(intel_dsi, seq_port);
++
++ if (drm_WARN_ON(&dev_priv->drm, !intel_dsi->dsi_hosts[port]))
++ goto out;
+
+ dsi_device = intel_dsi->dsi_hosts[port]->device;
+ if (!dsi_device) {
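The new helper derives the DSI port from what is actually configured: with exactly one port enabled, the VBT sequence block stores 0 regardless (as the hunk's comment notes), so the code ignores the field and takes the single set bit; only for dual link does seq_port matter. The single-link selection in isolation (hweight8/ffs replaced by userspace equivalents; gcc/clang builtin assumed):

#include <stdio.h>
#include <strings.h>	/* ffs() */

int main(void)
{
	unsigned int ports = 1 << 2;	/* say only PORT_C is configured */

	if (__builtin_popcount(ports) == 1)	/* hweight8() stand-in */
		printf("port = %d\n", ffs(ports) - 1);	/* 2, i.e. PORT_C */
	return 0;
}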
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index e27f3b7cf094e..abcf7e66fe6a8 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -4008,8 +4008,8 @@ addr_err:
+ return ERR_PTR(err);
+ }
+
+-static ssize_t show_dynamic_id(struct device *dev,
+- struct device_attribute *attr,
++static ssize_t show_dynamic_id(struct kobject *kobj,
++ struct kobj_attribute *attr,
+ char *buf)
+ {
+ struct i915_oa_config *oa_config =
+diff --git a/drivers/gpu/drm/i915/i915_perf_types.h b/drivers/gpu/drm/i915/i915_perf_types.h
+index aa14354a51203..f682c7a6474d2 100644
+--- a/drivers/gpu/drm/i915/i915_perf_types.h
++++ b/drivers/gpu/drm/i915/i915_perf_types.h
+@@ -55,7 +55,7 @@ struct i915_oa_config {
+
+ struct attribute_group sysfs_metric;
+ struct attribute *attrs[2];
+- struct device_attribute sysfs_metric_id;
++ struct kobj_attribute sysfs_metric_id;
+
+ struct kref ref;
+ struct rcu_head rcu;
+diff --git a/drivers/gpu/drm/mediatek/mtk_cec.c b/drivers/gpu/drm/mediatek/mtk_cec.c
+index e9cef5c0c8f7e..cdfa648910b23 100644
+--- a/drivers/gpu/drm/mediatek/mtk_cec.c
++++ b/drivers/gpu/drm/mediatek/mtk_cec.c
+@@ -85,7 +85,7 @@ static void mtk_cec_mask(struct mtk_cec *cec, unsigned int offset,
+ u32 tmp = readl(cec->regs + offset) & ~mask;
+
+ tmp |= val & mask;
+- writel(val, cec->regs + offset);
++ writel(tmp, cec->regs + offset);
+ }
+
+ void mtk_cec_set_hpd_event(struct device *dev,
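A one-word fix with real consequences: mtk_cec_mask() computed the masked value in tmp but then wrote the raw val, clobbering every register bit outside mask. A sketch of the read-modify-write it restores:

#include <stdint.h>
#include <stdio.h>

static uint32_t reg = 0xffff0000;	/* stand-in for the MMIO register */

static void rmw(uint32_t mask, uint32_t val)
{
	uint32_t tmp = reg & ~mask;	/* keep bits outside the mask */

	tmp |= val & mask;
	reg = tmp;			/* the bug wrote "val" here */
}

int main(void)
{
	rmw(0x0000000f, 0x5);
	printf("0x%08x\n", reg);	/* 0xffff0005, high bits preserved */
	return 0;
}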
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_drv.h b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
+index 86c3068894b11..974462831133b 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_drv.h
++++ b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
+@@ -76,9 +76,11 @@ void mtk_ovl_layer_off(struct device *dev, unsigned int idx,
+ void mtk_ovl_start(struct device *dev);
+ void mtk_ovl_stop(struct device *dev);
+ unsigned int mtk_ovl_supported_rotations(struct device *dev);
+-void mtk_ovl_enable_vblank(struct device *dev,
+- void (*vblank_cb)(void *),
+- void *vblank_cb_data);
++void mtk_ovl_register_vblank_cb(struct device *dev,
++ void (*vblank_cb)(void *),
++ void *vblank_cb_data);
++void mtk_ovl_unregister_vblank_cb(struct device *dev);
++void mtk_ovl_enable_vblank(struct device *dev);
+ void mtk_ovl_disable_vblank(struct device *dev);
+
+ void mtk_rdma_bypass_shadow(struct device *dev);
+@@ -93,9 +95,11 @@ void mtk_rdma_layer_config(struct device *dev, unsigned int idx,
+ struct cmdq_pkt *cmdq_pkt);
+ void mtk_rdma_start(struct device *dev);
+ void mtk_rdma_stop(struct device *dev);
+-void mtk_rdma_enable_vblank(struct device *dev,
+- void (*vblank_cb)(void *),
+- void *vblank_cb_data);
++void mtk_rdma_register_vblank_cb(struct device *dev,
++ void (*vblank_cb)(void *),
++ void *vblank_cb_data);
++void mtk_rdma_unregister_vblank_cb(struct device *dev);
++void mtk_rdma_enable_vblank(struct device *dev);
+ void mtk_rdma_disable_vblank(struct device *dev);
+
+ #endif
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index 2146299e5f524..1fa1bbac9f9c8 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -96,14 +96,28 @@ static irqreturn_t mtk_disp_ovl_irq_handler(int irq, void *dev_id)
+ return IRQ_HANDLED;
+ }
+
+-void mtk_ovl_enable_vblank(struct device *dev,
+- void (*vblank_cb)(void *),
+- void *vblank_cb_data)
++void mtk_ovl_register_vblank_cb(struct device *dev,
++ void (*vblank_cb)(void *),
++ void *vblank_cb_data)
+ {
+ struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+
+ ovl->vblank_cb = vblank_cb;
+ ovl->vblank_cb_data = vblank_cb_data;
++}
++
++void mtk_ovl_unregister_vblank_cb(struct device *dev)
++{
++ struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
++
++ ovl->vblank_cb = NULL;
++ ovl->vblank_cb_data = NULL;
++}
++
++void mtk_ovl_enable_vblank(struct device *dev)
++{
++ struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
++
+ writel(0x0, ovl->regs + DISP_REG_OVL_INTSTA);
+ writel_relaxed(OVL_FME_CPL_INT, ovl->regs + DISP_REG_OVL_INTEN);
+ }
+@@ -112,8 +126,6 @@ void mtk_ovl_disable_vblank(struct device *dev)
+ {
+ struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+
+- ovl->vblank_cb = NULL;
+- ovl->vblank_cb_data = NULL;
+ writel_relaxed(0x0, ovl->regs + DISP_REG_OVL_INTEN);
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c
+index d41a3970b944f..943780fc7bf64 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c
+@@ -94,24 +94,32 @@ static void rdma_update_bits(struct device *dev, unsigned int reg,
+ writel(tmp, rdma->regs + reg);
+ }
+
+-void mtk_rdma_enable_vblank(struct device *dev,
+- void (*vblank_cb)(void *),
+- void *vblank_cb_data)
++void mtk_rdma_register_vblank_cb(struct device *dev,
++ void (*vblank_cb)(void *),
++ void *vblank_cb_data)
+ {
+ struct mtk_disp_rdma *rdma = dev_get_drvdata(dev);
+
+ rdma->vblank_cb = vblank_cb;
+ rdma->vblank_cb_data = vblank_cb_data;
+- rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT,
+- RDMA_FRAME_END_INT);
+ }
+
+-void mtk_rdma_disable_vblank(struct device *dev)
++void mtk_rdma_unregister_vblank_cb(struct device *dev)
+ {
+ struct mtk_disp_rdma *rdma = dev_get_drvdata(dev);
+
+ rdma->vblank_cb = NULL;
+ rdma->vblank_cb_data = NULL;
++}
++
++void mtk_rdma_enable_vblank(struct device *dev)
++{
++ rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT,
++ RDMA_FRAME_END_INT);
++}
++
++void mtk_rdma_disable_vblank(struct device *dev)
++{
+ rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT, 0);
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index 4554e2de14309..e61cd67b978ff 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -819,8 +819,8 @@ static const struct mtk_dpi_conf mt8192_conf = {
+ .cal_factor = mt8183_calculate_factor,
+ .reg_h_fre_con = 0xe0,
+ .max_clock_khz = 150000,
+- .output_fmts = mt8173_output_fmts,
+- .num_output_fmts = ARRAY_SIZE(mt8173_output_fmts),
++ .output_fmts = mt8183_output_fmts,
++ .num_output_fmts = ARRAY_SIZE(mt8183_output_fmts),
+ };
+
+ static int mtk_dpi_probe(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index d661edf7e0fe8..e42a9bfa0ecb0 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -152,6 +152,7 @@ static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt)
+ static void mtk_drm_crtc_destroy(struct drm_crtc *crtc)
+ {
+ struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
++ int i;
+
+ mtk_mutex_put(mtk_crtc->mutex);
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+@@ -162,6 +163,14 @@ static void mtk_drm_crtc_destroy(struct drm_crtc *crtc)
+ mtk_crtc->cmdq_client.chan = NULL;
+ }
+ #endif
++
++ for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
++ struct mtk_ddp_comp *comp;
++
++ comp = mtk_crtc->ddp_comp[i];
++ mtk_ddp_comp_unregister_vblank_cb(comp);
++ }
++
+ drm_crtc_cleanup(crtc);
+ }
+
+@@ -616,7 +625,7 @@ static int mtk_drm_crtc_enable_vblank(struct drm_crtc *crtc)
+ struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
+ struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0];
+
+- mtk_ddp_comp_enable_vblank(comp, mtk_crtc_ddp_irq, &mtk_crtc->base);
++ mtk_ddp_comp_enable_vblank(comp);
+
+ return 0;
+ }
+@@ -916,6 +925,9 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
+ if (comp->funcs->ctm_set)
+ has_ctm = true;
+ }
++
++ mtk_ddp_comp_register_vblank_cb(comp, mtk_crtc_ddp_irq,
++ &mtk_crtc->base);
+ }
+
+ for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index b4b682bc19913..028cf76b95319 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -297,6 +297,8 @@ static const struct mtk_ddp_comp_funcs ddp_ovl = {
+ .config = mtk_ovl_config,
+ .start = mtk_ovl_start,
+ .stop = mtk_ovl_stop,
++ .register_vblank_cb = mtk_ovl_register_vblank_cb,
++ .unregister_vblank_cb = mtk_ovl_unregister_vblank_cb,
+ .enable_vblank = mtk_ovl_enable_vblank,
+ .disable_vblank = mtk_ovl_disable_vblank,
+ .supported_rotations = mtk_ovl_supported_rotations,
+@@ -321,6 +323,8 @@ static const struct mtk_ddp_comp_funcs ddp_rdma = {
+ .config = mtk_rdma_config,
+ .start = mtk_rdma_start,
+ .stop = mtk_rdma_stop,
++ .register_vblank_cb = mtk_rdma_register_vblank_cb,
++ .unregister_vblank_cb = mtk_rdma_unregister_vblank_cb,
+ .enable_vblank = mtk_rdma_enable_vblank,
+ .disable_vblank = mtk_rdma_disable_vblank,
+ .layer_nr = mtk_rdma_layer_nr,
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+index 4c6a986623051..b83f24cb045f5 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+@@ -48,9 +48,11 @@ struct mtk_ddp_comp_funcs {
+ unsigned int bpc, struct cmdq_pkt *cmdq_pkt);
+ void (*start)(struct device *dev);
+ void (*stop)(struct device *dev);
+- void (*enable_vblank)(struct device *dev,
+- void (*vblank_cb)(void *),
+- void *vblank_cb_data);
++ void (*register_vblank_cb)(struct device *dev,
++ void (*vblank_cb)(void *),
++ void *vblank_cb_data);
++ void (*unregister_vblank_cb)(struct device *dev);
++ void (*enable_vblank)(struct device *dev);
+ void (*disable_vblank)(struct device *dev);
+ unsigned int (*supported_rotations)(struct device *dev);
+ unsigned int (*layer_nr)(struct device *dev);
+@@ -111,12 +113,25 @@ static inline void mtk_ddp_comp_stop(struct mtk_ddp_comp *comp)
+ comp->funcs->stop(comp->dev);
+ }
+
+-static inline void mtk_ddp_comp_enable_vblank(struct mtk_ddp_comp *comp,
+- void (*vblank_cb)(void *),
+- void *vblank_cb_data)
++static inline void mtk_ddp_comp_register_vblank_cb(struct mtk_ddp_comp *comp,
++ void (*vblank_cb)(void *),
++ void *vblank_cb_data)
++{
++ if (comp->funcs && comp->funcs->register_vblank_cb)
++ comp->funcs->register_vblank_cb(comp->dev, vblank_cb,
++ vblank_cb_data);
++}
++
++static inline void mtk_ddp_comp_unregister_vblank_cb(struct mtk_ddp_comp *comp)
++{
++ if (comp->funcs && comp->funcs->unregister_vblank_cb)
++ comp->funcs->unregister_vblank_cb(comp->dev);
++}
++
++static inline void mtk_ddp_comp_enable_vblank(struct mtk_ddp_comp *comp)
+ {
+ if (comp->funcs && comp->funcs->enable_vblank)
+- comp->funcs->enable_vblank(comp->dev, vblank_cb, vblank_cb_data);
++ comp->funcs->enable_vblank(comp->dev);
+ }
+
+ static inline void mtk_ddp_comp_disable_vblank(struct mtk_ddp_comp *comp)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 56ff8c57ef8fd..e29ac2f9254a4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -511,6 +511,8 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = {
+ .data = (void *)MTK_DPI },
+ { .compatible = "mediatek,mt8183-dpi",
+ .data = (void *)MTK_DPI },
++ { .compatible = "mediatek,mt8192-dpi",
++ .data = (void *)MTK_DPI },
+ { .compatible = "mediatek,mt2701-dsi",
+ .data = (void *)MTK_DSI },
+ { .compatible = "mediatek,mt8173-dsi",
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 3d28fcf841a65..1e5de14779680 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1662,28 +1662,23 @@ static struct msm_ringbuffer *a5xx_active_ring(struct msm_gpu *gpu)
+ return a5xx_gpu->cur_ring;
+ }
+
+-static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
++static u64 a5xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+ {
+- u64 busy_cycles, busy_time;
++ u64 busy_cycles;
+
+ /* Only read the gpu busy if the hardware is already active */
+- if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0)
++ if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0) {
++ *out_sample_rate = 1;
+ return 0;
++ }
+
+ busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO,
+ REG_A5XX_RBBM_PERFCTR_RBBM_0_HI);
+-
+- busy_time = busy_cycles - gpu->devfreq.busy_cycles;
+- do_div(busy_time, clk_get_rate(gpu->core_clk) / 1000000);
+-
+- gpu->devfreq.busy_cycles = busy_cycles;
++ *out_sample_rate = clk_get_rate(gpu->core_clk);
+
+ pm_runtime_put(&gpu->pdev->dev);
+
+- if (WARN_ON(busy_time > ~0LU))
+- return ~0LU;
+-
+- return (unsigned long)busy_time;
++ return busy_cycles;
+ }
+
+ static uint32_t a5xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 19622fb1fa35b..55eb0054cd3a8 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1621,12 +1621,14 @@ static void a6xx_destroy(struct msm_gpu *gpu)
+ kfree(a6xx_gpu);
+ }
+
+-static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
++static u64 a6xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+ {
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+- u64 busy_cycles, busy_time;
++ u64 busy_cycles;
+
++ /* 19.2MHz */
++ *out_sample_rate = 19200000;
+
+ /* Only read the gpu busy if the hardware is already active */
+ if (pm_runtime_get_if_in_use(a6xx_gpu->gmu.dev) == 0)
+@@ -1636,17 +1638,10 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_L,
+ REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_H);
+
+- busy_time = (busy_cycles - gpu->devfreq.busy_cycles) * 10;
+- do_div(busy_time, 192);
+-
+- gpu->devfreq.busy_cycles = busy_cycles;
+
+ pm_runtime_put(a6xx_gpu->gmu.dev);
+
+- if (WARN_ON(busy_time > ~0LU))
+- return ~0LU;
+-
+- return (unsigned long)busy_time;
++ return busy_cycles;
+ }
+
+ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
+@@ -1875,6 +1870,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
+ BUG_ON(!node);
+
+ ret = a6xx_gmu_init(a6xx_gpu, node);
++ of_node_put(node);
+ if (ret) {
+ a6xx_destroy(&(a6xx_gpu->base.base));
+ return ERR_PTR(ret);
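The gpu_busy refactor (a5xx and a6xx hunks above) stops each backend from converting cycles to time itself: it now returns raw busy cycles plus the counter's sample rate (the a6xx GMU counter ticks at a fixed 19.2 MHz) and presumably lets shared devfreq code take the deltas. The division that moves to the caller, sketched with invented samples:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t prev = 1000000, cur = 10600000;	/* cycle samples */
	unsigned long rate = 19200000;			/* 19.2 MHz */
	uint64_t busy_us = (cur - prev) * 1000000 / rate;

	printf("busy: %llu us\n", (unsigned long long)busy_us);	/* 500000 */
	return 0;
}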
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index e7c9fe1a250fb..0a857f222982c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -204,7 +204,8 @@ static int dpu_crtc_get_crc(struct drm_crtc *crtc)
+ rc = m->hw_lm->ops.collect_misr(m->hw_lm, &crcs[i]);
+
+ if (rc) {
+- DRM_DEBUG_DRIVER("MISR read failed\n");
++ if (rc != -ENODATA)
++ DRM_DEBUG_DRIVER("MISR read failed\n");
+ return rc;
+ }
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
+index a77a5eaa78ad2..6730e771bffa1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
+@@ -593,6 +593,9 @@ void dpu_core_irq_uninstall(struct dpu_kms *dpu_kms)
+ {
+ int i;
+
++ if (!dpu_kms->hw_intr)
++ return;
++
+ pm_runtime_get_sync(&dpu_kms->pdev->dev);
+ for (i = 0; i < dpu_kms->hw_intr->total_irqs; i++)
+ if (!list_empty(&dpu_kms->hw_intr->irq_cb_tbl[i]))
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index 116e2b5b1a90f..284f5610dc35b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -148,6 +148,7 @@ static void dpu_hw_intf_setup_timing_engine(struct dpu_hw_intf *ctx,
+ active_v_end = active_v_start + (p->yres * hsync_period) - 1;
+
+ display_v_start += p->hsync_pulse_width + p->h_back_porch;
++ display_v_end -= p->h_front_porch;
+
+ active_hctl = (active_h_end << 16) | active_h_start;
+ display_hctl = active_hctl;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+index 86363c0ec8341..462f5082099e6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+@@ -138,7 +138,7 @@ static int dpu_hw_lm_collect_misr(struct dpu_hw_mixer *ctx, u32 *misr_value)
+ ctrl = DPU_REG_READ(c, LM_MISR_CTRL);
+
+ if (!(ctrl & LM_MISR_CTRL_ENABLE))
+- return -EINVAL;
++ return -ENODATA;
+
+ if (!(ctrl & LM_MISR_CTRL_STATUS))
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index 47fe11a84a77c..ced3e00e4ad57 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -795,8 +795,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms)
+ for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
+ u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
+
+- if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx])
++ if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) {
+ dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]);
++ dpu_kms->hw_vbif[vbif_idx] = NULL;
++ }
+ }
+ }
+
+@@ -1073,7 +1075,9 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+
+ dpu_kms_parse_data_bus_icc_path(dpu_kms);
+
+- pm_runtime_get_sync(&dpu_kms->pdev->dev);
++ rc = pm_runtime_resume_and_get(&dpu_kms->pdev->dev);
++ if (rc < 0)
++ goto error;
+
+ dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0);
+
+@@ -1258,7 +1262,7 @@ static int dpu_bind(struct device *dev, struct device *master, void *data)
+
+ priv->kms = &dpu_kms->base;
+
+- return ret;
++ return 0;
+ }
+
+ static void dpu_unbind(struct device *dev, struct device *master, void *data)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index bb7d066618e64..0e02e252ff894 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -612,9 +612,15 @@ static int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc,
+ if (ret)
+ return ret;
+
+- mdp5_mixer_release(new_crtc_state->state, old_mixer);
++ ret = mdp5_mixer_release(new_crtc_state->state, old_mixer);
++ if (ret)
++ return ret;
++
+ if (old_r_mixer) {
+- mdp5_mixer_release(new_crtc_state->state, old_r_mixer);
++ ret = mdp5_mixer_release(new_crtc_state->state, old_r_mixer);
++ if (ret)
++ return ret;
++
+ if (!need_right_mixer)
+ pipeline->r_mixer = NULL;
+ }
+@@ -983,8 +989,10 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
+
+ ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace,
+ &mdp5_crtc->cursor.iova);
+- if (ret)
++ if (ret) {
++ drm_gem_object_put(cursor_bo);
+ return -EINVAL;
++ }
+
+ pm_runtime_get_sync(&pdev->dev);
+
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+index 12a5f81e402bd..1c0dde825e57a 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+@@ -577,9 +577,9 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev)
+ }
+
+ irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+- if (irq < 0) {
+- ret = irq;
+- DRM_DEV_ERROR(&pdev->dev, "failed to get irq: %d\n", ret);
++ if (!irq) {
++ ret = -EINVAL;
++ DRM_DEV_ERROR(&pdev->dev, "failed to get irq\n");
+ goto fail;
+ }
+
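The irq check fix here (and the matching one in dp_display.c further down) corrects a return-convention mix-up: irq_of_parse_and_map() returns an unsigned virq where 0 means no mapping, so the old irq < 0 test could never fire. A compile-and-run illustration:

#include <stdio.h>

static unsigned int parse_and_map_fail(void) { return 0; }

int main(void)
{
	unsigned int irq = parse_and_map_fail();

	if (irq < 0)	/* always false for unsigned; compilers warn here */
		puts("old check: never reached");
	if (!irq)
		puts("new check: mapping failed");
	return 0;
}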
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
+index 954db683ae444..2536def2a0005 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
+@@ -116,21 +116,28 @@ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc,
+ return 0;
+ }
+
+-void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer)
++int mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer)
+ {
+ struct mdp5_global_state *global_state = mdp5_get_global_state(s);
+- struct mdp5_hw_mixer_state *new_state = &global_state->hwmixer;
++ struct mdp5_hw_mixer_state *new_state;
+
+ if (!mixer)
+- return;
++ return 0;
++
++ if (IS_ERR(global_state))
++ return PTR_ERR(global_state);
++
++ new_state = &global_state->hwmixer;
+
+ if (WARN_ON(!new_state->hwmixer_to_crtc[mixer->idx]))
+- return;
++ return -EINVAL;
+
+ DBG("%s: release from crtc %s", mixer->name,
+ new_state->hwmixer_to_crtc[mixer->idx]->name);
+
+ new_state->hwmixer_to_crtc[mixer->idx] = NULL;
++
++ return 0;
+ }
+
+ void mdp5_mixer_destroy(struct mdp5_hw_mixer *mixer)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
+index 43c9ba43ce185..545ee223b9d74 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
+@@ -30,7 +30,7 @@ void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm);
+ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc,
+ uint32_t caps, struct mdp5_hw_mixer **mixer,
+ struct mdp5_hw_mixer **r_mixer);
+-void mdp5_mixer_release(struct drm_atomic_state *s,
+- struct mdp5_hw_mixer *mixer);
++int mdp5_mixer_release(struct drm_atomic_state *s,
++ struct mdp5_hw_mixer *mixer);
+
+ #endif /* __MDP5_LM_H__ */
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+index ba6695963aa66..a4f5cb90f3e80 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+@@ -119,18 +119,23 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ return 0;
+ }
+
+-void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ {
+ struct msm_drm_private *priv = s->dev->dev_private;
+ struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+ struct mdp5_global_state *state = mdp5_get_global_state(s);
+- struct mdp5_hw_pipe_state *new_state = &state->hwpipe;
++ struct mdp5_hw_pipe_state *new_state;
+
+ if (!hwpipe)
+- return;
++ return 0;
++
++ if (IS_ERR(state))
++ return PTR_ERR(state);
++
++ new_state = &state->hwpipe;
+
+ if (WARN_ON(!new_state->hwpipe_to_plane[hwpipe->idx]))
+- return;
++ return -EINVAL;
+
+ DBG("%s: release from plane %s", hwpipe->name,
+ new_state->hwpipe_to_plane[hwpipe->idx]->name);
+@@ -141,6 +146,8 @@ void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ }
+
+ new_state->hwpipe_to_plane[hwpipe->idx] = NULL;
++
++ return 0;
+ }
+
+ void mdp5_pipe_destroy(struct mdp5_hw_pipe *hwpipe)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
+index 9b26d0761bd4f..cca67938cab21 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
+@@ -37,7 +37,7 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ uint32_t caps, uint32_t blkcfg,
+ struct mdp5_hw_pipe **hwpipe,
+ struct mdp5_hw_pipe **r_hwpipe);
+-void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe);
++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe);
+
+ struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe,
+ uint32_t reg_offset, uint32_t caps);
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index 50e854207c70a..c0d947bce9e9a 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -297,12 +297,24 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
+ mdp5_state->r_hwpipe = NULL;
+
+
+- mdp5_pipe_release(state->state, old_hwpipe);
+- mdp5_pipe_release(state->state, old_right_hwpipe);
++ ret = mdp5_pipe_release(state->state, old_hwpipe);
++ if (ret)
++ return ret;
++
++ ret = mdp5_pipe_release(state->state, old_right_hwpipe);
++ if (ret)
++ return ret;
++
+ }
+ } else {
+- mdp5_pipe_release(state->state, mdp5_state->hwpipe);
+- mdp5_pipe_release(state->state, mdp5_state->r_hwpipe);
++ ret = mdp5_pipe_release(state->state, mdp5_state->hwpipe);
++ if (ret)
++ return ret;
++
++ ret = mdp5_pipe_release(state->state, mdp5_state->r_hwpipe);
++ if (ret)
++ return ret;
++
+ mdp5_state->hwpipe = mdp5_state->r_hwpipe = NULL;
+ }
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 8d1ea694d06cd..6eb176872a17b 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1519,7 +1519,7 @@ static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
+ * running. Add the global reset just before disabling the
+ * link clocks and core clocks.
+ */
+- ret = dp_ctrl_off_link_stream(&ctrl->dp_ctrl);
++ ret = dp_ctrl_off(&ctrl->dp_ctrl);
+ if (ret) {
+ DRM_ERROR("failed to disable DP controller\n");
+ return ret;
+@@ -1686,8 +1686,6 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ ctrl->link->link_params.rate,
+ ctrl->link->link_params.num_lanes, ctrl->dp_ctrl.pixel_rate);
+
+- ctrl->link->phy_params.p_level = 0;
+- ctrl->link->phy_params.v_level = 0;
+
+ rc = dp_ctrl_enable_mainlink_clocks(ctrl);
+ if (rc)
+@@ -1809,12 +1807,6 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ }
+ }
+
+- if (!dp_ctrl_channel_eq_ok(ctrl))
+- dp_ctrl_link_retrain(ctrl);
+-
+- /* stop txing train pattern to end link training */
+- dp_ctrl_clear_training_pattern(ctrl);
+-
+ ret = dp_ctrl_enable_stream_clocks(ctrl);
+ if (ret) {
+ DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);
+@@ -1826,6 +1818,12 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ return 0;
+ }
+
++ if (!dp_ctrl_channel_eq_ok(ctrl))
++ dp_ctrl_link_retrain(ctrl);
++
++ /* stop txing train pattern to end link training */
++ dp_ctrl_clear_training_pattern(ctrl);
++
+ /*
+ * Set up transfer unit values and set controller state to send
+ * video.
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 1d7f82e6eafea..2baa1ad548997 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -113,6 +113,7 @@ struct dp_display_private {
+ u32 hpd_state;
+ u32 event_pndx;
+ u32 event_gndx;
++ struct task_struct *ev_tsk;
+ struct dp_event event_list[DP_EVENT_Q_MAX];
+ spinlock_t event_lock;
+
+@@ -230,6 +231,8 @@ void dp_display_signal_audio_complete(struct msm_dp *dp_display)
+ complete_all(&dp->audio_comp);
+ }
+
++static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv);
++
+ static int dp_display_bind(struct device *dev, struct device *master,
+ void *data)
+ {
+@@ -263,9 +266,18 @@ static int dp_display_bind(struct device *dev, struct device *master,
+ }
+
+ rc = dp_register_audio_driver(dev, dp->audio);
+- if (rc)
++ if (rc) {
+ DRM_ERROR("Audio registration Dp failed\n");
++ goto end;
++ }
+
++ rc = dp_hpd_event_thread_start(dp);
++ if (rc) {
++ DRM_ERROR("Event thread create failed\n");
++ goto end;
++ }
++
++ return 0;
+ end:
+ return rc;
+ }
+@@ -276,6 +288,11 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ struct dp_display_private *dp = dev_get_dp_display_private(dev);
+ struct msm_drm_private *priv = dev_get_drvdata(master);
+
++ /* disable all HPD interrupts */
++ dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
++
++ kthread_stop(dp->ev_tsk);
++
+ dp_power_client_deinit(dp->power);
+ dp_aux_unregister(dp->aux);
+ priv->dp[dp->id] = NULL;
+@@ -1051,12 +1068,17 @@ static int hpd_event_thread(void *data)
+ while (1) {
+ if (timeout_mode) {
+ wait_event_timeout(dp_priv->event_q,
+- (dp_priv->event_pndx == dp_priv->event_gndx),
+- EVENT_TIMEOUT);
++ (dp_priv->event_pndx == dp_priv->event_gndx) ||
++ kthread_should_stop(), EVENT_TIMEOUT);
+ } else {
+ wait_event_interruptible(dp_priv->event_q,
+- (dp_priv->event_pndx != dp_priv->event_gndx));
++ (dp_priv->event_pndx != dp_priv->event_gndx) ||
++ kthread_should_stop());
+ }
++
++ if (kthread_should_stop())
++ break;
++
+ spin_lock_irqsave(&dp_priv->event_lock, flag);
+ todo = &dp_priv->event_list[dp_priv->event_gndx];
+ if (todo->delay) {
+@@ -1126,12 +1148,17 @@ static int hpd_event_thread(void *data)
+ return 0;
+ }
+
+-static void dp_hpd_event_setup(struct dp_display_private *dp_priv)
++static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv)
+ {
+- init_waitqueue_head(&dp_priv->event_q);
+- spin_lock_init(&dp_priv->event_lock);
++ /* set event q to empty */
++ dp_priv->event_gndx = 0;
++ dp_priv->event_pndx = 0;
+
+- kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler");
++ dp_priv->ev_tsk = kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler");
++ if (IS_ERR(dp_priv->ev_tsk))
++ return PTR_ERR(dp_priv->ev_tsk);
++
++ return 0;
+ }
+
+ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+@@ -1190,10 +1217,9 @@ int dp_display_request_irq(struct msm_dp *dp_display)
+ dp = container_of(dp_display, struct dp_display_private, dp_display);
+
+ dp->irq = irq_of_parse_and_map(dp->pdev->dev.of_node, 0);
+- if (dp->irq < 0) {
+- rc = dp->irq;
+- DRM_ERROR("failed to get irq: %d\n", rc);
+- return rc;
++ if (!dp->irq) {
++ DRM_ERROR("failed to get irq\n");
++ return -EINVAL;
+ }
+
+ rc = devm_request_irq(&dp->pdev->dev, dp->irq,
+@@ -1260,7 +1286,10 @@ static int dp_display_probe(struct platform_device *pdev)
+ return -EPROBE_DEFER;
+ }
+
++ /* setup event q */
+ mutex_init(&dp->event_mutex);
++ init_waitqueue_head(&dp->event_q);
++ spin_lock_init(&dp->event_lock);
+
+ /* Store DP audio handle inside DP display */
+ dp->dp_display.dp_audio = dp->audio;
+@@ -1435,8 +1464,6 @@ void msm_dp_irq_postinstall(struct msm_dp *dp_display)
+
+ dp = container_of(dp_display, struct dp_display_private, dp_display);
+
+- dp_hpd_event_setup(dp);
+-
+ dp_add_event(dp, EV_HPD_INIT_SETUP, 0, 100);
+ }
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c
+index 26ef41a4c1b68..dca4f9a144e8d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_drm.c
++++ b/drivers/gpu/drm/msm/dp/dp_drm.c
+@@ -230,9 +230,13 @@ struct drm_bridge *msm_dp_bridge_init(struct msm_dp *dp_display, struct drm_devi
+ bridge->funcs = &dp_bridge_ops;
+ bridge->encoder = encoder;
+
++ drm_bridge_add(bridge);
++
+ rc = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+ if (rc) {
+ DRM_ERROR("failed to attach bridge, rc=%d\n", rc);
++ drm_bridge_remove(bridge);
++
+ return ERR_PTR(rc);
+ }
+
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 3a3f53f0c8ae1..9ea07bd541c45 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1337,10 +1337,10 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host,
+ dsi_get_bpp(msm_host->format) / 8;
+
+ len = dsi_cmd_dma_add(msm_host, msg);
+- if (!len) {
++ if (len < 0) {
+ pr_err("%s: failed to add cmd type = 0x%x\n",
+ __func__, msg->type);
+- return -EINVAL;
++ return len;
+ }
+
+ /* for video mode, do not send cmds more than
+@@ -1359,10 +1359,14 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host,
+ }
+
+ ret = dsi_cmd_dma_tx(msm_host, len);
+- if (ret < len) {
+- pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d\n",
+- __func__, msg->type, (*(u8 *)(msg->tx_buf)), len);
+- return -ECOMM;
++ if (ret < 0) {
++ pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d, ret=%d\n",
++ __func__, msg->type, (*(u8 *)(msg->tx_buf)), len, ret);
++ return ret;
++ } else if (ret < len) {
++ pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, ret=%d len=%d\n",
++ __func__, msg->type, (*(u8 *)(msg->tx_buf)), ret, len);
++ return -EIO;
+ }
+
+ return len;
+@@ -2088,9 +2092,12 @@ int msm_dsi_host_cmd_rx(struct mipi_dsi_host *host,
+ }
+
+ ret = dsi_cmds2buf_tx(msm_host, msg);
+- if (ret < msg->tx_len) {
++ if (ret < 0) {
+ pr_err("%s: Read cmd Tx failed, %d\n", __func__, ret);
+ return ret;
++ } else if (ret < msg->tx_len) {
++ pr_err("%s: Read cmd Tx failed, too short: %d\n", __func__, ret);
++ return -ECOMM;
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+index cd7b41b7d5180..d95b2cea14dc7 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+@@ -668,6 +668,8 @@ struct drm_bridge *msm_dsi_manager_bridge_init(u8 id)
+ bridge = &dsi_bridge->base;
+ bridge->funcs = &dsi_mgr_bridge_funcs;
+
++ drm_bridge_add(bridge);
++
+ ret = drm_bridge_attach(encoder, bridge, NULL, 0);
+ if (ret)
+ goto fail;
+@@ -738,6 +740,7 @@ struct drm_connector *msm_dsi_manager_ext_bridge_init(u8 id)
+
+ void msm_dsi_manager_bridge_destroy(struct drm_bridge *bridge)
+ {
++ drm_bridge_remove(bridge);
+ }
+
+ int msm_dsi_manager_cmd_xfer(int id, const struct mipi_dsi_msg *msg)
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 75557ac99adf1..8199c53567f4e 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -1062,7 +1062,7 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_660_cfgs = {
+ },
+ .min_pll_rate = VCO_MIN_RATE,
+ .max_pll_rate = VCO_MAX_RATE,
+- .io_start = { 0xc994400, 0xc996000 },
++ .io_start = { 0xc994400, 0xc996400 },
+ .num_dsi_phy = 2,
+ };
+
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index 719720709e9e7..46a1f74335d82 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -142,6 +142,10 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
+ /* HDCP needs physical address of hdmi register */
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ config->mmio_name);
++ if (!res) {
++ ret = -EINVAL;
++ goto fail;
++ }
+ hdmi->mmio_phy_addr = res->start;
+
+ hdmi->qfprom_mmio = msm_ioremap(pdev,
+@@ -299,9 +303,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ drm_connector_attach_encoder(hdmi->connector, hdmi->encoder);
+
+ hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+- if (hdmi->irq < 0) {
+- ret = hdmi->irq;
+- DRM_DEV_ERROR(dev->dev, "failed to get irq: %d\n", ret);
++ if (!hdmi->irq) {
++ ret = -EINVAL;
++ DRM_DEV_ERROR(dev->dev, "failed to get irq\n");
+ goto fail;
+ }
+
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
+index 68fba4bf72121..9b1b7608ee6d8 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
+@@ -15,6 +15,7 @@ void msm_hdmi_bridge_destroy(struct drm_bridge *bridge)
+ struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
+
+ msm_hdmi_hpd_disable(hdmi_bridge);
++ drm_bridge_remove(bridge);
+ }
+
+ static void msm_hdmi_power_on(struct drm_bridge *bridge)
+@@ -346,6 +347,8 @@ struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi)
+ DRM_BRIDGE_OP_DETECT |
+ DRM_BRIDGE_OP_EDID;
+
++ drm_bridge_add(bridge);
++
+ ret = drm_bridge_attach(hdmi->encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+ if (ret)
+ goto fail;
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 555666e3f9603..aa3442e1981ae 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -11,6 +11,7 @@
+ #include <linux/uaccess.h>
+ #include <uapi/linux/sched/types.h>
+
++#include <drm/drm_bridge.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_file.h>
+ #include <drm/drm_ioctl.h>
+@@ -265,6 +266,8 @@ static int msm_irq_postinstall(struct drm_device *dev)
+
+ static int msm_irq_install(struct drm_device *dev, unsigned int irq)
+ {
++ struct msm_drm_private *priv = dev->dev_private;
++ struct msm_kms *kms = priv->kms;
+ int ret;
+
+ if (irq == IRQ_NOTCONNECTED)
+@@ -276,6 +279,8 @@ static int msm_irq_install(struct drm_device *dev, unsigned int irq)
+ if (ret)
+ return ret;
+
++ kms->irq_requested = true;
++
+ ret = msm_irq_postinstall(dev);
+ if (ret) {
+ free_irq(irq, dev);
+@@ -291,7 +296,8 @@ static void msm_irq_uninstall(struct drm_device *dev)
+ struct msm_kms *kms = priv->kms;
+
+ kms->funcs->irq_uninstall(kms);
+- free_irq(kms->irq, dev);
++ if (kms->irq_requested)
++ free_irq(kms->irq, dev);
+ }
+
+ struct msm_vblank_work {
+@@ -385,6 +391,9 @@ static int msm_drm_uninit(struct device *dev)
+
+ drm_mode_config_cleanup(ddev);
+
++ for (i = 0; i < priv->num_bridges; i++)
++ drm_bridge_remove(priv->bridges[i]);
++
+ pm_runtime_get_sync(dev);
+ msm_irq_uninstall(ddev);
+ pm_runtime_put_sync(dev);
+diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
+index fc94e061d6a7c..8a2d94bd5df28 100644
+--- a/drivers/gpu/drm/msm/msm_gem_prime.c
++++ b/drivers/gpu/drm/msm/msm_gem_prime.c
+@@ -17,7 +17,7 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
+ int npages = obj->size >> PAGE_SHIFT;
+
+ if (WARN_ON(!msm_obj->pages)) /* should have already pinned! */
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ return drm_prime_pages_to_sg(obj->dev, msm_obj->pages, npages);
+ }
+diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
+index 92aa1e9196c64..5efca011d66fc 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.h
++++ b/drivers/gpu/drm/msm/msm_gpu.h
+@@ -9,6 +9,7 @@
+
+ #include <linux/adreno-smmu-priv.h>
+ #include <linux/clk.h>
++#include <linux/devfreq.h>
+ #include <linux/interconnect.h>
+ #include <linux/pm_opp.h>
+ #include <linux/regulator/consumer.h>
+@@ -59,7 +60,7 @@ struct msm_gpu_funcs {
+ /* for generation specific debugfs: */
+ void (*debugfs_init)(struct msm_gpu *gpu, struct drm_minor *minor);
+ #endif
+- unsigned long (*gpu_busy)(struct msm_gpu *gpu);
++ u64 (*gpu_busy)(struct msm_gpu *gpu, unsigned long *out_sample_rate);
+ struct msm_gpu_state *(*gpu_state_get)(struct msm_gpu *gpu);
+ int (*gpu_state_put)(struct msm_gpu_state *state);
+ unsigned long (*gpu_get_freq)(struct msm_gpu *gpu);
+@@ -103,11 +104,8 @@ struct msm_gpu_devfreq {
+ struct dev_pm_qos_request boost_freq;
+
+ /**
+- * busy_cycles:
+- *
+- * Used by implementation of gpu->gpu_busy() to track the last
+- * busy counter value, for calculating elapsed busy cycles since
+- * last sampling period.
++ * busy_cycles: Last busy counter value, for calculating elapsed busy
++ * cycles since last sampling period.
+ */
+ u64 busy_cycles;
+
+@@ -117,6 +115,8 @@ struct msm_gpu_devfreq {
+ /** idle_time: Time of last transition to idle: */
+ ktime_t idle_time;
+
++ struct devfreq_dev_status average_status;
++
+ /**
+ * idle_work:
+ *
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index 12641616acd30..c7dbaa4b19264 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -9,6 +9,7 @@
+
+ #include <linux/devfreq.h>
+ #include <linux/devfreq_cooling.h>
++#include <linux/math64.h>
+ #include <linux/units.h>
+
+ /*
+@@ -49,18 +50,95 @@ static unsigned long get_freq(struct msm_gpu *gpu)
+ return clk_get_rate(gpu->core_clk);
+ }
+
+-static int msm_devfreq_get_dev_status(struct device *dev,
++static void get_raw_dev_status(struct msm_gpu *gpu,
+ struct devfreq_dev_status *status)
+ {
+- struct msm_gpu *gpu = dev_to_gpu(dev);
++ struct msm_gpu_devfreq *df = &gpu->devfreq;
++ u64 busy_cycles, busy_time;
++ unsigned long sample_rate;
+ ktime_t time;
+
+ status->current_frequency = get_freq(gpu);
+- status->busy_time = gpu->funcs->gpu_busy(gpu);
+-
++ busy_cycles = gpu->funcs->gpu_busy(gpu, &sample_rate);
+ time = ktime_get();
+- status->total_time = ktime_us_delta(time, gpu->devfreq.time);
+- gpu->devfreq.time = time;
++
++ busy_time = busy_cycles - df->busy_cycles;
++ status->total_time = ktime_us_delta(time, df->time);
++
++ df->busy_cycles = busy_cycles;
++ df->time = time;
++
++ busy_time *= USEC_PER_SEC;
++ do_div(busy_time, sample_rate);
++ if (WARN_ON(busy_time > ~0LU))
++ busy_time = ~0LU;
++
++ status->busy_time = busy_time;
++}
++
++static void update_average_dev_status(struct msm_gpu *gpu,
++ const struct devfreq_dev_status *raw)
++{
++ struct msm_gpu_devfreq *df = &gpu->devfreq;
++ const u32 polling_ms = df->devfreq->profile->polling_ms;
++ const u32 max_history_ms = polling_ms * 11 / 10;
++ struct devfreq_dev_status *avg = &df->average_status;
++ u64 avg_freq;
++
++ /* simple_ondemand governor interacts poorly with gpu->clamp_to_idle.
++ * When we enforce the constraint on idle, it calls get_dev_status
++ * which would normally reset the stats. When we remove the
++ * constraint on active, it calls get_dev_status again where busy_time
++ * would be 0.
++ *
++ * To remedy this, we always return the average load over the past
++ * polling_ms.
++ */
++
++ /* raw is longer than polling_ms or avg has no history */
++ if (div_u64(raw->total_time, USEC_PER_MSEC) >= polling_ms ||
++ !avg->total_time) {
++ *avg = *raw;
++ return;
++ }
++
++ /* Truncate the oldest history first.
++ *
++ * Because we keep the history with a single devfreq_dev_status,
++ * rather than a list of devfreq_dev_status, we have to assume freq
++ * and load are the same over avg->total_time. We can scale down
++ * avg->busy_time and avg->total_time by the same factor to drop
++ * history.
++ */
++ if (div_u64(avg->total_time + raw->total_time, USEC_PER_MSEC) >=
++ max_history_ms) {
++ const u32 new_total_time = polling_ms * USEC_PER_MSEC -
++ raw->total_time;
++ avg->busy_time = div_u64(
++ mul_u32_u32(avg->busy_time, new_total_time),
++ avg->total_time);
++ avg->total_time = new_total_time;
++ }
++
++ /* compute the average freq over avg->total_time + raw->total_time */
++ avg_freq = mul_u32_u32(avg->current_frequency, avg->total_time);
++ avg_freq += mul_u32_u32(raw->current_frequency, raw->total_time);
++ do_div(avg_freq, avg->total_time + raw->total_time);
++
++ avg->current_frequency = avg_freq;
++ avg->busy_time += raw->busy_time;
++ avg->total_time += raw->total_time;
++}
++
++static int msm_devfreq_get_dev_status(struct device *dev,
++ struct devfreq_dev_status *status)
++{
++ struct msm_gpu *gpu = dev_to_gpu(dev);
++ struct devfreq_dev_status raw;
++
++ get_raw_dev_status(gpu, &raw);
++ update_average_dev_status(gpu, &raw);
++ *status = gpu->devfreq.average_status;
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
+index 2a4f0526cb980..401d7e19811f3 100644
+--- a/drivers/gpu/drm/msm/msm_kms.h
++++ b/drivers/gpu/drm/msm/msm_kms.h
+@@ -148,6 +148,7 @@ struct msm_kms {
+
+ /* irq number to be passed on to msm_irq_install */
+ int irq;
++ bool irq_requested;
+
+ /* mapper-id used to request GEM buffer mapped for scanout: */
+ struct msm_gem_address_space *aspace;
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/atom.h b/drivers/gpu/drm/nouveau/dispnv50/atom.h
+index 3d82b3c67decc..93f8f4f645784 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/atom.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/atom.h
+@@ -160,14 +160,14 @@ nv50_head_atom_get(struct drm_atomic_state *state, struct drm_crtc *crtc)
+ static inline struct drm_encoder *
+ nv50_head_atom_get_encoder(struct nv50_head_atom *atom)
+ {
+- struct drm_encoder *encoder = NULL;
++ struct drm_encoder *encoder;
+
+ /* We only ever have a single encoder */
+ drm_for_each_encoder_mask(encoder, atom->state.crtc->dev,
+ atom->state.encoder_mask)
+- break;
++ return encoder;
+
+- return encoder;
++ return NULL;
+ }
+
+ #define nv50_wndw_atom(p) container_of((p), struct nv50_wndw_atom, state)
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/crc.c b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+index 29428e770f146..b834e8a9ae775 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/crc.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+@@ -390,9 +390,18 @@ void nv50_crc_atomic_check_outp(struct nv50_atom *atom)
+ struct nv50_head_atom *armh = nv50_head_atom(old_crtc_state);
+ struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
+ struct nv50_outp_atom *outp_atom;
+- struct nouveau_encoder *outp =
+- nv50_real_outp(nv50_head_atom_get_encoder(armh));
+- struct drm_encoder *encoder = &outp->base.base;
++ struct nouveau_encoder *outp;
++ struct drm_encoder *encoder, *enc;
++
++ enc = nv50_head_atom_get_encoder(armh);
++ if (!enc)
++ continue;
++
++ outp = nv50_real_outp(enc);
++ if (!outp)
++ continue;
++
++ encoder = &outp->base.base;
+
+ if (!asyh->clr.crc)
+ continue;
+@@ -443,8 +452,16 @@ void nv50_crc_atomic_set(struct nv50_head *head,
+ struct drm_device *dev = crtc->dev;
+ struct nv50_crc *crc = &head->crc;
+ const struct nv50_crc_func *func = nv50_disp(dev)->core->func->crc;
+- struct nouveau_encoder *outp =
+- nv50_real_outp(nv50_head_atom_get_encoder(asyh));
++ struct nouveau_encoder *outp;
++ struct drm_encoder *encoder;
++
++ encoder = nv50_head_atom_get_encoder(asyh);
++ if (!encoder)
++ return;
++
++ outp = nv50_real_outp(encoder);
++ if (!outp)
++ return;
+
+ func->set_src(head, outp->or, nv50_crc_source_type(outp, asyh->crc.src),
+ &crc->ctx[crc->ctx_idx]);
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h b/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
+index 1665738948fb4..96113c8bee8c5 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
+@@ -62,4 +62,6 @@ void nvkm_subdev_intr(struct nvkm_subdev *);
+ #define nvkm_debug(s,f,a...) nvkm_printk((s), DEBUG, info, f, ##a)
+ #define nvkm_trace(s,f,a...) nvkm_printk((s), TRACE, info, f, ##a)
+ #define nvkm_spam(s,f,a...) nvkm_printk((s), SPAM, dbg, f, ##a)
++
++#define nvkm_error_ratelimited(s,f,a...) nvkm_printk((s), ERROR, err_ratelimited, f, ##a)
+ #endif
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c
+index 53a6651ac2258..80b5aaceeaad1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c
+@@ -35,13 +35,13 @@ gf100_bus_intr(struct nvkm_bus *bus)
+ u32 addr = nvkm_rd32(device, 0x009084);
+ u32 data = nvkm_rd32(device, 0x009088);
+
+- nvkm_error(subdev,
+- "MMIO %s of %08x FAULT at %06x [ %s%s%s]\n",
+- (addr & 0x00000002) ? "write" : "read", data,
+- (addr & 0x00fffffc),
+- (stat & 0x00000002) ? "!ENGINE " : "",
+- (stat & 0x00000004) ? "PRIVRING " : "",
+- (stat & 0x00000008) ? "TIMEOUT " : "");
++ nvkm_error_ratelimited(subdev,
++ "MMIO %s of %08x FAULT at %06x [ %s%s%s]\n",
++ (addr & 0x00000002) ? "write" : "read", data,
++ (addr & 0x00fffffc),
++ (stat & 0x00000002) ? "!ENGINE " : "",
++ (stat & 0x00000004) ? "PRIVRING " : "",
++ (stat & 0x00000008) ? "TIMEOUT " : "");
+
+ nvkm_wr32(device, 0x009084, 0x00000000);
+ nvkm_wr32(device, 0x001100, (stat & 0x0000000e));
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c
+index ad8da523bb22e..c75e463f35013 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c
+@@ -45,9 +45,9 @@ nv31_bus_intr(struct nvkm_bus *bus)
+ u32 addr = nvkm_rd32(device, 0x009084);
+ u32 data = nvkm_rd32(device, 0x009088);
+
+- nvkm_error(subdev, "MMIO %s of %08x FAULT at %06x\n",
+- (addr & 0x00000002) ? "write" : "read", data,
+- (addr & 0x00fffffc));
++ nvkm_error_ratelimited(subdev, "MMIO %s of %08x FAULT at %06x\n",
++ (addr & 0x00000002) ? "write" : "read", data,
++ (addr & 0x00fffffc));
+
+ stat &= ~0x00000008;
+ nvkm_wr32(device, 0x001100, 0x00000008);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c
+index 3a1e45adeedc1..2055d0b100d3f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c
+@@ -60,9 +60,9 @@ nv50_bus_intr(struct nvkm_bus *bus)
+ u32 addr = nvkm_rd32(device, 0x009084);
+ u32 data = nvkm_rd32(device, 0x009088);
+
+- nvkm_error(subdev, "MMIO %s of %08x FAULT at %06x\n",
+- (addr & 0x00000002) ? "write" : "read", data,
+- (addr & 0x00fffffc));
++ nvkm_error_ratelimited(subdev, "MMIO %s of %08x FAULT at %06x\n",
++ (addr & 0x00000002) ? "write" : "read", data,
++ (addr & 0x00fffffc));
+
+ stat &= ~0x00000008;
+ nvkm_wr32(device, 0x001100, 0x00000008);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+index 57199be082fd3..c2b5cc5f97eda 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+@@ -135,10 +135,10 @@ nvkm_cstate_find_best(struct nvkm_clk *clk, struct nvkm_pstate *pstate,
+
+ list_for_each_entry_from_reverse(cstate, &pstate->list, head) {
+ if (nvkm_cstate_valid(clk, cstate, max_volt, clk->temp))
+- break;
++ return cstate;
+ }
+
+- return cstate;
++ return NULL;
+ }
+
+ static struct nvkm_cstate *
+@@ -169,6 +169,8 @@ nvkm_cstate_prog(struct nvkm_clk *clk, struct nvkm_pstate *pstate, int cstatei)
+ if (!list_empty(&pstate->list)) {
+ cstate = nvkm_cstate_get(clk, pstate, cstatei);
+ cstate = nvkm_cstate_find_best(clk, pstate, cstate);
++ if (!cstate)
++ return -EINVAL;
+ } else {
+ cstate = &pstate->base;
+ }
+diff --git a/drivers/gpu/drm/omapdrm/omap_overlay.c b/drivers/gpu/drm/omapdrm/omap_overlay.c
+index 10730c9b27523..b0bc9ad2ef73a 100644
+--- a/drivers/gpu/drm/omapdrm/omap_overlay.c
++++ b/drivers/gpu/drm/omapdrm/omap_overlay.c
+@@ -86,7 +86,7 @@ int omap_overlay_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ r_ovl = omap_plane_find_free_overlay(s->dev, overlay_map,
+ caps, fourcc);
+ if (!r_ovl) {
+- overlay_map[r_ovl->idx] = NULL;
++ overlay_map[ovl->idx] = NULL;
+ *overlay = NULL;
+ return -ENOMEM;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index b42c1d816e792..725055edb67e1 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -720,7 +720,7 @@ static const struct drm_display_mode ampire_am_1280800n3tzqw_t00h_mode = {
+ static const struct panel_desc ampire_am_1280800n3tzqw_t00h = {
+ .modes = &ire_am_1280800n3tzqw_t00h_mode,
+ .num_modes = 1,
+- .bpc = 6,
++ .bpc = 8,
+ .size = {
+ .width = 217,
+ .height = 136,
+@@ -2029,6 +2029,7 @@ static const struct panel_desc innolux_g070y2_l01 = {
+ .unprepare = 800,
+ },
+ .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++ .bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 3e8d9e2d1b675..d53037531f407 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -2118,10 +2118,10 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
+ vop_win_init(vop);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- vop->len = resource_size(res);
+ vop->regs = devm_ioremap_resource(dev, res);
+ if (IS_ERR(vop->regs))
+ return PTR_ERR(vop->regs);
++ vop->len = resource_size(res);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (res) {
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index dbdee954692a6..d6124aa873e51 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -528,8 +528,8 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ struct drm_device *ddev = crtc->dev;
+ struct drm_connector_list_iter iter;
+ struct drm_connector *connector = NULL;
+- struct drm_encoder *encoder = NULL;
+- struct drm_bridge *bridge = NULL;
++ struct drm_encoder *encoder = NULL, *en_iter;
++ struct drm_bridge *bridge = NULL, *br_iter;
+ struct drm_display_mode *mode = &crtc->state->adjusted_mode;
+ u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h;
+ u32 total_width, total_height;
+@@ -538,15 +538,19 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ int ret;
+
+ /* get encoder from crtc */
+- drm_for_each_encoder(encoder, ddev)
+- if (encoder->crtc == crtc)
++ drm_for_each_encoder(en_iter, ddev)
++ if (en_iter->crtc == crtc) {
++ encoder = en_iter;
+ break;
++ }
+
+ if (encoder) {
+ /* get bridge from encoder */
+- list_for_each_entry(bridge, &encoder->bridge_chain, chain_node)
+- if (bridge->encoder == encoder)
++ list_for_each_entry(br_iter, &encoder->bridge_chain, chain_node)
++ if (br_iter->encoder == encoder) {
++ bridge = br_iter;
+ break;
++ }
+
+ /* Get the connector from encoder */
+ drm_connector_list_iter_begin(ddev, &iter);
+diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
+index fce0e52973c2f..9810b3bdd3427 100644
+--- a/drivers/gpu/drm/tegra/gem.c
++++ b/drivers/gpu/drm/tegra/gem.c
+@@ -88,6 +88,7 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_
+ if (IS_ERR(map->sgt)) {
+ dma_buf_detach(buf, map->attach);
+ err = PTR_ERR(map->sgt);
++ map->sgt = NULL;
+ goto free;
+ }
+
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_external.c b/drivers/gpu/drm/tilcdc/tilcdc_external.c
+index 7594cf6e186eb..3b86d002ef62e 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_external.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_external.c
+@@ -60,11 +60,13 @@ struct drm_connector *tilcdc_encoder_find_connector(struct drm_device *ddev,
+ int tilcdc_add_component_encoder(struct drm_device *ddev)
+ {
+ struct tilcdc_drm_private *priv = ddev->dev_private;
+- struct drm_encoder *encoder;
++ struct drm_encoder *encoder = NULL, *iter;
+
+- list_for_each_entry(encoder, &ddev->mode_config.encoder_list, head)
+- if (encoder->possible_crtcs & (1 << priv->crtc->index))
++ list_for_each_entry(iter, &ddev->mode_config.encoder_list, head)
++ if (iter->possible_crtcs & (1 << priv->crtc->index)) {
++ encoder = iter;
+ break;
++ }
+
+ if (!encoder) {
+ dev_err(ddev->dev, "%s: No suitable encoder found\n", __func__);
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 0288ef063513e..f6a88abccc7d9 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -25,11 +25,12 @@ void v3d_perfmon_start(struct v3d_dev *v3d, struct v3d_perfmon *perfmon)
+ {
+ unsigned int i;
+ u32 mask;
+- u8 ncounters = perfmon->ncounters;
++ u8 ncounters;
+
+ if (WARN_ON_ONCE(!perfmon || v3d->active_perfmon))
+ return;
+
++ ncounters = perfmon->ncounters;
+ mask = GENMASK(ncounters - 1, 0);
+
+ for (i = 0; i < ncounters; i++) {
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 783890e8d43a2..477b3c5ad089c 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -123,7 +123,7 @@ static bool vc4_crtc_get_scanout_position(struct drm_crtc *crtc,
+ *vpos /= 2;
+
+ /* Use hpos to correct for field offset in interlaced mode. */
+- if (VC4_GET_FIELD(val, SCALER_DISPSTATX_FRAME_COUNT) % 2)
++ if (vc4_hvs_get_fifo_frame_count(dev, vc4_crtc_state->assigned_channel) % 2)
+ *hpos += mode->crtc_htotal / 2;
+ }
+
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index 4329e09d357c8..801da3e8ebdb8 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -935,6 +935,7 @@ void vc4_irq_reset(struct drm_device *dev);
+ extern struct platform_driver vc4_hvs_driver;
+ void vc4_hvs_stop_channel(struct drm_device *dev, unsigned int output);
+ int vc4_hvs_get_fifo_from_output(struct drm_device *dev, unsigned int output);
++u8 vc4_hvs_get_fifo_frame_count(struct drm_device *dev, unsigned int fifo);
+ int vc4_hvs_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state);
+ void vc4_hvs_atomic_begin(struct drm_crtc *crtc, struct drm_atomic_state *state);
+ void vc4_hvs_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state);
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 604933e20e6a2..9d88bfb50c9b0 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -197,6 +197,29 @@ static void vc4_hvs_update_gamma_lut(struct drm_crtc *crtc)
+ vc4_hvs_lut_load(crtc);
+ }
+
++u8 vc4_hvs_get_fifo_frame_count(struct drm_device *dev, unsigned int fifo)
++{
++ struct vc4_dev *vc4 = to_vc4_dev(dev);
++ u8 field = 0;
++
++ switch (fifo) {
++ case 0:
++ field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT1),
++ SCALER_DISPSTAT1_FRCNT0);
++ break;
++ case 1:
++ field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT1),
++ SCALER_DISPSTAT1_FRCNT1);
++ break;
++ case 2:
++ field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT2),
++ SCALER_DISPSTAT2_FRCNT2);
++ break;
++ }
++
++ return field;
++}
++
+ int vc4_hvs_get_fifo_from_output(struct drm_device *dev, unsigned int output)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+@@ -582,6 +605,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ struct vc4_hvs *hvs = NULL;
+ int ret;
+ u32 dispctrl;
++ u32 reg;
+
+ hvs = devm_kzalloc(&pdev->dev, sizeof(*hvs), GFP_KERNEL);
+ if (!hvs)
+@@ -653,6 +677,26 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+
+ vc4->hvs = hvs;
+
++ reg = HVS_READ(SCALER_DISPECTRL);
++ reg &= ~SCALER_DISPECTRL_DSP2_MUX_MASK;
++ HVS_WRITE(SCALER_DISPECTRL,
++ reg | VC4_SET_FIELD(0, SCALER_DISPECTRL_DSP2_MUX));
++
++ reg = HVS_READ(SCALER_DISPCTRL);
++ reg &= ~SCALER_DISPCTRL_DSP3_MUX_MASK;
++ HVS_WRITE(SCALER_DISPCTRL,
++ reg | VC4_SET_FIELD(3, SCALER_DISPCTRL_DSP3_MUX));
++
++ reg = HVS_READ(SCALER_DISPEOLN);
++ reg &= ~SCALER_DISPEOLN_DSP4_MUX_MASK;
++ HVS_WRITE(SCALER_DISPEOLN,
++ reg | VC4_SET_FIELD(3, SCALER_DISPEOLN_DSP4_MUX));
++
++ reg = HVS_READ(SCALER_DISPDITHER);
++ reg &= ~SCALER_DISPDITHER_DSP5_MUX_MASK;
++ HVS_WRITE(SCALER_DISPDITHER,
++ reg | VC4_SET_FIELD(3, SCALER_DISPDITHER_DSP5_MUX));
++
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+
+ dispctrl |= SCALER_DISPCTRL_ENABLE;
+@@ -660,10 +704,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_DISPEIRQ(1) |
+ SCALER_DISPCTRL_DISPEIRQ(2);
+
+- /* Set DSP3 (PV1) to use HVS channel 2, which would otherwise
+- * be unused.
+- */
+- dispctrl &= ~SCALER_DISPCTRL_DSP3_MUX_MASK;
+ dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
+ SCALER_DISPCTRL_SLVWREIRQ |
+ SCALER_DISPCTRL_SLVRDEIRQ |
+@@ -677,7 +717,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_DSPEISLUR(1) |
+ SCALER_DISPCTRL_DSPEISLUR(2) |
+ SCALER_DISPCTRL_SCLEIRQ);
+- dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_DSP3_MUX);
+
+ HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+
+diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
+index 24de29bc1cda4..992d6a2400029 100644
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -385,9 +385,10 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+ }
+
+ if (vc4->hvs->hvs5) {
++ unsigned long state_rate = max(old_hvs_state->core_clock_rate,
++ new_hvs_state->core_clock_rate);
+ unsigned long core_rate = max_t(unsigned long,
+- 500000000,
+- new_hvs_state->core_clock_rate);
++ 500000000, state_rate);
+
+ clk_set_min_rate(hvs->core_clk, core_rate);
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_regs.h b/drivers/gpu/drm/vc4/vc4_regs.h
+index 7538b84a6dcaa..e3761ffbac7bc 100644
+--- a/drivers/gpu/drm/vc4/vc4_regs.h
++++ b/drivers/gpu/drm/vc4/vc4_regs.h
+@@ -379,8 +379,6 @@
+ # define SCALER_DISPSTATX_MODE_EOF 3
+ # define SCALER_DISPSTATX_FULL BIT(29)
+ # define SCALER_DISPSTATX_EMPTY BIT(28)
+-# define SCALER_DISPSTATX_FRAME_COUNT_MASK VC4_MASK(17, 12)
+-# define SCALER_DISPSTATX_FRAME_COUNT_SHIFT 12
+ # define SCALER_DISPSTATX_LINE_MASK VC4_MASK(11, 0)
+ # define SCALER_DISPSTATX_LINE_SHIFT 0
+
+@@ -403,9 +401,15 @@
+ (x) * (SCALER_DISPBKGND1 - \
+ SCALER_DISPBKGND0))
+ #define SCALER_DISPSTAT1 0x00000058
++# define SCALER_DISPSTAT1_FRCNT0_MASK VC4_MASK(23, 18)
++# define SCALER_DISPSTAT1_FRCNT0_SHIFT 18
++# define SCALER_DISPSTAT1_FRCNT1_MASK VC4_MASK(17, 12)
++# define SCALER_DISPSTAT1_FRCNT1_SHIFT 12
++
+ #define SCALER_DISPSTATX(x) (SCALER_DISPSTAT0 + \
+ (x) * (SCALER_DISPSTAT1 - \
+ SCALER_DISPSTAT0))
++
+ #define SCALER_DISPBASE1 0x0000005c
+ #define SCALER_DISPBASEX(x) (SCALER_DISPBASE0 + \
+ (x) * (SCALER_DISPBASE1 - \
+@@ -415,7 +419,11 @@
+ (x) * (SCALER_DISPCTRL1 - \
+ SCALER_DISPCTRL0))
+ #define SCALER_DISPBKGND2 0x00000064
++
+ #define SCALER_DISPSTAT2 0x00000068
++# define SCALER_DISPSTAT2_FRCNT2_MASK VC4_MASK(17, 12)
++# define SCALER_DISPSTAT2_FRCNT2_SHIFT 12
++
+ #define SCALER_DISPBASE2 0x0000006c
+ #define SCALER_DISPALPHA2 0x00000070
+ #define SCALER_GAMADDR 0x00000078
+diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c
+index 9809ca3e29451..82beb8c159f28 100644
+--- a/drivers/gpu/drm/vc4/vc4_txp.c
++++ b/drivers/gpu/drm/vc4/vc4_txp.c
+@@ -298,12 +298,18 @@ static void vc4_txp_connector_atomic_commit(struct drm_connector *conn,
+ if (WARN_ON(i == ARRAY_SIZE(drm_fmts)))
+ return;
+
+- ctrl = TXP_GO | TXP_VSTART_AT_EOF | TXP_EI |
++ ctrl = TXP_GO | TXP_EI |
+ VC4_SET_FIELD(0xf, TXP_BYTE_ENABLE) |
+ VC4_SET_FIELD(txp_fmts[i], TXP_FORMAT);
+
+ if (fb->format->has_alpha)
+ ctrl |= TXP_ALPHA_ENABLE;
++ else
++ /*
++ * If TXP_ALPHA_ENABLE isn't set and TXP_ALPHA_INVERT is, the
++ * hardware will force the output padding to be 0xff.
++ */
++ ctrl |= TXP_ALPHA_INVERT;
+
+ gem = drm_fb_cma_get_gem_obj(fb, 0);
+ TXP_WRITE(TXP_DST_PTR, gem->paddr + fb->offsets[0]);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
+index 5b00310ac4cd4..f73352e7b8329 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_display.c
++++ b/drivers/gpu/drm/virtio/virtgpu_display.c
+@@ -179,6 +179,8 @@ static int virtio_gpu_conn_get_modes(struct drm_connector *connector)
+ DRM_DEBUG("add mode: %dx%d\n", width, height);
+ mode = drm_cvt_mode(connector->dev, width, height, 60,
+ false, false, false);
++ if (!mode)
++ return count;
+ mode->type |= DRM_MODE_TYPE_PREFERRED;
+ drm_mode_probed_add(connector, mode);
+ count++;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 93431e8f66060..9410152f9d6f1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -914,6 +914,15 @@ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
+ * Sanity checks.
+ */
+
++ if (!drm_any_plane_has_format(&dev_priv->drm,
++ mode_cmd->pixel_format,
++ mode_cmd->modifier[0])) {
++ drm_dbg(&dev_priv->drm,
++ "unsupported pixel format %p4cc / modifier 0x%llx\n",
++ &mode_cmd->pixel_format, mode_cmd->modifier[0]);
++ return -EINVAL;
++ }
++
+ /* Surface must be marked as a scanout. */
+ if (unlikely(!surface->metadata.scanout))
+ return -EINVAL;
+@@ -1236,20 +1245,13 @@ static int vmw_kms_new_framebuffer_bo(struct vmw_private *dev_priv,
+ return -EINVAL;
+ }
+
+- /* Limited framebuffer color depth support for screen objects */
+- if (dev_priv->active_display_unit == vmw_du_screen_object) {
+- switch (mode_cmd->pixel_format) {
+- case DRM_FORMAT_XRGB8888:
+- case DRM_FORMAT_ARGB8888:
+- break;
+- case DRM_FORMAT_XRGB1555:
+- case DRM_FORMAT_RGB565:
+- break;
+- default:
+- DRM_ERROR("Invalid pixel format: %p4cc\n",
+- &mode_cmd->pixel_format);
+- return -EINVAL;
+- }
++ if (!drm_any_plane_has_format(&dev_priv->drm,
++ mode_cmd->pixel_format,
++ mode_cmd->modifier[0])) {
++ drm_dbg(&dev_priv->drm,
++ "unsupported pixel format %p4cc / modifier 0x%llx\n",
++ &mode_cmd->pixel_format, mode_cmd->modifier[0]);
++ return -EINVAL;
+ }
+
+ vfbd = kzalloc(sizeof(*vfbd), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+index 4d36e85073806..d9ebd02099a68 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+@@ -247,7 +247,6 @@ struct vmw_framebuffer_bo {
+ static const uint32_t __maybe_unused vmw_primary_plane_formats[] = {
+ DRM_FORMAT_XRGB1555,
+ DRM_FORMAT_RGB565,
+- DRM_FORMAT_RGB888,
+ DRM_FORMAT_XRGB8888,
+ DRM_FORMAT_ARGB8888,
+ };
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index 708899ba21029..6542f14986510 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -859,22 +859,21 @@ void vmw_query_move_notify(struct ttm_buffer_object *bo,
+ struct ttm_device *bdev = bo->bdev;
+ struct vmw_private *dev_priv;
+
+-
+ dev_priv = container_of(bdev, struct vmw_private, bdev);
+
+ mutex_lock(&dev_priv->binding_mutex);
+
+- dx_query_mob = container_of(bo, struct vmw_buffer_object, base);
+- if (!dx_query_mob || !dx_query_mob->dx_query_ctx) {
+- mutex_unlock(&dev_priv->binding_mutex);
+- return;
+- }
+-
+ /* If BO is being moved from MOB to system memory */
+ if (new_mem->mem_type == TTM_PL_SYSTEM &&
+ old_mem->mem_type == VMW_PL_MOB) {
+ struct vmw_fence_obj *fence;
+
++ dx_query_mob = container_of(bo, struct vmw_buffer_object, base);
++ if (!dx_query_mob || !dx_query_mob->dx_query_ctx) {
++ mutex_unlock(&dev_priv->binding_mutex);
++ return;
++ }
++
+ (void) vmw_query_readback_all(dx_query_mob);
+ mutex_unlock(&dev_priv->binding_mutex);
+
+@@ -888,7 +887,6 @@ void vmw_query_move_notify(struct ttm_buffer_object *bo,
+ (void) ttm_bo_wait(bo, false, false);
+ } else
+ mutex_unlock(&dev_priv->binding_mutex);
+-
+ }
+
+ /**
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+index 2bf97b6ac9735..e2a9679e32be8 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+@@ -141,10 +141,10 @@ int amdtp_hid_probe(u32 cur_hid_dev, struct amdtp_cl_data *cli_data)
+
+ hid->driver_data = hid_data;
+ cli_data->hid_sensor_hubs[cur_hid_dev] = hid;
+- hid->bus = BUS_AMD_AMDTP;
++ hid->bus = BUS_AMD_SFH;
+ hid->vendor = AMD_SFH_HID_VENDOR;
+ hid->product = AMD_SFH_HID_PRODUCT;
+- snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", "hid-amdtp",
++ snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", "hid-amdsfh",
+ hid->vendor, hid->product);
+
+ rc = hid_add_device(hid);
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h
+index c60abd38054ca..cb04f47c86483 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h
+@@ -12,7 +12,7 @@
+ #define AMDSFH_HID_H
+
+ #define MAX_HID_DEVICES 5
+-#define BUS_AMD_AMDTP 0x20
++#define BUS_AMD_SFH 0x20
+ #define AMD_SFH_HID_VENDOR 0x1022
+ #define AMD_SFH_HID_PRODUCT 0x0001
+
+diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
+index 74ad8bf98bfd5..e8c5e3ac9fff1 100644
+--- a/drivers/hid/hid-bigbenff.c
++++ b/drivers/hid/hid-bigbenff.c
+@@ -347,6 +347,12 @@ static int bigben_probe(struct hid_device *hid,
+ bigben->report = list_entry(report_list->next,
+ struct hid_report, list);
+
++ if (list_empty(&hid->inputs)) {
++ hid_err(hid, "no inputs found\n");
++ error = -ENODEV;
++ goto error_hw_stop;
++ }
++
+ hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
+ set_bit(FF_RUMBLE, hidinput->input->ffbit);
+
+diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c
+index 3091355d48df6..8e4a5528e25df 100644
+--- a/drivers/hid/hid-elan.c
++++ b/drivers/hid/hid-elan.c
+@@ -188,7 +188,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ ret = input_mt_init_slots(input, ELAN_MAX_FINGERS, INPUT_MT_POINTER);
+ if (ret) {
+ hid_err(hdev, "Failed to init elan MT slots: %d\n", ret);
+- input_free_device(input);
+ return ret;
+ }
+
+@@ -200,7 +199,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ hid_err(hdev, "Failed to register elan input device: %d\n",
+ ret);
+ input_mt_destroy_slots(input);
+- input_free_device(input);
+ return ret;
+ }
+
+diff --git a/drivers/hid/hid-led.c b/drivers/hid/hid-led.c
+index c2c66ceca1327..7d82f8d426bbc 100644
+--- a/drivers/hid/hid-led.c
++++ b/drivers/hid/hid-led.c
+@@ -366,7 +366,7 @@ static const struct hidled_config hidled_configs[] = {
+ .type = DREAM_CHEEKY,
+ .name = "Dream Cheeky Webmail Notifier",
+ .short_name = "dream_cheeky",
+- .max_brightness = 31,
++ .max_brightness = 63,
+ .num_leds = 1,
+ .report_size = 9,
+ .report_type = RAW_REQUEST,
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index dc5c35210c16a..20fc8d50a0398 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -1245,7 +1245,9 @@ u64 vmbus_next_request_id(struct vmbus_channel *channel, u64 rqst_addr)
+
+ /*
+ * Cannot return an ID of 0, which is reserved for an unsolicited
+- * message from Hyper-V.
++ * message from Hyper-V; Hyper-V does not acknowledge (respond to)
++ * VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED requests with ID of
++ * 0 sent by the guest.
+ */
+ return current_id + 1;
+ }
+@@ -1270,7 +1272,7 @@ u64 vmbus_request_addr(struct vmbus_channel *channel, u64 trans_id)
+
+ /* Hyper-V can send an unsolicited message with ID of 0 */
+ if (!trans_id)
+- return trans_id;
++ return VMBUS_RQST_ERROR;
+
+ spin_lock_irqsave(&rqstor->req_lock, flags);
+
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index 5f8f824d997f8..63b616ce3a6e9 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -2308,6 +2308,21 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ struct device *dev = &client->dev;
+ int page, ret;
+
++ /*
++ * Figure out if PEC is enabled before accessing any other register.
++ * Make sure PEC is disabled, will be enabled later if needed.
++ */
++ client->flags &= ~I2C_CLIENT_PEC;
++
++ /* Enable PEC if the controller and bus supports it */
++ if (!(data->flags & PMBUS_NO_CAPABILITY)) {
++ ret = i2c_smbus_read_byte_data(client, PMBUS_CAPABILITY);
++ if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK)) {
++ if (i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_PEC))
++ client->flags |= I2C_CLIENT_PEC;
++ }
++ }
++
+ /*
+ * Some PMBus chips don't support PMBUS_STATUS_WORD, so try
+ * to use PMBUS_STATUS_BYTE instead if that is the case.
+@@ -2326,19 +2341,6 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ data->has_status_word = true;
+ }
+
+- /* Make sure PEC is disabled, will be enabled later if needed */
+- client->flags &= ~I2C_CLIENT_PEC;
+-
+- /* Enable PEC if the controller and bus supports it */
+- if (!(data->flags & PMBUS_NO_CAPABILITY)) {
+- ret = i2c_smbus_read_byte_data(client, PMBUS_CAPABILITY);
+- if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK)) {
+- if (i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_PEC)) {
+- client->flags |= I2C_CLIENT_PEC;
+- }
+- }
+- }
+-
+ /*
+ * Check if the chip is write protected. If it is, we can not clear
+ * faults, and we should not try it. Also, in that case, writes into
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index 88653d1c06a45..7fed6e9c0ca14 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1382,7 +1382,7 @@ static int coresight_fixup_device_conns(struct coresight_device *csdev)
+ continue;
+ conn->child_dev =
+ coresight_find_csdev_by_fwnode(conn->child_fwnode);
+- if (conn->child_dev) {
++ if (conn->child_dev && conn->child_dev->has_conns_grp) {
+ ret = coresight_make_links(csdev, conn,
+ conn->child_dev);
+ if (ret)
+@@ -1574,6 +1574,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ int nr_refcnts = 1;
+ atomic_t *refcnts = NULL;
+ struct coresight_device *csdev;
++ bool registered = false;
+
+ csdev = kzalloc(sizeof(*csdev), GFP_KERNEL);
+ if (!csdev) {
+@@ -1594,7 +1595,8 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ refcnts = kcalloc(nr_refcnts, sizeof(*refcnts), GFP_KERNEL);
+ if (!refcnts) {
+ ret = -ENOMEM;
+- goto err_free_csdev;
++ kfree(csdev);
++ goto err_out;
+ }
+
+ csdev->refcnt = refcnts;
+@@ -1619,6 +1621,13 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ csdev->dev.fwnode = fwnode_handle_get(dev_fwnode(desc->dev));
+ dev_set_name(&csdev->dev, "%s", desc->name);
+
++ /*
++ * Make sure the device registration and the connection fixup
++ * are synchronised, so that we don't see uninitialised devices
++ * on the coresight bus while trying to resolve the connections.
++ */
++ mutex_lock(&coresight_mutex);
++
+ ret = device_register(&csdev->dev);
+ if (ret) {
+ put_device(&csdev->dev);
+@@ -1626,7 +1635,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ * All resources are free'd explicitly via
+ * coresight_device_release(), triggered from put_device().
+ */
+- goto err_out;
++ goto out_unlock;
+ }
+
+ if (csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+@@ -1641,11 +1650,11 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ * from put_device(), which is in turn called from
+ * function device_unregister().
+ */
+- goto err_out;
++ goto out_unlock;
+ }
+ }
+-
+- mutex_lock(&coresight_mutex);
++ /* Device is now registered */
++ registered = true;
+
+ ret = coresight_create_conns_sysfs_group(csdev);
+ if (!ret)
+@@ -1655,16 +1664,18 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ if (!ret && cti_assoc_ops && cti_assoc_ops->add)
+ cti_assoc_ops->add(csdev);
+
++out_unlock:
+ mutex_unlock(&coresight_mutex);
+- if (ret) {
++ /* Success */
++ if (!ret)
++ return csdev;
++
++ /* Unregister the device if needed */
++ if (registered) {
+ coresight_unregister(csdev);
+ return ERR_PTR(ret);
+ }
+
+- return csdev;
+-
+-err_free_csdev:
+- kfree(csdev);
+ err_out:
+ /* Cleanup the connection information */
+ coresight_release_platform_data(NULL, desc->pdata);
+diff --git a/drivers/i2c/busses/i2c-at91-master.c b/drivers/i2c/busses/i2c-at91-master.c
+index b0eae94909f44..c0c35785a0dc4 100644
+--- a/drivers/i2c/busses/i2c-at91-master.c
++++ b/drivers/i2c/busses/i2c-at91-master.c
+@@ -656,6 +656,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
+ unsigned int_addr_flag = 0;
+ struct i2c_msg *m_start = msg;
+ bool is_read;
++ u8 *dma_buf = NULL;
+
+ dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num);
+
+@@ -703,7 +704,17 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
+ dev->msg = m_start;
+ dev->recv_len_abort = false;
+
++ if (dev->use_dma) {
++ dma_buf = i2c_get_dma_safe_msg_buf(m_start, 1);
++ if (!dma_buf) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ dev->buf = dma_buf;
++ }
++
+ ret = at91_do_twi_transfer(dev);
++ i2c_put_dma_safe_msg_buf(dma_buf, m_start, !ret);
+
+ ret = (ret < 0) ? ret : num;
+ out:
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index 2ad166355ec9b..20a2f903b7f6c 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -359,14 +359,14 @@ static int npcm_i2c_get_SCL(struct i2c_adapter *_adap)
+ {
+ struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap);
+
+- return !!(I2CCTL3_SCL_LVL & ioread32(bus->reg + NPCM_I2CCTL3));
++ return !!(I2CCTL3_SCL_LVL & ioread8(bus->reg + NPCM_I2CCTL3));
+ }
+
+ static int npcm_i2c_get_SDA(struct i2c_adapter *_adap)
+ {
+ struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap);
+
+- return !!(I2CCTL3_SDA_LVL & ioread32(bus->reg + NPCM_I2CCTL3));
++ return !!(I2CCTL3_SDA_LVL & ioread8(bus->reg + NPCM_I2CCTL3));
+ }
+
+ static inline u16 npcm_i2c_get_index(struct npcm_i2c *bus)
+@@ -563,6 +563,15 @@ static inline void npcm_i2c_nack(struct npcm_i2c *bus)
+ iowrite8(val, bus->reg + NPCM_I2CCTL1);
+ }
+
++static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus)
++{
++ u8 val;
++
++ /* Clear NEGACK, STASTR and BER bits */
++ val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR;
++ iowrite8(val, bus->reg + NPCM_I2CST);
++}
++
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ static void npcm_i2c_slave_int_enable(struct npcm_i2c *bus, bool enable)
+ {
+@@ -642,8 +651,8 @@ static void npcm_i2c_reset(struct npcm_i2c *bus)
+ iowrite8(NPCM_I2CCST_BB, bus->reg + NPCM_I2CCST);
+ iowrite8(0xFF, bus->reg + NPCM_I2CST);
+
+- /* Clear EOB bit */
+- iowrite8(NPCM_I2CCST3_EO_BUSY, bus->reg + NPCM_I2CCST3);
++ /* Clear and disable EOB */
++ npcm_i2c_eob_int(bus, false);
+
+ /* Clear all fifo bits: */
+ iowrite8(NPCM_I2CFIF_CTS_CLR_FIFO, bus->reg + NPCM_I2CFIF_CTS);
+@@ -655,6 +664,9 @@ static void npcm_i2c_reset(struct npcm_i2c *bus)
+ }
+ #endif
+
++ /* clear status bits for spurious interrupts */
++ npcm_i2c_clear_master_status(bus);
++
+ bus->state = I2C_IDLE;
+ }
+
+@@ -815,15 +827,6 @@ static void npcm_i2c_read_fifo(struct npcm_i2c *bus, u8 bytes_in_fifo)
+ }
+ }
+
+-static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus)
+-{
+- u8 val;
+-
+- /* Clear NEGACK, STASTR and BER bits */
+- val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR;
+- iowrite8(val, bus->reg + NPCM_I2CST);
+-}
+-
+ static void npcm_i2c_master_abort(struct npcm_i2c *bus)
+ {
+ /* Only current master is allowed to issue a stop condition */
+@@ -1231,7 +1234,16 @@ static irqreturn_t npcm_i2c_int_slave_handler(struct npcm_i2c *bus)
+ ret = IRQ_HANDLED;
+ } /* SDAST */
+
+- return ret;
++ /*
++ * if irq is not one of the above, make sure EOB is disabled and all
++ * status bits are cleared.
++ */
++ if (ret == IRQ_NONE) {
++ npcm_i2c_eob_int(bus, false);
++ npcm_i2c_clear_master_status(bus);
++ }
++
++ return IRQ_HANDLED;
+ }
+
+ static int npcm_i2c_reg_slave(struct i2c_client *client)
+@@ -1467,6 +1479,9 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus)
+ npcm_i2c_eob_int(bus, false);
+ npcm_i2c_master_stop(bus);
+
++ /* Clear SDA Status bit (by reading dummy byte) */
++ npcm_i2c_rd_byte(bus);
++
+ /*
+ * The bus is released from stall only after the SW clears
+ * NEGACK bit. Then a Stop condition is sent.
+@@ -1474,6 +1489,8 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus)
+ npcm_i2c_clear_master_status(bus);
+ readx_poll_timeout_atomic(ioread8, bus->reg + NPCM_I2CCST, val,
+ !(val & NPCM_I2CCST_BUSY), 10, 200);
++ /* verify no status bits are still set after bus is released */
++ npcm_i2c_clear_master_status(bus);
+ }
+ bus->state = I2C_IDLE;
+
+@@ -1672,10 +1689,10 @@ static int npcm_i2c_recovery_tgclk(struct i2c_adapter *_adap)
+ int iter = 27;
+
+ if ((npcm_i2c_get_SDA(_adap) == 1) && (npcm_i2c_get_SCL(_adap) == 1)) {
+- dev_dbg(bus->dev, "bus%d recovery skipped, bus not stuck",
+- bus->num);
++ dev_dbg(bus->dev, "bus%d-0x%x recovery skipped, bus not stuck",
++ bus->num, bus->dest_addr);
+ npcm_i2c_reset(bus);
+- return status;
++ return 0;
+ }
+
+ npcm_i2c_int_enable(bus, false);
+@@ -1909,6 +1926,7 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode,
+ bus_freq_hz < I2C_FREQ_MIN_HZ || bus_freq_hz > I2C_FREQ_MAX_HZ)
+ return -EINVAL;
+
++ npcm_i2c_int_enable(bus, false);
+ npcm_i2c_disable(bus);
+
+ /* Configure FIFO mode : */
+@@ -1937,10 +1955,17 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode,
+ val = (val | NPCM_I2CCTL1_NMINTE) & ~NPCM_I2CCTL1_RWS;
+ iowrite8(val, bus->reg + NPCM_I2CCTL1);
+
+- npcm_i2c_int_enable(bus, true);
+-
+ npcm_i2c_reset(bus);
+
++ /* check HW is OK: SDA and SCL should be high at this point. */
++ if ((npcm_i2c_get_SDA(&bus->adap) == 0) || (npcm_i2c_get_SCL(&bus->adap) == 0)) {
++ dev_err(bus->dev, "I2C%d init fail: lines are low\n", bus->num);
++ dev_err(bus->dev, "SDA=%d SCL=%d\n", npcm_i2c_get_SDA(&bus->adap),
++ npcm_i2c_get_SCL(&bus->adap));
++ return -ENXIO;
++ }
++
++ npcm_i2c_int_enable(bus, true);
+ return 0;
+ }
+
+@@ -1988,10 +2013,14 @@ static irqreturn_t npcm_i2c_bus_irq(int irq, void *dev_id)
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ if (bus->slave) {
+ bus->master_or_slave = I2C_SLAVE;
+- return npcm_i2c_int_slave_handler(bus);
++ if (npcm_i2c_int_slave_handler(bus))
++ return IRQ_HANDLED;
+ }
+ #endif
+- return IRQ_NONE;
++ /* clear status bits for spurious interrupts */
++ npcm_i2c_clear_master_status(bus);
++
++ return IRQ_HANDLED;
+ }
+
+ static bool npcm_i2c_master_start_xmit(struct npcm_i2c *bus,
+@@ -2047,8 +2076,7 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ u16 nwrite, nread;
+ u8 *write_data, *read_data;
+ u8 slave_addr;
+- int timeout;
+- int ret = 0;
++ unsigned long timeout;
+ bool read_block = false;
+ bool read_PEC = false;
+ u8 bus_busy;
+@@ -2099,13 +2127,13 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ * 9: bits per transaction (including the ack/nack)
+ */
+ timeout_usec = (2 * 9 * USEC_PER_SEC / bus->bus_freq) * (2 + nread + nwrite);
+- timeout = max(msecs_to_jiffies(35), usecs_to_jiffies(timeout_usec));
++ timeout = max_t(unsigned long, bus->adap.timeout, usecs_to_jiffies(timeout_usec));
+ if (nwrite >= 32 * 1024 || nread >= 32 * 1024) {
+ dev_err(bus->dev, "i2c%d buffer too big\n", bus->num);
+ return -EINVAL;
+ }
+
+- time_left = jiffies + msecs_to_jiffies(DEFAULT_STALL_COUNT) + 1;
++ time_left = jiffies + timeout + 1;
+ do {
+ /*
+ * we must clear slave address immediately when the bus is not
+@@ -2138,12 +2166,12 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ bus->read_block_use = read_block;
+
+ reinit_completion(&bus->cmd_complete);
+- if (!npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread,
+- write_data, read_data, read_PEC,
+- read_block))
+- ret = -EBUSY;
+
+- if (ret != -EBUSY) {
++ npcm_i2c_int_enable(bus, true);
++
++ if (npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread,
++ write_data, read_data, read_PEC,
++ read_block)) {
+ time_left = wait_for_completion_timeout(&bus->cmd_complete,
+ timeout);
+
+@@ -2157,26 +2185,31 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ }
+ }
+ }
+- ret = bus->cmd_err;
+
+ /* if there was BER, check if need to recover the bus: */
+ if (bus->cmd_err == -EAGAIN)
+- ret = i2c_recover_bus(adap);
++ bus->cmd_err = i2c_recover_bus(adap);
+
+ /*
+ * After any type of error, check if LAST bit is still set,
+ * due to a HW issue.
+ * It cannot be cleared without resetting the module.
+ */
+- if (bus->cmd_err &&
+- (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
++ else if (bus->cmd_err &&
++ (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
+ npcm_i2c_reset(bus);
+
++ /* after any xfer, successful or not, stall and EOB must be disabled */
++ npcm_i2c_stall_after_start(bus, false);
++ npcm_i2c_eob_int(bus, false);
++
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ /* reenable slave if it was enabled */
+ if (bus->slave)
+ iowrite8((bus->slave->addr & 0x7F) | NPCM_I2CADDR_SAEN,
+ bus->reg + NPCM_I2CADDR1);
++#else
++ npcm_i2c_int_enable(bus, false);
+ #endif
+ return bus->cmd_err;
+ }
+@@ -2269,7 +2302,7 @@ static int npcm_i2c_probe_bus(struct platform_device *pdev)
+ adap = &bus->adap;
+ adap->owner = THIS_MODULE;
+ adap->retries = 3;
+- adap->timeout = HZ;
++ adap->timeout = msecs_to_jiffies(35);
+ adap->algo = &npcm_i2c_algo;
+ adap->quirks = &npcm_i2c_quirks;
+ adap->algo_data = bus;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index f71c730f9838d..995ecf5e4438c 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -1062,8 +1062,10 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ pm_runtime_enable(dev);
+ pm_runtime_get_sync(dev);
+ ret = rcar_i2c_clock_calculate(priv);
+- if (ret < 0)
+- goto out_pm_put;
++ if (ret < 0) {
++ pm_runtime_put(dev);
++ goto out_pm_disable;
++ }
+
+ rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+
+@@ -1092,19 +1094,19 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+
+ ret = platform_get_irq(pdev, 0);
+ if (ret < 0)
+- goto out_pm_disable;
++ goto out_pm_put;
+ priv->irq = ret;
+ ret = devm_request_irq(dev, priv->irq, irqhandler, irqflags, dev_name(dev), priv);
+ if (ret < 0) {
+ dev_err(dev, "cannot get irq %d\n", priv->irq);
+- goto out_pm_disable;
++ goto out_pm_put;
+ }
+
+ platform_set_drvdata(pdev, priv);
+
+ ret = i2c_add_numbered_adapter(adap);
+ if (ret < 0)
+- goto out_pm_disable;
++ goto out_pm_put;
+
+ if (priv->flags & ID_P_HOST_NOTIFY) {
+ priv->host_notify_client = i2c_new_slave_host_notify_device(adap);
+@@ -1121,7 +1123,8 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ out_del_device:
+ i2c_del_adapter(&priv->adap);
+ out_pm_put:
+- pm_runtime_put(dev);
++ if (priv->flags & ID_P_PM_BLOCKED)
++ pm_runtime_put(dev);
+ out_pm_disable:
+ pm_runtime_disable(dev);
+ return ret;
+diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
+index 1783a6ea5427b..3ebdd42fec362 100644
+--- a/drivers/infiniband/hw/hfi1/file_ops.c
++++ b/drivers/infiniband/hw/hfi1/file_ops.c
+@@ -265,6 +265,8 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
+ unsigned long dim = from->nr_segs;
+ int idx;
+
++ if (!HFI1_CAP_IS_KSET(SDMA))
++ return -EINVAL;
+ idx = srcu_read_lock(&fd->pq_srcu);
+ pq = srcu_dereference(fd->pq, &fd->pq_srcu);
+ if (!cq || !pq) {
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 4436ed41547c4..436372b314312 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -489,7 +489,7 @@ void set_link_ipg(struct hfi1_pportdata *ppd)
+ u16 shift, mult;
+ u64 src;
+ u32 current_egress_rate; /* Mbits /sec */
+- u32 max_pkt_time;
++ u64 max_pkt_time;
+ /*
+ * max_pkt_time is the maximum packet egress time in units
+ * of the fabric clock period 1/(805 MHz).
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index f07d328689d3d..a95b654f52540 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1288,11 +1288,13 @@ void sdma_clean(struct hfi1_devdata *dd, size_t num_engines)
+ kvfree(sde->tx_ring);
+ sde->tx_ring = NULL;
+ }
+- spin_lock_irq(&dd->sde_map_lock);
+- sdma_map_free(rcu_access_pointer(dd->sdma_map));
+- RCU_INIT_POINTER(dd->sdma_map, NULL);
+- spin_unlock_irq(&dd->sde_map_lock);
+- synchronize_rcu();
++ if (rcu_access_pointer(dd->sdma_map)) {
++ spin_lock_irq(&dd->sde_map_lock);
++ sdma_map_free(rcu_access_pointer(dd->sdma_map));
++ RCU_INIT_POINTER(dd->sdma_map, NULL);
++ spin_unlock_irq(&dd->sde_map_lock);
++ synchronize_rcu();
++ }
+ kfree(dd->per_sdma);
+ dd->per_sdma = NULL;
+
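Guarding the sdma_map teardown on the pointer makes cleanup idempotent: a second call (for instance after a partially failed init) no longer frees twice or runs synchronize_rcu() needlessly. A userspace sketch of the guard alone, dropping the RCU and locking details:

#include <stdio.h>
#include <stdlib.h>

static int *sdma_map;          /* stands in for dd->sdma_map */

static void sdma_clean(void)
{
    if (sdma_map) {            /* guard makes cleanup re-entrant */
        free(sdma_map);
        sdma_map = NULL;
    }
}

int main(void)
{
    sdma_map = malloc(sizeof(*sdma_map));
    sdma_clean();
    sdma_clean();              /* safe second call, no double free */
    puts("ok");
    return 0;
}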
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 1e0bae1369974..b739f44197aa7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -535,6 +535,11 @@ struct hns_roce_cmd_context {
+ u16 busy;
+ };
+
++enum hns_roce_cmdq_state {
++ HNS_ROCE_CMDQ_STATE_NORMAL,
++ HNS_ROCE_CMDQ_STATE_FATAL_ERR,
++};
++
+ struct hns_roce_cmdq {
+ struct dma_pool *pool;
+ struct semaphore poll_sem;
+@@ -554,6 +559,7 @@ struct hns_roce_cmdq {
+ * close device, switch into poll mode(non event mode)
+ */
+ u8 use_events;
++ enum hns_roce_cmdq_state state;
+ };
+
+ struct hns_roce_cmd_mailbox {
+@@ -715,7 +721,6 @@ struct hns_roce_caps {
+ u32 num_pi_qps;
+ u32 reserved_qps;
+ int num_qpc_timer;
+- int num_cqc_timer;
+ u32 num_srqs;
+ u32 max_wqes;
+ u32 max_srq_wrs;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index b33e948fd060f..b0ea885708281 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1265,6 +1265,16 @@ static int hns_roce_cmq_csq_done(struct hns_roce_dev *hr_dev)
+ return tail == priv->cmq.csq.head;
+ }
+
++static void update_cmdq_status(struct hns_roce_dev *hr_dev)
++{
++ struct hns_roce_v2_priv *priv = hr_dev->priv;
++ struct hnae3_handle *handle = priv->handle;
++
++ if (handle->rinfo.reset_state == HNS_ROCE_STATE_RST_INIT ||
++ handle->rinfo.instance_state == HNS_ROCE_STATE_INIT)
++ hr_dev->cmd.state = HNS_ROCE_CMDQ_STATE_FATAL_ERR;
++}
++
+ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ struct hns_roce_cmq_desc *desc, int num)
+ {
+@@ -1318,6 +1328,8 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ csq->head, tail);
+ csq->head = tail;
+
++ update_cmdq_status(hr_dev);
++
+ ret = -EAGAIN;
+ }
+
+@@ -1332,6 +1344,9 @@ static int hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ bool busy;
+ int ret;
+
++ if (hr_dev->cmd.state == HNS_ROCE_CMDQ_STATE_FATAL_ERR)
++ return -EIO;
++
+ if (!v2_chk_mbox_is_avail(hr_dev, &busy))
+ return busy ? -EBUSY : 0;
+
+@@ -1528,6 +1543,9 @@ static void hns_roce_function_clear(struct hns_roce_dev *hr_dev)
+ {
+ int i;
+
++ if (hr_dev->cmd.state == HNS_ROCE_CMDQ_STATE_FATAL_ERR)
++ return;
++
+ for (i = hr_dev->func_num - 1; i >= 0; i--) {
+ __hns_roce_function_clear(hr_dev, i);
+ if (i != 0)
+@@ -1947,7 +1965,7 @@ static void set_default_caps(struct hns_roce_dev *hr_dev)
+ caps->num_mtpts = HNS_ROCE_V2_MAX_MTPT_NUM;
+ caps->num_pds = HNS_ROCE_V2_MAX_PD_NUM;
+ caps->num_qpc_timer = HNS_ROCE_V2_MAX_QPC_TIMER_NUM;
+- caps->num_cqc_timer = HNS_ROCE_V2_MAX_CQC_TIMER_NUM;
++ caps->cqc_timer_bt_num = HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM;
+
+ caps->max_qp_init_rdma = HNS_ROCE_V2_MAX_QP_INIT_RDMA;
+ caps->max_qp_dest_rdma = HNS_ROCE_V2_MAX_QP_DEST_RDMA;
+@@ -2243,7 +2261,6 @@ static int hns_roce_query_pf_caps(struct hns_roce_dev *hr_dev)
+ caps->max_rq_sg = roundup_pow_of_two(caps->max_rq_sg);
+ caps->max_extend_sg = le32_to_cpu(resp_a->max_extend_sg);
+ caps->num_qpc_timer = le16_to_cpu(resp_a->num_qpc_timer);
+- caps->num_cqc_timer = le16_to_cpu(resp_a->num_cqc_timer);
+ caps->max_srq_sges = le16_to_cpu(resp_a->max_srq_sges);
+ caps->max_srq_sges = roundup_pow_of_two(caps->max_srq_sges);
+ caps->num_aeq_vectors = resp_a->num_aeq_vectors;
+@@ -2812,6 +2829,9 @@ static int v2_wait_mbox_complete(struct hns_roce_dev *hr_dev, u32 timeout,
+ mb_st = (struct hns_roce_mbox_status *)desc.data;
+ end = msecs_to_jiffies(timeout) + jiffies;
+ while (v2_chk_mbox_is_avail(hr_dev, &busy)) {
++ if (hr_dev->cmd.state == HNS_ROCE_CMDQ_STATE_FATAL_ERR)
++ return -EIO;
++
+ status = 0;
+ hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_MB_ST,
+ true);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 12be85f0986ea..106bca5cf18a9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -41,7 +41,7 @@
+ #define HNS_ROCE_V2_MAX_SRQ_WR 0x8000
+ #define HNS_ROCE_V2_MAX_SRQ_SGE 64
+ #define HNS_ROCE_V2_MAX_CQ_NUM 0x100000
+-#define HNS_ROCE_V2_MAX_CQC_TIMER_NUM 0x100
++#define HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM 0x100
+ #define HNS_ROCE_V2_MAX_SRQ_NUM 0x100000
+ #define HNS_ROCE_V2_MAX_CQE_NUM 0x400000
+ #define HNS_ROCE_V2_MAX_RQ_SGE_NUM 64
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index f73ba619f3756..c8af4ebd7cbd3 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -737,7 +737,7 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ ret = hns_roce_init_hem_table(hr_dev, &hr_dev->cqc_timer_table,
+ HEM_TYPE_CQC_TIMER,
+ hr_dev->caps.cqc_timer_entry_sz,
+- hr_dev->caps.num_cqc_timer, 1);
++ hr_dev->caps.cqc_timer_bt_num, 1);
+ if (ret) {
+ dev_err(dev,
+ "Failed to init CQC timer memory, aborting.\n");
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 8ef112f883a77..3acab569fbb94 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -2775,7 +2775,7 @@ void rvt_qp_iter(struct rvt_dev_info *rdi,
+ EXPORT_SYMBOL(rvt_qp_iter);
+
+ /*
+- * This should be called with s_lock held.
++ * This should be called with s_lock and r_lock held.
+ */
+ void rvt_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe,
+ enum ib_wc_status status)
+@@ -3134,7 +3134,9 @@ send_comp:
+ rvp->n_loop_pkts++;
+ flush_send:
+ sqp->s_rnr_retry = sqp->s_rnr_retry_cnt;
++ spin_lock(&sqp->r_lock);
+ rvt_send_complete(sqp, wqe, send_status);
++ spin_unlock(&sqp->r_lock);
+ if (local_ops) {
+ atomic_dec(&sqp->local_ops_pending);
+ local_ops = 0;
+@@ -3188,7 +3190,9 @@ serr:
+ spin_unlock_irqrestore(&qp->r_lock, flags);
+ serr_no_r_lock:
+ spin_lock_irqsave(&sqp->s_lock, flags);
++ spin_lock(&sqp->r_lock);
+ rvt_send_complete(sqp, wqe, send_status);
++ spin_unlock(&sqp->r_lock);
+ if (sqp->ibqp.qp_type == IB_QPT_RC) {
+ int lastwqe;
+
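The rdmavt hunk above tightens rvt_send_complete()'s locking contract: the receive lock is now taken nested inside the send lock around every completion, so the comment and the call sites agree. A toy pthread sketch of the fixed nesting order (lock names mirror the driver's, everything else is illustrative; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t s_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r_lock = PTHREAD_MUTEX_INITIALIZER;

static void send_complete(void)
{
    /* contract: caller holds both s_lock and r_lock */
    puts("completion delivered");
}

int main(void)
{
    pthread_mutex_lock(&s_lock);
    pthread_mutex_lock(&r_lock);   /* nested, always the same order */
    send_complete();
    pthread_mutex_unlock(&r_lock);
    pthread_mutex_unlock(&s_lock);
    return 0;
}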
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 204e31bbd61f7..5039211b67c9b 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -661,7 +661,7 @@ next_wqe:
+ opcode = next_opcode(qp, wqe, wqe->wr.opcode);
+ if (unlikely(opcode < 0)) {
+ wqe->status = IB_WC_LOC_QP_OP_ERR;
+- goto exit;
++ goto err;
+ }
+
+ mask = rxe_opcode[opcode].mask;
+diff --git a/drivers/input/keyboard/gpio_keys.c b/drivers/input/keyboard/gpio_keys.c
+index d75a8b179a8ae..a5dc4ab87fa1f 100644
+--- a/drivers/input/keyboard/gpio_keys.c
++++ b/drivers/input/keyboard/gpio_keys.c
+@@ -131,7 +131,7 @@ static void gpio_keys_quiesce_key(void *data)
+
+ if (!bdata->gpiod)
+ hrtimer_cancel(&bdata->release_timer);
+- if (bdata->debounce_use_hrtimer)
++ else if (bdata->debounce_use_hrtimer)
+ hrtimer_cancel(&bdata->debounce_timer);
+ else
+ cancel_delayed_work_sync(&bdata->work);
+diff --git a/drivers/input/misc/sparcspkr.c b/drivers/input/misc/sparcspkr.c
+index fe43e5557ed72..cdcb7737c46aa 100644
+--- a/drivers/input/misc/sparcspkr.c
++++ b/drivers/input/misc/sparcspkr.c
+@@ -205,6 +205,7 @@ static int bbc_beep_probe(struct platform_device *op)
+
+ info = &state->u.bbc;
+ info->clock_freq = of_getintprop_default(dp, "clock-frequency", 0);
++ of_node_put(dp);
+ if (!info->clock_freq)
+ goto out_free;
+
+diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
+index 72e0b767e1ba4..c175d44c52f37 100644
+--- a/drivers/input/touchscreen/stmfts.c
++++ b/drivers/input/touchscreen/stmfts.c
+@@ -337,13 +337,15 @@ static int stmfts_input_open(struct input_dev *dev)
+ struct stmfts_data *sdata = input_get_drvdata(dev);
+ int err;
+
+- err = pm_runtime_get_sync(&sdata->client->dev);
+- if (err < 0)
+- goto out;
++ err = pm_runtime_resume_and_get(&sdata->client->dev);
++ if (err)
++ return err;
+
+ err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON);
+- if (err)
+- goto out;
++ if (err) {
++ pm_runtime_put_sync(&sdata->client->dev);
++ return err;
++ }
+
+ mutex_lock(&sdata->mutex);
+ sdata->running = true;
+@@ -366,9 +368,7 @@ static int stmfts_input_open(struct input_dev *dev)
+ "failed to enable touchkey\n");
+ }
+
+-out:
+- pm_runtime_put_noidle(&sdata->client->dev);
+- return err;
++ return 0;
+ }
+
+ static void stmfts_input_close(struct input_dev *dev)
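The stmfts rework above switches to pm_runtime_resume_and_get(), which drops its own reference when the resume fails; only a successful get needs a matching put on later error paths. A userspace sketch of that balance, with stand-in helpers:

#include <stdio.h>

static int refcount;

static int resume_and_get(int fail)
{
    refcount++;
    if (fail) {
        refcount--;            /* helper undoes its own get on error */
        return -1;
    }
    return 0;
}

static void put(void) { refcount--; }

static int open_device(int resume_fails, int io_fails)
{
    if (resume_and_get(resume_fails))
        return -1;             /* nothing to undo here */
    if (io_fails) {
        put();                 /* undo only the successful get */
        return -1;
    }
    return 0;                  /* stays powered while open */
}

int main(void)
{
    open_device(1, 0);
    open_device(0, 1);
    printf("refcount after failures: %d\n", refcount);   /* 0 */
    return 0;
}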
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 7bfe37e52e210..6418a0445d6fb 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -84,7 +84,7 @@
+ #define ACPI_DEVFLAG_LINT1 0x80
+ #define ACPI_DEVFLAG_ATSDIS 0x10000000
+
+-#define LOOP_TIMEOUT 100000
++#define LOOP_TIMEOUT 2000000
+ /*
+ * ACPI table definitions
+ *
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index a18b549951bb8..74c4ae85e41e8 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -1838,17 +1838,10 @@ void amd_iommu_domain_update(struct protection_domain *domain)
+ amd_iommu_domain_flush_complete(domain);
+ }
+
+-static void __init amd_iommu_init_dma_ops(void)
+-{
+- swiotlb = (iommu_default_passthrough() || sme_me_mask) ? 1 : 0;
+-}
+-
+ int __init amd_iommu_init_api(void)
+ {
+ int err;
+
+- amd_iommu_init_dma_ops();
+-
+ err = bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
+ if (err)
+ return err;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+index f9e9b4fb78bd5..b69161f2e0c04 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+@@ -6,6 +6,7 @@
+ #include <linux/mm.h>
+ #include <linux/mmu_context.h>
+ #include <linux/mmu_notifier.h>
++#include <linux/sched/mm.h>
+ #include <linux/slab.h>
+
+ #include "arm-smmu-v3.h"
+@@ -96,9 +97,14 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
+ struct arm_smmu_ctx_desc *cd;
+ struct arm_smmu_ctx_desc *ret = NULL;
+
++ /* Don't free the mm until we release the ASID */
++ mmgrab(mm);
++
+ asid = arm64_mm_context_get(mm);
+- if (!asid)
+- return ERR_PTR(-ESRCH);
++ if (!asid) {
++ err = -ESRCH;
++ goto out_drop_mm;
++ }
+
+ cd = kzalloc(sizeof(*cd), GFP_KERNEL);
+ if (!cd) {
+@@ -165,6 +171,8 @@ out_free_cd:
+ kfree(cd);
+ out_put_context:
+ arm64_mm_context_put(mm);
++out_drop_mm:
++ mmdrop(mm);
+ return err < 0 ? ERR_PTR(err) : ret;
+ }
+
+@@ -173,6 +181,7 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd)
+ if (arm_smmu_free_asid(cd)) {
+ /* Unpin ASID */
+ arm64_mm_context_put(cd->mm);
++ mmdrop(cd->mm);
+ kfree(cd);
+ }
+ }
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index d85d54f2b5496..977a6f7245734 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -772,6 +772,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+ unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
+ struct page **pages;
+ dma_addr_t iova;
++ ssize_t ret;
+
+ if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
+ iommu_deferred_attach(dev, domain))
+@@ -809,8 +810,8 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+ arch_dma_prep_coherent(sg_page(sg), sg->length);
+ }
+
+- if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
+- < size)
++ ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
++ if (ret < 0 || ret < size)
+ goto out_free_sg;
+
+ sgt->sgl->dma_address = iova;
+@@ -1207,7 +1208,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ * implementation - it knows better than we do.
+ */
+ ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
+- if (ret < iova_len)
++ if (ret < 0 || ret < iova_len)
+ goto out_free_iova;
+
+ return __finalise_sg(dev, sg, nents, iova);
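Both dma-iommu hunks capture the ssize_t return before comparing it against an unsigned length; compared directly, a negative errno converts to a huge size_t and the `< size` test silently passes. A short demonstration of the pitfall:

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    ssize_t ret = -12;         /* e.g. -ENOMEM from a mapping call */
    size_t size = 4096;

    if ((size_t)ret < size)    /* buggy shape: never true for errors */
        puts("error caught (buggy check)");

    if (ret < 0 || (size_t)ret < size)   /* fixed shape */
        puts("error caught (fixed check)");
    return 0;
}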
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index ab22733003464..e3f15e0cae34d 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -5764,7 +5764,7 @@ static void quirk_igfx_skip_te_disable(struct pci_dev *dev)
+ ver = (dev->device >> 8) & 0xff;
+ if (ver != 0x45 && ver != 0x46 && ver != 0x4c &&
+ ver != 0x4e && ver != 0x8a && ver != 0x98 &&
+- ver != 0x9a)
++ ver != 0x9a && ver != 0xa7)
+ return;
+
+ if (risky_device(dev))
+diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
+index 3a38352b603f3..c9eaf27cbb743 100644
+--- a/drivers/iommu/msm_iommu.c
++++ b/drivers/iommu/msm_iommu.c
+@@ -615,16 +615,19 @@ static void insert_iommu_master(struct device *dev,
+ static int qcom_iommu_of_xlate(struct device *dev,
+ struct of_phandle_args *spec)
+ {
+- struct msm_iommu_dev *iommu;
++ struct msm_iommu_dev *iommu = NULL, *iter;
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&msm_iommu_lock, flags);
+- list_for_each_entry(iommu, &qcom_iommu_devices, dev_node)
+- if (iommu->dev->of_node == spec->np)
++ list_for_each_entry(iter, &qcom_iommu_devices, dev_node) {
++ if (iter->dev->of_node == spec->np) {
++ iommu = iter;
+ break;
++ }
++ }
+
+- if (!iommu || iommu->dev->of_node != spec->np) {
++ if (!iommu) {
+ ret = -ENODEV;
+ goto fail;
+ }
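The msm_iommu fix is the classic list-cursor pattern: after list_for_each_entry() completes without a break, the cursor points at a head-embedded non-entry, so a separate result pointer must record the match. The same logic on a plain userspace list (illustrative names):

#include <stdio.h>
#include <stddef.h>

struct node { int id; struct node *next; };

static struct node *find_node(struct node *head, int id)
{
    struct node *found = NULL, *iter;

    for (iter = head; iter; iter = iter->next) {
        if (iter->id == id) {
            found = iter;      /* record the match explicitly */
            break;
        }
    }
    return found;              /* NULL means "not found", unambiguously */
}

int main(void)
{
    struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };

    printf("%p\n", (void *)find_node(&a, 4));   /* prints a null pointer */
    return 0;
}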
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 5971a11686662..2ae46fa6b3dee 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -451,7 +451,7 @@ static void mtk_iommu_domain_free(struct iommu_domain *domain)
+ static int mtk_iommu_attach_device(struct iommu_domain *domain,
+ struct device *dev)
+ {
+- struct mtk_iommu_data *data = dev_iommu_priv_get(dev);
++ struct mtk_iommu_data *data = dev_iommu_priv_get(dev), *frstdata;
+ struct mtk_iommu_domain *dom = to_mtk_domain(domain);
+ struct device *m4udev = data->dev;
+ int ret, domid;
+@@ -461,20 +461,24 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain,
+ return domid;
+
+ if (!dom->data) {
+- if (mtk_iommu_domain_finalise(dom, data, domid))
++ /* Data is in the frstdata in sharing pgtable case. */
++ frstdata = mtk_iommu_get_m4u_data();
++
++ if (mtk_iommu_domain_finalise(dom, frstdata, domid))
+ return -ENODEV;
+ dom->data = data;
+ }
+
++ mutex_lock(&data->mutex);
+ if (!data->m4u_dom) { /* Initialize the M4U HW */
+ ret = pm_runtime_resume_and_get(m4udev);
+ if (ret < 0)
+- return ret;
++ goto err_unlock;
+
+ ret = mtk_iommu_hw_init(data);
+ if (ret) {
+ pm_runtime_put(m4udev);
+- return ret;
++ goto err_unlock;
+ }
+ data->m4u_dom = dom;
+ writel(dom->cfg.arm_v7s_cfg.ttbr & MMU_PT_ADDR_MASK,
+@@ -482,9 +486,14 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain,
+
+ pm_runtime_put(m4udev);
+ }
++ mutex_unlock(&data->mutex);
+
+ mtk_iommu_config(data, dev, true, domid);
+ return 0;
++
++err_unlock:
++ mutex_unlock(&data->mutex);
++ return ret;
+ }
+
+ static void mtk_iommu_detach_device(struct iommu_domain *domain,
+@@ -577,6 +586,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ * All the ports in each a device should be in the same larbs.
+ */
+ larbid = MTK_M4U_TO_LARB(fwspec->ids[0]);
++ if (larbid >= MTK_LARB_NR_MAX)
++ return ERR_PTR(-EINVAL);
++
+ for (i = 1; i < fwspec->num_ids; i++) {
+ larbidx = MTK_M4U_TO_LARB(fwspec->ids[i]);
+ if (larbid != larbidx) {
+@@ -586,6 +598,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ }
+ }
+ larbdev = data->larb_imu[larbid].dev;
++ if (!larbdev)
++ return ERR_PTR(-EINVAL);
++
+ link = device_link_add(dev, larbdev,
+ DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+ if (!link)
+@@ -624,6 +639,7 @@ static struct iommu_group *mtk_iommu_device_group(struct device *dev)
+ if (domid < 0)
+ return ERR_PTR(domid);
+
++ mutex_lock(&data->mutex);
+ group = data->m4u_group[domid];
+ if (!group) {
+ group = iommu_group_alloc();
+@@ -632,6 +648,7 @@ static struct iommu_group *mtk_iommu_device_group(struct device *dev)
+ } else {
+ iommu_group_ref_get(group);
+ }
++ mutex_unlock(&data->mutex);
+ return group;
+ }
+
+@@ -906,6 +923,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ }
+
+ platform_set_drvdata(pdev, data);
++ mutex_init(&data->mutex);
+
+ ret = iommu_device_sysfs_add(&data->iommu, dev, NULL,
+ "mtk-iommu.%pa", &ioaddr);
+@@ -951,10 +969,8 @@ static int mtk_iommu_remove(struct platform_device *pdev)
+ iommu_device_sysfs_remove(&data->iommu);
+ iommu_device_unregister(&data->iommu);
+
+- if (iommu_present(&platform_bus_type))
+- bus_set_iommu(&platform_bus_type, NULL);
++ list_del(&data->list);
+
+- clk_disable_unprepare(data->bclk);
+ device_link_remove(data->smicomm_dev, &pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ devm_free_irq(&pdev->dev, data->irq, data);
+diff --git a/drivers/iommu/mtk_iommu.h b/drivers/iommu/mtk_iommu.h
+index f81fa8862ed04..f413546ac6e57 100644
+--- a/drivers/iommu/mtk_iommu.h
++++ b/drivers/iommu/mtk_iommu.h
+@@ -80,6 +80,8 @@ struct mtk_iommu_data {
+
+ struct dma_iommu_mapping *mapping; /* For mtk_iommu_v1.c */
+
++ struct mutex mutex; /* Protect m4u_group/m4u_dom above */
++
+ struct list_head list;
+ struct mtk_smi_larb_iommu larb_imu[MTK_LARB_NR_MAX];
+ };
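Both mtk_iommu variants below/above now validate the firmware-supplied LARB index before it is used to index larb_imu[], and reject entries whose device was never populated. A minimal sketch of the bounds check, with an illustrative array size:

#include <stdio.h>

#define LARB_NR_MAX 32

static void *larb_devs[LARB_NR_MAX];

static void *get_larb(unsigned int larbid)
{
    if (larbid >= LARB_NR_MAX)
        return NULL;           /* reject out-of-range firmware IDs */
    return larb_devs[larbid];  /* may still be NULL if unpopulated */
}

int main(void)
{
    printf("%p %p\n", get_larb(3), get_larb(99));
    return 0;
}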
+diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
+index bc7ee90b9373d..254530ad6c488 100644
+--- a/drivers/iommu/mtk_iommu_v1.c
++++ b/drivers/iommu/mtk_iommu_v1.c
+@@ -80,6 +80,7 @@
+ /* MTK generation one iommu HW only support 4K size mapping */
+ #define MT2701_IOMMU_PAGE_SHIFT 12
+ #define MT2701_IOMMU_PAGE_SIZE (1UL << MT2701_IOMMU_PAGE_SHIFT)
++#define MT2701_LARB_NR_MAX 3
+
+ /*
+ * MTK m4u support 4GB iova address space, and only support 4K page
+@@ -457,6 +458,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+
+ /* Link the consumer device with the smi-larb device(supplier) */
+ larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
++ if (larbid >= MT2701_LARB_NR_MAX)
++ return ERR_PTR(-EINVAL);
++
+ for (idx = 1; idx < fwspec->num_ids; idx++) {
+ larbidx = mt2701_m4u_to_larb(fwspec->ids[idx]);
+ if (larbid != larbidx) {
+@@ -467,6 +471,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ }
+
+ larbdev = data->larb_imu[larbid].dev;
++ if (!larbdev)
++ return ERR_PTR(-EINVAL);
++
+ link = device_link_add(dev, larbdev,
+ DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+ if (!link)
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index 5b8d571c041dc..1120084cba09d 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -308,7 +308,16 @@ static inline int armada_370_xp_msi_init(struct device_node *node,
+
+ static void armada_xp_mpic_perf_init(void)
+ {
+- unsigned long cpuid = cpu_logical_map(smp_processor_id());
++ unsigned long cpuid;
++
++ /*
++ * This Performance Counter Overflow interrupt is specific for
++ * Armada 370 and XP. It is not available on Armada 375, 38x and 39x.
++ */
++ if (!of_machine_is_compatible("marvell,armada-370-xp"))
++ return;
++
++ cpuid = cpu_logical_map(smp_processor_id());
+
+ /* Enable Performance Counter Overflow interrupts */
+ writel(ARMADA_370_XP_INT_CAUSE_PERF(cpuid),
+diff --git a/drivers/irqchip/irq-aspeed-i2c-ic.c b/drivers/irqchip/irq-aspeed-i2c-ic.c
+index a47db16ff9603..9c9fc3e2967ed 100644
+--- a/drivers/irqchip/irq-aspeed-i2c-ic.c
++++ b/drivers/irqchip/irq-aspeed-i2c-ic.c
+@@ -77,8 +77,8 @@ static int __init aspeed_i2c_ic_of_init(struct device_node *node,
+ }
+
+ i2c_ic->parent_irq = irq_of_parse_and_map(node, 0);
+- if (i2c_ic->parent_irq < 0) {
+- ret = i2c_ic->parent_irq;
++ if (!i2c_ic->parent_irq) {
++ ret = -EINVAL;
+ goto err_iounmap;
+ }
+
+diff --git a/drivers/irqchip/irq-aspeed-scu-ic.c b/drivers/irqchip/irq-aspeed-scu-ic.c
+index 18b77c3e6db4b..279e92cf0b16b 100644
+--- a/drivers/irqchip/irq-aspeed-scu-ic.c
++++ b/drivers/irqchip/irq-aspeed-scu-ic.c
+@@ -157,8 +157,8 @@ static int aspeed_scu_ic_of_init_common(struct aspeed_scu_ic *scu_ic,
+ }
+
+ irq = irq_of_parse_and_map(node, 0);
+- if (irq < 0) {
+- rc = irq;
++ if (!irq) {
++ rc = -EINVAL;
+ goto err;
+ }
+
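The two aspeed fixes above correct a failure test that could never fire: irq_of_parse_and_map() returns an unsigned int and reports failure as 0, so a `< 0` comparison is dead code. A small userspace illustration, with a stand-in for the mapping call:

#include <stdio.h>

static unsigned int fake_parse_irq(int present)
{
    return present ? 42u : 0u; /* 0 == no mapping, like irq_of_parse_and_map() */
}

int main(void)
{
    unsigned int irq = fake_parse_irq(0);

    if (!irq) {                /* correct: test for 0 ... */
        fprintf(stderr, "no IRQ mapping\n");
        return 1;
    }
    /* ... an (irq < 0) test would be dead code: irq is unsigned */
    printf("irq %u\n", irq);
    return 0;
}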
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 907af63d1bba9..09abc8a4759e5 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -556,7 +556,8 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
+
+ static void gic_eoi_irq(struct irq_data *d)
+ {
+- gic_write_eoir(gic_irq(d));
++ write_gicreg(gic_irq(d), ICC_EOIR1_EL1);
++ isb();
+ }
+
+ static void gic_eoimode1_eoi_irq(struct irq_data *d)
+@@ -640,82 +641,101 @@ static void gic_deactivate_unhandled(u32 irqnr)
+ if (irqnr < 8192)
+ gic_write_dir(irqnr);
+ } else {
+- gic_write_eoir(irqnr);
++ write_gicreg(irqnr, ICC_EOIR1_EL1);
++ isb();
+ }
+ }
+
+-static inline void gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
++/*
++ * Follow a read of the IAR with any HW maintenance that needs to happen prior
++ * to invoking the relevant IRQ handler. We must do two things:
++ *
++ * (1) Ensure instruction ordering between a read of IAR and subsequent
++ * instructions in the IRQ handler using an ISB.
++ *
++ * It is possible for the IAR to report an IRQ which was signalled *after*
++ * the CPU took an IRQ exception as multiple interrupts can race to be
++ * recognized by the GIC, earlier interrupts could be withdrawn, and/or
++ * later interrupts could be prioritized by the GIC.
++ *
++ * For devices which are tightly coupled to the CPU, such as PMUs, a
++ * context synchronization event is necessary to ensure that system
++ * register state is not stale, as these may have been indirectly written
++ * *after* exception entry.
++ *
++ * (2) Deactivate the interrupt when EOI mode 1 is in use.
++ */
++static inline void gic_complete_ack(u32 irqnr)
+ {
+- bool irqs_enabled = interrupts_enabled(regs);
+- int err;
+-
+- if (irqs_enabled)
+- nmi_enter();
+-
+ if (static_branch_likely(&supports_deactivate_key))
+- gic_write_eoir(irqnr);
+- /*
+- * Leave the PSR.I bit set to prevent other NMIs to be
+- * received while handling this one.
+- * PSR.I will be restored when we ERET to the
+- * interrupted context.
+- */
+- err = generic_handle_domain_nmi(gic_data.domain, irqnr);
+- if (err)
+- gic_deactivate_unhandled(irqnr);
++ write_gicreg(irqnr, ICC_EOIR1_EL1);
+
+- if (irqs_enabled)
+- nmi_exit();
++ isb();
+ }
+
+-static u32 do_read_iar(struct pt_regs *regs)
++static bool gic_rpr_is_nmi_prio(void)
+ {
+- u32 iar;
++ if (!gic_supports_nmi())
++ return false;
+
+- if (gic_supports_nmi() && unlikely(!interrupts_enabled(regs))) {
+- u64 pmr;
++ return unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI));
++}
+
+- /*
+- * We were in a context with IRQs disabled. However, the
+- * entry code has set PMR to a value that allows any
+- * interrupt to be acknowledged, and not just NMIs. This can
+- * lead to surprising effects if the NMI has been retired in
+- * the meantime, and that there is an IRQ pending. The IRQ
+- * would then be taken in NMI context, something that nobody
+- * wants to debug twice.
+- *
+- * Until we sort this, drop PMR again to a level that will
+- * actually only allow NMIs before reading IAR, and then
+- * restore it to what it was.
+- */
+- pmr = gic_read_pmr();
+- gic_pmr_mask_irqs();
+- isb();
++static bool gic_irqnr_is_special(u32 irqnr)
++{
++ return irqnr >= 1020 && irqnr <= 1023;
++}
+
+- iar = gic_read_iar();
++static void __gic_handle_irq(u32 irqnr, struct pt_regs *regs)
++{
++ if (gic_irqnr_is_special(irqnr))
++ return;
+
+- gic_write_pmr(pmr);
+- } else {
+- iar = gic_read_iar();
++ gic_complete_ack(irqnr);
++
++ if (generic_handle_domain_irq(gic_data.domain, irqnr)) {
++ WARN_ONCE(true, "Unexpected interrupt (irqnr %u)\n", irqnr);
++ gic_deactivate_unhandled(irqnr);
+ }
++}
++
++static void __gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
++{
++ if (gic_irqnr_is_special(irqnr))
++ return;
++
++ gic_complete_ack(irqnr);
+
+- return iar;
++ if (generic_handle_domain_nmi(gic_data.domain, irqnr)) {
++ WARN_ONCE(true, "Unexpected pseudo-NMI (irqnr %u)\n", irqnr);
++ gic_deactivate_unhandled(irqnr);
++ }
+ }
+
+-static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
++/*
++ * An exception has been taken from a context with IRQs enabled, and this could
++ * be an IRQ or an NMI.
++ *
++ * The entry code called us with DAIF.IF set to keep NMIs masked. We must clear
++ * DAIF.IF (and update ICC_PMR_EL1 to mask regular IRQs) prior to returning,
++ * after handling any NMI but before handling any IRQ.
++ *
++ * The entry code has performed IRQ entry, and if an NMI is detected we must
++ * perform NMI entry/exit around invoking the handler.
++ */
++static void __gic_handle_irq_from_irqson(struct pt_regs *regs)
+ {
++ bool is_nmi;
+ u32 irqnr;
+
+- irqnr = do_read_iar(regs);
++ irqnr = gic_read_iar();
+
+- /* Check for special IDs first */
+- if ((irqnr >= 1020 && irqnr <= 1023))
+- return;
++ is_nmi = gic_rpr_is_nmi_prio();
+
+- if (gic_supports_nmi() &&
+- unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
+- gic_handle_nmi(irqnr, regs);
+- return;
++ if (is_nmi) {
++ nmi_enter();
++ __gic_handle_nmi(irqnr, regs);
++ nmi_exit();
+ }
+
+ if (gic_prio_masking_enabled()) {
+@@ -723,15 +743,52 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
+ gic_arch_enable_irqs();
+ }
+
+- if (static_branch_likely(&supports_deactivate_key))
+- gic_write_eoir(irqnr);
+- else
+- isb();
++ if (!is_nmi)
++ __gic_handle_irq(irqnr, regs);
++}
+
+- if (generic_handle_domain_irq(gic_data.domain, irqnr)) {
+- WARN_ONCE(true, "Unexpected interrupt received!\n");
+- gic_deactivate_unhandled(irqnr);
+- }
++/*
++ * An exception has been taken from a context with IRQs disabled, which can only
++ * be an NMI.
++ *
++ * The entry code called us with DAIF.IF set to keep NMIs masked. We must leave
++ * DAIF.IF (and ICC_PMR_EL1) unchanged.
++ *
++ * The entry code has performed NMI entry.
++ */
++static void __gic_handle_irq_from_irqsoff(struct pt_regs *regs)
++{
++ u64 pmr;
++ u32 irqnr;
++
++ /*
++ * We were in a context with IRQs disabled. However, the
++ * entry code has set PMR to a value that allows any
++ * interrupt to be acknowledged, and not just NMIs. This can
++ * lead to surprising effects if the NMI has been retired in
++ * the meantime, and that there is an IRQ pending. The IRQ
++ * would then be taken in NMI context, something that nobody
++ * wants to debug twice.
++ *
++ * Until we sort this, drop PMR again to a level that will
++ * actually only allow NMIs before reading IAR, and then
++ * restore it to what it was.
++ */
++ pmr = gic_read_pmr();
++ gic_pmr_mask_irqs();
++ isb();
++ irqnr = gic_read_iar();
++ gic_write_pmr(pmr);
++
++ __gic_handle_nmi(irqnr, regs);
++}
++
++static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
++{
++ if (unlikely(gic_supports_nmi() && !interrupts_enabled(regs)))
++ __gic_handle_irq_from_irqsoff(regs);
++ else
++ __gic_handle_irq_from_irqson(regs);
+ }
+
+ static u32 gic_get_pribits(void)
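The GICv3 rework above splits the single entry handler into two paths chosen by whether the interrupted context had IRQs enabled; a masked context can only be delivering a pseudo-NMI. A bare dispatch sketch of that structure (the handlers are stand-ins for the GIC internals):

#include <stdbool.h>
#include <stdio.h>

static void handle_from_irqsoff(void) { puts("NMI-only path"); }
static void handle_from_irqson(void)  { puts("IRQ-or-NMI path"); }

static void handle_irq(bool interrupts_enabled)
{
    if (!interrupts_enabled)   /* masked context => can only be an NMI */
        handle_from_irqsoff();
    else
        handle_from_irqson();
}

int main(void)
{
    handle_irq(true);
    handle_irq(false);
    return 0;
}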
+diff --git a/drivers/irqchip/irq-sni-exiu.c b/drivers/irqchip/irq-sni-exiu.c
+index abd011fcecf4a..c7db617e1a2f6 100644
+--- a/drivers/irqchip/irq-sni-exiu.c
++++ b/drivers/irqchip/irq-sni-exiu.c
+@@ -37,11 +37,26 @@ struct exiu_irq_data {
+ u32 spi_base;
+ };
+
+-static void exiu_irq_eoi(struct irq_data *d)
++static void exiu_irq_ack(struct irq_data *d)
+ {
+ struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);
+
+ writel(BIT(d->hwirq), data->base + EIREQCLR);
++}
++
++static void exiu_irq_eoi(struct irq_data *d)
++{
++ struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);
++
++ /*
++ * Level triggered interrupts are latched and must be cleared during
++ * EOI or the interrupt will be jammed on. Of course if a level
++ * triggered interrupt is still asserted then the write will not clear
++ * the interrupt.
++ */
++ if (irqd_is_level_type(d))
++ writel(BIT(d->hwirq), data->base + EIREQCLR);
++
+ irq_chip_eoi_parent(d);
+ }
+
+@@ -91,10 +106,13 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)
+ writel_relaxed(val, data->base + EILVL);
+
+ val = readl_relaxed(data->base + EIEDG);
+- if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH)
++ if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) {
+ val &= ~BIT(d->hwirq);
+- else
++ irq_set_handler_locked(d, handle_fasteoi_irq);
++ } else {
+ val |= BIT(d->hwirq);
++ irq_set_handler_locked(d, handle_fasteoi_ack_irq);
++ }
+ writel_relaxed(val, data->base + EIEDG);
+
+ writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR);
+@@ -104,6 +122,7 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)
+
+ static struct irq_chip exiu_irq_chip = {
+ .name = "EXIU",
++ .irq_ack = exiu_irq_ack,
+ .irq_eoi = exiu_irq_eoi,
+ .irq_enable = exiu_irq_enable,
+ .irq_mask = exiu_irq_mask,
+diff --git a/drivers/irqchip/irq-xtensa-mx.c b/drivers/irqchip/irq-xtensa-mx.c
+index 27933338f7b36..8c581c985aa7d 100644
+--- a/drivers/irqchip/irq-xtensa-mx.c
++++ b/drivers/irqchip/irq-xtensa-mx.c
+@@ -151,14 +151,25 @@ static struct irq_chip xtensa_mx_irq_chip = {
+ .irq_set_affinity = xtensa_mx_irq_set_affinity,
+ };
+
++static void __init xtensa_mx_init_common(struct irq_domain *root_domain)
++{
++ unsigned int i;
++
++ irq_set_default_host(root_domain);
++ secondary_init_irq();
++
++ /* Initialize default IRQ routing to CPU 0 */
++ for (i = 0; i < XCHAL_NUM_EXTINTERRUPTS; ++i)
++ set_er(1, MIROUT(i));
++}
++
+ int __init xtensa_mx_init_legacy(struct device_node *interrupt_parent)
+ {
+ struct irq_domain *root_domain =
+ irq_domain_add_legacy(NULL, NR_IRQS - 1, 1, 0,
+ &xtensa_mx_irq_domain_ops,
+ &xtensa_mx_irq_chip);
+- irq_set_default_host(root_domain);
+- secondary_init_irq();
++ xtensa_mx_init_common(root_domain);
+ return 0;
+ }
+
+@@ -168,8 +179,7 @@ static int __init xtensa_mx_init(struct device_node *np,
+ struct irq_domain *root_domain =
+ irq_domain_add_linear(np, NR_IRQS, &xtensa_mx_irq_domain_ops,
+ &xtensa_mx_irq_chip);
+- irq_set_default_host(root_domain);
+- secondary_init_irq();
++ xtensa_mx_init_common(root_domain);
+ return 0;
+ }
+ IRQCHIP_DECLARE(xtensa_mx_irq_chip, "cdns,xtensa-mx", xtensa_mx_init);
+diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig
+index 5cdc361da37cb..539a2ed4e13dc 100644
+--- a/drivers/macintosh/Kconfig
++++ b/drivers/macintosh/Kconfig
+@@ -44,6 +44,7 @@ config ADB_IOP
+ config ADB_CUDA
+ bool "Support for Cuda/Egret based Macs and PowerMacs"
+ depends on (ADB || PPC_PMAC) && !PPC_PMAC64
++ select RTC_LIB
+ help
+ This provides support for Cuda/Egret based Macintosh and
+ Power Macintosh systems. This includes most m68k based Macs,
+@@ -57,6 +58,7 @@ config ADB_CUDA
+ config ADB_PMU
+ bool "Support for PMU based PowerMacs and PowerBooks"
+ depends on PPC_PMAC || MAC
++ select RTC_LIB
+ help
+ On PowerBooks, iBooks, and recent iMacs and Power Macintoshes, the
+ PMU is an embedded microprocessor whose primary function is to
+@@ -67,6 +69,10 @@ config ADB_PMU
+ this device; you should do so if your machine is one of those
+ mentioned above.
+
++config ADB_PMU_EVENT
++ def_bool y
++ depends on ADB_PMU && INPUT=y
++
+ config ADB_PMU_LED
+ bool "Support for the Power/iBook front LED"
+ depends on PPC_PMAC && ADB_PMU
+diff --git a/drivers/macintosh/Makefile b/drivers/macintosh/Makefile
+index 49819b1b6f201..712edcb3e0b08 100644
+--- a/drivers/macintosh/Makefile
++++ b/drivers/macintosh/Makefile
+@@ -12,7 +12,8 @@ obj-$(CONFIG_MAC_EMUMOUSEBTN) += mac_hid.o
+ obj-$(CONFIG_INPUT_ADBHID) += adbhid.o
+ obj-$(CONFIG_ANSLCD) += ans-lcd.o
+
+-obj-$(CONFIG_ADB_PMU) += via-pmu.o via-pmu-event.o
++obj-$(CONFIG_ADB_PMU) += via-pmu.o
++obj-$(CONFIG_ADB_PMU_EVENT) += via-pmu-event.o
+ obj-$(CONFIG_ADB_PMU_LED) += via-pmu-led.o
+ obj-$(CONFIG_PMAC_BACKLIGHT) += via-pmu-backlight.o
+ obj-$(CONFIG_ADB_CUDA) += via-cuda.o
+diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c
+index 4b98bc26a94b5..2109129ea1bbf 100644
+--- a/drivers/macintosh/via-pmu.c
++++ b/drivers/macintosh/via-pmu.c
+@@ -1459,7 +1459,7 @@ next:
+ pmu_pass_intr(data, len);
+ /* len == 6 is probably a bad check. But how do I
+ * know what PMU versions send what events here? */
+- if (len == 6) {
++ if (IS_ENABLED(CONFIG_ADB_PMU_EVENT) && len == 6) {
+ via_pmu_event(PMU_EVT_POWER, !!(data[1]&8));
+ via_pmu_event(PMU_EVT_LID, data[1]&1);
+ }
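The Kconfig/Makefile split above builds via-pmu-event.o only when input support is built in, and the IS_ENABLED() test keeps the call site type-checked while letting the compiler drop it. A loose userspace stand-in for that pattern; the macro here is a simplification, not the kernel's real IS_ENABLED():

#include <stdio.h>

#define IS_ENABLED(x) (x)          /* the kernel's version expands config macros */
#define CONFIG_ADB_PMU_EVENT 0     /* illustrative: option disabled */

static void via_pmu_event_sketch(int evt, int on)
{
    printf("event %d -> %d\n", evt, on);
}

int main(void)
{
    int len = 6;

    if (IS_ENABLED(CONFIG_ADB_PMU_EVENT) && len == 6)
        via_pmu_event_sketch(0, 1);   /* optimized out when disabled */
    else
        puts("event reporting compiled out");
    return 0;
}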
+diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
+index 3e7d4b20ab34f..4229b9b5da98f 100644
+--- a/drivers/mailbox/mailbox.c
++++ b/drivers/mailbox/mailbox.c
+@@ -82,11 +82,11 @@ static void msg_submit(struct mbox_chan *chan)
+ exit:
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+- /* kick start the timer immediately to avoid delays */
+ if (!err && (chan->txdone_method & TXDONE_BY_POLL)) {
+- /* but only if not already active */
+- if (!hrtimer_active(&chan->mbox->poll_hrt))
+- hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++ /* kick start the timer immediately to avoid delays */
++ spin_lock_irqsave(&chan->mbox->poll_hrt_lock, flags);
++ hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++ spin_unlock_irqrestore(&chan->mbox->poll_hrt_lock, flags);
+ }
+ }
+
+@@ -120,20 +120,26 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
+ container_of(hrtimer, struct mbox_controller, poll_hrt);
+ bool txdone, resched = false;
+ int i;
++ unsigned long flags;
+
+ for (i = 0; i < mbox->num_chans; i++) {
+ struct mbox_chan *chan = &mbox->chans[i];
+
+ if (chan->active_req && chan->cl) {
+- resched = true;
+ txdone = chan->mbox->ops->last_tx_done(chan);
+ if (txdone)
+ tx_tick(chan, 0);
++ else
++ resched = true;
+ }
+ }
+
+ if (resched) {
+- hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period));
++ spin_lock_irqsave(&mbox->poll_hrt_lock, flags);
++ if (!hrtimer_is_queued(hrtimer))
++ hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period));
++ spin_unlock_irqrestore(&mbox->poll_hrt_lock, flags);
++
+ return HRTIMER_RESTART;
+ }
+ return HRTIMER_NORESTART;
+@@ -500,6 +506,7 @@ int mbox_controller_register(struct mbox_controller *mbox)
+ hrtimer_init(&mbox->poll_hrt, CLOCK_MONOTONIC,
+ HRTIMER_MODE_REL);
+ mbox->poll_hrt.function = txdone_hrtimer;
++ spin_lock_init(&mbox->poll_hrt_lock);
+ }
+
+ for (i = 0; i < mbox->num_chans; i++) {
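The mailbox changes above do two things to the polling timer: rearm decisions now happen under a dedicated lock so hrtimer_start() and hrtimer_forward_now() cannot race, and the callback asks for a restart only while some channel still has an unfinished request. A rough pthread sketch of both rules (compile with -pthread; names are illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t poll_hrt_lock = PTHREAD_MUTEX_INITIALIZER;

/* stand-ins for the hrtimer calls, always made under poll_hrt_lock */
static void timer_start(void)   { puts("timer armed"); }
static void timer_forward(void) { puts("timer forwarded"); }

static void submit_message(void)
{
    pthread_mutex_lock(&poll_hrt_lock);
    timer_start();             /* kick polling for this message */
    pthread_mutex_unlock(&poll_hrt_lock);
}

static bool timer_callback(int unfinished)
{
    if (!unfinished)
        return false;          /* all TX done: let the timer die */

    pthread_mutex_lock(&poll_hrt_lock);
    timer_forward();           /* still pending: poll again later */
    pthread_mutex_unlock(&poll_hrt_lock);
    return true;
}

int main(void)
{
    submit_message();
    timer_callback(1);
    timer_callback(0);
    return 0;
}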
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index ed18936b8ce68..ebfa33a40fceb 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -654,7 +654,7 @@ static int pcc_mbox_probe(struct platform_device *pdev)
+ goto err;
+ }
+
+- pcc_mbox_ctrl = devm_kmalloc(dev, sizeof(*pcc_mbox_ctrl), GFP_KERNEL);
++ pcc_mbox_ctrl = devm_kzalloc(dev, sizeof(*pcc_mbox_ctrl), GFP_KERNEL);
+ if (!pcc_mbox_ctrl) {
+ rc = -ENOMEM;
+ goto err;
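Switching devm_kmalloc() to devm_kzalloc() above guarantees that any mbox_controller field the probe path never writes reads as zero rather than heap garbage. The same idea in userspace terms, with calloc() standing in for the zeroing allocator:

#include <stdio.h>
#include <stdlib.h>

struct ctrl { int num_chans; void *chans; };

int main(void)
{
    struct ctrl *c = calloc(1, sizeof(*c));   /* all fields start at 0 */

    if (!c)
        return 1;
    printf("num_chans=%d chans=%p\n", c->num_chans, c->chans);
    free(c);
    return 0;
}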
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index ad9f16689419d..2362bb8ef6d19 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2006,8 +2006,7 @@ int bch_btree_check(struct cache_set *c)
+ int i;
+ struct bkey *k = NULL;
+ struct btree_iter iter;
+- struct btree_check_state *check_state;
+- char name[32];
++ struct btree_check_state check_state;
+
+ /* check and mark root node keys */
+ for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid)
+@@ -2018,63 +2017,58 @@ int bch_btree_check(struct cache_set *c)
+ if (c->root->level == 0)
+ return 0;
+
+- check_state = kzalloc(sizeof(struct btree_check_state), GFP_KERNEL);
+- if (!check_state)
+- return -ENOMEM;
+-
+- check_state->c = c;
+- check_state->total_threads = bch_btree_chkthread_nr();
+- check_state->key_idx = 0;
+- spin_lock_init(&check_state->idx_lock);
+- atomic_set(&check_state->started, 0);
+- atomic_set(&check_state->enough, 0);
+- init_waitqueue_head(&check_state->wait);
++ check_state.c = c;
++ check_state.total_threads = bch_btree_chkthread_nr();
++ check_state.key_idx = 0;
++ spin_lock_init(&check_state.idx_lock);
++ atomic_set(&check_state.started, 0);
++ atomic_set(&check_state.enough, 0);
++ init_waitqueue_head(&check_state.wait);
+
++ rw_lock(0, c->root, c->root->level);
+ /*
+ * Run multiple threads to check btree nodes in parallel,
+- * if check_state->enough is non-zero, it means current
++ * if check_state.enough is non-zero, it means current
+ * running check threads are enough, unnecessary to create
+ * more.
+ */
+- for (i = 0; i < check_state->total_threads; i++) {
+- /* fetch latest check_state->enough earlier */
++ for (i = 0; i < check_state.total_threads; i++) {
++ /* fetch latest check_state.enough earlier */
+ smp_mb__before_atomic();
+- if (atomic_read(&check_state->enough))
++ if (atomic_read(&check_state.enough))
+ break;
+
+- check_state->infos[i].result = 0;
+- check_state->infos[i].state = check_state;
+- snprintf(name, sizeof(name), "bch_btrchk[%u]", i);
+- atomic_inc(&check_state->started);
++ check_state.infos[i].result = 0;
++ check_state.infos[i].state = &check_state;
+
+- check_state->infos[i].thread =
++ check_state.infos[i].thread =
+ kthread_run(bch_btree_check_thread,
+- &check_state->infos[i],
+- name);
+- if (IS_ERR(check_state->infos[i].thread)) {
++ &check_state.infos[i],
++ "bch_btrchk[%d]", i);
++ if (IS_ERR(check_state.infos[i].thread)) {
+ pr_err("fails to run thread bch_btrchk[%d]\n", i);
+ for (--i; i >= 0; i--)
+- kthread_stop(check_state->infos[i].thread);
++ kthread_stop(check_state.infos[i].thread);
+ ret = -ENOMEM;
+ goto out;
+ }
++ atomic_inc(&check_state.started);
+ }
+
+ /*
+ * Must wait for all threads to stop.
+ */
+- wait_event_interruptible(check_state->wait,
+- atomic_read(&check_state->started) == 0);
++ wait_event(check_state.wait, atomic_read(&check_state.started) == 0);
+
+- for (i = 0; i < check_state->total_threads; i++) {
+- if (check_state->infos[i].result) {
+- ret = check_state->infos[i].result;
++ for (i = 0; i < check_state.total_threads; i++) {
++ if (check_state.infos[i].result) {
++ ret = check_state.infos[i].result;
+ goto out;
+ }
+ }
+
+ out:
+- kfree(check_state);
++ rw_unlock(0, c->root);
+ return ret;
+ }
+
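The bch_btree_check() rewrite above moves the per-check state from kzalloc() onto the caller's stack and waits uninterruptibly for every worker before returning, which is what makes the on-stack lifetime safe. A userspace sketch of that shape (compile with -pthread; names are illustrative):

#include <pthread.h>
#include <stdio.h>

struct check_state { int total; int results[4]; };

static void *worker(void *arg)
{
    int *slot = arg;
    *slot = 0;                 /* report success */
    return NULL;
}

int main(void)
{
    struct check_state st = { .total = 4 };   /* stack, not malloc */
    pthread_t tid[4];
    int i;

    for (i = 0; i < st.total; i++)
        pthread_create(&tid[i], NULL, worker, &st.results[i]);
    for (i = 0; i < st.total; i++)
        pthread_join(tid[i], NULL);           /* must finish before st dies */
    for (i = 0; i < st.total; i++)
        if (st.results[i])
            return 1;
    printf("all %d workers reported success\n", st.total);
    return 0;
}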
+diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
+index 50482107134f1..1b5fdbc0d83eb 100644
+--- a/drivers/md/bcache/btree.h
++++ b/drivers/md/bcache/btree.h
+@@ -226,7 +226,7 @@ struct btree_check_info {
+ int result;
+ };
+
+-#define BCH_BTR_CHKTHREAD_MAX 64
++#define BCH_BTR_CHKTHREAD_MAX 12
+ struct btree_check_state {
+ struct cache_set *c;
+ int total_threads;
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index 61bd79babf7ae..346a92c438582 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -407,6 +407,11 @@ err:
+ return ret;
+ }
+
++void bch_journal_space_reserve(struct journal *j)
++{
++ j->do_reserve = true;
++}
++
+ /* Journalling */
+
+ static void btree_flush_write(struct cache_set *c)
+@@ -625,12 +630,30 @@ static void do_journal_discard(struct cache *ca)
+ }
+ }
+
++static unsigned int free_journal_buckets(struct cache_set *c)
++{
++ struct journal *j = &c->journal;
++ struct cache *ca = c->cache;
++ struct journal_device *ja = &c->cache->journal;
++ unsigned int n;
++
++ /* In case njournal_buckets is not power of 2 */
++ if (ja->cur_idx >= ja->discard_idx)
++ n = ca->sb.njournal_buckets + ja->discard_idx - ja->cur_idx;
++ else
++ n = ja->discard_idx - ja->cur_idx;
++
++ if (n > (1 + j->do_reserve))
++ return n - (1 + j->do_reserve);
++
++ return 0;
++}
++
+ static void journal_reclaim(struct cache_set *c)
+ {
+ struct bkey *k = &c->journal.key;
+ struct cache *ca = c->cache;
+ uint64_t last_seq;
+- unsigned int next;
+ struct journal_device *ja = &ca->journal;
+ atomic_t p __maybe_unused;
+
+@@ -653,12 +676,10 @@ static void journal_reclaim(struct cache_set *c)
+ if (c->journal.blocks_free)
+ goto out;
+
+- next = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+- /* No space available on this device */
+- if (next == ja->discard_idx)
++ if (!free_journal_buckets(c))
+ goto out;
+
+- ja->cur_idx = next;
++ ja->cur_idx = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+ k->ptr[0] = MAKE_PTR(0,
+ bucket_to_sector(c, ca->sb.d[ja->cur_idx]),
+ ca->sb.nr_this_dev);
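free_journal_buckets() above counts the slots between cur_idx and discard_idx in a ring that need not be a power of two, then holds back one bucket plus an optional reserve. A runnable sketch of the same arithmetic with made-up values:

#include <stdio.h>

static unsigned free_buckets(unsigned n, unsigned cur, unsigned discard,
                             unsigned reserve)
{
    unsigned avail;

    if (cur >= discard)        /* wrapped: count through the end */
        avail = n + discard - cur;
    else
        avail = discard - cur;

    return avail > (1 + reserve) ? avail - (1 + reserve) : 0;
}

int main(void)
{
    printf("%u\n", free_buckets(8, 6, 2, 1));   /* 4 avail - 2 held = 2 */
    printf("%u\n", free_buckets(8, 1, 5, 1));   /* 4 avail - 2 held = 2 */
    return 0;
}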
+diff --git a/drivers/md/bcache/journal.h b/drivers/md/bcache/journal.h
+index f2ea34d5f431b..cd316b4a1e95f 100644
+--- a/drivers/md/bcache/journal.h
++++ b/drivers/md/bcache/journal.h
+@@ -105,6 +105,7 @@ struct journal {
+ spinlock_t lock;
+ spinlock_t flush_write_lock;
+ bool btree_flushing;
++ bool do_reserve;
+ /* used when waiting because the journal was full */
+ struct closure_waitlist wait;
+ struct closure io;
+@@ -182,5 +183,6 @@ int bch_journal_replay(struct cache_set *c, struct list_head *list);
+
+ void bch_journal_free(struct cache_set *c);
+ int bch_journal_alloc(struct cache_set *c);
++void bch_journal_space_reserve(struct journal *j);
+
+ #endif /* _BCACHE_JOURNAL_H */
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index d15aae6c51c13..673a680240a93 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -1107,6 +1107,12 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio,
+ * which would call closure_get(&dc->disk.cl)
+ */
+ ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
++ if (!ddip) {
++ bio->bi_status = BLK_STS_RESOURCE;
++ bio->bi_end_io(bio);
++ return;
++ }
++
+ ddip->d = d;
+ /* Count on the bcache device */
+ ddip->orig_bdev = orig_bdev;
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 140f35dc0c457..2c3d2d47eebf1 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -2131,6 +2131,7 @@ static int run_cache_set(struct cache_set *c)
+
+ flash_devs_run(c);
+
++ bch_journal_space_reserve(&c->journal);
+ set_bit(CACHE_SET_RUNNING, &c->flags);
+ return 0;
+ err:
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 68d3dd6b4f119..267557f5d4eb3 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -802,13 +802,11 @@ static int bch_writeback_thread(void *arg)
+
+ /* Init */
+ #define INIT_KEYS_EACH_TIME 500000
+-#define INIT_KEYS_SLEEP_MS 100
+
+ struct sectors_dirty_init {
+ struct btree_op op;
+ unsigned int inode;
+ size_t count;
+- struct bkey start;
+ };
+
+ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b,
+@@ -824,11 +822,8 @@ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b,
+ KEY_START(k), KEY_SIZE(k));
+
+ op->count++;
+- if (atomic_read(&b->c->search_inflight) &&
+- !(op->count % INIT_KEYS_EACH_TIME)) {
+- bkey_copy_key(&op->start, k);
+- return -EAGAIN;
+- }
++ if (!(op->count % INIT_KEYS_EACH_TIME))
++ cond_resched();
+
+ return MAP_CONTINUE;
+ }
+@@ -843,24 +838,16 @@ static int bch_root_node_dirty_init(struct cache_set *c,
+ bch_btree_op_init(&op.op, -1);
+ op.inode = d->id;
+ op.count = 0;
+- op.start = KEY(op.inode, 0, 0);
+-
+- do {
+- ret = bcache_btree(map_keys_recurse,
+- k,
+- c->root,
+- &op.op,
+- &op.start,
+- sectors_dirty_init_fn,
+- 0);
+- if (ret == -EAGAIN)
+- schedule_timeout_interruptible(
+- msecs_to_jiffies(INIT_KEYS_SLEEP_MS));
+- else if (ret < 0) {
+- pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+- break;
+- }
+- } while (ret == -EAGAIN);
++
++ ret = bcache_btree(map_keys_recurse,
++ k,
++ c->root,
++ &op.op,
++ &KEY(op.inode, 0, 0),
++ sectors_dirty_init_fn,
++ 0);
++ if (ret < 0)
++ pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+
+ return ret;
+ }
+@@ -904,7 +891,6 @@ static int bch_dirty_init_thread(void *arg)
+ goto out;
+ }
+ skip_nr--;
+- cond_resched();
+ }
+
+ if (p) {
+@@ -914,7 +900,6 @@ static int bch_dirty_init_thread(void *arg)
+
+ p = NULL;
+ prev_idx = cur_idx;
+- cond_resched();
+ }
+
+ out:
+@@ -945,67 +930,55 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ struct btree_iter iter;
+ struct sectors_dirty_init op;
+ struct cache_set *c = d->c;
+- struct bch_dirty_init_state *state;
+- char name[32];
++ struct bch_dirty_init_state state;
+
+ /* Just count root keys if no leaf node */
++ rw_lock(0, c->root, c->root->level);
+ if (c->root->level == 0) {
+ bch_btree_op_init(&op.op, -1);
+ op.inode = d->id;
+ op.count = 0;
+- op.start = KEY(op.inode, 0, 0);
+
+ for_each_key_filter(&c->root->keys,
+ k, &iter, bch_ptr_invalid)
+ sectors_dirty_init_fn(&op.op, c->root, k);
+- return;
+- }
+
+- state = kzalloc(sizeof(struct bch_dirty_init_state), GFP_KERNEL);
+- if (!state) {
+- pr_warn("sectors dirty init failed: cannot allocate memory\n");
++ rw_unlock(0, c->root);
+ return;
+ }
+
+- state->c = c;
+- state->d = d;
+- state->total_threads = bch_btre_dirty_init_thread_nr();
+- state->key_idx = 0;
+- spin_lock_init(&state->idx_lock);
+- atomic_set(&state->started, 0);
+- atomic_set(&state->enough, 0);
+- init_waitqueue_head(&state->wait);
+-
+- for (i = 0; i < state->total_threads; i++) {
+- /* Fetch latest state->enough earlier */
++ state.c = c;
++ state.d = d;
++ state.total_threads = bch_btre_dirty_init_thread_nr();
++ state.key_idx = 0;
++ spin_lock_init(&state.idx_lock);
++ atomic_set(&state.started, 0);
++ atomic_set(&state.enough, 0);
++ init_waitqueue_head(&state.wait);
++
++ for (i = 0; i < state.total_threads; i++) {
++ /* Fetch latest state.enough earlier */
+ smp_mb__before_atomic();
+- if (atomic_read(&state->enough))
++ if (atomic_read(&state.enough))
+ break;
+
+- state->infos[i].state = state;
+- atomic_inc(&state->started);
+- snprintf(name, sizeof(name), "bch_dirty_init[%d]", i);
+-
+- state->infos[i].thread =
+- kthread_run(bch_dirty_init_thread,
+- &state->infos[i],
+- name);
+- if (IS_ERR(state->infos[i].thread)) {
++ state.infos[i].state = &state;
++ state.infos[i].thread =
++ kthread_run(bch_dirty_init_thread, &state.infos[i],
++ "bch_dirtcnt[%d]", i);
++ if (IS_ERR(state.infos[i].thread)) {
+ pr_err("fails to run thread bch_dirty_init[%d]\n", i);
+ for (--i; i >= 0; i--)
+- kthread_stop(state->infos[i].thread);
++ kthread_stop(state.infos[i].thread);
+ goto out;
+ }
++ atomic_inc(&state.started);
+ }
+
+- /*
+- * Must wait for all threads to stop.
+- */
+- wait_event_interruptible(state->wait,
+- atomic_read(&state->started) == 0);
+-
+ out:
+- kfree(state);
++ /* Must wait for all threads to stop. */
++ wait_event(state.wait, atomic_read(&state.started) == 0);
++ rw_unlock(0, c->root);
+ }
+
+ void bch_cached_dev_writeback_init(struct cached_dev *dc)
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index 02b2f9df73f69..31df716951f66 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -20,7 +20,7 @@
+ #define BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID 57
+ #define BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH 64
+
+-#define BCH_DIRTY_INIT_THRD_MAX 64
++#define BCH_DIRTY_INIT_THRD_MAX 12
+ /*
+ * 14 (16384ths) is chosen here as something that each backing device
+ * should be a reasonable fraction of the share, and not to blow up
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index bfd6026d78099..612460d2bdaf2 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -639,14 +639,6 @@ re_read:
+ daemon_sleep = le32_to_cpu(sb->daemon_sleep) * HZ;
+ write_behind = le32_to_cpu(sb->write_behind);
+ sectors_reserved = le32_to_cpu(sb->sectors_reserved);
+- /* Setup nodes/clustername only if bitmap version is
+- * cluster-compatible
+- */
+- if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) {
+- nodes = le32_to_cpu(sb->nodes);
+- strlcpy(bitmap->mddev->bitmap_info.cluster_name,
+- sb->cluster_name, 64);
+- }
+
+ /* verify that the bitmap-specific fields are valid */
+ if (sb->magic != cpu_to_le32(BITMAP_MAGIC))
+@@ -668,6 +660,16 @@ re_read:
+ goto out;
+ }
+
++ /*
++ * Setup nodes/clustername only if bitmap version is
++ * cluster-compatible
++ */
++ if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) {
++ nodes = le32_to_cpu(sb->nodes);
++ strlcpy(bitmap->mddev->bitmap_info.cluster_name,
++ sb->cluster_name, 64);
++ }
++
+ /* keep the array size field of the bitmap superblock up to date */
+ sb->sync_size = cpu_to_le64(bitmap->mddev->resync_max_sectors);
+
+@@ -700,9 +702,9 @@ re_read:
+
+ out:
+ kunmap_atomic(sb);
+- /* Assigning chunksize is required for "re_read" */
+- bitmap->mddev->bitmap_info.chunksize = chunksize;
+ if (err == 0 && nodes && (bitmap->cluster_slot < 0)) {
++ /* Assigning chunksize is required for "re_read" */
++ bitmap->mddev->bitmap_info.chunksize = chunksize;
+ err = md_setup_cluster(bitmap->mddev, nodes);
+ if (err) {
+ pr_warn("%s: Could not setup cluster service (%d)\n",
+@@ -713,18 +715,18 @@ out:
+ goto re_read;
+ }
+
+-
+ out_no_sb:
+- if (test_bit(BITMAP_STALE, &bitmap->flags))
+- bitmap->events_cleared = bitmap->mddev->events;
+- bitmap->mddev->bitmap_info.chunksize = chunksize;
+- bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep;
+- bitmap->mddev->bitmap_info.max_write_behind = write_behind;
+- bitmap->mddev->bitmap_info.nodes = nodes;
+- if (bitmap->mddev->bitmap_info.space == 0 ||
+- bitmap->mddev->bitmap_info.space > sectors_reserved)
+- bitmap->mddev->bitmap_info.space = sectors_reserved;
+- if (err) {
++ if (err == 0) {
++ if (test_bit(BITMAP_STALE, &bitmap->flags))
++ bitmap->events_cleared = bitmap->mddev->events;
++ bitmap->mddev->bitmap_info.chunksize = chunksize;
++ bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep;
++ bitmap->mddev->bitmap_info.max_write_behind = write_behind;
++ bitmap->mddev->bitmap_info.nodes = nodes;
++ if (bitmap->mddev->bitmap_info.space == 0 ||
++ bitmap->mddev->bitmap_info.space > sectors_reserved)
++ bitmap->mddev->bitmap_info.space = sectors_reserved;
++ } else {
+ md_bitmap_print_sb(bitmap);
+ if (bitmap->cluster_slot < 0)
+ md_cluster_stop(bitmap->mddev);
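The md-bitmap reordering above defers the nodes/cluster_name reads and cluster setup until after the superblock passes its sanity checks, so invalid on-disk data is rejected before anything acts on it. A small validate-before-use sketch with illustrative fields:

#include <stdio.h>

struct sb { unsigned magic, chunksize, nodes; };

static int load(const struct sb *sb)
{
    if (sb->magic != 0x6d92u)
        return -1;             /* reject before touching anything */
    if (sb->chunksize == 0 || (sb->chunksize & (sb->chunksize - 1)))
        return -1;             /* must be a power of two */

    /* only now act on the contents */
    if (sb->nodes)
        printf("starting cluster with %u nodes\n", sb->nodes);
    return 0;
}

int main(void)
{
    struct sb good = { 0x6d92u, 4096, 2 }, bad = { 0, 0, 2 };

    printf("good=%d bad=%d\n", load(&good), load(&bad));
    return 0;
}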
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 4d38bd7dadd60..43c5890dc9f35 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2629,14 +2629,16 @@ static void sync_sbs(struct mddev *mddev, int nospares)
+
+ static bool does_sb_need_changing(struct mddev *mddev)
+ {
+- struct md_rdev *rdev;
++ struct md_rdev *rdev = NULL, *iter;
+ struct mdp_superblock_1 *sb;
+ int role;
+
+ /* Find a good rdev */
+- rdev_for_each(rdev, mddev)
+- if ((rdev->raid_disk >= 0) && !test_bit(Faulty, &rdev->flags))
++ rdev_for_each(iter, mddev)
++ if ((iter->raid_disk >= 0) && !test_bit(Faulty, &iter->flags)) {
++ rdev = iter;
+ break;
++ }
+
+ /* No good device found. */
+ if (!rdev)
+@@ -5598,8 +5600,6 @@ static void md_free(struct kobject *ko)
+
+ bioset_exit(&mddev->bio_set);
+ bioset_exit(&mddev->sync_set);
+- if (mddev->level != 1 && mddev->level != 10)
+- bioset_exit(&mddev->io_acct_set);
+ kfree(mddev);
+ }
+
+@@ -6286,8 +6286,6 @@ void md_stop(struct mddev *mddev)
+ __md_stop(mddev);
+ bioset_exit(&mddev->bio_set);
+ bioset_exit(&mddev->sync_set);
+- if (mddev->level != 1 && mddev->level != 10)
+- bioset_exit(&mddev->io_acct_set);
+ }
+
+ EXPORT_SYMBOL_GPL(md_stop);
+@@ -9792,16 +9790,18 @@ static int read_rdev(struct mddev *mddev, struct md_rdev *rdev)
+
+ void md_reload_sb(struct mddev *mddev, int nr)
+ {
+- struct md_rdev *rdev;
++ struct md_rdev *rdev = NULL, *iter;
+ int err;
+
+ /* Find the rdev */
+- rdev_for_each_rcu(rdev, mddev) {
+- if (rdev->desc_nr == nr)
++ rdev_for_each_rcu(iter, mddev) {
++ if (iter->desc_nr == nr) {
++ rdev = iter;
+ break;
++ }
+ }
+
+- if (!rdev || rdev->desc_nr != nr) {
++ if (!rdev) {
+ pr_warn("%s: %d Could not find rdev with nr %d\n", __func__, __LINE__, nr);
+ return;
+ }
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index b59a77b31b90d..bccc741b9382a 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -361,7 +361,6 @@ static void free_conf(struct mddev *mddev, struct r0conf *conf)
+ kfree(conf->strip_zone);
+ kfree(conf->devlist);
+ kfree(conf);
+- mddev->private = NULL;
+ }
+
+ static void raid0_free(struct mddev *mddev, void *priv)
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 2e12331c12a9d..01766e7447728 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1278,7 +1278,7 @@ static int cec_config_log_addr(struct cec_adapter *adap,
+ * While trying to poll the physical address was reset
+ * and the adapter was unconfigured, so bail out.
+ */
+- if (!adap->is_configuring)
++ if (adap->phys_addr == CEC_PHYS_ADDR_INVALID)
+ return -EINTR;
+
+ if (err)
+@@ -1335,7 +1335,6 @@ static void cec_adap_unconfigure(struct cec_adapter *adap)
+ adap->phys_addr != CEC_PHYS_ADDR_INVALID)
+ WARN_ON(adap->ops->adap_log_addr(adap, CEC_LOG_ADDR_INVALID));
+ adap->log_addrs.log_addr_mask = 0;
+- adap->is_configuring = false;
+ adap->is_configured = false;
+ cec_flush(adap);
+ wake_up_interruptible(&adap->kthread_waitq);
+@@ -1527,9 +1526,10 @@ unconfigure:
+ for (i = 0; i < las->num_log_addrs; i++)
+ las->log_addr[i] = CEC_LOG_ADDR_INVALID;
+ cec_adap_unconfigure(adap);
++ adap->is_configuring = false;
+ adap->kthread_config = NULL;
+- mutex_unlock(&adap->lock);
+ complete(&adap->config_completion);
++ mutex_unlock(&adap->lock);
+ return 0;
+ }
+
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index 9158d3ce45c04..1649b7a9ff720 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -1603,8 +1603,11 @@ static int ccs_power_on(struct device *dev)
+ usleep_range(1000, 2000);
+ } while (--retry);
+
+- if (!reset)
+- return -EIO;
++ if (!reset) {
++ dev_err(dev, "software reset failed\n");
++ rval = -EIO;
++ goto out_cci_addr_fail;
++ }
+ }
+
+ if (sensor->hwcfg.i2c_addr_alt) {
+diff --git a/drivers/media/i2c/dw9768.c b/drivers/media/i2c/dw9768.c
+index 65c6acf3ced9a..c086580efac78 100644
+--- a/drivers/media/i2c/dw9768.c
++++ b/drivers/media/i2c/dw9768.c
+@@ -469,11 +469,6 @@ static int dw9768_probe(struct i2c_client *client)
+
+ dw9768->sd.entity.function = MEDIA_ENT_F_LENS;
+
+- /*
+- * Device is already turned on by i2c-core with ACPI domain PM.
+- * Attempt to turn off the device to satisfy the privacy LED concerns.
+- */
+- pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+ if (!pm_runtime_enabled(dev)) {
+ ret = dw9768_runtime_resume(dev);
+@@ -488,7 +483,6 @@ static int dw9768_probe(struct i2c_client *client)
+ dev_err(dev, "failed to register V4L2 subdev: %d", ret);
+ goto err_power_off;
+ }
+- pm_runtime_idle(dev);
+
+ return 0;
+
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index eb2b8e42335bd..07e98d57e3237 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -15,6 +15,7 @@
+ #include <linux/fwnode.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/gpio/driver.h>
++#include <linux/gpio/machine.h>
+ #include <linux/i2c.h>
+ #include <linux/i2c-mux.h>
+ #include <linux/module.h>
+@@ -168,6 +169,8 @@ struct max9286_priv {
+ u32 init_rev_chan_mv;
+ u32 rev_chan_mv;
+
++ u32 gpio_poc[2];
++
+ struct v4l2_ctrl_handler ctrls;
+ struct v4l2_ctrl *pixelrate;
+
+@@ -1025,20 +1028,27 @@ static int max9286_setup(struct max9286_priv *priv)
+ return 0;
+ }
+
+-static void max9286_gpio_set(struct gpio_chip *chip,
+- unsigned int offset, int value)
++static int max9286_gpio_set(struct max9286_priv *priv, unsigned int offset,
++ int value)
+ {
+- struct max9286_priv *priv = gpiochip_get_data(chip);
+-
+ if (value)
+ priv->gpio_state |= BIT(offset);
+ else
+ priv->gpio_state &= ~BIT(offset);
+
+- max9286_write(priv, 0x0f, MAX9286_0X0F_RESERVED | priv->gpio_state);
++ return max9286_write(priv, 0x0f,
++ MAX9286_0X0F_RESERVED | priv->gpio_state);
++}
++
++static void max9286_gpiochip_set(struct gpio_chip *chip,
++ unsigned int offset, int value)
++{
++ struct max9286_priv *priv = gpiochip_get_data(chip);
++
++ max9286_gpio_set(priv, offset, value);
+ }
+
+-static int max9286_gpio_get(struct gpio_chip *chip, unsigned int offset)
++static int max9286_gpiochip_get(struct gpio_chip *chip, unsigned int offset)
+ {
+ struct max9286_priv *priv = gpiochip_get_data(chip);
+
+@@ -1057,13 +1067,10 @@ static int max9286_register_gpio(struct max9286_priv *priv)
+ gpio->owner = THIS_MODULE;
+ gpio->ngpio = 2;
+ gpio->base = -1;
+- gpio->set = max9286_gpio_set;
+- gpio->get = max9286_gpio_get;
++ gpio->set = max9286_gpiochip_set;
++ gpio->get = max9286_gpiochip_get;
+ gpio->can_sleep = true;
+
+- /* GPIO values default to high */
+- priv->gpio_state = BIT(0) | BIT(1);
+-
+ ret = devm_gpiochip_add_data(dev, gpio, priv);
+ if (ret)
+ dev_err(dev, "Unable to create gpio_chip\n");
+@@ -1071,26 +1078,83 @@ static int max9286_register_gpio(struct max9286_priv *priv)
+ return ret;
+ }
+
+-static int max9286_init(struct device *dev)
++static int max9286_parse_gpios(struct max9286_priv *priv)
+ {
+- struct max9286_priv *priv;
+- struct i2c_client *client;
++ struct device *dev = &priv->client->dev;
+ int ret;
+
+- client = to_i2c_client(dev);
+- priv = i2c_get_clientdata(client);
++ /* GPIO values default to high */
++ priv->gpio_state = BIT(0) | BIT(1);
+
+- /* Enable the bus power. */
+- ret = regulator_enable(priv->regulator);
+- if (ret < 0) {
+- dev_err(&client->dev, "Unable to turn PoC on\n");
+- return ret;
++ /*
++ * Parse the "gpio-poc" vendor property. If the property is not
++	 * specified, the camera power is controlled by a regulator.
++ */
++ ret = of_property_read_u32_array(dev->of_node, "maxim,gpio-poc",
++ priv->gpio_poc, 2);
++ if (ret == -EINVAL) {
++ /*
++ * If gpio lines are not used for the camera power, register
++ * a gpio controller for consumers.
++ */
++ ret = max9286_register_gpio(priv);
++ if (ret)
++ return ret;
++
++ priv->regulator = devm_regulator_get(dev, "poc");
++ if (IS_ERR(priv->regulator)) {
++ return dev_err_probe(dev, PTR_ERR(priv->regulator),
++ "Unable to get PoC regulator (%ld)\n",
++ PTR_ERR(priv->regulator));
++ }
++
++ return 0;
++ }
++
++	/* If the property is specified, make sure it is well formed. */
++ if (ret || priv->gpio_poc[0] > 1 ||
++ (priv->gpio_poc[1] != GPIO_ACTIVE_HIGH &&
++ priv->gpio_poc[1] != GPIO_ACTIVE_LOW)) {
++ dev_err(dev, "Invalid 'gpio-poc' property\n");
++ return -EINVAL;
+ }
+
++ return 0;
++}
++
++static int max9286_poc_enable(struct max9286_priv *priv, bool enable)
++{
++ int ret;
++
++ /* If the regulator is not available, use gpio to control power. */
++ if (!priv->regulator)
++ ret = max9286_gpio_set(priv, priv->gpio_poc[0],
++ enable ^ priv->gpio_poc[1]);
++ else if (enable)
++ ret = regulator_enable(priv->regulator);
++ else
++ ret = regulator_disable(priv->regulator);
++
++ if (ret < 0)
++ dev_err(&priv->client->dev, "Unable to turn power %s\n",
++ enable ? "on" : "off");
++
++ return ret;
++}
++
++static int max9286_init(struct max9286_priv *priv)
++{
++ struct i2c_client *client = priv->client;
++ int ret;
++
++ ret = max9286_poc_enable(priv, true);
++ if (ret)
++ return ret;
++
+ ret = max9286_setup(priv);
+ if (ret) {
+- dev_err(dev, "Unable to setup max9286\n");
+- goto err_regulator;
++ dev_err(&client->dev, "Unable to setup max9286\n");
++ goto err_poc_disable;
+ }
+
+ /*
+@@ -1099,13 +1163,13 @@ static int max9286_init(struct device *dev)
+ */
+ ret = max9286_v4l2_register(priv);
+ if (ret) {
+- dev_err(dev, "Failed to register with V4L2\n");
+- goto err_regulator;
++ dev_err(&client->dev, "Failed to register with V4L2\n");
++ goto err_poc_disable;
+ }
+
+ ret = max9286_i2c_mux_init(priv);
+ if (ret) {
+- dev_err(dev, "Unable to initialize I2C multiplexer\n");
++ dev_err(&client->dev, "Unable to initialize I2C multiplexer\n");
+ goto err_v4l2_register;
+ }
+
+@@ -1116,8 +1180,8 @@ static int max9286_init(struct device *dev)
+
+ err_v4l2_register:
+ max9286_v4l2_unregister(priv);
+-err_regulator:
+- regulator_disable(priv->regulator);
++err_poc_disable:
++ max9286_poc_enable(priv, false);
+
+ return ret;
+ }
+@@ -1260,7 +1324,6 @@ static int max9286_probe(struct i2c_client *client)
+ mutex_init(&priv->mutex);
+
+ priv->client = client;
+- i2c_set_clientdata(client, priv);
+
+ priv->gpiod_pwdn = devm_gpiod_get_optional(&client->dev, "enable",
+ GPIOD_OUT_HIGH);
+@@ -1288,23 +1351,15 @@ static int max9286_probe(struct i2c_client *client)
+ */
+ max9286_configure_i2c(priv, false);
+
+- ret = max9286_register_gpio(priv);
++ ret = max9286_parse_gpios(priv);
+ if (ret)
+ goto err_powerdown;
+
+- priv->regulator = devm_regulator_get(&client->dev, "poc");
+- if (IS_ERR(priv->regulator)) {
+- ret = PTR_ERR(priv->regulator);
+- dev_err_probe(&client->dev, ret,
+- "Unable to get PoC regulator\n");
+- goto err_powerdown;
+- }
+-
+ ret = max9286_parse_dt(priv);
+ if (ret)
+ goto err_powerdown;
+
+- ret = max9286_init(&client->dev);
++ ret = max9286_init(priv);
+ if (ret < 0)
+ goto err_cleanup_dt;
+
+@@ -1320,13 +1375,13 @@ err_powerdown:
+
+ static int max9286_remove(struct i2c_client *client)
+ {
+- struct max9286_priv *priv = i2c_get_clientdata(client);
++ struct max9286_priv *priv = sd_to_max9286(i2c_get_clientdata(client));
+
+ i2c_mux_del_adapters(priv->mux);
+
+ max9286_v4l2_unregister(priv);
+
+- regulator_disable(priv->regulator);
++ max9286_poc_enable(priv, false);
+
+ gpiod_set_value_cansleep(priv->gpiod_pwdn, 0);
+
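The max9286 rework above funnels both power-over-coax strategies through one helper: when no regulator was acquired, a GPIO line with explicit polarity drives camera power instead. A hedged sketch of that shape, with all names hypothetical:

#include <linux/regulator/consumer.h>

struct my_priv {
        struct regulator *regulator;    /* NULL when a GPIO drives power */
        unsigned int gpio_line;
        unsigned int gpio_pol;          /* GPIO_ACTIVE_HIGH / _LOW */
};

static int my_gpio_set(struct my_priv *p, unsigned int line, int value)
{
        return 0;       /* stands in for the chip's GPIO register write */
}

/* One on/off entry point, whatever the board wired up for power. */
static int poc_set_power(struct my_priv *p, bool enable)
{
        if (!p->regulator)
                return my_gpio_set(p, p->gpio_line, enable ^ p->gpio_pol);

        return enable ? regulator_enable(p->regulator)
                      : regulator_disable(p->regulator);
}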
+diff --git a/drivers/media/i2c/ov5648.c b/drivers/media/i2c/ov5648.c
+index ef8b52dc9401d..bb3666fc56183 100644
+--- a/drivers/media/i2c/ov5648.c
++++ b/drivers/media/i2c/ov5648.c
+@@ -2498,9 +2498,9 @@ static int ov5648_probe(struct i2c_client *client)
+
+ /* DOVDD: digital I/O */
+ sensor->dovdd = devm_regulator_get(dev, "dovdd");
+- if (IS_ERR(sensor->dvdd)) {
++ if (IS_ERR(sensor->dovdd)) {
+ dev_err(dev, "cannot get DOVDD (digital I/O) regulator\n");
+- ret = PTR_ERR(sensor->dvdd);
++ ret = PTR_ERR(sensor->dovdd);
+ goto error_endpoint;
+ }
+
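The ov5648 fix above is a classic copy-paste bug: the DOVDD supply was fetched, but the previously fetched DVDD pointer was the one tested and returned. A small sketch of the correct shape, using a made-up sensor struct:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

struct my_sensor {
        struct regulator *dvdd;
        struct regulator *dovdd;
};

/* After each devm_regulator_get(), test the pointer just assigned,
 * not the one left over from the copy-pasted block above it.
 */
static int get_sensor_supplies(struct my_sensor *sensor, struct device *dev)
{
        sensor->dvdd = devm_regulator_get(dev, "dvdd");
        if (IS_ERR(sensor->dvdd))
                return PTR_ERR(sensor->dvdd);

        sensor->dovdd = devm_regulator_get(dev, "dovdd");
        if (IS_ERR(sensor->dovdd))      /* dovdd, not dvdd */
                return PTR_ERR(sensor->dovdd);

        return 0;
}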
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index 1967464231160..1be2c0e5bdc15 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -2017,7 +2017,6 @@ static int ov7670_remove(struct i2c_client *client)
+ v4l2_async_unregister_subdev(sd);
+ v4l2_ctrl_handler_free(&info->hdl);
+ media_entity_cleanup(&info->sd.entity);
+- ov7670_power_off(sd);
+ return 0;
+ }
+
+diff --git a/drivers/media/i2c/rdacm20.c b/drivers/media/i2c/rdacm20.c
+index 025a610de8935..9c6f66cab5642 100644
+--- a/drivers/media/i2c/rdacm20.c
++++ b/drivers/media/i2c/rdacm20.c
+@@ -611,7 +611,7 @@ static int rdacm20_probe(struct i2c_client *client)
+ goto error_free_ctrls;
+
+ dev->pad.flags = MEDIA_PAD_FL_SOURCE;
+- dev->sd.entity.flags |= MEDIA_ENT_F_CAM_SENSOR;
++ dev->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+ ret = media_entity_pads_init(&dev->sd.entity, 1, &dev->pad);
+ if (ret < 0)
+ goto error_free_ctrls;
+diff --git a/drivers/media/i2c/rdacm21.c b/drivers/media/i2c/rdacm21.c
+index 12ec5467ed1ee..ef31cf5f23cac 100644
+--- a/drivers/media/i2c/rdacm21.c
++++ b/drivers/media/i2c/rdacm21.c
+@@ -583,7 +583,7 @@ static int rdacm21_probe(struct i2c_client *client)
+ goto error_free_ctrls;
+
+ dev->pad.flags = MEDIA_PAD_FL_SOURCE;
+- dev->sd.entity.flags |= MEDIA_ENT_F_CAM_SENSOR;
++ dev->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+ ret = media_entity_pads_init(&dev->sd.entity, 1, &dev->pad);
+ if (ret < 0)
+ goto error_free_ctrls;
+diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c
+index f8f2ff3b00c37..a07b18f2034e9 100644
+--- a/drivers/media/pci/cx23885/cx23885-core.c
++++ b/drivers/media/pci/cx23885/cx23885-core.c
+@@ -2165,7 +2165,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ err = dma_set_mask(&pci_dev->dev, 0xffffffff);
+ if (err) {
+ pr_err("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name);
+- goto fail_ctrl;
++ goto fail_dma_set_mask;
+ }
+
+ err = request_irq(pci_dev->irq, cx23885_irq,
+@@ -2173,7 +2173,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ if (err < 0) {
+ pr_err("%s: can't get IRQ %d\n",
+ dev->name, pci_dev->irq);
+- goto fail_irq;
++ goto fail_dma_set_mask;
+ }
+
+ switch (dev->board) {
+@@ -2195,7 +2195,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+
+ return 0;
+
+-fail_irq:
++fail_dma_set_mask:
+ cx23885_dev_unregister(dev);
+ fail_ctrl:
+ v4l2_ctrl_handler_free(hdl);
+diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c
+index 3078a39f0b95d..6627fa9166d30 100644
+--- a/drivers/media/pci/cx25821/cx25821-core.c
++++ b/drivers/media/pci/cx25821/cx25821-core.c
+@@ -1332,11 +1332,11 @@ static void cx25821_finidev(struct pci_dev *pci_dev)
+ struct cx25821_dev *dev = get_cx25821(v4l2_dev);
+
+ cx25821_shutdown(dev);
+- pci_disable_device(pci_dev);
+
+ /* unregister stuff */
+ if (pci_dev->irq)
+ free_irq(pci_dev->irq, dev);
++ pci_disable_device(pci_dev);
+
+ cx25821_dev_unregister(dev);
+ v4l2_device_unregister(v4l2_dev);
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index bdeecde0d9978..1e3c5c7d6dd7f 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -1828,6 +1828,7 @@ static int aspeed_video_probe(struct platform_device *pdev)
+
+ rc = aspeed_video_setup_video(video);
+ if (rc) {
++ aspeed_video_free_buf(video, &video->jpeg);
+ clk_unprepare(video->vclk);
+ clk_unprepare(video->eclk);
+ return rc;
+@@ -1859,8 +1860,7 @@ static int aspeed_video_remove(struct platform_device *pdev)
+
+ v4l2_device_unregister(v4l2_dev);
+
+- dma_free_coherent(video->dev, VE_JPEG_HEADER_SIZE, video->jpeg.virt,
+- video->jpeg.dma);
++ aspeed_video_free_buf(video, &video->jpeg);
+
+ of_reserved_mem_device_release(dev);
+
+diff --git a/drivers/media/platform/atmel/atmel-sama5d2-isc.c b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+index 1b2063cce0f72..a1fd240c6aeb8 100644
+--- a/drivers/media/platform/atmel/atmel-sama5d2-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+@@ -267,7 +267,7 @@ static void isc_sama5d2_config_rlp(struct isc_device *isc)
+ * Thus, if the YCYC mode is selected, replace it with the
+ * sama5d2-compliant mode which is YYCC .
+ */
+- if ((rlp_mode & ISC_RLP_CFG_MODE_YCYC) == ISC_RLP_CFG_MODE_YCYC) {
++ if ((rlp_mode & ISC_RLP_CFG_MODE_MASK) == ISC_RLP_CFG_MODE_YCYC) {
+ rlp_mode &= ~ISC_RLP_CFG_MODE_MASK;
+ rlp_mode |= ISC_RLP_CFG_MODE_YYCC;
+ }
+@@ -538,7 +538,7 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ ret = clk_prepare_enable(isc->ispck);
+ if (ret) {
+ dev_err(dev, "failed to enable ispck: %d\n", ret);
+- goto cleanup_subdev;
++ goto disable_pm;
+ }
+
+ /* ispck should be greater or equal to hclock */
+@@ -556,6 +556,9 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ unprepare_clk:
+ clk_disable_unprepare(isc->ispck);
+
++disable_pm:
++ pm_runtime_disable(dev);
++
+ cleanup_subdev:
+ isc_subdev_cleanup(isc);
+
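Several hunks in this series (atmel-isc here, fimc-is and sti/delta further down) add a missing pm_runtime_disable() to probe error paths; the unwind labels have to mirror the acquisition order exactly. A compact sketch under that assumption, with placeholder names throughout:

#include <linux/clk.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static struct clk *my_clk;              /* placeholder clock handle */

static int my_setup(struct device *dev)
{
        return 0;                       /* stands in for real init work */
}

static int my_probe(struct platform_device *pdev)
{
        struct device *dev = &pdev->dev;
        int ret;

        pm_runtime_enable(dev);

        ret = clk_prepare_enable(my_clk);
        if (ret)
                goto disable_pm;        /* undo pm_runtime_enable() only */

        ret = my_setup(dev);
        if (ret)
                goto unprepare_clk;     /* undo the clock, then PM */

        return 0;

unprepare_clk:
        clk_disable_unprepare(my_clk);
disable_pm:
        pm_runtime_disable(dev);
        return ret;
}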
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index a57822b050706..27d002d9f631f 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -1324,7 +1324,8 @@ static int coda_enum_frameintervals(struct file *file, void *fh,
+ struct v4l2_frmivalenum *f)
+ {
+ struct coda_ctx *ctx = fh_to_ctx(fh);
+- int i;
++ struct coda_q_data *q_data;
++ const struct coda_codec *codec;
+
+ if (f->index)
+ return -EINVAL;
+@@ -1333,12 +1334,19 @@ static int coda_enum_frameintervals(struct file *file, void *fh,
+ if (!ctx->vdoa && f->pixel_format == V4L2_PIX_FMT_YUYV)
+ return -EINVAL;
+
+- for (i = 0; i < CODA_MAX_FORMATS; i++) {
+- if (f->pixel_format == ctx->cvd->src_formats[i] ||
+- f->pixel_format == ctx->cvd->dst_formats[i])
+- break;
++ if (coda_format_normalize_yuv(f->pixel_format) == V4L2_PIX_FMT_YUV420) {
++ q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++ codec = coda_find_codec(ctx->dev, f->pixel_format,
++ q_data->fourcc);
++ } else {
++ codec = coda_find_codec(ctx->dev, V4L2_PIX_FMT_YUV420,
++ f->pixel_format);
+ }
+- if (i == CODA_MAX_FORMATS)
++ if (!codec)
++ return -EINVAL;
++
++ if (f->width < MIN_W || f->width > codec->max_w ||
++ f->height < MIN_H || f->height > codec->max_h)
+ return -EINVAL;
+
+ f->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
+@@ -2344,8 +2352,8 @@ static void coda_encode_ctrls(struct coda_ctx *ctx)
+ V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET, -12, 12, 1, 0);
+ v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+- V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE, 0x0,
+- V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE);
++ V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE, 0x0,
++ V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE);
+ if (ctx->dev->devtype->product == CODA_HX4 ||
+ ctx->dev->devtype->product == CODA_7541) {
+ v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+@@ -2359,12 +2367,15 @@ static void coda_encode_ctrls(struct coda_ctx *ctx)
+ if (ctx->dev->devtype->product == CODA_960) {
+ v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_LEVEL,
+- V4L2_MPEG_VIDEO_H264_LEVEL_4_0,
+- ~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) |
++ V4L2_MPEG_VIDEO_H264_LEVEL_4_2,
++ ~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_1_0) |
++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) |
+ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_0) |
+ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_1) |
+ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_2) |
+- (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0)),
++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0) |
++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_1) |
++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_2)),
+ V4L2_MPEG_VIDEO_H264_LEVEL_4_0);
+ }
+ v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
+@@ -2426,7 +2437,7 @@ static void coda_decode_ctrls(struct coda_ctx *ctx)
+ ctx->h264_profile_ctrl = v4l2_ctrl_new_std_menu(&ctx->ctrls,
+ &coda_ctrl_ops, V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+ V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
+- ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) |
++ ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) |
+ (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) |
+ (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)),
+ V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
+diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c
+index e55e411038f48..e3072d69c49fa 100644
+--- a/drivers/media/platform/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/exynos4-is/fimc-is.c
+@@ -140,7 +140,7 @@ static int fimc_is_enable_clocks(struct fimc_is *is)
+ dev_err(&is->pdev->dev, "clock %s enable failed\n",
+ fimc_is_clocks[i]);
+ for (--i; i >= 0; i--)
+- clk_disable(is->clocks[i]);
++ clk_disable_unprepare(is->clocks[i]);
+ return ret;
+ }
+ pr_debug("enabled clock: %s\n", fimc_is_clocks[i]);
+@@ -830,7 +830,7 @@ static int fimc_is_probe(struct platform_device *pdev)
+
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0)
+- goto err_irq;
++ goto err_pm_disable;
+
+ vb2_dma_contig_set_max_seg_size(dev, DMA_BIT_MASK(32));
+
+@@ -864,6 +864,8 @@ err_pm:
+ pm_runtime_put_noidle(dev);
+ if (!pm_runtime_enabled(dev))
+ fimc_is_runtime_suspend(dev);
++err_pm_disable:
++ pm_runtime_disable(dev);
+ err_irq:
+ free_irq(is->irq, is);
+ err_clk:
+diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.h b/drivers/media/platform/exynos4-is/fimc-isp-video.h
+index edcb3a5e3cb90..2dd4ddbc748a1 100644
+--- a/drivers/media/platform/exynos4-is/fimc-isp-video.h
++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.h
+@@ -32,7 +32,7 @@ static inline int fimc_isp_video_device_register(struct fimc_isp *isp,
+ return 0;
+ }
+
+-void fimc_isp_video_device_unregister(struct fimc_isp *isp,
++static inline void fimc_isp_video_device_unregister(struct fimc_isp *isp,
+ enum v4l2_buf_type type)
+ {
+ }
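The fimc-isp-video.h hunk adds the missing `static inline` to a header stub; without it, every translation unit including the header emits a global definition and the link fails. A generic sketch of the stub pattern, with a hypothetical config symbol:

struct my_dev;

#ifdef CONFIG_MY_FEATURE
int my_feature_register(struct my_dev *d);
void my_feature_unregister(struct my_dev *d);
#else
/* Stubs must be 'static inline': a plain definition in a header is
 * emitted by every including unit, producing "multiple definition"
 * link errors.
 */
static inline int my_feature_register(struct my_dev *d)
{
        return 0;
}

static inline void my_feature_unregister(struct my_dev *d)
{
}
#endif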
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c
+index 2b334a8a81c62..f99771a038b06 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c
+@@ -47,14 +47,7 @@ static struct mtk_q_data *mtk_vdec_get_q_data(struct mtk_vcodec_ctx *ctx,
+ static int vidioc_try_decoder_cmd(struct file *file, void *priv,
+ struct v4l2_decoder_cmd *cmd)
+ {
+- struct mtk_vcodec_ctx *ctx = fh_to_ctx(priv);
+-
+- /* Use M2M stateless helper if relevant */
+- if (ctx->dev->vdec_pdata->uses_stateless_api)
+- return v4l2_m2m_ioctl_stateless_try_decoder_cmd(file, priv,
+- cmd);
+- else
+- return v4l2_m2m_ioctl_try_decoder_cmd(file, priv, cmd);
++ return v4l2_m2m_ioctl_try_decoder_cmd(file, priv, cmd);
+ }
+
+
+@@ -69,10 +62,6 @@ static int vidioc_decoder_cmd(struct file *file, void *priv,
+ if (ret)
+ return ret;
+
+- /* Use M2M stateless helper if relevant */
+- if (ctx->dev->vdec_pdata->uses_stateless_api)
+- return v4l2_m2m_ioctl_stateless_decoder_cmd(file, priv, cmd);
+-
+ mtk_v4l2_debug(1, "decoder cmd=%u", cmd->cmd);
+ dst_vq = v4l2_m2m_get_vq(ctx->m2m_ctx,
+ V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+index 40c39e1e596b9..4b5cbaf672ab8 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
+@@ -315,6 +315,9 @@ static int mtk_vcodec_probe(struct platform_device *pdev)
+ }
+
+ if (dev->vdec_pdata->uses_stateless_api) {
++ v4l2_disable_ioctl(vfd_dec, VIDIOC_DECODER_CMD);
++ v4l2_disable_ioctl(vfd_dec, VIDIOC_TRY_DECODER_CMD);
++
+ dev->mdev_dec.dev = &pdev->dev;
+ strscpy(dev->mdev_dec.model, MTK_VCODEC_DEC_NAME,
+ sizeof(dev->mdev_dec.model));
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index 0bca95d016507..fa01edd54c038 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -90,12 +90,28 @@ bool venus_helper_check_codec(struct venus_inst *inst, u32 v4l2_pixfmt)
+ }
+ EXPORT_SYMBOL_GPL(venus_helper_check_codec);
+
++static void free_dpb_buf(struct venus_inst *inst, struct intbuf *buf)
++{
++ ida_free(&inst->dpb_ids, buf->dpb_out_tag);
++
++ list_del_init(&buf->list);
++ dma_free_attrs(inst->core->dev, buf->size, buf->va, buf->da,
++ buf->attrs);
++ kfree(buf);
++}
++
+ int venus_helper_queue_dpb_bufs(struct venus_inst *inst)
+ {
+- struct intbuf *buf;
++ struct intbuf *buf, *next;
++ unsigned int dpb_size = 0;
+ int ret = 0;
+
+- list_for_each_entry(buf, &inst->dpbbufs, list) {
++ if (inst->dpb_buftype == HFI_BUFFER_OUTPUT)
++ dpb_size = inst->output_buf_size;
++ else if (inst->dpb_buftype == HFI_BUFFER_OUTPUT2)
++ dpb_size = inst->output2_buf_size;
++
++ list_for_each_entry_safe(buf, next, &inst->dpbbufs, list) {
+ struct hfi_frame_data fdata;
+
+ memset(&fdata, 0, sizeof(fdata));
+@@ -106,6 +122,12 @@ int venus_helper_queue_dpb_bufs(struct venus_inst *inst)
+ if (buf->owned_by == FIRMWARE)
+ continue;
+
++ /* free buffer from previous sequence which was released later */
++ if (dpb_size > buf->size) {
++ free_dpb_buf(inst, buf);
++ continue;
++ }
++
+ fdata.clnt_data = buf->dpb_out_tag;
+
+ ret = hfi_session_process_buf(inst, &fdata);
+@@ -127,13 +149,7 @@ int venus_helper_free_dpb_bufs(struct venus_inst *inst)
+ list_for_each_entry_safe(buf, n, &inst->dpbbufs, list) {
+ if (buf->owned_by == FIRMWARE)
+ continue;
+-
+- ida_free(&inst->dpb_ids, buf->dpb_out_tag);
+-
+- list_del_init(&buf->list);
+- dma_free_attrs(inst->core->dev, buf->size, buf->va, buf->da,
+- buf->attrs);
+- kfree(buf);
++ free_dpb_buf(inst, buf);
+ }
+
+ if (list_empty(&inst->dpbbufs))
+diff --git a/drivers/media/platform/qcom/venus/hfi.c b/drivers/media/platform/qcom/venus/hfi.c
+index 4e2151fb47f08..1968f09ad177a 100644
+--- a/drivers/media/platform/qcom/venus/hfi.c
++++ b/drivers/media/platform/qcom/venus/hfi.c
+@@ -104,6 +104,9 @@ int hfi_core_deinit(struct venus_core *core, bool blocking)
+ mutex_lock(&core->lock);
+ }
+
++ if (!core->ops)
++ goto unlock;
++
+ ret = core->ops->core_deinit(core);
+
+ if (!ret)
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 3d3d1062e2122..2f8df74ad0fde 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -865,7 +865,7 @@ static int rga_probe(struct platform_device *pdev)
+
+ ret = pm_runtime_resume_and_get(rga->dev);
+ if (ret < 0)
+- goto rel_vdev;
++ goto rel_m2m;
+
+ rga->version.major = (rga_read(rga, RGA_VERSION_INFO) >> 24) & 0xFF;
+ rga->version.minor = (rga_read(rga, RGA_VERSION_INFO) >> 20) & 0x0F;
+@@ -881,7 +881,7 @@ static int rga_probe(struct platform_device *pdev)
+ DMA_ATTR_WRITE_COMBINE);
+ if (!rga->cmdbuf_virt) {
+ ret = -ENOMEM;
+- goto rel_vdev;
++ goto rel_m2m;
+ }
+
+ rga->src_mmu_pages =
+@@ -918,6 +918,8 @@ free_src_pages:
+ free_dma:
+ dma_free_attrs(rga->dev, RGA_CMDBUF_SIZE, rga->cmdbuf_virt,
+ rga->cmdbuf_phy, DMA_ATTR_WRITE_COMBINE);
++rel_m2m:
++ v4l2_m2m_release(rga->m2m_dev);
+ rel_vdev:
+ video_device_release(vfd);
+ unreg_v4l2_dev:
+diff --git a/drivers/media/platform/sti/delta/delta-v4l2.c b/drivers/media/platform/sti/delta/delta-v4l2.c
+index c887a31ebb540..420ad4d8df5d5 100644
+--- a/drivers/media/platform/sti/delta/delta-v4l2.c
++++ b/drivers/media/platform/sti/delta/delta-v4l2.c
+@@ -1859,7 +1859,7 @@ static int delta_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(delta->dev, "%s failed to initialize firmware ipc channel\n",
+ DELTA_PREFIX);
+- goto err;
++ goto err_pm_disable;
+ }
+
+ /* register all available decoders */
+@@ -1873,7 +1873,7 @@ static int delta_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(delta->dev, "%s failed to register V4L2 device\n",
+ DELTA_PREFIX);
+- goto err;
++ goto err_pm_disable;
+ }
+
+ delta->work_queue = create_workqueue(DELTA_NAME);
+@@ -1898,6 +1898,8 @@ err_work_queue:
+ destroy_workqueue(delta->work_queue);
+ err_v4l2:
+ v4l2_device_unregister(&delta->v4l2_dev);
++err_pm_disable:
++ pm_runtime_disable(dev);
+ err:
+ return ret;
+ }
+diff --git a/drivers/media/platform/vsp1/vsp1_rpf.c b/drivers/media/platform/vsp1/vsp1_rpf.c
+index 85587c1b6a373..75083cb234fe3 100644
+--- a/drivers/media/platform/vsp1/vsp1_rpf.c
++++ b/drivers/media/platform/vsp1/vsp1_rpf.c
+@@ -291,11 +291,11 @@ static void rpf_configure_partition(struct vsp1_entity *entity,
+ + crop.left * fmtinfo->bpp[0] / 8;
+
+ if (format->num_planes > 1) {
++ unsigned int bpl = format->plane_fmt[1].bytesperline;
+ unsigned int offset;
+
+- offset = crop.top * format->plane_fmt[1].bytesperline
+- + crop.left / fmtinfo->hsub
+- * fmtinfo->bpp[1] / 8;
++ offset = crop.top / fmtinfo->vsub * bpl
++ + crop.left / fmtinfo->hsub * fmtinfo->bpp[1] / 8;
+ mem.addr[1] += offset;
+ mem.addr[2] += offset;
+ }
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 54da6f60079ba..ab090663f975f 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -153,6 +153,24 @@ struct imon_context {
+ const struct imon_usb_dev_descr *dev_descr;
+ /* device description with key */
+ /* table for front panels */
++ /*
++ * Fields for deferring free_imon_context().
++ *
++	 * Since a reference to "struct imon_context" is stored in
++ * "struct file"->private_data, we need to remember
++ * how many file descriptors might access this "struct imon_context".
++ */
++ refcount_t users;
++ /*
++ * Use a flag for telling display_open()/vfd_write()/lcd_write() that
++ * imon_disconnect() was already called.
++ */
++ bool disconnected;
++ /*
++ * We need to wait for RCU grace period in order to allow
++ * display_open() to safely check ->disconnected and increment ->users.
++ */
++ struct rcu_head rcu;
+ };
+
+ #define TOUCH_TIMEOUT (HZ/30)
+@@ -160,18 +178,18 @@ struct imon_context {
+ /* vfd character device file operations */
+ static const struct file_operations vfd_fops = {
+ .owner = THIS_MODULE,
+- .open = &display_open,
+- .write = &vfd_write,
+- .release = &display_close,
++ .open = display_open,
++ .write = vfd_write,
++ .release = display_close,
+ .llseek = noop_llseek,
+ };
+
+ /* lcd character device file operations */
+ static const struct file_operations lcd_fops = {
+ .owner = THIS_MODULE,
+- .open = &display_open,
+- .write = &lcd_write,
+- .release = &display_close,
++ .open = display_open,
++ .write = lcd_write,
++ .release = display_close,
+ .llseek = noop_llseek,
+ };
+
+@@ -439,9 +457,6 @@ static struct usb_driver imon_driver = {
+ .id_table = imon_usb_id_table,
+ };
+
+-/* to prevent races between open() and disconnect(), probing, etc */
+-static DEFINE_MUTEX(driver_lock);
+-
+ /* Module bookkeeping bits */
+ MODULE_AUTHOR(MOD_AUTHOR);
+ MODULE_DESCRIPTION(MOD_DESC);
+@@ -481,9 +496,11 @@ static void free_imon_context(struct imon_context *ictx)
+ struct device *dev = ictx->dev;
+
+ usb_free_urb(ictx->tx_urb);
++ WARN_ON(ictx->dev_present_intf0);
+ usb_free_urb(ictx->rx_urb_intf0);
++ WARN_ON(ictx->dev_present_intf1);
+ usb_free_urb(ictx->rx_urb_intf1);
+- kfree(ictx);
++ kfree_rcu(ictx, rcu);
+
+ dev_dbg(dev, "%s: iMON context freed\n", __func__);
+ }
+@@ -499,9 +516,6 @@ static int display_open(struct inode *inode, struct file *file)
+ int subminor;
+ int retval = 0;
+
+- /* prevent races with disconnect */
+- mutex_lock(&driver_lock);
+-
+ subminor = iminor(inode);
+ interface = usb_find_interface(&imon_driver, subminor);
+ if (!interface) {
+@@ -509,13 +523,16 @@ static int display_open(struct inode *inode, struct file *file)
+ retval = -ENODEV;
+ goto exit;
+ }
+- ictx = usb_get_intfdata(interface);
+
+- if (!ictx) {
++ rcu_read_lock();
++ ictx = usb_get_intfdata(interface);
++ if (!ictx || ictx->disconnected || !refcount_inc_not_zero(&ictx->users)) {
++ rcu_read_unlock();
+ pr_err("no context found for minor %d\n", subminor);
+ retval = -ENODEV;
+ goto exit;
+ }
++ rcu_read_unlock();
+
+ mutex_lock(&ictx->lock);
+
+@@ -533,8 +550,10 @@ static int display_open(struct inode *inode, struct file *file)
+
+ mutex_unlock(&ictx->lock);
+
++ if (retval && refcount_dec_and_test(&ictx->users))
++ free_imon_context(ictx);
++
+ exit:
+- mutex_unlock(&driver_lock);
+ return retval;
+ }
+
+@@ -544,16 +563,9 @@ exit:
+ */
+ static int display_close(struct inode *inode, struct file *file)
+ {
+- struct imon_context *ictx = NULL;
++ struct imon_context *ictx = file->private_data;
+ int retval = 0;
+
+- ictx = file->private_data;
+-
+- if (!ictx) {
+- pr_err("no context for device\n");
+- return -ENODEV;
+- }
+-
+ mutex_lock(&ictx->lock);
+
+ if (!ictx->display_supported) {
+@@ -568,6 +580,8 @@ static int display_close(struct inode *inode, struct file *file)
+ }
+
+ mutex_unlock(&ictx->lock);
++ if (refcount_dec_and_test(&ictx->users))
++ free_imon_context(ictx);
+ return retval;
+ }
+
+@@ -934,15 +948,12 @@ static ssize_t vfd_write(struct file *file, const char __user *buf,
+ int offset;
+ int seq;
+ int retval = 0;
+- struct imon_context *ictx;
++ struct imon_context *ictx = file->private_data;
+ static const unsigned char vfd_packet6[] = {
+ 0x01, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF };
+
+- ictx = file->private_data;
+- if (!ictx) {
+- pr_err_ratelimited("no context for device\n");
++ if (ictx->disconnected)
+ return -ENODEV;
+- }
+
+ mutex_lock(&ictx->lock);
+
+@@ -1018,13 +1029,10 @@ static ssize_t lcd_write(struct file *file, const char __user *buf,
+ size_t n_bytes, loff_t *pos)
+ {
+ int retval = 0;
+- struct imon_context *ictx;
++ struct imon_context *ictx = file->private_data;
+
+- ictx = file->private_data;
+- if (!ictx) {
+- pr_err_ratelimited("no context for device\n");
++ if (ictx->disconnected)
+ return -ENODEV;
+- }
+
+ mutex_lock(&ictx->lock);
+
+@@ -2404,7 +2412,6 @@ static int imon_probe(struct usb_interface *interface,
+ int ifnum, sysfs_err;
+ int ret = 0;
+ struct imon_context *ictx = NULL;
+- struct imon_context *first_if_ctx = NULL;
+ u16 vendor, product;
+
+ usbdev = usb_get_dev(interface_to_usbdev(interface));
+@@ -2416,17 +2423,12 @@ static int imon_probe(struct usb_interface *interface,
+ dev_dbg(dev, "%s: found iMON device (%04x:%04x, intf%d)\n",
+ __func__, vendor, product, ifnum);
+
+- /* prevent races probing devices w/multiple interfaces */
+- mutex_lock(&driver_lock);
+-
+ first_if = usb_ifnum_to_if(usbdev, 0);
+ if (!first_if) {
+ ret = -ENODEV;
+ goto fail;
+ }
+
+- first_if_ctx = usb_get_intfdata(first_if);
+-
+ if (ifnum == 0) {
+ ictx = imon_init_intf0(interface, id);
+ if (!ictx) {
+@@ -2434,9 +2436,11 @@ static int imon_probe(struct usb_interface *interface,
+ ret = -ENODEV;
+ goto fail;
+ }
++ refcount_set(&ictx->users, 1);
+
+ } else {
+ /* this is the secondary interface on the device */
++ struct imon_context *first_if_ctx = usb_get_intfdata(first_if);
+
+ /* fail early if first intf failed to register */
+ if (!first_if_ctx) {
+@@ -2450,14 +2454,13 @@ static int imon_probe(struct usb_interface *interface,
+ ret = -ENODEV;
+ goto fail;
+ }
++ refcount_inc(&ictx->users);
+
+ }
+
+ usb_set_intfdata(interface, ictx);
+
+ if (ifnum == 0) {
+- mutex_lock(&ictx->lock);
+-
+ if (product == 0xffdc && ictx->rf_device) {
+ sysfs_err = sysfs_create_group(&interface->dev.kobj,
+ &imon_rf_attr_group);
+@@ -2468,21 +2471,17 @@ static int imon_probe(struct usb_interface *interface,
+
+ if (ictx->display_supported)
+ imon_init_display(ictx, interface);
+-
+- mutex_unlock(&ictx->lock);
+ }
+
+ dev_info(dev, "iMON device (%04x:%04x, intf%d) on usb<%d:%d> initialized\n",
+ vendor, product, ifnum,
+ usbdev->bus->busnum, usbdev->devnum);
+
+- mutex_unlock(&driver_lock);
+ usb_put_dev(usbdev);
+
+ return 0;
+
+ fail:
+- mutex_unlock(&driver_lock);
+ usb_put_dev(usbdev);
+ dev_err(dev, "unable to register, err %d\n", ret);
+
+@@ -2498,10 +2497,8 @@ static void imon_disconnect(struct usb_interface *interface)
+ struct device *dev;
+ int ifnum;
+
+- /* prevent races with multi-interface device probing and display_open */
+- mutex_lock(&driver_lock);
+-
+ ictx = usb_get_intfdata(interface);
++ ictx->disconnected = true;
+ dev = ictx->dev;
+ ifnum = interface->cur_altsetting->desc.bInterfaceNumber;
+
+@@ -2542,11 +2539,9 @@ static void imon_disconnect(struct usb_interface *interface)
+ }
+ }
+
+- if (!ictx->dev_present_intf0 && !ictx->dev_present_intf1)
++ if (refcount_dec_and_test(&ictx->users))
+ free_imon_context(ictx);
+
+- mutex_unlock(&driver_lock);
+-
+ dev_dbg(dev, "%s: iMON device (intf%d) disconnected\n",
+ __func__, ifnum);
+ }
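The imon rework above drops the global driver_lock in favour of per-context reference counting plus RCU: open() may only take a reference while the context is provably live, and the last put frees it after a grace period. A minimal sketch of the pattern (the struct and names are illustrative):

#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct ctx {
        refcount_t users;
        bool disconnected;
        struct rcu_head rcu;
};

/* open() path: take a reference only while the context is still live */
static struct ctx *ctx_get_live(struct ctx *c)
{
        rcu_read_lock();
        if (!c || c->disconnected || !refcount_inc_not_zero(&c->users))
                c = NULL;
        rcu_read_unlock();
        return c;
}

/* release()/disconnect() path: last reference frees after a grace period */
static void ctx_put(struct ctx *c)
{
        if (refcount_dec_and_test(&c->users))
                kfree_rcu(c, rcu);
}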
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index cd7b118d59290..a9666373af6b9 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -2569,6 +2569,11 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ } while (0);
+ mutex_unlock(&pvr2_unit_mtx);
+
++ INIT_WORK(&hdw->workpoll, pvr2_hdw_worker_poll);
++
++ if (hdw->unit_number == -1)
++ goto fail;
++
+ cnt1 = 0;
+ cnt2 = scnprintf(hdw->name+cnt1,sizeof(hdw->name)-cnt1,"pvrusb2");
+ cnt1 += cnt2;
+@@ -2580,8 +2585,6 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ if (cnt1 >= sizeof(hdw->name)) cnt1 = sizeof(hdw->name)-1;
+ hdw->name[cnt1] = 0;
+
+- INIT_WORK(&hdw->workpoll,pvr2_hdw_worker_poll);
+-
+ pvr2_trace(PVR2_TRACE_INIT,"Driver unit number is %d, name is %s",
+ hdw->unit_number,hdw->name);
+
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 711556d13d03a..177181985345c 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -871,29 +871,31 @@ static int uvc_ioctl_enum_input(struct file *file, void *fh,
+ struct uvc_video_chain *chain = handle->chain;
+ const struct uvc_entity *selector = chain->selector;
+ struct uvc_entity *iterm = NULL;
++ struct uvc_entity *it;
+ u32 index = input->index;
+- int pin = 0;
+
+ if (selector == NULL ||
+ (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) {
+ if (index != 0)
+ return -EINVAL;
+- list_for_each_entry(iterm, &chain->entities, chain) {
+- if (UVC_ENTITY_IS_ITERM(iterm))
++ list_for_each_entry(it, &chain->entities, chain) {
++ if (UVC_ENTITY_IS_ITERM(it)) {
++ iterm = it;
+ break;
++ }
+ }
+- pin = iterm->id;
+ } else if (index < selector->bNrInPins) {
+- pin = selector->baSourceID[index];
+- list_for_each_entry(iterm, &chain->entities, chain) {
+- if (!UVC_ENTITY_IS_ITERM(iterm))
++ list_for_each_entry(it, &chain->entities, chain) {
++ if (!UVC_ENTITY_IS_ITERM(it))
+ continue;
+- if (iterm->id == pin)
++ if (it->id == selector->baSourceID[index]) {
++ iterm = it;
+ break;
++ }
+ }
+ }
+
+- if (iterm == NULL || iterm->id != pin)
++ if (iterm == NULL)
+ return -EINVAL;
+
+ memset(input, 0, sizeof(*input));
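The uvc_ioctl_enum_input fix works around the classic list_for_each_entry() pitfall: when the loop ends without a break, the cursor is a type-confused pointer into the list head, so the match must be published through a second variable that stays NULL on no-match. Sketched generically:

#include <linux/list.h>

struct item {
        int id;
        struct list_head node;
};

/* Search with a dedicated result pointer: 'found' stays NULL unless a
 * break actually fired; the cursor 'it' is meaningless after a full,
 * matchless walk of the list.
 */
static struct item *find_item(struct list_head *head, int id)
{
        struct item *it, *found = NULL;

        list_for_each_entry(it, head, node) {
                if (it->id == id) {
                        found = it;
                        break;
                }
        }
        return found;
}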
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 9c8318923ed0b..4733e7898ffe5 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -1322,7 +1322,6 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc)
+ */
+ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
+ {
+- int counters_size;
+ int ret, i;
+
+ dmc->num_counters = devfreq_event_get_edev_count(dmc->dev,
+@@ -1332,8 +1331,8 @@ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
+ return dmc->num_counters;
+ }
+
+- counters_size = sizeof(struct devfreq_event_dev) * dmc->num_counters;
+- dmc->counter = devm_kzalloc(dmc->dev, counters_size, GFP_KERNEL);
++ dmc->counter = devm_kcalloc(dmc->dev, dmc->num_counters,
++ sizeof(*dmc->counter), GFP_KERNEL);
+ if (!dmc->counter)
+ return -ENOMEM;
+
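The exynos5422-dmc hunk swaps a hand-multiplied devm_kzalloc() for devm_kcalloc() with sizeof(*ptr), which both overflow-checks the multiplication and keeps the element size tied to the array's declared type. A sketch under those assumptions:

#include <linux/device.h>
#include <linux/slab.h>

struct devfreq_event_dev;

struct counters {
        unsigned int num;
        struct devfreq_event_dev **edev;        /* array of pointers */
};

/* devm_kcalloc() overflow-checks num * size, and sizeof(*p->edev)
 * always matches the array's element type (here, a pointer), even if
 * the declaration changes later.
 */
static int alloc_counters(struct device *dev, struct counters *p)
{
        p->edev = devm_kcalloc(dev, p->num, sizeof(*p->edev), GFP_KERNEL);
        return p->edev ? 0 : -ENOMEM;
}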
+diff --git a/drivers/mfd/davinci_voicecodec.c b/drivers/mfd/davinci_voicecodec.c
+index e5c8bc998eb4e..965820481f1e1 100644
+--- a/drivers/mfd/davinci_voicecodec.c
++++ b/drivers/mfd/davinci_voicecodec.c
+@@ -46,14 +46,12 @@ static int __init davinci_vc_probe(struct platform_device *pdev)
+ }
+ clk_enable(davinci_vc->clk);
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-
+- fifo_base = (dma_addr_t)res->start;
+- davinci_vc->base = devm_ioremap_resource(&pdev->dev, res);
++ davinci_vc->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(davinci_vc->base)) {
+ ret = PTR_ERR(davinci_vc->base);
+ goto fail;
+ }
++ fifo_base = (dma_addr_t)res->start;
+
+ davinci_vc->regmap = devm_regmap_init_mmio(&pdev->dev,
+ davinci_vc->base,
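The davinci_voicecodec hunk (and the cadence-nand one below) reorders resource handling so res->start is only read after the combined get-and-ioremap call succeeds. A short sketch of that ordering, with invented names:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/types.h>

/* Map first, then read res->start: on failure the resource pointer
 * must be treated as unusable.
 */
static int my_map(struct platform_device *pdev, void __iomem **base,
                  dma_addr_t *fifo)
{
        struct resource *res;

        *base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
        if (IS_ERR(*base))
                return PTR_ERR(*base);

        *fifo = (dma_addr_t)res->start;         /* safe: mapping succeeded */
        return 0;
}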
+diff --git a/drivers/mfd/ipaq-micro.c b/drivers/mfd/ipaq-micro.c
+index e92eeeb67a98a..4cd5ecc722112 100644
+--- a/drivers/mfd/ipaq-micro.c
++++ b/drivers/mfd/ipaq-micro.c
+@@ -403,7 +403,7 @@ static int __init micro_probe(struct platform_device *pdev)
+ micro_reset_comm(micro);
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq)
++ if (irq < 0)
+ return -EINVAL;
+ ret = devm_request_irq(&pdev->dev, irq, micro_serial_isr,
+ IRQF_SHARED, "ipaq-micro",
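The ipaq-micro fix reflects the platform_get_irq() contract: on failure it returns a negative errno, never 0, so `if (!irq)` misses every error. A sketch of the intended check:

#include <linux/interrupt.h>
#include <linux/platform_device.h>

/* platform_get_irq() returns a negative errno on failure, never 0,
 * so 'if (!irq)' catches nothing, including -EPROBE_DEFER.
 */
static int my_setup_irq(struct platform_device *pdev, irq_handler_t isr,
                        void *data)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;

        return devm_request_irq(&pdev->dev, irq, isr, IRQF_SHARED,
                                "my-dev", data);
}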
+diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
+index d881f5e40ad9e..6777c419a8da2 100644
+--- a/drivers/misc/ocxl/file.c
++++ b/drivers/misc/ocxl/file.c
+@@ -556,7 +556,9 @@ int ocxl_file_register_afu(struct ocxl_afu *afu)
+
+ err_unregister:
+ ocxl_sysfs_unregister_afu(info); // safe to call even if register failed
++ free_minor(info);
+ device_unregister(&info->dev);
++ return rc;
+ err_put:
+ ocxl_afu_put(afu);
+ free_minor(info);
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index db99882c95d86..52e4106bcb6e5 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -609,11 +609,11 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+
+ if (idata->rpmb || (cmd.flags & MMC_RSP_R1B) == MMC_RSP_R1B) {
+ /*
+- * Ensure RPMB/R1B command has completed by polling CMD13
+- * "Send Status".
++		 * Ensure the RPMB/R1B command has completed by polling CMD13 "Send Status". Here we
++		 * allow the default timeout value to be overridden if a custom timeout is specified.
+ */
+- err = mmc_poll_for_busy(card, MMC_BLK_TIMEOUT_MS, false,
+- MMC_BUSY_IO);
++ err = mmc_poll_for_busy(card, idata->ic.cmd_timeout_ms ? : MMC_BLK_TIMEOUT_MS,
++ false, MMC_BUSY_IO);
+ }
+
+ return err;
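The mmc block hunk uses GNU C's binary `?:` so a caller-supplied timeout (0 meaning unset) falls back to the block-layer default, evaluating the left operand only once. Sketched with a made-up default:

/* GNU C's binary '?:' picks the left operand when it is non-zero and
 * evaluates it only once; 0 here means "no custom timeout requested".
 */
#define DEFAULT_TIMEOUT_MS      (10 * 60 * 1000)

static unsigned int effective_timeout(unsigned int requested_ms)
{
        return requested_ms ?: DEFAULT_TIMEOUT_MS;
}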
+diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c
+index 7ab1b38a7be50..b1d563b2ed1b0 100644
+--- a/drivers/mmc/host/jz4740_mmc.c
++++ b/drivers/mmc/host/jz4740_mmc.c
+@@ -247,6 +247,26 @@ static int jz4740_mmc_acquire_dma_channels(struct jz4740_mmc_host *host)
+ return PTR_ERR(host->dma_rx);
+ }
+
++ /*
++ * Limit the maximum segment size in any SG entry according to
++ * the parameters of the DMA engine device.
++ */
++ if (host->dma_tx) {
++ struct device *dev = host->dma_tx->device->dev;
++ unsigned int max_seg_size = dma_get_max_seg_size(dev);
++
++ if (max_seg_size < host->mmc->max_seg_size)
++ host->mmc->max_seg_size = max_seg_size;
++ }
++
++ if (host->dma_rx) {
++ struct device *dev = host->dma_rx->device->dev;
++ unsigned int max_seg_size = dma_get_max_seg_size(dev);
++
++ if (max_seg_size < host->mmc->max_seg_size)
++ host->mmc->max_seg_size = max_seg_size;
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index b4891bb266485..a3e62e212631f 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -147,6 +147,9 @@ struct sdhci_am654_data {
+ int drv_strength;
+ int strb_sel;
+ u32 flags;
++ u32 quirks;
++
++#define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0)
+ };
+
+ struct sdhci_am654_driver_data {
+@@ -369,6 +372,21 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
+ }
+ }
+
++static void sdhci_am654_reset(struct sdhci_host *host, u8 mask)
++{
++ u8 ctrl;
++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++ struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
++
++ sdhci_reset(host, mask);
++
++ if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_FORCE_CDTEST) {
++ ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
++ ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN;
++ sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
++ }
++}
++
+ static int sdhci_am654_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ {
+ struct sdhci_host *host = mmc_priv(mmc);
+@@ -500,7 +518,7 @@ static struct sdhci_ops sdhci_j721e_4bit_ops = {
+ .set_clock = sdhci_j721e_4bit_set_clock,
+ .write_b = sdhci_am654_write_b,
+ .irq = sdhci_am654_cqhci_irq,
+- .reset = sdhci_reset,
++ .reset = sdhci_am654_reset,
+ };
+
+ static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = {
+@@ -719,6 +737,9 @@ static int sdhci_am654_get_of_property(struct platform_device *pdev,
+ device_property_read_u32(dev, "ti,clkbuf-sel",
+ &sdhci_am654->clkbuf_sel);
+
++ if (device_property_read_bool(dev, "ti,fails-without-test-cd"))
++ sdhci_am654->quirks |= SDHCI_AM654_QUIRK_FORCE_CDTEST;
++
+ sdhci_get_of_property(pdev);
+
+ return 0;
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index a761134fd3bea..59334530dd46f 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -59,6 +59,10 @@
+ #define CFI_SR_WBASB BIT(3)
+ #define CFI_SR_SLSB BIT(1)
+
++enum cfi_quirks {
++ CFI_QUIRK_DQ_TRUE_DATA = BIT(0),
++};
++
+ static int cfi_amdstd_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *);
+ static int cfi_amdstd_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *);
+ #if !FORCE_WORD_WRITE
+@@ -436,6 +440,15 @@ static void fixup_s29ns512p_sectors(struct mtd_info *mtd)
+ mtd->name);
+ }
+
++static void fixup_quirks(struct mtd_info *mtd)
++{
++ struct map_info *map = mtd->priv;
++ struct cfi_private *cfi = map->fldrv_priv;
++
++ if (cfi->mfr == CFI_MFR_AMD && cfi->id == 0x0c01)
++ cfi->quirks |= CFI_QUIRK_DQ_TRUE_DATA;
++}
++
+ /* Used to fix CFI-Tables of chips without Extended Query Tables */
+ static struct cfi_fixup cfi_nopri_fixup_table[] = {
+ { CFI_MFR_SST, 0x234a, fixup_sst39vf }, /* SST39VF1602 */
+@@ -474,6 +487,7 @@ static struct cfi_fixup cfi_fixup_table[] = {
+ #if !FORCE_WORD_WRITE
+ { CFI_MFR_ANY, CFI_ID_ANY, fixup_use_write_buffers },
+ #endif
++ { CFI_MFR_ANY, CFI_ID_ANY, fixup_quirks },
+ { 0, 0, NULL }
+ };
+ static struct cfi_fixup jedec_fixup_table[] = {
+@@ -802,21 +816,25 @@ static struct mtd_info *cfi_amdstd_setup(struct mtd_info *mtd)
+ }
+
+ /*
+- * Return true if the chip is ready.
++ * Return true if the chip is ready and has the correct value.
+ *
+ * Ready is one of: read mode, query mode, erase-suspend-read mode (in any
+ * non-suspended sector) and is indicated by no toggle bits toggling.
+ *
++ * Errors are indicated by toggling bits, or by bits held at the wrong
++ * value.
++ *
+ * Note that anything more complicated than checking if no bits are toggling
+ * (including checking DQ5 for an error status) is tricky to get working
+ * correctly and is therefore not done (particularly with interleaved chips
+ * as each chip must be checked independently of the others).
+ */
+ static int __xipram chip_ready(struct map_info *map, struct flchip *chip,
+- unsigned long addr)
++ unsigned long addr, map_word *expected)
+ {
+ struct cfi_private *cfi = map->fldrv_priv;
+ map_word d, t;
++ int ret;
+
+ if (cfi_use_status_reg(cfi)) {
+ map_word ready = CMD(CFI_SR_DRB);
+@@ -826,57 +844,32 @@ static int __xipram chip_ready(struct map_info *map, struct flchip *chip,
+ */
+ cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi,
+ cfi->device_type, NULL);
+- d = map_read(map, addr);
++ t = map_read(map, addr);
+
+- return map_word_andequal(map, d, ready, ready);
++ return map_word_andequal(map, t, ready, ready);
+ }
+
+ d = map_read(map, addr);
+ t = map_read(map, addr);
+
+- return map_word_equal(map, d, t);
++ ret = map_word_equal(map, d, t);
++
++ if (!ret || !expected)
++ return ret;
++
++ return map_word_equal(map, t, *expected);
+ }
+
+-/*
+- * Return true if the chip is ready and has the correct value.
+- *
+- * Ready is one of: read mode, query mode, erase-suspend-read mode (in any
+- * non-suspended sector) and it is indicated by no bits toggling.
+- *
+- * Error are indicated by toggling bits or bits held with the wrong value,
+- * or with bits toggling.
+- *
+- * Note that anything more complicated than checking if no bits are toggling
+- * (including checking DQ5 for an error status) is tricky to get working
+- * correctly and is therefore not done (particularly with interleaved chips
+- * as each chip must be checked independently of the others).
+- *
+- */
+ static int __xipram chip_good(struct map_info *map, struct flchip *chip,
+- unsigned long addr, map_word expected)
++ unsigned long addr, map_word *expected)
+ {
+ struct cfi_private *cfi = map->fldrv_priv;
+- map_word oldd, curd;
+-
+- if (cfi_use_status_reg(cfi)) {
+- map_word ready = CMD(CFI_SR_DRB);
+-
+- /*
+- * For chips that support status register, check device
+- * ready bit
+- */
+- cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi,
+- cfi->device_type, NULL);
+- curd = map_read(map, addr);
+-
+- return map_word_andequal(map, curd, ready, ready);
+- }
++ map_word *datum = expected;
+
+- oldd = map_read(map, addr);
+- curd = map_read(map, addr);
++ if (cfi->quirks & CFI_QUIRK_DQ_TRUE_DATA)
++ datum = NULL;
+
+- return map_word_equal(map, oldd, curd) &&
+- map_word_equal(map, curd, expected);
++ return chip_ready(map, chip, addr, datum);
+ }
+
+ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr, int mode)
+@@ -893,7 +886,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
+
+ case FL_STATUS:
+ for (;;) {
+- if (chip_ready(map, chip, adr))
++ if (chip_ready(map, chip, adr, NULL))
+ break;
+
+ if (time_after(jiffies, timeo)) {
+@@ -932,7 +925,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
+ chip->state = FL_ERASE_SUSPENDING;
+ chip->erase_suspended = 1;
+ for (;;) {
+- if (chip_ready(map, chip, adr))
++ if (chip_ready(map, chip, adr, NULL))
+ break;
+
+ if (time_after(jiffies, timeo)) {
+@@ -1463,7 +1456,7 @@ static int do_otp_lock(struct map_info *map, struct flchip *chip, loff_t adr,
+ /* wait for chip to become ready */
+ timeo = jiffies + msecs_to_jiffies(2);
+ for (;;) {
+- if (chip_ready(map, chip, adr))
++ if (chip_ready(map, chip, adr, NULL))
+ break;
+
+ if (time_after(jiffies, timeo)) {
+@@ -1699,7 +1692,7 @@ static int __xipram do_write_oneword_once(struct map_info *map,
+ * "chip_good" to avoid the failure due to scheduling.
+ */
+ if (time_after(jiffies, timeo) &&
+- !chip_good(map, chip, adr, datum)) {
++ !chip_good(map, chip, adr, &datum)) {
+ xip_enable(map, chip, adr);
+ printk(KERN_WARNING "MTD %s(): software timeout\n", __func__);
+ xip_disable(map, chip, adr);
+@@ -1707,7 +1700,7 @@ static int __xipram do_write_oneword_once(struct map_info *map,
+ break;
+ }
+
+- if (chip_good(map, chip, adr, datum)) {
++ if (chip_good(map, chip, adr, &datum)) {
+ if (cfi_check_err_status(map, chip, adr))
+ ret = -EIO;
+ break;
+@@ -1979,14 +1972,14 @@ static int __xipram do_write_buffer_wait(struct map_info *map,
+ * "chip_good" to avoid the failure due to scheduling.
+ */
+ if (time_after(jiffies, timeo) &&
+- !chip_good(map, chip, adr, datum)) {
++ !chip_good(map, chip, adr, &datum)) {
+ pr_err("MTD %s(): software timeout, address:0x%.8lx.\n",
+ __func__, adr);
+ ret = -EIO;
+ break;
+ }
+
+- if (chip_good(map, chip, adr, datum)) {
++ if (chip_good(map, chip, adr, &datum)) {
+ if (cfi_check_err_status(map, chip, adr))
+ ret = -EIO;
+ break;
+@@ -2195,7 +2188,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip,
+ * If the driver thinks the chip is idle, and no toggle bits
+ * are changing, then the chip is actually idle for sure.
+ */
+- if (chip->state == FL_READY && chip_ready(map, chip, adr))
++ if (chip->state == FL_READY && chip_ready(map, chip, adr, NULL))
+ return 0;
+
+ /*
+@@ -2212,7 +2205,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip,
+
+ /* wait for the chip to become ready */
+ for (i = 0; i < jiffies_to_usecs(timeo); i++) {
+- if (chip_ready(map, chip, adr))
++ if (chip_ready(map, chip, adr, NULL))
+ return 0;
+
+ udelay(1);
+@@ -2276,13 +2269,13 @@ retry:
+ map_write(map, datum, adr);
+
+ for (i = 0; i < jiffies_to_usecs(uWriteTimeout); i++) {
+- if (chip_ready(map, chip, adr))
++ if (chip_ready(map, chip, adr, NULL))
+ break;
+
+ udelay(1);
+ }
+
+- if (!chip_good(map, chip, adr, datum) ||
++ if (!chip_ready(map, chip, adr, &datum) ||
+ cfi_check_err_status(map, chip, adr)) {
+ /* reset on all failures. */
+ map_write(map, CMD(0xF0), chip->start);
+@@ -2424,6 +2417,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ DECLARE_WAITQUEUE(wait, current);
+ int ret;
+ int retry_cnt = 0;
++ map_word datum = map_word_ff(map);
+
+ adr = cfi->addr_unlock1;
+
+@@ -2478,7 +2472,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ chip->erase_suspended = 0;
+ }
+
+- if (chip_good(map, chip, adr, map_word_ff(map))) {
++ if (chip_ready(map, chip, adr, &datum)) {
+ if (cfi_check_err_status(map, chip, adr))
+ ret = -EIO;
+ break;
+@@ -2523,6 +2517,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ DECLARE_WAITQUEUE(wait, current);
+ int ret;
+ int retry_cnt = 0;
++ map_word datum = map_word_ff(map);
+
+ adr += chip->start;
+
+@@ -2577,7 +2572,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ chip->erase_suspended = 0;
+ }
+
+- if (chip_good(map, chip, adr, map_word_ff(map))) {
++ if (chip_ready(map, chip, adr, &datum)) {
+ if (cfi_check_err_status(map, chip, adr))
+ ret = -EIO;
+ break;
+@@ -2771,7 +2766,7 @@ static int __maybe_unused do_ppb_xxlock(struct map_info *map,
+ */
+ timeo = jiffies + msecs_to_jiffies(2000); /* 2s max (un)locking */
+ for (;;) {
+- if (chip_ready(map, chip, adr))
++ if (chip_ready(map, chip, adr, NULL))
+ break;
+
+ if (time_after(jiffies, timeo)) {
+diff --git a/drivers/mtd/mtdblock.c b/drivers/mtd/mtdblock.c
+index 03e3de3a5d79e..1e94e7d10b8be 100644
+--- a/drivers/mtd/mtdblock.c
++++ b/drivers/mtd/mtdblock.c
+@@ -257,6 +257,10 @@ static int mtdblock_open(struct mtd_blktrans_dev *mbd)
+ return 0;
+ }
+
++ if (mtd_type_is_nand(mbd->mtd))
++ pr_warn("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
++ mbd->tr->name, mbd->mtd->name);
++
+ /* OK, it's not open. Create cache info for it */
+ mtdblk->count = 1;
+ mutex_init(&mtdblk->cache_mutex);
+@@ -322,10 +326,6 @@ static void mtdblock_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd)
+ if (!(mtd->flags & MTD_WRITEABLE))
+ dev->mbd.readonly = 1;
+
+- if (mtd_type_is_nand(mtd))
+- pr_warn("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
+- tr->name, mtd->name);
+-
+ if (add_mtd_blktrans_dev(&dev->mbd))
+ kfree(dev);
+ }
+diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
+index 7eec60ea90564..0d72672f8b64d 100644
+--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
++++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
+@@ -2983,11 +2983,10 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev)
+ if (IS_ERR(cdns_ctrl->reg))
+ return PTR_ERR(cdns_ctrl->reg);
+
+- res = platform_get_resource(ofdev, IORESOURCE_MEM, 1);
+- cdns_ctrl->io.dma = res->start;
+- cdns_ctrl->io.virt = devm_ioremap_resource(&ofdev->dev, res);
++ cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res);
+ if (IS_ERR(cdns_ctrl->io.virt))
+ return PTR_ERR(cdns_ctrl->io.virt);
++ cdns_ctrl->io.dma = res->start;
+
+ dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk");
+ if (IS_ERR(dt->clk))
+diff --git a/drivers/mtd/nand/raw/denali_pci.c b/drivers/mtd/nand/raw/denali_pci.c
+index 20c085a30adcb..de7e722d38262 100644
+--- a/drivers/mtd/nand/raw/denali_pci.c
++++ b/drivers/mtd/nand/raw/denali_pci.c
+@@ -74,22 +74,21 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ return ret;
+ }
+
+- denali->reg = ioremap(csr_base, csr_len);
++ denali->reg = devm_ioremap(denali->dev, csr_base, csr_len);
+ if (!denali->reg) {
+ dev_err(&dev->dev, "Spectra: Unable to remap memory region\n");
+ return -ENOMEM;
+ }
+
+- denali->host = ioremap(mem_base, mem_len);
++ denali->host = devm_ioremap(denali->dev, mem_base, mem_len);
+ if (!denali->host) {
+ dev_err(&dev->dev, "Spectra: ioremap failed!");
+- ret = -ENOMEM;
+- goto out_unmap_reg;
++ return -ENOMEM;
+ }
+
+ ret = denali_init(denali);
+ if (ret)
+- goto out_unmap_host;
++ return ret;
+
+ nsels = denali->nbanks;
+
+@@ -117,10 +116,6 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+
+ out_remove_denali:
+ denali_remove(denali);
+-out_unmap_host:
+- iounmap(denali->host);
+-out_unmap_reg:
+- iounmap(denali->reg);
+ return ret;
+ }
+
+@@ -129,8 +124,6 @@ static void denali_pci_remove(struct pci_dev *dev)
+ struct denali_controller *denali = pci_get_drvdata(dev);
+
+ denali_remove(denali);
+- iounmap(denali->reg);
+- iounmap(denali->host);
+ }
+
+ static struct pci_driver denali_pci_driver = {
+diff --git a/drivers/mtd/nand/raw/intel-nand-controller.c b/drivers/mtd/nand/raw/intel-nand-controller.c
+index 7c1c80dae826a..e91b879b32bdb 100644
+--- a/drivers/mtd/nand/raw/intel-nand-controller.c
++++ b/drivers/mtd/nand/raw/intel-nand-controller.c
+@@ -619,9 +619,9 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ resname = devm_kasprintf(dev, GFP_KERNEL, "nand_cs%d", cs);
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
+ ebu_host->cs[cs].chipaddr = devm_ioremap_resource(dev, res);
+- ebu_host->cs[cs].nand_pa = res->start;
+ if (IS_ERR(ebu_host->cs[cs].chipaddr))
+ return PTR_ERR(ebu_host->cs[cs].chipaddr);
++ ebu_host->cs[cs].nand_pa = res->start;
+
+ ebu_host->clk = devm_clk_get(dev, NULL);
+ if (IS_ERR(ebu_host->clk))
+diff --git a/drivers/mtd/nand/spi/gigadevice.c b/drivers/mtd/nand/spi/gigadevice.c
+index 1dd1c58980934..da77ab20296ea 100644
+--- a/drivers/mtd/nand/spi/gigadevice.c
++++ b/drivers/mtd/nand/spi/gigadevice.c
+@@ -39,6 +39,14 @@ static SPINAND_OP_VARIANTS(read_cache_variants_f,
+ SPINAND_PAGE_READ_FROM_CACHE_OP_3A(true, 0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_OP_3A(false, 0, 0, NULL, 0));
+
++static SPINAND_OP_VARIANTS(read_cache_variants_1gq5,
++ SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
++ SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
++ SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
++ SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
++ SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0),
++ SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0));
++
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+ SPINAND_PROG_LOAD(true, 0, NULL, 0));
+@@ -339,7 +347,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x51),
+ NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
+ NAND_ECCREQ(4, 512),
+- SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++ SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5,
+ &write_cache_variants,
+ &update_cache_variants),
+ SPINAND_HAS_QE_BIT,
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index cc155f6c6c68c..3d78f7f97f965 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -1007,6 +1007,15 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1)
+ if (ret)
+ return ret;
+
++ ret = spi_nor_read_sr(nor, sr_cr);
++ if (ret)
++ return ret;
++
++ if (sr1 != sr_cr[0]) {
++ dev_dbg(nor->dev, "SR: Read back test failed\n");
++ return -EIO;
++ }
++
+ if (nor->flags & SNOR_F_NO_READ_CR)
+ return 0;
+
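The spi-nor hunk adds a read-back of SR1 after writing it, turning a silently ignored write into an explicit -EIO. A generic write-then-verify sketch, with the register accessors passed in as hypothetical callbacks:

#include <linux/errno.h>
#include <linux/types.h>

struct my_chip;                         /* opaque stand-in for the device */

/* Write a register, read it back, and fail on mismatch, so a silently
 * ignored write (write protect, dropped bus cycle) surfaces as -EIO
 * here instead of corrupting data later.
 */
static int write_reg_checked(struct my_chip *chip, u8 val,
                             int (*wr)(struct my_chip *, u8),
                             int (*rd)(struct my_chip *, u8 *))
{
        u8 back;
        int ret = wr(chip, val);

        if (ret)
                return ret;

        ret = rd(chip, &back);
        if (ret)
                return ret;

        return back == val ? 0 : -EIO;
}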
+diff --git a/drivers/net/amt.c b/drivers/net/amt.c
+index f1a36d7e2151c..fb774d568baab 100644
+--- a/drivers/net/amt.c
++++ b/drivers/net/amt.c
+@@ -943,7 +943,7 @@ static void amt_req_work(struct work_struct *work)
+ if (amt->status < AMT_STATUS_RECEIVED_ADVERTISEMENT)
+ goto out;
+
+- if (amt->req_cnt++ > AMT_MAX_REQ_COUNT) {
++ if (amt->req_cnt > AMT_MAX_REQ_COUNT) {
+ netdev_dbg(amt->dev, "Gateway is not ready");
+ amt->qi = AMT_INIT_REQ_TIMEOUT;
+ amt->ready4 = false;
+@@ -951,13 +951,15 @@ static void amt_req_work(struct work_struct *work)
+ amt->remote_ip = 0;
+ __amt_update_gw_status(amt, AMT_STATUS_INIT, false);
+ amt->req_cnt = 0;
++ goto out;
+ }
+ spin_unlock_bh(&amt->lock);
+
+ amt_send_request(amt, false);
+ amt_send_request(amt, true);
+- amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
+ spin_lock_bh(&amt->lock);
++ __amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
++ amt->req_cnt++;
+ out:
+ exp = min_t(u32, (1 * (1 << amt->req_cnt)), AMT_MAX_REQ_TIMEOUT);
+ mod_delayed_work(amt_wq, &amt->req_wq, msecs_to_jiffies(exp * 1000));
+@@ -2696,9 +2698,8 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
+ err = true;
+ goto drop;
+ }
+- if (amt_advertisement_handler(amt, skb))
+- amt->dev->stats.rx_dropped++;
+- goto out;
++ err = amt_advertisement_handler(amt, skb);
++ break;
+ case AMT_MSG_MULTICAST_DATA:
+ if (iph->saddr != amt->remote_ip) {
+ netdev_dbg(amt->dev, "Invalid Relay IP\n");
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index c9107a8b4b906..1e1b420f7f865 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -5383,16 +5383,23 @@ static int bond_ethtool_get_ts_info(struct net_device *bond_dev,
+ const struct ethtool_ops *ops;
+ struct net_device *real_dev;
+ struct phy_device *phydev;
++ int ret = 0;
+
++ rcu_read_lock();
+ real_dev = bond_option_active_slave_get_rcu(bond);
++ dev_hold(real_dev);
++ rcu_read_unlock();
++
+ if (real_dev) {
+ ops = real_dev->ethtool_ops;
+ phydev = real_dev->phydev;
+
+ if (phy_has_tsinfo(phydev)) {
+- return phy_ts_info(phydev, info);
++ ret = phy_ts_info(phydev, info);
++ goto out;
+ } else if (ops->get_ts_info) {
+- return ops->get_ts_info(real_dev, info);
++ ret = ops->get_ts_info(real_dev, info);
++ goto out;
+ }
+ }
+
+@@ -5400,7 +5407,9 @@ static int bond_ethtool_get_ts_info(struct net_device *bond_dev,
+ SOF_TIMESTAMPING_SOFTWARE;
+ info->phc_index = -1;
+
+- return 0;
++out:
++ dev_put(real_dev);
++ return ret;
+ }
+
+ static const struct ethtool_ops bond_ethtool_ops = {
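The bonding fix above holds a reference on the RCU-protected active slave before leaving the read-side section, since the pointer is only stable inside it; on 5.17, dev_hold() and dev_put() both tolerate NULL. A minimal sketch:

#include <linux/netdevice.h>
#include <linux/rcupdate.h>

/* An RCU-protected netdev pointer is only stable inside the read-side
 * section; take a refcount there, then the device may be used (and
 * slept on) outside it.
 */
static void use_active_dev(struct net_device __rcu **slot)
{
        struct net_device *dev;

        rcu_read_lock();
        dev = rcu_dereference(*slot);
        dev_hold(dev);                  /* dev_hold(NULL) is a no-op */
        rcu_read_unlock();

        if (dev) {
                /* ... query ethtool ops, timestamping info, etc. ... */
        }

        dev_put(dev);                   /* dev_put(NULL) is also a no-op */
}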
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+index f551c900803e3..aed6e9d475172 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+@@ -434,7 +434,7 @@ struct mcp251xfd_hw_tef_obj {
+ /* The tx_obj_raw version is used in spi async, i.e. without
+ * regmap. We have to take care of endianness ourselves.
+ */
+-struct mcp251xfd_hw_tx_obj_raw {
++struct __packed mcp251xfd_hw_tx_obj_raw {
+ __le32 id;
+ __le32 flags;
+ u8 data[sizeof_field(struct canfd_frame, data)];
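The mcp251xfd one-liner above marks a structure that is serialized raw over SPI as __packed, forbidding compiler-inserted padding; a build-time assert is a cheap way to pin the wire size. Field sizes below are illustrative only:

#include <linux/build_bug.h>
#include <linux/types.h>

/* Structures serialized raw over a bus must be __packed so the
 * compiler cannot insert padding; the assert pins the wire size at
 * build time.
 */
struct __packed wire_tx_obj {
        __le32 id;
        __le32 flags;
        u8 data[64];
};

static_assert(sizeof(struct wire_tx_obj) == 72);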
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 1674b561c9a26..f3b149b604650 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -239,7 +239,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd = {
+ };
+
+ /* AXI CANFD Data Bittiming constants as per AXI CANFD 1.0 specs */
+-static struct can_bittiming_const xcan_data_bittiming_const_canfd = {
++static const struct can_bittiming_const xcan_data_bittiming_const_canfd = {
+ .name = DRIVER_NAME,
+ .tseg1_min = 1,
+ .tseg1_max = 16,
+@@ -265,7 +265,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
+ };
+
+ /* AXI CANFD 2.0 Data Bittiming constants as per AXI CANFD 2.0 spec */
+-static struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
++static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
+ .name = DRIVER_NAME,
+ .tseg1_min = 1,
+ .tseg1_max = 32,
+diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
+index 37a3dabdce313..6d1fcb08bba1f 100644
+--- a/drivers/net/dsa/Kconfig
++++ b/drivers/net/dsa/Kconfig
+@@ -72,7 +72,6 @@ source "drivers/net/dsa/realtek/Kconfig"
+
+ config NET_DSA_SMSC_LAN9303
+ tristate
+- depends on VLAN_8021Q || VLAN_8021Q=n
+ select NET_DSA_TAG_LAN9303
+ select REGMAP
+ help
+@@ -82,6 +81,7 @@ config NET_DSA_SMSC_LAN9303
+ config NET_DSA_SMSC_LAN9303_I2C
+ tristate "SMSC/Microchip LAN9303 3-ports 10/100 ethernet switch in I2C managed mode"
+ depends on I2C
++ depends on VLAN_8021Q || VLAN_8021Q=n
+ select NET_DSA_SMSC_LAN9303
+ select REGMAP_I2C
+ help
+@@ -91,6 +91,7 @@ config NET_DSA_SMSC_LAN9303_I2C
+ config NET_DSA_SMSC_LAN9303_MDIO
+ tristate "SMSC/Microchip LAN9303 3-ports 10/100 ethernet switch in MDIO managed mode"
+ select NET_DSA_SMSC_LAN9303
++ depends on VLAN_8021Q || VLAN_8021Q=n
+ help
+ Enable access functions if the SMSC/Microchip LAN9303 is configured
+ for MDIO managed mode.
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index fcdd022b24986..487b4717c096f 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2535,13 +2535,7 @@ static void mt7531_sgmii_validate(struct mt7530_priv *priv, int port,
+ /* Port5 supports either RGMII or SGMII.
+ * Port6 supports SGMII only.
+ */
+- switch (port) {
+- case 5:
+- if (mt7531_is_rgmii_port(priv, port))
+- break;
+- fallthrough;
+- case 6:
+- phylink_set(supported, 1000baseX_Full);
++ if (port == 6) {
+ phylink_set(supported, 2500baseX_Full);
+ phylink_set(supported, 2500baseT_Full);
+ }
+@@ -2909,8 +2903,6 @@ static void
+ mt7530_mac_port_validate(struct dsa_switch *ds, int port,
+ unsigned long *supported)
+ {
+- if (port == 5)
+- phylink_set(supported, 1000baseX_Full);
+ }
+
+ static void mt7531_mac_port_validate(struct dsa_switch *ds, int port,
+@@ -2947,8 +2939,10 @@ mt753x_phylink_validate(struct dsa_switch *ds, int port,
+ }
+
+ /* This switch only supports 1G full-duplex. */
+- if (state->interface != PHY_INTERFACE_MODE_MII)
++ if (state->interface != PHY_INTERFACE_MODE_MII) {
+ phylink_set(mask, 1000baseT_Full);
++ phylink_set(mask, 1000baseX_Full);
++ }
+
+ priv->info->mac_port_validate(ds, port, mask);
+
+diff --git a/drivers/net/ethernet/broadcom/Makefile b/drivers/net/ethernet/broadcom/Makefile
+index 0ddfb5b5d53ca..2e6c5f258a1ff 100644
+--- a/drivers/net/ethernet/broadcom/Makefile
++++ b/drivers/net/ethernet/broadcom/Makefile
+@@ -17,3 +17,8 @@ obj-$(CONFIG_BGMAC_BCMA) += bgmac-bcma.o bgmac-bcma-mdio.o
+ obj-$(CONFIG_BGMAC_PLATFORM) += bgmac-platform.o
+ obj-$(CONFIG_SYSTEMPORT) += bcmsysport.o
+ obj-$(CONFIG_BNXT) += bnxt/
++
++# FIXME: temporarily silence -Warray-bounds on non W=1+ builds
++ifndef KBUILD_EXTRA_WARN
++CFLAGS_tg3.o += -Wno-array-bounds
++endif
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index c1100af5666b4..52ee4a825c255 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -35,6 +35,7 @@
+ #include <linux/tcp.h>
+ #include <linux/iopoll.h>
+ #include <linux/pm_runtime.h>
++#include <linux/ptp_classify.h>
+ #include "macb.h"
+
+ /* This structure is only used for MACB on SiFive FU540 devices */
+@@ -1122,6 +1123,36 @@ static void macb_tx_error_task(struct work_struct *work)
+ spin_unlock_irqrestore(&bp->lock, flags);
+ }
+
++static bool ptp_one_step_sync(struct sk_buff *skb)
++{
++ struct ptp_header *hdr;
++ unsigned int ptp_class;
++ u8 msgtype;
++
++ /* No need to parse packet if PTP TS is not involved */
++ if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)))
++ goto not_oss;
++
++ /* Identify and return whether PTP one step sync is being processed */
++ ptp_class = ptp_classify_raw(skb);
++ if (ptp_class == PTP_CLASS_NONE)
++ goto not_oss;
++
++ hdr = ptp_parse_header(skb, ptp_class);
++ if (!hdr)
++ goto not_oss;
++
++ if (hdr->flag_field[0] & PTP_FLAG_TWOSTEP)
++ goto not_oss;
++
++ msgtype = ptp_get_msgtype(hdr, ptp_class);
++ if (msgtype == PTP_MSGTYPE_SYNC)
++ return true;
++
++not_oss:
++ return false;
++}
++
+ static void macb_tx_interrupt(struct macb_queue *queue)
+ {
+ unsigned int tail;
+@@ -1166,8 +1197,8 @@ static void macb_tx_interrupt(struct macb_queue *queue)
+
+ /* First, update TX stats if needed */
+ if (skb) {
+- if (unlikely(skb_shinfo(skb)->tx_flags &
+- SKBTX_HW_TSTAMP) &&
++ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
++ !ptp_one_step_sync(skb) &&
+ gem_ptp_do_txstamp(queue, skb, desc) == 0) {
+ /* skb now belongs to timestamp buffer
+ * and will be removed later
+@@ -1997,7 +2028,8 @@ static unsigned int macb_tx_map(struct macb *bp,
+ ctrl |= MACB_BF(TX_LSO, lso_ctrl);
+ ctrl |= MACB_BF(TX_TCP_SEQ_SRC, seq_ctrl);
+ if ((bp->dev->features & NETIF_F_HW_CSUM) &&
+- skb->ip_summed != CHECKSUM_PARTIAL && !lso_ctrl)
++ skb->ip_summed != CHECKSUM_PARTIAL && !lso_ctrl &&
++ !ptp_one_step_sync(skb))
+ ctrl |= MACB_BIT(TX_NOCRC);
+ } else
+ /* Only set MSS/MFS on payload descriptors
+@@ -2095,7 +2127,7 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
+
+ if (!(ndev->features & NETIF_F_HW_CSUM) ||
+ !((*skb)->ip_summed != CHECKSUM_PARTIAL) ||
+- skb_shinfo(*skb)->gso_size) /* Not available for GSO */
++ skb_shinfo(*skb)->gso_size || ptp_one_step_sync(*skb))
+ return 0;
+
+ if (padlen <= 0) {
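
ptp_one_step_sync() above answers a single question: is this skb a one-step
PTP Sync, i.e. a Sync message whose twoStepFlag is clear? For such frames
the MAC rewrites the packet in flight, so the driver lets the hardware
recompute the FCS and keeps the skb out of the TX-timestamp path. The same
test on a raw PTPv2 header, standalone (field offsets per IEEE 1588-2008:
messageType in the low nibble of byte 0, flagField[0] at byte 6, twoStepFlag
as bit 1):

        #include <stdbool.h>
        #include <stdint.h>

        #define PTP_MSGTYPE_SYNC  0x0
        #define PTP_FLAG_TWOSTEP  (1 << 1)

        /* hdr points at the start of a PTPv2 header. */
        static bool is_one_step_sync(const uint8_t *hdr)
        {
                uint8_t msgtype = hdr[0] & 0x0f;  /* messageType */
                uint8_t flags0  = hdr[6];         /* flagField[0] */

                return msgtype == PTP_MSGTYPE_SYNC &&
                       !(flags0 & PTP_FLAG_TWOSTEP);
        }

        int main(void)
        {
                uint8_t hdr[34] = { 0 };  /* type 0 = Sync, flags clear */

                return is_one_step_sync(hdr) ? 0 : 1;
        }
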
+diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
+index fb6b27f46b153..9559c16078f95 100644
+--- a/drivers/net/ethernet/cadence/macb_ptp.c
++++ b/drivers/net/ethernet/cadence/macb_ptp.c
+@@ -470,8 +470,10 @@ int gem_set_hwtst(struct net_device *dev, struct ifreq *ifr, int cmd)
+ case HWTSTAMP_TX_ONESTEP_SYNC:
+ if (gem_ptp_set_one_step_sync(bp, 1) != 0)
+ return -ERANGE;
+- fallthrough;
++ tx_bd_control = TSTAMP_ALL_FRAMES;
++ break;
+ case HWTSTAMP_TX_ON:
++ gem_ptp_set_one_step_sync(bp, 0);
+ tx_bd_control = TSTAMP_ALL_FRAMES;
+ break;
+ default:
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+index ebc77771f5dac..4aa1f433ed24d 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+@@ -643,6 +643,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ err = alloc_msg_buf(pf_to_mgmt);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to allocate msg buffers\n");
++ destroy_workqueue(pf_to_mgmt->workq);
+ hinic_health_reporters_destroy(hwdev->devlink_dev);
+ return err;
+ }
+@@ -650,6 +651,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ err = hinic_api_cmd_init(pf_to_mgmt->cmd_chain, hwif);
+ if (err) {
+ dev_err(&pdev->dev, "Failed to initialize cmd chains\n");
++ destroy_workqueue(pf_to_mgmt->workq);
+ hinic_health_reporters_destroy(hwdev->devlink_dev);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+index f7dc7d825f637..4daf6bf291ecb 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+@@ -386,7 +386,7 @@ static int alloc_wqes_shadow(struct hinic_wq *wq)
+ return -ENOMEM;
+
+ wq->shadow_idx = devm_kcalloc(&pdev->dev, wq->num_q_pages,
+- sizeof(wq->prod_idx), GFP_KERNEL);
++ sizeof(*wq->shadow_idx), GFP_KERNEL);
+ if (!wq->shadow_idx)
+ goto err_shadow_idx;
+
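
The hinic fix above sizes the allocation by sizeof(*wq->shadow_idx) instead
of an unrelated field; the pointee form is the usual kcalloc idiom because
it tracks the element type of the array actually being allocated.
Standalone:

        #include <stdlib.h>

        int main(void)
        {
                unsigned short *shadow_idx;
                size_t n = 8;

                /* sizeof(*shadow_idx) follows the element type (2 bytes);
                 * sizeof(shadow_idx) would size by the pointer instead. */
                shadow_idx = calloc(n, sizeof(*shadow_idx));
                free(shadow_idx);
                return 0;
        }
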
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index d1093bb2d4361..5e432e4fc3810 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -182,13 +182,13 @@ static int mlx5_devlink_reload_up(struct devlink *devlink, enum devlink_reload_a
+ *actions_performed = BIT(action);
+ switch (action) {
+ case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
+- return mlx5_load_one(dev);
++ return mlx5_load_one(dev, false);
+ case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
+ if (limit == DEVLINK_RELOAD_LIMIT_NO_RESET)
+ break;
+ /* On fw_activate action, also driver is reloaded and reinit performed */
+ *actions_performed |= BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
+- return mlx5_load_one(dev);
++ return mlx5_load_one(dev, false);
+ default:
+ /* Unsupported action should not get to this function */
+ WARN_ON(1);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index c14e06ca64d8d..5ccd6c634274b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -1188,6 +1188,7 @@ mlx5e_tx_mpwqe_supported(struct mlx5_core_dev *mdev)
+ MLX5_CAP_ETH(mdev, enhanced_multi_pkt_send_wqe);
+ }
+
++int mlx5e_get_pf_num_tirs(struct mlx5_core_dev *mdev);
+ int mlx5e_priv_init(struct mlx5e_priv *priv,
+ const struct mlx5e_profile *profile,
+ struct net_device *netdev,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 3500faf086710..531fffe1abe3a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5206,6 +5206,15 @@ mlx5e_calc_max_nch(struct mlx5_core_dev *mdev, struct net_device *netdev,
+ return max_nch;
+ }
+
++int mlx5e_get_pf_num_tirs(struct mlx5_core_dev *mdev)
++{
++ /* Indirect TIRS: 2 sets of TTCs (inner + outer steering)
++ * and 1 set of direct TIRS
++ */
++ return 2 * MLX5E_NUM_INDIR_TIRS
++ + mlx5e_profile_max_num_channels(mdev, &mlx5e_nic_profile);
++}
++
+ /* mlx5e generic netdev management API (move to en_common.c) */
+ int mlx5e_priv_init(struct mlx5e_priv *priv,
+ const struct mlx5e_profile *profile,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 06d1f46f16887..f6a1c5efdb25f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -592,10 +592,16 @@ bool mlx5e_eswitch_vf_rep(const struct net_device *netdev)
+ return netdev->netdev_ops == &mlx5e_netdev_ops_rep;
+ }
+
++/* One indirect TIR set for outer. Inner not supported in reps. */
++#define REP_NUM_INDIR_TIRS MLX5E_NUM_INDIR_TIRS
++
+ static int mlx5e_rep_max_nch_limit(struct mlx5_core_dev *mdev)
+ {
+- return (1 << MLX5_CAP_GEN(mdev, log_max_tir)) /
+- mlx5_eswitch_get_total_vports(mdev);
++ int max_tir_num = 1 << MLX5_CAP_GEN(mdev, log_max_tir);
++ int num_vports = mlx5_eswitch_get_total_vports(mdev);
++
++ return (max_tir_num - mlx5e_get_pf_num_tirs(mdev)
++ - (num_vports * REP_NUM_INDIR_TIRS)) / num_vports;
+ }
+
+ static void mlx5e_build_rep_params(struct net_device *netdev)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index b6f58d16d1453..298c614c631b0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2064,16 +2064,16 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
+ down_write_ref_node(&fte->node, false);
+ for (i = handle->num_rules - 1; i >= 0; i--)
+ tree_remove_node(&handle->rule[i]->node, true);
+- if (fte->dests_size) {
+- if (fte->modify_mask)
+- modify_fte(fte);
+- up_write_ref_node(&fte->node, false);
+- } else if (list_empty(&fte->node.children)) {
++ if (list_empty(&fte->node.children)) {
+ del_hw_fte(&fte->node);
+ /* Avoid double call to del_hw_fte */
+ fte->node.del_hw_func = NULL;
+ up_write_ref_node(&fte->node, false);
+ tree_put_node(&fte->node, false);
++ } else if (fte->dests_size) {
++ if (fte->modify_mask)
++ modify_fte(fte);
++ up_write_ref_node(&fte->node, false);
+ } else {
+ up_write_ref_node(&fte->node, false);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 1c771287bee53..d5f540132a0e2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -106,7 +106,7 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
+ if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
+ complete(&fw_reset->done);
+ } else {
+- mlx5_load_one(dev);
++ mlx5_load_one(dev, false);
+ devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
+ BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
+ BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
+index c1df0d3595d87..d758848d34d0c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
+@@ -10,6 +10,7 @@ struct mlx5_timeouts {
+
+ static const u32 tout_def_sw_val[MAX_TIMEOUT_TYPES] = {
+ [MLX5_TO_FW_PRE_INIT_TIMEOUT_MS] = 120000,
++ [MLX5_TO_FW_PRE_INIT_ON_RECOVERY_TIMEOUT_MS] = 7200000,
+ [MLX5_TO_FW_PRE_INIT_WARN_MESSAGE_INTERVAL_MS] = 20000,
+ [MLX5_TO_FW_PRE_INIT_WAIT_MS] = 2,
+ [MLX5_TO_FW_INIT_MS] = 2000,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
+index 1c42ead782fa7..257c03eeab365 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
+@@ -7,6 +7,7 @@
+ enum mlx5_timeouts_types {
+ /* pre init timeouts (not read from FW) */
+ MLX5_TO_FW_PRE_INIT_TIMEOUT_MS,
++ MLX5_TO_FW_PRE_INIT_ON_RECOVERY_TIMEOUT_MS,
+ MLX5_TO_FW_PRE_INIT_WARN_MESSAGE_INTERVAL_MS,
+ MLX5_TO_FW_PRE_INIT_WAIT_MS,
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 4e49dca94bc38..56a8079dc16a0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1015,7 +1015,7 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
+ mlx5_devcom_unregister_device(dev->priv.devcom);
+ }
+
+-static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
++static int mlx5_function_setup(struct mlx5_core_dev *dev, u64 timeout)
+ {
+ int err;
+
+@@ -1030,11 +1030,11 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
+
+ /* wait for firmware to accept initialization segments configurations
+ */
+- err = wait_fw_init(dev, mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT),
++ err = wait_fw_init(dev, timeout,
+ mlx5_tout_ms(dev, FW_PRE_INIT_WARN_MESSAGE_INTERVAL));
+ if (err) {
+ mlx5_core_err(dev, "Firmware over %llu MS in pre-initializing state, aborting\n",
+- mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT));
++ timeout);
+ return err;
+ }
+
+@@ -1297,7 +1297,7 @@ int mlx5_init_one(struct mlx5_core_dev *dev)
+ mutex_lock(&dev->intf_state_mutex);
+ dev->state = MLX5_DEVICE_STATE_UP;
+
+- err = mlx5_function_setup(dev, true);
++ err = mlx5_function_setup(dev, mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT));
+ if (err)
+ goto err_function;
+
+@@ -1361,9 +1361,10 @@ out:
+ mutex_unlock(&dev->intf_state_mutex);
+ }
+
+-int mlx5_load_one(struct mlx5_core_dev *dev)
++int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery)
+ {
+ int err = 0;
++ u64 timeout;
+
+ mutex_lock(&dev->intf_state_mutex);
+ if (test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
+@@ -1373,7 +1374,11 @@ int mlx5_load_one(struct mlx5_core_dev *dev)
+ /* remove any previous indication of internal error */
+ dev->state = MLX5_DEVICE_STATE_UP;
+
+- err = mlx5_function_setup(dev, false);
++ if (recovery)
++ timeout = mlx5_tout_ms(dev, FW_PRE_INIT_ON_RECOVERY_TIMEOUT);
++ else
++ timeout = mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT);
++ err = mlx5_function_setup(dev, timeout);
+ if (err)
+ goto err_function;
+
+@@ -1746,7 +1751,7 @@ static void mlx5_pci_resume(struct pci_dev *pdev)
+
+ mlx5_pci_trace(dev, "Enter, loading driver..\n");
+
+- err = mlx5_load_one(dev);
++ err = mlx5_load_one(dev, false);
+
+ mlx5_pci_trace(dev, "Done, err = %d, device %s\n", err,
+ !err ? "recovered" : "Failed");
+@@ -1833,7 +1838,7 @@ static int mlx5_resume(struct pci_dev *pdev)
+ {
+ struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
+
+- return mlx5_load_one(dev);
++ return mlx5_load_one(dev, false);
+ }
+
+ static const struct pci_device_id mlx5_core_pci_table[] = {
+@@ -1878,7 +1883,7 @@ int mlx5_recover_device(struct mlx5_core_dev *dev)
+ return -EIO;
+ }
+
+- return mlx5_load_one(dev);
++ return mlx5_load_one(dev, true);
+ }
+
+ static struct pci_driver mlx5_core_driver = {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index 6f8baa0f2a735..2d2150fc7a0f3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -290,7 +290,7 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev);
+ int mlx5_init_one(struct mlx5_core_dev *dev);
+ void mlx5_uninit_one(struct mlx5_core_dev *dev);
+ void mlx5_unload_one(struct mlx5_core_dev *dev);
+-int mlx5_load_one(struct mlx5_core_dev *dev);
++int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery);
+
+ int mlx5_vport_get_other_func_cap(struct mlx5_core_dev *dev, u16 function_id, void *out);
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+index 5f92b16913605..aff6d4f35cd2f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+@@ -168,8 +168,6 @@ static int mlxsw_sp_dcbnl_ieee_setets(struct net_device *dev,
+ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev,
+ struct dcb_app *app)
+ {
+- int prio;
+-
+ if (app->priority >= IEEE_8021QAZ_MAX_TCS) {
+ netdev_err(dev, "APP entry with priority value %u is invalid\n",
+ app->priority);
+@@ -183,17 +181,6 @@ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev,
+ app->protocol);
+ return -EINVAL;
+ }
+-
+- /* Warn about any DSCP APP entries with the same PID. */
+- prio = fls(dcb_ieee_getapp_mask(dev, app));
+- if (prio--) {
+- if (prio < app->priority)
+- netdev_warn(dev, "Choosing priority %d for DSCP %d in favor of previously-active value of %d\n",
+- app->priority, app->protocol, prio);
+- else if (prio > app->priority)
+- netdev_warn(dev, "Ignoring new priority %d for DSCP %d in favor of current value of %d\n",
+- app->priority, app->protocol, prio);
+- }
+ break;
+
+ case IEEE_8021QAZ_APP_SEL_ETHERTYPE:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+index 47b061b99160e..ed4d0d3448f31 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+@@ -864,7 +864,7 @@ static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = {
+ .trap = MLXSW_SP_TRAP_CONTROL(LLDP, LLDP, TRAP),
+ .listeners_arr = {
+ MLXSW_RXL(mlxsw_sp_rx_ptp_listener, LLDP, TRAP_TO_CPU,
+- false, SP_LLDP, DISCARD),
++ true, SP_LLDP, DISCARD),
+ },
+ },
+ {
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index 1ab725d554a51..e50dea0e3859d 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -2256,7 +2256,7 @@ int efx_ef10_tx_tso_desc(struct efx_tx_queue *tx_queue, struct sk_buff *skb,
+ * guaranteed to satisfy the second as we only attempt TSO if
+ * inner_network_header <= 208.
+ */
+- ip_tot_len = -EFX_TSO2_MAX_HDRLEN;
++ ip_tot_len = 0x10000 - EFX_TSO2_MAX_HDRLEN;
+ EFX_WARN_ON_ONCE_PARANOID(mss + EFX_TSO2_MAX_HDRLEN +
+ (tcp->doff << 2u) > ip_tot_len);
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+index be3cb63675a5e..133c5bd2ef451 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+@@ -1084,8 +1084,9 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ unsigned char addr[ETH_ALEN] = {0xde, 0xad, 0xbe, 0xef, 0x00, 0x00};
+ struct tc_cls_u32_offload cls_u32 = { };
+ struct stmmac_packet_attrs attr = { };
+- struct tc_action **actions, *act;
++ struct tc_action **actions;
+ struct tc_u32_sel *sel;
++ struct tcf_gact *gact;
+ struct tcf_exts *exts;
+ int ret, i, nk = 1;
+
+@@ -1110,8 +1111,8 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ goto cleanup_exts;
+ }
+
+- act = kcalloc(nk, sizeof(*act), GFP_KERNEL);
+- if (!act) {
++ gact = kcalloc(nk, sizeof(*gact), GFP_KERNEL);
++ if (!gact) {
+ ret = -ENOMEM;
+ goto cleanup_actions;
+ }
+@@ -1126,9 +1127,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ exts->nr_actions = nk;
+ exts->actions = actions;
+ for (i = 0; i < nk; i++) {
+- struct tcf_gact *gact = to_gact(&act[i]);
+-
+- actions[i] = &act[i];
++ actions[i] = (struct tc_action *)&gact[i];
+ gact->tcf_action = TC_ACT_SHOT;
+ }
+
+@@ -1152,7 +1151,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ stmmac_tc_setup_cls_u32(priv, priv, &cls_u32);
+
+ cleanup_act:
+- kfree(act);
++ kfree(gact);
+ cleanup_actions:
+ kfree(actions);
+ cleanup_exts:
+diff --git a/drivers/net/ethernet/xscale/ptp_ixp46x.c b/drivers/net/ethernet/xscale/ptp_ixp46x.c
+index 39234852e01b0..20f6aa508003b 100644
+--- a/drivers/net/ethernet/xscale/ptp_ixp46x.c
++++ b/drivers/net/ethernet/xscale/ptp_ixp46x.c
+@@ -272,7 +272,7 @@ static int ptp_ixp_probe(struct platform_device *pdev)
+ ixp_clock.master_irq = platform_get_irq(pdev, 0);
+ ixp_clock.slave_irq = platform_get_irq(pdev, 1);
+ if (IS_ERR(ixp_clock.regs) ||
+- !ixp_clock.master_irq || !ixp_clock.slave_irq)
++ ixp_clock.master_irq < 0 || ixp_clock.slave_irq < 0)
+ return -ENXIO;
+
+ ixp_clock.caps = ptp_ixp_caps;
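
The probe check above is corrected because platform_get_irq() signals
failure with a negative errno and, in current kernels, never returns 0;
testing "!irq" therefore misses real errors such as -ENXIO or
-EPROBE_DEFER. A standalone toy with the same return contract (the stub
name and values are illustrative only):

        #include <stdio.h>

        /* Mirrors the assumed contract: positive IRQ number on success,
         * negative errno on failure. */
        static int toy_get_irq(int index)
        {
                return index == 0 ? 42 : -6;  /* -6 standing in for -ENXIO */
        }

        int main(void)
        {
                int irq = toy_get_irq(1);

                if (irq < 0)   /* the corrected check */
                        printf("probe aborts: %d\n", irq);
                if (!irq)      /* the old check: never true on failure */
                        printf("unreachable\n");
                return 0;
        }
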
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index fde1c492ca02a..b1dece6b96986 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2671,7 +2671,10 @@ static int netvsc_suspend(struct hv_device *dev)
+
+ /* Save the current config info */
+ ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);
+-
++ if (!ndev_ctx->saved_netvsc_dev_info) {
++ ret = -ENOMEM;
++ goto out;
++ }
+ ret = netvsc_detach(net, nvdev);
+ out:
+ rtnl_unlock();
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index dde55ccae9d73..0dc51d9c8d8a6 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -571,19 +571,23 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
+ struct ipa *ipa = endpoint->ipa;
+ u32 val = 0;
+
+- val |= HDR_ENDIANNESS_FMASK; /* big endian */
+-
+- /* A QMAP header contains a 6 bit pad field at offset 0. The RMNet
+- * driver assumes this field is meaningful in packets it receives,
+- * and assumes the header's payload length includes that padding.
+- * The RMNet driver does *not* pad packets it sends, however, so
+- * the pad field (although 0) should be ignored.
+- */
+- if (endpoint->data->qmap && !endpoint->toward_ipa) {
+- val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK;
+- /* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
+- val |= HDR_PAYLOAD_LEN_INC_PADDING_FMASK;
+- /* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
++ if (endpoint->data->qmap) {
++ /* We have a header, so we must specify its endianness */
++ val |= HDR_ENDIANNESS_FMASK; /* big endian */
++
++ /* A QMAP header contains a 6 bit pad field at offset 0.
++ * The RMNet driver assumes this field is meaningful in
++ * packets it receives, and assumes the header's payload
++ * length includes that padding. The RMNet driver does
++ * *not* pad packets it sends, however, so the pad field
++ * (although 0) should be ignored.
++ */
++ if (!endpoint->toward_ipa) {
++ val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK;
++ /* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
++ val |= HDR_PAYLOAD_LEN_INC_PADDING_FMASK;
++ /* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
++ }
+ }
+
+ /* HDR_PAYLOAD_LEN_INC_PADDING is 0 */
+@@ -741,8 +745,6 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
+
+ close_eof = endpoint->data->rx.aggr_close_eof;
+ val |= aggr_sw_eof_active_encoded(version, close_eof);
+-
+- /* AGGR_HARD_BYTE_LIMIT_ENABLE is 0 */
+ } else {
+ val |= u32_encode_bits(IPA_ENABLE_DEAGGR,
+ AGGR_EN_FMASK);
+@@ -1060,7 +1062,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
+ err_trans_free:
+ gsi_trans_free(trans);
+ err_free_pages:
+- __free_pages(page, get_order(IPA_RX_BUFFER_SIZE));
++ put_page(page);
+
+ return -ENOMEM;
+ }
+@@ -1400,7 +1402,7 @@ void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
+ struct page *page = trans->data;
+
+ if (page)
+- __free_pages(page, get_order(IPA_RX_BUFFER_SIZE));
++ put_page(page);
+ }
+ }
+
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index cfb5378bbb390..f20d8c3e91bfa 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -348,7 +348,7 @@ static int kszphy_config_reset(struct phy_device *phydev)
+ }
+ }
+
+- if (priv->led_mode >= 0)
++ if (priv->type && priv->led_mode >= 0)
+ kszphy_setup_led(phydev, priv->type->led_mode_reg, priv->led_mode);
+
+ return 0;
+@@ -364,10 +364,10 @@ static int kszphy_config_init(struct phy_device *phydev)
+
+ type = priv->type;
+
+- if (type->has_broadcast_disable)
++ if (type && type->has_broadcast_disable)
+ kszphy_broadcast_disable(phydev);
+
+- if (type->has_nand_tree_disable)
++ if (type && type->has_nand_tree_disable)
+ kszphy_nand_tree_disable(phydev);
+
+ return kszphy_config_reset(phydev);
+@@ -1365,7 +1365,7 @@ static int kszphy_probe(struct phy_device *phydev)
+
+ priv->type = type;
+
+- if (type->led_mode_reg) {
++ if (type && type->led_mode_reg) {
+ ret = of_property_read_u32(np, "micrel,led-mode",
+ &priv->led_mode);
+ if (ret)
+@@ -1386,7 +1386,8 @@ static int kszphy_probe(struct phy_device *phydev)
+ unsigned long rate = clk_get_rate(clk);
+ bool rmii_ref_clk_sel_25_mhz;
+
+- priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel;
++ if (type)
++ priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel;
+ rmii_ref_clk_sel_25_mhz = of_property_read_bool(np,
+ "micrel,rmii-reference-clock-select-25-mhz");
+
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 6b2fbdf4e0fde..34854ee537dc5 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -799,11 +799,7 @@ static int ax88772_stop(struct usbnet *dev)
+ {
+ struct asix_common_private *priv = dev->driver_priv;
+
+- /* On unplugged USB, we will get MDIO communication errors and the
+- * PHY will be set in to PHY_HALTED state.
+- */
+- if (priv->phydev->state != PHY_HALTED)
+- phy_stop(priv->phydev);
++ phy_stop(priv->phydev);
+
+ return 0;
+ }
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index a0f29482294d4..d81485cd7e90a 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1218,8 +1218,7 @@ static int smsc95xx_start_phy(struct usbnet *dev)
+
+ static int smsc95xx_stop(struct usbnet *dev)
+ {
+- if (dev->net->phydev)
+- phy_stop(dev->net->phydev);
++ phy_stop(dev->net->phydev);
+
+ return 0;
+ }
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 9a6450f796dcb..36b24ec116504 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1616,9 +1616,6 @@ void usbnet_disconnect (struct usb_interface *intf)
+ xdev->bus->bus_name, xdev->devpath,
+ dev->driver_info->description);
+
+- if (dev->driver_info->unbind)
+- dev->driver_info->unbind(dev, intf);
+-
+ net = dev->net;
+ unregister_netdev (net);
+
+@@ -1626,6 +1623,9 @@ void usbnet_disconnect (struct usb_interface *intf)
+
+ usb_scuttle_anchored_urbs(&dev->deferred);
+
++ if (dev->driver_info->unbind)
++ dev->driver_info->unbind(dev, intf);
++
+ usb_kill_urb(dev->interrupt);
+ usb_free_urb(dev->interrupt);
+ kfree(dev->padding_pkt);
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index b11aaee8b8c03..a11b31191d5aa 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -5339,13 +5339,29 @@ err:
+ static void ath10k_stop(struct ieee80211_hw *hw)
+ {
+ struct ath10k *ar = hw->priv;
++ u32 opt;
+
+ ath10k_drain_tx(ar);
+
+ mutex_lock(&ar->conf_mutex);
+ if (ar->state != ATH10K_STATE_OFF) {
+- if (!ar->hw_rfkill_on)
+- ath10k_halt(ar);
++ if (!ar->hw_rfkill_on) {
++ /* If the current driver state is RESTARTING but not yet
++ * fully RESTARTED because of incoming suspend event,
++ * then ath10k_halt() is already called via
++ * ath10k_core_restart() and should not be called here.
++ */
++ if (ar->state != ATH10K_STATE_RESTARTING) {
++ ath10k_halt(ar);
++ } else {
++ /* Suspending here, because when in RESTARTING
++ * state, ath10k_core_stop() skips
++ * ath10k_wait_for_suspend().
++ */
++ opt = WMI_PDEV_SUSPEND_AND_DISABLE_INTR;
++ ath10k_wait_for_suspend(ar, opt);
++ }
++ }
+ ar->state = ATH10K_STATE_OFF;
+ }
+ mutex_unlock(&ar->conf_mutex);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index b2dac859dfe16..d46f53061d613 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -5515,8 +5515,8 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ }
+
+ arvif = ath11k_vif_to_arvif(skb_cb->vif);
+- if (ar->allocated_vdev_map & (1LL << arvif->vdev_id) &&
+- arvif->is_started) {
++ mutex_lock(&ar->conf_mutex);
++ if (ar->allocated_vdev_map & (1LL << arvif->vdev_id)) {
+ ret = ath11k_mac_mgmt_tx_wmi(ar, arvif, skb);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to tx mgmt frame, vdev_id %d :%d\n",
+@@ -5534,6 +5534,7 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ arvif->is_started);
+ ath11k_mgmt_over_wmi_tx_drop(ar, skb);
+ }
++ mutex_unlock(&ar->conf_mutex);
+ }
+ }
+
+@@ -7068,6 +7069,7 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ath11k *ar = hw->priv;
+ struct ath11k_base *ab = ar->ab;
+ struct ath11k_vif *arvif = (void *)vif->drv_priv;
++ struct ath11k_peer *peer;
+ int ret;
+
+ mutex_lock(&ar->conf_mutex);
+@@ -7079,9 +7081,13 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ WARN_ON(!arvif->is_started);
+
+ if (ab->hw_params.vdev_start_delay &&
+- arvif->vdev_type == WMI_VDEV_TYPE_MONITOR &&
+- ath11k_peer_find_by_addr(ab, ar->mac_addr))
+- ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr);
++ arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
++ spin_lock_bh(&ab->base_lock);
++ peer = ath11k_peer_find_by_addr(ab, ar->mac_addr);
++ spin_unlock_bh(&ab->base_lock);
++ if (peer)
++ ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr);
++ }
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
+ ret = ath11k_mac_monitor_stop(ar);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index 903758751c99a..8a3ff12057e89 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -191,6 +191,7 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ {
+ struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+ u32 window_start;
++ int ret = 0;
+
+ /* for offset beyond BAR + 4K - 32, may
+ * need to wakeup MHI to access.
+@@ -198,7 +199,7 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ if (ab->hw_params.wakeup_mhi &&
+ test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ offset >= ACCESS_ALWAYS_OFF)
+- mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
++ ret = mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+
+ if (offset < WINDOW_START) {
+ iowrite32(value, ab->mem + offset);
+@@ -222,7 +223,8 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+
+ if (ab->hw_params.wakeup_mhi &&
+ test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+- offset >= ACCESS_ALWAYS_OFF)
++ offset >= ACCESS_ALWAYS_OFF &&
++ !ret)
+ mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+ }
+
+@@ -230,6 +232,7 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+ {
+ struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+ u32 val, window_start;
++ int ret = 0;
+
+ /* for offset beyond BAR + 4K - 32, may
+ * need to wakeup MHI to access.
+@@ -237,7 +240,7 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+ if (ab->hw_params.wakeup_mhi &&
+ test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ offset >= ACCESS_ALWAYS_OFF)
+- mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
++ ret = mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+
+ if (offset < WINDOW_START) {
+ val = ioread32(ab->mem + offset);
+@@ -261,7 +264,8 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+
+ if (ab->hw_params.wakeup_mhi &&
+ test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+- offset >= ACCESS_ALWAYS_OFF)
++ offset >= ACCESS_ALWAYS_OFF &&
++ !ret)
+ mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+
+ return val;
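
Both ath11k accessors now remember whether mhi_device_get_sync() actually
succeeded and only then issue the matching mhi_device_put(), keeping the
MHI runtime-PM reference count balanced. The only-release-what-you-acquired
shape, standalone (all names below are illustrative stubs):

        #include <stdio.h>

        static int get_ok;  /* flip to simulate wakeup success/failure */

        static int stub_get(void)  { puts("get"); return get_ok ? 0 : -5; }
        static void stub_put(void) { puts("put"); }

        static void reg_access(int needs_wakeup)
        {
                int ret = 0;

                if (needs_wakeup)
                        ret = stub_get();

                /* ... the MMIO access itself ... */

                if (needs_wakeup && !ret)  /* put only after a good get */
                        stub_put();
        }

        int main(void)
        {
                reg_access(1);  /* failed get: no put */
                get_ok = 1;
                reg_access(1);  /* successful get: balanced put */
                return 0;
        }
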
+diff --git a/drivers/net/wireless/ath/ath11k/spectral.c b/drivers/net/wireless/ath/ath11k/spectral.c
+index 4100cc1449a2b..e997426a1b4be 100644
+--- a/drivers/net/wireless/ath/ath11k/spectral.c
++++ b/drivers/net/wireless/ath/ath11k/spectral.c
+@@ -212,7 +212,10 @@ static int ath11k_spectral_scan_config(struct ath11k *ar,
+ return -ENODEV;
+
+ arvif->spectral_enabled = (mode != ATH11K_SPECTRAL_DISABLED);
++
++ spin_lock_bh(&ar->spectral.lock);
+ ar->spectral.mode = mode;
++ spin_unlock_bh(&ar->spectral.lock);
+
+ ret = ath11k_wmi_vdev_spectral_enable(ar, arvif->vdev_id,
+ ATH11K_WMI_SPECTRAL_TRIGGER_CMD_CLEAR,
+@@ -843,9 +846,6 @@ static inline void ath11k_spectral_ring_free(struct ath11k *ar)
+ {
+ struct ath11k_spectral *sp = &ar->spectral;
+
+- if (!sp->enabled)
+- return;
+-
+ ath11k_dbring_srng_cleanup(ar, &sp->rx_ring);
+ ath11k_dbring_buf_cleanup(ar, &sp->rx_ring);
+ }
+@@ -897,15 +897,16 @@ void ath11k_spectral_deinit(struct ath11k_base *ab)
+ if (!sp->enabled)
+ continue;
+
+- ath11k_spectral_debug_unregister(ar);
+- ath11k_spectral_ring_free(ar);
++ mutex_lock(&ar->conf_mutex);
++ ath11k_spectral_scan_config(ar, ATH11K_SPECTRAL_DISABLED);
++ mutex_unlock(&ar->conf_mutex);
+
+ spin_lock_bh(&sp->lock);
+-
+- sp->mode = ATH11K_SPECTRAL_DISABLED;
+ sp->enabled = false;
+-
+ spin_unlock_bh(&sp->lock);
++
++ ath11k_spectral_debug_unregister(ar);
++ ath11k_spectral_ring_free(ar);
+ }
+ }
+
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 22921673e956e..4ad3fe7d7d1f2 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -5616,9 +5616,9 @@ static int ath11k_wmi_tlv_rssi_chain_parse(struct ath11k_base *ab,
+ arvif->bssid,
+ NULL);
+ if (!sta) {
+- ath11k_warn(ab, "not found station for bssid %pM\n",
+- arvif->bssid);
+- ret = -EPROTO;
++ ath11k_dbg(ab, ATH11K_DBG_WMI,
++ "not found station of bssid %pM for rssi chain\n",
++ arvif->bssid);
+ goto exit;
+ }
+
+@@ -5716,8 +5716,9 @@ static int ath11k_wmi_tlv_fw_stats_data_parse(struct ath11k_base *ab,
+ "wmi stats vdev id %d snr %d\n",
+ src->vdev_id, src->beacon_snr);
+ } else {
+- ath11k_warn(ab, "not found station for bssid %pM\n",
+- arvif->bssid);
++ ath11k_dbg(ab, ATH11K_DBG_WMI,
++ "not found station of bssid %pM for vdev stat\n",
++ arvif->bssid);
+ }
+ }
+
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.h b/drivers/net/wireless/ath/ath11k/wmi.h
+index 2f26ec1a8aa38..8173570975e40 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.h
++++ b/drivers/net/wireless/ath/ath11k/wmi.h
+@@ -3087,9 +3087,6 @@ enum scan_dwelltime_adaptive_mode {
+ SCAN_DWELL_MODE_STATIC = 4
+ };
+
+-#define WLAN_SCAN_MAX_NUM_SSID 10
+-#define WLAN_SCAN_MAX_NUM_BSSID 10
+-
+ #define WLAN_SSID_MAX_LEN 32
+
+ struct element_info {
+@@ -3104,7 +3101,6 @@ struct wlan_ssid {
+
+ #define WMI_IE_BITMAP_SIZE 8
+
+-#define WMI_SCAN_MAX_NUM_SSID 0x0A
+ /* prefix used by scan requestor ids on the host */
+ #define WMI_HOST_SCAN_REQUESTOR_ID_PREFIX 0xA000
+
+@@ -3112,10 +3108,6 @@ struct wlan_ssid {
+ /* host cycles through the lower 12 bits to generate ids */
+ #define WMI_HOST_SCAN_REQ_ID_PREFIX 0xA000
+
+-#define WLAN_SCAN_PARAMS_MAX_SSID 16
+-#define WLAN_SCAN_PARAMS_MAX_BSSID 4
+-#define WLAN_SCAN_PARAMS_MAX_IE_LEN 256
+-
+ /* Values lower than this may be refused by some firmware revisions with a scan
+ * completion with a timedout reason.
+ */
+@@ -3311,8 +3303,8 @@ struct scan_req_params {
+ u32 n_probes;
+ u32 *chan_list;
+ u32 notify_scan_events;
+- struct wlan_ssid ssid[WLAN_SCAN_MAX_NUM_SSID];
+- struct wmi_mac_addr bssid_list[WLAN_SCAN_MAX_NUM_BSSID];
++ struct wlan_ssid ssid[WLAN_SCAN_PARAMS_MAX_SSID];
++ struct wmi_mac_addr bssid_list[WLAN_SCAN_PARAMS_MAX_BSSID];
+ struct element_info extraie;
+ struct element_info htcap;
+ struct element_info vhtcap;
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index b0a4ca3559fd8..abed1effd95ca 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -5615,7 +5615,7 @@ unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah,
+
+ static u8 ar9003_get_eepmisc(struct ath_hw *ah)
+ {
+- return ah->eeprom.map4k.baseEepHeader.eepMisc;
++ return ah->eeprom.ar9300_eep.baseEepHeader.opCapFlags.eepMisc;
+ }
+
+ const struct eeprom_ops eep_ar9300_ops = {
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
+index a171dbb29fbb6..ad949eb02f3d2 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h
++++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
+@@ -720,7 +720,7 @@
+ #define AR_CH0_TOP2 (AR_SREV_9300(ah) ? 0x1628c : \
+ (AR_SREV_9462(ah) ? 0x16290 : 0x16284))
+ #define AR_CH0_TOP2_XPABIASLVL (AR_SREV_9561(ah) ? 0x1e00 : 0xf000)
+-#define AR_CH0_TOP2_XPABIASLVL_S 12
++#define AR_CH0_TOP2_XPABIASLVL_S (AR_SREV_9561(ah) ? 9 : 12)
+
+ #define AR_CH0_XTAL (AR_SREV_9300(ah) ? 0x16294 : \
+ ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16298 : \
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 6a850a0bfa8ad..a23eaca0326d1 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -1016,6 +1016,14 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
+ goto rx_next;
+ }
+
++ if (rxstatus->rs_keyix >= ATH_KEYMAX &&
++ rxstatus->rs_keyix != ATH9K_RXKEYIX_INVALID) {
++ ath_dbg(common, ANY,
++ "Invalid keyix, dropping (keyix: %d)\n",
++ rxstatus->rs_keyix);
++ goto rx_next;
++ }
++
+ /* Get the RX status information */
+
+ memset(rx_status, 0, sizeof(struct ieee80211_rx_status));
+diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c
+index 1b76f4434c069..791f9f120af3a 100644
+--- a/drivers/net/wireless/ath/carl9170/tx.c
++++ b/drivers/net/wireless/ath/carl9170/tx.c
+@@ -1558,6 +1558,9 @@ static struct carl9170_vif_info *carl9170_pick_beaconing_vif(struct ar9170 *ar)
+ goto out;
+ }
+ } while (ar->beacon_enabled && i--);
++
++ /* no entry found in list */
++ return NULL;
+ }
+
+ out:
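
The carl9170 change adds the return that was missing when the beacon loop
finishes without a match: previously execution fell through to the out:
path with the list cursor no longer pointing at a valid entry. The same bug
shape in miniature:

        #include <stddef.h>

        static const int *find_negative(const int *v, size_t n)
        {
                for (size_t i = 0; i < n; i++)
                        if (v[i] < 0)
                                return &v[i];

                return NULL;  /* the analogue of the added return */
        }

        int main(void)
        {
                int v[3] = { 1, -2, 3 };

                return find_negative(v, 3) == &v[1] ? 0 : 1;
        }
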
+diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c
+index cf3ccf4ddfe72..aa5c994656749 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_n.c
++++ b/drivers/net/wireless/broadcom/b43/phy_n.c
+@@ -582,7 +582,7 @@ static void b43_nphy_adjust_lna_gain_table(struct b43_wldev *dev)
+ u16 data[4];
+ s16 gain[2];
+ u16 minmax[2];
+- static const u16 lna_gain[4] = { -2, 10, 19, 25 };
++ static const s16 lna_gain[4] = { -2, 10, 19, 25 };
+
+ if (nphy->hang_avoid)
+ b43_nphy_stay_in_carrier_search(dev, 1);
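
The b43 table fix matters because the array holds -2: stored in a u16 it
becomes 65534, and every gain computed from the table inherits that huge
offset. Demonstrated standalone:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                static const uint16_t bad[]  = { (uint16_t)-2, 10, 19, 25 };
                static const int16_t  good[] = { -2, 10, 19, 25 };

                /* Both promote to int for arithmetic: 65534 vs. -2. */
                printf("bad: %d  good: %d\n", bad[0] + 10, good[0] + 10);
                return 0;
        }
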
+diff --git a/drivers/net/wireless/broadcom/b43legacy/phy.c b/drivers/net/wireless/broadcom/b43legacy/phy.c
+index 05404fbd1e70b..c1395e622759e 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/phy.c
++++ b/drivers/net/wireless/broadcom/b43legacy/phy.c
+@@ -1123,7 +1123,7 @@ void b43legacy_phy_lo_b_measure(struct b43legacy_wldev *dev)
+ struct b43legacy_phy *phy = &dev->phy;
+ u16 regstack[12] = { 0 };
+ u16 mls;
+- u16 fval;
++ s16 fval;
+ int i;
+ int j;
+
+diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
+index 36d1e6b2568db..4aec1fce1ae29 100644
+--- a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
++++ b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
+@@ -383,7 +383,7 @@ netdev_tx_t libipw_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ /* Each fragment may need to have room for encryption
+ * pre/postfix */
+- if (host_encrypt)
++ if (host_encrypt && crypt && crypt->ops)
+ bytes_per_frag -= crypt->ops->extra_mpdu_prefix_len +
+ crypt->ops->extra_mpdu_postfix_len;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index c17ab53fcd8ff..347921000abb4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -902,6 +902,9 @@ int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt,
+ {
+ int i, j;
+
++ if (!fwrt->geo_enabled)
++ return -ENODATA;
++
+ if (!iwl_sar_geo_support(fwrt))
+ return -EOPNOTSUPP;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mei/main.c b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+index 2f7f0f994ca32..208c4373f07d7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mei/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+@@ -493,6 +493,7 @@ void iwl_mei_add_data_to_ring(struct sk_buff *skb, bool cb_tx)
+ if (cb_tx) {
+ struct iwl_sap_cb_data *cb_hdr = skb_push(skb, sizeof(*cb_hdr));
+
++ memset(cb_hdr, 0, sizeof(*cb_hdr));
+ cb_hdr->hdr.type = cpu_to_le16(SAP_MSG_CB_DATA_PACKET);
+ cb_hdr->hdr.len = cpu_to_le16(skb->len - sizeof(cb_hdr->hdr));
+ cb_hdr->hdr.seq_num = cpu_to_le32(atomic_inc_return(&mei->sap_seq_no));
+@@ -1019,6 +1020,8 @@ static void iwl_mei_handle_sap_data(struct mei_cl_device *cldev,
+
+ /* We need enough room for the WiFi header + SNAP + IV */
+ skb = netdev_alloc_skb(netdev, len + QOS_HDR_IV_SNAP_LEN);
++ if (!skb)
++ continue;
+
+ skb_reserve(skb, QOS_HDR_IV_SNAP_LEN);
+ ethhdr = skb_push(skb, sizeof(*ethhdr));
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+index b2ea2fca5376f..b9bd81242b216 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+@@ -563,6 +563,9 @@ static void iwl_mvm_power_get_vifs_iterator(void *_data, u8 *mac,
+ struct iwl_power_vifs *power_iterator = _data;
+ bool active = mvmvif->phy_ctxt && mvmvif->phy_ctxt->id < NUM_PHY_CTX;
+
++ if (!mvmvif->uploaded)
++ return;
++
+ switch (ieee80211_vif_type_p2p(vif)) {
+ case NL80211_IFTYPE_P2P_DEVICE:
+ break;
+diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c
+index d2ee6469e67bb..3fa25cd64cda0 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11h.c
++++ b/drivers/net/wireless/marvell/mwifiex/11h.c
+@@ -303,5 +303,7 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work)
+
+ mwifiex_dbg(priv->adapter, MSG,
+ "indicating channel switch completion to kernel\n");
++ mutex_lock(&priv->wdev.mtx);
+ cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef);
++ mutex_unlock(&priv->wdev.mtx);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/agg-rx.c b/drivers/net/wireless/mediatek/mt76/agg-rx.c
+index 72622220051bb..6c8b441945791 100644
+--- a/drivers/net/wireless/mediatek/mt76/agg-rx.c
++++ b/drivers/net/wireless/mediatek/mt76/agg-rx.c
+@@ -162,8 +162,9 @@ void mt76_rx_aggr_reorder(struct sk_buff *skb, struct sk_buff_head *frames)
+ if (!sta)
+ return;
+
+- if (!status->aggr && !(status->flag & RX_FLAG_8023)) {
+- mt76_rx_aggr_check_ctl(skb, frames);
++ if (!status->aggr) {
++ if (!(status->flag & RX_FLAG_8023))
++ mt76_rx_aggr_check_ctl(skb, frames);
+ return;
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 8bb1c7ab5b502..d17a0503d6404 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -248,6 +248,8 @@ static void mt76_init_stream_cap(struct mt76_phy *phy,
+ vht_cap->cap |= IEEE80211_VHT_CAP_TXSTBC;
+ else
+ vht_cap->cap &= ~IEEE80211_VHT_CAP_TXSTBC;
++ vht_cap->cap |= IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN |
++ IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN;
+
+ for (i = 0; i < 8; i++) {
+ if (i < nstream)
+@@ -323,8 +325,6 @@ mt76_init_sband(struct mt76_phy *phy, struct mt76_sband *msband,
+ vht_cap->cap |= IEEE80211_VHT_CAP_RXLDPC |
+ IEEE80211_VHT_CAP_RXSTBC_1 |
+ IEEE80211_VHT_CAP_SHORT_GI_80 |
+- IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN |
+- IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN |
+ (3 << IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT);
+
+ return 0;
+@@ -1266,7 +1266,7 @@ mt76_sta_add(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ continue;
+
+ mtxq = (struct mt76_txq *)sta->txq[i]->drv_priv;
+- mtxq->wcid = wcid;
++ mtxq->wcid = wcid->idx;
+ }
+
+ ewma_signal_init(&wcid->rssi);
+@@ -1344,7 +1344,9 @@ void mt76_sta_pre_rcu_remove(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv;
+
+ mutex_lock(&dev->mutex);
++ spin_lock_bh(&dev->status_lock);
+ rcu_assign_pointer(dev->wcid[wcid->idx], NULL);
++ spin_unlock_bh(&dev->status_lock);
+ mutex_unlock(&dev->mutex);
+ }
+ EXPORT_SYMBOL_GPL(mt76_sta_pre_rcu_remove);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 5197fcb066492..1d984be252887 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -267,7 +267,7 @@ struct mt76_wcid {
+ };
+
+ struct mt76_txq {
+- struct mt76_wcid *wcid;
++ u16 wcid;
+
+ u16 agg_ssn;
+ bool send_bar;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/main.c b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+index 83c5eec5b1633..1d098e9799ddc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+@@ -75,7 +75,7 @@ mt7603_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ mt7603_wtbl_init(dev, idx, mvif->idx, bc_addr);
+
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+- mtxq->wcid = &mvif->sta.wcid;
++ mtxq->wcid = idx;
+ rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+
+ out:
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index ce902b107ce33..974c8116b6e6c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -234,7 +234,7 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
+ rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ if (vif->txq) {
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+- mtxq->wcid = &mvif->sta.wcid;
++ mtxq->wcid = idx;
+ }
+
+ ret = mt7615_mcu_add_dev_info(phy, vif, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+index dd30f537676da..be1d27de993ae 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+@@ -292,7 +292,8 @@ mt76x02_vif_init(struct mt76x02_dev *dev, struct ieee80211_vif *vif,
+ mt76_packet_id_init(&mvif->group_wcid);
+
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+- mtxq->wcid = &mvif->group_wcid;
++ rcu_assign_pointer(dev->mt76.wcid[MT_VIF_WCID(idx)], &mvif->group_wcid);
++ mtxq->wcid = MT_VIF_WCID(idx);
+ }
+
+ int
+@@ -345,6 +346,7 @@ void mt76x02_remove_interface(struct ieee80211_hw *hw,
+ struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv;
+
+ dev->mt76.vif_mask &= ~BIT(mvif->idx);
++ rcu_assign_pointer(dev->mt76.wcid[mvif->group_wcid.idx], NULL);
+ mt76_packet_id_flush(&dev->mt76, &mvif->group_wcid);
+ }
+ EXPORT_SYMBOL_GPL(mt76x02_remove_interface);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index e96d1c31dd36b..2ff054187daca 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -87,7 +87,7 @@ mt7915_muru_debug_set(void *data, u64 val)
+ struct mt7915_dev *dev = data;
+
+ dev->muru_debug = val;
+- mt7915_mcu_muru_debug_set(dev, data);
++ mt7915_mcu_muru_debug_set(dev, dev->muru_debug);
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index e4c300aa15260..efc04f7a3c716 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -982,6 +982,7 @@ mt7915_mac_write_txwi_8023(struct mt7915_dev *dev, __le32 *txwi,
+
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+ u8 fc_type, fc_stype;
++ u16 ethertype;
+ bool wmm = false;
+ u32 val;
+
+@@ -995,7 +996,8 @@ mt7915_mac_write_txwi_8023(struct mt7915_dev *dev, __le32 *txwi,
+ val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
+ FIELD_PREP(MT_TXD1_TID, tid);
+
+- if (be16_to_cpu(skb->protocol) >= ETH_P_802_3_MIN)
++ ethertype = get_unaligned_be16(&skb->data[12]);
++ if (ethertype >= ETH_P_802_3_MIN)
+ val |= MT_TXD1_ETH_802_3;
+
+ txwi[1] |= cpu_to_le32(val);
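
This mt7915 fix (mirrored for mt7921 below) reads the EtherType straight
from the frame, bytes 12..13 in big-endian order and possibly unaligned,
instead of trusting skb->protocol, which can disagree with the bytes the
hardware will actually transmit for 802.2/SNAP frames. A standalone reader
for the field:

        #include <stdint.h>
        #include <stdio.h>

        #define ETH_P_802_3_MIN 0x0600

        /* Bytes 12..13: EtherType if >= 0x0600, else an 802.3 length. */
        static uint16_t frame_ethertype(const uint8_t *frame)
        {
                return (uint16_t)((frame[12] << 8) | frame[13]);
        }

        int main(void)
        {
                uint8_t frame[14] = { 0 };

                frame[12] = 0x08;  /* 0x0800 = IPv4 */
                printf("%s\n", frame_ethertype(frame) >= ETH_P_802_3_MIN
                               ? "Ethernet II" : "802.3 length");
                return 0;
        }
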
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index 8ac6f59af1740..815ade60997f9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -245,7 +245,7 @@ static int mt7915_add_interface(struct ieee80211_hw *hw,
+ rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ if (vif->txq) {
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+- mtxq->wcid = &mvif->sta.wcid;
++ mtxq->wcid = idx;
+ }
+
+ if (vif->type != NL80211_IFTYPE_AP &&
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index 12ca545664614..4f62dbb936db7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -279,7 +279,7 @@ struct mt7915_dev {
+ void *cal;
+
+ struct {
+- u8 table_mask;
++ u16 table_mask;
+ u8 n_agrt;
+ } twt;
+ };
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 84f72dd1bf930..eac2d85b5864b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -698,7 +698,7 @@ mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ status->nss =
+ FIELD_GET(MT_PRXV_NSTS, v0) + 1;
+ status->encoding = RX_ENC_VHT;
+- if (i > 9)
++ if (i > 11)
+ return -EINVAL;
+ break;
+ case MT_PHY_TYPE_HE_MU:
+@@ -816,6 +816,7 @@ mt7921_mac_write_txwi_8023(struct mt7921_dev *dev, __le32 *txwi,
+ {
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+ u8 fc_type, fc_stype;
++ u16 ethertype;
+ bool wmm = false;
+ u32 val;
+
+@@ -829,7 +830,8 @@ mt7921_mac_write_txwi_8023(struct mt7921_dev *dev, __le32 *txwi,
+ val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
+ FIELD_PREP(MT_TXD1_TID, tid);
+
+- if (be16_to_cpu(skb->protocol) >= ETH_P_802_3_MIN)
++ ethertype = get_unaligned_be16(&skb->data[12]);
++ if (ethertype >= ETH_P_802_3_MIN)
+ val |= MT_TXD1_ETH_802_3;
+
+ txwi[1] |= cpu_to_le32(val);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 4abb7a6e775af..060fd8d1b2398 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -329,7 +329,7 @@ static int mt7921_add_interface(struct ieee80211_hw *hw,
+ rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ if (vif->txq) {
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+- mtxq->wcid = &mvif->sta.wcid;
++ mtxq->wcid = idx;
+ }
+
+ out:
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 9a71a5d864819..2dec9750c6aa8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -118,7 +118,6 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
+ mt7921_mcu_exit(dev);
+
+ tasklet_disable(&dev->irq_tasklet);
+- mt76_free_device(&dev->mt76);
+ }
+
+ static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
+@@ -300,8 +299,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ dev->bus_ops = dev->mt76.bus;
+ bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
+ GFP_KERNEL);
+- if (!bus_ops)
+- return -ENOMEM;
++ if (!bus_ops) {
++ ret = -ENOMEM;
++ goto err_free_dev;
++ }
+
+ bus_ops->rr = mt7921_rr;
+ bus_ops->wr = mt7921_wr;
+@@ -310,7 +311,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+
+ ret = __mt7921e_mcu_drv_pmctrl(dev);
+ if (ret)
+- return ret;
++ goto err_free_dev;
+
+ mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
+ (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
+@@ -352,6 +353,7 @@ static void mt7921_pci_remove(struct pci_dev *pdev)
+
+ mt7921e_unregister_device(dev);
+ devm_free_irq(&pdev->dev, pdev->irq, dev);
++ mt76_free_device(&dev->mt76);
+ pci_free_irq_vectors(pdev);
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 6b8c9dc805425..5ed2d60debfb3 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -120,7 +120,7 @@ mt76_tx_status_skb_add(struct mt76_dev *dev, struct mt76_wcid *wcid,
+
+ memset(cb, 0, sizeof(*cb));
+
+- if (!wcid)
++ if (!wcid || !rcu_access_pointer(dev->wcid[wcid->idx]))
+ return MT_PACKET_ID_NO_ACK;
+
+ if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+@@ -436,12 +436,11 @@ mt76_txq_stopped(struct mt76_queue *q)
+
+ static int
+ mt76_txq_send_burst(struct mt76_phy *phy, struct mt76_queue *q,
+- struct mt76_txq *mtxq)
++ struct mt76_txq *mtxq, struct mt76_wcid *wcid)
+ {
+ struct mt76_dev *dev = phy->dev;
+ struct ieee80211_txq *txq = mtxq_to_txq(mtxq);
+ enum mt76_txq_id qid = mt76_txq_get_qid(txq);
+- struct mt76_wcid *wcid = mtxq->wcid;
+ struct ieee80211_tx_info *info;
+ struct sk_buff *skb;
+ int n_frames = 1;
+@@ -521,8 +520,8 @@ mt76_txq_schedule_list(struct mt76_phy *phy, enum mt76_txq_id qid)
+ break;
+
+ mtxq = (struct mt76_txq *)txq->drv_priv;
+- wcid = mtxq->wcid;
+- if (wcid && test_bit(MT_WCID_FLAG_PS, &wcid->flags))
++ wcid = rcu_dereference(dev->wcid[mtxq->wcid]);
++ if (!wcid || test_bit(MT_WCID_FLAG_PS, &wcid->flags))
+ continue;
+
+ spin_lock_bh(&q->lock);
+@@ -541,7 +540,7 @@ mt76_txq_schedule_list(struct mt76_phy *phy, enum mt76_txq_id qid)
+ }
+
+ if (!mt76_txq_stopped(q))
+- n_frames = mt76_txq_send_burst(phy, q, mtxq);
++ n_frames = mt76_txq_send_burst(phy, q, mtxq, wcid);
+
+ spin_unlock_bh(&q->lock);
+
+diff --git a/drivers/net/wireless/microchip/wilc1000/mon.c b/drivers/net/wireless/microchip/wilc1000/mon.c
+index 6bd63934c2d84..b5a1b65c087ca 100644
+--- a/drivers/net/wireless/microchip/wilc1000/mon.c
++++ b/drivers/net/wireless/microchip/wilc1000/mon.c
+@@ -233,7 +233,7 @@ struct net_device *wilc_wfi_init_mon_interface(struct wilc *wl,
+ wl->monitor_dev->netdev_ops = &wilc_wfi_netdev_ops;
+ wl->monitor_dev->needs_free_netdev = true;
+
+- if (cfg80211_register_netdevice(wl->monitor_dev)) {
++ if (register_netdevice(wl->monitor_dev)) {
+ netdev_err(real_dev, "register_netdevice failed\n");
+ free_netdev(wl->monitor_dev);
+ return NULL;
+@@ -251,7 +251,7 @@ void wilc_wfi_deinit_mon_interface(struct wilc *wl, bool rtnl_locked)
+ return;
+
+ if (rtnl_locked)
+- cfg80211_unregister_netdevice(wl->monitor_dev);
++ unregister_netdevice(wl->monitor_dev);
+ else
+ unregister_netdev(wl->monitor_dev);
+ wl->monitor_dev = NULL;
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
+index 2477e18c7caec..025619cd14e82 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
+@@ -460,8 +460,10 @@ static void rtl8180_tx(struct ieee80211_hw *dev,
+ struct rtl8180_priv *priv = dev->priv;
+ struct rtl8180_tx_ring *ring;
+ struct rtl8180_tx_desc *entry;
++ unsigned int prio = 0;
+ unsigned long flags;
+- unsigned int idx, prio, hw_prio;
++ unsigned int idx, hw_prio;
++
+ dma_addr_t mapping;
+ u32 tx_flags;
+ u8 rc_flags;
+@@ -470,7 +472,9 @@ static void rtl8180_tx(struct ieee80211_hw *dev,
+ /* do arithmetic and then convert to le16 */
+ u16 frame_duration = 0;
+
+- prio = skb_get_queue_mapping(skb);
++ /* rtl8180/rtl8185 only has one usable tx queue */
++ if (dev->queues > IEEE80211_AC_BK)
++ prio = skb_get_queue_mapping(skb);
+ ring = &priv->tx_ring[prio];
+
+ mapping = dma_map_single(&priv->pdev->dev, skb->data, skb->len,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index 86a2368732547..a8eebafb9a7ee 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -1014,7 +1014,7 @@ int rtl_usb_probe(struct usb_interface *intf,
+ hw = ieee80211_alloc_hw(sizeof(struct rtl_priv) +
+ sizeof(struct rtl_usb_priv), &rtl_ops);
+ if (!hw) {
+- WARN_ONCE(true, "rtl_usb: ieee80211 alloc failed\n");
++ pr_warn("rtl_usb: ieee80211 alloc failed\n");
+ return -ENOMEM;
+ }
+ rtlpriv = hw->priv;
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index 80d4761796b15..0f16f649e03f4 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -512,6 +512,7 @@ static s8 get_cck_rx_pwr(struct rtw_dev *rtwdev, u8 lna_idx, u8 vga_idx)
+ static void query_phy_status_page0(struct rtw_dev *rtwdev, u8 *phy_status,
+ struct rtw_rx_pkt_stat *pkt_stat)
+ {
++ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ s8 rx_power;
+ u8 lna_idx = 0;
+ u8 vga_idx = 0;
+@@ -523,6 +524,7 @@ static void query_phy_status_page0(struct rtw_dev *rtwdev, u8 *phy_status,
+
+ pkt_stat->rx_power[RF_PATH_A] = rx_power;
+ pkt_stat->rssi = rtw_phy_rf_power_2_rssi(pkt_stat->rx_power, 1);
++ dm_info->rssi[RF_PATH_A] = pkt_stat->rssi;
+ pkt_stat->bw = RTW_CHANNEL_WIDTH_20;
+ pkt_stat->signal_power = rx_power;
+ }
+@@ -530,6 +532,7 @@ static void query_phy_status_page0(struct rtw_dev *rtwdev, u8 *phy_status,
+ static void query_phy_status_page1(struct rtw_dev *rtwdev, u8 *phy_status,
+ struct rtw_rx_pkt_stat *pkt_stat)
+ {
++ struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ u8 rxsc, bw;
+ s8 min_rx_power = -120;
+
+@@ -549,6 +552,7 @@ static void query_phy_status_page1(struct rtw_dev *rtwdev, u8 *phy_status,
+
+ pkt_stat->rx_power[RF_PATH_A] = GET_PHY_STAT_P1_PWDB_A(phy_status) - 110;
+ pkt_stat->rssi = rtw_phy_rf_power_2_rssi(pkt_stat->rx_power, 1);
++ dm_info->rssi[RF_PATH_A] = pkt_stat->rssi;
+ pkt_stat->bw = bw;
+ pkt_stat->signal_power = max(pkt_stat->rx_power[RF_PATH_A],
+ min_rx_power);
+diff --git a/drivers/net/wireless/realtek/rtw88/rx.c b/drivers/net/wireless/realtek/rtw88/rx.c
+index d2d607e22198d..84aedabdf2853 100644
+--- a/drivers/net/wireless/realtek/rtw88/rx.c
++++ b/drivers/net/wireless/realtek/rtw88/rx.c
+@@ -158,7 +158,8 @@ void rtw_rx_fill_rx_status(struct rtw_dev *rtwdev,
+ memset(rx_status, 0, sizeof(*rx_status));
+ rx_status->freq = hw->conf.chandef.chan->center_freq;
+ rx_status->band = hw->conf.chandef.chan->band;
+- if (rtw_fw_feature_check(&rtwdev->fw, FW_FEATURE_SCAN_OFFLOAD))
++ if (rtw_fw_feature_check(&rtwdev->fw, FW_FEATURE_SCAN_OFFLOAD) &&
++ test_bit(RTW_FLAG_SCANNING, rtwdev->flags))
+ rtw_set_rx_freq_by_pktstat(pkt_stat, rx_status);
+ if (pkt_stat->crc_err)
+ rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index 147009888de04..777ad4e8f45f4 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -1872,6 +1872,11 @@ void rtw89_phy_cfo_parse(struct rtw89_dev *rtwdev, s16 cfo_val,
+ struct rtw89_cfo_tracking_info *cfo = &rtwdev->cfo_tracking;
+ u8 macid = phy_ppdu->mac_id;
+
++ if (macid >= CFO_TRACK_MAX_USER) {
++ rtw89_warn(rtwdev, "mac_id %d is out of range\n", macid);
++ return;
++ }
++
+ cfo->cfo_tail[macid] += cfo_val;
+ cfo->cfo_cnt[macid]++;
+ cfo->packet_count++;
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index c922f10d0d7b9..7e213f8ddc98b 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -241,7 +241,7 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx,
+ }
+ EXPORT_SYMBOL(st21nfca_hci_se_io);
+
+-static void st21nfca_se_wt_timeout(struct timer_list *t)
++static void st21nfca_se_wt_work(struct work_struct *work)
+ {
+ /*
+ * No answer from the secure element
+@@ -254,8 +254,9 @@ static void st21nfca_se_wt_timeout(struct timer_list *t)
+ */
+ /* hardware reset managed through VCC_UICC_OUT power supply */
+ u8 param = 0x01;
+- struct st21nfca_hci_info *info = from_timer(info, t,
+- se_info.bwi_timer);
++ struct st21nfca_hci_info *info = container_of(work,
++ struct st21nfca_hci_info,
++ se_info.timeout_work);
+
+ info->se_info.bwi_active = false;
+
+@@ -271,6 +272,13 @@ static void st21nfca_se_wt_timeout(struct timer_list *t)
+ info->se_info.cb(info->se_info.cb_context, NULL, 0, -ETIME);
+ }
+
++static void st21nfca_se_wt_timeout(struct timer_list *t)
++{
++ struct st21nfca_hci_info *info = from_timer(info, t, se_info.bwi_timer);
++
++ schedule_work(&info->se_info.timeout_work);
++}
++
+ static void st21nfca_se_activation_timeout(struct timer_list *t)
+ {
+ struct st21nfca_hci_info *info = from_timer(info, t,
+@@ -360,6 +368,7 @@ int st21nfca_apdu_reader_event_received(struct nfc_hci_dev *hdev,
+ switch (event) {
+ case ST21NFCA_EVT_TRANSMIT_DATA:
+ del_timer_sync(&info->se_info.bwi_timer);
++ cancel_work_sync(&info->se_info.timeout_work);
+ info->se_info.bwi_active = false;
+ r = nfc_hci_send_event(hdev, ST21NFCA_DEVICE_MGNT_GATE,
+ ST21NFCA_EVT_SE_END_OF_APDU_TRANSFER, NULL, 0);
+@@ -389,6 +398,7 @@ void st21nfca_se_init(struct nfc_hci_dev *hdev)
+ struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev);
+
+ init_completion(&info->se_info.req_completion);
++ INIT_WORK(&info->se_info.timeout_work, st21nfca_se_wt_work);
+ /* initialize timers */
+ timer_setup(&info->se_info.bwi_timer, st21nfca_se_wt_timeout, 0);
+ info->se_info.bwi_active = false;
+@@ -416,6 +426,7 @@ void st21nfca_se_deinit(struct nfc_hci_dev *hdev)
+ if (info->se_info.se_active)
+ del_timer_sync(&info->se_info.se_active_timer);
+
++ cancel_work_sync(&info->se_info.timeout_work);
+ info->se_info.bwi_active = false;
+ info->se_info.se_active = false;
+ }
+diff --git a/drivers/nfc/st21nfca/st21nfca.h b/drivers/nfc/st21nfca/st21nfca.h
+index cb6ad916be911..ae6771cc9894a 100644
+--- a/drivers/nfc/st21nfca/st21nfca.h
++++ b/drivers/nfc/st21nfca/st21nfca.h
+@@ -141,6 +141,7 @@ struct st21nfca_se_info {
+
+ se_io_cb_t cb;
+ void *cb_context;
++ struct work_struct timeout_work;
+ };
+
+ struct st21nfca_hci_info {
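The st21nfca hunks above move the slow BWI-timeout handling out of the timer callback, which runs in atomic context and must not sleep, into a work item that the timer merely schedules; teardown then pairs del_timer_sync() with cancel_work_sync(). A rough userspace analogue of that split, with a signal handler standing in for the timer and a dedicated thread for the workqueue (all names invented for illustration; build with -pthread):

    #include <pthread.h>
    #include <semaphore.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t work_pending;

    /* "Timer" context: like a kernel timer callback it must stay minimal
     * and non-blocking, so it only kicks the worker (schedule_work()). */
    static void timeout_handler(int sig)
    {
        (void)sig;
        sem_post(&work_pending);    /* async-signal-safe */
    }

    /* "Workqueue" context: free to sleep, take mutexes, do I/O. */
    static void *worker(void *arg)
    {
        (void)arg;
        while (sem_wait(&work_pending) == -1)
            ;                       /* retry if the signal interrupts us */
        printf("timeout work: resetting the secure element...\n");
        sleep(1);                   /* stands in for the blocking recovery */
        printf("timeout work: done\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        sem_init(&work_pending, 0, 0);
        pthread_create(&t, NULL, worker, NULL);

        signal(SIGALRM, timeout_handler);
        alarm(1);                   /* arm the "bwi timer" */

        pthread_join(t, NULL);      /* driver teardown: cancel_work_sync() */
        return 0;
    }
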
+diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c
+index 69a03358817f1..681cc28703a3e 100644
+--- a/drivers/nvdimm/core.c
++++ b/drivers/nvdimm/core.c
+@@ -368,9 +368,7 @@ static ssize_t capability_show(struct device *dev,
+ if (!nd_desc->fw_ops)
+ return -EOPNOTSUPP;
+
+- nvdimm_bus_lock(dev);
+ cap = nd_desc->fw_ops->capability(nd_desc);
+- nvdimm_bus_unlock(dev);
+
+ switch (cap) {
+ case NVDIMM_FWA_CAP_QUIESCE:
+@@ -395,10 +393,8 @@ static ssize_t activate_show(struct device *dev,
+ if (!nd_desc->fw_ops)
+ return -EOPNOTSUPP;
+
+- nvdimm_bus_lock(dev);
+ cap = nd_desc->fw_ops->capability(nd_desc);
+ state = nd_desc->fw_ops->activate_state(nd_desc);
+- nvdimm_bus_unlock(dev);
+
+ if (cap < NVDIMM_FWA_CAP_QUIESCE)
+ return -EOPNOTSUPP;
+@@ -443,7 +439,6 @@ static ssize_t activate_store(struct device *dev,
+ else
+ return -EINVAL;
+
+- nvdimm_bus_lock(dev);
+ state = nd_desc->fw_ops->activate_state(nd_desc);
+
+ switch (state) {
+@@ -461,7 +456,6 @@ static ssize_t activate_store(struct device *dev,
+ default:
+ rc = -ENXIO;
+ }
+- nvdimm_bus_unlock(dev);
+
+ if (rc == 0)
+ rc = len;
+@@ -484,10 +478,7 @@ static umode_t nvdimm_bus_firmware_visible(struct kobject *kobj, struct attribut
+ if (!nd_desc->fw_ops)
+ return 0;
+
+- nvdimm_bus_lock(dev);
+ cap = nd_desc->fw_ops->capability(nd_desc);
+- nvdimm_bus_unlock(dev);
+-
+ if (cap < NVDIMM_FWA_CAP_QUIESCE)
+ return 0;
+
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 58d95242a836b..4aa17132a5572 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -158,36 +158,20 @@ static blk_status_t pmem_do_write(struct pmem_device *pmem,
+ struct page *page, unsigned int page_off,
+ sector_t sector, unsigned int len)
+ {
+- blk_status_t rc = BLK_STS_OK;
+- bool bad_pmem = false;
+ phys_addr_t pmem_off = sector * 512 + pmem->data_offset;
+ void *pmem_addr = pmem->virt_addr + pmem_off;
+
+- if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
+- bad_pmem = true;
++ if (unlikely(is_bad_pmem(&pmem->bb, sector, len))) {
++ blk_status_t rc = pmem_clear_poison(pmem, pmem_off, len);
++
++ if (rc != BLK_STS_OK)
++ return rc;
++ }
+
+- /*
+- * Note that we write the data both before and after
+- * clearing poison. The write before clear poison
+- * handles situations where the latest written data is
+- * preserved and the clear poison operation simply marks
+- * the address range as valid without changing the data.
+- * In this case application software can assume that an
+- * interrupted write will either return the new good
+- * data or an error.
+- *
+- * However, if pmem_clear_poison() leaves the data in an
+- * indeterminate state we need to perform the write
+- * after clear poison.
+- */
+ flush_dcache_page(page);
+ write_pmem(pmem_addr, page, page_off, len);
+- if (unlikely(bad_pmem)) {
+- rc = pmem_clear_poison(pmem, pmem_off, len);
+- write_pmem(pmem_addr, page, page_off, len);
+- }
+
+- return rc;
++ return BLK_STS_OK;
+ }
+
+ static void pmem_submit_bio(struct bio *bio)
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index 4b80150e4afa7..b5aa55c614616 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -379,11 +379,6 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid)
+ || !nvdimm->sec.flags)
+ return -EOPNOTSUPP;
+
+- if (dev->driver == NULL) {
+- dev_dbg(dev, "Unable to overwrite while DIMM active.\n");
+- return -EINVAL;
+- }
+-
+ rc = check_security_state(nvdimm);
+ if (rc)
+ return rc;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 0abd772c57f08..e086440a2042e 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1796,7 +1796,7 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
+ blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
+ }
+ blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);
+- blk_queue_dma_alignment(q, 7);
++ blk_queue_dma_alignment(q, 3);
+ blk_queue_write_cache(q, vwc, vwc);
+ }
+
+@@ -3096,10 +3096,6 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
+ if (ret)
+ return ret;
+
+- ret = nvme_init_non_mdts_limits(ctrl);
+- if (ret < 0)
+- return ret;
+-
+ ret = nvme_configure_apst(ctrl);
+ if (ret < 0)
+ return ret;
+@@ -4160,11 +4156,26 @@ static void nvme_scan_work(struct work_struct *work)
+ {
+ struct nvme_ctrl *ctrl =
+ container_of(work, struct nvme_ctrl, scan_work);
++ int ret;
+
+	/* No tagset on a live ctrl means IO queues could not be created */
+ if (ctrl->state != NVME_CTRL_LIVE || !ctrl->tagset)
+ return;
+
++ /*
++	 * Identify Controller limits can change at controller reset due to a
++	 * new firmware download, and though this is uncommon we cannot ignore
++	 * such a scenario. The controller's non-mdts limits are reported in
++	 * units of logical blocks, which depend on the format of the attached
++	 * namespace. Hence re-read the limits at the time of ns allocation.
++ */
++ ret = nvme_init_non_mdts_limits(ctrl);
++ if (ret < 0) {
++ dev_warn(ctrl->device,
++ "reading non-mdts-limits failed: %d\n", ret);
++ return;
++ }
++
+ if (test_and_clear_bit(NVME_AER_NOTICE_NS_CHANGED, &ctrl->events)) {
+ dev_info(ctrl->device, "rescanning namespaces.\n");
+ nvme_clear_changed_ns_log(ctrl);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 94a0b933b1335..823fa48fbfb08 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1772,6 +1772,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
+ dev->ctrl.admin_q = blk_mq_init_queue(&dev->admin_tagset);
+ if (IS_ERR(dev->ctrl.admin_q)) {
+ blk_mq_free_tag_set(&dev->admin_tagset);
++ dev->ctrl.admin_q = NULL;
+ return -ENOMEM;
+ }
+ if (!blk_get_queue(dev->ctrl.admin_q)) {
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index ec315b060cd50..0f30496ce80bf 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -1105,6 +1105,9 @@ int __init early_init_dt_scan_memory(void)
+ if (type == NULL || strcmp(type, "memory") != 0)
+ continue;
+
++ if (!of_fdt_device_is_available(fdt, node))
++ continue;
++
+ reg = of_get_flat_dt_prop(node, "linux,usable-memory", &l);
+ if (reg == NULL)
+ reg = of_get_flat_dt_prop(node, "reg", &l);
+diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
+index b9bd1cff17938..8d374cc552be5 100644
+--- a/drivers/of/kexec.c
++++ b/drivers/of/kexec.c
+@@ -386,6 +386,15 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
+ crashk_res.end - crashk_res.start + 1);
+ if (ret)
+ goto out;
++
++ if (crashk_low_res.end) {
++ ret = fdt_appendprop_addrrange(fdt, 0, chosen_node,
++ "linux,usable-memory-range",
++ crashk_low_res.start,
++ crashk_low_res.end - crashk_low_res.start + 1);
++ if (ret)
++ goto out;
++ }
+ }
+
+ /* add bootargs */
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index d80160cf34bb7..d1187123c4fc4 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -170,9 +170,7 @@ static int overlay_notify(struct overlay_changeset *ovcs,
+
+ ret = blocking_notifier_call_chain(&overlay_notify_chain,
+ action, &nd);
+- if (ret == NOTIFY_OK || ret == NOTIFY_STOP)
+- return 0;
+- if (ret) {
++ if (notifier_to_errno(ret)) {
+ ret = notifier_to_errno(ret);
+ pr_err("overlay changeset %s notifier error %d, target: %pOF\n",
+ of_overlay_action_name[action], ret, nd.target);
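The overlay hunk above collapses the error check to notifier_to_errno(), which yields 0 for the benign NOTIFY_DONE/NOTIFY_OK/NOTIFY_STOP codes and recovers a negative errno only when a callback returned notifier_from_errno(err). A self-contained re-derivation of the two helpers (constants as defined in include/linux/notifier.h), showing that behavior:

    #include <stdio.h>

    #define NOTIFY_DONE      0x0000
    #define NOTIFY_OK        0x0001
    #define NOTIFY_STOP_MASK 0x8000
    #define NOTIFY_BAD       (NOTIFY_STOP_MASK | 0x0002)
    #define NOTIFY_STOP      (NOTIFY_OK | NOTIFY_STOP_MASK)

    /* Pack a -Exxx value into a notifier return code. */
    static int notifier_from_errno(int err)
    {
        if (err)
            return NOTIFY_STOP_MASK | (NOTIFY_OK - err);
        return NOTIFY_OK;
    }

    /* Recover the -Exxx value; every benign code maps to 0. */
    static int notifier_to_errno(int ret)
    {
        ret &= ~NOTIFY_STOP_MASK;
        return ret > NOTIFY_OK ? NOTIFY_OK - ret : 0;
    }

    int main(void)
    {
        int codes[] = { NOTIFY_DONE, NOTIFY_OK, NOTIFY_STOP,
                        notifier_from_errno(-22) /* -EINVAL */ };
        unsigned int i;

        for (i = 0; i < sizeof(codes) / sizeof(codes[0]); i++)
            printf("ret=%#06x -> errno=%d\n", (unsigned)codes[i],
                   notifier_to_errno(codes[i]));
        /* Only the last line prints a nonzero errno (-22): exactly the
         * condition the fixed overlay_notify() now tests for. */
        return 0;
    }
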
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 2f40afa4e65c5..b4e96d54387a1 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -437,11 +437,11 @@ static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table)
+
+ /* Checking only first OPP is sufficient */
+ np = of_get_next_available_child(opp_np, NULL);
++ of_node_put(opp_np);
+ if (!np) {
+ dev_err(dev, "OPP table empty\n");
+ return -EINVAL;
+ }
+- of_node_put(opp_np);
+
+ prop = of_find_property(np, "opp-peak-kBps", NULL);
+ of_node_put(np);
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 768d33f9ebc87..a82f845cc4b52 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -69,6 +69,7 @@ struct j721e_pcie_data {
+ enum j721e_pcie_mode mode;
+ unsigned int quirk_retrain_flag:1;
+ unsigned int quirk_detect_quiet_flag:1;
++ unsigned int quirk_disable_flr:1;
+ u32 linkdown_irq_regfield;
+ unsigned int byte_access_allowed:1;
+ };
+@@ -307,6 +308,7 @@ static const struct j721e_pcie_data j7200_pcie_rc_data = {
+ static const struct j721e_pcie_data j7200_pcie_ep_data = {
+ .mode = PCI_MODE_EP,
+ .quirk_detect_quiet_flag = true,
++ .quirk_disable_flr = true,
+ };
+
+ static const struct j721e_pcie_data am64_pcie_rc_data = {
+@@ -405,6 +407,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
++ ep->quirk_disable_flr = data->quirk_disable_flr;
+
+ cdns_pcie = &ep->pcie;
+ cdns_pcie->dev = dev;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 88e05b9c2e5b8..b8b655d4047ec 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -187,8 +187,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 r;
+
+- r = find_first_zero_bit(&ep->ob_region_map,
+- sizeof(ep->ob_region_map) * BITS_PER_LONG);
++ r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
+ if (r >= ep->max_regions - 1) {
+ dev_err(&epc->dev, "no free outbound region\n");
+ return -EINVAL;
+@@ -565,7 +564,8 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
+ struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ struct cdns_pcie *pcie = &ep->pcie;
+ struct device *dev = pcie->dev;
+- int ret;
++ int max_epfs = sizeof(epc->function_num_map) * 8;
++ int ret, value, epf;
+
+ /*
+ * BIT(0) is hardwired to 1, hence function 0 is always enabled
+@@ -573,6 +573,21 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
+ */
+ cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);
+
++ if (ep->quirk_disable_flr) {
++ for (epf = 0; epf < max_epfs; epf++) {
++ if (!(epc->function_num_map & BIT(epf)))
++ continue;
++
++ value = cdns_pcie_ep_fn_readl(pcie, epf,
++ CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
++ PCI_EXP_DEVCAP);
++ value &= ~PCI_EXP_DEVCAP_FLR;
++ cdns_pcie_ep_fn_writel(pcie, epf,
++ CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
++ PCI_EXP_DEVCAP, value);
++ }
++ }
++
+ ret = cdns_pcie_start_link(pcie);
+ if (ret) {
+ dev_err(dev, "Failed to start link\n");
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index c8a27b6290cea..d9c785365da3b 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -123,6 +123,7 @@
+
+ #define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90
+ #define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xb0
++#define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET 0xc0
+ #define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200
+
+ /*
+@@ -357,6 +358,7 @@ struct cdns_pcie_epf {
+ * minimize time between read and write
+ * @epf: Structure to hold info about endpoint function
+ * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
++ * @quirk_disable_flr: Disable FLR (Function Level Reset) quirk flag
+ */
+ struct cdns_pcie_ep {
+ struct cdns_pcie pcie;
+@@ -372,6 +374,7 @@ struct cdns_pcie_ep {
+ spinlock_t lock;
+ struct cdns_pcie_epf *epf;
+ unsigned int quirk_detect_quiet_flag:1;
++ unsigned int quirk_disable_flr:1;
+ };
+
+
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 343fe1429e3c2..ce3a36d1f2fa8 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -408,6 +408,11 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
+ dev_err(dev, "failed to disable vpcie regulator: %d\n",
+ ret);
+ }
++
++ /* Some boards don't have PCIe reset GPIO. */
++ if (gpio_is_valid(imx6_pcie->reset_gpio))
++ gpio_set_value_cansleep(imx6_pcie->reset_gpio,
++ imx6_pcie->gpio_active_high);
+ }
+
+ static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie)
+@@ -540,15 +545,6 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
+ /* allow the clocks to stabilize */
+ usleep_range(200, 500);
+
+- /* Some boards don't have PCIe reset GPIO. */
+- if (gpio_is_valid(imx6_pcie->reset_gpio)) {
+- gpio_set_value_cansleep(imx6_pcie->reset_gpio,
+- imx6_pcie->gpio_active_high);
+- msleep(100);
+- gpio_set_value_cansleep(imx6_pcie->reset_gpio,
+- !imx6_pcie->gpio_active_high);
+- }
+-
+ switch (imx6_pcie->drvdata->variant) {
+ case IMX8MQ:
+ reset_control_deassert(imx6_pcie->pciephy_reset);
+@@ -595,6 +591,15 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
+ break;
+ }
+
++ /* Some boards don't have PCIe reset GPIO. */
++ if (gpio_is_valid(imx6_pcie->reset_gpio)) {
++ msleep(100);
++ gpio_set_value_cansleep(imx6_pcie->reset_gpio,
++ !imx6_pcie->gpio_active_high);
++ /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */
++ msleep(100);
++ }
++
+ return;
+
+ err_ref_clk:
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index f4755f3a03bea..9dcb51728dd18 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -390,7 +390,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ sizeof(pp->msi_msg),
+ DMA_FROM_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC);
+- if (dma_mapping_error(pci->dev, pp->msi_data)) {
++ ret = dma_mapping_error(pci->dev, pp->msi_data);
++ if (ret) {
+ dev_err(pci->dev, "Failed to map MSI data\n");
+ pp->msi_data = 0;
+ goto err_free_msi;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index c19cd506ed3f2..1bbce48e301ed 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1593,22 +1593,21 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ pp->ops = &qcom_pcie_dw_ops;
+
+ ret = phy_init(pcie->phy);
+- if (ret) {
+- pm_runtime_disable(&pdev->dev);
++ if (ret)
+ goto err_pm_runtime_put;
+- }
+
+ platform_set_drvdata(pdev, pcie);
+
+ ret = dw_pcie_host_init(pp);
+ if (ret) {
+ dev_err(dev, "cannot initialize host\n");
+- pm_runtime_disable(&pdev->dev);
+- goto err_pm_runtime_put;
++ goto err_phy_exit;
+ }
+
+ return 0;
+
++err_phy_exit:
++ phy_exit(pcie->phy);
+ err_pm_runtime_put:
+ pm_runtime_put(dev);
+ pm_runtime_disable(dev);
+diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
+index 7705d61fba4c7..0e27a49ae0c2e 100644
+--- a/drivers/pci/controller/pcie-mediatek-gen3.c
++++ b/drivers/pci/controller/pcie-mediatek-gen3.c
+@@ -838,6 +838,14 @@ static int mtk_pcie_setup(struct mtk_gen3_pcie *pcie)
+ if (err)
+ return err;
+
++ /*
++ * The controller may have been left out of reset by the bootloader
++ * so make sure that we get a clean start by asserting resets here.
++ */
++ reset_control_assert(pcie->phy_reset);
++ reset_control_assert(pcie->mac_reset);
++ usleep_range(10, 20);
++
+ /* Don't touch the hardware registers before power up */
+ err = mtk_pcie_power_up(pcie);
+ if (err)
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index ddfbd4aebdeca..be8bd919cb88f 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -1008,6 +1008,7 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie)
+ "mediatek,generic-pciecfg");
+ if (cfg_node) {
+ pcie->cfg = syscon_node_to_regmap(cfg_node);
++ of_node_put(cfg_node);
+ if (IS_ERR(pcie->cfg))
+ return PTR_ERR(pcie->cfg);
+ }
+diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/pcie-microchip-host.c
+index 29d8e81e41810..2c52a8cef7260 100644
+--- a/drivers/pci/controller/pcie-microchip-host.c
++++ b/drivers/pci/controller/pcie-microchip-host.c
+@@ -406,6 +406,7 @@ static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *base)
+ static void mc_handle_msi(struct irq_desc *desc)
+ {
+ struct mc_pcie *port = irq_desc_get_handler_data(desc);
++ struct irq_chip *chip = irq_desc_get_chip(desc);
+ struct device *dev = port->dev;
+ struct mc_msi *msi = &port->msi;
+ void __iomem *bridge_base_addr =
+@@ -414,8 +415,11 @@ static void mc_handle_msi(struct irq_desc *desc)
+ u32 bit;
+ int ret;
+
++ chained_irq_enter(chip, desc);
++
+ status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+ if (status & PM_MSI_INT_MSI_MASK) {
++ writel_relaxed(status & PM_MSI_INT_MSI_MASK, bridge_base_addr + ISTATUS_LOCAL);
+ status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+ for_each_set_bit(bit, &status, msi->num_vectors) {
+ ret = generic_handle_domain_irq(msi->dev_domain, bit);
+@@ -424,6 +428,8 @@ static void mc_handle_msi(struct irq_desc *desc)
+ bit);
+ }
+ }
++
++ chained_irq_exit(chip, desc);
+ }
+
+ static void mc_msi_bottom_irq_ack(struct irq_data *data)
+@@ -432,13 +438,8 @@ static void mc_msi_bottom_irq_ack(struct irq_data *data)
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ u32 bitpos = data->hwirq;
+- unsigned long status;
+
+ writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
+- status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+- if (!status)
+- writel_relaxed(BIT(PM_MSI_INT_MSI_SHIFT),
+- bridge_base_addr + ISTATUS_LOCAL);
+ }
+
+ static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+@@ -563,6 +564,7 @@ static int mc_allocate_msi_domains(struct mc_pcie *port)
+ static void mc_handle_intx(struct irq_desc *desc)
+ {
+ struct mc_pcie *port = irq_desc_get_handler_data(desc);
++ struct irq_chip *chip = irq_desc_get_chip(desc);
+ struct device *dev = port->dev;
+ void __iomem *bridge_base_addr =
+ port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+@@ -570,6 +572,8 @@ static void mc_handle_intx(struct irq_desc *desc)
+ u32 bit;
+ int ret;
+
++ chained_irq_enter(chip, desc);
++
+ status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+ if (status & PM_MSI_INT_INTX_MASK) {
+ status &= PM_MSI_INT_INTX_MASK;
+@@ -581,6 +585,8 @@ static void mc_handle_intx(struct irq_desc *desc)
+ bit);
+ }
+ }
++
++ chained_irq_exit(chip, desc);
+ }
+
+ static void mc_ack_intx_irq(struct irq_data *data)
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index 5fb9ce6e536e0..d1a200b93b2bf 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -264,8 +264,7 @@ static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+ struct rockchip_pcie *pcie = &ep->rockchip;
+ u32 r;
+
+- r = find_first_zero_bit(&ep->ob_region_map,
+- sizeof(ep->ob_region_map) * BITS_PER_LONG);
++ r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
+ /*
+ * Region 0 is reserved for configuration space and shouldn't
+ * be used elsewhere per TRM, so leave it out.
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index a42dbf4488604..1243e2156ac8e 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -974,9 +974,11 @@ bool acpi_pci_power_manageable(struct pci_dev *dev)
+
+ bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ {
+- const union acpi_object *obj;
+- struct acpi_device *adev;
+ struct pci_dev *rpdev;
++ struct acpi_device *adev;
++ acpi_status status;
++ unsigned long long state;
++ const union acpi_object *obj;
+
+ if (acpi_pci_disabled || !dev->is_hotplug_bridge)
+ return false;
+@@ -985,12 +987,6 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ if (acpi_pci_power_manageable(dev))
+ return true;
+
+- /*
+- * The ACPI firmware will provide the device-specific properties through
+- * _DSD configuration object. Look for the 'HotPlugSupportInD3' property
+- * for the root port and if it is set we know the hierarchy behind it
+- * supports D3 just fine.
+- */
+ rpdev = pcie_find_root_port(dev);
+ if (!rpdev)
+ return false;
+@@ -999,11 +995,34 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ if (!adev)
+ return false;
+
+- if (acpi_dev_get_property(adev, "HotPlugSupportInD3",
+- ACPI_TYPE_INTEGER, &obj) < 0)
++ /*
++	 * If the Root Port cannot signal wakeup at all, i.e., it doesn't
++	 * supply a wakeup GPE via _PRW, it cannot signal hotplug events
++	 * from low-power states including D3hot and D3cold.
++ */
++ if (!adev->wakeup.flags.valid)
+ return false;
+
+- return obj->integer.value == 1;
++ /*
++ * If the Root Port cannot wake itself from D3hot or D3cold, we
++ * can't use D3.
++ */
++ status = acpi_evaluate_integer(adev->handle, "_S0W", NULL, &state);
++ if (ACPI_SUCCESS(status) && state < ACPI_STATE_D3_HOT)
++ return false;
++
++ /*
++ * The "HotPlugSupportInD3" property in a Root Port _DSD indicates
++ * the Port can signal hotplug events while in D3. We assume any
++ * bridges *below* that Root Port can also signal hotplug events
++ * while in D3.
++ */
++ if (!acpi_dev_get_property(adev, "HotPlugSupportInD3",
++ ACPI_TYPE_INTEGER, &obj) &&
++ obj->integer.value == 1)
++ return true;
++
++ return false;
+ }
+
+ int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index d25122fbe98ab..8ac110d6c6f4b 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2920,6 +2920,8 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
+ DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
+ },
++ },
++ {
+ /*
+ * Downstream device is not accessible after putting a root port
+ * into D3cold and back into D0 on Elo i2.
+@@ -5113,19 +5115,19 @@ static int pci_reset_bus_function(struct pci_dev *dev, bool probe)
+
+ void pci_dev_lock(struct pci_dev *dev)
+ {
+- pci_cfg_access_lock(dev);
+ /* block PM suspend, driver probe, etc. */
+ device_lock(&dev->dev);
++ pci_cfg_access_lock(dev);
+ }
+ EXPORT_SYMBOL_GPL(pci_dev_lock);
+
+ /* Return 1 on successful lock, 0 on contention */
+ int pci_dev_trylock(struct pci_dev *dev)
+ {
+- if (pci_cfg_access_trylock(dev)) {
+- if (device_trylock(&dev->dev))
++ if (device_trylock(&dev->dev)) {
++ if (pci_cfg_access_trylock(dev))
+ return 1;
+- pci_cfg_access_unlock(dev);
++ device_unlock(&dev->dev);
+ }
+
+ return 0;
+@@ -5134,8 +5136,8 @@ EXPORT_SYMBOL_GPL(pci_dev_trylock);
+
+ void pci_dev_unlock(struct pci_dev *dev)
+ {
+- device_unlock(&dev->dev);
+ pci_cfg_access_unlock(dev);
++ device_unlock(&dev->dev);
+ }
+ EXPORT_SYMBOL_GPL(pci_dev_unlock);
+
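The pci_dev_lock()/pci_dev_trylock()/pci_dev_unlock() hunks above establish one consistent order, device lock first and config-access lock second, release in the reverse order, and make trylock back out of the first lock when the second is contended. A generic pthread sketch of that discipline; the two mutexes are stand-ins, not the PCI core's actual locks:

    #include <pthread.h>
    #include <stdio.h>

    /* Always taken in this order; released in reverse. */
    static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

    static void dev_lock_both(void)
    {
        pthread_mutex_lock(&dev_lock);      /* 1st: device lock */
        pthread_mutex_lock(&cfg_lock);      /* 2nd: cfg access lock */
    }

    /* Return 1 on success, 0 on contention -- mirrors pci_dev_trylock(). */
    static int dev_trylock_both(void)
    {
        if (pthread_mutex_trylock(&dev_lock) == 0) {
            if (pthread_mutex_trylock(&cfg_lock) == 0)
                return 1;
            /* Back out of the lock we did get, in reverse order. */
            pthread_mutex_unlock(&dev_lock);
        }
        return 0;
    }

    static void dev_unlock_both(void)
    {
        pthread_mutex_unlock(&cfg_lock);    /* reverse order */
        pthread_mutex_unlock(&dev_lock);
    }

    int main(void)
    {
        dev_lock_both();
        /* Another path trying the same pair must fail cleanly, not deadlock. */
        printf("trylock while held: %d (expect 0)\n", dev_trylock_both());
        dev_unlock_both();
        printf("trylock when free : %d (expect 1)\n", dev_trylock_both());
        dev_unlock_both();
        return 0;
    }
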
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index 9fa1f97e5b270..7952e5efd6cf3 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -101,6 +101,11 @@ struct aer_stats {
+ #define ERR_COR_ID(d) (d & 0xffff)
+ #define ERR_UNCOR_ID(d) (d >> 16)
+
++#define AER_ERR_STATUS_MASK (PCI_ERR_ROOT_UNCOR_RCV | \
++ PCI_ERR_ROOT_COR_RCV | \
++ PCI_ERR_ROOT_MULTI_COR_RCV | \
++ PCI_ERR_ROOT_MULTI_UNCOR_RCV)
++
+ static int pcie_aer_disable;
+ static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
+
+@@ -1196,7 +1201,7 @@ static irqreturn_t aer_irq(int irq, void *context)
+ struct aer_err_source e_src = {};
+
+ pci_read_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, &e_src.status);
+- if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV)))
++ if (!(e_src.status & AER_ERR_STATUS_MASK))
+ return IRQ_NONE;
+
+ pci_read_config_dword(rp, aer + PCI_ERR_ROOT_ERR_SRC, &e_src.id);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index da829274fc66d..41aeaa2351322 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -12,6 +12,7 @@
+ * file, where their drivers can use them.
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/export.h>
+@@ -5895,3 +5896,49 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1533, rom_bar_overlap_defect);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1536, rom_bar_overlap_defect);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1537, rom_bar_overlap_defect);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1538, rom_bar_overlap_defect);
++
++#ifdef CONFIG_PCIEASPM
++/*
++ * Several Intel DG2 graphics devices advertise that they can only tolerate
++ * 1us latency when transitioning from L1 to L0, which may prevent ASPM L1
++ * from being enabled. But in fact these devices can tolerate unlimited
++ * latency. Override their Device Capabilities value to allow ASPM L1 to
++ * be enabled.
++ */
++static void aspm_l1_acceptable_latency(struct pci_dev *dev)
++{
++ u32 l1_lat = FIELD_GET(PCI_EXP_DEVCAP_L1, dev->devcap);
++
++ if (l1_lat < 7) {
++ dev->devcap |= FIELD_PREP(PCI_EXP_DEVCAP_L1, 7);
++ pci_info(dev, "ASPM: overriding L1 acceptable latency from %#x to 0x7\n",
++ l1_lat);
++ }
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f80, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f81, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f82, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f83, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f84, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f85, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f86, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f87, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f88, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5690, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5691, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5692, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5693, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5694, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5695, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a1, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a2, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a3, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a4, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a5, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a6, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56b0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56b1, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56c0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56c1, aspm_l1_acceptable_latency);
++#endif
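The ASPM quirk above rewrites the 3-bit L1 acceptable-latency field of the cached DevCap word with FIELD_GET()/FIELD_PREP(), which derive the shift from the mask itself. A minimal userspace equivalent of that idiom, using the real PCI_EXP_DEVCAP_L1 mask value (0x0e00) and a compiler builtin in place of the kernel macros; the sketch clears the field before inserting, the safe general form, while the quirk can plain OR the field in because 7 sets every bit of it:

    #include <stdint.h>
    #include <stdio.h>

    #define PCI_EXP_DEVCAP_L1 0x0e00u   /* L1 acceptable latency, bits 11:9 */

    /* Simplified FIELD_GET/FIELD_PREP: the shift falls out of the mask. */
    #define FIELD_GET(mask, reg)  (((reg) & (mask)) >> __builtin_ctz(mask))
    #define FIELD_PREP(mask, val) (((uint32_t)(val) << __builtin_ctz(mask)) & (mask))

    int main(void)
    {
        uint32_t devcap = 0x00000a00;   /* a device advertising latency code 5 */
        uint32_t l1_lat = FIELD_GET(PCI_EXP_DEVCAP_L1, devcap);

        printf("advertised L1 latency code: %u\n", (unsigned)l1_lat);

        if (l1_lat < 7) {
            /* Clear then insert the "no limit" encoding (7). */
            devcap &= ~PCI_EXP_DEVCAP_L1;
            devcap |= FIELD_PREP(PCI_EXP_DEVCAP_L1, 7);
            printf("overridden devcap: %#010x (latency code %u)\n",
                   (unsigned)devcap,
                   (unsigned)FIELD_GET(PCI_EXP_DEVCAP_L1, devcap));
        }
        return 0;
    }
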
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
+index 8ea87c69f463d..c2b878128e2c0 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
+@@ -5818,6 +5818,11 @@ static const struct phy_ops qcom_qmp_pcie_ufs_ops = {
+ .owner = THIS_MODULE,
+ };
+
++static void qcom_qmp_reset_control_put(void *data)
++{
++ reset_control_put(data);
++}
++
+ static
+ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ void __iomem *serdes, const struct qmp_phy_cfg *cfg)
+@@ -5890,7 +5895,7 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ * all phys that don't need this.
+ */
+ snprintf(prop_name, sizeof(prop_name), "pipe%d", id);
+- qphy->pipe_clk = of_clk_get_by_name(np, prop_name);
++ qphy->pipe_clk = devm_get_clk_from_child(dev, np, prop_name);
+ if (IS_ERR(qphy->pipe_clk)) {
+ if (cfg->type == PHY_TYPE_PCIE ||
+ cfg->type == PHY_TYPE_USB3) {
+@@ -5912,6 +5917,10 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ dev_err(dev, "failed to get lane%d reset\n", id);
+ return PTR_ERR(qphy->lane_rst);
+ }
++ ret = devm_add_action_or_reset(dev, qcom_qmp_reset_control_put,
++ qphy->lane_rst);
++ if (ret)
++ return ret;
+ }
+
+ if (cfg->type == PHY_TYPE_UFS || cfg->type == PHY_TYPE_PCIE)
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 47e433e09c5ce..dad4530547768 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -358,6 +358,22 @@ static int bcm2835_gpio_direction_output(struct gpio_chip *chip,
+ return 0;
+ }
+
++static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc,
++ struct device_node *np)
++{
++ struct pinctrl_dev *pctldev = of_pinctrl_get(np);
++
++ of_node_put(np);
++
++ if (!pctldev)
++ return 0;
++
++ gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
++ gc->ngpio);
++
++ return 0;
++}
++
+ static const struct gpio_chip bcm2835_gpio_chip = {
+ .label = MODULE_NAME,
+ .owner = THIS_MODULE,
+@@ -372,6 +388,7 @@ static const struct gpio_chip bcm2835_gpio_chip = {
+ .base = -1,
+ .ngpio = BCM2835_NUM_GPIOS,
+ .can_sleep = false,
++ .of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback,
+ };
+
+ static const struct gpio_chip bcm2711_gpio_chip = {
+@@ -388,6 +405,7 @@ static const struct gpio_chip bcm2711_gpio_chip = {
+ .base = -1,
+ .ngpio = BCM2711_NUM_GPIOS,
+ .can_sleep = false,
++ .of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback,
+ };
+
+ static void bcm2835_gpio_irq_handle_bank(struct bcm2835_pinctrl *pc,
+diff --git a/drivers/pinctrl/mediatek/Kconfig b/drivers/pinctrl/mediatek/Kconfig
+index c7fa1525e42af..901c0ec0cca5d 100644
+--- a/drivers/pinctrl/mediatek/Kconfig
++++ b/drivers/pinctrl/mediatek/Kconfig
+@@ -159,6 +159,7 @@ config PINCTRL_MT8195
+ bool "Mediatek MT8195 pin control"
+ depends on OF
+ depends on ARM64 || COMPILE_TEST
++ default ARM64 && ARCH_MEDIATEK
+ select PINCTRL_MTK_PARIS
+
+ config PINCTRL_MT8365
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 08cad14042e2e..adccf03b3e5af 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -773,7 +773,7 @@ static int armada_37xx_irqchip_register(struct platform_device *pdev,
+ for (i = 0; i < nr_irq_parent; i++) {
+ int irq = irq_of_parse_and_map(np, i);
+
+- if (irq < 0)
++ if (!irq)
+ continue;
+ girq->parents[i] = irq;
+ }
+diff --git a/drivers/pinctrl/pinctrl-apple-gpio.c b/drivers/pinctrl/pinctrl-apple-gpio.c
+index 72f4dd2466e11..6d1bff9588d99 100644
+--- a/drivers/pinctrl/pinctrl-apple-gpio.c
++++ b/drivers/pinctrl/pinctrl-apple-gpio.c
+@@ -72,6 +72,7 @@ struct regmap_config regmap_config = {
+ .max_register = 512 * sizeof(u32),
+ .num_reg_defaults_raw = 512,
+ .use_relaxed_mmio = true,
++ .use_raw_spinlock = true,
+ };
+
+ /* No locking needed to mask/unmask IRQs as the interrupt mode is per pin-register. */
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 65fa305b5f59f..32e8be98cf46d 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -2110,19 +2110,20 @@ static bool rockchip_pinconf_pull_valid(struct rockchip_pin_ctrl *ctrl,
+ return false;
+ }
+
+-static int rockchip_pinconf_defer_output(struct rockchip_pin_bank *bank,
+- unsigned int pin, u32 arg)
++static int rockchip_pinconf_defer_pin(struct rockchip_pin_bank *bank,
++ unsigned int pin, u32 param, u32 arg)
+ {
+- struct rockchip_pin_output_deferred *cfg;
++ struct rockchip_pin_deferred *cfg;
+
+ cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
+ if (!cfg)
+ return -ENOMEM;
+
+ cfg->pin = pin;
++ cfg->param = param;
+ cfg->arg = arg;
+
+- list_add_tail(&cfg->head, &bank->deferred_output);
++ list_add_tail(&cfg->head, &bank->deferred_pins);
+
+ return 0;
+ }
+@@ -2143,6 +2144,25 @@ static int rockchip_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ param = pinconf_to_config_param(configs[i]);
+ arg = pinconf_to_config_argument(configs[i]);
+
++ if (param == PIN_CONFIG_OUTPUT || param == PIN_CONFIG_INPUT_ENABLE) {
++ /*
++ * Check for gpio driver not being probed yet.
++ * The lock makes sure that either gpio-probe has completed
++ * or the gpio driver hasn't probed yet.
++ */
++ mutex_lock(&bank->deferred_lock);
++ if (!gpio || !gpio->direction_output) {
++ rc = rockchip_pinconf_defer_pin(bank, pin - bank->pin_base, param,
++ arg);
++ mutex_unlock(&bank->deferred_lock);
++ if (rc)
++ return rc;
++
++ break;
++ }
++ mutex_unlock(&bank->deferred_lock);
++ }
++
+ switch (param) {
+ case PIN_CONFIG_BIAS_DISABLE:
+ rc = rockchip_set_pull(bank, pin - bank->pin_base,
+@@ -2171,27 +2191,21 @@ static int rockchip_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ if (rc != RK_FUNC_GPIO)
+ return -EINVAL;
+
+- /*
+- * Check for gpio driver not being probed yet.
+- * The lock makes sure that either gpio-probe has completed
+- * or the gpio driver hasn't probed yet.
+- */
+- mutex_lock(&bank->deferred_lock);
+- if (!gpio || !gpio->direction_output) {
+- rc = rockchip_pinconf_defer_output(bank, pin - bank->pin_base, arg);
+- mutex_unlock(&bank->deferred_lock);
+- if (rc)
+- return rc;
+-
+- break;
+- }
+- mutex_unlock(&bank->deferred_lock);
+-
+ rc = gpio->direction_output(gpio, pin - bank->pin_base,
+ arg);
+ if (rc)
+ return rc;
+ break;
++ case PIN_CONFIG_INPUT_ENABLE:
++ rc = rockchip_set_mux(bank, pin - bank->pin_base,
++ RK_FUNC_GPIO);
++ if (rc != RK_FUNC_GPIO)
++ return -EINVAL;
++
++ rc = gpio->direction_input(gpio, pin - bank->pin_base);
++ if (rc)
++ return rc;
++ break;
+ case PIN_CONFIG_DRIVE_STRENGTH:
+ /* rk3288 is the first with per-pin drive-strength */
+ if (!info->ctrl->drv_calc_reg)
+@@ -2500,7 +2514,7 @@ static int rockchip_pinctrl_register(struct platform_device *pdev,
+ pdesc++;
+ }
+
+- INIT_LIST_HEAD(&pin_bank->deferred_output);
++ INIT_LIST_HEAD(&pin_bank->deferred_pins);
+ mutex_init(&pin_bank->deferred_lock);
+ }
+
+@@ -2763,7 +2777,7 @@ static int rockchip_pinctrl_remove(struct platform_device *pdev)
+ {
+ struct rockchip_pinctrl *info = platform_get_drvdata(pdev);
+ struct rockchip_pin_bank *bank;
+- struct rockchip_pin_output_deferred *cfg;
++ struct rockchip_pin_deferred *cfg;
+ int i;
+
+ of_platform_depopulate(&pdev->dev);
+@@ -2772,9 +2786,9 @@ static int rockchip_pinctrl_remove(struct platform_device *pdev)
+ bank = &info->ctrl->pin_banks[i];
+
+ mutex_lock(&bank->deferred_lock);
+- while (!list_empty(&bank->deferred_output)) {
+- cfg = list_first_entry(&bank->deferred_output,
+- struct rockchip_pin_output_deferred, head);
++ while (!list_empty(&bank->deferred_pins)) {
++ cfg = list_first_entry(&bank->deferred_pins,
++ struct rockchip_pin_deferred, head);
+ list_del(&cfg->head);
+ kfree(cfg);
+ }
+diff --git a/drivers/pinctrl/pinctrl-rockchip.h b/drivers/pinctrl/pinctrl-rockchip.h
+index 91f10279d0844..98a01a616da67 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.h
++++ b/drivers/pinctrl/pinctrl-rockchip.h
+@@ -171,7 +171,7 @@ struct rockchip_pin_bank {
+ u32 toggle_edge_mode;
+ u32 recalced_mask;
+ u32 route_mask;
+- struct list_head deferred_output;
++ struct list_head deferred_pins;
+ struct mutex deferred_lock;
+ };
+
+@@ -247,9 +247,12 @@ struct rockchip_pin_config {
+ unsigned int nconfigs;
+ };
+
+-struct rockchip_pin_output_deferred {
++enum pin_config_param;
++
++struct rockchip_pin_deferred {
+ struct list_head head;
+ unsigned int pin;
++ enum pin_config_param param;
+ u32 arg;
+ };
+
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 12d41ac017b53..8ed95be904906 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -71,12 +71,11 @@ static int sh_pfc_map_resources(struct sh_pfc *pfc,
+
+ /* Fill them. */
+ for (i = 0; i < num_windows; i++) {
+- res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+- windows->phys = res->start;
+- windows->size = resource_size(res);
+- windows->virt = devm_ioremap_resource(pfc->dev, res);
++ windows->virt = devm_platform_get_and_ioremap_resource(pdev, i, &res);
+ if (IS_ERR(windows->virt))
+ return -ENOMEM;
++ windows->phys = res->start;
++ windows->size = resource_size(res);
+ windows++;
+ }
+ for (i = 0; i < num_irqs; i++)
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779a0.c b/drivers/pinctrl/renesas/pfc-r8a779a0.c
+index 83580385c3ca9..93bf8ca44e380 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779a0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779a0.c
+@@ -629,7 +629,36 @@ enum {
+ };
+
+ static const u16 pinmux_data[] = {
++/* Using GP_2_[2-15] requires disabling I2C in MOD_SEL2 */
++#define GP_2_2_FN GP_2_2_FN, FN_SEL_I2C0_0
++#define GP_2_3_FN GP_2_3_FN, FN_SEL_I2C0_0
++#define GP_2_4_FN GP_2_4_FN, FN_SEL_I2C1_0
++#define GP_2_5_FN GP_2_5_FN, FN_SEL_I2C1_0
++#define GP_2_6_FN GP_2_6_FN, FN_SEL_I2C2_0
++#define GP_2_7_FN GP_2_7_FN, FN_SEL_I2C2_0
++#define GP_2_8_FN GP_2_8_FN, FN_SEL_I2C3_0
++#define GP_2_9_FN GP_2_9_FN, FN_SEL_I2C3_0
++#define GP_2_10_FN GP_2_10_FN, FN_SEL_I2C4_0
++#define GP_2_11_FN GP_2_11_FN, FN_SEL_I2C4_0
++#define GP_2_12_FN GP_2_12_FN, FN_SEL_I2C5_0
++#define GP_2_13_FN GP_2_13_FN, FN_SEL_I2C5_0
++#define GP_2_14_FN GP_2_14_FN, FN_SEL_I2C6_0
++#define GP_2_15_FN GP_2_15_FN, FN_SEL_I2C6_0
+ PINMUX_DATA_GP_ALL(),
++#undef GP_2_2_FN
++#undef GP_2_3_FN
++#undef GP_2_4_FN
++#undef GP_2_5_FN
++#undef GP_2_6_FN
++#undef GP_2_7_FN
++#undef GP_2_8_FN
++#undef GP_2_9_FN
++#undef GP_2_10_FN
++#undef GP_2_11_FN
++#undef GP_2_12_FN
++#undef GP_2_13_FN
++#undef GP_2_14_FN
++#undef GP_2_15_FN
+
+ PINMUX_SINGLE(MMC_D7),
+ PINMUX_SINGLE(MMC_D6),
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzn1.c b/drivers/pinctrl/renesas/pinctrl-rzn1.c
+index ef5fb25b6016d..849d091205d4d 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzn1.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzn1.c
+@@ -865,17 +865,15 @@ static int rzn1_pinctrl_probe(struct platform_device *pdev)
+ ipctl->mdio_func[0] = -1;
+ ipctl->mdio_func[1] = -1;
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- ipctl->lev1_protect_phys = (u32)res->start + 0x400;
+- ipctl->lev1 = devm_ioremap_resource(&pdev->dev, res);
++ ipctl->lev1 = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(ipctl->lev1))
+ return PTR_ERR(ipctl->lev1);
++ ipctl->lev1_protect_phys = (u32)res->start + 0x400;
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+- ipctl->lev2_protect_phys = (u32)res->start + 0x400;
+- ipctl->lev2 = devm_ioremap_resource(&pdev->dev, res);
++ ipctl->lev2 = devm_platform_get_and_ioremap_resource(pdev, 1, &res);
+ if (IS_ERR(ipctl->lev2))
+ return PTR_ERR(ipctl->lev2);
++ ipctl->lev2_protect_phys = (u32)res->start + 0x400;
+
+ ipctl->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(ipctl->clk))
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index fc5aa1525d13c..ff2a24b0c6114 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -189,6 +189,8 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ ec_dev->max_request = sizeof(struct ec_params_hello);
+ ec_dev->max_response = sizeof(struct ec_response_get_protocol_info);
+ ec_dev->max_passthru = 0;
++ ec_dev->ec = NULL;
++ ec_dev->pd = NULL;
+
+ ec_dev->din = devm_kzalloc(dev, ec_dev->din_size, GFP_KERNEL);
+ if (!ec_dev->din)
+@@ -245,18 +247,16 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ if (IS_ERR(ec_dev->pd)) {
+ dev_err(ec_dev->dev,
+ "Failed to create CrOS PD platform device\n");
+- platform_device_unregister(ec_dev->ec);
+- return PTR_ERR(ec_dev->pd);
++ err = PTR_ERR(ec_dev->pd);
++ goto exit;
+ }
+ }
+
+ if (IS_ENABLED(CONFIG_OF) && dev->of_node) {
+ err = devm_of_platform_populate(dev);
+ if (err) {
+- platform_device_unregister(ec_dev->pd);
+- platform_device_unregister(ec_dev->ec);
+ dev_err(dev, "Failed to register sub-devices\n");
+- return err;
++ goto exit;
+ }
+ }
+
+@@ -278,7 +278,7 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ err = blocking_notifier_chain_register(&ec_dev->event_notifier,
+ &ec_dev->notifier_ready);
+ if (err)
+- return err;
++ goto exit;
+ }
+
+ dev_info(dev, "Chrome EC device registered\n");
+@@ -291,6 +291,10 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ cros_ec_irq_thread(0, ec_dev);
+
+ return 0;
++exit:
++ platform_device_unregister(ec_dev->ec);
++ platform_device_unregister(ec_dev->pd);
++ return err;
+ }
+ EXPORT_SYMBOL(cros_ec_register);
+
+diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c
+index e0bce869c49a9..fd33de546aee0 100644
+--- a/drivers/platform/chrome/cros_ec_chardev.c
++++ b/drivers/platform/chrome/cros_ec_chardev.c
+@@ -301,7 +301,7 @@ static long cros_ec_chardev_ioctl_xcmd(struct cros_ec_dev *ec, void __user *arg)
+ }
+
+ s_cmd->command += ec->cmd_offset;
+- ret = cros_ec_cmd_xfer_status(ec->ec_dev, s_cmd);
++ ret = cros_ec_cmd_xfer(ec->ec_dev, s_cmd);
+ /* Only copy data to userland if data was received. */
+ if (ret < 0)
+ goto exit;
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index c4caf2e2de825..ac1419881ff35 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -560,22 +560,28 @@ exit:
+ EXPORT_SYMBOL(cros_ec_query_all);
+
+ /**
+- * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC.
++ * cros_ec_cmd_xfer() - Send a command to the ChromeOS EC.
+ * @ec_dev: EC device.
+ * @msg: Message to write.
+ *
+- * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's
+- * cmd_xfer() callback directly. It returns success status only if both the command was transmitted
+- * successfully and the EC replied with success status.
++ * Call this to send a command to the ChromeOS EC. This should be used instead
++ * of calling the EC's cmd_xfer() callback directly. This function does not
++ * convert EC command execution error codes to Linux error codes. Most
++ * in-kernel users will want to use cros_ec_cmd_xfer_status() instead since
++ * that function implements the conversion.
+ *
+ * Return:
+- * >=0 - The number of bytes transferred
+- * <0 - Linux error code
++ * >0 - EC command was executed successfully. The return value is the number
++ * of bytes returned by the EC (excluding the header).
++ * =0 - EC communication was successful. EC command execution results are
++ * reported in msg->result. The result will be EC_RES_SUCCESS if the
++ * command was executed successfully or report an EC command execution
++ * error.
++ * <0 - EC communication error. Return value is the Linux error code.
+ */
+-int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+- struct cros_ec_command *msg)
++int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev, struct cros_ec_command *msg)
+ {
+- int ret, mapped;
++ int ret;
+
+ mutex_lock(&ec_dev->lock);
+ if (ec_dev->proto_version == EC_PROTO_VERSION_UNKNOWN) {
+@@ -616,6 +622,32 @@ int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+ ret = send_command(ec_dev, msg);
+ mutex_unlock(&ec_dev->lock);
+
++ return ret;
++}
++EXPORT_SYMBOL(cros_ec_cmd_xfer);
++
++/**
++ * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC.
++ * @ec_dev: EC device.
++ * @msg: Message to write.
++ *
++ * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's
++ * cmd_xfer() callback directly. It returns success status only if both the command was transmitted
++ * successfully and the EC replied with success status.
++ *
++ * Return:
++ * >=0 - The number of bytes transferred.
++ * <0 - Linux error code
++ */
++int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
++ struct cros_ec_command *msg)
++{
++ int ret, mapped;
++
++ ret = cros_ec_cmd_xfer(ec_dev, msg);
++ if (ret < 0)
++ return ret;
++
+ mapped = cros_ec_map_error(msg->result);
+ if (mapped) {
+ dev_dbg(ec_dev->dev, "Command result (err: %d [%d])\n",
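The cros_ec refactor above splits the transfer path into a raw cros_ec_cmd_xfer(), which reports only transport errors and leaves the EC's own result code in msg->result, and the cros_ec_cmd_xfer_status() wrapper, which additionally maps that result to a Linux errno; the chardev ioctl switches to the raw variant so userspace sees the EC result verbatim. A compact sketch of the same two-layer convention, with made-up status codes and errno values spelled out as integers:

    #include <stdio.h>

    /* Hypothetical device result codes (stand-ins for EC_RES_*). */
    enum ec_result {
        EC_RES_SUCCESS         = 0,
        EC_RES_INVALID_COMMAND = 1,
        EC_RES_BUSY            = 16,
    };

    struct ec_msg {
        int command;
        enum ec_result result;      /* filled in by the "device" */
        int data_len;
    };

    /* Layer 1: raw transfer. <0 = transport error; >=0 = bytes moved.
     * The device's own verdict is left untouched in msg->result. */
    static int cmd_xfer(struct ec_msg *msg)
    {
        msg->result = (msg->command == 42) ? EC_RES_SUCCESS
                                           : EC_RES_INVALID_COMMAND;
        msg->data_len = (msg->result == EC_RES_SUCCESS) ? 4 : 0;
        return msg->data_len;
    }

    /* Layer 2: convenience wrapper folding the device verdict into an
     * errno-style return for callers that don't care which layer failed. */
    static int cmd_xfer_status(struct ec_msg *msg)
    {
        int ret = cmd_xfer(msg);

        if (ret < 0)
            return ret;                          /* transport error */
        switch (msg->result) {
        case EC_RES_SUCCESS:         return ret; /* bytes transferred */
        case EC_RES_INVALID_COMMAND: return -22; /* -EINVAL */
        case EC_RES_BUSY:            return -16; /* -EBUSY  */
        default:                     return -71; /* -EPROTO */
        }
    }

    int main(void)
    {
        struct ec_msg good = { .command = 42 }, bad = { .command = 7 };

        printf("wrapper, good cmd: %d\n", cmd_xfer_status(&good));
        printf("wrapper, bad cmd : %d\n", cmd_xfer_status(&bad));
        /* A pass-through user (like the chardev ioctl) calls the raw layer
         * and forwards msg->result to userspace unconverted instead. */
        printf("raw, bad cmd     : %d (result=%d)\n",
               cmd_xfer(&bad), (int)bad.result);
        return 0;
    }
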
+diff --git a/drivers/platform/mips/cpu_hwmon.c b/drivers/platform/mips/cpu_hwmon.c
+index 386389ffec419..d8c5f9195f85f 100644
+--- a/drivers/platform/mips/cpu_hwmon.c
++++ b/drivers/platform/mips/cpu_hwmon.c
+@@ -55,55 +55,6 @@ out:
+ static int nr_packages;
+ static struct device *cpu_hwmon_dev;
+
+-static SENSOR_DEVICE_ATTR(name, 0444, NULL, NULL, 0);
+-
+-static struct attribute *cpu_hwmon_attributes[] = {
+- &sensor_dev_attr_name.dev_attr.attr,
+- NULL
+-};
+-
+-/* Hwmon device attribute group */
+-static struct attribute_group cpu_hwmon_attribute_group = {
+- .attrs = cpu_hwmon_attributes,
+-};
+-
+-static ssize_t get_cpu_temp(struct device *dev,
+- struct device_attribute *attr, char *buf);
+-static ssize_t cpu_temp_label(struct device *dev,
+- struct device_attribute *attr, char *buf);
+-
+-static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1);
+-static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1);
+-static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2);
+-static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2);
+-static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3);
+-static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3);
+-static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4);
+-static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4);
+-
+-static const struct attribute *hwmon_cputemp[4][3] = {
+- {
+- &sensor_dev_attr_temp1_input.dev_attr.attr,
+- &sensor_dev_attr_temp1_label.dev_attr.attr,
+- NULL
+- },
+- {
+- &sensor_dev_attr_temp2_input.dev_attr.attr,
+- &sensor_dev_attr_temp2_label.dev_attr.attr,
+- NULL
+- },
+- {
+- &sensor_dev_attr_temp3_input.dev_attr.attr,
+- &sensor_dev_attr_temp3_label.dev_attr.attr,
+- NULL
+- },
+- {
+- &sensor_dev_attr_temp4_input.dev_attr.attr,
+- &sensor_dev_attr_temp4_label.dev_attr.attr,
+- NULL
+- }
+-};
+-
+ static ssize_t cpu_temp_label(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+@@ -121,24 +72,47 @@ static ssize_t get_cpu_temp(struct device *dev,
+ return sprintf(buf, "%d\n", value);
+ }
+
+-static int create_sysfs_cputemp_files(struct kobject *kobj)
+-{
+- int i, ret = 0;
+-
+- for (i = 0; i < nr_packages; i++)
+- ret = sysfs_create_files(kobj, hwmon_cputemp[i]);
++static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1);
++static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1);
++static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2);
++static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2);
++static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3);
++static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3);
++static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4);
++static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4);
+
+- return ret;
+-}
++static struct attribute *cpu_hwmon_attributes[] = {
++ &sensor_dev_attr_temp1_input.dev_attr.attr,
++ &sensor_dev_attr_temp1_label.dev_attr.attr,
++ &sensor_dev_attr_temp2_input.dev_attr.attr,
++ &sensor_dev_attr_temp2_label.dev_attr.attr,
++ &sensor_dev_attr_temp3_input.dev_attr.attr,
++ &sensor_dev_attr_temp3_label.dev_attr.attr,
++ &sensor_dev_attr_temp4_input.dev_attr.attr,
++ &sensor_dev_attr_temp4_label.dev_attr.attr,
++ NULL
++};
+
+-static void remove_sysfs_cputemp_files(struct kobject *kobj)
++static umode_t cpu_hwmon_is_visible(struct kobject *kobj,
++ struct attribute *attr, int i)
+ {
+- int i;
++ int id = i / 2;
+
+- for (i = 0; i < nr_packages; i++)
+- sysfs_remove_files(kobj, hwmon_cputemp[i]);
++ if (id < nr_packages)
++ return attr->mode;
++ return 0;
+ }
+
++static struct attribute_group cpu_hwmon_group = {
++ .attrs = cpu_hwmon_attributes,
++ .is_visible = cpu_hwmon_is_visible,
++};
++
++static const struct attribute_group *cpu_hwmon_groups[] = {
++ &cpu_hwmon_group,
++ NULL
++};
++
+ #define CPU_THERMAL_THRESHOLD 90000
+ static struct delayed_work thermal_work;
+
+@@ -159,50 +133,31 @@ static void do_thermal_timer(struct work_struct *work)
+
+ static int __init loongson_hwmon_init(void)
+ {
+- int ret;
+-
+ pr_info("Loongson Hwmon Enter...\n");
+
+ if (cpu_has_csr())
+ csr_temp_enable = csr_readl(LOONGSON_CSR_FEATURES) &
+ LOONGSON_CSRF_TEMP;
+
+- cpu_hwmon_dev = hwmon_device_register_with_info(NULL, "cpu_hwmon", NULL, NULL, NULL);
+- if (IS_ERR(cpu_hwmon_dev)) {
+- ret = PTR_ERR(cpu_hwmon_dev);
+- pr_err("hwmon_device_register fail!\n");
+- goto fail_hwmon_device_register;
+- }
+-
+ nr_packages = loongson_sysconf.nr_cpus /
+ loongson_sysconf.cores_per_package;
+
+- ret = create_sysfs_cputemp_files(&cpu_hwmon_dev->kobj);
+- if (ret) {
+- pr_err("fail to create cpu temperature interface!\n");
+- goto fail_create_sysfs_cputemp_files;
++ cpu_hwmon_dev = hwmon_device_register_with_groups(NULL, "cpu_hwmon",
++ NULL, cpu_hwmon_groups);
++ if (IS_ERR(cpu_hwmon_dev)) {
++ pr_err("hwmon_device_register fail!\n");
++ return PTR_ERR(cpu_hwmon_dev);
+ }
+
+ INIT_DEFERRABLE_WORK(&thermal_work, do_thermal_timer);
+ schedule_delayed_work(&thermal_work, msecs_to_jiffies(20000));
+
+- return ret;
+-
+-fail_create_sysfs_cputemp_files:
+- sysfs_remove_group(&cpu_hwmon_dev->kobj,
+- &cpu_hwmon_attribute_group);
+- hwmon_device_unregister(cpu_hwmon_dev);
+-
+-fail_hwmon_device_register:
+- return ret;
++ return 0;
+ }
+
+ static void __exit loongson_hwmon_exit(void)
+ {
+ cancel_delayed_work_sync(&thermal_work);
+- remove_sysfs_cputemp_files(&cpu_hwmon_dev->kobj);
+- sysfs_remove_group(&cpu_hwmon_dev->kobj,
+- &cpu_hwmon_attribute_group);
+ hwmon_device_unregister(cpu_hwmon_dev);
+ }
+
+diff --git a/drivers/platform/x86/intel/hid.c b/drivers/platform/x86/intel/hid.c
+index 13f8cf70b9aee..5c39d40a701b0 100644
+--- a/drivers/platform/x86/intel/hid.c
++++ b/drivers/platform/x86/intel/hid.c
+@@ -238,7 +238,7 @@ static bool intel_hid_evaluate_method(acpi_handle handle,
+
+ method_name = (char *)intel_hid_dsm_fn_to_method[fn_index];
+
+- if (!(intel_hid_dsm_fn_mask & fn_index))
++ if (!(intel_hid_dsm_fn_mask & BIT(fn_index)))
+ goto skip_dsm_eval;
+
+ obj = acpi_evaluate_dsm_typed(handle, &intel_dsm_guid,
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index d2553970a67ba..c4d844ffad7a6 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2133,10 +2133,13 @@ struct regulator *_regulator_get(struct device *dev, const char *id,
+ rdev->exclusive = 1;
+
+ ret = _regulator_is_enabled(rdev);
+- if (ret > 0)
++ if (ret > 0) {
+ rdev->use_count = 1;
+- else
++ regulator->enable_count = 1;
++ } else {
+ rdev->use_count = 0;
++ regulator->enable_count = 0;
++ }
+ }
+
+ link = device_link_add(dev, &rdev->dev, DL_FLAG_STATELESS);
+diff --git a/drivers/regulator/da9121-regulator.c b/drivers/regulator/da9121-regulator.c
+index eb9df485bd8aa..76e0e23bf598c 100644
+--- a/drivers/regulator/da9121-regulator.c
++++ b/drivers/regulator/da9121-regulator.c
+@@ -1030,6 +1030,8 @@ static int da9121_assign_chip_model(struct i2c_client *i2c,
+ chip->variant_id = DA9121_TYPE_DA9142;
+ regmap = &da9121_2ch_regmap_config;
+ break;
++ default:
++ return -EINVAL;
+ }
+
+ /* Set these up for of_regulator_match call which may want .of_map_modes */
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index d60d7d1b7fa25..aa55cfca9e400 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -521,6 +521,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip)
+ parent = of_get_child_by_name(np, "regulators");
+ if (!parent) {
+ dev_err(dev, "regulators node not found\n");
++ of_node_put(np);
+ return -EINVAL;
+ }
+
+@@ -550,6 +551,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip)
+ }
+
+ of_node_put(parent);
++ of_node_put(np);
+ if (ret < 0) {
+ dev_err(dev, "Error parsing regulator init data: %d\n",
+ ret);
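Both pfuze100 hunks plug the same leak: of_get_child_by_name() returns a device node with its refcount raised, and the function must drop every reference it took on each return path, including early errors. The shape of the fix (illustrative demo_parse(), error handling trimmed):

#include <linux/of.h>

static int demo_parse(struct device_node *np)
{
	struct device_node *parent;

	parent = of_get_child_by_name(np, "regulators");	/* takes a ref */
	if (!parent)
		return -EINVAL;		/* nothing to put for 'parent' here */

	/* ... walk and match the child regulator nodes ... */

	of_node_put(parent);		/* drop the ref on every exit path */
	return 0;
}

The scmi-regulator, smp2p and smsm hunks further down apply the same of_node_put() discipline to nodes obtained from lookups.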
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 8490aa8eecb1a..7dff94a2eb7e9 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -944,32 +944,31 @@ static const struct rpm_regulator_data rpm_pm8950_regulators[] = {
+ { "s2", QCOM_SMD_RPM_SMPA, 2, &pm8950_hfsmps, "vdd_s2" },
+ { "s3", QCOM_SMD_RPM_SMPA, 3, &pm8950_hfsmps, "vdd_s3" },
+ { "s4", QCOM_SMD_RPM_SMPA, 4, &pm8950_hfsmps, "vdd_s4" },
+- { "s5", QCOM_SMD_RPM_SMPA, 5, &pm8950_ftsmps2p5, "vdd_s5" },
++ /* S5 is managed via SPMI. */
+ { "s6", QCOM_SMD_RPM_SMPA, 6, &pm8950_hfsmps, "vdd_s6" },
+
+ { "l1", QCOM_SMD_RPM_LDOA, 1, &pm8950_ult_nldo, "vdd_l1_l19" },
+ { "l2", QCOM_SMD_RPM_LDOA, 2, &pm8950_ult_nldo, "vdd_l2_l23" },
+ { "l3", QCOM_SMD_RPM_LDOA, 3, &pm8950_ult_nldo, "vdd_l3" },
+- { "l4", QCOM_SMD_RPM_LDOA, 4, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16" },
+- { "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
+- { "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
+- { "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
++ /* L4 seems not to exist. */
++ { "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
++ { "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
++ { "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
+ { "l8", QCOM_SMD_RPM_LDOA, 8, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
+ { "l9", QCOM_SMD_RPM_LDOA, 9, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
+ { "l10", QCOM_SMD_RPM_LDOA, 10, &pm8950_ult_nldo, "vdd_l9_l10_l13_l14_l15_l18"},
+- { "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+- { "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+- { "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+- { "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+- { "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+- { "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16"},
+- { "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+- { "l18", QCOM_SMD_RPM_LDOA, 18, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+- { "l19", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l1_l19"},
+- { "l20", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l20"},
+- { "l21", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l21"},
+- { "l22", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22"},
+- { "l23", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l2_l23"},
++ { "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++ { "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++ { "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++ { "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++ { "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++ { "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l5_l6_l7_l16" },
++ { "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++ /* L18 seems not to exist. */
++ { "l19", QCOM_SMD_RPM_LDOA, 19, &pm8950_pldo, "vdd_l1_l19" },
++ /* L20 & L21 seem not to exist. */
++ { "l22", QCOM_SMD_RPM_LDOA, 22, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22" },
++ { "l23", QCOM_SMD_RPM_LDOA, 23, &pm8950_pldo, "vdd_l2_l23" },
+ {}
+ };
+
+diff --git a/drivers/regulator/scmi-regulator.c b/drivers/regulator/scmi-regulator.c
+index 1f02f60ad1366..41ae7ac27ff6a 100644
+--- a/drivers/regulator/scmi-regulator.c
++++ b/drivers/regulator/scmi-regulator.c
+@@ -352,7 +352,7 @@ static int scmi_regulator_probe(struct scmi_device *sdev)
+ return ret;
+ }
+ }
+-
++ of_node_put(np);
+ /*
+ * Register a regulator for each valid regulator-DT-entry that we
+ * can successfully reach via SCMI and has a valid associated voltage
+diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
+index 297fb399363cc..620a917cd3a15 100644
+--- a/drivers/s390/cio/chsc.c
++++ b/drivers/s390/cio/chsc.c
+@@ -1255,7 +1255,7 @@ exit:
+ EXPORT_SYMBOL_GPL(css_general_characteristics);
+ EXPORT_SYMBOL_GPL(css_chsc_characteristics);
+
+-int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta)
++int chsc_sstpc(void *page, unsigned int op, u16 ctrl, long *clock_delta)
+ {
+ struct {
+ struct chsc_header request;
+@@ -1266,7 +1266,7 @@ int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta)
+ unsigned int rsvd2[5];
+ struct chsc_header response;
+ unsigned int rsvd3[3];
+- u64 clock_delta;
++ s64 clock_delta;
+ unsigned int rsvd4[2];
+ } *rr;
+ int rc;
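The chsc_sstpc() change matters because the firmware reports a signed clock delta: funnelling an s64 through a u64 pointer keeps the bit pattern but breaks any later arithmetic that assumes magnitude. A standalone illustration of the failure mode:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int64_t hw_delta = -500;		/* clock stepped backwards */
	uint64_t as_u64 = (uint64_t)hw_delta;	/* old: sign silently lost */
	long as_long = (long)hw_delta;		/* new: sign preserved */

	printf("unsigned view: %llu, halved: %llu\n",
	       (unsigned long long)as_u64,
	       (unsigned long long)(as_u64 / 2));	/* huge bogus value */
	printf("signed view:   %ld, halved: %ld\n", as_long, as_long / 2);
	return 0;
}

Halving the unsigned view yields a value near 2^63 instead of -250, which is the kind of scaling downstream consumers of the delta may perform.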
+diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c
+index c11916b8ae001..bbc03190a6f23 100644
+--- a/drivers/scsi/dc395x.c
++++ b/drivers/scsi/dc395x.c
+@@ -3588,10 +3588,19 @@ static struct DeviceCtlBlk *device_alloc(struct AdapterCtlBlk *acb,
+ #endif
+ if (dcb->target_lun != 0) {
+ /* Copy settings */
+- struct DeviceCtlBlk *p;
+- list_for_each_entry(p, &acb->dcb_list, list)
+- if (p->target_id == dcb->target_id)
++ struct DeviceCtlBlk *p = NULL, *iter;
++
++ list_for_each_entry(iter, &acb->dcb_list, list)
++ if (iter->target_id == dcb->target_id) {
++ p = iter;
+ break;
++ }
++
++ if (!p) {
++ kfree(dcb);
++ return NULL;
++ }
++
+ dprintkdbg(DBG_1,
+ "device_alloc: <%02i-%i> copy from <%02i-%i>\n",
+ dcb->target_id, dcb->target_lun,
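The dc395x change fixes a subtle list_for_each_entry() misuse: when the loop ends without a break, the cursor does not point at a real element (it aliases the list head), so dereferencing it afterwards reads garbage. The fixed code searches with a throwaway iterator and only uses the result when a match was proven; if none exists it now also frees the half-built dcb instead of copying from junk. The general pattern (illustrative struct/function names):

#include <linux/list.h>

struct item {
	int id;
	struct list_head list;
};

static struct item *find_item(struct list_head *head, int id)
{
	struct item *found = NULL, *iter;

	list_for_each_entry(iter, head, list) {
		if (iter->id == id) {
			found = iter;
			break;
		}
	}
	/* If the loop fell off the end, 'iter' aliases the head and
	 * must never be dereferenced; return the proven match only. */
	return found;
}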
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 1756a0ac6f083..558f3f4e18593 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -1969,7 +1969,7 @@ EXPORT_SYMBOL(fcoe_ctlr_recv_flogi);
+ *
+ * Returns: u64 fc world wide name
+ */
+-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN],
++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN],
+ unsigned int scheme, unsigned int port)
+ {
+ u64 wwn;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index ebf5ec38891bb..2fbeb151aadbe 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -680,8 +680,6 @@ static int hisi_sas_init_device(struct domain_device *device)
+ struct hisi_sas_tmf_task tmf_task;
+ int retry = HISI_SAS_DISK_RECOVER_CNT;
+ struct hisi_hba *hisi_hba = dev_to_hisi_hba(device);
+- struct device *dev = hisi_hba->dev;
+- struct sas_phy *local_phy;
+
+ switch (device->dev_type) {
+ case SAS_END_DEVICE:
+@@ -702,30 +700,18 @@ static int hisi_sas_init_device(struct domain_device *device)
+ case SAS_SATA_PM_PORT:
+ case SAS_SATA_PENDING:
+ /*
+- * send HARD RESET to clear previous affiliation of
+- * STP target port
++ * If an expander is swapped when a SATA disk is attached then
++ * we should issue a hard reset to clear previous affiliation
++ * of STP target port, see SPL (chapter 6.19.4).
++ *
++ * However we don't need to issue a hard reset here for these
++ * reasons:
++ * a. When probing the device, libsas/libata already issues a
++ * hard reset in sas_probe_sata() -> ata_sas_async_probe().
++ * Note that in hisi_sas_debug_I_T_nexus_reset() we take care
++ * to issue a hard reset by checking the dev status (== INIT).
++ * b. When resetting the controller, this is simply unnecessary.
+ */
+- local_phy = sas_get_local_phy(device);
+- if (!scsi_is_sas_phy_local(local_phy) &&
+- !test_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags)) {
+- unsigned long deadline = ata_deadline(jiffies, 20000);
+- struct sata_device *sata_dev = &device->sata_dev;
+- struct ata_host *ata_host = sata_dev->ata_host;
+- struct ata_port_operations *ops = ata_host->ops;
+- struct ata_port *ap = sata_dev->ap;
+- struct ata_link *link;
+- unsigned int classes;
+-
+- ata_for_each_link(link, ap, EDGE)
+- rc = ops->hardreset(link, &classes,
+- deadline);
+- }
+- sas_put_local_phy(local_phy);
+- if (rc) {
+- dev_warn(dev, "SATA disk hardreset fail: %d\n", rc);
+- return rc;
+- }
+-
+ while (retry-- > 0) {
+ rc = hisi_sas_softreset_ata_disk(device);
+ if (!rc)
+@@ -741,15 +727,19 @@ static int hisi_sas_init_device(struct domain_device *device)
+
+ int hisi_sas_slave_alloc(struct scsi_device *sdev)
+ {
+- struct domain_device *ddev;
++ struct domain_device *ddev = sdev_to_domain_dev(sdev);
++ struct hisi_sas_device *sas_dev = ddev->lldd_dev;
+ int rc;
+
+ rc = sas_slave_alloc(sdev);
+ if (rc)
+ return rc;
+- ddev = sdev_to_domain_dev(sdev);
+
+- return hisi_sas_init_device(ddev);
++ rc = hisi_sas_init_device(ddev);
++ if (rc)
++ return rc;
++ sas_dev->dev_status = HISI_SAS_DEV_NORMAL;
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(hisi_sas_slave_alloc);
+
+@@ -799,7 +789,6 @@ static int hisi_sas_dev_found(struct domain_device *device)
+ dev_info(dev, "dev[%d:%x] found\n",
+ sas_dev->device_id, sas_dev->dev_type);
+
+- sas_dev->dev_status = HISI_SAS_DEV_NORMAL;
+ return 0;
+
+ err_out:
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 52089538e9de6..b309a9d0f5b77 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -1561,9 +1561,15 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
+
+ phy->port_id = port_id;
+
+- /* Call pm_runtime_put_sync() with pairs in hisi_sas_phyup_pm_work() */
++ /*
++ * Call pm_runtime_get_noresume() which pairs with
++ * hisi_sas_phyup_pm_work() -> pm_runtime_put_sync().
++ * For failure call pm_runtime_put() as we are in a hardirq context.
++ */
+ pm_runtime_get_noresume(dev);
+- hisi_sas_notify_phy_event(phy, HISI_PHYE_PHY_UP_PM);
++ res = hisi_sas_notify_phy_event(phy, HISI_PHYE_PHY_UP_PM);
++ if (!res)
++ pm_runtime_put(dev);
+
+ res = IRQ_HANDLED;
+
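In phy_up_v3_hw() the runtime-PM reference taken with pm_runtime_get_noresume() is normally dropped by the queued PHY-up work; if hisi_sas_notify_phy_event() fails to queue that work the reference would leak, so the hunk adds a pm_runtime_put() on that path -- the asynchronous variant, since this runs in hard-IRQ context. A sketch of the pairing rule (demo_* names are illustrative):

#include <linux/pm_runtime.h>

static int demo_kick_work(struct device *dev, bool (*queue_fn)(void))
{
	pm_runtime_get_noresume(dev);	/* ref to be dropped by the work */
	if (!queue_fn()) {
		/* The work will never run, so drop the ref here.
		 * In IRQ context use pm_runtime_put(), not put_sync(). */
		pm_runtime_put(dev);
		return -EBUSY;
	}
	return 0;	/* the work item calls pm_runtime_put_sync() */
}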
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index f936833c99099..8d416019f59f5 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -3784,9 +3784,6 @@ lpfc_least_capable_settings(struct lpfc_hba *phba,
+ {
+ u32 rsp_sig_cap = 0, drv_sig_cap = 0;
+ u32 rsp_sig_freq_cyc = 0, rsp_sig_freq_scale = 0;
+- struct lpfc_cgn_info *cp;
+- u32 crc;
+- u16 sig_freq;
+
+ /* Get rsp signal and frequency capabilities. */
+ rsp_sig_cap = be32_to_cpu(pcgd->xmt_signal_capability);
+@@ -3842,25 +3839,7 @@ lpfc_least_capable_settings(struct lpfc_hba *phba,
+ }
+ }
+
+- if (!phba->cgn_i)
+- return;
+-
+- /* Update signal frequency in congestion info buffer */
+- cp = (struct lpfc_cgn_info *)phba->cgn_i->virt;
+-
+- /* Frequency (in ms) Signal Warning/Signal Congestion Notifications
+- * are received by the HBA
+- */
+- sig_freq = phba->cgn_sig_freq;
+-
+- if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ONLY)
+- cp->cgn_warn_freq = cpu_to_le16(sig_freq);
+- if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM) {
+- cp->cgn_alarm_freq = cpu_to_le16(sig_freq);
+- cp->cgn_warn_freq = cpu_to_le16(sig_freq);
+- }
+- crc = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ, LPFC_CGN_CRC32_SEED);
+- cp->cgn_info_crc = cpu_to_le32(crc);
++ /* We are NOT recording signal frequency in congestion info buffer */
+ return;
+
+ out_no_support:
+@@ -9606,11 +9585,14 @@ lpfc_els_rcv_fpin_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+ /* Take action here for an Alarm event */
+ if (phba->cmf_active_mode != LPFC_CFG_OFF) {
+ if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_ALARM) {
+- /* Track of alarm cnt for cgn_info */
+- atomic_inc(&phba->cgn_fabric_alarm_cnt);
+ /* Track of alarm cnt for SYNC_WQE */
+ atomic_inc(&phba->cgn_sync_alarm_cnt);
+ }
++ /* Track alarm cnt for cgn_info regardless
++ * of whether CMF is configured for Signals
++ * or FPINs.
++ */
++ atomic_inc(&phba->cgn_fabric_alarm_cnt);
+ goto cleanup;
+ }
+ break;
+@@ -9618,11 +9600,14 @@ lpfc_els_rcv_fpin_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+ /* Take action here for a Warning event */
+ if (phba->cmf_active_mode != LPFC_CFG_OFF) {
+ if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_WARN) {
+- /* Track of warning cnt for cgn_info */
+- atomic_inc(&phba->cgn_fabric_warn_cnt);
+ /* Track of warning cnt for SYNC_WQE */
+ atomic_inc(&phba->cgn_sync_warn_cnt);
+ }
++ /* Track warning cnt and freq for cgn_info
++ * regardless of whether CMF is configured for
++ * Signals or FPINs.
++ */
++ atomic_inc(&phba->cgn_fabric_warn_cnt);
+ cleanup:
+ /* Save frequency in ms */
+ phba->cgn_fpin_frequency =
+@@ -9631,14 +9616,10 @@ cleanup:
+ if (phba->cgn_i) {
+ cp = (struct lpfc_cgn_info *)
+ phba->cgn_i->virt;
+- if (phba->cgn_reg_fpin &
+- LPFC_CGN_FPIN_ALARM)
+- cp->cgn_alarm_freq =
+- cpu_to_le16(value);
+- if (phba->cgn_reg_fpin &
+- LPFC_CGN_FPIN_WARN)
+- cp->cgn_warn_freq =
+- cpu_to_le16(value);
++ cp->cgn_alarm_freq =
++ cpu_to_le16(value);
++ cp->cgn_warn_freq =
++ cpu_to_le16(value);
+ crc = lpfc_cgn_calc_crc32
+ (cp,
+ LPFC_CGN_INFO_SZ,
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 9569a7390f9d5..7e4a8848a12c3 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -5877,21 +5877,8 @@ lpfc_cgn_save_evt_cnt(struct lpfc_hba *phba)
+
+ /* Use the frequency found in the last rcv'ed FPIN */
+ value = phba->cgn_fpin_frequency;
+- if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_WARN)
+- cp->cgn_warn_freq = cpu_to_le16(value);
+- if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_ALARM)
+- cp->cgn_alarm_freq = cpu_to_le16(value);
+-
+- /* Frequency (in ms) Signal Warning/Signal Congestion Notifications
+- * are received by the HBA
+- */
+- value = phba->cgn_sig_freq;
+-
+- if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ONLY ||
+- phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM)
+- cp->cgn_warn_freq = cpu_to_le16(value);
+- if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM)
+- cp->cgn_alarm_freq = cpu_to_le16(value);
++ cp->cgn_warn_freq = cpu_to_le16(value);
++ cp->cgn_alarm_freq = cpu_to_le16(value);
+
+ lvalue = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ,
+ LPFC_CGN_CRC32_SEED);
+@@ -6606,9 +6593,6 @@ lpfc_sli4_async_sli_evt(struct lpfc_hba *phba, struct lpfc_acqe_sli *acqe_sli)
+ /* Alarm overrides warning, so check that first */
+ if (cgn_signal->alarm_cnt) {
+ if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM) {
+- /* Keep track of alarm cnt for cgn_info */
+- atomic_add(cgn_signal->alarm_cnt,
+- &phba->cgn_fabric_alarm_cnt);
+ /* Keep track of alarm cnt for CMF_SYNC_WQE */
+ atomic_add(cgn_signal->alarm_cnt,
+ &phba->cgn_sync_alarm_cnt);
+@@ -6617,8 +6601,6 @@ lpfc_sli4_async_sli_evt(struct lpfc_hba *phba, struct lpfc_acqe_sli *acqe_sli)
+ /* signal action needs to be taken */
+ if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ONLY ||
+ phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM) {
+- /* Keep track of warning cnt for cgn_info */
+- atomic_add(cnt, &phba->cgn_fabric_warn_cnt);
+ /* Keep track of warning cnt for CMF_SYNC_WQE */
+ atomic_add(cnt, &phba->cgn_sync_warn_cnt);
+ }
+@@ -15712,34 +15694,7 @@ void lpfc_dmp_dbg(struct lpfc_hba *phba)
+ unsigned int temp_idx;
+ int i;
+ int j = 0;
+- unsigned long rem_nsec, iflags;
+- bool log_verbose = false;
+- struct lpfc_vport *port_iterator;
+-
+- /* Don't dump messages if we explicitly set log_verbose for the
+- * physical port or any vport.
+- */
+- if (phba->cfg_log_verbose)
+- return;
+-
+- spin_lock_irqsave(&phba->port_list_lock, iflags);
+- list_for_each_entry(port_iterator, &phba->port_list, listentry) {
+- if (port_iterator->load_flag & FC_UNLOADING)
+- continue;
+- if (scsi_host_get(lpfc_shost_from_vport(port_iterator))) {
+- if (port_iterator->cfg_log_verbose)
+- log_verbose = true;
+-
+- scsi_host_put(lpfc_shost_from_vport(port_iterator));
+-
+- if (log_verbose) {
+- spin_unlock_irqrestore(&phba->port_list_lock,
+- iflags);
+- return;
+- }
+- }
+- }
+- spin_unlock_irqrestore(&phba->port_list_lock, iflags);
++ unsigned long rem_nsec;
+
+ if (atomic_cmpxchg(&phba->dbg_log_dmping, 0, 1) != 0)
+ return;
+diff --git a/drivers/scsi/lpfc/lpfc_logmsg.h b/drivers/scsi/lpfc/lpfc_logmsg.h
+index 7d480c7987942..a5aafe230c74f 100644
+--- a/drivers/scsi/lpfc/lpfc_logmsg.h
++++ b/drivers/scsi/lpfc/lpfc_logmsg.h
+@@ -73,7 +73,7 @@ do { \
+ #define lpfc_printf_vlog(vport, level, mask, fmt, arg...) \
+ do { \
+ { if (((mask) & (vport)->cfg_log_verbose) || (level[1] <= '3')) { \
+- if ((mask) & LOG_TRACE_EVENT) \
++ if ((mask) & LOG_TRACE_EVENT && !(vport)->cfg_log_verbose) \
+ lpfc_dmp_dbg((vport)->phba); \
+ dev_printk(level, &((vport)->phba->pcidev)->dev, "%d:(%d):" \
+ fmt, (vport)->phba->brd_no, vport->vpi, ##arg); \
+@@ -89,11 +89,11 @@ do { \
+ (phba)->pport->cfg_log_verbose : \
+ (phba)->cfg_log_verbose; \
+ if (((mask) & log_verbose) || (level[1] <= '3')) { \
+- if ((mask) & LOG_TRACE_EVENT) \
++ if ((mask) & LOG_TRACE_EVENT && !log_verbose) \
+ lpfc_dmp_dbg(phba); \
+ dev_printk(level, &((phba)->pcidev)->dev, "%d:" \
+ fmt, phba->brd_no, ##arg); \
+- } else if (!(phba)->cfg_log_verbose)\
++ } else if (!log_verbose)\
+ lpfc_dbg_print(phba, "%d:" fmt, phba->brd_no, ##arg); \
+ } \
+ } while (0)
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 5a3da38a90670..d9a8e3a59d944 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -3936,7 +3936,7 @@ lpfc_update_cmf_cmpl(struct lpfc_hba *phba,
+ else
+ time = div_u64(time + 500, 1000); /* round it */
+
+- cgs = this_cpu_ptr(phba->cmf_stat);
++ cgs = per_cpu_ptr(phba->cmf_stat, raw_smp_processor_id());
+ atomic64_add(size, &cgs->rcv_bytes);
+ atomic64_add(time, &cgs->rx_latency);
+ atomic_inc(&cgs->rx_io_cnt);
+@@ -3980,7 +3980,7 @@ lpfc_update_cmf_cmd(struct lpfc_hba *phba, uint32_t size)
+ atomic_set(&phba->rx_max_read_cnt, size);
+ }
+
+- cgs = this_cpu_ptr(phba->cmf_stat);
++ cgs = per_cpu_ptr(phba->cmf_stat, raw_smp_processor_id());
+ atomic64_add(size, &cgs->total_bytes);
+ return 0;
+ }
+@@ -5907,25 +5907,25 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ if (!lpfc_cmd)
+ return ret;
+
+- spin_lock_irqsave(&phba->hbalock, flags);
++ /* Guard against IO completion being called at same time */
++ spin_lock_irqsave(&lpfc_cmd->buf_lock, flags);
++
++ spin_lock(&phba->hbalock);
+ /* driver queued commands are in process of being flushed */
+ if (phba->hba_flag & HBA_IOQ_FLUSH) {
+ lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+ "3168 SCSI Layer abort requested I/O has been "
+ "flushed by LLD.\n");
+ ret = FAILED;
+- goto out_unlock;
++ goto out_unlock_hba;
+ }
+
+- /* Guard against IO completion being called at same time */
+- spin_lock(&lpfc_cmd->buf_lock);
+-
+ if (!lpfc_cmd->pCmd) {
+ lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+ "2873 SCSI Layer I/O Abort Request IO CMPL Status "
+ "x%x ID %d LUN %llu\n",
+ SUCCESS, cmnd->device->id, cmnd->device->lun);
+- goto out_unlock_buf;
++ goto out_unlock_hba;
+ }
+
+ iocb = &lpfc_cmd->cur_iocbq;
+@@ -5933,7 +5933,7 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
+ if (!pring_s4) {
+ ret = FAILED;
+- goto out_unlock_buf;
++ goto out_unlock_hba;
+ }
+ spin_lock(&pring_s4->ring_lock);
+ }
+@@ -5966,8 +5966,8 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ "3389 SCSI Layer I/O Abort Request is pending\n");
+ if (phba->sli_rev == LPFC_SLI_REV4)
+ spin_unlock(&pring_s4->ring_lock);
+- spin_unlock(&lpfc_cmd->buf_lock);
+- spin_unlock_irqrestore(&phba->hbalock, flags);
++ spin_unlock(&phba->hbalock);
++ spin_unlock_irqrestore(&lpfc_cmd->buf_lock, flags);
+ goto wait_for_cmpl;
+ }
+
+@@ -5988,15 +5988,13 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ if (ret_val != IOCB_SUCCESS) {
+ /* Indicate the IO is not being aborted by the driver. */
+ lpfc_cmd->waitq = NULL;
+- spin_unlock(&lpfc_cmd->buf_lock);
+- spin_unlock_irqrestore(&phba->hbalock, flags);
+ ret = FAILED;
+- goto out;
++ goto out_unlock_hba;
+ }
+
+ /* no longer need the lock after this point */
+- spin_unlock(&lpfc_cmd->buf_lock);
+- spin_unlock_irqrestore(&phba->hbalock, flags);
++ spin_unlock(&phba->hbalock);
++ spin_unlock_irqrestore(&lpfc_cmd->buf_lock, flags);
+
+ if (phba->cfg_poll & DISABLE_FCP_RING_INT)
+ lpfc_sli_handle_fast_ring_event(phba,
+@@ -6031,10 +6029,9 @@ wait_for_cmpl:
+ out_unlock_ring:
+ if (phba->sli_rev == LPFC_SLI_REV4)
+ spin_unlock(&pring_s4->ring_lock);
+-out_unlock_buf:
+- spin_unlock(&lpfc_cmd->buf_lock);
+-out_unlock:
+- spin_unlock_irqrestore(&phba->hbalock, flags);
++out_unlock_hba:
++ spin_unlock(&phba->hbalock);
++ spin_unlock_irqrestore(&lpfc_cmd->buf_lock, flags);
+ out:
+ lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+ "0749 SCSI Layer I/O Abort Request Status x%x ID %d "
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index b64c5f157ce90..f5f472991816b 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -18466,7 +18466,6 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr)
+ case FC_RCTL_ELS_REP: /* extended link services reply */
+ case FC_RCTL_ELS4_REQ: /* FC-4 ELS request */
+ case FC_RCTL_ELS4_REP: /* FC-4 ELS reply */
+- case FC_RCTL_BA_NOP: /* basic link service NOP */
+ case FC_RCTL_BA_ABTS: /* basic link service abort */
+ case FC_RCTL_BA_RMC: /* remove connection */
+ case FC_RCTL_BA_ACC: /* basic accept */
+@@ -18487,6 +18486,7 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr)
+ fc_vft_hdr = (struct fc_vft_header *)fc_hdr;
+ fc_hdr = &((struct fc_frame_header *)fc_vft_hdr)[1];
+ return lpfc_fc_frame_check(phba, fc_hdr);
++ case FC_RCTL_BA_NOP: /* basic link service NOP */
+ default:
+ goto drop;
+ }
+@@ -19299,12 +19299,14 @@ lpfc_sli4_send_seq_to_ulp(struct lpfc_vport *vport,
+ if (!lpfc_complete_unsol_iocb(phba,
+ phba->sli4_hba.els_wq->pring,
+ iocbq, fc_hdr->fh_r_ctl,
+- fc_hdr->fh_type))
++ fc_hdr->fh_type)) {
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "2540 Ring %d handler: unexpected Rctl "
+ "x%x Type x%x received\n",
+ LPFC_ELS_RING,
+ fc_hdr->fh_r_ctl, fc_hdr->fh_type);
++ lpfc_in_buf_free(phba, &seq_dmabuf->dbuf);
++ }
+
+ /* Free iocb created in lpfc_prep_seq */
+ list_for_each_entry_safe(curr_iocb, next_iocb,
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index bf987f3a7f3f2..840175f34d1a7 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -4608,7 +4608,7 @@ static int __init megaraid_init(void)
+ * major number allocation.
+ */
+ major = register_chrdev(0, "megadev_legacy", &megadev_fops);
+- if (!major) {
++ if (major < 0) {
+ printk(KERN_WARNING
+ "megaraid: failed to register char device\n");
+ }
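When asked for major 0, register_chrdev() returns the dynamically assigned major number on success and a negative errno on failure, so the old `if (!major)` could never detect an error. The corrected usage in isolation (demo names illustrative):

#include <linux/fs.h>
#include <linux/printk.h>

static int demo_register(const struct file_operations *fops)
{
	int major = register_chrdev(0, "demo_legacy", fops);

	if (major < 0) {	/* errno comes back negative, never 0 */
		pr_warn("demo_legacy: failed to register char device: %d\n",
			major);
		return major;
	}
	return major;		/* the dynamically allocated major */
}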
+diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
+index eafe0db98d542..122d650d08102 100644
+--- a/drivers/scsi/ufs/ti-j721e-ufs.c
++++ b/drivers/scsi/ufs/ti-j721e-ufs.c
+@@ -29,11 +29,9 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ return PTR_ERR(regbase);
+
+ pm_runtime_enable(dev);
+- ret = pm_runtime_get_sync(dev);
+- if (ret < 0) {
+- pm_runtime_put_noidle(dev);
++ ret = pm_runtime_resume_and_get(dev);
++ if (ret < 0)
+ goto disable_pm;
+- }
+
+ /* Select MPHY refclk frequency */
+ clk = devm_clk_get(dev, NULL);
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index 0d2e950d0865e..35a1e6ee8de44 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -641,12 +641,7 @@ static int ufs_qcom_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ return err;
+ }
+
+- err = ufs_qcom_ice_resume(host);
+- if (err)
+- return err;
+-
+- hba->is_sys_suspended = false;
+- return 0;
++ return ufs_qcom_ice_resume(host);
+ }
+
+ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable)
+@@ -687,8 +682,11 @@ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable)
+
+ writel_relaxed(temp, host->dev_ref_clk_ctrl_mmio);
+
+- /* ensure that ref_clk is enabled/disabled before we return */
+- wmb();
++ /*
++ * Make sure the write to ref_clk reaches the destination and
++ * not stored in a Write Buffer (WB).
++ */
++ readl(host->dev_ref_clk_ctrl_mmio);
+
+ /*
+ * If we call hibern8 exit after this, we need to make sure that
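The ufs-qcom hunk swaps a memory barrier for a read-back because wmb() only orders CPU stores; it cannot push a posted MMIO write out of interconnect write buffers. A read of the same register can, since it cannot complete until the preceding write has reached the device. The idiom on its own:

#include <linux/io.h>

static void demo_flush_mmio_write(void __iomem *reg, u32 val)
{
	writel_relaxed(val, reg);
	readl(reg);	/* completes only after the write lands at the device */
}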
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 5696e52c76e9d..05d2155dbf402 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -115,8 +115,13 @@ int ufshcd_dump_regs(struct ufs_hba *hba, size_t offset, size_t len,
+ if (!regs)
+ return -ENOMEM;
+
+- for (pos = 0; pos < len; pos += 4)
++ for (pos = 0; pos < len; pos += 4) {
++ if (offset == 0 &&
++ pos >= REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER &&
++ pos <= REG_UIC_ERROR_CODE_DME)
++ continue;
+ regs[pos / 4] = ufshcd_readl(hba, offset + pos);
++ }
+
+ ufshcd_hex_dump(prefix, regs, len);
+ kfree(regs);
+diff --git a/drivers/soc/bcm/bcm63xx/bcm-pmb.c b/drivers/soc/bcm/bcm63xx/bcm-pmb.c
+index 7bbe46ea5f945..9407cac47fdbe 100644
+--- a/drivers/soc/bcm/bcm63xx/bcm-pmb.c
++++ b/drivers/soc/bcm/bcm63xx/bcm-pmb.c
+@@ -312,6 +312,9 @@ static int bcm_pmb_probe(struct platform_device *pdev)
+ for (e = table; e->name; e++) {
+ struct bcm_pmb_pm_domain *pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
+
++ if (!pd)
++ return -ENOMEM;
++
+ pd->pmb = pmb;
+ pd->data = e;
+ pd->genpd.name = e->name;
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index ec52f29c88673..73bbe960f1443 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -674,6 +674,7 @@ static const struct of_device_id qcom_llcc_of_match[] = {
+ { .compatible = "qcom,sm8350-llcc", .data = &sm8350_cfg },
+ { }
+ };
++MODULE_DEVICE_TABLE(of, qcom_llcc_of_match);
+
+ static struct platform_driver qcom_llcc_driver = {
+ .driver = {
+diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c
+index 4a157240f419e..59dbf4b61e6c2 100644
+--- a/drivers/soc/qcom/smp2p.c
++++ b/drivers/soc/qcom/smp2p.c
+@@ -493,6 +493,7 @@ static int smp2p_parse_ipc(struct qcom_smp2p *smp2p)
+ }
+
+ smp2p->ipc_regmap = syscon_node_to_regmap(syscon);
++ of_node_put(syscon);
+ if (IS_ERR(smp2p->ipc_regmap))
+ return PTR_ERR(smp2p->ipc_regmap);
+
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index ef15d014c03a3..9df9bba242f3e 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -374,6 +374,7 @@ static int smsm_parse_ipc(struct qcom_smsm *smsm, unsigned host_id)
+ return 0;
+
+ host->ipc_regmap = syscon_node_to_regmap(syscon);
++ of_node_put(syscon);
+ if (IS_ERR(host->ipc_regmap))
+ return PTR_ERR(host->ipc_regmap);
+
+diff --git a/drivers/soc/ti/ti_sci_pm_domains.c b/drivers/soc/ti/ti_sci_pm_domains.c
+index 8afb3f45d2637..a33ec7eaf23d1 100644
+--- a/drivers/soc/ti/ti_sci_pm_domains.c
++++ b/drivers/soc/ti/ti_sci_pm_domains.c
+@@ -183,6 +183,8 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ devm_kcalloc(dev, max_id + 1,
+ sizeof(*pd_provider->data.domains),
+ GFP_KERNEL);
++ if (!pd_provider->data.domains)
++ return -ENOMEM;
+
+ pd_provider->data.num_domains = max_id + 1;
+ pd_provider->data.xlate = ti_sci_pd_xlate;
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index b8ac24318cb3a..85a33bcb78845 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1863,7 +1863,7 @@ static const struct cqspi_driver_platdata intel_lgm_qspi = {
+ };
+
+ static const struct cqspi_driver_platdata socfpga_qspi = {
+- .quirks = CQSPI_NO_SUPPORT_WR_COMPLETION,
++ .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_NO_SUPPORT_WR_COMPLETION,
+ };
+
+ static const struct cqspi_driver_platdata versal_ospi = {
+diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c
+index 9851551ebbe05..46ae46a944c5c 100644
+--- a/drivers/spi/spi-fsl-qspi.c
++++ b/drivers/spi/spi-fsl-qspi.c
+@@ -876,6 +876,10 @@ static int fsl_qspi_probe(struct platform_device *pdev)
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ "QuadSPI-memory");
++ if (!res) {
++ ret = -EINVAL;
++ goto err_put_ctrl;
++ }
+ q->memmap_phy = res->start;
+ /* Since there are 4 cs, map size required is 4 times ahb_buf_size */
+ q->ahb_addr = devm_ioremap(dev, q->memmap_phy,
+diff --git a/drivers/spi/spi-img-spfi.c b/drivers/spi/spi-img-spfi.c
+index 5f05d519fbbd0..71376b6df89db 100644
+--- a/drivers/spi/spi-img-spfi.c
++++ b/drivers/spi/spi-img-spfi.c
+@@ -731,7 +731,7 @@ static int img_spfi_resume(struct device *dev)
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+- if (ret) {
++ if (ret < 0) {
+ pm_runtime_put_noidle(dev);
+ return ret;
+ }
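The spi-img-spfi fix exists because pm_runtime_get_sync() returns 1 when the device was already resumed, so `if (ret)` treated a perfectly good resume as an error; only negative returns are failures. The ti-j721e-ufs hunk earlier in this patch sidesteps the trap entirely with pm_runtime_resume_and_get(). Both shapes side by side (sketch):

#include <linux/pm_runtime.h>

static int demo_resume(struct device *dev)
{
	int ret;

	/* Classic API: 0 and 1 are both success, only ret < 0 fails. */
	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		pm_runtime_put_noidle(dev);	/* balance the failed get */
		return ret;
	}
	pm_runtime_put(dev);

	/* Newer helper: drops its own reference on failure. */
	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0)
		return ret;
	pm_runtime_put(dev);
	return 0;
}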
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index c6a1bb09be056..b721b62118e12 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -133,7 +133,8 @@
+ #define INT_TF_OVERFLOW (1 << 1)
+ #define INT_RF_UNDERFLOW (1 << 2)
+ #define INT_RF_OVERFLOW (1 << 3)
+-#define INT_RF_FULL (1 << 4)
++#define INT_RF_FULL (1 << 4)
++#define INT_CS_INACTIVE (1 << 6)
+
+ /* Bit fields in ICR, 4bit */
+ #define ICR_MASK 0x0f
+@@ -194,6 +195,10 @@ struct rockchip_spi {
+ bool cs_asserted[ROCKCHIP_SPI_MAX_CS_NUM];
+
+ bool slave_abort;
++	bool cs_inactive; /* spi slave transmission stops when cs inactive */
++ bool cs_high_supported; /* native CS supports active-high polarity */
++
++ struct spi_transfer *xfer; /* Store xfer temporarily */
+ };
+
+ static inline void spi_enable_chip(struct rockchip_spi *rs, bool enable)
+@@ -343,6 +348,15 @@ static irqreturn_t rockchip_spi_isr(int irq, void *dev_id)
+ struct spi_controller *ctlr = dev_id;
+ struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+
++	/* When the CS-inactive interrupt fires, abort the slave transfer */
++ if (rs->cs_inactive && readl_relaxed(rs->regs + ROCKCHIP_SPI_IMR) & INT_CS_INACTIVE) {
++ ctlr->slave_abort(ctlr);
++ writel_relaxed(0, rs->regs + ROCKCHIP_SPI_IMR);
++ writel_relaxed(0xffffffff, rs->regs + ROCKCHIP_SPI_ICR);
++
++ return IRQ_HANDLED;
++ }
++
+ if (rs->tx_left)
+ rockchip_spi_pio_writer(rs);
+
+@@ -350,6 +364,7 @@ static irqreturn_t rockchip_spi_isr(int irq, void *dev_id)
+ if (!rs->rx_left) {
+ spi_enable_chip(rs, false);
+ writel_relaxed(0, rs->regs + ROCKCHIP_SPI_IMR);
++ writel_relaxed(0xffffffff, rs->regs + ROCKCHIP_SPI_ICR);
+ spi_finalize_current_transfer(ctlr);
+ }
+
+@@ -357,14 +372,18 @@ static irqreturn_t rockchip_spi_isr(int irq, void *dev_id)
+ }
+
+ static int rockchip_spi_prepare_irq(struct rockchip_spi *rs,
+- struct spi_transfer *xfer)
++ struct spi_controller *ctlr,
++ struct spi_transfer *xfer)
+ {
+ rs->tx = xfer->tx_buf;
+ rs->rx = xfer->rx_buf;
+ rs->tx_left = rs->tx ? xfer->len / rs->n_bytes : 0;
+ rs->rx_left = xfer->len / rs->n_bytes;
+
+- writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR);
++ if (rs->cs_inactive)
++ writel_relaxed(INT_RF_FULL | INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR);
++ else
++ writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR);
+ spi_enable_chip(rs, true);
+
+ if (rs->tx_left)
+@@ -383,6 +402,9 @@ static void rockchip_spi_dma_rxcb(void *data)
+ if (state & TXDMA && !rs->slave_abort)
+ return;
+
++ if (rs->cs_inactive)
++ writel_relaxed(0, rs->regs + ROCKCHIP_SPI_IMR);
++
+ spi_enable_chip(rs, false);
+ spi_finalize_current_transfer(ctlr);
+ }
+@@ -423,14 +445,16 @@ static int rockchip_spi_prepare_dma(struct rockchip_spi *rs,
+
+ atomic_set(&rs->state, 0);
+
++ rs->tx = xfer->tx_buf;
++ rs->rx = xfer->rx_buf;
++
+ rxdesc = NULL;
+ if (xfer->rx_buf) {
+ struct dma_slave_config rxconf = {
+ .direction = DMA_DEV_TO_MEM,
+ .src_addr = rs->dma_addr_rx,
+ .src_addr_width = rs->n_bytes,
+- .src_maxburst = rockchip_spi_calc_burst_size(xfer->len /
+- rs->n_bytes),
++ .src_maxburst = rockchip_spi_calc_burst_size(xfer->len / rs->n_bytes),
+ };
+
+ dmaengine_slave_config(ctlr->dma_rx, &rxconf);
+@@ -474,10 +498,13 @@ static int rockchip_spi_prepare_dma(struct rockchip_spi *rs,
+ /* rx must be started before tx due to spi instinct */
+ if (rxdesc) {
+ atomic_or(RXDMA, &rs->state);
+- dmaengine_submit(rxdesc);
++ ctlr->dma_rx->cookie = dmaengine_submit(rxdesc);
+ dma_async_issue_pending(ctlr->dma_rx);
+ }
+
++ if (rs->cs_inactive)
++ writel_relaxed(INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR);
++
+ spi_enable_chip(rs, true);
+
+ if (txdesc) {
+@@ -584,7 +611,42 @@ static size_t rockchip_spi_max_transfer_size(struct spi_device *spi)
+ static int rockchip_spi_slave_abort(struct spi_controller *ctlr)
+ {
+ struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
++ u32 rx_fifo_left;
++ struct dma_tx_state state;
++ enum dma_status status;
++
++ /* Get current dma rx point */
++ if (atomic_read(&rs->state) & RXDMA) {
++ dmaengine_pause(ctlr->dma_rx);
++ status = dmaengine_tx_status(ctlr->dma_rx, ctlr->dma_rx->cookie, &state);
++ if (status == DMA_ERROR) {
++ rs->rx = rs->xfer->rx_buf;
++ rs->xfer->len = 0;
++ rx_fifo_left = readl_relaxed(rs->regs + ROCKCHIP_SPI_RXFLR);
++ for (; rx_fifo_left; rx_fifo_left--)
++ readl_relaxed(rs->regs + ROCKCHIP_SPI_RXDR);
++ goto out;
++ } else {
++ rs->rx += rs->xfer->len - rs->n_bytes * state.residue;
++ }
++ }
++
++	/* Read the valid data left in the rx fifo and set rs->xfer->len to the real rx size */
++ if (rs->rx) {
++ rx_fifo_left = readl_relaxed(rs->regs + ROCKCHIP_SPI_RXFLR);
++ for (; rx_fifo_left; rx_fifo_left--) {
++ u32 rxw = readl_relaxed(rs->regs + ROCKCHIP_SPI_RXDR);
++
++ if (rs->n_bytes == 1)
++ *(u8 *)rs->rx = (u8)rxw;
++ else
++ *(u16 *)rs->rx = (u16)rxw;
++ rs->rx += rs->n_bytes;
++ }
++ rs->xfer->len = (unsigned int)(rs->rx - rs->xfer->rx_buf);
++ }
+
++out:
+ if (atomic_read(&rs->state) & RXDMA)
+ dmaengine_terminate_sync(ctlr->dma_rx);
+ if (atomic_read(&rs->state) & TXDMA)
+@@ -626,7 +688,7 @@ static int rockchip_spi_transfer_one(
+ }
+
+ rs->n_bytes = xfer->bits_per_word <= 8 ? 1 : 2;
+-
++ rs->xfer = xfer;
+ use_dma = ctlr->can_dma ? ctlr->can_dma(ctlr, spi, xfer) : false;
+
+ ret = rockchip_spi_config(rs, spi, xfer, use_dma, ctlr->slave);
+@@ -636,7 +698,7 @@ static int rockchip_spi_transfer_one(
+ if (use_dma)
+ return rockchip_spi_prepare_dma(rs, ctlr, xfer);
+
+- return rockchip_spi_prepare_irq(rs, xfer);
++ return rockchip_spi_prepare_irq(rs, ctlr, xfer);
+ }
+
+ static bool rockchip_spi_can_dma(struct spi_controller *ctlr,
+@@ -653,6 +715,34 @@ static bool rockchip_spi_can_dma(struct spi_controller *ctlr,
+ return xfer->len / bytes_per_word >= rs->fifo_len;
+ }
+
++static int rockchip_spi_setup(struct spi_device *spi)
++{
++ struct rockchip_spi *rs = spi_controller_get_devdata(spi->controller);
++ u32 cr0;
++
++ if (!spi->cs_gpiod && (spi->mode & SPI_CS_HIGH) && !rs->cs_high_supported) {
++ dev_warn(&spi->dev, "setup: non GPIO CS can't be active-high\n");
++ return -EINVAL;
++ }
++
++ pm_runtime_get_sync(rs->dev);
++
++ cr0 = readl_relaxed(rs->regs + ROCKCHIP_SPI_CTRLR0);
++
++ cr0 &= ~(0x3 << CR0_SCPH_OFFSET);
++ cr0 |= ((spi->mode & 0x3) << CR0_SCPH_OFFSET);
++ if (spi->mode & SPI_CS_HIGH && spi->chip_select <= 1)
++ cr0 |= BIT(spi->chip_select) << CR0_SOI_OFFSET;
++ else if (spi->chip_select <= 1)
++ cr0 &= ~(BIT(spi->chip_select) << CR0_SOI_OFFSET);
++
++ writel_relaxed(cr0, rs->regs + ROCKCHIP_SPI_CTRLR0);
++
++ pm_runtime_put(rs->dev);
++
++ return 0;
++}
++
+ static int rockchip_spi_probe(struct platform_device *pdev)
+ {
+ int ret;
+@@ -780,6 +870,7 @@ static int rockchip_spi_probe(struct platform_device *pdev)
+ ctlr->min_speed_hz = rs->freq / BAUDR_SCKDV_MAX;
+ ctlr->max_speed_hz = min(rs->freq / BAUDR_SCKDV_MIN, MAX_SCLK_OUT);
+
++ ctlr->setup = rockchip_spi_setup;
+ ctlr->set_cs = rockchip_spi_set_cs;
+ ctlr->transfer_one = rockchip_spi_transfer_one;
+ ctlr->max_transfer_size = rockchip_spi_max_transfer_size;
+@@ -814,9 +905,15 @@ static int rockchip_spi_probe(struct platform_device *pdev)
+
+ switch (readl_relaxed(rs->regs + ROCKCHIP_SPI_VERSION)) {
+ case ROCKCHIP_SPI_VER2_TYPE2:
++ rs->cs_high_supported = true;
+ ctlr->mode_bits |= SPI_CS_HIGH;
++ if (ctlr->can_dma && slave_mode)
++ rs->cs_inactive = true;
++ else
++ rs->cs_inactive = false;
+ break;
+ default:
++ rs->cs_inactive = false;
+ break;
+ }
+
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index bd5708d7e5a15..7a014eeec2d0d 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -1108,14 +1108,11 @@ static struct dma_chan *rspi_request_dma_chan(struct device *dev,
+ }
+
+ memset(&cfg, 0, sizeof(cfg));
++ cfg.dst_addr = port_addr + RSPI_SPDR;
++ cfg.src_addr = port_addr + RSPI_SPDR;
++ cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
++ cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ cfg.direction = dir;
+- if (dir == DMA_MEM_TO_DEV) {
+- cfg.dst_addr = port_addr;
+- cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+- } else {
+- cfg.src_addr = port_addr;
+- cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+- }
+
+ ret = dmaengine_slave_config(chan, &cfg);
+ if (ret) {
+@@ -1146,12 +1143,12 @@ static int rspi_request_dma(struct device *dev, struct spi_controller *ctlr,
+ }
+
+ ctlr->dma_tx = rspi_request_dma_chan(dev, DMA_MEM_TO_DEV, dma_tx_id,
+- res->start + RSPI_SPDR);
++ res->start);
+ if (!ctlr->dma_tx)
+ return -ENODEV;
+
+ ctlr->dma_rx = rspi_request_dma_chan(dev, DMA_DEV_TO_MEM, dma_rx_id,
+- res->start + RSPI_SPDR);
++ res->start);
+ if (!ctlr->dma_rx) {
+ dma_release_channel(ctlr->dma_tx);
+ ctlr->dma_tx = NULL;
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index ffdc55f87e821..dd38cb8ffbc20 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -308,7 +308,8 @@ static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi,
+ if (!op->data.nbytes)
+ goto wait_nobusy;
+
+- if (readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF)
++ if ((readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) ||
++ qspi->fmode == CCR_FMODE_APM)
+ goto out;
+
+ reinit_completion(&qspi->data_completion);
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index e06aafe169e0c..081da1fd3fd7e 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -448,6 +448,7 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst,
+ enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+ struct dma_async_tx_descriptor *tx;
+ int ret;
++ unsigned long time_left;
+
+ tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
+ if (!tx) {
+@@ -467,9 +468,9 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst,
+ }
+
+ dma_async_issue_pending(chan);
+- ret = wait_for_completion_timeout(&qspi->transfer_complete,
++ time_left = wait_for_completion_timeout(&qspi->transfer_complete,
+ msecs_to_jiffies(len));
+- if (ret <= 0) {
++ if (time_left == 0) {
+ dmaengine_terminate_sync(chan);
+ dev_err(qspi->dev, "DMA wait_for_completion_timeout\n");
+ return -ETIMEDOUT;
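wait_for_completion_timeout() returns an unsigned long -- 0 on timeout, otherwise the jiffies remaining -- so it can never be negative, and squeezing it into an int both misreads the contract and can truncate a large remainder into a negative value. Hence the new `unsigned long time_left` and the `== 0` test. A standalone demonstration of the truncation:

#include <stdio.h>

int main(void)
{
	/* A large "jiffies remaining" value; illustrative only. */
	unsigned long remaining = 0x80000001UL;
	int ret = (int)remaining;		/* the old code's narrowing */

	printf("old check ret <= 0: %s (spurious timeout)\n",
	       ret <= 0 ? "true" : "false");
	printf("new check remaining == 0: %s (correct)\n",
	       remaining == 0 ? "true" : "false");
	return 0;
}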
+diff --git a/drivers/staging/media/hantro/hantro_g2_hevc_dec.c b/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
+index 99d8ea7543da0..13602e051a06c 100644
+--- a/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
++++ b/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
+@@ -60,7 +60,7 @@ static void prepare_tile_info_buffer(struct hantro_ctx *ctx)
+ no_chroma = 1;
+ for (j = 0, tmp_w = 0; j < num_tile_cols - 1; j++) {
+ tmp_w += pps->column_width_minus1[j] + 1;
+- *p++ = pps->column_width_minus1[j + 1];
++ *p++ = pps->column_width_minus1[j] + 1;
+ *p++ = h;
+ if (i == 0 && h == 1 && ctb_size == 16)
+ no_chroma = 1;
+@@ -180,13 +180,8 @@ static void set_params(struct hantro_ctx *ctx)
+ hantro_reg_write(vpu, &g2_max_cu_qpd_depth, 0);
+ }
+
+- if (pps->flags & V4L2_HEVC_PPS_FLAG_PPS_SLICE_CHROMA_QP_OFFSETS_PRESENT) {
+- hantro_reg_write(vpu, &g2_cb_qp_offset, pps->pps_cb_qp_offset);
+- hantro_reg_write(vpu, &g2_cr_qp_offset, pps->pps_cr_qp_offset);
+- } else {
+- hantro_reg_write(vpu, &g2_cb_qp_offset, 0);
+- hantro_reg_write(vpu, &g2_cr_qp_offset, 0);
+- }
++ hantro_reg_write(vpu, &g2_cb_qp_offset, pps->pps_cb_qp_offset);
++ hantro_reg_write(vpu, &g2_cr_qp_offset, pps->pps_cr_qp_offset);
+
+ hantro_reg_write(vpu, &g2_filt_offset_beta, pps->pps_beta_offset_div2);
+ hantro_reg_write(vpu, &g2_filt_offset_tc, pps->pps_tc_offset_div2);
+diff --git a/drivers/staging/media/hantro/hantro_h264.c b/drivers/staging/media/hantro/hantro_h264.c
+index 0b4d2491be3b8..228629fb3cdf9 100644
+--- a/drivers/staging/media/hantro/hantro_h264.c
++++ b/drivers/staging/media/hantro/hantro_h264.c
+@@ -354,8 +354,6 @@ u16 hantro_h264_get_ref_nbr(struct hantro_ctx *ctx, unsigned int dpb_idx)
+
+ if (!(dpb->flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE))
+ return 0;
+- if (dpb->flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM)
+- return dpb->pic_num;
+ return dpb->frame_num;
+ }
+
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index e595905b3bd7a..3067d76f7638a 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -656,8 +656,12 @@ static int hantro_buf_prepare(struct vb2_buffer *vb)
+ * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
+ * it to buffer length).
+ */
+- if (V4L2_TYPE_IS_CAPTURE(vq->type))
+- vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++ if (V4L2_TYPE_IS_CAPTURE(vq->type)) {
++ if (ctx->is_encoder)
++ vb2_set_plane_payload(vb, 0, 0);
++ else
++ vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c
+index 951e19231da21..22b4bf9e9ef40 100644
+--- a/drivers/staging/media/rkvdec/rkvdec-h264.c
++++ b/drivers/staging/media/rkvdec/rkvdec-h264.c
+@@ -112,6 +112,7 @@ struct rkvdec_h264_run {
+ const struct v4l2_ctrl_h264_sps *sps;
+ const struct v4l2_ctrl_h264_pps *pps;
+ const struct v4l2_ctrl_h264_scaling_matrix *scaling_matrix;
++ int ref_buf_idx[V4L2_H264_NUM_DPB_ENTRIES];
+ };
+
+ struct rkvdec_h264_ctx {
+@@ -661,8 +662,8 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
+ WRITE_PPS(0xff, PROFILE_IDC);
+ WRITE_PPS(1, CONSTRAINT_SET3_FLAG);
+ WRITE_PPS(sps->chroma_format_idc, CHROMA_FORMAT_IDC);
+- WRITE_PPS(sps->bit_depth_luma_minus8 + 8, BIT_DEPTH_LUMA);
+- WRITE_PPS(sps->bit_depth_chroma_minus8 + 8, BIT_DEPTH_CHROMA);
++ WRITE_PPS(sps->bit_depth_luma_minus8, BIT_DEPTH_LUMA);
++ WRITE_PPS(sps->bit_depth_chroma_minus8, BIT_DEPTH_CHROMA);
+ WRITE_PPS(0, QPPRIME_Y_ZERO_TRANSFORM_BYPASS_FLAG);
+ WRITE_PPS(sps->log2_max_frame_num_minus4, LOG2_MAX_FRAME_NUM_MINUS4);
+ WRITE_PPS(sps->max_num_ref_frames, MAX_NUM_REF_FRAMES);
+@@ -725,6 +726,26 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
+ }
+ }
+
++static void lookup_ref_buf_idx(struct rkvdec_ctx *ctx,
++ struct rkvdec_h264_run *run)
++{
++ const struct v4l2_ctrl_h264_decode_params *dec_params = run->decode_params;
++ u32 i;
++
++ for (i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) {
++ struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
++ const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb;
++ struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q;
++ int buf_idx = -1;
++
++ if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)
++ buf_idx = vb2_find_timestamp(cap_q,
++ dpb[i].reference_ts, 0);
++
++ run->ref_buf_idx[i] = buf_idx;
++ }
++}
++
+ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+ struct rkvdec_h264_run *run)
+ {
+@@ -762,7 +783,7 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+
+ for (j = 0; j < RKVDEC_NUM_REFLIST; j++) {
+ for (i = 0; i < h264_ctx->reflists.num_valid; i++) {
+- u8 dpb_valid = 0;
++ bool dpb_valid = run->ref_buf_idx[i] >= 0;
+ u8 idx = 0;
+
+ switch (j) {
+@@ -779,8 +800,6 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+
+ if (idx >= ARRAY_SIZE(dec_params->dpb))
+ continue;
+- dpb_valid = !!(dpb[idx].flags &
+- V4L2_H264_DPB_ENTRY_FLAG_ACTIVE);
+
+ set_ps_field(hw_rps, DPB_INFO(i, j),
+ idx | dpb_valid << 4);
+@@ -859,13 +878,8 @@ get_ref_buf(struct rkvdec_ctx *ctx, struct rkvdec_h264_run *run,
+ unsigned int dpb_idx)
+ {
+ struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
+- const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb;
+ struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q;
+- int buf_idx = -1;
+-
+- if (dpb[dpb_idx].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)
+- buf_idx = vb2_find_timestamp(cap_q,
+- dpb[dpb_idx].reference_ts, 0);
++ int buf_idx = run->ref_buf_idx[dpb_idx];
+
+ /*
+ * If a DPB entry is unused or invalid, address of current destination
+@@ -1102,6 +1116,7 @@ static int rkvdec_h264_run(struct rkvdec_ctx *ctx)
+
+ assemble_hw_scaling_list(ctx, &run);
+ assemble_hw_pps(ctx, &run);
++ lookup_ref_buf_idx(ctx, &run);
+ assemble_hw_rps(ctx, &run);
+ config_registers(ctx, &run);
+
+diff --git a/drivers/staging/r8188eu/os_dep/ioctl_linux.c b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+index 41b457838a5ba..dac2343569e5a 100644
+--- a/drivers/staging/r8188eu/os_dep/ioctl_linux.c
++++ b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+@@ -1181,9 +1181,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
+ break;
+ }
+ sec_len = *(pos++); len -= 1;
+- if (sec_len > 0 && sec_len <= len) {
++ if (sec_len > 0 &&
++ sec_len <= len &&
++ sec_len <= 32) {
+ ssid[ssid_index].SsidLength = sec_len;
+- memcpy(ssid[ssid_index].Ssid, pos, ssid[ssid_index].SsidLength);
++ memcpy(ssid[ssid_index].Ssid, pos, sec_len);
+ ssid_index++;
+ }
+ pos += sec_len;
+@@ -1967,94 +1969,6 @@ static int rtw_wx_get_nick(struct net_device *dev,
+ return 0;
+ }
+
+-static int rtw_wx_read32(struct net_device *dev,
+- struct iw_request_info *info,
+- union iwreq_data *wrqu, char *extra)
+-{
+- struct adapter *padapter;
+- struct iw_point *p;
+- u16 len;
+- u32 addr;
+- u32 data32;
+- u32 bytes;
+- u8 *ptmp;
+- int ret;
+-
+- padapter = (struct adapter *)rtw_netdev_priv(dev);
+- p = &wrqu->data;
+- len = p->length;
+- ptmp = memdup_user(p->pointer, len);
+- if (IS_ERR(ptmp))
+- return PTR_ERR(ptmp);
+-
+- bytes = 0;
+- addr = 0;
+- sscanf(ptmp, "%d,%x", &bytes, &addr);
+-
+- switch (bytes) {
+- case 1:
+- data32 = rtw_read8(padapter, addr);
+- sprintf(extra, "0x%02X", data32);
+- break;
+- case 2:
+- data32 = rtw_read16(padapter, addr);
+- sprintf(extra, "0x%04X", data32);
+- break;
+- case 4:
+- data32 = rtw_read32(padapter, addr);
+- sprintf(extra, "0x%08X", data32);
+- break;
+- default:
+- DBG_88E(KERN_INFO "%s: usage> read [bytes],[address(hex)]\n", __func__);
+- ret = -EINVAL;
+- goto err_free_ptmp;
+- }
+- DBG_88E(KERN_INFO "%s: addr = 0x%08X data =%s\n", __func__, addr, extra);
+-
+- kfree(ptmp);
+- return 0;
+-
+-err_free_ptmp:
+- kfree(ptmp);
+- return ret;
+-}
+-
+-static int rtw_wx_write32(struct net_device *dev,
+- struct iw_request_info *info,
+- union iwreq_data *wrqu, char *extra)
+-{
+- struct adapter *padapter = (struct adapter *)rtw_netdev_priv(dev);
+-
+- u32 addr;
+- u32 data32;
+- u32 bytes;
+-
+- bytes = 0;
+- addr = 0;
+- data32 = 0;
+- sscanf(extra, "%d,%x,%x", &bytes, &addr, &data32);
+-
+- switch (bytes) {
+- case 1:
+- rtw_write8(padapter, addr, (u8)data32);
+- DBG_88E(KERN_INFO "%s: addr = 0x%08X data = 0x%02X\n", __func__, addr, (u8)data32);
+- break;
+- case 2:
+- rtw_write16(padapter, addr, (u16)data32);
+- DBG_88E(KERN_INFO "%s: addr = 0x%08X data = 0x%04X\n", __func__, addr, (u16)data32);
+- break;
+- case 4:
+- rtw_write32(padapter, addr, data32);
+- DBG_88E(KERN_INFO "%s: addr = 0x%08X data = 0x%08X\n", __func__, addr, data32);
+- break;
+- default:
+- DBG_88E(KERN_INFO "%s: usage> write [bytes],[address(hex)],[data(hex)]\n", __func__);
+- return -EINVAL;
+- }
+-
+- return 0;
+-}
+-
+ static int rtw_wx_read_rf(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+@@ -4265,8 +4179,8 @@ static const struct iw_priv_args rtw_private_args[] = {
+ };
+
+ static iw_handler rtw_private_handler[] = {
+-rtw_wx_write32, /* 0x00 */
+-rtw_wx_read32, /* 0x01 */
++ NULL, /* 0x00 */
++ NULL, /* 0x01 */
+ NULL, /* 0x02 */
+ NULL, /* 0x03 */
+ /* for MM DTV platform */
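The rtw_wx_set_scan() hunk is a bounds fix: Ssid is a fixed 32-byte array and sec_len comes straight from the request, so any length above 32 overflowed the destination; the added `sec_len <= 32` clamp rejects such requests before the memcpy (the same patch also deletes the unvalidated read32/write32 debug ioctls). The validation shape in standalone form (illustrative types):

#include <stdio.h>
#include <string.h>

#define SSID_MAX 32

struct ssid {
	unsigned char data[SSID_MAX];
	unsigned char len;
};

static int set_ssid(struct ssid *s, const unsigned char *pos,
		    size_t sec_len, size_t bytes_left)
{
	/* Check the wire-supplied length against what is actually
	 * available and against the destination's capacity. */
	if (sec_len == 0 || sec_len > bytes_left || sec_len > SSID_MAX)
		return -1;
	s->len = (unsigned char)sec_len;
	memcpy(s->data, pos, sec_len);
	return 0;
}

int main(void)
{
	struct ssid s;
	unsigned char req[64] = "request-bytes";

	printf("oversized rejected: %d, valid accepted: %d\n",
	       set_ssid(&s, req, 40, 64), set_ssid(&s, req, 8, 64));
	return 0;
}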
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 44bb380e7390c..fa866acef5bb2 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -850,7 +850,6 @@ bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
+ attrib->unmap_granularity = q->limits.discard_granularity / block_size;
+ attrib->unmap_granularity_alignment = q->limits.discard_alignment /
+ block_size;
+- attrib->unmap_zeroes_data = !!(q->limits.max_write_zeroes_sectors);
+ return true;
+ }
+ EXPORT_SYMBOL(target_configure_unmap_from_queue);
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 06a5c40865513..826b55caa17f9 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -20,6 +20,7 @@
+ #include <linux/configfs.h>
+ #include <linux/mutex.h>
+ #include <linux/workqueue.h>
++#include <linux/pagemap.h>
+ #include <net/genetlink.h>
+ #include <scsi/scsi_common.h>
+ #include <scsi/scsi_proto.h>
+@@ -1659,17 +1660,37 @@ static int tcmu_check_and_free_pending_cmd(struct tcmu_cmd *cmd)
+ static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
+ unsigned long last)
+ {
+- XA_STATE(xas, &udev->data_pages, first * udev->data_pages_per_blk);
+ struct page *page;
++ unsigned long dpi;
+ u32 pages_freed = 0;
+
+- xas_lock(&xas);
+- xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) {
+- xas_store(&xas, NULL);
++ first = first * udev->data_pages_per_blk;
++ last = (last + 1) * udev->data_pages_per_blk - 1;
++ xa_for_each_range(&udev->data_pages, dpi, page, first, last) {
++ xa_erase(&udev->data_pages, dpi);
++ /*
++ * While reaching here there may be page faults occurring on
++ * the to-be-released pages. A race condition may occur if
++ * unmap_mapping_range() is called before page faults on these
++ * pages have completed; a valid but stale map is created.
++ *
++ * If another command subsequently runs and needs to extend
++ * dbi_thresh, it may reuse the slot corresponding to the
++ * previous page in data_bitmap. Though we will allocate a new
++ * page for the slot in data_area, no page fault will happen
++ * because we have a valid map. Therefore the command's data
++ * will be lost.
++ *
++ * We lock and unlock pages that are to be released to ensure
++ * all page faults have completed. This way
++ * unmap_mapping_range() can ensure stale maps are cleanly
++ * removed.
++ */
++ lock_page(page);
++ unlock_page(page);
+ __free_page(page);
+ pages_freed++;
+ }
+- xas_unlock(&xas);
+
+ atomic_sub(pages_freed, &global_page_count);
+
+@@ -1821,6 +1842,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
+ page = xa_load(&udev->data_pages, dpi);
+ if (likely(page)) {
+ get_page(page);
++ lock_page(page);
+ mutex_unlock(&udev->cmdr_lock);
+ return page;
+ }
+@@ -1862,6 +1884,7 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ struct page *page;
+ unsigned long offset;
+ void *addr;
++ vm_fault_t ret = 0;
+
+ int mi = tcmu_find_mem_index(vmf->vma);
+ if (mi < 0)
+@@ -1886,10 +1909,11 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ page = tcmu_try_get_data_page(udev, dpi);
+ if (!page)
+ return VM_FAULT_SIGBUS;
++ ret = VM_FAULT_LOCKED;
+ }
+
+ vmf->page = page;
+- return 0;
++ return ret;
+ }
+
+ static const struct vm_operations_struct tcmu_vm_ops = {
+@@ -3152,12 +3176,22 @@ static void find_free_blocks(void)
+ udev->dbi_max = block;
+ }
+
++ /*
++ * Release the block pages.
++ *
++ * Also note that since tcmu_vma_fault() gets an extra page
++ * refcount, tcmu_blocks_release() won't free pages if pages
++ * are mapped. This means it is safe to call
++ * tcmu_blocks_release() before unmap_mapping_range() which
++ * drops the refcount of any pages it unmaps and thus releases
++ * them.
++ */
++ pages_freed = tcmu_blocks_release(udev, start, end - 1);
++
+ /* Here will truncate the data area from off */
+ off = udev->data_off + (loff_t)start * udev->data_blk_size;
+ unmap_mapping_range(udev->inode->i_mapping, off, 0, 1);
+
+- /* Release the block pages */
+- pages_freed = tcmu_blocks_release(udev, start, end - 1);
+ mutex_unlock(&udev->cmdr_lock);
+
+ total_pages_freed += pages_freed;
+diff --git a/drivers/thermal/broadcom/bcm2711_thermal.c b/drivers/thermal/broadcom/bcm2711_thermal.c
+index 1ec57d9ecf539..e9bef5c3414b6 100644
+--- a/drivers/thermal/broadcom/bcm2711_thermal.c
++++ b/drivers/thermal/broadcom/bcm2711_thermal.c
+@@ -38,7 +38,6 @@ static int bcm2711_get_temp(void *data, int *temp)
+ int offset = thermal_zone_get_offset(priv->thermal);
+ u32 val;
+ int ret;
+- long t;
+
+ ret = regmap_read(priv->regmap, AVS_RO_TEMP_STATUS, &val);
+ if (ret)
+@@ -50,9 +49,7 @@ static int bcm2711_get_temp(void *data, int *temp)
+ val &= AVS_RO_TEMP_STATUS_DATA_MSK;
+
+ /* Convert a HW code to a temperature reading (millidegree celsius) */
+- t = slope * val + offset;
+-
+- *temp = t < 0 ? 0 : t;
++ *temp = slope * val + offset;
+
+ return 0;
+ }
+diff --git a/drivers/thermal/broadcom/sr-thermal.c b/drivers/thermal/broadcom/sr-thermal.c
+index 475ce29007713..85ab9edd580cc 100644
+--- a/drivers/thermal/broadcom/sr-thermal.c
++++ b/drivers/thermal/broadcom/sr-thermal.c
+@@ -60,6 +60,9 @@ static int sr_thermal_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!res)
++ return -ENOENT;
++
+ sr_thermal->regs = (void __iomem *)devm_memremap(&pdev->dev, res->start,
+ resource_size(res),
+ MEMREMAP_WB);
+diff --git a/drivers/thermal/devfreq_cooling.c b/drivers/thermal/devfreq_cooling.c
+index 4310cb342a9fb..d38a80adec733 100644
+--- a/drivers/thermal/devfreq_cooling.c
++++ b/drivers/thermal/devfreq_cooling.c
+@@ -358,21 +358,28 @@ of_devfreq_cooling_register_power(struct device_node *np, struct devfreq *df,
+ struct thermal_cooling_device *cdev;
+ struct device *dev = df->dev.parent;
+ struct devfreq_cooling_device *dfc;
++ struct thermal_cooling_device_ops *ops;
+ char *name;
+ int err, num_opps;
+
+- dfc = kzalloc(sizeof(*dfc), GFP_KERNEL);
+- if (!dfc)
++ ops = kmemdup(&devfreq_cooling_ops, sizeof(*ops), GFP_KERNEL);
++ if (!ops)
+ return ERR_PTR(-ENOMEM);
+
++ dfc = kzalloc(sizeof(*dfc), GFP_KERNEL);
++ if (!dfc) {
++ err = -ENOMEM;
++ goto free_ops;
++ }
++
+ dfc->devfreq = df;
+
+ dfc->em_pd = em_pd_get(dev);
+ if (dfc->em_pd) {
+- devfreq_cooling_ops.get_requested_power =
++ ops->get_requested_power =
+ devfreq_cooling_get_requested_power;
+- devfreq_cooling_ops.state2power = devfreq_cooling_state2power;
+- devfreq_cooling_ops.power2state = devfreq_cooling_power2state;
++ ops->state2power = devfreq_cooling_state2power;
++ ops->power2state = devfreq_cooling_power2state;
+
+ dfc->power_ops = dfc_power;
+
+@@ -407,8 +414,7 @@ of_devfreq_cooling_register_power(struct device_node *np, struct devfreq *df,
+ if (!name)
+ goto remove_qos_req;
+
+- cdev = thermal_of_cooling_device_register(np, name, dfc,
+- &devfreq_cooling_ops);
++ cdev = thermal_of_cooling_device_register(np, name, dfc, ops);
+ kfree(name);
+
+ if (IS_ERR(cdev)) {
+@@ -429,6 +435,8 @@ free_table:
+ kfree(dfc->freq_table);
+ free_dfc:
+ kfree(dfc);
++free_ops:
++ kfree(ops);
+
+ return ERR_PTR(err);
+ }
+@@ -510,11 +518,13 @@ EXPORT_SYMBOL_GPL(devfreq_cooling_em_register);
+ void devfreq_cooling_unregister(struct thermal_cooling_device *cdev)
+ {
+ struct devfreq_cooling_device *dfc;
++ const struct thermal_cooling_device_ops *ops;
+ struct device *dev;
+
+ if (IS_ERR_OR_NULL(cdev))
+ return;
+
++ ops = cdev->ops;
+ dfc = cdev->devdata;
+ dev = dfc->devfreq->dev.parent;
+
+@@ -525,5 +535,6 @@ void devfreq_cooling_unregister(struct thermal_cooling_device *cdev)
+
+ kfree(dfc->freq_table);
+ kfree(dfc);
++ kfree(ops);
+ }
+ EXPORT_SYMBOL_GPL(devfreq_cooling_unregister);
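
[Editor's note on the devfreq_cooling hunk above: the fix replaces in-place mutation of the file-scope devfreq_cooling_ops with a per-instance kmemdup() copy, so two coolers with different energy-model support no longer stomp on shared function pointers, and unregister frees the copy. A minimal userspace sketch of the same duplicate-the-shared-template pattern; struct ops, default_ops and ops_dup() are illustrative names, not kernel symbols:

#include <stdlib.h>
#include <string.h>

struct ops {
    int (*get_power)(void);      /* optional hook, filled per instance */
};

static const struct ops default_ops = { 0 };  /* shared template, never written */

/* Duplicate the template so per-device tweaks cannot leak between devices. */
static struct ops *ops_dup(int has_power, int (*get_power)(void))
{
    struct ops *o = malloc(sizeof(*o));

    if (!o)
        return NULL;
    memcpy(o, &default_ops, sizeof(*o));
    if (has_power)
        o->get_power = get_power;  /* only this instance sees the override */
    return o;                      /* caller frees it in its unregister path */
}

The unregister side mirrors the patch: stash the ops pointer from the device before freeing the device, then free the ops copy last.]
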
+diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c
+index 8d76dbfde6a9f..331a241eb0ef3 100644
+--- a/drivers/thermal/imx_sc_thermal.c
++++ b/drivers/thermal/imx_sc_thermal.c
+@@ -94,8 +94,8 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ sensor = devm_kzalloc(&pdev->dev, sizeof(*sensor), GFP_KERNEL);
+ if (!sensor) {
+ of_node_put(child);
+- of_node_put(sensor_np);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto put_node;
+ }
+
+ ret = thermal_zone_of_get_sensor_id(child,
+@@ -124,7 +124,9 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ dev_warn(&pdev->dev, "failed to add hwmon sysfs attributes\n");
+ }
+
++put_node:
+ of_node_put(sensor_np);
++ of_node_put(np);
+
+ return ret;
+ }
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 82654dc8382b8..cdc0552e8c42e 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -947,6 +947,7 @@ __thermal_cooling_device_register(struct device_node *np,
+ return cdev;
+
+ out_kfree_type:
++ thermal_cooling_device_destroy_sysfs(cdev);
+ kfree(cdev->type);
+ put_device(&cdev->device);
+ cdev = NULL;
+diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c
+index 5ed19a9857adf..10c13b93ed526 100644
+--- a/drivers/tty/goldfish.c
++++ b/drivers/tty/goldfish.c
+@@ -61,13 +61,13 @@ static void do_rw_io(struct goldfish_tty *qtty,
+ spin_lock_irqsave(&qtty->lock, irq_flags);
+ gf_write_ptr((void *)address, base + GOLDFISH_TTY_REG_DATA_PTR,
+ base + GOLDFISH_TTY_REG_DATA_PTR_HIGH);
+- __raw_writel(count, base + GOLDFISH_TTY_REG_DATA_LEN);
++ gf_iowrite32(count, base + GOLDFISH_TTY_REG_DATA_LEN);
+
+ if (is_write)
+- __raw_writel(GOLDFISH_TTY_CMD_WRITE_BUFFER,
++ gf_iowrite32(GOLDFISH_TTY_CMD_WRITE_BUFFER,
+ base + GOLDFISH_TTY_REG_CMD);
+ else
+- __raw_writel(GOLDFISH_TTY_CMD_READ_BUFFER,
++ gf_iowrite32(GOLDFISH_TTY_CMD_READ_BUFFER,
+ base + GOLDFISH_TTY_REG_CMD);
+
+ spin_unlock_irqrestore(&qtty->lock, irq_flags);
+@@ -142,7 +142,7 @@ static irqreturn_t goldfish_tty_interrupt(int irq, void *dev_id)
+ unsigned char *buf;
+ u32 count;
+
+- count = __raw_readl(base + GOLDFISH_TTY_REG_BYTES_READY);
++ count = gf_ioread32(base + GOLDFISH_TTY_REG_BYTES_READY);
+ if (count == 0)
+ return IRQ_NONE;
+
+@@ -159,7 +159,7 @@ static int goldfish_tty_activate(struct tty_port *port, struct tty_struct *tty)
+ {
+ struct goldfish_tty *qtty = container_of(port, struct goldfish_tty,
+ port);
+- __raw_writel(GOLDFISH_TTY_CMD_INT_ENABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
++ gf_iowrite32(GOLDFISH_TTY_CMD_INT_ENABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
+ return 0;
+ }
+
+@@ -167,7 +167,7 @@ static void goldfish_tty_shutdown(struct tty_port *port)
+ {
+ struct goldfish_tty *qtty = container_of(port, struct goldfish_tty,
+ port);
+- __raw_writel(GOLDFISH_TTY_CMD_INT_DISABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
++ gf_iowrite32(GOLDFISH_TTY_CMD_INT_DISABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
+ }
+
+ static int goldfish_tty_open(struct tty_struct *tty, struct file *filp)
+@@ -202,7 +202,7 @@ static unsigned int goldfish_tty_chars_in_buffer(struct tty_struct *tty)
+ {
+ struct goldfish_tty *qtty = &goldfish_ttys[tty->index];
+ void __iomem *base = qtty->base;
+- return __raw_readl(base + GOLDFISH_TTY_REG_BYTES_READY);
++ return gf_ioread32(base + GOLDFISH_TTY_REG_BYTES_READY);
+ }
+
+ static void goldfish_tty_console_write(struct console *co, const char *b,
+@@ -355,7 +355,7 @@ static int goldfish_tty_probe(struct platform_device *pdev)
+ * on Ranchu emulator (qemu2) returns 1 here and
+ * driver will use physical addresses.
+ */
+- qtty->version = __raw_readl(base + GOLDFISH_TTY_REG_VERSION);
++ qtty->version = gf_ioread32(base + GOLDFISH_TTY_REG_VERSION);
+
+ /*
+ * Goldfish TTY device on Ranchu emulator (qemu2)
+@@ -374,7 +374,7 @@ static int goldfish_tty_probe(struct platform_device *pdev)
+ }
+ }
+
+- __raw_writel(GOLDFISH_TTY_CMD_INT_DISABLE, base + GOLDFISH_TTY_REG_CMD);
++ gf_iowrite32(GOLDFISH_TTY_CMD_INT_DISABLE, base + GOLDFISH_TTY_REG_CMD);
+
+ ret = request_irq(irq, goldfish_tty_interrupt, IRQF_SHARED,
+ "goldfish_tty", qtty);
+@@ -436,7 +436,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
+ #ifdef CONFIG_GOLDFISH_TTY_EARLY_CONSOLE
+ static void gf_early_console_putchar(struct uart_port *port, int ch)
+ {
+- __raw_writel(ch, port->membase);
++ gf_iowrite32(ch, port->membase);
+ }
+
+ static void gf_early_write(struct console *con, const char *s, unsigned int n)
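
[Editor's note on the goldfish hunk above: __raw_writel()/__raw_readl() bypass byte swapping, which broke the driver on big-endian configurations; the patch funnels every register access through gf_iowrite32()/gf_ioread32() so the endianness policy lives in one per-arch helper. A simplified stand-in for that single-choke-point idea; the real helpers live in the per-arch goldfish headers, and the CONFIG switch below is hypothetical:

#include <stdint.h>

static inline uint32_t swab32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0xff00) |
           ((v << 8) & 0xff0000) | (v << 24);
}

/* One helper decides the byte order; every register access goes through it. */
static inline void gf_iowrite32_sketch(uint32_t val, volatile uint32_t *addr)
{
#ifdef SKETCH_BIG_ENDIAN_GOLDFISH   /* hypothetical switch for this sketch */
    *addr = swab32(val);
#else
    *addr = val;
#endif
}
]
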
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index fd8b86dde5255..ea5381dedb07c 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -444,6 +444,25 @@ static u8 gsm_encode_modem(const struct gsm_dlci *dlci)
+ return modembits;
+ }
+
++static void gsm_hex_dump_bytes(const char *fname, const u8 *data,
++ unsigned long len)
++{
++ char *prefix;
++
++ if (!fname) {
++ print_hex_dump(KERN_INFO, "", DUMP_PREFIX_NONE, 16, 1, data, len,
++ true);
++ return;
++ }
++
++ prefix = kasprintf(GFP_KERNEL, "%s: ", fname);
++ if (!prefix)
++ return;
++ print_hex_dump(KERN_INFO, prefix, DUMP_PREFIX_OFFSET, 16, 1, data, len,
++ true);
++ kfree(prefix);
++}
++
+ /**
+ * gsm_print_packet - display a frame for debug
+ * @hdr: header to print before decode
+@@ -508,7 +527,7 @@ static void gsm_print_packet(const char *hdr, int addr, int cr,
+ else
+ pr_cont("(F)");
+
+- print_hex_dump_bytes("", DUMP_PREFIX_NONE, data, dlen);
++ gsm_hex_dump_bytes(NULL, data, dlen);
+ }
+
+
+@@ -698,9 +717,7 @@ static void gsm_data_kick(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ }
+
+ if (debug & 4)
+- print_hex_dump_bytes("gsm_data_kick: ",
+- DUMP_PREFIX_OFFSET,
+- gsm->txframe, len);
++ gsm_hex_dump_bytes(__func__, gsm->txframe, len);
+ if (gsmld_output(gsm, gsm->txframe, len) <= 0)
+ break;
+ /* FIXME: Can eliminate one SOF in many more cases */
+@@ -2448,8 +2465,7 @@ static int gsmld_output(struct gsm_mux *gsm, u8 *data, int len)
+ return -ENOSPC;
+ }
+ if (debug & 4)
+- print_hex_dump_bytes("gsmld_output: ", DUMP_PREFIX_OFFSET,
+- data, len);
++ gsm_hex_dump_bytes(__func__, data, len);
+ return gsm->tty->ops->write(gsm->tty, data, len);
+ }
+
+@@ -2525,8 +2541,7 @@ static void gsmld_receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ char flags = TTY_NORMAL;
+
+ if (debug & 4)
+- print_hex_dump_bytes("gsmld_receive: ", DUMP_PREFIX_OFFSET,
+- cp, count);
++ gsm_hex_dump_bytes(__func__, cp, count);
+
+ for (; count; count--, cp++) {
+ if (fp)
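
[Editor's note on the n_gsm hunk above: gsm_hex_dump_bytes() exists because print_hex_dump_bytes() cannot take a dynamically built prefix; the helper kasprintf()s "<func>: " once, dumps, and frees, bailing quietly on allocation failure. A userspace sketch of the same build-prefix-then-dump shape, with the hex formatting trimmed down:

#define _GNU_SOURCE             /* for asprintf() */
#include <stdio.h>
#include <stdlib.h>

static void hex_dump_bytes(const char *fname, const unsigned char *data,
                           unsigned long len)
{
    char *prefix = NULL;

    if (fname && asprintf(&prefix, "%s: ", fname) < 0)
        return;                             /* mirror the kernel: bail on OOM */
    for (unsigned long i = 0; i < len; i++) {
        if (i % 16 == 0)
            printf("%s", prefix ? prefix : "");   /* prefix each line */
        printf("%02x%s", data[i],
               (i % 16 == 15 || i == len - 1) ? "\n" : " ");
    }
    free(prefix);
}
]
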
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index f0351e6f0ef6d..1e65933f6ccec 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -624,22 +624,6 @@ static int push_rx(struct eg20t_port *priv, const unsigned char *buf,
+ return 0;
+ }
+
+-static int pop_tx_x(struct eg20t_port *priv, unsigned char *buf)
+-{
+- int ret = 0;
+- struct uart_port *port = &priv->port;
+-
+- if (port->x_char) {
+- dev_dbg(priv->port.dev, "%s:X character send %02x (%lu)\n",
+- __func__, port->x_char, jiffies);
+- buf[0] = port->x_char;
+- port->x_char = 0;
+- ret = 1;
+- }
+-
+- return ret;
+-}
+-
+ static int dma_push_rx(struct eg20t_port *priv, int size)
+ {
+ int room;
+@@ -889,9 +873,10 @@ static unsigned int handle_tx(struct eg20t_port *priv)
+
+ fifo_size = max(priv->fifo_size, 1);
+ tx_empty = 1;
+- if (pop_tx_x(priv, xmit->buf)) {
+- pch_uart_hal_write(priv, xmit->buf, 1);
++ if (port->x_char) {
++ pch_uart_hal_write(priv, &port->x_char, 1);
+ port->icount.tx++;
++ port->x_char = 0;
+ tx_empty = 0;
+ fifo_size--;
+ }
+@@ -946,9 +931,11 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv)
+ }
+
+ fifo_size = max(priv->fifo_size, 1);
+- if (pop_tx_x(priv, xmit->buf)) {
+- pch_uart_hal_write(priv, xmit->buf, 1);
++
++ if (port->x_char) {
++ pch_uart_hal_write(priv, &port->x_char, 1);
+ port->icount.tx++;
++ port->x_char = 0;
+ fifo_size--;
+ }
+
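
[Editor's note on the pch_uart hunk above: the removed pop_tx_x() stuffed the XON/XOFF byte into xmit->buf[0], clobbering whatever queued byte sat at the head of the circular TX buffer; the fix writes port->x_char straight to the FIFO and only then clears it. A sketch of that direct-send shape with illustrative names:

/* Send the high-priority flow-control byte directly to the hardware,
 * never through the circular TX buffer, so queued data is not overwritten. */
struct fake_port { unsigned char x_char; unsigned long tx_count; };

static void hw_write(const unsigned char *buf, int len) { (void)buf; (void)len; }

static int handle_x_char(struct fake_port *port)
{
    if (!port->x_char)
        return 0;
    hw_write(&port->x_char, 1);   /* straight to the FIFO */
    port->tx_count++;
    port->x_char = 0;             /* consumed exactly once */
    return 1;                     /* one FIFO slot used */
}
]
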
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 646510476c304..bfa431a8e6902 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -175,7 +175,8 @@ static struct tty_buffer *tty_buffer_alloc(struct tty_port *port, size_t size)
+ */
+ if (atomic_read(&port->buf.mem_used) > port->buf.mem_limit)
+ return NULL;
+- p = kmalloc(sizeof(struct tty_buffer) + 2 * size, GFP_ATOMIC);
++ p = kmalloc(sizeof(struct tty_buffer) + 2 * size,
++ GFP_ATOMIC | __GFP_NOWARN);
+ if (p == NULL)
+ return NULL;
+
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index d9712c2602afe..06eea8848ccc2 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2816,6 +2816,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ {
+ int retval;
+ struct usb_device *rhdev;
++ struct usb_hcd *shared_hcd;
+
+ if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
+ hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+@@ -2976,13 +2977,26 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ goto err_hcd_driver_start;
+ }
+
++ /* starting here, usbcore will pay attention to the shared HCD roothub */
++ shared_hcd = hcd->shared_hcd;
++ if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
++ retval = register_root_hub(shared_hcd);
++ if (retval != 0)
++ goto err_register_root_hub;
++
++ if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
++ usb_hcd_poll_rh_status(shared_hcd);
++ }
++
+ /* starting here, usbcore will pay attention to this root hub */
+- retval = register_root_hub(hcd);
+- if (retval != 0)
+- goto err_register_root_hub;
++ if (!HCD_DEFER_RH_REGISTER(hcd)) {
++ retval = register_root_hub(hcd);
++ if (retval != 0)
++ goto err_register_root_hub;
+
+- if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+- usb_hcd_poll_rh_status(hcd);
++ if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
++ usb_hcd_poll_rh_status(hcd);
++ }
+
+ return retval;
+
+@@ -3020,6 +3034,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+ struct usb_device *rhdev = hcd->self.root_hub;
++ bool rh_registered;
+
+ dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+
+@@ -3030,6 +3045,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+
+ dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
+ spin_lock_irq (&hcd_root_hub_lock);
++ rh_registered = hcd->rh_registered;
+ hcd->rh_registered = 0;
+ spin_unlock_irq (&hcd_root_hub_lock);
+
+@@ -3039,7 +3055,8 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ cancel_work_sync(&hcd->died_work);
+
+ mutex_lock(&usb_bus_idr_lock);
+- usb_disconnect(&rhdev); /* Sets rhdev to NULL */
++ if (rh_registered)
++ usb_disconnect(&rhdev); /* Sets rhdev to NULL */
+ mutex_unlock(&usb_bus_idr_lock);
+
+ /*
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 97b44a68668a5..f99a65a64588f 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -510,6 +510,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* DJI CineSSD */
+ { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+
++ /* DELL USB GEN2 */
++ { USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME },
++
+ /* VCOM device */
+ { USB_DEVICE(0x4296, 0x7570), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 836049887ac83..78ec6af79c7f1 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3335,14 +3335,14 @@ static bool dwc3_gadget_endpoint_trbs_complete(struct dwc3_ep *dep,
+ struct dwc3 *dwc = dep->dwc;
+ bool no_started_trb = true;
+
+- if (!dep->endpoint.desc)
+- return no_started_trb;
+-
+ dwc3_gadget_ep_cleanup_completed_requests(dep, event, status);
+
+ if (dep->flags & DWC3_EP_END_TRANSFER_PENDING)
+ goto out;
+
++ if (!dep->endpoint.desc)
++ return no_started_trb;
++
+ if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+ list_empty(&dep->started_list) &&
+ (list_empty(&dep->pending_list) || status == -EXDEV))
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index d7e0e6ebf0800..d57c5ff5ae1f4 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed
+
+ #define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639
+@@ -268,6 +269,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
++ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
+ xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+
+diff --git a/drivers/usb/isp1760/isp1760-core.c b/drivers/usb/isp1760/isp1760-core.c
+index d1d9a7d5da175..af88f4fe00d27 100644
+--- a/drivers/usb/isp1760/isp1760-core.c
++++ b/drivers/usb/isp1760/isp1760-core.c
+@@ -251,6 +251,8 @@ static const struct reg_field isp1760_hc_reg_fields[] = {
+ [HW_DM_PULLDOWN] = REG_FIELD(ISP176x_HC_OTG_CTRL, 2, 2),
+ [HW_DP_PULLDOWN] = REG_FIELD(ISP176x_HC_OTG_CTRL, 1, 1),
+ [HW_DP_PULLUP] = REG_FIELD(ISP176x_HC_OTG_CTRL, 0, 0),
++ /* Make sure the array is sized properly during compilation */
++ [HC_FIELD_MAX] = {},
+ };
+
+ static const struct reg_field isp1763_hc_reg_fields[] = {
+@@ -321,6 +323,8 @@ static const struct reg_field isp1763_hc_reg_fields[] = {
+ [HW_DM_PULLDOWN_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 2, 2),
+ [HW_DP_PULLDOWN_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 1, 1),
+ [HW_DP_PULLUP_CLEAR] = REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 0, 0),
++ /* Make sure the array is sized properly during compilation */
++ [HC_FIELD_MAX] = {},
+ };
+
+ static const struct regmap_range isp1763_hc_volatile_ranges[] = {
+@@ -405,6 +409,8 @@ static const struct reg_field isp1761_dc_reg_fields[] = {
+ [DC_CHIP_ID_HIGH] = REG_FIELD(ISP176x_DC_CHIPID, 16, 31),
+ [DC_CHIP_ID_LOW] = REG_FIELD(ISP176x_DC_CHIPID, 0, 15),
+ [DC_SCRATCH] = REG_FIELD(ISP176x_DC_SCRATCH, 0, 15),
++ /* Make sure the array is sized properly during compilation */
++ [DC_FIELD_MAX] = {},
+ };
+
+ static const struct regmap_range isp1763_dc_volatile_ranges[] = {
+@@ -458,6 +464,8 @@ static const struct reg_field isp1763_dc_reg_fields[] = {
+ [DC_CHIP_ID_HIGH] = REG_FIELD(ISP1763_DC_CHIPID_HIGH, 0, 15),
+ [DC_CHIP_ID_LOW] = REG_FIELD(ISP1763_DC_CHIPID_LOW, 0, 15),
+ [DC_SCRATCH] = REG_FIELD(ISP1763_DC_SCRATCH, 0, 15),
++ /* Make sure the array is sized properly during compilation */
++ [DC_FIELD_MAX] = {},
+ };
+
+ static const struct regmap_config isp1763_dc_regmap_conf = {
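
[Editor's note on the isp1760 hunk above: each added "[HC_FIELD_MAX] = {}" / "[DC_FIELD_MAX] = {}" entry is a designated initializer at the highest enum index, which forces the array to be allocated large enough for every field, so a lookup with any valid enum value stays in bounds even when an entry is missing. A standalone C11 sketch of the trick, with a compile-time check added on top:

#include <assert.h>

enum field { F_A, F_B, F_C, FIELD_MAX };

struct reg_field { int reg, lsb, msb; };

static const struct reg_field fields[] = {
    [F_A] = { 0x00, 0, 7 },
    [F_B] = { 0x04, 0, 15 },
    /* F_C deliberately omitted: it stays zero-filled but in bounds */
    [FIELD_MAX] = {},   /* sentinel: array length is now FIELD_MAX + 1 */
};

/* Compile-time proof that fields[] covers every enum value. */
static_assert(sizeof(fields) / sizeof(fields[0]) == FIELD_MAX + 1,
              "fields[] must cover the whole enum");
]
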
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 152ad882657d7..e60425bbf5376 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1137,6 +1137,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0xff, 0x30) }, /* EM160R-GL */
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0, 0) },
++ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0700, 0xff), /* BG95 */
++ .driver_info = RSVD(3) | ZLP },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 1d878d05a6584..3506c47e1eef0 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -421,6 +421,9 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ bcdUSB = le16_to_cpu(desc->bcdUSB);
+
+ switch (bcdUSB) {
++ case 0x101:
++ /* USB 1.0.1? Let's assume they meant 1.1... */
++ fallthrough;
+ case 0x110:
+ switch (bcdDevice) {
+ case 0x300:
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index ddbe142af09ae..881f9864c437c 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -353,11 +353,14 @@ static void vdpasim_set_vq_ready(struct vdpa_device *vdpa, u16 idx, bool ready)
+ {
+ struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+ struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
++ bool old_ready;
+
+ spin_lock(&vdpasim->lock);
++ old_ready = vq->ready;
+ vq->ready = ready;
+- if (vq->ready)
++ if (vq->ready && !old_ready) {
+ vdpasim_queue_ready(vdpasim, idx);
++ }
+ spin_unlock(&vdpasim->lock);
+ }
+
+diff --git a/drivers/video/console/sticon.c b/drivers/video/console/sticon.c
+index 40496e9e9b438..f304163e87e99 100644
+--- a/drivers/video/console/sticon.c
++++ b/drivers/video/console/sticon.c
+@@ -46,6 +46,7 @@
+ #include <linux/slab.h>
+ #include <linux/font.h>
+ #include <linux/crc32.h>
++#include <linux/fb.h>
+
+ #include <asm/io.h>
+
+@@ -392,7 +393,9 @@ static int __init sticonsole_init(void)
+ for (i = 0; i < MAX_NR_CONSOLES; i++)
+ font_data[i] = STI_DEF_FONT;
+
+- pr_info("sticon: Initializing STI text console.\n");
++ pr_info("sticon: Initializing STI text console on %s at [%s]\n",
++ sticon_sti->sti_data->inq_outptr.dev_name,
++ sticon_sti->pa_path);
+ console_lock();
+ err = do_take_over_console(&sti_con, 0, MAX_NR_CONSOLES - 1,
+ PAGE0->mem_cons.cl_class != CL_DUPLEX);
+diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c
+index f869b723494f1..6a947ff96d6eb 100644
+--- a/drivers/video/console/sticore.c
++++ b/drivers/video/console/sticore.c
+@@ -30,10 +30,11 @@
+ #include <asm/pdc.h>
+ #include <asm/cacheflush.h>
+ #include <asm/grfioctl.h>
++#include <asm/fb.h>
+
+ #include "../fbdev/sticore.h"
+
+-#define STI_DRIVERVERSION "Version 0.9b"
++#define STI_DRIVERVERSION "Version 0.9c"
+
+ static struct sti_struct *default_sti __read_mostly;
+
+@@ -502,7 +503,7 @@ sti_select_fbfont(struct sti_cooked_rom *cooked_rom, const char *fbfont_name)
+ if (!fbfont)
+ return NULL;
+
+- pr_info("STI selected %ux%u framebuffer font %s for sticon\n",
++ pr_info(" using %ux%u framebuffer font %s\n",
+ fbfont->width, fbfont->height, fbfont->name);
+
+ bpc = ((fbfont->width+7)/8) * fbfont->height;
+@@ -946,6 +947,7 @@ out_err:
+
+ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path)
+ {
++ pr_info(" located at [%s]\n", sti->pa_path);
+ if (strcmp (path, default_sti_path) == 0)
+ default_sti = sti;
+ }
+@@ -957,7 +959,6 @@ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path)
+ */
+ static int __init sticore_pa_init(struct parisc_device *dev)
+ {
+- char pa_path[21];
+ struct sti_struct *sti = NULL;
+ int hpa = dev->hpa.start;
+
+@@ -970,8 +971,8 @@ static int __init sticore_pa_init(struct parisc_device *dev)
+ if (!sti)
+ return 1;
+
+- print_pa_hwpath(dev, pa_path);
+- sticore_check_for_default_sti(sti, pa_path);
++ print_pa_hwpath(dev, sti->pa_path);
++ sticore_check_for_default_sti(sti, sti->pa_path);
+ return 0;
+ }
+
+@@ -1007,9 +1008,8 @@ static int sticore_pci_init(struct pci_dev *pd, const struct pci_device_id *ent)
+
+ sti = sti_try_rom_generic(rom_base, fb_base, pd);
+ if (sti) {
+- char pa_path[30];
+- print_pci_hwpath(pd, pa_path);
+- sticore_check_for_default_sti(sti, pa_path);
++ print_pci_hwpath(pd, sti->pa_path);
++ sticore_check_for_default_sti(sti, sti->pa_path);
+ }
+
+ if (!sti) {
+@@ -1127,6 +1127,22 @@ int sti_call(const struct sti_struct *sti, unsigned long func,
+ return ret;
+ }
+
++/* check if given fb_info is the primary device */
++int fb_is_primary_device(struct fb_info *info)
++{
++ struct sti_struct *sti;
++
++ sti = sti_get_rom(0);
++
++ /* if no built-in graphics card found, allow any fb driver as default */
++ if (!sti)
++ return true;
++
++ /* return true if it's the default built-in framebuffer driver */
++ return (sti->info == info);
++}
++EXPORT_SYMBOL(fb_is_primary_device);
++
+ MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer");
+ MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c
+index 9ec969e136bfd..8080116aea844 100644
+--- a/drivers/video/fbdev/amba-clcd.c
++++ b/drivers/video/fbdev/amba-clcd.c
+@@ -758,12 +758,15 @@ static int clcdfb_of_vram_setup(struct clcd_fb *fb)
+ return -ENODEV;
+
+ fb->fb.screen_base = of_iomap(memory, 0);
+- if (!fb->fb.screen_base)
++ if (!fb->fb.screen_base) {
++ of_node_put(memory);
+ return -ENOMEM;
++ }
+
+ fb->fb.fix.smem_start = of_translate_address(memory,
+ of_get_address(memory, 0, &size, NULL));
+ fb->fb.fix.smem_len = size;
++ of_node_put(memory);
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 2fc1b80a26ad9..9a8ae6fa6ecbb 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -3265,6 +3265,9 @@ static void fbcon_register_existing_fbs(struct work_struct *work)
+
+ console_lock();
+
++ deferred_takeover = false;
++ logo_shown = FBCON_LOGO_DONTSHOW;
++
+ for_each_registered_fb(i)
+ fbcon_fb_registered(registered_fb[i]);
+
+@@ -3282,8 +3285,6 @@ static int fbcon_output_notifier(struct notifier_block *nb,
+ pr_info("fbcon: Taking over console\n");
+
+ dummycon_unregister_output_notifier(&fbcon_output_nb);
+- deferred_takeover = false;
+- logo_shown = FBCON_LOGO_DONTSHOW;
+
+ /* We may get called in atomic context */
+ schedule_work(&fbcon_deferred_takeover_work);
+diff --git a/drivers/video/fbdev/sticore.h b/drivers/video/fbdev/sticore.h
+index c338f7848ae2b..0ebdd28a0b813 100644
+--- a/drivers/video/fbdev/sticore.h
++++ b/drivers/video/fbdev/sticore.h
+@@ -370,6 +370,9 @@ struct sti_struct {
+
+ /* pointer to all internal data */
+ struct sti_all_data *sti_data;
++
++ /* pa_path of this device */
++ char pa_path[24];
+ };
+
+
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index 265865610edc6..002f265d8db58 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1317,11 +1317,11 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+ goto out_err3;
+ }
+
++ /* save for primary gfx device detection & unregister_framebuffer() */
++ sti->info = info;
+ if (register_framebuffer(&fb->info) < 0)
+ goto out_err4;
+
+- sti->info = info; /* save for unregister_framebuffer() */
+-
+ fb_info(&fb->info, "%s %dx%d-%d frame buffer device, %s, id: %04x, mmio: 0x%04lx\n",
+ fix->id,
+ var->xres,
+diff --git a/drivers/video/fbdev/vesafb.c b/drivers/video/fbdev/vesafb.c
+index e25e8de5ff672..929d4775cb4bc 100644
+--- a/drivers/video/fbdev/vesafb.c
++++ b/drivers/video/fbdev/vesafb.c
+@@ -490,11 +490,12 @@ static int vesafb_remove(struct platform_device *pdev)
+ {
+ struct fb_info *info = platform_get_drvdata(pdev);
+
+- /* vesafb_destroy takes care of info cleanup */
+- unregister_framebuffer(info);
+ if (((struct vesafb_par *)(info->par))->region)
+ release_region(0x3c0, 32);
+
++ /* vesafb_destroy takes care of info cleanup */
++ unregister_framebuffer(info);
++
+ return 0;
+ }
+
+diff --git a/fs/afs/misc.c b/fs/afs/misc.c
+index 1d1a8debe4723..933e67fcdab1a 100644
+--- a/fs/afs/misc.c
++++ b/fs/afs/misc.c
+@@ -163,8 +163,11 @@ void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code)
+ return;
+
+ case -ECONNABORTED:
++ error = afs_abort_to_error(abort_code);
++ fallthrough;
++ case -ENETRESET: /* Responded, but we seem to have changed address */
+ e->responded = true;
+- e->error = afs_abort_to_error(abort_code);
++ e->error = error;
+ return;
+ }
+ }
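
[Editor's note on the afs_prioritise_error hunk above: the -ECONNABORTED arm now converts the abort code into `error` first and falls through, so -ENETRESET can share the "peer responded" bookkeeping; the shared tail only works because the value is rewritten before the fallthrough. A compact sketch of that switch shape:

#include <errno.h>

static void prioritise(int *out_error, int *responded, int error, int abort_code)
{
    switch (error) {
    case -ECONNABORTED:
        error = abort_code;         /* rewrite before the fallthrough */
        /* fallthrough */
    case -ENETRESET:                /* responded, but our address changed */
        *responded = 1;
        *out_error = error;         /* abort code or -ENETRESET */
        return;
    }
}
]
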
+diff --git a/fs/afs/rotate.c b/fs/afs/rotate.c
+index 79e1a5f6701be..a840c3588ebbb 100644
+--- a/fs/afs/rotate.c
++++ b/fs/afs/rotate.c
+@@ -292,6 +292,10 @@ bool afs_select_fileserver(struct afs_operation *op)
+ op->error = error;
+ goto iterate_address;
+
++ case -ENETRESET:
++ pr_warn("kAFS: Peer reset %s (op=%x)\n",
++ op->type ? op->type->name : "???", op->debug_id);
++ fallthrough;
+ case -ECONNRESET:
+ _debug("call reset");
+ op->error = error;
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 23a1a92d64bb5..a5434f3e57c68 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -537,6 +537,8 @@ static void afs_deliver_to_call(struct afs_call *call)
+ case -ENODATA:
+ case -EBADMSG:
+ case -EMSGSIZE:
++ case -ENOMEM:
++ case -EFAULT:
+ abort_code = RXGEN_CC_UNMARSHAL;
+ if (state != AFS_CALL_CL_AWAIT_REPLY)
+ abort_code = RXGEN_SS_UNMARSHAL;
+@@ -544,7 +546,7 @@ static void afs_deliver_to_call(struct afs_call *call)
+ abort_code, ret, "KUM");
+ goto local_abort;
+ default:
+- abort_code = RX_USER_ABORT;
++ abort_code = RX_CALL_DEAD;
+ rxrpc_kernel_abort_call(call->net->socket, call->rxcall,
+ abort_code, ret, "KER");
+ goto local_abort;
+@@ -836,7 +838,7 @@ void afs_send_empty_reply(struct afs_call *call)
+ case -ENOMEM:
+ _debug("oom");
+ rxrpc_kernel_abort_call(net->socket, call->rxcall,
+- RX_USER_ABORT, -ENOMEM, "KOO");
++ RXGEN_SS_MARSHAL, -ENOMEM, "KOO");
+ fallthrough;
+ default:
+ _leave(" [error]");
+@@ -878,7 +880,7 @@ void afs_send_simple_reply(struct afs_call *call, const void *buf, size_t len)
+ if (n == -ENOMEM) {
+ _debug("oom");
+ rxrpc_kernel_abort_call(net->socket, call->rxcall,
+- RX_USER_ABORT, -ENOMEM, "KOO");
++ RXGEN_SS_MARSHAL, -ENOMEM, "KOO");
+ }
+ _leave(" [error]");
+ }
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index f447c902318da..07454b1ed2404 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -638,6 +638,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
+ case -EKEYEXPIRED:
+ case -EKEYREJECTED:
+ case -EKEYREVOKED:
++ case -ENETRESET:
+ afs_redirty_pages(wbc, mapping, start, len);
+ mapping_set_error(mapping, ret);
+ break;
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index 5d776f80ee50c..7ca3e0db06ffa 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -433,6 +433,30 @@ static void old_reloc(unsigned long rl)
+
+ /****************************************************************************/
+
++static inline u32 __user *skip_got_header(u32 __user *rp)
++{
++ if (IS_ENABLED(CONFIG_RISCV)) {
++ /*
++ * RISC-V has a 16 byte GOT PLT header for elf64-riscv
++ * and 8 byte GOT PLT header for elf32-riscv.
++ * Skip the whole GOT PLT header, since it is reserved
++ * for the dynamic linker (ld.so).
++ */
++ u32 rp_val0, rp_val1;
++
++ if (get_user(rp_val0, rp))
++ return rp;
++ if (get_user(rp_val1, rp + 1))
++ return rp;
++
++ if (rp_val0 == 0xffffffff && rp_val1 == 0xffffffff)
++ rp += 4;
++ else if (rp_val0 == 0xffffffff)
++ rp += 2;
++ }
++ return rp;
++}
++
+ static int load_flat_file(struct linux_binprm *bprm,
+ struct lib_info *libinfo, int id, unsigned long *extra_stack)
+ {
+@@ -782,7 +806,8 @@ static int load_flat_file(struct linux_binprm *bprm,
+ * image.
+ */
+ if (flags & FLAT_FLAG_GOTPIC) {
+- for (rp = (u32 __user *)datapos; ; rp++) {
++ rp = skip_got_header((u32 __user *) datapos);
++ for (; ; rp++) {
+ u32 addr, rp_val;
+ if (get_user(rp_val, rp))
+ return -EFAULT;
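
[Editor's note on the binfmt_flat hunk above: skip_got_header() probes the first one or two 32-bit GOT words for the all-ones sentinel that marks the ld.so-reserved RISC-V GOT PLT header (16 bytes on elf64-riscv, 8 bytes on elf32-riscv) and advances past it before relocation begins. A userspace sketch over a plain array; the get_user() fault handling is dropped, so this is not the kernel function itself:

#include <stdint.h>

static const uint32_t *skip_got_header_sketch(const uint32_t *rp, int is_riscv)
{
    if (!is_riscv)
        return rp;
    if (rp[0] == 0xffffffffu && rp[1] == 0xffffffffu)
        return rp + 4;      /* 16-byte header: two sentinel/reserved pairs */
    if (rp[0] == 0xffffffffu)
        return rp + 2;      /* 8-byte header */
    return rp;              /* no header: relocate from the start */
}
]
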
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 18e5ad5decdeb..43ce7366a05af 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1367,6 +1367,14 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ goto next;
+ }
+
++ ret = btrfs_zone_finish(block_group);
++ if (ret < 0) {
++ btrfs_dec_block_group_ro(block_group);
++ if (ret == -EAGAIN)
++ ret = 0;
++ goto next;
++ }
++
+ /*
+ * Want to do this before we do anything else so we can recover
+ * properly if we fail to join the transaction.
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index faa7f1d6782a0..e91f075cb34ad 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -211,6 +211,8 @@ struct btrfs_block_group {
+ u64 meta_write_pointer;
+ struct map_lookup *physical_map;
+ struct list_head active_bg_list;
++ struct work_struct zone_finish_work;
++ struct extent_buffer *last_eb;
+ };
+
+ static inline u64 btrfs_block_group_end(struct btrfs_block_group *block_group)
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index e5f13922a18fe..2a00946857e2d 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3522,7 +3522,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ ~BTRFS_FEATURE_INCOMPAT_SUPP;
+ if (features) {
+ btrfs_err(fs_info,
+- "cannot mount because of unsupported optional features (%llx)",
++ "cannot mount because of unsupported optional features (0x%llx)",
+ features);
+ err = -EINVAL;
+ goto fail_alloc;
+@@ -3560,7 +3560,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ ~BTRFS_FEATURE_COMPAT_RO_SUPP;
+ if (!sb_rdonly(sb) && features) {
+ btrfs_err(fs_info,
+- "cannot mount read-write because of unsupported optional features (%llx)",
++ "cannot mount read-write because of unsupported optional features (0x%llx)",
+ features);
+ err = -EINVAL;
+ goto fail_alloc;
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 9b488a737f8ae..806a5ceb684e5 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3721,8 +3721,12 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
+ this_bio_flag,
+ force_bio_submit);
+ if (ret) {
+- unlock_extent(tree, cur, cur + iosize - 1);
+- end_page_read(page, false, cur, iosize);
++ /*
++ * We have to unlock the remaining range, or the page
++ * will never be unlocked.
++ */
++ unlock_extent(tree, cur, end);
++ end_page_read(page, false, cur, end + 1 - cur);
+ goto out;
+ }
+ cur = cur + iosize;
+@@ -3898,10 +3902,12 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ u64 extent_offset;
+ u64 block_start;
+ struct extent_map *em;
++ int saved_ret = 0;
+ int ret = 0;
+ int nr = 0;
+ u32 opf = REQ_OP_WRITE;
+ const unsigned int write_flags = wbc_to_write_flags(wbc);
++ bool has_error = false;
+ bool compressed;
+
+ ret = btrfs_writepage_cow_fixup(page);
+@@ -3951,6 +3957,9 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ if (IS_ERR_OR_NULL(em)) {
+ btrfs_page_set_error(fs_info, page, cur, end - cur + 1);
+ ret = PTR_ERR_OR_ZERO(em);
++ has_error = true;
++ if (!saved_ret)
++ saved_ret = ret;
+ break;
+ }
+
+@@ -4014,6 +4023,10 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ end_bio_extent_writepage,
+ 0, 0, false);
+ if (ret) {
++ has_error = true;
++ if (!saved_ret)
++ saved_ret = ret;
++
+ btrfs_page_set_error(fs_info, page, cur, iosize);
+ if (PageWriteback(page))
+ btrfs_page_clear_writeback(fs_info, page, cur,
+@@ -4027,8 +4040,10 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ * If we finish without problem, we should not only clear page dirty,
+ * but also empty subpage dirty bits
+ */
+- if (!ret)
++ if (!has_error)
+ btrfs_page_assert_not_dirty(fs_info, page);
++ else
++ ret = saved_ret;
+ *nr_ret = nr;
+ return ret;
+ }
+@@ -4158,9 +4173,6 @@ void wait_on_extent_buffer_writeback(struct extent_buffer *eb)
+
+ static void end_extent_buffer_writeback(struct extent_buffer *eb)
+ {
+- if (test_bit(EXTENT_BUFFER_ZONE_FINISH, &eb->bflags))
+- btrfs_zone_finish_endio(eb->fs_info, eb->start, eb->len);
+-
+ clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
+ smp_mb__after_atomic();
+ wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK);
+@@ -4780,8 +4792,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
+ /*
+ * Implies write in zoned mode. Mark the last eb in a block group.
+ */
+- if (cache->seq_zone && eb->start + eb->len == cache->zone_capacity)
+- set_bit(EXTENT_BUFFER_ZONE_FINISH, &eb->bflags);
++ btrfs_schedule_zone_finish_bg(cache, eb);
+ btrfs_put_block_group(cache);
+ }
+ ret = write_one_eb(eb, wbc, epd);
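
[Editor's note on the __extent_writepage_io() hunk above: the loop keeps iterating after a failure so every range still has its writeback bits handled, but `ret` is reused each iteration; saved_ret latches the first error so the function reports it even if a later iteration succeeds. A minimal sketch of the latch-first-error loop:

/* Process every item even after a failure; report the first error seen. */
static int process_all(int (*step)(int), int n)
{
    int saved_ret = 0;

    for (int i = 0; i < n; i++) {
        int ret = step(i);

        if (ret && !saved_ret)
            saved_ret = ret;   /* latch the first failure, keep going */
    }
    return saved_ret;
}
]
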
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 151e9da5da2dc..c37a3e5f5eb98 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -32,7 +32,6 @@ enum {
+ /* write IO error */
+ EXTENT_BUFFER_WRITE_ERR,
+ EXTENT_BUFFER_NO_CHECK,
+- EXTENT_BUFFER_ZONE_FINISH,
+ };
+
+ /* these are flags for __process_pages_contig */
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 8fe9d55d68622..072cdcab30618 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -544,7 +544,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ struct timespec64 cur_time = current_time(dir);
+ struct inode *inode;
+ int ret;
+- dev_t anon_dev = 0;
++ dev_t anon_dev;
+ u64 objectid;
+ u64 index = 0;
+
+@@ -554,11 +554,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+
+ ret = btrfs_get_free_objectid(fs_info->tree_root, &objectid);
+ if (ret)
+- goto fail_free;
+-
+- ret = get_anon_bdev(&anon_dev);
+- if (ret < 0)
+- goto fail_free;
++ goto out_root_item;
+
+ /*
+ * Don't create subvolume whose level is not zero. Or qgroup will be
+@@ -566,9 +562,13 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ */
+ if (btrfs_qgroup_level(objectid)) {
+ ret = -ENOSPC;
+- goto fail_free;
++ goto out_root_item;
+ }
+
++ ret = get_anon_bdev(&anon_dev);
++ if (ret < 0)
++ goto out_root_item;
++
+ btrfs_init_block_rsv(&block_rsv, BTRFS_BLOCK_RSV_TEMP);
+ /*
+ * The same as the snapshot creation, please see the comment
+@@ -576,26 +576,26 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ */
+ ret = btrfs_subvolume_reserve_metadata(root, &block_rsv, 8, false);
+ if (ret)
+- goto fail_free;
++ goto out_anon_dev;
+
+ trans = btrfs_start_transaction(root, 0);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+ btrfs_subvolume_release_metadata(root, &block_rsv);
+- goto fail_free;
++ goto out_anon_dev;
+ }
+ trans->block_rsv = &block_rsv;
+ trans->bytes_reserved = block_rsv.size;
+
+ ret = btrfs_qgroup_inherit(trans, 0, objectid, inherit);
+ if (ret)
+- goto fail;
++ goto out;
+
+ leaf = btrfs_alloc_tree_block(trans, root, 0, objectid, NULL, 0, 0, 0,
+ BTRFS_NESTING_NORMAL);
+ if (IS_ERR(leaf)) {
+ ret = PTR_ERR(leaf);
+- goto fail;
++ goto out;
+ }
+
+ btrfs_mark_buffer_dirty(leaf);
+@@ -650,7 +650,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ btrfs_tree_unlock(leaf);
+ btrfs_free_tree_block(trans, objectid, leaf, 0, 1);
+ free_extent_buffer(leaf);
+- goto fail;
++ goto out;
+ }
+
+ free_extent_buffer(leaf);
+@@ -659,19 +659,18 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ key.offset = (u64)-1;
+ new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev);
+ if (IS_ERR(new_root)) {
+- free_anon_bdev(anon_dev);
+ ret = PTR_ERR(new_root);
+ btrfs_abort_transaction(trans, ret);
+- goto fail;
++ goto out;
+ }
+- /* Freeing will be done in btrfs_put_root() of new_root */
++ /* anon_dev is owned by new_root now. */
+ anon_dev = 0;
+
+ ret = btrfs_record_root_in_trans(trans, new_root);
+ if (ret) {
+ btrfs_put_root(new_root);
+ btrfs_abort_transaction(trans, ret);
+- goto fail;
++ goto out;
+ }
+
+ ret = btrfs_create_subvol_root(trans, new_root, root, mnt_userns);
+@@ -679,7 +678,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ if (ret) {
+ /* We potentially lose an unused inode item here */
+ btrfs_abort_transaction(trans, ret);
+- goto fail;
++ goto out;
+ }
+
+ /*
+@@ -688,28 +687,28 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ ret = btrfs_set_inode_index(BTRFS_I(dir), &index);
+ if (ret) {
+ btrfs_abort_transaction(trans, ret);
+- goto fail;
++ goto out;
+ }
+
+ ret = btrfs_insert_dir_item(trans, name, namelen, BTRFS_I(dir), &key,
+ BTRFS_FT_DIR, index);
+ if (ret) {
+ btrfs_abort_transaction(trans, ret);
+- goto fail;
++ goto out;
+ }
+
+ btrfs_i_size_write(BTRFS_I(dir), dir->i_size + namelen * 2);
+ ret = btrfs_update_inode(trans, root, BTRFS_I(dir));
+ if (ret) {
+ btrfs_abort_transaction(trans, ret);
+- goto fail;
++ goto out;
+ }
+
+ ret = btrfs_add_root_ref(trans, objectid, root->root_key.objectid,
+ btrfs_ino(BTRFS_I(dir)), index, name, namelen);
+ if (ret) {
+ btrfs_abort_transaction(trans, ret);
+- goto fail;
++ goto out;
+ }
+
+ ret = btrfs_uuid_tree_add(trans, root_item->uuid,
+@@ -717,8 +716,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ if (ret)
+ btrfs_abort_transaction(trans, ret);
+
+-fail:
+- kfree(root_item);
++out:
+ trans->block_rsv = NULL;
+ trans->bytes_reserved = 0;
+ btrfs_subvolume_release_metadata(root, &block_rsv);
+@@ -734,11 +732,10 @@ fail:
+ return PTR_ERR(inode);
+ d_instantiate(dentry, inode);
+ }
+- return ret;
+-
+-fail_free:
++out_anon_dev:
+ if (anon_dev)
+ free_anon_bdev(anon_dev);
++out_root_item:
+ kfree(root_item);
+ return ret;
+ }
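
[Editor's note on the create_subvol() hunk above: the relabelled exits follow the usual cascading-unwind convention: resources are acquired top-down and the labels release them bottom-up, which is why moving get_anon_bdev() below the qgroup check only required retargeting the earlier gotos, and why anon_dev is zeroed once btrfs_get_new_fs_root() takes ownership. A userspace sketch of the cascade; all names here are illustrative:

#include <stdlib.h>

static int later_step(void) { return 0; }   /* pretend work that may fail */
static void consume(void *p) { free(p); }   /* new owner takes anon_dev */

static int do_setup(void)
{
    int ret;
    void *root_item, *anon_dev;

    root_item = malloc(32);                 /* acquired first  */
    if (!root_item)
        return -1;
    anon_dev = malloc(8);                   /* acquired second */
    if (!anon_dev) {
        ret = -1;
        goto out_root_item;
    }
    ret = later_step();
    if (ret)
        goto out_anon_dev;

    consume(anon_dev);                      /* ownership transferred */
    anon_dev = NULL;                        /* so the unwind skips it */
out_anon_dev:
    free(anon_dev);                         /* free(NULL) is a no-op */
out_root_item:                              /* released in reverse order */
    free(root_item);
    return ret;
}
]
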
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index b0dfcc7a4225c..e9986667a7c86 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -7698,12 +7698,12 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ * do another round of validation checks.
+ */
+ if (total_dev != fs_info->fs_devices->total_devices) {
+- btrfs_err(fs_info,
+- "super_num_devices %llu mismatch with num_devices %llu found here",
++ btrfs_warn(fs_info,
++"super block num_devices %llu mismatch with DEV_ITEM count %llu, will be repaired on next transaction commit",
+ btrfs_super_num_devices(fs_info->super_copy),
+ total_dev);
+- ret = -EINVAL;
+- goto error;
++ fs_info->fs_devices->total_devices = total_dev;
++ btrfs_set_super_num_devices(fs_info->super_copy, total_dev);
+ }
+ if (btrfs_super_total_bytes(fs_info->super_copy) <
+ fs_info->fs_devices->total_rw_bytes) {
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index f03705d2f8a8c..65bc60b4be1d2 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1863,7 +1863,7 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group)
+ /* Check if we have unwritten allocated space */
+ if ((block_group->flags &
+ (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM)) &&
+- block_group->alloc_offset > block_group->meta_write_pointer) {
++ block_group->start + block_group->alloc_offset > block_group->meta_write_pointer) {
+ spin_unlock(&block_group->lock);
+ return -EAGAIN;
+ }
+@@ -1961,6 +1961,7 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len
+ struct btrfs_block_group *block_group;
+ struct map_lookup *map;
+ struct btrfs_device *device;
++ u64 min_alloc_bytes;
+ u64 physical;
+
+ if (!btrfs_is_zoned(fs_info))
+@@ -1969,7 +1970,15 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len
+ block_group = btrfs_lookup_block_group(fs_info, logical);
+ ASSERT(block_group);
+
+- if (logical + length < block_group->start + block_group->zone_capacity)
++ /* No MIXED_BG on zoned btrfs. */
++ if (block_group->flags & BTRFS_BLOCK_GROUP_DATA)
++ min_alloc_bytes = fs_info->sectorsize;
++ else
++ min_alloc_bytes = fs_info->nodesize;
++
++ /* Bail out if we can allocate more data from this block group. */
++ if (logical + length + min_alloc_bytes <=
++ block_group->start + block_group->zone_capacity)
+ goto out;
+
+ spin_lock(&block_group->lock);
+@@ -2007,6 +2016,37 @@ out:
+ btrfs_put_block_group(block_group);
+ }
+
++static void btrfs_zone_finish_endio_workfn(struct work_struct *work)
++{
++ struct btrfs_block_group *bg =
++ container_of(work, struct btrfs_block_group, zone_finish_work);
++
++ wait_on_extent_buffer_writeback(bg->last_eb);
++ free_extent_buffer(bg->last_eb);
++ btrfs_zone_finish_endio(bg->fs_info, bg->start, bg->length);
++ btrfs_put_block_group(bg);
++}
++
++void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
++ struct extent_buffer *eb)
++{
++ if (!bg->seq_zone || eb->start + eb->len * 2 <= bg->start + bg->zone_capacity)
++ return;
++
++ if (WARN_ON(bg->zone_finish_work.func == btrfs_zone_finish_endio_workfn)) {
++ btrfs_err(bg->fs_info, "double scheduling of bg %llu zone finishing",
++ bg->start);
++ return;
++ }
++
++ /* For the work */
++ btrfs_get_block_group(bg);
++ atomic_inc(&eb->refs);
++ bg->last_eb = eb;
++ INIT_WORK(&bg->zone_finish_work, btrfs_zone_finish_endio_workfn);
++ queue_work(system_unbound_wq, &bg->zone_finish_work);
++}
++
+ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg)
+ {
+ struct btrfs_fs_info *fs_info = bg->fs_info;
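
[Editor's note on the zoned.c hunk above: btrfs_schedule_zone_finish_bg() cannot finish the zone in the writeback path itself because it must wait for the last extent buffer, so it pins a block-group reference plus an eb reference in the work item and lets a system_unbound_wq worker do the waiting and the release. A threadless sketch of the take-ref-then-defer shape; the work is run inline here so it stays runnable, and every name is illustrative:

#include <stdio.h>

struct bg { int refs; };

static void bg_get(struct bg *bg) { bg->refs++; }
static void bg_put(struct bg *bg) { if (--bg->refs == 0) puts("bg freed"); }

struct work { void (*fn)(struct work *); struct bg *bg; };

static void zone_finish_workfn(struct work *w)
{
    /* the kernel waits for last_eb writeback here, then finishes the zone */
    bg_put(w->bg);                  /* drop the ref the scheduler took */
}

static void schedule_zone_finish(struct work *w, struct bg *bg)
{
    bg_get(bg);                     /* ref pinned for the async worker */
    w->bg = bg;
    w->fn = zone_finish_workfn;
    /* queue_work(system_unbound_wq, ...) in the kernel; inline here */
    w->fn(w);
}
]
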
+diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
+index 6dee76248cb4d..2d898970aec5f 100644
+--- a/fs/btrfs/zoned.h
++++ b/fs/btrfs/zoned.h
+@@ -76,6 +76,8 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group);
+ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags);
+ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical,
+ u64 length);
++void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
++ struct extent_buffer *eb);
+ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg);
+ void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info);
+ #else /* CONFIG_BLK_DEV_ZONED */
+@@ -233,6 +235,9 @@ static inline bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices,
+ static inline void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info,
+ u64 logical, u64 length) { }
+
++static inline void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
++ struct extent_buffer *eb) { }
++
+ static inline void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg) { }
+
+ static inline void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info) { }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index c30eefc0ac193..b34d6286ee903 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3378,13 +3378,17 @@ static void handle_session(struct ceph_mds_session *session,
+ }
+
+ if (msg_version >= 5) {
+- u32 flags;
+- /* version >= 4, struct_v, struct_cv, len, metric_spec */
+- ceph_decode_skip_n(&p, end, 2 + sizeof(u32) * 2, bad);
++ u32 flags, len;
++
++ /* version >= 4 */
++ ceph_decode_skip_16(&p, end, bad); /* struct_v, struct_cv */
++ ceph_decode_32_safe(&p, end, len, bad); /* len */
++ ceph_decode_skip_n(&p, end, len, bad); /* metric_spec */
++
+ /* version >= 5, flags */
+- ceph_decode_32_safe(&p, end, flags, bad);
++ ceph_decode_32_safe(&p, end, flags, bad);
+ if (flags & CEPH_SESSION_BLOCKLISTED) {
+- pr_warn("mds%d session blocklisted\n", session->s_mds);
++ pr_warn("mds%d session blocklisted\n", session->s_mds);
+ blocklisted = true;
+ }
+ }
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 59d22261e0821..a6363d362c489 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -836,7 +836,7 @@ cifs_smb3_do_mount(struct file_system_type *fs_type,
+ int flags, struct smb3_fs_context *old_ctx)
+ {
+ int rc;
+- struct super_block *sb;
++ struct super_block *sb = NULL;
+ struct cifs_sb_info *cifs_sb = NULL;
+ struct cifs_mnt_data mnt_data;
+ struct dentry *root;
+@@ -932,9 +932,11 @@ out_super:
+ return root;
+ out:
+ if (cifs_sb) {
+- kfree(cifs_sb->prepath);
+- smb3_cleanup_fs_context(cifs_sb->ctx);
+- kfree(cifs_sb);
++ if (!sb || IS_ERR(sb)) { /* otherwise kill_sb will handle */
++ kfree(cifs_sb->prepath);
++ smb3_cleanup_fs_context(cifs_sb->ctx);
++ kfree(cifs_sb);
++ }
+ }
+ return root;
+ }
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 560ecc4ad87d5..7f28fe8f6ba75 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -949,7 +949,7 @@ struct cifs_ses {
+ and after mount option parsing we fill it */
+ char *domainName;
+ char *password;
+- char *workstation_name;
++ char workstation_name[CIFS_MAX_WORKSTATION_LEN];
+ struct session_key auth_key;
+ struct ntlmssp_auth *ntlmssp; /* ciphertext, flags, server challenge */
+ enum securityEnum sectype; /* what security flavor was specified? */
+@@ -1983,4 +1983,17 @@ static inline bool cifs_is_referral_server(struct cifs_tcon *tcon,
+ return is_tcon_dfs(tcon) || (ref && (ref->flags & DFSREF_REFERRAL_SERVER));
+ }
+
++static inline size_t ntlmssp_workstation_name_size(const struct cifs_ses *ses)
++{
++ if (WARN_ON_ONCE(!ses || !ses->server))
++ return 0;
++ /*
++ * Make workstation name no more than 15 chars when using insecure dialects as some legacy
++ * servers do require it during NTLMSSP.
++ */
++ if (ses->server->dialect <= SMB20_PROT_ID)
++ return min_t(size_t, sizeof(ses->workstation_name), RFC1001_NAME_LEN_WITH_NULL);
++ return sizeof(ses->workstation_name);
++}
++
+ #endif /* _CIFS_GLOB_H */
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index c3a26f06fdaa1..faf4587804d95 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2037,18 +2037,7 @@ cifs_set_cifscreds(struct smb3_fs_context *ctx, struct cifs_ses *ses)
+ }
+ }
+
+- ctx->workstation_name = kstrdup(ses->workstation_name, GFP_KERNEL);
+- if (!ctx->workstation_name) {
+- cifs_dbg(FYI, "Unable to allocate memory for workstation_name\n");
+- rc = -ENOMEM;
+- kfree(ctx->username);
+- ctx->username = NULL;
+- kfree_sensitive(ctx->password);
+- ctx->password = NULL;
+- kfree(ctx->domainname);
+- ctx->domainname = NULL;
+- goto out_key_put;
+- }
++ strscpy(ctx->workstation_name, ses->workstation_name, sizeof(ctx->workstation_name));
+
+ out_key_put:
+ up_read(&key->sem);
+@@ -2157,12 +2146,9 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ if (!ses->domainName)
+ goto get_ses_fail;
+ }
+- if (ctx->workstation_name) {
+- ses->workstation_name = kstrdup(ctx->workstation_name,
+- GFP_KERNEL);
+- if (!ses->workstation_name)
+- goto get_ses_fail;
+- }
++
++ strscpy(ses->workstation_name, ctx->workstation_name, sizeof(ses->workstation_name));
++
+ if (ctx->domainauto)
+ ses->domainAuto = ctx->domainauto;
+ ses->cred_uid = ctx->cred_uid;
+@@ -3420,8 +3406,9 @@ cifs_are_all_path_components_accessible(struct TCP_Server_Info *server,
+ }
+
+ /*
+- * Check if path is remote (e.g. a DFS share). Return -EREMOTE if it is,
+- * otherwise 0.
++ * Check if path is remote (i.e. a DFS share).
++ *
++ * Return -EREMOTE if it is, otherwise 0 or -errno.
+ */
+ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ {
+@@ -3432,6 +3419,7 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ struct cifs_tcon *tcon = mnt_ctx->tcon;
+ struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ char *full_path;
++ bool nodfs = cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS;
+
+ if (!server->ops->is_path_accessible)
+ return -EOPNOTSUPP;
+@@ -3449,14 +3437,20 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ rc = server->ops->is_path_accessible(xid, tcon, cifs_sb,
+ full_path);
+ #ifdef CONFIG_CIFS_DFS_UPCALL
++ if (nodfs) {
++ if (rc == -EREMOTE)
++ rc = -EOPNOTSUPP;
++ goto out;
++ }
++
++ /* path *might* exist with non-ASCII characters in DFS root
++ * try again with full path (only if nodfs is not set) */
+ if (rc == -ENOENT && is_tcon_dfs(tcon))
+ rc = cifs_dfs_query_info_nonascii_quirk(xid, tcon, cifs_sb,
+ full_path);
+ #endif
+- if (rc != 0 && rc != -EREMOTE) {
+- kfree(full_path);
+- return rc;
+- }
++ if (rc != 0 && rc != -EREMOTE)
++ goto out;
+
+ if (rc != -EREMOTE) {
+ rc = cifs_are_all_path_components_accessible(server, xid, tcon,
+@@ -3468,6 +3462,7 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ }
+ }
+
++out:
+ kfree(full_path);
+ return rc;
+ }
+@@ -3703,6 +3698,7 @@ int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb3_fs_context *ctx)
+ if (!isdfs)
+ goto out;
+
++ /* proceed as DFS mount */
+ uuid_gen(&mnt_ctx.mount_id);
+ rc = connect_dfs_root(&mnt_ctx, &tl);
+ dfs_cache_free_tgts(&tl);
+@@ -3960,7 +3956,7 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ if (rc == 0) {
+ spin_lock(&cifs_tcp_ses_lock);
+ if (server->tcpStatus == CifsInNegotiate)
+- server->tcpStatus = CifsNeedSessSetup;
++ server->tcpStatus = CifsGood;
+ else
+ rc = -EHOSTDOWN;
+ spin_unlock(&cifs_tcp_ses_lock);
+@@ -3983,19 +3979,18 @@ cifs_setup_session(const unsigned int xid, struct cifs_ses *ses,
+ bool is_binding = false;
+
+ /* only send once per connect */
++ spin_lock(&ses->chan_lock);
++ is_binding = !CIFS_ALL_CHANS_NEED_RECONNECT(ses);
++ spin_unlock(&ses->chan_lock);
++
+ spin_lock(&cifs_tcp_ses_lock);
+- if ((server->tcpStatus != CifsNeedSessSetup) &&
+- (ses->status == CifsGood)) {
++ if (ses->status == CifsExiting) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ return 0;
+ }
+- server->tcpStatus = CifsInSessSetup;
++ ses->status = CifsInSessSetup;
+ spin_unlock(&cifs_tcp_ses_lock);
+
+- spin_lock(&ses->chan_lock);
+- is_binding = !CIFS_ALL_CHANS_NEED_RECONNECT(ses);
+- spin_unlock(&ses->chan_lock);
+-
+ if (!is_binding) {
+ ses->capabilities = server->capabilities;
+ if (!linuxExtEnabled)
+@@ -4019,13 +4014,13 @@ cifs_setup_session(const unsigned int xid, struct cifs_ses *ses,
+ if (rc) {
+ cifs_server_dbg(VFS, "Send error in SessSetup = %d\n", rc);
+ spin_lock(&cifs_tcp_ses_lock);
+- if (server->tcpStatus == CifsInSessSetup)
+- server->tcpStatus = CifsNeedSessSetup;
++ if (ses->status == CifsInSessSetup)
++ ses->status = CifsNeedSessSetup;
+ spin_unlock(&cifs_tcp_ses_lock);
+ } else {
+ spin_lock(&cifs_tcp_ses_lock);
+- if (server->tcpStatus == CifsInSessSetup)
+- server->tcpStatus = CifsGood;
++ if (ses->status == CifsInSessSetup)
++ ses->status = CifsGood;
+ /* Even if one channel is active, session is in good state */
+ ses->status = CifsGood;
+ spin_unlock(&cifs_tcp_ses_lock);
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 956f8e5cf3e74..c5dd6f7305bd1 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -654,7 +654,7 @@ static struct cache_entry *__lookup_cache_entry(const char *path, unsigned int h
+ return ce;
+ }
+ }
+- return ERR_PTR(-EEXIST);
++ return ERR_PTR(-ENOENT);
+ }
+
+ /*
+@@ -662,7 +662,7 @@ static struct cache_entry *__lookup_cache_entry(const char *path, unsigned int h
+ *
+ * Use whole path components in the match. Must be called with htable_rw_lock held.
+ *
+- * Return ERR_PTR(-EEXIST) if the entry is not found.
++ * Return ERR_PTR(-ENOENT) if the entry is not found.
+ */
+ static struct cache_entry *lookup_cache_entry(const char *path)
+ {
+@@ -710,7 +710,7 @@ static struct cache_entry *lookup_cache_entry(const char *path)
+ while (e > s && *e != sep)
+ e--;
+ }
+- return ERR_PTR(-EEXIST);
++ return ERR_PTR(-ENOENT);
+ }
+
+ /**
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index a92e9eec521f3..fbb0e98c7d2c4 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -312,7 +312,6 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx
+ new_ctx->password = NULL;
+ new_ctx->server_hostname = NULL;
+ new_ctx->domainname = NULL;
+- new_ctx->workstation_name = NULL;
+ new_ctx->UNC = NULL;
+ new_ctx->source = NULL;
+ new_ctx->iocharset = NULL;
+@@ -327,7 +326,6 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx
+ DUP_CTX_STR(UNC);
+ DUP_CTX_STR(source);
+ DUP_CTX_STR(domainname);
+- DUP_CTX_STR(workstation_name);
+ DUP_CTX_STR(nodename);
+ DUP_CTX_STR(iocharset);
+
+@@ -766,8 +764,7 @@ static int smb3_verify_reconfigure_ctx(struct fs_context *fc,
+ cifs_errorf(fc, "can not change domainname during remount\n");
+ return -EINVAL;
+ }
+- if (new_ctx->workstation_name &&
+- (!old_ctx->workstation_name || strcmp(new_ctx->workstation_name, old_ctx->workstation_name))) {
++ if (strcmp(new_ctx->workstation_name, old_ctx->workstation_name)) {
+ cifs_errorf(fc, "can not change workstation_name during remount\n");
+ return -EINVAL;
+ }
+@@ -814,7 +811,6 @@ static int smb3_reconfigure(struct fs_context *fc)
+ STEAL_STRING(cifs_sb, ctx, username);
+ STEAL_STRING(cifs_sb, ctx, password);
+ STEAL_STRING(cifs_sb, ctx, domainname);
+- STEAL_STRING(cifs_sb, ctx, workstation_name);
+ STEAL_STRING(cifs_sb, ctx, nodename);
+ STEAL_STRING(cifs_sb, ctx, iocharset);
+
+@@ -1467,22 +1463,15 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+
+ int smb3_init_fs_context(struct fs_context *fc)
+ {
+- int rc;
+ struct smb3_fs_context *ctx;
+ char *nodename = utsname()->nodename;
+ int i;
+
+ ctx = kzalloc(sizeof(struct smb3_fs_context), GFP_KERNEL);
+- if (unlikely(!ctx)) {
+- rc = -ENOMEM;
+- goto err_exit;
+- }
++ if (unlikely(!ctx))
++ return -ENOMEM;
+
+- ctx->workstation_name = kstrdup(nodename, GFP_KERNEL);
+- if (unlikely(!ctx->workstation_name)) {
+- rc = -ENOMEM;
+- goto err_exit;
+- }
++ strscpy(ctx->workstation_name, nodename, sizeof(ctx->workstation_name));
+
+ /*
+ * does not have to be perfect mapping since field is
+@@ -1555,14 +1544,6 @@ int smb3_init_fs_context(struct fs_context *fc)
+ fc->fs_private = ctx;
+ fc->ops = &smb3_fs_context_ops;
+ return 0;
+-
+-err_exit:
+- if (ctx) {
+- kfree(ctx->workstation_name);
+- kfree(ctx);
+- }
+-
+- return rc;
+ }
+
+ void
+@@ -1588,8 +1569,6 @@ smb3_cleanup_fs_context_contents(struct smb3_fs_context *ctx)
+ ctx->source = NULL;
+ kfree(ctx->domainname);
+ ctx->domainname = NULL;
+- kfree(ctx->workstation_name);
+- ctx->workstation_name = NULL;
+ kfree(ctx->nodename);
+ ctx->nodename = NULL;
+ kfree(ctx->iocharset);
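
[Editor's note on the fs_context.c hunk above: switching workstation_name from a kstrdup()'d pointer to a fixed char[CIFS_MAX_WORKSTATION_LEN] array turns every copy into an unfailable strscpy(), which is what lets all of the -ENOMEM unwinding above be deleted. A userspace sketch of the embedded-buffer shape, with snprintf standing in for strscpy and an illustrative bound:

#include <stdio.h>

#define MAX_WORKSTATION_LEN 16          /* illustrative bound */

struct ctx_embedded {
    char workstation_name[MAX_WORKSTATION_LEN];
};

static void set_name(struct ctx_embedded *ctx, const char *nodename)
{
    /* Truncating copy into embedded storage: no allocation, no error path,
     * always NUL-terminated - the property strscpy() gives the kernel code. */
    snprintf(ctx->workstation_name, sizeof(ctx->workstation_name), "%s",
             nodename);
}

The trade-off is a fixed per-session footprint in exchange for removing an allocation that could fail deep inside mount and reconnect paths.]
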
+diff --git a/fs/cifs/fs_context.h b/fs/cifs/fs_context.h
+index e54090d9ef368..3a156c1439254 100644
+--- a/fs/cifs/fs_context.h
++++ b/fs/cifs/fs_context.h
+@@ -170,7 +170,7 @@ struct smb3_fs_context {
+ char *server_hostname;
+ char *UNC;
+ char *nodename;
+- char *workstation_name;
++ char workstation_name[CIFS_MAX_WORKSTATION_LEN];
+ char *iocharset; /* local code page for mapping to and from Unicode */
+ char source_rfc1001_name[RFC1001_NAME_LEN_WITH_NULL]; /* clnt nb name */
+ char target_rfc1001_name[RFC1001_NAME_LEN_WITH_NULL]; /* srvr nb name */
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index afaf59c221936..5a803d6861464 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -95,7 +95,6 @@ sesInfoFree(struct cifs_ses *buf_to_free)
+ kfree_sensitive(buf_to_free->password);
+ kfree(buf_to_free->user_name);
+ kfree(buf_to_free->domainName);
+- kfree(buf_to_free->workstation_name);
+ kfree_sensitive(buf_to_free->auth_key.response);
+ kfree(buf_to_free->iface_list);
+ kfree_sensitive(buf_to_free);
+@@ -1309,7 +1308,7 @@ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix)
+ * for "\<server>\<dfsname>\<linkpath>" DFS reference,
+ * where <dfsname> contains non-ASCII unicode symbols.
+ *
+- * Check such DFS reference and emulate -ENOENT if it is actual.
++ * Check such DFS reference.
+ */
+ int cifs_dfs_query_info_nonascii_quirk(const unsigned int xid,
+ struct cifs_tcon *tcon,
+@@ -1341,10 +1340,6 @@ int cifs_dfs_query_info_nonascii_quirk(const unsigned int xid,
+ cifs_dbg(FYI, "DFS ref '%s' is found, emulate -EREMOTE\n",
+ dfspath);
+ rc = -EREMOTE;
+- } else if (rc == -EEXIST) {
+- cifs_dbg(FYI, "DFS ref '%s' is not found, emulate -ENOENT\n",
+- dfspath);
+- rc = -ENOENT;
+ } else {
+ cifs_dbg(FYI, "%s: dfs_cache_find returned %d\n", __func__, rc);
+ }
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 32f478c7a66d8..1a0995bb5d90c 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -714,9 +714,9 @@ static int size_of_ntlmssp_blob(struct cifs_ses *ses, int base_size)
+ else
+ sz += sizeof(__le16);
+
+- if (ses->workstation_name)
++ if (ses->workstation_name[0])
+ sz += sizeof(__le16) * strnlen(ses->workstation_name,
+- CIFS_MAX_WORKSTATION_LEN);
++ ntlmssp_workstation_name_size(ses));
+ else
+ sz += sizeof(__le16);
+
+@@ -960,7 +960,7 @@ int build_ntlmssp_auth_blob(unsigned char **pbuffer,
+
+ cifs_security_buffer_from_str(&sec_blob->WorkstationName,
+ ses->workstation_name,
+- CIFS_MAX_WORKSTATION_LEN,
++ ntlmssp_workstation_name_size(ses),
+ *pbuffer, &tmp,
+ nls_cp);
+
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index fe5bfa245fa7e..1b89b9b8a212a 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -362,8 +362,6 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ num_rqst++;
+
+ if (cfile) {
+- cifsFileInfo_put(cfile);
+- cfile = NULL;
+ rc = compound_send_recv(xid, ses, server,
+ flags, num_rqst - 2,
+ &rqst[1], &resp_buftype[1],
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 13080d6a140b3..ab74a678fb939 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -757,8 +757,8 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb,
+ struct cached_fid **cfid)
+ {
+- struct cifs_ses *ses = tcon->ses;
+- struct TCP_Server_Info *server = ses->server;
++ struct cifs_ses *ses;
++ struct TCP_Server_Info *server;
+ struct cifs_open_parms oparms;
+ struct smb2_create_rsp *o_rsp = NULL;
+ struct smb2_query_info_rsp *qi_rsp = NULL;
+@@ -776,6 +776,9 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ if (tcon->nohandlecache)
+ return -ENOTSUPP;
+
++ ses = tcon->ses;
++ server = ses->server;
++
+ if (cifs_sb->root == NULL)
+ return -ENOENT;
+
+@@ -3808,7 +3811,7 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ if (rc)
+ goto out;
+
+- if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0)
++ if (cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE)
+ smb2_set_sparse(xid, tcon, cfile, inode, false);
+
+ eof = cpu_to_le64(off + len);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 1704fd358b850..55e6879ef18be 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -3904,7 +3904,8 @@ SMB2_echo(struct TCP_Server_Info *server)
+ cifs_dbg(FYI, "In echo request for conn_id %lld\n", server->conn_id);
+
+ spin_lock(&cifs_tcp_ses_lock);
+- if (server->tcpStatus == CifsNeedNegotiate) {
++ if (server->ops->need_neg &&
++ server->ops->need_neg(server)) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ /* No need to send echo on newly established connections */
+ mod_delayed_work(cifsiod_wq, &server->reconnect, 0);
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 2af79093b78bc..01b732641edb2 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -641,7 +641,8 @@ smb2_sign_rqst(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ if (!is_signed)
+ return 0;
+ spin_lock(&cifs_tcp_ses_lock);
+- if (server->tcpStatus == CifsNeedNegotiate) {
++ if (server->ops->need_neg &&
++ server->ops->need_neg(server)) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ return 0;
+ }
+diff --git a/fs/dax.c b/fs/dax.c
+index cd03485867a74..411ea6a0fe571 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -846,7 +846,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
+ if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
+ goto unlock_pmd;
+
+- flush_cache_page(vma, address, pfn);
++ flush_cache_range(vma, address,
++ address + HPAGE_PMD_SIZE);
+ pmd = pmdp_invalidate(vma, address, pmdp);
+ pmd = pmd_wrprotect(pmd);
+ pmd = pmd_mkclean(pmd);
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index bdb51d209ba25..5b485cd96c931 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -1559,6 +1559,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype,
+ lkb->lkb_wait_type = 0;
+ lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL;
+ lkb->lkb_wait_count--;
++ unhold_lkb(lkb);
+ goto out_del;
+ }
+
+@@ -1585,6 +1586,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype,
+ log_error(ls, "remwait error %x reply %d wait_type %d overlap",
+ lkb->lkb_id, mstype, lkb->lkb_wait_type);
+ lkb->lkb_wait_count--;
++ unhold_lkb(lkb);
+ lkb->lkb_wait_type = 0;
+ }
+
+@@ -1795,7 +1797,6 @@ static void shrink_bucket(struct dlm_ls *ls, int b)
+ memcpy(ls->ls_remove_name, name, DLM_RESNAME_MAXLEN);
+ spin_unlock(&ls->ls_remove_spin);
+ spin_unlock(&ls->ls_rsbtbl[b].lock);
+- wake_up(&ls->ls_remove_wait);
+
+ send_remove(r);
+
+@@ -1804,6 +1805,7 @@ static void shrink_bucket(struct dlm_ls *ls, int b)
+ ls->ls_remove_len = 0;
+ memset(ls->ls_remove_name, 0, DLM_RESNAME_MAXLEN);
+ spin_unlock(&ls->ls_remove_spin);
++ wake_up(&ls->ls_remove_wait);
+
+ dlm_free_rsb(r);
+ }
+@@ -4079,7 +4081,6 @@ static void send_repeat_remove(struct dlm_ls *ls, char *ms_name, int len)
+ memcpy(ls->ls_remove_name, name, DLM_RESNAME_MAXLEN);
+ spin_unlock(&ls->ls_remove_spin);
+ spin_unlock(&ls->ls_rsbtbl[b].lock);
+- wake_up(&ls->ls_remove_wait);
+
+ rv = _create_message(ls, sizeof(struct dlm_message) + len,
+ dir_nodeid, DLM_MSG_REMOVE, &ms, &mh);
+@@ -4095,6 +4096,7 @@ static void send_repeat_remove(struct dlm_ls *ls, char *ms_name, int len)
+ ls->ls_remove_len = 0;
+ memset(ls->ls_remove_name, 0, DLM_RESNAME_MAXLEN);
+ spin_unlock(&ls->ls_remove_spin);
++ wake_up(&ls->ls_remove_wait);
+ }
+
+ static int receive_request(struct dlm_ls *ls, struct dlm_message *ms)
+@@ -5331,11 +5333,16 @@ int dlm_recover_waiters_post(struct dlm_ls *ls)
+ lkb->lkb_flags &= ~DLM_IFL_OVERLAP_UNLOCK;
+ lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL;
+ lkb->lkb_wait_type = 0;
+- lkb->lkb_wait_count = 0;
++ /* drop all wait_count references; we still
++ * hold a reference for this iteration.
++ */
++ while (lkb->lkb_wait_count) {
++ lkb->lkb_wait_count--;
++ unhold_lkb(lkb);
++ }
+ mutex_lock(&ls->ls_waiters_mutex);
+ list_del_init(&lkb->lkb_wait_reply);
+ mutex_unlock(&ls->ls_waiters_mutex);
+- unhold_lkb(lkb); /* for waiters list */
+
+ if (oc || ou) {
+ /* do an unlock or cancel instead of resending */
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index e284d696c1fdc..6ed935ad82471 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -1789,7 +1789,7 @@ static int dlm_listen_for_all(void)
+ SOCK_STREAM, dlm_proto_ops->proto, &sock);
+ if (result < 0) {
+ log_print("Can't create comms socket: %d", result);
+- goto out;
++ return result;
+ }
+
+ sock_set_mark(sock->sk, dlm_config.ci_mark);
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index c38b2b8ffd1d3..a10d2bcfe75a8 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -23,11 +23,11 @@ struct plock_op {
+ struct list_head list;
+ int done;
+ struct dlm_plock_info info;
++ int (*callback)(struct file_lock *fl, int result);
+ };
+
+ struct plock_xop {
+ struct plock_op xop;
+- int (*callback)(struct file_lock *fl, int result);
+ void *fl;
+ void *file;
+ struct file_lock flc;
+@@ -129,19 +129,18 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ /* fl_owner is lockd which doesn't distinguish
+ processes on the nfs client */
+ op->info.owner = (__u64) fl->fl_pid;
+- xop->callback = fl->fl_lmops->lm_grant;
++ op->callback = fl->fl_lmops->lm_grant;
+ locks_init_lock(&xop->flc);
+ locks_copy_lock(&xop->flc, fl);
+ xop->fl = fl;
+ xop->file = file;
+ } else {
+ op->info.owner = (__u64)(long) fl->fl_owner;
+- xop->callback = NULL;
+ }
+
+ send_op(op);
+
+- if (xop->callback == NULL) {
++ if (!op->callback) {
+ rv = wait_event_interruptible(recv_wq, (op->done != 0));
+ if (rv == -ERESTARTSYS) {
+ log_debug(ls, "dlm_posix_lock: wait killed %llx",
+@@ -203,7 +202,7 @@ static int dlm_plock_callback(struct plock_op *op)
+ file = xop->file;
+ flc = &xop->flc;
+ fl = xop->fl;
+- notify = xop->callback;
++ notify = op->callback;
+
+ if (op->info.rv) {
+ notify(fl, op->info.rv);
+@@ -436,10 +435,9 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ if (op->info.fsid == info.fsid &&
+ op->info.number == info.number &&
+ op->info.owner == info.owner) {
+- struct plock_xop *xop = (struct plock_xop *)op;
+ list_del_init(&op->list);
+ memcpy(&op->info, &info, sizeof(info));
+- if (xop->callback)
++ if (op->callback)
+ do_callback = 1;
+ else
+ op->done = 1;
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index 3efa686c76441..0e0d1fc0f1301 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -322,6 +322,7 @@ static int z_erofs_shifted_transform(struct z_erofs_decompress_req *rq,
+ PAGE_ALIGN(rq->pageofs_out + rq->outputsize) >> PAGE_SHIFT;
+ const unsigned int righthalf = min_t(unsigned int, rq->outputsize,
+ PAGE_SIZE - rq->pageofs_out);
++ const unsigned int lefthalf = rq->outputsize - righthalf;
+ unsigned char *src, *dst;
+
+ if (nrpages_out > 2) {
+@@ -344,10 +345,10 @@ static int z_erofs_shifted_transform(struct z_erofs_decompress_req *rq,
+ if (nrpages_out == 2) {
+ DBG_BUGON(!rq->out[1]);
+ if (rq->out[1] == *rq->in) {
+- memmove(src, src + righthalf, rq->pageofs_out);
++ memmove(src, src + righthalf, lefthalf);
+ } else {
+ dst = kmap_atomic(rq->out[1]);
+- memcpy(dst, src + righthalf, rq->pageofs_out);
++ memcpy(dst, src + righthalf, lefthalf);
+ kunmap_atomic(dst);
+ }
+ }
+diff --git a/fs/exec.c b/fs/exec.c
+index 40b1008fb0f79..60282ebe835b5 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1315,8 +1315,6 @@ int begin_new_exec(struct linux_binprm * bprm)
+ */
+ force_uaccess_begin();
+
+- if (me->flags & PF_KTHREAD)
+- free_kthread_struct(me);
+ me->flags &= ~(PF_RANDOMIZE | PF_FORKNOEXEC | PF_KTHREAD |
+ PF_NOFREEZE | PF_NO_SETAFFINITY);
+ flush_thread();
+@@ -1962,6 +1960,10 @@ int kernel_execve(const char *kernel_filename,
+ int fd = AT_FDCWD;
+ int retval;
+
++ if (WARN_ON_ONCE((current->flags & PF_KTHREAD) &&
++ (current->worker_private)))
++ return -EINVAL;
++
+ filename = getname_kernel(kernel_filename);
+ if (IS_ERR(filename))
+ return PTR_ERR(filename);
+diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
+index 0106eba46d5af..3ef80d000e13d 100644
+--- a/fs/exportfs/expfs.c
++++ b/fs/exportfs/expfs.c
+@@ -145,7 +145,7 @@ static struct dentry *reconnect_one(struct vfsmount *mnt,
+ if (err)
+ goto out_err;
+ dprintk("%s: found name: %s\n", __func__, nbuf);
+- tmp = lookup_one_len_unlocked(nbuf, parent, strlen(nbuf));
++ tmp = lookup_one_unlocked(mnt_user_ns(mnt), nbuf, parent, strlen(nbuf));
+ if (IS_ERR(tmp)) {
+ dprintk("%s: lookup failed: %d\n", __func__, PTR_ERR(tmp));
+ err = PTR_ERR(tmp);
+@@ -525,7 +525,8 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
+ }
+
+ inode_lock(target_dir->d_inode);
+- nresult = lookup_one_len(nbuf, target_dir, strlen(nbuf));
++ nresult = lookup_one(mnt_user_ns(mnt), nbuf,
++ target_dir, strlen(nbuf));
+ if (!IS_ERR(nresult)) {
+ if (unlikely(nresult->d_inode != result->d_inode)) {
+ dput(nresult);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 9b80693224957..28343087850a4 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1438,12 +1438,6 @@ struct ext4_super_block {
+
+ #ifdef __KERNEL__
+
+-#ifdef CONFIG_FS_ENCRYPTION
+-#define DUMMY_ENCRYPTION_ENABLED(sbi) ((sbi)->s_dummy_enc_policy.policy != NULL)
+-#else
+-#define DUMMY_ENCRYPTION_ENABLED(sbi) (0)
+-#endif
+-
+ /* Number of quota types we support */
+ #define EXT4_MAXQUOTAS 3
+
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 488d7c1de941e..fcf54be9c9a46 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -372,7 +372,7 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ {
+ unsigned short entries;
+ ext4_lblk_t lblock = 0;
+- ext4_lblk_t prev = 0;
++ ext4_lblk_t cur = 0;
+
+ if (eh->eh_entries == 0)
+ return 1;
+@@ -396,11 +396,11 @@ static int ext4_valid_extent_entries(struct inode *inode,
+
+ /* Check for overlapping extents */
+ lblock = le32_to_cpu(ext->ee_block);
+- if ((lblock <= prev) && prev) {
++ if (lblock < cur) {
+ *pblk = ext4_ext_pblock(ext);
+ return 0;
+ }
+- prev = lblock + ext4_ext_get_actual_len(ext) - 1;
++ cur = lblock + ext4_ext_get_actual_len(ext);
+ ext++;
+ entries--;
+ }
+@@ -420,13 +420,13 @@ static int ext4_valid_extent_entries(struct inode *inode,
+
+ /* Check for overlapping index extents */
+ lblock = le32_to_cpu(ext_idx->ei_block);
+- if ((lblock <= prev) && prev) {
++ if (lblock < cur) {
+ *pblk = ext4_idx_pblock(ext_idx);
+ return 0;
+ }
+ ext_idx++;
+ entries--;
+- prev = lblock;
++ cur = lblock + 1;
+ }
+ }
+ return 1;
+@@ -4694,15 +4694,17 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ FALLOC_FL_INSERT_RANGE))
+ return -EOPNOTSUPP;
+
++ inode_lock(inode);
++ ret = ext4_convert_inline_data(inode);
++ inode_unlock(inode);
++ if (ret)
++ goto exit;
++
+ if (mode & FALLOC_FL_PUNCH_HOLE) {
+ ret = ext4_punch_hole(file, offset, len);
+ goto exit;
+ }
+
+- ret = ext4_convert_inline_data(inode);
+- if (ret)
+- goto exit;
+-
+ if (mode & FALLOC_FL_COLLAPSE_RANGE) {
+ ret = ext4_collapse_range(file, offset, len);
+ goto exit;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 9c076262770d9..e9ef5cf309694 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -2005,6 +2005,18 @@ int ext4_convert_inline_data(struct inode *inode)
+ if (!ext4_has_inline_data(inode)) {
+ ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ return 0;
++ } else if (!ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
++ /*
++ * Inode has inline data but EXT4_STATE_MAY_INLINE_DATA is
++ * cleared. This means we are in the middle of moving the
++ * inline data to a delayed-allocation block. Just force writeout
++ * here to finish conversion.
++ */
++ error = filemap_flush(inode->i_mapping);
++ if (error)
++ return error;
++ if (!ext4_has_inline_data(inode))
++ return 0;
+ }
+
+ needed_blocks = ext4_writepage_trans_blocks(inode);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index d8ff93a4b1b90..0b85fca32ca9b 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3958,15 +3958,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+
+ trace_ext4_punch_hole(inode, offset, length, 0);
+
+- ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+- if (ext4_has_inline_data(inode)) {
+- filemap_invalidate_lock(mapping);
+- ret = ext4_convert_inline_data(inode);
+- filemap_invalidate_unlock(mapping);
+- if (ret)
+- return ret;
+- }
+-
+ /*
+ * Write out all dirty pages to avoid race conditions
+ * Then release them.
+@@ -5390,6 +5381,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ if (attr->ia_valid & ATTR_SIZE) {
+ handle_t *handle;
+ loff_t oldsize = inode->i_size;
++ loff_t old_disksize;
+ int shrink = (attr->ia_size < inode->i_size);
+
+ if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
+@@ -5461,6 +5453,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ inode->i_sb->s_blocksize_bits);
+
+ down_write(&EXT4_I(inode)->i_data_sem);
++ old_disksize = EXT4_I(inode)->i_disksize;
+ EXT4_I(inode)->i_disksize = attr->ia_size;
+ rc = ext4_mark_inode_dirty(handle, inode);
+ if (!error)
+@@ -5472,6 +5465,8 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ */
+ if (!error)
+ i_size_write(inode, attr->ia_size);
++ else
++ EXT4_I(inode)->i_disksize = old_disksize;
+ up_write(&EXT4_I(inode)->i_data_sem);
+ ext4_journal_stop(handle);
+ if (error)
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 1f37eb0176ccc..47a560d60d3ef 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6377,6 +6377,7 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ * @start: first group block to examine
+ * @max: last group block to examine
+ * @minblocks: minimum extent block count
++ * @set_trimmed: set the trimmed flag if at least one block is trimmed
+ *
+ * ext4_trim_all_free walks through group's block bitmap searching for free
+ * extents. When the free extent is found, mark it as used in group buddy
+@@ -6386,7 +6387,7 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ static ext4_grpblk_t
+ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ ext4_grpblk_t start, ext4_grpblk_t max,
+- ext4_grpblk_t minblocks)
++ ext4_grpblk_t minblocks, bool set_trimmed)
+ {
+ struct ext4_buddy e4b;
+ int ret;
+@@ -6405,7 +6406,7 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ if (!EXT4_MB_GRP_WAS_TRIMMED(e4b.bd_info) ||
+ minblocks < EXT4_SB(sb)->s_last_trim_minblks) {
+ ret = ext4_try_to_trim_range(sb, &e4b, start, max, minblocks);
+- if (ret >= 0)
++ if (ret >= 0 && set_trimmed)
+ EXT4_MB_GRP_SET_TRIMMED(e4b.bd_info);
+ } else {
+ ret = 0;
+@@ -6442,6 +6443,7 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ ext4_fsblk_t first_data_blk =
+ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
+ ext4_fsblk_t max_blks = ext4_blocks_count(EXT4_SB(sb)->s_es);
++ bool whole_group, eof = false;
+ int ret = 0;
+
+ start = range->start >> sb->s_blocksize_bits;
+@@ -6460,8 +6462,10 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ if (minlen > EXT4_CLUSTERS_PER_GROUP(sb))
+ goto out;
+ }
+- if (end >= max_blks)
++ if (end >= max_blks - 1) {
+ end = max_blks - 1;
++ eof = true;
++ }
+ if (end <= first_data_blk)
+ goto out;
+ if (start < first_data_blk)
+@@ -6475,6 +6479,7 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+
+ /* end now represents the last cluster to discard in this group */
+ end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
++ whole_group = true;
+
+ for (group = first_group; group <= last_group; group++) {
+ grp = ext4_get_group_info(sb, group);
+@@ -6491,12 +6496,13 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ * change it for the last group, note that last_cluster is
+ * already computed earlier by ext4_get_group_no_and_offset()
+ */
+- if (group == last_group)
++ if (group == last_group) {
+ end = last_cluster;
+-
++ whole_group = eof ? true : end == EXT4_CLUSTERS_PER_GROUP(sb) - 1;
++ }
+ if (grp->bb_free >= minlen) {
+ cnt = ext4_trim_all_free(sb, group, first_cluster,
+- end, minlen);
++ end, minlen, whole_group);
+ if (cnt < 0) {
+ ret = cnt;
+ break;
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index f62260264f68c..31e05b0c6cd98 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -277,9 +277,9 @@ static struct dx_frame *dx_probe(struct ext4_filename *fname,
+ struct dx_hash_info *hinfo,
+ struct dx_frame *frame);
+ static void dx_release(struct dx_frame *frames);
+-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+- unsigned blocksize, struct dx_hash_info *hinfo,
+- struct dx_map_entry map[]);
++static int dx_make_map(struct inode *dir, struct buffer_head *bh,
++ struct dx_hash_info *hinfo,
++ struct dx_map_entry *map_tail);
+ static void dx_sort_map(struct dx_map_entry *map, unsigned count);
+ static struct ext4_dir_entry_2 *dx_move_dirents(struct inode *dir, char *from,
+ char *to, struct dx_map_entry *offsets,
+@@ -777,12 +777,14 @@ static struct dx_frame *
+ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ struct dx_hash_info *hinfo, struct dx_frame *frame_in)
+ {
+- unsigned count, indirect;
++ unsigned count, indirect, level, i;
+ struct dx_entry *at, *entries, *p, *q, *m;
+ struct dx_root *root;
+ struct dx_frame *frame = frame_in;
+ struct dx_frame *ret_err = ERR_PTR(ERR_BAD_DX_DIR);
+ u32 hash;
++ ext4_lblk_t block;
++ ext4_lblk_t blocks[EXT4_HTREE_LEVEL];
+
+ memset(frame_in, 0, EXT4_HTREE_LEVEL * sizeof(frame_in[0]));
+ frame->bh = ext4_read_dirblock(dir, 0, INDEX);
+@@ -854,6 +856,8 @@ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ }
+
+ dxtrace(printk("Look up %x", hash));
++ level = 0;
++ blocks[0] = 0;
+ while (1) {
+ count = dx_get_count(entries);
+ if (!count || count > dx_get_limit(entries)) {
+@@ -882,15 +886,27 @@ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ dx_get_block(at)));
+ frame->entries = entries;
+ frame->at = at;
+- if (!indirect--)
++
++ block = dx_get_block(at);
++ for (i = 0; i <= level; i++) {
++ if (blocks[i] == block) {
++ ext4_warning_inode(dir,
++ "dx entry: tree cycle block %u points back to block %u",
++ blocks[level], block);
++ goto fail;
++ }
++ }
++ if (++level > indirect)
+ return frame;
++ blocks[level] = block;
+ frame++;
+- frame->bh = ext4_read_dirblock(dir, dx_get_block(at), INDEX);
++ frame->bh = ext4_read_dirblock(dir, block, INDEX);
+ if (IS_ERR(frame->bh)) {
+ ret_err = (struct dx_frame *) frame->bh;
+ frame->bh = NULL;
+ goto fail;
+ }
++
+ entries = ((struct dx_node *) frame->bh->b_data)->entries;
+
+ if (dx_get_limit(entries) != dx_node_limit(dir)) {
+@@ -1249,15 +1265,23 @@ static inline int search_dirblock(struct buffer_head *bh,
+ * Create map of hash values, offsets, and sizes, stored at end of block.
+ * Returns number of entries mapped.
+ */
+-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+- unsigned blocksize, struct dx_hash_info *hinfo,
++static int dx_make_map(struct inode *dir, struct buffer_head *bh,
++ struct dx_hash_info *hinfo,
+ struct dx_map_entry *map_tail)
+ {
+ int count = 0;
+- char *base = (char *) de;
++ struct ext4_dir_entry_2 *de = (struct ext4_dir_entry_2 *)bh->b_data;
++ unsigned int buflen = bh->b_size;
++ char *base = bh->b_data;
+ struct dx_hash_info h = *hinfo;
+
+- while ((char *) de < base + blocksize) {
++ if (ext4_has_metadata_csum(dir->i_sb))
++ buflen -= sizeof(struct ext4_dir_entry_tail);
++
++ while ((char *) de < base + buflen) {
++ if (ext4_check_dir_entry(dir, NULL, de, bh, base, buflen,
++ ((char *)de) - base))
++ return -EFSCORRUPTED;
+ if (de->name_len && de->inode) {
+ if (ext4_hash_in_dirent(dir))
+ h.hash = EXT4_DIRENT_HASH(de);
+@@ -1270,8 +1294,7 @@ static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+ count++;
+ cond_resched();
+ }
+- /* XXX: do we need to check rec_len == 0 case? -Chris */
+- de = ext4_next_entry(de, blocksize);
++ de = ext4_next_entry(de, dir->i_sb->s_blocksize);
+ }
+ return count;
+ }
+@@ -1943,8 +1966,11 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+
+ /* create map in the end of data2 block */
+ map = (struct dx_map_entry *) (data2 + blocksize);
+- count = dx_make_map(dir, (struct ext4_dir_entry_2 *) data1,
+- blocksize, hinfo, map);
++ count = dx_make_map(dir, *bh, hinfo, map);
++ if (count < 0) {
++ err = count;
++ goto journal_error;
++ }
+ map -= count;
+ dx_sort_map(map, count);
+ /* Ensure that neither split block is over half full */
+@@ -3455,6 +3481,9 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ struct buffer_head *bh;
+
+ if (!ext4_has_inline_data(inode)) {
++ struct ext4_dir_entry_2 *de;
++ unsigned int offset;
++
+ /* The first directory block must not be a hole, so
+ * treat it as DIRENT_HTREE
+ */
+@@ -3463,9 +3492,30 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ *retval = PTR_ERR(bh);
+ return NULL;
+ }
+- *parent_de = ext4_next_entry(
+- (struct ext4_dir_entry_2 *)bh->b_data,
+- inode->i_sb->s_blocksize);
++
++ de = (struct ext4_dir_entry_2 *) bh->b_data;
++ if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
++ bh->b_size, 0) ||
++ le32_to_cpu(de->inode) != inode->i_ino ||
++ strcmp(".", de->name)) {
++ EXT4_ERROR_INODE(inode, "directory missing '.'");
++ brelse(bh);
++ *retval = -EFSCORRUPTED;
++ return NULL;
++ }
++ offset = ext4_rec_len_from_disk(de->rec_len,
++ inode->i_sb->s_blocksize);
++ de = ext4_next_entry(de, inode->i_sb->s_blocksize);
++ if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
++ bh->b_size, offset) ||
++ le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) {
++ EXT4_ERROR_INODE(inode, "directory missing '..'");
++ brelse(bh);
++ *retval = -EFSCORRUPTED;
++ return NULL;
++ }
++ *parent_de = de;
++
+ return bh;
+ }
+
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index fe30f483c59f0..3e6e618019a29 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1913,6 +1913,7 @@ static const struct mount_opts {
+ MOPT_EXT4_ONLY | MOPT_CLEAR},
+ {Opt_warn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_SET},
+ {Opt_nowarn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_CLEAR},
++ {Opt_commit, 0, MOPT_NO_EXT2},
+ {Opt_nojournal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM,
+ MOPT_EXT4_ONLY | MOPT_CLEAR},
+ {Opt_journal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM,
+@@ -2427,11 +2428,12 @@ static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ ctx->spec |= EXT4_SPEC_DUMMY_ENCRYPTION;
+ ctx->test_dummy_enc_arg = kmemdup_nul(param->string, param->size,
+ GFP_KERNEL);
++ return 0;
+ #else
+ ext4_msg(NULL, KERN_WARNING,
+- "Test dummy encryption mount option ignored");
++ "test_dummy_encryption option not supported");
++ return -EINVAL;
+ #endif
+- return 0;
+ case Opt_dax:
+ case Opt_dax_type:
+ #ifdef CONFIG_FS_DAX
+@@ -2625,8 +2627,10 @@ parse_failed:
+ ret = ext4_apply_options(fc, sb);
+
+ out_free:
+- kfree(s_ctx);
+- kfree(fc);
++ if (fc) {
++ ext4_fc_free(fc);
++ kfree(fc);
++ }
+ kfree(s_mount_opts);
+ return ret;
+ }
+@@ -2786,12 +2790,44 @@ err_jquota_specified:
+ #endif
+ }
+
++static int ext4_check_test_dummy_encryption(const struct fs_context *fc,
++ struct super_block *sb)
++{
++#ifdef CONFIG_FS_ENCRYPTION
++ const struct ext4_fs_context *ctx = fc->fs_private;
++ const struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++ if (!(ctx->spec & EXT4_SPEC_DUMMY_ENCRYPTION))
++ return 0;
++
++ if (!ext4_has_feature_encrypt(sb)) {
++ ext4_msg(NULL, KERN_WARNING,
++ "test_dummy_encryption requires encrypt feature");
++ return -EINVAL;
++ }
++ /*
++ * This mount option is just for testing, and it's not worthwhile to
++ * implement the extra complexity (e.g. RCU protection) that would be
++ * needed to allow it to be set or changed during remount. We do allow
++ * it to be specified during remount, but only if there is no change.
++ */
++ if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE &&
++ !sbi->s_dummy_enc_policy.policy) {
++ ext4_msg(NULL, KERN_WARNING,
++ "Can't set test_dummy_encryption on remount");
++ return -EINVAL;
++ }
++#endif /* CONFIG_FS_ENCRYPTION */
++ return 0;
++}
++
+ static int ext4_check_opt_consistency(struct fs_context *fc,
+ struct super_block *sb)
+ {
+ struct ext4_fs_context *ctx = fc->fs_private;
+ struct ext4_sb_info *sbi = fc->s_fs_info;
+ int is_remount = fc->purpose == FS_CONTEXT_FOR_RECONFIGURE;
++ int err;
+
+ if ((ctx->opt_flags & MOPT_NO_EXT2) && IS_EXT2_SB(sb)) {
+ ext4_msg(NULL, KERN_ERR,
+@@ -2821,20 +2857,9 @@ static int ext4_check_opt_consistency(struct fs_context *fc,
+ "for blocksize < PAGE_SIZE");
+ }
+
+-#ifdef CONFIG_FS_ENCRYPTION
+- /*
+- * This mount option is just for testing, and it's not worthwhile to
+- * implement the extra complexity (e.g. RCU protection) that would be
+- * needed to allow it to be set or changed during remount. We do allow
+- * it to be specified during remount, but only if there is no change.
+- */
+- if ((ctx->spec & EXT4_SPEC_DUMMY_ENCRYPTION) &&
+- is_remount && !sbi->s_dummy_enc_policy.policy) {
+- ext4_msg(NULL, KERN_WARNING,
+- "Can't set test_dummy_encryption on remount");
+- return -1;
+- }
+-#endif
++ err = ext4_check_test_dummy_encryption(fc, sb);
++ if (err)
++ return err;
+
+ if ((ctx->spec & EXT4_SPEC_DATAJ) && is_remount) {
+ if (!sbi->s_journal) {
+@@ -4393,7 +4418,8 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ int silent = fc->sb_flags & SB_SILENT;
+
+ /* Set defaults for the variables that will be set during parsing */
+- ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
++ if (!(ctx->spec & EXT4_SPEC_JOURNAL_IOPRIO))
++ ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+
+ sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
+ sbi->s_sectors_written_start =
+@@ -4870,7 +4896,7 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ sbi->s_inodes_per_block;
+ sbi->s_desc_per_block = blocksize / EXT4_DESC_SIZE(sb);
+ sbi->s_sbh = bh;
+- sbi->s_mount_state = le16_to_cpu(es->s_state);
++ sbi->s_mount_state = le16_to_cpu(es->s_state) & ~EXT4_FC_REPLAY;
+ sbi->s_addr_per_block_bits = ilog2(EXT4_ADDR_PER_BLOCK(sb));
+ sbi->s_desc_per_block_bits = ilog2(EXT4_DESC_PER_BLOCK(sb));
+
+@@ -5263,12 +5289,6 @@ no_journal:
+ goto failed_mount_wq;
+ }
+
+- if (DUMMY_ENCRYPTION_ENABLED(sbi) && !sb_rdonly(sb) &&
+- !ext4_has_feature_encrypt(sb)) {
+- ext4_set_feature_encrypt(sb);
+- ext4_commit_super(sb);
+- }
+-
+ /*
+ * Get the # of file system overhead blocks from the
+ * superblock if present.
+@@ -6260,7 +6280,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ char *to_free[EXT4_MAXQUOTAS];
+ #endif
+
+- ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+
+ /* Store the original options */
+ old_sb_flags = sb->s_flags;
+@@ -6286,9 +6305,14 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ } else
+ old_opts.s_qf_names[i] = NULL;
+ #endif
+- if (sbi->s_journal && sbi->s_journal->j_task->io_context)
+- ctx->journal_ioprio =
+- sbi->s_journal->j_task->io_context->ioprio;
++ if (!(ctx->spec & EXT4_SPEC_JOURNAL_IOPRIO)) {
++ if (sbi->s_journal && sbi->s_journal->j_task->io_context)
++ ctx->journal_ioprio =
++ sbi->s_journal->j_task->io_context->ioprio;
++ else
++ ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
++
++ }
+
+ ext4_apply_options(fc, sb);
+
+@@ -6429,7 +6453,8 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ if (err)
+ goto restore_opts;
+ }
+- sbi->s_mount_state = le16_to_cpu(es->s_state);
++ sbi->s_mount_state = (le16_to_cpu(es->s_state) &
++ ~EXT4_FC_REPLAY);
+
+ err = ext4_setup_super(sb, es, 0);
+ if (err)
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 166f086233625..e85f999ddd856 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -82,7 +82,8 @@ int f2fs_init_casefolded_name(const struct inode *dir,
+ #if IS_ENABLED(CONFIG_UNICODE)
+ struct super_block *sb = dir->i_sb;
+
+- if (IS_CASEFOLDED(dir)) {
++ if (IS_CASEFOLDED(dir) &&
++ !is_dot_dotdot(fname->usr_fname->name, fname->usr_fname->len)) {
+ fname->cf_name.name = f2fs_kmem_cache_alloc(f2fs_cf_name_slab,
+ GFP_NOFS, false, F2FS_SB(sb));
+ if (!fname->cf_name.name)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 2514597f5b26b..1a2256aec94f6 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -491,11 +491,11 @@ struct f2fs_filename {
+ #if IS_ENABLED(CONFIG_UNICODE)
+ /*
+ * For casefolded directories: the casefolded name, but it's left NULL
+- * if the original name is not valid Unicode, if the directory is both
+- * casefolded and encrypted and its encryption key is unavailable, or if
+- * the filesystem is doing an internal operation where usr_fname is also
+- * NULL. In all these cases we fall back to treating the name as an
+- * opaque byte sequence.
++ * if the original name is not valid Unicode, if the original name is
++ * "." or "..", if the directory is both casefolded and encrypted and
++ * its encryption key is unavailable, or if the filesystem is doing an
++ * internal operation where usr_fname is also NULL. In all these cases
++ * we fall back to treating the name as an opaque byte sequence.
+ */
+ struct fscrypt_str cf_name;
+ #endif
+@@ -1092,8 +1092,8 @@ enum count_type {
+ */
+ #define PAGE_TYPE_OF_BIO(type) ((type) > META ? META : (type))
+ enum page_type {
+- DATA,
+- NODE,
++ DATA = 0,
++ NODE = 1, /* should not change this */
+ META,
+ NR_PAGE_TYPE,
+ META_FLUSH,
+@@ -2509,11 +2509,17 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi,
+ {
+ spin_lock(&sbi->stat_lock);
+
+- f2fs_bug_on(sbi, !sbi->total_valid_block_count);
+- f2fs_bug_on(sbi, !sbi->total_valid_node_count);
++ if (unlikely(!sbi->total_valid_block_count ||
++ !sbi->total_valid_node_count)) {
++ f2fs_warn(sbi, "dec_valid_node_count: inconsistent block counts, total_valid_block:%u, total_valid_node:%u",
++ sbi->total_valid_block_count,
++ sbi->total_valid_node_count);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ } else {
++ sbi->total_valid_block_count--;
++ sbi->total_valid_node_count--;
++ }
+
+- sbi->total_valid_node_count--;
+- sbi->total_valid_block_count--;
+ if (sbi->reserved_blocks &&
+ sbi->current_reserved_blocks < sbi->reserved_blocks)
+ sbi->current_reserved_blocks++;
+@@ -3949,6 +3955,7 @@ extern struct kmem_cache *f2fs_inode_entry_slab;
+ * inline.c
+ */
+ bool f2fs_may_inline_data(struct inode *inode);
++bool f2fs_sanity_check_inline_data(struct inode *inode);
+ bool f2fs_may_inline_dentry(struct inode *inode);
+ void f2fs_do_read_inline_data(struct page *page, struct page *ipage);
+ void f2fs_truncate_inline_inode(struct inode *inode,
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index b110c3a7db6ae..29cd65f7c9eb8 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1437,11 +1437,19 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
+ ret = -ENOSPC;
+ break;
+ }
+- if (dn->data_blkaddr != NEW_ADDR) {
+- f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
+- dn->data_blkaddr = NEW_ADDR;
+- f2fs_set_data_blkaddr(dn);
++
++ if (dn->data_blkaddr == NEW_ADDR)
++ continue;
++
++ if (!f2fs_is_valid_blkaddr(sbi, dn->data_blkaddr,
++ DATA_GENERIC_ENHANCE)) {
++ ret = -EFSCORRUPTED;
++ break;
+ }
++
++ f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
++ dn->data_blkaddr = NEW_ADDR;
++ f2fs_set_data_blkaddr(dn);
+ }
+
+ f2fs_update_extent_cache_range(dn, start, 0, index - start);
+@@ -1766,6 +1774,10 @@ static long f2fs_fallocate(struct file *file, int mode,
+
+ inode_lock(inode);
+
++ ret = file_modified(file);
++ if (ret)
++ goto out;
++
+ if (mode & FALLOC_FL_PUNCH_HOLE) {
+ if (offset >= inode->i_size)
+ goto out;
+diff --git a/fs/f2fs/hash.c b/fs/f2fs/hash.c
+index 3cb1e7a24740f..049ce50cec9b0 100644
+--- a/fs/f2fs/hash.c
++++ b/fs/f2fs/hash.c
+@@ -91,7 +91,7 @@ static u32 TEA_hash_name(const u8 *p, size_t len)
+ /*
+ * Compute @fname->hash. For all directories, @fname->disk_name must be set.
+ * For casefolded directories, @fname->usr_fname must be set, and also
+- * @fname->cf_name if the filename is valid Unicode.
++ * @fname->cf_name if the filename is valid Unicode and is not "." or "..".
+ */
+ void f2fs_hash_filename(const struct inode *dir, struct f2fs_filename *fname)
+ {
+@@ -110,10 +110,11 @@ void f2fs_hash_filename(const struct inode *dir, struct f2fs_filename *fname)
+ /*
+ * If the casefolded name is provided, hash it instead of the
+ * on-disk name. If the casefolded name is *not* provided, that
+- * should only be because the name wasn't valid Unicode, so fall
+- * back to treating the name as an opaque byte sequence. Note
+- * that to handle encrypted directories, the fallback must use
+- * usr_fname (plaintext) rather than disk_name (ciphertext).
++ * should only be because the name wasn't valid Unicode or was
++ * "." or "..", so fall back to treating the name as an opaque
++ * byte sequence. Note that to handle encrypted directories,
++ * the fallback must use usr_fname (plaintext) rather than
++ * disk_name (ciphertext).
+ */
+ WARN_ON_ONCE(!fname->usr_fname->name);
+ if (fname->cf_name.name) {
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 4b5cefa3f90c1..c2d406616ff01 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -14,21 +14,40 @@
+ #include "node.h"
+ #include <trace/events/f2fs.h>
+
+-bool f2fs_may_inline_data(struct inode *inode)
++static bool support_inline_data(struct inode *inode)
+ {
+ if (f2fs_is_atomic_file(inode))
+ return false;
+-
+ if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))
+ return false;
+-
+ if (i_size_read(inode) > MAX_INLINE_DATA(inode))
+ return false;
++ return true;
++}
+
+- if (f2fs_post_read_required(inode))
++bool f2fs_may_inline_data(struct inode *inode)
++{
++ if (!support_inline_data(inode))
+ return false;
+
+- return true;
++ return !f2fs_post_read_required(inode);
++}
++
++bool f2fs_sanity_check_inline_data(struct inode *inode)
++{
++ if (!f2fs_has_inline_data(inode))
++ return false;
++
++ if (!support_inline_data(inode))
++ return true;
++
++ /*
++ * used by sanity_check_inode(), when disk layout fields have not
++ * been synchronized to inmem fields.
++ */
++ return (S_ISREG(inode->i_mode) &&
++ (file_is_encrypt(inode) || file_is_verity(inode) ||
++ (F2FS_I(inode)->i_flags & F2FS_COMPR_FL)));
+ }
+
+ bool f2fs_may_inline_dentry(struct inode *inode)
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 83639238a1fe9..e9818723103c6 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -276,8 +276,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ }
+ }
+
+- if (f2fs_has_inline_data(inode) &&
+- (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))) {
++ if (f2fs_sanity_check_inline_data(inode)) {
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+ f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix",
+ __func__, inode->i_ino, inode->i_mode);
+@@ -796,8 +795,22 @@ retry:
+ f2fs_lock_op(sbi);
+ err = f2fs_remove_inode_page(inode);
+ f2fs_unlock_op(sbi);
+- if (err == -ENOENT)
++ if (err == -ENOENT) {
+ err = 0;
++
++ /*
++ * in a fuzzed image, another node may have the same
++ * block address as the inode's; if it was truncated
++ * previously, truncation of inode node will fail.
++ */
++ if (is_inode_flag_set(inode, FI_DIRTY_INODE)) {
++ f2fs_warn(F2FS_I_SB(inode),
++ "f2fs_evict_inode: inconsistent node id, ino:%lu",
++ inode->i_ino);
++ f2fs_inode_synced(inode);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ }
++ }
+ }
+
+ /* give more chances, if ENOMEM case */
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 5f213f05556dc..53273aec2627f 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -460,6 +460,13 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
+ return 0;
+ }
+
++ if (!S_ISDIR(dir->i_mode)) {
++ f2fs_err(sbi, "inconsistent inode status, skip recovering inline_dots inode (ino:%lu, i_mode:%u, pino:%u)",
++ dir->i_ino, dir->i_mode, pino);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ return -ENOTDIR;
++ }
++
+ err = f2fs_dquot_initialize(dir);
+ if (err)
+ return err;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 416d802ebbea6..52e963d25796d 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -356,16 +356,19 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct list_head *head = &fi->inmem_pages;
+ struct inmem_pages *cur = NULL;
++ struct inmem_pages *tmp;
+
+ f2fs_bug_on(sbi, !page_private_atomic(page));
+
+ mutex_lock(&fi->inmem_lock);
+- list_for_each_entry(cur, head, list) {
+- if (cur->page == page)
++ list_for_each_entry(tmp, head, list) {
++ if (tmp->page == page) {
++ cur = tmp;
+ break;
++ }
+ }
+
+- f2fs_bug_on(sbi, list_empty(head) || cur->page != page);
++ f2fs_bug_on(sbi, !cur);
+ list_del(&cur->list);
+ mutex_unlock(&fi->inmem_lock);
+
+@@ -4550,7 +4553,7 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ unsigned int i, start, end;
+ unsigned int readed, start_blk = 0;
+ int err = 0;
+- block_t total_node_blocks = 0;
++ block_t sit_valid_blocks[2] = {0, 0};
+
+ do {
+ readed = f2fs_ra_meta_pages(sbi, start_blk, BIO_MAX_VECS,
+@@ -4575,8 +4578,8 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ if (err)
+ return err;
+ seg_info_from_raw_sit(se, &sit);
+- if (IS_NODESEG(se->type))
+- total_node_blocks += se->valid_blocks;
++
++ sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+
+ if (f2fs_block_unit_discard(sbi)) {
+ /* build discard map only one time */
+@@ -4616,15 +4619,15 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ sit = sit_in_journal(journal, i);
+
+ old_valid_blocks = se->valid_blocks;
+- if (IS_NODESEG(se->type))
+- total_node_blocks -= old_valid_blocks;
++
++ sit_valid_blocks[SE_PAGETYPE(se)] -= old_valid_blocks;
+
+ err = check_block_count(sbi, start, &sit);
+ if (err)
+ break;
+ seg_info_from_raw_sit(se, &sit);
+- if (IS_NODESEG(se->type))
+- total_node_blocks += se->valid_blocks;
++
++ sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+
+ if (f2fs_block_unit_discard(sbi)) {
+ if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) {
+@@ -4646,13 +4649,24 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ }
+ up_read(&curseg->journal_rwsem);
+
+- if (!err && total_node_blocks != valid_node_count(sbi)) {
++ if (err)
++ return err;
++
++ if (sit_valid_blocks[NODE] != valid_node_count(sbi)) {
+ f2fs_err(sbi, "SIT is corrupted node# %u vs %u",
+- total_node_blocks, valid_node_count(sbi));
+- err = -EFSCORRUPTED;
++ sit_valid_blocks[NODE], valid_node_count(sbi));
++ return -EFSCORRUPTED;
+ }
+
+- return err;
++ if (sit_valid_blocks[DATA] + sit_valid_blocks[NODE] >
++ valid_user_blocks(sbi)) {
++ f2fs_err(sbi, "SIT is corrupted data# %u %u vs %u",
++ sit_valid_blocks[DATA], sit_valid_blocks[NODE],
++ valid_user_blocks(sbi));
++ return -EFSCORRUPTED;
++ }
++
++ return 0;
+ }
+
+ static void init_free_segmap(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 0291cd55cf09b..2104481b8ebfa 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -24,6 +24,7 @@
+
+ #define IS_DATASEG(t) ((t) <= CURSEG_COLD_DATA)
+ #define IS_NODESEG(t) ((t) >= CURSEG_HOT_NODE && (t) <= CURSEG_COLD_NODE)
++#define SE_PAGETYPE(se) ((IS_NODESEG((se)->type) ? NODE : DATA))
+
+ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
+ unsigned short seg_type)
+@@ -572,11 +573,10 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
+ return GET_SEC_FROM_SEG(sbi, reserved_segments(sbi));
+ }
+
+-static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
++static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
++ unsigned int node_blocks, unsigned int dent_blocks)
+ {
+- unsigned int node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
+- get_pages(sbi, F2FS_DIRTY_DENTS);
+- unsigned int dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++
+ unsigned int segno, left_blocks;
+ int i;
+
+@@ -602,19 +602,28 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
+ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
+ int freed, int needed)
+ {
+- int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
+- int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
+- int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA);
++ unsigned int total_node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
++ get_pages(sbi, F2FS_DIRTY_DENTS) +
++ get_pages(sbi, F2FS_DIRTY_IMETA);
++ unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++ unsigned int node_secs = total_node_blocks / BLKS_PER_SEC(sbi);
++ unsigned int dent_secs = total_dent_blocks / BLKS_PER_SEC(sbi);
++ unsigned int node_blocks = total_node_blocks % BLKS_PER_SEC(sbi);
++ unsigned int dent_blocks = total_dent_blocks % BLKS_PER_SEC(sbi);
++ unsigned int free, need_lower, need_upper;
+
+ if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ return false;
+
+- if (free_sections(sbi) + freed == reserved_sections(sbi) + needed &&
+- has_curseg_enough_space(sbi))
++ free = free_sections(sbi) + freed;
++ need_lower = node_secs + dent_secs + reserved_sections(sbi) + needed;
++ need_upper = need_lower + (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++
++ if (free > need_upper)
+ return false;
+- return (free_sections(sbi) + freed) <=
+- (node_secs + 2 * dent_secs + imeta_secs +
+- reserved_sections(sbi) + needed);
++ else if (free <= need_lower)
++ return true;
++ return !has_curseg_enough_space(sbi, node_blocks, dent_blocks);
+ }
+
+ static inline bool f2fs_is_checkpoint_ready(struct f2fs_sb_info *sbi)
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index c4f8510fac930..40a44297da429 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2706,7 +2706,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ if (!sb_has_quota_active(sb, cnt))
+ continue;
+
+- inode_lock(dqopt->files[cnt]);
++ if (!f2fs_sb_has_quota_ino(sbi))
++ inode_lock(dqopt->files[cnt]);
+
+ /*
+ * do_quotactl
+@@ -2725,7 +2726,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ up_read(&sbi->quota_sem);
+ f2fs_unlock_op(sbi);
+
+- inode_unlock(dqopt->files[cnt]);
++ if (!f2fs_sb_has_quota_ino(sbi))
++ inode_unlock(dqopt->files[cnt]);
+
+ if (ret)
+ break;
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index 978ac6751aeb7..1db348f8f887a 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -94,7 +94,8 @@ static int fat12_ent_bread(struct super_block *sb, struct fat_entry *fatent,
+ err_brelse:
+ brelse(bhs[0]);
+ err:
+- fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", (llu)blocknr);
++ fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
++ (llu)blocknr);
+ return -EIO;
+ }
+
+@@ -107,8 +108,8 @@ static int fat_ent_bread(struct super_block *sb, struct fat_entry *fatent,
+ fatent->fat_inode = MSDOS_SB(sb)->fat_inode;
+ fatent->bhs[0] = sb_bread(sb, blocknr);
+ if (!fatent->bhs[0]) {
+- fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
+- (llu)blocknr);
++ fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
++ (llu)blocknr);
+ return -EIO;
+ }
+ fatent->nr_bhs = 1;
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index ab5fa50664f66..d6cb4b52758b8 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -1816,11 +1816,12 @@ static long writeback_sb_inodes(struct super_block *sb,
+ };
+ unsigned long start_time = jiffies;
+ long write_chunk;
+- long wrote = 0; /* count both pages and inodes */
++ long total_wrote = 0; /* count both pages and inodes */
+
+ while (!list_empty(&wb->b_io)) {
+ struct inode *inode = wb_inode(wb->b_io.prev);
+ struct bdi_writeback *tmp_wb;
++ long wrote;
+
+ if (inode->i_sb != sb) {
+ if (work->sb) {
+@@ -1896,7 +1897,9 @@ static long writeback_sb_inodes(struct super_block *sb,
+
+ wbc_detach_inode(&wbc);
+ work->nr_pages -= write_chunk - wbc.nr_to_write;
+- wrote += write_chunk - wbc.nr_to_write;
++ wrote = write_chunk - wbc.nr_to_write - wbc.pages_skipped;
++ wrote = wrote < 0 ? 0 : wrote;
++ total_wrote += wrote;
+
+ if (need_resched()) {
+ /*
+@@ -1919,7 +1922,7 @@ static long writeback_sb_inodes(struct super_block *sb,
+ tmp_wb = inode_to_wb_and_lock_list(inode);
+ spin_lock(&inode->i_lock);
+ if (!(inode->i_state & I_DIRTY_ALL))
+- wrote++;
++ total_wrote++;
+ requeue_inode(inode, tmp_wb, &wbc);
+ inode_sync_complete(inode);
+ spin_unlock(&inode->i_lock);
+@@ -1933,14 +1936,14 @@ static long writeback_sb_inodes(struct super_block *sb,
+ * bail out to wb_writeback() often enough to check
+ * background threshold and other termination conditions.
+ */
+- if (wrote) {
++ if (total_wrote) {
+ if (time_is_before_jiffies(start_time + HZ / 10UL))
+ break;
+ if (work->nr_pages <= 0)
+ break;
+ }
+ }
+- return wrote;
++ return total_wrote;
+ }
+
+ static long __writeback_inodes_wb(struct bdi_writeback *wb,
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index be0997e24d60b..dc77080a82bbf 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -531,34 +531,42 @@ static void qdsb_put(struct gfs2_quota_data *qd)
+ */
+ int gfs2_qa_get(struct gfs2_inode *ip)
+ {
+- int error = 0;
+ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
++ struct inode *inode = &ip->i_inode;
+
+ if (sdp->sd_args.ar_quota == GFS2_QUOTA_OFF)
+ return 0;
+
+- down_write(&ip->i_rw_mutex);
++ spin_lock(&inode->i_lock);
+ if (ip->i_qadata == NULL) {
+- ip->i_qadata = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS);
+- if (!ip->i_qadata) {
+- error = -ENOMEM;
+- goto out;
+- }
++ struct gfs2_qadata *tmp;
++
++ spin_unlock(&inode->i_lock);
++ tmp = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS);
++ if (!tmp)
++ return -ENOMEM;
++
++ spin_lock(&inode->i_lock);
++ if (ip->i_qadata == NULL)
++ ip->i_qadata = tmp;
++ else
++ kmem_cache_free(gfs2_qadata_cachep, tmp);
+ }
+ ip->i_qadata->qa_ref++;
+-out:
+- up_write(&ip->i_rw_mutex);
+- return error;
++ spin_unlock(&inode->i_lock);
++ return 0;
+ }
+
+ void gfs2_qa_put(struct gfs2_inode *ip)
+ {
+- down_write(&ip->i_rw_mutex);
++ struct inode *inode = &ip->i_inode;
++
++ spin_lock(&inode->i_lock);
+ if (ip->i_qadata && --ip->i_qadata->qa_ref == 0) {
+ kmem_cache_free(gfs2_qadata_cachep, ip->i_qadata);
+ ip->i_qadata = NULL;
+ }
+- up_write(&ip->i_rw_mutex);
++ spin_unlock(&inode->i_lock);
+ }
+
+ int gfs2_quota_hold(struct gfs2_inode *ip, kuid_t uid, kgid_t gid)
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index ed85051b12754..e5a411ec9b330 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -1048,12 +1048,12 @@ static int hugetlbfs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ if (sbinfo->spool) {
+ long free_pages;
+
+- spin_lock(&sbinfo->spool->lock);
++ spin_lock_irq(&sbinfo->spool->lock);
+ buf->f_blocks = sbinfo->spool->max_hpages;
+ free_pages = sbinfo->spool->max_hpages
+ - sbinfo->spool->used_hpages;
+ buf->f_bavail = buf->f_bfree = free_pages;
+- spin_unlock(&sbinfo->spool->lock);
++ spin_unlock_irq(&sbinfo->spool->lock);
+ buf->f_files = sbinfo->max_inodes;
+ buf->f_ffree = sbinfo->free_inodes;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index a0680046ff3c7..0bd592af1bf7d 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -740,6 +740,8 @@ enum {
+ REQ_F_ARM_LTIMEOUT_BIT,
+ REQ_F_ASYNC_DATA_BIT,
+ REQ_F_SKIP_LINK_CQES_BIT,
++ REQ_F_SINGLE_POLL_BIT,
++ REQ_F_DOUBLE_POLL_BIT,
+ /* keep async read/write and isreg together and in order */
+ REQ_F_SUPPORT_NOWAIT_BIT,
+ REQ_F_ISREG_BIT,
+@@ -798,6 +800,10 @@ enum {
+ REQ_F_ASYNC_DATA = BIT(REQ_F_ASYNC_DATA_BIT),
+ /* don't post CQEs while failing linked requests */
+ REQ_F_SKIP_LINK_CQES = BIT(REQ_F_SKIP_LINK_CQES_BIT),
++ /* single poll may be active */
++ REQ_F_SINGLE_POLL = BIT(REQ_F_SINGLE_POLL_BIT),
++ /* double poll may be active */
++ REQ_F_DOUBLE_POLL = BIT(REQ_F_DOUBLE_POLL_BIT),
+ };
+
+ struct async_poll {
+@@ -5464,8 +5470,12 @@ static inline void io_poll_remove_entry(struct io_poll_iocb *poll)
+
+ static void io_poll_remove_entries(struct io_kiocb *req)
+ {
+- struct io_poll_iocb *poll = io_poll_get_single(req);
+- struct io_poll_iocb *poll_double = io_poll_get_double(req);
++ /*
++ * Nothing to do if neither of those flags is set. Avoid dipping
++ * into the poll/apoll/double cachelines if we can.
++ */
++ if (!(req->flags & (REQ_F_SINGLE_POLL | REQ_F_DOUBLE_POLL)))
++ return;
+
+ /*
+ * While we hold the waitqueue lock and the waitqueue is nonempty,
+@@ -5483,9 +5493,10 @@ static void io_poll_remove_entries(struct io_kiocb *req)
+ * In that case, only RCU prevents the queue memory from being freed.
+ */
+ rcu_read_lock();
+- io_poll_remove_entry(poll);
+- if (poll_double)
+- io_poll_remove_entry(poll_double);
++ if (req->flags & REQ_F_SINGLE_POLL)
++ io_poll_remove_entry(io_poll_get_single(req));
++ if (req->flags & REQ_F_DOUBLE_POLL)
++ io_poll_remove_entry(io_poll_get_double(req));
+ rcu_read_unlock();
+ }
+
+@@ -5621,10 +5632,14 @@ static void io_poll_cancel_req(struct io_kiocb *req)
+ io_poll_execute(req, 0);
+ }
+
++#define wqe_to_req(wait) ((void *)((unsigned long) (wait)->private & ~1))
++#define wqe_is_double(wait) ((unsigned long) (wait)->private & 1)
++#define IO_ASYNC_POLL_COMMON (EPOLLONESHOT | POLLPRI)
++
+ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ void *key)
+ {
+- struct io_kiocb *req = wait->private;
++ struct io_kiocb *req = wqe_to_req(wait);
+ struct io_poll_iocb *poll = container_of(wait, struct io_poll_iocb,
+ wait);
+ __poll_t mask = key_to_poll(key);
+@@ -5654,7 +5669,7 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ }
+
+ /* for instances that support it check for an event match first */
+- if (mask && !(mask & poll->events))
++ if (mask && !(mask & (poll->events & ~IO_ASYNC_POLL_COMMON)))
+ return 0;
+
+ if (io_poll_get_ownership(req)) {
+@@ -5662,6 +5677,10 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ if (mask && poll->events & EPOLLONESHOT) {
+ list_del_init(&poll->wait.entry);
+ poll->head = NULL;
++ if (wqe_is_double(wait))
++ req->flags &= ~REQ_F_DOUBLE_POLL;
++ else
++ req->flags &= ~REQ_F_SINGLE_POLL;
+ }
+ __io_poll_execute(req, mask);
+ }
+@@ -5673,6 +5692,7 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ struct io_poll_iocb **poll_ptr)
+ {
+ struct io_kiocb *req = pt->req;
++ unsigned long wqe_private = (unsigned long) req;
+
+ /*
+ * The file being polled uses multiple waitqueues for poll handling
+@@ -5698,15 +5718,19 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ pt->error = -ENOMEM;
+ return;
+ }
++ /* mark as double wq entry */
++ wqe_private |= 1;
++ req->flags |= REQ_F_DOUBLE_POLL;
+ io_init_poll_iocb(poll, first->events, first->wait.func);
+ *poll_ptr = poll;
+ if (req->opcode == IORING_OP_POLL_ADD)
+ req->flags |= REQ_F_ASYNC_DATA;
+ }
+
++ req->flags |= REQ_F_SINGLE_POLL;
+ pt->nr_entries++;
+ poll->head = head;
+- poll->wait.private = req;
++ poll->wait.private = (void *) wqe_private;
+
+ if (poll->events & EPOLLEXCLUSIVE)
+ add_wait_queue_exclusive(head, &poll->wait);
+@@ -5733,7 +5757,6 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
+ INIT_HLIST_NODE(&req->hash_node);
+ io_init_poll_iocb(poll, mask, io_poll_wake);
+ poll->file = req->file;
+- poll->wait.private = req;
+
+ ipt->pt._key = mask;
+ ipt->req = req;
+@@ -5802,7 +5825,7 @@ static int io_arm_poll_handler(struct io_kiocb *req)
+ struct io_ring_ctx *ctx = req->ctx;
+ struct async_poll *apoll;
+ struct io_poll_table ipt;
+- __poll_t mask = EPOLLONESHOT | POLLERR | POLLPRI;
++ __poll_t mask = IO_ASYNC_POLL_COMMON | POLLERR;
+ int ret;
+
+ if (!def->pollin && !def->pollout)
+@@ -6939,6 +6962,8 @@ fail:
+ * wait for request slots on the block side.
+ */
+ if (!needs_poll) {
++ if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
++ break;
+ cond_resched();
+ continue;
+ }
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index d020a2e81a24c..7e4ba8b2b25ec 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -542,7 +542,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
+ * write started inside the existing inode size.
+ */
+ if (pos + len > i_size)
+- truncate_pagecache_range(inode, max(pos, i_size), pos + len);
++ truncate_pagecache_range(inode, max(pos, i_size),
++ pos + len - 1);
+ }
+
+ static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index d8502f4989d9d..e75f31b81d634 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -385,7 +385,8 @@ int dbFree(struct inode *ip, s64 blkno, s64 nblocks)
+ }
+
+ /* write the last buffer. */
+- write_metapage(mp);
++ if (mp)
++ write_metapage(mp);
+
+ IREAD_UNLOCK(ipbmap);
+
+diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
+index 208d2cff7bd37..bc6050b67256d 100644
+--- a/fs/ksmbd/connection.c
++++ b/fs/ksmbd/connection.c
+@@ -62,7 +62,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ atomic_set(&conn->req_running, 0);
+ atomic_set(&conn->r_count, 0);
+ conn->total_credits = 1;
+- conn->outstanding_credits = 1;
++ conn->outstanding_credits = 0;
+
+ init_waitqueue_head(&conn->req_running_q);
+ INIT_LIST_HEAD(&conn->conns_list);
+diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
+index 4a9460153b595..f8f456377a51d 100644
+--- a/fs/ksmbd/smb2misc.c
++++ b/fs/ksmbd/smb2misc.c
+@@ -338,7 +338,7 @@ static int smb2_validate_credit_charge(struct ksmbd_conn *conn,
+ ret = 1;
+ }
+
+- if ((u64)conn->outstanding_credits + credit_charge > conn->vals->max_credits) {
++ if ((u64)conn->outstanding_credits + credit_charge > conn->total_credits) {
+ ksmbd_debug(SMB, "Limits exceeding the maximum allowable outstanding requests, given : %u, pending : %u\n",
+ credit_charge, conn->outstanding_credits);
+ ret = 1;
+diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
+index 9a7e211dbf4f4..7f8ab14fb8ec1 100644
+--- a/fs/ksmbd/smb_common.c
++++ b/fs/ksmbd/smb_common.c
+@@ -140,8 +140,10 @@ int ksmbd_verify_smb_message(struct ksmbd_work *work)
+
+ hdr = work->request_buf;
+ if (*(__le32 *)hdr->Protocol == SMB1_PROTO_NUMBER &&
+- hdr->Command == SMB_COM_NEGOTIATE)
++ hdr->Command == SMB_COM_NEGOTIATE) {
++ work->conn->outstanding_credits++;
+ return 0;
++ }
+
+ return -EINVAL;
+ }
+diff --git a/fs/namei.c b/fs/namei.c
+index 509657fdf4f56..fd3c95ac261bd 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2768,7 +2768,8 @@ struct dentry *lookup_one(struct user_namespace *mnt_userns, const char *name,
+ EXPORT_SYMBOL(lookup_one);
+
+ /**
+- * lookup_one_len_unlocked - filesystem helper to lookup single pathname component
++ * lookup_one_unlocked - filesystem helper to lookup single pathname component
++ * @mnt_userns: idmapping of the mount the lookup is performed from
+ * @name: pathname component to lookup
+ * @base: base directory to lookup from
+ * @len: maximum length @len should be interpreted to
+@@ -2779,14 +2780,15 @@ EXPORT_SYMBOL(lookup_one);
+ * Unlike lookup_one_len, it should be called without the parent
+ * i_mutex held, and will take the i_mutex itself if necessary.
+ */
+-struct dentry *lookup_one_len_unlocked(const char *name,
+- struct dentry *base, int len)
++struct dentry *lookup_one_unlocked(struct user_namespace *mnt_userns,
++ const char *name, struct dentry *base,
++ int len)
+ {
+ struct qstr this;
+ int err;
+ struct dentry *ret;
+
+- err = lookup_one_common(&init_user_ns, name, base, len, &this);
++ err = lookup_one_common(mnt_userns, name, base, len, &this);
+ if (err)
+ return ERR_PTR(err);
+
+@@ -2795,6 +2797,59 @@ struct dentry *lookup_one_len_unlocked(const char *name,
+ ret = lookup_slow(&this, base, 0);
+ return ret;
+ }
++EXPORT_SYMBOL(lookup_one_unlocked);
++
++/**
++ * lookup_one_positive_unlocked - filesystem helper to lookup single
++ * pathname component
++ * @mnt_userns: idmapping of the mount the lookup is performed from
++ * @name: pathname component to lookup
++ * @base: base directory to lookup from
++ * @len: maximum length @len should be interpreted to
++ *
++ * This helper will yield ERR_PTR(-ENOENT) on negatives. The helper returns
++ * a known-positive dentry or ERR_PTR(). This is what most users want.
++ *
++ * Note that pinned negative with unlocked parent _can_ become positive at any
++ * time, so callers of lookup_one_unlocked() need to be very careful; pinned
++ * positives have ->d_inode stable, so this one avoids such problems.
++ *
++ * Note that this routine is purely a helper for filesystem usage and should
++ * not be called by generic code.
++ *
++ * The helper should be called without i_mutex held.
++ */
++struct dentry *lookup_one_positive_unlocked(struct user_namespace *mnt_userns,
++ const char *name,
++ struct dentry *base, int len)
++{
++ struct dentry *ret = lookup_one_unlocked(mnt_userns, name, base, len);
++
++ if (!IS_ERR(ret) && d_flags_negative(smp_load_acquire(&ret->d_flags))) {
++ dput(ret);
++ ret = ERR_PTR(-ENOENT);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(lookup_one_positive_unlocked);
++
++/**
++ * lookup_one_len_unlocked - filesystem helper to lookup single pathname component
++ * @name: pathname component to lookup
++ * @base: base directory to lookup from
++ * @len: maximum length @len should be interpreted to
++ *
++ * Note that this routine is purely a helper for filesystem usage and should
++ * not be called by generic code.
++ *
++ * Unlike lookup_one_len, it should be called without the parent
++ * i_mutex held, and will take the i_mutex itself if necessary.
++ */
++struct dentry *lookup_one_len_unlocked(const char *name,
++ struct dentry *base, int len)
++{
++ return lookup_one_unlocked(&init_user_ns, name, base, len);
++}
+ EXPORT_SYMBOL(lookup_one_len_unlocked);
+
+ /*
+@@ -2808,12 +2863,7 @@ EXPORT_SYMBOL(lookup_one_len_unlocked);
+ struct dentry *lookup_positive_unlocked(const char *name,
+ struct dentry *base, int len)
+ {
+- struct dentry *ret = lookup_one_len_unlocked(name, base, len);
+- if (!IS_ERR(ret) && d_flags_negative(smp_load_acquire(&ret->d_flags))) {
+- dput(ret);
+- ret = ERR_PTR(-ENOENT);
+- }
+- return ret;
++ return lookup_one_positive_unlocked(&init_user_ns, name, base, len);
+ }
+ EXPORT_SYMBOL(lookup_positive_unlocked);
+
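
The namei.c changes above thread a user namespace through the unlocked lookup helpers and add lookup_one_positive_unlocked(), which turns a negative dentry into ERR_PTR(-ENOENT) so callers only ever see a usable dentry or an encoded error. A simplified userspace model of that pointer-encoded-error idiom follows; the dentry handling is reduced to a flag, and in the kernel the rejected negative dentry is also dput():

/* Illustrative userspace model of the kernel's ERR_PTR idiom (not kernel code). */
#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

struct dentry { int positive; };

/* Models lookup_one_positive_unlocked(): negative results become -ENOENT. */
static struct dentry *lookup_positive(struct dentry *found)
{
	if (!IS_ERR(found) && found && !found->positive)
		return ERR_PTR(-ENOENT);
	return found;
}

int main(void)
{
	struct dentry neg = { .positive = 0 };
	struct dentry *d = lookup_positive(&neg);

	if (IS_ERR(d))
		printf("lookup failed: %ld\n", PTR_ERR(d)); /* -2 == -ENOENT */
	return 0;
}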
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index d8583f57ff99f..832a97ed2165a 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -209,15 +209,16 @@ static int
+ nfs_file_fsync_commit(struct file *file, int datasync)
+ {
+ struct inode *inode = file_inode(file);
+- int ret;
++ int ret, ret2;
+
+ dprintk("NFS: fsync file(%pD2) datasync %d\n", file, datasync);
+
+ nfs_inc_stats(inode, NFSIOS_VFSFSYNC);
+ ret = nfs_commit_inode(inode, FLUSH_SYNC);
+- if (ret < 0)
+- return ret;
+- return file_check_and_advance_wb_err(file);
++ ret2 = file_check_and_advance_wb_err(file);
++ if (ret2 < 0)
++ return ret2;
++ return ret;
+ }
+
+ int
+@@ -390,11 +391,8 @@ static int nfs_write_end(struct file *file, struct address_space *mapping,
+ return status;
+ NFS_I(mapping->host)->write_io += copied;
+
+- if (nfs_ctx_key_to_expire(ctx, mapping->host)) {
+- status = nfs_wb_all(mapping->host);
+- if (status < 0)
+- return status;
+- }
++ if (nfs_ctx_key_to_expire(ctx, mapping->host))
++ nfs_wb_all(mapping->host);
+
+ return copied;
+ }
+@@ -593,18 +591,6 @@ static const struct vm_operations_struct nfs_file_vm_ops = {
+ .page_mkwrite = nfs_vm_page_mkwrite,
+ };
+
+-static int nfs_need_check_write(struct file *filp, struct inode *inode,
+- int error)
+-{
+- struct nfs_open_context *ctx;
+-
+- ctx = nfs_file_open_context(filp);
+- if (nfs_error_is_fatal_on_server(error) ||
+- nfs_ctx_key_to_expire(ctx, inode))
+- return 1;
+- return 0;
+-}
+-
+ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ {
+ struct file *file = iocb->ki_filp;
+@@ -632,7 +618,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ if (iocb->ki_flags & IOCB_APPEND || iocb->ki_pos > i_size_read(inode)) {
+ result = nfs_revalidate_file_size(inode, file);
+ if (result)
+- goto out;
++ return result;
+ }
+
+ nfs_clear_invalid_mapping(file->f_mapping);
+@@ -651,6 +637,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+
+ written = result;
+ iocb->ki_pos += written;
++ nfs_add_stats(inode, NFSIOS_NORMALWRITTENBYTES, written);
+
+ if (mntflags & NFS_MOUNT_WRITE_EAGER) {
+ result = filemap_fdatawrite_range(file->f_mapping,
+@@ -668,17 +655,22 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ }
+ result = generic_write_sync(iocb, written);
+ if (result < 0)
+- goto out;
++ return result;
+
++out:
+ /* Return error values */
+ error = filemap_check_wb_err(file->f_mapping, since);
+- if (nfs_need_check_write(file, inode, error)) {
+- int err = nfs_wb_all(inode);
+- if (err < 0)
+- result = err;
++ switch (error) {
++ default:
++ break;
++ case -EDQUOT:
++ case -EFBIG:
++ case -ENOSPC:
++ nfs_wb_all(inode);
++ error = file_check_and_advance_wb_err(file);
++ if (error < 0)
++ result = error;
+ }
+- nfs_add_stats(inode, NFSIOS_NORMALWRITTENBYTES, written);
+-out:
+ return result;
+
+ out_swapfile:
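
In nfs_file_fsync_commit() above, a commit failure no longer causes an early return that could drop a recorded writeback error; both error sources are collected and the writeback error takes precedence. Reduced to its control flow (userspace sketch, error values inlined for illustration):

/* Sketch of the fsync error-precedence change (not kernel code). */
#include <stdio.h>

static int fsync_commit(int commit_err, int wb_err)
{
	int ret = commit_err; /* result of committing dirty data */
	int ret2 = wb_err;    /* recorded async writeback error  */

	if (ret2 < 0)
		return ret2;  /* writeback error wins when present */
	return ret;
}

int main(void)
{
	printf("%d\n", fsync_commit(-5, 0));   /* -EIO from commit is kept */
	printf("%d\n", fsync_commit(0, -28));  /* -ENOSPC from writeback wins */
	return 0;
}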
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index e4fb939a2904b..c32040c968b67 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1582,7 +1582,7 @@ struct nfs_fattr *nfs_alloc_fattr(void)
+ {
+ struct nfs_fattr *fattr;
+
+- fattr = kmalloc(sizeof(*fattr), GFP_NOFS);
++ fattr = kmalloc(sizeof(*fattr), GFP_KERNEL);
+ if (fattr != NULL) {
+ nfs_fattr_init(fattr);
+ fattr->label = NULL;
+@@ -1598,7 +1598,7 @@ struct nfs_fattr *nfs_alloc_fattr_with_label(struct nfs_server *server)
+ if (!fattr)
+ return NULL;
+
+- fattr->label = nfs4_label_alloc(server, GFP_NOFS);
++ fattr->label = nfs4_label_alloc(server, GFP_KERNEL);
+ if (IS_ERR(fattr->label)) {
+ kfree(fattr);
+ return NULL;
+@@ -1612,7 +1612,7 @@ struct nfs_fh *nfs_alloc_fhandle(void)
+ {
+ struct nfs_fh *fh;
+
+- fh = kmalloc(sizeof(struct nfs_fh), GFP_NOFS);
++ fh = kmalloc(sizeof(struct nfs_fh), GFP_KERNEL);
+ if (fh != NULL)
+ fh->size = 0;
+ return fh;
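
The GFP_NOFS -> GFP_KERNEL conversions in this and the following NFS files follow the kernel's move toward scoped reclaim control: rather than tagging every allocation, a section that must not recurse into filesystem reclaim is bracketed with memalloc_nofs_save()/memalloc_nofs_restore(), and ordinary GFP_KERNEL allocations inside it behave as GFP_NOFS. The kernel APIs exist under those names; everything else in this userspace model is illustrative:

/* Userspace model of scoped "no fs reclaim" allocation context. */
#include <stdio.h>
#include <stdlib.h>

static _Thread_local unsigned int nofs_depth;

static unsigned int memalloc_nofs_save(void) { return nofs_depth++; }
static void memalloc_nofs_restore(unsigned int old) { nofs_depth = old; }

static void *alloc(size_t size)
{
	if (nofs_depth)
		printf("allocating %zu bytes without fs reclaim\n", size);
	else
		printf("allocating %zu bytes, fs reclaim allowed\n", size);
	return malloc(size);
}

int main(void)
{
	unsigned int cookie = memalloc_nofs_save();
	void *p = alloc(64);  /* behaves as if GFP_NOFS */

	memalloc_nofs_restore(cookie);
	free(p);
	p = alloc(64);        /* plain GFP_KERNEL-style */
	free(p);
	return 0;
}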
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 3d307854c6504..1db686509a3e1 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1162,7 +1162,7 @@ static int nfs4_call_sync_sequence(struct rpc_clnt *clnt,
+ {
+ unsigned short task_flags = 0;
+
+- if (server->nfs_client->cl_minorversion)
++ if (server->caps & NFS_CAP_MOVEABLE)
+ task_flags = RPC_TASK_MOVEABLE;
+ return nfs4_do_call_sync(clnt, server, msg, args, res, task_flags);
+ }
+@@ -2573,7 +2573,7 @@ static int nfs4_run_open_task(struct nfs4_opendata *data,
+ };
+ int status;
+
+- if (server->nfs_client->cl_minorversion)
++ if (nfs_server_capable(dir, NFS_CAP_MOVEABLE))
+ task_setup_data.flags |= RPC_TASK_MOVEABLE;
+
+ kref_get(&data->kref);
+@@ -3736,7 +3736,7 @@ int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait)
+ };
+ int status = -ENOMEM;
+
+- if (server->nfs_client->cl_minorversion)
++ if (nfs_server_capable(state->inode, NFS_CAP_MOVEABLE))
+ task_setup_data.flags |= RPC_TASK_MOVEABLE;
+
+ nfs4_state_protect(server->nfs_client, NFS_SP4_MACH_CRED_CLEANUP,
+@@ -4407,7 +4407,7 @@ static int _nfs4_proc_lookup(struct rpc_clnt *clnt, struct inode *dir,
+ };
+ unsigned short task_flags = 0;
+
+- if (server->nfs_client->cl_minorversion)
++ if (nfs_server_capable(dir, NFS_CAP_MOVEABLE))
+ task_flags = RPC_TASK_MOVEABLE;
+
+ /* Is this is an attribute revalidation, subject to softreval? */
+@@ -5912,7 +5912,7 @@ static ssize_t __nfs4_get_acl_uncached(struct inode *inode, void *buf, size_t bu
+ buflen = server->rsize;
+
+ npages = DIV_ROUND_UP(buflen, PAGE_SIZE) + 1;
+- pages = kmalloc_array(npages, sizeof(struct page *), GFP_NOFS);
++ pages = kmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+@@ -6615,11 +6615,14 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
+ .rpc_client = server->client,
+ .rpc_message = &msg,
+ .callback_ops = &nfs4_delegreturn_ops,
+- .flags = RPC_TASK_ASYNC | RPC_TASK_TIMEOUT | RPC_TASK_MOVEABLE,
++ .flags = RPC_TASK_ASYNC | RPC_TASK_TIMEOUT,
+ };
+ int status = 0;
+
+- data = kzalloc(sizeof(*data), GFP_NOFS);
++ if (nfs_server_capable(inode, NFS_CAP_MOVEABLE))
++ task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
++ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (data == NULL)
+ return -ENOMEM;
+
+@@ -6807,7 +6810,7 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
+ struct nfs4_state *state = lsp->ls_state;
+ struct inode *inode = state->inode;
+
+- p = kzalloc(sizeof(*p), GFP_NOFS);
++ p = kzalloc(sizeof(*p), GFP_KERNEL);
+ if (p == NULL)
+ return NULL;
+ p->arg.fh = NFS_FH(inode);
+@@ -6932,10 +6935,8 @@ static struct rpc_task *nfs4_do_unlck(struct file_lock *fl,
+ .workqueue = nfsiod_workqueue,
+ .flags = RPC_TASK_ASYNC,
+ };
+- struct nfs_client *client =
+- NFS_SERVER(lsp->ls_state->inode)->nfs_client;
+
+- if (client->cl_minorversion)
++ if (nfs_server_capable(lsp->ls_state->inode, NFS_CAP_MOVEABLE))
+ task_setup_data.flags |= RPC_TASK_MOVEABLE;
+
+ nfs4_state_protect(NFS_SERVER(lsp->ls_state->inode)->nfs_client,
+@@ -7206,14 +7207,12 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f
+ .flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF,
+ };
+ int ret;
+- struct nfs_client *client = NFS_SERVER(state->inode)->nfs_client;
+
+- if (client->cl_minorversion)
++ if (nfs_server_capable(state->inode, NFS_CAP_MOVEABLE))
+ task_setup_data.flags |= RPC_TASK_MOVEABLE;
+
+ data = nfs4_alloc_lockdata(fl, nfs_file_open_context(fl->fl_file),
+- fl->fl_u.nfs4_fl.owner,
+- recovery_type == NFS_LOCK_NEW ? GFP_KERNEL : GFP_NOFS);
++ fl->fl_u.nfs4_fl.owner, GFP_KERNEL);
+ if (data == NULL)
+ return -ENOMEM;
+ if (IS_SETLKW(cmd))
+@@ -7636,7 +7635,7 @@ nfs4_release_lockowner(struct nfs_server *server, struct nfs4_lock_state *lsp)
+ if (server->nfs_client->cl_mvops->minor_version != 0)
+ return;
+
+- data = kmalloc(sizeof(*data), GFP_NOFS);
++ data = kmalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return;
+ data->lsp = lsp;
+@@ -9302,7 +9301,7 @@ static struct rpc_task *_nfs41_proc_sequence(struct nfs_client *clp,
+ goto out_err;
+
+ ret = ERR_PTR(-ENOMEM);
+- calldata = kzalloc(sizeof(*calldata), GFP_NOFS);
++ calldata = kzalloc(sizeof(*calldata), GFP_KERNEL);
+ if (calldata == NULL)
+ goto out_put_clp;
+ nfs4_init_sequence(&calldata->args, &calldata->res, 0, is_privileged);
+@@ -10233,7 +10232,7 @@ static int nfs41_free_stateid(struct nfs_server *server,
+ &task_setup.rpc_client, &msg);
+
+ dprintk("NFS call free_stateid %p\n", stateid);
+- data = kmalloc(sizeof(*data), GFP_NOFS);
++ data = kmalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+ data->server = server;
+@@ -10382,7 +10381,8 @@ static const struct nfs4_minor_version_ops nfs_v4_1_minor_ops = {
+ | NFS_CAP_POSIX_LOCK
+ | NFS_CAP_STATEID_NFSV41
+ | NFS_CAP_ATOMIC_OPEN_V1
+- | NFS_CAP_LGOPEN,
++ | NFS_CAP_LGOPEN
++ | NFS_CAP_MOVEABLE,
+ .init_client = nfs41_init_client,
+ .shutdown_client = nfs41_shutdown_client,
+ .match_stateid = nfs41_match_stateid,
+@@ -10417,7 +10417,8 @@ static const struct nfs4_minor_version_ops nfs_v4_2_minor_ops = {
+ | NFS_CAP_LAYOUTSTATS
+ | NFS_CAP_CLONE
+ | NFS_CAP_LAYOUTERROR
+- | NFS_CAP_READ_PLUS,
++ | NFS_CAP_READ_PLUS
++ | NFS_CAP_MOVEABLE,
+ .init_client = nfs41_init_client,
+ .shutdown_client = nfs41_shutdown_client,
+ .match_stateid = nfs41_match_stateid,
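
The repeated nfs4proc.c hunks above replace checks of the minor version (cl_minorversion) with a per-server capability bit, NFS_CAP_MOVEABLE, advertised by the v4.1/v4.2 minor-version ops. Choosing RPC task flags then reduces to a bitmask test, roughly as below; the capability value mirrors the header, while the RPC flag value is an illustrative stand-in:

/* Illustrative model of the capability test replacing the version check. */
#include <stdio.h>

#define NFS_CAP_MOVEABLE  (1U << 31)
#define RPC_TASK_MOVEABLE 0x0004 /* illustrative flag value */

struct nfs_server { unsigned int caps; };

static unsigned short task_flags_for(const struct nfs_server *server)
{
	unsigned short flags = 0;

	if (server->caps & NFS_CAP_MOVEABLE)
		flags |= RPC_TASK_MOVEABLE;
	return flags;
}

int main(void)
{
	struct nfs_server v40 = { .caps = 0 };
	struct nfs_server v41 = { .caps = NFS_CAP_MOVEABLE };

	printf("v4.0 flags: %#x\n", task_flags_for(&v40));
	printf("v4.1 flags: %#x\n", task_flags_for(&v41));
	return 0;
}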
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 0f4818627ef0c..ea6162537957a 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -821,7 +821,7 @@ static void __nfs4_close(struct nfs4_state *state,
+
+ void nfs4_close_state(struct nfs4_state *state, fmode_t fmode)
+ {
+- __nfs4_close(state, fmode, GFP_NOFS, 0);
++ __nfs4_close(state, fmode, GFP_KERNEL, 0);
+ }
+
+ void nfs4_close_sync(struct nfs4_state *state, fmode_t fmode)
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 9157dd19b8b4f..317cedfa52bf6 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -767,6 +767,9 @@ int nfs_initiate_pgio(struct rpc_clnt *clnt, struct nfs_pgio_header *hdr,
+ .flags = RPC_TASK_ASYNC | flags,
+ };
+
++ if (nfs_server_capable(hdr->inode, NFS_CAP_MOVEABLE))
++ task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ hdr->rw_ops->rw_initiate(hdr, &msg, rpc_ops, &task_setup_data, how);
+
+ dprintk("NFS: initiated pgio call "
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 7ddd003ab8b1a..1b4dd8b828def 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1244,7 +1244,7 @@ pnfs_send_layoutreturn(struct pnfs_layout_hdr *lo,
+ int status = 0;
+
+ *pcred = NULL;
+- lrp = kzalloc(sizeof(*lrp), GFP_NOFS);
++ lrp = kzalloc(sizeof(*lrp), GFP_KERNEL);
+ if (unlikely(lrp == NULL)) {
+ status = -ENOMEM;
+ spin_lock(&ino->i_lock);
+@@ -2000,6 +2000,7 @@ lookup_again:
+ lo = pnfs_find_alloc_layout(ino, ctx, gfp_flags);
+ if (lo == NULL) {
+ spin_unlock(&ino->i_lock);
++ lseg = ERR_PTR(-ENOMEM);
+ trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
+ PNFS_UPDATE_LAYOUT_NOMEM);
+ goto out;
+@@ -2128,6 +2129,7 @@ lookup_again:
+
+ lgp = pnfs_alloc_init_layoutget_args(ino, ctx, &stateid, &arg, gfp_flags);
+ if (!lgp) {
++ lseg = ERR_PTR(-ENOMEM);
+ trace_pnfs_update_layout(ino, pos, count, iomode, lo, NULL,
+ PNFS_UPDATE_LAYOUT_NOMEM);
+ nfs_layoutget_end(lo);
+@@ -3261,7 +3263,7 @@ struct nfs4_threshold *pnfs_mdsthreshold_alloc(void)
+ {
+ struct nfs4_threshold *thp;
+
+- thp = kzalloc(sizeof(*thp), GFP_NOFS);
++ thp = kzalloc(sizeof(*thp), GFP_KERNEL);
+ if (!thp) {
+ dprintk("%s mdsthreshold allocation failed\n", __func__);
+ return NULL;
+diff --git a/fs/nfs/unlink.c b/fs/nfs/unlink.c
+index 5fa11e1aca4c2..d5ccf095b2a7d 100644
+--- a/fs/nfs/unlink.c
++++ b/fs/nfs/unlink.c
+@@ -102,6 +102,10 @@ static void nfs_do_call_unlink(struct inode *inode, struct nfs_unlinkdata *data)
+ };
+ struct rpc_task *task;
+ struct inode *dir = d_inode(data->dentry->d_parent);
++
++ if (nfs_server_capable(inode, NFS_CAP_MOVEABLE))
++ task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ nfs_sb_active(dir->i_sb);
+ data->args.fh = NFS_FH(dir);
+ nfs_fattr_init(data->res.dir_attr);
+@@ -344,6 +348,10 @@ nfs_async_rename(struct inode *old_dir, struct inode *new_dir,
+ .flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF,
+ };
+
++ if (nfs_server_capable(old_dir, NFS_CAP_MOVEABLE) &&
++ nfs_server_capable(new_dir, NFS_CAP_MOVEABLE))
++ task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (data == NULL)
+ return ERR_PTR(-ENOMEM);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 9388503030992..fda854de808b6 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -603,8 +603,9 @@ static void nfs_write_error(struct nfs_page *req, int error)
+ * Find an associated nfs write request, and prepare to flush it out
+ * May return an error if the user signalled nfs_wait_on_request().
+ */
+-static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+- struct page *page)
++static int nfs_page_async_flush(struct page *page,
++ struct writeback_control *wbc,
++ struct nfs_pageio_descriptor *pgio)
+ {
+ struct nfs_page *req;
+ int ret = 0;
+@@ -630,11 +631,11 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+ /*
+ * Remove the problematic req upon fatal errors on the server
+ */
+- if (nfs_error_is_fatal(ret)) {
+- if (nfs_error_is_fatal_on_server(ret))
+- goto out_launder;
+- } else
+- ret = -EAGAIN;
++ if (nfs_error_is_fatal_on_server(ret))
++ goto out_launder;
++ if (wbc->sync_mode == WB_SYNC_NONE)
++ ret = AOP_WRITEPAGE_ACTIVATE;
++ redirty_page_for_writepage(wbc, page);
+ nfs_redirty_request(req);
+ pgio->pg_error = 0;
+ } else
+@@ -650,15 +651,8 @@ out_launder:
+ static int nfs_do_writepage(struct page *page, struct writeback_control *wbc,
+ struct nfs_pageio_descriptor *pgio)
+ {
+- int ret;
+-
+ nfs_pageio_cond_complete(pgio, page_index(page));
+- ret = nfs_page_async_flush(pgio, page);
+- if (ret == -EAGAIN) {
+- redirty_page_for_writepage(wbc, page);
+- ret = AOP_WRITEPAGE_ACTIVATE;
+- }
+- return ret;
++ return nfs_page_async_flush(page, wbc, pgio);
+ }
+
+ /*
+@@ -677,11 +671,7 @@ static int nfs_writepage_locked(struct page *page,
+ err = nfs_do_writepage(page, wbc, &pgio);
+ pgio.pg_error = 0;
+ nfs_pageio_complete(&pgio);
+- if (err < 0)
+- return err;
+- if (nfs_error_is_fatal(pgio.pg_error))
+- return pgio.pg_error;
+- return 0;
++ return err;
+ }
+
+ int nfs_writepage(struct page *page, struct writeback_control *wbc)
+@@ -729,19 +719,19 @@ int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
+ priority = wb_priority(wbc);
+ }
+
+- nfs_pageio_init_write(&pgio, inode, priority, false,
+- &nfs_async_write_completion_ops);
+- pgio.pg_io_completion = ioc;
+- err = write_cache_pages(mapping, wbc, nfs_writepages_callback, &pgio);
+- pgio.pg_error = 0;
+- nfs_pageio_complete(&pgio);
++ do {
++ nfs_pageio_init_write(&pgio, inode, priority, false,
++ &nfs_async_write_completion_ops);
++ pgio.pg_io_completion = ioc;
++ err = write_cache_pages(mapping, wbc, nfs_writepages_callback,
++ &pgio);
++ pgio.pg_error = 0;
++ nfs_pageio_complete(&pgio);
++ } while (err < 0 && !nfs_error_is_fatal(err));
+ nfs_io_completion_put(ioc);
+
+ if (err < 0)
+ goto out_err;
+- err = pgio.pg_error;
+- if (nfs_error_is_fatal(err))
+- goto out_err;
+ return 0;
+ out_err:
+ return err;
+@@ -1436,7 +1426,7 @@ static void nfs_async_write_error(struct list_head *head, int error)
+ while (!list_empty(head)) {
+ req = nfs_list_entry(head->next);
+ nfs_list_remove_request(req);
+- if (nfs_error_is_fatal(error))
++ if (nfs_error_is_fatal_on_server(error))
+ nfs_write_error(req, error);
+ else
+ nfs_redirty_request(req);
+@@ -1711,6 +1701,10 @@ int nfs_initiate_commit(struct rpc_clnt *clnt, struct nfs_commit_data *data,
+ .flags = RPC_TASK_ASYNC | flags,
+ .priority = priority,
+ };
++
++ if (nfs_server_capable(data->inode, NFS_CAP_MOVEABLE))
++ task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ /* Set up the initial task struct. */
+ nfs_ops->commit_setup(data, &msg, &task_setup_data.rpc_client);
+ trace_nfs_initiate_commit(data);
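
nfs_writepages() above now retries the whole write_cache_pages() pass while the error is transient instead of failing out, with fatal-error handling delegated to nfs_error_is_fatal_on_server(). The retry shape, as a self-contained sketch with stand-in error codes:

/* Sketch of the writepages retry loop (userspace model, not kernel code). */
#include <stdio.h>

static int attempts_left = 2;

/* Stand-in for write_cache_pages(): one transient failure, then success. */
static int flush_once(void)
{
	return --attempts_left > 0 ? -11 /* -EAGAIN, non-fatal */ : 0;
}

static int error_is_fatal(int err)
{
	return err == -5 /* -EIO */ || err == -28 /* -ENOSPC */;
}

int main(void)
{
	int err;

	do {
		err = flush_once();
		printf("flush pass -> %d\n", err);
	} while (err < 0 && !error_is_fatal(err));

	return err < 0 ? 1 : 0;
}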
+diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
+index a4a69ab6ab280..a838909502907 100644
+--- a/fs/nfsd/nfscache.c
++++ b/fs/nfsd/nfscache.c
+@@ -212,7 +212,6 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ struct svc_cacherep *rp;
+ unsigned int i;
+
+- nfsd_reply_cache_stats_destroy(nn);
+ unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
+
+ for (i = 0; i < nn->drc_hashsize; i++) {
+@@ -223,6 +222,7 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ rp, nn);
+ }
+ }
++ nfsd_reply_cache_stats_destroy(nn);
+
+ kvfree(nn->drc_hashtbl);
+ nn->drc_hashtbl = NULL;
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index f2a1947ec5ee0..ead37db01ab56 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -264,7 +264,7 @@ static int create_fd(struct fsnotify_group *group, struct path *path,
+ * originally opened O_WRONLY.
+ */
+ new_file = dentry_open(path,
+- group->fanotify_data.f_flags | FMODE_NONOTIFY,
++ group->fanotify_data.f_flags | __FMODE_NONOTIFY,
+ current_cred());
+ if (IS_ERR(new_file)) {
+ /*
+@@ -1329,7 +1329,7 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
+ (!(fid_mode & FAN_REPORT_NAME) || !(fid_mode & FAN_REPORT_FID)))
+ return -EINVAL;
+
+- f_flags = O_RDWR | FMODE_NONOTIFY;
++ f_flags = O_RDWR | __FMODE_NONOTIFY;
+ if (flags & FAN_CLOEXEC)
+ f_flags |= O_CLOEXEC;
+ if (flags & FAN_NONBLOCK)
+diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c
+index 57f0d5d9f934e..3451708fd035c 100644
+--- a/fs/notify/fdinfo.c
++++ b/fs/notify/fdinfo.c
+@@ -83,16 +83,9 @@ static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark)
+ inode_mark = container_of(mark, struct inotify_inode_mark, fsn_mark);
+ inode = igrab(fsnotify_conn_inode(mark->connector));
+ if (inode) {
+- /*
+- * IN_ALL_EVENTS represents all of the mask bits
+- * that we expose to userspace. There is at
+- * least one bit (FS_EVENT_ON_CHILD) which is
+- * used only internally to the kernel.
+- */
+- u32 mask = mark->mask & IN_ALL_EVENTS;
+- seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:%x ",
++ seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:0 ",
+ inode_mark->wd, inode->i_ino, inode->i_sb->s_dev,
+- mask, mark->ignored_mask);
++ inotify_mark_user_mask(mark));
+ show_mark_fhandle(m, inode);
+ seq_putc(m, '\n');
+ iput(inode);
+diff --git a/fs/notify/inotify/inotify.h b/fs/notify/inotify/inotify.h
+index 2007e37119160..8f00151eb731f 100644
+--- a/fs/notify/inotify/inotify.h
++++ b/fs/notify/inotify/inotify.h
+@@ -22,6 +22,18 @@ static inline struct inotify_event_info *INOTIFY_E(struct fsnotify_event *fse)
+ return container_of(fse, struct inotify_event_info, fse);
+ }
+
++/*
++ * INOTIFY_USER_MASK represents all of the mask bits that we expose to
++ * userspace. There is at least one bit (FS_EVENT_ON_CHILD) which is
++ * used only internally to the kernel.
++ */
++#define INOTIFY_USER_MASK (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK)
++
++static inline __u32 inotify_mark_user_mask(struct fsnotify_mark *fsn_mark)
++{
++ return fsn_mark->mask & INOTIFY_USER_MASK;
++}
++
+ extern void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark,
+ struct fsnotify_group *group);
+ extern int inotify_handle_inode_event(struct fsnotify_mark *inode_mark,
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 54583f62dc440..3ef57db0ec9d6 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -110,7 +110,7 @@ static inline __u32 inotify_arg_to_mask(struct inode *inode, u32 arg)
+ mask |= FS_EVENT_ON_CHILD;
+
+ /* mask off the flags used to open the fd */
+- mask |= (arg & (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK));
++ mask |= (arg & INOTIFY_USER_MASK);
+
+ return mask;
+ }
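
The inotify hunks centralize the set of user-visible mask bits in INOTIFY_USER_MASK, so fdinfo reporting and inotify_add_watch() filter with the same definition and kernel-internal bits such as FS_EVENT_ON_CHILD are never shown to userspace. The constants below use the real IN_* values, but the program is only an illustration of the filtering:

/* Userspace illustration of the shared mask filtering. */
#include <stdio.h>

#define IN_ALL_EVENTS     0x00000fff
#define IN_ONESHOT        0x80000000
#define IN_EXCL_UNLINK    0x04000000
#define FS_EVENT_ON_CHILD 0x08000000 /* kernel-internal, never reported */

#define INOTIFY_USER_MASK (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK)

static unsigned int mark_user_mask(unsigned int raw_mask)
{
	return raw_mask & INOTIFY_USER_MASK;
}

int main(void)
{
	unsigned int raw = IN_ALL_EVENTS | FS_EVENT_ON_CHILD | IN_EXCL_UNLINK;

	printf("raw mask:  %#010x\n", raw);
	printf("user mask: %#010x\n", mark_user_mask(raw));
	return 0;
}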
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 9007d6affff35..b42629d2fc1c6 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -452,7 +452,7 @@ void fsnotify_free_mark(struct fsnotify_mark *mark)
+ void fsnotify_destroy_mark(struct fsnotify_mark *mark,
+ struct fsnotify_group *group)
+ {
+- mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++ mutex_lock(&group->mark_mutex);
+ fsnotify_detach_mark(mark);
+ mutex_unlock(&group->mark_mutex);
+ fsnotify_free_mark(mark);
+@@ -770,7 +770,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+ * move marks to free to to_free list in one go and then free marks in
+ * to_free list one by one.
+ */
+- mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++ mutex_lock(&group->mark_mutex);
+ list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) {
+ if (mark->connector->type == obj_type)
+ list_move(&mark->g_list, &to_free);
+@@ -779,7 +779,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+
+ clear:
+ while (1) {
+- mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++ mutex_lock(&group->mark_mutex);
+ if (list_empty(head)) {
+ mutex_unlock(&group->mark_mutex);
+ break;
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index 787b53b984ee1..3bae76930e68a 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -495,7 +495,7 @@ static int ntfs_truncate(struct inode *inode, loff_t new_size)
+
+ down_write(&ni->file.run_lock);
+ err = attr_set_size(ni, ATTR_DATA, NULL, 0, &ni->file.run, new_size,
+- &new_valid, true, NULL);
++ &new_valid, ni->mi.sbi->options->prealloc, NULL);
+ up_write(&ni->file.run_lock);
+
+ if (new_valid < ni->i_valid)
+@@ -662,7 +662,13 @@ static long ntfs_fallocate(struct file *file, int mode, loff_t vbo, loff_t len)
+ /*
+ * Normal file: Allocate clusters, do not change 'valid' size.
+ */
+- err = ntfs_set_size(inode, max(end, i_size));
++ loff_t new_size = max(end, i_size);
++
++ err = inode_newsize_ok(inode, new_size);
++ if (err)
++ goto out;
++
++ err = ntfs_set_size(inode, new_size);
+ if (err)
+ goto out;
+
+@@ -762,7 +768,7 @@ int ntfs3_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ }
+ inode_dio_wait(inode);
+
+- if (attr->ia_size < oldsize)
++ if (attr->ia_size <= oldsize)
+ err = ntfs_truncate(inode, attr->ia_size);
+ else if (attr->ia_size > oldsize)
+ err = ntfs_extend(inode, attr->ia_size, 0, NULL);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 6f47a9c17f896..18842998c8fa3 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1964,10 +1964,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+
+ vcn += clen;
+
+- if (vbo + bytes >= end) {
++ if (vbo + bytes >= end)
+ bytes = end - vbo;
+- flags |= FIEMAP_EXTENT_LAST;
+- }
+
+ if (vbo + bytes <= valid) {
+ ;
+@@ -1977,6 +1975,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ /* vbo < valid && valid < vbo + bytes */
+ u64 dlen = valid - vbo;
+
++ if (vbo + dlen >= end)
++ flags |= FIEMAP_EXTENT_LAST;
++
+ err = fiemap_fill_next_extent(fieinfo, vbo, lbo, dlen,
+ flags);
+ if (err < 0)
+@@ -1995,6 +1996,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ flags |= FIEMAP_EXTENT_UNWRITTEN;
+ }
+
++ if (vbo + bytes >= end)
++ flags |= FIEMAP_EXTENT_LAST;
++
+ err = fiemap_fill_next_extent(fieinfo, vbo, lbo, bytes, flags);
+ if (err < 0)
+ break;
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index 06492f088d602..49b7df6167785 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -1185,8 +1185,6 @@ static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
+ if (!r_page)
+ return -ENOMEM;
+
+- memset(info, 0, sizeof(struct restart_info));
+-
+ /* Determine which restart area we are looking for. */
+ if (first) {
+ vbo = 0;
+@@ -3791,10 +3789,11 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ if (!log)
+ return -ENOMEM;
+
++ memset(&rst_info, 0, sizeof(struct restart_info));
++
+ log->ni = ni;
+ log->l_size = l_size;
+ log->one_page_buf = kmalloc(page_size, GFP_NOFS);
+-
+ if (!log->one_page_buf) {
+ err = -ENOMEM;
+ goto out;
+@@ -3842,6 +3841,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ if (rst_info.vbo)
+ goto check_restart_area;
+
++ memset(&rst_info2, 0, sizeof(struct restart_info));
+ err = log_read_rst(log, l_size, false, &rst_info2);
+
+ /* Determine which restart area to use. */
+@@ -4085,8 +4085,10 @@ process_log:
+ if (client == LFS_NO_CLIENT_LE) {
+ /* Insert "NTFS" client LogFile. */
+ client = ra->client_idx[0];
+- if (client == LFS_NO_CLIENT_LE)
+- return -EINVAL;
++ if (client == LFS_NO_CLIENT_LE) {
++ err = -EINVAL;
++ goto out;
++ }
+
+ t16 = le16_to_cpu(client);
+ cr = ca + t16;
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index a87ab3ad3cd38..76ee7e7aecf5f 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -757,6 +757,7 @@ static ssize_t ntfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ loff_t vbo = iocb->ki_pos;
+ loff_t end;
+ int wr = iov_iter_rw(iter) & WRITE;
++ size_t iter_count = iov_iter_count(iter);
+ loff_t valid;
+ ssize_t ret;
+
+@@ -770,10 +771,13 @@ static ssize_t ntfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ wr ? ntfs_get_block_direct_IO_W
+ : ntfs_get_block_direct_IO_R);
+
+- if (ret <= 0)
++ if (ret > 0)
++ end = vbo + ret;
++ else if (wr && ret == -EIOCBQUEUED)
++ end = vbo + iter_count;
++ else
+ goto out;
+
+- end = vbo + ret;
+ valid = ni->i_valid;
+ if (wr) {
+ if (end > valid && !S_ISBLK(inode->i_mode)) {
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index afd0ddad826ff..0968565ff2ca0 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -112,7 +112,7 @@ static int ntfs_read_ea(struct ntfs_inode *ni, struct EA_FULL **ea,
+ return -ENOMEM;
+
+ if (!size) {
+- ;
++ /* EA info persists, but xattr is empty. Looks like EA problem. */
+ } else if (attr_ea->non_res) {
+ struct runs_tree run;
+
+@@ -541,7 +541,7 @@ struct posix_acl *ntfs_get_acl(struct inode *inode, int type, bool rcu)
+
+ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ struct inode *inode, struct posix_acl *acl,
+- int type)
++ int type, bool init_acl)
+ {
+ const char *name;
+ size_t size, name_len;
+@@ -554,8 +554,9 @@ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+
+ switch (type) {
+ case ACL_TYPE_ACCESS:
+- if (acl) {
+- umode_t mode = inode->i_mode;
++ /* Do not change i_mode if we are in init_acl */
++ if (acl && !init_acl) {
++ umode_t mode;
+
+ err = posix_acl_update_mode(mnt_userns, inode, &mode,
+ &acl);
+@@ -616,7 +617,68 @@ out:
+ int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode,
+ struct posix_acl *acl, int type)
+ {
+- return ntfs_set_acl_ex(mnt_userns, inode, acl, type);
++ return ntfs_set_acl_ex(mnt_userns, inode, acl, type, false);
++}
++
++static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
++ struct inode *inode, int type, void *buffer,
++ size_t size)
++{
++ struct posix_acl *acl;
++ int err;
++
++ if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
++ ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
++ return -EOPNOTSUPP;
++ }
++
++ acl = ntfs_get_acl(inode, type, false);
++ if (IS_ERR(acl))
++ return PTR_ERR(acl);
++
++ if (!acl)
++ return -ENODATA;
++
++ err = posix_acl_to_xattr(mnt_userns, acl, buffer, size);
++ posix_acl_release(acl);
++
++ return err;
++}
++
++static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
++ struct inode *inode, int type, const void *value,
++ size_t size)
++{
++ struct posix_acl *acl;
++ int err;
++
++ if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
++ ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
++ return -EOPNOTSUPP;
++ }
++
++ if (!inode_owner_or_capable(mnt_userns, inode))
++ return -EPERM;
++
++ if (!value) {
++ acl = NULL;
++ } else {
++ acl = posix_acl_from_xattr(mnt_userns, value, size);
++ if (IS_ERR(acl))
++ return PTR_ERR(acl);
++
++ if (acl) {
++ err = posix_acl_valid(mnt_userns, acl);
++ if (err)
++ goto release_and_out;
++ }
++ }
++
++ err = ntfs_set_acl(mnt_userns, inode, acl, type);
++
++release_and_out:
++ posix_acl_release(acl);
++ return err;
+ }
+
+ /*
+@@ -636,7 +698,7 @@ int ntfs_init_acl(struct user_namespace *mnt_userns, struct inode *inode,
+
+ if (default_acl) {
+ err = ntfs_set_acl_ex(mnt_userns, inode, default_acl,
+- ACL_TYPE_DEFAULT);
++ ACL_TYPE_DEFAULT, true);
+ posix_acl_release(default_acl);
+ } else {
+ inode->i_default_acl = NULL;
+@@ -647,7 +709,7 @@ int ntfs_init_acl(struct user_namespace *mnt_userns, struct inode *inode,
+ else {
+ if (!err)
+ err = ntfs_set_acl_ex(mnt_userns, inode, acl,
+- ACL_TYPE_ACCESS);
++ ACL_TYPE_ACCESS, true);
+ posix_acl_release(acl);
+ }
+
+@@ -785,6 +847,23 @@ static int ntfs_getxattr(const struct xattr_handler *handler, struct dentry *de,
+ goto out;
+ }
+
++#ifdef CONFIG_NTFS3_FS_POSIX_ACL
++ if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
++ !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
++ sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
++ (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
++ !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
++ sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
++ /* TODO: init_user_ns? */
++ err = ntfs_xattr_get_acl(
++ &init_user_ns, inode,
++ name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
++ ? ACL_TYPE_ACCESS
++ : ACL_TYPE_DEFAULT,
++ buffer, size);
++ goto out;
++ }
++#endif
+ /* Deal with NTFS extended attribute. */
+ err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL);
+
+@@ -897,10 +976,29 @@ set_new_fa:
+ goto out;
+ }
+
++#ifdef CONFIG_NTFS3_FS_POSIX_ACL
++ if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
++ !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
++ sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
++ (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
++ !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
++ sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
++ err = ntfs_xattr_set_acl(
++ mnt_userns, inode,
++ name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
++ ? ACL_TYPE_ACCESS
++ : ACL_TYPE_DEFAULT,
++ value, size);
++ goto out;
++ }
++#endif
+ /* Deal with NTFS extended attribute. */
+ err = ntfs_set_ea(inode, name, name_len, value, size, flags);
+
+ out:
++ inode->i_ctime = current_time(inode);
++ mark_inode_dirty(inode);
++
+ return err;
+ }
+
+diff --git a/fs/ocfs2/dlmfs/userdlm.c b/fs/ocfs2/dlmfs/userdlm.c
+index 29f183a15798e..c1d67c806e1d3 100644
+--- a/fs/ocfs2/dlmfs/userdlm.c
++++ b/fs/ocfs2/dlmfs/userdlm.c
+@@ -433,6 +433,11 @@ again:
+ }
+
+ spin_lock(&lockres->l_lock);
++ if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) {
++ spin_unlock(&lockres->l_lock);
++ status = -EAGAIN;
++ goto bail;
++ }
+
+ /* We only compare against the currently granted level
+ * here. If the lock is blocked waiting on a downconvert,
+@@ -595,7 +600,7 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ spin_lock(&lockres->l_lock);
+ if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) {
+ spin_unlock(&lockres->l_lock);
+- return 0;
++ goto bail;
+ }
+
+ lockres->l_flags |= USER_LOCK_IN_TEARDOWN;
+@@ -609,12 +614,17 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ }
+
+ if (lockres->l_ro_holders || lockres->l_ex_holders) {
++ lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN;
+ spin_unlock(&lockres->l_lock);
+ goto bail;
+ }
+
+ status = 0;
+ if (!(lockres->l_flags & USER_LOCK_ATTACHED)) {
++ /*
++ * lock is never requested, leave USER_LOCK_IN_TEARDOWN set
++ * to avoid new lock request coming in.
++ */
+ spin_unlock(&lockres->l_lock);
+ goto bail;
+ }
+@@ -625,6 +635,10 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+
+ status = ocfs2_dlm_unlock(conn, &lockres->l_lksb, DLM_LKF_VALBLK);
+ if (status) {
++ spin_lock(&lockres->l_lock);
++ lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN;
++ lockres->l_flags &= ~USER_LOCK_BUSY;
++ spin_unlock(&lockres->l_lock);
+ user_log_dlm_error("ocfs2_dlm_unlock", status, lockres);
+ goto bail;
+ }
+diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c
+index 6c2411c2afcf1..fb090dac21d23 100644
+--- a/fs/ocfs2/inode.c
++++ b/fs/ocfs2/inode.c
+@@ -125,6 +125,7 @@ struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
+ struct inode *inode = NULL;
+ struct super_block *sb = osb->sb;
+ struct ocfs2_find_inode_args args;
++ journal_t *journal = osb->journal->j_journal;
+
+ trace_ocfs2_iget_begin((unsigned long long)blkno, flags,
+ sysfile_type);
+@@ -171,11 +172,10 @@ struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
+ * part of the transaction - the inode could have been reclaimed and
+ * now it is reread from disk.
+ */
+- if (osb->journal) {
++ if (journal) {
+ transaction_t *transaction;
+ tid_t tid;
+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
+- journal_t *journal = osb->journal->j_journal;
+
+ read_lock(&journal->j_state_lock);
+ if (journal->j_running_transaction)
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index 1887a27087097..fa87d89cf7542 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -810,22 +810,20 @@ void ocfs2_set_journal_params(struct ocfs2_super *osb)
+ write_unlock(&journal->j_state_lock);
+ }
+
+-int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
++/*
++ * alloc & initialize skeleton for journal structure.
++ * ocfs2_journal_init() will make fs have journal ability.
++ */
++int ocfs2_journal_alloc(struct ocfs2_super *osb)
+ {
+- int status = -1;
+- struct inode *inode = NULL; /* the journal inode */
+- journal_t *j_journal = NULL;
+- struct ocfs2_journal *journal = NULL;
+- struct ocfs2_dinode *di = NULL;
+- struct buffer_head *bh = NULL;
+- int inode_lock = 0;
++ int status = 0;
++ struct ocfs2_journal *journal;
+
+- /* initialize our journal structure */
+ journal = kzalloc(sizeof(struct ocfs2_journal), GFP_KERNEL);
+ if (!journal) {
+ mlog(ML_ERROR, "unable to alloc journal\n");
+ status = -ENOMEM;
+- goto done;
++ goto bail;
+ }
+ osb->journal = journal;
+ journal->j_osb = osb;
+@@ -839,6 +837,21 @@ int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
+ INIT_WORK(&journal->j_recovery_work, ocfs2_complete_recovery);
+ journal->j_state = OCFS2_JOURNAL_FREE;
+
++bail:
++ return status;
++}
++
++int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
++{
++ int status = -1;
++ struct inode *inode = NULL; /* the journal inode */
++ journal_t *j_journal = NULL;
++ struct ocfs2_journal *journal = osb->journal;
++ struct ocfs2_dinode *di = NULL;
++ struct buffer_head *bh = NULL;
++ int inode_lock = 0;
++
++ BUG_ON(!journal);
+ /* already have the inode for our journal */
+ inode = ocfs2_get_system_file_inode(osb, JOURNAL_SYSTEM_INODE,
+ osb->slot_num);
+diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
+index 8dcb2f2cadbc5..969d0aa287187 100644
+--- a/fs/ocfs2/journal.h
++++ b/fs/ocfs2/journal.h
+@@ -154,6 +154,7 @@ int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
+ * Journal Control:
+ * Initialize, Load, Shutdown, Wipe a journal.
+ *
++ * ocfs2_journal_alloc - Initialize skeleton for journal structure.
+ * ocfs2_journal_init - Initialize journal structures in the OSB.
+ * ocfs2_journal_load - Load the given journal off disk. Replay it if
+ * there's transactions still in there.
+@@ -167,6 +168,7 @@ int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
+ * ocfs2_start_checkpoint - Kick the commit thread to do a checkpoint.
+ */
+ void ocfs2_set_journal_params(struct ocfs2_super *osb);
++int ocfs2_journal_alloc(struct ocfs2_super *osb);
+ int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty);
+ void ocfs2_journal_shutdown(struct ocfs2_super *osb);
+ int ocfs2_journal_wipe(struct ocfs2_journal *journal,
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 8bde30fa5387d..4a3d625772fc7 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -2195,6 +2195,15 @@ static int ocfs2_initialize_super(struct super_block *sb,
+
+ get_random_bytes(&osb->s_next_generation, sizeof(u32));
+
++ /*
++ * FIXME
++ * This should be done in ocfs2_journal_init(), but any inode
++ * writes back operation will cause the filesystem to crash.
++ */
++ status = ocfs2_journal_alloc(osb);
++ if (status < 0)
++ goto bail;
++
+ INIT_WORK(&osb->dquot_drop_work, ocfs2_drop_dquot_refs);
+ init_llist_head(&osb->dquot_drop_list);
+
+@@ -2483,6 +2492,12 @@ static void ocfs2_delete_osb(struct ocfs2_super *osb)
+
+ kfree(osb->osb_orphan_wipes);
+ kfree(osb->slot_recovery_generations);
++ /* FIXME
++ * This belongs in journal shutdown, but because we have to
++ * allocate osb->journal at the middle of ocfs2_initialize_super(),
++ * we free it here.
++ */
++ kfree(osb->journal);
+ kfree(osb->local_alloc_copy);
+ kfree(osb->uuid_str);
+ kfree(osb->vol_label);
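
The ocfs2 changes split journal setup in two: ocfs2_journal_alloc() creates the bare object early in ocfs2_initialize_super() so osb->journal is valid for any inode writeback that happens during mount, while ocfs2_journal_init() keeps the heavyweight on-disk work. The shape of that alloc/init split, as a small userspace model:

/* Userspace model of the two-phase alloc/init split (not kernel code). */
#include <stdio.h>
#include <stdlib.h>

struct journal { int state; };
struct super { struct journal *journal; };

static int journal_alloc(struct super *sb)  /* early, cheap */
{
	sb->journal = calloc(1, sizeof(*sb->journal));
	return sb->journal ? 0 : -12 /* -ENOMEM */;
}

static int journal_init(struct super *sb)   /* later, needs sb set up */
{
	if (!sb->journal)
		return -22 /* -EINVAL */;
	sb->journal->state = 1; /* loaded */
	return 0;
}

int main(void)
{
	struct super sb = { 0 };

	if (journal_alloc(&sb) == 0 && journal_init(&sb) == 0)
		printf("journal ready, state=%d\n", sb.journal->state);
	free(sb.journal);
	return 0;
}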
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index f2132407e1335..587b91d9d998f 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -448,6 +448,9 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent,
+ proc_set_user(ent, (*parent)->uid, (*parent)->gid);
+
+ ent->proc_dops = &proc_misc_dentry_ops;
++ /* Revalidate everything under /proc/${pid}/net */
++ if ((*parent)->proc_dops == &proc_net_dentry_ops)
++ pde_force_lookup(ent);
+
+ out:
+ return ent;
+diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c
+index e1cfeda397f3f..913e5acefbb66 100644
+--- a/fs/proc/proc_net.c
++++ b/fs/proc/proc_net.c
+@@ -376,6 +376,9 @@ static __net_init int proc_net_ns_init(struct net *net)
+
+ proc_set_user(netd, uid, gid);
+
++ /* Seed dentry revalidation for /proc/${pid}/net */
++ pde_force_lookup(netd);
++
+ err = -EEXIST;
+ net_statd = proc_net_mkdir(net, "stat", netd);
+ if (!net_statd)
+diff --git a/fs/seq_file.c b/fs/seq_file.c
+index f8e1f4ee87ffc..3cc7fb4487711 100644
+--- a/fs/seq_file.c
++++ b/fs/seq_file.c
+@@ -931,6 +931,38 @@ struct list_head *seq_list_next(void *v, struct list_head *head, loff_t *ppos)
+ }
+ EXPORT_SYMBOL(seq_list_next);
+
++struct list_head *seq_list_start_rcu(struct list_head *head, loff_t pos)
++{
++ struct list_head *lh;
++
++ list_for_each_rcu(lh, head)
++ if (pos-- == 0)
++ return lh;
++
++ return NULL;
++}
++EXPORT_SYMBOL(seq_list_start_rcu);
++
++struct list_head *seq_list_start_head_rcu(struct list_head *head, loff_t pos)
++{
++ if (!pos)
++ return head;
++
++ return seq_list_start_rcu(head, pos - 1);
++}
++EXPORT_SYMBOL(seq_list_start_head_rcu);
++
++struct list_head *seq_list_next_rcu(void *v, struct list_head *head,
++ loff_t *ppos)
++{
++ struct list_head *lh;
++
++ lh = list_next_rcu((struct list_head *)v);
++ ++*ppos;
++ return lh == head ? NULL : lh;
++}
++EXPORT_SYMBOL(seq_list_next_rcu);
++
+ /**
+ * seq_hlist_start - start an iteration of a hlist
+ * @head: the head of the hlist
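
The new seq_list_*_rcu() helpers give seq_file iterators an RCU-safe way to walk a list: the ->start callback maps the numeric position back to a node on every pass, so the walk can restart safely after the RCU read lock is dropped between reads. Stripped of RCU, the cursor logic is just position counting, as in this standalone model:

/* Userspace model of the seq_list cursor logic (RCU omitted). */
#include <stdio.h>

struct list_head { struct list_head *next; };

static struct list_head *seq_list_start(struct list_head *head, long pos)
{
	struct list_head *lh;

	for (lh = head->next; lh != head; lh = lh->next)
		if (pos-- == 0)
			return lh;
	return NULL;
}

static struct list_head *seq_list_next(struct list_head *v,
				       struct list_head *head, long *ppos)
{
	struct list_head *lh = v->next;

	++*ppos;
	return lh == head ? NULL : lh;
}

int main(void)
{
	struct list_head head, a, b;
	long pos = 0;

	head.next = &a; a.next = &b; b.next = &head;

	for (struct list_head *p = seq_list_start(&head, pos); p;
	     p = seq_list_next(p, &head, &pos))
		printf("visiting node at pos %ld\n", pos);
	return 0;
}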
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index 18f6c700f6d02..8c3112f5406d1 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -121,7 +121,7 @@ struct detailed_data_monitor_range {
+ u8 supported_scalings;
+ u8 preferred_refresh;
+ } __attribute__((packed)) cvt;
+- } formula;
++ } __attribute__((packed)) formula;
+ } __attribute__((packed));
+
+ struct detailed_data_wpindex {
+@@ -154,7 +154,7 @@ struct detailed_non_pixel {
+ struct detailed_data_wpindex color;
+ struct std_timing timings[6];
+ struct cvt_timing cvt[4];
+- } data;
++ } __attribute__((packed)) data;
+ } __attribute__((packed));
+
+ #define EDID_DETAIL_EST_TIMINGS 0xf7
+@@ -172,7 +172,7 @@ struct detailed_timing {
+ union {
+ struct detailed_pixel_timing pixel_data;
+ struct detailed_non_pixel other_data;
+- } data;
++ } __attribute__((packed)) data;
+ } __attribute__((packed));
+
+ #define DRM_EDID_INPUT_SERRATION_VSYNC (1 << 0)
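
The drm_edid.h hunk adds __attribute__((packed)) to the nested unions as well as the outer structs: packing an outer struct does not strip the padding an unpacked inner aggregate carries for alignment, so the EDID layouts only match the fixed on-wire descriptors if every level is packed. The size difference is easy to demonstrate:

/* Demonstrates why nested aggregates need their own packed attribute. */
#include <stdio.h>

struct inner_unpacked { unsigned char tag; unsigned int value; };
struct inner_packed { unsigned char tag; unsigned int value; } __attribute__((packed));

struct outer_a { unsigned char hdr; struct inner_unpacked body; } __attribute__((packed));
struct outer_b { unsigned char hdr; struct inner_packed body; } __attribute__((packed));

int main(void)
{
	/* the unpacked inner struct keeps 3 bytes of internal padding */
	printf("outer with unpacked inner: %zu bytes\n", sizeof(struct outer_a)); /* 9 */
	printf("outer with packed inner:   %zu bytes\n", sizeof(struct outer_b)); /* 6 */
	return 0;
}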
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index 86c0f85df8bb4..fa6e14b2763fb 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -237,9 +237,8 @@ typedef unsigned int blk_qc_t;
+ struct bio {
+ struct bio *bi_next; /* request queue link */
+ struct block_device *bi_bdev;
+- unsigned int bi_opf; /* bottom bits req flags,
+- * top bits REQ_OP. Use
+- * accessors.
++ unsigned int bi_opf; /* bottom bits REQ_OP, top bits
++ * req_flags.
+ */
+ unsigned short bi_flags; /* BIO_* below */
+ unsigned short bi_ioprio;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 3121d1fc8e754..e78113f25b71d 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -652,7 +652,7 @@ struct btf_func_model {
+ #define BPF_TRAMP_F_RET_FENTRY_RET BIT(4)
+
+ /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
+- * bytes on x86. Pick a number to fit into BPF_IMAGE_SIZE / 2
++ * bytes on x86.
+ */
+ #define BPF_MAX_TRAMP_PROGS 38
+
+@@ -2064,6 +2064,8 @@ void bpf_offload_dev_netdev_unregister(struct bpf_offload_dev *offdev,
+ struct net_device *netdev);
+ bool bpf_offload_dev_match(struct bpf_prog *prog, struct net_device *netdev);
+
++void unpriv_ebpf_notify(int new_state);
++
+ #if defined(CONFIG_NET) && defined(CONFIG_BPF_SYSCALL)
+ int bpf_prog_offload_init(struct bpf_prog *prog, union bpf_attr *attr);
+
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index 1c758b0e03598..01fddf72a81f0 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -235,6 +235,7 @@ typedef struct compat_siginfo {
+ struct {
+ compat_ulong_t _data;
+ u32 _type;
++ u32 _flags;
+ } _perf;
+ };
+ } _sigfault;
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index ccd4d3f91c98c..cc6d2be2ffd51 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -213,6 +213,8 @@ struct capsule_info {
+ size_t page_bytes_remain;
+ };
+
++int efi_capsule_setup_info(struct capsule_info *cap_info, void *kbuff,
++ size_t hdr_bytes);
+ int __efi_capsule_setup_info(struct capsule_info *cap_info);
+
+ /*
+diff --git a/include/linux/fwnode.h b/include/linux/fwnode.h
+index 3a532ba66f6c2..7defac04f9a38 100644
+--- a/include/linux/fwnode.h
++++ b/include/linux/fwnode.h
+@@ -148,12 +148,12 @@ struct fwnode_operations {
+ int (*add_links)(struct fwnode_handle *fwnode);
+ };
+
+-#define fwnode_has_op(fwnode, op) \
+- ((fwnode) && (fwnode)->ops && (fwnode)->ops->op)
++#define fwnode_has_op(fwnode, op) \
++ (!IS_ERR_OR_NULL(fwnode) && (fwnode)->ops && (fwnode)->ops->op)
++
+ #define fwnode_call_int_op(fwnode, op, ...) \
+- (fwnode ? (fwnode_has_op(fwnode, op) ? \
+- (fwnode)->ops->op(fwnode, ## __VA_ARGS__) : -ENXIO) : \
+- -EINVAL)
++ (fwnode_has_op(fwnode, op) ? \
++ (fwnode)->ops->op(fwnode, ## __VA_ARGS__) : (IS_ERR_OR_NULL(fwnode) ? -EINVAL : -ENXIO))
+
+ #define fwnode_call_bool_op(fwnode, op, ...) \
+ (fwnode_has_op(fwnode, op) ? \
+diff --git a/include/linux/goldfish.h b/include/linux/goldfish.h
+index 12be1601fd845..bcc17f95b9066 100644
+--- a/include/linux/goldfish.h
++++ b/include/linux/goldfish.h
+@@ -8,14 +8,21 @@
+
+ /* Helpers for Goldfish virtual platform */
+
++#ifndef gf_ioread32
++#define gf_ioread32 ioread32
++#endif
++#ifndef gf_iowrite32
++#define gf_iowrite32 iowrite32
++#endif
++
+ static inline void gf_write_ptr(const void *ptr, void __iomem *portl,
+ void __iomem *porth)
+ {
+ const unsigned long addr = (unsigned long)ptr;
+
+- __raw_writel(lower_32_bits(addr), portl);
++ gf_iowrite32(lower_32_bits(addr), portl);
+ #ifdef CONFIG_64BIT
+- __raw_writel(upper_32_bits(addr), porth);
++ gf_iowrite32(upper_32_bits(addr), porth);
+ #endif
+ }
+
+@@ -23,9 +30,9 @@ static inline void gf_write_dma_addr(const dma_addr_t addr,
+ void __iomem *portl,
+ void __iomem *porth)
+ {
+- __raw_writel(lower_32_bits(addr), portl);
++ gf_iowrite32(lower_32_bits(addr), portl);
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+- __raw_writel(upper_32_bits(addr), porth);
++ gf_iowrite32(upper_32_bits(addr), porth);
+ #endif
+ }
+
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index f8996b46f430e..9f0408222f404 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -498,6 +498,18 @@ struct gpio_chip {
+ */
+ int (*of_xlate)(struct gpio_chip *gc,
+ const struct of_phandle_args *gpiospec, u32 *flags);
++
++ /**
++ * @of_gpio_ranges_fallback:
++ *
++ * Optional hook for the case that no gpio-ranges property is defined
++ * within the device tree node "np" (usually DT before introduction
++ * of gpio-ranges). So this callback is helpful to provide the
++ * necessary backward compatibility for the pin ranges.
++ */
++ int (*of_gpio_ranges_fallback)(struct gpio_chip *gc,
++ struct device_node *np);
++
+ #endif /* CONFIG_OF_GPIO */
+ };
+
+diff --git a/include/linux/ipmi_smi.h b/include/linux/ipmi_smi.h
+index 9277d21c2690c..5d69820d8b027 100644
+--- a/include/linux/ipmi_smi.h
++++ b/include/linux/ipmi_smi.h
+@@ -125,6 +125,12 @@ struct ipmi_smi_msg {
+ void (*done)(struct ipmi_smi_msg *msg);
+ };
+
++#define INIT_IPMI_SMI_MSG(done_handler) \
++{ \
++ .done = done_handler, \
++ .type = IPMI_SMI_MSG_TYPE_NORMAL \
++}
++
+ struct ipmi_smi_handlers {
+ struct module *owner;
+
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index 0c994ae37729e..33be67c2f7ff8 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -187,14 +187,6 @@ void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name);
+ int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+ unsigned long buf_len);
+ void *arch_kexec_kernel_image_load(struct kimage *image);
+-int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+- Elf_Shdr *section,
+- const Elf_Shdr *relsec,
+- const Elf_Shdr *symtab);
+-int arch_kexec_apply_relocations(struct purgatory_info *pi,
+- Elf_Shdr *section,
+- const Elf_Shdr *relsec,
+- const Elf_Shdr *symtab);
+ int arch_kimage_file_post_load_cleanup(struct kimage *image);
+ #ifdef CONFIG_KEXEC_SIG
+ int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+@@ -223,6 +215,44 @@ extern int crash_exclude_mem_range(struct crash_mem *mem,
+ unsigned long long mend);
+ extern int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
+ void **addr, unsigned long *sz);
++
++#ifndef arch_kexec_apply_relocations_add
++/*
++ * arch_kexec_apply_relocations_add - apply relocations of type RELA
++ * @pi: Purgatory to be relocated.
++ * @section: Section relocations applying to.
++ * @relsec: Section containing RELAs.
++ * @symtab: Corresponding symtab.
++ *
++ * Return: 0 on success, negative errno on error.
++ */
++static inline int
++arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section,
++ const Elf_Shdr *relsec, const Elf_Shdr *symtab)
++{
++ pr_err("RELA relocation unsupported.\n");
++ return -ENOEXEC;
++}
++#endif
++
++#ifndef arch_kexec_apply_relocations
++/*
++ * arch_kexec_apply_relocations - apply relocations of type REL
++ * @pi: Purgatory to be relocated.
++ * @section: Section relocations applying to.
++ * @relsec: Section containing RELs.
++ * @symtab: Corresponding symtab.
++ *
++ * Return: 0 on success, negative errno on error.
++ */
++static inline int
++arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section,
++ const Elf_Shdr *relsec, const Elf_Shdr *symtab)
++{
++ pr_err("REL relocation unsupported.\n");
++ return -ENOEXEC;
++}
++#endif
+ #endif /* CONFIG_KEXEC_FILE */
+
+ #ifdef CONFIG_KEXEC_ELF
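
The kexec.h hunk converts the relocation hooks from weak functions to static inline fallbacks guarded by #ifndef, so an architecture overrides them by defining a macro of the same name before the header's generic copy is seen. The pattern, reduced to a compilable example with stand-in names:

/* Sketch of the #ifndef-guarded fallback pattern (names are stand-ins). */
#include <stdio.h>

/* An "arch" header would do:
 *   #define arch_apply_relocations my_arch_apply_relocations
 * before this point to take over the hook. */
#ifndef arch_apply_relocations
static inline int arch_apply_relocations(void)
{
	printf("REL relocation unsupported.\n");
	return -8 /* -ENOEXEC */;
}
#endif

int main(void)
{
	return arch_apply_relocations() ? 1 : 0;
}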
+diff --git a/include/linux/list.h b/include/linux/list.h
+index dd6c2041d09c1..0df13cb03028b 100644
+--- a/include/linux/list.h
++++ b/include/linux/list.h
+@@ -35,7 +35,7 @@
+ static inline void INIT_LIST_HEAD(struct list_head *list)
+ {
+ WRITE_ONCE(list->next, list);
+- list->prev = list;
++ WRITE_ONCE(list->prev, list);
+ }
+
+ #ifdef CONFIG_DEBUG_LIST
+@@ -306,7 +306,7 @@ static inline int list_empty(const struct list_head *head)
+ static inline void list_del_init_careful(struct list_head *entry)
+ {
+ __list_del_entry(entry);
+- entry->prev = entry;
++ WRITE_ONCE(entry->prev, entry);
+ smp_store_release(&entry->next, entry);
+ }
+
+@@ -326,7 +326,7 @@ static inline void list_del_init_careful(struct list_head *entry)
+ static inline int list_empty_careful(const struct list_head *head)
+ {
+ struct list_head *next = smp_load_acquire(&head->next);
+- return list_is_head(next, head) && (next == head->prev);
++ return list_is_head(next, head) && (next == READ_ONCE(head->prev));
+ }
+
+ /**
+@@ -579,6 +579,16 @@ static inline void list_splice_tail_init(struct list_head *list,
+ #define list_for_each(pos, head) \
+ for (pos = (head)->next; !list_is_head(pos, (head)); pos = pos->next)
+
++/**
++ * list_for_each_rcu - Iterate over a list in an RCU-safe fashion
++ * @pos: the &struct list_head to use as a loop cursor.
++ * @head: the head for your list.
++ */
++#define list_for_each_rcu(pos, head) \
++ for (pos = rcu_dereference((head)->next); \
++ !list_is_head(pos, (head)); \
++ pos = rcu_dereference(pos->next))
++
+ /**
+ * list_for_each_continue - continue iteration over a list
+ * @pos: the &struct list_head to use as a loop cursor.
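
The list.h hunk pairs the lockless reader in list_empty_careful() with marked accesses on the writer side: prev is now updated with WRITE_ONCE() and read with READ_ONCE(), which tells the compiler (and KCSAN) that these fields race by design. In C11 terms the same discipline is relaxed atomics plus one acquire load, sketched below; this is illustrative, not the kernel implementation:

/* C11 model of the marked-access discipline in the list.h hunk. */
#include <stdatomic.h>
#include <stdio.h>

struct node { _Atomic(struct node *) next, prev; };

static void init_node(struct node *n)
{
	/* like WRITE_ONCE(list->next, list); WRITE_ONCE(list->prev, list); */
	atomic_store_explicit(&n->next, n, memory_order_relaxed);
	atomic_store_explicit(&n->prev, n, memory_order_relaxed);
}

static int empty_careful(struct node *head)
{
	/* smp_load_acquire(&head->next) paired with READ_ONCE(head->prev) */
	struct node *next = atomic_load_explicit(&head->next, memory_order_acquire);
	struct node *prev = atomic_load_explicit(&head->prev, memory_order_relaxed);

	return next == head && next == prev;
}

int main(void)
{
	struct node head;

	init_node(&head);
	printf("empty: %d\n", empty_careful(&head));
	return 0;
}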
+diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h
+index 36d6ce673503c..6fee33cb52f58 100644
+--- a/include/linux/mailbox_controller.h
++++ b/include/linux/mailbox_controller.h
+@@ -83,6 +83,7 @@ struct mbox_controller {
+ const struct of_phandle_args *sp);
+ /* Internal to API */
+ struct hrtimer poll_hrt;
++ spinlock_t poll_hrt_lock;
+ struct list_head node;
+ };
+
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 1e135fd5c076a..d5e9066990ca0 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -290,8 +290,7 @@ extern typeof(name) __mod_##type##__##name##_device_table \
+ * files require multiple MODULE_FIRMWARE() specifiers */
+ #define MODULE_FIRMWARE(_firmware) MODULE_INFO(firmware, _firmware)
+
+-#define _MODULE_IMPORT_NS(ns) MODULE_INFO(import_ns, #ns)
+-#define MODULE_IMPORT_NS(ns) _MODULE_IMPORT_NS(ns)
++#define MODULE_IMPORT_NS(ns) MODULE_INFO(import_ns, __stringify(ns))
+
+ struct notifier_block;
+
+diff --git a/include/linux/mtd/cfi.h b/include/linux/mtd/cfi.h
+index fd1ecb8211060..d88bb56c18e2e 100644
+--- a/include/linux/mtd/cfi.h
++++ b/include/linux/mtd/cfi.h
+@@ -286,6 +286,7 @@ struct cfi_private {
+ map_word sector_erase_cmd;
+ unsigned long chipshift; /* Because they're of the same type */
+ const char *im_name; /* inter_module name for cmdset_setup */
++ unsigned long quirks;
+ struct flchip chips[]; /* per-chip data structure for each chip */
+ };
+
+diff --git a/include/linux/namei.h b/include/linux/namei.h
+index e89329bb3134e..caeb08a98536c 100644
+--- a/include/linux/namei.h
++++ b/include/linux/namei.h
+@@ -69,6 +69,12 @@ extern struct dentry *lookup_one_len(const char *, struct dentry *, int);
+ extern struct dentry *lookup_one_len_unlocked(const char *, struct dentry *, int);
+ extern struct dentry *lookup_positive_unlocked(const char *, struct dentry *, int);
+ struct dentry *lookup_one(struct user_namespace *, const char *, struct dentry *, int);
++struct dentry *lookup_one_unlocked(struct user_namespace *mnt_userns,
++ const char *name, struct dentry *base,
++ int len);
++struct dentry *lookup_one_positive_unlocked(struct user_namespace *mnt_userns,
++ const char *name,
++ struct dentry *base, int len);
+
+ extern int follow_down_one(struct path *);
+ extern int follow_down(struct path *);
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index ca0959e51e817..1fdf560dddd8f 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -285,4 +285,5 @@ struct nfs_server {
+ #define NFS_CAP_XATTR (1U << 28)
+ #define NFS_CAP_READ_PLUS (1U << 29)
+ #define NFS_CAP_FS_LOCATIONS (1U << 30)
++#define NFS_CAP_MOVEABLE (1U << 31)
+ #endif
+diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
+index 567c3ddba2c42..c6199dbe25913 100644
+--- a/include/linux/nodemask.h
++++ b/include/linux/nodemask.h
+@@ -375,14 +375,13 @@ static inline void __nodes_fold(nodemask_t *dstp, const nodemask_t *origp,
+ }
+
+ #if MAX_NUMNODES > 1
+-#define for_each_node_mask(node, mask) \
+- for ((node) = first_node(mask); \
+- (node) < MAX_NUMNODES; \
+- (node) = next_node((node), (mask)))
++#define for_each_node_mask(node, mask) \
++ for ((node) = first_node(mask); \
++ (node >= 0) && (node) < MAX_NUMNODES; \
++ (node) = next_node((node), (mask)))
+ #else /* MAX_NUMNODES == 1 */
+-#define for_each_node_mask(node, mask) \
+- if (!nodes_empty(mask)) \
+- for ((node) = 0; (node) < 1; (node)++)
++#define for_each_node_mask(node, mask) \
++ for ((node) = 0; (node) < 1 && !nodes_empty(mask); (node)++)
+ #endif /* MAX_NUMNODES */
+
+ /*
+diff --git a/include/linux/platform_data/cros_ec_proto.h b/include/linux/platform_data/cros_ec_proto.h
+index df3c78c92ca2f..16931569adce1 100644
+--- a/include/linux/platform_data/cros_ec_proto.h
++++ b/include/linux/platform_data/cros_ec_proto.h
+@@ -216,6 +216,9 @@ int cros_ec_prepare_tx(struct cros_ec_device *ec_dev,
+ int cros_ec_check_result(struct cros_ec_device *ec_dev,
+ struct cros_ec_command *msg);
+
++int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev,
++ struct cros_ec_command *msg);
++
+ int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+ struct cros_ec_command *msg);
+
+diff --git a/include/linux/ptp_classify.h b/include/linux/ptp_classify.h
+index 9afd34a2d36c5..b760373524feb 100644
+--- a/include/linux/ptp_classify.h
++++ b/include/linux/ptp_classify.h
+@@ -43,6 +43,9 @@
+ #define OFF_PTP_SOURCE_UUID 22 /* PTPv1 only */
+ #define OFF_PTP_SEQUENCE_ID 30
+
++/* PTP header flag fields */
++#define PTP_FLAG_TWOSTEP BIT(1)
++
+ /* Below defines should actually be removed at some point in time. */
+ #define IP6_HLEN 40
+ #define UDP_HLEN 8
+diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
+index 8aee2945ff08f..831e6d2cf4d42 100644
+--- a/include/linux/ptrace.h
++++ b/include/linux/ptrace.h
+@@ -30,7 +30,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
+
+ #define PT_SEIZED 0x00010000 /* SEIZE used, enable new behavior */
+ #define PT_PTRACED 0x00000001
+-#define PT_DTRACE 0x00000002 /* delayed trace (used on m68k, i386) */
+
+ #define PT_OPT_FLAG_SHIFT 3
+ /* PT_TRACE_* event enable flags */
+@@ -47,12 +46,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
+ #define PT_EXITKILL (PTRACE_O_EXITKILL << PT_OPT_FLAG_SHIFT)
+ #define PT_SUSPEND_SECCOMP (PTRACE_O_SUSPEND_SECCOMP << PT_OPT_FLAG_SHIFT)
+
+-/* single stepping state bits (used on ARM and PA-RISC) */
+-#define PT_SINGLESTEP_BIT 31
+-#define PT_SINGLESTEP (1<<PT_SINGLESTEP_BIT)
+-#define PT_BLOCKSTEP_BIT 30
+-#define PT_BLOCKSTEP (1<<PT_BLOCKSTEP_BIT)
+-
+ extern long arch_ptrace(struct task_struct *child, long request,
+ unsigned long addr, unsigned long data);
+ extern int ptrace_readdata(struct task_struct *tsk, unsigned long src, char __user *dst, int len);
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index b6ecb9fc4cd2d..9924fe7559a02 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -320,7 +320,7 @@ int send_sig_mceerr(int code, void __user *, short, struct task_struct *);
+
+ int force_sig_bnderr(void __user *addr, void __user *lower, void __user *upper);
+ int force_sig_pkuerr(void __user *addr, u32 pkey);
+-int force_sig_perf(void __user *addr, u32 type, u64 sig_data);
++int send_sig_perf(void __user *addr, u32 type, u64 sig_data);
+
+ int force_sig_ptrace_errno_trap(int errno, void __user *addr);
+ int force_sig_fault_trapno(int sig, int code, void __user *addr, int trapno);
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index e84e54d1b490f..f944015579487 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -32,6 +32,7 @@ struct kernel_clone_args {
+ size_t set_tid_size;
+ int cgroup;
+ int io_thread;
++ int kthread;
+ struct cgroup *cgrp;
+ struct css_set *cset;
+ };
+@@ -89,6 +90,7 @@ struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node);
+ struct task_struct *fork_idle(int);
+ struct mm_struct *copy_init_mm(void);
+ extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
++extern pid_t user_mode_thread(int (*fn)(void *), void *arg, unsigned long flags);
+ extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
+ int kernel_wait(pid_t pid, int *stat);
+
+diff --git a/include/linux/seq_file.h b/include/linux/seq_file.h
+index 88cc16444b431..4baf03ea8992a 100644
+--- a/include/linux/seq_file.h
++++ b/include/linux/seq_file.h
+@@ -276,6 +276,10 @@ extern struct list_head *seq_list_start_head(struct list_head *head,
+ extern struct list_head *seq_list_next(void *v, struct list_head *head,
+ loff_t *ppos);
+
++extern struct list_head *seq_list_start_rcu(struct list_head *head, loff_t pos);
++extern struct list_head *seq_list_start_head_rcu(struct list_head *head, loff_t pos);
++extern struct list_head *seq_list_next_rcu(void *v, struct list_head *head, loff_t *ppos);
++
+ /*
+ * Helpers for iteration over hlist_head-s in seq_files
+ */
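The three new declarations mirror seq_list_start()/seq_list_next() for lists that are traversed under RCU rather than a lock. A minimal sketch of a seq_file iterator built on them; my_list is a placeholder for some RCU-managed list:

  /* Sketch: seq_file iteration over an RCU-protected list. */
  static void *my_seq_start(struct seq_file *m, loff_t *pos)
  {
          rcu_read_lock();
          return seq_list_start_rcu(&my_list, *pos);
  }

  static void *my_seq_next(struct seq_file *m, void *v, loff_t *pos)
  {
          return seq_list_next_rcu(v, &my_list, pos);
  }

  static void my_seq_stop(struct seq_file *m, void *v)
  {
          rcu_read_unlock();
  }
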
+diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
+index f36be5166c197..369769ce7399d 100644
+--- a/include/linux/set_memory.h
++++ b/include/linux/set_memory.h
+@@ -42,14 +42,14 @@ static inline bool can_set_direct_map(void)
+ #endif
+ #endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+
+-#ifndef set_mce_nospec
+-static inline int set_mce_nospec(unsigned long pfn, bool unmap)
++#ifdef CONFIG_X86_64
++int set_mce_nospec(unsigned long pfn);
++int clear_mce_nospec(unsigned long pfn);
++#else
++static inline int set_mce_nospec(unsigned long pfn)
+ {
+ return 0;
+ }
+-#endif
+-
+-#ifndef clear_mce_nospec
+ static inline int clear_mce_nospec(unsigned long pfn)
+ {
+ return 0;
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 548a028f2dabb..2c1fc9212cf28 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -124,6 +124,7 @@ struct usb_hcd {
+ #define HCD_FLAG_RH_RUNNING 5 /* root hub is running? */
+ #define HCD_FLAG_DEAD 6 /* controller has died? */
+ #define HCD_FLAG_INTF_AUTHORIZED 7 /* authorize interfaces? */
++#define HCD_FLAG_DEFER_RH_REGISTER 8 /* Defer roothub registration */
+
+ /* The flags can be tested using these macros; they are likely to
+ * be slightly faster than test_bit().
+@@ -134,6 +135,7 @@ struct usb_hcd {
+ #define HCD_WAKEUP_PENDING(hcd) ((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+ #define HCD_RH_RUNNING(hcd) ((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+ #define HCD_DEAD(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEAD))
++#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
+
+ /*
+ * Specifies if interfaces are authorized by default
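The new flag lets a host controller driver ask usbcore to postpone root-hub registration until the controller is fully running. A purely illustrative sketch of the plumbing — not the exact xHCI/usbcore call sequence:

  /* Sketch: the HCD opts in before usb_add_hcd() completes ... */
  set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);

  /* ... and core code keys off the new test macro: */
  if (!HCD_DEFER_RH_REGISTER(hcd))
          register_root_hub(hcd);
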
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 69ef31cea5822..62a9bb022aedf 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -265,6 +265,15 @@ enum {
+ * runtime suspend, because event filtering takes place there.
+ */
+ HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL,
++
++ /*
++ * When this quirk is set, disables the use of
++ * HCI_OP_ENHANCED_SETUP_SYNC_CONN command to setup SCO connections.
++ *
++ * This quirk can be set before hci_register_dev is called or
++ * during the hdev->setup vendor callback.
++ */
++ HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN,
+ };
+
+ /* HCI device flags */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 131514913430a..4524920e4895d 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1147,7 +1147,7 @@ int hci_conn_switch_role(struct hci_conn *conn, __u8 role);
+
+ void hci_conn_enter_active_mode(struct hci_conn *conn, __u8 force_active);
+
+-void hci_le_conn_failed(struct hci_conn *conn, u8 status);
++void hci_conn_failed(struct hci_conn *conn, u8 status);
+
+ /*
+ * hci_conn_get() and hci_conn_put() are used to control the life-time of an
+@@ -1483,8 +1483,12 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ #define privacy_mode_capable(dev) (use_ll_privacy(dev) && \
+ (hdev->commands[39] & 0x04))
+
+-/* Use enhanced synchronous connection if command is supported */
+-#define enhanced_sco_capable(dev) ((dev)->commands[29] & 0x08)
++/* Use enhanced synchronous connection if command is supported and its quirk
++ * has not been set.
++ */
++#define enhanced_sync_conn_capable(dev) \
++ (((dev)->commands[29] & 0x08) && \
++ !test_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &(dev)->quirks))
+
+ /* Use ext scanning if set ext scan param and ext scan enable is supported */
+ #define use_ext_scan(dev) (((dev)->commands[37] & 0x20) && \
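With enhanced_sync_conn_capable() honouring the quirk, a vendor driver whose controller advertises but mishandles HCI_OP_ENHANCED_SETUP_SYNC_CONN can opt out — per the quirk's comment, either before hci_register_dev() or from its hdev->setup callback. A minimal sketch:

  /* Sketch: vendor setup callback flagging broken enhanced SCO
   * setup, so the core falls back to HCI_OP_SETUP_SYNC_CONN. */
  static int my_vendor_setup(struct hci_dev *hdev)
  {
          set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN,
                  &hdev->quirks);
          return 0;
  }
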
+diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h
+index f026cf08a8e86..4714610234439 100644
+--- a/include/net/if_inet6.h
++++ b/include/net/if_inet6.h
+@@ -64,6 +64,14 @@ struct inet6_ifaddr {
+
+ struct hlist_node addr_lst;
+ struct list_head if_list;
++ /*
++ * Used to safely traverse idev->addr_list in process context
++ * if the idev->lock needed to protect idev->addr_list cannot be held.
++ * In that case, add the items to this list temporarily and iterate
++ * without holding idev->lock.
++ * See addrconf_ifdown and dev_forward_change.
++ */
++ struct list_head if_list_aux;
+
+ struct list_head tmp_list;
+ struct inet6_ifaddr *ifpub;
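The comment spells out the pattern if_list_aux enables: snapshot the addresses onto the aux list while idev->lock is held, then do the slow per-address work without the lock. A hedged sketch of that shape (addrconf_ifdown in this patch is the authoritative version; process_addr() is a placeholder):

  static void walk_addrs_unlocked(struct inet6_dev *idev)
  {
          struct inet6_ifaddr *ifa;
          LIST_HEAD(tmp);

          write_lock_bh(&idev->lock);
          list_for_each_entry(ifa, &idev->addr_list, if_list)
                  list_add_tail(&ifa->if_list_aux, &tmp);
          write_unlock_bh(&idev->lock);

          list_for_each_entry(ifa, &tmp, if_list_aux)
                  process_addr(ifa);      /* placeholder */
  }
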
+diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
+index fac8e89aed81d..310e0dbffda99 100644
+--- a/include/scsi/libfcoe.h
++++ b/include/scsi/libfcoe.h
+@@ -249,7 +249,8 @@ int fcoe_ctlr_recv_flogi(struct fcoe_ctlr *, struct fc_lport *,
+ struct fc_frame *);
+
+ /* libfcoe funcs */
+-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int);
++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN], unsigned int scheme,
++ unsigned int port);
+ int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *,
+ const struct libfc_function_template *, int init_fcp);
+ u32 fcoe_fc_crc(struct fc_frame *fp);
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index d1e282f0d6f18..6ad01d7de4809 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -53,9 +53,9 @@ enum {
+ #define ISID_SIZE 6
+
+ /* Connection flags */
+-#define ISCSI_CONN_FLAG_SUSPEND_TX BIT(0)
+-#define ISCSI_CONN_FLAG_SUSPEND_RX BIT(1)
+-#define ISCSI_CONN_FLAG_BOUND BIT(2)
++#define ISCSI_CONN_FLAG_SUSPEND_TX 0
++#define ISCSI_CONN_FLAG_SUSPEND_RX 1
++#define ISCSI_CONN_FLAG_BOUND 2
+
+ #define ISCSI_ITT_MASK 0x1fff
+ #define ISCSI_TOTAL_CMDS_MAX 4096
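The values change from BIT() masks to plain bit numbers because libiscsi manipulates conn->flags with the atomic bitops, which take a bit index rather than a mask — with the old definitions, set_bit(ISCSI_CONN_FLAG_SUSPEND_RX, ...) expanded to set_bit(BIT(1), ...) and flipped bit 2 instead of bit 1. Intended usage, sketched:

  /* Sketch: atomic bitops want an index, not a mask. */
  set_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
  if (test_bit(ISCSI_CONN_FLAG_BOUND, &conn->flags))
          conn_is_bound = true;           /* placeholder use */
  clear_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
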
+diff --git a/include/sound/cs35l41.h b/include/sound/cs35l41.h
+index bf7f9a9aeba04..9341130257ea6 100644
+--- a/include/sound/cs35l41.h
++++ b/include/sound/cs35l41.h
+@@ -536,7 +536,6 @@
+
+ #define CS35L41_MAX_CACHE_REG 36
+ #define CS35L41_OTP_SIZE_WORDS 32
+-#define CS35L41_NUM_OTP_ELEM 100
+
+ #define CS35L41_VALID_PDATA 0x80000000
+ #define CS35L41_NUM_SUPPLIES 2
+diff --git a/include/sound/jack.h b/include/sound/jack.h
+index 1181f536557eb..1ed90e2109e9b 100644
+--- a/include/sound/jack.h
++++ b/include/sound/jack.h
+@@ -62,6 +62,7 @@ struct snd_jack {
+ const char *id;
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+ struct input_dev *input_dev;
++ struct mutex input_dev_lock;
+ int registered;
+ int type;
+ char name[100];
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 4a3ab0ed6e062..1c714336b8635 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -1509,7 +1509,7 @@ TRACE_EVENT(rxrpc_call_reset,
+ __entry->call_serial = call->rx_serial;
+ __entry->conn_serial = call->conn->hi_serial;
+ __entry->tx_seq = call->tx_hard_ack;
+- __entry->rx_seq = call->ackr_seen;
++ __entry->rx_seq = call->rx_hard_ack;
+ ),
+
+ TP_printk("c=%08x %08x:%08x r=%08x/%08x tx=%08x rx=%08x",
+diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
+index ca2e9009a6512..beb1280460892 100644
+--- a/include/trace/events/vmscan.h
++++ b/include/trace/events/vmscan.h
+@@ -297,7 +297,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate,
+ __field(unsigned long, nr_scanned)
+ __field(unsigned long, nr_skipped)
+ __field(unsigned long, nr_taken)
+- __field(isolate_mode_t, isolate_mode)
++ __field(unsigned int, isolate_mode)
+ __field(int, lru)
+ ),
+
+@@ -308,7 +308,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate,
+ __entry->nr_scanned = nr_scanned;
+ __entry->nr_skipped = nr_skipped;
+ __entry->nr_taken = nr_taken;
+- __entry->isolate_mode = isolate_mode;
++ __entry->isolate_mode = (__force unsigned int)isolate_mode;
+ __entry->lru = lru;
+ ),
+
+diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h
+index 3ba180f550d7c..ffbe4cec9f32d 100644
+--- a/include/uapi/asm-generic/siginfo.h
++++ b/include/uapi/asm-generic/siginfo.h
+@@ -99,6 +99,7 @@ union __sifields {
+ struct {
+ unsigned long _data;
+ __u32 _type;
++ __u32 _flags;
+ } _perf;
+ };
+ } _sigfault;
+@@ -164,6 +165,7 @@ typedef struct siginfo {
+ #define si_pkey _sifields._sigfault._addr_pkey._pkey
+ #define si_perf_data _sifields._sigfault._perf._data
+ #define si_perf_type _sifields._sigfault._perf._type
++#define si_perf_flags _sifields._sigfault._perf._flags
+ #define si_band _sifields._sigpoll._band
+ #define si_fd _sifields._sigpoll._fd
+ #define si_call_addr _sifields._sigsys._call_addr
+@@ -270,6 +272,11 @@ typedef struct siginfo {
+ * that are of the form: ((PTRACE_EVENT_XXX << 8) | SIGTRAP)
+ */
+
++/*
++ * Flags for si_perf_flags if SIGTRAP si_code is TRAP_PERF.
++ */
++#define TRAP_PERF_FLAG_ASYNC (1u << 0)
++
+ /*
+ * SIGCHLD si_codes
+ */
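From user space the new field appears as si_perf_flags in the SIGTRAP siginfo, and TRAP_PERF_FLAG_ASYNC marks signals that were delivered asynchronously because SIGTRAP was blocked when the event fired (see the kernel/signal.c hunk later in this patch). A sketch of a handler, assuming a libc new enough to expose the field:

  #include <signal.h>
  #include <unistd.h>

  static void perf_handler(int sig, siginfo_t *info, void *ctx)
  {
          if (info->si_code != TRAP_PERF)
                  return;

          if (info->si_perf_flags & TRAP_PERF_FLAG_ASYNC) {
                  /* si_addr may not match the interrupted PC here;
                   * write(2) because printf is not signal-safe. */
                  static const char msg[] = "async perf SIGTRAP\n";
                  write(2, msg, sizeof(msg) - 1);
          }
  }
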
+diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h
+index b3d952067f59c..21c8d58283c9e 100644
+--- a/include/uapi/linux/landlock.h
++++ b/include/uapi/linux/landlock.h
+@@ -33,7 +33,9 @@ struct landlock_ruleset_attr {
+ * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI
+ * version.
+ */
++/* clang-format off */
+ #define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)
++/* clang-format on */
+
+ /**
+ * enum landlock_rule_type - Landlock rule type
+@@ -60,8 +62,9 @@ struct landlock_path_beneath_attr {
+ */
+ __u64 allowed_access;
+ /**
+- * @parent_fd: File descriptor, open with ``O_PATH``, which identifies
+- * the parent directory of a file hierarchy, or just a file.
++ * @parent_fd: File descriptor, preferably opened with ``O_PATH``,
++ * which identifies the parent directory of a file hierarchy, or just a
++ * file.
+ */
+ __s32 parent_fd;
+ /*
+@@ -120,6 +123,7 @@ struct landlock_path_beneath_attr {
+ * :manpage:`access(2)`.
+ * Future Landlock evolutions will enable to restrict them.
+ */
++/* clang-format off */
+ #define LANDLOCK_ACCESS_FS_EXECUTE (1ULL << 0)
+ #define LANDLOCK_ACCESS_FS_WRITE_FILE (1ULL << 1)
+ #define LANDLOCK_ACCESS_FS_READ_FILE (1ULL << 2)
+@@ -133,5 +137,6 @@ struct landlock_path_beneath_attr {
+ #define LANDLOCK_ACCESS_FS_MAKE_FIFO (1ULL << 10)
+ #define LANDLOCK_ACCESS_FS_MAKE_BLOCK (1ULL << 11)
+ #define LANDLOCK_ACCESS_FS_MAKE_SYM (1ULL << 12)
++/* clang-format on */
+
+ #endif /* _UAPI_LINUX_LANDLOCK_H */
+diff --git a/init/Kconfig b/init/Kconfig
+index e9119bf54b1f3..4a7a569706c54 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -77,6 +77,11 @@ config CC_HAS_ASM_GOTO_OUTPUT
+ depends on CC_HAS_ASM_GOTO
+ def_bool $(success,echo 'int foo(int x) { asm goto ("": "=r"(x) ::: bar); return x; bar: return 0; }' | $(CC) -x c - -c -o /dev/null)
+
++config CC_HAS_ASM_GOTO_TIED_OUTPUT
++ depends on CC_HAS_ASM_GOTO_OUTPUT
++ # Detect buggy gcc and clang, fixed in gcc-11 clang-14.
++	def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .\n": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $(CC) -x c - -c -o /dev/null)
++
+ config TOOLS_SUPPORT_RELR
+ def_bool $(success,env "CC=$(CC)" "LD=$(LD)" "NM=$(NM)" "OBJCOPY=$(OBJCOPY)" $(srctree)/scripts/tools-support-relr.sh)
+
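The probe feeds the compiler an asm goto whose output is tied to an input ("+m" constraint) while also taking a branch target — the construct that gcc before 11 and clang before 14 get wrong. Written out, the test program is:

  /* If this fails to compile, CC_HAS_ASM_GOTO_TIED_OUTPUT stays off. */
  int foo(int *x)
  {
          asm goto(".long (%l[bar]) - .\n"
                   : "+m"(*x) : : : bar);
          return *x;
  bar:
          return 0;
  }
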
+diff --git a/init/main.c b/init/main.c
+index 0aa2e1c17b1c3..c5a1f77b87da8 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -688,7 +688,7 @@ noinline void __ref rest_init(void)
+ * the init task will end up wanting to create kthreads, which, if
+ * we schedule it before we create kthreadd, will OOPS.
+ */
+- pid = kernel_thread(kernel_init, NULL, CLONE_FS);
++ pid = user_mode_thread(kernel_init, NULL, CLONE_FS);
+ /*
+ * Pin init on the boot CPU. Task migration is not properly working
+ * until sched_init_smp() has been run. It will set the allowed
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 5becca9be867c..089c34d0732cf 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -45,6 +45,7 @@
+
+ struct mqueue_fs_context {
+ struct ipc_namespace *ipc_ns;
++ bool newns; /* Set if newly created ipc namespace */
+ };
+
+ #define MQUEUE_MAGIC 0x19800202
+@@ -427,6 +428,14 @@ static int mqueue_get_tree(struct fs_context *fc)
+ {
+ struct mqueue_fs_context *ctx = fc->fs_private;
+
++ /*
++ * With a newly created ipc namespace, we don't need to do a search
++ * for an ipc namespace match, but we still need to set s_fs_info.
++ */
++ if (ctx->newns) {
++ fc->s_fs_info = ctx->ipc_ns;
++ return get_tree_nodev(fc, mqueue_fill_super);
++ }
+ return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns);
+ }
+
+@@ -454,6 +463,10 @@ static int mqueue_init_fs_context(struct fs_context *fc)
+ return 0;
+ }
+
++/*
++ * mq_init_ns() is currently the only caller of mq_create_mount().
++ * So the ns parameter is always a newly created ipc namespace.
++ */
+ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
+ {
+ struct mqueue_fs_context *ctx;
+@@ -465,6 +478,7 @@ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
+ return ERR_CAST(fc);
+
+ ctx = fc->fs_private;
++ ctx->newns = true;
+ put_ipc_ns(ctx->ipc_ns);
+ ctx->ipc_ns = get_ipc_ns(ns);
+ put_user_ns(fc->user_ns);
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index f8ff598596b85..ac740630c79c2 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -448,7 +448,7 @@ void debug_dma_dump_mappings(struct device *dev)
+ * other hand, consumes a single dma_debug_entry, but inserts 'nents'
+ * entries into the tree.
+ */
+-static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT);
++static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC);
+ static DEFINE_SPINLOCK(radix_lock);
+ #define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+ #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 50f48e9e45987..54b1a5d211876 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -79,7 +79,7 @@ static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
+ {
+ if (!force_dma_unencrypted(dev))
+ return 0;
+- return set_memory_decrypted((unsigned long)vaddr, 1 << get_order(size));
++ return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
+ }
+
+ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
+@@ -88,7 +88,7 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
+
+ if (!force_dma_unencrypted(dev))
+ return 0;
+- ret = set_memory_encrypted((unsigned long)vaddr, 1 << get_order(size));
++ ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
+ if (ret)
+ pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
+ return ret;
+@@ -115,7 +115,7 @@ static struct page *dma_direct_alloc_swiotlb(struct device *dev, size_t size)
+ }
+
+ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
+- gfp_t gfp)
++ gfp_t gfp, bool allow_highmem)
+ {
+ int node = dev_to_node(dev);
+ struct page *page = NULL;
+@@ -129,9 +129,12 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
+ gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
+ &phys_limit);
+ page = dma_alloc_contiguous(dev, size, gfp);
+- if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+- dma_free_contiguous(dev, page, size);
+- page = NULL;
++ if (page) {
++ if (!dma_coherent_ok(dev, page_to_phys(page), size) ||
++ (!allow_highmem && PageHighMem(page))) {
++ dma_free_contiguous(dev, page, size);
++ page = NULL;
++ }
+ }
+ again:
+ if (!page)
+@@ -189,7 +192,7 @@ static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
+ {
+ struct page *page;
+
+- page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
++ page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+ if (!page)
+ return NULL;
+
+@@ -262,7 +265,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
+ return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
+
+ /* we always manually zero the memory once we are done */
+- page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
++ page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+ if (!page)
+ return NULL;
+ if (PageHighMem(page)) {
+@@ -370,19 +373,9 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
+ if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+ return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
+
+- page = __dma_direct_alloc_pages(dev, size, gfp);
++ page = __dma_direct_alloc_pages(dev, size, gfp, false);
+ if (!page)
+ return NULL;
+- if (PageHighMem(page)) {
+- /*
+- * Depending on the cma= arguments and per-arch setup
+- * dma_alloc_contiguous could return highmem pages.
+- * Without remapping there is no way to return them here,
+- * so log an error and fail.
+- */
+- dev_info(dev, "Rejecting highmem page from CMA.\n");
+- goto out_free_pages;
+- }
+
+ ret = page_address(page);
+ if (dma_set_decrypted(dev, ret, size))
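The switch from `1 << get_order(size)` to `PFN_UP(size)` matters whenever size is not a power-of-two number of pages, because get_order() rounds up to the next allocation order. A worked example with 4 KiB pages:

  /* size = 12 KiB, i.e. 3 pages actually allocated:
   *
   *   PFN_UP(12 << 10)          == 3 pages  -- matches the buffer
   *   1 << get_order(12 << 10)  == 4 pages  -- order rounds 12K to 16K
   *
   * The old code would mark one page beyond the buffer as decrypted,
   * touching memory the allocation does not own.
   */
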
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 2d7a23a7507b6..8343880057ffe 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6533,8 +6533,8 @@ static void perf_sigtrap(struct perf_event *event)
+ if (current->flags & PF_EXITING)
+ return;
+
+- force_sig_perf((void __user *)event->pending_addr,
+- event->attr.type, event->attr.sig_data);
++ send_sig_perf((void __user *)event->pending_addr,
++ event->attr.type, event->attr.sig_data);
+ }
+
+ static void perf_pending_event_disable(struct perf_event *event)
+diff --git a/kernel/fork.c b/kernel/fork.c
+index f1e89007f2288..f2e3f5c5d9562 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2087,7 +2087,7 @@ static __latent_entropy struct task_struct *copy_process(
+ p->io_context = NULL;
+ audit_set_context(p, NULL);
+ cgroup_fork(p);
+- if (p->flags & PF_KTHREAD) {
++ if (args->kthread) {
+ if (!set_kthread_struct(p))
+ goto bad_fork_cleanup_delayacct;
+ }
+@@ -2474,7 +2474,8 @@ struct task_struct * __init fork_idle(int cpu)
+ {
+ struct task_struct *task;
+ struct kernel_clone_args args = {
+- .flags = CLONE_VM,
++ .flags = CLONE_VM,
++ .kthread = 1,
+ };
+
+ task = copy_process(&init_struct_pid, 0, cpu_to_node(cpu), &args);
+@@ -2605,6 +2606,23 @@ pid_t kernel_clone(struct kernel_clone_args *args)
+ * Create a kernel thread.
+ */
+ pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
++{
++ struct kernel_clone_args args = {
++ .flags = ((lower_32_bits(flags) | CLONE_VM |
++ CLONE_UNTRACED) & ~CSIGNAL),
++ .exit_signal = (lower_32_bits(flags) & CSIGNAL),
++ .stack = (unsigned long)fn,
++ .stack_size = (unsigned long)arg,
++ .kthread = 1,
++ };
++
++ return kernel_clone(&args);
++}
++
++/*
++ * Create a user mode thread.
++ */
++pid_t user_mode_thread(int (*fn)(void *), void *arg, unsigned long flags)
+ {
+ struct kernel_clone_args args = {
+ .flags = ((lower_32_bits(flags) | CLONE_VM |
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index 8347fc158d2b9..c108a2a887546 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -108,40 +108,6 @@ int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+ }
+ #endif
+
+-/*
+- * arch_kexec_apply_relocations_add - apply relocations of type RELA
+- * @pi: Purgatory to be relocated.
+- * @section: Section relocations applying to.
+- * @relsec: Section containing RELAs.
+- * @symtab: Corresponding symtab.
+- *
+- * Return: 0 on success, negative errno on error.
+- */
+-int __weak
+-arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section,
+- const Elf_Shdr *relsec, const Elf_Shdr *symtab)
+-{
+- pr_err("RELA relocation unsupported.\n");
+- return -ENOEXEC;
+-}
+-
+-/*
+- * arch_kexec_apply_relocations - apply relocations of type REL
+- * @pi: Purgatory to be relocated.
+- * @section: Section relocations applying to.
+- * @relsec: Section containing RELs.
+- * @symtab: Corresponding symtab.
+- *
+- * Return: 0 on success, negative errno on error.
+- */
+-int __weak
+-arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section,
+- const Elf_Shdr *relsec, const Elf_Shdr *symtab)
+-{
+- pr_err("REL relocation unsupported.\n");
+- return -ENOEXEC;
+-}
+-
+ /*
+ * Free up memory used by kernel, initrd, and command line. This is temporary
+ * memory allocation which is not needed any more after these buffers have
+diff --git a/kernel/module.c b/kernel/module.c
+index 46a5c2ed19285..740323cff545f 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3033,6 +3033,10 @@ static int elf_validity_check(struct load_info *info)
+ * strings in the section safe.
+ */
+ info->secstrings = (void *)info->hdr + strhdr->sh_offset;
++ if (strhdr->sh_size == 0) {
++ pr_err("empty section name table\n");
++ goto no_exec;
++ }
+ if (info->secstrings[strhdr->sh_size - 1] != '\0') {
+ pr_err("ELF Spec violation: section name table isn't null terminated\n");
+ goto no_exec;
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 0153b0ca7b23e..6219aaa454b5b 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -259,6 +259,8 @@ static void em_cpufreq_update_efficiencies(struct device *dev)
+ found++;
+ }
+
++ cpufreq_cpu_put(policy);
++
+ if (!found)
+ return;
+
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 833e407545b82..4f344e7b28786 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -735,8 +735,19 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf,
+ goto out;
+ }
+
++ /*
++ * Guarantee this task is visible on the waitqueue before
++ * checking the wake condition.
++ *
++ * The full memory barrier within set_current_state() of
++ * prepare_to_wait_event() pairs with the full memory barrier
++ * within wq_has_sleeper().
++ *
++ * This pairs with __wake_up_klogd:A.
++ */
+ ret = wait_event_interruptible(log_wait,
+- prb_read_valid(prb, atomic64_read(&user->seq), r));
++ prb_read_valid(prb,
++ atomic64_read(&user->seq), r)); /* LMM(devkmsg_read:A) */
+ if (ret)
+ goto out;
+ }
+@@ -1502,7 +1513,18 @@ static int syslog_print(char __user *buf, int size)
+ seq = syslog_seq;
+
+ mutex_unlock(&syslog_lock);
+- len = wait_event_interruptible(log_wait, prb_read_valid(prb, seq, NULL));
++ /*
++ * Guarantee this task is visible on the waitqueue before
++ * checking the wake condition.
++ *
++ * The full memory barrier within set_current_state() of
++ * prepare_to_wait_event() pairs with the full memory barrier
++ * within wq_has_sleeper().
++ *
++ * This pairs with __wake_up_klogd:A.
++ */
++ len = wait_event_interruptible(log_wait,
++ prb_read_valid(prb, seq, NULL)); /* LMM(syslog_print:A) */
+ mutex_lock(&syslog_lock);
+
+ if (len)
+@@ -3230,7 +3252,7 @@ static DEFINE_PER_CPU(int, printk_pending);
+
+ static void wake_up_klogd_work_func(struct irq_work *irq_work)
+ {
+- int pending = __this_cpu_xchg(printk_pending, 0);
++ int pending = this_cpu_xchg(printk_pending, 0);
+
+ if (pending & PRINTK_PENDING_OUTPUT) {
+ /* If trylock fails, someone else is doing the printing */
+@@ -3245,28 +3267,43 @@ static void wake_up_klogd_work_func(struct irq_work *irq_work)
+ static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) =
+ IRQ_WORK_INIT_LAZY(wake_up_klogd_work_func);
+
+-void wake_up_klogd(void)
++static void __wake_up_klogd(int val)
+ {
+ if (!printk_percpu_data_ready())
+ return;
+
+ preempt_disable();
+- if (waitqueue_active(&log_wait)) {
+- this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
++ /*
++ * Guarantee any new records can be seen by tasks preparing to wait
++ * before this context checks if the wait queue is empty.
++ *
++ * The full memory barrier within wq_has_sleeper() pairs with the full
++ * memory barrier within set_current_state() of
++ * prepare_to_wait_event(), which is called after ___wait_event() adds
++ * the waiter but before it has checked the wait condition.
++ *
++ * This pairs with devkmsg_read:A and syslog_print:A.
++ */
++ if (wq_has_sleeper(&log_wait) || /* LMM(__wake_up_klogd:A) */
++ (val & PRINTK_PENDING_OUTPUT)) {
++ this_cpu_or(printk_pending, val);
+ irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+ }
+ preempt_enable();
+ }
+
+-void defer_console_output(void)
++void wake_up_klogd(void)
+ {
+- if (!printk_percpu_data_ready())
+- return;
++ __wake_up_klogd(PRINTK_PENDING_WAKEUP);
++}
+
+- preempt_disable();
+- __this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
+- irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+- preempt_enable();
++void defer_console_output(void)
++{
++ /*
++ * New messages may have been added directly to the ringbuffer
++ * using vprintk_store(), so wake any waiters as well.
++ */
++ __wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT);
+ }
+
+ void printk_trigger_flush(void)
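The three new comments describe a single lost-wakeup pattern; laid out side by side, using the LMM labels from the annotations:

  /*
   *  waiter                               waker
   *  (devkmsg_read:A, syslog_print:A)     (__wake_up_klogd:A)
   *  --------------------------------     ---------------------------
   *  add task to log_wait                 store new record in prb
   *  smp_mb()  [set_current_state()]      smp_mb()  [wq_has_sleeper()]
   *  check prb_read_valid()               check wq_has_sleeper()
   *
   * With full barriers on both sides, either the waiter sees the new
   * record or the waker sees the waiter; the wakeup cannot be lost.
   */
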
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index ccc4b465775b8..6149ca5e0e14b 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -1236,9 +1236,8 @@ int ptrace_request(struct task_struct *child, long request,
+ return ptrace_resume(child, request, data);
+
+ case PTRACE_KILL:
+- if (child->exit_state) /* already dead */
+- return 0;
+- return ptrace_resume(child, request, SIGKILL);
++ send_sig_info(SIGKILL, SEND_SIG_NOINFO, child);
++ return 0;
+
+ #ifdef CONFIG_HAVE_ARCH_TRACEHOOK
+ case PTRACE_GETREGSET:
+diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
+index bf8e341e75b4f..f559870fbf8b3 100644
+--- a/kernel/rcu/Kconfig
++++ b/kernel/rcu/Kconfig
+@@ -86,6 +86,7 @@ config TASKS_RCU
+
+ config TASKS_RUDE_RCU
+ def_bool 0
++ select IRQ_WORK
+ help
+ This option enables a task-based RCU implementation that uses
+ only context switch (including preemption) and user-mode
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index d64f0b1d8cd3b..1664e472524bd 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -460,7 +460,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
+ }
+ }
+
+- if (rcu_segcblist_empty(&rtpcp->cblist))
++ if (rcu_segcblist_empty(&rtpcp->cblist) || !cpu_possible(cpu))
+ return;
+ raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+ rcu_segcblist_advance(&rtpcp->cblist, rcu_seq_current(&rtp->tasks_gp_seq));
+@@ -950,6 +950,9 @@ static void rcu_tasks_be_rude(struct work_struct *work)
+ // Wait for one rude RCU-tasks grace period.
+ static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
+ {
++ if (num_online_cpus() <= 1)
++ return; // Fastpath for only one CPU.
++
+ rtp->n_ipis += cpumask_weight(cpu_online_mask);
+ schedule_on_each_cpu(rcu_tasks_be_rude);
+ }
+diff --git a/kernel/scftorture.c b/kernel/scftorture.c
+index dcb0410950e45..5d113aa59e773 100644
+--- a/kernel/scftorture.c
++++ b/kernel/scftorture.c
+@@ -267,9 +267,10 @@ static void scf_handler(void *scfc_in)
+ }
+ this_cpu_inc(scf_invoked_count);
+ if (longwait <= 0) {
+- if (!(r & 0xffc0))
++ if (!(r & 0xffc0)) {
+ udelay(r & 0x3f);
+- goto out;
++ goto out;
++ }
+ }
+ if (r & 0xfff)
+ goto out;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 1eec4925b8c64..a6722496ed5f2 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -546,10 +546,10 @@ void double_rq_lock(struct rq *rq1, struct rq *rq2)
+ swap(rq1, rq2);
+
+ raw_spin_rq_lock(rq1);
+- if (__rq_lockp(rq1) == __rq_lockp(rq2))
+- return;
++ if (__rq_lockp(rq1) != __rq_lockp(rq2))
++ raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
+
+- raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
++ double_rq_clock_clear_update(rq1, rq2);
+ }
+ #endif
+
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 62f0cf8422775..1571b775ec150 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1804,6 +1804,7 @@ out:
+
+ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused)
+ {
++ struct rq_flags rf;
+ struct rq *rq;
+
+ if (READ_ONCE(p->__state) != TASK_WAKING)
+@@ -1815,7 +1816,7 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
+ * from try_to_wake_up(). Hence, p->pi_lock is locked, but
+ * rq->lock is not... So, lock it
+ */
+- raw_spin_rq_lock(rq);
++ rq_lock(rq, &rf);
+ if (p->dl.dl_non_contending) {
+ update_rq_clock(rq);
+ sub_running_bw(&p->dl, &rq->dl);
+@@ -1831,7 +1832,7 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
+ put_task_struct(p);
+ }
+ sub_rq_bw(&p->dl, &rq->dl);
+- raw_spin_rq_unlock(rq);
++ rq_unlock(rq, &rf);
+ }
+
+ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2f461f0592789..95bcf0a8767f1 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4793,8 +4793,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
+
+ cfs_rq->throttle_count--;
+ if (!cfs_rq->throttle_count) {
+- cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
+- cfs_rq->throttled_clock_task;
++ cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) -
++ cfs_rq->throttled_clock_pelt;
+
+ /* Add cfs_rq with load or one or more already running entities to the list */
+ if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
+@@ -4811,7 +4811,7 @@ static int tg_throttle_down(struct task_group *tg, void *data)
+
+ /* group is entering throttled state, stop time */
+ if (!cfs_rq->throttle_count) {
+- cfs_rq->throttled_clock_task = rq_clock_task(rq);
++ cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
+ list_del_leaf_cfs_rq(cfs_rq);
+ }
+ cfs_rq->throttle_count++;
+@@ -5255,7 +5255,7 @@ static void sync_throttle(struct task_group *tg, int cpu)
+ pcfs_rq = tg->parent->cfs_rq[cpu];
+
+ cfs_rq->throttle_count = pcfs_rq->throttle_count;
+- cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu));
++ cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
+ }
+
+ /* conditionally throttle active cfs_rq's from put_prev_entity() */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index c336f5f481bca..4ff2ed4f8fa15 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -145,9 +145,9 @@ static inline u64 rq_clock_pelt(struct rq *rq)
+ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ {
+ if (unlikely(cfs_rq->throttle_count))
+- return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
++ return cfs_rq->throttled_clock_pelt - cfs_rq->throttled_clock_pelt_time;
+
+- return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
++ return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
+ }
+ #else
+ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index e143581788497..97fd85c5143c6 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -1062,14 +1062,17 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
+ mutex_unlock(&group->avgs_lock);
+
+ for (full = 0; full < 2; full++) {
+- unsigned long avg[3];
+- u64 total;
++ unsigned long avg[3] = { 0, };
++ u64 total = 0;
+ int w;
+
+- for (w = 0; w < 3; w++)
+- avg[w] = group->avg[res * 2 + full][w];
+- total = div_u64(group->total[PSI_AVGS][res * 2 + full],
+- NSEC_PER_USEC);
++ /* CPU FULL is undefined at the system level */
++ if (!(group == &psi_system && res == PSI_CPU && full)) {
++ for (w = 0; w < 3; w++)
++ avg[w] = group->avg[res * 2 + full][w];
++ total = div_u64(group->total[PSI_AVGS][res * 2 + full],
++ NSEC_PER_USEC);
++ }
+
+ seq_printf(m, "%s avg10=%lu.%02lu avg60=%lu.%02lu avg300=%lu.%02lu total=%llu\n",
+ full ? "full" : "some",
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 14f273c295183..3b8a7e1533678 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -885,6 +885,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+ int enqueue = 0;
+ struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
+ struct rq *rq = rq_of_rt_rq(rt_rq);
++ struct rq_flags rf;
+ int skip;
+
+ /*
+@@ -899,7 +900,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+ if (skip)
+ continue;
+
+- raw_spin_rq_lock(rq);
++ rq_lock(rq, &rf);
+ update_rq_clock(rq);
+
+ if (rt_rq->rt_time) {
+@@ -937,7 +938,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+
+ if (enqueue)
+ sched_rt_rq_enqueue(rt_rq);
+- raw_spin_rq_unlock(rq);
++ rq_unlock(rq, &rf);
+ }
+
+ if (!throttled && (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF))
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index e8a5549488dd1..8c0dfeadef709 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -618,8 +618,8 @@ struct cfs_rq {
+ s64 runtime_remaining;
+
+ u64 throttled_clock;
+- u64 throttled_clock_task;
+- u64 throttled_clock_task_time;
++ u64 throttled_clock_pelt;
++ u64 throttled_clock_pelt_time;
+ int throttled;
+ int throttle_count;
+ struct list_head throttled_list;
+@@ -2494,6 +2494,24 @@ unsigned long arch_scale_freq_capacity(int cpu)
+ }
+ #endif
+
++#ifdef CONFIG_SCHED_DEBUG
++/*
++ * In double_lock_balance()/double_rq_lock(), we use raw_spin_rq_lock() to
++ * acquire rq lock instead of rq_lock(). So at the end of these two functions
++ * we need to call double_rq_clock_clear_update() to clear RQCF_UPDATED of
++ * rq->clock_update_flags to avoid the WARN_DOUBLE_CLOCK warning.
++ */
++static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2)
++{
++ rq1->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
++ /* rq1 == rq2 for !CONFIG_SMP, so just clear RQCF_UPDATED once. */
++#ifdef CONFIG_SMP
++ rq2->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
++#endif
++}
++#else
++static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2) {}
++#endif
+
+ #ifdef CONFIG_SMP
+
+@@ -2559,14 +2577,15 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
+ __acquires(busiest->lock)
+ __acquires(this_rq->lock)
+ {
+- if (__rq_lockp(this_rq) == __rq_lockp(busiest))
+- return 0;
+-
+- if (likely(raw_spin_rq_trylock(busiest)))
++ if (__rq_lockp(this_rq) == __rq_lockp(busiest) ||
++ likely(raw_spin_rq_trylock(busiest))) {
++ double_rq_clock_clear_update(this_rq, busiest);
+ return 0;
++ }
+
+ if (rq_order_less(this_rq, busiest)) {
+ raw_spin_rq_lock_nested(busiest, SINGLE_DEPTH_NESTING);
++ double_rq_clock_clear_update(this_rq, busiest);
+ return 0;
+ }
+
+@@ -2660,6 +2679,7 @@ static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
+ BUG_ON(rq1 != rq2);
+ raw_spin_rq_lock(rq1);
+ __acquire(rq2->lock); /* Fake it out ;) */
++ double_rq_clock_clear_update(rq1, rq2);
+ }
+
+ /*
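double_rq_clock_clear_update() exists because callers of the double-lock helpers update both clocks immediately afterwards; under CONFIG_SCHED_DEBUG a stale RQCF_UPDATED left over from an earlier critical section would otherwise trip WARN_DOUBLE_CLOCK. A sketch of the caller pattern being protected (hypothetical load-balance path):

  double_rq_lock(rq1, rq2);       /* clears RQCF_UPDATED on both */
  update_rq_clock(rq1);
  update_rq_clock(rq2);
  /* ... migrate tasks between rq1 and rq2 ... */
  double_rq_unlock(rq1, rq2);
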
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 9b04631acde8f..41682bb8bb8f0 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1805,7 +1805,7 @@ int force_sig_pkuerr(void __user *addr, u32 pkey)
+ }
+ #endif
+
+-int force_sig_perf(void __user *addr, u32 type, u64 sig_data)
++int send_sig_perf(void __user *addr, u32 type, u64 sig_data)
+ {
+ struct kernel_siginfo info;
+
+@@ -1817,7 +1817,18 @@ int force_sig_perf(void __user *addr, u32 type, u64 sig_data)
+ info.si_perf_data = sig_data;
+ info.si_perf_type = type;
+
+- return force_sig_info(&info);
++ /*
++ * Signals generated by perf events should not terminate the whole
++ * process if SIGTRAP is blocked, however, delivering the signal
++ * asynchronously is better than not delivering at all. But tell user
++ * space if the signal was asynchronous, so it can clearly be
++ * distinguished from normal synchronous ones.
++ */
++	info.si_perf_flags = sigismember(&current->blocked, info.si_signo) ?
++ TRAP_PERF_FLAG_ASYNC :
++ 0;
++
++ return send_sig_info(info.si_signo, &info, current);
+ }
+
+ /**
+@@ -3430,6 +3441,7 @@ void copy_siginfo_to_external32(struct compat_siginfo *to,
+ to->si_addr = ptr_to_compat(from->si_addr);
+ to->si_perf_data = from->si_perf_data;
+ to->si_perf_type = from->si_perf_type;
++ to->si_perf_flags = from->si_perf_flags;
+ break;
+ case SIL_CHLD:
+ to->si_pid = from->si_pid;
+@@ -3507,6 +3519,7 @@ static int post_copy_siginfo_from_user32(kernel_siginfo_t *to,
+ to->si_addr = compat_ptr(from->si_addr);
+ to->si_perf_data = from->si_perf_data;
+ to->si_perf_type = from->si_perf_type;
++ to->si_perf_flags = from->si_perf_flags;
+ break;
+ case SIL_CHLD:
+ to->si_pid = from->si_pid;
+@@ -4720,6 +4733,7 @@ static inline void siginfo_buildtime_checks(void)
+ CHECK_OFFSET(si_pkey);
+ CHECK_OFFSET(si_perf_data);
+ CHECK_OFFSET(si_perf_type);
++ CHECK_OFFSET(si_perf_flags);
+
+ /* sigpoll */
+ CHECK_OFFSET(si_band);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 6105b7036482e..3439e3a037217 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -4448,7 +4448,7 @@ int ftrace_func_mapper_add_ip(struct ftrace_func_mapper *mapper,
+ * @ip: The instruction pointer address to remove the data from
+ *
+ * Returns the data if it is found, otherwise NULL.
+- * Note, if the data pointer is used as the data itself, (see
++ * Note, if the data pointer is used as the data itself, (see
+ * ftrace_func_mapper_find_ip(), then the return value may be meaningless,
+ * if the data pointer was set to zero.
+ */
+@@ -5151,8 +5151,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
+ goto out_unlock;
+
+ ret = ftrace_set_filter_ip(&direct_ops, ip, 0, 0);
+- if (ret)
+- remove_hash_entry(direct_functions, entry);
+
+ if (!ret && !(direct_ops.flags & FTRACE_OPS_FL_ENABLED)) {
+ ret = register_ftrace_function(&direct_ops);
+@@ -5161,6 +5159,7 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
+ }
+
+ if (ret) {
++ remove_hash_entry(direct_functions, entry);
+ kfree(entry);
+ if (!direct->count) {
+ list_del_rcu(&direct->next);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 96265a717ca4e..8bc7beea10c70 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -711,13 +711,16 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ pos = 0;
+
+ ret = trace_get_user(&parser, ubuf, cnt, &pos);
+- if (ret < 0 || !trace_parser_loaded(&parser))
++ if (ret < 0)
+ break;
+
+ read += ret;
+ ubuf += ret;
+ cnt -= ret;
+
++ if (!trace_parser_loaded(&parser))
++ break;
++
+ ret = -EINVAL;
+ if (kstrtoul(parser.buffer, 0, &val))
+ break;
+@@ -743,7 +746,6 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ if (!nr_pids) {
+ /* Cleared the list of pids */
+ trace_pid_list_free(pid_list);
+- read = ret;
+ pid_list = NULL;
+ }
+
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index 0580287d7a0d1..778200dd8edea 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -300,7 +300,7 @@ trace_boot_hist_add_handlers(struct xbc_node *hnode, char **bufp,
+ {
+ struct xbc_node *node;
+ const char *p, *handler;
+- int ret;
++ int ret = 0;
+
+ handler = xbc_node_get_data(hnode);
+
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index dc7f733b4cb33..0275881fc5b28 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -2074,8 +2074,11 @@ static int init_var_ref(struct hist_field *ref_field,
+ return err;
+ free:
+ kfree(ref_field->system);
++ ref_field->system = NULL;
+ kfree(ref_field->event_name);
++ ref_field->event_name = NULL;
+ kfree(ref_field->name);
++ ref_field->name = NULL;
+
+ goto out;
+ }
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 5e3c62a08fc0f..61c3fff488bf3 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1576,11 +1576,12 @@ static enum hrtimer_restart timerlat_irq(struct hrtimer *timer)
+
+ trace_timerlat_sample(&s);
+
+- notify_new_max_latency(diff);
+-
+- if (osnoise_data.stop_tracing)
+- if (time_to_us(diff) >= osnoise_data.stop_tracing)
++ if (osnoise_data.stop_tracing) {
++ if (time_to_us(diff) >= osnoise_data.stop_tracing) {
+ osnoise_stop_tracing();
++ notify_new_max_latency(diff);
++ }
++ }
+
+ wake_up_process(tlat->kthread);
+
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index abcadbe933bb7..a2d301f58ceda 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -895,6 +895,9 @@ trace_selftest_startup_function_graph(struct tracer *trace,
+ ret = -1;
+ goto out;
+ }
++
++ /* Enable tracing on all functions again */
++ ftrace_set_global_filter(NULL, 0, 1);
+ #endif
+
+ /* Don't test dynamic tracing, the function tracer already did */
+diff --git a/kernel/umh.c b/kernel/umh.c
+index 36c123360ab88..b989736e87074 100644
+--- a/kernel/umh.c
++++ b/kernel/umh.c
+@@ -132,7 +132,7 @@ static void call_usermodehelper_exec_sync(struct subprocess_info *sub_info)
+
+ /* If SIGCLD is ignored do_wait won't populate the status. */
+ kernel_sigaction(SIGCHLD, SIG_DFL);
+- pid = kernel_thread(call_usermodehelper_exec_async, sub_info, SIGCHLD);
++ pid = user_mode_thread(call_usermodehelper_exec_async, sub_info, SIGCHLD);
+ if (pid < 0)
+ sub_info->retval = pid;
+ else
+@@ -171,8 +171,8 @@ static void call_usermodehelper_exec_work(struct work_struct *work)
+ * want to pollute current->children, and we need a parent
+ * that always ignores SIGCHLD to ensure auto-reaping.
+ */
+- pid = kernel_thread(call_usermodehelper_exec_async, sub_info,
+- CLONE_PARENT | SIGCHLD);
++ pid = user_mode_thread(call_usermodehelper_exec_async, sub_info,
++ CLONE_PARENT | SIGCHLD);
+ if (pid < 0) {
+ sub_info->retval = pid;
+ umh_complete(sub_info);
+diff --git a/lib/kunit/debugfs.c b/lib/kunit/debugfs.c
+index b71db0abc12bf..1048ef1b8d6ec 100644
+--- a/lib/kunit/debugfs.c
++++ b/lib/kunit/debugfs.c
+@@ -52,7 +52,7 @@ static void debugfs_print_result(struct seq_file *seq,
+ static int debugfs_print_results(struct seq_file *seq, void *v)
+ {
+ struct kunit_suite *suite = (struct kunit_suite *)seq->private;
+- bool success = kunit_suite_has_succeeded(suite);
++ enum kunit_status success = kunit_suite_has_succeeded(suite);
+ struct kunit_case *test_case;
+
+ if (!suite || !suite->log)
+diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
+index 22640c9ee8198..96f96e42ce062 100644
+--- a/lib/kunit/executor.c
++++ b/lib/kunit/executor.c
+@@ -71,9 +71,13 @@ kunit_filter_tests(struct kunit_suite *const suite, const char *test_glob)
+
+ /* Use memcpy to workaround copy->name being const. */
+ copy = kmalloc(sizeof(*copy), GFP_KERNEL);
++ if (!copy)
++ return ERR_PTR(-ENOMEM);
+ memcpy(copy, suite, sizeof(*copy));
+
+ filtered = kcalloc(n + 1, sizeof(*filtered), GFP_KERNEL);
++ if (!filtered)
++ return ERR_PTR(-ENOMEM);
+
+ n = 0;
+ kunit_suite_for_each_test_case(suite, test_case) {
+@@ -106,14 +110,16 @@ kunit_filter_subsuite(struct kunit_suite * const * const subsuite,
+
+ filtered = kmalloc_array(n + 1, sizeof(*filtered), GFP_KERNEL);
+ if (!filtered)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ n = 0;
+ for (i = 0; subsuite[i] != NULL; ++i) {
+ if (!glob_match(filter->suite_glob, subsuite[i]->name))
+ continue;
+ filtered_suite = kunit_filter_tests(subsuite[i], filter->test_glob);
+- if (filtered_suite)
++ if (IS_ERR(filtered_suite))
++ return ERR_CAST(filtered_suite);
++ else if (filtered_suite)
+ filtered[n++] = filtered_suite;
+ }
+ filtered[n] = NULL;
+@@ -146,7 +152,8 @@ static void kunit_free_suite_set(struct suite_set suite_set)
+ }
+
+ static struct suite_set kunit_filter_suites(const struct suite_set *suite_set,
+- const char *filter_glob)
++ const char *filter_glob,
++ int *err)
+ {
+ int i;
+ struct kunit_suite * const **copy, * const *filtered_subsuite;
+@@ -166,6 +173,10 @@ static struct suite_set kunit_filter_suites(const struct suite_set *suite_set,
+
+ for (i = 0; i < max; ++i) {
+ filtered_subsuite = kunit_filter_subsuite(suite_set->start[i], &filter);
++ if (IS_ERR(filtered_subsuite)) {
++ *err = PTR_ERR(filtered_subsuite);
++ return filtered;
++ }
+ if (filtered_subsuite)
+ *copy++ = filtered_subsuite;
+ }
+@@ -236,9 +247,15 @@ int kunit_run_all_tests(void)
+ .start = __kunit_suites_start,
+ .end = __kunit_suites_end,
+ };
++ int err = 0;
+
+- if (filter_glob_param)
+- suite_set = kunit_filter_suites(&suite_set, filter_glob_param);
++ if (filter_glob_param) {
++ suite_set = kunit_filter_suites(&suite_set, filter_glob_param, &err);
++ if (err) {
++ pr_err("kunit executor: error filtering suites: %d\n", err);
++ goto out;
++ }
++ }
+
+ if (!action_param)
+ kunit_exec_run_tests(&suite_set);
+@@ -251,9 +268,10 @@ int kunit_run_all_tests(void)
+ kunit_free_suite_set(suite_set);
+ }
+
+- kunit_handle_shutdown();
+
+- return 0;
++out:
++ kunit_handle_shutdown();
++ return err;
+ }
+
+ #if IS_BUILTIN(CONFIG_KUNIT_TEST)
+diff --git a/lib/kunit/executor_test.c b/lib/kunit/executor_test.c
+index 4ed57fd94e427..eac6ff4802738 100644
+--- a/lib/kunit/executor_test.c
++++ b/lib/kunit/executor_test.c
+@@ -137,14 +137,16 @@ static void filter_suites_test(struct kunit *test)
+ .end = suites + 2,
+ };
+ struct suite_set filtered = {.start = NULL, .end = NULL};
++ int err = 0;
+
+ /* Emulate two files, each having one suite */
+ subsuites[0][0] = alloc_fake_suite(test, "suite0", dummy_test_cases);
+ subsuites[1][0] = alloc_fake_suite(test, "suite1", dummy_test_cases);
+
+ /* Filter out suite1 */
+- filtered = kunit_filter_suites(&suite_set, "suite0");
++ filtered = kunit_filter_suites(&suite_set, "suite0", &err);
+ kfree_subsuites_at_end(test, &filtered); /* let us use ASSERTs without leaking */
++ KUNIT_EXPECT_EQ(test, err, 0);
+ KUNIT_ASSERT_EQ(test, filtered.end - filtered.start, (ptrdiff_t)1);
+
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, filtered.start);
+diff --git a/lib/list-test.c b/lib/list-test.c
+index ee09505df16f1..994ea4e3fc1b9 100644
+--- a/lib/list-test.c
++++ b/lib/list-test.c
+@@ -234,6 +234,24 @@ static void list_test_list_bulk_move_tail(struct kunit *test)
+ KUNIT_EXPECT_EQ(test, i, 2);
+ }
+
++static void list_test_list_is_head(struct kunit *test)
++{
++ struct list_head a, b, c;
++
++ /* Two lists: [a] -> b, [c] */
++ INIT_LIST_HEAD(&a);
++ INIT_LIST_HEAD(&c);
++ list_add_tail(&b, &a);
++
++ KUNIT_EXPECT_TRUE_MSG(test, list_is_head(&a, &a),
++ "Head element of same list");
++ KUNIT_EXPECT_FALSE_MSG(test, list_is_head(&a, &b),
++ "Non-head element of same list");
++ KUNIT_EXPECT_FALSE_MSG(test, list_is_head(&a, &c),
++ "Head element of different list");
++}
++
++
+ static void list_test_list_is_first(struct kunit *test)
+ {
+ struct list_head a, b;
+@@ -710,6 +728,7 @@ static struct kunit_case list_test_cases[] = {
+ KUNIT_CASE(list_test_list_move),
+ KUNIT_CASE(list_test_list_move_tail),
+ KUNIT_CASE(list_test_list_bulk_move_tail),
++ KUNIT_CASE(list_test_list_is_head),
+ KUNIT_CASE(list_test_list_is_first),
+ KUNIT_CASE(list_test_list_is_last),
+ KUNIT_CASE(list_test_list_empty),
+diff --git a/lib/string_helpers.c b/lib/string_helpers.c
+index 90f9f1b7afecd..d6a4e2b84ddba 100644
+--- a/lib/string_helpers.c
++++ b/lib/string_helpers.c
+@@ -757,6 +757,9 @@ char **devm_kasprintf_strarray(struct device *dev, const char *prefix, size_t n)
+ return ERR_PTR(-ENOMEM);
+ }
+
++ ptr->n = n;
++ devres_add(dev, ptr);
++
+ return ptr->array;
+ }
+ EXPORT_SYMBOL_GPL(devm_kasprintf_strarray);
+diff --git a/mm/cma.c b/mm/cma.c
+index bc9ca8f3c4871..1d17a4d22b11c 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -37,6 +37,7 @@
+
+ struct cma cma_areas[MAX_CMA_AREAS];
+ unsigned cma_area_count;
++static DEFINE_MUTEX(cma_mutex);
+
+ phys_addr_t cma_get_base(const struct cma *cma)
+ {
+@@ -471,9 +472,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
+ spin_unlock_irq(&cma->lock);
+
+ pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
++ mutex_lock(&cma_mutex);
+ ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
+ GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+-
++ mutex_unlock(&cma_mutex);
+ if (ret == 0) {
+ page = pfn_to_page(pfn);
+ break;
+diff --git a/mm/compaction.c b/mm/compaction.c
+index b4e94cda30190..ffaa0dad7fcd1 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1821,6 +1821,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+
+ update_fast_start_pfn(cc, free_pfn);
+ pfn = pageblock_start_pfn(free_pfn);
++ if (pfn < cc->zone->zone_start_pfn)
++ pfn = cc->zone->zone_start_pfn;
+ cc->fast_search_fail = 0;
+ found_block = true;
+ set_pageblock_skip(freepage);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index e2dc190c67256..c1a0981861848 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -6557,7 +6557,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+ pud_clear(pud);
+ put_page(virt_to_page(ptep));
+ mm_dec_nr_pmds(mm);
+- *addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
++ /*
++ * This update of passed address optimizes loops sequentially
++ * processing addresses in increments of huge page size (PMD_SIZE
++ * in this case). By clearing the pud, a PUD_SIZE area is unmapped.
++ * Update address to the 'last page' in the cleared area so that
++ * calling loop can move to first page past this area.
++ */
++ *addr |= PUD_SIZE - PMD_SIZE;
+ return 1;
+ }
+
+diff --git a/mm/memremap.c b/mm/memremap.c
+index 6aa5f0c2d11fd..e8f4d22d650f0 100644
+--- a/mm/memremap.c
++++ b/mm/memremap.c
+@@ -228,7 +228,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
+
+ if (!mhp_range_allowed(range->start, range_len(range), !is_private)) {
+ error = -EINVAL;
+- goto err_pfn_remap;
++ goto err_kasan;
+ }
+
+ mem_hotplug_begin();
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index e6f211dcf82e7..b1caa1c6c887c 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5305,8 +5305,8 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+ page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
+ pcp, pcp_list);
+ if (unlikely(!page)) {
+- /* Try and get at least one page */
+- if (!nr_populated)
++ /* Try and allocate at least one page */
++ if (!nr_account)
+ goto failed_irq;
+ break;
+ }
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 84312c8365493..ac06c9724c7f3 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -481,7 +481,7 @@ static bool hci_setup_sync_conn(struct hci_conn *conn, __u16 handle)
+
+ bool hci_setup_sync(struct hci_conn *conn, __u16 handle)
+ {
+- if (enhanced_sco_capable(conn->hdev))
++ if (enhanced_sync_conn_capable(conn->hdev))
+ return hci_enhanced_setup_sync_conn(conn, handle);
+
+ return hci_setup_sync_conn(conn, handle);
+@@ -670,7 +670,7 @@ static void le_conn_timeout(struct work_struct *work)
+ /* Disable LE Advertising */
+ le_disable_advertising(hdev);
+ hci_dev_lock(hdev);
+- hci_le_conn_failed(conn, HCI_ERROR_ADVERTISING_TIMEOUT);
++ hci_conn_failed(conn, HCI_ERROR_ADVERTISING_TIMEOUT);
+ hci_dev_unlock(hdev);
+ return;
+ }
+@@ -873,7 +873,7 @@ struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src, uint8_t src_type)
+ EXPORT_SYMBOL(hci_get_route);
+
+ /* This function requires the caller holds hdev->lock */
+-void hci_le_conn_failed(struct hci_conn *conn, u8 status)
++static void hci_le_conn_failed(struct hci_conn *conn, u8 status)
+ {
+ struct hci_dev *hdev = conn->hdev;
+ struct hci_conn_params *params;
+@@ -886,8 +886,6 @@ void hci_le_conn_failed(struct hci_conn *conn, u8 status)
+ params->conn = NULL;
+ }
+
+- conn->state = BT_CLOSED;
+-
+ /* If the status indicates successful cancellation of
+ * the attempt (i.e. Unknown Connection Id) there's no point of
+ * notifying failure since we'll go back to keep trying to
+@@ -899,10 +897,6 @@ void hci_le_conn_failed(struct hci_conn *conn, u8 status)
+ mgmt_connect_failed(hdev, &conn->dst, conn->type,
+ conn->dst_type, status);
+
+- hci_connect_cfm(conn, status);
+-
+- hci_conn_del(conn);
+-
+ /* Since we may have temporarily stopped the background scanning in
+ * favor of connection establishment, we should restart it.
+ */
+@@ -914,6 +908,28 @@ void hci_le_conn_failed(struct hci_conn *conn, u8 status)
+ hci_enable_advertising(hdev);
+ }
+
++/* This function requires the caller holds hdev->lock */
++void hci_conn_failed(struct hci_conn *conn, u8 status)
++{
++ struct hci_dev *hdev = conn->hdev;
++
++ bt_dev_dbg(hdev, "status 0x%2.2x", status);
++
++ switch (conn->type) {
++ case LE_LINK:
++ hci_le_conn_failed(conn, status);
++ break;
++ case ACL_LINK:
++ mgmt_connect_failed(hdev, &conn->dst, conn->type,
++ conn->dst_type, status);
++ break;
++ }
++
++ conn->state = BT_CLOSED;
++ hci_connect_cfm(conn, status);
++ hci_conn_del(conn);
++}
++
+ static void create_le_conn_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ struct hci_conn *conn = data;
+@@ -927,10 +943,11 @@ static void create_le_conn_complete(struct hci_dev *hdev, void *data, int err)
+
+ bt_dev_err(hdev, "request failed to create LE connection: err %d", err);
+
+- if (!conn)
++ /* Check if connection is still pending */
++ if (conn != hci_lookup_le_connect(hdev))
+ goto done;
+
+- hci_le_conn_failed(conn, err);
++ hci_conn_failed(conn, err);
+
+ done:
+ hci_dev_unlock(hdev);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 33a1b4115194e..df5a484d2b2ae 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1835,7 +1835,9 @@ static u8 hci_cc_le_clear_accept_list(struct hci_dev *hdev, void *data,
+ if (rp->status)
+ return rp->status;
+
++ hci_dev_lock(hdev);
+ hci_bdaddr_list_clear(&hdev->le_accept_list);
++ hci_dev_unlock(hdev);
+
+ return rp->status;
+ }
+@@ -1855,8 +1857,10 @@ static u8 hci_cc_le_add_to_accept_list(struct hci_dev *hdev, void *data,
+ if (!sent)
+ return rp->status;
+
++ hci_dev_lock(hdev);
+ hci_bdaddr_list_add(&hdev->le_accept_list, &sent->bdaddr,
+ sent->bdaddr_type);
++ hci_dev_unlock(hdev);
+
+ return rp->status;
+ }
+@@ -1876,8 +1880,10 @@ static u8 hci_cc_le_del_from_accept_list(struct hci_dev *hdev, void *data,
+ if (!sent)
+ return rp->status;
+
++ hci_dev_lock(hdev);
+ hci_bdaddr_list_del(&hdev->le_accept_list, &sent->bdaddr,
+ sent->bdaddr_type);
++ hci_dev_unlock(hdev);
+
+ return rp->status;
+ }
+@@ -1949,9 +1955,11 @@ static u8 hci_cc_le_add_to_resolv_list(struct hci_dev *hdev, void *data,
+ if (!sent)
+ return rp->status;
+
++ hci_dev_lock(hdev);
+ hci_bdaddr_list_add_with_irk(&hdev->le_resolv_list, &sent->bdaddr,
+ sent->bdaddr_type, sent->peer_irk,
+ sent->local_irk);
++ hci_dev_unlock(hdev);
+
+ return rp->status;
+ }
+@@ -1971,8 +1979,10 @@ static u8 hci_cc_le_del_from_resolv_list(struct hci_dev *hdev, void *data,
+ if (!sent)
+ return rp->status;
+
++ hci_dev_lock(hdev);
+ hci_bdaddr_list_del_with_irk(&hdev->le_resolv_list, &sent->bdaddr,
+ sent->bdaddr_type);
++ hci_dev_unlock(hdev);
+
+ return rp->status;
+ }
+@@ -1987,7 +1997,9 @@ static u8 hci_cc_le_clear_resolv_list(struct hci_dev *hdev, void *data,
+ if (rp->status)
+ return rp->status;
+
++ hci_dev_lock(hdev);
+ hci_bdaddr_list_clear(&hdev->le_resolv_list);
++ hci_dev_unlock(hdev);
+
+ return rp->status;
+ }
+@@ -2834,7 +2846,7 @@ static void hci_cs_le_create_conn(struct hci_dev *hdev, u8 status)
+ bt_dev_dbg(hdev, "status 0x%2.2x", status);
+
+ /* All connection failure handling is taken care of by the
+- * hci_le_conn_failed function which is triggered by the HCI
++ * hci_conn_failed function which is triggered by the HCI
+ * request completion callbacks used for connecting.
+ */
+ if (status)
+@@ -2859,7 +2871,7 @@ static void hci_cs_le_ext_create_conn(struct hci_dev *hdev, u8 status)
+ bt_dev_dbg(hdev, "status 0x%2.2x", status);
+
+ /* All connection failure handling is taken care of by the
+- * hci_le_conn_failed function which is triggered by the HCI
++ * hci_conn_failed function which is triggered by the HCI
+ * request completion callbacks used for connecting.
+ */
+ if (status)
+@@ -3173,12 +3185,7 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+
+ done:
+ if (status) {
+- conn->state = BT_CLOSED;
+- if (conn->type == ACL_LINK)
+- mgmt_connect_failed(hdev, &conn->dst, conn->type,
+- conn->dst_type, status);
+- hci_connect_cfm(conn, status);
+- hci_conn_del(conn);
++ hci_conn_failed(conn, status);
+ } else if (ev->link_type == SCO_LINK) {
+ switch (conn->setting & SCO_AIRMODE_MASK) {
+ case SCO_AIRMODE_CVSD:
+@@ -3224,10 +3231,12 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ return;
+ }
+
++ hci_dev_lock(hdev);
++
+ if (hci_bdaddr_list_lookup(&hdev->reject_list, &ev->bdaddr,
+ BDADDR_BREDR)) {
+ hci_reject_conn(hdev, &ev->bdaddr);
+- return;
++ goto unlock;
+ }
+
+ /* Require HCI_CONNECTABLE or an accept list entry to accept the
+@@ -3239,13 +3248,11 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ !hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr,
+ BDADDR_BREDR)) {
+ hci_reject_conn(hdev, &ev->bdaddr);
+- return;
++ goto unlock;
+ }
+
+ /* Connection accepted */
+
+- hci_dev_lock(hdev);
+-
+ ie = hci_inquiry_cache_lookup(hdev, &ev->bdaddr);
+ if (ie)
+ memcpy(ie->data.dev_class, ev->dev_class, 3);
+@@ -3257,8 +3264,7 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ HCI_ROLE_SLAVE);
+ if (!conn) {
+ bt_dev_err(hdev, "no memory for new connection");
+- hci_dev_unlock(hdev);
+- return;
++ goto unlock;
+ }
+ }
+
+@@ -3298,6 +3304,10 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ conn->state = BT_CONNECT2;
+ hci_connect_cfm(conn, 0);
+ }
++
++ return;
++unlock:
++ hci_dev_unlock(hdev);
+ }
+
+ static u8 hci_to_mgmt_reason(u8 err)
+@@ -5610,10 +5620,12 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ status = HCI_ERROR_INVALID_PARAMETERS;
+ }
+
+- if (status) {
+- hci_le_conn_failed(conn, status);
++ /* All connection failure handling is taken care of by the
++ * hci_conn_failed function which is triggered by the HCI
++ * request completion callbacks used for connecting.
++ */
++ if (status)
+ goto unlock;
+- }
+
+ if (conn->dst_type == ADDR_LE_DEV_PUBLIC)
+ addr_type = BDADDR_LE_PUBLIC;
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 42c8047a9897d..f4afe482e3004 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -2260,6 +2260,7 @@ static int active_scan(struct hci_request *req, unsigned long opt)
+ if (err < 0)
+ own_addr_type = ADDR_LE_DEV_PUBLIC;
+
++ hci_dev_lock(hdev);
+ if (hci_is_adv_monitoring(hdev)) {
+ /* Duplicate filter should be disabled when some advertisement
+ * monitor is activated, otherwise AdvMon can only receive one
+@@ -2276,6 +2277,7 @@ static int active_scan(struct hci_request *req, unsigned long opt)
+ */
+ filter_dup = LE_SCAN_FILTER_DUP_DISABLE;
+ }
++ hci_dev_unlock(hdev);
+
+ hci_req_start_scan(req, LE_SCAN_ACTIVE, interval,
+ hdev->le_scan_window_discovery, own_addr_type,
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 8f4c5698913d7..13600bf120b02 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4408,12 +4408,21 @@ static int hci_reject_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ static int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ u8 reason)
+ {
++ int err;
++
+ switch (conn->state) {
+ case BT_CONNECTED:
+ case BT_CONFIG:
+ return hci_disconnect_sync(hdev, conn, reason);
+ case BT_CONNECT:
+- return hci_connect_cancel_sync(hdev, conn);
++ err = hci_connect_cancel_sync(hdev, conn);
++ /* Clean up the hci_conn object if it cannot be cancelled, as it
++ * likely means the controller and host stack are out of sync.
++ */
++ if (err)
++ hci_conn_failed(conn, err);
++
++ return err;
+ case BT_CONNECT2:
+ return hci_reject_conn_sync(hdev, conn, reason);
+ default:
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 8eabf41b29939..1111da4e2f2bd 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -574,19 +574,24 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ addr->sa_family != AF_BLUETOOTH)
+ return -EINVAL;
+
+- if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND)
+- return -EBADFD;
++ lock_sock(sk);
++ if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
++ err = -EBADFD;
++ goto done;
++ }
+
+- if (sk->sk_type != SOCK_SEQPACKET)
+- return -EINVAL;
++ if (sk->sk_type != SOCK_SEQPACKET) {
++ err = -EINVAL;
++ goto done;
++ }
+
+ hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
+- if (!hdev)
+- return -EHOSTUNREACH;
++ if (!hdev) {
++ err = -EHOSTUNREACH;
++ goto done;
++ }
+ hci_dev_lock(hdev);
+
+- lock_sock(sk);
+-
+ /* Set destination address and psm */
+ bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+
+@@ -885,7 +890,7 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ err = -EBADFD;
+ break;
+ }
+- if (enhanced_sco_capable(hdev) &&
++ if (enhanced_sync_conn_capable(hdev) &&
+ voice.setting == BT_VOICE_TRANSPARENT)
+ sco_pi(sk)->codec.id = BT_CODEC_TRANSPARENT;
+ hci_dev_put(hdev);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 5f1ac48122777..f2ce184d72239 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3156,11 +3156,15 @@ int skb_checksum_help(struct sk_buff *skb)
+ }
+
+ offset = skb_checksum_start_offset(skb);
+- BUG_ON(offset >= skb_headlen(skb));
++ ret = -EINVAL;
++ if (WARN_ON_ONCE(offset >= skb_headlen(skb)))
++ goto out;
++
+ csum = skb_checksum(skb, offset, skb->len - offset, 0);
+
+ offset += skb->csum_offset;
+- BUG_ON(offset + sizeof(__sum16) > skb_headlen(skb));
++ if (WARN_ON_ONCE(offset + sizeof(__sum16) > skb_headlen(skb)))
++ goto out;
+
+ ret = skb_ensure_writable(skb, offset + sizeof(__sum16));
+ if (ret)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 7bf84ce34d9e7..96c25c97ee567 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5694,7 +5694,7 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
+ &tp->last_oow_ack_time))
+ tcp_send_dupack(sk, skb);
+ } else if (tcp_reset_check(sk, skb)) {
+- tcp_reset(sk, skb);
++ goto reset;
+ }
+ goto discard;
+ }
+@@ -5730,17 +5730,16 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
+ }
+
+ if (rst_seq_match)
+- tcp_reset(sk, skb);
+- else {
+- /* Disable TFO if RST is out-of-order
+- * and no data has been received
+- * for current active TFO socket
+- */
+- if (tp->syn_fastopen && !tp->data_segs_in &&
+- sk->sk_state == TCP_ESTABLISHED)
+- tcp_fastopen_active_disable(sk);
+- tcp_send_challenge_ack(sk);
+- }
++ goto reset;
++
++ /* Disable TFO if RST is out-of-order
++ * and no data has been received
++ * for current active TFO socket
++ */
++ if (tp->syn_fastopen && !tp->data_segs_in &&
++ sk->sk_state == TCP_ESTABLISHED)
++ tcp_fastopen_active_disable(sk);
++ tcp_send_challenge_ack(sk);
+ goto discard;
+ }
+
+@@ -5765,6 +5764,11 @@ syn_challenge:
+ discard:
+ tcp_drop(sk, skb);
+ return false;
++
++reset:
++ tcp_reset(sk, skb);
++ __kfree_skb(skb);
++ return false;
+ }
+
+ /*
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 4df84013c4e6b..5ec289e0f2ac1 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -800,6 +800,7 @@ static void dev_forward_change(struct inet6_dev *idev)
+ {
+ struct net_device *dev;
+ struct inet6_ifaddr *ifa;
++ LIST_HEAD(tmp_addr_list);
+
+ if (!idev)
+ return;
+@@ -818,14 +819,24 @@ static void dev_forward_change(struct inet6_dev *idev)
+ }
+ }
+
++ read_lock_bh(&idev->lock);
+ list_for_each_entry(ifa, &idev->addr_list, if_list) {
+ if (ifa->flags&IFA_F_TENTATIVE)
+ continue;
++ list_add_tail(&ifa->if_list_aux, &tmp_addr_list);
++ }
++ read_unlock_bh(&idev->lock);
++
++ while (!list_empty(&tmp_addr_list)) {
++ ifa = list_first_entry(&tmp_addr_list,
++ struct inet6_ifaddr, if_list_aux);
++ list_del(&ifa->if_list_aux);
+ if (idev->cnf.forwarding)
+ addrconf_join_anycast(ifa);
+ else
+ addrconf_leave_anycast(ifa);
+ }
++
+ inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ NETCONFA_FORWARDING,
+ dev->ifindex, &idev->cnf);
+@@ -3730,7 +3741,8 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
+ unsigned long event = unregister ? NETDEV_UNREGISTER : NETDEV_DOWN;
+ struct net *net = dev_net(dev);
+ struct inet6_dev *idev;
+- struct inet6_ifaddr *ifa, *tmp;
++ struct inet6_ifaddr *ifa;
++ LIST_HEAD(tmp_addr_list);
+ bool keep_addr = false;
+ bool was_ready;
+ int state, i;
+@@ -3822,16 +3834,23 @@ restart:
+ write_lock_bh(&idev->lock);
+ }
+
+- list_for_each_entry_safe(ifa, tmp, &idev->addr_list, if_list) {
++ list_for_each_entry(ifa, &idev->addr_list, if_list)
++ list_add_tail(&ifa->if_list_aux, &tmp_addr_list);
++ write_unlock_bh(&idev->lock);
++
++ while (!list_empty(&tmp_addr_list)) {
+ struct fib6_info *rt = NULL;
+ bool keep;
+
++ ifa = list_first_entry(&tmp_addr_list,
++ struct inet6_ifaddr, if_list_aux);
++ list_del(&ifa->if_list_aux);
++
+ addrconf_del_dad_work(ifa);
+
+ keep = keep_addr && (ifa->flags & IFA_F_PERMANENT) &&
+ !addr_is_local(&ifa->addr);
+
+- write_unlock_bh(&idev->lock);
+ spin_lock_bh(&ifa->lock);
+
+ if (keep) {
+@@ -3862,15 +3881,14 @@ restart:
+ addrconf_leave_solict(ifa->idev, &ifa->addr);
+ }
+
+- write_lock_bh(&idev->lock);
+ if (!keep) {
++ write_lock_bh(&idev->lock);
+ list_del_rcu(&ifa->if_list);
++ write_unlock_bh(&idev->lock);
+ in6_ifa_put(ifa);
+ }
+ }
+
+- write_unlock_bh(&idev->lock);
+-
+ /* Step 5: Discard anycast and multicast list */
+ if (unregister) {
+ ipv6_ac_destroy_dev(idev);
+@@ -4203,7 +4221,8 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id,
+ send_rs = send_mld &&
+ ipv6_accept_ra(ifp->idev) &&
+ ifp->idev->cnf.rtr_solicits != 0 &&
+- (dev->flags&IFF_LOOPBACK) == 0;
++ (dev->flags & IFF_LOOPBACK) == 0 &&
++ (dev->type != ARPHRD_TUNNEL);
+ read_unlock_bh(&ifp->idev->lock);
+
+ /* While dad is in progress mld report's source address is in6_addrany.
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index 76fc36a68750e..63e15f583e0a6 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -1746,12 +1746,9 @@ int ieee80211_vif_use_reserved_context(struct ieee80211_sub_if_data *sdata)
+
+ if (new_ctx->replace_state == IEEE80211_CHANCTX_REPLACE_NONE) {
+ if (old_ctx)
+- err = ieee80211_vif_use_reserved_reassign(sdata);
+- else
+- err = ieee80211_vif_use_reserved_assign(sdata);
++ return ieee80211_vif_use_reserved_reassign(sdata);
+
+- if (err)
+- return err;
++ return ieee80211_vif_use_reserved_assign(sdata);
+ }
+
+ /*
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 95aaf00c876c3..773ab909c6e85 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1128,6 +1128,9 @@ struct tpt_led_trigger {
+ * a scan complete for an aborted scan.
+ * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being
+ * cancelled.
++ * @SCAN_BEACON_WAIT: Set whenever we're passive scanning because of radar/no-IR
++ * and could send a probe request after receiving a beacon.
++ * @SCAN_BEACON_DONE: Beacon received, we can now send a probe request
+ */
+ enum {
+ SCAN_SW_SCANNING,
+@@ -1136,6 +1139,8 @@ enum {
+ SCAN_COMPLETED,
+ SCAN_ABORTED,
+ SCAN_HW_CANCELLED,
++ SCAN_BEACON_WAIT,
++ SCAN_BEACON_DONE,
+ };
+
+ /**
+diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
+index 9c3b7fc377c17..50cce990784fa 100644
+--- a/net/mac80211/rc80211_minstrel_ht.c
++++ b/net/mac80211/rc80211_minstrel_ht.c
+@@ -362,6 +362,9 @@ minstrel_ht_get_stats(struct minstrel_priv *mp, struct minstrel_ht_sta *mi,
+
+ group = MINSTREL_CCK_GROUP;
+ for (idx = 0; idx < ARRAY_SIZE(mp->cck_rates); idx++) {
++ if (!(mi->supported[group] & BIT(idx)))
++ continue;
++
+ if (rate->idx != mp->cck_rates[idx])
+ continue;
+
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index 5e6b275afc9e6..b698756887eb5 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -281,6 +281,16 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ if (likely(!sdata1 && !sdata2))
+ return;
+
++ if (test_and_clear_bit(SCAN_BEACON_WAIT, &local->scanning)) {
++ /*
++ * we were passive scanning because of radar/no-IR, but
++ * the beacon/proberesp rx gives us an opportunity to upgrade
++ * to active scan
++ */
++ set_bit(SCAN_BEACON_DONE, &local->scanning);
++ ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
++ }
++
+ if (ieee80211_is_probe_resp(mgmt->frame_control)) {
+ struct cfg80211_scan_request *scan_req;
+ struct cfg80211_sched_scan_request *sched_scan_req;
+@@ -787,6 +797,8 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata,
+ IEEE80211_CHAN_RADAR)) ||
+ !req->n_ssids) {
+ next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
++ if (req->n_ssids)
++ set_bit(SCAN_BEACON_WAIT, &local->scanning);
+ } else {
+ ieee80211_scan_state_send_probe(local, &next_delay);
+ next_delay = IEEE80211_CHANNEL_TIME;
+@@ -998,6 +1010,8 @@ set_channel:
+ !scan_req->n_ssids) {
+ *next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
+ local->next_scan_state = SCAN_DECISION;
++ if (scan_req->n_ssids)
++ set_bit(SCAN_BEACON_WAIT, &local->scanning);
+ return;
+ }
+
+@@ -1090,6 +1104,8 @@ void ieee80211_scan_work(struct work_struct *work)
+ goto out;
+ }
+
++ clear_bit(SCAN_BEACON_WAIT, &local->scanning);
++
+ /*
+ * as long as no delay is required advance immediately
+ * without scheduling a new work
+@@ -1100,6 +1116,10 @@ void ieee80211_scan_work(struct work_struct *work)
+ goto out_complete;
+ }
+
++ if (test_and_clear_bit(SCAN_BEACON_DONE, &local->scanning) &&
++ local->next_scan_state == SCAN_DECISION)
++ local->next_scan_state = SCAN_SEND_PROBE;
++
+ switch (local->next_scan_state) {
+ case SCAN_DECISION:
+ /* if no more bands/channels left, complete scan */
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 1eb83cbe8aae0..770c4e36c7762 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -261,14 +261,25 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
+ spin_unlock_bh(&pm->lock);
+ }
+
+-void mptcp_pm_mp_prio_received(struct sock *sk, u8 bkup)
++void mptcp_pm_mp_prio_received(struct sock *ssk, u8 bkup)
+ {
+- struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
++ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
++ struct sock *sk = subflow->conn;
++ struct mptcp_sock *msk;
+
+ pr_debug("subflow->backup=%d, bkup=%d\n", subflow->backup, bkup);
+- subflow->backup = bkup;
++ msk = mptcp_sk(sk);
++ if (subflow->backup != bkup) {
++ subflow->backup = bkup;
++ mptcp_data_lock(sk);
++ if (!sock_owned_by_user(sk))
++ msk->last_snd = NULL;
++ else
++ __set_bit(MPTCP_RESET_SCHEDULER, &msk->cb_flags);
++ mptcp_data_unlock(sk);
++ }
+
+- mptcp_event(MPTCP_EVENT_SUB_PRIORITY, mptcp_sk(subflow->conn), sk, GFP_ATOMIC);
++ mptcp_event(MPTCP_EVENT_SUB_PRIORITY, msk, ssk, GFP_ATOMIC);
+ }
+
+ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 4b5d795383cd6..3bc778c2d0c2c 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -745,6 +745,8 @@ static int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ if (!addresses_equal(&local, addr, addr->port))
+ continue;
+
++ if (subflow->backup != bkup)
++ msk->last_snd = NULL;
+ subflow->backup = bkup;
+ subflow->send_mp_prio = 1;
+ subflow->request_bkup = bkup;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 014c9d88f9479..4a50906f4d8f3 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -3088,15 +3088,19 @@ static void mptcp_release_cb(struct sock *sk)
+ spin_lock_bh(&sk->sk_lock.slock);
+ }
+
+- /* be sure to set the current sk state before tacking actions
+- * depending on sk_state
+- */
+- if (__test_and_clear_bit(MPTCP_CONNECTED, &msk->cb_flags))
+- __mptcp_set_connected(sk);
+ if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
+ __mptcp_clean_una_wakeup(sk);
+- if (__test_and_clear_bit(MPTCP_ERROR_REPORT, &msk->cb_flags))
+- __mptcp_error_report(sk);
++ if (unlikely(msk->cb_flags)) {
++ /* be sure to set the current sk state before taking actions
++ * depending on sk_state, that is, before processing MPTCP_ERROR_REPORT
++ */
++ if (__test_and_clear_bit(MPTCP_CONNECTED, &msk->cb_flags))
++ __mptcp_set_connected(sk);
++ if (__test_and_clear_bit(MPTCP_ERROR_REPORT, &msk->cb_flags))
++ __mptcp_error_report(sk);
++ if (__test_and_clear_bit(MPTCP_RESET_SCHEDULER, &msk->cb_flags))
++ msk->last_snd = NULL;
++ }
+
+ __mptcp_update_rmem(sk);
+ }
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 46b343a0b17ea..2ae8276b82fc0 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -124,6 +124,7 @@
+ #define MPTCP_RETRANSMIT 4
+ #define MPTCP_FLUSH_JOIN_LIST 5
+ #define MPTCP_CONNECTED 6
++#define MPTCP_RESET_SCHEDULER 7
+
+ static inline bool before64(__u64 seq1, __u64 seq2)
+ {
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index 5b286e1e0a6ff..6ff3e10ff8e35 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -1166,6 +1166,7 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ if (dev->rfkill) {
+ rfkill_unregister(dev->rfkill);
+ rfkill_destroy(dev->rfkill);
++ dev->rfkill = NULL;
+ }
+ dev->shutting_down = true;
+ device_unlock(&dev->dev);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 969e532f77a90..f2d593e27b64f 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -68,7 +68,7 @@ struct rxrpc_net {
+ struct proc_dir_entry *proc_net; /* Subdir in /proc/net */
+ u32 epoch; /* Local epoch for detecting local-end reset */
+ struct list_head calls; /* List of calls active in this namespace */
+- rwlock_t call_lock; /* Lock for ->calls */
++ spinlock_t call_lock; /* Lock for ->calls */
+ atomic_t nr_calls; /* Count of allocated calls */
+
+ atomic_t nr_conns;
+@@ -676,13 +676,12 @@ struct rxrpc_call {
+
+ spinlock_t input_lock; /* Lock for packet input to this call */
+
+- /* receive-phase ACK management */
++ /* Receive-phase ACK management (ACKs we send). */
+ u8 ackr_reason; /* reason to ACK */
+ rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */
+- rxrpc_serial_t ackr_first_seq; /* first sequence number received */
+- rxrpc_seq_t ackr_prev_seq; /* previous sequence number received */
+- rxrpc_seq_t ackr_consumed; /* Highest packet shown consumed */
+- rxrpc_seq_t ackr_seen; /* Highest packet shown seen */
++ rxrpc_seq_t ackr_highest_seq; /* Highest sequence number received */
++ atomic_t ackr_nr_unacked; /* Number of unacked packets */
++ atomic_t ackr_nr_consumed; /* Number of packets needing hard ACK */
+
+ /* RTT management */
+ rxrpc_serial_t rtt_serial[4]; /* Serial number of DATA or PING sent */
+@@ -692,8 +691,10 @@ struct rxrpc_call {
+ #define RXRPC_CALL_RTT_AVAIL_MASK 0xf
+ #define RXRPC_CALL_RTT_PEND_SHIFT 8
+
+- /* transmission-phase ACK management */
++ /* Transmission-phase ACK management (ACKs we've received). */
+ ktime_t acks_latest_ts; /* Timestamp of latest ACK received */
++ rxrpc_seq_t acks_first_seq; /* first sequence number received */
++ rxrpc_seq_t acks_prev_seq; /* Highest previousPacket received */
+ rxrpc_seq_t acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */
+ rxrpc_seq_t acks_lost_top; /* tx_top at the time lost-ack ping sent */
+ rxrpc_serial_t acks_lost_ping; /* Serial number of probe ACK */
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 1ae90fb979362..8b24ffbc72efb 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -140,9 +140,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
+ write_unlock(&rx->call_lock);
+
+ rxnet = call->rxnet;
+- write_lock(&rxnet->call_lock);
+- list_add_tail(&call->link, &rxnet->calls);
+- write_unlock(&rxnet->call_lock);
++ spin_lock_bh(&rxnet->call_lock);
++ list_add_tail_rcu(&call->link, &rxnet->calls);
++ spin_unlock_bh(&rxnet->call_lock);
+
+ b->call_backlog[call_head] = call;
+ smp_store_release(&b->call_backlog_head, (call_head + 1) & (size - 1));
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index 22e05de5d1ca9..f8ecad2b730e8 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -377,9 +377,9 @@ recheck_state:
+ if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) &&
+ (int)call->conn->hi_serial - (int)call->rx_serial > 0) {
+ trace_rxrpc_call_reset(call);
+- rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ECONNRESET);
++ rxrpc_abort_call("EXP", call, 0, RX_CALL_DEAD, -ECONNRESET);
+ } else {
+- rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ETIME);
++ rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME);
+ }
+ set_bit(RXRPC_CALL_EV_ABORT, &call->events);
+ goto recheck_state;
+@@ -406,7 +406,8 @@ recheck_state:
+ goto recheck_state;
+ }
+
+- if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events)) {
++ if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events) &&
++ call->state != RXRPC_CALL_CLIENT_RECV_REPLY) {
+ rxrpc_resend(call, now);
+ goto recheck_state;
+ }
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 043508fd8d8a5..25c9a2cbf048c 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -337,9 +337,9 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ write_unlock(&rx->call_lock);
+
+ rxnet = call->rxnet;
+- write_lock(&rxnet->call_lock);
+- list_add_tail(&call->link, &rxnet->calls);
+- write_unlock(&rxnet->call_lock);
++ spin_lock_bh(&rxnet->call_lock);
++ list_add_tail_rcu(&call->link, &rxnet->calls);
++ spin_unlock_bh(&rxnet->call_lock);
+
+ /* From this point on, the call is protected by its own lock. */
+ release_sock(&rx->sk);
+@@ -631,9 +631,9 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+ ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
+
+ if (!list_empty(&call->link)) {
+- write_lock(&rxnet->call_lock);
++ spin_lock_bh(&rxnet->call_lock);
+ list_del_init(&call->link);
+- write_unlock(&rxnet->call_lock);
++ spin_unlock_bh(&rxnet->call_lock);
+ }
+
+ rxrpc_cleanup_call(call);
+@@ -705,7 +705,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
+ _enter("");
+
+ if (!list_empty(&rxnet->calls)) {
+- write_lock(&rxnet->call_lock);
++ spin_lock_bh(&rxnet->call_lock);
+
+ while (!list_empty(&rxnet->calls)) {
+ call = list_entry(rxnet->calls.next,
+@@ -720,12 +720,12 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
+ rxrpc_call_states[call->state],
+ call->flags, call->events);
+
+- write_unlock(&rxnet->call_lock);
++ spin_unlock_bh(&rxnet->call_lock);
+ cond_resched();
+- write_lock(&rxnet->call_lock);
++ spin_lock_bh(&rxnet->call_lock);
+ }
+
+- write_unlock(&rxnet->call_lock);
++ spin_unlock_bh(&rxnet->call_lock);
+ }
+
+ atomic_dec(&rxnet->nr_calls);
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index b2159dbf5412c..660cd9b1a4658 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -183,7 +183,7 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn,
+ chan->last_type = RXRPC_PACKET_TYPE_ABORT;
+ break;
+ default:
+- chan->last_abort = RX_USER_ABORT;
++ chan->last_abort = RX_CALL_DEAD;
+ chan->last_type = RXRPC_PACKET_TYPE_ABORT;
+ break;
+ }
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index dc201363f2c48..3521ebd0ee41c 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -412,8 +412,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ {
+ struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ enum rxrpc_call_state state;
+- unsigned int j, nr_subpackets;
+- rxrpc_serial_t serial = sp->hdr.serial, ack_serial = 0;
++ unsigned int j, nr_subpackets, nr_unacked = 0;
++ rxrpc_serial_t serial = sp->hdr.serial, ack_serial = serial;
+ rxrpc_seq_t seq0 = sp->hdr.seq, hard_ack;
+ bool immediate_ack = false, jumbo_bad = false;
+ u8 ack = 0;
+@@ -453,7 +453,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ !rxrpc_receiving_reply(call))
+ goto unlock;
+
+- call->ackr_prev_seq = seq0;
+ hard_ack = READ_ONCE(call->rx_hard_ack);
+
+ nr_subpackets = sp->nr_subpackets;
+@@ -534,6 +533,9 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ ack_serial = serial;
+ }
+
++ if (after(seq0, call->ackr_highest_seq))
++ call->ackr_highest_seq = seq0;
++
+ /* Queue the packet. We use a couple of memory barriers here as need
+ * to make sure that rx_top is perceived to be set after the buffer
+ * pointer and that the buffer pointer is set after the annotation and
+@@ -567,6 +569,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ sp = NULL;
+ }
+
++ nr_unacked++;
++
+ if (last) {
+ set_bit(RXRPC_CALL_RX_LAST, &call->flags);
+ if (!ack) {
+@@ -586,9 +590,14 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ }
+ call->rx_expect_next = seq + 1;
+ }
++ if (!ack)
++ ack_serial = serial;
+ }
+
+ ack:
++ if (atomic_add_return(nr_unacked, &call->ackr_nr_unacked) > 2 && !ack)
++ ack = RXRPC_ACK_IDLE;
++
+ if (ack)
+ rxrpc_propose_ACK(call, ack, ack_serial,
+ immediate_ack, true,
+@@ -812,7 +821,7 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks,
+ static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+ rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt)
+ {
+- rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq);
++ rxrpc_seq_t base = READ_ONCE(call->acks_first_seq);
+
+ if (after(first_pkt, base))
+ return true; /* The window advanced */
+@@ -820,7 +829,7 @@ static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+ if (before(first_pkt, base))
+ return false; /* firstPacket regressed */
+
+- if (after_eq(prev_pkt, call->ackr_prev_seq))
++ if (after_eq(prev_pkt, call->acks_prev_seq))
+ return true; /* previousPacket hasn't regressed. */
+
+ /* Some rx implementations put a serial number in previousPacket. */
+@@ -903,11 +912,38 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ rxrpc_propose_ack_respond_to_ack);
+ }
+
++ /* If we get an EXCEEDS_WINDOW ACK from the server, it probably
++ * indicates that the client address changed due to NAT. The server
++ * lost the call because it switched to a different peer.
++ */
++ if (unlikely(buf.ack.reason == RXRPC_ACK_EXCEEDS_WINDOW) &&
++ first_soft_ack == 1 &&
++ prev_pkt == 0 &&
++ rxrpc_is_client_call(call)) {
++ rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
++ 0, -ENETRESET);
++ return;
++ }
++
++ /* If we get an OUT_OF_SEQUENCE ACK from the server, that can also
++ * indicate a change of address. However, we can retransmit the call
++ * if we still have it buffered to the beginning.
++ */
++ if (unlikely(buf.ack.reason == RXRPC_ACK_OUT_OF_SEQUENCE) &&
++ first_soft_ack == 1 &&
++ prev_pkt == 0 &&
++ call->tx_hard_ack == 0 &&
++ rxrpc_is_client_call(call)) {
++ rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
++ 0, -ENETRESET);
++ return;
++ }
++
+ /* Discard any out-of-order or duplicate ACKs (outside lock). */
+ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+ trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+- first_soft_ack, call->ackr_first_seq,
+- prev_pkt, call->ackr_prev_seq);
++ first_soft_ack, call->acks_first_seq,
++ prev_pkt, call->acks_prev_seq);
+ return;
+ }
+
+@@ -922,14 +958,14 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ /* Discard any out-of-order or duplicate ACKs (inside lock). */
+ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+ trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+- first_soft_ack, call->ackr_first_seq,
+- prev_pkt, call->ackr_prev_seq);
++ first_soft_ack, call->acks_first_seq,
++ prev_pkt, call->acks_prev_seq);
+ goto out;
+ }
+ call->acks_latest_ts = skb->tstamp;
+
+- call->ackr_first_seq = first_soft_ack;
+- call->ackr_prev_seq = prev_pkt;
++ call->acks_first_seq = first_soft_ack;
++ call->acks_prev_seq = prev_pkt;
+
+ /* Parse rwind and mtu sizes if provided. */
+ if (buf.info.rxMTU)
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index cc7e30733feb0..e4d6d432515bc 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -50,7 +50,7 @@ static __net_init int rxrpc_init_net(struct net *net)
+ rxnet->epoch |= RXRPC_RANDOM_EPOCH;
+
+ INIT_LIST_HEAD(&rxnet->calls);
+- rwlock_init(&rxnet->call_lock);
++ spin_lock_init(&rxnet->call_lock);
+ atomic_set(&rxnet->nr_calls, 1);
+
+ atomic_set(&rxnet->nr_conns, 1);
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index a45c83f22236e..9683617db7049 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -74,11 +74,18 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ u8 reason)
+ {
+ rxrpc_serial_t serial;
++ unsigned int tmp;
+ rxrpc_seq_t hard_ack, top, seq;
+ int ix;
+ u32 mtu, jmax;
+ u8 *ackp = pkt->acks;
+
++ tmp = atomic_xchg(&call->ackr_nr_unacked, 0);
++ tmp |= atomic_xchg(&call->ackr_nr_consumed, 0);
++ if (!tmp && (reason == RXRPC_ACK_DELAY ||
++ reason == RXRPC_ACK_IDLE))
++ return 0;
++
+ /* Barrier against rxrpc_input_data(). */
+ serial = call->ackr_serial;
+ hard_ack = READ_ONCE(call->rx_hard_ack);
+@@ -89,7 +96,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ pkt->ack.bufferSpace = htons(8);
+ pkt->ack.maxSkew = htons(0);
+ pkt->ack.firstPacket = htonl(hard_ack + 1);
+- pkt->ack.previousPacket = htonl(call->ackr_prev_seq);
++ pkt->ack.previousPacket = htonl(call->ackr_highest_seq);
+ pkt->ack.serial = htonl(serial);
+ pkt->ack.reason = reason;
+ pkt->ack.nAcks = top - hard_ack;
+@@ -223,6 +230,10 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ n = rxrpc_fill_out_ack(conn, call, pkt, &hard_ack, &top, reason);
+
+ spin_unlock_bh(&call->lock);
++ if (n == 0) {
++ kfree(pkt);
++ return 0;
++ }
+
+ iov[0].iov_base = pkt;
+ iov[0].iov_len = sizeof(pkt->whdr) + sizeof(pkt->ack) + n;
+@@ -259,13 +270,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ ntohl(pkt->ack.serial),
+ false, true,
+ rxrpc_propose_ack_retry_tx);
+- } else {
+- spin_lock_bh(&call->lock);
+- if (after(hard_ack, call->ackr_consumed))
+- call->ackr_consumed = hard_ack;
+- if (after(top, call->ackr_seen))
+- call->ackr_seen = top;
+- spin_unlock_bh(&call->lock);
+ }
+
+ rxrpc_set_keepalive(call);
+diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
+index e2f990754f882..5a67955cc00f6 100644
+--- a/net/rxrpc/proc.c
++++ b/net/rxrpc/proc.c
+@@ -26,29 +26,23 @@ static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = {
+ */
+ static void *rxrpc_call_seq_start(struct seq_file *seq, loff_t *_pos)
+ __acquires(rcu)
+- __acquires(rxnet->call_lock)
+ {
+ struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
+
+ rcu_read_lock();
+- read_lock(&rxnet->call_lock);
+- return seq_list_start_head(&rxnet->calls, *_pos);
++ return seq_list_start_head_rcu(&rxnet->calls, *_pos);
+ }
+
+ static void *rxrpc_call_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
+
+- return seq_list_next(v, &rxnet->calls, pos);
++ return seq_list_next_rcu(v, &rxnet->calls, pos);
+ }
+
+ static void rxrpc_call_seq_stop(struct seq_file *seq, void *v)
+- __releases(rxnet->call_lock)
+ __releases(rcu)
+ {
+- struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
+-
+- read_unlock(&rxnet->call_lock);
+ rcu_read_unlock();
+ }
+
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index eca6dda26c77e..250f23bc1c076 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -260,11 +260,9 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call)
+ rxrpc_end_rx_phase(call, serial);
+ } else {
+ /* Check to see if there's an ACK that needs sending. */
+- if (after_eq(hard_ack, call->ackr_consumed + 2) ||
+- after_eq(top, call->ackr_seen + 2) ||
+- (hard_ack == top && after(hard_ack, call->ackr_consumed)))
+- rxrpc_propose_ACK(call, RXRPC_ACK_DELAY, serial,
+- true, true,
++ if (atomic_inc_return(&call->ackr_nr_consumed) > 2)
++ rxrpc_propose_ACK(call, RXRPC_ACK_IDLE, serial,
++ true, false,
+ rxrpc_propose_ack_rotate_rx);
+ if (call->ackr_reason && call->ackr_reason != RXRPC_ACK_DELAY)
+ rxrpc_send_ack_packet(call, false, NULL);
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index af8ad6c30b9fb..1d38e279e2efa 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -444,6 +444,12 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+
+ success:
+ ret = copied;
++ if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) {
++ read_lock_bh(&call->state_lock);
++ if (call->error < 0)
++ ret = call->error;
++ read_unlock_bh(&call->state_lock);
++ }
+ out:
+ call->tx_pending = skb;
+ _leave(" = %d", ret);
+diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c
+index 540351d6a5f47..555e0910786bc 100644
+--- a/net/rxrpc/sysctl.c
++++ b/net/rxrpc/sysctl.c
+@@ -12,7 +12,7 @@
+
+ static struct ctl_table_header *rxrpc_sysctl_reg_table;
+ static const unsigned int four = 4;
+-static const unsigned int thirtytwo = 32;
++static const unsigned int max_backlog = RXRPC_BACKLOG_MAX - 1;
+ static const unsigned int n_65535 = 65535;
+ static const unsigned int n_max_acks = RXRPC_RXTX_BUFF_SIZE - 1;
+ static const unsigned long one_jiffy = 1;
+@@ -89,7 +89,7 @@ static struct ctl_table rxrpc_sysctl_table[] = {
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = (void *)&four,
+- .extra2 = (void *)&thirtytwo,
++ .extra2 = (void *)&max_backlog,
+ },
+ {
+ .procname = "rx_window_size",
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 90e12bafdd489..4f43afa8678f9 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -92,6 +92,7 @@ int sctp_rcv(struct sk_buff *skb)
+ struct sctp_chunk *chunk;
+ union sctp_addr src;
+ union sctp_addr dest;
++ int bound_dev_if;
+ int family;
+ struct sctp_af *af;
+ struct net *net = dev_net(skb->dev);
+@@ -169,7 +170,8 @@ int sctp_rcv(struct sk_buff *skb)
+ * If a frame arrives on an interface and the receiving socket is
+ * bound to another interface, via SO_BINDTODEVICE, treat it as OOTB
+ */
+- if (sk->sk_bound_dev_if && (sk->sk_bound_dev_if != af->skb_iif(skb))) {
++ bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
++ if (bound_dev_if && (bound_dev_if != af->skb_iif(skb))) {
+ if (transport) {
+ sctp_transport_put(transport);
+ asoc = NULL;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index fd9d9cfd0f3dd..b9fe31834354d 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1417,9 +1417,9 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
+ if (rc && rc != -EINPROGRESS)
+ goto out;
+
+- sock_hold(&smc->sk); /* sock put in passive closing */
+ if (smc->use_fallback)
+ goto out;
++ sock_hold(&smc->sk); /* sock put in passive closing */
+ if (flags & O_NONBLOCK) {
+ if (queue_work(smc_hs_wq, &smc->connect_work))
+ smc->connect_nonblock = 1;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 0c20df052db37..0aad974f17a76 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3674,6 +3674,7 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
+ wdev_lock(wdev);
+ switch (wdev->iftype) {
+ case NL80211_IFTYPE_AP:
++ case NL80211_IFTYPE_P2P_GO:
+ if (wdev->ssid_len &&
+ nla_put(msg, NL80211_ATTR_SSID, wdev->ssid_len, wdev->ssid))
+ goto nla_put_failure_locked;
+@@ -16193,8 +16194,7 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ .doit = nl80211_color_change,
+ .flags = GENL_UNS_ADMIN_PERM,
+- .internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+- NL80211_FLAG_NEED_RTNL,
++ .internal_flags = NL80211_FLAG_NEED_NETDEV_UP,
+ },
+ {
+ .cmd = NL80211_CMD_SET_FILS_AAD,
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index ec25924a1c26b..512d38d47466f 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -807,6 +807,8 @@ static int __init load_builtin_regdb_keys(void)
+ return 0;
+ }
+
++MODULE_FIRMWARE("regulatory.db.p7s");
++
+ static bool regdb_has_valid_signature(const u8 *data, unsigned int size)
+ {
+ const struct firmware *sig;
+@@ -1078,6 +1080,8 @@ static void regdb_fw_cb(const struct firmware *fw, void *context)
+ release_firmware(fw);
+ }
+
++MODULE_FIRMWARE("regulatory.db");
++
+ static int query_regdb_file(const char *alpha2)
+ {
+ ASSERT_RTNL();
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index 38638845db9d7..72bb85c18804f 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -368,16 +368,15 @@ VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+
+ $(obj)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
+ ifeq ($(VMLINUX_H),)
++ifeq ($(VMLINUX_BTF),)
++ $(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)",\
++ build the kernel or set VMLINUX_BTF or VMLINUX_H variable)
++endif
+ $(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
+ else
+ $(Q)cp "$(VMLINUX_H)" $@
+ endif
+
+-ifeq ($(VMLINUX_BTF),)
+- $(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)",\
+- build the kernel or set VMLINUX_BTF variable)
+-endif
+-
+ clean-files += vmlinux.h
+
+ # Get Clang's default includes on this system, as opposed to those seen by
+diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
+index 8859fc1935428..c089e9cdaf328 100644
+--- a/samples/landlock/sandboxer.c
++++ b/samples/landlock/sandboxer.c
+@@ -22,9 +22,9 @@
+ #include <unistd.h>
+
+ #ifndef landlock_create_ruleset
+-static inline int landlock_create_ruleset(
+- const struct landlock_ruleset_attr *const attr,
+- const size_t size, const __u32 flags)
++static inline int
++landlock_create_ruleset(const struct landlock_ruleset_attr *const attr,
++ const size_t size, const __u32 flags)
+ {
+ return syscall(__NR_landlock_create_ruleset, attr, size, flags);
+ }
+@@ -32,17 +32,18 @@ static inline int landlock_create_ruleset(
+
+ #ifndef landlock_add_rule
+ static inline int landlock_add_rule(const int ruleset_fd,
+- const enum landlock_rule_type rule_type,
+- const void *const rule_attr, const __u32 flags)
++ const enum landlock_rule_type rule_type,
++ const void *const rule_attr,
++ const __u32 flags)
+ {
+- return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type,
+- rule_attr, flags);
++ return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type, rule_attr,
++ flags);
+ }
+ #endif
+
+ #ifndef landlock_restrict_self
+ static inline int landlock_restrict_self(const int ruleset_fd,
+- const __u32 flags)
++ const __u32 flags)
+ {
+ return syscall(__NR_landlock_restrict_self, ruleset_fd, flags);
+ }
+@@ -70,14 +71,17 @@ static int parse_path(char *env_path, const char ***const path_list)
+ return num_paths;
+ }
+
++/* clang-format off */
++
+ #define ACCESS_FILE ( \
+ LANDLOCK_ACCESS_FS_EXECUTE | \
+ LANDLOCK_ACCESS_FS_WRITE_FILE | \
+ LANDLOCK_ACCESS_FS_READ_FILE)
+
+-static int populate_ruleset(
+- const char *const env_var, const int ruleset_fd,
+- const __u64 allowed_access)
++/* clang-format on */
++
++static int populate_ruleset(const char *const env_var, const int ruleset_fd,
++ const __u64 allowed_access)
+ {
+ int num_paths, i, ret = 1;
+ char *env_path_name;
+@@ -107,12 +111,10 @@ static int populate_ruleset(
+ for (i = 0; i < num_paths; i++) {
+ struct stat statbuf;
+
+- path_beneath.parent_fd = open(path_list[i], O_PATH |
+- O_CLOEXEC);
++ path_beneath.parent_fd = open(path_list[i], O_PATH | O_CLOEXEC);
+ if (path_beneath.parent_fd < 0) {
+ fprintf(stderr, "Failed to open \"%s\": %s\n",
+- path_list[i],
+- strerror(errno));
++ path_list[i], strerror(errno));
+ goto out_free_name;
+ }
+ if (fstat(path_beneath.parent_fd, &statbuf)) {
+@@ -123,9 +125,10 @@ static int populate_ruleset(
+ if (!S_ISDIR(statbuf.st_mode))
+ path_beneath.allowed_access &= ACCESS_FILE;
+ if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0)) {
+- fprintf(stderr, "Failed to update the ruleset with \"%s\": %s\n",
+- path_list[i], strerror(errno));
++ &path_beneath, 0)) {
++ fprintf(stderr,
++ "Failed to update the ruleset with \"%s\": %s\n",
++ path_list[i], strerror(errno));
+ close(path_beneath.parent_fd);
+ goto out_free_name;
+ }
+@@ -139,6 +142,8 @@ out_free_name:
+ return ret;
+ }
+
++/* clang-format off */
++
+ #define ACCESS_FS_ROUGHLY_READ ( \
+ LANDLOCK_ACCESS_FS_EXECUTE | \
+ LANDLOCK_ACCESS_FS_READ_FILE | \
+@@ -156,6 +161,8 @@ out_free_name:
+ LANDLOCK_ACCESS_FS_MAKE_BLOCK | \
+ LANDLOCK_ACCESS_FS_MAKE_SYM)
+
++/* clang-format on */
++
+ int main(const int argc, char *const argv[], char *const *const envp)
+ {
+ const char *cmd_path;
+@@ -163,55 +170,64 @@ int main(const int argc, char *const argv[], char *const *const envp)
+ int ruleset_fd;
+ struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_fs = ACCESS_FS_ROUGHLY_READ |
+- ACCESS_FS_ROUGHLY_WRITE,
++ ACCESS_FS_ROUGHLY_WRITE,
+ };
+
+ if (argc < 2) {
+- fprintf(stderr, "usage: %s=\"...\" %s=\"...\" %s <cmd> [args]...\n\n",
+- ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
+- fprintf(stderr, "Launch a command in a restricted environment.\n\n");
++ fprintf(stderr,
++ "usage: %s=\"...\" %s=\"...\" %s <cmd> [args]...\n\n",
++ ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
++ fprintf(stderr,
++ "Launch a command in a restricted environment.\n\n");
+ fprintf(stderr, "Environment variables containing paths, "
+ "each separated by a colon:\n");
+- fprintf(stderr, "* %s: list of paths allowed to be used in a read-only way.\n",
+- ENV_FS_RO_NAME);
+- fprintf(stderr, "* %s: list of paths allowed to be used in a read-write way.\n",
+- ENV_FS_RW_NAME);
+- fprintf(stderr, "\nexample:\n"
+- "%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" "
+- "%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" "
+- "%s bash -i\n",
+- ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
++ fprintf(stderr,
++ "* %s: list of paths allowed to be used in a read-only way.\n",
++ ENV_FS_RO_NAME);
++ fprintf(stderr,
++ "* %s: list of paths allowed to be used in a read-write way.\n",
++ ENV_FS_RW_NAME);
++ fprintf(stderr,
++ "\nexample:\n"
++ "%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" "
++ "%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" "
++ "%s bash -i\n",
++ ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
+ return 1;
+ }
+
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ if (ruleset_fd < 0) {
+ const int err = errno;
+
+ perror("Failed to create a ruleset");
+ switch (err) {
+ case ENOSYS:
+- fprintf(stderr, "Hint: Landlock is not supported by the current kernel. "
+- "To support it, build the kernel with "
+- "CONFIG_SECURITY_LANDLOCK=y and prepend "
+- "\"landlock,\" to the content of CONFIG_LSM.\n");
++ fprintf(stderr,
++ "Hint: Landlock is not supported by the current kernel. "
++ "To support it, build the kernel with "
++ "CONFIG_SECURITY_LANDLOCK=y and prepend "
++ "\"landlock,\" to the content of CONFIG_LSM.\n");
+ break;
+ case EOPNOTSUPP:
+- fprintf(stderr, "Hint: Landlock is currently disabled. "
+- "It can be enabled in the kernel configuration by "
+- "prepending \"landlock,\" to the content of CONFIG_LSM, "
+- "or at boot time by setting the same content to the "
+- "\"lsm\" kernel parameter.\n");
++ fprintf(stderr,
++ "Hint: Landlock is currently disabled. "
++ "It can be enabled in the kernel configuration by "
++ "prepending \"landlock,\" to the content of CONFIG_LSM, "
++ "or at boot time by setting the same content to the "
++ "\"lsm\" kernel parameter.\n");
+ break;
+ }
+ return 1;
+ }
+ if (populate_ruleset(ENV_FS_RO_NAME, ruleset_fd,
+- ACCESS_FS_ROUGHLY_READ)) {
++ ACCESS_FS_ROUGHLY_READ)) {
+ goto err_close_ruleset;
+ }
+ if (populate_ruleset(ENV_FS_RW_NAME, ruleset_fd,
+- ACCESS_FS_ROUGHLY_READ | ACCESS_FS_ROUGHLY_WRITE)) {
++ ACCESS_FS_ROUGHLY_READ |
++ ACCESS_FS_ROUGHLY_WRITE)) {
+ goto err_close_ruleset;
+ }
+ if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+@@ -228,7 +244,7 @@ int main(const int argc, char *const argv[], char *const *const envp)
+ cmd_argv = argv + 1;
+ execvpe(cmd_path, cmd_argv, envp);
+ fprintf(stderr, "Failed to execute \"%s\": %s\n", cmd_path,
+- strerror(errno));
++ strerror(errno));
+ fprintf(stderr, "Hint: access to the binary, the interpreter or "
+ "shared libraries may be denied.\n");
+ return 1;
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index 6c6439f69a725..0e6268d598835 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -44,17 +44,6 @@
+ set -o errexit
+ set -o nounset
+
+-READELF="${CROSS_COMPILE:-}readelf"
+-ADDR2LINE="${CROSS_COMPILE:-}addr2line"
+-SIZE="${CROSS_COMPILE:-}size"
+-NM="${CROSS_COMPILE:-}nm"
+-
+-command -v awk >/dev/null 2>&1 || die "awk isn't installed"
+-command -v ${READELF} >/dev/null 2>&1 || die "readelf isn't installed"
+-command -v ${ADDR2LINE} >/dev/null 2>&1 || die "addr2line isn't installed"
+-command -v ${SIZE} >/dev/null 2>&1 || die "size isn't installed"
+-command -v ${NM} >/dev/null 2>&1 || die "nm isn't installed"
+-
+ usage() {
+ echo "usage: faddr2line [--list] <object file> <func+offset> <func+offset>..." >&2
+ exit 1
+@@ -69,6 +58,14 @@ die() {
+ exit 1
+ }
+
++READELF="${CROSS_COMPILE:-}readelf"
++ADDR2LINE="${CROSS_COMPILE:-}addr2line"
++AWK="awk"
++
++command -v ${AWK} >/dev/null 2>&1 || die "${AWK} isn't installed"
++command -v ${READELF} >/dev/null 2>&1 || die "${READELF} isn't installed"
++command -v ${ADDR2LINE} >/dev/null 2>&1 || die "${ADDR2LINE} isn't installed"
++
+ # Try to figure out the source directory prefix so we can remove it from the
+ # addr2line output. HACK ALERT: This assumes that start_kernel() is in
+ # init/main.c! This only works for vmlinux. Otherwise it falls back to
+@@ -76,7 +73,7 @@ die() {
+ find_dir_prefix() {
+ local objfile=$1
+
+- local start_kernel_addr=$(${READELF} -sW $objfile | awk '$8 == "start_kernel" {printf "0x%s", $2}')
++ local start_kernel_addr=$(${READELF} --symbols --wide $objfile | ${AWK} '$8 == "start_kernel" {printf "0x%s", $2}')
+ [[ -z $start_kernel_addr ]] && return
+
+ local file_line=$(${ADDR2LINE} -e $objfile $start_kernel_addr)
+@@ -97,86 +94,133 @@ __faddr2line() {
+ local dir_prefix=$3
+ local print_warnings=$4
+
+- local func=${func_addr%+*}
++ local sym_name=${func_addr%+*}
+ local offset=${func_addr#*+}
+ offset=${offset%/*}
+- local size=
+- [[ $func_addr =~ "/" ]] && size=${func_addr#*/}
++ local user_size=
++ [[ $func_addr =~ "/" ]] && user_size=${func_addr#*/}
+
+- if [[ -z $func ]] || [[ -z $offset ]] || [[ $func = $func_addr ]]; then
++ if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then
+ warn "bad func+offset $func_addr"
+ DONE=1
+ return
+ fi
+
+ # Go through each of the object's symbols which match the func name.
+- # In rare cases there might be duplicates.
+- file_end=$(${SIZE} -Ax $objfile | awk '$1 == ".text" {print $2}')
+- while read symbol; do
+- local fields=($symbol)
+- local sym_base=0x${fields[0]}
+- local sym_type=${fields[1]}
+- local sym_end=${fields[3]}
+-
+- # calculate the size
+- local sym_size=$(($sym_end - $sym_base))
++ # In rare cases there might be duplicates, in which case we print all
++ # matches.
++ while read line; do
++ local fields=($line)
++ local sym_addr=0x${fields[1]}
++ local sym_elf_size=${fields[2]}
++ local sym_sec=${fields[6]}
++
++ # Get the section size:
++ local sec_size=$(${READELF} --section-headers --wide $objfile |
++ sed 's/\[ /\[/' |
++ ${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }')
++
++ if [[ -z $sec_size ]]; then
++ warn "bad section size: section: $sym_sec"
++ DONE=1
++ return
++ fi
++
++ # Calculate the symbol size.
++ #
++ # Unfortunately we can't use the ELF size, because kallsyms
++ # also includes the padding bytes in its size calculation. For
++ # kallsyms, the size calculation is the distance between the
++ # symbol and the next symbol in a sorted list.
++ local sym_size
++ local cur_sym_addr
++ local found=0
++ while read line; do
++ local fields=($line)
++ cur_sym_addr=0x${fields[1]}
++ local cur_sym_elf_size=${fields[2]}
++ local cur_sym_name=${fields[7]:-}
++
++ if [[ $cur_sym_addr = $sym_addr ]] &&
++ [[ $cur_sym_elf_size = $sym_elf_size ]] &&
++ [[ $cur_sym_name = $sym_name ]]; then
++ found=1
++ continue
++ fi
++
++ if [[ $found = 1 ]]; then
++ sym_size=$(($cur_sym_addr - $sym_addr))
++ [[ $sym_size -lt $sym_elf_size ]] && continue;
++ found=2
++ break
++ fi
++ done < <(${READELF} --symbols --wide $objfile | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
++
++ if [[ $found = 0 ]]; then
++ warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size"
++ DONE=1
++ return
++ fi
++
++ # If nothing was found after the symbol, assume it's the last
++ # symbol in the section.
++ [[ $found = 1 ]] && sym_size=$(($sec_size - $sym_addr))
++
+ if [[ -z $sym_size ]] || [[ $sym_size -le 0 ]]; then
+- warn "bad symbol size: base: $sym_base end: $sym_end"
++ warn "bad symbol size: sym_addr: $sym_addr cur_sym_addr: $cur_sym_addr"
+ DONE=1
+ return
+ fi
++
+ sym_size=0x$(printf %x $sym_size)
+
+- # calculate the address
+- local addr=$(($sym_base + $offset))
++ # Calculate the section address from user-supplied offset:
++ local addr=$(($sym_addr + $offset))
+ if [[ -z $addr ]] || [[ $addr = 0 ]]; then
+- warn "bad address: $sym_base + $offset"
++ warn "bad address: $sym_addr + $offset"
+ DONE=1
+ return
+ fi
+ addr=0x$(printf %x $addr)
+
+- # weed out non-function symbols
+- if [[ $sym_type != t ]] && [[ $sym_type != T ]]; then
+- [[ $print_warnings = 1 ]] &&
+- echo "skipping $func address at $addr due to non-function symbol of type '$sym_type'"
+- continue
+- fi
+-
+- # if the user provided a size, make sure it matches the symbol's size
+- if [[ -n $size ]] && [[ $size -ne $sym_size ]]; then
++ # If the user provided a size, make sure it matches the symbol's size:
++ if [[ -n $user_size ]] && [[ $user_size -ne $sym_size ]]; then
+ [[ $print_warnings = 1 ]] &&
+- echo "skipping $func address at $addr due to size mismatch ($size != $sym_size)"
++ echo "skipping $sym_name address at $addr due to size mismatch ($user_size != $sym_size)"
+ continue;
+ fi
+
+- # make sure the provided offset is within the symbol's range
++ # Make sure the provided offset is within the symbol's range:
+ if [[ $offset -gt $sym_size ]]; then
+ [[ $print_warnings = 1 ]] &&
+- echo "skipping $func address at $addr due to size mismatch ($offset > $sym_size)"
++ echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)"
+ continue
+ fi
+
+- # separate multiple entries with a blank line
++ # In case of duplicates or multiple addresses specified on the
++ # cmdline, separate multiple entries with a blank line:
+ [[ $FIRST = 0 ]] && echo
+ FIRST=0
+
+- # pass real address to addr2line
+- echo "$func+$offset/$sym_size:"
+- local file_lines=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
+- [[ -z $file_lines ]] && return
++ echo "$sym_name+$offset/$sym_size:"
+
++ # Pass section address to addr2line and strip absolute paths
++ # from the output:
++ local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
++ [[ -z $output ]] && continue
++
++ # Default output (non --list):
+ if [[ $LIST = 0 ]]; then
+- echo "$file_lines" | while read -r line
++ echo "$output" | while read -r line
+ do
+ echo $line
+ done
+ DONE=1;
+- return
++ continue
+ fi
+
+- # show each line with context
+- echo "$file_lines" | while read -r line
++ # For --list, show each line with its corresponding source code:
++ echo "$output" | while read -r line
+ do
+ echo
+ echo $line
+@@ -184,12 +228,12 @@ __faddr2line() {
+ n1=$[$n-5]
+ n2=$[$n+5]
+ f=$(echo $line | sed 's/.*at \(.\+\):.*/\1/g')
+- awk 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f
++ ${AWK} 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f
+ done
+
+ DONE=1
+
+- done < <(${NM} -n $objfile | awk -v fn=$func -v end=$file_end '$3 == fn { found=1; line=$0; start=$1; next } found == 1 { found=0; print line, "0x"$1 } END {if (found == 1) print line, end; }')
++ done < <(${READELF} --symbols --wide $objfile | ${AWK} -v fn=$sym_name '$4 == "FUNC" && $8 == fn')
+ }
+
+ [[ $# -lt 2 ]] && usage
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index f3a9cc201c8c2..7249f16257c72 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -69,10 +69,9 @@ choice
+ hash, defined as 20 bytes, and a null terminated pathname,
+ limited to 255 characters. The 'ima-ng' measurement list
+ template permits both larger hash digests and longer
+- pathnames.
++ pathnames. The configured default template can be replaced
++ by specifying "ima_template=" on the boot command line.
+
+- config IMA_TEMPLATE
+- bool "ima"
+ config IMA_NG_TEMPLATE
+ bool "ima-ng (default)"
+ config IMA_SIG_TEMPLATE
+@@ -82,7 +81,6 @@ endchoice
+ config IMA_DEFAULT_TEMPLATE
+ string
+ depends on IMA
+- default "ima" if IMA_TEMPLATE
+ default "ima-ng" if IMA_NG_TEMPLATE
+ default "ima-sig" if IMA_SIG_TEMPLATE
+
+@@ -102,19 +100,19 @@ choice
+
+ config IMA_DEFAULT_HASH_SHA256
+ bool "SHA256"
+- depends on CRYPTO_SHA256=y && !IMA_TEMPLATE
++ depends on CRYPTO_SHA256=y
+
+ config IMA_DEFAULT_HASH_SHA512
+ bool "SHA512"
+- depends on CRYPTO_SHA512=y && !IMA_TEMPLATE
++ depends on CRYPTO_SHA512=y
+
+ config IMA_DEFAULT_HASH_WP512
+ bool "WP512"
+- depends on CRYPTO_WP512=y && !IMA_TEMPLATE
++ depends on CRYPTO_WP512=y
+
+ config IMA_DEFAULT_HASH_SM3
+ bool "SM3"
+- depends on CRYPTO_SM3=y && !IMA_TEMPLATE
++ depends on CRYPTO_SM3=y
+ endchoice
+
+ config IMA_DEFAULT_HASH
+diff --git a/security/integrity/platform_certs/keyring_handler.h b/security/integrity/platform_certs/keyring_handler.h
+index 2462bfa08fe34..cd06bd6072be2 100644
+--- a/security/integrity/platform_certs/keyring_handler.h
++++ b/security/integrity/platform_certs/keyring_handler.h
+@@ -30,3 +30,11 @@ efi_element_handler_t get_handler_for_db(const efi_guid_t *sig_type);
+ efi_element_handler_t get_handler_for_dbx(const efi_guid_t *sig_type);
+
+ #endif
++
++#ifndef UEFI_QUIRK_SKIP_CERT
++#define UEFI_QUIRK_SKIP_CERT(vendor, product) \
++ .matches = { \
++ DMI_MATCH(DMI_BOARD_VENDOR, vendor), \
++ DMI_MATCH(DMI_PRODUCT_NAME, product), \
++ },
++#endif
+diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c
+index 08b6d12f99b4f..a031522351c91 100644
+--- a/security/integrity/platform_certs/load_uefi.c
++++ b/security/integrity/platform_certs/load_uefi.c
+@@ -3,6 +3,7 @@
+ #include <linux/kernel.h>
+ #include <linux/sched.h>
+ #include <linux/cred.h>
++#include <linux/dmi.h>
+ #include <linux/err.h>
+ #include <linux/efi.h>
+ #include <linux/slab.h>
+@@ -12,6 +13,31 @@
+ #include "../integrity.h"
+ #include "keyring_handler.h"
+
++/*
++ * On T2 Macs, reading the db and dbx efi variables to load UEFI Secure Boot
++ * certificates causes a page fault in Apple's firmware and a crash that
++ * disables EFI runtime services. The following quirk skips reading these
++ * variables.
++ */
++static const struct dmi_system_id uefi_skip_cert[] = {
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,2") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,3") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,4") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,2") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,3") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,4") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,2") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir9,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacMini8,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") },
++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") },
++ { }
++};
++
+ /*
+ * Look to see if a UEFI variable called MokIgnoreDB exists and return true if
+ * it does.
+@@ -138,6 +164,13 @@ static int __init load_uefi_certs(void)
+ unsigned long dbsize = 0, dbxsize = 0, mokxsize = 0;
+ efi_status_t status;
+ int rc = 0;
++ const struct dmi_system_id *dmi_id;
++
++ dmi_id = dmi_first_match(uefi_skip_cert);
++ if (dmi_id) {
++ pr_err("Reading UEFI Secure Boot Certs is not supported on T2 Macs.\n");
++ return false;
++ }
+
+ if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ return false;
+diff --git a/security/landlock/cred.c b/security/landlock/cred.c
+index 6725af24c6841..ec6c37f04a191 100644
+--- a/security/landlock/cred.c
++++ b/security/landlock/cred.c
+@@ -15,7 +15,7 @@
+ #include "setup.h"
+
+ static int hook_cred_prepare(struct cred *const new,
+- const struct cred *const old, const gfp_t gfp)
++ const struct cred *const old, const gfp_t gfp)
+ {
+ struct landlock_ruleset *const old_dom = landlock_cred(old)->domain;
+
+@@ -42,5 +42,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
+ __init void landlock_add_cred_hooks(void)
+ {
+ security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
+- LANDLOCK_NAME);
++ LANDLOCK_NAME);
+ }
+diff --git a/security/landlock/cred.h b/security/landlock/cred.h
+index 5f99d3decade6..af89ab00e6d10 100644
+--- a/security/landlock/cred.h
++++ b/security/landlock/cred.h
+@@ -20,8 +20,8 @@ struct landlock_cred_security {
+ struct landlock_ruleset *domain;
+ };
+
+-static inline struct landlock_cred_security *landlock_cred(
+- const struct cred *cred)
++static inline struct landlock_cred_security *
++landlock_cred(const struct cred *cred)
+ {
+ return cred->security + landlock_blob_sizes.lbs_cred;
+ }
+@@ -34,8 +34,8 @@ static inline const struct landlock_ruleset *landlock_get_current_domain(void)
+ /*
+ * The call needs to come from an RCU read-side critical section.
+ */
+-static inline const struct landlock_ruleset *landlock_get_task_domain(
+- const struct task_struct *const task)
++static inline const struct landlock_ruleset *
++landlock_get_task_domain(const struct task_struct *const task)
+ {
+ return landlock_cred(__task_cred(task))->domain;
+ }
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index 97b8e421f6171..c5749301b37d6 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -141,23 +141,26 @@ retry:
+ }
+
+ /* All access rights that can be tied to files. */
++/* clang-format off */
+ #define ACCESS_FILE ( \
+ LANDLOCK_ACCESS_FS_EXECUTE | \
+ LANDLOCK_ACCESS_FS_WRITE_FILE | \
+ LANDLOCK_ACCESS_FS_READ_FILE)
++/* clang-format on */
+
+ /*
+ * @path: Should have been checked by get_path_from_fd().
+ */
+ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
+- const struct path *const path, u32 access_rights)
++ const struct path *const path,
++ access_mask_t access_rights)
+ {
+ int err;
+ struct landlock_object *object;
+
+ /* Files only get access rights that make sense. */
+- if (!d_is_dir(path->dentry) && (access_rights | ACCESS_FILE) !=
+- ACCESS_FILE)
++ if (!d_is_dir(path->dentry) &&
++ (access_rights | ACCESS_FILE) != ACCESS_FILE)
+ return -EINVAL;
+ if (WARN_ON_ONCE(ruleset->num_layers != 1))
+ return -EINVAL;
+@@ -180,59 +183,93 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
+
+ /* Access-control management */
+
+-static inline u64 unmask_layers(
+- const struct landlock_ruleset *const domain,
+- const struct path *const path, const u32 access_request,
+- u64 layer_mask)
++/*
++ * The lifetime of the returned rule is tied to @domain.
++ *
++ * Returns NULL if no rule is found or if @dentry is negative.
++ */
++static inline const struct landlock_rule *
++find_rule(const struct landlock_ruleset *const domain,
++ const struct dentry *const dentry)
+ {
+ const struct landlock_rule *rule;
+ const struct inode *inode;
+- size_t i;
+
+- if (d_is_negative(path->dentry))
+- /* Ignore nonexistent leafs. */
+- return layer_mask;
+- inode = d_backing_inode(path->dentry);
++ /* Ignores nonexistent leafs. */
++ if (d_is_negative(dentry))
++ return NULL;
++
++ inode = d_backing_inode(dentry);
+ rcu_read_lock();
+- rule = landlock_find_rule(domain,
+- rcu_dereference(landlock_inode(inode)->object));
++ rule = landlock_find_rule(
++ domain, rcu_dereference(landlock_inode(inode)->object));
+ rcu_read_unlock();
++ return rule;
++}
++
++/*
++ * @layer_masks is read and may be updated according to the access request and
++ * the matching rule.
++ *
++ * Returns true if the request is allowed (i.e. relevant layer masks for the
++ * request are empty).
++ */
++static inline bool
++unmask_layers(const struct landlock_rule *const rule,
++ const access_mask_t access_request,
++ layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS])
++{
++ size_t layer_level;
++
++ if (!access_request || !layer_masks)
++ return true;
+ if (!rule)
+- return layer_mask;
++ return false;
+
+ /*
+ * An access is granted if, for each policy layer, at least one rule
+- * encountered on the pathwalk grants the requested accesses,
+- * regardless of their position in the layer stack. We must then check
++ * encountered on the pathwalk grants the requested access,
++ * regardless of its position in the layer stack. We must then check
+ * the remaining layers for each inode, from the first added layer to
+- * the last one.
++	 * the last one. When there are multiple requested accesses, for each
++ * policy layer, the full set of requested accesses may not be granted
++ * by only one rule, but by the union (binary OR) of multiple rules.
++ * E.g. /a/b <execute> + /a <read> => /a/b <execute + read>
+ */
+- for (i = 0; i < rule->num_layers; i++) {
+- const struct landlock_layer *const layer = &rule->layers[i];
+- const u64 layer_level = BIT_ULL(layer->level - 1);
+-
+- /* Checks that the layer grants access to the full request. */
+- if ((layer->access & access_request) == access_request) {
+- layer_mask &= ~layer_level;
++ for (layer_level = 0; layer_level < rule->num_layers; layer_level++) {
++ const struct landlock_layer *const layer =
++ &rule->layers[layer_level];
++ const layer_mask_t layer_bit = BIT_ULL(layer->level - 1);
++ const unsigned long access_req = access_request;
++ unsigned long access_bit;
++ bool is_empty;
+
+- if (layer_mask == 0)
+- return layer_mask;
++ /*
++ * Records in @layer_masks which layer grants access to each
++ * requested access.
++ */
++ is_empty = true;
++ for_each_set_bit(access_bit, &access_req,
++ ARRAY_SIZE(*layer_masks)) {
++ if (layer->access & BIT_ULL(access_bit))
++ (*layer_masks)[access_bit] &= ~layer_bit;
++ is_empty = is_empty && !(*layer_masks)[access_bit];
+ }
++ if (is_empty)
++ return true;
+ }
+- return layer_mask;
++ return false;
+ }
+
+ static int check_access_path(const struct landlock_ruleset *const domain,
+- const struct path *const path, u32 access_request)
++ const struct path *const path,
++ const access_mask_t access_request)
+ {
+- bool allowed = false;
++ layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {};
++ bool allowed = false, has_access = false;
+ struct path walker_path;
+- u64 layer_mask;
+ size_t i;
+
+- /* Make sure all layers can be checked. */
+- BUILD_BUG_ON(BITS_PER_TYPE(layer_mask) < LANDLOCK_MAX_NUM_LAYERS);
+-
+ if (!access_request)
+ return 0;
+ if (WARN_ON_ONCE(!domain || !path))
+@@ -243,20 +280,27 @@ static int check_access_path(const struct landlock_ruleset *const domain,
+ * /proc/<pid>/fd/<file-descriptor> .
+ */
+ if ((path->dentry->d_sb->s_flags & SB_NOUSER) ||
+- (d_is_positive(path->dentry) &&
+- unlikely(IS_PRIVATE(d_backing_inode(path->dentry)))))
++ (d_is_positive(path->dentry) &&
++ unlikely(IS_PRIVATE(d_backing_inode(path->dentry)))))
+ return 0;
+ if (WARN_ON_ONCE(domain->num_layers < 1))
+ return -EACCES;
+
+ /* Saves all layers handling a subset of requested accesses. */
+- layer_mask = 0;
+ for (i = 0; i < domain->num_layers; i++) {
+- if (domain->fs_access_masks[i] & access_request)
+- layer_mask |= BIT_ULL(i);
++ const unsigned long access_req = access_request;
++ unsigned long access_bit;
++
++ for_each_set_bit(access_bit, &access_req,
++ ARRAY_SIZE(layer_masks)) {
++ if (domain->fs_access_masks[i] & BIT_ULL(access_bit)) {
++ layer_masks[access_bit] |= BIT_ULL(i);
++ has_access = true;
++ }
++ }
+ }
+ /* An access request not handled by the domain is allowed. */
+- if (layer_mask == 0)
++ if (!has_access)
+ return 0;
+
+ walker_path = *path;
+@@ -268,13 +312,11 @@ static int check_access_path(const struct landlock_ruleset *const domain,
+ while (true) {
+ struct dentry *parent_dentry;
+
+- layer_mask = unmask_layers(domain, &walker_path,
+- access_request, layer_mask);
+- if (layer_mask == 0) {
++ allowed = unmask_layers(find_rule(domain, walker_path.dentry),
++ access_request, &layer_masks);
++ if (allowed)
+ /* Stops when a rule from each layer grants access. */
+- allowed = true;
+ break;
+- }
+
+ jump_up:
+ if (walker_path.dentry == walker_path.mnt->mnt_root) {
+@@ -308,7 +350,7 @@ jump_up:
+ }
+
+ static inline int current_check_access_path(const struct path *const path,
+- const u32 access_request)
++ const access_mask_t access_request)
+ {
+ const struct landlock_ruleset *const dom =
+ landlock_get_current_domain();
+@@ -436,8 +478,8 @@ static void hook_sb_delete(struct super_block *const sb)
+ if (prev_inode)
+ iput(prev_inode);
+ /* Waits for pending iput() in release_inode(). */
+- wait_var_event(&landlock_superblock(sb)->inode_refs, !atomic_long_read(
+- &landlock_superblock(sb)->inode_refs));
++ wait_var_event(&landlock_superblock(sb)->inode_refs,
++ !atomic_long_read(&landlock_superblock(sb)->inode_refs));
+ }
+
+ /*
+@@ -459,8 +501,8 @@ static void hook_sb_delete(struct super_block *const sb)
+ * a dedicated user space option would be required (e.g. as a ruleset flag).
+ */
+ static int hook_sb_mount(const char *const dev_name,
+- const struct path *const path, const char *const type,
+- const unsigned long flags, void *const data)
++ const struct path *const path, const char *const type,
++ const unsigned long flags, void *const data)
+ {
+ if (!landlock_get_current_domain())
+ return 0;
+@@ -468,7 +510,7 @@ static int hook_sb_mount(const char *const dev_name,
+ }
+
+ static int hook_move_mount(const struct path *const from_path,
+- const struct path *const to_path)
++ const struct path *const to_path)
+ {
+ if (!landlock_get_current_domain())
+ return 0;
+@@ -502,7 +544,7 @@ static int hook_sb_remount(struct super_block *const sb, void *const mnt_opts)
+ * view of the filesystem.
+ */
+ static int hook_sb_pivotroot(const struct path *const old_path,
+- const struct path *const new_path)
++ const struct path *const new_path)
+ {
+ if (!landlock_get_current_domain())
+ return 0;
+@@ -511,7 +553,7 @@ static int hook_sb_pivotroot(const struct path *const old_path,
+
+ /* Path hooks */
+
+-static inline u32 get_mode_access(const umode_t mode)
++static inline access_mask_t get_mode_access(const umode_t mode)
+ {
+ switch (mode & S_IFMT) {
+ case S_IFLNK:
+@@ -545,8 +587,8 @@ static inline u32 get_mode_access(const umode_t mode)
+ * deal with that.
+ */
+ static int hook_path_link(struct dentry *const old_dentry,
+- const struct path *const new_dir,
+- struct dentry *const new_dentry)
++ const struct path *const new_dir,
++ struct dentry *const new_dentry)
+ {
+ const struct landlock_ruleset *const dom =
+ landlock_get_current_domain();
+@@ -559,22 +601,23 @@ static int hook_path_link(struct dentry *const old_dentry,
+ return -EXDEV;
+ if (unlikely(d_is_negative(old_dentry)))
+ return -ENOENT;
+- return check_access_path(dom, new_dir,
+- get_mode_access(d_backing_inode(old_dentry)->i_mode));
++ return check_access_path(
++ dom, new_dir,
++ get_mode_access(d_backing_inode(old_dentry)->i_mode));
+ }
+
+-static inline u32 maybe_remove(const struct dentry *const dentry)
++static inline access_mask_t maybe_remove(const struct dentry *const dentry)
+ {
+ if (d_is_negative(dentry))
+ return 0;
+ return d_is_dir(dentry) ? LANDLOCK_ACCESS_FS_REMOVE_DIR :
+- LANDLOCK_ACCESS_FS_REMOVE_FILE;
++ LANDLOCK_ACCESS_FS_REMOVE_FILE;
+ }
+
+ static int hook_path_rename(const struct path *const old_dir,
+- struct dentry *const old_dentry,
+- const struct path *const new_dir,
+- struct dentry *const new_dentry)
++ struct dentry *const old_dentry,
++ const struct path *const new_dir,
++ struct dentry *const new_dentry)
+ {
+ const struct landlock_ruleset *const dom =
+ landlock_get_current_domain();
+@@ -588,20 +631,21 @@ static int hook_path_rename(const struct path *const old_dir,
+ if (unlikely(d_is_negative(old_dentry)))
+ return -ENOENT;
+ /* RENAME_EXCHANGE is handled because directories are the same. */
+- return check_access_path(dom, old_dir, maybe_remove(old_dentry) |
+- maybe_remove(new_dentry) |
++ return check_access_path(
++ dom, old_dir,
++ maybe_remove(old_dentry) | maybe_remove(new_dentry) |
+ get_mode_access(d_backing_inode(old_dentry)->i_mode));
+ }
+
+ static int hook_path_mkdir(const struct path *const dir,
+- struct dentry *const dentry, const umode_t mode)
++ struct dentry *const dentry, const umode_t mode)
+ {
+ return current_check_access_path(dir, LANDLOCK_ACCESS_FS_MAKE_DIR);
+ }
+
+ static int hook_path_mknod(const struct path *const dir,
+- struct dentry *const dentry, const umode_t mode,
+- const unsigned int dev)
++ struct dentry *const dentry, const umode_t mode,
++ const unsigned int dev)
+ {
+ const struct landlock_ruleset *const dom =
+ landlock_get_current_domain();
+@@ -612,28 +656,29 @@ static int hook_path_mknod(const struct path *const dir,
+ }
+
+ static int hook_path_symlink(const struct path *const dir,
+- struct dentry *const dentry, const char *const old_name)
++ struct dentry *const dentry,
++ const char *const old_name)
+ {
+ return current_check_access_path(dir, LANDLOCK_ACCESS_FS_MAKE_SYM);
+ }
+
+ static int hook_path_unlink(const struct path *const dir,
+- struct dentry *const dentry)
++ struct dentry *const dentry)
+ {
+ return current_check_access_path(dir, LANDLOCK_ACCESS_FS_REMOVE_FILE);
+ }
+
+ static int hook_path_rmdir(const struct path *const dir,
+- struct dentry *const dentry)
++ struct dentry *const dentry)
+ {
+ return current_check_access_path(dir, LANDLOCK_ACCESS_FS_REMOVE_DIR);
+ }
+
+ /* File hooks */
+
+-static inline u32 get_file_access(const struct file *const file)
++static inline access_mask_t get_file_access(const struct file *const file)
+ {
+- u32 access = 0;
++ access_mask_t access = 0;
+
+ if (file->f_mode & FMODE_READ) {
+ /* A directory can only be opened in read mode. */
+@@ -688,5 +733,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
+ __init void landlock_add_fs_hooks(void)
+ {
+ security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
+- LANDLOCK_NAME);
++ LANDLOCK_NAME);
+ }
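
The reworked unmask_layers() above keeps one layer bitmask per requested access bit and clears a layer's bit from a mask whenever a rule in that layer grants that access; the walk can stop as soon as every requested access has an empty mask, even when different rules supply different parts of the request. The following compilable userspace sketch models just that bookkeeping; NUM_ACCESS, unmask() and the mask widths are simplified stand-ins for the kernel types.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_ACCESS 13		/* stand-in for LANDLOCK_NUM_ACCESS_FS */
typedef uint16_t layer_mask_t;	/* one bit per policy layer */

/* For each requested access bit this rule's layer grants, clear that
 * layer's bit from the per-access mask; return true once every requested
 * access has an empty mask, i.e. the path walk may stop. */
static bool unmask(uint16_t rule_access, int layer_level, uint32_t request,
		   layer_mask_t masks[NUM_ACCESS])
{
	bool empty = true;
	int bit;

	for (bit = 0; bit < NUM_ACCESS; bit++) {
		if (!(request & (1u << bit)))
			continue;
		if (rule_access & (1u << bit))
			masks[bit] &= ~(layer_mask_t)(1u << layer_level);
		empty = empty && !masks[bit];
	}
	return empty;
}

int main(void)
{
	/* One layer (level 0) handles two accesses: execute (bit 0), read (bit 1). */
	layer_mask_t masks[NUM_ACCESS] = { [0] = 1, [1] = 1 };
	uint32_t request = 0x3;
	bool ok;

	unmask(0x1, 0, request, masks);		/* rule on /a/b grants execute */
	ok = unmask(0x2, 0, request, masks);	/* rule on /a grants read */
	printf("%s\n", ok ? "allowed" : "denied");	/* the union allows it */
	return 0;
}
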
+diff --git a/security/landlock/fs.h b/security/landlock/fs.h
+index 187284b421c9d..8db7acf9109b6 100644
+--- a/security/landlock/fs.h
++++ b/security/landlock/fs.h
+@@ -50,14 +50,14 @@ struct landlock_superblock_security {
+ atomic_long_t inode_refs;
+ };
+
+-static inline struct landlock_inode_security *landlock_inode(
+- const struct inode *const inode)
++static inline struct landlock_inode_security *
++landlock_inode(const struct inode *const inode)
+ {
+ return inode->i_security + landlock_blob_sizes.lbs_inode;
+ }
+
+-static inline struct landlock_superblock_security *landlock_superblock(
+- const struct super_block *const superblock)
++static inline struct landlock_superblock_security *
++landlock_superblock(const struct super_block *const superblock)
+ {
+ return superblock->s_security + landlock_blob_sizes.lbs_superblock;
+ }
+@@ -65,6 +65,7 @@ static inline struct landlock_superblock_security *landlock_superblock(
+ __init void landlock_add_fs_hooks(void);
+
+ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
+- const struct path *const path, u32 access_hierarchy);
++ const struct path *const path,
++ access_mask_t access_hierarchy);
+
+ #endif /* _SECURITY_LANDLOCK_FS_H */
+diff --git a/security/landlock/limits.h b/security/landlock/limits.h
+index 2a0a1095ee27e..17c2a2e7fe1ef 100644
+--- a/security/landlock/limits.h
++++ b/security/landlock/limits.h
+@@ -9,13 +9,19 @@
+ #ifndef _SECURITY_LANDLOCK_LIMITS_H
+ #define _SECURITY_LANDLOCK_LIMITS_H
+
++#include <linux/bitops.h>
+ #include <linux/limits.h>
+ #include <uapi/linux/landlock.h>
+
+-#define LANDLOCK_MAX_NUM_LAYERS 64
++/* clang-format off */
++
++#define LANDLOCK_MAX_NUM_LAYERS 16
+ #define LANDLOCK_MAX_NUM_RULES U32_MAX
+
+ #define LANDLOCK_LAST_ACCESS_FS LANDLOCK_ACCESS_FS_MAKE_SYM
+ #define LANDLOCK_MASK_ACCESS_FS ((LANDLOCK_LAST_ACCESS_FS << 1) - 1)
++#define LANDLOCK_NUM_ACCESS_FS __const_hweight64(LANDLOCK_MASK_ACCESS_FS)
++
++/* clang-format on */
+
+ #endif /* _SECURITY_LANDLOCK_LIMITS_H */
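
The macros above rely on the access rights being contiguous low bits: shifting the last right left by one and subtracting one yields the full mask, and its population count is the number of rights. A small sketch of that arithmetic, assuming LANDLOCK_ACCESS_FS_MAKE_SYM is bit 12 as in this kernel version and using the GCC builtin __builtin_popcountll() in place of __const_hweight64():

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* Assumes LANDLOCK_ACCESS_FS_MAKE_SYM == 1 << 12, per this kernel. */
	const unsigned long long last = 1ULL << 12;
	const unsigned long long mask = (last << 1) - 1; /* contiguous low bits */
	const int num = __builtin_popcountll(mask);	 /* ~__const_hweight64 */

	assert(num == 13);	/* 13 filesystem access rights */
	assert(num <= 16);	/* fits the 16-bit access_mask_t */
	printf("mask=%#llx num=%d\n", mask, num);
	return 0;
}
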
+diff --git a/security/landlock/object.c b/security/landlock/object.c
+index d674fdf9ff04f..1f50612f01850 100644
+--- a/security/landlock/object.c
++++ b/security/landlock/object.c
+@@ -17,9 +17,9 @@
+
+ #include "object.h"
+
+-struct landlock_object *landlock_create_object(
+- const struct landlock_object_underops *const underops,
+- void *const underobj)
++struct landlock_object *
++landlock_create_object(const struct landlock_object_underops *const underops,
++ void *const underobj)
+ {
+ struct landlock_object *new_object;
+
+diff --git a/security/landlock/object.h b/security/landlock/object.h
+index 3f80674c6c8d3..5f28c35e8aa8c 100644
+--- a/security/landlock/object.h
++++ b/security/landlock/object.h
+@@ -76,9 +76,9 @@ struct landlock_object {
+ };
+ };
+
+-struct landlock_object *landlock_create_object(
+- const struct landlock_object_underops *const underops,
+- void *const underobj);
++struct landlock_object *
++landlock_create_object(const struct landlock_object_underops *const underops,
++ void *const underobj);
+
+ void landlock_put_object(struct landlock_object *const object);
+
+diff --git a/security/landlock/ptrace.c b/security/landlock/ptrace.c
+index f55b82446de21..4c5b9cd712861 100644
+--- a/security/landlock/ptrace.c
++++ b/security/landlock/ptrace.c
+@@ -30,7 +30,7 @@
+ * means a subset of) the @child domain.
+ */
+ static bool domain_scope_le(const struct landlock_ruleset *const parent,
+- const struct landlock_ruleset *const child)
++ const struct landlock_ruleset *const child)
+ {
+ const struct landlock_hierarchy *walker;
+
+@@ -48,7 +48,7 @@ static bool domain_scope_le(const struct landlock_ruleset *const parent,
+ }
+
+ static bool task_is_scoped(const struct task_struct *const parent,
+- const struct task_struct *const child)
++ const struct task_struct *const child)
+ {
+ bool is_scoped;
+ const struct landlock_ruleset *dom_parent, *dom_child;
+@@ -62,7 +62,7 @@ static bool task_is_scoped(const struct task_struct *const parent,
+ }
+
+ static int task_ptrace(const struct task_struct *const parent,
+- const struct task_struct *const child)
++ const struct task_struct *const child)
+ {
+ /* Quick return for non-landlocked tasks. */
+ if (!landlocked(parent))
+@@ -86,7 +86,7 @@ static int task_ptrace(const struct task_struct *const parent,
+ * granted, -errno if denied.
+ */
+ static int hook_ptrace_access_check(struct task_struct *const child,
+- const unsigned int mode)
++ const unsigned int mode)
+ {
+ return task_ptrace(current, child);
+ }
+@@ -116,5 +116,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
+ __init void landlock_add_ptrace_hooks(void)
+ {
+ security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
+- LANDLOCK_NAME);
++ LANDLOCK_NAME);
+ }
+diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c
+index ec72b9262bf38..996484f98bfde 100644
+--- a/security/landlock/ruleset.c
++++ b/security/landlock/ruleset.c
+@@ -28,8 +28,9 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers)
+ {
+ struct landlock_ruleset *new_ruleset;
+
+- new_ruleset = kzalloc(struct_size(new_ruleset, fs_access_masks,
+- num_layers), GFP_KERNEL_ACCOUNT);
++ new_ruleset =
++ kzalloc(struct_size(new_ruleset, fs_access_masks, num_layers),
++ GFP_KERNEL_ACCOUNT);
+ if (!new_ruleset)
+ return ERR_PTR(-ENOMEM);
+ refcount_set(&new_ruleset->usage, 1);
+@@ -44,7 +45,8 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers)
+ return new_ruleset;
+ }
+
+-struct landlock_ruleset *landlock_create_ruleset(const u32 fs_access_mask)
++struct landlock_ruleset *
++landlock_create_ruleset(const access_mask_t fs_access_mask)
+ {
+ struct landlock_ruleset *new_ruleset;
+
+@@ -66,11 +68,10 @@ static void build_check_rule(void)
+ BUILD_BUG_ON(rule.num_layers < LANDLOCK_MAX_NUM_LAYERS);
+ }
+
+-static struct landlock_rule *create_rule(
+- struct landlock_object *const object,
+- const struct landlock_layer (*const layers)[],
+- const u32 num_layers,
+- const struct landlock_layer *const new_layer)
++static struct landlock_rule *
++create_rule(struct landlock_object *const object,
++ const struct landlock_layer (*const layers)[], const u32 num_layers,
++ const struct landlock_layer *const new_layer)
+ {
+ struct landlock_rule *new_rule;
+ u32 new_num_layers;
+@@ -85,7 +86,7 @@ static struct landlock_rule *create_rule(
+ new_num_layers = num_layers;
+ }
+ new_rule = kzalloc(struct_size(new_rule, layers, new_num_layers),
+- GFP_KERNEL_ACCOUNT);
++ GFP_KERNEL_ACCOUNT);
+ if (!new_rule)
+ return ERR_PTR(-ENOMEM);
+ RB_CLEAR_NODE(&new_rule->node);
+@@ -94,7 +95,7 @@ static struct landlock_rule *create_rule(
+ new_rule->num_layers = new_num_layers;
+ /* Copies the original layer stack. */
+ memcpy(new_rule->layers, layers,
+- flex_array_size(new_rule, layers, num_layers));
++ flex_array_size(new_rule, layers, num_layers));
+ if (new_layer)
+ /* Adds a copy of @new_layer on the layer stack. */
+ new_rule->layers[new_rule->num_layers - 1] = *new_layer;
+@@ -142,9 +143,9 @@ static void build_check_ruleset(void)
+ * access rights.
+ */
+ static int insert_rule(struct landlock_ruleset *const ruleset,
+- struct landlock_object *const object,
+- const struct landlock_layer (*const layers)[],
+- size_t num_layers)
++ struct landlock_object *const object,
++ const struct landlock_layer (*const layers)[],
++ size_t num_layers)
+ {
+ struct rb_node **walker_node;
+ struct rb_node *parent_node = NULL;
+@@ -156,8 +157,8 @@ static int insert_rule(struct landlock_ruleset *const ruleset,
+ return -ENOENT;
+ walker_node = &(ruleset->root.rb_node);
+ while (*walker_node) {
+- struct landlock_rule *const this = rb_entry(*walker_node,
+- struct landlock_rule, node);
++ struct landlock_rule *const this =
++ rb_entry(*walker_node, struct landlock_rule, node);
+
+ if (this->object != object) {
+ parent_node = *walker_node;
+@@ -194,7 +195,7 @@ static int insert_rule(struct landlock_ruleset *const ruleset,
+ * ruleset and a domain.
+ */
+ new_rule = create_rule(object, &this->layers, this->num_layers,
+- &(*layers)[0]);
++ &(*layers)[0]);
+ if (IS_ERR(new_rule))
+ return PTR_ERR(new_rule);
+ rb_replace_node(&this->node, &new_rule->node, &ruleset->root);
+@@ -228,13 +229,14 @@ static void build_check_layer(void)
+
+ /* @ruleset must be locked by the caller. */
+ int landlock_insert_rule(struct landlock_ruleset *const ruleset,
+- struct landlock_object *const object, const u32 access)
++ struct landlock_object *const object,
++ const access_mask_t access)
+ {
+- struct landlock_layer layers[] = {{
++ struct landlock_layer layers[] = { {
+ .access = access,
+ /* When @level is zero, insert_rule() extends @ruleset. */
+ .level = 0,
+- }};
++ } };
+
+ build_check_layer();
+ return insert_rule(ruleset, object, &layers, ARRAY_SIZE(layers));
+@@ -257,7 +259,7 @@ static void put_hierarchy(struct landlock_hierarchy *hierarchy)
+ }
+
+ static int merge_ruleset(struct landlock_ruleset *const dst,
+- struct landlock_ruleset *const src)
++ struct landlock_ruleset *const src)
+ {
+ struct landlock_rule *walker_rule, *next_rule;
+ int err = 0;
+@@ -282,11 +284,11 @@ static int merge_ruleset(struct landlock_ruleset *const dst,
+ dst->fs_access_masks[dst->num_layers - 1] = src->fs_access_masks[0];
+
+ /* Merges the @src tree. */
+- rbtree_postorder_for_each_entry_safe(walker_rule, next_rule,
+- &src->root, node) {
+- struct landlock_layer layers[] = {{
++ rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, &src->root,
++ node) {
++ struct landlock_layer layers[] = { {
+ .level = dst->num_layers,
+- }};
++ } };
+
+ if (WARN_ON_ONCE(walker_rule->num_layers != 1)) {
+ err = -EINVAL;
+@@ -298,7 +300,7 @@ static int merge_ruleset(struct landlock_ruleset *const dst,
+ }
+ layers[0].access = walker_rule->layers[0].access;
+ err = insert_rule(dst, walker_rule->object, &layers,
+- ARRAY_SIZE(layers));
++ ARRAY_SIZE(layers));
+ if (err)
+ goto out_unlock;
+ }
+@@ -310,7 +312,7 @@ out_unlock:
+ }
+
+ static int inherit_ruleset(struct landlock_ruleset *const parent,
+- struct landlock_ruleset *const child)
++ struct landlock_ruleset *const child)
+ {
+ struct landlock_rule *walker_rule, *next_rule;
+ int err = 0;
+@@ -325,9 +327,10 @@ static int inherit_ruleset(struct landlock_ruleset *const parent,
+
+ /* Copies the @parent tree. */
+ rbtree_postorder_for_each_entry_safe(walker_rule, next_rule,
+- &parent->root, node) {
++ &parent->root, node) {
+ err = insert_rule(child, walker_rule->object,
+- &walker_rule->layers, walker_rule->num_layers);
++ &walker_rule->layers,
++ walker_rule->num_layers);
+ if (err)
+ goto out_unlock;
+ }
+@@ -338,7 +341,7 @@ static int inherit_ruleset(struct landlock_ruleset *const parent,
+ }
+ /* Copies the parent layer stack and leaves a space for the new layer. */
+ memcpy(child->fs_access_masks, parent->fs_access_masks,
+- flex_array_size(parent, fs_access_masks, parent->num_layers));
++ flex_array_size(parent, fs_access_masks, parent->num_layers));
+
+ if (WARN_ON_ONCE(!parent->hierarchy)) {
+ err = -EINVAL;
+@@ -358,8 +361,7 @@ static void free_ruleset(struct landlock_ruleset *const ruleset)
+ struct landlock_rule *freeme, *next;
+
+ might_sleep();
+- rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root,
+- node)
++ rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root, node)
+ free_rule(freeme);
+ put_hierarchy(ruleset->hierarchy);
+ kfree(ruleset);
+@@ -397,9 +399,9 @@ void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset)
+ * Returns the intersection of @parent and @ruleset, or returns @parent if
+ * @ruleset is empty, or returns a duplicate of @ruleset if @parent is empty.
+ */
+-struct landlock_ruleset *landlock_merge_ruleset(
+- struct landlock_ruleset *const parent,
+- struct landlock_ruleset *const ruleset)
++struct landlock_ruleset *
++landlock_merge_ruleset(struct landlock_ruleset *const parent,
++ struct landlock_ruleset *const ruleset)
+ {
+ struct landlock_ruleset *new_dom;
+ u32 num_layers;
+@@ -421,8 +423,8 @@ struct landlock_ruleset *landlock_merge_ruleset(
+ new_dom = create_ruleset(num_layers);
+ if (IS_ERR(new_dom))
+ return new_dom;
+- new_dom->hierarchy = kzalloc(sizeof(*new_dom->hierarchy),
+- GFP_KERNEL_ACCOUNT);
++ new_dom->hierarchy =
++ kzalloc(sizeof(*new_dom->hierarchy), GFP_KERNEL_ACCOUNT);
+ if (!new_dom->hierarchy) {
+ err = -ENOMEM;
+ goto out_put_dom;
+@@ -449,9 +451,9 @@ out_put_dom:
+ /*
+ * The returned access has the same lifetime as @ruleset.
+ */
+-const struct landlock_rule *landlock_find_rule(
+- const struct landlock_ruleset *const ruleset,
+- const struct landlock_object *const object)
++const struct landlock_rule *
++landlock_find_rule(const struct landlock_ruleset *const ruleset,
++ const struct landlock_object *const object)
+ {
+ const struct rb_node *node;
+
+@@ -459,8 +461,8 @@ const struct landlock_rule *landlock_find_rule(
+ return NULL;
+ node = ruleset->root.rb_node;
+ while (node) {
+- struct landlock_rule *this = rb_entry(node,
+- struct landlock_rule, node);
++ struct landlock_rule *this =
++ rb_entry(node, struct landlock_rule, node);
+
+ if (this->object == object)
+ return this;
+diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h
+index 2d3ed7ec5a0ab..d43231b783e4f 100644
+--- a/security/landlock/ruleset.h
++++ b/security/landlock/ruleset.h
+@@ -9,13 +9,26 @@
+ #ifndef _SECURITY_LANDLOCK_RULESET_H
+ #define _SECURITY_LANDLOCK_RULESET_H
+
++#include <linux/bitops.h>
++#include <linux/build_bug.h>
+ #include <linux/mutex.h>
+ #include <linux/rbtree.h>
+ #include <linux/refcount.h>
+ #include <linux/workqueue.h>
+
++#include "limits.h"
+ #include "object.h"
+
++typedef u16 access_mask_t;
++/* Makes sure all filesystem access rights can be stored. */
++static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS);
++/* Makes sure for_each_set_bit() and for_each_clear_bit() calls are OK. */
++static_assert(sizeof(unsigned long) >= sizeof(access_mask_t));
++
++typedef u16 layer_mask_t;
++/* Makes sure all layers can be checked. */
++static_assert(BITS_PER_TYPE(layer_mask_t) >= LANDLOCK_MAX_NUM_LAYERS);
++
+ /**
+ * struct landlock_layer - Access rights for a given layer
+ */
+@@ -28,7 +41,7 @@ struct landlock_layer {
+ * @access: Bitfield of allowed actions on the kernel object. They are
+ * relative to the object type (e.g. %LANDLOCK_ACTION_FS_READ).
+ */
+- u16 access;
++ access_mask_t access;
+ };
+
+ /**
+@@ -135,26 +148,28 @@ struct landlock_ruleset {
+ * layers are set once and never changed for the
+ * lifetime of the ruleset.
+ */
+- u16 fs_access_masks[];
++ access_mask_t fs_access_masks[];
+ };
+ };
+ };
+
+-struct landlock_ruleset *landlock_create_ruleset(const u32 fs_access_mask);
++struct landlock_ruleset *
++landlock_create_ruleset(const access_mask_t fs_access_mask);
+
+ void landlock_put_ruleset(struct landlock_ruleset *const ruleset);
+ void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset);
+
+ int landlock_insert_rule(struct landlock_ruleset *const ruleset,
+- struct landlock_object *const object, const u32 access);
++ struct landlock_object *const object,
++ const access_mask_t access);
+
+-struct landlock_ruleset *landlock_merge_ruleset(
+- struct landlock_ruleset *const parent,
+- struct landlock_ruleset *const ruleset);
++struct landlock_ruleset *
++landlock_merge_ruleset(struct landlock_ruleset *const parent,
++ struct landlock_ruleset *const ruleset);
+
+-const struct landlock_rule *landlock_find_rule(
+- const struct landlock_ruleset *const ruleset,
+- const struct landlock_object *const object);
++const struct landlock_rule *
++landlock_find_rule(const struct landlock_ruleset *const ruleset,
++ const struct landlock_object *const object);
+
+ static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset)
+ {
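
The new typedefs above come with compile-time guarantees: access_mask_t must hold every filesystem right, layer_mask_t every possible layer, and both must widen safely into unsigned long for the bit iterators. The same checks can be expressed in standalone C11; the 13 and 16 below assume this kernel's LANDLOCK_NUM_ACCESS_FS and LANDLOCK_MAX_NUM_LAYERS, and _Static_assert is the C11 spelling of the kernel's static_assert().

#include <stdint.h>

typedef uint16_t access_mask_t;
typedef uint16_t layer_mask_t;

_Static_assert(sizeof(access_mask_t) * 8 >= 13,
	       "all filesystem access rights must fit");
_Static_assert(sizeof(unsigned long) >= sizeof(access_mask_t),
	       "safe to widen into unsigned long for for_each_set_bit()");
_Static_assert(sizeof(layer_mask_t) * 8 >= 16,
	       "all policy layers must fit");

int main(void) { return 0; }
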
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index 7e27ce394020d..507d43827afed 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -43,9 +43,10 @@
+ * @src: User space pointer or NULL.
+ * @usize: (Alleged) size of the data pointed to by @src.
+ */
+-static __always_inline int copy_min_struct_from_user(void *const dst,
+- const size_t ksize, const size_t ksize_min,
+- const void __user *const src, const size_t usize)
++static __always_inline int
++copy_min_struct_from_user(void *const dst, const size_t ksize,
++ const size_t ksize_min, const void __user *const src,
++ const size_t usize)
+ {
+ /* Checks buffer inconsistencies. */
+ BUILD_BUG_ON(!dst);
+@@ -93,7 +94,7 @@ static void build_check_abi(void)
+ /* Ruleset handling */
+
+ static int fop_ruleset_release(struct inode *const inode,
+- struct file *const filp)
++ struct file *const filp)
+ {
+ struct landlock_ruleset *ruleset = filp->private_data;
+
+@@ -102,15 +103,15 @@ static int fop_ruleset_release(struct inode *const inode,
+ }
+
+ static ssize_t fop_dummy_read(struct file *const filp, char __user *const buf,
+- const size_t size, loff_t *const ppos)
++ const size_t size, loff_t *const ppos)
+ {
+ /* Dummy handler to enable FMODE_CAN_READ. */
+ return -EINVAL;
+ }
+
+ static ssize_t fop_dummy_write(struct file *const filp,
+- const char __user *const buf, const size_t size,
+- loff_t *const ppos)
++ const char __user *const buf, const size_t size,
++ loff_t *const ppos)
+ {
+ /* Dummy handler to enable FMODE_CAN_WRITE. */
+ return -EINVAL;
+@@ -128,7 +129,7 @@ static const struct file_operations ruleset_fops = {
+ .write = fop_dummy_write,
+ };
+
+-#define LANDLOCK_ABI_VERSION 1
++#define LANDLOCK_ABI_VERSION 1
+
+ /**
+ * sys_landlock_create_ruleset - Create a new ruleset
+@@ -168,22 +169,23 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ return -EOPNOTSUPP;
+
+ if (flags) {
+- if ((flags == LANDLOCK_CREATE_RULESET_VERSION)
+- && !attr && !size)
++ if ((flags == LANDLOCK_CREATE_RULESET_VERSION) && !attr &&
++ !size)
+ return LANDLOCK_ABI_VERSION;
+ return -EINVAL;
+ }
+
+ /* Copies raw user space buffer. */
+ err = copy_min_struct_from_user(&ruleset_attr, sizeof(ruleset_attr),
+- offsetofend(typeof(ruleset_attr), handled_access_fs),
+- attr, size);
++ offsetofend(typeof(ruleset_attr),
++ handled_access_fs),
++ attr, size);
+ if (err)
+ return err;
+
+ /* Checks content (and 32-bits cast). */
+ if ((ruleset_attr.handled_access_fs | LANDLOCK_MASK_ACCESS_FS) !=
+- LANDLOCK_MASK_ACCESS_FS)
++ LANDLOCK_MASK_ACCESS_FS)
+ return -EINVAL;
+
+ /* Checks arguments and transforms to kernel struct. */
+@@ -193,7 +195,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+
+ /* Creates anonymous FD referring to the ruleset. */
+ ruleset_fd = anon_inode_getfd("[landlock-ruleset]", &ruleset_fops,
+- ruleset, O_RDWR | O_CLOEXEC);
++ ruleset, O_RDWR | O_CLOEXEC);
+ if (ruleset_fd < 0)
+ landlock_put_ruleset(ruleset);
+ return ruleset_fd;
+@@ -204,7 +206,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ * landlock_put_ruleset() on the return value.
+ */
+ static struct landlock_ruleset *get_ruleset_from_fd(const int fd,
+- const fmode_t mode)
++ const fmode_t mode)
+ {
+ struct fd ruleset_f;
+ struct landlock_ruleset *ruleset;
+@@ -244,8 +246,8 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
+ struct fd f;
+ int err = 0;
+
+- BUILD_BUG_ON(!__same_type(fd,
+- ((struct landlock_path_beneath_attr *)NULL)->parent_fd));
++ BUILD_BUG_ON(!__same_type(
++ fd, ((struct landlock_path_beneath_attr *)NULL)->parent_fd));
+
+ /* Handles O_PATH. */
+ f = fdget_raw(fd);
+@@ -257,10 +259,10 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
+ * pipefs).
+ */
+ if ((f.file->f_op == &ruleset_fops) ||
+- (f.file->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
+- (f.file->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
+- d_is_negative(f.file->f_path.dentry) ||
+- IS_PRIVATE(d_backing_inode(f.file->f_path.dentry))) {
++ (f.file->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
++ (f.file->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
++ d_is_negative(f.file->f_path.dentry) ||
++ IS_PRIVATE(d_backing_inode(f.file->f_path.dentry))) {
+ err = -EBADFD;
+ goto out_fdput;
+ }
+@@ -290,19 +292,18 @@ out_fdput:
+ *
+ * - EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
+ * - EINVAL: @flags is not 0, or inconsistent access in the rule (i.e.
+- * &landlock_path_beneath_attr.allowed_access is not a subset of the rule's
+- * accesses);
++ * &landlock_path_beneath_attr.allowed_access is not a subset of the
++ * ruleset handled accesses);
+ * - ENOMSG: Empty accesses (e.g. &landlock_path_beneath_attr.allowed_access);
+ * - EBADF: @ruleset_fd is not a file descriptor for the current thread, or a
+ * member of @rule_attr is not a file descriptor as expected;
+ * - EBADFD: @ruleset_fd is not a ruleset file descriptor, or a member of
+- * @rule_attr is not the expected file descriptor type (e.g. file open
+- * without O_PATH);
++ * @rule_attr is not the expected file descriptor type;
+ * - EPERM: @ruleset_fd has no write access to the underlying ruleset;
+ * - EFAULT: @rule_attr inconsistency.
+ */
+-SYSCALL_DEFINE4(landlock_add_rule,
+- const int, ruleset_fd, const enum landlock_rule_type, rule_type,
++SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd,
++ const enum landlock_rule_type, rule_type,
+ const void __user *const, rule_attr, const __u32, flags)
+ {
+ struct landlock_path_beneath_attr path_beneath_attr;
+@@ -317,20 +318,24 @@ SYSCALL_DEFINE4(landlock_add_rule,
+ if (flags)
+ return -EINVAL;
+
+- if (rule_type != LANDLOCK_RULE_PATH_BENEATH)
+- return -EINVAL;
+-
+- /* Copies raw user space buffer, only one type for now. */
+- res = copy_from_user(&path_beneath_attr, rule_attr,
+- sizeof(path_beneath_attr));
+- if (res)
+- return -EFAULT;
+-
+ /* Gets and checks the ruleset. */
+ ruleset = get_ruleset_from_fd(ruleset_fd, FMODE_CAN_WRITE);
+ if (IS_ERR(ruleset))
+ return PTR_ERR(ruleset);
+
++ if (rule_type != LANDLOCK_RULE_PATH_BENEATH) {
++ err = -EINVAL;
++ goto out_put_ruleset;
++ }
++
++ /* Copies raw user space buffer, only one type for now. */
++ res = copy_from_user(&path_beneath_attr, rule_attr,
++ sizeof(path_beneath_attr));
++ if (res) {
++ err = -EFAULT;
++ goto out_put_ruleset;
++ }
++
+ /*
+ * Informs about useless rule: empty allowed_access (i.e. deny rules)
+ * are ignored in path walks.
+@@ -344,7 +349,7 @@ SYSCALL_DEFINE4(landlock_add_rule,
+ * (ruleset->fs_access_masks[0] is automatically upgraded to 64-bits).
+ */
+ if ((path_beneath_attr.allowed_access | ruleset->fs_access_masks[0]) !=
+- ruleset->fs_access_masks[0]) {
++ ruleset->fs_access_masks[0]) {
+ err = -EINVAL;
+ goto out_put_ruleset;
+ }
+@@ -356,7 +361,7 @@ SYSCALL_DEFINE4(landlock_add_rule,
+
+ /* Imports the new rule. */
+ err = landlock_append_fs_rule(ruleset, &path,
+- path_beneath_attr.allowed_access);
++ path_beneath_attr.allowed_access);
+ path_put(&path);
+
+ out_put_ruleset:
+@@ -389,8 +394,8 @@ out_put_ruleset:
+ * - E2BIG: The maximum number of stacked rulesets is reached for the current
+ * thread.
+ */
+-SYSCALL_DEFINE2(landlock_restrict_self,
+- const int, ruleset_fd, const __u32, flags)
++SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,
++ flags)
+ {
+ struct landlock_ruleset *new_dom, *ruleset;
+ struct cred *new_cred;
+@@ -400,18 +405,18 @@ SYSCALL_DEFINE2(landlock_restrict_self,
+ if (!landlock_initialized)
+ return -EOPNOTSUPP;
+
+- /* No flag for now. */
+- if (flags)
+- return -EINVAL;
+-
+ /*
+ * Similar checks as for seccomp(2), except that an -EPERM may be
+ * returned.
+ */
+ if (!task_no_new_privs(current) &&
+- !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))
++ !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))
+ return -EPERM;
+
++ /* No flag for now. */
++ if (flags)
++ return -EINVAL;
++
+ /* Gets and checks the ruleset. */
+ ruleset = get_ruleset_from_fd(ruleset_fd, FMODE_CAN_READ);
+ if (IS_ERR(ruleset))
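
The landlock_add_rule() reordering above makes the ruleset file descriptor the first thing validated, so every subsequent failure (bad rule type, failed copy) exits through one label that drops the reference. A userspace sketch of that acquire-first, single-exit shape, with malloc()/free() standing in for the ruleset get/put and add_rule() standing in for the syscall:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int add_rule(int valid_fd, int valid_type)
{
	int err = 0;
	void *ruleset = valid_fd ? malloc(1) : NULL; /* get_ruleset_from_fd() */

	if (!ruleset)
		return -EBADF;		/* checked before the rule type */

	if (!valid_type) {
		err = -EINVAL;
		goto out_put_ruleset;	/* later failures still drop the ref */
	}
	/* ... copy_from_user() and landlock_append_fs_rule() would go here ... */

out_put_ruleset:
	free(ruleset);			/* landlock_put_ruleset() */
	return err;
}

int main(void)
{
	printf("%d %d %d\n", add_rule(0, 1), add_rule(1, 0), add_rule(1, 1));
	return 0;
}
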
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index d1e3055f2b6a5..88493cc31914b 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -42,8 +42,11 @@ static int snd_jack_dev_disconnect(struct snd_device *device)
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+ struct snd_jack *jack = device->device_data;
+
+- if (!jack->input_dev)
++ mutex_lock(&jack->input_dev_lock);
++ if (!jack->input_dev) {
++ mutex_unlock(&jack->input_dev_lock);
+ return 0;
++ }
+
+ /* If the input device is registered with the input subsystem
+ * then we need to use a different deallocator. */
+@@ -52,6 +55,7 @@ static int snd_jack_dev_disconnect(struct snd_device *device)
+ else
+ input_free_device(jack->input_dev);
+ jack->input_dev = NULL;
++ mutex_unlock(&jack->input_dev_lock);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ return 0;
+ }
+@@ -90,8 +94,11 @@ static int snd_jack_dev_register(struct snd_device *device)
+ snprintf(jack->name, sizeof(jack->name), "%s %s",
+ card->shortname, jack->id);
+
+- if (!jack->input_dev)
++ mutex_lock(&jack->input_dev_lock);
++ if (!jack->input_dev) {
++ mutex_unlock(&jack->input_dev_lock);
+ return 0;
++ }
+
+ jack->input_dev->name = jack->name;
+
+@@ -116,6 +123,7 @@ static int snd_jack_dev_register(struct snd_device *device)
+ if (err == 0)
+ jack->registered = 1;
+
++ mutex_unlock(&jack->input_dev_lock);
+ return err;
+ }
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+@@ -517,9 +525,11 @@ int snd_jack_new(struct snd_card *card, const char *id, int type,
+ return -ENOMEM;
+ }
+
+- /* don't creat input device for phantom jack */
+- if (!phantom_jack) {
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
++ mutex_init(&jack->input_dev_lock);
++
++ /* don't create input device for phantom jack */
++ if (!phantom_jack) {
+ int i;
+
+ jack->input_dev = input_allocate_device();
+@@ -537,8 +547,8 @@ int snd_jack_new(struct snd_card *card, const char *id, int type,
+ input_set_capability(jack->input_dev, EV_SW,
+ jack_switch_types[i]);
+
+-#endif /* CONFIG_SND_JACK_INPUT_DEV */
+ }
++#endif /* CONFIG_SND_JACK_INPUT_DEV */
+
+ err = snd_device_new(card, SNDRV_DEV_JACK, jack, &ops);
+ if (err < 0)
+@@ -578,10 +588,14 @@ EXPORT_SYMBOL(snd_jack_new);
+ void snd_jack_set_parent(struct snd_jack *jack, struct device *parent)
+ {
+ WARN_ON(jack->registered);
+- if (!jack->input_dev)
++ mutex_lock(&jack->input_dev_lock);
++ if (!jack->input_dev) {
++ mutex_unlock(&jack->input_dev_lock);
+ return;
++ }
+
+ jack->input_dev->dev.parent = parent;
++ mutex_unlock(&jack->input_dev_lock);
+ }
+ EXPORT_SYMBOL(snd_jack_set_parent);
+
+@@ -629,6 +643,8 @@ EXPORT_SYMBOL(snd_jack_set_key);
+
+ /**
+ * snd_jack_report - Report the current status of a jack
++ * Note: This function uses mutexes and should be called from a
++ * context which can sleep (such as a workqueue).
+ *
+ * @jack: The jack to report status for
+ * @status: The current status of the jack
+@@ -654,8 +670,11 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ status & jack_kctl->mask_bits);
+
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+- if (!jack->input_dev)
++ mutex_lock(&jack->input_dev_lock);
++ if (!jack->input_dev) {
++ mutex_unlock(&jack->input_dev_lock);
+ return;
++ }
+
+ for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
+ int testbit = ((SND_JACK_BTN_0 >> i) & ~mask_bits);
+@@ -675,6 +694,7 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ }
+
+ input_sync(jack->input_dev);
++ mutex_unlock(&jack->input_dev_lock);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ }
+ EXPORT_SYMBOL(snd_jack_report);
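
The jack.c changes above guard every access to jack->input_dev with input_dev_lock, including the early returns, so a reader can no longer race with snd_jack_dev_disconnect() freeing the device. A small pthreads sketch of the same check-under-lock pattern; struct jack and its functions here are simplified stand-ins, not the ALSA API.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct jack {
	pthread_mutex_t lock;
	void *input_dev;
};

static void jack_report(struct jack *j)
{
	pthread_mutex_lock(&j->lock);
	if (!j->input_dev) {		/* the early return must still unlock */
		pthread_mutex_unlock(&j->lock);
		return;
	}
	printf("reporting via %p\n", j->input_dev);
	pthread_mutex_unlock(&j->lock);
}

static void jack_disconnect(struct jack *j)
{
	pthread_mutex_lock(&j->lock);
	free(j->input_dev);		/* like input_free_device() */
	j->input_dev = NULL;		/* readers now observe NULL safely */
	pthread_mutex_unlock(&j->lock);
}

int main(void)
{
	struct jack j = { PTHREAD_MUTEX_INITIALIZER, malloc(1) };

	jack_report(&j);
	jack_disconnect(&j);
	jack_report(&j);	/* safe: sees NULL under the lock */
	return 0;
}
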
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index 8848d2f3160d8..b8296b6eb2c19 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -453,7 +453,6 @@ EXPORT_SYMBOL(snd_pcm_lib_malloc_pages);
+ */
+ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream)
+ {
+- struct snd_card *card = substream->pcm->card;
+ struct snd_pcm_runtime *runtime;
+
+ if (PCM_RUNTIME_CHECK(substream))
+@@ -462,6 +461,8 @@ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream)
+ if (runtime->dma_area == NULL)
+ return 0;
+ if (runtime->dma_buffer_p != &substream->dma_buffer) {
++ struct snd_card *card = substream->pcm->card;
++
+ /* it's a newly allocated buffer. release it now. */
+ do_free_pages(card, runtime->dma_buffer_p);
+ kfree(runtime->dma_buffer_p);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 53d1586b71ec6..112ecc256b148 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1981,6 +1981,7 @@ enum {
+ ALC1220_FIXUP_CLEVO_PB51ED_PINS,
+ ALC887_FIXUP_ASUS_AUDIO,
+ ALC887_FIXUP_ASUS_HMIC,
++ ALCS1200A_FIXUP_MIC_VREF,
+ };
+
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -2526,6 +2527,14 @@ static const struct hda_fixup alc882_fixups[] = {
+ .chained = true,
+ .chain_id = ALC887_FIXUP_ASUS_AUDIO,
+ },
++ [ALCS1200A_FIXUP_MIC_VREF] = {
++ .type = HDA_FIXUP_PINCTLS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x18, PIN_VREF50 }, /* rear mic */
++ { 0x19, PIN_VREF50 }, /* front mic */
++ {}
++ }
++ },
+ };
+
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2563,6 +2572,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
+ SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
+ SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
++ SND_PCI_QUIRK(0x1043, 0x8797, "ASUS TUF B550M-PLUS", ALCS1200A_FIXUP_MIC_VREF),
+ SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+ SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+ SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
+@@ -3131,6 +3141,7 @@ enum {
+ ALC269_TYPE_ALC257,
+ ALC269_TYPE_ALC215,
+ ALC269_TYPE_ALC225,
++ ALC269_TYPE_ALC245,
+ ALC269_TYPE_ALC287,
+ ALC269_TYPE_ALC294,
+ ALC269_TYPE_ALC300,
+@@ -3168,6 +3179,7 @@ static int alc269_parse_auto_config(struct hda_codec *codec)
+ case ALC269_TYPE_ALC257:
+ case ALC269_TYPE_ALC215:
+ case ALC269_TYPE_ALC225:
++ case ALC269_TYPE_ALC245:
+ case ALC269_TYPE_ALC287:
+ case ALC269_TYPE_ALC294:
+ case ALC269_TYPE_ALC300:
+@@ -3695,7 +3707,8 @@ static void alc225_init(struct hda_codec *codec)
+ hda_nid_t hp_pin = alc_get_hp_pin(spec);
+ bool hp1_pin_sense, hp2_pin_sense;
+
+- if (spec->codec_variant != ALC269_TYPE_ALC287)
++ if (spec->codec_variant != ALC269_TYPE_ALC287 &&
++ spec->codec_variant != ALC269_TYPE_ALC245)
+ /* required only at boot or S3 and S4 resume time */
+ if (!spec->done_hp_init ||
+ is_s3_resume(codec) ||
+@@ -8918,6 +8931,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -10093,7 +10107,10 @@ static int patch_alc269(struct hda_codec *codec)
+ case 0x10ec0245:
+ case 0x10ec0285:
+ case 0x10ec0289:
+- spec->codec_variant = ALC269_TYPE_ALC215;
++ if (alc_get_coef0(codec) & 0x0010)
++ spec->codec_variant = ALC269_TYPE_ALC245;
++ else
++ spec->codec_variant = ALC269_TYPE_ALC215;
+ spec->shutup = alc225_shutup;
+ spec->init_hook = alc225_init;
+ spec->gen.mixer_nid = 0;
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 9a767f47b89f1..959b70e8baf21 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -45,108 +45,126 @@ static struct snd_soc_card acp6x_card = {
+
+ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D2"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D3"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D4"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D5"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CF"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CG"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CQ"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CR"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21AW"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21AX"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21BN"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21BQ"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CH"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CJ"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CK"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21CL"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D8"),
+ }
+ },
+ {
++ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D9"),
+@@ -157,18 +175,21 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+
+ static int acp6x_probe(struct platform_device *pdev)
+ {
++ const struct dmi_system_id *dmi_id;
+ struct acp6x_pdm *machine = NULL;
+ struct snd_soc_card *card;
+ int ret;
+- const struct dmi_system_id *dmi_id;
+
++ /* check for any DMI overrides */
+ dmi_id = dmi_first_match(yc_acp_quirk_table);
+- if (!dmi_id)
++ if (dmi_id)
++ platform_set_drvdata(pdev, dmi_id->driver_data);
++
++ card = platform_get_drvdata(pdev);
++ if (!card)
+ return -ENODEV;
+- card = &acp6x_card;
+ acp6x_card.dev = &pdev->dev;
+
+- platform_set_drvdata(pdev, card);
+ snd_soc_card_set_drvdata(card, machine);
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+ if (ret) {
+diff --git a/sound/soc/atmel/atmel-classd.c b/sound/soc/atmel/atmel-classd.c
+index a9f9f449c48c2..74b7b2611aa70 100644
+--- a/sound/soc/atmel/atmel-classd.c
++++ b/sound/soc/atmel/atmel-classd.c
+@@ -458,7 +458,6 @@ static const struct snd_soc_component_driver atmel_classd_cpu_dai_component = {
+ .num_controls = ARRAY_SIZE(atmel_classd_snd_controls),
+ .idle_bias_on = 1,
+ .use_pmdown_time = 1,
+- .endianness = 1,
+ };
+
+ /* ASoC sound card */
+diff --git a/sound/soc/atmel/atmel-pdmic.c b/sound/soc/atmel/atmel-pdmic.c
+index 42117de299e74..ea34efac2fff5 100644
+--- a/sound/soc/atmel/atmel-pdmic.c
++++ b/sound/soc/atmel/atmel-pdmic.c
+@@ -481,7 +481,6 @@ static const struct snd_soc_component_driver atmel_pdmic_cpu_dai_component = {
+ .num_controls = ARRAY_SIZE(atmel_pdmic_snd_controls),
+ .idle_bias_on = 1,
+ .use_pmdown_time = 1,
+- .endianness = 1,
+ };
+
+ /* ASoC sound card */
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 30c00380499cd..3496403004acd 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -943,7 +943,6 @@ config SND_SOC_MAX98095
+
+ config SND_SOC_MAX98357A
+ tristate "Maxim MAX98357A CODEC"
+- depends on GPIOLIB
+
+ config SND_SOC_MAX98371
+ tristate
+@@ -1203,7 +1202,6 @@ config SND_SOC_RT1015
+
+ config SND_SOC_RT1015P
+ tristate
+- depends on GPIOLIB
+
+ config SND_SOC_RT1019
+ tristate
+diff --git a/sound/soc/codecs/cs35l41-lib.c b/sound/soc/codecs/cs35l41-lib.c
+index 281a710a41231..8de5038ee9b3f 100644
+--- a/sound/soc/codecs/cs35l41-lib.c
++++ b/sound/soc/codecs/cs35l41-lib.c
+@@ -422,7 +422,7 @@ static bool cs35l41_volatile_reg(struct device *dev, unsigned int reg)
+ }
+ }
+
+-static const struct cs35l41_otp_packed_element_t otp_map_1[CS35L41_NUM_OTP_ELEM] = {
++static const struct cs35l41_otp_packed_element_t otp_map_1[] = {
+ /* addr shift size */
+ { 0x00002030, 0, 4 }, /*TRIM_OSC_FREQ_TRIM*/
+ { 0x00002030, 7, 1 }, /*TRIM_OSC_TRIM_DONE*/
+@@ -525,7 +525,7 @@ static const struct cs35l41_otp_packed_element_t otp_map_1[CS35L41_NUM_OTP_ELEM]
+ { 0x00017044, 0, 24 }, /*LOT_NUMBER*/
+ };
+
+-static const struct cs35l41_otp_packed_element_t otp_map_2[CS35L41_NUM_OTP_ELEM] = {
++static const struct cs35l41_otp_packed_element_t otp_map_2[] = {
+ /* addr shift size */
+ { 0x00002030, 0, 4 }, /*TRIM_OSC_FREQ_TRIM*/
+ { 0x00002030, 7, 1 }, /*TRIM_OSC_TRIM_DONE*/
+@@ -671,35 +671,35 @@ static const struct cs35l41_otp_map_element_t cs35l41_otp_map_map[] = {
+ {
+ .id = 0x01,
+ .map = otp_map_1,
+- .num_elements = CS35L41_NUM_OTP_ELEM,
++ .num_elements = ARRAY_SIZE(otp_map_1),
+ .bit_offset = 16,
+ .word_offset = 2,
+ },
+ {
+ .id = 0x02,
+ .map = otp_map_2,
+- .num_elements = CS35L41_NUM_OTP_ELEM,
++ .num_elements = ARRAY_SIZE(otp_map_2),
+ .bit_offset = 16,
+ .word_offset = 2,
+ },
+ {
+ .id = 0x03,
+ .map = otp_map_2,
+- .num_elements = CS35L41_NUM_OTP_ELEM,
++ .num_elements = ARRAY_SIZE(otp_map_2),
+ .bit_offset = 16,
+ .word_offset = 2,
+ },
+ {
+ .id = 0x06,
+ .map = otp_map_2,
+- .num_elements = CS35L41_NUM_OTP_ELEM,
++ .num_elements = ARRAY_SIZE(otp_map_2),
+ .bit_offset = 16,
+ .word_offset = 2,
+ },
+ {
+ .id = 0x08,
+ .map = otp_map_1,
+- .num_elements = CS35L41_NUM_OTP_ELEM,
++ .num_elements = ARRAY_SIZE(otp_map_1),
+ .bit_offset = 16,
+ .word_offset = 2,
+ },
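
The cs35l41 change above derives each map's .num_elements from the array itself instead of a shared CS35L41_NUM_OTP_ELEM constant, which matters once otp_map_1 and otp_map_2 may differ in length. A tiny sketch of the pattern; the structures and values are illustrative only.

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct elem { unsigned int addr, shift, size; };

static const struct elem map_1[] = { { 0x2030, 0, 4 }, { 0x2030, 7, 1 } };
static const struct elem map_2[] = { { 0x2030, 0, 4 } };

struct map_desc { const struct elem *map; unsigned int num_elements; };

int main(void)
{
	struct map_desc maps[] = {
		{ map_1, ARRAY_SIZE(map_1) },	/* 2, follows the array */
		{ map_2, ARRAY_SIZE(map_2) },	/* 1, no stale shared constant */
	};

	for (unsigned int i = 0; i < ARRAY_SIZE(maps); i++)
		printf("map %u has %u elements\n", i, maps[i].num_elements);
	return 0;
}
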
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index 62b41ca050a20..5513acd360b8f 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -393,7 +393,8 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ struct soc_mixer_control *mc =
+ (struct soc_mixer_control *)kcontrol->private_value;
+ unsigned int mask = (1 << fls(mc->max)) - 1;
+- unsigned int sel = ucontrol->value.integer.value[0];
++ int sel_unchecked = ucontrol->value.integer.value[0];
++ unsigned int sel;
+ unsigned int val = snd_soc_component_read(component, mc->reg);
+ unsigned int *select;
+
+@@ -413,8 +414,9 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+
+ val = (val >> mc->shift) & mask;
+
+- if (sel < 0 || sel > mc->max)
++ if (sel_unchecked < 0 || sel_unchecked > mc->max)
+ return -EINVAL;
++ sel = sel_unchecked;
+
+ *select = sel;
+
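
The max98090 fix above validates the user-supplied control value while it is still signed: stored directly into an unsigned int, a negative value wraps and the sel < 0 branch can never fire. A minimal sketch of the corrected order; the variable names mirror the driver but the values are made up.

#include <stdio.h>

int main(void)
{
	long user_value = -1;	/* ucontrol->value.integer.value[0] */
	int max = 7;		/* mc->max */

	int sel_unchecked = (int)user_value;	/* validate while still signed */
	if (sel_unchecked < 0 || sel_unchecked > max) {
		printf("rejected: -EINVAL\n");
		return 1;
	}
	unsigned int sel = (unsigned int)sel_unchecked;	/* safe to narrow now */
	printf("sel=%u\n", sel);
	return 0;
}
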
+diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c
+index 758d439e8c7a5..86b679cf7aef9 100644
+--- a/sound/soc/codecs/rk3328_codec.c
++++ b/sound/soc/codecs/rk3328_codec.c
+@@ -481,7 +481,7 @@ static int rk3328_platform_probe(struct platform_device *pdev)
+ ret = clk_prepare_enable(rk3328->pclk);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to enable acodec pclk\n");
+- return ret;
++ goto err_unprepare_mclk;
+ }
+
+ base = devm_platform_ioremap_resource(pdev, 0);
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index 577680df70520..92428f2b459ba 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -419,7 +419,7 @@ static int rt5514_dsp_voice_wake_up_put(struct snd_kcontrol *kcontrol,
+ }
+ }
+
+- return 0;
++ return 1;
+ }
+
+ static const struct snd_kcontrol_new rt5514_snd_controls[] = {
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 197c560479470..4b2e027c10331 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -4154,9 +4154,14 @@ static int rt5645_i2c_remove(struct i2c_client *i2c)
+ if (i2c->irq)
+ free_irq(i2c->irq, rt5645);
+
++ /*
++ * Since the rt5645_btn_check_callback() can queue jack_detect_work,
++	 * the timer needs to be deleted first
++ */
++ del_timer_sync(&rt5645->btn_check_timer);
++
+ cancel_delayed_work_sync(&rt5645->jack_detect_work);
+ cancel_delayed_work_sync(&rt5645->rcclock_work);
+- del_timer_sync(&rt5645->btn_check_timer);
+
+ regulator_bulk_disable(ARRAY_SIZE(rt5645->supplies), rt5645->supplies);
+
+diff --git a/sound/soc/codecs/tscs454.c b/sound/soc/codecs/tscs454.c
+index 43220bb36701a..c27ca9a273e14 100644
+--- a/sound/soc/codecs/tscs454.c
++++ b/sound/soc/codecs/tscs454.c
+@@ -3120,18 +3120,17 @@ static int set_aif_sample_format(struct snd_soc_component *component,
+ unsigned int width;
+ int ret;
+
+- switch (format) {
+- case SNDRV_PCM_FORMAT_S16_LE:
++ switch (snd_pcm_format_width(format)) {
++ case 16:
+ width = FV_WL_16;
+ break;
+- case SNDRV_PCM_FORMAT_S20_3LE:
++ case 20:
+ width = FV_WL_20;
+ break;
+- case SNDRV_PCM_FORMAT_S24_3LE:
++ case 24:
+ width = FV_WL_24;
+ break;
+- case SNDRV_PCM_FORMAT_S24_LE:
+- case SNDRV_PCM_FORMAT_S32_LE:
++ case 32:
+ width = FV_WL_32;
+ break;
+ default:
+@@ -3326,6 +3325,7 @@ static const struct snd_soc_component_driver soc_component_dev_tscs454 = {
+ .num_dapm_routes = ARRAY_SIZE(tscs454_intercon),
+ .controls = tscs454_snd_controls,
+ .num_controls = ARRAY_SIZE(tscs454_snd_controls),
++ .endianness = 1,
+ };
+
+ #define TSCS454_RATES SNDRV_PCM_RATE_8000_96000
+diff --git a/sound/soc/codecs/wm2000.c b/sound/soc/codecs/wm2000.c
+index 72e165cc64439..97ece3114b3dc 100644
+--- a/sound/soc/codecs/wm2000.c
++++ b/sound/soc/codecs/wm2000.c
+@@ -536,7 +536,7 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000,
+ {
+ struct i2c_client *i2c = wm2000->i2c;
+ int i, j;
+- int ret;
++ int ret = 0;
+
+ if (wm2000->anc_mode == mode)
+ return 0;
+@@ -566,13 +566,13 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000,
+ ret = anc_transitions[i].step[j](i2c,
+ anc_transitions[i].analogue);
+ if (ret != 0)
+- return ret;
++ break;
+ }
+
+ if (anc_transitions[i].dest == ANC_OFF)
+ clk_disable_unprepare(wm2000->mclk);
+
+- return 0;
++ return ret;
+ }
+
+ static int wm2000_anc_set_mode(struct wm2000_priv *wm2000)
+diff --git a/sound/soc/fsl/imx-hdmi.c b/sound/soc/fsl/imx-hdmi.c
+index 929f69b758af4..ec149dc739383 100644
+--- a/sound/soc/fsl/imx-hdmi.c
++++ b/sound/soc/fsl/imx-hdmi.c
+@@ -126,6 +126,7 @@ static int imx_hdmi_probe(struct platform_device *pdev)
+ data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ ret = -ENOMEM;
++ put_device(&cpu_pdev->dev);
+ goto fail;
+ }
+
+diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
+index 8daced42d55e4..580a0d963f0eb 100644
+--- a/sound/soc/fsl/imx-sgtl5000.c
++++ b/sound/soc/fsl/imx-sgtl5000.c
+@@ -120,19 +120,19 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ ret = -ENOMEM;
+- goto fail;
++ goto put_device;
+ }
+
+ comp = devm_kzalloc(&pdev->dev, 3 * sizeof(*comp), GFP_KERNEL);
+ if (!comp) {
+ ret = -ENOMEM;
+- goto fail;
++ goto put_device;
+ }
+
+ data->codec_clk = clk_get(&codec_dev->dev, NULL);
+ if (IS_ERR(data->codec_clk)) {
+ ret = PTR_ERR(data->codec_clk);
+- goto fail;
++ goto put_device;
+ }
+
+ data->clk_frequency = clk_get_rate(data->codec_clk);
+@@ -158,10 +158,10 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ data->card.dev = &pdev->dev;
+ ret = snd_soc_of_parse_card_name(&data->card, "model");
+ if (ret)
+- goto fail;
++ goto put_device;
+ ret = snd_soc_of_parse_audio_routing(&data->card, "audio-routing");
+ if (ret)
+- goto fail;
++ goto put_device;
+ data->card.num_links = 1;
+ data->card.owner = THIS_MODULE;
+ data->card.dai_link = &data->dai;
+@@ -174,7 +174,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ ret = devm_snd_soc_register_card(&pdev->dev, &data->card);
+ if (ret) {
+ dev_err_probe(&pdev->dev, ret, "snd_soc_register_card failed\n");
+- goto fail;
++ goto put_device;
+ }
+
+ of_node_put(ssi_np);
+@@ -182,6 +182,8 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+
+ return 0;
+
++put_device:
++ put_device(&codec_dev->dev);
+ fail:
+ if (data && !IS_ERR(data->codec_clk))
+ clk_put(data->codec_clk);
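
imx-sgtl5000's probe gains a put_device label because the codec device lookup takes a reference that every later failure path must drop. A compact sketch of the layered-goto unwind; get_codec_device() and do_setup() are made-up stand-ins for the lookup and setup steps of the real probe:

    static int my_probe(struct platform_device *pdev)
    {
            struct device *codec_dev;
            int ret;

            codec_dev = get_codec_device(pdev);     /* takes a reference */
            if (!codec_dev)
                    return -EPROBE_DEFER;

            ret = do_setup(pdev);                   /* any later failure... */
            if (ret)
                    goto put_device;                /* ...must drop that ref */

            return 0;

    put_device:
            put_device(codec_dev);
            return ret;
    }
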
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 2ace32c03ec9d..b5ac226c59e1d 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -773,6 +773,18 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_OVCD_SF_0P75 |
+ BYT_RT5640_MCLK_EN),
+ },
++ { /* HP Pro Tablet 408 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Pro Tablet 408"),
++ },
++ .driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++ BYT_RT5640_JD_SRC_JD2_IN4N |
++ BYT_RT5640_OVCD_TH_1500UA |
++ BYT_RT5640_OVCD_SF_0P75 |
++ BYT_RT5640_SSP0_AIF1 |
++ BYT_RT5640_MCLK_EN),
++ },
+ { /* HP Stream 7 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+diff --git a/sound/soc/mediatek/mt2701/mt2701-wm8960.c b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+index f56de1b918bf0..0cdf2ae362439 100644
+--- a/sound/soc/mediatek/mt2701/mt2701-wm8960.c
++++ b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+@@ -129,7 +129,8 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ if (!codec_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_platform_node;
+ }
+ for_each_card_prelinks(card, i, dai_link) {
+ if (dai_link->codecs->name)
+@@ -140,7 +141,7 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ ret = snd_soc_of_parse_audio_routing(card, "audio-routing");
+ if (ret) {
+ dev_err(&pdev->dev, "failed to parse audio-routing: %d\n", ret);
+- return ret;
++ goto put_codec_node;
+ }
+
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+@@ -148,6 +149,10 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ __func__, ret);
+
++put_codec_node:
++ of_node_put(codec_node);
++put_platform_node:
++ of_node_put(platform_node);
+ return ret;
+ }
+
+diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+index 4cb90da89262b..58778cd2e61b1 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-max98090.c
++++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+@@ -167,7 +167,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ if (!codec_node) {
+ dev_err(&pdev->dev,
+ "Property 'audio-codec' missing or invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_platform_node;
+ }
+ for_each_card_prelinks(card, i, dai_link) {
+ if (dai_link->codecs->name)
+@@ -179,6 +180,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+
+ of_node_put(codec_node);
++
++put_platform_node:
+ of_node_put(platform_node);
+ return ret;
+ }
+diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
+index 879c1221a809b..7afe1a1acc568 100644
+--- a/sound/soc/mxs/mxs-saif.c
++++ b/sound/soc/mxs/mxs-saif.c
+@@ -754,6 +754,7 @@ static int mxs_saif_probe(struct platform_device *pdev)
+ saif->master_id = saif->id;
+ } else {
+ ret = of_alias_get_id(master, "saif");
++ of_node_put(master);
+ if (ret < 0)
+ return ret;
+ else
+diff --git a/sound/soc/samsung/aries_wm8994.c b/sound/soc/samsung/aries_wm8994.c
+index 5265e546b124c..83acbe57b2489 100644
+--- a/sound/soc/samsung/aries_wm8994.c
++++ b/sound/soc/samsung/aries_wm8994.c
+@@ -585,10 +585,10 @@ static int aries_audio_probe(struct platform_device *pdev)
+
+ extcon_np = of_parse_phandle(np, "extcon", 0);
+ priv->usb_extcon = extcon_find_edev_by_node(extcon_np);
++ of_node_put(extcon_np);
+ if (IS_ERR(priv->usb_extcon))
+ return dev_err_probe(dev, PTR_ERR(priv->usb_extcon),
+ "Failed to get extcon device");
+- of_node_put(extcon_np);
+
+ priv->adc = devm_iio_channel_get(dev, "headset-detect");
+ if (IS_ERR(priv->adc))
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index 6a8fe0da7670b..af8ef2a27d341 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1159,6 +1159,7 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ struct device_node *capture)
+ {
+ struct rsnd_priv *priv = rsnd_rdai_to_priv(rdai);
++ struct device *dev = rsnd_priv_to_dev(priv);
+ struct device_node *np;
+ int i;
+
+@@ -1169,7 +1170,11 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ for_each_child_of_node(node, np) {
+ struct rsnd_mod *mod;
+
+- i = rsnd_node_fixed_index(np, name, i);
++ i = rsnd_node_fixed_index(dev, np, name, i);
++ if (i < 0) {
++ of_node_put(np);
++ break;
++ }
+
+ mod = mod_get(priv, i);
+
+@@ -1183,7 +1188,7 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ of_node_put(node);
+ }
+
+-int rsnd_node_fixed_index(struct device_node *node, char *name, int idx)
++int rsnd_node_fixed_index(struct device *dev, struct device_node *node, char *name, int idx)
+ {
+ char node_name[16];
+
+@@ -1210,6 +1215,8 @@ int rsnd_node_fixed_index(struct device_node *node, char *name, int idx)
+ return idx;
+ }
+
++ dev_err(dev, "strange node numbering (%s)",
++ of_node_full_name(node));
+ return -EINVAL;
+ }
+
+@@ -1221,10 +1228,8 @@ int rsnd_node_count(struct rsnd_priv *priv, struct device_node *node, char *name
+
+ i = 0;
+ for_each_child_of_node(node, np) {
+- i = rsnd_node_fixed_index(np, name, i);
++ i = rsnd_node_fixed_index(dev, np, name, i);
+ if (i < 0) {
+- dev_err(dev, "strange node numbering (%s)",
+- of_node_full_name(node));
+ of_node_put(np);
+ return 0;
+ }
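
All the rcar callers of rsnd_node_fixed_index() now follow one pattern: the helper takes the device, logs the "strange node numbering" error itself, and a negative return means the caller must drop the child-node reference held by for_each_child_of_node() before bailing out. The shared loop body, condensed:

    for_each_child_of_node(node, np) {
            i = rsnd_node_fixed_index(dev, np, name, i);
            if (i < 0) {
                    of_node_put(np);        /* balance the iterator's reference */
                    break;                  /* the helper already logged the error */
            }
            /* ... look up and connect the module at index i ... */
    }
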
+diff --git a/sound/soc/sh/rcar/dma.c b/sound/soc/sh/rcar/dma.c
+index 03e0d4eca7815..463ab237d7bd4 100644
+--- a/sound/soc/sh/rcar/dma.c
++++ b/sound/soc/sh/rcar/dma.c
+@@ -240,12 +240,19 @@ static int rsnd_dmaen_start(struct rsnd_mod *mod,
+ struct dma_chan *rsnd_dma_request_channel(struct device_node *of_node, char *name,
+ struct rsnd_mod *mod, char *x)
+ {
++ struct rsnd_priv *priv = rsnd_mod_to_priv(mod);
++ struct device *dev = rsnd_priv_to_dev(priv);
+ struct dma_chan *chan = NULL;
+ struct device_node *np;
+ int i = 0;
+
+ for_each_child_of_node(of_node, np) {
+- i = rsnd_node_fixed_index(np, name, i);
++ i = rsnd_node_fixed_index(dev, np, name, i);
++ if (i < 0) {
++ chan = NULL;
++ of_node_put(np);
++ break;
++ }
+
+ if (i == rsnd_mod_id_raw(mod) && (!chan))
+ chan = of_dma_request_slave_channel(np, x);
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index 6580bab0e229b..d9cd190d7e198 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -460,7 +460,7 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ struct device_node *playback,
+ struct device_node *capture);
+ int rsnd_node_count(struct rsnd_priv *priv, struct device_node *node, char *name);
+-int rsnd_node_fixed_index(struct device_node *node, char *name, int idx);
++int rsnd_node_fixed_index(struct device *dev, struct device_node *node, char *name, int idx);
+
+ int rsnd_channel_normalization(int chan);
+ #define rsnd_runtime_channel_original(io) \
+diff --git a/sound/soc/sh/rcar/src.c b/sound/soc/sh/rcar/src.c
+index 42a100c6303d4..0ea84ae57c6ac 100644
+--- a/sound/soc/sh/rcar/src.c
++++ b/sound/soc/sh/rcar/src.c
+@@ -676,7 +676,12 @@ int rsnd_src_probe(struct rsnd_priv *priv)
+ if (!of_device_is_available(np))
+ goto skip;
+
+- i = rsnd_node_fixed_index(np, SRC_NAME, i);
++ i = rsnd_node_fixed_index(dev, np, SRC_NAME, i);
++ if (i < 0) {
++ ret = -EINVAL;
++ of_node_put(np);
++ goto rsnd_src_probe_done;
++ }
+
+ src = rsnd_src_get(priv, i);
+
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 87e606f688d3f..43c5e27dc5c86 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -1105,6 +1105,7 @@ void rsnd_parse_connect_ssi(struct rsnd_dai *rdai,
+ struct device_node *capture)
+ {
+ struct rsnd_priv *priv = rsnd_rdai_to_priv(rdai);
++ struct device *dev = rsnd_priv_to_dev(priv);
+ struct device_node *node;
+ struct device_node *np;
+ int i;
+@@ -1117,7 +1118,11 @@ void rsnd_parse_connect_ssi(struct rsnd_dai *rdai,
+ for_each_child_of_node(node, np) {
+ struct rsnd_mod *mod;
+
+- i = rsnd_node_fixed_index(np, SSI_NAME, i);
++ i = rsnd_node_fixed_index(dev, np, SSI_NAME, i);
++ if (i < 0) {
++ of_node_put(np);
++ break;
++ }
+
+ mod = rsnd_ssi_mod_get(priv, i);
+
+@@ -1182,7 +1187,12 @@ int rsnd_ssi_probe(struct rsnd_priv *priv)
+ if (!of_device_is_available(np))
+ goto skip;
+
+- i = rsnd_node_fixed_index(np, SSI_NAME, i);
++ i = rsnd_node_fixed_index(dev, np, SSI_NAME, i);
++ if (i < 0) {
++ ret = -EINVAL;
++ of_node_put(np);
++ goto rsnd_ssi_probe_done;
++ }
+
+ ssi = rsnd_ssi_get(priv, i);
+
+diff --git a/sound/soc/sh/rcar/ssiu.c b/sound/soc/sh/rcar/ssiu.c
+index 0d8f97633dd26..4b8a63e336c77 100644
+--- a/sound/soc/sh/rcar/ssiu.c
++++ b/sound/soc/sh/rcar/ssiu.c
+@@ -102,6 +102,8 @@ bool rsnd_ssiu_busif_err_status_clear(struct rsnd_mod *mod)
+ shift = 1;
+ offset = 1;
+ break;
++ default:
++ goto out;
+ }
+
+ for (i = 0; i < 4; i++) {
+@@ -120,7 +122,7 @@ bool rsnd_ssiu_busif_err_status_clear(struct rsnd_mod *mod)
+ }
+ rsnd_mod_write(mod, reg, val);
+ }
+-
++out:
+ return error;
+ }
+
+@@ -460,6 +462,7 @@ void rsnd_parse_connect_ssiu(struct rsnd_dai *rdai,
+ struct device_node *capture)
+ {
+ struct rsnd_priv *priv = rsnd_rdai_to_priv(rdai);
++ struct device *dev = rsnd_priv_to_dev(priv);
+ struct device_node *node = rsnd_ssiu_of_node(priv);
+ struct rsnd_dai_stream *io_p = &rdai->playback;
+ struct rsnd_dai_stream *io_c = &rdai->capture;
+@@ -472,7 +475,11 @@ void rsnd_parse_connect_ssiu(struct rsnd_dai *rdai,
+ for_each_child_of_node(node, np) {
+ struct rsnd_mod *mod;
+
+- i = rsnd_node_fixed_index(np, SSIU_NAME, i);
++ i = rsnd_node_fixed_index(dev, np, SSIU_NAME, i);
++ if (i < 0) {
++ of_node_put(np);
++ break;
++ }
+
+ mod = rsnd_ssiu_mod_get(priv, i);
+
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index 7379b1489e358..6d794eaaf4c39 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -982,22 +982,24 @@ static int rz_ssi_probe(struct platform_device *pdev)
+
+ /* Error Interrupt */
+ ssi->irq_int = platform_get_irq_byname(pdev, "int_req");
+- if (ssi->irq_int < 0)
+- return dev_err_probe(&pdev->dev, -ENODEV,
+- "Unable to get SSI int_req IRQ\n");
++ if (ssi->irq_int < 0) {
++ rz_ssi_release_dma_channels(ssi);
++ return ssi->irq_int;
++ }
+
+ ret = devm_request_irq(&pdev->dev, ssi->irq_int, &rz_ssi_interrupt,
+ 0, dev_name(&pdev->dev), ssi);
+- if (ret < 0)
++ if (ret < 0) {
++ rz_ssi_release_dma_channels(ssi);
+ return dev_err_probe(&pdev->dev, ret,
+ "irq request error (int_req)\n");
++ }
+
+ if (!rz_ssi_is_dma_enabled(ssi)) {
+ /* Tx and Rx interrupts (pio only) */
+ ssi->irq_tx = platform_get_irq_byname(pdev, "dma_tx");
+ if (ssi->irq_tx < 0)
+- return dev_err_probe(&pdev->dev, -ENODEV,
+- "Unable to get SSI dma_tx IRQ\n");
++ return ssi->irq_tx;
+
+ ret = devm_request_irq(&pdev->dev, ssi->irq_tx,
+ &rz_ssi_interrupt, 0,
+@@ -1008,8 +1010,7 @@ static int rz_ssi_probe(struct platform_device *pdev)
+
+ ssi->irq_rx = platform_get_irq_byname(pdev, "dma_rx");
+ if (ssi->irq_rx < 0)
+- return dev_err_probe(&pdev->dev, -ENODEV,
+- "Unable to get SSI dma_rx IRQ\n");
++ return ssi->irq_rx;
+
+ ret = devm_request_irq(&pdev->dev, ssi->irq_rx,
+ &rz_ssi_interrupt, 0,
+@@ -1020,13 +1021,16 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ }
+
+ ssi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+- if (IS_ERR(ssi->rstc))
++ if (IS_ERR(ssi->rstc)) {
++ rz_ssi_release_dma_channels(ssi);
+ return PTR_ERR(ssi->rstc);
++ }
+
+ reset_control_deassert(ssi->rstc);
+ pm_runtime_enable(&pdev->dev);
+ ret = pm_runtime_resume_and_get(&pdev->dev);
+ if (ret < 0) {
++ rz_ssi_release_dma_channels(ssi);
+ pm_runtime_disable(ssi->dev);
+ reset_control_assert(ssi->rstc);
+ return dev_err_probe(ssi->dev, ret, "pm_runtime_resume_and_get failed\n");
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index fb43b331a36e8..f62dd119639d4 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3430,7 +3430,6 @@ int snd_soc_dapm_put_volsw(struct snd_kcontrol *kcontrol,
+ update.val = val;
+ card->update = &update;
+ }
+- change |= reg_change;
+
+ ret = soc_dapm_mixer_update_power(card, kcontrol, connect,
+ rconnect);
+@@ -3532,7 +3531,6 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol,
+ update.val = val;
+ card->update = &update;
+ }
+- change |= reg_change;
+
+ ret = soc_dapm_mux_update_power(card, kcontrol, item[0], e);
+
+diff --git a/sound/soc/sof/amd/pci-rn.c b/sound/soc/sof/amd/pci-rn.c
+index 392ffbdf64179..d809d151a38c4 100644
+--- a/sound/soc/sof/amd/pci-rn.c
++++ b/sound/soc/sof/amd/pci-rn.c
+@@ -93,6 +93,7 @@ static int acp_pci_rn_probe(struct pci_dev *pci, const struct pci_device_id *pci
+ res = devm_kzalloc(&pci->dev, sizeof(struct resource) * ARRAY_SIZE(renoir_res), GFP_KERNEL);
+ if (!res) {
+ sof_pci_remove(pci);
++ platform_device_unregister(dmic_dev);
+ return -ENOMEM;
+ }
+
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index 4077e15ec48b7..6a969874c9270 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -630,17 +630,18 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ codec_node = of_parse_phandle(node, "ti,cpb-codec", 0);
+ if (!codec_node) {
+ dev_err(priv->dev, "CPB codec node is not provided\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_dai_node;
+ }
+
+ domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_CPB];
+ ret = j721e_get_clocks(priv->dev, &domain->codec, "cpb-codec-scki");
+ if (ret)
+- return ret;
++ goto put_codec_node;
+
+ ret = j721e_get_clocks(priv->dev, &domain->mcasp, "cpb-mcasp-auxclk");
+ if (ret)
+- return ret;
++ goto put_codec_node;
+
+ /*
+ * Common Processor Board, two links
+@@ -650,8 +651,10 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ comp_count = 6;
+ compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent),
+ GFP_KERNEL);
+- if (!compnent)
+- return -ENOMEM;
++ if (!compnent) {
++ ret = -ENOMEM;
++ goto put_codec_node;
++ }
+
+ comp_idx = 0;
+ priv->dai_links[*link_idx].cpus = &compnent[comp_idx++];
+@@ -702,6 +705,12 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ (*conf_idx)++;
+
+ return 0;
++
++put_codec_node:
++ of_node_put(codec_node);
++put_dai_node:
++ of_node_put(dai_node);
++ return ret;
+ }
+
+ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+@@ -726,23 +735,25 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ codeca_node = of_parse_phandle(node, "ti,ivi-codec-a", 0);
+ if (!codeca_node) {
+ dev_err(priv->dev, "IVI codec-a node is not provided\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_dai_node;
+ }
+
+ codecb_node = of_parse_phandle(node, "ti,ivi-codec-b", 0);
+ if (!codecb_node) {
+ dev_warn(priv->dev, "IVI codec-b node is not provided\n");
+- return 0;
++ ret = 0;
++ goto put_codeca_node;
+ }
+
+ domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_IVI];
+ ret = j721e_get_clocks(priv->dev, &domain->codec, "ivi-codec-scki");
+ if (ret)
+- return ret;
++ goto put_codecb_node;
+
+ ret = j721e_get_clocks(priv->dev, &domain->mcasp, "ivi-mcasp-auxclk");
+ if (ret)
+- return ret;
++ goto put_codecb_node;
+
+ /*
+ * IVI extension, two links
+@@ -754,8 +765,10 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ comp_count = 8;
+ compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent),
+ GFP_KERNEL);
+- if (!compnent)
+- return -ENOMEM;
++ if (!compnent) {
++ ret = -ENOMEM;
++ goto put_codecb_node;
++ }
+
+ comp_idx = 0;
+ priv->dai_links[*link_idx].cpus = &compnent[comp_idx++];
+@@ -816,6 +829,15 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ (*conf_idx)++;
+
+ return 0;
++
++
++put_codecb_node:
++ of_node_put(codecb_node);
++put_codeca_node:
++ of_node_put(codeca_node);
++put_dai_node:
++ of_node_put(dai_node);
++ return ret;
+ }
+
+ static int j721e_soc_probe(struct platform_device *pdev)
+diff --git a/sound/usb/implicit.c b/sound/usb/implicit.c
+index 2d444ec742029..e1bf1b5da423c 100644
+--- a/sound/usb/implicit.c
++++ b/sound/usb/implicit.c
+@@ -45,11 +45,6 @@ struct snd_usb_implicit_fb_match {
+
+ /* Implicit feedback quirk table for playback */
+ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = {
+- /* Generic matching */
+- IMPLICIT_FB_GENERIC_DEV(0x0499, 0x1509), /* Steinberg UR22 */
+- IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2030), /* M-Audio Fast Track C400 */
+- IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2031), /* M-Audio Fast Track C600 */
+-
+ /* Fixed EP */
+ /* FIXME: check the availability of generic matching */
+ IMPLICIT_FB_FIXED_DEV(0x0763, 0x2080, 0x81, 2), /* M-Audio FastTrack Ultra */
+@@ -350,7 +345,8 @@ static int audioformat_implicit_fb_quirk(struct snd_usb_audio *chip,
+ }
+
+ /* Try the generic implicit fb if available */
+- if (chip->generic_implicit_fb)
++ if (chip->generic_implicit_fb ||
++ (chip->quirk_flags & QUIRK_FLAG_GENERIC_IMPLICIT_FB))
+ return add_generic_implicit_fb(chip, fmt, alts);
+
+ /* No quirk */
+@@ -387,6 +383,8 @@ int snd_usb_parse_implicit_fb_quirk(struct snd_usb_audio *chip,
+ struct audioformat *fmt,
+ struct usb_host_interface *alts)
+ {
++ if (chip->quirk_flags & QUIRK_FLAG_SKIP_IMPLICIT_FB)
++ return 0;
+ if (fmt->endpoint & USB_DIR_IN)
+ return audioformat_capture_quirk(chip, fmt, alts);
+ else
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 7c6ca2b433a53..344fbeadf161b 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1145,6 +1145,9 @@ static int snd_usbmidi_output_open(struct snd_rawmidi_substream *substream)
+
+ static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream)
+ {
++ struct usbmidi_out_port *port = substream->runtime->private_data;
++
++ cancel_work_sync(&port->ep->work);
+ return substream_open(substream, 0, 0);
+ }
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index fbbe59054c3fb..e8468f9b007d1 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1793,6 +1793,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x046d, 0x09a4, /* Logitech QuickCam E 3500 */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
++ DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
++ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x04d8, 0xfeea, /* Benchmark DAC1 Pre */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x04e8, 0xa051, /* Samsung USBC Headset (AKG) */
+@@ -1826,6 +1828,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x074d, 0x3553, /* Outlaw RR2150 (Micronas UAC3553B) */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
++ DEVICE_FLG(0x0763, 0x2030, /* M-Audio Fast Track C400 */
++ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++ DEVICE_FLG(0x0763, 0x2031, /* M-Audio Fast Track C600 */
++ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x08bb, 0x2702, /* LineX FM Transmitter */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x0951, 0x16ad, /* Kingston HyperX */
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index b8359a0aa008a..044cd7ab27cbb 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -164,6 +164,10 @@ extern bool snd_usb_skip_validation;
+ * Support generic DSD raw U32_BE format
+ * QUIRK_FLAG_SET_IFACE_FIRST:
+ * Set up the interface at first like UAC1
++ * QUIRK_FLAG_GENERIC_IMPLICIT_FB:
++ * Apply the generic implicit feedback sync mode (same as implicit_fb=1 option)
++ * QUIRK_FLAG_SKIP_IMPLICIT_FB:
++ * Don't apply implicit feedback sync mode
+ */
+
+ #define QUIRK_FLAG_GET_SAMPLE_RATE (1U << 0)
+@@ -183,5 +187,7 @@ extern bool snd_usb_skip_validation;
+ #define QUIRK_FLAG_IGNORE_CTL_ERROR (1U << 14)
+ #define QUIRK_FLAG_DSD_RAW (1U << 15)
+ #define QUIRK_FLAG_SET_IFACE_FIRST (1U << 16)
++#define QUIRK_FLAG_GENERIC_IMPLICIT_FB (1U << 17)
++#define QUIRK_FLAG_SKIP_IMPLICIT_FB (1U << 18)
+
+ #endif /* __USBAUDIO_H */
+diff --git a/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c b/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
+index f7c084428735a..a17647f7d5a43 100644
+--- a/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
++++ b/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <bpf/libbpf.h>
++#include <bpf/btf.h>
+
+ int main(void)
+ {
+- return btf__load_from_kernel_by_id(20151128, NULL);
++ btf__load_from_kernel_by_id(20151128);
++ return 0;
+ }
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 94a6a8543cbc9..0ea7b91e675f8 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4564,7 +4564,7 @@ static int probe_kern_probe_read_kernel(void)
+ };
+ int fd, insn_cnt = ARRAY_SIZE(insns);
+
+- fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL", insns, insn_cnt, NULL);
++ fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, NULL);
+ return probe_fd(fd);
+ }
+
+@@ -5639,9 +5639,10 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+ */
+ prog = NULL;
+ for (i = 0; i < obj->nr_programs; i++) {
+- prog = &obj->programs[i];
+- if (strcmp(prog->sec_name, sec_name) == 0)
++ if (strcmp(obj->programs[i].sec_name, sec_name) == 0) {
++ prog = &obj->programs[i];
+ break;
++ }
+ }
+ if (!prog) {
+ pr_warn("sec '%s': failed to find a BPF program\n", sec_name);
+@@ -5656,10 +5657,17 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+ insn_idx = rec->insn_off / BPF_INSN_SZ;
+ prog = find_prog_by_sec_insn(obj, sec_idx, insn_idx);
+ if (!prog) {
+- pr_warn("sec '%s': failed to find program at insn #%d for CO-RE offset relocation #%d\n",
+- sec_name, insn_idx, i);
+- err = -EINVAL;
+- goto out;
++ /* When __weak subprog is "overridden" by another instance
++ * of the subprog from a different object file, linker still
++ * appends all the .BTF.ext info that used to belong to that
++ * eliminated subprogram.
++ * This is similar to what x86-64 linker does for relocations.
++ * So just ignore such relocations just like we ignore
++ * subprog instructions when discovering subprograms.
++ */
++ pr_debug("sec '%s': skipping CO-RE relocation #%d for insn #%d belonging to eliminated weak subprogram\n",
++ sec_name, i, insn_idx);
++ continue;
+ }
+ /* no need to apply CO-RE relocation if the program is
+ * not going to be loaded
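
The libbpf comment above leans on how linkers resolve __weak symbols; outside the BPF context the same effect is easy to reproduce with a hypothetical two-file example:

    /* a.c */
    __attribute__((weak)) int helper(void) { return 1; }

    /* b.c */
    int helper(void) { return 2; }

    /* Linking a.o with b.o keeps b.c's strong definition and discards
     * a.c's body, yet per-function metadata emitted for the weak copy
     * (for BPF objects, .BTF.ext records) can still point at the
     * eliminated instructions -- the case libbpf now skips with a
     * pr_debug() instead of failing the load. */
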
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index af675c8cf05d6..526779c094ad5 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -5,6 +5,7 @@
+
+ #include <string.h>
+ #include <stdlib.h>
++#include <inttypes.h>
+ #include <sys/mman.h>
+
+ #include <arch/elf.h>
+@@ -546,12 +547,12 @@ static int add_dead_ends(struct objtool_file *file)
+ else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
+ insn = find_last_insn(file, reloc->sym->sec);
+ if (!insn) {
+- WARN("can't find unreachable insn at %s+0x%lx",
++ WARN("can't find unreachable insn at %s+0x%" PRIx64,
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
+ } else {
+- WARN("can't find unreachable insn at %s+0x%lx",
++ WARN("can't find unreachable insn at %s+0x%" PRIx64,
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
+@@ -581,12 +582,12 @@ reachable:
+ else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
+ insn = find_last_insn(file, reloc->sym->sec);
+ if (!insn) {
+- WARN("can't find reachable insn at %s+0x%lx",
++ WARN("can't find reachable insn at %s+0x%" PRIx64,
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
+ } else {
+- WARN("can't find reachable insn at %s+0x%lx",
++ WARN("can't find reachable insn at %s+0x%" PRIx64,
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 2a7061e6f465a..c164512690c57 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -314,6 +314,9 @@ static void elf_add_symbol(struct elf *elf, struct symbol *sym)
+ struct list_head *entry;
+ struct rb_node *pnode;
+
++ INIT_LIST_HEAD(&sym->pv_target);
++ sym->alias = sym;
++
+ sym->type = GELF_ST_TYPE(sym->sym.st_info);
+ sym->bind = GELF_ST_BIND(sym->sym.st_info);
+
+@@ -375,8 +378,6 @@ static int read_symbols(struct elf *elf)
+ return -1;
+ }
+ memset(sym, 0, sizeof(*sym));
+- INIT_LIST_HEAD(&sym->pv_target);
+- sym->alias = sym;
+
+ sym->idx = i;
+
+@@ -486,7 +487,7 @@ static struct section *elf_create_reloc_section(struct elf *elf,
+ int reltype);
+
+ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+- unsigned int type, struct symbol *sym, long addend)
++ unsigned int type, struct symbol *sym, s64 addend)
+ {
+ struct reloc *reloc;
+
+@@ -540,24 +541,21 @@ static void elf_dirty_reloc_sym(struct elf *elf, struct symbol *sym)
+ }
+
+ /*
+- * Move the first global symbol, as per sh_info, into a new, higher symbol
+- * index. This fees up the shndx for a new local symbol.
++ * The libelf API is terrible; gelf_update_sym*() takes a data block relative
++ * index value, *NOT* the symbol index. As such, iterate the data blocks and
++ * adjust index until it fits.
++ *
++ * If no data block is found, allow adding a new data block provided the index
++ * is only one past the end.
+ */
+-static int elf_move_global_symbol(struct elf *elf, struct section *symtab,
+- struct section *symtab_shndx)
++static int elf_update_symbol(struct elf *elf, struct section *symtab,
++ struct section *symtab_shndx, struct symbol *sym)
+ {
+- Elf_Data *data, *shndx_data = NULL;
+- Elf32_Word first_non_local;
+- struct symbol *sym;
+- Elf_Scn *s;
+-
+- first_non_local = symtab->sh.sh_info;
+-
+- sym = find_symbol_by_index(elf, first_non_local);
+- if (!sym) {
+- WARN("no non-local symbols !?");
+- return first_non_local;
+- }
++ Elf32_Word shndx = sym->sec ? sym->sec->idx : SHN_UNDEF;
++ Elf_Data *symtab_data = NULL, *shndx_data = NULL;
++ Elf64_Xword entsize = symtab->sh.sh_entsize;
++ int max_idx, idx = sym->idx;
++ Elf_Scn *s, *t = NULL;
+
+ s = elf_getscn(elf->elf, symtab->idx);
+ if (!s) {
+@@ -565,79 +563,124 @@ static int elf_move_global_symbol(struct elf *elf, struct section *symtab,
+ return -1;
+ }
+
+- data = elf_newdata(s);
+- if (!data) {
+- WARN_ELF("elf_newdata");
+- return -1;
++ if (symtab_shndx) {
++ t = elf_getscn(elf->elf, symtab_shndx->idx);
++ if (!t) {
++ WARN_ELF("elf_getscn");
++ return -1;
++ }
+ }
+
+- data->d_buf = &sym->sym;
+- data->d_size = sizeof(sym->sym);
+- data->d_align = 1;
+- data->d_type = ELF_T_SYM;
++ for (;;) {
++ /* get next data descriptor for the relevant sections */
++ symtab_data = elf_getdata(s, symtab_data);
++ if (t)
++ shndx_data = elf_getdata(t, shndx_data);
+
+- sym->idx = symtab->sh.sh_size / sizeof(sym->sym);
+- elf_dirty_reloc_sym(elf, sym);
++ /* end-of-list */
++ if (!symtab_data) {
++ void *buf;
+
+- symtab->sh.sh_info += 1;
+- symtab->sh.sh_size += data->d_size;
+- symtab->changed = true;
++ if (idx) {
++ /* we don't do holes in symbol tables */
++ WARN("index out of range");
++ return -1;
++ }
+
+- if (symtab_shndx) {
+- s = elf_getscn(elf->elf, symtab_shndx->idx);
+- if (!s) {
+- WARN_ELF("elf_getscn");
++ /* if @idx == 0, it's the next contiguous entry, create it */
++ symtab_data = elf_newdata(s);
++ if (t)
++ shndx_data = elf_newdata(t);
++
++ buf = calloc(1, entsize);
++ if (!buf) {
++ WARN("malloc");
++ return -1;
++ }
++
++ symtab_data->d_buf = buf;
++ symtab_data->d_size = entsize;
++ symtab_data->d_align = 1;
++ symtab_data->d_type = ELF_T_SYM;
++
++ symtab->sh.sh_size += entsize;
++ symtab->changed = true;
++
++ if (t) {
++ shndx_data->d_buf = &sym->sec->idx;
++ shndx_data->d_size = sizeof(Elf32_Word);
++ shndx_data->d_align = sizeof(Elf32_Word);
++ shndx_data->d_type = ELF_T_WORD;
++
++ symtab_shndx->sh.sh_size += sizeof(Elf32_Word);
++ symtab_shndx->changed = true;
++ }
++
++ break;
++ }
++
++ /* empty blocks should not happen */
++ if (!symtab_data->d_size) {
++ WARN("zero size data");
+ return -1;
+ }
+
+- shndx_data = elf_newdata(s);
++ /* is this the right block? */
++ max_idx = symtab_data->d_size / entsize;
++ if (idx < max_idx)
++ break;
++
++ /* adjust index and try again */
++ idx -= max_idx;
++ }
++
++ /* something went side-ways */
++ if (idx < 0) {
++ WARN("negative index");
++ return -1;
++ }
++
++ /* setup extended section index magic and write the symbol */
++ if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
++ sym->sym.st_shndx = shndx;
++ if (!shndx_data)
++ shndx = 0;
++ } else {
++ sym->sym.st_shndx = SHN_XINDEX;
+ if (!shndx_data) {
+- WARN_ELF("elf_newshndx_data");
++ WARN("no .symtab_shndx");
+ return -1;
+ }
++ }
+
+- shndx_data->d_buf = &sym->sec->idx;
+- shndx_data->d_size = sizeof(Elf32_Word);
+- shndx_data->d_align = 4;
+- shndx_data->d_type = ELF_T_WORD;
+-
+- symtab_shndx->sh.sh_size += 4;
+- symtab_shndx->changed = true;
++ if (!gelf_update_symshndx(symtab_data, shndx_data, idx, &sym->sym, shndx)) {
++ WARN_ELF("gelf_update_symshndx");
++ return -1;
+ }
+
+- return first_non_local;
++ return 0;
+ }
+
+ static struct symbol *
+ elf_create_section_symbol(struct elf *elf, struct section *sec)
+ {
+ struct section *symtab, *symtab_shndx;
+- Elf_Data *shndx_data = NULL;
+- struct symbol *sym;
+- Elf32_Word shndx;
++ Elf32_Word first_non_local, new_idx;
++ struct symbol *sym, *old;
+
+ symtab = find_section_by_name(elf, ".symtab");
+ if (symtab) {
+ symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
+- if (symtab_shndx)
+- shndx_data = symtab_shndx->data;
+ } else {
+ WARN("no .symtab");
+ return NULL;
+ }
+
+- sym = malloc(sizeof(*sym));
++ sym = calloc(1, sizeof(*sym));
+ if (!sym) {
+ perror("malloc");
+ return NULL;
+ }
+- memset(sym, 0, sizeof(*sym));
+-
+- sym->idx = elf_move_global_symbol(elf, symtab, symtab_shndx);
+- if (sym->idx < 0) {
+- WARN("elf_move_global_symbol");
+- return NULL;
+- }
+
+ sym->name = sec->name;
+ sym->sec = sec;
+@@ -647,24 +690,41 @@ elf_create_section_symbol(struct elf *elf, struct section *sec)
+ // st_other 0
+ // st_value 0
+ // st_size 0
+- shndx = sec->idx;
+- if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
+- sym->sym.st_shndx = shndx;
+- if (!shndx_data)
+- shndx = 0;
+- } else {
+- sym->sym.st_shndx = SHN_XINDEX;
+- if (!shndx_data) {
+- WARN("no .symtab_shndx");
++
++ /*
++ * Move the first global symbol, as per sh_info, into a new, higher
++	 * symbol index. This frees up a spot for a new local symbol.
++ */
++ first_non_local = symtab->sh.sh_info;
++ new_idx = symtab->sh.sh_size / symtab->sh.sh_entsize;
++ old = find_symbol_by_index(elf, first_non_local);
++ if (old) {
++ old->idx = new_idx;
++
++ hlist_del(&old->hash);
++ elf_hash_add(symbol, &old->hash, old->idx);
++
++ elf_dirty_reloc_sym(elf, old);
++
++ if (elf_update_symbol(elf, symtab, symtab_shndx, old)) {
++ WARN("elf_update_symbol move");
+ return NULL;
+ }
++
++ new_idx = first_non_local;
+ }
+
+- if (!gelf_update_symshndx(symtab->data, shndx_data, sym->idx, &sym->sym, shndx)) {
+- WARN_ELF("gelf_update_symshndx");
++ sym->idx = new_idx;
++ if (elf_update_symbol(elf, symtab, symtab_shndx, sym)) {
++ WARN("elf_update_symbol");
+ return NULL;
+ }
+
++ /*
++ * Either way, we added a LOCAL symbol.
++ */
++ symtab->sh.sh_info += 1;
++
+ elf_add_symbol(elf, sym);
+
+ return sym;
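
The loop at the heart of the new elf_update_symbol() exists because gelf_update_sym*() indexes into a single Elf_Data block, not into the whole symbol table. The walk it performs, reduced to a hypothetical helper:

    /* Map a table-wide symbol index to the Elf_Data block holding it
     * plus a block-relative index; mirrors elf_update_symbol() above. */
    static Elf_Data *find_sym_block(Elf_Scn *scn, size_t entsize, int *idx)
    {
            Elf_Data *data = NULL;

            while ((data = elf_getdata(scn, data)) != NULL) {
                    int max_idx = data->d_size / entsize;

                    if (*idx < max_idx)
                            return data;    /* gelf_update_sym*() takes this idx */
                    *idx -= max_idx;        /* not in this block; skip past it */
            }
            return NULL;    /* one past the end: caller may elf_newdata() */
    }
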
+diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
+index 18a26e3c13290..8668d18d40649 100644
+--- a/tools/objtool/include/objtool/elf.h
++++ b/tools/objtool/include/objtool/elf.h
+@@ -73,7 +73,7 @@ struct reloc {
+ struct symbol *sym;
+ unsigned long offset;
+ unsigned int type;
+- long addend;
++ s64 addend;
+ int idx;
+ bool jump_table_start;
+ };
+@@ -135,7 +135,7 @@ struct elf *elf_open_read(const char *name, int flags);
+ struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr);
+
+ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+- unsigned int type, struct symbol *sym, long addend);
++ unsigned int type, struct symbol *sym, s64 addend);
+ int elf_add_reloc_to_insn(struct elf *elf, struct section *sec,
+ unsigned long offset, unsigned int type,
+ struct section *insn_sec, unsigned long insn_off);
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 1bd64e7404b9f..c38423807d010 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -239,18 +239,33 @@ ifdef PARSER_DEBUG
+ endif
+
+ # Try different combinations to accommodate systems that only have
+-# python[2][-config] in weird combinations but always preferring
+-# python2 and python2-config as per pep-0394. If python2 or python
+-# aren't found, then python3 is used.
+-PYTHON_AUTO := python
+-PYTHON_AUTO := $(if $(call get-executable,python3),python3,$(PYTHON_AUTO))
+-PYTHON_AUTO := $(if $(call get-executable,python),python,$(PYTHON_AUTO))
+-PYTHON_AUTO := $(if $(call get-executable,python2),python2,$(PYTHON_AUTO))
+-override PYTHON := $(call get-executable-or-default,PYTHON,$(PYTHON_AUTO))
+-PYTHON_AUTO_CONFIG := \
+- $(if $(call get-executable,$(PYTHON)-config),$(PYTHON)-config,python-config)
+-override PYTHON_CONFIG := \
+- $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO_CONFIG))
++# python[2][3]-config in weird combinations in the following order of
++# priority from lowest to highest:
++# * python3-config
++# * python-config
++# * python2-config as per pep-0394.
++# * $(PYTHON)-config (If PYTHON is user supplied but PYTHON_CONFIG isn't)
++#
++PYTHON_AUTO := python-config
++PYTHON_AUTO := $(if $(call get-executable,python3-config),python3-config,$(PYTHON_AUTO))
++PYTHON_AUTO := $(if $(call get-executable,python-config),python-config,$(PYTHON_AUTO))
++PYTHON_AUTO := $(if $(call get-executable,python2-config),python2-config,$(PYTHON_AUTO))
++
++# If PYTHON is defined but PYTHON_CONFIG isn't, then take $(PYTHON)-config as if it was the user
++# supplied value for PYTHON_CONFIG. Because it's "user supplied", error out if it doesn't exist.
++ifdef PYTHON
++ ifndef PYTHON_CONFIG
++ PYTHON_CONFIG_AUTO := $(call get-executable,$(PYTHON)-config)
++ PYTHON_CONFIG := $(if $(PYTHON_CONFIG_AUTO),$(PYTHON_CONFIG_AUTO),\
++ $(call $(error $(PYTHON)-config not found)))
++ endif
++endif
++
++# Select either auto detected python and python-config or use user supplied values if they are
++# defined. get-executable-or-default fails with an error if the first argument is supplied but
++# doesn't exist.
++override PYTHON_CONFIG := $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO))
++override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_AUTO)))
+
+ grep-libs = $(filter -l%,$(1))
+ strip-libs = $(filter-out -l%,$(1))
+diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
+index cfc208d71f00a..75564a7df15be 100644
+--- a/tools/perf/arch/x86/util/evlist.c
++++ b/tools/perf/arch/x86/util/evlist.c
+@@ -36,7 +36,7 @@ struct evsel *arch_evlist__leader(struct list_head *list)
+ if (slots == first)
+ return first;
+ }
+- if (!strncasecmp(evsel->name, "topdown", 7))
++ if (strcasestr(evsel->name, "topdown"))
+ has_topdown = true;
+ if (slots && has_topdown)
+ return slots;
+diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
+index ac2899a25b7a3..0c9e56ab07b5b 100644
+--- a/tools/perf/arch/x86/util/evsel.c
++++ b/tools/perf/arch/x86/util/evsel.c
+@@ -3,6 +3,7 @@
+ #include <stdlib.h>
+ #include "util/evsel.h"
+ #include "util/env.h"
++#include "util/pmu.h"
+ #include "linux/string.h"
+
+ void arch_evsel__set_sample_weight(struct evsel *evsel)
+@@ -29,3 +30,14 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
+
+ free(env.cpuid);
+ }
++
++bool arch_evsel__must_be_in_group(const struct evsel *evsel)
++{
++ if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
++ !pmu_have_event("cpu", "slots"))
++ return false;
++
++ return evsel->name &&
++ (strcasestr(evsel->name, "slots") ||
++ strcasestr(evsel->name, "topdown"));
++}
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index 77dd4afacca49..d8ec683b06a5e 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2734,9 +2734,7 @@ static int perf_c2c__report(int argc, const char **argv)
+ "the input file to process"),
+ OPT_INCR('N', "node-info", &c2c.node_info,
+ "show extra node info in report (repeat for more info)"),
+-#ifdef HAVE_SLANG_SUPPORT
+ OPT_BOOLEAN(0, "stdio", &c2c.use_stdio, "Use the stdio interface"),
+-#endif
+ OPT_BOOLEAN(0, "stats", &c2c.stats_only,
+ "Display only statistic tables (implies --stdio)"),
+ OPT_BOOLEAN(0, "full-symbols", &c2c.symbol_full,
+@@ -2766,6 +2764,10 @@ static int perf_c2c__report(int argc, const char **argv)
+ if (argc)
+ usage_with_options(report_c2c_usage, options);
+
++#ifndef HAVE_SLANG_SUPPORT
++ c2c.use_stdio = true;
++#endif
++
+ if (c2c.stats_only)
+ c2c.use_stdio = true;
+
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 60baa3dadc4b6..416a6a4fbcc7f 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -271,11 +271,8 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
+ pr_warning(" %s: %s\n", evsel->name, buf);
+ }
+
+- for_each_group_evsel(pos, leader) {
+- evsel__set_leader(pos, pos);
+- pos->core.nr_members = 0;
+- }
+- evsel->core.leader->nr_members = 0;
++ for_each_group_evsel(pos, leader)
++ evsel__remove_from_group(pos, leader);
+ }
+ }
+
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index 1a57c3f81dd46..d07e24c1a8897 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -611,7 +611,7 @@ static int json_events(const char *fn,
+ } else if (json_streq(map, field, "ExtSel")) {
+ char *code = NULL;
+ addfield(map, &code, "", "", val);
+- eventcode |= strtoul(code, NULL, 0) << 21;
++ eventcode |= strtoul(code, NULL, 0) << 8;
+ free(code);
+ } else if (json_streq(map, field, "EventName")) {
+ addfield(map, &je.name, "", "", val);
+diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h
+index c9de82af5584e..1402d9657ef27 100644
+--- a/tools/perf/util/data.h
++++ b/tools/perf/util/data.h
+@@ -4,6 +4,7 @@
+
+ #include <stdio.h>
+ #include <stdbool.h>
++#include <linux/types.h>
+
+ enum perf_data_mode {
+ PERF_DATA_MODE_WRITE,
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index 41a66a48cbdff..ec745209c8b4b 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -1790,8 +1790,13 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
+ if (evsel__has_leader(c2, leader)) {
+ if (is_open && close)
+ perf_evsel__close(&c2->core);
+- evsel__set_leader(c2, c2);
+- c2->core.nr_members = 0;
++ /*
++ * We want to close all members of the group and reopen
++ * them. Some events, like Intel topdown, require being
++ * in a group and so keep these in the group.
++ */
++ evsel__remove_from_group(c2, leader);
++
+ /*
+ * Set this for all former members of the group
+ * to indicate they get reopened.
+@@ -1799,6 +1804,9 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
+ c2->reset_group = true;
+ }
+ }
++ /* Reset the leader count if all entries were removed. */
++ if (leader->core.nr_members == 1)
++ leader->core.nr_members = 0;
+ return leader;
+ }
+
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 22d3267ce2941..63284e98c4191 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -3048,3 +3048,22 @@ int evsel__source_count(const struct evsel *evsel)
+ }
+ return count;
+ }
++
++bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unused)
++{
++ return false;
++}
++
++/*
++ * Remove an event from a given group (leader).
++ * Some events, e.g., perf metrics Topdown events,
++ * must always be grouped; the removal is skipped for them.
++ */
++void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
++{
++ if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
++ evsel__set_leader(evsel, evsel);
++ evsel->core.nr_members = 0;
++ leader->core.nr_members--;
++ }
++}
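
Both call sites touched above (builtin-stat.c and evlist.c) now share the same idiom for dissolving a weak group while leaving must-group events, such as x86 topdown/slots, attached to the leader; condensed:

    struct evsel *pos;

    for_each_group_evsel(pos, leader)
            evsel__remove_from_group(pos, leader);

    /* Only the leader itself remains: clear the group entirely. */
    if (leader->core.nr_members == 1)
            leader->core.nr_members = 0;
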
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index 041b42d33bf5a..47f65f8e7c749 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -483,6 +483,9 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
+ bool evsel__is_leader(struct evsel *evsel);
+ void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
+ int evsel__source_count(const struct evsel *evsel);
++void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
++
++bool arch_evsel__must_be_in_group(const struct evsel *evsel);
+
+ /*
+ * Macro to swap the bit-field position and size.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 47d3ba895d6d9..4f176bbf29f42 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -4376,6 +4376,7 @@ static double rapl_dram_energy_units_probe(int model, double rapl_energy_units)
+ case INTEL_FAM6_BROADWELL_X: /* BDX */
+ case INTEL_FAM6_SKYLAKE_X: /* SKX */
+ case INTEL_FAM6_XEON_PHI_KNL: /* KNL */
++ case INTEL_FAM6_ICELAKE_X: /* ICX */
+ return (rapl_dram_energy_units = 15.3 / 1000000);
+ default:
+ return (rapl_energy_units);
+diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
+index 05ff334761dd0..2f93ed1d7f990 100644
+--- a/tools/testing/kunit/kunit_parser.py
++++ b/tools/testing/kunit/kunit_parser.py
+@@ -789,8 +789,11 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+
+ # Check for there being no tests
+ if parent_test and len(subtests) == 0:
+- test.status = TestStatus.NO_TESTS
+- test.add_error('0 tests run!')
++ # Don't override a bad status if this test had one reported.
++ # Assumption: no subtests means CRASHED is from Test.__init__()
++ if test.status in (TestStatus.TEST_CRASHED, TestStatus.SUCCESS):
++ test.status = TestStatus.NO_TESTS
++ test.add_error('0 tests run!')
+
+ # Add statuses to TestCounts attribute in Test object
+ bubble_up_test_results(test)
+diff --git a/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
+index dd873c9811086..4f81876ee6f18 100644
+--- a/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
++++ b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
+@@ -3,5 +3,5 @@ TAP version 14
+ # Subtest: suite
+ 1..1
+ # Subtest: case
+- ok 1 - case # SKIP
++ ok 1 - case
+ ok 1 - suite
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index d08fe4cfe8115..ffe453760a12e 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -9,6 +9,7 @@ TARGETS += clone3
+ TARGETS += core
+ TARGETS += cpufreq
+ TARGETS += cpu-hotplug
++TARGETS += damon
+ TARGETS += drivers/dma-buf
+ TARGETS += efivarfs
+ TARGETS += exec
+diff --git a/tools/testing/selftests/arm64/bti/Makefile b/tools/testing/selftests/arm64/bti/Makefile
+index 73e013c082a65..dafa1c2aa5c47 100644
+--- a/tools/testing/selftests/arm64/bti/Makefile
++++ b/tools/testing/selftests/arm64/bti/Makefile
+@@ -39,7 +39,7 @@ BTI_OBJS = \
+ teststubs-bti.o \
+ trampoline-bti.o
+ gen/btitest: $(BTI_OBJS)
+- $(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -o $@ $^
++ $(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -static -o $@ $^
+
+ NOBTI_OBJS = \
+ test-nobti.o \
+@@ -50,7 +50,7 @@ NOBTI_OBJS = \
+ teststubs-nobti.o \
+ trampoline-nobti.o
+ gen/nobtitest: $(NOBTI_OBJS)
+- $(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -o $@ $^
++ $(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -static -o $@ $^
+
+ # Including KSFT lib.mk here will also mangle the TEST_GEN_PROGS list
+ # to account for any OUTPUT target-dirs optionally provided by
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 42ffc24e9e710..eb2da63fa39b2 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -75,7 +75,7 @@ TEST_PROGS := test_kmod.sh \
+ test_xsk.sh
+
+ TEST_PROGS_EXTENDED := with_addr.sh \
+- with_tunnels.sh \
++ with_tunnels.sh ima_setup.sh \
+ test_xdp_vlan.sh test_bpftool.py
+
+ # Compile but not part of 'make run_tests'
+@@ -407,11 +407,11 @@ $(TRUNNER_BPF_SKELS): %.skel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+
+ $(TRUNNER_BPF_LSKELS): %.lskel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+ $$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+- $(Q)$$(BPFTOOL) gen object $$(<:.o=.linked1.o) $$<
+- $(Q)$$(BPFTOOL) gen object $$(<:.o=.linked2.o) $$(<:.o=.linked1.o)
+- $(Q)$$(BPFTOOL) gen object $$(<:.o=.linked3.o) $$(<:.o=.linked2.o)
+- $(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
+- $(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=_lskel)) > $$@
++ $(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked1.o) $$<
++ $(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked2.o) $$(<:.o=.llinked1.o)
++ $(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked3.o) $$(<:.o=.llinked2.o)
++ $(Q)diff $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
++ $(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.llinked3.o) name $$(notdir $$(<:.o=_lskel)) > $$@
+
+ $(TRUNNER_BPF_SKELS_LINKED): $(TRUNNER_BPF_OBJS) $(BPFTOOL) | $(TRUNNER_OUTPUT)
+ $$(call msg,LINK-BPF,$(TRUNNER_BINARY),$$(@:.skel.h=.o))
+diff --git a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
+index 9c795ee52b7bf..b0acbda6dbf5e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
++++ b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
+@@ -1,126 +1,94 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ #define _GNU_SOURCE
+-#include <sched.h>
+-#include <sys/prctl.h>
+ #include <test_progs.h>
+
+ #define MAX_TRAMP_PROGS 38
+
+ struct inst {
+ struct bpf_object *obj;
+- struct bpf_link *link_fentry;
+- struct bpf_link *link_fexit;
++ struct bpf_link *link;
+ };
+
+-static int test_task_rename(void)
+-{
+- int fd, duration = 0, err;
+- char buf[] = "test_overhead";
+-
+- fd = open("/proc/self/comm", O_WRONLY|O_TRUNC);
+- if (CHECK(fd < 0, "open /proc", "err %d", errno))
+- return -1;
+- err = write(fd, buf, sizeof(buf));
+- if (err < 0) {
+- CHECK(err < 0, "task rename", "err %d", errno);
+- close(fd);
+- return -1;
+- }
+- close(fd);
+- return 0;
+-}
+-
+-static struct bpf_link *load(struct bpf_object *obj, const char *name)
++static struct bpf_program *load_prog(char *file, char *name, struct inst *inst)
+ {
++ struct bpf_object *obj;
+ struct bpf_program *prog;
+- int duration = 0;
++ int err;
++
++ obj = bpf_object__open_file(file, NULL);
++ if (!ASSERT_OK_PTR(obj, "obj_open_file"))
++ return NULL;
++
++ inst->obj = obj;
++
++ err = bpf_object__load(obj);
++ if (!ASSERT_OK(err, "obj_load"))
++ return NULL;
+
+ prog = bpf_object__find_program_by_name(obj, name);
+- if (CHECK(!prog, "find_probe", "prog '%s' not found\n", name))
+- return ERR_PTR(-EINVAL);
+- return bpf_program__attach_trace(prog);
++ if (!ASSERT_OK_PTR(prog, "obj_find_prog"))
++ return NULL;
++
++ return prog;
+ }
+
+ /* TODO: use different target function to run in concurrent mode */
+ void serial_test_trampoline_count(void)
+ {
+- const char *fentry_name = "prog1";
+- const char *fexit_name = "prog2";
+- const char *object = "test_trampoline_count.o";
+- struct inst inst[MAX_TRAMP_PROGS] = {};
+- int err, i = 0, duration = 0;
+- struct bpf_object *obj;
++ char *file = "test_trampoline_count.o";
++ char *const progs[] = { "fentry_test", "fmod_ret_test", "fexit_test" };
++ struct inst inst[MAX_TRAMP_PROGS + 1] = {};
++ struct bpf_program *prog;
+ struct bpf_link *link;
+- char comm[16] = {};
++ int prog_fd, err, i;
++ LIBBPF_OPTS(bpf_test_run_opts, opts);
+
+ /* attach 'allowed' trampoline programs */
+ for (i = 0; i < MAX_TRAMP_PROGS; i++) {
+- obj = bpf_object__open_file(object, NULL);
+- if (!ASSERT_OK_PTR(obj, "obj_open_file")) {
+- obj = NULL;
++ prog = load_prog(file, progs[i % ARRAY_SIZE(progs)], &inst[i]);
++ if (!prog)
+ goto cleanup;
+- }
+
+- err = bpf_object__load(obj);
+- if (CHECK(err, "obj_load", "err %d\n", err))
++ link = bpf_program__attach(prog);
++ if (!ASSERT_OK_PTR(link, "attach_prog"))
+ goto cleanup;
+- inst[i].obj = obj;
+- obj = NULL;
+-
+- if (rand() % 2) {
+- link = load(inst[i].obj, fentry_name);
+- if (!ASSERT_OK_PTR(link, "attach_prog")) {
+- link = NULL;
+- goto cleanup;
+- }
+- inst[i].link_fentry = link;
+- } else {
+- link = load(inst[i].obj, fexit_name);
+- if (!ASSERT_OK_PTR(link, "attach_prog")) {
+- link = NULL;
+- goto cleanup;
+- }
+- inst[i].link_fexit = link;
+- }
++
++ inst[i].link = link;
+ }
+
+ /* and try 1 extra.. */
+- obj = bpf_object__open_file(object, NULL);
+- if (!ASSERT_OK_PTR(obj, "obj_open_file")) {
+- obj = NULL;
++ prog = load_prog(file, "fmod_ret_test", &inst[i]);
++ if (!prog)
+ goto cleanup;
+- }
+-
+- err = bpf_object__load(obj);
+- if (CHECK(err, "obj_load", "err %d\n", err))
+- goto cleanup_extra;
+
+ /* ..that needs to fail */
+- link = load(obj, fentry_name);
+- err = libbpf_get_error(link);
+- if (!ASSERT_ERR_PTR(link, "cannot attach over the limit")) {
+- bpf_link__destroy(link);
+- goto cleanup_extra;
++ link = bpf_program__attach(prog);
++ if (!ASSERT_ERR_PTR(link, "attach_prog")) {
++ inst[i].link = link;
++ goto cleanup;
+ }
+
+ /* with E2BIG error */
+- ASSERT_EQ(err, -E2BIG, "proper error check");
+- ASSERT_EQ(link, NULL, "ptr_is_null");
++ if (!ASSERT_EQ(libbpf_get_error(link), -E2BIG, "E2BIG"))
++ goto cleanup;
++ if (!ASSERT_EQ(link, NULL, "ptr_is_null"))
++ goto cleanup;
+
+ 	/* and finally execute the probe */
+- if (CHECK_FAIL(prctl(PR_GET_NAME, comm, 0L, 0L, 0L)))
+- goto cleanup_extra;
+- CHECK_FAIL(test_task_rename());
+- CHECK_FAIL(prctl(PR_SET_NAME, comm, 0L, 0L, 0L));
++ prog_fd = bpf_program__fd(prog);
++ if (!ASSERT_GE(prog_fd, 0, "bpf_program__fd"))
++ goto cleanup;
++
++ err = bpf_prog_test_run_opts(prog_fd, &opts);
++ if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
++ goto cleanup;
++
++ ASSERT_EQ(opts.retval & 0xffff, 4, "bpf_modify_return_test.result");
++ ASSERT_EQ(opts.retval >> 16, 1, "bpf_modify_return_test.side_effect");
+
+-cleanup_extra:
+- bpf_object__close(obj);
+ cleanup:
+- if (i >= MAX_TRAMP_PROGS)
+- i = MAX_TRAMP_PROGS - 1;
+ for (; i >= 0; i--) {
+- bpf_link__destroy(inst[i].link_fentry);
+- bpf_link__destroy(inst[i].link_fexit);
++ bpf_link__destroy(inst[i].link);
+ bpf_object__close(inst[i].obj);
+ }
+ }
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+index 1c7105fcae3c4..4ee4748133fec 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+@@ -94,7 +94,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int);
+
+ typedef char * (*fn_ptr_arr1_t[10])(int **);
+
+-typedef char * (* const (* const fn_ptr_arr2_t[5])())(char * (*)(int));
++typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
+
+ struct struct_w_typedefs {
+ int_t a;
+diff --git a/tools/testing/selftests/bpf/progs/profiler.inc.h b/tools/testing/selftests/bpf/progs/profiler.inc.h
+index 4896fdf816f73..92331053dba3b 100644
+--- a/tools/testing/selftests/bpf/progs/profiler.inc.h
++++ b/tools/testing/selftests/bpf/progs/profiler.inc.h
+@@ -826,8 +826,9 @@ out:
+
+ SEC("kprobe/vfs_link")
+ int BPF_KPROBE(kprobe__vfs_link,
+- struct dentry* old_dentry, struct inode* dir,
+- struct dentry* new_dentry, struct inode** delegated_inode)
++ struct dentry* old_dentry, struct user_namespace *mnt_userns,
++ struct inode* dir, struct dentry* new_dentry,
++ struct inode** delegated_inode)
+ {
+ struct bpf_func_stats_ctx stats_ctx;
+ bpf_stats_enter(&stats_ctx, profiler_bpf_vfs_link);
+diff --git a/tools/testing/selftests/bpf/progs/test_trampoline_count.c b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
+index f030e469d05b5..7765720da7d58 100644
+--- a/tools/testing/selftests/bpf/progs/test_trampoline_count.c
++++ b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
+@@ -1,20 +1,22 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <stdbool.h>
+-#include <stddef.h>
+ #include <linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
+
+-struct task_struct;
++SEC("fentry/bpf_modify_return_test")
++int BPF_PROG(fentry_test, int a, int *b)
++{
++ return 0;
++}
+
+-SEC("fentry/__set_task_comm")
+-int BPF_PROG(prog1, struct task_struct *tsk, const char *buf, bool exec)
++SEC("fmod_ret/bpf_modify_return_test")
++int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
+ {
+ return 0;
+ }
+
+-SEC("fexit/__set_task_comm")
+-int BPF_PROG(prog2, struct task_struct *tsk, const char *buf, bool exec)
++SEC("fexit/bpf_modify_return_test")
++int BPF_PROG(fexit_test, int a, int *b, int ret)
+ {
+ return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/test_bpftool_synctypes.py b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
+index 6bf21e47882af..c0e7acd698edf 100755
+--- a/tools/testing/selftests/bpf/test_bpftool_synctypes.py
++++ b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
+@@ -180,7 +180,7 @@ class FileExtractor(object):
+ @enum_name: name of the enum to parse
+ """
+ start_marker = re.compile(f'enum {enum_name} {{\n')
+- pattern = re.compile('^\s*(BPF_\w+),?$')
++ pattern = re.compile('^\s*(BPF_\w+),?(\s+/\*.*\*/)?$')
+ end_marker = re.compile('^};')
+ parser = BlockParser(self.reader)
+ parser.search_block(start_marker)
+diff --git a/tools/testing/selftests/cgroup/test_stress.sh b/tools/testing/selftests/cgroup/test_stress.sh
+index 15d9d58963941..3c9c4554d5f6a 100755
+--- a/tools/testing/selftests/cgroup/test_stress.sh
++++ b/tools/testing/selftests/cgroup/test_stress.sh
+@@ -1,4 +1,4 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+
+-./with_stress.sh -s subsys -s fork ./test_core
++./with_stress.sh -s subsys -s fork ${OUTPUT:-.}/test_core
+diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c
+index ca40abe9daa86..35f64832b869c 100644
+--- a/tools/testing/selftests/landlock/base_test.c
++++ b/tools/testing/selftests/landlock/base_test.c
+@@ -18,10 +18,11 @@
+ #include "common.h"
+
+ #ifndef O_PATH
+-#define O_PATH 010000000
++#define O_PATH 010000000
+ #endif
+
+-TEST(inconsistent_attr) {
++TEST(inconsistent_attr)
++{
+ const long page_size = sysconf(_SC_PAGESIZE);
+ char *const buf = malloc(page_size + 1);
+ struct landlock_ruleset_attr *const ruleset_attr = (void *)buf;
+@@ -34,20 +35,26 @@ TEST(inconsistent_attr) {
+ ASSERT_EQ(EINVAL, errno);
+ ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 1, 0));
+ ASSERT_EQ(EINVAL, errno);
++ ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 7, 0));
++ ASSERT_EQ(EINVAL, errno);
+
+ ASSERT_EQ(-1, landlock_create_ruleset(NULL, 1, 0));
+ 	/* The size is less than sizeof(struct landlock_attr_enforce). */
+ ASSERT_EQ(EFAULT, errno);
+
+- ASSERT_EQ(-1, landlock_create_ruleset(NULL,
+- sizeof(struct landlock_ruleset_attr), 0));
++ ASSERT_EQ(-1, landlock_create_ruleset(
++ NULL, sizeof(struct landlock_ruleset_attr), 0));
+ ASSERT_EQ(EFAULT, errno);
+
+ ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, page_size + 1, 0));
+ ASSERT_EQ(E2BIG, errno);
+
+- ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr,
+- sizeof(struct landlock_ruleset_attr), 0));
++ /* Checks minimal valid attribute size. */
++ ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 8, 0));
++ ASSERT_EQ(ENOMSG, errno);
++ ASSERT_EQ(-1, landlock_create_ruleset(
++ ruleset_attr,
++ sizeof(struct landlock_ruleset_attr), 0));
+ ASSERT_EQ(ENOMSG, errno);
+ ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, page_size, 0));
+ ASSERT_EQ(ENOMSG, errno);
+@@ -63,38 +70,44 @@ TEST(inconsistent_attr) {
+ free(buf);
+ }
+
+-TEST(abi_version) {
++TEST(abi_version)
++{
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
+ };
+ ASSERT_EQ(1, landlock_create_ruleset(NULL, 0,
+- LANDLOCK_CREATE_RULESET_VERSION));
++ LANDLOCK_CREATE_RULESET_VERSION));
+
+ ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0,
+- LANDLOCK_CREATE_RULESET_VERSION));
++ LANDLOCK_CREATE_RULESET_VERSION));
+ ASSERT_EQ(EINVAL, errno);
+
+ ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
+- LANDLOCK_CREATE_RULESET_VERSION));
++ LANDLOCK_CREATE_RULESET_VERSION));
+ ASSERT_EQ(EINVAL, errno);
+
+- ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr),
+- LANDLOCK_CREATE_RULESET_VERSION));
++ ASSERT_EQ(-1,
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
++ LANDLOCK_CREATE_RULESET_VERSION));
+ ASSERT_EQ(EINVAL, errno);
+
+ ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0,
+- LANDLOCK_CREATE_RULESET_VERSION | 1 << 31));
++ LANDLOCK_CREATE_RULESET_VERSION |
++ 1 << 31));
+ ASSERT_EQ(EINVAL, errno);
+ }
+
+-TEST(inval_create_ruleset_flags) {
++/* Tests ordering of syscall argument checks. */
++TEST(create_ruleset_checks_ordering)
++{
+ const int last_flag = LANDLOCK_CREATE_RULESET_VERSION;
+ const int invalid_flag = last_flag << 1;
++ int ruleset_fd;
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
+ };
+
++ /* Checks priority for invalid flags. */
+ ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0, invalid_flag));
+ ASSERT_EQ(EINVAL, errno);
+
+@@ -102,44 +115,121 @@ TEST(inval_create_ruleset_flags) {
+ ASSERT_EQ(EINVAL, errno);
+
+ ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
+- invalid_flag));
++ invalid_flag));
++ ASSERT_EQ(EINVAL, errno);
++
++ ASSERT_EQ(-1,
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
++ invalid_flag));
+ ASSERT_EQ(EINVAL, errno);
+
+- ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), invalid_flag));
++ /* Checks too big ruleset_attr size. */
++ ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, -1, 0));
++ ASSERT_EQ(E2BIG, errno);
++
++ /* Checks too small ruleset_attr size. */
++ ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0, 0));
++ ASSERT_EQ(EINVAL, errno);
++ ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 1, 0));
+ ASSERT_EQ(EINVAL, errno);
++
++ /* Checks valid call. */
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++ ASSERT_LE(0, ruleset_fd);
++ ASSERT_EQ(0, close(ruleset_fd));
+ }
+
+-TEST(empty_path_beneath_attr) {
++/* Tests ordering of syscall argument checks. */
++TEST(add_rule_checks_ordering)
++{
+ const struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
+ };
+- const int ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ struct landlock_path_beneath_attr path_beneath_attr = {
++ .allowed_access = LANDLOCK_ACCESS_FS_EXECUTE,
++ .parent_fd = -1,
++ };
++ const int ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+
+ ASSERT_LE(0, ruleset_fd);
+
+- /* Similar to struct landlock_path_beneath_attr.parent_fd = 0 */
++ /* Checks invalid flags. */
++ ASSERT_EQ(-1, landlock_add_rule(-1, 0, NULL, 1));
++ ASSERT_EQ(EINVAL, errno);
++
++ /* Checks invalid ruleset FD. */
++ ASSERT_EQ(-1, landlock_add_rule(-1, 0, NULL, 0));
++ ASSERT_EQ(EBADF, errno);
++
++ /* Checks invalid rule type. */
++ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, 0, NULL, 0));
++ ASSERT_EQ(EINVAL, errno);
++
++ /* Checks invalid rule attr. */
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- NULL, 0));
++ NULL, 0));
+ ASSERT_EQ(EFAULT, errno);
++
++ /* Checks invalid path_beneath.parent_fd. */
++ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
++ &path_beneath_attr, 0));
++ ASSERT_EQ(EBADF, errno);
++
++ /* Checks valid call. */
++ path_beneath_attr.parent_fd =
++ open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
++ ASSERT_LE(0, path_beneath_attr.parent_fd);
++ ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
++ &path_beneath_attr, 0));
++ ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
+ ASSERT_EQ(0, close(ruleset_fd));
+ }
+
+-TEST(inval_fd_enforce) {
++/* Tests ordering of syscall argument and permission checks. */
++TEST(restrict_self_checks_ordering)
++{
++ const struct landlock_ruleset_attr ruleset_attr = {
++ .handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
++ };
++ struct landlock_path_beneath_attr path_beneath_attr = {
++ .allowed_access = LANDLOCK_ACCESS_FS_EXECUTE,
++ .parent_fd = -1,
++ };
++ const int ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++
++ ASSERT_LE(0, ruleset_fd);
++ path_beneath_attr.parent_fd =
++ open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
++ ASSERT_LE(0, path_beneath_attr.parent_fd);
++ ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
++ &path_beneath_attr, 0));
++ ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
++
++ /* Checks unprivileged enforcement without no_new_privs. */
++ drop_caps(_metadata);
++ ASSERT_EQ(-1, landlock_restrict_self(-1, -1));
++ ASSERT_EQ(EPERM, errno);
++ ASSERT_EQ(-1, landlock_restrict_self(-1, 0));
++ ASSERT_EQ(EPERM, errno);
++ ASSERT_EQ(-1, landlock_restrict_self(ruleset_fd, 0));
++ ASSERT_EQ(EPERM, errno);
++
+ ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+
++ /* Checks invalid flags. */
++ ASSERT_EQ(-1, landlock_restrict_self(-1, -1));
++ ASSERT_EQ(EINVAL, errno);
++
++ /* Checks invalid ruleset FD. */
+ ASSERT_EQ(-1, landlock_restrict_self(-1, 0));
+ ASSERT_EQ(EBADF, errno);
+-}
+-
+-TEST(unpriv_enforce_without_no_new_privs) {
+- int err;
+
+- drop_caps(_metadata);
+- err = landlock_restrict_self(-1, 0);
+- ASSERT_EQ(EPERM, errno);
+- ASSERT_EQ(err, -1);
++ /* Checks valid call. */
++ ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0));
++ ASSERT_EQ(0, close(ruleset_fd));
+ }
+
+ TEST(ruleset_fd_io)
+@@ -151,8 +241,8 @@ TEST(ruleset_fd_io)
+ char buf;
+
+ drop_caps(_metadata);
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd);
+
+ ASSERT_EQ(-1, write(ruleset_fd, ".", 1));
+@@ -197,14 +287,15 @@ TEST(ruleset_fd_transfer)
+ drop_caps(_metadata);
+
+ /* Creates a test ruleset with a simple rule. */
+- ruleset_fd_tx = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ ruleset_fd_tx =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd_tx);
+- path_beneath_attr.parent_fd = open("/tmp", O_PATH | O_NOFOLLOW |
+- O_DIRECTORY | O_CLOEXEC);
++ path_beneath_attr.parent_fd =
++ open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
+ ASSERT_LE(0, path_beneath_attr.parent_fd);
+- ASSERT_EQ(0, landlock_add_rule(ruleset_fd_tx, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath_attr, 0));
++ ASSERT_EQ(0,
++ landlock_add_rule(ruleset_fd_tx, LANDLOCK_RULE_PATH_BENEATH,
++ &path_beneath_attr, 0));
+ ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
+
+ cmsg = CMSG_FIRSTHDR(&msg);
+@@ -215,7 +306,8 @@ TEST(ruleset_fd_transfer)
+ memcpy(CMSG_DATA(cmsg), &ruleset_fd_tx, sizeof(ruleset_fd_tx));
+
+	/* Sends the ruleset FD over a socketpair and then closes it. */
+- ASSERT_EQ(0, socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, socket_fds));
++ ASSERT_EQ(0, socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0,
++ socket_fds));
+ ASSERT_EQ(sizeof(data_tx), sendmsg(socket_fds[0], &msg, 0));
+ ASSERT_EQ(0, close(socket_fds[0]));
+ ASSERT_EQ(0, close(ruleset_fd_tx));
+@@ -226,7 +318,8 @@ TEST(ruleset_fd_transfer)
+ int ruleset_fd_rx;
+
+ *(char *)msg.msg_iov->iov_base = '\0';
+- ASSERT_EQ(sizeof(data_tx), recvmsg(socket_fds[1], &msg, MSG_CMSG_CLOEXEC));
++ ASSERT_EQ(sizeof(data_tx),
++ recvmsg(socket_fds[1], &msg, MSG_CMSG_CLOEXEC));
+ ASSERT_EQ('.', *(char *)msg.msg_iov->iov_base);
+ ASSERT_EQ(0, close(socket_fds[1]));
+ cmsg = CMSG_FIRSTHDR(&msg);
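
The reworked base tests above all revolve around the same three-syscall sequence, and the new *_checks_ordering tests pin down which argument check fires first. For reference, the nominal "valid call" path they validate looks like the following standalone sketch (not part of the patch; /tmp mirrors the tests' choice of directory):

#include <fcntl.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/landlock.h>

int main(void)
{
	struct landlock_ruleset_attr ruleset_attr = {
		.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
	};
	struct landlock_path_beneath_attr path_beneath = {
		.allowed_access = LANDLOCK_ACCESS_FS_EXECUTE,
	};
	int ruleset_fd;

	ruleset_fd = syscall(__NR_landlock_create_ruleset, &ruleset_attr,
			     sizeof(ruleset_attr), 0);
	if (ruleset_fd < 0)
		return 1;

	path_beneath.parent_fd = open("/tmp", O_PATH | O_NOFOLLOW |
					      O_DIRECTORY | O_CLOEXEC);
	if (path_beneath.parent_fd < 0)
		return 1;
	if (syscall(__NR_landlock_add_rule, ruleset_fd,
		    LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0))
		return 1;
	close(path_beneath.parent_fd);

	/* Mandatory for unprivileged enforcement (see the EPERM checks). */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return 1;
	if (syscall(__NR_landlock_restrict_self, ruleset_fd, 0))
		return 1;
	close(ruleset_fd);
	return 0;
}
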
+diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
+index 183b7e8e1b957..7ba18eb237838 100644
+--- a/tools/testing/selftests/landlock/common.h
++++ b/tools/testing/selftests/landlock/common.h
+@@ -25,6 +25,7 @@
+ * this to be possible, we must not call abort() but instead exit smoothly
+ * (hence the step print).
+ */
++/* clang-format off */
+ #define TEST_F_FORK(fixture_name, test_name) \
+ static void fixture_name##_##test_name##_child( \
+ struct __test_metadata *_metadata, \
+@@ -71,11 +72,12 @@
+ FIXTURE_DATA(fixture_name) __attribute__((unused)) *self, \
+ const FIXTURE_VARIANT(fixture_name) \
+ __attribute__((unused)) *variant)
++/* clang-format on */
+
+ #ifndef landlock_create_ruleset
+-static inline int landlock_create_ruleset(
+- const struct landlock_ruleset_attr *const attr,
+- const size_t size, const __u32 flags)
++static inline int
++landlock_create_ruleset(const struct landlock_ruleset_attr *const attr,
++ const size_t size, const __u32 flags)
+ {
+ return syscall(__NR_landlock_create_ruleset, attr, size, flags);
+ }
+@@ -83,17 +85,18 @@ static inline int landlock_create_ruleset(
+
+ #ifndef landlock_add_rule
+ static inline int landlock_add_rule(const int ruleset_fd,
+- const enum landlock_rule_type rule_type,
+- const void *const rule_attr, const __u32 flags)
++ const enum landlock_rule_type rule_type,
++ const void *const rule_attr,
++ const __u32 flags)
+ {
+- return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type,
+- rule_attr, flags);
++ return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type, rule_attr,
++ flags);
+ }
+ #endif
+
+ #ifndef landlock_restrict_self
+ static inline int landlock_restrict_self(const int ruleset_fd,
+- const __u32 flags)
++ const __u32 flags)
+ {
+ return syscall(__NR_landlock_restrict_self, ruleset_fd, flags);
+ }
+@@ -111,69 +114,76 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
+ };
+
+ cap_p = cap_get_proc();
+- EXPECT_NE(NULL, cap_p) {
++ EXPECT_NE(NULL, cap_p)
++ {
+ TH_LOG("Failed to cap_get_proc: %s", strerror(errno));
+ }
+- EXPECT_NE(-1, cap_clear(cap_p)) {
++ EXPECT_NE(-1, cap_clear(cap_p))
++ {
+ TH_LOG("Failed to cap_clear: %s", strerror(errno));
+ }
+ if (!drop_all) {
+ EXPECT_NE(-1, cap_set_flag(cap_p, CAP_PERMITTED,
+- ARRAY_SIZE(caps), caps, CAP_SET)) {
++ ARRAY_SIZE(caps), caps, CAP_SET))
++ {
+ TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
+ }
+ }
+- EXPECT_NE(-1, cap_set_proc(cap_p)) {
++ EXPECT_NE(-1, cap_set_proc(cap_p))
++ {
+ TH_LOG("Failed to cap_set_proc: %s", strerror(errno));
+ }
+- EXPECT_NE(-1, cap_free(cap_p)) {
++ EXPECT_NE(-1, cap_free(cap_p))
++ {
+ TH_LOG("Failed to cap_free: %s", strerror(errno));
+ }
+ }
+
+ /* We cannot put such helpers in a library because of kselftest_harness.h . */
+-__attribute__((__unused__))
+-static void disable_caps(struct __test_metadata *const _metadata)
++__attribute__((__unused__)) static void
++disable_caps(struct __test_metadata *const _metadata)
+ {
+ _init_caps(_metadata, false);
+ }
+
+-__attribute__((__unused__))
+-static void drop_caps(struct __test_metadata *const _metadata)
++__attribute__((__unused__)) static void
++drop_caps(struct __test_metadata *const _metadata)
+ {
+ _init_caps(_metadata, true);
+ }
+
+ static void _effective_cap(struct __test_metadata *const _metadata,
+- const cap_value_t caps, const cap_flag_value_t value)
++ const cap_value_t caps, const cap_flag_value_t value)
+ {
+ cap_t cap_p;
+
+ cap_p = cap_get_proc();
+- EXPECT_NE(NULL, cap_p) {
++ EXPECT_NE(NULL, cap_p)
++ {
+ TH_LOG("Failed to cap_get_proc: %s", strerror(errno));
+ }
+- EXPECT_NE(-1, cap_set_flag(cap_p, CAP_EFFECTIVE, 1, &caps, value)) {
++ EXPECT_NE(-1, cap_set_flag(cap_p, CAP_EFFECTIVE, 1, &caps, value))
++ {
+ TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
+ }
+- EXPECT_NE(-1, cap_set_proc(cap_p)) {
++ EXPECT_NE(-1, cap_set_proc(cap_p))
++ {
+ TH_LOG("Failed to cap_set_proc: %s", strerror(errno));
+ }
+- EXPECT_NE(-1, cap_free(cap_p)) {
++ EXPECT_NE(-1, cap_free(cap_p))
++ {
+ TH_LOG("Failed to cap_free: %s", strerror(errno));
+ }
+ }
+
+-__attribute__((__unused__))
+-static void set_cap(struct __test_metadata *const _metadata,
+- const cap_value_t caps)
++__attribute__((__unused__)) static void
++set_cap(struct __test_metadata *const _metadata, const cap_value_t caps)
+ {
+ _effective_cap(_metadata, caps, CAP_SET);
+ }
+
+-__attribute__((__unused__))
+-static void clear_cap(struct __test_metadata *const _metadata,
+- const cap_value_t caps)
++__attribute__((__unused__)) static void
++clear_cap(struct __test_metadata *const _metadata, const cap_value_t caps)
+ {
+ _effective_cap(_metadata, caps, CAP_CLEAR);
+ }
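
Since glibc exposes no Landlock functions, common.h keeps these raw syscall(2) shims. A common pattern built on them, and the behavior the abi_version test checks from the kernel side, is probing for support before enforcing anything. A sketch assuming common.h is included:

static int landlock_supported(void)
{
	/*
	 * Returns the highest supported ABI version (1 on Linux 5.13).
	 * Fails with ENOSYS if the syscall is absent, or EOPNOTSUPP if
	 * Landlock is built in but disabled at boot.
	 */
	const int abi = landlock_create_ruleset(
		NULL, 0, LANDLOCK_CREATE_RULESET_VERSION);

	return abi > 0;
}
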
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index 10c9a1e4ebd9b..a4fdcda62bdee 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -22,8 +22,21 @@
+
+ #include "common.h"
+
+-#define TMP_DIR "tmp"
+-#define BINARY_PATH "./true"
++#ifndef renameat2
++int renameat2(int olddirfd, const char *oldpath, int newdirfd,
++ const char *newpath, unsigned int flags)
++{
++ return syscall(__NR_renameat2, olddirfd, oldpath, newdirfd, newpath,
++ flags);
++}
++#endif
++
++#ifndef RENAME_EXCHANGE
++#define RENAME_EXCHANGE (1 << 1)
++#endif
++
++#define TMP_DIR "tmp"
++#define BINARY_PATH "./true"
+
+ /* Paths (sibling number and depth) */
+ static const char dir_s1d1[] = TMP_DIR "/s1d1";
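
The renameat2() shim added above is needed because glibc before 2.28 does not export the syscall, and RENAME_EXCHANGE may be missing from older headers. The flag requests an atomic swap of the two paths, which the rename tests later in this file rely on. A usage sketch (illustrative, assuming <fcntl.h> for AT_FDCWD and the wrapper above):

/* Atomically exchanges two paths; both remain in place on failure. */
static int swap_paths(const char *a, const char *b)
{
	return renameat2(AT_FDCWD, a, AT_FDCWD, b, RENAME_EXCHANGE);
}
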
+@@ -75,7 +88,7 @@ static const char dir_s3d3[] = TMP_DIR "/s3d1/s3d2/s3d3";
+ */
+
+ static void mkdir_parents(struct __test_metadata *const _metadata,
+- const char *const path)
++ const char *const path)
+ {
+ char *walker;
+ const char *parent;
+@@ -90,9 +103,10 @@ static void mkdir_parents(struct __test_metadata *const _metadata,
+ continue;
+ walker[i] = '\0';
+ err = mkdir(parent, 0700);
+- ASSERT_FALSE(err && errno != EEXIST) {
+- TH_LOG("Failed to create directory \"%s\": %s",
+- parent, strerror(errno));
++ ASSERT_FALSE(err && errno != EEXIST)
++ {
++ TH_LOG("Failed to create directory \"%s\": %s", parent,
++ strerror(errno));
+ }
+ walker[i] = '/';
+ }
+@@ -100,22 +114,24 @@ static void mkdir_parents(struct __test_metadata *const _metadata,
+ }
+
+ static void create_directory(struct __test_metadata *const _metadata,
+- const char *const path)
++ const char *const path)
+ {
+ mkdir_parents(_metadata, path);
+- ASSERT_EQ(0, mkdir(path, 0700)) {
++ ASSERT_EQ(0, mkdir(path, 0700))
++ {
+ TH_LOG("Failed to create directory \"%s\": %s", path,
+- strerror(errno));
++ strerror(errno));
+ }
+ }
+
+ static void create_file(struct __test_metadata *const _metadata,
+- const char *const path)
++ const char *const path)
+ {
+ mkdir_parents(_metadata, path);
+- ASSERT_EQ(0, mknod(path, S_IFREG | 0700, 0)) {
++ ASSERT_EQ(0, mknod(path, S_IFREG | 0700, 0))
++ {
+ TH_LOG("Failed to create file \"%s\": %s", path,
+- strerror(errno));
++ strerror(errno));
+ }
+ }
+
+@@ -221,8 +237,9 @@ static void remove_layout1(struct __test_metadata *const _metadata)
+ EXPECT_EQ(0, remove_path(dir_s3d2));
+ }
+
+-FIXTURE(layout1) {
+-};
++/* clang-format off */
++FIXTURE(layout1) {};
++/* clang-format on */
+
+ FIXTURE_SETUP(layout1)
+ {
+@@ -242,7 +259,8 @@ FIXTURE_TEARDOWN(layout1)
+ * This helper enables use of the ASSERT_* macros and prints the line
+ * number pointing to the test caller.
+ */
+-static int test_open_rel(const int dirfd, const char *const path, const int flags)
++static int test_open_rel(const int dirfd, const char *const path,
++ const int flags)
+ {
+ int fd;
+
+@@ -291,23 +309,23 @@ TEST_F_FORK(layout1, inval)
+ {
+ struct landlock_path_beneath_attr path_beneath = {
+ .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ .parent_fd = -1,
+ };
+ struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ };
+ int ruleset_fd;
+
+- path_beneath.parent_fd = open(dir_s1d2, O_PATH | O_DIRECTORY |
+- O_CLOEXEC);
++ path_beneath.parent_fd =
++ open(dir_s1d2, O_PATH | O_DIRECTORY | O_CLOEXEC);
+ ASSERT_LE(0, path_beneath.parent_fd);
+
+ ruleset_fd = open(dir_s1d1, O_PATH | O_DIRECTORY | O_CLOEXEC);
+ ASSERT_LE(0, ruleset_fd);
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ /* Returns EBADF because ruleset_fd is not a landlock-ruleset FD. */
+ ASSERT_EQ(EBADF, errno);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -315,55 +333,55 @@ TEST_F_FORK(layout1, inval)
+ ruleset_fd = open(dir_s1d1, O_DIRECTORY | O_CLOEXEC);
+ ASSERT_LE(0, ruleset_fd);
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ /* Returns EBADFD because ruleset_fd is not a valid ruleset. */
+ ASSERT_EQ(EBADFD, errno);
+ ASSERT_EQ(0, close(ruleset_fd));
+
+ /* Gets a real ruleset. */
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd);
+ ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ ASSERT_EQ(0, close(path_beneath.parent_fd));
+
+ /* Tests without O_PATH. */
+ path_beneath.parent_fd = open(dir_s1d2, O_DIRECTORY | O_CLOEXEC);
+ ASSERT_LE(0, path_beneath.parent_fd);
+ ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ ASSERT_EQ(0, close(path_beneath.parent_fd));
+
+ /* Tests with a ruleset FD. */
+ path_beneath.parent_fd = ruleset_fd;
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ ASSERT_EQ(EBADFD, errno);
+
+ /* Checks unhandled allowed_access. */
+- path_beneath.parent_fd = open(dir_s1d2, O_PATH | O_DIRECTORY |
+- O_CLOEXEC);
++ path_beneath.parent_fd =
++ open(dir_s1d2, O_PATH | O_DIRECTORY | O_CLOEXEC);
+ ASSERT_LE(0, path_beneath.parent_fd);
+
+ /* Test with legitimate values. */
+ path_beneath.allowed_access |= LANDLOCK_ACCESS_FS_EXECUTE;
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ ASSERT_EQ(EINVAL, errno);
+ path_beneath.allowed_access &= ~LANDLOCK_ACCESS_FS_EXECUTE;
+
+ /* Test with unknown (64-bits) value. */
+ path_beneath.allowed_access |= (1ULL << 60);
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ ASSERT_EQ(EINVAL, errno);
+ path_beneath.allowed_access &= ~(1ULL << 60);
+
+ /* Test with no access. */
+ path_beneath.allowed_access = 0;
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ ASSERT_EQ(ENOMSG, errno);
+ path_beneath.allowed_access &= ~(1ULL << 60);
+
+@@ -376,6 +394,8 @@ TEST_F_FORK(layout1, inval)
+ ASSERT_EQ(0, close(ruleset_fd));
+ }
+
++/* clang-format off */
++
+ #define ACCESS_FILE ( \
+ LANDLOCK_ACCESS_FS_EXECUTE | \
+ LANDLOCK_ACCESS_FS_WRITE_FILE | \
+@@ -396,53 +416,87 @@ TEST_F_FORK(layout1, inval)
+ LANDLOCK_ACCESS_FS_MAKE_BLOCK | \
+ ACCESS_LAST)
+
+-TEST_F_FORK(layout1, file_access_rights)
++/* clang-format on */
++
++TEST_F_FORK(layout1, file_and_dir_access_rights)
+ {
+ __u64 access;
+ int err;
+- struct landlock_path_beneath_attr path_beneath = {};
++ struct landlock_path_beneath_attr path_beneath_file = {},
++ path_beneath_dir = {};
+ struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_fs = ACCESS_ALL,
+ };
+- const int ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ const int ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+
+ ASSERT_LE(0, ruleset_fd);
+
+ /* Tests access rights for files. */
+- path_beneath.parent_fd = open(file1_s1d2, O_PATH | O_CLOEXEC);
+- ASSERT_LE(0, path_beneath.parent_fd);
++ path_beneath_file.parent_fd = open(file1_s1d2, O_PATH | O_CLOEXEC);
++ ASSERT_LE(0, path_beneath_file.parent_fd);
++
++ /* Tests access rights for directories. */
++ path_beneath_dir.parent_fd =
++ open(dir_s1d2, O_PATH | O_DIRECTORY | O_CLOEXEC);
++ ASSERT_LE(0, path_beneath_dir.parent_fd);
++
+ for (access = 1; access <= ACCESS_LAST; access <<= 1) {
+- path_beneath.allowed_access = access;
++ path_beneath_dir.allowed_access = access;
++ ASSERT_EQ(0, landlock_add_rule(ruleset_fd,
++ LANDLOCK_RULE_PATH_BENEATH,
++ &path_beneath_dir, 0));
++
++ path_beneath_file.allowed_access = access;
+ err = landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0);
+- if ((access | ACCESS_FILE) == ACCESS_FILE) {
++ &path_beneath_file, 0);
++ if (access & ACCESS_FILE) {
+ ASSERT_EQ(0, err);
+ } else {
+ ASSERT_EQ(-1, err);
+ ASSERT_EQ(EINVAL, errno);
+ }
+ }
+- ASSERT_EQ(0, close(path_beneath.parent_fd));
++ ASSERT_EQ(0, close(path_beneath_file.parent_fd));
++ ASSERT_EQ(0, close(path_beneath_dir.parent_fd));
++ ASSERT_EQ(0, close(ruleset_fd));
++}
++
++TEST_F_FORK(layout1, unknown_access_rights)
++{
++ __u64 access_mask;
++
++ for (access_mask = 1ULL << 63; access_mask != ACCESS_LAST;
++ access_mask >>= 1) {
++ struct landlock_ruleset_attr ruleset_attr = {
++ .handled_access_fs = access_mask,
++ };
++
++ ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
++ sizeof(ruleset_attr), 0));
++ ASSERT_EQ(EINVAL, errno);
++ }
+ }
+
+ static void add_path_beneath(struct __test_metadata *const _metadata,
+- const int ruleset_fd, const __u64 allowed_access,
+- const char *const path)
++ const int ruleset_fd, const __u64 allowed_access,
++ const char *const path)
+ {
+ struct landlock_path_beneath_attr path_beneath = {
+ .allowed_access = allowed_access,
+ };
+
+ path_beneath.parent_fd = open(path, O_PATH | O_CLOEXEC);
+- ASSERT_LE(0, path_beneath.parent_fd) {
++ ASSERT_LE(0, path_beneath.parent_fd)
++ {
+ TH_LOG("Failed to open directory \"%s\": %s", path,
+- strerror(errno));
++ strerror(errno));
+ }
+ ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0)) {
++ &path_beneath, 0))
++ {
+ TH_LOG("Failed to update the ruleset with \"%s\": %s", path,
+- strerror(errno));
++ strerror(errno));
+ }
+ ASSERT_EQ(0, close(path_beneath.parent_fd));
+ }
+@@ -452,6 +506,8 @@ struct rule {
+ __u64 access;
+ };
+
++/* clang-format off */
++
+ #define ACCESS_RO ( \
+ LANDLOCK_ACCESS_FS_READ_FILE | \
+ LANDLOCK_ACCESS_FS_READ_DIR)
+@@ -460,39 +516,46 @@ struct rule {
+ ACCESS_RO | \
+ LANDLOCK_ACCESS_FS_WRITE_FILE)
+
++/* clang-format on */
++
+ static int create_ruleset(struct __test_metadata *const _metadata,
+- const __u64 handled_access_fs, const struct rule rules[])
++ const __u64 handled_access_fs,
++ const struct rule rules[])
+ {
+ int ruleset_fd, i;
+ struct landlock_ruleset_attr ruleset_attr = {
+ .handled_access_fs = handled_access_fs,
+ };
+
+- ASSERT_NE(NULL, rules) {
++ ASSERT_NE(NULL, rules)
++ {
+ TH_LOG("No rule list");
+ }
+- ASSERT_NE(NULL, rules[0].path) {
++ ASSERT_NE(NULL, rules[0].path)
++ {
+ TH_LOG("Empty rule list");
+ }
+
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
+- ASSERT_LE(0, ruleset_fd) {
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++ ASSERT_LE(0, ruleset_fd)
++ {
+ TH_LOG("Failed to create a ruleset: %s", strerror(errno));
+ }
+
+ for (i = 0; rules[i].path; i++) {
+ add_path_beneath(_metadata, ruleset_fd, rules[i].access,
+- rules[i].path);
++ rules[i].path);
+ }
+ return ruleset_fd;
+ }
+
+ static void enforce_ruleset(struct __test_metadata *const _metadata,
+- const int ruleset_fd)
++ const int ruleset_fd)
+ {
+ ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+- ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0)) {
++ ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0))
++ {
+ TH_LOG("Failed to enforce ruleset: %s", strerror(errno));
+ }
+ }
+@@ -503,13 +566,14 @@ TEST_F_FORK(layout1, proc_nsfs)
+ {
+ .path = "/dev/null",
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ struct landlock_path_beneath_attr path_beneath;
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access |
+- LANDLOCK_ACCESS_FS_READ_DIR, rules);
++ const int ruleset_fd = create_ruleset(
++ _metadata, rules[0].access | LANDLOCK_ACCESS_FS_READ_DIR,
++ rules);
+
+ ASSERT_LE(0, ruleset_fd);
+ ASSERT_EQ(0, test_open("/proc/self/ns/mnt", O_RDONLY));
+@@ -536,22 +600,23 @@ TEST_F_FORK(layout1, proc_nsfs)
+ * references to a ruleset.
+ */
+ path_beneath.allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ path_beneath.parent_fd = open("/proc/self/ns/mnt", O_PATH | O_CLOEXEC);
+ ASSERT_LE(0, path_beneath.parent_fd);
+ ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+- &path_beneath, 0));
++ &path_beneath, 0));
+ ASSERT_EQ(EBADFD, errno);
+ ASSERT_EQ(0, close(path_beneath.parent_fd));
+ }
+
+-TEST_F_FORK(layout1, unpriv) {
++TEST_F_FORK(layout1, unpriv)
++{
+ const struct rule rules[] = {
+ {
+ .path = dir_s1d2,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ int ruleset_fd;
+
+@@ -577,9 +642,9 @@ TEST_F_FORK(layout1, effective_access)
+ {
+ .path = file1_s2d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ char buf;
+@@ -589,17 +654,23 @@ TEST_F_FORK(layout1, effective_access)
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+
+- /* Tests on a directory. */
++ /* Tests on a directory (with or without O_PATH). */
+ ASSERT_EQ(EACCES, test_open("/", O_RDONLY));
++ ASSERT_EQ(0, test_open("/", O_RDONLY | O_PATH));
+ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY));
++ ASSERT_EQ(0, test_open(dir_s1d1, O_RDONLY | O_PATH));
+ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++ ASSERT_EQ(0, test_open(file1_s1d1, O_RDONLY | O_PATH));
++
+ ASSERT_EQ(0, test_open(dir_s1d2, O_RDONLY));
+ ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
+ ASSERT_EQ(0, test_open(dir_s1d3, O_RDONLY));
+ ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
+
+- /* Tests on a file. */
++ /* Tests on a file (with or without O_PATH). */
+ ASSERT_EQ(EACCES, test_open(dir_s2d2, O_RDONLY));
++ ASSERT_EQ(0, test_open(dir_s2d2, O_RDONLY | O_PATH));
++
+ ASSERT_EQ(0, test_open(file1_s2d2, O_RDONLY));
+
+ /* Checks effective read and write actions. */
+@@ -626,7 +697,7 @@ TEST_F_FORK(layout1, unhandled_access)
+ .path = dir_s1d2,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ /* Here, we only handle read accesses, not write accesses. */
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RO, rules);
+@@ -653,14 +724,14 @@ TEST_F_FORK(layout1, ruleset_overlap)
+ {
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+ {
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_READ_DIR,
++ LANDLOCK_ACCESS_FS_READ_DIR,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -687,6 +758,113 @@ TEST_F_FORK(layout1, ruleset_overlap)
+ ASSERT_EQ(0, test_open(dir_s1d3, O_RDONLY | O_DIRECTORY));
+ }
+
++TEST_F_FORK(layout1, layer_rule_unions)
++{
++ const struct rule layer1[] = {
++ {
++ .path = dir_s1d2,
++ .access = LANDLOCK_ACCESS_FS_READ_FILE,
++ },
++ /* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
++ {
++ .path = dir_s1d3,
++ .access = LANDLOCK_ACCESS_FS_WRITE_FILE,
++ },
++ {},
++ };
++ const struct rule layer2[] = {
++ /* Doesn't change anything from layer1. */
++ {
++ .path = dir_s1d2,
++ .access = LANDLOCK_ACCESS_FS_READ_FILE |
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
++ },
++ {},
++ };
++ const struct rule layer3[] = {
++ /* Only allows write (but not read) to dir_s1d3. */
++ {
++ .path = dir_s1d2,
++ .access = LANDLOCK_ACCESS_FS_WRITE_FILE,
++ },
++ {},
++ };
++ int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1);
++
++ ASSERT_LE(0, ruleset_fd);
++ enforce_ruleset(_metadata, ruleset_fd);
++ ASSERT_EQ(0, close(ruleset_fd));
++
++ /* Checks s1d1 hierarchy with layer1. */
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Checks s1d2 hierarchy with layer1. */
++ ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Checks s1d3 hierarchy with layer1. */
++ ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
++ ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
++ /* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
++ ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Doesn't change anything from layer1. */
++ ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer2);
++ ASSERT_LE(0, ruleset_fd);
++ enforce_ruleset(_metadata, ruleset_fd);
++ ASSERT_EQ(0, close(ruleset_fd));
++
++ /* Checks s1d1 hierarchy with layer2. */
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Checks s1d2 hierarchy with layer2. */
++ ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Checks s1d3 hierarchy with layer2. */
++ ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
++ ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
++ /* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
++ ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Only allows write (but not read) to dir_s1d3. */
++ ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer3);
++ ASSERT_LE(0, ruleset_fd);
++ enforce_ruleset(_metadata, ruleset_fd);
++ ASSERT_EQ(0, close(ruleset_fd));
++
++ /* Checks s1d1 hierarchy with layer3. */
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Checks s1d2 hierarchy with layer3. */
++ ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
++ ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++ /* Checks s1d3 hierarchy with layer3. */
++ ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDONLY));
++ ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
++ /* dir_s1d3 should now deny READ_FILE and WRITE_FILE (O_RDWR). */
++ ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDWR));
++ ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++}
++
+ TEST_F_FORK(layout1, non_overlapping_accesses)
+ {
+ const struct rule layer1[] = {
+@@ -694,22 +872,22 @@ TEST_F_FORK(layout1, non_overlapping_accesses)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_MAKE_REG,
+ },
+- {}
++ {},
+ };
+ const struct rule layer2[] = {
+ {
+ .path = dir_s1d3,
+ .access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ },
+- {}
++ {},
+ };
+ int ruleset_fd;
+
+ ASSERT_EQ(0, unlink(file1_s1d1));
+ ASSERT_EQ(0, unlink(file1_s1d2));
+
+- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_MAKE_REG,
+- layer1);
++ ruleset_fd =
++ create_ruleset(_metadata, LANDLOCK_ACCESS_FS_MAKE_REG, layer1);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -720,7 +898,7 @@ TEST_F_FORK(layout1, non_overlapping_accesses)
+ ASSERT_EQ(0, unlink(file1_s1d2));
+
+ ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_REMOVE_FILE,
+- layer2);
++ layer2);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -758,7 +936,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ .path = file1_s1d3,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE,
+ },
+- {}
++ {},
+ };
+ /* First rule with write restrictions. */
+ const struct rule layer2_read_write[] = {
+@@ -766,14 +944,14 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ {
+ .path = dir_s1d3,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+ /* ...but also denies read access via its grandparent directory. */
+ {
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ const struct rule layer3_read[] = {
+ /* Allows read access via its great-grandparent directory. */
+@@ -781,7 +959,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ .path = dir_s1d1,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE,
+ },
+- {}
++ {},
+ };
+ const struct rule layer4_read_write[] = {
+ /*
+@@ -792,7 +970,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE,
+ },
+- {}
++ {},
+ };
+ const struct rule layer5_read[] = {
+ /*
+@@ -803,7 +981,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE,
+ },
+- {}
++ {},
+ };
+ const struct rule layer6_execute[] = {
+ /*
+@@ -814,7 +992,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ .path = dir_s2d1,
+ .access = LANDLOCK_ACCESS_FS_EXECUTE,
+ },
+- {}
++ {},
+ };
+ const struct rule layer7_read_write[] = {
+ /*
+@@ -825,12 +1003,12 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ int ruleset_fd;
+
+ ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
+- layer1_read);
++ layer1_read);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -840,8 +1018,10 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
+ ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
+
+- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE, layer2_read_write);
++ ruleset_fd = create_ruleset(_metadata,
++ LANDLOCK_ACCESS_FS_READ_FILE |
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
++ layer2_read_write);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -852,7 +1032,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
+
+ ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
+- layer3_read);
++ layer3_read);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -863,8 +1043,10 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
+
+ /* This time, denies write access for the file hierarchy. */
+- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE, layer4_read_write);
++ ruleset_fd = create_ruleset(_metadata,
++ LANDLOCK_ACCESS_FS_READ_FILE |
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
++ layer4_read_write);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -879,7 +1061,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ ASSERT_EQ(EACCES, test_open(file2_s1d3, O_WRONLY));
+
+ ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
+- layer5_read);
++ layer5_read);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -891,7 +1073,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
+
+ ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_EXECUTE,
+- layer6_execute);
++ layer6_execute);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -902,8 +1084,10 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ ASSERT_EQ(EACCES, test_open(file2_s1d3, O_WRONLY));
+ ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
+
+- ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE, layer7_read_write);
++ ruleset_fd = create_ruleset(_metadata,
++ LANDLOCK_ACCESS_FS_READ_FILE |
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
++ layer7_read_write);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+@@ -921,9 +1105,9 @@ TEST_F_FORK(layout1, inherit_subset)
+ {
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_READ_DIR,
++ LANDLOCK_ACCESS_FS_READ_DIR,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -949,7 +1133,7 @@ TEST_F_FORK(layout1, inherit_subset)
+ * ANDed with the previous ones.
+ */
+ add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_WRITE_FILE,
+- dir_s1d2);
++ dir_s1d2);
+ /*
+ * According to ruleset_fd, dir_s1d2 should now have the
+ * LANDLOCK_ACCESS_FS_READ_FILE and LANDLOCK_ACCESS_FS_WRITE_FILE
+@@ -1004,7 +1188,7 @@ TEST_F_FORK(layout1, inherit_subset)
+ * that there was no rule tied to it before.
+ */
+ add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_WRITE_FILE,
+- dir_s1d3);
++ dir_s1d3);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+
+@@ -1039,7 +1223,7 @@ TEST_F_FORK(layout1, inherit_superset)
+ .path = dir_s1d3,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -1054,8 +1238,10 @@ TEST_F_FORK(layout1, inherit_superset)
+ ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
+
+ /* Now dir_s1d2, parent of dir_s1d3, gets a new rule tied to it. */
+- add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_READ_DIR, dir_s1d2);
++ add_path_beneath(_metadata, ruleset_fd,
++ LANDLOCK_ACCESS_FS_READ_FILE |
++ LANDLOCK_ACCESS_FS_READ_DIR,
++ dir_s1d2);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+
+@@ -1075,12 +1261,12 @@ TEST_F_FORK(layout1, max_layers)
+ .path = dir_s1d2,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+- for (i = 0; i < 64; i++)
++ for (i = 0; i < 16; i++)
+ enforce_ruleset(_metadata, ruleset_fd);
+
+ for (i = 0; i < 2; i++) {
+@@ -1097,15 +1283,15 @@ TEST_F_FORK(layout1, empty_or_same_ruleset)
+ int ruleset_fd;
+
+ /* Tests empty handled_access_fs. */
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(-1, ruleset_fd);
+ ASSERT_EQ(ENOMSG, errno);
+
+	/* Enforces a policy which denies read access to all files. */
+ ruleset_attr.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE;
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
+@@ -1113,8 +1299,8 @@ TEST_F_FORK(layout1, empty_or_same_ruleset)
+
+	/* Nests a policy which denies read access to all directories. */
+ ruleset_attr.handled_access_fs = LANDLOCK_ACCESS_FS_READ_DIR;
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
+@@ -1137,7 +1323,7 @@ TEST_F_FORK(layout1, rule_on_mountpoint)
+ .path = dir_s3d2,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -1166,7 +1352,7 @@ TEST_F_FORK(layout1, rule_over_mountpoint)
+ .path = dir_s3d1,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -1194,7 +1380,7 @@ TEST_F_FORK(layout1, rule_over_root_allow_then_deny)
+ .path = "/",
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -1224,7 +1410,7 @@ TEST_F_FORK(layout1, rule_over_root_deny)
+ .path = "/",
+ .access = LANDLOCK_ACCESS_FS_READ_FILE,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -1244,12 +1430,13 @@ TEST_F_FORK(layout1, rule_inside_mount_ns)
+ .path = "s3d3",
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ int ruleset_fd;
+
+ set_cap(_metadata, CAP_SYS_ADMIN);
+- ASSERT_EQ(0, syscall(SYS_pivot_root, dir_s3d2, dir_s3d3)) {
++ ASSERT_EQ(0, syscall(__NR_pivot_root, dir_s3d2, dir_s3d3))
++ {
+ TH_LOG("Failed to pivot root: %s", strerror(errno));
+ };
+ ASSERT_EQ(0, chdir("/"));
+@@ -1271,7 +1458,7 @@ TEST_F_FORK(layout1, mount_and_pivot)
+ .path = dir_s3d2,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -1282,7 +1469,7 @@ TEST_F_FORK(layout1, mount_and_pivot)
+ set_cap(_metadata, CAP_SYS_ADMIN);
+ ASSERT_EQ(-1, mount(NULL, dir_s3d2, NULL, MS_RDONLY, NULL));
+ ASSERT_EQ(EPERM, errno);
+- ASSERT_EQ(-1, syscall(SYS_pivot_root, dir_s3d2, dir_s3d3));
++ ASSERT_EQ(-1, syscall(__NR_pivot_root, dir_s3d2, dir_s3d3));
+ ASSERT_EQ(EPERM, errno);
+ clear_cap(_metadata, CAP_SYS_ADMIN);
+ }
+@@ -1294,28 +1481,29 @@ TEST_F_FORK(layout1, move_mount)
+ .path = dir_s3d2,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+
+ set_cap(_metadata, CAP_SYS_ADMIN);
+- ASSERT_EQ(0, syscall(SYS_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
+- dir_s1d2, 0)) {
++ ASSERT_EQ(0, syscall(__NR_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
++ dir_s1d2, 0))
++ {
+ TH_LOG("Failed to move mount: %s", strerror(errno));
+ }
+
+- ASSERT_EQ(0, syscall(SYS_move_mount, AT_FDCWD, dir_s1d2, AT_FDCWD,
+- dir_s3d2, 0));
++ ASSERT_EQ(0, syscall(__NR_move_mount, AT_FDCWD, dir_s1d2, AT_FDCWD,
++ dir_s3d2, 0));
+ clear_cap(_metadata, CAP_SYS_ADMIN);
+
+ enforce_ruleset(_metadata, ruleset_fd);
+ ASSERT_EQ(0, close(ruleset_fd));
+
+ set_cap(_metadata, CAP_SYS_ADMIN);
+- ASSERT_EQ(-1, syscall(SYS_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
+- dir_s1d2, 0));
++ ASSERT_EQ(-1, syscall(__NR_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
++ dir_s1d2, 0));
+ ASSERT_EQ(EPERM, errno);
+ clear_cap(_metadata, CAP_SYS_ADMIN);
+ }
+@@ -1335,7 +1523,7 @@ TEST_F_FORK(layout1, release_inodes)
+ .path = dir_s3d3,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+
+@@ -1362,7 +1550,7 @@ enum relative_access {
+ };
+
+ static void test_relative_path(struct __test_metadata *const _metadata,
+- const enum relative_access rel)
++ const enum relative_access rel)
+ {
+ /*
+ * Common layer to check that chroot doesn't ignore it (i.e. a chroot
+@@ -1373,7 +1561,7 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ .path = TMP_DIR,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ const struct rule layer2_subs[] = {
+ {
+@@ -1384,7 +1572,7 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ .path = dir_s2d2,
+ .access = ACCESS_RO,
+ },
+- {}
++ {},
+ };
+ int dirfd, ruleset_fd;
+
+@@ -1425,14 +1613,16 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ break;
+ case REL_CHROOT_ONLY:
+ /* Do chroot into dir_s1d2 (relative to dir_s2d2). */
+- ASSERT_EQ(0, chroot("../../s1d1/s1d2")) {
++ ASSERT_EQ(0, chroot("../../s1d1/s1d2"))
++ {
+ TH_LOG("Failed to chroot: %s", strerror(errno));
+ }
+ dirfd = AT_FDCWD;
+ break;
+ case REL_CHROOT_CHDIR:
+ /* Do chroot into dir_s1d2. */
+- ASSERT_EQ(0, chroot(".")) {
++ ASSERT_EQ(0, chroot("."))
++ {
+ TH_LOG("Failed to chroot: %s", strerror(errno));
+ }
+ dirfd = AT_FDCWD;
+@@ -1440,7 +1630,7 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ }
+
+ ASSERT_EQ((rel == REL_CHROOT_CHDIR) ? 0 : EACCES,
+- test_open_rel(dirfd, "..", O_RDONLY));
++ test_open_rel(dirfd, "..", O_RDONLY));
+ ASSERT_EQ(0, test_open_rel(dirfd, ".", O_RDONLY));
+
+ if (rel == REL_CHROOT_ONLY) {
+@@ -1462,11 +1652,13 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ if (rel != REL_CHROOT_CHDIR) {
+ ASSERT_EQ(EACCES, test_open_rel(dirfd, "../../s1d1", O_RDONLY));
+ ASSERT_EQ(0, test_open_rel(dirfd, "../../s1d1/s1d2", O_RDONLY));
+- ASSERT_EQ(0, test_open_rel(dirfd, "../../s1d1/s1d2/s1d3", O_RDONLY));
++ ASSERT_EQ(0, test_open_rel(dirfd, "../../s1d1/s1d2/s1d3",
++ O_RDONLY));
+
+ ASSERT_EQ(EACCES, test_open_rel(dirfd, "../../s2d1", O_RDONLY));
+ ASSERT_EQ(0, test_open_rel(dirfd, "../../s2d1/s2d2", O_RDONLY));
+- ASSERT_EQ(0, test_open_rel(dirfd, "../../s2d1/s2d2/s2d3", O_RDONLY));
++ ASSERT_EQ(0, test_open_rel(dirfd, "../../s2d1/s2d2/s2d3",
++ O_RDONLY));
+ }
+
+ if (rel == REL_OPEN)
+@@ -1495,40 +1687,42 @@ TEST_F_FORK(layout1, relative_chroot_chdir)
+ }
+
+ static void copy_binary(struct __test_metadata *const _metadata,
+- const char *const dst_path)
++ const char *const dst_path)
+ {
+ int dst_fd, src_fd;
+ struct stat statbuf;
+
+ dst_fd = open(dst_path, O_WRONLY | O_TRUNC | O_CLOEXEC);
+- ASSERT_LE(0, dst_fd) {
+- TH_LOG("Failed to open \"%s\": %s", dst_path,
+- strerror(errno));
++ ASSERT_LE(0, dst_fd)
++ {
++ TH_LOG("Failed to open \"%s\": %s", dst_path, strerror(errno));
+ }
+ src_fd = open(BINARY_PATH, O_RDONLY | O_CLOEXEC);
+- ASSERT_LE(0, src_fd) {
++ ASSERT_LE(0, src_fd)
++ {
+ TH_LOG("Failed to open \"" BINARY_PATH "\": %s",
+- strerror(errno));
++ strerror(errno));
+ }
+ ASSERT_EQ(0, fstat(src_fd, &statbuf));
+- ASSERT_EQ(statbuf.st_size, sendfile(dst_fd, src_fd, 0,
+- statbuf.st_size));
++ ASSERT_EQ(statbuf.st_size,
++ sendfile(dst_fd, src_fd, 0, statbuf.st_size));
+ ASSERT_EQ(0, close(src_fd));
+ ASSERT_EQ(0, close(dst_fd));
+ }
+
+-static void test_execute(struct __test_metadata *const _metadata,
+- const int err, const char *const path)
++static void test_execute(struct __test_metadata *const _metadata, const int err,
++ const char *const path)
+ {
+ int status;
+- char *const argv[] = {(char *)path, NULL};
++ char *const argv[] = { (char *)path, NULL };
+ const pid_t child = fork();
+
+ ASSERT_LE(0, child);
+ if (child == 0) {
+- ASSERT_EQ(err ? -1 : 0, execve(path, argv, NULL)) {
++ ASSERT_EQ(err ? -1 : 0, execve(path, argv, NULL))
++ {
+ TH_LOG("Failed to execute \"%s\": %s", path,
+- strerror(errno));
++ strerror(errno));
+ };
+ ASSERT_EQ(err, errno);
+ _exit(_metadata->passed ? 2 : 1);
+@@ -1536,9 +1730,10 @@ static void test_execute(struct __test_metadata *const _metadata,
+ }
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ ASSERT_EQ(1, WIFEXITED(status));
+- ASSERT_EQ(err ? 2 : 0, WEXITSTATUS(status)) {
++ ASSERT_EQ(err ? 2 : 0, WEXITSTATUS(status))
++ {
+ TH_LOG("Unexpected return code for \"%s\": %s", path,
+- strerror(errno));
++ strerror(errno));
+ };
+ }
+
+@@ -1549,10 +1744,10 @@ TEST_F_FORK(layout1, execute)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_EXECUTE,
+ },
+- {}
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+ copy_binary(_metadata, file1_s1d1);
+@@ -1577,15 +1772,21 @@ TEST_F_FORK(layout1, execute)
+
+ TEST_F_FORK(layout1, link)
+ {
+- const struct rule rules[] = {
++ const struct rule layer1[] = {
+ {
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_MAKE_REG,
+ },
+- {}
++ {},
++ };
++ const struct rule layer2[] = {
++ {
++ .path = dir_s1d3,
++ .access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
++ },
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ int ruleset_fd = create_ruleset(_metadata, layer1[0].access, layer1);
+
+ ASSERT_LE(0, ruleset_fd);
+
+@@ -1598,14 +1799,30 @@ TEST_F_FORK(layout1, link)
+
+ ASSERT_EQ(-1, link(file2_s1d1, file1_s1d1));
+ ASSERT_EQ(EACCES, errno);
++
+ /* Denies linking because of reparenting. */
+ ASSERT_EQ(-1, link(file1_s2d1, file1_s1d2));
+ ASSERT_EQ(EXDEV, errno);
+ ASSERT_EQ(-1, link(file2_s1d2, file1_s1d3));
+ ASSERT_EQ(EXDEV, errno);
++ ASSERT_EQ(-1, link(file2_s1d3, file1_s1d2));
++ ASSERT_EQ(EXDEV, errno);
+
+ ASSERT_EQ(0, link(file2_s1d2, file1_s1d2));
+ ASSERT_EQ(0, link(file2_s1d3, file1_s1d3));
++
++ /* Prepares for next unlinks. */
++ ASSERT_EQ(0, unlink(file2_s1d2));
++ ASSERT_EQ(0, unlink(file2_s1d3));
++
++ ruleset_fd = create_ruleset(_metadata, layer2[0].access, layer2);
++ ASSERT_LE(0, ruleset_fd);
++ enforce_ruleset(_metadata, ruleset_fd);
++ ASSERT_EQ(0, close(ruleset_fd));
++
++	/* Checks that linking doesn't require the ability to delete a file. */
++ ASSERT_EQ(0, link(file1_s1d2, file2_s1d2));
++ ASSERT_EQ(0, link(file1_s1d3, file2_s1d3));
+ }
+
+ TEST_F_FORK(layout1, rename_file)
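
The extended link test above now stacks a second layer that only grants LANDLOCK_ACCESS_FS_REMOVE_FILE and then verifies that link() still succeeds: creating a hard link needs LANDLOCK_ACCESS_FS_MAKE_REG on the destination directory, not the right to delete the source. The denied cases fail with EXDEV because this kernel treats any link whose source and destination parents differ as file reparenting, which Landlock refuses outright. A condensed sketch of the pattern, reusing the test's path constants (illustrative only):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Mirrors the checks above while sandboxed under the layer1 ruleset. */
static void link_examples(void)
{
	/* Same parent directory: MAKE_REG on that directory suffices. */
	if (link(file2_s1d2, file1_s1d2))
		perror("same-parent link");

	/* Different parent directory: reparenting is denied with EXDEV. */
	if (link(file2_s1d2, file1_s1d3) && errno == EXDEV)
		fprintf(stderr, "cross-directory link refused as expected\n");
}
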
+@@ -1619,14 +1836,13 @@ TEST_F_FORK(layout1, rename_file)
+ .path = dir_s2d2,
+ .access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ },
+- {}
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+
+- ASSERT_EQ(0, unlink(file1_s1d1));
+ ASSERT_EQ(0, unlink(file1_s1d2));
+
+ enforce_ruleset(_metadata, ruleset_fd);
+@@ -1662,9 +1878,15 @@ TEST_F_FORK(layout1, rename_file)
+ ASSERT_EQ(-1, renameat2(AT_FDCWD, dir_s2d2, AT_FDCWD, file1_s2d1,
+ RENAME_EXCHANGE));
+ ASSERT_EQ(EACCES, errno);
++ /* Checks that file1_s2d1 cannot be removed (instead of ENOTDIR). */
++ ASSERT_EQ(-1, rename(dir_s2d2, file1_s2d1));
++ ASSERT_EQ(EACCES, errno);
+ ASSERT_EQ(-1, renameat2(AT_FDCWD, file1_s2d1, AT_FDCWD, dir_s2d2,
+ RENAME_EXCHANGE));
+ ASSERT_EQ(EACCES, errno);
++ /* Checks that file1_s1d1 cannot be removed (instead of EISDIR). */
++ ASSERT_EQ(-1, rename(file1_s1d1, dir_s1d2));
++ ASSERT_EQ(EACCES, errno);
+
+ /* Renames files with different parents. */
+ ASSERT_EQ(-1, rename(file1_s2d2, file1_s1d2));
+@@ -1675,14 +1897,14 @@ TEST_F_FORK(layout1, rename_file)
+
+ /* Exchanges and renames files with same parent. */
+ ASSERT_EQ(0, renameat2(AT_FDCWD, file2_s2d3, AT_FDCWD, file1_s2d3,
+- RENAME_EXCHANGE));
++ RENAME_EXCHANGE));
+ ASSERT_EQ(0, rename(file2_s2d3, file1_s2d3));
+
+ /* Exchanges files and directories with same parent, twice. */
+ ASSERT_EQ(0, renameat2(AT_FDCWD, file1_s2d2, AT_FDCWD, dir_s2d3,
+- RENAME_EXCHANGE));
++ RENAME_EXCHANGE));
+ ASSERT_EQ(0, renameat2(AT_FDCWD, file1_s2d2, AT_FDCWD, dir_s2d3,
+- RENAME_EXCHANGE));
++ RENAME_EXCHANGE));
+ }
+
+ TEST_F_FORK(layout1, rename_dir)
+@@ -1696,10 +1918,10 @@ TEST_F_FORK(layout1, rename_dir)
+ .path = dir_s2d1,
+ .access = LANDLOCK_ACCESS_FS_REMOVE_DIR,
+ },
+- {}
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+
+@@ -1727,16 +1949,22 @@ TEST_F_FORK(layout1, rename_dir)
+ ASSERT_EQ(-1, renameat2(AT_FDCWD, dir_s1d1, AT_FDCWD, dir_s2d1,
+ RENAME_EXCHANGE));
+ ASSERT_EQ(EACCES, errno);
++ /* Checks that dir_s1d2 cannot be removed (instead of ENOTDIR). */
++ ASSERT_EQ(-1, rename(dir_s1d2, file1_s1d1));
++ ASSERT_EQ(EACCES, errno);
+ ASSERT_EQ(-1, renameat2(AT_FDCWD, file1_s1d1, AT_FDCWD, dir_s1d2,
+ RENAME_EXCHANGE));
+ ASSERT_EQ(EACCES, errno);
++ /* Checks that dir_s1d2 cannot be removed (instead of EISDIR). */
++ ASSERT_EQ(-1, rename(file1_s1d1, dir_s1d2));
++ ASSERT_EQ(EACCES, errno);
+
+ /*
+ * Exchanges and renames directory to the same parent, which allows
+ * directory removal.
+ */
+ ASSERT_EQ(0, renameat2(AT_FDCWD, dir_s1d3, AT_FDCWD, file1_s1d2,
+- RENAME_EXCHANGE));
++ RENAME_EXCHANGE));
+ ASSERT_EQ(0, unlink(dir_s1d3));
+ ASSERT_EQ(0, mkdir(dir_s1d3, 0700));
+ ASSERT_EQ(0, rename(file1_s1d2, dir_s1d3));
+@@ -1750,10 +1978,10 @@ TEST_F_FORK(layout1, remove_dir)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_REMOVE_DIR,
+ },
+- {}
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+
+@@ -1787,10 +2015,10 @@ TEST_F_FORK(layout1, remove_file)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ },
+- {}
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+@@ -1805,14 +2033,15 @@ TEST_F_FORK(layout1, remove_file)
+ }
+
+ static void test_make_file(struct __test_metadata *const _metadata,
+- const __u64 access, const mode_t mode, const dev_t dev)
++ const __u64 access, const mode_t mode,
++ const dev_t dev)
+ {
+ const struct rule rules[] = {
+ {
+ .path = dir_s1d2,
+ .access = access,
+ },
+- {}
++ {},
+ };
+ const int ruleset_fd = create_ruleset(_metadata, access, rules);
+
+@@ -1820,9 +2049,10 @@ static void test_make_file(struct __test_metadata *const _metadata,
+
+ ASSERT_EQ(0, unlink(file1_s1d1));
+ ASSERT_EQ(0, unlink(file2_s1d1));
+- ASSERT_EQ(0, mknod(file2_s1d1, mode | 0400, dev)) {
+- TH_LOG("Failed to make file \"%s\": %s",
+- file2_s1d1, strerror(errno));
++ ASSERT_EQ(0, mknod(file2_s1d1, mode | 0400, dev))
++ {
++ TH_LOG("Failed to make file \"%s\": %s", file2_s1d1,
++ strerror(errno));
+ };
+
+ ASSERT_EQ(0, unlink(file1_s1d2));
+@@ -1841,9 +2071,10 @@ static void test_make_file(struct __test_metadata *const _metadata,
+ ASSERT_EQ(-1, rename(file2_s1d1, file1_s1d1));
+ ASSERT_EQ(EACCES, errno);
+
+- ASSERT_EQ(0, mknod(file1_s1d2, mode | 0400, dev)) {
+- TH_LOG("Failed to make file \"%s\": %s",
+- file1_s1d2, strerror(errno));
++ ASSERT_EQ(0, mknod(file1_s1d2, mode | 0400, dev))
++ {
++ TH_LOG("Failed to make file \"%s\": %s", file1_s1d2,
++ strerror(errno));
+ };
+ ASSERT_EQ(0, link(file1_s1d2, file2_s1d2));
+ ASSERT_EQ(0, unlink(file2_s1d2));
+@@ -1860,7 +2091,7 @@ TEST_F_FORK(layout1, make_char)
+ /* Creates a /dev/null device. */
+ set_cap(_metadata, CAP_MKNOD);
+ test_make_file(_metadata, LANDLOCK_ACCESS_FS_MAKE_CHAR, S_IFCHR,
+- makedev(1, 3));
++ makedev(1, 3));
+ }
+
+ TEST_F_FORK(layout1, make_block)
+@@ -1868,7 +2099,7 @@ TEST_F_FORK(layout1, make_block)
+ /* Creates a /dev/loop0 device. */
+ set_cap(_metadata, CAP_MKNOD);
+ test_make_file(_metadata, LANDLOCK_ACCESS_FS_MAKE_BLOCK, S_IFBLK,
+- makedev(7, 0));
++ makedev(7, 0));
+ }
+
+ TEST_F_FORK(layout1, make_reg_1)
+@@ -1898,10 +2129,10 @@ TEST_F_FORK(layout1, make_sym)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_MAKE_SYM,
+ },
+- {}
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+
+@@ -1943,10 +2174,10 @@ TEST_F_FORK(layout1, make_dir)
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_MAKE_DIR,
+ },
+- {}
++ {},
+ };
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+
+@@ -1965,12 +2196,12 @@ TEST_F_FORK(layout1, make_dir)
+ }
+
+ static int open_proc_fd(struct __test_metadata *const _metadata, const int fd,
+- const int open_flags)
++ const int open_flags)
+ {
+ static const char path_template[] = "/proc/self/fd/%d";
+ char procfd_path[sizeof(path_template) + 10];
+- const int procfd_path_size = snprintf(procfd_path, sizeof(procfd_path),
+- path_template, fd);
++ const int procfd_path_size =
++ snprintf(procfd_path, sizeof(procfd_path), path_template, fd);
+
+ ASSERT_LT(procfd_path_size, sizeof(procfd_path));
+ return open(procfd_path, open_flags);
+@@ -1983,12 +2214,13 @@ TEST_F_FORK(layout1, proc_unlinked_file)
+ .path = file1_s1d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE,
+ },
+- {}
++ {},
+ };
+ int reg_fd, proc_fd;
+- const int ruleset_fd = create_ruleset(_metadata,
+- LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE, rules);
++ const int ruleset_fd = create_ruleset(
++ _metadata,
++ LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
++ rules);
+
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+@@ -2005,9 +2237,10 @@ TEST_F_FORK(layout1, proc_unlinked_file)
+ ASSERT_EQ(0, close(proc_fd));
+
+ proc_fd = open_proc_fd(_metadata, reg_fd, O_RDWR | O_CLOEXEC);
+- ASSERT_EQ(-1, proc_fd) {
+- TH_LOG("Successfully opened /proc/self/fd/%d: %s",
+- reg_fd, strerror(errno));
++ ASSERT_EQ(-1, proc_fd)
++ {
++ TH_LOG("Successfully opened /proc/self/fd/%d: %s", reg_fd,
++ strerror(errno));
+ }
+ ASSERT_EQ(EACCES, errno);
+
+@@ -2023,13 +2256,13 @@ TEST_F_FORK(layout1, proc_pipe)
+ {
+ .path = dir_s1d2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ /* Limits read and write access to files tied to the filesystem. */
+- const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+- rules);
++ const int ruleset_fd =
++ create_ruleset(_metadata, rules[0].access, rules);
+
+ ASSERT_LE(0, ruleset_fd);
+ enforce_ruleset(_metadata, ruleset_fd);
+@@ -2041,7 +2274,8 @@ TEST_F_FORK(layout1, proc_pipe)
+
+ /* Checks access to pipes through FD. */
+ ASSERT_EQ(0, pipe2(pipe_fds, O_CLOEXEC));
+- ASSERT_EQ(1, write(pipe_fds[1], ".", 1)) {
++ ASSERT_EQ(1, write(pipe_fds[1], ".", 1))
++ {
+ TH_LOG("Failed to write in pipe: %s", strerror(errno));
+ }
+ ASSERT_EQ(1, read(pipe_fds[0], &buf, 1));
+@@ -2050,9 +2284,10 @@ TEST_F_FORK(layout1, proc_pipe)
+ /* Checks write access to pipe through /proc/self/fd . */
+ proc_fd = open_proc_fd(_metadata, pipe_fds[1], O_WRONLY | O_CLOEXEC);
+ ASSERT_LE(0, proc_fd);
+- ASSERT_EQ(1, write(proc_fd, ".", 1)) {
++ ASSERT_EQ(1, write(proc_fd, ".", 1))
++ {
+ TH_LOG("Failed to write through /proc/self/fd/%d: %s",
+- pipe_fds[1], strerror(errno));
++ pipe_fds[1], strerror(errno));
+ }
+ ASSERT_EQ(0, close(proc_fd));
+
+@@ -2060,9 +2295,10 @@ TEST_F_FORK(layout1, proc_pipe)
+ proc_fd = open_proc_fd(_metadata, pipe_fds[0], O_RDONLY | O_CLOEXEC);
+ ASSERT_LE(0, proc_fd);
+ buf = '\0';
+- ASSERT_EQ(1, read(proc_fd, &buf, 1)) {
++ ASSERT_EQ(1, read(proc_fd, &buf, 1))
++ {
+ TH_LOG("Failed to read through /proc/self/fd/%d: %s",
+- pipe_fds[1], strerror(errno));
++ pipe_fds[1], strerror(errno));
+ }
+ ASSERT_EQ(0, close(proc_fd));
+
+@@ -2070,8 +2306,9 @@ TEST_F_FORK(layout1, proc_pipe)
+ ASSERT_EQ(0, close(pipe_fds[1]));
+ }
+
+-FIXTURE(layout1_bind) {
+-};
++/* clang-format off */
++FIXTURE(layout1_bind) {};
++/* clang-format on */
+
+ FIXTURE_SETUP(layout1_bind)
+ {
+@@ -2161,7 +2398,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ .path = dir_s2d1,
+ .access = ACCESS_RW,
+ },
+- {}
++ {},
+ };
+ /*
+ * Sets access rights on the same bind-mounted directories. The result
+@@ -2177,7 +2414,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ .path = dir_s2d2,
+ .access = ACCESS_RW,
+ },
+- {}
++ {},
+ };
+ /* Only allow read-access to the s1d3 hierarchies. */
+ const struct rule layer3_source[] = {
+@@ -2185,7 +2422,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ .path = dir_s1d3,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE,
+ },
+- {}
++ {},
+ };
+ /* Removes all access rights. */
+ const struct rule layer4_destination[] = {
+@@ -2193,7 +2430,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ .path = bind_file1_s1d3,
+ .access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ int ruleset_fd;
+
+@@ -2282,8 +2519,8 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ ASSERT_EQ(EACCES, test_open(bind_file1_s1d3, O_WRONLY));
+ }
+
+-#define LOWER_BASE TMP_DIR "/lower"
+-#define LOWER_DATA LOWER_BASE "/data"
++#define LOWER_BASE TMP_DIR "/lower"
++#define LOWER_DATA LOWER_BASE "/data"
+ static const char lower_fl1[] = LOWER_DATA "/fl1";
+ static const char lower_dl1[] = LOWER_DATA "/dl1";
+ static const char lower_dl1_fl2[] = LOWER_DATA "/dl1/fl2";
+@@ -2295,23 +2532,23 @@ static const char lower_do1_fl3[] = LOWER_DATA "/do1/fl3";
+ static const char (*lower_base_files[])[] = {
+ &lower_fl1,
+ &lower_fo1,
+- NULL
++ NULL,
+ };
+ static const char (*lower_base_directories[])[] = {
+ &lower_dl1,
+ &lower_do1,
+- NULL
++ NULL,
+ };
+ static const char (*lower_sub_files[])[] = {
+ &lower_dl1_fl2,
+ &lower_do1_fo2,
+ &lower_do1_fl3,
+- NULL
++ NULL,
+ };
+
+-#define UPPER_BASE TMP_DIR "/upper"
+-#define UPPER_DATA UPPER_BASE "/data"
+-#define UPPER_WORK UPPER_BASE "/work"
++#define UPPER_BASE TMP_DIR "/upper"
++#define UPPER_DATA UPPER_BASE "/data"
++#define UPPER_WORK UPPER_BASE "/work"
+ static const char upper_fu1[] = UPPER_DATA "/fu1";
+ static const char upper_du1[] = UPPER_DATA "/du1";
+ static const char upper_du1_fu2[] = UPPER_DATA "/du1/fu2";
+@@ -2323,22 +2560,22 @@ static const char upper_do1_fu3[] = UPPER_DATA "/do1/fu3";
+ static const char (*upper_base_files[])[] = {
+ &upper_fu1,
+ &upper_fo1,
+- NULL
++ NULL,
+ };
+ static const char (*upper_base_directories[])[] = {
+ &upper_du1,
+ &upper_do1,
+- NULL
++ NULL,
+ };
+ static const char (*upper_sub_files[])[] = {
+ &upper_du1_fu2,
+ &upper_do1_fo2,
+ &upper_do1_fu3,
+- NULL
++ NULL,
+ };
+
+-#define MERGE_BASE TMP_DIR "/merge"
+-#define MERGE_DATA MERGE_BASE "/data"
++#define MERGE_BASE TMP_DIR "/merge"
++#define MERGE_DATA MERGE_BASE "/data"
+ static const char merge_fl1[] = MERGE_DATA "/fl1";
+ static const char merge_dl1[] = MERGE_DATA "/dl1";
+ static const char merge_dl1_fl2[] = MERGE_DATA "/dl1/fl2";
+@@ -2355,21 +2592,17 @@ static const char (*merge_base_files[])[] = {
+ &merge_fl1,
+ &merge_fu1,
+ &merge_fo1,
+- NULL
++ NULL,
+ };
+ static const char (*merge_base_directories[])[] = {
+ &merge_dl1,
+ &merge_du1,
+ &merge_do1,
+- NULL
++ NULL,
+ };
+ static const char (*merge_sub_files[])[] = {
+- &merge_dl1_fl2,
+- &merge_du1_fu2,
+- &merge_do1_fo2,
+- &merge_do1_fl3,
+- &merge_do1_fu3,
+- NULL
++ &merge_dl1_fl2, &merge_du1_fu2, &merge_do1_fo2,
++ &merge_do1_fl3, &merge_do1_fu3, NULL,
+ };
+
+ /*
+@@ -2411,8 +2644,9 @@ static const char (*merge_sub_files[])[] = {
+ * └── work
+ */
+
+-FIXTURE(layout2_overlay) {
+-};
++/* clang-format off */
++FIXTURE(layout2_overlay) {};
++/* clang-format on */
+
+ FIXTURE_SETUP(layout2_overlay)
+ {
+@@ -2444,9 +2678,8 @@ FIXTURE_SETUP(layout2_overlay)
+ set_cap(_metadata, CAP_SYS_ADMIN);
+ set_cap(_metadata, CAP_DAC_OVERRIDE);
+ ASSERT_EQ(0, mount("overlay", MERGE_DATA, "overlay", 0,
+- "lowerdir=" LOWER_DATA
+- ",upperdir=" UPPER_DATA
+- ",workdir=" UPPER_WORK));
++ "lowerdir=" LOWER_DATA ",upperdir=" UPPER_DATA
++ ",workdir=" UPPER_WORK));
+ clear_cap(_metadata, CAP_DAC_OVERRIDE);
+ clear_cap(_metadata, CAP_SYS_ADMIN);
+ }
+@@ -2513,9 +2746,9 @@ TEST_F_FORK(layout2_overlay, no_restriction)
+ ASSERT_EQ(0, test_open(merge_do1_fu3, O_RDONLY));
+ }
+
+-#define for_each_path(path_list, path_entry, i) \
+- for (i = 0, path_entry = *path_list[i]; path_list[i]; \
+- path_entry = *path_list[++i])
++#define for_each_path(path_list, path_entry, i) \
++ for (i = 0, path_entry = *path_list[i]; path_list[i]; \
++ path_entry = *path_list[++i])
+
+ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ {
+@@ -2533,7 +2766,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ .path = MERGE_BASE,
+ .access = ACCESS_RW,
+ },
+- {}
++ {},
+ };
+ const struct rule layer2_data[] = {
+ {
+@@ -2548,7 +2781,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ .path = MERGE_DATA,
+ .access = ACCESS_RW,
+ },
+- {}
++ {},
+ };
+ /* Sets access right on directories inside both layers. */
+ const struct rule layer3_subdirs[] = {
+@@ -2580,7 +2813,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ .path = merge_do1,
+ .access = ACCESS_RW,
+ },
+- {}
++ {},
+ };
+ /* Tighten access rights to the files. */
+ const struct rule layer4_files[] = {
+@@ -2611,37 +2844,37 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ {
+ .path = merge_dl1_fl2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+ {
+ .path = merge_du1_fu2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+ {
+ .path = merge_do1_fo2,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+ {
+ .path = merge_do1_fl3,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+ {
+ .path = merge_do1_fu3,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ const struct rule layer5_merge_only[] = {
+ {
+ .path = MERGE_DATA,
+ .access = LANDLOCK_ACCESS_FS_READ_FILE |
+- LANDLOCK_ACCESS_FS_WRITE_FILE,
++ LANDLOCK_ACCESS_FS_WRITE_FILE,
+ },
+- {}
++ {},
+ };
+ int ruleset_fd;
+ size_t i;
+@@ -2659,7 +2892,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ ASSERT_EQ(EACCES, test_open(path_entry, O_WRONLY));
+ }
+ for_each_path(lower_base_directories, path_entry, i) {
+- ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++ ASSERT_EQ(EACCES,
++ test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ }
+ for_each_path(lower_sub_files, path_entry, i) {
+ ASSERT_EQ(0, test_open(path_entry, O_RDONLY));
+@@ -2671,7 +2905,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ ASSERT_EQ(EACCES, test_open(path_entry, O_WRONLY));
+ }
+ for_each_path(upper_base_directories, path_entry, i) {
+- ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++ ASSERT_EQ(EACCES,
++ test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ }
+ for_each_path(upper_sub_files, path_entry, i) {
+ ASSERT_EQ(0, test_open(path_entry, O_RDONLY));
+@@ -2756,7 +2991,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ ASSERT_EQ(EACCES, test_open(path_entry, O_RDWR));
+ }
+ for_each_path(merge_base_directories, path_entry, i) {
+- ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++ ASSERT_EQ(EACCES,
++ test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ }
+ for_each_path(merge_sub_files, path_entry, i) {
+ ASSERT_EQ(0, test_open(path_entry, O_RDWR));
+@@ -2781,7 +3017,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ ASSERT_EQ(EACCES, test_open(path_entry, O_RDWR));
+ }
+ for_each_path(merge_base_directories, path_entry, i) {
+- ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++ ASSERT_EQ(EACCES,
++ test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ }
+ for_each_path(merge_sub_files, path_entry, i) {
+ ASSERT_EQ(0, test_open(path_entry, O_RDWR));
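
The for_each_path() iteration above relies on an unusual data shape: each list is a NULL-terminated array whose elements are pointers to string arrays (const char (*list[])[]), so every visit dereferences the pointer to recover the path. A minimal standalone sketch of the same layout, with hypothetical path names, walked by a plain loop:

    #include <stdio.h>

    static const char path_a[] = "/tmp/a";
    static const char path_b[] = "/tmp/b";

    /* NULL-terminated array of pointers to char arrays, as in fs_test.c. */
    static const char (*paths[])[] = {
            &path_a,
            &path_b,
            NULL,
    };

    int main(void)
    {
            size_t i;

            /* *paths[i] is an array lvalue that decays to const char *. */
            for (i = 0; paths[i]; i++)
                    printf("%s\n", *paths[i]);
            return 0;
    }

The extra indirection lets each string keep its exact array type at its definition while the table stores uniform pointers.
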
+diff --git a/tools/testing/selftests/landlock/ptrace_test.c b/tools/testing/selftests/landlock/ptrace_test.c
+index 15fbef9cc8496..c28ef98ff3ac1 100644
+--- a/tools/testing/selftests/landlock/ptrace_test.c
++++ b/tools/testing/selftests/landlock/ptrace_test.c
+@@ -26,9 +26,10 @@ static void create_domain(struct __test_metadata *const _metadata)
+ .handled_access_fs = LANDLOCK_ACCESS_FS_MAKE_BLOCK,
+ };
+
+- ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+- sizeof(ruleset_attr), 0);
+- EXPECT_LE(0, ruleset_fd) {
++ ruleset_fd =
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++ EXPECT_LE(0, ruleset_fd)
++ {
+ TH_LOG("Failed to create a ruleset: %s", strerror(errno));
+ }
+ EXPECT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+@@ -43,7 +44,7 @@ static int test_ptrace_read(const pid_t pid)
+ int procenv_path_size, fd;
+
+ procenv_path_size = snprintf(procenv_path, sizeof(procenv_path),
+- path_template, pid);
++ path_template, pid);
+ if (procenv_path_size >= sizeof(procenv_path))
+ return E2BIG;
+
+@@ -59,9 +60,12 @@ static int test_ptrace_read(const pid_t pid)
+ return 0;
+ }
+
+-FIXTURE(hierarchy) { };
++/* clang-format off */
++FIXTURE(hierarchy) {};
++/* clang-format on */
+
+-FIXTURE_VARIANT(hierarchy) {
++FIXTURE_VARIANT(hierarchy)
++{
+ const bool domain_both;
+ const bool domain_parent;
+ const bool domain_child;
+@@ -83,7 +87,9 @@ FIXTURE_VARIANT(hierarchy) {
+ * \ P2 -> P1 : allow
+ * 'P2
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_without_domain) {
++ /* clang-format on */
+ .domain_both = false,
+ .domain_parent = false,
+ .domain_child = false,
+@@ -98,7 +104,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_without_domain) {
+ * | P2 |
+ * '------'
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_with_one_domain) {
++ /* clang-format on */
+ .domain_both = false,
+ .domain_parent = false,
+ .domain_child = true,
+@@ -112,7 +120,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_with_one_domain) {
+ * '
+ * P2
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_parent_domain) {
++ /* clang-format on */
+ .domain_both = false,
+ .domain_parent = true,
+ .domain_child = false,
+@@ -127,7 +137,9 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_parent_domain) {
+ * | P2 |
+ * '------'
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_sibling_domain) {
++ /* clang-format on */
+ .domain_both = false,
+ .domain_parent = true,
+ .domain_child = true,
+@@ -142,7 +154,9 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_sibling_domain) {
+ * | P2 |
+ * '-------------'
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_sibling_domain) {
++ /* clang-format on */
+ .domain_both = true,
+ .domain_parent = false,
+ .domain_child = false,
+@@ -158,7 +172,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_sibling_domain) {
+ * | '------' |
+ * '-----------------'
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_with_nested_domain) {
++ /* clang-format on */
+ .domain_both = true,
+ .domain_parent = false,
+ .domain_child = true,
+@@ -174,7 +190,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_with_nested_domain) {
+ * | P2 |
+ * '-----------------'
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_nested_and_parent_domain) {
++ /* clang-format on */
+ .domain_both = true,
+ .domain_parent = true,
+ .domain_child = false,
+@@ -192,17 +210,21 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_nested_and_parent_domain) {
+ * | '------' |
+ * '-----------------'
+ */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_forked_domain) {
++ /* clang-format on */
+ .domain_both = true,
+ .domain_parent = true,
+ .domain_child = true,
+ };
+
+ FIXTURE_SETUP(hierarchy)
+-{ }
++{
++}
+
+ FIXTURE_TEARDOWN(hierarchy)
+-{ }
++{
++}
+
+ /* Test PTRACE_TRACEME and PTRACE_ATTACH for parent and child. */
+ TEST_F(hierarchy, trace)
+@@ -330,7 +352,7 @@ TEST_F(hierarchy, trace)
+ ASSERT_EQ(1, write(pipe_parent[1], ".", 1));
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ if (WIFSIGNALED(status) || !WIFEXITED(status) ||
+- WEXITSTATUS(status) != EXIT_SUCCESS)
++ WEXITSTATUS(status) != EXIT_SUCCESS)
+ _metadata->passed = 0;
+ }
+
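
create_domain() above shows the whole Landlock bring-up in miniature: build a ruleset attribute, create the ruleset fd, opt into no_new_privs, then restrict the task. Outside the kselftest harness the same sequence goes through raw syscalls, since libc wrappers may be missing; a minimal sketch, assuming headers recent enough to provide <linux/landlock.h> and the syscall numbers:

    #include <linux/landlock.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            struct landlock_ruleset_attr attr = {
                    .handled_access_fs = LANDLOCK_ACCESS_FS_MAKE_BLOCK,
            };
            int ruleset_fd;

            ruleset_fd = syscall(SYS_landlock_create_ruleset, &attr,
                                 sizeof(attr), 0);
            if (ruleset_fd < 0) {
                    perror("landlock_create_ruleset");
                    return 1;
            }
            /* Mandatory before restricting without CAP_SYS_ADMIN. */
            if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
                    perror("prctl");
                    return 1;
            }
            if (syscall(SYS_landlock_restrict_self, ruleset_fd, 0)) {
                    perror("landlock_restrict_self");
                    return 1;
            }
            close(ruleset_fd);
            return 0;
    }
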
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index 51e5cf22632f7..56ccbeae0638d 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -121,8 +121,10 @@ static int fill_cache_read(unsigned char *start_ptr, unsigned char *end_ptr,
+
+ /* Consume read result so that reading memory is not optimized out. */
+ fp = fopen("/dev/null", "w");
+- if (!fp)
++ if (!fp) {
+ perror("Unable to write to /dev/null");
++ return -1;
++ }
+ fprintf(fp, "Sum: %d ", ret);
+ fclose(fp);
+
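
The fill_buf hunk pairs two concerns: the fprintf() to /dev/null makes the read loop's result externally visible so the compiler cannot discard the reads, and the newly added return keeps a NULL stream out of fprintf() when the open fails. A compact sketch of the same sink-and-bail pattern, detached from the resctrl code:

    #include <stdio.h>

    static int consume(int value)
    {
            FILE *fp = fopen("/dev/null", "w");

            if (!fp) {
                    perror("fopen /dev/null");
                    return -1;      /* do not hand NULL to fprintf() */
            }
            /* Externally visible write: keeps 'value' and the work that
             * produced it from being optimized out. */
            fprintf(fp, "Sum: %d ", value);
            fclose(fp);
            return 0;
    }

    int main(void)
    {
            int sum = 0, i;

            for (i = 0; i < 1000; i++)
                    sum += i;
            return consume(sum) ? 1 : 0;
    }
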
+diff --git a/tools/tracing/rtla/Makefile b/tools/tracing/rtla/Makefile
+index 5a1eda6179924..4b635d4de0185 100644
+--- a/tools/tracing/rtla/Makefile
++++ b/tools/tracing/rtla/Makefile
+@@ -23,6 +23,7 @@ $(call allow-override,LD_SO_CONF_PATH,/etc/ld.so.conf.d/)
+ $(call allow-override,LDCONFIG,ldconfig)
+
+ INSTALL = install
++MKDIR = mkdir
+ FOPTS := -flto=auto -ffat-lto-objects -fexceptions -fstack-protector-strong \
+ -fasynchronous-unwind-tables -fstack-clash-protection
+ WOPTS := -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -Wno-maybe-uninitialized
+@@ -68,7 +69,7 @@ static: $(OBJ)
+
+ .PHONY: install
+ install: doc_install
+- $(INSTALL) -d -m 755 $(DESTDIR)$(BINDIR)
++ $(MKDIR) -p $(DESTDIR)$(BINDIR)
+ $(INSTALL) rtla -m 755 $(DESTDIR)$(BINDIR)
+ $(STRIP) $(DESTDIR)$(BINDIR)/rtla
+ @test ! -f $(DESTDIR)$(BINDIR)/osnoise || rm $(DESTDIR)$(BINDIR)/osnoise
+diff --git a/tools/tracing/rtla/README.txt b/tools/tracing/rtla/README.txt
+index 6c88446f7e74a..0fbad2640b8c9 100644
+--- a/tools/tracing/rtla/README.txt
++++ b/tools/tracing/rtla/README.txt
+@@ -1,15 +1,13 @@
+ RTLA: Real-Time Linux Analysis tools
+
+-The rtla is a meta-tool that includes a set of commands that
+-aims to analyze the real-time properties of Linux. But, instead of
+-testing Linux as a black box, rtla leverages kernel tracing
+-capabilities to provide precise information about the properties
+-and root causes of unexpected results.
++The rtla meta-tool includes a set of commands that aims to analyze
++the real-time properties of Linux. Instead of testing Linux as a black box,
++rtla leverages kernel tracing capabilities to provide precise information
++about the properties and root causes of unexpected results.
+
+ Installing RTLA
+
+-RTLA depends on some libraries and tools. More precisely, it depends on the
+-following libraries:
++RTLA depends on the following libraries and tools:
+
+ - libtracefs
+ - libtraceevent
+diff --git a/tools/tracing/rtla/src/utils.c b/tools/tracing/rtla/src/utils.c
+index ffaf8ec84001e..c4dd2aa0e9634 100644
+--- a/tools/tracing/rtla/src/utils.c
++++ b/tools/tracing/rtla/src/utils.c
+@@ -254,7 +254,7 @@ int __set_sched_attr(int pid, struct sched_attr *attr)
+
+ retval = sched_setattr(pid, attr, flags);
+ if (retval < 0) {
+- err_msg("boost_with_deadline failed to boost pid %d: %s\n",
++ err_msg("Failed to set sched attributes to the pid %d: %s\n",
+ pid, strerror(errno));
+ return 1;
+ }
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-06-06 11:01 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-06-06 11:01 UTC (permalink / raw
To: gentoo-commits
commit: 2959c9bb7685e717881e8ce4a25992c5f9dcdeda
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 6 11:01:33 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jun 6 11:01:33 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2959c9bb
Linux patch 5.17.13
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1012_linux-5.17.13.patch | 2619 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2623 insertions(+)
diff --git a/0000_README b/0000_README
index ecb45bb4..aa088efb 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-5.17.12.patch
From: http://www.kernel.org
Desc: Linux 5.17.12
+Patch: 1012_linux-5.17.13.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-5.17.13.patch b/1012_linux-5.17.13.patch
new file mode 100644
index 00000000..84bd9083
--- /dev/null
+++ b/1012_linux-5.17.13.patch
@@ -0,0 +1,2619 @@
+diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst
+index 31ea120ce531c..0c76d4ede04b6 100644
+--- a/Documentation/process/submitting-patches.rst
++++ b/Documentation/process/submitting-patches.rst
+@@ -77,7 +77,7 @@ as you intend it to.
+
+ The maintainer will thank you if you write your patch description in a
+ form which can be easily pulled into Linux's source code management
+-system, ``git``, as a "commit log". See :ref:`explicit_in_reply_to`.
++system, ``git``, as a "commit log". See :ref:`the_canonical_patch_format`.
+
+ Solve only one problem per patch. If your description starts to get
+ long, that's a sign that you probably need to split up your patch.
+diff --git a/Makefile b/Makefile
+index 25c44dda0ef37..d38228d336bf6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 160f8cd9a68da..2f57100a011a3 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -895,7 +895,7 @@
+ device-wakeup-gpios = <&gpg3 4 GPIO_ACTIVE_HIGH>;
+ interrupt-parent = <&gph2>;
+ interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
+- interrupt-names = "host-wake";
++ interrupt-names = "host-wakeup";
+ };
+ };
+
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 25d8aff273a10..4dd5f3b08aaa4 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1496,7 +1496,8 @@ static int kvm_init_vector_slots(void)
+ base = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
+ kvm_init_vector_slot(base, HYP_VECTOR_SPECTRE_DIRECT);
+
+- if (kvm_system_needs_idmapped_vectors() && !has_vhe()) {
++ if (kvm_system_needs_idmapped_vectors() &&
++ !is_protected_kvm_enabled()) {
+ err = create_hyp_exec_mappings(__pa_symbol(__bp_harden_hyp_vecs),
+ __BP_HARDEN_HYP_VECS_SZ, &base);
+ if (err)
+diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
+index e414ca44839fd..0cb20ee6a632c 100644
+--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
++++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
+@@ -360,13 +360,15 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm,
+ static bool kvmppc_next_nontransitioned_gfn(const struct kvm_memory_slot *memslot,
+ struct kvm *kvm, unsigned long *gfn)
+ {
+- struct kvmppc_uvmem_slot *p;
++ struct kvmppc_uvmem_slot *p = NULL, *iter;
+ bool ret = false;
+ unsigned long i;
+
+- list_for_each_entry(p, &kvm->arch.uvmem_pfns, list)
+- if (*gfn >= p->base_pfn && *gfn < p->base_pfn + p->nr_pfns)
++ list_for_each_entry(iter, &kvm->arch.uvmem_pfns, list)
++ if (*gfn >= iter->base_pfn && *gfn < iter->base_pfn + iter->nr_pfns) {
++ p = iter;
+ break;
++ }
+ if (!p)
+ return ret;
+ /*
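
The uvmem hunk above is an instance of a classic list_for_each_entry() pitfall: when the loop runs to completion without breaking, the iterator no longer points at a real element (it is the container_of() of the list head), so testing the iterator afterwards cannot distinguish "found" from "not found". The fix walks with a dedicated iter and only assigns the result pointer on a hit. A userspace sketch of the same discipline over a hand-rolled singly linked list:

    #include <stddef.h>
    #include <stdio.h>

    struct node {
            int key;
            struct node *next;
    };

    /* Only 'found' is meaningful after the loop; 'iter' is loop state. */
    static struct node *find(struct node *head, int key)
    {
            struct node *found = NULL, *iter;

            for (iter = head; iter; iter = iter->next) {
                    if (iter->key == key) {
                            found = iter;
                            break;
                    }
            }
            return found;
    }

    int main(void)
    {
            struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };

            printf("%p\n", (void *)find(&a, 42));   /* NULL: not found */
            printf("%d\n", find(&a, 2)->key);       /* 2: found */
            return 0;
    }
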
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index ac96f9b2d64b3..1c14bcce88f27 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -409,6 +409,103 @@ do { \
+
+ #endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+
++#ifdef CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
++#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label) ({ \
++ bool success; \
++ __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
++ __typeof__(*(_ptr)) __old = *_old; \
++ __typeof__(*(_ptr)) __new = (_new); \
++ asm_volatile_goto("\n" \
++ "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
++ _ASM_EXTABLE_UA(1b, %l[label]) \
++ : CC_OUT(z) (success), \
++ [ptr] "+m" (*_ptr), \
++ [old] "+a" (__old) \
++ : [new] ltype (__new) \
++ : "memory" \
++ : label); \
++ if (unlikely(!success)) \
++ *_old = __old; \
++ likely(success); })
++
++#ifdef CONFIG_X86_32
++#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label) ({ \
++ bool success; \
++ __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
++ __typeof__(*(_ptr)) __old = *_old; \
++ __typeof__(*(_ptr)) __new = (_new); \
++ asm_volatile_goto("\n" \
++ "1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n" \
++ _ASM_EXTABLE_UA(1b, %l[label]) \
++ : CC_OUT(z) (success), \
++ "+A" (__old), \
++ [ptr] "+m" (*_ptr) \
++ : "b" ((u32)__new), \
++ "c" ((u32)((u64)__new >> 32)) \
++ : "memory" \
++ : label); \
++ if (unlikely(!success)) \
++ *_old = __old; \
++ likely(success); })
++#endif // CONFIG_X86_32
++#else // !CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
++#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label) ({ \
++ int __err = 0; \
++ bool success; \
++ __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
++ __typeof__(*(_ptr)) __old = *_old; \
++ __typeof__(*(_ptr)) __new = (_new); \
++ asm volatile("\n" \
++ "1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
++ CC_SET(z) \
++ "2:\n" \
++ _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, \
++ %[errout]) \
++ : CC_OUT(z) (success), \
++ [errout] "+r" (__err), \
++ [ptr] "+m" (*_ptr), \
++ [old] "+a" (__old) \
++ : [new] ltype (__new) \
++ : "memory", "cc"); \
++ if (unlikely(__err)) \
++ goto label; \
++ if (unlikely(!success)) \
++ *_old = __old; \
++ likely(success); })
++
++#ifdef CONFIG_X86_32
++/*
++ * Unlike the normal CMPXCHG, hardcode ECX for both success/fail and error.
++ * There are only six GPRs available and four (EAX, EBX, ECX, and EDX) are
++ * hardcoded by CMPXCHG8B, leaving only ESI and EDI. If the compiler uses
++ * both ESI and EDI for the memory operand, compilation will fail if the error
++ * is an input+output as there will be no register available for input.
++ */
++#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label) ({ \
++ int __result; \
++ __typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
++ __typeof__(*(_ptr)) __old = *_old; \
++ __typeof__(*(_ptr)) __new = (_new); \
++ asm volatile("\n" \
++ "1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n" \
++ "mov $0, %%ecx\n\t" \
++ "setz %%cl\n" \
++ "2:\n" \
++ _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %%ecx) \
++ : [result]"=c" (__result), \
++ "+A" (__old), \
++ [ptr] "+m" (*_ptr) \
++ : "b" ((u32)__new), \
++ "c" ((u32)((u64)__new >> 32)) \
++ : "memory", "cc"); \
++ if (unlikely(__result < 0)) \
++ goto label; \
++ if (unlikely(!__result)) \
++ *_old = __old; \
++ likely(__result); })
++#endif // CONFIG_X86_32
++#endif // CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
++
+ /* FIXME: this hack is definitely wrong -AK */
+ struct __large_struct { unsigned long buf[100]; };
+ #define __m(x) (*(struct __large_struct __user *)(x))
+@@ -501,6 +598,51 @@ do { \
+ } while (0)
+ #endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+
++extern void __try_cmpxchg_user_wrong_size(void);
++
++#ifndef CONFIG_X86_32
++#define __try_cmpxchg64_user_asm(_ptr, _oldp, _nval, _label) \
++ __try_cmpxchg_user_asm("q", "r", (_ptr), (_oldp), (_nval), _label)
++#endif
++
++/*
++ * Force the pointer to u<size> to match the size expected by the asm helper.
++ * clang/LLVM compiles all cases and only discards the unused paths after
++ * processing errors, which breaks i386 if the pointer is an 8-byte value.
++ */
++#define unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label) ({ \
++ bool __ret; \
++ __chk_user_ptr(_ptr); \
++ switch (sizeof(*(_ptr))) { \
++ case 1: __ret = __try_cmpxchg_user_asm("b", "q", \
++ (__force u8 *)(_ptr), (_oldp), \
++ (_nval), _label); \
++ break; \
++ case 2: __ret = __try_cmpxchg_user_asm("w", "r", \
++ (__force u16 *)(_ptr), (_oldp), \
++ (_nval), _label); \
++ break; \
++ case 4: __ret = __try_cmpxchg_user_asm("l", "r", \
++ (__force u32 *)(_ptr), (_oldp), \
++ (_nval), _label); \
++ break; \
++ case 8: __ret = __try_cmpxchg64_user_asm((__force u64 *)(_ptr), (_oldp),\
++ (_nval), _label); \
++ break; \
++ default: __try_cmpxchg_user_wrong_size(); \
++ } \
++ __ret; })
++
++/* "Returns" 0 on success, 1 on failure, -EFAULT if the access faults. */
++#define __try_cmpxchg_user(_ptr, _oldp, _nval, _label) ({ \
++ int __ret = -EFAULT; \
++ __uaccess_begin_nospec(); \
++ __ret = !unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label); \
++_label: \
++ __uaccess_end(); \
++ __ret; \
++ })
++
+ /*
+ * We want the unsafe accessors to always be inlined and use
+ * the error labels - thus the macro games.
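
The __try_cmpxchg_user_asm() helpers above follow the kernel's try_cmpxchg() contract: on a failed exchange the current memory value is written back through *_old, so a retry loop never has to re-read the location by hand. The GCC/Clang __atomic builtin offers the same contract over ordinary memory; a userspace sketch (the kernel variants additionally survive faults via the exception table, which has no userspace analogue):

    #include <stdio.h>

    /* Atomically add 'delta' with a compare-exchange retry loop. */
    static unsigned long fetch_add_cas(unsigned long *ptr, unsigned long delta)
    {
            unsigned long old = __atomic_load_n(ptr, __ATOMIC_RELAXED);

            /* On failure, 'old' is refreshed with the observed value. */
            while (!__atomic_compare_exchange_n(ptr, &old, old + delta,
                                                0 /* strong */,
                                                __ATOMIC_SEQ_CST,
                                                __ATOMIC_RELAXED))
                    ;
            return old;
    }

    int main(void)
    {
            unsigned long v = 40;

            fetch_add_cas(&v, 2);
            printf("%lu\n", v);     /* 42 */
            return 0;
    }
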
+diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
+index 7c63a1911fae9..3c24e6124d955 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.c
++++ b/arch/x86/kernel/cpu/sgx/encl.c
+@@ -12,6 +12,92 @@
+ #include "encls.h"
+ #include "sgx.h"
+
++#define PCMDS_PER_PAGE (PAGE_SIZE / sizeof(struct sgx_pcmd))
++/*
++ * 32 PCMD entries share a PCMD page. PCMD_FIRST_MASK is used to
++ * determine the page index associated with the first PCMD entry
++ * within a PCMD page.
++ */
++#define PCMD_FIRST_MASK GENMASK(4, 0)
++
++/**
++ * reclaimer_writing_to_pcmd() - Query if any enclave page associated with
++ * a PCMD page is in process of being reclaimed.
++ * @encl: Enclave to which PCMD page belongs
++ * @start_addr: Address of enclave page using first entry within the PCMD page
++ *
++ * When an enclave page is reclaimed some Paging Crypto MetaData (PCMD) is
++ * stored. The PCMD data of a reclaimed enclave page contains enough
++ * information for the processor to verify the page at the time
++ * it is loaded back into the Enclave Page Cache (EPC).
++ *
++ * The backing storage to which enclave pages are reclaimed is laid out as
++ * follows:
++ * Encrypted enclave pages:SECS page:PCMD pages
++ *
++ * Each PCMD page contains the PCMD metadata of
++ * PAGE_SIZE/sizeof(struct sgx_pcmd) enclave pages.
++ *
++ * A PCMD page can only be truncated if it is (a) empty, and (b) not in the
++ * process of getting data (and thus soon being non-empty). (b) is tested with
++ * a check if an enclave page sharing the PCMD page is in the process of being
++ * reclaimed.
++ *
++ * The reclaimer sets the SGX_ENCL_PAGE_BEING_RECLAIMED flag when it
++ * intends to reclaim that enclave page - it means that the PCMD page
++ * associated with that enclave page is about to get some data and thus
++ * even if the PCMD page is empty, it should not be truncated.
++ *
++ * Context: Enclave mutex (&sgx_encl->lock) must be held.
++ * Return: 1 if the reclaimer is about to write to the PCMD page
++ * 0 if the reclaimer has no intention to write to the PCMD page
++ */
++static int reclaimer_writing_to_pcmd(struct sgx_encl *encl,
++ unsigned long start_addr)
++{
++ int reclaimed = 0;
++ int i;
++
++ /*
++ * PCMD_FIRST_MASK is based on number of PCMD entries within
++ * PCMD page being 32.
++ */
++ BUILD_BUG_ON(PCMDS_PER_PAGE != 32);
++
++ for (i = 0; i < PCMDS_PER_PAGE; i++) {
++ struct sgx_encl_page *entry;
++ unsigned long addr;
++
++ addr = start_addr + i * PAGE_SIZE;
++
++ /*
++ * Stop when reaching the SECS page - it does not
++ * have a page_array entry and its reclaim is
++ * started and completed with enclave mutex held so
++ * it does not use the SGX_ENCL_PAGE_BEING_RECLAIMED
++ * flag.
++ */
++ if (addr == encl->base + encl->size)
++ break;
++
++ entry = xa_load(&encl->page_array, PFN_DOWN(addr));
++ if (!entry)
++ continue;
++
++ /*
++ * VA page slot ID uses same bit as the flag so it is important
++ * to ensure that the page is not already in backing store.
++ */
++ if (entry->epc_page &&
++ (entry->desc & SGX_ENCL_PAGE_BEING_RECLAIMED)) {
++ reclaimed = 1;
++ break;
++ }
++ }
++
++ return reclaimed;
++}
++
+ /*
+ * Calculate byte offset of a PCMD struct associated with an enclave page. PCMD's
+ * follow right after the EPC data in the backing storage. In addition to the
+@@ -47,6 +133,7 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ unsigned long va_offset = encl_page->desc & SGX_ENCL_PAGE_VA_OFFSET_MASK;
+ struct sgx_encl *encl = encl_page->encl;
+ pgoff_t page_index, page_pcmd_off;
++ unsigned long pcmd_first_page;
+ struct sgx_pageinfo pginfo;
+ struct sgx_backing b;
+ bool pcmd_page_empty;
+@@ -58,6 +145,11 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ else
+ page_index = PFN_DOWN(encl->size);
+
++ /*
++ * Address of enclave page using the first entry within the PCMD page.
++ */
++ pcmd_first_page = PFN_PHYS(page_index & ~PCMD_FIRST_MASK) + encl->base;
++
+ page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
+
+ ret = sgx_encl_get_backing(encl, page_index, &b);
+@@ -84,6 +176,7 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ }
+
+ memset(pcmd_page + b.pcmd_offset, 0, sizeof(struct sgx_pcmd));
++ set_page_dirty(b.pcmd);
+
+ /*
+ * The area for the PCMD in the page was zeroed above. Check if the
+@@ -94,12 +187,20 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ kunmap_atomic(pcmd_page);
+ kunmap_atomic((void *)(unsigned long)pginfo.contents);
+
+- sgx_encl_put_backing(&b, false);
++ get_page(b.pcmd);
++ sgx_encl_put_backing(&b);
+
+ sgx_encl_truncate_backing_page(encl, page_index);
+
+- if (pcmd_page_empty)
++ if (pcmd_page_empty && !reclaimer_writing_to_pcmd(encl, pcmd_first_page)) {
+ sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off));
++ pcmd_page = kmap_atomic(b.pcmd);
++ if (memchr_inv(pcmd_page, 0, PAGE_SIZE))
++ pr_warn("PCMD page not empty after truncate.\n");
++ kunmap_atomic(pcmd_page);
++ }
++
++ put_page(b.pcmd);
+
+ return ret;
+ }
+@@ -645,15 +746,9 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ /**
+ * sgx_encl_put_backing() - Unpin the backing storage
+ * @backing: data for accessing backing storage for the page
+- * @do_write: mark pages dirty
+ */
+-void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write)
++void sgx_encl_put_backing(struct sgx_backing *backing)
+ {
+- if (do_write) {
+- set_page_dirty(backing->pcmd);
+- set_page_dirty(backing->contents);
+- }
+-
+ put_page(backing->pcmd);
+ put_page(backing->contents);
+ }
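
The arithmetic behind PCMD_FIRST_MASK is worth spelling out: with 4096-byte pages and an (assumed) 128-byte struct sgx_pcmd, PCMDS_PER_PAGE is 32, so clearing the low five bits of an enclave page index (GENMASK(4, 0)) yields the page whose PCMD entry sits first in the shared PCMD page. A sketch with the sizes hardcoded as assumptions:

    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (1UL << PAGE_SHIFT)
    #define PCMD_SIZE       128UL   /* assumed sizeof(struct sgx_pcmd) */
    #define PCMDS_PER_PAGE  (PAGE_SIZE / PCMD_SIZE)         /* 32 */
    #define PCMD_FIRST_MASK (PCMDS_PER_PAGE - 1)            /* GENMASK(4, 0) */

    int main(void)
    {
            unsigned long base = 0x100000, page_index = 37;
            /* Enclave page using the first entry within the PCMD page. */
            unsigned long pcmd_first_page =
                    ((page_index & ~PCMD_FIRST_MASK) << PAGE_SHIFT) + base;

            /* 37 & ~31 == 32: pages 32..63 share one PCMD page. */
            printf("%#lx\n", pcmd_first_page);
            return 0;
    }
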
+diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
+index fec43ca65065b..d44e7372151f0 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.h
++++ b/arch/x86/kernel/cpu/sgx/encl.h
+@@ -107,7 +107,7 @@ void sgx_encl_release(struct kref *ref);
+ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
+ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ struct sgx_backing *backing);
+-void sgx_encl_put_backing(struct sgx_backing *backing, bool do_write);
++void sgx_encl_put_backing(struct sgx_backing *backing);
+ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
+ struct sgx_encl_page *page);
+
+diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
+index 8e4bc6453d263..ab4ec54bbdd94 100644
+--- a/arch/x86/kernel/cpu/sgx/main.c
++++ b/arch/x86/kernel/cpu/sgx/main.c
+@@ -191,6 +191,8 @@ static int __sgx_encl_ewb(struct sgx_epc_page *epc_page, void *va_slot,
+ backing->pcmd_offset;
+
+ ret = __ewb(&pginfo, sgx_get_epc_virt_addr(epc_page), va_slot);
++ set_page_dirty(backing->pcmd);
++ set_page_dirty(backing->contents);
+
+ kunmap_atomic((void *)(unsigned long)(pginfo.metadata -
+ backing->pcmd_offset));
+@@ -308,6 +310,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
+ sgx_encl_ewb(epc_page, backing);
+ encl_page->epc_page = NULL;
+ encl->secs_child_cnt--;
++ sgx_encl_put_backing(backing);
+
+ if (!encl->secs_child_cnt && test_bit(SGX_ENCL_INITIALIZED, &encl->flags)) {
+ ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
+@@ -320,7 +323,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
+ sgx_encl_free_epc_page(encl->secs.epc_page);
+ encl->secs.epc_page = NULL;
+
+- sgx_encl_put_backing(&secs_backing, true);
++ sgx_encl_put_backing(&secs_backing);
+ }
+
+ out:
+@@ -379,11 +382,14 @@ static void sgx_reclaim_pages(void)
+ goto skip;
+
+ page_index = PFN_DOWN(encl_page->desc - encl_page->encl->base);
++
++ mutex_lock(&encl_page->encl->lock);
+ ret = sgx_encl_get_backing(encl_page->encl, page_index, &backing[i]);
+- if (ret)
++ if (ret) {
++ mutex_unlock(&encl_page->encl->lock);
+ goto skip;
++ }
+
+- mutex_lock(&encl_page->encl->lock);
+ encl_page->desc |= SGX_ENCL_PAGE_BEING_RECLAIMED;
+ mutex_unlock(&encl_page->encl->lock);
+ continue;
+@@ -411,7 +417,6 @@ skip:
+
+ encl_page = epc_page->owner;
+ sgx_reclaimer_write(epc_page, &backing[i]);
+- sgx_encl_put_backing(&backing[i], true);
+
+ kref_put(&encl_page->encl->refcount, sgx_encl_release);
+ epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 5290d64723086..98e6c29e17a48 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -14,6 +14,8 @@
+ #include <asm/traps.h>
+ #include <asm/irq_regs.h>
+
++#include <uapi/asm/kvm.h>
++
+ #include <linux/hardirq.h>
+ #include <linux/pkeys.h>
+ #include <linux/vmalloc.h>
+@@ -232,7 +234,20 @@ bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu)
+ gfpu->fpstate = fpstate;
+ gfpu->xfeatures = fpu_user_cfg.default_features;
+ gfpu->perm = fpu_user_cfg.default_features;
+- gfpu->uabi_size = fpu_user_cfg.default_size;
++
++ /*
++ * KVM sets the FP+SSE bits in the XSAVE header when copying FPU state
++ * to userspace, even when XSAVE is unsupported, so that restoring FPU
++ * state on a different CPU that does support XSAVE can cleanly load
++ * the incoming state using its natural XSAVE. In other words, KVM's
++ * uABI size may be larger than this host's default size. Conversely,
++ * the default size should never be larger than KVM's base uABI size;
++ * all features that can expand the uABI size must be opt-in.
++ */
++ gfpu->uabi_size = sizeof(struct kvm_xsave);
++ if (WARN_ON_ONCE(fpu_user_cfg.default_size > gfpu->uabi_size))
++ gfpu->uabi_size = fpu_user_cfg.default_size;
++
+ fpu_init_guest_permissions(gfpu);
+
+ return true;
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 4c2a158bb6c4f..015df76bc9145 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -191,7 +191,7 @@ void kvm_async_pf_task_wake(u32 token)
+ {
+ u32 key = hash_32(token, KVM_TASK_SLEEP_HASHBITS);
+ struct kvm_task_sleep_head *b = &async_pf_sleepers[key];
+- struct kvm_task_sleep_node *n;
++ struct kvm_task_sleep_node *n, *dummy = NULL;
+
+ if (token == ~0) {
+ apf_task_wake_all();
+@@ -203,28 +203,41 @@ again:
+ n = _find_apf_task(b, token);
+ if (!n) {
+ /*
+- * async PF was not yet handled.
+- * Add dummy entry for the token.
++ * Async #PF not yet handled, add a dummy entry for the token.
++ * Allocating the token must be done outside of the raw lock
++ * as the allocator is preemptible on PREEMPT_RT kernels.
+ */
+- n = kzalloc(sizeof(*n), GFP_ATOMIC);
+- if (!n) {
++ if (!dummy) {
++ raw_spin_unlock(&b->lock);
++ dummy = kzalloc(sizeof(*dummy), GFP_ATOMIC);
++
+ /*
+- * Allocation failed! Busy wait while other cpu
+- * handles async PF.
++ * Continue looping on allocation failure, eventually
++ * the async #PF will be handled and allocating a new
++ * node will be unnecessary.
++ */
++ if (!dummy)
++ cpu_relax();
++
++ /*
++ * Recheck for async #PF completion before enqueueing
++ * the dummy token to avoid duplicate list entries.
+ */
+- raw_spin_unlock(&b->lock);
+- cpu_relax();
+ goto again;
+ }
+- n->token = token;
+- n->cpu = smp_processor_id();
+- init_swait_queue_head(&n->wq);
+- hlist_add_head(&n->link, &b->list);
++ dummy->token = token;
++ dummy->cpu = smp_processor_id();
++ init_swait_queue_head(&dummy->wq);
++ hlist_add_head(&dummy->link, &b->list);
++ dummy = NULL;
+ } else {
+ apf_task_wake_one(n);
+ }
+ raw_spin_unlock(&b->lock);
+- return;
++
++ /* A dummy token might be allocated and ultimately not used. */
++ if (dummy)
++ kfree(dummy);
+ }
+ EXPORT_SYMBOL_GPL(kvm_async_pf_task_wake);
+
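
The kvm_async_pf_task_wake() rework is the standard answer to "cannot allocate while holding a raw spinlock on PREEMPT_RT": drop the lock, allocate, retake the lock, and redo the lookup from scratch, because another CPU may have resolved the situation in between; an allocation that lost the race is simply freed at the end. A pthread sketch of the shape, with a mutex standing in for the raw spinlock and a single pointer standing in for the hash bucket:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static void *slot;      /* protected by 'lock' */

    static void install_if_absent(void)
    {
            void *fresh = NULL;

            pthread_mutex_lock(&lock);
    again:
            if (!slot) {
                    if (!fresh) {
                            /* No allocation under the lock: drop it,
                             * allocate, retake, then recheck, since the
                             * world may have changed meanwhile. */
                            pthread_mutex_unlock(&lock);
                            fresh = malloc(64);
                            pthread_mutex_lock(&lock);
                            goto again;
                    }
                    slot = fresh;
                    fresh = NULL;
            }
            pthread_mutex_unlock(&lock);
            free(fresh);    /* allocated but ultimately not used */
    }

    int main(void)
    {
            install_if_absent();
            printf("%p\n", slot);
            return 0;
    }
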
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 495329ae6b1b2..f03fa385b2e28 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -1894,17 +1894,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
+ &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)]) \
+ if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
+
+-static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
++static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+ struct list_head *invalid_list)
+ {
+ int ret = vcpu->arch.mmu->sync_page(vcpu, sp);
+
+- if (ret < 0) {
++ if (ret < 0)
+ kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
+- return false;
+- }
+-
+- return !!ret;
++ return ret;
+ }
+
+ static bool kvm_mmu_remote_flush_or_zap(struct kvm *kvm,
+@@ -2033,7 +2030,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
+
+ for_each_sp(pages, sp, parents, i) {
+ kvm_unlink_unsync_page(vcpu->kvm, sp);
+- flush |= kvm_sync_page(vcpu, sp, &invalid_list);
++ flush |= kvm_sync_page(vcpu, sp, &invalid_list) > 0;
+ mmu_pages_clear_parents(&parents);
+ }
+ if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
+@@ -2074,6 +2071,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
+ struct hlist_head *sp_list;
+ unsigned quadrant;
+ struct kvm_mmu_page *sp;
++ int ret;
+ int collisions = 0;
+ LIST_HEAD(invalid_list);
+
+@@ -2126,11 +2124,13 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
+ * If the sync fails, the page is zapped. If so, break
+ * in order to rebuild it.
+ */
+- if (!kvm_sync_page(vcpu, sp, &invalid_list))
++ ret = kvm_sync_page(vcpu, sp, &invalid_list);
++ if (ret < 0)
+ break;
+
+ WARN_ON(!list_empty(&invalid_list));
+- kvm_flush_remote_tlbs(vcpu->kvm);
++ if (ret > 0)
++ kvm_flush_remote_tlbs(vcpu->kvm);
+ }
+
+ __clear_sp_write_flooding_count(sp);
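
Widening kvm_sync_page() from bool to int lets callers tell three outcomes apart: negative means the page was zapped (rebuild), zero means synced with nothing to flush, positive means a TLB flush is required; the bool version had collapsed "zapped" into "no flush". Note how the hunk accumulates with '> 0' so a negative return cannot set the flush flag. A sketch of consuming such a tri-state return:

    #include <stdio.h>

    /* <0: object destroyed; 0: ok, no flush; >0: ok, flush needed. */
    static int sync_page(int outcome)
    {
            return outcome;         /* stand-in for the real sync logic */
    }

    int main(void)
    {
            int states[] = { -1, 0, 1 };
            int flush = 0, i, ret;

            for (i = 0; i < 3; i++) {
                    ret = sync_page(states[i]);
                    if (ret < 0) {
                            printf("zapped, rebuild\n");
                            continue;
                    }
                    flush |= ret > 0;       /* negatives never flush */
            }
            printf("flush=%d\n", flush);
            return 0;
    }
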
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 3821d5140ea31..8aba3c5be4627 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -144,42 +144,6 @@ static bool FNAME(is_rsvd_bits_set)(struct kvm_mmu *mmu, u64 gpte, int level)
+ FNAME(is_bad_mt_xwr)(&mmu->guest_rsvd_check, gpte);
+ }
+
+-static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+- pt_element_t __user *ptep_user, unsigned index,
+- pt_element_t orig_pte, pt_element_t new_pte)
+-{
+- signed char r;
+-
+- if (!user_access_begin(ptep_user, sizeof(pt_element_t)))
+- return -EFAULT;
+-
+-#ifdef CMPXCHG
+- asm volatile("1:" LOCK_PREFIX CMPXCHG " %[new], %[ptr]\n"
+- "setnz %b[r]\n"
+- "2:"
+- _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %k[r])
+- : [ptr] "+m" (*ptep_user),
+- [old] "+a" (orig_pte),
+- [r] "=q" (r)
+- : [new] "r" (new_pte)
+- : "memory");
+-#else
+- asm volatile("1:" LOCK_PREFIX "cmpxchg8b %[ptr]\n"
+- "setnz %b[r]\n"
+- "2:"
+- _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %k[r])
+- : [ptr] "+m" (*ptep_user),
+- [old] "+A" (orig_pte),
+- [r] "=q" (r)
+- : [new_lo] "b" ((u32)new_pte),
+- [new_hi] "c" ((u32)(new_pte >> 32))
+- : "memory");
+-#endif
+-
+- user_access_end();
+- return r;
+-}
+-
+ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
+ struct kvm_mmu_page *sp, u64 *spte,
+ u64 gpte)
+@@ -278,7 +242,7 @@ static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
+ if (unlikely(!walker->pte_writable[level - 1]))
+ continue;
+
+- ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index, orig_pte, pte);
++ ret = __try_cmpxchg_user(ptep_user, &orig_pte, pte, fault);
+ if (ret)
+ return ret;
+
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index 39d280e7e80ef..26140579456b1 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -790,9 +790,6 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
+ struct kvm_host_map map;
+ int rc;
+
+- /* Triple faults in L2 should never escape. */
+- WARN_ON_ONCE(kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu));
+-
+ rc = kvm_vcpu_map(vcpu, gpa_to_gfn(svm->nested.vmcb12_gpa), &map);
+ if (rc) {
+ if (rc == -EINVAL)
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 76e6411d4dde1..730d887f53ac2 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -684,7 +684,7 @@ static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ if (params.len > SEV_FW_BLOB_MAX_SIZE)
+ return -EINVAL;
+
+- blob = kmalloc(params.len, GFP_KERNEL_ACCOUNT);
++ blob = kzalloc(params.len, GFP_KERNEL_ACCOUNT);
+ if (!blob)
+ return -ENOMEM;
+
+@@ -804,7 +804,7 @@ static int __sev_dbg_decrypt_user(struct kvm *kvm, unsigned long paddr,
+ if (!IS_ALIGNED(dst_paddr, 16) ||
+ !IS_ALIGNED(paddr, 16) ||
+ !IS_ALIGNED(size, 16)) {
+- tpage = (void *)alloc_page(GFP_KERNEL);
++ tpage = (void *)alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!tpage)
+ return -ENOMEM;
+
+@@ -1090,7 +1090,7 @@ static int sev_get_attestation_report(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ if (params.len > SEV_FW_BLOB_MAX_SIZE)
+ return -EINVAL;
+
+- blob = kmalloc(params.len, GFP_KERNEL_ACCOUNT);
++ blob = kzalloc(params.len, GFP_KERNEL_ACCOUNT);
+ if (!blob)
+ return -ENOMEM;
+
+@@ -1172,7 +1172,7 @@ static int sev_send_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ return -EINVAL;
+
+ /* allocate the memory to hold the session data blob */
+- session_data = kmalloc(params.session_len, GFP_KERNEL_ACCOUNT);
++ session_data = kzalloc(params.session_len, GFP_KERNEL_ACCOUNT);
+ if (!session_data)
+ return -ENOMEM;
+
+@@ -1296,11 +1296,11 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+
+ /* allocate memory for header and transport buffer */
+ ret = -ENOMEM;
+- hdr = kmalloc(params.hdr_len, GFP_KERNEL_ACCOUNT);
++ hdr = kzalloc(params.hdr_len, GFP_KERNEL_ACCOUNT);
+ if (!hdr)
+ goto e_unpin;
+
+- trans_data = kmalloc(params.trans_len, GFP_KERNEL_ACCOUNT);
++ trans_data = kzalloc(params.trans_len, GFP_KERNEL_ACCOUNT);
+ if (!trans_data)
+ goto e_free_hdr;
+
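
The kmalloc()-to-kzalloc() swaps above close a potential information leak: these buffers are filled by firmware and later copied out to userspace, and if the firmware writes less than the requested length, an unzeroed tail would carry stale kernel heap contents across the boundary. The userspace analogue is calloc() (or an explicit memset) before any partial fill that later escapes the process:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
            size_t len = 64;
            /* calloc() zeroes: a short fill cannot expose stale heap data. */
            unsigned char *blob = calloc(1, len);

            if (!blob)
                    return 1;
            memcpy(blob, "hdr", 3);         /* producer fills only a prefix */
            fwrite(blob, 1, len, stdout);   /* tail is guaranteed zero */
            free(blob);
            return 0;
    }
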
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 896ddf7392365..3237d804564b1 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -4518,9 +4518,6 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ /* trying to cancel vmlaunch/vmresume is a bug */
+ WARN_ON_ONCE(vmx->nested.nested_run_pending);
+
+- /* Similarly, triple faults in L2 should never escape. */
+- WARN_ON_ONCE(kvm_check_request(KVM_REQ_TRIPLE_FAULT, vcpu));
+-
+ if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) {
+ /*
+ * KVM_REQ_GET_NESTED_STATE_PAGES is also used to map
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 267d6dc4b8186..c87be7c52cc26 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7858,7 +7858,7 @@ static unsigned int vmx_handle_intel_pt_intr(void)
+ struct kvm_vcpu *vcpu = kvm_get_running_vcpu();
+
+ /* '0' on failure so that the !PT case can use a RET0 static call. */
+- if (!kvm_arch_pmi_in_guest(vcpu))
++ if (!vcpu || !kvm_handling_nmi_from_guest(vcpu))
+ return 0;
+
+ kvm_make_request(KVM_REQ_PMI, vcpu);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 23d176cd12a4f..5204283da7987 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7168,15 +7168,8 @@ static int emulator_write_emulated(struct x86_emulate_ctxt *ctxt,
+ exception, &write_emultor);
+ }
+
+-#define CMPXCHG_TYPE(t, ptr, old, new) \
+- (cmpxchg((t *)(ptr), *(t *)(old), *(t *)(new)) == *(t *)(old))
+-
+-#ifdef CONFIG_X86_64
+-# define CMPXCHG64(ptr, old, new) CMPXCHG_TYPE(u64, ptr, old, new)
+-#else
+-# define CMPXCHG64(ptr, old, new) \
+- (cmpxchg64((u64 *)(ptr), *(u64 *)(old), *(u64 *)(new)) == *(u64 *)(old))
+-#endif
++#define emulator_try_cmpxchg_user(t, ptr, old, new) \
++ (__try_cmpxchg_user((t __user *)(ptr), (t *)(old), *(t *)(new), efault ## t))
+
+ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
+ unsigned long addr,
+@@ -7185,12 +7178,11 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
+ unsigned int bytes,
+ struct x86_exception *exception)
+ {
+- struct kvm_host_map map;
+ struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+ u64 page_line_mask;
++ unsigned long hva;
+ gpa_t gpa;
+- char *kaddr;
+- bool exchanged;
++ int r;
+
+ /* guests cmpxchg8b have to be emulated atomically */
+ if (bytes > 8 || (bytes & (bytes - 1)))
+@@ -7214,31 +7206,32 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
+ if (((gpa + bytes - 1) & page_line_mask) != (gpa & page_line_mask))
+ goto emul_write;
+
+- if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
++ hva = kvm_vcpu_gfn_to_hva(vcpu, gpa_to_gfn(gpa));
++ if (kvm_is_error_hva(hva))
+ goto emul_write;
+
+- kaddr = map.hva + offset_in_page(gpa);
++ hva += offset_in_page(gpa);
+
+ switch (bytes) {
+ case 1:
+- exchanged = CMPXCHG_TYPE(u8, kaddr, old, new);
++ r = emulator_try_cmpxchg_user(u8, hva, old, new);
+ break;
+ case 2:
+- exchanged = CMPXCHG_TYPE(u16, kaddr, old, new);
++ r = emulator_try_cmpxchg_user(u16, hva, old, new);
+ break;
+ case 4:
+- exchanged = CMPXCHG_TYPE(u32, kaddr, old, new);
++ r = emulator_try_cmpxchg_user(u32, hva, old, new);
+ break;
+ case 8:
+- exchanged = CMPXCHG64(kaddr, old, new);
++ r = emulator_try_cmpxchg_user(u64, hva, old, new);
+ break;
+ default:
+ BUG();
+ }
+
+- kvm_vcpu_unmap(vcpu, &map, true);
+-
+- if (!exchanged)
++ if (r < 0)
++ goto emul_write;
++ if (r)
+ return X86EMUL_CMPXCHG_FAILED;
+
+ kvm_page_track_write(vcpu, gpa, new, bytes);
+@@ -8176,7 +8169,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
+ }
+ EXPORT_SYMBOL_GPL(kvm_skip_emulated_instruction);
+
+-static bool kvm_vcpu_check_breakpoint(struct kvm_vcpu *vcpu, int *r)
++static bool kvm_vcpu_check_code_breakpoint(struct kvm_vcpu *vcpu, int *r)
+ {
+ if (unlikely(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) &&
+ (vcpu->arch.guest_debug_dr7 & DR7_BP_EN_MASK)) {
+@@ -8245,25 +8238,23 @@ static bool is_vmware_backdoor_opcode(struct x86_emulate_ctxt *ctxt)
+ }
+
+ /*
+- * Decode to be emulated instruction. Return EMULATION_OK if success.
++ * Decode an instruction for emulation. The caller is responsible for handling
++ * code breakpoints. Note, manually detecting code breakpoints is unnecessary
++ * (and wrong) when emulating on an intercepted fault-like exception[*], as
++ * code breakpoints have higher priority and thus have already been done by
++ * hardware.
++ *
++ * [*] Except #MC, which is higher priority, but KVM should never emulate in
++ * response to a machine check.
+ */
+ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type,
+ void *insn, int insn_len)
+ {
+- int r = EMULATION_OK;
+ struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt;
++ int r;
+
+ init_emulate_ctxt(vcpu);
+
+- /*
+- * We will reenter on the same instruction since we do not set
+- * complete_userspace_io. This does not handle watchpoints yet,
+- * those would be handled in the emulate_ops.
+- */
+- if (!(emulation_type & EMULTYPE_SKIP) &&
+- kvm_vcpu_check_breakpoint(vcpu, &r))
+- return r;
+-
+ r = x86_decode_insn(ctxt, insn, insn_len, emulation_type);
+
+ trace_kvm_emulate_insn_start(vcpu);
+@@ -8296,6 +8287,15 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+ if (!(emulation_type & EMULTYPE_NO_DECODE)) {
+ kvm_clear_exception_queue(vcpu);
+
++ /*
++ * Return immediately if RIP hits a code breakpoint, such #DBs
++ * are fault-like and are higher priority than any faults on
++ * the code fetch itself.
++ */
++ if (!(emulation_type & EMULTYPE_SKIP) &&
++ kvm_vcpu_check_code_breakpoint(vcpu, &r))
++ return r;
++
+ r = x86_decode_emulated_instruction(vcpu, emulation_type,
+ insn, insn_len);
+ if (r != EMULATION_OK) {
+@@ -11655,20 +11655,15 @@ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
+ vcpu_put(vcpu);
+ }
+
+-static void kvm_free_vcpus(struct kvm *kvm)
++static void kvm_unload_vcpu_mmus(struct kvm *kvm)
+ {
+ unsigned long i;
+ struct kvm_vcpu *vcpu;
+
+- /*
+- * Unpin any mmu pages first.
+- */
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ kvm_clear_async_pf_completion_queue(vcpu);
+ kvm_unload_vcpu_mmu(vcpu);
+ }
+-
+- kvm_destroy_vcpus(kvm);
+ }
+
+ void kvm_arch_sync_events(struct kvm *kvm)
+@@ -11774,11 +11769,12 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
+ __x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
+ mutex_unlock(&kvm->slots_lock);
+ }
++ kvm_unload_vcpu_mmus(kvm);
+ static_call_cond(kvm_x86_vm_destroy)(kvm);
+ kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter, &kvm->srcu, 1));
+ kvm_pic_destroy(kvm);
+ kvm_ioapic_destroy(kvm);
+- kvm_free_vcpus(kvm);
++ kvm_destroy_vcpus(kvm);
+ kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
+ kfree(srcu_dereference_check(kvm->arch.pmu_event_filter, &kvm->srcu, 1));
+ kvm_mmu_uninit_vm(kvm);
+diff --git a/crypto/ecrdsa.c b/crypto/ecrdsa.c
+index b32ffcaad9adf..f3c6b5e15e75b 100644
+--- a/crypto/ecrdsa.c
++++ b/crypto/ecrdsa.c
+@@ -113,15 +113,15 @@ static int ecrdsa_verify(struct akcipher_request *req)
+
+ /* Step 1: verify that 0 < r < q, 0 < s < q */
+ if (vli_is_zero(r, ndigits) ||
+- vli_cmp(r, ctx->curve->n, ndigits) == 1 ||
++ vli_cmp(r, ctx->curve->n, ndigits) >= 0 ||
+ vli_is_zero(s, ndigits) ||
+- vli_cmp(s, ctx->curve->n, ndigits) == 1)
++ vli_cmp(s, ctx->curve->n, ndigits) >= 0)
+ return -EKEYREJECTED;
+
+ /* Step 2: calculate hash (h) of the message (passed as input) */
+ /* Step 3: calculate e = h \mod q */
+ vli_from_le64(e, digest, ndigits);
+- if (vli_cmp(e, ctx->curve->n, ndigits) == 1)
++ if (vli_cmp(e, ctx->curve->n, ndigits) >= 0)
+ vli_sub(e, e, ctx->curve->n, ndigits);
+ if (vli_is_zero(e, ndigits))
+ e[0] = 1;
+@@ -137,7 +137,7 @@ static int ecrdsa_verify(struct akcipher_request *req)
+ /* Step 6: calculate point C = z_1P + z_2Q, and R = x_c \mod q */
+ ecc_point_mult_shamir(&cc, z1, &ctx->curve->g, z2, &ctx->pub_key,
+ ctx->curve);
+- if (vli_cmp(cc.x, ctx->curve->n, ndigits) == 1)
++ if (vli_cmp(cc.x, ctx->curve->n, ndigits) >= 0)
+ vli_sub(cc.x, cc.x, ctx->curve->n, ndigits);
+
+ /* Step 7: if R == r signature is valid */
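
The ecrdsa fix turns on vli_cmp()'s tri-state contract (-1, 0 or 1): testing '== 1' only rejects values strictly greater than the group order n, silently accepting r == n or s == n even though the scheme demands 0 < r < n and 0 < s < n; '>= 0' rejects both "greater" and "equal". A sketch of the same range check with a tri-state comparator over plain integers:

    #include <stdio.h>

    /* Tri-state compare, like vli_cmp(): -1, 0 or 1. */
    static int cmp(unsigned long a, unsigned long b)
    {
            return (a > b) - (a < b);
    }

    /* A signature half must satisfy 0 < v < n. */
    static int in_range(unsigned long v, unsigned long n)
    {
            if (v == 0 || cmp(v, n) >= 0)   /* '== 1' would let v == n slip */
                    return 0;
            return 1;
    }

    int main(void)
    {
            unsigned long n = 97;

            printf("%d %d %d\n", in_range(0, n), in_range(n, n),
                   in_range(42, n));        /* 0 0 1 */
            return 0;
    }
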
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index f6e91fb432a3b..eab34e24d9446 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -696,9 +696,9 @@ static int qca_close(struct hci_uart *hu)
+ skb_queue_purge(&qca->tx_wait_q);
+ skb_queue_purge(&qca->txq);
+ skb_queue_purge(&qca->rx_memdump_q);
+- del_timer(&qca->tx_idle_timer);
+- del_timer(&qca->wake_retrans_timer);
+ destroy_workqueue(qca->workqueue);
++ del_timer_sync(&qca->tx_idle_timer);
++ del_timer_sync(&qca->wake_retrans_timer);
+ qca->hu = NULL;
+
+ kfree_skb(qca->rx_skb);
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index 4704fa553098b..04a3e23a4afc7 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -400,7 +400,16 @@ ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id, u32 *value,
+ if (!rc) {
+ out = (struct tpm2_get_cap_out *)
+ &buf.data[TPM_HEADER_SIZE];
+- *value = be32_to_cpu(out->value);
++ /*
++ * To prevent failing boot up of some systems, Infineon TPM2.0
++ * returns SUCCESS on TPM2_Startup in field upgrade mode. Also
++ * the TPM2_Getcapability command returns a zero length list
++ * in field upgrade mode.
++ */
++ if (be32_to_cpu(out->property_cnt) > 0)
++ *value = be32_to_cpu(out->value);
++ else
++ rc = -ENODATA;
+ }
+ tpm_buf_destroy(&buf);
+ return rc;
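
The tpm2_get_tpm_pt() fix treats the response's property count as authoritative before touching the value field: a zero-length capability list (as Infineon parts return in field-upgrade mode) means there is no datum to read, so the function now reports -ENODATA instead of handing back whatever happened to be in the buffer. The general shape, over a hypothetical length-prefixed reply structure:

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    struct reply {                  /* hypothetical length-prefixed reply */
            uint32_t count;
            uint32_t value[4];
    };

    static int get_first(const struct reply *r, uint32_t *out)
    {
            if (r->count == 0)
                    return -ENODATA;        /* empty list: value[] is junk */
            *out = r->value[0];
            return 0;
    }

    int main(void)
    {
            struct reply empty = { .count = 0 };
            uint32_t v = 0;

            printf("%d\n", get_first(&empty, &v));  /* -ENODATA, v untouched */
            return 0;
    }
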
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 3af4c07a9342f..d3989b257f422 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -681,6 +681,7 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
+ ibmvtpm->rtce_buf != NULL,
+ HZ)) {
++ rc = -ENODEV;
+ dev_err(dev, "CRQ response timed out\n");
+ goto init_irq_cleanup;
+ }
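
The one-line ibmvtpm fix is the usual goto-cleanup hazard: rc still held 0 from an earlier successful call when the wait timed out, so the error path returned success. The discipline is to assign the error code immediately before every jump to the cleanup label; a sketch of the shape:

    #include <errno.h>
    #include <stdio.h>

    static int probe(int waited_ok)
    {
            int rc = 0;

            /* ... earlier steps leave rc == 0 on success ... */
            if (!waited_ok) {
                    rc = -ENODEV;   /* set before jumping */
                    fprintf(stderr, "response timed out\n");
                    goto cleanup;
            }
            return 0;
    cleanup:
            /* undo partial initialization here */
            return rc;
    }

    int main(void)
    {
            printf("%d\n", probe(0));       /* -ENODEV, not a bogus 0 */
            return 0;
    }
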
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index ca0361b2dbb07..f87aa2169e5f5 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -609,6 +609,13 @@ static bool check_version(struct fsl_mc_version *mc_version, u32 major,
+ }
+ #endif
+
++static bool needs_entropy_delay_adjustment(void)
++{
++ if (of_machine_is_compatible("fsl,imx6sx"))
++ return true;
++ return false;
++}
++
+ /* Probe routine for CAAM top (controller) level */
+ static int caam_probe(struct platform_device *pdev)
+ {
+@@ -855,6 +862,8 @@ static int caam_probe(struct platform_device *pdev)
+ * Also, if a handle was instantiated, do not change
+ * the TRNG parameters.
+ */
++ if (needs_entropy_delay_adjustment())
++ ent_delay = 12000;
+ if (!(ctrlpriv->rng4_sh_init || inst_handles)) {
+ dev_info(dev,
+ "Entropy delay = %u\n",
+@@ -871,6 +880,15 @@ static int caam_probe(struct platform_device *pdev)
+ */
+ ret = instantiate_rng(dev, inst_handles,
+ gen_sk);
++ /*
++ * Entropy delay is determined via TRNG characterization.
++ * TRNG characterization is run across different voltages
++ * and temperatures.
++ * If the worst-case value for ent_dly is identified,
++ * the loop can be skipped for that platform.
++ */
++ if (needs_entropy_delay_adjustment())
++ break;
+ if (ret == -EAGAIN)
+ /*
+ * if here, the loop will rerun,
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index 12120474c80c7..b97a079a56aab 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -2876,7 +2876,7 @@ static void ilk_compute_wm_level(const struct drm_i915_private *dev_priv,
+ }
+
+ static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
+- u16 wm[8])
++ u16 wm[])
+ {
+ struct intel_uncore *uncore = &dev_priv->uncore;
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 78bd3ddda4426..aca7909c726d3 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -760,6 +760,7 @@
+ #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085
+ #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3
+ #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5
++#define USB_DEVICE_ID_LENOVO_X12_TAB 0x60fe
+ #define USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E 0x600e
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d
+ #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 99eabfb4145b5..6bb3890b0f2c9 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2034,6 +2034,12 @@ static const struct hid_device_id mt_devices[] = {
+ USB_VENDOR_ID_LENOVO,
+ USB_DEVICE_ID_LENOVO_X1_TAB3) },
+
++ /* Lenovo X12 TAB Gen 1 */
++ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8,
++ USB_VENDOR_ID_LENOVO,
++ USB_DEVICE_ID_LENOVO_X12_TAB) },
++
+ /* MosArt panels */
+ { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE,
+ MT_USB_DEVICE(USB_VENDOR_ID_ASUS,
+@@ -2178,6 +2184,9 @@ static const struct hid_device_id mt_devices[] = {
+ { .driver_data = MT_CLS_GOOGLE,
+ HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY, USB_VENDOR_ID_GOOGLE,
+ USB_DEVICE_ID_GOOGLE_TOUCH_ROSE) },
++ { .driver_data = MT_CLS_GOOGLE,
++ HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, USB_VENDOR_ID_GOOGLE,
++ USB_DEVICE_ID_GOOGLE_WHISKERS) },
+
+ /* Generic MT device */
+ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH, HID_ANY_ID, HID_ANY_ID) },
+diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c
+index f4820fd3dc13e..9c78965360218 100644
+--- a/drivers/i2c/busses/i2c-ismt.c
++++ b/drivers/i2c/busses/i2c-ismt.c
+@@ -82,6 +82,7 @@
+
+ #define ISMT_DESC_ENTRIES 2 /* number of descriptor entries */
+ #define ISMT_MAX_RETRIES 3 /* number of SMBus retries to attempt */
++#define ISMT_LOG_ENTRIES 3 /* number of interrupt cause log entries */
+
+ /* Hardware Descriptor Constants - Control Field */
+ #define ISMT_DESC_CWRL 0x01 /* Command/Write Length */
+@@ -175,6 +176,8 @@ struct ismt_priv {
+ u8 head; /* ring buffer head pointer */
+ struct completion cmp; /* interrupt completion */
+ u8 buffer[I2C_SMBUS_BLOCK_MAX + 16]; /* temp R/W data buffer */
++ dma_addr_t log_dma;
++ u32 *log;
+ };
+
+ static const struct pci_device_id ismt_ids[] = {
+@@ -411,6 +414,9 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr,
+ memset(desc, 0, sizeof(struct ismt_desc));
+ desc->tgtaddr_rw = ISMT_DESC_ADDR_RW(addr, read_write);
+
++ /* Always clear the log entries */
++ memset(priv->log, 0, ISMT_LOG_ENTRIES * sizeof(u32));
++
+ /* Initialize common control bits */
+ if (likely(pci_dev_msi_enabled(priv->pci_dev)))
+ desc->control = ISMT_DESC_INT | ISMT_DESC_FAIR;
+@@ -522,6 +528,9 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr,
+
+ case I2C_SMBUS_BLOCK_PROC_CALL:
+ dev_dbg(dev, "I2C_SMBUS_BLOCK_PROC_CALL\n");
++ if (data->block[0] > I2C_SMBUS_BLOCK_MAX)
++ return -EINVAL;
++
+ dma_size = I2C_SMBUS_BLOCK_MAX;
+ desc->tgtaddr_rw = ISMT_DESC_ADDR_RW(addr, 1);
+ desc->wr_len_cmd = data->block[0] + 1;
+@@ -708,6 +717,8 @@ static void ismt_hw_init(struct ismt_priv *priv)
+ /* initialize the Master Descriptor Base Address (MDBA) */
+ writeq(priv->io_rng_dma, priv->smba + ISMT_MSTR_MDBA);
+
++ writeq(priv->log_dma, priv->smba + ISMT_GR_SMTICL);
++
+ /* initialize the Master Control Register (MCTRL) */
+ writel(ISMT_MCTRL_MEIE, priv->smba + ISMT_MSTR_MCTRL);
+
+@@ -795,6 +806,12 @@ static int ismt_dev_init(struct ismt_priv *priv)
+ priv->head = 0;
+ init_completion(&priv->cmp);
+
++ priv->log = dmam_alloc_coherent(&priv->pci_dev->dev,
++ ISMT_LOG_ENTRIES * sizeof(u32),
++ &priv->log_dma, GFP_KERNEL);
++ if (!priv->log)
++ return -ENOMEM;
++
+ return 0;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-thunderx-pcidrv.c b/drivers/i2c/busses/i2c-thunderx-pcidrv.c
+index 12c90aa0900e6..a77cd86fe75ed 100644
+--- a/drivers/i2c/busses/i2c-thunderx-pcidrv.c
++++ b/drivers/i2c/busses/i2c-thunderx-pcidrv.c
+@@ -213,6 +213,7 @@ static int thunder_i2c_probe_pci(struct pci_dev *pdev,
+ i2c->adap.bus_recovery_info = &octeon_i2c_recovery_info;
+ i2c->adap.dev.parent = dev;
+ i2c->adap.dev.of_node = pdev->dev.of_node;
++ i2c->adap.dev.fwnode = dev->fwnode;
+ snprintf(i2c->adap.name, sizeof(i2c->adap.name),
+ "Cavium ThunderX i2c adapter at %s", dev_name(dev));
+ i2c_set_adapdata(&i2c->adap, i2c);
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index f51aea71cb036..a97ac274d8754 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -3449,6 +3449,11 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
+ return DM_MAPIO_SUBMITTED;
+ }
+
++static char hex2asc(unsigned char c)
++{
++ return c + '0' + ((unsigned)(9 - c) >> 4 & 0x27);
++}
++
+ static void crypt_status(struct dm_target *ti, status_type_t type,
+ unsigned status_flags, char *result, unsigned maxlen)
+ {
+@@ -3467,9 +3472,12 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
+ if (cc->key_size > 0) {
+ if (cc->key_string)
+ DMEMIT(":%u:%s", cc->key_size, cc->key_string);
+- else
+- for (i = 0; i < cc->key_size; i++)
+- DMEMIT("%02x", cc->key[i]);
++ else {
++ for (i = 0; i < cc->key_size; i++) {
++ DMEMIT("%c%c", hex2asc(cc->key[i] >> 4),
++ hex2asc(cc->key[i] & 0xf));
++ }
++ }
+ } else
+ DMEMIT("-");
+
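
The hex2asc() helper above is branch-free: for c <= 9 the term
(unsigned)(9 - c) >> 4 & 0x27 is 0, so the result is '0' + c; for c >= 10 the
subtraction wraps around, the shift-and-mask yields 0x27, and c + '0' + 0x27
lands exactly on 'a' + (c - 10). The apparent intent is to format the key
without data-dependent branches. A minimal userspace check (not part of the
patch):

  #include <assert.h>
  #include <stdio.h>

  /* Same expression as the patch: branchless nibble -> lowercase hex digit. */
  static char hex2asc(unsigned char c)
  {
          return c + '0' + ((unsigned)(9 - c) >> 4 & 0x27);
  }

  int main(void)
  {
          for (unsigned char c = 0; c < 16; c++)
                  assert(hex2asc(c) == "0123456789abcdef"[c]);
          printf("hex2asc OK\n");
          return 0;
  }
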
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index ffe50be8b6875..afd127eef0c1d 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4495,8 +4495,6 @@ try_smaller_buffer:
+ }
+
+ if (should_write_sb) {
+- int r;
+-
+ init_journal(ic, 0, ic->journal_sections, 0);
+ r = dm_integrity_failed(ic);
+ if (unlikely(r)) {
+diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
+index 0e039a8c0bf2e..a3f2050b9c9b4 100644
+--- a/drivers/md/dm-stats.c
++++ b/drivers/md/dm-stats.c
+@@ -225,6 +225,7 @@ void dm_stats_cleanup(struct dm_stats *stats)
+ atomic_read(&shared->in_flight[READ]),
+ atomic_read(&shared->in_flight[WRITE]));
+ }
++ cond_resched();
+ }
+ dm_stat_free(&s->rcu_head);
+ }
+@@ -330,6 +331,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ for (ni = 0; ni < n_entries; ni++) {
+ atomic_set(&s->stat_shared[ni].in_flight[READ], 0);
+ atomic_set(&s->stat_shared[ni].in_flight[WRITE], 0);
++ cond_resched();
+ }
+
+ if (s->n_histogram_entries) {
+@@ -342,6 +344,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ for (ni = 0; ni < n_entries; ni++) {
+ s->stat_shared[ni].tmp.histogram = hi;
+ hi += s->n_histogram_entries + 1;
++ cond_resched();
+ }
+ }
+
+@@ -362,6 +365,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ for (ni = 0; ni < n_entries; ni++) {
+ p[ni].histogram = hi;
+ hi += s->n_histogram_entries + 1;
++ cond_resched();
+ }
+ }
+ }
+@@ -497,6 +501,7 @@ static int dm_stats_list(struct dm_stats *stats, const char *program,
+ }
+ DMEMIT("\n");
+ }
++ cond_resched();
+ }
+ mutex_unlock(&stats->mutex);
+
+@@ -774,6 +779,7 @@ static void __dm_stat_clear(struct dm_stat *s, size_t idx_start, size_t idx_end,
+ local_irq_enable();
+ }
+ }
++ cond_resched();
+ }
+ }
+
+@@ -889,6 +895,8 @@ static int dm_stats_print(struct dm_stats *stats, int id,
+
+ if (unlikely(sz + 1 >= maxlen))
+ goto buffer_overflow;
++
++ cond_resched();
+ }
+
+ if (clear)
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index 80133aae0db37..d6dbd47492a85 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -1312,6 +1312,7 @@ bad:
+
+ static struct target_type verity_target = {
+ .name = "verity",
++ .features = DM_TARGET_IMMUTABLE,
+ .version = {1, 8, 0},
+ .module = THIS_MODULE,
+ .ctr = verity_ctr,
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index ffe720c73b0a5..ad7a84a82938f 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -686,17 +686,17 @@ int raid5_calc_degraded(struct r5conf *conf)
+ return degraded;
+ }
+
+-static int has_failed(struct r5conf *conf)
++static bool has_failed(struct r5conf *conf)
+ {
+- int degraded;
++ int degraded = conf->mddev->degraded;
+
+- if (conf->mddev->reshape_position == MaxSector)
+- return conf->mddev->degraded > conf->max_degraded;
++ if (test_bit(MD_BROKEN, &conf->mddev->flags))
++ return true;
+
+- degraded = raid5_calc_degraded(conf);
+- if (degraded > conf->max_degraded)
+- return 1;
+- return 0;
++ if (conf->mddev->reshape_position != MaxSector)
++ degraded = raid5_calc_degraded(conf);
++
++ return degraded > conf->max_degraded;
+ }
+
+ struct stripe_head *
+@@ -2877,34 +2877,31 @@ static void raid5_error(struct mddev *mddev, struct md_rdev *rdev)
+ unsigned long flags;
+ pr_debug("raid456: error called\n");
+
++ pr_crit("md/raid:%s: Disk failure on %s, disabling device.\n",
++ mdname(mddev), bdevname(rdev->bdev, b));
++
+ spin_lock_irqsave(&conf->device_lock, flags);
++ set_bit(Faulty, &rdev->flags);
++ clear_bit(In_sync, &rdev->flags);
++ mddev->degraded = raid5_calc_degraded(conf);
+
+- if (test_bit(In_sync, &rdev->flags) &&
+- mddev->degraded == conf->max_degraded) {
+- /*
+- * Don't allow to achieve failed state
+- * Don't try to recover this device
+- */
++ if (has_failed(conf)) {
++ set_bit(MD_BROKEN, &conf->mddev->flags);
+ conf->recovery_disabled = mddev->recovery_disabled;
+- spin_unlock_irqrestore(&conf->device_lock, flags);
+- return;
++
++ pr_crit("md/raid:%s: Cannot continue operation (%d/%d failed).\n",
++ mdname(mddev), mddev->degraded, conf->raid_disks);
++ } else {
++ pr_crit("md/raid:%s: Operation continuing on %d devices.\n",
++ mdname(mddev), conf->raid_disks - mddev->degraded);
+ }
+
+- set_bit(Faulty, &rdev->flags);
+- clear_bit(In_sync, &rdev->flags);
+- mddev->degraded = raid5_calc_degraded(conf);
+ spin_unlock_irqrestore(&conf->device_lock, flags);
+ set_bit(MD_RECOVERY_INTR, &mddev->recovery);
+
+ set_bit(Blocked, &rdev->flags);
+ set_mask_bits(&mddev->sb_flags, 0,
+ BIT(MD_SB_CHANGE_DEVS) | BIT(MD_SB_CHANGE_PENDING));
+- pr_crit("md/raid:%s: Disk failure on %s, disabling device.\n"
+- "md/raid:%s: Operation continuing on %d devices.\n",
+- mdname(mddev),
+- bdevname(rdev->bdev, b),
+- mdname(mddev),
+- conf->raid_disks - mddev->degraded);
+ r5c_update_on_rdev_error(mddev, rdev);
+ }
+
+diff --git a/drivers/media/i2c/imx412.c b/drivers/media/i2c/imx412.c
+index be3f6ea555597..84279a6808730 100644
+--- a/drivers/media/i2c/imx412.c
++++ b/drivers/media/i2c/imx412.c
+@@ -1011,7 +1011,7 @@ static int imx412_power_on(struct device *dev)
+ struct imx412 *imx412 = to_imx412(sd);
+ int ret;
+
+- gpiod_set_value_cansleep(imx412->reset_gpio, 1);
++ gpiod_set_value_cansleep(imx412->reset_gpio, 0);
+
+ ret = clk_prepare_enable(imx412->inclk);
+ if (ret) {
+@@ -1024,7 +1024,7 @@ static int imx412_power_on(struct device *dev)
+ return 0;
+
+ error_reset:
+- gpiod_set_value_cansleep(imx412->reset_gpio, 0);
++ gpiod_set_value_cansleep(imx412->reset_gpio, 1);
+
+ return ret;
+ }
+@@ -1040,10 +1040,10 @@ static int imx412_power_off(struct device *dev)
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct imx412 *imx412 = to_imx412(sd);
+
+- gpiod_set_value_cansleep(imx412->reset_gpio, 0);
+-
+ clk_disable_unprepare(imx412->inclk);
+
++ gpiod_set_value_cansleep(imx412->reset_gpio, 1);
++
+ return 0;
+ }
+
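
gpiod_set_value_cansleep() takes a logical level, so assuming the sensor's
reset line is described as active-low in firmware (e.g. GPIO_ACTIVE_LOW in the
device tree), the corrected calls above assert reset (logical 1) while the
sensor is powered off and deassert it (logical 0) before use. A generic sketch
of that convention (illustrative, not the driver's actual probe code):

  /* With an active-low reset line, logical 1 drives the pin low,
   * i.e. holds the device in reset. */
  struct gpio_desc *reset;

  reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); /* start asserted */
  /* ... enable clocks and supplies ... */
  gpiod_set_value_cansleep(reset, 0); /* release reset before streaming */
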
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index caf48023f8ea5..5231818943c6e 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1928,6 +1928,11 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ /* AST2400 doesn't have working HW checksum generation */
+ if (np && (of_device_is_compatible(np, "aspeed,ast2400-mac")))
+ netdev->hw_features &= ~NETIF_F_HW_CSUM;
++
++ /* AST2600 tx checksum with NCSI is broken */
++ if (priv->use_ncsi && of_device_is_compatible(np, "aspeed,ast2600-mac"))
++ netdev->hw_features &= ~NETIF_F_HW_CSUM;
++
+ if (np && of_get_property(np, "no-hw-checksum", NULL))
+ netdev->hw_features &= ~(NETIF_F_HW_CSUM | NETIF_F_RXCSUM);
+ netdev->features |= netdev->hw_features;
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 2ecfc17544a6a..dde55ccae9d73 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -723,13 +723,15 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
+
+ if (endpoint->data->aggregation) {
+ if (!endpoint->toward_ipa) {
++ u32 buffer_size;
+ bool close_eof;
+ u32 limit;
+
+ val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK);
+ val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK);
+
+- limit = ipa_aggr_size_kb(IPA_RX_BUFFER_SIZE);
++ buffer_size = IPA_RX_BUFFER_SIZE - NET_SKB_PAD;
++ limit = ipa_aggr_size_kb(buffer_size);
+ val |= aggr_byte_limit_encoded(version, limit);
+
+ limit = IPA_AGGR_TIME_LIMIT;
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index a491db46e3bd4..d9f6367b9993d 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -2787,13 +2787,14 @@ void pn53x_common_clean(struct pn533 *priv)
+ {
+ struct pn533_cmd *cmd, *n;
+
++	/* delete the timer before cleaning up the worker */
++ del_timer_sync(&priv->listen_timer);
++
+ flush_delayed_work(&priv->poll_work);
+ destroy_workqueue(priv->wq);
+
+ skb_queue_purge(&priv->resp_q);
+
+- del_timer(&priv->listen_timer);
+-
+ list_for_each_entry_safe(cmd, n, &priv->cmd_queue, queue) {
+ list_del(&cmd->queue);
+ kfree(cmd);
+diff --git a/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c b/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c
+index 2801ca7062732..68a5b627fb9b2 100644
+--- a/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c
++++ b/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c
+@@ -204,7 +204,7 @@ static const struct sunxi_desc_pin suniv_f1c100s_pins[] = {
+ SUNXI_FUNCTION(0x0, "gpio_in"),
+ SUNXI_FUNCTION(0x1, "gpio_out"),
+ SUNXI_FUNCTION(0x2, "lcd"), /* D20 */
+- SUNXI_FUNCTION(0x3, "lvds1"), /* RX */
++ SUNXI_FUNCTION(0x3, "uart2"), /* RX */
+ SUNXI_FUNCTION_IRQ_BANK(0x6, 0, 14)),
+ SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 15),
+ SUNXI_FUNCTION(0x0, "gpio_in"),
+diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
+index 03f1423071749..9f42f25fab920 100644
+--- a/fs/exfat/balloc.c
++++ b/fs/exfat/balloc.c
+@@ -148,7 +148,9 @@ int exfat_set_bitmap(struct inode *inode, unsigned int clu, bool sync)
+ struct super_block *sb = inode->i_sb;
+ struct exfat_sb_info *sbi = EXFAT_SB(sb);
+
+- WARN_ON(clu < EXFAT_FIRST_CLUSTER);
++ if (!is_valid_cluster(sbi, clu))
++ return -EINVAL;
++
+ ent_idx = CLUSTER_TO_BITMAP_ENT(clu);
+ i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx);
+ b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx);
+@@ -166,7 +168,9 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync)
+ struct exfat_sb_info *sbi = EXFAT_SB(sb);
+ struct exfat_mount_options *opts = &sbi->options;
+
+- WARN_ON(clu < EXFAT_FIRST_CLUSTER);
++ if (!is_valid_cluster(sbi, clu))
++ return;
++
+ ent_idx = CLUSTER_TO_BITMAP_ENT(clu);
+ i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx);
+ b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx);
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index 619e5b4bed100..0f2b1b196fa25 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -380,6 +380,12 @@ static inline int exfat_sector_to_cluster(struct exfat_sb_info *sbi,
+ EXFAT_RESERVED_CLUSTERS;
+ }
+
++static inline bool is_valid_cluster(struct exfat_sb_info *sbi,
++ unsigned int clus)
++{
++ return clus >= EXFAT_FIRST_CLUSTER && clus < sbi->num_clusters;
++}
++
+ /* super.c */
+ int exfat_set_volume_dirty(struct super_block *sb);
+ int exfat_clear_volume_dirty(struct super_block *sb);
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index a3464e56a7e16..421c273531049 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -81,12 +81,6 @@ int exfat_ent_set(struct super_block *sb, unsigned int loc,
+ return 0;
+ }
+
+-static inline bool is_valid_cluster(struct exfat_sb_info *sbi,
+- unsigned int clus)
+-{
+- return clus >= EXFAT_FIRST_CLUSTER && clus < sbi->num_clusters;
+-}
+-
+ int exfat_ent_get(struct super_block *sb, unsigned int loc,
+ unsigned int *content)
+ {
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 465e39ff018d4..b3b8ac4a9ace6 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -827,6 +827,7 @@ static inline bool nfs_error_is_fatal_on_server(int err)
+ case 0:
+ case -ERESTARTSYS:
+ case -EINTR:
++ case -ENOMEM:
+ return false;
+ }
+ return nfs_error_is_fatal(err);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index f3b71fd1d1341..8ad64b207b58a 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -7330,16 +7330,12 @@ nfsd4_release_lockowner(struct svc_rqst *rqstp,
+ if (sop->so_is_open_owner || !same_owner_str(sop, owner))
+ continue;
+
+- /* see if there are still any locks associated with it */
+- lo = lockowner(sop);
+- list_for_each_entry(stp, &sop->so_stateids, st_perstateowner) {
+- if (check_for_locks(stp->st_stid.sc_file, lo)) {
+- status = nfserr_locks_held;
+- spin_unlock(&clp->cl_lock);
+- return status;
+- }
++ if (atomic_read(&sop->so_count) != 1) {
++ spin_unlock(&clp->cl_lock);
++ return nfserr_locks_held;
+ }
+
++ lo = lockowner(sop);
+ nfs4_get_stateowner(sop);
+ break;
+ }
+diff --git a/fs/ntfs3/super.c b/fs/ntfs3/super.c
+index 29813200c7af9..e81246d9a08ea 100644
+--- a/fs/ntfs3/super.c
++++ b/fs/ntfs3/super.c
+@@ -668,9 +668,11 @@ static u32 format_size_gb(const u64 bytes, u32 *mb)
+
+ static u32 true_sectors_per_clst(const struct NTFS_BOOT *boot)
+ {
+- return boot->sectors_per_clusters <= 0x80
+- ? boot->sectors_per_clusters
+- : (1u << (0 - boot->sectors_per_clusters));
++ if (boot->sectors_per_clusters <= 0x80)
++ return boot->sectors_per_clusters;
++ if (boot->sectors_per_clusters >= 0xf4) /* limit shift to 2MB max */
++ return 1U << (0 - boot->sectors_per_clusters);
++ return -EINVAL;
+ }
+
+ /*
+@@ -713,6 +715,8 @@ static int ntfs_init_from_boot(struct super_block *sb, u32 sector_size,
+
+ /* cluster size: 512, 1K, 2K, 4K, ... 2M */
+ sct_per_clst = true_sectors_per_clst(boot);
++ if ((int)sct_per_clst < 0)
++ goto out;
+ if (!is_power_of_2(sct_per_clst))
+ goto out;
+
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 2667db9506e2f..1025f8ad1aa56 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -653,7 +653,7 @@ pipe_poll(struct file *filp, poll_table *wait)
+ unsigned int head, tail;
+
+ /* Epoll has some historical nasty semantics, this enables them */
+- pipe->poll_usage = 1;
++ WRITE_ONCE(pipe->poll_usage, true);
+
+ /*
+ * Reading pipe state only -- no need for acquiring the semaphore.
+@@ -1245,30 +1245,33 @@ unsigned int round_pipe_size(unsigned long size)
+
+ /*
+ * Resize the pipe ring to a number of slots.
++ *
++ * Note the pipe can be reduced in capacity, but only if the current
++ * occupancy doesn't exceed nr_slots; if it does, EBUSY will be
++ * returned instead.
+ */
+ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots)
+ {
+ struct pipe_buffer *bufs;
+ unsigned int head, tail, mask, n;
+
+- /*
+- * We can shrink the pipe, if arg is greater than the ring occupancy.
+- * Since we don't expect a lot of shrink+grow operations, just free and
+- * allocate again like we would do for growing. If the pipe currently
+- * contains more buffers than arg, then return busy.
+- */
+- mask = pipe->ring_size - 1;
+- head = pipe->head;
+- tail = pipe->tail;
+- n = pipe_occupancy(pipe->head, pipe->tail);
+- if (nr_slots < n)
+- return -EBUSY;
+-
+ bufs = kcalloc(nr_slots, sizeof(*bufs),
+ GFP_KERNEL_ACCOUNT | __GFP_NOWARN);
+ if (unlikely(!bufs))
+ return -ENOMEM;
+
++ spin_lock_irq(&pipe->rd_wait.lock);
++ mask = pipe->ring_size - 1;
++ head = pipe->head;
++ tail = pipe->tail;
++
++ n = pipe_occupancy(head, tail);
++ if (nr_slots < n) {
++ spin_unlock_irq(&pipe->rd_wait.lock);
++ kfree(bufs);
++ return -EBUSY;
++ }
++
+ /*
+ * The pipe array wraps around, so just start the new one at zero
+ * and adjust the indices.
+@@ -1300,6 +1303,8 @@ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots)
+ pipe->tail = tail;
+ pipe->head = head;
+
++ spin_unlock_irq(&pipe->rd_wait.lock);
++
+ /* This might have made more room for writers */
+ wake_up_interruptible(&pipe->wr_wait);
+ return 0;
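
The resize path above is what fcntl(F_SETPIPE_SZ) ends up calling, and the
EBUSY case described in the new comment is observable from userspace. A small
sketch (assumes the default 64 KiB pipe, so two pages are buffered when the
shrink is attempted):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          int fds[2];
          char buf[8192] = { 0 };

          if (pipe(fds) || write(fds[1], buf, sizeof(buf)) < 0)
                  return 1;
          /* Two pages buffered; asking for a one-page ring must fail. */
          if (fcntl(fds[1], F_SETPIPE_SZ, 4096) < 0 && errno == EBUSY)
                  printf("shrink below occupancy -> EBUSY\n");
          return 0;
  }
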
+diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
+index 37b3906af8b19..40b561b09e47f 100644
+--- a/include/linux/bpf_local_storage.h
++++ b/include/linux/bpf_local_storage.h
+@@ -143,9 +143,9 @@ void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage,
+
+ bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
+ struct bpf_local_storage_elem *selem,
+- bool uncharge_omem);
++ bool uncharge_omem, bool use_trace_rcu);
+
+-void bpf_selem_unlink(struct bpf_local_storage_elem *selem);
++void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool use_trace_rcu);
+
+ void bpf_selem_link_map(struct bpf_local_storage_map *smap,
+ struct bpf_local_storage_elem *selem);
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index c00c618ef290d..cb0fd633a6106 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -71,7 +71,7 @@ struct pipe_inode_info {
+ unsigned int files;
+ unsigned int r_counter;
+ unsigned int w_counter;
+- unsigned int poll_usage;
++ bool poll_usage;
+ struct page *tmp_page;
+ struct fasync_struct *fasync_readers;
+ struct fasync_struct *fasync_writers;
+diff --git a/include/net/netfilter/nf_conntrack_core.h b/include/net/netfilter/nf_conntrack_core.h
+index 13807ea94cd2b..2d524782f53b7 100644
+--- a/include/net/netfilter/nf_conntrack_core.h
++++ b/include/net/netfilter/nf_conntrack_core.h
+@@ -58,8 +58,13 @@ static inline int nf_conntrack_confirm(struct sk_buff *skb)
+ int ret = NF_ACCEPT;
+
+ if (ct) {
+- if (!nf_ct_is_confirmed(ct))
++ if (!nf_ct_is_confirmed(ct)) {
+ ret = __nf_conntrack_confirm(skb);
++
++ if (ret == NF_ACCEPT)
++ ct = (struct nf_conn *)skb_nfct(skb);
++ }
++
+ if (likely(ret == NF_ACCEPT))
+ nf_ct_deliver_cached_events(ct);
+ }
+diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
+index e29d9e3d853ea..0da52c3bcd4d8 100644
+--- a/kernel/bpf/bpf_inode_storage.c
++++ b/kernel/bpf/bpf_inode_storage.c
+@@ -90,7 +90,7 @@ void bpf_inode_storage_free(struct inode *inode)
+ */
+ bpf_selem_unlink_map(selem);
+ free_inode_storage = bpf_selem_unlink_storage_nolock(
+- local_storage, selem, false);
++ local_storage, selem, false, false);
+ }
+ raw_spin_unlock_bh(&local_storage->lock);
+ rcu_read_unlock();
+@@ -149,7 +149,7 @@ static int inode_storage_delete(struct inode *inode, struct bpf_map *map)
+ if (!sdata)
+ return -ENOENT;
+
+- bpf_selem_unlink(SELEM(sdata));
++ bpf_selem_unlink(SELEM(sdata), true);
+
+ return 0;
+ }
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index 71de2a89869c8..79a8edfa7a2f4 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -106,7 +106,7 @@ static void bpf_selem_free_rcu(struct rcu_head *rcu)
+ */
+ bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
+ struct bpf_local_storage_elem *selem,
+- bool uncharge_mem)
++ bool uncharge_mem, bool use_trace_rcu)
+ {
+ struct bpf_local_storage_map *smap;
+ bool free_local_storage;
+@@ -150,11 +150,16 @@ bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
+ SDATA(selem))
+ RCU_INIT_POINTER(local_storage->cache[smap->cache_idx], NULL);
+
+- call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
++ if (use_trace_rcu)
++ call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
++ else
++ kfree_rcu(selem, rcu);
++
+ return free_local_storage;
+ }
+
+-static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem)
++static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem,
++ bool use_trace_rcu)
+ {
+ struct bpf_local_storage *local_storage;
+ bool free_local_storage = false;
+@@ -169,12 +174,16 @@ static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem)
+ raw_spin_lock_irqsave(&local_storage->lock, flags);
+ if (likely(selem_linked_to_storage(selem)))
+ free_local_storage = bpf_selem_unlink_storage_nolock(
+- local_storage, selem, true);
++ local_storage, selem, true, use_trace_rcu);
+ raw_spin_unlock_irqrestore(&local_storage->lock, flags);
+
+- if (free_local_storage)
+- call_rcu_tasks_trace(&local_storage->rcu,
++ if (free_local_storage) {
++ if (use_trace_rcu)
++ call_rcu_tasks_trace(&local_storage->rcu,
+ bpf_local_storage_free_rcu);
++ else
++ kfree_rcu(local_storage, rcu);
++ }
+ }
+
+ void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage,
+@@ -214,14 +223,14 @@ void bpf_selem_link_map(struct bpf_local_storage_map *smap,
+ raw_spin_unlock_irqrestore(&b->lock, flags);
+ }
+
+-void bpf_selem_unlink(struct bpf_local_storage_elem *selem)
++void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool use_trace_rcu)
+ {
+ /* Always unlink from map before unlinking from local_storage
+ * because selem will be freed after successfully unlinked from
+ * the local_storage.
+ */
+ bpf_selem_unlink_map(selem);
+- __bpf_selem_unlink_storage(selem);
++ __bpf_selem_unlink_storage(selem, use_trace_rcu);
+ }
+
+ struct bpf_local_storage_data *
+@@ -454,7 +463,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
+ if (old_sdata) {
+ bpf_selem_unlink_map(SELEM(old_sdata));
+ bpf_selem_unlink_storage_nolock(local_storage, SELEM(old_sdata),
+- false);
++ false, true);
+ }
+
+ unlock:
+@@ -532,7 +541,7 @@ void bpf_local_storage_map_free(struct bpf_local_storage_map *smap,
+ migrate_disable();
+ __this_cpu_inc(*busy_counter);
+ }
+- bpf_selem_unlink(selem);
++ bpf_selem_unlink(selem, false);
+ if (busy_counter) {
+ __this_cpu_dec(*busy_counter);
+ migrate_enable();
+diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
+index 5da7bed0f5f6e..be6c533bb862f 100644
+--- a/kernel/bpf/bpf_task_storage.c
++++ b/kernel/bpf/bpf_task_storage.c
+@@ -102,7 +102,7 @@ void bpf_task_storage_free(struct task_struct *task)
+ */
+ bpf_selem_unlink_map(selem);
+ free_task_storage = bpf_selem_unlink_storage_nolock(
+- local_storage, selem, false);
++ local_storage, selem, false, false);
+ }
+ raw_spin_unlock_irqrestore(&local_storage->lock, flags);
+ bpf_task_storage_unlock();
+@@ -191,7 +191,7 @@ static int task_storage_delete(struct task_struct *task, struct bpf_map *map)
+ if (!sdata)
+ return -ENOENT;
+
+- bpf_selem_unlink(SELEM(sdata));
++ bpf_selem_unlink(SELEM(sdata), true);
+
+ return 0;
+ }
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index de3e5bc6781fe..64c44eed8c078 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -1157,6 +1157,16 @@ struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *prog)
+ insn = clone->insnsi;
+
+ for (i = 0; i < insn_cnt; i++, insn++) {
++ if (bpf_pseudo_func(insn)) {
++ /* ld_imm64 with an address of bpf subprog is not
++ * a user controlled constant. Don't randomize it,
++ * since it will conflict with jit_subprogs() logic.
++ */
++ insn++;
++ i++;
++ continue;
++ }
++
+ /* We temporarily need to hold the original ld64 insn
+ * so that we can still access the first part in the
+ * second blinding run.
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 2823dcefae10e..354d066ce6955 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -100,7 +100,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
+ return ERR_PTR(-E2BIG);
+
+ cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap);
+- cost += n_buckets * (value_size + sizeof(struct stack_map_bucket));
+ smap = bpf_map_area_alloc(cost, bpf_map_attr_numa_node(attr));
+ if (!smap)
+ return ERR_PTR(-ENOMEM);
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index 5e7edf9130601..e854ebd0aa652 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -423,7 +423,7 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+ {
+ enum bpf_tramp_prog_type kind;
+ int err = 0;
+- int cnt;
++ int cnt = 0, i;
+
+ kind = bpf_attach_type_to_tramp(prog);
+ mutex_lock(&tr->mutex);
+@@ -434,7 +434,10 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+ err = -EBUSY;
+ goto out;
+ }
+- cnt = tr->progs_cnt[BPF_TRAMP_FENTRY] + tr->progs_cnt[BPF_TRAMP_FEXIT];
++
++ for (i = 0; i < BPF_TRAMP_MAX; i++)
++ cnt += tr->progs_cnt[i];
++
+ if (kind == BPF_TRAMP_REPLACE) {
+ /* Cannot attach extension if fentry/fexit are in use. */
+ if (cnt) {
+@@ -512,16 +515,19 @@ out:
+
+ void bpf_trampoline_put(struct bpf_trampoline *tr)
+ {
++ int i;
++
+ if (!tr)
+ return;
+ mutex_lock(&trampoline_mutex);
+ if (!refcount_dec_and_test(&tr->refcnt))
+ goto out;
+ WARN_ON_ONCE(mutex_is_locked(&tr->mutex));
+- if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FENTRY])))
+- goto out;
+- if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
+- goto out;
++
++ for (i = 0; i < BPF_TRAMP_MAX; i++)
++ if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[i])))
++ goto out;
++
+ /* This code will be executed even when the last bpf_tramp_image
+ * is alive. All progs are detached from the trampoline and the
+ * trampoline image is patched with jmp into epilogue to skip
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a39eedecc93a1..1d93e90b4e09f 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4832,6 +4832,11 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ return check_packet_access(env, regno, reg->off, access_size,
+ zero_size_allowed);
+ case PTR_TO_MAP_KEY:
++ if (meta && meta->raw_mode) {
++ verbose(env, "R%d cannot write into %s\n", regno,
++ reg_type_str(env, reg->type));
++ return -EACCES;
++ }
+ return check_mem_region_access(env, regno, reg->off, access_size,
+ reg->map_ptr->key_size, false);
+ case PTR_TO_MAP_VALUE:
+@@ -4842,13 +4847,23 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+ return check_map_access(env, regno, reg->off, access_size,
+ zero_size_allowed);
+ case PTR_TO_MEM:
++ if (type_is_rdonly_mem(reg->type)) {
++ if (meta && meta->raw_mode) {
++ verbose(env, "R%d cannot write into %s\n", regno,
++ reg_type_str(env, reg->type));
++ return -EACCES;
++ }
++ }
+ return check_mem_region_access(env, regno, reg->off,
+ access_size, reg->mem_size,
+ zero_size_allowed);
+ case PTR_TO_BUF:
+ if (type_is_rdonly_mem(reg->type)) {
+- if (meta && meta->raw_mode)
++ if (meta && meta->raw_mode) {
++ verbose(env, "R%d cannot write into %s\n", regno,
++ reg_type_str(env, reg->type));
+ return -EACCES;
++ }
+
+ buf_info = "rdonly";
+ max_access = &env->prog->aux->max_rdonly_access;
+diff --git a/lib/assoc_array.c b/lib/assoc_array.c
+index 079c72e26493e..ca0b4f360c1a0 100644
+--- a/lib/assoc_array.c
++++ b/lib/assoc_array.c
+@@ -1461,6 +1461,7 @@ int assoc_array_gc(struct assoc_array *array,
+ struct assoc_array_ptr *cursor, *ptr;
+ struct assoc_array_ptr *new_root, *new_parent, **new_ptr_pp;
+ unsigned long nr_leaves_on_tree;
++ bool retained;
+ int keylen, slot, nr_free, next_slot, i;
+
+ pr_devel("-->%s()\n", __func__);
+@@ -1536,6 +1537,7 @@ continue_node:
+ goto descend;
+ }
+
++retry_compress:
+ pr_devel("-- compress node %p --\n", new_n);
+
+ /* Count up the number of empty slots in this node and work out the
+@@ -1553,6 +1555,7 @@ continue_node:
+ pr_devel("free=%d, leaves=%lu\n", nr_free, new_n->nr_leaves_on_branch);
+
+ /* See what we can fold in */
++ retained = false;
+ next_slot = 0;
+ for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
+ struct assoc_array_shortcut *s;
+@@ -1602,9 +1605,14 @@ continue_node:
+ pr_devel("[%d] retain node %lu/%d [nx %d]\n",
+ slot, child->nr_leaves_on_branch, nr_free + 1,
+ next_slot);
++ retained = true;
+ }
+ }
+
++ if (retained && new_n->nr_leaves_on_branch <= ASSOC_ARRAY_FAN_OUT) {
++ pr_devel("internal nodes remain despite enough space, retrying\n");
++ goto retry_compress;
++ }
+ pr_devel("after: %lu\n", new_n->nr_leaves_on_branch);
+
+ nr_leaves_on_tree = new_n->nr_leaves_on_branch;
+diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
+index af9302141bcf6..e5c5315da2741 100644
+--- a/lib/percpu-refcount.c
++++ b/lib/percpu-refcount.c
+@@ -76,6 +76,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
+ data = kzalloc(sizeof(*ref->data), gfp);
+ if (!data) {
+ free_percpu((void __percpu *)ref->percpu_count_ptr);
++ ref->percpu_count_ptr = 0;
+ return -ENOMEM;
+ }
+
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index 9152fbde33b50..5d5fc04385b8d 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
+ */
+ static void lock_zspage(struct zspage *zspage)
+ {
+- struct page *page = get_first_page(zspage);
++ struct page *curr_page, *page;
+
+- do {
+- lock_page(page);
+- } while ((page = get_next_page(page)) != NULL);
++ /*
++ * Pages we haven't locked yet can be migrated off the list while we're
++ * trying to lock them, so we need to be careful and only attempt to
++ * lock each page under migrate_read_lock(). Otherwise, the page we lock
++ * may no longer belong to the zspage. This means that we may wait for
++ * the wrong page to unlock, so we must take a reference to the page
++ * prior to waiting for it to unlock outside migrate_read_lock().
++ */
++ while (1) {
++ migrate_read_lock(zspage);
++ page = get_first_page(zspage);
++ if (trylock_page(page))
++ break;
++ get_page(page);
++ migrate_read_unlock(zspage);
++ wait_on_page_locked(page);
++ put_page(page);
++ }
++
++ curr_page = page;
++ while ((page = get_next_page(curr_page))) {
++ if (trylock_page(page)) {
++ curr_page = page;
++ } else {
++ get_page(page);
++ migrate_read_unlock(zspage);
++ wait_on_page_locked(page);
++ put_page(page);
++ migrate_read_lock(zspage);
++ }
++ }
++ migrate_read_unlock(zspage);
+ }
+
+ static int zs_init_fs_context(struct fs_context *fc)
+diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
+index d9c37fd108097..4fc5bf519ba59 100644
+--- a/net/core/bpf_sk_storage.c
++++ b/net/core/bpf_sk_storage.c
+@@ -40,7 +40,7 @@ static int bpf_sk_storage_del(struct sock *sk, struct bpf_map *map)
+ if (!sdata)
+ return -ENOENT;
+
+- bpf_selem_unlink(SELEM(sdata));
++ bpf_selem_unlink(SELEM(sdata), true);
+
+ return 0;
+ }
+@@ -75,8 +75,8 @@ void bpf_sk_storage_free(struct sock *sk)
+ * sk_storage.
+ */
+ bpf_selem_unlink_map(selem);
+- free_sk_storage = bpf_selem_unlink_storage_nolock(sk_storage,
+- selem, true);
++ free_sk_storage = bpf_selem_unlink_storage_nolock(
++ sk_storage, selem, true, false);
+ }
+ raw_spin_unlock_bh(&sk_storage->lock);
+ rcu_read_unlock();
+diff --git a/net/core/filter.c b/net/core/filter.c
+index af0bafe9dcce2..f8fbb5fa74f35 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -1687,7 +1687,7 @@ BPF_CALL_5(bpf_skb_store_bytes, struct sk_buff *, skb, u32, offset,
+
+ if (unlikely(flags & ~(BPF_F_RECOMPUTE_CSUM | BPF_F_INVALIDATE_HASH)))
+ return -EINVAL;
+- if (unlikely(offset > 0xffff))
++ if (unlikely(offset > INT_MAX))
+ return -EFAULT;
+ if (unlikely(bpf_try_make_writable(skb, offset + len)))
+ return -EFAULT;
+@@ -1722,7 +1722,7 @@ BPF_CALL_4(bpf_skb_load_bytes, const struct sk_buff *, skb, u32, offset,
+ {
+ void *ptr;
+
+- if (unlikely(offset > 0xffff))
++ if (unlikely(offset > INT_MAX))
+ goto err_clear;
+
+ ptr = skb_header_pointer(skb, offset, len, to);
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 92e9d75dba2f4..339d95df19d32 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2900,7 +2900,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t)
+ break;
+ if (!aalg->pfkey_supported)
+ continue;
+- if (aalg_tmpl_set(t, aalg))
++ if (aalg_tmpl_set(t, aalg) && aalg->available)
+ sz += sizeof(struct sadb_comb);
+ }
+ return sz + sizeof(struct sadb_prop);
+@@ -2918,7 +2918,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ if (!ealg->pfkey_supported)
+ continue;
+
+- if (!(ealg_tmpl_set(t, ealg)))
++ if (!(ealg_tmpl_set(t, ealg) && ealg->available))
+ continue;
+
+ for (k = 1; ; k++) {
+@@ -2929,7 +2929,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t)
+ if (!aalg->pfkey_supported)
+ continue;
+
+- if (aalg_tmpl_set(t, aalg))
++ if (aalg_tmpl_set(t, aalg) && aalg->available)
+ sz += sizeof(struct sadb_comb);
+ }
+ }
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 30d29d038d095..42cc703a68e50 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -222,12 +222,18 @@ err_register:
+ }
+
+ static void nft_netdev_unregister_hooks(struct net *net,
+- struct list_head *hook_list)
++ struct list_head *hook_list,
++ bool release_netdev)
+ {
+- struct nft_hook *hook;
++ struct nft_hook *hook, *next;
+
+- list_for_each_entry(hook, hook_list, list)
++ list_for_each_entry_safe(hook, next, hook_list, list) {
+ nf_unregister_net_hook(net, &hook->ops);
++ if (release_netdev) {
++ list_del(&hook->list);
++ kfree_rcu(hook, rcu);
++ }
++ }
+ }
+
+ static int nf_tables_register_hook(struct net *net,
+@@ -253,9 +259,10 @@ static int nf_tables_register_hook(struct net *net,
+ return nf_register_net_hook(net, &basechain->ops);
+ }
+
+-static void nf_tables_unregister_hook(struct net *net,
+- const struct nft_table *table,
+- struct nft_chain *chain)
++static void __nf_tables_unregister_hook(struct net *net,
++ const struct nft_table *table,
++ struct nft_chain *chain,
++ bool release_netdev)
+ {
+ struct nft_base_chain *basechain;
+ const struct nf_hook_ops *ops;
+@@ -270,11 +277,19 @@ static void nf_tables_unregister_hook(struct net *net,
+ return basechain->type->ops_unregister(net, ops);
+
+ if (nft_base_chain_netdev(table->family, basechain->ops.hooknum))
+- nft_netdev_unregister_hooks(net, &basechain->hook_list);
++ nft_netdev_unregister_hooks(net, &basechain->hook_list,
++ release_netdev);
+ else
+ nf_unregister_net_hook(net, &basechain->ops);
+ }
+
++static void nf_tables_unregister_hook(struct net *net,
++ const struct nft_table *table,
++ struct nft_chain *chain)
++{
++ return __nf_tables_unregister_hook(net, table, chain, false);
++}
++
+ static void nft_trans_commit_list_add_tail(struct net *net, struct nft_trans *trans)
+ {
+ struct nftables_pernet *nft_net = nft_pernet(net);
+@@ -2794,27 +2809,31 @@ static struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+
+ err = nf_tables_expr_parse(ctx, nla, &expr_info);
+ if (err < 0)
+- goto err1;
++ goto err_expr_parse;
++
++ err = -EOPNOTSUPP;
++ if (!(expr_info.ops->type->flags & NFT_EXPR_STATEFUL))
++ goto err_expr_stateful;
+
+ err = -ENOMEM;
+ expr = kzalloc(expr_info.ops->size, GFP_KERNEL);
+ if (expr == NULL)
+- goto err2;
++ goto err_expr_stateful;
+
+ err = nf_tables_newexpr(ctx, &expr_info, expr);
+ if (err < 0)
+- goto err3;
++ goto err_expr_new;
+
+ return expr;
+-err3:
++err_expr_new:
+ kfree(expr);
+-err2:
++err_expr_stateful:
+ owner = expr_info.ops->type->owner;
+ if (expr_info.ops->type->release_ops)
+ expr_info.ops->type->release_ops(expr_info.ops);
+
+ module_put(owner);
+-err1:
++err_expr_parse:
+ return ERR_PTR(err);
+ }
+
+@@ -4163,6 +4182,9 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ u32 len;
+ int err;
+
++ if (desc->field_count >= ARRAY_SIZE(desc->field_len))
++ return -E2BIG;
++
+ err = nla_parse_nested_deprecated(tb, NFTA_SET_FIELD_MAX, attr,
+ nft_concat_policy, NULL);
+ if (err < 0)
+@@ -4172,9 +4194,8 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ return -EINVAL;
+
+ len = ntohl(nla_get_be32(tb[NFTA_SET_FIELD_LEN]));
+-
+- if (len * BITS_PER_BYTE / 32 > NFT_REG32_COUNT)
+- return -E2BIG;
++ if (!len || len > U8_MAX)
++ return -EINVAL;
+
+ desc->field_len[desc->field_count++] = len;
+
+@@ -4185,7 +4206,8 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ const struct nlattr *nla)
+ {
+ struct nlattr *attr;
+- int rem, err;
++ u32 num_regs = 0;
++ int rem, err, i;
+
+ nla_for_each_nested(attr, nla, rem) {
+ if (nla_type(attr) != NFTA_LIST_ELEM)
+@@ -4196,6 +4218,12 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ return err;
+ }
+
++ for (i = 0; i < desc->field_count; i++)
++ num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
++
++ if (num_regs > NFT_REG32_COUNT)
++ return -E2BIG;
++
+ return 0;
+ }
+
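
The new accounting above bounds the concatenation as a whole: each field of
len bytes occupies DIV_ROUND_UP(len, 4) 32-bit registers, and the sum may not
exceed NFT_REG32_COUNT. The older per-field test could be satisfied by many
small fields whose total still overflowed the register file. A worked check
(NFT_REG32_COUNT assumed to be 16, matching the uapi header):

  #include <stdio.h>

  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
  #define NFT_REG32_COUNT 16 /* assumed value from the uapi header */

  int main(void)
  {
          /* five fields, in bytes: four IPv6-sized plus one single byte */
          unsigned int field_len[] = { 16, 16, 16, 16, 1 };
          unsigned int i, num_regs = 0;

          for (i = 0; i < sizeof(field_len) / sizeof(field_len[0]); i++)
                  num_regs += DIV_ROUND_UP(field_len[i], 4);

          /* 4 + 4 + 4 + 4 + 1 = 17 registers: rejected with E2BIG */
          printf("%u regs vs limit %u -> %s\n", num_regs, NFT_REG32_COUNT,
                 num_regs > NFT_REG32_COUNT ? "E2BIG" : "ok");
          return 0;
  }
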
+@@ -5334,9 +5362,6 @@ struct nft_expr *nft_set_elem_expr_alloc(const struct nft_ctx *ctx,
+ return expr;
+
+ err = -EOPNOTSUPP;
+- if (!(expr->ops->type->flags & NFT_EXPR_STATEFUL))
+- goto err_set_elem_expr;
+-
+ if (expr->ops->type->flags & NFT_EXPR_GC) {
+ if (set->flags & NFT_SET_TIMEOUT)
+ goto err_set_elem_expr;
+@@ -7212,13 +7237,25 @@ static void nft_unregister_flowtable_hook(struct net *net,
+ FLOW_BLOCK_UNBIND);
+ }
+
+-static void nft_unregister_flowtable_net_hooks(struct net *net,
+- struct list_head *hook_list)
++static void __nft_unregister_flowtable_net_hooks(struct net *net,
++ struct list_head *hook_list,
++ bool release_netdev)
+ {
+- struct nft_hook *hook;
++ struct nft_hook *hook, *next;
+
+- list_for_each_entry(hook, hook_list, list)
++ list_for_each_entry_safe(hook, next, hook_list, list) {
+ nf_unregister_net_hook(net, &hook->ops);
++ if (release_netdev) {
++ list_del(&hook->list);
++ kfree_rcu(hook);
++ }
++ }
++}
++
++static void nft_unregister_flowtable_net_hooks(struct net *net,
++ struct list_head *hook_list)
++{
++ __nft_unregister_flowtable_net_hooks(net, hook_list, false);
+ }
+
+ static int nft_register_flowtable_net_hooks(struct net *net,
+@@ -9662,9 +9699,10 @@ static void __nft_release_hook(struct net *net, struct nft_table *table)
+ struct nft_chain *chain;
+
+ list_for_each_entry(chain, &table->chains, list)
+- nf_tables_unregister_hook(net, table, chain);
++ __nf_tables_unregister_hook(net, table, chain, true);
+ list_for_each_entry(flowtable, &table->flowtables, list)
+- nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list);
++ __nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list,
++ true);
+ }
+
+ static void __nft_release_hooks(struct net *net)
+@@ -9803,7 +9841,11 @@ static int __net_init nf_tables_init_net(struct net *net)
+
+ static void __net_exit nf_tables_pre_exit_net(struct net *net)
+ {
++ struct nftables_pernet *nft_net = nft_pernet(net);
++
++ mutex_lock(&nft_net->commit_mutex);
+ __nft_release_hooks(net);
++ mutex_unlock(&nft_net->commit_mutex);
+ }
+
+ static void __net_exit nf_tables_exit_net(struct net *net)
+diff --git a/net/netfilter/nft_limit.c b/net/netfilter/nft_limit.c
+index a726b623963de..05a17dc1febbd 100644
+--- a/net/netfilter/nft_limit.c
++++ b/net/netfilter/nft_limit.c
+@@ -213,6 +213,8 @@ static int nft_limit_pkts_clone(struct nft_expr *dst, const struct nft_expr *src
+ struct nft_limit_priv_pkts *priv_dst = nft_expr_priv(dst);
+ struct nft_limit_priv_pkts *priv_src = nft_expr_priv(src);
+
++ priv_dst->cost = priv_src->cost;
++
+ return nft_limit_clone(&priv_dst->limit, &priv_src->limit);
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e38acdbe1a3b5..53d1586b71ec6 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6773,6 +6773,41 @@ static void alc256_fixup_mic_no_presence_and_resume(struct hda_codec *codec,
+ }
+ }
+
++static void alc_fixup_dell4_mic_no_presence_quiet(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ struct alc_spec *spec = codec->spec;
++ struct hda_input_mux *imux = &spec->gen.input_mux;
++ int i;
++
++ alc269_fixup_limit_int_mic_boost(codec, fix, action);
++
++ switch (action) {
++ case HDA_FIXUP_ACT_PRE_PROBE:
++ /**
++ * Set the vref of pin 0x19 (Headset Mic) and pin 0x1b (Headphone Mic)
++ * to Hi-Z to avoid pop noises at startup and when plugging and
++ * unplugging headphones.
++ */
++ snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREFHIZ);
++ snd_hda_codec_set_pin_target(codec, 0x1b, PIN_VREFHIZ);
++ break;
++ case HDA_FIXUP_ACT_PROBE:
++ /**
++ * Make the internal mic (0x12) the default input source to
++ * prevent pop noises on cold boot.
++ */
++ for (i = 0; i < imux->num_items; i++) {
++ if (spec->gen.imux_pins[i] == 0x12) {
++ spec->gen.cur_mux[0] = i;
++ break;
++ }
++ }
++ break;
++ }
++}
++
+ enum {
+ ALC269_FIXUP_GPIO2,
+ ALC269_FIXUP_SONY_VAIO,
+@@ -6814,6 +6849,7 @@ enum {
+ ALC269_FIXUP_DELL2_MIC_NO_PRESENCE,
+ ALC269_FIXUP_DELL3_MIC_NO_PRESENCE,
+ ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
++ ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET,
+ ALC269_FIXUP_HEADSET_MODE,
+ ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
+ ALC269_FIXUP_ASPIRE_HEADSET_MIC,
+@@ -7000,6 +7036,7 @@ enum {
+ ALC287_FIXUP_LEGION_16ACHG6,
+ ALC287_FIXUP_CS35L41_I2C_2,
+ ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED,
++ ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -8770,6 +8807,21 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC285_FIXUP_HP_MUTE_LED,
+ },
++ [ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_dell4_mic_no_presence_quiet,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
++ },
++ [ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x02a1112c }, /* use as headset mic, without its own jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -8860,6 +8912,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x09bf, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1028, 0x0a38, "Dell Latitude 7520", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE_QUIET),
+ SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK),
+ SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),
+@@ -9249,6 +9302,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
++ SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+
+ #if 0
+ /* Below is a quirk table taken from the old code.
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 4dfe76416794f..33db334e65566 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -572,6 +572,17 @@ static int set_sample_rate_v2v3(struct snd_usb_audio *chip,
+ /* continue processing */
+ }
+
++	/* FIXME - TEAC devices require immediate interface setup */
++ if (USB_ID_VENDOR(chip->usb_id) == 0x0644) {
++ bool cur_base_48k = (rate % 48000 == 0);
++ bool prev_base_48k = (prev_rate % 48000 == 0);
++ if (cur_base_48k != prev_base_48k) {
++ usb_set_interface(chip->dev, fmt->iface, fmt->altsetting);
++ if (chip->quirk_flags & QUIRK_FLAG_IFACE_DELAY)
++ msleep(50);
++ }
++ }
++
+ validation:
+ /* validate clock after rate change */
+ if (!uac_clock_source_is_valid(chip, fmt, clock))
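
The quirk above re-runs usb_set_interface() only when the new rate crosses
between the 44.1 kHz and 48 kHz base-rate families; within a family the device
keeps its clock setup. A worked illustration of the family test (not part of
the patch):

  #include <stdbool.h>
  #include <stdio.h>

  static bool base_48k(unsigned int rate)
  {
          return rate % 48000 == 0;
  }

  int main(void)
  {
          /* 44100 -> 48000 crosses families; 48000 -> 96000 does not. */
          printf("%d\n", base_48k(44100) != base_48k(48000)); /* 1: re-setup */
          printf("%d\n", base_48k(48000) != base_48k(96000)); /* 0: skip */
          return 0;
  }
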
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 6d699065e81a2..b470404a5376c 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -439,16 +439,21 @@ static int configure_endpoints(struct snd_usb_audio *chip,
+ /* stop any running stream beforehand */
+ if (stop_endpoints(subs, false))
+ sync_pending_stops(subs);
++ if (subs->sync_endpoint) {
++ err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);
++ if (err < 0)
++ return err;
++ }
+ err = snd_usb_endpoint_configure(chip, subs->data_endpoint);
+ if (err < 0)
+ return err;
+ snd_usb_set_format_quirk(subs, subs->cur_audiofmt);
+- }
+-
+- if (subs->sync_endpoint) {
+- err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);
+- if (err < 0)
+- return err;
++ } else {
++ if (subs->sync_endpoint) {
++ err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);
++ if (err < 0)
++ return err;
++ }
+ }
+
+ return 0;
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 40a5e3eb4ef26..78eb41b621d63 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -2672,6 +2672,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .altset_idx = 1,
+ .attributes = 0,
+ .endpoint = 0x82,
++ .ep_idx = 1,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC,
+ .datainterval = 1,
+ .maxpacksize = 0x0126,
+@@ -2875,6 +2876,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .altset_idx = 1,
+ .attributes = 0x4,
+ .endpoint = 0x81,
++ .ep_idx = 1,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC |
+ USB_ENDPOINT_SYNC_ASYNC,
+ .maxpacksize = 0x130,
+@@ -3391,6 +3393,7 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ .altset_idx = 1,
+ .attributes = 0,
+ .endpoint = 0x03,
++ .ep_idx = 1,
+ .rates = SNDRV_PCM_RATE_96000,
+ .ep_attr = USB_ENDPOINT_XFER_ISOC |
+ USB_ENDPOINT_SYNC_ASYNC,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index ab9f3da49941f..fbbe59054c3fb 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1822,6 +1822,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x06f8, 0xd002, /* Hercules DJ Console (Macintosh Edition) */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
++ DEVICE_FLG(0x0711, 0x5800, /* MCT Trigger 5 USB-to-HDMI */
++ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x074d, 0x3553, /* Outlaw RR2150 (Micronas UAC3553B) */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x08bb, 0x2702, /* LineX FM Transmitter */
+diff --git a/tools/memory-model/README b/tools/memory-model/README
+index 9edd402704c4f..dab38904206a0 100644
+--- a/tools/memory-model/README
++++ b/tools/memory-model/README
+@@ -54,7 +54,8 @@ klitmus7 Compatibility Table
+ -- 4.14 7.48 --
+ 4.15 -- 4.19 7.49 --
+ 4.20 -- 5.5 7.54 --
+- 5.6 -- 7.56 --
++ 5.6 -- 5.16 7.56 --
++ 5.17 -- 7.56.1 --
+ ============ ==========
+
+
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-30 13:58 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-05-30 13:58 UTC (permalink / raw
To: gentoo-commits
commit: a8ee0e96ec8829dcec0a8fac8e0f9203189002d3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May 30 13:58:07 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May 30 13:58:07 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a8ee0e96
Linux patch 5.17.12
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1011_linux-5.17.12.patch | 4650 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4654 insertions(+)
diff --git a/0000_README b/0000_README
index 8aed7c53..ecb45bb4 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-5.17.11.patch
From: http://www.kernel.org
Desc: Linux 5.17.11
+Patch: 1011_linux-5.17.12.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-5.17.12.patch b/1011_linux-5.17.12.patch
new file mode 100644
index 00000000..185a60e2
--- /dev/null
+++ b/1011_linux-5.17.12.patch
@@ -0,0 +1,4650 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 59f881f367793..ad67b848d04ee 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4355,6 +4355,12 @@
+ fully seed the kernel's CRNG. Default is controlled
+ by CONFIG_RANDOM_TRUST_CPU.
+
++ random.trust_bootloader={on,off}
++ [KNL] Enable or disable trusting the use of a
++ seed passed by the bootloader (if available) to
++ fully seed the kernel's CRNG. Default is controlled
++ by CONFIG_RANDOM_TRUST_BOOTLOADER.
++
+ randomize_kstack_offset=
+ [KNL] Enable or disable kernel stack offset
+ randomization, which provides roughly 5 bits of
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index 0f86e9f931293..264735c5d0bda 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1025,28 +1025,22 @@ This is a directory, with the following entries:
+ * ``boot_id``: a UUID generated the first time this is retrieved, and
+ unvarying after that;
+
++* ``uuid``: a UUID generated every time this is retrieved (this can
++ thus be used to generate UUIDs at will);
++
+ * ``entropy_avail``: the pool's entropy count, in bits;
+
+ * ``poolsize``: the entropy pool size, in bits;
+
+ * ``urandom_min_reseed_secs``: obsolete (used to determine the minimum
+- number of seconds between urandom pool reseeding).
+-
+-* ``uuid``: a UUID generated every time this is retrieved (this can
+- thus be used to generate UUIDs at will);
++ number of seconds between urandom pool reseeding). This file is
++ writable for compatibility purposes, but writing to it has no effect
++ on any RNG behavior;
+
+ * ``write_wakeup_threshold``: when the entropy count drops below this
+ (as a number of bits), processes waiting to write to ``/dev/random``
+- are woken up.
+-
+-If ``drivers/char/random.c`` is built with ``ADD_INTERRUPT_BENCH``
+-defined, these additional entries are present:
+-
+-* ``add_interrupt_avg_cycles``: the average number of cycles between
+- interrupts used to feed the pool;
+-
+-* ``add_interrupt_avg_deviation``: the standard deviation seen on the
+- number of cycles between interrupts used to feed the pool.
++ are woken up. This file is writable for compatibility purposes, but
++ writing to it has no effect on any RNG behavior.
+
+
+ randomize_va_space
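
These entries are ordinary files under /proc/sys/kernel/random and can simply
be read; ``uuid`` in particular yields a fresh value on every read. A tiny
reader, for illustration:

  #include <stdio.h>

  int main(void)
  {
          char line[128];
          FILE *f = fopen("/proc/sys/kernel/random/uuid", "r");

          if (f && fgets(line, sizeof(line), f))
                  printf("fresh uuid: %s", line); /* new value each time */
          if (f)
                  fclose(f);
          return 0;
  }
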
+diff --git a/Makefile b/Makefile
+index b821f270a4ca6..25c44dda0ef37 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/alpha/include/asm/timex.h b/arch/alpha/include/asm/timex.h
+index b565cc6f408e9..f89798da8a147 100644
+--- a/arch/alpha/include/asm/timex.h
++++ b/arch/alpha/include/asm/timex.h
+@@ -28,5 +28,6 @@ static inline cycles_t get_cycles (void)
+ __asm__ __volatile__ ("rpcc %0" : "=r"(ret));
+ return ret;
+ }
++#define get_cycles get_cycles
+
+ #endif
+diff --git a/arch/arm/include/asm/timex.h b/arch/arm/include/asm/timex.h
+index 7c3b3671d6c25..6d1337c169cd3 100644
+--- a/arch/arm/include/asm/timex.h
++++ b/arch/arm/include/asm/timex.h
+@@ -11,5 +11,6 @@
+
+ typedef unsigned long cycles_t;
+ #define get_cycles() ({ cycles_t c; read_current_timer(&c) ? 0 : c; })
++#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback())
+
+ #endif
+diff --git a/arch/ia64/include/asm/timex.h b/arch/ia64/include/asm/timex.h
+index 869a3ac6bf23a..7ccc077a60bed 100644
+--- a/arch/ia64/include/asm/timex.h
++++ b/arch/ia64/include/asm/timex.h
+@@ -39,6 +39,7 @@ get_cycles (void)
+ ret = ia64_getreg(_IA64_REG_AR_ITC);
+ return ret;
+ }
++#define get_cycles get_cycles
+
+ extern void ia64_cpu_local_tick (void);
+ extern unsigned long long ia64_native_sched_clock (void);
+diff --git a/arch/m68k/include/asm/timex.h b/arch/m68k/include/asm/timex.h
+index 6a21d93582805..f4a7a340f4cae 100644
+--- a/arch/m68k/include/asm/timex.h
++++ b/arch/m68k/include/asm/timex.h
+@@ -35,7 +35,7 @@ static inline unsigned long random_get_entropy(void)
+ {
+ if (mach_random_get_entropy)
+ return mach_random_get_entropy();
+- return 0;
++ return random_get_entropy_fallback();
+ }
+ #define random_get_entropy random_get_entropy
+
+diff --git a/arch/mips/include/asm/timex.h b/arch/mips/include/asm/timex.h
+index 8026baf46e729..2e107886f97ac 100644
+--- a/arch/mips/include/asm/timex.h
++++ b/arch/mips/include/asm/timex.h
+@@ -76,25 +76,24 @@ static inline cycles_t get_cycles(void)
+ else
+ return 0; /* no usable counter */
+ }
++#define get_cycles get_cycles
+
+ /*
+ * Like get_cycles - but where c0_count is not available we desperately
+ * use c0_random in an attempt to get at least a little bit of entropy.
+- *
+- * R6000 and R6000A neither have a count register nor a random register.
+- * That leaves no entropy source in the CPU itself.
+ */
+ static inline unsigned long random_get_entropy(void)
+ {
+- unsigned int prid = read_c0_prid();
+- unsigned int imp = prid & PRID_IMP_MASK;
++ unsigned int c0_random;
+
+- if (can_use_mips_counter(prid))
++ if (can_use_mips_counter(read_c0_prid()))
+ return read_c0_count();
+- else if (likely(imp != PRID_IMP_R6000 && imp != PRID_IMP_R6000A))
+- return read_c0_random();
++
++ if (cpu_has_3kex)
++ c0_random = (read_c0_random() >> 8) & 0x3f;
+ else
+- return 0; /* no usable register */
++ c0_random = read_c0_random() & 0x3f;
++ return (random_get_entropy_fallback() << 6) | (0x3f - c0_random);
+ }
+ #define random_get_entropy random_get_entropy
+
+diff --git a/arch/nios2/include/asm/timex.h b/arch/nios2/include/asm/timex.h
+index a769f871b28d9..40a1adc9bd03e 100644
+--- a/arch/nios2/include/asm/timex.h
++++ b/arch/nios2/include/asm/timex.h
+@@ -8,5 +8,8 @@
+ typedef unsigned long cycles_t;
+
+ extern cycles_t get_cycles(void);
++#define get_cycles get_cycles
++
++#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback())
+
+ #endif
+diff --git a/arch/parisc/include/asm/timex.h b/arch/parisc/include/asm/timex.h
+index 06b510f8172e3..b4622cb06a75e 100644
+--- a/arch/parisc/include/asm/timex.h
++++ b/arch/parisc/include/asm/timex.h
+@@ -13,9 +13,10 @@
+
+ typedef unsigned long cycles_t;
+
+-static inline cycles_t get_cycles (void)
++static inline cycles_t get_cycles(void)
+ {
+ return mfctl(16);
+ }
++#define get_cycles get_cycles
+
+ #endif
+diff --git a/arch/powerpc/include/asm/timex.h b/arch/powerpc/include/asm/timex.h
+index fa2e76e4093a3..14b4489de52c5 100644
+--- a/arch/powerpc/include/asm/timex.h
++++ b/arch/powerpc/include/asm/timex.h
+@@ -19,6 +19,7 @@ static inline cycles_t get_cycles(void)
+ {
+ return mftb();
+ }
++#define get_cycles get_cycles
+
+ #endif /* __KERNEL__ */
+ #endif /* _ASM_POWERPC_TIMEX_H */
+diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
+index 507cae273bc62..d6a7428f6248d 100644
+--- a/arch/riscv/include/asm/timex.h
++++ b/arch/riscv/include/asm/timex.h
+@@ -41,7 +41,7 @@ static inline u32 get_cycles_hi(void)
+ static inline unsigned long random_get_entropy(void)
+ {
+ if (unlikely(clint_time_val == NULL))
+- return 0;
++ return random_get_entropy_fallback();
+ return get_cycles();
+ }
+ #define random_get_entropy() random_get_entropy()
+diff --git a/arch/s390/include/asm/timex.h b/arch/s390/include/asm/timex.h
+index 50d9b04ecbd14..bc50ee0e91ff1 100644
+--- a/arch/s390/include/asm/timex.h
++++ b/arch/s390/include/asm/timex.h
+@@ -201,6 +201,7 @@ static inline cycles_t get_cycles(void)
+ {
+ return (cycles_t) get_tod_clock() >> 2;
+ }
++#define get_cycles get_cycles
+
+ int get_phys_clock(unsigned long *clock);
+ void init_cpu_timer(void);
+diff --git a/arch/sparc/include/asm/timex_32.h b/arch/sparc/include/asm/timex_32.h
+index 542915b462097..f86326a6f89e0 100644
+--- a/arch/sparc/include/asm/timex_32.h
++++ b/arch/sparc/include/asm/timex_32.h
+@@ -9,8 +9,6 @@
+
+ #define CLOCK_TICK_RATE 1193180 /* Underlying HZ */
+
+-/* XXX Maybe do something better at some point... -DaveM */
+-typedef unsigned long cycles_t;
+-#define get_cycles() (0)
++#include <asm-generic/timex.h>
+
+ #endif
+diff --git a/arch/um/include/asm/timex.h b/arch/um/include/asm/timex.h
+index e392a9a5bc9bd..9f27176adb26d 100644
+--- a/arch/um/include/asm/timex.h
++++ b/arch/um/include/asm/timex.h
+@@ -2,13 +2,8 @@
+ #ifndef __UM_TIMEX_H
+ #define __UM_TIMEX_H
+
+-typedef unsigned long cycles_t;
+-
+-static inline cycles_t get_cycles (void)
+-{
+- return 0;
+-}
+-
+ #define CLOCK_TICK_RATE (HZ)
+
++#include <asm-generic/timex.h>
++
+ #endif
+diff --git a/arch/x86/include/asm/timex.h b/arch/x86/include/asm/timex.h
+index a4a8b1b16c0c1..956e4145311b1 100644
+--- a/arch/x86/include/asm/timex.h
++++ b/arch/x86/include/asm/timex.h
+@@ -5,6 +5,15 @@
+ #include <asm/processor.h>
+ #include <asm/tsc.h>
+
++static inline unsigned long random_get_entropy(void)
++{
++ if (!IS_ENABLED(CONFIG_X86_TSC) &&
++ !cpu_feature_enabled(X86_FEATURE_TSC))
++ return random_get_entropy_fallback();
++ return rdtsc();
++}
++#define random_get_entropy random_get_entropy
++
+ /* Assume we use the PIT time source for the clock tick */
+ #define CLOCK_TICK_RATE PIT_TICK_RATE
+
+diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
+index 01a300a9700b9..fbdc3d9514943 100644
+--- a/arch/x86/include/asm/tsc.h
++++ b/arch/x86/include/asm/tsc.h
+@@ -20,13 +20,12 @@ extern void disable_TSC(void);
+
+ static inline cycles_t get_cycles(void)
+ {
+-#ifndef CONFIG_X86_TSC
+- if (!boot_cpu_has(X86_FEATURE_TSC))
++ if (!IS_ENABLED(CONFIG_X86_TSC) &&
++ !cpu_feature_enabled(X86_FEATURE_TSC))
+ return 0;
+-#endif
+-
+ return rdtsc();
+ }
++#define get_cycles get_cycles
+
+ extern struct system_counterval_t convert_art_to_tsc(u64 art);
+ extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 32333dfc85b6a..495329ae6b1b2 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5416,14 +5416,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid)
+ uint i;
+
+ if (pcid == kvm_get_active_pcid(vcpu)) {
+- mmu->invlpg(vcpu, gva, mmu->root_hpa);
++ if (mmu->invlpg)
++ mmu->invlpg(vcpu, gva, mmu->root_hpa);
+ tlb_flush = true;
+ }
+
+ for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
+ if (VALID_PAGE(mmu->prev_roots[i].hpa) &&
+ pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) {
+- mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
++ if (mmu->invlpg)
++ mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
+ tlb_flush = true;
+ }
+ }
+diff --git a/arch/xtensa/include/asm/timex.h b/arch/xtensa/include/asm/timex.h
+index 233ec75e60c69..3f2462f2d0270 100644
+--- a/arch/xtensa/include/asm/timex.h
++++ b/arch/xtensa/include/asm/timex.h
+@@ -29,10 +29,6 @@
+
+ extern unsigned long ccount_freq;
+
+-typedef unsigned long long cycles_t;
+-
+-#define get_cycles() (0)
+-
+ void local_timer_setup(unsigned cpu);
+
+ /*
+@@ -59,4 +55,6 @@ static inline void set_linux_timer (unsigned long ccompare)
+ xtensa_set_sr(ccompare, SREG_CCOMPARE + LINUX_TIMER);
+ }
+
++#include <asm-generic/timex.h>
++
+ #endif /* _XTENSA_TIMEX_H */
+diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
+index a4b638bea6f16..cc2fe0618178e 100644
+--- a/drivers/acpi/sysfs.c
++++ b/drivers/acpi/sysfs.c
+@@ -415,19 +415,30 @@ static ssize_t acpi_data_show(struct file *filp, struct kobject *kobj,
+ loff_t offset, size_t count)
+ {
+ struct acpi_data_attr *data_attr;
+- void *base;
+- ssize_t rc;
++ void __iomem *base;
++ ssize_t size;
+
+ data_attr = container_of(bin_attr, struct acpi_data_attr, attr);
++ size = data_attr->attr.size;
++
++ if (offset < 0)
++ return -EINVAL;
++
++ if (offset >= size)
++ return 0;
+
+- base = acpi_os_map_memory(data_attr->addr, data_attr->attr.size);
++ if (count > size - offset)
++ count = size - offset;
++
++ base = acpi_os_map_iomem(data_attr->addr, size);
+ if (!base)
+ return -ENOMEM;
+- rc = memory_read_from_buffer(buf, count, &offset, base,
+- data_attr->attr.size);
+- acpi_os_unmap_memory(base, data_attr->attr.size);
+
+- return rc;
++ memcpy_fromio(buf, base + offset, count);
++
++ acpi_os_unmap_iomem(base, size);
++
++ return count;
+ }
+
+ static int acpi_bert_data_init(void *th, struct acpi_data_attr *data_attr)
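+
The rewritten acpi_data_show() above follows the usual bounds pattern for binary sysfs reads: reject negative offsets, return 0 at or past EOF, and clamp the copy length to what remains. A self-contained sketch of that pattern (plain memcpy standing in for memcpy_fromio, a local buffer for the mapped table):

#include <stdio.h>
#include <string.h>

/* Sketch of the bounds handling; not the driver code itself. */
static long clamped_read(char *dst, const char *src, size_t size,
                         long offset, size_t count)
{
	if (offset < 0)
		return -22;                    /* -EINVAL */
	if ((size_t)offset >= size)
		return 0;                      /* at/past EOF */
	if (count > size - (size_t)offset)
		count = size - (size_t)offset; /* clamp to remaining bytes */
	memcpy(dst, src + offset, count);
	return (long)count;
}

int main(void)
{
	char table[8] = { 'A', 'C', 'P', 'I', 'D', 'A', 'T', 'A' };
	char out[8];
	long n = clamped_read(out, table, sizeof(table), 4, 8);

	printf("read %ld bytes: %.4s\n", n, out); /* read 4 bytes: DATA */
	return 0;
}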
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 740811893c570..55f48375e3fe5 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -449,6 +449,7 @@ config RANDOM_TRUST_BOOTLOADER
+ device randomness. Say Y here to assume the entropy provided by the
+ bootloader is trustworthy so it will be added to the kernel's entropy
+ pool. Otherwise, say N here so it will be regarded as device input that
+- only mixes the entropy pool.
++ only mixes the entropy pool. This can also be configured at boot with
++ "random.trust_bootloader=on/off".
+
+ endmenu
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index a3db27916256d..cfb085de876b7 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -15,6 +15,7 @@
+ #include <linux/err.h>
+ #include <linux/fs.h>
+ #include <linux/hw_random.h>
++#include <linux/random.h>
+ #include <linux/kernel.h>
+ #include <linux/kthread.h>
+ #include <linux/sched/signal.h>
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 3404a91edf292..92428bfdc1431 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1,320 +1,26 @@
++// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+ /*
+- * random.c -- A strong random number generator
+- *
+ * Copyright (C) 2017-2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+- *
+ * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
+- *
+- * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
+- * rights reserved.
+- *
+- * Redistribution and use in source and binary forms, with or without
+- * modification, are permitted provided that the following conditions
+- * are met:
+- * 1. Redistributions of source code must retain the above copyright
+- * notice, and the entire permission notice in its entirety,
+- * including the disclaimer of warranties.
+- * 2. Redistributions in binary form must reproduce the above copyright
+- * notice, this list of conditions and the following disclaimer in the
+- * documentation and/or other materials provided with the distribution.
+- * 3. The name of the author may not be used to endorse or promote
+- * products derived from this software without specific prior
+- * written permission.
+- *
+- * ALTERNATIVELY, this product may be distributed under the terms of
+- * the GNU General Public License, in which case the provisions of the GPL are
+- * required INSTEAD OF the above restrictions. (This clause is
+- * necessary due to a potential bad interaction between the GPL and
+- * the restrictions contained in a BSD-style copyright.)
+- *
+- * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+- * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+- * WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE
+- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+- * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+- * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+- * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+- * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+- * DAMAGE.
+- */
+-
+-/*
+- * (now, with legal B.S. out of the way.....)
+- *
+- * This routine gathers environmental noise from device drivers, etc.,
+- * and returns good random numbers, suitable for cryptographic use.
+- * Besides the obvious cryptographic uses, these numbers are also good
+- * for seeding TCP sequence numbers, and other places where it is
+- * desirable to have numbers which are not only random, but hard to
+- * predict by an attacker.
+- *
+- * Theory of operation
+- * ===================
+- *
+- * Computers are very predictable devices. Hence it is extremely hard
+- * to produce truly random numbers on a computer --- as opposed to
+- * pseudo-random numbers, which can easily generated by using a
+- * algorithm. Unfortunately, it is very easy for attackers to guess
+- * the sequence of pseudo-random number generators, and for some
+- * applications this is not acceptable. So instead, we must try to
+- * gather "environmental noise" from the computer's environment, which
+- * must be hard for outside attackers to observe, and use that to
+- * generate random numbers. In a Unix environment, this is best done
+- * from inside the kernel.
+- *
+- * Sources of randomness from the environment include inter-keyboard
+- * timings, inter-interrupt timings from some interrupts, and other
+- * events which are both (a) non-deterministic and (b) hard for an
+- * outside observer to measure. Randomness from these sources are
+- * added to an "entropy pool", which is mixed using a CRC-like function.
+- * This is not cryptographically strong, but it is adequate assuming
+- * the randomness is not chosen maliciously, and it is fast enough that
+- * the overhead of doing it on every interrupt is very reasonable.
+- * As random bytes are mixed into the entropy pool, the routines keep
+- * an *estimate* of how many bits of randomness have been stored into
+- * the random number generator's internal state.
+- *
+- * When random bytes are desired, they are obtained by taking the BLAKE2s
+- * hash of the contents of the "entropy pool". The BLAKE2s hash avoids
+- * exposing the internal state of the entropy pool. It is believed to
+- * be computationally infeasible to derive any useful information
+- * about the input of BLAKE2s from its output. Even if it is possible to
+- * analyze BLAKE2s in some clever way, as long as the amount of data
+- * returned from the generator is less than the inherent entropy in
+- * the pool, the output data is totally unpredictable. For this
+- * reason, the routine decreases its internal estimate of how many
+- * bits of "true randomness" are contained in the entropy pool as it
+- * outputs random numbers.
+- *
+- * If this estimate goes to zero, the routine can still generate
+- * random numbers; however, an attacker may (at least in theory) be
+- * able to infer the future output of the generator from prior
+- * outputs. This requires successful cryptanalysis of BLAKE2s, which is
+- * not believed to be feasible, but there is a remote possibility.
+- * Nonetheless, these numbers should be useful for the vast majority
+- * of purposes.
+- *
+- * Exported interfaces ---- output
+- * ===============================
+- *
+- * There are four exported interfaces; two for use within the kernel,
+- * and two for use from userspace.
+- *
+- * Exported interfaces ---- userspace output
+- * -----------------------------------------
+- *
+- * The userspace interfaces are two character devices /dev/random and
+- * /dev/urandom. /dev/random is suitable for use when very high
+- * quality randomness is desired (for example, for key generation or
+- * one-time pads), as it will only return a maximum of the number of
+- * bits of randomness (as estimated by the random number generator)
+- * contained in the entropy pool.
+- *
+- * The /dev/urandom device does not have this limit, and will return
+- * as many bytes as are requested. As more and more random bytes are
+- * requested without giving time for the entropy pool to recharge,
+- * this will result in random numbers that are merely cryptographically
+- * strong. For many applications, however, this is acceptable.
+- *
+- * Exported interfaces ---- kernel output
+- * --------------------------------------
+- *
+- * The primary kernel interface is
+- *
+- * void get_random_bytes(void *buf, int nbytes);
+- *
+- * This interface will return the requested number of random bytes,
+- * and place it in the requested buffer. This is equivalent to a
+- * read from /dev/urandom.
+- *
+- * For less critical applications, there are the functions:
+- *
+- * u32 get_random_u32()
+- * u64 get_random_u64()
+- * unsigned int get_random_int()
+- * unsigned long get_random_long()
+- *
+- * These are produced by a cryptographic RNG seeded from get_random_bytes,
+- * and so do not deplete the entropy pool as much. These are recommended
+- * for most in-kernel operations *if the result is going to be stored in
+- * the kernel*.
+- *
+- * Specifically, the get_random_int() family do not attempt to do
+- * "anti-backtracking". If you capture the state of the kernel (e.g.
+- * by snapshotting the VM), you can figure out previous get_random_int()
+- * return values. But if the value is stored in the kernel anyway,
+- * this is not a problem.
+- *
+- * It *is* safe to expose get_random_int() output to attackers (e.g. as
+- * network cookies); given outputs 1..n, it's not feasible to predict
+- * outputs 0 or n+1. The only concern is an attacker who breaks into
+- * the kernel later; the get_random_int() engine is not reseeded as
+- * often as the get_random_bytes() one.
+- *
+- * get_random_bytes() is needed for keys that need to stay secret after
+- * they are erased from the kernel. For example, any key that will
+- * be wrapped and stored encrypted. And session encryption keys: we'd
+- * like to know that after the session is closed and the keys erased,
+- * the plaintext is unrecoverable to someone who recorded the ciphertext.
+- *
+- * But for network ports/cookies, stack canaries, PRNG seeds, address
+- * space layout randomization, session *authentication* keys, or other
+- * applications where the sensitive data is stored in the kernel in
+- * plaintext for as long as it's sensitive, the get_random_int() family
+- * is just fine.
+- *
+- * Consider ASLR. We want to keep the address space secret from an
+- * outside attacker while the process is running, but once the address
+- * space is torn down, it's of no use to an attacker any more. And it's
+- * stored in kernel data structures as long as it's alive, so worrying
+- * about an attacker's ability to extrapolate it from the get_random_int()
+- * CRNG is silly.
+- *
+- * Even some cryptographic keys are safe to generate with get_random_int().
+- * In particular, keys for SipHash are generally fine. Here, knowledge
+- * of the key authorizes you to do something to a kernel object (inject
+- * packets to a network connection, or flood a hash table), and the
+- * key is stored with the object being protected. Once it goes away,
+- * we no longer care if anyone knows the key.
+- *
+- * prandom_u32()
+- * -------------
+- *
+- * For even weaker applications, see the pseudorandom generator
+- * prandom_u32(), prandom_max(), and prandom_bytes(). If the random
+- * numbers aren't security-critical at all, these are *far* cheaper.
+- * Useful for self-tests, random error simulation, randomized backoffs,
+- * and any other application where you trust that nobody is trying to
+- * maliciously mess with you by guessing the "random" numbers.
+- *
+- * Exported interfaces ---- input
+- * ==============================
+- *
+- * The current exported interfaces for gathering environmental noise
+- * from the devices are:
+- *
+- * void add_device_randomness(const void *buf, unsigned int size);
+- * void add_input_randomness(unsigned int type, unsigned int code,
+- * unsigned int value);
+- * void add_interrupt_randomness(int irq);
+- * void add_disk_randomness(struct gendisk *disk);
+- * void add_hwgenerator_randomness(const char *buffer, size_t count,
+- * size_t entropy);
+- * void add_bootloader_randomness(const void *buf, unsigned int size);
+- *
+- * add_device_randomness() is for adding data to the random pool that
+- * is likely to differ between two devices (or possibly even per boot).
+- * This would be things like MAC addresses or serial numbers, or the
+- * read-out of the RTC. This does *not* add any actual entropy to the
+- * pool, but it initializes the pool to different values for devices
+- * that might otherwise be identical and have very little entropy
+- * available to them (particularly common in the embedded world).
+- *
+- * add_input_randomness() uses the input layer interrupt timing, as well as
+- * the event type information from the hardware.
+- *
+- * add_interrupt_randomness() uses the interrupt timing as random
+- * inputs to the entropy pool. Using the cycle counters and the irq source
+- * as inputs, it feeds the randomness roughly once a second.
+- *
+- * add_disk_randomness() uses what amounts to the seek time of block
+- * layer request events, on a per-disk_devt basis, as input to the
+- * entropy pool. Note that high-speed solid state drives with very low
+- * seek times do not make for good sources of entropy, as their seek
+- * times are usually fairly consistent.
+- *
+- * All of these routines try to estimate how many bits of randomness a
+- * particular randomness source. They do this by keeping track of the
+- * first and second order deltas of the event timings.
+- *
+- * add_hwgenerator_randomness() is for true hardware RNGs, and will credit
+- * entropy as specified by the caller. If the entropy pool is full it will
+- * block until more entropy is needed.
+- *
+- * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or
+- * add_device_randomness(), depending on whether or not the configuration
+- * option CONFIG_RANDOM_TRUST_BOOTLOADER is set.
+- *
+- * Ensuring unpredictability at system startup
+- * ============================================
+- *
+- * When any operating system starts up, it will go through a sequence
+- * of actions that are fairly predictable by an adversary, especially
+- * if the start-up does not involve interaction with a human operator.
+- * This reduces the actual number of bits of unpredictability in the
+- * entropy pool below the value in entropy_count. In order to
+- * counteract this effect, it helps to carry information in the
+- * entropy pool across shut-downs and start-ups. To do this, put the
+- * following lines an appropriate script which is run during the boot
+- * sequence:
+- *
+- * echo "Initializing random number generator..."
+- * random_seed=/var/run/random-seed
+- * # Carry a random seed from start-up to start-up
+- * # Load and then save the whole entropy pool
+- * if [ -f $random_seed ]; then
+- * cat $random_seed >/dev/urandom
+- * else
+- * touch $random_seed
+- * fi
+- * chmod 600 $random_seed
+- * dd if=/dev/urandom of=$random_seed count=1 bs=512
+- *
+- * and the following lines in an appropriate script which is run as
+- * the system is shutdown:
+- *
+- * # Carry a random seed from shut-down to start-up
+- * # Save the whole entropy pool
+- * echo "Saving random seed..."
+- * random_seed=/var/run/random-seed
+- * touch $random_seed
+- * chmod 600 $random_seed
+- * dd if=/dev/urandom of=$random_seed count=1 bs=512
+- *
+- * For example, on most modern systems using the System V init
+- * scripts, such code fragments would be found in
+- * /etc/rc.d/init.d/random. On older Linux systems, the correct script
+- * location might be in /etc/rcb.d/rc.local or /etc/rc.d/rc.0.
+- *
+- * Effectively, these commands cause the contents of the entropy pool
+- * to be saved at shut-down time and reloaded into the entropy pool at
+- * start-up. (The 'dd' in the addition to the bootup script is to
+- * make sure that /etc/random-seed is different for every start-up,
+- * even if the system crashes without executing rc.0.) Even with
+- * complete knowledge of the start-up activities, predicting the state
+- * of the entropy pool requires knowledge of the previous history of
+- * the system.
+- *
+- * Configuring the /dev/random driver under Linux
+- * ==============================================
+- *
+- * The /dev/random driver under Linux uses minor numbers 8 and 9 of
+- * the /dev/mem major number (#1). So if your system does not have
+- * /dev/random and /dev/urandom created already, they can be created
+- * by using the commands:
+- *
+- * mknod /dev/random c 1 8
+- * mknod /dev/urandom c 1 9
+- *
+- * Acknowledgements:
+- * =================
+- *
+- * Ideas for constructing this random number generator were derived
+- * from Pretty Good Privacy's random number generator, and from private
+- * discussions with Phil Karn. Colin Plumb provided a faster random
+- * number generator, which speed up the mixing function of the entropy
+- * pool, taken from PGPfone. Dale Worley has also contributed many
+- * useful ideas and suggestions to improve this driver.
+- *
+- * Any flaws in the design are solely my responsibility, and should
+- * not be attributed to the Phil, Colin, or any of authors of PGP.
+- *
+- * Further background information on this topic may be obtained from
+- * RFC 1750, "Randomness Recommendations for Security", by Donald
+- * Eastlake, Steve Crocker, and Jeff Schiller.
++ * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All rights reserved.
++ *
++ * This driver produces cryptographically secure pseudorandom data. It is divided
++ * into roughly six sections, each with a section header:
++ *
++ * - Initialization and readiness waiting.
++ * - Fast key erasure RNG, the "crng".
++ * - Entropy accumulation and extraction routines.
++ * - Entropy collection routines.
++ * - Userspace reader/writer interfaces.
++ * - Sysctl interface.
++ *
++ * The high level overview is that there is one input pool, into which
++ * various pieces of data are hashed. Prior to initialization, some of that
++ * data is then "credited" as having a certain number of bits of entropy.
++ * When enough bits of entropy are available, the hash is finalized and
++ * handed as a key to a stream cipher that expands it indefinitely for
++ * various consumers. This key is periodically refreshed as the various
++ * entropy collectors, described below, add data to the input pool.
+ */
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
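+
The flow described in the new header comment can be pictured as a toy model: inputs are mixed into a single pool; once enough entropy has been credited, the pool is finalized into a key for a stream generator that expands it. A sketch only, NOT the kernel's implementation (the real driver hashes with BLAKE2s and expands with ChaCha20; a trivial mixer and xorshift stand in here so it runs):

#include <stdint.h>
#include <stdio.h>

static uint64_t pool;          /* "input pool": everything gets mixed in */
static unsigned credited_bits; /* entropy credited so far */
static uint64_t key;           /* "crng key", refreshed from the pool */

static void mix_into_pool(const char *buf, unsigned len, unsigned bits)
{
	while (len--)  /* toy mixer: rotate-and-xor each byte */
		pool = ((pool << 7) | (pool >> 57)) ^ (uint8_t)*buf++;
	credited_bits += bits;
}

static uint64_t stream_next(void)
{
	/* toy stream generator (xorshift64) standing in for ChaCha20 */
	key ^= key << 13; key ^= key >> 7; key ^= key << 17;
	return key;
}

int main(void)
{
	mix_into_pool("boot-time jitter", 16, 128); /* pretend 128 bits */
	if (credited_bits >= 128)
		key = pool;                         /* "finalize" into a key */
	for (int i = 0; i < 4; i++)
		printf("%016llx\n", (unsigned long long)stream_next());
	return 0;
}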
+@@ -344,1371 +50,1080 @@
+ #include <linux/syscalls.h>
+ #include <linux/completion.h>
+ #include <linux/uuid.h>
++#include <linux/uaccess.h>
++#include <linux/siphash.h>
++#include <linux/uio.h>
+ #include <crypto/chacha.h>
+ #include <crypto/blake2s.h>
+-
+ #include <asm/processor.h>
+-#include <linux/uaccess.h>
+ #include <asm/irq.h>
+ #include <asm/irq_regs.h>
+ #include <asm/io.h>
+
+-#define CREATE_TRACE_POINTS
+-#include <trace/events/random.h>
+-
+-/* #define ADD_INTERRUPT_BENCH */
+-
+-/*
+- * If the entropy count falls under this number of bits, then we
+- * should wake up processes which are selecting or polling on write
+- * access to /dev/random.
+- */
+-static int random_write_wakeup_bits = 28 * (1 << 5);
+-
+-/*
+- * Originally, we used a primitive polynomial of degree .poolwords
+- * over GF(2). The taps for various sizes are defined below. They
+- * were chosen to be evenly spaced except for the last tap, which is 1
+- * to get the twisting happening as fast as possible.
+- *
+- * For the purposes of better mixing, we use the CRC-32 polynomial as
+- * well to make a (modified) twisted Generalized Feedback Shift
+- * Register. (See M. Matsumoto & Y. Kurita, 1992. Twisted GFSR
+- * generators. ACM Transactions on Modeling and Computer Simulation
+- * 2(3):179-194. Also see M. Matsumoto & Y. Kurita, 1994. Twisted
+- * GFSR generators II. ACM Transactions on Modeling and Computer
+- * Simulation 4:254-266)
++/*********************************************************************
+ *
+- * Thanks to Colin Plumb for suggesting this.
++ * Initialization and readiness waiting.
+ *
+- * The mixing operation is much less sensitive than the output hash,
+- * where we use BLAKE2s. All that we want of mixing operation is that
+- * it be a good non-cryptographic hash; i.e. it not produce collisions
+- * when fed "random" data of the sort we expect to see. As long as
+- * the pool state differs for different inputs, we have preserved the
+- * input entropy and done a good job. The fact that an intelligent
+- * attacker can construct inputs that will produce controlled
+- * alterations to the pool's state is not important because we don't
+- * consider such inputs to contribute any randomness. The only
+- * property we need with respect to them is that the attacker can't
+- * increase his/her knowledge of the pool's state. Since all
+- * additions are reversible (knowing the final state and the input,
+- * you can reconstruct the initial state), if an attacker has any
+- * uncertainty about the initial state, he/she can only shuffle that
+- * uncertainty about, but never cause any collisions (which would
+- * decrease the uncertainty).
++ * Much of the RNG infrastructure is devoted to various dependencies
++ * being able to wait until the RNG has collected enough entropy and
++ * is ready for safe consumption.
+ *
+- * Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
+- * Videau in their paper, "The Linux Pseudorandom Number Generator
+- * Revisited" (see: http://eprint.iacr.org/2012/251.pdf). In their
+- * paper, they point out that we are not using a true Twisted GFSR,
+- * since Matsumoto & Kurita used a trinomial feedback polynomial (that
+- * is, with only three taps, instead of the six that we are using).
+- * As a result, the resulting polynomial is neither primitive nor
+- * irreducible, and hence does not have a maximal period over
+- * GF(2**32). They suggest a slight change to the generator
+- * polynomial which improves the resulting TGFSR polynomial to be
+- * irreducible, which we have made here.
+- */
+-enum poolinfo {
+- POOL_WORDS = 128,
+- POOL_WORDMASK = POOL_WORDS - 1,
+- POOL_BYTES = POOL_WORDS * sizeof(u32),
+- POOL_BITS = POOL_BYTES * 8,
+- POOL_BITSHIFT = ilog2(POOL_BITS),
+-
+- /* To allow fractional bits to be tracked, the entropy_count field is
+- * denominated in units of 1/8th bits. */
+- POOL_ENTROPY_SHIFT = 3,
+-#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT)
+- POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT,
+-
+- /* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */
+- POOL_TAP1 = 104,
+- POOL_TAP2 = 76,
+- POOL_TAP3 = 51,
+- POOL_TAP4 = 25,
+- POOL_TAP5 = 1,
+-
+- EXTRACT_SIZE = BLAKE2S_HASH_SIZE / 2
+-};
++ *********************************************************************/
+
+ /*
+- * Static global variables
++ * crng_init is protected by base_crng->lock, and only increases
++ * its value (from empty->early->ready).
+ */
+-static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
++static enum {
++ CRNG_EMPTY = 0, /* Little to no entropy collected */
++ CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
++ CRNG_READY = 2 /* Fully initialized with POOL_READY_BITS collected */
++} crng_init __read_mostly = CRNG_EMPTY;
++static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
++#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY)
++/* Various types of waiters for crng_init->CRNG_READY transition. */
++static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
+ static struct fasync_struct *fasync;
++static DEFINE_SPINLOCK(random_ready_chain_lock);
++static RAW_NOTIFIER_HEAD(random_ready_chain);
+
+-static DEFINE_SPINLOCK(random_ready_list_lock);
+-static LIST_HEAD(random_ready_list);
+-
+-struct crng_state {
+- u32 state[16];
+- unsigned long init_time;
+- spinlock_t lock;
+-};
+-
+-static struct crng_state primary_crng = {
+- .lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock),
+- .state[0] = CHACHA_CONSTANT_EXPA,
+- .state[1] = CHACHA_CONSTANT_ND_3,
+- .state[2] = CHACHA_CONSTANT_2_BY,
+- .state[3] = CHACHA_CONSTANT_TE_K,
+-};
+-
+-/*
+- * crng_init = 0 --> Uninitialized
+- * 1 --> Initialized
+- * 2 --> Initialized from input_pool
+- *
+- * crng_init is protected by primary_crng->lock, and only increases
+- * its value (from 0->1->2).
+- */
+-static int crng_init = 0;
+-static bool crng_need_final_init = false;
+-#define crng_ready() (likely(crng_init > 1))
+-static int crng_init_cnt = 0;
+-static unsigned long crng_global_init_time = 0;
+-#define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE)
+-static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]);
+-static void _crng_backtrack_protect(struct crng_state *crng,
+- u8 tmp[CHACHA_BLOCK_SIZE], int used);
+-static void process_random_ready_list(void);
+-static void _get_random_bytes(void *buf, int nbytes);
+-
+-static struct ratelimit_state unseeded_warning =
+- RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3);
++/* Control how we warn userspace. */
+ static struct ratelimit_state urandom_warning =
+ RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);
+-
+-static int ratelimit_disable __read_mostly;
+-
++static int ratelimit_disable __read_mostly =
++ IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);
+ module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
+ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
+
+-/**********************************************************************
+- *
+- * OS independent entropy store. Here are the functions which handle
+- * storing entropy in an entropy pool.
+- *
+- **********************************************************************/
+-
+-static u32 input_pool_data[POOL_WORDS] __latent_entropy;
+-
+-static struct {
+- spinlock_t lock;
+- u16 add_ptr;
+- u16 input_rotate;
+- int entropy_count;
+-} input_pool = {
+- .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
+-};
+-
+-static ssize_t extract_entropy(void *buf, size_t nbytes, int min);
+-static ssize_t _extract_entropy(void *buf, size_t nbytes);
+-
+-static void crng_reseed(struct crng_state *crng, bool use_input_pool);
+-
+-static const u32 twist_table[8] = {
+- 0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
+- 0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
+-
+ /*
+- * This function adds bytes into the entropy "pool". It does not
+- * update the entropy estimate. The caller should call
+- * credit_entropy_bits if this is appropriate.
++ * Returns whether or not the input pool has been seeded and thus guaranteed
++ * to supply cryptographically secure random numbers. This applies to: the
++ * /dev/urandom device, the get_random_bytes function, and the get_random_{u32,
++ * u64,int,long} family of functions.
+ *
+- * The pool is stirred with a primitive polynomial of the appropriate
+- * degree, and then twisted. We twist by three bits at a time because
+- * it's cheap to do so and helps slightly in the expected case where
+- * the entropy is concentrated in the low-order bits.
++ * Returns: true if the input pool has been seeded.
++ * false if the input pool has not been seeded.
+ */
+-static void _mix_pool_bytes(const void *in, int nbytes)
+-{
+- unsigned long i;
+- int input_rotate;
+- const u8 *bytes = in;
+- u32 w;
+-
+- input_rotate = input_pool.input_rotate;
+- i = input_pool.add_ptr;
+-
+- /* mix one byte at a time to simplify size handling and churn faster */
+- while (nbytes--) {
+- w = rol32(*bytes++, input_rotate);
+- i = (i - 1) & POOL_WORDMASK;
+-
+- /* XOR in the various taps */
+- w ^= input_pool_data[i];
+- w ^= input_pool_data[(i + POOL_TAP1) & POOL_WORDMASK];
+- w ^= input_pool_data[(i + POOL_TAP2) & POOL_WORDMASK];
+- w ^= input_pool_data[(i + POOL_TAP3) & POOL_WORDMASK];
+- w ^= input_pool_data[(i + POOL_TAP4) & POOL_WORDMASK];
+- w ^= input_pool_data[(i + POOL_TAP5) & POOL_WORDMASK];
+-
+- /* Mix the result back in with a twist */
+- input_pool_data[i] = (w >> 3) ^ twist_table[w & 7];
+-
+- /*
+- * Normally, we add 7 bits of rotation to the pool.
+- * At the beginning of the pool, add an extra 7 bits
+- * rotation, so that successive passes spread the
+- * input bits across the pool evenly.
+- */
+- input_rotate = (input_rotate + (i ? 7 : 14)) & 31;
+- }
+-
+- input_pool.input_rotate = input_rotate;
+- input_pool.add_ptr = i;
+-}
+-
+-static void __mix_pool_bytes(const void *in, int nbytes)
++bool rng_is_initialized(void)
+ {
+- trace_mix_pool_bytes_nolock(nbytes, _RET_IP_);
+- _mix_pool_bytes(in, nbytes);
++ return crng_ready();
+ }
++EXPORT_SYMBOL(rng_is_initialized);
+
+-static void mix_pool_bytes(const void *in, int nbytes)
++static void __cold crng_set_ready(struct work_struct *work)
+ {
+- unsigned long flags;
+-
+- trace_mix_pool_bytes(nbytes, _RET_IP_);
+- spin_lock_irqsave(&input_pool.lock, flags);
+- _mix_pool_bytes(in, nbytes);
+- spin_unlock_irqrestore(&input_pool.lock, flags);
++ static_branch_enable(&crng_is_ready);
+ }
+
+-struct fast_pool {
+- u32 pool[4];
+- unsigned long last;
+- u16 reg_idx;
+- u8 count;
+-};
++/* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
++static void try_to_generate_entropy(void);
+
+ /*
+- * This is a fast mixing routine used by the interrupt randomness
+- * collector. It's hardcoded for an 128 bit pool and assumes that any
+- * locks that might be needed are taken by the caller.
++ * Wait for the input pool to be seeded and thus guaranteed to supply
++ * cryptographically secure random numbers. This applies to: the /dev/urandom
++ * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
++ * family of functions. Using any of these functions without first calling
++ * this function forfeits the guarantee of security.
++ *
++ * Returns: 0 if the input pool has been seeded.
++ * -ERESTARTSYS if the function was interrupted by a signal.
+ */
+-static void fast_mix(struct fast_pool *f)
++int wait_for_random_bytes(void)
+ {
+- u32 a = f->pool[0], b = f->pool[1];
+- u32 c = f->pool[2], d = f->pool[3];
+-
+- a += b; c += d;
+- b = rol32(b, 6); d = rol32(d, 27);
+- d ^= a; b ^= c;
+-
+- a += b; c += d;
+- b = rol32(b, 16); d = rol32(d, 14);
+- d ^= a; b ^= c;
+-
+- a += b; c += d;
+- b = rol32(b, 6); d = rol32(d, 27);
+- d ^= a; b ^= c;
+-
+- a += b; c += d;
+- b = rol32(b, 16); d = rol32(d, 14);
+- d ^= a; b ^= c;
++ while (!crng_ready()) {
++ int ret;
+
+- f->pool[0] = a; f->pool[1] = b;
+- f->pool[2] = c; f->pool[3] = d;
+- f->count++;
++ try_to_generate_entropy();
++ ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ);
++ if (ret)
++ return ret > 0 ? 0 : ret;
++ }
++ return 0;
+ }
++EXPORT_SYMBOL(wait_for_random_bytes);
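++
++/*
++ * Sketch of the intended caller pattern described above (illustrative
++ * kernel-style code, not part of this file): block, or bail out on a
++ * signal, until the pool is ready, then draw bytes.
++ *
++ *	static int example_get_key(u8 *key, size_t len)
++ *	{
++ *		int ret = wait_for_random_bytes(); // may be -ERESTARTSYS
++ *		if (ret)
++ *			return ret;
++ *		get_random_bytes(key, len);        // now guaranteed seeded
++ *		return 0;
++ *	}
++ */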
+
+-static void process_random_ready_list(void)
++/*
++ * Add a callback function that will be invoked when the input
++ * pool is initialised.
++ *
++ * Returns: 0 if callback is successfully added
++ * -EALREADY if pool is already initialised (callback not called)
++ */
++int __cold register_random_ready_notifier(struct notifier_block *nb)
+ {
+ unsigned long flags;
+- struct random_ready_callback *rdy, *tmp;
++ int ret = -EALREADY;
+
+- spin_lock_irqsave(&random_ready_list_lock, flags);
+- list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) {
+- struct module *owner = rdy->owner;
++ if (crng_ready())
++ return ret;
+
+- list_del_init(&rdy->list);
+- rdy->func(rdy);
+- module_put(owner);
+- }
+- spin_unlock_irqrestore(&random_ready_list_lock, flags);
++ spin_lock_irqsave(&random_ready_chain_lock, flags);
++ if (!crng_ready())
++ ret = raw_notifier_chain_register(&random_ready_chain, nb);
++ spin_unlock_irqrestore(&random_ready_chain_lock, flags);
++ return ret;
+ }
+
+ /*
+- * Credit (or debit) the entropy store with n bits of entropy.
+- * Use credit_entropy_bits_safe() if the value comes from userspace
+- * or otherwise should be checked for extreme values.
++ * Delete a previously registered readiness callback function.
+ */
+-static void credit_entropy_bits(int nbits)
++int __cold unregister_random_ready_notifier(struct notifier_block *nb)
+ {
+- int entropy_count, entropy_bits, orig;
+- int nfrac = nbits << POOL_ENTROPY_SHIFT;
+-
+- /* Ensure that the multiplication can avoid being 64 bits wide. */
+- BUILD_BUG_ON(2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) > 31);
+-
+- if (!nbits)
+- return;
+-
+-retry:
+- entropy_count = orig = READ_ONCE(input_pool.entropy_count);
+- if (nfrac < 0) {
+- /* Debit */
+- entropy_count += nfrac;
+- } else {
+- /*
+- * Credit: we have to account for the possibility of
+- * overwriting already present entropy. Even in the
+- * ideal case of pure Shannon entropy, new contributions
+- * approach the full value asymptotically:
+- *
+- * entropy <- entropy + (pool_size - entropy) *
+- * (1 - exp(-add_entropy/pool_size))
+- *
+- * For add_entropy <= pool_size/2 then
+- * (1 - exp(-add_entropy/pool_size)) >=
+- * (add_entropy/pool_size)*0.7869...
+- * so we can approximate the exponential with
+- * 3/4*add_entropy/pool_size and still be on the
+- * safe side by adding at most pool_size/2 at a time.
+- *
+- * The use of pool_size-2 in the while statement is to
+- * prevent rounding artifacts from making the loop
+- * arbitrarily long; this limits the loop to log2(pool_size)*2
+- * turns no matter how large nbits is.
+- */
+- int pnfrac = nfrac;
+- const int s = POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2;
+- /* The +2 corresponds to the /4 in the denominator */
+-
+- do {
+- unsigned int anfrac = min(pnfrac, POOL_FRACBITS / 2);
+- unsigned int add =
+- ((POOL_FRACBITS - entropy_count) * anfrac * 3) >> s;
+-
+- entropy_count += add;
+- pnfrac -= anfrac;
+- } while (unlikely(entropy_count < POOL_FRACBITS - 2 && pnfrac));
+- }
+-
+- if (WARN_ON(entropy_count < 0)) {
+- pr_warn("negative entropy/overflow: count %d\n", entropy_count);
+- entropy_count = 0;
+- } else if (entropy_count > POOL_FRACBITS)
+- entropy_count = POOL_FRACBITS;
+- if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig)
+- goto retry;
+-
+- trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RET_IP_);
++ unsigned long flags;
++ int ret;
+
+- entropy_bits = entropy_count >> POOL_ENTROPY_SHIFT;
+- if (crng_init < 2 && entropy_bits >= 128)
+- crng_reseed(&primary_crng, true);
++ spin_lock_irqsave(&random_ready_chain_lock, flags);
++ ret = raw_notifier_chain_unregister(&random_ready_chain, nb);
++ spin_unlock_irqrestore(&random_ready_chain_lock, flags);
++ return ret;
+ }
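++
++/*
++ * Illustrative use of the readiness notifier (not from this file): the
++ * callback fires once crng_init reaches CRNG_READY; -EALREADY from the
++ * register call means the pool is ready now and the callback will not
++ * be invoked, so the caller handles that case itself.
++ *
++ *	static int my_rng_ready(struct notifier_block *nb,
++ *				unsigned long action, void *data)
++ *	{
++ *		pr_info("RNG ready, safe to draw keys\n");
++ *		return NOTIFY_DONE;
++ *	}
++ *
++ *	static struct notifier_block my_rng_nb = {
++ *		.notifier_call = my_rng_ready,
++ *	};
++ *
++ *	static void example_setup(void)
++ *	{
++ *		if (register_random_ready_notifier(&my_rng_nb) == -EALREADY)
++ *			my_rng_ready(&my_rng_nb, 0, NULL); // already seeded
++ *	}
++ */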
+
+-static int credit_entropy_bits_safe(int nbits)
++static void __cold process_random_ready_list(void)
+ {
+- if (nbits < 0)
+- return -EINVAL;
+-
+- /* Cap the value to avoid overflows */
+- nbits = min(nbits, POOL_BITS);
++ unsigned long flags;
+
+- credit_entropy_bits(nbits);
+- return 0;
++ spin_lock_irqsave(&random_ready_chain_lock, flags);
++ raw_notifier_call_chain(&random_ready_chain, 0, NULL);
++ spin_unlock_irqrestore(&random_ready_chain_lock, flags);
+ }
+
++#define warn_unseeded_randomness() \
++ if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \
++ printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \
++ __func__, (void *)_RET_IP_, crng_init)
++
++
+ /*********************************************************************
+ *
+- * CRNG using CHACHA20
++ * Fast key erasure RNG, the "crng".
++ *
++ * These functions expand entropy from the entropy extractor into
++ * long streams for external consumption using the "fast key erasure"
++ * RNG described at <https://blog.cr.yp.to/20170723-random.html>.
++ *
++ * There are a few exported interfaces for use by other drivers:
++ *
++ * void get_random_bytes(void *buf, size_t len)
++ * u32 get_random_u32()
++ * u64 get_random_u64()
++ * unsigned int get_random_int()
++ * unsigned long get_random_long()
++ *
++ * These interfaces will return the requested number of random bytes
++ * into the given buffer or as a return value. This is equivalent to
++ * a read from /dev/urandom. The u32, u64, int, and long family of
++ * functions may be higher performance for one-off random integers,
++ * because they do a bit of buffering and do not invoke reseeding
++ * until the buffer is emptied.
+ *
+ *********************************************************************/
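++
++/*
++ * As the list above notes, the u32/u64/int/long getters serve from a
++ * per-cpu batch, so they are the cheap choice for one-off integers,
++ * while get_random_bytes() is for bulk output or key material. An
++ * illustrative kernel-style use (not from this file; modulo bias is
++ * ignored for brevity):
++ *
++ *	static u32 example_pick_slot(u32 nr_slots)
++ *	{
++ *		return get_random_u32() % nr_slots; // served from a batch
++ *	}
++ *
++ *	static void example_make_key(u8 key[16])
++ *	{
++ *		get_random_bytes(key, 16);          // bulk/key material path
++ *	}
++ */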
+
+-#define CRNG_RESEED_INTERVAL (300 * HZ)
++enum {
++ CRNG_RESEED_START_INTERVAL = HZ,
++ CRNG_RESEED_INTERVAL = 60 * HZ
++};
+
+-static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
++static struct {
++ u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
++ unsigned long birth;
++ unsigned long generation;
++ spinlock_t lock;
++} base_crng = {
++ .lock = __SPIN_LOCK_UNLOCKED(base_crng.lock)
++};
+
+-/*
+- * Hack to deal with crazy userspace progams when they are all trying
+- * to access /dev/urandom in parallel. The programs are almost
+- * certainly doing something terribly wrong, but we'll work around
+- * their brain damage.
+- */
+-static struct crng_state **crng_node_pool __read_mostly;
++struct crng {
++ u8 key[CHACHA_KEY_SIZE];
++ unsigned long generation;
++ local_lock_t lock;
++};
+
+-static void invalidate_batched_entropy(void);
+-static void numa_crng_init(void);
++static DEFINE_PER_CPU(struct crng, crngs) = {
++ .generation = ULONG_MAX,
++ .lock = INIT_LOCAL_LOCK(crngs.lock),
++};
+
+-static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
+-static int __init parse_trust_cpu(char *arg)
+-{
+- return kstrtobool(arg, &trust_cpu);
+-}
+-early_param("random.trust_cpu", parse_trust_cpu);
++/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
++static void extract_entropy(void *buf, size_t len);
+
+-static bool crng_init_try_arch(struct crng_state *crng)
++/* This extracts a new crng key from the input pool. */
++static void crng_reseed(void)
+ {
+- int i;
+- bool arch_init = true;
+- unsigned long rv;
+-
+- for (i = 4; i < 16; i++) {
+- if (!arch_get_random_seed_long(&rv) &&
+- !arch_get_random_long(&rv)) {
+- rv = random_get_entropy();
+- arch_init = false;
+- }
+- crng->state[i] ^= rv;
+- }
++ unsigned long flags;
++ unsigned long next_gen;
++ u8 key[CHACHA_KEY_SIZE];
+
+- return arch_init;
++ extract_entropy(key, sizeof(key));
++
++ /*
++ * We copy the new key into the base_crng, overwriting the old one,
++ * and update the generation counter. We avoid hitting ULONG_MAX,
++ * because the per-cpu crngs are initialized to ULONG_MAX, so this
++ * forces new CPUs that come online to always initialize.
++ */
++ spin_lock_irqsave(&base_crng.lock, flags);
++ memcpy(base_crng.key, key, sizeof(base_crng.key));
++ next_gen = base_crng.generation + 1;
++ if (next_gen == ULONG_MAX)
++ ++next_gen;
++ WRITE_ONCE(base_crng.generation, next_gen);
++ WRITE_ONCE(base_crng.birth, jiffies);
++ if (!static_branch_likely(&crng_is_ready))
++ crng_init = CRNG_READY;
++ spin_unlock_irqrestore(&base_crng.lock, flags);
++ memzero_explicit(key, sizeof(key));
+ }
+
+-static bool __init crng_init_try_arch_early(void)
++/*
++ * This generates a ChaCha block using the provided key, and then
++ * immediately overwrites that key with half the block. It returns
++ * the resultant ChaCha state to the user, along with the second
++ * half of the block containing 32 bytes of random data that may
++ * be used; random_data_len may not be greater than 32.
++ *
++ * The returned ChaCha state contains within it a copy of the old
++ * key value, at index 4, so the state should always be zeroed out
++ * immediately after using in order to maintain forward secrecy.
++ * If the state cannot be erased in a timely manner, then it is
++ * safer to set the random_data parameter to &chacha_state[4] so
++ * that this function overwrites it before returning.
++ */
++static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
++ u32 chacha_state[CHACHA_STATE_WORDS],
++ u8 *random_data, size_t random_data_len)
+ {
+- int i;
+- bool arch_init = true;
+- unsigned long rv;
+-
+- for (i = 4; i < 16; i++) {
+- if (!arch_get_random_seed_long_early(&rv) &&
+- !arch_get_random_long_early(&rv)) {
+- rv = random_get_entropy();
+- arch_init = false;
+- }
+- primary_crng.state[i] ^= rv;
+- }
++ u8 first_block[CHACHA_BLOCK_SIZE];
+
+- return arch_init;
+-}
++ BUG_ON(random_data_len > 32);
+
+-static void crng_initialize_secondary(struct crng_state *crng)
+-{
+- chacha_init_consts(crng->state);
+- _get_random_bytes(&crng->state[4], sizeof(u32) * 12);
+- crng_init_try_arch(crng);
+- crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
++ chacha_init_consts(chacha_state);
++ memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE);
++ memset(&chacha_state[12], 0, sizeof(u32) * 4);
++ chacha20_block(chacha_state, first_block);
++
++ memcpy(key, first_block, CHACHA_KEY_SIZE);
++ memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len);
++ memzero_explicit(first_block, sizeof(first_block));
+ }
+
+-static void __init crng_initialize_primary(void)
+-{
+- _extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
+- if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) {
+- invalidate_batched_entropy();
+- numa_crng_init();
+- crng_init = 2;
+- pr_notice("crng init done (trusting CPU's manufacturer)\n");
++/*
++ * Return whether the crng seed is considered to be sufficiently old
++ * that a reseeding is needed. This happens if the last reseeding
++ * was CRNG_RESEED_INTERVAL ago, or during early boot, at an interval
++ * proportional to the uptime.
++ */
++static bool crng_has_old_seed(void)
++{
++ static bool early_boot = true;
++ unsigned long interval = CRNG_RESEED_INTERVAL;
++
++ if (unlikely(READ_ONCE(early_boot))) {
++ time64_t uptime = ktime_get_seconds();
++ if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2)
++ WRITE_ONCE(early_boot, false);
++ else
++ interval = max_t(unsigned int, CRNG_RESEED_START_INTERVAL,
++ (unsigned int)uptime / 2 * HZ);
+ }
+- primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
++ return time_is_before_jiffies(READ_ONCE(base_crng.birth) + interval);
+ }
+
+-static void crng_finalize_init(void)
++/*
++ * This function returns a ChaCha state that you may use for generating
++ * random data. It also returns up to 32 bytes on its own of random data
++ * that may be used; random_data_len may not be greater than 32.
++ */
++static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
++ u8 *random_data, size_t random_data_len)
+ {
+- if (!system_wq) {
+- /* We can't call numa_crng_init until we have workqueues,
+- * so mark this for processing later. */
+- crng_need_final_init = true;
+- return;
+- }
++ unsigned long flags;
++ struct crng *crng;
+
+- invalidate_batched_entropy();
+- numa_crng_init();
+- crng_init = 2;
+- crng_need_final_init = false;
+- process_random_ready_list();
+- wake_up_interruptible(&crng_init_wait);
+- kill_fasync(&fasync, SIGIO, POLL_IN);
+- pr_notice("crng init done\n");
+- if (unseeded_warning.missed) {
+- pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
+- unseeded_warning.missed);
+- unseeded_warning.missed = 0;
++ BUG_ON(random_data_len > 32);
++
++ /*
++ * For the fast path, we check whether we're ready, unlocked first, and
++ * then re-check once locked later. In the case where we're really not
++ * ready, we do fast key erasure with the base_crng directly, extracting
++ * when crng_init is CRNG_EMPTY.
++ */
++ if (!crng_ready()) {
++ bool ready;
++
++ spin_lock_irqsave(&base_crng.lock, flags);
++ ready = crng_ready();
++ if (!ready) {
++ if (crng_init == CRNG_EMPTY)
++ extract_entropy(base_crng.key, sizeof(base_crng.key));
++ crng_fast_key_erasure(base_crng.key, chacha_state,
++ random_data, random_data_len);
++ }
++ spin_unlock_irqrestore(&base_crng.lock, flags);
++ if (!ready)
++ return;
+ }
+- if (urandom_warning.missed) {
+- pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
+- urandom_warning.missed);
+- urandom_warning.missed = 0;
++
++ /*
++ * If the base_crng is old enough, we reseed, which in turn bumps the
++ * generation counter that we check below.
++ */
++ if (unlikely(crng_has_old_seed()))
++ crng_reseed();
++
++ local_lock_irqsave(&crngs.lock, flags);
++ crng = raw_cpu_ptr(&crngs);
++
++ /*
++ * If our per-cpu crng is older than the base_crng, then it means
++ * somebody reseeded the base_crng. In that case, we do fast key
++ * erasure on the base_crng, and use its output as the new key
++ * for our per-cpu crng. This brings us up to date with base_crng.
++ */
++ if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) {
++ spin_lock(&base_crng.lock);
++ crng_fast_key_erasure(base_crng.key, chacha_state,
++ crng->key, sizeof(crng->key));
++ crng->generation = base_crng.generation;
++ spin_unlock(&base_crng.lock);
+ }
++
++ /*
++ * Finally, when we've made it this far, our per-cpu crng has an up
++ * to date key, and we can do fast key erasure with it to produce
++ * some random data and a ChaCha state for the caller. All other
++ * branches of this function are "unlikely", so most of the time we
++ * should wind up here immediately.
++ */
++ crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
++ local_unlock_irqrestore(&crngs.lock, flags);
+ }
+
+-static void do_numa_crng_init(struct work_struct *work)
++static void _get_random_bytes(void *buf, size_t len)
+ {
+- int i;
+- struct crng_state *crng;
+- struct crng_state **pool;
+-
+- pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL | __GFP_NOFAIL);
+- for_each_online_node(i) {
+- crng = kmalloc_node(sizeof(struct crng_state),
+- GFP_KERNEL | __GFP_NOFAIL, i);
+- spin_lock_init(&crng->lock);
+- crng_initialize_secondary(crng);
+- pool[i] = crng;
+- }
+- /* pairs with READ_ONCE() in select_crng() */
+- if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) {
+- for_each_node(i)
+- kfree(pool[i]);
+- kfree(pool);
+- }
+-}
++ u32 chacha_state[CHACHA_STATE_WORDS];
++ u8 tmp[CHACHA_BLOCK_SIZE];
++ size_t first_block_len;
+
+-static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init);
++ if (!len)
++ return;
+
+-static void numa_crng_init(void)
+-{
+- if (IS_ENABLED(CONFIG_NUMA))
+- schedule_work(&numa_crng_init_work);
+-}
++ first_block_len = min_t(size_t, 32, len);
++ crng_make_state(chacha_state, buf, first_block_len);
++ len -= first_block_len;
++ buf += first_block_len;
+
+-static struct crng_state *select_crng(void)
+-{
+- if (IS_ENABLED(CONFIG_NUMA)) {
+- struct crng_state **pool;
+- int nid = numa_node_id();
+-
+- /* pairs with cmpxchg_release() in do_numa_crng_init() */
+- pool = READ_ONCE(crng_node_pool);
+- if (pool && pool[nid])
+- return pool[nid];
++ while (len) {
++ if (len < CHACHA_BLOCK_SIZE) {
++ chacha20_block(chacha_state, tmp);
++ memcpy(buf, tmp, len);
++ memzero_explicit(tmp, sizeof(tmp));
++ break;
++ }
++
++ chacha20_block(chacha_state, buf);
++ if (unlikely(chacha_state[12] == 0))
++ ++chacha_state[13];
++ len -= CHACHA_BLOCK_SIZE;
++ buf += CHACHA_BLOCK_SIZE;
+ }
+
+- return &primary_crng;
++ memzero_explicit(chacha_state, sizeof(chacha_state));
+ }
+
+ /*
+- * crng_fast_load() can be called by code in the interrupt service
+- * path. So we can't afford to dilly-dally. Returns the number of
+- * bytes processed from cp.
++ * This function is the exported kernel interface. It returns some
++ * number of good random numbers, suitable for key generation, seeding
++ * TCP sequence numbers, etc. It does not rely on the hardware random
++ * number generator. For random bytes direct from the hardware RNG
++ * (when available), use get_random_bytes_arch(). In order to ensure
++ * that the randomness provided by this function is okay, the function
++ * wait_for_random_bytes() should be called and return 0 at least once
++ * at any point prior.
+ */
+-static size_t crng_fast_load(const u8 *cp, size_t len)
++void get_random_bytes(void *buf, size_t len)
+ {
+- unsigned long flags;
+- u8 *p;
+- size_t ret = 0;
+-
+- if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+- return 0;
+- if (crng_init != 0) {
+- spin_unlock_irqrestore(&primary_crng.lock, flags);
+- return 0;
+- }
+- p = (u8 *)&primary_crng.state[4];
+- while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
+- p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp;
+- cp++; crng_init_cnt++; len--; ret++;
+- }
+- spin_unlock_irqrestore(&primary_crng.lock, flags);
+- if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+- invalidate_batched_entropy();
+- crng_init = 1;
+- pr_notice("fast init done\n");
+- }
+- return ret;
++ warn_unseeded_randomness();
++ _get_random_bytes(buf, len);
+ }
++EXPORT_SYMBOL(get_random_bytes);
+
+-/*
+- * crng_slow_load() is called by add_device_randomness, which has two
+- * attributes. (1) We can't trust the buffer passed to it is
+- * guaranteed to be unpredictable (so it might not have any entropy at
+- * all), and (2) it doesn't have the performance constraints of
+- * crng_fast_load().
+- *
+- * So we do something more comprehensive which is guaranteed to touch
+- * all of the primary_crng's state, and which uses a LFSR with a
+- * period of 255 as part of the mixing algorithm. Finally, we do
+- * *not* advance crng_init_cnt since buffer we may get may be something
+- * like a fixed DMI table (for example), which might very well be
+- * unique to the machine, but is otherwise unvarying.
+- */
+-static int crng_slow_load(const u8 *cp, size_t len)
++static ssize_t get_random_bytes_user(struct iov_iter *iter)
+ {
+- unsigned long flags;
+- static u8 lfsr = 1;
+- u8 tmp;
+- unsigned int i, max = CHACHA_KEY_SIZE;
+- const u8 *src_buf = cp;
+- u8 *dest_buf = (u8 *)&primary_crng.state[4];
++ u32 chacha_state[CHACHA_STATE_WORDS];
++ u8 block[CHACHA_BLOCK_SIZE];
++ size_t ret = 0, copied;
+
+- if (!spin_trylock_irqsave(&primary_crng.lock, flags))
+- return 0;
+- if (crng_init != 0) {
+- spin_unlock_irqrestore(&primary_crng.lock, flags);
++ if (unlikely(!iov_iter_count(iter)))
+ return 0;
+- }
+- if (len > max)
+- max = len;
+-
+- for (i = 0; i < max; i++) {
+- tmp = lfsr;
+- lfsr >>= 1;
+- if (tmp & 1)
+- lfsr ^= 0xE1;
+- tmp = dest_buf[i % CHACHA_KEY_SIZE];
+- dest_buf[i % CHACHA_KEY_SIZE] ^= src_buf[i % len] ^ lfsr;
+- lfsr += (tmp << 3) | (tmp >> 5);
+- }
+- spin_unlock_irqrestore(&primary_crng.lock, flags);
+- return 1;
+-}
+
+-static void crng_reseed(struct crng_state *crng, bool use_input_pool)
+-{
+- unsigned long flags;
+- int i, num;
+- union {
+- u8 block[CHACHA_BLOCK_SIZE];
+- u32 key[8];
+- } buf;
+-
+- if (use_input_pool) {
+- num = extract_entropy(&buf, 32, 16);
+- if (num == 0)
+- return;
+- } else {
+- _extract_crng(&primary_crng, buf.block);
+- _crng_backtrack_protect(&primary_crng, buf.block,
+- CHACHA_KEY_SIZE);
+- }
+- spin_lock_irqsave(&crng->lock, flags);
+- for (i = 0; i < 8; i++) {
+- unsigned long rv;
+- if (!arch_get_random_seed_long(&rv) &&
+- !arch_get_random_long(&rv))
+- rv = random_get_entropy();
+- crng->state[i + 4] ^= buf.key[i] ^ rv;
++ /*
++ * Immediately overwrite the ChaCha key at index 4 with random
++ * bytes, in case userspace causes copy_to_user() below to sleep
++ * forever, so that we still retain forward secrecy in that case.
++ */
++ crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
++ /*
++ * However, if we're doing a read of len <= 32, we don't need to
++ * use chacha_state after, so we can simply return those bytes to
++ * the user directly.
++ */
++ if (iov_iter_count(iter) <= CHACHA_KEY_SIZE) {
++ ret = copy_to_iter(&chacha_state[4], CHACHA_KEY_SIZE, iter);
++ goto out_zero_chacha;
+ }
+- memzero_explicit(&buf, sizeof(buf));
+- WRITE_ONCE(crng->init_time, jiffies);
+- spin_unlock_irqrestore(&crng->lock, flags);
+- if (crng == &primary_crng && crng_init < 2)
+- crng_finalize_init();
+-}
+
+-static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE])
+-{
+- unsigned long flags, init_time;
++ for (;;) {
++ chacha20_block(chacha_state, block);
++ if (unlikely(chacha_state[12] == 0))
++ ++chacha_state[13];
++
++ copied = copy_to_iter(block, sizeof(block), iter);
++ ret += copied;
++ if (!iov_iter_count(iter) || copied != sizeof(block))
++ break;
+
+- if (crng_ready()) {
+- init_time = READ_ONCE(crng->init_time);
+- if (time_after(READ_ONCE(crng_global_init_time), init_time) ||
+- time_after(jiffies, init_time + CRNG_RESEED_INTERVAL))
+- crng_reseed(crng, crng == &primary_crng);
++ BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
++ if (ret % PAGE_SIZE == 0) {
++ if (signal_pending(current))
++ break;
++ cond_resched();
++ }
+ }
+- spin_lock_irqsave(&crng->lock, flags);
+- chacha20_block(&crng->state[0], out);
+- if (crng->state[12] == 0)
+- crng->state[13]++;
+- spin_unlock_irqrestore(&crng->lock, flags);
+-}
+
+-static void extract_crng(u8 out[CHACHA_BLOCK_SIZE])
+-{
+- _extract_crng(select_crng(), out);
++ memzero_explicit(block, sizeof(block));
++out_zero_chacha:
++ memzero_explicit(chacha_state, sizeof(chacha_state));
++ return ret ? ret : -EFAULT;
+ }
+
+ /*
+- * Use the leftover bytes from the CRNG block output (if there is
+- * enough) to mutate the CRNG key to provide backtracking protection.
++ * Batched entropy returns random integers. The quality of the random
++ * number is as good as /dev/urandom's. In order to ensure that the randomness
++ * provided by this function is okay, the function wait_for_random_bytes()
++ * should be called and return 0 at least once at any point prior.
+ */
+-static void _crng_backtrack_protect(struct crng_state *crng,
+- u8 tmp[CHACHA_BLOCK_SIZE], int used)
+-{
+- unsigned long flags;
+- u32 *s, *d;
+- int i;
+
+- used = round_up(used, sizeof(u32));
+- if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) {
+- extract_crng(tmp);
+- used = 0;
+- }
+- spin_lock_irqsave(&crng->lock, flags);
+- s = (u32 *)&tmp[used];
+- d = &crng->state[4];
+- for (i = 0; i < 8; i++)
+- *d++ ^= *s++;
+- spin_unlock_irqrestore(&crng->lock, flags);
+-}
+-
+-static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used)
++#define DEFINE_BATCHED_ENTROPY(type) \
++struct batch_ ##type { \
++ /* \
++ * We make this 1.5x a ChaCha block, so that we get the \
++ * remaining 32 bytes from fast key erasure, plus one full \
++ * block from the detached ChaCha state. We can increase \
++ * the size of this later if needed so long as we keep the \
++ * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. \
++ */ \
++ type entropy[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(type))]; \
++ local_lock_t lock; \
++ unsigned long generation; \
++ unsigned int position; \
++}; \
++ \
++static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = { \
++ .lock = INIT_LOCAL_LOCK(batched_entropy_ ##type.lock), \
++ .position = UINT_MAX \
++}; \
++ \
++type get_random_ ##type(void) \
++{ \
++ type ret; \
++ unsigned long flags; \
++ struct batch_ ##type *batch; \
++ unsigned long next_gen; \
++ \
++ warn_unseeded_randomness(); \
++ \
++ if (!crng_ready()) { \
++ _get_random_bytes(&ret, sizeof(ret)); \
++ return ret; \
++ } \
++ \
++ local_lock_irqsave(&batched_entropy_ ##type.lock, flags); \
++ batch = raw_cpu_ptr(&batched_entropy_##type); \
++ \
++ next_gen = READ_ONCE(base_crng.generation); \
++ if (batch->position >= ARRAY_SIZE(batch->entropy) || \
++ next_gen != batch->generation) { \
++ _get_random_bytes(batch->entropy, sizeof(batch->entropy)); \
++ batch->position = 0; \
++ batch->generation = next_gen; \
++ } \
++ \
++ ret = batch->entropy[batch->position]; \
++ batch->entropy[batch->position] = 0; \
++ ++batch->position; \
++ local_unlock_irqrestore(&batched_entropy_ ##type.lock, flags); \
++ return ret; \
++} \
++EXPORT_SYMBOL(get_random_ ##type);
++
++DEFINE_BATCHED_ENTROPY(u64)
++DEFINE_BATCHED_ENTROPY(u32)
++
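The macro's refill condition can be modeled in isolation: a batch is consumed word by
word and thrown away whenever the base generation moves, so a reseed invalidates every
outstanding batch. A userspace model under those assumptions, where refill() and
base_generation are stand-ins for _get_random_bytes() and base_crng.generation:

  #include <stdint.h>
  #include <stddef.h>

  #define BATCH_WORDS 12  /* 1.5 ChaCha blocks of u64s: 96 / 8 */

  static unsigned long base_generation;   /* bumped on every reseed */

  static void refill(uint64_t *buf, size_t n)
  {
          for (size_t i = 0; i < n; ++i)   /* placeholder bytes only */
                  buf[i] = 0x9e3779b97f4a7c15ull * (i + 1);
  }

  struct batch_u64 {
          uint64_t entropy[BATCH_WORDS];
          unsigned long generation;
          unsigned int position;
  };

  static uint64_t batch_get_u64(struct batch_u64 *b)
  {
          if (b->position >= BATCH_WORDS || b->generation != base_generation) {
                  refill(b->entropy, BATCH_WORDS);
                  b->position = 0;
                  b->generation = base_generation;  /* stale batch discarded */
          }
          uint64_t ret = b->entropy[b->position];
          b->entropy[b->position++] = 0;            /* erase consumed word */
          return ret;
  }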
++#ifdef CONFIG_SMP
++/*
++ * This function is called when the CPU is coming up, with entry
++ * CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP.
++ */
++int __cold random_prepare_cpu(unsigned int cpu)
+ {
+- _crng_backtrack_protect(select_crng(), tmp, used);
++ /*
++ * When the cpu comes back online, immediately invalidate both
++ * the per-cpu crng and all batches, so that we serve fresh
++ * randomness.
++ */
++ per_cpu_ptr(&crngs, cpu)->generation = ULONG_MAX;
++ per_cpu_ptr(&batched_entropy_u32, cpu)->position = UINT_MAX;
++ per_cpu_ptr(&batched_entropy_u64, cpu)->position = UINT_MAX;
++ return 0;
+ }
++#endif
+
+-static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
++/*
++ * This function will use the architecture-specific hardware random
++ * number generator if it is available. It is not recommended for
++ * use. Use get_random_bytes() instead. It returns the number of
++ * bytes filled in.
++ */
++size_t __must_check get_random_bytes_arch(void *buf, size_t len)
+ {
+- ssize_t ret = 0, i = CHACHA_BLOCK_SIZE;
+- u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+- int large_request = (nbytes > 256);
+-
+- while (nbytes) {
+- if (large_request && need_resched()) {
+- if (signal_pending(current)) {
+- if (ret == 0)
+- ret = -ERESTARTSYS;
+- break;
+- }
+- schedule();
+- }
++ size_t left = len;
++ u8 *p = buf;
++
++ while (left) {
++ unsigned long v;
++ size_t block_len = min_t(size_t, left, sizeof(unsigned long));
+
+- extract_crng(tmp);
+- i = min_t(int, nbytes, CHACHA_BLOCK_SIZE);
+- if (copy_to_user(buf, tmp, i)) {
+- ret = -EFAULT;
++ if (!arch_get_random_long(&v))
+ break;
+- }
+
+- nbytes -= i;
+- buf += i;
+- ret += i;
++ memcpy(p, &v, block_len);
++ p += block_len;
++ left -= block_len;
+ }
+- crng_backtrack_protect(tmp, i);
+-
+- /* Wipe data just written to memory */
+- memzero_explicit(tmp, sizeof(tmp));
+
+- return ret;
++ return len - left;
+ }
++EXPORT_SYMBOL(get_random_bytes_arch);
+
+-/*********************************************************************
++
++/**********************************************************************
+ *
+- * Entropy input management
++ * Entropy accumulation and extraction routines.
+ *
+- *********************************************************************/
++ * Callers may add entropy via:
++ *
++ * static void mix_pool_bytes(const void *buf, size_t len)
++ *
++ * After which, if added entropy should be credited:
++ *
++ * static void credit_init_bits(size_t bits)
++ *
++ * Finally, extract entropy via:
++ *
++ * static void extract_entropy(void *buf, size_t len)
++ *
++ **********************************************************************/
+
+-/* There is one of these per entropy source */
+-struct timer_rand_state {
+- cycles_t last_time;
+- long last_delta, last_delta2;
++enum {
++ POOL_BITS = BLAKE2S_HASH_SIZE * 8,
++ POOL_READY_BITS = POOL_BITS, /* When crng_init->CRNG_READY */
++ POOL_EARLY_BITS = POOL_READY_BITS / 2 /* When crng_init->CRNG_EARLY */
+ };
+
+-#define INIT_TIMER_RAND_STATE { INITIAL_JIFFIES, };
++static struct {
++ struct blake2s_state hash;
++ spinlock_t lock;
++ unsigned int init_bits;
++} input_pool = {
++ .hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
++ BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
++ BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 },
++ .hash.outlen = BLAKE2S_HASH_SIZE,
++ .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
++};
++
++static void _mix_pool_bytes(const void *buf, size_t len)
++{
++ blake2s_update(&input_pool.hash, buf, len);
++}
+
+ /*
+- * Add device- or boot-specific data to the input pool to help
+- * initialize it.
+- *
+- * None of this adds any entropy; it is meant to avoid the problem of
+- * the entropy pool having similar initial state across largely
+- * identical devices.
++ * This function adds bytes into the input pool. It does not
++ * update the initialization bit counter; the caller should call
++ * credit_init_bits if this is appropriate.
+ */
+-void add_device_randomness(const void *buf, unsigned int size)
++static void mix_pool_bytes(const void *buf, size_t len)
+ {
+- unsigned long time = random_get_entropy() ^ jiffies;
+ unsigned long flags;
+
+- if (!crng_ready() && size)
+- crng_slow_load(buf, size);
+-
+- trace_add_device_randomness(size, _RET_IP_);
+ spin_lock_irqsave(&input_pool.lock, flags);
+- _mix_pool_bytes(buf, size);
+- _mix_pool_bytes(&time, sizeof(time));
++ _mix_pool_bytes(buf, len);
+ spin_unlock_irqrestore(&input_pool.lock, flags);
+ }
+-EXPORT_SYMBOL(add_device_randomness);
+-
+-static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE;
+
+ /*
+- * This function adds entropy to the entropy "pool" by using timing
+- * delays. It uses the timer_rand_state structure to make an estimate
+- * of how many bits of entropy this call has added to the pool.
+- *
+- * The number "num" is also added to the pool - it should somehow describe
+- * the type of event which just happened. This is currently 0-255 for
+- * keyboard scan codes, and 256 upwards for interrupts.
+- *
++ * This is an HKDF-like construction for using the hashed collected entropy
++ * as a PRF key, that's then expanded block-by-block.
+ */
+-static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
++static void extract_entropy(void *buf, size_t len)
+ {
++ unsigned long flags;
++ u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
+ struct {
+- long jiffies;
+- unsigned int cycles;
+- unsigned int num;
+- } sample;
+- long delta, delta2, delta3;
+-
+- sample.jiffies = jiffies;
+- sample.cycles = random_get_entropy();
+- sample.num = num;
+- mix_pool_bytes(&sample, sizeof(sample));
+-
+- /*
+- * Calculate number of bits of randomness we probably added.
+- * We take into account the first, second and third-order deltas
+- * in order to make our estimate.
+- */
+- delta = sample.jiffies - READ_ONCE(state->last_time);
+- WRITE_ONCE(state->last_time, sample.jiffies);
+-
+- delta2 = delta - READ_ONCE(state->last_delta);
+- WRITE_ONCE(state->last_delta, delta);
+-
+- delta3 = delta2 - READ_ONCE(state->last_delta2);
+- WRITE_ONCE(state->last_delta2, delta2);
++ unsigned long rdseed[32 / sizeof(long)];
++ size_t counter;
++ } block;
++ size_t i;
++
++ for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) {
++ if (!arch_get_random_seed_long(&block.rdseed[i]) &&
++ !arch_get_random_long(&block.rdseed[i]))
++ block.rdseed[i] = random_get_entropy();
++ }
+
+- if (delta < 0)
+- delta = -delta;
+- if (delta2 < 0)
+- delta2 = -delta2;
+- if (delta3 < 0)
+- delta3 = -delta3;
+- if (delta > delta2)
+- delta = delta2;
+- if (delta > delta3)
+- delta = delta3;
++ spin_lock_irqsave(&input_pool.lock, flags);
+
+- /*
+- * delta is now minimum absolute delta.
+- * Round down by 1 bit on general principles,
+- * and limit entropy estimate to 12 bits.
+- */
+- credit_entropy_bits(min_t(int, fls(delta >> 1), 11));
+-}
++ /* seed = HASHPRF(last_key, entropy_input) */
++ blake2s_final(&input_pool.hash, seed);
+
+-void add_input_randomness(unsigned int type, unsigned int code,
+- unsigned int value)
+-{
+- static unsigned char last_value;
++ /* next_key = HASHPRF(seed, RDSEED || 0) */
++ block.counter = 0;
++ blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed));
++ blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key));
+
+- /* ignore autorepeat and the like */
+- if (value == last_value)
+- return;
++ spin_unlock_irqrestore(&input_pool.lock, flags);
++ memzero_explicit(next_key, sizeof(next_key));
++
++ while (len) {
++ i = min_t(size_t, len, BLAKE2S_HASH_SIZE);
++ /* output = HASHPRF(seed, RDSEED || ++counter) */
++ ++block.counter;
++ blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
++ len -= i;
++ buf += i;
++ }
+
+- last_value = value;
+- add_timer_randomness(&input_timer_state,
+- (type << 4) ^ code ^ (code >> 4) ^ value);
+- trace_add_input_randomness(POOL_ENTROPY_BITS());
++ memzero_explicit(seed, sizeof(seed));
++ memzero_explicit(&block, sizeof(block));
+ }
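In short: seed = H(pool), next_key = PRF_seed(rdseed || 0) re-keys the pool, and output
block i is PRF_seed(rdseed || i). A compact model of that schedule, where prf() is a toy
stand-in for keyed BLAKE2s (illustrative only, not cryptographic; the real code also
wipes seed, next_key and block with memzero_explicit()):

  #include <stdint.h>
  #include <string.h>
  #include <stddef.h>

  #define HASH_SIZE 32

  /* toy stand-in for keyed BLAKE2s; NOT a real hash */
  static void prf(uint8_t out[HASH_SIZE], const uint8_t key[HASH_SIZE],
                  const void *in, size_t in_len)
  {
          const uint8_t *p = in;

          memcpy(out, key, HASH_SIZE);
          for (size_t i = 0; i < in_len; ++i)
                  out[i % HASH_SIZE] ^= (uint8_t)(p[i] + i);
  }

  static void extract_model(const uint8_t seed[HASH_SIZE],  /* = H(pool) */
                            const uint8_t rdseed[32],
                            uint8_t *buf, size_t len)
  {
          struct { uint8_t rdseed[32]; size_t counter; } block;
          uint8_t next_key[HASH_SIZE], out[HASH_SIZE];

          memcpy(block.rdseed, rdseed, 32);

          /* next_key = PRF(seed, rdseed || 0): the pool key moves forward */
          block.counter = 0;
          prf(next_key, seed, &block, sizeof(block));
          /* the real code feeds next_key back via blake2s_init_key() */
          (void)next_key;

          /* output block i = PRF(seed, rdseed || i), i >= 1 */
          while (len) {
                  size_t n = len < HASH_SIZE ? len : HASH_SIZE;

                  ++block.counter;
                  prf(out, seed, &block, sizeof(block));
                  memcpy(buf, out, n);
                  len -= n;
                  buf += n;
          }
  }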
+-EXPORT_SYMBOL_GPL(add_input_randomness);
+-
+-static DEFINE_PER_CPU(struct fast_pool, irq_randomness);
+
+-#ifdef ADD_INTERRUPT_BENCH
+-static unsigned long avg_cycles, avg_deviation;
++#define credit_init_bits(bits) if (!crng_ready()) _credit_init_bits(bits)
+
+-#define AVG_SHIFT 8 /* Exponential average factor k=1/256 */
+-#define FIXED_1_2 (1 << (AVG_SHIFT - 1))
+-
+-static void add_interrupt_bench(cycles_t start)
++static void __cold _credit_init_bits(size_t bits)
+ {
+- long delta = random_get_entropy() - start;
+-
+- /* Use a weighted moving average */
+- delta = delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT);
+- avg_cycles += delta;
+- /* And average deviation */
+- delta = abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT);
+- avg_deviation += delta;
+-}
+-#else
+-#define add_interrupt_bench(x)
+-#endif
++ static struct execute_work set_ready;
++ unsigned int new, orig, add;
++ unsigned long flags;
+
+-static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
+-{
+- u32 *ptr = (u32 *)regs;
+- unsigned int idx;
++ if (!bits)
++ return;
+
+- if (regs == NULL)
+- return 0;
+- idx = READ_ONCE(f->reg_idx);
+- if (idx >= sizeof(struct pt_regs) / sizeof(u32))
+- idx = 0;
+- ptr += idx++;
+- WRITE_ONCE(f->reg_idx, idx);
+- return *ptr;
+-}
++ add = min_t(size_t, bits, POOL_BITS);
+
+-void add_interrupt_randomness(int irq)
+-{
+- struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
+- struct pt_regs *regs = get_irq_regs();
+- unsigned long now = jiffies;
+- cycles_t cycles = random_get_entropy();
+- u32 c_high, j_high;
+- u64 ip;
+-
+- if (cycles == 0)
+- cycles = get_reg(fast_pool, regs);
+- c_high = (sizeof(cycles) > 4) ? cycles >> 32 : 0;
+- j_high = (sizeof(now) > 4) ? now >> 32 : 0;
+- fast_pool->pool[0] ^= cycles ^ j_high ^ irq;
+- fast_pool->pool[1] ^= now ^ c_high;
+- ip = regs ? instruction_pointer(regs) : _RET_IP_;
+- fast_pool->pool[2] ^= ip;
+- fast_pool->pool[3] ^=
+- (sizeof(ip) > 4) ? ip >> 32 : get_reg(fast_pool, regs);
+-
+- fast_mix(fast_pool);
+- add_interrupt_bench(cycles);
+-
+- if (unlikely(crng_init == 0)) {
+- if ((fast_pool->count >= 64) &&
+- crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
+- fast_pool->count = 0;
+- fast_pool->last = now;
++ do {
++ orig = READ_ONCE(input_pool.init_bits);
++ new = min_t(unsigned int, POOL_BITS, orig + add);
++ } while (cmpxchg(&input_pool.init_bits, orig, new) != orig);
++
++ if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
++ crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
++ execute_in_process_context(crng_set_ready, &set_ready);
++ process_random_ready_list();
++ wake_up_interruptible(&crng_init_wait);
++ kill_fasync(&fasync, SIGIO, POLL_IN);
++ pr_notice("crng init done\n");
++ if (urandom_warning.missed)
++ pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
++ urandom_warning.missed);
++ } else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) {
++ spin_lock_irqsave(&base_crng.lock, flags);
++ /* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). */
++ if (crng_init == CRNG_EMPTY) {
++ extract_entropy(base_crng.key, sizeof(base_crng.key));
++ crng_init = CRNG_EARLY;
+ }
+- return;
++ spin_unlock_irqrestore(&base_crng.lock, flags);
+ }
++}
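The crediting loop is lockless: it saturates at POOL_BITS through a compare-and-swap
retry, so concurrent callers can never over-credit. A C11 model of just that loop
(assumes nothing beyond stdatomic.h):

  #include <stdatomic.h>

  #define POOL_BITS 256

  static _Atomic unsigned int init_bits;

  static void credit_model(unsigned int add)
  {
          unsigned int orig, new;

          if (add > POOL_BITS)
                  add = POOL_BITS;
          do {
                  orig = atomic_load(&init_bits);
                  new = orig + add > POOL_BITS ? POOL_BITS : orig + add;
          } while (!atomic_compare_exchange_weak(&init_bits, &orig, new));
  }

The CRNG_EARLY and CRNG_READY transitions then hang off the orig-to-new crossing of
each threshold, so each fires exactly once.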
+
+- if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ))
+- return;
+-
+- if (!spin_trylock(&input_pool.lock))
+- return;
+-
+- fast_pool->last = now;
+- __mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool));
+- spin_unlock(&input_pool.lock);
+
+- fast_pool->count = 0;
++/**********************************************************************
++ *
++ * Entropy collection routines.
++ *
++ * The following exported functions are used for pushing entropy into
++ * the above entropy accumulation routines:
++ *
++ * void add_device_randomness(const void *buf, size_t len);
++ * void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
++ * void add_bootloader_randomness(const void *buf, size_t len);
++ * void add_interrupt_randomness(int irq);
++ * void add_input_randomness(unsigned int type, unsigned int code, unsigned int value);
++ * void add_disk_randomness(struct gendisk *disk);
++ *
++ * add_device_randomness() adds data to the input pool that
++ * is likely to differ between two devices (or possibly even per boot).
++ * This would be things like MAC addresses or serial numbers, or the
++ * read-out of the RTC. This does *not* credit any actual entropy to
++ * the pool, but it initializes the pool to different values for devices
++ * that might otherwise be identical and have very little entropy
++ * available to them (particularly common in the embedded world).
++ *
++ * add_hwgenerator_randomness() is for true hardware RNGs, and will credit
++ * entropy as specified by the caller. If the entropy pool is full it will
++ * block until more entropy is needed.
++ *
++ * add_bootloader_randomness() is called by bootloader drivers, such as EFI
++ * and device tree, and credits its input depending on whether or not the
++ * configuration option CONFIG_RANDOM_TRUST_BOOTLOADER is set.
++ *
++ * add_interrupt_randomness() uses the interrupt timing as random
++ * inputs to the entropy pool. Using the cycle counters and the irq source
++ * as inputs, it feeds the input pool roughly once a second or after 64
++ * interrupts, crediting 1 bit of entropy for whichever comes first.
++ *
++ * add_input_randomness() uses the input layer interrupt timing, as well
++ * as the event type information from the hardware.
++ *
++ * add_disk_randomness() uses what amounts to the seek time of block
++ * layer request events, on a per-disk_devt basis, as input to the
++ * entropy pool. Note that high-speed solid state drives with very low
++ * seek times do not make for good sources of entropy, as their seek
++ * times are usually fairly consistent.
++ *
++ * The last two routines try to estimate how many bits of entropy
++ * to credit. They do this by keeping track of the first and second
++ * order deltas of the event timings.
++ *
++ **********************************************************************/
+
+- /* award one bit for the contents of the fast pool */
+- credit_entropy_bits(1);
++static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
++static bool trust_bootloader __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER);
++static int __init parse_trust_cpu(char *arg)
++{
++ return kstrtobool(arg, &trust_cpu);
+ }
+-EXPORT_SYMBOL_GPL(add_interrupt_randomness);
+-
+-#ifdef CONFIG_BLOCK
+-void add_disk_randomness(struct gendisk *disk)
++static int __init parse_trust_bootloader(char *arg)
+ {
+- if (!disk || !disk->random)
+- return;
+- /* first major is 1, so we get >= 0x200 here */
+- add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
+- trace_add_disk_randomness(disk_devt(disk), POOL_ENTROPY_BITS());
++ return kstrtobool(arg, &trust_bootloader);
+ }
+-EXPORT_SYMBOL_GPL(add_disk_randomness);
+-#endif
+-
+-/*********************************************************************
+- *
+- * Entropy extraction routines
+- *
+- *********************************************************************/
++early_param("random.trust_cpu", parse_trust_cpu);
++early_param("random.trust_bootloader", parse_trust_bootloader);
+
+ /*
+- * This function decides how many bytes to actually take from the
+- * given pool, and also debits the entropy count accordingly.
++ * The first collection of entropy occurs at system boot while interrupts
++ * are still turned off. Here we push in latent entropy, RDSEED, a timestamp,
++ * utsname(), and the command line. Depending on the above configuration knob,
++ * RDSEED may be considered sufficient for initialization. Note that much
++ * earlier setup may already have pushed entropy into the input pool by the
++ * time we get here.
+ */
+-static size_t account(size_t nbytes, int min)
++int __init random_init(const char *command_line)
+ {
+- int entropy_count, orig;
+- size_t ibytes, nfrac;
++ ktime_t now = ktime_get_real();
++ unsigned int i, arch_bytes;
++ unsigned long entropy;
+
+- BUG_ON(input_pool.entropy_count > POOL_FRACBITS);
++#if defined(LATENT_ENTROPY_PLUGIN)
++ static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
++ _mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed));
++#endif
+
+- /* Can we pull enough? */
+-retry:
+- entropy_count = orig = READ_ONCE(input_pool.entropy_count);
+- if (WARN_ON(entropy_count < 0)) {
+- pr_warn("negative entropy count: count %d\n", entropy_count);
+- entropy_count = 0;
++ for (i = 0, arch_bytes = BLAKE2S_BLOCK_SIZE;
++ i < BLAKE2S_BLOCK_SIZE; i += sizeof(entropy)) {
++ if (!arch_get_random_seed_long_early(&entropy) &&
++ !arch_get_random_long_early(&entropy)) {
++ entropy = random_get_entropy();
++ arch_bytes -= sizeof(entropy);
++ }
++ _mix_pool_bytes(&entropy, sizeof(entropy));
+ }
++ _mix_pool_bytes(&now, sizeof(now));
++ _mix_pool_bytes(utsname(), sizeof(*(utsname())));
++ _mix_pool_bytes(command_line, strlen(command_line));
++ add_latent_entropy();
+
+- /* never pull more than available */
+- ibytes = min_t(size_t, nbytes, entropy_count >> (POOL_ENTROPY_SHIFT + 3));
+- if (ibytes < min)
+- ibytes = 0;
+- nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3);
+- if ((size_t)entropy_count > nfrac)
+- entropy_count -= nfrac;
+- else
+- entropy_count = 0;
+-
+- if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig)
+- goto retry;
+-
+- trace_debit_entropy(8 * ibytes);
+- if (ibytes && POOL_ENTROPY_BITS() < random_write_wakeup_bits) {
+- wake_up_interruptible(&random_write_wait);
+- kill_fasync(&fasync, SIGIO, POLL_OUT);
+- }
++ if (crng_ready())
++ crng_reseed();
++ else if (trust_cpu)
++ credit_init_bits(arch_bytes * 8);
+
+- return ibytes;
++ return 0;
+ }
+
+ /*
+- * This function does the actual extraction for extract_entropy.
++ * Add device- or boot-specific data to the input pool to help
++ * initialize it.
+ *
+- * Note: we assume that .poolwords is a multiple of 16 words.
++ * None of this adds any entropy; it is meant to avoid the problem of
++ * the entropy pool having similar initial state across largely
++ * identical devices.
+ */
+-static void extract_buf(u8 *out)
++void add_device_randomness(const void *buf, size_t len)
+ {
+- struct blake2s_state state __aligned(__alignof__(unsigned long));
+- u8 hash[BLAKE2S_HASH_SIZE];
+- unsigned long *salt;
++ unsigned long entropy = random_get_entropy();
+ unsigned long flags;
+
+- blake2s_init(&state, sizeof(hash));
+-
+- /*
+- * If we have an architectural hardware random number
+- * generator, use it for BLAKE2's salt & personal fields.
+- */
+- for (salt = (unsigned long *)&state.h[4];
+- salt < (unsigned long *)&state.h[8]; ++salt) {
+- unsigned long v;
+- if (!arch_get_random_long(&v))
+- break;
+- *salt ^= v;
+- }
+-
+- /* Generate a hash across the pool */
+ spin_lock_irqsave(&input_pool.lock, flags);
+- blake2s_update(&state, (const u8 *)input_pool_data, POOL_BYTES);
+- blake2s_final(&state, hash); /* final zeros out state */
+-
+- /*
+- * We mix the hash back into the pool to prevent backtracking
+- * attacks (where the attacker knows the state of the pool
+- * plus the current outputs, and attempts to find previous
+- * outputs), unless the hash function can be inverted. By
+- * mixing at least a hash worth of hash data back, we make
+- * brute-forcing the feedback as hard as brute-forcing the
+- * hash.
+- */
+- __mix_pool_bytes(hash, sizeof(hash));
++ _mix_pool_bytes(&entropy, sizeof(entropy));
++ _mix_pool_bytes(buf, len);
+ spin_unlock_irqrestore(&input_pool.lock, flags);
+-
+- /* Note that EXTRACT_SIZE is half of hash size here, because above
+- * we've dumped the full length back into mixer. By reducing the
+- * amount that we emit, we retain a level of forward secrecy.
+- */
+- memcpy(out, hash, EXTRACT_SIZE);
+- memzero_explicit(hash, sizeof(hash));
+ }
++EXPORT_SYMBOL(add_device_randomness);
+
+-static ssize_t _extract_entropy(void *buf, size_t nbytes)
++/*
++ * Interface for in-kernel drivers of true hardware RNGs.
++ * Those devices may produce endless random bits and will be throttled
++ * when our pool is full.
++ */
++void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy)
+ {
+- ssize_t ret = 0, i;
+- u8 tmp[EXTRACT_SIZE];
+-
+- while (nbytes) {
+- extract_buf(tmp);
+- i = min_t(int, nbytes, EXTRACT_SIZE);
+- memcpy(buf, tmp, i);
+- nbytes -= i;
+- buf += i;
+- ret += i;
+- }
++ mix_pool_bytes(buf, len);
++ credit_init_bits(entropy);
+
+- /* Wipe data just returned from memory */
+- memzero_explicit(tmp, sizeof(tmp));
+-
+- return ret;
++ /*
++ * Throttle writing to once every CRNG_RESEED_INTERVAL, unless
++ * we're not yet initialized.
++ */
++ if (!kthread_should_stop() && crng_ready())
++ schedule_timeout_interruptible(CRNG_RESEED_INTERVAL);
+ }
++EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+
+ /*
+- * This function extracts randomness from the "entropy pool", and
+- * returns it in a buffer.
+- *
+- * The min parameter specifies the minimum amount we can pull before
+- * failing to avoid races that defeat catastrophic reseeding.
++ * Handle random seed passed by bootloader, and credit it if
++ * CONFIG_RANDOM_TRUST_BOOTLOADER is set.
+ */
+-static ssize_t extract_entropy(void *buf, size_t nbytes, int min)
++void __cold add_bootloader_randomness(const void *buf, size_t len)
+ {
+- trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_);
+- nbytes = account(nbytes, min);
+- return _extract_entropy(buf, nbytes);
++ mix_pool_bytes(buf, len);
++ if (trust_bootloader)
++ credit_init_bits(len * 8);
+ }
++EXPORT_SYMBOL_GPL(add_bootloader_randomness);
+
+-#define warn_unseeded_randomness(previous) \
+- _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous))
++struct fast_pool {
++ struct work_struct mix;
++ unsigned long pool[4];
++ unsigned long last;
++ unsigned int count;
++};
+
+-static void _warn_unseeded_randomness(const char *func_name, void *caller, void **previous)
+-{
+-#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+- const bool print_once = false;
++static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
++#ifdef CONFIG_64BIT
++#define FASTMIX_PERM SIPHASH_PERMUTATION
++ .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 }
+ #else
+- static bool print_once __read_mostly;
+-#endif
+-
+- if (print_once || crng_ready() ||
+- (previous && (caller == READ_ONCE(*previous))))
+- return;
+- WRITE_ONCE(*previous, caller);
+-#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM
+- print_once = true;
++#define FASTMIX_PERM HSIPHASH_PERMUTATION
++ .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 }
+ #endif
+- if (__ratelimit(&unseeded_warning))
+- printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n",
+- func_name, caller, crng_init);
+-}
++};
+
+ /*
+- * This function is the exported kernel interface. It returns some
+- * number of good random numbers, suitable for key generation, seeding
+- * TCP sequence numbers, etc. It does not rely on the hardware random
+- * number generator. For random bytes direct from the hardware RNG
+- * (when available), use get_random_bytes_arch(). In order to ensure
+- * that the randomness provided by this function is okay, the function
+- * wait_for_random_bytes() should be called and return 0 at least once
+- * at any point prior.
++ * This is [Half]SipHash-1-x, starting from an empty key. Because
++ * the key is fixed, it assumes that its inputs are non-malicious,
++ * and therefore this has no security on its own. s represents the
++ * four-word SipHash state, while v represents a two-word input.
+ */
+-static void _get_random_bytes(void *buf, int nbytes)
+-{
+- u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4);
+-
+- trace_get_random_bytes(nbytes, _RET_IP_);
+-
+- while (nbytes >= CHACHA_BLOCK_SIZE) {
+- extract_crng(buf);
+- buf += CHACHA_BLOCK_SIZE;
+- nbytes -= CHACHA_BLOCK_SIZE;
+- }
+-
+- if (nbytes > 0) {
+- extract_crng(tmp);
+- memcpy(buf, tmp, nbytes);
+- crng_backtrack_protect(tmp, nbytes);
+- } else
+- crng_backtrack_protect(tmp, CHACHA_BLOCK_SIZE);
+- memzero_explicit(tmp, sizeof(tmp));
+-}
+-
+-void get_random_bytes(void *buf, int nbytes)
++static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
+ {
+- static void *previous;
+-
+- warn_unseeded_randomness(&previous);
+- _get_random_bytes(buf, nbytes);
++ s[3] ^= v1;
++ FASTMIX_PERM(s[0], s[1], s[2], s[3]);
++ s[0] ^= v1;
++ s[3] ^= v2;
++ FASTMIX_PERM(s[0], s[1], s[2], s[3]);
++ s[0] ^= v2;
+ }
+-EXPORT_SYMBOL(get_random_bytes);
+
++#ifdef CONFIG_SMP
+ /*
+- * Each time the timer fires, we expect that we got an unpredictable
+- * jump in the cycle counter. Even if the timer is running on another
+- * CPU, the timer activity will be touching the stack of the CPU that is
+- * generating entropy..
+- *
+- * Note that we don't re-arm the timer in the timer itself - we are
+- * happy to be scheduled away, since that just makes the load more
+- * complex, but we do not want the timer to keep ticking unless the
+- * entropy loop is running.
+- *
+- * So the re-arming always happens in the entropy loop itself.
++ * This function is called when the CPU has just come online, with
++ * entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE.
+ */
+-static void entropy_timer(struct timer_list *t)
++int __cold random_online_cpu(unsigned int cpu)
+ {
+- credit_entropy_bits(1);
++ /*
++ * During CPU shutdown and before CPU onlining, add_interrupt_
++ * randomness() may schedule mix_interrupt_randomness(), and
++ * set the MIX_INFLIGHT flag. However, because the worker can
++ * be scheduled on a different CPU during this period, that
++ * flag will never be cleared. For that reason, we zero out
++ * the flag here, which runs just after workqueues are onlined
++ * for the CPU again. This also has the effect of setting the
++ * irq randomness count to zero so that new accumulated irqs
++ * are fresh.
++ */
++ per_cpu_ptr(&irq_randomness, cpu)->count = 0;
++ return 0;
+ }
++#endif
+
+-/*
+- * If we have an actual cycle counter, see if we can
+- * generate enough entropy with timing noise
+- */
+-static void try_to_generate_entropy(void)
++static void mix_interrupt_randomness(struct work_struct *work)
+ {
+- struct {
+- unsigned long now;
+- struct timer_list timer;
+- } stack;
+-
+- stack.now = random_get_entropy();
++ struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
++ /*
++ * The size of the copied stack pool is explicitly 2 longs so that we
++ * only ever ingest half of the siphash output each time, retaining
++ * the other half as the next "key" that carries over. The entropy is
++ * supposed to be sufficiently dispersed between bits so on average
++ * we don't wind up "losing" some.
++ */
++ unsigned long pool[2];
++ unsigned int count;
+
+- /* Slow counter - or none. Don't even bother */
+- if (stack.now == random_get_entropy())
++ /* Check to see if we're running on the wrong CPU due to hotplug. */
++ local_irq_disable();
++ if (fast_pool != this_cpu_ptr(&irq_randomness)) {
++ local_irq_enable();
+ return;
+-
+- timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+- while (!crng_ready()) {
+- if (!timer_pending(&stack.timer))
+- mod_timer(&stack.timer, jiffies + 1);
+- mix_pool_bytes(&stack.now, sizeof(stack.now));
+- schedule();
+- stack.now = random_get_entropy();
+ }
+
+- del_timer_sync(&stack.timer);
+- destroy_timer_on_stack(&stack.timer);
+- mix_pool_bytes(&stack.now, sizeof(stack.now));
+-}
+-
+-/*
+- * Wait for the urandom pool to be seeded and thus guaranteed to supply
+- * cryptographically secure random numbers. This applies to: the /dev/urandom
+- * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
+- * family of functions. Using any of these functions without first calling
+- * this function forfeits the guarantee of security.
+- *
+- * Returns: 0 if the urandom pool has been seeded.
+- * -ERESTARTSYS if the function was interrupted by a signal.
+- */
+-int wait_for_random_bytes(void)
+-{
+- if (likely(crng_ready()))
+- return 0;
+-
+- do {
+- int ret;
+- ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ);
+- if (ret)
+- return ret > 0 ? 0 : ret;
++ /*
++ * Copy the pool to the stack so that the mixer always has a
++ * consistent view, before we reenable irqs again.
++ */
++ memcpy(pool, fast_pool->pool, sizeof(pool));
++ count = fast_pool->count;
++ fast_pool->count = 0;
++ fast_pool->last = jiffies;
++ local_irq_enable();
+
+- try_to_generate_entropy();
+- } while (!crng_ready());
++ mix_pool_bytes(pool, sizeof(pool));
++ credit_init_bits(max(1u, (count & U16_MAX) / 64));
+
+- return 0;
++ memzero_explicit(pool, sizeof(pool));
+ }
+-EXPORT_SYMBOL(wait_for_random_bytes);
+
+-/*
+- * Returns whether or not the urandom pool has been seeded and thus guaranteed
+- * to supply cryptographically secure random numbers. This applies to: the
+- * /dev/urandom device, the get_random_bytes function, and the get_random_{u32,
+- * ,u64,int,long} family of functions.
+- *
+- * Returns: true if the urandom pool has been seeded.
+- * false if the urandom pool has not been seeded.
+- */
+-bool rng_is_initialized(void)
+-{
+- return crng_ready();
+-}
+-EXPORT_SYMBOL(rng_is_initialized);
+-
+-/*
+- * Add a callback function that will be invoked when the nonblocking
+- * pool is initialised.
+- *
+- * returns: 0 if callback is successfully added
+- * -EALREADY if pool is already initialised (callback not called)
+- * -ENOENT if module for callback is not alive
+- */
+-int add_random_ready_callback(struct random_ready_callback *rdy)
++void add_interrupt_randomness(int irq)
+ {
+- struct module *owner;
+- unsigned long flags;
+- int err = -EALREADY;
+-
+- if (crng_ready())
+- return err;
+-
+- owner = rdy->owner;
+- if (!try_module_get(owner))
+- return -ENOENT;
+-
+- spin_lock_irqsave(&random_ready_list_lock, flags);
+- if (crng_ready())
+- goto out;
+-
+- owner = NULL;
++ enum { MIX_INFLIGHT = 1U << 31 };
++ unsigned long entropy = random_get_entropy();
++ struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
++ struct pt_regs *regs = get_irq_regs();
++ unsigned int new_count;
+
+- list_add(&rdy->list, &random_ready_list);
+- err = 0;
++ fast_mix(fast_pool->pool, entropy,
++ (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq));
++ new_count = ++fast_pool->count;
+
+-out:
+- spin_unlock_irqrestore(&random_ready_list_lock, flags);
++ if (new_count & MIX_INFLIGHT)
++ return;
+
+- module_put(owner);
++ if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
++ return;
+
+- return err;
++ if (unlikely(!fast_pool->mix.func))
++ INIT_WORK(&fast_pool->mix, mix_interrupt_randomness);
++ fast_pool->count |= MIX_INFLIGHT;
++ queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix);
+ }
+-EXPORT_SYMBOL(add_random_ready_callback);
++EXPORT_SYMBOL_GPL(add_interrupt_randomness);
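The top bit of the per-CPU count doubles as a "worker already queued" flag, so at most
one mix_interrupt_randomness() is in flight per CPU. A standalone model of that gating
(the queued field is only a stand-in for queue_work_on()):

  #include <stdbool.h>

  #define MIX_INFLIGHT (1u << 31)

  struct fast_pool_model {
          unsigned int count;
          bool queued;    /* stand-in for queue_work_on() */
  };

  static void on_interrupt(struct fast_pool_model *p, bool second_elapsed)
  {
          unsigned int new_count = ++p->count;

          if (new_count & MIX_INFLIGHT)           /* dump already pending */
                  return;
          if (new_count < 64 && !second_elapsed)  /* too few events so far */
                  return;
          p->count |= MIX_INFLIGHT;   /* worker clears this when it drains */
          p->queued = true;
  }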
++
++/* There is one of these per entropy source */
++struct timer_rand_state {
++ unsigned long last_time;
++ long last_delta, last_delta2;
++};
+
+ /*
+- * Delete a previously registered readiness callback function.
++ * This function adds entropy to the entropy "pool" by using timing
++ * delays. It uses the timer_rand_state structure to make an estimate
++ * of how many bits of entropy this call has added to the pool. The
++ * value "num" is also added to the pool; it should somehow describe
++ * the type of event that just happened.
+ */
+-void del_random_ready_callback(struct random_ready_callback *rdy)
++static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
+ {
+- unsigned long flags;
+- struct module *owner = NULL;
++ unsigned long entropy = random_get_entropy(), now = jiffies, flags;
++ long delta, delta2, delta3;
++ unsigned int bits;
+
+- spin_lock_irqsave(&random_ready_list_lock, flags);
+- if (!list_empty(&rdy->list)) {
+- list_del_init(&rdy->list);
+- owner = rdy->owner;
++ /*
++ * If we're in a hard IRQ, add_interrupt_randomness() will be called
++ * sometime after, so mix into the fast pool.
++ */
++ if (in_hardirq()) {
++ fast_mix(this_cpu_ptr(&irq_randomness)->pool, entropy, num);
++ } else {
++ spin_lock_irqsave(&input_pool.lock, flags);
++ _mix_pool_bytes(&entropy, sizeof(entropy));
++ _mix_pool_bytes(&num, sizeof(num));
++ spin_unlock_irqrestore(&input_pool.lock, flags);
+ }
+- spin_unlock_irqrestore(&random_ready_list_lock, flags);
+
+- module_put(owner);
+-}
+-EXPORT_SYMBOL(del_random_ready_callback);
++ if (crng_ready())
++ return;
+
+-/*
+- * This function will use the architecture-specific hardware random
+- * number generator if it is available. The arch-specific hw RNG will
+- * almost certainly be faster than what we can do in software, but it
+- * is impossible to verify that it is implemented securely (as
+- * opposed, to, say, the AES encryption of a sequence number using a
+- * key known by the NSA). So it's useful if we need the speed, but
+- * only if we're willing to trust the hardware manufacturer not to
+- * have put in a back door.
+- *
+- * Return number of bytes filled in.
+- */
+-int __must_check get_random_bytes_arch(void *buf, int nbytes)
+-{
+- int left = nbytes;
+- u8 *p = buf;
++ /*
++ * Calculate number of bits of randomness we probably added.
++ * We take into account the first, second and third-order deltas
++ * in order to make our estimate.
++ */
++ delta = now - READ_ONCE(state->last_time);
++ WRITE_ONCE(state->last_time, now);
++
++ delta2 = delta - READ_ONCE(state->last_delta);
++ WRITE_ONCE(state->last_delta, delta);
+
+- trace_get_random_bytes_arch(left, _RET_IP_);
+- while (left) {
+- unsigned long v;
+- int chunk = min_t(int, left, sizeof(unsigned long));
++ delta3 = delta2 - READ_ONCE(state->last_delta2);
++ WRITE_ONCE(state->last_delta2, delta2);
+
+- if (!arch_get_random_long(&v))
+- break;
++ if (delta < 0)
++ delta = -delta;
++ if (delta2 < 0)
++ delta2 = -delta2;
++ if (delta3 < 0)
++ delta3 = -delta3;
++ if (delta > delta2)
++ delta = delta2;
++ if (delta > delta3)
++ delta = delta3;
+
+- memcpy(p, &v, chunk);
+- p += chunk;
+- left -= chunk;
+- }
++ /*
++ * delta is now minimum absolute delta. Round down by 1 bit
++ * on general principles, and limit entropy estimate to 11 bits.
++ */
++ bits = min(fls(delta >> 1), 11);
+
+- return nbytes - left;
++ /*
++ * As mentioned above, if we're in a hard IRQ, add_interrupt_randomness()
++ * will run after this, which uses a different crediting scheme of 1 bit
++ * per every 64 interrupts. In order to let that function do accounting
++ * close to the one in this function, we credit a full 64/64 bit per bit,
++ * and then subtract one to account for the extra one added.
++ */
++ if (in_hardirq())
++ this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1;
++ else
++ _credit_init_bits(bits);
+ }
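The estimate reduces to: take the minimum absolute first-, second- and third-order
delta, drop one bit, and cap at 11 bits. A standalone model, where fls_() is a portable
stand-in for the kernel's fls():

  #include <stdlib.h>

  static unsigned int fls_(unsigned long v)
  {
          unsigned int n = 0;

          while (v) {
                  v >>= 1;
                  ++n;
          }
          return n;
  }

  static unsigned int estimate_bits(long delta, long delta2, long delta3)
  {
          delta = labs(delta);
          delta2 = labs(delta2);
          delta3 = labs(delta3);
          if (delta > delta2)
                  delta = delta2;
          if (delta > delta3)
                  delta = delta3;
          /* round down one bit, cap the credit at 11 bits */
          unsigned int bits = fls_((unsigned long)delta >> 1);
          return bits < 11 ? bits : 11;
  }

For example, deltas of 8, 100 and 50 give a minimum of 8, so fls(8 >> 1) = 3 bits are
credited.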
+-EXPORT_SYMBOL(get_random_bytes_arch);
+
+-/*
+- * init_std_data - initialize pool with system data
+- *
+- * This function clears the pool's entropy count and mixes some system
+- * data into the pool to prepare it for use. The pool is not cleared
+- * as that can only decrease the entropy in the pool.
+- */
+-static void __init init_std_data(void)
++void add_input_randomness(unsigned int type, unsigned int code, unsigned int value)
+ {
+- int i;
+- ktime_t now = ktime_get_real();
+- unsigned long rv;
+-
+- mix_pool_bytes(&now, sizeof(now));
+- for (i = POOL_BYTES; i > 0; i -= sizeof(rv)) {
+- if (!arch_get_random_seed_long(&rv) &&
+- !arch_get_random_long(&rv))
+- rv = random_get_entropy();
+- mix_pool_bytes(&rv, sizeof(rv));
+- }
+- mix_pool_bytes(utsname(), sizeof(*(utsname())));
++ static unsigned char last_value;
++ static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES };
++
++ /* Ignore autorepeat and the like. */
++ if (value == last_value)
++ return;
++
++ last_value = value;
++ add_timer_randomness(&input_timer_state,
++ (type << 4) ^ code ^ (code >> 4) ^ value);
+ }
++EXPORT_SYMBOL_GPL(add_input_randomness);
+
+-/*
+- * Note that setup_arch() may call add_device_randomness()
+- * long before we get here. This allows seeding of the pools
+- * with some platform dependent data very early in the boot
+- * process. But it limits our options here. We must use
+- * statically allocated structures that already have all
+- * initializations complete at compile time. We should also
+- * take care not to overwrite the precious per platform data
+- * we were given.
+- */
+-int __init rand_initialize(void)
++#ifdef CONFIG_BLOCK
++void add_disk_randomness(struct gendisk *disk)
+ {
+- init_std_data();
+- if (crng_need_final_init)
+- crng_finalize_init();
+- crng_initialize_primary();
+- crng_global_init_time = jiffies;
+- if (ratelimit_disable) {
+- urandom_warning.interval = 0;
+- unseeded_warning.interval = 0;
+- }
+- return 0;
++ if (!disk || !disk->random)
++ return;
++ /* First major is 1, so we get >= 0x200 here. */
++ add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
+ }
++EXPORT_SYMBOL_GPL(add_disk_randomness);
+
+-#ifdef CONFIG_BLOCK
+-void rand_initialize_disk(struct gendisk *disk)
++void __cold rand_initialize_disk(struct gendisk *disk)
+ {
+ struct timer_rand_state *state;
+
+@@ -1724,109 +1139,189 @@ void rand_initialize_disk(struct gendisk *disk)
+ }
+ #endif
+
+-static ssize_t urandom_read_nowarn(struct file *file, char __user *buf,
+- size_t nbytes, loff_t *ppos)
++/*
++ * Each time the timer fires, we expect that we got an unpredictable
++ * jump in the cycle counter. Even if the timer is running on another
++ * CPU, the timer activity will be touching the stack of the CPU that is
++ * generating entropy.

++ *
++ * Note that we don't re-arm the timer in the timer itself - we are
++ * happy to be scheduled away, since that just makes the load more
++ * complex, but we do not want the timer to keep ticking unless the
++ * entropy loop is running.
++ *
++ * So the re-arming always happens in the entropy loop itself.
++ */
++static void __cold entropy_timer(struct timer_list *t)
+ {
+- int ret;
+-
+- nbytes = min_t(size_t, nbytes, INT_MAX >> (POOL_ENTROPY_SHIFT + 3));
+- ret = extract_crng_user(buf, nbytes);
+- trace_urandom_read(8 * nbytes, 0, POOL_ENTROPY_BITS());
+- return ret;
++ credit_init_bits(1);
+ }
+
+-static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes,
+- loff_t *ppos)
++/*
++ * If we have an actual cycle counter, see if we can
++ * generate enough entropy with timing noise
++ */
++static void __cold try_to_generate_entropy(void)
+ {
+- static int maxwarn = 10;
++ struct {
++ unsigned long entropy;
++ struct timer_list timer;
++ } stack;
++
++ stack.entropy = random_get_entropy();
++
++ /* Slow counter - or none. Don't even bother */
++ if (stack.entropy == random_get_entropy())
++ return;
+
+- if (!crng_ready() && maxwarn > 0) {
+- maxwarn--;
+- if (__ratelimit(&urandom_warning))
+- pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
+- current->comm, nbytes);
++ timer_setup_on_stack(&stack.timer, entropy_timer, 0);
++ while (!crng_ready() && !signal_pending(current)) {
++ if (!timer_pending(&stack.timer))
++ mod_timer(&stack.timer, jiffies + 1);
++ mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
++ schedule();
++ stack.entropy = random_get_entropy();
+ }
+
+- return urandom_read_nowarn(file, buf, nbytes, ppos);
++ del_timer_sync(&stack.timer);
++ destroy_timer_on_stack(&stack.timer);
++ mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
+ }
+
+-static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes,
+- loff_t *ppos)
++
++/**********************************************************************
++ *
++ * Userspace reader/writer interfaces.
++ *
++ * getrandom(2) is the primary modern interface into the RNG and should
++ * be used in preference to anything else.
++ *
++ * Reading from /dev/random has the same functionality as calling
++ * getrandom(2) with flags=0. In earlier versions, however, it had
++ * vastly different semantics and should therefore be avoided, to
++ * prevent backwards compatibility issues.
++ *
++ * Reading from /dev/urandom has the same functionality as calling
++ * getrandom(2) with flags=GRND_INSECURE. Because it does not block
++ * waiting for the RNG to be ready, it should not be used.
++ *
++ * Writing to either /dev/random or /dev/urandom adds entropy to
++ * the input pool but does not credit it.
++ *
++ * Polling on /dev/random indicates when the RNG is initialized, on
++ * the read side, and when it wants new entropy, on the write side.
++ *
++ * Both /dev/random and /dev/urandom have the same set of ioctls for
++ * adding entropy, getting the entropy count, zeroing the count, and
++ * reseeding the crng.
++ *
++ **********************************************************************/
++
++SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags)
+ {
++ struct iov_iter iter;
++ struct iovec iov;
+ int ret;
+
+- ret = wait_for_random_bytes();
+- if (ret != 0)
++ if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
++ return -EINVAL;
++
++ /*
++ * Requesting insecure and blocking randomness at the same time makes
++ * no sense.
++ */
++ if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
++ return -EINVAL;
++
++ if (!crng_ready() && !(flags & GRND_INSECURE)) {
++ if (flags & GRND_NONBLOCK)
++ return -EAGAIN;
++ ret = wait_for_random_bytes();
++ if (unlikely(ret))
++ return ret;
++ }
++
++ ret = import_single_range(READ, ubuf, len, &iov, &iter);
++ if (unlikely(ret))
+ return ret;
+- return urandom_read_nowarn(file, buf, nbytes, ppos);
++ return get_random_bytes_user(&iter);
+ }
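From userspace this is reached through the getrandom(3) wrapper (glibc 2.25 and later);
with flags=0 it blocks until the pool is initialized, and GRND_NONBLOCK turns that wait
into -EAGAIN. A minimal caller:

  #include <sys/random.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned char key[32];
          ssize_t n = getrandom(key, sizeof(key), 0); /* waits for seeding */

          if (n != (ssize_t)sizeof(key)) {
                  perror("getrandom");
                  return 1;
          }
          printf("read %zd random bytes\n", n);
          return 0;
  }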
+
+ static __poll_t random_poll(struct file *file, poll_table *wait)
+ {
+- __poll_t mask;
+-
+ poll_wait(file, &crng_init_wait, wait);
+- poll_wait(file, &random_write_wait, wait);
+- mask = 0;
+- if (crng_ready())
+- mask |= EPOLLIN | EPOLLRDNORM;
+- if (POOL_ENTROPY_BITS() < random_write_wakeup_bits)
+- mask |= EPOLLOUT | EPOLLWRNORM;
+- return mask;
++ return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
+ }
+
+-static int write_pool(const char __user *buffer, size_t count)
++static ssize_t write_pool_user(struct iov_iter *iter)
+ {
+- size_t bytes;
+- u32 t, buf[16];
+- const char __user *p = buffer;
++ u8 block[BLAKE2S_BLOCK_SIZE];
++ ssize_t ret = 0;
++ size_t copied;
+
+- while (count > 0) {
+- int b, i = 0;
++ if (unlikely(!iov_iter_count(iter)))
++ return 0;
+
+- bytes = min(count, sizeof(buf));
+- if (copy_from_user(&buf, p, bytes))
+- return -EFAULT;
++ for (;;) {
++ copied = copy_from_iter(block, sizeof(block), iter);
++ ret += copied;
++ mix_pool_bytes(block, copied);
++ if (!iov_iter_count(iter) || copied != sizeof(block))
++ break;
+
+- for (b = bytes; b > 0; b -= sizeof(u32), i++) {
+- if (!arch_get_random_int(&t))
++ BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
++ if (ret % PAGE_SIZE == 0) {
++ if (signal_pending(current))
+ break;
+- buf[i] ^= t;
++ cond_resched();
+ }
++ }
+
+- count -= bytes;
+- p += bytes;
++ memzero_explicit(block, sizeof(block));
++ return ret ? ret : -EFAULT;
++}
++
++static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter)
++{
++ return write_pool_user(iter);
++}
+
+- mix_pool_bytes(buf, bytes);
+- cond_resched();
++static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
++{
++ static int maxwarn = 10;
++
++ if (!crng_ready()) {
++ if (!ratelimit_disable && maxwarn <= 0)
++ ++urandom_warning.missed;
++ else if (ratelimit_disable || __ratelimit(&urandom_warning)) {
++ --maxwarn;
++ pr_notice("%s: uninitialized urandom read (%zu bytes read)\n",
++ current->comm, iov_iter_count(iter));
++ }
+ }
+
+- return 0;
++ return get_random_bytes_user(iter);
+ }
+
+-static ssize_t random_write(struct file *file, const char __user *buffer,
+- size_t count, loff_t *ppos)
++static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
+ {
+- size_t ret;
++ int ret;
+
+- ret = write_pool(buffer, count);
+- if (ret)
++ ret = wait_for_random_bytes();
++ if (ret != 0)
+ return ret;
+-
+- return (ssize_t)count;
++ return get_random_bytes_user(iter);
+ }
+
+ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ {
+- int size, ent_count;
+ int __user *p = (int __user *)arg;
+- int retval;
++ int ent_count;
+
+ switch (cmd) {
+ case RNDGETENTCNT:
+- /* inherently racy, no point locking */
+- ent_count = POOL_ENTROPY_BITS();
+- if (put_user(ent_count, p))
++ /* Inherently racy, no point locking. */
++ if (put_user(input_pool.init_bits, p))
+ return -EFAULT;
+ return 0;
+ case RNDADDTOENTCNT:
+@@ -1834,40 +1329,48 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+ return -EPERM;
+ if (get_user(ent_count, p))
+ return -EFAULT;
+- return credit_entropy_bits_safe(ent_count);
+- case RNDADDENTROPY:
++ if (ent_count < 0)
++ return -EINVAL;
++ credit_init_bits(ent_count);
++ return 0;
++ case RNDADDENTROPY: {
++ struct iov_iter iter;
++ struct iovec iov;
++ ssize_t ret;
++ int len;
++
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ if (get_user(ent_count, p++))
+ return -EFAULT;
+ if (ent_count < 0)
+ return -EINVAL;
+- if (get_user(size, p++))
++ if (get_user(len, p++))
++ return -EFAULT;
++ ret = import_single_range(WRITE, p, len, &iov, &iter);
++ if (unlikely(ret))
++ return ret;
++ ret = write_pool_user(&iter);
++ if (unlikely(ret < 0))
++ return ret;
++ /* Since we're crediting, enforce that it was all written into the pool. */
++ if (unlikely(ret != len))
+ return -EFAULT;
+- retval = write_pool((const char __user *)p, size);
+- if (retval < 0)
+- return retval;
+- return credit_entropy_bits_safe(ent_count);
++ credit_init_bits(ent_count);
++ return 0;
++ }
+ case RNDZAPENTCNT:
+ case RNDCLEARPOOL:
+- /*
+- * Clear the entropy pool counters. We no longer clear
+- * the entropy pool, as that's silly.
+- */
++ /* No longer has any effect. */
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+- if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) {
+- wake_up_interruptible(&random_write_wait);
+- kill_fasync(&fasync, SIGIO, POLL_OUT);
+- }
+ return 0;
+ case RNDRESEEDCRNG:
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+- if (crng_init < 2)
++ if (!crng_ready())
+ return -ENODATA;
+- crng_reseed(&primary_crng, true);
+- WRITE_ONCE(crng_global_init_time, jiffies - 1);
++ crng_reseed();
+ return 0;
+ default:
+ return -EINVAL;
+@@ -1880,55 +1383,56 @@ static int random_fasync(int fd, struct file *filp, int on)
+ }
+
+ const struct file_operations random_fops = {
+- .read = random_read,
+- .write = random_write,
++ .read_iter = random_read_iter,
++ .write_iter = random_write_iter,
+ .poll = random_poll,
+ .unlocked_ioctl = random_ioctl,
+ .compat_ioctl = compat_ptr_ioctl,
+ .fasync = random_fasync,
+ .llseek = noop_llseek,
++ .splice_read = generic_file_splice_read,
++ .splice_write = iter_file_splice_write,
+ };
+
+ const struct file_operations urandom_fops = {
+- .read = urandom_read,
+- .write = random_write,
++ .read_iter = urandom_read_iter,
++ .write_iter = random_write_iter,
+ .unlocked_ioctl = random_ioctl,
+ .compat_ioctl = compat_ptr_ioctl,
+ .fasync = random_fasync,
+ .llseek = noop_llseek,
++ .splice_read = generic_file_splice_read,
++ .splice_write = iter_file_splice_write,
+ };
+
+-SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int,
+- flags)
+-{
+- int ret;
+-
+- if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
+- return -EINVAL;
+-
+- /*
+- * Requesting insecure and blocking randomness at the same time makes
+- * no sense.
+- */
+- if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
+- return -EINVAL;
+-
+- if (count > INT_MAX)
+- count = INT_MAX;
+-
+- if (!(flags & GRND_INSECURE) && !crng_ready()) {
+- if (flags & GRND_NONBLOCK)
+- return -EAGAIN;
+- ret = wait_for_random_bytes();
+- if (unlikely(ret))
+- return ret;
+- }
+- return urandom_read_nowarn(NULL, buf, count, NULL);
+-}
+
+ /********************************************************************
+ *
+- * Sysctl interface
++ * Sysctl interface.
++ *
++ * These are partly unused legacy knobs with dummy values to not break
++ * userspace and partly still useful things. They are usually accessible
++ * in /proc/sys/kernel/random/ and are as follows:
++ *
++ * - boot_id - a UUID representing the current boot.
++ *
++ * - uuid - a random UUID, different each time the file is read.
++ *
++ * - poolsize - the number of bits of entropy that the input pool can
++ * hold, tied to the POOL_BITS constant.
++ *
++ * - entropy_avail - the number of bits of entropy currently in the
++ * input pool. Always <= poolsize.
++ *
++ * - write_wakeup_threshold - the amount of entropy in the input pool
++ * below which write polls to /dev/random will unblock, requesting
++ * more entropy, tied to the POOL_READY_BITS constant. It is writable
++ * to avoid breaking old userspaces, but writing to it does not
++ * change any behavior of the RNG.
++ *
++ * - urandom_min_reseed_secs - fixed to the value CRNG_RESEED_INTERVAL.
++ * It is writable to avoid breaking old userspaces, but writing
++ * to it does not change any behavior of the RNG.
+ *
+ ********************************************************************/
+
+@@ -1936,25 +1440,28 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int,
+
+ #include <linux/sysctl.h>
+
+-static int min_write_thresh;
+-static int max_write_thresh = POOL_BITS;
+-static int random_min_urandom_seed = 60;
+-static char sysctl_bootid[16];
++static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ;
++static int sysctl_random_write_wakeup_bits = POOL_READY_BITS;
++static int sysctl_poolsize = POOL_BITS;
++static u8 sysctl_bootid[UUID_SIZE];
+
+ /*
+ * This function is used to return both the bootid UUID, and random
+- * UUID. The difference is in whether table->data is NULL; if it is,
++ * UUID. The difference is in whether table->data is NULL; if it is,
+ * then a new UUID is generated and returned to the user.
+- *
+- * If the user accesses this via the proc interface, the UUID will be
+- * returned as an ASCII string in the standard UUID format; if via the
+- * sysctl system call, as 16 bytes of binary data.
+ */
+-static int proc_do_uuid(struct ctl_table *table, int write, void *buffer,
++static int proc_do_uuid(struct ctl_table *table, int write, void *buf,
+ size_t *lenp, loff_t *ppos)
+ {
+- struct ctl_table fake_table;
+- unsigned char buf[64], tmp_uuid[16], *uuid;
++ u8 tmp_uuid[UUID_SIZE], *uuid;
++ char uuid_string[UUID_STRING_LEN + 1];
++ struct ctl_table fake_table = {
++ .data = uuid_string,
++ .maxlen = UUID_STRING_LEN
++ };
++
++ if (write)
++ return -EPERM;
+
+ uuid = table->data;
+ if (!uuid) {
+@@ -1969,32 +1476,17 @@ static int proc_do_uuid(struct ctl_table *table, int write, void *buffer,
+ spin_unlock(&bootid_spinlock);
+ }
+
+- sprintf(buf, "%pU", uuid);
+-
+- fake_table.data = buf;
+- fake_table.maxlen = sizeof(buf);
+-
+- return proc_dostring(&fake_table, write, buffer, lenp, ppos);
++ snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid);
++ return proc_dostring(&fake_table, 0, buf, lenp, ppos);
+ }
+
+-/*
+- * Return entropy available scaled to integral bits
+- */
+-static int proc_do_entropy(struct ctl_table *table, int write, void *buffer,
+- size_t *lenp, loff_t *ppos)
++/* The same as proc_dointvec, but writes don't change anything. */
++static int proc_do_rointvec(struct ctl_table *table, int write, void *buf,
++ size_t *lenp, loff_t *ppos)
+ {
+- struct ctl_table fake_table;
+- int entropy_count;
+-
+- entropy_count = *(int *)table->data >> POOL_ENTROPY_SHIFT;
+-
+- fake_table.data = &entropy_count;
+- fake_table.maxlen = sizeof(entropy_count);
+-
+- return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
++ return write ? 0 : proc_dointvec(table, 0, buf, lenp, ppos);
+ }
+
+-static int sysctl_poolsize = POOL_BITS;
+ static struct ctl_table random_table[] = {
+ {
+ .procname = "poolsize",
+@@ -2005,62 +1497,42 @@ static struct ctl_table random_table[] = {
+ },
+ {
+ .procname = "entropy_avail",
++ .data = &input_pool.init_bits,
+ .maxlen = sizeof(int),
+ .mode = 0444,
+- .proc_handler = proc_do_entropy,
+- .data = &input_pool.entropy_count,
++ .proc_handler = proc_dointvec,
+ },
+ {
+ .procname = "write_wakeup_threshold",
+- .data = &random_write_wakeup_bits,
++ .data = &sysctl_random_write_wakeup_bits,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec_minmax,
+- .extra1 = &min_write_thresh,
+- .extra2 = &max_write_thresh,
++ .proc_handler = proc_do_rointvec,
+ },
+ {
+ .procname = "urandom_min_reseed_secs",
+- .data = &random_min_urandom_seed,
++ .data = &sysctl_random_min_urandom_seed,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_do_rointvec,
+ },
+ {
+ .procname = "boot_id",
+ .data = &sysctl_bootid,
+- .maxlen = 16,
+ .mode = 0444,
+ .proc_handler = proc_do_uuid,
+ },
+ {
+ .procname = "uuid",
+- .maxlen = 16,
+ .mode = 0444,
+ .proc_handler = proc_do_uuid,
+ },
+-#ifdef ADD_INTERRUPT_BENCH
+- {
+- .procname = "add_interrupt_avg_cycles",
+- .data = &avg_cycles,
+- .maxlen = sizeof(avg_cycles),
+- .mode = 0444,
+- .proc_handler = proc_doulongvec_minmax,
+- },
+- {
+- .procname = "add_interrupt_avg_deviation",
+- .data = &avg_deviation,
+- .maxlen = sizeof(avg_deviation),
+- .mode = 0444,
+- .proc_handler = proc_doulongvec_minmax,
+- },
+-#endif
+ { }
+ };
+
+ /*
+- * rand_initialize() is called before sysctl_init(),
+- * so we cannot call register_sysctl_init() in rand_initialize()
++ * random_init() is called before sysctl_init(),
++ * so we cannot call register_sysctl_init() in random_init()
+ */
+ static int __init random_sysctls_init(void)
+ {
+@@ -2068,170 +1540,4 @@ static int __init random_sysctls_init(void)
+ return 0;
+ }
+ device_initcall(random_sysctls_init);
+-#endif /* CONFIG_SYSCTL */
+-
+-struct batched_entropy {
+- union {
+- u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)];
+- u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
+- };
+- unsigned int position;
+- spinlock_t batch_lock;
+-};
+-
+-/*
+- * Get a random word for internal kernel use only. The quality of the random
+- * number is good as /dev/urandom, but there is no backtrack protection, with
+- * the goal of being quite fast and not depleting entropy. In order to ensure
+- * that the randomness provided by this function is okay, the function
+- * wait_for_random_bytes() should be called and return 0 at least once at any
+- * point prior.
+- */
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
+- .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
+-};
+-
+-u64 get_random_u64(void)
+-{
+- u64 ret;
+- unsigned long flags;
+- struct batched_entropy *batch;
+- static void *previous;
+-
+- warn_unseeded_randomness(&previous);
+-
+- batch = raw_cpu_ptr(&batched_entropy_u64);
+- spin_lock_irqsave(&batch->batch_lock, flags);
+- if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+- extract_crng((u8 *)batch->entropy_u64);
+- batch->position = 0;
+- }
+- ret = batch->entropy_u64[batch->position++];
+- spin_unlock_irqrestore(&batch->batch_lock, flags);
+- return ret;
+-}
+-EXPORT_SYMBOL(get_random_u64);
+-
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
+- .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
+-};
+-u32 get_random_u32(void)
+-{
+- u32 ret;
+- unsigned long flags;
+- struct batched_entropy *batch;
+- static void *previous;
+-
+- warn_unseeded_randomness(&previous);
+-
+- batch = raw_cpu_ptr(&batched_entropy_u32);
+- spin_lock_irqsave(&batch->batch_lock, flags);
+- if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+- extract_crng((u8 *)batch->entropy_u32);
+- batch->position = 0;
+- }
+- ret = batch->entropy_u32[batch->position++];
+- spin_unlock_irqrestore(&batch->batch_lock, flags);
+- return ret;
+-}
+-EXPORT_SYMBOL(get_random_u32);
+-
+-/* It's important to invalidate all potential batched entropy that might
+- * be stored before the crng is initialized, which we can do lazily by
+- * simply resetting the counter to zero so that it's re-extracted on the
+- * next usage. */
+-static void invalidate_batched_entropy(void)
+-{
+- int cpu;
+- unsigned long flags;
+-
+- for_each_possible_cpu(cpu) {
+- struct batched_entropy *batched_entropy;
+-
+- batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu);
+- spin_lock_irqsave(&batched_entropy->batch_lock, flags);
+- batched_entropy->position = 0;
+- spin_unlock(&batched_entropy->batch_lock);
+-
+- batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu);
+- spin_lock(&batched_entropy->batch_lock);
+- batched_entropy->position = 0;
+- spin_unlock_irqrestore(&batched_entropy->batch_lock, flags);
+- }
+-}
+-
+-/**
+- * randomize_page - Generate a random, page aligned address
+- * @start: The smallest acceptable address the caller will take.
+- * @range: The size of the area, starting at @start, within which the
+- * random address must fall.
+- *
+- * If @start + @range would overflow, @range is capped.
+- *
+- * NOTE: Historical use of randomize_range, which this replaces, presumed that
+- * @start was already page aligned. We now align it regardless.
+- *
+- * Return: A page aligned address within [start, start + range). On error,
+- * @start is returned.
+- */
+-unsigned long randomize_page(unsigned long start, unsigned long range)
+-{
+- if (!PAGE_ALIGNED(start)) {
+- range -= PAGE_ALIGN(start) - start;
+- start = PAGE_ALIGN(start);
+- }
+-
+- if (start > ULONG_MAX - range)
+- range = ULONG_MAX - start;
+-
+- range >>= PAGE_SHIFT;
+-
+- if (range == 0)
+- return start;
+-
+- return start + (get_random_long() % range << PAGE_SHIFT);
+-}
+-
+-/* Interface for in-kernel drivers of true hardware RNGs.
+- * Those devices may produce endless random bits and will be throttled
+- * when our pool is full.
+- */
+-void add_hwgenerator_randomness(const char *buffer, size_t count,
+- size_t entropy)
+-{
+- if (unlikely(crng_init == 0)) {
+- size_t ret = crng_fast_load(buffer, count);
+- mix_pool_bytes(buffer, ret);
+- count -= ret;
+- buffer += ret;
+- if (!count || crng_init == 0)
+- return;
+- }
+-
+- /* Throttle writing if we're above the trickle threshold.
+- * We'll be woken up again once below random_write_wakeup_thresh,
+- * when the calling thread is about to terminate, or once
+- * CRNG_RESEED_INTERVAL has lapsed.
+- */
+- wait_event_interruptible_timeout(random_write_wait,
+- !system_wq || kthread_should_stop() ||
+- POOL_ENTROPY_BITS() <= random_write_wakeup_bits,
+- CRNG_RESEED_INTERVAL);
+- mix_pool_bytes(buffer, count);
+- credit_entropy_bits(entropy);
+-}
+-EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+-
+-/* Handle random seed passed by bootloader.
+- * If the seed is trustworthy, it would be regarded as hardware RNGs. Otherwise
+- * it would be regarded as device data.
+- * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER.
+- */
+-void add_bootloader_randomness(const void *buf, unsigned int size)
+-{
+- if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER))
+- add_hwgenerator_randomness(buf, size, size * 8);
+- else
+- add_device_randomness(buf, size);
+-}
+-EXPORT_SYMBOL_GPL(add_bootloader_randomness);
++#endif
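The comment block earlier in this diff is the authoritative list of the /proc/sys/kernel/random knobs that survive the rewrite. A quick way to inspect them from userspace is to read the files directly; the sketch below assumes the standard procfs mount point and trims error handling:

#include <stdio.h>

int main(void)
{
        const char *knobs[] = {
                "boot_id", "uuid", "poolsize", "entropy_avail",
                "write_wakeup_threshold", "urandom_min_reseed_secs",
        };
        char path[128], line[128];

        for (unsigned int i = 0; i < sizeof(knobs) / sizeof(knobs[0]); i++) {
                snprintf(path, sizeof(path),
                         "/proc/sys/kernel/random/%s", knobs[i]);
                FILE *f = fopen(path, "r");

                if (!f)
                        continue;
                if (fgets(line, sizeof(line), f))
                        printf("%-24s %s", knobs[i], line);
                fclose(f);
        }
        return 0;
}

Reading uuid twice shows a different value each time, while boot_id stays fixed for the life of the boot — exactly the distinction proc_do_uuid() draws on table->data being NULL.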
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+index c5de0ec4f9d03..444acd9e2cd6a 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c
+@@ -227,6 +227,17 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata)
+ dev_dbg(dev, "sid 0x%x status 0x%x\n",
+ cl_data->sensor_idx[i], cl_data->sensor_sts[i]);
+ }
++ if (privdata->mp2_ops->discovery_status &&
++ privdata->mp2_ops->discovery_status(privdata) == 0) {
++ amd_sfh_hid_client_deinit(privdata);
++ for (i = 0; i < cl_data->num_hid_devices; i++) {
++ devm_kfree(dev, cl_data->feature_report[i]);
++ devm_kfree(dev, in_data->input_report[i]);
++ devm_kfree(dev, cl_data->report_descr[i]);
++ }
++ dev_warn(dev, "Failed to discover, sensors not enabled\n");
++ return -EOPNOTSUPP;
++ }
+ schedule_delayed_work(&cl_data->work_buffer, msecs_to_jiffies(AMD_SFH_IDLE_LOOP));
+ return 0;
+
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+index 19fa734a9a793..abd7f65860958 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
+@@ -130,6 +130,12 @@ static int amd_sfh_irq_init_v2(struct amd_mp2_dev *privdata)
+ return 0;
+ }
+
++static int amd_sfh_dis_sts_v2(struct amd_mp2_dev *privdata)
++{
++ return (readl(privdata->mmio + AMD_P2C_MSG(1)) &
++ SENSOR_DISCOVERY_STATUS_MASK) >> SENSOR_DISCOVERY_STATUS_SHIFT;
++}
++
+ void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info info)
+ {
+ union sfh_cmd_param cmd_param;
+@@ -245,6 +251,7 @@ static const struct amd_mp2_ops amd_sfh_ops_v2 = {
+ .response = amd_sfh_wait_response_v2,
+ .clear_intr = amd_sfh_clear_intr_v2,
+ .init_intr = amd_sfh_irq_init_v2,
++ .discovery_status = amd_sfh_dis_sts_v2,
+ };
+
+ static const struct amd_mp2_ops amd_sfh_ops = {
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+index 97b99861fae25..9aa88a91ac8d1 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
+@@ -39,6 +39,9 @@
+
+ #define AMD_SFH_IDLE_LOOP 200
+
++#define SENSOR_DISCOVERY_STATUS_MASK GENMASK(5, 3)
++#define SENSOR_DISCOVERY_STATUS_SHIFT 3
++
+ /* SFH Command register */
+ union sfh_cmd_base {
+ u32 ul;
+@@ -143,5 +146,6 @@ struct amd_mp2_ops {
+ int (*response)(struct amd_mp2_dev *mp2, u8 sid, u32 sensor_sts);
+ void (*clear_intr)(struct amd_mp2_dev *privdata);
+ int (*init_intr)(struct amd_mp2_dev *privdata);
++ int (*discovery_status)(struct amd_mp2_dev *privdata);
+ };
+ #endif
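The new discovery_status op is a plain mask-and-shift over an MMIO register. Current kernels often spell the same extraction with FIELD_GET() from <linux/bitfield.h>, which derives the shift from the mask; a sketch of that alternative (not how this driver writes it):

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define SENSOR_DISCOVERY_STATUS_MASK    GENMASK(5, 3)

/* Same result as (val & MASK) >> SENSOR_DISCOVERY_STATUS_SHIFT. */
static inline u32 sensor_discovery_status(u32 val)
{
        return FIELD_GET(SENSOR_DISCOVERY_STATUS_MASK, val);
}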
+diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
+index 411a428ace4d4..481e565cc5c42 100644
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -100,6 +100,7 @@ enum cpuhp_state {
+ CPUHP_AP_ARM_CACHE_B15_RAC_DEAD,
+ CPUHP_PADATA_DEAD,
+ CPUHP_AP_DTPM_CPU_DEAD,
++ CPUHP_RANDOM_PREPARE,
+ CPUHP_WORKQUEUE_PREP,
+ CPUHP_POWER_NUMA_PREPARE,
+ CPUHP_HRTIMERS_PREPARE,
+@@ -240,6 +241,7 @@ enum cpuhp_state {
+ CPUHP_AP_PERF_CSKY_ONLINE,
+ CPUHP_AP_WATCHDOG_ONLINE,
+ CPUHP_AP_WORKQUEUE_ONLINE,
++ CPUHP_AP_RANDOM_ONLINE,
+ CPUHP_AP_RCUTREE_ONLINE,
+ CPUHP_AP_BASE_CACHEINFO_ONLINE,
+ CPUHP_AP_ONLINE_DYN,
+diff --git a/include/linux/hw_random.h b/include/linux/hw_random.h
+index 8e6dd908da216..aa1d4da03538b 100644
+--- a/include/linux/hw_random.h
++++ b/include/linux/hw_random.h
+@@ -60,7 +60,5 @@ extern int devm_hwrng_register(struct device *dev, struct hwrng *rng);
+ /** Unregister a Hardware Random Number Generator driver. */
+ extern void hwrng_unregister(struct hwrng *rng);
+ extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng);
+-/** Feed random bits into the pool. */
+-extern void add_hwgenerator_randomness(const char *buffer, size_t count, size_t entropy);
+
+ #endif /* LINUX_HWRANDOM_H_ */
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 5744a3fc47169..9cb0ff065e8b1 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -2678,6 +2678,7 @@ extern int install_special_mapping(struct mm_struct *mm,
+ unsigned long flags, struct page **pages);
+
+ unsigned long randomize_stack_top(unsigned long stack_top);
++unsigned long randomize_page(unsigned long start, unsigned long range);
+
+ extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+
+diff --git a/include/linux/prandom.h b/include/linux/prandom.h
+index 056d31317e499..a4aadd2dc153e 100644
+--- a/include/linux/prandom.h
++++ b/include/linux/prandom.h
+@@ -10,6 +10,7 @@
+
+ #include <linux/types.h>
+ #include <linux/percpu.h>
++#include <linux/siphash.h>
+
+ u32 prandom_u32(void);
+ void prandom_bytes(void *buf, size_t nbytes);
+@@ -27,15 +28,10 @@ DECLARE_PER_CPU(unsigned long, net_rand_noise);
+ * The core SipHash round function. Each line can be executed in
+ * parallel given enough CPU resources.
+ */
+-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
+- v0 += v1, v1 = rol64(v1, 13), v2 += v3, v3 = rol64(v3, 16), \
+- v1 ^= v0, v0 = rol64(v0, 32), v3 ^= v2, \
+- v0 += v3, v3 = rol64(v3, 21), v2 += v1, v1 = rol64(v1, 17), \
+- v3 ^= v0, v1 ^= v2, v2 = rol64(v2, 32) \
+-)
++#define PRND_SIPROUND(v0, v1, v2, v3) SIPHASH_PERMUTATION(v0, v1, v2, v3)
+
+-#define PRND_K0 (0x736f6d6570736575 ^ 0x6c7967656e657261)
+-#define PRND_K1 (0x646f72616e646f6d ^ 0x7465646279746573)
++#define PRND_K0 (SIPHASH_CONST_0 ^ SIPHASH_CONST_2)
++#define PRND_K1 (SIPHASH_CONST_1 ^ SIPHASH_CONST_3)
+
+ #elif BITS_PER_LONG == 32
+ /*
+@@ -43,14 +39,9 @@ DECLARE_PER_CPU(unsigned long, net_rand_noise);
+ * This is weaker, but 32-bit machines are not used for high-traffic
+ * applications, so there is less output for an attacker to analyze.
+ */
+-#define PRND_SIPROUND(v0, v1, v2, v3) ( \
+- v0 += v1, v1 = rol32(v1, 5), v2 += v3, v3 = rol32(v3, 8), \
+- v1 ^= v0, v0 = rol32(v0, 16), v3 ^= v2, \
+- v0 += v3, v3 = rol32(v3, 7), v2 += v1, v1 = rol32(v1, 13), \
+- v3 ^= v0, v1 ^= v2, v2 = rol32(v2, 16) \
+-)
+-#define PRND_K0 0x6c796765
+-#define PRND_K1 0x74656462
++#define PRND_SIPROUND(v0, v1, v2, v3) HSIPHASH_PERMUTATION(v0, v1, v2, v3)
++#define PRND_K0 (HSIPHASH_CONST_0 ^ HSIPHASH_CONST_2)
++#define PRND_K1 (HSIPHASH_CONST_1 ^ HSIPHASH_CONST_3)
+
+ #else
+ #error Unsupported BITS_PER_LONG
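The rewritten PRND_K0/PRND_K1 are just the xor of the published SipHash/HalfSipHash initialization constants, which is exactly what the old hard-coded literals encoded. A compile-time check makes the 64-bit equivalence explicit; this is a sketch for illustration, not something either header contains:

#include <linux/build_bug.h>
#include <linux/siphash.h>

/* The literals the old PRND_K0/PRND_K1 used, now spelled symbolically. */
static_assert((SIPHASH_CONST_0 ^ SIPHASH_CONST_2) ==
              (0x736f6d6570736575ULL ^ 0x6c7967656e657261ULL));
static_assert((SIPHASH_CONST_1 ^ SIPHASH_CONST_3) ==
              (0x646f72616e646f6dULL ^ 0x7465646279746573ULL));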
+diff --git a/include/linux/random.h b/include/linux/random.h
+index c45b2693e51fb..917470c4490ac 100644
+--- a/include/linux/random.h
++++ b/include/linux/random.h
+@@ -1,9 +1,5 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+-/*
+- * include/linux/random.h
+- *
+- * Include file for the random number generator.
+- */
++
+ #ifndef _LINUX_RANDOM_H
+ #define _LINUX_RANDOM_H
+
+@@ -14,41 +10,26 @@
+
+ #include <uapi/linux/random.h>
+
+-struct random_ready_callback {
+- struct list_head list;
+- void (*func)(struct random_ready_callback *rdy);
+- struct module *owner;
+-};
++struct notifier_block;
+
+-extern void add_device_randomness(const void *, unsigned int);
+-extern void add_bootloader_randomness(const void *, unsigned int);
++void add_device_randomness(const void *buf, size_t len);
++void add_bootloader_randomness(const void *buf, size_t len);
++void add_input_randomness(unsigned int type, unsigned int code,
++ unsigned int value) __latent_entropy;
++void add_interrupt_randomness(int irq) __latent_entropy;
++void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
+
+ #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
+ static inline void add_latent_entropy(void)
+ {
+- add_device_randomness((const void *)&latent_entropy,
+- sizeof(latent_entropy));
++ add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
+ }
+ #else
+-static inline void add_latent_entropy(void) {}
+-#endif
+-
+-extern void add_input_randomness(unsigned int type, unsigned int code,
+- unsigned int value) __latent_entropy;
+-extern void add_interrupt_randomness(int irq) __latent_entropy;
+-
+-extern void get_random_bytes(void *buf, int nbytes);
+-extern int wait_for_random_bytes(void);
+-extern int __init rand_initialize(void);
+-extern bool rng_is_initialized(void);
+-extern int add_random_ready_callback(struct random_ready_callback *rdy);
+-extern void del_random_ready_callback(struct random_ready_callback *rdy);
+-extern int __must_check get_random_bytes_arch(void *buf, int nbytes);
+-
+-#ifndef MODULE
+-extern const struct file_operations random_fops, urandom_fops;
++static inline void add_latent_entropy(void) { }
+ #endif
+
++void get_random_bytes(void *buf, size_t len);
++size_t __must_check get_random_bytes_arch(void *buf, size_t len);
+ u32 get_random_u32(void);
+ u64 get_random_u64(void);
+ static inline unsigned int get_random_int(void)
+@@ -80,36 +61,38 @@ static inline unsigned long get_random_long(void)
+
+ static inline unsigned long get_random_canary(void)
+ {
+- unsigned long val = get_random_long();
+-
+- return val & CANARY_MASK;
++ return get_random_long() & CANARY_MASK;
+ }
+
++int __init random_init(const char *command_line);
++bool rng_is_initialized(void);
++int wait_for_random_bytes(void);
++int register_random_ready_notifier(struct notifier_block *nb);
++int unregister_random_ready_notifier(struct notifier_block *nb);
++
+ /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
+ * Returns the result of the call to wait_for_random_bytes. */
+-static inline int get_random_bytes_wait(void *buf, int nbytes)
++static inline int get_random_bytes_wait(void *buf, size_t nbytes)
+ {
+ int ret = wait_for_random_bytes();
+ get_random_bytes(buf, nbytes);
+ return ret;
+ }
+
+-#define declare_get_random_var_wait(var) \
+- static inline int get_random_ ## var ## _wait(var *out) { \
++#define declare_get_random_var_wait(name, ret_type) \
++ static inline int get_random_ ## name ## _wait(ret_type *out) { \
+ int ret = wait_for_random_bytes(); \
+ if (unlikely(ret)) \
+ return ret; \
+- *out = get_random_ ## var(); \
++ *out = get_random_ ## name(); \
+ return 0; \
+ }
+-declare_get_random_var_wait(u32)
+-declare_get_random_var_wait(u64)
+-declare_get_random_var_wait(int)
+-declare_get_random_var_wait(long)
++declare_get_random_var_wait(u32, u32)
++declare_get_random_var_wait(u64, u32)
++declare_get_random_var_wait(int, unsigned int)
++declare_get_random_var_wait(long, unsigned long)
+ #undef declare_get_random_var
+
+-unsigned long randomize_page(unsigned long start, unsigned long range);
+-
+ /*
+ * This is designed to be standalone for just prandom
+ * users, but for now we include it from <linux/random.h>
+@@ -120,22 +103,10 @@ unsigned long randomize_page(unsigned long start, unsigned long range);
+ #ifdef CONFIG_ARCH_RANDOM
+ # include <asm/archrandom.h>
+ #else
+-static inline bool __must_check arch_get_random_long(unsigned long *v)
+-{
+- return false;
+-}
+-static inline bool __must_check arch_get_random_int(unsigned int *v)
+-{
+- return false;
+-}
+-static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
+-{
+- return false;
+-}
+-static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
+-{
+- return false;
+-}
++static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; }
++static inline bool __must_check arch_get_random_int(unsigned int *v) { return false; }
++static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { return false; }
++static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { return false; }
+ #endif
+
+ /*
+@@ -158,4 +129,13 @@ static inline bool __init arch_get_random_long_early(unsigned long *v)
+ }
+ #endif
+
++#ifdef CONFIG_SMP
++int random_prepare_cpu(unsigned int cpu);
++int random_online_cpu(unsigned int cpu);
++#endif
++
++#ifndef MODULE
++extern const struct file_operations random_fops, urandom_fops;
++#endif
++
+ #endif /* _LINUX_RANDOM_H */
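The reworked declare_get_random_var_wait() now takes the return type separately from the getter name. Expanded by hand for u32, the generated helper is roughly:

/* What declare_get_random_var_wait(u32, u32) expands to, approximately. */
static inline int get_random_u32_wait(u32 *out)
{
        int ret = wait_for_random_bytes();

        if (unlikely(ret))
                return ret;
        *out = get_random_u32();
        return 0;
}

Sleepable callers use these wrappers instead of the plain getters so nothing is drawn from the generator before it is seeded.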
+diff --git a/include/linux/siphash.h b/include/linux/siphash.h
+index cce8a9acc76cb..3af1428da5597 100644
+--- a/include/linux/siphash.h
++++ b/include/linux/siphash.h
+@@ -138,4 +138,32 @@ static inline u32 hsiphash(const void *data, size_t len,
+ return ___hsiphash_aligned(data, len, key);
+ }
+
++/*
++ * These macros expose the raw SipHash and HalfSipHash permutations.
++ * Do not use them directly! If you think you have a use for them,
++ * be sure to CC the maintainer of this file explaining why.
++ */
++
++#define SIPHASH_PERMUTATION(a, b, c, d) ( \
++ (a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \
++ (c) += (d), (d) = rol64((d), 16), (d) ^= (c), \
++ (a) += (d), (d) = rol64((d), 21), (d) ^= (a), \
++ (c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32))
++
++#define SIPHASH_CONST_0 0x736f6d6570736575ULL
++#define SIPHASH_CONST_1 0x646f72616e646f6dULL
++#define SIPHASH_CONST_2 0x6c7967656e657261ULL
++#define SIPHASH_CONST_3 0x7465646279746573ULL
++
++#define HSIPHASH_PERMUTATION(a, b, c, d) ( \
++ (a) += (b), (b) = rol32((b), 5), (b) ^= (a), (a) = rol32((a), 16), \
++ (c) += (d), (d) = rol32((d), 8), (d) ^= (c), \
++ (a) += (d), (d) = rol32((d), 7), (d) ^= (a), \
++ (c) += (b), (b) = rol32((b), 13), (b) ^= (c), (c) = rol32((c), 16))
++
++#define HSIPHASH_CONST_0 0U
++#define HSIPHASH_CONST_1 0U
++#define HSIPHASH_CONST_2 0x6c796765U
++#define HSIPHASH_CONST_3 0x74656462U
++
+ #endif /* _LINUX_SIPHASH_H */
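SIPHASH_PERMUTATION() mutates its four state words in place via the comma-expression rounds above. Purely as an illustration of the calling convention — the header's warning stands, real users should call siphash()/hsiphash(), not the raw macros:

#include <linux/siphash.h>

static u64 one_siphash_round(void)
{
        u64 v0 = SIPHASH_CONST_0, v1 = SIPHASH_CONST_1;
        u64 v2 = SIPHASH_CONST_2, v3 = SIPHASH_CONST_3;

        SIPHASH_PERMUTATION(v0, v1, v2, v3);    /* updates v0..v3 in place */
        return v0 ^ v1 ^ v2 ^ v3;
}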
+diff --git a/include/linux/timex.h b/include/linux/timex.h
+index 059b18eb1f1fa..3871b06bd302c 100644
+--- a/include/linux/timex.h
++++ b/include/linux/timex.h
+@@ -62,6 +62,8 @@
+ #include <linux/types.h>
+ #include <linux/param.h>
+
++unsigned long random_get_entropy_fallback(void);
++
+ #include <asm/timex.h>
+
+ #ifndef random_get_entropy
+@@ -74,8 +76,14 @@
+ *
+ * By default we use get_cycles() for this purpose, but individual
+ * architectures may override this in their asm/timex.h header file.
++ * If a given arch does not have get_cycles(), then we fall back to
++ * using random_get_entropy_fallback().
+ */
+-#define random_get_entropy() get_cycles()
++#ifdef get_cycles
++#define random_get_entropy() ((unsigned long)get_cycles())
++#else
++#define random_get_entropy() random_get_entropy_fallback()
++#endif
+ #endif
+
+ /*
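With this hunk an architecture opts in simply by having asm/timex.h provide get_cycles() (or its own random_get_entropy()); everything else falls through to the timekeeping-based fallback. A sketch of the arch side, where read_cycle_counter() is a hypothetical stand-in for the real counter instruction:

/* asm/timex.h sketch for a hypothetical architecture. */
typedef unsigned long cycles_t;

static inline cycles_t get_cycles(void)
{
        return read_cycle_counter();    /* hypothetical arch primitive */
}
#define get_cycles get_cycles   /* lets the #ifdef in linux/timex.h see it */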
+diff --git a/include/trace/events/random.h b/include/trace/events/random.h
+deleted file mode 100644
+index a2d9aa16a5d7a..0000000000000
+--- a/include/trace/events/random.h
++++ /dev/null
+@@ -1,233 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-#undef TRACE_SYSTEM
+-#define TRACE_SYSTEM random
+-
+-#if !defined(_TRACE_RANDOM_H) || defined(TRACE_HEADER_MULTI_READ)
+-#define _TRACE_RANDOM_H
+-
+-#include <linux/writeback.h>
+-#include <linux/tracepoint.h>
+-
+-TRACE_EVENT(add_device_randomness,
+- TP_PROTO(int bytes, unsigned long IP),
+-
+- TP_ARGS(bytes, IP),
+-
+- TP_STRUCT__entry(
+- __field( int, bytes )
+- __field(unsigned long, IP )
+- ),
+-
+- TP_fast_assign(
+- __entry->bytes = bytes;
+- __entry->IP = IP;
+- ),
+-
+- TP_printk("bytes %d caller %pS",
+- __entry->bytes, (void *)__entry->IP)
+-);
+-
+-DECLARE_EVENT_CLASS(random__mix_pool_bytes,
+- TP_PROTO(int bytes, unsigned long IP),
+-
+- TP_ARGS(bytes, IP),
+-
+- TP_STRUCT__entry(
+- __field( int, bytes )
+- __field(unsigned long, IP )
+- ),
+-
+- TP_fast_assign(
+- __entry->bytes = bytes;
+- __entry->IP = IP;
+- ),
+-
+- TP_printk("input pool: bytes %d caller %pS",
+- __entry->bytes, (void *)__entry->IP)
+-);
+-
+-DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes,
+- TP_PROTO(int bytes, unsigned long IP),
+-
+- TP_ARGS(bytes, IP)
+-);
+-
+-DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
+- TP_PROTO(int bytes, unsigned long IP),
+-
+- TP_ARGS(bytes, IP)
+-);
+-
+-TRACE_EVENT(credit_entropy_bits,
+- TP_PROTO(int bits, int entropy_count, unsigned long IP),
+-
+- TP_ARGS(bits, entropy_count, IP),
+-
+- TP_STRUCT__entry(
+- __field( int, bits )
+- __field( int, entropy_count )
+- __field(unsigned long, IP )
+- ),
+-
+- TP_fast_assign(
+- __entry->bits = bits;
+- __entry->entropy_count = entropy_count;
+- __entry->IP = IP;
+- ),
+-
+- TP_printk("input pool: bits %d entropy_count %d caller %pS",
+- __entry->bits, __entry->entropy_count, (void *)__entry->IP)
+-);
+-
+-TRACE_EVENT(debit_entropy,
+- TP_PROTO(int debit_bits),
+-
+- TP_ARGS( debit_bits),
+-
+- TP_STRUCT__entry(
+- __field( int, debit_bits )
+- ),
+-
+- TP_fast_assign(
+- __entry->debit_bits = debit_bits;
+- ),
+-
+- TP_printk("input pool: debit_bits %d", __entry->debit_bits)
+-);
+-
+-TRACE_EVENT(add_input_randomness,
+- TP_PROTO(int input_bits),
+-
+- TP_ARGS(input_bits),
+-
+- TP_STRUCT__entry(
+- __field( int, input_bits )
+- ),
+-
+- TP_fast_assign(
+- __entry->input_bits = input_bits;
+- ),
+-
+- TP_printk("input_pool_bits %d", __entry->input_bits)
+-);
+-
+-TRACE_EVENT(add_disk_randomness,
+- TP_PROTO(dev_t dev, int input_bits),
+-
+- TP_ARGS(dev, input_bits),
+-
+- TP_STRUCT__entry(
+- __field( dev_t, dev )
+- __field( int, input_bits )
+- ),
+-
+- TP_fast_assign(
+- __entry->dev = dev;
+- __entry->input_bits = input_bits;
+- ),
+-
+- TP_printk("dev %d,%d input_pool_bits %d", MAJOR(__entry->dev),
+- MINOR(__entry->dev), __entry->input_bits)
+-);
+-
+-DECLARE_EVENT_CLASS(random__get_random_bytes,
+- TP_PROTO(int nbytes, unsigned long IP),
+-
+- TP_ARGS(nbytes, IP),
+-
+- TP_STRUCT__entry(
+- __field( int, nbytes )
+- __field(unsigned long, IP )
+- ),
+-
+- TP_fast_assign(
+- __entry->nbytes = nbytes;
+- __entry->IP = IP;
+- ),
+-
+- TP_printk("nbytes %d caller %pS", __entry->nbytes, (void *)__entry->IP)
+-);
+-
+-DEFINE_EVENT(random__get_random_bytes, get_random_bytes,
+- TP_PROTO(int nbytes, unsigned long IP),
+-
+- TP_ARGS(nbytes, IP)
+-);
+-
+-DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch,
+- TP_PROTO(int nbytes, unsigned long IP),
+-
+- TP_ARGS(nbytes, IP)
+-);
+-
+-DECLARE_EVENT_CLASS(random__extract_entropy,
+- TP_PROTO(int nbytes, int entropy_count, unsigned long IP),
+-
+- TP_ARGS(nbytes, entropy_count, IP),
+-
+- TP_STRUCT__entry(
+- __field( int, nbytes )
+- __field( int, entropy_count )
+- __field(unsigned long, IP )
+- ),
+-
+- TP_fast_assign(
+- __entry->nbytes = nbytes;
+- __entry->entropy_count = entropy_count;
+- __entry->IP = IP;
+- ),
+-
+- TP_printk("input pool: nbytes %d entropy_count %d caller %pS",
+- __entry->nbytes, __entry->entropy_count, (void *)__entry->IP)
+-);
+-
+-
+-DEFINE_EVENT(random__extract_entropy, extract_entropy,
+- TP_PROTO(int nbytes, int entropy_count, unsigned long IP),
+-
+- TP_ARGS(nbytes, entropy_count, IP)
+-);
+-
+-TRACE_EVENT(urandom_read,
+- TP_PROTO(int got_bits, int pool_left, int input_left),
+-
+- TP_ARGS(got_bits, pool_left, input_left),
+-
+- TP_STRUCT__entry(
+- __field( int, got_bits )
+- __field( int, pool_left )
+- __field( int, input_left )
+- ),
+-
+- TP_fast_assign(
+- __entry->got_bits = got_bits;
+- __entry->pool_left = pool_left;
+- __entry->input_left = input_left;
+- ),
+-
+- TP_printk("got_bits %d nonblocking_pool_entropy_left %d "
+- "input_entropy_left %d", __entry->got_bits,
+- __entry->pool_left, __entry->input_left)
+-);
+-
+-TRACE_EVENT(prandom_u32,
+-
+- TP_PROTO(unsigned int ret),
+-
+- TP_ARGS(ret),
+-
+- TP_STRUCT__entry(
+- __field( unsigned int, ret)
+- ),
+-
+- TP_fast_assign(
+- __entry->ret = ret;
+- ),
+-
+- TP_printk("ret=%u" , __entry->ret)
+-);
+-
+-#endif /* _TRACE_RANDOM_H */
+-
+-/* This part must be outside protection */
+-#include <trace/define_trace.h>
+diff --git a/init/main.c b/init/main.c
+index 9a5097b2251a5..0aa2e1c17b1c3 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -1035,21 +1035,18 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ softirq_init();
+ timekeeping_init();
+ kfence_init();
++ time_init();
+
+ /*
+ * For best initial stack canary entropy, prepare it after:
+ * - setup_arch() for any UEFI RNG entropy and boot cmdline access
+- * - timekeeping_init() for ktime entropy used in rand_initialize()
+- * - rand_initialize() to get any arch-specific entropy like RDRAND
+- * - add_latent_entropy() to get any latent entropy
+- * - adding command line entropy
++ * - timekeeping_init() for ktime entropy used in random_init()
++ * - time_init() for making random_get_entropy() work on some platforms
++ * - random_init() to initialize the RNG from early entropy sources
+ */
+- rand_initialize();
+- add_latent_entropy();
+- add_device_randomness(command_line, strlen(command_line));
++ random_init(command_line);
+ boot_init_stack_canary();
+
+- time_init();
+ perf_event_init();
+ profile_init();
+ call_function_init();
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 5601216eb51bd..da871eb075662 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -34,6 +34,7 @@
+ #include <linux/scs.h>
+ #include <linux/percpu-rwsem.h>
+ #include <linux/cpuset.h>
++#include <linux/random.h>
+
+ #include <trace/events/power.h>
+ #define CREATE_TRACE_POINTS
+@@ -1659,6 +1660,11 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ .startup.single = perf_event_init_cpu,
+ .teardown.single = perf_event_exit_cpu,
+ },
++ [CPUHP_RANDOM_PREPARE] = {
++ .name = "random:prepare",
++ .startup.single = random_prepare_cpu,
++ .teardown.single = NULL,
++ },
+ [CPUHP_WORKQUEUE_PREP] = {
+ .name = "workqueue:prepare",
+ .startup.single = workqueue_prepare_cpu,
+@@ -1782,6 +1788,11 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ .startup.single = workqueue_online_cpu,
+ .teardown.single = workqueue_offline_cpu,
+ },
++ [CPUHP_AP_RANDOM_ONLINE] = {
++ .name = "random:online",
++ .startup.single = random_online_cpu,
++ .teardown.single = NULL,
++ },
+ [CPUHP_AP_RCUTREE_ONLINE] = {
+ .name = "RCU/tree:online",
+ .startup.single = rcutree_online_cpu,
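random.c gets fixed slots in the table above because its prepare/online callbacks must run in a specific order relative to workqueues. Code without such ordering constraints registers dynamically instead; a sketch of that more common pattern, with the mydrv names as placeholders:

#include <linux/cpuhotplug.h>

static int mydrv_cpu_online(unsigned int cpu)
{
        /* per-CPU setup; runs on @cpu as it comes online */
        return 0;
}

static int __init mydrv_init(void)
{
        int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mydrv:online",
                                    mydrv_cpu_online, NULL);

        return ret < 0 ? ret : 0;       /* DYN returns the allocated state */
}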
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 3b1398fbddaf8..871c912860ed5 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -17,6 +17,7 @@
+ #include <linux/clocksource.h>
+ #include <linux/jiffies.h>
+ #include <linux/time.h>
++#include <linux/timex.h>
+ #include <linux/tick.h>
+ #include <linux/stop_machine.h>
+ #include <linux/pvclock_gtod.h>
+@@ -2380,6 +2381,20 @@ static int timekeeping_validate_timex(const struct __kernel_timex *txc)
+ return 0;
+ }
+
++/**
++ * random_get_entropy_fallback - Returns the raw clock source value,
++ * used by random.c for platforms with no valid random_get_entropy().
++ */
++unsigned long random_get_entropy_fallback(void)
++{
++ struct tk_read_base *tkr = &tk_core.timekeeper.tkr_mono;
++ struct clocksource *clock = READ_ONCE(tkr->clock);
++
++ if (unlikely(timekeeping_suspended || !clock))
++ return 0;
++ return clock->read(clock);
++}
++EXPORT_SYMBOL_GPL(random_get_entropy_fallback);
+
+ /**
+ * do_adjtimex() - Accessor function to NTP __do_adjtimex function
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 440fd666c16d1..c7dfe1000111d 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1566,8 +1566,7 @@ config WARN_ALL_UNSEEDED_RANDOM
+ so architecture maintainers really need to do what they can
+ to get the CRNG seeded sooner after the system is booted.
+ However, since users cannot do anything actionable to
+- address this, by default the kernel will issue only a single
+- warning for the first use of unseeded randomness.
++ address this, by default this option is disabled.
+
+ Say Y here if you want to receive warnings for all uses of
+ unseeded randomness. This will be of use primarily for
+diff --git a/lib/random32.c b/lib/random32.c
+index a57a0e18819d0..976632003ec65 100644
+--- a/lib/random32.c
++++ b/lib/random32.c
+@@ -41,7 +41,6 @@
+ #include <linux/bitops.h>
+ #include <linux/slab.h>
+ #include <asm/unaligned.h>
+-#include <trace/events/random.h>
+
+ /**
+ * prandom_u32_state - seeded pseudo-random number generator.
+@@ -387,7 +386,6 @@ u32 prandom_u32(void)
+ struct siprand_state *state = get_cpu_ptr(&net_rand_state);
+ u32 res = siprand_u32(state);
+
+- trace_prandom_u32(res);
+ put_cpu_ptr(&net_rand_state);
+ return res;
+ }
+@@ -553,9 +551,11 @@ static void prandom_reseed(struct timer_list *unused)
+ * To avoid worrying about whether it's safe to delay that interrupt
+ * long enough to seed all CPUs, just schedule an immediate timer event.
+ */
+-static void prandom_timer_start(struct random_ready_callback *unused)
++static int prandom_timer_start(struct notifier_block *nb,
++ unsigned long action, void *data)
+ {
+ mod_timer(&seed_timer, jiffies);
++ return 0;
+ }
+
+ #ifdef CONFIG_RANDOM32_SELFTEST
+@@ -619,13 +619,13 @@ core_initcall(prandom32_state_selftest);
+ */
+ static int __init prandom_init_late(void)
+ {
+- static struct random_ready_callback random_ready = {
+- .func = prandom_timer_start
++ static struct notifier_block random_ready = {
++ .notifier_call = prandom_timer_start
+ };
+- int ret = add_random_ready_callback(&random_ready);
++ int ret = register_random_ready_notifier(&random_ready);
+
+ if (ret == -EALREADY) {
+- prandom_timer_start(&random_ready);
++ prandom_timer_start(&random_ready, 0, NULL);
+ ret = 0;
+ }
+ return ret;
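The conversion above swaps the bespoke random_ready_callback list for a standard notifier chain; the -EALREADY branch covers the race where the RNG was seeded before registration, in which case the callback simply runs inline. Any other user of the 5.17.x backport API follows the same shape (a sketch):

static int myrng_ready(struct notifier_block *nb,
                       unsigned long action, void *data)
{
        /* RNG is seeded; kick off whatever work was deferred. */
        return 0;
}

static struct notifier_block myrng_nb = { .notifier_call = myrng_ready };

static int __init myrng_late_init(void)
{
        int ret = register_random_ready_notifier(&myrng_nb);

        if (ret == -EALREADY) {         /* already seeded: run it now */
                myrng_ready(&myrng_nb, 0, NULL);
                ret = 0;
        }
        return ret;
}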
+diff --git a/lib/siphash.c b/lib/siphash.c
+index 72b9068ab57bf..71d315a6ad623 100644
+--- a/lib/siphash.c
++++ b/lib/siphash.c
+@@ -18,19 +18,13 @@
+ #include <asm/word-at-a-time.h>
+ #endif
+
+-#define SIPROUND \
+- do { \
+- v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \
+- v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \
+- v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \
+- v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \
+- } while (0)
++#define SIPROUND SIPHASH_PERMUTATION(v0, v1, v2, v3)
+
+ #define PREAMBLE(len) \
+- u64 v0 = 0x736f6d6570736575ULL; \
+- u64 v1 = 0x646f72616e646f6dULL; \
+- u64 v2 = 0x6c7967656e657261ULL; \
+- u64 v3 = 0x7465646279746573ULL; \
++ u64 v0 = SIPHASH_CONST_0; \
++ u64 v1 = SIPHASH_CONST_1; \
++ u64 v2 = SIPHASH_CONST_2; \
++ u64 v3 = SIPHASH_CONST_3; \
+ u64 b = ((u64)(len)) << 56; \
+ v3 ^= key->key[1]; \
+ v2 ^= key->key[0]; \
+@@ -389,19 +383,13 @@ u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third,
+ }
+ EXPORT_SYMBOL(hsiphash_4u32);
+ #else
+-#define HSIPROUND \
+- do { \
+- v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \
+- v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \
+- v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \
+- v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \
+- } while (0)
++#define HSIPROUND HSIPHASH_PERMUTATION(v0, v1, v2, v3)
+
+ #define HPREAMBLE(len) \
+- u32 v0 = 0; \
+- u32 v1 = 0; \
+- u32 v2 = 0x6c796765U; \
+- u32 v3 = 0x74656462U; \
++ u32 v0 = HSIPHASH_CONST_0; \
++ u32 v1 = HSIPHASH_CONST_1; \
++ u32 v2 = HSIPHASH_CONST_2; \
++ u32 v3 = HSIPHASH_CONST_3; \
+ u32 b = ((u32)(len)) << 24; \
+ v3 ^= key->key[1]; \
+ v2 ^= key->key[0]; \
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index fbf261bbea950..35cc358f8daee 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -762,14 +762,16 @@ static void enable_ptr_key_workfn(struct work_struct *work)
+
+ static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn);
+
+-static void fill_random_ptr_key(struct random_ready_callback *unused)
++static int fill_random_ptr_key(struct notifier_block *nb,
++ unsigned long action, void *data)
+ {
+ /* This may be in an interrupt handler. */
+ queue_work(system_unbound_wq, &enable_ptr_key_work);
++ return 0;
+ }
+
+-static struct random_ready_callback random_ready = {
+- .func = fill_random_ptr_key
++static struct notifier_block random_ready = {
++ .notifier_call = fill_random_ptr_key
+ };
+
+ static int __init initialize_ptr_random(void)
+@@ -783,7 +785,7 @@ static int __init initialize_ptr_random(void)
+ return 0;
+ }
+
+- ret = add_random_ready_callback(&random_ready);
++ ret = register_random_ready_notifier(&random_ready);
+ if (!ret) {
+ return 0;
+ } else if (ret == -EALREADY) {
+diff --git a/mm/util.c b/mm/util.c
+index d3102081add00..5223d7e2f65ec 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -343,6 +343,38 @@ unsigned long randomize_stack_top(unsigned long stack_top)
+ #endif
+ }
+
++/**
++ * randomize_page - Generate a random, page aligned address
++ * @start: The smallest acceptable address the caller will take.
++ * @range: The size of the area, starting at @start, within which the
++ * random address must fall.
++ *
++ * If @start + @range would overflow, @range is capped.
++ *
++ * NOTE: Historical use of randomize_range, which this replaces, presumed that
++ * @start was already page aligned. We now align it regardless.
++ *
++ * Return: A page aligned address within [start, start + range). On error,
++ * @start is returned.
++ */
++unsigned long randomize_page(unsigned long start, unsigned long range)
++{
++ if (!PAGE_ALIGNED(start)) {
++ range -= PAGE_ALIGN(start) - start;
++ start = PAGE_ALIGN(start);
++ }
++
++ if (start > ULONG_MAX - range)
++ range = ULONG_MAX - start;
++
++ range >>= PAGE_SHIFT;
++
++ if (range == 0)
++ return start;
++
++ return start + (get_random_long() % range << PAGE_SHIFT);
++}
++
+ #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
+ unsigned long arch_randomize_brk(struct mm_struct *mm)
+ {
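Now that randomize_page() lives in mm/util.c next to the other layout helpers, arch code can use it to jitter any page-aligned base. An illustrative caller, picking an address within a 1 GiB window (the window size here is arbitrary):

#include <linux/mm.h>
#include <linux/sizes.h>

static unsigned long pick_random_base(unsigned long base)
{
        /* Page-aligned result in [base, base + SZ_1G); overflow is capped. */
        return randomize_page(base, SZ_1G);
}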
+diff --git a/sound/pci/ctxfi/ctatc.c b/sound/pci/ctxfi/ctatc.c
+index 78f35e88aed6b..fbdb8a3d5b8e5 100644
+--- a/sound/pci/ctxfi/ctatc.c
++++ b/sound/pci/ctxfi/ctatc.c
+@@ -36,6 +36,7 @@
+ | ((IEC958_AES3_CON_FS_48000) << 24))
+
+ static const struct snd_pci_quirk subsys_20k1_list[] = {
++ SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0021, "SB046x", CTSB046X),
+ SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0022, "SB055x", CTSB055X),
+ SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x002f, "SB055x", CTSB055X),
+ SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0029, "SB073x", CTSB073X),
+@@ -64,6 +65,7 @@ static const struct snd_pci_quirk subsys_20k2_list[] = {
+
+ static const char *ct_subsys_name[NUM_CTCARDS] = {
+ /* 20k1 models */
++ [CTSB046X] = "SB046x",
+ [CTSB055X] = "SB055x",
+ [CTSB073X] = "SB073x",
+ [CTUAA] = "UAA",
+diff --git a/sound/pci/ctxfi/cthardware.h b/sound/pci/ctxfi/cthardware.h
+index f406b626a28c4..2875cec83b8f2 100644
+--- a/sound/pci/ctxfi/cthardware.h
++++ b/sound/pci/ctxfi/cthardware.h
+@@ -26,8 +26,9 @@ enum CHIPTYP {
+
+ enum CTCARDS {
+ /* 20k1 models */
++ CTSB046X,
++ CT20K1_MODEL_FIRST = CTSB046X,
+ CTSB055X,
+- CT20K1_MODEL_FIRST = CTSB055X,
+ CTSB073X,
+ CTUAA,
+ CT20K1_UNKNOWN,
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-25 13:09 Mike Pagano
From: Mike Pagano @ 2022-05-25 13:09 UTC (permalink / raw
To: gentoo-commits
commit: e4ff21ff5a7c7b31bb0b352ce21cc66a9db973be
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 25 13:09:20 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 25 13:09:20 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e4ff21ff
Linux patch 5.17.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +++
1010_linux-5.17.11.patch | 76 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 80 insertions(+)
diff --git a/0000_README b/0000_README
index fcdbf704..8aed7c53 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-5.17.10.patch
From: http://www.kernel.org
Desc: Linux 5.17.10
+Patch: 1010_linux-5.17.11.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-5.17.11.patch b/1010_linux-5.17.11.patch
new file mode 100644
index 00000000..a85749fe
--- /dev/null
+++ b/1010_linux-5.17.11.patch
@@ -0,0 +1,76 @@
+diff --git a/Makefile b/Makefile
+index 318597a4147e0..b821f270a4ca6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index aec767ee047ab..46b343a0b17ea 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -442,7 +442,8 @@ struct mptcp_subflow_context {
+ rx_eof : 1,
+ can_ack : 1, /* only after processing the remote a key */
+ disposable : 1, /* ctx can be free at ulp release time */
+- stale : 1; /* unable to snd/rcv data, do not use for xmit */
++ stale : 1, /* unable to snd/rcv data, do not use for xmit */
++ valid_csum_seen : 1; /* at least one csum validated */
+ enum mptcp_data_avail data_avail;
+ u32 remote_nonce;
+ u64 thmac;
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 651f01d13191e..8d5ddf8e3ef71 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -913,11 +913,14 @@ static enum mapping_status validate_data_csum(struct sock *ssk, struct sk_buff *
+ subflow->map_data_csum);
+ if (unlikely(csum)) {
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DATACSUMERR);
+- subflow->send_mp_fail = 1;
+- MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPFAILTX);
++ if (subflow->mp_join || subflow->valid_csum_seen) {
++ subflow->send_mp_fail = 1;
++ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPFAILTX);
++ }
+ return subflow->mp_join ? MAPPING_INVALID : MAPPING_DUMMY;
+ }
+
++ subflow->valid_csum_seen = 1;
+ return MAPPING_OK;
+ }
+
+@@ -1099,6 +1102,18 @@ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ss
+ }
+ }
+
++static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
++{
++ struct mptcp_sock *msk = mptcp_sk(subflow->conn);
++
++ if (subflow->mp_join)
++ return false;
++ else if (READ_ONCE(msk->csum_enabled))
++ return !subflow->valid_csum_seen;
++ else
++ return !subflow->fully_established;
++}
++
+ static bool subflow_check_data_avail(struct sock *ssk)
+ {
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+@@ -1176,7 +1191,7 @@ fallback:
+ return true;
+ }
+
+- if (subflow->mp_join || subflow->fully_established) {
++ if (!subflow_can_fallback(subflow)) {
+ /* fatal protocol error, close the socket.
+ * subflow_error_report() will introduce the appropriate barriers
+ */
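The new subflow_can_fallback() predicate gates the fatal-error path on three facts about the subflow. Modeled as a standalone function so the decision table is easy to read (a sketch mirroring the hunk, not kernel code):

#include <stdbool.h>

static bool can_fallback(bool mp_join, bool csum_enabled,
                         bool valid_csum_seen, bool fully_established)
{
        if (mp_join)                    /* joined subflows never fall back */
                return false;
        if (csum_enabled)               /* checksums on: only before one validates */
                return !valid_csum_seen;
        return !fully_established;      /* otherwise: only before establishment */
}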
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-25 11:52 Mike Pagano
From: Mike Pagano @ 2022-05-25 11:52 UTC (permalink / raw
To: gentoo-commits
commit: 44e8c98697cab5673da8ef44c67d57fb8000bc08
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 25 11:52:26 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 25 11:52:26 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=44e8c986
Linux patch 5.17.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-5.17.10.patch | 8138 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8142 insertions(+)
diff --git a/0000_README b/0000_README
index ad4b906b..fcdbf704 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-5.17.9.patch
From: http://www.kernel.org
Desc: Linux 5.17.9
+Patch: 1009_linux-5.17.10.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-5.17.10.patch b/1009_linux-5.17.10.patch
new file mode 100644
index 00000000..ff23964b
--- /dev/null
+++ b/1009_linux-5.17.10.patch
@@ -0,0 +1,8138 @@
+diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
+index ea281dd755171..29b136849d300 100644
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -189,6 +189,9 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | Kryo4xx Silver | N/A | ARM64_ERRATUM_1024718 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Qualcomm Tech. | Kryo4xx Gold | N/A | ARM64_ERRATUM_1286807 |
+++----------------+-----------------+-----------------+-----------------------------+
++
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Fujitsu | A64FX | E#010001 | FUJITSU_ERRATUM_010001 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml b/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml
+index 57b68d6c7c70d..eb6e2f2dc9eb0 100644
+--- a/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml
+@@ -58,7 +58,7 @@ patternProperties:
+ $ref: "/schemas/types.yaml#/definitions/string"
+ enum: [ ADC0, ADC1, ADC10, ADC11, ADC12, ADC13, ADC14, ADC15, ADC2,
+ ADC3, ADC4, ADC5, ADC6, ADC7, ADC8, ADC9, BMCINT, EMMCG1, EMMCG4,
+- EMMCG8, ESPI, ESPIALT, FSI1, FSI2, FWSPIABR, FWSPID, FWQSPID, FWSPIWP,
++ EMMCG8, ESPI, ESPIALT, FSI1, FSI2, FWSPIABR, FWSPID, FWSPIWP,
+ GPIT0, GPIT1, GPIT2, GPIT3, GPIT4, GPIT5, GPIT6, GPIT7, GPIU0, GPIU1,
+ GPIU2, GPIU3, GPIU4, GPIU5, GPIU6, GPIU7, HVI3C3, HVI3C4, I2C1, I2C10,
+ I2C11, I2C12, I2C13, I2C14, I2C15, I2C16, I2C2, I2C3, I2C4, I2C5,
+diff --git a/Makefile b/Makefile
+index aba139bbd1c70..318597a4147e0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+index e4775bbceecc6..ac07c240419a2 100644
+--- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
+@@ -117,11 +117,6 @@
+ groups = "FWSPID";
+ };
+
+- pinctrl_fwqspid_default: fwqspid_default {
+- function = "FWSPID";
+- groups = "FWQSPID";
+- };
+-
+ pinctrl_fwspiwp_default: fwspiwp_default {
+ function = "FWSPIWP";
+ groups = "FWSPIWP";
+@@ -653,12 +648,12 @@
+ };
+
+ pinctrl_qspi1_default: qspi1_default {
+- function = "QSPI1";
++ function = "SPI1";
+ groups = "QSPI1";
+ };
+
+ pinctrl_qspi2_default: qspi2_default {
+- function = "QSPI2";
++ function = "SPI2";
+ groups = "QSPI2";
+ };
+
+diff --git a/arch/arm/boot/dts/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed-g6.dtsi
+index c32e87fad4dc9..aac55b3aeded4 100644
+--- a/arch/arm/boot/dts/aspeed-g6.dtsi
++++ b/arch/arm/boot/dts/aspeed-g6.dtsi
+@@ -389,6 +389,16 @@
+ reg = <0x1e6f2000 0x1000>;
+ };
+
++ video: video@1e700000 {
++ compatible = "aspeed,ast2600-video-engine";
++ reg = <0x1e700000 0x1000>;
++ clocks = <&syscon ASPEED_CLK_GATE_VCLK>,
++ <&syscon ASPEED_CLK_GATE_ECLK>;
++ clock-names = "vclk", "eclk";
++ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
++ status = "disabled";
++ };
++
+ gpio0: gpio@1e780000 {
+ #gpio-cells = <2>;
+ gpio-controller;
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index ee3f7a599181e..4bbd92d41031f 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -1040,7 +1040,7 @@ vector_bhb_loop8_\name:
+
+ @ bhb workaround
+ mov r0, #8
+-3: b . + 4
++3: W(b) . + 4
+ subs r0, r0, #1
+ bne 3b
+ dsb
+diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
+index 75e905508f279..f0c390e9d3cee 100644
+--- a/arch/arm/kernel/stacktrace.c
++++ b/arch/arm/kernel/stacktrace.c
+@@ -54,17 +54,17 @@ int notrace unwind_frame(struct stackframe *frame)
+ return -EINVAL;
+
+ frame->sp = frame->fp;
+- frame->fp = *(unsigned long *)(fp);
+- frame->pc = *(unsigned long *)(fp + 4);
++ frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
++ frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 4));
+ #else
+ /* check current frame pointer is within bounds */
+ if (fp < low + 12 || fp > high - 4)
+ return -EINVAL;
+
+ /* restore the registers from the stack frame */
+- frame->fp = *(unsigned long *)(fp - 12);
+- frame->sp = *(unsigned long *)(fp - 8);
+- frame->pc = *(unsigned long *)(fp - 4);
++ frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 12));
++ frame->sp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 8));
++ frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 4));
+ #endif
+ #ifdef CONFIG_KRETPROBES
+ if (is_kretprobe_trampoline(frame->pc))
+diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
+index 06dbfb968182d..fb9f3eb6bf483 100644
+--- a/arch/arm/mm/proc-v7-bugs.c
++++ b/arch/arm/mm/proc-v7-bugs.c
+@@ -288,6 +288,7 @@ void cpu_v7_ca15_ibe(void)
+ {
+ if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
+ cpu_v7_spectre_v2_init();
++ cpu_v7_spectre_bhb_init();
+ }
+
+ void cpu_v7_bugs_init(void)
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-mtp.dts b/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
+index fb99cc2827c76..7ab3627cc347d 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8250-mtp.dts
+@@ -622,6 +622,10 @@
+ status = "okay";
+ };
+
++&rxmacro {
++ status = "okay";
++};
++
+ &slpi {
+ status = "okay";
+ firmware-name = "qcom/sm8250/slpi.mbn";
+@@ -773,6 +777,8 @@
+ };
+
+ &swr1 {
++ status = "okay";
++
+ wcd_rx: wcd9380-rx@0,4 {
+ compatible = "sdw20217010d00";
+ reg = <0 4>;
+@@ -781,6 +787,8 @@
+ };
+
+ &swr2 {
++ status = "okay";
++
+ wcd_tx: wcd9380-tx@0,3 {
+ compatible = "sdw20217010d00";
+ reg = <0 3>;
+@@ -819,6 +827,10 @@
+ };
+ };
+
++&txmacro {
++ status = "okay";
++};
++
+ &uart12 {
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index a92230bec1ddb..bd212f6c351f1 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -2150,6 +2150,7 @@
+ pinctrl-0 = <&rx_swr_active>;
+ compatible = "qcom,sm8250-lpass-rx-macro";
+ reg = <0 0x3200000 0 0x1000>;
++ status = "disabled";
+
+ clocks = <&q6afecc LPASS_CLK_ID_TX_CORE_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>,
+ <&q6afecc LPASS_CLK_ID_TX_CORE_NPL_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>,
+@@ -2168,6 +2169,7 @@
+ swr1: soundwire-controller@3210000 {
+ reg = <0 0x3210000 0 0x2000>;
+ compatible = "qcom,soundwire-v1.5.1";
++ status = "disabled";
+ interrupts = <GIC_SPI 298 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&rxmacro>;
+ clock-names = "iface";
+@@ -2195,6 +2197,7 @@
+ pinctrl-0 = <&tx_swr_active>;
+ compatible = "qcom,sm8250-lpass-tx-macro";
+ reg = <0 0x3220000 0 0x1000>;
++ status = "disabled";
+
+ clocks = <&q6afecc LPASS_CLK_ID_TX_CORE_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>,
+ <&q6afecc LPASS_CLK_ID_TX_CORE_NPL_MCLK LPASS_CLK_ATTRIBUTE_COUPLE_NO>,
+@@ -2218,6 +2221,7 @@
+ compatible = "qcom,soundwire-v1.5.1";
+ interrupts-extended = <&intc GIC_SPI 297 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "core";
++ status = "disabled";
+
+ clocks = <&txmacro>;
+ clock-names = "iface";
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 146fa2e76834d..10c865e311a05 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -208,6 +208,8 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
+ #ifdef CONFIG_ARM64_ERRATUM_1286807
+ {
+ ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
++ /* Kryo4xx Gold (rcpe to rfpe) => (r0p0 to r3p0) */
++ ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
+ },
+ #endif
+ {},
+diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
+index f418ebc65f950..8a25b9df430ea 100644
+--- a/arch/arm64/kernel/mte.c
++++ b/arch/arm64/kernel/mte.c
+@@ -76,6 +76,9 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
+ mte_sync_page_tags(page, old_pte, check_swap,
+ pte_is_tagged);
+ }
++
++ /* ensure the tags are visible before the PTE is set */
++ smp_wmb();
+ }
+
+ int memcmp_pages(struct page *page1, struct page *page2)
+diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
+index 75fed4460407d..57c7c211f8c71 100644
+--- a/arch/arm64/kernel/paravirt.c
++++ b/arch/arm64/kernel/paravirt.c
+@@ -35,7 +35,7 @@ static u64 native_steal_clock(int cpu)
+ DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
+
+ struct pv_time_stolen_time_region {
+- struct pvclock_vcpu_stolen_time *kaddr;
++ struct pvclock_vcpu_stolen_time __rcu *kaddr;
+ };
+
+ static DEFINE_PER_CPU(struct pv_time_stolen_time_region, stolen_time_region);
+@@ -52,7 +52,9 @@ early_param("no-steal-acc", parse_no_stealacc);
+ /* return stolen time in ns by asking the hypervisor */
+ static u64 para_steal_clock(int cpu)
+ {
++ struct pvclock_vcpu_stolen_time *kaddr = NULL;
+ struct pv_time_stolen_time_region *reg;
++ u64 ret = 0;
+
+ reg = per_cpu_ptr(&stolen_time_region, cpu);
+
+@@ -61,28 +63,37 @@ static u64 para_steal_clock(int cpu)
+ * online notification callback runs. Until the callback
+ * has run we just return zero.
+ */
+- if (!reg->kaddr)
++ rcu_read_lock();
++ kaddr = rcu_dereference(reg->kaddr);
++ if (!kaddr) {
++ rcu_read_unlock();
+ return 0;
++ }
+
+- return le64_to_cpu(READ_ONCE(reg->kaddr->stolen_time));
++ ret = le64_to_cpu(READ_ONCE(kaddr->stolen_time));
++ rcu_read_unlock();
++ return ret;
+ }
+
+ static int stolen_time_cpu_down_prepare(unsigned int cpu)
+ {
++ struct pvclock_vcpu_stolen_time *kaddr = NULL;
+ struct pv_time_stolen_time_region *reg;
+
+ reg = this_cpu_ptr(&stolen_time_region);
+ if (!reg->kaddr)
+ return 0;
+
+- memunmap(reg->kaddr);
+- memset(reg, 0, sizeof(*reg));
++ kaddr = rcu_replace_pointer(reg->kaddr, NULL, true);
++ synchronize_rcu();
++ memunmap(kaddr);
+
+ return 0;
+ }
+
+ static int stolen_time_cpu_online(unsigned int cpu)
+ {
++ struct pvclock_vcpu_stolen_time *kaddr = NULL;
+ struct pv_time_stolen_time_region *reg;
+ struct arm_smccc_res res;
+
+@@ -93,17 +104,19 @@ static int stolen_time_cpu_online(unsigned int cpu)
+ if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
+ return -EINVAL;
+
+- reg->kaddr = memremap(res.a0,
++ kaddr = memremap(res.a0,
+ sizeof(struct pvclock_vcpu_stolen_time),
+ MEMREMAP_WB);
+
++ rcu_assign_pointer(reg->kaddr, kaddr);
++
+ if (!reg->kaddr) {
+ pr_warn("Failed to map stolen time data structure\n");
+ return -ENOMEM;
+ }
+
+- if (le32_to_cpu(reg->kaddr->revision) != 0 ||
+- le32_to_cpu(reg->kaddr->attributes) != 0) {
++ if (le32_to_cpu(kaddr->revision) != 0 ||
++ le32_to_cpu(kaddr->attributes) != 0) {
+ pr_warn_once("Unexpected revision or attributes in stolen time data\n");
+ return -ENXIO;
+ }
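The paravirt steal-time fix above is the standard RCU publish/retire idiom: readers dereference the pointer only under rcu_read_lock(), and teardown swaps it out with rcu_replace_pointer() and waits a grace period before unmapping. Reduced to a skeleton, with struct foo and its field as stand-ins:

#include <linux/rcupdate.h>
#include <linux/io.h>

struct foo { __le64 stolen_time; };
struct region { struct foo __rcu *kaddr; };

static u64 region_read(struct region *reg)
{
        struct foo *p;
        u64 val = 0;

        rcu_read_lock();
        p = rcu_dereference(reg->kaddr);
        if (p)
                val = le64_to_cpu(READ_ONCE(p->stolen_time));
        rcu_read_unlock();
        return val;
}

static void region_retire(struct region *reg)
{
        struct foo *p = rcu_replace_pointer(reg->kaddr, NULL, true);

        synchronize_rcu();      /* no reader can still hold the old pointer */
        if (p)
                memunmap(p);
}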
+diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
+index f0a3df9e18a32..413f899e4ac63 100644
+--- a/arch/arm64/kernel/relocate_kernel.S
++++ b/arch/arm64/kernel/relocate_kernel.S
+@@ -37,6 +37,15 @@
+ * safe memory that has been set up to be preserved during the copy operation.
+ */
+ SYM_CODE_START(arm64_relocate_new_kernel)
++ /*
++ * The kimage structure isn't allocated specially and may be clobbered
++ * during relocation. We must load any values we need from it prior to
++ * any relocation occurring.
++ */
++ ldr x28, [x0, #KIMAGE_START]
++ ldr x27, [x0, #KIMAGE_ARCH_EL2_VECTORS]
++ ldr x26, [x0, #KIMAGE_ARCH_DTB_MEM]
++
+ /* Setup the list loop variables. */
+ ldr x18, [x0, #KIMAGE_ARCH_ZERO_PAGE] /* x18 = zero page for BBM */
+ ldr x17, [x0, #KIMAGE_ARCH_TTBR1] /* x17 = linear map copy */
+@@ -72,21 +81,20 @@ SYM_CODE_START(arm64_relocate_new_kernel)
+ ic iallu
+ dsb nsh
+ isb
+- ldr x4, [x0, #KIMAGE_START] /* relocation start */
+- ldr x1, [x0, #KIMAGE_ARCH_EL2_VECTORS] /* relocation start */
+- ldr x0, [x0, #KIMAGE_ARCH_DTB_MEM] /* dtb address */
+ turn_off_mmu x12, x13
+
+ /* Start new image. */
+- cbz x1, .Lel1
+- mov x1, x4 /* relocation start */
+- mov x2, x0 /* dtb address */
++ cbz x27, .Lel1
++ mov x1, x28 /* kernel entry point */
++ mov x2, x26 /* dtb address */
+ mov x3, xzr
+ mov x4, xzr
+ mov x0, #HVC_SOFT_RESTART
+ hvc #0 /* Jumps from el2 */
+ .Lel1:
++ mov x0, x26 /* dtb address */
++ mov x1, xzr
+ mov x2, xzr
+ mov x3, xzr
+- br x4 /* Jumps from el1 */
++ br x28 /* Jumps from el1 */
+ SYM_CODE_END(arm64_relocate_new_kernel)
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 4dc2fba316fff..2ae664a3930fc 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1080,8 +1080,7 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
+- if (irqchip_in_kernel(vcpu->kvm) &&
+- vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
++ if (kvm_vgic_global_state.type == VGIC_V3) {
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_GIC);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), 1);
+ }
+diff --git a/arch/mips/lantiq/falcon/sysctrl.c b/arch/mips/lantiq/falcon/sysctrl.c
+index 64726c670ca64..5204fc6d6d502 100644
+--- a/arch/mips/lantiq/falcon/sysctrl.c
++++ b/arch/mips/lantiq/falcon/sysctrl.c
+@@ -167,6 +167,8 @@ static inline void clkdev_add_sys(const char *dev, unsigned int module,
+ {
+ struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+
++ if (!clk)
++ return;
+ clk->cl.dev_id = dev;
+ clk->cl.con_id = NULL;
+ clk->cl.clk = clk;
+diff --git a/arch/mips/lantiq/xway/gptu.c b/arch/mips/lantiq/xway/gptu.c
+index 3d5683e75cf1e..200fe9ff641d6 100644
+--- a/arch/mips/lantiq/xway/gptu.c
++++ b/arch/mips/lantiq/xway/gptu.c
+@@ -122,6 +122,8 @@ static inline void clkdev_add_gptu(struct device *dev, const char *con,
+ {
+ struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+
++ if (!clk)
++ return;
+ clk->cl.dev_id = dev_name(dev);
+ clk->cl.con_id = con;
+ clk->cl.clk = clk;
+diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
+index 917fac1636b71..084f6caba5f23 100644
+--- a/arch/mips/lantiq/xway/sysctrl.c
++++ b/arch/mips/lantiq/xway/sysctrl.c
+@@ -315,6 +315,8 @@ static void clkdev_add_pmu(const char *dev, const char *con, bool deactivate,
+ {
+ struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+
++ if (!clk)
++ return;
+ clk->cl.dev_id = dev;
+ clk->cl.con_id = con;
+ clk->cl.clk = clk;
+@@ -338,6 +340,8 @@ static void clkdev_add_cgu(const char *dev, const char *con,
+ {
+ struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
+
++ if (!clk)
++ return;
+ clk->cl.dev_id = dev;
+ clk->cl.con_id = con;
+ clk->cl.clk = clk;
+@@ -356,24 +360,28 @@ static void clkdev_add_pci(void)
+ struct clk *clk_ext = kzalloc(sizeof(struct clk), GFP_KERNEL);
+
+ /* main pci clock */
+- clk->cl.dev_id = "17000000.pci";
+- clk->cl.con_id = NULL;
+- clk->cl.clk = clk;
+- clk->rate = CLOCK_33M;
+- clk->rates = valid_pci_rates;
+- clk->enable = pci_enable;
+- clk->disable = pmu_disable;
+- clk->module = 0;
+- clk->bits = PMU_PCI;
+- clkdev_add(&clk->cl);
++ if (clk) {
++ clk->cl.dev_id = "17000000.pci";
++ clk->cl.con_id = NULL;
++ clk->cl.clk = clk;
++ clk->rate = CLOCK_33M;
++ clk->rates = valid_pci_rates;
++ clk->enable = pci_enable;
++ clk->disable = pmu_disable;
++ clk->module = 0;
++ clk->bits = PMU_PCI;
++ clkdev_add(&clk->cl);
++ }
+
+ /* use internal/external bus clock */
+- clk_ext->cl.dev_id = "17000000.pci";
+- clk_ext->cl.con_id = "external";
+- clk_ext->cl.clk = clk_ext;
+- clk_ext->enable = pci_ext_enable;
+- clk_ext->disable = pci_ext_disable;
+- clkdev_add(&clk_ext->cl);
++ if (clk_ext) {
++ clk_ext->cl.dev_id = "17000000.pci";
++ clk_ext->cl.con_id = "external";
++ clk_ext->cl.clk = clk_ext;
++ clk_ext->enable = pci_ext_enable;
++ clk_ext->disable = pci_ext_disable;
++ clkdev_add(&clk_ext->cl);
++ }
+ }
+
+ /* xway socs can generate clocks on gpio pins */
+@@ -393,9 +401,15 @@ static void clkdev_add_clkout(void)
+ char *name;
+
+ name = kzalloc(sizeof("clkout0"), GFP_KERNEL);
++ if (!name)
++ continue;
+ sprintf(name, "clkout%d", i);
+
+ clk = kzalloc(sizeof(struct clk), GFP_KERNEL);
++ if (!clk) {
++ kfree(name);
++ continue;
++ }
+ clk->cl.dev_id = "1f103000.cgu";
+ clk->cl.con_id = name;
+ clk->cl.clk = clk;
+diff --git a/arch/riscv/boot/dts/sifive/fu540-c000.dtsi b/arch/riscv/boot/dts/sifive/fu540-c000.dtsi
+index 3eef52b1a59b5..fd93fdadd28ca 100644
+--- a/arch/riscv/boot/dts/sifive/fu540-c000.dtsi
++++ b/arch/riscv/boot/dts/sifive/fu540-c000.dtsi
+@@ -167,7 +167,7 @@
+ clocks = <&prci PRCI_CLK_TLCLK>;
+ status = "disabled";
+ };
+- dma: dma@3000000 {
++ dma: dma-controller@3000000 {
+ compatible = "sifive,fu540-c000-pdma";
+ reg = <0x0 0x3000000 0x0 0x8000>;
+ interrupt-parent = <&plic0>;
+diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c
+index 2b780786fc689..ead721965b9f9 100644
+--- a/arch/s390/kernel/traps.c
++++ b/arch/s390/kernel/traps.c
+@@ -142,10 +142,10 @@ static inline void do_fp_trap(struct pt_regs *regs, __u32 fpc)
+ do_trap(regs, SIGFPE, si_code, "floating point exception");
+ }
+
+-static void translation_exception(struct pt_regs *regs)
++static void translation_specification_exception(struct pt_regs *regs)
+ {
+ /* May never happen. */
+- panic("Translation exception");
++ panic("Translation-Specification Exception");
+ }
+
+ static void illegal_op(struct pt_regs *regs)
+@@ -374,7 +374,7 @@ static void (*pgm_check_table[128])(struct pt_regs *regs) = {
+ [0x0f] = hfp_divide_exception,
+ [0x10] = do_dat_exception,
+ [0x11] = do_dat_exception,
+- [0x12] = translation_exception,
++ [0x12] = translation_specification_exception,
+ [0x13] = special_op_exception,
+ [0x14] = default_trap_handler,
+ [0x15] = operand_exception,
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 792f8e0f21789..5bcd9228db5fa 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -69,6 +69,7 @@ struct zpci_dev *get_zdev_by_fid(u32 fid)
+ list_for_each_entry(tmp, &zpci_list, entry) {
+ if (tmp->fid == fid) {
+ zdev = tmp;
++ zpci_zdev_get(zdev);
+ break;
+ }
+ }
+diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h
+index e359d2686178b..ecef3a9e16c00 100644
+--- a/arch/s390/pci/pci_bus.h
++++ b/arch/s390/pci/pci_bus.h
+@@ -19,7 +19,8 @@ void zpci_bus_remove_device(struct zpci_dev *zdev, bool set_error);
+ void zpci_release_device(struct kref *kref);
+ static inline void zpci_zdev_put(struct zpci_dev *zdev)
+ {
+- kref_put(&zdev->kref, zpci_release_device);
++ if (zdev)
++ kref_put(&zdev->kref, zpci_release_device);
+ }
+
+ static inline void zpci_zdev_get(struct zpci_dev *zdev)
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index be077b39da336..5011d27461fd3 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -22,6 +22,8 @@
+ #include <asm/clp.h>
+ #include <uapi/asm/clp.h>
+
++#include "pci_bus.h"
++
+ bool zpci_unique_uid;
+
+ void update_uid_checking(bool new)
+@@ -403,8 +405,11 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ return;
+
+ zdev = get_zdev_by_fid(entry->fid);
+- if (!zdev)
+- zpci_create_device(entry->fid, entry->fh, entry->config_state);
++ if (zdev) {
++ zpci_zdev_put(zdev);
++ return;
++ }
++ zpci_create_device(entry->fid, entry->fh, entry->config_state);
+ }
+
+ int clp_scan_pci_devices(void)
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index 2e3e5b2789257..ea9db5cea64e3 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -269,7 +269,7 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
+ pdev ? pci_name(pdev) : "n/a", ccdf->pec, ccdf->fid);
+
+ if (!pdev)
+- return;
++ goto no_pdev;
+
+ switch (ccdf->pec) {
+ case 0x003a: /* Service Action or Error Recovery Successful */
+@@ -286,6 +286,8 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
+ break;
+ }
+ pci_dev_put(pdev);
++no_pdev:
++ zpci_zdev_put(zdev);
+ }
+
+ void zpci_event_error(void *data)
+@@ -314,6 +316,7 @@ static void zpci_event_hard_deconfigured(struct zpci_dev *zdev, u32 fh)
+ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ {
+ struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
++ bool existing_zdev = !!zdev;
+ enum zpci_state state;
+
+ zpci_dbg(3, "avl fid:%x, fh:%x, pec:%x\n",
+@@ -378,6 +381,8 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ default:
+ break;
+ }
++ if (existing_zdev)
++ zpci_zdev_put(zdev);
+ }
+
+ void zpci_event_availability(void *data)
+diff --git a/arch/x86/crypto/chacha-avx512vl-x86_64.S b/arch/x86/crypto/chacha-avx512vl-x86_64.S
+index 946f74dd6fbaa..259383e1ad440 100644
+--- a/arch/x86/crypto/chacha-avx512vl-x86_64.S
++++ b/arch/x86/crypto/chacha-avx512vl-x86_64.S
+@@ -172,7 +172,7 @@ SYM_FUNC_START(chacha_2block_xor_avx512vl)
+ # xor remaining bytes from partial register into output
+ mov %rcx,%rax
+ and $0xf,%rcx
+- jz .Ldone8
++ jz .Ldone2
+ mov %rax,%r9
+ and $~0xf,%r9
+
+@@ -438,7 +438,7 @@ SYM_FUNC_START(chacha_4block_xor_avx512vl)
+ # xor remaining bytes from partial register into output
+ mov %rcx,%rax
+ and $0xf,%rcx
+- jz .Ldone8
++ jz .Ldone4
+ mov %rax,%r9
+ and $~0xf,%r9
+
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index e7cd16e1e0a0b..32333dfc85b6a 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5611,6 +5611,7 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
+ {
+ struct kvm_mmu_page *sp, *node;
+ int nr_zapped, batch = 0;
++ bool unstable;
+
+ restart:
+ list_for_each_entry_safe_reverse(sp, node,
+@@ -5642,11 +5643,12 @@ restart:
+ goto restart;
+ }
+
+- if (__kvm_mmu_prepare_zap_page(kvm, sp,
+- &kvm->arch.zapped_obsolete_pages, &nr_zapped)) {
+- batch += nr_zapped;
++ unstable = __kvm_mmu_prepare_zap_page(kvm, sp,
++ &kvm->arch.zapped_obsolete_pages, &nr_zapped);
++ batch += nr_zapped;
++
++ if (unstable)
+ goto restart;
+- }
+ }
+
+ /*
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index eca39f56c2315..0604bc29f0b8c 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -171,9 +171,12 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
+ return true;
+ }
+
+-static int cmp_u64(const void *a, const void *b)
++static int cmp_u64(const void *pa, const void *pb)
+ {
+- return *(__u64 *)a - *(__u64 *)b;
++ u64 a = *(u64 *)pa;
++ u64 b = *(u64 *)pb;
++
++ return (a > b) - (a < b);
+ }
+
+ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+diff --git a/arch/x86/um/shared/sysdep/syscalls_64.h b/arch/x86/um/shared/sysdep/syscalls_64.h
+index 48d6cd12f8a5e..b6b997225841c 100644
+--- a/arch/x86/um/shared/sysdep/syscalls_64.h
++++ b/arch/x86/um/shared/sysdep/syscalls_64.h
+@@ -10,13 +10,12 @@
+ #include <linux/msg.h>
+ #include <linux/shm.h>
+
+-typedef long syscall_handler_t(void);
++typedef long syscall_handler_t(long, long, long, long, long, long);
+
+ extern syscall_handler_t *sys_call_table[];
+
+ #define EXECUTE_SYSCALL(syscall, regs) \
+- (((long (*)(long, long, long, long, long, long)) \
+- (*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(&regs->regs), \
++ (((*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(&regs->regs), \
+ UPT_SYSCALL_ARG2(&regs->regs), \
+ UPT_SYSCALL_ARG3(&regs->regs), \
+ UPT_SYSCALL_ARG4(&regs->regs), \
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 3ed5eaf3446a2..6ed602b2f80a5 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -742,6 +742,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+
+ if (at_head) {
+ list_add(&rq->queuelist, &per_prio->dispatch);
++ rq->fifo_time = jiffies;
+ } else {
+ deadline_add_rq_rb(per_prio, rq);
+
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 478ba959362ce..416f4f48f69b0 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -171,7 +171,7 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
+ unsigned int set_size)
+ {
+ struct drbd_request *r;
+- struct drbd_request *req = NULL;
++ struct drbd_request *req = NULL, *tmp = NULL;
+ int expect_epoch = 0;
+ int expect_size = 0;
+
+@@ -225,8 +225,11 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
+ * to catch requests being barrier-acked "unexpectedly".
+ * It usually should find the same req again, or some READ preceding it. */
+ list_for_each_entry(req, &connection->transfer_log, tl_requests)
+- if (req->epoch == expect_epoch)
++ if (req->epoch == expect_epoch) {
++ tmp = req;
+ break;
++ }
++ req = list_prepare_entry(tmp, &connection->transfer_log, tl_requests);
+ list_for_each_entry_safe_from(req, r, &connection->transfer_log, tl_requests) {
+ if (req->epoch != expect_epoch)
+ break;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index a29cc2928be47..37b53d6779e41 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -509,8 +509,8 @@ static unsigned long fdc_busy;
+ static DECLARE_WAIT_QUEUE_HEAD(fdc_wait);
+ static DECLARE_WAIT_QUEUE_HEAD(command_done);
+
+-/* Errors during formatting are counted here. */
+-static int format_errors;
++/* errors encountered on the current (or last) request */
++static int floppy_errors;
+
+ /* Format request descriptor. */
+ static struct format_descr format_req;
+@@ -530,7 +530,6 @@ static struct format_descr format_req;
+ static char *floppy_track_buffer;
+ static int max_buffer_sectors;
+
+-static int *errors;
+ typedef void (*done_f)(int);
+ static const struct cont_t {
+ void (*interrupt)(void);
+@@ -1455,7 +1454,7 @@ static int interpret_errors(void)
+ if (drive_params[current_drive].flags & FTD_MSG)
+ DPRINT("Over/Underrun - retrying\n");
+ bad = 0;
+- } else if (*errors >= drive_params[current_drive].max_errors.reporting) {
++ } else if (floppy_errors >= drive_params[current_drive].max_errors.reporting) {
+ print_errors();
+ }
+ if (reply_buffer[ST2] & ST2_WC || reply_buffer[ST2] & ST2_BC)
+@@ -2095,7 +2094,7 @@ static void bad_flp_intr(void)
+ if (!next_valid_format(current_drive))
+ return;
+ }
+- err_count = ++(*errors);
++ err_count = ++floppy_errors;
+ INFBOUND(write_errors[current_drive].badness, err_count);
+ if (err_count > drive_params[current_drive].max_errors.abort)
+ cont->done(0);
+@@ -2241,9 +2240,8 @@ static int do_format(int drive, struct format_descr *tmp_format_req)
+ return -EINVAL;
+ }
+ format_req = *tmp_format_req;
+- format_errors = 0;
+ cont = &format_cont;
+- errors = &format_errors;
++ floppy_errors = 0;
+ ret = wait_til_done(redo_format, true);
+ if (ret == -EINTR)
+ return -EINTR;
+@@ -2761,10 +2759,11 @@ static int set_next_request(void)
+ current_req = list_first_entry_or_null(&floppy_reqs, struct request,
+ queuelist);
+ if (current_req) {
+- current_req->error_count = 0;
++ floppy_errors = 0;
+ list_del_init(&current_req->queuelist);
++ return 1;
+ }
+- return current_req != NULL;
++ return 0;
+ }
+
+ /* Starts or continues processing request. Will automatically unlock the
+@@ -2823,7 +2822,6 @@ do_request:
+ _floppy = floppy_type + drive_params[current_drive].autodetect[drive_state[current_drive].probed_format];
+ } else
+ probing = 0;
+- errors = &(current_req->error_count);
+ tmp = make_raw_rw_request();
+ if (tmp < 2) {
+ request_done(tmp);
+diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c
+index 23cc8297ec4c0..d429ba52a7190 100644
+--- a/drivers/clk/at91/clk-generated.c
++++ b/drivers/clk/at91/clk-generated.c
+@@ -117,6 +117,10 @@ static void clk_generated_best_diff(struct clk_rate_request *req,
+ tmp_rate = parent_rate;
+ else
+ tmp_rate = parent_rate / div;
++
++ if (tmp_rate < req->min_rate || tmp_rate > req->max_rate)
++ return;
++
+ tmp_diff = abs(req->rate - tmp_rate);
+
+ if (*best_diff < 0 || *best_diff >= tmp_diff) {
+diff --git a/drivers/crypto/qcom-rng.c b/drivers/crypto/qcom-rng.c
+index 11f30fd48c141..031b5f701a0a3 100644
+--- a/drivers/crypto/qcom-rng.c
++++ b/drivers/crypto/qcom-rng.c
+@@ -65,6 +65,7 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
+ } else {
+ /* copy only remaining bytes */
+ memcpy(data, &val, max - currsize);
++ break;
+ }
+ } while (currsize < max);
+
+diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c
+index be1bf39a317de..90a920e7f6642 100644
+--- a/drivers/crypto/stm32/stm32-crc32.c
++++ b/drivers/crypto/stm32/stm32-crc32.c
+@@ -384,8 +384,10 @@ static int stm32_crc_remove(struct platform_device *pdev)
+ struct stm32_crc *crc = platform_get_drvdata(pdev);
+ int ret = pm_runtime_get_sync(crc->dev);
+
+- if (ret < 0)
++ if (ret < 0) {
++ pm_runtime_put_noidle(crc->dev);
+ return ret;
++ }
+
+ spin_lock(&crc_list.lock);
+ list_del(&crc->list);
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index a6fc96e426687..0ad5039e49b63 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -407,6 +407,7 @@ static inline int is_dma_buf_file(struct file *file)
+
+ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
+ {
++ static atomic64_t dmabuf_inode = ATOMIC64_INIT(0);
+ struct file *file;
+ struct inode *inode = alloc_anon_inode(dma_buf_mnt->mnt_sb);
+
+@@ -416,6 +417,13 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
+ inode->i_size = dmabuf->size;
+ inode_set_bytes(inode, dmabuf->size);
+
++ /*
++ * The ->i_ino acquired from get_next_ino() is not unique thus
++ * not suitable for using it as dentry name by dmabuf stats.
++ * Override ->i_ino with the unique and dmabuffs specific
++ * value.
++ */
++ inode->i_ino = atomic64_add_return(1, &dmabuf_inode);
+ file = alloc_file_pseudo(inode, dma_buf_mnt, "dmabuf",
+ flags, &dma_buf_fops);
+ if (IS_ERR(file))
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index a2c8dd329b31b..2db19cd640a43 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -707,6 +707,9 @@ static int mvebu_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ unsigned long flags;
+ unsigned int on, off;
+
++ if (state->polarity != PWM_POLARITY_NORMAL)
++ return -EINVAL;
++
+ val = (unsigned long long) mvpwm->clk_rate * state->duty_cycle;
+ do_div(val, NSEC_PER_SEC);
+ if (val > UINT_MAX + 1ULL)
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 20780c35da1b4..23cddb265a0dc 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -125,9 +125,13 @@ static int vf610_gpio_direction_output(struct gpio_chip *chip, unsigned gpio,
+ {
+ struct vf610_gpio_port *port = gpiochip_get_data(chip);
+ unsigned long mask = BIT(gpio);
++ u32 val;
+
+- if (port->sdata && port->sdata->have_paddr)
+- vf610_gpio_writel(mask, port->gpio_base + GPIO_PDDR);
++ if (port->sdata && port->sdata->have_paddr) {
++ val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR);
++ val |= mask;
++ vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR);
++ }
+
+ vf610_gpio_set(chip, gpio, value);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+index 9a53a4de2bb7c..e8e8a74026159 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+@@ -1422,9 +1422,11 @@ static inline int amdgpu_acpi_smart_shift_update(struct drm_device *dev,
+
+ #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND)
+ bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev);
++bool amdgpu_acpi_should_gpu_reset(struct amdgpu_device *adev);
+ bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev);
+ #else
+ static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; }
++static inline bool amdgpu_acpi_should_gpu_reset(struct amdgpu_device *adev) { return false; }
+ static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; }
+ #endif
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 0e12315fa0cb8..98ac53ee6bb55 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -1045,6 +1045,20 @@ bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev)
+ (pm_suspend_target_state == PM_SUSPEND_MEM);
+ }
+
++/**
++ * amdgpu_acpi_should_gpu_reset
++ *
++ * @adev: amdgpu_device_pointer
++ *
++ * returns true if should reset GPU, false if not
++ */
++bool amdgpu_acpi_should_gpu_reset(struct amdgpu_device *adev)
++{
++ if (adev->flags & AMD_IS_APU)
++ return false;
++ return pm_suspend_target_state != PM_SUSPEND_TO_IDLE;
++}
++
+ /**
+ * amdgpu_acpi_is_s0ix_active
+ *
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index f09aeff513ee9..e2c422a6825c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2289,7 +2289,7 @@ static int amdgpu_pmops_suspend_noirq(struct device *dev)
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+ struct amdgpu_device *adev = drm_to_adev(drm_dev);
+
+- if (!adev->in_s0ix)
++ if (amdgpu_acpi_should_gpu_reset(adev))
+ return amdgpu_asic_reset(adev);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_init.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_init.c
+index d7559e5a99ce8..e708f07fe75af 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_init.c
+@@ -153,9 +153,4 @@ void dcn31_hw_sequencer_construct(struct dc *dc)
+ dc->hwss.init_hw = dcn20_fpga_init_hw;
+ dc->hwseq->funcs.init_pipes = NULL;
+ }
+- if (dc->debug.disable_z10) {
+- /*hw not support z10 or sw disable it*/
+- dc->hwss.z10_restore = NULL;
+- dc->hwss.z10_save_init = NULL;
+- }
+ }
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 8b3822142fed8..a1e5bb78f433c 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -4852,6 +4852,7 @@ static void fetch_monitor_name(struct drm_dp_mst_topology_mgr *mgr,
+
+ mst_edid = drm_dp_mst_get_edid(port->connector, mgr, port);
+ drm_edid_get_monitor_name(mst_edid, name, namelen);
++ kfree(mst_edid);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/i915/display/intel_dmc.c b/drivers/gpu/drm/i915/display/intel_dmc.c
+index a69b28d65a9ba..f005c793cadf6 100644
+--- a/drivers/gpu/drm/i915/display/intel_dmc.c
++++ b/drivers/gpu/drm/i915/display/intel_dmc.c
+@@ -367,6 +367,44 @@ static void dmc_set_fw_offset(struct intel_dmc *dmc,
+ }
+ }
+
++static bool dmc_mmio_addr_sanity_check(struct intel_dmc *dmc,
++ const u32 *mmioaddr, u32 mmio_count,
++ int header_ver, u8 dmc_id)
++{
++ struct drm_i915_private *i915 = container_of(dmc, typeof(*i915), dmc);
++ u32 start_range, end_range;
++ int i;
++
++ if (dmc_id >= DMC_FW_MAX) {
++ drm_warn(&i915->drm, "Unsupported firmware id %u\n", dmc_id);
++ return false;
++ }
++
++ if (header_ver == 1) {
++ start_range = DMC_MMIO_START_RANGE;
++ end_range = DMC_MMIO_END_RANGE;
++ } else if (dmc_id == DMC_FW_MAIN) {
++ start_range = TGL_MAIN_MMIO_START;
++ end_range = TGL_MAIN_MMIO_END;
++ } else if (DISPLAY_VER(i915) >= 13) {
++ start_range = ADLP_PIPE_MMIO_START;
++ end_range = ADLP_PIPE_MMIO_END;
++ } else if (DISPLAY_VER(i915) >= 12) {
++ start_range = TGL_PIPE_MMIO_START(dmc_id);
++ end_range = TGL_PIPE_MMIO_END(dmc_id);
++ } else {
++ drm_warn(&i915->drm, "Unknown mmio range for sanity check");
++ return false;
++ }
++
++ for (i = 0; i < mmio_count; i++) {
++ if (mmioaddr[i] < start_range || mmioaddr[i] > end_range)
++ return false;
++ }
++
++ return true;
++}
++
+ static u32 parse_dmc_fw_header(struct intel_dmc *dmc,
+ const struct intel_dmc_header_base *dmc_header,
+ size_t rem_size, u8 dmc_id)
+@@ -436,6 +474,12 @@ static u32 parse_dmc_fw_header(struct intel_dmc *dmc,
+ return 0;
+ }
+
++ if (!dmc_mmio_addr_sanity_check(dmc, mmioaddr, mmio_count,
++ dmc_header->header_ver, dmc_id)) {
++ drm_err(&i915->drm, "DMC firmware has Wrong MMIO Addresses\n");
++ return 0;
++ }
++
+ for (i = 0; i < mmio_count; i++) {
+ dmc_info->mmioaddr[i] = _MMIO(mmioaddr[i]);
+ dmc_info->mmiodata[i] = mmiodata[i];
+diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c
+index df10b6898987a..4a2662838cd8d 100644
+--- a/drivers/gpu/drm/i915/display/intel_opregion.c
++++ b/drivers/gpu/drm/i915/display/intel_opregion.c
+@@ -375,21 +375,6 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
+ return -EINVAL;
+ }
+
+- /*
+- * The port numbering and mapping here is bizarre. The now-obsolete
+- * swsci spec supports ports numbered [0..4]. Port E is handled as a
+- * special case, but port F and beyond are not. The functionality is
+- * supposed to be obsolete for new platforms. Just bail out if the port
+- * number is out of bounds after mapping.
+- */
+- if (port > 4) {
+- drm_dbg_kms(&dev_priv->drm,
+- "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
+- intel_encoder->base.base.id, intel_encoder->base.name,
+- port_name(intel_encoder->port), port);
+- return -EINVAL;
+- }
+-
+ if (!enable)
+ parm |= 4 << 8;
+
+diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
+index 7be0002d9d707..f577582ddd9fe 100644
+--- a/drivers/gpu/drm/i915/gt/intel_reset.c
++++ b/drivers/gpu/drm/i915/gt/intel_reset.c
+@@ -791,7 +791,7 @@ static int gt_reset(struct intel_gt *gt, intel_engine_mask_t stalled_mask)
+ __intel_engine_reset(engine, stalled_mask & engine->mask);
+ local_bh_enable();
+
+- intel_uc_reset(&gt->uc, true);
++ intel_uc_reset(&gt->uc, ALL_ENGINES);
+
+ intel_ggtt_restore_fences(gt->ggtt);
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+index 3aabe164c3291..e1fb8e1da128c 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.h
+@@ -417,7 +417,7 @@ int intel_guc_global_policies_update(struct intel_guc *guc);
+ void intel_guc_context_ban(struct intel_context *ce, struct i915_request *rq);
+
+ void intel_guc_submission_reset_prepare(struct intel_guc *guc);
+-void intel_guc_submission_reset(struct intel_guc *guc, bool stalled);
++void intel_guc_submission_reset(struct intel_guc *guc, intel_engine_mask_t stalled);
+ void intel_guc_submission_reset_finish(struct intel_guc *guc);
+ void intel_guc_submission_cancel_requests(struct intel_guc *guc);
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index 154ad726e266a..1e51a365833bb 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -1603,9 +1603,9 @@ __unwind_incomplete_requests(struct intel_context *ce)
+ spin_unlock_irqrestore(&sched_engine->lock, flags);
+ }
+
+-static void __guc_reset_context(struct intel_context *ce, bool stalled)
++static void __guc_reset_context(struct intel_context *ce, intel_engine_mask_t stalled)
+ {
+- bool local_stalled;
++ bool guilty;
+ struct i915_request *rq;
+ unsigned long flags;
+ u32 head;
+@@ -1647,7 +1647,7 @@ static void __guc_reset_context(struct intel_context *ce, bool stalled)
+ if (!intel_context_is_pinned(ce))
+ goto next_context;
+
+- local_stalled = false;
++ guilty = false;
+ rq = intel_context_find_active_request(ce);
+ if (!rq) {
+ head = ce->ring->tail;
+@@ -1655,14 +1655,14 @@ static void __guc_reset_context(struct intel_context *ce, bool stalled)
+ }
+
+ if (i915_request_started(rq))
+- local_stalled = true;
++ guilty = stalled & ce->engine->mask;
+
+ GEM_BUG_ON(i915_active_is_idle(&ce->active));
+ head = intel_ring_wrap(ce->ring, rq->head);
+
+- __i915_request_reset(rq, local_stalled && stalled);
++ __i915_request_reset(rq, guilty);
+ out_replay:
+- guc_reset_state(ce, head, local_stalled && stalled);
++ guc_reset_state(ce, head, guilty);
+ next_context:
+ if (i != number_children)
+ ce = list_next_entry(ce, parallel.child_link);
+@@ -1673,7 +1673,7 @@ out_put:
+ intel_context_put(parent);
+ }
+
+-void intel_guc_submission_reset(struct intel_guc *guc, bool stalled)
++void intel_guc_submission_reset(struct intel_guc *guc, intel_engine_mask_t stalled)
+ {
+ struct intel_context *ce;
+ unsigned long index;
+@@ -4042,7 +4042,7 @@ static void guc_context_replay(struct intel_context *ce)
+ {
+ struct i915_sched_engine *sched_engine = ce->engine->sched_engine;
+
+- __guc_reset_context(ce, true);
++ __guc_reset_context(ce, ce->engine->mask);
+ tasklet_hi_schedule(&sched_engine->tasklet);
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+index 09ed29df67bc9..cbfb5a01cc1da 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+@@ -592,7 +592,7 @@ sanitize:
+ __uc_sanitize(uc);
+ }
+
+-void intel_uc_reset(struct intel_uc *uc, bool stalled)
++void intel_uc_reset(struct intel_uc *uc, intel_engine_mask_t stalled)
+ {
+ struct intel_guc *guc = &uc->guc;
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.h b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
+index 866b462821c00..a8f38c2c60e23 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.h
+@@ -42,7 +42,7 @@ void intel_uc_driver_late_release(struct intel_uc *uc);
+ void intel_uc_driver_remove(struct intel_uc *uc);
+ void intel_uc_init_mmio(struct intel_uc *uc);
+ void intel_uc_reset_prepare(struct intel_uc *uc);
+-void intel_uc_reset(struct intel_uc *uc, bool stalled);
++void intel_uc_reset(struct intel_uc *uc, intel_engine_mask_t stalled);
+ void intel_uc_reset_finish(struct intel_uc *uc);
+ void intel_uc_cancel_requests(struct intel_uc *uc);
+ void intel_uc_suspend(struct intel_uc *uc);
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 4b8fee1be8ae5..f4ab46ae62143 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -7938,6 +7938,22 @@ enum {
+ /* MMIO address range for DMC program (0x80000 - 0x82FFF) */
+ #define DMC_MMIO_START_RANGE 0x80000
+ #define DMC_MMIO_END_RANGE 0x8FFFF
++#define DMC_V1_MMIO_START_RANGE 0x80000
++#define TGL_MAIN_MMIO_START 0x8F000
++#define TGL_MAIN_MMIO_END 0x8FFFF
++#define _TGL_PIPEA_MMIO_START 0x92000
++#define _TGL_PIPEA_MMIO_END 0x93FFF
++#define _TGL_PIPEB_MMIO_START 0x96000
++#define _TGL_PIPEB_MMIO_END 0x97FFF
++#define ADLP_PIPE_MMIO_START 0x5F000
++#define ADLP_PIPE_MMIO_END 0x5FFFF
++
++#define TGL_PIPE_MMIO_START(dmc_id) _PICK_EVEN(((dmc_id) - 1), _TGL_PIPEA_MMIO_START,\
++ _TGL_PIPEB_MMIO_START)
++
++#define TGL_PIPE_MMIO_END(dmc_id) _PICK_EVEN(((dmc_id) - 1), _TGL_PIPEA_MMIO_END,\
++ _TGL_PIPEB_MMIO_END)
++
+ #define SKL_DMC_DC3_DC5_COUNT _MMIO(0x80030)
+ #define SKL_DMC_DC5_DC6_COUNT _MMIO(0x8002C)
+ #define BXT_DMC_DC3_DC5_COUNT _MMIO(0x80038)
+diff --git a/drivers/i2c/busses/i2c-mt7621.c b/drivers/i2c/busses/i2c-mt7621.c
+index 45fe4a7fe0c03..901f0fb04fee4 100644
+--- a/drivers/i2c/busses/i2c-mt7621.c
++++ b/drivers/i2c/busses/i2c-mt7621.c
+@@ -304,7 +304,8 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+
+ if (i2c->bus_freq == 0) {
+ dev_warn(i2c->dev, "clock-frequency 0 not supported\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_disable_clk;
+ }
+
+ adap = &i2c->adap;
+@@ -322,10 +323,15 @@ static int mtk_i2c_probe(struct platform_device *pdev)
+
+ ret = i2c_add_adapter(adap);
+ if (ret < 0)
+- return ret;
++ goto err_disable_clk;
+
+ dev_info(&pdev->dev, "clock %u kHz\n", i2c->bus_freq / 1000);
+
++ return 0;
++
++err_disable_clk:
++ clk_disable_unprepare(i2c->clk);
++
+ return ret;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-piix4.c b/drivers/i2c/busses/i2c-piix4.c
+index 8c1b31ed0c429..ac8e7d60672a1 100644
+--- a/drivers/i2c/busses/i2c-piix4.c
++++ b/drivers/i2c/busses/i2c-piix4.c
+@@ -77,6 +77,7 @@
+
+ /* SB800 constants */
+ #define SB800_PIIX4_SMB_IDX 0xcd6
++#define SB800_PIIX4_SMB_MAP_SIZE 2
+
+ #define KERNCZ_IMC_IDX 0x3e
+ #define KERNCZ_IMC_DATA 0x3f
+@@ -97,6 +98,9 @@
+ #define SB800_PIIX4_PORT_IDX_MASK_KERNCZ 0x18
+ #define SB800_PIIX4_PORT_IDX_SHIFT_KERNCZ 3
+
++#define SB800_PIIX4_FCH_PM_ADDR 0xFED80300
++#define SB800_PIIX4_FCH_PM_SIZE 8
++
+ /* insmod parameters */
+
+ /* If force is set to anything different from 0, we forcibly enable the
+@@ -155,6 +159,12 @@ static const char *piix4_main_port_names_sb800[PIIX4_MAX_ADAPTERS] = {
+ };
+ static const char *piix4_aux_port_name_sb800 = " port 1";
+
++struct sb800_mmio_cfg {
++ void __iomem *addr;
++ struct resource *res;
++ bool use_mmio;
++};
++
+ struct i2c_piix4_adapdata {
+ unsigned short smba;
+
+@@ -162,8 +172,75 @@ struct i2c_piix4_adapdata {
+ bool sb800_main;
+ bool notify_imc;
+ u8 port; /* Port number, shifted */
++ struct sb800_mmio_cfg mmio_cfg;
+ };
+
++static int piix4_sb800_region_request(struct device *dev,
++ struct sb800_mmio_cfg *mmio_cfg)
++{
++ if (mmio_cfg->use_mmio) {
++ struct resource *res;
++ void __iomem *addr;
++
++ res = request_mem_region_muxed(SB800_PIIX4_FCH_PM_ADDR,
++ SB800_PIIX4_FCH_PM_SIZE,
++ "sb800_piix4_smb");
++ if (!res) {
++ dev_err(dev,
++ "SMBus base address memory region 0x%x already in use.\n",
++ SB800_PIIX4_FCH_PM_ADDR);
++ return -EBUSY;
++ }
++
++ addr = ioremap(SB800_PIIX4_FCH_PM_ADDR,
++ SB800_PIIX4_FCH_PM_SIZE);
++ if (!addr) {
++ release_resource(res);
++ dev_err(dev, "SMBus base address mapping failed.\n");
++ return -ENOMEM;
++ }
++
++ mmio_cfg->res = res;
++ mmio_cfg->addr = addr;
++
++ return 0;
++ }
++
++ if (!request_muxed_region(SB800_PIIX4_SMB_IDX, SB800_PIIX4_SMB_MAP_SIZE,
++ "sb800_piix4_smb")) {
++ dev_err(dev,
++ "SMBus base address index region 0x%x already in use.\n",
++ SB800_PIIX4_SMB_IDX);
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
++static void piix4_sb800_region_release(struct device *dev,
++ struct sb800_mmio_cfg *mmio_cfg)
++{
++ if (mmio_cfg->use_mmio) {
++ iounmap(mmio_cfg->addr);
++ release_resource(mmio_cfg->res);
++ return;
++ }
++
++ release_region(SB800_PIIX4_SMB_IDX, SB800_PIIX4_SMB_MAP_SIZE);
++}
++
++static bool piix4_sb800_use_mmio(struct pci_dev *PIIX4_dev)
++{
++ /*
++ * cd6h/cd7h port I/O accesses can be disabled on AMD processors
++ * w/ SMBus PCI revision ID 0x51 or greater. MMIO is supported on
++ * the same processors and is the recommended access method.
++ */
++ return (PIIX4_dev->vendor == PCI_VENDOR_ID_AMD &&
++ PIIX4_dev->device == PCI_DEVICE_ID_AMD_KERNCZ_SMBUS &&
++ PIIX4_dev->revision >= 0x51);
++}
++
+ static int piix4_setup(struct pci_dev *PIIX4_dev,
+ const struct pci_device_id *id)
+ {
+@@ -263,12 +340,61 @@ static int piix4_setup(struct pci_dev *PIIX4_dev,
+ return piix4_smba;
+ }
+
++static int piix4_setup_sb800_smba(struct pci_dev *PIIX4_dev,
++ u8 smb_en,
++ u8 aux,
++ u8 *smb_en_status,
++ unsigned short *piix4_smba)
++{
++ struct sb800_mmio_cfg mmio_cfg;
++ u8 smba_en_lo;
++ u8 smba_en_hi;
++ int retval;
++
++ mmio_cfg.use_mmio = piix4_sb800_use_mmio(PIIX4_dev);
++ retval = piix4_sb800_region_request(&PIIX4_dev->dev, &mmio_cfg);
++ if (retval)
++ return retval;
++
++ if (mmio_cfg.use_mmio) {
++ smba_en_lo = ioread8(mmio_cfg.addr);
++ smba_en_hi = ioread8(mmio_cfg.addr + 1);
++ } else {
++ outb_p(smb_en, SB800_PIIX4_SMB_IDX);
++ smba_en_lo = inb_p(SB800_PIIX4_SMB_IDX + 1);
++ outb_p(smb_en + 1, SB800_PIIX4_SMB_IDX);
++ smba_en_hi = inb_p(SB800_PIIX4_SMB_IDX + 1);
++ }
++
++ piix4_sb800_region_release(&PIIX4_dev->dev, &mmio_cfg);
++
++ if (!smb_en) {
++ *smb_en_status = smba_en_lo & 0x10;
++ *piix4_smba = smba_en_hi << 8;
++ if (aux)
++ *piix4_smba |= 0x20;
++ } else {
++ *smb_en_status = smba_en_lo & 0x01;
++ *piix4_smba = ((smba_en_hi << 8) | smba_en_lo) & 0xffe0;
++ }
++
++ if (!*smb_en_status) {
++ dev_err(&PIIX4_dev->dev,
++ "SMBus Host Controller not enabled!\n");
++ return -ENODEV;
++ }
++
++ return 0;
++}
++
+ static int piix4_setup_sb800(struct pci_dev *PIIX4_dev,
+ const struct pci_device_id *id, u8 aux)
+ {
+ unsigned short piix4_smba;
+- u8 smba_en_lo, smba_en_hi, smb_en, smb_en_status, port_sel;
++ u8 smb_en, smb_en_status, port_sel;
+ u8 i2ccfg, i2ccfg_offset = 0x10;
++ struct sb800_mmio_cfg mmio_cfg;
++ int retval;
+
+ /* SB800 and later SMBus does not support forcing address */
+ if (force || force_addr) {
+@@ -290,35 +416,11 @@ static int piix4_setup_sb800(struct pci_dev *PIIX4_dev,
+ else
+ smb_en = (aux) ? 0x28 : 0x2c;
+
+- if (!request_muxed_region(SB800_PIIX4_SMB_IDX, 2, "sb800_piix4_smb")) {
+- dev_err(&PIIX4_dev->dev,
+- "SMB base address index region 0x%x already in use.\n",
+- SB800_PIIX4_SMB_IDX);
+- return -EBUSY;
+- }
+-
+- outb_p(smb_en, SB800_PIIX4_SMB_IDX);
+- smba_en_lo = inb_p(SB800_PIIX4_SMB_IDX + 1);
+- outb_p(smb_en + 1, SB800_PIIX4_SMB_IDX);
+- smba_en_hi = inb_p(SB800_PIIX4_SMB_IDX + 1);
++ retval = piix4_setup_sb800_smba(PIIX4_dev, smb_en, aux, &smb_en_status,
++ &piix4_smba);
+
+- release_region(SB800_PIIX4_SMB_IDX, 2);
+-
+- if (!smb_en) {
+- smb_en_status = smba_en_lo & 0x10;
+- piix4_smba = smba_en_hi << 8;
+- if (aux)
+- piix4_smba |= 0x20;
+- } else {
+- smb_en_status = smba_en_lo & 0x01;
+- piix4_smba = ((smba_en_hi << 8) | smba_en_lo) & 0xffe0;
+- }
+-
+- if (!smb_en_status) {
+- dev_err(&PIIX4_dev->dev,
+- "SMBus Host Controller not enabled!\n");
+- return -ENODEV;
+- }
++ if (retval)
++ return retval;
+
+ if (acpi_check_region(piix4_smba, SMBIOSIZE, piix4_driver.name))
+ return -ENODEV;
+@@ -371,10 +473,11 @@ static int piix4_setup_sb800(struct pci_dev *PIIX4_dev,
+ piix4_port_shift_sb800 = SB800_PIIX4_PORT_IDX_SHIFT;
+ }
+ } else {
+- if (!request_muxed_region(SB800_PIIX4_SMB_IDX, 2,
+- "sb800_piix4_smb")) {
++ mmio_cfg.use_mmio = piix4_sb800_use_mmio(PIIX4_dev);
++ retval = piix4_sb800_region_request(&PIIX4_dev->dev, &mmio_cfg);
++ if (retval) {
+ release_region(piix4_smba, SMBIOSIZE);
+- return -EBUSY;
++ return retval;
+ }
+
+ outb_p(SB800_PIIX4_PORT_IDX_SEL, SB800_PIIX4_SMB_IDX);
+@@ -384,7 +487,7 @@ static int piix4_setup_sb800(struct pci_dev *PIIX4_dev,
+ SB800_PIIX4_PORT_IDX;
+ piix4_port_mask_sb800 = SB800_PIIX4_PORT_IDX_MASK;
+ piix4_port_shift_sb800 = SB800_PIIX4_PORT_IDX_SHIFT;
+- release_region(SB800_PIIX4_SMB_IDX, 2);
++ piix4_sb800_region_release(&PIIX4_dev->dev, &mmio_cfg);
+ }
+
+ dev_info(&PIIX4_dev->dev,
+@@ -662,6 +765,29 @@ static void piix4_imc_wakeup(void)
+ release_region(KERNCZ_IMC_IDX, 2);
+ }
+
++static int piix4_sb800_port_sel(u8 port, struct sb800_mmio_cfg *mmio_cfg)
++{
++ u8 smba_en_lo, val;
++
++ if (mmio_cfg->use_mmio) {
++ smba_en_lo = ioread8(mmio_cfg->addr + piix4_port_sel_sb800);
++ val = (smba_en_lo & ~piix4_port_mask_sb800) | port;
++ if (smba_en_lo != val)
++ iowrite8(val, mmio_cfg->addr + piix4_port_sel_sb800);
++
++ return (smba_en_lo & piix4_port_mask_sb800);
++ }
++
++ outb_p(piix4_port_sel_sb800, SB800_PIIX4_SMB_IDX);
++ smba_en_lo = inb_p(SB800_PIIX4_SMB_IDX + 1);
++
++ val = (smba_en_lo & ~piix4_port_mask_sb800) | port;
++ if (smba_en_lo != val)
++ outb_p(val, SB800_PIIX4_SMB_IDX + 1);
++
++ return (smba_en_lo & piix4_port_mask_sb800);
++}
++
+ /*
+ * Handles access to multiple SMBus ports on the SB800.
+ * The port is selected by bits 2:1 of the smb_en register (0x2c).
+@@ -678,12 +804,12 @@ static s32 piix4_access_sb800(struct i2c_adapter *adap, u16 addr,
+ unsigned short piix4_smba = adapdata->smba;
+ int retries = MAX_TIMEOUT;
+ int smbslvcnt;
+- u8 smba_en_lo;
+- u8 port;
++ u8 prev_port;
+ int retval;
+
+- if (!request_muxed_region(SB800_PIIX4_SMB_IDX, 2, "sb800_piix4_smb"))
+- return -EBUSY;
++ retval = piix4_sb800_region_request(&adap->dev, &adapdata->mmio_cfg);
++ if (retval)
++ return retval;
+
+ /* Request the SMBUS semaphore, avoid conflicts with the IMC */
+ smbslvcnt = inb_p(SMBSLVCNT);
+@@ -738,18 +864,12 @@ static s32 piix4_access_sb800(struct i2c_adapter *adap, u16 addr,
+ }
+ }
+
+- outb_p(piix4_port_sel_sb800, SB800_PIIX4_SMB_IDX);
+- smba_en_lo = inb_p(SB800_PIIX4_SMB_IDX + 1);
+-
+- port = adapdata->port;
+- if ((smba_en_lo & piix4_port_mask_sb800) != port)
+- outb_p((smba_en_lo & ~piix4_port_mask_sb800) | port,
+- SB800_PIIX4_SMB_IDX + 1);
++ prev_port = piix4_sb800_port_sel(adapdata->port, &adapdata->mmio_cfg);
+
+ retval = piix4_access(adap, addr, flags, read_write,
+ command, size, data);
+
+- outb_p(smba_en_lo, SB800_PIIX4_SMB_IDX + 1);
++ piix4_sb800_port_sel(prev_port, &adapdata->mmio_cfg);
+
+ /* Release the semaphore */
+ outb_p(smbslvcnt | 0x20, SMBSLVCNT);
+@@ -758,7 +878,7 @@ static s32 piix4_access_sb800(struct i2c_adapter *adap, u16 addr,
+ piix4_imc_wakeup();
+
+ release:
+- release_region(SB800_PIIX4_SMB_IDX, 2);
++ piix4_sb800_region_release(&adap->dev, &adapdata->mmio_cfg);
+ return retval;
+ }
+
+@@ -836,6 +956,7 @@ static int piix4_add_adapter(struct pci_dev *dev, unsigned short smba,
+ return -ENOMEM;
+ }
+
++ adapdata->mmio_cfg.use_mmio = piix4_sb800_use_mmio(dev);
+ adapdata->smba = smba;
+ adapdata->sb800_main = sb800_main;
+ adapdata->port = port << piix4_port_shift_sb800;
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index ccaeb24263854..ba246fabc6c17 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -47,6 +47,17 @@ static DEFINE_MUTEX(input_mutex);
+
+ static const struct input_value input_value_sync = { EV_SYN, SYN_REPORT, 1 };
+
++static const unsigned int input_max_code[EV_CNT] = {
++ [EV_KEY] = KEY_MAX,
++ [EV_REL] = REL_MAX,
++ [EV_ABS] = ABS_MAX,
++ [EV_MSC] = MSC_MAX,
++ [EV_SW] = SW_MAX,
++ [EV_LED] = LED_MAX,
++ [EV_SND] = SND_MAX,
++ [EV_FF] = FF_MAX,
++};
++
+ static inline int is_event_supported(unsigned int code,
+ unsigned long *bm, unsigned int max)
+ {
+@@ -2074,6 +2085,14 @@ EXPORT_SYMBOL(input_get_timestamp);
+ */
+ void input_set_capability(struct input_dev *dev, unsigned int type, unsigned int code)
+ {
++ if (type < EV_CNT && input_max_code[type] &&
++ code > input_max_code[type]) {
++ pr_err("%s: invalid code %u for type %u\n", __func__, code,
++ type);
++ dump_stack();
++ return;
++ }
++
+ switch (type) {
+ case EV_KEY:
+ __set_bit(code, dev->keybit);
+diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
+index 2bd407d86bae5..3a48262fb3d35 100644
+--- a/drivers/input/touchscreen/ili210x.c
++++ b/drivers/input/touchscreen/ili210x.c
+@@ -951,9 +951,9 @@ static int ili210x_i2c_probe(struct i2c_client *client,
+ if (error)
+ return error;
+
+- usleep_range(50, 100);
++ usleep_range(12000, 15000);
+ gpiod_set_value_cansleep(reset_gpio, 0);
+- msleep(100);
++ msleep(160);
+ }
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
+index bc11203c9cf78..72e0b767e1ba4 100644
+--- a/drivers/input/touchscreen/stmfts.c
++++ b/drivers/input/touchscreen/stmfts.c
+@@ -339,11 +339,11 @@ static int stmfts_input_open(struct input_dev *dev)
+
+ err = pm_runtime_get_sync(&sdata->client->dev);
+ if (err < 0)
+- return err;
++ goto out;
+
+ err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON);
+ if (err)
+- return err;
++ goto out;
+
+ mutex_lock(&sdata->mutex);
+ sdata->running = true;
+@@ -366,7 +366,9 @@ static int stmfts_input_open(struct input_dev *dev)
+ "failed to enable touchkey\n");
+ }
+
+- return 0;
++out:
++ pm_runtime_put_noidle(&sdata->client->dev);
++ return err;
+ }
+
+ static void stmfts_input_close(struct input_dev *dev)
+diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
+index 180d7e9d3400a..81c55bfd6e0c2 100644
+--- a/drivers/mmc/core/mmc_ops.c
++++ b/drivers/mmc/core/mmc_ops.c
+@@ -21,7 +21,7 @@
+
+ #define MMC_BKOPS_TIMEOUT_MS (120 * 1000) /* 120s */
+ #define MMC_SANITIZE_TIMEOUT_MS (240 * 1000) /* 240s */
+-#define MMC_OP_COND_PERIOD_US (1 * 1000) /* 1ms */
++#define MMC_OP_COND_PERIOD_US (4 * 1000) /* 4ms */
+ #define MMC_OP_COND_TIMEOUT_MS 1000 /* 1s */
+
+ static const u8 tuning_blk_pattern_4bit[] = {
+diff --git a/drivers/net/can/m_can/m_can_pci.c b/drivers/net/can/m_can/m_can_pci.c
+index b56a54d6c5a9c..8f184a852a0a7 100644
+--- a/drivers/net/can/m_can/m_can_pci.c
++++ b/drivers/net/can/m_can/m_can_pci.c
+@@ -18,14 +18,9 @@
+
+ #define M_CAN_PCI_MMIO_BAR 0
+
++#define M_CAN_CLOCK_FREQ_EHL 200000000
+ #define CTL_CSR_INT_CTL_OFFSET 0x508
+
+-struct m_can_pci_config {
+- const struct can_bittiming_const *bit_timing;
+- const struct can_bittiming_const *data_timing;
+- unsigned int clock_freq;
+-};
+-
+ struct m_can_pci_priv {
+ struct m_can_classdev cdev;
+
+@@ -89,40 +84,9 @@ static struct m_can_ops m_can_pci_ops = {
+ .read_fifo = iomap_read_fifo,
+ };
+
+-static const struct can_bittiming_const m_can_bittiming_const_ehl = {
+- .name = KBUILD_MODNAME,
+- .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */
+- .tseg1_max = 64,
+- .tseg2_min = 1, /* Time segment 2 = phase_seg2 */
+- .tseg2_max = 128,
+- .sjw_max = 128,
+- .brp_min = 1,
+- .brp_max = 512,
+- .brp_inc = 1,
+-};
+-
+-static const struct can_bittiming_const m_can_data_bittiming_const_ehl = {
+- .name = KBUILD_MODNAME,
+- .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */
+- .tseg1_max = 16,
+- .tseg2_min = 1, /* Time segment 2 = phase_seg2 */
+- .tseg2_max = 8,
+- .sjw_max = 4,
+- .brp_min = 1,
+- .brp_max = 32,
+- .brp_inc = 1,
+-};
+-
+-static const struct m_can_pci_config m_can_pci_ehl = {
+- .bit_timing = &m_can_bittiming_const_ehl,
+- .data_timing = &m_can_data_bittiming_const_ehl,
+- .clock_freq = 200000000,
+-};
+-
+ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ {
+ struct device *dev = &pci->dev;
+- const struct m_can_pci_config *cfg;
+ struct m_can_classdev *mcan_class;
+ struct m_can_pci_priv *priv;
+ void __iomem *base;
+@@ -150,8 +114,6 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ if (!mcan_class)
+ return -ENOMEM;
+
+- cfg = (const struct m_can_pci_config *)id->driver_data;
+-
+ priv = cdev_to_priv(mcan_class);
+
+ priv->base = base;
+@@ -163,9 +125,7 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ mcan_class->dev = &pci->dev;
+ mcan_class->net->irq = pci_irq_vector(pci, 0);
+ mcan_class->pm_clock_support = 1;
+- mcan_class->bit_timing = cfg->bit_timing;
+- mcan_class->data_timing = cfg->data_timing;
+- mcan_class->can.clock.freq = cfg->clock_freq;
++ mcan_class->can.clock.freq = id->driver_data;
+ mcan_class->ops = &m_can_pci_ops;
+
+ pci_set_drvdata(pci, mcan_class);
+@@ -218,8 +178,8 @@ static SIMPLE_DEV_PM_OPS(m_can_pci_pm_ops,
+ m_can_pci_suspend, m_can_pci_resume);
+
+ static const struct pci_device_id m_can_pci_id_table[] = {
+- { PCI_VDEVICE(INTEL, 0x4bc1), (kernel_ulong_t)&m_can_pci_ehl, },
+- { PCI_VDEVICE(INTEL, 0x4bc2), (kernel_ulong_t)&m_can_pci_ehl, },
++ { PCI_VDEVICE(INTEL, 0x4bc1), M_CAN_CLOCK_FREQ_EHL, },
++ { PCI_VDEVICE(INTEL, 0x4bc2), M_CAN_CLOCK_FREQ_EHL, },
+ { } /* Terminating Entry */
+ };
+ MODULE_DEVICE_TABLE(pci, m_can_pci_id_table);
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index 77e76c9efd32f..8201ce7adb777 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -346,7 +346,6 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ int budget)
+ {
+ struct net_device *ndev = aq_nic_get_ndev(self->aq_nic);
+- bool is_rsc_completed = true;
+ int err = 0;
+
+ for (; (self->sw_head != self->hw_head) && budget;
+@@ -364,12 +363,17 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ continue;
+
+ if (!buff->is_eop) {
++ unsigned int frag_cnt = 0U;
+ buff_ = buff;
+ do {
++ bool is_rsc_completed = true;
++
+ if (buff_->next >= self->size) {
+ err = -EIO;
+ goto err_exit;
+ }
++
++ frag_cnt++;
+ next_ = buff_->next,
+ buff_ = &self->buff_ring[next_];
+ is_rsc_completed =
+@@ -377,18 +381,17 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ next_,
+ self->hw_head);
+
+- if (unlikely(!is_rsc_completed))
+- break;
++ if (unlikely(!is_rsc_completed) ||
++ frag_cnt > MAX_SKB_FRAGS) {
++ err = 0;
++ goto err_exit;
++ }
+
+ buff->is_error |= buff_->is_error;
+ buff->is_cso_err |= buff_->is_cso_err;
+
+ } while (!buff_->is_eop);
+
+- if (!is_rsc_completed) {
+- err = 0;
+- goto err_exit;
+- }
+ if (buff->is_error ||
+ (buff->is_lro && buff->is_cso_err)) {
+ buff_ = buff;
+@@ -446,7 +449,7 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+ ALIGN(hdr_len, sizeof(long)));
+
+ if (buff->len - hdr_len > 0) {
+- skb_add_rx_frag(skb, 0, buff->rxdata.page,
++ skb_add_rx_frag(skb, i++, buff->rxdata.page,
+ buff->rxdata.pg_off + hdr_len,
+ buff->len - hdr_len,
+ AQ_CFG_RX_FRAME_MAX);
+@@ -455,7 +458,6 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
+
+ if (!buff->is_eop) {
+ buff_ = buff;
+- i = 1U;
+ do {
+ next_ = buff_->next;
+ buff_ = &self->buff_ring[next_];
+diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+index d875ce3ec759b..15ede7285fb5d 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
+@@ -889,6 +889,13 @@ int hw_atl_b0_hw_ring_tx_head_update(struct aq_hw_s *self,
+ err = -ENXIO;
+ goto err_exit;
+ }
++
++ /* Validate that the new hw_head_ is reasonable. */
++ if (hw_head_ >= ring->size) {
++ err = -ENXIO;
++ goto err_exit;
++ }
++
+ ring->hw_head = hw_head_;
+ err = aq_hw_err_from_flags(self);
+
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 60dde29974bfe..df51be3cbe069 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -2585,8 +2585,10 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+ device_set_wakeup_capable(&pdev->dev, 1);
+
+ priv->wol_clk = devm_clk_get_optional(&pdev->dev, "sw_sysportwol");
+- if (IS_ERR(priv->wol_clk))
+- return PTR_ERR(priv->wol_clk);
++ if (IS_ERR(priv->wol_clk)) {
++ ret = PTR_ERR(priv->wol_clk);
++ goto err_deregister_fixed_link;
++ }
+
+ /* Set the needed headroom once and for all */
+ BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index c4f4b13ac4691..c1100af5666b4 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1217,7 +1217,6 @@ static void gem_rx_refill(struct macb_queue *queue)
+ /* Make hw descriptor updates visible to CPU */
+ rmb();
+
+- queue->rx_prepared_head++;
+ desc = macb_rx_desc(queue, entry);
+
+ if (!queue->rx_skbuff[entry]) {
+@@ -1256,6 +1255,7 @@ static void gem_rx_refill(struct macb_queue *queue)
+ dma_wmb();
+ desc->addr &= ~MACB_BIT(RX_USED);
+ }
++ queue->rx_prepared_head++;
+ }
+
+ /* Make descriptor updates visible to hardware */
+diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c
+index 79df5a72877b8..0040dcaab9455 100644
+--- a/drivers/net/ethernet/dec/tulip/tulip_core.c
++++ b/drivers/net/ethernet/dec/tulip/tulip_core.c
+@@ -1399,8 +1399,10 @@ static int tulip_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ /* alloc_etherdev ensures aligned and zeroed private structures */
+ dev = alloc_etherdev (sizeof (*tp));
+- if (!dev)
++ if (!dev) {
++ pci_disable_device(pdev);
+ return -ENOMEM;
++ }
+
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ if (pci_resource_len (pdev, 0) < tulip_tbl[chip_idx].io_size) {
+@@ -1785,6 +1787,7 @@ err_out_free_res:
+
+ err_out_free_netdev:
+ free_netdev (dev);
++ pci_disable_device(pdev);
+ return -ENODEV;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 15bb6f001a04f..5f86cc1cfd09c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3207,8 +3207,8 @@ ice_vsi_rebuild_get_coalesce(struct ice_vsi *vsi,
+ ice_for_each_q_vector(vsi, i) {
+ struct ice_q_vector *q_vector = vsi->q_vectors[i];
+
+- coalesce[i].itr_tx = q_vector->tx.itr_setting;
+- coalesce[i].itr_rx = q_vector->rx.itr_setting;
++ coalesce[i].itr_tx = q_vector->tx.itr_settings;
++ coalesce[i].itr_rx = q_vector->rx.itr_settings;
+ coalesce[i].intrl = q_vector->intrl;
+
+ if (i < vsi->num_txq)
+@@ -3264,21 +3264,21 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
+ */
+ if (i < vsi->alloc_rxq && coalesce[i].rx_valid) {
+ rc = &vsi->q_vectors[i]->rx;
+- rc->itr_setting = coalesce[i].itr_rx;
++ rc->itr_settings = coalesce[i].itr_rx;
+ ice_write_itr(rc, rc->itr_setting);
+ } else if (i < vsi->alloc_rxq) {
+ rc = &vsi->q_vectors[i]->rx;
+- rc->itr_setting = coalesce[0].itr_rx;
++ rc->itr_settings = coalesce[0].itr_rx;
+ ice_write_itr(rc, rc->itr_setting);
+ }
+
+ if (i < vsi->alloc_txq && coalesce[i].tx_valid) {
+ rc = &vsi->q_vectors[i]->tx;
+- rc->itr_setting = coalesce[i].itr_tx;
++ rc->itr_settings = coalesce[i].itr_tx;
+ ice_write_itr(rc, rc->itr_setting);
+ } else if (i < vsi->alloc_txq) {
+ rc = &vsi->q_vectors[i]->tx;
+- rc->itr_setting = coalesce[0].itr_tx;
++ rc->itr_settings = coalesce[0].itr_tx;
+ ice_write_itr(rc, rc->itr_setting);
+ }
+
+@@ -3292,12 +3292,12 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
+ for (; i < vsi->num_q_vectors; i++) {
+ /* transmit */
+ rc = &vsi->q_vectors[i]->tx;
+- rc->itr_setting = coalesce[0].itr_tx;
++ rc->itr_settings = coalesce[0].itr_tx;
+ ice_write_itr(rc, rc->itr_setting);
+
+ /* receive */
+ rc = &vsi->q_vectors[i]->rx;
+- rc->itr_setting = coalesce[0].itr_rx;
++ rc->itr_settings = coalesce[0].itr_rx;
+ ice_write_itr(rc, rc->itr_setting);
+
+ vsi->q_vectors[i]->intrl = coalesce[0].intrl;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 7f6715eb862fe..30f055e1a92aa 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -5907,9 +5907,10 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ ice_ptp_link_change(pf, pf->hw.pf_id, true);
+ }
+
+- /* clear this now, and the first stats read will be used as baseline */
+- vsi->stat_offsets_loaded = false;
+-
++ /* Perform an initial read of the statistics registers now to
++ * set the baseline so counters are ready when interface is up
++ */
++ ice_update_eth_stats(vsi);
+ ice_service_task_schedule(pf);
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 45ae97b8b97db..836c67f1aa465 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -499,12 +499,19 @@ ice_ptp_read_src_clk_reg(struct ice_pf *pf, struct ptp_system_timestamp *sts)
+ * This function must be called periodically to ensure that the cached value
+ * is never more than 2 seconds old. It must also be called whenever the PHC
+ * time has been changed.
++ *
++ * Return:
++ * * 0 - OK, successfully updated
++ * * -EAGAIN - PF was busy, need to reschedule the update
+ */
+-static void ice_ptp_update_cached_phctime(struct ice_pf *pf)
++static int ice_ptp_update_cached_phctime(struct ice_pf *pf)
+ {
+ u64 systime;
+ int i;
+
++ if (test_and_set_bit(ICE_CFG_BUSY, pf->state))
++ return -EAGAIN;
++
+ /* Read the current PHC time */
+ systime = ice_ptp_read_src_clk_reg(pf, NULL);
+
+@@ -527,6 +534,9 @@ static void ice_ptp_update_cached_phctime(struct ice_pf *pf)
+ WRITE_ONCE(vsi->rx_rings[j]->cached_phctime, systime);
+ }
+ }
++ clear_bit(ICE_CFG_BUSY, pf->state);
++
++ return 0;
+ }
+
+ /**
+@@ -2322,17 +2332,18 @@ static void ice_ptp_periodic_work(struct kthread_work *work)
+ {
+ struct ice_ptp *ptp = container_of(work, struct ice_ptp, work.work);
+ struct ice_pf *pf = container_of(ptp, struct ice_pf, ptp);
++ int err;
+
+ if (!test_bit(ICE_FLAG_PTP, pf->flags))
+ return;
+
+- ice_ptp_update_cached_phctime(pf);
++ err = ice_ptp_update_cached_phctime(pf);
+
+ ice_ptp_tx_tstamp_cleanup(&pf->hw, &pf->ptp.port.tx);
+
+- /* Run twice a second */
++ /* Run twice a second or reschedule if phc update failed */
+ kthread_queue_delayed_work(ptp->kworker, &ptp->work,
+- msecs_to_jiffies(500));
++ msecs_to_jiffies(err ? 10 : 500));
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index b7b3bd4816f0d..ec4733272034f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -379,9 +379,14 @@ struct ice_ring_container {
+ /* this matches the maximum number of ITR bits, but in usec
+ * values, so it is shifted left one bit (bit zero is ignored)
+ */
+- u16 itr_setting:13;
+- u16 itr_reserved:2;
+- u16 itr_mode:1;
++ union {
++ struct {
++ u16 itr_setting:13;
++ u16 itr_reserved:2;
++ u16 itr_mode:1;
++ };
++ u16 itr_settings;
++ };
+ enum ice_container_type type;
+ };
+
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index c1e4ad65b02de..4e0abfe68cfdb 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -5512,7 +5512,8 @@ static void igb_watchdog_task(struct work_struct *work)
+ break;
+ }
+
+- if (adapter->link_speed != SPEED_1000)
++ if (adapter->link_speed != SPEED_1000 ||
++ !hw->phy.ops.read_reg)
+ goto no_wait;
+
+ /* wait for Remote receiver status OK */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 169e3524bb1c7..3500faf086710 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3829,6 +3829,10 @@ static netdev_features_t mlx5e_fix_uplink_rep_features(struct net_device *netdev
+ if (netdev->features & NETIF_F_NTUPLE)
+ netdev_warn(netdev, "Disabling ntuple, not supported in switchdev mode\n");
+
++ features &= ~NETIF_F_GRO_HW;
++ if (netdev->features & NETIF_F_GRO_HW)
++ netdev_warn(netdev, "Disabling HW_GRO, not supported in switchdev mode\n");
++
+ return features;
+ }
+
+@@ -3861,6 +3865,25 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
+ }
+ }
+
++ if (params->xdp_prog) {
++ if (features & NETIF_F_LRO) {
++ netdev_warn(netdev, "LRO is incompatible with XDP\n");
++ features &= ~NETIF_F_LRO;
++ }
++ if (features & NETIF_F_GRO_HW) {
++ netdev_warn(netdev, "HW GRO is incompatible with XDP\n");
++ features &= ~NETIF_F_GRO_HW;
++ }
++ }
++
++ if (priv->xsk.refcnt) {
++ if (features & NETIF_F_GRO_HW) {
++ netdev_warn(netdev, "HW GRO is incompatible with AF_XDP (%u XSKs are active)\n",
++ priv->xsk.refcnt);
++ features &= ~NETIF_F_GRO_HW;
++ }
++ }
++
+ if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) {
+ features &= ~NETIF_F_RXHASH;
+ if (netdev->features & NETIF_F_RXHASH)
+@@ -4805,10 +4828,6 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+ netdev->hw_features |= NETIF_F_HW_VLAN_STAG_TX;
+
+- if (!!MLX5_CAP_GEN(mdev, shampo) &&
+- mlx5e_check_fragmented_striding_rq_cap(mdev))
+- netdev->hw_features |= NETIF_F_GRO_HW;
+-
+ if (mlx5e_tunnel_any_tx_proto_supported(mdev)) {
+ netdev->hw_enc_features |= NETIF_F_HW_CSUM;
+ netdev->hw_enc_features |= NETIF_F_TSO;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 537c82b9aa530..b6f58d16d1453 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2656,28 +2656,6 @@ static void cleanup_root_ns(struct mlx5_flow_root_namespace *root_ns)
+ clean_tree(&root_ns->ns.node);
+ }
+
+-void mlx5_cleanup_fs(struct mlx5_core_dev *dev)
+-{
+- struct mlx5_flow_steering *steering = dev->priv.steering;
+-
+- cleanup_root_ns(steering->root_ns);
+- cleanup_root_ns(steering->fdb_root_ns);
+- steering->fdb_root_ns = NULL;
+- kfree(steering->fdb_sub_ns);
+- steering->fdb_sub_ns = NULL;
+- cleanup_root_ns(steering->port_sel_root_ns);
+- cleanup_root_ns(steering->sniffer_rx_root_ns);
+- cleanup_root_ns(steering->sniffer_tx_root_ns);
+- cleanup_root_ns(steering->rdma_rx_root_ns);
+- cleanup_root_ns(steering->rdma_tx_root_ns);
+- cleanup_root_ns(steering->egress_root_ns);
+- mlx5_cleanup_fc_stats(dev);
+- kmem_cache_destroy(steering->ftes_cache);
+- kmem_cache_destroy(steering->fgs_cache);
+- mlx5_ft_pool_destroy(dev);
+- kfree(steering);
+-}
+-
+ static int init_sniffer_tx_root_ns(struct mlx5_flow_steering *steering)
+ {
+ struct fs_prio *prio;
+@@ -3063,42 +3041,27 @@ cleanup:
+ return err;
+ }
+
+-int mlx5_init_fs(struct mlx5_core_dev *dev)
++void mlx5_fs_core_cleanup(struct mlx5_core_dev *dev)
+ {
+- struct mlx5_flow_steering *steering;
+- int err = 0;
+-
+- err = mlx5_init_fc_stats(dev);
+- if (err)
+- return err;
+-
+- err = mlx5_ft_pool_init(dev);
+- if (err)
+- return err;
+-
+- steering = kzalloc(sizeof(*steering), GFP_KERNEL);
+- if (!steering) {
+- err = -ENOMEM;
+- goto err;
+- }
+-
+- steering->dev = dev;
+- dev->priv.steering = steering;
++ struct mlx5_flow_steering *steering = dev->priv.steering;
+
+- if (mlx5_fs_dr_is_supported(dev))
+- steering->mode = MLX5_FLOW_STEERING_MODE_SMFS;
+- else
+- steering->mode = MLX5_FLOW_STEERING_MODE_DMFS;
++ cleanup_root_ns(steering->root_ns);
++ cleanup_root_ns(steering->fdb_root_ns);
++ steering->fdb_root_ns = NULL;
++ kfree(steering->fdb_sub_ns);
++ steering->fdb_sub_ns = NULL;
++ cleanup_root_ns(steering->port_sel_root_ns);
++ cleanup_root_ns(steering->sniffer_rx_root_ns);
++ cleanup_root_ns(steering->sniffer_tx_root_ns);
++ cleanup_root_ns(steering->rdma_rx_root_ns);
++ cleanup_root_ns(steering->rdma_tx_root_ns);
++ cleanup_root_ns(steering->egress_root_ns);
++}
+
+- steering->fgs_cache = kmem_cache_create("mlx5_fs_fgs",
+- sizeof(struct mlx5_flow_group), 0,
+- 0, NULL);
+- steering->ftes_cache = kmem_cache_create("mlx5_fs_ftes", sizeof(struct fs_fte), 0,
+- 0, NULL);
+- if (!steering->ftes_cache || !steering->fgs_cache) {
+- err = -ENOMEM;
+- goto err;
+- }
++int mlx5_fs_core_init(struct mlx5_core_dev *dev)
++{
++ struct mlx5_flow_steering *steering = dev->priv.steering;
++ int err = 0;
+
+ if ((((MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_ETH) &&
+ (MLX5_CAP_GEN(dev, nic_flow_table))) ||
+@@ -3157,8 +3120,64 @@ int mlx5_init_fs(struct mlx5_core_dev *dev)
+ }
+
+ return 0;
++
++err:
++ mlx5_fs_core_cleanup(dev);
++ return err;
++}
++
++void mlx5_fs_core_free(struct mlx5_core_dev *dev)
++{
++ struct mlx5_flow_steering *steering = dev->priv.steering;
++
++ kmem_cache_destroy(steering->ftes_cache);
++ kmem_cache_destroy(steering->fgs_cache);
++ kfree(steering);
++ mlx5_ft_pool_destroy(dev);
++ mlx5_cleanup_fc_stats(dev);
++}
++
++int mlx5_fs_core_alloc(struct mlx5_core_dev *dev)
++{
++ struct mlx5_flow_steering *steering;
++ int err = 0;
++
++ err = mlx5_init_fc_stats(dev);
++ if (err)
++ return err;
++
++ err = mlx5_ft_pool_init(dev);
++ if (err)
++ goto err;
++
++ steering = kzalloc(sizeof(*steering), GFP_KERNEL);
++ if (!steering) {
++ err = -ENOMEM;
++ goto err;
++ }
++
++ steering->dev = dev;
++ dev->priv.steering = steering;
++
++ if (mlx5_fs_dr_is_supported(dev))
++ steering->mode = MLX5_FLOW_STEERING_MODE_SMFS;
++ else
++ steering->mode = MLX5_FLOW_STEERING_MODE_DMFS;
++
++ steering->fgs_cache = kmem_cache_create("mlx5_fs_fgs",
++ sizeof(struct mlx5_flow_group), 0,
++ 0, NULL);
++ steering->ftes_cache = kmem_cache_create("mlx5_fs_ftes", sizeof(struct fs_fte), 0,
++ 0, NULL);
++ if (!steering->ftes_cache || !steering->fgs_cache) {
++ err = -ENOMEM;
++ goto err;
++ }
++
++ return 0;
++
+ err:
+- mlx5_cleanup_fs(dev);
++ mlx5_fs_core_free(dev);
+ return err;
+ }
+
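The fs_core refactor above splits the old mlx5_init_fs()/mlx5_cleanup_fs() pair into a software-only mlx5_fs_core_alloc()/mlx5_fs_core_free() stage (fc stats, FT pool, kmem caches, the steering struct itself) and a hardware-facing mlx5_fs_core_init()/mlx5_fs_core_cleanup() stage (the root namespaces), so allocation can happen once in mlx5_init_once() while init/cleanup run on every load/unload cycle. A minimal hosted-C sketch of that two-level lifecycle follows; the struct and function names are invented for illustration, not taken from the driver.

#include <stdlib.h>

struct steering { void *caches; int hw_ready; };	/* illustrative */

static int steering_alloc(struct steering **out)
{
	*out = calloc(1, sizeof(**out));	/* software state only */
	return *out ? 0 : -1;
}

static int steering_init(struct steering *s) { s->hw_ready = 1; return 0; }
static void steering_cleanup(struct steering *s) { s->hw_ready = 0; }
static void steering_free(struct steering *s) { free(s); }

int main(void)
{
	struct steering *s;

	if (steering_alloc(&s))	/* once, at device init */
		return 1;
	steering_init(s);	/* each load */
	steering_cleanup(s);	/* each unload */
	steering_init(s);	/* a reload reuses the allocation */
	steering_cleanup(s);
	steering_free(s);	/* once, at device remove */
	return 0;
}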
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+index 5469b08d635f1..6366bf50a564b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+@@ -293,8 +293,10 @@ int mlx5_flow_namespace_set_peer(struct mlx5_flow_root_namespace *ns,
+ int mlx5_flow_namespace_set_mode(struct mlx5_flow_namespace *ns,
+ enum mlx5_flow_steering_mode mode);
+
+-int mlx5_init_fs(struct mlx5_core_dev *dev);
+-void mlx5_cleanup_fs(struct mlx5_core_dev *dev);
++int mlx5_fs_core_alloc(struct mlx5_core_dev *dev);
++void mlx5_fs_core_free(struct mlx5_core_dev *dev);
++int mlx5_fs_core_init(struct mlx5_core_dev *dev);
++void mlx5_fs_core_cleanup(struct mlx5_core_dev *dev);
+
+ int mlx5_fs_egress_acls_init(struct mlx5_core_dev *dev, int total_vports);
+ void mlx5_fs_egress_acls_cleanup(struct mlx5_core_dev *dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 862f5b7cb2106..1c771287bee53 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -8,7 +8,8 @@
+ enum {
+ MLX5_FW_RESET_FLAGS_RESET_REQUESTED,
+ MLX5_FW_RESET_FLAGS_NACK_RESET_REQUEST,
+- MLX5_FW_RESET_FLAGS_PENDING_COMP
++ MLX5_FW_RESET_FLAGS_PENDING_COMP,
++ MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS
+ };
+
+ struct mlx5_fw_reset {
+@@ -165,7 +166,10 @@ static void poll_sync_reset(struct timer_list *t)
+
+ if (fatal_error) {
+ mlx5_core_warn(dev, "Got Device Reset\n");
+- queue_work(fw_reset->wq, &fw_reset->reset_reload_work);
++ if (!test_bit(MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS, &fw_reset->reset_flags))
++ queue_work(fw_reset->wq, &fw_reset->reset_reload_work);
++ else
++ mlx5_core_err(dev, "Device is being removed, Drop new reset work\n");
+ return;
+ }
+
+@@ -390,9 +394,12 @@ static int fw_reset_event_notifier(struct notifier_block *nb, unsigned long acti
+ struct mlx5_fw_reset *fw_reset = mlx5_nb_cof(nb, struct mlx5_fw_reset, nb);
+ struct mlx5_eqe *eqe = data;
+
++ if (test_bit(MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS, &fw_reset->reset_flags))
++ return NOTIFY_DONE;
++
+ switch (eqe->sub_type) {
+ case MLX5_GENERAL_SUBTYPE_FW_LIVE_PATCH_EVENT:
+- queue_work(fw_reset->wq, &fw_reset->fw_live_patch_work);
++ queue_work(fw_reset->wq, &fw_reset->fw_live_patch_work);
+ break;
+ case MLX5_GENERAL_SUBTYPE_PCI_SYNC_FOR_FW_UPDATE_EVENT:
+ mlx5_sync_reset_events_handle(fw_reset, eqe);
+@@ -436,6 +443,18 @@ void mlx5_fw_reset_events_stop(struct mlx5_core_dev *dev)
+ mlx5_eq_notifier_unregister(dev, &dev->priv.fw_reset->nb);
+ }
+
++void mlx5_drain_fw_reset(struct mlx5_core_dev *dev)
++{
++ struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
++
++ set_bit(MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS, &fw_reset->reset_flags);
++ cancel_work_sync(&fw_reset->fw_live_patch_work);
++ cancel_work_sync(&fw_reset->reset_request_work);
++ cancel_work_sync(&fw_reset->reset_reload_work);
++ cancel_work_sync(&fw_reset->reset_now_work);
++ cancel_work_sync(&fw_reset->reset_abort_work);
++}
++
+ int mlx5_fw_reset_init(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_fw_reset *fw_reset = kzalloc(sizeof(*fw_reset), GFP_KERNEL);
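mlx5_drain_fw_reset() above is the usual two-step teardown drain: first set a flag that makes the event handlers refuse to queue new work (poll_sync_reset() and fw_reset_event_notifier() both check MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS), then cancel_work_sync() every work item so nothing is left running or pending when the call returns. A compact kernel-style sketch of the pattern, with invented names:

#include <linux/bitops.h>
#include <linux/workqueue.h>

struct resetter {			/* illustrative, not the driver's type */
	unsigned long flags;		/* bit 0: drop new requests */
	struct work_struct work;
};

static void resetter_event(struct resetter *r)
{
	if (test_bit(0, &r->flags))	/* draining: refuse new work */
		return;
	schedule_work(&r->work);
}

static void resetter_drain(struct resetter *r)
{
	set_bit(0, &r->flags);		/* 1. stop accepting new work */
	cancel_work_sync(&r->work);	/* 2. wait out anything in flight */
}

The order matters: setting the flag before cancelling closes the window in which a handler could re-queue the work after the cancel.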
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
+index 7761ee5fc7d0a..372046e173e78 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.h
+@@ -15,6 +15,7 @@ int mlx5_fw_reset_set_live_patch(struct mlx5_core_dev *dev);
+ int mlx5_fw_reset_wait_reset_done(struct mlx5_core_dev *dev);
+ void mlx5_fw_reset_events_start(struct mlx5_core_dev *dev);
+ void mlx5_fw_reset_events_stop(struct mlx5_core_dev *dev);
++void mlx5_drain_fw_reset(struct mlx5_core_dev *dev);
+ int mlx5_fw_reset_init(struct mlx5_core_dev *dev);
+ void mlx5_fw_reset_cleanup(struct mlx5_core_dev *dev);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index bba72b220cc3f..4e49dca94bc38 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -939,6 +939,12 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
+ goto err_sf_table_cleanup;
+ }
+
++ err = mlx5_fs_core_alloc(dev);
++ if (err) {
++ mlx5_core_err(dev, "Failed to alloc flow steering\n");
++ goto err_fs;
++ }
++
+ dev->dm = mlx5_dm_create(dev);
+ if (IS_ERR(dev->dm))
+ mlx5_core_warn(dev, "Failed to init device memory%d\n", err);
+@@ -949,6 +955,8 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
+
+ return 0;
+
++err_fs:
++ mlx5_sf_table_cleanup(dev);
+ err_sf_table_cleanup:
+ mlx5_sf_hw_table_cleanup(dev);
+ err_sf_hw_table_cleanup:
+@@ -986,6 +994,7 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
+ mlx5_hv_vhca_destroy(dev->hv_vhca);
+ mlx5_fw_tracer_destroy(dev->tracer);
+ mlx5_dm_cleanup(dev);
++ mlx5_fs_core_free(dev);
+ mlx5_sf_table_cleanup(dev);
+ mlx5_sf_hw_table_cleanup(dev);
+ mlx5_vhca_event_cleanup(dev);
+@@ -1192,7 +1201,7 @@ static int mlx5_load(struct mlx5_core_dev *dev)
+ goto err_tls_start;
+ }
+
+- err = mlx5_init_fs(dev);
++ err = mlx5_fs_core_init(dev);
+ if (err) {
+ mlx5_core_err(dev, "Failed to init flow steering\n");
+ goto err_fs;
+@@ -1237,7 +1246,7 @@ err_ec:
+ err_vhca:
+ mlx5_vhca_event_stop(dev);
+ err_set_hca:
+- mlx5_cleanup_fs(dev);
++ mlx5_fs_core_cleanup(dev);
+ err_fs:
+ mlx5_accel_tls_cleanup(dev);
+ err_tls_start:
+@@ -1266,7 +1275,7 @@ static void mlx5_unload(struct mlx5_core_dev *dev)
+ mlx5_ec_cleanup(dev);
+ mlx5_sf_hw_table_destroy(dev);
+ mlx5_vhca_event_stop(dev);
+- mlx5_cleanup_fs(dev);
++ mlx5_fs_core_cleanup(dev);
+ mlx5_accel_ipsec_cleanup(dev);
+ mlx5_accel_tls_cleanup(dev);
+ mlx5_fpga_device_stop(dev);
+@@ -1619,6 +1628,10 @@ static void remove_one(struct pci_dev *pdev)
+ struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
+ struct devlink *devlink = priv_to_devlink(dev);
+
++ /* mlx5_drain_fw_reset() is using devlink APIs. Hence, we must drain
++ * fw_reset before unregistering the devlink.
++ */
++ mlx5_drain_fw_reset(dev);
+ devlink_unregister(devlink);
+ mlx5_crdump_disable(dev);
+ mlx5_drain_health_wq(dev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+index c61a5e83c78c4..8622af6d6bf83 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+@@ -530,6 +530,37 @@ static int dr_action_handle_cs_recalc(struct mlx5dr_domain *dmn,
+ return 0;
+ }
+
++static void dr_action_modify_ttl_adjust(struct mlx5dr_domain *dmn,
++ struct mlx5dr_ste_actions_attr *attr,
++ bool rx_rule,
++ bool *recalc_cs_required)
++{
++ *recalc_cs_required = false;
++
++ /* if device supports csum recalculation - no adjustment needed */
++ if (mlx5dr_ste_supp_ttl_cs_recalc(&dmn->info.caps))
++ return;
++
++ /* no adjustment needed on TX rules */
++ if (!rx_rule)
++ return;
++
++ if (!MLX5_CAP_ESW_FLOWTABLE(dmn->mdev, fdb_ipv4_ttl_modify)) {
++ /* Ignore the modify TTL action.
++ * It is always kept as last HW action.
++ */
++ attr->modify_actions--;
++ return;
++ }
++
++ if (dmn->type == MLX5DR_DOMAIN_TYPE_FDB)
++ /* Due to a HW bug on some devices, modifying TTL on RX flows
++ * will cause an incorrect checksum calculation. In such cases
++ * we will use a FW table to recalculate the checksum.
++ */
++ *recalc_cs_required = true;
++}
++
+ static void dr_action_print_sequence(struct mlx5dr_domain *dmn,
+ struct mlx5dr_action *actions[],
+ int last_idx)
+@@ -649,8 +680,9 @@ int mlx5dr_actions_build_ste_arr(struct mlx5dr_matcher *matcher,
+ case DR_ACTION_TYP_MODIFY_HDR:
+ attr.modify_index = action->rewrite->index;
+ attr.modify_actions = action->rewrite->num_of_actions;
+- recalc_cs_required = action->rewrite->modify_ttl &&
+- !mlx5dr_ste_supp_ttl_cs_recalc(&dmn->info.caps);
++ if (action->rewrite->modify_ttl)
++ dr_action_modify_ttl_adjust(dmn, &attr, rx_rule,
++ &recalc_cs_required);
+ break;
+ case DR_ACTION_TYP_L2_TO_TNL_L2:
+ case DR_ACTION_TYP_L2_TO_TNL_L3:
+@@ -737,12 +769,7 @@ int mlx5dr_actions_build_ste_arr(struct mlx5dr_matcher *matcher,
+ *new_hw_ste_arr_sz = nic_matcher->num_of_builders;
+ last_ste = ste_arr + DR_STE_SIZE * (nic_matcher->num_of_builders - 1);
+
+- /* Due to a HW bug in some devices, modifying TTL on RX flows will
+- * cause an incorrect checksum calculation. In this case we will
+- * use a FW table to recalculate.
+- */
+- if (dmn->type == MLX5DR_DOMAIN_TYPE_FDB &&
+- rx_rule && recalc_cs_required && dest_action) {
++ if (recalc_cs_required && dest_action) {
+ ret = dr_action_handle_cs_recalc(dmn, dest_action, &attr.final_icm_addr);
+ if (ret) {
+ mlx5dr_err(dmn,
+@@ -847,7 +874,8 @@ struct mlx5dr_action *
+ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn,
+ struct mlx5dr_action_dest *dests,
+ u32 num_of_dests,
+- bool ignore_flow_level)
++ bool ignore_flow_level,
++ u32 flow_source)
+ {
+ struct mlx5dr_cmd_flow_destination_hw_info *hw_dests;
+ struct mlx5dr_action **ref_actions;
+@@ -919,7 +947,8 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn,
+ reformat_req,
+ &action->dest_tbl->fw_tbl.id,
+ &action->dest_tbl->fw_tbl.group_id,
+- ignore_flow_level);
++ ignore_flow_level,
++ flow_source);
+ if (ret)
+ goto free_action;
+
+@@ -1560,12 +1589,6 @@ dr_action_modify_check_is_ttl_modify(const void *sw_action)
+ return sw_field == MLX5_ACTION_IN_FIELD_OUT_IP_TTL;
+ }
+
+-static bool dr_action_modify_ttl_ignore(struct mlx5dr_domain *dmn)
+-{
+- return !mlx5dr_ste_supp_ttl_cs_recalc(&dmn->info.caps) &&
+- !MLX5_CAP_ESW_FLOWTABLE(dmn->mdev, fdb_ipv4_ttl_modify);
+-}
+-
+ static int dr_actions_convert_modify_header(struct mlx5dr_action *action,
+ u32 max_hw_actions,
+ u32 num_sw_actions,
+@@ -1577,6 +1600,7 @@ static int dr_actions_convert_modify_header(struct mlx5dr_action *action,
+ const struct mlx5dr_ste_action_modify_field *hw_dst_action_info;
+ const struct mlx5dr_ste_action_modify_field *hw_src_action_info;
+ struct mlx5dr_domain *dmn = action->rewrite->dmn;
++ __be64 *modify_ttl_sw_action = NULL;
+ int ret, i, hw_idx = 0;
+ __be64 *sw_action;
+ __be64 hw_action;
+@@ -1589,8 +1613,14 @@ static int dr_actions_convert_modify_header(struct mlx5dr_action *action,
+ action->rewrite->allow_rx = 1;
+ action->rewrite->allow_tx = 1;
+
+- for (i = 0; i < num_sw_actions; i++) {
+- sw_action = &sw_actions[i];
++ for (i = 0; i < num_sw_actions || modify_ttl_sw_action; i++) {
++ /* modify TTL is handled separately, as a last action */
++ if (i == num_sw_actions) {
++ sw_action = modify_ttl_sw_action;
++ modify_ttl_sw_action = NULL;
++ } else {
++ sw_action = &sw_actions[i];
++ }
+
+ ret = dr_action_modify_check_field_limitation(action,
+ sw_action);
+@@ -1599,10 +1629,9 @@ static int dr_actions_convert_modify_header(struct mlx5dr_action *action,
+
+ if (!(*modify_ttl) &&
+ dr_action_modify_check_is_ttl_modify(sw_action)) {
+- if (dr_action_modify_ttl_ignore(dmn))
+- continue;
+-
++ modify_ttl_sw_action = sw_action;
+ *modify_ttl = true;
++ continue;
+ }
+
+ /* Convert SW action to HW action */
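The rewritten loop in dr_actions_convert_modify_header() pulls the modify-TTL action out of the sequence and emits it last: when the flag is spotted, the action is parked in modify_ttl_sw_action and the loop continues, and the condition "i < num_sw_actions || modify_ttl_sw_action" lets the index run one step past the end to flush it. A small hosted-C sketch of that defer-one-item-to-the-end loop shape (values are arbitrary):

#include <stdio.h>

int main(void)
{
	int items[] = { 1, 2, 3, 4 };
	int n = 4, deferred = 0, have_deferred = 0;

	for (int i = 0; i < n || have_deferred; i++) {
		int cur;

		if (i == n) {			/* past the end: flush */
			cur = deferred;
			have_deferred = 0;
		} else if (items[i] == 3 && !have_deferred) {
			deferred = items[i];	/* park the special item */
			have_deferred = 1;
			continue;
		} else {
			cur = items[i];
		}
		printf("%d ", cur);		/* prints: 1 2 4 3 */
	}
	printf("\n");
	return 0;
}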
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c
+index 68a4c32d5f34c..f05ef0cd54bac 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c
+@@ -104,7 +104,8 @@ int mlx5dr_fw_create_md_tbl(struct mlx5dr_domain *dmn,
+ bool reformat_req,
+ u32 *tbl_id,
+ u32 *group_id,
+- bool ignore_flow_level)
++ bool ignore_flow_level,
++ u32 flow_source)
+ {
+ struct mlx5dr_cmd_create_flow_table_attr ft_attr = {};
+ struct mlx5dr_cmd_fte_info fte_info = {};
+@@ -139,6 +140,7 @@ int mlx5dr_fw_create_md_tbl(struct mlx5dr_domain *dmn,
+ fte_info.val = val;
+ fte_info.dest_arr = dest;
+ fte_info.ignore_flow_level = ignore_flow_level;
++ fte_info.flow_context.flow_source = flow_source;
+
+ ret = mlx5dr_cmd_set_fte(dmn->mdev, 0, 0, &ft_info, *group_id, &fte_info);
+ if (ret) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c
+index 2d62950f7a294..134c8484c9016 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c
+@@ -419,7 +419,7 @@ dr_ste_v0_set_actions_tx(struct mlx5dr_domain *dmn,
+ * encapsulation. The reason for that is that we support
+ * modify headers for outer headers only
+ */
+- if (action_type_set[DR_ACTION_TYP_MODIFY_HDR]) {
++ if (action_type_set[DR_ACTION_TYP_MODIFY_HDR] && attr->modify_actions) {
+ dr_ste_v0_set_entry_type(last_ste, DR_STE_TYPE_MODIFY_PKT);
+ dr_ste_v0_set_rewrite_actions(last_ste,
+ attr->modify_actions,
+@@ -511,7 +511,7 @@ dr_ste_v0_set_actions_rx(struct mlx5dr_domain *dmn,
+ }
+ }
+
+- if (action_type_set[DR_ACTION_TYP_MODIFY_HDR]) {
++ if (action_type_set[DR_ACTION_TYP_MODIFY_HDR] && attr->modify_actions) {
+ if (dr_ste_v0_get_entry_type(last_ste) == DR_STE_TYPE_MODIFY_PKT)
+ dr_ste_v0_arr_init_next(&last_ste,
+ added_stes,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+index 55fcb751e24a4..64f41e7938e14 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+@@ -1463,7 +1463,8 @@ int mlx5dr_fw_create_md_tbl(struct mlx5dr_domain *dmn,
+ bool reformat_req,
+ u32 *tbl_id,
+ u32 *group_id,
+- bool ignore_flow_level);
++ bool ignore_flow_level,
++ u32 flow_source);
+ void mlx5dr_fw_destroy_md_tbl(struct mlx5dr_domain *dmn, u32 tbl_id,
+ u32 group_id);
+ #endif /* _DR_TYPES_H_ */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+index 3f311462bedf3..05393fe11132b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+@@ -520,6 +520,7 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ } else if (num_term_actions > 1) {
+ bool ignore_flow_level =
+ !!(fte->action.flags & FLOW_ACT_IGNORE_FLOW_LEVEL);
++ u32 flow_source = fte->flow_context.flow_source;
+
+ if (num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX ||
+ fs_dr_num_actions == MLX5_FLOW_CONTEXT_ACTION_MAX) {
+@@ -529,7 +530,8 @@ static int mlx5_cmd_dr_create_fte(struct mlx5_flow_root_namespace *ns,
+ tmp_action = mlx5dr_action_create_mult_dest_tbl(domain,
+ term_actions,
+ num_term_actions,
+- ignore_flow_level);
++ ignore_flow_level,
++ flow_source);
+ if (!tmp_action) {
+ err = -EOPNOTSUPP;
+ goto free_actions;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+index dfa223415fe24..74a7a2f4d50d6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h
+@@ -96,7 +96,8 @@ struct mlx5dr_action *
+ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn,
+ struct mlx5dr_action_dest *dests,
+ u32 num_of_dests,
+- bool ignore_flow_level);
++ bool ignore_flow_level,
++ u32 flow_source);
+
+ struct mlx5dr_action *mlx5dr_action_create_drop(void);
+
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+index 1f60fd125a1dc..fee148bbf13ea 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+@@ -100,6 +100,24 @@ static int lan966x_create_targets(struct platform_device *pdev,
+ return 0;
+ }
+
++static bool lan966x_port_unique_address(struct net_device *dev)
++{
++ struct lan966x_port *port = netdev_priv(dev);
++ struct lan966x *lan966x = port->lan966x;
++ int p;
++
++ for (p = 0; p < lan966x->num_phys_ports; ++p) {
++ port = lan966x->ports[p];
++ if (!port || port->dev == dev)
++ continue;
++
++ if (ether_addr_equal(dev->dev_addr, port->dev->dev_addr))
++ return false;
++ }
++
++ return true;
++}
++
+ static int lan966x_port_set_mac_address(struct net_device *dev, void *p)
+ {
+ struct lan966x_port *port = netdev_priv(dev);
+@@ -107,16 +125,26 @@ static int lan966x_port_set_mac_address(struct net_device *dev, void *p)
+ const struct sockaddr *addr = p;
+ int ret;
+
++ if (ether_addr_equal(addr->sa_data, dev->dev_addr))
++ return 0;
++
+ /* Learn the new net device MAC address in the mac table. */
+ ret = lan966x_mac_cpu_learn(lan966x, addr->sa_data, HOST_PVID);
+ if (ret)
+ return ret;
+
++ /* If there is another port with the same address as the dev, then don't
++ * delete it from the MAC table
++ */
++ if (!lan966x_port_unique_address(dev))
++ goto out;
++
+ /* Then forget the previous one. */
+ ret = lan966x_mac_cpu_forget(lan966x, dev->dev_addr, HOST_PVID);
+ if (ret)
+ return ret;
+
++out:
+ eth_hw_addr_set(dev, addr->sa_data);
+ return ret;
+ }
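The lan966x change above only forgets the old address from the MAC table when no sibling port still uses it; a shared entry has to survive the rename. A hosted-C sketch of the uniqueness check (port count and addresses invented):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NPORTS 3

static unsigned char addrs[NPORTS][6];

static bool addr_unique(int self)
{
	for (int p = 0; p < NPORTS; p++)
		if (p != self && !memcmp(addrs[p], addrs[self], 6))
			return false;	/* shared: keep the table entry */
	return true;
}

int main(void)
{
	memcpy(addrs[0], "\x02\x00\x00\x00\x00\x01", 6);
	memcpy(addrs[1], "\x02\x00\x00\x00\x00\x01", 6);	/* same MAC */
	printf("forget old entry? %s\n", addr_unique(0) ? "yes" : "no");
	return 0;
}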
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index b30589a135c24..06f4d9a9e9388 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -3614,7 +3614,8 @@ static void ql_reset_work(struct work_struct *work)
+ qdev->mem_map_registers;
+ unsigned long hw_flags;
+
+- if (test_bit((QL_RESET_PER_SCSI | QL_RESET_START), &qdev->flags)) {
++ if (test_bit(QL_RESET_PER_SCSI, &qdev->flags) ||
++ test_bit(QL_RESET_START, &qdev->flags)) {
+ clear_bit(QL_LINK_MASTER, &qdev->flags);
+
+ /*
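The qla3xxx fix above corrects a classic test_bit() misuse: the function takes a single bit *number*, not a mask, so OR-ing two flag values tested one unrelated bit instead of either flag. A self-contained demonstration with invented bit numbers (the real QL_* values are enum constants in the driver):

#include <stdio.h>

#define QL_RESET_START    3	/* illustrative bit numbers */
#define QL_RESET_PER_SCSI 4

static int test_bit(int nr, const unsigned long *addr)
{
	return (*addr >> nr) & 1UL;
}

int main(void)
{
	unsigned long flags = 1UL << QL_RESET_START;	/* only START is set */

	/* buggy: 3 | 4 == 7, so this tests bit 7, neither flag */
	printf("buggy: %d\n",
	       test_bit(QL_RESET_PER_SCSI | QL_RESET_START, &flags));

	/* fixed: test each flag on its own */
	printf("fixed: %d\n", test_bit(QL_RESET_PER_SCSI, &flags) ||
			      test_bit(QL_RESET_START, &flags));
	return 0;
}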
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+index fcf17d8a0494b..644bb54f5f020 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+@@ -181,7 +181,7 @@ static int stmmac_pci_probe(struct pci_dev *pdev,
+ return -ENOMEM;
+
+ /* Enable pci device */
+- ret = pci_enable_device(pdev);
++ ret = pcim_enable_device(pdev);
+ if (ret) {
+ dev_err(&pdev->dev, "%s: ERROR: failed to enable device\n",
+ __func__);
+@@ -241,8 +241,6 @@ static void stmmac_pci_remove(struct pci_dev *pdev)
+ pcim_iounmap_regions(pdev, BIT(i));
+ break;
+ }
+-
+- pci_disable_device(pdev);
+ }
+
+ static int __maybe_unused stmmac_pci_suspend(struct device *dev)
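The stmmac change swaps pci_enable_device() for its managed counterpart: pcim_enable_device() ties the enable to the device's devres lifetime, so the explicit pci_disable_device() in the remove path can simply be deleted, and error paths in probe need no unwind for it either. Sketch of the resulting shape (names invented):

#include <linux/pci.h>

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pcim_enable_device(pdev);	/* devres-managed enable */
	if (ret)
		return ret;

	/* any later failure can just return; devres disables the device */
	return 0;
}

static void example_remove(struct pci_dev *pdev)
{
	/* nothing to undo here for the enable */
}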
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index bc981043cc808..a701178a1d139 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -1367,9 +1367,10 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+ struct gsi_event *event_done;
+ struct gsi_event *event;
+ struct gsi_trans *trans;
++ u32 trans_count = 0;
+ u32 byte_count = 0;
+- u32 old_index;
+ u32 event_avail;
++ u32 old_index;
+
+ trans_info = &channel->trans_info;
+
+@@ -1390,6 +1391,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+ do {
+ trans->len = __le16_to_cpu(event->len);
+ byte_count += trans->len;
++ trans_count++;
+
+ /* Move on to the next event and transaction */
+ if (--event_avail)
+@@ -1401,7 +1403,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+
+ /* We record RX bytes when they are received */
+ channel->byte_count += byte_count;
+- channel->trans_count++;
++ channel->trans_count += trans_count;
+ }
+
+ /* Initialize a ring, including allocating DMA memory for its entries */
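The gsi.c hunk fixes per-batch accounting: the loop completes one transaction per event, but the old code bumped channel->trans_count once per *call*, undercounting whenever a single update covered several events. The fix counts inside the loop, mirroring byte_count. A trivial hosted-C illustration:

#include <stdio.h>

int main(void)
{
	int lens[] = { 100, 200, 300 };	/* three completed transactions */
	unsigned int byte_count = 0, trans_count = 0;

	for (int i = 0; i < 3; i++) {
		byte_count += lens[i];
		trans_count++;		/* the fix: count each completion */
	}
	/* the old code effectively did trans_count = 1 here */
	printf("bytes=%u trans=%u\n", byte_count, trans_count);
	return 0;
}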
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 68291a3efd040..2ecfc17544a6a 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -1169,13 +1169,12 @@ static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint,
+ return;
+
+ skb = __dev_alloc_skb(len, GFP_ATOMIC);
+- if (!skb)
+- return;
+-
+- /* Copy the data into the socket buffer and receive it */
+- skb_put(skb, len);
+- memcpy(skb->data, data, len);
+- skb->truesize += extra;
++ if (skb) {
++ /* Copy the data into the socket buffer and receive it */
++ skb_put(skb, len);
++ memcpy(skb->data, data, len);
++ skb->truesize += extra;
++ }
+
+ ipa_modem_skb_rx(endpoint->netdev, skb);
+ }
+diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
+index 3619520340b74..e172743948ed7 100644
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -988,6 +988,7 @@ static int pppoe_fill_forward_path(struct net_device_path_ctx *ctx,
+ path->encap.proto = htons(ETH_P_PPP_SES);
+ path->encap.id = be16_to_cpu(po->num);
+ memcpy(path->encap.h_dest, po->pppoe_pa.remote, ETH_ALEN);
++ memcpy(ctx->daddr, po->pppoe_pa.remote, ETH_ALEN);
+ path->dev = ctx->dev;
+ ctx->dev = dev;
+
+diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c
+index d9d90baac72a2..93e8d119d45f6 100644
+--- a/drivers/net/vmxnet3/vmxnet3_drv.c
++++ b/drivers/net/vmxnet3/vmxnet3_drv.c
+@@ -589,6 +589,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,
+ if (dma_mapping_error(&adapter->pdev->dev,
+ rbi->dma_addr)) {
+ dev_kfree_skb_any(rbi->skb);
++ rbi->skb = NULL;
+ rq->stats.rx_buf_alloc_failure++;
+ break;
+ }
+@@ -613,6 +614,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx,
+ if (dma_mapping_error(&adapter->pdev->dev,
+ rbi->dma_addr)) {
+ put_page(rbi->page);
++ rbi->page = NULL;
+ rq->stats.rx_buf_alloc_failure++;
+ break;
+ }
+@@ -1666,6 +1668,10 @@ vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq,
+ u32 i, ring_idx;
+ struct Vmxnet3_RxDesc *rxd;
+
++ /* ring has already been cleaned up */
++ if (!rq->rx_ring[0].base)
++ return;
++
+ for (ring_idx = 0; ring_idx < 2; ring_idx++) {
+ for (i = 0; i < rq->rx_ring[ring_idx].size; i++) {
+ #ifdef __BIG_ENDIAN_BITFIELD
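All three vmxnet3 hunks apply the same defensive idiom: when an error path frees a buffer, clear the pointer so a later cleanup pass cannot free or unmap it again, and make the cleanup path tolerate an already-cleaned ring. Hosted-C sketch:

#include <stdlib.h>

struct buf_info { void *skb; };		/* illustrative */

static void error_path(struct buf_info *rbi)
{
	free(rbi->skb);
	rbi->skb = NULL;	/* the fix: mark the slot empty */
}

static void cleanup(struct buf_info *rbi)
{
	free(rbi->skb);		/* safe: free(NULL) is a no-op */
	rbi->skb = NULL;
}

int main(void)
{
	struct buf_info rbi = { .skb = malloc(16) };

	error_path(&rbi);
	cleanup(&rbi);		/* no double free */
	return 0;
}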
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 10f7c79caac2d..0abd772c57f08 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4422,6 +4422,7 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl)
+ if (ctrl->queue_count > 1) {
+ nvme_queue_scan(ctrl);
+ nvme_start_queues(ctrl);
++ nvme_mpath_update(ctrl);
+ }
+ }
+ EXPORT_SYMBOL_GPL(nvme_start_ctrl);
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index a703f1f5fb64c..189175fff7e44 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -635,8 +635,17 @@ static void nvme_update_ns_ana_state(struct nvme_ana_group_desc *desc,
+ ns->ana_grpid = le32_to_cpu(desc->grpid);
+ ns->ana_state = desc->state;
+ clear_bit(NVME_NS_ANA_PENDING, &ns->flags);
+-
+- if (nvme_state_is_live(ns->ana_state))
++ /*
++ * nvme_mpath_set_live() will trigger I/O to the multipath path device
++ * and in turn to this path device. However we cannot accept this I/O
++ * if the controller is not live. This may deadlock if called from
++ * nvme_mpath_init_identify() and the ctrl will never complete
++ * initialization, preventing I/O from completing. For this case we
++ * will reprocess the ANA log page in nvme_mpath_update() once the
++ * controller is ready.
++ */
++ if (nvme_state_is_live(ns->ana_state) &&
++ ns->ctrl->state == NVME_CTRL_LIVE)
+ nvme_mpath_set_live(ns);
+ }
+
+@@ -723,6 +732,18 @@ static void nvme_ana_work(struct work_struct *work)
+ nvme_read_ana_log(ctrl);
+ }
+
++void nvme_mpath_update(struct nvme_ctrl *ctrl)
++{
++ u32 nr_change_groups = 0;
++
++ if (!ctrl->ana_log_buf)
++ return;
++
++ mutex_lock(&ctrl->ana_lock);
++ nvme_parse_ana_log(ctrl, &nr_change_groups, nvme_update_ana_state);
++ mutex_unlock(&ctrl->ana_lock);
++}
++
+ static void nvme_anatt_timeout(struct timer_list *t)
+ {
+ struct nvme_ctrl *ctrl = from_timer(ctrl, t, anatt_timer);
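The multipath hunk defers the nvme_mpath_set_live() side effect while the controller is still initializing (the comment above explains the deadlock) and replays the whole ANA log from nvme_mpath_update() once nvme_start_ctrl() declares the controller live. A reduced sketch of that defer-and-replay shape (enum and names invented):

#include <stdbool.h>
#include <stdio.h>

enum ctrl_state { CTRL_INIT, CTRL_LIVE };

static void update_path(enum ctrl_state ctrl, bool ana_live, bool *path_live)
{
	/* only go live when the controller itself can take I/O */
	*path_live = ana_live && ctrl == CTRL_LIVE;
}

int main(void)
{
	bool live = false;

	update_path(CTRL_INIT, true, &live);	/* skipped during init */
	printf("during init: %d\n", live);

	update_path(CTRL_LIVE, true, &live);	/* replayed by the update hook */
	printf("after start: %d\n", live);
	return 0;
}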
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 68c42e8311172..85f3f55c71c58 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -800,6 +800,7 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id);
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head);
+ int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id);
+ void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl);
++void nvme_mpath_update(struct nvme_ctrl *ctrl);
+ void nvme_mpath_uninit(struct nvme_ctrl *ctrl);
+ void nvme_mpath_stop(struct nvme_ctrl *ctrl);
+ bool nvme_mpath_clear_current_path(struct nvme_ns *ns);
+@@ -874,6 +875,9 @@ static inline int nvme_mpath_init_identify(struct nvme_ctrl *ctrl,
+ "Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n");
+ return 0;
+ }
++static inline void nvme_mpath_update(struct nvme_ctrl *ctrl)
++{
++}
+ static inline void nvme_mpath_uninit(struct nvme_ctrl *ctrl)
+ {
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index e4b79bee62068..94a0b933b1335 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3470,7 +3470,10 @@ static const struct pci_device_id nvme_id_table[] = {
+ NVME_QUIRK_128_BYTES_SQES |
+ NVME_QUIRK_SHARED_TAGS |
+ NVME_QUIRK_SKIP_CID_GEN },
+-
++ { PCI_DEVICE(0x144d, 0xa808), /* Samsung X5 */
++ .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY|
++ NVME_QUIRK_NO_DEEPEST_PS |
++ NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
+ { 0, }
+ };
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index 6fb24746de069..c3a9df5545cf4 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -984,7 +984,7 @@ void nvmet_execute_async_event(struct nvmet_req *req)
+ ctrl->async_event_cmds[ctrl->nr_async_event_cmds++] = req;
+ mutex_unlock(&ctrl->lock);
+
+- schedule_work(&ctrl->async_event_work);
++ queue_work(nvmet_wq, &ctrl->async_event_work);
+ }
+
+ void nvmet_execute_keep_alive(struct nvmet_req *req)
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 496d775c67707..cea30e4f50533 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -1554,7 +1554,7 @@ static void nvmet_port_release(struct config_item *item)
+ struct nvmet_port *port = to_nvmet_port(item);
+
+ /* Let inflight controllers teardown complete */
+- flush_scheduled_work();
++ flush_workqueue(nvmet_wq);
+ list_del(&port->global_entry);
+
+ kfree(port->ana_state);
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 626caf6f1e4b4..1c026a21f2185 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -20,6 +20,9 @@ struct workqueue_struct *zbd_wq;
+ static const struct nvmet_fabrics_ops *nvmet_transports[NVMF_TRTYPE_MAX];
+ static DEFINE_IDA(cntlid_ida);
+
++struct workqueue_struct *nvmet_wq;
++EXPORT_SYMBOL_GPL(nvmet_wq);
++
+ /*
+ * This read/write semaphore is used to synchronize access to configuration
+ * information on a target system that will result in discovery log page
+@@ -205,7 +208,7 @@ void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
+ list_add_tail(&aen->entry, &ctrl->async_events);
+ mutex_unlock(&ctrl->lock);
+
+- schedule_work(&ctrl->async_event_work);
++ queue_work(nvmet_wq, &ctrl->async_event_work);
+ }
+
+ static void nvmet_add_to_changed_ns_log(struct nvmet_ctrl *ctrl, __le32 nsid)
+@@ -385,7 +388,7 @@ static void nvmet_keep_alive_timer(struct work_struct *work)
+ if (reset_tbkas) {
+ pr_debug("ctrl %d reschedule traffic based keep-alive timer\n",
+ ctrl->cntlid);
+- schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
++ queue_delayed_work(nvmet_wq, &ctrl->ka_work, ctrl->kato * HZ);
+ return;
+ }
+
+@@ -403,7 +406,7 @@ void nvmet_start_keep_alive_timer(struct nvmet_ctrl *ctrl)
+ pr_debug("ctrl %d start keep-alive timer for %d secs\n",
+ ctrl->cntlid, ctrl->kato);
+
+- schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
++ queue_delayed_work(nvmet_wq, &ctrl->ka_work, ctrl->kato * HZ);
+ }
+
+ void nvmet_stop_keep_alive_timer(struct nvmet_ctrl *ctrl)
+@@ -1479,7 +1482,7 @@ void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl)
+ mutex_lock(&ctrl->lock);
+ if (!(ctrl->csts & NVME_CSTS_CFS)) {
+ ctrl->csts |= NVME_CSTS_CFS;
+- schedule_work(&ctrl->fatal_err_work);
++ queue_work(nvmet_wq, &ctrl->fatal_err_work);
+ }
+ mutex_unlock(&ctrl->lock);
+ }
+@@ -1620,9 +1623,15 @@ static int __init nvmet_init(void)
+ goto out_free_zbd_work_queue;
+ }
+
++ nvmet_wq = alloc_workqueue("nvmet-wq", WQ_MEM_RECLAIM, 0);
++ if (!nvmet_wq) {
++ error = -ENOMEM;
++ goto out_free_buffered_work_queue;
++ }
++
+ error = nvmet_init_discovery();
+ if (error)
+- goto out_free_work_queue;
++ goto out_free_nvmet_work_queue;
+
+ error = nvmet_init_configfs();
+ if (error)
+@@ -1631,7 +1640,9 @@ static int __init nvmet_init(void)
+
+ out_exit_discovery:
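Wrapping the three ITR bitfields in an anonymous union gives the rest of the driver a u16 view (itr_settings) that saves or restores all three at once, instead of copying field by field. Compilable illustration (hosted C11, invented names):

#include <stdint.h>
#include <stdio.h>

struct itr {
	union {
		struct {
			uint16_t setting : 13;
			uint16_t reserved : 2;
			uint16_t mode : 1;
		};
		uint16_t settings;	/* all three fields as one word */
	};
};

int main(void)
{
	struct itr a = { .settings = 0 }, b;

	a.setting = 42;
	a.mode = 1;

	b.settings = a.settings;	/* snapshot everything at once */
	printf("setting=%u mode=%u\n", (unsigned)b.setting, (unsigned)b.mode);
	return 0;
}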
+ nvmet_exit_discovery();
+-out_free_work_queue:
++out_free_nvmet_work_queue:
++ destroy_workqueue(nvmet_wq);
++out_free_buffered_work_queue:
+ destroy_workqueue(buffered_io_wq);
+ out_free_zbd_work_queue:
+ destroy_workqueue(zbd_wq);
+@@ -1643,6 +1654,7 @@ static void __exit nvmet_exit(void)
+ nvmet_exit_configfs();
+ nvmet_exit_discovery();
+ ida_destroy(&cntlid_ida);
++ destroy_workqueue(nvmet_wq);
+ destroy_workqueue(buffered_io_wq);
+ destroy_workqueue(zbd_wq);
+
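This series-wide nvmet change replaces schedule_work()/flush_scheduled_work() with a driver-owned workqueue: flushing the shared system workqueue flushes everyone's work and is deadlock-prone from teardown paths, and work sitting in the I/O path wants WQ_MEM_RECLAIM so it can make progress under memory pressure. Minimal sketch of the ownership pattern (names invented):

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int example_init(void)
{
	/* WQ_MEM_RECLAIM gives the queue a rescuer thread */
	example_wq = alloc_workqueue("example-wq", WQ_MEM_RECLAIM, 0);
	return example_wq ? 0 : -ENOMEM;
}

static void example_submit(struct work_struct *work)
{
	queue_work(example_wq, work);	/* not schedule_work() */
}

static void example_exit(void)
{
	flush_workqueue(example_wq);	/* flushes only our work */
	destroy_workqueue(example_wq);
}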
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 22b5108168a6a..c43bc5e1c7a28 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -1491,7 +1491,7 @@ __nvmet_fc_free_assocs(struct nvmet_fc_tgtport *tgtport)
+ list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
+ if (!nvmet_fc_tgt_a_get(assoc))
+ continue;
+- if (!schedule_work(&assoc->del_work))
++ if (!queue_work(nvmet_wq, &assoc->del_work))
+ /* already deleting - release local reference */
+ nvmet_fc_tgt_a_put(assoc);
+ }
+@@ -1546,7 +1546,7 @@ nvmet_fc_invalidate_host(struct nvmet_fc_target_port *target_port,
+ continue;
+ assoc->hostport->invalid = 1;
+ noassoc = false;
+- if (!schedule_work(&assoc->del_work))
++ if (!queue_work(nvmet_wq, &assoc->del_work))
+ /* already deleting - release local reference */
+ nvmet_fc_tgt_a_put(assoc);
+ }
+@@ -1592,7 +1592,7 @@ nvmet_fc_delete_ctrl(struct nvmet_ctrl *ctrl)
+ nvmet_fc_tgtport_put(tgtport);
+
+ if (found_ctrl) {
+- if (!schedule_work(&assoc->del_work))
++ if (!queue_work(nvmet_wq, &assoc->del_work))
+ /* already deleting - release local reference */
+ nvmet_fc_tgt_a_put(assoc);
+ return;
+@@ -2060,7 +2060,7 @@ nvmet_fc_rcv_ls_req(struct nvmet_fc_target_port *target_port,
+ iod->rqstdatalen = lsreqbuf_len;
+ iod->hosthandle = hosthandle;
+
+- schedule_work(&iod->work);
++ queue_work(nvmet_wq, &iod->work);
+
+ return 0;
+ }
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 54606f1872b4a..5c16372f3b533 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -360,7 +360,7 @@ fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
+ spin_lock(&rport->lock);
+ list_add_tail(&rport->ls_list, &tls_req->ls_list);
+ spin_unlock(&rport->lock);
+- schedule_work(&rport->ls_work);
++ queue_work(nvmet_wq, &rport->ls_work);
+ return ret;
+ }
+
+@@ -393,7 +393,7 @@ fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
+ spin_lock(&rport->lock);
+ list_add_tail(&rport->ls_list, &tls_req->ls_list);
+ spin_unlock(&rport->lock);
+- schedule_work(&rport->ls_work);
++ queue_work(nvmet_wq, &rport->ls_work);
+ }
+
+ return 0;
+@@ -448,7 +448,7 @@ fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
+ spin_lock(&tport->lock);
+ list_add_tail(&tport->ls_list, &tls_req->ls_list);
+ spin_unlock(&tport->lock);
+- schedule_work(&tport->ls_work);
++ queue_work(nvmet_wq, &tport->ls_work);
+ return ret;
+ }
+
+@@ -480,7 +480,7 @@ fcloop_t2h_xmt_ls_rsp(struct nvme_fc_local_port *localport,
+ spin_lock(&tport->lock);
+ list_add_tail(&tport->ls_list, &tls_req->ls_list);
+ spin_unlock(&tport->lock);
+- schedule_work(&tport->ls_work);
++ queue_work(nvmet_wq, &tport->ls_work);
+ }
+
+ return 0;
+@@ -520,7 +520,7 @@ fcloop_tgt_discovery_evt(struct nvmet_fc_target_port *tgtport)
+ tgt_rscn->tport = tgtport->private;
+ INIT_WORK(&tgt_rscn->work, fcloop_tgt_rscn_work);
+
+- schedule_work(&tgt_rscn->work);
++ queue_work(nvmet_wq, &tgt_rscn->work);
+ }
+
+ static void
+@@ -739,7 +739,7 @@ fcloop_fcp_req(struct nvme_fc_local_port *localport,
+ INIT_WORK(&tfcp_req->tio_done_work, fcloop_tgt_fcprqst_done_work);
+ kref_init(&tfcp_req->ref);
+
+- schedule_work(&tfcp_req->fcp_rcv_work);
++ queue_work(nvmet_wq, &tfcp_req->fcp_rcv_work);
+
+ return 0;
+ }
+@@ -921,7 +921,7 @@ fcloop_fcp_req_release(struct nvmet_fc_target_port *tgtport,
+ {
+ struct fcloop_fcpreq *tfcp_req = tgt_fcp_req_to_fcpreq(tgt_fcpreq);
+
+- schedule_work(&tfcp_req->tio_done_work);
++ queue_work(nvmet_wq, &tfcp_req->tio_done_work);
+ }
+
+ static void
+@@ -976,7 +976,7 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+
+ if (abortio)
+ /* leave the reference while the work item is scheduled */
+- WARN_ON(!schedule_work(&tfcp_req->abort_rcv_work));
++ WARN_ON(!queue_work(nvmet_wq, &tfcp_req->abort_rcv_work));
+ else {
+ /*
+ * as the io has already had the done callback made,
+diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
+index 6be6e59d273bb..80f079a7015d6 100644
+--- a/drivers/nvme/target/io-cmd-file.c
++++ b/drivers/nvme/target/io-cmd-file.c
+@@ -292,7 +292,7 @@ static void nvmet_file_execute_flush(struct nvmet_req *req)
+ if (!nvmet_check_transfer_len(req, 0))
+ return;
+ INIT_WORK(&req->f.work, nvmet_file_flush_work);
+- schedule_work(&req->f.work);
++ queue_work(nvmet_wq, &req->f.work);
+ }
+
+ static void nvmet_file_execute_discard(struct nvmet_req *req)
+@@ -352,7 +352,7 @@ static void nvmet_file_execute_dsm(struct nvmet_req *req)
+ if (!nvmet_check_data_len_lte(req, nvmet_dsm_len(req)))
+ return;
+ INIT_WORK(&req->f.work, nvmet_file_dsm_work);
+- schedule_work(&req->f.work);
++ queue_work(nvmet_wq, &req->f.work);
+ }
+
+ static void nvmet_file_write_zeroes_work(struct work_struct *w)
+@@ -382,7 +382,7 @@ static void nvmet_file_execute_write_zeroes(struct nvmet_req *req)
+ if (!nvmet_check_transfer_len(req, 0))
+ return;
+ INIT_WORK(&req->f.work, nvmet_file_write_zeroes_work);
+- schedule_work(&req->f.work);
++ queue_work(nvmet_wq, &req->f.work);
+ }
+
+ u16 nvmet_file_parse_io_cmd(struct nvmet_req *req)
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index eb1094254c823..2a968eeddda37 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -166,7 +166,7 @@ static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ iod->req.transfer_len = blk_rq_payload_bytes(req);
+ }
+
+- schedule_work(&iod->work);
++ queue_work(nvmet_wq, &iod->work);
+ return BLK_STS_OK;
+ }
+
+@@ -187,7 +187,7 @@ static void nvme_loop_submit_async_event(struct nvme_ctrl *arg)
+ return;
+ }
+
+- schedule_work(&iod->work);
++ queue_work(nvmet_wq, &iod->work);
+ }
+
+ static int nvme_loop_init_iod(struct nvme_loop_ctrl *ctrl,
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index af193423c10bb..ff26dbde8c1e9 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -366,6 +366,7 @@ struct nvmet_req {
+
+ extern struct workqueue_struct *buffered_io_wq;
+ extern struct workqueue_struct *zbd_wq;
++extern struct workqueue_struct *nvmet_wq;
+
+ static inline void nvmet_set_result(struct nvmet_req *req, u32 result)
+ {
+diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
+index 9e5b89ae29dfe..2b5031b646e92 100644
+--- a/drivers/nvme/target/passthru.c
++++ b/drivers/nvme/target/passthru.c
+@@ -281,7 +281,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
+ if (req->p.use_workqueue || effects) {
+ INIT_WORK(&req->p.work, nvmet_passthru_execute_cmd_work);
+ req->p.rq = rq;
+- schedule_work(&req->p.work);
++ queue_work(nvmet_wq, &req->p.work);
+ } else {
+ rq->end_io_data = req;
+ blk_execute_rq_nowait(rq, false, nvmet_passthru_req_done);
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 1deb4043e2425..0ebfe21911655 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -1584,7 +1584,7 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
+
+ if (queue->host_qid == 0) {
+ /* Let inflight controller teardown complete */
+- flush_scheduled_work();
++ flush_workqueue(nvmet_wq);
+ }
+
+ ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
+@@ -1669,7 +1669,7 @@ static void __nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue)
+
+ if (disconnect) {
+ rdma_disconnect(queue->cm_id);
+- schedule_work(&queue->release_work);
++ queue_work(nvmet_wq, &queue->release_work);
+ }
+ }
+
+@@ -1699,7 +1699,7 @@ static void nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
+ mutex_unlock(&nvmet_rdma_queue_mutex);
+
+ pr_err("failed to connect queue %d\n", queue->idx);
+- schedule_work(&queue->release_work);
++ queue_work(nvmet_wq, &queue->release_work);
+ }
+
+ /**
+@@ -1773,7 +1773,7 @@ static int nvmet_rdma_cm_handler(struct rdma_cm_id *cm_id,
+ if (!queue) {
+ struct nvmet_rdma_port *port = cm_id->context;
+
+- schedule_delayed_work(&port->repair_work, 0);
++ queue_delayed_work(nvmet_wq, &port->repair_work, 0);
+ break;
+ }
+ fallthrough;
+@@ -1903,7 +1903,7 @@ static void nvmet_rdma_repair_port_work(struct work_struct *w)
+ nvmet_rdma_disable_port(port);
+ ret = nvmet_rdma_enable_port(port);
+ if (ret)
+- schedule_delayed_work(&port->repair_work, 5 * HZ);
++ queue_delayed_work(nvmet_wq, &port->repair_work, 5 * HZ);
+ }
+
+ static int nvmet_rdma_add_port(struct nvmet_port *nport)
+@@ -2053,7 +2053,7 @@ static void nvmet_rdma_remove_one(struct ib_device *ib_device, void *client_data
+ }
+ mutex_unlock(&nvmet_rdma_queue_mutex);
+
+- flush_scheduled_work();
++ flush_workqueue(nvmet_wq);
+ }
+
+ static struct ib_client nvmet_rdma_ib_client = {
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 7c1c43ce466bc..31bab7477d531 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -1269,7 +1269,7 @@ static void nvmet_tcp_schedule_release_queue(struct nvmet_tcp_queue *queue)
+ spin_lock(&queue->state_lock);
+ if (queue->state != NVMET_TCP_Q_DISCONNECTING) {
+ queue->state = NVMET_TCP_Q_DISCONNECTING;
+- schedule_work(&queue->release_work);
++ queue_work(nvmet_wq, &queue->release_work);
+ }
+ spin_unlock(&queue->state_lock);
+ }
+@@ -1684,7 +1684,7 @@ static void nvmet_tcp_listen_data_ready(struct sock *sk)
+ goto out;
+
+ if (sk->sk_state == TCP_LISTEN)
+- schedule_work(&port->accept_work);
++ queue_work(nvmet_wq, &port->accept_work);
+ out:
+ read_unlock_bh(&sk->sk_callback_lock);
+ }
+@@ -1815,7 +1815,7 @@ static u16 nvmet_tcp_install_queue(struct nvmet_sq *sq)
+
+ if (sq->qid == 0) {
+ /* Let inflight controller teardown complete */
+- flush_scheduled_work();
++ flush_workqueue(nvmet_wq);
+ }
+
+ queue->nr_cmds = sq->size * 2;
+@@ -1876,12 +1876,12 @@ static void __exit nvmet_tcp_exit(void)
+
+ nvmet_unregister_transport(&nvmet_tcp_ops);
+
+- flush_scheduled_work();
++ flush_workqueue(nvmet_wq);
+ mutex_lock(&nvmet_tcp_queue_mutex);
+ list_for_each_entry(queue, &nvmet_tcp_queue_list, queue_list)
+ kernel_sock_shutdown(queue->sock, SHUT_RDWR);
+ mutex_unlock(&nvmet_tcp_queue_mutex);
+- flush_scheduled_work();
++ flush_workqueue(nvmet_wq);
+
+ destroy_workqueue(nvmet_tcp_wq);
+ }
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 5be382b19d9a7..27169c0231809 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -272,7 +272,6 @@ struct advk_pcie {
+ u32 actions;
+ } wins[OB_WIN_COUNT];
+ u8 wins_count;
+- int irq;
+ struct irq_domain *rp_irq_domain;
+ struct irq_domain *irq_domain;
+ struct irq_chip irq_chip;
+@@ -1570,26 +1569,21 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
+ }
+ }
+
+-static void advk_pcie_irq_handler(struct irq_desc *desc)
++static irqreturn_t advk_pcie_irq_handler(int irq, void *arg)
+ {
+- struct advk_pcie *pcie = irq_desc_get_handler_data(desc);
+- struct irq_chip *chip = irq_desc_get_chip(desc);
+- u32 val, mask, status;
++ struct advk_pcie *pcie = arg;
++ u32 status;
+
+- chained_irq_enter(chip, desc);
++ status = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG);
++ if (!(status & PCIE_IRQ_CORE_INT))
++ return IRQ_NONE;
+
+- val = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG);
+- mask = advk_readl(pcie, HOST_CTRL_INT_MASK_REG);
+- status = val & ((~mask) & PCIE_IRQ_ALL_MASK);
++ advk_pcie_handle_int(pcie);
+
+- if (status & PCIE_IRQ_CORE_INT) {
+- advk_pcie_handle_int(pcie);
++ /* Clear interrupt */
++ advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG);
+
+- /* Clear interrupt */
+- advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG);
+- }
+-
+- chained_irq_exit(chip, desc);
++ return IRQ_HANDLED;
+ }
+
+ static int advk_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+@@ -1671,7 +1665,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ struct advk_pcie *pcie;
+ struct pci_host_bridge *bridge;
+ struct resource_entry *entry;
+- int ret;
++ int ret, irq;
+
+ bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
+ if (!bridge)
+@@ -1757,9 +1751,17 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ if (IS_ERR(pcie->base))
+ return PTR_ERR(pcie->base);
+
+- pcie->irq = platform_get_irq(pdev, 0);
+- if (pcie->irq < 0)
+- return pcie->irq;
++ irq = platform_get_irq(pdev, 0);
++ if (irq < 0)
++ return irq;
++
++ ret = devm_request_irq(dev, irq, advk_pcie_irq_handler,
++ IRQF_SHARED | IRQF_NO_THREAD, "advk-pcie",
++ pcie);
++ if (ret) {
++ dev_err(dev, "Failed to register interrupt\n");
++ return ret;
++ }
+
+ pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node,
+ "reset-gpios", 0,
+@@ -1816,15 +1818,12 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- irq_set_chained_handler_and_data(pcie->irq, advk_pcie_irq_handler, pcie);
+-
+ bridge->sysdata = pcie;
+ bridge->ops = &advk_pcie_ops;
+ bridge->map_irq = advk_pcie_map_irq;
+
+ ret = pci_host_probe(bridge);
+ if (ret < 0) {
+- irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
+ advk_pcie_remove_rp_irq_domain(pcie);
+ advk_pcie_remove_msi_irq_domain(pcie);
+ advk_pcie_remove_irq_domain(pcie);
+@@ -1873,9 +1872,6 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
+ advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
+
+- /* Remove IRQ handler */
+- irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
+-
+ /* Remove IRQ domains */
+ advk_pcie_remove_rp_irq_domain(pcie);
+ advk_pcie_remove_msi_irq_domain(pcie);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 9ecce435fb3f1..d25122fbe98ab 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2920,6 +2920,16 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
+ DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
+ },
++ },
++ {
++ /*
++ * Downstream device is not accessible after putting a root port
++ * into D3cold and back into D0 on Elo i2.
++ */
++ .ident = "Elo i2",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"),
++ },
+ },
+ #endif
+ { }
+diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+index a3fa03bcd9a30..54064714d73fb 100644
+--- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
+@@ -1236,18 +1236,12 @@ FUNC_GROUP_DECL(SALT8, AA12);
+ FUNC_GROUP_DECL(WDTRST4, AA12);
+
+ #define AE12 196
+-SIG_EXPR_LIST_DECL_SEMG(AE12, FWSPIDQ2, FWQSPID, FWSPID,
+- SIG_DESC_SET(SCU438, 4));
+ SIG_EXPR_LIST_DECL_SESG(AE12, GPIOY4, GPIOY4);
+-PIN_DECL_(AE12, SIG_EXPR_LIST_PTR(AE12, FWSPIDQ2),
+- SIG_EXPR_LIST_PTR(AE12, GPIOY4));
++PIN_DECL_(AE12, SIG_EXPR_LIST_PTR(AE12, GPIOY4));
+
+ #define AF12 197
+-SIG_EXPR_LIST_DECL_SEMG(AF12, FWSPIDQ3, FWQSPID, FWSPID,
+- SIG_DESC_SET(SCU438, 5));
+ SIG_EXPR_LIST_DECL_SESG(AF12, GPIOY5, GPIOY5);
+-PIN_DECL_(AF12, SIG_EXPR_LIST_PTR(AF12, FWSPIDQ3),
+- SIG_EXPR_LIST_PTR(AF12, GPIOY5));
++PIN_DECL_(AF12, SIG_EXPR_LIST_PTR(AF12, GPIOY5));
+
+ #define AC12 198
+ SSSF_PIN_DECL(AC12, GPIOY6, FWSPIABR, SIG_DESC_SET(SCU438, 6));
+@@ -1520,9 +1514,8 @@ SIG_EXPR_LIST_DECL_SEMG(Y4, EMMCDAT7, EMMCG8, EMMC, SIG_DESC_SET(SCU404, 3));
+ PIN_DECL_3(Y4, GPIO18E3, FWSPIDMISO, VBMISO, EMMCDAT7);
+
+ GROUP_DECL(FWSPID, Y1, Y2, Y3, Y4);
+-GROUP_DECL(FWQSPID, Y1, Y2, Y3, Y4, AE12, AF12);
+ GROUP_DECL(EMMCG8, AB4, AA4, AC4, AA5, Y5, AB5, AB6, AC5, Y1, Y2, Y3, Y4);
+-FUNC_DECL_2(FWSPID, FWSPID, FWQSPID);
++FUNC_DECL_1(FWSPID, FWSPID);
+ FUNC_GROUP_DECL(VB, Y1, Y2, Y3, Y4);
+ FUNC_DECL_3(EMMC, EMMCG1, EMMCG4, EMMCG8);
+ /*
+@@ -1918,7 +1911,6 @@ static const struct aspeed_pin_group aspeed_g6_groups[] = {
+ ASPEED_PINCTRL_GROUP(FSI2),
+ ASPEED_PINCTRL_GROUP(FWSPIABR),
+ ASPEED_PINCTRL_GROUP(FWSPID),
+- ASPEED_PINCTRL_GROUP(FWQSPID),
+ ASPEED_PINCTRL_GROUP(FWSPIWP),
+ ASPEED_PINCTRL_GROUP(GPIT0),
+ ASPEED_PINCTRL_GROUP(GPIT1),
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt8365.c b/drivers/pinctrl/mediatek/pinctrl-mt8365.c
+index 79b1fee5a1eba..ddee0db72d264 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt8365.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt8365.c
+@@ -259,7 +259,7 @@ static const struct mtk_pin_ies_smt_set mt8365_ies_set[] = {
+ MTK_PIN_IES_SMT_SPEC(104, 104, 0x420, 13),
+ MTK_PIN_IES_SMT_SPEC(105, 109, 0x420, 14),
+ MTK_PIN_IES_SMT_SPEC(110, 113, 0x420, 15),
+- MTK_PIN_IES_SMT_SPEC(114, 112, 0x420, 16),
++ MTK_PIN_IES_SMT_SPEC(114, 116, 0x420, 16),
+ MTK_PIN_IES_SMT_SPEC(117, 119, 0x420, 17),
+ MTK_PIN_IES_SMT_SPEC(120, 122, 0x420, 18),
+ MTK_PIN_IES_SMT_SPEC(123, 125, 0x420, 19),
+diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
+index 370459243007b..61e3844cddbf0 100644
+--- a/drivers/pinctrl/pinctrl-ocelot.c
++++ b/drivers/pinctrl/pinctrl-ocelot.c
+@@ -129,6 +129,7 @@ enum {
+ FUNC_PTP1,
+ FUNC_PTP2,
+ FUNC_PTP3,
++ FUNC_PTPSYNC_0,
+ FUNC_PTPSYNC_1,
+ FUNC_PTPSYNC_2,
+ FUNC_PTPSYNC_3,
+@@ -252,6 +253,7 @@ static const char *const ocelot_function_names[] = {
+ [FUNC_PTP1] = "ptp1",
+ [FUNC_PTP2] = "ptp2",
+ [FUNC_PTP3] = "ptp3",
++ [FUNC_PTPSYNC_0] = "ptpsync_0",
+ [FUNC_PTPSYNC_1] = "ptpsync_1",
+ [FUNC_PTPSYNC_2] = "ptpsync_2",
+ [FUNC_PTPSYNC_3] = "ptpsync_3",
+LAN966X_P(31, GPIO, FC3_c, CAN1, NONE, OB_TRG, RECO_b, NONE, R);
+ LAN966X_P(32, GPIO, FC3_c, NONE, SGPIO_a, NONE, MIIM_Sa, NONE, R);
+ LAN966X_P(33, GPIO, FC1_b, NONE, SGPIO_a, NONE, MIIM_Sa, MIIM_b, R);
+ LAN966X_P(34, GPIO, FC1_b, NONE, SGPIO_a, NONE, MIIM_Sa, MIIM_b, R);
+-LAN966X_P(35, GPIO, FC1_b, NONE, SGPIO_a, CAN0_b, NONE, NONE, R);
++LAN966X_P(35, GPIO, FC1_b, PTPSYNC_0, SGPIO_a, CAN0_b, NONE, NONE, R);
+ LAN966X_P(36, GPIO, NONE, PTPSYNC_1, NONE, CAN0_b, NONE, NONE, R);
+ LAN966X_P(37, GPIO, FC_SHRD0, PTPSYNC_2, TWI_SLC_GATE_AD, NONE, NONE, NONE, R);
+ LAN966X_P(38, GPIO, NONE, PTPSYNC_3, NONE, NONE, NONE, NONE, R);
+diff --git a/drivers/platform/chrome/cros_ec_debugfs.c b/drivers/platform/chrome/cros_ec_debugfs.c
+index 272c89837d745..0dbceee87a4b1 100644
+--- a/drivers/platform/chrome/cros_ec_debugfs.c
++++ b/drivers/platform/chrome/cros_ec_debugfs.c
+@@ -25,6 +25,9 @@
+
+ #define CIRC_ADD(idx, size, value) (((idx) + (value)) & ((size) - 1))
+
++/* waitqueue for log readers */
++static DECLARE_WAIT_QUEUE_HEAD(cros_ec_debugfs_log_wq);
++
+ /**
+ * struct cros_ec_debugfs - EC debugging information.
+ *
+@@ -33,7 +36,6 @@
+ * @log_buffer: circular buffer for console log information
+ * @read_msg: preallocated EC command and buffer to read console log
+ * @log_mutex: mutex to protect circular buffer
+- * @log_wq: waitqueue for log readers
+ * @log_poll_work: recurring task to poll EC for new console log data
+ * @panicinfo_blob: panicinfo debugfs blob
+ */
+@@ -44,7 +46,6 @@ struct cros_ec_debugfs {
+ struct circ_buf log_buffer;
+ struct cros_ec_command *read_msg;
+ struct mutex log_mutex;
+- wait_queue_head_t log_wq;
+ struct delayed_work log_poll_work;
+ /* EC panicinfo */
+ struct debugfs_blob_wrapper panicinfo_blob;
+@@ -107,7 +108,7 @@ static void cros_ec_console_log_work(struct work_struct *__work)
+ buf_space--;
+ }
+
+- wake_up(&debug_info->log_wq);
++ wake_up(&cros_ec_debugfs_log_wq);
+ }
+
+ mutex_unlock(&debug_info->log_mutex);
+@@ -141,7 +142,7 @@ static ssize_t cros_ec_console_log_read(struct file *file, char __user *buf,
+
+ mutex_unlock(&debug_info->log_mutex);
+
+- ret = wait_event_interruptible(debug_info->log_wq,
++ ret = wait_event_interruptible(cros_ec_debugfs_log_wq,
+ CIRC_CNT(cb->head, cb->tail, LOG_SIZE));
+ if (ret < 0)
+ return ret;
+@@ -173,7 +174,7 @@ static __poll_t cros_ec_console_log_poll(struct file *file,
+ struct cros_ec_debugfs *debug_info = file->private_data;
+ __poll_t mask = 0;
+
+- poll_wait(file, &debug_info->log_wq, wait);
++ poll_wait(file, &cros_ec_debugfs_log_wq, wait);
+
+ mutex_lock(&debug_info->log_mutex);
+ if (CIRC_CNT(debug_info->log_buffer.head,
+@@ -377,7 +378,6 @@ static int cros_ec_create_console_log(struct cros_ec_debugfs *debug_info)
+ debug_info->log_buffer.tail = 0;
+
+ mutex_init(&debug_info->log_mutex);
+- init_waitqueue_head(&debug_info->log_wq);
+
+ debugfs_create_file("console_log", S_IFREG | 0444, debug_info->dir,
+ debug_info, &cros_ec_console_log_fops);
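The cros_ec_debugfs change moves the log readers' waitqueue out of the per-device structure: debug_info can be freed while a reader is still blocked in wait_event_interruptible(), which would leave the sleeper linked into a freed waitqueue head. A file-scope DECLARE_WAIT_QUEUE_HEAD outlives every device instance, so wake-ups and teardown can never race with the head's storage. Sketch with invented names:

#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(example_log_wq);	/* outlives any device */

struct example_dev { int have_data; };		/* illustrative */

static int example_read_blocking(struct example_dev *d)
{
	/* the head's storage is static, so it can no longer be freed
	 * out from under a sleeper
	 */
	return wait_event_interruptible(example_log_wq, d->have_data);
}

static void example_new_data(struct example_dev *d)
{
	d->have_data = 1;
	wake_up(&example_log_wq);
}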
+diff --git a/drivers/platform/surface/surface_gpe.c b/drivers/platform/surface/surface_gpe.c
+index c1775db29efb6..ec66fde28e75a 100644
+--- a/drivers/platform/surface/surface_gpe.c
++++ b/drivers/platform/surface/surface_gpe.c
+@@ -99,6 +99,14 @@ static const struct dmi_system_id dmi_lid_device_table[] = {
+ },
+ .driver_data = (void *)lid_device_props_l4D,
+ },
++ {
++ .ident = "Surface Pro 8",
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Surface Pro 8"),
++ },
++ .driver_data = (void *)lid_device_props_l4B,
++ },
+ {
+ .ident = "Surface Book 1",
+ .matches = {
+diff --git a/drivers/platform/x86/intel/pmt/telemetry.c b/drivers/platform/x86/intel/pmt/telemetry.c
+index 6b6f3e2a617af..f73ecfd4a3092 100644
+--- a/drivers/platform/x86/intel/pmt/telemetry.c
++++ b/drivers/platform/x86/intel/pmt/telemetry.c
+@@ -103,7 +103,7 @@ static int pmt_telem_probe(struct auxiliary_device *auxdev, const struct auxilia
+ auxiliary_set_drvdata(auxdev, priv);
+
+ for (i = 0; i < intel_vsec_dev->num_resources; i++) {
+- struct intel_pmt_entry *entry = &priv->entry[i];
++ struct intel_pmt_entry *entry = &priv->entry[priv->num_entries];
+
+ ret = intel_pmt_dev_create(entry, &pmt_telem_ns, intel_vsec_dev, i);
+ if (ret < 0)
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 3fb8cda31eb9e..0ea71416d292a 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -309,6 +309,15 @@ struct ibm_init_struct {
+ struct ibm_struct *data;
+ };
+
++/* DMI Quirks */
++struct quirk_entry {
++ bool btusb_bug;
++};
++
++static struct quirk_entry quirk_btusb_bug = {
++ .btusb_bug = true,
++};
++
+ static struct {
+ u32 bluetooth:1;
+ u32 hotkey:1;
+@@ -338,6 +347,7 @@ static struct {
+ u32 hotkey_poll_active:1;
+ u32 has_adaptive_kbd:1;
+ u32 kbd_lang:1;
++ struct quirk_entry *quirks;
+ } tp_features;
+
+ static struct {
+@@ -4361,9 +4371,10 @@ static void bluetooth_exit(void)
+ bluetooth_shutdown();
+ }
+
+-static const struct dmi_system_id bt_fwbug_list[] __initconst = {
++static const struct dmi_system_id fwbug_list[] __initconst = {
+ {
+ .ident = "ThinkPad E485",
++ .driver_data = &quirk_btusb_bug,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_BOARD_NAME, "20KU"),
+@@ -4371,6 +4382,7 @@ static const struct dmi_system_id bt_fwbug_list[] __initconst = {
+ },
+ {
+ .ident = "ThinkPad E585",
++ .driver_data = &quirk_btusb_bug,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_BOARD_NAME, "20KV"),
+@@ -4378,6 +4390,7 @@ static const struct dmi_system_id bt_fwbug_list[] __initconst = {
+ },
+ {
+ .ident = "ThinkPad A285 - 20MW",
++ .driver_data = &quirk_btusb_bug,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_BOARD_NAME, "20MW"),
+@@ -4385,6 +4398,7 @@ static const struct dmi_system_id bt_fwbug_list[] __initconst = {
+ },
+ {
+ .ident = "ThinkPad A285 - 20MX",
++ .driver_data = &quirk_btusb_bug,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_BOARD_NAME, "20MX"),
+@@ -4392,6 +4406,7 @@ static const struct dmi_system_id bt_fwbug_list[] __initconst = {
+ },
+ {
+ .ident = "ThinkPad A485 - 20MU",
++ .driver_data = &quirk_btusb_bug,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_BOARD_NAME, "20MU"),
+@@ -4399,6 +4414,7 @@ static const struct dmi_system_id bt_fwbug_list[] __initconst = {
+ },
+ {
+ .ident = "ThinkPad A485 - 20MV",
++ .driver_data = &quirk_btusb_bug,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_BOARD_NAME, "20MV"),
+@@ -4421,7 +4437,8 @@ static int __init have_bt_fwbug(void)
+ * Some AMD based ThinkPads have a firmware bug that calling
+ * "GBDC" will cause bluetooth on Intel wireless cards blocked
+ */
+- if (dmi_check_system(bt_fwbug_list) && pci_dev_present(fwbug_cards_ids)) {
++ if (tp_features.quirks && tp_features.quirks->btusb_bug &&
++ pci_dev_present(fwbug_cards_ids)) {
+ vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_RFKILL,
+ FW_BUG "disable bluetooth subdriver for Intel cards\n");
+ return 1;
+@@ -8749,24 +8766,27 @@ static int __init fan_init(struct ibm_init_struct *iibm)
+ fan_status_access_mode = TPACPI_FAN_RD_TPEC;
+ if (quirks & TPACPI_FAN_Q1)
+ fan_quirk1_setup();
+- if (quirks & TPACPI_FAN_2FAN) {
+- tp_features.second_fan = 1;
+- pr_info("secondary fan support enabled\n");
+- }
+- if (quirks & TPACPI_FAN_2CTL) {
+- tp_features.second_fan = 1;
+- tp_features.second_fan_ctl = 1;
+- pr_info("secondary fan control enabled\n");
+- }
+ /* Try and probe the 2nd fan */
++ tp_features.second_fan = 1; /* needed for get_speed to work */
+ res = fan2_get_speed(&speed);
+ if (res >= 0) {
+ /* It responded - so let's assume it's there */
+ tp_features.second_fan = 1;
+ tp_features.second_fan_ctl = 1;
+ pr_info("secondary fan control detected & enabled\n");
++ } else {
++ /* Fan not auto-detected */
++ tp_features.second_fan = 0;
++ if (quirks & TPACPI_FAN_2FAN) {
++ tp_features.second_fan = 1;
++ pr_info("secondary fan support enabled\n");
++ }
++ if (quirks & TPACPI_FAN_2CTL) {
++ tp_features.second_fan = 1;
++ tp_features.second_fan_ctl = 1;
++ pr_info("secondary fan control enabled\n");
++ }
+ }
+-
+ } else {
+ pr_err("ThinkPad ACPI EC access misbehaving, fan status and control unavailable\n");
+ return -ENODEV;
+@@ -11438,6 +11458,7 @@ static void thinkpad_acpi_module_exit(void)
+
+ static int __init thinkpad_acpi_module_init(void)
+ {
++ const struct dmi_system_id *dmi_id;
+ int ret, i;
+
+ tpacpi_lifecycle = TPACPI_LIFE_INIT;
+@@ -11477,6 +11498,10 @@ static int __init thinkpad_acpi_module_init(void)
+ return -ENODEV;
+ }
+
++ dmi_id = dmi_first_match(fwbug_list);
++ if (dmi_id)
++ tp_features.quirks = dmi_id->driver_data;
++
+ /* Device initialization */
+ tpacpi_pdev = platform_device_register_simple(TPACPI_DRVR_NAME, -1,
+ NULL, 0);
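
The thinkpad_acpi change above generalizes the Bluetooth firmware-bug list into a quirk table: each dmi_system_id entry carries a driver_data pointer to a quirk_entry, the table is matched once at module init with dmi_first_match(), and have_bt_fwbug() then tests the cached quirk flags instead of re-running dmi_check_system(). A minimal sketch of that idiom, assuming hypothetical quirk names:

    #include <linux/dmi.h>

    struct my_quirks { bool btusb_bug; };
    static const struct my_quirks quirk_btusb = { .btusb_bug = true };

    static const struct dmi_system_id my_fwbug_list[] = {
            {
                    .ident = "Example board",
                    .driver_data = (void *)&quirk_btusb,
                    .matches = {
                            DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
                            DMI_MATCH(DMI_BOARD_NAME, "20KU"),
                    },
            },
            { }  /* terminator */
    };

    static const struct my_quirks *active_quirks;

    static int __init my_init(void)
    {
            const struct dmi_system_id *id = dmi_first_match(my_fwbug_list);

            if (id)
                    active_quirks = id->driver_data;
            return 0;
    }
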
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 17ad5f0d13b2a..6585789ed6951 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -625,7 +625,7 @@ __ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u32 adj_val)
+ }
+
+ static void
+-ptp_ocp_adjtime_coarse(struct ptp_ocp *bp, u64 delta_ns)
++ptp_ocp_adjtime_coarse(struct ptp_ocp *bp, s64 delta_ns)
+ {
+ struct timespec64 ts;
+ unsigned long flags;
+@@ -634,7 +634,8 @@ ptp_ocp_adjtime_coarse(struct ptp_ocp *bp, u64 delta_ns)
+ spin_lock_irqsave(&bp->lock, flags);
+ err = __ptp_ocp_gettime_locked(bp, &ts, NULL);
+ if (likely(!err)) {
+- timespec64_add_ns(&ts, delta_ns);
++ set_normalized_timespec64(&ts, ts.tv_sec,
++ ts.tv_nsec + delta_ns);
+ __ptp_ocp_settime_locked(bp, &ts);
+ }
+ spin_unlock_irqrestore(&bp->lock, flags);
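
The ptp_ocp fix above matters because timespec64_add_ns() takes a u64, so a negative delta_ns passed through the old u64 parameter would be reinterpreted as an enormous forward step; switching the parameter to s64 and using set_normalized_timespec64() lets a negative nanosecond value be folded correctly into the seconds field. A small sketch of the normalization (hypothetical helper, not the driver's code):

    #include <linux/time64.h>

    static void example_adjust(struct timespec64 *ts, s64 delta_ns)
    {
            /* tv_nsec may go negative or exceed NSEC_PER_SEC here;
             * set_normalized_timespec64() folds the excess into tv_sec. */
            set_normalized_timespec64(ts, ts->tv_sec, ts->tv_nsec + delta_ns);
    }

    /* e.g. {tv_sec = 10, tv_nsec = 100} adjusted by -200 ns becomes
     * {tv_sec = 9, tv_nsec = 999999900}. */
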
+diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
+index 4b460c61f1d8c..40d504dac1a92 100644
+--- a/drivers/rtc/class.c
++++ b/drivers/rtc/class.c
+@@ -26,6 +26,15 @@ struct class *rtc_class;
+ static void rtc_device_release(struct device *dev)
+ {
+ struct rtc_device *rtc = to_rtc_device(dev);
++ struct timerqueue_head *head = &rtc->timerqueue;
++ struct timerqueue_node *node;
++
++ mutex_lock(&rtc->ops_lock);
++ while ((node = timerqueue_getnext(head)))
++ timerqueue_del(head, node);
++ mutex_unlock(&rtc->ops_lock);
++
++ cancel_work_sync(&rtc->irqwork);
+
+ ida_simple_remove(&rtc_ida, rtc->id);
+ mutex_destroy(&rtc->ops_lock);
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index 562f99b664a24..522449b25921e 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -176,6 +176,17 @@ int mc146818_get_time(struct rtc_time *time)
+ }
+ EXPORT_SYMBOL_GPL(mc146818_get_time);
+
++/* AMD systems don't allow access to AltCentury with DV1 */
++static bool apply_amd_register_a_behavior(void)
++{
++#ifdef CONFIG_X86
++ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
++ return true;
++#endif
++ return false;
++}
++
+ /* Set the current date and time in the real time clock. */
+ int mc146818_set_time(struct rtc_time *time)
+ {
+@@ -249,7 +260,10 @@ int mc146818_set_time(struct rtc_time *time)
+ save_control = CMOS_READ(RTC_CONTROL);
+ CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
+ save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
+- CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
++ if (apply_amd_register_a_behavior())
++ CMOS_WRITE((save_freq_select & ~RTC_AMD_BANK_SELECT), RTC_FREQ_SELECT);
++ else
++ CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+
+ #ifdef CONFIG_MACH_DECSTATION
+ CMOS_WRITE(real_yrs, RTC_DEC_YEAR);
+diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c
+index 81a5b1f2e68c3..6c9d8de41e7be 100644
+--- a/drivers/rtc/rtc-pcf2127.c
++++ b/drivers/rtc/rtc-pcf2127.c
+@@ -374,7 +374,8 @@ static int pcf2127_watchdog_init(struct device *dev, struct pcf2127 *pcf2127)
+ static int pcf2127_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
+ {
+ struct pcf2127 *pcf2127 = dev_get_drvdata(dev);
+- unsigned int buf[5], ctrl2;
++ u8 buf[5];
++ unsigned int ctrl2;
+ int ret;
+
+ ret = regmap_read(pcf2127->regmap, PCF2127_REG_CTRL2, &ctrl2);
+diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
+index 711832c758aea..bcc0c2ce4b4e7 100644
+--- a/drivers/rtc/rtc-sun6i.c
++++ b/drivers/rtc/rtc-sun6i.c
+@@ -138,7 +138,7 @@ struct sun6i_rtc_dev {
+ const struct sun6i_rtc_clk_data *data;
+ void __iomem *base;
+ int irq;
+- unsigned long alarm;
++ time64_t alarm;
+
+ struct clk_hw hw;
+ struct clk_hw *int_osc;
+@@ -510,10 +510,8 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm)
+ struct sun6i_rtc_dev *chip = dev_get_drvdata(dev);
+ struct rtc_time *alrm_tm = &wkalrm->time;
+ struct rtc_time tm_now;
+- unsigned long time_now = 0;
+- unsigned long time_set = 0;
+- unsigned long time_gap = 0;
+- int ret = 0;
++ time64_t time_now, time_set;
++ int ret;
+
+ ret = sun6i_rtc_gettime(dev, &tm_now);
+ if (ret < 0) {
+@@ -528,9 +526,7 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm)
+ return -EINVAL;
+ }
+
+- time_gap = time_set - time_now;
+-
+- if (time_gap > U32_MAX) {
++ if ((time_set - time_now) > U32_MAX) {
+ dev_err(dev, "Date too far in the future\n");
+ return -EINVAL;
+ }
+@@ -539,7 +535,7 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm)
+ writel(0, chip->base + SUN6I_ALRM_COUNTER);
+ usleep_range(100, 300);
+
+- writel(time_gap, chip->base + SUN6I_ALRM_COUNTER);
++ writel(time_set - time_now, chip->base + SUN6I_ALRM_COUNTER);
+ chip->alarm = time_set;
+
+ sun6i_rtc_setaie(wkalrm->enabled, chip);
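
The sun6i change above switches the arithmetic to time64_t (a signed 64-bit type), so time_set - time_now can no longer wrap in an unsigned long before the range check; the result still has to fit the 32-bit alarm counter register, hence the > U32_MAX guard. A sketch of the guard in isolation (hypothetical names and register offset):

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <linux/limits.h>
    #include <linux/time64.h>

    static int example_set_alarm(void __iomem *base, time64_t now, time64_t when)
    {
            if (when <= now)
                    return -EINVAL;  /* alarm must lie in the future */
            if (when - now > U32_MAX)
                    return -EINVAL;  /* hardware counter is only 32 bits */
            writel((u32)(when - now), base + 0x14);  /* hypothetical offset */
            return 0;
    }
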
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index 37d06f993b761..1d9be771f3ee0 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -1172,9 +1172,8 @@ static blk_status_t alua_prep_fn(struct scsi_device *sdev, struct request *req)
+ case SCSI_ACCESS_STATE_OPTIMAL:
+ case SCSI_ACCESS_STATE_ACTIVE:
+ case SCSI_ACCESS_STATE_LBA:
+- return BLK_STS_OK;
+ case SCSI_ACCESS_STATE_TRANSITIONING:
+- return BLK_STS_AGAIN;
++ return BLK_STS_OK;
+ default:
+ req->rq_flags |= RQF_QUIET;
+ return BLK_STS_IOERR;
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index b109716d44fb7..7ab3c9e4d4783 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -3837,6 +3837,9 @@ int qlt_abort_cmd(struct qla_tgt_cmd *cmd)
+
+ spin_lock_irqsave(&cmd->cmd_lock, flags);
+ if (cmd->aborted) {
++ if (cmd->sg_mapped)
++ qlt_unmap_sg(vha, cmd);
++
+ spin_unlock_irqrestore(&cmd->cmd_lock, flags);
+ /*
+ * It's normal to see 2 calls in this path:
+diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
+index b34feba1f53de..8dc818b03939a 100644
+--- a/drivers/scsi/ufs/ufshpb.c
++++ b/drivers/scsi/ufs/ufshpb.c
+@@ -1256,6 +1256,13 @@ void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ struct utp_hpb_rsp *rsp_field = &lrbp->ucd_rsp_ptr->hr;
+ int data_seg_len;
+
++ data_seg_len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2)
++ & MASK_RSP_UPIU_DATA_SEG_LEN;
++
++ /* If data segment length is zero, rsp_field is not valid */
++ if (!data_seg_len)
++ return;
++
+ if (unlikely(lrbp->lun != rsp_field->lun)) {
+ struct scsi_device *sdev;
+ bool found = false;
+@@ -1290,18 +1297,6 @@ void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ return;
+ }
+
+- data_seg_len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2)
+- & MASK_RSP_UPIU_DATA_SEG_LEN;
+-
+- /* To flush remained rsp_list, we queue the map_work task */
+- if (!data_seg_len) {
+- if (!ufshpb_is_general_lun(hpb->lun))
+- return;
+-
+- ufshpb_kick_map_work(hpb);
+- return;
+- }
+-
+ BUILD_BUG_ON(sizeof(struct utp_hpb_rsp) != UTP_HPB_RSP_SIZE);
+
+ if (!ufshpb_is_hpb_rsp_valid(hba, lrbp, rsp_field))
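
The ufshpb reorder above is a validate-before-use fix: the response's data segment length is now checked before any other field of rsp_field is read, because a zero-length data segment means the rest of the structure was never filled in. The general shape, sketched on a raw buffer (hypothetical layout):

    #include <asm/byteorder.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    /* Hypothetical response: a big-endian length word, then the payload. */
    static int example_handle_rsp(const void *buf, size_t bufsz)
    {
            u32 len;

            if (bufsz < sizeof(__be32))
                    return -EINVAL;
            len = be32_to_cpup(buf);
            if (!len)
                    return 0;        /* payload fields were never written */
            if (len > bufsz - sizeof(__be32))
                    return -EINVAL;  /* claimed length exceeds the buffer */
            return 0;                /* safe to parse the payload now */
    }
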
+diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c
+index d86c3a36441ee..3427ce37a5c5b 100644
+--- a/drivers/usb/gadget/legacy/raw_gadget.c
++++ b/drivers/usb/gadget/legacy/raw_gadget.c
+@@ -145,6 +145,7 @@ enum dev_state {
+ STATE_DEV_INVALID = 0,
+ STATE_DEV_OPENED,
+ STATE_DEV_INITIALIZED,
++ STATE_DEV_REGISTERING,
+ STATE_DEV_RUNNING,
+ STATE_DEV_CLOSED,
+ STATE_DEV_FAILED
+@@ -508,6 +509,7 @@ static int raw_ioctl_run(struct raw_dev *dev, unsigned long value)
+ ret = -EINVAL;
+ goto out_unlock;
+ }
++ dev->state = STATE_DEV_REGISTERING;
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ ret = usb_gadget_probe_driver(&dev->driver);
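
The raw_gadget change above closes a race: the device state is advanced to the new intermediate STATE_DEV_REGISTERING while the spinlock is still held, so a concurrent ioctl that re-checks the state cannot start a second registration once the lock is dropped for the sleeping call. A sketch of the claim-then-do-slow-work pattern (hypothetical names):

    #include <linux/spinlock.h>

    struct example_dev {
            spinlock_t lock;
            enum { STATE_INITIALIZED, STATE_REGISTERING,
                   STATE_RUNNING, STATE_FAILED } state;
    };

    static int slow_registration(struct example_dev *dev); /* hypothetical */

    static int example_run(struct example_dev *dev)
    {
            unsigned long flags;
            int ret;

            spin_lock_irqsave(&dev->lock, flags);
            if (dev->state != STATE_INITIALIZED) {
                    spin_unlock_irqrestore(&dev->lock, flags);
                    return -EINVAL;
            }
            dev->state = STATE_REGISTERING;  /* claim the transition here */
            spin_unlock_irqrestore(&dev->lock, flags);

            ret = slow_registration(dev);    /* may sleep; lock not held */

            spin_lock_irqsave(&dev->lock, flags);
            dev->state = ret ? STATE_FAILED : STATE_RUNNING;
            spin_unlock_irqrestore(&dev->lock, flags);
            return ret;
    }
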
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 1b5de3af1a627..9c45be8ab1788 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -161,6 +161,7 @@ struct mlx5_vdpa_net {
+ struct mlx5_flow_handle *rx_rule_mcast;
+ bool setup;
+ u32 cur_num_vqs;
++ u32 rqt_size;
+ struct notifier_block nb;
+ struct vdpa_callback config_cb;
+ struct mlx5_vdpa_wq_ent cvq_ent;
+@@ -204,17 +205,12 @@ static __virtio16 cpu_to_mlx5vdpa16(struct mlx5_vdpa_dev *mvdev, u16 val)
+ return __cpu_to_virtio16(mlx5_vdpa_is_little_endian(mvdev), val);
+ }
+
+-static inline u32 mlx5_vdpa_max_qps(int max_vqs)
+-{
+- return max_vqs / 2;
+-}
+-
+ static u16 ctrl_vq_idx(struct mlx5_vdpa_dev *mvdev)
+ {
+ if (!(mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_MQ)))
+ return 2;
+
+- return 2 * mlx5_vdpa_max_qps(mvdev->max_vqs);
++ return mvdev->max_vqs;
+ }
+
+ static bool is_ctrl_vq_idx(struct mlx5_vdpa_dev *mvdev, u16 idx)
+@@ -1236,25 +1232,13 @@ static void teardown_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *
+ static int create_rqt(struct mlx5_vdpa_net *ndev)
+ {
+ __be32 *list;
+- int max_rqt;
+ void *rqtc;
+ int inlen;
+ void *in;
+ int i, j;
+ int err;
+- int num;
+-
+- if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_MQ)))
+- num = 1;
+- else
+- num = ndev->cur_num_vqs / 2;
+
+- max_rqt = min_t(int, roundup_pow_of_two(num),
+- 1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
+- if (max_rqt < 1)
+- return -EOPNOTSUPP;
+-
+- inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + max_rqt * MLX5_ST_SZ_BYTES(rq_num);
++ inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + ndev->rqt_size * MLX5_ST_SZ_BYTES(rq_num);
+ in = kzalloc(inlen, GFP_KERNEL);
+ if (!in)
+ return -ENOMEM;
+@@ -1263,12 +1247,12 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
+ rqtc = MLX5_ADDR_OF(create_rqt_in, in, rqt_context);
+
+ MLX5_SET(rqtc, rqtc, list_q_type, MLX5_RQTC_LIST_Q_TYPE_VIRTIO_NET_Q);
+- MLX5_SET(rqtc, rqtc, rqt_max_size, max_rqt);
++ MLX5_SET(rqtc, rqtc, rqt_max_size, ndev->rqt_size);
+ list = MLX5_ADDR_OF(rqtc, rqtc, rq_num[0]);
+- for (i = 0, j = 0; i < max_rqt; i++, j += 2)
+- list[i] = cpu_to_be32(ndev->vqs[j % (2 * num)].virtq_id);
++ for (i = 0, j = 0; i < ndev->rqt_size; i++, j += 2)
++ list[i] = cpu_to_be32(ndev->vqs[j % ndev->cur_num_vqs].virtq_id);
+
+- MLX5_SET(rqtc, rqtc, rqt_actual_size, max_rqt);
++ MLX5_SET(rqtc, rqtc, rqt_actual_size, ndev->rqt_size);
+ err = mlx5_vdpa_create_rqt(&ndev->mvdev, in, inlen, &ndev->res.rqtn);
+ kfree(in);
+ if (err)
+@@ -1282,19 +1266,13 @@ static int create_rqt(struct mlx5_vdpa_net *ndev)
+ static int modify_rqt(struct mlx5_vdpa_net *ndev, int num)
+ {
+ __be32 *list;
+- int max_rqt;
+ void *rqtc;
+ int inlen;
+ void *in;
+ int i, j;
+ int err;
+
+- max_rqt = min_t(int, roundup_pow_of_two(ndev->cur_num_vqs / 2),
+- 1 << MLX5_CAP_GEN(ndev->mvdev.mdev, log_max_rqt_size));
+- if (max_rqt < 1)
+- return -EOPNOTSUPP;
+-
+- inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) + max_rqt * MLX5_ST_SZ_BYTES(rq_num);
++ inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) + ndev->rqt_size * MLX5_ST_SZ_BYTES(rq_num);
+ in = kzalloc(inlen, GFP_KERNEL);
+ if (!in)
+ return -ENOMEM;
+@@ -1305,10 +1283,10 @@ static int modify_rqt(struct mlx5_vdpa_net *ndev, int num)
+ MLX5_SET(rqtc, rqtc, list_q_type, MLX5_RQTC_LIST_Q_TYPE_VIRTIO_NET_Q);
+
+ list = MLX5_ADDR_OF(rqtc, rqtc, rq_num[0]);
+- for (i = 0, j = 0; i < max_rqt; i++, j += 2)
++ for (i = 0, j = 0; i < ndev->rqt_size; i++, j += 2)
+ list[i] = cpu_to_be32(ndev->vqs[j % num].virtq_id);
+
+- MLX5_SET(rqtc, rqtc, rqt_actual_size, max_rqt);
++ MLX5_SET(rqtc, rqtc, rqt_actual_size, ndev->rqt_size);
+ err = mlx5_vdpa_modify_rqt(&ndev->mvdev, in, inlen, ndev->res.rqtn);
+ kfree(in);
+ if (err)
+@@ -1582,7 +1560,7 @@ static virtio_net_ctrl_ack handle_ctrl_mq(struct mlx5_vdpa_dev *mvdev, u8 cmd)
+
+ newqps = mlx5vdpa16_to_cpu(mvdev, mq.virtqueue_pairs);
+ if (newqps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
+- newqps > mlx5_vdpa_max_qps(mvdev->max_vqs))
++ newqps > ndev->rqt_size)
+ break;
+
+ if (ndev->cur_num_vqs == 2 * newqps) {
+@@ -1937,7 +1915,7 @@ static int setup_virtqueues(struct mlx5_vdpa_dev *mvdev)
+ int err;
+ int i;
+
+- for (i = 0; i < 2 * mlx5_vdpa_max_qps(mvdev->max_vqs); i++) {
++ for (i = 0; i < mvdev->max_vqs; i++) {
+ err = setup_vq(ndev, &ndev->vqs[i]);
+ if (err)
+ goto err_vq;
+@@ -2008,9 +1986,11 @@ static int mlx5_vdpa_set_driver_features(struct vdpa_device *vdev, u64 features)
+
+ ndev->mvdev.actual_features = features & ndev->mvdev.mlx_features;
+ if (ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_MQ))
+- ndev->cur_num_vqs = 2 * mlx5vdpa16_to_cpu(mvdev, ndev->config.max_virtqueue_pairs);
++ ndev->rqt_size = mlx5vdpa16_to_cpu(mvdev, ndev->config.max_virtqueue_pairs);
+ else
+- ndev->cur_num_vqs = 2;
++ ndev->rqt_size = 1;
++
++ ndev->cur_num_vqs = 2 * ndev->rqt_size;
+
+ update_cvq_info(mvdev);
+ return err;
+@@ -2463,7 +2443,7 @@ static void init_mvqs(struct mlx5_vdpa_net *ndev)
+ struct mlx5_vdpa_virtqueue *mvq;
+ int i;
+
+- for (i = 0; i < 2 * mlx5_vdpa_max_qps(ndev->mvdev.max_vqs); ++i) {
++ for (i = 0; i < ndev->mvdev.max_vqs; ++i) {
+ mvq = &ndev->vqs[i];
+ memset(mvq, 0, offsetof(struct mlx5_vdpa_virtqueue, ri));
+ mvq->index = i;
+@@ -2583,7 +2563,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ return -EOPNOTSUPP;
+ }
+
+- max_vqs = MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues);
++ max_vqs = min_t(int, MLX5_CAP_DEV_VDPA_EMULATION(mdev, max_num_virtio_queues),
++ 1 << MLX5_CAP_GEN(mdev, log_max_rqt_size));
+ if (max_vqs < 2) {
+ dev_warn(mdev->device,
+ "%d virtqueues are supported. At least 2 are required\n",
+@@ -2647,7 +2628,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ ndev->mvdev.mlx_features |= BIT_ULL(VIRTIO_NET_F_MAC);
+ }
+
+- config->max_virtqueue_pairs = cpu_to_mlx5vdpa16(mvdev, mlx5_vdpa_max_qps(max_vqs));
++ config->max_virtqueue_pairs = cpu_to_mlx5vdpa16(mvdev, max_vqs / 2);
+ mvdev->vdev.dma_dev = &mdev->pdev->dev;
+ err = mlx5_vdpa_alloc_resources(&ndev->mvdev);
+ if (err)
+@@ -2674,7 +2655,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ ndev->nb.notifier_call = event_handler;
+ mlx5_notifier_register(mdev, &ndev->nb);
+ mvdev->vdev.mdev = &mgtdev->mgtdev;
+- err = _vdpa_register_device(&mvdev->vdev, 2 * mlx5_vdpa_max_qps(max_vqs) + 1);
++ err = _vdpa_register_device(&mvdev->vdev, max_vqs + 1);
+ if (err)
+ goto err_reg;
+
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 792ab5f236471..297b5db474545 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -1450,13 +1450,9 @@ err:
+ return ERR_PTR(r);
+ }
+
+-static struct ptr_ring *get_tap_ptr_ring(int fd)
++static struct ptr_ring *get_tap_ptr_ring(struct file *file)
+ {
+ struct ptr_ring *ring;
+- struct file *file = fget(fd);
+-
+- if (!file)
+- return NULL;
+ ring = tun_get_tx_ring(file);
+ if (!IS_ERR(ring))
+ goto out;
+@@ -1465,7 +1461,6 @@ static struct ptr_ring *get_tap_ptr_ring(int fd)
+ goto out;
+ ring = NULL;
+ out:
+- fput(file);
+ return ring;
+ }
+
+@@ -1552,8 +1547,12 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
+ r = vhost_net_enable_vq(n, vq);
+ if (r)
+ goto err_used;
+- if (index == VHOST_NET_VQ_RX)
+- nvq->rx_ring = get_tap_ptr_ring(fd);
++ if (index == VHOST_NET_VQ_RX) {
++ if (sock)
++ nvq->rx_ring = get_tap_ptr_ring(sock->file);
++ else
++ nvq->rx_ring = NULL;
++ }
+
+ oldubufs = nvq->ubufs;
+ nvq->ubufs = ubufs;
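
The vhost-net change above stops get_tap_ptr_ring() from doing its own fget()/fput() on the fd and instead reuses the struct file already pinned through the socket reference, presumably so the ring is taken from the very file just validated rather than from a re-resolved fd that could have been closed or reused in the meantime. For contrast, the classic fget/fput pairing looks like this (hypothetical use; inspect() is a stand-in):

    #include <linux/errno.h>
    #include <linux/file.h>

    static int inspect(struct file *file); /* hypothetical */

    static int example_peek(int fd)
    {
            struct file *file = fget(fd);  /* takes a reference, or NULL */
            int ret;

            if (!file)
                    return -EBADF;
            ret = inspect(file);           /* file cannot vanish meanwhile */
            fput(file);                    /* drop the temporary reference */
            return ret;
    }
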
+diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
+index ec5249e8c32d9..05f5fd2af58f8 100644
+--- a/drivers/vhost/vdpa.c
++++ b/drivers/vhost/vdpa.c
+@@ -97,8 +97,11 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
+ return;
+
+ irq = ops->get_vq_irq(vdpa, qid);
++ if (irq < 0)
++ return;
++
+ irq_bypass_unregister_producer(&vq->call_ctx.producer);
+- if (!vq->call_ctx.ctx || irq < 0)
++ if (!vq->call_ctx.ctx)
+ return;
+
+ vq->call_ctx.producer.token = vq->call_ctx.ctx;
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 10a9369c9dea4..00f0f282e7a13 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1438,10 +1438,7 @@ fb_release(struct inode *inode, struct file *file)
+ __acquires(&info->lock)
+ __releases(&info->lock)
+ {
+- struct fb_info * const info = file_fb_info(file);
+-
+- if (!info)
+- return -ENODEV;
++ struct fb_info * const info = file->private_data;
+
+ lock_fb_info(info);
+ if (info->fbops->fb_release)
+diff --git a/drivers/video/fbdev/core/fbsysfs.c b/drivers/video/fbdev/core/fbsysfs.c
+index 26892940c2136..82e31a2d845e1 100644
+--- a/drivers/video/fbdev/core/fbsysfs.c
++++ b/drivers/video/fbdev/core/fbsysfs.c
+@@ -80,6 +80,10 @@ void framebuffer_release(struct fb_info *info)
+ {
+ if (!info)
+ return;
++
++ if (WARN_ON(refcount_read(&info->count)))
++ return;
++
+ kfree(info->apertures);
+ kfree(info);
+ }
+diff --git a/drivers/watchdog/sp5100_tco.c b/drivers/watchdog/sp5100_tco.c
+index dd9a744f82f85..86ffb58fbc854 100644
+--- a/drivers/watchdog/sp5100_tco.c
++++ b/drivers/watchdog/sp5100_tco.c
+@@ -49,7 +49,7 @@
+ /* internal variables */
+
+ enum tco_reg_layout {
+- sp5100, sb800, efch
++ sp5100, sb800, efch, efch_mmio
+ };
+
+ struct sp5100_tco {
+@@ -86,6 +86,10 @@ static enum tco_reg_layout tco_reg_layout(struct pci_dev *dev)
+ dev->device == PCI_DEVICE_ID_ATI_SBX00_SMBUS &&
+ dev->revision < 0x40) {
+ return sp5100;
++ } else if (dev->vendor == PCI_VENDOR_ID_AMD &&
++ sp5100_tco_pci->device == PCI_DEVICE_ID_AMD_KERNCZ_SMBUS &&
++ sp5100_tco_pci->revision >= AMD_ZEN_SMBUS_PCI_REV) {
++ return efch_mmio;
+ } else if (dev->vendor == PCI_VENDOR_ID_AMD &&
+ ((dev->device == PCI_DEVICE_ID_AMD_HUDSON2_SMBUS &&
+ dev->revision >= 0x41) ||
+@@ -209,6 +213,8 @@ static void tco_timer_enable(struct sp5100_tco *tco)
+ ~EFCH_PM_WATCHDOG_DISABLE,
+ EFCH_PM_DECODEEN_SECOND_RES);
+ break;
++ default:
++ break;
+ }
+ }
+
+@@ -223,14 +229,195 @@ static u32 sp5100_tco_read_pm_reg32(u8 index)
+ return val;
+ }
+
++static u32 sp5100_tco_request_region(struct device *dev,
++ u32 mmio_addr,
++ const char *dev_name)
++{
++ if (!devm_request_mem_region(dev, mmio_addr, SP5100_WDT_MEM_MAP_SIZE,
++ dev_name)) {
++ dev_dbg(dev, "MMIO address 0x%08x already in use\n", mmio_addr);
++ return 0;
++ }
++
++ return mmio_addr;
++}
++
++static u32 sp5100_tco_prepare_base(struct sp5100_tco *tco,
++ u32 mmio_addr,
++ u32 alt_mmio_addr,
++ const char *dev_name)
++{
++ struct device *dev = tco->wdd.parent;
++
++ dev_dbg(dev, "Got 0x%08x from SBResource_MMIO register\n", mmio_addr);
++
++ if (!mmio_addr && !alt_mmio_addr)
++ return -ENODEV;
++
++ /* Check for MMIO address and alternate MMIO address conflicts */
++ if (mmio_addr)
++ mmio_addr = sp5100_tco_request_region(dev, mmio_addr, dev_name);
++
++ if (!mmio_addr && alt_mmio_addr)
++ mmio_addr = sp5100_tco_request_region(dev, alt_mmio_addr, dev_name);
++
++ if (!mmio_addr) {
++ dev_err(dev, "Failed to reserve MMIO or alternate MMIO region\n");
++ return -EBUSY;
++ }
++
++ tco->tcobase = devm_ioremap(dev, mmio_addr, SP5100_WDT_MEM_MAP_SIZE);
++ if (!tco->tcobase) {
++ dev_err(dev, "MMIO address 0x%08x failed mapping\n", mmio_addr);
++ devm_release_mem_region(dev, mmio_addr, SP5100_WDT_MEM_MAP_SIZE);
++ return -ENOMEM;
++ }
++
++ dev_info(dev, "Using 0x%08x for watchdog MMIO address\n", mmio_addr);
++
++ return 0;
++}
++
++static int sp5100_tco_timer_init(struct sp5100_tco *tco)
++{
++ struct watchdog_device *wdd = &tco->wdd;
++ struct device *dev = wdd->parent;
++ u32 val;
++
++ val = readl(SP5100_WDT_CONTROL(tco->tcobase));
++ if (val & SP5100_WDT_DISABLED) {
++ dev_err(dev, "Watchdog hardware is disabled\n");
++ return -ENODEV;
++ }
++
++ /*
++ * Save WatchDogFired status, because WatchDogFired flag is
++ * cleared here.
++ */
++ if (val & SP5100_WDT_FIRED)
++ wdd->bootstatus = WDIOF_CARDRESET;
++
++ /* Set watchdog action to reset the system */
++ val &= ~SP5100_WDT_ACTION_RESET;
++ writel(val, SP5100_WDT_CONTROL(tco->tcobase));
++
++ /* Set a reasonable heartbeat before we stop the timer */
++ tco_timer_set_timeout(wdd, wdd->timeout);
++
++ /*
++ * Stop the TCO before we change anything so we don't race with
++ * a zeroed timer.
++ */
++ tco_timer_stop(wdd);
++
++ return 0;
++}
++
++static u8 efch_read_pm_reg8(void __iomem *addr, u8 index)
++{
++ return readb(addr + index);
++}
++
++static void efch_update_pm_reg8(void __iomem *addr, u8 index, u8 reset, u8 set)
++{
++ u8 val;
++
++ val = readb(addr + index);
++ val &= reset;
++ val |= set;
++ writeb(val, addr + index);
++}
++
++static void tco_timer_enable_mmio(void __iomem *addr)
++{
++ efch_update_pm_reg8(addr, EFCH_PM_DECODEEN3,
++ ~EFCH_PM_WATCHDOG_DISABLE,
++ EFCH_PM_DECODEEN_SECOND_RES);
++}
++
++static int sp5100_tco_setupdevice_mmio(struct device *dev,
++ struct watchdog_device *wdd)
++{
++ struct sp5100_tco *tco = watchdog_get_drvdata(wdd);
++ const char *dev_name = SB800_DEVNAME;
++ u32 mmio_addr = 0, alt_mmio_addr = 0;
++ struct resource *res;
++ void __iomem *addr;
++ int ret;
++ u32 val;
++
++ res = request_mem_region_muxed(EFCH_PM_ACPI_MMIO_PM_ADDR,
++ EFCH_PM_ACPI_MMIO_PM_SIZE,
++ "sp5100_tco");
++
++ if (!res) {
++ dev_err(dev,
++ "Memory region 0x%08x already in use\n",
++ EFCH_PM_ACPI_MMIO_PM_ADDR);
++ return -EBUSY;
++ }
++
++ addr = ioremap(EFCH_PM_ACPI_MMIO_PM_ADDR, EFCH_PM_ACPI_MMIO_PM_SIZE);
++ if (!addr) {
++ dev_err(dev, "Address mapping failed\n");
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ /*
++ * EFCH_PM_DECODEEN_WDT_TMREN is dual purpose. This bitfield
++ * enables sp5100_tco register MMIO space decoding. The bitfield
++ * also starts the timer operation. Enable if not already enabled.
++ */
++ val = efch_read_pm_reg8(addr, EFCH_PM_DECODEEN);
++ if (!(val & EFCH_PM_DECODEEN_WDT_TMREN)) {
++ efch_update_pm_reg8(addr, EFCH_PM_DECODEEN, 0xff,
++ EFCH_PM_DECODEEN_WDT_TMREN);
++ }
++
++ /* Error if the timer could not be enabled */
++ val = efch_read_pm_reg8(addr, EFCH_PM_DECODEEN);
++ if (!(val & EFCH_PM_DECODEEN_WDT_TMREN)) {
++ dev_err(dev, "Failed to enable the timer\n");
++ ret = -EFAULT;
++ goto out;
++ }
++
++ mmio_addr = EFCH_PM_WDT_ADDR;
++
++ /* Determine alternate MMIO base address */
++ val = efch_read_pm_reg8(addr, EFCH_PM_ISACONTROL);
++ if (val & EFCH_PM_ISACONTROL_MMIOEN)
++ alt_mmio_addr = EFCH_PM_ACPI_MMIO_ADDR +
++ EFCH_PM_ACPI_MMIO_WDT_OFFSET;
++
++ ret = sp5100_tco_prepare_base(tco, mmio_addr, alt_mmio_addr, dev_name);
++ if (!ret) {
++ tco_timer_enable_mmio(addr);
++ ret = sp5100_tco_timer_init(tco);
++ }
++
++out:
++ if (addr)
++ iounmap(addr);
++
++ release_resource(res);
++
++ return ret;
++}
++
+ static int sp5100_tco_setupdevice(struct device *dev,
+ struct watchdog_device *wdd)
+ {
+ struct sp5100_tco *tco = watchdog_get_drvdata(wdd);
+ const char *dev_name;
+ u32 mmio_addr = 0, val;
++ u32 alt_mmio_addr = 0;
+ int ret;
+
++ if (tco->tco_reg_layout == efch_mmio)
++ return sp5100_tco_setupdevice_mmio(dev, wdd);
++
+ /* Request the IO ports used by this driver */
+ if (!request_muxed_region(SP5100_IO_PM_INDEX_REG,
+ SP5100_PM_IOPORTS_SIZE, "sp5100_tco")) {
+@@ -247,138 +434,55 @@ static int sp5100_tco_setupdevice(struct device *dev,
+ dev_name = SP5100_DEVNAME;
+ mmio_addr = sp5100_tco_read_pm_reg32(SP5100_PM_WATCHDOG_BASE) &
+ 0xfffffff8;
++
++ /*
++ * Secondly, find the watchdog timer MMIO address
++ * from SBResource_MMIO register.
++ */
++
++ /* Read SBResource_MMIO from PCI config(PCI_Reg: 9Ch) */
++ pci_read_config_dword(sp5100_tco_pci,
++ SP5100_SB_RESOURCE_MMIO_BASE,
++ &val);
++
++ /* Verify MMIO is enabled and using bar0 */
++ if ((val & SB800_ACPI_MMIO_MASK) == SB800_ACPI_MMIO_DECODE_EN)
++ alt_mmio_addr = (val & ~0xfff) + SB800_PM_WDT_MMIO_OFFSET;
+ break;
+ case sb800:
+ dev_name = SB800_DEVNAME;
+ mmio_addr = sp5100_tco_read_pm_reg32(SB800_PM_WATCHDOG_BASE) &
+ 0xfffffff8;
++
++ /* Read SBResource_MMIO from AcpiMmioEn(PM_Reg: 24h) */
++ val = sp5100_tco_read_pm_reg32(SB800_PM_ACPI_MMIO_EN);
++
++ /* Verify MMIO is enabled and using bar0 */
++ if ((val & SB800_ACPI_MMIO_MASK) == SB800_ACPI_MMIO_DECODE_EN)
++ alt_mmio_addr = (val & ~0xfff) + SB800_PM_WDT_MMIO_OFFSET;
+ break;
+ case efch:
+ dev_name = SB800_DEVNAME;
+- /*
+- * On Family 17h devices, the EFCH_PM_DECODEEN_WDT_TMREN bit of
+- * EFCH_PM_DECODEEN not only enables the EFCH_PM_WDT_ADDR memory
+- * region, it also enables the watchdog itself.
+- */
+- if (boot_cpu_data.x86 == 0x17) {
+- val = sp5100_tco_read_pm_reg8(EFCH_PM_DECODEEN);
+- if (!(val & EFCH_PM_DECODEEN_WDT_TMREN)) {
+- sp5100_tco_update_pm_reg8(EFCH_PM_DECODEEN, 0xff,
+- EFCH_PM_DECODEEN_WDT_TMREN);
+- }
+- }
+ val = sp5100_tco_read_pm_reg8(EFCH_PM_DECODEEN);
+ if (val & EFCH_PM_DECODEEN_WDT_TMREN)
+ mmio_addr = EFCH_PM_WDT_ADDR;
++
++ val = sp5100_tco_read_pm_reg8(EFCH_PM_ISACONTROL);
++ if (val & EFCH_PM_ISACONTROL_MMIOEN)
++ alt_mmio_addr = EFCH_PM_ACPI_MMIO_ADDR +
++ EFCH_PM_ACPI_MMIO_WDT_OFFSET;
+ break;
+ default:
+ return -ENODEV;
+ }
+
+- /* Check MMIO address conflict */
+- if (!mmio_addr ||
+- !devm_request_mem_region(dev, mmio_addr, SP5100_WDT_MEM_MAP_SIZE,
+- dev_name)) {
+- if (mmio_addr)
+- dev_dbg(dev, "MMIO address 0x%08x already in use\n",
+- mmio_addr);
+- switch (tco->tco_reg_layout) {
+- case sp5100:
+- /*
+- * Secondly, Find the watchdog timer MMIO address
+- * from SBResource_MMIO register.
+- */
+- /* Read SBResource_MMIO from PCI config(PCI_Reg: 9Ch) */
+- pci_read_config_dword(sp5100_tco_pci,
+- SP5100_SB_RESOURCE_MMIO_BASE,
+- &mmio_addr);
+- if ((mmio_addr & (SB800_ACPI_MMIO_DECODE_EN |
+- SB800_ACPI_MMIO_SEL)) !=
+- SB800_ACPI_MMIO_DECODE_EN) {
+- ret = -ENODEV;
+- goto unreg_region;
+- }
+- mmio_addr &= ~0xFFF;
+- mmio_addr += SB800_PM_WDT_MMIO_OFFSET;
+- break;
+- case sb800:
+- /* Read SBResource_MMIO from AcpiMmioEn(PM_Reg: 24h) */
+- mmio_addr =
+- sp5100_tco_read_pm_reg32(SB800_PM_ACPI_MMIO_EN);
+- if ((mmio_addr & (SB800_ACPI_MMIO_DECODE_EN |
+- SB800_ACPI_MMIO_SEL)) !=
+- SB800_ACPI_MMIO_DECODE_EN) {
+- ret = -ENODEV;
+- goto unreg_region;
+- }
+- mmio_addr &= ~0xFFF;
+- mmio_addr += SB800_PM_WDT_MMIO_OFFSET;
+- break;
+- case efch:
+- val = sp5100_tco_read_pm_reg8(EFCH_PM_ISACONTROL);
+- if (!(val & EFCH_PM_ISACONTROL_MMIOEN)) {
+- ret = -ENODEV;
+- goto unreg_region;
+- }
+- mmio_addr = EFCH_PM_ACPI_MMIO_ADDR +
+- EFCH_PM_ACPI_MMIO_WDT_OFFSET;
+- break;
+- }
+- dev_dbg(dev, "Got 0x%08x from SBResource_MMIO register\n",
+- mmio_addr);
+- if (!devm_request_mem_region(dev, mmio_addr,
+- SP5100_WDT_MEM_MAP_SIZE,
+- dev_name)) {
+- dev_dbg(dev, "MMIO address 0x%08x already in use\n",
+- mmio_addr);
+- ret = -EBUSY;
+- goto unreg_region;
+- }
+- }
+-
+- tco->tcobase = devm_ioremap(dev, mmio_addr, SP5100_WDT_MEM_MAP_SIZE);
+- if (!tco->tcobase) {
+- dev_err(dev, "failed to get tcobase address\n");
+- ret = -ENOMEM;
+- goto unreg_region;
+- }
+-
+- dev_info(dev, "Using 0x%08x for watchdog MMIO address\n", mmio_addr);
+-
+- /* Setup the watchdog timer */
+- tco_timer_enable(tco);
+-
+- val = readl(SP5100_WDT_CONTROL(tco->tcobase));
+- if (val & SP5100_WDT_DISABLED) {
+- dev_err(dev, "Watchdog hardware is disabled\n");
+- ret = -ENODEV;
+- goto unreg_region;
++ ret = sp5100_tco_prepare_base(tco, mmio_addr, alt_mmio_addr, dev_name);
++ if (!ret) {
++ /* Setup the watchdog timer */
++ tco_timer_enable(tco);
++ ret = sp5100_tco_timer_init(tco);
+ }
+
+- /*
+- * Save WatchDogFired status, because WatchDogFired flag is
+- * cleared here.
+- */
+- if (val & SP5100_WDT_FIRED)
+- wdd->bootstatus = WDIOF_CARDRESET;
+- /* Set watchdog action to reset the system */
+- val &= ~SP5100_WDT_ACTION_RESET;
+- writel(val, SP5100_WDT_CONTROL(tco->tcobase));
+-
+- /* Set a reasonable heartbeat before we stop the timer */
+- tco_timer_set_timeout(wdd, wdd->timeout);
+-
+- /*
+- * Stop the TCO before we change anything so we don't race with
+- * a zeroed timer.
+- */
+- tco_timer_stop(wdd);
+-
+- release_region(SP5100_IO_PM_INDEX_REG, SP5100_PM_IOPORTS_SIZE);
+-
+- return 0;
+-
+-unreg_region:
+ release_region(SP5100_IO_PM_INDEX_REG, SP5100_PM_IOPORTS_SIZE);
+ return ret;
+ }
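
The sp5100_tco refactor above factors the duplicated reserve-and-map logic into sp5100_tco_prepare_base(): try the primary MMIO address, fall back to the alternate one, and only then devm_ioremap() whichever reservation succeeded. A minimal sketch of the reserve-then-map sequence (hypothetical names and sizes):

    #include <linux/device.h>
    #include <linux/io.h>
    #include <linux/ioport.h>

    static void __iomem *example_map(struct device *dev, u32 addr, u32 size)
    {
            if (!devm_request_mem_region(dev, addr, size, "example"))
                    return NULL;  /* region already claimed elsewhere */

            return devm_ioremap(dev, addr, size);  /* NULL on map failure */
    }
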
+diff --git a/drivers/watchdog/sp5100_tco.h b/drivers/watchdog/sp5100_tco.h
+index adf015aa4126f..6a0986d2c94b7 100644
+--- a/drivers/watchdog/sp5100_tco.h
++++ b/drivers/watchdog/sp5100_tco.h
+@@ -58,6 +58,7 @@
+ #define SB800_PM_WATCHDOG_SECOND_RES GENMASK(1, 0)
+ #define SB800_ACPI_MMIO_DECODE_EN BIT(0)
+ #define SB800_ACPI_MMIO_SEL BIT(1)
++#define SB800_ACPI_MMIO_MASK GENMASK(1, 0)
+
+ #define SB800_PM_WDT_MMIO_OFFSET 0xB00
+
+@@ -82,4 +83,10 @@
+ #define EFCH_PM_ISACONTROL_MMIOEN BIT(1)
+
+ #define EFCH_PM_ACPI_MMIO_ADDR 0xfed80000
++#define EFCH_PM_ACPI_MMIO_PM_OFFSET 0x00000300
+ #define EFCH_PM_ACPI_MMIO_WDT_OFFSET 0x00000b00
++
++#define EFCH_PM_ACPI_MMIO_PM_ADDR (EFCH_PM_ACPI_MMIO_ADDR + \
++ EFCH_PM_ACPI_MMIO_PM_OFFSET)
++#define EFCH_PM_ACPI_MMIO_PM_SIZE 8
++#define AMD_ZEN_SMBUS_PCI_REV 0x51
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 5964f8aee090f..0d6c0885b2d74 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -727,10 +727,22 @@ int afs_getattr(struct user_namespace *mnt_userns, const struct path *path,
+ {
+ struct inode *inode = d_inode(path->dentry);
+ struct afs_vnode *vnode = AFS_FS_I(inode);
+- int seq = 0;
++ struct key *key;
++ int ret, seq = 0;
+
+ _enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation);
+
++ if (!(query_flags & AT_STATX_DONT_SYNC) &&
++ !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {
++ key = afs_request_key(vnode->volume->cell);
++ if (IS_ERR(key))
++ return PTR_ERR(key);
++ ret = afs_validate(vnode, key);
++ key_put(key);
++ if (ret < 0)
++ return ret;
++ }
++
+ do {
+ read_seqbegin_or_lock(&vnode->cb_lock, &seq);
+ generic_fillattr(&init_user_ns, inode, stat);
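
The afs change above makes getattr revalidate the vnode with the server when the callback promise has lapsed, unless the caller passed AT_STATX_DONT_SYNC, which explicitly permits possibly-stale cached attributes. From userspace, the flag is requested like this (sketch; the path is hypothetical):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
            struct statx stx;

            /* Ask only for cached attributes; the filesystem may skip
             * synchronising with the server. */
            if (statx(AT_FDCWD, "/afs/example/file", AT_STATX_DONT_SYNC,
                      STATX_BASIC_STATS, &stx) == 0)
                    printf("size=%llu\n", (unsigned long long)stx.stx_size);
            return 0;
    }
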
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index ea00e1a91250c..9d334816eac07 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -94,7 +94,7 @@ static void cifs_debug_tcon(struct seq_file *m, struct cifs_tcon *tcon)
+ le32_to_cpu(tcon->fsDevInfo.DeviceCharacteristics),
+ le32_to_cpu(tcon->fsAttrInfo.Attributes),
+ le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength),
+- tcon->tidStatus);
++ tcon->status);
+ if (dev_type == FILE_DEVICE_DISK)
+ seq_puts(m, " type: DISK ");
+ else if (dev_type == FILE_DEVICE_CD_ROM)
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 10aa0fb946138..59d22261e0821 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -701,14 +701,14 @@ static void cifs_umount_begin(struct super_block *sb)
+ tcon = cifs_sb_master_tcon(cifs_sb);
+
+ spin_lock(&cifs_tcp_ses_lock);
+- if ((tcon->tc_count > 1) || (tcon->tidStatus == CifsExiting)) {
++ if ((tcon->tc_count > 1) || (tcon->status == TID_EXITING)) {
+ /* we have other mounts to same share or we have
+ already tried to force umount this and woken up
+ all waiting network requests, nothing to do */
+ spin_unlock(&cifs_tcp_ses_lock);
+ return;
+ } else if (tcon->tc_count == 1)
+- tcon->tidStatus = CifsExiting;
++ tcon->status = TID_EXITING;
+ spin_unlock(&cifs_tcp_ses_lock);
+
+ /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 48b343d034309..560ecc4ad87d5 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -115,10 +115,18 @@ enum statusEnum {
+ CifsInNegotiate,
+ CifsNeedSessSetup,
+ CifsInSessSetup,
+- CifsNeedTcon,
+- CifsInTcon,
+- CifsNeedFilesInvalidate,
+- CifsInFilesInvalidate
++};
++
++/* associated with each tree connection to the server */
++enum tid_status_enum {
++ TID_NEW = 0,
++ TID_GOOD,
++ TID_EXITING,
++ TID_NEED_RECON,
++ TID_NEED_TCON,
++ TID_IN_TCON,
++ TID_NEED_FILES_INVALIDATE, /* currently unused */
++ TID_IN_FILES_INVALIDATE
+ };
+
+ enum securityEnum {
+@@ -1038,7 +1046,7 @@ struct cifs_tcon {
+ char *password; /* for share-level security */
+ __u32 tid; /* The 4 byte tree id */
+ __u16 Flags; /* optional support bits */
+- enum statusEnum tidStatus;
++ enum tid_status_enum status;
+ atomic_t num_smbs_sent;
+ union {
+ struct {
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 071e2f21a7db7..aca9338b0877e 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -75,12 +75,11 @@ cifs_mark_open_files_invalid(struct cifs_tcon *tcon)
+
+ /* only send once per connect */
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->ses->status != CifsGood ||
+- tcon->tidStatus != CifsNeedReconnect) {
++ if ((tcon->ses->status != CifsGood) || (tcon->status != TID_NEED_RECON)) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ return;
+ }
+- tcon->tidStatus = CifsInFilesInvalidate;
++ tcon->status = TID_IN_FILES_INVALIDATE;
+ spin_unlock(&cifs_tcp_ses_lock);
+
+ /* list all files open on tree connection and mark them invalid */
+@@ -100,8 +99,8 @@ cifs_mark_open_files_invalid(struct cifs_tcon *tcon)
+ mutex_unlock(&tcon->crfid.fid_mutex);
+
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->tidStatus == CifsInFilesInvalidate)
+- tcon->tidStatus = CifsNeedTcon;
++ if (tcon->status == TID_IN_FILES_INVALIDATE)
++ tcon->status = TID_NEED_TCON;
+ spin_unlock(&cifs_tcp_ses_lock);
+
+ /*
+@@ -136,7 +135,7 @@ cifs_reconnect_tcon(struct cifs_tcon *tcon, int smb_command)
+ * have tcon) are allowed as we start force umount
+ */
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->tidStatus == CifsExiting) {
++ if (tcon->status == TID_EXITING) {
+ if (smb_command != SMB_COM_WRITE_ANDX &&
+ smb_command != SMB_COM_OPEN_ANDX &&
+ smb_command != SMB_COM_TREE_DISCONNECT) {
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 532770c30415d..c3a26f06fdaa1 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -245,7 +245,7 @@ cifs_mark_tcp_ses_conns_for_reconnect(struct TCP_Server_Info *server,
+
+ list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
+ tcon->need_reconnect = true;
+- tcon->tidStatus = CifsNeedReconnect;
++ tcon->status = TID_NEED_RECON;
+ }
+ if (ses->tcon_ipc)
+ ses->tcon_ipc->need_reconnect = true;
+@@ -2217,7 +2217,7 @@ get_ses_fail:
+
+ static int match_tcon(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ {
+- if (tcon->tidStatus == CifsExiting)
++ if (tcon->status == TID_EXITING)
+ return 0;
+ if (strncmp(tcon->treeName, ctx->UNC, MAX_TREE_SIZE))
+ return 0;
+@@ -4498,12 +4498,12 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ /* only send once per connect */
+ spin_lock(&cifs_tcp_ses_lock);
+ if (tcon->ses->status != CifsGood ||
+- (tcon->tidStatus != CifsNew &&
+- tcon->tidStatus != CifsNeedTcon)) {
++ (tcon->status != TID_NEW &&
++ tcon->status != TID_NEED_TCON)) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ return 0;
+ }
+- tcon->tidStatus = CifsInTcon;
++ tcon->status = TID_IN_TCON;
+ spin_unlock(&cifs_tcp_ses_lock);
+
+ tree = kzalloc(MAX_TREE_SIZE, GFP_KERNEL);
+@@ -4544,13 +4544,13 @@ out:
+
+ if (rc) {
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->tidStatus == CifsInTcon)
+- tcon->tidStatus = CifsNeedTcon;
++ if (tcon->status == TID_IN_TCON)
++ tcon->status = TID_NEED_TCON;
+ spin_unlock(&cifs_tcp_ses_lock);
+ } else {
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->tidStatus == CifsInTcon)
+- tcon->tidStatus = CifsGood;
++ if (tcon->status == TID_IN_TCON)
++ tcon->status = TID_GOOD;
+ spin_unlock(&cifs_tcp_ses_lock);
+ tcon->need_reconnect = false;
+ }
+@@ -4566,24 +4566,24 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ /* only send once per connect */
+ spin_lock(&cifs_tcp_ses_lock);
+ if (tcon->ses->status != CifsGood ||
+- (tcon->tidStatus != CifsNew &&
+- tcon->tidStatus != CifsNeedTcon)) {
++ (tcon->status != TID_NEW &&
++ tcon->status != TID_NEED_TCON)) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ return 0;
+ }
+- tcon->tidStatus = CifsInTcon;
++ tcon->status = TID_IN_TCON;
+ spin_unlock(&cifs_tcp_ses_lock);
+
+ rc = ops->tree_connect(xid, tcon->ses, tcon->treeName, tcon, nlsc);
+ if (rc) {
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->tidStatus == CifsInTcon)
+- tcon->tidStatus = CifsNeedTcon;
++ if (tcon->status == TID_IN_TCON)
++ tcon->status = TID_NEED_TCON;
+ spin_unlock(&cifs_tcp_ses_lock);
+ } else {
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->tidStatus == CifsInTcon)
+- tcon->tidStatus = CifsGood;
++ if (tcon->status == TID_IN_TCON)
++ tcon->status = TID_GOOD;
+ spin_unlock(&cifs_tcp_ses_lock);
+ tcon->need_reconnect = false;
+ }
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 56598f7dbe00d..afaf59c221936 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -116,7 +116,7 @@ tconInfoAlloc(void)
+ }
+
+ atomic_inc(&tconInfoAllocCount);
+- ret_buf->tidStatus = CifsNew;
++ ret_buf->status = TID_NEW;
+ ++ret_buf->tc_count;
+ INIT_LIST_HEAD(&ret_buf->openFileList);
+ INIT_LIST_HEAD(&ret_buf->tcon_list);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index f82d6fcb5c646..1704fd358b850 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -163,7 +163,7 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon,
+ return 0;
+
+ spin_lock(&cifs_tcp_ses_lock);
+- if (tcon->tidStatus == CifsExiting) {
++ if (tcon->status == TID_EXITING) {
+ /*
+ * only tree disconnect, open, and write,
+ * (and ulogoff which does not have tcon)
+@@ -3865,7 +3865,7 @@ void smb2_reconnect_server(struct work_struct *work)
+ goto done;
+ }
+
+- tcon->tidStatus = CifsGood;
++ tcon->status = TID_GOOD;
+ tcon->retry = false;
+ tcon->need_reconnect = false;
+
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index fa071d738c78e..c781c19303db4 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -956,14 +956,16 @@ static ssize_t gfs2_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ return ret;
+ iocb->ki_flags &= ~IOCB_DIRECT;
+ }
++ pagefault_disable();
+ iocb->ki_flags |= IOCB_NOIO;
+ ret = generic_file_read_iter(iocb, to);
+ iocb->ki_flags &= ~IOCB_NOIO;
++ pagefault_enable();
+ if (ret >= 0) {
+ if (!iov_iter_count(to))
+ return ret;
+ written = ret;
+- } else {
++ } else if (ret != -EFAULT) {
+ if (ret != -EAGAIN)
+ return ret;
+ if (iocb->ki_flags & IOCB_NOWAIT)
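
The gfs2 read path above wraps the page-cache-only read (IOCB_NOIO) in pagefault_disable()/pagefault_enable(), so touching the user buffer cannot recurse into the fault path, which might need locks the read already holds; a would-be fault instead fails with -EFAULT, which the new branch treats as retryable rather than as a hard error. The shape of the pattern (hypothetical wrapper; the real retry logic is outside this hunk):

    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static ssize_t example_fast_read(struct kiocb *iocb, struct iov_iter *to)
    {
            ssize_t ret;

            pagefault_disable();  /* user-copy faults now fail fast */
            ret = generic_file_read_iter(iocb, to);
            pagefault_enable();

            /* On -EFAULT the caller can fault the buffer in without any
             * locks held and retry, instead of failing the read. */
            return ret;
    }
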
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 6b23399eaee0c..d368d9a2e8f00 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -669,6 +669,8 @@ static void finish_xmote(struct gfs2_glock *gl, unsigned int ret)
+
+ /* Check for state != intended state */
+ if (unlikely(state != gl->gl_target)) {
++ if (gh && (ret & LM_OUT_CANCELED))
++ gfs2_holder_wake(gh);
+ if (gh && !test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags)) {
+ /* move to back of queue and try next entry */
+ if (ret & LM_OUT_CANCELED) {
+@@ -1691,6 +1693,14 @@ void gfs2_glock_dq(struct gfs2_holder *gh)
+ struct gfs2_glock *gl = gh->gh_gl;
+
+ spin_lock(&gl->gl_lockref.lock);
++ if (list_is_first(&gh->gh_list, &gl->gl_holders) &&
++ !test_bit(HIF_HOLDER, &gh->gh_iflags)) {
++ spin_unlock(&gl->gl_lockref.lock);
++ gl->gl_name.ln_sbd->sd_lockstruct.ls_ops->lm_cancel(gl);
++ wait_on_bit(&gh->gh_iflags, HIF_WAIT, TASK_UNINTERRUPTIBLE);
++ spin_lock(&gl->gl_lockref.lock);
++ }
++
+ __gfs2_glock_dq(gh);
+ spin_unlock(&gl->gl_lockref.lock);
+ }
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 66a123306aecb..c8ec876f33ea3 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -131,7 +131,21 @@ struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned int type,
+ struct gfs2_sbd *sdp = GFS2_SB(inode);
+ struct gfs2_glock *io_gl;
+
+- error = gfs2_glock_get(sdp, no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl);
++ error = gfs2_glock_get(sdp, no_addr, &gfs2_inode_glops, CREATE,
++ &ip->i_gl);
++ if (unlikely(error))
++ goto fail;
++
++ error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE,
++ &io_gl);
++ if (unlikely(error))
++ goto fail;
++
++ if (blktype != GFS2_BLKST_UNLINKED)
++ gfs2_cancel_delete_work(io_gl);
++ error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT,
++ &ip->i_iopen_gh);
++ gfs2_glock_put(io_gl);
+ if (unlikely(error))
+ goto fail;
+
+@@ -161,16 +175,6 @@ struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned int type,
+
+ set_bit(GLF_INSTANTIATE_NEEDED, &ip->i_gl->gl_flags);
+
+- error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE, &io_gl);
+- if (unlikely(error))
+- goto fail;
+- if (blktype != GFS2_BLKST_UNLINKED)
+- gfs2_cancel_delete_work(io_gl);
+- error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh);
+- gfs2_glock_put(io_gl);
+- if (unlikely(error))
+- goto fail;
+-
+ /* Lowest possible timestamp; will be overwritten in gfs2_dinode_in. */
+ inode->i_atime.tv_sec = 1LL << (8 * sizeof(inode->i_atime.tv_sec) - 1);
+ inode->i_atime.tv_nsec = 0;
+@@ -716,13 +720,17 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ error = insert_inode_locked4(inode, ip->i_no_addr, iget_test, &ip->i_no_addr);
+ BUG_ON(error);
+
+- error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, ghs + 1);
++ error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh);
+ if (error)
+ goto fail_gunlock2;
+
++ error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, ghs + 1);
++ if (error)
++ goto fail_gunlock3;
++
+ error = gfs2_trans_begin(sdp, blocks, 0);
+ if (error)
+- goto fail_gunlock2;
++ goto fail_gunlock3;
+
+ if (blocks > 1) {
+ ip->i_eattr = ip->i_no_addr + 1;
+@@ -731,10 +739,6 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ init_dinode(dip, ip, symname);
+ gfs2_trans_end(sdp);
+
+- error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh);
+- if (error)
+- goto fail_gunlock2;
+-
+ glock_set_object(ip->i_gl, ip);
+ glock_set_object(io_gl, ip);
+ gfs2_set_iop(inode);
+@@ -745,14 +749,14 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ if (default_acl) {
+ error = __gfs2_set_acl(inode, default_acl, ACL_TYPE_DEFAULT);
+ if (error)
+- goto fail_gunlock3;
++ goto fail_gunlock4;
+ posix_acl_release(default_acl);
+ default_acl = NULL;
+ }
+ if (acl) {
+ error = __gfs2_set_acl(inode, acl, ACL_TYPE_ACCESS);
+ if (error)
+- goto fail_gunlock3;
++ goto fail_gunlock4;
+ posix_acl_release(acl);
+ acl = NULL;
+ }
+@@ -760,11 +764,11 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ error = security_inode_init_security(&ip->i_inode, &dip->i_inode, name,
+ &gfs2_initxattrs, NULL);
+ if (error)
+- goto fail_gunlock3;
++ goto fail_gunlock4;
+
+ error = link_dinode(dip, name, ip, &da);
+ if (error)
+- goto fail_gunlock3;
++ goto fail_gunlock4;
+
+ mark_inode_dirty(inode);
+ d_instantiate(dentry, inode);
+@@ -782,9 +786,10 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry,
+ unlock_new_inode(inode);
+ return error;
+
+-fail_gunlock3:
++fail_gunlock4:
+ glock_clear_object(ip->i_gl, ip);
+ glock_clear_object(io_gl, ip);
++fail_gunlock3:
+ gfs2_glock_dq_uninit(&ip->i_iopen_gh);
+ fail_gunlock2:
+ gfs2_glock_put(io_gl);
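
The gfs2_create_inode rework above moves the iopen glock acquisition earlier and renumbers the fail_gunlock* labels so that the cleanup chain still releases resources in exact reverse order of acquisition. The general goto-unwind discipline (hypothetical resources):

    /* acquire_*() / release_*() are hypothetical stand-ins. */
    int acquire_a(void); int acquire_b(void); int acquire_c(void);
    void release_a(void); void release_b(void);

    static int example_setup(void)
    {
            int err;

            err = acquire_a();
            if (err)
                    return err;
            err = acquire_b();      /* taken after a */
            if (err)
                    goto out_a;
            err = acquire_c();      /* taken after b */
            if (err)
                    goto out_b;
            return 0;

    out_b:
            release_b();            /* labels unwind in reverse order */
    out_a:
            release_a();
            return err;
    }
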
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index 1ed097e94af2d..85f7e4ee6924f 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -173,7 +173,7 @@ int fiemap_prep(struct inode *inode, struct fiemap_extent_info *fieinfo,
+
+ if (*len == 0)
+ return -EINVAL;
+- if (start > maxbytes)
++ if (start >= maxbytes)
+ return -EFBIG;
+
+ /*
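
The one-character fiemap fix above (> to >=) is a fencepost correction: with a maximum file size of maxbytes, valid byte offsets are 0 .. maxbytes - 1, so a mapping request that starts at maxbytes already lies wholly past the last addressable byte and must fail. In isolation:

    /* maxbytes == 8 means offsets 0..7 are addressable. */
    if (start >= maxbytes)
            return -EFBIG;  /* start == 8 has nothing left to map */
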
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index 66bdaa2cf496a..ca611ac09f7c1 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -20,6 +20,23 @@
+ #include "page.h"
+ #include "btnode.h"
+
++
++/**
++ * nilfs_init_btnc_inode - initialize B-tree node cache inode
++ * @btnc_inode: inode to be initialized
++ *
++ * nilfs_init_btnc_inode() sets up an inode for B-tree node cache.
++ */
++void nilfs_init_btnc_inode(struct inode *btnc_inode)
++{
++ struct nilfs_inode_info *ii = NILFS_I(btnc_inode);
++
++ btnc_inode->i_mode = S_IFREG;
++ ii->i_flags = 0;
++ memset(&ii->i_bmap_data, 0, sizeof(struct nilfs_bmap));
++ mapping_set_gfp_mask(btnc_inode->i_mapping, GFP_NOFS);
++}
++
+ void nilfs_btnode_cache_clear(struct address_space *btnc)
+ {
+ invalidate_mapping_pages(btnc, 0, -1);
+@@ -29,7 +46,7 @@ void nilfs_btnode_cache_clear(struct address_space *btnc)
+ struct buffer_head *
+ nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr)
+ {
+- struct inode *inode = NILFS_BTNC_I(btnc);
++ struct inode *inode = btnc->host;
+ struct buffer_head *bh;
+
+ bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node));
+@@ -57,7 +74,7 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr,
+ struct buffer_head **pbh, sector_t *submit_ptr)
+ {
+ struct buffer_head *bh;
+- struct inode *inode = NILFS_BTNC_I(btnc);
++ struct inode *inode = btnc->host;
+ struct page *page;
+ int err;
+
+@@ -157,7 +174,7 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
+ struct nilfs_btnode_chkey_ctxt *ctxt)
+ {
+ struct buffer_head *obh, *nbh;
+- struct inode *inode = NILFS_BTNC_I(btnc);
++ struct inode *inode = btnc->host;
+ __u64 oldkey = ctxt->oldkey, newkey = ctxt->newkey;
+ int err;
+
+diff --git a/fs/nilfs2/btnode.h b/fs/nilfs2/btnode.h
+index 11663650add73..bd5544e63a01d 100644
+--- a/fs/nilfs2/btnode.h
++++ b/fs/nilfs2/btnode.h
+@@ -30,6 +30,7 @@ struct nilfs_btnode_chkey_ctxt {
+ struct buffer_head *newbh;
+ };
+
++void nilfs_init_btnc_inode(struct inode *btnc_inode);
+ void nilfs_btnode_cache_clear(struct address_space *);
+ struct buffer_head *nilfs_btnode_create_block(struct address_space *btnc,
+ __u64 blocknr);
+diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
+index 3594eabe14194..f544c22fff78b 100644
+--- a/fs/nilfs2/btree.c
++++ b/fs/nilfs2/btree.c
+@@ -58,7 +58,8 @@ static void nilfs_btree_free_path(struct nilfs_btree_path *path)
+ static int nilfs_btree_get_new_block(const struct nilfs_bmap *btree,
+ __u64 ptr, struct buffer_head **bhp)
+ {
+- struct address_space *btnc = &NILFS_BMAP_I(btree)->i_btnode_cache;
++ struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode;
++ struct address_space *btnc = btnc_inode->i_mapping;
+ struct buffer_head *bh;
+
+ bh = nilfs_btnode_create_block(btnc, ptr);
+@@ -470,7 +471,8 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr,
+ struct buffer_head **bhp,
+ const struct nilfs_btree_readahead_info *ra)
+ {
+- struct address_space *btnc = &NILFS_BMAP_I(btree)->i_btnode_cache;
++ struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode;
++ struct address_space *btnc = btnc_inode->i_mapping;
+ struct buffer_head *bh, *ra_bh;
+ sector_t submit_ptr = 0;
+ int ret;
+@@ -1741,6 +1743,10 @@ nilfs_btree_prepare_convert_and_insert(struct nilfs_bmap *btree, __u64 key,
+ dat = nilfs_bmap_get_dat(btree);
+ }
+
++ ret = nilfs_attach_btree_node_cache(&NILFS_BMAP_I(btree)->vfs_inode);
++ if (ret < 0)
++ return ret;
++
+ ret = nilfs_bmap_prepare_alloc_ptr(btree, dreq, dat);
+ if (ret < 0)
+ return ret;
+@@ -1913,7 +1919,7 @@ static int nilfs_btree_prepare_update_v(struct nilfs_bmap *btree,
+ path[level].bp_ctxt.newkey = path[level].bp_newreq.bpr_ptr;
+ path[level].bp_ctxt.bh = path[level].bp_bh;
+ ret = nilfs_btnode_prepare_change_key(
+- &NILFS_BMAP_I(btree)->i_btnode_cache,
++ NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ &path[level].bp_ctxt);
+ if (ret < 0) {
+ nilfs_dat_abort_update(dat,
+@@ -1939,7 +1945,7 @@ static void nilfs_btree_commit_update_v(struct nilfs_bmap *btree,
+
+ if (buffer_nilfs_node(path[level].bp_bh)) {
+ nilfs_btnode_commit_change_key(
+- &NILFS_BMAP_I(btree)->i_btnode_cache,
++ NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ &path[level].bp_ctxt);
+ path[level].bp_bh = path[level].bp_ctxt.bh;
+ }
+@@ -1958,7 +1964,7 @@ static void nilfs_btree_abort_update_v(struct nilfs_bmap *btree,
+ &path[level].bp_newreq.bpr_req);
+ if (buffer_nilfs_node(path[level].bp_bh))
+ nilfs_btnode_abort_change_key(
+- &NILFS_BMAP_I(btree)->i_btnode_cache,
++ NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ &path[level].bp_ctxt);
+ }
+
+@@ -2134,7 +2140,8 @@ static void nilfs_btree_add_dirty_buffer(struct nilfs_bmap *btree,
+ static void nilfs_btree_lookup_dirty_buffers(struct nilfs_bmap *btree,
+ struct list_head *listp)
+ {
+- struct address_space *btcache = &NILFS_BMAP_I(btree)->i_btnode_cache;
++ struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode;
++ struct address_space *btcache = btnc_inode->i_mapping;
+ struct list_head lists[NILFS_BTREE_LEVEL_MAX];
+ struct pagevec pvec;
+ struct buffer_head *bh, *head;
+@@ -2188,12 +2195,12 @@ static int nilfs_btree_assign_p(struct nilfs_bmap *btree,
+ path[level].bp_ctxt.newkey = blocknr;
+ path[level].bp_ctxt.bh = *bh;
+ ret = nilfs_btnode_prepare_change_key(
+- &NILFS_BMAP_I(btree)->i_btnode_cache,
++ NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ &path[level].bp_ctxt);
+ if (ret < 0)
+ return ret;
+ nilfs_btnode_commit_change_key(
+- &NILFS_BMAP_I(btree)->i_btnode_cache,
++ NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping,
+ &path[level].bp_ctxt);
+ *bh = path[level].bp_ctxt.bh;
+ }
+@@ -2398,6 +2405,10 @@ int nilfs_btree_init(struct nilfs_bmap *bmap)
+
+ if (nilfs_btree_root_broken(nilfs_btree_get_root(bmap), bmap->b_inode))
+ ret = -EIO;
++ else
++ ret = nilfs_attach_btree_node_cache(
++ &NILFS_BMAP_I(bmap)->vfs_inode);
++
+ return ret;
+ }
+
+diff --git a/fs/nilfs2/dat.c b/fs/nilfs2/dat.c
+index dc51d3b7a7bff..3b55e239705f4 100644
+--- a/fs/nilfs2/dat.c
++++ b/fs/nilfs2/dat.c
+@@ -497,7 +497,9 @@ int nilfs_dat_read(struct super_block *sb, size_t entry_size,
+ di = NILFS_DAT_I(dat);
+ lockdep_set_class(&di->mi.mi_sem, &dat_lock_key);
+ nilfs_palloc_setup_cache(dat, &di->palloc_cache);
+- nilfs_mdt_setup_shadow_map(dat, &di->shadow);
++ err = nilfs_mdt_setup_shadow_map(dat, &di->shadow);
++ if (err)
++ goto failed;
+
+ err = nilfs_read_inode_common(dat, raw_inode);
+ if (err)
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index a8f5315f01e3a..04fdd420eae72 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -126,9 +126,10 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
+ int nilfs_gccache_submit_read_node(struct inode *inode, sector_t pbn,
+ __u64 vbn, struct buffer_head **out_bh)
+ {
++ struct inode *btnc_inode = NILFS_I(inode)->i_assoc_inode;
+ int ret;
+
+- ret = nilfs_btnode_submit_block(&NILFS_I(inode)->i_btnode_cache,
++ ret = nilfs_btnode_submit_block(btnc_inode->i_mapping,
+ vbn ? : pbn, pbn, REQ_OP_READ, 0,
+ out_bh, &pbn);
+ if (ret == -EEXIST) /* internal code (cache hit) */
+@@ -170,7 +171,7 @@ int nilfs_init_gcinode(struct inode *inode)
+ ii->i_flags = 0;
+ nilfs_bmap_init_gc(ii->i_bmap);
+
+- return 0;
++ return nilfs_attach_btree_node_cache(inode);
+ }
+
+ /**
+@@ -185,7 +186,7 @@ void nilfs_remove_all_gcinodes(struct the_nilfs *nilfs)
+ ii = list_first_entry(head, struct nilfs_inode_info, i_dirty);
+ list_del_init(&ii->i_dirty);
+ truncate_inode_pages(&ii->vfs_inode.i_data, 0);
+- nilfs_btnode_cache_clear(&ii->i_btnode_cache);
++ nilfs_btnode_cache_clear(ii->i_assoc_inode->i_mapping);
+ iput(&ii->vfs_inode);
+ }
+ }
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index e3d807d5b83ad..d63d4bbad9fef 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -29,12 +29,16 @@
+ * @cno: checkpoint number
+ * @root: pointer on NILFS root object (mounted checkpoint)
+ * @for_gc: inode for GC flag
++ * @for_btnc: inode for B-tree node cache flag
++ * @for_shadow: inode for shadowed page cache flag
+ */
+ struct nilfs_iget_args {
+ u64 ino;
+ __u64 cno;
+ struct nilfs_root *root;
+- int for_gc;
++ bool for_gc;
++ bool for_btnc;
++ bool for_shadow;
+ };
+
+ static int nilfs_iget_test(struct inode *inode, void *opaque);
+@@ -314,7 +318,8 @@ static int nilfs_insert_inode_locked(struct inode *inode,
+ unsigned long ino)
+ {
+ struct nilfs_iget_args args = {
+- .ino = ino, .root = root, .cno = 0, .for_gc = 0
++ .ino = ino, .root = root, .cno = 0, .for_gc = false,
++ .for_btnc = false, .for_shadow = false
+ };
+
+ return insert_inode_locked4(inode, ino, nilfs_iget_test, &args);
+@@ -527,6 +532,19 @@ static int nilfs_iget_test(struct inode *inode, void *opaque)
+ return 0;
+
+ ii = NILFS_I(inode);
++ if (test_bit(NILFS_I_BTNC, &ii->i_state)) {
++ if (!args->for_btnc)
++ return 0;
++ } else if (args->for_btnc) {
++ return 0;
++ }
++ if (test_bit(NILFS_I_SHADOW, &ii->i_state)) {
++ if (!args->for_shadow)
++ return 0;
++ } else if (args->for_shadow) {
++ return 0;
++ }
++
+ if (!test_bit(NILFS_I_GCINODE, &ii->i_state))
+ return !args->for_gc;
+
+@@ -538,15 +556,17 @@ static int nilfs_iget_set(struct inode *inode, void *opaque)
+ struct nilfs_iget_args *args = opaque;
+
+ inode->i_ino = args->ino;
+- if (args->for_gc) {
++ NILFS_I(inode)->i_cno = args->cno;
++ NILFS_I(inode)->i_root = args->root;
++ if (args->root && args->ino == NILFS_ROOT_INO)
++ nilfs_get_root(args->root);
++
++ if (args->for_gc)
+ NILFS_I(inode)->i_state = BIT(NILFS_I_GCINODE);
+- NILFS_I(inode)->i_cno = args->cno;
+- NILFS_I(inode)->i_root = NULL;
+- } else {
+- if (args->root && args->ino == NILFS_ROOT_INO)
+- nilfs_get_root(args->root);
+- NILFS_I(inode)->i_root = args->root;
+- }
++ if (args->for_btnc)
++ NILFS_I(inode)->i_state |= BIT(NILFS_I_BTNC);
++ if (args->for_shadow)
++ NILFS_I(inode)->i_state |= BIT(NILFS_I_SHADOW);
+ return 0;
+ }
+
+@@ -554,7 +574,8 @@ struct inode *nilfs_ilookup(struct super_block *sb, struct nilfs_root *root,
+ unsigned long ino)
+ {
+ struct nilfs_iget_args args = {
+- .ino = ino, .root = root, .cno = 0, .for_gc = 0
++ .ino = ino, .root = root, .cno = 0, .for_gc = false,
++ .for_btnc = false, .for_shadow = false
+ };
+
+ return ilookup5(sb, ino, nilfs_iget_test, &args);
+@@ -564,7 +585,8 @@ struct inode *nilfs_iget_locked(struct super_block *sb, struct nilfs_root *root,
+ unsigned long ino)
+ {
+ struct nilfs_iget_args args = {
+- .ino = ino, .root = root, .cno = 0, .for_gc = 0
++ .ino = ino, .root = root, .cno = 0, .for_gc = false,
++ .for_btnc = false, .for_shadow = false
+ };
+
+ return iget5_locked(sb, ino, nilfs_iget_test, nilfs_iget_set, &args);
+@@ -595,7 +617,8 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino,
+ __u64 cno)
+ {
+ struct nilfs_iget_args args = {
+- .ino = ino, .root = NULL, .cno = cno, .for_gc = 1
++ .ino = ino, .root = NULL, .cno = cno, .for_gc = true,
++ .for_btnc = false, .for_shadow = false
+ };
+ struct inode *inode;
+ int err;
+@@ -615,6 +638,113 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino,
+ return inode;
+ }
+
++/**
++ * nilfs_attach_btree_node_cache - attach a B-tree node cache to the inode
++ * @inode: inode object
++ *
++ * nilfs_attach_btree_node_cache() attaches a B-tree node cache to @inode,
++ * or does nothing if the inode already has it. This function allocates
++ * an additional inode to maintain the page cache of B-tree nodes one-to-one.
++ *
++ * Return Value: On success, 0 is returned. On errors, one of the following
++ * negative error codes is returned.
++ *
++ * %-ENOMEM - Insufficient memory available.
++ */
++int nilfs_attach_btree_node_cache(struct inode *inode)
++{
++ struct nilfs_inode_info *ii = NILFS_I(inode);
++ struct inode *btnc_inode;
++ struct nilfs_iget_args args;
++
++ if (ii->i_assoc_inode)
++ return 0;
++
++ args.ino = inode->i_ino;
++ args.root = ii->i_root;
++ args.cno = ii->i_cno;
++ args.for_gc = test_bit(NILFS_I_GCINODE, &ii->i_state) != 0;
++ args.for_btnc = true;
++ args.for_shadow = test_bit(NILFS_I_SHADOW, &ii->i_state) != 0;
++
++ btnc_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test,
++ nilfs_iget_set, &args);
++ if (unlikely(!btnc_inode))
++ return -ENOMEM;
++ if (btnc_inode->i_state & I_NEW) {
++ nilfs_init_btnc_inode(btnc_inode);
++ unlock_new_inode(btnc_inode);
++ }
++ NILFS_I(btnc_inode)->i_assoc_inode = inode;
++ NILFS_I(btnc_inode)->i_bmap = ii->i_bmap;
++ ii->i_assoc_inode = btnc_inode;
++
++ return 0;
++}
++
++/**
++ * nilfs_detach_btree_node_cache - detach the B-tree node cache from the inode
++ * @inode: inode object
++ *
++ * nilfs_detach_btree_node_cache() detaches the B-tree node cache and its
++ * holder inode bound to @inode, or does nothing if @inode doesn't have it.
++ */
++void nilfs_detach_btree_node_cache(struct inode *inode)
++{
++ struct nilfs_inode_info *ii = NILFS_I(inode);
++ struct inode *btnc_inode = ii->i_assoc_inode;
++
++ if (btnc_inode) {
++ NILFS_I(btnc_inode)->i_assoc_inode = NULL;
++ ii->i_assoc_inode = NULL;
++ iput(btnc_inode);
++ }
++}
++
++/**
++ * nilfs_iget_for_shadow - obtain inode for shadow mapping
++ * @inode: inode object that uses shadow mapping
++ *
++ * nilfs_iget_for_shadow() allocates a pair of inodes that hold page
++ * caches for shadow mapping. The page cache for data pages is set up
++ * in one inode and the one for b-tree node pages is set up in the
++ * other inode, which is attached to the former inode.
++ *
++ * Return Value: On success, a pointer to the inode for data pages is
++ * returned. On errors, one of the following negative error codes is
++ * returned as an error pointer.
++ *
++ * %-ENOMEM - Insufficient memory available.
++ */
++struct inode *nilfs_iget_for_shadow(struct inode *inode)
++{
++ struct nilfs_iget_args args = {
++ .ino = inode->i_ino, .root = NULL, .cno = 0, .for_gc = false,
++ .for_btnc = false, .for_shadow = true
++ };
++ struct inode *s_inode;
++ int err;
++
++ s_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test,
++ nilfs_iget_set, &args);
++ if (unlikely(!s_inode))
++ return ERR_PTR(-ENOMEM);
++ if (!(s_inode->i_state & I_NEW))
++ return inode;
++
++ NILFS_I(s_inode)->i_flags = 0;
++ memset(NILFS_I(s_inode)->i_bmap, 0, sizeof(struct nilfs_bmap));
++ mapping_set_gfp_mask(s_inode->i_mapping, GFP_NOFS);
++
++ err = nilfs_attach_btree_node_cache(s_inode);
++ if (unlikely(err)) {
++ iget_failed(s_inode);
++ return ERR_PTR(err);
++ }
++ unlock_new_inode(s_inode);
++ return s_inode;
++}
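++
++/*
++ * Resulting inode pairing, as used by the callers above (sketch):
++ *
++ *   s_inode->i_mapping                          shadow of data pages
++ *   NILFS_I(s_inode)->i_assoc_inode->i_mapping  shadow of B-tree node pages
++ */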
++
+ void nilfs_write_inode_common(struct inode *inode,
+ struct nilfs_inode *raw_inode, int has_bmap)
+ {
+@@ -762,7 +892,8 @@ static void nilfs_clear_inode(struct inode *inode)
+ if (test_bit(NILFS_I_BMAP, &ii->i_state))
+ nilfs_bmap_clear(ii->i_bmap);
+
+- nilfs_btnode_cache_clear(&ii->i_btnode_cache);
++ if (!test_bit(NILFS_I_BTNC, &ii->i_state))
++ nilfs_detach_btree_node_cache(inode);
+
+ if (ii->i_root && inode->i_ino == NILFS_ROOT_INO)
+ nilfs_put_root(ii->i_root);
+diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c
+index 4b3d33cf0041f..880b5e8cd3ecd 100644
+--- a/fs/nilfs2/mdt.c
++++ b/fs/nilfs2/mdt.c
+@@ -470,9 +470,18 @@ int nilfs_mdt_init(struct inode *inode, gfp_t gfp_mask, size_t objsz)
+ void nilfs_mdt_clear(struct inode *inode)
+ {
+ struct nilfs_mdt_info *mdi = NILFS_MDT(inode);
++ struct nilfs_shadow_map *shadow = mdi->mi_shadow;
+
+ if (mdi->mi_palloc_cache)
+ nilfs_palloc_destroy_cache(inode);
++
++ if (shadow) {
++ struct inode *s_inode = shadow->inode;
++
++ shadow->inode = NULL;
++ iput(s_inode);
++ mdi->mi_shadow = NULL;
++ }
+ }
+
+ /**
+@@ -506,12 +515,15 @@ int nilfs_mdt_setup_shadow_map(struct inode *inode,
+ struct nilfs_shadow_map *shadow)
+ {
+ struct nilfs_mdt_info *mi = NILFS_MDT(inode);
++ struct inode *s_inode;
+
+ INIT_LIST_HEAD(&shadow->frozen_buffers);
+- address_space_init_once(&shadow->frozen_data);
+- nilfs_mapping_init(&shadow->frozen_data, inode);
+- address_space_init_once(&shadow->frozen_btnodes);
+- nilfs_mapping_init(&shadow->frozen_btnodes, inode);
++
++ s_inode = nilfs_iget_for_shadow(inode);
++ if (IS_ERR(s_inode))
++ return PTR_ERR(s_inode);
++
++ shadow->inode = s_inode;
+ mi->mi_shadow = shadow;
+ return 0;
+ }
+@@ -525,14 +537,15 @@ int nilfs_mdt_save_to_shadow_map(struct inode *inode)
+ struct nilfs_mdt_info *mi = NILFS_MDT(inode);
+ struct nilfs_inode_info *ii = NILFS_I(inode);
+ struct nilfs_shadow_map *shadow = mi->mi_shadow;
++ struct inode *s_inode = shadow->inode;
+ int ret;
+
+- ret = nilfs_copy_dirty_pages(&shadow->frozen_data, inode->i_mapping);
++ ret = nilfs_copy_dirty_pages(s_inode->i_mapping, inode->i_mapping);
+ if (ret)
+ goto out;
+
+- ret = nilfs_copy_dirty_pages(&shadow->frozen_btnodes,
+- &ii->i_btnode_cache);
++ ret = nilfs_copy_dirty_pages(NILFS_I(s_inode)->i_assoc_inode->i_mapping,
++ ii->i_assoc_inode->i_mapping);
+ if (ret)
+ goto out;
+
+@@ -548,7 +561,7 @@ int nilfs_mdt_freeze_buffer(struct inode *inode, struct buffer_head *bh)
+ struct page *page;
+ int blkbits = inode->i_blkbits;
+
+- page = grab_cache_page(&shadow->frozen_data, bh->b_page->index);
++ page = grab_cache_page(shadow->inode->i_mapping, bh->b_page->index);
+ if (!page)
+ return -ENOMEM;
+
+@@ -580,7 +593,7 @@ nilfs_mdt_get_frozen_buffer(struct inode *inode, struct buffer_head *bh)
+ struct page *page;
+ int n;
+
+- page = find_lock_page(&shadow->frozen_data, bh->b_page->index);
++ page = find_lock_page(shadow->inode->i_mapping, bh->b_page->index);
+ if (page) {
+ if (page_has_buffers(page)) {
+ n = bh_offset(bh) >> inode->i_blkbits;
+@@ -621,10 +634,11 @@ void nilfs_mdt_restore_from_shadow_map(struct inode *inode)
+ nilfs_palloc_clear_cache(inode);
+
+ nilfs_clear_dirty_pages(inode->i_mapping, true);
+- nilfs_copy_back_pages(inode->i_mapping, &shadow->frozen_data);
++ nilfs_copy_back_pages(inode->i_mapping, shadow->inode->i_mapping);
+
+- nilfs_clear_dirty_pages(&ii->i_btnode_cache, true);
+- nilfs_copy_back_pages(&ii->i_btnode_cache, &shadow->frozen_btnodes);
++ nilfs_clear_dirty_pages(ii->i_assoc_inode->i_mapping, true);
++ nilfs_copy_back_pages(ii->i_assoc_inode->i_mapping,
++ NILFS_I(shadow->inode)->i_assoc_inode->i_mapping);
+
+ nilfs_bmap_restore(ii->i_bmap, &shadow->bmap_store);
+
+@@ -639,10 +653,11 @@ void nilfs_mdt_clear_shadow_map(struct inode *inode)
+ {
+ struct nilfs_mdt_info *mi = NILFS_MDT(inode);
+ struct nilfs_shadow_map *shadow = mi->mi_shadow;
++ struct inode *shadow_btnc_inode = NILFS_I(shadow->inode)->i_assoc_inode;
+
+ down_write(&mi->mi_sem);
+ nilfs_release_frozen_buffers(shadow);
+- truncate_inode_pages(&shadow->frozen_data, 0);
+- truncate_inode_pages(&shadow->frozen_btnodes, 0);
++ truncate_inode_pages(shadow->inode->i_mapping, 0);
++ truncate_inode_pages(shadow_btnc_inode->i_mapping, 0);
+ up_write(&mi->mi_sem);
+ }
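+/*
+ * Shadow-map life cycle after this change, restated as a call sketch
+ * (all functions appear in the hunks above; no new API is introduced):
+ *
+ *	nilfs_mdt_setup_shadow_map(inode, &shadow);  // shadow->inode = s_inode
+ *	nilfs_mdt_save_to_shadow_map(inode);         // freeze dirty pages
+ *	...                                          // work that may be rolled back
+ *	nilfs_mdt_restore_from_shadow_map(inode);    // copy pages back
+ *	nilfs_mdt_clear_shadow_map(inode);           // truncate shadow caches
+ */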
+diff --git a/fs/nilfs2/mdt.h b/fs/nilfs2/mdt.h
+index 8f86080a436de..9e23bab3ff127 100644
+--- a/fs/nilfs2/mdt.h
++++ b/fs/nilfs2/mdt.h
+@@ -18,14 +18,12 @@
+ /**
+ * struct nilfs_shadow_map - shadow mapping of meta data file
+ * @bmap_store: shadow copy of bmap state
+- * @frozen_data: shadowed dirty data pages
+- * @frozen_btnodes: shadowed dirty b-tree nodes' pages
++ * @inode: holder of page caches used in shadow mapping
+ * @frozen_buffers: list of frozen buffers
+ */
+ struct nilfs_shadow_map {
+ struct nilfs_bmap_store bmap_store;
+- struct address_space frozen_data;
+- struct address_space frozen_btnodes;
++ struct inode *inode;
+ struct list_head frozen_buffers;
+ };
+
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index a7b81755c3501..1344f7d475d3c 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -28,7 +28,7 @@
+ * @i_xattr: <TODO>
+ * @i_dir_start_lookup: page index of last successful search
+ * @i_cno: checkpoint number for GC inode
+- * @i_btnode_cache: cached pages of b-tree nodes
++ * @i_assoc_inode: associated inode (B-tree node cache holder or back pointer)
+ * @i_dirty: list for connecting dirty files
+ * @xattr_sem: semaphore for extended attributes processing
+ * @i_bh: buffer contains disk inode
+@@ -43,7 +43,7 @@ struct nilfs_inode_info {
+ __u64 i_xattr; /* sector_t ??? */
+ __u32 i_dir_start_lookup;
+ __u64 i_cno; /* check point number for GC inode */
+- struct address_space i_btnode_cache;
++ struct inode *i_assoc_inode;
+ struct list_head i_dirty; /* List for connecting dirty files */
+
+ #ifdef CONFIG_NILFS_XATTR
+@@ -75,13 +75,6 @@ NILFS_BMAP_I(const struct nilfs_bmap *bmap)
+ return container_of(bmap, struct nilfs_inode_info, i_bmap_data);
+ }
+
+-static inline struct inode *NILFS_BTNC_I(struct address_space *btnc)
+-{
+- struct nilfs_inode_info *ii =
+- container_of(btnc, struct nilfs_inode_info, i_btnode_cache);
+- return &ii->vfs_inode;
+-}
+-
+ /*
+ * Dynamic state flags of NILFS on-memory inode (i_state)
+ */
+@@ -98,6 +91,8 @@ enum {
+ NILFS_I_INODE_SYNC, /* dsync is not allowed for inode */
+ NILFS_I_BMAP, /* has bmap and btnode_cache */
+ NILFS_I_GCINODE, /* inode for GC, on memory only */
++ NILFS_I_BTNC, /* inode for btree node cache */
++ NILFS_I_SHADOW, /* inode for shadowed page cache */
+ };
+
+ /*
+@@ -267,6 +262,9 @@ struct inode *nilfs_iget(struct super_block *sb, struct nilfs_root *root,
+ unsigned long ino);
+ extern struct inode *nilfs_iget_for_gc(struct super_block *sb,
+ unsigned long ino, __u64 cno);
++int nilfs_attach_btree_node_cache(struct inode *inode);
++void nilfs_detach_btree_node_cache(struct inode *inode);
++struct inode *nilfs_iget_for_shadow(struct inode *inode);
+ extern void nilfs_update_inode(struct inode *, struct buffer_head *, int);
+ extern void nilfs_truncate(struct inode *);
+ extern void nilfs_evict_inode(struct inode *);
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 063dd16d75b59..45e0792950080 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -448,10 +448,9 @@ void nilfs_mapping_init(struct address_space *mapping, struct inode *inode)
+ /*
+ * NILFS2 needs clear_page_dirty() in the following two cases:
+ *
+- * 1) For B-tree node pages and data pages of the dat/gcdat, NILFS2 clears
+- * page dirty flags when it copies back pages from the shadow cache
+- * (gcdat->{i_mapping,i_btnode_cache}) to its original cache
+- * (dat->{i_mapping,i_btnode_cache}).
++ * 1) For B-tree node pages and data pages of DAT file, NILFS2 clears dirty
++ * flag of pages when it copies back pages from shadow cache to the
++ * original cache.
+ *
+ * 2) Some B-tree operations like insertion or deletion may dispose buffers
+ * in dirty state, and this needs to cancel the dirty state of their pages.
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 85a8533347718..0afe0832c7547 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -733,15 +733,18 @@ static void nilfs_lookup_dirty_node_buffers(struct inode *inode,
+ struct list_head *listp)
+ {
+ struct nilfs_inode_info *ii = NILFS_I(inode);
+- struct address_space *mapping = &ii->i_btnode_cache;
++ struct inode *btnc_inode = ii->i_assoc_inode;
+ struct pagevec pvec;
+ struct buffer_head *bh, *head;
+ unsigned int i;
+ pgoff_t index = 0;
+
++ if (!btnc_inode)
++ return;
++
+ pagevec_init(&pvec);
+
+- while (pagevec_lookup_tag(&pvec, mapping, &index,
++ while (pagevec_lookup_tag(&pvec, btnc_inode->i_mapping, &index,
+ PAGECACHE_TAG_DIRTY)) {
+ for (i = 0; i < pagevec_count(&pvec); i++) {
+ bh = head = page_buffers(pvec.pages[i]);
+@@ -2410,7 +2413,7 @@ nilfs_remove_written_gcinodes(struct the_nilfs *nilfs, struct list_head *head)
+ continue;
+ list_del_init(&ii->i_dirty);
+ truncate_inode_pages(&ii->vfs_inode.i_data, 0);
+- nilfs_btnode_cache_clear(&ii->i_btnode_cache);
++ nilfs_btnode_cache_clear(ii->i_assoc_inode->i_mapping);
+ iput(&ii->vfs_inode);
+ }
+ }
+diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
+index 63e5fa74016c7..c4c6578185d57 100644
+--- a/fs/nilfs2/super.c
++++ b/fs/nilfs2/super.c
+@@ -157,7 +157,8 @@ struct inode *nilfs_alloc_inode(struct super_block *sb)
+ ii->i_bh = NULL;
+ ii->i_state = 0;
+ ii->i_cno = 0;
+- nilfs_mapping_init(&ii->i_btnode_cache, &ii->vfs_inode);
++ ii->i_assoc_inode = NULL;
++ ii->i_bmap = &ii->i_bmap_data;
+ return &ii->vfs_inode;
+ }
+
+@@ -1377,8 +1378,6 @@ static void nilfs_inode_init_once(void *obj)
+ #ifdef CONFIG_NILFS_XATTR
+ init_rwsem(&ii->xattr_sem);
+ #endif
+- address_space_init_once(&ii->i_btnode_cache);
+- ii->i_bmap = &ii->i_bmap_data;
+ inode_init_once(&ii->vfs_inode);
+ }
+
+diff --git a/include/linux/audit.h b/include/linux/audit.h
+index d06134ac6245f..cece702311388 100644
+--- a/include/linux/audit.h
++++ b/include/linux/audit.h
+@@ -339,7 +339,7 @@ static inline void audit_uring_entry(u8 op)
+ }
+ static inline void audit_uring_exit(int success, long code)
+ {
+- if (unlikely(!audit_dummy_context()))
++ if (unlikely(audit_context()))
+ __audit_uring_exit(success, code);
+ }
+ static inline void audit_syscall_entry(int major, unsigned long a0,
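+/*
+ * Read together with the kernel/auditsc.c hunk further down (sketch of the
+ * combined behaviour): __audit_uring_exit() is now entered whenever any
+ * audit context exists; for a dummy context it emits no records but, when
+ * the context is AUDIT_CTX_URING, still takes the common exit path so the
+ * per-task context state is reset rather than left dangling.
+ */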
+diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h
+index 3431011f364dd..cba8a6ffc3290 100644
+--- a/include/linux/ceph/osd_client.h
++++ b/include/linux/ceph/osd_client.h
+@@ -287,6 +287,9 @@ struct ceph_osd_linger_request {
+ rados_watcherrcb_t errcb;
+ void *data;
+
++ struct ceph_pagelist *request_pl;
++ struct page **notify_id_pages;
++
+ struct page ***preply_pages;
+ size_t *preply_len;
+ };
+diff --git a/include/linux/ioport.h b/include/linux/ioport.h
+index 8359c50f99884..ec5f71f7135b0 100644
+--- a/include/linux/ioport.h
++++ b/include/linux/ioport.h
+@@ -262,6 +262,8 @@ resource_union(struct resource *r1, struct resource *r2, struct resource *r)
+ #define request_muxed_region(start,n,name) __request_region(&ioport_resource, (start), (n), (name), IORESOURCE_MUXED)
+ #define __request_mem_region(start,n,name, excl) __request_region(&iomem_resource, (start), (n), (name), excl)
+ #define request_mem_region(start,n,name) __request_region(&iomem_resource, (start), (n), (name), 0)
++#define request_mem_region_muxed(start, n, name) \
++ __request_region(&iomem_resource, (start), (n), (name), IORESOURCE_MUXED)
+ #define request_mem_region_exclusive(start,n,name) \
+ __request_region(&iomem_resource, (start), (n), (name), IORESOURCE_EXCLUSIVE)
+ #define rename_region(region, newname) do { (region)->name = (newname); } while (0)
+diff --git a/include/linux/mc146818rtc.h b/include/linux/mc146818rtc.h
+index 808bb4cee2300..b0da04fe087bb 100644
+--- a/include/linux/mc146818rtc.h
++++ b/include/linux/mc146818rtc.h
+@@ -86,6 +86,8 @@ struct cmos_rtc_board_info {
+ /* 2 values for divider stage reset, others for "testing purposes only" */
+ # define RTC_DIV_RESET1 0x60
+ # define RTC_DIV_RESET2 0x70
++ /* In AMD BKDG bit 5 and 6 are reserved, bit 4 is for select dv0 bank */
++# define RTC_AMD_BANK_SELECT 0x10
+ /* Periodic intr. / Square wave rate select. 0=none, 1=32.8kHz,... 15=2Hz */
+ # define RTC_RATE_SELECT 0x0F
+
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index f53ea70384418..dadd4d2f6d8ac 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -891,7 +891,7 @@ struct net_device_path_stack {
+
+ struct net_device_path_ctx {
+ const struct net_device *dev;
+- const u8 *daddr;
++ u8 daddr[ETH_ALEN];
+
+ int num_vlans;
+ struct {
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 25b3ef71f495e..7fc4e9f49f542 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -121,10 +121,12 @@ enum lockdown_reason {
+ LOCKDOWN_DEBUGFS,
+ LOCKDOWN_XMON_WR,
+ LOCKDOWN_BPF_WRITE_USER,
++ LOCKDOWN_DBG_WRITE_KERNEL,
+ LOCKDOWN_INTEGRITY_MAX,
+ LOCKDOWN_KCORE,
+ LOCKDOWN_KPROBES,
+ LOCKDOWN_BPF_READ_KERNEL,
++ LOCKDOWN_DBG_READ_KERNEL,
+ LOCKDOWN_PERF,
+ LOCKDOWN_TRACEFS,
+ LOCKDOWN_XMON_RW,
+diff --git a/include/net/ip.h b/include/net/ip.h
+index b51bae43b0ddb..9fba950fdf12e 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -56,6 +56,7 @@ struct inet_skb_parm {
+ #define IPSKB_DOREDIRECT BIT(5)
+ #define IPSKB_FRAG_PMTU BIT(6)
+ #define IPSKB_L3SLAVE BIT(7)
++#define IPSKB_NOPOLICY BIT(8)
+
+ u16 frag_max_size;
+ };
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index 947733a639a6f..bd7c3be4af5d7 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -66,11 +66,7 @@ struct netns_xfrm {
+ int sysctl_larval_drop;
+ u32 sysctl_acq_expires;
+
+- u8 policy_default;
+-#define XFRM_POL_DEFAULT_IN 1
+-#define XFRM_POL_DEFAULT_OUT 2
+-#define XFRM_POL_DEFAULT_FWD 4
+-#define XFRM_POL_DEFAULT_MASK 7
++ u8 policy_default[XFRM_POLICY_MAX];
+
+ #ifdef CONFIG_SYSCTL
+ struct ctl_table_header *sysctl_hdr;
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 76aa6f11a5409..d2efddce65d46 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -1081,24 +1081,29 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un
+ }
+
+ #ifdef CONFIG_XFRM
+-static inline bool
+-xfrm_default_allow(struct net *net, int dir)
+-{
+- u8 def = net->xfrm.policy_default;
+-
+- switch (dir) {
+- case XFRM_POLICY_IN:
+- return def & XFRM_POL_DEFAULT_IN ? false : true;
+- case XFRM_POLICY_OUT:
+- return def & XFRM_POL_DEFAULT_OUT ? false : true;
+- case XFRM_POLICY_FWD:
+- return def & XFRM_POL_DEFAULT_FWD ? false : true;
+- }
++int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb,
++ unsigned short family);
++
++static inline bool __xfrm_check_nopolicy(struct net *net, struct sk_buff *skb,
++ int dir)
++{
++ if (!net->xfrm.policy_count[dir] && !secpath_exists(skb))
++ return net->xfrm.policy_default[dir] == XFRM_USERPOLICY_ACCEPT;
++
+ return false;
+ }
+
+-int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb,
+- unsigned short family);
++static inline bool __xfrm_check_dev_nopolicy(struct sk_buff *skb,
++ int dir, unsigned short family)
++{
++ if (dir != XFRM_POLICY_OUT && family == AF_INET) {
++ /* same dst may be used for traffic originating from
++ * devices with different policy settings.
++ */
++ return IPCB(skb)->flags & IPSKB_NOPOLICY;
++ }
++ return skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY);
++}
+
+ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
+ struct sk_buff *skb,
+@@ -1110,13 +1115,9 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
+ if (sk && sk->sk_policy[XFRM_POLICY_IN])
+ return __xfrm_policy_check(sk, ndir, skb, family);
+
+- if (xfrm_default_allow(net, dir))
+- return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) ||
+- (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) ||
+- __xfrm_policy_check(sk, ndir, skb, family);
+- else
+- return (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) ||
+- __xfrm_policy_check(sk, ndir, skb, family);
++ return __xfrm_check_nopolicy(net, skb, dir) ||
++ __xfrm_check_dev_nopolicy(skb, dir, family) ||
++ __xfrm_policy_check(sk, ndir, skb, family);
+ }
+
+ static inline int xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, unsigned short family)
+@@ -1168,13 +1169,12 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+ {
+ struct net *net = dev_net(skb->dev);
+
+- if (xfrm_default_allow(net, XFRM_POLICY_OUT))
+- return !net->xfrm.policy_count[XFRM_POLICY_OUT] ||
+- (skb_dst(skb)->flags & DST_NOXFRM) ||
+- __xfrm_route_forward(skb, family);
+- else
+- return (skb_dst(skb)->flags & DST_NOXFRM) ||
+- __xfrm_route_forward(skb, family);
++ if (!net->xfrm.policy_count[XFRM_POLICY_OUT] &&
++ net->xfrm.policy_default[XFRM_POLICY_OUT] == XFRM_USERPOLICY_ACCEPT)
++ return true;
++
++ return (skb_dst(skb)->flags & DST_NOXFRM) ||
++ __xfrm_route_forward(skb, family);
+ }
+
+ static inline int xfrm4_route_forward(struct sk_buff *skb)
+diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
+index 8e4a2ca0bcbf7..b1523cb8ab307 100644
+--- a/include/uapi/linux/dma-buf.h
++++ b/include/uapi/linux/dma-buf.h
+@@ -92,7 +92,7 @@ struct dma_buf_sync {
+ * between them in actual uapi, they're just different numbers.
+ */
+ #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)
+-#define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32)
+-#define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64)
++#define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, __u32)
++#define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, __u64)
+
+ #endif
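+/*
+ * Why this matters (a sketch of the rule, not new code): uapi headers are
+ * consumed by userspace builds, where the kernel-internal u32/u64 types do
+ * not exist; only the exported __u32/__u64 from <linux/types.h> are
+ * guaranteed to be visible there.
+ */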
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index ea2ee1181921e..f3a2abd6d1a19 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1959,6 +1959,12 @@ void __audit_uring_exit(int success, long code)
+ {
+ struct audit_context *ctx = audit_context();
+
++ if (ctx->dummy) {
++ if (ctx->context != AUDIT_CTX_URING)
++ return;
++ goto out;
++ }
++
+ if (ctx->context == AUDIT_CTX_SYSCALL) {
+ /*
+ * NOTE: See the note in __audit_uring_entry() about the case
+diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c
+index da06a5553835b..7beceb447211d 100644
+--- a/kernel/debug/debug_core.c
++++ b/kernel/debug/debug_core.c
+@@ -53,6 +53,7 @@
+ #include <linux/vmacache.h>
+ #include <linux/rcupdate.h>
+ #include <linux/irq.h>
++#include <linux/security.h>
+
+ #include <asm/cacheflush.h>
+ #include <asm/byteorder.h>
+@@ -752,6 +753,29 @@ cpu_master_loop:
+ continue;
+ kgdb_connected = 0;
+ } else {
++ /*
++ * This is a brutal way to interfere with the debugger
++ * and prevent gdb being used to poke at kernel memory.
++ * This could cause trouble if lockdown is applied when
++ * there is already an active gdb session. For now the
++ * answer is simply "don't do that". Typically lockdown
++ * *will* be applied before the debug core gets started
++ * so only developers using kgdb for fairly advanced
++ * early kernel debug can be bitten by this. Hopefully
++ * they are sophisticated enough to take care of
++ * themselves, especially with help from the lockdown
++ * message printed on the console!
++ */
++ if (security_locked_down(LOCKDOWN_DBG_WRITE_KERNEL)) {
++ if (IS_ENABLED(CONFIG_KGDB_KDB)) {
++ /* Switch back to kdb if possible... */
++ dbg_kdb_mode = 1;
++ continue;
++ } else {
++ /* ... otherwise just bail */
++ break;
++ }
++ }
+ error = gdb_serial_stub(ks);
+ }
+
+diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
+index 0852a537dad4c..ead4da9471270 100644
+--- a/kernel/debug/kdb/kdb_main.c
++++ b/kernel/debug/kdb/kdb_main.c
+@@ -45,6 +45,7 @@
+ #include <linux/proc_fs.h>
+ #include <linux/uaccess.h>
+ #include <linux/slab.h>
++#include <linux/security.h>
+ #include "kdb_private.h"
+
+ #undef MODULE_PARAM_PREFIX
+@@ -166,10 +167,62 @@ struct task_struct *kdb_curr_task(int cpu)
+ }
+
+ /*
+- * Check whether the flags of the current command and the permissions
+- * of the kdb console has allow a command to be run.
++ * Update the permissions flags (kdb_cmd_enabled) to match the
++ * current lockdown state.
++ *
++ * Within this function the calls to security_locked_down() are "lazy". We
++ * avoid calling them if the current value of kdb_cmd_enabled already excludes
++ * flags that might be subject to lockdown. Additionally we deliberately check
++ * the lockdown flags independently (even though read lockdown implies write
++ * lockdown) since that results in both simpler code and clearer messages to
++ * the user on first-time debugger entry.
++ *
++ * The permission masks during a read+write lockdown permit the following
++ * flags: INSPECT, SIGNAL, REBOOT (and ALWAYS_SAFE).
++ *
++ * The INSPECT commands are not blocked during lockdown because they are
++ * not arbitrary memory reads. INSPECT covers the backtrace family (sometimes
++ * forcing them to have no arguments) and lsmod. These commands do expose
++ * some kernel state but do not allow the developer seated at the console to
++ * choose what state is reported. SIGNAL and REBOOT should not be controversial,
++ * given these are allowed for root during lockdown already.
++ */
++static void kdb_check_for_lockdown(void)
++{
++ const int write_flags = KDB_ENABLE_MEM_WRITE |
++ KDB_ENABLE_REG_WRITE |
++ KDB_ENABLE_FLOW_CTRL;
++ const int read_flags = KDB_ENABLE_MEM_READ |
++ KDB_ENABLE_REG_READ;
++
++ bool need_to_lockdown_write = false;
++ bool need_to_lockdown_read = false;
++
++ if (kdb_cmd_enabled & (KDB_ENABLE_ALL | write_flags))
++ need_to_lockdown_write =
++ security_locked_down(LOCKDOWN_DBG_WRITE_KERNEL);
++
++ if (kdb_cmd_enabled & (KDB_ENABLE_ALL | read_flags))
++ need_to_lockdown_read =
++ security_locked_down(LOCKDOWN_DBG_READ_KERNEL);
++
++ /* De-compose KDB_ENABLE_ALL if required */
++ if (need_to_lockdown_write || need_to_lockdown_read)
++ if (kdb_cmd_enabled & KDB_ENABLE_ALL)
++ kdb_cmd_enabled = KDB_ENABLE_MASK & ~KDB_ENABLE_ALL;
++
++ if (need_to_lockdown_write)
++ kdb_cmd_enabled &= ~write_flags;
++
++ if (need_to_lockdown_read)
++ kdb_cmd_enabled &= ~read_flags;
++}
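++
++/*
++ * Net effect under a full read+write lockdown, written out (sketch):
++ *
++ *	kdb_cmd_enabled = (KDB_ENABLE_MASK & ~KDB_ENABLE_ALL)
++ *			  & ~write_flags & ~read_flags;
++ *
++ * which leaves the INSPECT, SIGNAL, REBOOT and ALWAYS_SAFE classes
++ * intact, matching the comment above.
++ */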
++
++/*
++ * Check whether the flags of the current command, the permissions of the kdb
++ * console and the lockdown state allow a command to be run.
+ */
+-static inline bool kdb_check_flags(kdb_cmdflags_t flags, int permissions,
++static bool kdb_check_flags(kdb_cmdflags_t flags, int permissions,
+ bool no_args)
+ {
+ /* permissions comes from userspace so needs massaging slightly */
+@@ -1180,6 +1233,9 @@ static int kdb_local(kdb_reason_t reason, int error, struct pt_regs *regs,
+ kdb_curr_task(raw_smp_processor_id());
+
+ KDB_DEBUG_STATE("kdb_local 1", reason);
++
++ kdb_check_for_lockdown();
++
+ kdb_go_count = 0;
+ if (reason == KDB_REASON_DEBUG) {
+ /* special case below */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index baa0fe350246f..2d7a23a7507b6 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -12327,6 +12327,9 @@ SYSCALL_DEFINE5(perf_event_open,
+ * Do not allow to attach to a group in a different task
+ * or CPU context. If we're moving SW events, we'll fix
+ * this up later, so allow that.
++ *
++ * Racy, not holding group_leader->ctx->mutex, see comment with
++ * perf_event_ctx_lock().
+ */
+ if (!move_group && group_leader->ctx != ctx)
+ goto err_context;
+@@ -12392,6 +12395,7 @@ SYSCALL_DEFINE5(perf_event_open,
+ } else {
+ perf_event_ctx_unlock(group_leader, gctx);
+ move_group = 0;
++ goto not_move_group;
+ }
+ }
+
+@@ -12408,7 +12412,17 @@ SYSCALL_DEFINE5(perf_event_open,
+ }
+ } else {
+ mutex_lock(&ctx->mutex);
++
++ /*
++ * Now that we hold ctx->lock, (re)validate group_leader->ctx == ctx,
++ * see the group_leader && !move_group test earlier.
++ */
++ if (group_leader && group_leader->ctx != ctx) {
++ err = -EINVAL;
++ goto err_locked;
++ }
+ }
++not_move_group:
+
+ if (ctx->task == TASK_TOMBSTONE) {
+ err = -ESRCH;
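+/*
+ * Shape of the fix above, boiled down (sketch): a double-checked
+ * validation of the group leader's context.
+ *
+ *	if (group_leader->ctx != ctx)        // unlocked, advisory
+ *		goto err_context;
+ *	mutex_lock(&ctx->mutex);
+ *	if (group_leader->ctx != ctx)        // authoritative re-check
+ *		goto err_locked;
+ */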
+diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
+index b50382f957c12..6743c8a0fe8e1 100644
+--- a/net/bridge/br_input.c
++++ b/net/bridge/br_input.c
+@@ -39,6 +39,13 @@ static int br_pass_frame_up(struct sk_buff *skb)
+ dev_sw_netstats_rx_add(brdev, skb->len);
+
+ vg = br_vlan_group_rcu(br);
++
++ /* Reset the offload_fwd_mark because there could be a stacked
++ * bridge above, and it should not think this bridge it doing
++ * that bridge's work forwarding out its ports.
++ */
++ br_switchdev_frame_unmark(skb);
++
+ /* Bridge is just like any other port. Make sure the
+ * packet is allowed except in promisc mode when someone
+ * may be running packet capture.
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 1c5815530e0dd..3814e1d50a446 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -537,43 +537,6 @@ static void request_init(struct ceph_osd_request *req)
+ target_init(&req->r_t);
+ }
+
+-/*
+- * This is ugly, but it allows us to reuse linger registration and ping
+- * requests, keeping the structure of the code around send_linger{_ping}()
+- * reasonable. Setting up a min_nr=2 mempool for each linger request
+- * and dealing with copying ops (this blasts req only, watch op remains
+- * intact) isn't any better.
+- */
+-static void request_reinit(struct ceph_osd_request *req)
+-{
+- struct ceph_osd_client *osdc = req->r_osdc;
+- bool mempool = req->r_mempool;
+- unsigned int num_ops = req->r_num_ops;
+- u64 snapid = req->r_snapid;
+- struct ceph_snap_context *snapc = req->r_snapc;
+- bool linger = req->r_linger;
+- struct ceph_msg *request_msg = req->r_request;
+- struct ceph_msg *reply_msg = req->r_reply;
+-
+- dout("%s req %p\n", __func__, req);
+- WARN_ON(kref_read(&req->r_kref) != 1);
+- request_release_checks(req);
+-
+- WARN_ON(kref_read(&request_msg->kref) != 1);
+- WARN_ON(kref_read(&reply_msg->kref) != 1);
+- target_destroy(&req->r_t);
+-
+- request_init(req);
+- req->r_osdc = osdc;
+- req->r_mempool = mempool;
+- req->r_num_ops = num_ops;
+- req->r_snapid = snapid;
+- req->r_snapc = snapc;
+- req->r_linger = linger;
+- req->r_request = request_msg;
+- req->r_reply = reply_msg;
+-}
+-
+ struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc,
+ struct ceph_snap_context *snapc,
+ unsigned int num_ops,
+@@ -918,14 +881,30 @@ EXPORT_SYMBOL(osd_req_op_xattr_init);
+ * @watch_opcode: CEPH_OSD_WATCH_OP_*
+ */
+ static void osd_req_op_watch_init(struct ceph_osd_request *req, int which,
+- u64 cookie, u8 watch_opcode)
++ u8 watch_opcode, u64 cookie, u32 gen)
+ {
+ struct ceph_osd_req_op *op;
+
+ op = osd_req_op_init(req, which, CEPH_OSD_OP_WATCH, 0);
+ op->watch.cookie = cookie;
+ op->watch.op = watch_opcode;
+- op->watch.gen = 0;
++ op->watch.gen = gen;
++}
++
++/*
++ * prot_ver, timeout and notify payload (may be empty) should already be
++ * encoded in @request_pl
++ */
++static void osd_req_op_notify_init(struct ceph_osd_request *req, int which,
++ u64 cookie, struct ceph_pagelist *request_pl)
++{
++ struct ceph_osd_req_op *op;
++
++ op = osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY, 0);
++ op->notify.cookie = cookie;
++
++ ceph_osd_data_pagelist_init(&op->notify.request_data, request_pl);
++ op->indata_len = request_pl->length;
+ }
+
+ /*
+@@ -2727,10 +2706,13 @@ static void linger_release(struct kref *kref)
+ WARN_ON(!list_empty(&lreq->pending_lworks));
+ WARN_ON(lreq->osd);
+
+- if (lreq->reg_req)
+- ceph_osdc_put_request(lreq->reg_req);
+- if (lreq->ping_req)
+- ceph_osdc_put_request(lreq->ping_req);
++ if (lreq->request_pl)
++ ceph_pagelist_release(lreq->request_pl);
++ if (lreq->notify_id_pages)
++ ceph_release_page_vector(lreq->notify_id_pages, 1);
++
++ ceph_osdc_put_request(lreq->reg_req);
++ ceph_osdc_put_request(lreq->ping_req);
+ target_destroy(&lreq->t);
+ kfree(lreq);
+ }
+@@ -2999,6 +2981,12 @@ static void linger_commit_cb(struct ceph_osd_request *req)
+ struct ceph_osd_linger_request *lreq = req->r_priv;
+
+ mutex_lock(&lreq->lock);
++ if (req != lreq->reg_req) {
++ dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n",
++ __func__, lreq, lreq->linger_id, req, lreq->reg_req);
++ goto out;
++ }
++
+ dout("%s lreq %p linger_id %llu result %d\n", __func__, lreq,
+ lreq->linger_id, req->r_result);
+ linger_reg_commit_complete(lreq, req->r_result);
+@@ -3022,6 +3010,7 @@ static void linger_commit_cb(struct ceph_osd_request *req)
+ }
+ }
+
++out:
+ mutex_unlock(&lreq->lock);
+ linger_put(lreq);
+ }
+@@ -3044,6 +3033,12 @@ static void linger_reconnect_cb(struct ceph_osd_request *req)
+ struct ceph_osd_linger_request *lreq = req->r_priv;
+
+ mutex_lock(&lreq->lock);
++ if (req != lreq->reg_req) {
++ dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n",
++ __func__, lreq, lreq->linger_id, req, lreq->reg_req);
++ goto out;
++ }
++
+ dout("%s lreq %p linger_id %llu result %d last_error %d\n", __func__,
+ lreq, lreq->linger_id, req->r_result, lreq->last_error);
+ if (req->r_result < 0) {
+@@ -3053,46 +3048,64 @@ static void linger_reconnect_cb(struct ceph_osd_request *req)
+ }
+ }
+
++out:
+ mutex_unlock(&lreq->lock);
+ linger_put(lreq);
+ }
+
+ static void send_linger(struct ceph_osd_linger_request *lreq)
+ {
+- struct ceph_osd_request *req = lreq->reg_req;
+- struct ceph_osd_req_op *op = &req->r_ops[0];
++ struct ceph_osd_client *osdc = lreq->osdc;
++ struct ceph_osd_request *req;
++ int ret;
+
+- verify_osdc_wrlocked(req->r_osdc);
++ verify_osdc_wrlocked(osdc);
++ mutex_lock(&lreq->lock);
+ dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id);
+
+- if (req->r_osd)
+- cancel_linger_request(req);
++ if (lreq->reg_req) {
++ if (lreq->reg_req->r_osd)
++ cancel_linger_request(lreq->reg_req);
++ ceph_osdc_put_request(lreq->reg_req);
++ }
++
++ req = ceph_osdc_alloc_request(osdc, NULL, 1, true, GFP_NOIO);
++ BUG_ON(!req);
+
+- request_reinit(req);
+ target_copy(&req->r_t, &lreq->t);
+ req->r_mtime = lreq->mtime;
+
+- mutex_lock(&lreq->lock);
+ if (lreq->is_watch && lreq->committed) {
+- WARN_ON(op->op != CEPH_OSD_OP_WATCH ||
+- op->watch.cookie != lreq->linger_id);
+- op->watch.op = CEPH_OSD_WATCH_OP_RECONNECT;
+- op->watch.gen = ++lreq->register_gen;
++ osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_RECONNECT,
++ lreq->linger_id, ++lreq->register_gen);
+ dout("lreq %p reconnect register_gen %u\n", lreq,
+- op->watch.gen);
++ req->r_ops[0].watch.gen);
+ req->r_callback = linger_reconnect_cb;
+ } else {
+- if (!lreq->is_watch)
++ if (lreq->is_watch) {
++ osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_WATCH,
++ lreq->linger_id, 0);
++ } else {
+ lreq->notify_id = 0;
+- else
+- WARN_ON(op->watch.op != CEPH_OSD_WATCH_OP_WATCH);
++
++ refcount_inc(&lreq->request_pl->refcnt);
++ osd_req_op_notify_init(req, 0, lreq->linger_id,
++ lreq->request_pl);
++ ceph_osd_data_pages_init(
++ osd_req_op_data(req, 0, notify, response_data),
++ lreq->notify_id_pages, PAGE_SIZE, 0, false, false);
++ }
+ dout("lreq %p register\n", lreq);
+ req->r_callback = linger_commit_cb;
+ }
+- mutex_unlock(&lreq->lock);
++
++ ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
++ BUG_ON(ret);
+
+ req->r_priv = linger_get(lreq);
+ req->r_linger = true;
++ lreq->reg_req = req;
++ mutex_unlock(&lreq->lock);
+
+ submit_request(req, true);
+ }
+@@ -3102,6 +3115,12 @@ static void linger_ping_cb(struct ceph_osd_request *req)
+ struct ceph_osd_linger_request *lreq = req->r_priv;
+
+ mutex_lock(&lreq->lock);
++ if (req != lreq->ping_req) {
++ dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n",
++ __func__, lreq, lreq->linger_id, req, lreq->ping_req);
++ goto out;
++ }
++
+ dout("%s lreq %p linger_id %llu result %d ping_sent %lu last_error %d\n",
+ __func__, lreq, lreq->linger_id, req->r_result, lreq->ping_sent,
+ lreq->last_error);
+@@ -3117,6 +3136,7 @@ static void linger_ping_cb(struct ceph_osd_request *req)
+ lreq->register_gen, req->r_ops[0].watch.gen);
+ }
+
++out:
+ mutex_unlock(&lreq->lock);
+ linger_put(lreq);
+ }
+@@ -3124,8 +3144,8 @@ static void linger_ping_cb(struct ceph_osd_request *req)
+ static void send_linger_ping(struct ceph_osd_linger_request *lreq)
+ {
+ struct ceph_osd_client *osdc = lreq->osdc;
+- struct ceph_osd_request *req = lreq->ping_req;
+- struct ceph_osd_req_op *op = &req->r_ops[0];
++ struct ceph_osd_request *req;
++ int ret;
+
+ if (ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD)) {
+ dout("%s PAUSERD\n", __func__);
+@@ -3137,19 +3157,26 @@ static void send_linger_ping(struct ceph_osd_linger_request *lreq)
+ __func__, lreq, lreq->linger_id, lreq->ping_sent,
+ lreq->register_gen);
+
+- if (req->r_osd)
+- cancel_linger_request(req);
++ if (lreq->ping_req) {
++ if (lreq->ping_req->r_osd)
++ cancel_linger_request(lreq->ping_req);
++ ceph_osdc_put_request(lreq->ping_req);
++ }
+
+- request_reinit(req);
+- target_copy(&req->r_t, &lreq->t);
++ req = ceph_osdc_alloc_request(osdc, NULL, 1, true, GFP_NOIO);
++ BUG_ON(!req);
+
+- WARN_ON(op->op != CEPH_OSD_OP_WATCH ||
+- op->watch.cookie != lreq->linger_id ||
+- op->watch.op != CEPH_OSD_WATCH_OP_PING);
+- op->watch.gen = lreq->register_gen;
++ target_copy(&req->r_t, &lreq->t);
++ osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_PING, lreq->linger_id,
++ lreq->register_gen);
+ req->r_callback = linger_ping_cb;
++
++ ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
++ BUG_ON(ret);
++
+ req->r_priv = linger_get(lreq);
+ req->r_linger = true;
++ lreq->ping_req = req;
+
+ ceph_osdc_get_request(req);
+ account_request(req);
+@@ -3165,12 +3192,6 @@ static void linger_submit(struct ceph_osd_linger_request *lreq)
+
+ down_write(&osdc->lock);
+ linger_register(lreq);
+- if (lreq->is_watch) {
+- lreq->reg_req->r_ops[0].watch.cookie = lreq->linger_id;
+- lreq->ping_req->r_ops[0].watch.cookie = lreq->linger_id;
+- } else {
+- lreq->reg_req->r_ops[0].notify.cookie = lreq->linger_id;
+- }
+
+ calc_target(osdc, &lreq->t, false);
+ osd = lookup_create_osd(osdc, lreq->t.osd, true);
+@@ -3202,9 +3223,9 @@ static void cancel_linger_map_check(struct ceph_osd_linger_request *lreq)
+ */
+ static void __linger_cancel(struct ceph_osd_linger_request *lreq)
+ {
+- if (lreq->is_watch && lreq->ping_req->r_osd)
++ if (lreq->ping_req && lreq->ping_req->r_osd)
+ cancel_linger_request(lreq->ping_req);
+- if (lreq->reg_req->r_osd)
++ if (lreq->reg_req && lreq->reg_req->r_osd)
+ cancel_linger_request(lreq->reg_req);
+ cancel_linger_map_check(lreq);
+ unlink_linger(lreq->osd, lreq);
+@@ -4653,43 +4674,6 @@ again:
+ }
+ EXPORT_SYMBOL(ceph_osdc_sync);
+
+-static struct ceph_osd_request *
+-alloc_linger_request(struct ceph_osd_linger_request *lreq)
+-{
+- struct ceph_osd_request *req;
+-
+- req = ceph_osdc_alloc_request(lreq->osdc, NULL, 1, false, GFP_NOIO);
+- if (!req)
+- return NULL;
+-
+- ceph_oid_copy(&req->r_base_oid, &lreq->t.base_oid);
+- ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc);
+- return req;
+-}
+-
+-static struct ceph_osd_request *
+-alloc_watch_request(struct ceph_osd_linger_request *lreq, u8 watch_opcode)
+-{
+- struct ceph_osd_request *req;
+-
+- req = alloc_linger_request(lreq);
+- if (!req)
+- return NULL;
+-
+- /*
+- * Pass 0 for cookie because we don't know it yet, it will be
+- * filled in by linger_submit().
+- */
+- osd_req_op_watch_init(req, 0, 0, watch_opcode);
+-
+- if (ceph_osdc_alloc_messages(req, GFP_NOIO)) {
+- ceph_osdc_put_request(req);
+- return NULL;
+- }
+-
+- return req;
+-}
+-
+ /*
+ * Returns a handle, caller owns a ref.
+ */
+@@ -4719,18 +4703,6 @@ ceph_osdc_watch(struct ceph_osd_client *osdc,
+ lreq->t.flags = CEPH_OSD_FLAG_WRITE;
+ ktime_get_real_ts64(&lreq->mtime);
+
+- lreq->reg_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_WATCH);
+- if (!lreq->reg_req) {
+- ret = -ENOMEM;
+- goto err_put_lreq;
+- }
+-
+- lreq->ping_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_PING);
+- if (!lreq->ping_req) {
+- ret = -ENOMEM;
+- goto err_put_lreq;
+- }
+-
+ linger_submit(lreq);
+ ret = linger_reg_commit_wait(lreq);
+ if (ret) {
+@@ -4768,8 +4740,8 @@ int ceph_osdc_unwatch(struct ceph_osd_client *osdc,
+ ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc);
+ req->r_flags = CEPH_OSD_FLAG_WRITE;
+ ktime_get_real_ts64(&req->r_mtime);
+- osd_req_op_watch_init(req, 0, lreq->linger_id,
+- CEPH_OSD_WATCH_OP_UNWATCH);
++ osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_UNWATCH,
++ lreq->linger_id, 0);
+
+ ret = ceph_osdc_alloc_messages(req, GFP_NOIO);
+ if (ret)
+@@ -4855,35 +4827,6 @@ out_put_req:
+ }
+ EXPORT_SYMBOL(ceph_osdc_notify_ack);
+
+-static int osd_req_op_notify_init(struct ceph_osd_request *req, int which,
+- u64 cookie, u32 prot_ver, u32 timeout,
+- void *payload, u32 payload_len)
+-{
+- struct ceph_osd_req_op *op;
+- struct ceph_pagelist *pl;
+- int ret;
+-
+- op = osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY, 0);
+- op->notify.cookie = cookie;
+-
+- pl = ceph_pagelist_alloc(GFP_NOIO);
+- if (!pl)
+- return -ENOMEM;
+-
+- ret = ceph_pagelist_encode_32(pl, 1); /* prot_ver */
+- ret |= ceph_pagelist_encode_32(pl, timeout);
+- ret |= ceph_pagelist_encode_32(pl, payload_len);
+- ret |= ceph_pagelist_append(pl, payload, payload_len);
+- if (ret) {
+- ceph_pagelist_release(pl);
+- return -ENOMEM;
+- }
+-
+- ceph_osd_data_pagelist_init(&op->notify.request_data, pl);
+- op->indata_len = pl->length;
+- return 0;
+-}
+-
+ /*
+ * @timeout: in seconds
+ *
+@@ -4902,7 +4845,6 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc,
+ size_t *preply_len)
+ {
+ struct ceph_osd_linger_request *lreq;
+- struct page **pages;
+ int ret;
+
+ WARN_ON(!timeout);
+@@ -4915,41 +4857,35 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc,
+ if (!lreq)
+ return -ENOMEM;
+
+- lreq->preply_pages = preply_pages;
+- lreq->preply_len = preply_len;
+-
+- ceph_oid_copy(&lreq->t.base_oid, oid);
+- ceph_oloc_copy(&lreq->t.base_oloc, oloc);
+- lreq->t.flags = CEPH_OSD_FLAG_READ;
+-
+- lreq->reg_req = alloc_linger_request(lreq);
+- if (!lreq->reg_req) {
++ lreq->request_pl = ceph_pagelist_alloc(GFP_NOIO);
++ if (!lreq->request_pl) {
+ ret = -ENOMEM;
+ goto out_put_lreq;
+ }
+
+- /*
+- * Pass 0 for cookie because we don't know it yet, it will be
+- * filled in by linger_submit().
+- */
+- ret = osd_req_op_notify_init(lreq->reg_req, 0, 0, 1, timeout,
+- payload, payload_len);
+- if (ret)
++ ret = ceph_pagelist_encode_32(lreq->request_pl, 1); /* prot_ver */
++ ret |= ceph_pagelist_encode_32(lreq->request_pl, timeout);
++ ret |= ceph_pagelist_encode_32(lreq->request_pl, payload_len);
++ ret |= ceph_pagelist_append(lreq->request_pl, payload, payload_len);
++ if (ret) {
++ ret = -ENOMEM;
+ goto out_put_lreq;
++ }
+
+ /* for notify_id */
+- pages = ceph_alloc_page_vector(1, GFP_NOIO);
+- if (IS_ERR(pages)) {
+- ret = PTR_ERR(pages);
++ lreq->notify_id_pages = ceph_alloc_page_vector(1, GFP_NOIO);
++ if (IS_ERR(lreq->notify_id_pages)) {
++ ret = PTR_ERR(lreq->notify_id_pages);
++ lreq->notify_id_pages = NULL;
+ goto out_put_lreq;
+ }
+- ceph_osd_data_pages_init(osd_req_op_data(lreq->reg_req, 0, notify,
+- response_data),
+- pages, PAGE_SIZE, 0, false, true);
+
+- ret = ceph_osdc_alloc_messages(lreq->reg_req, GFP_NOIO);
+- if (ret)
+- goto out_put_lreq;
++ lreq->preply_pages = preply_pages;
++ lreq->preply_len = preply_len;
++
++ ceph_oid_copy(&lreq->t.base_oid, oid);
++ ceph_oloc_copy(&lreq->t.base_oloc, oloc);
++ lreq->t.flags = CEPH_OSD_FLAG_READ;
+
+ linger_submit(lreq);
+ ret = linger_reg_commit_wait(lreq);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 91cf709c98b37..5f1ac48122777 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -663,11 +663,11 @@ int dev_fill_forward_path(const struct net_device *dev, const u8 *daddr,
+ const struct net_device *last_dev;
+ struct net_device_path_ctx ctx = {
+ .dev = dev,
+- .daddr = daddr,
+ };
+ struct net_device_path *path;
+ int ret = 0;
+
++ memcpy(ctx.daddr, daddr, sizeof(ctx.daddr));
+ stack->num_paths = 0;
+ while (ctx.dev && ctx.dev->netdev_ops->ndo_fill_forward_path) {
+ last_dev = ctx.dev;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 180fa6a26ad45..708cc9b1b1767 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3896,7 +3896,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ unsigned int delta_len = 0;
+ struct sk_buff *tail = NULL;
+ struct sk_buff *nskb, *tmp;
+- int err;
++ int len_diff, err;
+
+ skb_push(skb, -skb_network_offset(skb) + offset);
+
+@@ -3936,9 +3936,11 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ skb_push(nskb, -skb_network_offset(nskb) + offset);
+
+ skb_release_head_state(nskb);
++ len_diff = skb_network_header_len(nskb) - skb_network_header_len(skb);
+ __copy_skb_header(nskb, skb);
+
+ skb_headers_offset_update(nskb, skb_headroom(nskb) - skb_headroom(skb));
++ nskb->transport_header += len_diff;
+ skb_copy_from_linear_data_offset(skb, -tnl_hlen,
+ nskb->data - tnl_hlen,
+ offset + tnl_hlen);
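+/*
+ * Rationale for the two added lines (sketch): __copy_skb_header() copies
+ * the head skb's transport_header offset into nskb, but a segment in the
+ * GSO list can have a different network-header length than the head (for
+ * instance after the head has been rewritten between address families).
+ * Capturing len_diff first and adding it afterwards keeps nskb's transport
+ * header pointing at nskb's own transport payload.
+ */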
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index eef07b62b2d88..1cdfac733bd8b 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1721,6 +1721,7 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ struct in_device *in_dev = __in_dev_get_rcu(dev);
+ unsigned int flags = RTCF_MULTICAST;
+ struct rtable *rth;
++ bool no_policy;
+ u32 itag = 0;
+ int err;
+
+@@ -1731,8 +1732,12 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ if (our)
+ flags |= RTCF_LOCAL;
+
++ no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY);
++ if (no_policy)
++ IPCB(skb)->flags |= IPSKB_NOPOLICY;
++
+ rth = rt_dst_alloc(dev_net(dev)->loopback_dev, flags, RTN_MULTICAST,
+- IN_DEV_ORCONF(in_dev, NOPOLICY), false);
++ no_policy, false);
+ if (!rth)
+ return -ENOBUFS;
+
+@@ -1791,7 +1796,7 @@ static int __mkroute_input(struct sk_buff *skb,
+ struct rtable *rth;
+ int err;
+ struct in_device *out_dev;
+- bool do_cache;
++ bool do_cache, no_policy;
+ u32 itag = 0;
+
+ /* get a working reference to the output device */
+@@ -1836,6 +1841,10 @@ static int __mkroute_input(struct sk_buff *skb,
+ }
+ }
+
++ no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY);
++ if (no_policy)
++ IPCB(skb)->flags |= IPSKB_NOPOLICY;
++
+ fnhe = find_exception(nhc, daddr);
+ if (do_cache) {
+ if (fnhe)
+@@ -1848,8 +1857,7 @@ static int __mkroute_input(struct sk_buff *skb,
+ }
+ }
+
+- rth = rt_dst_alloc(out_dev->dev, 0, res->type,
+- IN_DEV_ORCONF(in_dev, NOPOLICY),
++ rth = rt_dst_alloc(out_dev->dev, 0, res->type, no_policy,
+ IN_DEV_ORCONF(out_dev, NOXFRM));
+ if (!rth) {
+ err = -ENOBUFS;
+@@ -2224,6 +2232,7 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ struct rtable *rth;
+ struct flowi4 fl4;
+ bool do_cache = true;
++ bool no_policy;
+
+ /* IP on this device is disabled. */
+
+@@ -2341,6 +2350,10 @@ brd_input:
+ RT_CACHE_STAT_INC(in_brd);
+
+ local_input:
++ no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY);
++ if (no_policy)
++ IPCB(skb)->flags |= IPSKB_NOPOLICY;
++
+ do_cache &= res->fi && !itag;
+ if (do_cache) {
+ struct fib_nh_common *nhc = FIB_RES_NHC(*res);
+@@ -2355,7 +2368,7 @@ local_input:
+
+ rth = rt_dst_alloc(ip_rt_get_dev(net, res),
+ flags | RTCF_LOCAL, res->type,
+- IN_DEV_ORCONF(in_dev, NOPOLICY), false);
++ no_policy, false);
+ if (!rth)
+ goto e_nobufs;
+
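+/*
+ * Pattern repeated in the three route.c hunks above (sketch): the
+ * per-device NOPOLICY setting is latched into the skb control block at
+ * route-input time,
+ *
+ *	no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY);
+ *	if (no_policy)
+ *		IPCB(skb)->flags |= IPSKB_NOPOLICY;
+ *
+ * so the xfrm input check can consult the skb itself instead of a dst
+ * that may be shared by devices with different policy settings.
+ */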
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index fd51db3be91c4..92e9d75dba2f4 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -2826,8 +2826,10 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb
+ void *ext_hdrs[SADB_EXT_MAX];
+ int err;
+
+- pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+- BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
++ err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
++ BROADCAST_PROMISC_ONLY, NULL, sock_net(sk));
++ if (err)
++ return err;
+
+ memset(ext_hdrs, 0, sizeof(ext_hdrs));
+ err = parse_exthdrs(skb, hdr, ext_hdrs);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 48d9553dafe37..7e2404fd85b64 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1405,8 +1405,7 @@ static void ieee80211_rx_reorder_ampdu(struct ieee80211_rx_data *rx,
+ goto dont_reorder;
+
+ /* not part of a BA session */
+- if (ack_policy != IEEE80211_QOS_CTL_ACK_POLICY_BLOCKACK &&
+- ack_policy != IEEE80211_QOS_CTL_ACK_POLICY_NORMAL)
++ if (ack_policy == IEEE80211_QOS_CTL_ACK_POLICY_NOACK)
+ goto dont_reorder;
+
+ /* new, potentially un-ordered, ampdu frame - process it */
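+/*
+ * Behavioural note on the hunk above (sketch): only NoAck frames now
+ * bypass the reorder buffer; previously frames with the "no explicit
+ * ack" / PSMP ack policy were also skipped, which could leave reorder
+ * holes for such traffic.
+ */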
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 645dd984fef03..9ac75689a99dc 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -107,7 +107,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ ptr += 2;
+ }
+ if (opsize == TCPOLEN_MPTCP_MPC_ACK_DATA_CSUM) {
+- mp_opt->csum = (__force __sum16)get_unaligned_be16(ptr);
++ mp_opt->csum = get_unaligned((__force __sum16 *)ptr);
+ mp_opt->suboptions |= OPTION_MPTCP_CSUMREQD;
+ ptr += 2;
+ }
+@@ -221,7 +221,7 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+
+ if (opsize == expected_opsize + TCPOLEN_MPTCP_DSS_CHECKSUM) {
+ mp_opt->suboptions |= OPTION_MPTCP_CSUMREQD;
+- mp_opt->csum = (__force __sum16)get_unaligned_be16(ptr);
++ mp_opt->csum = get_unaligned((__force __sum16 *)ptr);
+ ptr += 2;
+ }
+
+@@ -1236,7 +1236,7 @@ static void mptcp_set_rwin(const struct tcp_sock *tp)
+ WRITE_ONCE(msk->rcv_wnd_sent, ack_seq);
+ }
+
+-u16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum)
++__sum16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum)
+ {
+ struct csum_pseudo_header header;
+ __wsum csum;
+@@ -1252,15 +1252,25 @@ u16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum)
+ header.csum = 0;
+
+ csum = csum_partial(&header, sizeof(header), sum);
+- return (__force u16)csum_fold(csum);
++ return csum_fold(csum);
+ }
+
+-static u16 mptcp_make_csum(const struct mptcp_ext *mpext)
++static __sum16 mptcp_make_csum(const struct mptcp_ext *mpext)
+ {
+ return __mptcp_make_csum(mpext->data_seq, mpext->subflow_seq, mpext->data_len,
+ ~csum_unfold(mpext->csum));
+ }
+
++static void put_len_csum(u16 len, __sum16 csum, void *data)
++{
++ __sum16 *sumptr = data + 2;
++ __be16 *ptr = data;
++
++ put_unaligned_be16(len, ptr);
++
++ put_unaligned(csum, sumptr);
++}
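++
++/*
++ * Why a dedicated helper (sketch of the endianness bug it fixes):
++ * mptcp_make_csum() now returns a __sum16, which is already a folded,
++ * network-byte-order value.  The old
++ *
++ *	put_unaligned_be32(mpext->data_len << 16 | csum, ptr);
++ *
++ * treated it as a host-order integer and byte-swapped it again on
++ * little-endian machines; storing the be16 length and the raw __sum16
++ * separately keeps the checksum bytes intact.
++ */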
++
+ void mptcp_write_options(__be32 *ptr, const struct tcp_sock *tp,
+ struct mptcp_out_options *opts)
+ {
+@@ -1328,8 +1338,9 @@ void mptcp_write_options(__be32 *ptr, const struct tcp_sock *tp,
+ put_unaligned_be32(mpext->subflow_seq, ptr);
+ ptr += 1;
+ if (opts->csum_reqd) {
+- put_unaligned_be32(mpext->data_len << 16 |
+- mptcp_make_csum(mpext), ptr);
++ put_len_csum(mpext->data_len,
++ mptcp_make_csum(mpext),
++ ptr);
+ } else {
+ put_unaligned_be32(mpext->data_len << 16 |
+ TCPOPT_NOP << 8 | TCPOPT_NOP, ptr);
+@@ -1376,11 +1387,12 @@ void mptcp_write_options(__be32 *ptr, const struct tcp_sock *tp,
+ goto mp_capable_done;
+
+ if (opts->csum_reqd) {
+- put_unaligned_be32(opts->data_len << 16 |
+- __mptcp_make_csum(opts->data_seq,
+- opts->subflow_seq,
+- opts->data_len,
+- ~csum_unfold(opts->csum)), ptr);
++ put_len_csum(opts->data_len,
++ __mptcp_make_csum(opts->data_seq,
++ opts->subflow_seq,
++ opts->data_len,
++ ~csum_unfold(opts->csum)),
++ ptr);
+ } else {
+ put_unaligned_be32(opts->data_len << 16 |
+ TCPOPT_NOP << 8 | TCPOPT_NOP, ptr);
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index 7bea318ac5f28..1eb83cbe8aae0 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -178,14 +178,13 @@ void mptcp_pm_subflow_check_next(struct mptcp_sock *msk, const struct sock *ssk,
+ struct mptcp_pm_data *pm = &msk->pm;
+ bool update_subflows;
+
+- update_subflows = (ssk->sk_state == TCP_CLOSE) &&
+- (subflow->request_join || subflow->mp_join);
++ update_subflows = subflow->request_join || subflow->mp_join;
+ if (!READ_ONCE(pm->work_pending) && !update_subflows)
+ return;
+
+ spin_lock_bh(&pm->lock);
+ if (update_subflows)
+- pm->subflows--;
++ __mptcp_pm_close_subflow(msk);
+
+ /* Even if this subflow is not really established, tell the PM to try
+ * to pick the next ones, if possible.
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 85317ce38e3fa..aec767ee047ab 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -725,7 +725,7 @@ void mptcp_token_destroy(struct mptcp_sock *msk);
+ void mptcp_crypto_key_sha(u64 key, u32 *token, u64 *idsn);
+
+ void mptcp_crypto_hmac_sha(u64 key1, u64 key2, u8 *msg, int len, void *hmac);
+-u16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum);
++__sum16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum);
+
+ void __init mptcp_pm_init(void);
+ void mptcp_pm_data_init(struct mptcp_sock *msk);
+@@ -835,6 +835,20 @@ unsigned int mptcp_pm_get_add_addr_accept_max(struct mptcp_sock *msk);
+ unsigned int mptcp_pm_get_subflows_max(struct mptcp_sock *msk);
+ unsigned int mptcp_pm_get_local_addr_max(struct mptcp_sock *msk);
+
++/* called under PM lock */
++static inline void __mptcp_pm_close_subflow(struct mptcp_sock *msk)
++{
++ if (--msk->pm.subflows < mptcp_pm_get_subflows_max(msk))
++ WRITE_ONCE(msk->pm.accept_subflow, true);
++}
++
++static inline void mptcp_pm_close_subflow(struct mptcp_sock *msk)
++{
++ spin_lock_bh(&msk->pm.lock);
++ __mptcp_pm_close_subflow(msk);
++ spin_unlock_bh(&msk->pm.lock);
++}
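++
++/*
++ * Usage sketch for the pair above: callers already holding pm->lock, such
++ * as mptcp_pm_subflow_check_next() in this patch, use the __ variant; the
++ * locking wrapper serves paths like the __mptcp_subflow_connect() error
++ * exit added below.
++ */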
++
+ void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk);
+ void mptcp_sockopt_sync_locked(struct mptcp_sock *msk, struct sock *ssk);
+
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index bea47a1180dc2..651f01d13191e 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -846,7 +846,7 @@ static enum mapping_status validate_data_csum(struct sock *ssk, struct sk_buff *
+ {
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ u32 offset, seq, delta;
+- u16 csum;
++ __sum16 csum;
+ int len;
+
+ if (!csum_reqd)
+@@ -1380,20 +1380,20 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
+ struct sockaddr_storage addr;
+ int remote_id = remote->id;
+ int local_id = loc->id;
++ int err = -ENOTCONN;
+ struct socket *sf;
+ struct sock *ssk;
+ u32 remote_token;
+ int addrlen;
+ int ifindex;
+ u8 flags;
+- int err;
+
+ if (!mptcp_is_fully_established(sk))
+- return -ENOTCONN;
++ goto err_out;
+
+ err = mptcp_subflow_create_socket(sk, &sf);
+ if (err)
+- return err;
++ goto err_out;
+
+ ssk = sf->sk;
+ subflow = mptcp_subflow_ctx(ssk);
+@@ -1456,6 +1456,12 @@ failed_unlink:
+ failed:
+ subflow->disposable = 1;
+ sock_release(sf);
++
++err_out:
++ /* we account subflows before the creation, and these failures will not
++ * be caught by sk_state_change()
++ */
++ mptcp_pm_close_subflow(msk);
+ return err;
+ }
+
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index b90eca7a2f22b..9fb407084c506 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -173,12 +173,11 @@ EXPORT_SYMBOL_GPL(flow_offload_route_init);
+
+ static void flow_offload_fixup_tcp(struct ip_ct_tcp *tcp)
+ {
+- tcp->state = TCP_CONNTRACK_ESTABLISHED;
+ tcp->seen[0].td_maxwin = 0;
+ tcp->seen[1].td_maxwin = 0;
+ }
+
+-static void flow_offload_fixup_ct_timeout(struct nf_conn *ct)
++static void flow_offload_fixup_ct(struct nf_conn *ct)
+ {
+ struct net *net = nf_ct_net(ct);
+ int l4num = nf_ct_protonum(ct);
+@@ -187,7 +186,9 @@ static void flow_offload_fixup_ct_timeout(struct nf_conn *ct)
+ if (l4num == IPPROTO_TCP) {
+ struct nf_tcp_net *tn = nf_tcp_pernet(net);
+
+- timeout = tn->timeouts[TCP_CONNTRACK_ESTABLISHED];
++ flow_offload_fixup_tcp(&ct->proto.tcp);
++
++ timeout = tn->timeouts[ct->proto.tcp.state];
+ timeout -= tn->offload_timeout;
+ } else if (l4num == IPPROTO_UDP) {
+ struct nf_udp_net *tn = nf_udp_pernet(net);
+@@ -205,18 +206,6 @@ static void flow_offload_fixup_ct_timeout(struct nf_conn *ct)
+ WRITE_ONCE(ct->timeout, nfct_time_stamp + timeout);
+ }
+
+-static void flow_offload_fixup_ct_state(struct nf_conn *ct)
+-{
+- if (nf_ct_protonum(ct) == IPPROTO_TCP)
+- flow_offload_fixup_tcp(&ct->proto.tcp);
+-}
+-
+-static void flow_offload_fixup_ct(struct nf_conn *ct)
+-{
+- flow_offload_fixup_ct_state(ct);
+- flow_offload_fixup_ct_timeout(ct);
+-}
+-
+ static void flow_offload_route_release(struct flow_offload *flow)
+ {
+ nft_flow_dst_release(flow, FLOW_OFFLOAD_DIR_ORIGINAL);
+@@ -329,8 +318,10 @@ void flow_offload_refresh(struct nf_flowtable *flow_table,
+ u32 timeout;
+
+ timeout = nf_flowtable_time_stamp + flow_offload_get_timeout(flow);
+- if (READ_ONCE(flow->timeout) != timeout)
++ if (timeout - READ_ONCE(flow->timeout) > HZ)
+ WRITE_ONCE(flow->timeout, timeout);
++ else
++ return;
+
+ if (likely(!nf_flowtable_hw_offload(flow_table)))
+ return;
+@@ -353,22 +344,14 @@ static void flow_offload_del(struct nf_flowtable *flow_table,
+ rhashtable_remove_fast(&flow_table->rhashtable,
+ &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].node,
+ nf_flow_offload_rhash_params);
+-
+- clear_bit(IPS_OFFLOAD_BIT, &flow->ct->status);
+-
+- if (nf_flow_has_expired(flow))
+- flow_offload_fixup_ct(flow->ct);
+- else
+- flow_offload_fixup_ct_timeout(flow->ct);
+-
+ flow_offload_free(flow);
+ }
+
+ void flow_offload_teardown(struct flow_offload *flow)
+ {
++ clear_bit(IPS_OFFLOAD_BIT, &flow->ct->status);
+ set_bit(NF_FLOW_TEARDOWN, &flow->flags);
+-
+- flow_offload_fixup_ct_state(flow->ct);
++ flow_offload_fixup_ct(flow->ct);
+ }
+ EXPORT_SYMBOL_GPL(flow_offload_teardown);
+
+@@ -399,7 +382,8 @@ EXPORT_SYMBOL_GPL(flow_offload_lookup);
+
+ static int
+ nf_flow_table_iterate(struct nf_flowtable *flow_table,
+- void (*iter)(struct flow_offload *flow, void *data),
++ void (*iter)(struct nf_flowtable *flowtable,
++ struct flow_offload *flow, void *data),
+ void *data)
+ {
+ struct flow_offload_tuple_rhash *tuplehash;
+@@ -423,7 +407,7 @@ nf_flow_table_iterate(struct nf_flowtable *flow_table,
+
+ flow = container_of(tuplehash, struct flow_offload, tuplehash[0]);
+
+- iter(flow, data);
++ iter(flow_table, flow, data);
+ }
+ rhashtable_walk_stop(&hti);
+ rhashtable_walk_exit(&hti);
+@@ -431,34 +415,12 @@ nf_flow_table_iterate(struct nf_flowtable *flow_table,
+ return err;
+ }
+
+-static bool flow_offload_stale_dst(struct flow_offload_tuple *tuple)
++static void nf_flow_offload_gc_step(struct nf_flowtable *flow_table,
++ struct flow_offload *flow, void *data)
+ {
+- struct dst_entry *dst;
+-
+- if (tuple->xmit_type == FLOW_OFFLOAD_XMIT_NEIGH ||
+- tuple->xmit_type == FLOW_OFFLOAD_XMIT_XFRM) {
+- dst = tuple->dst_cache;
+- if (!dst_check(dst, tuple->dst_cookie))
+- return true;
+- }
+-
+- return false;
+-}
+-
+-static bool nf_flow_has_stale_dst(struct flow_offload *flow)
+-{
+- return flow_offload_stale_dst(&flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple) ||
+- flow_offload_stale_dst(&flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple);
+-}
+-
+-static void nf_flow_offload_gc_step(struct flow_offload *flow, void *data)
+-{
+- struct nf_flowtable *flow_table = data;
+-
+ if (nf_flow_has_expired(flow) ||
+- nf_ct_is_dying(flow->ct) ||
+- nf_flow_has_stale_dst(flow))
+- set_bit(NF_FLOW_TEARDOWN, &flow->flags);
++ nf_ct_is_dying(flow->ct))
++ flow_offload_teardown(flow);
+
+ if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {
+ if (test_bit(NF_FLOW_HW, &flow->flags)) {
+@@ -479,7 +441,7 @@ static void nf_flow_offload_work_gc(struct work_struct *work)
+ struct nf_flowtable *flow_table;
+
+ flow_table = container_of(work, struct nf_flowtable, gc_work.work);
+- nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, flow_table);
++ nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, NULL);
+ queue_delayed_work(system_power_efficient_wq, &flow_table->gc_work, HZ);
+ }
+
+@@ -595,7 +557,8 @@ int nf_flow_table_init(struct nf_flowtable *flowtable)
+ }
+ EXPORT_SYMBOL_GPL(nf_flow_table_init);
+
+-static void nf_flow_table_do_cleanup(struct flow_offload *flow, void *data)
++static void nf_flow_table_do_cleanup(struct nf_flowtable *flow_table,
++ struct flow_offload *flow, void *data)
+ {
+ struct net_device *dev = data;
+
+@@ -637,11 +600,10 @@ void nf_flow_table_free(struct nf_flowtable *flow_table)
+
+ cancel_delayed_work_sync(&flow_table->gc_work);
+ nf_flow_table_iterate(flow_table, nf_flow_table_do_cleanup, NULL);
+- nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, flow_table);
++ nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, NULL);
+ nf_flow_table_offload_flush(flow_table);
+ if (nf_flowtable_hw_offload(flow_table))
+- nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step,
+- flow_table);
++ nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, NULL);
+ rhashtable_destroy(&flow_table->rhashtable);
+ }
+ EXPORT_SYMBOL_GPL(nf_flow_table_free);
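
Two things change in this file: teardown now fixes up the conntrack
state and clears IPS_OFFLOAD in one place, and flow_offload_refresh()
only rewrites the cached timeout when it has drifted by more than HZ,
so the hot path stops dirtying a shared cacheline on every packet. A
compilable sketch of the throttled comparison (assuming jiffies-style
tick units; the unsigned subtraction stays correct across counter
wraparound):

    #include <stdbool.h>

    /* Refresh the stored timeout at most about once per second. */
    static bool should_refresh(unsigned long stored, unsigned long fresh,
                               unsigned long hz)
    {
            return fresh - stored > hz;
    }
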
+diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
+index 6257d87c3a56d..28026467b54cd 100644
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -227,6 +227,15 @@ static bool nf_flow_exceeds_mtu(const struct sk_buff *skb, unsigned int mtu)
+ return true;
+ }
+
++static inline bool nf_flow_dst_check(struct flow_offload_tuple *tuple)
++{
++ if (tuple->xmit_type != FLOW_OFFLOAD_XMIT_NEIGH &&
++ tuple->xmit_type != FLOW_OFFLOAD_XMIT_XFRM)
++ return true;
++
++ return dst_check(tuple->dst_cache, tuple->dst_cookie);
++}
++
+ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
+ const struct nf_hook_state *state,
+ struct dst_entry *dst)
+@@ -346,6 +355,11 @@ nf_flow_offload_ip_hook(void *priv, struct sk_buff *skb,
+ if (nf_flow_state_check(flow, iph->protocol, skb, thoff))
+ return NF_ACCEPT;
+
++ if (!nf_flow_dst_check(&tuplehash->tuple)) {
++ flow_offload_teardown(flow);
++ return NF_ACCEPT;
++ }
++
+ if (skb_try_make_writable(skb, thoff + hdrsize))
+ return NF_DROP;
+
+@@ -582,6 +596,11 @@ nf_flow_offload_ipv6_hook(void *priv, struct sk_buff *skb,
+ if (nf_flow_state_check(flow, ip6h->nexthdr, skb, thoff))
+ return NF_ACCEPT;
+
++ if (!nf_flow_dst_check(&tuplehash->tuple)) {
++ flow_offload_teardown(flow);
++ return NF_ACCEPT;
++ }
++
+ if (skb_try_make_writable(skb, thoff + hdrsize))
+ return NF_DROP;
+
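
With the garbage collector no longer scanning for stale routes (see the
nf_flow_table_core.c hunks above), route validity is now checked per
packet: if dst_check() fails against the saved cookie, the flow is torn
down and the packet falls back to the classic forwarding path, which
picks up a fresh route. A self-contained sketch of that control flow
(types and names here are illustrative assumptions, not the kernel
API):

    #include <stdbool.h>

    struct flow { unsigned long dst_cookie; };

    /* Hypothetical stand-in for dst_check() against the saved cookie. */
    static bool route_still_valid(const struct flow *f)
    {
            return f->dst_cookie != 0;
    }

    enum verdict { FORWARD_FAST, FALL_BACK_SLOW };

    static enum verdict fastpath_rx(struct flow *f)
    {
            if (!route_still_valid(f))
                    return FALL_BACK_SLOW;  /* teardown, retry slow path */
            return FORWARD_FAST;
    }
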
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index 0af34ad414796..aac6db8680d47 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -36,6 +36,15 @@ static void nft_default_forward_path(struct nf_flow_route *route,
+ route->tuple[dir].xmit_type = nft_xmit_type(dst_cache);
+ }
+
++static bool nft_is_valid_ether_device(const struct net_device *dev)
++{
++ if (!dev || (dev->flags & IFF_LOOPBACK) || dev->type != ARPHRD_ETHER ||
++ dev->addr_len != ETH_ALEN || !is_valid_ether_addr(dev->dev_addr))
++ return false;
++
++ return true;
++}
++
+ static int nft_dev_fill_forward_path(const struct nf_flow_route *route,
+ const struct dst_entry *dst_cache,
+ const struct nf_conn *ct,
+@@ -47,6 +56,9 @@ static int nft_dev_fill_forward_path(const struct nf_flow_route *route,
+ struct neighbour *n;
+ u8 nud_state;
+
++ if (!nft_is_valid_ether_device(dev))
++ goto out;
++
+ n = dst_neigh_lookup(dst_cache, daddr);
+ if (!n)
+ return -1;
+@@ -60,6 +72,7 @@ static int nft_dev_fill_forward_path(const struct nf_flow_route *route,
+ if (!(nud_state & NUD_VALID))
+ return -1;
+
++out:
+ return dev_fill_forward_path(dev, ha, stack);
+ }
+
+@@ -78,15 +91,6 @@ struct nft_forward_info {
+ enum flow_offload_xmit_type xmit_type;
+ };
+
+-static bool nft_is_valid_ether_device(const struct net_device *dev)
+-{
+- if (!dev || (dev->flags & IFF_LOOPBACK) || dev->type != ARPHRD_ETHER ||
+- dev->addr_len != ETH_ALEN || !is_valid_ether_addr(dev->dev_addr))
+- return false;
+-
+- return true;
+-}
+-
+ static void nft_dev_path_info(const struct net_device_path_stack *stack,
+ struct nft_forward_info *info,
+ unsigned char *ha, struct nf_flowtable *flowtable)
+@@ -119,7 +123,8 @@ static void nft_dev_path_info(const struct net_device_path_stack *stack,
+ info->indev = NULL;
+ break;
+ }
+- info->outdev = path->dev;
++ if (!info->outdev)
++ info->outdev = path->dev;
+ info->encap[info->num_encaps].id = path->encap.id;
+ info->encap[info->num_encaps].proto = path->encap.proto;
+ info->num_encaps++;
+@@ -293,7 +298,8 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ case IPPROTO_TCP:
+ tcph = skb_header_pointer(pkt->skb, nft_thoff(pkt),
+ sizeof(_tcph), &_tcph);
+- if (unlikely(!tcph || tcph->fin || tcph->rst))
++ if (unlikely(!tcph || tcph->fin || tcph->rst ||
++ !nf_conntrack_tcp_established(ct)))
+ goto out;
+ break;
+ case IPPROTO_UDP:
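
The extra nf_conntrack_tcp_established() test above means only fully
established TCP connections are handed to the flowtable; packets seen
during the handshake or teardown keep flowing through the regular
conntrack path so its state machine stays consistent. A tiny sketch of
the gate (standalone types, not the kernel structs):

    #include <stdbool.h>

    struct tcp_flags { bool fin, rst; };

    static bool may_offload(struct tcp_flags f, bool ct_established)
    {
            return ct_established && !f.fin && !f.rst;
    }
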
+diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c
+index 6055dc9a82aa0..aa5e712adf078 100644
+--- a/net/nfc/nci/data.c
++++ b/net/nfc/nci/data.c
+@@ -118,7 +118,7 @@ static int nci_queue_tx_data_frags(struct nci_dev *ndev,
+
+ skb_frag = nci_skb_alloc(ndev,
+ (NCI_DATA_HDR_SIZE + frag_len),
+- GFP_KERNEL);
++ GFP_ATOMIC);
+ if (skb_frag == NULL) {
+ rc = -ENOMEM;
+ goto free_exit;
+diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
+index 19703a649b5a6..78c4b6addf15a 100644
+--- a/net/nfc/nci/hci.c
++++ b/net/nfc/nci/hci.c
+@@ -153,7 +153,7 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe,
+
+ i = 0;
+ skb = nci_skb_alloc(ndev, conn_info->max_pkt_payload_len +
+- NCI_DATA_HDR_SIZE, GFP_KERNEL);
++ NCI_DATA_HDR_SIZE, GFP_ATOMIC);
+ if (!skb)
+ return -ENOMEM;
+
+@@ -184,7 +184,7 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe,
+ if (i < data_len) {
+ skb = nci_skb_alloc(ndev,
+ conn_info->max_pkt_payload_len +
+- NCI_DATA_HDR_SIZE, GFP_KERNEL);
++ NCI_DATA_HDR_SIZE, GFP_ATOMIC);
+ if (!skb)
+ return -ENOMEM;
+
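
The GFP_KERNEL to GFP_ATOMIC switches above reflect the allocation
context rule: these NCI paths can run with spinlocks held, where
sleeping is forbidden, and GFP_KERNEL may sleep to reclaim memory while
GFP_ATOMIC fails fast instead. A kernel-style fragment of the pattern
(sketch only, reusing the call from the hunk):

    /* Atomic context: must not sleep, so handle failure instead. */
    skb = nci_skb_alloc(ndev, len, GFP_ATOMIC);
    if (!skb)
            return -ENOMEM;
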
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 0eaaf1f45de17..211c757bfc3c4 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -232,6 +232,10 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ for (i = 0; i < p->tcfp_nkeys; ++i) {
+ u32 cur = p->tcfp_keys[i].off;
+
++ /* sanitize the shift value for any later use */
++ p->tcfp_keys[i].shift = min_t(size_t, BITS_PER_TYPE(int) - 1,
++ p->tcfp_keys[i].shift);
++
+ /* The AT option can read a single byte, we can bound the actual
+ * value with uchar max.
+ */
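
The clamp matters because tcfp_keys[i].shift arrives straight from a
netlink attribute, and shifting a 32-bit value by 32 or more is
undefined behaviour in C. A compilable userspace sketch of the same
bound:

    #include <stdint.h>

    #define BITS_PER_TYPE(t)        (sizeof(t) * 8)

    /* Bound an untrusted shift so `val << shift` on an int is defined. */
    static uint8_t sanitize_shift(uint8_t shift)
    {
            uint8_t max = BITS_PER_TYPE(int) - 1;   /* 31 on common ABIs */

            return shift < max ? shift : max;
    }
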
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index dc171ca0d1b12..0c20df052db37 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3128,6 +3128,15 @@ int nl80211_parse_chandef(struct cfg80211_registered_device *rdev,
+ } else if (attrs[NL80211_ATTR_CHANNEL_WIDTH]) {
+ chandef->width =
+ nla_get_u32(attrs[NL80211_ATTR_CHANNEL_WIDTH]);
++ if (chandef->chan->band == NL80211_BAND_S1GHZ) {
++ /* User input error for channel width doesn't match channel */
++ if (chandef->width != ieee80211_s1g_channel_width(chandef->chan)) {
++ NL_SET_ERR_MSG_ATTR(extack,
++ attrs[NL80211_ATTR_CHANNEL_WIDTH],
++ "bad channel width");
++ return -EINVAL;
++ }
++ }
+ if (attrs[NL80211_ATTR_CENTER_FREQ1]) {
+ chandef->center_freq1 =
+ nla_get_u32(attrs[NL80211_ATTR_CENTER_FREQ1]);
+@@ -11564,18 +11573,23 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb,
+ struct cfg80211_bitrate_mask mask;
+ struct cfg80211_registered_device *rdev = info->user_ptr[0];
+ struct net_device *dev = info->user_ptr[1];
++ struct wireless_dev *wdev = dev->ieee80211_ptr;
+ int err;
+
+ if (!rdev->ops->set_bitrate_mask)
+ return -EOPNOTSUPP;
+
++ wdev_lock(wdev);
+ err = nl80211_parse_tx_bitrate_mask(info, info->attrs,
+ NL80211_ATTR_TX_RATES, &mask,
+ dev, true);
+ if (err)
+- return err;
++ goto out;
+
+- return rdev_set_bitrate_mask(rdev, dev, NULL, &mask);
++ err = rdev_set_bitrate_mask(rdev, dev, NULL, &mask);
++out:
++ wdev_unlock(wdev);
++ return err;
+ }
+
+ static int nl80211_register_mgmt(struct sk_buff *skb, struct genl_info *info)
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 4a6d864329106..6d82bd9eaf8c7 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1829,7 +1829,7 @@ int cfg80211_get_ies_channel_number(const u8 *ie, size_t ielen,
+ if (tmp && tmp->datalen >= sizeof(struct ieee80211_s1g_oper_ie)) {
+ struct ieee80211_s1g_oper_ie *s1gop = (void *)tmp->data;
+
+- return s1gop->primary_ch;
++ return s1gop->oper_ch;
+ }
+ } else {
+ tmp = cfg80211_find_elem(WLAN_EID_DS_PARAMS, ie, ielen);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 882526159d3a9..19aa994f5d2c2 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3158,7 +3158,7 @@ ok:
+
+ nopol:
+ if (!(dst_orig->dev->flags & IFF_LOOPBACK) &&
+- !xfrm_default_allow(net, dir)) {
++ net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) {
+ err = -EPERM;
+ goto error;
+ }
+@@ -3569,7 +3569,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ }
+
+ if (!pol) {
+- if (!xfrm_default_allow(net, dir)) {
++ if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) {
+ XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS);
+ return 0;
+ }
+@@ -3629,7 +3629,8 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ }
+ xfrm_nr = ti;
+
+- if (!xfrm_default_allow(net, dir) && !xfrm_nr) {
++ if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK &&
++ !xfrm_nr) {
+ XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
+ goto reject;
+ }
+@@ -4118,6 +4119,9 @@ static int __net_init xfrm_net_init(struct net *net)
+ spin_lock_init(&net->xfrm.xfrm_policy_lock);
+ seqcount_spinlock_init(&net->xfrm.xfrm_policy_hash_generation, &net->xfrm.xfrm_policy_lock);
+ mutex_init(&net->xfrm.xfrm_cfg_mutex);
++ net->xfrm.policy_default[XFRM_POLICY_IN] = XFRM_USERPOLICY_ACCEPT;
++ net->xfrm.policy_default[XFRM_POLICY_FWD] = XFRM_USERPOLICY_ACCEPT;
++ net->xfrm.policy_default[XFRM_POLICY_OUT] = XFRM_USERPOLICY_ACCEPT;
+
+ rv = xfrm_statistics_init(net);
+ if (rv < 0)
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 72b2f173aac8b..64fa8fdd6bbd5 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1994,12 +1994,9 @@ static int xfrm_notify_userpolicy(struct net *net)
+ }
+
+ up = nlmsg_data(nlh);
+- up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ?
+- XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+- up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ?
+- XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+- up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ?
+- XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
++ up->in = net->xfrm.policy_default[XFRM_POLICY_IN];
++ up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD];
++ up->out = net->xfrm.policy_default[XFRM_POLICY_OUT];
+
+ nlmsg_end(skb, nlh);
+
+@@ -2010,26 +2007,26 @@ static int xfrm_notify_userpolicy(struct net *net)
+ return err;
+ }
+
++static bool xfrm_userpolicy_is_valid(__u8 policy)
++{
++ return policy == XFRM_USERPOLICY_BLOCK ||
++ policy == XFRM_USERPOLICY_ACCEPT;
++}
++
+ static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh,
+ struct nlattr **attrs)
+ {
+ struct net *net = sock_net(skb->sk);
+ struct xfrm_userpolicy_default *up = nlmsg_data(nlh);
+
+- if (up->in == XFRM_USERPOLICY_BLOCK)
+- net->xfrm.policy_default |= XFRM_POL_DEFAULT_IN;
+- else if (up->in == XFRM_USERPOLICY_ACCEPT)
+- net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_IN;
++ if (xfrm_userpolicy_is_valid(up->in))
++ net->xfrm.policy_default[XFRM_POLICY_IN] = up->in;
+
+- if (up->fwd == XFRM_USERPOLICY_BLOCK)
+- net->xfrm.policy_default |= XFRM_POL_DEFAULT_FWD;
+- else if (up->fwd == XFRM_USERPOLICY_ACCEPT)
+- net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_FWD;
++ if (xfrm_userpolicy_is_valid(up->fwd))
++ net->xfrm.policy_default[XFRM_POLICY_FWD] = up->fwd;
+
+- if (up->out == XFRM_USERPOLICY_BLOCK)
+- net->xfrm.policy_default |= XFRM_POL_DEFAULT_OUT;
+- else if (up->out == XFRM_USERPOLICY_ACCEPT)
+- net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT;
++ if (xfrm_userpolicy_is_valid(up->out))
++ net->xfrm.policy_default[XFRM_POLICY_OUT] = up->out;
+
+ rt_genid_bump_all(net);
+
+@@ -2059,13 +2056,9 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh,
+ }
+
+ r_up = nlmsg_data(r_nlh);
+-
+- r_up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ?
+- XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+- r_up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ?
+- XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
+- r_up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ?
+- XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT;
++ r_up->in = net->xfrm.policy_default[XFRM_POLICY_IN];
++ r_up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD];
++ r_up->out = net->xfrm.policy_default[XFRM_POLICY_OUT];
+ nlmsg_end(r_skb, r_nlh);
+
+ return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid);
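
Across the xfrm hunks, the single policy_default bitmask (one BLOCK bit
per direction) becomes a small array indexed by direction that stores
the XFRM_USERPOLICY_* value directly, so readers need no bit-to-value
translation and only the two defined values from userspace are ever
accepted. A compilable sketch of the shape (constants inlined here as
assumptions mirroring the uapi values):

    enum { POLICY_IN, POLICY_FWD, POLICY_OUT, POLICY_MAX };
    enum { USERPOLICY_BLOCK = 1, USERPOLICY_ACCEPT = 2 };

    static unsigned char policy_default[POLICY_MAX] = {
            [POLICY_IN]  = USERPOLICY_ACCEPT,
            [POLICY_FWD] = USERPOLICY_ACCEPT,
            [POLICY_OUT] = USERPOLICY_ACCEPT,
    };

    /* Ignore anything other than the two defined values. */
    static void set_default(int dir, unsigned char val)
    {
            if (val == USERPOLICY_BLOCK || val == USERPOLICY_ACCEPT)
                    policy_default[dir] = val;
    }
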
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index d3c3a61308ada..94dcec2cc803f 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -979,6 +979,7 @@ static int conf_write_autoconf_cmd(const char *autoconf_name)
+
+ fprintf(out, "\n$(deps_config): ;\n");
+
++ fflush(out);
+ ret = ferror(out); /* error check for all fprintf() calls */
+ fclose(out);
+ if (ret)
+@@ -1097,6 +1098,7 @@ static int __conf_write_autoconf(const char *filename,
+ if ((sym->flags & SYMBOL_WRITE) && sym->name)
+ print_symbol(file, sym);
+
++ fflush(file);
+ /* check possible errors in conf_write_heading() and print_symbol() */
+ ret = ferror(file);
+ fclose(file);
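
The fflush() calls fix a classic stdio pitfall: ferror() only reports
failures that have already reached the stream, and fprintf() output may
still sit in the buffer, so errors such as ENOSPC would surface only at
fclose(), after the check. A minimal runnable illustration:

    #include <stdio.h>

    /* Flush first so buffered write errors are visible to ferror(). */
    static int write_and_check(FILE *out)
    {
            fprintf(out, "generated output\n");
            fflush(out);
            return ferror(out) ? -1 : 0;
    }
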
+diff --git a/security/security.c b/security/security.c
+index b7cf5cbfdc677..aaf6566deb9f0 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -59,10 +59,12 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
+ [LOCKDOWN_DEBUGFS] = "debugfs access",
+ [LOCKDOWN_XMON_WR] = "xmon write access",
+ [LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM",
++ [LOCKDOWN_DBG_WRITE_KERNEL] = "use of kgdb/kdb to write kernel RAM",
+ [LOCKDOWN_INTEGRITY_MAX] = "integrity",
+ [LOCKDOWN_KCORE] = "/proc/kcore access",
+ [LOCKDOWN_KPROBES] = "use of kprobes",
+ [LOCKDOWN_BPF_READ_KERNEL] = "use of bpf to read kernel RAM",
++ [LOCKDOWN_DBG_READ_KERNEL] = "use of kgdb/kdb to read kernel RAM",
+ [LOCKDOWN_PERF] = "unsafe use of perf",
+ [LOCKDOWN_TRACEFS] = "use of tracefs",
+ [LOCKDOWN_XMON_RW] = "xmon read and write access",
+diff --git a/security/selinux/ss/hashtab.c b/security/selinux/ss/hashtab.c
+index 0ae4e4e57a401..3fb8f9026e9be 100644
+--- a/security/selinux/ss/hashtab.c
++++ b/security/selinux/ss/hashtab.c
+@@ -179,7 +179,8 @@ int hashtab_duplicate(struct hashtab *new, struct hashtab *orig,
+ kmem_cache_free(hashtab_node_cachep, cur);
+ }
+ }
+- kmem_cache_free(hashtab_node_cachep, new);
++ kfree(new->htable);
++ memset(new, 0, sizeof(*new));
+ return -ENOMEM;
+ }
+
+diff --git a/sound/isa/wavefront/wavefront_synth.c b/sound/isa/wavefront/wavefront_synth.c
+index 69cbc79fbb716..2aaaa68071744 100644
+--- a/sound/isa/wavefront/wavefront_synth.c
++++ b/sound/isa/wavefront/wavefront_synth.c
+@@ -1094,7 +1094,8 @@ wavefront_send_sample (snd_wavefront_t *dev,
+
+ if (dataptr < data_end) {
+
+- __get_user (sample_short, dataptr);
++ if (get_user(sample_short, dataptr))
++ return -EFAULT;
+ dataptr += skip;
+
+ if (data_is_unsigned) { /* GUS ? */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 51c54cf0f3127..e38acdbe1a3b5 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -937,6 +937,9 @@ static int alc_init(struct hda_codec *codec)
+ return 0;
+ }
+
++#define alc_free snd_hda_gen_free
++
++#ifdef CONFIG_PM
+ static inline void alc_shutup(struct hda_codec *codec)
+ {
+ struct alc_spec *spec = codec->spec;
+@@ -950,9 +953,6 @@ static inline void alc_shutup(struct hda_codec *codec)
+ alc_shutup_pins(codec);
+ }
+
+-#define alc_free snd_hda_gen_free
+-
+-#ifdef CONFIG_PM
+ static void alc_power_eapd(struct hda_codec *codec)
+ {
+ alc_auto_setup_eapd(codec, false);
+@@ -966,9 +966,7 @@ static int alc_suspend(struct hda_codec *codec)
+ spec->power_hook(codec);
+ return 0;
+ }
+-#endif
+
+-#ifdef CONFIG_PM
+ static int alc_resume(struct hda_codec *codec)
+ {
+ struct alc_spec *spec = codec->spec;
+@@ -9236,6 +9234,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+ SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
++ SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP),
++ SND_PCI_QUIRK(0x1d05, 0x1100, "TongFang GKxNRxx", ALC269_FIXUP_NO_SHUTUP),
++ SND_PCI_QUIRK(0x1d05, 0x1111, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP),
++ SND_PCI_QUIRK(0x1d05, 0x1119, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP),
++ SND_PCI_QUIRK(0x1d05, 0x1129, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP),
++ SND_PCI_QUIRK(0x1d05, 0x1147, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP),
++ SND_PCI_QUIRK(0x1d05, 0x115c, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP),
++ SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP),
+ SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -11106,6 +11112,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
+ SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE),
+ SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS),
++ SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN),
+ SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN),
+ SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN),
+ SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN),
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 0ea39565e6232..40a5e3eb4ef26 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3235,6 +3235,15 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ },
+
++/* Rane SL-1 */
++{
++ USB_DEVICE(0x13e5, 0x0001),
++ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++ .ifnum = QUIRK_ANY_INTERFACE,
++ .type = QUIRK_AUDIO_STANDARD_INTERFACE
++ }
++},
++
+ /* disabled due to regression for other devices;
+ * see https://bugzilla.kernel.org/show_bug.cgi?id=199905
+ */
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index ae61f464043a1..c6a48d0ef9ff0 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -98,6 +98,7 @@ FEATURE_TESTS_EXTRA := \
+ llvm-version \
+ clang \
+ libbpf \
++ libbpf-btf__load_from_kernel_by_id \
+ libpfm4 \
+ libdebuginfod \
+ clang-bpf-co-re
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index de66e1cc07348..cb4a2a4fa2e48 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -57,6 +57,7 @@ FILES= \
+ test-lzma.bin \
+ test-bpf.bin \
+ test-libbpf.bin \
++ test-libbpf-btf__load_from_kernel_by_id.bin \
+ test-get_cpuid.bin \
+ test-sdt.bin \
+ test-cxx.bin \
+@@ -287,6 +288,9 @@ $(OUTPUT)test-bpf.bin:
+ $(OUTPUT)test-libbpf.bin:
+ $(BUILD) -lbpf
+
++$(OUTPUT)test-libbpf-btf__load_from_kernel_by_id.bin:
++ $(BUILD) -lbpf
++
+ $(OUTPUT)test-sdt.bin:
+ $(BUILD)
+
+diff --git a/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c b/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
+new file mode 100644
+index 0000000000000..f7c084428735a
+--- /dev/null
++++ b/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
+@@ -0,0 +1,7 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <bpf/libbpf.h>
++
++int main(void)
++{
++ return btf__load_from_kernel_by_id(20151128, NULL);
++}
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index f3bf9297bcc03..1bd64e7404b9f 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -553,9 +553,16 @@ ifndef NO_LIBELF
+ ifeq ($(feature-libbpf), 1)
+ EXTLIBS += -lbpf
+ $(call detected,CONFIG_LIBBPF_DYNAMIC)
++
++ $(call feature_check,libbpf-btf__load_from_kernel_by_id)
++ ifeq ($(feature-libbpf-btf__load_from_kernel_by_id), 1)
++ CFLAGS += -DHAVE_LIBBPF_BTF__LOAD_FROM_KERNEL_BY_ID
++ endif
+ else
+ dummy := $(error Error: No libbpf devel library found, please install libbpf-devel);
+ endif
++ else
++ CFLAGS += -DHAVE_LIBBPF_BTF__LOAD_FROM_KERNEL_BY_ID
+ endif
+ endif
+
+diff --git a/tools/perf/arch/x86/util/perf_regs.c b/tools/perf/arch/x86/util/perf_regs.c
+index 207c56805c551..0ed177991ad05 100644
+--- a/tools/perf/arch/x86/util/perf_regs.c
++++ b/tools/perf/arch/x86/util/perf_regs.c
+@@ -9,6 +9,8 @@
+ #include "../../../util/perf_regs.h"
+ #include "../../../util/debug.h"
+ #include "../../../util/event.h"
++#include "../../../util/pmu.h"
++#include "../../../util/pmu-hybrid.h"
+
+ const struct sample_reg sample_reg_masks[] = {
+ SMPL_REG(AX, PERF_REG_X86_AX),
+@@ -284,12 +286,22 @@ uint64_t arch__intr_reg_mask(void)
+ .disabled = 1,
+ .exclude_kernel = 1,
+ };
++ struct perf_pmu *pmu;
+ int fd;
+ /*
+ * In an unnamed union, init it here to build on older gcc versions
+ */
+ attr.sample_period = 1;
+
++ if (perf_pmu__has_hybrid()) {
++ /*
++ * The same register set is supported among different hybrid PMUs.
++ * Only check the first available one.
++ */
++ pmu = list_first_entry(&perf_pmu__hybrid_pmus, typeof(*pmu), hybrid_list);
++ attr.config |= (__u64)pmu->type << PERF_PMU_TYPE_SHIFT;
++ }
++
+ event_attr_init(&attr);
+
+ fd = sys_perf_event_open(&attr, 0, -1, -1, 0);
+diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
+index f2640179ada9e..c2c81567afa50 100644
+--- a/tools/perf/bench/numa.c
++++ b/tools/perf/bench/numa.c
+@@ -1672,7 +1672,7 @@ static int __bench_numa(const char *name)
+ "GB/sec,", "total-speed", "GB/sec total speed");
+
+ if (g->p.show_details >= 2) {
+- char tname[14 + 2 * 10 + 1];
++ char tname[14 + 2 * 11 + 1];
+ struct thread_data *td;
+ for (p = 0; p < g->p.nr_proc; p++) {
+ for (t = 0; t < g->p.nr_threads; t++) {
+diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
+index 573490530194f..592ab02d5ba30 100644
+--- a/tools/perf/tests/bpf.c
++++ b/tools/perf/tests/bpf.c
+@@ -222,11 +222,11 @@ static int __test__bpf(int idx)
+
+ ret = test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz,
+ bpf_testcase_table[idx].prog_id,
+- true, NULL);
++ false, NULL);
+ if (ret != TEST_OK || !obj_buf || !obj_buf_sz) {
+ pr_debug("Unable to get BPF object, %s\n",
+ bpf_testcase_table[idx].msg_compile_fail);
+- if (idx == 0)
++ if ((idx == 0) || (ret == TEST_SKIP))
+ return TEST_SKIP;
+ else
+ return TEST_FAIL;
+@@ -370,9 +370,11 @@ static int test__bpf_prologue_test(struct test_suite *test __maybe_unused,
+ static struct test_case bpf_tests[] = {
+ #ifdef HAVE_LIBBPF_SUPPORT
+ TEST_CASE("Basic BPF filtering", basic_bpf_test),
+- TEST_CASE("BPF pinning", bpf_pinning),
++ TEST_CASE_REASON("BPF pinning", bpf_pinning,
++ "clang isn't installed or environment missing BPF support"),
+ #ifdef HAVE_BPF_PROLOGUE
+- TEST_CASE("BPF prologue generation", bpf_prologue_test),
++ TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test,
++ "clang isn't installed or environment missing BPF support"),
+ #else
+ TEST_CASE_REASON("BPF prologue generation", bpf_prologue_test, "not compiled in"),
+ #endif
+diff --git a/tools/perf/tests/shell/stat_all_pmu.sh b/tools/perf/tests/shell/stat_all_pmu.sh
+index b30dba455f36c..9c9ef33e0b3c6 100755
+--- a/tools/perf/tests/shell/stat_all_pmu.sh
++++ b/tools/perf/tests/shell/stat_all_pmu.sh
+@@ -5,6 +5,16 @@
+ set -e
+
+ for p in $(perf list --raw-dump pmu); do
++ # On powerpc, skip the events for hv_24x7 and hv_gpci.
++ # These events need input values to be filled in for
++ # core, chip, or partition id based on the system.
++ # Example: hv_24x7/CPM_ADJUNCT_INST,domain=?,core=?/
++ # hv_gpci/event,partition_id=?/
++ # Hence skip these events for ppc.
++ if echo "$p" |grep -Eq 'hv_24x7|hv_gpci' ; then
++ echo "Skipping: Event '$p' in powerpc"
++ continue
++ fi
+ echo "Testing $p"
+ result=$(perf stat -e "$p" true 2>&1)
+ if ! echo "$result" | grep -q "$p" && ! echo "$result" | grep -q "<not supported>" ; then
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index a517eaa51eb37..65dfd2c70246e 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -22,7 +22,8 @@
+ #include "record.h"
+ #include "util/synthetic-events.h"
+
+-struct btf * __weak btf__load_from_kernel_by_id(__u32 id)
++#ifndef HAVE_LIBBPF_BTF__LOAD_FROM_KERNEL_BY_ID
++struct btf *btf__load_from_kernel_by_id(__u32 id)
+ {
+ struct btf *btf;
+ #pragma GCC diagnostic push
+@@ -32,6 +33,7 @@ struct btf * __weak btf__load_from_kernel_by_id(__u32 id)
+
+ return err ? ERR_PTR(err) : btf;
+ }
++#endif
+
+ struct bpf_program * __weak
+ bpf_object__next_program(const struct bpf_object *obj, struct bpf_program *prev)
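
Dropping the __weak definition avoids a link-time gamble: a weak
fallback is silently kept or discarded depending on whether the linked
libbpf happens to provide the symbol, which breaks once the library
ships its own version. The feature probe added under tools/build above
turns this into an explicit compile-time decision. A generic sketch of
the pattern (the macro and function names are illustrative
assumptions):

    #ifdef HAVE_NEW_LIB_FN          /* set by the build feature probe */
    /* the library's own definition is used, nothing to compile here */
    #else
    /* local fallback built only against older library versions */
    int new_lib_fn(int id)
    {
            return old_lib_fn(id);  /* the older spelling of the API */
    }
    #endif
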
+diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
+index ee6f034812151..9c230b908b76f 100644
+--- a/tools/perf/util/stat.c
++++ b/tools/perf/util/stat.c
+@@ -471,9 +471,10 @@ int perf_stat_process_counter(struct perf_stat_config *config,
+ int perf_event__process_stat_event(struct perf_session *session,
+ union perf_event *event)
+ {
+- struct perf_counts_values count;
++ struct perf_counts_values count, *ptr;
+ struct perf_record_stat *st = &event->stat;
+ struct evsel *counter;
++ int cpu_map_idx;
+
+ count.val = st->val;
+ count.ena = st->ena;
+@@ -484,8 +485,18 @@ int perf_event__process_stat_event(struct perf_session *session,
+ pr_err("Failed to resolve counter for stat event.\n");
+ return -EINVAL;
+ }
+-
+- *perf_counts(counter->counts, st->cpu, st->thread) = count;
++ cpu_map_idx = perf_cpu_map__idx(evsel__cpus(counter), (struct perf_cpu){.cpu = st->cpu});
++ if (cpu_map_idx == -1) {
++ pr_err("Invalid CPU %d for event %s.\n", st->cpu, evsel__name(counter));
++ return -EINVAL;
++ }
++ ptr = perf_counts(counter->counts, cpu_map_idx, st->thread);
++ if (ptr == NULL) {
++ pr_err("Failed to find perf count for CPU %d thread %d on event %s.\n",
++ st->cpu, st->thread, evsel__name(counter));
++ return -EINVAL;
++ }
++ *ptr = count;
+ counter->supported = true;
+ return 0;
+ }
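
The stat fix accounts for sparse CPU maps: an event opened on CPUs
{2,3} stores its counts at indices {0,1}, so a raw CPU number arriving
in a stat record must be translated to a map index (and rejected if
absent) before it can address counter->counts. A compilable sketch of
the lookup:

    /* Return the index of `cpu` in a possibly sparse map, or -1. */
    static int cpu_map_idx(const int *map, int nr, int cpu)
    {
            for (int i = 0; i < nr; i++)
                    if (map[i] == cpu)
                            return i;
            return -1;
    }
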
+diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
+index 3f4c8cfe7aca8..7cd9b31d03073 100755
+--- a/tools/testing/selftests/net/fcnal-test.sh
++++ b/tools/testing/selftests/net/fcnal-test.sh
+@@ -810,10 +810,16 @@ ipv4_ping()
+ setup
+ set_sysctl net.ipv4.raw_l3mdev_accept=1 2>/dev/null
+ ipv4_ping_novrf
++ setup
++ set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++ ipv4_ping_novrf
+
+ log_subsection "With VRF"
+ setup "yes"
+ ipv4_ping_vrf
++ setup "yes"
++ set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++ ipv4_ping_vrf
+ }
+
+ ################################################################################
+@@ -2348,10 +2354,16 @@ ipv6_ping()
+ log_subsection "No VRF"
+ setup
+ ipv6_ping_novrf
++ setup
++ set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++ ipv6_ping_novrf
+
+ log_subsection "With VRF"
+ setup "yes"
+ ipv6_ping_vrf
++ setup "yes"
++ set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null
++ ipv6_ping_vrf
+ }
+
+ ################################################################################
+diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile
+index 0d7bbe49359d8..1b25cc7c64bbd 100644
+--- a/tools/virtio/Makefile
++++ b/tools/virtio/Makefile
+@@ -5,7 +5,8 @@ virtio_test: virtio_ring.o virtio_test.o
+ vringh_test: vringh_test.o vringh.o virtio_ring.o
+
+ CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h
+-LDFLAGS += -lpthread
++CFLAGS += -pthread
++LDFLAGS += -pthread
+ vpath %.c ../../drivers/virtio ../../drivers/vhost
+ mod:
+ ${MAKE} -C `pwd`/../.. M=`pwd`/vhost_test V=${V}
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 610cc7920c8a2..717ee1b2e058a 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1539,7 +1539,7 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
+ r = kvm_arch_prepare_memory_region(kvm, old, new, change);
+
+ /* Free the bitmap on failure if it was allocated above. */
+- if (r && new && new->dirty_bitmap && old && !old->dirty_bitmap)
++ if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap))
+ kvm_destroy_dirty_bitmap(new);
+
+ return r;
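
The KVM one-liner is easiest to see as a truth table: the bitmap hung
off `new` was allocated by this call unless it was inherited, and the
old test `old && !old->dirty_bitmap` skipped the cleanup entirely when
there was no prior slot, leaking the fresh allocation on failure. A
standalone sketch of the corrected predicate:

    #include <stdbool.h>
    #include <stddef.h>

    struct slot { void *dirty_bitmap; };

    /* Free on failure iff the bitmap did not come from an old slot:
     *   old == NULL                 -> freshly allocated, free it
     *   old && !old->dirty_bitmap   -> freshly allocated, free it
     *   old && old->dirty_bitmap    -> inherited, keep it
     */
    static bool must_free_on_failure(const struct slot *old,
                                     const struct slot *new)
    {
            return new && new->dirty_bitmap &&
                   (!old || !old->dirty_bitmap);
    }
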
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-18 9:38 Mike Pagano
From: Mike Pagano @ 2022-05-18 9:38 UTC (permalink / raw
To: gentoo-commits
commit: 5aac3dbc0e90b2a0dda13a7b7026af118e288358
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 18 09:38:23 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 18 09:38:23 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5aac3dbc
Linux patch 5.17.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-5.17.9.patch | 3768 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3772 insertions(+)
diff --git a/0000_README b/0000_README
index 30109237..ad4b906b 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-5.17.8.patch
From: http://www.kernel.org
Desc: Linux 5.17.8
+Patch: 1008_linux-5.17.9.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-5.17.9.patch b/1008_linux-5.17.9.patch
new file mode 100644
index 00000000..e6f7c06a
--- /dev/null
+++ b/1008_linux-5.17.9.patch
@@ -0,0 +1,3768 @@
+diff --git a/Makefile b/Makefile
+index 3cf179812f0f9..aba139bbd1c70 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
+index 0c70eb688a00c..2a0739a2350be 100644
+--- a/arch/arm/include/asm/io.h
++++ b/arch/arm/include/asm/io.h
+@@ -440,6 +440,9 @@ extern void pci_iounmap(struct pci_dev *dev, void __iomem *addr);
+ #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
+ extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
+ extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
++extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++ unsigned long flags);
++#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
+ #endif
+
+ /*
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index 197f8eb3a7752..09c0bb9aeb3c5 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -489,3 +489,11 @@ void __init early_ioremap_init(void)
+ {
+ early_ioremap_setup();
+ }
++
++bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++ unsigned long flags)
++{
++ unsigned long pfn = PHYS_PFN(offset);
++
++ return memblock_is_map_memory(pfn);
++}
+diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
+index 7fd836bea7eb4..3995652daf81a 100644
+--- a/arch/arm64/include/asm/io.h
++++ b/arch/arm64/include/asm/io.h
+@@ -192,4 +192,8 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
+ extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
+ extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
+
++extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++ unsigned long flags);
++#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
++
+ #endif /* __ASM_IO_H */
+diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
+index 88b3e2a214084..db557856854e7 100644
+--- a/arch/arm64/kernel/Makefile
++++ b/arch/arm64/kernel/Makefile
+@@ -74,6 +74,10 @@ obj-$(CONFIG_ARM64_MTE) += mte.o
+ obj-y += vdso-wrap.o
+ obj-$(CONFIG_COMPAT_VDSO) += vdso32-wrap.o
+
++# Force dependency (vdso*-wrap.S includes vdso.so through incbin)
++$(obj)/vdso-wrap.o: $(obj)/vdso/vdso.so
++$(obj)/vdso32-wrap.o: $(obj)/vdso32/vdso.so
++
+ obj-y += probes/
+ head-y := head.o
+ extra-y += $(head-y) vmlinux.lds
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index 172452f79e462..ac1964ebed1ef 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -52,9 +52,6 @@ GCOV_PROFILE := n
+ targets += vdso.lds
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+-# Force dependency (incbin is bad)
+-$(obj)/vdso.o : $(obj)/vdso.so
+-
+ # Link rule for the .so file, .lds has to be first
+ $(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE
+ $(call if_changed,vdsold_and_vdso_check)
+diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile
+index 6c01b63ff56df..2c2036eb0df7b 100644
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -130,9 +130,6 @@ obj-vdso := $(c-obj-vdso) $(c-obj-vdso-gettimeofday) $(asm-obj-vdso)
+ targets += vdso.lds
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+
+-# Force dependency (vdso.s includes vdso.so through incbin)
+-$(obj)/vdso.o: $(obj)/vdso.so
+-
+ include/generated/vdso32-offsets.h: $(obj)/vdso.so.dbg FORCE
+ $(call if_changed,vdsosym)
+
+diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
+index b7c81dacabf07..b21f91cd830db 100644
+--- a/arch/arm64/mm/ioremap.c
++++ b/arch/arm64/mm/ioremap.c
+@@ -99,3 +99,11 @@ void __init early_ioremap_init(void)
+ {
+ early_ioremap_setup();
+ }
++
++bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
++ unsigned long flags)
++{
++ unsigned long pfn = PHYS_PFN(offset);
++
++ return pfn_is_map_memory(pfn);
++}
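
Both arm and arm64 gain the same hook with arch-specific predicates:
memremap() may only hand back a cacheable, linear-map-style alias for a
RAM range when every page is actually covered by the kernel's linear
map; no-map reserved regions must take the ioremap fallback instead. A
kernel-style sketch of the contract (arm64 flavour; not standalone
userspace code):

    /* Veto RAM-style remaps of pages outside the linear map. */
    bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
                                     unsigned long flags)
    {
            return pfn_is_map_memory(PHYS_PFN(offset));
    }
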
+diff --git a/arch/powerpc/kvm/book3s_32_sr.S b/arch/powerpc/kvm/book3s_32_sr.S
+index e3ab9df6cf199..6cfcd20d46686 100644
+--- a/arch/powerpc/kvm/book3s_32_sr.S
++++ b/arch/powerpc/kvm/book3s_32_sr.S
+@@ -122,11 +122,27 @@
+
+ /* 0x0 - 0xb */
+
+- /* 'current->mm' needs to be in r4 */
+- tophys(r4, r2)
+- lwz r4, MM(r4)
+- tophys(r4, r4)
+- /* This only clobbers r0, r3, r4 and r5 */
++ /* switch_mmu_context() needs paging, let's enable it */
++ mfmsr r9
++ ori r11, r9, MSR_DR
++ mtmsr r11
++ sync
++
++ /* switch_mmu_context() clobbers r12, rescue it */
++ SAVE_GPR(12, r1)
++
++ /* Calling switch_mmu_context(<inv>, current->mm, <inv>); */
++ lwz r4, MM(r2)
+ bl switch_mmu_context
+
++ /* restore r12 */
++ REST_GPR(12, r1)
++
++ /* Disable paging again */
++ mfmsr r9
++ li r6, MSR_DR
++ andc r9, r9, r6
++ mtmsr r9
++ sync
++
+ .endm
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index 609e3697324b1..6e42252214dd8 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -30,6 +30,16 @@ KBUILD_CFLAGS_DECOMPRESSOR += -fno-stack-protector
+ KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
+ KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
++
++ifdef CONFIG_CC_IS_GCC
++ ifeq ($(call cc-ifversion, -ge, 1200, y), y)
++ ifeq ($(call cc-ifversion, -lt, 1300, y), y)
++ KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
++ KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, array-bounds)
++ endif
++ endif
++endif
++
+ UTS_MACHINE := s390x
+ STACK_SIZE := $(if $(CONFIG_KASAN),65536,16384)
+ CHECKFLAGS += -D__s390__ -D__s390x__
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 96d34ebb20a9e..e2942335d1437 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -902,6 +902,8 @@ static void __meminit vmemmap_use_sub_pmd(unsigned long start, unsigned long end
+
+ static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long end)
+ {
++ const unsigned long page = ALIGN_DOWN(start, PMD_SIZE);
++
+ vmemmap_flush_unused_pmd();
+
+ /*
+@@ -914,8 +916,7 @@ static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long
+ * Mark with PAGE_UNUSED the unused parts of the new memmap range
+ */
+ if (!IS_ALIGNED(start, PMD_SIZE))
+- memset((void *)start, PAGE_UNUSED,
+- start - ALIGN_DOWN(start, PMD_SIZE));
++ memset((void *)page, PAGE_UNUSED, start - page);
+
+ /*
+ * We want to avoid memset(PAGE_UNUSED) when populating the vmemmap of
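
Concrete numbers make the off-by-base bug above clear: with PMD_SIZE =
0x200000 (2 MiB) and start = 0x40280000, the unused head of the PMD is
[0x40200000, 0x40280000). The old code passed the right length but the
wrong base, marking bytes from `start` onward instead of the unused
head before it. A runnable check of the arithmetic:

    #include <assert.h>

    #define PMD_SIZE        0x200000UL
    #define ALIGN_DOWN(x, a)        ((x) & ~((a) - 1))

    int main(void)
    {
            unsigned long start = 0x40280000UL;
            unsigned long page = ALIGN_DOWN(start, PMD_SIZE);

            assert(page == 0x40200000UL);           /* fill starts here */
            assert(start - page == 0x80000UL);      /* bytes to mark */
            return 0;
    }
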
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 94d1789a233e0..406a907a4caec 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -735,6 +735,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ size_t offset, u32 opt_flags)
+ {
+ struct firmware *fw = NULL;
++ struct cred *kern_cred = NULL;
++ const struct cred *old_cred;
+ bool nondirect = false;
+ int ret;
+
+@@ -751,6 +753,18 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ if (ret <= 0) /* error or already assigned */
+ goto out;
+
++ /*
++ * We are about to try to access the firmware file. Because we may have been
++ * called by a driver when serving an unrelated request from userland, we use
++ * the kernel credentials to read the file.
++ */
++ kern_cred = prepare_kernel_cred(NULL);
++ if (!kern_cred) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ old_cred = override_creds(kern_cred);
++
+ ret = fw_get_filesystem_firmware(device, fw->priv, "", NULL);
+
+ /* Only full reads can support decompression, platform, and sysfs. */
+@@ -776,6 +790,9 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
+ } else
+ ret = assign_fw(fw, device);
+
++ revert_creds(old_cred);
++ put_cred(kern_cred);
++
+ out:
+ if (ret < 0) {
+ fw_abort_batch_reqs(fw);
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 602b12d7470d2..a6fc96e426687 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -543,10 +543,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
+ file->f_mode |= FMODE_LSEEK;
+ dmabuf->file = file;
+
+- ret = dma_buf_stats_setup(dmabuf);
+- if (ret)
+- goto err_sysfs;
+-
+ mutex_init(&dmabuf->lock);
+ INIT_LIST_HEAD(&dmabuf->attachments);
+
+@@ -554,6 +550,10 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
+ list_add(&dmabuf->list_node, &db_list.head);
+ mutex_unlock(&db_list.lock);
+
++ ret = dma_buf_stats_setup(dmabuf);
++ if (ret)
++ goto err_sysfs;
++
+ return dmabuf;
+
+ err_sysfs:
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index b51368fa30253..86acff764e7f6 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1405,14 +1405,8 @@ static int smu_disable_dpms(struct smu_context *smu)
+ {
+ struct amdgpu_device *adev = smu->adev;
+ int ret = 0;
+- /*
+- * TODO: (adev->in_suspend && !adev->in_s0ix) is added to pair
+- * the workaround which always reset the asic in suspend.
+- * It's likely that workaround will be dropped in the future.
+- * Then the change here should be dropped together.
+- */
+ bool use_baco = !smu->is_apu &&
+- (((amdgpu_in_reset(adev) || (adev->in_suspend && !adev->in_s0ix)) &&
++ ((amdgpu_in_reset(adev) &&
+ (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)) ||
+ ((adev->in_runpm || adev->in_s4) && amdgpu_asic_supports_baco(adev)));
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index daf9f87477ba1..a2141d3d9b1d2 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -46,8 +46,9 @@ static bool
+ nouveau_get_backlight_name(char backlight_name[BL_NAME_SIZE],
+ struct nouveau_backlight *bl)
+ {
+- const int nb = ida_simple_get(&bl_ida, 0, 0, GFP_KERNEL);
+- if (nb < 0 || nb >= 100)
++ const int nb = ida_alloc_max(&bl_ida, 99, GFP_KERNEL);
++
++ if (nb < 0)
+ return false;
+ if (nb > 0)
+ snprintf(backlight_name, BL_NAME_SIZE, "nv_backlight%d", nb);
+@@ -414,7 +415,7 @@ nouveau_backlight_init(struct drm_connector *connector)
+ nv_encoder, ops, &props);
+ if (IS_ERR(bl->dev)) {
+ if (bl->id >= 0)
+- ida_simple_remove(&bl_ida, bl->id);
++ ida_free(&bl_ida, bl->id);
+ ret = PTR_ERR(bl->dev);
+ goto fail_alloc;
+ }
+@@ -442,7 +443,7 @@ nouveau_backlight_fini(struct drm_connector *connector)
+ return;
+
+ if (bl->id >= 0)
+- ida_simple_remove(&bl_ida, bl->id);
++ ida_free(&bl_ida, bl->id);
+
+ backlight_device_unregister(bl->dev);
+ nv_conn->backlight = NULL;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+index d0d52c1d4aee0..950a3de3e1166 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+@@ -123,7 +123,7 @@ nvkm_device_tegra_probe_iommu(struct nvkm_device_tegra *tdev)
+
+ mutex_init(&tdev->iommu.mutex);
+
+- if (iommu_present(&platform_bus_type)) {
++ if (device_iommu_mapped(dev)) {
+ tdev->iommu.domain = iommu_domain_alloc(&platform_bus_type);
+ if (!tdev->iommu.domain)
+ goto error;
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 3a1626f261e5a..3f651364ed7ad 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -38,6 +38,7 @@
+ #include <drm/drm_scdc_helper.h>
+ #include <linux/clk.h>
+ #include <linux/component.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/of_address.h>
+ #include <linux/of_gpio.h>
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
+index a3bfbb6c3e14a..162dfeb1cc5ad 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
+@@ -528,7 +528,7 @@ int vmw_cmd_send_fence(struct vmw_private *dev_priv, uint32_t *seqno)
+ *seqno = atomic_add_return(1, &dev_priv->marker_seq);
+ } while (*seqno == 0);
+
+- if (!(vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_FENCE)) {
++ if (!vmw_has_fences(dev_priv)) {
+
+ /*
+ * Don't request hardware to send a fence. The
+@@ -675,11 +675,14 @@ int vmw_cmd_emit_dummy_query(struct vmw_private *dev_priv,
+ */
+ bool vmw_cmd_supported(struct vmw_private *vmw)
+ {
+- if ((vmw->capabilities & (SVGA_CAP_COMMAND_BUFFERS |
+- SVGA_CAP_CMD_BUFFERS_2)) != 0)
+- return true;
++ bool has_cmdbufs =
++ (vmw->capabilities & (SVGA_CAP_COMMAND_BUFFERS |
++ SVGA_CAP_CMD_BUFFERS_2)) != 0;
++ if (vmw_is_svga_v3(vmw))
++ return (has_cmdbufs &&
++ (vmw->capabilities & SVGA_CAP_GBOBJECTS) != 0);
+ /*
+ * We have FIFO cmd's
+ */
+- return vmw->fifo_mem != NULL;
++ return has_cmdbufs || vmw->fifo_mem != NULL;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index ea3ecdda561dc..6de0b9ef5c773 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -1679,4 +1679,12 @@ static inline void vmw_irq_status_write(struct vmw_private *vmw,
+ outl(status, vmw->io_start + SVGA_IRQSTATUS_PORT);
+ }
+
++static inline bool vmw_has_fences(struct vmw_private *vmw)
++{
++ if ((vmw->capabilities & (SVGA_CAP_COMMAND_BUFFERS |
++ SVGA_CAP_CMD_BUFFERS_2)) != 0)
++ return true;
++ return (vmw_fifo_caps(vmw) & SVGA_FIFO_CAP_FENCE) != 0;
++}
++
+ #endif
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+index 8ee34576c7d08..adf17c740656d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+@@ -483,7 +483,7 @@ static int vmw_fb_kms_detach(struct vmw_fb_par *par,
+
+ static int vmw_fb_kms_framebuffer(struct fb_info *info)
+ {
+- struct drm_mode_fb_cmd2 mode_cmd;
++ struct drm_mode_fb_cmd2 mode_cmd = {0};
+ struct vmw_fb_par *par = info->par;
+ struct fb_var_screeninfo *var = &info->var;
+ struct drm_framebuffer *cur_fb;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+index 5001b87aebe81..a16b854ca18a7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+@@ -82,6 +82,22 @@ fman_from_fence(struct vmw_fence_obj *fence)
+ return container_of(fence->base.lock, struct vmw_fence_manager, lock);
+ }
+
++static u32 vmw_fence_goal_read(struct vmw_private *vmw)
++{
++ if ((vmw->capabilities2 & SVGA_CAP2_EXTRA_REGS) != 0)
++ return vmw_read(vmw, SVGA_REG_FENCE_GOAL);
++ else
++ return vmw_fifo_mem_read(vmw, SVGA_FIFO_FENCE_GOAL);
++}
++
++static void vmw_fence_goal_write(struct vmw_private *vmw, u32 value)
++{
++ if ((vmw->capabilities2 & SVGA_CAP2_EXTRA_REGS) != 0)
++ vmw_write(vmw, SVGA_REG_FENCE_GOAL, value);
++ else
++ vmw_fifo_mem_write(vmw, SVGA_FIFO_FENCE_GOAL, value);
++}
++
+ /*
+ * Note on fencing subsystem usage of irqs:
+ * Typically the vmw_fences_update function is called
+@@ -392,7 +408,7 @@ static bool vmw_fence_goal_new_locked(struct vmw_fence_manager *fman,
+ if (likely(!fman->seqno_valid))
+ return false;
+
+- goal_seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE_GOAL);
++ goal_seqno = vmw_fence_goal_read(fman->dev_priv);
+ if (likely(passed_seqno - goal_seqno >= VMW_FENCE_WRAP))
+ return false;
+
+@@ -400,9 +416,8 @@ static bool vmw_fence_goal_new_locked(struct vmw_fence_manager *fman,
+ list_for_each_entry(fence, &fman->fence_list, head) {
+ if (!list_empty(&fence->seq_passed_actions)) {
+ fman->seqno_valid = true;
+- vmw_fifo_mem_write(fman->dev_priv,
+- SVGA_FIFO_FENCE_GOAL,
+- fence->base.seqno);
++ vmw_fence_goal_write(fman->dev_priv,
++ fence->base.seqno);
+ break;
+ }
+ }
+@@ -434,13 +449,12 @@ static bool vmw_fence_goal_check_locked(struct vmw_fence_obj *fence)
+ if (dma_fence_is_signaled_locked(&fence->base))
+ return false;
+
+- goal_seqno = vmw_fifo_mem_read(fman->dev_priv, SVGA_FIFO_FENCE_GOAL);
++ goal_seqno = vmw_fence_goal_read(fman->dev_priv);
+ if (likely(fman->seqno_valid &&
+ goal_seqno - fence->base.seqno < VMW_FENCE_WRAP))
+ return false;
+
+- vmw_fifo_mem_write(fman->dev_priv, SVGA_FIFO_FENCE_GOAL,
+- fence->base.seqno);
++ vmw_fence_goal_write(fman->dev_priv, fence->base.seqno);
+ fman->seqno_valid = true;
+
+ return true;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
+index c5191de365ca1..fe4732bf2c9d2 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
+@@ -32,6 +32,14 @@
+
+ #define VMW_FENCE_WRAP (1 << 24)
+
++static u32 vmw_irqflag_fence_goal(struct vmw_private *vmw)
++{
++ if ((vmw->capabilities2 & SVGA_CAP2_EXTRA_REGS) != 0)
++ return SVGA_IRQFLAG_REG_FENCE_GOAL;
++ else
++ return SVGA_IRQFLAG_FENCE_GOAL;
++}
++
+ /**
+ * vmw_thread_fn - Deferred (process context) irq handler
+ *
+@@ -96,7 +104,7 @@ static irqreturn_t vmw_irq_handler(int irq, void *arg)
+ wake_up_all(&dev_priv->fifo_queue);
+
+ if ((masked_status & (SVGA_IRQFLAG_ANY_FENCE |
+- SVGA_IRQFLAG_FENCE_GOAL)) &&
++ vmw_irqflag_fence_goal(dev_priv))) &&
+ !test_and_set_bit(VMW_IRQTHREAD_FENCE, dev_priv->irqthread_pending))
+ ret = IRQ_WAKE_THREAD;
+
+@@ -137,8 +145,7 @@ bool vmw_seqno_passed(struct vmw_private *dev_priv,
+ if (likely(dev_priv->last_read_seqno - seqno < VMW_FENCE_WRAP))
+ return true;
+
+- if (!(vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_FENCE) &&
+- vmw_fifo_idle(dev_priv, seqno))
++ if (!vmw_has_fences(dev_priv) && vmw_fifo_idle(dev_priv, seqno))
+ return true;
+
+ /**
+@@ -160,6 +167,7 @@ int vmw_fallback_wait(struct vmw_private *dev_priv,
+ unsigned long timeout)
+ {
+ struct vmw_fifo_state *fifo_state = dev_priv->fifo;
++ bool fifo_down = false;
+
+ uint32_t count = 0;
+ uint32_t signal_seq;
+@@ -176,12 +184,14 @@ int vmw_fallback_wait(struct vmw_private *dev_priv,
+ */
+
+ if (fifo_idle) {
+- down_read(&fifo_state->rwsem);
+ if (dev_priv->cman) {
+ ret = vmw_cmdbuf_idle(dev_priv->cman, interruptible,
+ 10*HZ);
+ if (ret)
+ goto out_err;
++ } else if (fifo_state) {
++ down_read(&fifo_state->rwsem);
++ fifo_down = true;
+ }
+ }
+
+@@ -218,12 +228,12 @@ int vmw_fallback_wait(struct vmw_private *dev_priv,
+ }
+ }
+ finish_wait(&dev_priv->fence_queue, &__wait);
+- if (ret == 0 && fifo_idle)
++ if (ret == 0 && fifo_idle && fifo_state)
+ vmw_fence_write(dev_priv, signal_seq);
+
+ wake_up_all(&dev_priv->fence_queue);
+ out_err:
+- if (fifo_idle)
++ if (fifo_down)
+ up_read(&fifo_state->rwsem);
+
+ return ret;
+@@ -266,13 +276,13 @@ void vmw_seqno_waiter_remove(struct vmw_private *dev_priv)
+
+ void vmw_goal_waiter_add(struct vmw_private *dev_priv)
+ {
+- vmw_generic_waiter_add(dev_priv, SVGA_IRQFLAG_FENCE_GOAL,
++ vmw_generic_waiter_add(dev_priv, vmw_irqflag_fence_goal(dev_priv),
+ &dev_priv->goal_queue_waiters);
+ }
+
+ void vmw_goal_waiter_remove(struct vmw_private *dev_priv)
+ {
+- vmw_generic_waiter_remove(dev_priv, SVGA_IRQFLAG_FENCE_GOAL,
++ vmw_generic_waiter_remove(dev_priv, vmw_irqflag_fence_goal(dev_priv),
+ &dev_priv->goal_queue_waiters);
+ }
+
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index bbd2f4ec08ec1..93431e8f66060 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -1344,7 +1344,6 @@ vmw_kms_new_framebuffer(struct vmw_private *dev_priv,
+ ret = vmw_kms_new_framebuffer_surface(dev_priv, surface, &vfb,
+ mode_cmd,
+ is_bo_proxy);
+-
+ /*
+ * vmw_create_bo_proxy() adds a reference that is no longer
+ * needed
+@@ -1385,13 +1384,16 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
+ ret = vmw_user_lookup_handle(dev_priv, file_priv,
+ mode_cmd->handles[0],
+ &surface, &bo);
+- if (ret)
++ if (ret) {
++ DRM_ERROR("Invalid buffer object handle %u (0x%x).\n",
++ mode_cmd->handles[0], mode_cmd->handles[0]);
+ goto err_out;
++ }
+
+
+ if (!bo &&
+ !vmw_kms_srf_ok(dev_priv, mode_cmd->width, mode_cmd->height)) {
+- DRM_ERROR("Surface size cannot exceed %dx%d",
++ DRM_ERROR("Surface size cannot exceed %dx%d\n",
+ dev_priv->texture_max_width,
+ dev_priv->texture_max_height);
+ goto err_out;
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 8df25f1079bac..d958d87b7edcf 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -944,7 +944,7 @@ config SENSORS_LTC4261
+
+ config SENSORS_LTQ_CPUTEMP
+ bool "Lantiq cpu temperature sensor driver"
+- depends on LANTIQ
++ depends on SOC_XWAY
+ help
+ If you say yes here you get support for the temperature
+ sensor inside your CPU.
+diff --git a/drivers/hwmon/asus_wmi_sensors.c b/drivers/hwmon/asus_wmi_sensors.c
+index c80eee874b6c0..49784a6ea23a3 100644
+--- a/drivers/hwmon/asus_wmi_sensors.c
++++ b/drivers/hwmon/asus_wmi_sensors.c
+@@ -71,7 +71,7 @@ static const struct dmi_system_id asus_wmi_dmi_table[] = {
+ DMI_EXACT_MATCH_ASUS_BOARD_NAME("PRIME X399-A"),
+ DMI_EXACT_MATCH_ASUS_BOARD_NAME("PRIME X470-PRO"),
+ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VI EXTREME"),
+- DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VI HERO"),
++ DMI_EXACT_MATCH_ASUS_BOARD_NAME("CROSSHAIR VI HERO"),
+ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VI HERO (WI-FI AC)"),
+ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VII HERO"),
+ DMI_EXACT_MATCH_ASUS_BOARD_NAME("ROG CROSSHAIR VII HERO (WI-FI)"),
+diff --git a/drivers/hwmon/f71882fg.c b/drivers/hwmon/f71882fg.c
+index 938a8b9ec70dd..6830e029995dc 100644
+--- a/drivers/hwmon/f71882fg.c
++++ b/drivers/hwmon/f71882fg.c
+@@ -1578,8 +1578,9 @@ static ssize_t show_temp(struct device *dev, struct device_attribute *devattr,
+ temp *= 125;
+ if (sign)
+ temp -= 128000;
+- } else
+- temp = data->temp[nr] * 1000;
++ } else {
++ temp = ((s8)data->temp[nr]) * 1000;
++ }
+
+ return sprintf(buf, "%d\n", temp);
+ }
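
The f71882fg cast fixes a sign-extension bug: the register byte is
stored unsigned, so a reading of 0xF6 (-10 degC in two's complement)
multiplied directly would report 246000 millidegrees instead of -10000.
A runnable demonstration:

    #include <stdio.h>

    int main(void)
    {
            unsigned char raw = 0xF6;       /* -10 degC from the chip */

            printf("%d\n", raw * 1000);              /* 246000: wrong */
            printf("%d\n", (signed char)raw * 1000); /* -10000: right */
            return 0;
    }
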
+diff --git a/drivers/hwmon/tmp401.c b/drivers/hwmon/tmp401.c
+index b86d9df7105d1..52c9e7d3f2ae7 100644
+--- a/drivers/hwmon/tmp401.c
++++ b/drivers/hwmon/tmp401.c
+@@ -708,10 +708,21 @@ static int tmp401_probe(struct i2c_client *client)
+ return 0;
+ }
+
++static const struct of_device_id __maybe_unused tmp4xx_of_match[] = {
++ { .compatible = "ti,tmp401", },
++ { .compatible = "ti,tmp411", },
++ { .compatible = "ti,tmp431", },
++ { .compatible = "ti,tmp432", },
++ { .compatible = "ti,tmp435", },
++ { },
++};
++MODULE_DEVICE_TABLE(of, tmp4xx_of_match);
++
+ static struct i2c_driver tmp401_driver = {
+ .class = I2C_CLASS_HWMON,
+ .driver = {
+ .name = "tmp401",
++ .of_match_table = of_match_ptr(tmp4xx_of_match),
+ },
+ .probe_new = tmp401_probe,
+ .id_table = tmp401_id,
+diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
+index 082a3ddb0fa3b..632f65e53b63f 100644
+--- a/drivers/infiniband/hw/irdma/cm.c
++++ b/drivers/infiniband/hw/irdma/cm.c
+@@ -3242,15 +3242,10 @@ enum irdma_status_code irdma_setup_cm_core(struct irdma_device *iwdev,
+ */
+ void irdma_cleanup_cm_core(struct irdma_cm_core *cm_core)
+ {
+- unsigned long flags;
+-
+ if (!cm_core)
+ return;
+
+- spin_lock_irqsave(&cm_core->ht_lock, flags);
+- if (timer_pending(&cm_core->tcp_timer))
+- del_timer_sync(&cm_core->tcp_timer);
+- spin_unlock_irqrestore(&cm_core->ht_lock, flags);
++ del_timer_sync(&cm_core->tcp_timer);
+
+ destroy_workqueue(cm_core->event_wq);
+ cm_core->dev->ws_reset(&cm_core->iwdev->vsi);
+diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
+index 9050ca1f4285c..808f6e7a80482 100644
+--- a/drivers/interconnect/core.c
++++ b/drivers/interconnect/core.c
+@@ -1087,9 +1087,15 @@ static int of_count_icc_providers(struct device_node *np)
+ {
+ struct device_node *child;
+ int count = 0;
++ const struct of_device_id __maybe_unused ignore_list[] = {
++ { .compatible = "qcom,sc7180-ipa-virt" },
++ { .compatible = "qcom,sdx55-ipa-virt" },
++ {}
++ };
+
+ for_each_available_child_of_node(np, child) {
+- if (of_property_read_bool(child, "#interconnect-cells"))
++ if (of_property_read_bool(child, "#interconnect-cells") &&
++ likely(!of_match_node(ignore_list, child)))
+ count++;
+ count += of_count_icc_providers(child);
+ }
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
+index 01e9b50b10a18..87bf522b9d2ee 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c
+@@ -258,6 +258,34 @@ static void nvidia_smmu_probe_finalize(struct arm_smmu_device *smmu, struct devi
+ dev_name(dev), err);
+ }
+
++static int nvidia_smmu_init_context(struct arm_smmu_domain *smmu_domain,
++ struct io_pgtable_cfg *pgtbl_cfg,
++ struct device *dev)
++{
++ struct arm_smmu_device *smmu = smmu_domain->smmu;
++ const struct device_node *np = smmu->dev->of_node;
++
++ /*
++ * Tegra194 and Tegra234 SoCs have the erratum that causes walk cache
++ * entries to not be invalidated correctly. The problem is that the walk
++ * cache index generated for an IOVA is not the same across translation
++ * and invalidation requests. This leads to page faults when a PMD entry
++ * is released during unmap and repopulated with a new PTE table during a
++ * subsequent map request. Disabling large page mappings avoids the
++ * release of the PMD entry and prevents translations from seeing a stale
++ * PMD entry in the walk cache.
++ * Fix this by limiting the page mappings to PAGE_SIZE on Tegra194 and
++ * Tegra234.
++ */
++ if (of_device_is_compatible(np, "nvidia,tegra234-smmu") ||
++ of_device_is_compatible(np, "nvidia,tegra194-smmu")) {
++ smmu->pgsize_bitmap = PAGE_SIZE;
++ pgtbl_cfg->pgsize_bitmap = smmu->pgsize_bitmap;
++ }
++
++ return 0;
++}
++
+ static const struct arm_smmu_impl nvidia_smmu_impl = {
+ .read_reg = nvidia_smmu_read_reg,
+ .write_reg = nvidia_smmu_write_reg,
+@@ -268,10 +296,12 @@ static const struct arm_smmu_impl nvidia_smmu_impl = {
+ .global_fault = nvidia_smmu_global_fault,
+ .context_fault = nvidia_smmu_context_fault,
+ .probe_finalize = nvidia_smmu_probe_finalize,
++ .init_context = nvidia_smmu_init_context,
+ };
+
+ static const struct arm_smmu_impl nvidia_smmu_single_impl = {
+ .probe_finalize = nvidia_smmu_probe_finalize,
++ .init_context = nvidia_smmu_init_context,
+ };
+
+ struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index 6afb5db8244cd..6d15a743219f0 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -833,6 +833,9 @@ static void bcm_sf2_sw_mac_link_down(struct dsa_switch *ds, int port,
+ struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+ u32 reg, offset;
+
++ if (priv->wol_ports_mask & BIT(port))
++ return;
++
+ if (port != core_readl(priv, CORE_IMP0_PRT_ID)) {
+ if (priv->type == BCM4908_DEVICE_ID ||
+ priv->type == BCM7445_DEVICE_ID)
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 3a529ee8c8340..831833911a525 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -449,7 +449,7 @@ static int aq_pm_freeze(struct device *dev)
+
+ static int aq_pm_suspend_poweroff(struct device *dev)
+ {
+- return aq_suspend_common(dev, false);
++ return aq_suspend_common(dev, true);
+ }
+
+ static int aq_pm_thaw(struct device *dev)
+@@ -459,7 +459,7 @@ static int aq_pm_thaw(struct device *dev)
+
+ static int aq_pm_resume_restore(struct device *dev)
+ {
+- return atl_resume_common(dev, false);
++ return atl_resume_common(dev, true);
+ }
+
+ static const struct dev_pm_ops aq_pm_ops = {
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index c2bfb25e087c1..64bf31ceb6d90 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3999,6 +3999,10 @@ static int bcmgenet_probe(struct platform_device *pdev)
+ goto err;
+ }
+ priv->wol_irq = platform_get_irq_optional(pdev, 2);
++ if (priv->wol_irq == -EPROBE_DEFER) {
++ err = priv->wol_irq;
++ goto err;
++ }
+
+ priv->base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(priv->base)) {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index e7b4e3ed056c7..8d719f82854a9 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -2793,14 +2793,14 @@ int t4_get_raw_vpd_params(struct adapter *adapter, struct vpd_params *p)
+ goto out;
+ na = ret;
+
+- memcpy(p->id, vpd + id, min_t(int, id_len, ID_LEN));
++ memcpy(p->id, vpd + id, min_t(unsigned int, id_len, ID_LEN));
+ strim(p->id);
+- memcpy(p->sn, vpd + sn, min_t(int, sn_len, SERNUM_LEN));
++ memcpy(p->sn, vpd + sn, min_t(unsigned int, sn_len, SERNUM_LEN));
+ strim(p->sn);
+- memcpy(p->pn, vpd + pn, min_t(int, pn_len, PN_LEN));
++ memcpy(p->pn, vpd + pn, min_t(unsigned int, pn_len, PN_LEN));
+ strim(p->pn);
+- memcpy(p->na, vpd + na, min_t(int, na_len, MACADDR_LEN));
+- strim((char *)p->na);
++ memcpy(p->na, vpd + na, min_t(unsigned int, na_len, MACADDR_LEN));
++ strim(p->na);
+
+ out:
+ vfree(vpd);
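
The cxgb4 change switches the comparison type of min_t() from int to
unsigned int: with a signed compare, a length that does not fit in int wraps
negative, "wins" the min(), and then turns into an enormous size_t when
passed on to memcpy(). A standalone sketch using a simplified stand-in for
the kernel macro (hypothetical length value):

#include <stdio.h>

#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
    unsigned int len = 0x80000010;          /* bogus length from device data */

    int bad = min_t(int, len, 16);          /* signed: len wraps negative and wins */
    unsigned int good = min_t(unsigned int, len, 16);

    printf("bad=%d good=%u\n", bad, good);  /* bad=-2147483632 good=16 */
    return 0;
}
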
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index 31b03fe78d3be..313a798fe3a9f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -7535,42 +7535,43 @@ static void i40e_free_macvlan_channels(struct i40e_vsi *vsi)
+ static int i40e_fwd_ring_up(struct i40e_vsi *vsi, struct net_device *vdev,
+ struct i40e_fwd_adapter *fwd)
+ {
++ struct i40e_channel *ch = NULL, *ch_tmp, *iter;
+ int ret = 0, num_tc = 1, i, aq_err;
+- struct i40e_channel *ch, *ch_tmp;
+ struct i40e_pf *pf = vsi->back;
+ struct i40e_hw *hw = &pf->hw;
+
+- if (list_empty(&vsi->macvlan_list))
+- return -EINVAL;
+-
+ /* Go through the list and find an available channel */
+- list_for_each_entry_safe(ch, ch_tmp, &vsi->macvlan_list, list) {
+- if (!i40e_is_channel_macvlan(ch)) {
+- ch->fwd = fwd;
++ list_for_each_entry_safe(iter, ch_tmp, &vsi->macvlan_list, list) {
++ if (!i40e_is_channel_macvlan(iter)) {
++ iter->fwd = fwd;
+ /* record configuration for macvlan interface in vdev */
+ for (i = 0; i < num_tc; i++)
+ netdev_bind_sb_channel_queue(vsi->netdev, vdev,
+ i,
+- ch->num_queue_pairs,
+- ch->base_queue);
+- for (i = 0; i < ch->num_queue_pairs; i++) {
++ iter->num_queue_pairs,
++ iter->base_queue);
++ for (i = 0; i < iter->num_queue_pairs; i++) {
+ struct i40e_ring *tx_ring, *rx_ring;
+ u16 pf_q;
+
+- pf_q = ch->base_queue + i;
++ pf_q = iter->base_queue + i;
+
+ /* Get to TX ring ptr */
+ tx_ring = vsi->tx_rings[pf_q];
+- tx_ring->ch = ch;
++ tx_ring->ch = iter;
+
+ /* Get the RX ring ptr */
+ rx_ring = vsi->rx_rings[pf_q];
+- rx_ring->ch = ch;
++ rx_ring->ch = iter;
+ }
++ ch = iter;
+ break;
+ }
+ }
+
++ if (!ch)
++ return -EINVAL;
++
+ /* Guarantee all rings are updated before we update the
+ * MAC address filter.
+ */
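
The i40e rework is an instance of the well-known "list iterator used after
the loop" pattern: once list_for_each_entry() terminates without a break, the
iterator no longer points at a valid element, so the code now scans with a
scratch variable and publishes the match through a second pointer that stays
NULL when nothing was found. A standalone sketch of the shape (hypothetical
types, plain pointers instead of list_head):

#include <stdio.h>
#include <stddef.h>

struct chan { int id; int in_use; struct chan *next; };

static struct chan *find_free(struct chan *head)
{
    struct chan *found = NULL, *iter;

    for (iter = head; iter; iter = iter->next) {
        if (!iter->in_use) {
            iter->in_use = 1;   /* claim it */
            found = iter;       /* publish only on a real match */
            break;
        }
    }
    return found;               /* NULL maps to -EINVAL in the driver */
}

int main(void)
{
    struct chan c2 = { 2, 0, NULL }, c1 = { 1, 1, &c2 };
    struct chan *c = find_free(&c1);

    printf("%d\n", c ? c->id : -1);     /* prints 2 */
    return 0;
}
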
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 9c04a71a9fca3..e2ffdf2726fde 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -546,6 +546,7 @@ struct ice_pf {
+ struct mutex avail_q_mutex; /* protects access to avail_[rx|tx]qs */
+ struct mutex sw_mutex; /* lock for protecting VSI alloc flow */
+ struct mutex tc_mutex; /* lock to protect TC changes */
++ struct mutex adev_mutex; /* lock to protect aux device access */
+ u32 msg_enable;
+ struct ice_ptp ptp;
+ u16 num_rdma_msix; /* Total MSIX vectors for RDMA driver */
+diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
+index 5559230eff8b5..0deed090645f8 100644
+--- a/drivers/net/ethernet/intel/ice/ice_idc.c
++++ b/drivers/net/ethernet/intel/ice/ice_idc.c
+@@ -37,14 +37,17 @@ void ice_send_event_to_aux(struct ice_pf *pf, struct iidc_event *event)
+ if (WARN_ON_ONCE(!in_task()))
+ return;
+
++ mutex_lock(&pf->adev_mutex);
+ if (!pf->adev)
+- return;
++ goto finish;
+
+ device_lock(&pf->adev->dev);
+ iadrv = ice_get_auxiliary_drv(pf);
+ if (iadrv && iadrv->event_handler)
+ iadrv->event_handler(pf, event);
+ device_unlock(&pf->adev->dev);
++finish:
++ mutex_unlock(&pf->adev_mutex);
+ }
+
+ /**
+@@ -285,7 +288,6 @@ int ice_plug_aux_dev(struct ice_pf *pf)
+ return -ENOMEM;
+
+ adev = &iadev->adev;
+- pf->adev = adev;
+ iadev->pf = pf;
+
+ adev->id = pf->aux_idx;
+@@ -295,18 +297,20 @@ int ice_plug_aux_dev(struct ice_pf *pf)
+
+ ret = auxiliary_device_init(adev);
+ if (ret) {
+- pf->adev = NULL;
+ kfree(iadev);
+ return ret;
+ }
+
+ ret = auxiliary_device_add(adev);
+ if (ret) {
+- pf->adev = NULL;
+ auxiliary_device_uninit(adev);
+ return ret;
+ }
+
++ mutex_lock(&pf->adev_mutex);
++ pf->adev = adev;
++ mutex_unlock(&pf->adev_mutex);
++
+ return 0;
+ }
+
+@@ -315,12 +319,17 @@ int ice_plug_aux_dev(struct ice_pf *pf)
+ */
+ void ice_unplug_aux_dev(struct ice_pf *pf)
+ {
+- if (!pf->adev)
+- return;
++ struct auxiliary_device *adev;
+
+- auxiliary_device_delete(pf->adev);
+- auxiliary_device_uninit(pf->adev);
++ mutex_lock(&pf->adev_mutex);
++ adev = pf->adev;
+ pf->adev = NULL;
++ mutex_unlock(&pf->adev_mutex);
++
++ if (adev) {
++ auxiliary_device_delete(adev);
++ auxiliary_device_uninit(adev);
++ }
+ }
+
+ /**
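
The ice changes above use a common handoff idiom: publish the auxiliary
device pointer only after it is fully initialized, and on teardown take the
mutex just long enough to fetch-and-clear the pointer, doing the slow
destruction outside the lock. Readers holding the same mutex then see either
a complete object or NULL, never a half-torn-down one. A standalone pthread
sketch of the idiom (hypothetical names, not the ice code):

#include <pthread.h>
#include <stdlib.h>

struct dev { int id; };

static pthread_mutex_t adev_lock = PTHREAD_MUTEX_INITIALIZER;
static struct dev *adev;        /* protected by adev_lock */

static void plug(void)
{
    struct dev *d = malloc(sizeof(*d));

    if (!d)
        return;
    d->id = 1;                  /* fully initialize before publishing */

    pthread_mutex_lock(&adev_lock);
    adev = d;
    pthread_mutex_unlock(&adev_lock);
}

static void unplug(void)
{
    struct dev *d;

    pthread_mutex_lock(&adev_lock);     /* fetch-and-clear under the lock */
    d = adev;
    adev = NULL;
    pthread_mutex_unlock(&adev_lock);

    free(d);                            /* destroy outside the lock */
}

int main(void) { plug(); unplug(); return 0; }
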
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index e347030ee2e33..7f6715eb862fe 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -3682,6 +3682,7 @@ u16 ice_get_avail_rxq_count(struct ice_pf *pf)
+ static void ice_deinit_pf(struct ice_pf *pf)
+ {
+ ice_service_task_stop(pf);
++ mutex_destroy(&pf->adev_mutex);
+ mutex_destroy(&pf->sw_mutex);
+ mutex_destroy(&pf->tc_mutex);
+ mutex_destroy(&pf->avail_q_mutex);
+@@ -3762,6 +3763,7 @@ static int ice_init_pf(struct ice_pf *pf)
+
+ mutex_init(&pf->sw_mutex);
+ mutex_init(&pf->tc_mutex);
++ mutex_init(&pf->adev_mutex);
+
+ INIT_HLIST_HEAD(&pf->aq_wait_list);
+ spin_lock_init(&pf->aq_wait_lock);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index 000c39d163a28..45ae97b8b97db 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -2279,6 +2279,7 @@ ice_ptp_init_tx_e810(struct ice_pf *pf, struct ice_ptp_tx *tx)
+
+ /**
+ * ice_ptp_tx_tstamp_cleanup - Cleanup old timestamp requests that got dropped
++ * @hw: pointer to the hw struct
+ * @tx: PTP Tx tracker to clean up
+ *
+ * Loop through the Tx timestamp requests and see if any of them have been
+@@ -2287,7 +2288,7 @@ ice_ptp_init_tx_e810(struct ice_pf *pf, struct ice_ptp_tx *tx)
+ * timestamp will never be captured. This might happen if the packet gets
+ * discarded before it reaches the PHY timestamping block.
+ */
+-static void ice_ptp_tx_tstamp_cleanup(struct ice_ptp_tx *tx)
++static void ice_ptp_tx_tstamp_cleanup(struct ice_hw *hw, struct ice_ptp_tx *tx)
+ {
+ u8 idx;
+
+@@ -2296,11 +2297,16 @@ static void ice_ptp_tx_tstamp_cleanup(struct ice_ptp_tx *tx)
+
+ for_each_set_bit(idx, tx->in_use, tx->len) {
+ struct sk_buff *skb;
++ u64 raw_tstamp;
+
+ /* Check if this SKB has been waiting for too long */
+ if (time_is_after_jiffies(tx->tstamps[idx].start + 2 * HZ))
+ continue;
+
++ /* Read tstamp to be able to use this register again */
++ ice_read_phy_tstamp(hw, tx->quad, idx + tx->quad_offset,
++ &raw_tstamp);
++
+ spin_lock(&tx->lock);
+ skb = tx->tstamps[idx].skb;
+ tx->tstamps[idx].skb = NULL;
+@@ -2322,7 +2328,7 @@ static void ice_ptp_periodic_work(struct kthread_work *work)
+
+ ice_ptp_update_cached_phctime(pf);
+
+- ice_ptp_tx_tstamp_cleanup(&pf->ptp.port.tx);
++ ice_ptp_tx_tstamp_cleanup(&pf->hw, &pf->ptp.port.tx);
+
+ /* Run twice a second */
+ kthread_queue_delayed_work(ptp->kworker, &ptp->work,
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 2bee8f10ad89c..0cc8b7e06b72a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -3300,13 +3300,52 @@ error_param:
+ NULL, 0);
+ }
+
++/**
++ * ice_vf_vsi_dis_single_txq - disable a single Tx queue
++ * @vf: VF to disable queue for
++ * @vsi: VSI for the VF
++ * @q_id: VF relative (0-based) queue ID
++ *
++ * Attempt to disable the Tx queue passed in. If the Tx queue was successfully
++ * disabled then clear the q_id bit in the enabled queues bitmap and return
++ * success. Otherwise return an error.
++ */
++static int
++ice_vf_vsi_dis_single_txq(struct ice_vf *vf, struct ice_vsi *vsi, u16 q_id)
++{
++ struct ice_txq_meta txq_meta = { 0 };
++ struct ice_tx_ring *ring;
++ int err;
++
++ if (!test_bit(q_id, vf->txq_ena))
++ dev_dbg(ice_pf_to_dev(vsi->back), "Queue %u on VSI %u is not enabled, but stopping it anyway\n",
++ q_id, vsi->vsi_num);
++
++ ring = vsi->tx_rings[q_id];
++ if (!ring)
++ return -EINVAL;
++
++ ice_fill_txq_meta(vsi, ring, &txq_meta);
++
++ err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, vf->vf_id, ring, &txq_meta);
++ if (err) {
++ dev_err(ice_pf_to_dev(vsi->back), "Failed to stop Tx ring %d on VSI %d\n",
++ q_id, vsi->vsi_num);
++ return err;
++ }
++
++ /* Clear enabled queues flag */
++ clear_bit(q_id, vf->txq_ena);
++
++ return 0;
++}
++
+ /**
+ * ice_vc_dis_qs_msg
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ *
+- * called from the VF to disable all or specific
+- * queue(s)
++ * called from the VF to disable all or specific queue(s)
+ */
+ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
+ {
+@@ -3343,30 +3382,15 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
+ q_map = vqs->tx_queues;
+
+ for_each_set_bit(vf_q_id, &q_map, ICE_MAX_RSS_QS_PER_VF) {
+- struct ice_tx_ring *ring = vsi->tx_rings[vf_q_id];
+- struct ice_txq_meta txq_meta = { 0 };
+-
+ if (!ice_vc_isvalid_q_id(vf, vqs->vsi_id, vf_q_id)) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ goto error_param;
+ }
+
+- if (!test_bit(vf_q_id, vf->txq_ena))
+- dev_dbg(ice_pf_to_dev(vsi->back), "Queue %u on VSI %u is not enabled, but stopping it anyway\n",
+- vf_q_id, vsi->vsi_num);
+-
+- ice_fill_txq_meta(vsi, ring, &txq_meta);
+-
+- if (ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, vf->vf_id,
+- ring, &txq_meta)) {
+- dev_err(ice_pf_to_dev(vsi->back), "Failed to stop Tx ring %d on VSI %d\n",
+- vf_q_id, vsi->vsi_num);
++ if (ice_vf_vsi_dis_single_txq(vf, vsi, vf_q_id)) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ goto error_param;
+ }
+-
+- /* Clear enabled queues flag */
+- clear_bit(vf_q_id, vf->txq_ena);
+ }
+ }
+
+@@ -3615,6 +3639,14 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ if (qpi->txq.ring_len > 0) {
+ vsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr;
+ vsi->tx_rings[i]->count = qpi->txq.ring_len;
++
++ /* Disable any existing queue first */
++ if (ice_vf_vsi_dis_single_txq(vf, vsi, q_idx)) {
++ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++ goto error_param;
++ }
++
++ /* Configure a queue with the requested settings */
+ if (ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx)) {
+ v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ goto error_param;
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
+index 3ad10c793308e..66298e2235c91 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
+@@ -395,7 +395,7 @@ static void mtk_ppe_init_foe_table(struct mtk_ppe *ppe)
+ static const u8 skip[] = { 12, 25, 38, 51, 76, 89, 102 };
+ int i, k;
+
+- memset(ppe->foe_table, 0, MTK_PPE_ENTRIES * sizeof(ppe->foe_table));
++ memset(ppe->foe_table, 0, MTK_PPE_ENTRIES * sizeof(*ppe->foe_table));
+
+ if (!IS_ENABLED(CONFIG_SOC_MT7621))
+ return;
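
The mtk_ppe one-liner is the classic sizeof(ptr) versus sizeof(*ptr) bug:
foe_table is a pointer, so the old expression cleared MTK_PPE_ENTRIES
pointer-sizes worth of memory instead of MTK_PPE_ENTRIES table entries. A
standalone sketch (hypothetical entry layout):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct foe_entry { unsigned int ib1, ib2, data[14]; };  /* 64 bytes */

int main(void)
{
    struct foe_entry *table = calloc(128, sizeof(*table));

    if (!table)
        return 1;
    printf("wrong: %zu\n", 128 * sizeof(table));   /* 1024 on 64-bit */
    printf("right: %zu\n", 128 * sizeof(*table));  /* 8192 */

    memset(table, 0, 128 * sizeof(*table));
    free(table);
    return 0;
}
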
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
+index 01cf5a6a26bd3..a2ee695a3f178 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ipip.c
+@@ -568,10 +568,8 @@ static int
+ mlxsw_sp2_ipip_rem_addr_set_gre6(struct mlxsw_sp *mlxsw_sp,
+ struct mlxsw_sp_ipip_entry *ipip_entry)
+ {
+- struct __ip6_tnl_parm parms6;
+-
+- parms6 = mlxsw_sp_ipip_netdev_parms6(ipip_entry->ol_dev);
+- return mlxsw_sp_ipv6_addr_kvdl_index_get(mlxsw_sp, &parms6.raddr,
++ return mlxsw_sp_ipv6_addr_kvdl_index_get(mlxsw_sp,
++ &ipip_entry->parms.daddr.addr6,
+ &ipip_entry->dip_kvdl_index);
+ }
+
+@@ -579,10 +577,7 @@ static void
+ mlxsw_sp2_ipip_rem_addr_unset_gre6(struct mlxsw_sp *mlxsw_sp,
+ const struct mlxsw_sp_ipip_entry *ipip_entry)
+ {
+- struct __ip6_tnl_parm parms6;
+-
+- parms6 = mlxsw_sp_ipip_netdev_parms6(ipip_entry->ol_dev);
+- mlxsw_sp_ipv6_addr_put(mlxsw_sp, &parms6.raddr);
++ mlxsw_sp_ipv6_addr_put(mlxsw_sp, &ipip_entry->parms.daddr.addr6);
+ }
+
+ static const struct mlxsw_sp_ipip_ops mlxsw_sp2_ipip_gre6_ops = {
+diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
+index fdb4d7e7296c8..cb602a2261497 100644
+--- a/drivers/net/ethernet/mscc/ocelot_flower.c
++++ b/drivers/net/ethernet/mscc/ocelot_flower.c
+@@ -278,9 +278,10 @@ static int ocelot_flower_parse_action(struct ocelot *ocelot, int port,
+ filter->type = OCELOT_VCAP_FILTER_OFFLOAD;
+ break;
+ case FLOW_ACTION_TRAP:
+- if (filter->block_id != VCAP_IS2) {
++ if (filter->block_id != VCAP_IS2 ||
++ filter->lookup != 0) {
+ NL_SET_ERR_MSG_MOD(extack,
+- "Trap action can only be offloaded to VCAP IS2");
++ "Trap action can only be offloaded to VCAP IS2 lookup 0");
+ return -EOPNOTSUPP;
+ }
+ if (filter->goto_target != -1) {
+diff --git a/drivers/net/ethernet/mscc/ocelot_vcap.c b/drivers/net/ethernet/mscc/ocelot_vcap.c
+index d3544413a8a43..f159726788bad 100644
+--- a/drivers/net/ethernet/mscc/ocelot_vcap.c
++++ b/drivers/net/ethernet/mscc/ocelot_vcap.c
+@@ -373,7 +373,6 @@ static void is2_entry_set(struct ocelot *ocelot, int ix,
+ OCELOT_VCAP_BIT_0);
+ vcap_key_set(vcap, &data, VCAP_IS2_HK_IGR_PORT_MASK, 0,
+ ~filter->ingress_port_mask);
+- vcap_key_bit_set(vcap, &data, VCAP_IS2_HK_FIRST, OCELOT_VCAP_BIT_ANY);
+ vcap_key_bit_set(vcap, &data, VCAP_IS2_HK_HOST_MATCH,
+ OCELOT_VCAP_BIT_ANY);
+ vcap_key_bit_set(vcap, &data, VCAP_IS2_HK_L2_MC, filter->dmac_mc);
+@@ -1182,6 +1181,8 @@ int ocelot_vcap_filter_add(struct ocelot *ocelot,
+ struct ocelot_vcap_filter *tmp;
+
+ tmp = ocelot_vcap_block_find_filter_by_index(block, i);
++ /* Read back the filter's counters before moving it */
++ vcap_entry_get(ocelot, i - 1, tmp);
+ vcap_entry_set(ocelot, i, tmp);
+ }
+
+@@ -1221,7 +1222,11 @@ int ocelot_vcap_filter_del(struct ocelot *ocelot,
+ struct ocelot_vcap_filter del_filter;
+ int i, index;
+
++ /* Need to inherit the block_id so that vcap_entry_set()
++ * does not get confused and knows where to install it.
++ */
+ memset(&del_filter, 0, sizeof(del_filter));
++ del_filter.block_id = filter->block_id;
+
+ /* Gets index of the filter */
+ index = ocelot_vcap_block_get_filter_index(block, filter);
+@@ -1236,6 +1241,8 @@ int ocelot_vcap_filter_del(struct ocelot *ocelot,
+ struct ocelot_vcap_filter *tmp;
+
+ tmp = ocelot_vcap_block_find_filter_by_index(block, i);
++ /* Read back the filter's counters before moving it */
++ vcap_entry_get(ocelot, i + 1, tmp);
+ vcap_entry_set(ocelot, i, tmp);
+ }
+
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+index 40fa5bce2ac2c..d324c292318b3 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+@@ -255,7 +255,7 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ err = ionic_map_bars(ionic);
+ if (err)
+- goto err_out_pci_disable_device;
++ goto err_out_pci_release_regions;
+
+ /* Configure the device */
+ err = ionic_setup(ionic);
+@@ -359,6 +359,7 @@ err_out_teardown:
+
+ err_out_unmap_bars:
+ ionic_unmap_bars(ionic);
++err_out_pci_release_regions:
+ pci_release_regions(pdev);
+ err_out_pci_disable_device:
+ pci_disable_device(pdev);
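
The ionic fix adds the missing rung to the error-unwind ladder: once
pci_request_regions() has succeeded, a later failure has to jump to a label
that releases the regions, and the labels must undo acquisitions in reverse
order. A compact standalone sketch of the idiom (hypothetical step names):

#include <stdio.h>

static int step_a(void) { return 0; }
static int step_b(void) { return 0; }
static int step_c(void) { return -1; }  /* pretend this step fails */
static void undo_a(void) { puts("undo a"); }
static void undo_b(void) { puts("undo b"); }

static int probe(void)
{
    int err;

    err = step_a();
    if (err)
        goto out;
    err = step_b();
    if (err)
        goto err_undo_a;
    err = step_c();
    if (err)
        goto err_undo_b;        /* the bug was jumping to err_undo_a here */
    return 0;

err_undo_b:
    undo_b();
err_undo_a:
    undo_a();
out:
    return err;
}

int main(void) { return probe() ? 1 : 0; }
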
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index cf366ed2557cc..1ab725d554a51 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -3579,6 +3579,11 @@ static int efx_ef10_mtd_probe(struct efx_nic *efx)
+ n_parts++;
+ }
+
++ if (!n_parts) {
++ kfree(parts);
++ return 0;
++ }
++
+ rc = efx_mtd_add(efx, &parts[0].common, n_parts, sizeof(*parts));
+ fail:
+ if (rc)
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index 40bfd0ad7d053..eec0db76d888c 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -845,7 +845,9 @@ static void efx_set_xdp_channels(struct efx_nic *efx)
+
+ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ {
+- struct efx_channel *other_channel[EFX_MAX_CHANNELS], *channel;
++ struct efx_channel *other_channel[EFX_MAX_CHANNELS], *channel,
++ *ptp_channel = efx_ptp_channel(efx);
++ struct efx_ptp_data *ptp_data = efx->ptp_data;
+ unsigned int i, next_buffer_table = 0;
+ u32 old_rxq_entries, old_txq_entries;
+ int rc, rc2;
+@@ -916,6 +918,7 @@ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+
+ efx_set_xdp_channels(efx);
+ out:
++ efx->ptp_data = NULL;
+ /* Destroy unused channel structures */
+ for (i = 0; i < efx->n_channels; i++) {
+ channel = other_channel[i];
+@@ -926,6 +929,7 @@ out:
+ }
+ }
+
++ efx->ptp_data = ptp_data;
+ rc2 = efx_soft_enable_interrupts(efx);
+ if (rc2) {
+ rc = rc ? rc : rc2;
+@@ -944,6 +948,7 @@ rollback:
+ efx->txq_entries = old_txq_entries;
+ for (i = 0; i < efx->n_channels; i++)
+ swap(efx->channel[i], other_channel[i]);
++ efx_ptp_update_channel(efx, ptp_channel);
+ goto out;
+ }
+
+diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
+index f0ef515e2ade5..4625f85acab2e 100644
+--- a/drivers/net/ethernet/sfc/ptp.c
++++ b/drivers/net/ethernet/sfc/ptp.c
+@@ -45,6 +45,7 @@
+ #include "farch_regs.h"
+ #include "tx.h"
+ #include "nic.h" /* indirectly includes ptp.h */
++#include "efx_channels.h"
+
+ /* Maximum number of events expected to make up a PTP event */
+ #define MAX_EVENT_FRAGS 3
+@@ -541,6 +542,12 @@ struct efx_channel *efx_ptp_channel(struct efx_nic *efx)
+ return efx->ptp_data ? efx->ptp_data->channel : NULL;
+ }
+
++void efx_ptp_update_channel(struct efx_nic *efx, struct efx_channel *channel)
++{
++ if (efx->ptp_data)
++ efx->ptp_data->channel = channel;
++}
++
+ static u32 last_sync_timestamp_major(struct efx_nic *efx)
+ {
+ struct efx_channel *channel = efx_ptp_channel(efx);
+@@ -1443,6 +1450,11 @@ int efx_ptp_probe(struct efx_nic *efx, struct efx_channel *channel)
+ int rc = 0;
+ unsigned int pos;
+
++ if (efx->ptp_data) {
++ efx->ptp_data->channel = channel;
++ return 0;
++ }
++
+ ptp = kzalloc(sizeof(struct efx_ptp_data), GFP_KERNEL);
+ efx->ptp_data = ptp;
+ if (!efx->ptp_data)
+@@ -2176,7 +2188,7 @@ static const struct efx_channel_type efx_ptp_channel_type = {
+ .pre_probe = efx_ptp_probe_channel,
+ .post_remove = efx_ptp_remove_channel,
+ .get_name = efx_ptp_get_channel_name,
+- /* no copy operation; there is no need to reallocate this channel */
++ .copy = efx_copy_channel,
+ .receive_skb = efx_ptp_rx,
+ .want_txqs = efx_ptp_want_txqs,
+ .keep_eventq = false,
+diff --git a/drivers/net/ethernet/sfc/ptp.h b/drivers/net/ethernet/sfc/ptp.h
+index 9855e8c9e544d..7b1ef7002b3f0 100644
+--- a/drivers/net/ethernet/sfc/ptp.h
++++ b/drivers/net/ethernet/sfc/ptp.h
+@@ -16,6 +16,7 @@ struct ethtool_ts_info;
+ int efx_ptp_probe(struct efx_nic *efx, struct efx_channel *channel);
+ void efx_ptp_defer_probe_with_channel(struct efx_nic *efx);
+ struct efx_channel *efx_ptp_channel(struct efx_nic *efx);
++void efx_ptp_update_channel(struct efx_nic *efx, struct efx_channel *channel);
+ void efx_ptp_remove(struct efx_nic *efx);
+ int efx_ptp_set_ts_config(struct efx_nic *efx, struct ifreq *ifr);
+ int efx_ptp_get_ts_config(struct efx_nic *efx, struct ifreq *ifr);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 08a670bf2cd19..c2b142cf75eb4 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -935,8 +935,6 @@ static int xemaclite_open(struct net_device *dev)
+ xemaclite_disable_interrupts(lp);
+
+ if (lp->phy_node) {
+- u32 bmcr;
+-
+ lp->phy_dev = of_phy_connect(lp->ndev, lp->phy_node,
+ xemaclite_adjust_link, 0,
+ PHY_INTERFACE_MODE_MII);
+@@ -947,19 +945,6 @@ static int xemaclite_open(struct net_device *dev)
+
+ /* EmacLite doesn't support giga-bit speeds */
+ phy_set_max_speed(lp->phy_dev, SPEED_100);
+-
+- /* Don't advertise 1000BASE-T Full/Half duplex speeds */
+- phy_write(lp->phy_dev, MII_CTRL1000, 0);
+-
+- /* Advertise only 10 and 100mbps full/half duplex speeds */
+- phy_write(lp->phy_dev, MII_ADVERTISE, ADVERTISE_ALL |
+- ADVERTISE_CSMA);
+-
+- /* Restart auto negotiation */
+- bmcr = phy_read(lp->phy_dev, MII_BMCR);
+- bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
+- phy_write(lp->phy_dev, MII_BMCR, bmcr);
+-
+ phy_start(lp->phy_dev);
+ }
+
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 281cebc3d00cc..cfb5378bbb390 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1594,7 +1594,7 @@ static int ksz886x_cable_test_get_status(struct phy_device *phydev,
+
+ static int lanphy_read_page_reg(struct phy_device *phydev, int page, u32 addr)
+ {
+- u32 data;
++ int data;
+
+ phy_lock_mdio_bus(phydev);
+ __phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
+@@ -1725,6 +1725,7 @@ static struct phy_driver ksphy_driver[] = {
+ .name = "Micrel KS8737",
+ /* PHY_BASIC_FEATURES */
+ .driver_data = &ks8737_type,
++ .probe = kszphy_probe,
+ .config_init = kszphy_config_init,
+ .config_intr = kszphy_config_intr,
+ .handle_interrupt = kszphy_handle_interrupt,
+@@ -1850,8 +1851,8 @@ static struct phy_driver ksphy_driver[] = {
+ .config_init = ksz8061_config_init,
+ .config_intr = kszphy_config_intr,
+ .handle_interrupt = kszphy_handle_interrupt,
+- .suspend = kszphy_suspend,
+- .resume = kszphy_resume,
++ .suspend = genphy_suspend,
++ .resume = genphy_resume,
+ }, {
+ .phy_id = PHY_ID_KSZ9021,
+ .phy_id_mask = 0x000ffffe,
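
The lanphy_read_page_reg() fix changes the local from u32 to int because the
MDIO accessors can return a negative errno; stored in an unsigned variable
the error turns into a large positive value and any later "ret < 0" test is
dead code. A standalone sketch (stub in place of the real accessor):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int phy_read_stub(void) { return -5; }   /* a negative errno, say -EIO */

int main(void)
{
    uint32_t udata = phy_read_stub();   /* -5 becomes 0xFFFFFFFB */
    int sdata = phy_read_stub();

    /* Indistinguishable from a huge register value */
    printf("udata = 0x%08" PRIX32 "\n", udata);
    if (sdata < 0)
        printf("sdata = %d, error detected\n", sdata);
    return 0;
}
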
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index beb2b66da1324..f122026c46826 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -970,8 +970,13 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
+ {
+ struct phy_device *phydev = phy_dat;
+ struct phy_driver *drv = phydev->drv;
++ irqreturn_t ret;
+
+- return drv->handle_interrupt(phydev);
++ mutex_lock(&phydev->lock);
++ ret = drv->handle_interrupt(phydev);
++ mutex_unlock(&phydev->lock);
++
++ return ret;
+ }
+
+ /**
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 4720b24ca51b5..90dfefc1f5f8d 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -250,6 +250,7 @@ struct sfp {
+ struct sfp_eeprom_id id;
+ unsigned int module_power_mW;
+ unsigned int module_t_start_up;
++ bool tx_fault_ignore;
+
+ #if IS_ENABLED(CONFIG_HWMON)
+ struct sfp_diag diag;
+@@ -1945,6 +1946,12 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
+ else
+ sfp->module_t_start_up = T_START_UP;
+
++ if (!memcmp(id.base.vendor_name, "HUAWEI ", 16) &&
++ !memcmp(id.base.vendor_pn, "MA5671A ", 16))
++ sfp->tx_fault_ignore = true;
++ else
++ sfp->tx_fault_ignore = false;
++
+ return 0;
+ }
+
+@@ -2397,7 +2404,10 @@ static void sfp_check_state(struct sfp *sfp)
+ mutex_lock(&sfp->st_mutex);
+ state = sfp_get_state(sfp);
+ changed = state ^ sfp->state;
+- changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
++ if (sfp->tx_fault_ignore)
++ changed &= SFP_F_PRESENT | SFP_F_LOS;
++ else
++ changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
+
+ for (i = 0; i < GPIO_MAX; i++)
+ if (changed & BIT(i))
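
The sfp quirk matches the EEPROM identity fields with memcmp() over the full
16-byte width because SFF-8472 strings are fixed-width and space-padded, not
NUL-terminated, so strcmp() would run past the field. A standalone sketch
with a made-up module ID (hypothetical vendor data):

#include <stdio.h>
#include <string.h>

struct eeprom_id {
    char vendor_name[16];       /* space padded, no terminating NUL */
    char vendor_pn[16];
};

int main(void)
{
    struct eeprom_id id;

    memcpy(id.vendor_name, "ACME CORP       ", 16);
    memcpy(id.vendor_pn,   "SFP-1234        ", 16);

    /* Compare the whole fixed-width field, padding included */
    if (!memcmp(id.vendor_name, "ACME CORP       ", 16) &&
        !memcmp(id.vendor_pn,   "SFP-1234        ", 16))
        puts("quirk applies");
    return 0;
}
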
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index 293563b3f7842..412d812a07997 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -1275,6 +1275,7 @@ static void ath11k_core_restart(struct work_struct *work)
+
+ ieee80211_stop_queues(ar->hw);
+ ath11k_mac_drain_tx(ar);
++ complete(&ar->completed_11d_scan);
+ complete(&ar->scan.started);
+ complete(&ar->scan.completed);
+ complete(&ar->peer_assoc_done);
+diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
+index 9e88ccca5ca75..09a2c1744a54c 100644
+--- a/drivers/net/wireless/ath/ath11k/core.h
++++ b/drivers/net/wireless/ath/ath11k/core.h
+@@ -38,6 +38,8 @@
+
+ extern unsigned int ath11k_frame_mode;
+
++#define ATH11K_SCAN_TIMEOUT_HZ (20 * HZ)
++
+ #define ATH11K_MON_TIMER_INTERVAL 10
+
+ enum ath11k_supported_bw {
+@@ -189,6 +191,12 @@ enum ath11k_scan_state {
+ ATH11K_SCAN_ABORTING,
+ };
+
++enum ath11k_11d_state {
++ ATH11K_11D_IDLE,
++ ATH11K_11D_PREPARING,
++ ATH11K_11D_RUNNING,
++};
++
+ enum ath11k_dev_flags {
+ ATH11K_CAC_RUNNING,
+ ATH11K_FLAG_CORE_REGISTERED,
+@@ -599,9 +607,8 @@ struct ath11k {
+ bool dfs_block_radar_events;
+ struct ath11k_thermal thermal;
+ u32 vdev_id_11d_scan;
+- struct completion finish_11d_scan;
+- struct completion finish_11d_ch_list;
+- bool pending_11d;
++ struct completion completed_11d_scan;
++ enum ath11k_11d_state state_11d;
+ bool regdom_set_by_user;
+ };
+
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index f54d5819477a4..b2dac859dfe16 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -3596,26 +3596,6 @@ static int ath11k_mac_op_hw_scan(struct ieee80211_hw *hw,
+ if (ret)
+ goto exit;
+
+- /* Currently the pending_11d=true only happened 1 time while
+- * wlan interface up in ath11k_mac_11d_scan_start(), it is called by
+- * ath11k_mac_op_add_interface(), after wlan interface up,
+- * pending_11d=false always.
+- * If remove below wait, it always happened scan fail and lead connect
+- * fail while wlan interface up, because it has a 11d scan which is running
+- * in firmware, and lead this scan failed.
+- */
+- if (ar->pending_11d) {
+- long time_left;
+- unsigned long timeout = 5 * HZ;
+-
+- if (ar->supports_6ghz)
+- timeout += 5 * HZ;
+-
+- time_left = wait_for_completion_timeout(&ar->finish_11d_ch_list, timeout);
+- ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
+- "mac wait 11d channel list time left %ld\n", time_left);
+- }
+-
+ memset(&arg, 0, sizeof(arg));
+ ath11k_wmi_start_scan_init(ar, &arg);
+ arg.vdev_id = arvif->vdev_id;
+@@ -3681,6 +3661,10 @@ exit:
+ kfree(arg.extraie.ptr);
+
+ mutex_unlock(&ar->conf_mutex);
++
++ if (ar->state_11d == ATH11K_11D_PREPARING)
++ ath11k_mac_11d_scan_start(ar, arvif->vdev_id);
++
+ return ret;
+ }
+
+@@ -5809,7 +5793,7 @@ static int ath11k_mac_op_start(struct ieee80211_hw *hw)
+
+ /* TODO: Do we need to enable ANI? */
+
+- ath11k_reg_update_chan_list(ar);
++ ath11k_reg_update_chan_list(ar, false);
+
+ ar->num_started_vdevs = 0;
+ ar->num_created_vdevs = 0;
+@@ -5876,6 +5860,11 @@ static void ath11k_mac_op_stop(struct ieee80211_hw *hw)
+ cancel_work_sync(&ar->ab->update_11d_work);
+ cancel_work_sync(&ar->ab->rfkill_work);
+
++ if (ar->state_11d == ATH11K_11D_PREPARING) {
++ ar->state_11d = ATH11K_11D_IDLE;
++ complete(&ar->completed_11d_scan);
++ }
++
+ spin_lock_bh(&ar->data_lock);
+ list_for_each_entry_safe(ppdu_stats, tmp, &ar->ppdu_stats_info, list) {
+ list_del(&ppdu_stats->list);
+@@ -6046,7 +6035,7 @@ static bool ath11k_mac_vif_ap_active_any(struct ath11k_base *ab)
+ return false;
+ }
+
+-void ath11k_mac_11d_scan_start(struct ath11k *ar, u32 vdev_id, bool wait)
++void ath11k_mac_11d_scan_start(struct ath11k *ar, u32 vdev_id)
+ {
+ struct wmi_11d_scan_start_params param;
+ int ret;
+@@ -6074,28 +6063,22 @@ void ath11k_mac_11d_scan_start(struct ath11k *ar, u32 vdev_id, bool wait)
+
+ ath11k_dbg(ar->ab, ATH11K_DBG_MAC, "mac start 11d scan\n");
+
+- if (wait)
+- reinit_completion(&ar->finish_11d_scan);
+-
+ ret = ath11k_wmi_send_11d_scan_start_cmd(ar, ¶m);
+ if (ret) {
+ ath11k_warn(ar->ab, "failed to start 11d scan vdev %d ret: %d\n",
+ vdev_id, ret);
+ } else {
+ ar->vdev_id_11d_scan = vdev_id;
+- if (wait) {
+- ar->pending_11d = true;
+- ret = wait_for_completion_timeout(&ar->finish_11d_scan,
+- 5 * HZ);
+- ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
+- "mac 11d scan left time %d\n", ret);
+-
+- if (!ret)
+- ar->pending_11d = false;
+- }
++ if (ar->state_11d == ATH11K_11D_PREPARING)
++ ar->state_11d = ATH11K_11D_RUNNING;
+ }
+
+ fin:
++ if (ar->state_11d == ATH11K_11D_PREPARING) {
++ ar->state_11d = ATH11K_11D_IDLE;
++ complete(&ar->completed_11d_scan);
++ }
++
+ mutex_unlock(&ar->ab->vdev_id_11d_lock);
+ }
+
+@@ -6118,12 +6101,15 @@ void ath11k_mac_11d_scan_stop(struct ath11k *ar)
+ vdev_id = ar->vdev_id_11d_scan;
+
+ ret = ath11k_wmi_send_11d_scan_stop_cmd(ar, vdev_id);
+- if (ret)
++ if (ret) {
+ ath11k_warn(ar->ab,
+ "failed to stopt 11d scan vdev %d ret: %d\n",
+ vdev_id, ret);
+- else
++ } else {
+ ar->vdev_id_11d_scan = ATH11K_11D_INVALID_VDEV_ID;
++ ar->state_11d = ATH11K_11D_IDLE;
++ complete(&ar->completed_11d_scan);
++ }
+ }
+ mutex_unlock(&ar->ab->vdev_id_11d_lock);
+ }
+@@ -6319,8 +6305,10 @@ static int ath11k_mac_op_add_interface(struct ieee80211_hw *hw,
+ goto err_peer_del;
+ }
+
+- ath11k_mac_11d_scan_start(ar, arvif->vdev_id, true);
+-
++ if (test_bit(WMI_TLV_SERVICE_11D_OFFLOAD, ab->wmi_ab.svc_map)) {
++ reinit_completion(&ar->completed_11d_scan);
++ ar->state_11d = ATH11K_11D_PREPARING;
++ }
+ break;
+ case WMI_VDEV_TYPE_MONITOR:
+ set_bit(ATH11K_FLAG_MONITOR_VDEV_CREATED, &ar->monitor_flags);
+@@ -7144,7 +7132,7 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ }
+
+ if (arvif->vdev_type == WMI_VDEV_TYPE_STA)
+- ath11k_mac_11d_scan_start(ar, arvif->vdev_id, false);
++ ath11k_mac_11d_scan_start(ar, arvif->vdev_id);
+
+ mutex_unlock(&ar->conf_mutex);
+ }
+@@ -8625,8 +8613,7 @@ int ath11k_mac_allocate(struct ath11k_base *ab)
+ ar->monitor_vdev_id = -1;
+ clear_bit(ATH11K_FLAG_MONITOR_VDEV_CREATED, &ar->monitor_flags);
+ ar->vdev_id_11d_scan = ATH11K_11D_INVALID_VDEV_ID;
+- init_completion(&ar->finish_11d_scan);
+- init_completion(&ar->finish_11d_ch_list);
++ init_completion(&ar->completed_11d_scan);
+ }
+
+ return 0;
+diff --git a/drivers/net/wireless/ath/ath11k/mac.h b/drivers/net/wireless/ath/ath11k/mac.h
+index 0e6c870b09c88..29b523af66dd2 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.h
++++ b/drivers/net/wireless/ath/ath11k/mac.h
+@@ -130,7 +130,7 @@ extern const struct htt_rx_ring_tlv_filter ath11k_mac_mon_status_filter_default;
+ #define ATH11K_SCAN_11D_INTERVAL 600000
+ #define ATH11K_11D_INVALID_VDEV_ID 0xFFFF
+
+-void ath11k_mac_11d_scan_start(struct ath11k *ar, u32 vdev_id, bool wait);
++void ath11k_mac_11d_scan_start(struct ath11k *ar, u32 vdev_id);
+ void ath11k_mac_11d_scan_stop(struct ath11k *ar);
+ void ath11k_mac_11d_scan_stop_all(struct ath11k_base *ab);
+
+diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
+index d6575feca5a26..eca935f5a95e1 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.c
++++ b/drivers/net/wireless/ath/ath11k/reg.c
+@@ -93,7 +93,7 @@ ath11k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
+ ar->regdom_set_by_user = true;
+ }
+
+-int ath11k_reg_update_chan_list(struct ath11k *ar)
++int ath11k_reg_update_chan_list(struct ath11k *ar, bool wait)
+ {
+ struct ieee80211_supported_band **bands;
+ struct scan_chan_list_params *params;
+@@ -102,7 +102,32 @@ int ath11k_reg_update_chan_list(struct ath11k *ar)
+ struct channel_param *ch;
+ enum nl80211_band band;
+ int num_channels = 0;
+- int i, ret;
++ int i, ret, left;
++
++ if (wait && ar->state_11d != ATH11K_11D_IDLE) {
++ left = wait_for_completion_timeout(&ar->completed_11d_scan,
++ ATH11K_SCAN_TIMEOUT_HZ);
++ if (!left) {
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "failed to receive 11d scan complete: timed out\n");
++ ar->state_11d = ATH11K_11D_IDLE;
++ }
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "reg 11d scan wait left time %d\n", left);
++ }
++
++ if (wait &&
++ (ar->scan.state == ATH11K_SCAN_STARTING ||
++ ar->scan.state == ATH11K_SCAN_RUNNING)) {
++ left = wait_for_completion_timeout(&ar->scan.completed,
++ ATH11K_SCAN_TIMEOUT_HZ);
++ if (!left)
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "failed to receive hw scan complete: timed out\n");
++
++ ath11k_dbg(ar->ab, ATH11K_DBG_REG,
++ "reg hw scan wait left time %d\n", left);
++ }
+
+ bands = hw->wiphy->bands;
+ for (band = 0; band < NUM_NL80211_BANDS; band++) {
+@@ -184,11 +209,6 @@ int ath11k_reg_update_chan_list(struct ath11k *ar)
+ ret = ath11k_wmi_send_scan_chan_list_cmd(ar, params);
+ kfree(params);
+
+- if (ar->pending_11d) {
+- complete(&ar->finish_11d_ch_list);
+- ar->pending_11d = false;
+- }
+-
+ return ret;
+ }
+
+@@ -254,15 +274,8 @@ int ath11k_regd_update(struct ath11k *ar)
+ goto err;
+ }
+
+- if (ar->pending_11d)
+- complete(&ar->finish_11d_scan);
+-
+ rtnl_lock();
+ wiphy_lock(ar->hw->wiphy);
+-
+- if (ar->pending_11d)
+- reinit_completion(&ar->finish_11d_ch_list);
+-
+ ret = regulatory_set_wiphy_regd_sync(ar->hw->wiphy, regd_copy);
+ wiphy_unlock(ar->hw->wiphy);
+ rtnl_unlock();
+@@ -273,7 +286,7 @@ int ath11k_regd_update(struct ath11k *ar)
+ goto err;
+
+ if (ar->state == ATH11K_STATE_ON) {
+- ret = ath11k_reg_update_chan_list(ar);
++ ret = ath11k_reg_update_chan_list(ar, true);
+ if (ret)
+ goto err;
+ }
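
The ath11k rework collapses the two ad-hoc completions and the pending_11d
flag into one completed_11d_scan completion plus an explicit state enum, and
every waiter goes through wait_for_completion_timeout(), so a lost firmware
event can delay but no longer wedge the channel-list update. A minimal
kernel-style sketch of the wait side (hypothetical names, not the ath11k
code):

#include <linux/completion.h>
#include <linux/jiffies.h>

#define SCAN_TIMEOUT_HZ (20 * HZ)

enum scan_state { SCAN_IDLE, SCAN_PREPARING, SCAN_RUNNING };

struct radio {
    struct completion scan_done;    /* init_completion() at allocation */
    enum scan_state state;
};

static void radio_wait_for_scan(struct radio *r)
{
    unsigned long left;

    if (r->state == SCAN_IDLE)
        return;

    left = wait_for_completion_timeout(&r->scan_done, SCAN_TIMEOUT_HZ);
    if (!left)
        r->state = SCAN_IDLE;   /* give up rather than block forever */
}

/* The event path pairs with this via complete(&r->scan_done). */
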
+diff --git a/drivers/net/wireless/ath/ath11k/reg.h b/drivers/net/wireless/ath/ath11k/reg.h
+index 5fb9dc03a74e8..2f284f26378d1 100644
+--- a/drivers/net/wireless/ath/ath11k/reg.h
++++ b/drivers/net/wireless/ath/ath11k/reg.h
+@@ -32,5 +32,5 @@ struct ieee80211_regdomain *
+ ath11k_reg_build_regd(struct ath11k_base *ab,
+ struct cur_regulatory_info *reg_info, bool intersect);
+ int ath11k_regd_update(struct ath11k *ar);
+-int ath11k_reg_update_chan_list(struct ath11k *ar);
++int ath11k_reg_update_chan_list(struct ath11k *ar, bool wait);
+ #endif
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 6b68ccf65e390..22921673e956e 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -2013,7 +2013,10 @@ void ath11k_wmi_start_scan_init(struct ath11k *ar,
+ {
+ /* setup commonly used values */
+ arg->scan_req_id = 1;
+- arg->scan_priority = WMI_SCAN_PRIORITY_LOW;
++ if (ar->state_11d == ATH11K_11D_PREPARING)
++ arg->scan_priority = WMI_SCAN_PRIORITY_MEDIUM;
++ else
++ arg->scan_priority = WMI_SCAN_PRIORITY_LOW;
+ arg->dwell_time_active = 50;
+ arg->dwell_time_active_2g = 0;
+ arg->dwell_time_passive = 150;
+@@ -6177,8 +6180,10 @@ static void ath11k_wmi_op_ep_tx_credits(struct ath11k_base *ab)
+ static int ath11k_reg_11d_new_cc_event(struct ath11k_base *ab, struct sk_buff *skb)
+ {
+ const struct wmi_11d_new_cc_ev *ev;
++ struct ath11k *ar;
++ struct ath11k_pdev *pdev;
+ const void **tb;
+- int ret;
++ int ret, i;
+
+ tb = ath11k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+@@ -6204,6 +6209,13 @@ static int ath11k_reg_11d_new_cc_event(struct ath11k_base *ab, struct sk_buff *s
+
+ kfree(tb);
+
++ for (i = 0; i < ab->num_radios; i++) {
++ pdev = &ab->pdevs[i];
++ ar = pdev->ar;
++ ar->state_11d = ATH11K_11D_IDLE;
++ complete(&ar->completed_11d_scan);
++ }
++
+ queue_work(ab->workqueue, &ab->update_11d_work);
+
+ return 0;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 42f6f8bb83be9..901600ca6f0ec 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -362,7 +362,7 @@ void iwl_dbg_tlv_del_timers(struct iwl_trans *trans)
+ struct iwl_dbg_tlv_timer_node *node, *tmp;
+
+ list_for_each_entry_safe(node, tmp, timer_list, list) {
+- del_timer(&node->timer);
++ del_timer_sync(&node->timer);
+ list_del(&node->list);
+ kfree(node);
+ }
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index fc5725f6daee6..4a91d5cb75c3e 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -2336,11 +2336,13 @@ static void hw_scan_work(struct work_struct *work)
+ if (req->ie_len)
+ skb_put_data(probe, req->ie, req->ie_len);
+
++ rcu_read_lock();
+ if (!ieee80211_tx_prepare_skb(hwsim->hw,
+ hwsim->hw_scan_vif,
+ probe,
+ hwsim->tmp_chan->band,
+ NULL)) {
++ rcu_read_unlock();
+ kfree_skb(probe);
+ continue;
+ }
+@@ -2348,6 +2350,7 @@ static void hw_scan_work(struct work_struct *work)
+ local_bh_disable();
+ mac80211_hwsim_tx_frame(hwsim->hw, probe,
+ hwsim->tmp_chan);
++ rcu_read_unlock();
+ local_bh_enable();
+ }
+ }
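
The hwsim fix widens the RCU read-side critical section so that the pointers
used by both ieee80211_tx_prepare_skb() and the transmit call stay valid for
the whole sequence; note that the early-continue path has to unlock too. A
minimal kernel-style sketch of the shape (hypothetical type and helpers, not
the hwsim code):

#include <linux/rcupdate.h>
#include <linux/types.h>

struct foo;                         /* hypothetical payload type */
bool prepare(struct foo *obj);      /* hypothetical helpers */
void transmit(struct foo *obj);

static void tx_one(struct foo __rcu **slot)
{
    struct foo *obj;

    rcu_read_lock();
    obj = rcu_dereference(*slot);
    if (!prepare(obj)) {
        rcu_read_unlock();          /* unlock on the early-exit path too */
        return;
    }
    transmit(obj);                  /* obj stays valid: still in the section */
    rcu_read_unlock();
}
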
+diff --git a/drivers/platform/surface/aggregator/core.c b/drivers/platform/surface/aggregator/core.c
+index d384d36098c27..a62c5dfe42d64 100644
+--- a/drivers/platform/surface/aggregator/core.c
++++ b/drivers/platform/surface/aggregator/core.c
+@@ -817,7 +817,7 @@ err_cpkg:
+ err_bus:
+ return status;
+ }
+-module_init(ssam_core_init);
++subsys_initcall(ssam_core_init);
+
+ static void __exit ssam_core_exit(void)
+ {
+diff --git a/drivers/s390/net/ctcm_mpc.c b/drivers/s390/net/ctcm_mpc.c
+index 88abfb5e8045c..8ac213a551418 100644
+--- a/drivers/s390/net/ctcm_mpc.c
++++ b/drivers/s390/net/ctcm_mpc.c
+@@ -626,8 +626,6 @@ static void mpc_rcvd_sweep_resp(struct mpcg_info *mpcginfo)
+ ctcm_clear_busy_do(dev);
+ }
+
+- kfree(mpcginfo);
+-
+ return;
+
+ }
+@@ -1192,10 +1190,10 @@ static void ctcmpc_unpack_skb(struct channel *ch, struct sk_buff *pskb)
+ CTCM_FUNTAIL, dev->name);
+ priv->stats.rx_dropped++;
+ /* mpcginfo only used for non-data transfers */
+- kfree(mpcginfo);
+ if (do_debug_data)
+ ctcmpc_dump_skb(pskb, -8);
+ }
++ kfree(mpcginfo);
+ }
+ done:
+
+@@ -1977,7 +1975,6 @@ static void mpc_action_rcvd_xid0(fsm_instance *fsm, int event, void *arg)
+ }
+ break;
+ }
+- kfree(mpcginfo);
+
+ CTCM_PR_DEBUG("ctcmpc:%s() %s xid2:%i xid7:%i xidt_p2:%i \n",
+ __func__, ch->id, grp->outstanding_xid2,
+@@ -2038,7 +2035,6 @@ static void mpc_action_rcvd_xid7(fsm_instance *fsm, int event, void *arg)
+ mpc_validate_xid(mpcginfo);
+ break;
+ }
+- kfree(mpcginfo);
+ return;
+ }
+
+diff --git a/drivers/s390/net/ctcm_sysfs.c b/drivers/s390/net/ctcm_sysfs.c
+index ded1930a00b2d..e3813a7aa5e68 100644
+--- a/drivers/s390/net/ctcm_sysfs.c
++++ b/drivers/s390/net/ctcm_sysfs.c
+@@ -39,11 +39,12 @@ static ssize_t ctcm_buffer_write(struct device *dev,
+ struct ctcm_priv *priv = dev_get_drvdata(dev);
+ int rc;
+
+- ndev = priv->channel[CTCM_READ]->netdev;
+- if (!(priv && priv->channel[CTCM_READ] && ndev)) {
++ if (!(priv && priv->channel[CTCM_READ] &&
++ priv->channel[CTCM_READ]->netdev)) {
+ CTCM_DBF_TEXT(SETUP, CTC_DBF_ERROR, "bfnondev");
+ return -ENODEV;
+ }
++ ndev = priv->channel[CTCM_READ]->netdev;
+
+ rc = kstrtouint(buf, 0, &bs1);
+ if (rc)
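
The ctcm_sysfs fix is a dereference-before-check bug: the old code read
priv->channel[CTCM_READ]->netdev before validating priv and the channel
pointer, so the NULL test that followed could never save it. A standalone
sketch of the corrected order (hypothetical types):

#include <stdio.h>
#include <stddef.h>

struct netdev { char name[16]; };
struct channel { struct netdev *netdev; };
struct priv { struct channel *chan; };

static int buffer_write(struct priv *priv)
{
    struct netdev *ndev;

    /* Wrong order would be: ndev = priv->chan->netdev; then the check. */
    if (!(priv && priv->chan && priv->chan->netdev))
        return -1;              /* -ENODEV in the driver */
    ndev = priv->chan->netdev;

    printf("writing via %s\n", ndev->name);
    return 0;
}

int main(void)
{
    return buffer_write(NULL) == -1 ? 0 : 1;    /* survives a NULL priv */
}
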
+diff --git a/drivers/s390/net/lcs.c b/drivers/s390/net/lcs.c
+index a61d38a1b4ed1..66c2893badadf 100644
+--- a/drivers/s390/net/lcs.c
++++ b/drivers/s390/net/lcs.c
+@@ -1736,10 +1736,11 @@ lcs_get_control(struct lcs_card *card, struct lcs_cmd *cmd)
+ lcs_schedule_recovery(card);
+ break;
+ case LCS_CMD_STOPLAN:
+- pr_warn("Stoplan for %s initiated by LGW\n",
+- card->dev->name);
+- if (card->dev)
++ if (card->dev) {
++ pr_warn("Stoplan for %s initiated by LGW\n",
++ card->dev->name);
+ netif_carrier_off(card->dev);
++ }
+ break;
+ default:
+ LCS_DBF_TEXT(5, trace, "noLGWcmd");
+diff --git a/drivers/slimbus/qcom-ctrl.c b/drivers/slimbus/qcom-ctrl.c
+index f04b961b96cd4..ec58091fc948a 100644
+--- a/drivers/slimbus/qcom-ctrl.c
++++ b/drivers/slimbus/qcom-ctrl.c
+@@ -510,9 +510,9 @@ static int qcom_slim_probe(struct platform_device *pdev)
+ }
+
+ ctrl->irq = platform_get_irq(pdev, 0);
+- if (!ctrl->irq) {
++ if (ctrl->irq < 0) {
+ dev_err(&pdev->dev, "no slimbus IRQ\n");
+- return -ENODEV;
++ return ctrl->irq;
+ }
+
+ sctrl = &ctrl->ctrl;
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index a38b922bcbc10..fd8b86dde5255 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -137,6 +137,7 @@ struct gsm_dlci {
+ int retries;
+ /* Uplink tty if active */
+ struct tty_port port; /* The tty bound to this DLCI if there is one */
++#define TX_SIZE 4096 /* Must be power of 2. */
+ struct kfifo fifo; /* Queue fifo for the DLCI */
+ int adaption; /* Adaption layer in use */
+ int prev_adaption;
+@@ -1658,6 +1659,7 @@ static void gsm_dlci_data(struct gsm_dlci *dlci, const u8 *data, int clen)
+ if (len == 0)
+ return;
+ }
++ len--;
+ slen++;
+ tty = tty_port_tty_get(port);
+ if (tty) {
+@@ -1730,7 +1732,7 @@ static struct gsm_dlci *gsm_dlci_alloc(struct gsm_mux *gsm, int addr)
+ return NULL;
+ spin_lock_init(&dlci->lock);
+ mutex_init(&dlci->mutex);
+- if (kfifo_alloc(&dlci->fifo, 4096, GFP_KERNEL) < 0) {
++ if (kfifo_alloc(&dlci->fifo, TX_SIZE, GFP_KERNEL) < 0) {
+ kfree(dlci);
+ return NULL;
+ }
+@@ -2351,6 +2353,7 @@ static void gsm_copy_config_values(struct gsm_mux *gsm,
+
+ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+ {
++ int ret = 0;
+ int need_close = 0;
+ int need_restart = 0;
+
+@@ -2418,10 +2421,13 @@ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+ * FIXME: We need to separate activation/deactivation from adding
+ * and removing from the mux array
+ */
+- if (need_restart)
+- gsm_activate_mux(gsm);
+- if (gsm->initiator && need_close)
+- gsm_dlci_begin_open(gsm->dlci[0]);
++ if (gsm->dead) {
++ ret = gsm_activate_mux(gsm);
++ if (ret)
++ return ret;
++ if (gsm->initiator)
++ gsm_dlci_begin_open(gsm->dlci[0]);
++ }
+ return 0;
+ }
+
+@@ -2971,8 +2977,6 @@ static struct tty_ldisc_ops tty_ldisc_packet = {
+ * Virtual tty side
+ */
+
+-#define TX_SIZE 512
+-
+ /**
+ * gsm_modem_upd_via_data - send modem bits via convergence layer
+ * @dlci: channel
+@@ -3212,7 +3216,7 @@ static unsigned int gsmtty_write_room(struct tty_struct *tty)
+ struct gsm_dlci *dlci = tty->driver_data;
+ if (dlci->state == DLCI_CLOSED)
+ return 0;
+- return TX_SIZE - kfifo_len(&dlci->fifo);
++ return kfifo_avail(&dlci->fifo);
+ }
+
+ static unsigned int gsmtty_chars_in_buffer(struct tty_struct *tty)
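
The n_gsm change moves TX_SIZE next to the kfifo it sizes (the old definition
of 512 had drifted away from the 4096-byte allocation) and reports write room
with kfifo_avail(), which is derived from the fifo itself rather than from a
constant that can go stale. A standalone sketch of the "ask the buffer, not
the macro" idea (hypothetical fifo type):

#include <stdio.h>
#include <stddef.h>

struct fifo {
    char buf[4096];
    size_t len;                 /* bytes currently queued */
};

/* Derived from the object itself, so it cannot disagree with buf[] */
static size_t fifo_avail(const struct fifo *f)
{
    return sizeof(f->buf) - f->len;
}

int main(void)
{
    struct fifo f = { .len = 100 };

    /* A free-floating "#define TX_SIZE 512" would report 412 here */
    printf("room: %zu\n", fifo_avail(&f));      /* 3996 */
    return 0;
}
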
+diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
+index fb65dc601b237..de48a58460f47 100644
+--- a/drivers/tty/serial/8250/8250_mtk.c
++++ b/drivers/tty/serial/8250/8250_mtk.c
+@@ -37,6 +37,7 @@
+ #define MTK_UART_IER_RTSI 0x40 /* Enable RTS Modem status interrupt */
+ #define MTK_UART_IER_CTSI 0x80 /* Enable CTS Modem status interrupt */
+
++#define MTK_UART_EFR 38 /* I/O: Extended Features Register */
+ #define MTK_UART_EFR_EN 0x10 /* Enable enhancement feature */
+ #define MTK_UART_EFR_RTS 0x40 /* Enable hardware rx flow control */
+ #define MTK_UART_EFR_CTS 0x80 /* Enable hardware tx flow control */
+@@ -53,6 +54,9 @@
+ #define MTK_UART_TX_TRIGGER 1
+ #define MTK_UART_RX_TRIGGER MTK_UART_RX_SIZE
+
++#define MTK_UART_XON1 40 /* I/O: Xon character 1 */
++#define MTK_UART_XOFF1 42 /* I/O: Xoff character 1 */
++
+ #ifdef CONFIG_SERIAL_8250_DMA
+ enum dma_rx_status {
+ DMA_RX_START = 0,
+@@ -169,7 +173,7 @@ static void mtk8250_dma_enable(struct uart_8250_port *up)
+ MTK_UART_DMA_EN_RX | MTK_UART_DMA_EN_TX);
+
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+- serial_out(up, UART_EFR, UART_EFR_ECB);
++ serial_out(up, MTK_UART_EFR, UART_EFR_ECB);
+ serial_out(up, UART_LCR, lcr);
+
+ if (dmaengine_slave_config(dma->rxchan, &dma->rxconf) != 0)
+@@ -232,7 +236,7 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ int lcr = serial_in(up, UART_LCR);
+
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+- serial_out(up, UART_EFR, UART_EFR_ECB);
++ serial_out(up, MTK_UART_EFR, UART_EFR_ECB);
+ serial_out(up, UART_LCR, lcr);
+ lcr = serial_in(up, UART_LCR);
+
+@@ -241,7 +245,7 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ serial_out(up, MTK_UART_ESCAPE_DAT, MTK_UART_ESCAPE_CHAR);
+ serial_out(up, MTK_UART_ESCAPE_EN, 0x00);
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+- serial_out(up, UART_EFR, serial_in(up, UART_EFR) &
++ serial_out(up, MTK_UART_EFR, serial_in(up, MTK_UART_EFR) &
+ (~(MTK_UART_EFR_HW_FC | MTK_UART_EFR_SW_FC_MASK)));
+ serial_out(up, UART_LCR, lcr);
+ mtk8250_disable_intrs(up, MTK_UART_IER_XOFFI |
+@@ -255,8 +259,8 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+
+ /*enable hw flow control*/
+- serial_out(up, UART_EFR, MTK_UART_EFR_HW_FC |
+- (serial_in(up, UART_EFR) &
++ serial_out(up, MTK_UART_EFR, MTK_UART_EFR_HW_FC |
++ (serial_in(up, MTK_UART_EFR) &
+ (~(MTK_UART_EFR_HW_FC | MTK_UART_EFR_SW_FC_MASK))));
+
+ serial_out(up, UART_LCR, lcr);
+@@ -270,12 +274,12 @@ static void mtk8250_set_flow_ctrl(struct uart_8250_port *up, int mode)
+ serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+
+ /*enable sw flow control */
+- serial_out(up, UART_EFR, MTK_UART_EFR_XON1_XOFF1 |
+- (serial_in(up, UART_EFR) &
++ serial_out(up, MTK_UART_EFR, MTK_UART_EFR_XON1_XOFF1 |
++ (serial_in(up, MTK_UART_EFR) &
+ (~(MTK_UART_EFR_HW_FC | MTK_UART_EFR_SW_FC_MASK))));
+
+- serial_out(up, UART_XON1, START_CHAR(port->state->port.tty));
+- serial_out(up, UART_XOFF1, STOP_CHAR(port->state->port.tty));
++ serial_out(up, MTK_UART_XON1, START_CHAR(port->state->port.tty));
++ serial_out(up, MTK_UART_XOFF1, STOP_CHAR(port->state->port.tty));
+ serial_out(up, UART_LCR, lcr);
+ mtk8250_disable_intrs(up, MTK_UART_IER_CTSI|MTK_UART_IER_RTSI);
+ mtk8250_enable_intrs(up, MTK_UART_IER_XOFFI);
+diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c
+index 13ac36e2da4f0..c7f81aa1ce912 100644
+--- a/drivers/tty/serial/digicolor-usart.c
++++ b/drivers/tty/serial/digicolor-usart.c
+@@ -471,11 +471,10 @@ static int digicolor_uart_probe(struct platform_device *pdev)
+ if (IS_ERR(uart_clk))
+ return PTR_ERR(uart_clk);
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+- dp->port.mapbase = res->start;
+- dp->port.membase = devm_ioremap_resource(&pdev->dev, res);
++ dp->port.membase = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(dp->port.membase))
+ return PTR_ERR(dp->port.membase);
++ dp->port.mapbase = res->start;
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index ce3e261446898..d32c25bc973b4 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2658,6 +2658,7 @@ static int lpuart_probe(struct platform_device *pdev)
+ struct device_node *np = pdev->dev.of_node;
+ struct lpuart_port *sport;
+ struct resource *res;
++ irq_handler_t handler;
+ int ret;
+
+ sport = devm_kzalloc(&pdev->dev, sizeof(*sport), GFP_KERNEL);
+@@ -2735,17 +2736,11 @@ static int lpuart_probe(struct platform_device *pdev)
+
+ if (lpuart_is_32(sport)) {
+ lpuart_reg.cons = LPUART32_CONSOLE;
+- ret = devm_request_irq(&pdev->dev, sport->port.irq, lpuart32_int, 0,
+- DRIVER_NAME, sport);
++ handler = lpuart32_int;
+ } else {
+ lpuart_reg.cons = LPUART_CONSOLE;
+- ret = devm_request_irq(&pdev->dev, sport->port.irq, lpuart_int, 0,
+- DRIVER_NAME, sport);
++ handler = lpuart_int;
+ }
+-
+- if (ret)
+- goto failed_irq_request;
+-
+ ret = uart_add_one_port(&lpuart_reg, &sport->port);
+ if (ret)
+ goto failed_attach_port;
+@@ -2767,13 +2762,18 @@ static int lpuart_probe(struct platform_device *pdev)
+
+ sport->port.rs485_config(&sport->port, &sport->port.rs485);
+
++ ret = devm_request_irq(&pdev->dev, sport->port.irq, handler, 0,
++ DRIVER_NAME, sport);
++ if (ret)
++ goto failed_irq_request;
++
+ return 0;
+
++failed_irq_request:
+ failed_get_rs485:
+ failed_reset:
+ uart_remove_one_port(&lpuart_reg, &sport->port);
+ failed_attach_port:
+-failed_irq_request:
+ lpuart_disable_clks(sport);
+ failed_clock_enable:
+ failed_out_of_range:
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 7f2c83f299d32..eebe782380fb9 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -774,6 +774,7 @@ static int wdm_release(struct inode *inode, struct file *file)
+ poison_urbs(desc);
+ spin_lock_irq(&desc->iuspin);
+ desc->resp_count = 0;
++ clear_bit(WDM_RESPONDING, &desc->flags);
+ spin_unlock_irq(&desc->iuspin);
+ desc->manage_power(desc->intf, 0);
+ unpoison_urbs(desc);
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index 71bb5e477dbad..d37965867b230 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -890,13 +890,37 @@ static void uvc_function_unbind(struct usb_configuration *c,
+ {
+ struct usb_composite_dev *cdev = c->cdev;
+ struct uvc_device *uvc = to_uvc(f);
++ long wait_ret = 1;
+
+ uvcg_info(f, "%s()\n", __func__);
+
++ /* If we know we're connected via v4l2, then there should be a cleanup
++ * of the device from userspace either via UVC_EVENT_DISCONNECT or
++ * through the video device removal uevent. Allow some time for the
++ * application to close out before things get deleted.
++ */
++ if (uvc->func_connected) {
++ uvcg_dbg(f, "waiting for clean disconnect\n");
++ wait_ret = wait_event_interruptible_timeout(uvc->func_connected_queue,
++ uvc->func_connected == false, msecs_to_jiffies(500));
++ uvcg_dbg(f, "done waiting with ret: %ld\n", wait_ret);
++ }
++
+ device_remove_file(&uvc->vdev.dev, &dev_attr_function_name);
+ video_unregister_device(&uvc->vdev);
+ v4l2_device_unregister(&uvc->v4l2_dev);
+
++ if (uvc->func_connected) {
++ /* Wait for the release to occur to ensure there are no longer any
++ * pending operations that may cause panics when resources are cleaned
++ * up.
++ */
++ uvcg_warn(f, "%s no clean disconnect, wait for release\n", __func__);
++ wait_ret = wait_event_interruptible_timeout(uvc->func_connected_queue,
++ uvc->func_connected == false, msecs_to_jiffies(1000));
++ uvcg_dbg(f, "done waiting for release with ret: %ld\n", wait_ret);
++ }
++
+ usb_ep_free_request(cdev->gadget->ep0, uvc->control_req);
+ kfree(uvc->control_buf);
+
+@@ -915,6 +939,7 @@ static struct usb_function *uvc_alloc(struct usb_function_instance *fi)
+
+ mutex_init(&uvc->video.mutex);
+ uvc->state = UVC_STATE_DISCONNECTED;
++ init_waitqueue_head(&uvc->func_connected_queue);
+ opts = fi_to_f_uvc_opts(fi);
+
+ mutex_lock(&opts->lock);
+diff --git a/drivers/usb/gadget/function/uvc.h b/drivers/usb/gadget/function/uvc.h
+index c3607a32b9862..886103a1fe9b7 100644
+--- a/drivers/usb/gadget/function/uvc.h
++++ b/drivers/usb/gadget/function/uvc.h
+@@ -14,6 +14,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/usb/composite.h>
+ #include <linux/videodev2.h>
++#include <linux/wait.h>
+
+ #include <media/v4l2-device.h>
+ #include <media/v4l2-dev.h>
+@@ -129,6 +130,7 @@ struct uvc_device {
+ struct usb_function func;
+ struct uvc_video video;
+ bool func_connected;
++ wait_queue_head_t func_connected_queue;
+
+ /* Descriptors */
+ struct {
+diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
+index a2c78690c5c28..fd8f73bb726dd 100644
+--- a/drivers/usb/gadget/function/uvc_v4l2.c
++++ b/drivers/usb/gadget/function/uvc_v4l2.c
+@@ -253,10 +253,11 @@ uvc_v4l2_subscribe_event(struct v4l2_fh *fh,
+
+ static void uvc_v4l2_disable(struct uvc_device *uvc)
+ {
+- uvc->func_connected = false;
+ uvc_function_disconnect(uvc);
+ uvcg_video_enable(&uvc->video, 0);
+ uvcg_free_buffers(&uvc->video.queue);
++ uvc->func_connected = false;
++ wake_up_interruptible(&uvc->func_connected_queue);
+ }
+
+ static int
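
The uvc gadget change pairs a wait_queue_head_t with the existing
func_connected flag: unbind sleeps in wait_event_interruptible_timeout()
until the v4l2 side clears the flag and wakes it, giving userspace a bounded
window to close the device before resources are freed. A minimal
kernel-style sketch of the two halves (hypothetical names, not the uvc code):

#include <linux/wait.h>
#include <linux/jiffies.h>

struct gadget {
    bool connected;
    wait_queue_head_t connected_queue;  /* init_waitqueue_head() at alloc */
};

/* teardown side: wait up to 500 ms for a clean disconnect */
static void gadget_unbind(struct gadget *g)
{
    if (g->connected)
        wait_event_interruptible_timeout(g->connected_queue,
                                         !g->connected,
                                         msecs_to_jiffies(500));
}

/* disconnect side: flip the condition first, then wake the waiter */
static void gadget_disable(struct gadget *g)
{
    g->connected = false;
    wake_up_interruptible(&g->connected_queue);
}
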
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index edbfa82c65659..74ce52b34317e 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -465,7 +465,7 @@ static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
+ */
+ for (j = 0; j < sch_ep->num_budget_microframes; j++) {
+ k = XHCI_MTK_BW_INDEX(base + j);
+- tmp = tt->fs_bus_bw[k] + sch_ep->bw_budget_table[j];
++ tmp = tt->fs_bus_bw[k] + sch_ep->bw_cost_per_microframe;
+ if (tmp > FS_PAYLOAD_MAX)
+ return -ESCH_BW_OVERFLOW;
+ }
+@@ -539,19 +539,17 @@ static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
+ static void update_sch_tt(struct mu3h_sch_ep_info *sch_ep, bool used)
+ {
+ struct mu3h_sch_tt *tt = sch_ep->sch_tt;
++ int bw_updated;
+ u32 base;
+- int i, j, k;
++ int i, j;
++
++ bw_updated = sch_ep->bw_cost_per_microframe * (used ? 1 : -1);
+
+ for (i = 0; i < sch_ep->num_esit; i++) {
+ base = sch_ep->offset + i * sch_ep->esit;
+
+- for (j = 0; j < sch_ep->num_budget_microframes; j++) {
+- k = XHCI_MTK_BW_INDEX(base + j);
+- if (used)
+- tt->fs_bus_bw[k] += sch_ep->bw_budget_table[j];
+- else
+- tt->fs_bus_bw[k] -= sch_ep->bw_budget_table[j];
+- }
++ for (j = 0; j < sch_ep->num_budget_microframes; j++)
++ tt->fs_bus_bw[XHCI_MTK_BW_INDEX(base + j)] += bw_updated;
+ }
+
+ if (used)
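Besides switching to the flat per-microframe cost, the update_sch_tt() rewrite above folds the add/subtract branches into a single signed delta computed once. Reduced to plain C (illustrative names, not the xhci-mtk structures):

#include <stdbool.h>

/* Before: two nearly identical branches selected inside the loop. */
void update_bw_branchy(int *bus_bw, int n, int cost, bool used)
{
        for (int i = 0; i < n; i++) {
                if (used)
                        bus_bw[i] += cost;
                else
                        bus_bw[i] -= cost;
        }
}

/* After: compute the sign once, keep one loop body. */
void update_bw(int *bus_bw, int n, int cost, bool used)
{
        int delta = cost * (used ? 1 : -1);

        for (int i = 0; i < n; i++)
                bus_bw[i] += delta;
}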
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 1364ce7f0abf0..152ad882657d7 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -2123,10 +2123,14 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(3) },
+ { USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 (IOT version) */
+ .driver_info = RSVD(4) | RSVD(5) | RSVD(6) },
++ { USB_DEVICE(0x1782, 0x4d10) }, /* Fibocom L610 (AT mode) */
++ { USB_DEVICE_INTERFACE_CLASS(0x1782, 0x4d11, 0xff) }, /* Fibocom L610 (ECM/RNDIS mode) */
+ { USB_DEVICE(0x2cb7, 0x0104), /* Fibocom NL678 series */
+ .driver_info = RSVD(4) | RSVD(5) },
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff), /* Fibocom NL678 series */
+ .driver_info = RSVD(6) },
++ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0106, 0xff) }, /* Fibocom MA510 (ECM mode w/ diag intf.) */
++ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x010a, 0xff) }, /* Fibocom MA510 (ECM mode) */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) }, /* Fibocom FG150 Diag */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) }, /* Fibocom FG150 AT */
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) }, /* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 88b284d61681a..1d878d05a6584 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -106,6 +106,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LCM960_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LM920_PRODUCT_ID) },
++ { USB_DEVICE(HP_VENDOR_ID, HP_LM930_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_LM940_PRODUCT_ID) },
+ { USB_DEVICE(HP_VENDOR_ID, HP_TD620_PRODUCT_ID) },
+ { USB_DEVICE(CRESSI_VENDOR_ID, CRESSI_EDY_PRODUCT_ID) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index c5406452b774e..732f9b13ad5d5 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -135,6 +135,7 @@
+ #define HP_TD620_PRODUCT_ID 0x0956
+ #define HP_LD960_PRODUCT_ID 0x0b39
+ #define HP_LD381_PRODUCT_ID 0x0f7f
++#define HP_LM930_PRODUCT_ID 0x0f9b
+ #define HP_LCM220_PRODUCT_ID 0x3139
+ #define HP_LCM960_PRODUCT_ID 0x3239
+ #define HP_LD220_PRODUCT_ID 0x3524
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index c18bf8164bc2e..586ef5551e76e 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -166,6 +166,8 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x1199, 0x9090)}, /* Sierra Wireless EM7565 QDL */
+ {DEVICE_SWI(0x1199, 0x9091)}, /* Sierra Wireless EM7565 */
+ {DEVICE_SWI(0x1199, 0x90d2)}, /* Sierra Wireless EM9191 QDL */
++ {DEVICE_SWI(0x1199, 0xc080)}, /* Sierra Wireless EM7590 QDL */
++ {DEVICE_SWI(0x1199, 0xc081)}, /* Sierra Wireless EM7590 */
+ {DEVICE_SWI(0x413c, 0x81a2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */
+ {DEVICE_SWI(0x413c, 0x81a3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */
+ {DEVICE_SWI(0x413c, 0x81a4)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index e07d26a3cd8e1..f33e08eb76709 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -877,7 +877,7 @@ static int tcpci_remove(struct i2c_client *client)
+ /* Disable chip interrupts before unregistering port */
+ err = tcpci_write16(chip->tcpci, TCPC_ALERT_MASK, 0);
+ if (err < 0)
+- return err;
++ dev_warn(&client->dev, "Failed to disable irqs (%pe)\n", ERR_PTR(err));
+
+ tcpci_unregister_port(chip->tcpci);
+
+diff --git a/drivers/usb/typec/tcpm/tcpci_mt6360.c b/drivers/usb/typec/tcpm/tcpci_mt6360.c
+index f1bd9e09bc87f..8a952eaf90163 100644
+--- a/drivers/usb/typec/tcpm/tcpci_mt6360.c
++++ b/drivers/usb/typec/tcpm/tcpci_mt6360.c
+@@ -15,6 +15,9 @@
+
+ #include "tcpci.h"
+
++#define MT6360_REG_PHYCTRL1 0x80
++#define MT6360_REG_PHYCTRL3 0x82
++#define MT6360_REG_PHYCTRL7 0x86
+ #define MT6360_REG_VCONNCTRL1 0x8C
+ #define MT6360_REG_MODECTRL2 0x8F
+ #define MT6360_REG_SWRESET 0xA0
+@@ -22,6 +25,8 @@
+ #define MT6360_REG_DRPCTRL1 0xA2
+ #define MT6360_REG_DRPCTRL2 0xA3
+ #define MT6360_REG_I2CTORST 0xBF
++#define MT6360_REG_PHYCTRL11 0xCA
++#define MT6360_REG_RXCTRL1 0xCE
+ #define MT6360_REG_RXCTRL2 0xCF
+ #define MT6360_REG_CTDCTRL2 0xEC
+
+@@ -106,6 +111,27 @@ static int mt6360_tcpc_init(struct tcpci *tcpci, struct tcpci_data *tdata)
+ if (ret)
+ return ret;
+
++ /* BMC PHY */
++ ret = mt6360_tcpc_write16(regmap, MT6360_REG_PHYCTRL1, 0x3A70);
++ if (ret)
++ return ret;
++
++ ret = regmap_write(regmap, MT6360_REG_PHYCTRL3, 0x82);
++ if (ret)
++ return ret;
++
++ ret = regmap_write(regmap, MT6360_REG_PHYCTRL7, 0x36);
++ if (ret)
++ return ret;
++
++ ret = mt6360_tcpc_write16(regmap, MT6360_REG_PHYCTRL11, 0x3C60);
++ if (ret)
++ return ret;
++
++ ret = regmap_write(regmap, MT6360_REG_RXCTRL1, 0xE8);
++ if (ret)
++ return ret;
++
+ /* Set shipping mode off, AUTOIDLE on */
+ return regmap_write(regmap, MT6360_REG_MODECTRL2, 0x7A);
+ }
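The BMC PHY setup added above is a fixed register write sequence that bails out on the first failure. The same pattern is often written table-driven; a sketch with a stubbed-out write primitive (reg_write() is a stand-in for this example, not the mt6360/regmap API, and the real driver mixes 8- and 16-bit accessors that this flat table glosses over):

#include <stddef.h>
#include <stdint.h>

/* Stub: a real driver would perform the bus access here. */
static int reg_write(uint8_t reg, uint16_t val)
{
        (void)reg;
        (void)val;
        return 0;       /* 0 on success, negative errno on failure */
}

static const struct { uint8_t reg; uint16_t val; } phy_init_seq[] = {
        { 0x80, 0x3A70 },       /* PHYCTRL1 */
        { 0x82, 0x0082 },       /* PHYCTRL3 */
        { 0x86, 0x0036 },       /* PHYCTRL7 */
        { 0xCA, 0x3C60 },       /* PHYCTRL11 */
        { 0xCE, 0x00E8 },       /* RXCTRL1 */
};

int phy_init(void)
{
        for (size_t i = 0; i < sizeof(phy_init_seq) / sizeof(phy_init_seq[0]); i++) {
                int ret = reg_write(phy_init_seq[i].reg, phy_init_seq[i].val);

                if (ret)
                        return ret;     /* stop at the first failed write */
        }
        return 0;
}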
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index ea42ba6445b2d..b3d5f884c5445 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -243,6 +243,10 @@ error:
+ static inline void efifb_show_boot_graphics(struct fb_info *info) {}
+ #endif
+
++/*
++ * fb_ops.fb_destroy is called by the last put_fb_info() call at the end
++ * of unregister_framebuffer() or fb_release(). Do any cleanup here.
++ */
+ static void efifb_destroy(struct fb_info *info)
+ {
+ if (efifb_pci_dev)
+@@ -254,10 +258,13 @@ static void efifb_destroy(struct fb_info *info)
+ else
+ memunmap(info->screen_base);
+ }
++
+ if (request_mem_succeeded)
+ release_mem_region(info->apertures->ranges[0].base,
+ info->apertures->ranges[0].size);
+ fb_dealloc_cmap(&info->cmap);
++
++ framebuffer_release(info);
+ }
+
+ static const struct fb_ops efifb_ops = {
+@@ -620,9 +627,9 @@ static int efifb_remove(struct platform_device *pdev)
+ {
+ struct fb_info *info = platform_get_drvdata(pdev);
+
++ /* efifb_destroy takes care of info cleanup */
+ unregister_framebuffer(info);
+ sysfs_remove_groups(&pdev->dev.kobj, efifb_groups);
+- framebuffer_release(info);
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/simplefb.c b/drivers/video/fbdev/simplefb.c
+index 57541887188b1..efce6ef8532d2 100644
+--- a/drivers/video/fbdev/simplefb.c
++++ b/drivers/video/fbdev/simplefb.c
+@@ -70,12 +70,18 @@ struct simplefb_par;
+ static void simplefb_clocks_destroy(struct simplefb_par *par);
+ static void simplefb_regulators_destroy(struct simplefb_par *par);
+
++/*
++ * fb_ops.fb_destroy is called by the last put_fb_info() call at the end
++ * of unregister_framebuffer() or fb_release(). Do any cleanup here.
++ */
+ static void simplefb_destroy(struct fb_info *info)
+ {
+ simplefb_regulators_destroy(info->par);
+ simplefb_clocks_destroy(info->par);
+ if (info->screen_base)
+ iounmap(info->screen_base);
++
++ framebuffer_release(info);
+ }
+
+ static const struct fb_ops simplefb_ops = {
+@@ -520,8 +526,8 @@ static int simplefb_remove(struct platform_device *pdev)
+ {
+ struct fb_info *info = platform_get_drvdata(pdev);
+
++ /* simplefb_destroy takes care of info cleanup */
+ unregister_framebuffer(info);
+- framebuffer_release(info);
+
+ return 0;
+ }
+diff --git a/drivers/video/fbdev/vesafb.c b/drivers/video/fbdev/vesafb.c
+index df6de5a9dd4cd..e25e8de5ff672 100644
+--- a/drivers/video/fbdev/vesafb.c
++++ b/drivers/video/fbdev/vesafb.c
+@@ -179,6 +179,10 @@ static int vesafb_setcolreg(unsigned regno, unsigned red, unsigned green,
+ return err;
+ }
+
++/*
++ * fb_ops.fb_destroy is called by the last put_fb_info() call at the end
++ * of unregister_framebuffer() or fb_release(). Do any cleanup here.
++ */
+ static void vesafb_destroy(struct fb_info *info)
+ {
+ struct vesafb_par *par = info->par;
+@@ -188,6 +192,8 @@ static void vesafb_destroy(struct fb_info *info)
+ if (info->screen_base)
+ iounmap(info->screen_base);
+ release_mem_region(info->apertures->ranges[0].base, info->apertures->ranges[0].size);
++
++ framebuffer_release(info);
+ }
+
+ static struct fb_ops vesafb_ops = {
+@@ -484,10 +490,10 @@ static int vesafb_remove(struct platform_device *pdev)
+ {
+ struct fb_info *info = platform_get_drvdata(pdev);
+
++ /* vesafb_destroy takes care of info cleanup */
+ unregister_framebuffer(info);
+ if (((struct vesafb_par *)(info->par))->region)
+ release_region(0x3c0, 32);
+- framebuffer_release(info);
+
+ return 0;
+ }
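All three fbdev fixes in this patch (efifb, simplefb, vesafb) apply one pattern: the final framebuffer_release() moves out of .remove() and into .fb_destroy, so the object is freed only when the last reference is dropped, not while a concurrent open may still hold one. A plain-C sketch of that ownership model (a real driver relies on kref-style atomic refcounts; this single-threaded version is only illustrative):

#include <stdlib.h>

struct obj {
        int refcount;
        void (*destroy)(struct obj *obj);       /* runs on the last put */
};

static void obj_destroy(struct obj *obj)
{
        /* ...unmap, release regions, free colormaps... */
        free(obj);      /* the final free lives HERE, not in remove() */
}

static void obj_put(struct obj *obj)
{
        if (--obj->refcount == 0)
                obj->destroy(obj);
}

/* remove() only unregisters and drops its own reference; it must not
 * free directly, because another holder may still be using the object. */
static void obj_remove(struct obj *obj)
{
        obj_put(obj);
}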
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index bbed3224ad689..52268cd6df167 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -598,9 +598,15 @@ static int ceph_finish_async_create(struct inode *dir, struct dentry *dentry,
+ iinfo.change_attr = 1;
+ ceph_encode_timespec64(&iinfo.btime, &now);
+
+- iinfo.xattr_len = ARRAY_SIZE(xattr_buf);
+- iinfo.xattr_data = xattr_buf;
+- memset(iinfo.xattr_data, 0, iinfo.xattr_len);
++ if (req->r_pagelist) {
++ iinfo.xattr_len = req->r_pagelist->length;
++ iinfo.xattr_data = req->r_pagelist->mapped_tail;
++ } else {
++ /* fake it */
++ iinfo.xattr_len = ARRAY_SIZE(xattr_buf);
++ iinfo.xattr_data = xattr_buf;
++ memset(iinfo.xattr_data, 0, iinfo.xattr_len);
++ }
+
+ in.ino = cpu_to_le64(vino.ino);
+ in.snapid = cpu_to_le64(CEPH_NOSNAP);
+@@ -712,6 +718,10 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ err = ceph_security_init_secctx(dentry, mode, &as_ctx);
+ if (err < 0)
+ goto out_ctx;
++ /* Async create can't handle more than a page of xattrs */
++ if (as_ctx.pagelist &&
++ !list_is_singular(&as_ctx.pagelist->head))
++ try_async = false;
+ } else if (!d_in_lookup(dentry)) {
+ /* If it's not being looked up, it's negative */
+ return -ENOENT;
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index f8d7fe6db989e..ab5fa50664f66 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -1749,6 +1749,10 @@ static int writeback_single_inode(struct inode *inode,
+ */
+ if (!(inode->i_state & I_DIRTY_ALL))
+ inode_cgwb_move_to_attached(inode, wb);
++ else if (!(inode->i_state & I_SYNC_QUEUED) &&
++ (inode->i_state & I_DIRTY))
++ redirty_tail_locked(inode, wb);
++
+ spin_unlock(&wb->list_lock);
+ inode_sync_complete(inode);
+ out:
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index fbdb7a30470a3..f785af2aa23cf 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -1154,13 +1154,12 @@ static int gfs2_iomap_end(struct inode *inode, loff_t pos, loff_t length,
+
+ if (length != written && (iomap->flags & IOMAP_F_NEW)) {
+ /* Deallocate blocks that were just allocated. */
+- loff_t blockmask = i_blocksize(inode) - 1;
+- loff_t end = (pos + length) & ~blockmask;
++ loff_t hstart = round_up(pos + written, i_blocksize(inode));
++ loff_t hend = iomap->offset + iomap->length;
+
+- pos = (pos + written + blockmask) & ~blockmask;
+- if (pos < end) {
+- truncate_pagecache_range(inode, pos, end - 1);
+- punch_hole(ip, pos, end - pos);
++ if (hstart < hend) {
++ truncate_pagecache_range(inode, hstart, hend - 1);
++ punch_hole(ip, hstart, hend - hstart);
+ }
+ }
+
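The gfs2 fix recomputes the range to deallocate: round the written end up to a block boundary, and extend to the end of the iomap extent. The arithmetic in isolation, using the kernel's power-of-two round_up() definition (the values are made up for the example):

#include <stdio.h>

/* Matches the kernel's round_up(); y must be a power of two. */
#define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
        long long pos = 4096, written = 100, blocksize = 4096;
        long long iomap_offset = 4096, iomap_length = 16384;

        long long hstart = round_up(pos + written, blocksize);  /* 8192 */
        long long hend = iomap_offset + iomap_length;           /* 20480 */

        if (hstart < hend)
                printf("punch hole at %lld, length %lld\n",
                       hstart, hend - hstart);
        return 0;
}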
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 87df379120551..a0680046ff3c7 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -6572,7 +6572,12 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+
+ static int io_req_prep_async(struct io_kiocb *req)
+ {
+- if (!io_op_defs[req->opcode].needs_async_setup)
++ const struct io_op_def *def = &io_op_defs[req->opcode];
++
++ /* assign early for deferred execution for non-fixed file */
++ if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE))
++ req->file = io_file_get_normal(req, req->fd);
++ if (!def->needs_async_setup)
+ return 0;
+ if (WARN_ON_ONCE(req_has_async_data(req)))
+ return -EFAULT;
+diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
+index ea17fa1f31ecb..d208911621451 100644
+--- a/fs/nfs/fs_context.c
++++ b/fs/nfs/fs_context.c
+@@ -515,7 +515,7 @@ static int nfs_fs_context_parse_param(struct fs_context *fc,
+ if (result.negated)
+ ctx->flags &= ~NFS_MOUNT_SOFTREVAL;
+ else
+- ctx->flags &= NFS_MOUNT_SOFTREVAL;
++ ctx->flags |= NFS_MOUNT_SOFTREVAL;
+ break;
+ case Opt_posix:
+ if (result.negated)
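The one-character NFS fix above is worth spelling out: flags &= FLAG does not set a flag, it clears every bit except FLAG. A short demonstration:

#include <stdio.h>

#define FLAG_A 0x1u
#define FLAG_B 0x2u

int main(void)
{
        unsigned int flags = FLAG_A;

        flags &= FLAG_B;        /* bug: result is 0 - FLAG_A lost, FLAG_B never set */
        printf("after &=: %#x\n", flags);

        flags = FLAG_A;
        flags |= FLAG_B;        /* correct: both flags set, result 0x3 */
        printf("after |=: %#x\n", flags);
        return 0;
}

So before this fix, asking for softreval silently cleared every other NFS mount flag instead of enabling the option.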
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 2ff6bd85ba8f6..f2a1947ec5ee0 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -1638,6 +1638,19 @@ static int do_fanotify_mark(int fanotify_fd, unsigned int flags, __u64 mask,
+ else
+ mnt = path.mnt;
+
++ /*
++ * FAN_RENAME is not allowed on non-dir (for now).
++ * We shouldn't have allowed setting any dirent events in mask of
++ * non-dir, but because we always allowed it, error only if group
++ * was initialized with the new flag FAN_REPORT_TARGET_FID.
++ */
++ ret = -ENOTDIR;
++ if (inode && !S_ISDIR(inode->i_mode) &&
++ ((mask & FAN_RENAME) ||
++ ((mask & FANOTIFY_DIRENT_EVENTS) &&
++ FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID))))
++ goto path_put_and_out;
++
+ /* Mask out FAN_EVENT_ON_CHILD flag for sb/mount/non-dir marks */
+ if (mnt || !S_ISDIR(inode->i_mode)) {
+ mask &= ~FAN_EVENT_ON_CHILD;
+diff --git a/fs/proc/fd.c b/fs/proc/fd.c
+index 172c86270b312..913bef0d2a36c 100644
+--- a/fs/proc/fd.c
++++ b/fs/proc/fd.c
+@@ -72,7 +72,7 @@ out:
+ return 0;
+ }
+
+-static int seq_fdinfo_open(struct inode *inode, struct file *file)
++static int proc_fdinfo_access_allowed(struct inode *inode)
+ {
+ bool allowed = false;
+ struct task_struct *task = get_proc_task(inode);
+@@ -86,6 +86,16 @@ static int seq_fdinfo_open(struct inode *inode, struct file *file)
+ if (!allowed)
+ return -EACCES;
+
++ return 0;
++}
++
++static int seq_fdinfo_open(struct inode *inode, struct file *file)
++{
++ int ret = proc_fdinfo_access_allowed(inode);
++
++ if (ret)
++ return ret;
++
+ return single_open(file, seq_show, inode);
+ }
+
+@@ -348,12 +358,23 @@ static int proc_readfdinfo(struct file *file, struct dir_context *ctx)
+ proc_fdinfo_instantiate);
+ }
+
++static int proc_open_fdinfo(struct inode *inode, struct file *file)
++{
++ int ret = proc_fdinfo_access_allowed(inode);
++
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
+ const struct inode_operations proc_fdinfo_inode_operations = {
+ .lookup = proc_lookupfdinfo,
+ .setattr = proc_setattr,
+ };
+
+ const struct file_operations proc_fdinfo_operations = {
++ .open = proc_open_fdinfo,
+ .read = generic_read_dir,
+ .iterate_shared = proc_readfdinfo,
+ .llseek = generic_file_llseek,
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index 117d7f248ac96..2ca54c084d5ad 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -272,6 +272,7 @@ struct folio_iter {
+ size_t offset;
+ size_t length;
+ /* private: for use by the iterator */
++ struct folio *_next;
+ size_t _seg_count;
+ int _i;
+ };
+@@ -286,6 +287,7 @@ static inline void bio_first_folio(struct folio_iter *fi, struct bio *bio,
+ PAGE_SIZE * (bvec->bv_page - &fi->folio->page);
+ fi->_seg_count = bvec->bv_len;
+ fi->length = min(folio_size(fi->folio) - fi->offset, fi->_seg_count);
++ fi->_next = folio_next(fi->folio);
+ fi->_i = i;
+ }
+
+@@ -293,9 +295,10 @@ static inline void bio_next_folio(struct folio_iter *fi, struct bio *bio)
+ {
+ fi->_seg_count -= fi->length;
+ if (fi->_seg_count) {
+- fi->folio = folio_next(fi->folio);
++ fi->folio = fi->_next;
+ fi->offset = 0;
+ fi->length = min(folio_size(fi->folio), fi->_seg_count);
++ fi->_next = folio_next(fi->folio);
+ } else if (fi->_i + 1 < bio->bi_vcnt) {
+ bio_first_folio(fi, bio, fi->_i + 1);
+ } else {
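The folio iterator fix caches _next while the current folio is still guaranteed valid, because the current one may already be freed by the time the next iteration starts. The same discipline underlies safe list traversal; a minimal singly-linked-list sketch:

#include <stdlib.h>

struct node {
        struct node *next;
};

/* Free every node: fetch ->next BEFORE free() invalidates the node. */
void free_list(struct node *head)
{
        struct node *n = head;

        while (n) {
                struct node *next = n->next;    /* cache while n is valid */

                free(n);
                n = next;                       /* never touch freed memory */
        }
}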
+diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
+index 2c6b9e4162254..7c2d77d75a888 100644
+--- a/include/linux/netdev_features.h
++++ b/include/linux/netdev_features.h
+@@ -169,7 +169,7 @@ enum {
+ #define NETIF_F_HW_HSR_FWD __NETIF_F(HW_HSR_FWD)
+ #define NETIF_F_HW_HSR_DUP __NETIF_F(HW_HSR_DUP)
+
+-/* Finds the next feature with the highest number of the range of start till 0.
++/* Finds the next feature with the highest number of the range of start-1 till 0.
+ */
+ static inline int find_next_netdev_feature(u64 feature, unsigned long start)
+ {
+@@ -188,7 +188,7 @@ static inline int find_next_netdev_feature(u64 feature, unsigned long start)
+ for ((bit) = find_next_netdev_feature((mask_addr), \
+ NETDEV_FEATURE_COUNT); \
+ (bit) >= 0; \
+- (bit) = find_next_netdev_feature((mask_addr), (bit) - 1))
++ (bit) = find_next_netdev_feature((mask_addr), (bit)))
+
+ /* Features valid for ethtool to change */
+ /* = all defined minus driver/device-class-related */
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index 267b7aeaf1a69..90501404fa49f 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -160,6 +160,7 @@ struct rpc_add_xprt_test {
+ #define RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT (1UL << 9)
+ #define RPC_CLNT_CREATE_SOFTERR (1UL << 10)
+ #define RPC_CLNT_CREATE_REUSEPORT (1UL << 11)
++#define RPC_CLNT_CREATE_CONNECTED (1UL << 12)
+
+ struct rpc_clnt *rpc_create(struct rpc_create_args *args);
+ struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *,
+diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
+index f72ec113ae568..98e1ec1a14f03 100644
+--- a/include/net/inet_hashtables.h
++++ b/include/net/inet_hashtables.h
+@@ -425,7 +425,7 @@ static inline void sk_rcv_saddr_set(struct sock *sk, __be32 addr)
+ }
+
+ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+- struct sock *sk, u32 port_offset,
++ struct sock *sk, u64 port_offset,
+ int (*check_established)(struct inet_timewait_death_row *,
+ struct sock *, __u16,
+ struct inet_timewait_sock **));
+diff --git a/include/net/secure_seq.h b/include/net/secure_seq.h
+index d7d2495f83c27..dac91aa38c5af 100644
+--- a/include/net/secure_seq.h
++++ b/include/net/secure_seq.h
+@@ -4,8 +4,8 @@
+
+ #include <linux/types.h>
+
+-u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
+-u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
++u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
++u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
+ __be16 dport);
+ u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
+ __be16 sport, __be16 dport);
+diff --git a/include/net/tc_act/tc_pedit.h b/include/net/tc_act/tc_pedit.h
+index 748cf87a4d7ea..3e02709a1df65 100644
+--- a/include/net/tc_act/tc_pedit.h
++++ b/include/net/tc_act/tc_pedit.h
+@@ -14,6 +14,7 @@ struct tcf_pedit {
+ struct tc_action common;
+ unsigned char tcfp_nkeys;
+ unsigned char tcfp_flags;
++ u32 tcfp_off_max_hint;
+ struct tc_pedit_key *tcfp_keys;
+ struct tcf_pedit_key_ex *tcfp_keys_ex;
+ };
+diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
+index 80d76b75bccd9..7aa2eb7662050 100644
+--- a/include/uapi/linux/virtio_ids.h
++++ b/include/uapi/linux/virtio_ids.h
+@@ -73,12 +73,12 @@
+ * Virtio Transitional IDs
+ */
+
+-#define VIRTIO_TRANS_ID_NET 1000 /* transitional virtio net */
+-#define VIRTIO_TRANS_ID_BLOCK 1001 /* transitional virtio block */
+-#define VIRTIO_TRANS_ID_BALLOON 1002 /* transitional virtio balloon */
+-#define VIRTIO_TRANS_ID_CONSOLE 1003 /* transitional virtio console */
+-#define VIRTIO_TRANS_ID_SCSI 1004 /* transitional virtio SCSI */
+-#define VIRTIO_TRANS_ID_RNG 1005 /* transitional virtio rng */
+-#define VIRTIO_TRANS_ID_9P 1009 /* transitional virtio 9p console */
++#define VIRTIO_TRANS_ID_NET 0x1000 /* transitional virtio net */
++#define VIRTIO_TRANS_ID_BLOCK 0x1001 /* transitional virtio block */
++#define VIRTIO_TRANS_ID_BALLOON 0x1002 /* transitional virtio balloon */
++#define VIRTIO_TRANS_ID_CONSOLE 0x1003 /* transitional virtio console */
++#define VIRTIO_TRANS_ID_SCSI 0x1004 /* transitional virtio SCSI */
++#define VIRTIO_TRANS_ID_RNG 0x1005 /* transitional virtio rng */
++#define VIRTIO_TRANS_ID_9P 0x1009 /* transitional virtio 9p console */
+
+ #endif /* _LINUX_VIRTIO_IDS_H */
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 5de18448016cd..f9dd3aaa8486d 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -3390,8 +3390,11 @@ static struct notifier_block cpuset_track_online_nodes_nb = {
+ */
+ void __init cpuset_init_smp(void)
+ {
+- cpumask_copy(top_cpuset.cpus_allowed, cpu_active_mask);
+- top_cpuset.mems_allowed = node_states[N_MEMORY];
++ /*
++ * cpus_allowed/mems_allowed set to v2 values in the initial
++ * cpuset_bind() call will be reset to v1 values in another
++ * cpuset_bind() call when v1 cpuset is mounted.
++ */
+ top_cpuset.old_mems_allowed = top_cpuset.mems_allowed;
+
+ cpumask_copy(top_cpuset.effective_cpus, cpu_active_mask);
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index a4426a00b9edf..db3e91ba58cd4 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -678,7 +678,6 @@ EXPORT_SYMBOL_GPL(generic_handle_irq);
+ */
+ int generic_handle_domain_irq(struct irq_domain *domain, unsigned int hwirq)
+ {
+- WARN_ON_ONCE(!in_irq());
+ return handle_irq_desc(irq_resolve_mapping(domain, hwirq));
+ }
+ EXPORT_SYMBOL_GPL(generic_handle_domain_irq);
+diff --git a/lib/dim/net_dim.c b/lib/dim/net_dim.c
+index 06811d866775c..53f6b9c6e9366 100644
+--- a/lib/dim/net_dim.c
++++ b/lib/dim/net_dim.c
+@@ -12,41 +12,41 @@
+ * Each profile size must be of NET_DIM_PARAMS_NUM_PROFILES
+ */
+ #define NET_DIM_PARAMS_NUM_PROFILES 5
+-#define NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE 256
+-#define NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE 128
++#define NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE 256
++#define NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE 128
+ #define NET_DIM_DEF_PROFILE_CQE 1
+ #define NET_DIM_DEF_PROFILE_EQE 1
+
+ #define NET_DIM_RX_EQE_PROFILES { \
+- {1, NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {8, NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {64, NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {128, NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {256, NET_DIM_DEFAULT_RX_CQ_MODERATION_PKTS_FROM_EQE}, \
++ {.usec = 1, .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 8, .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 64, .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 128, .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 256, .pkts = NET_DIM_DEFAULT_RX_CQ_PKTS_FROM_EQE,} \
+ }
+
+ #define NET_DIM_RX_CQE_PROFILES { \
+- {2, 256}, \
+- {8, 128}, \
+- {16, 64}, \
+- {32, 64}, \
+- {64, 64} \
++ {.usec = 2, .pkts = 256,}, \
++ {.usec = 8, .pkts = 128,}, \
++ {.usec = 16, .pkts = 64,}, \
++ {.usec = 32, .pkts = 64,}, \
++ {.usec = 64, .pkts = 64,} \
+ }
+
+ #define NET_DIM_TX_EQE_PROFILES { \
+- {1, NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {8, NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {32, NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {64, NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE}, \
+- {128, NET_DIM_DEFAULT_TX_CQ_MODERATION_PKTS_FROM_EQE} \
++ {.usec = 1, .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 8, .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 32, .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 64, .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,}, \
++ {.usec = 128, .pkts = NET_DIM_DEFAULT_TX_CQ_PKTS_FROM_EQE,} \
+ }
+
+ #define NET_DIM_TX_CQE_PROFILES { \
+- {5, 128}, \
+- {8, 64}, \
+- {16, 32}, \
+- {32, 32}, \
+- {64, 32} \
++ {.usec = 5, .pkts = 128,}, \
++ {.usec = 8, .pkts = 64,}, \
++ {.usec = 16, .pkts = 32,}, \
++ {.usec = 32, .pkts = 32,}, \
++ {.usec = 64, .pkts = 32,} \
+ }
+
+ static const struct dim_cq_moder
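The net_dim rewrite changes no values; it moves the profile tables to designated initializers, which stay correct even if struct dim_cq_moder later gains or reorders fields. In plain C:

struct cq_moder {
        int usec;
        int flags;      /* imagine this field was inserted later */
        int pkts;
};

/* Positional: after the insertion, 256 silently lands in .flags. */
static const struct cq_moder fragile[] = {
        { 1, 256 },
};

/* Designated: each value stays tied to the field it names. */
static const struct cq_moder robust[] = {
        { .usec = 1, .pkts = 256 },
};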
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 406a3c28c0266..fb91636917057 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2609,11 +2609,16 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
+ struct address_space *mapping = NULL;
+ int extra_pins, ret;
+ pgoff_t end;
++ bool is_hzp;
+
+- VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
+ VM_BUG_ON_PAGE(!PageLocked(head), head);
+ VM_BUG_ON_PAGE(!PageCompound(head), head);
+
++ is_hzp = is_huge_zero_page(head);
++ VM_WARN_ON_ONCE_PAGE(is_hzp, head);
++ if (is_hzp)
++ return -EBUSY;
++
+ if (PageWriteback(head))
+ return -EBUSY;
+
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index af82c6f7d7239..a527f2769ea82 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -515,6 +515,7 @@ static bool __init kfence_init_pool(void)
+ {
+ unsigned long addr = (unsigned long)__kfence_pool;
+ struct page *pages;
++ char *p;
+ int i;
+
+ if (!__kfence_pool)
+@@ -598,6 +599,16 @@ err:
+ * fails for the first page, and therefore expect addr==__kfence_pool in
+ * most failure cases.
+ */
++ for (p = (char *)addr; p < __kfence_pool + KFENCE_POOL_SIZE; p += PAGE_SIZE) {
++ struct slab *slab = virt_to_slab(p);
++
++ if (!slab)
++ continue;
++#ifdef CONFIG_MEMCG
++ slab->memcg_data = 0;
++#endif
++ __folio_clear_slab(slab_folio(slab));
++ }
+ memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
+ __kfence_pool = NULL;
+ return false;
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 682eedb5ea75b..7889d2612384d 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1275,7 +1275,7 @@ try_again:
+ }
+ out:
+ if (ret == -EIO)
+- dump_page(p, "hwpoison: unhandlable page");
++ pr_err("Memory failure: %#lx: unhandlable page.\n", page_to_pfn(p));
+
+ return ret;
+ }
+@@ -1781,19 +1781,6 @@ try_again:
+ }
+
+ if (PageTransHuge(hpage)) {
+- /*
+- * Bail out before SetPageHasHWPoisoned() if hpage is
+- * huge_zero_page, although PG_has_hwpoisoned is not
+- * checked in set_huge_zero_page().
+- *
+- * TODO: Handle memory failure of huge_zero_page thoroughly.
+- */
+- if (is_huge_zero_page(hpage)) {
+- action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
+- res = -EBUSY;
+- goto unlock_mutex;
+- }
+-
+ /*
+ * The flag must be set after the refcount is bumped
+ * otherwise it may race with THP split.
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 0e175aef536e1..bbcf48fd56a0d 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -947,7 +947,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
+ return -EINTR;
+ vma = find_vma(mm, addr);
+ if (!vma || vma->vm_start > addr) {
+- ret = EFAULT;
++ ret = -EFAULT;
+ goto out;
+ }
+
+diff --git a/net/batman-adv/fragmentation.c b/net/batman-adv/fragmentation.c
+index 0899a729a23f4..c120c7c6d25fc 100644
+--- a/net/batman-adv/fragmentation.c
++++ b/net/batman-adv/fragmentation.c
+@@ -475,6 +475,17 @@ int batadv_frag_send_packet(struct sk_buff *skb,
+ goto free_skb;
+ }
+
++ /* GRO might have added fragments to the fragment list instead of
++ * frags[]. But this is not handled by skb_split and must be
++ * linearized to avoid incorrect length information after all
++ * batman-adv fragments were created and submitted to the
++ * hard-interface
++ */
++ if (skb_has_frag_list(skb) && __skb_linearize(skb)) {
++ ret = -ENOMEM;
++ goto free_skb;
++ }
++
+ /* Create one header to be copied to all fragments */
+ frag_header.packet_type = BATADV_UNICAST_FRAG;
+ frag_header.version = BATADV_COMPAT_VERSION;
+diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
+index 9b8443774449f..5f85e01d4093b 100644
+--- a/net/core/secure_seq.c
++++ b/net/core/secure_seq.c
+@@ -22,6 +22,8 @@
+ static siphash_aligned_key_t net_secret;
+ static siphash_aligned_key_t ts_secret;
+
++#define EPHEMERAL_PORT_SHUFFLE_PERIOD (10 * HZ)
++
+ static __always_inline void net_secret_init(void)
+ {
+ net_get_random_once(&net_secret, sizeof(net_secret));
+@@ -94,17 +96,19 @@ u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr,
+ }
+ EXPORT_SYMBOL(secure_tcpv6_seq);
+
+-u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
++u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr,
+ __be16 dport)
+ {
+ const struct {
+ struct in6_addr saddr;
+ struct in6_addr daddr;
++ unsigned int timeseed;
+ __be16 dport;
+ } __aligned(SIPHASH_ALIGNMENT) combined = {
+ .saddr = *(struct in6_addr *)saddr,
+ .daddr = *(struct in6_addr *)daddr,
+- .dport = dport
++ .timeseed = jiffies / EPHEMERAL_PORT_SHUFFLE_PERIOD,
++ .dport = dport,
+ };
+ net_secret_init();
+ return siphash(&combined, offsetofend(typeof(combined), dport),
+@@ -142,11 +146,13 @@ u32 secure_tcp_seq(__be32 saddr, __be32 daddr,
+ }
+ EXPORT_SYMBOL_GPL(secure_tcp_seq);
+
+-u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
++u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
+ {
+ net_secret_init();
+- return siphash_3u32((__force u32)saddr, (__force u32)daddr,
+- (__force u16)dport, &net_secret);
++ return siphash_4u32((__force u32)saddr, (__force u32)daddr,
++ (__force u16)dport,
++ jiffies / EPHEMERAL_PORT_SHUFFLE_PERIOD,
++ &net_secret);
+ }
+ EXPORT_SYMBOL_GPL(secure_ipv4_port_ephemeral);
+ #endif
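The secure_seq change widens the ephemeral-port hash to 64 bits and folds in a coarse time seed, so the port search order for a given destination reshuffles roughly every 10 seconds. A sketch of that derivation with a stand-in mixer (the kernel uses keyed SipHash; mix64() below is only a placeholder, not a secure substitute):

#include <stdint.h>
#include <time.h>

#define SHUFFLE_PERIOD_SEC 10

/* Placeholder 64-bit mixer (splitmix64-style finalizer). */
static uint64_t mix64(uint64_t x)
{
        x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
        x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
        x ^= x >> 33;
        return x;
}

uint64_t port_offset(uint32_t saddr, uint32_t daddr, uint16_t dport)
{
        /* Coarse seed: constant within a period, changes across periods. */
        uint64_t timeseed = (uint64_t)time(NULL) / SHUFFLE_PERIOD_SEC;

        /* Same inputs as secure_ipv4_port_ephemeral(): the address pair,
         * the destination port and the time seed. */
        return mix64(((uint64_t)saddr << 32 | daddr) ^
                     mix64(timeseed << 16 | dport));
}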
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index 4368fd32c4a50..f4bd063f83151 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -367,6 +367,7 @@ out_rollback_unoffload:
+ switchdev_bridge_port_unoffload(brport_dev, dp,
+ &dsa_slave_switchdev_notifier,
+ &dsa_slave_switchdev_blocking_notifier);
++ dsa_flush_workqueue();
+ out_rollback_unbridge:
+ dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info);
+ out_rollback:
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 17440840a7914..a5d57fa679caa 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -504,7 +504,7 @@ not_unique:
+ return -EADDRNOTAVAIL;
+ }
+
+-static u32 inet_sk_port_offset(const struct sock *sk)
++static u64 inet_sk_port_offset(const struct sock *sk)
+ {
+ const struct inet_sock *inet = inet_sk(sk);
+
+@@ -726,15 +726,17 @@ EXPORT_SYMBOL_GPL(inet_unhash);
+ * Note that we use 32bit integers (vs RFC 'short integers')
+ * because 2^16 is not a multiple of num_ephemeral and this
+ * property might be used by clever attacker.
+- * RFC claims using TABLE_LENGTH=10 buckets gives an improvement,
+- * we use 256 instead to really give more isolation and
+- * privacy, this only consumes 1 KB of kernel memory.
++ * RFC claims using TABLE_LENGTH=10 buckets gives an improvement, though
++ * attacks were since demonstrated, thus we use 65536 instead to really
++ * give more isolation and privacy, at the expense of 256kB of kernel
++ * memory.
+ */
+-#define INET_TABLE_PERTURB_SHIFT 8
+-static u32 table_perturb[1 << INET_TABLE_PERTURB_SHIFT];
++#define INET_TABLE_PERTURB_SHIFT 16
++#define INET_TABLE_PERTURB_SIZE (1 << INET_TABLE_PERTURB_SHIFT)
++static u32 *table_perturb;
+
+ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+- struct sock *sk, u32 port_offset,
++ struct sock *sk, u64 port_offset,
+ int (*check_established)(struct inet_timewait_death_row *,
+ struct sock *, __u16, struct inet_timewait_sock **))
+ {
+@@ -774,10 +776,13 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ if (likely(remaining > 1))
+ remaining &= ~1U;
+
+- net_get_random_once(table_perturb, sizeof(table_perturb));
+- index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT);
++ net_get_random_once(table_perturb,
++ INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb));
++ index = port_offset & (INET_TABLE_PERTURB_SIZE - 1);
++
++ offset = READ_ONCE(table_perturb[index]) + (port_offset >> 32);
++ offset %= remaining;
+
+- offset = (READ_ONCE(table_perturb[index]) + port_offset) % remaining;
+ /* In first pass we try ports of @low parity.
+ * inet_csk_get_port() does the opposite choice.
+ */
+@@ -831,11 +836,12 @@ next_port:
+ return -EADDRNOTAVAIL;
+
+ ok:
+- /* If our first attempt found a candidate, skip next candidate
+- * in 1/16 of cases to add some noise.
++ /* Here we want to add a little bit of randomness to the next source
++ * port that will be chosen. We use a max() with a random here so that
++ * on low contention the randomness is maximal and on high contention
++ * it may be nonexistent.
+ */
+- if (!i && !(prandom_u32() % 16))
+- i = 2;
++ i = max_t(int, i, (prandom_u32() & 7) * 2);
+ WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2);
+
+ /* Head lock still held and bh's disabled */
+@@ -859,7 +865,7 @@ ok:
+ int inet_hash_connect(struct inet_timewait_death_row *death_row,
+ struct sock *sk)
+ {
+- u32 port_offset = 0;
++ u64 port_offset = 0;
+
+ if (!inet_sk(sk)->inet_num)
+ port_offset = inet_sk_port_offset(sk);
+@@ -909,6 +915,12 @@ void __init inet_hashinfo2_init(struct inet_hashinfo *h, const char *name,
+ low_limit,
+ high_limit);
+ init_hashinfo_lhash2(h);
++
++ /* this one is used for source ports of outgoing connections */
++ table_perturb = kmalloc_array(INET_TABLE_PERTURB_SIZE,
++ sizeof(*table_perturb), GFP_KERNEL);
++ if (!table_perturb)
++ panic("TCP: failed to alloc table_perturb");
+ }
+
+ int inet_hashinfo2_init_mod(struct inet_hashinfo *h)
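__inet_hash_connect() now consumes that 64-bit offset in two halves: the low bits index the enlarged perturbation table (2^16 u32 entries, the 256 kB the comment mentions), and the high bits are folded into the starting offset. Just the index/offset split, sketched with an illustrative table:

#include <stdint.h>

#define TABLE_SHIFT 16
#define TABLE_SIZE  (1u << TABLE_SHIFT)

static uint32_t table_perturb[TABLE_SIZE];      /* 65536 * 4 = 256 kB */

uint32_t pick_start_offset(uint64_t port_offset, uint32_t remaining)
{
        uint32_t index = port_offset & (TABLE_SIZE - 1);        /* low bits */
        uint64_t offset = (uint64_t)table_perturb[index] +
                          (port_offset >> 32);                  /* high bits */

        return (uint32_t)(offset % remaining);  /* remaining ports to scan */
}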
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 3ee947557b883..aa9a11b20d18e 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -305,6 +305,7 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
+ struct net *net = sock_net(sk);
+ if (sk->sk_family == AF_INET) {
+ struct sockaddr_in *addr = (struct sockaddr_in *) uaddr;
++ u32 tb_id = RT_TABLE_LOCAL;
+ int chk_addr_ret;
+
+ if (addr_len < sizeof(*addr))
+@@ -318,7 +319,8 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
+ pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n",
+ sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port));
+
+- chk_addr_ret = inet_addr_type(net, addr->sin_addr.s_addr);
++ tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ? : tb_id;
++ chk_addr_ret = inet_addr_type_table(net, addr->sin_addr.s_addr, tb_id);
+
+ if (!inet_addr_valid_or_nonlocal(net, inet_sk(sk),
+ addr->sin_addr.s_addr,
+@@ -355,6 +357,14 @@ static int ping_check_bind_addr(struct sock *sk, struct inet_sock *isk,
+ return -ENODEV;
+ }
+ }
++
++ if (!dev && sk->sk_bound_dev_if) {
++ dev = dev_get_by_index_rcu(net, sk->sk_bound_dev_if);
++ if (!dev) {
++ rcu_read_unlock();
++ return -ENODEV;
++ }
++ }
+ has_addr = pingv6_ops.ipv6_chk_addr(net, &addr->sin6_addr, dev,
+ scoped);
+ rcu_read_unlock();
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index d5d058de36646..eef07b62b2d88 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1748,6 +1748,7 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ #endif
+ RT_CACHE_STAT_INC(in_slow_mc);
+
++ skb_dst_drop(skb);
+ skb_dst_set(skb, &rth->dst);
+ return 0;
+ }
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 4740afecf7c62..32ccac10bd625 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -308,7 +308,7 @@ not_unique:
+ return -EADDRNOTAVAIL;
+ }
+
+-static u32 inet6_sk_port_offset(const struct sock *sk)
++static u64 inet6_sk_port_offset(const struct sock *sk)
+ {
+ const struct inet_sock *inet = inet_sk(sk);
+
+@@ -320,7 +320,7 @@ static u32 inet6_sk_port_offset(const struct sock *sk)
+ int inet6_hash_connect(struct inet_timewait_death_row *death_row,
+ struct sock *sk)
+ {
+- u32 port_offset = 0;
++ u64 port_offset = 0;
+
+ if (!inet_sk(sk)->inet_num)
+ port_offset = inet6_sk_port_offset(sk);
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index c4d3e2da73f23..b5ac06b96329a 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -3574,6 +3574,12 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
+ cbss->transmitted_bss->bssid);
+ bss_conf->bssid_indicator = cbss->max_bssid_indicator;
+ bss_conf->bssid_index = cbss->bssid_index;
++ } else {
++ bss_conf->nontransmitted = false;
++ memset(bss_conf->transmitter_bssid, 0,
++ sizeof(bss_conf->transmitter_bssid));
++ bss_conf->bssid_indicator = 0;
++ bss_conf->bssid_index = 0;
+ }
+
+ /*
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 05a3795eac8e9..73e9c0a9c1876 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1975,7 +1975,6 @@ static int netlink_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ copied = len;
+ }
+
+- skb_reset_transport_header(data_skb);
+ err = skb_copy_datagram_msg(data_skb, 0, msg, copied);
+
+ if (msg->msg_name) {
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index 2f638f8b7b1e7..73ee2771093d6 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -487,11 +487,11 @@ struct rds_tcp_net {
+ /* All module specific customizations to the RDS-TCP socket should be done in
+ * rds_tcp_tune() and applied after socket creation.
+ */
+-void rds_tcp_tune(struct socket *sock)
++bool rds_tcp_tune(struct socket *sock)
+ {
+ struct sock *sk = sock->sk;
+ struct net *net = sock_net(sk);
+- struct rds_tcp_net *rtn = net_generic(net, rds_tcp_netid);
++ struct rds_tcp_net *rtn;
+
+ tcp_sock_set_nodelay(sock->sk);
+ lock_sock(sk);
+@@ -499,10 +499,15 @@ void rds_tcp_tune(struct socket *sock)
+ * a process which created this net namespace terminated.
+ */
+ if (!sk->sk_net_refcnt) {
++ if (!maybe_get_net(net)) {
++ release_sock(sk);
++ return false;
++ }
+ sk->sk_net_refcnt = 1;
+- get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
++ netns_tracker_alloc(net, &sk->ns_tracker, GFP_KERNEL);
+ sock_inuse_add(net, 1);
+ }
++ rtn = net_generic(net, rds_tcp_netid);
+ if (rtn->sndbuf_size > 0) {
+ sk->sk_sndbuf = rtn->sndbuf_size;
+ sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+@@ -512,6 +517,7 @@ void rds_tcp_tune(struct socket *sock)
+ sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+ }
+ release_sock(sk);
++ return true;
+ }
+
+ static void rds_tcp_accept_worker(struct work_struct *work)
+diff --git a/net/rds/tcp.h b/net/rds/tcp.h
+index dc8d745d68575..f8b5930d7b343 100644
+--- a/net/rds/tcp.h
++++ b/net/rds/tcp.h
+@@ -49,7 +49,7 @@ struct rds_tcp_statistics {
+ };
+
+ /* tcp.c */
+-void rds_tcp_tune(struct socket *sock);
++bool rds_tcp_tune(struct socket *sock);
+ void rds_tcp_set_callbacks(struct socket *sock, struct rds_conn_path *cp);
+ void rds_tcp_reset_callbacks(struct socket *sock, struct rds_conn_path *cp);
+ void rds_tcp_restore_callbacks(struct socket *sock,
+diff --git a/net/rds/tcp_connect.c b/net/rds/tcp_connect.c
+index 5461d77fff4f4..f0c477c5d1db4 100644
+--- a/net/rds/tcp_connect.c
++++ b/net/rds/tcp_connect.c
+@@ -124,7 +124,10 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
+ if (ret < 0)
+ goto out;
+
+- rds_tcp_tune(sock);
++ if (!rds_tcp_tune(sock)) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ if (isv6) {
+ sin6.sin6_family = AF_INET6;
+diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
+index 09cadd556d1e1..7edf2e69d3fed 100644
+--- a/net/rds/tcp_listen.c
++++ b/net/rds/tcp_listen.c
+@@ -133,7 +133,10 @@ int rds_tcp_accept_one(struct socket *sock)
+ __module_get(new_sock->ops->owner);
+
+ rds_tcp_keepalive(new_sock);
+- rds_tcp_tune(new_sock);
++ if (!rds_tcp_tune(new_sock)) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ inet = inet_sk(new_sock->sk);
+
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index 31fcd279c1776..0eaaf1f45de17 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -149,7 +149,7 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ struct nlattr *pattr;
+ struct tcf_pedit *p;
+ int ret = 0, err;
+- int ksize;
++ int i, ksize;
+ u32 index;
+
+ if (!nla) {
+@@ -228,6 +228,18 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
+ p->tcfp_nkeys = parm->nkeys;
+ }
+ memcpy(p->tcfp_keys, parm->keys, ksize);
++ p->tcfp_off_max_hint = 0;
++ for (i = 0; i < p->tcfp_nkeys; ++i) {
++ u32 cur = p->tcfp_keys[i].off;
++
++ /* The AT option can read a single byte, we can bound the actual
++ * value with uchar max.
++ */
++ cur += (0xff & p->tcfp_keys[i].offmask) >> p->tcfp_keys[i].shift;
++
++ /* Each key touches 4 bytes starting from the computed offset */
++ p->tcfp_off_max_hint = max(p->tcfp_off_max_hint, cur + 4);
++ }
+
+ p->tcfp_flags = parm->flags;
+ goto_ch = tcf_action_set_ctrlact(*a, parm->action, goto_ch);
+@@ -308,13 +320,18 @@ static int tcf_pedit_act(struct sk_buff *skb, const struct tc_action *a,
+ struct tcf_result *res)
+ {
+ struct tcf_pedit *p = to_pedit(a);
++ u32 max_offset;
+ int i;
+
+- if (skb_unclone(skb, GFP_ATOMIC))
+- return p->tcf_action;
+-
+ spin_lock(&p->tcf_lock);
+
++ max_offset = (skb_transport_header_was_set(skb) ?
++ skb_transport_offset(skb) :
++ skb_network_offset(skb)) +
++ p->tcfp_off_max_hint;
++ if (skb_ensure_writable(skb, min(skb->len, max_offset)))
++ goto unlock;
++
+ tcf_lastuse_update(&p->tcf_tm);
+
+ if (p->tcfp_nkeys > 0) {
+@@ -403,6 +420,7 @@ bad:
+ p->tcf_qstats.overlimits++;
+ done:
+ bstats_update(&p->tcf_bstats, skb);
++unlock:
+ spin_unlock(&p->tcf_lock);
+ return p->tcf_action;
+ }
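The pedit fix precomputes, at configure time, an upper bound on how far any key can write into the packet, so the fast path can call skb_ensure_writable() with just enough length instead of uncloning the whole skb. The bound is simple arithmetic over the key list; sketched with an illustrative key struct:

#include <stdint.h>

struct pedit_key {
        uint32_t off;           /* base offset of the 4-byte edit */
        uint32_t offmask;       /* mask for the byte read via the AT option */
        uint32_t shift;
};

uint32_t off_max_hint(const struct pedit_key *keys, int nkeys)
{
        uint32_t hint = 0;

        for (int i = 0; i < nkeys; i++) {
                uint32_t cur = keys[i].off;

                /* AT reads a single byte, so the runtime adjustment is
                 * bounded by (0xff & offmask) >> shift. */
                cur += (0xff & keys[i].offmask) >> keys[i].shift;

                /* each key touches 4 bytes starting at cur */
                if (cur + 4 > hint)
                        hint = cur + 4;
        }
        return hint;
}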
+diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
+index 51e8eb2933ff4..338b9ef806e82 100644
+--- a/net/smc/smc_rx.c
++++ b/net/smc/smc_rx.c
+@@ -355,12 +355,12 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ }
+ break;
+ }
++ if (!timeo)
++ return -EAGAIN;
+ if (signal_pending(current)) {
+ read_done = sock_intr_errno(timeo);
+ break;
+ }
+- if (!timeo)
+- return -EAGAIN;
+ }
+
+ if (!smc_rx_data_available(conn)) {
+diff --git a/net/sunrpc/auth_gss/gss_rpc_upcall.c b/net/sunrpc/auth_gss/gss_rpc_upcall.c
+index 61c276bddaf25..f549e4c05defc 100644
+--- a/net/sunrpc/auth_gss/gss_rpc_upcall.c
++++ b/net/sunrpc/auth_gss/gss_rpc_upcall.c
+@@ -98,6 +98,7 @@ static int gssp_rpc_create(struct net *net, struct rpc_clnt **_clnt)
+ * done without the correct namespace:
+ */
+ .flags = RPC_CLNT_CREATE_NOPING |
++ RPC_CLNT_CREATE_CONNECTED |
+ RPC_CLNT_CREATE_NO_IDLE_TIMEOUT
+ };
+ struct rpc_clnt *clnt;
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 258ebc194ee2b..1dacd669faadd 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -76,6 +76,7 @@ static int rpc_encode_header(struct rpc_task *task,
+ static int rpc_decode_header(struct rpc_task *task,
+ struct xdr_stream *xdr);
+ static int rpc_ping(struct rpc_clnt *clnt);
++static int rpc_ping_noreply(struct rpc_clnt *clnt);
+ static void rpc_check_timeout(struct rpc_task *task);
+
+ static void rpc_register_client(struct rpc_clnt *clnt)
+@@ -483,6 +484,12 @@ static struct rpc_clnt *rpc_create_xprt(struct rpc_create_args *args,
+ rpc_shutdown_client(clnt);
+ return ERR_PTR(err);
+ }
++ } else if (args->flags & RPC_CLNT_CREATE_CONNECTED) {
++ int err = rpc_ping_noreply(clnt);
++ if (err != 0) {
++ rpc_shutdown_client(clnt);
++ return ERR_PTR(err);
++ }
+ }
+
+ clnt->cl_softrtry = 1;
+@@ -2699,6 +2706,10 @@ static const struct rpc_procinfo rpcproc_null = {
+ .p_decode = rpcproc_decode_null,
+ };
+
++static const struct rpc_procinfo rpcproc_null_noreply = {
++ .p_encode = rpcproc_encode_null,
++};
++
+ static void
+ rpc_null_call_prepare(struct rpc_task *task, void *data)
+ {
+@@ -2752,6 +2763,28 @@ static int rpc_ping(struct rpc_clnt *clnt)
+ return status;
+ }
+
++static int rpc_ping_noreply(struct rpc_clnt *clnt)
++{
++ struct rpc_message msg = {
++ .rpc_proc = &rpcproc_null_noreply,
++ };
++ struct rpc_task_setup task_setup_data = {
++ .rpc_client = clnt,
++ .rpc_message = &msg,
++ .callback_ops = &rpc_null_ops,
++ .flags = RPC_TASK_SOFT | RPC_TASK_SOFTCONN | RPC_TASK_NULLCREDS,
++ };
++ struct rpc_task *task;
++ int status;
++
++ task = rpc_run_task(&task_setup_data);
++ if (IS_ERR(task))
++ return PTR_ERR(task);
++ status = task->tk_status;
++ rpc_put_task(task);
++ return status;
++}
++
+ struct rpc_cb_add_xprt_calldata {
+ struct rpc_xprt_switch *xps;
+ struct rpc_xprt *xprt;
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index a40553e83f8b2..f3e3d009cf1cf 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -1347,7 +1347,10 @@ static int tls_device_down(struct net_device *netdev)
+
+ /* Device contexts for RX and TX will be freed in on sk_destruct
+ * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW.
++ * Now release the ref taken above.
+ */
++ if (refcount_dec_and_test(&ctx->refcount))
++ tls_device_free_ctx(ctx);
+ }
+
+ up_write(&device_offload_lock);
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index b45ec35cd63c3..62b41ca050a20 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -413,6 +413,9 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+
+ val = (val >> mc->shift) & mask;
+
++ if (sel < 0 || sel > mc->max)
++ return -EINVAL;
++
+ *select = sel;
+
+ /* Setting a volume is only valid if it is already On */
+@@ -427,7 +430,7 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ mask << mc->shift,
+ sel << mc->shift);
+
+- return 0;
++ return *select != val;
+ }
+
+ static const char *max98090_perf_pwr_text[] =
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 58347eadd219b..e693070f51fe8 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -519,7 +519,15 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ unsigned int mask = (1 << fls(max)) - 1;
+ unsigned int invert = mc->invert;
+ unsigned int val, val_mask;
+- int err, ret;
++ int err, ret, tmp;
++
++ tmp = ucontrol->value.integer.value[0];
++ if (tmp < 0)
++ return -EINVAL;
++ if (mc->platform_max && tmp > mc->platform_max)
++ return -EINVAL;
++ if (tmp > mc->max - mc->min + 1)
++ return -EINVAL;
+
+ if (invert)
+ val = (max - ucontrol->value.integer.value[0]) & mask;
+@@ -534,6 +542,14 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol,
+ ret = err;
+
+ if (snd_soc_volsw_is_stereo(mc)) {
++ tmp = ucontrol->value.integer.value[1];
++ if (tmp < 0)
++ return -EINVAL;
++ if (mc->platform_max && tmp > mc->platform_max)
++ return -EINVAL;
++ if (tmp > mc->max - mc->min + 1)
++ return -EINVAL;
++
+ if (invert)
+ val = (max - ucontrol->value.integer.value[1]) & mask;
+ else
+diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c
+index 20c6ca37dbc44..53e97abbe6e3b 100644
+--- a/sound/soc/sof/sof-pci-dev.c
++++ b/sound/soc/sof/sof-pci-dev.c
+@@ -130,6 +130,11 @@ int sof_pci_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+
+ dev_dbg(&pci->dev, "PCI DSP detected");
+
++ if (!desc) {
++ dev_err(dev, "error: no matching PCI descriptor\n");
++ return -ENODEV;
++ }
++
+ if (!desc->ops) {
+ dev_err(dev, "error: no matching PCI descriptor ops\n");
+ return -ENODEV;
+diff --git a/tools/perf/tests/shell/test_arm_coresight.sh b/tools/perf/tests/shell/test_arm_coresight.sh
+index 6de53b7ef5ffd..e4cb4f1806ffa 100755
+--- a/tools/perf/tests/shell/test_arm_coresight.sh
++++ b/tools/perf/tests/shell/test_arm_coresight.sh
+@@ -29,7 +29,6 @@ cleanup_files()
+ rm -f ${file}
+ rm -f "${perfdata}.old"
+ trap - exit term int
+- kill -2 $$
+ exit $glb_err
+ }
+
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index 1530c3e0242ef..259df83ecd2ee 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -55,9 +55,9 @@ CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_32bit_prog
+ CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_64bit_program.c)
+ CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_program.c -no-pie)
+
+-TARGETS := protection_keys
+-BINARIES_32 := $(TARGETS:%=%_32)
+-BINARIES_64 := $(TARGETS:%=%_64)
++VMTARGETS := protection_keys
++BINARIES_32 := $(VMTARGETS:%=%_32)
++BINARIES_64 := $(VMTARGETS:%=%_64)
+
+ ifeq ($(CAN_BUILD_WITH_NOPIE),1)
+ CFLAGS += -no-pie
+@@ -110,7 +110,7 @@ $(BINARIES_32): CFLAGS += -m32 -mxsave
+ $(BINARIES_32): LDLIBS += -lrt -ldl -lm
+ $(BINARIES_32): $(OUTPUT)/%_32: %.c
+ $(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
+-$(foreach t,$(TARGETS),$(eval $(call gen-target-rule-32,$(t))))
++$(foreach t,$(VMTARGETS),$(eval $(call gen-target-rule-32,$(t))))
+ endif
+
+ ifeq ($(CAN_BUILD_X86_64),1)
+@@ -118,7 +118,7 @@ $(BINARIES_64): CFLAGS += -m64 -mxsave
+ $(BINARIES_64): LDLIBS += -lrt -ldl
+ $(BINARIES_64): $(OUTPUT)/%_64: %.c
+ $(CC) $(CFLAGS) $(EXTRA_CFLAGS) $(notdir $^) $(LDLIBS) -o $@
+-$(foreach t,$(TARGETS),$(eval $(call gen-target-rule-64,$(t))))
++$(foreach t,$(VMTARGETS),$(eval $(call gen-target-rule-64,$(t))))
+ endif
+
+ # x86_64 users should be encouraged to install 32-bit libraries
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-15 22:08 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-05-15 22:08 UTC (permalink / raw
To: gentoo-commits
commit: 3337a2802d989a6811e43d50ac0a3f477b6fa64d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 15 22:08:24 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 15 22:08:24 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3337a280
Linux patch 5.17.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-5.17.8.patch | 246 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 250 insertions(+)
diff --git a/0000_README b/0000_README
index cf45e5d3..30109237 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-5.17.7.patch
From: http://www.kernel.org
Desc: Linux 5.17.7
+Patch: 1007_linux-5.17.8.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-5.17.8.patch b/1007_linux-5.17.8.patch
new file mode 100644
index 00000000..1c8ce1ab
--- /dev/null
+++ b/1007_linux-5.17.8.patch
@@ -0,0 +1,246 @@
+diff --git a/Makefile b/Makefile
+index ce65b393a2b49..3cf179812f0f9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index 0ed4861b038f6..b3d5f97f16cdb 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -75,11 +75,11 @@ int udf_write_fi(struct inode *inode, struct fileIdentDesc *cfi,
+
+ if (fileident) {
+ if (adinicb || (offset + lfi < 0)) {
+- memcpy(udf_get_fi_ident(sfi), fileident, lfi);
++ memcpy(sfi->impUse + liu, fileident, lfi);
+ } else if (offset >= 0) {
+ memcpy(fibh->ebh->b_data + offset, fileident, lfi);
+ } else {
+- memcpy(udf_get_fi_ident(sfi), fileident, -offset);
++ memcpy(sfi->impUse + liu, fileident, -offset);
+ memcpy(fibh->ebh->b_data, fileident - offset,
+ lfi + offset);
+ }
+@@ -88,11 +88,11 @@ int udf_write_fi(struct inode *inode, struct fileIdentDesc *cfi,
+ offset += lfi;
+
+ if (adinicb || (offset + padlen < 0)) {
+- memset(udf_get_fi_ident(sfi) + lfi, 0x00, padlen);
++ memset(sfi->impUse + liu + lfi, 0x00, padlen);
+ } else if (offset >= 0) {
+ memset(fibh->ebh->b_data + offset, 0x00, padlen);
+ } else {
+- memset(udf_get_fi_ident(sfi) + lfi, 0x00, -offset);
++ memset(sfi->impUse + liu + lfi, 0x00, -offset);
+ memset(fibh->ebh->b_data, 0x00, padlen + offset);
+ }
+
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 36d727f94ac29..131514913430a 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -36,6 +36,9 @@
+ /* HCI priority */
+ #define HCI_PRIO_MAX 7
+
++/* HCI maximum id value */
++#define HCI_MAX_ID 10000
++
+ /* HCI Core structures */
+ struct inquiry_data {
+ bdaddr_t bdaddr;
+diff --git a/include/uapi/linux/rfkill.h b/include/uapi/linux/rfkill.h
+index 283c5a7b3f2c8..db6c8588c1d0c 100644
+--- a/include/uapi/linux/rfkill.h
++++ b/include/uapi/linux/rfkill.h
+@@ -184,7 +184,7 @@ struct rfkill_event_ext {
+ #define RFKILL_IOC_NOINPUT 1
+ #define RFKILL_IOCTL_NOINPUT _IO(RFKILL_IOC_MAGIC, RFKILL_IOC_NOINPUT)
+ #define RFKILL_IOC_MAX_SIZE 2
+-#define RFKILL_IOCTL_MAX_SIZE _IOW(RFKILL_IOC_MAGIC, RFKILL_IOC_EXT_SIZE, __u32)
++#define RFKILL_IOCTL_MAX_SIZE _IOW(RFKILL_IOC_MAGIC, RFKILL_IOC_MAX_SIZE, __u32)
+
+ /* and that's all userspace gets */
+
+diff --git a/mm/gup.c b/mm/gup.c
+index 7bc1ba9ce4403..41da0bd61bec3 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -465,7 +465,7 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
+ pte_t *pte, unsigned int flags)
+ {
+ /* No page to get reference */
+- if (flags & FOLL_GET)
++ if (flags & (FOLL_GET | FOLL_PIN))
+ return -EFAULT;
+
+ if (flags & FOLL_TOUCH) {
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a1da8757cc9cc..e2dc190c67256 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5820,7 +5820,8 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
+ *pagep = NULL;
+ goto out;
+ }
+- folio_copy(page_folio(page), page_folio(*pagep));
++ copy_user_huge_page(page, *pagep, dst_addr, dst_vma,
++ pages_per_huge_page(h));
+ put_page(*pagep);
+ *pagep = NULL;
+ }
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 15dcedbc17306..682eedb5ea75b 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -707,8 +707,10 @@ static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
+ (void *)&priv);
+ if (ret == 1 && priv.tk.addr)
+ kill_proc(&priv.tk, pfn, flags);
++ else
++ ret = 0;
+ mmap_read_unlock(p->mm);
+- return ret ? -EFAULT : -EHWPOISON;
++ return ret > 0 ? -EHWPOISON : -EFAULT;
+ }
+
+ static const char *action_name[] = {
+diff --git a/mm/memory.c b/mm/memory.c
+index b69afe3dd597a..886925d977592 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -5475,6 +5475,8 @@ long copy_huge_page_from_user(struct page *dst_page,
+ if (rc)
+ break;
+
++ flush_dcache_page(subpage);
++
+ cond_resched();
+ }
+ return ret_val;
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 086a366374678..ac7673e43dda6 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -916,9 +916,12 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ if (!PageMappingFlags(page))
+ page->mapping = NULL;
+
+- if (likely(!is_zone_device_page(newpage)))
+- flush_dcache_page(newpage);
++ if (likely(!is_zone_device_page(newpage))) {
++ int i, nr = compound_nr(newpage);
+
++ for (i = 0; i < nr; i++)
++ flush_dcache_page(newpage + i);
++ }
+ }
+ out:
+ return rc;
+@@ -3082,18 +3085,21 @@ static int establish_migrate_target(int node, nodemask_t *used,
+ if (best_distance != -1) {
+ val = node_distance(node, migration_target);
+ if (val > best_distance)
+- return NUMA_NO_NODE;
++ goto out_clear;
+ }
+
+ index = nd->nr;
+ if (WARN_ONCE(index >= DEMOTION_TARGET_NODES,
+ "Exceeds maximum demotion target nodes\n"))
+- return NUMA_NO_NODE;
++ goto out_clear;
+
+ nd->nodes[index] = migration_target;
+ nd->nr++;
+
+ return migration_target;
++out_clear:
++ node_clear(migration_target, *used);
++ return NUMA_NO_NODE;
+ }
+
+ /*
+diff --git a/mm/mlock.c b/mm/mlock.c
+index 37f969ec68fa4..b565b1aac8d49 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -838,6 +838,7 @@ int user_shm_lock(size_t size, struct ucounts *ucounts)
+ }
+ if (!get_ucounts(ucounts)) {
+ dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_MEMLOCK, locked);
++ allowed = 0;
+ goto out;
+ }
+ allowed = 1;
+diff --git a/mm/shmem.c b/mm/shmem.c
+index a09b29ec2b45c..7a46419d331d5 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2357,8 +2357,10 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
+ /* don't free the page */
+ goto out_unacct_blocks;
+ }
++
++ flush_dcache_page(page);
+ } else { /* ZEROPAGE */
+- clear_highpage(page);
++ clear_user_highpage(page, dst_addr);
+ }
+ } else {
+ page = *pagep;
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 885e5adb0168d..7259f96faaa0e 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -153,6 +153,8 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
+ /* don't free the page */
+ goto out;
+ }
++
++ flush_dcache_page(page);
+ } else {
+ page = *pagep;
+ *pagep = NULL;
+@@ -628,6 +630,7 @@ retry:
+ err = -EFAULT;
+ goto out;
+ }
++ flush_dcache_page(page);
+ goto retry;
+ } else
+ BUG_ON(page);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 2882bc7d79d79..9e9713f7ddb84 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -2554,10 +2554,10 @@ int hci_register_dev(struct hci_dev *hdev)
+ */
+ switch (hdev->dev_type) {
+ case HCI_PRIMARY:
+- id = ida_simple_get(&hci_index_ida, 0, 0, GFP_KERNEL);
++ id = ida_simple_get(&hci_index_ida, 0, HCI_MAX_ID, GFP_KERNEL);
+ break;
+ case HCI_AMP:
+- id = ida_simple_get(&hci_index_ida, 1, 0, GFP_KERNEL);
++ id = ida_simple_get(&hci_index_ida, 1, HCI_MAX_ID, GFP_KERNEL);
+ break;
+ default:
+ return -EINVAL;
+@@ -2566,7 +2566,7 @@ int hci_register_dev(struct hci_dev *hdev)
+ if (id < 0)
+ return id;
+
+- sprintf(hdev->name, "hci%d", id);
++ snprintf(hdev->name, sizeof(hdev->name), "hci%d", id);
+ hdev->id = id;
+
+ BT_DBG("%p name %s bus %d", hdev, hdev->name, hdev->bus);
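The two hci_register_dev() changes work as a pair: ida_simple_get() now refuses ids at or above HCI_MAX_ID, and snprintf() bounds the formatted name, so "hci%d" can no longer overflow hdev->name. A standalone demonstration; the buffer size and the HCI_MAX_ID value are assumptions mirroring the kernel definitions:

    #include <stdio.h>

    #define HCI_MAX_ID 10000    /* assumed: keeps "hci9999" within 8 bytes */

    int main(void)
    {
        char name[8];           /* assumed size of hdev->name */

        snprintf(name, sizeof(name), "hci%d", HCI_MAX_ID - 1);
        printf("%s\n", name);   /* "hci9999", NUL included, no overflow */
        return 0;
    }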
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-12 11:27 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-05-12 11:27 UTC (permalink / raw)

To: gentoo-commits
commit: 33ae7af26e3ce63605bf43c87d215a1d710d852d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 12 11:27:08 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 12 11:27:08 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=33ae7af2
Linux patch 5.17.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-5.17.7.patch | 4888 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4892 insertions(+)
diff --git a/0000_README b/0000_README
index 91016f55..cf45e5d3 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-5.17.6.patch
From: http://www.kernel.org
Desc: Linux 5.17.6
+Patch: 1006_linux-5.17.7.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-5.17.7.patch b/1006_linux-5.17.7.patch
new file mode 100644
index 00000000..ed7d05cf
--- /dev/null
+++ b/1006_linux-5.17.7.patch
@@ -0,0 +1,4888 @@
+diff --git a/Documentation/devicetree/bindings/pci/apple,pcie.yaml b/Documentation/devicetree/bindings/pci/apple,pcie.yaml
+index 7f01e15fc81c2..daf602ac0d0fd 100644
+--- a/Documentation/devicetree/bindings/pci/apple,pcie.yaml
++++ b/Documentation/devicetree/bindings/pci/apple,pcie.yaml
+@@ -142,7 +142,6 @@ examples:
+ device_type = "pci";
+ reg = <0x0 0x0 0x0 0x0 0x0>;
+ reset-gpios = <&pinctrl_ap 152 0>;
+- max-link-speed = <2>;
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -153,7 +152,6 @@ examples:
+ device_type = "pci";
+ reg = <0x800 0x0 0x0 0x0 0x0>;
+ reset-gpios = <&pinctrl_ap 153 0>;
+- max-link-speed = <2>;
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+@@ -164,7 +162,6 @@ examples:
+ device_type = "pci";
+ reg = <0x1000 0x0 0x0 0x0 0x0>;
+ reset-gpios = <&pinctrl_ap 33 0>;
+- max-link-speed = <1>;
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+diff --git a/Makefile b/Makefile
+index 7ef8dd5ab6f28..ce65b393a2b49 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/mips/include/asm/timex.h b/arch/mips/include/asm/timex.h
+index b05bb70a2e46f..8026baf46e729 100644
+--- a/arch/mips/include/asm/timex.h
++++ b/arch/mips/include/asm/timex.h
+@@ -40,9 +40,9 @@
+ typedef unsigned int cycles_t;
+
+ /*
+- * On R4000/R4400 before version 5.0 an erratum exists such that if the
+- * cycle counter is read in the exact moment that it is matching the
+- * compare register, no interrupt will be generated.
++ * On R4000/R4400 an erratum exists such that if the cycle counter is
++ * read in the exact moment that it is matching the compare register,
++ * no interrupt will be generated.
+ *
+ * There is a suggested workaround and also the erratum can't strike if
+ * the compare interrupt isn't being used as the clock source device.
+@@ -63,7 +63,7 @@ static inline int can_use_mips_counter(unsigned int prid)
+ if (!__builtin_constant_p(cpu_has_counter))
+ asm volatile("" : "=m" (cpu_data[0].options));
+ if (likely(cpu_has_counter &&
+- prid >= (PRID_IMP_R4000 | PRID_REV_ENCODE_44(5, 0))))
++ prid > (PRID_IMP_R4000 | PRID_REV_ENCODE_44(15, 15))))
+ return 1;
+ else
+ return 0;
+diff --git a/arch/mips/kernel/time.c b/arch/mips/kernel/time.c
+index caa01457dce60..ed339d7979f3f 100644
+--- a/arch/mips/kernel/time.c
++++ b/arch/mips/kernel/time.c
+@@ -141,15 +141,10 @@ static __init int cpu_has_mfc0_count_bug(void)
+ case CPU_R4400MC:
+ /*
+ * The published errata for the R4400 up to 3.0 say the CPU
+- * has the mfc0 from count bug.
++ * has the mfc0 from count bug. This seems the last version
++ * produced.
+ */
+- if ((current_cpu_data.processor_id & 0xff) <= 0x30)
+- return 1;
+-
+- /*
+- * we assume newer revisions are ok
+- */
+- return 0;
++ return 1;
+ }
+
+ return 0;
+diff --git a/arch/parisc/kernel/processor.c b/arch/parisc/kernel/processor.c
+index 1b6129e7d776b..b861bbbc87178 100644
+--- a/arch/parisc/kernel/processor.c
++++ b/arch/parisc/kernel/processor.c
+@@ -418,8 +418,7 @@ show_cpuinfo (struct seq_file *m, void *v)
+ }
+ seq_printf(m, " (0x%02lx)\n", boot_cpu_data.pdc.capabilities);
+
+- seq_printf(m, "model\t\t: %s\n"
+- "model name\t: %s\n",
++ seq_printf(m, "model\t\t: %s - %s\n",
+ boot_cpu_data.pdc.sys_model_name,
+ cpuinfo->dev ?
+ cpuinfo->dev->name : "Unknown");
+diff --git a/arch/parisc/kernel/setup.c b/arch/parisc/kernel/setup.c
+index b91cb45ffd4e3..f005ddedb50e4 100644
+--- a/arch/parisc/kernel/setup.c
++++ b/arch/parisc/kernel/setup.c
+@@ -161,6 +161,8 @@ void __init setup_arch(char **cmdline_p)
+ #ifdef CONFIG_PA11
+ dma_ops_init();
+ #endif
++
++ clear_sched_clock_stable();
+ }
+
+ /*
+diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
+index 061119a56fbe8..d8e59a1000ab7 100644
+--- a/arch/parisc/kernel/time.c
++++ b/arch/parisc/kernel/time.c
+@@ -249,13 +249,9 @@ void __init time_init(void)
+ static int __init init_cr16_clocksource(void)
+ {
+ /*
+- * The cr16 interval timers are not syncronized across CPUs, even if
+- * they share the same socket.
++ * The cr16 interval timers are not synchronized across CPUs.
+ */
+ if (num_online_cpus() > 1 && !running_on_qemu) {
+- /* mark sched_clock unstable */
+- clear_sched_clock_stable();
+-
+ clocksource_cr16.name = "cr16_unstable";
+ clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
+ clocksource_cr16.rating = 0;
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 0d588032d6e69..697a9aed4f77f 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -206,8 +206,25 @@ static void __init setup_bootmem(void)
+ * early_init_fdt_reserve_self() since __pa() does
+ * not work for DTB pointers that are fixmap addresses
+ */
+- if (!IS_ENABLED(CONFIG_BUILTIN_DTB))
+- memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
++ if (!IS_ENABLED(CONFIG_BUILTIN_DTB)) {
++ /*
++ * In case the DTB is not located in a memory region we won't
++ * be able to locate it later on via the linear mapping and
++ * get a segfault when accessing it via __va(dtb_early_pa).
++ * To avoid this situation copy DTB to a memory region.
++ * Note that memblock_phys_alloc will also reserve DTB region.
++ */
++ if (!memblock_is_memory(dtb_early_pa)) {
++ size_t fdt_size = fdt_totalsize(dtb_early_va);
++ phys_addr_t new_dtb_early_pa = memblock_phys_alloc(fdt_size, PAGE_SIZE);
++ void *new_dtb_early_va = early_memremap(new_dtb_early_pa, fdt_size);
++
++ memcpy(new_dtb_early_va, dtb_early_va, fdt_size);
++ early_memunmap(new_dtb_early_va, fdt_size);
++ _dtb_early_pa = new_dtb_early_pa;
++ } else
++ memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
++ }
+
+ early_init_fdt_scan_reserved_mem();
+ dma_contiguous_reserve(dma32_phys_limit);
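The comment in the riscv hunk reads as a three-step recipe: detect a DTB that lies outside every memblock memory region, allocate a region that can hold it, and copy it so the later __va() access resolves through the linear map. A userspace model of that decision; the helpers are stand-ins for memblock_is_memory(), memblock_phys_alloc() and the early_memremap() copy:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    struct dtb { const void *va; size_t size; bool in_ram; };

    /* Returns the address the kernel should keep using. */
    static const void *fixup_dtb(const struct dtb *d)
    {
        if (!d->in_ram) {                   /* memblock_is_memory() failed */
            void *copy = malloc(d->size);   /* memblock_phys_alloc() */

            if (!copy)
                return NULL;
            memcpy(copy, d->va, d->size);   /* copy via the early mapping */
            return copy;                    /* new dtb_early_pa */
        }
        return d->va;   /* already in RAM: reserving the range suffices */
    }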
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 8dea01ffc5c18..5290d64723086 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -41,17 +41,7 @@ struct fpu_state_config fpu_user_cfg __ro_after_init;
+ */
+ struct fpstate init_fpstate __ro_after_init;
+
+-/*
+- * Track whether the kernel is using the FPU state
+- * currently.
+- *
+- * This flag is used:
+- *
+- * - by IRQ context code to potentially use the FPU
+- * if it's unused.
+- *
+- * - to debug kernel_fpu_begin()/end() correctness
+- */
++/* Track in-kernel FPU usage */
+ static DEFINE_PER_CPU(bool, in_kernel_fpu);
+
+ /*
+@@ -59,42 +49,37 @@ static DEFINE_PER_CPU(bool, in_kernel_fpu);
+ */
+ DEFINE_PER_CPU(struct fpu *, fpu_fpregs_owner_ctx);
+
+-static bool kernel_fpu_disabled(void)
+-{
+- return this_cpu_read(in_kernel_fpu);
+-}
+-
+-static bool interrupted_kernel_fpu_idle(void)
+-{
+- return !kernel_fpu_disabled();
+-}
+-
+-/*
+- * Were we in user mode (or vm86 mode) when we were
+- * interrupted?
+- *
+- * Doing kernel_fpu_begin/end() is ok if we are running
+- * in an interrupt context from user mode - we'll just
+- * save the FPU state as required.
+- */
+-static bool interrupted_user_mode(void)
+-{
+- struct pt_regs *regs = get_irq_regs();
+- return regs && user_mode(regs);
+-}
+-
+ /*
+ * Can we use the FPU in kernel mode with the
+ * whole "kernel_fpu_begin/end()" sequence?
+- *
+- * It's always ok in process context (ie "not interrupt")
+- * but it is sometimes ok even from an irq.
+ */
+ bool irq_fpu_usable(void)
+ {
+- return !in_interrupt() ||
+- interrupted_user_mode() ||
+- interrupted_kernel_fpu_idle();
++ if (WARN_ON_ONCE(in_nmi()))
++ return false;
++
++ /* In kernel FPU usage already active? */
++ if (this_cpu_read(in_kernel_fpu))
++ return false;
++
++ /*
++ * When not in NMI or hard interrupt context, FPU can be used in:
++ *
++ * - Task context except from within fpregs_lock()'ed critical
++ * regions.
++ *
++ * - Soft interrupt processing context which cannot happen
++ * while in a fpregs_lock()'ed critical region.
++ */
++ if (!in_hardirq())
++ return true;
++
++ /*
++ * In hard interrupt context it's safe when soft interrupts
++ * are enabled, which means the interrupt did not hit in
++ * a fpregs_lock()'ed critical region.
++ */
++ return !softirq_count();
+ }
+ EXPORT_SYMBOL(irq_fpu_usable);
+
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index ed8a13ac4ab23..4c2a158bb6c4f 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -69,6 +69,7 @@ static DEFINE_PER_CPU_DECRYPTED(struct kvm_vcpu_pv_apf_data, apf_reason) __align
+ DEFINE_PER_CPU_DECRYPTED(struct kvm_steal_time, steal_time) __aligned(64) __visible;
+ static int has_steal_clock = 0;
+
++static int has_guest_poll = 0;
+ /*
+ * No need for any "IO delay" on KVM
+ */
+@@ -706,14 +707,26 @@ static int kvm_cpu_down_prepare(unsigned int cpu)
+
+ static int kvm_suspend(void)
+ {
++ u64 val = 0;
++
+ kvm_guest_cpu_offline(false);
+
++#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
++ if (kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL))
++ rdmsrl(MSR_KVM_POLL_CONTROL, val);
++ has_guest_poll = !(val & 1);
++#endif
+ return 0;
+ }
+
+ static void kvm_resume(void)
+ {
+ kvm_cpu_online(raw_smp_processor_id());
++
++#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
++ if (kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL) && has_guest_poll)
++ wrmsrl(MSR_KVM_POLL_CONTROL, 0);
++#endif
+ }
+
+ static struct syscore_ops kvm_syscore_ops = {
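As I read the kvm_suspend()/kvm_resume() hunks, bit 0 of MSR_KVM_POLL_CONTROL cleared means the guest had taken over idle polling, and that choice must be replayed after resume because the MSR comes back in its default state. A small model of the save/restore pair; the MSR is just a variable here and the bit semantics are my reading of the hunk:

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t poll_control = 1;   /* default: host-side polling on */
    static bool had_guest_poll;

    static void suspend_hook(void)
    {
        had_guest_poll = !(poll_control & 1);   /* rdmsrl() analogue */
    }

    static void resume_hook(void)
    {
        poll_control = 1;       /* the MSR resets across suspend */
        if (had_guest_poll)
            poll_control = 0;   /* wrmsrl(..., 0): guest polls again */
    }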
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index b8f8d268d0585..ee15db75fd624 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -865,6 +865,11 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ union cpuid10_eax eax;
+ union cpuid10_edx edx;
+
++ if (!static_cpu_has(X86_FEATURE_ARCH_PERFMON)) {
++ entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
++ break;
++ }
++
+ perf_get_x86_pmu_capability(&cap);
+
+ /*
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 2a10d0033c964..970d5c740b00b 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -113,7 +113,8 @@ static inline u32 kvm_x2apic_id(struct kvm_lapic *apic)
+
+ static bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu)
+ {
+- return pi_inject_timer && kvm_vcpu_apicv_active(vcpu);
++ return pi_inject_timer && kvm_vcpu_apicv_active(vcpu) &&
++ (kvm_mwait_in_guest(vcpu->kvm) || kvm_hlt_in_guest(vcpu->kvm));
+ }
+
+ bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu)
+@@ -2125,10 +2126,9 @@ int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+ break;
+
+ case APIC_SELF_IPI:
+- if (apic_x2apic_mode(apic)) {
+- kvm_lapic_reg_write(apic, APIC_ICR,
+- APIC_DEST_SELF | (val & APIC_VECTOR_MASK));
+- } else
++ if (apic_x2apic_mode(apic))
++ kvm_apic_send_ipi(apic, APIC_DEST_SELF | (val & APIC_VECTOR_MASK), 0);
++ else
+ ret = 1;
+ break;
+ default:
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 7f009ebb319ab..e7cd16e1e0a0b 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3239,6 +3239,8 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
+ return;
+
+ sp = to_shadow_page(*root_hpa & PT64_BASE_ADDR_MASK);
++ if (WARN_ON(!sp))
++ return;
+
+ if (is_tdp_mmu_page(sp))
+ kvm_tdp_mmu_put_root(kvm, sp, false);
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index b5b0837df0d11..50108634835f4 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -45,6 +45,22 @@ static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
+ [7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
+ };
+
++/* duplicated from amd_f17h_perfmon_event_map. */
++static struct kvm_event_hw_type_mapping amd_f17h_event_mapping[] = {
++ [0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
++ [1] = { 0xc0, 0x00, PERF_COUNT_HW_INSTRUCTIONS },
++ [2] = { 0x60, 0xff, PERF_COUNT_HW_CACHE_REFERENCES },
++ [3] = { 0x64, 0x09, PERF_COUNT_HW_CACHE_MISSES },
++ [4] = { 0xc2, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS },
++ [5] = { 0xc3, 0x00, PERF_COUNT_HW_BRANCH_MISSES },
++ [6] = { 0x87, 0x02, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND },
++ [7] = { 0x87, 0x01, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
++};
++
++/* amd_pmc_perf_hw_id depends on these being the same size */
++static_assert(ARRAY_SIZE(amd_event_mapping) ==
++ ARRAY_SIZE(amd_f17h_event_mapping));
++
+ static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
+ {
+ struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+@@ -140,6 +156,7 @@ static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
+
+ static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
+ {
++ struct kvm_event_hw_type_mapping *event_mapping;
+ u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
+ u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+ int i;
+@@ -148,15 +165,20 @@ static unsigned int amd_pmc_perf_hw_id(struct kvm_pmc *pmc)
+ if (WARN_ON(pmc_is_fixed(pmc)))
+ return PERF_COUNT_HW_MAX;
+
++ if (guest_cpuid_family(pmc->vcpu) >= 0x17)
++ event_mapping = amd_f17h_event_mapping;
++ else
++ event_mapping = amd_event_mapping;
++
+ for (i = 0; i < ARRAY_SIZE(amd_event_mapping); i++)
+- if (amd_event_mapping[i].eventsel == event_select
+- && amd_event_mapping[i].unit_mask == unit_mask)
++ if (event_mapping[i].eventsel == event_select
++ && event_mapping[i].unit_mask == unit_mask)
+ break;
+
+ if (i == ARRAY_SIZE(amd_event_mapping))
+ return PERF_COUNT_HW_MAX;
+
+- return amd_event_mapping[i].event_type;
++ return event_mapping[i].event_type;
+ }
+
+ /* check if a PMC is enabled by comparing it against global_ctrl bits. Because
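amd_pmc_perf_hw_id() now picks the event table by guest CPU family, and the new static_assert keeps the two tables index-compatible so the shared lookup loop stays valid for either. A standalone sketch of the scheme; the table contents are illustrative, not the full kernel mappings:

    #include <stddef.h>
    #include <stdint.h>

    struct ev { uint8_t eventsel, unit_mask; int type; };

    static const struct ev legacy[] = { { 0x76, 0x00, 0 }, { 0x7d, 0x07, 2 } };
    static const struct ev zen[]    = { { 0x76, 0x00, 0 }, { 0x60, 0xff, 2 } };

    _Static_assert(sizeof(legacy) == sizeof(zen), "tables must match");

    static int hw_id(unsigned int family, uint8_t sel, uint8_t mask)
    {
        const struct ev *t = family >= 0x17 ? zen : legacy;
        size_t i;

        for (i = 0; i < sizeof(legacy) / sizeof(legacy[0]); i++)
            if (t[i].eventsel == sel && t[i].unit_mask == mask)
                return t[i].type;
        return -1;  /* PERF_COUNT_HW_MAX analogue: no match */
    }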
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index e5cecd4ad2d44..76e6411d4dde1 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1590,24 +1590,51 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
+ atomic_set_release(&src_sev->migration_in_progress, 0);
+ }
+
++/* vCPU mutex subclasses. */
++enum sev_migration_role {
++ SEV_MIGRATION_SOURCE = 0,
++ SEV_MIGRATION_TARGET,
++ SEV_NR_MIGRATION_ROLES,
++};
+
+-static int sev_lock_vcpus_for_migration(struct kvm *kvm)
++static int sev_lock_vcpus_for_migration(struct kvm *kvm,
++ enum sev_migration_role role)
+ {
+ struct kvm_vcpu *vcpu;
+ unsigned long i, j;
++ bool first = true;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+- if (mutex_lock_killable(&vcpu->mutex))
++ if (mutex_lock_killable_nested(&vcpu->mutex, role))
+ goto out_unlock;
++
++ if (first) {
++ /*
++ * Reset the role to one that avoids colliding with
++ * the role used for the first vcpu mutex.
++ */
++ role = SEV_NR_MIGRATION_ROLES;
++ first = false;
++ } else {
++ mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
++ }
+ }
+
+ return 0;
+
+ out_unlock:
++
++ first = true;
+ kvm_for_each_vcpu(j, vcpu, kvm) {
+ if (i == j)
+ break;
+
++ if (first)
++ first = false;
++ else
++ mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
++
++
+ mutex_unlock(&vcpu->mutex);
+ }
+ return -EINTR;
+@@ -1617,8 +1644,15 @@ static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
+ {
+ struct kvm_vcpu *vcpu;
+ unsigned long i;
++ bool first = true;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
++ if (first)
++ first = false;
++ else
++ mutex_acquire(&vcpu->mutex.dep_map,
++ SEV_NR_MIGRATION_ROLES, 0, _THIS_IP_);
++
+ mutex_unlock(&vcpu->mutex);
+ }
+ }
+@@ -1726,10 +1760,10 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
+ charged = true;
+ }
+
+- ret = sev_lock_vcpus_for_migration(kvm);
++ ret = sev_lock_vcpus_for_migration(kvm, SEV_MIGRATION_SOURCE);
+ if (ret)
+ goto out_dst_cgroup;
+- ret = sev_lock_vcpus_for_migration(source_kvm);
++ ret = sev_lock_vcpus_for_migration(source_kvm, SEV_MIGRATION_TARGET);
+ if (ret)
+ goto out_dst_vcpu;
+
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index ef63cfd57029a..267d6dc4b8186 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -5473,7 +5473,7 @@ static bool vmx_emulation_required_with_pending_exception(struct kvm_vcpu *vcpu)
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+ return vmx->emulation_required && !vmx->rmode.vm86_active &&
+- vcpu->arch.exception.pending;
++ (vcpu->arch.exception.pending || vcpu->arch.exception.injected);
+ }
+
+ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index c59265146e9c8..f1827257ef0e0 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -3677,8 +3677,11 @@ static void cleanup_smi_msgs(struct ipmi_smi *intf)
+ void ipmi_unregister_smi(struct ipmi_smi *intf)
+ {
+ struct ipmi_smi_watcher *w;
+- int intf_num = intf->intf_num, index;
++ int intf_num, index;
+
++ if (!intf)
++ return;
++ intf_num = intf->intf_num;
+ mutex_lock(&ipmi_interfaces_mutex);
+ intf->intf_num = -1;
+ intf->in_shutdown = true;
+@@ -4518,6 +4521,8 @@ return_unspecified:
+ } else
+ /* The message was sent, start the timer. */
+ intf_start_seq_timer(intf, msg->msgid);
++ requeue = 0;
++ goto out;
+ } else if (((msg->rsp[0] >> 2) != ((msg->data[0] >> 2) | 1))
+ || (msg->rsp[1] != msg->data[1])) {
+ /*
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 64dedb3ef8ec4..5604a810fb3d2 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2220,10 +2220,7 @@ static void cleanup_one_si(struct smi_info *smi_info)
+ return;
+
+ list_del(&smi_info->link);
+-
+- if (smi_info->intf)
+- ipmi_unregister_smi(smi_info->intf);
+-
++ ipmi_unregister_smi(smi_info->intf);
+ kfree(smi_info);
+ }
+
+diff --git a/drivers/firewire/core-card.c b/drivers/firewire/core-card.c
+index 54be88167c60b..f3b3953cac834 100644
+--- a/drivers/firewire/core-card.c
++++ b/drivers/firewire/core-card.c
+@@ -668,6 +668,7 @@ EXPORT_SYMBOL_GPL(fw_card_release);
+ void fw_core_remove_card(struct fw_card *card)
+ {
+ struct fw_card_driver dummy_driver = dummy_driver_template;
++ unsigned long flags;
+
+ card->driver->update_phy_reg(card, 4,
+ PHY_LINK_ACTIVE | PHY_CONTENDER, 0);
+@@ -682,7 +683,9 @@ void fw_core_remove_card(struct fw_card *card)
+ dummy_driver.stop_iso = card->driver->stop_iso;
+ card->driver = &dummy_driver;
+
++ spin_lock_irqsave(&card->lock, flags);
+ fw_destroy_nodes(card);
++ spin_unlock_irqrestore(&card->lock, flags);
+
+ /* Wait for all users, especially device workqueue jobs, to finish. */
+ fw_card_put(card);
+diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
+index 9f89c17730b12..708e417200f46 100644
+--- a/drivers/firewire/core-cdev.c
++++ b/drivers/firewire/core-cdev.c
+@@ -1500,6 +1500,7 @@ static void outbound_phy_packet_callback(struct fw_packet *packet,
+ {
+ struct outbound_phy_packet_event *e =
+ container_of(packet, struct outbound_phy_packet_event, p);
++ struct client *e_client;
+
+ switch (status) {
+ /* expected: */
+@@ -1516,9 +1517,10 @@ static void outbound_phy_packet_callback(struct fw_packet *packet,
+ }
+ e->phy_packet.data[0] = packet->timestamp;
+
++ e_client = e->client;
+ queue_event(e->client, &e->event, &e->phy_packet,
+ sizeof(e->phy_packet) + e->phy_packet.length, NULL, 0);
+- client_put(e->client);
++ client_put(e_client);
+ }
+
+ static int ioctl_send_phy_packet(struct client *client, union ioctl_arg *arg)
+diff --git a/drivers/firewire/core-topology.c b/drivers/firewire/core-topology.c
+index b63d55f5ebd33..f40c815343812 100644
+--- a/drivers/firewire/core-topology.c
++++ b/drivers/firewire/core-topology.c
+@@ -375,16 +375,13 @@ static void report_found_node(struct fw_card *card,
+ card->bm_retries = 0;
+ }
+
++/* Must be called with card->lock held */
+ void fw_destroy_nodes(struct fw_card *card)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&card->lock, flags);
+ card->color++;
+ if (card->local_node != NULL)
+ for_each_fw_node(card, card->local_node, report_lost_node);
+ card->local_node = NULL;
+- spin_unlock_irqrestore(&card->lock, flags);
+ }
+
+ static void move_tree(struct fw_node *node0, struct fw_node *node1, int port)
+@@ -510,6 +507,8 @@ void fw_core_handle_bus_reset(struct fw_card *card, int node_id, int generation,
+ struct fw_node *local_node;
+ unsigned long flags;
+
++ spin_lock_irqsave(&card->lock, flags);
++
+ /*
+ * If the selfID buffer is not the immediate successor of the
+ * previously processed one, we cannot reliably compare the
+@@ -521,8 +520,6 @@ void fw_core_handle_bus_reset(struct fw_card *card, int node_id, int generation,
+ card->bm_retries = 0;
+ }
+
+- spin_lock_irqsave(&card->lock, flags);
+-
+ card->broadcast_channel_allocated = card->broadcast_channel_auto_allocated;
+ card->node_id = node_id;
+ /*
+diff --git a/drivers/firewire/core-transaction.c b/drivers/firewire/core-transaction.c
+index ac487c96bb717..6c20815cc8d16 100644
+--- a/drivers/firewire/core-transaction.c
++++ b/drivers/firewire/core-transaction.c
+@@ -73,24 +73,25 @@ static int try_cancel_split_timeout(struct fw_transaction *t)
+ static int close_transaction(struct fw_transaction *transaction,
+ struct fw_card *card, int rcode)
+ {
+- struct fw_transaction *t;
++ struct fw_transaction *t = NULL, *iter;
+ unsigned long flags;
+
+ spin_lock_irqsave(&card->lock, flags);
+- list_for_each_entry(t, &card->transaction_list, link) {
+- if (t == transaction) {
+- if (!try_cancel_split_timeout(t)) {
++ list_for_each_entry(iter, &card->transaction_list, link) {
++ if (iter == transaction) {
++ if (!try_cancel_split_timeout(iter)) {
+ spin_unlock_irqrestore(&card->lock, flags);
+ goto timed_out;
+ }
+- list_del_init(&t->link);
+- card->tlabel_mask &= ~(1ULL << t->tlabel);
++ list_del_init(&iter->link);
++ card->tlabel_mask &= ~(1ULL << iter->tlabel);
++ t = iter;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&card->lock, flags);
+
+- if (&t->link != &card->transaction_list) {
++ if (t) {
+ t->callback(card, rcode, NULL, 0, t->callback_data);
+ return 0;
+ }
+@@ -935,7 +936,7 @@ EXPORT_SYMBOL(fw_core_handle_request);
+
+ void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
+ {
+- struct fw_transaction *t;
++ struct fw_transaction *t = NULL, *iter;
+ unsigned long flags;
+ u32 *data;
+ size_t data_length;
+@@ -947,20 +948,21 @@ void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
+ rcode = HEADER_GET_RCODE(p->header[1]);
+
+ spin_lock_irqsave(&card->lock, flags);
+- list_for_each_entry(t, &card->transaction_list, link) {
+- if (t->node_id == source && t->tlabel == tlabel) {
+- if (!try_cancel_split_timeout(t)) {
++ list_for_each_entry(iter, &card->transaction_list, link) {
++ if (iter->node_id == source && iter->tlabel == tlabel) {
++ if (!try_cancel_split_timeout(iter)) {
+ spin_unlock_irqrestore(&card->lock, flags);
+ goto timed_out;
+ }
+- list_del_init(&t->link);
+- card->tlabel_mask &= ~(1ULL << t->tlabel);
++ list_del_init(&iter->link);
++ card->tlabel_mask &= ~(1ULL << iter->tlabel);
++ t = iter;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&card->lock, flags);
+
+- if (&t->link == &card->transaction_list) {
++ if (!t) {
+ timed_out:
+ fw_notice(card, "unsolicited response (source %x, tlabel %x)\n",
+ source, tlabel);
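This hunk and the sbp2 one that follows fix the same idiom: after list_for_each_entry() runs to completion, the cursor no longer points at a real element, so testing &t->link against the list head inspects a type-punned head instead. The cure is a separate "found" pointer. A standalone illustration of the pattern with a plain singly linked list:

    #include <stddef.h>

    struct node { int key; struct node *next; };

    static struct node *find(struct node *head, int key)
    {
        struct node *found = NULL, *iter;

        for (iter = head; iter; iter = iter->next) {
            if (iter->key == key) {
                found = iter;   /* remember the match explicitly */
                break;
            }
        }
        return found;   /* NULL means "walked off the end"; never
                           dereference the exhausted iterator */
    }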
+diff --git a/drivers/firewire/sbp2.c b/drivers/firewire/sbp2.c
+index 85cd379fd3838..60051c0cabeaa 100644
+--- a/drivers/firewire/sbp2.c
++++ b/drivers/firewire/sbp2.c
+@@ -408,7 +408,7 @@ static void sbp2_status_write(struct fw_card *card, struct fw_request *request,
+ void *payload, size_t length, void *callback_data)
+ {
+ struct sbp2_logical_unit *lu = callback_data;
+- struct sbp2_orb *orb;
++ struct sbp2_orb *orb = NULL, *iter;
+ struct sbp2_status status;
+ unsigned long flags;
+
+@@ -433,17 +433,18 @@ static void sbp2_status_write(struct fw_card *card, struct fw_request *request,
+
+ /* Lookup the orb corresponding to this status write. */
+ spin_lock_irqsave(&lu->tgt->lock, flags);
+- list_for_each_entry(orb, &lu->orb_list, link) {
++ list_for_each_entry(iter, &lu->orb_list, link) {
+ if (STATUS_GET_ORB_HIGH(status) == 0 &&
+- STATUS_GET_ORB_LOW(status) == orb->request_bus) {
+- orb->rcode = RCODE_COMPLETE;
+- list_del(&orb->link);
++ STATUS_GET_ORB_LOW(status) == iter->request_bus) {
++ iter->rcode = RCODE_COMPLETE;
++ list_del(&iter->link);
++ orb = iter;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&lu->tgt->lock, flags);
+
+- if (&orb->link != &lu->orb_list) {
++ if (orb) {
+ orb->callback(orb, &status);
+ kref_put(&orb->kref, free_orb); /* orb callback reference */
+ } else {
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index 4c1f9e1091b7f..a2c8dd329b31b 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -871,13 +871,6 @@ static int mvebu_pwm_probe(struct platform_device *pdev,
+ mvpwm->chip.dev = dev;
+ mvpwm->chip.ops = &mvebu_pwm_ops;
+ mvpwm->chip.npwm = mvchip->chip.ngpio;
+- /*
+- * There may already be some PWM allocated, so we can't force
+- * mvpwm->chip.base to a fixed point like mvchip->chip.base.
+- * So, we let pwmchip_add() do the numbering and take the next free
+- * region.
+- */
+- mvpwm->chip.base = -1;
+
+ spin_lock_init(&mvpwm->lock);
+
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index d2fe76f3f34fd..8726921a11294 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -762,11 +762,11 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, unsigned long *pendin
+ bitmap_xor(cur_stat, new_stat, old_stat, gc->ngpio);
+ bitmap_and(trigger, cur_stat, chip->irq_mask, gc->ngpio);
+
++ bitmap_copy(chip->irq_stat, new_stat, gc->ngpio);
++
+ if (bitmap_empty(trigger, gc->ngpio))
+ return false;
+
+- bitmap_copy(chip->irq_stat, new_stat, gc->ngpio);
+-
+ bitmap_and(cur_stat, chip->irq_trig_fall, old_stat, gc->ngpio);
+ bitmap_and(old_stat, chip->irq_trig_raise, new_stat, gc->ngpio);
+ bitmap_or(new_stat, old_stat, cur_stat, gc->ngpio);
+diff --git a/drivers/gpio/gpio-visconti.c b/drivers/gpio/gpio-visconti.c
+index 47455810bdb91..e6534ea1eaa7a 100644
+--- a/drivers/gpio/gpio-visconti.c
++++ b/drivers/gpio/gpio-visconti.c
+@@ -130,7 +130,6 @@ static int visconti_gpio_probe(struct platform_device *pdev)
+ struct gpio_irq_chip *girq;
+ struct irq_domain *parent;
+ struct device_node *irq_parent;
+- struct fwnode_handle *fwnode;
+ int ret;
+
+ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+@@ -150,14 +149,12 @@ static int visconti_gpio_probe(struct platform_device *pdev)
+ }
+
+ parent = irq_find_host(irq_parent);
++ of_node_put(irq_parent);
+ if (!parent) {
+ dev_err(dev, "No IRQ parent domain\n");
+ return -ENODEV;
+ }
+
+- fwnode = of_node_to_fwnode(irq_parent);
+- of_node_put(irq_parent);
+-
+ ret = bgpio_init(&priv->gpio_chip, dev, 4,
+ priv->base + GPIO_IDATA,
+ priv->base + GPIO_OSET,
+@@ -180,7 +177,7 @@ static int visconti_gpio_probe(struct platform_device *pdev)
+
+ girq = &priv->gpio_chip.irq;
+ girq->chip = irq_chip;
+- girq->fwnode = fwnode;
++ girq->fwnode = of_node_to_fwnode(dev->of_node);
+ girq->parent_domain = parent;
+ girq->child_to_parent_hwirq = visconti_gpio_child_to_parent_hwirq;
+ girq->populate_parent_alloc_arg = visconti_gpio_populate_parent_fwspec;
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 91dcf2c6cdd84..775a7dadf9a39 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -912,7 +912,7 @@ static void of_gpiochip_init_valid_mask(struct gpio_chip *chip)
+ i, &start);
+ of_property_read_u32_index(np, "gpio-reserved-ranges",
+ i + 1, &count);
+- if (start >= chip->ngpio || start + count >= chip->ngpio)
++ if (start >= chip->ngpio || start + count > chip->ngpio)
+ continue;
+
+ bitmap_clear(chip->valid_mask, start, count);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 07bc0f5047130..5d065bf7c5146 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -24,6 +24,7 @@
+ #include <linux/module.h>
+
+ #include <drm/drm_drv.h>
++#include <xen/xen.h>
+
+ #include "amdgpu.h"
+ #include "amdgpu_ras.h"
+@@ -708,7 +709,8 @@ void amdgpu_detect_virtualization(struct amdgpu_device *adev)
+ adev->virt.caps |= AMDGPU_SRIOV_CAPS_ENABLE_IOV;
+
+ if (!reg) {
+- if (is_virtual_machine()) /* passthrough mode exclus sriov mod */
++ /* passthrough mode exclus sriov mod */
++ if (is_virtual_machine() && !xen_initial_domain())
+ adev->virt.caps |= AMDGPU_PASSTHROUGH_MODE;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 49d5271dcfdc8..bbe94e8729831 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -4634,7 +4634,7 @@ static void dp_test_get_audio_test_data(struct dc_link *link, bool disable_video
+ &dpcd_pattern_type.value,
+ sizeof(dpcd_pattern_type));
+
+- channel_count = dpcd_test_mode.bits.channel_count + 1;
++ channel_count = min(dpcd_test_mode.bits.channel_count + 1, AUDIO_CHANNELS_COUNT);
+
+ // read pattern periods for requested channels when sawTooth pattern is requested
+ if (dpcd_pattern_type.value == AUDIO_TEST_PATTERN_SAWTOOTH ||
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index af9c09c308601..1d7f82e6eafea 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -551,12 +551,6 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+
+ mutex_unlock(&dp->event_mutex);
+
+- /*
+- * add fail safe mode outside event_mutex scope
+- * to avoid potiential circular lock with drm thread
+- */
+- dp_panel_add_fail_safe_mode(dp->dp_display.connector);
+-
+ /* uevent will complete connection part */
+ return 0;
+ };
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 26c3653c99ec9..26f4b6959c31d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -151,15 +151,6 @@ static int dp_panel_update_modes(struct drm_connector *connector,
+ return rc;
+ }
+
+-void dp_panel_add_fail_safe_mode(struct drm_connector *connector)
+-{
+- /* fail safe edid */
+- mutex_lock(&connector->dev->mode_config.mutex);
+- if (drm_add_modes_noedid(connector, 640, 480))
+- drm_set_preferred_mode(connector, 640, 480);
+- mutex_unlock(&connector->dev->mode_config.mutex);
+-}
+-
+ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ struct drm_connector *connector)
+ {
+@@ -215,8 +206,6 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ rc = -ETIMEDOUT;
+ goto end;
+ }
+-
+- dp_panel_add_fail_safe_mode(connector);
+ }
+
+ if (panel->aux_cfg_update_done) {
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.h b/drivers/gpu/drm/msm/dp/dp_panel.h
+index 99739ea679a77..9023e5bb4b8b2 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.h
++++ b/drivers/gpu/drm/msm/dp/dp_panel.h
+@@ -59,7 +59,6 @@ int dp_panel_init_panel_info(struct dp_panel *dp_panel);
+ int dp_panel_deinit(struct dp_panel *dp_panel);
+ int dp_panel_timing_cfg(struct dp_panel *dp_panel);
+ void dp_panel_dump_regs(struct dp_panel *dp_panel);
+-void dp_panel_add_fail_safe_mode(struct drm_connector *connector);
+ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ struct drm_connector *connector);
+ u32 dp_panel_get_mode_bpp(struct dp_panel *dp_panel, u32 mode_max_bpp,
+diff --git a/drivers/hwmon/adt7470.c b/drivers/hwmon/adt7470.c
+index fb6d14d213a18..c67cd037a93fd 100644
+--- a/drivers/hwmon/adt7470.c
++++ b/drivers/hwmon/adt7470.c
+@@ -19,6 +19,7 @@
+ #include <linux/log2.h>
+ #include <linux/kthread.h>
+ #include <linux/regmap.h>
++#include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/util_macros.h>
+
+@@ -294,11 +295,10 @@ static int adt7470_update_thread(void *p)
+ adt7470_read_temperatures(data);
+ mutex_unlock(&data->lock);
+
+- set_current_state(TASK_INTERRUPTIBLE);
+ if (kthread_should_stop())
+ break;
+
+- schedule_timeout(msecs_to_jiffies(data->auto_update_interval));
++ schedule_timeout_interruptible(msecs_to_jiffies(data->auto_update_interval));
+ }
+
+ return 0;
+diff --git a/drivers/hwmon/pmbus/delta-ahe50dc-fan.c b/drivers/hwmon/pmbus/delta-ahe50dc-fan.c
+index 40dffd9c4cbfc..f546f0c12497b 100644
+--- a/drivers/hwmon/pmbus/delta-ahe50dc-fan.c
++++ b/drivers/hwmon/pmbus/delta-ahe50dc-fan.c
+@@ -14,6 +14,21 @@
+
+ #define AHE50DC_PMBUS_READ_TEMP4 0xd0
+
++static int ahe50dc_fan_write_byte(struct i2c_client *client, int page, u8 value)
++{
++ /*
++ * The CLEAR_FAULTS operation seems to sometimes (unpredictably, perhaps
++ * 5% of the time or so) trigger a problematic phenomenon in which the
++ * fan speeds surge momentarily and at least some (perhaps all?) of the
++ * system's power outputs experience a glitch.
++ *
++ * However, according to Delta it should be OK to simply not send any
++ * CLEAR_FAULTS commands (the device doesn't seem to be capable of
++ * reporting any faults anyway), so just blackhole them unconditionally.
++ */
++ return value == PMBUS_CLEAR_FAULTS ? -EOPNOTSUPP : -ENODATA;
++}
++
+ static int ahe50dc_fan_read_word_data(struct i2c_client *client, int page, int phase, int reg)
+ {
+ /* temp1 in (virtual) page 1 is remapped to mfr-specific temp4 */
+@@ -68,6 +83,7 @@ static struct pmbus_driver_info ahe50dc_fan_info = {
+ PMBUS_HAVE_VIN | PMBUS_HAVE_FAN12 | PMBUS_HAVE_FAN34 |
+ PMBUS_HAVE_STATUS_FAN12 | PMBUS_HAVE_STATUS_FAN34 | PMBUS_PAGE_VIRTUAL,
+ .func[1] = PMBUS_HAVE_TEMP | PMBUS_PAGE_VIRTUAL,
++ .write_byte = ahe50dc_fan_write_byte,
+ .read_word_data = ahe50dc_fan_read_word_data,
+ };
+
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index ca0bfaf2f6911..5f8f824d997f8 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -2326,6 +2326,9 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ data->has_status_word = true;
+ }
+
++ /* Make sure PEC is disabled, will be enabled later if needed */
++ client->flags &= ~I2C_CLIENT_PEC;
++
+ /* Enable PEC if the controller and bus supports it */
+ if (!(data->flags & PMBUS_NO_CAPABILITY)) {
+ ret = i2c_smbus_read_byte_data(client, PMBUS_CAPABILITY);
+diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
+index 6dea0a49d1718..082a3ddb0fa3b 100644
+--- a/drivers/infiniband/hw/irdma/cm.c
++++ b/drivers/infiniband/hw/irdma/cm.c
+@@ -2305,10 +2305,8 @@ err:
+ return NULL;
+ }
+
+-static void irdma_cm_node_free_cb(struct rcu_head *rcu_head)
++static void irdma_destroy_connection(struct irdma_cm_node *cm_node)
+ {
+- struct irdma_cm_node *cm_node =
+- container_of(rcu_head, struct irdma_cm_node, rcu_head);
+ struct irdma_cm_core *cm_core = cm_node->cm_core;
+ struct irdma_qp *iwqp;
+ struct irdma_cm_info nfo;
+@@ -2356,7 +2354,6 @@ static void irdma_cm_node_free_cb(struct rcu_head *rcu_head)
+ }
+
+ cm_core->cm_free_ah(cm_node);
+- kfree(cm_node);
+ }
+
+ /**
+@@ -2384,8 +2381,9 @@ void irdma_rem_ref_cm_node(struct irdma_cm_node *cm_node)
+
+ spin_unlock_irqrestore(&cm_core->ht_lock, flags);
+
+- /* wait for all list walkers to exit their grace period */
+- call_rcu(&cm_node->rcu_head, irdma_cm_node_free_cb);
++ irdma_destroy_connection(cm_node);
++
++ kfree_rcu(cm_node, rcu_head);
+ }
+
+ /**
+@@ -3465,12 +3463,6 @@ static void irdma_cm_disconn_true(struct irdma_qp *iwqp)
+ }
+
+ cm_id = iwqp->cm_id;
+- /* make sure we havent already closed this connection */
+- if (!cm_id) {
+- spin_unlock_irqrestore(&iwqp->lock, flags);
+- return;
+- }
+-
+ original_hw_tcp_state = iwqp->hw_tcp_state;
+ original_ibqp_state = iwqp->ibqp_state;
+ last_ae = iwqp->last_aeq;
+@@ -3492,11 +3484,11 @@ static void irdma_cm_disconn_true(struct irdma_qp *iwqp)
+ disconn_status = -ECONNRESET;
+ }
+
+- if ((original_hw_tcp_state == IRDMA_TCP_STATE_CLOSED ||
+- original_hw_tcp_state == IRDMA_TCP_STATE_TIME_WAIT ||
+- last_ae == IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE ||
+- last_ae == IRDMA_AE_BAD_CLOSE ||
+- last_ae == IRDMA_AE_LLP_CONNECTION_RESET || iwdev->rf->reset)) {
++ if (original_hw_tcp_state == IRDMA_TCP_STATE_CLOSED ||
++ original_hw_tcp_state == IRDMA_TCP_STATE_TIME_WAIT ||
++ last_ae == IRDMA_AE_RDMAP_ROE_BAD_LLP_CLOSE ||
++ last_ae == IRDMA_AE_BAD_CLOSE ||
++ last_ae == IRDMA_AE_LLP_CONNECTION_RESET || iwdev->rf->reset || !cm_id) {
+ issue_close = 1;
+ iwqp->cm_id = NULL;
+ qp->term_flags = 0;
+diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
+index e81b74a518dd0..7f72a006367fe 100644
+--- a/drivers/infiniband/hw/irdma/utils.c
++++ b/drivers/infiniband/hw/irdma/utils.c
+@@ -258,18 +258,16 @@ int irdma_net_event(struct notifier_block *notifier, unsigned long event,
+ u32 local_ipaddr[4] = {};
+ bool ipv4 = true;
+
+- real_dev = rdma_vlan_dev_real_dev(netdev);
+- if (!real_dev)
+- real_dev = netdev;
+-
+- ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+- if (!ibdev)
+- return NOTIFY_DONE;
+-
+- iwdev = to_iwdev(ibdev);
+-
+ switch (event) {
+ case NETEVENT_NEIGH_UPDATE:
++ real_dev = rdma_vlan_dev_real_dev(netdev);
++ if (!real_dev)
++ real_dev = netdev;
++ ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
++ if (!ibdev)
++ return NOTIFY_DONE;
++
++ iwdev = to_iwdev(ibdev);
+ p = (__be32 *)neigh->primary_key;
+ if (neigh->tbl->family == AF_INET6) {
+ ipv4 = false;
+@@ -290,13 +288,12 @@ int irdma_net_event(struct notifier_block *notifier, unsigned long event,
+ irdma_manage_arp_cache(iwdev->rf, neigh->ha,
+ local_ipaddr, ipv4,
+ IRDMA_ARP_DELETE);
++ ib_device_put(ibdev);
+ break;
+ default:
+ break;
+ }
+
+- ib_device_put(ibdev);
+-
+ return NOTIFY_DONE;
+ }
+
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 1bf6404ec8340..c2aa713e0f4d7 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -1620,13 +1620,13 @@ int irdma_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, int attr_mask,
+
+ if (issue_modify_qp && iwqp->ibqp_state > IB_QPS_RTS) {
+ if (dont_wait) {
+- if (iwqp->cm_id && iwqp->hw_tcp_state) {
++ if (iwqp->hw_tcp_state) {
+ spin_lock_irqsave(&iwqp->lock, flags);
+ iwqp->hw_tcp_state = IRDMA_TCP_STATE_CLOSED;
+ iwqp->last_aeq = IRDMA_AE_RESET_SENT;
+ spin_unlock_irqrestore(&iwqp->lock, flags);
+- irdma_cm_disconn(iwqp);
+ }
++ irdma_cm_disconn(iwqp);
+ } else {
+ int close_timer_started;
+
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index 7acdd3c3a599d..17f34d584cd9e 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -968,14 +968,15 @@ static void siw_accept_newconn(struct siw_cep *cep)
+
+ siw_cep_set_inuse(new_cep);
+ rv = siw_proc_mpareq(new_cep);
+- siw_cep_set_free(new_cep);
+-
+ if (rv != -EAGAIN) {
+ siw_cep_put(cep);
+ new_cep->listen_cep = NULL;
+- if (rv)
++ if (rv) {
++ siw_cep_set_free(new_cep);
+ goto error;
++ }
+ }
++ siw_cep_set_free(new_cep);
+ }
+ return;
+
+diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
+index 565ef55988112..68821f86b063c 100644
+--- a/drivers/iommu/apple-dart.c
++++ b/drivers/iommu/apple-dart.c
+@@ -782,6 +782,7 @@ static const struct iommu_ops apple_dart_iommu_ops = {
+ .get_resv_regions = apple_dart_get_resv_regions,
+ .put_resv_regions = generic_iommu_put_resv_regions,
+ .pgsize_bitmap = -1UL, /* Restricted during dart probe */
++ .owner = THIS_MODULE,
+ };
+
+ static irqreturn_t apple_dart_irq(int irq, void *dev)
+@@ -857,16 +858,15 @@ static int apple_dart_probe(struct platform_device *pdev)
+ dart->dev = dev;
+ spin_lock_init(&dart->lock);
+
+- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ dart->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
++ if (IS_ERR(dart->regs))
++ return PTR_ERR(dart->regs);
++
+ if (resource_size(res) < 0x4000) {
+ dev_err(dev, "MMIO region too small (%pr)\n", res);
+ return -EINVAL;
+ }
+
+- dart->regs = devm_ioremap_resource(dev, res);
+- if (IS_ERR(dart->regs))
+- return PTR_ERR(dart->regs);
+-
+ dart->irq = platform_get_irq(pdev, 0);
+ if (dart->irq < 0)
+ return -ENODEV;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+index a737ba5f727e6..f9e9b4fb78bd5 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+@@ -183,7 +183,14 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
+ {
+ struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
+ struct arm_smmu_domain *smmu_domain = smmu_mn->domain;
+- size_t size = end - start + 1;
++ size_t size;
++
++ /*
++ * The mm_types defines vm_end as the first byte after the end address,
++ * different from IOMMU subsystem using the last address of an address
++ * range. So do a simple translation here by calculating size correctly.
++ */
++ size = end - start;
+
+ if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
+ arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 5b196cfe9ed23..ab22733003464 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -1717,7 +1717,8 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
+ unsigned long pfn, unsigned int pages,
+ int ih, int map)
+ {
+- unsigned int mask = ilog2(__roundup_pow_of_two(pages));
++ unsigned int aligned_pages = __roundup_pow_of_two(pages);
++ unsigned int mask = ilog2(aligned_pages);
+ uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
+ u16 did = domain->iommu_did[iommu->seq_id];
+
+@@ -1729,10 +1730,30 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
+ if (domain_use_first_level(domain)) {
+ domain_flush_piotlb(iommu, domain, addr, pages, ih);
+ } else {
++ unsigned long bitmask = aligned_pages - 1;
++
++ /*
++ * PSI masks the low order bits of the base address. If the
++ * address isn't aligned to the mask, then compute a mask value
++ * needed to ensure the target range is flushed.
++ */
++ if (unlikely(bitmask & pfn)) {
++ unsigned long end_pfn = pfn + pages - 1, shared_bits;
++
++ /*
++ * Since end_pfn <= pfn + bitmask, the only way bits
++ * higher than bitmask can differ in pfn and end_pfn is
++ * by carrying. This means after masking out bitmask,
++ * high bits starting with the first set bit in
++ * shared_bits are all equal in both pfn and end_pfn.
++ */
++ shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
++ mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
++ }
++
+ /*
+ * Fallback to domain selective flush if no PSI support or
+- * the size is too big. PSI requires page size to be 2 ^ x,
+- * and the base address is naturally aligned to the size.
++ * the size is too big.
+ */
+ if (!cap_pgsel_inv(iommu->cap) ||
+ mask > cap_max_amask_val(iommu->cap))
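The shared_bits arithmetic deserves a worked example: for an unaligned base, end_pfn can only differ from pfn above the alignment mask by a carry, so the lowest bit at which they still agree bounds the power-of-two region that covers the whole range. A standalone check of the computation:

    #include <stdio.h>

    static unsigned int psi_mask(unsigned long pfn, unsigned long pages)
    {
        unsigned long aligned = 1, end = pfn + pages - 1, shared;
        unsigned int mask = 0;

        while (aligned < pages) {       /* __roundup_pow_of_two + ilog2 */
            aligned <<= 1;
            mask++;
        }
        if ((aligned - 1) & pfn) {      /* base not aligned to the mask */
            shared = ~(pfn ^ end) & ~(aligned - 1);
            /* no shared high bits: mask goes to BITS_PER_LONG and the
             * kernel falls back to a domain-selective flush */
            mask = shared ? (unsigned int)__builtin_ctzl(shared)
                          : 8 * sizeof(unsigned long);
        }
        return mask;
    }

    int main(void)
    {
        /* pfns 0x1001-0x1002: 2 pages, unaligned, need a 4-page flush */
        printf("mask=%u\n", psi_mask(0x1001, 2));   /* prints mask=2 */
        return 0;
    }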
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 5b5d69b04fcc8..06e51f7241877 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -956,6 +956,10 @@ bad_req:
+ goto bad_req;
+ }
+
++ /* Drop Stop Marker message. No need for a response. */
++ if (unlikely(req->lpig && !req->rd_req && !req->wr_req))
++ goto prq_advance;
++
+ if (!svm || svm->pasid != req->pasid) {
+ /*
+ * It can't go away, because the driver is not permitted
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 43d1b9b2fa499..64f47ec9266a9 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -1389,13 +1389,17 @@ static int mmc_select_hs400es(struct mmc_card *card)
+ goto out_err;
+ }
+
++ /*
++ * Bump to HS timing and frequency. Some cards don't handle
++ * SEND_STATUS reliably at the initial frequency.
++ */
+ mmc_set_timing(host, MMC_TIMING_MMC_HS);
++ mmc_set_bus_speed(card);
++
+ err = mmc_switch_status(card, true);
+ if (err)
+ goto out_err;
+
+- mmc_set_clock(host, card->ext_csd.hs_max_dtr);
+-
+ /* Switch card to DDR with strobe bit */
+ val = EXT_CSD_DDR_BUS_WIDTH_8 | EXT_CSD_BUS_WIDTH_STROBE;
+ err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
+@@ -1453,7 +1457,7 @@ out_err:
+ static int mmc_select_hs200(struct mmc_card *card)
+ {
+ struct mmc_host *host = card->host;
+- unsigned int old_timing, old_signal_voltage;
++ unsigned int old_timing, old_signal_voltage, old_clock;
+ int err = -EINVAL;
+ u8 val;
+
+@@ -1484,8 +1488,17 @@ static int mmc_select_hs200(struct mmc_card *card)
+ false, true, MMC_CMD_RETRIES);
+ if (err)
+ goto err;
++
++ /*
++ * Bump to HS timing and frequency. Some cards don't handle
++ * SEND_STATUS reliably at the initial frequency.
++ * NB: We can't move to full (HS200) speeds until after we've
++ * successfully switched over.
++ */
+ old_timing = host->ios.timing;
++ old_clock = host->ios.clock;
+ mmc_set_timing(host, MMC_TIMING_MMC_HS200);
++ mmc_set_clock(card->host, card->ext_csd.hs_max_dtr);
+
+ /*
+ * For HS200, CRC errors are not a reliable way to know the
+@@ -1498,8 +1511,10 @@ static int mmc_select_hs200(struct mmc_card *card)
+ * mmc_select_timing() assumes timing has not changed if
+ * it is a switch error.
+ */
+- if (err == -EBADMSG)
++ if (err == -EBADMSG) {
++ mmc_set_clock(host, old_clock);
+ mmc_set_timing(host, old_timing);
++ }
+ }
+ err:
+ if (err) {
+diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
+index f7c384db89bf3..e1580f78c6b2d 100644
+--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
+@@ -38,10 +38,7 @@ struct realtek_pci_sdmmc {
+ bool double_clk;
+ bool eject;
+ bool initial_mode;
+- int power_state;
+-#define SDMMC_POWER_ON 1
+-#define SDMMC_POWER_OFF 0
+-
++ int prev_power_state;
+ int sg_count;
+ s32 cookie;
+ int cookie_sg_count;
+@@ -905,7 +902,7 @@ static int sd_set_bus_width(struct realtek_pci_sdmmc *host,
+ return err;
+ }
+
+-static int sd_power_on(struct realtek_pci_sdmmc *host)
++static int sd_power_on(struct realtek_pci_sdmmc *host, unsigned char power_mode)
+ {
+ struct rtsx_pcr *pcr = host->pcr;
+ struct mmc_host *mmc = host->mmc;
+@@ -913,9 +910,14 @@ static int sd_power_on(struct realtek_pci_sdmmc *host)
+ u32 val;
+ u8 test_mode;
+
+- if (host->power_state == SDMMC_POWER_ON)
++ if (host->prev_power_state == MMC_POWER_ON)
+ return 0;
+
++ if (host->prev_power_state == MMC_POWER_UP) {
++ rtsx_pci_write_register(pcr, SD_BUS_STAT, SD_CLK_TOGGLE_EN, 0);
++ goto finish;
++ }
++
+ msleep(100);
+
+ rtsx_pci_init_cmd(pcr);
+@@ -936,10 +938,15 @@ static int sd_power_on(struct realtek_pci_sdmmc *host)
+ if (err < 0)
+ return err;
+
++ mdelay(1);
++
+ err = rtsx_pci_write_register(pcr, CARD_OE, SD_OUTPUT_EN, SD_OUTPUT_EN);
+ if (err < 0)
+ return err;
+
++ /* send at least 74 clocks */
++ rtsx_pci_write_register(pcr, SD_BUS_STAT, SD_CLK_TOGGLE_EN, SD_CLK_TOGGLE_EN);
++
+ if (PCI_PID(pcr) == PID_5261) {
+ /*
+ * If test mode is set switch to SD Express mandatorily,
+@@ -964,7 +971,8 @@ static int sd_power_on(struct realtek_pci_sdmmc *host)
+ }
+ }
+
+- host->power_state = SDMMC_POWER_ON;
++finish:
++ host->prev_power_state = power_mode;
+ return 0;
+ }
+
+@@ -973,7 +981,7 @@ static int sd_power_off(struct realtek_pci_sdmmc *host)
+ struct rtsx_pcr *pcr = host->pcr;
+ int err;
+
+- host->power_state = SDMMC_POWER_OFF;
++ host->prev_power_state = MMC_POWER_OFF;
+
+ rtsx_pci_init_cmd(pcr);
+
+@@ -999,7 +1007,7 @@ static int sd_set_power_mode(struct realtek_pci_sdmmc *host,
+ if (power_mode == MMC_POWER_OFF)
+ err = sd_power_off(host);
+ else
+- err = sd_power_on(host);
++ err = sd_power_on(host, power_mode);
+
+ return err;
+ }
+@@ -1482,10 +1490,11 @@ static int rtsx_pci_sdmmc_drv_probe(struct platform_device *pdev)
+
+ host = mmc_priv(mmc);
+ host->pcr = pcr;
++ mmc->ios.power_delay_ms = 5;
+ host->mmc = mmc;
+ host->pdev = pdev;
+ host->cookie = -1;
+- host->power_state = SDMMC_POWER_OFF;
++ host->prev_power_state = MMC_POWER_OFF;
+ INIT_WORK(&host->work, sd_request);
+ platform_set_drvdata(pdev, host);
+ pcr->slots[RTSX_SD_CARD].p_dev = pdev;
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 50c71e0ba5e4e..ff9f5b63c337e 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -17,6 +17,7 @@
+ #include <linux/regulator/consumer.h>
+ #include <linux/interconnect.h>
+ #include <linux/pinctrl/consumer.h>
++#include <linux/reset.h>
+
+ #include "sdhci-pltfm.h"
+ #include "cqhci.h"
+@@ -2482,6 +2483,43 @@ static inline void sdhci_msm_get_of_property(struct platform_device *pdev,
+ of_property_read_u32(node, "qcom,dll-config", &msm_host->dll_config);
+ }
+
++static int sdhci_msm_gcc_reset(struct device *dev, struct sdhci_host *host)
++{
++ struct reset_control *reset;
++ int ret = 0;
++
++ reset = reset_control_get_optional_exclusive(dev, NULL);
++ if (IS_ERR(reset))
++ return dev_err_probe(dev, PTR_ERR(reset),
++ "unable to acquire core_reset\n");
++
++ if (!reset)
++ return ret;
++
++ ret = reset_control_assert(reset);
++ if (ret) {
++ reset_control_put(reset);
++ return dev_err_probe(dev, ret, "core_reset assert failed\n");
++ }
++
++ /*
++ * The hardware requirement for delay between assert/deassert
++ * is at least 3-4 sleep clock (32.7KHz) cycles, which comes to
++ * ~125us (4/32768). To be on the safe side add 200us delay.
++ */
++ usleep_range(200, 210);
++
++ ret = reset_control_deassert(reset);
++ if (ret) {
++ reset_control_put(reset);
++ return dev_err_probe(dev, ret, "core_reset deassert failed\n");
++ }
++
++ usleep_range(200, 210);
++ reset_control_put(reset);
++
++ return ret;
++}
+
+ static int sdhci_msm_probe(struct platform_device *pdev)
+ {
+@@ -2529,6 +2567,10 @@ static int sdhci_msm_probe(struct platform_device *pdev)
+
+ msm_host->saved_tuning_phase = INVALID_TUNING_PHASE;
+
++ ret = sdhci_msm_gcc_reset(&pdev->dev, host);
++ if (ret)
++ goto pltfm_free;
++
+ /* Setup SDCC bus voter clock. */
+ msm_host->bus_clk = devm_clk_get(&pdev->dev, "bus");
+ if (!IS_ERR(msm_host->bus_clk)) {
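The delay comment in sdhci_msm_gcc_reset() is simple arithmetic worth checking: four cycles of the 32.768 kHz sleep clock is 4/32768 s, roughly 122 us, which the driver rounds up to a 200-210 us window. A one-liner to verify:

    #include <stdio.h>

    int main(void)
    {
        double us = 4.0 / 32768.0 * 1e6;    /* four sleep-clock cycles */

        printf("%.1f us minimum; driver waits 200-210 us\n", us);   /* 122.1 */
        return 0;
    }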
+diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
+index 2702736a1c57d..ce6cb8be654ef 100644
+--- a/drivers/mmc/host/sunxi-mmc.c
++++ b/drivers/mmc/host/sunxi-mmc.c
+@@ -377,8 +377,9 @@ static void sunxi_mmc_init_idma_des(struct sunxi_mmc_host *host,
+ pdes[i].buf_addr_ptr1 =
+ cpu_to_le32(sg_dma_address(&data->sg[i]) >>
+ host->cfg->idma_des_shift);
+- pdes[i].buf_addr_ptr2 = cpu_to_le32((u32)next_desc >>
+- host->cfg->idma_des_shift);
++ pdes[i].buf_addr_ptr2 =
++ cpu_to_le32(next_desc >>
++ host->cfg->idma_des_shift);
+ }
+
+ pdes[0].config |= cpu_to_le32(SDXC_IDMAC_DES0_FD);
+diff --git a/drivers/net/can/grcan.c b/drivers/net/can/grcan.c
+index d0c5a7a60dafb..5215bd9b2c80d 100644
+--- a/drivers/net/can/grcan.c
++++ b/drivers/net/can/grcan.c
+@@ -241,13 +241,14 @@ struct grcan_device_config {
+ .rxsize = GRCAN_DEFAULT_BUFFER_SIZE, \
+ }
+
+-#define GRCAN_TXBUG_SAFE_GRLIB_VERSION 0x4100
++#define GRCAN_TXBUG_SAFE_GRLIB_VERSION 4100
+ #define GRLIB_VERSION_MASK 0xffff
+
+ /* GRCAN private data structure */
+ struct grcan_priv {
+ struct can_priv can; /* must be the first member */
+ struct net_device *dev;
++ struct device *ofdev_dev;
+ struct napi_struct napi;
+
+ struct grcan_registers __iomem *regs; /* ioremap'ed registers */
+@@ -921,7 +922,7 @@ static void grcan_free_dma_buffers(struct net_device *dev)
+ struct grcan_priv *priv = netdev_priv(dev);
+ struct grcan_dma *dma = &priv->dma;
+
+- dma_free_coherent(&dev->dev, dma->base_size, dma->base_buf,
++ dma_free_coherent(priv->ofdev_dev, dma->base_size, dma->base_buf,
+ dma->base_handle);
+ memset(dma, 0, sizeof(*dma));
+ }
+@@ -946,7 +947,7 @@ static int grcan_allocate_dma_buffers(struct net_device *dev,
+
+ /* Extra GRCAN_BUFFER_ALIGNMENT to allow for alignment */
+ dma->base_size = lsize + ssize + GRCAN_BUFFER_ALIGNMENT;
+- dma->base_buf = dma_alloc_coherent(&dev->dev,
++ dma->base_buf = dma_alloc_coherent(priv->ofdev_dev,
+ dma->base_size,
+ &dma->base_handle,
+ GFP_KERNEL);
+@@ -1102,8 +1103,10 @@ static int grcan_close(struct net_device *dev)
+
+ priv->closing = true;
+ if (priv->need_txbug_workaround) {
++ spin_unlock_irqrestore(&priv->lock, flags);
+ del_timer_sync(&priv->hang_timer);
+ del_timer_sync(&priv->rr_timer);
++ spin_lock_irqsave(&priv->lock, flags);
+ }
+ netif_stop_queue(dev);
+ grcan_stop_hardware(dev);
+@@ -1122,7 +1125,7 @@ static int grcan_close(struct net_device *dev)
+ return 0;
+ }
+
+-static int grcan_transmit_catch_up(struct net_device *dev, int budget)
++static void grcan_transmit_catch_up(struct net_device *dev)
+ {
+ struct grcan_priv *priv = netdev_priv(dev);
+ unsigned long flags;
+@@ -1130,7 +1133,7 @@ static int grcan_transmit_catch_up(struct net_device *dev, int budget)
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+- work_done = catch_up_echo_skb(dev, budget, true);
++ work_done = catch_up_echo_skb(dev, -1, true);
+ if (work_done) {
+ if (!priv->resetting && !priv->closing &&
+ !(priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
+@@ -1144,8 +1147,6 @@ static int grcan_transmit_catch_up(struct net_device *dev, int budget)
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+-
+- return work_done;
+ }
+
+ static int grcan_receive(struct net_device *dev, int budget)
+@@ -1227,19 +1228,13 @@ static int grcan_poll(struct napi_struct *napi, int budget)
+ struct net_device *dev = priv->dev;
+ struct grcan_registers __iomem *regs = priv->regs;
+ unsigned long flags;
+- int tx_work_done, rx_work_done;
+- int rx_budget = budget / 2;
+- int tx_budget = budget - rx_budget;
++ int work_done;
+
+- /* Half of the budget for receiving messages */
+- rx_work_done = grcan_receive(dev, rx_budget);
++ work_done = grcan_receive(dev, budget);
+
+- /* Half of the budget for transmitting messages as that can trigger echo
+- * frames being received
+- */
+- tx_work_done = grcan_transmit_catch_up(dev, tx_budget);
++ grcan_transmit_catch_up(dev);
+
+- if (rx_work_done < rx_budget && tx_work_done < tx_budget) {
++ if (work_done < budget) {
+ napi_complete(napi);
+
+ /* Guarantee no interference with a running reset that otherwise
+@@ -1256,7 +1251,7 @@ static int grcan_poll(struct napi_struct *napi, int budget)
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+- return rx_work_done + tx_work_done;
++ return work_done;
+ }
+
+ /* Work tx bug by waiting while for the risky situation to clear. If that fails,
+@@ -1587,6 +1582,7 @@ static int grcan_setup_netdev(struct platform_device *ofdev,
+ memcpy(&priv->config, &grcan_module_config,
+ sizeof(struct grcan_device_config));
+ priv->dev = dev;
++ priv->ofdev_dev = &ofdev->dev;
+ priv->regs = base;
+ priv->can.bittiming_const = &grcan_bittiming_const;
+ priv->can.do_set_bittiming = grcan_set_bittiming;
+@@ -1639,6 +1635,7 @@ exit_free_candev:
+ static int grcan_probe(struct platform_device *ofdev)
+ {
+ struct device_node *np = ofdev->dev.of_node;
++ struct device_node *sysid_parent;
+ u32 sysid, ambafreq;
+ int irq, err;
+ void __iomem *base;
+@@ -1647,10 +1644,15 @@ static int grcan_probe(struct platform_device *ofdev)
+ /* Compare GRLIB version number with the first that does not
+ * have the tx bug (see start_xmit)
+ */
+- err = of_property_read_u32(np, "systemid", &sysid);
+- if (!err && ((sysid & GRLIB_VERSION_MASK)
+- >= GRCAN_TXBUG_SAFE_GRLIB_VERSION))
+- txbug = false;
++ sysid_parent = of_find_node_by_path("/ambapp0");
++ if (sysid_parent) {
++ of_node_get(sysid_parent);
++ err = of_property_read_u32(sysid_parent, "systemid", &sysid);
++ if (!err && ((sysid & GRLIB_VERSION_MASK) >=
++ GRCAN_TXBUG_SAFE_GRLIB_VERSION))
++ txbug = false;
++ of_node_put(sysid_parent);
++ }
+
+ err = of_property_read_u32(np, "freq", &ambafreq);
+ if (err) {
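
The grcan rework above collapses the old split rx/tx budget into a single
receive budget: transmit catch-up no longer consumes NAPI budget (echo-skb
completion is cheap and unbounded), and the poll routine only completes and
re-enables interrupts once receive work comes in under budget. A minimal
userspace sketch of that contract follows; all names are hypothetical and
none of this is the driver's actual API.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical device with a receive backlog. */
struct dev {
        int rx_pending;
        bool irq_enabled;
};

/* Process up to 'budget' frames; return how many were handled. */
static int dev_receive(struct dev *d, int budget)
{
        int done = 0;

        while (done < budget && d->rx_pending > 0) {
                d->rx_pending--;
                done++;
        }
        return done;
}

/* NAPI-style poll: stop polling only when under budget. */
static int dev_poll(struct dev *d, int budget)
{
        int work_done = dev_receive(d, budget);

        if (work_done < budget)
                d->irq_enabled = true;  /* napi_complete() analogue */
        return work_done;
}

int main(void)
{
        struct dev d = { .rx_pending = 10, .irq_enabled = false };

        while (!d.irq_enabled)
                printf("polled %d frames\n", dev_poll(&d, 4));
        return 0;
}

With a budget of 4 and 10 pending frames this polls 4, 4, then 2, and only
the final under-budget pass re-enables the interrupt analogue.
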
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index a251bc55727ff..fcdd022b24986 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2224,6 +2224,7 @@ mt7530_setup(struct dsa_switch *ds)
+ ret = of_get_phy_mode(mac_np, &interface);
+ if (ret && ret != -ENODEV) {
+ of_node_put(mac_np);
++ of_node_put(phy_node);
+ return ret;
+ }
+ id = of_mdio_parse_addr(ds->dev, phy_node);
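
This mt7530 hunk, like the later mtk_sgmii, dwmac-sun8i, cpsw and
xilinx_emaclite hunks, fixes one recurring leak class: a device-tree node
reference taken by a get-style lookup must be dropped with of_node_put() on
every exit path, early errors included. A toy refcount model of the rule,
using invented node_get()/node_put() helpers rather than the real OF API:

#include <errno.h>
#include <stdio.h>

struct node {
        int refcount;
};

static struct node *node_get(struct node *n)
{
        n->refcount++;
        return n;
}

static void node_put(struct node *n)
{
        n->refcount--;
}

/* Every return path after node_get() must pair with node_put(). */
static int probe(struct node *n, int fail_early)
{
        int err = 0;

        node_get(n);
        if (fail_early) {
                err = -EINVAL;
                goto out;       /* error path still reaches the put */
        }
        /* ... use n ... */
out:
        node_put(n);
        return err;
}

int main(void)
{
        struct node n = { .refcount = 1 };

        probe(&n, 1);
        probe(&n, 0);
        printf("refcount=%d (expect 1)\n", n.refcount);
        return 0;
}

The single-exit goto keeps the put in one place, which is exactly what the
fixed error paths above were missing.
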
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 6af0ae1d0c462..9167517de3d97 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2678,6 +2678,10 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget)
+ u32 idx = le32_to_cpu(nqcmp->cq_handle_low);
+ struct bnxt_cp_ring_info *cpr2;
+
++ /* No more budget for RX work */
++ if (budget && work_done >= budget && idx == BNXT_RX_HDL)
++ break;
++
+ cpr2 = cpr->cp_ring_arr[idx];
+ work_done += __bnxt_poll_work(bp, cpr2,
+ budget - work_done);
+@@ -10938,7 +10942,7 @@ static bool bnxt_rfs_capable(struct bnxt *bp)
+
+ if (bp->flags & BNXT_FLAG_CHIP_P5)
+ return bnxt_rfs_supported(bp);
+- if (!(bp->flags & BNXT_FLAG_MSIX_CAP) || !bnxt_can_reserve_rings(bp))
++ if (!(bp->flags & BNXT_FLAG_MSIX_CAP) || !bnxt_can_reserve_rings(bp) || !bp->rx_nr_rings)
+ return false;
+
+ vnics = 1 + bp->rx_nr_rings;
+@@ -13194,10 +13198,9 @@ static int bnxt_init_dflt_ring_mode(struct bnxt *bp)
+ goto init_dflt_ring_err;
+
+ bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
+- if (bnxt_rfs_supported(bp) && bnxt_rfs_capable(bp)) {
+- bp->flags |= BNXT_FLAG_RFS;
+- bp->dev->features |= NETIF_F_NTUPLE;
+- }
++
++ bnxt_set_dflt_rfs(bp);
++
+ init_dflt_ring_err:
+ bnxt_ulp_irq_restart(bp, rc);
+ return rc;
+diff --git a/drivers/net/ethernet/cavium/thunder/nic_main.c b/drivers/net/ethernet/cavium/thunder/nic_main.c
+index f2f1ce81fd9cc..0ec65ec634df5 100644
+--- a/drivers/net/ethernet/cavium/thunder/nic_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nic_main.c
+@@ -59,7 +59,7 @@ struct nicpf {
+
+ /* MSI-X */
+ u8 num_vec;
+- bool irq_allocated[NIC_PF_MSIX_VECTORS];
++ unsigned int irq_allocated[NIC_PF_MSIX_VECTORS];
+ char irq_name[NIC_PF_MSIX_VECTORS][20];
+ };
+
+@@ -1150,7 +1150,7 @@ static irqreturn_t nic_mbx_intr_handler(int irq, void *nic_irq)
+ u64 intr;
+ u8 vf;
+
+- if (irq == pci_irq_vector(nic->pdev, NIC_PF_INTR_ID_MBOX0))
++ if (irq == nic->irq_allocated[NIC_PF_INTR_ID_MBOX0])
+ mbx = 0;
+ else
+ mbx = 1;
+@@ -1176,14 +1176,14 @@ static void nic_free_all_interrupts(struct nicpf *nic)
+
+ for (irq = 0; irq < nic->num_vec; irq++) {
+ if (nic->irq_allocated[irq])
+- free_irq(pci_irq_vector(nic->pdev, irq), nic);
+- nic->irq_allocated[irq] = false;
++ free_irq(nic->irq_allocated[irq], nic);
++ nic->irq_allocated[irq] = 0;
+ }
+ }
+
+ static int nic_register_interrupts(struct nicpf *nic)
+ {
+- int i, ret;
++ int i, ret, irq;
+ nic->num_vec = pci_msix_vec_count(nic->pdev);
+
+ /* Enable MSI-X */
+@@ -1201,13 +1201,13 @@ static int nic_register_interrupts(struct nicpf *nic)
+ sprintf(nic->irq_name[i],
+ "NICPF Mbox%d", (i - NIC_PF_INTR_ID_MBOX0));
+
+- ret = request_irq(pci_irq_vector(nic->pdev, i),
+- nic_mbx_intr_handler, 0,
++ irq = pci_irq_vector(nic->pdev, i);
++ ret = request_irq(irq, nic_mbx_intr_handler, 0,
+ nic->irq_name[i], nic);
+ if (ret)
+ goto fail;
+
+- nic->irq_allocated[i] = true;
++ nic->irq_allocated[i] = irq;
+ }
+
+ /* Enable mailbox interrupt */
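
The nic_main hunks replace a per-vector bool "was requested" flag with the
actual IRQ number returned by pci_irq_vector(), so teardown frees exactly the
number that was requested and the mailbox handler can compare against the
stored value directly. A sketch of the idea with a fake line allocator;
request_line() and free_line() are invented for illustration.

#include <stdio.h>

#define NVEC 4

static int next_line = 100;

static int request_line(void)
{
        return next_line++;
}

static void free_line(int line)
{
        printf("freed line %d\n", line);
}

int main(void)
{
        /* 0 means "not allocated"; otherwise the stored handle itself. */
        int allocated[NVEC] = { 0 };
        int i;

        for (i = 0; i < NVEC; i++)
                allocated[i] = request_line();

        for (i = 0; i < NVEC; i++) {
                if (allocated[i])
                        free_line(allocated[i]);  /* free what was stored */
                allocated[i] = 0;
        }
        return 0;
}
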
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+index 2d9b06d7caadb..f7dc7d825f637 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+@@ -771,7 +771,7 @@ struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
+ 	/* If we only have one page, we still need to use the shadow WQE
+ 	 * when the WQE rolls over the page boundary
+ */
+- if (curr_pg != end_pg || MASKED_WQE_IDX(wq, end_prod_idx) < *prod_idx) {
++ if (curr_pg != end_pg || end_prod_idx < *prod_idx) {
+ void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
+
+ copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *prod_idx);
+@@ -841,7 +841,10 @@ struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
+
+ *cons_idx = curr_cons_idx;
+
+- if (curr_pg != end_pg) {
++	/* If we only have one page, we still need to use the shadow WQE
++	 * when the WQE rolls over the page boundary
++ */
++ if (curr_pg != end_pg || end_cons_idx < curr_cons_idx) {
+ void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
+
+ copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *cons_idx);
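
Both hinic hunks detect a work-queue entry that wraps past the end of the
ring: once indices are reduced modulo a power-of-two ring size, an end index
numerically below the start index means the entry straddles the boundary and
the shadow copy is required. A standalone sketch of one common formulation of
that test (the ring size is invented; the driver's exact masking differs):

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 8     /* must be a power of two */
#define MASK(idx) ((idx) & (RING_SIZE - 1))

/* True if [start, start + len) crosses the end of the ring. */
static bool entry_wraps(unsigned int start, unsigned int len)
{
        unsigned int end = start + len - 1;

        return MASK(end) < MASK(start);
}

int main(void)
{
        printf("%d\n", entry_wraps(6, 3));      /* 1: slots 6,7,0 wrap */
        printf("%d\n", entry_wraps(2, 3));      /* 0: slots 2,3,4 fit */
        return 0;
}
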
+diff --git a/drivers/net/ethernet/mediatek/mtk_sgmii.c b/drivers/net/ethernet/mediatek/mtk_sgmii.c
+index 32d83421226a2..5897940a418b6 100644
+--- a/drivers/net/ethernet/mediatek/mtk_sgmii.c
++++ b/drivers/net/ethernet/mediatek/mtk_sgmii.c
+@@ -26,6 +26,7 @@ int mtk_sgmii_init(struct mtk_sgmii *ss, struct device_node *r, u32 ana_rgc3)
+ break;
+
+ ss->regmap[i] = syscon_node_to_regmap(np);
++ of_node_put(np);
+ if (IS_ERR(ss->regmap[i]))
+ return PTR_ERR(ss->regmap[i]);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c
+index 538adab6878b5..c5b560a8b026e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/rsc_dump.c
+@@ -31,6 +31,7 @@ static const char *const mlx5_rsc_sgmt_name[] = {
+ struct mlx5_rsc_dump {
+ u32 pdn;
+ u32 mkey;
++ u32 number_of_menu_items;
+ u16 fw_segment_type[MLX5_SGMT_TYPE_NUM];
+ };
+
+@@ -50,21 +51,37 @@ static int mlx5_rsc_dump_sgmt_get_by_name(char *name)
+ return -EINVAL;
+ }
+
+-static void mlx5_rsc_dump_read_menu_sgmt(struct mlx5_rsc_dump *rsc_dump, struct page *page)
++#define MLX5_RSC_DUMP_MENU_HEADER_SIZE (MLX5_ST_SZ_BYTES(resource_dump_info_segment) + \
++ MLX5_ST_SZ_BYTES(resource_dump_command_segment) + \
++ MLX5_ST_SZ_BYTES(resource_dump_menu_segment))
++
++static int mlx5_rsc_dump_read_menu_sgmt(struct mlx5_rsc_dump *rsc_dump, struct page *page,
++ int read_size, int start_idx)
+ {
+ void *data = page_address(page);
+ enum mlx5_sgmt_type sgmt_idx;
+ int num_of_items;
+ char *sgmt_name;
+ void *member;
++ int size = 0;
+ void *menu;
+ int i;
+
+- menu = MLX5_ADDR_OF(menu_resource_dump_response, data, menu);
+- num_of_items = MLX5_GET(resource_dump_menu_segment, menu, num_of_records);
++ if (!start_idx) {
++ menu = MLX5_ADDR_OF(menu_resource_dump_response, data, menu);
++ rsc_dump->number_of_menu_items = MLX5_GET(resource_dump_menu_segment, menu,
++ num_of_records);
++ size = MLX5_RSC_DUMP_MENU_HEADER_SIZE;
++ data += size;
++ }
++ num_of_items = rsc_dump->number_of_menu_items;
++
++ for (i = 0; start_idx + i < num_of_items; i++) {
++ size += MLX5_ST_SZ_BYTES(resource_dump_menu_record);
++ if (size >= read_size)
++ return start_idx + i;
+
+- for (i = 0; i < num_of_items; i++) {
+- member = MLX5_ADDR_OF(resource_dump_menu_segment, menu, record[i]);
++ member = data + MLX5_ST_SZ_BYTES(resource_dump_menu_record) * i;
+ sgmt_name = MLX5_ADDR_OF(resource_dump_menu_record, member, segment_name);
+ sgmt_idx = mlx5_rsc_dump_sgmt_get_by_name(sgmt_name);
+ if (sgmt_idx == -EINVAL)
+@@ -72,6 +89,7 @@ static void mlx5_rsc_dump_read_menu_sgmt(struct mlx5_rsc_dump *rsc_dump, struct
+ rsc_dump->fw_segment_type[sgmt_idx] = MLX5_GET(resource_dump_menu_record,
+ member, segment_type);
+ }
++ return 0;
+ }
+
+ static int mlx5_rsc_dump_trigger(struct mlx5_core_dev *dev, struct mlx5_rsc_dump_cmd *cmd,
+@@ -168,6 +186,7 @@ static int mlx5_rsc_dump_menu(struct mlx5_core_dev *dev)
+ struct mlx5_rsc_dump_cmd *cmd = NULL;
+ struct mlx5_rsc_key key = {};
+ struct page *page;
++ int start_idx = 0;
+ int size;
+ int err;
+
+@@ -189,7 +208,7 @@ static int mlx5_rsc_dump_menu(struct mlx5_core_dev *dev)
+ if (err < 0)
+ goto destroy_cmd;
+
+- mlx5_rsc_dump_read_menu_sgmt(dev->rsc_dump, page);
++ start_idx = mlx5_rsc_dump_read_menu_sgmt(dev->rsc_dump, page, size, start_idx);
+
+ } while (err > 0);
+
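
The rsc_dump rework turns the menu parser into a resumable one: each call
consumes at most one page worth of records, returns the record index to
resume from, and returns 0 once everything has been consumed, with the total
record count latched from the first page. A toy resumable parser under the
same contract; the record and page sizes here are invented.

#include <stdio.h>

#define REC_SIZE  16
#define PAGE_LEN  64
#define NUM_RECS  10

/* Parse up to one page of records starting at start_idx.
 * Returns the next index to resume from, or 0 when done. */
static int parse_chunk(int start_idx)
{
        int size = 0;
        int i;

        for (i = 0; start_idx + i < NUM_RECS; i++) {
                size += REC_SIZE;
                if (size >= PAGE_LEN)
                        return start_idx + i;   /* resume here next call */
                printf("record %d\n", start_idx + i);
        }
        return 0;
}

int main(void)
{
        int idx = 0;

        do {
                idx = parse_chunk(idx);
        } while (idx > 0);
        return 0;
}

As in the hunk above, the record that would overflow the page is not consumed
in the current pass; its index is handed back so the next read starts there.
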
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index 673f1c82d3815..c9d5d8d93994d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -309,8 +309,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ if (err)
+ return err;
+
+- err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer, port_buff_cell_sz,
+- xoff, &port_buffer, &update_buffer);
++ err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer, xoff,
++ port_buff_cell_sz, &port_buffer, &update_buffer);
+ if (err)
+ return err;
+ }
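
The port_buffer fix is a pure argument-order bug: update_buffer_lossy() was
being called with xoff and port_buff_cell_sz transposed, which the compiler
cannot flag because both are plain integers of the same type. One defensive
pattern is to wrap such parameters in distinct single-member structs so a
swap fails to compile; a minimal sketch with invented types:

#include <stdio.h>

/* Distinct wrapper types make transposed arguments a compile error. */
struct xoff_bytes {
        unsigned int val;
};

struct cell_size {
        unsigned int val;
};

static unsigned int xoff_cells(struct xoff_bytes xoff, struct cell_size cell)
{
        return xoff.val / cell.val;
}

int main(void)
{
        struct xoff_bytes x = { .val = 4096 };
        struct cell_size c = { .val = 128 };

        printf("%u\n", xoff_cells(x, c));
        /* xoff_cells(c, x); would not compile */
        return 0;
}
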
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+index 4a0d38d219edc..9028e9958c72d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
+@@ -1739,6 +1739,8 @@ mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg)
+ static void
+ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
+ {
++ struct mlx5e_priv *priv;
++
+ if (!refcount_dec_and_test(&ft->refcount))
+ return;
+
+@@ -1748,6 +1750,8 @@ mlx5_tc_ct_del_ft_cb(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_ft *ft)
+ rhashtable_free_and_destroy(&ft->ct_entries_ht,
+ mlx5_tc_ct_flush_ft_entry,
+ ct_priv);
++ priv = netdev_priv(ct_priv->netdev);
++ flush_workqueue(priv->wq);
+ mlx5_tc_ct_free_pre_ct_tables(ft);
+ mapping_remove(ct_priv->zone_mapping, ft->zone_restore_id);
+ kfree(ft);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index 378fc8e3bd975..d87bbb0be7c86 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -713,6 +713,7 @@ int mlx5e_tc_tun_route_lookup(struct mlx5e_priv *priv,
+ struct net_device *filter_dev)
+ {
+ struct mlx5_esw_flow_attr *esw_attr = flow_attr->esw_attr;
++ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ struct mlx5e_tc_int_port *int_port;
+ TC_TUN_ROUTE_ATTR_INIT(attr);
+ u16 vport_num;
+@@ -747,7 +748,7 @@ int mlx5e_tc_tun_route_lookup(struct mlx5e_priv *priv,
+ esw_attr->rx_tun_attr->vni = MLX5_GET(fte_match_param, spec->match_value,
+ misc_parameters.vxlan_vni);
+ esw_attr->rx_tun_attr->decap_vport = vport_num;
+- } else if (netif_is_ovs_master(attr.route_dev)) {
++ } else if (netif_is_ovs_master(attr.route_dev) && mlx5e_tc_int_port_supported(esw)) {
+ int_port = mlx5e_tc_int_port_get(mlx5e_get_int_port_priv(priv),
+ attr.route_dev->ifindex,
+ MLX5E_TC_INT_PORT_INGRESS);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+index a4c8d8d00d5a4..72e08559e0d05 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+@@ -1198,6 +1198,16 @@ static int mlx5e_trust_initialize(struct mlx5e_priv *priv)
+ if (err)
+ return err;
+
++ if (priv->dcbx_dp.trust_state == MLX5_QPTS_TRUST_PCP && priv->dcbx.dscp_app_cnt) {
++ /*
++ * Align the driver state with the register state.
++ * Temporary state change is required to enable the app list reset.
++ */
++ priv->dcbx_dp.trust_state = MLX5_QPTS_TRUST_DSCP;
++ mlx5e_dcbnl_delete_app(priv);
++ priv->dcbx_dp.trust_state = MLX5_QPTS_TRUST_PCP;
++ }
++
+ mlx5e_params_calc_trust_tx_min_inline_mode(priv->mdev, &priv->channels.params,
+ priv->dcbx_dp.trust_state);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 7e5c00349ccf9..e0f45cef97c34 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -2355,6 +2355,17 @@ static int __parse_cls_flower(struct mlx5e_priv *priv,
+ match.key->vlan_priority);
+
+ *match_level = MLX5_MATCH_L2;
++
++ if (!flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CVLAN) &&
++ match.mask->vlan_eth_type &&
++ MLX5_CAP_FLOWTABLE_TYPE(priv->mdev,
++ ft_field_support.outer_second_vid,
++ fs_type)) {
++ MLX5_SET(fte_match_set_misc, misc_c,
++ outer_second_cvlan_tag, 1);
++ spec->match_criteria_enable |=
++ MLX5_MATCH_MISC_PARAMETERS;
++ }
+ }
+ } else if (*match_level != MLX5_MATCH_NONE) {
+ /* cvlan_tag enabled in match criteria and
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index e7e7b4b0dcdb5..cebfa8565c9d9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -139,7 +139,7 @@ mlx5_eswitch_set_rule_source_port(struct mlx5_eswitch *esw,
+ if (mlx5_esw_indir_table_decap_vport(attr))
+ vport = mlx5_esw_indir_table_decap_vport(attr);
+
+- if (esw_attr->int_port)
++ if (attr && !attr->chain && esw_attr->int_port)
+ metadata =
+ mlx5e_tc_int_port_get_metadata_for_match(esw_attr->int_port);
+ else
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 84dbe46d5ede6..862f5b7cb2106 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -112,6 +112,28 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
+ }
+ }
+
++static void mlx5_stop_sync_reset_poll(struct mlx5_core_dev *dev)
++{
++ struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
++
++ del_timer_sync(&fw_reset->timer);
++}
++
++static int mlx5_sync_reset_clear_reset_requested(struct mlx5_core_dev *dev, bool poll_health)
++{
++ struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
++
++ if (!test_and_clear_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags)) {
++ mlx5_core_warn(dev, "Reset request was already cleared\n");
++ return -EALREADY;
++ }
++
++ mlx5_stop_sync_reset_poll(dev);
++ if (poll_health)
++ mlx5_start_health_poll(dev);
++ return 0;
++}
++
+ static void mlx5_sync_reset_reload_work(struct work_struct *work)
+ {
+ struct mlx5_fw_reset *fw_reset = container_of(work, struct mlx5_fw_reset,
+@@ -119,6 +141,7 @@ static void mlx5_sync_reset_reload_work(struct work_struct *work)
+ struct mlx5_core_dev *dev = fw_reset->dev;
+ int err;
+
++ mlx5_sync_reset_clear_reset_requested(dev, false);
+ mlx5_enter_error_state(dev, true);
+ mlx5_unload_one(dev);
+ err = mlx5_health_wait_pci_up(dev);
+@@ -128,23 +151,6 @@ static void mlx5_sync_reset_reload_work(struct work_struct *work)
+ mlx5_fw_reset_complete_reload(dev);
+ }
+
+-static void mlx5_stop_sync_reset_poll(struct mlx5_core_dev *dev)
+-{
+- struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+-
+- del_timer_sync(&fw_reset->timer);
+-}
+-
+-static void mlx5_sync_reset_clear_reset_requested(struct mlx5_core_dev *dev, bool poll_health)
+-{
+- struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+-
+- mlx5_stop_sync_reset_poll(dev);
+- clear_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags);
+- if (poll_health)
+- mlx5_start_health_poll(dev);
+-}
+-
+ #define MLX5_RESET_POLL_INTERVAL (HZ / 10)
+ static void poll_sync_reset(struct timer_list *t)
+ {
+@@ -159,7 +165,6 @@ static void poll_sync_reset(struct timer_list *t)
+
+ if (fatal_error) {
+ mlx5_core_warn(dev, "Got Device Reset\n");
+- mlx5_sync_reset_clear_reset_requested(dev, false);
+ queue_work(fw_reset->wq, &fw_reset->reset_reload_work);
+ return;
+ }
+@@ -186,13 +191,17 @@ static int mlx5_fw_reset_set_reset_sync_nack(struct mlx5_core_dev *dev)
+ return mlx5_reg_mfrl_set(dev, MLX5_MFRL_REG_RESET_LEVEL3, 0, 2, false);
+ }
+
+-static void mlx5_sync_reset_set_reset_requested(struct mlx5_core_dev *dev)
++static int mlx5_sync_reset_set_reset_requested(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset;
+
++ if (test_and_set_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags)) {
++ mlx5_core_warn(dev, "Reset request was already set\n");
++ return -EALREADY;
++ }
+ mlx5_stop_health_poll(dev, true);
+- set_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags);
+ mlx5_start_sync_reset_poll(dev);
++ return 0;
+ }
+
+ static void mlx5_fw_live_patch_event(struct work_struct *work)
+@@ -221,7 +230,9 @@ static void mlx5_sync_reset_request_event(struct work_struct *work)
+ err ? "Failed" : "Sent");
+ return;
+ }
+- mlx5_sync_reset_set_reset_requested(dev);
++ if (mlx5_sync_reset_set_reset_requested(dev))
++ return;
++
+ err = mlx5_fw_reset_set_reset_sync_ack(dev);
+ if (err)
+ mlx5_core_warn(dev, "PCI Sync FW Update Reset Ack Failed. Error code: %d\n", err);
+@@ -319,7 +330,8 @@ static void mlx5_sync_reset_now_event(struct work_struct *work)
+ struct mlx5_core_dev *dev = fw_reset->dev;
+ int err;
+
+- mlx5_sync_reset_clear_reset_requested(dev, false);
++ if (mlx5_sync_reset_clear_reset_requested(dev, false))
++ return;
+
+ mlx5_core_warn(dev, "Sync Reset now. Device is going to reset.\n");
+
+@@ -348,10 +360,8 @@ static void mlx5_sync_reset_abort_event(struct work_struct *work)
+ reset_abort_work);
+ struct mlx5_core_dev *dev = fw_reset->dev;
+
+- if (!test_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags))
++ if (mlx5_sync_reset_clear_reset_requested(dev, true))
+ return;
+-
+- mlx5_sync_reset_clear_reset_requested(dev, true);
+ mlx5_core_warn(dev, "PCI Sync FW Update Reset Aborted.\n");
+ }
+
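
The fw_reset rework makes both state transitions self-checking:
test_and_set_bit()/test_and_clear_bit() atomically flip RESET_REQUESTED and
report whether it was already in the target state, so the competing event
handlers (request, reset-now, abort, fatal poll) each act exactly once. The
same idiom with C11 atomics, as a rough userspace analogue:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RESET_REQUESTED (1u << 0)

static atomic_uint flags;

/* Returns false if the flag was already set (another path won the race). */
static bool set_reset_requested(void)
{
        unsigned int old = atomic_fetch_or(&flags, RESET_REQUESTED);

        return !(old & RESET_REQUESTED);
}

/* Returns false if the flag was already clear. */
static bool clear_reset_requested(void)
{
        unsigned int old = atomic_fetch_and(&flags, ~RESET_REQUESTED);

        return old & RESET_REQUESTED;
}

int main(void)
{
        printf("set:   %d (expect 1)\n", set_reset_requested());
        printf("set:   %d (expect 0)\n", set_reset_requested());
        printf("clear: %d (expect 1)\n", clear_reset_requested());
        printf("clear: %d (expect 0)\n", clear_reset_requested());
        return 0;
}
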
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
+index 626aa60b6099b..7da710951572d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
+@@ -100,6 +100,14 @@ static void mlx5_lag_fib_event_flush(struct notifier_block *nb)
+ flush_workqueue(mp->wq);
+ }
+
++static void mlx5_lag_fib_set(struct lag_mp *mp, struct fib_info *fi, u32 dst, int dst_len)
++{
++ mp->fib.mfi = fi;
++ mp->fib.priority = fi->fib_priority;
++ mp->fib.dst = dst;
++ mp->fib.dst_len = dst_len;
++}
++
+ struct mlx5_fib_event_work {
+ struct work_struct work;
+ struct mlx5_lag *ldev;
+@@ -110,10 +118,10 @@ struct mlx5_fib_event_work {
+ };
+ };
+
+-static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+- unsigned long event,
+- struct fib_info *fi)
++static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev, unsigned long event,
++ struct fib_entry_notifier_info *fen_info)
+ {
++ struct fib_info *fi = fen_info->fi;
+ struct lag_mp *mp = &ldev->lag_mp;
+ struct fib_nh *fib_nh0, *fib_nh1;
+ unsigned int nhs;
+@@ -121,13 +129,15 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+ /* Handle delete event */
+ if (event == FIB_EVENT_ENTRY_DEL) {
+ /* stop track */
+- if (mp->mfi == fi)
+- mp->mfi = NULL;
++ if (mp->fib.mfi == fi)
++ mp->fib.mfi = NULL;
+ return;
+ }
+
+ /* Handle multipath entry with lower priority value */
+- if (mp->mfi && mp->mfi != fi && fi->fib_priority >= mp->mfi->fib_priority)
++ if (mp->fib.mfi && mp->fib.mfi != fi &&
++ (mp->fib.dst != fen_info->dst || mp->fib.dst_len != fen_info->dst_len) &&
++ fi->fib_priority >= mp->fib.priority)
+ return;
+
+ /* Handle add/replace event */
+@@ -143,9 +153,9 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+
+ i++;
+ mlx5_lag_set_port_affinity(ldev, i);
++ mlx5_lag_fib_set(mp, fi, fen_info->dst, fen_info->dst_len);
+ }
+
+- mp->mfi = fi;
+ return;
+ }
+
+@@ -165,7 +175,7 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+ }
+
+ /* First time we see multipath route */
+- if (!mp->mfi && !__mlx5_lag_is_active(ldev)) {
++ if (!mp->fib.mfi && !__mlx5_lag_is_active(ldev)) {
+ struct lag_tracker tracker;
+
+ tracker = ldev->tracker;
+@@ -173,7 +183,7 @@ static void mlx5_lag_fib_route_event(struct mlx5_lag *ldev,
+ }
+
+ mlx5_lag_set_port_affinity(ldev, MLX5_LAG_NORMAL_AFFINITY);
+- mp->mfi = fi;
++ mlx5_lag_fib_set(mp, fi, fen_info->dst, fen_info->dst_len);
+ }
+
+ static void mlx5_lag_fib_nexthop_event(struct mlx5_lag *ldev,
+@@ -184,7 +194,7 @@ static void mlx5_lag_fib_nexthop_event(struct mlx5_lag *ldev,
+ struct lag_mp *mp = &ldev->lag_mp;
+
+ /* Check the nh event is related to the route */
+- if (!mp->mfi || mp->mfi != fi)
++ if (!mp->fib.mfi || mp->fib.mfi != fi)
+ return;
+
+ /* nh added/removed */
+@@ -214,7 +224,7 @@ static void mlx5_lag_fib_update(struct work_struct *work)
+ case FIB_EVENT_ENTRY_REPLACE:
+ case FIB_EVENT_ENTRY_DEL:
+ mlx5_lag_fib_route_event(ldev, fib_work->event,
+- fib_work->fen_info.fi);
++ &fib_work->fen_info);
+ fib_info_put(fib_work->fen_info.fi);
+ break;
+ case FIB_EVENT_NH_ADD:
+@@ -313,7 +323,7 @@ void mlx5_lag_mp_reset(struct mlx5_lag *ldev)
+ /* Clear mfi, as it might become stale when a route delete event
+ * has been missed, see mlx5_lag_fib_route_event().
+ */
+- ldev->lag_mp.mfi = NULL;
++ ldev->lag_mp.fib.mfi = NULL;
+ }
+
+ int mlx5_lag_mp_init(struct mlx5_lag *ldev)
+@@ -324,7 +334,7 @@ int mlx5_lag_mp_init(struct mlx5_lag *ldev)
+ /* always clear mfi, as it might become stale when a route delete event
+ * has been missed
+ */
+- mp->mfi = NULL;
++ mp->fib.mfi = NULL;
+
+ if (mp->fib_nb.notifier_call)
+ return 0;
+@@ -354,5 +364,5 @@ void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev)
+ unregister_fib_notifier(&init_net, &mp->fib_nb);
+ destroy_workqueue(mp->wq);
+ mp->fib_nb.notifier_call = NULL;
+- mp->mfi = NULL;
++ mp->fib.mfi = NULL;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.h
+index 57af962cad298..056a066da604b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mp.h
+@@ -15,7 +15,12 @@ enum mlx5_lag_port_affinity {
+
+ struct lag_mp {
+ struct notifier_block fib_nb;
+- struct fib_info *mfi; /* used in tracking fib events */
++ struct {
++ const void *mfi; /* used in tracking fib events */
++ u32 priority;
++ u32 dst;
++ int dst_len;
++ } fib;
+ struct workqueue_struct *wq;
+ };
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
+index a6592f9c3c05f..5be322528279a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
+@@ -505,7 +505,7 @@ static int mlx5_lag_create_inner_ttc_table(struct mlx5_lag *ldev)
+ struct ttc_params ttc_params = {};
+
+ mlx5_lag_set_inner_ttc_params(ldev, &ttc_params);
+- port_sel->inner.ttc = mlx5_create_ttc_table(dev, &ttc_params);
++ port_sel->inner.ttc = mlx5_create_inner_ttc_table(dev, &ttc_params);
+ if (IS_ERR(port_sel->inner.ttc))
+ return PTR_ERR(port_sel->inner.ttc);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c
+index b63dec24747ab..b78f2ba25c19b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_ttc.c
+@@ -408,6 +408,8 @@ static int mlx5_generate_inner_ttc_table_rules(struct mlx5_core_dev *dev,
+ for (tt = 0; tt < MLX5_NUM_TT; tt++) {
+ struct mlx5_ttc_rule *rule = &rules[tt];
+
++ if (test_bit(tt, params->ignore_dests))
++ continue;
+ rule->rule = mlx5_generate_inner_ttc_rule(dev, ft,
+ ¶ms->dests[tt],
+ ttc_rules[tt].etype,
+diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
+index 7a50ba00f8ae3..c854efdf1f25f 100644
+--- a/drivers/net/ethernet/smsc/smsc911x.c
++++ b/drivers/net/ethernet/smsc/smsc911x.c
+@@ -2431,7 +2431,7 @@ static int smsc911x_drv_probe(struct platform_device *pdev)
+ if (irq == -EPROBE_DEFER) {
+ retval = -EPROBE_DEFER;
+ goto out_0;
+- } else if (irq <= 0) {
++ } else if (irq < 0) {
+ pr_warn("Could not allocate irq resource\n");
+ retval = -ENODEV;
+ goto out_0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+index 8e8778cfbbadd..6f87e296a410f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -454,6 +454,7 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
+ plat->has_gmac4 = 1;
+ plat->force_sf_dma_mode = 0;
+ plat->tso_en = 1;
++ plat->sph_disable = 1;
+
+ /* Multiplying factor to the clk_eee_i clock time
+ * period to make it closer to 100 ns. This value
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 09644ab0d87a7..fda53b4b9406f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -916,6 +916,7 @@ static int sun8i_dwmac_register_mdio_mux(struct stmmac_priv *priv)
+
+ ret = mdio_mux_init(priv->device, mdio_mux, mdio_mux_syscon_switch_fn,
+ &gmac->mux_handle, priv, priv->mii);
++ of_node_put(mdio_mux);
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 422e3225f476a..fb115273f5533 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7077,7 +7077,7 @@ int stmmac_dvr_probe(struct device *device,
+ dev_info(priv->device, "TSO feature enabled\n");
+ }
+
+- if (priv->dma_cap.sphen) {
++ if (priv->dma_cap.sphen && !priv->plat->sph_disable) {
+ ndev->hw_features |= NETIF_F_GRO;
+ priv->sph_cap = true;
+ priv->sph = priv->sph_cap;
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index bd4b1528cf992..79e850fe4621c 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -1246,8 +1246,10 @@ static int cpsw_probe_dt(struct cpsw_common *cpsw)
+ data->slave_data = devm_kcalloc(dev, CPSW_SLAVE_PORTS_NUM,
+ sizeof(struct cpsw_slave_data),
+ GFP_KERNEL);
+- if (!data->slave_data)
++ if (!data->slave_data) {
++ of_node_put(tmp_node);
+ return -ENOMEM;
++ }
+
+ /* Populate all the child nodes here...
+ */
+@@ -1341,6 +1343,7 @@ static int cpsw_probe_dt(struct cpsw_common *cpsw)
+
+ err_node_put:
+ of_node_put(port_np);
++ of_node_put(tmp_node);
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+index 77fa2cb03acaa..08a670bf2cd19 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
++++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+@@ -823,10 +823,10 @@ static int xemaclite_mdio_write(struct mii_bus *bus, int phy_id, int reg,
+ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ {
+ struct mii_bus *bus;
+- int rc;
+ struct resource res;
+ struct device_node *np = of_get_parent(lp->phy_node);
+ struct device_node *npp;
++ int rc, ret;
+
+ /* Don't register the MDIO bus if the phy_node or its parent node
+ * can't be found.
+@@ -836,8 +836,14 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ return -ENODEV;
+ }
+ npp = of_get_parent(np);
+-
+- of_address_to_resource(npp, 0, &res);
++ ret = of_address_to_resource(npp, 0, &res);
++ of_node_put(npp);
++ if (ret) {
++ dev_err(dev, "%s resource error!\n",
++ dev->of_node->full_name);
++ of_node_put(np);
++ return ret;
++ }
+ if (lp->ndev->mem_start != res.start) {
+ struct phy_device *phydev;
+ phydev = of_phy_find_device(lp->phy_node);
+@@ -846,6 +852,7 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ "MDIO of the phy is not registered yet\n");
+ else
+ put_device(&phydev->mdio.dev);
++ of_node_put(np);
+ return 0;
+ }
+
+@@ -858,6 +865,7 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ bus = mdiobus_alloc();
+ if (!bus) {
+ dev_err(dev, "Failed to allocate mdiobus\n");
++ of_node_put(np);
+ return -ENOMEM;
+ }
+
+@@ -870,6 +878,7 @@ static int xemaclite_mdio_setup(struct net_local *lp, struct device *dev)
+ bus->parent = dev;
+
+ rc = of_mdiobus_register(bus, np);
++ of_node_put(np);
+ if (rc) {
+ dev_err(dev, "Failed to register mdio bus.\n");
+ goto err_register;
+diff --git a/drivers/net/mdio/mdio-mux-bcm6368.c b/drivers/net/mdio/mdio-mux-bcm6368.c
+index 6dcbf987d61b5..8b444a8eb6b55 100644
+--- a/drivers/net/mdio/mdio-mux-bcm6368.c
++++ b/drivers/net/mdio/mdio-mux-bcm6368.c
+@@ -115,7 +115,7 @@ static int bcm6368_mdiomux_probe(struct platform_device *pdev)
+ md->mii_bus = devm_mdiobus_alloc(&pdev->dev);
+ if (!md->mii_bus) {
+ dev_err(&pdev->dev, "mdiomux bus alloc failed\n");
+- return ENOMEM;
++ return -ENOMEM;
+ }
+
+ bus = md->mii_bus;
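
The one-character bcm6368 fix (ENOMEM becomes -ENOMEM) matters because kernel
callers check "if (ret)" or "if (ret < 0)": a positive errno value either
escapes as a bogus success-like code or is misreported. A tiny illustration
of the negative-errno convention:

#include <errno.h>
#include <stdio.h>
#include <string.h>

static int alloc_thing(int fail)
{
        if (fail)
                return -ENOMEM;         /* kernel style: negative errno */
        return 0;
}

int main(void)
{
        int ret = alloc_thing(1);

        if (ret < 0)
                printf("failed: %s\n", strerror(-ret));
        return 0;
}
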
+diff --git a/drivers/nfc/nfcmrvl/main.c b/drivers/nfc/nfcmrvl/main.c
+index 2fcf545012b16..1a5284de4341b 100644
+--- a/drivers/nfc/nfcmrvl/main.c
++++ b/drivers/nfc/nfcmrvl/main.c
+@@ -183,6 +183,7 @@ void nfcmrvl_nci_unregister_dev(struct nfcmrvl_private *priv)
+ {
+ struct nci_dev *ndev = priv->ndev;
+
++ nci_unregister_device(ndev);
+ if (priv->ndev->nfc_dev->fw_download_in_progress)
+ nfcmrvl_fw_dnld_abort(priv);
+
+@@ -191,7 +192,6 @@ void nfcmrvl_nci_unregister_dev(struct nfcmrvl_private *priv)
+ if (gpio_is_valid(priv->config.reset_n_io))
+ gpio_free(priv->config.reset_n_io);
+
+- nci_unregister_device(ndev);
+ nci_free_device(ndev);
+ kfree(priv);
+ }
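
The nfcmrvl reorder follows the general teardown rule: unregister the entity
that can invoke callbacks first, then release the resources those callbacks
touch (here, the reset GPIO and firmware-download state). A schematic sketch
of the ordering with an invented callback registration:

#include <stdio.h>
#include <stdlib.h>

struct ctx {
        int *shared;
        void (*callback)(struct ctx *);
};

static void cb(struct ctx *c)
{
        printf("callback sees %d\n", *c->shared);
}

static void deliver_event(struct ctx *c)
{
        if (c->callback)        /* only registered consumers run */
                c->callback(c);
}

int main(void)
{
        struct ctx c = { .shared = malloc(sizeof(int)), .callback = cb };

        if (!c.shared)
                return 1;
        *c.shared = 42;
        deliver_event(&c);

        /* Correct order: unregister first, then free what callbacks use. */
        c.callback = NULL;
        free(c.shared);
        c.shared = NULL;

        deliver_event(&c);      /* safely a no-op now */
        return 0;
}
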
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 15348be1a8aa5..5be382b19d9a7 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -38,10 +38,6 @@
+ #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6)
+ #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7)
+ #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8)
+-#define PCIE_CORE_INT_A_ASSERT_ENABLE 1
+-#define PCIE_CORE_INT_B_ASSERT_ENABLE 2
+-#define PCIE_CORE_INT_C_ASSERT_ENABLE 3
+-#define PCIE_CORE_INT_D_ASSERT_ENABLE 4
+ /* PIO registers base address and register offsets */
+ #define PIO_BASE_ADDR 0x4000
+ #define PIO_CTRL (PIO_BASE_ADDR + 0x0)
+@@ -102,6 +98,10 @@
+ #define PCIE_MSG_PM_PME_MASK BIT(7)
+ #define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44)
+ #define PCIE_ISR0_MSI_INT_PENDING BIT(24)
++#define PCIE_ISR0_CORR_ERR BIT(11)
++#define PCIE_ISR0_NFAT_ERR BIT(12)
++#define PCIE_ISR0_FAT_ERR BIT(13)
++#define PCIE_ISR0_ERR_MASK GENMASK(13, 11)
+ #define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val))
+ #define PCIE_ISR0_INTX_DEASSERT(val) BIT(20 + (val))
+ #define PCIE_ISR0_ALL_MASK GENMASK(31, 0)
+@@ -272,17 +272,16 @@ struct advk_pcie {
+ u32 actions;
+ } wins[OB_WIN_COUNT];
+ u8 wins_count;
++ int irq;
++ struct irq_domain *rp_irq_domain;
+ struct irq_domain *irq_domain;
+ struct irq_chip irq_chip;
+ raw_spinlock_t irq_lock;
+ struct irq_domain *msi_domain;
+ struct irq_domain *msi_inner_domain;
+- struct irq_chip msi_bottom_irq_chip;
+- struct irq_chip msi_irq_chip;
+- struct msi_domain_info msi_domain_info;
++ raw_spinlock_t msi_irq_lock;
+ DECLARE_BITMAP(msi_used, MSI_IRQ_NUM);
+ struct mutex msi_used_lock;
+- u16 msi_msg;
+ int link_gen;
+ struct pci_bridge_emul bridge;
+ struct gpio_desc *reset_gpio;
+@@ -477,6 +476,7 @@ static void advk_pcie_disable_ob_win(struct advk_pcie *pcie, u8 win_num)
+
+ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ {
++ phys_addr_t msi_addr;
+ u32 reg;
+ int i;
+
+@@ -565,6 +565,11 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ reg |= LANE_COUNT_1;
+ advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+
++ /* Set MSI address */
++ msi_addr = virt_to_phys(pcie);
++ advk_writel(pcie, lower_32_bits(msi_addr), PCIE_MSI_ADDR_LOW_REG);
++ advk_writel(pcie, upper_32_bits(msi_addr), PCIE_MSI_ADDR_HIGH_REG);
++
+ /* Enable MSI */
+ reg = advk_readl(pcie, PCIE_CORE_CTRL2_REG);
+ reg |= PCIE_CORE_CTRL2_MSI_ENABLE;
+@@ -576,15 +581,20 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie)
+ advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
+ advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
+
+- /* Disable All ISR0/1 Sources */
+- reg = PCIE_ISR0_ALL_MASK;
++ /* Disable All ISR0/1 and MSI Sources */
++ advk_writel(pcie, PCIE_ISR0_ALL_MASK, PCIE_ISR0_MASK_REG);
++ advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
++ advk_writel(pcie, PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
++
++ /* Unmask summary MSI interrupt */
++ reg = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+ reg &= ~PCIE_ISR0_MSI_INT_PENDING;
+ advk_writel(pcie, reg, PCIE_ISR0_MASK_REG);
+
+- advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_MASK_REG);
+-
+- /* Unmask all MSIs */
+- advk_writel(pcie, ~(u32)PCIE_MSI_ALL_MASK, PCIE_MSI_MASK_REG);
++ /* Unmask PME interrupt for processing of PME requester */
++ reg = advk_readl(pcie, PCIE_ISR0_MASK_REG);
++ reg &= ~PCIE_MSG_PM_PME_MASK;
++ advk_writel(pcie, reg, PCIE_ISR0_MASK_REG);
+
+ /* Enable summary interrupt for GIC SPI source */
+ reg = PCIE_IRQ_ALL_MASK & (~PCIE_IRQ_ENABLE_INTS_MASK);
+@@ -778,11 +788,15 @@ advk_pci_bridge_emul_base_conf_read(struct pci_bridge_emul *bridge,
+ case PCI_INTERRUPT_LINE: {
+ /*
+ * From the whole 32bit register we support reading from HW only
+- * one bit: PCI_BRIDGE_CTL_BUS_RESET.
++ * two bits: PCI_BRIDGE_CTL_BUS_RESET and PCI_BRIDGE_CTL_SERR.
+ * Other bits are retrieved only from emulated config buffer.
+ */
+ __le32 *cfgspace = (__le32 *)&bridge->conf;
+ u32 val = le32_to_cpu(cfgspace[PCI_INTERRUPT_LINE / 4]);
++ if (advk_readl(pcie, PCIE_ISR0_MASK_REG) & PCIE_ISR0_ERR_MASK)
++ val &= ~(PCI_BRIDGE_CTL_SERR << 16);
++ else
++ val |= PCI_BRIDGE_CTL_SERR << 16;
+ if (advk_readl(pcie, PCIE_CORE_CTRL1_REG) & HOT_RESET_GEN)
+ val |= PCI_BRIDGE_CTL_BUS_RESET << 16;
+ else
+@@ -808,6 +822,19 @@ advk_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge,
+ break;
+
+ case PCI_INTERRUPT_LINE:
++ /*
++ * According to Figure 6-3: Pseudo Logic Diagram for Error
++	 * Message Controls in the PCIe base specification, the SERR# Enable
++	 * bit in the Bridge Control register enables receiving of ERR_* messages
++ */
++ if (mask & (PCI_BRIDGE_CTL_SERR << 16)) {
++ u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
++ if (new & (PCI_BRIDGE_CTL_SERR << 16))
++ val &= ~PCIE_ISR0_ERR_MASK;
++ else
++ val |= PCIE_ISR0_ERR_MASK;
++ advk_writel(pcie, val, PCIE_ISR0_MASK_REG);
++ }
+ if (mask & (PCI_BRIDGE_CTL_BUS_RESET << 16)) {
+ u32 val = advk_readl(pcie, PCIE_CORE_CTRL1_REG);
+ if (new & (PCI_BRIDGE_CTL_BUS_RESET << 16))
+@@ -835,22 +862,11 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ *value = PCI_EXP_SLTSTA_PDS << 16;
+ return PCI_BRIDGE_EMUL_HANDLED;
+
+- case PCI_EXP_RTCTL: {
+- u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG);
+- *value = (val & PCIE_MSG_PM_PME_MASK) ? 0 : PCI_EXP_RTCTL_PMEIE;
+- *value |= le16_to_cpu(bridge->pcie_conf.rootctl) & PCI_EXP_RTCTL_CRSSVE;
+- *value |= PCI_EXP_RTCAP_CRSVIS << 16;
+- return PCI_BRIDGE_EMUL_HANDLED;
+- }
+-
+- case PCI_EXP_RTSTA: {
+- u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
+- u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
+- *value = msglog >> 16;
+- if (isr0 & PCIE_MSG_PM_PME_MASK)
+- *value |= PCI_EXP_RTSTA_PME;
+- return PCI_BRIDGE_EMUL_HANDLED;
+- }
++ /*
++ * PCI_EXP_RTCTL and PCI_EXP_RTSTA are also supported, but do not need
++ * to be handled here, because their values are stored in emulated
++ * config space buffer, and we read them from there when needed.
++ */
+
+ case PCI_EXP_LNKCAP: {
+ u32 val = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg);
+@@ -905,19 +921,18 @@ advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge,
+ break;
+
+ case PCI_EXP_RTCTL: {
+- /* Only mask/unmask PME interrupt */
+- u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG) &
+- ~PCIE_MSG_PM_PME_MASK;
+- if ((new & PCI_EXP_RTCTL_PMEIE) == 0)
+- val |= PCIE_MSG_PM_PME_MASK;
+- advk_writel(pcie, val, PCIE_ISR0_MASK_REG);
++ u16 rootctl = le16_to_cpu(bridge->pcie_conf.rootctl);
++ /* Only emulation of PMEIE and CRSSVE bits is provided */
++ rootctl &= PCI_EXP_RTCTL_PMEIE | PCI_EXP_RTCTL_CRSSVE;
++ bridge->pcie_conf.rootctl = cpu_to_le16(rootctl);
+ break;
+ }
+
+- case PCI_EXP_RTSTA:
+- new = (new & PCI_EXP_RTSTA_PME) >> 9;
+- advk_writel(pcie, new, PCIE_ISR0_REG);
+- break;
++ /*
++ * PCI_EXP_RTSTA is also supported, but does not need to be handled
++ * here, because its value is stored in emulated config space buffer,
++ * and we write it there when needed.
++ */
+
+ case PCI_EXP_DEVCTL:
+ case PCI_EXP_DEVCTL2:
+@@ -961,7 +976,7 @@ static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ bridge->conf.pref_mem_limit = cpu_to_le16(PCI_PREF_RANGE_TYPE_64);
+
+ /* Support interrupt A for MSI feature */
+- bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE;
++ bridge->conf.intpin = PCI_INTERRUPT_INTA;
+
+ /* Aardvark HW provides PCIe Capability structure in version 2 */
+ bridge->pcie_conf.cap = cpu_to_le16(2);
+@@ -983,8 +998,12 @@ static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
+ return false;
+
+ /*
+- * If the link goes down after we check for link-up, nothing bad
+- * happens but the config access times out.
++ * If the link goes down after we check for link-up, we have a problem:
++ * if a PIO request is executed while link-down, the whole controller
++ * gets stuck in a non-functional state, and even after link comes up
++ * again, PIO requests won't work anymore, and a reset of the whole PCIe
++ * controller is needed. Therefore we need to prevent sending PIO
++ * requests while the link is down.
+ */
+ if (!pci_is_root_bus(bus) && !advk_pcie_link_up(pcie))
+ return false;
+@@ -1182,10 +1201,10 @@ static void advk_msi_irq_compose_msi_msg(struct irq_data *data,
+ struct msi_msg *msg)
+ {
+ struct advk_pcie *pcie = irq_data_get_irq_chip_data(data);
+- phys_addr_t msi_msg = virt_to_phys(&pcie->msi_msg);
++ phys_addr_t msi_addr = virt_to_phys(pcie);
+
+- msg->address_lo = lower_32_bits(msi_msg);
+- msg->address_hi = upper_32_bits(msi_msg);
++ msg->address_lo = lower_32_bits(msi_addr);
++ msg->address_hi = upper_32_bits(msi_addr);
+ msg->data = data->hwirq;
+ }
+
+@@ -1195,6 +1214,54 @@ static int advk_msi_set_affinity(struct irq_data *irq_data,
+ return -EINVAL;
+ }
+
++static void advk_msi_irq_mask(struct irq_data *d)
++{
++ struct advk_pcie *pcie = d->domain->host_data;
++ irq_hw_number_t hwirq = irqd_to_hwirq(d);
++ unsigned long flags;
++ u32 mask;
++
++ raw_spin_lock_irqsave(&pcie->msi_irq_lock, flags);
++ mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
++ mask |= BIT(hwirq);
++ advk_writel(pcie, mask, PCIE_MSI_MASK_REG);
++ raw_spin_unlock_irqrestore(&pcie->msi_irq_lock, flags);
++}
++
++static void advk_msi_irq_unmask(struct irq_data *d)
++{
++ struct advk_pcie *pcie = d->domain->host_data;
++ irq_hw_number_t hwirq = irqd_to_hwirq(d);
++ unsigned long flags;
++ u32 mask;
++
++ raw_spin_lock_irqsave(&pcie->msi_irq_lock, flags);
++ mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
++ mask &= ~BIT(hwirq);
++ advk_writel(pcie, mask, PCIE_MSI_MASK_REG);
++ raw_spin_unlock_irqrestore(&pcie->msi_irq_lock, flags);
++}
++
++static void advk_msi_top_irq_mask(struct irq_data *d)
++{
++ pci_msi_mask_irq(d);
++ irq_chip_mask_parent(d);
++}
++
++static void advk_msi_top_irq_unmask(struct irq_data *d)
++{
++ pci_msi_unmask_irq(d);
++ irq_chip_unmask_parent(d);
++}
++
++static struct irq_chip advk_msi_bottom_irq_chip = {
++ .name = "MSI",
++ .irq_compose_msi_msg = advk_msi_irq_compose_msi_msg,
++ .irq_set_affinity = advk_msi_set_affinity,
++ .irq_mask = advk_msi_irq_mask,
++ .irq_unmask = advk_msi_irq_unmask,
++};
++
+ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
+ unsigned int virq,
+ unsigned int nr_irqs, void *args)
+@@ -1211,7 +1278,7 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
+
+ for (i = 0; i < nr_irqs; i++)
+ irq_domain_set_info(domain, virq + i, hwirq + i,
+- &pcie->msi_bottom_irq_chip,
++ &advk_msi_bottom_irq_chip,
+ domain->host_data, handle_simple_irq,
+ NULL, NULL);
+
+@@ -1267,7 +1334,6 @@ static int advk_pcie_irq_map(struct irq_domain *h,
+ {
+ struct advk_pcie *pcie = h->host_data;
+
+- advk_pcie_irq_mask(irq_get_irq_data(virq));
+ irq_set_status_flags(virq, IRQ_LEVEL);
+ irq_set_chip_and_handler(virq, &pcie->irq_chip,
+ handle_level_irq);
+@@ -1281,37 +1347,25 @@ static const struct irq_domain_ops advk_pcie_irq_domain_ops = {
+ .xlate = irq_domain_xlate_onecell,
+ };
+
++static struct irq_chip advk_msi_irq_chip = {
++ .name = "advk-MSI",
++ .irq_mask = advk_msi_top_irq_mask,
++ .irq_unmask = advk_msi_top_irq_unmask,
++};
++
++static struct msi_domain_info advk_msi_domain_info = {
++ .flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
++ MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX,
++ .chip = &advk_msi_irq_chip,
++};
++
+ static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie)
+ {
+ struct device *dev = &pcie->pdev->dev;
+- struct device_node *node = dev->of_node;
+- struct irq_chip *bottom_ic, *msi_ic;
+- struct msi_domain_info *msi_di;
+- phys_addr_t msi_msg_phys;
+
++ raw_spin_lock_init(&pcie->msi_irq_lock);
+ mutex_init(&pcie->msi_used_lock);
+
+- bottom_ic = &pcie->msi_bottom_irq_chip;
+-
+- bottom_ic->name = "MSI";
+- bottom_ic->irq_compose_msi_msg = advk_msi_irq_compose_msi_msg;
+- bottom_ic->irq_set_affinity = advk_msi_set_affinity;
+-
+- msi_ic = &pcie->msi_irq_chip;
+- msi_ic->name = "advk-MSI";
+-
+- msi_di = &pcie->msi_domain_info;
+- msi_di->flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+- MSI_FLAG_MULTI_PCI_MSI;
+- msi_di->chip = msi_ic;
+-
+- msi_msg_phys = virt_to_phys(&pcie->msi_msg);
+-
+- advk_writel(pcie, lower_32_bits(msi_msg_phys),
+- PCIE_MSI_ADDR_LOW_REG);
+- advk_writel(pcie, upper_32_bits(msi_msg_phys),
+- PCIE_MSI_ADDR_HIGH_REG);
+-
+ pcie->msi_inner_domain =
+ irq_domain_add_linear(NULL, MSI_IRQ_NUM,
+ &advk_msi_domain_ops, pcie);
+@@ -1319,8 +1373,9 @@ static int advk_pcie_init_msi_irq_domain(struct advk_pcie *pcie)
+ return -ENOMEM;
+
+ pcie->msi_domain =
+- pci_msi_create_irq_domain(of_node_to_fwnode(node),
+- msi_di, pcie->msi_inner_domain);
++ pci_msi_create_irq_domain(dev_fwnode(dev),
++ &advk_msi_domain_info,
++ pcie->msi_inner_domain);
+ if (!pcie->msi_domain) {
+ irq_domain_remove(pcie->msi_inner_domain);
+ return -ENOMEM;
+@@ -1361,7 +1416,6 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
+ }
+
+ irq_chip->irq_mask = advk_pcie_irq_mask;
+- irq_chip->irq_mask_ack = advk_pcie_irq_mask;
+ irq_chip->irq_unmask = advk_pcie_irq_unmask;
+
+ pcie->irq_domain =
+@@ -1383,6 +1437,70 @@ static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie)
+ irq_domain_remove(pcie->irq_domain);
+ }
+
++static struct irq_chip advk_rp_irq_chip = {
++ .name = "advk-RP",
++};
++
++static int advk_pcie_rp_irq_map(struct irq_domain *h,
++ unsigned int virq, irq_hw_number_t hwirq)
++{
++ struct advk_pcie *pcie = h->host_data;
++
++ irq_set_chip_and_handler(virq, &advk_rp_irq_chip, handle_simple_irq);
++ irq_set_chip_data(virq, pcie);
++
++ return 0;
++}
++
++static const struct irq_domain_ops advk_pcie_rp_irq_domain_ops = {
++ .map = advk_pcie_rp_irq_map,
++ .xlate = irq_domain_xlate_onecell,
++};
++
++static int advk_pcie_init_rp_irq_domain(struct advk_pcie *pcie)
++{
++ pcie->rp_irq_domain = irq_domain_add_linear(NULL, 1,
++ &advk_pcie_rp_irq_domain_ops,
++ pcie);
++ if (!pcie->rp_irq_domain) {
++ dev_err(&pcie->pdev->dev, "Failed to add Root Port IRQ domain\n");
++ return -ENOMEM;
++ }
++
++ return 0;
++}
++
++static void advk_pcie_remove_rp_irq_domain(struct advk_pcie *pcie)
++{
++ irq_domain_remove(pcie->rp_irq_domain);
++}
++
++static void advk_pcie_handle_pme(struct advk_pcie *pcie)
++{
++ u32 requester = advk_readl(pcie, PCIE_MSG_LOG_REG) >> 16;
++
++ advk_writel(pcie, PCIE_MSG_PM_PME_MASK, PCIE_ISR0_REG);
++
++ /*
++ * PCIE_MSG_LOG_REG contains the last inbound message, so store
++ * the requester ID only when PME was not asserted yet.
++ * Also do not trigger PME interrupt when PME is still asserted.
++ */
++ if (!(le32_to_cpu(pcie->bridge.pcie_conf.rootsta) & PCI_EXP_RTSTA_PME)) {
++ pcie->bridge.pcie_conf.rootsta = cpu_to_le32(requester | PCI_EXP_RTSTA_PME);
++
++ /*
++ * Trigger PME interrupt only if PMEIE bit in Root Control is set.
++ * Aardvark HW returns zero for PCI_EXP_FLAGS_IRQ, so use PCIe interrupt 0.
++ */
++ if (!(le16_to_cpu(pcie->bridge.pcie_conf.rootctl) & PCI_EXP_RTCTL_PMEIE))
++ return;
++
++ if (generic_handle_domain_irq(pcie->rp_irq_domain, 0) == -EINVAL)
++ dev_err_ratelimited(&pcie->pdev->dev, "unhandled PME IRQ\n");
++ }
++}
++
+ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ {
+ u32 msi_val, msi_mask, msi_status, msi_idx;
+@@ -1418,6 +1536,22 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
+ isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG);
+ isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK);
+
++	/* Process the PME interrupt first so as not to miss the PME requester id */
++ if (isr0_status & PCIE_MSG_PM_PME_MASK)
++ advk_pcie_handle_pme(pcie);
++
++ /* Process ERR interrupt */
++ if (isr0_status & PCIE_ISR0_ERR_MASK) {
++ advk_writel(pcie, PCIE_ISR0_ERR_MASK, PCIE_ISR0_REG);
++
++ /*
++ * Aardvark HW returns zero for PCI_ERR_ROOT_AER_IRQ, so use
++ * PCIe interrupt 0
++ */
++ if (generic_handle_domain_irq(pcie->rp_irq_domain, 0) == -EINVAL)
++ dev_err_ratelimited(&pcie->pdev->dev, "unhandled ERR IRQ\n");
++ }
++
+ /* Process MSI interrupts */
+ if (isr0_status & PCIE_ISR0_MSI_INT_PENDING)
+ advk_pcie_handle_msi(pcie);
+@@ -1430,28 +1564,50 @@ static void advk_pcie_handle_int(struct advk_pcie *pcie)
+ advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i),
+ PCIE_ISR1_REG);
+
+- generic_handle_domain_irq(pcie->irq_domain, i);
++ if (generic_handle_domain_irq(pcie->irq_domain, i) == -EINVAL)
++ dev_err_ratelimited(&pcie->pdev->dev, "unexpected INT%c IRQ\n",
++ (char)i + 'A');
+ }
+ }
+
+-static irqreturn_t advk_pcie_irq_handler(int irq, void *arg)
++static void advk_pcie_irq_handler(struct irq_desc *desc)
+ {
+- struct advk_pcie *pcie = arg;
+- u32 status;
++ struct advk_pcie *pcie = irq_desc_get_handler_data(desc);
++ struct irq_chip *chip = irq_desc_get_chip(desc);
++ u32 val, mask, status;
+
+- status = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG);
+- if (!(status & PCIE_IRQ_CORE_INT))
+- return IRQ_NONE;
++ chained_irq_enter(chip, desc);
+
+- advk_pcie_handle_int(pcie);
++ val = advk_readl(pcie, HOST_CTRL_INT_STATUS_REG);
++ mask = advk_readl(pcie, HOST_CTRL_INT_MASK_REG);
++ status = val & ((~mask) & PCIE_IRQ_ALL_MASK);
+
+- /* Clear interrupt */
+- advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG);
++ if (status & PCIE_IRQ_CORE_INT) {
++ advk_pcie_handle_int(pcie);
+
+- return IRQ_HANDLED;
++ /* Clear interrupt */
++ advk_writel(pcie, PCIE_IRQ_CORE_INT, HOST_CTRL_INT_STATUS_REG);
++ }
++
++ chained_irq_exit(chip, desc);
++}
++
++static int advk_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
++{
++ struct advk_pcie *pcie = dev->bus->sysdata;
++
++ /*
++ * Emulated root bridge has its own emulated irq chip and irq domain.
++ * Argument pin is the INTx pin (1=INTA, 2=INTB, 3=INTC, 4=INTD) and
++ * hwirq for irq_create_mapping() is indexed from zero.
++ */
++ if (pci_is_root_bus(dev->bus))
++ return irq_create_mapping(pcie->rp_irq_domain, pin - 1);
++ else
++ return of_irq_parse_and_map_pci(dev, slot, pin);
+ }
+
+-static void __maybe_unused advk_pcie_disable_phy(struct advk_pcie *pcie)
++static void advk_pcie_disable_phy(struct advk_pcie *pcie)
+ {
+ phy_power_off(pcie->phy);
+ phy_exit(pcie->phy);
+@@ -1515,7 +1671,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ struct advk_pcie *pcie;
+ struct pci_host_bridge *bridge;
+ struct resource_entry *entry;
+- int ret, irq;
++ int ret;
+
+ bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
+ if (!bridge)
+@@ -1601,17 +1757,9 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ if (IS_ERR(pcie->base))
+ return PTR_ERR(pcie->base);
+
+- irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
+- return irq;
+-
+- ret = devm_request_irq(dev, irq, advk_pcie_irq_handler,
+- IRQF_SHARED | IRQF_NO_THREAD, "advk-pcie",
+- pcie);
+- if (ret) {
+- dev_err(dev, "Failed to register interrupt\n");
+- return ret;
+- }
++ pcie->irq = platform_get_irq(pdev, 0);
++ if (pcie->irq < 0)
++ return pcie->irq;
+
+ pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node,
+ "reset-gpios", 0,
+@@ -1660,11 +1808,24 @@ static int advk_pcie_probe(struct platform_device *pdev)
+ return ret;
+ }
+
++ ret = advk_pcie_init_rp_irq_domain(pcie);
++ if (ret) {
++ dev_err(dev, "Failed to initialize irq\n");
++ advk_pcie_remove_msi_irq_domain(pcie);
++ advk_pcie_remove_irq_domain(pcie);
++ return ret;
++ }
++
++ irq_set_chained_handler_and_data(pcie->irq, advk_pcie_irq_handler, pcie);
++
+ bridge->sysdata = pcie;
+ bridge->ops = &advk_pcie_ops;
++ bridge->map_irq = advk_pcie_map_irq;
+
+ ret = pci_host_probe(bridge);
+ if (ret < 0) {
++ irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
++ advk_pcie_remove_rp_irq_domain(pcie);
+ advk_pcie_remove_msi_irq_domain(pcie);
+ advk_pcie_remove_irq_domain(pcie);
+ return ret;
+@@ -1712,7 +1873,11 @@ static int advk_pcie_remove(struct platform_device *pdev)
+ advk_writel(pcie, PCIE_ISR1_ALL_MASK, PCIE_ISR1_REG);
+ advk_writel(pcie, PCIE_IRQ_ALL_MASK, HOST_CTRL_INT_STATUS_REG);
+
++ /* Remove IRQ handler */
++ irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
++
+ /* Remove IRQ domains */
++ advk_pcie_remove_rp_irq_domain(pcie);
+ advk_pcie_remove_msi_irq_domain(pcie);
+ advk_pcie_remove_irq_domain(pcie);
+
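
The aardvark rework converts the flat IRQF_SHARED handler into a chained one
that reads the raw status, filters it through the enable mask, dispatches
each pending source, and acknowledges only what it handled. The core
read-mask-dispatch-ack loop, stripped down to a userspace sketch with an
invented register layout:

#include <stdio.h>

/* Fake interrupt controller registers. */
static unsigned int isr_status = 0x5;   /* sources 0 and 2 pending */
static unsigned int isr_mask = 0x4;     /* source 2 is masked off */

static void handle_source(int bit)
{
        printf("handling source %d\n", bit);
}

static void irq_dispatch(void)
{
        unsigned int pending = isr_status & ~isr_mask;
        int bit;

        for (bit = 0; bit < 32; bit++) {
                if (!(pending & (1u << bit)))
                        continue;
                handle_source(bit);
                isr_status &= ~(1u << bit);     /* ack what was handled */
        }
}

int main(void)
{
        irq_dispatch();         /* handles only source 0 */
        return 0;
}
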
+diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
+index 8e87a31e329d0..ba6d787896606 100644
+--- a/drivers/s390/block/dasd.c
++++ b/drivers/s390/block/dasd.c
+@@ -1422,6 +1422,13 @@ int dasd_start_IO(struct dasd_ccw_req *cqr)
+ if (!cqr->lpm)
+ cqr->lpm = dasd_path_get_opm(device);
+ }
++ /*
++	 * Remember the number of formatted tracks to prevent double format on
++ * ESE devices
++ */
++ if (cqr->block)
++ cqr->trkcount = atomic_read(&cqr->block->trkcount);
++
+ if (cqr->cpmode == 1) {
+ rc = ccw_device_tm_start(device->cdev, cqr->cpaddr,
+ (long) cqr, cqr->lpm);
+@@ -1639,6 +1646,7 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ unsigned long now;
+ int nrf_suppressed = 0;
+ int fp_suppressed = 0;
++ struct request *req;
+ u8 *sense = NULL;
+ int expires;
+
+@@ -1739,7 +1747,12 @@ void dasd_int_handler(struct ccw_device *cdev, unsigned long intparm,
+ }
+
+ if (dasd_ese_needs_format(cqr->block, irb)) {
+- if (rq_data_dir((struct request *)cqr->callback_data) == READ) {
++ req = dasd_get_callback_data(cqr);
++ if (!req) {
++ cqr->status = DASD_CQR_ERROR;
++ return;
++ }
++ if (rq_data_dir(req) == READ) {
+ device->discipline->ese_read(cqr, irb);
+ cqr->status = DASD_CQR_SUCCESS;
+ cqr->stopclk = now;
+@@ -2765,8 +2778,7 @@ static void __dasd_cleanup_cqr(struct dasd_ccw_req *cqr)
+ * complete a request partially.
+ */
+ if (proc_bytes) {
+- blk_update_request(req, BLK_STS_OK,
+- blk_rq_bytes(req) - proc_bytes);
++ blk_update_request(req, BLK_STS_OK, proc_bytes);
+ blk_mq_requeue_request(req, true);
+ } else if (likely(!blk_should_fake_timeout(req->q))) {
+ blk_mq_complete_request(req);
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 8410a25a65c13..e46461b4d8a75 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -3083,13 +3083,24 @@ static int dasd_eckd_format_device(struct dasd_device *base,
+ }
+
+ static bool test_and_set_format_track(struct dasd_format_entry *to_format,
+- struct dasd_block *block)
++ struct dasd_ccw_req *cqr)
+ {
++ struct dasd_block *block = cqr->block;
+ struct dasd_format_entry *format;
+ unsigned long flags;
+ bool rc = false;
+
+ spin_lock_irqsave(&block->format_lock, flags);
++ if (cqr->trkcount != atomic_read(&block->trkcount)) {
++ /*
++ * The number of formatted tracks has changed after request
++		 * start and we cannot tell if the current track was involved.
++		 * To avoid data corruption, treat it as if the current track is
++		 * involved.
++ */
++ rc = true;
++ goto out;
++ }
+ list_for_each_entry(format, &block->format_list, list) {
+ if (format->track == to_format->track) {
+ rc = true;
+@@ -3109,6 +3120,7 @@ static void clear_format_track(struct dasd_format_entry *format,
+ unsigned long flags;
+
+ spin_lock_irqsave(&block->format_lock, flags);
++ atomic_inc(&block->trkcount);
+ list_del_init(&format->list);
+ spin_unlock_irqrestore(&block->format_lock, flags);
+ }
+@@ -3145,7 +3157,7 @@ dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr,
+ sector_t curr_trk;
+ int rc;
+
+- req = cqr->callback_data;
++ req = dasd_get_callback_data(cqr);
+ block = cqr->block;
+ base = block->base;
+ private = base->private;
+@@ -3170,8 +3182,11 @@ dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr,
+ }
+ format->track = curr_trk;
+ /* test if track is already in formatting by another thread */
+- if (test_and_set_format_track(format, block))
++ if (test_and_set_format_track(format, cqr)) {
++ /* this is no real error so do not count down retries */
++ cqr->retries++;
+ return ERR_PTR(-EEXIST);
++ }
+
+ fdata.start_unit = curr_trk;
+ fdata.stop_unit = curr_trk;
+@@ -3270,12 +3285,11 @@ static int dasd_eckd_ese_read(struct dasd_ccw_req *cqr, struct irb *irb)
+ cqr->proc_bytes = blk_count * blksize;
+ return 0;
+ }
+- if (dst && !skip_block) {
+- dst += off;
++ if (dst && !skip_block)
+ memset(dst, 0, blksize);
+- } else {
++ else
+ skip_block--;
+- }
++ dst += blksize;
+ blk_count++;
+ }
+ }
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index 8b458010f88a1..6e7f1a4a28a03 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -188,6 +188,7 @@ struct dasd_ccw_req {
+ void (*callback)(struct dasd_ccw_req *, void *data);
+ void *callback_data;
+ unsigned int proc_bytes; /* bytes for partial completion */
++ unsigned int trkcount; /* count formatted tracks */
+ };
+
+ /*
+@@ -611,6 +612,7 @@ struct dasd_block {
+
+ struct list_head format_list;
+ spinlock_t format_lock;
++ atomic_t trkcount;
+ };
+
+ struct dasd_attention_data {
+@@ -757,6 +759,18 @@ dasd_check_blocksize(int bsize)
+ return 0;
+ }
+
++/*
++ * return the callback data of the original request in case there are
++ * ERP requests built on top of it
++ */
++static inline void *dasd_get_callback_data(struct dasd_ccw_req *cqr)
++{
++ while (cqr->refers)
++ cqr = cqr->refers;
++
++ return cqr->callback_data;
++}
++
+ /* externals in dasd.c */
+ #define DASD_PROFILE_OFF 0
+ #define DASD_PROFILE_ON 1
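
The dasd fix is an optimistic-concurrency scheme: every request samples the
formatted-track generation counter (trkcount) at start; when an ESE format is
about to begin, a changed counter means another format completed in between
and the current track may already be covered, so the request is treated as
racing and retried rather than risking a double format. A compact sketch of
the sample-and-revalidate pattern with C11 atomics:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint generation;

struct req {
        unsigned int snapshot;
};

static void req_start(struct req *rq)
{
        rq->snapshot = atomic_load(&generation);
}

/* Called whenever a concurrent format completes. */
static void format_done(void)
{
        atomic_fetch_add(&generation, 1);
}

/* True if the world may have changed since req_start(). */
static bool req_is_stale(const struct req *rq)
{
        return rq->snapshot != atomic_load(&generation);
}

int main(void)
{
        struct req rq;

        req_start(&rq);
        format_done();
        printf("stale=%d (expect 1)\n", req_is_stale(&rq));
        return 0;
}
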
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 00f0f282e7a13..10a9369c9dea4 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1438,7 +1438,10 @@ fb_release(struct inode *inode, struct file *file)
+ __acquires(&info->lock)
+ __releases(&info->lock)
+ {
+- struct fb_info * const info = file->private_data;
++ struct fb_info * const info = file_fb_info(file);
++
++ if (!info)
++ return -ENODEV;
+
+ lock_fb_info(info);
+ if (info->fbops->fb_release)
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index b3e46aabc3d86..5c0035316dd01 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -346,6 +346,17 @@ static inline bool btrfs_inode_in_log(struct btrfs_inode *inode, u64 generation)
+ return ret;
+ }
+
++/*
++ * Check if the inode has flags compatible with compression
++ */
++static inline bool btrfs_inode_can_compress(const struct btrfs_inode *inode)
++{
++ if (inode->flags & BTRFS_INODE_NODATACOW ||
++ inode->flags & BTRFS_INODE_NODATASUM)
++ return false;
++ return true;
++}
++
+ struct btrfs_dio_private {
+ struct inode *inode;
+
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index ed986c70cbc5e..e5f13922a18fe 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3569,6 +3569,17 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ if (sectorsize < PAGE_SIZE) {
+ struct btrfs_subpage_info *subpage_info;
+
++ /*
++ * V1 space cache has some hardcoded PAGE_SIZE usage, and is
++ * going to be deprecated.
++ *
++ * Force to use v2 cache for subpage case.
++		 * Force use of the v2 cache for the subpage case.
++ btrfs_clear_opt(fs_info->mount_opt, SPACE_CACHE);
++ btrfs_set_and_info(fs_info, FREE_SPACE_TREE,
++ "forcing free space tree for sector size %u with page size %lu",
++ sectorsize, PAGE_SIZE);
++
+ btrfs_warn(fs_info,
+ "read-write for sector size %u with page size %lu is experimental",
+ sectorsize, PAGE_SIZE);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index ecd305649e129..10e205fbad6cf 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -485,17 +485,6 @@ static noinline int add_async_extent(struct async_chunk *cow,
+ return 0;
+ }
+
+-/*
+- * Check if the inode has flags compatible with compression
+- */
+-static inline bool inode_can_compress(struct btrfs_inode *inode)
+-{
+- if (inode->flags & BTRFS_INODE_NODATACOW ||
+- inode->flags & BTRFS_INODE_NODATASUM)
+- return false;
+- return true;
+-}
+-
+ /*
+ * Check if the inode needs to be submitted to compression, based on mount
+ * options, defragmentation, properties or heuristics.
+@@ -505,7 +494,7 @@ static inline int inode_need_compress(struct btrfs_inode *inode, u64 start,
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+
+- if (!inode_can_compress(inode)) {
++ if (!btrfs_inode_can_compress(inode)) {
+ WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG),
+ KERN_ERR "BTRFS: unexpected compression for ino %llu\n",
+ btrfs_ino(inode));
+@@ -2015,7 +2004,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
+ (zoned && btrfs_is_data_reloc_root(inode->root)));
+ ret = run_delalloc_nocow(inode, locked_page, start, end,
+ page_started, nr_written);
+- } else if (!inode_can_compress(inode) ||
++ } else if (!btrfs_inode_can_compress(inode) ||
+ !inode_need_compress(inode, start, end)) {
+ if (zoned)
+ ret = run_delalloc_zoned(inode, locked_page, start, end,
+diff --git a/fs/btrfs/props.c b/fs/btrfs/props.c
+index 1a6d2d5b4b333..1b31481f9e72c 100644
+--- a/fs/btrfs/props.c
++++ b/fs/btrfs/props.c
+@@ -17,9 +17,11 @@ static DEFINE_HASHTABLE(prop_handlers_ht, BTRFS_PROP_HANDLERS_HT_BITS);
+ struct prop_handler {
+ struct hlist_node node;
+ const char *xattr_name;
+- int (*validate)(const char *value, size_t len);
++ int (*validate)(const struct btrfs_inode *inode, const char *value,
++ size_t len);
+ int (*apply)(struct inode *inode, const char *value, size_t len);
+ const char *(*extract)(struct inode *inode);
++ bool (*ignore)(const struct btrfs_inode *inode);
+ int inheritable;
+ };
+
+@@ -55,7 +57,8 @@ find_prop_handler(const char *name,
+ return NULL;
+ }
+
+-int btrfs_validate_prop(const char *name, const char *value, size_t value_len)
++int btrfs_validate_prop(const struct btrfs_inode *inode, const char *name,
++ const char *value, size_t value_len)
+ {
+ const struct prop_handler *handler;
+
+@@ -69,7 +72,29 @@ int btrfs_validate_prop(const char *name, const char *value, size_t value_len)
+ if (value_len == 0)
+ return 0;
+
+- return handler->validate(value, value_len);
++ return handler->validate(inode, value, value_len);
++}
++
++/*
++ * Check if a property should be ignored (not set) for an inode.
++ *
++ * @inode: The target inode.
++ * @name: The property's name.
++ *
++ * The caller must be sure the given property name is valid, for example by
++ * having previously called btrfs_validate_prop().
++ *
++ * Returns: true if the property should be ignored for the given inode
++ * false if the property must not be ignored for the given inode
++ */
++bool btrfs_ignore_prop(const struct btrfs_inode *inode, const char *name)
++{
++ const struct prop_handler *handler;
++
++ handler = find_prop_handler(name, NULL);
++ ASSERT(handler != NULL);
++
++ return handler->ignore(inode);
+ }
+
+ int btrfs_set_prop(struct btrfs_trans_handle *trans, struct inode *inode,
+@@ -252,8 +277,12 @@ int btrfs_load_inode_props(struct inode *inode, struct btrfs_path *path)
+ return ret;
+ }
+
+-static int prop_compression_validate(const char *value, size_t len)
++static int prop_compression_validate(const struct btrfs_inode *inode,
++ const char *value, size_t len)
+ {
++ if (!btrfs_inode_can_compress(inode))
++ return -EINVAL;
++
+ if (!value)
+ return 0;
+
+@@ -310,6 +339,22 @@ static int prop_compression_apply(struct inode *inode, const char *value,
+ return 0;
+ }
+
++static bool prop_compression_ignore(const struct btrfs_inode *inode)
++{
++ /*
++ * Compression only has effect for regular files, and for directories
++ * we set it just to propagate it to new files created inside them.
++ * Everything else (symlinks, devices, sockets, fifos) is pointless as
++ * it will do nothing, so don't waste metadata space on a compression
++ * xattr for anything that is neither a file nor a directory.
++ */
++ if (!S_ISREG(inode->vfs_inode.i_mode) &&
++ !S_ISDIR(inode->vfs_inode.i_mode))
++ return true;
++
++ return false;
++}
++
+ static const char *prop_compression_extract(struct inode *inode)
+ {
+ switch (BTRFS_I(inode)->prop_compress) {
+@@ -330,6 +375,7 @@ static struct prop_handler prop_handlers[] = {
+ .validate = prop_compression_validate,
+ .apply = prop_compression_apply,
+ .extract = prop_compression_extract,
++ .ignore = prop_compression_ignore,
+ .inheritable = 1
+ },
+ };
+@@ -356,6 +402,9 @@ static int inherit_props(struct btrfs_trans_handle *trans,
+ if (!h->inheritable)
+ continue;
+
++ if (h->ignore(BTRFS_I(inode)))
++ continue;
++
+ value = h->extract(parent);
+ if (!value)
+ continue;
+@@ -364,7 +413,7 @@ static int inherit_props(struct btrfs_trans_handle *trans,
+ * This is not strictly necessary as the property should be
+ * valid, but in case it isn't, don't propagate it further.
+ */
+- ret = h->validate(value, strlen(value));
++ ret = h->validate(BTRFS_I(inode), value, strlen(value));
+ if (ret)
+ continue;
+
+diff --git a/fs/btrfs/props.h b/fs/btrfs/props.h
+index 40b2c65b518c6..59bea741cfcf4 100644
+--- a/fs/btrfs/props.h
++++ b/fs/btrfs/props.h
+@@ -13,7 +13,9 @@ void __init btrfs_props_init(void);
+ int btrfs_set_prop(struct btrfs_trans_handle *trans, struct inode *inode,
+ const char *name, const char *value, size_t value_len,
+ int flags);
+-int btrfs_validate_prop(const char *name, const char *value, size_t value_len);
++int btrfs_validate_prop(const struct btrfs_inode *inode, const char *name,
++ const char *value, size_t value_len);
++bool btrfs_ignore_prop(const struct btrfs_inode *inode, const char *name);
+
+ int btrfs_load_inode_props(struct inode *inode, struct btrfs_path *path);
+
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index beb7f72d50b86..11927d440f11a 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -919,6 +919,9 @@ static ssize_t btrfs_exclusive_operation_show(struct kobject *kobj,
+ case BTRFS_EXCLOP_BALANCE:
+ str = "balance\n";
+ break;
++ case BTRFS_EXCLOP_BALANCE_PAUSED:
++ str = "balance paused\n";
++ break;
+ case BTRFS_EXCLOP_DEV_ADD:
+ str = "device add\n";
+ break;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 7a0bfa5bedb95..049ee19041c7b 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -5655,6 +5655,18 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ mutex_lock(&inode->log_mutex);
+ }
+
++ /*
++ * For symlinks, we must always log their content, which is stored in an
++ * inline extent, otherwise we could end up with an empty symlink after
++ * log replay, which is invalid on linux (symlink(2) returns -ENOENT if
++ * one attempts to create an empty symlink).
++ * We don't need to worry about flushing delalloc, because when we create
++ * the inline extent when the symlink is created (we never have delalloc
++ * for symlinks).
++ */
++ if (S_ISLNK(inode->vfs_inode.i_mode))
++ inode_only = LOG_INODE_ALL;
++
+ /*
+ * This is for cases where logging a directory could result in losing a
+ * a file after replaying the log. For example, if we move a file from a
+@@ -6015,7 +6027,7 @@ process_leaf:
+ }
+
+ ctx->log_new_dentries = false;
+- if (type == BTRFS_FT_DIR || type == BTRFS_FT_SYMLINK)
++ if (type == BTRFS_FT_DIR)
+ log_mode = LOG_INODE_ALL;
+ ret = btrfs_log_inode(trans, BTRFS_I(di_inode),
+ log_mode, ctx);
+diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
+index 99abf41b89b92..85691dc2232fa 100644
+--- a/fs/btrfs/xattr.c
++++ b/fs/btrfs/xattr.c
+@@ -262,7 +262,8 @@ int btrfs_setxattr_trans(struct inode *inode, const char *name,
+ inode_inc_iversion(inode);
+ inode->i_ctime = current_time(inode);
+ ret = btrfs_update_inode(trans, root, BTRFS_I(inode));
+- BUG_ON(ret);
++ if (ret)
++ btrfs_abort_transaction(trans, ret);
+ out:
+ if (start_trans)
+ btrfs_end_transaction(trans);
+@@ -403,10 +404,13 @@ static int btrfs_xattr_handler_set_prop(const struct xattr_handler *handler,
+ struct btrfs_root *root = BTRFS_I(inode)->root;
+
+ name = xattr_full_name(handler, name);
+- ret = btrfs_validate_prop(name, value, size);
++ ret = btrfs_validate_prop(BTRFS_I(inode), name, value, size);
+ if (ret)
+ return ret;
+
++ if (btrfs_ignore_prop(BTRFS_I(inode), name))
++ return 0;
++
+ trans = btrfs_start_transaction(root, 2);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
+@@ -416,7 +420,8 @@ static int btrfs_xattr_handler_set_prop(const struct xattr_handler *handler,
+ inode_inc_iversion(inode);
+ inode->i_ctime = current_time(inode);
+ ret = btrfs_update_inode(trans, root, BTRFS_I(inode));
+- BUG_ON(ret);
++ if (ret)
++ btrfs_abort_transaction(trans, ret);
+ }
+
+ btrfs_end_transaction(trans);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index c36fa0d0d438b..3d307854c6504 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -363,6 +363,14 @@ static void nfs4_setup_readdir(u64 cookie, __be32 *verifier, struct dentry *dent
+ kunmap_atomic(start);
+ }
+
++static void nfs4_fattr_set_prechange(struct nfs_fattr *fattr, u64 version)
++{
++ if (!(fattr->valid & NFS_ATTR_FATTR_PRECHANGE)) {
++ fattr->pre_change_attr = version;
++ fattr->valid |= NFS_ATTR_FATTR_PRECHANGE;
++ }
++}
++
+ static void nfs4_test_and_free_stateid(struct nfs_server *server,
+ nfs4_stateid *stateid,
+ const struct cred *cred)
+@@ -6556,7 +6564,9 @@ static void nfs4_delegreturn_release(void *calldata)
+ pnfs_roc_release(&data->lr.arg, &data->lr.res,
+ data->res.lr_ret);
+ if (inode) {
+- nfs_post_op_update_inode_force_wcc(inode, &data->fattr);
++ nfs4_fattr_set_prechange(&data->fattr,
++ inode_peek_iversion_raw(inode));
++ nfs_refresh_inode(inode, &data->fattr);
+ nfs_iput_and_deactive(inode);
+ }
+ kfree(calldata);
+diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
+index 24eea1b05ca27..29917850f0794 100644
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -270,5 +270,6 @@ struct plat_stmmacenet_data {
+ int msi_rx_base_vec;
+ int msi_tx_base_vec;
+ bool use_phy_wol;
++ bool sph_disable;
+ };
+ #endif
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index 99cbdf55a8bda..f09c60393e559 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -29,12 +29,14 @@ extern struct irqaction chained_action;
+ * IRQTF_WARNED - warning "IRQ_WAKE_THREAD w/o thread_fn" has been printed
+ * IRQTF_AFFINITY - irq thread is requested to adjust affinity
+ * IRQTF_FORCED_THREAD - irq action is force threaded
++ * IRQTF_READY - signals that irq thread is ready
+ */
+ enum {
+ IRQTF_RUNTHREAD,
+ IRQTF_WARNED,
+ IRQTF_AFFINITY,
+ IRQTF_FORCED_THREAD,
++ IRQTF_READY,
+ };
+
+ /*
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 2267e6527db3c..a4426a00b9edf 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -407,6 +407,7 @@ static struct irq_desc *alloc_desc(int irq, int node, unsigned int flags,
+ lockdep_set_class(&desc->lock, &irq_desc_lock_class);
+ mutex_init(&desc->request_mutex);
+ init_rcu_head(&desc->rcu);
++ init_waitqueue_head(&desc->wait_for_threads);
+
+ desc_set_defaults(irq, desc, node, affinity, owner);
+ irqd_set(&desc->irq_data, flags);
+@@ -575,6 +576,7 @@ int __init early_irq_init(void)
+ raw_spin_lock_init(&desc[i].lock);
+ lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
+ mutex_init(&desc[i].request_mutex);
++ init_waitqueue_head(&desc[i].wait_for_threads);
+ desc_set_defaults(i, &desc[i], node, NULL, NULL);
+ }
+ return arch_early_irq_init();
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index f23ffd30385b1..8915fba0697a0 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -1248,6 +1248,31 @@ static void irq_wake_secondary(struct irq_desc *desc, struct irqaction *action)
+ raw_spin_unlock_irq(&desc->lock);
+ }
+
++/*
++ * Internal function to notify that a interrupt thread is ready.
++ */
++static void irq_thread_set_ready(struct irq_desc *desc,
++ struct irqaction *action)
++{
++ set_bit(IRQTF_READY, &action->thread_flags);
++ wake_up(&desc->wait_for_threads);
++}
++
++/*
++ * Internal function to wake up a interrupt thread and wait until it is
++ * ready.
++ */
++static void wake_up_and_wait_for_irq_thread_ready(struct irq_desc *desc,
++ struct irqaction *action)
++{
++ if (!action || !action->thread)
++ return;
++
++ wake_up_process(action->thread);
++ wait_event(desc->wait_for_threads,
++ test_bit(IRQTF_READY, &action->thread_flags));
++}
++
+ /*
+ * Interrupt handler thread
+ */
+@@ -1259,6 +1284,8 @@ static int irq_thread(void *data)
+ irqreturn_t (*handler_fn)(struct irq_desc *desc,
+ struct irqaction *action);
+
++ irq_thread_set_ready(desc, action);
++
+ sched_set_fifo(current);
+
+ if (force_irqthreads() && test_bit(IRQTF_FORCED_THREAD,
+@@ -1683,8 +1710,6 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
+ }
+
+ if (!shared) {
+- init_waitqueue_head(&desc->wait_for_threads);
+-
+ /* Setup the type (level, edge polarity) if configured: */
+ if (new->flags & IRQF_TRIGGER_MASK) {
+ ret = __irq_set_trigger(desc,
+@@ -1780,14 +1805,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
+
+ irq_setup_timings(desc, new);
+
+- /*
+- * Strictly no need to wake it up, but hung_task complains
+- * when no hard interrupt wakes the thread up.
+- */
+- if (new->thread)
+- wake_up_process(new->thread);
+- if (new->secondary)
+- wake_up_process(new->secondary->thread);
++ wake_up_and_wait_for_irq_thread_ready(desc, new);
++ wake_up_and_wait_for_irq_thread_ready(desc, new->secondary);
+
+ register_irq_proc(irq, desc);
+ new->dir = NULL;
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index dcdcb85121e40..3b1398fbddaf8 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -482,7 +482,7 @@ static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf)
+ * of the following timestamps. Callers need to be aware of that and
+ * deal with it.
+ */
+-u64 ktime_get_mono_fast_ns(void)
++u64 notrace ktime_get_mono_fast_ns(void)
+ {
+ return __ktime_get_fast_ns(&tk_fast_mono);
+ }
+@@ -494,7 +494,7 @@ EXPORT_SYMBOL_GPL(ktime_get_mono_fast_ns);
+ * Contrary to ktime_get_mono_fast_ns() this is always correct because the
+ * conversion factor is not affected by NTP/PTP correction.
+ */
+-u64 ktime_get_raw_fast_ns(void)
++u64 notrace ktime_get_raw_fast_ns(void)
+ {
+ return __ktime_get_fast_ns(&tk_fast_raw);
+ }
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 8c753dcefe7fc..26821487a0573 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1146,6 +1146,11 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+
+ lock_sock(sk);
+
++ if (so->bound) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ /* do not register frame reception for functional addressing */
+ if (so->opt.flags & CAN_ISOTP_SF_BROADCAST)
+ do_rx_reg = 0;
+@@ -1156,10 +1161,6 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ goto out;
+ }
+
+- if (so->bound && addr->can_ifindex == so->ifindex &&
+- rx_id == so->rxid && tx_id == so->txid)
+- goto out;
+-
+ dev = dev_get_by_index(net, addr->can_ifindex);
+ if (!dev) {
+ err = -ENODEV;
+@@ -1186,19 +1187,6 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+
+ dev_put(dev);
+
+- if (so->bound && do_rx_reg) {
+- /* unregister old filter */
+- if (so->ifindex) {
+- dev = dev_get_by_index(net, so->ifindex);
+- if (dev) {
+- can_rx_unregister(net, dev, so->rxid,
+- SINGLE_MASK(so->rxid),
+- isotp_rcv, sk);
+- dev_put(dev);
+- }
+- }
+- }
+-
+ /* switch to new settings */
+ so->ifindex = ifindex;
+ so->rxid = rx_id;
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 2ad3c7b42d6d2..1d9e6d5e9a76c 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -2403,9 +2403,10 @@ int ip_mc_source(int add, int omode, struct sock *sk, struct
+ /* decrease mem now to avoid the memleak warning */
+ atomic_sub(struct_size(psl, sl_addr, psl->sl_max),
+ &sk->sk_omem_alloc);
+- kfree_rcu(psl, rcu);
+ }
+ rcu_assign_pointer(pmc->sflist, newpsl);
++ if (psl)
++ kfree_rcu(psl, rcu);
+ psl = newpsl;
+ }
+ rv = 1; /* > 0 for insert logic below if sl_count is 0 */
+@@ -2507,11 +2508,13 @@ int ip_mc_msfilter(struct sock *sk, struct ip_msfilter *msf, int ifindex)
+ /* decrease mem now to avoid the memleak warning */
+ atomic_sub(struct_size(psl, sl_addr, psl->sl_max),
+ &sk->sk_omem_alloc);
+- kfree_rcu(psl, rcu);
+- } else
++ } else {
+ (void) ip_mc_del_src(in_dev, &msf->imsf_multiaddr, pmc->sfmode,
+ 0, NULL, 0);
++ }
+ rcu_assign_pointer(pmc->sflist, newpsl);
++ if (psl)
++ kfree_rcu(psl, rcu);
+ pmc->sfmode = msf->imsf_fmode;
+ err = 0;
+ done:
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index 909f937befd71..7f695c39d9a8c 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -460,10 +460,10 @@ int ip6_mc_source(int add, int omode, struct sock *sk,
+ newpsl->sl_addr[i] = psl->sl_addr[i];
+ atomic_sub(struct_size(psl, sl_addr, psl->sl_max),
+ &sk->sk_omem_alloc);
+- kfree_rcu(psl, rcu);
+ }
++ rcu_assign_pointer(pmc->sflist, newpsl);
++ kfree_rcu(psl, rcu);
+ psl = newpsl;
+- rcu_assign_pointer(pmc->sflist, psl);
+ }
+ rv = 1; /* > 0 for insert logic below if sl_count is 0 */
+ for (i = 0; i < psl->sl_count; i++) {
+@@ -565,12 +565,12 @@ int ip6_mc_msfilter(struct sock *sk, struct group_filter *gsf,
+ psl->sl_count, psl->sl_addr, 0);
+ atomic_sub(struct_size(psl, sl_addr, psl->sl_max),
+ &sk->sk_omem_alloc);
+- kfree_rcu(psl, rcu);
+ } else {
+ ip6_mc_del_src(idev, group, pmc->sfmode, 0, NULL, 0);
+ }
+- mutex_unlock(&idev->mc_lock);
+ rcu_assign_pointer(pmc->sflist, newpsl);
++ mutex_unlock(&idev->mc_lock);
++ kfree_rcu(psl, rcu);
+ pmc->sfmode = gsf->gf_fmode;
+ err = 0;
+ done:
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index dc7a2404efdf9..5b286e1e0a6ff 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -38,7 +38,7 @@ int nfc_fw_download(struct nfc_dev *dev, const char *firmware_name)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -94,7 +94,7 @@ int nfc_dev_up(struct nfc_dev *dev)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -142,7 +142,7 @@ int nfc_dev_down(struct nfc_dev *dev)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -207,7 +207,7 @@ int nfc_start_poll(struct nfc_dev *dev, u32 im_protocols, u32 tm_protocols)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -246,7 +246,7 @@ int nfc_stop_poll(struct nfc_dev *dev)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -291,7 +291,7 @@ int nfc_dep_link_up(struct nfc_dev *dev, int target_index, u8 comm_mode)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -335,7 +335,7 @@ int nfc_dep_link_down(struct nfc_dev *dev)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -401,7 +401,7 @@ int nfc_activate_target(struct nfc_dev *dev, u32 target_idx, u32 protocol)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -448,7 +448,7 @@ int nfc_deactivate_target(struct nfc_dev *dev, u32 target_idx, u8 mode)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -495,7 +495,7 @@ int nfc_data_exchange(struct nfc_dev *dev, u32 target_idx, struct sk_buff *skb,
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ kfree_skb(skb);
+ goto error;
+@@ -552,7 +552,7 @@ int nfc_enable_se(struct nfc_dev *dev, u32 se_idx)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -601,7 +601,7 @@ int nfc_disable_se(struct nfc_dev *dev, u32 se_idx)
+
+ device_lock(&dev->dev);
+
+- if (!device_is_registered(&dev->dev)) {
++ if (dev->shutting_down) {
+ rc = -ENODEV;
+ goto error;
+ }
+@@ -1134,6 +1134,7 @@ int nfc_register_device(struct nfc_dev *dev)
+ dev->rfkill = NULL;
+ }
+ }
++ dev->shutting_down = false;
+ device_unlock(&dev->dev);
+
+ rc = nfc_genl_device_added(dev);
+@@ -1166,12 +1167,10 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ rfkill_unregister(dev->rfkill);
+ rfkill_destroy(dev->rfkill);
+ }
++ dev->shutting_down = true;
+ device_unlock(&dev->dev);
+
+ if (dev->ops->check_presence) {
+- device_lock(&dev->dev);
+- dev->shutting_down = true;
+- device_unlock(&dev->dev);
+ del_timer_sync(&dev->check_pres_timer);
+ cancel_work_sync(&dev->check_pres_work);
+ }
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index f184b0db79d40..7c62417ccfd78 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1244,7 +1244,7 @@ int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name,
+ struct sk_buff *msg;
+ void *hdr;
+
+- msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
++ msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
+ if (!msg)
+ return -ENOMEM;
+
+@@ -1260,7 +1260,7 @@ int nfc_genl_fw_download_done(struct nfc_dev *dev, const char *firmware_name,
+
+ genlmsg_end(msg, hdr);
+
+- genlmsg_multicast(&nfc_genl_family, msg, 0, 0, GFP_KERNEL);
++ genlmsg_multicast(&nfc_genl_family, msg, 0, 0, GFP_ATOMIC);
+
+ return 0;
+
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index 5327d130c4b56..2f638f8b7b1e7 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -495,6 +495,14 @@ void rds_tcp_tune(struct socket *sock)
+
+ tcp_sock_set_nodelay(sock->sk);
+ lock_sock(sk);
++ /* TCP timer functions might access net namespace even after
++ * a process which created this net namespace terminated.
++ */
++ if (!sk->sk_net_refcnt) {
++ sk->sk_net_refcnt = 1;
++ get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(net, 1);
++ }
+ if (rtn->sndbuf_size > 0) {
+ sk->sk_sndbuf = rtn->sndbuf_size;
+ sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index a4111408ffd0c..6a1611b0e3037 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -117,6 +117,7 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ local, srx->transport_type, srx->transport.family);
+
+ udp_conf.family = srx->transport.family;
++ udp_conf.use_udp_checksums = true;
+ if (udp_conf.family == AF_INET) {
+ udp_conf.local_ip = srx->transport.sin.sin_addr;
+ udp_conf.local_udp_port = srx->transport.sin.sin_port;
+@@ -124,6 +125,8 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ } else {
+ udp_conf.local_ip6 = srx->transport.sin6.sin6_addr;
+ udp_conf.local_udp_port = srx->transport.sin6.sin6_port;
++ udp_conf.use_udp6_tx_checksums = true;
++ udp_conf.use_udp6_rx_checksums = true;
+ #endif
+ }
+ ret = udp_sock_create(net, &udp_conf, &local->socket);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 0222ad4523a9d..258ebc194ee2b 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1065,10 +1065,13 @@ rpc_task_get_next_xprt(struct rpc_clnt *clnt)
+ static
+ void rpc_task_set_transport(struct rpc_task *task, struct rpc_clnt *clnt)
+ {
+- if (task->tk_xprt &&
+- !(test_bit(XPRT_OFFLINE, &task->tk_xprt->state) &&
+- (task->tk_flags & RPC_TASK_MOVEABLE)))
+- return;
++ if (task->tk_xprt) {
++ if (!(test_bit(XPRT_OFFLINE, &task->tk_xprt->state) &&
++ (task->tk_flags & RPC_TASK_MOVEABLE)))
++ return;
++ xprt_release(task);
++ xprt_put(task->tk_xprt);
++ }
+ if (task->tk_flags & RPC_TASK_NO_ROUND_ROBIN)
+ task->tk_xprt = rpc_task_get_first_xprt(clnt);
+ else
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 7aef2876beb38..eec9569af4c51 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1967,6 +1967,9 @@ static void xs_local_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+ struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
+ int ret;
+
++ if (transport->file)
++ goto force_disconnect;
++
+ if (RPC_IS_ASYNC(task)) {
+ /*
+ * We want the AF_LOCAL connect to be resolved in the
+@@ -1979,11 +1982,17 @@ static void xs_local_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+ */
+ task->tk_rpc_status = -ENOTCONN;
+ rpc_exit(task, -ENOTCONN);
+- return;
++ goto out_wake;
+ }
+ ret = xs_local_setup_socket(transport);
+ if (ret && !RPC_IS_SOFTCONN(task))
+ msleep_interruptible(15000);
++ return;
++force_disconnect:
++ xprt_force_disconnect(xprt);
++out_wake:
++ xprt_clear_connecting(xprt);
++ xprt_wake_pending_tasks(xprt, -ENOTCONN);
+ }
+
+ #if IS_ENABLED(CONFIG_SUNRPC_SWAP)
+@@ -2867,9 +2876,6 @@ static struct rpc_xprt *xs_setup_local(struct xprt_create *args)
+ }
+ xprt_set_bound(xprt);
+ xs_format_peer_addresses(xprt, "local", RPCBIND_NETID_LOCAL);
+- ret = ERR_PTR(xs_local_setup_socket(transport));
+- if (ret)
+- goto out_err;
+ break;
+ default:
+ ret = ERR_PTR(-EAFNOSUPPORT);
+diff --git a/sound/firewire/fireworks/fireworks_hwdep.c b/sound/firewire/fireworks/fireworks_hwdep.c
+index 626c0c34b0b66..3a53914277d35 100644
+--- a/sound/firewire/fireworks/fireworks_hwdep.c
++++ b/sound/firewire/fireworks/fireworks_hwdep.c
+@@ -34,6 +34,7 @@ hwdep_read_resp_buf(struct snd_efw *efw, char __user *buf, long remained,
+ type = SNDRV_FIREWIRE_EVENT_EFW_RESPONSE;
+ if (copy_to_user(buf, &type, sizeof(type)))
+ return -EFAULT;
++ count += sizeof(type);
+ remained -= sizeof(type);
+ buf += sizeof(type);
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c66d31d8a498c..51c54cf0f3127 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8759,6 +8759,8 @@ static const struct hda_fixup alc269_fixups[] = {
+ [ALC287_FIXUP_CS35L41_I2C_2] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = cs35l41_fixup_i2c_two,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_THINKPAD_ACPI,
+ },
+ [ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED] = {
+ .type = HDA_FIXUP_VERBS,
+@@ -9191,6 +9193,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME),
+ SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
++ SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
+index 13009d08b09ac..c7493549a9a50 100644
+--- a/sound/soc/codecs/da7219.c
++++ b/sound/soc/codecs/da7219.c
+@@ -446,7 +446,7 @@ static int da7219_tonegen_freq_put(struct snd_kcontrol *kcontrol,
+ struct soc_mixer_control *mixer_ctrl =
+ (struct soc_mixer_control *) kcontrol->private_value;
+ unsigned int reg = mixer_ctrl->reg;
+- __le16 val;
++ __le16 val_new, val_old;
+ int ret;
+
+ /*
+@@ -454,13 +454,19 @@ static int da7219_tonegen_freq_put(struct snd_kcontrol *kcontrol,
+ * Therefore we need to convert to little endian here to align with
+ * HW registers.
+ */
+- val = cpu_to_le16(ucontrol->value.integer.value[0]);
++ val_new = cpu_to_le16(ucontrol->value.integer.value[0]);
+
+ mutex_lock(&da7219->ctrl_lock);
+- ret = regmap_raw_write(da7219->regmap, reg, &val, sizeof(val));
++ ret = regmap_raw_read(da7219->regmap, reg, &val_old, sizeof(val_old));
++ if (ret == 0 && (val_old != val_new))
++ ret = regmap_raw_write(da7219->regmap, reg,
++ &val_new, sizeof(val_new));
+ mutex_unlock(&da7219->ctrl_lock);
+
+- return ret;
++ if (ret < 0)
++ return ret;
++
++ return val_old != val_new;
+ }
+
+
+diff --git a/sound/soc/codecs/rt9120.c b/sound/soc/codecs/rt9120.c
+index 7aa1772a915f3..6e0d7cf0c8c92 100644
+--- a/sound/soc/codecs/rt9120.c
++++ b/sound/soc/codecs/rt9120.c
+@@ -341,7 +341,6 @@ static int rt9120_get_reg_size(unsigned int reg)
+ {
+ switch (reg) {
+ case 0x00:
+- case 0x09:
+ case 0x20 ... 0x27:
+ return 2;
+ case 0x30 ... 0x3D:
+diff --git a/sound/soc/codecs/wm8958-dsp2.c b/sound/soc/codecs/wm8958-dsp2.c
+index e4018ba3b19a2..7878c7a58ff10 100644
+--- a/sound/soc/codecs/wm8958-dsp2.c
++++ b/sound/soc/codecs/wm8958-dsp2.c
+@@ -530,7 +530,7 @@ static int wm8958_mbc_put(struct snd_kcontrol *kcontrol,
+
+ wm8958_dsp_apply(component, mbc, wm8994->mbc_ena[mbc]);
+
+- return 0;
++ return 1;
+ }
+
+ #define WM8958_MBC_SWITCH(xname, xval) {\
+@@ -656,7 +656,7 @@ static int wm8958_vss_put(struct snd_kcontrol *kcontrol,
+
+ wm8958_dsp_apply(component, vss, wm8994->vss_ena[vss]);
+
+- return 0;
++ return 1;
+ }
+
+
+@@ -730,7 +730,7 @@ static int wm8958_hpf_put(struct snd_kcontrol *kcontrol,
+
+ wm8958_dsp_apply(component, hpf % 3, ucontrol->value.integer.value[0]);
+
+- return 0;
++ return 1;
+ }
+
+ #define WM8958_HPF_SWITCH(xname, xval) {\
+@@ -824,7 +824,7 @@ static int wm8958_enh_eq_put(struct snd_kcontrol *kcontrol,
+
+ wm8958_dsp_apply(component, eq, ucontrol->value.integer.value[0]);
+
+- return 0;
++ return 1;
+ }
+
+ #define WM8958_ENH_EQ_SWITCH(xname, xval) {\
+diff --git a/sound/soc/meson/aiu-acodec-ctrl.c b/sound/soc/meson/aiu-acodec-ctrl.c
+index 27a6d3259c50a..442c215936d97 100644
+--- a/sound/soc/meson/aiu-acodec-ctrl.c
++++ b/sound/soc/meson/aiu-acodec-ctrl.c
+@@ -58,7 +58,7 @@ static int aiu_acodec_ctrl_mux_put_enum(struct snd_kcontrol *kcontrol,
+
+ snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+
+- return 0;
++ return 1;
+ }
+
+ static SOC_ENUM_SINGLE_DECL(aiu_acodec_ctrl_mux_enum, AIU_ACODEC_CTRL,
+diff --git a/sound/soc/meson/aiu-codec-ctrl.c b/sound/soc/meson/aiu-codec-ctrl.c
+index c3ea733fce91f..c966fc60dc733 100644
+--- a/sound/soc/meson/aiu-codec-ctrl.c
++++ b/sound/soc/meson/aiu-codec-ctrl.c
+@@ -57,7 +57,7 @@ static int aiu_codec_ctrl_mux_put_enum(struct snd_kcontrol *kcontrol,
+
+ snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+
+- return 0;
++ return 1;
+ }
+
+ static SOC_ENUM_SINGLE_DECL(aiu_hdmi_ctrl_mux_enum, AIU_HDMI_CLK_DATA_CTRL,
+diff --git a/sound/soc/meson/axg-card.c b/sound/soc/meson/axg-card.c
+index cbbaa55d92a66..2b77010c2c5ce 100644
+--- a/sound/soc/meson/axg-card.c
++++ b/sound/soc/meson/axg-card.c
+@@ -320,7 +320,6 @@ static int axg_card_add_link(struct snd_soc_card *card, struct device_node *np,
+
+ dai_link->cpus = cpu;
+ dai_link->num_cpus = 1;
+- dai_link->nonatomic = true;
+
+ ret = meson_card_parse_dai(card, np, &dai_link->cpus->of_node,
+ &dai_link->cpus->dai_name);
+diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c
+index 0c31934a96301..e076ced300257 100644
+--- a/sound/soc/meson/axg-tdm-interface.c
++++ b/sound/soc/meson/axg-tdm-interface.c
+@@ -351,29 +351,13 @@ static int axg_tdm_iface_hw_free(struct snd_pcm_substream *substream,
+ return 0;
+ }
+
+-static int axg_tdm_iface_trigger(struct snd_pcm_substream *substream,
+- int cmd,
++static int axg_tdm_iface_prepare(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
+- struct axg_tdm_stream *ts =
+- snd_soc_dai_get_dma_data(dai, substream);
+-
+- switch (cmd) {
+- case SNDRV_PCM_TRIGGER_START:
+- case SNDRV_PCM_TRIGGER_RESUME:
+- case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+- axg_tdm_stream_start(ts);
+- break;
+- case SNDRV_PCM_TRIGGER_SUSPEND:
+- case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+- case SNDRV_PCM_TRIGGER_STOP:
+- axg_tdm_stream_stop(ts);
+- break;
+- default:
+- return -EINVAL;
+- }
++ struct axg_tdm_stream *ts = snd_soc_dai_get_dma_data(dai, substream);
+
+- return 0;
++ /* Force all attached formatters to update */
++ return axg_tdm_stream_reset(ts);
+ }
+
+ static int axg_tdm_iface_remove_dai(struct snd_soc_dai *dai)
+@@ -413,8 +397,8 @@ static const struct snd_soc_dai_ops axg_tdm_iface_ops = {
+ .set_fmt = axg_tdm_iface_set_fmt,
+ .startup = axg_tdm_iface_startup,
+ .hw_params = axg_tdm_iface_hw_params,
++ .prepare = axg_tdm_iface_prepare,
+ .hw_free = axg_tdm_iface_hw_free,
+- .trigger = axg_tdm_iface_trigger,
+ };
+
+ /* TDM Backend DAIs */
+diff --git a/sound/soc/meson/g12a-tohdmitx.c b/sound/soc/meson/g12a-tohdmitx.c
+index 9b2b59536ced0..6c99052feafd8 100644
+--- a/sound/soc/meson/g12a-tohdmitx.c
++++ b/sound/soc/meson/g12a-tohdmitx.c
+@@ -67,7 +67,7 @@ static int g12a_tohdmitx_i2s_mux_put_enum(struct snd_kcontrol *kcontrol,
+
+ snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL);
+
+- return 0;
++ return 1;
+ }
+
+ static SOC_ENUM_SINGLE_DECL(g12a_tohdmitx_i2s_mux_enum, TOHDMITX_CTRL0,
+diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
+index 359987bf76d1b..c54c8ca8d7156 100644
+--- a/sound/soc/soc-generic-dmaengine-pcm.c
++++ b/sound/soc/soc-generic-dmaengine-pcm.c
+@@ -86,10 +86,10 @@ static int dmaengine_pcm_hw_params(struct snd_soc_component *component,
+
+ memset(&slave_config, 0, sizeof(slave_config));
+
+- if (pcm->config && pcm->config->prepare_slave_config)
+- prepare_slave_config = pcm->config->prepare_slave_config;
+- else
++ if (!pcm->config)
+ prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
++ else
++ prepare_slave_config = pcm->config->prepare_slave_config;
+
+ if (prepare_slave_config) {
+ int ret = prepare_slave_config(substream, params, &slave_config);
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index a0ca58ba16273..58347eadd219b 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -461,7 +461,7 @@ int snd_soc_put_volsw_sx(struct snd_kcontrol *kcontrol,
+ ret = err;
+ }
+ }
+- return err;
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_put_volsw_sx);
+
+diff --git a/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh b/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh
+index eaf8a04a7ca5f..10e54bcca7a93 100755
+--- a/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh
++++ b/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh
+@@ -190,7 +190,7 @@ setup_prepare()
+
+ tc filter add dev $eth0 ingress chain $(IS2 0 0) pref 1 \
+ protocol ipv4 flower skip_sw ip_proto udp dst_port 5201 \
+- action police rate 50mbit burst 64k \
++ action police rate 50mbit burst 64k conform-exceed drop/pipe \
+ action goto chain $(IS2 1 0)
+ }
+
+diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
+index 8a470da7b71af..15a2875698b53 100644
+--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
++++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
+@@ -60,6 +60,21 @@
+ /* CPUID.0x8000_0001.EDX */
+ #define CPUID_GBPAGES (1ul << 26)
+
++/* Page table bitfield declarations */
++#define PTE_PRESENT_MASK BIT_ULL(0)
++#define PTE_WRITABLE_MASK BIT_ULL(1)
++#define PTE_USER_MASK BIT_ULL(2)
++#define PTE_ACCESSED_MASK BIT_ULL(5)
++#define PTE_DIRTY_MASK BIT_ULL(6)
++#define PTE_LARGE_MASK BIT_ULL(7)
++#define PTE_GLOBAL_MASK BIT_ULL(8)
++#define PTE_NX_MASK BIT_ULL(63)
++
++#define PAGE_SHIFT 12
++
++#define PHYSICAL_PAGE_MASK GENMASK_ULL(51, 12)
++#define PTE_GET_PFN(pte) (((pte) & PHYSICAL_PAGE_MASK) >> PAGE_SHIFT)
++
+ /* General Registers in 64-Bit Mode */
+ struct gpr64_regs {
+ u64 rax;
+diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
+index ba1fdc3dcf4a9..2c4a7563a4f8a 100644
+--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
++++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
+@@ -278,7 +278,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
+ else
+ guest_test_phys_mem = p->phys_offset;
+ #ifdef __s390x__
+- alignment = max(0x100000, alignment);
++ alignment = max(0x100000UL, alignment);
+ #endif
+ guest_test_phys_mem = align_down(guest_test_phys_mem, alignment);
+
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+index 9f000dfb55949..0dd442c260159 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
+@@ -19,38 +19,6 @@
+
+ vm_vaddr_t exception_handlers;
+
+-/* Virtual translation table structure declarations */
+-struct pageUpperEntry {
+- uint64_t present:1;
+- uint64_t writable:1;
+- uint64_t user:1;
+- uint64_t write_through:1;
+- uint64_t cache_disable:1;
+- uint64_t accessed:1;
+- uint64_t ignored_06:1;
+- uint64_t page_size:1;
+- uint64_t ignored_11_08:4;
+- uint64_t pfn:40;
+- uint64_t ignored_62_52:11;
+- uint64_t execute_disable:1;
+-};
+-
+-struct pageTableEntry {
+- uint64_t present:1;
+- uint64_t writable:1;
+- uint64_t user:1;
+- uint64_t write_through:1;
+- uint64_t cache_disable:1;
+- uint64_t accessed:1;
+- uint64_t dirty:1;
+- uint64_t reserved_07:1;
+- uint64_t global:1;
+- uint64_t ignored_11_09:3;
+- uint64_t pfn:40;
+- uint64_t ignored_62_52:11;
+- uint64_t execute_disable:1;
+-};
+-
+ void regs_dump(FILE *stream, struct kvm_regs *regs,
+ uint8_t indent)
+ {
+@@ -195,23 +163,21 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t pt_pfn, uint64_t vaddr,
+ return &page_table[index];
+ }
+
+-static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
+- uint64_t pt_pfn,
+- uint64_t vaddr,
+- uint64_t paddr,
+- int level,
+- enum x86_page_size page_size)
++static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
++ uint64_t pt_pfn,
++ uint64_t vaddr,
++ uint64_t paddr,
++ int level,
++ enum x86_page_size page_size)
+ {
+- struct pageUpperEntry *pte = virt_get_pte(vm, pt_pfn, vaddr, level);
+-
+- if (!pte->present) {
+- pte->writable = true;
+- pte->present = true;
+- pte->page_size = (level == page_size);
+- if (pte->page_size)
+- pte->pfn = paddr >> vm->page_shift;
++ uint64_t *pte = virt_get_pte(vm, pt_pfn, vaddr, level);
++
++ if (!(*pte & PTE_PRESENT_MASK)) {
++ *pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK;
++ if (level == page_size)
++ *pte |= PTE_LARGE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+ else
+- pte->pfn = vm_alloc_page_table(vm) >> vm->page_shift;
++ *pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
+ } else {
+ /*
+ * Entry already present. Assert that the caller doesn't want
+@@ -221,7 +187,7 @@ static struct pageUpperEntry *virt_create_upper_pte(struct kvm_vm *vm,
+ TEST_ASSERT(level != page_size,
+ "Cannot create hugepage at level: %u, vaddr: 0x%lx\n",
+ page_size, vaddr);
+- TEST_ASSERT(!pte->page_size,
++ TEST_ASSERT(!(*pte & PTE_LARGE_MASK),
+ "Cannot create page table at level: %u, vaddr: 0x%lx\n",
+ level, vaddr);
+ }
+@@ -232,8 +198,8 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+ enum x86_page_size page_size)
+ {
+ const uint64_t pg_size = 1ull << ((page_size * 9) + 12);
+- struct pageUpperEntry *pml4e, *pdpe, *pde;
+- struct pageTableEntry *pte;
++ uint64_t *pml4e, *pdpe, *pde;
++ uint64_t *pte;
+
+ TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K,
+ "Unknown or unsupported guest mode, mode: 0x%x", vm->mode);
+@@ -257,24 +223,22 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+ */
+ pml4e = virt_create_upper_pte(vm, vm->pgd >> vm->page_shift,
+ vaddr, paddr, 3, page_size);
+- if (pml4e->page_size)
++ if (*pml4e & PTE_LARGE_MASK)
+ return;
+
+- pdpe = virt_create_upper_pte(vm, pml4e->pfn, vaddr, paddr, 2, page_size);
+- if (pdpe->page_size)
++ pdpe = virt_create_upper_pte(vm, PTE_GET_PFN(*pml4e), vaddr, paddr, 2, page_size);
++ if (*pdpe & PTE_LARGE_MASK)
+ return;
+
+- pde = virt_create_upper_pte(vm, pdpe->pfn, vaddr, paddr, 1, page_size);
+- if (pde->page_size)
++ pde = virt_create_upper_pte(vm, PTE_GET_PFN(*pdpe), vaddr, paddr, 1, page_size);
++ if (*pde & PTE_LARGE_MASK)
+ return;
+
+ /* Fill in page table entry. */
+- pte = virt_get_pte(vm, pde->pfn, vaddr, 0);
+- TEST_ASSERT(!pte->present,
++ pte = virt_get_pte(vm, PTE_GET_PFN(*pde), vaddr, 0);
++ TEST_ASSERT(!(*pte & PTE_PRESENT_MASK),
+ "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
+- pte->pfn = paddr >> vm->page_shift;
+- pte->writable = true;
+- pte->present = 1;
++ *pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
+ }
+
+ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+@@ -282,12 +246,12 @@ void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+ __virt_pg_map(vm, vaddr, paddr, X86_PAGE_SIZE_4K);
+ }
+
+-static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
++static uint64_t *_vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid,
+ uint64_t vaddr)
+ {
+ uint16_t index[4];
+- struct pageUpperEntry *pml4e, *pdpe, *pde;
+- struct pageTableEntry *pte;
++ uint64_t *pml4e, *pdpe, *pde;
++ uint64_t *pte;
+ struct kvm_cpuid_entry2 *entry;
+ struct kvm_sregs sregs;
+ int max_phy_addr;
+@@ -329,30 +293,29 @@ static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vc
+ index[3] = (vaddr >> 39) & 0x1ffu;
+
+ pml4e = addr_gpa2hva(vm, vm->pgd);
+- TEST_ASSERT(pml4e[index[3]].present,
++ TEST_ASSERT(pml4e[index[3]] & PTE_PRESENT_MASK,
+ "Expected pml4e to be present for gva: 0x%08lx", vaddr);
+- TEST_ASSERT((*(uint64_t*)(&pml4e[index[3]]) &
+- (rsvd_mask | (1ull << 7))) == 0,
++ TEST_ASSERT((pml4e[index[3]] & (rsvd_mask | PTE_LARGE_MASK)) == 0,
+ "Unexpected reserved bits set.");
+
+- pdpe = addr_gpa2hva(vm, pml4e[index[3]].pfn * vm->page_size);
+- TEST_ASSERT(pdpe[index[2]].present,
++ pdpe = addr_gpa2hva(vm, PTE_GET_PFN(pml4e[index[3]]) * vm->page_size);
++ TEST_ASSERT(pdpe[index[2]] & PTE_PRESENT_MASK,
+ "Expected pdpe to be present for gva: 0x%08lx", vaddr);
+- TEST_ASSERT(pdpe[index[2]].page_size == 0,
++ TEST_ASSERT(!(pdpe[index[2]] & PTE_LARGE_MASK),
+ "Expected pdpe to map a pde not a 1-GByte page.");
+- TEST_ASSERT((*(uint64_t*)(&pdpe[index[2]]) & rsvd_mask) == 0,
++ TEST_ASSERT((pdpe[index[2]] & rsvd_mask) == 0,
+ "Unexpected reserved bits set.");
+
+- pde = addr_gpa2hva(vm, pdpe[index[2]].pfn * vm->page_size);
+- TEST_ASSERT(pde[index[1]].present,
++ pde = addr_gpa2hva(vm, PTE_GET_PFN(pdpe[index[2]]) * vm->page_size);
++ TEST_ASSERT(pde[index[1]] & PTE_PRESENT_MASK,
+ "Expected pde to be present for gva: 0x%08lx", vaddr);
+- TEST_ASSERT(pde[index[1]].page_size == 0,
++ TEST_ASSERT(!(pde[index[1]] & PTE_LARGE_MASK),
+ "Expected pde to map a pte not a 2-MByte page.");
+- TEST_ASSERT((*(uint64_t*)(&pde[index[1]]) & rsvd_mask) == 0,
++ TEST_ASSERT((pde[index[1]] & rsvd_mask) == 0,
+ "Unexpected reserved bits set.");
+
+- pte = addr_gpa2hva(vm, pde[index[1]].pfn * vm->page_size);
+- TEST_ASSERT(pte[index[0]].present,
++ pte = addr_gpa2hva(vm, PTE_GET_PFN(pde[index[1]]) * vm->page_size);
++ TEST_ASSERT(pte[index[0]] & PTE_PRESENT_MASK,
+ "Expected pte to be present for gva: 0x%08lx", vaddr);
+
+ return &pte[index[0]];
+@@ -360,7 +323,7 @@ static struct pageTableEntry *_vm_get_page_table_entry(struct kvm_vm *vm, int vc
+
+ uint64_t vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr)
+ {
+- struct pageTableEntry *pte = _vm_get_page_table_entry(vm, vcpuid, vaddr);
++ uint64_t *pte = _vm_get_page_table_entry(vm, vcpuid, vaddr);
+
+ return *(uint64_t *)pte;
+ }
+@@ -368,18 +331,17 @@ uint64_t vm_get_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr)
+ void vm_set_page_table_entry(struct kvm_vm *vm, int vcpuid, uint64_t vaddr,
+ uint64_t pte)
+ {
+- struct pageTableEntry *new_pte = _vm_get_page_table_entry(vm, vcpuid,
+- vaddr);
++ uint64_t *new_pte = _vm_get_page_table_entry(vm, vcpuid, vaddr);
+
+ *(uint64_t *)new_pte = pte;
+ }
+
+ void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+ {
+- struct pageUpperEntry *pml4e, *pml4e_start;
+- struct pageUpperEntry *pdpe, *pdpe_start;
+- struct pageUpperEntry *pde, *pde_start;
+- struct pageTableEntry *pte, *pte_start;
++ uint64_t *pml4e, *pml4e_start;
++ uint64_t *pdpe, *pdpe_start;
++ uint64_t *pde, *pde_start;
++ uint64_t *pte, *pte_start;
+
+ if (!vm->pgd_created)
+ return;
+@@ -389,58 +351,58 @@ void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+ fprintf(stream, "%*s index hvaddr gpaddr "
+ "addr w exec dirty\n",
+ indent, "");
+- pml4e_start = (struct pageUpperEntry *) addr_gpa2hva(vm, vm->pgd);
++ pml4e_start = (uint64_t *) addr_gpa2hva(vm, vm->pgd);
+ for (uint16_t n1 = 0; n1 <= 0x1ffu; n1++) {
+ pml4e = &pml4e_start[n1];
+- if (!pml4e->present)
++ if (!(*pml4e & PTE_PRESENT_MASK))
+ continue;
+- fprintf(stream, "%*spml4e 0x%-3zx %p 0x%-12lx 0x%-10lx %u "
++ fprintf(stream, "%*spml4e 0x%-3zx %p 0x%-12lx 0x%-10llx %u "
+ " %u\n",
+ indent, "",
+ pml4e - pml4e_start, pml4e,
+- addr_hva2gpa(vm, pml4e), (uint64_t) pml4e->pfn,
+- pml4e->writable, pml4e->execute_disable);
++ addr_hva2gpa(vm, pml4e), PTE_GET_PFN(*pml4e),
++ !!(*pml4e & PTE_WRITABLE_MASK), !!(*pml4e & PTE_NX_MASK));
+
+- pdpe_start = addr_gpa2hva(vm, pml4e->pfn * vm->page_size);
++ pdpe_start = addr_gpa2hva(vm, *pml4e & PHYSICAL_PAGE_MASK);
+ for (uint16_t n2 = 0; n2 <= 0x1ffu; n2++) {
+ pdpe = &pdpe_start[n2];
+- if (!pdpe->present)
++ if (!(*pdpe & PTE_PRESENT_MASK))
+ continue;
+- fprintf(stream, "%*spdpe 0x%-3zx %p 0x%-12lx 0x%-10lx "
++ fprintf(stream, "%*spdpe 0x%-3zx %p 0x%-12lx 0x%-10llx "
+ "%u %u\n",
+ indent, "",
+ pdpe - pdpe_start, pdpe,
+ addr_hva2gpa(vm, pdpe),
+- (uint64_t) pdpe->pfn, pdpe->writable,
+- pdpe->execute_disable);
++ PTE_GET_PFN(*pdpe), !!(*pdpe & PTE_WRITABLE_MASK),
++ !!(*pdpe & PTE_NX_MASK));
+
+- pde_start = addr_gpa2hva(vm, pdpe->pfn * vm->page_size);
++ pde_start = addr_gpa2hva(vm, *pdpe & PHYSICAL_PAGE_MASK);
+ for (uint16_t n3 = 0; n3 <= 0x1ffu; n3++) {
+ pde = &pde_start[n3];
+- if (!pde->present)
++ if (!(*pde & PTE_PRESENT_MASK))
+ continue;
+ fprintf(stream, "%*spde 0x%-3zx %p "
+- "0x%-12lx 0x%-10lx %u %u\n",
++ "0x%-12lx 0x%-10llx %u %u\n",
+ indent, "", pde - pde_start, pde,
+ addr_hva2gpa(vm, pde),
+- (uint64_t) pde->pfn, pde->writable,
+- pde->execute_disable);
++ PTE_GET_PFN(*pde), !!(*pde & PTE_WRITABLE_MASK),
++ !!(*pde & PTE_NX_MASK));
+
+- pte_start = addr_gpa2hva(vm, pde->pfn * vm->page_size);
++ pte_start = addr_gpa2hva(vm, *pde & PHYSICAL_PAGE_MASK);
+ for (uint16_t n4 = 0; n4 <= 0x1ffu; n4++) {
+ pte = &pte_start[n4];
+- if (!pte->present)
++ if (!(*pte & PTE_PRESENT_MASK))
+ continue;
+ fprintf(stream, "%*spte 0x%-3zx %p "
+- "0x%-12lx 0x%-10lx %u %u "
++ "0x%-12lx 0x%-10llx %u %u "
+ " %u 0x%-10lx\n",
+ indent, "",
+ pte - pte_start, pte,
+ addr_hva2gpa(vm, pte),
+- (uint64_t) pte->pfn,
+- pte->writable,
+- pte->execute_disable,
+- pte->dirty,
++ PTE_GET_PFN(*pte),
++ !!(*pte & PTE_WRITABLE_MASK),
++ !!(*pte & PTE_NX_MASK),
++ !!(*pte & PTE_DIRTY_MASK),
+ ((uint64_t) n1 << 27)
+ | ((uint64_t) n2 << 18)
+ | ((uint64_t) n3 << 9)
+@@ -558,8 +520,8 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_vm *vm, uint16_t selector,
+ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+ {
+ uint16_t index[4];
+- struct pageUpperEntry *pml4e, *pdpe, *pde;
+- struct pageTableEntry *pte;
++ uint64_t *pml4e, *pdpe, *pde;
++ uint64_t *pte;
+
+ TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
+ "unknown or unsupported guest mode, mode: 0x%x", vm->mode);
+@@ -572,22 +534,22 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+ if (!vm->pgd_created)
+ goto unmapped_gva;
+ pml4e = addr_gpa2hva(vm, vm->pgd);
+- if (!pml4e[index[3]].present)
++ if (!(pml4e[index[3]] & PTE_PRESENT_MASK))
+ goto unmapped_gva;
+
+- pdpe = addr_gpa2hva(vm, pml4e[index[3]].pfn * vm->page_size);
+- if (!pdpe[index[2]].present)
++ pdpe = addr_gpa2hva(vm, PTE_GET_PFN(pml4e[index[3]]) * vm->page_size);
++ if (!(pdpe[index[2]] & PTE_PRESENT_MASK))
+ goto unmapped_gva;
+
+- pde = addr_gpa2hva(vm, pdpe[index[2]].pfn * vm->page_size);
+- if (!pde[index[1]].present)
++ pde = addr_gpa2hva(vm, PTE_GET_PFN(pdpe[index[2]]) * vm->page_size);
++ if (!(pde[index[1]] & PTE_PRESENT_MASK))
+ goto unmapped_gva;
+
+- pte = addr_gpa2hva(vm, pde[index[1]].pfn * vm->page_size);
+- if (!pte[index[0]].present)
++ pte = addr_gpa2hva(vm, PTE_GET_PFN(pde[index[1]]) * vm->page_size);
++ if (!(pte[index[0]] & PTE_PRESENT_MASK))
+ goto unmapped_gva;
+
+- return (pte[index[0]].pfn * vm->page_size) + (gva & 0xfffu);
++ return (PTE_GET_PFN(pte[index[0]]) * vm->page_size) + (gva & 0xfffu);
+
+ unmapped_gva:
+ TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
+index a3402cd8d5b68..9ff22f28032dd 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh
+@@ -61,9 +61,12 @@ setup_prepare()
+
+ vrf_prepare
+ mirror_gre_topo_create
++ # Avoid changing br1's PVID while it is operational as a L3 interface.
++ ip link set dev br1 down
+
+ ip link set dev $swp3 master br1
+ bridge vlan add dev br1 vid 555 pvid untagged self
++ ip link set dev br1 up
+ ip address add dev br1 192.0.2.129/28
+ ip address add dev br1 2001:db8:2::1/64
+
+diff --git a/tools/testing/selftests/net/so_txtime.c b/tools/testing/selftests/net/so_txtime.c
+index 59067f64b7753..2672ac0b6d1f3 100644
+--- a/tools/testing/selftests/net/so_txtime.c
++++ b/tools/testing/selftests/net/so_txtime.c
+@@ -421,7 +421,7 @@ static void usage(const char *progname)
+ "Options:\n"
+ " -4 only IPv4\n"
+ " -6 only IPv6\n"
+- " -c <clock> monotonic (default) or tai\n"
++ " -c <clock> monotonic or tai (default)\n"
+ " -D <addr> destination IP address (server)\n"
+ " -S <addr> source IP address (client)\n"
+ " -r run rx mode\n"
+@@ -475,7 +475,7 @@ static void parse_opts(int argc, char **argv)
+ cfg_rx = true;
+ break;
+ case 't':
+- cfg_start_time_ns = strtol(optarg, NULL, 0);
++ cfg_start_time_ns = strtoll(optarg, NULL, 0);
+ break;
+ case 'm':
+ cfg_mark = strtol(optarg, NULL, 0);
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 9d126d7fabdb7..313bb0cbfb1eb 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -955,7 +955,7 @@ TEST(ERRNO_valid)
+ ASSERT_EQ(0, ret);
+
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+- EXPECT_EQ(-1, read(0, NULL, 0));
++ EXPECT_EQ(-1, read(-1, NULL, 0));
+ EXPECT_EQ(E2BIG, errno);
+ }
+
+@@ -974,7 +974,7 @@ TEST(ERRNO_zero)
+
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+ /* "errno" of 0 is ok. */
+- EXPECT_EQ(0, read(0, NULL, 0));
++ EXPECT_EQ(0, read(-1, NULL, 0));
+ }
+
+ /*
+@@ -995,7 +995,7 @@ TEST(ERRNO_capped)
+ ASSERT_EQ(0, ret);
+
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+- EXPECT_EQ(-1, read(0, NULL, 0));
++ EXPECT_EQ(-1, read(-1, NULL, 0));
+ EXPECT_EQ(4095, errno);
+ }
+
+@@ -1026,7 +1026,7 @@ TEST(ERRNO_order)
+ ASSERT_EQ(0, ret);
+
+ EXPECT_EQ(parent, syscall(__NR_getppid));
+- EXPECT_EQ(-1, read(0, NULL, 0));
++ EXPECT_EQ(-1, read(-1, NULL, 0));
+ EXPECT_EQ(12, errno);
+ }
+
+@@ -2623,7 +2623,7 @@ void *tsync_sibling(void *data)
+ ret = prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0);
+ if (!ret)
+ return (void *)SIBLING_EXIT_NEWPRIVS;
+- read(0, NULL, 0);
++ read(-1, NULL, 0);
+ return (void *)SIBLING_EXIT_UNKILLED;
+ }
+
+diff --git a/tools/testing/selftests/vm/mremap_test.c b/tools/testing/selftests/vm/mremap_test.c
+index 58775dab3cc6c..5ef41640d657a 100644
+--- a/tools/testing/selftests/vm/mremap_test.c
++++ b/tools/testing/selftests/vm/mremap_test.c
+@@ -118,6 +118,59 @@ static unsigned long long get_mmap_min_addr(void)
+ return addr;
+ }
+
++/*
++ * Returns false if the requested remap region overlaps with an
++ * existing mapping (e.g text, stack) else returns true.
++ */
++static bool is_remap_region_valid(void *addr, unsigned long long size)
++{
++ void *remap_addr = NULL;
++ bool ret = true;
++
++ /* Use MAP_FIXED_NOREPLACE flag to ensure region is not mapped */
++ remap_addr = mmap(addr, size, PROT_READ | PROT_WRITE,
++ MAP_FIXED_NOREPLACE | MAP_ANONYMOUS | MAP_SHARED,
++ -1, 0);
++
++ if (remap_addr == MAP_FAILED) {
++ if (errno == EEXIST)
++ ret = false;
++ } else {
++ munmap(remap_addr, size);
++ }
++
++ return ret;
++}
++
++/* Returns mmap_min_addr sysctl tunable from procfs */
++static unsigned long long get_mmap_min_addr(void)
++{
++ FILE *fp;
++ int n_matched;
++ static unsigned long long addr;
++
++ if (addr)
++ return addr;
++
++ fp = fopen("/proc/sys/vm/mmap_min_addr", "r");
++ if (fp == NULL) {
++ ksft_print_msg("Failed to open /proc/sys/vm/mmap_min_addr: %s\n",
++ strerror(errno));
++ exit(KSFT_SKIP);
++ }
++
++ n_matched = fscanf(fp, "%llu", &addr);
++ if (n_matched != 1) {
++ ksft_print_msg("Failed to read /proc/sys/vm/mmap_min_addr: %s\n",
++ strerror(errno));
++ fclose(fp);
++ exit(KSFT_SKIP);
++ }
++
++ fclose(fp);
++ return addr;
++}
++
+ /*
+ * Returns the start address of the mapping on success, else returns
+ * NULL on failure.
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-11 16:54 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-05-11 16:54 UTC (permalink / raw
To: gentoo-commits
commit: 91e82d6ce6213596c5a31f96c3371cbe7167ea34
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 11 16:50:04 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 11 16:54:11 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=91e82d6c
Update Gentoo Hardened patchset based on KSPP thanks to Peter Bo
Bug: https://bugs.gentoo.org/841488
Added:
CONFIG_HARDENED_USERCOPY=y
CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
CONFIG_KFENCE=y
CONFIG_IOMMU_DEFAULT_DMA_STRICT=y
CONFIG_SCHED_CORE=y
CONFIG_ZERO_CALL_USED_REGS=y
Dropped deprecated option:
!DEVKMEM
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 9843c3e2..77ae7dc1 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig 2022-04-12 13:11:48.403113171 -0400
-+++ b/Kconfig 2022-04-12 13:12:36.530084675 -0400
+--- a/Kconfig 2022-05-11 08:37:26.685387031 -0400
++++ b/Kconfig 2022-05-11 08:38:35.672171690 -0400
@@ -30,3 +30,5 @@ source "lib/Kconfig"
source "lib/Kconfig.debug"
source "Documentation/Kconfig"
+
+source "distro/Kconfig"
---- /dev/null 2022-04-12 05:39:54.696333295 -0400
-+++ b/distro/Kconfig 2022-04-12 13:21:04.666379519 -0400
-@@ -0,0 +1,285 @@
+--- /dev/null 2022-05-10 13:47:17.750578524 -0400
++++ b/distro/Kconfig 2022-05-11 12:43:39.114196110 -0400
+@@ -0,0 +1,290 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -185,7 +185,7 @@
+config GENTOO_KERNEL_SELF_PROTECTION_COMMON
+ bool "Enable Kernel Self Protection Project Recommendations"
+
-+ depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !DEVKMEM && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS
++ depends on GENTOO_LINUX && !ACPI_CUSTOM_METHOD && !COMPAT_BRK && !PROC_KCORE && !COMPAT_VDSO && !KEXEC && !HIBERNATION && !LEGACY_PTYS && !X86_X32 && !MODIFY_LDT_SYSCALL && GCC_PLUGINS && !IOMMU_DEFAULT_DMA_LAZY && !IOMMU_DEFAULT_PASSTHROUGH && IOMMU_DEFAULT_DMA_STRICT
+
+ select BUG
+ select STRICT_KERNEL_RWX
@@ -199,7 +199,11 @@
+ select DEBUG_NOTIFIERS
+ select DEBUG_LIST
+ select DEBUG_SG
++ select HARDENED_USERCOPY if HAVE_HARDENED_USERCOPY_ALLOCATOR=y
+ select BUG_ON_DATA_CORRUPTION
++ select KFENCE if HAVE_ARCH_KFENCE && (!SLAB || SLUB)
++ select RANDOMIZE_KSTACK_OFFSET_DEFAULT if HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET && (INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION>=140000)
++ select SCHED_CORE if SCHED_SMT
+ select SCHED_STACK_END_CHECK
+ select SECCOMP if HAVE_ARCH_SECCOMP
+ select SECCOMP_FILTER if HAVE_ARCH_SECCOMP_FILTER
@@ -222,6 +226,7 @@
+ select GCC_PLUGIN_STRUCTLEAK_BYREF_ALL
+ select GCC_PLUGIN_RANDSTRUCT
+ select GCC_PLUGIN_RANDSTRUCT_PERFORMANCE
++ select ZERO_CALL_USED_REGS if CC_HAS_ZERO_CALL_USED_REGS
+
+ help
+ Search for GENTOO_KERNEL_SELF_PROTECTION_{X86_64, ARM64, X86_32, ARM} for dependency
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-05-09 10:57 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-05-09 10:57 UTC
To: gentoo-commits
commit: 760c4239d500c231656fcae059304b1cc31d9a5a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May 9 10:57:45 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May 9 10:57:45 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=760c4239
Linux patch 5.17.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-5.17.6.patch | 7840 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7844 insertions(+)
diff --git a/0000_README b/0000_README
index a5f79181..91016f55 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-5.17.5.patch
From: http://www.kernel.org
Desc: Linux 5.17.5
+Patch: 1005_linux-5.17.6.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-5.17.6.patch b/1005_linux-5.17.6.patch
new file mode 100644
index 00000000..eeadfd47
--- /dev/null
+++ b/1005_linux-5.17.6.patch
@@ -0,0 +1,7840 @@
+diff --git a/Makefile b/Makefile
+index 3ad5dc6be3930..7ef8dd5ab6f28 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/boot/dts/am33xx-l4.dtsi b/arch/arm/boot/dts/am33xx-l4.dtsi
+index c9629cb5ccd1e..7da42a5b959cf 100644
+--- a/arch/arm/boot/dts/am33xx-l4.dtsi
++++ b/arch/arm/boot/dts/am33xx-l4.dtsi
+@@ -263,6 +263,8 @@
+ compatible = "ti,am3359-tscadc";
+ reg = <0x0 0x1000>;
+ interrupts = <16>;
++ clocks = <&adc_tsc_fck>;
++ clock-names = "fck";
+ status = "disabled";
+ dmas = <&edma 53 0>, <&edma 57 0>;
+ dma-names = "fifo0", "fifo1";
+diff --git a/arch/arm/boot/dts/am3517-evm.dts b/arch/arm/boot/dts/am3517-evm.dts
+index 0d2fac98ce7d2..c8b80f156ec98 100644
+--- a/arch/arm/boot/dts/am3517-evm.dts
++++ b/arch/arm/boot/dts/am3517-evm.dts
+@@ -161,6 +161,8 @@
+
+ /* HS USB Host PHY on PORT 1 */
+ hsusb1_phy: hsusb1_phy {
++ pinctrl-names = "default";
++ pinctrl-0 = <&hsusb1_rst_pins>;
+ compatible = "usb-nop-xceiv";
+ reset-gpios = <&gpio2 25 GPIO_ACTIVE_LOW>; /* gpio_57 */
+ #phy-cells = <0>;
+@@ -168,7 +170,9 @@
+ };
+
+ &davinci_emac {
+- status = "okay";
++ pinctrl-names = "default";
++ pinctrl-0 = <&ethernet_pins>;
++ status = "okay";
+ };
+
+ &davinci_mdio {
+@@ -193,6 +197,8 @@
+ };
+
+ &i2c2 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c2_pins>;
+ clock-frequency = <400000>;
+ /* User DIP swithes [1:8] / User LEDS [1:2] */
+ tca6416: gpio@21 {
+@@ -205,6 +211,8 @@
+ };
+
+ &i2c3 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c3_pins>;
+ clock-frequency = <400000>;
+ };
+
+@@ -223,6 +231,8 @@
+ };
+
+ &usbhshost {
++ pinctrl-names = "default";
++ pinctrl-0 = <&hsusb1_pins>;
+ port1-mode = "ehci-phy";
+ };
+
+@@ -231,8 +241,35 @@
+ };
+
+ &omap3_pmx_core {
+- pinctrl-names = "default";
+- pinctrl-0 = <&hsusb1_rst_pins>;
++
++ ethernet_pins: pinmux_ethernet_pins {
++ pinctrl-single,pins = <
++ OMAP3_CORE1_IOPAD(0x21fe, PIN_INPUT | MUX_MODE0) /* rmii_mdio_data */
++ OMAP3_CORE1_IOPAD(0x2200, MUX_MODE0) /* rmii_mdio_clk */
++ OMAP3_CORE1_IOPAD(0x2202, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_rxd0 */
++ OMAP3_CORE1_IOPAD(0x2204, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_rxd1 */
++ OMAP3_CORE1_IOPAD(0x2206, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_crs_dv */
++ OMAP3_CORE1_IOPAD(0x2208, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* rmii_rxer */
++ OMAP3_CORE1_IOPAD(0x220a, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* rmii_txd0 */
++ OMAP3_CORE1_IOPAD(0x220c, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* rmii_txd1 */
++ OMAP3_CORE1_IOPAD(0x220e, PIN_OUTPUT_PULLDOWN |MUX_MODE0) /* rmii_txen */
++ OMAP3_CORE1_IOPAD(0x2210, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii_50mhz_clk */
++ >;
++ };
++
++ i2c2_pins: pinmux_i2c2_pins {
++ pinctrl-single,pins = <
++ OMAP3_CORE1_IOPAD(0x21be, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_scl */
++ OMAP3_CORE1_IOPAD(0x21c0, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_sda */
++ >;
++ };
++
++ i2c3_pins: pinmux_i2c3_pins {
++ pinctrl-single,pins = <
++ OMAP3_CORE1_IOPAD(0x21c2, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_scl */
++ OMAP3_CORE1_IOPAD(0x21c4, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_sda */
++ >;
++ };
+
+ leds_pins: pinmux_leds_pins {
+ pinctrl-single,pins = <
+@@ -300,8 +337,6 @@
+ };
+
+ &omap3_pmx_core2 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&hsusb1_pins>;
+
+ hsusb1_pins: pinmux_hsusb1_pins {
+ pinctrl-single,pins = <
+diff --git a/arch/arm/boot/dts/am3517-som.dtsi b/arch/arm/boot/dts/am3517-som.dtsi
+index 8b669e2eafec4..f7b680f6c48ad 100644
+--- a/arch/arm/boot/dts/am3517-som.dtsi
++++ b/arch/arm/boot/dts/am3517-som.dtsi
+@@ -69,6 +69,8 @@
+ };
+
+ &i2c1 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c1_pins>;
+ clock-frequency = <400000>;
+
+ s35390a: s35390a@30 {
+@@ -179,6 +181,13 @@
+
+ &omap3_pmx_core {
+
++ i2c1_pins: pinmux_i2c1_pins {
++ pinctrl-single,pins = <
++ OMAP3_CORE1_IOPAD(0x21ba, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_scl */
++ OMAP3_CORE1_IOPAD(0x21bc, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_sda */
++ >;
++ };
++
+ wl12xx_buffer_pins: pinmux_wl12xx_buffer_pins {
+ pinctrl-single,pins = <
+ OMAP3_CORE1_IOPAD(0x2156, PIN_OUTPUT | MUX_MODE4) /* mmc1_dat7.gpio_129 */
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index d72c042f28507..a49c2966b41e2 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -57,8 +57,8 @@
+ };
+
+ spi0: spi@f0004000 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_spi0_cs>;
++ pinctrl-names = "default", "cs";
++ pinctrl-1 = <&pinctrl_spi0_cs>;
+ cs-gpios = <&pioD 13 0>, <0>, <0>, <&pioD 16 0>;
+ status = "okay";
+ };
+@@ -171,8 +171,8 @@
+ };
+
+ spi1: spi@f8008000 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_spi1_cs>;
++ pinctrl-names = "default", "cs";
++ pinctrl-1 = <&pinctrl_spi1_cs>;
+ cs-gpios = <&pioC 25 0>;
+ status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index d241c24f0d836..e519d27479362 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -81,8 +81,8 @@
+ };
+
+ spi1: spi@fc018000 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&pinctrl_spi0_cs>;
++ pinctrl-names = "default", "cs";
++ pinctrl-1 = <&pinctrl_spi1_cs>;
+ cs-gpios = <&pioB 21 0>;
+ status = "okay";
+ };
+@@ -140,7 +140,7 @@
+ atmel,pins =
+ <AT91_PIOE 1 AT91_PERIPH_GPIO AT91_PINCTRL_PULL_UP_DEGLITCH>;
+ };
+- pinctrl_spi0_cs: spi0_cs_default {
++ pinctrl_spi1_cs: spi1_cs_default {
+ atmel,pins =
+ <AT91_PIOB 21 AT91_PERIPH_GPIO AT91_PINCTRL_NONE>;
+ };
+diff --git a/arch/arm/boot/dts/at91-sama7g5ek.dts b/arch/arm/boot/dts/at91-sama7g5ek.dts
+index ccf9e224da781..fed260473b6b8 100644
+--- a/arch/arm/boot/dts/at91-sama7g5ek.dts
++++ b/arch/arm/boot/dts/at91-sama7g5ek.dts
+@@ -465,7 +465,7 @@
+ pinctrl_flx3_default: flx3_default {
+ pinmux = <PIN_PD16__FLEXCOM3_IO0>,
+ <PIN_PD17__FLEXCOM3_IO1>;
+- bias-disable;
++ bias-pull-up;
+ };
+
+ pinctrl_flx4_default: flx4_default {
+diff --git a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+index 87bb39060e8be..ca03685f0f086 100644
+--- a/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
++++ b/arch/arm/boot/dts/at91sam9g20ek_common.dtsi
+@@ -219,6 +219,12 @@
+ wm8731: wm8731@1b {
+ compatible = "wm8731";
+ reg = <0x1b>;
++
++ /* PCK0 at 12MHz */
++ clocks = <&pmc PMC_TYPE_SYSTEM 8>;
++ clock-names = "mclk";
++ assigned-clocks = <&pmc PMC_TYPE_SYSTEM 8>;
++ assigned-clock-rates = <12000000>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 0a11bacffc1f1..5733e3a4ea8e7 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -4188,11 +4188,11 @@
+ reg = <0x1d0010 0x4>;
+ reg-names = "sysc";
+ ti,sysc-midle = <SYSC_IDLE_FORCE>,
+- <SYSC_IDLE_NO>,
+- <SYSC_IDLE_SMART>;
++ <SYSC_IDLE_NO>;
+ ti,sysc-sidle = <SYSC_IDLE_FORCE>,
+ <SYSC_IDLE_NO>,
+ <SYSC_IDLE_SMART>;
++ power-domains = <&prm_vpe>;
+ clocks = <&vpe_clkctrl DRA7_VPE_VPE_CLKCTRL 0>;
+ clock-names = "fck";
+ #address-cells = <1>;
+diff --git a/arch/arm/boot/dts/imx6qdl-apalis.dtsi b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
+index ed2739e390856..bd763bae596b0 100644
+--- a/arch/arm/boot/dts/imx6qdl-apalis.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
+@@ -286,6 +286,8 @@
+ codec: sgtl5000@a {
+ compatible = "fsl,sgtl5000";
+ reg = <0x0a>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&pinctrl_sgtl5000>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
+ VDDA-supply = <&reg_module_3v3_audio>;
+ VDDIO-supply = <&reg_module_3v3>;
+@@ -517,8 +519,6 @@
+ MX6QDL_PAD_DISP0_DAT21__AUD4_TXD 0x130b0
+ MX6QDL_PAD_DISP0_DAT22__AUD4_TXFS 0x130b0
+ MX6QDL_PAD_DISP0_DAT23__AUD4_RXD 0x130b0
+- /* SGTL5000 sys_mclk */
+- MX6QDL_PAD_GPIO_5__CCM_CLKO1 0x130b0
+ >;
+ };
+
+@@ -811,6 +811,12 @@
+ >;
+ };
+
++ pinctrl_sgtl5000: sgtl5000grp {
++ fsl,pins = <
++ MX6QDL_PAD_GPIO_5__CCM_CLKO1 0x130b0
++ >;
++ };
++
+ pinctrl_spdif: spdifgrp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_16__SPDIF_IN 0x1b0b0
+diff --git a/arch/arm/boot/dts/imx6ull-colibri.dtsi b/arch/arm/boot/dts/imx6ull-colibri.dtsi
+index 7f35a06dff95b..951a2a6c5a657 100644
+--- a/arch/arm/boot/dts/imx6ull-colibri.dtsi
++++ b/arch/arm/boot/dts/imx6ull-colibri.dtsi
+@@ -37,7 +37,7 @@
+
+ reg_sd1_vmmc: regulator-sd1-vmmc {
+ compatible = "regulator-gpio";
+- gpio = <&gpio5 9 GPIO_ACTIVE_HIGH>;
++ gpios = <&gpio5 9 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_snvs_reg_sd>;
+ regulator-always-on;
+diff --git a/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts b/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts
+index 2a0a98fe67f06..3240c67e0c392 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts
++++ b/arch/arm/boot/dts/logicpd-som-lv-35xx-devkit.dts
+@@ -11,3 +11,18 @@
+ model = "LogicPD Zoom OMAP35xx SOM-LV Development Kit";
+ compatible = "logicpd,dm3730-som-lv-devkit", "ti,omap3430", "ti,omap3";
+ };
++
++&omap3_pmx_core2 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&hsusb2_2_pins>;
++ hsusb2_2_pins: pinmux_hsusb2_2_pins {
++ pinctrl-single,pins = <
++ OMAP3430_CORE2_IOPAD(0x25f0, PIN_OUTPUT | MUX_MODE3) /* etk_d10.hsusb2_clk */
++ OMAP3430_CORE2_IOPAD(0x25f2, PIN_OUTPUT | MUX_MODE3) /* etk_d11.hsusb2_stp */
++ OMAP3430_CORE2_IOPAD(0x25f4, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d12.hsusb2_dir */
++ OMAP3430_CORE2_IOPAD(0x25f6, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d13.hsusb2_nxt */
++ OMAP3430_CORE2_IOPAD(0x25f8, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d14.hsusb2_data0 */
++ OMAP3430_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d15.hsusb2_data1 */
++ >;
++ };
++};
+diff --git a/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts b/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts
+index a604d92221a4f..c757f0d7781c1 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts
++++ b/arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts
+@@ -11,3 +11,18 @@
+ model = "LogicPD Zoom DM3730 SOM-LV Development Kit";
+ compatible = "logicpd,dm3730-som-lv-devkit", "ti,omap3630", "ti,omap3";
+ };
++
++&omap3_pmx_core2 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&hsusb2_2_pins>;
++ hsusb2_2_pins: pinmux_hsusb2_2_pins {
++ pinctrl-single,pins = <
++ OMAP3630_CORE2_IOPAD(0x25f0, PIN_OUTPUT | MUX_MODE3) /* etk_d10.hsusb2_clk */
++ OMAP3630_CORE2_IOPAD(0x25f2, PIN_OUTPUT | MUX_MODE3) /* etk_d11.hsusb2_stp */
++ OMAP3630_CORE2_IOPAD(0x25f4, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d12.hsusb2_dir */
++ OMAP3630_CORE2_IOPAD(0x25f6, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d13.hsusb2_nxt */
++ OMAP3630_CORE2_IOPAD(0x25f8, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d14.hsusb2_data0 */
++ OMAP3630_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d15.hsusb2_data1 */
++ >;
++ };
++};
+diff --git a/arch/arm/boot/dts/logicpd-som-lv.dtsi b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+index b56524cc7fe27..55b619c99e24d 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv.dtsi
++++ b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+@@ -265,21 +265,6 @@
+ };
+ };
+
+-&omap3_pmx_core2 {
+- pinctrl-names = "default";
+- pinctrl-0 = <&hsusb2_2_pins>;
+- hsusb2_2_pins: pinmux_hsusb2_2_pins {
+- pinctrl-single,pins = <
+- OMAP3630_CORE2_IOPAD(0x25f0, PIN_OUTPUT | MUX_MODE3) /* etk_d10.hsusb2_clk */
+- OMAP3630_CORE2_IOPAD(0x25f2, PIN_OUTPUT | MUX_MODE3) /* etk_d11.hsusb2_stp */
+- OMAP3630_CORE2_IOPAD(0x25f4, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d12.hsusb2_dir */
+- OMAP3630_CORE2_IOPAD(0x25f6, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d13.hsusb2_nxt */
+- OMAP3630_CORE2_IOPAD(0x25f8, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d14.hsusb2_data0 */
+- OMAP3630_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d15.hsusb2_data1 */
+- >;
+- };
+-};
+-
+ &uart2 {
+ interrupts-extended = <&intc 73 &omap3_pmx_core OMAP3_UART2_RX>;
+ pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index 7e3d8147e2c1c..0365f06165e90 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -31,6 +31,8 @@
+ aliases {
+ display0 = &lcd;
+ display1 = &tv0;
++ /delete-property/ mmc2;
++ /delete-property/ mmc3;
+ };
+
+ ldo_3v3: fixedregulator {
+diff --git a/arch/arm/mach-exynos/Kconfig b/arch/arm/mach-exynos/Kconfig
+index f7d993628cb70..a9c1efcf7c9cf 100644
+--- a/arch/arm/mach-exynos/Kconfig
++++ b/arch/arm/mach-exynos/Kconfig
+@@ -17,7 +17,6 @@ menuconfig ARCH_EXYNOS
+ select EXYNOS_PMU
+ select EXYNOS_SROM
+ select EXYNOS_PM_DOMAINS if PM_GENERIC_DOMAINS
+- select GPIOLIB
+ select HAVE_ARM_ARCH_TIMER if ARCH_EXYNOS5
+ select HAVE_ARM_SCU if SMP
+ select PINCTRL
+diff --git a/arch/arm/mach-omap2/omap4-common.c b/arch/arm/mach-omap2/omap4-common.c
+index 5c3845730dbf5..0b80f8bcd3047 100644
+--- a/arch/arm/mach-omap2/omap4-common.c
++++ b/arch/arm/mach-omap2/omap4-common.c
+@@ -314,10 +314,12 @@ void __init omap_gic_of_init(void)
+
+ np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-gic");
+ gic_dist_base_addr = of_iomap(np, 0);
++ of_node_put(np);
+ WARN_ON(!gic_dist_base_addr);
+
+ np = of_find_compatible_node(NULL, NULL, "arm,cortex-a9-twd-timer");
+ twd_base = of_iomap(np, 0);
++ of_node_put(np);
+ WARN_ON(!twd_base);
+
+ skip_errata_init:
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi
+index d61f43052a344..8e9ad1e51d665 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-a311d.dtsi
+@@ -11,26 +11,6 @@
+ compatible = "operating-points-v2";
+ opp-shared;
+
+- opp-100000000 {
+- opp-hz = /bits/ 64 <100000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-250000000 {
+- opp-hz = /bits/ 64 <250000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-500000000 {
+- opp-hz = /bits/ 64 <500000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-667000000 {
+- opp-hz = /bits/ 64 <667000000>;
+- opp-microvolt = <731000>;
+- };
+-
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <761000>;
+@@ -71,26 +51,6 @@
+ compatible = "operating-points-v2";
+ opp-shared;
+
+- opp-100000000 {
+- opp-hz = /bits/ 64 <100000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-250000000 {
+- opp-hz = /bits/ 64 <250000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-500000000 {
+- opp-hz = /bits/ 64 <500000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-667000000 {
+- opp-hz = /bits/ 64 <667000000>;
+- opp-microvolt = <731000>;
+- };
+-
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <731000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi
+index 1e5d0ee5d541b..44c23c984034c 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-s922x.dtsi
+@@ -11,26 +11,6 @@
+ compatible = "operating-points-v2";
+ opp-shared;
+
+- opp-100000000 {
+- opp-hz = /bits/ 64 <100000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-250000000 {
+- opp-hz = /bits/ 64 <250000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-500000000 {
+- opp-hz = /bits/ 64 <500000000>;
+- opp-microvolt = <731000>;
+- };
+-
+- opp-667000000 {
+- opp-hz = /bits/ 64 <667000000>;
+- opp-microvolt = <731000>;
+- };
+-
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <731000>;
+@@ -76,26 +56,6 @@
+ compatible = "operating-points-v2";
+ opp-shared;
+
+- opp-100000000 {
+- opp-hz = /bits/ 64 <100000000>;
+- opp-microvolt = <751000>;
+- };
+-
+- opp-250000000 {
+- opp-hz = /bits/ 64 <250000000>;
+- opp-microvolt = <751000>;
+- };
+-
+- opp-500000000 {
+- opp-hz = /bits/ 64 <500000000>;
+- opp-microvolt = <751000>;
+- };
+-
+- opp-667000000 {
+- opp-hz = /bits/ 64 <667000000>;
+- opp-microvolt = <751000>;
+- };
+-
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <771000>;
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+index 5751c48620edf..cadba194b149b 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+@@ -437,6 +437,7 @@
+ "",
+ "eMMC_RST#", /* BOOT_12 */
+ "eMMC_DS", /* BOOT_13 */
++ "", "",
+ /* GPIOC */
+ "SD_D0_B", /* GPIOC_0 */
+ "SD_D1_B", /* GPIOC_1 */
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+index 3d8b1f4f2001b..78bdbd2ccc9de 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1.dtsi
+@@ -95,26 +95,6 @@
+ compatible = "operating-points-v2";
+ opp-shared;
+
+- opp-100000000 {
+- opp-hz = /bits/ 64 <100000000>;
+- opp-microvolt = <730000>;
+- };
+-
+- opp-250000000 {
+- opp-hz = /bits/ 64 <250000000>;
+- opp-microvolt = <730000>;
+- };
+-
+- opp-500000000 {
+- opp-hz = /bits/ 64 <500000000>;
+- opp-microvolt = <730000>;
+- };
+-
+- opp-667000000 {
+- opp-hz = /bits/ 64 <666666666>;
+- opp-microvolt = <750000>;
+- };
+-
+ opp-1000000000 {
+ opp-hz = /bits/ 64 <1000000000>;
+ opp-microvolt = <770000>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+index 28012279f6f67..7ebbba6bd1665 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw71xx.dtsi
+@@ -103,12 +103,14 @@
+
+ &usbotg1 {
+ dr_mode = "otg";
++ over-current-active-low;
+ vbus-supply = <&reg_usb_otg1_vbus>;
+ status = "okay";
+ };
+
+ &usbotg2 {
+ dr_mode = "host";
++ disable-over-current;
+ status = "okay";
+ };
+
+@@ -166,7 +168,7 @@
+ fsl,pins = <
+ MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0xd6
+ MX8MM_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0xd6
+- MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0xd6
++ MX8MM_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0xd6
+ MX8MM_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0xd6
+ >;
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw72xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw72xx.dtsi
+index 27afa46a253a3..4337a231bbea5 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw72xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw72xx.dtsi
+@@ -139,12 +139,14 @@
+
+ &usbotg1 {
+ dr_mode = "otg";
++ over-current-active-low;
+ vbus-supply = <&reg_usb_otg1_vbus>;
+ status = "okay";
+ };
+
+ &usbotg2 {
+ dr_mode = "host";
++ disable-over-current;
+ vbus-supply = <&reg_usb_otg2_vbus>;
+ status = "okay";
+ };
+@@ -231,7 +233,7 @@
+ fsl,pins = <
+ MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0xd6
+ MX8MM_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0xd6
+- MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0xd6
++ MX8MM_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0xd6
+ MX8MM_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0xd6
+ >;
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw73xx.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw73xx.dtsi
+index a59e849c7be29..8eba6079bbf38 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw73xx.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw73xx.dtsi
+@@ -166,12 +166,14 @@
+
+ &usbotg1 {
+ dr_mode = "otg";
++ over-current-active-low;
+ vbus-supply = <&reg_usb_otg1_vbus>;
+ status = "okay";
+ };
+
+ &usbotg2 {
+ dr_mode = "host";
++ disable-over-current;
+ vbus-supply = <&reg_usb_otg2_vbus>;
+ status = "okay";
+ };
+@@ -280,7 +282,7 @@
+ fsl,pins = <
+ MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0xd6
+ MX8MM_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0xd6
+- MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0xd6
++ MX8MM_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0xd6
+ MX8MM_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0xd6
+ >;
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts b/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
+index 7dfee715a2c4d..d8ce217c60166 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mn-ddr4-evk.dts
+@@ -59,6 +59,10 @@
+ interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ rohm,reset-snvs-powered;
+
++ #clock-cells = <0>;
++ clocks = <&osc_32k 0>;
++ clock-output-names = "clk-32k-out";
++
+ regulators {
+ buck1_reg: BUCK1 {
+ regulator-name = "buck1";
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index b8d49d5f26681..98bfb53491fc0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -291,7 +291,7 @@
+ ranges;
+
+ sai2: sai@30020000 {
+- compatible = "fsl,imx8mm-sai", "fsl,imx8mq-sai";
++ compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
+ reg = <0x30020000 0x10000>;
+ interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MN_CLK_SAI2_IPG>,
+@@ -305,7 +305,7 @@
+ };
+
+ sai3: sai@30030000 {
+- compatible = "fsl,imx8mm-sai", "fsl,imx8mq-sai";
++ compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
+ reg = <0x30030000 0x10000>;
+ interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MN_CLK_SAI3_IPG>,
+@@ -319,7 +319,7 @@
+ };
+
+ sai5: sai@30050000 {
+- compatible = "fsl,imx8mm-sai", "fsl,imx8mq-sai";
++ compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
+ reg = <0x30050000 0x10000>;
+ interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MN_CLK_SAI5_IPG>,
+@@ -335,7 +335,7 @@
+ };
+
+ sai6: sai@30060000 {
+- compatible = "fsl,imx8mm-sai", "fsl,imx8mq-sai";
++ compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
+ reg = <0x30060000 0x10000>;
+ interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MN_CLK_SAI6_IPG>,
+@@ -392,7 +392,7 @@
+ };
+
+ sai7: sai@300b0000 {
+- compatible = "fsl,imx8mm-sai", "fsl,imx8mq-sai";
++ compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
+ reg = <0x300b0000 0x10000>;
+ interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk IMX8MN_CLK_SAI7_IPG>,
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi
+index 8aedcddfeab87..2c63b01e93e01 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq-tqma8mq.dtsi
+@@ -253,7 +253,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+ spi-max-frequency = <84000000>;
+- spi-tx-bus-width = <4>;
++ spi-tx-bus-width = <1>;
+ spi-rx-bus-width = <4>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8qm.dtsi b/arch/arm64/boot/dts/freescale/imx8qm.dtsi
+index 4a7c017b5f31c..8fecd54198fbf 100644
+--- a/arch/arm64/boot/dts/freescale/imx8qm.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8qm.dtsi
+@@ -193,7 +193,7 @@
+ };
+
+ clk: clock-controller {
+- compatible = "fsl,imx8qxp-clk", "fsl,scu-clk";
++ compatible = "fsl,imx8qm-clk", "fsl,scu-clk";
+ #clock-cells = <2>;
+ };
+
+diff --git a/arch/powerpc/kernel/reloc_64.S b/arch/powerpc/kernel/reloc_64.S
+index 02d4719bf43a8..232e4549defe1 100644
+--- a/arch/powerpc/kernel/reloc_64.S
++++ b/arch/powerpc/kernel/reloc_64.S
+@@ -8,8 +8,10 @@
+ #include <asm/ppc_asm.h>
+
+ RELA = 7
+-RELACOUNT = 0x6ffffff9
++RELASZ = 8
++RELAENT = 9
+ R_PPC64_RELATIVE = 22
++R_PPC64_UADDR64 = 43
+
+ /*
+ * r3 = desired final address of kernel
+@@ -25,29 +27,38 @@ _GLOBAL(relocate)
+ add r9,r9,r12 /* r9 has runtime addr of .rela.dyn section */
+ ld r10,(p_st - 0b)(r12)
+ add r10,r10,r12 /* r10 has runtime addr of _stext */
++ ld r13,(p_sym - 0b)(r12)
++ add r13,r13,r12 /* r13 has runtime addr of .dynsym */
+
+ /*
+- * Scan the dynamic section for the RELA and RELACOUNT entries.
++ * Scan the dynamic section for the RELA, RELASZ and RELAENT entries.
+ */
+ li r7,0
+ li r8,0
+-1: ld r6,0(r11) /* get tag */
++.Ltags:
++ ld r6,0(r11) /* get tag */
+ cmpdi r6,0
+- beq 4f /* end of list */
++ beq .Lend_of_list /* end of list */
+ cmpdi r6,RELA
+ bne 2f
+ ld r7,8(r11) /* get RELA pointer in r7 */
+- b 3f
+-2: addis r6,r6,(-RELACOUNT)@ha
+- cmpdi r6,RELACOUNT@l
++ b 4f
++2: cmpdi r6,RELASZ
+ bne 3f
+- ld r8,8(r11) /* get RELACOUNT value in r8 */
+-3: addi r11,r11,16
+- b 1b
+-4: cmpdi r7,0 /* check we have both RELA and RELACOUNT */
++ ld r8,8(r11) /* get RELASZ value in r8 */
++ b 4f
++3: cmpdi r6,RELAENT
++ bne 4f
++ ld r12,8(r11) /* get RELAENT value in r12 */
++4: addi r11,r11,16
++ b .Ltags
++.Lend_of_list:
++ cmpdi r7,0 /* check we have RELA, RELASZ, RELAENT */
+ cmpdi cr1,r8,0
+- beq 6f
+- beq cr1,6f
++ beq .Lout
++ beq cr1,.Lout
++ cmpdi r12,0
++ beq .Lout
+
+ /*
+ * Work out linktime address of _stext and hence the
+@@ -62,23 +73,39 @@ _GLOBAL(relocate)
+
+ /*
+ * Run through the list of relocations and process the
+- * R_PPC64_RELATIVE ones.
++ * R_PPC64_RELATIVE and R_PPC64_UADDR64 ones.
+ */
++ divd r8,r8,r12 /* RELASZ / RELAENT */
+ mtctr r8
+-5: ld r0,8(9) /* ELF64_R_TYPE(reloc->r_info) */
++.Lrels: ld r0,8(r9) /* ELF64_R_TYPE(reloc->r_info) */
+ cmpdi r0,R_PPC64_RELATIVE
+- bne 6f
++ bne .Luaddr64
+ ld r6,0(r9) /* reloc->r_offset */
+ ld r0,16(r9) /* reloc->r_addend */
++ b .Lstore
++.Luaddr64:
++ srdi r14,r0,32 /* ELF64_R_SYM(reloc->r_info) */
++ clrldi r0,r0,32
++ cmpdi r0,R_PPC64_UADDR64
++ bne .Lnext
++ ld r6,0(r9)
++ ld r0,16(r9)
++ mulli r14,r14,24 /* 24 == sizeof(elf64_sym) */
++ add r14,r14,r13 /* elf64_sym[ELF64_R_SYM] */
++ ld r14,8(r14)
++ add r0,r0,r14
++.Lstore:
+ add r0,r0,r3
+ stdx r0,r7,r6
+- addi r9,r9,24
+- bdnz 5b
+-
+-6: blr
++.Lnext:
++ add r9,r9,r12
++ bdnz .Lrels
++.Lout:
++ blr
+
+ .balign 8
+ p_dyn: .8byte __dynamic_start - 0b
+ p_rela: .8byte __rela_dyn_start - 0b
++p_sym: .8byte __dynamic_symtab - 0b
+ p_st: .8byte _stext - 0b
+
+diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
+index 2bcca818136ae..fe22d940412fd 100644
+--- a/arch/powerpc/kernel/vmlinux.lds.S
++++ b/arch/powerpc/kernel/vmlinux.lds.S
+@@ -281,9 +281,7 @@ SECTIONS
+ . = ALIGN(8);
+ .dynsym : AT(ADDR(.dynsym) - LOAD_OFFSET)
+ {
+-#ifdef CONFIG_PPC32
+ __dynamic_symtab = .;
+-#endif
+ *(.dynsym)
+ }
+ .dynstr : AT(ADDR(.dynstr) - LOAD_OFFSET) { *(.dynstr) }
+diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile
+index 2f46e31c76129..4f53d0b97539b 100644
+--- a/arch/powerpc/perf/Makefile
++++ b/arch/powerpc/perf/Makefile
+@@ -3,11 +3,11 @@
+ obj-y += callchain.o callchain_$(BITS).o perf_regs.o
+ obj-$(CONFIG_COMPAT) += callchain_32.o
+
+-obj-$(CONFIG_PPC_PERF_CTRS) += core-book3s.o bhrb.o
++obj-$(CONFIG_PPC_PERF_CTRS) += core-book3s.o
+ obj64-$(CONFIG_PPC_PERF_CTRS) += ppc970-pmu.o power5-pmu.o \
+ power5+-pmu.o power6-pmu.o power7-pmu.o \
+ isa207-common.o power8-pmu.o power9-pmu.o \
+- generic-compat-pmu.o power10-pmu.o
++ generic-compat-pmu.o power10-pmu.o bhrb.o
+ obj32-$(CONFIG_PPC_PERF_CTRS) += mpc7450-pmu.o
+
+ obj-$(CONFIG_PPC_POWERNV) += imc-pmu.o
+diff --git a/arch/powerpc/tools/relocs_check.sh b/arch/powerpc/tools/relocs_check.sh
+index 014e00e74d2b6..63792af004170 100755
+--- a/arch/powerpc/tools/relocs_check.sh
++++ b/arch/powerpc/tools/relocs_check.sh
+@@ -39,6 +39,7 @@ $objdump -R "$vmlinux" |
+ # R_PPC_NONE
+ grep -F -w -v 'R_PPC64_RELATIVE
+ R_PPC64_NONE
++R_PPC64_UADDR64
+ R_PPC_ADDR16_LO
+ R_PPC_ADDR16_HI
+ R_PPC_ADDR16_HA
+@@ -54,9 +55,3 @@ fi
+ num_bad=$(echo "$bad_relocs" | wc -l)
+ echo "WARNING: $num_bad bad relocations"
+ echo "$bad_relocs"
+-
+-# If we see this type of relocation it's an idication that
+-# we /may/ be using an old version of binutils.
+-if echo "$bad_relocs" | grep -q -F -w R_PPC64_UADDR64; then
+- echo "WARNING: You need at least binutils >= 2.19 to build a CONFIG_RELOCATABLE kernel"
+-fi
+diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
+index 0b552873a5778..765004b605132 100644
+--- a/arch/riscv/kernel/patch.c
++++ b/arch/riscv/kernel/patch.c
+@@ -104,7 +104,7 @@ static int patch_text_cb(void *data)
+ struct patch_insn *patch = data;
+ int ret = 0;
+
+- if (atomic_inc_return(&patch->cpu_count) == 1) {
++ if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
+ ret =
+ patch_text_nosync(patch->addr, &patch->insn,
+ GET_INSN_LENGTH(patch->insn));
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index d6bfdfb0f0afe..0c3d3440fe278 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -131,10 +131,12 @@ extern void __init load_ucode_bsp(void);
+ extern void load_ucode_ap(void);
+ void reload_early_microcode(void);
+ extern bool initrd_gone;
++void microcode_bsp_resume(void);
+ #else
+ static inline void __init load_ucode_bsp(void) { }
+ static inline void load_ucode_ap(void) { }
+ static inline void reload_early_microcode(void) { }
++static inline void microcode_bsp_resume(void) { }
+ #endif
+
+ #endif /* _ASM_X86_MICROCODE_H */
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index f955d25076bab..239ff5fcec6a2 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -758,9 +758,9 @@ static struct subsys_interface mc_cpu_interface = {
+ };
+
+ /**
+- * mc_bp_resume - Update boot CPU microcode during resume.
++ * microcode_bsp_resume - Update boot CPU microcode during resume.
+ */
+-static void mc_bp_resume(void)
++void microcode_bsp_resume(void)
+ {
+ int cpu = smp_processor_id();
+ struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+@@ -772,7 +772,7 @@ static void mc_bp_resume(void)
+ }
+
+ static struct syscore_ops mc_syscore_ops = {
+- .resume = mc_bp_resume,
++ .resume = microcode_bsp_resume,
+ };
+
+ static int mc_cpu_starting(unsigned int cpu)
+diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
+index 0402a749f3a0e..0ae6cf8041970 100644
+--- a/arch/x86/lib/usercopy_64.c
++++ b/arch/x86/lib/usercopy_64.c
+@@ -119,7 +119,7 @@ void __memcpy_flushcache(void *_dst, const void *_src, size_t size)
+
+ /* cache copy and flush to align dest */
+ if (!IS_ALIGNED(dest, 8)) {
+- unsigned len = min_t(unsigned, size, ALIGN(dest, 8) - dest);
++ size_t len = min_t(size_t, size, ALIGN(dest, 8) - dest);
+
+ memcpy((void *) dest, (void *) source, len);
+ clean_cache_range((void *) dest, len);
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index 9bb1e29411796..b94f727251b64 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -467,7 +467,6 @@ static __init void xen_setup_pci_msi(void)
+ else
+ xen_msi_ops.setup_msi_irqs = xen_setup_msi_irqs;
+ xen_msi_ops.teardown_msi_irqs = xen_pv_teardown_msi_irqs;
+- pci_msi_ignore_mask = 1;
+ } else if (xen_hvm_domain()) {
+ xen_msi_ops.setup_msi_irqs = xen_hvm_setup_msi_irqs;
+ xen_msi_ops.teardown_msi_irqs = xen_teardown_msi_irqs;
+@@ -481,6 +480,11 @@ static __init void xen_setup_pci_msi(void)
+ * in allocating the native domain and never use it.
+ */
+ x86_init.irqs.create_pci_msi_domain = xen_create_pci_msi_domain;
++ /*
++ * With XEN PIRQ/Eventchannels in use PCI/MSI[-X] masking is solely
++ * controlled by the hypervisor.
++ */
++ pci_msi_ignore_mask = 1;
+ }
+
+ #else /* CONFIG_PCI_MSI */
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index 3822666fb73d5..bb176c72891c9 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -25,6 +25,7 @@
+ #include <asm/cpu.h>
+ #include <asm/mmu_context.h>
+ #include <asm/cpu_device_id.h>
++#include <asm/microcode.h>
+
+ #ifdef CONFIG_X86_32
+ __visible unsigned long saved_context_ebx;
+@@ -262,11 +263,18 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
+ x86_platform.restore_sched_clock_state();
+ mtrr_bp_restore();
+ perf_restore_debug_store();
+- msr_restore_context(ctxt);
+
+ c = &cpu_data(smp_processor_id());
+ if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
+ init_ia32_feat_ctl(c);
++
++ microcode_bsp_resume();
++
++ /*
++ * This needs to happen after the microcode has been updated upon resume
++ * because some of the MSRs are "emulated" in microcode.
++ */
++ msr_restore_context(ctxt);
+ }
+
+ /* Needed by apm.c */
+diff --git a/arch/xtensa/platforms/iss/console.c b/arch/xtensa/platforms/iss/console.c
+index 81d7c7e8f7e96..10b79d3c74e07 100644
+--- a/arch/xtensa/platforms/iss/console.c
++++ b/arch/xtensa/platforms/iss/console.c
+@@ -36,24 +36,19 @@ static void rs_poll(struct timer_list *);
+ static struct tty_driver *serial_driver;
+ static struct tty_port serial_port;
+ static DEFINE_TIMER(serial_timer, rs_poll);
+-static DEFINE_SPINLOCK(timer_lock);
+
+ static int rs_open(struct tty_struct *tty, struct file * filp)
+ {
+- spin_lock_bh(&timer_lock);
+ if (tty->count == 1)
+ mod_timer(&serial_timer, jiffies + SERIAL_TIMER_VALUE);
+- spin_unlock_bh(&timer_lock);
+
+ return 0;
+ }
+
+ static void rs_close(struct tty_struct *tty, struct file * filp)
+ {
+- spin_lock_bh(&timer_lock);
+ if (tty->count == 1)
+ del_timer_sync(&serial_timer);
+- spin_unlock_bh(&timer_lock);
+ }
+
+
+@@ -73,8 +68,6 @@ static void rs_poll(struct timer_list *unused)
+ int rd = 1;
+ unsigned char c;
+
+- spin_lock(&timer_lock);
+-
+ while (simc_poll(0)) {
+ rd = simc_read(0, &c, 1);
+ if (rd <= 0)
+@@ -87,7 +80,6 @@ static void rs_poll(struct timer_list *unused)
+ tty_flip_buffer_push(port);
+ if (rd)
+ mod_timer(&serial_timer, jiffies + SERIAL_TIMER_VALUE);
+- spin_unlock(&timer_lock);
+ }
+
+
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 1dff82d34b44b..963f9f549232b 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -569,7 +569,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ struct bfq_entity *entity = &bfqq->entity;
+ struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
+ struct bfq_entity **entities = inline_entities;
+- int depth, level;
++ int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
+ int class_idx = bfqq->ioprio_class - 1;
+ struct bfq_sched_data *sched_data;
+ unsigned long wsum;
+@@ -578,15 +578,21 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ if (!entity->on_st_or_in_serv)
+ return false;
+
++retry:
++ spin_lock_irq(&bfqd->lock);
+ /* +1 for bfqq entity, root cgroup not included */
+ depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
+- if (depth > BFQ_LIMIT_INLINE_DEPTH) {
++ if (depth > alloc_depth) {
++ spin_unlock_irq(&bfqd->lock);
++ if (entities != inline_entities)
++ kfree(entities);
+ entities = kmalloc_array(depth, sizeof(*entities), GFP_NOIO);
+ if (!entities)
+ return false;
++ alloc_depth = depth;
++ goto retry;
+ }
+
+- spin_lock_irq(&bfqd->lock);
+ sched_data = entity->sched_data;
+ /* Gather our ancestors as we need to traverse them in reverse order */
+ level = 0;
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 769b643942989..871d9529ea357 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2322,7 +2322,17 @@ static void ioc_timer_fn(struct timer_list *timer)
+ iocg->hweight_donating = hwa;
+ iocg->hweight_after_donation = new_hwi;
+ list_add(&iocg->surplus_list, &surpluses);
+- } else {
++ } else if (!iocg->abs_vdebt) {
++ /*
++ * @iocg doesn't have enough to donate. Reset
++ * its inuse to active.
++ *
++ * Don't reset debtors as their inuse's are
++ * owned by debt handling. This shouldn't affect
++ * donation calculuation in any meaningful way
++ * as @iocg doesn't have a meaningful amount of
++ * share anyway.
++ */
+ TRACE_IOCG_PATH(inuse_shortage, iocg, &now,
+ iocg->inuse, iocg->active,
+ iocg->hweight_inuse, new_hwi);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index cb50097366cd4..0aa20df31e369 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1122,14 +1122,7 @@ void blk_mq_start_request(struct request *rq)
+ trace_block_rq_issue(rq);
+
+ if (test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
+- u64 start_time;
+-#ifdef CONFIG_BLK_CGROUP
+- if (rq->bio)
+- start_time = bio_issue_time(&rq->bio->bi_issue);
+- else
+-#endif
+- start_time = ktime_get_ns();
+- rq->io_start_time_ns = start_time;
++ rq->io_start_time_ns = ktime_get_ns();
+ rq->stats_sectors = blk_rq_sectors(rq);
+ rq->rq_flags |= RQF_STATS;
+ rq_qos_issue(q, rq);
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 4556c86c34659..eb95e188d62bc 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -96,11 +96,6 @@ static const struct dmi_system_id processor_power_dmi_table[] = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME,"L8400B series Notebook PC")},
+ (void *)1},
+- /* T40 can not handle C3 idle state */
+- { set_max_cstate, "IBM ThinkPad T40", {
+- DMI_MATCH(DMI_SYS_VENDOR, "IBM"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "23737CU")},
+- (void *)2},
+ {},
+ };
+
+@@ -795,7 +790,8 @@ static int acpi_processor_setup_cstates(struct acpi_processor *pr)
+ if (cx->type == ACPI_STATE_C1 || cx->type == ACPI_STATE_C2 ||
+ cx->type == ACPI_STATE_C3) {
+ state->enter_dead = acpi_idle_play_dead;
+- drv->safe_state_index = count;
++ if (cx->type != ACPI_STATE_C3)
++ drv->safe_state_index = count;
+ }
+ /*
+ * Halt-induced C1 is not good for ->enter_s2idle, because it
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 8351c5638880b..f3b639e89dd88 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -2295,6 +2295,7 @@ static int binder_do_deferred_txn_copies(struct binder_alloc *alloc,
+ {
+ int ret = 0;
+ struct binder_sg_copy *sgc, *tmpsgc;
++ struct binder_ptr_fixup *tmppf;
+ struct binder_ptr_fixup *pf =
+ list_first_entry_or_null(pf_head, struct binder_ptr_fixup,
+ node);
+@@ -2349,7 +2350,11 @@ static int binder_do_deferred_txn_copies(struct binder_alloc *alloc,
+ list_del(&sgc->node);
+ kfree(sgc);
+ }
+- BUG_ON(!list_empty(pf_head));
++ list_for_each_entry_safe(pf, tmppf, pf_head, node) {
++ BUG_ON(pf->skip_size == 0);
++ list_del(&pf->node);
++ kfree(pf);
++ }
+ BUG_ON(!list_empty(sgc_head));
+
+ return ret > 0 ? -EINVAL : ret;
+@@ -2486,6 +2491,9 @@ static int binder_translate_fd_array(struct list_head *pf_head,
+ struct binder_proc *proc = thread->proc;
+ int ret;
+
++ if (fda->num_fds == 0)
++ return 0;
++
+ fd_buf_size = sizeof(u32) * fda->num_fds;
+ if (fda->num_fds >= SIZE_MAX / sizeof(u32)) {
+ binder_user_error("%d:%d got transaction with invalid number of fds (%lld)\n",
+diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
+index 976154140f0b0..78b911a2c4c72 100644
+--- a/drivers/base/arch_topology.c
++++ b/drivers/base/arch_topology.c
+@@ -628,6 +628,15 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
+ core_mask = &cpu_topology[cpu].llc_sibling;
+ }
+
++ /*
++ * For systems with no shared cpu-side LLC but with clusters defined,
++ * extend core_mask to cluster_siblings. The sched domain builder will
++ * then remove MC as redundant with CLS if SCHED_CLUSTER is enabled.
++ */
++ if (IS_ENABLED(CONFIG_SCHED_CLUSTER) &&
++ cpumask_subset(core_mask, &cpu_topology[cpu].cluster_sibling))
++ core_mask = &cpu_topology[cpu].cluster_sibling;
++
+ return core_mask;
+ }
+
+@@ -645,7 +654,7 @@ void update_siblings_masks(unsigned int cpuid)
+ for_each_online_cpu(cpu) {
+ cpu_topo = &cpu_topology[cpu];
+
+- if (cpuid_topo->llc_id == cpu_topo->llc_id) {
++ if (cpu_topo->llc_id != -1 && cpuid_topo->llc_id == cpu_topo->llc_id) {
+ cpumask_set_cpu(cpu, &cpuid_topo->llc_sibling);
+ cpumask_set_cpu(cpuid, &cpu_topo->llc_sibling);
+ }
+diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
+index 519b6d38d4df6..fdb81f2794cde 100644
+--- a/drivers/block/Kconfig
++++ b/drivers/block/Kconfig
+@@ -33,6 +33,22 @@ config BLK_DEV_FD
+ To compile this driver as a module, choose M here: the
+ module will be called floppy.
+
++config BLK_DEV_FD_RAWCMD
++ bool "Support for raw floppy disk commands (DEPRECATED)"
++ depends on BLK_DEV_FD
++ help
++ If you want to use actual physical floppies and expect to do
++ special low-level hardware accesses to them (access and use
++ non-standard formats, for example), then enable this.
++
++ Note that the code enabled by this option is rarely used and
++ might be unstable or insecure, and distros should not enable it.
++
++ Note: FDRAWCMD is deprecated and will be removed from the kernel
++ in the near future.
++
++ If unsure, say N.
++
+ config AMIGA_FLOPPY
+ tristate "Amiga floppy support"
+ depends on AMIGA
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index e611411a934ce..a29cc2928be47 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -2984,6 +2984,8 @@ static const char *drive_name(int type, int drive)
+ return "(null)";
+ }
+
++#ifdef CONFIG_BLK_DEV_FD_RAWCMD
++
+ /* raw commands */
+ static void raw_cmd_done(int flag)
+ {
+@@ -3183,6 +3185,35 @@ static int raw_cmd_ioctl(int cmd, void __user *param)
+ return ret;
+ }
+
++static int floppy_raw_cmd_ioctl(int type, int drive, int cmd,
++ void __user *param)
++{
++ int ret;
++
++ pr_warn_once("Note: FDRAWCMD is deprecated and will be removed from the kernel in the near future.\n");
++
++ if (type)
++ return -EINVAL;
++ if (lock_fdc(drive))
++ return -EINTR;
++ set_floppy(drive);
++ ret = raw_cmd_ioctl(cmd, param);
++ if (ret == -EINTR)
++ return -EINTR;
++ process_fd_request();
++ return ret;
++}
++
++#else /* CONFIG_BLK_DEV_FD_RAWCMD */
++
++static int floppy_raw_cmd_ioctl(int type, int drive, int cmd,
++ void __user *param)
++{
++ return -EOPNOTSUPP;
++}
++
++#endif
++
+ static int invalidate_drive(struct block_device *bdev)
+ {
+ /* invalidate the buffer track to force a reread */
+@@ -3371,7 +3402,6 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ {
+ int drive = (long)bdev->bd_disk->private_data;
+ int type = ITYPE(drive_state[drive].fd_device);
+- int i;
+ int ret;
+ int size;
+ union inparam {
+@@ -3522,16 +3552,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int
+ outparam = &write_errors[drive];
+ break;
+ case FDRAWCMD:
+- if (type)
+- return -EINVAL;
+- if (lock_fdc(drive))
+- return -EINTR;
+- set_floppy(drive);
+- i = raw_cmd_ioctl(cmd, (void __user *)param);
+- if (i == -EINTR)
+- return -EINTR;
+- process_fd_request();
+- return i;
++ return floppy_raw_cmd_ioctl(type, drive, cmd, (void __user *)param);
+ case FDTWADDLE:
+ if (lock_fdc(drive))
+ return -EINTR;
+diff --git a/drivers/bus/fsl-mc/fsl-mc-msi.c b/drivers/bus/fsl-mc/fsl-mc-msi.c
+index 5e0e4393ce4d4..0cfe859a4ac4d 100644
+--- a/drivers/bus/fsl-mc/fsl-mc-msi.c
++++ b/drivers/bus/fsl-mc/fsl-mc-msi.c
+@@ -224,8 +224,12 @@ int fsl_mc_msi_domain_alloc_irqs(struct device *dev, unsigned int irq_count)
+ if (error)
+ return error;
+
++ msi_lock_descs(dev);
+ if (msi_first_desc(dev, MSI_DESC_ALL))
+- return -EINVAL;
++ error = -EINVAL;
++ msi_unlock_descs(dev);
++ if (error)
++ return error;
+
+ /*
+ * NOTE: Calling this function will trigger the invocation of the
+diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
+index 9527b7d638401..541ced27d9412 100644
+--- a/drivers/bus/mhi/pci_generic.c
++++ b/drivers/bus/mhi/pci_generic.c
+@@ -1060,6 +1060,7 @@ static int __maybe_unused mhi_pci_freeze(struct device *dev)
+ * the intermediate restore kernel reinitializes MHI device with new
+ * context.
+ */
++ flush_work(&mhi_pdev->recovery_work);
+ if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {
+ mhi_power_down(mhi_cntrl, true);
+ mhi_unprepare_after_power_down(mhi_cntrl);
+@@ -1085,6 +1086,7 @@ static const struct dev_pm_ops mhi_pci_pm_ops = {
+ .resume = mhi_pci_resume,
+ .freeze = mhi_pci_freeze,
+ .thaw = mhi_pci_restore,
++ .poweroff = mhi_pci_freeze,
+ .restore = mhi_pci_restore,
+ #endif
+ };
+diff --git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c
+index 4566e730ef2b8..60b082fe2ed02 100644
+--- a/drivers/bus/sunxi-rsb.c
++++ b/drivers/bus/sunxi-rsb.c
+@@ -227,6 +227,8 @@ static struct sunxi_rsb_device *sunxi_rsb_device_create(struct sunxi_rsb *rsb,
+
+ dev_dbg(&rdev->dev, "device %s registered\n", dev_name(&rdev->dev));
+
++ return rdev;
++
+ err_device_add:
+ put_device(&rdev->dev);
+
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 54c0ee6dda302..7a1b1f9e49333 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3232,13 +3232,27 @@ static int sysc_check_disabled_devices(struct sysc *ddata)
+ */
+ static int sysc_check_active_timer(struct sysc *ddata)
+ {
++ int error;
++
+ if (ddata->cap->type != TI_SYSC_OMAP2_TIMER &&
+ ddata->cap->type != TI_SYSC_OMAP4_TIMER)
+ return 0;
+
++ /*
++ * Quirk for omap3 beagleboard revision A to B4 to use gpt12.
++ * Revision C and later are fixed with commit 23885389dbbb ("ARM:
++ * dts: Fix timer regression for beagleboard revision c"). This all
++ * can be dropped if we stop supporting old beagleboard revisions
++ * A to B4 at some point.
++ */
++ if (sysc_soc->soc == SOC_3430)
++ error = -ENXIO;
++ else
++ error = -EBUSY;
++
+ if ((ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT) &&
+ (ddata->cfg.quirks & SYSC_QUIRK_NO_IDLE))
+- return -ENXIO;
++ return error;
+
+ return 0;
+ }
+diff --git a/drivers/clk/sunxi/clk-sun9i-mmc.c b/drivers/clk/sunxi/clk-sun9i-mmc.c
+index 542b31d6e96dd..636bcf2439ef2 100644
+--- a/drivers/clk/sunxi/clk-sun9i-mmc.c
++++ b/drivers/clk/sunxi/clk-sun9i-mmc.c
+@@ -109,6 +109,8 @@ static int sun9i_a80_mmc_config_clk_probe(struct platform_device *pdev)
+ spin_lock_init(&data->lock);
+
+ r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!r)
++ return -EINVAL;
+ /* one clock/reset pair per word */
+ count = DIV_ROUND_UP((resource_size(r)), SUN9I_MMC_WIDTH);
+ data->membase = devm_ioremap_resource(&pdev->dev, r);
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index effbb680b453f..ca0f1be1c3b23 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -24,12 +24,16 @@
+ #define CLK_HW_DIV 2
+ #define LUT_TURBO_IND 1
+
++#define GT_IRQ_STATUS BIT(2)
++
+ #define HZ_PER_KHZ 1000
+
+ struct qcom_cpufreq_soc_data {
+ u32 reg_enable;
++ u32 reg_domain_state;
+ u32 reg_freq_lut;
+ u32 reg_volt_lut;
++ u32 reg_intr_clr;
+ u32 reg_current_vote;
+ u32 reg_perf_state;
+ u8 lut_row_size;
+@@ -267,37 +271,46 @@ static void qcom_get_related_cpus(int index, struct cpumask *m)
+ }
+ }
+
+-static unsigned int qcom_lmh_get_throttle_freq(struct qcom_cpufreq_data *data)
++static unsigned long qcom_lmh_get_throttle_freq(struct qcom_cpufreq_data *data)
+ {
+- unsigned int val = readl_relaxed(data->base + data->soc_data->reg_current_vote);
++ unsigned int lval;
++
++ if (data->soc_data->reg_current_vote)
++ lval = readl_relaxed(data->base + data->soc_data->reg_current_vote) & 0x3ff;
++ else
++ lval = readl_relaxed(data->base + data->soc_data->reg_domain_state) & 0xff;
+
+- return (val & 0x3FF) * 19200;
++ return lval * xo_rate;
+ }
+
+ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
+ {
+ struct cpufreq_policy *policy = data->policy;
+- int cpu = cpumask_first(policy->cpus);
++ int cpu = cpumask_first(policy->related_cpus);
+ struct device *dev = get_cpu_device(cpu);
+ unsigned long freq_hz, throttled_freq;
+ struct dev_pm_opp *opp;
+- unsigned int freq;
+
+ /*
+ * Get the h/w throttled frequency, normalize it using the
+ * registered opp table and use it to calculate thermal pressure.
+ */
+- freq = qcom_lmh_get_throttle_freq(data);
+- freq_hz = freq * HZ_PER_KHZ;
++ freq_hz = qcom_lmh_get_throttle_freq(data);
+
+ opp = dev_pm_opp_find_freq_floor(dev, &freq_hz);
+ if (IS_ERR(opp) && PTR_ERR(opp) == -ERANGE)
+- dev_pm_opp_find_freq_ceil(dev, &freq_hz);
++ opp = dev_pm_opp_find_freq_ceil(dev, &freq_hz);
+
+- throttled_freq = freq_hz / HZ_PER_KHZ;
++ if (IS_ERR(opp)) {
++ dev_warn(dev, "Can't find the OPP for throttling: %pe!\n", opp);
++ } else {
++ throttled_freq = freq_hz / HZ_PER_KHZ;
+
+- /* Update thermal pressure (the boost frequencies are accepted) */
+- arch_update_thermal_pressure(policy->related_cpus, throttled_freq);
++ /* Update thermal pressure (the boost frequencies are accepted) */
++ arch_update_thermal_pressure(policy->related_cpus, throttled_freq);
++
++ dev_pm_opp_put(opp);
++ }
+
+ /*
+ * In the unlikely case policy is unregistered do not enable
+@@ -337,6 +350,10 @@ static irqreturn_t qcom_lmh_dcvs_handle_irq(int irq, void *data)
+ disable_irq_nosync(c_data->throttle_irq);
+ schedule_delayed_work(&c_data->throttle_work, 0);
+
++ if (c_data->soc_data->reg_intr_clr)
++ writel_relaxed(GT_IRQ_STATUS,
++ c_data->base + c_data->soc_data->reg_intr_clr);
++
+ return IRQ_HANDLED;
+ }
+
+@@ -351,8 +368,10 @@ static const struct qcom_cpufreq_soc_data qcom_soc_data = {
+
+ static const struct qcom_cpufreq_soc_data epss_soc_data = {
+ .reg_enable = 0x0,
++ .reg_domain_state = 0x20,
+ .reg_freq_lut = 0x100,
+ .reg_volt_lut = 0x200,
++ .reg_intr_clr = 0x308,
+ .reg_perf_state = 0x320,
+ .lut_row_size = 4,
+ };
+@@ -412,6 +431,7 @@ static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data)
+ mutex_unlock(&data->throttle_lock);
+
+ cancel_delayed_work_sync(&data->throttle_work);
++ irq_set_affinity_hint(data->throttle_irq, NULL);
+ free_irq(data->throttle_irq, data);
+ }
+
+diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+index 2deed8d8773fa..75e1bf3a08f7c 100644
+--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
++++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+@@ -98,8 +98,10 @@ static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ ret = sun50i_cpufreq_get_efuse(&speed);
+- if (ret)
++ if (ret) {
++ kfree(opp_tables);
+ return ret;
++ }
+
+ snprintf(name, MAX_NAME_LEN, "speed%d", speed);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index c853266957ce1..f09aeff513ee9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2348,6 +2348,71 @@ static int amdgpu_pmops_restore(struct device *dev)
+ return amdgpu_device_resume(drm_dev, true);
+ }
+
++static int amdgpu_runtime_idle_check_display(struct device *dev)
++{
++ struct pci_dev *pdev = to_pci_dev(dev);
++ struct drm_device *drm_dev = pci_get_drvdata(pdev);
++ struct amdgpu_device *adev = drm_to_adev(drm_dev);
++
++ if (adev->mode_info.num_crtc) {
++ struct drm_connector *list_connector;
++ struct drm_connector_list_iter iter;
++ int ret = 0;
++
++ /* XXX: Return busy if any displays are connected to avoid
++ * possible display wakeups after runtime resume due to
++ * hotplug events in case any displays were connected while
++ * the GPU was in suspend. Remove this once that is fixed.
++ */
++ mutex_lock(&drm_dev->mode_config.mutex);
++ drm_connector_list_iter_begin(drm_dev, &iter);
++ drm_for_each_connector_iter(list_connector, &iter) {
++ if (list_connector->status == connector_status_connected) {
++ ret = -EBUSY;
++ break;
++ }
++ }
++ drm_connector_list_iter_end(&iter);
++ mutex_unlock(&drm_dev->mode_config.mutex);
++
++ if (ret)
++ return ret;
++
++ if (amdgpu_device_has_dc_support(adev)) {
++ struct drm_crtc *crtc;
++
++ drm_for_each_crtc(crtc, drm_dev) {
++ drm_modeset_lock(&crtc->mutex, NULL);
++ if (crtc->state->active)
++ ret = -EBUSY;
++ drm_modeset_unlock(&crtc->mutex);
++ if (ret < 0)
++ break;
++ }
++ } else {
++ mutex_lock(&drm_dev->mode_config.mutex);
++ drm_modeset_lock(&drm_dev->mode_config.connection_mutex, NULL);
++
++ drm_connector_list_iter_begin(drm_dev, &iter);
++ drm_for_each_connector_iter(list_connector, &iter) {
++ if (list_connector->dpms == DRM_MODE_DPMS_ON) {
++ ret = -EBUSY;
++ break;
++ }
++ }
++
++ drm_connector_list_iter_end(&iter);
++
++ drm_modeset_unlock(&drm_dev->mode_config.connection_mutex);
++ mutex_unlock(&drm_dev->mode_config.mutex);
++ }
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
+ static int amdgpu_pmops_runtime_suspend(struct device *dev)
+ {
+ struct pci_dev *pdev = to_pci_dev(dev);
+@@ -2360,6 +2425,10 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
+ return -EBUSY;
+ }
+
++ ret = amdgpu_runtime_idle_check_display(dev);
++ if (ret)
++ return ret;
++
+ /* wait for all rings to drain before suspending */
+ for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+ struct amdgpu_ring *ring = adev->rings[i];
+@@ -2469,41 +2538,7 @@ static int amdgpu_pmops_runtime_idle(struct device *dev)
+ return -EBUSY;
+ }
+
+- if (amdgpu_device_has_dc_support(adev)) {
+- struct drm_crtc *crtc;
+-
+- drm_for_each_crtc(crtc, drm_dev) {
+- drm_modeset_lock(&crtc->mutex, NULL);
+- if (crtc->state->active)
+- ret = -EBUSY;
+- drm_modeset_unlock(&crtc->mutex);
+- if (ret < 0)
+- break;
+- }
+-
+- } else {
+- struct drm_connector *list_connector;
+- struct drm_connector_list_iter iter;
+-
+- mutex_lock(&drm_dev->mode_config.mutex);
+- drm_modeset_lock(&drm_dev->mode_config.connection_mutex, NULL);
+-
+- drm_connector_list_iter_begin(drm_dev, &iter);
+- drm_for_each_connector_iter(list_connector, &iter) {
+- if (list_connector->dpms == DRM_MODE_DPMS_ON) {
+- ret = -EBUSY;
+- break;
+- }
+- }
+-
+- drm_connector_list_iter_end(&iter);
+-
+- drm_modeset_unlock(&drm_dev->mode_config.connection_mutex);
+- mutex_unlock(&drm_dev->mode_config.mutex);
+- }
+-
+- if (ret == -EBUSY)
+- DRM_DEBUG_DRIVER("failing to power off - crtc active\n");
++ ret = amdgpu_runtime_idle_check_display(dev);
+
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_autosuspend(dev);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 4b6814949aad0..8d40b93747e06 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -129,19 +129,33 @@ void program_sh_mem_settings(struct device_queue_manager *dqm,
+ }
+
+ static void increment_queue_count(struct device_queue_manager *dqm,
+- enum kfd_queue_type type)
++ struct qcm_process_device *qpd,
++ struct queue *q)
+ {
+ dqm->active_queue_count++;
+- if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
++ if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE ||
++ q->properties.type == KFD_QUEUE_TYPE_DIQ)
+ dqm->active_cp_queue_count++;
++
++ if (q->properties.is_gws) {
++ dqm->gws_queue_count++;
++ qpd->mapped_gws_queue = true;
++ }
+ }
+
+ static void decrement_queue_count(struct device_queue_manager *dqm,
+- enum kfd_queue_type type)
++ struct qcm_process_device *qpd,
++ struct queue *q)
+ {
+ dqm->active_queue_count--;
+- if (type == KFD_QUEUE_TYPE_COMPUTE || type == KFD_QUEUE_TYPE_DIQ)
++ if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE ||
++ q->properties.type == KFD_QUEUE_TYPE_DIQ)
+ dqm->active_cp_queue_count--;
++
++ if (q->properties.is_gws) {
++ dqm->gws_queue_count--;
++ qpd->mapped_gws_queue = false;
++ }
+ }
+
+ static int allocate_doorbell(struct qcm_process_device *qpd, struct queue *q)
+@@ -380,7 +394,7 @@ add_queue_to_list:
+ list_add(&q->list, &qpd->queues_list);
+ qpd->queue_count++;
+ if (q->properties.is_active)
+- increment_queue_count(dqm, q->properties.type);
++ increment_queue_count(dqm, qpd, q);
+
+ /*
+ * Unconditionally increment this counter, regardless of the queue's
+@@ -505,13 +519,8 @@ static int destroy_queue_nocpsch_locked(struct device_queue_manager *dqm,
+ deallocate_vmid(dqm, qpd, q);
+ }
+ qpd->queue_count--;
+- if (q->properties.is_active) {
+- decrement_queue_count(dqm, q->properties.type);
+- if (q->properties.is_gws) {
+- dqm->gws_queue_count--;
+- qpd->mapped_gws_queue = false;
+- }
+- }
++ if (q->properties.is_active)
++ decrement_queue_count(dqm, qpd, q);
+
+ return retval;
+ }
+@@ -604,12 +613,11 @@ static int update_queue(struct device_queue_manager *dqm, struct queue *q,
+ * dqm->active_queue_count to determine whether a new runlist must be
+ * uploaded.
+ */
+- if (q->properties.is_active && !prev_active)
+- increment_queue_count(dqm, q->properties.type);
+- else if (!q->properties.is_active && prev_active)
+- decrement_queue_count(dqm, q->properties.type);
+-
+- if (q->gws && !q->properties.is_gws) {
++ if (q->properties.is_active && !prev_active) {
++ increment_queue_count(dqm, &pdd->qpd, q);
++ } else if (!q->properties.is_active && prev_active) {
++ decrement_queue_count(dqm, &pdd->qpd, q);
++ } else if (q->gws && !q->properties.is_gws) {
+ if (q->properties.is_active) {
+ dqm->gws_queue_count++;
+ pdd->qpd.mapped_gws_queue = true;
+@@ -671,11 +679,7 @@ static int evict_process_queues_nocpsch(struct device_queue_manager *dqm,
+ mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
+ q->properties.type)];
+ q->properties.is_active = false;
+- decrement_queue_count(dqm, q->properties.type);
+- if (q->properties.is_gws) {
+- dqm->gws_queue_count--;
+- qpd->mapped_gws_queue = false;
+- }
++ decrement_queue_count(dqm, qpd, q);
+
+ if (WARN_ONCE(!dqm->sched_running, "Evict when stopped\n"))
+ continue;
+@@ -721,7 +725,7 @@ static int evict_process_queues_cpsch(struct device_queue_manager *dqm,
+ continue;
+
+ q->properties.is_active = false;
+- decrement_queue_count(dqm, q->properties.type);
++ decrement_queue_count(dqm, qpd, q);
+ }
+ pdd->last_evict_timestamp = get_jiffies_64();
+ retval = execute_queues_cpsch(dqm,
+@@ -792,11 +796,7 @@ static int restore_process_queues_nocpsch(struct device_queue_manager *dqm,
+ mqd_mgr = dqm->mqd_mgrs[get_mqd_type_from_queue_type(
+ q->properties.type)];
+ q->properties.is_active = true;
+- increment_queue_count(dqm, q->properties.type);
+- if (q->properties.is_gws) {
+- dqm->gws_queue_count++;
+- qpd->mapped_gws_queue = true;
+- }
++ increment_queue_count(dqm, qpd, q);
+
+ if (WARN_ONCE(!dqm->sched_running, "Restore when stopped\n"))
+ continue;
+@@ -854,7 +854,7 @@ static int restore_process_queues_cpsch(struct device_queue_manager *dqm,
+ continue;
+
+ q->properties.is_active = true;
+- increment_queue_count(dqm, q->properties.type);
++ increment_queue_count(dqm, &pdd->qpd, q);
+ }
+ retval = execute_queues_cpsch(dqm,
+ KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+@@ -1260,7 +1260,7 @@ static int create_kernel_queue_cpsch(struct device_queue_manager *dqm,
+ dqm->total_queue_count);
+
+ list_add(&kq->list, &qpd->priv_queue_list);
+- increment_queue_count(dqm, kq->queue->properties.type);
++ increment_queue_count(dqm, qpd, kq->queue);
+ qpd->is_debug = true;
+ execute_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+ dqm_unlock(dqm);
+@@ -1274,7 +1274,7 @@ static void destroy_kernel_queue_cpsch(struct device_queue_manager *dqm,
+ {
+ dqm_lock(dqm);
+ list_del(&kq->list);
+- decrement_queue_count(dqm, kq->queue->properties.type);
++ decrement_queue_count(dqm, qpd, kq->queue);
+ qpd->is_debug = false;
+ execute_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0);
+ /*
+@@ -1341,7 +1341,7 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
+ qpd->queue_count++;
+
+ if (q->properties.is_active) {
+- increment_queue_count(dqm, q->properties.type);
++ increment_queue_count(dqm, qpd, q);
+
+ execute_queues_cpsch(dqm,
+ KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+@@ -1558,15 +1558,11 @@ static int destroy_queue_cpsch(struct device_queue_manager *dqm,
+ list_del(&q->list);
+ qpd->queue_count--;
+ if (q->properties.is_active) {
+- decrement_queue_count(dqm, q->properties.type);
++ decrement_queue_count(dqm, qpd, q);
+ retval = execute_queues_cpsch(dqm,
+ KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+ if (retval == -ETIME)
+ qpd->reset_wavefronts = true;
+- if (q->properties.is_gws) {
+- dqm->gws_queue_count--;
+- qpd->mapped_gws_queue = false;
+- }
+ }
+
+ /*
+@@ -1757,7 +1753,7 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ /* Clean all kernel queues */
+ list_for_each_entry_safe(kq, kq_next, &qpd->priv_queue_list, list) {
+ list_del(&kq->list);
+- decrement_queue_count(dqm, kq->queue->properties.type);
++ decrement_queue_count(dqm, qpd, kq->queue);
+ qpd->is_debug = false;
+ dqm->total_queue_count--;
+ filter = KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES;
+@@ -1770,13 +1766,8 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
+ else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
+ deallocate_sdma_queue(dqm, q);
+
+- if (q->properties.is_active) {
+- decrement_queue_count(dqm, q->properties.type);
+- if (q->properties.is_gws) {
+- dqm->gws_queue_count--;
+- qpd->mapped_gws_queue = false;
+- }
+- }
++ if (q->properties.is_active)
++ decrement_queue_count(dqm, qpd, q);
+
+ dqm->total_queue_count--;
+ }
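
The amdkfd hunks above close a bookkeeping hole: some activation paths updated dqm->gws_queue_count and qpd->mapped_gws_queue while others forgot, so the counters could drift apart. Folding the GWS state into increment_queue_count()/decrement_queue_count() leaves a single place that keeps the pair consistent. A minimal userspace sketch of that pattern (types and names are illustrative stand-ins, not the KFD API):

    #include <stdbool.h>
    #include <stdio.h>

    struct qpd { bool mapped_gws_queue; };
    struct dqm { int active_queue_count; int gws_queue_count; };
    struct queue { bool is_gws; };

    /* single place that keeps the paired counters in sync */
    static void inc_queue(struct dqm *dqm, struct qpd *qpd, struct queue *q)
    {
        dqm->active_queue_count++;
        if (q->is_gws) {
            dqm->gws_queue_count++;
            qpd->mapped_gws_queue = true;
        }
    }

    static void dec_queue(struct dqm *dqm, struct qpd *qpd, struct queue *q)
    {
        dqm->active_queue_count--;
        if (q->is_gws) {
            dqm->gws_queue_count--;
            qpd->mapped_gws_queue = false;
        }
    }

    int main(void)
    {
        struct dqm dqm = {0};
        struct qpd qpd = {0};
        struct queue q = { .is_gws = true };

        inc_queue(&dqm, &qpd, &q);
        dec_queue(&dqm, &qpd, &q);
        printf("gws=%d mapped=%d\n", dqm.gws_queue_count, (int)qpd.mapped_gws_queue);
        return 0;
    }
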
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index ca1bbc942fd40..67f3cae553e0d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -1427,6 +1427,7 @@ static struct clock_source *dcn21_clock_source_create(
+ return &clk_src->base;
+ }
+
++ kfree(clk_src);
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
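
The one-line dcn21 change plugs a leak on the failure path: the clock source was allocated, construction failed, and the function returned NULL without freeing it. The general shape of the fix, with hypothetical names:

    #include <stdlib.h>

    struct clock_source { int dummy; };

    static int construct(struct clock_source *cs) { (void)cs; return 0; /* may fail */ }

    static struct clock_source *clock_source_create(void)
    {
        struct clock_source *cs = calloc(1, sizeof(*cs));

        if (!cs)
            return NULL;
        if (construct(cs) == 0)
            return cs;

        free(cs);    /* the fix: release the allocation on failure */
        return NULL;
    }

    int main(void)
    {
        free(clock_source_create());
        return 0;
    }
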
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
+index 97cf3cac01058..fb6cf30ee6281 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
+@@ -97,6 +97,14 @@
+
+ #define INTEL_EDP_BRIGHTNESS_OPTIMIZATION_1 0x359
+
++enum intel_dp_aux_backlight_modparam {
++ INTEL_DP_AUX_BACKLIGHT_AUTO = -1,
++ INTEL_DP_AUX_BACKLIGHT_OFF = 0,
++ INTEL_DP_AUX_BACKLIGHT_ON = 1,
++ INTEL_DP_AUX_BACKLIGHT_FORCE_VESA = 2,
++ INTEL_DP_AUX_BACKLIGHT_FORCE_INTEL = 3,
++};
++
+ /* Intel EDP backlight callbacks */
+ static bool
+ intel_dp_aux_supports_hdr_backlight(struct intel_connector *connector)
+@@ -126,6 +134,24 @@ intel_dp_aux_supports_hdr_backlight(struct intel_connector *connector)
+ return false;
+ }
+
++ /*
++ * If we don't have HDR static metadata there is no way to
++ * runtime detect used range for nits based control. For now
++ * do not use Intel proprietary eDP backlight control if we
++ * don't have this data in panel EDID. In case we find panel
++ * which supports only nits based control, but doesn't provide
++ * HDR static metadata we need to start maintaining table of
++ * ranges for such panels.
++ */
++ if (i915->params.enable_dpcd_backlight != INTEL_DP_AUX_BACKLIGHT_FORCE_INTEL &&
++ !(connector->base.hdr_sink_metadata.hdmi_type1.metadata_type &
++ BIT(HDMI_STATIC_METADATA_TYPE1))) {
++ drm_info(&i915->drm,
++ "Panel is missing HDR static metadata. Possible support for Intel HDR backlight interface is not used. If your backlight controls don't work try booting with i915.enable_dpcd_backlight=%d. needs this, please file a _new_ bug report on drm/i915, see " FDO_BUG_URL " for details.\n",
++ INTEL_DP_AUX_BACKLIGHT_FORCE_INTEL);
++ return false;
++ }
++
+ panel->backlight.edp.intel.sdr_uses_aux =
+ tcon_cap[2] & INTEL_EDP_SDR_TCON_BRIGHTNESS_AUX_CAP;
+
+@@ -413,14 +439,6 @@ static const struct intel_panel_bl_funcs intel_dp_vesa_bl_funcs = {
+ .get = intel_dp_aux_vesa_get_backlight,
+ };
+
+-enum intel_dp_aux_backlight_modparam {
+- INTEL_DP_AUX_BACKLIGHT_AUTO = -1,
+- INTEL_DP_AUX_BACKLIGHT_OFF = 0,
+- INTEL_DP_AUX_BACKLIGHT_ON = 1,
+- INTEL_DP_AUX_BACKLIGHT_FORCE_VESA = 2,
+- INTEL_DP_AUX_BACKLIGHT_FORCE_INTEL = 3,
+-};
+-
+ int intel_dp_aux_init_backlight_funcs(struct intel_connector *connector)
+ {
+ struct drm_device *dev = connector->base.dev;
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 902e4c802a123..4b8fee1be8ae5 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -7578,7 +7578,7 @@ enum {
+ #define _SEL_FETCH_PLANE_BASE_6_A 0x70940
+ #define _SEL_FETCH_PLANE_BASE_7_A 0x70960
+ #define _SEL_FETCH_PLANE_BASE_CUR_A 0x70880
+-#define _SEL_FETCH_PLANE_BASE_1_B 0x70990
++#define _SEL_FETCH_PLANE_BASE_1_B 0x71890
+
+ #define _SEL_FETCH_PLANE_BASE_A(plane) _PICK(plane, \
+ _SEL_FETCH_PLANE_BASE_1_A, \
+diff --git a/drivers/gpu/drm/sun4i/sun4i_frontend.c b/drivers/gpu/drm/sun4i/sun4i_frontend.c
+index 56ae38389db0b..462fae73eae98 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_frontend.c
++++ b/drivers/gpu/drm/sun4i/sun4i_frontend.c
+@@ -222,13 +222,11 @@ void sun4i_frontend_update_buffer(struct sun4i_frontend *frontend,
+
+ /* Set the physical address of the buffer in memory */
+ paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
+- paddr -= PHYS_OFFSET;
+ DRM_DEBUG_DRIVER("Setting buffer #0 address to %pad\n", &paddr);
+ regmap_write(frontend->regs, SUN4I_FRONTEND_BUF_ADDR0_REG, paddr);
+
+ if (fb->format->num_planes > 1) {
+ paddr = drm_fb_cma_get_gem_addr(fb, state, swap ? 2 : 1);
+- paddr -= PHYS_OFFSET;
+ DRM_DEBUG_DRIVER("Setting buffer #1 address to %pad\n", &paddr);
+ regmap_write(frontend->regs, SUN4I_FRONTEND_BUF_ADDR1_REG,
+ paddr);
+@@ -236,7 +234,6 @@ void sun4i_frontend_update_buffer(struct sun4i_frontend *frontend,
+
+ if (fb->format->num_planes > 2) {
+ paddr = drm_fb_cma_get_gem_addr(fb, state, swap ? 1 : 2);
+- paddr -= PHYS_OFFSET;
+ DRM_DEBUG_DRIVER("Setting buffer #2 address to %pad\n", &paddr);
+ regmap_write(frontend->regs, SUN4I_FRONTEND_BUF_ADDR2_REG,
+ paddr);
+diff --git a/drivers/iio/chemical/scd4x.c b/drivers/iio/chemical/scd4x.c
+index 267bc3c053380..746a40365e976 100644
+--- a/drivers/iio/chemical/scd4x.c
++++ b/drivers/iio/chemical/scd4x.c
+@@ -471,12 +471,15 @@ static ssize_t calibration_forced_value_store(struct device *dev,
+ ret = scd4x_write_and_fetch(state, CMD_FRC, arg, &val, sizeof(val));
+ mutex_unlock(&state->lock);
+
++ if (ret)
++ return ret;
++
+ if (val == 0xff) {
+ dev_err(dev, "forced calibration has failed");
+ return -EINVAL;
+ }
+
+- return ret ?: len;
++ return len;
+ }
+
+ static IIO_DEVICE_ATTR_RW(calibration_auto_enable, 0);
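
The scd4x hunk reorders the checks so the transfer status is tested before the fetched value is interpreted; otherwise a failed read could leave val stale and report the wrong error. A sketch of the ordering (write_and_fetch() here is a stand-in, not the driver's helper):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* stand-in for the bus transfer; fills *val on success */
    static int write_and_fetch(uint16_t arg, uint16_t *val)
    {
        *val = arg;
        return 0;
    }

    static long store_forced_cal(uint16_t arg, long len)
    {
        uint16_t val;
        int ret = write_and_fetch(arg, &val);

        if (ret)            /* transport error first ... */
            return ret;
        if (val == 0xff)    /* ... then the device-level status */
            return -EINVAL;
        return len;
    }

    int main(void)
    {
        printf("%ld\n", store_forced_cal(400, 4));
        return 0;
    }
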
+diff --git a/drivers/iio/dac/ad3552r.c b/drivers/iio/dac/ad3552r.c
+index 97f13c0b96312..d5ea1a1be1226 100644
+--- a/drivers/iio/dac/ad3552r.c
++++ b/drivers/iio/dac/ad3552r.c
+@@ -656,7 +656,7 @@ static int ad3552r_reset(struct ad3552r_desc *dac)
+ {
+ struct reg_addr_pool addr;
+ int ret;
+- u16 val;
++ int val;
+
+ dac->gpio_reset = devm_gpiod_get_optional(&dac->spi->dev, "reset",
+ GPIOD_OUT_LOW);
+@@ -809,10 +809,10 @@ static int ad3552r_configure_custom_gain(struct ad3552r_desc *dac,
+
+ gain_child = fwnode_get_named_child_node(child,
+ "custom-output-range-config");
+- if (IS_ERR(gain_child)) {
++ if (!gain_child) {
+ dev_err(dev,
+ "mandatory custom-output-range-config property missing\n");
+- return PTR_ERR(gain_child);
++ return -EINVAL;
+ }
+
+ dac->ch_data[ch].range_override = 1;
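
The ad3552r fix corrects a common API mix-up: fwnode_get_named_child_node() returns NULL when the child is absent, never an ERR_PTR, so IS_ERR()/PTR_ERR() can never see the failure. Checking for NULL and returning an explicit -EINVAL is the right shape; a userspace model of the convention (get_child() is illustrative):

    #include <errno.h>
    #include <stddef.h>

    struct fwnode { const char *name; };

    /* NULL-on-missing convention, like fwnode_get_named_child_node() */
    static struct fwnode *get_child(struct fwnode *parent, const char *name)
    {
        (void)parent; (void)name;
        return NULL;    /* child absent */
    }

    static int configure(struct fwnode *parent)
    {
        struct fwnode *child = get_child(parent, "custom-output-range-config");

        if (!child)     /* not IS_ERR(): this API never returns ERR_PTR */
            return -EINVAL;
        return 0;
    }

    int main(void)
    {
        return configure(NULL) == -EINVAL ? 0 : 1;
    }
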
+diff --git a/drivers/iio/dac/ad5446.c b/drivers/iio/dac/ad5446.c
+index 1c9b54c012a7e..ddf18f13fa4ee 100644
+--- a/drivers/iio/dac/ad5446.c
++++ b/drivers/iio/dac/ad5446.c
+@@ -178,7 +178,7 @@ static int ad5446_read_raw(struct iio_dev *indio_dev,
+
+ switch (m) {
+ case IIO_CHAN_INFO_RAW:
+- *val = st->cached_val;
++ *val = st->cached_val >> chan->scan_type.shift;
+ return IIO_VAL_INT;
+ case IIO_CHAN_INFO_SCALE:
+ *val = st->vref_mv;
+diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c
+index 2fcc59728fd6b..4f58d1ad68d73 100644
+--- a/drivers/iio/dac/ad5592r-base.c
++++ b/drivers/iio/dac/ad5592r-base.c
+@@ -523,7 +523,7 @@ static int ad5592r_alloc_channels(struct iio_dev *iio_dev)
+ if (!ret)
+ st->channel_modes[reg] = tmp;
+
+- fwnode_property_read_u32(child, "adi,off-state", &tmp);
++ ret = fwnode_property_read_u32(child, "adi,off-state", &tmp);
+ if (!ret)
+ st->channel_offstate[reg] = tmp;
+ }
+diff --git a/drivers/iio/filter/Kconfig b/drivers/iio/filter/Kconfig
+index 3ae35817ad827..a85b345ea14ef 100644
+--- a/drivers/iio/filter/Kconfig
++++ b/drivers/iio/filter/Kconfig
+@@ -8,6 +8,7 @@ menu "Filters"
+ config ADMV8818
+ tristate "Analog Devices ADMV8818 High-Pass and Low-Pass Filter"
+ depends on SPI && COMMON_CLK && 64BIT
++ select REGMAP_SPI
+ help
+ Say yes here to build support for Analog Devices ADMV8818
+ 2 GHz to 18 GHz, Digitally Tunable, High-Pass and Low-Pass Filter.
+diff --git a/drivers/iio/imu/bmi160/bmi160_core.c b/drivers/iio/imu/bmi160/bmi160_core.c
+index 824b5124a5f55..01336105792ee 100644
+--- a/drivers/iio/imu/bmi160/bmi160_core.c
++++ b/drivers/iio/imu/bmi160/bmi160_core.c
+@@ -730,7 +730,7 @@ static int bmi160_chip_init(struct bmi160_data *data, bool use_spi)
+
+ ret = regmap_write(data->regmap, BMI160_REG_CMD, BMI160_CMD_SOFTRESET);
+ if (ret)
+- return ret;
++ goto disable_regulator;
+
+ usleep_range(BMI160_SOFTRESET_USLEEP, BMI160_SOFTRESET_USLEEP + 1);
+
+@@ -741,29 +741,37 @@ static int bmi160_chip_init(struct bmi160_data *data, bool use_spi)
+ if (use_spi) {
+ ret = regmap_read(data->regmap, BMI160_REG_DUMMY, &val);
+ if (ret)
+- return ret;
++ goto disable_regulator;
+ }
+
+ ret = regmap_read(data->regmap, BMI160_REG_CHIP_ID, &val);
+ if (ret) {
+ dev_err(dev, "Error reading chip id\n");
+- return ret;
++ goto disable_regulator;
+ }
+ if (val != BMI160_CHIP_ID_VAL) {
+ dev_err(dev, "Wrong chip id, got %x expected %x\n",
+ val, BMI160_CHIP_ID_VAL);
+- return -ENODEV;
++ ret = -ENODEV;
++ goto disable_regulator;
+ }
+
+ ret = bmi160_set_mode(data, BMI160_ACCEL, true);
+ if (ret)
+- return ret;
++ goto disable_regulator;
+
+ ret = bmi160_set_mode(data, BMI160_GYRO, true);
+ if (ret)
+- return ret;
++ goto disable_accel;
+
+ return 0;
++
++disable_accel:
++ bmi160_set_mode(data, BMI160_ACCEL, false);
++
++disable_regulator:
++ regulator_bulk_disable(ARRAY_SIZE(data->supplies), data->supplies);
++ return ret;
+ }
+
+ static int bmi160_data_rdy_trigger_set_state(struct iio_trigger *trig,
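
The bmi160 hunks convert early returns into the usual goto unwind so the regulators, and late in init the accelerometer mode, are switched back off on every failure path. A compact sketch of the idiom with hypothetical step functions:

    #include <stdio.h>

    static int enable_regulators(void) { return 0; }
    static void disable_regulators(void) { puts("regulators off"); }
    static int soft_reset(void) { return -1; }    /* pretend this step fails */

    static int chip_init(void)
    {
        int ret = enable_regulators();

        if (ret)
            return ret;

        ret = soft_reset();
        if (ret)
            goto err_regulators;    /* unwind in reverse order of setup */

        return 0;

    err_regulators:
        disable_regulators();
        return ret;
    }

    int main(void) { return chip_init() ? 1 : 0; }
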
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c
+index 33d9afb1ba914..d4a692b838d0f 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_i2c.c
+@@ -18,12 +18,15 @@ static int inv_icm42600_i2c_bus_setup(struct inv_icm42600_state *st)
+ unsigned int mask, val;
+ int ret;
+
+- /* setup interface registers */
+- ret = regmap_update_bits(st->map, INV_ICM42600_REG_INTF_CONFIG6,
+- INV_ICM42600_INTF_CONFIG6_MASK,
+- INV_ICM42600_INTF_CONFIG6_I3C_EN);
+- if (ret)
+- return ret;
++ /*
++ * setup interface registers
++ * This register write to REG_INTF_CONFIG6 enables a spike filter that
++ * is impacting the line and can prevent the I2C ACK to be seen by the
++ * controller. So we don't test the return value.
++ */
++ regmap_update_bits(st->map, INV_ICM42600_REG_INTF_CONFIG6,
++ INV_ICM42600_INTF_CONFIG6_MASK,
++ INV_ICM42600_INTF_CONFIG6_I3C_EN);
+
+ ret = regmap_update_bits(st->map, INV_ICM42600_REG_INTF_CONFIG4,
+ INV_ICM42600_INTF_CONFIG4_I3C_BUS_ONLY, 0);
+diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
+index 55879a20ae52e..30181bc23b387 100644
+--- a/drivers/iio/magnetometer/ak8975.c
++++ b/drivers/iio/magnetometer/ak8975.c
+@@ -416,6 +416,7 @@ static int ak8975_power_on(const struct ak8975_data *data)
+ if (ret) {
+ dev_warn(&data->client->dev,
+ "Failed to enable specified Vid supply\n");
++ regulator_disable(data->vdd);
+ return ret;
+ }
+
+diff --git a/drivers/input/keyboard/cypress-sf.c b/drivers/input/keyboard/cypress-sf.c
+index c28996028e803..9a23eed6a4f41 100644
+--- a/drivers/input/keyboard/cypress-sf.c
++++ b/drivers/input/keyboard/cypress-sf.c
+@@ -61,6 +61,14 @@ static irqreturn_t cypress_sf_irq_handler(int irq, void *devid)
+ return IRQ_HANDLED;
+ }
+
++static void cypress_sf_disable_regulators(void *arg)
++{
++ struct cypress_sf_data *touchkey = arg;
++
++ regulator_bulk_disable(ARRAY_SIZE(touchkey->regulators),
++ touchkey->regulators);
++}
++
+ static int cypress_sf_probe(struct i2c_client *client)
+ {
+ struct cypress_sf_data *touchkey;
+@@ -121,6 +129,12 @@ static int cypress_sf_probe(struct i2c_client *client)
+ return error;
+ }
+
++ error = devm_add_action_or_reset(&client->dev,
++ cypress_sf_disable_regulators,
++ touchkey);
++ if (error)
++ return error;
++
+ touchkey->input_dev = devm_input_allocate_device(&client->dev);
+ if (!touchkey->input_dev) {
+ dev_err(&client->dev, "Failed to allocate input device\n");
+diff --git a/drivers/interconnect/qcom/sc7180.c b/drivers/interconnect/qcom/sc7180.c
+index 12d59c36df530..5f7c0f85fa8e3 100644
+--- a/drivers/interconnect/qcom/sc7180.c
++++ b/drivers/interconnect/qcom/sc7180.c
+@@ -47,7 +47,6 @@ DEFINE_QNODE(qnm_mnoc_sf, SC7180_MASTER_MNOC_SF_MEM_NOC, 1, 32, SC7180_SLAVE_GEM
+ DEFINE_QNODE(qnm_snoc_gc, SC7180_MASTER_SNOC_GC_MEM_NOC, 1, 8, SC7180_SLAVE_LLCC);
+ DEFINE_QNODE(qnm_snoc_sf, SC7180_MASTER_SNOC_SF_MEM_NOC, 1, 16, SC7180_SLAVE_LLCC);
+ DEFINE_QNODE(qxm_gpu, SC7180_MASTER_GFX3D, 2, 32, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC);
+-DEFINE_QNODE(ipa_core_master, SC7180_MASTER_IPA_CORE, 1, 8, SC7180_SLAVE_IPA_CORE);
+ DEFINE_QNODE(llcc_mc, SC7180_MASTER_LLCC, 2, 4, SC7180_SLAVE_EBI1);
+ DEFINE_QNODE(qhm_mnoc_cfg, SC7180_MASTER_CNOC_MNOC_CFG, 1, 4, SC7180_SLAVE_SERVICE_MNOC);
+ DEFINE_QNODE(qxm_camnoc_hf0, SC7180_MASTER_CAMNOC_HF0, 2, 32, SC7180_SLAVE_MNOC_HF_MEM_NOC);
+@@ -129,7 +128,6 @@ DEFINE_QNODE(qhs_mdsp_ms_mpu_cfg, SC7180_SLAVE_MSS_PROC_MS_MPU_CFG, 1, 4);
+ DEFINE_QNODE(qns_gem_noc_snoc, SC7180_SLAVE_GEM_NOC_SNOC, 1, 8, SC7180_MASTER_GEM_NOC_SNOC);
+ DEFINE_QNODE(qns_llcc, SC7180_SLAVE_LLCC, 1, 16, SC7180_MASTER_LLCC);
+ DEFINE_QNODE(srvc_gemnoc, SC7180_SLAVE_SERVICE_GEM_NOC, 1, 4);
+-DEFINE_QNODE(ipa_core_slave, SC7180_SLAVE_IPA_CORE, 1, 8);
+ DEFINE_QNODE(ebi, SC7180_SLAVE_EBI1, 2, 4);
+ DEFINE_QNODE(qns_mem_noc_hf, SC7180_SLAVE_MNOC_HF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_HF_MEM_NOC);
+ DEFINE_QNODE(qns_mem_noc_sf, SC7180_SLAVE_MNOC_SF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_SF_MEM_NOC);
+@@ -160,7 +158,6 @@ DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi);
+ DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc);
+ DEFINE_QBCM(bcm_mm0, "MM0", false, &qns_mem_noc_hf);
+ DEFINE_QBCM(bcm_ce0, "CE0", false, &qxm_crypto);
+-DEFINE_QBCM(bcm_ip0, "IP0", false, &ipa_core_slave);
+ DEFINE_QBCM(bcm_cn0, "CN0", true, &qnm_snoc, &xm_qdss_dap, &qhs_a1_noc_cfg, &qhs_a2_noc_cfg, &qhs_ahb2phy0, &qhs_aop, &qhs_aoss, &qhs_boot_rom, &qhs_camera_cfg, &qhs_camera_nrt_throttle_cfg, &qhs_camera_rt_throttle_cfg, &qhs_clk_ctl, &qhs_cpr_cx, &qhs_cpr_mx, &qhs_crypto0_cfg, &qhs_dcc_cfg, &qhs_ddrss_cfg, &qhs_display_cfg, &qhs_display_rt_throttle_cfg, &qhs_display_throttle_cfg, &qhs_glm, &qhs_gpuss_cfg, &qhs_imem_cfg, &qhs_ipa, &qhs_mnoc_cfg, &qhs_mss_cfg, &qhs_npu_cfg, &qhs_npu_dma_throttle_cfg, &qhs_npu_dsp_throttle_cfg, &qhs_pimem_cfg, &qhs_prng, &qhs_qdss_cfg, &qhs_qm_cfg, &qhs_qm_mpu_cfg, &qhs_qup0, &qhs_qup1, &qhs_security, &qhs_snoc_cfg, &qhs_tcsr, &qhs_tlmm_1, &qhs_tlmm_2, &qhs_tlmm_3, &qhs_ufs_mem_cfg, &qhs_usb3, &qhs_venus_cfg, &qhs_venus_throttle_cfg, &qhs_vsense_ctrl_cfg, &srvc_cnoc);
+ DEFINE_QBCM(bcm_mm1, "MM1", false, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qhm_mnoc_cfg, &qxm_mdp0, &qxm_rot, &qxm_venus0, &qxm_venus_arm9);
+ DEFINE_QBCM(bcm_sh2, "SH2", false, &acm_sys_tcu);
+@@ -372,22 +369,6 @@ static struct qcom_icc_desc sc7180_gem_noc = {
+ .num_bcms = ARRAY_SIZE(gem_noc_bcms),
+ };
+
+-static struct qcom_icc_bcm *ipa_virt_bcms[] = {
+- &bcm_ip0,
+-};
+-
+-static struct qcom_icc_node *ipa_virt_nodes[] = {
+- [MASTER_IPA_CORE] = &ipa_core_master,
+- [SLAVE_IPA_CORE] = &ipa_core_slave,
+-};
+-
+-static struct qcom_icc_desc sc7180_ipa_virt = {
+- .nodes = ipa_virt_nodes,
+- .num_nodes = ARRAY_SIZE(ipa_virt_nodes),
+- .bcms = ipa_virt_bcms,
+- .num_bcms = ARRAY_SIZE(ipa_virt_bcms),
+-};
+-
+ static struct qcom_icc_bcm *mc_virt_bcms[] = {
+ &bcm_acv,
+ &bcm_mc0,
+@@ -519,8 +500,6 @@ static const struct of_device_id qnoc_of_match[] = {
+ .data = &sc7180_dc_noc},
+ { .compatible = "qcom,sc7180-gem-noc",
+ .data = &sc7180_gem_noc},
+- { .compatible = "qcom,sc7180-ipa-virt",
+- .data = &sc7180_ipa_virt},
+ { .compatible = "qcom,sc7180-mc-virt",
+ .data = &sc7180_mc_virt},
+ { .compatible = "qcom,sc7180-mmss-noc",
+diff --git a/drivers/interconnect/qcom/sdx55.c b/drivers/interconnect/qcom/sdx55.c
+index 03d604f84cc57..e3ac25a997b71 100644
+--- a/drivers/interconnect/qcom/sdx55.c
++++ b/drivers/interconnect/qcom/sdx55.c
+@@ -18,7 +18,6 @@
+ #include "icc-rpmh.h"
+ #include "sdx55.h"
+
+-DEFINE_QNODE(ipa_core_master, SDX55_MASTER_IPA_CORE, 1, 8, SDX55_SLAVE_IPA_CORE);
+ DEFINE_QNODE(llcc_mc, SDX55_MASTER_LLCC, 4, 4, SDX55_SLAVE_EBI_CH0);
+ DEFINE_QNODE(acm_tcu, SDX55_MASTER_TCU_0, 1, 8, SDX55_SLAVE_LLCC, SDX55_SLAVE_MEM_NOC_SNOC, SDX55_SLAVE_MEM_NOC_PCIE_SNOC);
+ DEFINE_QNODE(qnm_snoc_gc, SDX55_MASTER_SNOC_GC_MEM_NOC, 1, 8, SDX55_SLAVE_LLCC);
+@@ -40,7 +39,6 @@ DEFINE_QNODE(xm_pcie, SDX55_MASTER_PCIE, 1, 8, SDX55_SLAVE_ANOC_SNOC);
+ DEFINE_QNODE(xm_qdss_etr, SDX55_MASTER_QDSS_ETR, 1, 8, SDX55_SLAVE_SNOC_CFG, SDX55_SLAVE_EMAC_CFG, SDX55_SLAVE_USB3, SDX55_SLAVE_AOSS, SDX55_SLAVE_SPMI_FETCHER, SDX55_SLAVE_QDSS_CFG, SDX55_SLAVE_PDM, SDX55_SLAVE_SNOC_MEM_NOC_GC, SDX55_SLAVE_TCSR, SDX55_SLAVE_CNOC_DDRSS, SDX55_SLAVE_SPMI_VGI_COEX, SDX55_SLAVE_QPIC, SDX55_SLAVE_OCIMEM, SDX55_SLAVE_IPA_CFG, SDX55_SLAVE_USB3_PHY_CFG, SDX55_SLAVE_AOP, SDX55_SLAVE_BLSP_1, SDX55_SLAVE_SDCC_1, SDX55_SLAVE_CNOC_MSS, SDX55_SLAVE_PCIE_PARF, SDX55_SLAVE_ECC_CFG, SDX55_SLAVE_AUDIO, SDX55_SLAVE_AOSS, SDX55_SLAVE_PRNG, SDX55_SLAVE_CRYPTO_0_CFG, SDX55_SLAVE_TCU, SDX55_SLAVE_CLK_CTL, SDX55_SLAVE_IMEM_CFG);
+ DEFINE_QNODE(xm_sdc1, SDX55_MASTER_SDCC_1, 1, 8, SDX55_SLAVE_AOSS, SDX55_SLAVE_IPA_CFG, SDX55_SLAVE_ANOC_SNOC, SDX55_SLAVE_AOP, SDX55_SLAVE_AUDIO);
+ DEFINE_QNODE(xm_usb3, SDX55_MASTER_USB3, 1, 8, SDX55_SLAVE_ANOC_SNOC);
+-DEFINE_QNODE(ipa_core_slave, SDX55_SLAVE_IPA_CORE, 1, 8);
+ DEFINE_QNODE(ebi, SDX55_SLAVE_EBI_CH0, 1, 4);
+ DEFINE_QNODE(qns_llcc, SDX55_SLAVE_LLCC, 1, 16, SDX55_SLAVE_EBI_CH0);
+ DEFINE_QNODE(qns_memnoc_snoc, SDX55_SLAVE_MEM_NOC_SNOC, 1, 8, SDX55_MASTER_MEM_NOC_SNOC);
+@@ -82,7 +80,6 @@ DEFINE_QNODE(xs_sys_tcu_cfg, SDX55_SLAVE_TCU, 1, 8);
+ DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi);
+ DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc);
+ DEFINE_QBCM(bcm_ce0, "CE0", false, &qxm_crypto);
+-DEFINE_QBCM(bcm_ip0, "IP0", false, &ipa_core_slave);
+ DEFINE_QBCM(bcm_pn0, "PN0", false, &qhm_snoc_cfg);
+ DEFINE_QBCM(bcm_sh3, "SH3", false, &xm_apps_rdwr);
+ DEFINE_QBCM(bcm_sh4, "SH4", false, &qns_memnoc_snoc, &qns_sys_pcie);
+@@ -219,22 +216,6 @@ static const struct qcom_icc_desc sdx55_system_noc = {
+ .num_bcms = ARRAY_SIZE(system_noc_bcms),
+ };
+
+-static struct qcom_icc_bcm *ipa_virt_bcms[] = {
+- &bcm_ip0,
+-};
+-
+-static struct qcom_icc_node *ipa_virt_nodes[] = {
+- [MASTER_IPA_CORE] = &ipa_core_master,
+- [SLAVE_IPA_CORE] = &ipa_core_slave,
+-};
+-
+-static const struct qcom_icc_desc sdx55_ipa_virt = {
+- .nodes = ipa_virt_nodes,
+- .num_nodes = ARRAY_SIZE(ipa_virt_nodes),
+- .bcms = ipa_virt_bcms,
+- .num_bcms = ARRAY_SIZE(ipa_virt_bcms),
+-};
+-
+ static const struct of_device_id qnoc_of_match[] = {
+ { .compatible = "qcom,sdx55-mc-virt",
+ .data = &sdx55_mc_virt},
+@@ -242,8 +223,6 @@ static const struct of_device_id qnoc_of_match[] = {
+ .data = &sdx55_mem_noc},
+ { .compatible = "qcom,sdx55-system-noc",
+ .data = &sdx55_system_noc},
+- { .compatible = "qcom,sdx55-ipa-virt",
+- .data = &sdx55_ipa_virt},
+ { }
+ };
+ MODULE_DEVICE_TABLE(of, qnoc_of_match);
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index 2e545f473cc68..019a0822bde0e 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -164,25 +164,39 @@ static const struct regmap_access_table rpcif_volatile_table = {
+
+
+ /*
+- * Custom accessor functions to ensure SMRDR0 and SMWDR0 are always accessed
+- * with proper width. Requires SMENR_SPIDE to be correctly set before!
++ * Custom accessor functions to ensure SM[RW]DR[01] are always accessed with
++ * proper width. Requires rpcif.xfer_size to be correctly set before!
+ */
+ static int rpcif_reg_read(void *context, unsigned int reg, unsigned int *val)
+ {
+ struct rpcif *rpc = context;
+
+- if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
+- u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
+-
+- if (spide == 0x8) {
++ switch (reg) {
++ case RPCIF_SMRDR0:
++ case RPCIF_SMWDR0:
++ switch (rpc->xfer_size) {
++ case 1:
+ *val = readb(rpc->base + reg);
+ return 0;
+- } else if (spide == 0xC) {
++
++ case 2:
+ *val = readw(rpc->base + reg);
+ return 0;
+- } else if (spide != 0xF) {
++
++ case 4:
++ case 8:
++ *val = readl(rpc->base + reg);
++ return 0;
++
++ default:
+ return -EILSEQ;
+ }
++
++ case RPCIF_SMRDR1:
++ case RPCIF_SMWDR1:
++ if (rpc->xfer_size != 8)
++ return -EILSEQ;
++ break;
+ }
+
+ *val = readl(rpc->base + reg);
+@@ -193,18 +207,34 @@ static int rpcif_reg_write(void *context, unsigned int reg, unsigned int val)
+ {
+ struct rpcif *rpc = context;
+
+- if (reg == RPCIF_SMRDR0 || reg == RPCIF_SMWDR0) {
+- u32 spide = readl(rpc->base + RPCIF_SMENR) & RPCIF_SMENR_SPIDE(0xF);
+-
+- if (spide == 0x8) {
++ switch (reg) {
++ case RPCIF_SMWDR0:
++ switch (rpc->xfer_size) {
++ case 1:
+ writeb(val, rpc->base + reg);
+ return 0;
+- } else if (spide == 0xC) {
++
++ case 2:
+ writew(val, rpc->base + reg);
+ return 0;
+- } else if (spide != 0xF) {
++
++ case 4:
++ case 8:
++ writel(val, rpc->base + reg);
++ return 0;
++
++ default:
+ return -EILSEQ;
+ }
++
++ case RPCIF_SMWDR1:
++ if (rpc->xfer_size != 8)
++ return -EILSEQ;
++ break;
++
++ case RPCIF_SMRDR0:
++ case RPCIF_SMRDR1:
++ return -EPERM;
+ }
+
+ writel(val, rpc->base + reg);
+@@ -469,6 +499,7 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+
+ smenr |= RPCIF_SMENR_SPIDE(rpcif_bits_set(rpc, nbytes));
+ regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
++ rpc->xfer_size = nbytes;
+
+ memcpy(data, rpc->buffer + pos, nbytes);
+ if (nbytes == 8) {
+@@ -533,6 +564,7 @@ int rpcif_manual_xfer(struct rpcif *rpc)
+ regmap_write(rpc->regmap, RPCIF_SMENR, smenr);
+ regmap_write(rpc->regmap, RPCIF_SMCR,
+ rpc->smcr | RPCIF_SMCR_SPIE);
++ rpc->xfer_size = nbytes;
+ ret = wait_msg_xfer_end(rpc);
+ if (ret)
+ goto err_out;
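
The rpc-if rework stops inferring the access width from the SPIDE bits in SMENR and instead records the transfer size on the rpcif object when the transfer is programmed, then dispatches the MMIO accessor width from that. A userspace model of the dispatch (read_data_reg() is illustrative; -EILSEQ mirrors the driver's "width never set" error):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* dispatch on the recorded transfer size instead of re-reading SPIDE */
    static int read_data_reg(const void *base, unsigned int xfer_size, uint32_t *val)
    {
        switch (xfer_size) {
        case 1: *val = *(const uint8_t *)base;  return 0;
        case 2: *val = *(const uint16_t *)base; return 0;
        case 4:
        case 8: *val = *(const uint32_t *)base; return 0;
        default: return -EILSEQ;    /* width was never programmed */
        }
    }

    int main(void)
    {
        uint32_t word = 0x11223344, val = 0;

        if (!read_data_reg(&word, 2, &val))
            printf("0x%x\n", val);
        return 0;
    }
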
+diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c
+index bee727ed98db5..d22044edcc9fb 100644
+--- a/drivers/misc/eeprom/at25.c
++++ b/drivers/misc/eeprom/at25.c
+@@ -31,6 +31,8 @@
+ */
+
+ #define FM25_SN_LEN 8 /* serial number length */
++#define EE_MAXADDRLEN 3 /* 24 bit addresses, up to 2 MBytes */
++
+ struct at25_data {
+ struct spi_eeprom chip;
+ struct spi_device *spi;
+@@ -39,6 +41,7 @@ struct at25_data {
+ struct nvmem_config nvmem_config;
+ struct nvmem_device *nvmem;
+ u8 sernum[FM25_SN_LEN];
++ u8 command[EE_MAXADDRLEN + 1];
+ };
+
+ #define AT25_WREN 0x06 /* latch the write enable */
+@@ -61,8 +64,6 @@ struct at25_data {
+
+ #define FM25_ID_LEN 9 /* ID length */
+
+-#define EE_MAXADDRLEN 3 /* 24 bit addresses, up to 2 MBytes */
+-
+ /*
+ * Specs often allow 5ms for a page write, sometimes 20ms;
+ * it's important to recover from write timeouts.
+@@ -78,7 +79,6 @@ static int at25_ee_read(void *priv, unsigned int offset,
+ {
+ struct at25_data *at25 = priv;
+ char *buf = val;
+- u8 command[EE_MAXADDRLEN + 1];
+ u8 *cp;
+ ssize_t status;
+ struct spi_transfer t[2];
+@@ -92,12 +92,15 @@ static int at25_ee_read(void *priv, unsigned int offset,
+ if (unlikely(!count))
+ return -EINVAL;
+
+- cp = command;
++ cp = at25->command;
+
+ instr = AT25_READ;
+ if (at25->chip.flags & EE_INSTR_BIT3_IS_ADDR)
+ if (offset >= BIT(at25->addrlen * 8))
+ instr |= AT25_INSTR_BIT3;
++
++ mutex_lock(&at25->lock);
++
+ *cp++ = instr;
+
+ /* 8/16/24-bit address is written MSB first */
+@@ -116,7 +119,7 @@ static int at25_ee_read(void *priv, unsigned int offset,
+ spi_message_init(&m);
+ memset(t, 0, sizeof(t));
+
+- t[0].tx_buf = command;
++ t[0].tx_buf = at25->command;
+ t[0].len = at25->addrlen + 1;
+ spi_message_add_tail(&t[0], &m);
+
+@@ -124,8 +127,6 @@ static int at25_ee_read(void *priv, unsigned int offset,
+ t[1].len = count;
+ spi_message_add_tail(&t[1], &m);
+
+- mutex_lock(&at25->lock);
+-
+ /*
+ * Read it all at once.
+ *
+@@ -152,7 +153,7 @@ static int fm25_aux_read(struct at25_data *at25, u8 *buf, uint8_t command,
+ spi_message_init(&m);
+ memset(t, 0, sizeof(t));
+
+- t[0].tx_buf = &command;
++ t[0].tx_buf = at25->command;
+ t[0].len = 1;
+ spi_message_add_tail(&t[0], &m);
+
+@@ -162,6 +163,8 @@ static int fm25_aux_read(struct at25_data *at25, u8 *buf, uint8_t command,
+
+ mutex_lock(&at25->lock);
+
++ at25->command[0] = command;
++
+ status = spi_sync(at25->spi, &m);
+ dev_dbg(&at25->spi->dev, "read %d aux bytes --> %d\n", len, status);
+
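
The at25 hunks move the command scratch buffer off the stack into at25_data and fill it under at25->lock: SPI transfers may DMA from the tx buffer, which must not live on the stack, and a buffer shared between readers needs the mutex extended over its construction. A sketch of the shape (struct and helper are illustrative; build with -pthread):

    #include <pthread.h>
    #include <stdint.h>

    #define EE_MAXADDRLEN 3    /* 24-bit addresses */

    struct at25_data {
        pthread_mutex_t lock;
        uint8_t command[EE_MAXADDRLEN + 1];    /* DMA-safe, not on the stack */
    };

    static void build_read_cmd(struct at25_data *at25, uint8_t instr, uint32_t offset)
    {
        pthread_mutex_lock(&at25->lock);    /* serialize buffer users */
        at25->command[0] = instr;
        at25->command[1] = (offset >> 16) & 0xff;    /* MSB first */
        at25->command[2] = (offset >> 8) & 0xff;
        at25->command[3] = offset & 0xff;
        /* ... issue the transfer, then unlock ... */
        pthread_mutex_unlock(&at25->lock);
    }

    int main(void)
    {
        struct at25_data at25 = { .lock = PTHREAD_MUTEX_INITIALIZER };

        build_read_cmd(&at25, 0x03, 0x123456);
        return 0;
    }
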
+diff --git a/drivers/mtd/nand/raw/mtk_ecc.c b/drivers/mtd/nand/raw/mtk_ecc.c
+index 1b47964cb6da2..4f6adb657c89c 100644
+--- a/drivers/mtd/nand/raw/mtk_ecc.c
++++ b/drivers/mtd/nand/raw/mtk_ecc.c
+@@ -43,6 +43,7 @@
+
+ struct mtk_ecc_caps {
+ u32 err_mask;
++ u32 err_shift;
+ const u8 *ecc_strength;
+ const u32 *ecc_regs;
+ u8 num_ecc_strength;
+@@ -76,7 +77,7 @@ static const u8 ecc_strength_mt2712[] = {
+ };
+
+ static const u8 ecc_strength_mt7622[] = {
+- 4, 6, 8, 10, 12, 14, 16
++ 4, 6, 8, 10, 12
+ };
+
+ enum mtk_ecc_regs {
+@@ -221,7 +222,7 @@ void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
+ for (i = 0; i < sectors; i++) {
+ offset = (i >> 2) << 2;
+ err = readl(ecc->regs + ECC_DECENUM0 + offset);
+- err = err >> ((i % 4) * 8);
++ err = err >> ((i % 4) * ecc->caps->err_shift);
+ err &= ecc->caps->err_mask;
+ if (err == ecc->caps->err_mask) {
+ /* uncorrectable errors */
+@@ -449,6 +450,7 @@ EXPORT_SYMBOL(mtk_ecc_get_parity_bits);
+
+ static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
+ .err_mask = 0x3f,
++ .err_shift = 8,
+ .ecc_strength = ecc_strength_mt2701,
+ .ecc_regs = mt2701_ecc_regs,
+ .num_ecc_strength = 20,
+@@ -459,6 +461,7 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
+
+ static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
+ .err_mask = 0x7f,
++ .err_shift = 8,
+ .ecc_strength = ecc_strength_mt2712,
+ .ecc_regs = mt2712_ecc_regs,
+ .num_ecc_strength = 23,
+@@ -468,10 +471,11 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
+ };
+
+ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = {
+- .err_mask = 0x3f,
++ .err_mask = 0x1f,
++ .err_shift = 5,
+ .ecc_strength = ecc_strength_mt7622,
+ .ecc_regs = mt7622_ecc_regs,
+- .num_ecc_strength = 7,
++ .num_ecc_strength = 5,
+ .ecc_mode_shift = 4,
+ .parity_bits = 13,
+ .pg_irq_sel = 0,
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index 1a77542c6d67c..048b255faa769 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -2651,10 +2651,23 @@ static int qcom_nand_attach_chip(struct nand_chip *chip)
+ ecc->engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST;
+
+ mtd_set_ooblayout(mtd, &qcom_nand_ooblayout_ops);
++ /* Free the initially allocated BAM transaction for reading the ONFI params */
++ if (nandc->props->is_bam)
++ free_bam_transaction(nandc);
+
+ nandc->max_cwperpage = max_t(unsigned int, nandc->max_cwperpage,
+ cwperpage);
+
++ /* Now allocate the BAM transaction based on updated max_cwperpage */
++ if (nandc->props->is_bam) {
++ nandc->bam_txn = alloc_bam_transaction(nandc);
++ if (!nandc->bam_txn) {
++ dev_err(nandc->dev,
++ "failed to allocate bam transaction\n");
++ return -ENOMEM;
++ }
++ }
++
+ /*
+ * DATA_UD_BYTES varies based on whether the read/write command protects
+ * spare data with ECC too. We protect spare data by default, so we set
+@@ -2955,17 +2968,6 @@ static int qcom_nand_host_init_and_register(struct qcom_nand_controller *nandc,
+ if (ret)
+ return ret;
+
+- if (nandc->props->is_bam) {
+- free_bam_transaction(nandc);
+- nandc->bam_txn = alloc_bam_transaction(nandc);
+- if (!nandc->bam_txn) {
+- dev_err(nandc->dev,
+- "failed to allocate bam transaction\n");
+- nand_cleanup(chip);
+- return -ENOMEM;
+- }
+- }
+-
+ ret = mtd_device_parse_register(mtd, probes, NULL, NULL, 0);
+ if (ret)
+ nand_cleanup(chip);
+diff --git a/drivers/mtd/nand/raw/sh_flctl.c b/drivers/mtd/nand/raw/sh_flctl.c
+index 13df4bdf792af..8f89e2d3d817f 100644
+--- a/drivers/mtd/nand/raw/sh_flctl.c
++++ b/drivers/mtd/nand/raw/sh_flctl.c
+@@ -384,7 +384,8 @@ static int flctl_dma_fifo0_transfer(struct sh_flctl *flctl, unsigned long *buf,
+ dma_addr_t dma_addr;
+ dma_cookie_t cookie;
+ uint32_t reg;
+- int ret;
++ int ret = 0;
++ unsigned long time_left;
+
+ if (dir == DMA_FROM_DEVICE) {
+ chan = flctl->chan_fifo0_rx;
+@@ -425,13 +426,14 @@ static int flctl_dma_fifo0_transfer(struct sh_flctl *flctl, unsigned long *buf,
+ goto out;
+ }
+
+- ret =
++ time_left =
+ wait_for_completion_timeout(&flctl->dma_complete,
+ msecs_to_jiffies(3000));
+
+- if (ret <= 0) {
++ if (time_left == 0) {
+ dmaengine_terminate_all(chan);
+ dev_err(&flctl->pdev->dev, "wait_for_completion_timeout\n");
++ ret = -ETIMEDOUT;
+ }
+
+ out:
+@@ -441,7 +443,7 @@ out:
+
+ dma_unmap_single(chan->device->dev, dma_addr, len, dir);
+
+- /* ret > 0 is success */
++ /* ret == 0 is success */
+ return ret;
+ }
+
+@@ -465,7 +467,7 @@ static void read_fiforeg(struct sh_flctl *flctl, int rlen, int offset)
+
+ /* initiate DMA transfer */
+ if (flctl->chan_fifo0_rx && rlen >= 32 &&
+- flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_FROM_DEVICE) > 0)
++ !flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_FROM_DEVICE))
+ goto convert; /* DMA success */
+
+ /* do polling transfer */
+@@ -524,7 +526,7 @@ static void write_ec_fiforeg(struct sh_flctl *flctl, int rlen,
+
+ /* initiate DMA transfer */
+ if (flctl->chan_fifo0_tx && rlen >= 32 &&
+- flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_TO_DEVICE) > 0)
++ !flctl_dma_fifo0_transfer(flctl, buf, rlen, DMA_TO_DEVICE))
+ return; /* DMA success */
+
+ /* do polling transfer */
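
The sh_flctl change untangles two meanings that shared one variable: wait_for_completion_timeout() returns an unsigned long, 0 on timeout and the remaining jiffies otherwise, so it belongs in its own time_left, and the function's status collapses to plain 0 / -ETIMEDOUT. A minimal model:

    #include <errno.h>
    #include <stdio.h>

    /* stand-in: returns time left, 0 on timeout */
    static unsigned long wait_timeout(unsigned long budget)
    {
        return budget ? budget - 1 : 0;
    }

    static int do_transfer(unsigned long budget)
    {
        int ret = 0;
        unsigned long time_left = wait_timeout(budget);

        if (time_left == 0)
            ret = -ETIMEDOUT;    /* 0 means the wait expired */
        return ret;              /* 0 is success */
    }

    int main(void)
    {
        printf("%d %d\n", do_transfer(100), do_transfer(0));
        return 0;
    }
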
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index aebeb46e6fa6f..c9107a8b4b906 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3819,14 +3819,19 @@ static bool bond_flow_dissect(struct bonding *bond, struct sk_buff *skb, const v
+ return true;
+ }
+
+-static u32 bond_ip_hash(u32 hash, struct flow_keys *flow)
++static u32 bond_ip_hash(u32 hash, struct flow_keys *flow, int xmit_policy)
+ {
+ hash ^= (__force u32)flow_get_u32_dst(flow) ^
+ (__force u32)flow_get_u32_src(flow);
+ hash ^= (hash >> 16);
+ hash ^= (hash >> 8);
++
+ /* discard lowest hash bit to deal with the common even ports pattern */
+- return hash >> 1;
++ if (xmit_policy == BOND_XMIT_POLICY_LAYER34 ||
++ xmit_policy == BOND_XMIT_POLICY_ENCAP34)
++ return hash >> 1;
++
++ return hash;
+ }
+
+ /* Generate hash based on xmit policy. If @skb is given it is used to linearize
+@@ -3856,7 +3861,7 @@ static u32 __bond_xmit_hash(struct bonding *bond, struct sk_buff *skb, const voi
+ memcpy(&hash, &flow.ports.ports, sizeof(hash));
+ }
+
+- return bond_ip_hash(hash, &flow);
++ return bond_ip_hash(hash, &flow, bond->params.xmit_policy);
+ }
+
+ /**
+@@ -5051,7 +5056,7 @@ static u32 bond_sk_hash_l34(struct sock *sk)
+ /* L4 */
+ memcpy(&hash, &flow.ports.ports, sizeof(hash));
+ /* L3 */
+- return bond_ip_hash(hash, &flow);
++ return bond_ip_hash(hash, &flow, BOND_XMIT_POLICY_LAYER34);
+ }
+
+ static struct net_device *__bond_sk_get_lower_dev(struct bonding *bond,
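
The bonding hunk makes the discard-lowest-bit trick conditional: dropping bit 0 counters the even/odd port pattern that biases layer-4 hashes, but for L2/L3 policies it only throws away entropy, so the shift now applies to the 3+4 policies alone. A self-contained model of the final hash step:

    #include <stdint.h>
    #include <stdio.h>

    enum xmit_policy { POLICY_L23, POLICY_L34 };

    static uint32_t ip_hash(uint32_t hash, enum xmit_policy policy)
    {
        hash ^= hash >> 16;
        hash ^= hash >> 8;
        /* the even/odd port pattern only biases L4-based hashes */
        if (policy == POLICY_L34)
            return hash >> 1;
        return hash;
    }

    int main(void)
    {
        printf("%u %u\n", ip_hash(0x12345678u, POLICY_L23),
               ip_hash(0x12345678u, POLICY_L34));
        return 0;
    }
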
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index 8a7a8093a1569..8acec33a47027 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1637,9 +1637,6 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
+ break;
+ case PHY_INTERFACE_MODE_RMII:
+ miicfg |= GSWIP_MII_CFG_MODE_RMIIM;
+-
+- /* Configure the RMII clock as output: */
+- miicfg |= GSWIP_MII_CFG_RMII_CLK;
+ break;
+ case PHY_INTERFACE_MODE_RGMII:
+ case PHY_INTERFACE_MODE_RGMII_ID:
+diff --git a/drivers/net/dsa/mv88e6xxx/port_hidden.c b/drivers/net/dsa/mv88e6xxx/port_hidden.c
+index b49d05f0e1179..7a9f9ff6dedf3 100644
+--- a/drivers/net/dsa/mv88e6xxx/port_hidden.c
++++ b/drivers/net/dsa/mv88e6xxx/port_hidden.c
+@@ -40,8 +40,9 @@ int mv88e6xxx_port_hidden_wait(struct mv88e6xxx_chip *chip)
+ {
+ int bit = __bf_shf(MV88E6XXX_PORT_RESERVED_1A_BUSY);
+
+- return mv88e6xxx_wait_bit(chip, MV88E6XXX_PORT_RESERVED_1A_CTRL_PORT,
+- MV88E6XXX_PORT_RESERVED_1A, bit, 0);
++ return mv88e6xxx_port_wait_bit(chip,
++ MV88E6XXX_PORT_RESERVED_1A_CTRL_PORT,
++ MV88E6XXX_PORT_RESERVED_1A, bit, 0);
+ }
+
+ int mv88e6xxx_port_hidden_read(struct mv88e6xxx_chip *chip, int block, int port,
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+index c19b072f3a237..962253db25b82 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+@@ -14153,10 +14153,6 @@ static int bnx2x_eeh_nic_unload(struct bnx2x *bp)
+
+ /* Stop Tx */
+ bnx2x_tx_disable(bp);
+- /* Delete all NAPI objects */
+- bnx2x_del_all_napi(bp);
+- if (CNIC_LOADED(bp))
+- bnx2x_del_all_napi_cnic(bp);
+ netdev_reset_tc(bp->dev);
+
+ del_timer_sync(&bp->timer);
+@@ -14261,6 +14257,11 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev)
+ bnx2x_drain_tx_queues(bp);
+ bnx2x_send_unload_req(bp, UNLOAD_RECOVERY);
+ bnx2x_netif_stop(bp, 1);
++ bnx2x_del_all_napi(bp);
++
++ if (CNIC_LOADED(bp))
++ bnx2x_del_all_napi_cnic(bp);
++
+ bnx2x_free_irq(bp);
+
+ /* Report UNLOAD_DONE to MCP */
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 2da804f84b480..c2bfb25e087c1 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -2035,6 +2035,11 @@ static struct sk_buff *bcmgenet_add_tsb(struct net_device *dev,
+ return skb;
+ }
+
++static void bcmgenet_hide_tsb(struct sk_buff *skb)
++{
++ __skb_pull(skb, sizeof(struct status_64));
++}
++
+ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+@@ -2141,6 +2146,8 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
+ }
+
+ GENET_CB(skb)->last_cb = tx_cb_ptr;
++
++ bcmgenet_hide_tsb(skb);
+ skb_tx_timestamp(skb);
+
+ /* Decrement total BD count and advance our write pointer */
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index d3d7172e0fcc5..3b148d4e0f0cf 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -308,10 +308,6 @@ int enetc_setup_tc_txtime(struct net_device *ndev, void *type_data)
+ if (tc < 0 || tc >= priv->num_tx_rings)
+ return -EINVAL;
+
+- /* Do not support TXSTART and TX CSUM offload simutaniously */
+- if (ndev->features & NETIF_F_CSUM_MASK)
+- return -EBUSY;
+-
+ /* TSD and Qbv are mutually exclusive in hardware */
+ if (enetc_rd(&priv->si->hw, ENETC_QBV_PTGCR_OFFSET) & ENETC_QBV_TGE)
+ return -EBUSY;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 796133de527e4..870919c665185 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3731,7 +3731,7 @@ static int fec_enet_init_stop_mode(struct fec_enet_private *fep,
+ ARRAY_SIZE(out_val));
+ if (ret) {
+ dev_dbg(&fep->pdev->dev, "no stop mode property\n");
+- return ret;
++ goto out;
+ }
+
+ fep->stop_gpr.gpr = syscon_node_to_regmap(gpr_np);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c
+index 0c60f41fca8a6..f3c9395d8351c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_tqp_stats.c
+@@ -75,7 +75,7 @@ int hclge_comm_tqps_update_stats(struct hnae3_handle *handle,
+ ret = hclge_comm_cmd_send(hw, &desc, 1);
+ if (ret) {
+ dev_err(&hw->cmq.csq.pdev->dev,
+- "failed to get tqp stat, ret = %d, tx = %u.\n",
++ "failed to get tqp stat, ret = %d, rx = %u.\n",
+ ret, i);
+ return ret;
+ }
+@@ -89,7 +89,7 @@ int hclge_comm_tqps_update_stats(struct hnae3_handle *handle,
+ ret = hclge_comm_cmd_send(hw, &desc, 1);
+ if (ret) {
+ dev_err(&hw->cmq.csq.pdev->dev,
+- "failed to get tqp stat, ret = %d, rx = %u.\n",
++ "failed to get tqp stat, ret = %d, tx = %u.\n",
+ ret, i);
+ return ret;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index f6082be7481c1..f33b4c351a703 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -5132,6 +5132,13 @@ static void hns3_state_init(struct hnae3_handle *handle)
+ set_bit(HNS3_NIC_STATE_RXD_ADV_LAYOUT_ENABLE, &priv->state);
+ }
+
++static void hns3_state_uninit(struct hnae3_handle *handle)
++{
++ struct hns3_nic_priv *priv = handle->priv;
++
++ clear_bit(HNS3_NIC_STATE_INITED, &priv->state);
++}
++
+ static int hns3_client_init(struct hnae3_handle *handle)
+ {
+ struct pci_dev *pdev = handle->pdev;
+@@ -5249,7 +5256,9 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ return ret;
+
+ out_reg_netdev_fail:
++ hns3_state_uninit(handle);
+ hns3_dbg_uninit(handle);
++ hns3_client_stop(handle);
+ out_client_start:
+ hns3_free_rx_cpu_rmap(netdev);
+ hns3_nic_uninit_irq(priv);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+index 6799d16de34b9..7998ca617a92e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+@@ -94,6 +94,13 @@ static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len,
+ enum hclge_comm_cmd_status status;
+ struct hclge_desc desc;
+
++ if (msg_len > HCLGE_MBX_MAX_MSG_SIZE) {
++ dev_err(&hdev->pdev->dev,
++ "msg data length(=%u) exceeds maximum(=%u)\n",
++ msg_len, HCLGE_MBX_MAX_MSG_SIZE);
++ return -EMSGSIZE;
++ }
++
+ resp_pf_to_vf = (struct hclge_mbx_pf_to_vf_cmd *)desc.data;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGEVF_OPC_MBX_PF_TO_VF, false);
+@@ -176,7 +183,7 @@ static int hclge_get_ring_chain_from_mbx(
+ ring_num = req->msg.ring_num;
+
+ if (ring_num > HCLGE_MBX_MAX_RING_CHAIN_PARAM_NUM)
+- return -ENOMEM;
++ return -EINVAL;
+
+ for (i = 0; i < ring_num; i++) {
+ if (req->msg.param[i].tqp_index >= vport->nic.kinfo.rss_size) {
+@@ -587,9 +594,9 @@ static int hclge_set_vf_mtu(struct hclge_vport *vport,
+ return hclge_set_vport_mtu(vport, mtu);
+ }
+
+-static void hclge_get_queue_id_in_pf(struct hclge_vport *vport,
+- struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+- struct hclge_respond_to_vf_msg *resp_msg)
++static int hclge_get_queue_id_in_pf(struct hclge_vport *vport,
++ struct hclge_mbx_vf_to_pf_cmd *mbx_req,
++ struct hclge_respond_to_vf_msg *resp_msg)
+ {
+ struct hnae3_handle *handle = &vport->nic;
+ struct hclge_dev *hdev = vport->back;
+@@ -599,17 +606,18 @@ static void hclge_get_queue_id_in_pf(struct hclge_vport *vport,
+ if (queue_id >= handle->kinfo.num_tqps) {
+ dev_err(&hdev->pdev->dev, "Invalid queue id(%u) from VF %u\n",
+ queue_id, mbx_req->mbx_src_vfid);
+- return;
++ return -EINVAL;
+ }
+
+ qid_in_pf = hclge_covert_handle_qid_global(&vport->nic, queue_id);
+ memcpy(resp_msg->data, &qid_in_pf, sizeof(qid_in_pf));
+ resp_msg->len = sizeof(qid_in_pf);
++ return 0;
+ }
+
+-static void hclge_get_rss_key(struct hclge_vport *vport,
+- struct hclge_mbx_vf_to_pf_cmd *mbx_req,
+- struct hclge_respond_to_vf_msg *resp_msg)
++static int hclge_get_rss_key(struct hclge_vport *vport,
++ struct hclge_mbx_vf_to_pf_cmd *mbx_req,
++ struct hclge_respond_to_vf_msg *resp_msg)
+ {
+ #define HCLGE_RSS_MBX_RESP_LEN 8
+ struct hclge_dev *hdev = vport->back;
+@@ -627,13 +635,14 @@ static void hclge_get_rss_key(struct hclge_vport *vport,
+ dev_warn(&hdev->pdev->dev,
+ "failed to get the rss hash key, the index(%u) invalid !\n",
+ index);
+- return;
++ return -EINVAL;
+ }
+
+ memcpy(resp_msg->data,
+ &rss_cfg->rss_hash_key[index * HCLGE_RSS_MBX_RESP_LEN],
+ HCLGE_RSS_MBX_RESP_LEN);
+ resp_msg->len = HCLGE_RSS_MBX_RESP_LEN;
++ return 0;
+ }
+
+ static void hclge_link_fail_parse(struct hclge_dev *hdev, u8 link_fail_code)
+@@ -809,10 +818,10 @@ void hclge_mbx_handler(struct hclge_dev *hdev)
+ "VF fail(%d) to set mtu\n", ret);
+ break;
+ case HCLGE_MBX_GET_QID_IN_PF:
+- hclge_get_queue_id_in_pf(vport, req, &resp_msg);
++ ret = hclge_get_queue_id_in_pf(vport, req, &resp_msg);
+ break;
+ case HCLGE_MBX_GET_RSS_KEY:
+- hclge_get_rss_key(vport, req, &resp_msg);
++ ret = hclge_get_rss_key(vport, req, &resp_msg);
+ break;
+ case HCLGE_MBX_GET_LINK_MODE:
+ hclge_get_link_mode(vport, req);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index b4804ce63151f..a4428a7d0f357 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -3209,13 +3209,8 @@ static void ibmvnic_get_ringparam(struct net_device *netdev,
+ {
+ struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+
+- if (adapter->priv_flags & IBMVNIC_USE_SERVER_MAXES) {
+- ring->rx_max_pending = adapter->max_rx_add_entries_per_subcrq;
+- ring->tx_max_pending = adapter->max_tx_entries_per_subcrq;
+- } else {
+- ring->rx_max_pending = IBMVNIC_MAX_QUEUE_SZ;
+- ring->tx_max_pending = IBMVNIC_MAX_QUEUE_SZ;
+- }
++ ring->rx_max_pending = adapter->max_rx_add_entries_per_subcrq;
++ ring->tx_max_pending = adapter->max_tx_entries_per_subcrq;
+ ring->rx_mini_max_pending = 0;
+ ring->rx_jumbo_max_pending = 0;
+ ring->rx_pending = adapter->req_rx_add_entries_per_subcrq;
+@@ -3230,23 +3225,21 @@ static int ibmvnic_set_ringparam(struct net_device *netdev,
+ struct netlink_ext_ack *extack)
+ {
+ struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+- int ret;
+
+- ret = 0;
++ if (ring->rx_pending > adapter->max_rx_add_entries_per_subcrq ||
++ ring->tx_pending > adapter->max_tx_entries_per_subcrq) {
++ netdev_err(netdev, "Invalid request.\n");
++ netdev_err(netdev, "Max tx buffers = %llu\n",
++ adapter->max_rx_add_entries_per_subcrq);
++ netdev_err(netdev, "Max rx buffers = %llu\n",
++ adapter->max_tx_entries_per_subcrq);
++ return -EINVAL;
++ }
++
+ adapter->desired.rx_entries = ring->rx_pending;
+ adapter->desired.tx_entries = ring->tx_pending;
+
+- ret = wait_for_reset(adapter);
+-
+- if (!ret &&
+- (adapter->req_rx_add_entries_per_subcrq != ring->rx_pending ||
+- adapter->req_tx_entries_per_subcrq != ring->tx_pending))
+- netdev_info(netdev,
+- "Could not match full ringsize request. Requested: RX %d, TX %d; Allowed: RX %llu, TX %llu\n",
+- ring->rx_pending, ring->tx_pending,
+- adapter->req_rx_add_entries_per_subcrq,
+- adapter->req_tx_entries_per_subcrq);
+- return ret;
++ return wait_for_reset(adapter);
+ }
+
+ static void ibmvnic_get_channels(struct net_device *netdev,
+@@ -3254,14 +3247,8 @@ static void ibmvnic_get_channels(struct net_device *netdev,
+ {
+ struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+
+- if (adapter->priv_flags & IBMVNIC_USE_SERVER_MAXES) {
+- channels->max_rx = adapter->max_rx_queues;
+- channels->max_tx = adapter->max_tx_queues;
+- } else {
+- channels->max_rx = IBMVNIC_MAX_QUEUES;
+- channels->max_tx = IBMVNIC_MAX_QUEUES;
+- }
+-
++ channels->max_rx = adapter->max_rx_queues;
++ channels->max_tx = adapter->max_tx_queues;
+ channels->max_other = 0;
+ channels->max_combined = 0;
+ channels->rx_count = adapter->req_rx_queues;
+@@ -3274,22 +3261,11 @@ static int ibmvnic_set_channels(struct net_device *netdev,
+ struct ethtool_channels *channels)
+ {
+ struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+- int ret;
+
+- ret = 0;
+ adapter->desired.rx_queues = channels->rx_count;
+ adapter->desired.tx_queues = channels->tx_count;
+
+- ret = wait_for_reset(adapter);
+-
+- if (!ret &&
+- (adapter->req_rx_queues != channels->rx_count ||
+- adapter->req_tx_queues != channels->tx_count))
+- netdev_info(netdev,
+- "Could not match full channels request. Requested: RX %d, TX %d; Allowed: RX %llu, TX %llu\n",
+- channels->rx_count, channels->tx_count,
+- adapter->req_rx_queues, adapter->req_tx_queues);
+- return ret;
++ return wait_for_reset(adapter);
+ }
+
+ static void ibmvnic_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+@@ -3297,43 +3273,32 @@ static void ibmvnic_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+ struct ibmvnic_adapter *adapter = netdev_priv(dev);
+ int i;
+
+- switch (stringset) {
+- case ETH_SS_STATS:
+- for (i = 0; i < ARRAY_SIZE(ibmvnic_stats);
+- i++, data += ETH_GSTRING_LEN)
+- memcpy(data, ibmvnic_stats[i].name, ETH_GSTRING_LEN);
++ if (stringset != ETH_SS_STATS)
++ return;
+
+- for (i = 0; i < adapter->req_tx_queues; i++) {
+- snprintf(data, ETH_GSTRING_LEN, "tx%d_packets", i);
+- data += ETH_GSTRING_LEN;
++ for (i = 0; i < ARRAY_SIZE(ibmvnic_stats); i++, data += ETH_GSTRING_LEN)
++ memcpy(data, ibmvnic_stats[i].name, ETH_GSTRING_LEN);
+
+- snprintf(data, ETH_GSTRING_LEN, "tx%d_bytes", i);
+- data += ETH_GSTRING_LEN;
++ for (i = 0; i < adapter->req_tx_queues; i++) {
++ snprintf(data, ETH_GSTRING_LEN, "tx%d_packets", i);
++ data += ETH_GSTRING_LEN;
+
+- snprintf(data, ETH_GSTRING_LEN,
+- "tx%d_dropped_packets", i);
+- data += ETH_GSTRING_LEN;
+- }
++ snprintf(data, ETH_GSTRING_LEN, "tx%d_bytes", i);
++ data += ETH_GSTRING_LEN;
+
+- for (i = 0; i < adapter->req_rx_queues; i++) {
+- snprintf(data, ETH_GSTRING_LEN, "rx%d_packets", i);
+- data += ETH_GSTRING_LEN;
++ snprintf(data, ETH_GSTRING_LEN, "tx%d_dropped_packets", i);
++ data += ETH_GSTRING_LEN;
++ }
+
+- snprintf(data, ETH_GSTRING_LEN, "rx%d_bytes", i);
+- data += ETH_GSTRING_LEN;
++ for (i = 0; i < adapter->req_rx_queues; i++) {
++ snprintf(data, ETH_GSTRING_LEN, "rx%d_packets", i);
++ data += ETH_GSTRING_LEN;
+
+- snprintf(data, ETH_GSTRING_LEN, "rx%d_interrupts", i);
+- data += ETH_GSTRING_LEN;
+- }
+- break;
++ snprintf(data, ETH_GSTRING_LEN, "rx%d_bytes", i);
++ data += ETH_GSTRING_LEN;
+
+- case ETH_SS_PRIV_FLAGS:
+- for (i = 0; i < ARRAY_SIZE(ibmvnic_priv_flags); i++)
+- strcpy(data + i * ETH_GSTRING_LEN,
+- ibmvnic_priv_flags[i]);
+- break;
+- default:
+- return;
++ snprintf(data, ETH_GSTRING_LEN, "rx%d_interrupts", i);
++ data += ETH_GSTRING_LEN;
+ }
+ }
+
+@@ -3346,8 +3311,6 @@ static int ibmvnic_get_sset_count(struct net_device *dev, int sset)
+ return ARRAY_SIZE(ibmvnic_stats) +
+ adapter->req_tx_queues * NUM_TX_STATS +
+ adapter->req_rx_queues * NUM_RX_STATS;
+- case ETH_SS_PRIV_FLAGS:
+- return ARRAY_SIZE(ibmvnic_priv_flags);
+ default:
+ return -EOPNOTSUPP;
+ }
+@@ -3400,26 +3363,6 @@ static void ibmvnic_get_ethtool_stats(struct net_device *dev,
+ }
+ }
+
+-static u32 ibmvnic_get_priv_flags(struct net_device *netdev)
+-{
+- struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+-
+- return adapter->priv_flags;
+-}
+-
+-static int ibmvnic_set_priv_flags(struct net_device *netdev, u32 flags)
+-{
+- struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+- bool which_maxes = !!(flags & IBMVNIC_USE_SERVER_MAXES);
+-
+- if (which_maxes)
+- adapter->priv_flags |= IBMVNIC_USE_SERVER_MAXES;
+- else
+- adapter->priv_flags &= ~IBMVNIC_USE_SERVER_MAXES;
+-
+- return 0;
+-}
+-
+ static const struct ethtool_ops ibmvnic_ethtool_ops = {
+ .get_drvinfo = ibmvnic_get_drvinfo,
+ .get_msglevel = ibmvnic_get_msglevel,
+@@ -3433,8 +3376,6 @@ static const struct ethtool_ops ibmvnic_ethtool_ops = {
+ .get_sset_count = ibmvnic_get_sset_count,
+ .get_ethtool_stats = ibmvnic_get_ethtool_stats,
+ .get_link_ksettings = ibmvnic_get_link_ksettings,
+- .get_priv_flags = ibmvnic_get_priv_flags,
+- .set_priv_flags = ibmvnic_set_priv_flags,
+ };
+
+ /* Routines for managing CRQs/sCRQs */
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index 8f5cefb932dd1..1310c861bf834 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -41,11 +41,6 @@
+
+ #define IBMVNIC_RESET_DELAY 100
+
+-static const char ibmvnic_priv_flags[][ETH_GSTRING_LEN] = {
+-#define IBMVNIC_USE_SERVER_MAXES 0x1
+- "use-server-maxes"
+-};
+-
+ struct ibmvnic_login_buffer {
+ __be32 len;
+ __be32 version;
+@@ -883,7 +878,6 @@ struct ibmvnic_adapter {
+ struct ibmvnic_control_ip_offload_buffer ip_offload_ctrl;
+ dma_addr_t ip_offload_ctrl_tok;
+ u32 msg_enable;
+- u32 priv_flags;
+
+ /* Vital Product Data (VPD) */
+ struct ibmvnic_vpd *vpd;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 2de2bbbca1e97..e347030ee2e33 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6662,12 +6662,15 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
+
+ dev_dbg(dev, "rebuilding PF after reset_type=%d\n", reset_type);
+
++#define ICE_EMP_RESET_SLEEP_MS 5000
+ if (reset_type == ICE_RESET_EMPR) {
+ /* If an EMP reset has occurred, any previously pending flash
+ * update will have completed. We no longer know whether or
+ * not the NVM update EMP reset is restricted.
+ */
+ pf->fw_emp_reset_disabled = false;
++
++ msleep(ICE_EMP_RESET_SLEEP_MS);
+ }
+
+ err = ice_init_all_ctrlq(hw);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+index e596e1a9fc757..69d11ff7677d6 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+@@ -903,7 +903,8 @@ int ixgbe_ipsec_vf_add_sa(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ /* Tx IPsec offload doesn't seem to work on this
+ * device, so block these requests for now.
+ */
+- if (!(sam->flags & XFRM_OFFLOAD_INBOUND)) {
++ sam->flags = sam->flags & ~XFRM_OFFLOAD_IPV6;
++ if (sam->flags != XFRM_OFFLOAD_INBOUND) {
+ err = -EOPNOTSUPP;
+ goto err_out;
+ }
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c b/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c
+index 2679111ef6696..005e56ea5da12 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c
+@@ -346,7 +346,7 @@ static void lan966x_mac_irq_process(struct lan966x *lan966x, u32 row,
+
+ lan966x_mac_process_raw_entry(&raw_entries[column],
+ mac, &vid, &dest_idx);
+- if (WARN_ON(dest_idx > lan966x->num_phys_ports))
++ if (WARN_ON(dest_idx >= lan966x->num_phys_ports))
+ continue;
+
+ /* If the entry in SW is found, then there is nothing
+@@ -393,7 +393,7 @@ static void lan966x_mac_irq_process(struct lan966x *lan966x, u32 row,
+
+ lan966x_mac_process_raw_entry(&raw_entries[column],
+ mac, &vid, &dest_idx);
+- if (WARN_ON(dest_idx > lan966x->num_phys_ports))
++ if (WARN_ON(dest_idx >= lan966x->num_phys_ports))
+ continue;
+
+ mac_entry = lan966x_mac_alloc_entry(mac, vid, dest_idx);
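
Both lan966x hunks fix the same off-by-one: an index equal to num_phys_ports is already one past the last valid port, so the guard must use >=, not >. Illustrative check:

    #include <stdbool.h>
    #include <stdio.h>

    static bool dest_idx_valid(unsigned int dest_idx, unsigned int num_phys_ports)
    {
        /* valid ports are 0 .. num_phys_ports - 1 */
        return dest_idx < num_phys_ports;
    }

    int main(void)
    {
        printf("%d %d\n", (int)dest_idx_valid(7, 8), (int)dest_idx_valid(8, 8));
        return 0;
    }
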
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index ac9e6c7a33b55..6b447d8f0bd8a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -65,8 +65,9 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ struct phy_device *phy_dev = ndev->phydev;
+ u32 val;
+
+- writew(SGMII_ADAPTER_DISABLE,
+- sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
++ if (sgmii_adapter_base)
++ writew(SGMII_ADAPTER_DISABLE,
++ sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+
+ if (splitter_base) {
+ val = readl(splitter_base + EMAC_SPLITTER_CTRL_REG);
+@@ -88,10 +89,11 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ writel(val, splitter_base + EMAC_SPLITTER_CTRL_REG);
+ }
+
+- writew(SGMII_ADAPTER_ENABLE,
+- sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+- if (phy_dev)
++ if (phy_dev && sgmii_adapter_base) {
++ writew(SGMII_ADAPTER_ENABLE,
++ sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+ tse_pcs_fix_mac_speed(&dwmac->pcs, phy_dev, speed);
++ }
+ }
+
+ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *dev)
+diff --git a/drivers/net/hippi/rrunner.c b/drivers/net/hippi/rrunner.c
+index 16105292b140b..74e845fa2e07e 100644
+--- a/drivers/net/hippi/rrunner.c
++++ b/drivers/net/hippi/rrunner.c
+@@ -1355,7 +1355,9 @@ static int rr_close(struct net_device *dev)
+
+ rrpriv->fw_running = 0;
+
++ spin_unlock_irqrestore(&rrpriv->lock, flags);
+ del_timer_sync(&rrpriv->timer);
++ spin_lock_irqsave(&rrpriv->lock, flags);
+
+ writel(0, &regs->TxPi);
+ writel(0, &regs->IpRxPi);
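
The rrunner fix drops the lock around del_timer_sync(): the timer callback takes the same spinlock, so waiting for the callback to finish while holding that lock can deadlock. A sketch of the unlock/wait/relock ordering, with pthread stand-ins for the kernel primitives (build with -pthread):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* stand-in: waits until the timer callback, which takes 'lock', is done */
    static void del_timer_sync_stub(void) { }

    static void close_path(void)
    {
        pthread_mutex_lock(&lock);
        /* ... tear down state guarded by the lock ... */

        pthread_mutex_unlock(&lock);    /* the callback may need the lock ... */
        del_timer_sync_stub();          /* ... so wait for it unlocked */
        pthread_mutex_lock(&lock);

        /* ... rest of the locked teardown ... */
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        close_path();
        return 0;
    }
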
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index b6fea119fe137..2b7d0720720b6 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -880,7 +880,7 @@ static int mv3310_read_status_copper(struct phy_device *phydev)
+
+ cssr1 = phy_read_mmd(phydev, MDIO_MMD_PCS, MV_PCS_CSSR1);
+ if (cssr1 < 0)
+- return val;
++ return cssr1;
+
+ /* If the link settings are not resolved, mark the link down */
+ if (!(cssr1 & MV_PCS_CSSR1_RESOLVED)) {
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index a801ea40908ff..3b02dd54e2e27 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -978,6 +978,24 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ * xdp.data_meta were adjusted
+ */
+ len = xdp.data_end - xdp.data + vi->hdr_len + metasize;
++
++ /* recalculate headroom if xdp.data or xdp_data_meta
++ * were adjusted, note that offset should always point
++ * to the start of the reserved bytes for virtio_net
++ * header which are followed by xdp.data, that means
++ * that offset is equal to the headroom (when buf is
++ * starting at the beginning of the page, otherwise
++ * there is a base offset inside the page) but it's used
++ * with a different starting point (buf start) than
++ * xdp.data (buf start + vnet hdr size). If xdp.data or
++ * data_meta were adjusted by the xdp prog then the
++ * headroom size has changed and so has the offset, we
++ * can use data_hard_start, which points at buf start +
++ * vnet hdr size, to calculate the new headroom and use
++ * it later to compute buf start in page_to_skb()
++ */
++ headroom = xdp.data - xdp.data_hard_start - metasize;
++
+ /* We can only create skb based on xdp_page. */
+ if (unlikely(xdp_page != page)) {
+ rcu_read_unlock();
+@@ -985,7 +1003,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
+ head_skb = page_to_skb(vi, rq, xdp_page, offset,
+ len, PAGE_SIZE, false,
+ metasize,
+- VIRTIO_XDP_HEADROOM);
++ headroom);
+ return head_skb;
+ }
+ break;
+diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c
+index a46067c38bf5d..5eaef79c06e16 100644
+--- a/drivers/net/wireguard/device.c
++++ b/drivers/net/wireguard/device.c
+@@ -19,6 +19,7 @@
+ #include <linux/if_arp.h>
+ #include <linux/icmp.h>
+ #include <linux/suspend.h>
++#include <net/dst_metadata.h>
+ #include <net/icmp.h>
+ #include <net/rtnetlink.h>
+ #include <net/ip_tunnels.h>
+@@ -152,7 +153,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev)
+ goto err_peer;
+ }
+
+- mtu = skb_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
++ mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
+
+ __skb_queue_head_init(&packets);
+ if (!skb_is_gso(skb)) {
+diff --git a/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c b/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c
+index 5b471ab80fe28..54d65a6f0fccf 100644
+--- a/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c
++++ b/drivers/phy/amlogic/phy-meson-g12a-usb3-pcie.c
+@@ -414,19 +414,19 @@ static int phy_g12a_usb3_pcie_probe(struct platform_device *pdev)
+
+ ret = clk_prepare_enable(priv->clk_ref);
+ if (ret)
+- goto err_disable_clk_ref;
++ return ret;
+
+ priv->reset = devm_reset_control_array_get_exclusive(dev);
+- if (IS_ERR(priv->reset))
+- return PTR_ERR(priv->reset);
++ if (IS_ERR(priv->reset)) {
++ ret = PTR_ERR(priv->reset);
++ goto err_disable_clk_ref;
++ }
+
+ priv->phy = devm_phy_create(dev, np, &phy_g12a_usb3_pcie_ops);
+ if (IS_ERR(priv->phy)) {
+ ret = PTR_ERR(priv->phy);
+- if (ret != -EPROBE_DEFER)
+- dev_err(dev, "failed to create PHY\n");
+-
+- return ret;
++ dev_err_probe(dev, ret, "failed to create PHY\n");
++ goto err_disable_clk_ref;
+ }
+
+ phy_set_drvdata(priv->phy, priv);
+@@ -434,8 +434,12 @@ static int phy_g12a_usb3_pcie_probe(struct platform_device *pdev)
+
+ phy_provider = devm_of_phy_provider_register(dev,
+ phy_g12a_usb3_pcie_xlate);
++ if (IS_ERR(phy_provider)) {
++ ret = PTR_ERR(phy_provider);
++ goto err_disable_clk_ref;
++ }
+
+- return PTR_ERR_OR_ZERO(phy_provider);
++ return 0;
+
+ err_disable_clk_ref:
+ clk_disable_unprepare(priv->clk_ref);
+diff --git a/drivers/phy/motorola/phy-mapphone-mdm6600.c b/drivers/phy/motorola/phy-mapphone-mdm6600.c
+index 5172971f4c360..3cd4d51c247c3 100644
+--- a/drivers/phy/motorola/phy-mapphone-mdm6600.c
++++ b/drivers/phy/motorola/phy-mapphone-mdm6600.c
+@@ -629,7 +629,8 @@ idle:
+ cleanup:
+ if (error < 0)
+ phy_mdm6600_device_power_off(ddata);
+-
++ pm_runtime_disable(ddata->dev);
++ pm_runtime_dont_use_autosuspend(ddata->dev);
+ return error;
+ }
+
+diff --git a/drivers/phy/samsung/phy-exynos5250-sata.c b/drivers/phy/samsung/phy-exynos5250-sata.c
+index 9ec234243f7c6..595adba5fb8f1 100644
+--- a/drivers/phy/samsung/phy-exynos5250-sata.c
++++ b/drivers/phy/samsung/phy-exynos5250-sata.c
+@@ -187,6 +187,7 @@ static int exynos_sata_phy_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ sata_phy->client = of_find_i2c_device_by_node(node);
++ of_node_put(node);
+ if (!sata_phy->client)
+ return -EPROBE_DEFER;
+
+@@ -195,20 +196,21 @@ static int exynos_sata_phy_probe(struct platform_device *pdev)
+ sata_phy->phyclk = devm_clk_get(dev, "sata_phyctrl");
+ if (IS_ERR(sata_phy->phyclk)) {
+ dev_err(dev, "failed to get clk for PHY\n");
+- return PTR_ERR(sata_phy->phyclk);
++ ret = PTR_ERR(sata_phy->phyclk);
++ goto put_dev;
+ }
+
+ ret = clk_prepare_enable(sata_phy->phyclk);
+ if (ret < 0) {
+ dev_err(dev, "failed to enable source clk\n");
+- return ret;
++ goto put_dev;
+ }
+
+ sata_phy->phy = devm_phy_create(dev, NULL, &exynos_sata_phy_ops);
+ if (IS_ERR(sata_phy->phy)) {
+- clk_disable_unprepare(sata_phy->phyclk);
+ dev_err(dev, "failed to create PHY\n");
+- return PTR_ERR(sata_phy->phy);
++ ret = PTR_ERR(sata_phy->phy);
++ goto clk_disable;
+ }
+
+ phy_set_drvdata(sata_phy->phy, sata_phy);
+@@ -216,11 +218,18 @@ static int exynos_sata_phy_probe(struct platform_device *pdev)
+ phy_provider = devm_of_phy_provider_register(dev,
+ of_phy_simple_xlate);
+ if (IS_ERR(phy_provider)) {
+- clk_disable_unprepare(sata_phy->phyclk);
+- return PTR_ERR(phy_provider);
++ ret = PTR_ERR(phy_provider);
++ goto clk_disable;
+ }
+
+ return 0;
++
++clk_disable:
++ clk_disable_unprepare(sata_phy->phyclk);
++put_dev:
++ put_device(&sata_phy->client->dev);
++
++ return ret;
+ }
+
+ static const struct of_device_id exynos_sata_phy_of_match[] = {
+diff --git a/drivers/phy/ti/phy-am654-serdes.c b/drivers/phy/ti/phy-am654-serdes.c
+index c1211c4f863ca..0be727bb9f792 100644
+--- a/drivers/phy/ti/phy-am654-serdes.c
++++ b/drivers/phy/ti/phy-am654-serdes.c
+@@ -838,7 +838,7 @@ static int serdes_am654_probe(struct platform_device *pdev)
+
+ clk_err:
+ of_clk_del_provider(node);
+-
++ pm_runtime_disable(dev);
+ return ret;
+ }
+
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index 3a505fe5715ad..31a775877f6e3 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -215,7 +215,7 @@ static int omap_usb2_enable_clocks(struct omap_usb *phy)
+ return 0;
+
+ err1:
+- clk_disable(phy->wkupclk);
++ clk_disable_unprepare(phy->wkupclk);
+
+ err0:
+ return ret;
+diff --git a/drivers/pinctrl/mediatek/Kconfig b/drivers/pinctrl/mediatek/Kconfig
+index 66db4ac5d169a..c7fa1525e42af 100644
+--- a/drivers/pinctrl/mediatek/Kconfig
++++ b/drivers/pinctrl/mediatek/Kconfig
+@@ -30,6 +30,7 @@ config PINCTRL_MTK_MOORE
+ select GENERIC_PINMUX_FUNCTIONS
+ select GPIOLIB
+ select OF_GPIO
++ select EINT_MTK
+ select PINCTRL_MTK_V2
+
+ config PINCTRL_MTK_PARIS
+diff --git a/drivers/pinctrl/pinctrl-pistachio.c b/drivers/pinctrl/pinctrl-pistachio.c
+index 8d271c6b0ca41..5de691c630b4f 100644
+--- a/drivers/pinctrl/pinctrl-pistachio.c
++++ b/drivers/pinctrl/pinctrl-pistachio.c
+@@ -1374,10 +1374,10 @@ static int pistachio_gpio_register(struct pistachio_pinctrl *pctl)
+ }
+
+ irq = irq_of_parse_and_map(child, 0);
+- if (irq < 0) {
+- dev_err(pctl->dev, "No IRQ for bank %u: %d\n", i, irq);
++ if (!irq) {
++ dev_err(pctl->dev, "No IRQ for bank %u\n", i);
+ of_node_put(child);
+- ret = irq;
++ ret = -EINVAL;
+ goto err;
+ }
+
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index a1b598b86aa9f..65fa305b5f59f 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -457,95 +457,110 @@ static struct rockchip_mux_recalced_data rk3128_mux_recalced_data[] = {
+
+ static struct rockchip_mux_recalced_data rk3308_mux_recalced_data[] = {
+ {
++ /* gpio1b6_sel */
+ .num = 1,
+ .pin = 14,
+ .reg = 0x28,
+ .bit = 12,
+ .mask = 0xf
+ }, {
++ /* gpio1b7_sel */
+ .num = 1,
+ .pin = 15,
+ .reg = 0x2c,
+ .bit = 0,
+ .mask = 0x3
+ }, {
++ /* gpio1c2_sel */
+ .num = 1,
+ .pin = 18,
+ .reg = 0x30,
+ .bit = 4,
+ .mask = 0xf
+ }, {
++ /* gpio1c3_sel */
+ .num = 1,
+ .pin = 19,
+ .reg = 0x30,
+ .bit = 8,
+ .mask = 0xf
+ }, {
++ /* gpio1c4_sel */
+ .num = 1,
+ .pin = 20,
+ .reg = 0x30,
+ .bit = 12,
+ .mask = 0xf
+ }, {
++ /* gpio1c5_sel */
+ .num = 1,
+ .pin = 21,
+ .reg = 0x34,
+ .bit = 0,
+ .mask = 0xf
+ }, {
++ /* gpio1c6_sel */
+ .num = 1,
+ .pin = 22,
+ .reg = 0x34,
+ .bit = 4,
+ .mask = 0xf
+ }, {
++ /* gpio1c7_sel */
+ .num = 1,
+ .pin = 23,
+ .reg = 0x34,
+ .bit = 8,
+ .mask = 0xf
+ }, {
++ /* gpio3b4_sel */
+ .num = 3,
+ .pin = 12,
+ .reg = 0x68,
+ .bit = 8,
+ .mask = 0xf
+ }, {
++ /* gpio3b5_sel */
+ .num = 3,
+ .pin = 13,
+ .reg = 0x68,
+ .bit = 12,
+ .mask = 0xf
+ }, {
++ /* gpio2a2_sel */
+ .num = 2,
+ .pin = 2,
+- .reg = 0x608,
+- .bit = 0,
+- .mask = 0x7
++ .reg = 0x40,
++ .bit = 4,
++ .mask = 0x3
+ }, {
++ /* gpio2a3_sel */
+ .num = 2,
+ .pin = 3,
+- .reg = 0x608,
+- .bit = 4,
+- .mask = 0x7
++ .reg = 0x40,
++ .bit = 6,
++ .mask = 0x3
+ }, {
++ /* gpio2c0_sel */
+ .num = 2,
+ .pin = 16,
+- .reg = 0x610,
+- .bit = 8,
+- .mask = 0x7
++ .reg = 0x50,
++ .bit = 0,
++ .mask = 0x3
+ }, {
++ /* gpio3b2_sel */
+ .num = 3,
+ .pin = 10,
+- .reg = 0x610,
+- .bit = 0,
+- .mask = 0x7
++ .reg = 0x68,
++ .bit = 4,
++ .mask = 0x3
+ }, {
++ /* gpio3b3_sel */
+ .num = 3,
+ .pin = 11,
+- .reg = 0x610,
+- .bit = 4,
+- .mask = 0x7
++ .reg = 0x68,
++ .bit = 6,
++ .mask = 0x3
+ },
+ };
+
+diff --git a/drivers/pinctrl/qcom/pinctrl-sm6350.c b/drivers/pinctrl/qcom/pinctrl-sm6350.c
+index 4d37b817b2328..a91a86628f2f8 100644
+--- a/drivers/pinctrl/qcom/pinctrl-sm6350.c
++++ b/drivers/pinctrl/qcom/pinctrl-sm6350.c
+@@ -264,14 +264,14 @@ static const struct pinctrl_pin_desc sm6350_pins[] = {
+ PINCTRL_PIN(153, "GPIO_153"),
+ PINCTRL_PIN(154, "GPIO_154"),
+ PINCTRL_PIN(155, "GPIO_155"),
+- PINCTRL_PIN(156, "SDC1_RCLK"),
+- PINCTRL_PIN(157, "SDC1_CLK"),
+- PINCTRL_PIN(158, "SDC1_CMD"),
+- PINCTRL_PIN(159, "SDC1_DATA"),
+- PINCTRL_PIN(160, "SDC2_CLK"),
+- PINCTRL_PIN(161, "SDC2_CMD"),
+- PINCTRL_PIN(162, "SDC2_DATA"),
+- PINCTRL_PIN(163, "UFS_RESET"),
++ PINCTRL_PIN(156, "UFS_RESET"),
++ PINCTRL_PIN(157, "SDC1_RCLK"),
++ PINCTRL_PIN(158, "SDC1_CLK"),
++ PINCTRL_PIN(159, "SDC1_CMD"),
++ PINCTRL_PIN(160, "SDC1_DATA"),
++ PINCTRL_PIN(161, "SDC2_CLK"),
++ PINCTRL_PIN(162, "SDC2_CMD"),
++ PINCTRL_PIN(163, "SDC2_DATA"),
+ };
+
+ #define DECLARE_MSM_GPIO_PINS(pin) \
+diff --git a/drivers/pinctrl/samsung/Kconfig b/drivers/pinctrl/samsung/Kconfig
+index dfd805e768624..7b0576f71376e 100644
+--- a/drivers/pinctrl/samsung/Kconfig
++++ b/drivers/pinctrl/samsung/Kconfig
+@@ -4,14 +4,13 @@
+ #
+ config PINCTRL_SAMSUNG
+ bool
+- depends on OF_GPIO
++ select GPIOLIB
+ select PINMUX
+ select PINCONF
+
+ config PINCTRL_EXYNOS
+ bool "Pinctrl common driver part for Samsung Exynos SoCs"
+- depends on OF_GPIO
+- depends on ARCH_EXYNOS || ARCH_S5PV210 || COMPILE_TEST
++ depends on ARCH_EXYNOS || ARCH_S5PV210 || (COMPILE_TEST && OF)
+ select PINCTRL_SAMSUNG
+ select PINCTRL_EXYNOS_ARM if ARM && (ARCH_EXYNOS || ARCH_S5PV210)
+ select PINCTRL_EXYNOS_ARM64 if ARM64 && ARCH_EXYNOS
+@@ -26,12 +25,10 @@ config PINCTRL_EXYNOS_ARM64
+
+ config PINCTRL_S3C24XX
+ bool "Samsung S3C24XX SoC pinctrl driver"
+- depends on OF_GPIO
+- depends on ARCH_S3C24XX || COMPILE_TEST
++ depends on ARCH_S3C24XX || (COMPILE_TEST && OF)
+ select PINCTRL_SAMSUNG
+
+ config PINCTRL_S3C64XX
+ bool "Samsung S3C64XX SoC pinctrl driver"
+- depends on OF_GPIO
+- depends on ARCH_S3C64XX || COMPILE_TEST
++ depends on ARCH_S3C64XX || (COMPILE_TEST && OF)
+ select PINCTRL_SAMSUNG
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 9ed7647315707..f7c9459f66283 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -225,6 +225,13 @@ static void stm32_gpio_free(struct gpio_chip *chip, unsigned offset)
+ pinctrl_gpio_free(chip->base + offset);
+ }
+
++static int stm32_gpio_get_noclk(struct gpio_chip *chip, unsigned int offset)
++{
++ struct stm32_gpio_bank *bank = gpiochip_get_data(chip);
++
++ return !!(readl_relaxed(bank->base + STM32_GPIO_IDR) & BIT(offset));
++}
++
+ static int stm32_gpio_get(struct gpio_chip *chip, unsigned offset)
+ {
+ struct stm32_gpio_bank *bank = gpiochip_get_data(chip);
+@@ -232,7 +239,7 @@ static int stm32_gpio_get(struct gpio_chip *chip, unsigned offset)
+
+ clk_enable(bank->clk);
+
+- ret = !!(readl_relaxed(bank->base + STM32_GPIO_IDR) & BIT(offset));
++ ret = stm32_gpio_get_noclk(chip, offset);
+
+ clk_disable(bank->clk);
+
+@@ -311,8 +318,12 @@ static void stm32_gpio_irq_trigger(struct irq_data *d)
+ struct stm32_gpio_bank *bank = d->domain->host_data;
+ int level;
+
++ /* Do not access the GPIO if this is not LEVEL triggered IRQ. */
++ if (!(bank->irq_type[d->hwirq] & IRQ_TYPE_LEVEL_MASK))
++ return;
++
+ /* If level interrupt type then retrig */
+- level = stm32_gpio_get(&bank->gpio_chip, d->hwirq);
++ level = stm32_gpio_get_noclk(&bank->gpio_chip, d->hwirq);
+ if ((level == 0 && bank->irq_type[d->hwirq] == IRQ_TYPE_LEVEL_LOW) ||
+ (level == 1 && bank->irq_type[d->hwirq] == IRQ_TYPE_LEVEL_HIGH))
+ irq_chip_retrigger_hierarchy(d);
+@@ -354,6 +365,7 @@ static int stm32_gpio_irq_request_resources(struct irq_data *irq_data)
+ {
+ struct stm32_gpio_bank *bank = irq_data->domain->host_data;
+ struct stm32_pinctrl *pctl = dev_get_drvdata(bank->gpio_chip.parent);
++ unsigned long flags;
+ int ret;
+
+ ret = stm32_gpio_direction_input(&bank->gpio_chip, irq_data->hwirq);
+@@ -367,6 +379,10 @@ static int stm32_gpio_irq_request_resources(struct irq_data *irq_data)
+ return ret;
+ }
+
++ flags = irqd_get_trigger_type(irq_data);
++ if (flags & IRQ_TYPE_LEVEL_MASK)
++ clk_enable(bank->clk);
++
+ return 0;
+ }
+
+@@ -374,6 +390,9 @@ static void stm32_gpio_irq_release_resources(struct irq_data *irq_data)
+ {
+ struct stm32_gpio_bank *bank = irq_data->domain->host_data;
+
++ if (bank->irq_type[irq_data->hwirq] & IRQ_TYPE_LEVEL_MASK)
++ clk_disable(bank->clk);
++
+ gpiochip_unlock_as_irq(&bank->gpio_chip, irq_data->hwirq);
+ }
+
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index adab31b52f2af..1e7bc0c595c78 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -371,10 +371,14 @@ static int asus_wmi_evaluate_method_buf(u32 method_id,
+
+ switch (obj->type) {
+ case ACPI_TYPE_BUFFER:
+- if (obj->buffer.length > size)
++ if (obj->buffer.length > size) {
+ err = -ENOSPC;
+- if (obj->buffer.length == 0)
++ break;
++ }
++ if (obj->buffer.length == 0) {
+ err = -ENODATA;
++ break;
++ }
+
+ memcpy(ret_buffer, obj->buffer.pointer, obj->buffer.length);
+ break;
+@@ -2223,9 +2227,10 @@ static int fan_curve_check_present(struct asus_wmi *asus, bool *available,
+
+ err = fan_curve_get_factory_default(asus, fan_dev);
+ if (err) {
+- if (err == -ENODEV || err == -ENODATA)
+- return 0;
+- return err;
++ pr_debug("fan_curve_get_factory_default(0x%08x) failed: %d\n",
++ fan_dev, err);
++ /* Don't cause probe to fail on devices without fan-curves */
++ return 0;
+ }
+
+ *available = true;
+diff --git a/drivers/soc/imx/imx8m-blk-ctrl.c b/drivers/soc/imx/imx8m-blk-ctrl.c
+index 511e74f0db8a9..e096cca9f18a2 100644
+--- a/drivers/soc/imx/imx8m-blk-ctrl.c
++++ b/drivers/soc/imx/imx8m-blk-ctrl.c
+@@ -49,7 +49,7 @@ struct imx8m_blk_ctrl_domain_data {
+ u32 mipi_phy_rst_mask;
+ };
+
+-#define DOMAIN_MAX_CLKS 3
++#define DOMAIN_MAX_CLKS 4
+
+ struct imx8m_blk_ctrl_domain {
+ struct generic_pm_domain genpd;
+diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
+index f744ab15bf2c6..30a6119a2b16a 100644
+--- a/drivers/tee/optee/ffa_abi.c
++++ b/drivers/tee/optee/ffa_abi.c
+@@ -894,6 +894,7 @@ err_rhashtable_free:
+ rhashtable_free_and_destroy(&optee->ffa.global_ids, rh_free_fn, NULL);
+ optee_supp_uninit(&optee->supp);
+ mutex_destroy(&optee->call_queue.mutex);
++ mutex_destroy(&optee->ffa.mutex);
+ err_unreg_supp_teedev:
+ tee_device_unregister(optee->supp_teedev);
+ err_unreg_teedev:
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index a0b599100106b..ba7eab0c62d43 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -67,7 +67,7 @@ static int evaluate_odvp(struct int3400_thermal_priv *priv);
+ struct odvp_attr {
+ int odvp;
+ struct int3400_thermal_priv *priv;
+- struct kobj_attribute attr;
++ struct device_attribute attr;
+ };
+
+ static ssize_t data_vault_read(struct file *file, struct kobject *kobj,
+@@ -271,7 +271,7 @@ static int int3400_thermal_run_osc(acpi_handle handle,
+ return result;
+ }
+
+-static ssize_t odvp_show(struct kobject *kobj, struct kobj_attribute *attr,
++static ssize_t odvp_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+ struct odvp_attr *odvp_attr;
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index fa92f727fdf89..a38b922bcbc10 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -73,6 +73,8 @@ module_param(debug, int, 0600);
+ */
+ #define MAX_MRU 1500
+ #define MAX_MTU 1500
++/* SOF, ADDR, CTRL, LEN1, LEN2, ..., FCS, EOF */
++#define PROT_OVERHEAD 7
+ #define GSM_NET_TX_TIMEOUT (HZ*10)
+
+ /*
+@@ -219,7 +221,6 @@ struct gsm_mux {
+ int encoding;
+ u8 control;
+ u8 fcs;
+- u8 received_fcs;
+ u8 *txframe; /* TX framing buffer */
+
+ /* Method for the receiver side */
+@@ -231,6 +232,7 @@ struct gsm_mux {
+ int initiator; /* Did we initiate connection */
+ bool dead; /* Has the mux been shut down */
+ struct gsm_dlci *dlci[NUM_DLCI];
++ int old_c_iflag; /* termios c_iflag value before attach */
+ bool constipated; /* Asked by remote to shut up */
+
+ spinlock_t tx_lock;
+@@ -271,10 +273,6 @@ static DEFINE_SPINLOCK(gsm_mux_lock);
+
+ static struct tty_driver *gsm_tty_driver;
+
+-/* Save dlci open address */
+-static int addr_open[256] = { 0 };
+-/* Save dlci open count */
+-static int addr_cnt;
+ /*
+ * This section of the driver logic implements the GSM encodings
+ * both the basic and the 'advanced'. Reliable transport is not
+@@ -369,6 +367,7 @@ static const u8 gsm_fcs8[256] = {
+ #define GOOD_FCS 0xCF
+
+ static int gsmld_output(struct gsm_mux *gsm, u8 *data, int len);
++static int gsm_modem_update(struct gsm_dlci *dlci, u8 brk);
+
+ /**
+ * gsm_fcs_add - update FCS
+@@ -832,7 +831,7 @@ static int gsm_dlci_data_output(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ break;
+ case 2: /* Unstructed with modem bits.
+ Always one byte as we never send inline break data */
+- *dp++ = gsm_encode_modem(dlci);
++ *dp++ = (gsm_encode_modem(dlci) << 1) | EA;
+ break;
+ }
+ WARN_ON(kfifo_out_locked(&dlci->fifo, dp , len, &dlci->lock) != len);
+@@ -916,6 +915,66 @@ static int gsm_dlci_data_output_framed(struct gsm_mux *gsm,
+ return size;
+ }
+
++/**
++ * gsm_dlci_modem_output - try and push modem status out of a DLCI
++ * @gsm: mux
++ * @dlci: the DLCI to pull modem status from
++ * @brk: break signal
++ *
++ * Push an empty frame in to the transmit queue to update the modem status
++ * bits and to transmit an optional break.
++ *
++ * Caller must hold the tx_lock of the mux.
++ */
++
++static int gsm_dlci_modem_output(struct gsm_mux *gsm, struct gsm_dlci *dlci,
++ u8 brk)
++{
++ u8 *dp = NULL;
++ struct gsm_msg *msg;
++ int size = 0;
++
++ /* for modem bits without break data */
++ switch (dlci->adaption) {
++ case 1: /* Unstructured */
++ break;
++ case 2: /* Unstructured with modem bits. */
++ size++;
++ if (brk > 0)
++ size++;
++ break;
++ default:
++ pr_err("%s: unsupported adaption %d\n", __func__,
++ dlci->adaption);
++ return -EINVAL;
++ }
++
++ msg = gsm_data_alloc(gsm, dlci->addr, size, gsm->ftype);
++ if (!msg) {
++ pr_err("%s: gsm_data_alloc error", __func__);
++ return -ENOMEM;
++ }
++ dp = msg->data;
++ switch (dlci->adaption) {
++ case 1: /* Unstructured */
++ break;
++ case 2: /* Unstructured with modem bits. */
++ if (brk == 0) {
++ *dp++ = (gsm_encode_modem(dlci) << 1) | EA;
++ } else {
++ *dp++ = gsm_encode_modem(dlci) << 1;
++ *dp++ = (brk << 4) | 2 | EA; /* Length, Break, EA */
++ }
++ break;
++ default:
++ /* Handled above */
++ break;
++ }
++
++ __gsm_data_queue(dlci, msg);
++ return size;
++}
++
+ /**
+ * gsm_dlci_data_sweep - look for data to send
+ * @gsm: the GSM mux
+@@ -1093,7 +1152,6 @@ static void gsm_control_modem(struct gsm_mux *gsm, const u8 *data, int clen)
+ {
+ unsigned int addr = 0;
+ unsigned int modem = 0;
+- unsigned int brk = 0;
+ struct gsm_dlci *dlci;
+ int len = clen;
+ int slen;
+@@ -1123,17 +1181,8 @@ static void gsm_control_modem(struct gsm_mux *gsm, const u8 *data, int clen)
+ return;
+ }
+ len--;
+- if (len > 0) {
+- while (gsm_read_ea(&brk, *dp++) == 0) {
+- len--;
+- if (len == 0)
+- return;
+- }
+- modem <<= 7;
+- modem |= (brk & 0x7f);
+- }
+ tty = tty_port_tty_get(&dlci->port);
+- gsm_process_modem(tty, dlci, modem, slen);
++ gsm_process_modem(tty, dlci, modem, slen - len);
+ if (tty) {
+ tty_wakeup(tty);
+ tty_kref_put(tty);
+@@ -1193,7 +1242,6 @@ static void gsm_control_rls(struct gsm_mux *gsm, const u8 *data, int clen)
+ }
+
+ static void gsm_dlci_begin_close(struct gsm_dlci *dlci);
+-static void gsm_dlci_close(struct gsm_dlci *dlci);
+
+ /**
+ * gsm_control_message - DLCI 0 control processing
+@@ -1212,28 +1260,15 @@ static void gsm_control_message(struct gsm_mux *gsm, unsigned int command,
+ {
+ u8 buf[1];
+ unsigned long flags;
+- struct gsm_dlci *dlci;
+- int i;
+- int address;
+
+ switch (command) {
+ case CMD_CLD: {
+- if (addr_cnt > 0) {
+- for (i = 0; i < addr_cnt; i++) {
+- address = addr_open[i];
+- dlci = gsm->dlci[address];
+- gsm_dlci_close(dlci);
+- addr_open[i] = 0;
+- }
+- }
++ struct gsm_dlci *dlci = gsm->dlci[0];
+ /* Modem wishes to close down */
+- dlci = gsm->dlci[0];
+ if (dlci) {
+ dlci->dead = true;
+ gsm->dead = true;
+- gsm_dlci_close(dlci);
+- addr_cnt = 0;
+- gsm_response(gsm, 0, UA|PF);
++ gsm_dlci_begin_close(dlci);
+ }
+ }
+ break;
+@@ -1326,11 +1361,12 @@ static void gsm_control_response(struct gsm_mux *gsm, unsigned int command,
+
+ static void gsm_control_transmit(struct gsm_mux *gsm, struct gsm_control *ctrl)
+ {
+- struct gsm_msg *msg = gsm_data_alloc(gsm, 0, ctrl->len + 1, gsm->ftype);
++ struct gsm_msg *msg = gsm_data_alloc(gsm, 0, ctrl->len + 2, gsm->ftype);
+ if (msg == NULL)
+ return;
+- msg->data[0] = (ctrl->cmd << 1) | 2 | EA; /* command */
+- memcpy(msg->data + 1, ctrl->data, ctrl->len);
++ msg->data[0] = (ctrl->cmd << 1) | CR | EA; /* command */
++ msg->data[1] = (ctrl->len << 1) | EA;
++ memcpy(msg->data + 2, ctrl->data, ctrl->len);
+ gsm_data_queue(gsm->dlci[0], msg);
+ }
+
+@@ -1353,7 +1389,6 @@ static void gsm_control_retransmit(struct timer_list *t)
+ spin_lock_irqsave(&gsm->control_lock, flags);
+ ctrl = gsm->pending_cmd;
+ if (ctrl) {
+- gsm->cretries--;
+ if (gsm->cretries == 0) {
+ gsm->pending_cmd = NULL;
+ ctrl->error = -ETIMEDOUT;
+@@ -1362,6 +1397,7 @@ static void gsm_control_retransmit(struct timer_list *t)
+ wake_up(&gsm->event);
+ return;
+ }
++ gsm->cretries--;
+ gsm_control_transmit(gsm, ctrl);
+ mod_timer(&gsm->t2_timer, jiffies + gsm->t2 * HZ / 100);
+ }
+@@ -1402,7 +1438,7 @@ retry:
+
+ /* If DLCI0 is in ADM mode skip retries, it won't respond */
+ if (gsm->dlci[0]->mode == DLCI_MODE_ADM)
+- gsm->cretries = 1;
++ gsm->cretries = 0;
+ else
+ gsm->cretries = gsm->n2;
+
+@@ -1450,20 +1486,22 @@ static int gsm_control_wait(struct gsm_mux *gsm, struct gsm_control *control)
+
+ static void gsm_dlci_close(struct gsm_dlci *dlci)
+ {
++ unsigned long flags;
++
+ del_timer(&dlci->t1);
+ if (debug & 8)
+ pr_debug("DLCI %d goes closed.\n", dlci->addr);
+ dlci->state = DLCI_CLOSED;
+ if (dlci->addr != 0) {
+ tty_port_tty_hangup(&dlci->port, false);
++ spin_lock_irqsave(&dlci->lock, flags);
+ kfifo_reset(&dlci->fifo);
++ spin_unlock_irqrestore(&dlci->lock, flags);
+ /* Ensure that gsmtty_open() can return. */
+ tty_port_set_initialized(&dlci->port, 0);
+ wake_up_interruptible(&dlci->port.open_wait);
+ } else
+ dlci->gsm->dead = true;
+- /* Unregister gsmtty driver,report gsmtty dev remove uevent for user */
+- tty_unregister_device(gsm_tty_driver, dlci->addr);
+ wake_up(&dlci->gsm->event);
+ /* A DLCI 0 close is a MUX termination so we need to kick that
+ back to userspace somehow */
+@@ -1485,8 +1523,9 @@ static void gsm_dlci_open(struct gsm_dlci *dlci)
+ dlci->state = DLCI_OPEN;
+ if (debug & 8)
+ pr_debug("DLCI %d goes open.\n", dlci->addr);
+- /* Register gsmtty driver,report gsmtty dev add uevent for user */
+- tty_register_device(gsm_tty_driver, dlci->addr, NULL);
++ /* Send current modem state */
++ if (dlci->addr)
++ gsm_modem_update(dlci, 0);
+ wake_up(&dlci->gsm->event);
+ }
+
+@@ -1623,6 +1662,7 @@ static void gsm_dlci_data(struct gsm_dlci *dlci, const u8 *data, int clen)
+ tty = tty_port_tty_get(port);
+ if (tty) {
+ gsm_process_modem(tty, dlci, modem, slen);
++ tty_wakeup(tty);
+ tty_kref_put(tty);
+ }
+ fallthrough;
+@@ -1793,19 +1833,7 @@ static void gsm_queue(struct gsm_mux *gsm)
+ struct gsm_dlci *dlci;
+ u8 cr;
+ int address;
+- int i, j, k, address_tmp;
+- /* We have to sneak a look at the packet body to do the FCS.
+- A somewhat layering violation in the spec */
+
+- if ((gsm->control & ~PF) == UI)
+- gsm->fcs = gsm_fcs_add_block(gsm->fcs, gsm->buf, gsm->len);
+- if (gsm->encoding == 0) {
+- /* WARNING: gsm->received_fcs is used for
+- gsm->encoding = 0 only.
+- In this case it contain the last piece of data
+- required to generate final CRC */
+- gsm->fcs = gsm_fcs_add(gsm->fcs, gsm->received_fcs);
+- }
+ if (gsm->fcs != GOOD_FCS) {
+ gsm->bad_fcs++;
+ if (debug & 4)
+@@ -1836,11 +1864,6 @@ static void gsm_queue(struct gsm_mux *gsm)
+ else {
+ gsm_response(gsm, address, UA|PF);
+ gsm_dlci_open(dlci);
+- /* Save dlci open address */
+- if (address) {
+- addr_open[addr_cnt] = address;
+- addr_cnt++;
+- }
+ }
+ break;
+ case DISC|PF:
+@@ -1851,35 +1874,9 @@ static void gsm_queue(struct gsm_mux *gsm)
+ return;
+ }
+ /* Real close complete */
+- if (!address) {
+- if (addr_cnt > 0) {
+- for (i = 0; i < addr_cnt; i++) {
+- address = addr_open[i];
+- dlci = gsm->dlci[address];
+- gsm_dlci_close(dlci);
+- addr_open[i] = 0;
+- }
+- }
+- dlci = gsm->dlci[0];
+- gsm_dlci_close(dlci);
+- addr_cnt = 0;
+- gsm_response(gsm, 0, UA|PF);
+- } else {
+- gsm_response(gsm, address, UA|PF);
+- gsm_dlci_close(dlci);
+- /* clear dlci address */
+- for (j = 0; j < addr_cnt; j++) {
+- address_tmp = addr_open[j];
+- if (address_tmp == address) {
+- for (k = j; k < addr_cnt; k++)
+- addr_open[k] = addr_open[k+1];
+- addr_cnt--;
+- break;
+- }
+- }
+- }
++ gsm_response(gsm, address, UA|PF);
++ gsm_dlci_close(dlci);
+ break;
+- case UA:
+ case UA|PF:
+ if (cr == 0 || dlci == NULL)
+ break;
+@@ -1993,19 +1990,25 @@ static void gsm0_receive(struct gsm_mux *gsm, unsigned char c)
+ break;
+ case GSM_DATA: /* Data */
+ gsm->buf[gsm->count++] = c;
+- if (gsm->count == gsm->len)
++ if (gsm->count == gsm->len) {
++ /* Calculate final FCS for UI frames over all data */
++ if ((gsm->control & ~PF) != UIH) {
++ gsm->fcs = gsm_fcs_add_block(gsm->fcs, gsm->buf,
++ gsm->count);
++ }
+ gsm->state = GSM_FCS;
++ }
+ break;
+ case GSM_FCS: /* FCS follows the packet */
+- gsm->received_fcs = c;
+- gsm_queue(gsm);
++ gsm->fcs = gsm_fcs_add(gsm->fcs, c);
+ gsm->state = GSM_SSOF;
+ break;
+ case GSM_SSOF:
+- if (c == GSM0_SOF) {
+- gsm->state = GSM_SEARCH;
+- break;
+- }
++ gsm->state = GSM_SEARCH;
++ if (c == GSM0_SOF)
++ gsm_queue(gsm);
++ else
++ gsm->bad_size++;
+ break;
+ default:
+ pr_debug("%s: unhandled state: %d\n", __func__, gsm->state);
+@@ -2023,12 +2026,35 @@ static void gsm0_receive(struct gsm_mux *gsm, unsigned char c)
+
+ static void gsm1_receive(struct gsm_mux *gsm, unsigned char c)
+ {
++ /* handle XON/XOFF */
++ if ((c & ISO_IEC_646_MASK) == XON) {
++ gsm->constipated = true;
++ return;
++ } else if ((c & ISO_IEC_646_MASK) == XOFF) {
++ gsm->constipated = false;
++ /* Kick the link in case it is idling */
++ gsm_data_kick(gsm, NULL);
++ return;
++ }
+ if (c == GSM1_SOF) {
+- /* EOF is only valid in frame if we have got to the data state
+- and received at least one byte (the FCS) */
+- if (gsm->state == GSM_DATA && gsm->count) {
+- /* Extract the FCS */
++ /* EOF is only valid in frame if we have got to the data state */
++ if (gsm->state == GSM_DATA) {
++ if (gsm->count < 1) {
++ /* Missing FCS */
++ gsm->malformed++;
++ gsm->state = GSM_START;
++ return;
++ }
++ /* Remove the FCS from data */
+ gsm->count--;
++ if ((gsm->control & ~PF) != UIH) {
++ /* Calculate final FCS for UI frames over all
++ * data but FCS
++ */
++ gsm->fcs = gsm_fcs_add_block(gsm->fcs, gsm->buf,
++ gsm->count);
++ }
++ /* Add the FCS itself to test against GOOD_FCS */
+ gsm->fcs = gsm_fcs_add(gsm->fcs, gsm->buf[gsm->count]);
+ gsm->len = gsm->count;
+ gsm_queue(gsm);
+@@ -2037,7 +2063,8 @@ static void gsm1_receive(struct gsm_mux *gsm, unsigned char c)
+ }
+ /* Any partial frame was a runt so go back to start */
+ if (gsm->state != GSM_START) {
+- gsm->malformed++;
++ if (gsm->state != GSM_SEARCH)
++ gsm->malformed++;
+ gsm->state = GSM_START;
+ }
+ /* A SOF in GSM_START means we are still reading idling or
+@@ -2106,74 +2133,43 @@ static void gsm_error(struct gsm_mux *gsm)
+ gsm->io_error++;
+ }
+
+-static int gsm_disconnect(struct gsm_mux *gsm)
+-{
+- struct gsm_dlci *dlci = gsm->dlci[0];
+- struct gsm_control *gc;
+-
+- if (!dlci)
+- return 0;
+-
+- /* In theory disconnecting DLCI 0 is sufficient but for some
+- modems this is apparently not the case. */
+- gc = gsm_control_send(gsm, CMD_CLD, NULL, 0);
+- if (gc)
+- gsm_control_wait(gsm, gc);
+-
+- del_timer_sync(&gsm->t2_timer);
+- /* Now we are sure T2 has stopped */
+-
+- gsm_dlci_begin_close(dlci);
+- wait_event_interruptible(gsm->event,
+- dlci->state == DLCI_CLOSED);
+-
+- if (signal_pending(current))
+- return -EINTR;
+-
+- return 0;
+-}
+-
+ /**
+ * gsm_cleanup_mux - generic GSM protocol cleanup
+ * @gsm: our mux
++ * @disc: disconnect link?
+ *
+ * Clean up the bits of the mux which are the same for all framing
+ * protocols. Remove the mux from the mux table, stop all the timers
+ * and then shut down each device hanging up the channels as we go.
+ */
+
+-static void gsm_cleanup_mux(struct gsm_mux *gsm)
++static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ {
+ int i;
+ struct gsm_dlci *dlci = gsm->dlci[0];
+ struct gsm_msg *txq, *ntxq;
+
+ gsm->dead = true;
++ mutex_lock(&gsm->mutex);
+
+- spin_lock(&gsm_mux_lock);
+- for (i = 0; i < MAX_MUX; i++) {
+- if (gsm_mux[i] == gsm) {
+- gsm_mux[i] = NULL;
+- break;
++ if (dlci) {
++ if (disc && dlci->state != DLCI_CLOSED) {
++ gsm_dlci_begin_close(dlci);
++ wait_event(gsm->event, dlci->state == DLCI_CLOSED);
+ }
++ dlci->dead = true;
+ }
+- spin_unlock(&gsm_mux_lock);
+- /* open failed before registering => nothing to do */
+- if (i == MAX_MUX)
+- return;
+
++ /* Finish outstanding timers, making sure they are done */
+ del_timer_sync(&gsm->t2_timer);
+- /* Now we are sure T2 has stopped */
+- if (dlci)
+- dlci->dead = true;
+
+- /* Free up any link layer users */
+- mutex_lock(&gsm->mutex);
+- for (i = 0; i < NUM_DLCI; i++)
++ /* Free up any link layer users and finally the control channel */
++ for (i = NUM_DLCI - 1; i >= 0; i--)
+ if (gsm->dlci[i])
+ gsm_dlci_release(gsm->dlci[i]);
+ mutex_unlock(&gsm->mutex);
+ /* Now wipe the queues */
++ tty_ldisc_flush(gsm->tty);
+ list_for_each_entry_safe(txq, ntxq, &gsm->tx_list, list)
+ kfree(txq);
+ INIT_LIST_HEAD(&gsm->tx_list);
+@@ -2191,7 +2187,6 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm)
+ static int gsm_activate_mux(struct gsm_mux *gsm)
+ {
+ struct gsm_dlci *dlci;
+- int i = 0;
+
+ timer_setup(&gsm->t2_timer, gsm_control_retransmit, 0);
+ init_waitqueue_head(&gsm->event);
+@@ -2203,18 +2198,6 @@ static int gsm_activate_mux(struct gsm_mux *gsm)
+ else
+ gsm->receive = gsm1_receive;
+
+- spin_lock(&gsm_mux_lock);
+- for (i = 0; i < MAX_MUX; i++) {
+- if (gsm_mux[i] == NULL) {
+- gsm->num = i;
+- gsm_mux[i] = gsm;
+- break;
+- }
+- }
+- spin_unlock(&gsm_mux_lock);
+- if (i == MAX_MUX)
+- return -EBUSY;
+-
+ dlci = gsm_dlci_alloc(gsm, 0);
+ if (dlci == NULL)
+ return -ENOMEM;
+@@ -2230,6 +2213,15 @@ static int gsm_activate_mux(struct gsm_mux *gsm)
+ */
+ static void gsm_free_mux(struct gsm_mux *gsm)
+ {
++ int i;
++
++ for (i = 0; i < MAX_MUX; i++) {
++ if (gsm == gsm_mux[i]) {
++ gsm_mux[i] = NULL;
++ break;
++ }
++ }
++ mutex_destroy(&gsm->mutex);
+ kfree(gsm->txframe);
+ kfree(gsm->buf);
+ kfree(gsm);
+@@ -2249,12 +2241,20 @@ static void gsm_free_muxr(struct kref *ref)
+
+ static inline void mux_get(struct gsm_mux *gsm)
+ {
++ unsigned long flags;
++
++ spin_lock_irqsave(&gsm_mux_lock, flags);
+ kref_get(&gsm->ref);
++ spin_unlock_irqrestore(&gsm_mux_lock, flags);
+ }
+
+ static inline void mux_put(struct gsm_mux *gsm)
+ {
++ unsigned long flags;
++
++ spin_lock_irqsave(&gsm_mux_lock, flags);
+ kref_put(&gsm->ref, gsm_free_muxr);
++ spin_unlock_irqrestore(&gsm_mux_lock, flags);
+ }
+
+ static inline unsigned int mux_num_to_base(struct gsm_mux *gsm)
+@@ -2275,6 +2275,7 @@ static inline unsigned int mux_line_to_num(unsigned int line)
+
+ static struct gsm_mux *gsm_alloc_mux(void)
+ {
++ int i;
+ struct gsm_mux *gsm = kzalloc(sizeof(struct gsm_mux), GFP_KERNEL);
+ if (gsm == NULL)
+ return NULL;
+@@ -2283,7 +2284,7 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ kfree(gsm);
+ return NULL;
+ }
+- gsm->txframe = kmalloc(2 * MAX_MRU + 2, GFP_KERNEL);
++ gsm->txframe = kmalloc(2 * (MAX_MTU + PROT_OVERHEAD - 1), GFP_KERNEL);
+ if (gsm->txframe == NULL) {
+ kfree(gsm->buf);
+ kfree(gsm);
+@@ -2304,6 +2305,26 @@ static struct gsm_mux *gsm_alloc_mux(void)
+ gsm->mtu = 64;
+ gsm->dead = true; /* Avoid early tty opens */
+
++ /* Store the instance to the mux array or abort if no space is
++ * available.
++ */
++ spin_lock(&gsm_mux_lock);
++ for (i = 0; i < MAX_MUX; i++) {
++ if (!gsm_mux[i]) {
++ gsm_mux[i] = gsm;
++ gsm->num = i;
++ break;
++ }
++ }
++ spin_unlock(&gsm_mux_lock);
++ if (i == MAX_MUX) {
++ mutex_destroy(&gsm->mutex);
++ kfree(gsm->txframe);
++ kfree(gsm->buf);
++ kfree(gsm);
++ return NULL;
++ }
++
+ return gsm;
+ }
+
+@@ -2339,7 +2360,7 @@ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+ /* Check the MRU/MTU range looks sane */
+ if (c->mru > MAX_MRU || c->mtu > MAX_MTU || c->mru < 8 || c->mtu < 8)
+ return -EINVAL;
+- if (c->n2 < 3)
++ if (c->n2 > 255)
+ return -EINVAL;
+ if (c->encapsulation > 1) /* Basic, advanced, no I */
+ return -EINVAL;
+@@ -2370,19 +2391,11 @@ static int gsm_config(struct gsm_mux *gsm, struct gsm_config *c)
+
+ /*
+ * Close down what is needed, restart and initiate the new
+- * configuration
++ * configuration. On the first time there is no DLCI[0]
++ * and closing or cleaning up is not necessary.
+ */
+-
+- if (gsm->initiator && (need_close || need_restart)) {
+- int ret;
+-
+- ret = gsm_disconnect(gsm);
+-
+- if (ret)
+- return ret;
+- }
+- if (need_restart)
+- gsm_cleanup_mux(gsm);
++ if (need_close || need_restart)
++ gsm_cleanup_mux(gsm, true);
+
+ gsm->initiator = c->initiator;
+ gsm->mru = c->mru;
+@@ -2450,25 +2463,26 @@ static int gsmld_attach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ int ret, i;
+
+ gsm->tty = tty_kref_get(tty);
++ /* Turn off tty XON/XOFF handling to handle it explicitly. */
++ gsm->old_c_iflag = tty->termios.c_iflag;
++ tty->termios.c_iflag &= (IXON | IXOFF);
+ ret = gsm_activate_mux(gsm);
+ if (ret != 0)
+ tty_kref_put(gsm->tty);
+ else {
+ /* Don't register device 0 - this is the control channel and not
+ a usable tty interface */
+- if (gsm->initiator) {
+- base = mux_num_to_base(gsm); /* Base for this MUX */
+- for (i = 1; i < NUM_DLCI; i++) {
+- struct device *dev;
++ base = mux_num_to_base(gsm); /* Base for this MUX */
++ for (i = 1; i < NUM_DLCI; i++) {
++ struct device *dev;
+
+- dev = tty_register_device(gsm_tty_driver,
++ dev = tty_register_device(gsm_tty_driver,
+ base + i, NULL);
+- if (IS_ERR(dev)) {
+- for (i--; i >= 1; i--)
+- tty_unregister_device(gsm_tty_driver,
+- base + i);
+- return PTR_ERR(dev);
+- }
++ if (IS_ERR(dev)) {
++ for (i--; i >= 1; i--)
++ tty_unregister_device(gsm_tty_driver,
++ base + i);
++ return PTR_ERR(dev);
+ }
+ }
+ }
+@@ -2490,11 +2504,10 @@ static void gsmld_detach_gsm(struct tty_struct *tty, struct gsm_mux *gsm)
+ int i;
+
+ WARN_ON(tty != gsm->tty);
+- if (gsm->initiator) {
+- for (i = 1; i < NUM_DLCI; i++)
+- tty_unregister_device(gsm_tty_driver, base + i);
+- }
+- gsm_cleanup_mux(gsm);
++ for (i = 1; i < NUM_DLCI; i++)
++ tty_unregister_device(gsm_tty_driver, base + i);
++ /* Restore tty XON/XOFF handling. */
++ gsm->tty->termios.c_iflag = gsm->old_c_iflag;
+ tty_kref_put(gsm->tty);
+ gsm->tty = NULL;
+ }
+@@ -2559,6 +2572,12 @@ static void gsmld_close(struct tty_struct *tty)
+ {
+ struct gsm_mux *gsm = tty->disc_data;
+
++ /* The ldisc locks and closes the port before calling our close. This
++ * means we have no way to do a proper disconnect. We will not bother
++ * to do one.
++ */
++ gsm_cleanup_mux(gsm, false);
++
+ gsmld_detach_gsm(tty, gsm);
+
+ gsmld_flush_buffer(tty);
+@@ -2597,7 +2616,7 @@ static int gsmld_open(struct tty_struct *tty)
+
+ ret = gsmld_attach_gsm(tty, gsm);
+ if (ret != 0) {
+- gsm_cleanup_mux(gsm);
++ gsm_cleanup_mux(gsm, false);
+ mux_put(gsm);
+ }
+ return ret;
+@@ -2954,26 +2973,78 @@ static struct tty_ldisc_ops tty_ldisc_packet = {
+
+ #define TX_SIZE 512
+
+-static int gsmtty_modem_update(struct gsm_dlci *dlci, u8 brk)
++/**
++ * gsm_modem_upd_via_data - send modem bits via convergence layer
++ * @dlci: channel
++ * @brk: break signal
++ *
++ * Send an empty frame to signal mobile state changes and to transmit the
++ * break signal for adaption 2.
++ */
++
++static void gsm_modem_upd_via_data(struct gsm_dlci *dlci, u8 brk)
+ {
+- u8 modembits[5];
++ struct gsm_mux *gsm = dlci->gsm;
++ unsigned long flags;
++
++ if (dlci->state != DLCI_OPEN || dlci->adaption != 2)
++ return;
++
++ spin_lock_irqsave(&gsm->tx_lock, flags);
++ gsm_dlci_modem_output(gsm, dlci, brk);
++ spin_unlock_irqrestore(&gsm->tx_lock, flags);
++}
++
++/**
++ * gsm_modem_upd_via_msc - send modem bits via control frame
++ * @dlci: channel
++ * @brk: break signal
++ */
++
++static int gsm_modem_upd_via_msc(struct gsm_dlci *dlci, u8 brk)
++{
++ u8 modembits[3];
+ struct gsm_control *ctrl;
+ int len = 2;
+
+- if (brk)
+- len++;
++ if (dlci->gsm->encoding != 0)
++ return 0;
+
+- modembits[0] = len << 1 | EA; /* Data bytes */
+- modembits[1] = dlci->addr << 2 | 3; /* DLCI, EA, 1 */
+- modembits[2] = gsm_encode_modem(dlci) << 1 | EA;
+- if (brk)
+- modembits[3] = brk << 4 | 2 | EA; /* Valid, EA */
+- ctrl = gsm_control_send(dlci->gsm, CMD_MSC, modembits, len + 1);
++ modembits[0] = (dlci->addr << 2) | 2 | EA; /* DLCI, Valid, EA */
++ if (!brk) {
++ modembits[1] = (gsm_encode_modem(dlci) << 1) | EA;
++ } else {
++ modembits[1] = gsm_encode_modem(dlci) << 1;
++ modembits[2] = (brk << 4) | 2 | EA; /* Length, Break, EA */
++ len++;
++ }
++ ctrl = gsm_control_send(dlci->gsm, CMD_MSC, modembits, len);
+ if (ctrl == NULL)
+ return -ENOMEM;
+ return gsm_control_wait(dlci->gsm, ctrl);
+ }
+
++/**
++ * gsm_modem_update - send modem status line state
++ * @dlci: channel
++ * @brk: break signal
++ */
++
++static int gsm_modem_update(struct gsm_dlci *dlci, u8 brk)
++{
++ if (dlci->adaption == 2) {
++ /* Send convergence layer type 2 empty data frame. */
++ gsm_modem_upd_via_data(dlci, brk);
++ return 0;
++ } else if (dlci->gsm->encoding == 0) {
++ /* Send as MSC control message. */
++ return gsm_modem_upd_via_msc(dlci, brk);
++ }
++
++ /* Modem status lines are not supported. */
++ return -EPROTONOSUPPORT;
++}
++
+ static int gsm_carrier_raised(struct tty_port *port)
+ {
+ struct gsm_dlci *dlci = container_of(port, struct gsm_dlci, port);
+@@ -3006,7 +3077,7 @@ static void gsm_dtr_rts(struct tty_port *port, int onoff)
+ modem_tx &= ~(TIOCM_DTR | TIOCM_RTS);
+ if (modem_tx != dlci->modem_tx) {
+ dlci->modem_tx = modem_tx;
+- gsmtty_modem_update(dlci, 0);
++ gsm_modem_update(dlci, 0);
+ }
+ }
+
+@@ -3155,13 +3226,17 @@ static unsigned int gsmtty_chars_in_buffer(struct tty_struct *tty)
+ static void gsmtty_flush_buffer(struct tty_struct *tty)
+ {
+ struct gsm_dlci *dlci = tty->driver_data;
++ unsigned long flags;
++
+ if (dlci->state == DLCI_CLOSED)
+ return;
+ /* Caution needed: If we implement reliable transport classes
+ then the data being transmitted can't simply be junked once
+ it has first hit the stack. Until then we can just blow it
+ away */
++ spin_lock_irqsave(&dlci->lock, flags);
+ kfifo_reset(&dlci->fifo);
++ spin_unlock_irqrestore(&dlci->lock, flags);
+ /* Need to unhook this DLCI from the transmit queue logic */
+ }
+
+@@ -3193,7 +3268,7 @@ static int gsmtty_tiocmset(struct tty_struct *tty,
+
+ if (modem_tx != dlci->modem_tx) {
+ dlci->modem_tx = modem_tx;
+- return gsmtty_modem_update(dlci, 0);
++ return gsm_modem_update(dlci, 0);
+ }
+ return 0;
+ }
+@@ -3254,7 +3329,7 @@ static void gsmtty_throttle(struct tty_struct *tty)
+ dlci->modem_tx &= ~TIOCM_RTS;
+ dlci->throttled = true;
+ /* Send an MSC with RTS cleared */
+- gsmtty_modem_update(dlci, 0);
++ gsm_modem_update(dlci, 0);
+ }
+
+ static void gsmtty_unthrottle(struct tty_struct *tty)
+@@ -3266,7 +3341,7 @@ static void gsmtty_unthrottle(struct tty_struct *tty)
+ dlci->modem_tx |= TIOCM_RTS;
+ dlci->throttled = false;
+ /* Send an MSC with RTS set */
+- gsmtty_modem_update(dlci, 0);
++ gsm_modem_update(dlci, 0);
+ }
+
+ static int gsmtty_break_ctl(struct tty_struct *tty, int state)
+@@ -3284,7 +3359,7 @@ static int gsmtty_break_ctl(struct tty_struct *tty, int state)
+ if (encode > 0x0F)
+ encode = 0x0F; /* Best effort */
+ }
+- return gsmtty_modem_update(dlci, encode);
++ return gsm_modem_update(dlci, encode);
+ }
+
+ static void gsmtty_cleanup(struct tty_struct *tty)
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index e17e97ea86fad..a293e9f107d0f 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2667,7 +2667,7 @@ enum pci_board_num_t {
+ pbn_panacom2,
+ pbn_panacom4,
+ pbn_plx_romulus,
+- pbn_endrun_2_4000000,
++ pbn_endrun_2_3906250,
+ pbn_oxsemi,
+ pbn_oxsemi_1_3906250,
+ pbn_oxsemi_2_3906250,
+@@ -3195,10 +3195,10 @@ static struct pciserial_board pci_boards[] = {
+ * signal now many ports are available
+ * 2 port 952 Uart support
+ */
+- [pbn_endrun_2_4000000] = {
++ [pbn_endrun_2_3906250] = {
+ .flags = FL_BASE0,
+ .num_ports = 2,
+- .base_baud = 4000000,
++ .base_baud = 3906250,
+ .uart_offset = 0x200,
+ .first_offset = 0x1000,
+ },
+@@ -4115,7 +4115,7 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ */
+ { PCI_VENDOR_ID_ENDRUN, PCI_DEVICE_ID_ENDRUN_1588,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_endrun_2_4000000 },
++ pbn_endrun_2_3906250 },
+ /*
+ * Quatech cards. These actually have configurable clocks but for
+ * now we just use the default.
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 9f116e75956e2..992b71efd4a36 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -3340,7 +3340,7 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+
+ serial8250_set_divisor(port, baud, quot, frac);
+ serial_port_out(port, UART_LCR, up->lcr);
+- serial8250_out_MCR(up, UART_MCR_DTR | UART_MCR_RTS);
++ serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS);
+ }
+
+ /*
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index ba053a68529f7..26d2a031aee2d 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1255,13 +1255,18 @@ static inline bool pl011_dma_rx_running(struct uart_amba_port *uap)
+
+ static void pl011_rs485_tx_stop(struct uart_amba_port *uap)
+ {
++ /*
++ * To be on the safe side only time out after twice as many iterations
++ * as fifo size.
++ */
++ const int MAX_TX_DRAIN_ITERS = uap->port.fifosize * 2;
+ struct uart_port *port = &uap->port;
+ int i = 0;
+ u32 cr;
+
+ /* Wait until hardware tx queue is empty */
+ while (!pl011_tx_empty(port)) {
+- if (i == port->fifosize) {
++ if (i > MAX_TX_DRAIN_ITERS) {
+ dev_warn(port->dev,
+ "timeout while draining hardware tx queue\n");
+ break;
+@@ -2052,7 +2057,7 @@ pl011_set_termios(struct uart_port *port, struct ktermios *termios,
+ * with the given baud rate. We use this as the poll interval when we
+ * wait for the tx queue to empty.
+ */
+- uap->rs485_tx_drain_interval = (bits * 1000 * 1000) / baud;
++ uap->rs485_tx_drain_interval = DIV_ROUND_UP(bits * 1000 * 1000, baud);
+
+ pl011_setup_status_masks(port, termios);
+
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index df8a0c8b8b29b..8dcffa69088a4 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -1438,7 +1438,7 @@ static int imx_uart_startup(struct uart_port *port)
+ imx_uart_writel(sport, ucr1, UCR1);
+
+ ucr4 = imx_uart_readl(sport, UCR4) & ~(UCR4_OREN | UCR4_INVR);
+- if (!sport->dma_is_enabled)
++ if (!dma_is_inited)
+ ucr4 |= UCR4_OREN;
+ if (sport->inverted_rx)
+ ucr4 |= UCR4_INVR;
+diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
+index f9af7ebe003d7..d6d515d598dc0 100644
+--- a/drivers/usb/cdns3/cdns3-gadget.c
++++ b/drivers/usb/cdns3/cdns3-gadget.c
+@@ -2684,6 +2684,7 @@ int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
+ struct usb_request *request;
+ struct cdns3_request *priv_req;
+ struct cdns3_trb *trb = NULL;
++ struct cdns3_trb trb_tmp;
+ int ret;
+ int val;
+
+@@ -2693,8 +2694,10 @@ int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
+ if (request) {
+ priv_req = to_cdns3_request(request);
+ trb = priv_req->trb;
+- if (trb)
++ if (trb) {
++ trb_tmp = *trb;
+ trb->control = trb->control ^ cpu_to_le32(TRB_CYCLE);
++ }
+ }
+
+ writel(EP_CMD_CSTALL | EP_CMD_EPRST, &priv_dev->regs->ep_cmd);
+@@ -2709,7 +2712,7 @@ int __cdns3_gadget_ep_clear_halt(struct cdns3_endpoint *priv_ep)
+
+ if (request) {
+ if (trb)
+- trb->control = trb->control ^ cpu_to_le32(TRB_CYCLE);
++ *trb = trb_tmp;
+
+ cdns3_rearm_transfer(priv_ep, 1);
+ }
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index fa66e6e587928..656ba91c32831 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -1197,12 +1197,16 @@ static int do_proc_control(struct usb_dev_state *ps,
+
+ usb_unlock_device(dev);
+ i = usbfs_start_wait_urb(urb, tmo, &actlen);
++
++ /* Linger a bit, prior to the next control message. */
++ if (dev->quirks & USB_QUIRK_DELAY_CTRL_MSG)
++ msleep(200);
+ usb_lock_device(dev);
+ snoop_urb(dev, NULL, pipe, actlen, i, COMPLETE, tbuf, actlen);
+ if (!i && actlen) {
+ if (copy_to_user(ctrl->data, tbuf, actlen)) {
+ ret = -EFAULT;
+- goto recv_fault;
++ goto done;
+ }
+ }
+ } else {
+@@ -1219,6 +1223,10 @@ static int do_proc_control(struct usb_dev_state *ps,
+
+ usb_unlock_device(dev);
+ i = usbfs_start_wait_urb(urb, tmo, &actlen);
++
++ /* Linger a bit, prior to the next control message. */
++ if (dev->quirks & USB_QUIRK_DELAY_CTRL_MSG)
++ msleep(200);
+ usb_lock_device(dev);
+ snoop_urb(dev, NULL, pipe, actlen, i, COMPLETE, NULL, 0);
+ }
+@@ -1230,10 +1238,6 @@ static int do_proc_control(struct usb_dev_state *ps,
+ }
+ ret = (i < 0 ? i : actlen);
+
+- recv_fault:
+- /* Linger a bit, prior to the next control message. */
+- if (dev->quirks & USB_QUIRK_DELAY_CTRL_MSG)
+- msleep(200);
+ done:
+ kfree(dr);
+ usb_free_urb(urb);
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index d3c14b5ed4a1f..97b44a68668a5 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -404,6 +404,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0b05, 0x17e0), .driver_info =
+ USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+
++ /* Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader)*/
++ { USB_DEVICE(0x0bda, 0x0151), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
++
+ /* Realtek hub in Dell WD19 (Type-C) */
+ { USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM },
+
+@@ -507,6 +510,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* DJI CineSSD */
+ { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+
++ /* VCOM device */
++ { USB_DEVICE(0x4296, 0x7570), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
++
+ /* INTEL VALUE SSD */
+ { USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
+
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index f4c09951b517e..f9cdfd9606864 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -276,7 +276,8 @@ static int dwc3_core_soft_reset(struct dwc3 *dwc)
+
+ reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ reg |= DWC3_DCTL_CSFTRST;
+- dwc3_writel(dwc->regs, DWC3_DCTL, reg);
++ reg &= ~DWC3_DCTL_RUN_STOP;
++ dwc3_gadget_dctl_write_safe(dwc, reg);
+
+ /*
+ * For DWC_usb31 controller 1.90a and later, the DCTL.CSFRST bit
+@@ -1295,10 +1296,10 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ u8 lpm_nyet_threshold;
+ u8 tx_de_emphasis;
+ u8 hird_threshold;
+- u8 rx_thr_num_pkt_prd;
+- u8 rx_max_burst_prd;
+- u8 tx_thr_num_pkt_prd;
+- u8 tx_max_burst_prd;
++ u8 rx_thr_num_pkt_prd = 0;
++ u8 rx_max_burst_prd = 0;
++ u8 tx_thr_num_pkt_prd = 0;
++ u8 tx_max_burst_prd = 0;
+ u8 tx_fifo_resize_max_num;
+ const char *usb_psy_name;
+ int ret;
+diff --git a/drivers/usb/dwc3/drd.c b/drivers/usb/dwc3/drd.c
+index d7f76835137fa..f148b0370f829 100644
+--- a/drivers/usb/dwc3/drd.c
++++ b/drivers/usb/dwc3/drd.c
+@@ -571,16 +571,15 @@ int dwc3_drd_init(struct dwc3 *dwc)
+ {
+ int ret, irq;
+
++ if (ROLE_SWITCH &&
++ device_property_read_bool(dwc->dev, "usb-role-switch"))
++ return dwc3_setup_role_switch(dwc);
++
+ dwc->edev = dwc3_get_extcon(dwc);
+ if (IS_ERR(dwc->edev))
+ return PTR_ERR(dwc->edev);
+
+- if (ROLE_SWITCH &&
+- device_property_read_bool(dwc->dev, "usb-role-switch")) {
+- ret = dwc3_setup_role_switch(dwc);
+- if (ret < 0)
+- return ret;
+- } else if (dwc->edev) {
++ if (dwc->edev) {
+ dwc->edev_nb.notifier_call = dwc3_drd_notifier;
+ ret = extcon_register_notifier(dwc->edev, EXTCON_USB_HOST,
+ &dwc->edev_nb);
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 4d9608cc55f73..f08b2178fd32d 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -44,6 +44,8 @@
+ #define PCI_DEVICE_ID_INTEL_ADLM 0x54ee
+ #define PCI_DEVICE_ID_INTEL_ADLS 0x7ae1
+ #define PCI_DEVICE_ID_INTEL_RPLS 0x7a61
++#define PCI_DEVICE_ID_INTEL_MTLP 0x7ec1
++#define PCI_DEVICE_ID_INTEL_MTL 0x7e7e
+ #define PCI_DEVICE_ID_INTEL_TGL 0x9a15
+ #define PCI_DEVICE_ID_AMD_MR 0x163a
+
+@@ -421,6 +423,12 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPLS),
+ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLP),
++ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
++
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL),
++ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
++
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL),
+ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index a0c883f19a417..836049887ac83 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3229,6 +3229,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ const struct dwc3_event_depevt *event,
+ struct dwc3_request *req, int status)
+ {
++ int request_status;
+ int ret;
+
+ if (req->request.num_mapped_sgs)
+@@ -3249,7 +3250,35 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ req->needs_extra_trb = false;
+ }
+
+- dwc3_gadget_giveback(dep, req, status);
++ /*
++ * The event status only reflects the status of the TRB with IOC set.
++ * For the requests that don't set interrupt on completion, the driver
++ * needs to check and return the status of the completed TRBs associated
++ * with the request. Use the status of the last TRB of the request.
++ */
++ if (req->request.no_interrupt) {
++ struct dwc3_trb *trb;
++
++ trb = dwc3_ep_prev_trb(dep, dep->trb_dequeue);
++ switch (DWC3_TRB_SIZE_TRBSTS(trb->size)) {
++ case DWC3_TRBSTS_MISSED_ISOC:
++ /* Isoc endpoint only */
++ request_status = -EXDEV;
++ break;
++ case DWC3_TRB_STS_XFER_IN_PROG:
++ /* Applicable when End Transfer with ForceRM=0 */
++ case DWC3_TRBSTS_SETUP_PENDING:
++ /* Control endpoint only */
++ case DWC3_TRBSTS_OK:
++ default:
++ request_status = 0;
++ break;
++ }
++ } else {
++ request_status = status;
++ }
++
++ dwc3_gadget_giveback(dep, req, request_status);
+
+ out:
+ return ret;
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index d4a678c0806e3..34d45525be1b5 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -1434,6 +1434,8 @@ static void configfs_composite_unbind(struct usb_gadget *gadget)
+ usb_ep_autoconfig_reset(cdev->gadget);
+ spin_lock_irqsave(&gi->spinlock, flags);
+ cdev->gadget = NULL;
++ cdev->deactivations = 0;
++ gadget->deactivated = false;
+ set_gadget_data(gadget, NULL);
+ spin_unlock_irqrestore(&gi->spinlock, flags);
+ }
+diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
+index d852ac9e47e72..2cda982f37650 100644
+--- a/drivers/usb/gadget/function/uvc_queue.c
++++ b/drivers/usb/gadget/function/uvc_queue.c
+@@ -264,6 +264,8 @@ void uvcg_queue_cancel(struct uvc_video_queue *queue, int disconnect)
+ buf->state = UVC_BUF_STATE_ERROR;
+ vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_ERROR);
+ }
++ queue->buf_used = 0;
++
+ /* This must be protected by the irqlock spinlock to avoid race
+ * conditions between uvc_queue_buffer and the disconnection event that
+ * could result in an interruptible wait in uvc_dequeue_buffer. Do not
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 1e7dc130c39a6..f65f1ba2b5929 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1434,7 +1434,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ }
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ if (!wait_for_completion_timeout(&bus_state->u3exit_done[wIndex],
+- msecs_to_jiffies(100)))
++ msecs_to_jiffies(500)))
+ xhci_dbg(xhci, "missing U0 port change event for port %d-%d\n",
+ hcd->self.busnum, wIndex + 1);
+ spin_lock_irqsave(&xhci->lock, flags);
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 5c351970cdf1c..d7e0e6ebf0800 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed
+
+ #define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639
+ #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9
+@@ -266,7 +267,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ pdev->device == PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+- pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI))
++ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
++ pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
+ xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index d0b6806275e01..f9707997969d4 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -3141,6 +3141,7 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
+ if (event_loop++ < TRBS_PER_SEGMENT / 2)
+ continue;
+ xhci_update_erst_dequeue(xhci, event_ring_deq);
++ event_ring_deq = xhci->event_ring->dequeue;
+
+ /* ring is half-full, force isoc trbs to interrupt more often */
+ if (xhci->isoc_bei_interval > AVOID_BEI_INTERVAL_MIN)
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index c8af2cd2216d6..996958a6565c3 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -1034,13 +1034,13 @@ static int tegra_xusb_unpowergate_partitions(struct tegra_xusb *tegra)
+ int rc;
+
+ if (tegra->use_genpd) {
+- rc = pm_runtime_get_sync(tegra->genpd_dev_ss);
++ rc = pm_runtime_resume_and_get(tegra->genpd_dev_ss);
+ if (rc < 0) {
+ dev_err(dev, "failed to enable XUSB SS partition\n");
+ return rc;
+ }
+
+- rc = pm_runtime_get_sync(tegra->genpd_dev_host);
++ rc = pm_runtime_resume_and_get(tegra->genpd_dev_host);
+ if (rc < 0) {
+ dev_err(dev, "failed to enable XUSB Host partition\n");
+ pm_runtime_put_sync(tegra->genpd_dev_ss);
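The pm_runtime_get_sync() to pm_runtime_resume_and_get() conversion above is worth spelling out: the older call bumps the device usage counter even when the resume fails, so every error path must drop that reference by hand, which is easy to forget. A minimal kernel-style sketch of the difference (illustrative only, not code from this driver):

#include <linux/pm_runtime.h>

static int resume_old_style(struct device *dev)
{
	int ret = pm_runtime_get_sync(dev);

	if (ret < 0) {
		/* the usage count was bumped anyway: undo it by hand */
		pm_runtime_put_noidle(dev);
		return ret;
	}
	return 0;
}

static int resume_new_style(struct device *dev)
{
	/* drops its own reference on failure, so no manual cleanup */
	return pm_runtime_resume_and_get(dev);
}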
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 7d1ad8d654cbb..2b78ed3233433 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -778,6 +778,17 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ if (xhci->quirks & XHCI_SPURIOUS_REBOOT)
+ usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev));
+
++ /* Don't poll the roothubs after shutdown. */
++ xhci_dbg(xhci, "%s: stopping usb%d port polling.\n",
++ __func__, hcd->self.busnum);
++ clear_bit(HCD_FLAG_POLL_RH, &hcd->flags);
++ del_timer_sync(&hcd->rh_timer);
++
++ if (xhci->shared_hcd) {
++ clear_bit(HCD_FLAG_POLL_RH, &xhci->shared_hcd->flags);
++ del_timer_sync(&xhci->shared_hcd->rh_timer);
++ }
++
+ spin_lock_irq(&xhci->lock);
+ xhci_halt(xhci);
+ /* Workaround for spurious wakeups at shutdown with HSW */
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index 748139d262633..0be8efcda15d5 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -71,6 +71,7 @@ static void destroy_priv(struct kref *kref)
+
+ dev_dbg(&priv->usbdev->dev, "destroying priv datastructure\n");
+ usb_put_dev(priv->usbdev);
++ priv->usbdev = NULL;
+ kfree(priv);
+ }
+
+@@ -736,7 +737,6 @@ static int uss720_probe(struct usb_interface *intf,
+ parport_announce_port(pp);
+
+ usb_set_intfdata(intf, pp);
+- usb_put_dev(usbdev);
+ return 0;
+
+ probe_abort:
+@@ -754,7 +754,6 @@ static void uss720_disconnect(struct usb_interface *intf)
+ usb_set_intfdata(intf, NULL);
+ if (pp) {
+ priv = pp->private_data;
+- priv->usbdev = NULL;
+ priv->pp = NULL;
+ dev_dbg(&intf->dev, "parport_remove_port\n");
+ parport_remove_port(pp);
+diff --git a/drivers/usb/mtu3/mtu3_dr.c b/drivers/usb/mtu3/mtu3_dr.c
+index a6b04831b20bf..9b8aded3d95e9 100644
+--- a/drivers/usb/mtu3/mtu3_dr.c
++++ b/drivers/usb/mtu3/mtu3_dr.c
+@@ -21,10 +21,8 @@ static inline struct ssusb_mtk *otg_sx_to_ssusb(struct otg_switch_mtk *otg_sx)
+
+ static void toggle_opstate(struct ssusb_mtk *ssusb)
+ {
+- if (!ssusb->otg_switch.is_u3_drd) {
+- mtu3_setbits(ssusb->mac_base, U3D_DEVICE_CONTROL, DC_SESSION);
+- mtu3_setbits(ssusb->mac_base, U3D_POWER_MANAGEMENT, SOFT_CONN);
+- }
++ mtu3_setbits(ssusb->mac_base, U3D_DEVICE_CONTROL, DC_SESSION);
++ mtu3_setbits(ssusb->mac_base, U3D_POWER_MANAGEMENT, SOFT_CONN);
+ }
+
+ /* only port0 supports dual-role mode */
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index 661a229c105dd..34b9f81401871 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -268,6 +268,13 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop)
+ return -EPROBE_DEFER;
+ }
+
++ nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus");
++ if (PTR_ERR(nop->vbus_draw) == -ENODEV)
++ nop->vbus_draw = NULL;
++ if (IS_ERR(nop->vbus_draw))
++ return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
++ "could not get vbus regulator\n");
++
+ nop->dev = dev;
+ nop->phy.dev = nop->dev;
+ nop->phy.label = "nop-xceiv";
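The regulator lookup added above uses the standard optional-supply idiom: -ENODEV means the supply simply is not described, so the driver continues without it, while any other error (including -EPROBE_DEFER) is propagated through dev_err_probe(), which logs real failures but records probe deferral silently. A sketch of the same idiom in isolation (the wrapper function is illustrative; only the "vbus" handling mirrors the hunk):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

static int demo_get_optional_vbus(struct device *dev, struct regulator **out)
{
	struct regulator *reg = devm_regulator_get_exclusive(dev, "vbus");

	if (PTR_ERR(reg) == -ENODEV)	/* supply absent: treat as optional */
		reg = NULL;
	if (IS_ERR(reg))		/* real failure, incl. -EPROBE_DEFER */
		return dev_err_probe(dev, PTR_ERR(reg),
				     "could not get vbus regulator\n");

	*out = reg;
	return 0;
}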
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index a27f7efcec6a8..c374620a486f0 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -194,6 +194,8 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x16DC, 0x0015) }, /* W-IE-NE-R Plein & Baus GmbH CML Control, Monitoring and Data Logger */
+ { USB_DEVICE(0x17A8, 0x0001) }, /* Kamstrup Optical Eye/3-wire */
+ { USB_DEVICE(0x17A8, 0x0005) }, /* Kamstrup M-Bus Master MultiPort 250D */
++ { USB_DEVICE(0x17A8, 0x0101) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Int Ant) */
++ { USB_DEVICE(0x17A8, 0x0102) }, /* Kamstrup 868 MHz wM-Bus C-Mode Meter Reader (Ext Ant) */
+ { USB_DEVICE(0x17F4, 0xAAAA) }, /* Wavesense Jazz blood glucose meter */
+ { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
+ { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index e7755d9cfc61a..1364ce7f0abf0 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -432,6 +432,8 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_CLS8 0x00b0
+ #define CINTERION_PRODUCT_MV31_MBIM 0x00b3
+ #define CINTERION_PRODUCT_MV31_RMNET 0x00b7
++#define CINTERION_PRODUCT_MV32_WA 0x00f1
++#define CINTERION_PRODUCT_MV32_WB 0x00f2
+
+ /* Olivetti products */
+ #define OLIVETTI_VENDOR_ID 0x0b3c
+@@ -1217,6 +1219,10 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff), /* Telit FD980 */
+ .driver_info = NCTRL(2) | RSVD(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1057, 0xff), /* Telit FN980 */
++ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1058, 0xff), /* Telit FN980 (PCIe) */
++ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1060, 0xff), /* Telit LN920 (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1061, 0xff), /* Telit LN920 (MBIM) */
+@@ -1233,6 +1239,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990 (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990 (PCIe) */
++ .driver_info = RSVD(0) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -1969,6 +1977,10 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(3)},
+ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff),
+ .driver_info = RSVD(0)},
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff),
++ .driver_info = RSVD(3)},
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff),
++ .driver_info = RSVD(3)},
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100),
+ .driver_info = RSVD(4) },
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120),
+diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
+index da65d14c9ed5e..06aad0d727ddc 100644
+--- a/drivers/usb/serial/whiteheat.c
++++ b/drivers/usb/serial/whiteheat.c
+@@ -584,9 +584,8 @@ static int firm_send_command(struct usb_serial_port *port, __u8 command,
+ switch (command) {
+ case WHITEHEAT_GET_DTR_RTS:
+ info = usb_get_serial_port_data(port);
+- memcpy(&info->mcr, command_info->result_buffer,
+- sizeof(struct whiteheat_dr_info));
+- break;
++ info->mcr = command_info->result_buffer[0];
++ break;
+ }
+ }
+ exit:
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index f0c2fa19f3e0f..a6045aef0d04f 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -949,6 +949,8 @@ static int ucsi_dr_swap(struct typec_port *port, enum typec_data_role role)
+ role == TYPEC_HOST))
+ goto out_unlock;
+
++ reinit_completion(&con->complete);
++
+ command = UCSI_SET_UOR | UCSI_CONNECTOR_NUMBER(con->num);
+ command |= UCSI_SET_UOR_ROLE(role);
+ command |= UCSI_SET_UOR_ACCEPT_ROLE_SWAPS;
+@@ -956,14 +958,18 @@ static int ucsi_dr_swap(struct typec_port *port, enum typec_data_role role)
+ if (ret < 0)
+ goto out_unlock;
+
++ mutex_unlock(&con->lock);
++
+ if (!wait_for_completion_timeout(&con->complete,
+- msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS)))
+- ret = -ETIMEDOUT;
++ msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS)))
++ return -ETIMEDOUT;
++
++ return 0;
+
+ out_unlock:
+ mutex_unlock(&con->lock);
+
+- return ret < 0 ? ret : 0;
++ return ret;
+ }
+
+ static int ucsi_pr_swap(struct typec_port *port, enum typec_role role)
+@@ -985,6 +991,8 @@ static int ucsi_pr_swap(struct typec_port *port, enum typec_role role)
+ if (cur_role == role)
+ goto out_unlock;
+
++ reinit_completion(&con->complete);
++
+ command = UCSI_SET_PDR | UCSI_CONNECTOR_NUMBER(con->num);
+ command |= UCSI_SET_PDR_ROLE(role);
+ command |= UCSI_SET_PDR_ACCEPT_ROLE_SWAPS;
+@@ -992,11 +1000,13 @@ static int ucsi_pr_swap(struct typec_port *port, enum typec_role role)
+ if (ret < 0)
+ goto out_unlock;
+
++ mutex_unlock(&con->lock);
++
+ if (!wait_for_completion_timeout(&con->complete,
+- msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS))) {
+- ret = -ETIMEDOUT;
+- goto out_unlock;
+- }
++ msecs_to_jiffies(UCSI_SWAP_TIMEOUT_MS)))
++ return -ETIMEDOUT;
++
++ mutex_lock(&con->lock);
+
+ /* Something has gone wrong while swapping the role */
+ if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) !=
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index 90f48b71fd8f7..d9eec1b60e665 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -1649,8 +1649,9 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ const struct device_attribute *attr;
+ struct dlfb_data *dlfb;
+ struct fb_info *info;
+- int retval = -ENOMEM;
++ int retval;
+ struct usb_device *usbdev = interface_to_usbdev(intf);
++ struct usb_endpoint_descriptor *out;
+
+ /* usb initialization */
+ dlfb = kzalloc(sizeof(*dlfb), GFP_KERNEL);
+@@ -1664,6 +1665,12 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ dlfb->udev = usb_get_dev(usbdev);
+ usb_set_intfdata(intf, dlfb);
+
++ retval = usb_find_common_endpoints(intf->cur_altsetting, NULL, &out, NULL, NULL);
++ if (retval) {
++ dev_err(&intf->dev, "Device should have at lease 1 bulk endpoint!\n");
++ goto error;
++ }
++
+ dev_dbg(&intf->dev, "console enable=%d\n", console);
+ dev_dbg(&intf->dev, "fb_defio enable=%d\n", fb_defio);
+ dev_dbg(&intf->dev, "shadow enable=%d\n", shadow);
+@@ -1673,6 +1680,7 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ if (!dlfb_parse_vendor_descriptor(dlfb, intf)) {
+ dev_err(&intf->dev,
+ "firmware not recognized, incompatible device?\n");
++ retval = -ENODEV;
+ goto error;
+ }
+
+@@ -1686,8 +1694,10 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+
+ /* allocates framebuffer driver structure, not framebuffer memory */
+ info = framebuffer_alloc(0, &dlfb->udev->dev);
+- if (!info)
++ if (!info) {
++ retval = -ENOMEM;
+ goto error;
++ }
+
+ dlfb->info = info;
+ info->par = dlfb;
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index ebb2d109e8bb2..1022dd383664a 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1029,6 +1029,7 @@ struct btrfs_fs_info {
+ */
+ spinlock_t relocation_bg_lock;
+ u64 data_reloc_bg;
++ struct mutex zoned_data_reloc_io_lock;
+
+ spinlock_t zone_active_bgs_lock;
+ struct list_head zone_active_bgs;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index 62b9651ea6629..7f98d3744656a 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -730,7 +730,12 @@ static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+
+ btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
+
+- /* Commit dev_replace state and reserve 1 item for it. */
++ /*
++ * Commit dev_replace state and reserve 1 item for it.
++ * This is crucial to ensure we won't miss copying extents for new block
++ * groups that are allocated after we started the device replace, and
++ * must be done after setting up the device replace state.
++ */
+ trans = btrfs_start_transaction(root, 1);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index b43f80c3bffd9..ed986c70cbc5e 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3068,6 +3068,7 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
+ mutex_init(&fs_info->reloc_mutex);
+ mutex_init(&fs_info->delalloc_root_mutex);
+ mutex_init(&fs_info->zoned_meta_io_lock);
++ mutex_init(&fs_info->zoned_data_reloc_io_lock);
+ seqlock_init(&fs_info->profiles_lock);
+
+ INIT_LIST_HEAD(&fs_info->dirty_cowonly_roots);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index e93526d86a922..9b488a737f8ae 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2657,6 +2657,7 @@ int btrfs_repair_one_sector(struct inode *inode,
+
+ repair_bio = btrfs_bio_alloc(1);
+ repair_bbio = btrfs_bio(repair_bio);
++ repair_bbio->file_offset = start;
+ repair_bio->bi_opf = REQ_OP_READ;
+ repair_bio->bi_end_io = failed_bio->bi_end_io;
+ repair_bio->bi_iter.bi_sector = failrec->logical >> 9;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 9547088a93066..ecd305649e129 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7789,8 +7789,6 @@ static blk_status_t btrfs_check_read_dio_bio(struct btrfs_dio_private *dip,
+ const bool csum = !(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM);
+ struct bio_vec bvec;
+ struct bvec_iter iter;
+- const u64 orig_file_offset = dip->file_offset;
+- u64 start = orig_file_offset;
+ u32 bio_offset = 0;
+ blk_status_t err = BLK_STS_OK;
+
+@@ -7800,6 +7798,8 @@ static blk_status_t btrfs_check_read_dio_bio(struct btrfs_dio_private *dip,
+ nr_sectors = BTRFS_BYTES_TO_BLKS(fs_info, bvec.bv_len);
+ pgoff = bvec.bv_offset;
+ for (i = 0; i < nr_sectors; i++) {
++ u64 start = bbio->file_offset + bio_offset;
++
+ ASSERT(pgoff < PAGE_SIZE);
+ if (uptodate &&
+ (!csum || !check_data_csum(inode, bbio,
+@@ -7812,17 +7812,13 @@ static blk_status_t btrfs_check_read_dio_bio(struct btrfs_dio_private *dip,
+ } else {
+ int ret;
+
+- ASSERT((start - orig_file_offset) < UINT_MAX);
+- ret = btrfs_repair_one_sector(inode,
+- &bbio->bio,
+- start - orig_file_offset,
+- bvec.bv_page, pgoff,
++ ret = btrfs_repair_one_sector(inode, &bbio->bio,
++ bio_offset, bvec.bv_page, pgoff,
+ start, bbio->mirror_num,
+ submit_dio_repair_bio);
+ if (ret)
+ err = errno_to_blk_status(ret);
+ }
+- start += sectorsize;
+ ASSERT(bio_offset + sectorsize > bio_offset);
+ bio_offset += sectorsize;
+ pgoff += sectorsize;
+@@ -7849,6 +7845,7 @@ static blk_status_t btrfs_submit_bio_start_direct_io(struct inode *inode,
+ static void btrfs_end_dio_bio(struct bio *bio)
+ {
+ struct btrfs_dio_private *dip = bio->bi_private;
++ struct btrfs_bio *bbio = btrfs_bio(bio);
+ blk_status_t err = bio->bi_status;
+
+ if (err)
+@@ -7859,12 +7856,12 @@ static void btrfs_end_dio_bio(struct bio *bio)
+ bio->bi_iter.bi_size, err);
+
+ if (bio_op(bio) == REQ_OP_READ)
+- err = btrfs_check_read_dio_bio(dip, btrfs_bio(bio), !err);
++ err = btrfs_check_read_dio_bio(dip, bbio, !err);
+
+ if (err)
+ dip->dio_bio->bi_status = err;
+
+- btrfs_record_physical_zoned(dip->inode, dip->file_offset, bio);
++ btrfs_record_physical_zoned(dip->inode, bbio->file_offset, bio);
+
+ bio_put(bio);
+ btrfs_dio_private_put(dip);
+@@ -8025,6 +8022,7 @@ static void btrfs_submit_direct(const struct iomap_iter *iter,
+ bio = btrfs_bio_clone_partial(dio_bio, clone_offset, clone_len);
+ bio->bi_private = dip;
+ bio->bi_end_io = btrfs_end_dio_bio;
++ btrfs_bio(bio)->file_offset = file_offset;
+
+ if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
+ status = extract_ordered_extent(BTRFS_I(inode), bio,
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 2e9a322773f28..5a223de90af8a 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -3699,6 +3699,31 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ if (!cache)
+ goto skip;
+
++ ASSERT(cache->start <= chunk_offset);
++ /*
++ * We are using the commit root to search for device extents, so
++ * that means we could have found a device extent item from a
++ * block group that was deleted in the current transaction. The
++ * logical start offset of the deleted block group, stored at
++ * @chunk_offset, might be part of the logical address range of
++ * a new block group (which uses different physical extents).
++ * In this case btrfs_lookup_block_group() has returned the new
++ * block group, and its start address is less than @chunk_offset.
++ *
++ * We skip such new block groups, because it's pointless to
++ * process them, as we won't find their extents because we search
++ * for them using the commit root of the extent tree. For a device
++ * replace it's also fine to skip it, we won't miss copying them
++ * to the target device because we have the write duplication
++ * setup through the regular write path (by btrfs_map_block()),
++ * and we have committed a transaction when we started the device
++ * replace, right after setting up the device replace state.
++ */
++ if (cache->start < chunk_offset) {
++ btrfs_put_block_group(cache);
++ goto skip;
++ }
++
+ if (sctx->is_dev_replace && btrfs_is_zoned(fs_info)) {
+ spin_lock(&cache->lock);
+ if (!cache->to_copy) {
+@@ -3822,7 +3847,6 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
+ dev_replace->item_needs_writeback = 1;
+ up_write(&dev_replace->rwsem);
+
+- ASSERT(cache->start == chunk_offset);
+ ret = scrub_chunk(sctx, cache, scrub_dev, found_key.offset,
+ dev_extent_len);
+
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 6bc8834ac8f7d..7a0bfa5bedb95 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3225,6 +3225,7 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ ret = btrfs_alloc_log_tree_node(trans, log_root_tree);
+ if (ret) {
+ mutex_unlock(&fs_info->tree_root->log_mutex);
++ blk_finish_plug(&plug);
+ goto out;
+ }
+ }
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index 005c9e2a491a1..c22148bebc2f5 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -323,6 +323,9 @@ struct btrfs_fs_devices {
+ struct btrfs_bio {
+ unsigned int mirror_num;
+
++ /* for direct I/O */
++ u64 file_offset;
++
+ /* @device is for stripe IO submission. */
+ struct btrfs_device *device;
+ u8 *csum;
+diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
+index cbf016a7bb5dd..6dee76248cb4d 100644
+--- a/fs/btrfs/zoned.h
++++ b/fs/btrfs/zoned.h
+@@ -359,7 +359,7 @@ static inline void btrfs_zoned_data_reloc_lock(struct btrfs_inode *inode)
+ struct btrfs_root *root = inode->root;
+
+ if (btrfs_is_data_reloc_root(root) && btrfs_is_zoned(root->fs_info))
+- btrfs_inode_lock(&inode->vfs_inode, 0);
++ mutex_lock(&root->fs_info->zoned_data_reloc_io_lock);
+ }
+
+ static inline void btrfs_zoned_data_reloc_unlock(struct btrfs_inode *inode)
+@@ -367,7 +367,7 @@ static inline void btrfs_zoned_data_reloc_unlock(struct btrfs_inode *inode)
+ struct btrfs_root *root = inode->root;
+
+ if (btrfs_is_data_reloc_root(root) && btrfs_is_zoned(root->fs_info))
+- btrfs_inode_unlock(&inode->vfs_inode, 0);
++ mutex_unlock(&root->fs_info->zoned_data_reloc_io_lock);
+ }
+
+ #endif
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index b472cd066d1c8..ca7b23dded4c0 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -2267,6 +2267,8 @@ retry:
+ list_for_each_entry(req, &ci->i_unsafe_dirops,
+ r_unsafe_dir_item) {
+ s = req->r_session;
++ if (!s)
++ continue;
+ if (unlikely(s->s_mds >= max_sessions)) {
+ spin_unlock(&ci->i_unsafe_lock);
+ for (i = 0; i < max_sessions; i++) {
+@@ -2287,6 +2289,8 @@ retry:
+ list_for_each_entry(req, &ci->i_unsafe_iops,
+ r_unsafe_target_item) {
+ s = req->r_session;
++ if (!s)
++ continue;
+ if (unlikely(s->s_mds >= max_sessions)) {
+ spin_unlock(&ci->i_unsafe_lock);
+ for (i = 0; i < max_sessions; i++) {
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 5d120cd8bc78f..13080d6a140b3 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1861,9 +1861,17 @@ smb2_copychunk_range(const unsigned int xid,
+ int chunks_copied = 0;
+ bool chunk_sizes_updated = false;
+ ssize_t bytes_written, total_bytes_written = 0;
++ struct inode *inode;
+
+ pcchunk = kmalloc(sizeof(struct copychunk_ioctl), GFP_KERNEL);
+
++ /*
++ * We need to flush all unwritten data before we can send the
++ * copychunk ioctl to the server.
++ */
++ inode = d_inode(trgtfile->dentry);
++ filemap_write_and_wait(inode->i_mapping);
++
+ if (pcchunk == NULL)
+ return -ENOMEM;
+
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 423bc1a61da5c..a1b48bcafe632 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1073,12 +1073,9 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
+
+ /* wake up the caller thread for sync decompression */
+ if (sync) {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&io->u.wait.lock, flags);
+ if (!atomic_add_return(bios, &io->pending_bios))
+- wake_up_locked(&io->u.wait);
+- spin_unlock_irqrestore(&io->u.wait.lock, flags);
++ complete(&io->u.done);
++
+ return;
+ }
+
+@@ -1224,7 +1221,7 @@ jobqueue_init(struct super_block *sb,
+ } else {
+ fg_out:
+ q = fgq;
+- init_waitqueue_head(&fgq->u.wait);
++ init_completion(&fgq->u.done);
+ atomic_set(&fgq->pending_bios, 0);
+ }
+ q->sb = sb;
+@@ -1428,8 +1425,7 @@ static void z_erofs_runqueue(struct super_block *sb,
+ return;
+
+ /* wait until all bios are completed */
+- io_wait_event(io[JQ_SUBMIT].u.wait,
+- !atomic_read(&io[JQ_SUBMIT].pending_bios));
++ wait_for_completion_io(&io[JQ_SUBMIT].u.done);
+
+ /* handle synchronous decompress queue in the caller context */
+ z_erofs_decompress_queue(&io[JQ_SUBMIT], pagepool);
+diff --git a/fs/erofs/zdata.h b/fs/erofs/zdata.h
+index e043216b545f1..800b11c53f574 100644
+--- a/fs/erofs/zdata.h
++++ b/fs/erofs/zdata.h
+@@ -97,7 +97,7 @@ struct z_erofs_decompressqueue {
+ z_erofs_next_pcluster_t head;
+
+ union {
+- wait_queue_head_t wait;
++ struct completion done;
+ struct work_struct work;
+ } u;
+ };
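The erofs conversion above replaces an open-coded waitqueue handshake with a struct completion, which bundles the done-flag, the waitqueue and the wakeup into one primitive and removes the need for the spinlocked wake_up_locked() dance. A kernel-style sketch of the resulting idiom (names are illustrative, not erofs code):

#include <linux/atomic.h>
#include <linux/completion.h>

struct demo_queue {
	atomic_t pending;
	struct completion done;
};

static void demo_submit(struct demo_queue *q, int nr_bios)
{
	init_completion(&q->done);
	atomic_set(&q->pending, nr_bios);
	/* ... submit nr_bios bios, each ending in demo_bio_done() ... */
	wait_for_completion_io(&q->done);	/* sleep accounted as I/O wait */
}

static void demo_bio_done(struct demo_queue *q)
{
	/* last bio in wakes the submitter; no lost-wakeup window */
	if (atomic_dec_and_test(&q->pending))
		complete(&q->done);
}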
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index ba6530c2d2711..fe30f483c59f0 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1199,20 +1199,25 @@ static void ext4_put_super(struct super_block *sb)
+ int aborted = 0;
+ int i, err;
+
+- ext4_unregister_li_request(sb);
+- ext4_quota_off_umount(sb);
+-
+- flush_work(&sbi->s_error_work);
+- destroy_workqueue(sbi->rsv_conversion_wq);
+- ext4_release_orphan_info(sb);
+-
+ /*
+ * Unregister sysfs before destroying jbd2 journal.
+ * Since we could still access attr_journal_task attribute via sysfs
+ * path which could have sbi->s_journal->j_task as NULL
++ * Unregister sysfs before flushing sbi->s_error_work as well: a user
++ * may read /proc/fs/ext4/xx/mb_groups during umount, and if metadata
++ * verification fails on that read, error work gets queued; then
++ * flush_stashed_error_work would call start_this_handle and may
++ * trigger a BUG_ON.
+ */
+ ext4_unregister_sysfs(sb);
+
++ ext4_unregister_li_request(sb);
++ ext4_quota_off_umount(sb);
++
++ flush_work(&sbi->s_error_work);
++ destroy_workqueue(sbi->rsv_conversion_wq);
++ ext4_release_orphan_info(sb);
++
+ if (sbi->s_journal) {
+ aborted = is_journal_aborted(sbi->s_journal);
+ err = jbd2_journal_destroy(sbi->s_journal);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 71f232dcf3c20..83639238a1fe9 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -550,7 +550,8 @@ make_now:
+ }
+ f2fs_set_inode_flags(inode);
+
+- if (file_should_truncate(inode)) {
++ if (file_should_truncate(inode) &&
++ !is_sbi_flag_set(sbi, SBI_POR_DOING)) {
+ ret = f2fs_truncate(inode);
+ if (ret)
+ goto bad_inode;
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index b53ad18e5ccbf..fa071d738c78e 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -851,9 +851,9 @@ retry_under_glock:
+ leftover = fault_in_iov_iter_writeable(to, window_size);
+ gfs2_holder_disallow_demote(gh);
+ if (leftover != window_size) {
+- if (!gfs2_holder_queued(gh))
+- goto retry;
+- goto retry_under_glock;
++ if (gfs2_holder_queued(gh))
++ goto retry_under_glock;
++ goto retry;
+ }
+ }
+ if (gfs2_holder_queued(gh))
+@@ -920,9 +920,9 @@ retry_under_glock:
+ leftover = fault_in_iov_iter_readable(from, window_size);
+ gfs2_holder_disallow_demote(gh);
+ if (leftover != window_size) {
+- if (!gfs2_holder_queued(gh))
+- goto retry;
+- goto retry_under_glock;
++ if (gfs2_holder_queued(gh))
++ goto retry_under_glock;
++ goto retry;
+ }
+ }
+ out:
+@@ -989,12 +989,9 @@ retry_under_glock:
+ leftover = fault_in_iov_iter_writeable(to, window_size);
+ gfs2_holder_disallow_demote(&gh);
+ if (leftover != window_size) {
+- if (!gfs2_holder_queued(&gh)) {
+- if (written)
+- goto out_uninit;
+- goto retry;
+- }
+- goto retry_under_glock;
++ if (gfs2_holder_queued(&gh))
++ goto retry_under_glock;
++ goto retry;
+ }
+ }
+ if (gfs2_holder_queued(&gh))
+@@ -1068,12 +1065,9 @@ retry_under_glock:
+ gfs2_holder_disallow_demote(gh);
+ if (leftover != window_size) {
+ from->count = min(from->count, window_size - leftover);
+- if (!gfs2_holder_queued(gh)) {
+- if (read)
+- goto out_uninit;
+- goto retry;
+- }
+- goto retry_under_glock;
++ if (gfs2_holder_queued(gh))
++ goto retry_under_glock;
++ goto retry;
+ }
+ }
+ out_unlock:
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index fbba8342172a0..87df379120551 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -3584,6 +3584,7 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
+ if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
+ return -EOPNOTSUPP;
+
++ kiocb->private = NULL;
+ kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
+ kiocb->ki_complete = io_complete_rw_iopoll;
+ req->iopoll_completed = 0;
+@@ -4890,6 +4891,8 @@ static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+
+ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ return -EINVAL;
++ if (unlikely(sqe->addr2 || sqe->file_index))
++ return -EINVAL;
+
+ sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+ sr->len = READ_ONCE(sqe->len);
+@@ -5101,6 +5104,8 @@ static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+
+ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ return -EINVAL;
++ if (unlikely(sqe->addr2 || sqe->file_index))
++ return -EINVAL;
+
+ sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
+ sr->len = READ_ONCE(sqe->len);
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index e6d9772ddb4ca..4096953390b44 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -1397,7 +1397,12 @@ static void __kernfs_remove(struct kernfs_node *kn)
+ */
+ void kernfs_remove(struct kernfs_node *kn)
+ {
+- struct kernfs_root *root = kernfs_root(kn);
++ struct kernfs_root *root;
++
++ if (!kn)
++ return;
++
++ root = kernfs_root(kn);
+
+ down_write(&root->kernfs_rwsem);
+ __kernfs_remove(kn);
+diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
+index 67e8e28e3fc35..83ffa73c93484 100644
+--- a/fs/ksmbd/smb2pdu.c
++++ b/fs/ksmbd/smb2pdu.c
+@@ -11,6 +11,7 @@
+ #include <linux/statfs.h>
+ #include <linux/ethtool.h>
+ #include <linux/falloc.h>
++#include <linux/mount.h>
+
+ #include "glob.h"
+ #include "smbfsctl.h"
+@@ -5005,15 +5006,17 @@ static int smb2_get_info_filesystem(struct ksmbd_work *work,
+ case FS_SECTOR_SIZE_INFORMATION:
+ {
+ struct smb3_fs_ss_info *info;
++ unsigned int sector_size =
++ min_t(unsigned int, path.mnt->mnt_sb->s_blocksize, 4096);
+
+ info = (struct smb3_fs_ss_info *)(rsp->Buffer);
+
+- info->LogicalBytesPerSector = cpu_to_le32(stfs.f_bsize);
++ info->LogicalBytesPerSector = cpu_to_le32(sector_size);
+ info->PhysicalBytesPerSectorForAtomicity =
+- cpu_to_le32(stfs.f_bsize);
+- info->PhysicalBytesPerSectorForPerf = cpu_to_le32(stfs.f_bsize);
++ cpu_to_le32(sector_size);
++ info->PhysicalBytesPerSectorForPerf = cpu_to_le32(sector_size);
+ info->FSEffPhysicalBytesPerSectorForAtomicity =
+- cpu_to_le32(stfs.f_bsize);
++ cpu_to_le32(sector_size);
+ info->Flags = cpu_to_le32(SSINFO_FLAGS_ALIGNED_DEVICE |
+ SSINFO_FLAGS_PARTITION_ALIGNED_ON_DEVICE);
+ info->ByteOffsetForSectorAlignment = 0;
+@@ -5771,8 +5774,10 @@ static int set_rename_info(struct ksmbd_work *work, struct ksmbd_file *fp,
+ if (parent_fp) {
+ if (parent_fp->daccess & FILE_DELETE_LE) {
+ pr_err("parent dir is opened with delete access\n");
++ ksmbd_fd_put(work, parent_fp);
+ return -ESHARE;
+ }
++ ksmbd_fd_put(work, parent_fp);
+ }
+ next:
+ return smb2_rename(work, fp, user_ns, rename_info,
+diff --git a/fs/ksmbd/vfs_cache.c b/fs/ksmbd/vfs_cache.c
+index 29c1db66bd0f7..8b873d92d7854 100644
+--- a/fs/ksmbd/vfs_cache.c
++++ b/fs/ksmbd/vfs_cache.c
+@@ -497,6 +497,7 @@ struct ksmbd_file *ksmbd_lookup_fd_inode(struct inode *inode)
+ list_for_each_entry(lfp, &ci->m_fp_list, node) {
+ if (inode == file_inode(lfp->filp)) {
+ atomic_dec(&ci->m_count);
++ lfp = ksmbd_fp_get(lfp);
+ read_unlock(&ci->m_lock);
+ return lfp;
+ }
+diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
+index b76dfb310ab65..63e09caf19b82 100644
+--- a/fs/zonefs/super.c
++++ b/fs/zonefs/super.c
+@@ -35,6 +35,17 @@ static inline int zonefs_zone_mgmt(struct inode *inode,
+
+ lockdep_assert_held(&zi->i_truncate_mutex);
+
++ /*
++ * With ZNS drives, closing an explicitly open zone that has not been
++ * written will change the zone state to "closed", that is, the zone
++ * will remain active. Since this can then cause failure of explicit
++ * open operation on other zones if the drive active zone resources
++ * are exceeded, make sure that the zone does not remain active by
++ * resetting it.
++ */
++ if (op == REQ_OP_ZONE_CLOSE && !zi->i_wpoffset)
++ op = REQ_OP_ZONE_RESET;
++
+ trace_zonefs_zone_mgmt(inode, op);
+ ret = blkdev_zone_mgmt(inode->i_sb->s_bdev, op, zi->i_zsector,
+ zi->i_zone_size >> SECTOR_SHIFT, GFP_NOFS);
+@@ -1144,6 +1155,7 @@ static struct inode *zonefs_alloc_inode(struct super_block *sb)
+ inode_init_once(&zi->i_vnode);
+ mutex_init(&zi->i_truncate_mutex);
+ zi->i_wr_refcnt = 0;
++ zi->i_flags = 0;
+
+ return &zi->i_vnode;
+ }
+@@ -1295,12 +1307,13 @@ static void zonefs_init_dir_inode(struct inode *parent, struct inode *inode,
+ inc_nlink(parent);
+ }
+
+-static void zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone,
+- enum zonefs_ztype type)
++static int zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone,
++ enum zonefs_ztype type)
+ {
+ struct super_block *sb = inode->i_sb;
+ struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
+ struct zonefs_inode_info *zi = ZONEFS_I(inode);
++ int ret = 0;
+
+ inode->i_ino = zone->start >> sbi->s_zone_sectors_shift;
+ inode->i_mode = S_IFREG | sbi->s_perm;
+@@ -1325,6 +1338,22 @@ static void zonefs_init_file_inode(struct inode *inode, struct blk_zone *zone,
+ sb->s_maxbytes = max(zi->i_max_size, sb->s_maxbytes);
+ sbi->s_blocks += zi->i_max_size >> sb->s_blocksize_bits;
+ sbi->s_used_blocks += zi->i_wpoffset >> sb->s_blocksize_bits;
++
++ /*
++ * For sequential zones, make sure that any open zone is closed first
++ * to ensure that the initial number of open zones is 0, in sync with
++ * the open zone accounting done when the mount option
++ * ZONEFS_MNTOPT_EXPLICIT_OPEN is used.
++ */
++ if (type == ZONEFS_ZTYPE_SEQ &&
++ (zone->cond == BLK_ZONE_COND_IMP_OPEN ||
++ zone->cond == BLK_ZONE_COND_EXP_OPEN)) {
++ mutex_lock(&zi->i_truncate_mutex);
++ ret = zonefs_zone_mgmt(inode, REQ_OP_ZONE_CLOSE);
++ mutex_unlock(&zi->i_truncate_mutex);
++ }
++
++ return ret;
+ }
+
+ static struct dentry *zonefs_create_inode(struct dentry *parent,
+@@ -1334,6 +1363,7 @@ static struct dentry *zonefs_create_inode(struct dentry *parent,
+ struct inode *dir = d_inode(parent);
+ struct dentry *dentry;
+ struct inode *inode;
++ int ret;
+
+ dentry = d_alloc_name(parent, name);
+ if (!dentry)
+@@ -1344,10 +1374,16 @@ static struct dentry *zonefs_create_inode(struct dentry *parent,
+ goto dput;
+
+ inode->i_ctime = inode->i_mtime = inode->i_atime = dir->i_ctime;
+- if (zone)
+- zonefs_init_file_inode(inode, zone, type);
+- else
++ if (zone) {
++ ret = zonefs_init_file_inode(inode, zone, type);
++ if (ret) {
++ iput(inode);
++ goto dput;
++ }
++ } else {
+ zonefs_init_dir_inode(dir, inode, type);
++ }
++
+ d_add(dentry, inode);
+ dir->i_size++;
+
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index 33f47a9965132..a56354bccf292 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -280,7 +280,7 @@ static inline char *hex_byte_pack_upper(char *buf, u8 byte)
+ return buf;
+ }
+
+-extern int hex_to_bin(char ch);
++extern int hex_to_bin(unsigned char ch);
+ extern int __must_check hex2bin(u8 *dst, const char *src, size_t count);
+ extern char *bin2hex(char *dst, const void *src, size_t count);
+
+diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h
+index 1ffa933121f6e..12ed7bb071be6 100644
+--- a/include/linux/mtd/mtd.h
++++ b/include/linux/mtd/mtd.h
+@@ -392,10 +392,8 @@ struct mtd_info {
+ /* List of partitions attached to this MTD device */
+ struct list_head partitions;
+
+- union {
+- struct mtd_part part;
+- struct mtd_master master;
+- };
++ struct mtd_part part;
++ struct mtd_master master;
+ };
+
+ static inline struct mtd_info *mtd_get_master(struct mtd_info *mtd)
+diff --git a/include/memory/renesas-rpc-if.h b/include/memory/renesas-rpc-if.h
+index 7c93f5177532f..9c0ad64b8d292 100644
+--- a/include/memory/renesas-rpc-if.h
++++ b/include/memory/renesas-rpc-if.h
+@@ -72,6 +72,7 @@ struct rpcif {
+ enum rpcif_type type;
+ enum rpcif_data_dir dir;
+ u8 bus_size;
++ u8 xfer_size;
+ void *buffer;
+ u32 xferlen;
+ u32 smcr;
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 5cb095b09a940..69ef31cea5822 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -578,6 +578,7 @@ enum {
+ #define HCI_ERROR_CONNECTION_TIMEOUT 0x08
+ #define HCI_ERROR_REJ_LIMITED_RESOURCES 0x0d
+ #define HCI_ERROR_REJ_BAD_ADDR 0x0f
++#define HCI_ERROR_INVALID_PARAMETERS 0x12
+ #define HCI_ERROR_REMOTE_USER_TERM 0x13
+ #define HCI_ERROR_REMOTE_LOW_RESOURCES 0x14
+ #define HCI_ERROR_REMOTE_POWER_OFF 0x15
+diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h
+index a38c4f1e4e5c6..74b369bddf49e 100644
+--- a/include/net/ip6_tunnel.h
++++ b/include/net/ip6_tunnel.h
+@@ -58,7 +58,7 @@ struct ip6_tnl {
+
+ /* These fields used only by GRE */
+ __u32 i_seqno; /* The last seen seqno */
+- __u32 o_seqno; /* The last output seqno */
++ atomic_t o_seqno; /* The last output seqno */
+ int hlen; /* tun_hlen + encap_hlen */
+ int tun_hlen; /* Precalculated header length */
+ int encap_hlen; /* Encap header length (FOU,GUE) */
+diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
+index 0219fe907b261..3ec6146f87342 100644
+--- a/include/net/ip_tunnels.h
++++ b/include/net/ip_tunnels.h
+@@ -116,7 +116,7 @@ struct ip_tunnel {
+
+ /* These four fields used only by GRE */
+ u32 i_seqno; /* The last seen seqno */
+- u32 o_seqno; /* The last output seqno */
++ atomic_t o_seqno; /* The last output seqno */
+ int tun_hlen; /* Precalculated header length */
+
+ /* These four fields used only by ERSPAN */
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index b9fc978fb2cad..a3fe2f9bc01ce 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -480,6 +480,7 @@ int __cookie_v4_check(const struct iphdr *iph, const struct tcphdr *th,
+ u32 cookie);
+ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb);
+ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
++ const struct tcp_request_sock_ops *af_ops,
+ struct sock *sk, struct sk_buff *skb);
+ #ifdef CONFIG_SYN_COOKIES
+
+@@ -620,6 +621,7 @@ void tcp_synack_rtt_meas(struct sock *sk, struct request_sock *req);
+ void tcp_reset(struct sock *sk, struct sk_buff *skb);
+ void tcp_skb_mark_lost_uncond_verify(struct tcp_sock *tp, struct sk_buff *skb);
+ void tcp_fin(struct sock *sk);
++void tcp_check_space(struct sock *sk);
+
+ /* tcp_timer.c */
+ void tcp_init_xmit_timers(struct sock *);
+@@ -1042,6 +1044,7 @@ struct rate_sample {
+ int losses; /* number of packets marked lost upon ACK */
+ u32 acked_sacked; /* number of packets newly (S)ACKed upon ACK */
+ u32 prior_in_flight; /* in flight before this ACK */
++ u32 last_end_seq; /* end_seq of most recently ACKed packet */
+ bool is_app_limited; /* is sample from packet with bubble in pipe? */
+ bool is_retrans; /* is sample from retransmission? */
+ bool is_ack_delayed; /* is this (likely) a delayed ACK? */
+@@ -1164,6 +1167,11 @@ void tcp_rate_gen(struct sock *sk, u32 delivered, u32 lost,
+ bool is_sack_reneg, struct rate_sample *rs);
+ void tcp_rate_check_app_limited(struct sock *sk);
+
++static inline bool tcp_skb_sent_after(u64 t1, u64 t2, u32 seq1, u32 seq2)
++{
++ return t1 > t2 || (t1 == t2 && after(seq1, seq2));
++}
++
+ /* These functions determine how the current flow behaves in respect of SACK
+ * handling. SACK is negotiated with the peer, and therefore it can vary
+ * between different flows.
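The tcp_skb_sent_after() helper added above orders two transmissions by send timestamp first and uses the sequence comparison only to break timestamp ties; once segments are retransmitted, the skb with the higher sequence number is no longer necessarily the one sent last, which is the case the tcp_rate change later in this patch relies on. A standalone check of the helper's behaviour, with after() re-implemented the way the kernel defines wraparound-safe sequence comparison:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* wraparound-safe "seq1 comes after seq2", as in the kernel's after() */
static bool after(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq2 - seq1) < 0;
}

static bool tcp_skb_sent_after(uint64_t t1, uint64_t t2,
			       uint32_t seq1, uint32_t seq2)
{
	return t1 > t2 || (t1 == t2 && after(seq1, seq2));
}

int main(void)
{
	/* later timestamp wins even with a lower sequence (retransmit) */
	assert(tcp_skb_sent_after(200, 100, 1000, 5000));
	/* equal timestamps: fall back to wrap-safe sequence order */
	assert(tcp_skb_sent_after(100, 100, 10, 0xffffff00u));
	return 0;
}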
+diff --git a/lib/hexdump.c b/lib/hexdump.c
+index 9301578f98e8c..06833d404398d 100644
+--- a/lib/hexdump.c
++++ b/lib/hexdump.c
+@@ -22,15 +22,33 @@ EXPORT_SYMBOL(hex_asc_upper);
+ *
+ * hex_to_bin() converts one hex digit to its actual value or -1 in case of bad
+ * input.
++ *
++ * This function is used to load cryptographic keys, so it is coded in such a
++ * way that there are no conditions or memory accesses that depend on data.
++ *
++ * Explanation of the logic:
++ * (ch - '9' - 1) is negative if ch <= '9'
++ * ('0' - 1 - ch) is negative if ch >= '0'
++ * we "and" these two values, so the result is negative if ch is in the range
++ * '0' ... '9'
++ * we are only interested in the sign, so we do a shift ">> 8"; note that right
++ * shift of a negative value is implementation-defined, so we cast the
++ * value to (unsigned) before the shift --- we have 0xffffff if ch is in
++ * the range '0' ... '9', 0 otherwise
++ * we "and" this value with (ch - '0' + 1) --- we have a value 1 ... 10 if ch is
++ * in the range '0' ... '9', 0 otherwise
++ * we add this value to -1 --- we have a value 0 ... 9 if ch is in the range '0'
++ * ... '9', -1 otherwise
++ * the next line is similar to the previous one, but we need to decode both
++ * uppercase and lowercase letters, so we use (ch & 0xdf), which converts
++ * lowercase to uppercase
+ */
+-int hex_to_bin(char ch)
++int hex_to_bin(unsigned char ch)
+ {
+- if ((ch >= '0') && (ch <= '9'))
+- return ch - '0';
+- ch = tolower(ch);
+- if ((ch >= 'a') && (ch <= 'f'))
+- return ch - 'a' + 10;
+- return -1;
++ unsigned char cu = ch & 0xdf;
++ return -1 +
++ ((ch - '0' + 1) & (unsigned)((ch - '9' - 1) & ('0' - 1 - ch)) >> 8) +
++ ((cu - 'A' + 11) & (unsigned)((cu - 'F' - 1) & ('A' - 1 - cu)) >> 8);
+ }
+ EXPORT_SYMBOL(hex_to_bin);
+
+@@ -45,10 +63,13 @@ EXPORT_SYMBOL(hex_to_bin);
+ int hex2bin(u8 *dst, const char *src, size_t count)
+ {
+ while (count--) {
+- int hi = hex_to_bin(*src++);
+- int lo = hex_to_bin(*src++);
++ int hi, lo;
+
+- if ((hi < 0) || (lo < 0))
++ hi = hex_to_bin(*src++);
++ if (unlikely(hi < 0))
++ return -EINVAL;
++ lo = hex_to_bin(*src++);
++ if (unlikely(lo < 0))
+ return -EINVAL;
+
+ *dst++ = (hi << 4) | lo;
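The comment block explains the branchless decoder step by step, and the logic is easy to cross-check in user space. The sketch below (standalone C, not kernel code) compares the constant-time expression against a naive decoder for all 256 input bytes:

#include <assert.h>
#include <ctype.h>
#include <stdio.h>

static int hex_to_bin_branchless(unsigned char ch)
{
	unsigned char cu = ch & 0xdf;
	return -1 +
	       ((ch - '0' + 1) & (unsigned)((ch - '9' - 1) & ('0' - 1 - ch)) >> 8) +
	       ((cu - 'A' + 11) & (unsigned)((cu - 'F' - 1) & ('A' - 1 - cu)) >> 8);
}

static int hex_to_bin_naive(unsigned char ch)
{
	if (ch >= '0' && ch <= '9')
		return ch - '0';
	ch = tolower(ch);
	if (ch >= 'a' && ch <= 'f')
		return ch - 'a' + 10;
	return -1;
}

int main(void)
{
	for (int c = 0; c < 256; c++)
		assert(hex_to_bin_branchless(c) == hex_to_bin_naive(c));
	puts("branchless hex_to_bin matches the naive version for all inputs");
	return 0;
}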
+diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
+index 08291ed33e93a..0a9def8ce5e8b 100644
+--- a/mm/kasan/quarantine.c
++++ b/mm/kasan/quarantine.c
+@@ -315,6 +315,13 @@ static void per_cpu_remove_cache(void *arg)
+ struct qlist_head *q;
+
+ q = this_cpu_ptr(&cpu_quarantine);
++ /*
++ * Ensure ordering between the write to q->offline and
++ * per_cpu_remove_cache, and prevent cpu_quarantine from being
++ * corrupted by an interrupt.
++ */
++ if (READ_ONCE(q->offline))
++ return;
+ qlist_move_cache(q, &to_free, cache);
+ qlist_free_all(&to_free, cache);
+ }
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index d984777c9b58b..33a1b4115194e 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3067,13 +3067,9 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ {
+ struct hci_ev_conn_complete *ev = data;
+ struct hci_conn *conn;
++ u8 status = ev->status;
+
+- if (__le16_to_cpu(ev->handle) > HCI_CONN_HANDLE_MAX) {
+- bt_dev_err(hdev, "Ignoring HCI_Connection_Complete for invalid handle");
+- return;
+- }
+-
+- bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
++ bt_dev_dbg(hdev, "status 0x%2.2x", status);
+
+ hci_dev_lock(hdev);
+
+@@ -3122,8 +3118,14 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
+- if (!ev->status) {
++ if (!status) {
+ conn->handle = __le16_to_cpu(ev->handle);
++ if (conn->handle > HCI_CONN_HANDLE_MAX) {
++ bt_dev_err(hdev, "Invalid handle: 0x%4.4x > 0x%4.4x",
++ conn->handle, HCI_CONN_HANDLE_MAX);
++ status = HCI_ERROR_INVALID_PARAMETERS;
++ goto done;
++ }
+
+ if (conn->type == ACL_LINK) {
+ conn->state = BT_CONFIG;
+@@ -3164,18 +3166,18 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ hci_send_cmd(hdev, HCI_OP_CHANGE_CONN_PTYPE, sizeof(cp),
+ &cp);
+ }
+- } else {
+- conn->state = BT_CLOSED;
+- if (conn->type == ACL_LINK)
+- mgmt_connect_failed(hdev, &conn->dst, conn->type,
+- conn->dst_type, ev->status);
+ }
+
+ if (conn->type == ACL_LINK)
+ hci_sco_setup(conn, ev->status);
+
+- if (ev->status) {
+- hci_connect_cfm(conn, ev->status);
++done:
++ if (status) {
++ conn->state = BT_CLOSED;
++ if (conn->type == ACL_LINK)
++ mgmt_connect_failed(hdev, &conn->dst, conn->type,
++ conn->dst_type, status);
++ hci_connect_cfm(conn, status);
+ hci_conn_del(conn);
+ } else if (ev->link_type == SCO_LINK) {
+ switch (conn->setting & SCO_AIRMODE_MASK) {
+@@ -3185,7 +3187,7 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ break;
+ }
+
+- hci_connect_cfm(conn, ev->status);
++ hci_connect_cfm(conn, status);
+ }
+
+ unlock:
+@@ -4676,6 +4678,7 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ {
+ struct hci_ev_sync_conn_complete *ev = data;
+ struct hci_conn *conn;
++ u8 status = ev->status;
+
+ switch (ev->link_type) {
+ case SCO_LINK:
+@@ -4690,12 +4693,7 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ return;
+ }
+
+- if (__le16_to_cpu(ev->handle) > HCI_CONN_HANDLE_MAX) {
+- bt_dev_err(hdev, "Ignoring HCI_Sync_Conn_Complete for invalid handle");
+- return;
+- }
+-
+- bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
++ bt_dev_dbg(hdev, "status 0x%2.2x", status);
+
+ hci_dev_lock(hdev);
+
+@@ -4729,9 +4727,17 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
+- switch (ev->status) {
++ switch (status) {
+ case 0x00:
+ conn->handle = __le16_to_cpu(ev->handle);
++ if (conn->handle > HCI_CONN_HANDLE_MAX) {
++ bt_dev_err(hdev, "Invalid handle: 0x%4.4x > 0x%4.4x",
++ conn->handle, HCI_CONN_HANDLE_MAX);
++ status = HCI_ERROR_INVALID_PARAMETERS;
++ conn->state = BT_CLOSED;
++ break;
++ }
++
+ conn->state = BT_CONNECTED;
+ conn->type = ev->link_type;
+
+@@ -4775,8 +4781,8 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ }
+ }
+
+- hci_connect_cfm(conn, ev->status);
+- if (ev->status)
++ hci_connect_cfm(conn, status);
++ if (status)
+ hci_conn_del(conn);
+
+ unlock:
+@@ -5527,11 +5533,6 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ struct smp_irk *irk;
+ u8 addr_type;
+
+- if (handle > HCI_CONN_HANDLE_MAX) {
+- bt_dev_err(hdev, "Ignoring HCI_LE_Connection_Complete for invalid handle");
+- return;
+- }
+-
+ hci_dev_lock(hdev);
+
+ /* All controllers implicitly stop advertising in the event of a
+@@ -5603,6 +5604,12 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+
+ conn->dst_type = ev_bdaddr_type(hdev, conn->dst_type, NULL);
+
++ if (handle > HCI_CONN_HANDLE_MAX) {
++ bt_dev_err(hdev, "Invalid handle: 0x%4.4x > 0x%4.4x", handle,
++ HCI_CONN_HANDLE_MAX);
++ status = HCI_ERROR_INVALID_PARAMETERS;
++ }
++
+ if (status) {
+ hci_le_conn_failed(conn, status);
+ goto unlock;
+diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
+index 349480ef68a51..8b6b5e72b2179 100644
+--- a/net/core/lwt_bpf.c
++++ b/net/core/lwt_bpf.c
+@@ -159,10 +159,8 @@ static int bpf_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ return dst->lwtstate->orig_output(net, sk, skb);
+ }
+
+-static int xmit_check_hhlen(struct sk_buff *skb)
++static int xmit_check_hhlen(struct sk_buff *skb, int hh_len)
+ {
+- int hh_len = skb_dst(skb)->dev->hard_header_len;
+-
+ if (skb_headroom(skb) < hh_len) {
+ int nhead = HH_DATA_ALIGN(hh_len - skb_headroom(skb));
+
+@@ -274,6 +272,7 @@ static int bpf_xmit(struct sk_buff *skb)
+
+ bpf = bpf_lwt_lwtunnel(dst->lwtstate);
+ if (bpf->xmit.prog) {
++ int hh_len = dst->dev->hard_header_len;
+ __be16 proto = skb->protocol;
+ int ret;
+
+@@ -291,7 +290,7 @@ static int bpf_xmit(struct sk_buff *skb)
+ /* If the header was expanded, headroom might be too
+ * small for L2 header to come, expand as needed.
+ */
+- ret = xmit_check_hhlen(skb);
++ ret = xmit_check_hhlen(skb, hh_len);
+ if (unlikely(ret))
+ return ret;
+
+diff --git a/net/dsa/port.c b/net/dsa/port.c
+index 1a40c52f5a42d..4368fd32c4a50 100644
+--- a/net/dsa/port.c
++++ b/net/dsa/port.c
+@@ -1240,8 +1240,10 @@ int dsa_port_link_register_of(struct dsa_port *dp)
+ if (ds->ops->phylink_mac_link_down)
+ ds->ops->phylink_mac_link_down(ds, port,
+ MLO_AN_FIXED, PHY_INTERFACE_MODE_NA);
++ of_node_put(phy_np);
+ return dsa_port_phylink_register(dp);
+ }
++ of_node_put(phy_np);
+ return 0;
+ }
+
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 99db2e41ed10f..8cf86e42c1d1c 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -459,14 +459,12 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev,
+ __be16 proto)
+ {
+ struct ip_tunnel *tunnel = netdev_priv(dev);
+-
+- if (tunnel->parms.o_flags & TUNNEL_SEQ)
+- tunnel->o_seqno++;
++ __be16 flags = tunnel->parms.o_flags;
+
+ /* Push GRE header. */
+ gre_build_header(skb, tunnel->tun_hlen,
+- tunnel->parms.o_flags, proto, tunnel->parms.o_key,
+- htonl(tunnel->o_seqno));
++ flags, proto, tunnel->parms.o_key,
++ (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno)) : 0);
+
+ ip_tunnel_xmit(skb, dev, tnl_params, tnl_params->protocol);
+ }
+@@ -504,7 +502,7 @@ static void gre_fb_xmit(struct sk_buff *skb, struct net_device *dev,
+ (TUNNEL_CSUM | TUNNEL_KEY | TUNNEL_SEQ);
+ gre_build_header(skb, tunnel_hlen, flags, proto,
+ tunnel_id_to_key32(tun_info->key.tun_id),
+- (flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++) : 0);
++ (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno)) : 0);
+
+ ip_md_tunnel_xmit(skb, dev, IPPROTO_GRE, tunnel_hlen);
+
+@@ -581,7 +579,7 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
+ }
+
+ gre_build_header(skb, 8, TUNNEL_SEQ,
+- proto, 0, htonl(tunnel->o_seqno++));
++ proto, 0, htonl(atomic_fetch_inc(&tunnel->o_seqno)));
+
+ ip_md_tunnel_xmit(skb, dev, IPPROTO_GRE, tunnel_hlen);
+
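The reason for the atomic_t conversion in this file: tunnel->o_seqno++ is a plain read-modify-write, so two CPUs transmitting through the same tunnel can both read the same value and emit duplicate GRE sequence numbers. atomic_fetch_inc() allocates each number exactly once without a lock. A standalone C11 sketch of the same idiom:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static atomic_uint o_seqno;	/* plays the role of tunnel->o_seqno */

/* each transmit path gets a unique number, with no lock shared
 * between callers; relaxed ordering suffices for a counter */
static uint32_t next_seqno(void)
{
	return atomic_fetch_add_explicit(&o_seqno, 1, memory_order_relaxed);
}

int main(void)
{
	for (int i = 0; i < 3; i++)
		printf("seqno %u\n", next_seqno());
	return 0;
}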
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 2cb3b852d1486..f33c31dd7366c 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -281,6 +281,7 @@ bool cookie_ecn_ok(const struct tcp_options_received *tcp_opt,
+ EXPORT_SYMBOL(cookie_ecn_ok);
+
+ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
++ const struct tcp_request_sock_ops *af_ops,
+ struct sock *sk,
+ struct sk_buff *skb)
+ {
+@@ -297,6 +298,10 @@ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
+ return NULL;
+
+ treq = tcp_rsk(req);
++
++ /* treq->af_specific might be used to perform TCP_MD5 lookup */
++ treq->af_specific = af_ops;
++
+ treq->syn_tos = TCP_SKB_CB(skb)->ip_dsfield;
+ #if IS_ENABLED(CONFIG_MPTCP)
+ treq->is_mptcp = sk_is_mptcp(sk);
+@@ -364,7 +369,8 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
+ goto out;
+
+ ret = NULL;
+- req = cookie_tcp_reqsk_alloc(&tcp_request_sock_ops, sk, skb);
++ req = cookie_tcp_reqsk_alloc(&tcp_request_sock_ops,
++ &tcp_request_sock_ipv4_ops, sk, skb);
+ if (!req)
+ goto out;
+
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index bfe4112e000c0..7bf84ce34d9e7 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3867,7 +3867,8 @@ static int tcp_ack(struct sock *sk, const struct sk_buff *skb, int flag)
+ tcp_process_tlp_ack(sk, ack, flag);
+
+ if (tcp_ack_is_dubious(sk, flag)) {
+- if (!(flag & (FLAG_SND_UNA_ADVANCED | FLAG_NOT_DUP))) {
++ if (!(flag & (FLAG_SND_UNA_ADVANCED |
++ FLAG_NOT_DUP | FLAG_DSACKING_ACK))) {
+ num_dupack = 1;
+ /* Consider if pure acks were aggregated in tcp_add_backlog() */
+ if (!(flag & FLAG_DATA))
+@@ -5437,7 +5438,17 @@ static void tcp_new_space(struct sock *sk)
+ INDIRECT_CALL_1(sk->sk_write_space, sk_stream_write_space, sk);
+ }
+
+-static void tcp_check_space(struct sock *sk)
++/* Caller made space either from:
++ * 1) Freeing skbs in rtx queues (after tp->snd_una has advanced)
++ * 2) Sent skbs from output queue (and thus advancing tp->snd_nxt)
++ *
++ * We might be able to generate EPOLLOUT to the application if:
++ * 1) Space consumed in output/rtx queues is below sk->sk_sndbuf/2
++ * 2) notsent amount (tp->write_seq - tp->snd_nxt) became
++ * small enough that tcp_stream_memory_free() decides it
++ * is time to generate EPOLLOUT.
++ */
++void tcp_check_space(struct sock *sk)
+ {
+ /* pairs with tcp_poll() */
+ smp_mb();
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 7c2d3ac2363ac..492737d2b7d35 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -531,7 +531,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ newtp->tsoffset = treq->ts_off;
+ #ifdef CONFIG_TCP_MD5SIG
+ newtp->md5sig_info = NULL; /*XXX*/
+- if (newtp->af_specific->md5_lookup(sk, newsk))
++ if (treq->af_specific->req_md5_lookup(sk, req_to_sk(req)))
+ newtp->tcp_header_len += TCPOLEN_MD5SIG_ALIGNED;
+ #endif
+ if (skb->len >= TCP_MSS_DEFAULT + newtp->tcp_header_len)
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 257780f93305f..0b5eab6851549 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -82,6 +82,7 @@ static void tcp_event_new_data_sent(struct sock *sk, struct sk_buff *skb)
+
+ NET_ADD_STATS(sock_net(sk), LINUX_MIB_TCPORIGDATASENT,
+ tcp_skb_pcount(skb));
++ tcp_check_space(sk);
+ }
+
+ /* SND.NXT, if window was not shrunk or the amount of shrunk was less than one
+diff --git a/net/ipv4/tcp_rate.c b/net/ipv4/tcp_rate.c
+index fbab921670cc9..9a8e014d9b5b9 100644
+--- a/net/ipv4/tcp_rate.c
++++ b/net/ipv4/tcp_rate.c
+@@ -74,27 +74,32 @@ void tcp_rate_skb_sent(struct sock *sk, struct sk_buff *skb)
+ *
+ * If an ACK (s)acks multiple skbs (e.g., stretched-acks), this function is
+ * called multiple times. We favor the information from the most recently
+- * sent skb, i.e., the skb with the highest prior_delivered count.
++ * sent skb, i.e., the skb with the most recently sent time and the highest
++ * sequence.
+ */
+ void tcp_rate_skb_delivered(struct sock *sk, struct sk_buff *skb,
+ struct rate_sample *rs)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_skb_cb *scb = TCP_SKB_CB(skb);
++ u64 tx_tstamp;
+
+ if (!scb->tx.delivered_mstamp)
+ return;
+
++ tx_tstamp = tcp_skb_timestamp_us(skb);
+ if (!rs->prior_delivered ||
+- after(scb->tx.delivered, rs->prior_delivered)) {
++ tcp_skb_sent_after(tx_tstamp, tp->first_tx_mstamp,
++ scb->end_seq, rs->last_end_seq)) {
+ rs->prior_delivered_ce = scb->tx.delivered_ce;
+ rs->prior_delivered = scb->tx.delivered;
+ rs->prior_mstamp = scb->tx.delivered_mstamp;
+ rs->is_app_limited = scb->tx.is_app_limited;
+ rs->is_retrans = scb->sacked & TCPCB_RETRANS;
++ rs->last_end_seq = scb->end_seq;
+
+ /* Record send time of most recently ACKed packet: */
+- tp->first_tx_mstamp = tcp_skb_timestamp_us(skb);
++ tp->first_tx_mstamp = tx_tstamp;
+ /* Find the duration of the "send phase" of this window: */
+ rs->interval_us = tcp_stamp_us_delta(tp->first_tx_mstamp,
+ scb->tx.first_tx_mstamp);
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 9762367361463..5136959b3dc5d 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -724,6 +724,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ {
+ struct ip6_tnl *tunnel = netdev_priv(dev);
+ __be16 protocol;
++ __be16 flags;
+
+ if (dev->type == ARPHRD_ETHER)
+ IPCB(skb)->flags = 0;
+@@ -739,7 +740,6 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ if (tunnel->parms.collect_md) {
+ struct ip_tunnel_info *tun_info;
+ const struct ip_tunnel_key *key;
+- __be16 flags;
+ int tun_hlen;
+
+ tun_info = skb_tunnel_info_txcheck(skb);
+@@ -766,19 +766,19 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ gre_build_header(skb, tun_hlen,
+ flags, protocol,
+ tunnel_id_to_key32(tun_info->key.tun_id),
+- (flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++)
++ (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno))
+ : 0);
+
+ } else {
+- if (tunnel->parms.o_flags & TUNNEL_SEQ)
+- tunnel->o_seqno++;
+-
+ if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
+ return -ENOMEM;
+
+- gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
++ flags = tunnel->parms.o_flags;
++
++ gre_build_header(skb, tunnel->tun_hlen, flags,
+ protocol, tunnel->parms.o_key,
+- htonl(tunnel->o_seqno));
++ (flags & TUNNEL_SEQ) ? htonl(atomic_fetch_inc(&tunnel->o_seqno))
++ : 0);
+ }
+
+ return ip6_tnl_xmit(skb, dev, dsfield, fl6, encap_limit, pmtu,
+@@ -1056,7 +1056,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
+ /* Push GRE header. */
+ proto = (t->parms.erspan_ver == 1) ? htons(ETH_P_ERSPAN)
+ : htons(ETH_P_ERSPAN2);
+- gre_build_header(skb, 8, TUNNEL_SEQ, proto, 0, htonl(t->o_seqno++));
++ gre_build_header(skb, 8, TUNNEL_SEQ, proto, 0, htonl(atomic_fetch_inc(&t->o_seqno)));
+
+ /* TooBig packet may have updated dst->dev's mtu */
+ if (!t->parms.collect_md && dst && dst_mtu(dst) > dst->dev->mtu)
+diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c
+index 6ab710b5a1a82..118e834e91902 100644
+--- a/net/ipv6/netfilter.c
++++ b/net/ipv6/netfilter.c
+@@ -24,14 +24,13 @@ int ip6_route_me_harder(struct net *net, struct sock *sk_partial, struct sk_buff
+ {
+ const struct ipv6hdr *iph = ipv6_hdr(skb);
+ struct sock *sk = sk_to_full_sk(sk_partial);
++ struct net_device *dev = skb_dst(skb)->dev;
+ struct flow_keys flkeys;
+ unsigned int hh_len;
+ struct dst_entry *dst;
+ int strict = (ipv6_addr_type(&iph->daddr) &
+ (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL));
+ struct flowi6 fl6 = {
+- .flowi6_oif = sk && sk->sk_bound_dev_if ? sk->sk_bound_dev_if :
+- strict ? skb_dst(skb)->dev->ifindex : 0,
+ .flowi6_mark = skb->mark,
+ .flowi6_uid = sock_net_uid(net, sk),
+ .daddr = iph->daddr,
+@@ -39,6 +38,13 @@ int ip6_route_me_harder(struct net *net, struct sock *sk_partial, struct sk_buff
+ };
+ int err;
+
++ if (sk && sk->sk_bound_dev_if)
++ fl6.flowi6_oif = sk->sk_bound_dev_if;
++ else if (strict)
++ fl6.flowi6_oif = dev->ifindex;
++ else
++ fl6.flowi6_oif = l3mdev_master_ifindex(dev);
++
+ fib6_rules_early_flow_dissect(net, skb, &fl6, &flkeys);
+ dst = ip6_route_output(net, sk, &fl6);
+ err = dst->error;
+diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
+index d1b61d00368e1..9cc123f000fbc 100644
+--- a/net/ipv6/syncookies.c
++++ b/net/ipv6/syncookies.c
+@@ -170,7 +170,8 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
+ goto out;
+
+ ret = NULL;
+- req = cookie_tcp_reqsk_alloc(&tcp6_request_sock_ops, sk, skb);
++ req = cookie_tcp_reqsk_alloc(&tcp6_request_sock_ops,
++ &tcp_request_sock_ipv6_ops, sk, skb);
+ if (!req)
+ goto out;
+
+diff --git a/net/mctp/device.c b/net/mctp/device.c
+index f86ef6d751bdc..9150b9789d251 100644
+--- a/net/mctp/device.c
++++ b/net/mctp/device.c
+@@ -312,6 +312,7 @@ void mctp_dev_hold(struct mctp_dev *mdev)
+ void mctp_dev_put(struct mctp_dev *mdev)
+ {
+ if (mdev && refcount_dec_and_test(&mdev->refs)) {
++ kfree(mdev->addrs);
+ dev_put(mdev->dev);
+ kfree_rcu(mdev, rcu);
+ }
+@@ -440,7 +441,6 @@ static void mctp_unregister(struct net_device *dev)
+
+ mctp_route_remove_dev(mdev);
+ mctp_neigh_remove_dev(mdev);
+- kfree(mdev->addrs);
+
+ mctp_dev_put(mdev);
+ }
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index 2c467c422dc63..fb67f1ca2495b 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1495,7 +1495,7 @@ int __init ip_vs_conn_init(void)
+ pr_info("Connection hash table configured "
+ "(size=%d, memory=%ldKbytes)\n",
+ ip_vs_conn_tab_size,
+- (long)(ip_vs_conn_tab_size*sizeof(struct list_head))/1024);
++ (long)(ip_vs_conn_tab_size*sizeof(*ip_vs_conn_tab))/1024);
+ IP_VS_DBG(0, "Each connection entry needs %zd bytes at least\n",
+ sizeof(struct ip_vs_conn));
+
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 8ec55cd72572e..204a5cdff5b11 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -556,24 +556,14 @@ static bool tcp_in_window(struct nf_conn *ct,
+ }
+
+ }
+- } else if (((state->state == TCP_CONNTRACK_SYN_SENT
+- && dir == IP_CT_DIR_ORIGINAL)
+- || (state->state == TCP_CONNTRACK_SYN_RECV
+- && dir == IP_CT_DIR_REPLY))
+- && after(end, sender->td_end)) {
++ } else if (tcph->syn &&
++ after(end, sender->td_end) &&
++ (state->state == TCP_CONNTRACK_SYN_SENT ||
++ state->state == TCP_CONNTRACK_SYN_RECV)) {
+ /*
+ * RFC 793: "if a TCP is reinitialized ... then it need
+ * not wait at all; it must only be sure to use sequence
+ * numbers larger than those recently used."
+- */
+- sender->td_end =
+- sender->td_maxend = end;
+- sender->td_maxwin = (win == 0 ? 1 : win);
+-
+- tcp_options(skb, dataoff, tcph, sender);
+- } else if (tcph->syn && dir == IP_CT_DIR_REPLY &&
+- state->state == TCP_CONNTRACK_SYN_SENT) {
+- /* Retransmitted syn-ack, or syn (simultaneous open).
+ *
+ * Re-init state for this direction, just like for the first
+ * syn(-ack) reply, it might differ in seq, ack or tcp options.
+@@ -581,7 +571,8 @@ static bool tcp_in_window(struct nf_conn *ct,
+ tcp_init_sender(sender, receiver,
+ skb, dataoff, tcph,
+ end, win);
+- if (!tcph->ack)
++
++ if (dir == IP_CT_DIR_REPLY && !tcph->ack)
+ return true;
+ }
+
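The two SYN branches collapse into one: any SYN that advances past the sender's recorded end while conntrack is still in a SYN state re-initializes that sender, and only the reply-direction SYN without ACK (simultaneous open) returns early. A sketch restating the merged predicates, boiled down from the hunk:

#include <stdbool.h>

enum ct_state { SYN_SENT, SYN_RECV, ESTABLISHED };
enum ct_dir   { ORIGINAL, REPLY };

static bool should_reinit_sender(bool syn, bool past_end, enum ct_state st)
{
	return syn && past_end && (st == SYN_SENT || st == SYN_RECV);
}

static bool early_return(enum ct_dir dir, bool ack)
{
	return dir == REPLY && !ack;	/* simultaneous open */
}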
+diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
+index 3e1afd10a9b60..55aa55b252b20 100644
+--- a/net/netfilter/nf_conntrack_standalone.c
++++ b/net/netfilter/nf_conntrack_standalone.c
+@@ -823,7 +823,7 @@ static struct ctl_table nf_ct_sysctl_table[] = {
+ .mode = 0644,
+ .proc_handler = proc_dointvec_jiffies,
+ },
+-#if IS_ENABLED(CONFIG_NFT_FLOW_OFFLOAD)
++#if IS_ENABLED(CONFIG_NF_FLOW_TABLE)
+ [NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_OFFLOAD] = {
+ .procname = "nf_flowtable_udp_timeout",
+ .maxlen = sizeof(unsigned int),
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index d600a566da324..7325bee7d1442 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -349,7 +349,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ *ext = &rbe->ext;
+ return -EEXIST;
+ } else {
+- p = &parent->rb_left;
++ overlap = false;
++ if (nft_rbtree_interval_end(rbe))
++ p = &parent->rb_left;
++ else
++ p = &parent->rb_right;
+ }
+ }
+
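The replaced line always descended left after a matching non-overlapping end element; the fix resumes comparison-driven descent so later lookups and overlap checks still find the element. Generic shape of a correct binary-tree descent (illustrative types, no balancing shown):

struct node { struct node *left, *right; int key; };

/* Each step must branch on the comparison at that node; a fixed
 * one-sided descent, as in the removed line, corrupts the search
 * order for everything inserted afterwards. */
static struct node **descend(struct node **root, int key)
{
	struct node **p = root;

	while (*p)
		p = key < (*p)->key ? &(*p)->left : &(*p)->right;
	return p;	/* link point for the new node */
}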
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index b8f0111457650..9ad9cc0d1d27c 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -53,6 +53,32 @@ nft_sock_get_eval_cgroupv2(u32 *dest, struct sock *sk, const struct nft_pktinfo
+ }
+ #endif
+
++static struct sock *nft_socket_do_lookup(const struct nft_pktinfo *pkt)
++{
++ const struct net_device *indev = nft_in(pkt);
++ const struct sk_buff *skb = pkt->skb;
++ struct sock *sk = NULL;
++
++ if (!indev)
++ return NULL;
++
++ switch (nft_pf(pkt)) {
++ case NFPROTO_IPV4:
++ sk = nf_sk_lookup_slow_v4(nft_net(pkt), skb, indev);
++ break;
++#if IS_ENABLED(CONFIG_NF_TABLES_IPV6)
++ case NFPROTO_IPV6:
++ sk = nf_sk_lookup_slow_v6(nft_net(pkt), skb, indev);
++ break;
++#endif
++ default:
++ WARN_ON_ONCE(1);
++ break;
++ }
++
++ return sk;
++}
++
+ static void nft_socket_eval(const struct nft_expr *expr,
+ struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
+@@ -66,20 +92,7 @@ static void nft_socket_eval(const struct nft_expr *expr,
+ sk = NULL;
+
+ if (!sk)
+- switch(nft_pf(pkt)) {
+- case NFPROTO_IPV4:
+- sk = nf_sk_lookup_slow_v4(nft_net(pkt), skb, nft_in(pkt));
+- break;
+-#if IS_ENABLED(CONFIG_NF_TABLES_IPV6)
+- case NFPROTO_IPV6:
+- sk = nf_sk_lookup_slow_v6(nft_net(pkt), skb, nft_in(pkt));
+- break;
+-#endif
+- default:
+- WARN_ON_ONCE(1);
+- regs->verdict.code = NFT_BREAK;
+- return;
+- }
++ sk = nft_socket_do_lookup(pkt);
+
+ if (!sk) {
+ regs->verdict.code = NFT_BREAK;
+@@ -197,6 +210,16 @@ static int nft_socket_dump(struct sk_buff *skb,
+ return 0;
+ }
+
++static int nft_socket_validate(const struct nft_ctx *ctx,
++ const struct nft_expr *expr,
++ const struct nft_data **data)
++{
++ return nft_chain_validate_hooks(ctx->chain,
++ (1 << NF_INET_PRE_ROUTING) |
++ (1 << NF_INET_LOCAL_IN) |
++ (1 << NF_INET_LOCAL_OUT));
++}
++
+ static struct nft_expr_type nft_socket_type;
+ static const struct nft_expr_ops nft_socket_ops = {
+ .type = &nft_socket_type,
+@@ -204,6 +227,7 @@ static const struct nft_expr_ops nft_socket_ops = {
+ .eval = nft_socket_eval,
+ .init = nft_socket_init,
+ .dump = nft_socket_dump,
++ .validate = nft_socket_validate,
+ };
+
+ static struct nft_expr_type nft_socket_type __read_mostly = {
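The new ->validate callback restricts nft_socket to hook points where the expression can actually work, matching the nft_socket_do_lookup() change above that requires an input device. A simplified model of the bitmask check, in the spirit of nft_chain_validate_hooks() (hook numbering here is illustrative):

enum { PRE_ROUTING, LOCAL_IN, FORWARD, LOCAL_OUT, POST_ROUTING };

static int validate_hook(unsigned int hook)
{
	unsigned int allowed = (1U << PRE_ROUTING) |
			       (1U << LOCAL_IN) |
			       (1U << LOCAL_OUT);

	return (allowed & (1U << hook)) ? 0 : -95;	/* -EOPNOTSUPP */
}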
+diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
+index b3815b568e8e5..463c4a58d2c36 100644
+--- a/net/sctp/sm_sideeffect.c
++++ b/net/sctp/sm_sideeffect.c
+@@ -458,6 +458,10 @@ void sctp_generate_reconf_event(struct timer_list *t)
+ goto out_unlock;
+ }
+
++ /* This happens when the response arrives after the timer is triggered. */
++ if (!asoc->strreset_chunk)
++ goto out_unlock;
++
+ error = sctp_do_sm(net, SCTP_EVENT_T_TIMEOUT,
+ SCTP_ST_TIMEOUT(SCTP_EVENT_TIMEOUT_RECONF),
+ asoc->state, asoc->ep, asoc,
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 68cd110722a4a..fd9d9cfd0f3dd 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1357,6 +1357,8 @@ static void smc_connect_work(struct work_struct *work)
+ smc->sk.sk_state = SMC_CLOSED;
+ if (rc == -EPIPE || rc == -EAGAIN)
+ smc->sk.sk_err = EPIPE;
++ else if (rc == -ECONNREFUSED)
++ smc->sk.sk_err = ECONNREFUSED;
+ else if (signal_pending(current))
+ smc->sk.sk_err = -sock_intr_errno(timeo);
+ sock_put(&smc->sk); /* passive closing */
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index b932469ee69cc..a40553e83f8b2 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -483,11 +483,13 @@ handle_error:
+ copy = min_t(size_t, size, (pfrag->size - pfrag->offset));
+ copy = min_t(size_t, copy, (max_open_record_len - record->len));
+
+- rc = tls_device_copy_data(page_address(pfrag->page) +
+- pfrag->offset, copy, msg_iter);
+- if (rc)
+- goto handle_error;
+- tls_append_frag(record, pfrag, copy);
++ if (copy) {
++ rc = tls_device_copy_data(page_address(pfrag->page) +
++ pfrag->offset, copy, msg_iter);
++ if (rc)
++ goto handle_error;
++ tls_append_frag(record, pfrag, copy);
++ }
+
+ size -= copy;
+ if (!size) {
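copy can legitimately clamp to zero here — the page fragment or the open record may already be full — and the old code still performed the copy and appended an empty fragment. Stand-alone sketch of the double clamp; the caller must tolerate a zero result, as the new if (copy) guard does:

#include <stddef.h>

static size_t clamp_copy(size_t size, size_t frag_room, size_t record_room)
{
	size_t copy = size;

	if (copy > frag_room)
		copy = frag_room;
	if (copy > record_room)
		copy = record_room;
	return copy;	/* may be 0 */
}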
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index ac343cd8ff3f6..39a82bfb5caa3 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -640,7 +640,7 @@ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len
+ if (sk_can_busy_loop(sk))
+ sk_busy_loop(sk, 1); /* only support non-blocking sockets */
+
+- if (xsk_no_wakeup(sk))
++ if (xs->zc && xsk_no_wakeup(sk))
+ return 0;
+
+ pool = xs->pool;
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 8b0a16ba27d39..a8fe01764b254 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -424,6 +424,15 @@ static const struct config_entry config_table[] = {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = 0x54c8,
+ },
++ /* RaptorLake-P */
++ {
++ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++ .device = 0x51ca,
++ },
++ {
++ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++ .device = 0x51cb,
++ },
+ #endif
+
+ };
+diff --git a/sound/soc/codecs/cs35l41-lib.c b/sound/soc/codecs/cs35l41-lib.c
+index e5a56bcbb223d..281a710a41231 100644
+--- a/sound/soc/codecs/cs35l41-lib.c
++++ b/sound/soc/codecs/cs35l41-lib.c
+@@ -831,12 +831,14 @@ int cs35l41_otp_unpack(struct device *dev, struct regmap *regmap)
+ GENMASK(bit_offset + otp_map[i].size - 33, 0)) <<
+ (32 - bit_offset);
+ bit_offset += otp_map[i].size - 32;
+- } else {
++ } else if (bit_offset + otp_map[i].size - 1 >= 0) {
+ otp_val = (otp_mem[word_offset] &
+ GENMASK(bit_offset + otp_map[i].size - 1, bit_offset)
+ ) >> bit_offset;
+ bit_offset += otp_map[i].size;
+- }
++ } else /* both bit_offset and otp_map[i].size are 0 */
++ otp_val = 0;
++
+ bit_sum += otp_map[i].size;
+
+ if (bit_offset == 32) {
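The new else-if covers the case where both bit_offset and otp_map[i].size are zero: the old else evaluated GENMASK(-1, 0), i.e. a shift by the full word width, which is undefined behaviour in C. A stand-alone model of the macro:

#include <stdio.h>
#include <stdint.h>

#define GENMASK32(h, l) \
	(((~UINT32_C(0)) - (UINT32_C(1) << (l)) + 1) & \
	 (~UINT32_C(0) >> (31 - (h))))

int main(void)
{
	printf("%#x\n", GENMASK32(7, 4));	/* 0xf0 */
	/* GENMASK32(-1, 0) would shift ~0 right by 32 bits:
	 * undefined behaviour, which the new guard avoids. */
	return 0;
}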
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index c9ff9c89adf70..2b6c6d6b9771e 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -1100,6 +1100,15 @@ void rt5682_jack_detect_handler(struct work_struct *work)
+ return;
+ }
+
++ if (rt5682->is_sdw) {
++ if (pm_runtime_status_suspended(rt5682->slave->dev.parent)) {
++ dev_dbg(&rt5682->slave->dev,
++ "%s: parent device is pm_runtime_status_suspended, skipping jack detection\n",
++ __func__);
++ return;
++ }
++ }
++
+ dapm = snd_soc_component_get_dapm(rt5682->component);
+
+ snd_soc_dapm_mutex_lock(dapm);
+diff --git a/sound/soc/codecs/rt711.c b/sound/soc/codecs/rt711.c
+index 6770825d037a8..ea25fd58d43a9 100644
+--- a/sound/soc/codecs/rt711.c
++++ b/sound/soc/codecs/rt711.c
+@@ -245,6 +245,13 @@ static void rt711_jack_detect_handler(struct work_struct *work)
+ if (!rt711->component->card->instantiated)
+ return;
+
++ if (pm_runtime_status_suspended(rt711->slave->dev.parent)) {
++ dev_dbg(&rt711->slave->dev,
++ "%s: parent device is pm_runtime_status_suspended, skipping jack detection\n",
++ __func__);
++ return;
++ }
++
+ reg = RT711_VERB_GET_PIN_SENSE | RT711_HP_OUT;
+ ret = regmap_read(rt711->regmap, reg, &jack_status);
+ if (ret < 0)
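This rt711 hunk and the rt5682 one above add the same guard: deferred jack-detection work must not touch bus registers while the parent controller is runtime-suspended, or the I/O races with suspend. Kernel-context sketch; struct my_codec is a hypothetical stand-in for the driver's private data:

#include <linux/pm_runtime.h>
#include <linux/soundwire/sdw.h>

struct my_codec {
	struct sdw_slave *slave;
};

static void jack_work(struct my_codec *codec)
{
	if (pm_runtime_status_suspended(codec->slave->dev.parent))
		return;	/* run again after the parent resumes */
	/* register I/O over the bus is only safe past this point */
}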
+diff --git a/sound/soc/codecs/wm8731.c b/sound/soc/codecs/wm8731.c
+index 86b1f6eaa5991..518167d90b105 100644
+--- a/sound/soc/codecs/wm8731.c
++++ b/sound/soc/codecs/wm8731.c
+@@ -602,7 +602,7 @@ static int wm8731_hw_init(struct device *dev, struct wm8731_priv *wm8731)
+ ret = wm8731_reset(wm8731->regmap);
+ if (ret < 0) {
+ dev_err(dev, "Failed to issue reset: %d\n", ret);
+- goto err_regulator_enable;
++ goto err;
+ }
+
+ /* Clear POWEROFF, keep everything else disabled */
+@@ -619,10 +619,7 @@ static int wm8731_hw_init(struct device *dev, struct wm8731_priv *wm8731)
+
+ regcache_mark_dirty(wm8731->regmap);
+
+-err_regulator_enable:
+- /* Regulators will be enabled by bias management */
+- regulator_bulk_disable(ARRAY_SIZE(wm8731->supplies), wm8731->supplies);
+-
++err:
+ return ret;
+ }
+
+@@ -760,21 +757,27 @@ static int wm8731_i2c_probe(struct i2c_client *i2c,
+ ret = PTR_ERR(wm8731->regmap);
+ dev_err(&i2c->dev, "Failed to allocate register map: %d\n",
+ ret);
+- return ret;
++ goto err_regulator_enable;
+ }
+
+ ret = wm8731_hw_init(&i2c->dev, wm8731);
+ if (ret != 0)
+- return ret;
++ goto err_regulator_enable;
+
+ ret = devm_snd_soc_register_component(&i2c->dev,
+ &soc_component_dev_wm8731, &wm8731_dai, 1);
+ if (ret != 0) {
+ dev_err(&i2c->dev, "Failed to register CODEC: %d\n", ret);
+- return ret;
++ goto err_regulator_enable;
+ }
+
+ return 0;
++
++err_regulator_enable:
++ /* Regulators will be enabled by bias management */
++ regulator_bulk_disable(ARRAY_SIZE(wm8731->supplies), wm8731->supplies);
++
++ return ret;
+ }
+
+ static int wm8731_i2c_remove(struct i2c_client *client)
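The unwind moves out of wm8731_hw_init() and into the i2c probe path, so every failure after the supplies are enabled funnels through one label in the function that enabled them. The classic goto-cleanup shape, with hypothetical stubs standing in for the real helpers:

#include <stdio.h>

static int  enable_regulators(void)  { return 0; }	/* stubs */
static int  init_hw(void)            { return 0; }
static int  register_component(void) { return -1; }	/* force the error path */
static void disable_regulators(void) { puts("regulators disabled"); }

static int probe_demo(void)
{
	int ret = enable_regulators();

	if (ret)
		return ret;

	ret = init_hw();
	if (ret)
		goto err_regulator_enable;

	ret = register_component();
	if (ret)
		goto err_regulator_enable;

	return 0;

err_regulator_enable:
	disable_regulators();
	return ret;
}

int main(void) { return probe_demo() ? 1 : 0; }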
+diff --git a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+index e2658bca69318..3137cea78d48c 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+@@ -132,13 +132,13 @@ static const struct snd_soc_acpi_adr_device mx8373_1_adr[] = {
+ {
+ .adr = 0x000123019F837300ull,
+ .num_endpoints = 1,
+- .endpoints = &spk_l_endpoint,
++ .endpoints = &spk_r_endpoint,
+ .name_prefix = "Right"
+ },
+ {
+ .adr = 0x000127019F837300ull,
+ .num_endpoints = 1,
+- .endpoints = &spk_r_endpoint,
++ .endpoints = &spk_l_endpoint,
+ .name_prefix = "Left"
+ }
+ };
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 9a954680d4928..11c9853e9e807 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1214,7 +1214,7 @@ static int dpcm_be_connect(struct snd_soc_pcm_runtime *fe,
+ be_substream->pcm->nonatomic = 1;
+ }
+
+- dpcm = kzalloc(sizeof(struct snd_soc_dpcm), GFP_ATOMIC);
++ dpcm = kzalloc(sizeof(struct snd_soc_dpcm), GFP_KERNEL);
+ if (!dpcm)
+ return -ENOMEM;
+
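GFP_ATOMIC is for callers that cannot sleep (hard IRQ context, spinlock held); it dips into emergency reserves and fails more readily. dpcm_be_connect() runs in sleepable process context, so GFP_KERNEL — which may sleep and reclaim — is both safe and more robust here. Kernel-context sketch of the rule of thumb:

#include <linux/slab.h>

/* Illustrative only: the GFP class follows the calling context,
 * not the data being allocated. */
static void *alloc_for_context(size_t size, bool can_sleep)
{
	return kzalloc(size, can_sleep ? GFP_KERNEL : GFP_ATOMIC);
}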
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 3470813daf4aa..af675c8cf05d6 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -546,12 +546,12 @@ static int add_dead_ends(struct objtool_file *file)
+ else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
+ insn = find_last_insn(file, reloc->sym->sec);
+ if (!insn) {
+- WARN("can't find unreachable insn at %s+0x%x",
++ WARN("can't find unreachable insn at %s+0x%lx",
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
+ } else {
+- WARN("can't find unreachable insn at %s+0x%x",
++ WARN("can't find unreachable insn at %s+0x%lx",
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
+@@ -581,12 +581,12 @@ reachable:
+ else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
+ insn = find_last_insn(file, reloc->sym->sec);
+ if (!insn) {
+- WARN("can't find reachable insn at %s+0x%x",
++ WARN("can't find reachable insn at %s+0x%lx",
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
+ } else {
+- WARN("can't find reachable insn at %s+0x%x",
++ WARN("can't find reachable insn at %s+0x%lx",
+ reloc->sym->sec->name, reloc->addend);
+ return -1;
+ }
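These format fixes pair with the elf.c/elf.h hunks below that widen the reloc addend to long: printing a long through %x is undefined once the argument no longer matches the conversion. Quick stand-alone illustration:

#include <stdio.h>

int main(void)
{
	long addend = 0x1234beefL;

	printf("ok: %#lx\n", addend);	/* specifier matches long */
	/* printf("bad: %#x\n", addend); -- undefined behaviour */
	return 0;
}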
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index 4b384c907027e..2a7061e6f465a 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -486,7 +486,7 @@ static struct section *elf_create_reloc_section(struct elf *elf,
+ int reltype);
+
+ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+- unsigned int type, struct symbol *sym, int addend)
++ unsigned int type, struct symbol *sym, long addend)
+ {
+ struct reloc *reloc;
+
+@@ -515,37 +515,180 @@ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+ return 0;
+ }
+
+-int elf_add_reloc_to_insn(struct elf *elf, struct section *sec,
+- unsigned long offset, unsigned int type,
+- struct section *insn_sec, unsigned long insn_off)
++/*
++ * Ensure that any reloc section containing references to @sym is marked
++ * changed such that it will get re-generated in elf_rebuild_reloc_sections()
++ * with the new symbol index.
++ */
++static void elf_dirty_reloc_sym(struct elf *elf, struct symbol *sym)
++{
++ struct section *sec;
++
++ list_for_each_entry(sec, &elf->sections, list) {
++ struct reloc *reloc;
++
++ if (sec->changed)
++ continue;
++
++ list_for_each_entry(reloc, &sec->reloc_list, list) {
++ if (reloc->sym == sym) {
++ sec->changed = true;
++ break;
++ }
++ }
++ }
++}
++
++/*
++ * Move the first global symbol, as per sh_info, into a new, higher symbol
++ * index. This fees up the shndx for a new local symbol.
++ */
++static int elf_move_global_symbol(struct elf *elf, struct section *symtab,
++ struct section *symtab_shndx)
+ {
++ Elf_Data *data, *shndx_data = NULL;
++ Elf32_Word first_non_local;
+ struct symbol *sym;
+- int addend;
++ Elf_Scn *s;
+
+- if (insn_sec->sym) {
+- sym = insn_sec->sym;
+- addend = insn_off;
++ first_non_local = symtab->sh.sh_info;
+
+- } else {
+- /*
+- * The Clang assembler strips section symbols, so we have to
+- * reference the function symbol instead:
+- */
+- sym = find_symbol_containing(insn_sec, insn_off);
+- if (!sym) {
+- /*
+- * Hack alert. This happens when we need to reference
+- * the NOP pad insn immediately after the function.
+- */
+- sym = find_symbol_containing(insn_sec, insn_off - 1);
++ sym = find_symbol_by_index(elf, first_non_local);
++ if (!sym) {
++ WARN("no non-local symbols !?");
++ return first_non_local;
++ }
++
++ s = elf_getscn(elf->elf, symtab->idx);
++ if (!s) {
++ WARN_ELF("elf_getscn");
++ return -1;
++ }
++
++ data = elf_newdata(s);
++ if (!data) {
++ WARN_ELF("elf_newdata");
++ return -1;
++ }
++
++ data->d_buf = &sym->sym;
++ data->d_size = sizeof(sym->sym);
++ data->d_align = 1;
++ data->d_type = ELF_T_SYM;
++
++ sym->idx = symtab->sh.sh_size / sizeof(sym->sym);
++ elf_dirty_reloc_sym(elf, sym);
++
++ symtab->sh.sh_info += 1;
++ symtab->sh.sh_size += data->d_size;
++ symtab->changed = true;
++
++ if (symtab_shndx) {
++ s = elf_getscn(elf->elf, symtab_shndx->idx);
++ if (!s) {
++ WARN_ELF("elf_getscn");
++ return -1;
+ }
+
+- if (!sym) {
+- WARN("can't find symbol containing %s+0x%lx", insn_sec->name, insn_off);
++ shndx_data = elf_newdata(s);
++ if (!shndx_data) {
++ WARN_ELF("elf_newshndx_data");
+ return -1;
+ }
+
+- addend = insn_off - sym->offset;
++ shndx_data->d_buf = &sym->sec->idx;
++ shndx_data->d_size = sizeof(Elf32_Word);
++ shndx_data->d_align = 4;
++ shndx_data->d_type = ELF_T_WORD;
++
++ symtab_shndx->sh.sh_size += 4;
++ symtab_shndx->changed = true;
++ }
++
++ return first_non_local;
++}
++
++static struct symbol *
++elf_create_section_symbol(struct elf *elf, struct section *sec)
++{
++ struct section *symtab, *symtab_shndx;
++ Elf_Data *shndx_data = NULL;
++ struct symbol *sym;
++ Elf32_Word shndx;
++
++ symtab = find_section_by_name(elf, ".symtab");
++ if (symtab) {
++ symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
++ if (symtab_shndx)
++ shndx_data = symtab_shndx->data;
++ } else {
++ WARN("no .symtab");
++ return NULL;
++ }
++
++ sym = malloc(sizeof(*sym));
++ if (!sym) {
++ perror("malloc");
++ return NULL;
++ }
++ memset(sym, 0, sizeof(*sym));
++
++ sym->idx = elf_move_global_symbol(elf, symtab, symtab_shndx);
++ if (sym->idx < 0) {
++ WARN("elf_move_global_symbol");
++ return NULL;
++ }
++
++ sym->name = sec->name;
++ sym->sec = sec;
++
++ // st_name 0
++ sym->sym.st_info = GELF_ST_INFO(STB_LOCAL, STT_SECTION);
++ // st_other 0
++ // st_value 0
++ // st_size 0
++ shndx = sec->idx;
++ if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
++ sym->sym.st_shndx = shndx;
++ if (!shndx_data)
++ shndx = 0;
++ } else {
++ sym->sym.st_shndx = SHN_XINDEX;
++ if (!shndx_data) {
++ WARN("no .symtab_shndx");
++ return NULL;
++ }
++ }
++
++ if (!gelf_update_symshndx(symtab->data, shndx_data, sym->idx, &sym->sym, shndx)) {
++ WARN_ELF("gelf_update_symshndx");
++ return NULL;
++ }
++
++ elf_add_symbol(elf, sym);
++
++ return sym;
++}
++
++int elf_add_reloc_to_insn(struct elf *elf, struct section *sec,
++ unsigned long offset, unsigned int type,
++ struct section *insn_sec, unsigned long insn_off)
++{
++ struct symbol *sym = insn_sec->sym;
++ int addend = insn_off;
++
++ if (!sym) {
++ /*
++ * Due to how weak functions work, we must use section based
++ * relocations. Symbol based relocations would result in the
++ * weak and non-weak function annotations being overlaid on the
++ * non-weak function after linking.
++ */
++ sym = elf_create_section_symbol(elf, insn_sec);
++ if (!sym)
++ return -1;
++
++ insn_sec->sym = sym;
+ }
+
+ return elf_add_reloc(elf, sec, offset, type, sym, addend);
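Background for the section-symbol machinery added above: after linking, a weak definition and its strong override resolve to the same final address, so symbol-based relocations could leave objtool's annotations attached to the overridden copy — hence the switch to section-based relocations, creating a section symbol on demand. Minimal demonstration of weak linkage; built alone this prints the default, while a strong backend() in another object file would silently replace it:

#include <stdio.h>

__attribute__((weak)) int backend(void)
{
	return 1;	/* default implementation */
}

int main(void)
{
	printf("backend() = %d\n", backend());
	return 0;
}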
+diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
+index d223367814017..18a26e3c13290 100644
+--- a/tools/objtool/include/objtool/elf.h
++++ b/tools/objtool/include/objtool/elf.h
+@@ -73,7 +73,7 @@ struct reloc {
+ struct symbol *sym;
+ unsigned long offset;
+ unsigned int type;
+- int addend;
++ long addend;
+ int idx;
+ bool jump_table_start;
+ };
+@@ -135,7 +135,7 @@ struct elf *elf_open_read(const char *name, int flags);
+ struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr);
+
+ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+- unsigned int type, struct symbol *sym, int addend);
++ unsigned int type, struct symbol *sym, long addend);
+ int elf_add_reloc_to_insn(struct elf *elf, struct section *sec,
+ unsigned long offset, unsigned int type,
+ struct section *insn_sec, unsigned long insn_off);
+diff --git a/tools/perf/arch/arm64/util/machine.c b/tools/perf/arch/arm64/util/machine.c
+index d2ce31e28cd79..41c1596e52071 100644
+--- a/tools/perf/arch/arm64/util/machine.c
++++ b/tools/perf/arch/arm64/util/machine.c
+@@ -8,27 +8,6 @@
+ #include "callchain.h"
+ #include "record.h"
+
+-/* On arm64, kernel text segment starts at high memory address,
+- * for example 0xffff 0000 8xxx xxxx. Modules start at a low memory
+- * address, like 0xffff 0000 00ax xxxx. When only small amount of
+- * memory is used by modules, gap between end of module's text segment
+- * and start of kernel text segment may reach 2G.
+- * Therefore do not fill this gap and do not assign it to the kernel dso map.
+- */
+-
+-#define SYMBOL_LIMIT (1 << 12) /* 4K */
+-
+-void arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+-{
+- if ((strchr(p->name, '[') && strchr(c->name, '[') == NULL) ||
+- (strchr(p->name, '[') == NULL && strchr(c->name, '[')))
+- /* Limit range of last symbol in module and kernel */
+- p->end += SYMBOL_LIMIT;
+- else
+- p->end = c->start;
+- pr_debug4("%s sym:%s end:%#" PRIx64 "\n", __func__, p->name, p->end);
+-}
+-
+ void arch__add_leaf_frame_record_opts(struct record_opts *opts)
+ {
+ opts->sample_user_regs |= sample_reg_masks[PERF_REG_ARM64_LR].mask;
+diff --git a/tools/perf/arch/powerpc/util/Build b/tools/perf/arch/powerpc/util/Build
+index 8a79c4126e5b4..0115f31665684 100644
+--- a/tools/perf/arch/powerpc/util/Build
++++ b/tools/perf/arch/powerpc/util/Build
+@@ -1,5 +1,4 @@
+ perf-y += header.o
+-perf-y += machine.o
+ perf-y += kvm-stat.o
+ perf-y += perf_regs.o
+ perf-y += mem-events.o
+diff --git a/tools/perf/arch/powerpc/util/machine.c b/tools/perf/arch/powerpc/util/machine.c
+deleted file mode 100644
+index e652a1aa81322..0000000000000
+--- a/tools/perf/arch/powerpc/util/machine.c
++++ /dev/null
+@@ -1,25 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-#include <inttypes.h>
+-#include <stdio.h>
+-#include <string.h>
+-#include <internal/lib.h> // page_size
+-#include "debug.h"
+-#include "symbol.h"
+-
+-/* On powerpc kernel text segment start at memory addresses, 0xc000000000000000
+- * whereas the modules are located at very high memory addresses,
+- * for example 0xc00800000xxxxxxx. The gap between end of kernel text segment
+- * and beginning of first module's text segment is very high.
+- * Therefore do not fill this gap and do not assign it to the kernel dso map.
+- */
+-
+-void arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+-{
+- if (strchr(p->name, '[') == NULL && strchr(c->name, '['))
+- /* Limit the range of last kernel symbol */
+- p->end += page_size;
+- else
+- p->end = c->start;
+- pr_debug4("%s sym:%s end:%#" PRIx64 "\n", __func__, p->name, p->end);
+-}
+diff --git a/tools/perf/arch/s390/util/machine.c b/tools/perf/arch/s390/util/machine.c
+index 7644a4f6d4a40..98bc3f39d5f35 100644
+--- a/tools/perf/arch/s390/util/machine.c
++++ b/tools/perf/arch/s390/util/machine.c
+@@ -35,19 +35,3 @@ int arch__fix_module_text_start(u64 *start, u64 *size, const char *name)
+
+ return 0;
+ }
+-
+-/* On s390 kernel text segment start is located at very low memory addresses,
+- * for example 0x10000. Modules are located at very high memory addresses,
+- * for example 0x3ff xxxx xxxx. The gap between end of kernel text segment
+- * and beginning of first module's text segment is very big.
+- * Therefore do not fill this gap and do not assign it to the kernel dso map.
+- */
+-void arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+-{
+- if (strchr(p->name, '[') == NULL && strchr(c->name, '['))
+- /* Last kernel symbol mapped to end of page */
+- p->end = roundup(p->end, page_size);
+- else
+- p->end = c->start;
+- pr_debug4("%s sym:%s end:%#" PRIx64 "\n", __func__, p->name, p->end);
+-}
+diff --git a/tools/perf/util/arm-spe.c b/tools/perf/util/arm-spe.c
+index d2b64e3f588b2..151cc38a171cf 100644
+--- a/tools/perf/util/arm-spe.c
++++ b/tools/perf/util/arm-spe.c
+@@ -1036,7 +1036,7 @@ arm_spe_synth_events(struct arm_spe *spe, struct perf_session *session)
+ attr.sample_type = evsel->core.attr.sample_type & PERF_SAMPLE_MASK;
+ attr.sample_type |= PERF_SAMPLE_IP | PERF_SAMPLE_TID |
+ PERF_SAMPLE_PERIOD | PERF_SAMPLE_DATA_SRC |
+- PERF_SAMPLE_WEIGHT;
++ PERF_SAMPLE_WEIGHT | PERF_SAMPLE_ADDR;
+ if (spe->timeless_decoding)
+ attr.sample_type &= ~(u64)PERF_SAMPLE_TIME;
+ else
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 31cd59a2b66e6..ecd377938eea8 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -1290,7 +1290,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
+ * For misannotated, zeroed, ASM function sizes.
+ */
+ if (nr > 0) {
+- symbols__fixup_end(&dso->symbols);
++ symbols__fixup_end(&dso->symbols, false);
+ symbols__fixup_duplicate(&dso->symbols);
+ if (kmap) {
+ /*
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index dfde9eada224a..24dd8c08e901f 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -101,11 +101,6 @@ static int prefix_underscores_count(const char *str)
+ return tail - str;
+ }
+
+-void __weak arch__symbols__fixup_end(struct symbol *p, struct symbol *c)
+-{
+- p->end = c->start;
+-}
+-
+ const char * __weak arch__normalize_symbol_name(const char *name)
+ {
+ return name;
+@@ -217,7 +212,8 @@ again:
+ }
+ }
+
+-void symbols__fixup_end(struct rb_root_cached *symbols)
++/* Update zero-sized symbols using the address of the next symbol */
++void symbols__fixup_end(struct rb_root_cached *symbols, bool is_kallsyms)
+ {
+ struct rb_node *nd, *prevnd = rb_first_cached(symbols);
+ struct symbol *curr, *prev;
+@@ -231,8 +227,29 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
+ prev = curr;
+ curr = rb_entry(nd, struct symbol, rb_node);
+
+- if (prev->end == prev->start || prev->end != curr->start)
+- arch__symbols__fixup_end(prev, curr);
++ /*
++ * On some architecture kernel text segment start is located at
++ * some low memory address, while modules are located at high
++ * memory addresses (or vice versa). The gap between end of
++ * kernel text segment and beginning of first module's text
++ * segment is very big. Therefore do not fill this gap and do
++ * not assign it to the kernel dso map (kallsyms).
++ *
++ * In kallsyms, it determines module symbols using '[' character
++ * like in:
++ * ffffffffc1937000 T hdmi_driver_init [snd_hda_codec_hdmi]
++ */
++ if (prev->end == prev->start) {
++ /* Last kernel/module symbol mapped to end of page */
++ if (is_kallsyms && (!strchr(prev->name, '[') !=
++ !strchr(curr->name, '[')))
++ prev->end = roundup(prev->end + 4096, 4096);
++ else
++ prev->end = curr->start;
++
++ pr_debug4("%s sym:%s end:%#" PRIx64 "\n",
++ __func__, prev->name, prev->end);
++ }
+ }
+
+ /* Last entry */
+@@ -1467,7 +1484,7 @@ int __dso__load_kallsyms(struct dso *dso, const char *filename,
+ if (kallsyms__delta(kmap, filename, &delta))
+ return -1;
+
+- symbols__fixup_end(&dso->symbols);
++ symbols__fixup_end(&dso->symbols, true);
+ symbols__fixup_duplicate(&dso->symbols);
+
+ if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
+@@ -1659,7 +1676,7 @@ int dso__load_bfd_symbols(struct dso *dso, const char *debugfile)
+ #undef bfd_asymbol_section
+ #endif
+
+- symbols__fixup_end(&dso->symbols);
++ symbols__fixup_end(&dso->symbols, false);
+ symbols__fixup_duplicate(&dso->symbols);
+ dso->adjust_symbols = 1;
+
+diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h
+index fbf866d82dccd..0b893dcc8ea68 100644
+--- a/tools/perf/util/symbol.h
++++ b/tools/perf/util/symbol.h
+@@ -203,7 +203,7 @@ void __symbols__insert(struct rb_root_cached *symbols, struct symbol *sym,
+ bool kernel);
+ void symbols__insert(struct rb_root_cached *symbols, struct symbol *sym);
+ void symbols__fixup_duplicate(struct rb_root_cached *symbols);
+-void symbols__fixup_end(struct rb_root_cached *symbols);
++void symbols__fixup_end(struct rb_root_cached *symbols, bool is_kallsyms);
+ void maps__fixup_end(struct maps *maps);
+
+ typedef int (*mapfn_t)(u64 start, u64 len, u64 pgoff, void *data);
+@@ -241,7 +241,6 @@ const char *arch__normalize_symbol_name(const char *name);
+ #define SYMBOL_A 0
+ #define SYMBOL_B 1
+
+-void arch__symbols__fixup_end(struct symbol *p, struct symbol *c);
+ int arch__compare_symbol_names(const char *namea, const char *nameb);
+ int arch__compare_symbol_names_n(const char *namea, const char *nameb,
+ unsigned int n);
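This consolidates the arm64, powerpc, and s390 arch__symbols__fixup_end() variants removed earlier in this patch into one generic rule, keyed on whether the symbols came from kallsyms. The boundary test leans on kallsyms printing the owning module in brackets, as the new comment shows. Stand-alone sketch of that heuristic:

#include <stdio.h>
#include <string.h>

/* True when exactly one of two adjacent symbols carries a "[module]"
 * suffix, i.e. the pair straddles the kernel/module gap that must not
 * be assigned to the previous symbol. */
static int crosses_boundary(const char *prev, const char *curr)
{
	return !strchr(prev, '[') != !strchr(curr, '[');
}

int main(void)
{
	printf("%d\n", crosses_boundary("startup_64",
			"hdmi_driver_init [snd_hda_codec_hdmi]"));	/* 1 */
	return 0;
}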
+diff --git a/tools/testing/selftests/vm/mremap_test.c b/tools/testing/selftests/vm/mremap_test.c
+index 7c0b0617b9f85..58775dab3cc6c 100644
+--- a/tools/testing/selftests/vm/mremap_test.c
++++ b/tools/testing/selftests/vm/mremap_test.c
+@@ -6,9 +6,11 @@
+
+ #include <errno.h>
+ #include <stdlib.h>
++#include <stdio.h>
+ #include <string.h>
+ #include <sys/mman.h>
+ #include <time.h>
++#include <stdbool.h>
+
+ #include "../kselftest.h"
+
+@@ -63,6 +65,59 @@ enum {
+ .expect_failure = should_fail \
+ }
+
++/*
++ * Returns false if the requested remap region overlaps with an
++ * existing mapping (e.g text, stack) else returns true.
++ */
++static bool is_remap_region_valid(void *addr, unsigned long long size)
++{
++ void *remap_addr = NULL;
++ bool ret = true;
++
++ /* Use MAP_FIXED_NOREPLACE flag to ensure region is not mapped */
++ remap_addr = mmap(addr, size, PROT_READ | PROT_WRITE,
++ MAP_FIXED_NOREPLACE | MAP_ANONYMOUS | MAP_SHARED,
++ -1, 0);
++
++ if (remap_addr == MAP_FAILED) {
++ if (errno == EEXIST)
++ ret = false;
++ } else {
++ munmap(remap_addr, size);
++ }
++
++ return ret;
++}
++
++/* Returns mmap_min_addr sysctl tunable from procfs */
++static unsigned long long get_mmap_min_addr(void)
++{
++ FILE *fp;
++ int n_matched;
++ static unsigned long long addr;
++
++ if (addr)
++ return addr;
++
++ fp = fopen("/proc/sys/vm/mmap_min_addr", "r");
++ if (fp == NULL) {
++ ksft_print_msg("Failed to open /proc/sys/vm/mmap_min_addr: %s\n",
++ strerror(errno));
++ exit(KSFT_SKIP);
++ }
++
++ n_matched = fscanf(fp, "%llu", &addr);
++ if (n_matched != 1) {
++ ksft_print_msg("Failed to read /proc/sys/vm/mmap_min_addr: %s\n",
++ strerror(errno));
++ fclose(fp);
++ exit(KSFT_SKIP);
++ }
++
++ fclose(fp);
++ return addr;
++}
++
+ /*
+ * Returns the start address of the mapping on success, else returns
+ * NULL on failure.
+@@ -71,11 +126,18 @@ static void *get_source_mapping(struct config c)
+ {
+ unsigned long long addr = 0ULL;
+ void *src_addr = NULL;
++ unsigned long long mmap_min_addr;
++
++ mmap_min_addr = get_mmap_min_addr();
++
+ retry:
+ addr += c.src_alignment;
++ if (addr < mmap_min_addr)
++ goto retry;
++
+ src_addr = mmap((void *) addr, c.region_size, PROT_READ | PROT_WRITE,
+- MAP_FIXED_NOREPLACE | MAP_ANONYMOUS | MAP_SHARED,
+- -1, 0);
++ MAP_FIXED_NOREPLACE | MAP_ANONYMOUS | MAP_SHARED,
++ -1, 0);
+ if (src_addr == MAP_FAILED) {
+ if (errno == EPERM || errno == EEXIST)
+ goto retry;
+@@ -90,8 +152,10 @@ retry:
+ * alignment in the tests.
+ */
+ if (((unsigned long long) src_addr & (c.src_alignment - 1)) ||
+- !((unsigned long long) src_addr & c.src_alignment))
++ !((unsigned long long) src_addr & c.src_alignment)) {
++ munmap(src_addr, c.region_size);
+ goto retry;
++ }
+
+ if (!src_addr)
+ goto error;
+@@ -140,9 +204,20 @@ static long long remap_region(struct config c, unsigned int threshold_mb,
+ if (!((unsigned long long) addr & c.dest_alignment))
+ addr = (void *) ((unsigned long long) addr | c.dest_alignment);
+
++ /* Don't destroy existing mappings unless expected to overlap */
++ while (!is_remap_region_valid(addr, c.region_size) && !c.overlapping) {
++ /* Check for unsigned overflow */
++ if (addr + c.dest_alignment < addr) {
++ ksft_print_msg("Couldn't find a valid region to remap to\n");
++ ret = -1;
++ goto out;
++ }
++ addr += c.dest_alignment;
++ }
++
+ clock_gettime(CLOCK_MONOTONIC, &t_start);
+ dest_addr = mremap(src_addr, c.region_size, c.region_size,
+- MREMAP_MAYMOVE|MREMAP_FIXED, (char *) addr);
++ MREMAP_MAYMOVE|MREMAP_FIXED, (char *) addr);
+ clock_gettime(CLOCK_MONOTONIC, &t_end);
+
+ if (dest_addr == MAP_FAILED) {
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-28 12:03 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-28 12:03 UTC (permalink / raw
To: gentoo-commits
commit: 92aed87f4e6d8447265a91433c456412c945f5fe
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 28 12:02:22 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 28 12:02:22 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=92aed87f
Add V4L2_ASYNC as a dependency for VIDEO_OV2640
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch | 29 ++++++++++++++++++++++++
2 files changed, 33 insertions(+)
diff --git a/0000_README b/0000_README
index c24bfaa9..a5f79181 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+Patch: 2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch
+From: Mike Pagano
+Desc: Add V4L2_ASYNC as a dependency to match other drivers and prevent failures when compile testing
+
Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
diff --git a/2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch b/2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch
new file mode 100644
index 00000000..6fb67ffa
--- /dev/null
+++ b/2910_media-i2c-ov2640-Depend-on-V4L2_ASYNC.patch
@@ -0,0 +1,29 @@
+From 58d23260ea259cb1734d3074be56aabe4c90bae4 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Wed, 27 Apr 2022 17:13:01 -0400
+Subject: [PATCH 1/1] Add V4L2_ASYNC as a dependency to match other drivers and
+ prevent failures when compile testing.
+Cc: mpagano@gentoo.org
+
+Fixes: ff3cc65cadb5 ("media: v4l: async, fwnode: Improve module organisation")
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/media/i2c/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index fae2baabb773..2b20aa6c37b1 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -372,6 +372,7 @@ config VIDEO_OV13B10
+ config VIDEO_OV2640
+ tristate "OmniVision OV2640 sensor support"
+ depends on VIDEO_DEV && I2C
++ select V4L2_ASYNC
+ help
+ This is a Video4Linux2 sensor driver for the OmniVision
+ OV2640 camera.
+--
+2.35.1
+
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-27 13:16 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-27 13:16 UTC (permalink / raw
To: gentoo-commits
commit: 0d03b9f810f3520d2918c17f15b0beee754852cf
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 27 13:16:16 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 27 13:16:16 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0d03b9f8
Remove redundant patch
Removed:
2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 --
...quest-interrupts-after-IRQ-is-initialized.patch | 75 ----------------------
2 files changed, 79 deletions(-)
diff --git a/0000_README b/0000_README
index d851c5b7..c24bfaa9 100644
--- a/0000_README
+++ b/0000_README
@@ -75,10 +75,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Path: 2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/gpio?id=06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9
-Desc: gpio: Request interrupts after IRQ is initialized
-
Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
diff --git a/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch b/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
deleted file mode 100644
index 0a1d4624..00000000
--- a/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
+++ /dev/null
@@ -1,75 +0,0 @@
-From 06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9 Mon Sep 17 00:00:00 2001
-From: Mario Limonciello <mario.limonciello@amd.com>
-Date: Fri, 22 Apr 2022 08:14:52 -0500
-Subject: gpio: Request interrupts after IRQ is initialized
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Commit 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members
-before initialization") attempted to fix a race condition that lead to a
-NULL pointer, but in the process caused a regression for _AEI/_EVT
-declared GPIOs.
-
-This manifests in messages showing deferred probing while trying to
-allocate IRQs like so:
-
- amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x0000 to IRQ, err -517
- amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x002C to IRQ, err -517
- amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x003D to IRQ, err -517
- [ .. more of the same .. ]
-
-The code for walking _AEI doesn't handle deferred probing and so this
-leads to non-functional GPIO interrupts.
-
-Fix this issue by moving the call to `acpi_gpiochip_request_interrupts`
-to occur after gc->irc.initialized is set.
-
-Fixes: 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members before initialization")
-Link: https://lore.kernel.org/linux-gpio/BL1PR12MB51577A77F000A008AA694675E2EF9@BL1PR12MB5157.namprd12.prod.outlook.com/
-Link: https://bugzilla.suse.com/show_bug.cgi?id=1198697
-Link: https://bugzilla.kernel.org/show_bug.cgi?id=215850
-Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1979
-Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1976
-Reported-by: Mario Limonciello <mario.limonciello@amd.com>
-Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
-Reviewed-by: Shreeya Patel <shreeya.patel@collabora.com>
-Tested-By: Samuel Čavoj <samuel@cavoj.net>
-Tested-By: lukeluk498@gmail.com Link:
-Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
-Acked-by: Linus Walleij <linus.walleij@linaro.org>
-Reviewed-and-tested-by: Takashi Iwai <tiwai@suse.de>
-Cc: Shreeya Patel <shreeya.patel@collabora.com>
-Cc: stable@vger.kernel.org
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
----
- drivers/gpio/gpiolib.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-(limited to 'drivers/gpio')
-
-diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
-index 085348e089860..b7694171655cf 100644
---- a/drivers/gpio/gpiolib.c
-+++ b/drivers/gpio/gpiolib.c
-@@ -1601,8 +1601,6 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
-
- gpiochip_set_irq_hooks(gc);
-
-- acpi_gpiochip_request_interrupts(gc);
--
- /*
- * Using barrier() here to prevent compiler from reordering
- * gc->irq.initialized before initialization of above
-@@ -1612,6 +1610,8 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
-
- gc->irq.initialized = true;
-
-+ acpi_gpiochip_request_interrupts(gc);
-+
- return 0;
- }
-
---
-cgit 1.2.3-1.el7
-
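The patch is dropped here as redundant; its point was ordering: acpi_gpiochip_request_interrupts() can fire handlers immediately, so it must run only after gc->irq.initialized has been published. Generic C11 sketch of the publish-then-enable pattern (names illustrative):

#include <stdatomic.h>
#include <stdbool.h>

struct dev {
	int config;			/* set up before publication */
	atomic_bool initialized;	/* consumers gate on this flag */
};

static void publish(struct dev *d)
{
	d->config = 42;			/* finish all setup first... */
	atomic_store_explicit(&d->initialized, true, memory_order_release);
	/* ...and only now enable whatever lets consumers run -- the
	 * ACPI event interrupts, in the gpiolib case. */
}

int main(void)
{
	struct dev d = { 0 };

	publish(&d);
	return !atomic_load_explicit(&d.initialized, memory_order_acquire);
}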
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-27 13:10 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-27 13:10 UTC (permalink / raw
To: gentoo-commits
commit: b08b8be59e2765bb2d36321ab9f026eeb6c43e87
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 27 13:10:12 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 27 13:10:12 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b08b8be5
Linux patch 5.17.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 6 +-
1004_linux-5.17.5.patch | 5187 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5192 insertions(+), 1 deletion(-)
diff --git a/0000_README b/0000_README
index 1f4d8246..d851c5b7 100644
--- a/0000_README
+++ b/0000_README
@@ -55,10 +55,14 @@ Patch: 1002_linux-5.17.3.patch
From: http://www.kernel.org
Desc: Linux 5.17.3
-Patch: 1003_linux-5.17.3.patch
+Patch: 1003_linux-5.17.4.patch
From: http://www.kernel.org
Desc: Linux 5.17.4
+Patch: 1004_linux-5.17.5.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-5.17.5.patch b/1004_linux-5.17.5.patch
new file mode 100644
index 00000000..024660ad
--- /dev/null
+++ b/1004_linux-5.17.5.patch
@@ -0,0 +1,5187 @@
+diff --git a/Documentation/filesystems/ext4/attributes.rst b/Documentation/filesystems/ext4/attributes.rst
+index 54386a010a8d7..871d2da7a0a91 100644
+--- a/Documentation/filesystems/ext4/attributes.rst
++++ b/Documentation/filesystems/ext4/attributes.rst
+@@ -76,7 +76,7 @@ The beginning of an extended attribute block is in
+ - Checksum of the extended attribute block.
+ * - 0x14
+ - \_\_u32
+- - h\_reserved[2]
++ - h\_reserved[3]
+ - Zero.
+
+ The checksum is calculated against the FS UUID, the 64-bit block number
+diff --git a/Makefile b/Makefile
+index d7747e4c216e4..3ad5dc6be3930 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S
+index dd77a0c8f740b..66ba549b520fc 100644
+--- a/arch/arc/kernel/entry.S
++++ b/arch/arc/kernel/entry.S
+@@ -196,6 +196,7 @@ tracesys_exit:
+ st r0, [sp, PT_r0] ; sys call return value in pt_regs
+
+ ;POST Sys Call Ptrace Hook
++ mov r0, sp ; pt_regs needed
+ bl @syscall_trace_exit
+ b ret_from_exception ; NOT ret_from_system_call at is saves r0 which
+ ; we'd done before calling post hook above
+diff --git a/arch/arm/mach-vexpress/spc.c b/arch/arm/mach-vexpress/spc.c
+index 1da11bdb1dfbd..1c6500c4e6a17 100644
+--- a/arch/arm/mach-vexpress/spc.c
++++ b/arch/arm/mach-vexpress/spc.c
+@@ -580,7 +580,7 @@ static int __init ve_spc_clk_init(void)
+ }
+
+ cluster = topology_physical_package_id(cpu_dev->id);
+- if (init_opp_table[cluster])
++ if (cluster < 0 || init_opp_table[cluster])
+ continue;
+
+ if (ve_init_opp_table(cpu_dev))
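topology_physical_package_id() can report failure as a negative value, which was previously used directly to index init_opp_table[]. Generic user-space illustration of range-checking a fallible ID before the array access:

#include <stdio.h>

static int init_done[2];

static int maybe_init(int cluster)
{
	if (cluster < 0 || cluster >= 2 || init_done[cluster])
		return -1;	/* invalid id, or already initialised */
	init_done[cluster] = 1;
	return 0;
}

int main(void)
{
	printf("%d %d %d\n", maybe_init(-1), maybe_init(0), maybe_init(0));
	return 0;	/* prints: -1 0 -1 */
}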
+diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
+index ec5b082f3de6e..07eb69f9e7df3 100644
+--- a/arch/arm/xen/enlighten.c
++++ b/arch/arm/xen/enlighten.c
+@@ -337,12 +337,15 @@ int __init arch_xen_unpopulated_init(struct resource **res)
+
+ if (!nr_reg) {
+ pr_err("No extended regions are found\n");
++ of_node_put(np);
+ return -EINVAL;
+ }
+
+ regs = kcalloc(nr_reg, sizeof(*regs), GFP_KERNEL);
+- if (!regs)
++ if (!regs) {
++ of_node_put(np);
+ return -ENOMEM;
++ }
+
+ /*
+ * Create resource from extended regions provided by the hypervisor to be
+@@ -403,8 +406,8 @@ int __init arch_xen_unpopulated_init(struct resource **res)
+ *res = &xen_resource;
+
+ err:
++ of_node_put(np);
+ kfree(regs);
+-
+ return rc;
+ }
+ #endif
+@@ -424,8 +427,10 @@ static void __init xen_dt_guest_init(void)
+
+ if (of_address_to_resource(xen_node, GRANT_TABLE_INDEX, &res)) {
+ pr_err("Xen grant table region is not found\n");
++ of_node_put(xen_node);
+ return;
+ }
++ of_node_put(xen_node);
+ xen_grant_frames = res.start;
+ }
+
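All of these hunks plug device_node refcount leaks: every successful of_find_*() lookup takes a reference the caller owns, and every return path — error paths included — must drop it. Kernel-context sketch of the discipline (the compatible string is only an example):

#include <linux/errno.h>
#include <linux/of.h>

static int demo(void)
{
	struct device_node *np;
	int ret = 0;

	np = of_find_compatible_node(NULL, NULL, "xen,xen");
	if (!np)
		return -ENODEV;

	if (!of_get_property(np, "reg", NULL))
		ret = -EINVAL;	/* the error path still reaches the put */

	of_node_put(np);
	return ret;
}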
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi
+index 1dc9d187601c5..a0bd540f27d3d 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-var-som.dtsi
+@@ -89,12 +89,12 @@
+ pendown-gpio = <&gpio1 3 GPIO_ACTIVE_LOW>;
+
+ ti,x-min = /bits/ 16 <125>;
+- touchscreen-size-x = /bits/ 16 <4008>;
++ touchscreen-size-x = <4008>;
+ ti,y-min = /bits/ 16 <282>;
+- touchscreen-size-y = /bits/ 16 <3864>;
++ touchscreen-size-y = <3864>;
+ ti,x-plate-ohms = /bits/ 16 <180>;
+- touchscreen-max-pressure = /bits/ 16 <255>;
+- touchscreen-average-samples = /bits/ 16 <10>;
++ touchscreen-max-pressure = <255>;
++ touchscreen-average-samples = <10>;
+ ti,debounce-tol = /bits/ 16 <3>;
+ ti,debounce-rep = /bits/ 16 <1>;
+ ti,settle-delay-usec = /bits/ 16 <150>;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+index b16c7caf34c11..87b5e23c766f7 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
+@@ -70,12 +70,12 @@
+ pendown-gpio = <&gpio1 3 GPIO_ACTIVE_LOW>;
+
+ ti,x-min = /bits/ 16 <125>;
+- touchscreen-size-x = /bits/ 16 <4008>;
++ touchscreen-size-x = <4008>;
+ ti,y-min = /bits/ 16 <282>;
+- touchscreen-size-y = /bits/ 16 <3864>;
++ touchscreen-size-y = <3864>;
+ ti,x-plate-ohms = /bits/ 16 <180>;
+- touchscreen-max-pressure = /bits/ 16 <255>;
+- touchscreen-average-samples = /bits/ 16 <10>;
++ touchscreen-max-pressure = <255>;
++ touchscreen-average-samples = <10>;
+ ti,debounce-tol = /bits/ 16 <3>;
+ ti,debounce-rep = /bits/ 16 <1>;
+ ti,settle-delay-usec = /bits/ 16 <150>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index 2151cd8c8c7ab..e1c46b80f14a0 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -1459,6 +1459,8 @@
+ "imem",
+ "config";
+
++ qcom,qmp = <&aoss_qmp>;
++
+ qcom,smem-states = <&ipa_smp2p_out 0>,
+ <&ipa_smp2p_out 1>;
+ qcom,smem-state-names = "ipa-clock-enabled-valid",
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index eab7a85050531..d66865131ef90 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -1714,6 +1714,8 @@
+ interconnect-names = "memory",
+ "config";
+
++ qcom,qmp = <&aoss_qmp>;
++
+ qcom,smem-states = <&ipa_smp2p_out 0>,
+ <&ipa_smp2p_out 1>;
+ qcom,smem-state-names = "ipa-clock-enabled-valid",
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 765d018e6306c..0bde6bbb3bc74 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1443,6 +1443,8 @@
+ interconnect-names = "memory",
+ "config";
+
++ qcom,qmp = <&aoss_qmp>;
++
+ qcom,smem-states = <&ipa_smp2p_out 0>,
+ <&ipa_smp2p_out 1>;
+ qcom,smem-state-names = "ipa-clock-enabled-valid",
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index 94e147e5456ca..dff2b483ea509 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -535,7 +535,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
+ PMD_TYPE_TABLE)
+ #define pmd_sect(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \
+ PMD_TYPE_SECT)
+-#define pmd_leaf(pmd) pmd_sect(pmd)
++#define pmd_leaf(pmd) (pmd_present(pmd) && !pmd_table(pmd))
+ #define pmd_bad(pmd) (!pmd_table(pmd))
+
+ #define pmd_leaf_size(pmd) (pmd_cont(pmd) ? CONT_PMD_SIZE : PMD_SIZE)
+@@ -625,7 +625,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ #define pud_none(pud) (!pud_val(pud))
+ #define pud_bad(pud) (!pud_table(pud))
+ #define pud_present(pud) pte_present(pud_pte(pud))
+-#define pud_leaf(pud) pud_sect(pud)
++#define pud_leaf(pud) (pud_present(pud) && !pud_table(pud))
+ #define pud_valid(pud) pte_valid(pud_pte(pud))
+
+ static inline void set_pud(pud_t *pudp, pud_t pud)
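The leaf tests change from "the type bits spell section" to "present and not a table", so present-but-invalid huge entries (arm64 encodes PROT_NONE huge pages this way) are also reported as leaves. A deliberately simplified model — the real arm64 bit encodings differ:

#include <stdio.h>

#define TYPE_MASK	3UL
#define TYPE_TABLE	3UL
#define TYPE_SECT	1UL

static int old_leaf(unsigned long e) { return (e & TYPE_MASK) == TYPE_SECT; }
static int new_leaf(unsigned long e, int present)
{
	return present && (e & TYPE_MASK) != TYPE_TABLE;
}

int main(void)
{
	unsigned long entry = 0UL;	/* hypothetical present-but-invalid block */

	printf("old=%d new=%d\n", old_leaf(entry), new_leaf(entry, 1));
	return 0;	/* old=0 new=1 */
}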
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index 384f58a3f373f..5f8933aec75ce 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -610,23 +610,22 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt)
+ return;
+ }
+
+- /* Conditionally hard-enable interrupts. */
+- if (should_hard_irq_enable()) {
+- /*
+- * Ensure a positive value is written to the decrementer, or
+- * else some CPUs will continue to take decrementer exceptions.
+- * When the PPC_WATCHDOG (decrementer based) is configured,
+- * keep this at most 31 bits, which is about 4 seconds on most
+- * systems, which gives the watchdog a chance of catching timer
+- * interrupt hard lockups.
+- */
+- if (IS_ENABLED(CONFIG_PPC_WATCHDOG))
+- set_dec(0x7fffffff);
+- else
+- set_dec(decrementer_max);
++ /*
++ * Ensure a positive value is written to the decrementer, or
++ * else some CPUs will continue to take decrementer exceptions.
++ * When the PPC_WATCHDOG (decrementer based) is configured,
++ * keep this at most 31 bits, which is about 4 seconds on most
++ * systems, which gives the watchdog a chance of catching timer
++ * interrupt hard lockups.
++ */
++ if (IS_ENABLED(CONFIG_PPC_WATCHDOG))
++ set_dec(0x7fffffff);
++ else
++ set_dec(decrementer_max);
+
++ /* Conditionally hard-enable interrupts. */
++ if (should_hard_irq_enable())
+ do_hard_irq_enable();
+- }
+
+ #if defined(CONFIG_PPC32) && defined(CONFIG_PPC_PMAC)
+ if (atomic_read(&ppc_n_lost_interrupts) != 0)
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index d42b4b6d4a791..85cfa6328222b 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -420,13 +420,19 @@ static void kvmppc_tce_put(struct kvmppc_spapr_tce_table *stt,
+ tbl[idx % TCES_PER_PAGE] = tce;
+ }
+
+-static void kvmppc_clear_tce(struct mm_struct *mm, struct iommu_table *tbl,
+- unsigned long entry)
++static void kvmppc_clear_tce(struct mm_struct *mm, struct kvmppc_spapr_tce_table *stt,
++ struct iommu_table *tbl, unsigned long entry)
+ {
+- unsigned long hpa = 0;
+- enum dma_data_direction dir = DMA_NONE;
++ unsigned long i;
++ unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
++ unsigned long io_entry = entry << (stt->page_shift - tbl->it_page_shift);
++
++ for (i = 0; i < subpages; ++i) {
++ unsigned long hpa = 0;
++ enum dma_data_direction dir = DMA_NONE;
+
+- iommu_tce_xchg_no_kill(mm, tbl, entry, &hpa, &dir);
++ iommu_tce_xchg_no_kill(mm, tbl, io_entry + i, &hpa, &dir);
++ }
+ }
+
+ static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm,
+@@ -485,6 +491,8 @@ static long kvmppc_tce_iommu_unmap(struct kvm *kvm,
+ break;
+ }
+
++ iommu_tce_kill(tbl, io_entry, subpages);
++
+ return ret;
+ }
+
+@@ -544,6 +552,8 @@ static long kvmppc_tce_iommu_map(struct kvm *kvm,
+ break;
+ }
+
++ iommu_tce_kill(tbl, io_entry, subpages);
++
+ return ret;
+ }
+
+@@ -590,10 +600,9 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
+ ret = kvmppc_tce_iommu_map(vcpu->kvm, stt, stit->tbl,
+ entry, ua, dir);
+
+- iommu_tce_kill(stit->tbl, entry, 1);
+
+ if (ret != H_SUCCESS) {
+- kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
++ kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl, entry);
+ goto unlock_exit;
+ }
+ }
+@@ -669,13 +678,13 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ */
+ if (get_user(tce, tces + i)) {
+ ret = H_TOO_HARD;
+- goto invalidate_exit;
++ goto unlock_exit;
+ }
+ tce = be64_to_cpu(tce);
+
+ if (kvmppc_tce_to_ua(vcpu->kvm, tce, &ua)) {
+ ret = H_PARAMETER;
+- goto invalidate_exit;
++ goto unlock_exit;
+ }
+
+ list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
+@@ -684,19 +693,15 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ iommu_tce_direction(tce));
+
+ if (ret != H_SUCCESS) {
+- kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl,
+- entry);
+- goto invalidate_exit;
++ kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl,
++ entry + i);
++ goto unlock_exit;
+ }
+ }
+
+ kvmppc_tce_put(stt, entry + i, tce);
+ }
+
+-invalidate_exit:
+- list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+- iommu_tce_kill(stit->tbl, entry, npages);
+-
+ unlock_exit:
+ srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+@@ -735,20 +740,16 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
+ continue;
+
+ if (ret == H_TOO_HARD)
+- goto invalidate_exit;
++ return ret;
+
+ WARN_ON_ONCE(1);
+- kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
++ kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl, entry + i);
+ }
+ }
+
+ for (i = 0; i < npages; ++i, ioba += (1ULL << stt->page_shift))
+ kvmppc_tce_put(stt, ioba >> stt->page_shift, tce_value);
+
+-invalidate_exit:
+- list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+- iommu_tce_kill(stit->tbl, ioba >> stt->page_shift, npages);
+-
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(kvmppc_h_stuff_tce);
+diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
+index 870b7f0c7ea56..fdeda6a9cff44 100644
+--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
++++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
+@@ -247,13 +247,19 @@ static void iommu_tce_kill_rm(struct iommu_table *tbl,
+ tbl->it_ops->tce_kill(tbl, entry, pages, true);
+ }
+
+-static void kvmppc_rm_clear_tce(struct kvm *kvm, struct iommu_table *tbl,
+- unsigned long entry)
++static void kvmppc_rm_clear_tce(struct kvm *kvm, struct kvmppc_spapr_tce_table *stt,
++ struct iommu_table *tbl, unsigned long entry)
+ {
+- unsigned long hpa = 0;
+- enum dma_data_direction dir = DMA_NONE;
++ unsigned long i;
++ unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
++ unsigned long io_entry = entry << (stt->page_shift - tbl->it_page_shift);
++
++ for (i = 0; i < subpages; ++i) {
++ unsigned long hpa = 0;
++ enum dma_data_direction dir = DMA_NONE;
+
+- iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, entry, &hpa, &dir);
++ iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, io_entry + i, &hpa, &dir);
++ }
+ }
+
+ static long kvmppc_rm_tce_iommu_mapped_dec(struct kvm *kvm,
+@@ -316,6 +322,8 @@ static long kvmppc_rm_tce_iommu_unmap(struct kvm *kvm,
+ break;
+ }
+
++ iommu_tce_kill_rm(tbl, io_entry, subpages);
++
+ return ret;
+ }
+
+@@ -379,6 +387,8 @@ static long kvmppc_rm_tce_iommu_map(struct kvm *kvm,
+ break;
+ }
+
++ iommu_tce_kill_rm(tbl, io_entry, subpages);
++
+ return ret;
+ }
+
+@@ -420,10 +430,8 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
+ ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt,
+ stit->tbl, entry, ua, dir);
+
+- iommu_tce_kill_rm(stit->tbl, entry, 1);
+-
+ if (ret != H_SUCCESS) {
+- kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
++ kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry);
+ return ret;
+ }
+ }
+@@ -561,7 +569,7 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ ua = 0;
+ if (kvmppc_rm_tce_to_ua(vcpu->kvm, tce, &ua)) {
+ ret = H_PARAMETER;
+- goto invalidate_exit;
++ goto unlock_exit;
+ }
+
+ list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
+@@ -570,19 +578,15 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+ iommu_tce_direction(tce));
+
+ if (ret != H_SUCCESS) {
+- kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl,
+- entry);
+- goto invalidate_exit;
++ kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl,
++ entry + i);
++ goto unlock_exit;
+ }
+ }
+
+ kvmppc_rm_tce_put(stt, entry + i, tce);
+ }
+
+-invalidate_exit:
+- list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+- iommu_tce_kill_rm(stit->tbl, entry, npages);
+-
+ unlock_exit:
+ if (!prereg)
+ arch_spin_unlock(&kvm->mmu_lock.rlock.raw_lock);
+@@ -620,20 +624,16 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
+ continue;
+
+ if (ret == H_TOO_HARD)
+- goto invalidate_exit;
++ return ret;
+
+ WARN_ON_ONCE_RM(1);
+- kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
++ kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry + i);
+ }
+ }
+
+ for (i = 0; i < npages; ++i, ioba += (1ULL << stt->page_shift))
+ kvmppc_rm_tce_put(stt, ioba >> stt->page_shift, tce_value);
+
+-invalidate_exit:
+- list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
+- iommu_tce_kill_rm(stit->tbl, ioba >> stt->page_shift, npages);
+-
+ return ret;
+ }
+
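Both files get the same treatment: when the guest's IOMMU page is larger than the host table's, one guest TCE spans several host entries, so the clear loop now walks every subpage, and iommu_tce_kill() moves into the map/unmap helpers so error paths invalidate too. Tiny arithmetic demo of the subpage expansion (64K guest pages over a 4K host table):

#include <stdio.h>

int main(void)
{
	unsigned int guest_shift = 16, host_shift = 12;
	unsigned long entry = 3;
	unsigned long subpages = 1UL << (guest_shift - host_shift);
	unsigned long io_entry = entry << (guest_shift - host_shift);

	printf("guest entry %lu -> host entries %lu..%lu\n",
	       entry, io_entry, io_entry + subpages - 1);
	return 0;	/* guest entry 3 -> host entries 48..63 */
}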
+diff --git a/arch/powerpc/perf/power10-pmu.c b/arch/powerpc/perf/power10-pmu.c
+index 0975ad0b42c42..69b4565d1a8f0 100644
+--- a/arch/powerpc/perf/power10-pmu.c
++++ b/arch/powerpc/perf/power10-pmu.c
+@@ -91,8 +91,8 @@ extern u64 PERF_REG_EXTENDED_MASK;
+
+ /* Table of alternatives, sorted by column 0 */
+ static const unsigned int power10_event_alternatives[][MAX_ALT] = {
+- { PM_CYC_ALT, PM_CYC },
+ { PM_INST_CMPL_ALT, PM_INST_CMPL },
++ { PM_CYC_ALT, PM_CYC },
+ };
+
+ static int power10_get_alternatives(u64 event, unsigned int flags, u64 alt[])
+diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
+index 4b7c17e361003..37b2860db4833 100644
+--- a/arch/powerpc/perf/power9-pmu.c
++++ b/arch/powerpc/perf/power9-pmu.c
+@@ -133,11 +133,11 @@ int p9_dd22_bl_ev[] = {
+
+ /* Table of alternatives, sorted by column 0 */
+ static const unsigned int power9_event_alternatives[][MAX_ALT] = {
+- { PM_INST_DISP, PM_INST_DISP_ALT },
+- { PM_RUN_CYC_ALT, PM_RUN_CYC },
+- { PM_RUN_INST_CMPL_ALT, PM_RUN_INST_CMPL },
+- { PM_LD_MISS_L1, PM_LD_MISS_L1_ALT },
+ { PM_BR_2PATH, PM_BR_2PATH_ALT },
++ { PM_INST_DISP, PM_INST_DISP_ALT },
++ { PM_RUN_CYC_ALT, PM_RUN_CYC },
++ { PM_LD_MISS_L1, PM_LD_MISS_L1_ALT },
++ { PM_RUN_INST_CMPL_ALT, PM_RUN_INST_CMPL },
+ };
+
+ static int power9_get_alternatives(u64 event, unsigned int flags, u64 alt[])
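Both the power10 and power9 tables are scanned by a binary search keyed on column 0 — each carries a "sorted by column 0" comment — so rows in the wrong order make alternative-event lookups silently miss. Minimal demonstration with bsearch(3); the event codes are hypothetical:

#include <stdio.h>
#include <stdlib.h>

static unsigned int alt[][2] = {	/* must stay sorted by [0] */
	{ 0x0002, 0x600f4 },
	{ 0x00e0, 0x600f2 },
};

static int cmp(const void *a, const void *b)
{
	unsigned int ka = *(const unsigned int *)a;
	unsigned int kb = ((const unsigned int *)b)[0];

	return ka < kb ? -1 : ka > kb;
}

int main(void)
{
	unsigned int key = 0x00e0;
	void *hit = bsearch(&key, alt, 2, sizeof(alt[0]), cmp);

	printf("found: %s\n", hit ? "yes" : "no");
	return 0;
}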
+diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
+index 6785aef4cbd46..aad430668bb4d 100644
+--- a/arch/riscv/kvm/vcpu.c
++++ b/arch/riscv/kvm/vcpu.c
+@@ -38,14 +38,16 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
+ sizeof(kvm_vcpu_stats_desc),
+ };
+
+-#define KVM_RISCV_ISA_ALLOWED (riscv_isa_extension_mask(a) | \
+- riscv_isa_extension_mask(c) | \
+- riscv_isa_extension_mask(d) | \
+- riscv_isa_extension_mask(f) | \
+- riscv_isa_extension_mask(i) | \
+- riscv_isa_extension_mask(m) | \
+- riscv_isa_extension_mask(s) | \
+- riscv_isa_extension_mask(u))
++#define KVM_RISCV_ISA_DISABLE_ALLOWED (riscv_isa_extension_mask(d) | \
++ riscv_isa_extension_mask(f))
++
++#define KVM_RISCV_ISA_DISABLE_NOT_ALLOWED (riscv_isa_extension_mask(a) | \
++ riscv_isa_extension_mask(c) | \
++ riscv_isa_extension_mask(i) | \
++ riscv_isa_extension_mask(m))
++
++#define KVM_RISCV_ISA_ALLOWED (KVM_RISCV_ISA_DISABLE_ALLOWED | \
++ KVM_RISCV_ISA_DISABLE_NOT_ALLOWED)
+
+ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
+ {
+@@ -219,7 +221,8 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
+ switch (reg_num) {
+ case KVM_REG_RISCV_CONFIG_REG(isa):
+ if (!vcpu->arch.ran_atleast_once) {
+- vcpu->arch.isa = reg_val;
++ /* Ignore the disable request for these extensions */
++ vcpu->arch.isa = reg_val | KVM_RISCV_ISA_DISABLE_NOT_ALLOWED;
+ vcpu->arch.isa &= riscv_isa_extension_base(NULL);
+ vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED;
+ kvm_riscv_vcpu_fp_reset(vcpu);
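[The new masks above split the old single allow-list in two: extensions a guest may toggle off (f, d) and base extensions whose disable requests are silently ignored (a, c, i, m) by OR-ing them back into the user-supplied value before masking against what the host supports. A standalone model of the mask arithmetic, using letter positions as stand-in bit numbers:

#include <stdio.h>

#define EXT(c)                  (1UL << ((c) - 'a'))
#define DISABLE_ALLOWED         (EXT('d') | EXT('f'))
#define DISABLE_NOT_ALLOWED     (EXT('a') | EXT('c') | EXT('i') | EXT('m'))
#define ISA_ALLOWED             (DISABLE_ALLOWED | DISABLE_NOT_ALLOWED)

int main(void)
{
        unsigned long host_isa = ISA_ALLOWED;         /* what the host offers */
        unsigned long reg_val  = EXT('a') | EXT('c'); /* user tries to drop i, m */

        unsigned long isa = reg_val | DISABLE_NOT_ALLOWED; /* ignore those drops */
        isa &= host_isa;
        isa &= ISA_ALLOWED;

        printf("isa = 0x%lx (a, c, i, m forced on; d, f stay off)\n", isa);
        return 0;
}
]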
+diff --git a/arch/x86/include/asm/compat.h b/arch/x86/include/asm/compat.h
+index 7516e4199b3c6..20fd0acd7d800 100644
+--- a/arch/x86/include/asm/compat.h
++++ b/arch/x86/include/asm/compat.h
+@@ -28,15 +28,13 @@ typedef u16 compat_ipc_pid_t;
+ typedef __kernel_fsid_t compat_fsid_t;
+
+ struct compat_stat {
+- compat_dev_t st_dev;
+- u16 __pad1;
++ u32 st_dev;
+ compat_ino_t st_ino;
+ compat_mode_t st_mode;
+ compat_nlink_t st_nlink;
+ __compat_uid_t st_uid;
+ __compat_gid_t st_gid;
+- compat_dev_t st_rdev;
+- u16 __pad2;
++ u32 st_rdev;
+ u32 st_size;
+ u32 st_blksize;
+ u32 st_blocks;
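[Assuming compat_dev_t is 16 bits wide here, which the removal of the explicit 16-bit pads suggests, replacing each pair with a plain u32 keeps the structure layout byte-identical while letting st_dev/st_rdev carry full 32-bit encoded device numbers. A standalone check of that layout assumption:

#include <assert.h>
#include <stdint.h>

struct old_field { uint16_t st_dev; uint16_t __pad1; };
struct new_field { uint32_t st_dev; };

static_assert(sizeof(struct old_field) == sizeof(struct new_field),
              "u16 + u16 pad and u32 occupy the same bytes");

int main(void) { return 0; }
]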
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 85ee96abba806..c4b4c0839dbdb 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -969,12 +969,10 @@ enum hv_tsc_page_status {
+ HV_TSC_PAGE_UNSET = 0,
+ /* TSC page MSR was written by the guest, update pending */
+ HV_TSC_PAGE_GUEST_CHANGED,
+- /* TSC page MSR was written by KVM userspace, update pending */
++ /* TSC page update was triggered from the host side */
+ HV_TSC_PAGE_HOST_CHANGED,
+ /* TSC page was properly set up and is currently active */
+ HV_TSC_PAGE_SET,
+- /* TSC page is currently being updated and therefore is inactive */
+- HV_TSC_PAGE_UPDATING,
+ /* TSC page was set up with an inaccessible GPA */
+ HV_TSC_PAGE_BROKEN,
+ };
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 10bc257d3803b..247ac71b7a10f 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1128,11 +1128,13 @@ void kvm_hv_setup_tsc_page(struct kvm *kvm,
+ BUILD_BUG_ON(sizeof(tsc_seq) != sizeof(hv->tsc_ref.tsc_sequence));
+ BUILD_BUG_ON(offsetof(struct ms_hyperv_tsc_page, tsc_sequence) != 0);
+
++ mutex_lock(&hv->hv_lock);
++
+ if (hv->hv_tsc_page_status == HV_TSC_PAGE_BROKEN ||
++ hv->hv_tsc_page_status == HV_TSC_PAGE_SET ||
+ hv->hv_tsc_page_status == HV_TSC_PAGE_UNSET)
+- return;
++ goto out_unlock;
+
+- mutex_lock(&hv->hv_lock);
+ if (!(hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE))
+ goto out_unlock;
+
+@@ -1194,45 +1196,19 @@ out_unlock:
+ mutex_unlock(&hv->hv_lock);
+ }
+
+-void kvm_hv_invalidate_tsc_page(struct kvm *kvm)
++void kvm_hv_request_tsc_page_update(struct kvm *kvm)
+ {
+ struct kvm_hv *hv = to_kvm_hv(kvm);
+- u64 gfn;
+- int idx;
+-
+- if (hv->hv_tsc_page_status == HV_TSC_PAGE_BROKEN ||
+- hv->hv_tsc_page_status == HV_TSC_PAGE_UNSET ||
+- tsc_page_update_unsafe(hv))
+- return;
+
+ mutex_lock(&hv->hv_lock);
+
+- if (!(hv->hv_tsc_page & HV_X64_MSR_TSC_REFERENCE_ENABLE))
+- goto out_unlock;
+-
+- /* Preserve HV_TSC_PAGE_GUEST_CHANGED/HV_TSC_PAGE_HOST_CHANGED states */
+- if (hv->hv_tsc_page_status == HV_TSC_PAGE_SET)
+- hv->hv_tsc_page_status = HV_TSC_PAGE_UPDATING;
++ if (hv->hv_tsc_page_status == HV_TSC_PAGE_SET &&
++ !tsc_page_update_unsafe(hv))
++ hv->hv_tsc_page_status = HV_TSC_PAGE_HOST_CHANGED;
+
+- gfn = hv->hv_tsc_page >> HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT;
+-
+- hv->tsc_ref.tsc_sequence = 0;
+-
+- /*
+- * Take the srcu lock as memslots will be accessed to check the gfn
+- * cache generation against the memslots generation.
+- */
+- idx = srcu_read_lock(&kvm->srcu);
+- if (kvm_write_guest(kvm, gfn_to_gpa(gfn),
+- &hv->tsc_ref, sizeof(hv->tsc_ref.tsc_sequence)))
+- hv->hv_tsc_page_status = HV_TSC_PAGE_BROKEN;
+- srcu_read_unlock(&kvm->srcu, idx);
+-
+-out_unlock:
+ mutex_unlock(&hv->hv_lock);
+ }
+
+-
+ static bool hv_check_msr_access(struct kvm_vcpu_hv *hv_vcpu, u32 msr)
+ {
+ if (!hv_vcpu->enforce_cpuid)
+diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
+index ed1c4e546d049..3e79b4a9ed4ef 100644
+--- a/arch/x86/kvm/hyperv.h
++++ b/arch/x86/kvm/hyperv.h
+@@ -133,7 +133,7 @@ void kvm_hv_process_stimers(struct kvm_vcpu *vcpu);
+
+ void kvm_hv_setup_tsc_page(struct kvm *kvm,
+ struct pvclock_vcpu_time_info *hv_clock);
+-void kvm_hv_invalidate_tsc_page(struct kvm *kvm);
++void kvm_hv_request_tsc_page_update(struct kvm *kvm);
+
+ void kvm_hv_init_vm(struct kvm *kvm);
+ void kvm_hv_destroy_vm(struct kvm *kvm);
+diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
+index 7a7b8d5b775e9..5e7e8d163b985 100644
+--- a/arch/x86/kvm/pmu.h
++++ b/arch/x86/kvm/pmu.h
+@@ -140,6 +140,15 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
+ return sample_period;
+ }
+
++static inline void pmc_update_sample_period(struct kvm_pmc *pmc)
++{
++ if (!pmc->perf_event || pmc->is_paused)
++ return;
++
++ perf_event_period(pmc->perf_event,
++ get_sample_period(pmc, pmc->counter));
++}
++
+ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel);
+ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int fixed_idx);
+ void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
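[pmc_update_sample_period() above factors out the guarded period refresh so every counter-write path applies it the same way; the hunks below switch the Intel set_msr handler over to it and add the call to the AMD path, which previously never refreshed the period after a counter write. A standalone sketch of the helper pattern, with stand-in types and a stand-in period formula:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pmc {
        bool has_event;         /* stands in for pmc->perf_event != NULL */
        bool is_paused;
        uint64_t counter;
};

static uint64_t get_sample_period(const struct pmc *pmc)
{
        return (-pmc->counter) & 0xffffffffffffULL;     /* stand-in formula */
}

static void pmc_update_sample_period(struct pmc *pmc)
{
        if (!pmc->has_event || pmc->is_paused)
                return;
        printf("period -> %llu\n",
               (unsigned long long)get_sample_period(pmc));
}

int main(void)
{
        struct pmc pmc = { .has_event = true, .counter = 100 };

        pmc.counter += 42;      /* guest wrote the counter MSR */
        pmc_update_sample_period(&pmc);
        return 0;
}
]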
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index ba40b7fced5ae..b5b0837df0d11 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -257,6 +257,7 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
+ if (pmc) {
+ pmc->counter += data - pmc_read_counter(pmc);
++ pmc_update_sample_period(pmc);
+ return 0;
+ }
+ /* MSR_EVNTSELn */
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index fef9758525826..e5cecd4ad2d44 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2204,51 +2204,39 @@ int sev_cpu_init(struct svm_cpu_data *sd)
+ * Pages used by hardware to hold guest encrypted state must be flushed before
+ * returning them to the system.
+ */
+-static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
+- unsigned long len)
++static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
+ {
++ int asid = to_kvm_svm(vcpu->kvm)->sev_info.asid;
++
+ /*
+- * If hardware enforced cache coherency for encrypted mappings of the
+- * same physical page is supported, nothing to do.
++ * Note! The address must be a kernel address, as regular page walk
++ * checks are performed by VM_PAGE_FLUSH, i.e. operating on a user
++ * address is non-deterministic and unsafe. This function deliberately
++ * takes a pointer to deter passing in a user address.
+ */
+- if (boot_cpu_has(X86_FEATURE_SME_COHERENT))
+- return;
++ unsigned long addr = (unsigned long)va;
+
+ /*
+- * If the VM Page Flush MSR is supported, use it to flush the page
+- * (using the page virtual address and the guest ASID).
++ * If CPU enforced cache coherency for encrypted mappings of the
++ * same physical page is supported, use CLFLUSHOPT instead. NOTE: cache
++ * flush is still needed in order to work properly with DMA devices.
+ */
+- if (boot_cpu_has(X86_FEATURE_VM_PAGE_FLUSH)) {
+- struct kvm_sev_info *sev;
+- unsigned long va_start;
+- u64 start, stop;
+-
+- /* Align start and stop to page boundaries. */
+- va_start = (unsigned long)va;
+- start = (u64)va_start & PAGE_MASK;
+- stop = PAGE_ALIGN((u64)va_start + len);
+-
+- if (start < stop) {
+- sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
+-
+- while (start < stop) {
+- wrmsrl(MSR_AMD64_VM_PAGE_FLUSH,
+- start | sev->asid);
+-
+- start += PAGE_SIZE;
+- }
+-
+- return;
+- }
+-
+- WARN(1, "Address overflow, using WBINVD\n");
++ if (boot_cpu_has(X86_FEATURE_SME_COHERENT)) {
++ clflush_cache_range(va, PAGE_SIZE);
++ return;
+ }
+
+ /*
+- * Hardware should always have one of the above features,
+- * but if not, use WBINVD and issue a warning.
++ * VM Page Flush takes a host virtual address and a guest ASID. Fall
++ * back to WBINVD if this faults so as not to make any problems worse
++ * by leaving stale encrypted data in the cache.
+ */
+- WARN_ONCE(1, "Using WBINVD to flush guest memory\n");
++ if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH, addr | asid)))
++ goto do_wbinvd;
++
++ return;
++
++do_wbinvd:
+ wbinvd_on_all_cpus();
+ }
+
+@@ -2262,7 +2250,8 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
+ svm = to_svm(vcpu);
+
+ if (vcpu->arch.guest_state_protected)
+- sev_flush_guest_memory(svm, svm->sev_es.vmsa, PAGE_SIZE);
++ sev_flush_encrypted_page(vcpu, svm->sev_es.vmsa);
++
+ __free_page(virt_to_page(svm->sev_es.vmsa));
+
+ if (svm->sev_es.ghcb_sa_free)
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index dc822a1d403d3..896ddf7392365 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -4618,6 +4618,11 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
+ }
+
++ if (vmx->nested.update_vmcs01_apicv_status) {
++ vmx->nested.update_vmcs01_apicv_status = false;
++ kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
++ }
++
+ if ((vm_exit_reason != -1) &&
+ (enable_shadow_vmcs || evmptr_is_valid(vmx->nested.hv_evmcs_vmptr)))
+ vmx->nested.need_vmcs12_to_shadow_sync = true;
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 5fa3870b89881..a0c84761c9382 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -431,15 +431,11 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ !(msr & MSR_PMC_FULL_WIDTH_BIT))
+ data = (s64)(s32)data;
+ pmc->counter += data - pmc_read_counter(pmc);
+- if (pmc->perf_event && !pmc->is_paused)
+- perf_event_period(pmc->perf_event,
+- get_sample_period(pmc, data));
++ pmc_update_sample_period(pmc);
+ return 0;
+ } else if ((pmc = get_fixed_pmc(pmu, msr))) {
+ pmc->counter += data - pmc_read_counter(pmc);
+- if (pmc->perf_event && !pmc->is_paused)
+- perf_event_period(pmc->perf_event,
+- get_sample_period(pmc, data));
++ pmc_update_sample_period(pmc);
+ return 0;
+ } else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
+ if (data == pmc->eventsel)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index b730d799c26ed..ef63cfd57029a 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4182,6 +4182,11 @@ static void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+
++ if (is_guest_mode(vcpu)) {
++ vmx->nested.update_vmcs01_apicv_status = true;
++ return;
++ }
++
+ pin_controls_set(vmx, vmx_pin_based_exec_ctrl(vmx));
+ if (cpu_has_secondary_exec_ctrls()) {
+ if (kvm_vcpu_apicv_active(vcpu))
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 9c6bfcd84008b..b98c7e96697a9 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -183,6 +183,7 @@ struct nested_vmx {
+ bool change_vmcs01_virtual_apic_mode;
+ bool reload_vmcs01_apic_access_page;
+ bool update_vmcs01_cpu_dirty_logging;
++ bool update_vmcs01_apicv_status;
+
+ /*
+ * Enlightened VMCS has been enabled. It does not mean that L1 has to
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 05128162ebd58..23d176cd12a4f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2874,7 +2874,7 @@ static void kvm_end_pvclock_update(struct kvm *kvm)
+
+ static void kvm_update_masterclock(struct kvm *kvm)
+ {
+- kvm_hv_invalidate_tsc_page(kvm);
++ kvm_hv_request_tsc_page_update(kvm);
+ kvm_start_pvclock_update(kvm);
+ pvclock_update_vm_gtod_copy(kvm);
+ kvm_end_pvclock_update(kvm);
+@@ -3086,8 +3086,7 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
+ offsetof(struct compat_vcpu_info, time));
+ if (vcpu->xen.vcpu_time_info_set)
+ kvm_setup_pvclock_page(v, &vcpu->xen.vcpu_time_info_cache, 0);
+- if (!v->vcpu_idx)
+- kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
++ kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
+ return 0;
+ }
+
+@@ -6190,7 +6189,7 @@ static int kvm_vm_ioctl_set_clock(struct kvm *kvm, void __user *argp)
+ if (data.flags & ~KVM_CLOCK_VALID_FLAGS)
+ return -EINVAL;
+
+- kvm_hv_invalidate_tsc_page(kvm);
++ kvm_hv_request_tsc_page_update(kvm);
+ kvm_start_pvclock_update(kvm);
+ pvclock_update_vm_gtod_copy(kvm);
+
+@@ -10297,12 +10296,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
+
+ static inline int complete_emulated_io(struct kvm_vcpu *vcpu)
+ {
+- int r;
+-
+- vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
+- r = kvm_emulate_instruction(vcpu, EMULTYPE_NO_DECODE);
+- srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
+- return r;
++ return kvm_emulate_instruction(vcpu, EMULTYPE_NO_DECODE);
+ }
+
+ static int complete_emulated_pio(struct kvm_vcpu *vcpu)
+@@ -11119,8 +11113,21 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ r = kvm_create_lapic(vcpu, lapic_timer_advance_ns);
+ if (r < 0)
+ goto fail_mmu_destroy;
+- if (kvm_apicv_activated(vcpu->kvm))
++
++ /*
++ * Defer evaluating inhibits until the vCPU is first run, as
++ * this vCPU will not get notified of any changes until this
++ * vCPU is visible to other vCPUs (marked online and added to
++ * the set of vCPUs). Opportunistically mark APICv active as
++ * VMX in particularly is highly unlikely to have inhibits.
++ * Ignore the current per-VM APICv state so that vCPU creation
++ * is guaranteed to run with a deterministic value, the request
++ * will ensure the vCPU gets the correct state before VM-Entry.
++ */
++ if (enable_apicv) {
+ vcpu->arch.apicv_active = true;
++ kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
++ }
+ } else
+ static_branch_inc(&kvm_has_noapic_vcpu);
+
+diff --git a/arch/xtensa/kernel/coprocessor.S b/arch/xtensa/kernel/coprocessor.S
+index 45cc0ae0af6f9..c7b9f12896f20 100644
+--- a/arch/xtensa/kernel/coprocessor.S
++++ b/arch/xtensa/kernel/coprocessor.S
+@@ -29,7 +29,7 @@
+ .if XTENSA_HAVE_COPROCESSOR(x); \
+ .align 4; \
+ .Lsave_cp_regs_cp##x: \
+- xchal_cp##x##_store a2 a4 a5 a6 a7; \
++ xchal_cp##x##_store a2 a3 a4 a5 a6; \
+ jx a0; \
+ .endif
+
+@@ -46,7 +46,7 @@
+ .if XTENSA_HAVE_COPROCESSOR(x); \
+ .align 4; \
+ .Lload_cp_regs_cp##x: \
+- xchal_cp##x##_load a2 a4 a5 a6 a7; \
++ xchal_cp##x##_load a2 a3 a4 a5 a6; \
+ jx a0; \
+ .endif
+
+diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
+index 0dde21e0d3de4..ad1841cecdfb7 100644
+--- a/arch/xtensa/kernel/jump_label.c
++++ b/arch/xtensa/kernel/jump_label.c
+@@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
+ {
+ struct patch *patch = data;
+
+- if (atomic_inc_return(&patch->cpu_count) == 1) {
++ if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
+ local_patch_text(patch->addr, patch->data, patch->sz);
+ atomic_inc(&patch->cpu_count);
+ } else {
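[The one-line fix above inverts the rendezvous: under stop_machine, the CPU that patches the text must be the last to arrive (count == num_online_cpus()), which guarantees every other CPU is already parked in the spin branch; with the old == 1 test the first arrival patched while the rest might still be executing the old instructions. A standalone pthread model of the corrected rendezvous (compile with -pthread; the thread count stands in for the CPU count):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4

static atomic_int cpu_count;

static void *stopped_cpu(void *arg)
{
        (void)arg;
        if (atomic_fetch_add(&cpu_count, 1) + 1 == NCPUS) {
                printf("last arrival patches; %d others are spinning\n",
                       NCPUS - 1);
                atomic_fetch_add(&cpu_count, 1);        /* release the spinners */
        } else {
                while (atomic_load(&cpu_count) <= NCPUS)
                        ;       /* park until patching is done */
        }
        return NULL;
}

int main(void)
{
        pthread_t t[NCPUS];

        for (int i = 0; i < NCPUS; i++)
                pthread_create(&t[i], NULL, stopped_cpu, NULL);
        for (int i = 0; i < NCPUS; i++)
                pthread_join(t[i], NULL);
        return 0;
}
]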
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 4a86340133e46..f8703db99c734 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -629,7 +629,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
+ return compat_put_long(argp,
+ (bdev->bd_disk->bdi->ra_pages * PAGE_SIZE) / 512);
+ case BLKGETSIZE:
+- if (bdev_nr_sectors(bdev) > ~0UL)
++ if (bdev_nr_sectors(bdev) > ~(compat_ulong_t)0)
+ return -EFBIG;
+ return compat_put_ulong(argp, bdev_nr_sectors(bdev));
+
+diff --git a/drivers/ata/pata_marvell.c b/drivers/ata/pata_marvell.c
+index 0c5a51970fbf5..014ccb0f45dc4 100644
+--- a/drivers/ata/pata_marvell.c
++++ b/drivers/ata/pata_marvell.c
+@@ -77,6 +77,8 @@ static int marvell_cable_detect(struct ata_port *ap)
+ switch(ap->port_no)
+ {
+ case 0:
++ if (!ap->ioaddr.bmdma_addr)
++ return ATA_CBL_PATA_UNK;
+ if (ioread8(ap->ioaddr.bmdma_addr + 1) & 1)
+ return ATA_CBL_PATA40;
+ return ATA_CBL_PATA80;
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 1476156af74b4..def564d1e8faf 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1453,7 +1453,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ {
+ struct at_xdmac_chan *atchan = to_at_xdmac_chan(chan);
+ struct at_xdmac *atxdmac = to_at_xdmac(atchan->chan.device);
+- struct at_xdmac_desc *desc, *_desc;
++ struct at_xdmac_desc *desc, *_desc, *iter;
+ struct list_head *descs_list;
+ enum dma_status ret;
+ int residue, retry;
+@@ -1568,11 +1568,13 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
+ * microblock.
+ */
+ descs_list = &desc->descs_list;
+- list_for_each_entry_safe(desc, _desc, descs_list, desc_node) {
+- dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
+- residue -= (desc->lld.mbr_ubc & 0xffffff) << dwidth;
+- if ((desc->lld.mbr_nda & 0xfffffffc) == cur_nda)
++ list_for_each_entry_safe(iter, _desc, descs_list, desc_node) {
++ dwidth = at_xdmac_get_dwidth(iter->lld.mbr_cfg);
++ residue -= (iter->lld.mbr_ubc & 0xffffff) << dwidth;
++ if ((iter->lld.mbr_nda & 0xfffffffc) == cur_nda) {
++ desc = iter;
+ break;
++ }
+ }
+ residue += cur_ubc << dwidth;
+
+diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c
+index 329fc2e57b703..b5b8f8181e776 100644
+--- a/drivers/dma/dw-edma/dw-edma-v0-core.c
++++ b/drivers/dma/dw-edma/dw-edma-v0-core.c
+@@ -415,8 +415,11 @@ void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
+ (DW_EDMA_V0_CCS | DW_EDMA_V0_LLE));
+ /* Linked list */
+ #ifdef CONFIG_64BIT
+- SET_CH_64(dw, chan->dir, chan->id, llp.reg,
+- chunk->ll_region.paddr);
++ /* llp is not aligned on 64bit -> keep 32bit accesses */
++ SET_CH_32(dw, chan->dir, chan->id, llp.lsb,
++ lower_32_bits(chunk->ll_region.paddr));
++ SET_CH_32(dw, chan->dir, chan->id, llp.msb,
++ upper_32_bits(chunk->ll_region.paddr));
+ #else /* CONFIG_64BIT */
+ SET_CH_32(dw, chan->dir, chan->id, llp.lsb,
+ lower_32_bits(chunk->ll_region.paddr));
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 3061fe857d69f..f652da6ab47df 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -373,7 +373,6 @@ static void idxd_wq_device_reset_cleanup(struct idxd_wq *wq)
+ {
+ lockdep_assert_held(&wq->wq_lock);
+
+- idxd_wq_disable_cleanup(wq);
+ wq->size = 0;
+ wq->group = NULL;
+ }
+@@ -701,14 +700,17 @@ static void idxd_device_wqs_clear_state(struct idxd_device *idxd)
+
+ if (wq->state == IDXD_WQ_ENABLED) {
+ idxd_wq_disable_cleanup(wq);
+- idxd_wq_device_reset_cleanup(wq);
+ wq->state = IDXD_WQ_DISABLED;
+ }
++ idxd_wq_device_reset_cleanup(wq);
+ }
+ }
+
+ void idxd_device_clear_state(struct idxd_device *idxd)
+ {
++ if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++ return;
++
+ idxd_groups_clear_state(idxd);
+ idxd_engines_clear_state(idxd);
+ idxd_device_wqs_clear_state(idxd);
+diff --git a/drivers/dma/idxd/submit.c b/drivers/dma/idxd/submit.c
+index e289fd48711ad..c01db23e3333f 100644
+--- a/drivers/dma/idxd/submit.c
++++ b/drivers/dma/idxd/submit.c
+@@ -150,14 +150,15 @@ static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
+ */
+ int idxd_enqcmds(struct idxd_wq *wq, void __iomem *portal, const void *desc)
+ {
+- int rc, retries = 0;
++ unsigned int retries = wq->enqcmds_retries;
++ int rc;
+
+ do {
+ rc = enqcmds(portal, desc);
+ if (rc == 0)
+ break;
+ cpu_relax();
+- } while (retries++ < wq->enqcmds_retries);
++ } while (retries--);
+
+ return rc;
+ }
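[The rewrite above snapshots the retry budget into a local before the loop, so a concurrent sysfs write to wq->enqcmds_retries can no longer move the loop bound mid-flight. A standalone sketch of the same shape, with a stub enqcmds() standing in for the ENQCMDS instruction wrapper:

#include <stdio.h>

struct wq { unsigned int enqcmds_retries; };

static int enqcmds(void)
{
        return -1;      /* pretend the portal stays busy */
}

static int submit(struct wq *wq)
{
        unsigned int retries = wq->enqcmds_retries;     /* snapshot once */
        int rc;

        do {
                rc = enqcmds();
                if (rc == 0)
                        break;
        } while (retries--);

        return rc;
}

int main(void)
{
        struct wq wq = { .enqcmds_retries = 3 };

        printf("rc = %d\n", submit(&wq));
        return 0;
}
]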
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 7e19ab92b61a8..dfd549685c467 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -905,6 +905,9 @@ static ssize_t wq_max_transfer_size_store(struct device *dev, struct device_attr
+ u64 xfer_size;
+ int rc;
+
++ if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++ return -EPERM;
++
+ if (wq->state != IDXD_WQ_DISABLED)
+ return -EPERM;
+
+@@ -939,6 +942,9 @@ static ssize_t wq_max_batch_size_store(struct device *dev, struct device_attribu
+ u64 batch_size;
+ int rc;
+
++ if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++ return -EPERM;
++
+ if (wq->state != IDXD_WQ_DISABLED)
+ return -EPERM;
+
+diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
+index 75ec0754d4ad4..b1e6173fcc271 100644
+--- a/drivers/dma/imx-sdma.c
++++ b/drivers/dma/imx-sdma.c
+@@ -198,12 +198,12 @@ struct sdma_script_start_addrs {
+ s32 per_2_firi_addr;
+ s32 mcu_2_firi_addr;
+ s32 uart_2_per_addr;
+- s32 uart_2_mcu_ram_addr;
++ s32 uart_2_mcu_addr;
+ s32 per_2_app_addr;
+ s32 mcu_2_app_addr;
+ s32 per_2_per_addr;
+ s32 uartsh_2_per_addr;
+- s32 uartsh_2_mcu_ram_addr;
++ s32 uartsh_2_mcu_addr;
+ s32 per_2_shp_addr;
+ s32 mcu_2_shp_addr;
+ s32 ata_2_mcu_addr;
+@@ -232,8 +232,8 @@ struct sdma_script_start_addrs {
+ s32 mcu_2_ecspi_addr;
+ s32 mcu_2_sai_addr;
+ s32 sai_2_mcu_addr;
+- s32 uart_2_mcu_addr;
+- s32 uartsh_2_mcu_addr;
++ s32 uart_2_mcu_rom_addr;
++ s32 uartsh_2_mcu_rom_addr;
+ /* End of v3 array */
+ s32 mcu_2_zqspi_addr;
+ /* End of v4 array */
+@@ -1780,17 +1780,17 @@ static void sdma_add_scripts(struct sdma_engine *sdma,
+ saddr_arr[i] = addr_arr[i];
+
+ /*
+- * get uart_2_mcu_addr/uartsh_2_mcu_addr rom script specially because
+- * they are now replaced by uart_2_mcu_ram_addr/uartsh_2_mcu_ram_addr
+- * to be compatible with legacy freescale/nxp sdma firmware, and they
+- * are located in the bottom part of sdma_script_start_addrs which are
+- * beyond the SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1.
++ * For compatibility with NXP internal legacy kernel before 4.19 which
++ * is based on uart ram script and mainline kernel based on uart rom
++ * script, both uart ram/rom scripts are present in newer sdma
++ * firmware. Use the rom versions if they are present (V3 or newer).
+ */
+- if (addr->uart_2_mcu_addr)
+- sdma->script_addrs->uart_2_mcu_addr = addr->uart_2_mcu_addr;
+- if (addr->uartsh_2_mcu_addr)
+- sdma->script_addrs->uartsh_2_mcu_addr = addr->uartsh_2_mcu_addr;
+-
++ if (sdma->script_number >= SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V3) {
++ if (addr->uart_2_mcu_rom_addr)
++ sdma->script_addrs->uart_2_mcu_addr = addr->uart_2_mcu_rom_addr;
++ if (addr->uartsh_2_mcu_rom_addr)
++ sdma->script_addrs->uartsh_2_mcu_addr = addr->uartsh_2_mcu_rom_addr;
++ }
+ }
+
+ static void sdma_load_firmware(const struct firmware *fw, void *context)
+@@ -1869,7 +1869,7 @@ static int sdma_event_remap(struct sdma_engine *sdma)
+ u32 reg, val, shift, num_map, i;
+ int ret = 0;
+
+- if (IS_ERR(np) || IS_ERR(gpr_np))
++ if (IS_ERR(np) || !gpr_np)
+ goto out;
+
+ event_remap = of_find_property(np, propname, NULL);
+@@ -1917,7 +1917,7 @@ static int sdma_event_remap(struct sdma_engine *sdma)
+ }
+
+ out:
+- if (!IS_ERR(gpr_np))
++ if (gpr_np)
+ of_node_put(gpr_np);
+
+ return ret;
+diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
+index 375e7e647df6b..a1517ef1f4a01 100644
+--- a/drivers/dma/mediatek/mtk-uart-apdma.c
++++ b/drivers/dma/mediatek/mtk-uart-apdma.c
+@@ -274,7 +274,7 @@ static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
+ unsigned int status;
+ int ret;
+
+- ret = pm_runtime_get_sync(mtkd->ddev.dev);
++ ret = pm_runtime_resume_and_get(mtkd->ddev.dev);
+ if (ret < 0) {
+ pm_runtime_put_noidle(chan->device->dev);
+ return ret;
+@@ -288,18 +288,21 @@ static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
+ ret = readx_poll_timeout(readl, c->base + VFF_EN,
+ status, !status, 10, 100);
+ if (ret)
+- return ret;
++ goto err_pm;
+
+ ret = request_irq(c->irq, mtk_uart_apdma_irq_handler,
+ IRQF_TRIGGER_NONE, KBUILD_MODNAME, chan);
+ if (ret < 0) {
+ dev_err(chan->device->dev, "Can't request dma IRQ\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_pm;
+ }
+
+ if (mtkd->support_33bits)
+ mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+
++err_pm:
++ pm_runtime_put_noidle(mtkd->ddev.dev);
+ return ret;
+ }
+
+diff --git a/drivers/edac/synopsys_edac.c b/drivers/edac/synopsys_edac.c
+index f05ff02c0656e..40b1abeca8562 100644
+--- a/drivers/edac/synopsys_edac.c
++++ b/drivers/edac/synopsys_edac.c
+@@ -164,6 +164,11 @@
+ #define ECC_STAT_CECNT_SHIFT 8
+ #define ECC_STAT_BITNUM_MASK 0x7F
+
++/* ECC error count register definitions */
++#define ECC_ERRCNT_UECNT_MASK 0xFFFF0000
++#define ECC_ERRCNT_UECNT_SHIFT 16
++#define ECC_ERRCNT_CECNT_MASK 0xFFFF
++
+ /* DDR QOS Interrupt register definitions */
+ #define DDR_QOS_IRQ_STAT_OFST 0x20200
+ #define DDR_QOSUE_MASK 0x4
+@@ -423,15 +428,16 @@ static int zynqmp_get_error_info(struct synps_edac_priv *priv)
+ base = priv->baseaddr;
+ p = &priv->stat;
+
++ regval = readl(base + ECC_ERRCNT_OFST);
++ p->ce_cnt = regval & ECC_ERRCNT_CECNT_MASK;
++ p->ue_cnt = (regval & ECC_ERRCNT_UECNT_MASK) >> ECC_ERRCNT_UECNT_SHIFT;
++ if (!p->ce_cnt)
++ goto ue_err;
++
+ regval = readl(base + ECC_STAT_OFST);
+ if (!regval)
+ return 1;
+
+- p->ce_cnt = (regval & ECC_STAT_CECNT_MASK) >> ECC_STAT_CECNT_SHIFT;
+- p->ue_cnt = (regval & ECC_STAT_UECNT_MASK) >> ECC_STAT_UECNT_SHIFT;
+- if (!p->ce_cnt)
+- goto ue_err;
+-
+ p->ceinfo.bitpos = (regval & ECC_STAT_BITNUM_MASK);
+
+ regval = readl(base + ECC_CEADDR0_OFST);
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index e48108e694f8d..7dad6f57d9704 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -955,8 +955,7 @@ static int cs_dsp_create_control(struct cs_dsp *dsp,
+ ctl->alg_region = *alg_region;
+ if (subname && dsp->fw_ver >= 2) {
+ ctl->subname_len = subname_len;
+- ctl->subname = kmemdup(subname,
+- strlen(subname) + 1, GFP_KERNEL);
++ ctl->subname = kasprintf(GFP_KERNEL, "%.*s", subname_len, subname);
+ if (!ctl->subname) {
+ ret = -ENOMEM;
+ goto err_ctl;
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 344e376b2ee99..d5a5cf2691026 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1601,8 +1601,6 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+
+ gpiochip_set_irq_hooks(gc);
+
+- acpi_gpiochip_request_interrupts(gc);
+-
+ /*
+ * Using barrier() here to prevent compiler from reordering
+ * gc->irq.initialized before initialization of above
+@@ -1612,6 +1610,8 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+
+ gc->irq.initialized = true;
+
++ acpi_gpiochip_request_interrupts(gc);
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+index 87ed48d5530dc..8bd265b408470 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+@@ -138,6 +138,10 @@ static bool dmub_psr_set_version(struct dmub_psr *dmub, struct dc_stream_state *
+ cmd.psr_set_version.psr_set_version_data.version = PSR_VERSION_UNSUPPORTED;
+ break;
+ }
++
++ if (cmd.psr_set_version.psr_set_version_data.version == PSR_VERSION_UNSUPPORTED)
++ return false;
++
+ cmd.psr_set_version.psr_set_version_data.cmd_version = DMUB_CMD_PSR_CONTROL_VERSION_1;
+ cmd.psr_set_version.psr_set_version_data.panel_inst = panel_inst;
+ cmd.psr_set_version.header.payload_bytes = sizeof(struct dmub_cmd_psr_set_version_data);
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index b00de57cc957e..cd32e1470b3cb 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -887,6 +887,20 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
+ return false;
+ }
+
++ /* Wa_16011303918:adl-p */
++ if (crtc_state->vrr.enable &&
++ IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) {
++ drm_dbg_kms(&dev_priv->drm,
++ "PSR2 not enabled, not compatible with HW stepping + VRR\n");
++ return false;
++ }
++
++ if (!_compute_psr2_sdp_prior_scanline_indication(intel_dp, crtc_state)) {
++ drm_dbg_kms(&dev_priv->drm,
++ "PSR2 not enabled, PSR2 SDP indication do not fit in hblank\n");
++ return false;
++ }
++
+ if (HAS_PSR2_SEL_FETCH(dev_priv)) {
+ if (!intel_psr2_sel_fetch_config_valid(intel_dp, crtc_state) &&
+ !HAS_PSR_HW_TRACKING(dev_priv)) {
+@@ -900,12 +914,12 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
+ if (!crtc_state->enable_psr2_sel_fetch &&
+ IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_C0)) {
+ drm_dbg_kms(&dev_priv->drm, "PSR2 HW tracking is not supported this Display stepping\n");
+- return false;
++ goto unsupported;
+ }
+
+ if (!psr2_granularity_check(intel_dp, crtc_state)) {
+ drm_dbg_kms(&dev_priv->drm, "PSR2 not enabled, SU granularity not compatible\n");
+- return false;
++ goto unsupported;
+ }
+
+ if (!crtc_state->enable_psr2_sel_fetch &&
+@@ -914,25 +928,15 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
+ "PSR2 not enabled, resolution %dx%d > max supported %dx%d\n",
+ crtc_hdisplay, crtc_vdisplay,
+ psr_max_h, psr_max_v);
+- return false;
+- }
+-
+- if (!_compute_psr2_sdp_prior_scanline_indication(intel_dp, crtc_state)) {
+- drm_dbg_kms(&dev_priv->drm,
+- "PSR2 not enabled, PSR2 SDP indication do not fit in hblank\n");
+- return false;
+- }
+-
+- /* Wa_16011303918:adl-p */
+- if (crtc_state->vrr.enable &&
+- IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0)) {
+- drm_dbg_kms(&dev_priv->drm,
+- "PSR2 not enabled, not compatible with HW stepping + VRR\n");
+- return false;
++ goto unsupported;
+ }
+
+ tgl_dc3co_exitline_compute_config(intel_dp, crtc_state);
+ return true;
++
++unsupported:
++ crtc_state->enable_psr2_sel_fetch = false;
++ return false;
+ }
+
+ void intel_psr_compute_config(struct intel_dp *intel_dp,
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
+index fb261930ad1c7..e8a8240a68686 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
+@@ -601,29 +601,20 @@ static const struct of_device_id dt_match[] = {
+ };
+
+ #ifdef CONFIG_PM
+-static int adreno_resume(struct device *dev)
++static int adreno_runtime_resume(struct device *dev)
+ {
+ struct msm_gpu *gpu = dev_to_gpu(dev);
+
+ return gpu->funcs->pm_resume(gpu);
+ }
+
+-static int active_submits(struct msm_gpu *gpu)
+-{
+- int active_submits;
+- mutex_lock(&gpu->active_lock);
+- active_submits = gpu->active_submits;
+- mutex_unlock(&gpu->active_lock);
+- return active_submits;
+-}
+-
+-static int adreno_suspend(struct device *dev)
++static int adreno_runtime_suspend(struct device *dev)
+ {
+ struct msm_gpu *gpu = dev_to_gpu(dev);
+ int remaining;
+
+ remaining = wait_event_timeout(gpu->retire_event,
+- active_submits(gpu) == 0,
++ gpu->active_submits == 0,
+ msecs_to_jiffies(1000));
+ if (remaining == 0) {
+ dev_err(dev, "Timeout waiting for GPU to suspend\n");
+@@ -636,7 +627,7 @@ static int adreno_suspend(struct device *dev)
+
+ static const struct dev_pm_ops adreno_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+- SET_RUNTIME_PM_OPS(adreno_suspend, adreno_resume, NULL)
++ SET_RUNTIME_PM_OPS(adreno_runtime_suspend, adreno_runtime_resume, NULL)
+ };
+
+ static struct platform_driver adreno_driver = {
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index c6b69afcbac89..50e854207c70a 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -90,7 +90,10 @@ static void mdp5_plane_reset(struct drm_plane *plane)
+ __drm_atomic_helper_plane_destroy_state(plane->state);
+
+ kfree(to_mdp5_plane_state(plane->state));
++ plane->state = NULL;
+ mdp5_state = kzalloc(sizeof(*mdp5_state), GFP_KERNEL);
++ if (!mdp5_state)
++ return;
+
+ if (plane->type == DRM_PLANE_TYPE_PRIMARY)
+ mdp5_state->base.zpos = STAGE_BASE;
+diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+index 5d2ff67910586..acfe1b31e0792 100644
+--- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
++++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+@@ -176,6 +176,8 @@ void msm_disp_snapshot_add_block(struct msm_disp_state *disp_state, u32 len,
+ va_list va;
+
+ new_blk = kzalloc(sizeof(struct msm_disp_state_block), GFP_KERNEL);
++ if (!new_blk)
++ return;
+
+ va_start(va, fmt);
+
+diff --git a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+index 46029c5610c80..145047e193946 100644
+--- a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
++++ b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+@@ -229,7 +229,7 @@ static void rpi_touchscreen_i2c_write(struct rpi_touchscreen *ts,
+
+ ret = i2c_smbus_write_byte_data(ts->i2c, reg, val);
+ if (ret)
+- dev_err(&ts->dsi->dev, "I2C write failed: %d\n", ret);
++ dev_err(&ts->i2c->dev, "I2C write failed: %d\n", ret);
+ }
+
+ static int rpi_touchscreen_write(struct rpi_touchscreen *ts, u16 reg, u32 val)
+@@ -265,7 +265,7 @@ static int rpi_touchscreen_noop(struct drm_panel *panel)
+ return 0;
+ }
+
+-static int rpi_touchscreen_enable(struct drm_panel *panel)
++static int rpi_touchscreen_prepare(struct drm_panel *panel)
+ {
+ struct rpi_touchscreen *ts = panel_to_ts(panel);
+ int i;
+@@ -295,6 +295,13 @@ static int rpi_touchscreen_enable(struct drm_panel *panel)
+ rpi_touchscreen_write(ts, DSI_STARTDSI, 0x01);
+ msleep(100);
+
++ return 0;
++}
++
++static int rpi_touchscreen_enable(struct drm_panel *panel)
++{
++ struct rpi_touchscreen *ts = panel_to_ts(panel);
++
+ /* Turn on the backlight. */
+ rpi_touchscreen_i2c_write(ts, REG_PWM, 255);
+
+@@ -349,7 +356,7 @@ static int rpi_touchscreen_get_modes(struct drm_panel *panel,
+ static const struct drm_panel_funcs rpi_touchscreen_funcs = {
+ .disable = rpi_touchscreen_disable,
+ .unprepare = rpi_touchscreen_noop,
+- .prepare = rpi_touchscreen_noop,
++ .prepare = rpi_touchscreen_prepare,
+ .enable = rpi_touchscreen_enable,
+ .get_modes = rpi_touchscreen_get_modes,
+ };
+diff --git a/drivers/gpu/drm/radeon/radeon_sync.c b/drivers/gpu/drm/radeon/radeon_sync.c
+index b991ba1bcd513..f63efd8d5e524 100644
+--- a/drivers/gpu/drm/radeon/radeon_sync.c
++++ b/drivers/gpu/drm/radeon/radeon_sync.c
+@@ -96,7 +96,7 @@ int radeon_sync_resv(struct radeon_device *rdev,
+ struct dma_fence *f;
+ int r = 0;
+
+- dma_resv_for_each_fence(&cursor, resv, shared, f) {
++ dma_resv_for_each_fence(&cursor, resv, !shared, f) {
+ fence = to_radeon_fence(f);
+ if (fence && fence->rdev == rdev)
+ radeon_sync_fence(sync, fence);
+diff --git a/drivers/gpu/drm/vc4/vc4_dsi.c b/drivers/gpu/drm/vc4/vc4_dsi.c
+index 9300d3354c512..64dfefeb03f5e 100644
+--- a/drivers/gpu/drm/vc4/vc4_dsi.c
++++ b/drivers/gpu/drm/vc4/vc4_dsi.c
+@@ -846,7 +846,7 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
+ unsigned long phy_clock;
+ int ret;
+
+- ret = pm_runtime_get_sync(dev);
++ ret = pm_runtime_resume_and_get(dev);
+ if (ret) {
+ DRM_ERROR("Failed to runtime PM enable on DSI%d\n", dsi->variant->port);
+ return;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index 31aecc46624b3..04c8a378aeed6 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -46,6 +46,21 @@ vmw_buffer_object(struct ttm_buffer_object *bo)
+ return container_of(bo, struct vmw_buffer_object, base);
+ }
+
++/**
++ * bo_is_vmw - check if the buffer object is a &vmw_buffer_object
++ * @bo: ttm buffer object to be checked
++ *
++ * Uses destroy function associated with the object to determine if this is
++ * a &vmw_buffer_object.
++ *
++ * Returns:
++ * true if the object is of &vmw_buffer_object type, false if not.
++ */
++static bool bo_is_vmw(struct ttm_buffer_object *bo)
++{
++ return bo->destroy == &vmw_bo_bo_free ||
++ bo->destroy == &vmw_gem_destroy;
++}
+
+ /**
+ * vmw_bo_pin_in_placement - Validate a buffer to placement.
+@@ -615,8 +630,9 @@ int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data,
+
+ ret = vmw_user_bo_synccpu_grab(vbo, arg->flags);
+ vmw_bo_unreference(&vbo);
+- if (unlikely(ret != 0 && ret != -ERESTARTSYS &&
+- ret != -EBUSY)) {
++ if (unlikely(ret != 0)) {
++ if (ret == -ERESTARTSYS || ret == -EBUSY)
++ return -EBUSY;
+ DRM_ERROR("Failed synccpu grab on handle 0x%08x.\n",
+ (unsigned int) arg->handle);
+ return ret;
+@@ -798,7 +814,7 @@ int vmw_dumb_create(struct drm_file *file_priv,
+ void vmw_bo_swap_notify(struct ttm_buffer_object *bo)
+ {
+ /* Is @bo embedded in a struct vmw_buffer_object? */
+- if (vmw_bo_is_vmw_bo(bo))
++ if (!bo_is_vmw(bo))
+ return;
+
+ /* Kill any cached kernel maps before swapout */
+@@ -822,7 +838,7 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo,
+ struct vmw_buffer_object *vbo;
+
+ /* Make sure @bo is embedded in a struct vmw_buffer_object? */
+- if (vmw_bo_is_vmw_bo(bo))
++ if (!bo_is_vmw(bo))
+ return;
+
+ vbo = container_of(bo, struct vmw_buffer_object, base);
+@@ -843,22 +859,3 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo,
+ if (mem->mem_type != VMW_PL_MOB && bo->resource->mem_type == VMW_PL_MOB)
+ vmw_resource_unbind_list(vbo);
+ }
+-
+-/**
+- * vmw_bo_is_vmw_bo - check if the buffer object is a &vmw_buffer_object
+- * @bo: buffer object to be checked
+- *
+- * Uses destroy function associated with the object to determine if this is
+- * a &vmw_buffer_object.
+- *
+- * Returns:
+- * true if the object is of &vmw_buffer_object type, false if not.
+- */
+-bool vmw_bo_is_vmw_bo(struct ttm_buffer_object *bo)
+-{
+- if (bo->destroy == &vmw_bo_bo_free ||
+- bo->destroy == &vmw_gem_destroy)
+- return true;
+-
+- return false;
+-}
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index fe36efdb7ff52..f685d426af7e3 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -997,13 +997,10 @@ static int vmw_driver_load(struct vmw_private *dev_priv, u32 pci_id)
+ goto out_no_fman;
+ }
+
+- drm_vma_offset_manager_init(&dev_priv->vma_manager,
+- DRM_FILE_PAGE_OFFSET_START,
+- DRM_FILE_PAGE_OFFSET_SIZE);
+ ret = ttm_device_init(&dev_priv->bdev, &vmw_bo_driver,
+ dev_priv->drm.dev,
+ dev_priv->drm.anon_inode->i_mapping,
+- &dev_priv->vma_manager,
++ dev_priv->drm.vma_offset_manager,
+ dev_priv->map_mode == vmw_dma_alloc_coherent,
+ false);
+ if (unlikely(ret != 0)) {
+@@ -1173,7 +1170,6 @@ static void vmw_driver_unload(struct drm_device *dev)
+ vmw_devcaps_destroy(dev_priv);
+ vmw_vram_manager_fini(dev_priv);
+ ttm_device_fini(&dev_priv->bdev);
+- drm_vma_offset_manager_destroy(&dev_priv->vma_manager);
+ vmw_release_device_late(dev_priv);
+ vmw_fence_manager_takedown(dev_priv->fman);
+ if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
+@@ -1397,7 +1393,7 @@ vmw_get_unmapped_area(struct file *file, unsigned long uaddr,
+ struct vmw_private *dev_priv = vmw_priv(file_priv->minor->dev);
+
+ return drm_get_unmapped_area(file, uaddr, len, pgoff, flags,
+- &dev_priv->vma_manager);
++ dev_priv->drm.vma_offset_manager);
+ }
+
+ static int vmwgfx_pm_notifier(struct notifier_block *nb, unsigned long val,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index 00e8e27e48846..ace7ca150b036 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -683,6 +683,9 @@ static void vmw_user_surface_base_release(struct ttm_base_object **p_base)
+ container_of(base, struct vmw_user_surface, prime.base);
+ struct vmw_resource *res = &user_srf->srf.res;
+
++ if (base->shareable && res && res->backup)
++ drm_gem_object_put(&res->backup->base.base);
++
+ *p_base = NULL;
+ vmw_resource_unreference(&res);
+ }
+@@ -857,6 +860,7 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ goto out_unlock;
+ }
+ vmw_bo_reference(res->backup);
++ drm_gem_object_get(&res->backup->base.base);
+ }
+
+ tmp = vmw_resource_reference(&srf->res);
+@@ -1513,7 +1517,6 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ &res->backup);
+ if (ret == 0)
+ vmw_bo_reference(res->backup);
+-
+ }
+
+ if (unlikely(ret != 0)) {
+@@ -1561,6 +1564,8 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
+ drm_vma_node_offset_addr(&res->backup->base.base.vma_node);
+ rep->buffer_size = res->backup->base.base.size;
+ rep->buffer_handle = backup_handle;
++ if (user_srf->prime.base.shareable)
++ drm_gem_object_get(&res->backup->base.base);
+ } else {
+ rep->buffer_map_handle = 0;
+ rep->buffer_size = 0;
+diff --git a/drivers/input/keyboard/omap4-keypad.c b/drivers/input/keyboard/omap4-keypad.c
+index 43375b38ee592..8a7ce41b8c56e 100644
+--- a/drivers/input/keyboard/omap4-keypad.c
++++ b/drivers/input/keyboard/omap4-keypad.c
+@@ -393,7 +393,7 @@ static int omap4_keypad_probe(struct platform_device *pdev)
+ * revision register.
+ */
+ error = pm_runtime_get_sync(dev);
+- if (error) {
++ if (error < 0) {
+ dev_err(dev, "pm_runtime_get_sync() failed\n");
+ pm_runtime_put_noidle(dev);
+ return error;
+diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig
+index db3ec47681596..7a730c9d4bdf6 100644
+--- a/drivers/net/ethernet/Kconfig
++++ b/drivers/net/ethernet/Kconfig
+@@ -35,15 +35,6 @@ source "drivers/net/ethernet/aquantia/Kconfig"
+ source "drivers/net/ethernet/arc/Kconfig"
+ source "drivers/net/ethernet/asix/Kconfig"
+ source "drivers/net/ethernet/atheros/Kconfig"
+-source "drivers/net/ethernet/broadcom/Kconfig"
+-source "drivers/net/ethernet/brocade/Kconfig"
+-source "drivers/net/ethernet/cadence/Kconfig"
+-source "drivers/net/ethernet/calxeda/Kconfig"
+-source "drivers/net/ethernet/cavium/Kconfig"
+-source "drivers/net/ethernet/chelsio/Kconfig"
+-source "drivers/net/ethernet/cirrus/Kconfig"
+-source "drivers/net/ethernet/cisco/Kconfig"
+-source "drivers/net/ethernet/cortina/Kconfig"
+
+ config CX_ECAT
+ tristate "Beckhoff CX5020 EtherCAT master support"
+@@ -57,6 +48,14 @@ config CX_ECAT
+ To compile this driver as a module, choose M here. The module
+ will be called ec_bhf.
+
++source "drivers/net/ethernet/broadcom/Kconfig"
++source "drivers/net/ethernet/cadence/Kconfig"
++source "drivers/net/ethernet/calxeda/Kconfig"
++source "drivers/net/ethernet/cavium/Kconfig"
++source "drivers/net/ethernet/chelsio/Kconfig"
++source "drivers/net/ethernet/cirrus/Kconfig"
++source "drivers/net/ethernet/cisco/Kconfig"
++source "drivers/net/ethernet/cortina/Kconfig"
+ source "drivers/net/ethernet/davicom/Kconfig"
+
+ config DNET
+@@ -84,7 +83,6 @@ source "drivers/net/ethernet/huawei/Kconfig"
+ source "drivers/net/ethernet/i825xx/Kconfig"
+ source "drivers/net/ethernet/ibm/Kconfig"
+ source "drivers/net/ethernet/intel/Kconfig"
+-source "drivers/net/ethernet/microsoft/Kconfig"
+ source "drivers/net/ethernet/xscale/Kconfig"
+
+ config JME
+@@ -127,8 +125,9 @@ source "drivers/net/ethernet/mediatek/Kconfig"
+ source "drivers/net/ethernet/mellanox/Kconfig"
+ source "drivers/net/ethernet/micrel/Kconfig"
+ source "drivers/net/ethernet/microchip/Kconfig"
+-source "drivers/net/ethernet/moxa/Kconfig"
+ source "drivers/net/ethernet/mscc/Kconfig"
++source "drivers/net/ethernet/microsoft/Kconfig"
++source "drivers/net/ethernet/moxa/Kconfig"
+ source "drivers/net/ethernet/myricom/Kconfig"
+
+ config FEALNX
+@@ -140,10 +139,10 @@ config FEALNX
+ Say Y here to support the Myson MTD-800 family of PCI-based Ethernet
+ cards. <http://www.myson.com.tw/>
+
++source "drivers/net/ethernet/ni/Kconfig"
+ source "drivers/net/ethernet/natsemi/Kconfig"
+ source "drivers/net/ethernet/neterion/Kconfig"
+ source "drivers/net/ethernet/netronome/Kconfig"
+-source "drivers/net/ethernet/ni/Kconfig"
+ source "drivers/net/ethernet/8390/Kconfig"
+ source "drivers/net/ethernet/nvidia/Kconfig"
+ source "drivers/net/ethernet/nxp/Kconfig"
+@@ -163,6 +162,7 @@ source "drivers/net/ethernet/packetengines/Kconfig"
+ source "drivers/net/ethernet/pasemi/Kconfig"
+ source "drivers/net/ethernet/pensando/Kconfig"
+ source "drivers/net/ethernet/qlogic/Kconfig"
++source "drivers/net/ethernet/brocade/Kconfig"
+ source "drivers/net/ethernet/qualcomm/Kconfig"
+ source "drivers/net/ethernet/rdc/Kconfig"
+ source "drivers/net/ethernet/realtek/Kconfig"
+@@ -170,10 +170,10 @@ source "drivers/net/ethernet/renesas/Kconfig"
+ source "drivers/net/ethernet/rocker/Kconfig"
+ source "drivers/net/ethernet/samsung/Kconfig"
+ source "drivers/net/ethernet/seeq/Kconfig"
+-source "drivers/net/ethernet/sfc/Kconfig"
+ source "drivers/net/ethernet/sgi/Kconfig"
+ source "drivers/net/ethernet/silan/Kconfig"
+ source "drivers/net/ethernet/sis/Kconfig"
++source "drivers/net/ethernet/sfc/Kconfig"
+ source "drivers/net/ethernet/smsc/Kconfig"
+ source "drivers/net/ethernet/socionext/Kconfig"
+ source "drivers/net/ethernet/stmicro/Kconfig"
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index 33f1a1377588b..24d715c28a355 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -486,8 +486,8 @@ int aq_nic_start(struct aq_nic_s *self)
+ if (err < 0)
+ goto err_exit;
+
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i]) {
++ for (i = 0U; self->aq_vecs > i; ++i) {
++ aq_vec = self->aq_vec[i];
+ err = aq_vec_start(aq_vec);
+ if (err < 0)
+ goto err_exit;
+@@ -517,8 +517,8 @@ int aq_nic_start(struct aq_nic_s *self)
+ mod_timer(&self->polling_timer, jiffies +
+ AQ_CFG_POLLING_TIMER_INTERVAL);
+ } else {
+- for (i = 0U, aq_vec = self->aq_vec[0];
+- self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i]) {
++ for (i = 0U; self->aq_vecs > i; ++i) {
++ aq_vec = self->aq_vec[i];
+ err = aq_pci_func_alloc_irq(self, i, self->ndev->name,
+ aq_vec_isr, aq_vec,
+ aq_vec_get_affinity_mask(aq_vec));
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 797a95142d1f4..3a529ee8c8340 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -444,22 +444,22 @@ err_exit:
+
+ static int aq_pm_freeze(struct device *dev)
+ {
+- return aq_suspend_common(dev, false);
++ return aq_suspend_common(dev, true);
+ }
+
+ static int aq_pm_suspend_poweroff(struct device *dev)
+ {
+- return aq_suspend_common(dev, true);
++ return aq_suspend_common(dev, false);
+ }
+
+ static int aq_pm_thaw(struct device *dev)
+ {
+- return atl_resume_common(dev, false);
++ return atl_resume_common(dev, true);
+ }
+
+ static int aq_pm_resume_restore(struct device *dev)
+ {
+- return atl_resume_common(dev, true);
++ return atl_resume_common(dev, false);
+ }
+
+ static const struct dev_pm_ops aq_pm_ops = {
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+index f4774cf051c97..6ab1f3212d246 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_vec.c
+@@ -43,8 +43,8 @@ static int aq_vec_poll(struct napi_struct *napi, int budget)
+ if (!self) {
+ err = -EINVAL;
+ } else {
+- for (i = 0U, ring = self->ring[0];
+- self->tx_rings > i; ++i, ring = self->ring[i]) {
++ for (i = 0U; self->tx_rings > i; ++i) {
++ ring = self->ring[i];
+ u64_stats_update_begin(&ring[AQ_VEC_RX_ID].stats.rx.syncp);
+ ring[AQ_VEC_RX_ID].stats.rx.polls++;
+ u64_stats_update_end(&ring[AQ_VEC_RX_ID].stats.rx.syncp);
+@@ -182,8 +182,8 @@ int aq_vec_init(struct aq_vec_s *self, const struct aq_hw_ops *aq_hw_ops,
+ self->aq_hw_ops = aq_hw_ops;
+ self->aq_hw = aq_hw;
+
+- for (i = 0U, ring = self->ring[0];
+- self->tx_rings > i; ++i, ring = self->ring[i]) {
++ for (i = 0U; self->tx_rings > i; ++i) {
++ ring = self->ring[i];
+ err = aq_ring_init(&ring[AQ_VEC_TX_ID], ATL_RING_TX);
+ if (err < 0)
+ goto err_exit;
+@@ -224,8 +224,8 @@ int aq_vec_start(struct aq_vec_s *self)
+ unsigned int i = 0U;
+ int err = 0;
+
+- for (i = 0U, ring = self->ring[0];
+- self->tx_rings > i; ++i, ring = self->ring[i]) {
++ for (i = 0U; self->tx_rings > i; ++i) {
++ ring = self->ring[i];
+ err = self->aq_hw_ops->hw_ring_tx_start(self->aq_hw,
+ &ring[AQ_VEC_TX_ID]);
+ if (err < 0)
+@@ -248,8 +248,8 @@ void aq_vec_stop(struct aq_vec_s *self)
+ struct aq_ring_s *ring = NULL;
+ unsigned int i = 0U;
+
+- for (i = 0U, ring = self->ring[0];
+- self->tx_rings > i; ++i, ring = self->ring[i]) {
++ for (i = 0U; self->tx_rings > i; ++i) {
++ ring = self->ring[i];
+ self->aq_hw_ops->hw_ring_tx_stop(self->aq_hw,
+ &ring[AQ_VEC_TX_ID]);
+
+@@ -268,8 +268,8 @@ void aq_vec_deinit(struct aq_vec_s *self)
+ if (!self)
+ goto err_exit;
+
+- for (i = 0U, ring = self->ring[0];
+- self->tx_rings > i; ++i, ring = self->ring[i]) {
++ for (i = 0U; self->tx_rings > i; ++i) {
++ ring = self->ring[i];
+ aq_ring_tx_clean(&ring[AQ_VEC_TX_ID]);
+ aq_ring_rx_deinit(&ring[AQ_VEC_RX_ID]);
+ }
+@@ -297,8 +297,8 @@ void aq_vec_ring_free(struct aq_vec_s *self)
+ if (!self)
+ goto err_exit;
+
+- for (i = 0U, ring = self->ring[0];
+- self->tx_rings > i; ++i, ring = self->ring[i]) {
++ for (i = 0U; self->tx_rings > i; ++i) {
++ ring = self->ring[i];
+ aq_ring_free(&ring[AQ_VEC_TX_ID]);
+ if (i < self->rx_rings)
+ aq_ring_free(&ring[AQ_VEC_RX_ID]);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index d13f06cf0308a..c4f4b13ac4691 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1656,6 +1656,7 @@ static void macb_tx_restart(struct macb_queue *queue)
+ unsigned int head = queue->tx_head;
+ unsigned int tail = queue->tx_tail;
+ struct macb *bp = queue->bp;
++ unsigned int head_idx, tbqp;
+
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ queue_writel(queue, ISR, MACB_BIT(TXUBR));
+@@ -1663,6 +1664,13 @@ static void macb_tx_restart(struct macb_queue *queue)
+ if (head == tail)
+ return;
+
++ tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
++ tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
++ head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, head));
++
++ if (tbqp == head_idx)
++ return;
++
+ macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
+ }
+
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+index 763d2c7b5fb1a..5750f9a56393a 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c
+@@ -489,11 +489,15 @@ static int dpaa_get_ts_info(struct net_device *net_dev,
+ info->phc_index = -1;
+
+ fman_node = of_get_parent(mac_node);
+- if (fman_node)
++ if (fman_node) {
+ ptp_node = of_parse_phandle(fman_node, "ptimer-handle", 0);
++ of_node_put(fman_node);
++ }
+
+- if (ptp_node)
++ if (ptp_node) {
+ ptp_dev = of_find_device_by_node(ptp_node);
++ of_node_put(ptp_node);
++ }
+
+ if (ptp_dev)
+ ptp = platform_get_drvdata(ptp_dev);
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index d60e2016d03c6..e6c8e6d5234f8 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1009,8 +1009,8 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
+ {
+ u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) |
+ link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND;
+- u16 max_ltr_enc_d = 0; /* maximum LTR decoded by platform */
+- u16 lat_enc_d = 0; /* latency decoded */
++ u32 max_ltr_enc_d = 0; /* maximum LTR decoded by platform */
++ u32 lat_enc_d = 0; /* latency decoded */
+ u16 lat_enc = 0; /* latency encoded */
+
+ if (link) {
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+index 73edc24d81d54..c54b72f9fd345 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+@@ -342,7 +342,8 @@ ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev)
+ np = netdev_priv(netdev);
+ vsi = np->vsi;
+
+- if (ice_is_reset_in_progress(vsi->back->state))
++ if (ice_is_reset_in_progress(vsi->back->state) ||
++ test_bit(ICE_VF_DIS, vsi->back->state))
+ return NETDEV_TX_BUSY;
+
+ repr = ice_netdev_to_repr(netdev);
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.h b/drivers/net/ethernet/intel/ice/ice_eswitch.h
+index bd58d9d2e5653..6a413331572b6 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch.h
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch.h
+@@ -52,7 +52,7 @@ static inline void ice_eswitch_update_repr(struct ice_vsi *vsi) { }
+
+ static inline int ice_eswitch_configure(struct ice_pf *pf)
+ {
+- return -EOPNOTSUPP;
++ return 0;
+ }
+
+ static inline int ice_eswitch_rebuild(struct ice_pf *pf)
+diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
+index 4eb0599714f43..13cdb5ea594d2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
++++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
+@@ -641,6 +641,7 @@ ice_get_orom_civd_data(struct ice_hw *hw, enum ice_bank_select bank,
+ status = ice_read_flash_module(hw, bank, ICE_SR_1ST_OROM_BANK_PTR, 0,
+ orom_data, hw->flash.banks.orom_size);
+ if (status) {
++ vfree(orom_data);
+ ice_debug(hw, ICE_DBG_NVM, "Unable to read Option ROM data\n");
+ return status;
+ }
+diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
+index 66ea566488d12..59d5c467ea6e3 100644
+--- a/drivers/net/ethernet/intel/igc/igc_i225.c
++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
+@@ -156,8 +156,15 @@ void igc_release_swfw_sync_i225(struct igc_hw *hw, u16 mask)
+ {
+ u32 swfw_sync;
+
+- while (igc_get_hw_semaphore_i225(hw))
+- ; /* Empty */
++ /* Releasing the resource requires first getting the HW semaphore.
++ * If we fail to get the semaphore, there is nothing we can do,
++ * except log an error and quit. We are not allowed to hang here
++ * indefinitely, as it may cause denial of service or system crash.
++ */
++ if (igc_get_hw_semaphore_i225(hw)) {
++ hw_dbg("Failed to release SW_FW_SYNC.\n");
++ return;
++ }
+
+ swfw_sync = rd32(IGC_SW_FW_SYNC);
+ swfw_sync &= ~mask;
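[The release path above previously spun forever on while (igc_get_hw_semaphore_i225(hw)) ;, which can wedge the CPU if firmware never yields the semaphore; a single attempt plus an error log bounds the wait, as the added comment explains. A standalone sketch of the bounded form, with a stub that pretends the hardware is wedged:

#include <stdio.h>

/* Returns 0 on success, mirroring igc_get_hw_semaphore_i225(); this stub
 * pretends the hardware never grants the semaphore. */
static int get_hw_semaphore(void)
{
        return -1;
}

static void release_swfw_sync(unsigned int mask)
{
        /* old code spun forever: while (get_hw_semaphore()) ; */
        if (get_hw_semaphore()) {
                fprintf(stderr, "Failed to release SW_FW_SYNC.\n");
                return;
        }
        /* ...clear the mask bits and drop the semaphore... */
        (void)mask;
}

int main(void)
{
        release_swfw_sync(0x1);
        return 0;
}
]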
+diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c
+index 40dbf4b432345..6961f65d36b9a 100644
+--- a/drivers/net/ethernet/intel/igc/igc_phy.c
++++ b/drivers/net/ethernet/intel/igc/igc_phy.c
+@@ -581,7 +581,7 @@ static s32 igc_read_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 *data)
+ * the lower time out
+ */
+ for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
+- usleep_range(500, 1000);
++ udelay(50);
+ mdic = rd32(IGC_MDIC);
+ if (mdic & IGC_MDIC_READY)
+ break;
+@@ -638,7 +638,7 @@ static s32 igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data)
+ * the lower time out
+ */
+ for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
+- usleep_range(500, 1000);
++ udelay(50);
+ mdic = rd32(IGC_MDIC);
+ if (mdic & IGC_MDIC_READY)
+ break;
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 0d6e3215e98f5..653e9f1e35b5c 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -992,6 +992,17 @@ static void igc_ptp_time_restore(struct igc_adapter *adapter)
+ igc_ptp_write_i225(adapter, &ts);
+ }
+
++static void igc_ptm_stop(struct igc_adapter *adapter)
++{
++ struct igc_hw *hw = &adapter->hw;
++ u32 ctrl;
++
++ ctrl = rd32(IGC_PTM_CTRL);
++ ctrl &= ~IGC_PTM_CTRL_EN;
++
++ wr32(IGC_PTM_CTRL, ctrl);
++}
++
+ /**
+ * igc_ptp_suspend - Disable PTP work items and prepare for suspend
+ * @adapter: Board private structure
+@@ -1009,8 +1020,10 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
+ adapter->ptp_tx_skb = NULL;
+ clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+
+- if (pci_device_is_present(adapter->pdev))
++ if (pci_device_is_present(adapter->pdev)) {
+ igc_ptp_time_save(adapter);
++ igc_ptm_stop(adapter);
++ }
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index fd3ceb74620d5..a314040c1a6af 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -2508,6 +2508,8 @@ static void ocelot_port_set_mcast_flood(struct ocelot *ocelot, int port,
+ val = BIT(port);
+
+ ocelot_rmw_rix(ocelot, val, BIT(port), ANA_PGID_PGID, PGID_MC);
++ ocelot_rmw_rix(ocelot, val, BIT(port), ANA_PGID_PGID, PGID_MCIPV4);
++ ocelot_rmw_rix(ocelot, val, BIT(port), ANA_PGID_PGID, PGID_MCIPV6);
+ }
+
+ static void ocelot_port_set_bcast_flood(struct ocelot *ocelot, int port,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index a7ec9f4d46ced..d68ef72dcdde0 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -71,9 +71,9 @@ static int init_systime(void __iomem *ioaddr, u32 sec, u32 nsec)
+ writel(value, ioaddr + PTP_TCR);
+
+ /* wait for present system time initialize to complete */
+- return readl_poll_timeout(ioaddr + PTP_TCR, value,
++ return readl_poll_timeout_atomic(ioaddr + PTP_TCR, value,
+ !(value & PTP_TCR_TSINIT),
+- 10000, 100000);
++ 10, 100000);
+ }
+
+ static int config_addend(void __iomem *ioaddr, u32 addend)
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 359d16780dbbc..1bf8f7c35b7d2 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -712,11 +712,11 @@ static int vxlan_fdb_append(struct vxlan_fdb *f,
+
+ rd = kmalloc(sizeof(*rd), GFP_ATOMIC);
+ if (rd == NULL)
+- return -ENOBUFS;
++ return -ENOMEM;
+
+ if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
+ kfree(rd);
+- return -ENOBUFS;
++ return -ENOMEM;
+ }
+
+ rd->remote_ip = *ip;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 5d156e591b35c..f7961b22e0518 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -557,7 +557,7 @@ enum brcmf_sdio_frmtype {
+ BRCMF_SDIO_FT_SUB,
+ };
+
+-#define SDIOD_DRVSTR_KEY(chip, pmu) (((chip) << 16) | (pmu))
++#define SDIOD_DRVSTR_KEY(chip, pmu) (((unsigned int)(chip) << 16) | (pmu))
+
+ /* SDIO Pad drive strength to select value mappings */
+ struct sdiod_drive_str {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+index 8a22ee5816748..df85ebc6e1df0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/pci.c
+@@ -80,7 +80,7 @@ mt76x2e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ mt76_rmw_field(dev, 0x15a10, 0x1f << 16, 0x9);
+
+ /* RG_SSUSB_G1_CDR_BIC_LTR = 0xf */
+- mt76_rmw_field(dev, 0x15a0c, 0xf << 28, 0xf);
++ mt76_rmw_field(dev, 0x15a0c, 0xfU << 28, 0xf);
+
+ /* RG_SSUSB_CDR_BR_PE1D = 0x3 */
+ mt76_rmw_field(dev, 0x15c58, 0x3 << 6, 0x3);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 6215d50ed3e7d..10f7c79caac2d 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1363,6 +1363,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
+ warn_str, cur->nidl);
+ return -1;
+ }
++ if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
++ return NVME_NIDT_EUI64_LEN;
+ memcpy(ids->eui64, data + sizeof(*cur), NVME_NIDT_EUI64_LEN);
+ return NVME_NIDT_EUI64_LEN;
+ case NVME_NIDT_NGUID:
+@@ -1371,6 +1373,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
+ warn_str, cur->nidl);
+ return -1;
+ }
++ if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
++ return NVME_NIDT_NGUID_LEN;
+ memcpy(ids->nguid, data + sizeof(*cur), NVME_NIDT_NGUID_LEN);
+ return NVME_NIDT_NGUID_LEN;
+ case NVME_NIDT_UUID:
+@@ -1379,6 +1383,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
+ warn_str, cur->nidl);
+ return -1;
+ }
++ if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
++ return NVME_NIDT_UUID_LEN;
+ uuid_copy(&ids->uuid, data + sizeof(*cur));
+ return NVME_NIDT_UUID_LEN;
+ case NVME_NIDT_CSI:
+@@ -1475,12 +1481,18 @@ static int nvme_identify_ns(struct nvme_ctrl *ctrl, unsigned nsid,
+ if ((*id)->ncap == 0) /* namespace not allocated or attached */
+ goto out_free_id;
+
+- if (ctrl->vs >= NVME_VS(1, 1, 0) &&
+- !memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
+- memcpy(ids->eui64, (*id)->eui64, sizeof(ids->eui64));
+- if (ctrl->vs >= NVME_VS(1, 2, 0) &&
+- !memchr_inv(ids->nguid, 0, sizeof(ids->nguid)))
+- memcpy(ids->nguid, (*id)->nguid, sizeof(ids->nguid));
++
++ if (ctrl->quirks & NVME_QUIRK_BOGUS_NID) {
++ dev_info(ctrl->device,
++ "Ignoring bogus Namespace Identifiers\n");
++ } else {
++ if (ctrl->vs >= NVME_VS(1, 1, 0) &&
++ !memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
++ memcpy(ids->eui64, (*id)->eui64, sizeof(ids->eui64));
++ if (ctrl->vs >= NVME_VS(1, 2, 0) &&
++ !memchr_inv(ids->nguid, 0, sizeof(ids->nguid)))
++ memcpy(ids->nguid, (*id)->nguid, sizeof(ids->nguid));
++ }
+
+ return 0;
+
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 730cc80d84ff7..68c42e8311172 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -144,6 +144,11 @@ enum nvme_quirks {
+ * encoding the generation sequence number.
+ */
+ NVME_QUIRK_SKIP_CID_GEN = (1 << 17),
++
++ /*
++ * Reports garbage in the namespace identifiers (eui64, nguid, uuid).
++ */
++ NVME_QUIRK_BOGUS_NID = (1 << 18),
+ };
+
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 6a99ed6809158..e4b79bee62068 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3405,7 +3405,10 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+ { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */
+ .driver_data = NVME_QUIRK_IDENTIFY_CNS |
+- NVME_QUIRK_DISABLE_WRITE_ZEROES, },
++ NVME_QUIRK_DISABLE_WRITE_ZEROES |
++ NVME_QUIRK_BOGUS_NID, },
++ { PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */
++ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x126f, 0x2263), /* Silicon Motion unidentified */
+ .driver_data = NVME_QUIRK_NO_NS_DESC_LIST, },
+ { PCI_DEVICE(0x1bb1, 0x0100), /* Seagate Nytro Flash Storage */
+@@ -3443,6 +3446,10 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ { PCI_DEVICE(0x2646, 0x2263), /* KINGSTON A2000 NVMe SSD */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
++ { PCI_DEVICE(0x1e4B, 0x1002), /* MAXIO MAP1002 */
++ .driver_data = NVME_QUIRK_BOGUS_NID, },
++ { PCI_DEVICE(0x1e4B, 0x1202), /* MAXIO MAP1202 */
++ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0061),
+ .driver_data = NVME_QUIRK_DMA_ADDRESS_BITS_48, },
+ { PCI_DEVICE(PCI_VENDOR_ID_AMAZON, 0x0065),
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index 295cc7952d0ed..57d20cf3da7a3 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -398,6 +398,9 @@ validate_group(struct perf_event *event)
+ if (!validate_event(event->pmu, &fake_pmu, leader))
+ return -EINVAL;
+
++ if (event == leader)
++ return 0;
++
+ for_each_sibling_event(sibling, leader) {
+ if (!validate_event(event->pmu, &fake_pmu, sibling))
+ return -EINVAL;
+@@ -487,12 +490,7 @@ __hw_perf_event_init(struct perf_event *event)
+ local64_set(&hwc->period_left, hwc->sample_period);
+ }
+
+- if (event->group_leader != event) {
+- if (validate_group(event) != 0)
+- return -EINVAL;
+- }
+-
+- return 0;
++ return validate_group(event);
+ }
+
+ static int armpmu_event_init(struct perf_event *event)
+diff --git a/drivers/platform/x86/samsung-laptop.c b/drivers/platform/x86/samsung-laptop.c
+index c1d9ed9b7b672..19f6b456234f8 100644
+--- a/drivers/platform/x86/samsung-laptop.c
++++ b/drivers/platform/x86/samsung-laptop.c
+@@ -1121,8 +1121,6 @@ static void kbd_led_set(struct led_classdev *led_cdev,
+
+ if (value > samsung->kbd_led.max_brightness)
+ value = samsung->kbd_led.max_brightness;
+- else if (value < 0)
+- value = 0;
+
+ samsung->kbd_led_wk = value;
+ queue_work(samsung->led_workqueue, &samsung->kbd_led_work);
+diff --git a/drivers/reset/reset-rzg2l-usbphy-ctrl.c b/drivers/reset/reset-rzg2l-usbphy-ctrl.c
+index 1e83150388506..a8dde46063602 100644
+--- a/drivers/reset/reset-rzg2l-usbphy-ctrl.c
++++ b/drivers/reset/reset-rzg2l-usbphy-ctrl.c
+@@ -121,7 +121,9 @@ static int rzg2l_usbphy_ctrl_probe(struct platform_device *pdev)
+ return dev_err_probe(dev, PTR_ERR(priv->rstc),
+ "failed to get reset\n");
+
+- reset_control_deassert(priv->rstc);
++ error = reset_control_deassert(priv->rstc);
++ if (error)
++ return error;
+
+ priv->rcdev.ops = &rzg2l_usbphy_ctrl_reset_ops;
+ priv->rcdev.of_reset_n_cells = 1;
+diff --git a/drivers/reset/tegra/reset-bpmp.c b/drivers/reset/tegra/reset-bpmp.c
+index 24d3395964cc4..4c5bba52b1059 100644
+--- a/drivers/reset/tegra/reset-bpmp.c
++++ b/drivers/reset/tegra/reset-bpmp.c
+@@ -20,6 +20,7 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
+ struct tegra_bpmp *bpmp = to_tegra_bpmp(rstc);
+ struct mrq_reset_request request;
+ struct tegra_bpmp_message msg;
++ int err;
+
+ memset(&request, 0, sizeof(request));
+ request.cmd = command;
+@@ -30,7 +31,13 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
+ msg.tx.data = &request;
+ msg.tx.size = sizeof(request);
+
+- return tegra_bpmp_transfer(bpmp, &msg);
++ err = tegra_bpmp_transfer(bpmp, &msg);
++ if (err)
++ return err;
++ if (msg.rx.ret)
++ return -EINVAL;
++
++ return 0;
+ }
+
+ static int tegra_bpmp_reset_module(struct reset_controller_dev *rstc,
+diff --git a/drivers/scsi/bnx2i/bnx2i_hwi.c b/drivers/scsi/bnx2i/bnx2i_hwi.c
+index 5521469ce678b..e16327a4b4c96 100644
+--- a/drivers/scsi/bnx2i/bnx2i_hwi.c
++++ b/drivers/scsi/bnx2i/bnx2i_hwi.c
+@@ -1977,7 +1977,7 @@ static int bnx2i_process_new_cqes(struct bnx2i_conn *bnx2i_conn)
+ if (nopin->cq_req_sn != qp->cqe_exp_seq_sn)
+ break;
+
+- if (unlikely(test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx))) {
++ if (unlikely(test_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags))) {
+ if (nopin->op_code == ISCSI_OP_NOOP_IN &&
+ nopin->itt == (u16) RESERVED_ITT) {
+ printk(KERN_ALERT "bnx2i: Unsolicited "
+diff --git a/drivers/scsi/bnx2i/bnx2i_iscsi.c b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+index e21b053b4f3e1..a592ca8602f9f 100644
+--- a/drivers/scsi/bnx2i/bnx2i_iscsi.c
++++ b/drivers/scsi/bnx2i/bnx2i_iscsi.c
+@@ -1721,7 +1721,7 @@ static int bnx2i_tear_down_conn(struct bnx2i_hba *hba,
+ struct iscsi_conn *conn = ep->conn->cls_conn->dd_data;
+
+ /* Must suspend all rx queue activity for this ep */
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx);
++ set_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags);
+ }
+ /* CONN_DISCONNECT timeout may or may not be an issue depending
+ * on what transcribed in TCP layer, different targets behave
+diff --git a/drivers/scsi/cxgbi/libcxgbi.c b/drivers/scsi/cxgbi/libcxgbi.c
+index 8c7d4dda4cf29..4365d52c6430e 100644
+--- a/drivers/scsi/cxgbi/libcxgbi.c
++++ b/drivers/scsi/cxgbi/libcxgbi.c
+@@ -1634,11 +1634,11 @@ void cxgbi_conn_pdu_ready(struct cxgbi_sock *csk)
+ log_debug(1 << CXGBI_DBG_PDU_RX,
+ "csk 0x%p, conn 0x%p.\n", csk, conn);
+
+- if (unlikely(!conn || conn->suspend_rx)) {
++ if (unlikely(!conn || test_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags))) {
+ log_debug(1 << CXGBI_DBG_PDU_RX,
+- "csk 0x%p, conn 0x%p, id %d, suspend_rx %lu!\n",
++ "csk 0x%p, conn 0x%p, id %d, conn flags 0x%lx!\n",
+ csk, conn, conn ? conn->id : 0xFF,
+- conn ? conn->suspend_rx : 0xFF);
++ conn ? conn->flags : 0xFF);
+ return;
+ }
+
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 059dae8909ee5..f228d991038a2 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -678,7 +678,8 @@ __iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ struct iscsi_task *task;
+ itt_t itt;
+
+- if (session->state == ISCSI_STATE_TERMINATE)
++ if (session->state == ISCSI_STATE_TERMINATE ||
++ !test_bit(ISCSI_CONN_FLAG_BOUND, &conn->flags))
+ return NULL;
+
+ if (opcode == ISCSI_OP_LOGIN || opcode == ISCSI_OP_TEXT) {
+@@ -1392,8 +1393,8 @@ static bool iscsi_set_conn_failed(struct iscsi_conn *conn)
+ if (conn->stop_stage == 0)
+ session->state = ISCSI_STATE_FAILED;
+
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx);
++ set_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
++ set_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags);
+ return true;
+ }
+
+@@ -1454,7 +1455,7 @@ static int iscsi_xmit_task(struct iscsi_conn *conn, struct iscsi_task *task,
+ * Do this after dropping the extra ref because if this was a requeue
+ * it's removed from that list and cleanup_queued_task would miss it.
+ */
+- if (test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx)) {
++ if (test_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags)) {
+ /*
+ * Save the task and ref in case we weren't cleaning up this
+ * task and get woken up again.
+@@ -1532,7 +1533,7 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
+ int rc = 0;
+
+ spin_lock_bh(&conn->session->frwd_lock);
+- if (test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx)) {
++ if (test_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags)) {
+ ISCSI_DBG_SESSION(conn->session, "Tx suspended!\n");
+ spin_unlock_bh(&conn->session->frwd_lock);
+ return -ENODATA;
+@@ -1746,7 +1747,7 @@ int iscsi_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *sc)
+ goto fault;
+ }
+
+- if (test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx)) {
++ if (test_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags)) {
+ reason = FAILURE_SESSION_IN_RECOVERY;
+ sc->result = DID_REQUEUE << 16;
+ goto fault;
+@@ -1935,7 +1936,7 @@ static void fail_scsi_tasks(struct iscsi_conn *conn, u64 lun, int error)
+ void iscsi_suspend_queue(struct iscsi_conn *conn)
+ {
+ spin_lock_bh(&conn->session->frwd_lock);
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
++ set_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
+ spin_unlock_bh(&conn->session->frwd_lock);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_suspend_queue);
+@@ -1953,7 +1954,7 @@ void iscsi_suspend_tx(struct iscsi_conn *conn)
+ struct Scsi_Host *shost = conn->session->host;
+ struct iscsi_host *ihost = shost_priv(shost);
+
+- set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
++ set_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
+ if (ihost->workq)
+ flush_workqueue(ihost->workq);
+ }
+@@ -1961,7 +1962,7 @@ EXPORT_SYMBOL_GPL(iscsi_suspend_tx);
+
+ static void iscsi_start_tx(struct iscsi_conn *conn)
+ {
+- clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
++ clear_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
+ iscsi_conn_queue_work(conn);
+ }
+
+@@ -2214,6 +2215,8 @@ void iscsi_conn_unbind(struct iscsi_cls_conn *cls_conn, bool is_active)
+ iscsi_suspend_tx(conn);
+
+ spin_lock_bh(&session->frwd_lock);
++ clear_bit(ISCSI_CONN_FLAG_BOUND, &conn->flags);
++
+ if (!is_active) {
+ /*
+ * if logout timed out before userspace could even send a PDU
+@@ -3311,6 +3314,8 @@ int iscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ spin_lock_bh(&session->frwd_lock);
+ if (is_leading)
+ session->leadconn = conn;
++
++ set_bit(ISCSI_CONN_FLAG_BOUND, &conn->flags);
+ spin_unlock_bh(&session->frwd_lock);
+
+ /*
+@@ -3323,8 +3328,8 @@ int iscsi_conn_bind(struct iscsi_cls_session *cls_session,
+ /*
+ * Unblock xmitworker(), Login Phase will pass through.
+ */
+- clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_rx);
+- clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
++ clear_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags);
++ clear_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_conn_bind);
+diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
+index 2e9ffe3d1a55e..883005757ddb8 100644
+--- a/drivers/scsi/libiscsi_tcp.c
++++ b/drivers/scsi/libiscsi_tcp.c
+@@ -927,7 +927,7 @@ int iscsi_tcp_recv_skb(struct iscsi_conn *conn, struct sk_buff *skb,
+ */
+ conn->last_recv = jiffies;
+
+- if (unlikely(conn->suspend_rx)) {
++ if (unlikely(test_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags))) {
+ ISCSI_DBG_TCP(conn, "Rx suspended!\n");
+ *status = ISCSI_TCP_SUSPENDED;
+ return 0;
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 282ecb4e39bbd..e1fe989ad7b33 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -859,6 +859,37 @@ static int qedi_task_xmit(struct iscsi_task *task)
+ return qedi_iscsi_send_ioreq(task);
+ }
+
++static void qedi_offload_work(struct work_struct *work)
++{
++ struct qedi_endpoint *qedi_ep =
++ container_of(work, struct qedi_endpoint, offload_work);
++ struct qedi_ctx *qedi;
++ int wait_delay = 5 * HZ;
++ int ret;
++
++ qedi = qedi_ep->qedi;
++
++ ret = qedi_iscsi_offload_conn(qedi_ep);
++ if (ret) {
++ QEDI_ERR(&qedi->dbg_ctx,
++ "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
++ qedi_ep->iscsi_cid, qedi_ep, ret);
++ qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
++ return;
++ }
++
++ ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
++ (qedi_ep->state ==
++ EP_STATE_OFLDCONN_COMPL),
++ wait_delay);
++ if (ret <= 0 || qedi_ep->state != EP_STATE_OFLDCONN_COMPL) {
++ qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
++ QEDI_ERR(&qedi->dbg_ctx,
++ "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
++ qedi_ep->iscsi_cid, qedi_ep);
++ }
++}
++
+ static struct iscsi_endpoint *
+ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+ int non_blocking)
+@@ -907,6 +938,7 @@ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+ }
+ qedi_ep = ep->dd_data;
+ memset(qedi_ep, 0, sizeof(struct qedi_endpoint));
++ INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
+ qedi_ep->state = EP_STATE_IDLE;
+ qedi_ep->iscsi_cid = (u32)-1;
+ qedi_ep->qedi = qedi;
+@@ -1055,12 +1087,11 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ qedi_ep = ep->dd_data;
+ qedi = qedi_ep->qedi;
+
++ flush_work(&qedi_ep->offload_work);
++
+ if (qedi_ep->state == EP_STATE_OFLDCONN_START)
+ goto ep_exit_recover;
+
+- if (qedi_ep->state != EP_STATE_OFLDCONN_NONE)
+- flush_work(&qedi_ep->offload_work);
+-
+ if (qedi_ep->conn) {
+ qedi_conn = qedi_ep->conn;
+ abrt_conn = qedi_conn->abrt_conn;
+@@ -1234,37 +1265,6 @@ static int qedi_data_avail(struct qedi_ctx *qedi, u16 vlanid)
+ return rc;
+ }
+
+-static void qedi_offload_work(struct work_struct *work)
+-{
+- struct qedi_endpoint *qedi_ep =
+- container_of(work, struct qedi_endpoint, offload_work);
+- struct qedi_ctx *qedi;
+- int wait_delay = 5 * HZ;
+- int ret;
+-
+- qedi = qedi_ep->qedi;
+-
+- ret = qedi_iscsi_offload_conn(qedi_ep);
+- if (ret) {
+- QEDI_ERR(&qedi->dbg_ctx,
+- "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
+- qedi_ep->iscsi_cid, qedi_ep, ret);
+- qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+- return;
+- }
+-
+- ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
+- (qedi_ep->state ==
+- EP_STATE_OFLDCONN_COMPL),
+- wait_delay);
+- if ((ret <= 0) || (qedi_ep->state != EP_STATE_OFLDCONN_COMPL)) {
+- qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+- QEDI_ERR(&qedi->dbg_ctx,
+- "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
+- qedi_ep->iscsi_cid, qedi_ep);
+- }
+-}
+-
+ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+ {
+ struct qedi_ctx *qedi;
+@@ -1380,7 +1380,6 @@ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
+ qedi_ep->dst_addr, qedi_ep->dst_port);
+ }
+
+- INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
+ queue_work(qedi->offload_thread, &qedi_ep->offload_work);
+
+ ret = 0;
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index c7b1b2e8bb02f..bcdfcb25349ad 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -86,6 +86,9 @@ struct iscsi_internal {
+ struct transport_container session_cont;
+ };
+
++static DEFINE_IDR(iscsi_ep_idr);
++static DEFINE_MUTEX(iscsi_ep_idr_mutex);
++
+ static atomic_t iscsi_session_nr; /* sysfs session id for next new session */
+ static struct workqueue_struct *iscsi_eh_timer_workq;
+
+@@ -169,6 +172,11 @@ struct device_attribute dev_attr_##_prefix##_##_name = \
+ static void iscsi_endpoint_release(struct device *dev)
+ {
+ struct iscsi_endpoint *ep = iscsi_dev_to_endpoint(dev);
++
++ mutex_lock(&iscsi_ep_idr_mutex);
++ idr_remove(&iscsi_ep_idr, ep->id);
++ mutex_unlock(&iscsi_ep_idr_mutex);
++
+ kfree(ep);
+ }
+
+@@ -181,7 +189,7 @@ static ssize_t
+ show_ep_handle(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ struct iscsi_endpoint *ep = iscsi_dev_to_endpoint(dev);
+- return sysfs_emit(buf, "%llu\n", (unsigned long long) ep->id);
++ return sysfs_emit(buf, "%d\n", ep->id);
+ }
+ static ISCSI_ATTR(ep, handle, S_IRUGO, show_ep_handle, NULL);
+
+@@ -194,48 +202,32 @@ static struct attribute_group iscsi_endpoint_group = {
+ .attrs = iscsi_endpoint_attrs,
+ };
+
+-#define ISCSI_MAX_EPID -1
+-
+-static int iscsi_match_epid(struct device *dev, const void *data)
+-{
+- struct iscsi_endpoint *ep = iscsi_dev_to_endpoint(dev);
+- const uint64_t *epid = data;
+-
+- return *epid == ep->id;
+-}
+-
+ struct iscsi_endpoint *
+ iscsi_create_endpoint(int dd_size)
+ {
+- struct device *dev;
+ struct iscsi_endpoint *ep;
+- uint64_t id;
+- int err;
+-
+- for (id = 1; id < ISCSI_MAX_EPID; id++) {
+- dev = class_find_device(&iscsi_endpoint_class, NULL, &id,
+- iscsi_match_epid);
+- if (!dev)
+- break;
+- else
+- put_device(dev);
+- }
+- if (id == ISCSI_MAX_EPID) {
+- printk(KERN_ERR "Too many connections. Max supported %u\n",
+- ISCSI_MAX_EPID - 1);
+- return NULL;
+- }
++ int err, id;
+
+ ep = kzalloc(sizeof(*ep) + dd_size, GFP_KERNEL);
+ if (!ep)
+ return NULL;
+
++ mutex_lock(&iscsi_ep_idr_mutex);
++ id = idr_alloc(&iscsi_ep_idr, ep, 0, -1, GFP_NOIO);
++ if (id < 0) {
++ mutex_unlock(&iscsi_ep_idr_mutex);
++ printk(KERN_ERR "Could not allocate endpoint ID. Error %d.\n",
++ id);
++ goto free_ep;
++ }
++ mutex_unlock(&iscsi_ep_idr_mutex);
++
+ ep->id = id;
+ ep->dev.class = &iscsi_endpoint_class;
+- dev_set_name(&ep->dev, "ep-%llu", (unsigned long long) id);
++ dev_set_name(&ep->dev, "ep-%d", id);
+ err = device_register(&ep->dev);
+ if (err)
+- goto free_ep;
++ goto free_id;
+
+ err = sysfs_create_group(&ep->dev.kobj, &iscsi_endpoint_group);
+ if (err)
+@@ -249,6 +241,10 @@ unregister_dev:
+ device_unregister(&ep->dev);
+ return NULL;
+
++free_id:
++ mutex_lock(&iscsi_ep_idr_mutex);
++ idr_remove(&iscsi_ep_idr, id);
++ mutex_unlock(&iscsi_ep_idr_mutex);
+ free_ep:
+ kfree(ep);
+ return NULL;
+@@ -276,14 +272,17 @@ EXPORT_SYMBOL_GPL(iscsi_put_endpoint);
+ */
+ struct iscsi_endpoint *iscsi_lookup_endpoint(u64 handle)
+ {
+- struct device *dev;
++ struct iscsi_endpoint *ep;
+
+- dev = class_find_device(&iscsi_endpoint_class, NULL, &handle,
+- iscsi_match_epid);
+- if (!dev)
+- return NULL;
++ mutex_lock(&iscsi_ep_idr_mutex);
++ ep = idr_find(&iscsi_ep_idr, handle);
++ if (!ep)
++ goto unlock;
+
+- return iscsi_dev_to_endpoint(dev);
++ get_device(&ep->dev);
++unlock:
++ mutex_unlock(&iscsi_ep_idr_mutex);
++ return ep;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_lookup_endpoint);
+
+diff --git a/drivers/scsi/sr_ioctl.c b/drivers/scsi/sr_ioctl.c
+index ddd00efc48825..fbdb5124d7f7d 100644
+--- a/drivers/scsi/sr_ioctl.c
++++ b/drivers/scsi/sr_ioctl.c
+@@ -41,7 +41,7 @@ static int sr_read_tochdr(struct cdrom_device_info *cdi,
+ int result;
+ unsigned char *buffer;
+
+- buffer = kmalloc(32, GFP_KERNEL);
++ buffer = kzalloc(32, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+@@ -55,10 +55,13 @@ static int sr_read_tochdr(struct cdrom_device_info *cdi,
+ cgc.data_direction = DMA_FROM_DEVICE;
+
+ result = sr_do_ioctl(cd, &cgc);
++ if (result)
++ goto err;
+
+ tochdr->cdth_trk0 = buffer[2];
+ tochdr->cdth_trk1 = buffer[3];
+
++err:
+ kfree(buffer);
+ return result;
+ }
+@@ -71,7 +74,7 @@ static int sr_read_tocentry(struct cdrom_device_info *cdi,
+ int result;
+ unsigned char *buffer;
+
+- buffer = kmalloc(32, GFP_KERNEL);
++ buffer = kzalloc(32, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+@@ -86,6 +89,8 @@ static int sr_read_tocentry(struct cdrom_device_info *cdi,
+ cgc.data_direction = DMA_FROM_DEVICE;
+
+ result = sr_do_ioctl(cd, &cgc);
++ if (result)
++ goto err;
+
+ tocentry->cdte_ctrl = buffer[5] & 0xf;
+ tocentry->cdte_adr = buffer[5] >> 4;
+@@ -98,6 +103,7 @@ static int sr_read_tocentry(struct cdrom_device_info *cdi,
+ tocentry->cdte_addr.lba = (((((buffer[8] << 8) + buffer[9]) << 8)
+ + buffer[10]) << 8) + buffer[11];
+
++err:
+ kfree(buffer);
+ return result;
+ }
+@@ -384,7 +390,7 @@ int sr_get_mcn(struct cdrom_device_info *cdi, struct cdrom_mcn *mcn)
+ {
+ Scsi_CD *cd = cdi->handle;
+ struct packet_command cgc;
+- char *buffer = kmalloc(32, GFP_KERNEL);
++ char *buffer = kzalloc(32, GFP_KERNEL);
+ int result;
+
+ if (!buffer)
+@@ -400,10 +406,13 @@ int sr_get_mcn(struct cdrom_device_info *cdi, struct cdrom_mcn *mcn)
+ cgc.data_direction = DMA_FROM_DEVICE;
+ cgc.timeout = IOCTL_TIMEOUT;
+ result = sr_do_ioctl(cd, &cgc);
++ if (result)
++ goto err;
+
+ memcpy(mcn->medium_catalog_number, buffer + 9, 13);
+ mcn->medium_catalog_number[13] = 0;
+
++err:
+ kfree(buffer);
+ return result;
+ }
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index cb285d277201c..5696e52c76e9d 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -367,7 +367,7 @@ static void ufshcd_add_uic_command_trace(struct ufs_hba *hba,
+ static void ufshcd_add_command_trace(struct ufs_hba *hba, unsigned int tag,
+ enum ufs_trace_str_t str_t)
+ {
+- u64 lba;
++ u64 lba = 0;
+ u8 opcode = 0, group_id = 0;
+ u32 intr, doorbell;
+ struct ufshcd_lrb *lrbp = &hba->lrb[tag];
+@@ -384,7 +384,6 @@ static void ufshcd_add_command_trace(struct ufs_hba *hba, unsigned int tag,
+ return;
+
+ opcode = cmd->cmnd[0];
+- lba = scsi_get_lba(cmd);
+
+ if (opcode == READ_10 || opcode == WRITE_10) {
+ /*
+@@ -392,6 +391,7 @@ static void ufshcd_add_command_trace(struct ufs_hba *hba, unsigned int tag,
+ */
+ transfer_len =
+ be32_to_cpu(lrbp->ucd_req_ptr->sc.exp_data_transfer_len);
++ lba = scsi_get_lba(cmd);
+ if (opcode == WRITE_10)
+ group_id = lrbp->cmd->cmnd[6];
+ } else if (opcode == UNMAP) {
+@@ -399,6 +399,7 @@ static void ufshcd_add_command_trace(struct ufs_hba *hba, unsigned int tag,
+ * The number of Bytes to be unmapped beginning with the lba.
+ */
+ transfer_len = blk_rq_bytes(rq);
++ lba = scsi_get_lba(cmd);
+ }
+
+ intr = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 92d9610df1fd8..938017a60c8ed 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -277,6 +277,9 @@ static int atmel_qspi_find_mode(const struct spi_mem_op *op)
+ static bool atmel_qspi_supports_op(struct spi_mem *mem,
+ const struct spi_mem_op *op)
+ {
++ if (!spi_mem_default_supports_op(mem, op))
++ return false;
++
+ if (atmel_qspi_find_mode(op) < 0)
+ return false;
+
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 75f3560411386..b8ac24318cb3a 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1415,9 +1415,24 @@ static bool cqspi_supports_mem_op(struct spi_mem *mem,
+ all_false = !op->cmd.dtr && !op->addr.dtr && !op->dummy.dtr &&
+ !op->data.dtr;
+
+- /* Mixed DTR modes not supported. */
+- if (!(all_true || all_false))
++ if (all_true) {
++ /* Right now we only support 8-8-8 DTR mode. */
++ if (op->cmd.nbytes && op->cmd.buswidth != 8)
++ return false;
++ if (op->addr.nbytes && op->addr.buswidth != 8)
++ return false;
++ if (op->data.nbytes && op->data.buswidth != 8)
++ return false;
++ } else if (all_false) {
++ /* Only 1-1-X ops are supported without DTR */
++ if (op->cmd.nbytes && op->cmd.buswidth > 1)
++ return false;
++ if (op->addr.nbytes && op->addr.buswidth > 1)
++ return false;
++ } else {
++ /* Mixed DTR modes are not supported. */
+ return false;
++ }
+
+ if (all_true)
+ return spi_mem_dtr_supports_op(mem, op);
+diff --git a/drivers/spi/spi-mtk-nor.c b/drivers/spi/spi-mtk-nor.c
+index 5c93730615f8d..6d203477c04b1 100644
+--- a/drivers/spi/spi-mtk-nor.c
++++ b/drivers/spi/spi-mtk-nor.c
+@@ -909,7 +909,17 @@ static int __maybe_unused mtk_nor_suspend(struct device *dev)
+
+ static int __maybe_unused mtk_nor_resume(struct device *dev)
+ {
+- return pm_runtime_force_resume(dev);
++ struct spi_controller *ctlr = dev_get_drvdata(dev);
++ struct mtk_nor *sp = spi_controller_get_devdata(ctlr);
++ int ret;
++
++ ret = pm_runtime_force_resume(dev);
++ if (ret)
++ return ret;
++
++ mtk_nor_init(sp);
++
++ return 0;
+ }
+
+ static const struct dev_pm_ops mtk_nor_pm_ops = {
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 792fdcfdc6add..10aa0fb946138 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -946,7 +946,7 @@ cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ ssize_t rc;
+ struct inode *inode = file_inode(iocb->ki_filp);
+
+- if (iocb->ki_filp->f_flags & O_DIRECT)
++ if (iocb->ki_flags & IOCB_DIRECT)
+ return cifs_user_readv(iocb, iter);
+
+ rc = cifs_revalidate_mapping(inode);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index c3be6a541c8fc..532770c30415d 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -534,12 +534,19 @@ int cifs_reconnect(struct TCP_Server_Info *server, bool mark_smb_session)
+ {
+ /* If tcp session is not an dfs connection, then reconnect to last target server */
+ spin_lock(&cifs_tcp_ses_lock);
+- if (!server->is_dfs_conn || !server->origin_fullpath || !server->leaf_fullpath) {
++ if (!server->is_dfs_conn) {
+ spin_unlock(&cifs_tcp_ses_lock);
+ return __cifs_reconnect(server, mark_smb_session);
+ }
+ spin_unlock(&cifs_tcp_ses_lock);
+
++ mutex_lock(&server->refpath_lock);
++ if (!server->origin_fullpath || !server->leaf_fullpath) {
++ mutex_unlock(&server->refpath_lock);
++ return __cifs_reconnect(server, mark_smb_session);
++ }
++ mutex_unlock(&server->refpath_lock);
++
+ return reconnect_dfs_server(server);
+ }
+ #else
+@@ -3675,9 +3682,11 @@ static void setup_server_referral_paths(struct mount_ctx *mnt_ctx)
+ {
+ struct TCP_Server_Info *server = mnt_ctx->server;
+
++ mutex_lock(&server->refpath_lock);
+ server->origin_fullpath = mnt_ctx->origin_fullpath;
+ server->leaf_fullpath = mnt_ctx->leaf_fullpath;
+ server->current_fullpath = mnt_ctx->leaf_fullpath;
++ mutex_unlock(&server->refpath_lock);
+ mnt_ctx->origin_fullpath = mnt_ctx->leaf_fullpath = NULL;
+ }
+
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 30e040da4f096..956f8e5cf3e74 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -1422,12 +1422,14 @@ static int refresh_tcon(struct cifs_ses **sessions, struct cifs_tcon *tcon, bool
+ struct TCP_Server_Info *server = tcon->ses->server;
+
+ mutex_lock(&server->refpath_lock);
+- if (strcasecmp(server->leaf_fullpath, server->origin_fullpath))
+- __refresh_tcon(server->leaf_fullpath + 1, sessions, tcon, force_refresh);
++ if (server->origin_fullpath) {
++ if (server->leaf_fullpath && strcasecmp(server->leaf_fullpath,
++ server->origin_fullpath))
++ __refresh_tcon(server->leaf_fullpath + 1, sessions, tcon, force_refresh);
++ __refresh_tcon(server->origin_fullpath + 1, sessions, tcon, force_refresh);
++ }
+ mutex_unlock(&server->refpath_lock);
+
+- __refresh_tcon(server->origin_fullpath + 1, sessions, tcon, force_refresh);
+-
+ return 0;
+ }
+
+@@ -1530,11 +1532,14 @@ static void refresh_mounts(struct cifs_ses **sessions)
+ list_del_init(&tcon->ulist);
+
+ mutex_lock(&server->refpath_lock);
+- if (strcasecmp(server->leaf_fullpath, server->origin_fullpath))
+- __refresh_tcon(server->leaf_fullpath + 1, sessions, tcon, false);
++ if (server->origin_fullpath) {
++ if (server->leaf_fullpath && strcasecmp(server->leaf_fullpath,
++ server->origin_fullpath))
++ __refresh_tcon(server->leaf_fullpath + 1, sessions, tcon, false);
++ __refresh_tcon(server->origin_fullpath + 1, sessions, tcon, false);
++ }
+ mutex_unlock(&server->refpath_lock);
+
+- __refresh_tcon(server->origin_fullpath + 1, sessions, tcon, false);
+ cifs_put_tcon(tcon);
+ }
+ }
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index bcd3b9bf8069b..9b80693224957 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2271,6 +2271,10 @@ static inline int ext4_forced_shutdown(struct ext4_sb_info *sbi)
+ * Structure of a directory entry
+ */
+ #define EXT4_NAME_LEN 255
++/*
++ * Base length of the ext4 directory entry excluding the name length
++ */
++#define EXT4_BASE_DIR_LEN (sizeof(struct ext4_dir_entry_2) - EXT4_NAME_LEN)
+
+ struct ext4_dir_entry {
+ __le32 inode; /* Inode number */
+@@ -3030,7 +3034,7 @@ extern int ext4_inode_attach_jinode(struct inode *inode);
+ extern int ext4_can_truncate(struct inode *inode);
+ extern int ext4_truncate(struct inode *);
+ extern int ext4_break_layouts(struct inode *);
+-extern int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length);
++extern int ext4_punch_hole(struct file *file, loff_t offset, loff_t length);
+ extern void ext4_set_inode_flags(struct inode *, bool init);
+ extern int ext4_alloc_da_blocks(struct inode *inode);
+ extern void ext4_set_aops(struct inode *inode);
+@@ -3062,6 +3066,7 @@ int ext4_fileattr_set(struct user_namespace *mnt_userns,
+ struct dentry *dentry, struct fileattr *fa);
+ int ext4_fileattr_get(struct dentry *dentry, struct fileattr *fa);
+ extern void ext4_reset_inode_seed(struct inode *inode);
++int ext4_update_overhead(struct super_block *sb);
+
+ /* migrate.c */
+ extern int ext4_ext_migrate(struct inode *);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index c0f3f83e0c1b1..488d7c1de941e 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4501,9 +4501,9 @@ retry:
+ return ret > 0 ? ret2 : ret;
+ }
+
+-static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len);
++static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len);
+
+-static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len);
++static int ext4_insert_range(struct file *file, loff_t offset, loff_t len);
+
+ static long ext4_zero_range(struct file *file, loff_t offset,
+ loff_t len, int mode)
+@@ -4575,6 +4575,10 @@ static long ext4_zero_range(struct file *file, loff_t offset,
+ /* Wait all existing dio workers, newcomers will block on i_rwsem */
+ inode_dio_wait(inode);
+
++ ret = file_modified(file);
++ if (ret)
++ goto out_mutex;
++
+ /* Preallocate the range including the unaligned edges */
+ if (partial_begin || partial_end) {
+ ret = ext4_alloc_file_blocks(file,
+@@ -4691,7 +4695,7 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ return -EOPNOTSUPP;
+
+ if (mode & FALLOC_FL_PUNCH_HOLE) {
+- ret = ext4_punch_hole(inode, offset, len);
++ ret = ext4_punch_hole(file, offset, len);
+ goto exit;
+ }
+
+@@ -4700,12 +4704,12 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ goto exit;
+
+ if (mode & FALLOC_FL_COLLAPSE_RANGE) {
+- ret = ext4_collapse_range(inode, offset, len);
++ ret = ext4_collapse_range(file, offset, len);
+ goto exit;
+ }
+
+ if (mode & FALLOC_FL_INSERT_RANGE) {
+- ret = ext4_insert_range(inode, offset, len);
++ ret = ext4_insert_range(file, offset, len);
+ goto exit;
+ }
+
+@@ -4741,6 +4745,10 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ /* Wait all existing dio workers, newcomers will block on i_rwsem */
+ inode_dio_wait(inode);
+
++ ret = file_modified(file);
++ if (ret)
++ goto out;
++
+ ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size, flags);
+ if (ret)
+ goto out;
+@@ -5242,8 +5250,9 @@ out:
+ * This implements the fallocate's collapse range functionality for ext4
+ * Returns: 0 and non-zero on error.
+ */
+-static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
++static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len)
+ {
++ struct inode *inode = file_inode(file);
+ struct super_block *sb = inode->i_sb;
+ struct address_space *mapping = inode->i_mapping;
+ ext4_lblk_t punch_start, punch_stop;
+@@ -5295,6 +5304,10 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ /* Wait for existing dio to complete */
+ inode_dio_wait(inode);
+
++ ret = file_modified(file);
++ if (ret)
++ goto out_mutex;
++
+ /*
+ * Prevent page faults from reinstantiating pages we have released from
+ * page cache.
+@@ -5388,8 +5401,9 @@ out_mutex:
+ * by len bytes.
+ * Returns 0 on success, error otherwise.
+ */
+-static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
++static int ext4_insert_range(struct file *file, loff_t offset, loff_t len)
+ {
++ struct inode *inode = file_inode(file);
+ struct super_block *sb = inode->i_sb;
+ struct address_space *mapping = inode->i_mapping;
+ handle_t *handle;
+@@ -5446,6 +5460,10 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ /* Wait for existing dio to complete */
+ inode_dio_wait(inode);
+
++ ret = file_modified(file);
++ if (ret)
++ goto out_mutex;
++
+ /*
+ * Prevent page faults from reinstantiating pages we have released from
+ * page cache.
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 531a94f48637c..d8ff93a4b1b90 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3944,12 +3944,14 @@ int ext4_break_layouts(struct inode *inode)
+ * Returns: 0 on success or negative on failure
+ */
+
+-int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
++int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ {
++ struct inode *inode = file_inode(file);
+ struct super_block *sb = inode->i_sb;
+ ext4_lblk_t first_block, stop_block;
+ struct address_space *mapping = inode->i_mapping;
+- loff_t first_block_offset, last_block_offset;
++ loff_t first_block_offset, last_block_offset, max_length;
++ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ handle_t *handle;
+ unsigned int credits;
+ int ret = 0, ret2 = 0;
+@@ -3992,6 +3994,14 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ offset;
+ }
+
++ /*
++ * For punch hole the length + offset needs to be within one block
++ * before last range. Adjust the length if it goes beyond that limit.
++ */
++ max_length = sbi->s_bitmap_maxbytes - inode->i_sb->s_blocksize;
++ if (offset + length > max_length)
++ length = max_length - offset;
++
+ if (offset & (sb->s_blocksize - 1) ||
+ (offset + length) & (sb->s_blocksize - 1)) {
+ /*
+@@ -4007,6 +4017,10 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+ /* Wait all existing dio workers, newcomers will block on i_rwsem */
+ inode_dio_wait(inode);
+
++ ret = file_modified(file);
++ if (ret)
++ goto out_mutex;
++
+ /*
+ * Prevent page faults from reinstantiating pages we have released from
+ * page cache.
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index a8022c2c6a582..da0aefe67673d 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -1652,3 +1652,19 @@ long ext4_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ return ext4_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
+ }
+ #endif
++
++static void set_overhead(struct ext4_super_block *es, const void *arg)
++{
++ es->s_overhead_clusters = cpu_to_le32(*((unsigned long *) arg));
++}
++
++int ext4_update_overhead(struct super_block *sb)
++{
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++ if (sb_rdonly(sb) || sbi->s_overhead == 0 ||
++ sbi->s_overhead == le32_to_cpu(sbi->s_es->s_overhead_clusters))
++ return 0;
++
++ return ext4_update_superblocks_fn(sb, set_overhead, &sbi->s_overhead);
++}
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 39e223f7bf64d..f62260264f68c 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1466,10 +1466,10 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
+
+ de = (struct ext4_dir_entry_2 *)search_buf;
+ dlimit = search_buf + buf_size;
+- while ((char *) de < dlimit) {
++ while ((char *) de < dlimit - EXT4_BASE_DIR_LEN) {
+ /* this code is executed quadratically often */
+ /* do minimal checking `by hand' */
+- if ((char *) de + de->name_len <= dlimit &&
++ if (de->name + de->name_len <= dlimit &&
+ ext4_match(dir, fname, de)) {
+ /* found a match - just to be sure, do
+ * a full check */
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index 1d370364230e8..40b7d8485b445 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -134,8 +134,10 @@ static void ext4_finish_bio(struct bio *bio)
+ continue;
+ }
+ clear_buffer_async_write(bh);
+- if (bio->bi_status)
++ if (bio->bi_status) {
++ set_buffer_write_io_error(bh);
+ buffer_io_error(bh);
++ }
+ } while ((bh = bh->b_this_page) != head);
+ spin_unlock_irqrestore(&head->b_uptodate_lock, flags);
+ if (!under_io) {
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index bed29f96ccc7e..ba6530c2d2711 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -4156,9 +4156,11 @@ static int count_overhead(struct super_block *sb, ext4_group_t grp,
+ ext4_fsblk_t first_block, last_block, b;
+ ext4_group_t i, ngroups = ext4_get_groups_count(sb);
+ int s, j, count = 0;
++ int has_super = ext4_bg_has_super(sb, grp);
+
+ if (!ext4_has_feature_bigalloc(sb))
+- return (ext4_bg_has_super(sb, grp) + ext4_bg_num_gdb(sb, grp) +
++ return (has_super + ext4_bg_num_gdb(sb, grp) +
++ (has_super ? le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks) : 0) +
+ sbi->s_itb_per_group + 2);
+
+ first_block = le32_to_cpu(sbi->s_es->s_first_data_block) +
+@@ -5266,9 +5268,18 @@ no_journal:
+ * Get the # of file system overhead blocks from the
+ * superblock if present.
+ */
+- if (es->s_overhead_clusters)
+- sbi->s_overhead = le32_to_cpu(es->s_overhead_clusters);
+- else {
++ sbi->s_overhead = le32_to_cpu(es->s_overhead_clusters);
++ /* ignore the precalculated value if it is ridiculous */
++ if (sbi->s_overhead > ext4_blocks_count(es))
++ sbi->s_overhead = 0;
++ /*
++ * If the bigalloc feature is not enabled recalculating the
++ * overhead doesn't take long, so we might as well just redo
++ * it to make sure we are using the correct value.
++ */
++ if (!ext4_has_feature_bigalloc(sb))
++ sbi->s_overhead = 0;
++ if (sbi->s_overhead == 0) {
+ err = ext4_calculate_overhead(sb);
+ if (err)
+ goto failed_mount_wq;
+@@ -5586,6 +5597,8 @@ static int ext4_fill_super(struct super_block *sb, struct fs_context *fc)
+ ext4_msg(sb, KERN_INFO, "mounted filesystem with%s. "
+ "Quota mode: %s.", descr, ext4_quota_mode(sb));
+
++ /* Update the s_overhead_clusters if necessary */
++ ext4_update_overhead(sb);
+ return 0;
+
+ free_sbi:
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 3b34bb24d0af4..801ad9f4f2bef 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -923,15 +923,15 @@ static int read_rindex_entry(struct gfs2_inode *ip)
+ spin_lock_init(&rgd->rd_rsspin);
+ mutex_init(&rgd->rd_mutex);
+
+- error = compute_bitstructs(rgd);
+- if (error)
+- goto fail;
+-
+ error = gfs2_glock_get(sdp, rgd->rd_addr,
+ &gfs2_rgrp_glops, CREATE, &rgd->rd_gl);
+ if (error)
+ goto fail;
+
++ error = compute_bitstructs(rgd);
++ if (error)
++ goto fail_glock;
++
+ rgd->rd_rgl = (struct gfs2_rgrp_lvb *)rgd->rd_gl->gl_lksb.sb_lvbptr;
+ rgd->rd_flags &= ~GFS2_RDF_PREFERRED;
+ if (rgd->rd_data > sdp->sd_max_rg_data)
+@@ -945,6 +945,7 @@ static int read_rindex_entry(struct gfs2_inode *ip)
+ }
+
+ error = 0; /* someone else read in the rgrp; free it and ignore it */
++fail_glock:
+ gfs2_glock_put(rgd->rd_gl);
+
+ fail:
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index a7c6c7498be0b..ed85051b12754 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -206,7 +206,7 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = current->mm->mmap_base;
+- info.high_limit = TASK_SIZE;
++ info.high_limit = arch_get_mmap_end(addr);
+ info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ info.align_offset = 0;
+ return vm_unmapped_area(&info);
+@@ -222,7 +222,7 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
+ info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ info.length = len;
+ info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+- info.high_limit = current->mm->mmap_base;
++ info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
+ info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ info.align_offset = 0;
+ addr = vm_unmapped_area(&info);
+@@ -237,7 +237,7 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
+ VM_BUG_ON(addr != -ENOMEM);
+ info.flags = 0;
+ info.low_limit = current->mm->mmap_base;
+- info.high_limit = TASK_SIZE;
++ info.high_limit = arch_get_mmap_end(addr);
+ addr = vm_unmapped_area(&info);
+ }
+
+@@ -251,6 +251,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ struct hstate *h = hstate_file(file);
++ const unsigned long mmap_end = arch_get_mmap_end(addr);
+
+ if (len & ~huge_page_mask(h))
+ return -EINVAL;
+@@ -266,7 +267,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ if (addr) {
+ addr = ALIGN(addr, huge_page_size(h));
+ vma = find_vma(mm, addr);
+- if (TASK_SIZE - len >= addr &&
++ if (mmap_end - len >= addr &&
+ (!vma || addr + len <= vm_start_gap(vma)))
+ return addr;
+ }
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 619c67fd456dd..fbba8342172a0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2612,11 +2612,10 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
+ /* order with io_complete_rw_iopoll(), e.g. ->result updates */
+ if (!smp_load_acquire(&req->iopoll_completed))
+ break;
++ nr_events++;
+ if (unlikely(req->flags & REQ_F_CQE_SKIP))
+ continue;
+-
+ __io_fill_cqe(ctx, req->user_data, req->result, io_put_kbuf(req));
+- nr_events++;
+ }
+
+ if (unlikely(!nr_events))
+@@ -3622,8 +3621,10 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ iovec = NULL;
+ }
+ ret = io_rw_init_file(req, FMODE_READ);
+- if (unlikely(ret))
++ if (unlikely(ret)) {
++ kfree(iovec);
+ return ret;
++ }
+ req->result = iov_iter_count(&s->iter);
+
+ if (force_nonblock) {
+@@ -3742,8 +3743,10 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
+ iovec = NULL;
+ }
+ ret = io_rw_init_file(req, FMODE_WRITE);
+- if (unlikely(ret))
++ if (unlikely(ret)) {
++ kfree(iovec);
+ return ret;
++ }
+ req->result = iov_iter_count(&s->iter);
+
+ if (force_nonblock) {
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 5b9408e3b370d..ac7f067b7bddb 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -488,7 +488,6 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ jbd2_journal_wait_updates(journal);
+
+ commit_transaction->t_state = T_SWITCH;
+- write_unlock(&journal->j_state_lock);
+
+ J_ASSERT (atomic_read(&commit_transaction->t_outstanding_credits) <=
+ journal->j_max_transaction_buffers);
+@@ -508,6 +507,8 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ * has reserved. This is consistent with the existing behaviour
+ * that multiple jbd2_journal_get_write_access() calls to the same
+ * buffer are perfectly permissible.
++ * We use journal->j_state_lock here to serialize processing of
++ * t_reserved_list with eviction of buffers from journal_unmap_buffer().
+ */
+ while (commit_transaction->t_reserved_list) {
+ jh = commit_transaction->t_reserved_list;
+@@ -527,6 +528,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ jbd2_journal_refile_buffer(journal, jh);
+ }
+
++ write_unlock(&journal->j_state_lock);
+ /*
+ * Now try to drop any written-back buffers from the journal's
+ * checkpoint lists. We do this *before* commit because it potentially
+diff --git a/fs/namei.c b/fs/namei.c
+index 3f1829b3ab5b7..509657fdf4f56 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3673,18 +3673,14 @@ static struct dentry *filename_create(int dfd, struct filename *name,
+ {
+ struct dentry *dentry = ERR_PTR(-EEXIST);
+ struct qstr last;
++ bool want_dir = lookup_flags & LOOKUP_DIRECTORY;
++ unsigned int reval_flag = lookup_flags & LOOKUP_REVAL;
++ unsigned int create_flags = LOOKUP_CREATE | LOOKUP_EXCL;
+ int type;
+ int err2;
+ int error;
+- bool is_dir = (lookup_flags & LOOKUP_DIRECTORY);
+
+- /*
+- * Note that only LOOKUP_REVAL and LOOKUP_DIRECTORY matter here. Any
+- * other flags passed in are ignored!
+- */
+- lookup_flags &= LOOKUP_REVAL;
+-
+- error = filename_parentat(dfd, name, lookup_flags, path, &last, &type);
++ error = filename_parentat(dfd, name, reval_flag, path, &last, &type);
+ if (error)
+ return ERR_PTR(error);
+
+@@ -3698,11 +3694,13 @@ static struct dentry *filename_create(int dfd, struct filename *name,
+ /* don't fail immediately if it's r/o, at least try to report other errors */
+ err2 = mnt_want_write(path->mnt);
+ /*
+- * Do the final lookup.
++ * Do the final lookup. Suppress 'create' if there is a trailing
++ * '/', and a directory wasn't requested.
+ */
+- lookup_flags |= LOOKUP_CREATE | LOOKUP_EXCL;
++ if (last.name[last.len] && !want_dir)
++ create_flags = 0;
+ inode_lock_nested(path->dentry->d_inode, I_MUTEX_PARENT);
+- dentry = __lookup_hash(&last, path->dentry, lookup_flags);
++ dentry = __lookup_hash(&last, path->dentry, reval_flag | create_flags);
+ if (IS_ERR(dentry))
+ goto unlock;
+
+@@ -3716,7 +3714,7 @@ static struct dentry *filename_create(int dfd, struct filename *name,
+ * all is fine. Let's be bastards - you had / on the end, you've
+ * been asking for (non-existent) directory. -ENOENT for you.
+ */
+- if (unlikely(!is_dir && last.name[last.len])) {
++ if (unlikely(!create_flags)) {
+ error = -ENOENT;
+ goto fail;
+ }
+diff --git a/fs/posix_acl.c b/fs/posix_acl.c
+index 80acb6885cf90..962d32468eb48 100644
+--- a/fs/posix_acl.c
++++ b/fs/posix_acl.c
+@@ -759,9 +759,14 @@ static void posix_acl_fix_xattr_userns(
+ }
+
+ void posix_acl_fix_xattr_from_user(struct user_namespace *mnt_userns,
++ struct inode *inode,
+ void *value, size_t size)
+ {
+ struct user_namespace *user_ns = current_user_ns();
++
++ /* Leave ids untouched on non-idmapped mounts. */
++ if (no_idmapping(mnt_userns, i_user_ns(inode)))
++ mnt_userns = &init_user_ns;
+ if ((user_ns == &init_user_ns) && (mnt_userns == &init_user_ns))
+ return;
+ posix_acl_fix_xattr_userns(&init_user_ns, user_ns, mnt_userns, value,
+@@ -769,9 +774,14 @@ void posix_acl_fix_xattr_from_user(struct user_namespace *mnt_userns,
+ }
+
+ void posix_acl_fix_xattr_to_user(struct user_namespace *mnt_userns,
++ struct inode *inode,
+ void *value, size_t size)
+ {
+ struct user_namespace *user_ns = current_user_ns();
++
++ /* Leave ids untouched on non-idmapped mounts. */
++ if (no_idmapping(mnt_userns, i_user_ns(inode)))
++ mnt_userns = &init_user_ns;
+ if ((user_ns == &init_user_ns) && (mnt_userns == &init_user_ns))
+ return;
+ posix_acl_fix_xattr_userns(user_ns, &init_user_ns, mnt_userns, value,
+diff --git a/fs/stat.c b/fs/stat.c
+index 28d2020ba1f42..246d138ec0669 100644
+--- a/fs/stat.c
++++ b/fs/stat.c
+@@ -334,9 +334,6 @@ SYSCALL_DEFINE2(fstat, unsigned int, fd, struct __old_kernel_stat __user *, stat
+ # define choose_32_64(a,b) b
+ #endif
+
+-#define valid_dev(x) choose_32_64(old_valid_dev(x),true)
+-#define encode_dev(x) choose_32_64(old_encode_dev,new_encode_dev)(x)
+-
+ #ifndef INIT_STRUCT_STAT_PADDING
+ # define INIT_STRUCT_STAT_PADDING(st) memset(&st, 0, sizeof(st))
+ #endif
+@@ -345,7 +342,9 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
+ {
+ struct stat tmp;
+
+- if (!valid_dev(stat->dev) || !valid_dev(stat->rdev))
++ if (sizeof(tmp.st_dev) < 4 && !old_valid_dev(stat->dev))
++ return -EOVERFLOW;
++ if (sizeof(tmp.st_rdev) < 4 && !old_valid_dev(stat->rdev))
+ return -EOVERFLOW;
+ #if BITS_PER_LONG == 32
+ if (stat->size > MAX_NON_LFS)
+@@ -353,7 +352,7 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
+ #endif
+
+ INIT_STRUCT_STAT_PADDING(tmp);
+- tmp.st_dev = encode_dev(stat->dev);
++ tmp.st_dev = new_encode_dev(stat->dev);
+ tmp.st_ino = stat->ino;
+ if (sizeof(tmp.st_ino) < sizeof(stat->ino) && tmp.st_ino != stat->ino)
+ return -EOVERFLOW;
+@@ -363,7 +362,7 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
+ return -EOVERFLOW;
+ SET_UID(tmp.st_uid, from_kuid_munged(current_user_ns(), stat->uid));
+ SET_GID(tmp.st_gid, from_kgid_munged(current_user_ns(), stat->gid));
+- tmp.st_rdev = encode_dev(stat->rdev);
++ tmp.st_rdev = new_encode_dev(stat->rdev);
+ tmp.st_size = stat->size;
+ tmp.st_atime = stat->atime.tv_sec;
+ tmp.st_mtime = stat->mtime.tv_sec;
+@@ -644,11 +643,13 @@ static int cp_compat_stat(struct kstat *stat, struct compat_stat __user *ubuf)
+ {
+ struct compat_stat tmp;
+
+- if (!old_valid_dev(stat->dev) || !old_valid_dev(stat->rdev))
++ if (sizeof(tmp.st_dev) < 4 && !old_valid_dev(stat->dev))
++ return -EOVERFLOW;
++ if (sizeof(tmp.st_rdev) < 4 && !old_valid_dev(stat->rdev))
+ return -EOVERFLOW;
+
+ memset(&tmp, 0, sizeof(tmp));
+- tmp.st_dev = old_encode_dev(stat->dev);
++ tmp.st_dev = new_encode_dev(stat->dev);
+ tmp.st_ino = stat->ino;
+ if (sizeof(tmp.st_ino) < sizeof(stat->ino) && tmp.st_ino != stat->ino)
+ return -EOVERFLOW;
+@@ -658,7 +659,7 @@ static int cp_compat_stat(struct kstat *stat, struct compat_stat __user *ubuf)
+ return -EOVERFLOW;
+ SET_UID(tmp.st_uid, from_kuid_munged(current_user_ns(), stat->uid));
+ SET_GID(tmp.st_gid, from_kgid_munged(current_user_ns(), stat->gid));
+- tmp.st_rdev = old_encode_dev(stat->rdev);
++ tmp.st_rdev = new_encode_dev(stat->rdev);
+ if ((u64) stat->size > MAX_NON_LFS)
+ return -EOVERFLOW;
+ tmp.st_size = stat->size;
+diff --git a/fs/xattr.c b/fs/xattr.c
+index 5c8c5175b385c..998045165916e 100644
+--- a/fs/xattr.c
++++ b/fs/xattr.c
+@@ -569,7 +569,8 @@ setxattr(struct user_namespace *mnt_userns, struct dentry *d,
+ }
+ if ((strcmp(kname, XATTR_NAME_POSIX_ACL_ACCESS) == 0) ||
+ (strcmp(kname, XATTR_NAME_POSIX_ACL_DEFAULT) == 0))
+- posix_acl_fix_xattr_from_user(mnt_userns, kvalue, size);
++ posix_acl_fix_xattr_from_user(mnt_userns, d_inode(d),
++ kvalue, size);
+ }
+
+ error = vfs_setxattr(mnt_userns, d, kname, kvalue, size, flags);
+@@ -667,7 +668,8 @@ getxattr(struct user_namespace *mnt_userns, struct dentry *d,
+ if (error > 0) {
+ if ((strcmp(kname, XATTR_NAME_POSIX_ACL_ACCESS) == 0) ||
+ (strcmp(kname, XATTR_NAME_POSIX_ACL_DEFAULT) == 0))
+- posix_acl_fix_xattr_to_user(mnt_userns, kvalue, error);
++ posix_acl_fix_xattr_to_user(mnt_userns, d_inode(d),
++ kvalue, error);
+ if (size && copy_to_user(value, kvalue, error))
+ error = -EFAULT;
+ } else if (error == -ERANGE && size >= XATTR_SIZE_MAX) {
+diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
+index 2ad71cc90b37d..92b10e67d5f87 100644
+--- a/include/linux/etherdevice.h
++++ b/include/linux/etherdevice.h
+@@ -134,7 +134,7 @@ static inline bool is_multicast_ether_addr(const u8 *addr)
+ #endif
+ }
+
+-static inline bool is_multicast_ether_addr_64bits(const u8 addr[6+2])
++static inline bool is_multicast_ether_addr_64bits(const u8 *addr)
+ {
+ #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
+ #ifdef __BIG_ENDIAN
+@@ -372,8 +372,7 @@ static inline bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
+ * Please note that alignment of addr1 & addr2 are only guaranteed to be 16 bits.
+ */
+
+-static inline bool ether_addr_equal_64bits(const u8 addr1[6+2],
+- const u8 addr2[6+2])
++static inline bool ether_addr_equal_64bits(const u8 *addr1, const u8 *addr2)
+ {
+ #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
+ u64 fold = (*(const u64 *)addr1) ^ (*(const u64 *)addr2);
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 0abbd685703b9..a1fcf57493479 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -999,6 +999,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
+ }
+
+ void mem_cgroup_flush_stats(void);
++void mem_cgroup_flush_stats_delayed(void);
+
+ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
+ int val);
+@@ -1442,6 +1443,10 @@ static inline void mem_cgroup_flush_stats(void)
+ {
+ }
+
++static inline void mem_cgroup_flush_stats_delayed(void)
++{
++}
++
+ static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
+ enum node_stat_item idx, int val)
+ {
+diff --git a/include/linux/posix_acl_xattr.h b/include/linux/posix_acl_xattr.h
+index 060e8d2031814..1766e1de69560 100644
+--- a/include/linux/posix_acl_xattr.h
++++ b/include/linux/posix_acl_xattr.h
+@@ -34,15 +34,19 @@ posix_acl_xattr_count(size_t size)
+
+ #ifdef CONFIG_FS_POSIX_ACL
+ void posix_acl_fix_xattr_from_user(struct user_namespace *mnt_userns,
++ struct inode *inode,
+ void *value, size_t size);
+ void posix_acl_fix_xattr_to_user(struct user_namespace *mnt_userns,
++ struct inode *inode,
+ void *value, size_t size);
+ #else
+ static inline void posix_acl_fix_xattr_from_user(struct user_namespace *mnt_userns,
++ struct inode *inode,
+ void *value, size_t size)
+ {
+ }
+ static inline void posix_acl_fix_xattr_to_user(struct user_namespace *mnt_userns,
++ struct inode *inode,
+ void *value, size_t size)
+ {
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index e806326eca723..4b4cc633b2665 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1440,6 +1440,7 @@ struct task_struct {
+ int pagefault_disabled;
+ #ifdef CONFIG_MMU
+ struct task_struct *oom_reaper_list;
++ struct timer_list oom_reaper_timer;
+ #endif
+ #ifdef CONFIG_VMAP_STACK
+ struct vm_struct *stack_vm_area;
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index aa5f09ca5bcf4..0ee0515d5a175 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -135,6 +135,14 @@ static inline void mm_update_next_owner(struct mm_struct *mm)
+ #endif /* CONFIG_MEMCG */
+
+ #ifdef CONFIG_MMU
++#ifndef arch_get_mmap_end
++#define arch_get_mmap_end(addr) (TASK_SIZE)
++#endif
++
++#ifndef arch_get_mmap_base
++#define arch_get_mmap_base(addr, base) (base)
++#endif
++
+ extern void arch_pick_mmap_layout(struct mm_struct *mm,
+ struct rlimit *rlim_stack);
+ extern unsigned long
+diff --git a/include/net/esp.h b/include/net/esp.h
+index 90cd02ff77ef6..9c5637d41d951 100644
+--- a/include/net/esp.h
++++ b/include/net/esp.h
+@@ -4,8 +4,6 @@
+
+ #include <linux/skbuff.h>
+
+-#define ESP_SKB_FRAG_MAXSIZE (PAGE_SIZE << SKB_FRAG_PAGE_ORDER)
+-
+ struct ip_esp_hdr;
+
+ static inline struct ip_esp_hdr *ip_esp_hdr(const struct sk_buff *skb)
+diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h
+index 6bd7e5a85ce76..ff82983b7ab41 100644
+--- a/include/net/netns/ipv6.h
++++ b/include/net/netns/ipv6.h
+@@ -75,8 +75,8 @@ struct netns_ipv6 {
+ struct list_head fib6_walkers;
+ rwlock_t fib6_walker_lock;
+ spinlock_t fib6_gc_lock;
+- unsigned int ip6_rt_gc_expire;
+- unsigned long ip6_rt_last_gc;
++ atomic_t ip6_rt_gc_expire;
++ unsigned long ip6_rt_last_gc;
+ unsigned char flowlabel_has_excl;
+ #ifdef CONFIG_IPV6_MULTIPLE_TABLES
+ bool fib6_has_custom_rules;
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index 4ee233e5a6ffa..d1e282f0d6f18 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -52,8 +52,10 @@ enum {
+
+ #define ISID_SIZE 6
+
+-/* Connection suspend "bit" */
+-#define ISCSI_SUSPEND_BIT 1
++/* Connection flags */
++#define ISCSI_CONN_FLAG_SUSPEND_TX BIT(0)
++#define ISCSI_CONN_FLAG_SUSPEND_RX BIT(1)
++#define ISCSI_CONN_FLAG_BOUND BIT(2)
+
+ #define ISCSI_ITT_MASK 0x1fff
+ #define ISCSI_TOTAL_CMDS_MAX 4096
+@@ -199,8 +201,7 @@ struct iscsi_conn {
+ struct list_head cmdqueue; /* data-path cmd queue */
+ struct list_head requeue; /* tasks needing another run */
+ struct work_struct xmitwork; /* per-conn. xmit workqueue */
+- unsigned long suspend_tx; /* suspend Tx */
+- unsigned long suspend_rx; /* suspend Rx */
++ unsigned long flags; /* ISCSI_CONN_FLAGs */
+
+ /* negotiated params */
+ unsigned max_recv_dlength; /* initiator_max_recv_dsl*/
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index 037c77fb5dc55..3ecf9702287be 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -296,7 +296,7 @@ extern void iscsi_host_for_each_session(struct Scsi_Host *shost,
+ struct iscsi_endpoint {
+ void *dd_data; /* LLD private data */
+ struct device dev;
+- uint64_t id;
++ int id;
+ struct iscsi_cls_conn *conn;
+ };
+
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 0ee9ffceb9764..baa0fe350246f 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6352,7 +6352,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+ again:
+ mutex_lock(&event->mmap_mutex);
+ if (event->rb) {
+- if (event->rb->nr_pages != nr_pages) {
++ if (data_page_nr(event->rb) != nr_pages) {
+ ret = -EINVAL;
+ goto unlock;
+ }
+diff --git a/kernel/events/internal.h b/kernel/events/internal.h
+index 082832738c8fd..5150d5f84c033 100644
+--- a/kernel/events/internal.h
++++ b/kernel/events/internal.h
+@@ -116,6 +116,11 @@ static inline int page_order(struct perf_buffer *rb)
+ }
+ #endif
+
++static inline int data_page_nr(struct perf_buffer *rb)
++{
++ return rb->nr_pages << page_order(rb);
++}
++
+ static inline unsigned long perf_data_size(struct perf_buffer *rb)
+ {
+ return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 52868716ec358..fb35b926024ca 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -859,11 +859,6 @@ void rb_free(struct perf_buffer *rb)
+ }
+
+ #else
+-static int data_page_nr(struct perf_buffer *rb)
+-{
+- return rb->nr_pages << page_order(rb);
+-}
+-
+ static struct page *
+ __perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
+ {
+diff --git a/kernel/irq_work.c b/kernel/irq_work.c
+index f7df715ec28e6..7afa40fe5cc43 100644
+--- a/kernel/irq_work.c
++++ b/kernel/irq_work.c
+@@ -137,7 +137,7 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
+ if (!irq_work_claim(work))
+ return false;
+
+- kasan_record_aux_stack(work);
++ kasan_record_aux_stack_noalloc(work);
+
+ preempt_disable();
+ if (cpu != smp_processor_id()) {
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index cddcf2f4f5251..2f461f0592789 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3776,11 +3776,11 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
+
+ se->avg.runnable_sum = se->avg.runnable_avg * divider;
+
+- se->avg.load_sum = divider;
+- if (se_weight(se)) {
+- se->avg.load_sum =
+- div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
+- }
++ se->avg.load_sum = se->avg.load_avg * divider;
++ if (se_weight(se) < se->avg.load_sum)
++ se->avg.load_sum = div_u64(se->avg.load_sum, se_weight(se));
++ else
++ se->avg.load_sum = 1;
+
+ enqueue_load_avg(cfs_rq, se);
+ cfs_rq->avg.util_avg += se->avg.util_avg;
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 88ca87435e3da..32e1669d5b649 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -207,6 +207,8 @@ static void *xas_descend(struct xa_state *xas, struct xa_node *node)
+ if (xa_is_sibling(entry)) {
+ offset = xa_to_sibling(entry);
+ entry = xa_entry(xas->xa, node, offset);
++ if (node->shift && xa_is_node(entry))
++ entry = XA_RETRY_ENTRY;
+ }
+
+ xas->xa_offset = offset;
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 9b89a340a6629..563100f2a693e 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -628,6 +628,9 @@ static DECLARE_DEFERRABLE_WORK(stats_flush_dwork, flush_memcg_stats_dwork);
+ static DEFINE_SPINLOCK(stats_flush_lock);
+ static DEFINE_PER_CPU(unsigned int, stats_updates);
+ static atomic_t stats_flush_threshold = ATOMIC_INIT(0);
++static u64 flush_next_time;
++
++#define FLUSH_TIME (2UL*HZ)
+
+ static inline void memcg_rstat_updated(struct mem_cgroup *memcg, int val)
+ {
+@@ -649,6 +652,7 @@ static void __mem_cgroup_flush_stats(void)
+ if (!spin_trylock_irqsave(&stats_flush_lock, flag))
+ return;
+
++ flush_next_time = jiffies_64 + 2*FLUSH_TIME;
+ cgroup_rstat_flush_irqsafe(root_mem_cgroup->css.cgroup);
+ atomic_set(&stats_flush_threshold, 0);
+ spin_unlock_irqrestore(&stats_flush_lock, flag);
+@@ -660,10 +664,16 @@ void mem_cgroup_flush_stats(void)
+ __mem_cgroup_flush_stats();
+ }
+
++void mem_cgroup_flush_stats_delayed(void)
++{
++ if (time_after64(jiffies_64, flush_next_time))
++ mem_cgroup_flush_stats();
++}
++
+ static void flush_memcg_stats_dwork(struct work_struct *w)
+ {
+ __mem_cgroup_flush_stats();
+- queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
++ queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME);
+ }
+
+ /**
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 97a9ed8f87a96..15dcedbc17306 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1779,6 +1779,19 @@ try_again:
+ }
+
+ if (PageTransHuge(hpage)) {
++ /*
++ * Bail out before SetPageHasHWPoisoned() if hpage is
++ * huge_zero_page, although PG_has_hwpoisoned is not
++ * checked in set_huge_zero_page().
++ *
++ * TODO: Handle memory failure of huge_zero_page thoroughly.
++ */
++ if (is_huge_zero_page(hpage)) {
++ action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
++ res = -EBUSY;
++ goto unlock_mutex;
++ }
++
+ /*
+ * The flag must be set after the refcount is bumped
+ * otherwise it may race with THP split.
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 18875c216f8db..eb39f17cb86eb 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2119,14 +2119,6 @@ unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
+ return addr;
+ }
+
+-#ifndef arch_get_mmap_end
+-#define arch_get_mmap_end(addr) (TASK_SIZE)
+-#endif
+-
+-#ifndef arch_get_mmap_base
+-#define arch_get_mmap_base(addr, base) (base)
+-#endif
+-
+ /* Get an address range which is currently unmapped.
+ * For shmat() with addr=0.
+ *
+diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
+index 459d195d2ff64..f45ff1b7626a6 100644
+--- a/mm/mmu_notifier.c
++++ b/mm/mmu_notifier.c
+@@ -1036,6 +1036,18 @@ int mmu_interval_notifier_insert_locked(
+ }
+ EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert_locked);
+
++static bool
++mmu_interval_seq_released(struct mmu_notifier_subscriptions *subscriptions,
++ unsigned long seq)
++{
++ bool ret;
++
++ spin_lock(&subscriptions->lock);
++ ret = subscriptions->invalidate_seq != seq;
++ spin_unlock(&subscriptions->lock);
++ return ret;
++}
++
+ /**
+ * mmu_interval_notifier_remove - Remove a interval notifier
+ * @interval_sub: Interval subscription to unregister
+@@ -1083,7 +1095,7 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *interval_sub)
+ lock_map_release(&__mmu_notifier_invalidate_range_start_map);
+ if (seq)
+ wait_event(subscriptions->wq,
+- READ_ONCE(subscriptions->invalidate_seq) != seq);
++ mmu_interval_seq_released(subscriptions, seq));
+
+ /* pairs with mmgrab in mmu_interval_notifier_insert() */
+ mmdrop(mm);
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 832fb330376ef..a6bc4a6786ece 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -635,7 +635,7 @@ done:
+ */
+ set_bit(MMF_OOM_SKIP, &mm->flags);
+
+- /* Drop a reference taken by wake_oom_reaper */
++ /* Drop a reference taken by queue_oom_reaper */
+ put_task_struct(tsk);
+ }
+
+@@ -647,12 +647,12 @@ static int oom_reaper(void *unused)
+ struct task_struct *tsk = NULL;
+
+ wait_event_freezable(oom_reaper_wait, oom_reaper_list != NULL);
+- spin_lock(&oom_reaper_lock);
++ spin_lock_irq(&oom_reaper_lock);
+ if (oom_reaper_list != NULL) {
+ tsk = oom_reaper_list;
+ oom_reaper_list = tsk->oom_reaper_list;
+ }
+- spin_unlock(&oom_reaper_lock);
++ spin_unlock_irq(&oom_reaper_lock);
+
+ if (tsk)
+ oom_reap_task(tsk);
+@@ -661,22 +661,48 @@ static int oom_reaper(void *unused)
+ return 0;
+ }
+
+-static void wake_oom_reaper(struct task_struct *tsk)
++static void wake_oom_reaper(struct timer_list *timer)
+ {
+- /* mm is already queued? */
+- if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
+- return;
++ struct task_struct *tsk = container_of(timer, struct task_struct,
++ oom_reaper_timer);
++ struct mm_struct *mm = tsk->signal->oom_mm;
++ unsigned long flags;
+
+- get_task_struct(tsk);
++ /* The victim managed to terminate on its own - see exit_mmap */
++ if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
++ put_task_struct(tsk);
++ return;
++ }
+
+- spin_lock(&oom_reaper_lock);
++ spin_lock_irqsave(&oom_reaper_lock, flags);
+ tsk->oom_reaper_list = oom_reaper_list;
+ oom_reaper_list = tsk;
+- spin_unlock(&oom_reaper_lock);
++ spin_unlock_irqrestore(&oom_reaper_lock, flags);
+ trace_wake_reaper(tsk->pid);
+ wake_up(&oom_reaper_wait);
+ }
+
++/*
++ * Give the OOM victim time to exit naturally before invoking the OOM reaper.
++ * The timer's timeout is arbitrary... the longer it is, the longer the worst
++ * case scenario for the OOM can take. If it is too small, the oom_reaper can
++ * get in the way and release resources needed by the process exit path.
++ * e.g. The futex robust list can sit in Anon|Private memory that gets reaped
++ * before the exit path is able to wake the futex waiters.
++ */
++#define OOM_REAPER_DELAY (2*HZ)
++static void queue_oom_reaper(struct task_struct *tsk)
++{
++ /* mm is already queued? */
++ if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
++ return;
++
++ get_task_struct(tsk);
++ timer_setup(&tsk->oom_reaper_timer, wake_oom_reaper, 0);
++ tsk->oom_reaper_timer.expires = jiffies + OOM_REAPER_DELAY;
++ add_timer(&tsk->oom_reaper_timer);
++}
++
+ static int __init oom_init(void)
+ {
+ oom_reaper_th = kthread_run(oom_reaper, NULL, "oom_reaper");
+@@ -684,7 +710,7 @@ static int __init oom_init(void)
+ }
+ subsys_initcall(oom_init)
+ #else
+-static inline void wake_oom_reaper(struct task_struct *tsk)
++static inline void queue_oom_reaper(struct task_struct *tsk)
+ {
+ }
+ #endif /* CONFIG_MMU */
+@@ -935,7 +961,7 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
+ rcu_read_unlock();
+
+ if (can_oom_reap)
+- wake_oom_reaper(victim);
++ queue_oom_reaper(victim);
+
+ mmdrop(mm);
+ put_task_struct(victim);
+@@ -971,7 +997,7 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
+ task_lock(victim);
+ if (task_will_free_mem(victim)) {
+ mark_oom_victim(victim);
+- wake_oom_reaper(victim);
++ queue_oom_reaper(victim);
+ task_unlock(victim);
+ put_task_struct(victim);
+ return;
+@@ -1070,7 +1096,7 @@ bool out_of_memory(struct oom_control *oc)
+ */
+ if (task_will_free_mem(current)) {
+ mark_oom_victim(current);
+- wake_oom_reaper(current);
++ queue_oom_reaper(current);
+ return true;
+ }
+
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 0780c2a57ff11..885e5adb0168d 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -72,12 +72,15 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+ _dst_pte = pte_mkdirty(_dst_pte);
+ if (page_in_cache && !vm_shared)
+ writable = false;
+- if (writable) {
+- if (wp_copy)
+- _dst_pte = pte_mkuffd_wp(_dst_pte);
+- else
+- _dst_pte = pte_mkwrite(_dst_pte);
+- }
++
++ /*
++ * Always mark a PTE as write-protected when needed, regardless of
++ * VM_WRITE, which the user might change.
++ */
++ if (wp_copy)
++ _dst_pte = pte_mkuffd_wp(_dst_pte);
++ else if (writable)
++ _dst_pte = pte_mkwrite(_dst_pte);
+
+ dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+diff --git a/mm/workingset.c b/mm/workingset.c
+index 8c03afe1d67cb..f66a18d1deaad 100644
+--- a/mm/workingset.c
++++ b/mm/workingset.c
+@@ -354,7 +354,7 @@ void workingset_refault(struct folio *folio, void *shadow)
+
+ mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
+
+- mem_cgroup_flush_stats();
++ mem_cgroup_flush_stats_delayed();
+ /*
+ * Compare the distance to the existing workingset size. We
+ * don't activate pages that couldn't stay resident even if
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index 5bce7c66c1219..8c753dcefe7fc 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -866,6 +866,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ struct canfd_frame *cf;
+ int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
+ int wait_tx_done = (so->opt.flags & CAN_ISOTP_WAIT_TX_DONE) ? 1 : 0;
++ s64 hrtimer_sec = 0;
+ int off;
+ int err;
+
+@@ -964,7 +965,9 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ isotp_create_fframe(cf, so, ae);
+
+ /* start timeout for FC */
+- hrtimer_start(&so->txtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT);
++ hrtimer_sec = 1;
++ hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
++ HRTIMER_MODE_REL_SOFT);
+ }
+
+ /* send the first or only CAN frame */
+@@ -977,6 +980,11 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ if (err) {
+ pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
+ __func__, ERR_PTR(err));
++
++ /* no transmission -> no timeout monitoring */
++ if (hrtimer_sec)
++ hrtimer_cancel(&so->txtimer);
++
+ goto err_out_drop;
+ }
+
+diff --git a/net/dsa/tag_hellcreek.c b/net/dsa/tag_hellcreek.c
+index f64b805303cd7..eb204ad36eeec 100644
+--- a/net/dsa/tag_hellcreek.c
++++ b/net/dsa/tag_hellcreek.c
+@@ -21,6 +21,14 @@ static struct sk_buff *hellcreek_xmit(struct sk_buff *skb,
+ struct dsa_port *dp = dsa_slave_to_port(dev);
+ u8 *tag;
+
++ /* Calculate checksums (if required) before adding the trailer tag to
++ * avoid including it in calculations. That would lead to wrong
++ * checksums after the switch strips the tag.
++ */
++ if (skb->ip_summed == CHECKSUM_PARTIAL &&
++ skb_checksum_help(skb))
++ return NULL;
++
+ /* Tag encoding */
+ tag = skb_put(skb, HELLCREEK_TAG_LEN);
+ *tag = BIT(dp->index);
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 70e6c87fbe3df..d747166bb291c 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -446,7 +446,6 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ struct page *page;
+ struct sk_buff *trailer;
+ int tailen = esp->tailen;
+- unsigned int allocsz;
+
+ /* this is non-NULL only with TCP/UDP Encapsulation */
+ if (x->encap) {
+@@ -456,8 +455,8 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ return err;
+ }
+
+- allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
+- if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++ if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
++ ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
+ goto cow;
+
+ if (!skb_cloned(skb)) {
+diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
+index 55d604c9b3b3e..f2120e92caf15 100644
+--- a/net/ipv6/esp6.c
++++ b/net/ipv6/esp6.c
+@@ -482,7 +482,6 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ struct page *page;
+ struct sk_buff *trailer;
+ int tailen = esp->tailen;
+- unsigned int allocsz;
+
+ if (x->encap) {
+ int err = esp6_output_encap(x, skb, esp);
+@@ -491,8 +490,8 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
+ return err;
+ }
+
+- allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
+- if (allocsz > ESP_SKB_FRAG_MAXSIZE)
++ if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
++ ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
+ goto cow;
+
+ if (!skb_cloned(skb)) {
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 8753e9cec3264..9762367361463 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -733,9 +733,6 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ else
+ fl6->daddr = tunnel->parms.raddr;
+
+- if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
+- return -ENOMEM;
+-
+ /* Push GRE header. */
+ protocol = (dev->type == ARPHRD_ETHER) ? htons(ETH_P_TEB) : proto;
+
+@@ -743,6 +740,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ struct ip_tunnel_info *tun_info;
+ const struct ip_tunnel_key *key;
+ __be16 flags;
++ int tun_hlen;
+
+ tun_info = skb_tunnel_info_txcheck(skb);
+ if (IS_ERR(tun_info) ||
+@@ -760,9 +758,12 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ dsfield = key->tos;
+ flags = key->tun_flags &
+ (TUNNEL_CSUM | TUNNEL_KEY | TUNNEL_SEQ);
+- tunnel->tun_hlen = gre_calc_hlen(flags);
++ tun_hlen = gre_calc_hlen(flags);
+
+- gre_build_header(skb, tunnel->tun_hlen,
++ if (skb_cow_head(skb, dev->needed_headroom ?: tun_hlen + tunnel->encap_hlen))
++ return -ENOMEM;
++
++ gre_build_header(skb, tun_hlen,
+ flags, protocol,
+ tunnel_id_to_key32(tun_info->key.tun_id),
+ (flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++)
+@@ -772,6 +773,9 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
+ if (tunnel->parms.o_flags & TUNNEL_SEQ)
+ tunnel->o_seqno++;
+
++ if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
++ return -ENOMEM;
++
+ gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
+ protocol, tunnel->parms.o_key,
+ htonl(tunnel->o_seqno));
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index da1bf48e79370..1caeb1ef20956 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3303,6 +3303,7 @@ static int ip6_dst_gc(struct dst_ops *ops)
+ int rt_elasticity = net->ipv6.sysctl.ip6_rt_gc_elasticity;
+ int rt_gc_timeout = net->ipv6.sysctl.ip6_rt_gc_timeout;
+ unsigned long rt_last_gc = net->ipv6.ip6_rt_last_gc;
++ unsigned int val;
+ int entries;
+
+ entries = dst_entries_get_fast(ops);
+@@ -3313,13 +3314,13 @@ static int ip6_dst_gc(struct dst_ops *ops)
+ entries <= rt_max_size)
+ goto out;
+
+- net->ipv6.ip6_rt_gc_expire++;
+- fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net, true);
++ fib6_run_gc(atomic_inc_return(&net->ipv6.ip6_rt_gc_expire), net, true);
+ entries = dst_entries_get_slow(ops);
+ if (entries < ops->gc_thresh)
+- net->ipv6.ip6_rt_gc_expire = rt_gc_timeout>>1;
++ atomic_set(&net->ipv6.ip6_rt_gc_expire, rt_gc_timeout >> 1);
+ out:
+- net->ipv6.ip6_rt_gc_expire -= net->ipv6.ip6_rt_gc_expire>>rt_elasticity;
++ val = atomic_read(&net->ipv6.ip6_rt_gc_expire);
++ atomic_set(&net->ipv6.ip6_rt_gc_expire, val - (val >> rt_elasticity));
+ return entries > rt_max_size;
+ }
+
+@@ -6514,7 +6515,7 @@ static int __net_init ip6_route_net_init(struct net *net)
+ net->ipv6.sysctl.ip6_rt_min_advmss = IPV6_MIN_MTU - 20 - 40;
+ net->ipv6.sysctl.skip_notify_on_dev_down = 0;
+
+- net->ipv6.ip6_rt_gc_expire = 30*HZ;
++ atomic_set(&net->ipv6.ip6_rt_gc_expire, 30*HZ);
+
+ ret = 0;
+ out:
+diff --git a/net/l3mdev/l3mdev.c b/net/l3mdev/l3mdev.c
+index 17927966abb33..8b14a24f10404 100644
+--- a/net/l3mdev/l3mdev.c
++++ b/net/l3mdev/l3mdev.c
+@@ -147,7 +147,7 @@ int l3mdev_master_upper_ifindex_by_index_rcu(struct net *net, int ifindex)
+
+ dev = dev_get_by_index_rcu(net, ifindex);
+ while (dev && !netif_is_l3_master(dev))
+- dev = netdev_master_upper_dev_get(dev);
++ dev = netdev_master_upper_dev_get_rcu(dev);
+
+ return dev ? dev->ifindex : 0;
+ }
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 47a876ccd2881..05a3795eac8e9 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2263,6 +2263,13 @@ static int netlink_dump(struct sock *sk)
+ * single netdev. The outcome is MSG_TRUNC error.
+ */
+ skb_reserve(skb, skb_tailroom(skb) - alloc_size);
++
++ /* Make sure malicious BPF programs cannot read uninitialized memory
++ * from skb->head -> skb->data
++ */
++ skb_reset_network_header(skb);
++ skb_reset_mac_header(skb);
++
+ netlink_skb_set_owner_r(skb, sk);
+
+ if (nlk->dump_done_errno > 0) {
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index c591b923016a6..d77c21ff066c9 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2436,7 +2436,7 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa,
+ new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2);
+
+ if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
+- if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size) {
++ if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) {
+ OVS_NLERR(log, "Flow action size exceeds max %u",
+ MAX_ACTIONS_BUFSIZE);
+ return ERR_PTR(-EMSGSIZE);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index a7273af2d9009..e3c60251e708a 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2856,8 +2856,9 @@ tpacket_error:
+
+ status = TP_STATUS_SEND_REQUEST;
+ err = po->xmit(skb);
+- if (unlikely(err > 0)) {
+- err = net_xmit_errno(err);
++ if (unlikely(err != 0)) {
++ if (err > 0)
++ err = net_xmit_errno(err);
+ if (err && __packet_get_status(po, ph) ==
+ TP_STATUS_AVAILABLE) {
+ /* skb was destructed already */
+@@ -3058,8 +3059,12 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ skb->no_fcs = 1;
+
+ err = po->xmit(skb);
+- if (err > 0 && (err = net_xmit_errno(err)) != 0)
+- goto out_unlock;
++ if (unlikely(err != 0)) {
++ if (err > 0)
++ err = net_xmit_errno(err);
++ if (err)
++ goto out_unlock;
++ }
+
+ dev_put(dev);
+
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index f15d6942da453..cc7e30733feb0 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -113,7 +113,9 @@ static __net_exit void rxrpc_exit_net(struct net *net)
+ struct rxrpc_net *rxnet = rxrpc_net(net);
+
+ rxnet->live = false;
++ del_timer_sync(&rxnet->peer_keepalive_timer);
+ cancel_work_sync(&rxnet->peer_keepalive_work);
++ /* Remove the timer again as the worker may have restarted it. */
+ del_timer_sync(&rxnet->peer_keepalive_timer);
+ rxrpc_destroy_all_calls(rxnet);
+ rxrpc_destroy_all_connections(rxnet);
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index cf5649292ee00..4d27300c287c4 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -386,14 +386,19 @@ static int u32_init(struct tcf_proto *tp)
+ return 0;
+ }
+
+-static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
++static void __u32_destroy_key(struct tc_u_knode *n)
+ {
+ struct tc_u_hnode *ht = rtnl_dereference(n->ht_down);
+
+ tcf_exts_destroy(&n->exts);
+- tcf_exts_put_net(&n->exts);
+ if (ht && --ht->refcnt == 0)
+ kfree(ht);
++ kfree(n);
++}
++
++static void u32_destroy_key(struct tc_u_knode *n, bool free_pf)
++{
++ tcf_exts_put_net(&n->exts);
+ #ifdef CONFIG_CLS_U32_PERF
+ if (free_pf)
+ free_percpu(n->pf);
+@@ -402,8 +407,7 @@ static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
+ if (free_pf)
+ free_percpu(n->pcpu_success);
+ #endif
+- kfree(n);
+- return 0;
++ __u32_destroy_key(n);
+ }
+
+ /* u32_delete_key_rcu should be called when free'ing a copied
+@@ -811,10 +815,6 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
+ new->flags = n->flags;
+ RCU_INIT_POINTER(new->ht_down, ht);
+
+- /* bump reference count as long as we hold pointer to structure */
+- if (ht)
+- ht->refcnt++;
+-
+ #ifdef CONFIG_CLS_U32_PERF
+ /* Statistics may be incremented by readers during update
+ * so we must keep them in tact. When the node is later destroyed
+@@ -836,6 +836,10 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
+ return NULL;
+ }
+
++ /* bump reference count as long as we hold pointer to structure */
++ if (ht)
++ ht->refcnt++;
++
+ return new;
+ }
+
+@@ -900,13 +904,13 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ extack);
+
+ if (err) {
+- u32_destroy_key(new, false);
++ __u32_destroy_key(new);
+ return err;
+ }
+
+ err = u32_replace_hw_knode(tp, new, flags, extack);
+ if (err) {
+- u32_destroy_key(new, false);
++ __u32_destroy_key(new);
+ return err;
+ }
+
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 303c5e56e4df4..68cd110722a4a 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2538,8 +2538,10 @@ static int smc_shutdown(struct socket *sock, int how)
+ if (smc->use_fallback) {
+ rc = kernel_sock_shutdown(smc->clcsock, how);
+ sk->sk_shutdown = smc->clcsock->sk->sk_shutdown;
+- if (sk->sk_shutdown == SHUTDOWN_MASK)
++ if (sk->sk_shutdown == SHUTDOWN_MASK) {
+ sk->sk_state = SMC_CLOSED;
++ sock_put(sk);
++ }
+ goto out;
+ }
+ switch (how) {
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 70fd8b13938ed..8b0a16ba27d39 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -390,22 +390,36 @@ static const struct config_entry config_table[] = {
+
+ /* Alder Lake */
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_ALDERLAKE)
++ /* Alderlake-S */
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = 0x7ad0,
+ },
++ /* RaptorLake-S */
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+- .device = 0x51c8,
++ .device = 0x7a50,
+ },
++ /* Alderlake-P */
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+- .device = 0x51cc,
++ .device = 0x51c8,
+ },
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = 0x51cd,
+ },
++ /* Alderlake-PS */
++ {
++ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++ .device = 0x51c9,
++ },
++ /* Alderlake-M */
++ {
++ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++ .device = 0x51cc,
++ },
++ /* Alderlake-N */
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = 0x54c8,
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index cf4f277dccdda..26637a6959792 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1387,7 +1387,7 @@ static int hdmi_find_pcm_slot(struct hdmi_spec *spec,
+
+ last_try:
+ /* the last try; check the empty slots in pins */
+- for (i = 0; i < spec->num_nids; i++) {
++ for (i = 0; i < spec->pcm_used; i++) {
+ if (!test_bit(i, &spec->pcm_bitmap))
+ return i;
+ }
+@@ -2263,7 +2263,9 @@ static int generic_hdmi_build_pcms(struct hda_codec *codec)
+ * dev_num is the device entry number in a pin
+ */
+
+- if (codec->mst_no_extra_pcms)
++ if (spec->dyn_pcm_no_legacy && codec->mst_no_extra_pcms)
++ pcm_num = spec->num_cvts;
++ else if (codec->mst_no_extra_pcms)
+ pcm_num = spec->num_nids;
+ else
+ pcm_num = spec->num_nids + spec->dev_num - 1;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index ca40c2bd8ba62..c66d31d8a498c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9116,6 +9116,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[57][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
+ SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x866d, "Clevo NP5[05]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x867c, "Clevo NP7[01]PNP", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x867d, "Clevo NP7[01]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
+diff --git a/sound/soc/atmel/sam9g20_wm8731.c b/sound/soc/atmel/sam9g20_wm8731.c
+index 33e43013ff770..0d639a33ad969 100644
+--- a/sound/soc/atmel/sam9g20_wm8731.c
++++ b/sound/soc/atmel/sam9g20_wm8731.c
+@@ -46,35 +46,6 @@
+ */
+ #undef ENABLE_MIC_INPUT
+
+-static struct clk *mclk;
+-
+-static int at91sam9g20ek_set_bias_level(struct snd_soc_card *card,
+- struct snd_soc_dapm_context *dapm,
+- enum snd_soc_bias_level level)
+-{
+- static int mclk_on;
+- int ret = 0;
+-
+- switch (level) {
+- case SND_SOC_BIAS_ON:
+- case SND_SOC_BIAS_PREPARE:
+- if (!mclk_on)
+- ret = clk_enable(mclk);
+- if (ret == 0)
+- mclk_on = 1;
+- break;
+-
+- case SND_SOC_BIAS_OFF:
+- case SND_SOC_BIAS_STANDBY:
+- if (mclk_on)
+- clk_disable(mclk);
+- mclk_on = 0;
+- break;
+- }
+-
+- return ret;
+-}
+-
+ static const struct snd_soc_dapm_widget at91sam9g20ek_dapm_widgets[] = {
+ SND_SOC_DAPM_MIC("Int Mic", NULL),
+ SND_SOC_DAPM_SPK("Ext Spk", NULL),
+@@ -135,7 +106,6 @@ static struct snd_soc_card snd_soc_at91sam9g20ek = {
+ .owner = THIS_MODULE,
+ .dai_link = &at91sam9g20ek_dai,
+ .num_links = 1,
+- .set_bias_level = at91sam9g20ek_set_bias_level,
+
+ .dapm_widgets = at91sam9g20ek_dapm_widgets,
+ .num_dapm_widgets = ARRAY_SIZE(at91sam9g20ek_dapm_widgets),
+@@ -148,7 +118,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ {
+ struct device_node *np = pdev->dev.of_node;
+ struct device_node *codec_np, *cpu_np;
+- struct clk *pllb;
+ struct snd_soc_card *card = &snd_soc_at91sam9g20ek;
+ int ret;
+
+@@ -162,31 +131,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
+- /*
+- * Codec MCLK is supplied by PCK0 - set it up.
+- */
+- mclk = clk_get(NULL, "pck0");
+- if (IS_ERR(mclk)) {
+- dev_err(&pdev->dev, "Failed to get MCLK\n");
+- ret = PTR_ERR(mclk);
+- goto err;
+- }
+-
+- pllb = clk_get(NULL, "pllb");
+- if (IS_ERR(pllb)) {
+- dev_err(&pdev->dev, "Failed to get PLLB\n");
+- ret = PTR_ERR(pllb);
+- goto err_mclk;
+- }
+- ret = clk_set_parent(mclk, pllb);
+- clk_put(pllb);
+- if (ret != 0) {
+- dev_err(&pdev->dev, "Failed to set MCLK parent\n");
+- goto err_mclk;
+- }
+-
+- clk_set_rate(mclk, MCLK_RATE);
+-
+ card->dev = &pdev->dev;
+
+ /* Parse device node info */
+@@ -230,9 +174,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+
+ return ret;
+
+-err_mclk:
+- clk_put(mclk);
+- mclk = NULL;
+ err:
+ atmel_ssc_put_audio(0);
+ return ret;
+@@ -242,8 +183,6 @@ static int at91sam9g20ek_audio_remove(struct platform_device *pdev)
+ {
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+
+- clk_disable(mclk);
+- mclk = NULL;
+ snd_soc_unregister_card(card);
+ atmel_ssc_put_audio(0);
+
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index 9ad7fc0baf072..20a07c92b2fc2 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -1206,9 +1206,16 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+
+ dev_set_drvdata(dev, priv);
+
+- return devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
++ ret = devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
+ msm8916_wcd_digital_dai,
+ ARRAY_SIZE(msm8916_wcd_digital_dai));
++ if (ret)
++ goto err_mclk;
++
++ return 0;
++
++err_mclk:
++ clk_disable_unprepare(priv->mclk);
+ err_clk:
+ clk_disable_unprepare(priv->ahbclk);
+ return ret;
+diff --git a/sound/soc/codecs/rk817_codec.c b/sound/soc/codecs/rk817_codec.c
+index 8fffe378618d0..cce6f4e7992f5 100644
+--- a/sound/soc/codecs/rk817_codec.c
++++ b/sound/soc/codecs/rk817_codec.c
+@@ -489,7 +489,7 @@ static int rk817_platform_probe(struct platform_device *pdev)
+
+ rk817_codec_parse_dt_property(&pdev->dev, rk817_codec_data);
+
+- rk817_codec_data->mclk = clk_get(pdev->dev.parent, "mclk");
++ rk817_codec_data->mclk = devm_clk_get(pdev->dev.parent, "mclk");
+ if (IS_ERR(rk817_codec_data->mclk)) {
+ dev_dbg(&pdev->dev, "Unable to get mclk\n");
+ ret = -ENXIO;
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index be68d573a4906..c9ff9c89adf70 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -2822,14 +2822,11 @@ static int rt5682_bclk_set_rate(struct clk_hw *hw, unsigned long rate,
+
+ for_each_component_dais(component, dai)
+ if (dai->id == RT5682_AIF1)
+- break;
+- if (!dai) {
+- dev_err(rt5682->i2c_dev, "dai %d not found in component\n",
+- RT5682_AIF1);
+- return -ENODEV;
+- }
++ return rt5682_set_bclk1_ratio(dai, factor);
+
+- return rt5682_set_bclk1_ratio(dai, factor);
++ dev_err(rt5682->i2c_dev, "dai %d not found in component\n",
++ RT5682_AIF1);
++ return -ENODEV;
+ }
+
+ static const struct clk_ops rt5682_dai_clk_ops[RT5682_DAI_NUM_CLKS] = {
+diff --git a/sound/soc/codecs/rt5682s.c b/sound/soc/codecs/rt5682s.c
+index 92b8753f1267b..f2296090716f3 100644
+--- a/sound/soc/codecs/rt5682s.c
++++ b/sound/soc/codecs/rt5682s.c
+@@ -2679,14 +2679,11 @@ static int rt5682s_bclk_set_rate(struct clk_hw *hw, unsigned long rate,
+
+ for_each_component_dais(component, dai)
+ if (dai->id == RT5682S_AIF1)
+- break;
+- if (!dai) {
+- dev_err(component->dev, "dai %d not found in component\n",
+- RT5682S_AIF1);
+- return -ENODEV;
+- }
++ return rt5682s_set_bclk1_ratio(dai, factor);
+
+- return rt5682s_set_bclk1_ratio(dai, factor);
++ dev_err(component->dev, "dai %d not found in component\n",
++ RT5682S_AIF1);
++ return -ENODEV;
+ }
+
+ static const struct clk_ops rt5682s_dai_clk_ops[RT5682S_DAI_NUM_CLKS] = {
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index 1e75e93cf28f2..6298ebe96e941 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -1274,29 +1274,7 @@ static int wcd934x_set_sido_input_src(struct wcd934x_codec *wcd, int sido_src)
+ if (sido_src == wcd->sido_input_src)
+ return 0;
+
+- if (sido_src == SIDO_SOURCE_INTERNAL) {
+- regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+- WCD934X_ANA_BUCK_HI_ACCU_EN_MASK, 0);
+- usleep_range(100, 110);
+- regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+- WCD934X_ANA_BUCK_HI_ACCU_PRE_ENX_MASK, 0x0);
+- usleep_range(100, 110);
+- regmap_update_bits(wcd->regmap, WCD934X_ANA_RCO,
+- WCD934X_ANA_RCO_BG_EN_MASK, 0);
+- usleep_range(100, 110);
+- regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+- WCD934X_ANA_BUCK_PRE_EN1_MASK,
+- WCD934X_ANA_BUCK_PRE_EN1_ENABLE);
+- usleep_range(100, 110);
+- regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+- WCD934X_ANA_BUCK_PRE_EN2_MASK,
+- WCD934X_ANA_BUCK_PRE_EN2_ENABLE);
+- usleep_range(100, 110);
+- regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
+- WCD934X_ANA_BUCK_HI_ACCU_EN_MASK,
+- WCD934X_ANA_BUCK_HI_ACCU_ENABLE);
+- usleep_range(100, 110);
+- } else if (sido_src == SIDO_SOURCE_RCO_BG) {
++ if (sido_src == SIDO_SOURCE_RCO_BG) {
+ regmap_update_bits(wcd->regmap, WCD934X_ANA_RCO,
+ WCD934X_ANA_RCO_BG_EN_MASK,
+ WCD934X_ANA_RCO_BG_ENABLE);
+@@ -1382,8 +1360,6 @@ static int wcd934x_disable_ana_bias_and_syclk(struct wcd934x_codec *wcd)
+ regmap_update_bits(wcd->regmap, WCD934X_CLK_SYS_MCLK_PRG,
+ WCD934X_EXT_CLK_BUF_EN_MASK |
+ WCD934X_MCLK_EN_MASK, 0x0);
+- wcd934x_set_sido_input_src(wcd, SIDO_SOURCE_INTERNAL);
+-
+ regmap_update_bits(wcd->regmap, WCD934X_ANA_BIAS,
+ WCD934X_ANA_BIAS_EN_MASK, 0);
+ regmap_update_bits(wcd->regmap, WCD934X_ANA_BIAS,
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index b06c5682445c0..fb43b331a36e8 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -1687,8 +1687,7 @@ static void dapm_seq_run(struct snd_soc_card *card,
+ switch (w->id) {
+ case snd_soc_dapm_pre:
+ if (!w->event)
+- list_for_each_entry_safe_continue(w, n, list,
+- power_list);
++ continue;
+
+ if (event == SND_SOC_DAPM_STREAM_START)
+ ret = w->event(w,
+@@ -1700,8 +1699,7 @@ static void dapm_seq_run(struct snd_soc_card *card,
+
+ case snd_soc_dapm_post:
+ if (!w->event)
+- list_for_each_entry_safe_continue(w, n, list,
+- power_list);
++ continue;
+
+ if (event == SND_SOC_DAPM_STREAM_START)
+ ret = w->event(w,
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index cb24805668bd8..f413238117af7 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -1479,12 +1479,12 @@ static int soc_tplg_dapm_widget_create(struct soc_tplg *tplg,
+ template.num_kcontrols = le32_to_cpu(w->num_kcontrols);
+ kc = devm_kcalloc(tplg->dev, le32_to_cpu(w->num_kcontrols), sizeof(*kc), GFP_KERNEL);
+ if (!kc)
+- goto err;
++ goto hdr_err;
+
+ kcontrol_type = devm_kcalloc(tplg->dev, le32_to_cpu(w->num_kcontrols), sizeof(unsigned int),
+ GFP_KERNEL);
+ if (!kcontrol_type)
+- goto err;
++ goto hdr_err;
+
+ for (i = 0; i < le32_to_cpu(w->num_kcontrols); i++) {
+ control_hdr = (struct snd_soc_tplg_ctl_hdr *)tplg->pos;
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index e72dcae5e7ee7..0db11ba559d65 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1569,6 +1569,46 @@ static int sof_widget_load_buffer(struct snd_soc_component *scomp, int index,
+ return 0;
+ }
+
++static void sof_disconnect_dai_widget(struct snd_soc_component *scomp,
++ struct snd_soc_dapm_widget *w)
++{
++ struct snd_soc_card *card = scomp->card;
++ struct snd_soc_pcm_runtime *rtd;
++ struct snd_soc_dai *cpu_dai;
++ int i;
++
++ if (!w->sname)
++ return;
++
++ list_for_each_entry(rtd, &card->rtd_list, list) {
++ /* does stream match DAI link ? */
++ if (!rtd->dai_link->stream_name ||
++ strcmp(w->sname, rtd->dai_link->stream_name))
++ continue;
++
++ switch (w->id) {
++ case snd_soc_dapm_dai_out:
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
++ if (cpu_dai->capture_widget == w) {
++ cpu_dai->capture_widget = NULL;
++ break;
++ }
++ }
++ break;
++ case snd_soc_dapm_dai_in:
++ for_each_rtd_cpu_dais(rtd, i, cpu_dai) {
++ if (cpu_dai->playback_widget == w) {
++ cpu_dai->playback_widget = NULL;
++ break;
++ }
++ }
++ break;
++ default:
++ break;
++ }
++ }
++}
++
+ /* bind PCM ID to host component ID */
+ static int spcm_bind(struct snd_soc_component *scomp, struct snd_sof_pcm *spcm,
+ int dir)
+@@ -2449,6 +2489,9 @@ static int sof_widget_unload(struct snd_soc_component *scomp,
+ kfree(dai->dai_config);
+ list_del(&dai->list);
+ }
++
++ sof_disconnect_dai_widget(scomp, widget);
++
+ break;
+ default:
+ break;
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 2c01649c70f61..7c6ca2b433a53 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1194,6 +1194,7 @@ static void snd_usbmidi_output_drain(struct snd_rawmidi_substream *substream)
+ } while (drain_urbs && timeout);
+ finish_wait(&ep->drain_wait, &wait);
+ }
++ port->active = 0;
+ spin_unlock_irq(&ep->buffer_lock);
+ }
+
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 64f5544d0a0aa..7ef7a8abcc2b1 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -599,6 +599,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .id = USB_ID(0x0db0, 0x419c),
+ .map = msi_mpg_x570s_carbon_max_wifi_alc4080_map,
+ },
++ { /* MSI MAG X570S Torpedo Max */
++ .id = USB_ID(0x0db0, 0xa073),
++ .map = msi_mpg_x570s_carbon_max_wifi_alc4080_map,
++ },
+ { /* MSI TRX40 */
+ .id = USB_ID(0x0db0, 0x543d),
+ .map = trx40_mobo_map,
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 167834133b9bc..b8359a0aa008a 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -8,7 +8,7 @@
+ */
+
+ /* handling of USB vendor/product ID pairs as 32-bit numbers */
+-#define USB_ID(vendor, product) (((vendor) << 16) | (product))
++#define USB_ID(vendor, product) (((unsigned int)(vendor) << 16) | (product))
+ #define USB_ID_VENDOR(id) ((id) >> 16)
+ #define USB_ID_PRODUCT(id) ((u16)(id))
+
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index 9a770bfdc8042..15d42d871b3e6 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -577,7 +577,6 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
+ {
+ struct perf_evsel *evsel;
+ const struct perf_cpu_map *cpus = evlist->cpus;
+- const struct perf_thread_map *threads = evlist->threads;
+
+ if (!ops || !ops->get || !ops->mmap)
+ return -EINVAL;
+@@ -589,7 +588,7 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
+ perf_evlist__for_each_entry(evlist, evsel) {
+ if ((evsel->attr.read_format & PERF_FORMAT_ID) &&
+ evsel->sample_id == NULL &&
+- perf_evsel__alloc_id(evsel, perf_cpu_map__nr(cpus), threads->nr) < 0)
++ perf_evsel__alloc_id(evsel, evsel->fd->max_x, evsel->fd->max_y) < 0)
+ return -ENOMEM;
+ }
+
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 1dd92d8c92799..a6bb35b0af9f9 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -349,6 +349,7 @@ static int report__setup_sample_type(struct report *rep)
+ struct perf_session *session = rep->session;
+ u64 sample_type = evlist__combined_sample_type(session->evlist);
+ bool is_pipe = perf_data__is_pipe(session->data);
++ struct evsel *evsel;
+
+ if (session->itrace_synth_opts->callchain ||
+ session->itrace_synth_opts->add_callchain ||
+@@ -403,6 +404,19 @@ static int report__setup_sample_type(struct report *rep)
+ }
+
+ if (sort__mode == SORT_MODE__MEMORY) {
++ /*
++ * FIXUP: prior to kernel 5.18, Arm SPE failed to set the
++ * PERF_SAMPLE_DATA_SRC bit in sample type. For backward
++ * compatibility, set the bit if it's an old perf data file.
++ */
++ evlist__for_each_entry(session->evlist, evsel) {
++ if (strstr(evsel->name, "arm_spe") &&
++ !(sample_type & PERF_SAMPLE_DATA_SRC)) {
++ evsel->core.attr.sample_type |= PERF_SAMPLE_DATA_SRC;
++ sample_type |= PERF_SAMPLE_DATA_SRC;
++ }
++ }
++
+ if (!is_pipe && !(sample_type & PERF_SAMPLE_DATA_SRC)) {
+ ui__error("Selected --mem-mode but no mem data. "
+ "Did you call perf record without -d?\n");
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index fa478ddcd18ae..537a552fe6b3b 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -459,7 +459,7 @@ static int evsel__check_attr(struct evsel *evsel, struct perf_session *session)
+ return -EINVAL;
+
+ if (PRINT_FIELD(DATA_SRC) &&
+- evsel__check_stype(evsel, PERF_SAMPLE_DATA_SRC, "DATA_SRC", PERF_OUTPUT_DATA_SRC))
++ evsel__do_check_stype(evsel, PERF_SAMPLE_DATA_SRC, "DATA_SRC", PERF_OUTPUT_DATA_SRC, allow_user_set))
+ return -EINVAL;
+
+ if (PRINT_FIELD(WEIGHT) &&
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/vxlan_flooding_ipv6.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/vxlan_flooding_ipv6.sh
+index 429f7ee735cf4..fd23c80eba315 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/vxlan_flooding_ipv6.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/vxlan_flooding_ipv6.sh
+@@ -159,6 +159,17 @@ flooding_remotes_add()
+ local lsb
+ local i
+
++ # Prevent unwanted packets from entering the bridge and interfering
++ # with the test.
++ tc qdisc add dev br0 clsact
++ tc filter add dev br0 egress protocol all pref 1 handle 1 \
++ matchall skip_hw action drop
++ tc qdisc add dev $h1 clsact
++ tc filter add dev $h1 egress protocol all pref 1 handle 1 \
++ flower skip_hw dst_mac de:ad:be:ef:13:37 action pass
++ tc filter add dev $h1 egress protocol all pref 2 handle 2 \
++ matchall skip_hw action drop
++
+ for i in $(eval echo {1..$num_remotes}); do
+ lsb=$((i + 1))
+
+@@ -195,6 +206,12 @@ flooding_filters_del()
+ done
+
+ tc qdisc del dev $rp2 clsact
++
++ tc filter del dev $h1 egress protocol all pref 2 handle 2 matchall
++ tc filter del dev $h1 egress protocol all pref 1 handle 1 flower
++ tc qdisc del dev $h1 clsact
++ tc filter del dev br0 egress protocol all pref 1 handle 1 matchall
++ tc qdisc del dev br0 clsact
+ }
+
+ flooding_check_packets()
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh b/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh
+index fedcb7b35af9f..af5ea50ed5c0e 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/vxlan_flooding.sh
+@@ -172,6 +172,17 @@ flooding_filters_add()
+ local lsb
+ local i
+
++ # Prevent unwanted packets from entering the bridge and interfering
++ # with the test.
++ tc qdisc add dev br0 clsact
++ tc filter add dev br0 egress protocol all pref 1 handle 1 \
++ matchall skip_hw action drop
++ tc qdisc add dev $h1 clsact
++ tc filter add dev $h1 egress protocol all pref 1 handle 1 \
++ flower skip_hw dst_mac de:ad:be:ef:13:37 action pass
++ tc filter add dev $h1 egress protocol all pref 2 handle 2 \
++ matchall skip_hw action drop
++
+ tc qdisc add dev $rp2 clsact
+
+ for i in $(eval echo {1..$num_remotes}); do
+@@ -194,6 +205,12 @@ flooding_filters_del()
+ done
+
+ tc qdisc del dev $rp2 clsact
++
++ tc filter del dev $h1 egress protocol all pref 2 handle 2 matchall
++ tc filter del dev $h1 egress protocol all pref 1 handle 1 flower
++ tc qdisc del dev $h1 clsact
++ tc filter del dev br0 egress protocol all pref 1 handle 1 matchall
++ tc qdisc del dev br0 clsact
+ }
+
+ flooding_check_packets()
+diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
+index b08d30bf71c51..3b940a101bc07 100644
+--- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
++++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
+@@ -362,11 +362,12 @@ static void test_init_timer_irq(struct kvm_vm *vm)
+ pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq);
+ }
+
++static int gic_fd;
++
+ static struct kvm_vm *test_vm_create(void)
+ {
+ struct kvm_vm *vm;
+ unsigned int i;
+- int ret;
+ int nr_vcpus = test_args.nr_vcpus;
+
+ vm = vm_create_default_with_vcpus(nr_vcpus, 0, 0, guest_code, NULL);
+@@ -383,8 +384,8 @@ static struct kvm_vm *test_vm_create(void)
+
+ ucall_init(vm, NULL);
+ test_init_timer_irq(vm);
+- ret = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+- if (ret < 0) {
++ gic_fd = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);
++ if (gic_fd < 0) {
+ print_skip("Failed to create vgic-v3");
+ exit(KSFT_SKIP);
+ }
+@@ -395,6 +396,12 @@ static struct kvm_vm *test_vm_create(void)
+ return vm;
+ }
+
++static void test_vm_cleanup(struct kvm_vm *vm)
++{
++ close(gic_fd);
++ kvm_vm_free(vm);
++}
++
+ static void test_print_help(char *name)
+ {
+ pr_info("Usage: %s [-h] [-n nr_vcpus] [-i iterations] [-p timer_period_ms]\n",
+@@ -478,7 +485,7 @@ int main(int argc, char *argv[])
+
+ vm = test_vm_create();
+ test_run(vm);
+- kvm_vm_free(vm);
++ test_vm_cleanup(vm);
+
+ return 0;
+ }
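The mm/oom_kill.c hunk above replaces the immediate wake_oom_reaper() with
queue_oom_reaper(), which arms a two-second timer so the victim's exit path
(including futex robust-list cleanup) gets a chance to run before the reaper
touches its memory. A rough userspace C sketch of the same delayed-wake
pattern follows; victim_exited, reap() and REAP_DELAY_SECS are invented names
that only model the MMF_OOM_SKIP check and OOM_REAPER_DELAY in the hunk, not
actual kernel code.

/* Build with: cc -pthread delayed_reaper.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool victim_exited = false;

#define REAP_DELAY_SECS 2	/* models OOM_REAPER_DELAY (2*HZ) */

static void reap(void)
{
	puts("reaper: victim still alive, reaping");
}

static void *delayed_reaper(void *arg)
{
	(void)arg;
	sleep(REAP_DELAY_SECS);			/* models the oom_reaper_timer */
	if (atomic_load(&victim_exited))	/* models the MMF_OOM_SKIP test */
		puts("reaper: victim exited on its own, nothing to do");
	else
		reap();
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, delayed_reaper, NULL);

	/* The "victim" finishes quickly here, so the reaper backs off. */
	sleep(1);
	atomic_store(&victim_exited, true);

	pthread_join(t, NULL);
	return 0;
}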
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-26 12:07 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-26 12:07 UTC (permalink / raw
To: gentoo-commits
commit: 2909d3b48d7fc631005c543f3ba6225cb5689f81
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 26 12:06:40 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 26 12:06:40 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2909d3b4
gpio: Request interrupts after IRQ is initialized
Bug: https://bugs.gentoo.org/840942
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
...quest-interrupts-after-IRQ-is-initialized.patch | 75 ++++++++++++++++++++++
2 files changed, 79 insertions(+)
diff --git a/0000_README b/0000_README
index a12bf7e5..1f4d8246 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/drivers/gpio?id=06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9
+Desc: gpio: Request interrupts after IRQ is initialized
+
Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
diff --git a/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch b/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
new file mode 100644
index 00000000..0a1d4624
--- /dev/null
+++ b/2800_gpio-Request-interrupts-after-IRQ-is-initialized.patch
@@ -0,0 +1,75 @@
+From 06fb4ecfeac7e00d6704fa5ed19299f2fefb3cc9 Mon Sep 17 00:00:00 2001
+From: Mario Limonciello <mario.limonciello@amd.com>
+Date: Fri, 22 Apr 2022 08:14:52 -0500
+Subject: gpio: Request interrupts after IRQ is initialized
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Commit 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members
+before initialization") attempted to fix a race condition that led to a
+NULL pointer, but in the process caused a regression for _AEI/_EVT
+declared GPIOs.
+
+This manifests in messages showing deferred probing while trying to
+allocate IRQs like so:
+
+ amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x0000 to IRQ, err -517
+ amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x002C to IRQ, err -517
+ amd_gpio AMDI0030:00: Failed to translate GPIO pin 0x003D to IRQ, err -517
+ [ .. more of the same .. ]
+
+The code for walking _AEI doesn't handle deferred probing and so this
+leads to non-functional GPIO interrupts.
+
+Fix this issue by moving the call to `acpi_gpiochip_request_interrupts`
+to occur after gc->irq.initialized is set.
+
+Fixes: 5467801f1fcb ("gpio: Restrict usage of GPIO chip irq members before initialization")
+Link: https://lore.kernel.org/linux-gpio/BL1PR12MB51577A77F000A008AA694675E2EF9@BL1PR12MB5157.namprd12.prod.outlook.com/
+Link: https://bugzilla.suse.com/show_bug.cgi?id=1198697
+Link: https://bugzilla.kernel.org/show_bug.cgi?id=215850
+Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1979
+Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1976
+Reported-by: Mario Limonciello <mario.limonciello@amd.com>
+Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
+Reviewed-by: Shreeya Patel <shreeya.patel@collabora.com>
+Tested-By: Samuel Čavoj <samuel@cavoj.net>
+Tested-By: lukeluk498@gmail.com
+Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
+Acked-by: Linus Walleij <linus.walleij@linaro.org>
+Reviewed-and-tested-by: Takashi Iwai <tiwai@suse.de>
+Cc: Shreeya Patel <shreeya.patel@collabora.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+---
+ drivers/gpio/gpiolib.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+(limited to 'drivers/gpio')
+
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 085348e089860..b7694171655cf 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1601,8 +1601,6 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+
+ gpiochip_set_irq_hooks(gc);
+
+- acpi_gpiochip_request_interrupts(gc);
+-
+ /*
+ * Using barrier() here to prevent compiler from reordering
+ * gc->irq.initialized before initialization of above
+@@ -1612,6 +1610,8 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+
+ gc->irq.initialized = true;
+
++ acpi_gpiochip_request_interrupts(gc);
++
+ return 0;
+ }
+
+--
+cgit 1.2.3-1.el7
+
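The fix is an instance of the initialize-then-publish pattern: the call to
acpi_gpiochip_request_interrupts() now happens only after the store to
gc->irq.initialized, so nothing can consume the irqchip while its fields are
still being set up. Below is a minimal C11 sketch of that pattern, using
release/acquire atomics where the kernel uses a compiler barrier(); struct
irq_state and request_interrupts() are invented names for illustration only.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct irq_state {
	int domain;			/* stands in for the gc->irq members */
	atomic_bool initialized;	/* models gc->irq.initialized */
};

static void request_interrupts(struct irq_state *st)
{
	/* Models acpi_gpiochip_request_interrupts(): it must only act
	 * once every field it may touch is valid. */
	if (atomic_load_explicit(&st->initialized, memory_order_acquire))
		printf("requesting IRQs for domain %d\n", st->domain);
}

int main(void)
{
	struct irq_state st = { .initialized = false };

	st.domain = 42;			/* finish all setup first... */
	atomic_store_explicit(&st.initialized, true,
			      memory_order_release);	/* ...then publish */

	request_interrupts(&st);	/* safe: runs after the publish */
	return 0;
}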
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-20 12:06 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-20 12:06 UTC (permalink / raw
To: gentoo-commits
commit: 3bb7b32b7676d65c17bd93e52c8ba7b78cea0447
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 20 12:06:08 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 20 12:06:08 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3bb7b32b
Linux patch 5.17.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-5.17.4.patch | 8556 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8560 insertions(+)
diff --git a/0000_README b/0000_README
index a57e4c32..a12bf7e5 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-5.17.3.patch
From: http://www.kernel.org
Desc: Linux 5.17.3
+Patch: 1003_linux-5.17.4.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-5.17.4.patch b/1003_linux-5.17.4.patch
new file mode 100644
index 00000000..411d2496
--- /dev/null
+++ b/1003_linux-5.17.4.patch
@@ -0,0 +1,8556 @@
+diff --git a/Documentation/devicetree/bindings/memory-controllers/synopsys,ddrc-ecc.yaml b/Documentation/devicetree/bindings/memory-controllers/synopsys,ddrc-ecc.yaml
+index fb7ae38a9c866..e3bc6ebce0904 100644
+--- a/Documentation/devicetree/bindings/memory-controllers/synopsys,ddrc-ecc.yaml
++++ b/Documentation/devicetree/bindings/memory-controllers/synopsys,ddrc-ecc.yaml
+@@ -24,9 +24,9 @@ description: |
+ properties:
+ compatible:
+ enum:
++ - snps,ddrc-3.80a
+ - xlnx,zynq-ddrc-a05
+ - xlnx,zynqmp-ddrc-2.40a
+- - snps,ddrc-3.80a
+
+ interrupts:
+ maxItems: 1
+@@ -43,7 +43,9 @@ allOf:
+ properties:
+ compatible:
+ contains:
+- const: xlnx,zynqmp-ddrc-2.40a
++ enum:
++ - snps,ddrc-3.80a
++ - xlnx,zynqmp-ddrc-2.40a
+ then:
+ required:
+ - interrupts
+diff --git a/Documentation/devicetree/bindings/net/snps,dwmac.yaml b/Documentation/devicetree/bindings/net/snps,dwmac.yaml
+index 7eb43707e601d..c421e4e306a1b 100644
+--- a/Documentation/devicetree/bindings/net/snps,dwmac.yaml
++++ b/Documentation/devicetree/bindings/net/snps,dwmac.yaml
+@@ -53,20 +53,18 @@ properties:
+ - allwinner,sun8i-r40-gmac
+ - allwinner,sun8i-v3s-emac
+ - allwinner,sun50i-a64-emac
+- - loongson,ls2k-dwmac
+- - loongson,ls7a-dwmac
+ - amlogic,meson6-dwmac
+ - amlogic,meson8b-dwmac
+ - amlogic,meson8m2-dwmac
+ - amlogic,meson-gxbb-dwmac
+ - amlogic,meson-axg-dwmac
+- - loongson,ls2k-dwmac
+- - loongson,ls7a-dwmac
+ - ingenic,jz4775-mac
+ - ingenic,x1000-mac
+ - ingenic,x1600-mac
+ - ingenic,x1830-mac
+ - ingenic,x2000-mac
++ - loongson,ls2k-dwmac
++ - loongson,ls7a-dwmac
+ - rockchip,px30-gmac
+ - rockchip,rk3128-gmac
+ - rockchip,rk3228-gmac
+diff --git a/Makefile b/Makefile
+index 02fbef1a0213b..d7747e4c216e4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
+index 428012687a802..7f7f6bae21c2d 100644
+--- a/arch/arm/mach-davinci/board-da850-evm.c
++++ b/arch/arm/mach-davinci/board-da850-evm.c
+@@ -1101,11 +1101,13 @@ static int __init da850_evm_config_emac(void)
+ int ret;
+ u32 val;
+ struct davinci_soc_info *soc_info = &davinci_soc_info;
+- u8 rmii_en = soc_info->emac_pdata->rmii_en;
++ u8 rmii_en;
+
+ if (!machine_is_davinci_da850_evm())
+ return 0;
+
++ rmii_en = soc_info->emac_pdata->rmii_en;
++
+ cfg_chip3_base = DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG);
+
+ val = __raw_readl(cfg_chip3_base);
+diff --git a/arch/arm/mach-ep93xx/clock.c b/arch/arm/mach-ep93xx/clock.c
+index cc75087134d38..28e0ae6e890e5 100644
+--- a/arch/arm/mach-ep93xx/clock.c
++++ b/arch/arm/mach-ep93xx/clock.c
+@@ -148,8 +148,10 @@ static struct clk_hw *ep93xx_clk_register_gate(const char *name,
+ psc->lock = &clk_lock;
+
+ clk = clk_register(NULL, &psc->hw);
+- if (IS_ERR(clk))
++ if (IS_ERR(clk)) {
+ kfree(psc);
++ return ERR_CAST(clk);
++ }
+
+ return &psc->hw;
+ }
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index d62405ce3e6de..7496deab025ad 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -43,10 +43,22 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
+
+ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
+
++#if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
+ static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
+ {
+ return !(vcpu->arch.hcr_el2 & HCR_RW);
+ }
++#else
++static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
++{
++ struct kvm *kvm = vcpu->kvm;
++
++ WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED,
++ &kvm->arch.flags));
++
++ return test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
++}
++#endif
+
+ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
+ {
+@@ -72,15 +84,14 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
+ vcpu->arch.hcr_el2 |= HCR_TVM;
+ }
+
+- if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
++ if (vcpu_el1_is_32bit(vcpu))
+ vcpu->arch.hcr_el2 &= ~HCR_RW;
+-
+- /*
+- * TID3: trap feature register accesses that we virtualise.
+- * For now this is conditional, since no AArch32 feature regs
+- * are currently virtualised.
+- */
+- if (!vcpu_el1_is_32bit(vcpu))
++ else
++ /*
++ * TID3: trap feature register accesses that we virtualise.
++ * For now this is conditional, since no AArch32 feature regs
++ * are currently virtualised.
++ */
+ vcpu->arch.hcr_el2 |= HCR_TID3;
+
+ if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
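This hunk gives hypervisor and host builds different bodies for vcpu_el1_is_32bit(): hyp code reads the already-programmed HCR_EL2.RW bit, while host code consults the VM-wide flag (warning if the width has not been configured yet). A reduced sketch of the conditional-compilation pattern, with hypothetical names:

#include <stdbool.h>

#define HCR_RW		(1UL << 31)
#define FLAG_EL1_32BIT	4	/* bit index, mirroring the patch */

struct vm { unsigned long flags; };
struct vcpu { unsigned long hcr_el2; struct vm *vm; };

#ifdef HYP_BUILD	/* stands in for __KVM_*_HYPERVISOR__ */
static inline bool el1_is_32bit(const struct vcpu *v)
{
	return !(v->hcr_el2 & HCR_RW);	/* trust programmed HW state */
}
#else
static inline bool el1_is_32bit(const struct vcpu *v)
{
	return v->vm->flags & (1UL << FLAG_EL1_32BIT);	/* VM-wide flag */
}
#endif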
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 8234626a945a7..b5ae92f77c616 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -122,7 +122,22 @@ struct kvm_arch {
+ * should) opt in to this feature if KVM_CAP_ARM_NISV_TO_USER is
+ * supported.
+ */
+- bool return_nisv_io_abort_to_user;
++#define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER 0
++ /* Memory Tagging Extension enabled for the guest */
++#define KVM_ARCH_FLAG_MTE_ENABLED 1
++	/* At least one vCPU has run in the VM */
++#define KVM_ARCH_FLAG_HAS_RAN_ONCE 2
++ /*
++ * The following two bits are used to indicate the guest's EL1
++	 * register width configuration. The KVM_ARCH_FLAG_EL1_32BIT
++	 * bit is valid only when KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED is set.
++	 * Otherwise, the guest's EL1 register width has not yet been
++	 * determined.
++ */
++#define KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED 3
++#define KVM_ARCH_FLAG_EL1_32BIT 4
++
++ unsigned long flags;
+
+ /*
+ * VM-wide PMU filter, implemented as a bitmap and big enough for
+@@ -133,10 +148,6 @@ struct kvm_arch {
+
+ u8 pfr0_csv2;
+ u8 pfr0_csv3;
+-
+- /* Memory Tagging Extension enabled for the guest */
+- bool mte_enabled;
+- bool ran_once;
+ };
+
+ struct kvm_vcpu_fault_info {
+@@ -792,7 +803,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
+ #define kvm_arm_vcpu_sve_finalized(vcpu) \
+ ((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
+
+-#define kvm_has_mte(kvm) (system_supports_mte() && (kvm)->arch.mte_enabled)
++#define kvm_has_mte(kvm) \
++ (system_supports_mte() && \
++ test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))
+ #define kvm_vcpu_has_pmu(vcpu) \
+ (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
+
+diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
+index 3fb79b76e9d96..7bbf5104b7b7b 100644
+--- a/arch/arm64/kernel/alternative.c
++++ b/arch/arm64/kernel/alternative.c
+@@ -42,7 +42,7 @@ bool alternative_is_applied(u16 cpufeature)
+ /*
+ * Check if the target PC is within an alternative block.
+ */
+-static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
++static __always_inline bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
+ {
+ unsigned long replptr = (unsigned long)ALT_REPL_PTR(alt);
+ return !(pc >= replptr && pc <= (replptr + alt->alt_len));
+@@ -50,7 +50,7 @@ static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
+
+ #define align_down(x, a) ((unsigned long)(x) & ~(((unsigned long)(a)) - 1))
+
+-static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnptr)
++static __always_inline u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnptr)
+ {
+ u32 insn;
+
+@@ -95,7 +95,7 @@ static u32 get_alt_insn(struct alt_instr *alt, __le32 *insnptr, __le32 *altinsnp
+ return insn;
+ }
+
+-static void patch_alternative(struct alt_instr *alt,
++static noinstr void patch_alternative(struct alt_instr *alt,
+ __le32 *origptr, __le32 *updptr, int nr_inst)
+ {
+ __le32 *replptr;
+diff --git a/arch/arm64/kernel/cpuidle.c b/arch/arm64/kernel/cpuidle.c
+index 03991eeff6430..3006f43248084 100644
+--- a/arch/arm64/kernel/cpuidle.c
++++ b/arch/arm64/kernel/cpuidle.c
+@@ -54,6 +54,9 @@ static int psci_acpi_cpu_init_idle(unsigned int cpu)
+ struct acpi_lpi_state *lpi;
+ struct acpi_processor *pr = per_cpu(processors, cpu);
+
++ if (unlikely(!pr || !pr->flags.has_lpi))
++ return -EINVAL;
++
+ /*
+ * If the PSCI cpu_suspend function hook has not been initialized
+ * idle states must not be enabled, so bail out
+@@ -61,9 +64,6 @@ static int psci_acpi_cpu_init_idle(unsigned int cpu)
+ if (!psci_ops.cpu_suspend)
+ return -EOPNOTSUPP;
+
+- if (unlikely(!pr || !pr->flags.has_lpi))
+- return -EINVAL;
+-
+ count = pr->power.count - 1;
+ if (count <= 0)
+ return -ENODEV;
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 85a2a75f44982..25d8aff273a10 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -89,7 +89,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
+ switch (cap->cap) {
+ case KVM_CAP_ARM_NISV_TO_USER:
+ r = 0;
+- kvm->arch.return_nisv_io_abort_to_user = true;
++ set_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
++ &kvm->arch.flags);
+ break;
+ case KVM_CAP_ARM_MTE:
+ mutex_lock(&kvm->lock);
+@@ -97,7 +98,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
+ r = -EINVAL;
+ } else {
+ r = 0;
+- kvm->arch.mte_enabled = true;
++ set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &kvm->arch.flags);
+ }
+ mutex_unlock(&kvm->lock);
+ break;
+@@ -635,7 +636,7 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
+
+ mutex_lock(&kvm->lock);
+- kvm->arch.ran_once = true;
++ set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
+ mutex_unlock(&kvm->lock);
+
+ return ret;
+diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
+index 3e2d8ba11a027..3dd38a151d2a6 100644
+--- a/arch/arm64/kvm/mmio.c
++++ b/arch/arm64/kvm/mmio.c
+@@ -135,7 +135,8 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
+ * volunteered to do so, and bail out otherwise.
+ */
+ if (!kvm_vcpu_dabt_isvalid(vcpu)) {
+- if (vcpu->kvm->arch.return_nisv_io_abort_to_user) {
++ if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
++ &vcpu->kvm->arch.flags)) {
+ run->exit_reason = KVM_EXIT_ARM_NISV;
+ run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
+ run->arm_nisv.fault_ipa = fault_ipa;
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index bc771bc1a0413..fc6ee6f02fec4 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -982,7 +982,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+
+ mutex_lock(&kvm->lock);
+
+- if (kvm->arch.ran_once) {
++ if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags)) {
+ mutex_unlock(&kvm->lock);
+ return -EBUSY;
+ }
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index ecc40c8cd6f64..6c70c6f61c703 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -181,27 +181,51 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
+ return 0;
+ }
+
+-static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
++/**
++ * kvm_set_vm_width() - set the register width for the guest
++ * @vcpu: Pointer to the vcpu being configured
++ *
++ * Set both KVM_ARCH_FLAG_EL1_32BIT and KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED
++ * in the VM flags based on the vcpu's requested register width, the HW
++ * capabilities and other options (such as MTE).
++ * When REG_WIDTH_CONFIGURED is already set, the vcpu settings must be
++ * consistent with the value of the FLAG_EL1_32BIT bit in the flags.
++ *
++ * Return: 0 on success, negative error code on failure.
++ */
++static int kvm_set_vm_width(struct kvm_vcpu *vcpu)
+ {
+- struct kvm_vcpu *tmp;
++ struct kvm *kvm = vcpu->kvm;
+ bool is32bit;
+- unsigned long i;
+
+ is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
++
++ lockdep_assert_held(&kvm->lock);
++
++ if (test_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags)) {
++ /*
++ * The guest's register width is already configured.
++ * Make sure that the vcpu is consistent with it.
++ */
++ if (is32bit == test_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags))
++ return 0;
++
++ return -EINVAL;
++ }
++
+ if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
+- return false;
++ return -EINVAL;
+
+ /* MTE is incompatible with AArch32 */
+- if (kvm_has_mte(vcpu->kvm) && is32bit)
+- return false;
++ if (kvm_has_mte(kvm) && is32bit)
++ return -EINVAL;
+
+- /* Check that the vcpus are either all 32bit or all 64bit */
+- kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+- if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
+- return false;
+- }
++ if (is32bit)
++ set_bit(KVM_ARCH_FLAG_EL1_32BIT, &kvm->arch.flags);
+
+- return true;
++ set_bit(KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED, &kvm->arch.flags);
++
++ return 0;
+ }
+
+ /**
+@@ -230,10 +254,16 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ u32 pstate;
+
+ mutex_lock(&vcpu->kvm->lock);
+- reset_state = vcpu->arch.reset_state;
+- WRITE_ONCE(vcpu->arch.reset_state.reset, false);
++ ret = kvm_set_vm_width(vcpu);
++ if (!ret) {
++ reset_state = vcpu->arch.reset_state;
++ WRITE_ONCE(vcpu->arch.reset_state.reset, false);
++ }
+ mutex_unlock(&vcpu->kvm->lock);
+
++ if (ret)
++ return ret;
++
+ /* Reset PMU outside of the non-preemptible section */
+ kvm_pmu_vcpu_reset(vcpu);
+
+@@ -260,14 +290,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ }
+ }
+
+- if (!vcpu_allowed_register_width(vcpu)) {
+- ret = -EINVAL;
+- goto out;
+- }
+-
+ switch (vcpu->arch.target) {
+ default:
+- if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
++ if (vcpu_el1_is_32bit(vcpu)) {
+ pstate = VCPU_RESET_PSTATE_SVC;
+ } else {
+ pstate = VCPU_RESET_PSTATE_EL1;
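kvm_set_vm_width() above implements a configure-once-then-validate rule: the first vCPU reset, under kvm->lock, fixes the VM-wide register width, and every later vCPU must match it or get -EINVAL. A hedged sketch of the idiom, with a pthread mutex standing in for kvm->lock:

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

struct vm_cfg {
	pthread_mutex_t lock;
	bool configured;
	bool is32bit;
};

static int set_width(struct vm_cfg *c, bool want32)
{
	int ret = 0;

	pthread_mutex_lock(&c->lock);
	if (c->configured)
		ret = (c->is32bit == want32) ? 0 : -EINVAL;	/* validate */
	else {
		c->is32bit = want32;				/* configure once */
		c->configured = true;
	}
	pthread_mutex_unlock(&c->lock);
	return ret;
}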
+diff --git a/arch/powerpc/include/asm/static_call.h b/arch/powerpc/include/asm/static_call.h
+index 0a0bc79bd1fa9..de1018cc522b3 100644
+--- a/arch/powerpc/include/asm/static_call.h
++++ b/arch/powerpc/include/asm/static_call.h
+@@ -24,5 +24,6 @@
+
+ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) __PPC_SCT(name, "b " #func)
+ #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) __PPC_SCT(name, "blr")
++#define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) __PPC_SCT(name, "b .+20")
+
+ #endif /* _ASM_POWERPC_STATIC_CALL_H */
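The new ARCH_DEFINE_STATIC_CALL_RET0_TRAMP gives powerpc a trampoline whose default behaviour is "return 0"; the "b .+20" appears to branch to a return-0 sequence laid out later in the trampoline slot. A hedged usage sketch of the generic interface this backs, assuming the standard <linux/static_call.h> API; the hook name and signature are made up:

#include <linux/static_call.h>

static int my_hook(int arg);	/* hypothetical prototype */

/* Call sites go through a patchable trampoline; until a real
 * function is installed, calls return 0 without an indirect branch. */
DEFINE_STATIC_CALL_RET0(my_hook, my_hook);

int run_hook(int arg)
{
	return static_call(my_hook)(arg);
}

/* Later, e.g. at init time:
 *	static_call_update(my_hook, &real_hook);
 */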
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 791db769080d2..316f61a4cb599 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -225,6 +225,13 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu)
+ int cpu;
+ struct rcuwait *waitp;
+
++ /*
++ * rcuwait_wake_up contains smp_mb() which orders prior stores that
++ * create pending work vs below loads of cpu fields. The other side
++ * is the barrier in vcpu run that orders setting the cpu fields vs
++ * testing for pending work.
++ */
++
+ waitp = kvm_arch_vcpu_get_wait(vcpu);
+ if (rcuwait_wake_up(waitp))
+ ++vcpu->stat.generic.halt_wakeup;
+@@ -1089,7 +1096,7 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
+ break;
+ }
+ tvcpu->arch.prodded = 1;
+- smp_mb();
++ smp_mb(); /* This orders prodded store vs ceded load */
+ if (tvcpu->arch.ceded)
+ kvmppc_fast_vcpu_kick_hv(tvcpu);
+ break;
+@@ -3771,6 +3778,14 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
+ pvc = core_info.vc[sub];
+ pvc->pcpu = pcpu + thr;
+ for_each_runnable_thread(i, vcpu, pvc) {
++ /*
++ * XXX: is kvmppc_start_thread called too late here?
++ * It updates vcpu->cpu and vcpu->arch.thread_cpu
++ * which are used by kvmppc_fast_vcpu_kick_hv(), but
++ * kick is called after new exceptions become available
++ * and exceptions are checked earlier than here, by
++ * kvmppc_core_prepare_to_enter.
++ */
+ kvmppc_start_thread(vcpu, pvc);
+ kvmppc_create_dtl_entry(vcpu, pvc);
+ trace_kvm_guest_enter(vcpu);
+@@ -4492,6 +4507,21 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ if (need_resched() || !kvm->arch.mmu_ready)
+ goto out;
+
++ vcpu->cpu = pcpu;
++ vcpu->arch.thread_cpu = pcpu;
++ vc->pcpu = pcpu;
++ local_paca->kvm_hstate.kvm_vcpu = vcpu;
++ local_paca->kvm_hstate.ptid = 0;
++ local_paca->kvm_hstate.fake_suspend = 0;
++
++ /*
++ * Orders set cpu/thread_cpu vs testing for pending interrupts and
++ * doorbells below. The other side is when these fields are set vs
++ * kvmppc_fast_vcpu_kick_hv reading the cpu/thread_cpu fields to
++ * kick a vCPU to notice the pending interrupt.
++ */
++ smp_mb();
++
+ if (!nested) {
+ kvmppc_core_prepare_to_enter(vcpu);
+ if (test_bit(BOOK3S_IRQPRIO_EXTERNAL,
+@@ -4511,13 +4541,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+
+ tb = mftb();
+
+- vcpu->cpu = pcpu;
+- vcpu->arch.thread_cpu = pcpu;
+- vc->pcpu = pcpu;
+- local_paca->kvm_hstate.kvm_vcpu = vcpu;
+- local_paca->kvm_hstate.ptid = 0;
+- local_paca->kvm_hstate.fake_suspend = 0;
+-
+ __kvmppc_create_dtl_entry(vcpu, pcpu, tb + vc->tb_offset, 0);
+
+ trace_kvm_guest_enter(vcpu);
+@@ -4619,6 +4642,8 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ run->exit_reason = KVM_EXIT_INTR;
+ vcpu->arch.ret = -EINTR;
+ out:
++ vcpu->cpu = -1;
++ vcpu->arch.thread_cpu = -1;
+ powerpc_local_irq_pmu_restore(flags);
+ preempt_enable();
+ goto done;
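The barrier comments added in this hunk document a symmetric pairing: the kicker publishes pending work and then reads the vcpu's cpu fields (ordered by the smp_mb() inside rcuwait_wake_up()), while vcpu entry publishes the cpu fields and then, after its own smp_mb(), re-checks for pending work. A reduced sketch with C11 fences standing in for smp_mb(); all names are illustrative:

#include <stdatomic.h>

static atomic_int work_pending;
static atomic_int vcpu_cpu = -1;	/* -1: vCPU not running */

void kicker(void)
{
	atomic_store_explicit(&work_pending, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */
	if (atomic_load_explicit(&vcpu_cpu, memory_order_relaxed) >= 0)
		;	/* send_ipi(cpu) in the real code */
}

void vcpu_enter(int pcpu)
{
	atomic_store_explicit(&vcpu_cpu, pcpu, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */
	if (atomic_load_explicit(&work_pending, memory_order_relaxed))
		;	/* handle work before entering the guest */
}

/* With both fences in place, at least one side must observe the
 * other's store, so a request can never be missed by both paths. */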
+diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
+index 624166004e36c..6785aef4cbd46 100644
+--- a/arch/riscv/kvm/vcpu.c
++++ b/arch/riscv/kvm/vcpu.c
+@@ -653,8 +653,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+ vcpu->arch.isa);
+ kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);
+
+- csr_write(CSR_HGATP, 0);
+-
+ csr->vsstatus = csr_read(CSR_VSSTATUS);
+ csr->vsie = csr_read(CSR_VSIE);
+ csr->vstvec = csr_read(CSR_VSTVEC);
+diff --git a/arch/riscv/kvm/vcpu_fp.c b/arch/riscv/kvm/vcpu_fp.c
+index 4449a976e5a6b..d4308c5120078 100644
+--- a/arch/riscv/kvm/vcpu_fp.c
++++ b/arch/riscv/kvm/vcpu_fp.c
+@@ -11,6 +11,7 @@
+ #include <linux/err.h>
+ #include <linux/kvm_host.h>
+ #include <linux/uaccess.h>
++#include <asm/hwcap.h>
+
+ #ifdef CONFIG_FPU
+ void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 17b4e1808b8e8..85ee96abba806 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1574,8 +1574,9 @@ static inline int kvm_arch_flush_remote_tlb(struct kvm *kvm)
+ #define kvm_arch_pmi_in_guest(vcpu) \
+ ((vcpu) && (vcpu)->arch.handling_intr_from_guest)
+
+-int kvm_mmu_module_init(void);
+-void kvm_mmu_module_exit(void);
++void kvm_mmu_x86_module_init(void);
++int kvm_mmu_vendor_module_init(void);
++void kvm_mmu_vendor_module_exit(void);
+
+ void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
+ int kvm_mmu_create(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index a4a39c3e0f196..0c2610cde6ea2 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -128,9 +128,9 @@
+ #define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */
+ #define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */
+
+-/* SRBDS support */
+ #define MSR_IA32_MCU_OPT_CTRL 0x00000123
+-#define RNGDS_MITG_DIS BIT(0)
++#define RNGDS_MITG_DIS BIT(0) /* SRBDS support */
++#define RTM_ALLOW BIT(1) /* TSX development mode */
+
+ #define MSR_IA32_SYSENTER_CS 0x00000174
+ #define MSR_IA32_SYSENTER_ESP 0x00000175
+diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
+index ed4f8bb6c2d9c..2455d721503ec 100644
+--- a/arch/x86/include/asm/static_call.h
++++ b/arch/x86/include/asm/static_call.h
+@@ -38,6 +38,8 @@
+ #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \
+ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; int3; nop; nop; nop")
+
++#define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) \
++ ARCH_DEFINE_STATIC_CALL_TRAMP(name, __static_call_return0)
+
+ #define ARCH_ADD_TRAMP_KEY(name) \
+ asm(".pushsection .static_call_tramp_key, \"a\" \n" \
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 7b8382c117889..bd6c690a9fb98 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1719,6 +1719,8 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
+ validate_apic_and_package_id(c);
+ x86_spec_ctrl_setup_ap();
+ update_srbds_msr();
++
++ tsx_ap_init();
+ }
+
+ static __init int setup_noclflush(char *arg)
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index ee6f23f7587d4..2a8e584fc9913 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -55,11 +55,10 @@ enum tsx_ctrl_states {
+ extern __ro_after_init enum tsx_ctrl_states tsx_ctrl_state;
+
+ extern void __init tsx_init(void);
+-extern void tsx_enable(void);
+-extern void tsx_disable(void);
+-extern void tsx_clear_cpuid(void);
++void tsx_ap_init(void);
+ #else
+ static inline void tsx_init(void) { }
++static inline void tsx_ap_init(void) { }
+ #endif /* CONFIG_CPU_SUP_INTEL */
+
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 8321c43554a1d..f7a5370a9b3b8 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -717,13 +717,6 @@ static void init_intel(struct cpuinfo_x86 *c)
+
+ init_intel_misc_features(c);
+
+- if (tsx_ctrl_state == TSX_CTRL_ENABLE)
+- tsx_enable();
+- else if (tsx_ctrl_state == TSX_CTRL_DISABLE)
+- tsx_disable();
+- else if (tsx_ctrl_state == TSX_CTRL_RTM_ALWAYS_ABORT)
+- tsx_clear_cpuid();
+-
+ split_lock_init();
+ bus_lock_init();
+
+diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c
+index 9c7a5f0492929..ec7bbac3a9f29 100644
+--- a/arch/x86/kernel/cpu/tsx.c
++++ b/arch/x86/kernel/cpu/tsx.c
+@@ -19,7 +19,7 @@
+
+ enum tsx_ctrl_states tsx_ctrl_state __ro_after_init = TSX_CTRL_NOT_SUPPORTED;
+
+-void tsx_disable(void)
++static void tsx_disable(void)
+ {
+ u64 tsx;
+
+@@ -39,7 +39,7 @@ void tsx_disable(void)
+ wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+ }
+
+-void tsx_enable(void)
++static void tsx_enable(void)
+ {
+ u64 tsx;
+
+@@ -58,7 +58,7 @@ void tsx_enable(void)
+ wrmsrl(MSR_IA32_TSX_CTRL, tsx);
+ }
+
+-static bool __init tsx_ctrl_is_supported(void)
++static bool tsx_ctrl_is_supported(void)
+ {
+ u64 ia32_cap = x86_read_arch_cap_msr();
+
+@@ -84,7 +84,45 @@ static enum tsx_ctrl_states x86_get_tsx_auto_mode(void)
+ return TSX_CTRL_ENABLE;
+ }
+
+-void tsx_clear_cpuid(void)
++/*
++ * Disabling TSX is not a trivial business.
++ *
++ * First of all, there's a CPUID bit: X86_FEATURE_RTM_ALWAYS_ABORT
++ * which says that TSX is practically disabled (all transactions are
++ * aborted by default). When that bit is set, the kernel unconditionally
++ * disables TSX.
++ *
++ * In order to do that, however, it needs to dance a bit:
++ *
++ * 1. The first method to disable it is through MSR_TSX_FORCE_ABORT and
++ * the MSR is present only when *two* CPUID bits are set:
++ *
++ * - X86_FEATURE_RTM_ALWAYS_ABORT
++ * - X86_FEATURE_TSX_FORCE_ABORT
++ *
++ * 2. The second method is for CPUs which do not have the above-mentioned
++ * MSR: those use a different MSR - MSR_IA32_TSX_CTRL and disable TSX
++ * through that one. Those CPUs can also have the initially mentioned
++ * CPUID bit X86_FEATURE_RTM_ALWAYS_ABORT set and for those the same strategy
++ * applies: TSX gets disabled unconditionally.
++ *
++ * When either of the two methods are present, the kernel disables TSX and
++ * clears the respective RTM and HLE feature flags.
++ *
++ * An additional twist in the whole thing is late microcode loading,
++ * which may cause the X86_FEATURE_RTM_ALWAYS_ABORT CPUID bit to be
++ * set after the update.
++ *
++ * A subsequent hotplug operation on any logical CPU except the BSP will
++ * cause the supported CPUID feature bits to be re-detected. If RTM and
++ * HLE then vanish even though userspace consulted them before the
++ * update, funny explosions will happen. Long story short: the kernel
++ * doesn't modify CPUID feature bits after booting.
++ *
++ * That's why this function's call in init_intel() doesn't clear the
++ * feature flags.
++ */
++static void tsx_clear_cpuid(void)
+ {
+ u64 msr;
+
+@@ -97,6 +135,39 @@ void tsx_clear_cpuid(void)
+ rdmsrl(MSR_TSX_FORCE_ABORT, msr);
+ msr |= MSR_TFA_TSX_CPUID_CLEAR;
+ wrmsrl(MSR_TSX_FORCE_ABORT, msr);
++ } else if (tsx_ctrl_is_supported()) {
++ rdmsrl(MSR_IA32_TSX_CTRL, msr);
++ msr |= TSX_CTRL_CPUID_CLEAR;
++ wrmsrl(MSR_IA32_TSX_CTRL, msr);
++ }
++}
++
++/*
++ * Disable TSX development mode
++ *
++ * When the microcode released in Feb 2022 is applied, TSX will be disabled by
++ * default on some processors. MSR 0x122 (TSX_CTRL) and MSR 0x123
++ * (IA32_MCU_OPT_CTRL) can be used to re-enable TSX for development; doing so is
++ * not recommended for production deployments. In particular, applying MD_CLEAR
++ * flows for mitigation of the Intel TSX Asynchronous Abort (TAA) transient
++ * execution attack may not be effective on these processors when Intel TSX is
++ * enabled with updated microcode.
++ */
++static void tsx_dev_mode_disable(void)
++{
++ u64 mcu_opt_ctrl;
++
++ /* Check if RTM_ALLOW exists */
++ if (!boot_cpu_has_bug(X86_BUG_TAA) || !tsx_ctrl_is_supported() ||
++ !cpu_feature_enabled(X86_FEATURE_SRBDS_CTRL))
++ return;
++
++ rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_opt_ctrl);
++
++ if (mcu_opt_ctrl & RTM_ALLOW) {
++ mcu_opt_ctrl &= ~RTM_ALLOW;
++ wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_opt_ctrl);
++ setup_force_cpu_cap(X86_FEATURE_RTM_ALWAYS_ABORT);
+ }
+ }
+
+@@ -105,14 +176,14 @@ void __init tsx_init(void)
+ char arg[5] = {};
+ int ret;
+
++ tsx_dev_mode_disable();
++
+ /*
+- * Hardware will always abort a TSX transaction if both CPUID bits
+- * RTM_ALWAYS_ABORT and TSX_FORCE_ABORT are set. In this case, it is
+- * better not to enumerate CPUID.RTM and CPUID.HLE bits. Clear them
+- * here.
++ * Hardware will always abort a TSX transaction when the CPUID bit
++ * RTM_ALWAYS_ABORT is set. In this case, it is better not to enumerate
++ * CPUID.RTM and CPUID.HLE bits. Clear them here.
+ */
+- if (boot_cpu_has(X86_FEATURE_RTM_ALWAYS_ABORT) &&
+- boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)) {
++ if (boot_cpu_has(X86_FEATURE_RTM_ALWAYS_ABORT)) {
+ tsx_ctrl_state = TSX_CTRL_RTM_ALWAYS_ABORT;
+ tsx_clear_cpuid();
+ setup_clear_cpu_cap(X86_FEATURE_RTM);
+@@ -175,3 +246,16 @@ void __init tsx_init(void)
+ setup_force_cpu_cap(X86_FEATURE_HLE);
+ }
+ }
++
++void tsx_ap_init(void)
++{
++ tsx_dev_mode_disable();
++
++ if (tsx_ctrl_state == TSX_CTRL_ENABLE)
++ tsx_enable();
++ else if (tsx_ctrl_state == TSX_CTRL_DISABLE)
++ tsx_disable();
++ else if (tsx_ctrl_state == TSX_CTRL_RTM_ALWAYS_ABORT)
++ /* See comment over that function for more details. */
++ tsx_clear_cpuid();
++}
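tsx_dev_mode_disable() above, like tsx_clear_cpuid(), is a plain MSR read-modify-write: read the register, test the bit, and write back only when something actually changed. A stripped-down sketch of the idiom, assuming the standard asm/msr.h rdmsrl()/wrmsrl() helpers; the function name is illustrative:

#include <linux/types.h>
#include <asm/msr.h>

/* Clear one control bit in an MSR if it is currently set. */
static void msr_clear_bit_sketch(u32 msr_no, u64 bit)
{
	u64 val;

	rdmsrl(msr_no, val);
	if (val & bit) {
		val &= ~bit;
		wrmsrl(msr_no, val);	/* skip a redundant WRMSR otherwise */
	}
}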
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 5628d0ba637ec..7f009ebb319ab 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -6144,12 +6144,24 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ return 0;
+ }
+
+-int kvm_mmu_module_init(void)
++/*
++ * nx_huge_pages needs to be resolved to true/false when kvm.ko is loaded, as
++ * its default value of -1 is technically undefined behavior for a boolean.
++ */
++void kvm_mmu_x86_module_init(void)
+ {
+- int ret = -ENOMEM;
+-
+ if (nx_huge_pages == -1)
+ __set_nx_huge_pages(get_nx_auto_mode());
++}
++
++/*
++ * The bulk of the MMU initialization is deferred until the vendor module is
++ * loaded as many of the masks/values may be modified by VMX or SVM, i.e. need
++ * to be reset when a potentially different vendor module is loaded.
++ */
++int kvm_mmu_vendor_module_init(void)
++{
++ int ret = -ENOMEM;
+
+ /*
+ * MMU roles use union aliasing which is, generally speaking, an
+@@ -6197,7 +6209,7 @@ void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
+ mmu_free_memory_caches(vcpu);
+ }
+
+-void kvm_mmu_module_exit(void)
++void kvm_mmu_vendor_module_exit(void)
+ {
+ mmu_destroy_caches();
+ percpu_counter_destroy(&kvm_total_used_mmu_pages);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c81ec70197fb5..05128162ebd58 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8846,7 +8846,7 @@ int kvm_arch_init(void *opaque)
+ }
+ kvm_nr_uret_msrs = 0;
+
+- r = kvm_mmu_module_init();
++ r = kvm_mmu_vendor_module_init();
+ if (r)
+ goto out_free_percpu;
+
+@@ -8894,7 +8894,7 @@ void kvm_arch_exit(void)
+ cancel_work_sync(&pvclock_gtod_work);
+ #endif
+ kvm_x86_ops.hardware_enable = NULL;
+- kvm_mmu_module_exit();
++ kvm_mmu_vendor_module_exit();
+ free_percpu(user_return_msrs);
+ kmem_cache_destroy(x86_emulator_cache);
+ #ifdef CONFIG_KVM_XEN
+@@ -12887,3 +12887,19 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_enter);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_exit);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_msr_protocol_enter);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_msr_protocol_exit);
++
++static int __init kvm_x86_init(void)
++{
++ kvm_mmu_x86_module_init();
++ return 0;
++}
++module_init(kvm_x86_init);
++
++static void __exit kvm_x86_exit(void)
++{
++ /*
++ * If module_init() is implemented, module_exit() must also be
++ * implemented to allow module unload.
++ */
++}
++module_exit(kvm_x86_exit);
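The new kvm_x86_init()/kvm_x86_exit() pair runs when kvm.ko itself loads, so module-scoped tunables like nx_huge_pages are resolved before kvm-intel or kvm-amd later call kvm_mmu_vendor_module_init(). A hedged sketch of the two-stage pattern; detect_default() is a hypothetical helper:

#include <linux/module.h>

static int tunable = -1;	/* -1 = auto; resolved at base-module load */

static int detect_default(void);	/* hypothetical probe */

static int __init base_init(void)
{
	if (tunable == -1)
		tunable = detect_default();
	return 0;
}
module_init(base_init);

static void __exit base_exit(void)
{
	/* Must exist, even empty, or the module cannot be unloaded. */
}
module_exit(base_exit);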
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 0ecb140864b21..b272e963388cb 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -398,6 +398,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
+ EMIT_LFENCE();
+ EMIT2(0xFF, 0xE0 + reg);
+ } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
++ OPTIMIZER_HIDE_VAR(reg);
+ emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
+ } else
+ #endif
+diff --git a/block/bio.c b/block/bio.c
+index 1be1e360967d0..342b1cf5d713c 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1570,7 +1570,7 @@ EXPORT_SYMBOL(bio_split);
+ void bio_trim(struct bio *bio, sector_t offset, sector_t size)
+ {
+ if (WARN_ON_ONCE(offset > BIO_MAX_SECTORS || size > BIO_MAX_SECTORS ||
+- offset + size > bio->bi_iter.bi_size))
++ offset + size > bio_sectors(bio)))
+ return;
+
+ size <<= 9;
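The bio_trim() change is a units fix: offset and size are in 512-byte sectors, but bi_iter.bi_size is a byte count, so the old bound check compared sectors against bytes. bio_sectors() shifts the byte count down by the sector shift. A tiny sketch of the corrected comparison:

#define SECTOR_SHIFT	9	/* 512-byte sectors */

static inline unsigned int bytes_to_sectors(unsigned int bytes)
{
	return bytes >> SECTOR_SHIFT;	/* what bio_sectors() computes */
}

/* All three operands in sectors: the trim must fit inside the bio. */
static int trim_in_bounds(unsigned int off_sec, unsigned int size_sec,
			  unsigned int bio_bytes)
{
	return off_sec + size_sec <= bytes_to_sectors(bio_bytes);
}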
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 05b3985a1984b..4556c86c34659 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -1079,6 +1079,11 @@ static int flatten_lpi_states(struct acpi_processor *pr,
+ return 0;
+ }
+
++int __weak acpi_processor_ffh_lpi_probe(unsigned int cpu)
++{
++ return -EOPNOTSUPP;
++}
++
+ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
+ {
+ int ret, i;
+@@ -1087,6 +1092,11 @@ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
+ struct acpi_device *d = NULL;
+ struct acpi_lpi_states_array info[2], *tmp, *prev, *curr;
+
++ /* make sure our architecture has support */
++ ret = acpi_processor_ffh_lpi_probe(pr->id);
++ if (ret == -EOPNOTSUPP)
++ return ret;
++
+ if (!osc_pc_lpi_support_confirmed)
+ return -EOPNOTSUPP;
+
+@@ -1138,11 +1148,6 @@ static int acpi_processor_get_lpi_info(struct acpi_processor *pr)
+ return 0;
+ }
+
+-int __weak acpi_processor_ffh_lpi_probe(unsigned int cpu)
+-{
+- return -ENODEV;
+-}
+-
+ int __weak acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi)
+ {
+ return -ENODEV;
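Reordering the probe ahead of the osc_pc_lpi_support_confirmed check, and switching the weak stub to -EOPNOTSUPP, makes "no architecture support" the first and consistent answer. A minimal sketch of the __weak default-plus-override pattern; the function name is illustrative:

#include <linux/compiler.h>
#include <linux/errno.h>

/* Default used when no architecture provides an implementation. */
int __weak arch_lpi_probe(unsigned int cpu)
{
	return -EOPNOTSUPP;
}

/*
 * An architecture file supplying a strong definition simply wins
 * at link time:
 *
 *	int arch_lpi_probe(unsigned int cpu) { return do_probe(cpu); }
 */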
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 0c854aebfe0bd..760c0d81d1482 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4014,6 +4014,9 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ { "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
++ { "Samsung SSD 840 EVO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
++ ATA_HORKAGE_NO_DMA_LOG |
++ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ { "Samsung SSD 840*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ { "Samsung SSD 850*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 752a11d16e262..7e079fa3795b1 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -296,6 +296,7 @@ int driver_deferred_probe_check_state(struct device *dev)
+
+ return -EPROBE_DEFER;
+ }
++EXPORT_SYMBOL_GPL(driver_deferred_probe_check_state);
+
+ static void deferred_probe_timeout_work_func(struct work_struct *work)
+ {
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 5d5beeba3ed4f..478ba959362ce 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -2739,6 +2739,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+ sprintf(disk->disk_name, "drbd%d", minor);
+ disk->private_data = device;
+
++ blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
+ blk_queue_write_cache(disk->queue, true, true);
+ /* Setting the max_hw_sectors to an odd value of 8kibyte here
+ This triggers a max_bio_size message upon first attach or connect */
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 13004beb48cab..233577b141412 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -1606,7 +1606,7 @@ static enum blk_eh_timer_return null_timeout_rq(struct request *rq, bool res)
+ * Only fake timeouts need to execute blk_mq_complete_request() here.
+ */
+ cmd->error = BLK_STS_TIMEOUT;
+- if (cmd->fake_timeout)
++ if (cmd->fake_timeout || hctx->type == HCTX_TYPE_POLL)
+ blk_mq_complete_request(rq);
+ return BLK_EH_DONE;
+ }
+diff --git a/drivers/firmware/arm_scmi/clock.c b/drivers/firmware/arm_scmi/clock.c
+index 35b56c8ba0c0e..492f3a9197ec2 100644
+--- a/drivers/firmware/arm_scmi/clock.c
++++ b/drivers/firmware/arm_scmi/clock.c
+@@ -204,7 +204,8 @@ scmi_clock_describe_rates_get(const struct scmi_protocol_handle *ph, u32 clk_id,
+
+ if (rate_discrete && rate) {
+ clk->list.num_rates = tot_rate_cnt;
+- sort(rate, tot_rate_cnt, sizeof(*rate), rate_cmp_func, NULL);
++ sort(clk->list.rates, tot_rate_cnt, sizeof(*rate),
++ rate_cmp_func, NULL);
+ }
+
+ clk->rate_discrete = rate_discrete;
+diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
+index d76bab3aaac45..e815b8f987393 100644
+--- a/drivers/firmware/arm_scmi/driver.c
++++ b/drivers/firmware/arm_scmi/driver.c
+@@ -652,7 +652,8 @@ static void scmi_handle_response(struct scmi_chan_info *cinfo,
+
+ xfer = scmi_xfer_command_acquire(cinfo, msg_hdr);
+ if (IS_ERR(xfer)) {
+- scmi_clear_channel(info, cinfo);
++ if (MSG_XTRACT_TYPE(msg_hdr) == MSG_TYPE_DELAYED_RESP)
++ scmi_clear_channel(info, cinfo);
+ return;
+ }
+
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index 8e5d87984a489..41c31b10ae848 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -134,7 +134,7 @@ static int gpio_sim_get_multiple(struct gpio_chip *gc,
+ struct gpio_sim_chip *chip = gpiochip_get_data(gc);
+
+ mutex_lock(&chip->lock);
+- bitmap_copy(bits, chip->value_map, gc->ngpio);
++ bitmap_replace(bits, bits, chip->value_map, mask, gc->ngpio);
+ mutex_unlock(&chip->lock);
+
+ return 0;
+@@ -146,7 +146,7 @@ static void gpio_sim_set_multiple(struct gpio_chip *gc,
+ struct gpio_sim_chip *chip = gpiochip_get_data(gc);
+
+ mutex_lock(&chip->lock);
+- bitmap_copy(chip->value_map, bits, gc->ngpio);
++ bitmap_replace(chip->value_map, chip->value_map, bits, mask, gc->ngpio);
+ mutex_unlock(&chip->lock);
+ }
+
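In the gpio-sim hunks, bitmap_copy() moved whole words regardless of which lines the caller asked about, clobbering bits outside the request. bitmap_replace(dst, old, new, mask, nbits) only lets masked bits through and leaves the rest of the map untouched. A single-word sketch of the semantics:

/* dst = (old & ~mask) | (new & mask), which is what bitmap_replace()
 * computes across an arbitrary-length bitmap. */
static unsigned long replace_bits(unsigned long old, unsigned long new,
				  unsigned long mask)
{
	return (old & ~mask) | (new & mask);
}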
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index a5495ad31c9ce..b7c2f2af1dee5 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -387,8 +387,8 @@ static acpi_status acpi_gpiochip_alloc_event(struct acpi_resource *ares,
+ pin = agpio->pin_table[0];
+
+ if (pin <= 255) {
+- char ev_name[5];
+- sprintf(ev_name, "_%c%02hhX",
++ char ev_name[8];
++ sprintf(ev_name, "_%c%02X",
+ agpio->triggering == ACPI_EDGE_SENSITIVE ? 'E' : 'L',
+ pin);
+ if (ACPI_SUCCESS(acpi_get_handle(handle, ev_name, &evt_handle)))
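The gpiolib-acpi change enlarges ev_name and drops the hh length modifier: hh mismatched the pin argument's real width (tripping format warnings), and once %02X can emit up to four hex digits for a 16-bit pin, the old five-byte buffer is too small. A hedged sketch with snprintf(); names are illustrative:

#include <stdio.h>

/* '_' + 'E'/'L' + up to 4 hex digits for a 16-bit pin + NUL = 7 bytes;
 * 8 leaves headroom and satisfies the compiler's worst-case analysis. */
static void make_ev_name(char out[8], int edge_triggered, unsigned int pin)
{
	snprintf(out, 8, "_%c%02X", edge_triggered ? 'E' : 'L', pin);
}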
+diff --git a/drivers/gpu/drm/amd/amdgpu/ObjectID.h b/drivers/gpu/drm/amd/amdgpu/ObjectID.h
+index 5b393622f5920..a0f0a17e224fe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/ObjectID.h
++++ b/drivers/gpu/drm/amd/amdgpu/ObjectID.h
+@@ -119,6 +119,7 @@
+ #define CONNECTOR_OBJECT_ID_eDP 0x14
+ #define CONNECTOR_OBJECT_ID_MXM 0x15
+ #define CONNECTOR_OBJECT_ID_LVDS_eDP 0x16
++#define CONNECTOR_OBJECT_ID_USBC 0x17
+
+ /* deleted */
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index b87dca6d09fa6..052816f0efed4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -5678,7 +5678,7 @@ void amdgpu_device_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+ #ifdef CONFIG_X86_64
+- if (adev->flags & AMD_IS_APU)
++ if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev))
+ return;
+ #endif
+ if (adev->gmc.xgmi.connected_to_cpu)
+@@ -5694,7 +5694,7 @@ void amdgpu_device_invalidate_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+ #ifdef CONFIG_X86_64
+- if (adev->flags & AMD_IS_APU)
++ if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev))
+ return;
+ #endif
+ if (adev->gmc.xgmi.connected_to_cpu)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 0ead08ba58c2a..c853266957ce1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -686,7 +686,7 @@ MODULE_PARM_DESC(sched_policy,
+ * Maximum number of processes that HWS can schedule concurrently. The maximum is the
+ * number of VMIDs assigned to the HWS, which is also the default.
+ */
+-int hws_max_conc_proc = 8;
++int hws_max_conc_proc = -1;
+ module_param(hws_max_conc_proc, int, 0444);
+ MODULE_PARM_DESC(hws_max_conc_proc,
+ "Max # processes HWS can execute concurrently when sched_policy=0 (0 = no concurrency, #VMIDs for KFD = Maximum(default))");
+@@ -2276,18 +2276,23 @@ static int amdgpu_pmops_suspend(struct device *dev)
+ {
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
+ struct amdgpu_device *adev = drm_to_adev(drm_dev);
+- int r;
+
+ if (amdgpu_acpi_is_s0ix_active(adev))
+ adev->in_s0ix = true;
+ else
+ adev->in_s3 = true;
+- r = amdgpu_device_suspend(drm_dev, true);
+- if (r)
+- return r;
++ return amdgpu_device_suspend(drm_dev, true);
++}
++
++static int amdgpu_pmops_suspend_noirq(struct device *dev)
++{
++ struct drm_device *drm_dev = dev_get_drvdata(dev);
++ struct amdgpu_device *adev = drm_to_adev(drm_dev);
++
+ if (!adev->in_s0ix)
+- r = amdgpu_asic_reset(adev);
+- return r;
++ return amdgpu_asic_reset(adev);
++
++ return 0;
+ }
+
+ static int amdgpu_pmops_resume(struct device *dev)
+@@ -2528,6 +2533,7 @@ static const struct dev_pm_ops amdgpu_pm_ops = {
+ .prepare = amdgpu_pmops_prepare,
+ .complete = amdgpu_pmops_complete,
+ .suspend = amdgpu_pmops_suspend,
++ .suspend_noirq = amdgpu_pmops_suspend_noirq,
+ .resume = amdgpu_pmops_resume,
+ .freeze = amdgpu_pmops_freeze,
+ .thaw = amdgpu_pmops_thaw,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 9189fb85a4dd4..5831aa40b1e81 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1334,6 +1334,8 @@ static const struct amdgpu_gfxoff_quirk amdgpu_gfxoff_quirk_list[] = {
+ { 0x1002, 0x15dd, 0x103c, 0x83e7, 0xd3 },
+ /* GFXOFF is unstable on C6 parts with a VBIOS 113-RAVEN-114 */
+ { 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc6 },
++ /* Apple MacBook Pro (15-inch, 2019) Radeon Pro Vega 20 4 GB */
++ { 0x1002, 0x69af, 0x106b, 0x019a, 0xc0 },
+ { 0, 0, 0, 0, 0 },
+ };
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+index a2f8ed0e6a644..f1b794d5d87d6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+@@ -788,7 +788,7 @@ static int gmc_v10_0_mc_init(struct amdgpu_device *adev)
+ adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
+
+ #ifdef CONFIG_X86_64
+- if (adev->flags & AMD_IS_APU) {
++ if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) {
+ adev->gmc.aper_base = adev->gfxhub.funcs->get_mc_fb_offset(adev);
+ adev->gmc.aper_size = adev->gmc.real_vram_size;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+index ab8adbff9e2d0..5206e2da334a4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+@@ -381,8 +381,9 @@ static int gmc_v7_0_mc_init(struct amdgpu_device *adev)
+ adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
+
+ #ifdef CONFIG_X86_64
+- if (adev->flags & AMD_IS_APU &&
+- adev->gmc.real_vram_size > adev->gmc.aper_size) {
++ if ((adev->flags & AMD_IS_APU) &&
++ adev->gmc.real_vram_size > adev->gmc.aper_size &&
++ !amdgpu_passthrough(adev)) {
+ adev->gmc.aper_base = ((u64)RREG32(mmMC_VM_FB_OFFSET)) << 22;
+ adev->gmc.aper_size = adev->gmc.real_vram_size;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+index 054733838292c..d07d36786836e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+@@ -581,7 +581,7 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
+ adev->gmc.aper_size = pci_resource_len(adev->pdev, 0);
+
+ #ifdef CONFIG_X86_64
+- if (adev->flags & AMD_IS_APU) {
++ if ((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) {
+ adev->gmc.aper_base = ((u64)RREG32(mmMC_VM_FB_OFFSET)) << 22;
+ adev->gmc.aper_size = adev->gmc.real_vram_size;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index 88c1eb9ad0684..2fb24178eaef9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -1420,7 +1420,7 @@ static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
+ */
+
+ /* check whether both host-gpu and gpu-gpu xgmi links exist */
+- if ((adev->flags & AMD_IS_APU) ||
++ if (((adev->flags & AMD_IS_APU) && !amdgpu_passthrough(adev)) ||
+ (adev->gmc.xgmi.supported &&
+ adev->gmc.xgmi.connected_to_cpu)) {
+ adev->gmc.aper_base =
+@@ -1684,7 +1684,7 @@ static int gmc_v9_0_sw_fini(void *handle)
+ amdgpu_gem_force_release(adev);
+ amdgpu_vm_manager_fini(adev);
+ amdgpu_gart_table_vram_free(adev);
+- amdgpu_bo_unref(&adev->gmc.pdb0_bo);
++ amdgpu_bo_free_kernel(&adev->gmc.pdb0_bo, NULL, &adev->gmc.ptr_pdb0);
+ amdgpu_bo_fini(adev);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 0ce2a7aa400b1..ad9bfc772bdff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -1474,8 +1474,11 @@ static int vcn_v3_0_start_sriov(struct amdgpu_device *adev)
+
+ static int vcn_v3_0_stop_dpg_mode(struct amdgpu_device *adev, int inst_idx)
+ {
++ struct dpg_pause_state state = {.fw_based = VCN_DPG_STATE__UNPAUSE};
+ uint32_t tmp;
+
++ vcn_v3_0_pause_dpg_mode(adev, 0, &state);
++
+ /* Wait for power status to be 1 */
+ SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
+ UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 2b65d0acae2ce..2fdbe2f475e4f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -480,15 +480,10 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ }
+
+ /* Verify module parameters regarding mapped process number*/
+- if ((hws_max_conc_proc < 0)
+- || (hws_max_conc_proc > kfd->vm_info.vmid_num_kfd)) {
+- dev_err(kfd_device,
+- "hws_max_conc_proc %d must be between 0 and %d, use %d instead\n",
+- hws_max_conc_proc, kfd->vm_info.vmid_num_kfd,
+- kfd->vm_info.vmid_num_kfd);
++ if (hws_max_conc_proc >= 0)
++ kfd->max_proc_per_quantum = min((u32)hws_max_conc_proc, kfd->vm_info.vmid_num_kfd);
++ else
+ kfd->max_proc_per_quantum = kfd->vm_info.vmid_num_kfd;
+- } else
+- kfd->max_proc_per_quantum = hws_max_conc_proc;
+
+ /* calculate max size of mqds needed for queues */
+ size = max_num_of_queues_per_device *
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+index afe72dd11325d..6ca7e12bdab84 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+@@ -531,6 +531,8 @@ static struct kfd_event_waiter *alloc_event_waiters(uint32_t num_events)
+ event_waiters = kmalloc_array(num_events,
+ sizeof(struct kfd_event_waiter),
+ GFP_KERNEL);
++ if (!event_waiters)
++ return NULL;
+
+ for (i = 0; (event_waiters) && (i < num_events) ; i++) {
+ init_wait(&event_waiters[i].wait);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 90c017859ad42..24db2297857b4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2693,7 +2693,8 @@ static int dm_resume(void *handle)
+ * this is the case when traversing through already created
+ * MST connectors, should be skipped
+ */
+- if (aconnector->mst_port)
++ if (aconnector->dc_link &&
++ aconnector->dc_link->type == dc_connection_mst_branch)
+ continue;
+
+ mutex_lock(&aconnector->hpd_lock);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index ac3071e38e4a0..f0a97b82c33a3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1667,8 +1667,8 @@ bool dc_is_stream_unchanged(
+ if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
+ return false;
+
+- // Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks
+- if (old_stream->audio_info.mode_count != stream->audio_info.mode_count)
++ /*compare audio info*/
++ if (memcmp(&old_stream->audio_info, &stream->audio_info, sizeof(stream->audio_info)) != 0)
+ return false;
+
+ return true;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
+index f4f423d0b8c3f..80595d7f060c3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubbub.c
+@@ -940,6 +940,7 @@ static const struct hubbub_funcs hubbub1_funcs = {
+ .program_watermarks = hubbub1_program_watermarks,
+ .is_allow_self_refresh_enabled = hubbub1_is_allow_self_refresh_enabled,
+ .allow_self_refresh_control = hubbub1_allow_self_refresh_control,
++ .verify_allow_pstate_change_high = hubbub1_verify_allow_pstate_change_high,
+ };
+
+ void hubbub1_construct(struct hubbub *hubbub,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 2cefdd96d0cbb..8ca4c06ac5607 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1112,9 +1112,13 @@ static bool dcn10_hw_wa_force_recovery(struct dc *dc)
+
+ void dcn10_verify_allow_pstate_change_high(struct dc *dc)
+ {
++ struct hubbub *hubbub = dc->res_pool->hubbub;
+ static bool should_log_hw_state; /* prevent hw state log by default */
+
+- if (!hubbub1_verify_allow_pstate_change_high(dc->res_pool->hubbub)) {
++ if (!hubbub->funcs->verify_allow_pstate_change_high)
++ return;
++
++ if (!hubbub->funcs->verify_allow_pstate_change_high(hubbub)) {
+ int i = 0;
+
+ if (should_log_hw_state)
+@@ -1123,8 +1127,8 @@ void dcn10_verify_allow_pstate_change_high(struct dc *dc)
+ TRACE_DC_PIPE_STATE(pipe_ctx, i, MAX_PIPES);
+ BREAK_TO_DEBUGGER();
+ if (dcn10_hw_wa_force_recovery(dc)) {
+- /*check again*/
+- if (!hubbub1_verify_allow_pstate_change_high(dc->res_pool->hubbub))
++ /*check again*/
++ if (!hubbub->funcs->verify_allow_pstate_change_high(hubbub))
+ BREAK_TO_DEBUGGER();
+ }
+ }
+@@ -1497,6 +1501,9 @@ void dcn10_init_hw(struct dc *dc)
+ if (dc->config.power_down_display_on_boot)
+ dc_link_blank_all_dp_displays(dc);
+
++ if (hws->funcs.enable_power_gating_plane)
++ hws->funcs.enable_power_gating_plane(dc->hwseq, true);
++
+ /* If taking control over from VBIOS, we may want to optimize our first
+ * mode set, so we need to skip powering down pipes until we know which
+ * pipes we want to use.
+@@ -1549,8 +1556,6 @@ void dcn10_init_hw(struct dc *dc)
+
+ REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
+ }
+- if (hws->funcs.enable_power_gating_plane)
+- hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+
+ if (dc->clk_mgr->funcs->notify_wm_ranges)
+ dc->clk_mgr->funcs->notify_wm_ranges(dc->clk_mgr);
+@@ -2515,14 +2520,18 @@ void dcn10_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ struct mpc *mpc = dc->res_pool->mpc;
+ struct mpc_tree *mpc_tree_params = &(pipe_ctx->stream_res.opp->mpc_tree_params);
+
+- if (per_pixel_alpha)
+- blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
+- else
+- blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
+-
+ blnd_cfg.overlap_only = false;
+ blnd_cfg.global_gain = 0xff;
+
++ if (per_pixel_alpha && pipe_ctx->plane_state->global_alpha) {
++ blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA_COMBINED_GLOBAL_GAIN;
++ blnd_cfg.global_gain = pipe_ctx->plane_state->global_alpha_value;
++ } else if (per_pixel_alpha) {
++ blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
++ } else {
++ blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
++ }
++
+ if (pipe_ctx->plane_state->global_alpha)
+ blnd_cfg.global_alpha = pipe_ctx->plane_state->global_alpha_value;
+ else
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 4991e93e5308c..8a72b7007b9d1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -2313,14 +2313,18 @@ void dcn20_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ struct mpc *mpc = dc->res_pool->mpc;
+ struct mpc_tree *mpc_tree_params = &(pipe_ctx->stream_res.opp->mpc_tree_params);
+
+- if (per_pixel_alpha)
+- blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
+- else
+- blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
+-
+ blnd_cfg.overlap_only = false;
+ blnd_cfg.global_gain = 0xff;
+
++ if (per_pixel_alpha && pipe_ctx->plane_state->global_alpha) {
++ blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA_COMBINED_GLOBAL_GAIN;
++ blnd_cfg.global_gain = pipe_ctx->plane_state->global_alpha_value;
++ } else if (per_pixel_alpha) {
++ blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_PER_PIXEL_ALPHA;
++ } else {
++ blnd_cfg.alpha_mode = MPCC_ALPHA_BLEND_MODE_GLOBAL_ALPHA;
++ }
++
+ if (pipe_ctx->plane_state->global_alpha)
+ blnd_cfg.global_alpha = pipe_ctx->plane_state->global_alpha_value;
+ else
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubbub.c
+index f4414de96acc5..152c9c5733f1c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hubbub.c
+@@ -448,6 +448,7 @@ static const struct hubbub_funcs hubbub30_funcs = {
+ .program_watermarks = hubbub3_program_watermarks,
+ .allow_self_refresh_control = hubbub1_allow_self_refresh_control,
+ .is_allow_self_refresh_enabled = hubbub1_is_allow_self_refresh_enabled,
++ .verify_allow_pstate_change_high = hubbub1_verify_allow_pstate_change_high,
+ .force_wm_propagate_to_pipes = hubbub3_force_wm_propagate_to_pipes,
+ .force_pstate_change_control = hubbub3_force_pstate_change_control,
+ .init_watermarks = hubbub3_init_watermarks,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index 1db1ca19411d8..05dc0a3ae2a3b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -548,6 +548,9 @@ void dcn30_init_hw(struct dc *dc)
+ if (dc->config.power_down_display_on_boot)
+ dc_link_blank_all_dp_displays(dc);
+
++ if (hws->funcs.enable_power_gating_plane)
++ hws->funcs.enable_power_gating_plane(dc->hwseq, true);
++
+ /* If taking control over from VBIOS, we may want to optimize our first
+ * mode set, so we need to skip powering down pipes until we know which
+ * pipes we want to use.
+@@ -625,8 +628,6 @@ void dcn30_init_hw(struct dc *dc)
+
+ REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
+ }
+- if (hws->funcs.enable_power_gating_plane)
+- hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+
+ if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_hubbub.c
+index 1e3bd2e9cdcc4..a046664e20316 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_hubbub.c
+@@ -60,6 +60,7 @@ static const struct hubbub_funcs hubbub301_funcs = {
+ .program_watermarks = hubbub3_program_watermarks,
+ .allow_self_refresh_control = hubbub1_allow_self_refresh_control,
+ .is_allow_self_refresh_enabled = hubbub1_is_allow_self_refresh_enabled,
++ .verify_allow_pstate_change_high = hubbub1_verify_allow_pstate_change_high,
+ .force_wm_propagate_to_pipes = hubbub3_force_wm_propagate_to_pipes,
+ .force_pstate_change_control = hubbub3_force_pstate_change_control,
+ .hubbub_read_state = hubbub2_read_state,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c
+index 5e3bcaf12cac4..51c5f3685470a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hubbub.c
+@@ -949,6 +949,65 @@ static void hubbub31_get_dchub_ref_freq(struct hubbub *hubbub,
+ }
+ }
+
++static bool hubbub31_verify_allow_pstate_change_high(struct hubbub *hubbub)
++{
++ struct dcn20_hubbub *hubbub2 = TO_DCN20_HUBBUB(hubbub);
++
++ /*
++	 * Pstate latency is ~20us, so if we wait over 40us and pstate allow
++	 * is still not asserted, we are probably stuck and going to hang.
++ */
++ const unsigned int pstate_wait_timeout_us = 100;
++ const unsigned int pstate_wait_expected_timeout_us = 40;
++
++ static unsigned int max_sampled_pstate_wait_us; /* data collection */
++	static bool forced_pstate_allow; /* helps revert the workaround */
++
++ unsigned int debug_data = 0;
++ unsigned int i;
++
++ if (forced_pstate_allow) {
++		/* The previous call forced pstate allow to prevent a hang;
++		 * disable that override here so the real status can be
++		 * checked.
++ */
++ REG_UPDATE_2(DCHUBBUB_ARB_DRAM_STATE_CNTL,
++ DCHUBBUB_ARB_ALLOW_PSTATE_CHANGE_FORCE_VALUE, 0,
++ DCHUBBUB_ARB_ALLOW_PSTATE_CHANGE_FORCE_ENABLE, 0);
++ forced_pstate_allow = false;
++ }
++
++ REG_WRITE(DCHUBBUB_TEST_DEBUG_INDEX, hubbub2->debug_test_index_pstate);
++
++ for (i = 0; i < pstate_wait_timeout_us; i++) {
++ debug_data = REG_READ(DCHUBBUB_TEST_DEBUG_DATA);
++
++ /* Debug bit is specific to ASIC. */
++ if (debug_data & (1 << 26)) {
++ if (i > pstate_wait_expected_timeout_us)
++ DC_LOG_WARNING("pstate took longer than expected ~%dus\n", i);
++ return true;
++ }
++ if (max_sampled_pstate_wait_us < i)
++ max_sampled_pstate_wait_us = i;
++
++ udelay(1);
++ }
++
++ /* force pstate allow to prevent system hang
++ * and break to debugger to investigate
++ */
++ REG_UPDATE_2(DCHUBBUB_ARB_DRAM_STATE_CNTL,
++ DCHUBBUB_ARB_ALLOW_PSTATE_CHANGE_FORCE_VALUE, 1,
++ DCHUBBUB_ARB_ALLOW_PSTATE_CHANGE_FORCE_ENABLE, 1);
++ forced_pstate_allow = true;
++
++ DC_LOG_WARNING("pstate TEST_DEBUG_DATA: 0x%X\n",
++ debug_data);
++
++ return false;
++}
++
+ static const struct hubbub_funcs hubbub31_funcs = {
+ .update_dchub = hubbub2_update_dchub,
+ .init_dchub_sys_ctx = hubbub31_init_dchub_sys_ctx,
+@@ -961,6 +1020,7 @@ static const struct hubbub_funcs hubbub31_funcs = {
+ .program_watermarks = hubbub31_program_watermarks,
+ .allow_self_refresh_control = hubbub1_allow_self_refresh_control,
+ .is_allow_self_refresh_enabled = hubbub1_is_allow_self_refresh_enabled,
++ .verify_allow_pstate_change_high = hubbub31_verify_allow_pstate_change_high,
+ .program_det_size = dcn31_program_det_size,
+ .program_compbuf_size = dcn31_program_compbuf_size,
+ .init_crb = dcn31_init_crb,
+@@ -982,5 +1042,7 @@ void hubbub31_construct(struct dcn20_hubbub *hubbub31,
+ hubbub31->detile_buf_size = det_size_kb * 1024;
+ hubbub31->pixel_chunk_size = pixel_chunk_size_kb * 1024;
+ hubbub31->crb_size_segs = config_return_buffer_size_kb / DCN31_CRB_SEGMENT_SIZE_KB;
++
++ hubbub31->debug_test_index_pstate = 0x6;
+ }
+
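hubbub31_verify_allow_pstate_change_high() above follows a poll-warn-force shape: sample a debug bit once per microsecond, succeed as soon as it asserts (warning if it took longer than the ~40us expectation), and after 100 samples force the allow state so the system keeps running while the failure gets reported. A distilled sketch; read_status_bit(), delay_us(), log_warn() and force_allow() are hypothetical stand-ins:

#include <stdbool.h>

bool read_status_bit(void);		/* hypothetical HW read */
void delay_us(unsigned int us);		/* hypothetical delay */
void log_warn(const char *fmt, ...);	/* hypothetical logger */
void force_allow(void);			/* hypothetical override */

bool poll_allow_sketch(void)
{
	const unsigned int timeout_us = 100;	/* hard give-up point */
	const unsigned int expected_us = 40;	/* ~2x typical latency */
	unsigned int i;

	for (i = 0; i < timeout_us; i++) {
		if (read_status_bit()) {
			if (i > expected_us)
				log_warn("pstate allow slow: ~%uus", i);
			return true;
		}
		delay_us(1);
	}

	force_allow();	/* last-ditch recovery to avoid a hang */
	return false;
}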
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+index 1e156f3980656..bdc4467b40d79 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+@@ -200,6 +200,9 @@ void dcn31_init_hw(struct dc *dc)
+ if (dc->config.power_down_display_on_boot)
+ dc_link_blank_all_dp_displays(dc);
+
++ if (hws->funcs.enable_power_gating_plane)
++ hws->funcs.enable_power_gating_plane(dc->hwseq, true);
++
+ /* If taking control over from VBIOS, we may want to optimize our first
+ * mode set, so we need to skip powering down pipes until we know which
+ * pipes we want to use.
+@@ -249,8 +252,6 @@ void dcn31_init_hw(struct dc *dc)
+
+ REG_UPDATE(DCFCLK_CNTL, DCFCLK_GATE_DIS, 0);
+ }
+- if (hws->funcs.enable_power_gating_plane)
+- hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+
+ if (!dcb->funcs->is_accelerated_mode(dcb) && dc->res_pool->hubbub->funcs->init_watermarks)
+ dc->res_pool->hubbub->funcs->init_watermarks(dc->res_pool->hubbub);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index 0e11285035b63..f3933c9f57468 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -1011,7 +1011,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .max_downscale_src_width = 4096,/*upto true 4K*/
+ .disable_pplib_wm_range = false,
+ .scl_reset_length10 = true,
+- .sanity_checks = false,
++ .sanity_checks = true,
+ .underflow_assert_delay_us = 0xFFFFFFFF,
+ .dwb_fi_phase = -1, // -1 = disable,
+ .dmub_command_table = true,
+diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+index 9c74564cbd8de..8973d3a38f9c5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
++++ b/drivers/gpu/drm/amd/display/dc/dsc/dc_dsc.c
+@@ -864,11 +864,11 @@ static bool setup_dsc_config(
+ min_slices_h = inc_num_slices(dsc_common_caps.slice_caps, min_slices_h);
+ }
+
++ is_dsc_possible = (min_slices_h <= max_slices_h);
++
+ if (pic_width % min_slices_h != 0)
+ min_slices_h = 0; // DSC TODO: Maybe try increasing the number of slices first?
+
+- is_dsc_possible = (min_slices_h <= max_slices_h);
+-
+ if (min_slices_h == 0 && max_slices_h == 0)
+ is_dsc_possible = false;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+index 713f5558f5e17..9195dec294c2d 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+@@ -154,6 +154,8 @@ struct hubbub_funcs {
+ bool (*is_allow_self_refresh_enabled)(struct hubbub *hubbub);
+ void (*allow_self_refresh_control)(struct hubbub *hubbub, bool allow);
+
++ bool (*verify_allow_pstate_change_high)(struct hubbub *hubbub);
++
+ void (*apply_DEDCN21_147_wa)(struct hubbub *hubbub);
+
+ void (*force_wm_propagate_to_pipes)(struct hubbub *hubbub);
+diff --git a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+index 57f198de5e2cb..4e075b01d48bb 100644
+--- a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
++++ b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+@@ -100,7 +100,8 @@ enum vsc_packet_revision {
+ //PB7 = MD0
+ #define MASK_VTEM_MD0__VRR_EN 0x01
+ #define MASK_VTEM_MD0__M_CONST 0x02
+-#define MASK_VTEM_MD0__RESERVED2 0x0C
++#define MASK_VTEM_MD0__QMS_EN 0x04
++#define MASK_VTEM_MD0__RESERVED2 0x08
+ #define MASK_VTEM_MD0__FVA_FACTOR_M1 0xF0
+
+ //MD1
+@@ -109,7 +110,7 @@ enum vsc_packet_revision {
+ //MD2
+ #define MASK_VTEM_MD2__BASE_REFRESH_RATE_98 0x03
+ #define MASK_VTEM_MD2__RB 0x04
+-#define MASK_VTEM_MD2__RESERVED3 0xF8
++#define MASK_VTEM_MD2__NEXT_TFR 0xF8
+
+ //MD3
+ #define MASK_VTEM_MD3__BASE_REFRESH_RATE_07 0xFF
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 936a257b511c8..d270d9a918e80 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -67,7 +67,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
+ * mmap ioctl is disallowed for all discrete platforms,
+ * and for all platforms with GRAPHICS_VER > 12.
+ */
+- if (IS_DGFX(i915) || GRAPHICS_VER(i915) > 12)
++ if (IS_DGFX(i915) || GRAPHICS_VER_FULL(i915) > IP_VER(12, 0))
+ return -EOPNOTSUPP;
+
+ if (args->flags & ~(I915_MMAP_WC))
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 616be7265da4d..19622fb1fa35b 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1714,7 +1714,7 @@ a6xx_create_private_address_space(struct msm_gpu *gpu)
+ return ERR_CAST(mmu);
+
+ return msm_gem_address_space_create(mmu,
+- "gpu", 0x100000000ULL, 0x1ffffffffULL);
++ "gpu", 0x100000000ULL, SZ_4G);
+ }
+
+ static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 1d7f82e6eafea..af9c09c308601 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -551,6 +551,12 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+
+ mutex_unlock(&dp->event_mutex);
+
++ /*
++	 * Add the fail-safe mode outside the event_mutex scope
++	 * to avoid a potential circular lock with the drm thread
++ */
++ dp_panel_add_fail_safe_mode(dp->dp_display.connector);
++
+ /* uevent will complete connection part */
+ return 0;
+ };
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index f1418722c5492..26c3653c99ec9 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -151,6 +151,15 @@ static int dp_panel_update_modes(struct drm_connector *connector,
+ return rc;
+ }
+
++void dp_panel_add_fail_safe_mode(struct drm_connector *connector)
++{
++ /* fail safe edid */
++ mutex_lock(&connector->dev->mode_config.mutex);
++ if (drm_add_modes_noedid(connector, 640, 480))
++ drm_set_preferred_mode(connector, 640, 480);
++ mutex_unlock(&connector->dev->mode_config.mutex);
++}
++
+ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ struct drm_connector *connector)
+ {
+@@ -207,16 +216,7 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ goto end;
+ }
+
+- /* fail safe edid */
+- mutex_lock(&connector->dev->mode_config.mutex);
+- if (drm_add_modes_noedid(connector, 640, 480))
+- drm_set_preferred_mode(connector, 640, 480);
+- mutex_unlock(&connector->dev->mode_config.mutex);
+- } else {
+- /* always add fail-safe mode as backup mode */
+- mutex_lock(&connector->dev->mode_config.mutex);
+- drm_add_modes_noedid(connector, 640, 480);
+- mutex_unlock(&connector->dev->mode_config.mutex);
++ dp_panel_add_fail_safe_mode(connector);
+ }
+
+ if (panel->aux_cfg_update_done) {
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.h b/drivers/gpu/drm/msm/dp/dp_panel.h
+index 9023e5bb4b8b2..99739ea679a77 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.h
++++ b/drivers/gpu/drm/msm/dp/dp_panel.h
+@@ -59,6 +59,7 @@ int dp_panel_init_panel_info(struct dp_panel *dp_panel);
+ int dp_panel_deinit(struct dp_panel *dp_panel);
+ int dp_panel_timing_cfg(struct dp_panel *dp_panel);
+ void dp_panel_dump_regs(struct dp_panel *dp_panel);
++void dp_panel_add_fail_safe_mode(struct drm_connector *connector);
+ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ struct drm_connector *connector);
+ u32 dp_panel_get_mode_bpp(struct dp_panel *dp_panel, u32 mode_max_bpp,
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+index f19bae475c966..cd7b41b7d5180 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+@@ -641,7 +641,7 @@ struct drm_connector *msm_dsi_manager_connector_init(u8 id)
+ return connector;
+
+ fail:
+- connector->funcs->destroy(msm_dsi->connector);
++ connector->funcs->destroy(connector);
+ return ERR_PTR(ret);
+ }
+
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 02b9ae65a96a8..a4f61972667b5 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -926,6 +926,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
+ get_pid_task(aspace->pid, PIDTYPE_PID);
+ if (task) {
+ comm = kstrdup(task->comm, GFP_KERNEL);
++ put_task_struct(task);
+ } else {
+ comm = NULL;
+ }
+diff --git a/drivers/gpu/ipu-v3/ipu-di.c b/drivers/gpu/ipu-v3/ipu-di.c
+index 666223c6bec4d..0a34e0ab4fe60 100644
+--- a/drivers/gpu/ipu-v3/ipu-di.c
++++ b/drivers/gpu/ipu-v3/ipu-di.c
+@@ -447,8 +447,9 @@ static void ipu_di_config_clock(struct ipu_di *di,
+
+ error = rate / (sig->mode.pixelclock / 1000);
+
+- dev_dbg(di->ipu->dev, " IPU clock can give %lu with divider %u, error %d.%u%%\n",
+- rate, div, (signed)(error - 1000) / 10, error % 10);
++ dev_dbg(di->ipu->dev, " IPU clock can give %lu with divider %u, error %c%d.%d%%\n",
++ rate, div, error < 1000 ? '-' : '+',
++ abs(error - 1000) / 10, abs(error - 1000) % 10);
+
+ /* Allow a 1% error */
+ if (error < 1010 && error >= 990) {
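To see what the dev_dbg() change fixes, take error = 987 -- i.e. the achievable
rate is 98.7% of the requested pixel clock, since error is expressed in tenths
of a percent around a nominal 1000. The old format computed
(signed)(987 - 1000) / 10 = -1 for the integer part but paired it with the
unsigned remainder 987 % 10 = 7, logging "-1.7%". The new format prints the
sign as a separate character and derives both digits from abs(987 - 1000) = 13,
logging the correct "-1.3%".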
+diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
+index 439f99b8b5de2..3cf334c46c312 100644
+--- a/drivers/hv/hv_balloon.c
++++ b/drivers/hv/hv_balloon.c
+@@ -1653,6 +1653,38 @@ static void disable_page_reporting(void)
+ }
+ }
+
++static int ballooning_enabled(void)
++{
++ /*
++ * Disable ballooning if the page size is not 4k (HV_HYP_PAGE_SIZE),
++	 * since it is currently unclear whether an unballoon request can
++	 * guarantee that all page ranges are aligned to the guest page size.
++ */
++ if (PAGE_SIZE != HV_HYP_PAGE_SIZE) {
++ pr_info("Ballooning disabled because page size is not 4096 bytes\n");
++ return 0;
++ }
++
++ return 1;
++}
++
++static int hot_add_enabled(void)
++{
++ /*
++ * Disable hot add on ARM64, because we currently rely on
++	 * memory_add_physaddr_to_nid() to get the node id of a hot-add range;
++	 * however, ARM64's memory_add_physaddr_to_nid() always returns 0, and
++	 * DM_MEM_HOT_ADD_REQUEST doesn't carry the NUMA node information that
++	 * add_memory() needs.
++ */
++ if (IS_ENABLED(CONFIG_ARM64)) {
++ pr_info("Memory hot add disabled on ARM64\n");
++ return 0;
++ }
++
++ return 1;
++}
++
+ static int balloon_connect_vsp(struct hv_device *dev)
+ {
+ struct dm_version_request version_req;
+@@ -1724,8 +1756,8 @@ static int balloon_connect_vsp(struct hv_device *dev)
+ * currently still requires the bits to be set, so we have to add code
+ * to fail the host's hot-add and balloon up/down requests, if any.
+ */
+- cap_msg.caps.cap_bits.balloon = 1;
+- cap_msg.caps.cap_bits.hot_add = 1;
++ cap_msg.caps.cap_bits.balloon = ballooning_enabled();
++ cap_msg.caps.cap_bits.hot_add = hot_add_enabled();
+
+ /*
+ * Specify our alignment requirements as it relates
+diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
+index 181d16bbf49d7..820e814062519 100644
+--- a/drivers/hv/hv_common.c
++++ b/drivers/hv/hv_common.c
+@@ -20,6 +20,7 @@
+ #include <linux/panic_notifier.h>
+ #include <linux/ptrace.h>
+ #include <linux/slab.h>
++#include <linux/dma-map-ops.h>
+ #include <asm/hyperv-tlfs.h>
+ #include <asm/mshyperv.h>
+
+@@ -216,6 +217,16 @@ bool hv_query_ext_cap(u64 cap_query)
+ }
+ EXPORT_SYMBOL_GPL(hv_query_ext_cap);
+
++void hv_setup_dma_ops(struct device *dev, bool coherent)
++{
++ /*
++ * Hyper-V does not offer a vIOMMU in the guest
++ * VM, so pass 0/NULL for the IOMMU settings
++ */
++ arch_setup_dma_ops(dev, 0, 0, NULL, coherent);
++}
++EXPORT_SYMBOL_GPL(hv_setup_dma_ops);
++
+ bool hv_is_hibernation_supported(void)
+ {
+ return !hv_root_partition && acpi_sleep_state_supported(ACPI_STATE_S4);
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 71efacb909659..3d215d9dec433 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -439,7 +439,16 @@ int hv_ringbuffer_read(struct vmbus_channel *channel,
+ static u32 hv_pkt_iter_avail(const struct hv_ring_buffer_info *rbi)
+ {
+ u32 priv_read_loc = rbi->priv_read_index;
+- u32 write_loc = READ_ONCE(rbi->ring_buffer->write_index);
++ u32 write_loc;
++
++ /*
++ * The Hyper-V host writes the packet data, then uses
++ * store_release() to update the write_index. Use load_acquire()
++ * here to prevent loads of the packet data from being re-ordered
++ * before the read of the write_index and potentially getting
++ * stale data.
++ */
++ write_loc = virt_load_acquire(&rbi->ring_buffer->write_index);
+
+ if (write_loc >= priv_read_loc)
+ return write_loc - priv_read_loc;
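The acquire/release pairing that the new comment describes is easier to see in
isolation. Below is a minimal kernel-style C sketch of the idiom (the demo_ring
type and both helpers are illustrative, not VMbus code; it assumes
<linux/types.h>, <linux/errno.h> and <asm/barrier.h>, and uses the generic
smp_* barriers that the driver's virt_* wrappers fall back to):

struct demo_ring {
	u32 write_index;	/* published by the producer */
	u8  data[64];
};

/* Producer: fill the payload first, then publish the new index with
 * release semantics, so the payload stores cannot be reordered after
 * the index update.
 */
static void demo_produce(struct demo_ring *ring, u32 idx, u8 byte)
{
	ring->data[idx] = byte;
	smp_store_release(&ring->write_index, idx + 1);
}

/* Consumer: read the index with acquire semantics, so the payload load
 * below cannot be hoisted above it and observe stale data -- the exact
 * reordering hv_pkt_iter_avail() now guards against.
 */
static int demo_consume(struct demo_ring *ring, u32 read_index, u8 *out)
{
	u32 write_index = smp_load_acquire(&ring->write_index);

	if (read_index == write_index)
		return -EAGAIN;		/* nothing published yet */

	*out = ring->data[read_index];
	return 0;
}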
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 4bea1dfa41cdc..3cd0d3a44fa2e 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -77,8 +77,8 @@ static int hyperv_panic_event(struct notifier_block *nb, unsigned long val,
+
+ /*
+ * Hyper-V should be notified only once about a panic. If we will be
+- * doing hyperv_report_panic_msg() later with kmsg data, don't do
+- * the notification here.
++ * doing hv_kmsg_dump() with kmsg data later, don't do the notification
++ * here.
+ */
+ if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE
+ && hyperv_report_reg()) {
+@@ -100,8 +100,8 @@ static int hyperv_die_event(struct notifier_block *nb, unsigned long val,
+
+ /*
+ * Hyper-V should be notified only once about a panic. If we will be
+- * doing hyperv_report_panic_msg() later with kmsg data, don't do
+- * the notification here.
++ * doing hv_kmsg_dump() with kmsg data later, don't do the notification
++ * here.
+ */
+ if (hyperv_report_reg())
+ hyperv_report_panic(regs, val, true);
+@@ -920,6 +920,21 @@ static int vmbus_probe(struct device *child_device)
+ return ret;
+ }
+
++/*
++ * vmbus_dma_configure -- Configure DMA coherence for VMbus device
++ */
++static int vmbus_dma_configure(struct device *child_device)
++{
++ /*
++ * On ARM64, propagate the DMA coherence setting from the top level
++ * VMbus ACPI device to the child VMbus device being added here.
++ * On x86/x64 coherence is assumed and these calls have no effect.
++ */
++ hv_setup_dma_ops(child_device,
++ device_get_dma_attr(&hv_acpi_dev->dev) == DEV_DMA_COHERENT);
++ return 0;
++}
++
+ /*
+ * vmbus_remove - Remove a vmbus device
+ */
+@@ -1040,6 +1055,7 @@ static struct bus_type hv_bus = {
+ .remove = vmbus_remove,
+ .probe = vmbus_probe,
+ .uevent = vmbus_uevent,
++ .dma_configure = vmbus_dma_configure,
+ .dev_groups = vmbus_dev_groups,
+ .drv_groups = vmbus_drv_groups,
+ .bus_groups = vmbus_bus_groups,
+@@ -1546,14 +1562,20 @@ static int vmbus_bus_init(void)
+ if (ret)
+ goto err_connect;
+
++ if (hv_is_isolation_supported())
++ sysctl_record_panic_msg = 0;
++
+ /*
+ * Only register if the crash MSRs are available
+ */
+ if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
+ u64 hyperv_crash_ctl;
+ /*
+- * Sysctl registration is not fatal, since by default
+- * reporting is enabled.
++ * Panic message recording (sysctl_record_panic_msg)
++ * is enabled by default in non-isolated guests and
++ * disabled by default in isolated guests; the panic
++ * message recording won't be available in isolated
++ * guests should the following registration fail.
+ */
+ hv_ctl_table_hdr = register_sysctl_table(hv_root_table);
+ if (!hv_ctl_table_hdr)
+@@ -2429,6 +2451,21 @@ static int vmbus_acpi_add(struct acpi_device *device)
+
+ hv_acpi_dev = device;
+
++ /*
++ * Older versions of Hyper-V for ARM64 fail to include the _CCA
++ * method on the top level VMbus device in the DSDT. But devices
++ * are hardware coherent in all current Hyper-V use cases, so fix
++ * up the ACPI device to behave as if _CCA is present and indicates
++ * hardware coherence.
++ */
++ ACPI_COMPANION_SET(&device->dev, device);
++ if (IS_ENABLED(CONFIG_ACPI_CCA_REQUIRED) &&
++ device_get_dma_attr(&device->dev) == DEV_DMA_NOT_SUPPORTED) {
++ pr_info("No ACPI _CCA found; assuming coherent device I/O\n");
++ device->flags.cca_seen = true;
++ device->flags.coherent_dma = true;
++ }
++
+ result = acpi_walk_resources(device->handle, METHOD_NAME__CRS,
+ vmbus_walk_resources, NULL);
+
+diff --git a/drivers/i2c/busses/i2c-pasemi-core.c b/drivers/i2c/busses/i2c-pasemi-core.c
+index 7728c8460dc0f..9028ffb58cc07 100644
+--- a/drivers/i2c/busses/i2c-pasemi-core.c
++++ b/drivers/i2c/busses/i2c-pasemi-core.c
+@@ -137,6 +137,12 @@ static int pasemi_i2c_xfer_msg(struct i2c_adapter *adapter,
+
+ TXFIFO_WR(smbus, msg->buf[msg->len-1] |
+ (stop ? MTXFIFO_STOP : 0));
++
++ if (stop) {
++ err = pasemi_smb_waitready(smbus);
++ if (err)
++ goto reset_out;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index cf5d049342ead..6fd2b6718b086 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -668,16 +668,21 @@ static int i2cdev_attach_adapter(struct device *dev, void *dummy)
+ i2c_dev->dev.class = i2c_dev_class;
+ i2c_dev->dev.parent = &adap->dev;
+ i2c_dev->dev.release = i2cdev_dev_release;
+- dev_set_name(&i2c_dev->dev, "i2c-%d", adap->nr);
++
++ res = dev_set_name(&i2c_dev->dev, "i2c-%d", adap->nr);
++ if (res)
++ goto err_put_i2c_dev;
+
+ res = cdev_device_add(&i2c_dev->cdev, &i2c_dev->dev);
+- if (res) {
+- put_i2c_dev(i2c_dev, false);
+- return res;
+- }
++ if (res)
++ goto err_put_i2c_dev;
+
+ pr_debug("adapter [%s] registered as minor %d\n", adap->name, adap->nr);
+ return 0;
++
++err_put_i2c_dev:
++ put_i2c_dev(i2c_dev, false);
++ return res;
+ }
+
+ static int i2cdev_detach_adapter(struct device *dev, void *dummy)
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 9399006dbc546..ffe50be8b6875 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -4400,6 +4400,7 @@ try_smaller_buffer:
+ }
+
+ if (ic->internal_hash) {
++ size_t recalc_tags_size;
+ ic->recalc_wq = alloc_workqueue("dm-integrity-recalc", WQ_MEM_RECLAIM, 1);
+ if (!ic->recalc_wq ) {
+ ti->error = "Cannot allocate workqueue";
+@@ -4413,8 +4414,10 @@ try_smaller_buffer:
+ r = -ENOMEM;
+ goto bad;
+ }
+- ic->recalc_tags = kvmalloc_array(RECALC_SECTORS >> ic->sb->log2_sectors_per_block,
+- ic->tag_size, GFP_KERNEL);
++ recalc_tags_size = (RECALC_SECTORS >> ic->sb->log2_sectors_per_block) * ic->tag_size;
++ if (crypto_shash_digestsize(ic->internal_hash) > ic->tag_size)
++ recalc_tags_size += crypto_shash_digestsize(ic->internal_hash) - ic->tag_size;
++ ic->recalc_tags = kvmalloc(recalc_tags_size, GFP_KERNEL);
+ if (!ic->recalc_tags) {
+ ti->error = "Cannot allocate tags for recalculating";
+ r = -ENOMEM;
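The sizing arithmetic above deserves a note: the recalculation loop writes a
full crypto_shash digest per block but the tag array only accounts tag_size
bytes per slot, so when the digest is wider than the tag -- for example, a
hypothetical 20-byte digest truncated to 4-byte tags -- the digest written for
the last block overruns its slot by digestsize - tag_size bytes (16 here).
Switching from kvmalloc_array() to an explicitly computed size lets the buffer
carry exactly that much slack at the end.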
+diff --git a/drivers/md/dm-ps-historical-service-time.c b/drivers/md/dm-ps-historical-service-time.c
+index 875bca30a0dd5..82f2a06153dc0 100644
+--- a/drivers/md/dm-ps-historical-service-time.c
++++ b/drivers/md/dm-ps-historical-service-time.c
+@@ -27,7 +27,6 @@
+ #include <linux/blkdev.h>
+ #include <linux/slab.h>
+ #include <linux/module.h>
+-#include <linux/sched/clock.h>
+
+
+ #define DM_MSG_PREFIX "multipath historical-service-time"
+@@ -433,7 +432,7 @@ static struct dm_path *hst_select_path(struct path_selector *ps,
+ {
+ struct selector *s = ps->context;
+ struct path_info *pi = NULL, *best = NULL;
+- u64 time_now = sched_clock();
++ u64 time_now = ktime_get_ns();
+ struct dm_path *ret = NULL;
+ unsigned long flags;
+
+@@ -474,7 +473,7 @@ static int hst_start_io(struct path_selector *ps, struct dm_path *path,
+
+ static u64 path_service_time(struct path_info *pi, u64 start_time)
+ {
+- u64 sched_now = ktime_get_ns();
++ u64 now = ktime_get_ns();
+
+ /* if a previous disk request has finished after this IO was
+ * sent to the hardware, pretend the submission happened
+@@ -483,11 +482,11 @@ static u64 path_service_time(struct path_info *pi, u64 start_time)
+ if (time_after64(pi->last_finish, start_time))
+ start_time = pi->last_finish;
+
+- pi->last_finish = sched_now;
+- if (time_before64(sched_now, start_time))
++ pi->last_finish = now;
++ if (time_before64(now, start_time))
+ return 0;
+
+- return sched_now - start_time;
++ return now - start_time;
+ }
+
+ static int hst_end_io(struct path_selector *ps, struct dm_path *path,
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 4de5e8d2b261b..3d3d1062e2122 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -892,7 +892,7 @@ static int rga_probe(struct platform_device *pdev)
+ }
+ rga->dst_mmu_pages =
+ (unsigned int *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 3);
+- if (rga->dst_mmu_pages) {
++ if (!rga->dst_mmu_pages) {
+ ret = -ENOMEM;
+ goto free_src_pages;
+ }
+diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c
+index 47029746b89ee..0de587b412d4e 100644
+--- a/drivers/media/tuners/si2157.c
++++ b/drivers/media/tuners/si2157.c
+@@ -77,16 +77,16 @@ err_mutex_unlock:
+ }
+
+ static const struct si2157_tuner_info si2157_tuners[] = {
+- { SI2141, false, 0x60, SI2141_60_FIRMWARE, SI2141_A10_FIRMWARE },
+- { SI2141, false, 0x61, SI2141_61_FIRMWARE, SI2141_A10_FIRMWARE },
+- { SI2146, false, 0x11, SI2146_11_FIRMWARE, NULL },
+- { SI2147, false, 0x50, SI2147_50_FIRMWARE, NULL },
+- { SI2148, true, 0x32, SI2148_32_FIRMWARE, SI2158_A20_FIRMWARE },
+- { SI2148, true, 0x33, SI2148_33_FIRMWARE, SI2158_A20_FIRMWARE },
+- { SI2157, false, 0x50, SI2157_50_FIRMWARE, SI2157_A30_FIRMWARE },
+- { SI2158, false, 0x50, SI2158_50_FIRMWARE, SI2158_A20_FIRMWARE },
+- { SI2158, false, 0x51, SI2158_51_FIRMWARE, SI2158_A20_FIRMWARE },
+- { SI2177, false, 0x50, SI2177_50_FIRMWARE, SI2157_A30_FIRMWARE },
++ { SI2141, 0x60, false, SI2141_60_FIRMWARE, SI2141_A10_FIRMWARE },
++ { SI2141, 0x61, false, SI2141_61_FIRMWARE, SI2141_A10_FIRMWARE },
++ { SI2146, 0x11, false, SI2146_11_FIRMWARE, NULL },
++ { SI2147, 0x50, false, SI2147_50_FIRMWARE, NULL },
++ { SI2148, 0x32, true, SI2148_32_FIRMWARE, SI2158_A20_FIRMWARE },
++ { SI2148, 0x33, true, SI2148_33_FIRMWARE, SI2158_A20_FIRMWARE },
++ { SI2157, 0x50, false, SI2157_50_FIRMWARE, SI2157_A30_FIRMWARE },
++ { SI2158, 0x50, false, SI2158_50_FIRMWARE, SI2158_A20_FIRMWARE },
++ { SI2158, 0x51, false, SI2158_51_FIRMWARE, SI2158_A20_FIRMWARE },
++ { SI2177, 0x50, false, SI2177_50_FIRMWARE, SI2157_A30_FIRMWARE },
+ };
+
+ static int si2157_load_firmware(struct dvb_frontend *fe,
+@@ -178,7 +178,7 @@ static int si2157_find_and_load_firmware(struct dvb_frontend *fe)
+ }
+ }
+
+- if (!fw_name && !fw_alt_name) {
++ if (required && !fw_name && !fw_alt_name) {
+ dev_err(&client->dev,
+ "unknown chip version Si21%d-%c%c%c ROM 0x%02x\n",
+ part_id, cmd.args[1], cmd.args[3], cmd.args[4], rom_id);
+diff --git a/drivers/memory/atmel-ebi.c b/drivers/memory/atmel-ebi.c
+index c267283b01fda..e749dcb3ddea9 100644
+--- a/drivers/memory/atmel-ebi.c
++++ b/drivers/memory/atmel-ebi.c
+@@ -544,20 +544,27 @@ static int atmel_ebi_probe(struct platform_device *pdev)
+ smc_np = of_parse_phandle(dev->of_node, "atmel,smc", 0);
+
+ ebi->smc.regmap = syscon_node_to_regmap(smc_np);
+- if (IS_ERR(ebi->smc.regmap))
+- return PTR_ERR(ebi->smc.regmap);
++ if (IS_ERR(ebi->smc.regmap)) {
++ ret = PTR_ERR(ebi->smc.regmap);
++ goto put_node;
++ }
+
+ ebi->smc.layout = atmel_hsmc_get_reg_layout(smc_np);
+- if (IS_ERR(ebi->smc.layout))
+- return PTR_ERR(ebi->smc.layout);
++ if (IS_ERR(ebi->smc.layout)) {
++ ret = PTR_ERR(ebi->smc.layout);
++ goto put_node;
++ }
+
+ ebi->smc.clk = of_clk_get(smc_np, 0);
+ if (IS_ERR(ebi->smc.clk)) {
+- if (PTR_ERR(ebi->smc.clk) != -ENOENT)
+- return PTR_ERR(ebi->smc.clk);
++ if (PTR_ERR(ebi->smc.clk) != -ENOENT) {
++ ret = PTR_ERR(ebi->smc.clk);
++ goto put_node;
++ }
+
+ ebi->smc.clk = NULL;
+ }
++ of_node_put(smc_np);
+ ret = clk_prepare_enable(ebi->smc.clk);
+ if (ret)
+ return ret;
+@@ -608,6 +615,10 @@ static int atmel_ebi_probe(struct platform_device *pdev)
+ }
+
+ return of_platform_populate(np, NULL, NULL, dev);
++
++put_node:
++ of_node_put(smc_np);
++ return ret;
+ }
+
+ static __maybe_unused int atmel_ebi_resume(struct device *dev)
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index e4cc64f560196..2e545f473cc68 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -651,6 +651,7 @@ static int rpcif_probe(struct platform_device *pdev)
+ struct platform_device *vdev;
+ struct device_node *flash;
+ const char *name;
++ int ret;
+
+ flash = of_get_next_child(pdev->dev.of_node, NULL);
+ if (!flash) {
+@@ -674,7 +675,14 @@ static int rpcif_probe(struct platform_device *pdev)
+ return -ENOMEM;
+ vdev->dev.parent = &pdev->dev;
+ platform_set_drvdata(pdev, vdev);
+- return platform_device_add(vdev);
++
++ ret = platform_device_add(vdev);
++ if (ret) {
++ platform_device_put(vdev);
++ return ret;
++ }
++
++ return 0;
+ }
+
+ static int rpcif_remove(struct platform_device *pdev)
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 9957772201d58..c414d9e9d7c09 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -599,6 +599,8 @@ static int felix_change_tag_protocol(struct dsa_switch *ds, int cpu,
+ struct ocelot *ocelot = ds->priv;
+ struct felix *felix = ocelot_to_felix(ocelot);
+ enum dsa_tag_protocol old_proto = felix->tag_proto;
++ bool cpu_port_active = false;
++ struct dsa_port *dp;
+ int err;
+
+ if (proto != DSA_TAG_PROTO_SEVILLE &&
+@@ -606,6 +608,27 @@ static int felix_change_tag_protocol(struct dsa_switch *ds, int cpu,
+ proto != DSA_TAG_PROTO_OCELOT_8021Q)
+ return -EPROTONOSUPPORT;
+
++ /* We don't support multiple CPU ports, yet the DT blob may have
++ * multiple CPU ports defined. The first CPU port is the active one,
++ * the others are inactive. In this case, DSA will call
++ * ->change_tag_protocol() multiple times, once per CPU port.
++	 * Since we implement the tagging protocol change towards "ocelot" or
++	 * "seville" by reinitializing the NPI port, honoring every call would
++	 * keep redefining the NPI port to match the last @cpu argument
++	 * passed, which is an unused DSA CPU port and not the one that
++	 * should actively pass traffic.
++ * Suppress DSA's calls on CPU ports that are inactive.
++ */
++ dsa_switch_for_each_user_port(dp, ds) {
++ if (dp->cpu_dp->index == cpu) {
++ cpu_port_active = true;
++ break;
++ }
++ }
++
++ if (!cpu_port_active)
++ return 0;
++
+ felix_del_tag_protocol(ds, cpu, old_proto);
+
+ err = felix_set_tag_protocol(ds, cpu, proto);
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 2875b52508567..443d34ce2853f 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -2328,7 +2328,7 @@ static int felix_pci_probe(struct pci_dev *pdev,
+
+ err = dsa_register_switch(ds);
+ if (err) {
+- dev_err(&pdev->dev, "Failed to register DSA switch: %d\n", err);
++ dev_err_probe(&pdev->dev, err, "Failed to register DSA switch\n");
+ goto err_register_ds;
+ }
+
+diff --git a/drivers/net/dsa/realtek/Kconfig b/drivers/net/dsa/realtek/Kconfig
+index 1c62212fb0ecb..1315896ed6e2a 100644
+--- a/drivers/net/dsa/realtek/Kconfig
++++ b/drivers/net/dsa/realtek/Kconfig
+@@ -14,6 +14,7 @@ menuconfig NET_DSA_REALTEK
+ config NET_DSA_REALTEK_SMI
+ tristate "Realtek SMI connected switch driver"
+ depends on NET_DSA_REALTEK
++ depends on OF
+ default y
+ help
+ Select to enable support for registering switches connected
+diff --git a/drivers/net/dsa/realtek/realtek-smi-core.c b/drivers/net/dsa/realtek/realtek-smi-core.c
+index aae46ada8d839..a9c21f9e33709 100644
+--- a/drivers/net/dsa/realtek/realtek-smi-core.c
++++ b/drivers/net/dsa/realtek/realtek-smi-core.c
+@@ -315,7 +315,21 @@ static int realtek_smi_read(void *ctx, u32 reg, u32 *val)
+ return realtek_smi_read_reg(smi, reg, val);
+ }
+
+-static const struct regmap_config realtek_smi_mdio_regmap_config = {
++static void realtek_smi_lock(void *ctx)
++{
++ struct realtek_smi *smi = ctx;
++
++ mutex_lock(&smi->map_lock);
++}
++
++static void realtek_smi_unlock(void *ctx)
++{
++ struct realtek_smi *smi = ctx;
++
++ mutex_unlock(&smi->map_lock);
++}
++
++static const struct regmap_config realtek_smi_regmap_config = {
+ .reg_bits = 10, /* A4..A0 R4..R0 */
+ .val_bits = 16,
+ .reg_stride = 1,
+@@ -325,6 +339,21 @@ static const struct regmap_config realtek_smi_mdio_regmap_config = {
+ .reg_read = realtek_smi_read,
+ .reg_write = realtek_smi_write,
+ .cache_type = REGCACHE_NONE,
++ .lock = realtek_smi_lock,
++ .unlock = realtek_smi_unlock,
++};
++
++static const struct regmap_config realtek_smi_nolock_regmap_config = {
++ .reg_bits = 10, /* A4..A0 R4..R0 */
++ .val_bits = 16,
++ .reg_stride = 1,
++ /* PHY regs are at 0x8000 */
++ .max_register = 0xffff,
++ .reg_format_endian = REGMAP_ENDIAN_BIG,
++ .reg_read = realtek_smi_read,
++ .reg_write = realtek_smi_write,
++ .cache_type = REGCACHE_NONE,
++ .disable_locking = true,
+ };
+
+ static int realtek_smi_mdio_read(struct mii_bus *bus, int addr, int regnum)
+@@ -388,6 +417,7 @@ static int realtek_smi_probe(struct platform_device *pdev)
+ const struct realtek_smi_variant *var;
+ struct device *dev = &pdev->dev;
+ struct realtek_smi *smi;
++ struct regmap_config rc;
+ struct device_node *np;
+ int ret;
+
+@@ -398,14 +428,26 @@ static int realtek_smi_probe(struct platform_device *pdev)
+ if (!smi)
+ return -ENOMEM;
+ smi->chip_data = (void *)smi + sizeof(*smi);
+- smi->map = devm_regmap_init(dev, NULL, smi,
+- &realtek_smi_mdio_regmap_config);
++
++ mutex_init(&smi->map_lock);
++
++ rc = realtek_smi_regmap_config;
++ rc.lock_arg = smi;
++ smi->map = devm_regmap_init(dev, NULL, smi, &rc);
+ if (IS_ERR(smi->map)) {
+ ret = PTR_ERR(smi->map);
+ dev_err(dev, "regmap init failed: %d\n", ret);
+ return ret;
+ }
+
++ rc = realtek_smi_nolock_regmap_config;
++ smi->map_nolock = devm_regmap_init(dev, NULL, smi, &rc);
++ if (IS_ERR(smi->map_nolock)) {
++ ret = PTR_ERR(smi->map_nolock);
++ dev_err(dev, "regmap init failed: %d\n", ret);
++ return ret;
++ }
++
+ /* Link forward and backward */
+ smi->dev = dev;
+ smi->clk_delay = var->clk_delay;
+diff --git a/drivers/net/dsa/realtek/realtek-smi-core.h b/drivers/net/dsa/realtek/realtek-smi-core.h
+index faed387d8db38..5fcad51e1984f 100644
+--- a/drivers/net/dsa/realtek/realtek-smi-core.h
++++ b/drivers/net/dsa/realtek/realtek-smi-core.h
+@@ -49,6 +49,8 @@ struct realtek_smi {
+ struct gpio_desc *mdc;
+ struct gpio_desc *mdio;
+ struct regmap *map;
++ struct regmap *map_nolock;
++ struct mutex map_lock;
+ struct mii_bus *slave_mii_bus;
+
+ unsigned int clk_delay;
+diff --git a/drivers/net/dsa/realtek/rtl8365mb.c b/drivers/net/dsa/realtek/rtl8365mb.c
+index 3b729544798b1..696c8906c74cb 100644
+--- a/drivers/net/dsa/realtek/rtl8365mb.c
++++ b/drivers/net/dsa/realtek/rtl8365mb.c
+@@ -565,7 +565,7 @@ static int rtl8365mb_phy_poll_busy(struct realtek_smi *smi)
+ {
+ u32 val;
+
+- return regmap_read_poll_timeout(smi->map,
++ return regmap_read_poll_timeout(smi->map_nolock,
+ RTL8365MB_INDIRECT_ACCESS_STATUS_REG,
+ val, !val, 10, 100);
+ }
+@@ -579,7 +579,7 @@ static int rtl8365mb_phy_ocp_prepare(struct realtek_smi *smi, int phy,
+ /* Set OCP prefix */
+ val = FIELD_GET(RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK, ocp_addr);
+ ret = regmap_update_bits(
+- smi->map, RTL8365MB_GPHY_OCP_MSB_0_REG,
++ smi->map_nolock, RTL8365MB_GPHY_OCP_MSB_0_REG,
+ RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK,
+ FIELD_PREP(RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK, val));
+ if (ret)
+@@ -592,8 +592,8 @@ static int rtl8365mb_phy_ocp_prepare(struct realtek_smi *smi, int phy,
+ ocp_addr >> 1);
+ val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK,
+ ocp_addr >> 6);
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG,
+- val);
++ ret = regmap_write(smi->map_nolock,
++ RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG, val);
+ if (ret)
+ return ret;
+
+@@ -606,36 +606,42 @@ static int rtl8365mb_phy_ocp_read(struct realtek_smi *smi, int phy,
+ u32 val;
+ int ret;
+
++ mutex_lock(&smi->map_lock);
++
+ ret = rtl8365mb_phy_poll_busy(smi);
+ if (ret)
+- return ret;
++ goto out;
+
+ ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Execute read operation */
+ val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
+ RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
+ FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
+ RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ);
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
++ ret = regmap_write(smi->map_nolock, RTL8365MB_INDIRECT_ACCESS_CTRL_REG,
++ val);
+ if (ret)
+- return ret;
++ goto out;
+
+ ret = rtl8365mb_phy_poll_busy(smi);
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Get PHY register data */
+- ret = regmap_read(smi->map, RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG,
+- &val);
++ ret = regmap_read(smi->map_nolock,
++ RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG, &val);
+ if (ret)
+- return ret;
++ goto out;
+
+ *data = val & 0xFFFF;
+
+- return 0;
++out:
++ mutex_unlock(&smi->map_lock);
++
++ return ret;
+ }
+
+ static int rtl8365mb_phy_ocp_write(struct realtek_smi *smi, int phy,
+@@ -644,32 +650,38 @@ static int rtl8365mb_phy_ocp_write(struct realtek_smi *smi, int phy,
+ u32 val;
+ int ret;
+
++ mutex_lock(&smi->map_lock);
++
+ ret = rtl8365mb_phy_poll_busy(smi);
+ if (ret)
+- return ret;
++ goto out;
+
+ ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Set PHY register data */
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG,
+- data);
++ ret = regmap_write(smi->map_nolock,
++ RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG, data);
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Execute write operation */
+ val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
+ RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
+ FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
+ RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE);
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
++ ret = regmap_write(smi->map_nolock, RTL8365MB_INDIRECT_ACCESS_CTRL_REG,
++ val);
+ if (ret)
+- return ret;
++ goto out;
+
+ ret = rtl8365mb_phy_poll_busy(smi);
+ if (ret)
+- return ret;
++ goto out;
++
++out:
++ mutex_unlock(&smi->map_lock);
+
+ return 0;
+ }
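The three realtek hunks are an instance of a stock regmap pattern: one mutex
shared by two regmaps over the same bus -- a default map whose lock/unlock
hooks take the mutex, and a "nolock" map (.disable_locking = true) for
multi-register sequences that already hold it. A condensed sketch of just that
pattern, with placeholder names (my_chip and its helpers are not the driver's
symbols; assumes <linux/regmap.h>, <linux/mutex.h> and <linux/err.h>):

struct my_chip {
	struct mutex lock;
	struct regmap *map;		/* takes chip->lock per access */
	struct regmap *map_nolock;	/* caller must hold chip->lock */
};

static void my_chip_lock(void *ctx)
{
	mutex_lock(&((struct my_chip *)ctx)->lock);
}

static void my_chip_unlock(void *ctx)
{
	mutex_unlock(&((struct my_chip *)ctx)->lock);
}

static int my_chip_init_regmaps(struct device *dev, struct my_chip *chip,
				const struct regmap_config *base)
{
	struct regmap_config rc;

	mutex_init(&chip->lock);

	rc = *base;			/* same reg_read/reg_write ops */
	rc.lock = my_chip_lock;
	rc.unlock = my_chip_unlock;
	rc.lock_arg = chip;
	chip->map = devm_regmap_init(dev, NULL, chip, &rc);
	if (IS_ERR(chip->map))
		return PTR_ERR(chip->map);

	rc = *base;
	rc.disable_locking = true;	/* no internal locking at all */
	chip->map_nolock = devm_regmap_init(dev, NULL, chip, &rc);

	return PTR_ERR_OR_ZERO(chip->map_nolock);
}

An indirect access sequence like rtl8365mb_phy_ocp_read()/_write() then takes
the mutex once, performs every poll/prepare/data step through the nolock map,
and unlocks at the end, so no unrelated register access can interleave with
the multi-step operation.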
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index bd5998012a876..2da804f84b480 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -76,7 +76,7 @@ static inline void bcmgenet_writel(u32 value, void __iomem *offset)
+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ __raw_writel(value, offset);
+ else
+- writel(value, offset);
++ writel_relaxed(value, offset);
+ }
+
+ static inline u32 bcmgenet_readl(void __iomem *offset)
+@@ -84,7 +84,7 @@ static inline u32 bcmgenet_readl(void __iomem *offset)
+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ return __raw_readl(offset);
+ else
+- return readl(offset);
++ return readl_relaxed(offset);
+ }
+
+ static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
+diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
+index d5356db7539a4..caf48023f8ea5 100644
+--- a/drivers/net/ethernet/faraday/ftgmac100.c
++++ b/drivers/net/ethernet/faraday/ftgmac100.c
+@@ -1835,11 +1835,6 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ priv->rxdes0_edorr_mask = BIT(30);
+ priv->txdes0_edotr_mask = BIT(30);
+ priv->is_aspeed = true;
+- /* Disable ast2600 problematic HW arbitration */
+- if (of_device_is_compatible(np, "aspeed,ast2600-mac")) {
+- iowrite32(FTGMAC100_TM_DEFAULT,
+- priv->base + FTGMAC100_OFFSET_TM);
+- }
+ } else {
+ priv->rxdes0_edorr_mask = BIT(15);
+ priv->txdes0_edotr_mask = BIT(15);
+@@ -1911,6 +1906,11 @@ static int ftgmac100_probe(struct platform_device *pdev)
+ err = ftgmac100_setup_clk(priv);
+ if (err)
+ goto err_phy_connect;
++
++ /* Disable ast2600 problematic HW arbitration */
++ if (of_device_is_compatible(np, "aspeed,ast2600-mac"))
++ iowrite32(FTGMAC100_TM_DEFAULT,
++ priv->base + FTGMAC100_OFFSET_TM);
+ }
+
+ /* Default ring sizes */
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index d10e9a8e8011f..f55ecb6727684 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -2817,7 +2817,6 @@ continue_reset:
+ running = adapter->state == __IAVF_RUNNING;
+
+ if (running) {
+- netdev->flags &= ~IFF_UP;
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+ adapter->link_up = false;
+@@ -2934,7 +2933,7 @@ continue_reset:
+ * to __IAVF_RUNNING
+ */
+ iavf_up_complete(adapter);
+- netdev->flags |= IFF_UP;
++
+ iavf_irq_enable(adapter, true);
+ } else {
+ iavf_change_state(adapter, __IAVF_DOWN);
+@@ -2950,10 +2949,8 @@ continue_reset:
+ reset_err:
+ mutex_unlock(&adapter->client_lock);
+ mutex_unlock(&adapter->crit_lock);
+- if (running) {
++ if (running)
+ iavf_change_state(adapter, __IAVF_RUNNING);
+- netdev->flags |= IFF_UP;
+- }
+ dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
+ iavf_close(netdev);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_arfs.c b/drivers/net/ethernet/intel/ice/ice_arfs.c
+index 5daade32ea625..fba178e076009 100644
+--- a/drivers/net/ethernet/intel/ice/ice_arfs.c
++++ b/drivers/net/ethernet/intel/ice/ice_arfs.c
+@@ -577,7 +577,7 @@ void ice_free_cpu_rx_rmap(struct ice_vsi *vsi)
+ {
+ struct net_device *netdev;
+
+- if (!vsi || vsi->type != ICE_VSI_PF || !vsi->arfs_fltr_list)
++ if (!vsi || vsi->type != ICE_VSI_PF)
+ return;
+
+ netdev = vsi->netdev;
+@@ -599,7 +599,7 @@ int ice_set_cpu_rx_rmap(struct ice_vsi *vsi)
+ int base_idx, i;
+
+ if (!vsi || vsi->type != ICE_VSI_PF)
+- return -EINVAL;
++ return 0;
+
+ pf = vsi->back;
+ netdev = vsi->netdev;
+@@ -636,7 +636,6 @@ void ice_remove_arfs(struct ice_pf *pf)
+ if (!pf_vsi)
+ return;
+
+- ice_free_cpu_rx_rmap(pf_vsi);
+ ice_clear_arfs(pf_vsi);
+ }
+
+@@ -653,9 +652,5 @@ void ice_rebuild_arfs(struct ice_pf *pf)
+ return;
+
+ ice_remove_arfs(pf);
+- if (ice_set_cpu_rx_rmap(pf_vsi)) {
+- dev_err(ice_pf_to_dev(pf), "Failed to rebuild aRFS\n");
+- return;
+- }
+ ice_init_arfs(pf_vsi);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 5fd2bbeab2d15..15bb6f001a04f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -2869,6 +2869,8 @@ void ice_vsi_free_irq(struct ice_vsi *vsi)
+ return;
+
+ vsi->irqs_ready = false;
++ ice_free_cpu_rx_rmap(vsi);
++
+ ice_for_each_q_vector(vsi, i) {
+ u16 vector = i + base;
+ int irq_num;
+@@ -2882,7 +2884,8 @@ void ice_vsi_free_irq(struct ice_vsi *vsi)
+ continue;
+
+ /* clear the affinity notifier in the IRQ descriptor */
+- irq_set_affinity_notifier(irq_num, NULL);
++ if (!IS_ENABLED(CONFIG_RFS_ACCEL))
++ irq_set_affinity_notifier(irq_num, NULL);
+
+ /* clear the affinity_mask in the IRQ descriptor */
+ irq_set_affinity_hint(irq_num, NULL);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index db2e02e673a77..2de2bbbca1e97 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2494,6 +2494,13 @@ static int ice_vsi_req_irq_msix(struct ice_vsi *vsi, char *basename)
+ irq_set_affinity_hint(irq_num, &q_vector->affinity_mask);
+ }
+
++ err = ice_set_cpu_rx_rmap(vsi);
++ if (err) {
++ netdev_err(vsi->netdev, "Failed to setup CPU RMAP on VSI %u: %pe\n",
++ vsi->vsi_num, ERR_PTR(err));
++ goto free_q_irqs;
++ }
++
+ vsi->irqs_ready = true;
+ return 0;
+
+@@ -3605,20 +3612,12 @@ static int ice_setup_pf_sw(struct ice_pf *pf)
+ */
+ ice_napi_add(vsi);
+
+- status = ice_set_cpu_rx_rmap(vsi);
+- if (status) {
+- dev_err(dev, "Failed to set CPU Rx map VSI %d error %d\n",
+- vsi->vsi_num, status);
+- goto unroll_napi_add;
+- }
+ status = ice_init_mac_fltr(pf);
+ if (status)
+- goto free_cpu_rx_map;
++ goto unroll_napi_add;
+
+ return 0;
+
+-free_cpu_rx_map:
+- ice_free_cpu_rx_rmap(vsi);
+ unroll_napi_add:
+ ice_tc_indir_block_unregister(vsi);
+ unroll_cfg_netdev:
+@@ -5076,7 +5075,6 @@ static int __maybe_unused ice_suspend(struct device *dev)
+ continue;
+ ice_vsi_free_q_vectors(pf->vsi[v]);
+ }
+- ice_free_cpu_rx_rmap(ice_get_main_vsi(pf));
+ ice_clear_interrupt_scheme(pf);
+
+ pci_save_state(pdev);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+index 939b692ffc335..ce843ea914646 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+@@ -650,6 +650,7 @@ static int mlxsw_i2c_probe(struct i2c_client *client,
+ return 0;
+
+ errout:
++ mutex_destroy(&mlxsw_i2c->cmd.lock);
+ i2c_set_clientdata(client, NULL);
+
+ return err;
+diff --git a/drivers/net/ethernet/micrel/Kconfig b/drivers/net/ethernet/micrel/Kconfig
+index 93df3049cdc05..1b632cdd76309 100644
+--- a/drivers/net/ethernet/micrel/Kconfig
++++ b/drivers/net/ethernet/micrel/Kconfig
+@@ -39,6 +39,7 @@ config KS8851
+ config KS8851_MLL
+ tristate "Micrel KS8851 MLL"
+ depends on HAS_IOMEM
++ depends on PTP_1588_CLOCK_OPTIONAL
+ select MII
+ select CRC32
+ select EEPROM_93CX6
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c b/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c
+index ce5970bdcc6a0..2679111ef6696 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_mac.c
+@@ -346,7 +346,8 @@ static void lan966x_mac_irq_process(struct lan966x *lan966x, u32 row,
+
+ lan966x_mac_process_raw_entry(&raw_entries[column],
+ mac, &vid, &dest_idx);
+- WARN_ON(dest_idx > lan966x->num_phys_ports);
++ if (WARN_ON(dest_idx > lan966x->num_phys_ports))
++ continue;
+
+ /* If the entry in SW is found, then there is nothing
+ * to do
+@@ -392,7 +393,8 @@ static void lan966x_mac_irq_process(struct lan966x *lan966x, u32 row,
+
+ lan966x_mac_process_raw_entry(&raw_entries[column],
+ mac, &vid, &dest_idx);
+- WARN_ON(dest_idx > lan966x->num_phys_ports);
++ if (WARN_ON(dest_idx > lan966x->num_phys_ports))
++ continue;
+
+ mac_entry = lan966x_mac_alloc_entry(mac, vid, dest_idx);
+ if (!mac_entry)
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c b/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
+index 7de55f6a4da80..3c987fd6b9e23 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_switchdev.c
+@@ -261,8 +261,7 @@ static int lan966x_port_prechangeupper(struct net_device *dev,
+
+ if (netif_is_bridge_master(info->upper_dev) && !info->linking)
+ switchdev_bridge_port_unoffload(port->dev, port,
+- &lan966x_switchdev_nb,
+- &lan966x_switchdev_blocking_nb);
++ NULL, NULL);
+
+ return NOTIFY_DONE;
+ }
+diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+index 50ac3ee2577a2..21d2645885cef 100644
+--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
++++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+@@ -2903,11 +2903,9 @@ static netdev_tx_t myri10ge_sw_tso(struct sk_buff *skb,
+ status = myri10ge_xmit(curr, dev);
+ if (status != 0) {
+ dev_kfree_skb_any(curr);
+- if (segs != NULL) {
+- curr = segs;
+- segs = next;
++ skb_list_walk_safe(next, curr, next) {
+ curr->next = NULL;
+- dev_kfree_skb_any(segs);
++ dev_kfree_skb_any(curr);
+ }
+ goto drop;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c
+index cd478d2cd871a..00f6d347eaf75 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c
++++ b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c
+@@ -57,10 +57,6 @@
+ #define TSE_PCS_USE_SGMII_ENA BIT(0)
+ #define TSE_PCS_IF_USE_SGMII 0x03
+
+-#define SGMII_ADAPTER_CTRL_REG 0x00
+-#define SGMII_ADAPTER_DISABLE 0x0001
+-#define SGMII_ADAPTER_ENABLE 0x0000
+-
+ #define AUTONEGO_LINK_TIMER 20
+
+ static int tse_pcs_reset(void __iomem *base, struct tse_pcs *pcs)
+@@ -202,12 +198,8 @@ void tse_pcs_fix_mac_speed(struct tse_pcs *pcs, struct phy_device *phy_dev,
+ unsigned int speed)
+ {
+ void __iomem *tse_pcs_base = pcs->tse_pcs_base;
+- void __iomem *sgmii_adapter_base = pcs->sgmii_adapter_base;
+ u32 val;
+
+- writew(SGMII_ADAPTER_ENABLE,
+- sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+-
+ pcs->autoneg = phy_dev->autoneg;
+
+ if (phy_dev->autoneg == AUTONEG_ENABLE) {
+diff --git a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h
+index 442812c0a4bdc..694ac25ef426b 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h
++++ b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h
+@@ -10,6 +10,10 @@
+ #include <linux/phy.h>
+ #include <linux/timer.h>
+
++#define SGMII_ADAPTER_CTRL_REG 0x00
++#define SGMII_ADAPTER_ENABLE 0x0000
++#define SGMII_ADAPTER_DISABLE 0x0001
++
+ struct tse_pcs {
+ struct device *dev;
+ void __iomem *tse_pcs_base;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index b7c2579c963b6..ac9e6c7a33b55 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -18,9 +18,6 @@
+
+ #include "altr_tse_pcs.h"
+
+-#define SGMII_ADAPTER_CTRL_REG 0x00
+-#define SGMII_ADAPTER_DISABLE 0x0001
+-
+ #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII 0x0
+ #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII 0x1
+ #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RMII 0x2
+@@ -62,16 +59,14 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ {
+ struct socfpga_dwmac *dwmac = (struct socfpga_dwmac *)priv;
+ void __iomem *splitter_base = dwmac->splitter_base;
+- void __iomem *tse_pcs_base = dwmac->pcs.tse_pcs_base;
+ void __iomem *sgmii_adapter_base = dwmac->pcs.sgmii_adapter_base;
+ struct device *dev = dwmac->dev;
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct phy_device *phy_dev = ndev->phydev;
+ u32 val;
+
+- if ((tse_pcs_base) && (sgmii_adapter_base))
+- writew(SGMII_ADAPTER_DISABLE,
+- sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
++ writew(SGMII_ADAPTER_DISABLE,
++ sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+
+ if (splitter_base) {
+ val = readl(splitter_base + EMAC_SPLITTER_CTRL_REG);
+@@ -93,7 +88,9 @@ static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
+ writel(val, splitter_base + EMAC_SPLITTER_CTRL_REG);
+ }
+
+- if (tse_pcs_base && sgmii_adapter_base)
++ writew(SGMII_ADAPTER_ENABLE,
++ sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
++ if (phy_dev)
+ tse_pcs_fix_mac_speed(&dwmac->pcs, phy_dev, speed);
+ }
+
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 90d96eb79984e..a960227f61da4 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2072,15 +2072,14 @@ static int axienet_probe(struct platform_device *pdev)
+ if (ret)
+ goto cleanup_clk;
+
+- lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+- if (lp->phy_node) {
+- ret = axienet_mdio_setup(lp);
+- if (ret)
+- dev_warn(&pdev->dev,
+- "error registering MDIO bus: %d\n", ret);
+- }
++ ret = axienet_mdio_setup(lp);
++ if (ret)
++ dev_warn(&pdev->dev,
++ "error registering MDIO bus: %d\n", ret);
++
+ if (lp->phy_mode == PHY_INTERFACE_MODE_SGMII ||
+ lp->phy_mode == PHY_INTERFACE_MODE_1000BASEX) {
++ lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0);
+ if (!lp->phy_node) {
+ dev_err(&pdev->dev, "phy-handle required for 1000BaseX/SGMII\n");
+ ret = -EINVAL;
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 6ef5f77be4d0a..c83664b28d890 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -460,8 +460,10 @@ static rx_handler_result_t macvlan_handle_frame(struct sk_buff **pskb)
+ return RX_HANDLER_CONSUMED;
+ *pskb = skb;
+ eth = eth_hdr(skb);
+- if (macvlan_forward_source(skb, port, eth->h_source))
++ if (macvlan_forward_source(skb, port, eth->h_source)) {
++ kfree_skb(skb);
+ return RX_HANDLER_CONSUMED;
++ }
+ src = macvlan_hash_lookup(port, eth->h_source);
+ if (src && src->mode != MACVLAN_MODE_VEPA &&
+ src->mode != MACVLAN_MODE_BRIDGE) {
+@@ -480,8 +482,10 @@ static rx_handler_result_t macvlan_handle_frame(struct sk_buff **pskb)
+ return RX_HANDLER_PASS;
+ }
+
+- if (macvlan_forward_source(skb, port, eth->h_source))
++ if (macvlan_forward_source(skb, port, eth->h_source)) {
++ kfree_skb(skb);
+ return RX_HANDLER_CONSUMED;
++ }
+ if (macvlan_passthru(port))
+ vlan = list_first_or_null_rcu(&port->vlans,
+ struct macvlan_dev, list);
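For context on the two hunks above: returning RX_HANDLER_CONSUMED tells the
core networking stack that the handler has taken ownership of the skb and it
will not be touched again. When macvlan_forward_source() reports the frame as
consumed, these paths stop processing without passing the skb on, so they must
free it themselves; before this change both call sites returned
RX_HANDLER_CONSUMED without the kfree_skb(), leaking the buffer.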
+diff --git a/drivers/net/mdio/fwnode_mdio.c b/drivers/net/mdio/fwnode_mdio.c
+index 1becb1a731f67..1c1584fca6327 100644
+--- a/drivers/net/mdio/fwnode_mdio.c
++++ b/drivers/net/mdio/fwnode_mdio.c
+@@ -43,6 +43,11 @@ int fwnode_mdiobus_phy_device_register(struct mii_bus *mdio,
+ int rc;
+
+ rc = fwnode_irq_get(child, 0);
++	/* Don't wait forever if the IRQ provider doesn't become available;
++	 * just fall back to poll mode
++ */
++ if (rc == -EPROBE_DEFER)
++ rc = driver_deferred_probe_check_state(&phy->mdio.dev);
+ if (rc == -EPROBE_DEFER)
+ return rc;
+
+diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
+index 98f586f910fb1..8ed4fcf70b9b2 100644
+--- a/drivers/net/slip/slip.c
++++ b/drivers/net/slip/slip.c
+@@ -469,7 +469,7 @@ static void sl_tx_timeout(struct net_device *dev, unsigned int txqueue)
+ spin_lock(&sl->lock);
+
+ if (netif_queue_stopped(dev)) {
+- if (!netif_running(dev))
++ if (!netif_running(dev) || !sl->tty)
+ goto out;
+
+ /* May be we must check transmitter timeout here ?
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index de999e0fedbca..aa78d7e00289a 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1106,7 +1106,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ /* NETIF_F_LLTX requires to do our own update of trans_start */
+ queue = netdev_get_tx_queue(dev, txq);
+- queue->trans_start = jiffies;
++ txq_trans_cond_update(queue);
+
+ /* Notify and wake up reader process */
+ if (tfile->flags & TUN_FASYNC)
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index ea06d10e1c21a..ca409d450a296 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -1102,10 +1102,15 @@ static int aqc111_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
+ if (start_of_descs != desc_offset)
+ goto err;
+
+- /* self check desc_offset from header*/
+- if (desc_offset >= skb_len)
++	/* Sanity-check desc_offset from the header and make sure that the
++	 * bounds of the metadata array are inside the SKB
++ */
++ if (pkt_count * 2 + desc_offset >= skb_len)
+ goto err;
+
++ /* Packets must not overlap the metadata array */
++ skb_trim(skb, desc_offset);
++
+ if (pkt_count == 0)
+ goto err;
+
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index d29fb9759cc95..6c8f4f4dfc8a9 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -320,7 +320,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ rcu_read_lock();
+ rcv = rcu_dereference(priv->peer);
+- if (unlikely(!rcv)) {
++ if (unlikely(!rcv) || !pskb_may_pull(skb, ETH_HLEN)) {
+ kfree_skb(skb);
+ goto drop;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 28de877ad6c47..f54d5819477a4 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -3131,6 +3131,20 @@ static void ath11k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
+ arvif->do_not_send_tmpl = true;
+ else
+ arvif->do_not_send_tmpl = false;
++
++ if (vif->bss_conf.he_support) {
++ ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
++ WMI_VDEV_PARAM_BA_MODE,
++ WMI_BA_MODE_BUFFER_SIZE_256);
++ if (ret)
++ ath11k_warn(ar->ab,
++ "failed to set BA BUFFER SIZE 256 for vdev: %d\n",
++ arvif->vdev_id);
++ else
++ ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
++ "Set BA BUFFER SIZE 256 for VDEV: %d\n",
++ arvif->vdev_id);
++ }
+ }
+
+ if (changed & (BSS_CHANGED_BEACON_INFO | BSS_CHANGED_BEACON)) {
+@@ -3166,14 +3180,6 @@ static void ath11k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
+
+ if (arvif->is_up && vif->bss_conf.he_support &&
+ vif->bss_conf.he_oper.params) {
+- ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+- WMI_VDEV_PARAM_BA_MODE,
+- WMI_BA_MODE_BUFFER_SIZE_256);
+- if (ret)
+- ath11k_warn(ar->ab,
+- "failed to set BA BUFFER SIZE 256 for vdev: %d\n",
+- arvif->vdev_id);
+-
+ param_id = WMI_VDEV_PARAM_HEOPS_0_31;
+ param_value = vif->bss_conf.he_oper.params;
+ ret = ath11k_wmi_vdev_set_param_cmd(ar, arvif->vdev_id,
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 98090e40e1cf4..e2791d45f5f59 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -839,7 +839,7 @@ static bool ath9k_txq_list_has_key(struct list_head *txq_list, u32 keyix)
+ continue;
+
+ txinfo = IEEE80211_SKB_CB(bf->bf_mpdu);
+- fi = (struct ath_frame_info *)&txinfo->rate_driver_data[0];
++ fi = (struct ath_frame_info *)&txinfo->status.status_driver_data[0];
+ if (fi->keyix == keyix)
+ return true;
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index d0caf1de2bdec..db83cc4ba810a 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -141,8 +141,8 @@ static struct ath_frame_info *get_frame_info(struct sk_buff *skb)
+ {
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ BUILD_BUG_ON(sizeof(struct ath_frame_info) >
+- sizeof(tx_info->rate_driver_data));
+- return (struct ath_frame_info *) &tx_info->rate_driver_data[0];
++ sizeof(tx_info->status.status_driver_data));
++ return (struct ath_frame_info *) &tx_info->status.status_driver_data[0];
+ }
+
+ static void ath_send_bar(struct ath_atx_tid *tid, u16 seqno)
+@@ -2542,6 +2542,16 @@ skip_tx_complete:
+ spin_unlock_irqrestore(&sc->tx.txbuflock, flags);
+ }
+
++static void ath_clear_tx_status(struct ieee80211_tx_info *tx_info)
++{
++ void *ptr = &tx_info->status;
++
++ memset(ptr + sizeof(tx_info->status.rates), 0,
++ sizeof(tx_info->status) -
++ sizeof(tx_info->status.rates) -
++ sizeof(tx_info->status.status_driver_data));
++}
++
+ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ struct ath_tx_status *ts, int nframes, int nbad,
+ int txok)
+@@ -2553,6 +2563,8 @@ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ struct ath_hw *ah = sc->sc_ah;
+ u8 i, tx_rateindex;
+
++ ath_clear_tx_status(tx_info);
++
+ if (txok)
+ tx_info->status.ack_signal = ts->ts_rssi;
+
+@@ -2567,6 +2579,13 @@ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ tx_info->status.ampdu_len = nframes;
+ tx_info->status.ampdu_ack_len = nframes - nbad;
+
++ tx_info->status.rates[tx_rateindex].count = ts->ts_longretry + 1;
++
++ for (i = tx_rateindex + 1; i < hw->max_rates; i++) {
++ tx_info->status.rates[i].count = 0;
++ tx_info->status.rates[i].idx = -1;
++ }
++
+ if ((ts->ts_status & ATH9K_TXERR_FILT) == 0 &&
+ (tx_info->flags & IEEE80211_TX_CTL_NO_ACK) == 0) {
+ /*
+@@ -2588,16 +2607,6 @@ static void ath_tx_rc_status(struct ath_softc *sc, struct ath_buf *bf,
+ tx_info->status.rates[tx_rateindex].count =
+ hw->max_rate_tries;
+ }
+-
+- for (i = tx_rateindex + 1; i < hw->max_rates; i++) {
+- tx_info->status.rates[i].count = 0;
+- tx_info->status.rates[i].idx = -1;
+- }
+-
+- tx_info->status.rates[tx_rateindex].count = ts->ts_longretry + 1;
+-
+- /* we report airtime in ath_tx_count_airtime(), don't report twice */
+- tx_info->status.tx_time = 0;
+ }
+
+ static void ath_tx_processq(struct ath_softc *sc, struct ath_txq *txq)
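Taken together, the ath9k changes move the driver's per-frame state
(struct ath_frame_info) out of rate_driver_data, which overlaps the status
report area of the tx_info union and so was being clobbered on the way to TX
status, and into status.status_driver_data, the last member of the status
struct. That placement is what makes the new ath_clear_tx_status() helper
work: its single memset() starts just past status.rates and stops short of
status_driver_data, wiping stale status fields while preserving both the rate
table (rewritten immediately afterwards) and the driver's per-frame state.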
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index ae0bc2fee4ca8..88b3b56d05228 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -3404,6 +3404,15 @@ static int hv_pci_probe(struct hv_device *hdev,
+ hbus->bridge->domain_nr = dom;
+ #ifdef CONFIG_X86
+ hbus->sysdata.domain = dom;
++#elif defined(CONFIG_ARM64)
++ /*
++ * Set the PCI bus parent to be the corresponding VMbus
++ * device. Then the VMbus device will be assigned as the
++ * ACPI companion in pcibios_root_bridge_prepare() and
++ * pci_dma_configure() will propagate device coherence
++ * information to devices created on the bus.
++ */
++ hbus->sysdata.parent = hdev->device.parent;
+ #endif
+
+ hbus->hdev = hdev;
+diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
+index 94ebc1ecace7c..b1b2a55de77fc 100644
+--- a/drivers/perf/fsl_imx8_ddr_perf.c
++++ b/drivers/perf/fsl_imx8_ddr_perf.c
+@@ -29,7 +29,7 @@
+ #define CNTL_OVER_MASK 0xFFFFFFFE
+
+ #define CNTL_CSV_SHIFT 24
+-#define CNTL_CSV_MASK (0xFF << CNTL_CSV_SHIFT)
++#define CNTL_CSV_MASK (0xFFU << CNTL_CSV_SHIFT)
+
+ #define EVENT_CYCLES_ID 0
+ #define EVENT_CYCLES_COUNTER 0
+diff --git a/drivers/regulator/wm8994-regulator.c b/drivers/regulator/wm8994-regulator.c
+index cadea0344486f..40befdd9dfa92 100644
+--- a/drivers/regulator/wm8994-regulator.c
++++ b/drivers/regulator/wm8994-regulator.c
+@@ -71,6 +71,35 @@ static const struct regulator_ops wm8994_ldo2_ops = {
+ };
+
+ static const struct regulator_desc wm8994_ldo_desc[] = {
++ {
++ .name = "LDO1",
++ .id = 1,
++ .type = REGULATOR_VOLTAGE,
++ .n_voltages = WM8994_LDO1_MAX_SELECTOR + 1,
++ .vsel_reg = WM8994_LDO_1,
++ .vsel_mask = WM8994_LDO1_VSEL_MASK,
++ .ops = &wm8994_ldo1_ops,
++ .min_uV = 2400000,
++ .uV_step = 100000,
++ .enable_time = 3000,
++ .off_on_delay = 36000,
++ .owner = THIS_MODULE,
++ },
++ {
++ .name = "LDO2",
++ .id = 2,
++ .type = REGULATOR_VOLTAGE,
++ .n_voltages = WM8994_LDO2_MAX_SELECTOR + 1,
++ .vsel_reg = WM8994_LDO_2,
++ .vsel_mask = WM8994_LDO2_VSEL_MASK,
++ .ops = &wm8994_ldo2_ops,
++ .enable_time = 3000,
++ .off_on_delay = 36000,
++ .owner = THIS_MODULE,
++ },
++};
++
++static const struct regulator_desc wm8958_ldo_desc[] = {
+ {
+ .name = "LDO1",
+ .id = 1,
+@@ -172,9 +201,16 @@ static int wm8994_ldo_probe(struct platform_device *pdev)
+ * regulator core and we need not worry about it on the
+ * error path.
+ */
+- ldo->regulator = devm_regulator_register(&pdev->dev,
+- &wm8994_ldo_desc[id],
+- &config);
++ if (ldo->wm8994->type == WM8994) {
++ ldo->regulator = devm_regulator_register(&pdev->dev,
++ &wm8994_ldo_desc[id],
++ &config);
++ } else {
++ ldo->regulator = devm_regulator_register(&pdev->dev,
++ &wm8958_ldo_desc[id],
++ &config);
++ }
++
+ if (IS_ERR(ldo->regulator)) {
+ ret = PTR_ERR(ldo->regulator);
+ dev_err(wm8994->dev, "Failed to register LDO%d: %d\n",
+diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+index 61f06f6885a56..89b9fbce7488a 100644
+--- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
++++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
+@@ -36,7 +36,7 @@
+
+ #define IBMVSCSIS_VERSION "v0.2"
+
+-#define INITIAL_SRP_LIMIT 800
++#define INITIAL_SRP_LIMIT 1024
+ #define DEFAULT_MAX_SECTORS 256
+ #define MAX_TXU 1024 * 1024
+
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 98cabe09c0404..8748c5996478f 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -897,6 +897,11 @@ enum lpfc_irq_chann_mode {
+ NHT_MODE,
+ };
+
++enum lpfc_hba_bit_flags {
++ FABRIC_COMANDS_BLOCKED,
++ HBA_PCI_ERR,
++};
++
+ struct lpfc_hba {
+ /* SCSI interface function jump table entries */
+ struct lpfc_io_buf * (*lpfc_get_scsi_buf)
+@@ -1025,7 +1030,6 @@ struct lpfc_hba {
+ * Firmware supports Forced Link Speed
+ * capability
+ */
+-#define HBA_PCI_ERR 0x80000 /* The PCI slot is offline */
+ #define HBA_FLOGI_ISSUED 0x100000 /* FLOGI was issued */
+ #define HBA_SHORT_CMF 0x200000 /* shorter CMF timer routine */
+ #define HBA_CGN_DAY_WRAP 0x400000 /* HBA Congestion info day wraps */
+@@ -1335,7 +1339,6 @@ struct lpfc_hba {
+ atomic_t fabric_iocb_count;
+ struct timer_list fabric_block_timer;
+ unsigned long bit_flags;
+-#define FABRIC_COMANDS_BLOCKED 0
+ atomic_t num_rsrc_err;
+ atomic_t num_cmd_success;
+ unsigned long last_rsrc_error_time;
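+
+The lpfc.h hunks move HBA_PCI_ERR and FABRIC_COMANDS_BLOCKED out of ad-hoc #define masks into bit numbers for the hba's bit_flags word, so later hunks can use the atomic test_bit()/set_bit()/test_and_set_bit() helpers instead of read-modify-write updates on hba_flag. A minimal userspace sketch of the pattern, using C11 atomics as a stand-in for the kernel helpers:
+
+	#include <stdatomic.h>
+	#include <stdio.h>
+
+	enum hba_bit_flags { BIT_FABRIC_CMDS_BLOCKED, BIT_PCI_ERR };
+
+	static atomic_ulong bit_flags;
+
+	/* returns the previous state of the bit, like test_and_set_bit() */
+	static int test_and_set_flag(int nr)
+	{
+		unsigned long mask = 1UL << nr;
+
+		return (atomic_fetch_or(&bit_flags, mask) & mask) != 0;
+	}
+
+	int main(void)
+	{
+		if (!test_and_set_flag(BIT_PCI_ERR))
+			puts("first PCI error: run reset preparation once");
+		if (test_and_set_flag(BIT_PCI_ERR))
+			puts("already handling PCI error state");
+		return 0;
+	}
+
+The test-and-set semantics are what let lpfc_io_error_detected_s4() further down run the reset preparation exactly once per error event.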
+diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h
+index 89e36bf14d8f8..d4340e5a3aac2 100644
+--- a/drivers/scsi/lpfc/lpfc_crtn.h
++++ b/drivers/scsi/lpfc/lpfc_crtn.h
+@@ -652,3 +652,6 @@ struct lpfc_vmid *lpfc_get_vmid_from_hashtable(struct lpfc_vport *vport,
+ uint32_t hash, uint8_t *buf);
+ void lpfc_vmid_vport_cleanup(struct lpfc_vport *vport);
+ int lpfc_issue_els_qfpa(struct lpfc_vport *vport);
++
++void lpfc_sli_rpi_release(struct lpfc_vport *vport,
++ struct lpfc_nodelist *ndlp);
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 816fc406135b3..0cba306de0dbf 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -109,8 +109,8 @@ lpfc_rport_invalid(struct fc_rport *rport)
+
+ ndlp = rdata->pnode;
+ if (!rdata->pnode) {
+- pr_err("**** %s: NULL ndlp on rport x%px SID x%x\n",
+- __func__, rport, rport->scsi_target_id);
++ pr_info("**** %s: NULL ndlp on rport x%px SID x%x\n",
++ __func__, rport, rport->scsi_target_id);
+ return -EINVAL;
+ }
+
+@@ -169,9 +169,10 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+
+ lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE,
+ "3181 dev_loss_callbk x%06x, rport x%px flg x%x "
+- "load_flag x%x refcnt %d\n",
++ "load_flag x%x refcnt %d state %d xpt x%x\n",
+ ndlp->nlp_DID, ndlp->rport, ndlp->nlp_flag,
+- vport->load_flag, kref_read(&ndlp->kref));
++ vport->load_flag, kref_read(&ndlp->kref),
++ ndlp->nlp_state, ndlp->fc4_xpt_flags);
+
+ /* Don't schedule a worker thread event if the vport is going down.
+ * The teardown process cleans up the node via lpfc_drop_node.
+@@ -181,6 +182,11 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ ndlp->rport = NULL;
+
+ ndlp->fc4_xpt_flags &= ~SCSI_XPT_REGD;
++	/* Clear NLP_XPT_REGD if the node is not registered
++	 * with nvme-fc.
++ */
++ if (ndlp->fc4_xpt_flags == NLP_XPT_REGD)
++ ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
+
+ /* Remove the node reference from remote_port_add now.
+ * The driver will not call remote_port_delete.
+@@ -225,18 +231,36 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ ndlp->rport = NULL;
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
+
+- /* We need to hold the node by incrementing the reference
+- * count until this queued work is done
+- */
+- evtp->evt_arg1 = lpfc_nlp_get(ndlp);
++ if (phba->worker_thread) {
++ /* We need to hold the node by incrementing the reference
++ * count until this queued work is done
++ */
++ evtp->evt_arg1 = lpfc_nlp_get(ndlp);
++
++ spin_lock_irqsave(&phba->hbalock, iflags);
++ if (evtp->evt_arg1) {
++ evtp->evt = LPFC_EVT_DEV_LOSS;
++ list_add_tail(&evtp->evt_listp, &phba->work_list);
++ lpfc_worker_wake_up(phba);
++ }
++ spin_unlock_irqrestore(&phba->hbalock, iflags);
++ } else {
++ lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE,
++ "3188 worker thread is stopped %s x%06x, "
++				 "rport x%px flg x%x load_flag x%x refcnt "
++ "%d\n", __func__, ndlp->nlp_DID,
++ ndlp->rport, ndlp->nlp_flag,
++ vport->load_flag, kref_read(&ndlp->kref));
++ if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD)) {
++ spin_lock_irqsave(&ndlp->lock, iflags);
++ /* Node is in dev loss. No further transaction. */
++ ndlp->nlp_flag &= ~NLP_IN_DEV_LOSS;
++ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ lpfc_disc_state_machine(vport, ndlp, NULL,
++ NLP_EVT_DEVICE_RM);
++ }
+
+- spin_lock_irqsave(&phba->hbalock, iflags);
+- if (evtp->evt_arg1) {
+- evtp->evt = LPFC_EVT_DEV_LOSS;
+- list_add_tail(&evtp->evt_listp, &phba->work_list);
+- lpfc_worker_wake_up(phba);
+ }
+- spin_unlock_irqrestore(&phba->hbalock, iflags);
+
+ return;
+ }
+@@ -503,11 +527,12 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+ "0203 Devloss timeout on "
+ "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x "
+- "NPort x%06x Data: x%x x%x x%x\n",
++ "NPort x%06x Data: x%x x%x x%x refcnt %d\n",
+ *name, *(name+1), *(name+2), *(name+3),
+ *(name+4), *(name+5), *(name+6), *(name+7),
+ ndlp->nlp_DID, ndlp->nlp_flag,
+- ndlp->nlp_state, ndlp->nlp_rpi);
++ ndlp->nlp_state, ndlp->nlp_rpi,
++ kref_read(&ndlp->kref));
+ } else {
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_TRACE_EVENT,
+ "0204 Devloss timeout on "
+@@ -755,18 +780,22 @@ lpfc_work_list_done(struct lpfc_hba *phba)
+ int free_evt;
+ int fcf_inuse;
+ uint32_t nlp_did;
++ bool hba_pci_err;
+
+ spin_lock_irq(&phba->hbalock);
+ while (!list_empty(&phba->work_list)) {
+ list_remove_head((&phba->work_list), evtp, typeof(*evtp),
+ evt_listp);
+ spin_unlock_irq(&phba->hbalock);
++ hba_pci_err = test_bit(HBA_PCI_ERR, &phba->bit_flags);
+ free_evt = 1;
+ switch (evtp->evt) {
+ case LPFC_EVT_ELS_RETRY:
+ ndlp = (struct lpfc_nodelist *) (evtp->evt_arg1);
+- lpfc_els_retry_delay_handler(ndlp);
+- free_evt = 0; /* evt is part of ndlp */
++ if (!hba_pci_err) {
++ lpfc_els_retry_delay_handler(ndlp);
++ free_evt = 0; /* evt is part of ndlp */
++ }
+ /* decrement the node reference count held
+ * for this queued work
+ */
+@@ -788,8 +817,10 @@ lpfc_work_list_done(struct lpfc_hba *phba)
+ break;
+ case LPFC_EVT_RECOVER_PORT:
+ ndlp = (struct lpfc_nodelist *)(evtp->evt_arg1);
+- lpfc_sli_abts_recover_port(ndlp->vport, ndlp);
+- free_evt = 0;
++ if (!hba_pci_err) {
++ lpfc_sli_abts_recover_port(ndlp->vport, ndlp);
++ free_evt = 0;
++ }
+ /* decrement the node reference count held for
+ * this queued work
+ */
+@@ -859,14 +890,18 @@ lpfc_work_done(struct lpfc_hba *phba)
+ struct lpfc_vport **vports;
+ struct lpfc_vport *vport;
+ int i;
++ bool hba_pci_err;
+
++ hba_pci_err = test_bit(HBA_PCI_ERR, &phba->bit_flags);
+ spin_lock_irq(&phba->hbalock);
+ ha_copy = phba->work_ha;
+ phba->work_ha = 0;
+ spin_unlock_irq(&phba->hbalock);
++ if (hba_pci_err)
++ ha_copy = 0;
+
+ /* First, try to post the next mailbox command to SLI4 device */
+- if (phba->pci_dev_grp == LPFC_PCI_DEV_OC)
++ if (phba->pci_dev_grp == LPFC_PCI_DEV_OC && !hba_pci_err)
+ lpfc_sli4_post_async_mbox(phba);
+
+ if (ha_copy & HA_ERATT) {
+@@ -886,7 +921,7 @@ lpfc_work_done(struct lpfc_hba *phba)
+ lpfc_handle_latt(phba);
+
+ /* Handle VMID Events */
+- if (lpfc_is_vmid_enabled(phba)) {
++ if (lpfc_is_vmid_enabled(phba) && !hba_pci_err) {
+ if (phba->pport->work_port_events &
+ WORKER_CHECK_VMID_ISSUE_QFPA) {
+ lpfc_check_vmid_qfpa_issue(phba);
+@@ -936,6 +971,8 @@ lpfc_work_done(struct lpfc_hba *phba)
+ work_port_events = vport->work_port_events;
+ vport->work_port_events &= ~work_port_events;
+ spin_unlock_irq(&vport->work_port_lock);
++ if (hba_pci_err)
++ continue;
+ if (work_port_events & WORKER_DISC_TMO)
+ lpfc_disc_timeout_handler(vport);
+ if (work_port_events & WORKER_ELS_TMO)
+@@ -1173,12 +1210,14 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ struct lpfc_vport **vports;
+ LPFC_MBOXQ_t *mb;
+ int i;
++ int offline;
+
+ if (phba->link_state == LPFC_LINK_DOWN)
+ return 0;
+
+ /* Block all SCSI stack I/Os */
+ lpfc_scsi_dev_block(phba);
++ offline = pci_channel_offline(phba->pcidev);
+
+ phba->defer_flogi_acc_flag = false;
+
+@@ -1219,7 +1258,7 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ lpfc_destroy_vport_work_array(phba, vports);
+
+ /* Clean up any SLI3 firmware default rpi's */
+- if (phba->sli_rev > LPFC_SLI_REV3)
++ if (phba->sli_rev > LPFC_SLI_REV3 || offline)
+ goto skip_unreg_did;
+
+ mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+@@ -4712,6 +4751,11 @@ lpfc_nlp_unreg_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ spin_lock_irqsave(&ndlp->lock, iflags);
+ if (!(ndlp->fc4_xpt_flags & NLP_XPT_REGD)) {
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
++ "0999 %s Not regd: ndlp x%px rport x%px DID "
++ "x%x FLG x%x XPT x%x\n",
++ __func__, ndlp, ndlp->rport, ndlp->nlp_DID,
++ ndlp->nlp_flag, ndlp->fc4_xpt_flags);
+ return;
+ }
+
+@@ -4722,6 +4766,13 @@ lpfc_nlp_unreg_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ ndlp->fc4_xpt_flags & SCSI_XPT_REGD) {
+ vport->phba->nport_event_cnt++;
+ lpfc_unregister_remote_port(ndlp);
++ } else if (!ndlp->rport) {
++ lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
++ "1999 %s NDLP in devloss x%px DID x%x FLG x%x"
++ " XPT x%x refcnt %d\n",
++ __func__, ndlp, ndlp->nlp_DID, ndlp->nlp_flag,
++ ndlp->fc4_xpt_flags,
++ kref_read(&ndlp->kref));
+ }
+
+ if (ndlp->fc4_xpt_flags & NVME_XPT_REGD) {
+@@ -5365,6 +5416,7 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ ndlp->nlp_flag &= ~NLP_UNREG_INP;
+ mempool_free(mbox, phba->mbox_mem_pool);
+ acc_plogi = 1;
++ lpfc_nlp_put(ndlp);
+ }
+ } else {
+ lpfc_printf_vlog(vport, KERN_INFO,
+@@ -6089,12 +6141,34 @@ lpfc_disc_flush_list(struct lpfc_vport *vport)
+ }
+ }
+
++/*
++ * lpfc_notify_xport_npr - notifies xport of node disappearance
++ * @vport: Pointer to Virtual Port object.
++ *
++ * Transitions all ndlps to NPR state. When lpfc_nlp_set_state
++ * calls lpfc_nlp_state_cleanup, the ndlp->rport is unregistered
++ * and the transport is notified that the node is gone.
++ * Return Code:
++ * none
++ */
++static void
++lpfc_notify_xport_npr(struct lpfc_vport *vport)
++{
++ struct lpfc_nodelist *ndlp, *next_ndlp;
++
++ list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes,
++ nlp_listp) {
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE);
++ }
++}
+ void
+ lpfc_cleanup_discovery_resources(struct lpfc_vport *vport)
+ {
+ lpfc_els_flush_rscn(vport);
+ lpfc_els_flush_cmd(vport);
+ lpfc_disc_flush_list(vport);
++ if (pci_channel_offline(vport->phba->pcidev))
++ lpfc_notify_xport_npr(vport);
+ }
+
+ /*****************************************************************************/
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 558f7d2559c4d..9569a7390f9d5 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -95,6 +95,7 @@ static void lpfc_sli4_oas_verify(struct lpfc_hba *phba);
+ static uint16_t lpfc_find_cpu_handle(struct lpfc_hba *, uint16_t, int);
+ static void lpfc_setup_bg(struct lpfc_hba *, struct Scsi_Host *);
+ static int lpfc_sli4_cgn_parm_chg_evt(struct lpfc_hba *);
++static void lpfc_sli4_prep_dev_for_reset(struct lpfc_hba *phba);
+
+ static struct scsi_transport_template *lpfc_transport_template = NULL;
+ static struct scsi_transport_template *lpfc_vport_transport_template = NULL;
+@@ -1652,7 +1653,7 @@ lpfc_sli4_offline_eratt(struct lpfc_hba *phba)
+ {
+ spin_lock_irq(&phba->hbalock);
+ if (phba->link_state == LPFC_HBA_ERROR &&
+- phba->hba_flag & HBA_PCI_ERR) {
++ test_bit(HBA_PCI_ERR, &phba->bit_flags)) {
+ spin_unlock_irq(&phba->hbalock);
+ return;
+ }
+@@ -1995,6 +1996,7 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
+ if (pci_channel_offline(phba->pcidev)) {
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "3166 pci channel is offline\n");
++ lpfc_sli_flush_io_rings(phba);
+ return;
+ }
+
+@@ -2983,6 +2985,22 @@ lpfc_cleanup(struct lpfc_vport *vport)
+ NLP_EVT_DEVICE_RM);
+ }
+
++ /* This is a special case flush to return all
++ * IOs before entering this loop. There are
++ * two points in the code where a flush is
++ * avoided if the FC_UNLOADING flag is set.
++ * one is in the multipool destroy,
++		 * One is in the multipool destroy
++		 * (this prevents a crash) and the other is
++		 * in the nvme abort handler (also prevents
++ * cases where the slot is still accessible.
++ * The flush here is only when the pci slot
++ * is offline.
++ */
++ if (vport->load_flag & FC_UNLOADING &&
++ pci_channel_offline(phba->pcidev))
++ lpfc_sli_flush_io_rings(vport->phba);
++
+ /* At this point, ALL ndlp's should be gone
+ * because of the previous NLP_EVT_DEVICE_RM.
+ * Lets wait for this to happen, if needed.
+@@ -2995,7 +3013,7 @@ lpfc_cleanup(struct lpfc_vport *vport)
+ list_for_each_entry_safe(ndlp, next_ndlp,
+ &vport->fc_nodes, nlp_listp) {
+ lpfc_printf_vlog(ndlp->vport, KERN_ERR,
+- LOG_TRACE_EVENT,
++ LOG_DISCOVERY,
+ "0282 did:x%x ndlp:x%px "
+ "refcnt:%d xflags x%x nflag x%x\n",
+ ndlp->nlp_DID, (void *)ndlp,
+@@ -3692,7 +3710,8 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action)
+ struct lpfc_vport **vports;
+ struct Scsi_Host *shost;
+ int i;
+- int offline = 0;
++ int offline;
++ bool hba_pci_err;
+
+ if (vport->fc_flag & FC_OFFLINE_MODE)
+ return;
+@@ -3702,6 +3721,7 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action)
+ lpfc_linkdown(phba);
+
+ offline = pci_channel_offline(phba->pcidev);
++ hba_pci_err = test_bit(HBA_PCI_ERR, &phba->bit_flags);
+
+ /* Issue an unreg_login to all nodes on all vports */
+ vports = lpfc_create_vport_work_array(phba);
+@@ -3725,11 +3745,14 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action)
+ ndlp->nlp_flag &= ~NLP_NPR_ADISC;
+ spin_unlock_irq(&ndlp->lock);
+
+- if (offline) {
++ if (offline || hba_pci_err) {
+ spin_lock_irq(&ndlp->lock);
+ ndlp->nlp_flag &= ~(NLP_UNREG_INP |
+ NLP_RPI_REGISTERED);
+ spin_unlock_irq(&ndlp->lock);
++ if (phba->sli_rev == LPFC_SLI_REV4)
++ lpfc_sli_rpi_release(vports[i],
++ ndlp);
+ } else {
+ lpfc_unreg_rpi(vports[i], ndlp);
+ }
+@@ -13366,8 +13389,9 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
+ /* Abort all iocbs associated with the hba */
+ lpfc_sli_hba_iocb_abort(phba);
+
+- /* Wait for completion of device XRI exchange busy */
+- lpfc_sli4_xri_exchange_busy_wait(phba);
++ if (!pci_channel_offline(phba->pcidev))
++ /* Wait for completion of device XRI exchange busy */
++ lpfc_sli4_xri_exchange_busy_wait(phba);
+
+ /* per-phba callback de-registration for hotplug event */
+ if (phba->pport)
+@@ -13386,15 +13410,12 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
+ /* Disable FW logging to host memory */
+ lpfc_ras_stop_fwlog(phba);
+
+- /* Unset the queues shared with the hardware then release all
+- * allocated resources.
+- */
+- lpfc_sli4_queue_unset(phba);
+- lpfc_sli4_queue_destroy(phba);
+-
+ /* Reset SLI4 HBA FCoE function */
+ lpfc_pci_function_reset(phba);
+
++ /* release all queue allocated resources. */
++ lpfc_sli4_queue_destroy(phba);
++
+ /* Free RAS DMA memory */
+ if (phba->ras_fwlog.ras_enabled)
+ lpfc_sli4_ras_dma_free(phba);
+@@ -14274,6 +14295,7 @@ lpfc_sli_prep_dev_for_perm_failure(struct lpfc_hba *phba)
+ "2711 PCI channel permanent disable for failure\n");
+ /* Block all SCSI devices' I/Os on the host */
+ lpfc_scsi_dev_block(phba);
++ lpfc_sli4_prep_dev_for_reset(phba);
+
+ /* stop all timers */
+ lpfc_stop_hba_timers(phba);
+@@ -15069,24 +15091,28 @@ lpfc_sli4_prep_dev_for_recover(struct lpfc_hba *phba)
+ static void
+ lpfc_sli4_prep_dev_for_reset(struct lpfc_hba *phba)
+ {
+- lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+- "2826 PCI channel disable preparing for reset\n");
++ int offline = pci_channel_offline(phba->pcidev);
++
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "2826 PCI channel disable preparing for reset offline"
++ " %d\n", offline);
+
+ /* Block any management I/Os to the device */
+ lpfc_block_mgmt_io(phba, LPFC_MBX_NO_WAIT);
+
+- /* Block all SCSI devices' I/Os on the host */
+- lpfc_scsi_dev_block(phba);
+
++ /* HBA_PCI_ERR was set in io_error_detect */
++ lpfc_offline_prep(phba, LPFC_MBX_NO_WAIT);
+ /* Flush all driver's outstanding I/Os as we are to reset */
+ lpfc_sli_flush_io_rings(phba);
++ lpfc_offline(phba);
+
+ /* stop all timers */
+ lpfc_stop_hba_timers(phba);
+
++ lpfc_sli4_queue_destroy(phba);
+ /* Disable interrupt and pci device */
+ lpfc_sli4_disable_intr(phba);
+- lpfc_sli4_queue_destroy(phba);
+ pci_disable_device(phba->pcidev);
+ }
+
+@@ -15135,6 +15161,7 @@ lpfc_io_error_detected_s4(struct pci_dev *pdev, pci_channel_state_t state)
+ {
+ struct Scsi_Host *shost = pci_get_drvdata(pdev);
+ struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
++ bool hba_pci_err;
+
+ switch (state) {
+ case pci_channel_io_normal:
+@@ -15142,17 +15169,24 @@ lpfc_io_error_detected_s4(struct pci_dev *pdev, pci_channel_state_t state)
+ lpfc_sli4_prep_dev_for_recover(phba);
+ return PCI_ERS_RESULT_CAN_RECOVER;
+ case pci_channel_io_frozen:
+- phba->hba_flag |= HBA_PCI_ERR;
++ hba_pci_err = test_and_set_bit(HBA_PCI_ERR, &phba->bit_flags);
+ /* Fatal error, prepare for slot reset */
+- lpfc_sli4_prep_dev_for_reset(phba);
++ if (!hba_pci_err)
++ lpfc_sli4_prep_dev_for_reset(phba);
++ else
++ lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++ "2832 Already handling PCI error "
++ "state: x%x\n", state);
+ return PCI_ERS_RESULT_NEED_RESET;
+ case pci_channel_io_perm_failure:
+- phba->hba_flag |= HBA_PCI_ERR;
++ set_bit(HBA_PCI_ERR, &phba->bit_flags);
+ /* Permanent failure, prepare for device down */
+ lpfc_sli4_prep_dev_for_perm_failure(phba);
+ return PCI_ERS_RESULT_DISCONNECT;
+ default:
+- phba->hba_flag |= HBA_PCI_ERR;
++ hba_pci_err = test_and_set_bit(HBA_PCI_ERR, &phba->bit_flags);
++ if (!hba_pci_err)
++ lpfc_sli4_prep_dev_for_reset(phba);
+ /* Unknown state, prepare and request slot reset */
+ lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ "2825 Unknown PCI error state: x%x\n", state);
+@@ -15186,17 +15220,21 @@ lpfc_io_slot_reset_s4(struct pci_dev *pdev)
+ struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ struct lpfc_sli *psli = &phba->sli;
+ uint32_t intr_mode;
++ bool hba_pci_err;
+
+ dev_printk(KERN_INFO, &pdev->dev, "recovering from a slot reset.\n");
+ if (pci_enable_device_mem(pdev)) {
+ printk(KERN_ERR "lpfc: Cannot re-enable "
+- "PCI device after reset.\n");
++ "PCI device after reset.\n");
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ pci_restore_state(pdev);
+
+- phba->hba_flag &= ~HBA_PCI_ERR;
++ hba_pci_err = test_and_clear_bit(HBA_PCI_ERR, &phba->bit_flags);
++ if (!hba_pci_err)
++ dev_info(&pdev->dev,
++ "hba_pci_err was not set, recovering slot reset.\n");
+ /*
+ * As the new kernel behavior of pci_restore_state() API call clears
+ * device saved_state flag, need to save the restored state again.
+@@ -15210,6 +15248,8 @@ lpfc_io_slot_reset_s4(struct pci_dev *pdev)
+ psli->sli_flag &= ~LPFC_SLI_ACTIVE;
+ spin_unlock_irq(&phba->hbalock);
+
++ /* Init cpu_map array */
++ lpfc_cpu_map_array_init(phba);
+ /* Configure and enable interrupt */
+ intr_mode = lpfc_sli4_enable_intr(phba, phba->intr_mode);
+ if (intr_mode == LPFC_INTR_ERROR) {
+@@ -15251,8 +15291,6 @@ lpfc_io_resume_s4(struct pci_dev *pdev)
+ */
+ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE)) {
+ /* Perform device reset */
+- lpfc_offline_prep(phba, LPFC_MBX_WAIT);
+- lpfc_offline(phba);
+ lpfc_sli_brdrestart(phba);
+ /* Bring the device back online */
+ lpfc_online(phba);
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 9601edd838e10..df73abb59407e 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -93,6 +93,11 @@ lpfc_nvme_create_queue(struct nvme_fc_local_port *pnvme_lport,
+
+ lport = (struct lpfc_nvme_lport *)pnvme_lport->private;
+ vport = lport->vport;
++
++ if (!vport || vport->load_flag & FC_UNLOADING ||
++ vport->phba->hba_flag & HBA_IOQ_FLUSH)
++ return -ENODEV;
++
+ qhandle = kzalloc(sizeof(struct lpfc_nvme_qhandle), GFP_KERNEL);
+ if (qhandle == NULL)
+ return -ENOMEM;
+@@ -267,7 +272,8 @@ lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
+ return -EINVAL;
+
+ remoteport = lpfc_rport->remoteport;
+- if (!vport->localport)
++ if (!vport->localport ||
++ vport->phba->hba_flag & HBA_IOQ_FLUSH)
+ return -EINVAL;
+
+ lport = vport->localport->private;
+@@ -559,6 +565,8 @@ __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ ndlp->nlp_DID, ntype, nstate);
+ return -ENODEV;
+ }
++ if (vport->phba->hba_flag & HBA_IOQ_FLUSH)
++ return -ENODEV;
+
+ if (!vport->phba->sli4_hba.nvmels_wq)
+ return -ENOMEM;
+@@ -662,7 +670,8 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
+ return -EINVAL;
+
+ vport = lport->vport;
+- if (vport->load_flag & FC_UNLOADING)
++ if (vport->load_flag & FC_UNLOADING ||
++ vport->phba->hba_flag & HBA_IOQ_FLUSH)
+ return -ENODEV;
+
+ atomic_inc(&lport->fc4NvmeLsRequests);
+@@ -1515,7 +1524,8 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
+
+ phba = vport->phba;
+
+- if (unlikely(vport->load_flag & FC_UNLOADING)) {
++ if ((unlikely(vport->load_flag & FC_UNLOADING)) ||
++ phba->hba_flag & HBA_IOQ_FLUSH) {
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
+ "6124 Fail IO, Driver unload\n");
+ atomic_inc(&lport->xmt_fcp_err);
+@@ -2169,8 +2179,7 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
+ abts_nvme = 0;
+ for (i = 0; i < phba->cfg_hdw_queue; i++) {
+ qp = &phba->sli4_hba.hdwq[i];
+- if (!vport || !vport->localport ||
+- !qp || !qp->io_wq)
++ if (!vport->localport || !qp || !qp->io_wq)
+ return;
+
+ pring = qp->io_wq->pring;
+@@ -2180,8 +2189,9 @@ lpfc_nvme_lport_unreg_wait(struct lpfc_vport *vport,
+ abts_scsi += qp->abts_scsi_io_bufs;
+ abts_nvme += qp->abts_nvme_io_bufs;
+ }
+- if (!vport || !vport->localport ||
+- vport->phba->hba_flag & HBA_PCI_ERR)
++ if (!vport->localport ||
++ test_bit(HBA_PCI_ERR, &vport->phba->bit_flags) ||
++ vport->load_flag & FC_UNLOADING)
+ return;
+
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
+@@ -2541,8 +2551,7 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ * return values is ignored. The upcall is a courtesy to the
+ * transport.
+ */
+- if (vport->load_flag & FC_UNLOADING ||
+- unlikely(vport->phba->hba_flag & HBA_PCI_ERR))
++ if (vport->load_flag & FC_UNLOADING)
+ (void)nvme_fc_set_remoteport_devloss(remoteport, 0);
+
+ ret = nvme_fc_unregister_remoteport(remoteport);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 430abebf99f15..b64c5f157ce90 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2833,6 +2833,12 @@ __lpfc_sli_rpi_release(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ ndlp->nlp_flag &= ~NLP_UNREG_INP;
+ }
+
++void
++lpfc_sli_rpi_release(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
++{
++ __lpfc_sli_rpi_release(vport, ndlp);
++}
++
+ /**
+ * lpfc_sli_def_mbox_cmpl - Default mailbox completion handler
+ * @phba: Pointer to HBA context object.
+@@ -4466,42 +4472,62 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ void
+ lpfc_sli_abort_iocb_ring(struct lpfc_hba *phba, struct lpfc_sli_ring *pring)
+ {
+- LIST_HEAD(completions);
++ LIST_HEAD(tx_completions);
++ LIST_HEAD(txcmplq_completions);
+ struct lpfc_iocbq *iocb, *next_iocb;
++ int offline;
+
+ if (pring->ringno == LPFC_ELS_RING) {
+ lpfc_fabric_abort_hba(phba);
+ }
++ offline = pci_channel_offline(phba->pcidev);
+
+ /* Error everything on txq and txcmplq
+ * First do the txq.
+ */
+ if (phba->sli_rev >= LPFC_SLI_REV4) {
+ spin_lock_irq(&pring->ring_lock);
+- list_splice_init(&pring->txq, &completions);
++ list_splice_init(&pring->txq, &tx_completions);
+ pring->txq_cnt = 0;
+- spin_unlock_irq(&pring->ring_lock);
+
+- spin_lock_irq(&phba->hbalock);
+- /* Next issue ABTS for everything on the txcmplq */
+- list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq, list)
+- lpfc_sli_issue_abort_iotag(phba, pring, iocb, NULL);
+- spin_unlock_irq(&phba->hbalock);
++ if (offline) {
++ list_splice_init(&pring->txcmplq,
++ &txcmplq_completions);
++ } else {
++ /* Next issue ABTS for everything on the txcmplq */
++ list_for_each_entry_safe(iocb, next_iocb,
++ &pring->txcmplq, list)
++ lpfc_sli_issue_abort_iotag(phba, pring,
++ iocb, NULL);
++ }
++ spin_unlock_irq(&pring->ring_lock);
+ } else {
+ spin_lock_irq(&phba->hbalock);
+- list_splice_init(&pring->txq, &completions);
++ list_splice_init(&pring->txq, &tx_completions);
+ pring->txq_cnt = 0;
+
+- /* Next issue ABTS for everything on the txcmplq */
+- list_for_each_entry_safe(iocb, next_iocb, &pring->txcmplq, list)
+- lpfc_sli_issue_abort_iotag(phba, pring, iocb, NULL);
++ if (offline) {
++ list_splice_init(&pring->txcmplq, &txcmplq_completions);
++ } else {
++ /* Next issue ABTS for everything on the txcmplq */
++ list_for_each_entry_safe(iocb, next_iocb,
++ &pring->txcmplq, list)
++ lpfc_sli_issue_abort_iotag(phba, pring,
++ iocb, NULL);
++ }
+ spin_unlock_irq(&phba->hbalock);
+ }
+- /* Make sure HBA is alive */
+- lpfc_issue_hb_tmo(phba);
+
++ if (offline) {
++ /* Cancel all the IOCBs from the completions list */
++ lpfc_sli_cancel_iocbs(phba, &txcmplq_completions,
++ IOSTAT_LOCAL_REJECT, IOERR_SLI_ABORTED);
++ } else {
++ /* Make sure HBA is alive */
++ lpfc_issue_hb_tmo(phba);
++ }
+ /* Cancel all the IOCBs from the completions list */
+- lpfc_sli_cancel_iocbs(phba, &completions, IOSTAT_LOCAL_REJECT,
++ lpfc_sli_cancel_iocbs(phba, &tx_completions, IOSTAT_LOCAL_REJECT,
+ IOERR_SLI_ABORTED);
+ }
+
+@@ -4554,11 +4580,6 @@ lpfc_sli_flush_io_rings(struct lpfc_hba *phba)
+ struct lpfc_iocbq *piocb, *next_iocb;
+
+ spin_lock_irq(&phba->hbalock);
+- if (phba->hba_flag & HBA_IOQ_FLUSH ||
+- !phba->sli4_hba.hdwq) {
+- spin_unlock_irq(&phba->hbalock);
+- return;
+- }
+ /* Indicate the I/O queues are flushed */
+ phba->hba_flag |= HBA_IOQ_FLUSH;
+ spin_unlock_irq(&phba->hbalock);
+@@ -11235,6 +11256,10 @@ lpfc_sli_issue_iocb(struct lpfc_hba *phba, uint32_t ring_number,
+ unsigned long iflags;
+ int rc;
+
++ /* If the PCI channel is in offline state, do not post iocbs. */
++ if (unlikely(pci_channel_offline(phba->pcidev)))
++ return IOCB_ERROR;
++
+ if (phba->sli_rev == LPFC_SLI_REV4) {
+ eq = phba->sli4_hba.hdwq[piocb->hba_wqidx].hba_eq;
+
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index 2c9d1b7964756..ae2aef9ba8cfe 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -2558,6 +2558,9 @@ struct megasas_instance_template {
+ #define MEGASAS_IS_LOGICAL(sdev) \
+ ((sdev->channel < MEGASAS_MAX_PD_CHANNELS) ? 0 : 1)
+
++#define MEGASAS_IS_LUN_VALID(sdev) \
++ (((sdev)->lun == 0) ? 1 : 0)
++
+ #define MEGASAS_DEV_INDEX(scp) \
+ (((scp->device->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) + \
+ scp->device->id)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 82e1e24257bcd..ca563498dcdb8 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -2126,6 +2126,9 @@ static int megasas_slave_alloc(struct scsi_device *sdev)
+ goto scan_target;
+ }
+ return -ENXIO;
++ } else if (!MEGASAS_IS_LUN_VALID(sdev)) {
++ sdev_printk(KERN_INFO, sdev, "%s: invalid LUN\n", __func__);
++ return -ENXIO;
+ }
+
+ scan_target:
+@@ -2156,6 +2159,10 @@ static void megasas_slave_destroy(struct scsi_device *sdev)
+ instance = megasas_lookup_instance(sdev->host->host_no);
+
+ if (MEGASAS_IS_LOGICAL(sdev)) {
++ if (!MEGASAS_IS_LUN_VALID(sdev)) {
++ sdev_printk(KERN_INFO, sdev, "%s: invalid LUN\n", __func__);
++ return;
++ }
+ ld_tgt_id = MEGASAS_TARGET_ID(sdev);
+ instance->ld_tgtid_status[ld_tgt_id] = LD_TARGET_ID_DELETED;
+ if (megasas_dbg_lvl & LD_PD_DEBUG)
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_config.c b/drivers/scsi/mpt3sas/mpt3sas_config.c
+index 0563078227de6..a8dd14c91efdb 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_config.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_config.c
+@@ -394,10 +394,13 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
+ retry_count++;
+ if (ioc->config_cmds.smid == smid)
+ mpt3sas_base_free_smid(ioc, smid);
+- if ((ioc->shost_recovery) || (ioc->config_cmds.status &
+- MPT3_CMD_RESET) || ioc->pci_error_recovery)
++ if (ioc->config_cmds.status & MPT3_CMD_RESET)
+ goto retry_config;
+- issue_host_reset = 1;
++ if (ioc->shost_recovery || ioc->pci_error_recovery) {
++ issue_host_reset = 0;
++ r = -EFAULT;
++ } else
++ issue_host_reset = 1;
+ goto free_mem;
+ }
+
+diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
+index 44df7c03aab8d..605a8eb7344a7 100644
+--- a/drivers/scsi/mvsas/mv_init.c
++++ b/drivers/scsi/mvsas/mv_init.c
+@@ -646,6 +646,7 @@ static struct pci_device_id mvs_pci_table[] = {
+ { PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1300), chip_1300 },
+ { PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1320), chip_1320 },
+ { PCI_VDEVICE(ADAPTEC2, 0x0450), chip_6440 },
++ { PCI_VDEVICE(TTI, 0x2640), chip_6440 },
+ { PCI_VDEVICE(TTI, 0x2710), chip_9480 },
+ { PCI_VDEVICE(TTI, 0x2720), chip_9480 },
+ { PCI_VDEVICE(TTI, 0x2721), chip_9480 },
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 55163469030d3..5853b3c0d76db 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -766,6 +766,10 @@ static void init_default_table_values(struct pm8001_hba_info *pm8001_ha)
+ pm8001_ha->main_cfg_tbl.pm80xx_tbl.pcs_event_log_severity = 0x01;
+ pm8001_ha->main_cfg_tbl.pm80xx_tbl.fatal_err_interrupt = 0x01;
+
++ /* Enable higher IQs and OQs, 32 to 63, bit 16 */
++ if (pm8001_ha->max_q_num > 32)
++ pm8001_ha->main_cfg_tbl.pm80xx_tbl.fatal_err_interrupt |=
++ 1 << 16;
+ /* Disable end to end CRC checking */
+ pm8001_ha->main_cfg_tbl.pm80xx_tbl.crc_core_dump = (0x1 << 16);
+
+@@ -1027,6 +1031,13 @@ static int mpi_init_check(struct pm8001_hba_info *pm8001_ha)
+ if (0x0000 != gst_len_mpistate)
+ return -EBUSY;
+
++ /*
++	 * As per the controller datasheet, a minimum 500ms delay is
++	 * required after successful MPI initialization before
++	 * issuing commands.
++ */
++ msleep(500);
++
+ return 0;
+ }
+
+@@ -1734,10 +1745,11 @@ static void
+ pm80xx_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ {
+ #ifdef PM8001_USE_MSIX
+- u32 mask;
+- mask = (u32)(1 << vec);
+-
+- pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, (u32)(mask & 0xFFFFFFFF));
++ if (vec < 32)
++ pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, 1U << vec);
++ else
++ pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR_U,
++ 1U << (vec - 32));
+ return;
+ #endif
+ pm80xx_chip_intx_interrupt_enable(pm8001_ha);
+@@ -1753,12 +1765,15 @@ static void
+ pm80xx_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha, u8 vec)
+ {
+ #ifdef PM8001_USE_MSIX
+- u32 mask;
+- if (vec == 0xFF)
+- mask = 0xFFFFFFFF;
++ if (vec == 0xFF) {
++ /* disable all vectors 0-31, 32-63 */
++ pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 0xFFFFFFFF);
++ pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U, 0xFFFFFFFF);
++ } else if (vec < 32)
++ pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 1U << vec);
+ else
+- mask = (u32)(1 << vec);
+- pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, (u32)(mask & 0xFFFFFFFF));
++ pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U,
++ 1U << (vec - 32));
+ return;
+ #endif
+ pm80xx_chip_intx_interrupt_disable(pm8001_ha);
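+
+With more than 32 MSI-X vectors a single 32-bit ODMR mask register is no longer enough; the hunks above route vectors 32-63 to the upper register (MSGU_ODMR_U / MSGU_ODMR_CLR_U) with the shift rebased by 32. A sketch of the split, with the two registers modeled as plain words rather than MMIO:
+
+	#include <stdint.h>
+	#include <stdio.h>
+
+	/* odmr_lo covers vectors 0-31, odmr_hi covers 32-63 */
+	static void mask_vector(uint32_t *odmr_lo, uint32_t *odmr_hi, unsigned int vec)
+	{
+		if (vec < 32)
+			*odmr_lo |= 1U << vec;
+		else
+			*odmr_hi |= 1U << (vec - 32);
+	}
+
+	int main(void)
+	{
+		uint32_t lo = 0, hi = 0;
+
+		mask_vector(&lo, &hi, 5);	/* sets bit 5 of the low word  */
+		mask_vector(&lo, &hi, 40);	/* sets bit 8 of the high word */
+		printf("lo=%08x hi=%08x\n", lo, hi);
+		return 0;
+	}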
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 554b6f7842236..c7b1b2e8bb02f 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2221,10 +2221,10 @@ static void iscsi_stop_conn(struct iscsi_cls_conn *conn, int flag)
+
+ switch (flag) {
+ case STOP_CONN_RECOVER:
+- conn->state = ISCSI_CONN_FAILED;
++ WRITE_ONCE(conn->state, ISCSI_CONN_FAILED);
+ break;
+ case STOP_CONN_TERM:
+- conn->state = ISCSI_CONN_DOWN;
++ WRITE_ONCE(conn->state, ISCSI_CONN_DOWN);
+ break;
+ default:
+ iscsi_cls_conn_printk(KERN_ERR, conn, "invalid stop flag %d\n",
+@@ -2236,6 +2236,49 @@ static void iscsi_stop_conn(struct iscsi_cls_conn *conn, int flag)
+ ISCSI_DBG_TRANS_CONN(conn, "Stopping conn done.\n");
+ }
+
++static void iscsi_ep_disconnect(struct iscsi_cls_conn *conn, bool is_active)
++{
++ struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
++ struct iscsi_endpoint *ep;
++
++ ISCSI_DBG_TRANS_CONN(conn, "disconnect ep.\n");
++ WRITE_ONCE(conn->state, ISCSI_CONN_FAILED);
++
++ if (!conn->ep || !session->transport->ep_disconnect)
++ return;
++
++ ep = conn->ep;
++ conn->ep = NULL;
++
++ session->transport->unbind_conn(conn, is_active);
++ session->transport->ep_disconnect(ep);
++ ISCSI_DBG_TRANS_CONN(conn, "disconnect ep done.\n");
++}
++
++static void iscsi_if_disconnect_bound_ep(struct iscsi_cls_conn *conn,
++ struct iscsi_endpoint *ep,
++ bool is_active)
++{
++ /* Check if this was a conn error and the kernel took ownership */
++ spin_lock_irq(&conn->lock);
++ if (!test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++ spin_unlock_irq(&conn->lock);
++ iscsi_ep_disconnect(conn, is_active);
++ } else {
++ spin_unlock_irq(&conn->lock);
++ ISCSI_DBG_TRANS_CONN(conn, "flush kernel conn cleanup.\n");
++ mutex_unlock(&conn->ep_mutex);
++
++ flush_work(&conn->cleanup_work);
++ /*
++ * Userspace is now done with the EP so we can release the ref
++ * iscsi_cleanup_conn_work_fn took.
++ */
++ iscsi_put_endpoint(ep);
++ mutex_lock(&conn->ep_mutex);
++ }
++}
++
+ static int iscsi_if_stop_conn(struct iscsi_transport *transport,
+ struct iscsi_uevent *ev)
+ {
+@@ -2256,12 +2299,25 @@ static int iscsi_if_stop_conn(struct iscsi_transport *transport,
+ cancel_work_sync(&conn->cleanup_work);
+ iscsi_stop_conn(conn, flag);
+ } else {
++ /*
++ * For offload, when iscsid is restarted it won't know about
++ * existing endpoints so it can't do a ep_disconnect. We clean
++ * it up here for userspace.
++ */
++ mutex_lock(&conn->ep_mutex);
++ if (conn->ep)
++ iscsi_if_disconnect_bound_ep(conn, conn->ep, true);
++ mutex_unlock(&conn->ep_mutex);
++
+ /*
+ * Figure out if it was the kernel or userspace initiating this.
+ */
++ spin_lock_irq(&conn->lock);
+ if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++ spin_unlock_irq(&conn->lock);
+ iscsi_stop_conn(conn, flag);
+ } else {
++ spin_unlock_irq(&conn->lock);
+ ISCSI_DBG_TRANS_CONN(conn,
+ "flush kernel conn cleanup.\n");
+ flush_work(&conn->cleanup_work);
+@@ -2270,31 +2326,14 @@ static int iscsi_if_stop_conn(struct iscsi_transport *transport,
+ * Only clear for recovery to avoid extra cleanup runs during
+ * termination.
+ */
++ spin_lock_irq(&conn->lock);
+ clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);
++ spin_unlock_irq(&conn->lock);
+ }
+ ISCSI_DBG_TRANS_CONN(conn, "iscsi if conn stop done.\n");
+ return 0;
+ }
+
+-static void iscsi_ep_disconnect(struct iscsi_cls_conn *conn, bool is_active)
+-{
+- struct iscsi_cls_session *session = iscsi_conn_to_session(conn);
+- struct iscsi_endpoint *ep;
+-
+- ISCSI_DBG_TRANS_CONN(conn, "disconnect ep.\n");
+- conn->state = ISCSI_CONN_FAILED;
+-
+- if (!conn->ep || !session->transport->ep_disconnect)
+- return;
+-
+- ep = conn->ep;
+- conn->ep = NULL;
+-
+- session->transport->unbind_conn(conn, is_active);
+- session->transport->ep_disconnect(ep);
+- ISCSI_DBG_TRANS_CONN(conn, "disconnect ep done.\n");
+-}
+-
+ static void iscsi_cleanup_conn_work_fn(struct work_struct *work)
+ {
+ struct iscsi_cls_conn *conn = container_of(work, struct iscsi_cls_conn,
+@@ -2303,18 +2342,11 @@ static void iscsi_cleanup_conn_work_fn(struct work_struct *work)
+
+ mutex_lock(&conn->ep_mutex);
+ /*
+- * If we are not at least bound there is nothing for us to do. Userspace
+- * will do a ep_disconnect call if offload is used, but will not be
+- * doing a stop since there is nothing to clean up, so we have to clear
+- * the cleanup bit here.
++ * Get a ref to the ep, so we don't release its ID until after
++ * userspace is done referencing it in iscsi_if_disconnect_bound_ep.
+ */
+- if (conn->state != ISCSI_CONN_BOUND && conn->state != ISCSI_CONN_UP) {
+- ISCSI_DBG_TRANS_CONN(conn, "Got error while conn is already failed. Ignoring.\n");
+- clear_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags);
+- mutex_unlock(&conn->ep_mutex);
+- return;
+- }
+-
++ if (conn->ep)
++ get_device(&conn->ep->dev);
+ iscsi_ep_disconnect(conn, false);
+
+ if (system_state != SYSTEM_RUNNING) {
+@@ -2370,11 +2402,12 @@ iscsi_create_conn(struct iscsi_cls_session *session, int dd_size, uint32_t cid)
+ conn->dd_data = &conn[1];
+
+ mutex_init(&conn->ep_mutex);
++ spin_lock_init(&conn->lock);
+ INIT_LIST_HEAD(&conn->conn_list);
+ INIT_WORK(&conn->cleanup_work, iscsi_cleanup_conn_work_fn);
+ conn->transport = transport;
+ conn->cid = cid;
+- conn->state = ISCSI_CONN_DOWN;
++ WRITE_ONCE(conn->state, ISCSI_CONN_DOWN);
+
+ /* this is released in the dev's release function */
+ if (!get_device(&session->dev))
+@@ -2561,9 +2594,32 @@ void iscsi_conn_error_event(struct iscsi_cls_conn *conn, enum iscsi_err error)
+ struct iscsi_uevent *ev;
+ struct iscsi_internal *priv;
+ int len = nlmsg_total_size(sizeof(*ev));
++ unsigned long flags;
++ int state;
+
+- if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags))
+- queue_work(iscsi_conn_cleanup_workq, &conn->cleanup_work);
++ spin_lock_irqsave(&conn->lock, flags);
++ /*
++ * Userspace will only do a stop call if we are at least bound. And, we
++ * only need to do the in kernel cleanup if in the UP state so cmds can
++ * be released to upper layers. If in other states just wait for
++ * userspace to avoid races that can leave the cleanup_work queued.
++ */
++ state = READ_ONCE(conn->state);
++ switch (state) {
++ case ISCSI_CONN_BOUND:
++ case ISCSI_CONN_UP:
++ if (!test_and_set_bit(ISCSI_CLS_CONN_BIT_CLEANUP,
++ &conn->flags)) {
++ queue_work(iscsi_conn_cleanup_workq,
++ &conn->cleanup_work);
++ }
++ break;
++ default:
++ ISCSI_DBG_TRANS_CONN(conn, "Got conn error in state %d\n",
++ state);
++ break;
++ }
++ spin_unlock_irqrestore(&conn->lock, flags);
+
+ priv = iscsi_if_transport_lookup(conn->transport);
+ if (!priv)
+@@ -2913,7 +2969,7 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ char *data = (char*)ev + sizeof(*ev);
+ struct iscsi_cls_conn *conn;
+ struct iscsi_cls_session *session;
+- int err = 0, value = 0;
++ int err = 0, value = 0, state;
+
+ if (ev->u.set_param.len > PAGE_SIZE)
+ return -EINVAL;
+@@ -2930,8 +2986,8 @@ iscsi_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ session->recovery_tmo = value;
+ break;
+ default:
+- if ((conn->state == ISCSI_CONN_BOUND) ||
+- (conn->state == ISCSI_CONN_UP)) {
++ state = READ_ONCE(conn->state);
++ if (state == ISCSI_CONN_BOUND || state == ISCSI_CONN_UP) {
+ err = transport->set_param(conn, ev->u.set_param.param,
+ data, ev->u.set_param.len);
+ } else {
+@@ -3003,16 +3059,7 @@ static int iscsi_if_ep_disconnect(struct iscsi_transport *transport,
+ }
+
+ mutex_lock(&conn->ep_mutex);
+- /* Check if this was a conn error and the kernel took ownership */
+- if (test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
+- ISCSI_DBG_TRANS_CONN(conn, "flush kernel conn cleanup.\n");
+- mutex_unlock(&conn->ep_mutex);
+-
+- flush_work(&conn->cleanup_work);
+- goto put_ep;
+- }
+-
+- iscsi_ep_disconnect(conn, false);
++ iscsi_if_disconnect_bound_ep(conn, ep, false);
+ mutex_unlock(&conn->ep_mutex);
+ put_ep:
+ iscsi_put_endpoint(ep);
+@@ -3715,24 +3762,17 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+ return -EINVAL;
+
+ mutex_lock(&conn->ep_mutex);
++ spin_lock_irq(&conn->lock);
+ if (test_bit(ISCSI_CLS_CONN_BIT_CLEANUP, &conn->flags)) {
++ spin_unlock_irq(&conn->lock);
+ mutex_unlock(&conn->ep_mutex);
+ ev->r.retcode = -ENOTCONN;
+ return 0;
+ }
++ spin_unlock_irq(&conn->lock);
+
+ switch (nlh->nlmsg_type) {
+ case ISCSI_UEVENT_BIND_CONN:
+- if (conn->ep) {
+- /*
+- * For offload boot support where iscsid is restarted
+- * during the pivot root stage, the ep will be intact
+- * here when the new iscsid instance starts up and
+- * reconnects.
+- */
+- iscsi_ep_disconnect(conn, true);
+- }
+-
+ session = iscsi_session_lookup(ev->u.b_conn.sid);
+ if (!session) {
+ err = -EINVAL;
+@@ -3743,7 +3783,7 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+ ev->u.b_conn.transport_eph,
+ ev->u.b_conn.is_leading);
+ if (!ev->r.retcode)
+- conn->state = ISCSI_CONN_BOUND;
++ WRITE_ONCE(conn->state, ISCSI_CONN_BOUND);
+
+ if (ev->r.retcode || !transport->ep_connect)
+ break;
+@@ -3762,7 +3802,8 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+ case ISCSI_UEVENT_START_CONN:
+ ev->r.retcode = transport->start_conn(conn);
+ if (!ev->r.retcode)
+- conn->state = ISCSI_CONN_UP;
++ WRITE_ONCE(conn->state, ISCSI_CONN_UP);
++
+ break;
+ case ISCSI_UEVENT_SEND_PDU:
+ pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
+@@ -4070,10 +4111,11 @@ static ssize_t show_conn_state(struct device *dev,
+ {
+ struct iscsi_cls_conn *conn = iscsi_dev_to_conn(dev->parent);
+ const char *state = "unknown";
++ int conn_state = READ_ONCE(conn->state);
+
+- if (conn->state >= 0 &&
+- conn->state < ARRAY_SIZE(connection_state_names))
+- state = connection_state_names[conn->state];
++ if (conn_state >= 0 &&
++ conn_state < ARRAY_SIZE(connection_state_names))
++ state = connection_state_names[conn_state];
+
+ return sysfs_emit(buf, "%s\n", state);
+ }
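+
+Because conn->state is now read in the error-event path without the connection lock held, the patch converts every access to READ_ONCE()/WRITE_ONCE() so the compiler cannot tear, merge, or re-fetch the value. A minimal userspace sketch of those macros and their use, assuming GCC-style __typeof__:
+
+	#include <stdio.h>
+
+	#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))
+	#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))
+
+	enum conn_state { CONN_DOWN, CONN_BOUND, CONN_UP, CONN_FAILED };
+
+	static int state = CONN_DOWN;	/* shared with other threads in real code */
+
+	int main(void)
+	{
+		WRITE_ONCE(state, CONN_UP);
+		if (READ_ONCE(state) == CONN_UP)
+			puts("conn up");
+		return 0;
+	}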
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index b808c94641fa6..75f3560411386 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -19,6 +19,7 @@
+ #include <linux/iopoll.h>
+ #include <linux/jiffies.h>
+ #include <linux/kernel.h>
++#include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/of.h>
+@@ -102,12 +103,6 @@ struct cqspi_driver_platdata {
+ #define CQSPI_TIMEOUT_MS 500
+ #define CQSPI_READ_TIMEOUT_MS 10
+
+-/* Instruction type */
+-#define CQSPI_INST_TYPE_SINGLE 0
+-#define CQSPI_INST_TYPE_DUAL 1
+-#define CQSPI_INST_TYPE_QUAD 2
+-#define CQSPI_INST_TYPE_OCTAL 3
+-
+ #define CQSPI_DUMMY_CLKS_PER_BYTE 8
+ #define CQSPI_DUMMY_BYTES_MAX 4
+ #define CQSPI_DUMMY_CLKS_MAX 31
+@@ -376,10 +371,6 @@ static unsigned int cqspi_calc_dummy(const struct spi_mem_op *op, bool dtr)
+ static int cqspi_set_protocol(struct cqspi_flash_pdata *f_pdata,
+ const struct spi_mem_op *op)
+ {
+- f_pdata->inst_width = CQSPI_INST_TYPE_SINGLE;
+- f_pdata->addr_width = CQSPI_INST_TYPE_SINGLE;
+- f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
+-
+ /*
+ * For an op to be DTR, cmd phase along with every other non-empty
+ * phase should have dtr field set to 1. If an op phase has zero
+@@ -389,32 +380,23 @@ static int cqspi_set_protocol(struct cqspi_flash_pdata *f_pdata,
+ (!op->addr.nbytes || op->addr.dtr) &&
+ (!op->data.nbytes || op->data.dtr);
+
+- switch (op->data.buswidth) {
+- case 0:
+- break;
+- case 1:
+- f_pdata->data_width = CQSPI_INST_TYPE_SINGLE;
+- break;
+- case 2:
+- f_pdata->data_width = CQSPI_INST_TYPE_DUAL;
+- break;
+- case 4:
+- f_pdata->data_width = CQSPI_INST_TYPE_QUAD;
+- break;
+- case 8:
+- f_pdata->data_width = CQSPI_INST_TYPE_OCTAL;
+- break;
+- default:
+- return -EINVAL;
+- }
++ f_pdata->inst_width = 0;
++ if (op->cmd.buswidth)
++ f_pdata->inst_width = ilog2(op->cmd.buswidth);
++
++ f_pdata->addr_width = 0;
++ if (op->addr.buswidth)
++ f_pdata->addr_width = ilog2(op->addr.buswidth);
++
++ f_pdata->data_width = 0;
++ if (op->data.buswidth)
++ f_pdata->data_width = ilog2(op->data.buswidth);
+
+ /* Right now we only support 8-8-8 DTR mode. */
+ if (f_pdata->dtr) {
+ switch (op->cmd.buswidth) {
+ case 0:
+- break;
+ case 8:
+- f_pdata->inst_width = CQSPI_INST_TYPE_OCTAL;
+ break;
+ default:
+ return -EINVAL;
+@@ -422,9 +404,7 @@ static int cqspi_set_protocol(struct cqspi_flash_pdata *f_pdata,
+
+ switch (op->addr.buswidth) {
+ case 0:
+- break;
+ case 8:
+- f_pdata->addr_width = CQSPI_INST_TYPE_OCTAL;
+ break;
+ default:
+ return -EINVAL;
+@@ -432,9 +412,7 @@ static int cqspi_set_protocol(struct cqspi_flash_pdata *f_pdata,
+
+ switch (op->data.buswidth) {
+ case 0:
+- break;
+ case 8:
+- f_pdata->data_width = CQSPI_INST_TYPE_OCTAL;
+ break;
+ default:
+ return -EINVAL;
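+
+The rewrite above works because the register encoding the controller expects is just log2 of the bus width: 1, 2, 4, and 8 lines map to 0, 1, 2, and 3, the same values the dropped CQSPI_INST_TYPE_* defines carried. A small sketch with a hand-rolled ilog2() standing in for the kernel's:
+
+	#include <stdio.h>
+
+	/* v is assumed to be a power of two, as spi-mem buswidths are */
+	static unsigned int ilog2(unsigned int v)
+	{
+		unsigned int r = 0;
+
+		while (v >>= 1)
+			r++;
+		return r;
+	}
+
+	int main(void)
+	{
+		unsigned int widths[] = { 1, 2, 4, 8 };
+
+		for (int i = 0; i < 4; i++)
+			printf("buswidth %u -> type %u\n", widths[i], ilog2(widths[i]));
+		return 0;
+	}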
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 7b2a89a67cdba..06a5c40865513 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -1820,6 +1820,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
+ mutex_lock(&udev->cmdr_lock);
+ page = xa_load(&udev->data_pages, dpi);
+ if (likely(page)) {
++ get_page(page);
+ mutex_unlock(&udev->cmdr_lock);
+ return page;
+ }
+@@ -1876,6 +1877,7 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ /* For the vmalloc()ed cmd area pages */
+ addr = (void *)(unsigned long)info->mem[mi].addr + offset;
+ page = vmalloc_to_page(addr);
++ get_page(page);
+ } else {
+ uint32_t dpi;
+
+@@ -1886,7 +1888,6 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ return VM_FAULT_SIGBUS;
+ }
+
+- get_page(page);
+ vmf->page = page;
+ return 0;
+ }
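+
+The tcmu change moves get_page() to the point where the page is still known to be valid, either under cmdr_lock or straight after vmalloc_to_page(), instead of after the lookup paths have diverged. The reference must be taken before the lock is dropped, or the page could be released underneath the fault handler. A minimal sketch of the take-ref-before-unlock ordering, with a trivial refcount standing in for struct page:
+
+	#include <pthread.h>
+	#include <stdio.h>
+
+	struct page { int refcount; };
+
+	static pthread_mutex_t cmdr_lock = PTHREAD_MUTEX_INITIALIZER;
+	static struct page data_page = { .refcount = 1 };
+
+	static struct page *try_get_data_page(void)
+	{
+		struct page *page;
+
+		pthread_mutex_lock(&cmdr_lock);
+		page = &data_page;	/* the lookup, e.g. xa_load() in tcmu */
+		page->refcount++;	/* get_page() before the unlock */
+		pthread_mutex_unlock(&cmdr_lock);
+		return page;
+	}
+
+	int main(void)
+	{
+		struct page *page = try_get_data_page();
+
+		printf("page refcount now %d\n", page->refcount);
+		return 0;
+	}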
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index 2e6409cc11ada..ef54ef11af552 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -36,6 +36,10 @@ static bool nointxmask;
+ static bool disable_vga;
+ static bool disable_idle_d3;
+
++/* List of PFs that vfio_pci_core_sriov_configure() has been called on */
++static DEFINE_MUTEX(vfio_pci_sriov_pfs_mutex);
++static LIST_HEAD(vfio_pci_sriov_pfs);
++
+ static inline bool vfio_vga_disabled(void)
+ {
+ #ifdef CONFIG_VFIO_PCI_VGA
+@@ -434,47 +438,17 @@ out:
+ }
+ EXPORT_SYMBOL_GPL(vfio_pci_core_disable);
+
+-static struct vfio_pci_core_device *get_pf_vdev(struct vfio_pci_core_device *vdev)
+-{
+- struct pci_dev *physfn = pci_physfn(vdev->pdev);
+- struct vfio_device *pf_dev;
+-
+- if (!vdev->pdev->is_virtfn)
+- return NULL;
+-
+- pf_dev = vfio_device_get_from_dev(&physfn->dev);
+- if (!pf_dev)
+- return NULL;
+-
+- if (pci_dev_driver(physfn) != pci_dev_driver(vdev->pdev)) {
+- vfio_device_put(pf_dev);
+- return NULL;
+- }
+-
+- return container_of(pf_dev, struct vfio_pci_core_device, vdev);
+-}
+-
+-static void vfio_pci_vf_token_user_add(struct vfio_pci_core_device *vdev, int val)
+-{
+- struct vfio_pci_core_device *pf_vdev = get_pf_vdev(vdev);
+-
+- if (!pf_vdev)
+- return;
+-
+- mutex_lock(&pf_vdev->vf_token->lock);
+- pf_vdev->vf_token->users += val;
+- WARN_ON(pf_vdev->vf_token->users < 0);
+- mutex_unlock(&pf_vdev->vf_token->lock);
+-
+- vfio_device_put(&pf_vdev->vdev);
+-}
+-
+ void vfio_pci_core_close_device(struct vfio_device *core_vdev)
+ {
+ struct vfio_pci_core_device *vdev =
+ container_of(core_vdev, struct vfio_pci_core_device, vdev);
+
+- vfio_pci_vf_token_user_add(vdev, -1);
++ if (vdev->sriov_pf_core_dev) {
++ mutex_lock(&vdev->sriov_pf_core_dev->vf_token->lock);
++ WARN_ON(!vdev->sriov_pf_core_dev->vf_token->users);
++ vdev->sriov_pf_core_dev->vf_token->users--;
++ mutex_unlock(&vdev->sriov_pf_core_dev->vf_token->lock);
++ }
+ vfio_spapr_pci_eeh_release(vdev->pdev);
+ vfio_pci_core_disable(vdev);
+
+@@ -495,7 +469,12 @@ void vfio_pci_core_finish_enable(struct vfio_pci_core_device *vdev)
+ {
+ vfio_pci_probe_mmaps(vdev);
+ vfio_spapr_pci_eeh_open(vdev->pdev);
+- vfio_pci_vf_token_user_add(vdev, 1);
++
++ if (vdev->sriov_pf_core_dev) {
++ mutex_lock(&vdev->sriov_pf_core_dev->vf_token->lock);
++ vdev->sriov_pf_core_dev->vf_token->users++;
++ mutex_unlock(&vdev->sriov_pf_core_dev->vf_token->lock);
++ }
+ }
+ EXPORT_SYMBOL_GPL(vfio_pci_core_finish_enable);
+
+@@ -1603,11 +1582,8 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ *
+ * If the VF token is provided but unused, an error is generated.
+ */
+- if (!vdev->pdev->is_virtfn && !vdev->vf_token && !vf_token)
+- return 0; /* No VF token provided or required */
+-
+ if (vdev->pdev->is_virtfn) {
+- struct vfio_pci_core_device *pf_vdev = get_pf_vdev(vdev);
++ struct vfio_pci_core_device *pf_vdev = vdev->sriov_pf_core_dev;
+ bool match;
+
+ if (!pf_vdev) {
+@@ -1620,7 +1596,6 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ }
+
+ if (!vf_token) {
+- vfio_device_put(&pf_vdev->vdev);
+ pci_info_ratelimited(vdev->pdev,
+ "VF token required to access device\n");
+ return -EACCES;
+@@ -1630,8 +1605,6 @@ static int vfio_pci_validate_vf_token(struct vfio_pci_core_device *vdev,
+ match = uuid_equal(uuid, &pf_vdev->vf_token->uuid);
+ mutex_unlock(&pf_vdev->vf_token->lock);
+
+- vfio_device_put(&pf_vdev->vdev);
+-
+ if (!match) {
+ pci_info_ratelimited(vdev->pdev,
+ "Incorrect VF token provided for device\n");
+@@ -1752,8 +1725,30 @@ static int vfio_pci_bus_notifier(struct notifier_block *nb,
+ static int vfio_pci_vf_init(struct vfio_pci_core_device *vdev)
+ {
+ struct pci_dev *pdev = vdev->pdev;
++ struct vfio_pci_core_device *cur;
++ struct pci_dev *physfn;
+ int ret;
+
++ if (pdev->is_virtfn) {
++ /*
++ * If this VF was created by our vfio_pci_core_sriov_configure()
++ * then we can find the PF vfio_pci_core_device now, and due to
++ * the locking in pci_disable_sriov() it cannot change until
++ * this VF device driver is removed.
++ */
++ physfn = pci_physfn(vdev->pdev);
++ mutex_lock(&vfio_pci_sriov_pfs_mutex);
++ list_for_each_entry(cur, &vfio_pci_sriov_pfs, sriov_pfs_item) {
++ if (cur->pdev == physfn) {
++ vdev->sriov_pf_core_dev = cur;
++ break;
++ }
++ }
++ mutex_unlock(&vfio_pci_sriov_pfs_mutex);
++ return 0;
++ }
++
++ /* Not a SRIOV PF */
+ if (!pdev->is_physfn)
+ return 0;
+
+@@ -1825,6 +1820,7 @@ void vfio_pci_core_init_device(struct vfio_pci_core_device *vdev,
+ INIT_LIST_HEAD(&vdev->ioeventfds_list);
+ mutex_init(&vdev->vma_lock);
+ INIT_LIST_HEAD(&vdev->vma_list);
++ INIT_LIST_HEAD(&vdev->sriov_pfs_item);
+ init_rwsem(&vdev->memory_lock);
+ }
+ EXPORT_SYMBOL_GPL(vfio_pci_core_init_device);
+@@ -1916,7 +1912,7 @@ void vfio_pci_core_unregister_device(struct vfio_pci_core_device *vdev)
+ {
+ struct pci_dev *pdev = vdev->pdev;
+
+- pci_disable_sriov(pdev);
++ vfio_pci_core_sriov_configure(pdev, 0);
+
+ vfio_unregister_group_dev(&vdev->vdev);
+
+@@ -1954,21 +1950,49 @@ static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev,
+
+ int vfio_pci_core_sriov_configure(struct pci_dev *pdev, int nr_virtfn)
+ {
++ struct vfio_pci_core_device *vdev;
+ struct vfio_device *device;
+ int ret = 0;
+
++ device_lock_assert(&pdev->dev);
++
+ device = vfio_device_get_from_dev(&pdev->dev);
+ if (!device)
+ return -ENODEV;
+
+- if (nr_virtfn == 0)
+- pci_disable_sriov(pdev);
+- else
++ vdev = container_of(device, struct vfio_pci_core_device, vdev);
++
++ if (nr_virtfn) {
++ mutex_lock(&vfio_pci_sriov_pfs_mutex);
++ /*
++ * The thread that adds the vdev to the list is the only thread
++ * that gets to call pci_enable_sriov() and we will only allow
++ * it to be called once without going through
++ * pci_disable_sriov()
++ */
++ if (!list_empty(&vdev->sriov_pfs_item)) {
++ ret = -EINVAL;
++ goto out_unlock;
++ }
++ list_add_tail(&vdev->sriov_pfs_item, &vfio_pci_sriov_pfs);
++ mutex_unlock(&vfio_pci_sriov_pfs_mutex);
+ ret = pci_enable_sriov(pdev, nr_virtfn);
++ if (ret)
++ goto out_del;
++ ret = nr_virtfn;
++ goto out_put;
++ }
+
+- vfio_device_put(device);
++ pci_disable_sriov(pdev);
+
+- return ret < 0 ? ret : nr_virtfn;
++out_del:
++ mutex_lock(&vfio_pci_sriov_pfs_mutex);
++ list_del_init(&vdev->sriov_pfs_item);
++out_unlock:
++ mutex_unlock(&vfio_pci_sriov_pfs_mutex);
++out_put:
++ vfio_device_put(device);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(vfio_pci_core_sriov_configure);
+
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 11273b70271d1..15b3fa6390818 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1116,11 +1116,11 @@ out_free_interp:
+ * independently randomized mmap region (0 load_bias
+ * without MAP_FIXED nor MAP_FIXED_NOREPLACE).
+ */
+- alignment = maximum_alignment(elf_phdata, elf_ex->e_phnum);
+- if (interpreter || alignment > ELF_MIN_ALIGN) {
++ if (interpreter) {
+ load_bias = ELF_ET_DYN_BASE;
+ if (current->flags & PF_RANDOMIZE)
+ load_bias += arch_mmap_rnd();
++ alignment = maximum_alignment(elf_phdata, elf_ex->e_phnum);
+ if (alignment)
+ load_bias &= ~(alignment - 1);
+ elf_flags |= MAP_FIXED_NOREPLACE;
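+
+The load_bias &= ~(alignment - 1) step rounds the randomized base down to the requested power-of-two alignment, which the hunk now applies only in the interpreter case. A tiny sketch with example values; the addresses are illustrative only:
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	int main(void)
+	{
+		uint64_t load_bias = 0x555555a31000;	/* example randomized base  */
+		uint64_t alignment = 0x200000;		/* 2 MiB, from PT_LOAD p_align */
+
+		load_bias &= ~(alignment - 1);		/* clear low bits: round down */
+		printf("aligned base: %#llx\n", (unsigned long long)load_bias);
+		return 0;
+	}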
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index a0aa6c7e23351..18e5ad5decdeb 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2479,12 +2479,6 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
+ return ERR_PTR(ret);
+ }
+
+- /*
+- * New block group is likely to be used soon. Try to activate it now.
+- * Failure is OK for now.
+- */
+- btrfs_zone_activate(cache);
+-
+ ret = exclude_super_stripes(cache);
+ if (ret) {
+ /* We may have excluded something, so call this just in case */
+@@ -2922,7 +2916,6 @@ int btrfs_start_dirty_block_groups(struct btrfs_trans_handle *trans)
+ struct btrfs_path *path = NULL;
+ LIST_HEAD(dirty);
+ struct list_head *io = &cur_trans->io_bgs;
+- int num_started = 0;
+ int loops = 0;
+
+ spin_lock(&cur_trans->dirty_bgs_lock);
+@@ -2988,7 +2981,6 @@ again:
+ cache->io_ctl.inode = NULL;
+ ret = btrfs_write_out_cache(trans, cache, path);
+ if (ret == 0 && cache->io_ctl.inode) {
+- num_started++;
+ should_put = 0;
+
+ /*
+@@ -3089,7 +3081,6 @@ int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans)
+ int should_put;
+ struct btrfs_path *path;
+ struct list_head *io = &cur_trans->io_bgs;
+- int num_started = 0;
+
+ path = btrfs_alloc_path();
+ if (!path)
+@@ -3147,7 +3138,6 @@ int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans)
+ cache->io_ctl.inode = NULL;
+ ret = btrfs_write_out_cache(trans, cache, path);
+ if (ret == 0 && cache->io_ctl.inode) {
+- num_started++;
+ should_put = 0;
+ list_add_tail(&cache->io_list, io);
+ } else {
+@@ -3431,7 +3421,7 @@ int btrfs_force_chunk_alloc(struct btrfs_trans_handle *trans, u64 type)
+ return btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
+ }
+
+-static int do_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags)
++static struct btrfs_block_group *do_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags)
+ {
+ struct btrfs_block_group *bg;
+ int ret;
+@@ -3518,7 +3508,11 @@ static int do_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags)
+ out:
+ btrfs_trans_release_chunk_metadata(trans);
+
+- return ret;
++ if (ret)
++ return ERR_PTR(ret);
++
++ btrfs_get_block_group(bg);
++ return bg;
+ }
+
+ /*
+@@ -3633,10 +3627,17 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
+ {
+ struct btrfs_fs_info *fs_info = trans->fs_info;
+ struct btrfs_space_info *space_info;
++ struct btrfs_block_group *ret_bg;
+ bool wait_for_alloc = false;
+ bool should_alloc = false;
++ bool from_extent_allocation = false;
+ int ret = 0;
+
++ if (force == CHUNK_ALLOC_FORCE_FOR_EXTENT) {
++ from_extent_allocation = true;
++ force = CHUNK_ALLOC_FORCE;
++ }
++
+ /* Don't re-enter if we're already allocating a chunk */
+ if (trans->allocating_chunk)
+ return -ENOSPC;
+@@ -3726,9 +3727,22 @@ int btrfs_chunk_alloc(struct btrfs_trans_handle *trans, u64 flags,
+ force_metadata_allocation(fs_info);
+ }
+
+- ret = do_chunk_alloc(trans, flags);
++ ret_bg = do_chunk_alloc(trans, flags);
+ trans->allocating_chunk = false;
+
++ if (IS_ERR(ret_bg)) {
++ ret = PTR_ERR(ret_bg);
++ } else if (from_extent_allocation) {
++ /*
++ * New block group is likely to be used soon. Try to activate
++ * it now. Failure is OK for now.
++ */
++ btrfs_zone_activate(ret_bg);
++ }
++
++ if (!ret)
++ btrfs_put_block_group(ret_bg);
++
+ spin_lock(&space_info->lock);
+ if (ret < 0) {
+ if (ret == -ENOSPC)
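+
+do_chunk_alloc() now returns the block group itself, using the kernel's ERR_PTR convention so a negative errno and a valid referenced pointer travel through the same return value. A minimal userspace sketch of that convention, with a hypothetical allocator in place of the real one:
+
+	#include <stdio.h>
+
+	static inline void *ERR_PTR(long error) { return (void *)error; }
+	static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
+	/* errno values live in the top 4095 addresses of the pointer space */
+	static inline int IS_ERR(const void *ptr)
+	{
+		return (unsigned long)ptr >= (unsigned long)-4095;
+	}
+
+	static void *alloc_thing(int fail)
+	{
+		static int thing = 42;
+
+		if (fail)
+			return ERR_PTR(-28);	/* -ENOSPC */
+		return &thing;
+	}
+
+	int main(void)
+	{
+		void *bg = alloc_thing(1);
+
+		if (IS_ERR(bg))
+			printf("error %ld\n", PTR_ERR(bg));
+		return 0;
+	}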
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index 5878b7ce3b78e..faa7f1d6782a0 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -35,11 +35,15 @@ enum btrfs_discard_state {
+ * the FS with empty chunks
+ *
+ * CHUNK_ALLOC_FORCE means it must try to allocate one
++ *
++ * CHUNK_ALLOC_FORCE_FOR_EXTENT is like CHUNK_ALLOC_FORCE but is called
++ * from find_free_extent(), which also activates the zone
+ */
+ enum btrfs_chunk_alloc_enum {
+ CHUNK_ALLOC_NO_FORCE,
+ CHUNK_ALLOC_LIMITED,
+ CHUNK_ALLOC_FORCE,
++ CHUNK_ALLOC_FORCE_FOR_EXTENT,
+ };
+
+ struct btrfs_caching_control {
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 6158b870a269d..93f704ba877e2 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -534,6 +534,9 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
+ cb->orig_bio = NULL;
+ cb->nr_pages = nr_pages;
+
++ if (blkcg_css)
++ kthread_associate_blkcg(blkcg_css);
++
+ while (cur_disk_bytenr < disk_start + compressed_len) {
+ u64 offset = cur_disk_bytenr - disk_start;
+ unsigned int index = offset >> PAGE_SHIFT;
+@@ -552,6 +555,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
+ bio = NULL;
+ goto finish_cb;
+ }
++ if (blkcg_css)
++ bio->bi_opf |= REQ_CGROUP_PUNT;
+ }
+ /*
+ * We should never reach next_stripe_start start as we will
+@@ -609,6 +614,9 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
+ return 0;
+
+ finish_cb:
++ if (blkcg_css)
++ kthread_associate_blkcg(NULL);
++
+ if (bio) {
+ bio->bi_status = ret;
+ bio_endio(bio);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 117afcda5affb..b43f80c3bffd9 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1826,9 +1826,10 @@ again:
+
+ ret = btrfs_insert_fs_root(fs_info, root);
+ if (ret) {
+- btrfs_put_root(root);
+- if (ret == -EEXIST)
++ if (ret == -EEXIST) {
++ btrfs_put_root(root);
+ goto again;
++ }
+ goto fail;
+ }
+ return root;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 96427b1ecac3e..e5b832d77df96 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4087,7 +4087,7 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
+ }
+
+ ret = btrfs_chunk_alloc(trans, ffe_ctl->flags,
+- CHUNK_ALLOC_FORCE);
++ CHUNK_ALLOC_FORCE_FOR_EXTENT);
+
+ /* Do not bail out on ENOSPC since we can do more. */
+ if (ret == -ENOSPC)
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 99028984340aa..e93526d86a922 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3562,7 +3562,6 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
+ u64 cur_end;
+ struct extent_map *em;
+ int ret = 0;
+- int nr = 0;
+ size_t pg_offset = 0;
+ size_t iosize;
+ size_t blocksize = inode->i_sb->s_blocksize;
+@@ -3720,9 +3719,7 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
+ end_bio_extent_readpage, 0,
+ this_bio_flag,
+ force_bio_submit);
+- if (!ret) {
+- nr++;
+- } else {
++ if (ret) {
+ unlock_extent(tree, cur, cur + iosize - 1);
+ end_page_read(page, false, cur, iosize);
+ goto out;
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index a0179cc62913b..28ddd9cf20692 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2918,8 +2918,9 @@ out:
+ return ret;
+ }
+
+-static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
++static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
+ {
++ struct inode *inode = file_inode(file);
+ struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ struct btrfs_root *root = BTRFS_I(inode)->root;
+ struct extent_state *cached_state = NULL;
+@@ -2951,6 +2952,10 @@ static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ goto out_only_mutex;
+ }
+
++ ret = file_modified(file);
++ if (ret)
++ goto out_only_mutex;
++
+ lockstart = round_up(offset, btrfs_inode_sectorsize(BTRFS_I(inode)));
+ lockend = round_down(offset + len,
+ btrfs_inode_sectorsize(BTRFS_I(inode))) - 1;
+@@ -3391,7 +3396,7 @@ static long btrfs_fallocate(struct file *file, int mode,
+ return -EOPNOTSUPP;
+
+ if (mode & FALLOC_FL_PUNCH_HOLE)
+- return btrfs_punch_hole(inode, offset, len);
++ return btrfs_punch_hole(file, offset, len);
+
+ /*
+ * Only trigger disk allocation, don't trigger qgroup reserve
+@@ -3413,6 +3418,10 @@ static long btrfs_fallocate(struct file *file, int mode,
+ goto out;
+ }
+
++ ret = file_modified(file);
++ if (ret)
++ goto out;
++
+ /*
+ * TODO: Move these two operations after we have checked
+ * accurate reserved space, or fallocate can still fail but
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index ec4950cfe1ea9..9547088a93066 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1130,7 +1130,6 @@ static noinline int cow_file_range(struct btrfs_inode *inode,
+ int ret = 0;
+
+ if (btrfs_is_free_space_inode(inode)) {
+- WARN_ON_ONCE(1);
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+@@ -7423,6 +7422,7 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ u64 block_start, orig_start, orig_block_len, ram_bytes;
+ bool can_nocow = false;
+ bool space_reserved = false;
++ u64 prev_len;
+ int ret = 0;
+
+ /*
+@@ -7450,6 +7450,7 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ can_nocow = true;
+ }
+
++ prev_len = len;
+ if (can_nocow) {
+ struct extent_map *em2;
+
+@@ -7479,8 +7480,6 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ goto out;
+ }
+ } else {
+- const u64 prev_len = len;
+-
+ /* Our caller expects us to free the input extent map. */
+ free_extent_map(em);
+ *map = NULL;
+@@ -7511,7 +7510,7 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ * We have created our ordered extent, so we can now release our reservation
+ * for an outstanding extent.
+ */
+- btrfs_delalloc_release_extents(BTRFS_I(inode), len);
++ btrfs_delalloc_release_extents(BTRFS_I(inode), prev_len);
+
+ /*
+ * Need to update the i_size under the extent lock so buffered
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f5688a25fca35..b0dfcc7a4225c 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4467,10 +4467,12 @@ static int balance_kthread(void *data)
+ struct btrfs_fs_info *fs_info = data;
+ int ret = 0;
+
++ sb_start_write(fs_info->sb);
+ mutex_lock(&fs_info->balance_mutex);
+ if (fs_info->balance_ctl)
+ ret = btrfs_balance(fs_info, fs_info->balance_ctl, NULL);
+ mutex_unlock(&fs_info->balance_mutex);
++ sb_end_write(fs_info->sb);
+
+ return ret;
+ }
+diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
+index f256c8aff7bb5..ca9f3e4ec4b3f 100644
+--- a/fs/cachefiles/namei.c
++++ b/fs/cachefiles/namei.c
+@@ -57,6 +57,16 @@ static void __cachefiles_unmark_inode_in_use(struct cachefiles_object *object,
+ trace_cachefiles_mark_inactive(object, inode);
+ }
+
++static void cachefiles_do_unmark_inode_in_use(struct cachefiles_object *object,
++ struct dentry *dentry)
++{
++ struct inode *inode = d_backing_inode(dentry);
++
++ inode_lock(inode);
++ __cachefiles_unmark_inode_in_use(object, dentry);
++ inode_unlock(inode);
++}
++
+ /*
+ * Unmark a backing inode and tell cachefilesd that there's something that can
+ * be culled.
+@@ -68,9 +78,7 @@ void cachefiles_unmark_inode_in_use(struct cachefiles_object *object,
+ struct inode *inode = file_inode(file);
+
+ if (inode) {
+- inode_lock(inode);
+- __cachefiles_unmark_inode_in_use(object, file->f_path.dentry);
+- inode_unlock(inode);
++ cachefiles_do_unmark_inode_in_use(object, file->f_path.dentry);
+
+ if (!test_bit(CACHEFILES_OBJECT_USING_TMPFILE, &object->flags)) {
+ atomic_long_add(inode->i_blocks, &cache->b_released);
+@@ -484,7 +492,7 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
+ object, d_backing_inode(path.dentry), ret,
+ cachefiles_trace_trunc_error);
+ file = ERR_PTR(ret);
+- goto out_dput;
++ goto out_unuse;
+ }
+ }
+
+@@ -494,15 +502,20 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
+ trace_cachefiles_vfs_error(object, d_backing_inode(path.dentry),
+ PTR_ERR(file),
+ cachefiles_trace_open_error);
+- goto out_dput;
++ goto out_unuse;
+ }
+ if (unlikely(!file->f_op->read_iter) ||
+ unlikely(!file->f_op->write_iter)) {
+ fput(file);
+ pr_notice("Cache does not support read_iter and write_iter\n");
+ file = ERR_PTR(-EINVAL);
++ goto out_unuse;
+ }
+
++ goto out_dput;
++
++out_unuse:
++ cachefiles_do_unmark_inode_in_use(object, path.dentry);
+ out_dput:
+ dput(path.dentry);
+ out:
+@@ -590,14 +603,16 @@ static bool cachefiles_open_file(struct cachefiles_object *object,
+ check_failed:
+ fscache_cookie_lookup_negative(object->cookie);
+ cachefiles_unmark_inode_in_use(object, file);
+- if (ret == -ESTALE) {
+- fput(file);
+- dput(dentry);
++ fput(file);
++ dput(dentry);
++ if (ret == -ESTALE)
+ return cachefiles_create_file(object);
+- }
++ return false;
++
+ error_fput:
+ fput(file);
+ error:
++ cachefiles_do_unmark_inode_in_use(object, dentry);
+ dput(dentry);
+ return false;
+ }
+diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
+index 35465109d9c4e..00b087c14995a 100644
+--- a/fs/cachefiles/xattr.c
++++ b/fs/cachefiles/xattr.c
+@@ -203,7 +203,7 @@ bool cachefiles_set_volume_xattr(struct cachefiles_volume *volume)
+ if (!buf)
+ return false;
+ buf->reserved = cpu_to_be32(0);
+- memcpy(buf->data, p, len);
++ memcpy(buf->data, p, volume->vcookie->coherency_len);
+
+ ret = cachefiles_inject_write_error();
+ if (ret == 0)
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 6e5246122ee2c..792fdcfdc6add 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -266,22 +266,24 @@ static void cifs_kill_sb(struct super_block *sb)
+ * before we kill the sb.
+ */
+ if (cifs_sb->root) {
++ for (node = rb_first(root); node; node = rb_next(node)) {
++ tlink = rb_entry(node, struct tcon_link, tl_rbnode);
++ tcon = tlink_tcon(tlink);
++ if (IS_ERR(tcon))
++ continue;
++ cfid = &tcon->crfid;
++ mutex_lock(&cfid->fid_mutex);
++ if (cfid->dentry) {
++ dput(cfid->dentry);
++ cfid->dentry = NULL;
++ }
++ mutex_unlock(&cfid->fid_mutex);
++ }
++
++ /* finally release root dentry */
+ dput(cifs_sb->root);
+ cifs_sb->root = NULL;
+ }
+- node = rb_first(root);
+- while (node != NULL) {
+- tlink = rb_entry(node, struct tcon_link, tl_rbnode);
+- tcon = tlink_tcon(tlink);
+- cfid = &tcon->crfid;
+- mutex_lock(&cfid->fid_mutex);
+- if (cfid->dentry) {
+- dput(cfid->dentry);
+- cfid->dentry = NULL;
+- }
+- mutex_unlock(&cfid->fid_mutex);
+- node = rb_next(node);
+- }
+
+ kill_anon_super(sb);
+ cifs_umount(cifs_sb);
+diff --git a/fs/cifs/link.c b/fs/cifs/link.c
+index 852e54ee82c28..bbdf3281559c8 100644
+--- a/fs/cifs/link.c
++++ b/fs/cifs/link.c
+@@ -85,6 +85,9 @@ parse_mf_symlink(const u8 *buf, unsigned int buf_len, unsigned int *_link_len,
+ if (rc != 1)
+ return -EINVAL;
+
++ if (link_len > CIFS_MF_SYMLINK_LINK_MAXLEN)
++ return -EINVAL;
++
+ rc = symlink_hash(link_len, link_str, md5_hash);
+ if (rc) {
+ cifs_dbg(FYI, "%s: MD5 hash failure: %d\n", __func__, rc);
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index 04d374e65e546..dbecd27656c7c 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -155,7 +155,6 @@ struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
+ struct io_wq_work {
+ struct io_wq_work_node list;
+ unsigned flags;
+- int fd;
+ };
+
+ static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
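
This hunk and the io_uring.c change below move the submission-time fd out of io_wq_work and into io_kiocb, where it shares one word with the completion cflags: the fd is only meaningful while the request is being set up, and cflags only once it completes, so a union lets the same storage serve both phases. A rough self-contained sketch of that pattern (the struct and values are illustrative):

    #include <stdio.h>

    struct request {
        /* fd initially, then cflags for completion -- never both. */
        union {
            unsigned int cflags;    /* valid once completed */
            int fd;                 /* valid during setup */
        };
    };

    int main(void)
    {
        struct request req = { .fd = 3 };

        printf("submitting with fd %d\n", req.fd);
        req.cflags = 0x1;    /* setup done; the fd is no longer needed */
        printf("completed with cflags %#x\n", req.cflags);
        return 0;
    }
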
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 1a9630dc5361b..619c67fd456dd 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -864,7 +864,11 @@ struct io_kiocb {
+
+ u64 user_data;
+ u32 result;
+- u32 cflags;
++ /* fd initially, then cflags for completion */
++ union {
++ u32 cflags;
++ int fd;
++ };
+
+ struct io_ring_ctx *ctx;
+ struct task_struct *task;
+@@ -4136,7 +4140,7 @@ static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
+ return -EAGAIN;
+
+ if (sp->flags & SPLICE_F_FD_IN_FIXED)
+- in = io_file_get_fixed(req, sp->splice_fd_in, IO_URING_F_UNLOCKED);
++ in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
+ else
+ in = io_file_get_normal(req, sp->splice_fd_in);
+ if (!in) {
+@@ -4178,7 +4182,7 @@ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
+ return -EAGAIN;
+
+ if (sp->flags & SPLICE_F_FD_IN_FIXED)
+- in = io_file_get_fixed(req, sp->splice_fd_in, IO_URING_F_UNLOCKED);
++ in = io_file_get_fixed(req, sp->splice_fd_in, issue_flags);
+ else
+ in = io_file_get_normal(req, sp->splice_fd_in);
+ if (!in) {
+@@ -5506,11 +5510,11 @@ static int io_poll_check_events(struct io_kiocb *req, bool locked)
+
+ if (!req->result) {
+ struct poll_table_struct pt = { ._key = poll->events };
++ unsigned flags = locked ? 0 : IO_URING_F_UNLOCKED;
+
+- if (unlikely(!io_assign_file(req, IO_URING_F_UNLOCKED)))
+- req->result = -EBADF;
+- else
+- req->result = vfs_poll(req->file, &pt) & poll->events;
++ if (unlikely(!io_assign_file(req, flags)))
++ return -EBADF;
++ req->result = vfs_poll(req->file, &pt) & poll->events;
+ }
+
+ /* multishot, just fill an CQE and proceed */
+@@ -6462,6 +6466,7 @@ static int io_files_update(struct io_kiocb *req, unsigned int issue_flags)
+ up.nr = 0;
+ up.tags = 0;
+ up.resv = 0;
++ up.resv2 = 0;
+
+ io_ring_submit_lock(ctx, needs_lock);
+ ret = __io_register_rsrc_update(ctx, IORING_RSRC_FILE,
+@@ -6708,9 +6713,9 @@ static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags)
+ return true;
+
+ if (req->flags & REQ_F_FIXED_FILE)
+- req->file = io_file_get_fixed(req, req->work.fd, issue_flags);
++ req->file = io_file_get_fixed(req, req->fd, issue_flags);
+ else
+- req->file = io_file_get_normal(req, req->work.fd);
++ req->file = io_file_get_normal(req, req->fd);
+ if (req->file)
+ return true;
+
+@@ -6724,13 +6729,14 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+ const struct cred *creds = NULL;
+ int ret;
+
++ if (unlikely(!io_assign_file(req, issue_flags)))
++ return -EBADF;
++
+ if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
+ creds = override_creds(req->creds);
+
+ if (!io_op_defs[req->opcode].audit_skip)
+ audit_uring_entry(req->opcode);
+- if (unlikely(!io_assign_file(req, issue_flags)))
+- return -EBADF;
+
+ switch (req->opcode) {
+ case IORING_OP_NOP:
+@@ -6888,16 +6894,18 @@ static void io_wq_submit_work(struct io_wq_work *work)
+ if (timeout)
+ io_queue_linked_timeout(timeout);
+
+- if (!io_assign_file(req, issue_flags)) {
+- err = -EBADF;
+- work->flags |= IO_WQ_WORK_CANCEL;
+- }
+
+ /* either cancelled or io-wq is dying, so don't touch tctx->iowq */
+ if (work->flags & IO_WQ_WORK_CANCEL) {
++fail:
+ io_req_task_queue_fail(req, err);
+ return;
+ }
++ if (!io_assign_file(req, issue_flags)) {
++ err = -EBADF;
++ work->flags |= IO_WQ_WORK_CANCEL;
++ goto fail;
++ }
+
+ if (req->flags & REQ_F_FORCE_ASYNC) {
+ bool opcode_poll = def->pollin || def->pollout;
+@@ -7243,7 +7251,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ if (io_op_defs[opcode].needs_file) {
+ struct io_submit_state *state = &ctx->submit_state;
+
+- req->work.fd = READ_ONCE(sqe->fd);
++ req->fd = READ_ONCE(sqe->fd);
+
+ /*
+ * Plug now if we have more than 2 IO left after this, and the
+@@ -8528,13 +8536,15 @@ static int io_sqe_file_register(struct io_ring_ctx *ctx, struct file *file,
+ static int io_queue_rsrc_removal(struct io_rsrc_data *data, unsigned idx,
+ struct io_rsrc_node *node, void *rsrc)
+ {
++ u64 *tag_slot = io_get_tag_slot(data, idx);
+ struct io_rsrc_put *prsrc;
+
+ prsrc = kzalloc(sizeof(*prsrc), GFP_KERNEL);
+ if (!prsrc)
+ return -ENOMEM;
+
+- prsrc->tag = *io_get_tag_slot(data, idx);
++ prsrc->tag = *tag_slot;
++ *tag_slot = 0;
+ prsrc->rsrc = rsrc;
+ list_add(&prsrc->list, &node->rsrc_list);
+ return 0;
+@@ -8603,7 +8613,7 @@ static int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags)
+ bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
+ struct io_fixed_file *file_slot;
+ struct file *file;
+- int ret, i;
++ int ret;
+
+ io_ring_submit_lock(ctx, needs_lock);
+ ret = -ENXIO;
+@@ -8616,8 +8626,8 @@ static int io_close_fixed(struct io_kiocb *req, unsigned int issue_flags)
+ if (ret)
+ goto out;
+
+- i = array_index_nospec(offset, ctx->nr_user_files);
+- file_slot = io_fixed_file_slot(&ctx->file_table, i);
++ offset = array_index_nospec(offset, ctx->nr_user_files);
++ file_slot = io_fixed_file_slot(&ctx->file_table, offset);
+ ret = -EBADF;
+ if (!file_slot->file_ptr)
+ goto out;
+@@ -8673,8 +8683,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+
+ if (file_slot->file_ptr) {
+ file = (struct file *)(file_slot->file_ptr & FFS_MASK);
+- err = io_queue_rsrc_removal(data, up->offset + done,
+- ctx->rsrc_node, file);
++ err = io_queue_rsrc_removal(data, i, ctx->rsrc_node, file);
+ if (err)
+ break;
+ file_slot->file_ptr = 0;
+@@ -9347,7 +9356,7 @@ static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
+
+ i = array_index_nospec(offset, ctx->nr_user_bufs);
+ if (ctx->user_bufs[i] != ctx->dummy_ubuf) {
+- err = io_queue_rsrc_removal(ctx->buf_data, offset,
++ err = io_queue_rsrc_removal(ctx->buf_data, i,
+ ctx->rsrc_node, ctx->user_bufs[i]);
+ if (unlikely(err)) {
+ io_buffer_unmap(ctx, &imu);
+@@ -10102,6 +10111,8 @@ static int io_get_ext_arg(unsigned flags, const void __user *argp, size_t *argsz
+ return -EINVAL;
+ if (copy_from_user(&arg, argp, sizeof(arg)))
+ return -EFAULT;
++ if (arg.pad)
++ return -EINVAL;
+ *sig = u64_to_user_ptr(arg.sigmask);
+ *argsz = arg.sigmask_sz;
+ *ts = u64_to_user_ptr(arg.ts);
+@@ -10566,7 +10577,8 @@ static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
+ IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
+ IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
+ IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
+- IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP;
++ IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
++ IORING_FEAT_LINKED_FILE;
+
+ if (copy_to_user(params, p, sizeof(*p))) {
+ ret = -EFAULT;
+@@ -10777,8 +10789,6 @@ static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
+ __u32 tmp;
+ int err;
+
+- if (up->resv)
+- return -EINVAL;
+ if (check_add_overflow(up->offset, nr_args, &tmp))
+ return -EOVERFLOW;
+ err = io_rsrc_node_switch_start(ctx);
+@@ -10804,6 +10814,8 @@ static int io_register_files_update(struct io_ring_ctx *ctx, void __user *arg,
+ memset(&up, 0, sizeof(up));
+ if (copy_from_user(&up, arg, sizeof(struct io_uring_rsrc_update)))
+ return -EFAULT;
++ if (up.resv || up.resv2)
++ return -EINVAL;
+ return __io_register_rsrc_update(ctx, IORING_RSRC_FILE, &up, nr_args);
+ }
+
+@@ -10816,7 +10828,7 @@ static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
+ return -EINVAL;
+ if (copy_from_user(&up, arg, sizeof(up)))
+ return -EFAULT;
+- if (!up.nr || up.resv)
++ if (!up.nr || up.resv || up.resv2)
+ return -EINVAL;
+ return __io_register_rsrc_update(ctx, type, &up, up.nr);
+ }
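
Several of the io_uring hunks above route a user-controlled slot index through array_index_nospec() after the bounds check. The helper clamps the index with a branchless mask so that even a mispredicted bounds check cannot speculatively read out of range (Spectre v1). A simplified userspace sketch of the idea; the real include/linux/nospec.h computes the mask with arithmetic the compiler cannot convert back into a branch:

    #include <stdio.h>

    /* All-ones mask when index < size, all-zeroes otherwise. */
    static inline unsigned long index_mask(unsigned long index,
                                           unsigned long size)
    {
        return 0UL - (unsigned long)(index < size);
    }

    #define array_index_nospec(index, size) \
        ((index) & index_mask((index), (size)))

    int main(void)
    {
        unsigned long files[4] = { 10, 11, 12, 13 };
        unsigned long idx = 2;

        /* The architectural bounds check still comes first; the mask
         * only guards the speculative window behind it. */
        if (idx < 4)
            printf("%lu\n", files[array_index_nospec(idx, 4)]);
        return 0;
    }
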
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index cc2831cec6695..496f7b3f75237 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -235,6 +235,13 @@ nfsd_file_check_write_error(struct nfsd_file *nf)
+ return filemap_check_wb_err(file->f_mapping, READ_ONCE(file->f_wb_err));
+ }
+
++static void
++nfsd_file_flush(struct nfsd_file *nf)
++{
++ if (nf->nf_file && vfs_fsync(nf->nf_file, 1) != 0)
++ nfsd_reset_write_verifier(net_generic(nf->nf_net, nfsd_net_id));
++}
++
+ static void
+ nfsd_file_do_unhash(struct nfsd_file *nf)
+ {
+@@ -302,11 +309,14 @@ nfsd_file_put(struct nfsd_file *nf)
+ return;
+ }
+
+- filemap_flush(nf->nf_file->f_mapping);
+ is_hashed = test_bit(NFSD_FILE_HASHED, &nf->nf_flags) != 0;
+- nfsd_file_put_noref(nf);
+- if (is_hashed)
++ if (!is_hashed) {
++ nfsd_file_flush(nf);
++ nfsd_file_put_noref(nf);
++ } else {
++ nfsd_file_put_noref(nf);
+ nfsd_file_schedule_laundrette();
++ }
+ if (atomic_long_read(&nfsd_filecache_count) >= NFSD_FILE_LRU_LIMIT)
+ nfsd_file_gc();
+ }
+@@ -327,6 +337,7 @@ nfsd_file_dispose_list(struct list_head *dispose)
+ while(!list_empty(dispose)) {
+ nf = list_first_entry(dispose, struct nfsd_file, nf_lru);
+ list_del(&nf->nf_lru);
++ nfsd_file_flush(nf);
+ nfsd_file_put_noref(nf);
+ }
+ }
+@@ -340,6 +351,7 @@ nfsd_file_dispose_list_sync(struct list_head *dispose)
+ while(!list_empty(dispose)) {
+ nf = list_first_entry(dispose, struct nfsd_file, nf_lru);
+ list_del(&nf->nf_lru);
++ nfsd_file_flush(nf);
+ if (!refcount_dec_and_test(&nf->nf_ref))
+ continue;
+ if (nfsd_file_free(nf))
+diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
+index c08758b6b3642..c05d2ce9b6cd8 100644
+--- a/include/asm-generic/mshyperv.h
++++ b/include/asm-generic/mshyperv.h
+@@ -269,6 +269,7 @@ bool hv_isolation_type_snp(void);
+ u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size);
+ void hyperv_cleanup(void);
+ bool hv_query_ext_cap(u64 cap_query);
++void hv_setup_dma_ops(struct device *dev, bool coherent);
+ void *hv_map_memory(void *addr, unsigned long size);
+ void hv_unmap_memory(void *addr);
+ #else /* CONFIG_HYPERV */
+diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
+index 2c68a545ffa7d..71942a1c642d4 100644
+--- a/include/asm-generic/tlb.h
++++ b/include/asm-generic/tlb.h
+@@ -565,10 +565,14 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
+ #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address) \
+ do { \
+ unsigned long _sz = huge_page_size(h); \
+- if (_sz == PMD_SIZE) \
+- tlb_flush_pmd_range(tlb, address, _sz); \
+- else if (_sz == PUD_SIZE) \
++ if (_sz >= P4D_SIZE) \
++ tlb_flush_p4d_range(tlb, address, _sz); \
++ else if (_sz >= PUD_SIZE) \
+ tlb_flush_pud_range(tlb, address, _sz); \
++ else if (_sz >= PMD_SIZE) \
++ tlb_flush_pmd_range(tlb, address, _sz); \
++ else \
++ tlb_flush_pte_range(tlb, address, _sz); \
+ __tlb_remove_tlb_entry(tlb, ptep, address); \
+ } while (0)
+
+diff --git a/include/linux/kfence.h b/include/linux/kfence.h
+index f49e64222628a..726857a4b6805 100644
+--- a/include/linux/kfence.h
++++ b/include/linux/kfence.h
+@@ -204,6 +204,22 @@ static __always_inline __must_check bool kfence_free(void *addr)
+ */
+ bool __must_check kfence_handle_page_fault(unsigned long addr, bool is_write, struct pt_regs *regs);
+
++#ifdef CONFIG_PRINTK
++struct kmem_obj_info;
++/**
++ * __kfence_obj_info() - fill kmem_obj_info struct
++ * @kpp: kmem_obj_info to be filled
++ * @object: the object
++ *
++ * Return:
++ * * false - not a KFENCE object
++ * * true - a KFENCE object, filled @kpp
++ *
++ * Copies information to @kpp for KFENCE objects.
++ */
++bool __kfence_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
++#endif
++
+ #else /* CONFIG_KFENCE */
+
+ static inline bool is_kfence_address(const void *addr) { return false; }
+@@ -221,6 +237,14 @@ static inline bool __must_check kfence_handle_page_fault(unsigned long addr, boo
+ return false;
+ }
+
++#ifdef CONFIG_PRINTK
++struct kmem_obj_info;
++static inline bool __kfence_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
++{
++ return false;
++}
++#endif
++
+ #endif
+
+ #endif /* _LINUX_KFENCE_H */
+diff --git a/include/linux/static_call.h b/include/linux/static_call.h
+index fcc5b48989b3c..3c50b0fdda163 100644
+--- a/include/linux/static_call.h
++++ b/include/linux/static_call.h
+@@ -196,6 +196,14 @@ extern long __static_call_return0(void);
+ }; \
+ ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
+
++#define DEFINE_STATIC_CALL_RET0(name, _func) \
++ DECLARE_STATIC_CALL(name, _func); \
++ struct static_call_key STATIC_CALL_KEY(name) = { \
++ .func = __static_call_return0, \
++ .type = 1, \
++ }; \
++ ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
++
+ #define static_call_cond(name) (void)__static_call(name)
+
+ #define EXPORT_STATIC_CALL(name) \
+@@ -231,6 +239,12 @@ static inline int static_call_init(void) { return 0; }
+ }; \
+ ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name)
+
++#define DEFINE_STATIC_CALL_RET0(name, _func) \
++ DECLARE_STATIC_CALL(name, _func); \
++ struct static_call_key STATIC_CALL_KEY(name) = { \
++ .func = __static_call_return0, \
++ }; \
++ ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name)
+
+ #define static_call_cond(name) (void)__static_call(name)
+
+@@ -284,6 +298,9 @@ static inline long __static_call_return0(void)
+ .func = NULL, \
+ }
+
++#define DEFINE_STATIC_CALL_RET0(name, _func) \
++ __DEFINE_STATIC_CALL(name, _func, __static_call_return0)
++
+ static inline void __static_call_nop(void) { }
+
+ /*
+@@ -327,7 +344,4 @@ static inline int static_call_text_reserved(void *start, void *end)
+ #define DEFINE_STATIC_CALL(name, _func) \
+ __DEFINE_STATIC_CALL(name, _func, _func)
+
+-#define DEFINE_STATIC_CALL_RET0(name, _func) \
+- __DEFINE_STATIC_CALL(name, _func, __static_call_return0)
+-
+ #endif /* _LINUX_STATIC_CALL_H */
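
DEFINE_STATIC_CALL_RET0(), which the hunks above provide for all three configuration variants, declares a static call whose initial target is __static_call_return0(). Static calls patch the call site itself, but a rough userspace analogy is a function pointer that starts out at a return-zero stub and is retargeted later (illustrative only, not the real mechanism):

    #include <stdio.h>

    static long ret0(void) { return 0; }
    static long real_impl(void) { return 42; }

    /* cf. DEFINE_STATIC_CALL_RET0(my_call, ...): callable immediately,
     * returning 0, before any real target is installed. */
    static long (*my_call)(void) = ret0;

    int main(void)
    {
        printf("%ld\n", my_call());    /* 0: default stub */
        my_call = real_impl;           /* cf. static_call_update() */
        printf("%ld\n", my_call());    /* 42 */
        return 0;
    }
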
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index f35c22b3355ff..66b49afb9e693 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -412,6 +412,7 @@ struct svc_deferred_req {
+ size_t addrlen;
+ struct sockaddr_storage daddr; /* where reply must come from */
+ size_t daddrlen;
++ void *xprt_ctxt;
+ struct cache_deferred_req handle;
+ size_t xprt_hlen;
+ int argslen;
+diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
+index ae6f4838ab755..6e5db4edc3359 100644
+--- a/include/linux/vfio_pci_core.h
++++ b/include/linux/vfio_pci_core.h
+@@ -133,6 +133,8 @@ struct vfio_pci_core_device {
+ struct mutex ioeventfds_lock;
+ struct list_head ioeventfds_list;
+ struct vfio_pci_vf_token *vf_token;
++ struct list_head sriov_pfs_item;
++ struct vfio_pci_core_device *sriov_pf_core_dev;
+ struct notifier_block nb;
+ struct mutex vma_lock;
+ struct list_head vma_list;
+diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h
+index aa33e1092e2c4..9f65f1bfbd246 100644
+--- a/include/net/flow_dissector.h
++++ b/include/net/flow_dissector.h
+@@ -59,6 +59,8 @@ struct flow_dissector_key_vlan {
+ __be16 vlan_tci;
+ };
+ __be16 vlan_tpid;
++ __be16 vlan_eth_type;
++ u16 padding;
+ };
+
+ struct flow_dissector_mpls_lse {
+diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
+index c5d7810fd7926..037c77fb5dc55 100644
+--- a/include/scsi/scsi_transport_iscsi.h
++++ b/include/scsi/scsi_transport_iscsi.h
+@@ -211,6 +211,8 @@ struct iscsi_cls_conn {
+ struct mutex ep_mutex;
+ struct iscsi_endpoint *ep;
+
++ /* Used when accessing flags and queueing work. */
++ spinlock_t lock;
+ unsigned long flags;
+ struct work_struct cleanup_work;
+
+diff --git a/include/sound/core.h b/include/sound/core.h
+index b7e9b58d3c788..6d4cc49584c63 100644
+--- a/include/sound/core.h
++++ b/include/sound/core.h
+@@ -284,6 +284,7 @@ int snd_card_disconnect(struct snd_card *card);
+ void snd_card_disconnect_sync(struct snd_card *card);
+ int snd_card_free(struct snd_card *card);
+ int snd_card_free_when_closed(struct snd_card *card);
++int snd_card_free_on_error(struct device *dev, int ret);
+ void snd_card_set_id(struct snd_card *card, const char *id);
+ int snd_card_register(struct snd_card *card);
+ int snd_card_info_init(void);
+diff --git a/include/sound/memalloc.h b/include/sound/memalloc.h
+index 653dfffb3ac84..8d79cebf95f32 100644
+--- a/include/sound/memalloc.h
++++ b/include/sound/memalloc.h
+@@ -51,6 +51,11 @@ struct snd_dma_device {
+ #define SNDRV_DMA_TYPE_DEV_SG SNDRV_DMA_TYPE_DEV /* no SG-buf support */
+ #define SNDRV_DMA_TYPE_DEV_WC_SG SNDRV_DMA_TYPE_DEV_WC
+ #endif
++/* fallback types, don't use those directly */
++#ifdef CONFIG_SND_DMA_SGBUF
++#define SNDRV_DMA_TYPE_DEV_SG_FALLBACK 10
++#define SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK 11
++#endif
+
+ /*
+ * info for buffer allocation
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 5be3faf88c1a1..06fe47fb3686a 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1956,17 +1956,18 @@ DECLARE_EVENT_CLASS(svc_deferred_event,
+ TP_STRUCT__entry(
+ __field(const void *, dr)
+ __field(u32, xid)
+- __string(addr, dr->xprt->xpt_remotebuf)
++ __array(__u8, addr, INET6_ADDRSTRLEN + 10)
+ ),
+
+ TP_fast_assign(
+ __entry->dr = dr;
+ __entry->xid = be32_to_cpu(*(__be32 *)(dr->args +
+ (dr->xprt_hlen>>2)));
+- __assign_str(addr, dr->xprt->xpt_remotebuf);
++ snprintf(__entry->addr, sizeof(__entry->addr) - 1,
++ "%pISpc", (struct sockaddr *)&dr->addr);
+ ),
+
+- TP_printk("addr=%s dr=%p xid=0x%08x", __get_str(addr), __entry->dr,
++ TP_printk("addr=%s dr=%p xid=0x%08x", __entry->addr, __entry->dr,
+ __entry->xid)
+ );
+
+diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
+index 787f491f0d2ae..1e45368ad33fd 100644
+--- a/include/uapi/linux/io_uring.h
++++ b/include/uapi/linux/io_uring.h
+@@ -293,6 +293,7 @@ struct io_uring_params {
+ #define IORING_FEAT_NATIVE_WORKERS (1U << 9)
+ #define IORING_FEAT_RSRC_TAGS (1U << 10)
+ #define IORING_FEAT_CQE_SKIP (1U << 11)
++#define IORING_FEAT_LINKED_FILE (1U << 12)
+
+ /*
+ * io_uring_register(2) opcodes and arguments
+diff --git a/include/uapi/linux/stddef.h b/include/uapi/linux/stddef.h
+index 3021ea25a2849..7837ba4fe7289 100644
+--- a/include/uapi/linux/stddef.h
++++ b/include/uapi/linux/stddef.h
+@@ -1,4 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
++#ifndef _UAPI_LINUX_STDDEF_H
++#define _UAPI_LINUX_STDDEF_H
++
+ #include <linux/compiler_types.h>
+
+ #ifndef __always_inline
+@@ -41,3 +44,4 @@
+ struct { } __empty_ ## NAME; \
+ TYPE NAME[]; \
+ }
++#endif
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 407a2568f35eb..5601216eb51bd 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -70,7 +70,6 @@ struct cpuhp_cpu_state {
+ bool rollback;
+ bool single;
+ bool bringup;
+- int cpu;
+ struct hlist_node *node;
+ struct hlist_node *last;
+ enum cpuhp_state cb_state;
+@@ -474,7 +473,7 @@ static inline bool cpu_smt_allowed(unsigned int cpu) { return true; }
+ #endif
+
+ static inline enum cpuhp_state
+-cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target)
++cpuhp_set_state(int cpu, struct cpuhp_cpu_state *st, enum cpuhp_state target)
+ {
+ enum cpuhp_state prev_state = st->state;
+ bool bringup = st->state < target;
+@@ -485,14 +484,15 @@ cpuhp_set_state(struct cpuhp_cpu_state *st, enum cpuhp_state target)
+ st->target = target;
+ st->single = false;
+ st->bringup = bringup;
+- if (cpu_dying(st->cpu) != !bringup)
+- set_cpu_dying(st->cpu, !bringup);
++ if (cpu_dying(cpu) != !bringup)
++ set_cpu_dying(cpu, !bringup);
+
+ return prev_state;
+ }
+
+ static inline void
+-cpuhp_reset_state(struct cpuhp_cpu_state *st, enum cpuhp_state prev_state)
++cpuhp_reset_state(int cpu, struct cpuhp_cpu_state *st,
++ enum cpuhp_state prev_state)
+ {
+ bool bringup = !st->bringup;
+
+@@ -519,8 +519,8 @@ cpuhp_reset_state(struct cpuhp_cpu_state *st, enum cpuhp_state prev_state)
+ }
+
+ st->bringup = bringup;
+- if (cpu_dying(st->cpu) != !bringup)
+- set_cpu_dying(st->cpu, !bringup);
++ if (cpu_dying(cpu) != !bringup)
++ set_cpu_dying(cpu, !bringup);
+ }
+
+ /* Regular hotplug invocation of the AP hotplug thread */
+@@ -540,15 +540,16 @@ static void __cpuhp_kick_ap(struct cpuhp_cpu_state *st)
+ wait_for_ap_thread(st, st->bringup);
+ }
+
+-static int cpuhp_kick_ap(struct cpuhp_cpu_state *st, enum cpuhp_state target)
++static int cpuhp_kick_ap(int cpu, struct cpuhp_cpu_state *st,
++ enum cpuhp_state target)
+ {
+ enum cpuhp_state prev_state;
+ int ret;
+
+- prev_state = cpuhp_set_state(st, target);
++ prev_state = cpuhp_set_state(cpu, st, target);
+ __cpuhp_kick_ap(st);
+ if ((ret = st->result)) {
+- cpuhp_reset_state(st, prev_state);
++ cpuhp_reset_state(cpu, st, prev_state);
+ __cpuhp_kick_ap(st);
+ }
+
+@@ -580,7 +581,7 @@ static int bringup_wait_for_ap(unsigned int cpu)
+ if (st->target <= CPUHP_AP_ONLINE_IDLE)
+ return 0;
+
+- return cpuhp_kick_ap(st, st->target);
++ return cpuhp_kick_ap(cpu, st, st->target);
+ }
+
+ static int bringup_cpu(unsigned int cpu)
+@@ -703,7 +704,7 @@ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ ret, cpu, cpuhp_get_step(st->state)->name,
+ st->state);
+
+- cpuhp_reset_state(st, prev_state);
++ cpuhp_reset_state(cpu, st, prev_state);
+ if (can_rollback_cpu(st))
+ WARN_ON(cpuhp_invoke_callback_range(false, cpu, st,
+ prev_state));
+@@ -720,7 +721,6 @@ static void cpuhp_create(unsigned int cpu)
+
+ init_completion(&st->done_up);
+ init_completion(&st->done_down);
+- st->cpu = cpu;
+ }
+
+ static int cpuhp_should_run(unsigned int cpu)
+@@ -874,7 +874,7 @@ static int cpuhp_kick_ap_work(unsigned int cpu)
+ cpuhp_lock_release(true);
+
+ trace_cpuhp_enter(cpu, st->target, prev_state, cpuhp_kick_ap_work);
+- ret = cpuhp_kick_ap(st, st->target);
++ ret = cpuhp_kick_ap(cpu, st, st->target);
+ trace_cpuhp_exit(cpu, st->state, prev_state, ret);
+
+ return ret;
+@@ -1106,7 +1106,7 @@ static int cpuhp_down_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ ret, cpu, cpuhp_get_step(st->state)->name,
+ st->state);
+
+- cpuhp_reset_state(st, prev_state);
++ cpuhp_reset_state(cpu, st, prev_state);
+
+ if (st->state < prev_state)
+ WARN_ON(cpuhp_invoke_callback_range(true, cpu, st,
+@@ -1133,7 +1133,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+
+ cpuhp_tasks_frozen = tasks_frozen;
+
+- prev_state = cpuhp_set_state(st, target);
++ prev_state = cpuhp_set_state(cpu, st, target);
+ /*
+ * If the current CPU state is in the range of the AP hotplug thread,
+ * then we need to kick the thread.
+@@ -1164,7 +1164,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+ ret = cpuhp_down_callbacks(cpu, st, target);
+ if (ret && st->state < prev_state) {
+ if (st->state == CPUHP_TEARDOWN_CPU) {
+- cpuhp_reset_state(st, prev_state);
++ cpuhp_reset_state(cpu, st, prev_state);
+ __cpuhp_kick_ap(st);
+ } else {
+ WARN(1, "DEAD callback error for CPU%d", cpu);
+@@ -1351,7 +1351,7 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
+
+ cpuhp_tasks_frozen = tasks_frozen;
+
+- cpuhp_set_state(st, target);
++ cpuhp_set_state(cpu, st, target);
+ /*
+ * If the current CPU state is in the range of the AP hotplug thread,
+ * then we need to kick the thread once more.
+diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
+index 4632b0f4f72eb..8a6cd53dbe8ce 100644
+--- a/kernel/dma/direct.h
++++ b/kernel/dma/direct.h
+@@ -114,6 +114,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+ dma_direct_sync_single_for_cpu(dev, addr, size, dir);
+
+ if (unlikely(is_swiotlb_buffer(dev, phys)))
+- swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
++ swiotlb_tbl_unmap_single(dev, phys, size, dir,
++ attrs | DMA_ATTR_SKIP_CPU_SYNC);
+ }
+ #endif /* _KERNEL_DMA_DIRECT_H */
+diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
+index f7ff8919dc9bb..fdf170404650f 100644
+--- a/kernel/irq/affinity.c
++++ b/kernel/irq/affinity.c
+@@ -269,8 +269,9 @@ static int __irq_build_affinity_masks(unsigned int startvec,
+ */
+ if (numvecs <= nodes) {
+ for_each_node_mask(n, nodemsk) {
+- cpumask_or(&masks[curvec].mask, &masks[curvec].mask,
+- node_to_cpumask[n]);
++ /* Ensure that only CPUs which are in both masks are set */
++ cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
++ cpumask_or(&masks[curvec].mask, &masks[curvec].mask, nmsk);
+ if (++curvec == last_affv)
+ curvec = firstvec;
+ }
+diff --git a/kernel/smp.c b/kernel/smp.c
+index 01a7c1706a58b..65a630f62363c 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -579,7 +579,7 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
+
+ /* There shouldn't be any pending callbacks on an offline CPU. */
+ if (unlikely(warn_cpu_offline && !cpu_online(smp_processor_id()) &&
+- !warned && !llist_empty(head))) {
++ !warned && entry != NULL)) {
+ warned = true;
+ WARN(1, "IPI on offline CPU %d\n", smp_processor_id());
+
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 17a283ce2b20f..5e80ee44c32a2 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -186,7 +186,7 @@ static void tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
+ */
+ if (unlikely(tick_do_timer_cpu == TICK_DO_TIMER_NONE)) {
+ #ifdef CONFIG_NO_HZ_FULL
+- WARN_ON(tick_nohz_full_running);
++ WARN_ON_ONCE(tick_nohz_full_running);
+ #endif
+ tick_do_timer_cpu = cpu;
+ }
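
The tick-sched hunk demotes WARN_ON() to WARN_ON_ONCE() because the condition can re-trigger repeatedly and would otherwise flood the log with identical backtraces. A rough userspace approximation of the once-only behaviour (uses a gcc/clang statement expression; illustrative only):

    #include <stdbool.h>
    #include <stdio.h>

    /* Report the first occurrence of a condition, stay quiet after. */
    #define WARN_ON_ONCE(cond) ({ \
        static bool __warned; \
        bool __cond = (cond); \
        if (__cond && !__warned) { \
            __warned = true; \
            fprintf(stderr, "warning: %s\n", #cond); \
        } \
        __cond; \
    })

    int main(void)
    {
        for (int i = 0; i < 3; i++)
            WARN_ON_ONCE(i >= 0);    /* warns once, not three times */
        return 0;
    }
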
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 85f1021ad4595..9dd2a39cb3b00 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1722,11 +1722,14 @@ static inline void __run_timers(struct timer_base *base)
+ time_after_eq(jiffies, base->next_expiry)) {
+ levels = collect_expired_timers(base, heads);
+ /*
+- * The only possible reason for not finding any expired
+- * timer at this clk is that all matching timers have been
+- * dequeued.
++ * The two possible reasons for not finding any expired
++ * timer at this clk are that all matching timers have been
++ * dequeued or no timer has been queued since
++ * base::next_expiry was set to base::clk +
++ * NEXT_TIMER_MAX_DELTA.
+ */
+- WARN_ON_ONCE(!levels && !base->next_expiry_recalc);
++ WARN_ON_ONCE(!levels && !base->next_expiry_recalc
++ && base->timers_pending);
+ base->clk++;
+ base->next_expiry = __next_timer_interrupt(base);
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index f294db835f4bc..a1da8757cc9cc 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3469,7 +3469,6 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
+ {
+ int nr_nodes, node;
+ struct page *page;
+- int rc = 0;
+
+ lockdep_assert_held(&hugetlb_lock);
+
+@@ -3480,15 +3479,19 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
+ }
+
+ for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
+- if (!list_empty(&h->hugepage_freelists[node])) {
+- page = list_entry(h->hugepage_freelists[node].next,
+- struct page, lru);
+- rc = demote_free_huge_page(h, page);
+- break;
++ list_for_each_entry(page, &h->hugepage_freelists[node], lru) {
++ if (PageHWPoison(page))
++ continue;
++
++ return demote_free_huge_page(h, page);
+ }
+ }
+
+- return rc;
++ /*
++ * Only way to get here is if all pages on free lists are poisoned.
++ * Return -EBUSY so that caller will not retry.
++ */
++ return -EBUSY;
+ }
+
+ #define HSTATE_ATTR_RO(_name) \
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index d4c7978cd75e2..af82c6f7d7239 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -222,27 +222,6 @@ static bool kfence_unprotect(unsigned long addr)
+ return !KFENCE_WARN_ON(!kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), false));
+ }
+
+-static inline struct kfence_metadata *addr_to_metadata(unsigned long addr)
+-{
+- long index;
+-
+- /* The checks do not affect performance; only called from slow-paths. */
+-
+- if (!is_kfence_address((void *)addr))
+- return NULL;
+-
+- /*
+- * May be an invalid index if called with an address at the edge of
+- * __kfence_pool, in which case we would report an "invalid access"
+- * error.
+- */
+- index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1;
+- if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS)
+- return NULL;
+-
+- return &kfence_metadata[index];
+-}
+-
+ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta)
+ {
+ unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2;
+diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
+index 9a6c4b1b12a88..600f2e2431d6d 100644
+--- a/mm/kfence/kfence.h
++++ b/mm/kfence/kfence.h
+@@ -96,6 +96,27 @@ struct kfence_metadata {
+
+ extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
+
++static inline struct kfence_metadata *addr_to_metadata(unsigned long addr)
++{
++ long index;
++
++ /* The checks do not affect performance; only called from slow-paths. */
++
++ if (!is_kfence_address((void *)addr))
++ return NULL;
++
++ /*
++ * May be an invalid index if called with an address at the edge of
++ * __kfence_pool, in which case we would report an "invalid access"
++ * error.
++ */
++ index = (addr - (unsigned long)__kfence_pool) / (PAGE_SIZE * 2) - 1;
++ if (index < 0 || index >= CONFIG_KFENCE_NUM_OBJECTS)
++ return NULL;
++
++ return &kfence_metadata[index];
++}
++
+ /* KFENCE error types for report generation. */
+ enum kfence_error_type {
+ KFENCE_ERROR_OOB, /* Detected a out-of-bounds access. */
+diff --git a/mm/kfence/report.c b/mm/kfence/report.c
+index f93a7b2a338be..f5a6d8ba3e21f 100644
+--- a/mm/kfence/report.c
++++ b/mm/kfence/report.c
+@@ -273,3 +273,50 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
+ /* We encountered a memory safety error, taint the kernel! */
+ add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);
+ }
++
++#ifdef CONFIG_PRINTK
++static void kfence_to_kp_stack(const struct kfence_track *track, void **kp_stack)
++{
++ int i, j;
++
++ i = get_stack_skipnr(track->stack_entries, track->num_stack_entries, NULL);
++ for (j = 0; i < track->num_stack_entries && j < KS_ADDRS_COUNT; ++i, ++j)
++ kp_stack[j] = (void *)track->stack_entries[i];
++ if (j < KS_ADDRS_COUNT)
++ kp_stack[j] = NULL;
++}
++
++bool __kfence_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
++{
++ struct kfence_metadata *meta = addr_to_metadata((unsigned long)object);
++ unsigned long flags;
++
++ if (!meta)
++ return false;
++
++ /*
++ * If state is UNUSED at least show the pointer requested; the rest
++ * would be garbage data.
++ */
++ kpp->kp_ptr = object;
++
++	/* Requesting info on a never-used object is almost certainly a bug. */
++ if (WARN_ON(meta->state == KFENCE_OBJECT_UNUSED))
++ return true;
++
++ raw_spin_lock_irqsave(&meta->lock, flags);
++
++ kpp->kp_slab = slab;
++ kpp->kp_slab_cache = meta->cache;
++ kpp->kp_objp = (void *)meta->addr;
++ kfence_to_kp_stack(&meta->alloc_track, kpp->kp_stack);
++ if (meta->state == KFENCE_OBJECT_FREED)
++ kfence_to_kp_stack(&meta->free_track, kpp->kp_free_stack);
++ /* get_stack_skipnr() ensures the first entry is outside allocator. */
++ kpp->kp_ret = kpp->kp_stack[0];
++
++ raw_spin_unlock_irqrestore(&meta->lock, flags);
++
++ return true;
++}
++#endif
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index acd7cbb82e160..a182f5ddaf68b 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1132,7 +1132,7 @@ EXPORT_SYMBOL(kmemleak_no_scan);
+ void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
+ gfp_t gfp)
+ {
+- if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++ if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_alloc(__va(phys), size, min_count, gfp);
+ }
+ EXPORT_SYMBOL(kmemleak_alloc_phys);
+@@ -1146,7 +1146,7 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
+ */
+ void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
+ {
+- if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++ if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_free_part(__va(phys), size);
+ }
+ EXPORT_SYMBOL(kmemleak_free_part_phys);
+@@ -1158,7 +1158,7 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
+ */
+ void __ref kmemleak_not_leak_phys(phys_addr_t phys)
+ {
+- if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++ if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_not_leak(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+@@ -1170,7 +1170,7 @@ EXPORT_SYMBOL(kmemleak_not_leak_phys);
+ */
+ void __ref kmemleak_ignore_phys(phys_addr_t phys)
+ {
+- if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
++ if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+ kmemleak_ignore(__va(phys));
+ }
+ EXPORT_SYMBOL(kmemleak_ignore_phys);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index a1fbf656e7dbd..e6f211dcf82e7 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6112,7 +6112,7 @@ static int build_zonerefs_node(pg_data_t *pgdat, struct zoneref *zonerefs)
+ do {
+ zone_type--;
+ zone = pgdat->node_zones + zone_type;
+- if (managed_zone(zone)) {
++ if (populated_zone(zone)) {
+ zoneref_set_zone(zone, &zonerefs[nr_zones++]);
+ check_highest_zone(zone_type);
+ }
+diff --git a/mm/page_io.c b/mm/page_io.c
+index 0bf8e40f4e573..d3eea0a3f1afa 100644
+--- a/mm/page_io.c
++++ b/mm/page_io.c
+@@ -51,54 +51,6 @@ void end_swap_bio_write(struct bio *bio)
+ bio_put(bio);
+ }
+
+-static void swap_slot_free_notify(struct page *page)
+-{
+- struct swap_info_struct *sis;
+- struct gendisk *disk;
+- swp_entry_t entry;
+-
+- /*
+- * There is no guarantee that the page is in swap cache - the software
+- * suspend code (at least) uses end_swap_bio_read() against a non-
+- * swapcache page. So we must check PG_swapcache before proceeding with
+- * this optimization.
+- */
+- if (unlikely(!PageSwapCache(page)))
+- return;
+-
+- sis = page_swap_info(page);
+- if (data_race(!(sis->flags & SWP_BLKDEV)))
+- return;
+-
+- /*
+- * The swap subsystem performs lazy swap slot freeing,
+- * expecting that the page will be swapped out again.
+- * So we can avoid an unnecessary write if the page
+- * isn't redirtied.
+- * This is good for real swap storage because we can
+- * reduce unnecessary I/O and enhance wear-leveling
+- * if an SSD is used as the as swap device.
+- * But if in-memory swap device (eg zram) is used,
+- * this causes a duplicated copy between uncompressed
+- * data in VM-owned memory and compressed data in
+- * zram-owned memory. So let's free zram-owned memory
+- * and make the VM-owned decompressed page *dirty*,
+- * so the page should be swapped out somewhere again if
+- * we again wish to reclaim it.
+- */
+- disk = sis->bdev->bd_disk;
+- entry.val = page_private(page);
+- if (disk->fops->swap_slot_free_notify && __swap_count(entry) == 1) {
+- unsigned long offset;
+-
+- offset = swp_offset(entry);
+-
+- SetPageDirty(page);
+- disk->fops->swap_slot_free_notify(sis->bdev,
+- offset);
+- }
+-}
+-
+ static void end_swap_bio_read(struct bio *bio)
+ {
+ struct page *page = bio_first_page_all(bio);
+@@ -114,7 +66,6 @@ static void end_swap_bio_read(struct bio *bio)
+ }
+
+ SetPageUptodate(page);
+- swap_slot_free_notify(page);
+ out:
+ unlock_page(page);
+ WRITE_ONCE(bio->bi_private, NULL);
+@@ -392,11 +343,6 @@ int swap_readpage(struct page *page, bool synchronous)
+ if (sis->flags & SWP_SYNCHRONOUS_IO) {
+ ret = bdev_read_page(sis->bdev, swap_page_sector(page), page);
+ if (!ret) {
+- if (trylock_page(page)) {
+- swap_slot_free_notify(page);
+- unlock_page(page);
+- }
+-
+ count_vm_event(PSWPIN);
+ goto out;
+ }
+diff --git a/mm/secretmem.c b/mm/secretmem.c
+index 22b310adb53d9..5a62ef3bcfcff 100644
+--- a/mm/secretmem.c
++++ b/mm/secretmem.c
+@@ -158,6 +158,22 @@ const struct address_space_operations secretmem_aops = {
+ .isolate_page = secretmem_isolate_page,
+ };
+
++static int secretmem_setattr(struct user_namespace *mnt_userns,
++ struct dentry *dentry, struct iattr *iattr)
++{
++ struct inode *inode = d_inode(dentry);
++ unsigned int ia_valid = iattr->ia_valid;
++
++ if ((ia_valid & ATTR_SIZE) && inode->i_size)
++ return -EINVAL;
++
++ return simple_setattr(mnt_userns, dentry, iattr);
++}
++
++static const struct inode_operations secretmem_iops = {
++ .setattr = secretmem_setattr,
++};
++
+ static struct vfsmount *secretmem_mnt;
+
+ static struct file *secretmem_file_create(unsigned long flags)
+@@ -177,6 +193,7 @@ static struct file *secretmem_file_create(unsigned long flags)
+ mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+ mapping_set_unevictable(inode->i_mapping);
+
++ inode->i_op = &secretmem_iops;
+ inode->i_mapping->a_ops = &secretmem_aops;
+
+ /* pretend we are a normal file with zero size */
+diff --git a/mm/slab.c b/mm/slab.c
+index a36af26e15216..f4b3eebc1e2ca 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -3650,7 +3650,7 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller);
+ #endif /* CONFIG_NUMA */
+
+ #ifdef CONFIG_PRINTK
+-void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
++void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
+ {
+ struct kmem_cache *cachep;
+ unsigned int objnr;
+diff --git a/mm/slab.h b/mm/slab.h
+index c7f2abc2b154c..506dab2a6735e 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -851,7 +851,7 @@ struct kmem_obj_info {
+ void *kp_stack[KS_ADDRS_COUNT];
+ void *kp_free_stack[KS_ADDRS_COUNT];
+ };
+-void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
++void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
+ #endif
+
+ #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 23f2ab0713b77..a9a7d79daa102 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -555,6 +555,13 @@ bool kmem_valid_obj(void *object)
+ }
+ EXPORT_SYMBOL_GPL(kmem_valid_obj);
+
++static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
++{
++ if (__kfence_obj_info(kpp, object, slab))
++ return;
++ __kmem_obj_info(kpp, object, slab);
++}
++
+ /**
+ * kmem_dump_obj - Print available slab provenance information
+ * @object: slab object for which to find provenance information.
+@@ -590,6 +597,8 @@ void kmem_dump_obj(void *object)
+ pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
+ else
+ pr_cont(" slab%s", cp);
++ if (is_kfence_address(object))
++ pr_cont(" (kfence)");
+ if (kp.kp_objp)
+ pr_cont(" start %px", kp.kp_objp);
+ if (kp.kp_data_offset)
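
The kmem_obj_info() wrapper added above uses a try-the-specialized-owner-first shape: if KFENCE recognizes the object, its metadata is used; otherwise the allocator's __kmem_obj_info() fills in the details. A minimal self-contained sketch of that dispatch pattern (names and values are illustrative, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    struct obj_info { const char *owner; };

    /* Specialized handler: claims the object only if it owns it. */
    static bool special_obj_info(struct obj_info *info, int object)
    {
        if (object != 42)
            return false;
        info->owner = "kfence";
        return true;
    }

    static void generic_obj_info(struct obj_info *info, int object)
    {
        (void)object;
        info->owner = "slab";
    }

    static void obj_info(struct obj_info *info, int object)
    {
        if (special_obj_info(info, object))
            return;    /* handled by the specialized path */
        generic_obj_info(info, object);
    }

    int main(void)
    {
        struct obj_info info;

        obj_info(&info, 42);
        printf("%s\n", info.owner);    /* kfence */
        obj_info(&info, 7);
        printf("%s\n", info.owner);    /* slab */
        return 0;
    }
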
+diff --git a/mm/slob.c b/mm/slob.c
+index 60c5842215f1b..fd9c643facbc4 100644
+--- a/mm/slob.c
++++ b/mm/slob.c
+@@ -463,7 +463,7 @@ out:
+ }
+
+ #ifdef CONFIG_PRINTK
+-void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
++void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
+ {
+ kpp->kp_ptr = object;
+ kpp->kp_slab = slab;
+diff --git a/mm/slub.c b/mm/slub.c
+index 261474092e43e..e3277e33ea6e4 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -4322,7 +4322,7 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
+ }
+
+ #ifdef CONFIG_PRINTK
+-void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
++void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
+ {
+ void *base;
+ int __maybe_unused i;
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index f5686c463bc0d..363d47f945324 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -1053,6 +1053,11 @@ static int ax25_release(struct socket *sock)
+ ax25_destroy_socket(ax25);
+ }
+ if (ax25_dev) {
++ del_timer_sync(&ax25->timer);
++ del_timer_sync(&ax25->t1timer);
++ del_timer_sync(&ax25->t2timer);
++ del_timer_sync(&ax25->t3timer);
++ del_timer_sync(&ax25->idletimer);
+ dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+ ax25_dev_put(ax25_dev);
+ }
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 15833e1d6ea11..544d2028ccf51 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1182,6 +1182,7 @@ proto_again:
+ VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
+ }
+ key_vlan->vlan_tpid = saved_vlan_tpid;
++ key_vlan->vlan_eth_type = proto;
+ }
+
+ fdret = FLOW_DISSECT_RET_PROTO_AGAIN;
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index a39bbed77f87d..d3ce6113e6c36 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -561,7 +561,6 @@ static void dsa_port_teardown(struct dsa_port *dp)
+ struct devlink_port *dlp = &dp->devlink_port;
+ struct dsa_switch *ds = dp->ds;
+ struct dsa_mac_addr *a, *tmp;
+- struct net_device *slave;
+
+ if (!dp->setup)
+ return;
+@@ -583,11 +582,9 @@ static void dsa_port_teardown(struct dsa_port *dp)
+ dsa_port_link_unregister_of(dp);
+ break;
+ case DSA_PORT_TYPE_USER:
+- slave = dp->slave;
+-
+- if (slave) {
++ if (dp->slave) {
++ dsa_slave_destroy(dp->slave);
+ dp->slave = NULL;
+- dsa_slave_destroy(slave);
+ }
+ break;
+ }
+@@ -1137,17 +1134,17 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+ if (err)
+ goto teardown_cpu_ports;
+
+- err = dsa_tree_setup_master(dst);
++ err = dsa_tree_setup_ports(dst);
+ if (err)
+ goto teardown_switches;
+
+- err = dsa_tree_setup_ports(dst);
++ err = dsa_tree_setup_master(dst);
+ if (err)
+- goto teardown_master;
++ goto teardown_ports;
+
+ err = dsa_tree_setup_lags(dst);
+ if (err)
+- goto teardown_ports;
++ goto teardown_master;
+
+ dst->setup = true;
+
+@@ -1155,10 +1152,10 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+
+ return 0;
+
+-teardown_ports:
+- dsa_tree_teardown_ports(dst);
+ teardown_master:
+ dsa_tree_teardown_master(dst);
++teardown_ports:
++ dsa_tree_teardown_ports(dst);
+ teardown_switches:
+ dsa_tree_teardown_switches(dst);
+ teardown_cpu_ports:
+@@ -1176,10 +1173,10 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
+
+ dsa_tree_teardown_lags(dst);
+
+- dsa_tree_teardown_ports(dst);
+-
+ dsa_tree_teardown_master(dst);
+
++ dsa_tree_teardown_ports(dst);
++
+ dsa_tree_teardown_switches(dst);
+
+ dsa_tree_teardown_cpu_ports(dst);
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 194832663d856..9d83c11ba1e74 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -485,7 +485,7 @@ int ip6_forward(struct sk_buff *skb)
+ goto drop;
+
+ if (!net->ipv6.devconf_all->disable_policy &&
+- !idev->cnf.disable_policy &&
++ (!idev || !idev->cnf.disable_policy) &&
+ !xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) {
+ __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
+ goto drop;
+diff --git a/net/mac80211/debugfs_sta.c b/net/mac80211/debugfs_sta.c
+index 9479f2787ea79..88d9cc945a216 100644
+--- a/net/mac80211/debugfs_sta.c
++++ b/net/mac80211/debugfs_sta.c
+@@ -441,7 +441,7 @@ static ssize_t sta_ht_capa_read(struct file *file, char __user *userbuf,
+ #define PRINT_HT_CAP(_cond, _str) \
+ do { \
+ if (_cond) \
+- p += scnprintf(p, sizeof(buf)+buf-p, "\t" _str "\n"); \
++ p += scnprintf(p, bufsz + buf - p, "\t" _str "\n"); \
+ } while (0)
+ char *buf, *p;
+ int i;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 1f5a0eece0d14..30d29d038d095 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -9275,7 +9275,7 @@ int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest)
+ }
+ EXPORT_SYMBOL_GPL(nft_parse_u32_check);
+
+-static unsigned int nft_parse_register(const struct nlattr *attr, u32 *preg)
++static int nft_parse_register(const struct nlattr *attr, u32 *preg)
+ {
+ unsigned int reg;
+
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index d601974c9d2e0..b8f0111457650 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -36,12 +36,11 @@ static void nft_socket_wildcard(const struct nft_pktinfo *pkt,
+
+ #ifdef CONFIG_SOCK_CGROUP_DATA
+ static noinline bool
+-nft_sock_get_eval_cgroupv2(u32 *dest, const struct nft_pktinfo *pkt, u32 level)
++nft_sock_get_eval_cgroupv2(u32 *dest, struct sock *sk, const struct nft_pktinfo *pkt, u32 level)
+ {
+- struct sock *sk = skb_to_full_sk(pkt->skb);
+ struct cgroup *cgrp;
+
+- if (!sk || !sk_fullsock(sk) || !net_eq(nft_net(pkt), sock_net(sk)))
++ if (!sk_fullsock(sk))
+ return false;
+
+ cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
+@@ -108,7 +107,7 @@ static void nft_socket_eval(const struct nft_expr *expr,
+ break;
+ #ifdef CONFIG_SOCK_CGROUP_DATA
+ case NFT_SOCKET_CGROUPV2:
+- if (!nft_sock_get_eval_cgroupv2(dest, pkt, priv->level)) {
++ if (!nft_sock_get_eval_cgroupv2(dest, sk, pkt, priv->level)) {
+ regs->verdict.code = NFT_BREAK;
+ return;
+ }
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index d2537383a3e89..6a193cce2a754 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -560,6 +560,10 @@ static int nci_close_device(struct nci_dev *ndev)
+ mutex_lock(&ndev->req_lock);
+
+ if (!test_and_clear_bit(NCI_UP, &ndev->flags)) {
++ /* Need to flush the cmd wq in case
++ * there is a queued/running cmd_work
++ */
++ flush_workqueue(ndev->cmd_wq);
+ del_timer_sync(&ndev->cmd_timer);
+ del_timer_sync(&ndev->data_timer);
+ mutex_unlock(&ndev->req_lock);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 5ce1208a6ea36..130b5fda9c518 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1653,10 +1653,10 @@ static int tcf_chain_tp_insert(struct tcf_chain *chain,
+ if (chain->flushing)
+ return -EAGAIN;
+
++ RCU_INIT_POINTER(tp->next, tcf_chain_tp_prev(chain, chain_info));
+ if (*chain_info->pprev == chain->filter_chain)
+ tcf_chain0_head_change(chain, tp);
+ tcf_proto_get(tp);
+- RCU_INIT_POINTER(tp->next, tcf_chain_tp_prev(chain, chain_info));
+ rcu_assign_pointer(*chain_info->pprev, tp);
+
+ return 0;
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 1a9b1f140f9e9..ef5b3452254aa 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1005,6 +1005,7 @@ static int fl_set_key_mpls(struct nlattr **tb,
+ static void fl_set_key_vlan(struct nlattr **tb,
+ __be16 ethertype,
+ int vlan_id_key, int vlan_prio_key,
++ int vlan_next_eth_type_key,
+ struct flow_dissector_key_vlan *key_val,
+ struct flow_dissector_key_vlan *key_mask)
+ {
+@@ -1023,6 +1024,11 @@ static void fl_set_key_vlan(struct nlattr **tb,
+ }
+ key_val->vlan_tpid = ethertype;
+ key_mask->vlan_tpid = cpu_to_be16(~0);
++ if (tb[vlan_next_eth_type_key]) {
++ key_val->vlan_eth_type =
++ nla_get_be16(tb[vlan_next_eth_type_key]);
++ key_mask->vlan_eth_type = cpu_to_be16(~0);
++ }
+ }
+
+ static void fl_set_key_flag(u32 flower_key, u32 flower_mask,
+@@ -1519,8 +1525,9 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
+
+ if (eth_type_vlan(ethertype)) {
+ fl_set_key_vlan(tb, ethertype, TCA_FLOWER_KEY_VLAN_ID,
+- TCA_FLOWER_KEY_VLAN_PRIO, &key->vlan,
+- &mask->vlan);
++ TCA_FLOWER_KEY_VLAN_PRIO,
++ TCA_FLOWER_KEY_VLAN_ETH_TYPE,
++ &key->vlan, &mask->vlan);
+
+ if (tb[TCA_FLOWER_KEY_VLAN_ETH_TYPE]) {
+ ethertype = nla_get_be16(tb[TCA_FLOWER_KEY_VLAN_ETH_TYPE]);
+@@ -1528,6 +1535,7 @@ static int fl_set_key(struct net *net, struct nlattr **tb,
+ fl_set_key_vlan(tb, ethertype,
+ TCA_FLOWER_KEY_CVLAN_ID,
+ TCA_FLOWER_KEY_CVLAN_PRIO,
++ TCA_FLOWER_KEY_CVLAN_ETH_TYPE,
+ &key->cvlan, &mask->cvlan);
+ fl_set_key_val(tb, &key->basic.n_proto,
+ TCA_FLOWER_KEY_CVLAN_ETH_TYPE,
+@@ -2886,13 +2894,13 @@ static int fl_dump_key(struct sk_buff *skb, struct net *net,
+ goto nla_put_failure;
+
+ if (mask->basic.n_proto) {
+- if (mask->cvlan.vlan_tpid) {
++ if (mask->cvlan.vlan_eth_type) {
+ if (nla_put_be16(skb, TCA_FLOWER_KEY_CVLAN_ETH_TYPE,
+ key->basic.n_proto))
+ goto nla_put_failure;
+- } else if (mask->vlan.vlan_tpid) {
++ } else if (mask->vlan.vlan_eth_type) {
+ if (nla_put_be16(skb, TCA_FLOWER_KEY_VLAN_ETH_TYPE,
+- key->basic.n_proto))
++ key->vlan.vlan_eth_type))
+ goto nla_put_failure;
+ }
+ }
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 377f896bdedc4..b9c71a304d399 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -417,7 +417,8 @@ static int taprio_enqueue_one(struct sk_buff *skb, struct Qdisc *sch,
+ {
+ struct taprio_sched *q = qdisc_priv(sch);
+
+- if (skb->sk && sock_flag(skb->sk, SOCK_TXTIME)) {
++ /* sk_flags are only safe to use on full sockets. */
++ if (skb->sk && sk_fullsock(skb->sk) && sock_flag(skb->sk, SOCK_TXTIME)) {
+ if (!is_valid_interval(skb, sch))
+ return qdisc_drop(skb, sch, to_free);
+ } else if (TXTIME_ASSIST_IS_ENABLED(q->flags)) {
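
The taprio hunk guards sock_flag() with sk_fullsock(): skb->sk may point at a request or timewait mini-socket, which does not carry sk_flags, so reading SOCK_TXTIME from it is an out-of-bounds access. The guard, sketched as a helper:

    #include <net/sock.h>

    /* Only full sockets carry sk_flags; request/timewait mini-sockets
     * do not, so gate every sock_flag() access on sk_fullsock(). */
    static bool skb_wants_txtime(const struct sk_buff *skb)
    {
            return skb->sk && sk_fullsock(skb->sk) &&
                   sock_flag(skb->sk, SOCK_TXTIME);
    }
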
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 7f342bc127358..52edee1322fc3 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -781,7 +781,7 @@ enum sctp_disposition sctp_sf_do_5_1D_ce(struct net *net,
+ }
+ }
+
+- if (security_sctp_assoc_request(new_asoc, chunk->skb)) {
++ if (security_sctp_assoc_request(new_asoc, chunk->head_skb ?: chunk->skb)) {
+ sctp_association_free(new_asoc);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ }
+@@ -932,7 +932,7 @@ enum sctp_disposition sctp_sf_do_5_1E_ca(struct net *net,
+
+ /* Set peer label for connection. */
+ if (security_sctp_assoc_established((struct sctp_association *)asoc,
+- chunk->skb))
++ chunk->head_skb ?: chunk->skb))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
+ /* Verify that the chunk length for the COOKIE-ACK is OK.
+@@ -2262,7 +2262,7 @@ enum sctp_disposition sctp_sf_do_5_2_4_dupcook(
+ }
+
+ /* Update socket peer label if first association. */
+- if (security_sctp_assoc_request(new_asoc, chunk->skb)) {
++ if (security_sctp_assoc_request(new_asoc, chunk->head_skb ?: chunk->skb)) {
+ sctp_association_free(new_asoc);
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+ }
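
The SCTP hunks rely on GCC's omitted-middle-operand conditional: `a ?: b` behaves as `(a ? a : b)` but evaluates a only once, so `chunk->head_skb ?: chunk->skb` prefers the GRO head skb when one exists and falls back to the chunk's own skb otherwise. A runnable demo of the operator:

    #include <stdio.h>

    int main(void)
    {
            const char *head = NULL;
            const char *body = "body";

            /* GNU extension: a ?: b == (a ? a : b), a evaluated once. */
            printf("%s\n", head ?: body);   /* prints "body" */
            head = "head";
            printf("%s\n", head ?: body);   /* prints "head" */
            return 0;
    }
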
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 3e1a9600be5e1..7b0427658056d 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -5636,7 +5636,7 @@ int sctp_do_peeloff(struct sock *sk, sctp_assoc_t id, struct socket **sockp)
+ * Set the daddr and initialize id to something more random and also
+ * copy over any ip options.
+ */
+- sp->pf->to_sk_daddr(&asoc->peer.primary_addr, sk);
++ sp->pf->to_sk_daddr(&asoc->peer.primary_addr, sock->sk);
+ sp->pf->copy_ip_options(sk, sock->sk);
+
+ /* Populate the fields of the newsk from the oldsk and migrate the
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index ce27399b38b1e..f9f3f59c79de2 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -191,7 +191,8 @@ static int smc_nl_ueid_dumpinfo(struct sk_buff *skb, u32 portid, u32 seq,
+ flags, SMC_NETLINK_DUMP_UEID);
+ if (!hdr)
+ return -ENOMEM;
+- snprintf(ueid_str, sizeof(ueid_str), "%s", ueid);
++ memcpy(ueid_str, ueid, SMC_MAX_EID_LEN);
++ ueid_str[SMC_MAX_EID_LEN] = 0;
+ if (nla_put_string(skb, SMC_NLA_EID_TABLE_ENTRY, ueid_str)) {
+ genlmsg_cancel(skb, hdr);
+ return -EMSGSIZE;
+@@ -252,7 +253,8 @@ int smc_nl_dump_seid(struct sk_buff *skb, struct netlink_callback *cb)
+ goto end;
+
+ smc_ism_get_system_eid(&seid);
+- snprintf(seid_str, sizeof(seid_str), "%s", seid);
++ memcpy(seid_str, seid, SMC_MAX_EID_LEN);
++ seid_str[SMC_MAX_EID_LEN] = 0;
+ if (nla_put_string(skb, SMC_NLA_SEID_ENTRY, seid_str))
+ goto err;
+ read_lock(&smc_clc_eid_table.lock);
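
The smc_clc.c hunks matter because SMC EIDs are fixed-width fields, not NUL-terminated strings: snprintf(dst, n, "%s", src) scans src for a terminator and can read past the array. Copying the fixed width and terminating by hand avoids the overread. A runnable illustration with a 4-byte field:

    #include <stdio.h>
    #include <string.h>

    #define EID_LEN 4

    int main(void)
    {
            char eid[EID_LEN] = { 'A', 'B', 'C', 'D' };  /* NOT NUL-terminated */
            char out[EID_LEN + 1];

            /* snprintf(out, sizeof(out), "%s", eid) would scan past
             * eid[] looking for a terminator -- undefined behavior.
             * Copy the fixed width and terminate explicitly instead: */
            memcpy(out, eid, EID_LEN);
            out[EID_LEN] = '\0';
            printf("%s\n", out);   /* ABCD */
            return 0;
    }
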
+diff --git a/net/smc/smc_pnet.c b/net/smc/smc_pnet.c
+index 29f0a559d8847..4769f76505afc 100644
+--- a/net/smc/smc_pnet.c
++++ b/net/smc/smc_pnet.c
+@@ -311,8 +311,9 @@ static struct smc_ib_device *smc_pnet_find_ib(char *ib_name)
+ list_for_each_entry(ibdev, &smc_ib_devices.list, list) {
+ if (!strncmp(ibdev->ibdev->name, ib_name,
+ sizeof(ibdev->ibdev->name)) ||
+- !strncmp(dev_name(ibdev->ibdev->dev.parent), ib_name,
+- IB_DEVICE_NAME_MAX - 1)) {
++ (ibdev->ibdev->dev.parent &&
++ !strncmp(dev_name(ibdev->ibdev->dev.parent), ib_name,
++ IB_DEVICE_NAME_MAX - 1))) {
+ goto out;
+ }
+ }
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index b21ad79941474..4a423e481a281 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -1213,6 +1213,8 @@ static struct cache_deferred_req *svc_defer(struct cache_req *req)
+ dr->daddr = rqstp->rq_daddr;
+ dr->argslen = rqstp->rq_arg.len >> 2;
+ dr->xprt_hlen = rqstp->rq_xprt_hlen;
++ dr->xprt_ctxt = rqstp->rq_xprt_ctxt;
++ rqstp->rq_xprt_ctxt = NULL;
+
+ /* back up head to the start of the buffer and copy */
+ skip = rqstp->rq_arg.len - rqstp->rq_arg.head[0].iov_len;
+@@ -1251,6 +1253,7 @@ static noinline int svc_deferred_recv(struct svc_rqst *rqstp)
+ rqstp->rq_xprt_hlen = dr->xprt_hlen;
+ rqstp->rq_daddr = dr->daddr;
+ rqstp->rq_respages = rqstp->rq_pages;
++ rqstp->rq_xprt_ctxt = dr->xprt_ctxt;
+ svc_xprt_received(rqstp->rq_xprt);
+ return (dr->argslen<<2) - dr->xprt_hlen;
+ }
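
The svc_xprt.c hunks transfer ownership of rq_xprt_ctxt to the deferred request: the pointer is copied and the source is NULLed, so exactly one side releases the context once the deferral is revisited. The same move idiom in plain C, with hypothetical types:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical types; the point is the move: exactly one owner,
     * exactly one free. */
    struct req   { char *ctxt; };
    struct defer { char *ctxt; };

    static void defer_request(struct defer *dr, struct req *rq)
    {
            dr->ctxt = rq->ctxt;   /* take ownership ... */
            rq->ctxt = NULL;       /* ... so the source cannot free it too */
    }

    int main(void)
    {
            struct req rq = { .ctxt = strdup("ctxt") };
            struct defer dr;

            defer_request(&dr, &rq);
            free(rq.ctxt);         /* harmless: free(NULL) is a no-op */
            free(dr.ctxt);         /* the one real release */
            return 0;
    }
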
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index cf76a6ad127b2..864131a9fc6e3 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -831,7 +831,7 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
+ goto out_err;
+ if (ret == 0)
+ goto out_drop;
+- rqstp->rq_xprt_hlen = ret;
++ rqstp->rq_xprt_hlen = 0;
+
+ if (svc_rdma_is_reverse_direction_reply(xprt, ctxt))
+ goto out_backchannel;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index c01fbcc848e86..dc171ca0d1b12 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -519,7 +519,8 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ .len = IEEE80211_MAX_MESH_ID_LEN },
+ [NL80211_ATTR_MPATH_NEXT_HOP] = NLA_POLICY_ETH_ADDR_COMPAT,
+
+- [NL80211_ATTR_REG_ALPHA2] = { .type = NLA_STRING, .len = 2 },
++ /* allow 3 for NUL-termination, we used to declare this NLA_STRING */
++ [NL80211_ATTR_REG_ALPHA2] = NLA_POLICY_RANGE(NLA_BINARY, 2, 3),
+ [NL80211_ATTR_REG_RULES] = { .type = NLA_NESTED },
+
+ [NL80211_ATTR_BSS_CTS_PROT] = { .type = NLA_U8 },
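
The nl80211 policy change accommodates userspace that sends the two-letter regulatory code as a NUL-terminated string: the old NLA_STRING policy with .len = 2 tolerated the trailing NUL, so expressing the same rule as a strict binary length means accepting 2 or 3 bytes. A one-liner showing the two payload sizes:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char *alpha2 = "DE";

            /* Raw payload: 2 bytes. Sent as a NUL-terminated string,
             * as older userspace does: 3 bytes. The policy must accept
             * both, hence the 2..3 range. */
            printf("raw: %zu, string: %zu\n",
                   strlen(alpha2), strlen(alpha2) + 1);
            return 0;
    }
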
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index b2fdac96bab07..4a6d864329106 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -2018,11 +2018,13 @@ cfg80211_inform_single_bss_data(struct wiphy *wiphy,
+ /* this is a nontransmitting bss, we need to add it to
+ * transmitting bss' list if it is not there
+ */
++ spin_lock_bh(&rdev->bss_lock);
+ if (cfg80211_add_nontrans_list(non_tx_data->tx_bss,
+ &res->pub)) {
+ if (__cfg80211_unlink_bss(rdev, res))
+ rdev->bss_generation++;
+ }
++ spin_unlock_bh(&rdev->bss_lock);
+ }
+
+ trace_cfg80211_return_bss(&res->pub);
+diff --git a/scripts/gcc-plugins/latent_entropy_plugin.c b/scripts/gcc-plugins/latent_entropy_plugin.c
+index 589454bce9301..8425da41de0da 100644
+--- a/scripts/gcc-plugins/latent_entropy_plugin.c
++++ b/scripts/gcc-plugins/latent_entropy_plugin.c
+@@ -86,25 +86,31 @@ static struct plugin_info latent_entropy_plugin_info = {
+ .help = "disable\tturn off latent entropy instrumentation\n",
+ };
+
+-static unsigned HOST_WIDE_INT seed;
+-/*
+- * get_random_seed() (this is a GCC function) generates the seed.
+- * This is a simple random generator without any cryptographic security because
+- * the entropy doesn't come from here.
+- */
++static unsigned HOST_WIDE_INT deterministic_seed;
++static unsigned HOST_WIDE_INT rnd_buf[32];
++static size_t rnd_idx = ARRAY_SIZE(rnd_buf);
++static int urandom_fd = -1;
++
+ static unsigned HOST_WIDE_INT get_random_const(void)
+ {
+- unsigned int i;
+- unsigned HOST_WIDE_INT ret = 0;
+-
+- for (i = 0; i < 8 * sizeof(ret); i++) {
+- ret = (ret << 1) | (seed & 1);
+- seed >>= 1;
+- if (ret & 1)
+- seed ^= 0xD800000000000000ULL;
++ if (deterministic_seed) {
++ unsigned HOST_WIDE_INT w = deterministic_seed;
++ w ^= w << 13;
++ w ^= w >> 7;
++ w ^= w << 17;
++ deterministic_seed = w;
++ return deterministic_seed;
+ }
+
+- return ret;
++ if (urandom_fd < 0) {
++ urandom_fd = open("/dev/urandom", O_RDONLY);
++ gcc_assert(urandom_fd >= 0);
++ }
++ if (rnd_idx >= ARRAY_SIZE(rnd_buf)) {
++ gcc_assert(read(urandom_fd, rnd_buf, sizeof(rnd_buf)) == sizeof(rnd_buf));
++ rnd_idx = 0;
++ }
++ return rnd_buf[rnd_idx++];
+ }
+
+ static tree tree_get_random_const(tree type)
+@@ -537,8 +543,6 @@ static void latent_entropy_start_unit(void *gcc_data __unused,
+ tree type, id;
+ int quals;
+
+- seed = get_random_seed(false);
+-
+ if (in_lto_p)
+ return;
+
+@@ -573,6 +577,12 @@ __visible int plugin_init(struct plugin_name_args *plugin_info,
+ const struct plugin_argument * const argv = plugin_info->argv;
+ int i;
+
++ /*
++ * Call get_random_seed() with noinit=true, so that this returns
++ * 0 in the case where no seed has been passed via -frandom-seed.
++ */
++ deterministic_seed = get_random_seed(true);
++
+ static const struct ggc_root_tab gt_ggc_r_gt_latent_entropy[] = {
+ {
+ .base = &latent_entropy_decl,
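
The latent_entropy rework makes builds reproducible: when -frandom-seed is given, get_random_seed(true) returns it and every constant comes from a deterministic xorshift64 stream; otherwise the plugin reads buffered 64-bit words from /dev/urandom. A runnable demo of the xorshift64 step used above:

    #include <stdio.h>
    #include <stdint.h>

    /* Marsaglia xorshift64, the deterministic step from the hunk. */
    static uint64_t xorshift64(uint64_t *state)
    {
            uint64_t w = *state;

            w ^= w << 13;
            w ^= w >> 7;
            w ^= w << 17;
            return *state = w;
    }

    int main(void)
    {
            uint64_t seed = 0x0123456789abcdefULL; /* stand-in for -frandom-seed */

            for (int i = 0; i < 4; i++)
                    printf("%016llx\n", (unsigned long long)xorshift64(&seed));
            return 0;   /* same seed -> same sequence -> reproducible build */
    }
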
+diff --git a/sound/core/init.c b/sound/core/init.c
+index 31ba7024e3add..726a8353201f8 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -209,6 +209,12 @@ static void __snd_card_release(struct device *dev, void *data)
+ * snd_card_register(), the very first devres action to call snd_card_free()
+ * is added automatically. In that way, the resource disconnection is assured
+ * at first, then released in the expected order.
++ *
++ * If an error happens at the probe before snd_card_register() is called and
++ * there have been other devres resources, you'd need to free the card manually
++ * via snd_card_free() call in the error; otherwise it may lead to UAF due to
++ * devres call orders. You can use snd_card_free_on_error() helper for
++ * handling it more easily.
+ */
+ int snd_devm_card_new(struct device *parent, int idx, const char *xid,
+ struct module *module, size_t extra_size,
+@@ -235,6 +241,28 @@ int snd_devm_card_new(struct device *parent, int idx, const char *xid,
+ }
+ EXPORT_SYMBOL_GPL(snd_devm_card_new);
+
++/**
++ * snd_card_free_on_error - a small helper for handling devm probe errors
++ * @dev: the managed device object
++ * @ret: the return code from the probe callback
++ *
++ * This function handles the explicit snd_card_free() call at the error from
++ * the probe callback. It's just a small helper for simplifying the error
++ * handling with the managed devices.
++ */
++int snd_card_free_on_error(struct device *dev, int ret)
++{
++ struct snd_card *card;
++
++ if (!ret)
++ return 0;
++ card = devres_find(dev, __snd_card_release, NULL, NULL);
++ if (card)
++ snd_card_free(card);
++ return ret;
++}
++EXPORT_SYMBOL_GPL(snd_card_free_on_error);
++
+ static int snd_card_init(struct snd_card *card, struct device *parent,
+ int idx, const char *xid, struct module *module,
+ size_t extra_size)
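
snd_card_free_on_error() underpins most of the sound/ hunks that follow: the probe work moves into a __foo_probe() helper and a thin wrapper funnels its return code through the new helper, so any failure before snd_card_register() frees the card immediately rather than at devres teardown in a UAF-prone order. The recurring pattern, sketched with hypothetical foo_* names:

    #include <linux/module.h>
    #include <linux/pci.h>
    #include <sound/core.h>

    static int __foo_probe(struct pci_dev *pci, const struct pci_device_id *id)
    {
            struct snd_card *card;
            int err;

            err = snd_devm_card_new(&pci->dev, -1, NULL, THIS_MODULE, 0, &card);
            if (err < 0)
                    return err;
            /* ... hardware setup that may fail before registration ... */
            return snd_card_register(card);
    }

    static int foo_probe(struct pci_dev *pci, const struct pci_device_id *id)
    {
            /* On any error, free the card now rather than leaving it
             * to devres teardown. */
            return snd_card_free_on_error(&pci->dev, __foo_probe(pci, id));
    }
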
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 6fd763d4d15b1..15dc7160ba34e 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -499,6 +499,10 @@ static const struct snd_malloc_ops snd_dma_wc_ops = {
+ };
+ #endif /* CONFIG_X86 */
+
++#ifdef CONFIG_SND_DMA_SGBUF
++static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size);
++#endif
++
+ /*
+ * Non-contiguous pages allocator
+ */
+@@ -509,8 +513,18 @@ static void *snd_dma_noncontig_alloc(struct snd_dma_buffer *dmab, size_t size)
+
+ sgt = dma_alloc_noncontiguous(dmab->dev.dev, size, dmab->dev.dir,
+ DEFAULT_GFP, 0);
+- if (!sgt)
++ if (!sgt) {
++#ifdef CONFIG_SND_DMA_SGBUF
++ if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG)
++ dmab->dev.type = SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
++ else
++ dmab->dev.type = SNDRV_DMA_TYPE_DEV_SG_FALLBACK;
++ return snd_dma_sg_fallback_alloc(dmab, size);
++#else
+ return NULL;
++#endif
++ }
++
+ dmab->dev.need_sync = dma_need_sync(dmab->dev.dev,
+ sg_dma_address(sgt->sgl));
+ p = dma_vmap_noncontiguous(dmab->dev.dev, size, sgt);
+@@ -633,6 +647,8 @@ static void *snd_dma_sg_wc_alloc(struct snd_dma_buffer *dmab, size_t size)
+
+ if (!p)
+ return NULL;
++ if (dmab->dev.type != SNDRV_DMA_TYPE_DEV_WC_SG)
++ return p;
+ for_each_sgtable_page(sgt, &iter, 0)
+ set_memory_wc(sg_wc_address(&iter), 1);
+ return p;
+@@ -665,6 +681,95 @@ static const struct snd_malloc_ops snd_dma_sg_wc_ops = {
+ .get_page = snd_dma_noncontig_get_page,
+ .get_chunk_size = snd_dma_noncontig_get_chunk_size,
+ };
++
++/* Fallback SG-buffer allocations for x86 */
++struct snd_dma_sg_fallback {
++ size_t count;
++ struct page **pages;
++ dma_addr_t *addrs;
++};
++
++static void __snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab,
++ struct snd_dma_sg_fallback *sgbuf)
++{
++ size_t i;
++
++ if (sgbuf->count && dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
++ set_pages_array_wb(sgbuf->pages, sgbuf->count);
++ for (i = 0; i < sgbuf->count && sgbuf->pages[i]; i++)
++ dma_free_coherent(dmab->dev.dev, PAGE_SIZE,
++ page_address(sgbuf->pages[i]),
++ sgbuf->addrs[i]);
++ kvfree(sgbuf->pages);
++ kvfree(sgbuf->addrs);
++ kfree(sgbuf);
++}
++
++static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size)
++{
++ struct snd_dma_sg_fallback *sgbuf;
++ struct page **pages;
++ size_t i, count;
++ void *p;
++
++ sgbuf = kzalloc(sizeof(*sgbuf), GFP_KERNEL);
++ if (!sgbuf)
++ return NULL;
++ count = PAGE_ALIGN(size) >> PAGE_SHIFT;
++ pages = kvcalloc(count, sizeof(*pages), GFP_KERNEL);
++ if (!pages)
++ goto error;
++ sgbuf->pages = pages;
++ sgbuf->addrs = kvcalloc(count, sizeof(*sgbuf->addrs), GFP_KERNEL);
++ if (!sgbuf->addrs)
++ goto error;
++
++ for (i = 0; i < count; sgbuf->count++, i++) {
++ p = dma_alloc_coherent(dmab->dev.dev, PAGE_SIZE,
++ &sgbuf->addrs[i], DEFAULT_GFP);
++ if (!p)
++ goto error;
++ sgbuf->pages[i] = virt_to_page(p);
++ }
++
++ if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
++ set_pages_array_wc(pages, count);
++ p = vmap(pages, count, VM_MAP, PAGE_KERNEL);
++ if (!p)
++ goto error;
++ dmab->private_data = sgbuf;
++ return p;
++
++ error:
++ __snd_dma_sg_fallback_free(dmab, sgbuf);
++ return NULL;
++}
++
++static void snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab)
++{
++ vunmap(dmab->area);
++ __snd_dma_sg_fallback_free(dmab, dmab->private_data);
++}
++
++static int snd_dma_sg_fallback_mmap(struct snd_dma_buffer *dmab,
++ struct vm_area_struct *area)
++{
++ struct snd_dma_sg_fallback *sgbuf = dmab->private_data;
++
++ if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
++ area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
++ return vm_map_pages(area, sgbuf->pages, sgbuf->count);
++}
++
++static const struct snd_malloc_ops snd_dma_sg_fallback_ops = {
++ .alloc = snd_dma_sg_fallback_alloc,
++ .free = snd_dma_sg_fallback_free,
++ .mmap = snd_dma_sg_fallback_mmap,
++ /* reuse vmalloc helpers */
++ .get_addr = snd_dma_vmalloc_get_addr,
++ .get_page = snd_dma_vmalloc_get_page,
++ .get_chunk_size = snd_dma_vmalloc_get_chunk_size,
++};
+ #endif /* CONFIG_SND_DMA_SGBUF */
+
+ /*
+@@ -736,6 +841,10 @@ static const struct snd_malloc_ops *dma_ops[] = {
+ #ifdef CONFIG_GENERIC_ALLOCATOR
+ [SNDRV_DMA_TYPE_DEV_IRAM] = &snd_dma_iram_ops,
+ #endif /* CONFIG_GENERIC_ALLOCATOR */
++#ifdef CONFIG_SND_DMA_SGBUF
++ [SNDRV_DMA_TYPE_DEV_SG_FALLBACK] = &snd_dma_sg_fallback_ops,
++ [SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK] = &snd_dma_sg_fallback_ops,
++#endif
+ #endif /* CONFIG_HAS_DMA */
+ };
+
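
The memalloc.c fallback allocates the buffer one page at a time and counts successes in sgbuf->count, so the shared error path frees exactly the pages that were obtained. The same partial-failure bookkeeping in a runnable userspace analogue:

    #include <stdlib.h>

    /* Hypothetical userspace analogue: count completed allocations so
     * cleanup frees exactly those, from the error path or later. */
    struct buf { size_t count; void **pages; };

    static void buf_free(struct buf *b)
    {
            for (size_t i = 0; i < b->count; i++)
                    free(b->pages[i]);
            free(b->pages);
    }

    static int buf_alloc(struct buf *b, size_t n)
    {
            b->count = 0;
            b->pages = calloc(n, sizeof(*b->pages));
            if (!b->pages)
                    return -1;
            for (size_t i = 0; i < n; b->count++, i++) {
                    b->pages[i] = malloc(4096);
                    if (!b->pages[i]) {
                            buf_free(b); /* frees only the b->count successes */
                            return -1;
                    }
            }
            return 0;
    }

    int main(void)
    {
            struct buf b;

            if (buf_alloc(&b, 8))
                    return 1;
            buf_free(&b);
            return 0;
    }
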
+diff --git a/sound/core/pcm_misc.c b/sound/core/pcm_misc.c
+index 4866aed97aacc..5588b6a1ee8bd 100644
+--- a/sound/core/pcm_misc.c
++++ b/sound/core/pcm_misc.c
+@@ -433,7 +433,7 @@ int snd_pcm_format_set_silence(snd_pcm_format_t format, void *data, unsigned int
+ return 0;
+ width = pcm_formats[(INT)format].phys; /* physical width */
+ pat = pcm_formats[(INT)format].silence;
+- if (! width)
++ if (!width || !pat)
+ return -EINVAL;
+ /* signed or 1 byte data */
+ if (pcm_formats[(INT)format].signd == 1 || width <= 8) {
+diff --git a/sound/drivers/mtpav.c b/sound/drivers/mtpav.c
+index 11235baaf6fa5..f212f233ea618 100644
+--- a/sound/drivers/mtpav.c
++++ b/sound/drivers/mtpav.c
+@@ -693,8 +693,6 @@ static int snd_mtpav_probe(struct platform_device *dev)
+ mtp_card->outmidihwport = 0xffffffff;
+ timer_setup(&mtp_card->timer, snd_mtpav_output_timer, 0);
+
+- card->private_free = snd_mtpav_free;
+-
+ err = snd_mtpav_get_RAWMIDI(mtp_card);
+ if (err < 0)
+ return err;
+@@ -716,6 +714,8 @@ static int snd_mtpav_probe(struct platform_device *dev)
+ if (err < 0)
+ return err;
+
++ card->private_free = snd_mtpav_free;
++
+ platform_set_drvdata(dev, card);
+ printk(KERN_INFO "Motu MidiTimePiece on parallel port irq: %d ioport: 0x%lx\n", irq, port);
+ return 0;
+diff --git a/sound/isa/galaxy/galaxy.c b/sound/isa/galaxy/galaxy.c
+index ea001c80149dd..3164eb8510fa4 100644
+--- a/sound/isa/galaxy/galaxy.c
++++ b/sound/isa/galaxy/galaxy.c
+@@ -478,7 +478,7 @@ static void snd_galaxy_free(struct snd_card *card)
+ galaxy_set_config(galaxy, galaxy->config);
+ }
+
+-static int snd_galaxy_probe(struct device *dev, unsigned int n)
++static int __snd_galaxy_probe(struct device *dev, unsigned int n)
+ {
+ struct snd_galaxy *galaxy;
+ struct snd_wss *chip;
+@@ -598,6 +598,11 @@ static int snd_galaxy_probe(struct device *dev, unsigned int n)
+ return 0;
+ }
+
++static int snd_galaxy_probe(struct device *dev, unsigned int n)
++{
++ return snd_card_free_on_error(dev, __snd_galaxy_probe(dev, n));
++}
++
+ static struct isa_driver snd_galaxy_driver = {
+ .match = snd_galaxy_match,
+ .probe = snd_galaxy_probe,
+diff --git a/sound/isa/sc6000.c b/sound/isa/sc6000.c
+index 26ab7ff807684..60398fced046b 100644
+--- a/sound/isa/sc6000.c
++++ b/sound/isa/sc6000.c
+@@ -537,7 +537,7 @@ static void snd_sc6000_free(struct snd_card *card)
+ sc6000_setup_board(vport, 0);
+ }
+
+-static int snd_sc6000_probe(struct device *devptr, unsigned int dev)
++static int __snd_sc6000_probe(struct device *devptr, unsigned int dev)
+ {
+ static const int possible_irqs[] = { 5, 7, 9, 10, 11, -1 };
+ static const int possible_dmas[] = { 1, 3, 0, -1 };
+@@ -662,6 +662,11 @@ static int snd_sc6000_probe(struct device *devptr, unsigned int dev)
+ return 0;
+ }
+
++static int snd_sc6000_probe(struct device *devptr, unsigned int dev)
++{
++ return snd_card_free_on_error(devptr, __snd_sc6000_probe(devptr, dev));
++}
++
+ static struct isa_driver snd_sc6000_driver = {
+ .match = snd_sc6000_match,
+ .probe = snd_sc6000_probe,
+diff --git a/sound/pci/ad1889.c b/sound/pci/ad1889.c
+index bba4dae8dcc70..50e30704bf6f9 100644
+--- a/sound/pci/ad1889.c
++++ b/sound/pci/ad1889.c
+@@ -844,8 +844,8 @@ snd_ad1889_create(struct snd_card *card, struct pci_dev *pci)
+ }
+
+ static int
+-snd_ad1889_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++__snd_ad1889_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ int err;
+ static int devno;
+@@ -904,6 +904,12 @@ snd_ad1889_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_ad1889_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_ad1889_probe(pci, pci_id));
++}
++
+ static const struct pci_device_id snd_ad1889_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_ANALOG_DEVICES, PCI_DEVICE_ID_AD1889JS) },
+ { 0, },
+diff --git a/sound/pci/ali5451/ali5451.c b/sound/pci/ali5451/ali5451.c
+index 92eb59db106de..2378a39abaebe 100644
+--- a/sound/pci/ali5451/ali5451.c
++++ b/sound/pci/ali5451/ali5451.c
+@@ -2124,8 +2124,8 @@ static int snd_ali_create(struct snd_card *card,
+ return 0;
+ }
+
+-static int snd_ali_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_ali_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct snd_ali *codec;
+@@ -2170,6 +2170,12 @@ static int snd_ali_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_ali_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_ali_probe(pci, pci_id));
++}
++
+ static struct pci_driver ali5451_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_ali_ids,
+diff --git a/sound/pci/als300.c b/sound/pci/als300.c
+index b86565dcdbe41..c70aff0601205 100644
+--- a/sound/pci/als300.c
++++ b/sound/pci/als300.c
+@@ -708,7 +708,7 @@ static int snd_als300_probe(struct pci_dev *pci,
+
+ err = snd_als300_create(card, pci, chip_type);
+ if (err < 0)
+- return err;
++ goto error;
+
+ strcpy(card->driver, "ALS300");
+ if (chip->chip_type == DEVICE_ALS300_PLUS)
+@@ -723,11 +723,15 @@ static int snd_als300_probe(struct pci_dev *pci,
+
+ err = snd_card_register(card);
+ if (err < 0)
+- return err;
++ goto error;
+
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ static struct pci_driver als300_driver = {
+diff --git a/sound/pci/als4000.c b/sound/pci/als4000.c
+index 535eccd124bee..f33aeb692a112 100644
+--- a/sound/pci/als4000.c
++++ b/sound/pci/als4000.c
+@@ -806,8 +806,8 @@ static void snd_card_als4000_free( struct snd_card *card )
+ snd_als4000_free_gameport(acard);
+ }
+
+-static int snd_card_als4000_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_card_als4000_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -930,6 +930,12 @@ static int snd_card_als4000_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_card_als4000_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_card_als4000_probe(pci, pci_id));
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static int snd_als4000_suspend(struct device *dev)
+ {
+diff --git a/sound/pci/atiixp.c b/sound/pci/atiixp.c
+index b8e035d5930d2..43d01f1847ed7 100644
+--- a/sound/pci/atiixp.c
++++ b/sound/pci/atiixp.c
+@@ -1572,8 +1572,8 @@ static int snd_atiixp_init(struct snd_card *card, struct pci_dev *pci)
+ }
+
+
+-static int snd_atiixp_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_atiixp_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct atiixp *chip;
+@@ -1623,6 +1623,12 @@ static int snd_atiixp_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_atiixp_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_atiixp_probe(pci, pci_id));
++}
++
+ static struct pci_driver atiixp_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_atiixp_ids,
+diff --git a/sound/pci/atiixp_modem.c b/sound/pci/atiixp_modem.c
+index 178dce8ef1e99..8864c4c3c7e13 100644
+--- a/sound/pci/atiixp_modem.c
++++ b/sound/pci/atiixp_modem.c
+@@ -1201,8 +1201,8 @@ static int snd_atiixp_init(struct snd_card *card, struct pci_dev *pci)
+ }
+
+
+-static int snd_atiixp_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_atiixp_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct atiixp_modem *chip;
+@@ -1247,6 +1247,12 @@ static int snd_atiixp_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_atiixp_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_atiixp_probe(pci, pci_id));
++}
++
+ static struct pci_driver atiixp_modem_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_atiixp_ids,
+diff --git a/sound/pci/au88x0/au88x0.c b/sound/pci/au88x0/au88x0.c
+index 342ef2a6655e3..eb234153691bc 100644
+--- a/sound/pci/au88x0/au88x0.c
++++ b/sound/pci/au88x0/au88x0.c
+@@ -193,7 +193,7 @@ snd_vortex_create(struct snd_card *card, struct pci_dev *pci)
+
+ // constructor -- see "Constructor" sub-section
+ static int
+-snd_vortex_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++__snd_vortex_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -310,6 +310,12 @@ snd_vortex_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ return 0;
+ }
+
++static int
++snd_vortex_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_vortex_probe(pci, pci_id));
++}
++
+ // pci_driver definition
+ static struct pci_driver vortex_driver = {
+ .name = KBUILD_MODNAME,
+diff --git a/sound/pci/aw2/aw2-alsa.c b/sound/pci/aw2/aw2-alsa.c
+index d56f126d6fdd9..29a4bcdec237a 100644
+--- a/sound/pci/aw2/aw2-alsa.c
++++ b/sound/pci/aw2/aw2-alsa.c
+@@ -275,7 +275,7 @@ static int snd_aw2_probe(struct pci_dev *pci,
+ /* (3) Create main component */
+ err = snd_aw2_create(card, pci);
+ if (err < 0)
+- return err;
++ goto error;
+
+ /* initialize mutex */
+ mutex_init(&chip->mtx);
+@@ -294,13 +294,17 @@ static int snd_aw2_probe(struct pci_dev *pci,
+ /* (6) Register card instance */
+ err = snd_card_register(card);
+ if (err < 0)
+- return err;
++ goto error;
+
+ /* (7) Set PCI driver data */
+ pci_set_drvdata(pci, card);
+
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ /* open callback */
+diff --git a/sound/pci/azt3328.c b/sound/pci/azt3328.c
+index 089050470ff27..7f329dfc5404a 100644
+--- a/sound/pci/azt3328.c
++++ b/sound/pci/azt3328.c
+@@ -2427,7 +2427,7 @@ snd_azf3328_create(struct snd_card *card,
+ }
+
+ static int
+-snd_azf3328_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++__snd_azf3328_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -2520,6 +2520,12 @@ snd_azf3328_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ return 0;
+ }
+
++static int
++snd_azf3328_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_azf3328_probe(pci, pci_id));
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static inline void
+ snd_azf3328_suspend_regs(const struct snd_azf3328 *chip,
+diff --git a/sound/pci/bt87x.c b/sound/pci/bt87x.c
+index d23f931638410..621985bfee5d7 100644
+--- a/sound/pci/bt87x.c
++++ b/sound/pci/bt87x.c
+@@ -805,8 +805,8 @@ static int snd_bt87x_detect_card(struct pci_dev *pci)
+ return SND_BT87X_BOARD_UNKNOWN;
+ }
+
+-static int snd_bt87x_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_bt87x_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -889,6 +889,12 @@ static int snd_bt87x_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_bt87x_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_bt87x_probe(pci, pci_id));
++}
++
+ /* default entries for all Bt87x cards - it's not exported */
+ /* driver_data is set to 0 to call detection */
+ static const struct pci_device_id snd_bt87x_default_ids[] = {
+diff --git a/sound/pci/ca0106/ca0106_main.c b/sound/pci/ca0106/ca0106_main.c
+index 36fb150b72fb5..f4cc112bddf3e 100644
+--- a/sound/pci/ca0106/ca0106_main.c
++++ b/sound/pci/ca0106/ca0106_main.c
+@@ -1725,8 +1725,8 @@ static int snd_ca0106_midi(struct snd_ca0106 *chip, unsigned int channel)
+ }
+
+
+-static int snd_ca0106_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_ca0106_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -1786,6 +1786,12 @@ static int snd_ca0106_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_ca0106_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_ca0106_probe(pci, pci_id));
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static int snd_ca0106_suspend(struct device *dev)
+ {
+diff --git a/sound/pci/cmipci.c b/sound/pci/cmipci.c
+index dab801d9d3b48..727db6d433916 100644
+--- a/sound/pci/cmipci.c
++++ b/sound/pci/cmipci.c
+@@ -3247,15 +3247,19 @@ static int snd_cmipci_probe(struct pci_dev *pci,
+
+ err = snd_cmipci_create(card, pci, dev);
+ if (err < 0)
+- return err;
++ goto error;
+
+ err = snd_card_register(card);
+ if (err < 0)
+- return err;
++ goto error;
+
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/sound/pci/cs4281.c b/sound/pci/cs4281.c
+index e7367402b84a3..0c9cadf7b3b80 100644
+--- a/sound/pci/cs4281.c
++++ b/sound/pci/cs4281.c
+@@ -1827,8 +1827,8 @@ static void snd_cs4281_opl3_command(struct snd_opl3 *opl3, unsigned short cmd,
+ spin_unlock_irqrestore(&opl3->reg_lock, flags);
+ }
+
+-static int snd_cs4281_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_cs4281_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -1888,6 +1888,12 @@ static int snd_cs4281_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_cs4281_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_cs4281_probe(pci, pci_id));
++}
++
+ /*
+ * Power Management
+ */
+diff --git a/sound/pci/cs5535audio/cs5535audio.c b/sound/pci/cs5535audio/cs5535audio.c
+index 499fa0148f9a4..440b8f9b40c96 100644
+--- a/sound/pci/cs5535audio/cs5535audio.c
++++ b/sound/pci/cs5535audio/cs5535audio.c
+@@ -281,8 +281,8 @@ static int snd_cs5535audio_create(struct snd_card *card,
+ return 0;
+ }
+
+-static int snd_cs5535audio_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_cs5535audio_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -331,6 +331,12 @@ static int snd_cs5535audio_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_cs5535audio_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_cs5535audio_probe(pci, pci_id));
++}
++
+ static struct pci_driver cs5535audio_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_cs5535audio_ids,
+diff --git a/sound/pci/echoaudio/echoaudio.c b/sound/pci/echoaudio/echoaudio.c
+index 25b012ef5c3e6..c70c3ac4e99a5 100644
+--- a/sound/pci/echoaudio/echoaudio.c
++++ b/sound/pci/echoaudio/echoaudio.c
+@@ -1970,8 +1970,8 @@ static int snd_echo_create(struct snd_card *card,
+ }
+
+ /* constructor */
+-static int snd_echo_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_echo_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -2139,6 +2139,11 @@ static int snd_echo_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_echo_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_echo_probe(pci, pci_id));
++}
+
+
+ #if defined(CONFIG_PM_SLEEP)
+diff --git a/sound/pci/emu10k1/emu10k1x.c b/sound/pci/emu10k1/emu10k1x.c
+index c49c44dc10820..89043392f3ec7 100644
+--- a/sound/pci/emu10k1/emu10k1x.c
++++ b/sound/pci/emu10k1/emu10k1x.c
+@@ -1491,8 +1491,8 @@ static int snd_emu10k1x_midi(struct emu10k1x *emu)
+ return 0;
+ }
+
+-static int snd_emu10k1x_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_emu10k1x_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -1554,6 +1554,12 @@ static int snd_emu10k1x_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_emu10k1x_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_emu10k1x_probe(pci, pci_id));
++}
++
+ // PCI IDs
+ static const struct pci_device_id snd_emu10k1x_ids[] = {
+ { PCI_VDEVICE(CREATIVE, 0x0006), 0 }, /* Dell OEM version (EMU10K1) */
+diff --git a/sound/pci/ens1370.c b/sound/pci/ens1370.c
+index 2651f0c64c062..94efe347a97a9 100644
+--- a/sound/pci/ens1370.c
++++ b/sound/pci/ens1370.c
+@@ -2304,8 +2304,8 @@ static irqreturn_t snd_audiopci_interrupt(int irq, void *dev_id)
+ return IRQ_HANDLED;
+ }
+
+-static int snd_audiopci_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_audiopci_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -2369,6 +2369,12 @@ static int snd_audiopci_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_audiopci_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_audiopci_probe(pci, pci_id));
++}
++
+ static struct pci_driver ens137x_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_audiopci_ids,
+diff --git a/sound/pci/es1938.c b/sound/pci/es1938.c
+index 00b976f42a3db..e34ec6f89e7e0 100644
+--- a/sound/pci/es1938.c
++++ b/sound/pci/es1938.c
+@@ -1716,8 +1716,8 @@ static int snd_es1938_mixer(struct es1938 *chip)
+ }
+
+
+-static int snd_es1938_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_es1938_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -1796,6 +1796,12 @@ static int snd_es1938_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_es1938_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_es1938_probe(pci, pci_id));
++}
++
+ static struct pci_driver es1938_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_es1938_ids,
+diff --git a/sound/pci/es1968.c b/sound/pci/es1968.c
+index 6a8a02a9ecf41..4a7e20bb11bca 100644
+--- a/sound/pci/es1968.c
++++ b/sound/pci/es1968.c
+@@ -2741,8 +2741,8 @@ static int snd_es1968_create(struct snd_card *card,
+
+ /*
+ */
+-static int snd_es1968_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_es1968_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -2848,6 +2848,12 @@ static int snd_es1968_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_es1968_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_es1968_probe(pci, pci_id));
++}
++
+ static struct pci_driver es1968_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_es1968_ids,
+diff --git a/sound/pci/fm801.c b/sound/pci/fm801.c
+index 9c22ff19e56d2..62b3cb126c6d0 100644
+--- a/sound/pci/fm801.c
++++ b/sound/pci/fm801.c
+@@ -1268,8 +1268,8 @@ static int snd_fm801_create(struct snd_card *card,
+ return 0;
+ }
+
+-static int snd_card_fm801_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_card_fm801_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -1333,6 +1333,12 @@ static int snd_card_fm801_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_card_fm801_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_card_fm801_probe(pci, pci_id));
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static const unsigned char saved_regs[] = {
+ FM801_PCM_VOL, FM801_I2S_VOL, FM801_FM_VOL, FM801_REC_SRC,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 16e90524a4977..ca40c2bd8ba62 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2619,6 +2619,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x65f1, "Clevo PC50HS", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++ SND_PCI_QUIRK(0x1558, 0x65f5, "Clevo PD50PN[NRT]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+@@ -9217,6 +9218,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x505d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x505f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x5062, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x508b, "Thinkpad X12 Gen 1", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+diff --git a/sound/pci/ice1712/ice1724.c b/sound/pci/ice1712/ice1724.c
+index f6275868877a7..6fab2ad85bbec 100644
+--- a/sound/pci/ice1712/ice1724.c
++++ b/sound/pci/ice1712/ice1724.c
+@@ -2519,8 +2519,8 @@ static int snd_vt1724_create(struct snd_card *card,
+ *
+ */
+
+-static int snd_vt1724_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_vt1724_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -2662,6 +2662,12 @@ static int snd_vt1724_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_vt1724_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_vt1724_probe(pci, pci_id));
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static int snd_vt1724_suspend(struct device *dev)
+ {
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index a51032b3ac4d8..ae285c0a629c8 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -3109,8 +3109,8 @@ static int check_default_spdif_aclink(struct pci_dev *pci)
+ return 0;
+ }
+
+-static int snd_intel8x0_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_intel8x0_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct intel8x0 *chip;
+@@ -3189,6 +3189,12 @@ static int snd_intel8x0_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_intel8x0_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_intel8x0_probe(pci, pci_id));
++}
++
+ static struct pci_driver intel8x0_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_intel8x0_ids,
+diff --git a/sound/pci/intel8x0m.c b/sound/pci/intel8x0m.c
+index 7de3cb2f17b52..2845cc006d0cf 100644
+--- a/sound/pci/intel8x0m.c
++++ b/sound/pci/intel8x0m.c
+@@ -1178,8 +1178,8 @@ static struct shortname_table {
+ { 0 },
+ };
+
+-static int snd_intel8x0m_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_intel8x0m_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct intel8x0m *chip;
+@@ -1225,6 +1225,12 @@ static int snd_intel8x0m_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_intel8x0m_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_intel8x0m_probe(pci, pci_id));
++}
++
+ static struct pci_driver intel8x0m_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_intel8x0m_ids,
+diff --git a/sound/pci/korg1212/korg1212.c b/sound/pci/korg1212/korg1212.c
+index 5c9e240ff6a9c..33b4f95d65b3f 100644
+--- a/sound/pci/korg1212/korg1212.c
++++ b/sound/pci/korg1212/korg1212.c
+@@ -2355,7 +2355,7 @@ snd_korg1212_probe(struct pci_dev *pci,
+
+ err = snd_korg1212_create(card, pci);
+ if (err < 0)
+- return err;
++ goto error;
+
+ strcpy(card->driver, "korg1212");
+ strcpy(card->shortname, "korg1212");
+@@ -2366,10 +2366,14 @@ snd_korg1212_probe(struct pci_dev *pci,
+
+ err = snd_card_register(card);
+ if (err < 0)
+- return err;
++ goto error;
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ static struct pci_driver korg1212_driver = {
+diff --git a/sound/pci/lola/lola.c b/sound/pci/lola/lola.c
+index 5269a1d396a5b..1aa30e90b86a7 100644
+--- a/sound/pci/lola/lola.c
++++ b/sound/pci/lola/lola.c
+@@ -637,8 +637,8 @@ static int lola_create(struct snd_card *card, struct pci_dev *pci, int dev)
+ return 0;
+ }
+
+-static int lola_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __lola_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -687,6 +687,12 @@ static int lola_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int lola_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __lola_probe(pci, pci_id));
++}
++
+ /* PCI IDs */
+ static const struct pci_device_id lola_ids[] = {
+ { PCI_VDEVICE(DIGIGRAM, 0x0001) },
+diff --git a/sound/pci/lx6464es/lx6464es.c b/sound/pci/lx6464es/lx6464es.c
+index 168a1084f7303..bd9b6148dd6fb 100644
+--- a/sound/pci/lx6464es/lx6464es.c
++++ b/sound/pci/lx6464es/lx6464es.c
+@@ -1019,7 +1019,7 @@ static int snd_lx6464es_probe(struct pci_dev *pci,
+ err = snd_lx6464es_create(card, pci);
+ if (err < 0) {
+ dev_err(card->dev, "error during snd_lx6464es_create\n");
+- return err;
++ goto error;
+ }
+
+ strcpy(card->driver, "LX6464ES");
+@@ -1036,12 +1036,16 @@ static int snd_lx6464es_probe(struct pci_dev *pci,
+
+ err = snd_card_register(card);
+ if (err < 0)
+- return err;
++ goto error;
+
+ dev_dbg(chip->card->dev, "initialization successful\n");
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ static struct pci_driver lx6464es_driver = {
+diff --git a/sound/pci/maestro3.c b/sound/pci/maestro3.c
+index 056838ead21d6..261850775c807 100644
+--- a/sound/pci/maestro3.c
++++ b/sound/pci/maestro3.c
+@@ -2637,7 +2637,7 @@ snd_m3_create(struct snd_card *card, struct pci_dev *pci,
+ /*
+ */
+ static int
+-snd_m3_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++__snd_m3_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -2702,6 +2702,12 @@ snd_m3_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ return 0;
+ }
+
++static int
++snd_m3_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_m3_probe(pci, pci_id));
++}
++
+ static struct pci_driver m3_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_m3_ids,
+diff --git a/sound/pci/nm256/nm256.c b/sound/pci/nm256/nm256.c
+index c9c178504959e..f99a1e96e9231 100644
+--- a/sound/pci/nm256/nm256.c
++++ b/sound/pci/nm256/nm256.c
+@@ -1573,7 +1573,6 @@ snd_nm256_create(struct snd_card *card, struct pci_dev *pci)
+ chip->coeffs_current = 0;
+
+ snd_nm256_init_chip(chip);
+- card->private_free = snd_nm256_free;
+
+ // pci_set_master(pci); /* needed? */
+ return 0;
+@@ -1680,6 +1679,7 @@ static int snd_nm256_probe(struct pci_dev *pci,
+ err = snd_card_register(card);
+ if (err < 0)
+ return err;
++ card->private_free = snd_nm256_free;
+
+ pci_set_drvdata(pci, card);
+ return 0;
+diff --git a/sound/pci/oxygen/oxygen_lib.c b/sound/pci/oxygen/oxygen_lib.c
+index 4fb3f2484fdba..92ffe9dc20c55 100644
+--- a/sound/pci/oxygen/oxygen_lib.c
++++ b/sound/pci/oxygen/oxygen_lib.c
+@@ -576,7 +576,7 @@ static void oxygen_card_free(struct snd_card *card)
+ mutex_destroy(&chip->mutex);
+ }
+
+-int oxygen_pci_probe(struct pci_dev *pci, int index, char *id,
++static int __oxygen_pci_probe(struct pci_dev *pci, int index, char *id,
+ struct module *owner,
+ const struct pci_device_id *ids,
+ int (*get_model)(struct oxygen *chip,
+@@ -701,6 +701,16 @@ int oxygen_pci_probe(struct pci_dev *pci, int index, char *id,
+ pci_set_drvdata(pci, card);
+ return 0;
+ }
++
++int oxygen_pci_probe(struct pci_dev *pci, int index, char *id,
++ struct module *owner,
++ const struct pci_device_id *ids,
++ int (*get_model)(struct oxygen *chip,
++ const struct pci_device_id *id))
++{
++ return snd_card_free_on_error(&pci->dev,
++ __oxygen_pci_probe(pci, index, id, owner, ids, get_model));
++}
+ EXPORT_SYMBOL(oxygen_pci_probe);
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/sound/pci/riptide/riptide.c b/sound/pci/riptide/riptide.c
+index 5a987c683c41c..b37c877c2c160 100644
+--- a/sound/pci/riptide/riptide.c
++++ b/sound/pci/riptide/riptide.c
+@@ -2023,7 +2023,7 @@ static void snd_riptide_joystick_remove(struct pci_dev *pci)
+ #endif
+
+ static int
+-snd_card_riptide_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++__snd_card_riptide_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -2124,6 +2124,12 @@ snd_card_riptide_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ return 0;
+ }
+
++static int
++snd_card_riptide_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_card_riptide_probe(pci, pci_id));
++}
++
+ static struct pci_driver driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_riptide_ids,
+diff --git a/sound/pci/rme32.c b/sound/pci/rme32.c
+index 5b6bd9f0b2f77..9c0ac025e1432 100644
+--- a/sound/pci/rme32.c
++++ b/sound/pci/rme32.c
+@@ -1875,7 +1875,7 @@ static void snd_rme32_card_free(struct snd_card *card)
+ }
+
+ static int
+-snd_rme32_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++__snd_rme32_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct rme32 *rme32;
+@@ -1927,6 +1927,12 @@ snd_rme32_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
+ return 0;
+ }
+
++static int
++snd_rme32_probe(struct pci_dev *pci, const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_rme32_probe(pci, pci_id));
++}
++
+ static struct pci_driver rme32_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_rme32_ids,
+diff --git a/sound/pci/rme96.c b/sound/pci/rme96.c
+index 8fc8115049203..bccb7e0d3d116 100644
+--- a/sound/pci/rme96.c
++++ b/sound/pci/rme96.c
+@@ -2430,8 +2430,8 @@ static void snd_rme96_card_free(struct snd_card *card)
+ }
+
+ static int
+-snd_rme96_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++__snd_rme96_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct rme96 *rme96;
+@@ -2498,6 +2498,12 @@ snd_rme96_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_rme96_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_rme96_probe(pci, pci_id));
++}
++
+ static struct pci_driver rme96_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_rme96_ids,
+diff --git a/sound/pci/rme9652/hdsp.c b/sound/pci/rme9652/hdsp.c
+index 96c12dfb24cf9..3db641318d3ae 100644
+--- a/sound/pci/rme9652/hdsp.c
++++ b/sound/pci/rme9652/hdsp.c
+@@ -5444,17 +5444,21 @@ static int snd_hdsp_probe(struct pci_dev *pci,
+ hdsp->pci = pci;
+ err = snd_hdsp_create(card, hdsp);
+ if (err)
+- return err;
++ goto error;
+
+ strcpy(card->shortname, "Hammerfall DSP");
+ sprintf(card->longname, "%s at 0x%lx, irq %d", hdsp->card_name,
+ hdsp->port, hdsp->irq);
+ err = snd_card_register(card);
+ if (err)
+- return err;
++ goto error;
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ static struct pci_driver hdsp_driver = {
+diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
+index ff06ee82607cf..fa1812e7a49dc 100644
+--- a/sound/pci/rme9652/hdspm.c
++++ b/sound/pci/rme9652/hdspm.c
+@@ -6895,7 +6895,7 @@ static int snd_hdspm_probe(struct pci_dev *pci,
+
+ err = snd_hdspm_create(card, hdspm);
+ if (err < 0)
+- return err;
++ goto error;
+
+ if (hdspm->io_type != MADIface) {
+ snprintf(card->shortname, sizeof(card->shortname), "%s_%x",
+@@ -6914,12 +6914,16 @@ static int snd_hdspm_probe(struct pci_dev *pci,
+
+ err = snd_card_register(card);
+ if (err < 0)
+- return err;
++ goto error;
+
+ pci_set_drvdata(pci, card);
+
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ static struct pci_driver hdspm_driver = {
+diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c
+index 7755e19aa7761..1d614fe89a6ae 100644
+--- a/sound/pci/rme9652/rme9652.c
++++ b/sound/pci/rme9652/rme9652.c
+@@ -2572,7 +2572,7 @@ static int snd_rme9652_probe(struct pci_dev *pci,
+ rme9652->pci = pci;
+ err = snd_rme9652_create(card, rme9652, precise_ptr[dev]);
+ if (err)
+- return err;
++ goto error;
+
+ strcpy(card->shortname, rme9652->card_name);
+
+@@ -2580,10 +2580,14 @@ static int snd_rme9652_probe(struct pci_dev *pci,
+ card->shortname, rme9652->port, rme9652->irq);
+ err = snd_card_register(card);
+ if (err)
+- return err;
++ goto error;
+ pci_set_drvdata(pci, card);
+ dev++;
+ return 0;
++
++ error:
++ snd_card_free(card);
++ return err;
+ }
+
+ static struct pci_driver rme9652_driver = {
+diff --git a/sound/pci/sis7019.c b/sound/pci/sis7019.c
+index 0b722b0e0604b..fabe393607f8f 100644
+--- a/sound/pci/sis7019.c
++++ b/sound/pci/sis7019.c
+@@ -1331,8 +1331,8 @@ static int sis_chip_create(struct snd_card *card,
+ return 0;
+ }
+
+-static int snd_sis7019_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_sis7019_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct sis7019 *sis;
+@@ -1352,8 +1352,8 @@ static int snd_sis7019_probe(struct pci_dev *pci,
+ if (!codecs)
+ codecs = SIS_PRIMARY_CODEC_PRESENT;
+
+- rc = snd_card_new(&pci->dev, index, id, THIS_MODULE,
+- sizeof(*sis), &card);
++ rc = snd_devm_card_new(&pci->dev, index, id, THIS_MODULE,
++ sizeof(*sis), &card);
+ if (rc < 0)
+ return rc;
+
+@@ -1386,6 +1386,12 @@ static int snd_sis7019_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_sis7019_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_sis7019_probe(pci, pci_id));
++}
++
+ static struct pci_driver sis7019_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_sis7019_ids,
+diff --git a/sound/pci/sonicvibes.c b/sound/pci/sonicvibes.c
+index c8c49881008fd..f91cbf6eeca0f 100644
+--- a/sound/pci/sonicvibes.c
++++ b/sound/pci/sonicvibes.c
+@@ -1387,8 +1387,8 @@ static int snd_sonicvibes_midi(struct sonicvibes *sonic,
+ return 0;
+ }
+
+-static int snd_sonic_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_sonic_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ static int dev;
+ struct snd_card *card;
+@@ -1459,6 +1459,12 @@ static int snd_sonic_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_sonic_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_sonic_probe(pci, pci_id));
++}
++
+ static struct pci_driver sonicvibes_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_sonic_ids,
+diff --git a/sound/pci/via82xx.c b/sound/pci/via82xx.c
+index 65514f7e42d7d..361b83fd721e6 100644
+--- a/sound/pci/via82xx.c
++++ b/sound/pci/via82xx.c
+@@ -2458,8 +2458,8 @@ static int check_dxs_list(struct pci_dev *pci, int revision)
+ return VIA_DXS_48K;
+ };
+
+-static int snd_via82xx_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_via82xx_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct via82xx *chip;
+@@ -2569,6 +2569,12 @@ static int snd_via82xx_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_via82xx_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_via82xx_probe(pci, pci_id));
++}
++
+ static struct pci_driver via82xx_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_via82xx_ids,
+diff --git a/sound/pci/via82xx_modem.c b/sound/pci/via82xx_modem.c
+index 234f7fbed2364..ca7f024bf8ec6 100644
+--- a/sound/pci/via82xx_modem.c
++++ b/sound/pci/via82xx_modem.c
+@@ -1103,8 +1103,8 @@ static int snd_via82xx_create(struct snd_card *card,
+ }
+
+
+-static int snd_via82xx_probe(struct pci_dev *pci,
+- const struct pci_device_id *pci_id)
++static int __snd_via82xx_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
+ {
+ struct snd_card *card;
+ struct via82xx_modem *chip;
+@@ -1157,6 +1157,12 @@ static int snd_via82xx_probe(struct pci_dev *pci,
+ return 0;
+ }
+
++static int snd_via82xx_probe(struct pci_dev *pci,
++ const struct pci_device_id *pci_id)
++{
++ return snd_card_free_on_error(&pci->dev, __snd_via82xx_probe(pci, pci_id));
++}
++
+ static struct pci_driver via82xx_modem_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = snd_via82xx_modem_ids,
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index cec6e91afea24..6d699065e81a2 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -669,9 +669,9 @@ static const struct snd_pcm_hardware snd_usb_hardware =
+ SNDRV_PCM_INFO_PAUSE,
+ .channels_min = 1,
+ .channels_max = 256,
+- .buffer_bytes_max = 1024 * 1024,
++ .buffer_bytes_max = INT_MAX, /* limited by BUFFER_TIME later */
+ .period_bytes_min = 64,
+- .period_bytes_max = 512 * 1024,
++ .period_bytes_max = INT_MAX, /* limited by PERIOD_TIME later */
+ .periods_min = 2,
+ .periods_max = 1024,
+ };
+@@ -1064,6 +1064,18 @@ static int setup_hw_info(struct snd_pcm_runtime *runtime, struct snd_usb_substre
+ return err;
+ }
+
++ /* set max period and buffer sizes for 1 and 2 seconds, respectively */
++ err = snd_pcm_hw_constraint_minmax(runtime,
++ SNDRV_PCM_HW_PARAM_PERIOD_TIME,
++ 0, 1000000);
++ if (err < 0)
++ return err;
++ err = snd_pcm_hw_constraint_minmax(runtime,
++ SNDRV_PCM_HW_PARAM_BUFFER_TIME,
++ 0, 2000000);
++ if (err < 0)
++ return err;
++
+ /* additional hw constraints for implicit fb */
+ err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_FORMAT,
+ hw_rule_format_implicit_fb, subs,
+diff --git a/sound/x86/intel_hdmi_audio.c b/sound/x86/intel_hdmi_audio.c
+index 4a3ff6468aa75..fa664cf03c326 100644
+--- a/sound/x86/intel_hdmi_audio.c
++++ b/sound/x86/intel_hdmi_audio.c
+@@ -1665,7 +1665,7 @@ static void hdmi_lpe_audio_free(struct snd_card *card)
+ * This function is called when the i915 driver creates the
+ * hdmi-lpe-audio platform device.
+ */
+-static int hdmi_lpe_audio_probe(struct platform_device *pdev)
++static int __hdmi_lpe_audio_probe(struct platform_device *pdev)
+ {
+ struct snd_card *card;
+ struct snd_intelhad_card *card_ctx;
+@@ -1828,6 +1828,11 @@ static int hdmi_lpe_audio_probe(struct platform_device *pdev)
+ return 0;
+ }
+
++static int hdmi_lpe_audio_probe(struct platform_device *pdev)
++{
++ return snd_card_free_on_error(&pdev->dev, __hdmi_lpe_audio_probe(pdev));
++}
++
+ static const struct dev_pm_ops hdmi_lpe_audio_pm = {
+ SET_SYSTEM_SLEEP_PM_OPS(hdmi_lpe_audio_suspend, hdmi_lpe_audio_resume)
+ };
+diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h
+index a4a39c3e0f196..0c2610cde6ea2 100644
+--- a/tools/arch/x86/include/asm/msr-index.h
++++ b/tools/arch/x86/include/asm/msr-index.h
+@@ -128,9 +128,9 @@
+ #define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */
+ #define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */
+
+-/* SRBDS support */
+ #define MSR_IA32_MCU_OPT_CTRL 0x00000123
+-#define RNGDS_MITG_DIS BIT(0)
++#define RNGDS_MITG_DIS BIT(0) /* SRBDS support */
++#define RTM_ALLOW BIT(1) /* TSX development mode */
+
+ #define MSR_IA32_SYSENTER_CS 0x00000174
+ #define MSR_IA32_SYSENTER_ESP 0x00000175
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 24997925ae00d..dd84fed698a3b 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -1523,7 +1523,9 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
+ bool use_uncore_alias;
+ LIST_HEAD(config_terms);
+
+- if (verbose > 1) {
++ pmu = parse_state->fake_pmu ?: perf_pmu__find(name);
++
++ if (verbose > 1 && !(pmu && pmu->selectable)) {
+ fprintf(stderr, "Attempting to add event pmu '%s' with '",
+ name);
+ if (head_config) {
+@@ -1536,7 +1538,6 @@ int parse_events_add_pmu(struct parse_events_state *parse_state,
+ fprintf(stderr, "' that may result in non-fatal errors\n");
+ }
+
+- pmu = parse_state->fake_pmu ?: perf_pmu__find(name);
+ if (!pmu) {
+ char *err_str;
+
+diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h
+index dc284c6bdbc37..eca5c622efd25 100644
+--- a/tools/testing/selftests/kvm/include/riscv/processor.h
++++ b/tools/testing/selftests/kvm/include/riscv/processor.h
+@@ -101,7 +101,9 @@ static inline void set_reg(struct kvm_vm *vm, uint32_t vcpuid, uint64_t id,
+ #define PGTBL_PTE_WRITE_SHIFT 2
+ #define PGTBL_PTE_READ_MASK 0x0000000000000002ULL
+ #define PGTBL_PTE_READ_SHIFT 1
+-#define PGTBL_PTE_PERM_MASK (PGTBL_PTE_EXECUTE_MASK | \
++#define PGTBL_PTE_PERM_MASK (PGTBL_PTE_ACCESSED_MASK | \
++ PGTBL_PTE_DIRTY_MASK | \
++ PGTBL_PTE_EXECUTE_MASK | \
+ PGTBL_PTE_WRITE_MASK | \
+ PGTBL_PTE_READ_MASK)
+ #define PGTBL_PTE_VALID_MASK 0x0000000000000001ULL
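The widened mask pre-sets the accessed/dirty bits, since RISC-V allows implementations to fault rather than update A/D in hardware. A hypothetical helper showing how selftest-style constants would compose a leaf PTE; PGTBL_PTE_ADDR_SHIFT is assumed to be the header's PPN shift:

	static inline uint64_t make_leaf_pte(uint64_t pfn)
	{
		/* valid + R/W/X + A/D, so no first-touch A/D fault */
		return (pfn << PGTBL_PTE_ADDR_SHIFT) |
		       PGTBL_PTE_PERM_MASK | PGTBL_PTE_VALID_MASK;
	}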
+diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
+index d377f2603d98a..3961487a4870d 100644
+--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
++++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
+@@ -268,7 +268,7 @@ void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid, uint8_t indent)
+ core.regs.t3, core.regs.t4, core.regs.t5, core.regs.t6);
+ }
+
+-static void guest_hang(void)
++static void __aligned(16) guest_hang(void)
+ {
+ while (1)
+ ;
+diff --git a/tools/testing/selftests/mqueue/mq_perf_tests.c b/tools/testing/selftests/mqueue/mq_perf_tests.c
+index b019e0b8221c7..84fda3b490735 100644
+--- a/tools/testing/selftests/mqueue/mq_perf_tests.c
++++ b/tools/testing/selftests/mqueue/mq_perf_tests.c
+@@ -180,6 +180,9 @@ void shutdown(int exit_val, char *err_cause, int line_no)
+ if (in_shutdown++)
+ return;
+
++ /* Free the cpu_set allocated using CPU_ALLOC in main function */
++ CPU_FREE(cpu_set);
++
+ for (i = 0; i < num_cpus_to_pin; i++)
+ if (cpu_threads[i]) {
+ pthread_kill(cpu_threads[i], SIGUSR1);
+@@ -551,6 +554,12 @@ int main(int argc, char *argv[])
+ perror("sysconf(_SC_NPROCESSORS_ONLN)");
+ exit(1);
+ }
++
++ if (getuid() != 0)
++ ksft_exit_skip("Not running as root, but almost all tests "
++ "require root in order to modify\nsystem settings. "
++ "Exiting.\n");
++
+ cpus_online = min(MAX_CPUS, sysconf(_SC_NPROCESSORS_ONLN));
+ cpu_set = CPU_ALLOC(cpus_online);
+ if (cpu_set == NULL) {
+@@ -589,7 +598,7 @@ int main(int argc, char *argv[])
+ cpu_set)) {
+ fprintf(stderr, "Any given CPU may "
+ "only be given once.\n");
+- exit(1);
++ goto err_code;
+ } else
+ CPU_SET_S(cpus_to_pin[cpu],
+ cpu_set_size, cpu_set);
+@@ -607,7 +616,7 @@ int main(int argc, char *argv[])
+ queue_path = malloc(strlen(option) + 2);
+ if (!queue_path) {
+ perror("malloc()");
+- exit(1);
++ goto err_code;
+ }
+ queue_path[0] = '/';
+ queue_path[1] = 0;
+@@ -622,17 +631,12 @@ int main(int argc, char *argv[])
+ fprintf(stderr, "Must pass at least one CPU to continuous "
+ "mode.\n");
+ poptPrintUsage(popt_context, stderr, 0);
+- exit(1);
++ goto err_code;
+ } else if (!continuous_mode) {
+ num_cpus_to_pin = 1;
+ cpus_to_pin[0] = cpus_online - 1;
+ }
+
+- if (getuid() != 0)
+- ksft_exit_skip("Not running as root, but almost all tests "
+- "require root in order to modify\nsystem settings. "
+- "Exiting.\n");
+-
+ max_msgs = fopen(MAX_MSGS, "r+");
+ max_msgsize = fopen(MAX_MSGSIZE, "r+");
+ if (!max_msgs)
+@@ -740,4 +744,9 @@ int main(int argc, char *argv[])
+ sleep(1);
+ }
+ shutdown(0, "", 0);
++
++err_code:
++ CPU_FREE(cpu_set);
++ exit(1);
++
+ }
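Taken together, the hunks make every exit path release the dynamically sized CPU set. A standalone user-space illustration of the CPU_ALLOC()/CPU_FREE() discipline the fix enforces:

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
		cpu_set_t *set = CPU_ALLOC(ncpus);
		size_t size = CPU_ALLOC_SIZE(ncpus);

		if (!set) {
			perror("CPU_ALLOC");
			return 1;
		}
		CPU_ZERO_S(size, set);
		CPU_SET_S(0, size, set);	/* pin to CPU 0 */
		if (sched_setaffinity(0, size, set)) {
			perror("sched_setaffinity");
			CPU_FREE(set);		/* free on the error path too */
			return 1;
		}
		printf("pinned to CPU 0\n");
		CPU_FREE(set);
		return 0;
	}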
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-13 17:53 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-13 17:53 UTC (permalink / raw)

To: gentoo-commits
commit: d8736bd8058ccef545d015dcb5b8d7541d58dd99
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 13 17:52:15 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 13 17:52:15 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d8736bd8
Linux patch 5.17.3
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1002_linux-5.17.3.patch | 16307 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 16311 insertions(+)
diff --git a/0000_README b/0000_README
index 18d9a104..a57e4c32 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-5.17.2.patch
From: http://www.kernel.org
Desc: Linux 5.17.2
+Patch: 1002_linux-5.17.3.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-5.17.3.patch b/1002_linux-5.17.3.patch
new file mode 100644
index 00000000..980116ce
--- /dev/null
+++ b/1002_linux-5.17.3.patch
@@ -0,0 +1,16307 @@
+diff --git a/Documentation/virt/kvm/devices/vcpu.rst b/Documentation/virt/kvm/devices/vcpu.rst
+index 60a29972d3f1b..d063aaee5bb73 100644
+--- a/Documentation/virt/kvm/devices/vcpu.rst
++++ b/Documentation/virt/kvm/devices/vcpu.rst
+@@ -70,7 +70,7 @@ irqchip.
+ -ENODEV PMUv3 not supported or GIC not initialized
+ -ENXIO PMUv3 not properly configured or in-kernel irqchip not
+ configured as required prior to calling this attribute
+- -EBUSY PMUv3 already initialized
++ -EBUSY PMUv3 already initialized or a VCPU has already run
+ -EINVAL Invalid filter range
+ ======= ======================================================
+
+diff --git a/Makefile b/Makefile
+index 06d852cad74ff..02fbef1a0213b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index bfbf0c4c7c5e5..39f5c1672f480 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -75,6 +75,7 @@
+ #define ARM_CPU_PART_CORTEX_A77 0xD0D
+ #define ARM_CPU_PART_NEOVERSE_V1 0xD40
+ #define ARM_CPU_PART_CORTEX_A78 0xD41
++#define ARM_CPU_PART_CORTEX_A78AE 0xD42
+ #define ARM_CPU_PART_CORTEX_X1 0xD44
+ #define ARM_CPU_PART_CORTEX_A510 0xD46
+ #define ARM_CPU_PART_CORTEX_A710 0xD47
+@@ -123,6 +124,7 @@
+ #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
+ #define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
+ #define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
++#define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
+ #define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1)
+ #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510)
+ #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 031e3a2537fc8..8234626a945a7 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -136,6 +136,7 @@ struct kvm_arch {
+
+ /* Memory Tagging Extension enabled for the guest */
+ bool mte_enabled;
++ bool ran_once;
+ };
+
+ struct kvm_vcpu_fault_info {
+diff --git a/arch/arm64/kernel/patching.c b/arch/arm64/kernel/patching.c
+index 771f543464e06..33e0fabc0b79b 100644
+--- a/arch/arm64/kernel/patching.c
++++ b/arch/arm64/kernel/patching.c
+@@ -117,8 +117,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
+ int i, ret = 0;
+ struct aarch64_insn_patch *pp = arg;
+
+- /* The first CPU becomes master */
+- if (atomic_inc_return(&pp->cpu_count) == 1) {
++ /* The last CPU becomes master */
++ if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
+ for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
+ ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
+ pp->new_insns[i]);
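The one-line change flips which CPU in the stop_machine() rendezvous does the patching: electing the last arrival guarantees all other CPUs are already spinning before any instruction is rewritten. A simplified sketch of the rendezvous shape (struct patch_args and the poke itself are placeholders):

	#include <linux/atomic.h>
	#include <linux/cpumask.h>

	struct patch_args {
		atomic_t cpu_count;
		/* addresses and new instructions ... */
	};

	static int patch_rendezvous(void *arg)	/* run via stop_machine() */
	{
		struct patch_args *pp = arg;

		if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
			/* last CPU in: every other CPU is now spinning */
			/* ... write the new instructions ... */
			atomic_inc(&pp->cpu_count);	/* release waiters */
		} else {
			while (atomic_read(&pp->cpu_count) <= num_online_cpus())
				cpu_relax();
		}
		return 0;
	}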
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 5777929d35bf4..40be3a7c2c531 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -853,6 +853,7 @@ u8 spectre_bhb_loop_affected(int scope)
+ if (scope == SCOPE_LOCAL_CPU) {
+ static const struct midr_range spectre_bhb_k32_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 27df5c1e6baad..3b46041f2b978 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -234,6 +234,7 @@ asmlinkage notrace void secondary_start_kernel(void)
+ * Log the CPU info before it is marked online and might get read.
+ */
+ cpuinfo_store_cpu();
++ store_cpu_topology(cpu);
+
+ /*
+ * Enable GIC and timers.
+@@ -242,7 +243,6 @@ asmlinkage notrace void secondary_start_kernel(void)
+
+ ipi_setup(cpu);
+
+- store_cpu_topology(cpu);
+ numa_add_cpu(cpu);
+
+ /*
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 4dca6ffd03d42..85a2a75f44982 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -634,6 +634,10 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ if (kvm_vm_is_protected(kvm))
+ kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
+
++ mutex_lock(&kvm->lock);
++ kvm->arch.ran_once = true;
++ mutex_unlock(&kvm->lock);
++
+ return ret;
+ }
+
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index fbcfd4ec6f926..bc771bc1a0413 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -924,6 +924,8 @@ static bool pmu_irq_is_valid(struct kvm *kvm, int irq)
+
+ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ {
++ struct kvm *kvm = vcpu->kvm;
++
+ if (!kvm_vcpu_has_pmu(vcpu))
+ return -ENODEV;
+
+@@ -941,7 +943,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ int __user *uaddr = (int __user *)(long)attr->addr;
+ int irq;
+
+- if (!irqchip_in_kernel(vcpu->kvm))
++ if (!irqchip_in_kernel(kvm))
+ return -EINVAL;
+
+ if (get_user(irq, uaddr))
+@@ -951,7 +953,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ if (!(irq_is_ppi(irq) || irq_is_spi(irq)))
+ return -EINVAL;
+
+- if (!pmu_irq_is_valid(vcpu->kvm, irq))
++ if (!pmu_irq_is_valid(kvm, irq))
+ return -EINVAL;
+
+ if (kvm_arm_pmu_irq_initialized(vcpu))
+@@ -966,7 +968,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ struct kvm_pmu_event_filter filter;
+ int nr_events;
+
+- nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1;
++ nr_events = kvm_pmu_event_mask(kvm) + 1;
+
+ uaddr = (struct kvm_pmu_event_filter __user *)(long)attr->addr;
+
+@@ -978,12 +980,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ filter.action != KVM_PMU_EVENT_DENY))
+ return -EINVAL;
+
+- mutex_lock(&vcpu->kvm->lock);
++ mutex_lock(&kvm->lock);
++
++ if (kvm->arch.ran_once) {
++ mutex_unlock(&kvm->lock);
++ return -EBUSY;
++ }
+
+- if (!vcpu->kvm->arch.pmu_filter) {
+- vcpu->kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
+- if (!vcpu->kvm->arch.pmu_filter) {
+- mutex_unlock(&vcpu->kvm->lock);
++ if (!kvm->arch.pmu_filter) {
++ kvm->arch.pmu_filter = bitmap_alloc(nr_events, GFP_KERNEL_ACCOUNT);
++ if (!kvm->arch.pmu_filter) {
++ mutex_unlock(&kvm->lock);
+ return -ENOMEM;
+ }
+
+@@ -994,17 +1001,17 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+ * events, the default is to allow.
+ */
+ if (filter.action == KVM_PMU_EVENT_ALLOW)
+- bitmap_zero(vcpu->kvm->arch.pmu_filter, nr_events);
++ bitmap_zero(kvm->arch.pmu_filter, nr_events);
+ else
+- bitmap_fill(vcpu->kvm->arch.pmu_filter, nr_events);
++ bitmap_fill(kvm->arch.pmu_filter, nr_events);
+ }
+
+ if (filter.action == KVM_PMU_EVENT_ALLOW)
+- bitmap_set(vcpu->kvm->arch.pmu_filter, filter.base_event, filter.nevents);
++ bitmap_set(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
+ else
+- bitmap_clear(vcpu->kvm->arch.pmu_filter, filter.base_event, filter.nevents);
++ bitmap_clear(kvm->arch.pmu_filter, filter.base_event, filter.nevents);
+
+- mutex_unlock(&vcpu->kvm->lock);
++ mutex_unlock(&kvm->lock);
+
+ return 0;
+ }
+diff --git a/arch/mips/boot/dts/ingenic/jz4780.dtsi b/arch/mips/boot/dts/ingenic/jz4780.dtsi
+index 3f9ea47a10cd2..b998301f179ce 100644
+--- a/arch/mips/boot/dts/ingenic/jz4780.dtsi
++++ b/arch/mips/boot/dts/ingenic/jz4780.dtsi
+@@ -510,7 +510,7 @@
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+- eth0_addr: eth-mac-addr@0x22 {
++ eth0_addr: eth-mac-addr@22 {
+ reg = <0x22 0x6>;
+ };
+ };
+diff --git a/arch/mips/include/asm/setup.h b/arch/mips/include/asm/setup.h
+index bb36a400203df..8c56b862fd9c2 100644
+--- a/arch/mips/include/asm/setup.h
++++ b/arch/mips/include/asm/setup.h
+@@ -16,7 +16,7 @@ static inline void setup_8250_early_printk_port(unsigned long base,
+ unsigned int reg_shift, unsigned int timeout) {}
+ #endif
+
+-extern void set_handler(unsigned long offset, void *addr, unsigned long len);
++void set_handler(unsigned long offset, const void *addr, unsigned long len);
+ extern void set_uncached_handler(unsigned long offset, void *addr, unsigned long len);
+
+ typedef void (*vi_handler_t)(void);
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index a486486b2355c..246c6a6b02614 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -2091,19 +2091,19 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
+ * If no shadow set is selected then use the default handler
+ * that does normal register saving and standard interrupt exit
+ */
+- extern char except_vec_vi, except_vec_vi_lui;
+- extern char except_vec_vi_ori, except_vec_vi_end;
+- extern char rollback_except_vec_vi;
+- char *vec_start = using_rollback_handler() ?
+- &rollback_except_vec_vi : &except_vec_vi;
++ extern const u8 except_vec_vi[], except_vec_vi_lui[];
++ extern const u8 except_vec_vi_ori[], except_vec_vi_end[];
++ extern const u8 rollback_except_vec_vi[];
++ const u8 *vec_start = using_rollback_handler() ?
++ rollback_except_vec_vi : except_vec_vi;
+ #if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_BIG_ENDIAN)
+- const int lui_offset = &except_vec_vi_lui - vec_start + 2;
+- const int ori_offset = &except_vec_vi_ori - vec_start + 2;
++ const int lui_offset = except_vec_vi_lui - vec_start + 2;
++ const int ori_offset = except_vec_vi_ori - vec_start + 2;
+ #else
+- const int lui_offset = &except_vec_vi_lui - vec_start;
+- const int ori_offset = &except_vec_vi_ori - vec_start;
++ const int lui_offset = except_vec_vi_lui - vec_start;
++ const int ori_offset = except_vec_vi_ori - vec_start;
+ #endif
+- const int handler_len = &except_vec_vi_end - vec_start;
++ const int handler_len = except_vec_vi_end - vec_start;
+
+ if (handler_len > VECTORSPACING) {
+ /*
+@@ -2311,7 +2311,7 @@ void per_cpu_trap_init(bool is_boot_cpu)
+ }
+
+ /* Install CPU exception handler */
+-void set_handler(unsigned long offset, void *addr, unsigned long size)
++void set_handler(unsigned long offset, const void *addr, unsigned long size)
+ {
+ #ifdef CONFIG_CPU_MICROMIPS
+ memcpy((void *)(ebase + offset), ((unsigned char *)addr - 1), size);
+diff --git a/arch/mips/ralink/ill_acc.c b/arch/mips/ralink/ill_acc.c
+index 115a69fc20caa..f395ae218470f 100644
+--- a/arch/mips/ralink/ill_acc.c
++++ b/arch/mips/ralink/ill_acc.c
+@@ -61,6 +61,7 @@ static int __init ill_acc_of_setup(void)
+ pdev = of_find_device_by_node(np);
+ if (!pdev) {
+ pr_err("%pOFn: failed to lookup pdev\n", np);
++ of_node_put(np);
+ return -EINVAL;
+ }
+
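The fix belongs to a common leak class: of_find_compatible_node() returns a node with its refcount held, so every early exit must drop it. A generic sketch (the "vendor,example" compatible is made up):

	#include <linux/of.h>
	#include <linux/of_platform.h>

	static int __init example_of_setup(void)
	{
		struct device_node *np;
		struct platform_device *pdev;

		np = of_find_compatible_node(NULL, NULL, "vendor,example");
		if (!np)
			return -ENODEV;

		pdev = of_find_device_by_node(np);
		if (!pdev) {
			of_node_put(np);	/* error path must put too */
			return -EINVAL;
		}

		of_node_put(np);
		return 0;
	}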
+diff --git a/arch/parisc/kernel/patch.c b/arch/parisc/kernel/patch.c
+index 80a0ab372802d..e59574f65e641 100644
+--- a/arch/parisc/kernel/patch.c
++++ b/arch/parisc/kernel/patch.c
+@@ -40,10 +40,7 @@ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags,
+
+ *need_unmap = 1;
+ set_fixmap(fixmap, page_to_phys(page));
+- if (flags)
+- raw_spin_lock_irqsave(&patch_lock, *flags);
+- else
+- __acquire(&patch_lock);
++ raw_spin_lock_irqsave(&patch_lock, *flags);
+
+ return (void *) (__fix_to_virt(fixmap) + (uintaddr & ~PAGE_MASK));
+ }
+@@ -52,10 +49,7 @@ static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
+ {
+ clear_fixmap(fixmap);
+
+- if (flags)
+- raw_spin_unlock_irqrestore(&patch_lock, *flags);
+- else
+- __release(&patch_lock);
++ raw_spin_unlock_irqrestore(&patch_lock, *flags);
+ }
+
+ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+@@ -67,8 +61,9 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ int mapped;
+
+ /* Make sure we don't have any aliases in cache */
+- flush_kernel_vmap_range(addr, len);
+- flush_icache_range(start, end);
++ flush_kernel_dcache_range_asm(start, end);
++ flush_kernel_icache_range_asm(start, end);
++ flush_tlb_kernel_range(start, end);
+
+ p = fixmap = patch_map(addr, FIX_TEXT_POKE0, &flags, &mapped);
+
+@@ -81,8 +76,10 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ * We're crossing a page boundary, so
+ * need to remap
+ */
+- flush_kernel_vmap_range((void *)fixmap,
+- (p-fixmap) * sizeof(*p));
++ flush_kernel_dcache_range_asm((unsigned long)fixmap,
++ (unsigned long)p);
++ flush_tlb_kernel_range((unsigned long)fixmap,
++ (unsigned long)p);
+ if (mapped)
+ patch_unmap(FIX_TEXT_POKE0, &flags);
+ p = fixmap = patch_map(addr, FIX_TEXT_POKE0, &flags,
+@@ -90,10 +87,10 @@ void __kprobes __patch_text_multiple(void *addr, u32 *insn, unsigned int len)
+ }
+ }
+
+- flush_kernel_vmap_range((void *)fixmap, (p-fixmap) * sizeof(*p));
++ flush_kernel_dcache_range_asm((unsigned long)fixmap, (unsigned long)p);
++ flush_tlb_kernel_range((unsigned long)fixmap, (unsigned long)p);
+ if (mapped)
+ patch_unmap(FIX_TEXT_POKE0, &flags);
+- flush_icache_range(start, end);
+ }
+
+ void __kprobes __patch_text(void *addr, u32 insn)
+diff --git a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
+index 099a598c74c00..bfe1ed5be3374 100644
+--- a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
++++ b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
+@@ -139,12 +139,12 @@
+ fman@400000 {
+ ethernet@e6000 {
+ phy-handle = <&phy_rgmii_0>;
+- phy-connection-type = "rgmii";
++ phy-connection-type = "rgmii-id";
+ };
+
+ ethernet@e8000 {
+ phy-handle = <&phy_rgmii_1>;
+- phy-connection-type = "rgmii";
++ phy-connection-type = "rgmii-id";
+ };
+
+ mdio0: mdio@fc000 {
+diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
+index fc28f46d2f9dc..5404f7abbcf8d 100644
+--- a/arch/powerpc/include/asm/interrupt.h
++++ b/arch/powerpc/include/asm/interrupt.h
+@@ -612,7 +612,7 @@ DECLARE_INTERRUPT_HANDLER_RAW(do_slb_fault);
+ DECLARE_INTERRUPT_HANDLER(do_bad_segment_interrupt);
+
+ /* hash_utils.c */
+-DECLARE_INTERRUPT_HANDLER_RAW(do_hash_fault);
++DECLARE_INTERRUPT_HANDLER(do_hash_fault);
+
+ /* fault.c */
+ DECLARE_INTERRUPT_HANDLER(do_page_fault);
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index 254687258f42b..f2c5c26869f1a 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -132,7 +132,11 @@ static inline bool pfn_valid(unsigned long pfn)
+ #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
+ #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
+
+-#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
++#define virt_addr_valid(vaddr) ({ \
++ unsigned long _addr = (unsigned long)vaddr; \
++ _addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory && \
++ pfn_valid(virt_to_pfn(_addr)); \
++})
+
+ /*
+ * On Book-E parts we need __va to parse the device tree and we can't
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 733e6ef367589..1f42aabbbab3a 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -1313,6 +1313,12 @@ int __init early_init_dt_scan_rtas(unsigned long node,
+ entryp = of_get_flat_dt_prop(node, "linux,rtas-entry", NULL);
+ sizep = of_get_flat_dt_prop(node, "rtas-size", NULL);
+
++#ifdef CONFIG_PPC64
++ /* need this feature to decide the crashkernel offset */
++ if (of_get_flat_dt_prop(node, "ibm,hypertas-functions", NULL))
++ powerpc_firmware_features |= FW_FEATURE_LPAR;
++#endif
++
+ if (basep && entryp && sizep) {
+ rtas.base = *basep;
+ rtas.entry = *entryp;
+diff --git a/arch/powerpc/kernel/secvar-sysfs.c b/arch/powerpc/kernel/secvar-sysfs.c
+index a0a78aba2083e..1ee4640a26413 100644
+--- a/arch/powerpc/kernel/secvar-sysfs.c
++++ b/arch/powerpc/kernel/secvar-sysfs.c
+@@ -26,15 +26,18 @@ static ssize_t format_show(struct kobject *kobj, struct kobj_attribute *attr,
+ const char *format;
+
+ node = of_find_compatible_node(NULL, NULL, "ibm,secvar-backend");
+- if (!of_device_is_available(node))
+- return -ENODEV;
++ if (!of_device_is_available(node)) {
++ rc = -ENODEV;
++ goto out;
++ }
+
+ rc = of_property_read_string(node, "format", &format);
+ if (rc)
+- return rc;
++ goto out;
+
+ rc = sprintf(buf, "%s\n", format);
+
++out:
+ of_node_put(node);
+
+ return rc;
+diff --git a/arch/powerpc/kexec/core.c b/arch/powerpc/kexec/core.c
+index 8b68d9f91a03b..abf5897ae88c8 100644
+--- a/arch/powerpc/kexec/core.c
++++ b/arch/powerpc/kexec/core.c
+@@ -134,11 +134,18 @@ void __init reserve_crashkernel(void)
+ if (!crashk_res.start) {
+ #ifdef CONFIG_PPC64
+ /*
+- * On 64bit we split the RMO in half but cap it at half of
+- * a small SLB (128MB) since the crash kernel needs to place
+- * itself and some stacks to be in the first segment.
++ * On the LPAR platform place the crash kernel to mid of
++ * RMA size (512MB or more) to ensure the crash kernel
++ * gets enough space to place itself and some stack to be
++ * in the first segment. At the same time the normal kernel
++ * also gets enough space to allocate memory for essential
++ * system resources in the first segment. Keep the crash
++ * kernel start at the 128MB offset on other platforms.
+ */
+- crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
++ if (firmware_has_feature(FW_FEATURE_LPAR))
++ crashk_res.start = ppc64_rma_size / 2;
++ else
++ crashk_res.start = min(0x8000000ULL, (ppc64_rma_size / 2));
+ #else
+ crashk_res.start = KDUMP_KERNELBASE;
+ #endif
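The comment's placement rule, reduced to a sketch (not the kernel's actual function):

	static unsigned long pick_crashkernel_base(unsigned long rma_size,
						   bool lpar)
	{
		/*
		 * LPAR guarantees an RMA of 512MB or more, so mid-RMA
		 * leaves both kernels room in the first segment; keep
		 * the historic 128MB cap everywhere else.
		 */
		if (lpar)
			return rma_size / 2;
		return min(0x8000000UL, rma_size / 2);
	}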
+diff --git a/arch/powerpc/kvm/book3s_64_entry.S b/arch/powerpc/kvm/book3s_64_entry.S
+index 05e003eb5d906..e42d1c609e476 100644
+--- a/arch/powerpc/kvm/book3s_64_entry.S
++++ b/arch/powerpc/kvm/book3s_64_entry.S
+@@ -414,10 +414,16 @@ END_FTR_SECTION_IFSET(CPU_FTR_DAWR1)
+ */
+ ld r10,HSTATE_SCRATCH0(r13)
+ cmpwi r10,BOOK3S_INTERRUPT_MACHINE_CHECK
+- beq machine_check_common
++ beq .Lcall_machine_check_common
+
+ cmpwi r10,BOOK3S_INTERRUPT_SYSTEM_RESET
+- beq system_reset_common
++ beq .Lcall_system_reset_common
+
+ b .
++
++.Lcall_machine_check_common:
++ b machine_check_common
++
++.Lcall_system_reset_common:
++ b system_reset_common
+ #endif
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index 906d434633667..00c68e7fb11e4 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -43,9 +43,14 @@ int raw_patch_instruction(u32 *addr, ppc_inst_t instr)
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+ static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
+
++static int map_patch_area(void *addr, unsigned long text_poke_addr);
++static void unmap_patch_area(unsigned long addr);
++
+ static int text_area_cpu_up(unsigned int cpu)
+ {
+ struct vm_struct *area;
++ unsigned long addr;
++ int err;
+
+ area = get_vm_area(PAGE_SIZE, VM_ALLOC);
+ if (!area) {
+@@ -53,6 +58,15 @@ static int text_area_cpu_up(unsigned int cpu)
+ cpu);
+ return -1;
+ }
++
++ // Map/unmap the area to ensure all page tables are pre-allocated
++ addr = (unsigned long)area->addr;
++ err = map_patch_area(empty_zero_page, addr);
++ if (err)
++ return err;
++
++ unmap_patch_area(addr);
++
+ this_cpu_write(text_poke_area, area);
+
+ return 0;
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index 7abf82a698d32..985cabdd7f679 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -1621,8 +1621,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
+ }
+ EXPORT_SYMBOL_GPL(hash_page);
+
+-DECLARE_INTERRUPT_HANDLER(__do_hash_fault);
+-DEFINE_INTERRUPT_HANDLER(__do_hash_fault)
++DEFINE_INTERRUPT_HANDLER(do_hash_fault)
+ {
+ unsigned long ea = regs->dar;
+ unsigned long dsisr = regs->dsisr;
+@@ -1681,35 +1680,6 @@ DEFINE_INTERRUPT_HANDLER(__do_hash_fault)
+ }
+ }
+
+-/*
+- * The _RAW interrupt entry checks for the in_nmi() case before
+- * running the full handler.
+- */
+-DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
+-{
+- /*
+- * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
+- * don't call hash_page, just fail the fault. This is required to
+- * prevent re-entrancy problems in the hash code, namely perf
+- * interrupts hitting while something holds H_PAGE_BUSY, and taking a
+- * hash fault. See the comment in hash_preload().
+- *
+- * We come here as a result of a DSI at a point where we don't want
+- * to call hash_page, such as when we are accessing memory (possibly
+- * user memory) inside a PMU interrupt that occurred while interrupts
+- * were soft-disabled. We want to invoke the exception handler for
+- * the access, or panic if there isn't a handler.
+- */
+- if (unlikely(in_nmi())) {
+- do_bad_page_fault_segv(regs);
+- return 0;
+- }
+-
+- __do_hash_fault(regs);
+-
+- return 0;
+-}
+-
+ #ifdef CONFIG_PPC_MM_SLICES
+ static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
+ {
+@@ -1776,26 +1746,18 @@ static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea,
+ #endif /* CONFIG_PPC_64K_PAGES */
+
+ /*
+- * __hash_page_* must run with interrupts off, as it sets the
+- * H_PAGE_BUSY bit. It's possible for perf interrupts to hit at any
+- * time and may take a hash fault reading the user stack, see
+- * read_user_stack_slow() in the powerpc/perf code.
+- *
+- * If that takes a hash fault on the same page as we lock here, it
+- * will bail out when seeing H_PAGE_BUSY set, and retry the access
+- * leading to an infinite loop.
++ * __hash_page_* must run with interrupts off, including PMI interrupts
++ * off, as it sets the H_PAGE_BUSY bit.
+ *
+- * Disabling interrupts here does not prevent perf interrupts, but it
+- * will prevent them taking hash faults (see the NMI test in
+- * do_hash_page), then read_user_stack's copy_from_user_nofault will
+- * fail and perf will fall back to read_user_stack_slow(), which
+- * walks the Linux page tables.
++ * It's otherwise possible for perf interrupts to hit at any time and
++ * may take a hash fault reading the user stack, which could take a
++ * hash miss and deadlock on the same H_PAGE_BUSY bit.
+ *
+ * Interrupts must also be off for the duration of the
+ * mm_is_thread_local test and update, to prevent preempt running the
+ * mm on another CPU (XXX: this may be racy vs kthread_use_mm).
+ */
+- local_irq_save(flags);
++ powerpc_local_irq_pmu_save(flags);
+
+ /* Is that local to this CPU ? */
+ if (mm_is_thread_local(mm))
+@@ -1820,7 +1782,7 @@ static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea,
+ mm_ctx_user_psize(&mm->context),
+ pte_val(*ptep));
+
+- local_irq_restore(flags);
++ powerpc_local_irq_pmu_restore(flags);
+ }
+
+ /*
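The hash_preload() comment rewrite above pins down the rule: any section that sets H_PAGE_BUSY must run with PMIs masked as well, or a perf interrupt faulting on the same PTE will spin on the busy bit. The pattern in isolation (sketch):

	#include <asm/hw_irq.h>

	static void hash_busy_section_sketch(void)
	{
		unsigned long flags;

		powerpc_local_irq_pmu_save(flags);  /* masks IRQs and PMIs */
		/* ... __hash_page_*() work that takes H_PAGE_BUSY ... */
		powerpc_local_irq_pmu_restore(flags);
	}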
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 8e301cd8925b2..4d221d033804e 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -255,7 +255,7 @@ void __init mem_init(void)
+ #endif
+
+ high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
+- set_max_mapnr(max_low_pfn);
++ set_max_mapnr(max_pfn);
+
+ kasan_late_init();
+
+diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
+index 3bb9d168e3b31..85753e32a4de9 100644
+--- a/arch/powerpc/mm/pageattr.c
++++ b/arch/powerpc/mm/pageattr.c
+@@ -15,12 +15,14 @@
+ #include <asm/pgtable.h>
+
+
++static pte_basic_t pte_update_delta(pte_t *ptep, unsigned long addr,
++ unsigned long old, unsigned long new)
++{
++ return pte_update(&init_mm, addr, ptep, old & ~new, new & ~old, 0);
++}
++
+ /*
+- * Updates the attributes of a page in three steps:
+- *
+- * 1. take the page_table_lock
+- * 2. install the new entry with the updated attributes
+- * 3. flush the TLB
++ * Updates the attributes of a page atomically.
+ *
+ * This sequence is safe against concurrent updates, and also allows updating the
+ * attributes of a page currently being executed or accessed.
+@@ -28,25 +30,21 @@
+ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+ {
+ long action = (long)data;
+- pte_t pte;
+
+- spin_lock(&init_mm.page_table_lock);
+-
+- pte = ptep_get(ptep);
+-
+- /* modify the PTE bits as desired, then apply */
++ /* modify the PTE bits as desired */
+ switch (action) {
+ case SET_MEMORY_RO:
+- pte = pte_wrprotect(pte);
++ /* Don't clear DIRTY bit */
++ pte_update_delta(ptep, addr, _PAGE_KERNEL_RW & ~_PAGE_DIRTY, _PAGE_KERNEL_RO);
+ break;
+ case SET_MEMORY_RW:
+- pte = pte_mkwrite(pte_mkdirty(pte));
++ pte_update_delta(ptep, addr, _PAGE_KERNEL_RO, _PAGE_KERNEL_RW);
+ break;
+ case SET_MEMORY_NX:
+- pte = pte_exprotect(pte);
++ pte_update_delta(ptep, addr, _PAGE_KERNEL_ROX, _PAGE_KERNEL_RO);
+ break;
+ case SET_MEMORY_X:
+- pte = pte_mkexec(pte);
++ pte_update_delta(ptep, addr, _PAGE_KERNEL_RO, _PAGE_KERNEL_ROX);
+ break;
+ case SET_MEMORY_NP:
+ pte_update(&init_mm, addr, ptep, _PAGE_PRESENT, 0, 0);
+@@ -59,16 +57,12 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+ break;
+ }
+
+- pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);
+-
+ /* See ptesync comment in radix__set_pte_at() */
+ if (radix_enabled())
+ asm volatile("ptesync": : :"memory");
+
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+- spin_unlock(&init_mm.page_table_lock);
+-
+ return 0;
+ }
+
+diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h
+index d6fa6e25234f4..19a8d051ddf10 100644
+--- a/arch/powerpc/perf/callchain.h
++++ b/arch/powerpc/perf/callchain.h
+@@ -2,7 +2,6 @@
+ #ifndef _POWERPC_PERF_CALLCHAIN_H
+ #define _POWERPC_PERF_CALLCHAIN_H
+
+-int read_user_stack_slow(const void __user *ptr, void *buf, int nb);
+ void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+ struct pt_regs *regs);
+ void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+@@ -26,17 +25,11 @@ static inline int __read_user_stack(const void __user *ptr, void *ret,
+ size_t size)
+ {
+ unsigned long addr = (unsigned long)ptr;
+- int rc;
+
+ if (addr > TASK_SIZE - size || (addr & (size - 1)))
+ return -EFAULT;
+
+- rc = copy_from_user_nofault(ret, ptr, size);
+-
+- if (IS_ENABLED(CONFIG_PPC64) && !radix_enabled() && rc)
+- return read_user_stack_slow(ptr, ret, size);
+-
+- return rc;
++ return copy_from_user_nofault(ret, ptr, size);
+ }
+
+ #endif /* _POWERPC_PERF_CALLCHAIN_H */
+diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
+index 8d0df4226328d..488e8a21a11ea 100644
+--- a/arch/powerpc/perf/callchain_64.c
++++ b/arch/powerpc/perf/callchain_64.c
+@@ -18,33 +18,6 @@
+
+ #include "callchain.h"
+
+-/*
+- * On 64-bit we don't want to invoke hash_page on user addresses from
+- * interrupt context, so if the access faults, we read the page tables
+- * to find which page (if any) is mapped and access it directly. Radix
+- * has no need for this so it doesn't use read_user_stack_slow.
+- */
+-int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
+-{
+-
+- unsigned long addr = (unsigned long) ptr;
+- unsigned long offset;
+- struct page *page;
+- void *kaddr;
+-
+- if (get_user_page_fast_only(addr, FOLL_WRITE, &page)) {
+- kaddr = page_address(page);
+-
+- /* align address to page boundary */
+- offset = addr & ~PAGE_MASK;
+-
+- memcpy(buf, kaddr + offset, nb);
+- put_page(page);
+- return 0;
+- }
+- return -EFAULT;
+-}
+-
+ static int read_user_stack_64(const unsigned long __user *ptr, unsigned long *ret)
+ {
+ return __read_user_stack(ptr, ret, sizeof(*ret));
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 87bc1929ee5a8..e2e1fec91c6ed 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -107,6 +107,7 @@ config PPC_BOOK3S_64
+
+ config PPC_BOOK3E_64
+ bool "Embedded processors"
++ select PPC_FSL_BOOK3E
+ select PPC_FPU # Make it a choice ?
+ select PPC_SMP_MUXED_IPI
+ select PPC_DOORBELL
+@@ -295,7 +296,7 @@ config FSL_BOOKE
+ config PPC_FSL_BOOK3E
+ bool
+ select ARCH_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64
+- select FSL_EMB_PERFMON
++ imply FSL_EMB_PERFMON
+ select PPC_SMP_MUXED_IPI
+ select PPC_DOORBELL
+ select PPC_KUEP
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 89c86f32aff86..bb5bda6b2357b 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -1791,7 +1791,7 @@ static int xive_ipi_debug_show(struct seq_file *m, void *private)
+ if (xive_ops->debug_show)
+ xive_ops->debug_show(m, private);
+
+- for_each_possible_cpu(cpu)
++ for_each_online_cpu(cpu)
+ xive_debug_show_ipi(m, cpu);
+ return 0;
+ }
+diff --git a/arch/riscv/lib/memmove.S b/arch/riscv/lib/memmove.S
+index 07d1d2152ba5c..e0609e1f0864d 100644
+--- a/arch/riscv/lib/memmove.S
++++ b/arch/riscv/lib/memmove.S
+@@ -1,64 +1,316 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (C) 2022 Michael T. Kloos <michael@michaelkloos.com>
++ */
+
+ #include <linux/linkage.h>
+ #include <asm/asm.h>
+
+-ENTRY(__memmove)
+-WEAK(memmove)
+- move t0, a0
+- move t1, a1
+-
+- beq a0, a1, exit_memcpy
+- beqz a2, exit_memcpy
+- srli t2, a2, 0x2
+-
+- slt t3, a0, a1
+- beqz t3, do_reverse
+-
+- andi a2, a2, 0x3
+- li t4, 1
+- beqz t2, byte_copy
+-
+-word_copy:
+- lw t3, 0(a1)
+- addi t2, t2, -1
+- addi a1, a1, 4
+- sw t3, 0(a0)
+- addi a0, a0, 4
+- bnez t2, word_copy
+- beqz a2, exit_memcpy
+- j byte_copy
+-
+-do_reverse:
+- add a0, a0, a2
+- add a1, a1, a2
+- andi a2, a2, 0x3
+- li t4, -1
+- beqz t2, reverse_byte_copy
+-
+-reverse_word_copy:
+- addi a1, a1, -4
+- addi t2, t2, -1
+- lw t3, 0(a1)
+- addi a0, a0, -4
+- sw t3, 0(a0)
+- bnez t2, reverse_word_copy
+- beqz a2, exit_memcpy
+-
+-reverse_byte_copy:
+- addi a0, a0, -1
+- addi a1, a1, -1
++SYM_FUNC_START(__memmove)
++SYM_FUNC_START_WEAK(memmove)
++ /*
++ * Returns
++ * a0 - dest
++ *
++ * Parameters
++ * a0 - Inclusive first byte of dest
++ * a1 - Inclusive first byte of src
++ * a2 - Length of copy n
++ *
++ * Because the return matches the parameter register a0,
++ * we will not clobber or modify that register.
++ *
++ * Note: This currently only works on little-endian.
++ * To port to big-endian, reverse the direction of shifts
++ * in the 2 misaligned fixup copy loops.
++ */
+
++ /* Return if nothing to do */
++ beq a0, a1, return_from_memmove
++ beqz a2, return_from_memmove
++
++ /*
++ * Register Uses
++ * Forward Copy: a1 - Index counter of src
++ * Reverse Copy: a4 - Index counter of src
++ * Forward Copy: t3 - Index counter of dest
++ * Reverse Copy: t4 - Index counter of dest
++ * Both Copy Modes: t5 - Inclusive first multibyte/aligned of dest
++ * Both Copy Modes: t6 - Non-Inclusive last multibyte/aligned of dest
++ * Both Copy Modes: t0 - Link / Temporary for load-store
++ * Both Copy Modes: t1 - Temporary for load-store
++ * Both Copy Modes: t2 - Temporary for load-store
++ * Both Copy Modes: a5 - dest to src alignment offset
++ * Both Copy Modes: a6 - Shift amount
++ * Both Copy Modes: a7 - Inverse Shift amount
++ * Both Copy Modes: a2 - Alternate breakpoint for unrolled loops
++ */
++
++ /*
++ * Solve for some register values now.
++ * Byte copy does not need t5 or t6.
++ */
++ mv t3, a0
++ add t4, a0, a2
++ add a4, a1, a2
++
++ /*
++ * Byte copy if copying less than (2 * SZREG) bytes. This can
++ * cause problems with the bulk copy implementation and is
++ * small enough not to bother.
++ */
++ andi t0, a2, -(2 * SZREG)
++ beqz t0, byte_copy
++
++ /*
++ * Now solve for t5 and t6.
++ */
++ andi t5, t3, -SZREG
++ andi t6, t4, -SZREG
++ /*
++ * If dest(Register t3) rounded down to the nearest naturally
++ * aligned SZREG address, does not equal dest, then add SZREG
++ * to find the low-bound of SZREG alignment in the dest memory
++ * region. Note that this could overshoot the dest memory
++ * region if n is less than SZREG. This is one reason why
++ * we always byte copy if n is less than SZREG.
++ * Otherwise, dest is already naturally aligned to SZREG.
++ */
++ beq t5, t3, 1f
++ addi t5, t5, SZREG
++ 1:
++
++ /*
++ * If the dest and src are co-aligned to SZREG, then there is
++ * no need for the full rigmarole of a full misaligned fixup copy.
++ * Instead, do a simpler co-aligned copy.
++ */
++ xor t0, a0, a1
++ andi t1, t0, (SZREG - 1)
++ beqz t1, coaligned_copy
++ /* Fall through to misaligned fixup copy */
++
++misaligned_fixup_copy:
++ bltu a1, a0, misaligned_fixup_copy_reverse
++
++misaligned_fixup_copy_forward:
++ jal t0, byte_copy_until_aligned_forward
++
++ andi a5, a1, (SZREG - 1) /* Find the alignment offset of src (a1) */
++ slli a6, a5, 3 /* Multiply by 8 to convert that to bits to shift */
++ sub a5, a1, t3 /* Find the difference between src and dest */
++ andi a1, a1, -SZREG /* Align the src pointer */
++ addi a2, t6, SZREG /* The other breakpoint for the unrolled loop */
++
++ /*
++ * Compute The Inverse Shift
++ * a7 = XLEN - a6 = XLEN + -a6
++ * 2s complement negation to find the negative: -a6 = ~a6 + 1
++ * Add that to XLEN. XLEN = SZREG * 8.
++ */
++ not a7, a6
++ addi a7, a7, (SZREG * 8 + 1)
++
++ /*
++ * Fix Misalignment Copy Loop - Forward
++ * load_val0 = load_ptr[0];
++ * do {
++ * load_val1 = load_ptr[1];
++ * store_ptr += 2;
++ * store_ptr[0 - 2] = (load_val0 >> {a6}) | (load_val1 << {a7});
++ *
++ * if (store_ptr == {a2})
++ * break;
++ *
++ * load_val0 = load_ptr[2];
++ * load_ptr += 2;
++ * store_ptr[1 - 2] = (load_val1 >> {a6}) | (load_val0 << {a7});
++ *
++ * } while (store_ptr != store_ptr_end);
++ * store_ptr = store_ptr_end;
++ */
++
++ REG_L t0, (0 * SZREG)(a1)
++ 1:
++ REG_L t1, (1 * SZREG)(a1)
++ addi t3, t3, (2 * SZREG)
++ srl t0, t0, a6
++ sll t2, t1, a7
++ or t2, t0, t2
++ REG_S t2, ((0 * SZREG) - (2 * SZREG))(t3)
++
++ beq t3, a2, 2f
++
++ REG_L t0, (2 * SZREG)(a1)
++ addi a1, a1, (2 * SZREG)
++ srl t1, t1, a6
++ sll t2, t0, a7
++ or t2, t1, t2
++ REG_S t2, ((1 * SZREG) - (2 * SZREG))(t3)
++
++ bne t3, t6, 1b
++ 2:
++ mv t3, t6 /* Fix the dest pointer in case the loop was broken */
++
++ add a1, t3, a5 /* Restore the src pointer */
++ j byte_copy_forward /* Copy any remaining bytes */
++
++misaligned_fixup_copy_reverse:
++ jal t0, byte_copy_until_aligned_reverse
++
++ andi a5, a4, (SZREG - 1) /* Find the alignment offset of src (a4) */
++ slli a6, a5, 3 /* Multiply by 8 to convert that to bits to shift */
++ sub a5, a4, t4 /* Find the difference between src and dest */
++ andi a4, a4, -SZREG /* Align the src pointer */
++ addi a2, t5, -SZREG /* The other breakpoint for the unrolled loop */
++
++ /*
++ * Compute The Inverse Shift
++ * a7 = XLEN - a6 = XLEN + -a6
++ * 2s complement negation to find the negative: -a6 = ~a6 + 1
++ * Add that to XLEN. XLEN = SZREG * 8.
++ */
++ not a7, a6
++ addi a7, a7, (SZREG * 8 + 1)
++
++ /*
++ * Fix Misalignment Copy Loop - Reverse
++ * load_val1 = load_ptr[0];
++ * do {
++ * load_val0 = load_ptr[-1];
++ * store_ptr -= 2;
++ * store_ptr[1] = (load_val0 >> {a6}) | (load_val1 << {a7});
++ *
++ * if (store_ptr == {a2})
++ * break;
++ *
++ * load_val1 = load_ptr[-2];
++ * load_ptr -= 2;
++ * store_ptr[0] = (load_val1 >> {a6}) | (load_val0 << {a7});
++ *
++ * } while (store_ptr != store_ptr_end);
++ * store_ptr = store_ptr_end;
++ */
++
++ REG_L t1, ( 0 * SZREG)(a4)
++ 1:
++ REG_L t0, (-1 * SZREG)(a4)
++ addi t4, t4, (-2 * SZREG)
++ sll t1, t1, a7
++ srl t2, t0, a6
++ or t2, t1, t2
++ REG_S t2, ( 1 * SZREG)(t4)
++
++ beq t4, a2, 2f
++
++ REG_L t1, (-2 * SZREG)(a4)
++ addi a4, a4, (-2 * SZREG)
++ sll t0, t0, a7
++ srl t2, t1, a6
++ or t2, t0, t2
++ REG_S t2, ( 0 * SZREG)(t4)
++
++ bne t4, t5, 1b
++ 2:
++ mv t4, t5 /* Fix the dest pointer in case the loop was broken */
++
++ add a4, t4, a5 /* Restore the src pointer */
++ j byte_copy_reverse /* Copy any remaining bytes */
++
++/*
++ * Simple copy loops for SZREG co-aligned memory locations.
++ * These also make calls to do byte copies for any unaligned
++ * data at their terminations.
++ */
++coaligned_copy:
++ bltu a1, a0, coaligned_copy_reverse
++
++coaligned_copy_forward:
++ jal t0, byte_copy_until_aligned_forward
++
++ 1:
++ REG_L t1, ( 0 * SZREG)(a1)
++ addi a1, a1, SZREG
++ addi t3, t3, SZREG
++ REG_S t1, (-1 * SZREG)(t3)
++ bne t3, t6, 1b
++
++ j byte_copy_forward /* Copy any remaining bytes */
++
++coaligned_copy_reverse:
++ jal t0, byte_copy_until_aligned_reverse
++
++ 1:
++ REG_L t1, (-1 * SZREG)(a4)
++ addi a4, a4, -SZREG
++ addi t4, t4, -SZREG
++ REG_S t1, ( 0 * SZREG)(t4)
++ bne t4, t5, 1b
++
++ j byte_copy_reverse /* Copy any remaining bytes */
++
++/*
++ * These are basically sub-functions within the function. They
++ * are used to byte copy until the dest pointer is in alignment.
++ * At which point, a bulk copy method can be used by the
++ * calling code. These work on the same registers as the bulk
++ * copy loops. Therefore, the register values can be picked
++ * up from where they were left and we avoid code duplication
++ * without any overhead except the call in and return jumps.
++ */
++byte_copy_until_aligned_forward:
++ beq t3, t5, 2f
++ 1:
++ lb t1, 0(a1)
++ addi a1, a1, 1
++ addi t3, t3, 1
++ sb t1, -1(t3)
++ bne t3, t5, 1b
++ 2:
++ jalr zero, 0x0(t0) /* Return to multibyte copy loop */
++
++byte_copy_until_aligned_reverse:
++ beq t4, t6, 2f
++ 1:
++ lb t1, -1(a4)
++ addi a4, a4, -1
++ addi t4, t4, -1
++ sb t1, 0(t4)
++ bne t4, t6, 1b
++ 2:
++ jalr zero, 0x0(t0) /* Return to multibyte copy loop */
++
++/*
++ * Simple byte copy loops.
++ * These will byte copy until they reach the end of data to copy.
++ * At that point, they will call to return from memmove.
++ */
+ byte_copy:
+- lb t3, 0(a1)
+- addi a2, a2, -1
+- sb t3, 0(a0)
+- add a1, a1, t4
+- add a0, a0, t4
+- bnez a2, byte_copy
+-
+-exit_memcpy:
+- move a0, t0
+- move a1, t1
+- ret
+-END(__memmove)
++ bltu a1, a0, byte_copy_reverse
++
++byte_copy_forward:
++ beq t3, t4, 2f
++ 1:
++ lb t1, 0(a1)
++ addi a1, a1, 1
++ addi t3, t3, 1
++ sb t1, -1(t3)
++ bne t3, t4, 1b
++ 2:
++ ret
++
++byte_copy_reverse:
++ beq t4, t3, 2f
++ 1:
++ lb t1, -1(a4)
++ addi a4, a4, -1
++ addi t4, t4, -1
++ sb t1, 0(t4)
++ bne t4, t3, 1b
++ 2:
++
++return_from_memmove:
++ ret
++
++SYM_FUNC_END(memmove)
++SYM_FUNC_END(__memmove)
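For readers following the assembly, the overlap rule it implements is the classic one: copy forward when dest precedes src, backward otherwise; the word-at-a-time and shift-based misalignment handling is layered on top. A byte-granular C sketch of just that rule:

	#include <stddef.h>

	void *memmove_sketch(void *dest, const void *src, size_t n)
	{
		unsigned char *d = dest;
		const unsigned char *s = src;

		if (d == s || n == 0)
			return dest;
		if (d < s) {			/* forward copy is safe */
			while (n--)
				*d++ = *s++;
		} else {			/* copy from the tail back */
			d += n;
			s += n;
			while (n--)
				*--d = *--s;
		}
		return dest;
	}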
+diff --git a/arch/um/include/asm/xor.h b/arch/um/include/asm/xor.h
+index f512704a9ec7b..22b39de73c246 100644
+--- a/arch/um/include/asm/xor.h
++++ b/arch/um/include/asm/xor.h
+@@ -4,8 +4,10 @@
+
+ #ifdef CONFIG_64BIT
+ #undef CONFIG_X86_32
++#define TT_CPU_INF_XOR_DEFAULT (AVX_SELECT(&xor_block_sse_pf64))
+ #else
+ #define CONFIG_X86_32 1
++#define TT_CPU_INF_XOR_DEFAULT (AVX_SELECT(&xor_block_8regs))
+ #endif
+
+ #include <asm/cpufeature.h>
+@@ -16,7 +18,7 @@
+ #undef XOR_SELECT_TEMPLATE
+ /* pick an arbitrary one - measuring isn't possible with inf-cpu */
+ #define XOR_SELECT_TEMPLATE(x) \
+- (time_travel_mode == TT_MODE_INFCPU ? &xor_block_8regs : NULL)
++ (time_travel_mode == TT_MODE_INFCPU ? TT_CPU_INF_XOR_DEFAULT : x)
+ #endif
+
+ #endif
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 9f5bd41bf660c..d0ecc4005df33 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2837,6 +2837,11 @@ config IA32_AOUT
+ config X86_X32
+ bool "x32 ABI for 64-bit mode"
+ depends on X86_64
++ # llvm-objcopy does not convert x86_64 .note.gnu.property or
++ # compressed debug sections to x86_x32 properly:
++ # https://github.com/ClangBuiltLinux/linux/issues/514
++ # https://github.com/ClangBuiltLinux/linux/issues/1141
++ depends on $(success,$(OBJCOPY) --version | head -n1 | grep -qv llvm)
+ help
+ Include code to run binaries for the x32 native 32-bit ABI
+ for 64-bit processors. An x32 process gets access to the
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index a3c7ca876aebd..d87c9b246a8fa 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -281,7 +281,7 @@ static struct extra_reg intel_spr_extra_regs[] __read_mostly = {
+ INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0),
+ INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1),
+ INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
+- INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff17, FE),
++ INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE),
+ INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0x7, FE),
+ INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
+ EVENT_EXTRA_END
+@@ -5515,7 +5515,11 @@ static void intel_pmu_check_event_constraints(struct event_constraint *event_con
+ /* Disabled fixed counters which are not in CPUID */
+ c->idxmsk64 &= intel_ctrl;
+
+- if (c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES)
++ /*
++ * Don't extend the pseudo-encoding to the
++ * generic counters
++ */
++ if (!use_fixed_pseudo_encoding(c->code))
+ c->idxmsk64 |= (1ULL << num_counters) - 1;
+ }
+ c->idxmsk64 &=
+diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
+index c878fed3056fd..fbcfec4dc4ccd 100644
+--- a/arch/x86/include/asm/asm.h
++++ b/arch/x86/include/asm/asm.h
+@@ -154,24 +154,24 @@
+
+ # define DEFINE_EXTABLE_TYPE_REG \
+ ".macro extable_type_reg type:req reg:req\n" \
+- ".set found, 0\n" \
+- ".set regnr, 0\n" \
++ ".set .Lfound, 0\n" \
++ ".set .Lregnr, 0\n" \
+ ".irp rs,rax,rcx,rdx,rbx,rsp,rbp,rsi,rdi,r8,r9,r10,r11,r12,r13,r14,r15\n" \
+ ".ifc \\reg, %%\\rs\n" \
+- ".set found, found+1\n" \
+- ".long \\type + (regnr << 8)\n" \
++ ".set .Lfound, .Lfound+1\n" \
++ ".long \\type + (.Lregnr << 8)\n" \
+ ".endif\n" \
+- ".set regnr, regnr+1\n" \
++ ".set .Lregnr, .Lregnr+1\n" \
+ ".endr\n" \
+- ".set regnr, 0\n" \
++ ".set .Lregnr, 0\n" \
+ ".irp rs,eax,ecx,edx,ebx,esp,ebp,esi,edi,r8d,r9d,r10d,r11d,r12d,r13d,r14d,r15d\n" \
+ ".ifc \\reg, %%\\rs\n" \
+- ".set found, found+1\n" \
+- ".long \\type + (regnr << 8)\n" \
++ ".set .Lfound, .Lfound+1\n" \
++ ".long \\type + (.Lregnr << 8)\n" \
+ ".endif\n" \
+- ".set regnr, regnr+1\n" \
++ ".set .Lregnr, .Lregnr+1\n" \
+ ".endr\n" \
+- ".if (found != 1)\n" \
++ ".if (.Lfound != 1)\n" \
+ ".error \"extable_type_reg: bad register argument\"\n" \
+ ".endif\n" \
+ ".endm\n"
+diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
+index bab883c0b6fee..66570e95af398 100644
+--- a/arch/x86/include/asm/bug.h
++++ b/arch/x86/include/asm/bug.h
+@@ -77,9 +77,9 @@ do { \
+ */
+ #define __WARN_FLAGS(flags) \
+ do { \
+- __auto_type f = BUGFLAG_WARNING|(flags); \
++ __auto_type __flags = BUGFLAG_WARNING|(flags); \
+ instrumentation_begin(); \
+- _BUG_FLAGS(ASM_UD2, f, ASM_REACHABLE); \
++ _BUG_FLAGS(ASM_UD2, __flags, ASM_REACHABLE); \
+ instrumentation_end(); \
+ } while (0)
+
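The rename from f to __flags avoids a macro-hygiene trap: if a caller's argument expression itself mentions a variable named f, the macro-local declaration shadows it and the initializer reads the new, uninitialized variable. A contrived user-space illustration (BROKEN_WARN() and report() are invented for the example):

	extern void report(int);

	#define BROKEN_WARN(flags) do {		\
		int f = (flags);	/* declares a new 'f' in scope */ \
		report(f);			\
	} while (0)

	void caller(int f)
	{
		BROKEN_WARN(f | 1);	/* expands to: int f = (f | 1);
					 * reads the uninitialized inner 'f' */
	}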
+diff --git a/arch/x86/include/asm/irq_stack.h b/arch/x86/include/asm/irq_stack.h
+index ae9d40f6c7066..05af249d6bec2 100644
+--- a/arch/x86/include/asm/irq_stack.h
++++ b/arch/x86/include/asm/irq_stack.h
+@@ -99,7 +99,8 @@
+ }
+
+ #define ASM_CALL_ARG0 \
+- "call %P[__func] \n"
++ "call %P[__func] \n" \
++ ASM_REACHABLE
+
+ #define ASM_CALL_ARG1 \
+ "movq %[arg1], %%rdi \n" \
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index ec9830d2aabf8..17b4e1808b8e8 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -509,6 +509,7 @@ struct kvm_pmu {
+ u64 global_ctrl_mask;
+ u64 global_ovf_ctrl_mask;
+ u64 reserved_bits;
++ u64 raw_event_mask;
+ u8 version;
+ struct kvm_pmc gp_counters[INTEL_PMC_MAX_GENERIC];
+ struct kvm_pmc fixed_counters[INTEL_PMC_MAX_FIXED];
+diff --git a/arch/x86/include/asm/msi.h b/arch/x86/include/asm/msi.h
+index b85147d75626e..d71c7e8b738d2 100644
+--- a/arch/x86/include/asm/msi.h
++++ b/arch/x86/include/asm/msi.h
+@@ -12,14 +12,17 @@ int pci_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec,
+ /* Structs and defines for the X86 specific MSI message format */
+
+ typedef struct x86_msi_data {
+- u32 vector : 8,
+- delivery_mode : 3,
+- dest_mode_logical : 1,
+- reserved : 2,
+- active_low : 1,
+- is_level : 1;
+-
+- u32 dmar_subhandle;
++ union {
++ struct {
++ u32 vector : 8,
++ delivery_mode : 3,
++ dest_mode_logical : 1,
++ reserved : 2,
++ active_low : 1,
++ is_level : 1;
++ };
++ u32 dmar_subhandle;
++ };
+ } __attribute__ ((packed)) arch_msi_msg_data_t;
+ #define arch_msi_msg_data x86_msi_data
+
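The struct shrinks to the 4 bytes the hardware message format defines because the DMAR subhandle now aliases the same word as the bitfield instead of following it. A toy, compilable illustration of the union overlay:

	#include <stdint.h>
	#include <stdio.h>

	struct msi_data_sketch {
		union {
			struct {
				uint32_t vector            : 8,
					 delivery_mode     : 3,
					 dest_mode_logical : 1,
					 reserved          : 2,
					 active_low        : 1,
					 is_level          : 1;
			};
			uint32_t dmar_subhandle;
		};
	};

	int main(void)
	{
		/* prints 4: both views share one 32-bit word */
		printf("%zu\n", sizeof(struct msi_data_sketch));
		return 0;
	}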
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 8fc1b5003713f..a2b6626c681f5 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -241,6 +241,11 @@ struct x86_pmu_capability {
+ #define INTEL_PMC_IDX_FIXED_SLOTS (INTEL_PMC_IDX_FIXED + 3)
+ #define INTEL_PMC_MSK_FIXED_SLOTS (1ULL << INTEL_PMC_IDX_FIXED_SLOTS)
+
++static inline bool use_fixed_pseudo_encoding(u64 code)
++{
++ return !(code & 0xff);
++}
++
+ /*
+ * We model BTS tracing as another fixed-mode PMC.
+ *
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 5818b837fd4d4..2d719e0d2e404 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -834,6 +834,59 @@ static void quirk_sandybridge_ifu(int bank, struct mce *m, struct pt_regs *regs)
+ m->cs = regs->cs;
+ }
+
++/*
++ * Disable fast string copy and return from the MCE handler upon the first SRAR
++ * MCE on bank 1 due to a CPU erratum on Intel Skylake/Cascade Lake/Cooper Lake
++ * CPUs.
++ * The fast string copy instructions ("REP; MOVS*") could consume an
++ * uncorrectable memory error in the cache line _right after_ the desired region
++ * to copy and raise an MCE with RIP pointing to the instruction _after_ the
++ * "REP; MOVS*".
++ * This mitigation addresses the issue completely with the caveat of performance
++ * degradation on the CPU affected. This is still better than the OS crashing on
++ * MCEs raised on an irrelevant process due to "REP; MOVS*" accesses from a
++ * kernel context (e.g., copy_page).
++ *
++ * Returns true when fast string copy on CPU has been disabled.
++ */
++static noinstr bool quirk_skylake_repmov(void)
++{
++ u64 mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS);
++ u64 misc_enable = mce_rdmsrl(MSR_IA32_MISC_ENABLE);
++ u64 mc1_status;
++
++ /*
++ * Apply the quirk only to local machine checks, i.e., no broadcast
++ * sync is needed.
++ */
++ if (!(mcgstatus & MCG_STATUS_LMCES) ||
++ !(misc_enable & MSR_IA32_MISC_ENABLE_FAST_STRING))
++ return false;
++
++ mc1_status = mce_rdmsrl(MSR_IA32_MCx_STATUS(1));
++
++ /* Check for a software-recoverable data fetch error. */
++ if ((mc1_status &
++ (MCI_STATUS_VAL | MCI_STATUS_OVER | MCI_STATUS_UC | MCI_STATUS_EN |
++ MCI_STATUS_ADDRV | MCI_STATUS_MISCV | MCI_STATUS_PCC |
++ MCI_STATUS_AR | MCI_STATUS_S)) ==
++ (MCI_STATUS_VAL | MCI_STATUS_UC | MCI_STATUS_EN |
++ MCI_STATUS_ADDRV | MCI_STATUS_MISCV |
++ MCI_STATUS_AR | MCI_STATUS_S)) {
++ misc_enable &= ~MSR_IA32_MISC_ENABLE_FAST_STRING;
++ mce_wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
++ mce_wrmsrl(MSR_IA32_MCx_STATUS(1), 0);
++
++ instrumentation_begin();
++ pr_err_once("Erratum detected, disable fast string copy instructions.\n");
++ instrumentation_end();
++
++ return true;
++ }
++
++ return false;
++}
++
+ /*
+ * Do a quick check if any of the events requires a panic.
+ * This decides if we keep the events around or clear them.
+@@ -1403,6 +1456,9 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ else if (unlikely(!mca_cfg.initialized))
+ return unexpected_machine_check(regs);
+
++ if (mce_flags.skx_repmov_quirk && quirk_skylake_repmov())
++ goto clear;
++
+ /*
+ * Establish sequential order between the CPUs entering the machine
+ * check handler.
+@@ -1545,6 +1601,7 @@ noinstr void do_machine_check(struct pt_regs *regs)
+ out:
+ instrumentation_end();
+
++clear:
+ mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
+ }
+ EXPORT_SYMBOL_GPL(do_machine_check);
+@@ -1858,6 +1915,13 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
+
+ if (c->x86 == 6 && c->x86_model == 45)
+ mce_flags.snb_ifu_quirk = 1;
++
++ /*
++ * Skylake, Cascade Lake and Cooper Lake require a quirk on
++ * rep movs.
++ */
++ if (c->x86 == 6 && c->x86_model == INTEL_FAM6_SKYLAKE_X)
++ mce_flags.skx_repmov_quirk = 1;
+ }
+
+ if (c->x86_vendor == X86_VENDOR_ZHAOXIN) {
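The dense bitmask comparison in quirk_skylake_repmov() is easier to read in isolation; a sketch of the same SRAR test (valid, uncorrected, enabled, address and misc valid, S and AR set, and neither overflow nor processor-context corruption):

	static bool is_srar_data_fetch(u64 status)
	{
		const u64 care = MCI_STATUS_VAL | MCI_STATUS_OVER |
				 MCI_STATUS_UC | MCI_STATUS_EN |
				 MCI_STATUS_ADDRV | MCI_STATUS_MISCV |
				 MCI_STATUS_PCC | MCI_STATUS_AR | MCI_STATUS_S;
		const u64 want = MCI_STATUS_VAL | MCI_STATUS_UC |
				 MCI_STATUS_EN | MCI_STATUS_ADDRV |
				 MCI_STATUS_MISCV | MCI_STATUS_AR | MCI_STATUS_S;

		return (status & care) == want;
	}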
+diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
+index 52c633950b38d..24d099e2d2a23 100644
+--- a/arch/x86/kernel/cpu/mce/internal.h
++++ b/arch/x86/kernel/cpu/mce/internal.h
+@@ -170,7 +170,10 @@ struct mce_vendor_flags {
+ /* SandyBridge IFU quirk */
+ snb_ifu_quirk : 1,
+
+- __reserved_0 : 57;
++ /* Skylake, Cascade Lake, Cooper Lake REP;MOVS* quirk */
++ skx_repmov_quirk : 1,
++
++ __reserved_0 : 56;
+ };
+
+ extern struct mce_vendor_flags mce_flags;
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 531fb4cbb63fd..aa72cefdd5be6 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -12,10 +12,9 @@ enum insn_type {
+ };
+
+ /*
+- * data16 data16 xorq %rax, %rax - a single 5 byte instruction that clears %rax
+- * The REX.W cancels the effect of any data16.
++ * cs cs cs xorl %eax, %eax - a single 5 byte instruction that clears %[er]ax
+ */
+-static const u8 xor5rax[] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };
++static const u8 xor5rax[] = { 0x2e, 0x2e, 0x2e, 0x31, 0xc0 };
+
+ static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc };
+
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 02d061a06aa19..de9d8a27387cf 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -3523,8 +3523,10 @@ static int em_rdpid(struct x86_emulate_ctxt *ctxt)
+ {
+ u64 tsc_aux = 0;
+
+- if (ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux))
++ if (!ctxt->ops->guest_has_rdpid(ctxt))
+ return emulate_ud(ctxt);
++
++ ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux);
+ ctxt->dst.val = tsc_aux;
+ return X86EMUL_CONTINUE;
+ }
+diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
+index 39eded2426ffd..a2a7654d8aced 100644
+--- a/arch/x86/kvm/kvm_emulate.h
++++ b/arch/x86/kvm/kvm_emulate.h
+@@ -226,6 +226,7 @@ struct x86_emulate_ops {
+ bool (*guest_has_long_mode)(struct x86_emulate_ctxt *ctxt);
+ bool (*guest_has_movbe)(struct x86_emulate_ctxt *ctxt);
+ bool (*guest_has_fxsr)(struct x86_emulate_ctxt *ctxt);
++ bool (*guest_has_rdpid)(struct x86_emulate_ctxt *ctxt);
+
+ void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);
+
+diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
+index b1a02993782b3..eca39f56c2315 100644
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -96,8 +96,7 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
+
+ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
+ u64 config, bool exclude_user,
+- bool exclude_kernel, bool intr,
+- bool in_tx, bool in_tx_cp)
++ bool exclude_kernel, bool intr)
+ {
+ struct perf_event *event;
+ struct perf_event_attr attr = {
+@@ -116,16 +115,14 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
+
+ attr.sample_period = get_sample_period(pmc, pmc->counter);
+
+- if (in_tx)
+- attr.config |= HSW_IN_TX;
+- if (in_tx_cp) {
++ if ((attr.config & HSW_IN_TX_CHECKPOINTED) &&
++ guest_cpuid_is_intel(pmc->vcpu)) {
+ /*
+ * HSW_IN_TX_CHECKPOINTED is not supported with nonzero
+ * period. Just clear the sample period so at least
+ * allocating the counter doesn't fail.
+ */
+ attr.sample_period = 0;
+- attr.config |= HSW_IN_TX_CHECKPOINTED;
+ }
+
+ event = perf_event_create_kernel_counter(&attr, -1, current,
+@@ -185,6 +182,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ u32 type = PERF_TYPE_RAW;
+ struct kvm *kvm = pmc->vcpu->kvm;
+ struct kvm_pmu_event_filter *filter;
++ struct kvm_pmu *pmu = vcpu_to_pmu(pmc->vcpu);
+ bool allow_event = true;
+
+ if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
+@@ -221,7 +219,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ }
+
+ if (type == PERF_TYPE_RAW)
+- config = eventsel & AMD64_RAW_EVENT_MASK;
++ config = eventsel & pmu->raw_event_mask;
+
+ if (pmc->current_config == eventsel && pmc_resume_counter(pmc))
+ return;
+@@ -232,9 +230,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
+ pmc_reprogram_counter(pmc, type, config,
+ !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
+ !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
+- eventsel & ARCH_PERFMON_EVENTSEL_INT,
+- (eventsel & HSW_IN_TX),
+- (eventsel & HSW_IN_TX_CHECKPOINTED));
++ eventsel & ARCH_PERFMON_EVENTSEL_INT);
+ }
+ EXPORT_SYMBOL_GPL(reprogram_gp_counter);
+
+@@ -270,7 +266,7 @@ void reprogram_fixed_counter(struct kvm_pmc *pmc, u8 ctrl, int idx)
+ kvm_x86_ops.pmu_ops->pmc_perf_hw_id(pmc),
+ !(en_field & 0x2), /* exclude user */
+ !(en_field & 0x1), /* exclude kernel */
+- pmi, false, false);
++ pmi);
+ }
+ EXPORT_SYMBOL_GPL(reprogram_fixed_counter);
+
+diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
+index 5aa45f13b16dc..ba40b7fced5ae 100644
+--- a/arch/x86/kvm/svm/pmu.c
++++ b/arch/x86/kvm/svm/pmu.c
+@@ -262,12 +262,10 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ /* MSR_EVNTSELn */
+ pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
+ if (pmc) {
+- if (data == pmc->eventsel)
+- return 0;
+- if (!(data & pmu->reserved_bits)) {
++ data &= ~pmu->reserved_bits;
++ if (data != pmc->eventsel)
+ reprogram_gp_counter(pmc, data);
+- return 0;
+- }
++ return 0;
+ }
+
+ return 1;
+@@ -284,6 +282,7 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
+
+ pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
+ pmu->reserved_bits = 0xfffffff000280000ull;
++ pmu->raw_event_mask = AMD64_RAW_EVENT_MASK;
+ pmu->version = 1;
+ /* not applicable to AMD; but clean them to prevent any fall out */
+ pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index fa98d6844728f..86bcfed6599ea 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -22,6 +22,8 @@
+ #include <asm/svm.h>
+ #include <asm/sev-common.h>
+
++#include "kvm_cache_regs.h"
++
+ #define __sme_page_pa(x) __sme_set(page_to_pfn(x) << PAGE_SHIFT)
+
+ #define IOPM_SIZE PAGE_SIZE * 3
+diff --git a/arch/x86/kvm/svm/svm_onhyperv.c b/arch/x86/kvm/svm/svm_onhyperv.c
+index 98aa981c04ec5..8cdc62c74a964 100644
+--- a/arch/x86/kvm/svm/svm_onhyperv.c
++++ b/arch/x86/kvm/svm/svm_onhyperv.c
+@@ -4,7 +4,6 @@
+ */
+
+ #include <linux/kvm_host.h>
+-#include "kvm_cache_regs.h"
+
+ #include <asm/mshyperv.h>
+
+diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
+index 466d18fc0c5da..5fa3870b89881 100644
+--- a/arch/x86/kvm/vmx/pmu_intel.c
++++ b/arch/x86/kvm/vmx/pmu_intel.c
+@@ -389,6 +389,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ struct kvm_pmc *pmc;
+ u32 msr = msr_info->index;
+ u64 data = msr_info->data;
++ u64 reserved_bits;
+
+ switch (msr) {
+ case MSR_CORE_PERF_FIXED_CTR_CTRL:
+@@ -443,7 +444,11 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ } else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
+ if (data == pmc->eventsel)
+ return 0;
+- if (!(data & pmu->reserved_bits)) {
++ reserved_bits = pmu->reserved_bits;
++ if ((pmc->idx == 2) &&
++ (pmu->raw_event_mask & HSW_IN_TX_CHECKPOINTED))
++ reserved_bits ^= HSW_IN_TX_CHECKPOINTED;
++ if (!(data & reserved_bits)) {
+ reprogram_gp_counter(pmc, data);
+ return 0;
+ }
+@@ -485,6 +490,7 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+ pmu->version = 0;
+ pmu->reserved_bits = 0xffffffff00200000ull;
++ pmu->raw_event_mask = X86_RAW_EVENT_MASK;
+
+ entry = kvm_find_cpuid_entry(vcpu, 0xa, 0);
+ if (!entry || !enable_pmu)
+@@ -533,8 +539,10 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
+ entry = kvm_find_cpuid_entry(vcpu, 7, 0);
+ if (entry &&
+ (boot_cpu_has(X86_FEATURE_HLE) || boot_cpu_has(X86_FEATURE_RTM)) &&
+- (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM)))
+- pmu->reserved_bits ^= HSW_IN_TX|HSW_IN_TX_CHECKPOINTED;
++ (entry->ebx & (X86_FEATURE_HLE|X86_FEATURE_RTM))) {
++ pmu->reserved_bits ^= HSW_IN_TX;
++ pmu->raw_event_mask |= (HSW_IN_TX|HSW_IN_TX_CHECKPOINTED);
++ }
+
+ bitmap_set(pmu->all_valid_pmc_idx,
+ 0, pmu->nr_arch_gp_counters);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 9b6166348c94f..c81ec70197fb5 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7675,6 +7675,11 @@ static bool emulator_guest_has_fxsr(struct x86_emulate_ctxt *ctxt)
+ return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_FXSR);
+ }
+
++static bool emulator_guest_has_rdpid(struct x86_emulate_ctxt *ctxt)
++{
++ return guest_cpuid_has(emul_to_vcpu(ctxt), X86_FEATURE_RDPID);
++}
++
+ static ulong emulator_read_gpr(struct x86_emulate_ctxt *ctxt, unsigned reg)
+ {
+ return kvm_register_read_raw(emul_to_vcpu(ctxt), reg);
+@@ -7757,6 +7762,7 @@ static const struct x86_emulate_ops emulate_ops = {
+ .guest_has_long_mode = emulator_guest_has_long_mode,
+ .guest_has_movbe = emulator_guest_has_movbe,
+ .guest_has_fxsr = emulator_guest_has_fxsr,
++ .guest_has_rdpid = emulator_guest_has_rdpid,
+ .set_nmi_mask = emulator_set_nmi_mask,
+ .get_hflags = emulator_get_hflags,
+ .exiting_smm = emulator_exiting_smm,
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index a6cf56a149393..b3cb49de0a643 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -854,13 +854,11 @@ done:
+ nr_invalidate);
+ }
+
+-static bool tlb_is_not_lazy(int cpu)
++static bool tlb_is_not_lazy(int cpu, void *data)
+ {
+ return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
+ }
+
+-static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
+-
+ DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
+ EXPORT_PER_CPU_SYMBOL(cpu_tlbstate_shared);
+
+@@ -889,36 +887,11 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
+ * up on the new contents of what used to be page tables, while
+ * doing a speculative memory access.
+ */
+- if (info->freed_tables) {
++ if (info->freed_tables)
+ on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
+- } else {
+- /*
+- * Although we could have used on_each_cpu_cond_mask(),
+- * open-coding it has performance advantages, as it eliminates
+- * the need for indirect calls or retpolines. In addition, it
+- * allows to use a designated cpumask for evaluating the
+- * condition, instead of allocating one.
+- *
+- * This code works under the assumption that there are no nested
+- * TLB flushes, an assumption that is already made in
+- * flush_tlb_mm_range().
+- *
+- * cond_cpumask is logically a stack-local variable, but it is
+- * more efficient to have it off the stack and not to allocate
+- * it on demand. Preemption is disabled and this code is
+- * non-reentrant.
+- */
+- struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
+- int cpu;
+-
+- cpumask_clear(cond_cpumask);
+-
+- for_each_cpu(cpu, cpumask) {
+- if (tlb_is_not_lazy(cpu))
+- __cpumask_set_cpu(cpu, cond_cpumask);
+- }
+- on_each_cpu_mask(cond_cpumask, flush_tlb_func, (void *)info, true);
+- }
++ else
++ on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
++ (void *)info, 1, cpumask);
+ }
+
+ void flush_tlb_multi(const struct cpumask *cpumask,
+diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
+index 9f2b251e83c56..3822666fb73d5 100644
+--- a/arch/x86/power/cpu.c
++++ b/arch/x86/power/cpu.c
+@@ -40,7 +40,8 @@ static void msr_save_context(struct saved_context *ctxt)
+ struct saved_msr *end = msr + ctxt->saved_msrs.num;
+
+ while (msr < end) {
+- msr->valid = !rdmsrl_safe(msr->info.msr_no, &msr->info.reg.q);
++ if (msr->valid)
++ rdmsrl(msr->info.msr_no, msr->info.reg.q);
+ msr++;
+ }
+ }
+@@ -424,8 +425,10 @@ static int msr_build_context(const u32 *msr_id, const int num)
+ }
+
+ for (i = saved_msrs->num, j = 0; i < total_num; i++, j++) {
++ u64 dummy;
++
+ msr_array[i].info.msr_no = msr_id[j];
+- msr_array[i].valid = false;
++ msr_array[i].valid = !rdmsrl_safe(msr_id[j], &dummy);
+ msr_array[i].info.reg.q = 0;
+ }
+ saved_msrs->num = total_num;
+@@ -500,10 +503,24 @@ static int pm_cpu_check(const struct x86_cpu_id *c)
+ return ret;
+ }
+
++static void pm_save_spec_msr(void)
++{
++ u32 spec_msr_id[] = {
++ MSR_IA32_SPEC_CTRL,
++ MSR_IA32_TSX_CTRL,
++ MSR_TSX_FORCE_ABORT,
++ MSR_IA32_MCU_OPT_CTRL,
++ MSR_AMD64_LS_CFG,
++ };
++
++ msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
++}
++
+ static int pm_check_save_msr(void)
+ {
+ dmi_check_system(msr_save_dmi_table);
+ pm_cpu_check(msr_save_cpu_table);
++ pm_save_spec_msr();
+
+ return 0;
+ }
+diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
+index 6ff3c887e0b99..b70afdff419ca 100644
+--- a/arch/x86/xen/smp_hvm.c
++++ b/arch/x86/xen/smp_hvm.c
+@@ -19,6 +19,12 @@ static void __init xen_hvm_smp_prepare_boot_cpu(void)
+ */
+ xen_vcpu_setup(0);
+
++ /*
++ * Called again in case the kernel boots on vcpu >= MAX_VIRT_CPUS.
++ * Refer to comments in xen_hvm_init_time_ops().
++ */
++ xen_hvm_init_time_ops();
++
+ /*
+ * The alternative logic (which patches the unlock/lock) runs before
+ * the smp bootup code is activated. Hence we need to set this up
+diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
+index d9c945ee11008..9ef0a5cca96ee 100644
+--- a/arch/x86/xen/time.c
++++ b/arch/x86/xen/time.c
+@@ -558,6 +558,11 @@ static void xen_hvm_setup_cpu_clockevents(void)
+
+ void __init xen_hvm_init_time_ops(void)
+ {
++ static bool hvm_time_initialized;
++
++ if (hvm_time_initialized)
++ return;
++
+ /*
+ * vector callback is needed otherwise we cannot receive interrupts
+ * on cpu > 0 and at this point we don't know how many cpus are
+@@ -567,7 +572,22 @@ void __init xen_hvm_init_time_ops(void)
+ return;
+
+ if (!xen_feature(XENFEAT_hvm_safe_pvclock)) {
+- pr_info("Xen doesn't support pvclock on HVM, disable pv timer");
++ pr_info_once("Xen doesn't support pvclock on HVM, disable pv timer");
++ return;
++ }
++
++ /*
++ * Only MAX_VIRT_CPUS 'vcpu_info' are embedded inside 'shared_info'.
++ * __this_cpu_read(xen_vcpu) is still NULL when a Xen HVM guest
++ * boots on vcpu >= MAX_VIRT_CPUS (e.g., kexec). Accessing
++ * __this_cpu_read(xen_vcpu) via xen_clocksource_read() would panic.
++ *
++ * xen_hvm_init_time_ops() should therefore be called again later,
++ * once __this_cpu_read(xen_vcpu) is available.
++ */
++ if (!__this_cpu_read(xen_vcpu)) {
++ pr_info("Delay xen_init_time_common() as kernel is running on vcpu=%d\n",
++ xen_vcpu_nr(0));
+ return;
+ }
+
+@@ -577,6 +597,8 @@ void __init xen_hvm_init_time_ops(void)
+ x86_cpuinit.setup_percpu_clockev = xen_hvm_setup_cpu_clockevents;
+
+ x86_platform.set_wallclock = xen_set_wallclock;
++
++ hvm_time_initialized = true;
+ }
+ #endif
+
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
+index 9bf8bad1dd18a..c33932568aa73 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-128m.dtsi
+@@ -8,19 +8,19 @@
+ reg = <0x00000000 0x08000000>;
+ bank-width = <2>;
+ device-width = <2>;
+- partition@0x0 {
++ partition@0 {
+ label = "data";
+ reg = <0x00000000 0x06000000>;
+ };
+- partition@0x6000000 {
++ partition@6000000 {
+ label = "boot loader area";
+ reg = <0x06000000 0x00800000>;
+ };
+- partition@0x6800000 {
++ partition@6800000 {
+ label = "kernel image";
+ reg = <0x06800000 0x017e0000>;
+ };
+- partition@0x7fe0000 {
++ partition@7fe0000 {
+ label = "boot environment";
+ reg = <0x07fe0000 0x00020000>;
+ };
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
+index 40c2f81f7cb66..7bde2ab2d6fb5 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-16m.dtsi
+@@ -8,19 +8,19 @@
+ reg = <0x08000000 0x01000000>;
+ bank-width = <2>;
+ device-width = <2>;
+- partition@0x0 {
++ partition@0 {
+ label = "boot loader area";
+ reg = <0x00000000 0x00400000>;
+ };
+- partition@0x400000 {
++ partition@400000 {
+ label = "kernel image";
+ reg = <0x00400000 0x00600000>;
+ };
+- partition@0xa00000 {
++ partition@a00000 {
+ label = "data";
+ reg = <0x00a00000 0x005e0000>;
+ };
+- partition@0xfe0000 {
++ partition@fe0000 {
+ label = "boot environment";
+ reg = <0x00fe0000 0x00020000>;
+ };
+diff --git a/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi b/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
+index fb8d3a9f33c23..0655b868749a4 100644
+--- a/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
++++ b/arch/xtensa/boot/dts/xtfpga-flash-4m.dtsi
+@@ -8,11 +8,11 @@
+ reg = <0x08000000 0x00400000>;
+ bank-width = <2>;
+ device-width = <2>;
+- partition@0x0 {
++ partition@0 {
+ label = "boot loader area";
+ reg = <0x00000000 0x003f0000>;
+ };
+- partition@0x3f0000 {
++ partition@3f0000 {
+ label = "boot environment";
+ reg = <0x003f0000 0x00010000>;
+ };
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index f8e9fa82cb9b1..05b3985a1984b 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -570,8 +570,7 @@ static int acpi_idle_play_dead(struct cpuidle_device *dev, int index)
+ {
+ struct acpi_processor_cx *cx = per_cpu(acpi_cstate[index], dev->cpu);
+
+- if (cx->type == ACPI_STATE_C3)
+- ACPI_FLUSH_CPU_CACHE();
++ ACPI_FLUSH_CPU_CACHE();
+
+ while (1) {
+
+diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c
+index bec33d781ae04..e3263e961045a 100644
+--- a/drivers/ata/sata_dwc_460ex.c
++++ b/drivers/ata/sata_dwc_460ex.c
+@@ -137,7 +137,11 @@ struct sata_dwc_device {
+ #endif
+ };
+
+-#define SATA_DWC_QCMD_MAX 32
++/*
++ * Allow one extra special slot for commands and DMA management
++ * to account for libata internal commands.
++ */
++#define SATA_DWC_QCMD_MAX (ATA_MAX_QUEUE + 1)
+
+ struct sata_dwc_device_port {
+ struct sata_dwc_device *hsdev;
+diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
+index f27d5b0f9a0bb..a98bfcf4a5f02 100644
+--- a/drivers/block/drbd/drbd_int.h
++++ b/drivers/block/drbd/drbd_int.h
+@@ -1642,22 +1642,22 @@ struct sib_info {
+ };
+ void drbd_bcast_event(struct drbd_device *device, const struct sib_info *sib);
+
+-extern void notify_resource_state(struct sk_buff *,
++extern int notify_resource_state(struct sk_buff *,
+ unsigned int,
+ struct drbd_resource *,
+ struct resource_info *,
+ enum drbd_notification_type);
+-extern void notify_device_state(struct sk_buff *,
++extern int notify_device_state(struct sk_buff *,
+ unsigned int,
+ struct drbd_device *,
+ struct device_info *,
+ enum drbd_notification_type);
+-extern void notify_connection_state(struct sk_buff *,
++extern int notify_connection_state(struct sk_buff *,
+ unsigned int,
+ struct drbd_connection *,
+ struct connection_info *,
+ enum drbd_notification_type);
+-extern void notify_peer_device_state(struct sk_buff *,
++extern int notify_peer_device_state(struct sk_buff *,
+ unsigned int,
+ struct drbd_peer_device *,
+ struct peer_device_info *,
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 6f450816c4fa6..5d5beeba3ed4f 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -2793,12 +2793,12 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+
+ if (init_submitter(device)) {
+ err = ERR_NOMEM;
+- goto out_idr_remove_vol;
++ goto out_idr_remove_from_resource;
+ }
+
+ err = add_disk(disk);
+ if (err)
+- goto out_idr_remove_vol;
++ goto out_idr_remove_from_resource;
+
+ /* inherit the connection state */
+ device->state.conn = first_connection(resource)->cstate;
+@@ -2812,8 +2812,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
+ drbd_debugfs_device_add(device);
+ return NO_ERROR;
+
+-out_idr_remove_vol:
+- idr_remove(&connection->peer_devices, vnr);
+ out_idr_remove_from_resource:
+ for_each_connection(connection, resource) {
+ peer_device = idr_remove(&connection->peer_devices, vnr);
+diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
+index 44ccf8b4f4b29..69184cf17b6ad 100644
+--- a/drivers/block/drbd/drbd_nl.c
++++ b/drivers/block/drbd/drbd_nl.c
+@@ -4617,7 +4617,7 @@ static int nla_put_notification_header(struct sk_buff *msg,
+ return drbd_notification_header_to_skb(msg, &nh, true);
+ }
+
+-void notify_resource_state(struct sk_buff *skb,
++int notify_resource_state(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_resource *resource,
+ struct resource_info *resource_info,
+@@ -4659,16 +4659,17 @@ void notify_resource_state(struct sk_buff *skb,
+ if (err && err != -ESRCH)
+ goto failed;
+ }
+- return;
++ return 0;
+
+ nla_put_failure:
+ nlmsg_free(skb);
+ failed:
+ drbd_err(resource, "Error %d while broadcasting event. Event seq:%u\n",
+ err, seq);
++ return err;
+ }
+
+-void notify_device_state(struct sk_buff *skb,
++int notify_device_state(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_device *device,
+ struct device_info *device_info,
+@@ -4708,16 +4709,17 @@ void notify_device_state(struct sk_buff *skb,
+ if (err && err != -ESRCH)
+ goto failed;
+ }
+- return;
++ return 0;
+
+ nla_put_failure:
+ nlmsg_free(skb);
+ failed:
+ drbd_err(device, "Error %d while broadcasting event. Event seq:%u\n",
+ err, seq);
++ return err;
+ }
+
+-void notify_connection_state(struct sk_buff *skb,
++int notify_connection_state(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_connection *connection,
+ struct connection_info *connection_info,
+@@ -4757,16 +4759,17 @@ void notify_connection_state(struct sk_buff *skb,
+ if (err && err != -ESRCH)
+ goto failed;
+ }
+- return;
++ return 0;
+
+ nla_put_failure:
+ nlmsg_free(skb);
+ failed:
+ drbd_err(connection, "Error %d while broadcasting event. Event seq:%u\n",
+ err, seq);
++ return err;
+ }
+
+-void notify_peer_device_state(struct sk_buff *skb,
++int notify_peer_device_state(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_peer_device *peer_device,
+ struct peer_device_info *peer_device_info,
+@@ -4807,13 +4810,14 @@ void notify_peer_device_state(struct sk_buff *skb,
+ if (err && err != -ESRCH)
+ goto failed;
+ }
+- return;
++ return 0;
+
+ nla_put_failure:
+ nlmsg_free(skb);
+ failed:
+ drbd_err(peer_device, "Error %d while broadcasting event. Event seq:%u\n",
+ err, seq);
++ return err;
+ }
+
+ void notify_helper(enum drbd_notification_type type,
+@@ -4864,7 +4868,7 @@ fail:
+ err, seq);
+ }
+
+-static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
++static int notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
+ {
+ struct drbd_genlmsghdr *dh;
+ int err;
+@@ -4878,11 +4882,12 @@ static void notify_initial_state_done(struct sk_buff *skb, unsigned int seq)
+ if (nla_put_notification_header(skb, NOTIFY_EXISTS))
+ goto nla_put_failure;
+ genlmsg_end(skb, dh);
+- return;
++ return 0;
+
+ nla_put_failure:
+ nlmsg_free(skb);
+ pr_err("Error %d sending event. Event seq:%u\n", err, seq);
++ return err;
+ }
+
+ static void free_state_changes(struct list_head *list)
+@@ -4909,6 +4914,7 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+ unsigned int seq = cb->args[2];
+ unsigned int n;
+ enum drbd_notification_type flags = 0;
++ int err = 0;
+
+ /* There is no need for taking notification_mutex here: it doesn't
+ matter if the initial state events mix with later state change
+@@ -4917,32 +4923,32 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+
+ cb->args[5]--;
+ if (cb->args[5] == 1) {
+- notify_initial_state_done(skb, seq);
++ err = notify_initial_state_done(skb, seq);
+ goto out;
+ }
+ n = cb->args[4]++;
+ if (cb->args[4] < cb->args[3])
+ flags |= NOTIFY_CONTINUES;
+ if (n < 1) {
+- notify_resource_state_change(skb, seq, state_change->resource,
++ err = notify_resource_state_change(skb, seq, state_change->resource,
+ NOTIFY_EXISTS | flags);
+ goto next;
+ }
+ n--;
+ if (n < state_change->n_connections) {
+- notify_connection_state_change(skb, seq, &state_change->connections[n],
++ err = notify_connection_state_change(skb, seq, &state_change->connections[n],
+ NOTIFY_EXISTS | flags);
+ goto next;
+ }
+ n -= state_change->n_connections;
+ if (n < state_change->n_devices) {
+- notify_device_state_change(skb, seq, &state_change->devices[n],
++ err = notify_device_state_change(skb, seq, &state_change->devices[n],
+ NOTIFY_EXISTS | flags);
+ goto next;
+ }
+ n -= state_change->n_devices;
+ if (n < state_change->n_devices * state_change->n_connections) {
+- notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
++ err = notify_peer_device_state_change(skb, seq, &state_change->peer_devices[n],
+ NOTIFY_EXISTS | flags);
+ goto next;
+ }
+@@ -4957,7 +4963,10 @@ next:
+ cb->args[4] = 0;
+ }
+ out:
+- return skb->len;
++ if (err)
++ return err;
++ else
++ return skb->len;
+ }
+
+ int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
+index b8a27818ab3f8..4ee11aef6672b 100644
+--- a/drivers/block/drbd/drbd_state.c
++++ b/drivers/block/drbd/drbd_state.c
+@@ -1537,7 +1537,7 @@ int drbd_bitmap_io_from_worker(struct drbd_device *device,
+ return rv;
+ }
+
+-void notify_resource_state_change(struct sk_buff *skb,
++int notify_resource_state_change(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_resource_state_change *resource_state_change,
+ enum drbd_notification_type type)
+@@ -1550,10 +1550,10 @@ void notify_resource_state_change(struct sk_buff *skb,
+ .res_susp_fen = resource_state_change->susp_fen[NEW],
+ };
+
+- notify_resource_state(skb, seq, resource, &resource_info, type);
++ return notify_resource_state(skb, seq, resource, &resource_info, type);
+ }
+
+-void notify_connection_state_change(struct sk_buff *skb,
++int notify_connection_state_change(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_connection_state_change *connection_state_change,
+ enum drbd_notification_type type)
+@@ -1564,10 +1564,10 @@ void notify_connection_state_change(struct sk_buff *skb,
+ .conn_role = connection_state_change->peer_role[NEW],
+ };
+
+- notify_connection_state(skb, seq, connection, &connection_info, type);
++ return notify_connection_state(skb, seq, connection, &connection_info, type);
+ }
+
+-void notify_device_state_change(struct sk_buff *skb,
++int notify_device_state_change(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_device_state_change *device_state_change,
+ enum drbd_notification_type type)
+@@ -1577,10 +1577,10 @@ void notify_device_state_change(struct sk_buff *skb,
+ .dev_disk_state = device_state_change->disk_state[NEW],
+ };
+
+- notify_device_state(skb, seq, device, &device_info, type);
++ return notify_device_state(skb, seq, device, &device_info, type);
+ }
+
+-void notify_peer_device_state_change(struct sk_buff *skb,
++int notify_peer_device_state_change(struct sk_buff *skb,
+ unsigned int seq,
+ struct drbd_peer_device_state_change *p,
+ enum drbd_notification_type type)
+@@ -1594,7 +1594,7 @@ void notify_peer_device_state_change(struct sk_buff *skb,
+ .peer_resync_susp_dependency = p->resync_susp_dependency[NEW],
+ };
+
+- notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
++ return notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
+ }
+
+ static void broadcast_state_change(struct drbd_state_change *state_change)
+@@ -1602,7 +1602,7 @@ static void broadcast_state_change(struct drbd_state_change *state_change)
+ struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
+ bool resource_state_has_changed;
+ unsigned int n_device, n_connection, n_peer_device, n_peer_devices;
+- void (*last_func)(struct sk_buff *, unsigned int, void *,
++ int (*last_func)(struct sk_buff *, unsigned int, void *,
+ enum drbd_notification_type) = NULL;
+ void *last_arg = NULL;
+
+diff --git a/drivers/block/drbd/drbd_state_change.h b/drivers/block/drbd/drbd_state_change.h
+index ba80f612d6abb..d5b0479bc9a66 100644
+--- a/drivers/block/drbd/drbd_state_change.h
++++ b/drivers/block/drbd/drbd_state_change.h
+@@ -44,19 +44,19 @@ extern struct drbd_state_change *remember_old_state(struct drbd_resource *, gfp_
+ extern void copy_old_to_new_state_change(struct drbd_state_change *);
+ extern void forget_state_change(struct drbd_state_change *);
+
+-extern void notify_resource_state_change(struct sk_buff *,
++extern int notify_resource_state_change(struct sk_buff *,
+ unsigned int,
+ struct drbd_resource_state_change *,
+ enum drbd_notification_type type);
+-extern void notify_connection_state_change(struct sk_buff *,
++extern int notify_connection_state_change(struct sk_buff *,
+ unsigned int,
+ struct drbd_connection_state_change *,
+ enum drbd_notification_type type);
+-extern void notify_device_state_change(struct sk_buff *,
++extern int notify_device_state_change(struct sk_buff *,
+ unsigned int,
+ struct drbd_device_state_change *,
+ enum drbd_notification_type type);
+-extern void notify_peer_device_state_change(struct sk_buff *,
++extern int notify_peer_device_state_change(struct sk_buff *,
+ unsigned int,
+ struct drbd_peer_device_state_change *,
+ enum drbd_notification_type type);
+diff --git a/drivers/bluetooth/btmtk.h b/drivers/bluetooth/btmtk.h
+index 6e7b0c7567c0f..0defa68bc2cef 100644
+--- a/drivers/bluetooth/btmtk.h
++++ b/drivers/bluetooth/btmtk.h
+@@ -5,6 +5,7 @@
+ #define FIRMWARE_MT7668 "mediatek/mt7668pr2h.bin"
+ #define FIRMWARE_MT7961 "mediatek/BT_RAM_CODE_MT7961_1_2_hdr.bin"
+
++#define HCI_EV_WMT 0xe4
+ #define HCI_WMT_MAX_EVENT_SIZE 64
+
+ #define BTMTK_WMT_REG_READ 0x2
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index 9b868f187316d..ecf29cfa7d792 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -370,13 +370,6 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ struct hci_event_hdr *hdr = (void *)skb->data;
+ int err;
+
+- /* Fix up the vendor event id with 0xff for vendor specific instead
+- * of 0xe4 so that event send via monitoring socket can be parsed
+- * properly.
+- */
+- if (hdr->evt == 0xe4)
+- hdr->evt = HCI_EV_VENDOR;
+-
+ /* When someone waits for the WMT event, the skb is cloned
+ * and the events are processed from there.
+ */
+@@ -392,7 +385,7 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ if (err < 0)
+ goto err_free_skb;
+
+- if (hdr->evt == HCI_EV_VENDOR) {
++ if (hdr->evt == HCI_EV_WMT) {
+ if (test_and_clear_bit(BTMTKSDIO_TX_WAIT_VND_EVT,
+ &bdev->tx_state)) {
+ /* Barrier to sync with other CPUs */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2afbd87d77c9b..42234d5f602dd 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2254,7 +2254,6 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ {
+ struct hci_dev *hdev = urb->context;
+ struct btusb_data *data = hci_get_drvdata(hdev);
+- struct hci_event_hdr *hdr;
+ struct sk_buff *skb;
+ int err;
+
+@@ -2274,13 +2273,6 @@ static void btusb_mtk_wmt_recv(struct urb *urb)
+ hci_skb_pkt_type(skb) = HCI_EVENT_PKT;
+ skb_put_data(skb, urb->transfer_buffer, urb->actual_length);
+
+- hdr = (void *)skb->data;
+- /* Fix up the vendor event id with 0xff for vendor specific
+- * instead of 0xe4 so that event send via monitoring socket can
+- * be parsed properly.
+- */
+- hdr->evt = 0xff;
+-
+ /* When someone waits for the WMT event, the skb is cloned
+ * and the events are processed from there.
+ */
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index e3c430539a176..9fa3c76a267f5 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -2245,7 +2245,7 @@ static struct virtio_driver virtio_rproc_serial = {
+ .remove = virtcons_remove,
+ };
+
+-static int __init init(void)
++static int __init virtio_console_init(void)
+ {
+ int err;
+
+@@ -2280,7 +2280,7 @@ free:
+ return err;
+ }
+
+-static void __exit fini(void)
++static void __exit virtio_console_fini(void)
+ {
+ reclaim_dma_bufs();
+
+@@ -2290,8 +2290,8 @@ static void __exit fini(void)
+ class_destroy(pdrvdata.class);
+ debugfs_remove_recursive(pdrvdata.debugfs_dir);
+ }
+-module_init(init);
+-module_exit(fini);
++module_init(virtio_console_init);
++module_exit(virtio_console_fini);
+
+ MODULE_DESCRIPTION("Virtio console driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index f7b41366666e5..4de098b6b0d4e 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -798,6 +798,15 @@ static unsigned long si5341_output_clk_recalc_rate(struct clk_hw *hw,
+ u32 r_divider;
+ u8 r[3];
+
++ err = regmap_read(output->data->regmap,
++ SI5341_OUT_CONFIG(output), &val);
++ if (err < 0)
++ return err;
++
++ /* If SI5341_OUT_CFG_RDIV_FORCE2 is set, r_divider is 2 */
++ if (val & SI5341_OUT_CFG_RDIV_FORCE2)
++ return parent_rate / 2;
++
+ err = regmap_bulk_read(output->data->regmap,
+ SI5341_OUT_R_REG(output), r, 3);
+ if (err < 0)
+@@ -814,13 +823,6 @@ static unsigned long si5341_output_clk_recalc_rate(struct clk_hw *hw,
+ r_divider += 1;
+ r_divider <<= 1;
+
+- err = regmap_read(output->data->regmap,
+- SI5341_OUT_CONFIG(output), &val);
+- if (err < 0)
+- return err;
+-
+- if (val & SI5341_OUT_CFG_RDIV_FORCE2)
+- r_divider = 2;
+
+ return parent_rate / r_divider;
+ }
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 01b64b962e76f..2fdfce116087a 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -632,6 +632,24 @@ static void clk_core_get_boundaries(struct clk_core *core,
+ *max_rate = min(*max_rate, clk_user->max_rate);
+ }
+
++static bool clk_core_check_boundaries(struct clk_core *core,
++ unsigned long min_rate,
++ unsigned long max_rate)
++{
++ struct clk *user;
++
++ lockdep_assert_held(&prepare_lock);
++
++ if (min_rate > core->max_rate || max_rate < core->min_rate)
++ return false;
++
++ hlist_for_each_entry(user, &core->clks, clks_node)
++ if (min_rate > user->max_rate || max_rate < user->min_rate)
++ return false;
++
++ return true;
++}
++
+ void clk_hw_set_rate_range(struct clk_hw *hw, unsigned long min_rate,
+ unsigned long max_rate)
+ {
+@@ -2348,6 +2366,11 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+ clk->min_rate = min;
+ clk->max_rate = max;
+
++ if (!clk_core_check_boundaries(clk->core, min, max)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ rate = clk_core_get_rate_nolock(clk->core);
+ if (rate < min || rate > max) {
+ /*
+@@ -2376,6 +2399,7 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
+ }
+ }
+
++out:
+ if (clk->exclusive_count)
+ clk_core_rate_protect(clk->core);
+
+diff --git a/drivers/clk/mediatek/clk-mt8192.c b/drivers/clk/mediatek/clk-mt8192.c
+index cbc7c6dbe0f44..79ddb3cc0b98a 100644
+--- a/drivers/clk/mediatek/clk-mt8192.c
++++ b/drivers/clk/mediatek/clk-mt8192.c
+@@ -1236,9 +1236,17 @@ static int clk_mt8192_infra_probe(struct platform_device *pdev)
+
+ r = mtk_clk_register_gates(node, infra_clks, ARRAY_SIZE(infra_clks), clk_data);
+ if (r)
+- return r;
++ goto free_clk_data;
++
++ r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++ if (r)
++ goto free_clk_data;
++
++ return r;
+
+- return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++free_clk_data:
++ mtk_free_clk_data(clk_data);
++ return r;
+ }
+
+ static int clk_mt8192_peri_probe(struct platform_device *pdev)
+@@ -1253,9 +1261,17 @@ static int clk_mt8192_peri_probe(struct platform_device *pdev)
+
+ r = mtk_clk_register_gates(node, peri_clks, ARRAY_SIZE(peri_clks), clk_data);
+ if (r)
+- return r;
++ goto free_clk_data;
++
++ r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++ if (r)
++ goto free_clk_data;
+
+- return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++ return r;
++
++free_clk_data:
++ mtk_free_clk_data(clk_data);
++ return r;
+ }
+
+ static int clk_mt8192_apmixed_probe(struct platform_device *pdev)
+@@ -1271,9 +1287,17 @@ static int clk_mt8192_apmixed_probe(struct platform_device *pdev)
+ mtk_clk_register_plls(node, plls, ARRAY_SIZE(plls), clk_data);
+ r = mtk_clk_register_gates(node, apmixed_clks, ARRAY_SIZE(apmixed_clks), clk_data);
+ if (r)
+- return r;
++ goto free_clk_data;
+
+- return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++ r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
++ if (r)
++ goto free_clk_data;
++
++ return r;
++
++free_clk_data:
++ mtk_free_clk_data(clk_data);
++ return r;
+ }
+
+ static const struct of_device_id of_match_clk_mt8192[] = {
+diff --git a/drivers/clk/rockchip/clk-rk3568.c b/drivers/clk/rockchip/clk-rk3568.c
+index 69a9e8069a486..604a367bc498a 100644
+--- a/drivers/clk/rockchip/clk-rk3568.c
++++ b/drivers/clk/rockchip/clk-rk3568.c
+@@ -1038,13 +1038,13 @@ static struct rockchip_clk_branch rk3568_clk_branches[] __initdata = {
+ RK3568_CLKGATE_CON(20), 8, GFLAGS),
+ GATE(HCLK_VOP, "hclk_vop", "hclk_vo", 0,
+ RK3568_CLKGATE_CON(20), 9, GFLAGS),
+- COMPOSITE(DCLK_VOP0, "dclk_vop0", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_PARENT | CLK_SET_RATE_NO_REPARENT,
++ COMPOSITE(DCLK_VOP0, "dclk_vop0", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_NO_REPARENT,
+ RK3568_CLKSEL_CON(39), 10, 2, MFLAGS, 0, 8, DFLAGS,
+ RK3568_CLKGATE_CON(20), 10, GFLAGS),
+- COMPOSITE(DCLK_VOP1, "dclk_vop1", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_PARENT | CLK_SET_RATE_NO_REPARENT,
++ COMPOSITE(DCLK_VOP1, "dclk_vop1", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_NO_REPARENT,
+ RK3568_CLKSEL_CON(40), 10, 2, MFLAGS, 0, 8, DFLAGS,
+ RK3568_CLKGATE_CON(20), 11, GFLAGS),
+- COMPOSITE(DCLK_VOP2, "dclk_vop2", hpll_vpll_gpll_cpll_p, 0,
++ COMPOSITE(DCLK_VOP2, "dclk_vop2", hpll_vpll_gpll_cpll_p, CLK_SET_RATE_NO_REPARENT,
+ RK3568_CLKSEL_CON(41), 10, 2, MFLAGS, 0, 8, DFLAGS,
+ RK3568_CLKGATE_CON(20), 12, GFLAGS),
+ GATE(CLK_VOP_PWM, "clk_vop_pwm", "xin24m", 0,
+diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
+index 3da33c786d77c..29eafab4353ef 100644
+--- a/drivers/clk/ti/clk.c
++++ b/drivers/clk/ti/clk.c
+@@ -131,7 +131,7 @@ int ti_clk_setup_ll_ops(struct ti_clk_ll_ops *ops)
+ void __init ti_dt_clocks_register(struct ti_dt_clk oclks[])
+ {
+ struct ti_dt_clk *c;
+- struct device_node *node, *parent;
++ struct device_node *node, *parent, *child;
+ struct clk *clk;
+ struct of_phandle_args clkspec;
+ char buf[64];
+@@ -171,10 +171,13 @@ void __init ti_dt_clocks_register(struct ti_dt_clk oclks[])
+ node = of_find_node_by_name(NULL, buf);
+ if (num_args && compat_mode) {
+ parent = node;
+- node = of_get_child_by_name(parent, "clock");
+- if (!node)
+- node = of_get_child_by_name(parent, "clk");
+- of_node_put(parent);
++ child = of_get_child_by_name(parent, "clock");
++ if (!child)
++ child = of_get_child_by_name(parent, "clk");
++ if (child) {
++ of_node_put(parent);
++ node = child;
++ }
+ }
+
+ clkspec.np = node;
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index db17196266e4b..82d370ae6a4a5 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -303,52 +303,48 @@ static u64 cppc_get_dmi_max_khz(void)
+
+ /*
+ * If CPPC lowest_freq and nominal_freq registers are exposed then we can
+- * use them to convert perf to freq and vice versa
+- *
+- * If the perf/freq point lies between Nominal and Lowest, we can treat
+- * (Low perf, Low freq) and (Nom Perf, Nom freq) as 2D co-ordinates of a line
+- * and extrapolate the rest
+- * For perf/freq > Nominal, we use the ratio perf:freq at Nominal for conversion
++ * use them to convert perf to freq and vice versa. The conversion is
++ * extrapolated as an affine function passing through the 2 points:
++ * - (Low perf, Low freq)
++ * - (Nominal perf, Nominal freq)
+ */
+ static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu_data,
+ unsigned int perf)
+ {
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
++ s64 retval, offset = 0;
+ static u64 max_khz;
+ u64 mul, div;
+
+ if (caps->lowest_freq && caps->nominal_freq) {
+- if (perf >= caps->nominal_perf) {
+- mul = caps->nominal_freq;
+- div = caps->nominal_perf;
+- } else {
+- mul = caps->nominal_freq - caps->lowest_freq;
+- div = caps->nominal_perf - caps->lowest_perf;
+- }
++ mul = caps->nominal_freq - caps->lowest_freq;
++ div = caps->nominal_perf - caps->lowest_perf;
++ offset = caps->nominal_freq - div64_u64(caps->nominal_perf * mul, div);
+ } else {
+ if (!max_khz)
+ max_khz = cppc_get_dmi_max_khz();
+ mul = max_khz;
+ div = caps->highest_perf;
+ }
+- return (u64)perf * mul / div;
++
++ retval = offset + div64_u64(perf * mul, div);
++ if (retval >= 0)
++ return retval;
++ return 0;
+ }
+
+ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu_data,
+ unsigned int freq)
+ {
+ struct cppc_perf_caps *caps = &cpu_data->perf_caps;
++ s64 retval, offset = 0;
+ static u64 max_khz;
+ u64 mul, div;
+
+ if (caps->lowest_freq && caps->nominal_freq) {
+- if (freq >= caps->nominal_freq) {
+- mul = caps->nominal_perf;
+- div = caps->nominal_freq;
+- } else {
+- mul = caps->lowest_perf;
+- div = caps->lowest_freq;
+- }
++ mul = caps->nominal_perf - caps->lowest_perf;
++ div = caps->nominal_freq - caps->lowest_freq;
++ offset = caps->nominal_perf - div64_u64(caps->nominal_freq * mul, div);
+ } else {
+ if (!max_khz)
+ max_khz = cppc_get_dmi_max_khz();
+@@ -356,7 +352,10 @@ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu_data,
+ div = max_khz;
+ }
+
+- return (u64)freq * mul / div;
++ retval = offset + div64_u64(freq * mul, div);
++ if (retval >= 0)
++ return retval;
++ return 0;
+ }
+
+ static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
+diff --git a/drivers/dma/sh/shdma-base.c b/drivers/dma/sh/shdma-base.c
+index b26ed690f03c8..158e5e7defaeb 100644
+--- a/drivers/dma/sh/shdma-base.c
++++ b/drivers/dma/sh/shdma-base.c
+@@ -115,10 +115,8 @@ static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
+ ret = pm_runtime_get(schan->dev);
+
+ spin_unlock_irq(&schan->chan_lock);
+- if (ret < 0) {
++ if (ret < 0)
+ dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
+- pm_runtime_put(schan->dev);
+- }
+
+ pm_runtime_barrier(schan->dev);
+
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 6630d92e30ada..344e376b2ee99 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1404,6 +1404,16 @@ static int gpiochip_to_irq(struct gpio_chip *gc, unsigned int offset)
+ {
+ struct irq_domain *domain = gc->irq.domain;
+
++#ifdef CONFIG_GPIOLIB_IRQCHIP
++ /*
++ * Avoid a race with other code that tries to look up an IRQ
++ * before the irqchip has been properly registered,
++ * i.e. while gpiochip is still being brought up.
++ */
++ if (!gc->irq.initialized)
++ return -EPROBE_DEFER;
++#endif
++
+ if (!gpiochip_irqchip_irq_valid(gc, offset))
+ return -ENXIO;
+
+@@ -1593,6 +1603,15 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
+
+ acpi_gpiochip_request_interrupts(gc);
+
++ /*
++ * Use barrier() here to prevent the compiler from reordering
++ * the store to gc->irq.initialized before the initialization
++ * of the GPIO chip irq members above.
++ */
++ barrier();
++
++ gc->irq.initialized = true;
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index f9bab963a948a..5df387c4d7fbb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -1813,12 +1813,6 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(
+ true);
+ ret = unreserve_bo_and_vms(&ctx, false, false);
+
+- /* Only apply no TLB flush on Aldebaran to
+- * workaround regressions on other Asics.
+- */
+- if (table_freed && (adev->asic_type != CHIP_ALDEBARAN))
+- *table_freed = true;
+-
+ goto out;
+
+ out_unreserve:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 06d07502a1f68..a34be65c9eaac 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1509,6 +1509,7 @@ int amdgpu_cs_fence_to_handle_ioctl(struct drm_device *dev, void *data,
+ return 0;
+
+ default:
++ dma_fence_put(fence);
+ return -EINVAL;
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index f18c698137a6b..b87dca6d09fa6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2723,11 +2723,11 @@ static int amdgpu_device_ip_fini_early(struct amdgpu_device *adev)
+ }
+ }
+
+- amdgpu_amdkfd_suspend(adev, false);
+-
+ amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+ amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+
++ amdgpu_amdkfd_suspend(adev, false);
++
+ /* Workaround for ASICs that need to disable SMC first */
+ amdgpu_device_smu_fini_early(adev);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 1916ec84dd71f..e7845df6cad22 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -266,7 +266,7 @@ static int amdgpu_gfx_kiq_acquire(struct amdgpu_device *adev,
+ * adev->gfx.mec.num_pipe_per_mec
+ * adev->gfx.mec.num_queue_per_pipe;
+
+- while (queue_bit-- >= 0) {
++ while (--queue_bit >= 0) {
+ if (test_bit(queue_bit, adev->gfx.mec.queue_bitmap))
+ continue;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 5661b82d84d46..dda53fe30975d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1303,7 +1303,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
+ !(abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE))
+ return;
+
+- dma_resv_lock(bo->base.resv, NULL);
++ if (WARN_ON_ONCE(!dma_resv_trylock(bo->base.resv)))
++ return;
+
+ r = amdgpu_fill_buffer(abo, AMDGPU_POISON, bo->base.resv, &fence);
+ if (!WARN_ON(r)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index da11ceba06981..0ce2a7aa400b1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -569,8 +569,8 @@ static void vcn_v3_0_mc_resume_dpg_mode(struct amdgpu_device *adev, int inst_idx
+ AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_fw_shared)), 0, indirect);
+
+ /* VCN global tiling registers */
+- WREG32_SOC15_DPG_MODE(0, SOC15_DPG_MODE_OFFSET(
+- UVD, 0, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
++ WREG32_SOC15_DPG_MODE(inst_idx, SOC15_DPG_MODE_OFFSET(
++ UVD, inst_idx, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
+ }
+
+ static void vcn_v3_0_disable_static_power_gating(struct amdgpu_device *adev, int inst)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 4bfc0c8ab764b..70122978bdd0f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -1416,6 +1416,12 @@ err_unlock:
+ return ret;
+ }
+
++static bool kfd_flush_tlb_after_unmap(struct kfd_dev *dev) {
++ return KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 2) ||
++ (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) &&
++ dev->adev->sdma.instance[0].fw_version >= 18);
++}
++
+ static int kfd_ioctl_map_memory_to_gpu(struct file *filep,
+ struct kfd_process *p, void *data)
+ {
+@@ -1503,7 +1509,7 @@ static int kfd_ioctl_map_memory_to_gpu(struct file *filep,
+ }
+
+ /* Flush TLBs after waiting for the page table updates to complete */
+- if (table_freed) {
++ if (table_freed || !kfd_flush_tlb_after_unmap(dev)) {
+ for (i = 0; i < args->n_devices; i++) {
+ peer = kfd_device_by_id(devices_arr[i]);
+ if (WARN_ON_ONCE(!peer))
+@@ -1603,7 +1609,7 @@ static int kfd_ioctl_unmap_memory_from_gpu(struct file *filep,
+ }
+ mutex_unlock(&p->mutex);
+
+- if (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 2)) {
++ if (kfd_flush_tlb_after_unmap(dev)) {
+ err = amdgpu_amdkfd_gpuvm_sync_memory(dev->adev,
+ (struct kgd_mem *) mem, true);
+ if (err) {
+@@ -1840,13 +1846,9 @@ static int kfd_ioctl_svm(struct file *filep, struct kfd_process *p, void *data)
+ if (!args->start_addr || !args->size)
+ return -EINVAL;
+
+- mutex_lock(&p->mutex);
+-
+ r = svm_ioctl(p, args->op, args->start_addr, args->size, args->nattr,
+ args->attrs);
+
+- mutex_unlock(&p->mutex);
+-
+ return r;
+ }
+ #else
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 9624bbe8b5013..281def1c6c08e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1567,7 +1567,7 @@ int kfd_create_crat_image_acpi(void **crat_image, size_t *size)
+ /* Fetch the CRAT table from ACPI */
+ status = acpi_get_table(CRAT_SIGNATURE, 0, &crat_table);
+ if (status == AE_NOT_FOUND) {
+- pr_warn("CRAT table not found\n");
++ pr_info("CRAT table not found\n");
+ return -ENODATA;
+ } else if (ACPI_FAILURE(status)) {
+ const char *err = acpi_format_exception(status);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index d1145da5348f4..74f162887d3b1 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1150,7 +1150,6 @@ static void kfd_process_notifier_release(struct mmu_notifier *mn,
+
+ cancel_delayed_work_sync(&p->eviction_work);
+ cancel_delayed_work_sync(&p->restore_work);
+- cancel_delayed_work_sync(&p->svms.restore_work);
+
+ mutex_lock(&p->mutex);
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+index deae12dc777d2..40d0d8cb3fe83 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+@@ -268,15 +268,6 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
+ return ret;
+ }
+
+- ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
+- O_RDWR);
+- if (ret < 0) {
+- kfifo_free(&client->fifo);
+- kfree(client);
+- return ret;
+- }
+- *fd = ret;
+-
+ init_waitqueue_head(&client->wait_queue);
+ spin_lock_init(&client->lock);
+ client->events = 0;
+@@ -286,5 +277,20 @@ int kfd_smi_event_open(struct kfd_dev *dev, uint32_t *fd)
+ list_add_rcu(&client->list, &dev->smi_clients);
+ spin_unlock(&dev->smi_lock);
+
++ ret = anon_inode_getfd(kfd_smi_name, &kfd_smi_ev_fops, (void *)client,
++ O_RDWR);
++ if (ret < 0) {
++ spin_lock(&dev->smi_lock);
++ list_del_rcu(&client->list);
++ spin_unlock(&dev->smi_lock);
++
++ synchronize_rcu();
++
++ kfifo_free(&client->fifo);
++ kfree(client);
++ return ret;
++ }
++ *fd = ret;
++
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index f2805ba74c80b..ffec25e642e25 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1643,13 +1643,14 @@ static void svm_range_restore_work(struct work_struct *work)
+
+ pr_debug("restore svm ranges\n");
+
+- /* kfd_process_notifier_release destroys this worker thread. So during
+- * the lifetime of this thread, kfd_process and mm will be valid.
+- */
+ p = container_of(svms, struct kfd_process, svms);
+- mm = p->mm;
+- if (!mm)
++
++ /* Keep an mm reference while svm_range_validate_and_map() maps ranges */
++ mm = get_task_mm(p->lead_thread);
++ if (!mm) {
++ pr_debug("svms 0x%p process mm gone\n", svms);
+ return;
++ }
+
+ svm_range_list_lock_and_flush_work(svms, mm);
+ mutex_lock(&svms->lock);
+@@ -1703,6 +1704,7 @@ static void svm_range_restore_work(struct work_struct *work)
+ out_reschedule:
+ mutex_unlock(&svms->lock);
+ mmap_write_unlock(mm);
++ mmput(mm);
+
+ /* If validation failed, reschedule another attempt */
+ if (evicted_ranges) {
+@@ -1985,10 +1987,9 @@ svm_range_update_notifier_and_interval_tree(struct mm_struct *mm,
+ }
+
+ static void
+-svm_range_handle_list_op(struct svm_range_list *svms, struct svm_range *prange)
++svm_range_handle_list_op(struct svm_range_list *svms, struct svm_range *prange,
++ struct mm_struct *mm)
+ {
+- struct mm_struct *mm = prange->work_item.mm;
+-
+ switch (prange->work_item.op) {
+ case SVM_OP_NULL:
+ pr_debug("NULL OP 0x%p prange 0x%p [0x%lx 0x%lx]\n",
+@@ -2065,40 +2066,44 @@ static void svm_range_deferred_list_work(struct work_struct *work)
+ struct svm_range_list *svms;
+ struct svm_range *prange;
+ struct mm_struct *mm;
+- struct kfd_process *p;
+
+ svms = container_of(work, struct svm_range_list, deferred_list_work);
+ pr_debug("enter svms 0x%p\n", svms);
+
+- p = container_of(svms, struct kfd_process, svms);
+- /* Avoid mm is gone when inserting mmu notifier */
+- mm = get_task_mm(p->lead_thread);
+- if (!mm) {
+- pr_debug("svms 0x%p process mm gone\n", svms);
+- return;
+- }
+-retry:
+- mmap_write_lock(mm);
+-
+- /* Checking for the need to drain retry faults must be inside
+- * mmap write lock to serialize with munmap notifiers.
+- */
+- if (unlikely(atomic_read(&svms->drain_pagefaults))) {
+- mmap_write_unlock(mm);
+- svm_range_drain_retry_fault(svms);
+- goto retry;
+- }
+-
+ spin_lock(&svms->deferred_list_lock);
+ while (!list_empty(&svms->deferred_range_list)) {
+ prange = list_first_entry(&svms->deferred_range_list,
+ struct svm_range, deferred_list);
+- list_del_init(&prange->deferred_list);
+ spin_unlock(&svms->deferred_list_lock);
+
+ pr_debug("prange 0x%p [0x%lx 0x%lx] op %d\n", prange,
+ prange->start, prange->last, prange->work_item.op);
+
++ mm = prange->work_item.mm;
++retry:
++ mmap_write_lock(mm);
++
++ /* Checking for the need to drain retry faults must be inside
++ * mmap write lock to serialize with munmap notifiers.
++ */
++ if (unlikely(atomic_read(&svms->drain_pagefaults))) {
++ mmap_write_unlock(mm);
++ svm_range_drain_retry_fault(svms);
++ goto retry;
++ }
++
++ /* Removal from deferred_list must happen inside the mmap write
++ * lock, to avoid two races:
++ * 1. unmap_from_cpu may change work_item.op and add the range
++ * to deferred_list again, causing a use-after-free bug.
++ * 2. svm_range_list_lock_and_flush_work may hold the mmap write
++ * lock and continue because deferred_list is empty, while the
++ * deferred_list work is actually waiting for the mmap lock.
++ */
++ spin_lock(&svms->deferred_list_lock);
++ list_del_init(&prange->deferred_list);
++ spin_unlock(&svms->deferred_list_lock);
++
+ mutex_lock(&svms->lock);
+ mutex_lock(&prange->migrate_mutex);
+ while (!list_empty(&prange->child_list)) {
+@@ -2109,19 +2114,20 @@ retry:
+ pr_debug("child prange 0x%p op %d\n", pchild,
+ pchild->work_item.op);
+ list_del_init(&pchild->child_list);
+- svm_range_handle_list_op(svms, pchild);
++ svm_range_handle_list_op(svms, pchild, mm);
+ }
+ mutex_unlock(&prange->migrate_mutex);
+
+- svm_range_handle_list_op(svms, prange);
++ svm_range_handle_list_op(svms, prange, mm);
+ mutex_unlock(&svms->lock);
++ mmap_write_unlock(mm);
++
++ /* Pairs with mmget in svm_range_add_list_work */
++ mmput(mm);
+
+ spin_lock(&svms->deferred_list_lock);
+ }
+ spin_unlock(&svms->deferred_list_lock);
+-
+- mmap_write_unlock(mm);
+- mmput(mm);
+ pr_debug("exit svms 0x%p\n", svms);
+ }
+
+@@ -2139,6 +2145,9 @@ svm_range_add_list_work(struct svm_range_list *svms, struct svm_range *prange,
+ prange->work_item.op = op;
+ } else {
+ prange->work_item.op = op;
++
++ /* Pairs with mmput in deferred_list_work */
++ mmget(mm);
+ prange->work_item.mm = mm;
+ list_add_tail(&prange->deferred_list,
+ &prange->svms->deferred_range_list);
+@@ -2830,6 +2839,8 @@ void svm_range_list_fini(struct kfd_process *p)
+
+ pr_debug("pasid 0x%x svms 0x%p\n", p->pasid, &p->svms);
+
++ cancel_delayed_work_sync(&p->svms.restore_work);
++
+ /* Ensure list work is finished before process is destroyed */
+ flush_work(&p->svms.deferred_list_work);
+
+@@ -2840,7 +2851,6 @@ void svm_range_list_fini(struct kfd_process *p)
+ atomic_inc(&p->svms.drain_pagefaults);
+ svm_range_drain_retry_fault(&p->svms);
+
+-
+ list_for_each_entry_safe(prange, next, &p->svms.list, list) {
+ svm_range_unlink(prange);
+ svm_range_remove_notifier(prange);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index b28b5c4908601..90c017859ad42 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3951,7 +3951,7 @@ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *cap
+ max - min);
+ }
+
+-static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
++static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+ int bl_idx,
+ u32 user_brightness)
+ {
+@@ -3982,7 +3982,8 @@ static int amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+ DRM_DEBUG("DM: Failed to update backlight on eDP[%d]\n", bl_idx);
+ }
+
+- return rc ? 0 : 1;
++ if (rc)
++ dm->actual_brightness[bl_idx] = user_brightness;
+ }
+
+ static int amdgpu_dm_backlight_update_status(struct backlight_device *bd)
+@@ -9914,7 +9915,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
+ /* restore the backlight level */
+ for (i = 0; i < dm->num_of_edps; i++) {
+ if (dm->backlight_dev[i] &&
+- (amdgpu_dm_backlight_get_level(dm, i) != dm->brightness[i]))
++ (dm->actual_brightness[i] != dm->brightness[i]))
+ amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
+ }
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index b9a69b0cef23b..7a23cd603714b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -540,6 +540,12 @@ struct amdgpu_display_manager {
+ * cached backlight values.
+ */
+ u32 brightness[AMDGPU_DM_MAX_NUM_EDP];
++ /**
++ * @actual_brightness:
++ *
++ * last successfully applied backlight values.
++ */
++ u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
+ };
+
+ enum dsc_clock_force_state {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index 26719efa5396d..12d437d9a0e4c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -227,8 +227,10 @@ static ssize_t dp_link_settings_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -389,8 +391,10 @@ static ssize_t dp_phy_settings_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user((*(rd_buf + result)), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -1359,8 +1363,10 @@ static ssize_t dp_dsc_clock_en_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -1376,8 +1382,10 @@ static ssize_t dp_dsc_clock_en_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -1546,8 +1554,10 @@ static ssize_t dp_dsc_slice_width_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -1563,8 +1573,10 @@ static ssize_t dp_dsc_slice_width_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -1731,8 +1743,10 @@ static ssize_t dp_dsc_slice_height_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -1748,8 +1762,10 @@ static ssize_t dp_dsc_slice_height_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -1912,8 +1928,10 @@ static ssize_t dp_dsc_bits_per_pixel_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -1929,8 +1947,10 @@ static ssize_t dp_dsc_bits_per_pixel_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -2088,8 +2108,10 @@ static ssize_t dp_dsc_pic_width_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -2105,8 +2127,10 @@ static ssize_t dp_dsc_pic_width_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -2145,8 +2169,10 @@ static ssize_t dp_dsc_pic_height_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -2162,8 +2188,10 @@ static ssize_t dp_dsc_pic_height_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -2217,8 +2245,10 @@ static ssize_t dp_dsc_chunk_size_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -2234,8 +2264,10 @@ static ssize_t dp_dsc_chunk_size_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -2289,8 +2321,10 @@ static ssize_t dp_dsc_slice_bpg_offset_read(struct file *f, char __user *buf,
+ break;
+ }
+
+- if (!pipe_ctx)
++ if (!pipe_ctx) {
++ kfree(rd_buf);
+ return -ENXIO;
++ }
+
+ dsc = pipe_ctx->stream_res.dsc;
+ if (dsc)
+@@ -2306,8 +2340,10 @@ static ssize_t dp_dsc_slice_bpg_offset_read(struct file *f, char __user *buf,
+ break;
+
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+
+ buf += 1;
+ size -= 1;
+@@ -3459,8 +3495,10 @@ static ssize_t dcc_en_bits_read(
+ dc->hwss.get_dcc_en_bits(dc, dcc_en_bits);
+
+ rd_buf = kcalloc(rd_buf_size, sizeof(char), GFP_KERNEL);
+- if (!rd_buf)
++ if (!rd_buf) {
++ kfree(dcc_en_bits);
+ return -ENOMEM;
++ }
+
+ for (i = 0; i < num_pipes; i++)
+ offset += snprintf(rd_buf + offset, rd_buf_size - offset,
+@@ -3473,8 +3511,10 @@ static ssize_t dcc_en_bits_read(
+ if (*pos >= rd_buf_size)
+ break;
+ r = put_user(*(rd_buf + result), buf);
+- if (r)
++ if (r) {
++ kfree(rd_buf);
+ return r; /* r = -EFAULT */
++ }
+ buf += 1;
+ size -= 1;
+ *pos += 1;
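
All of the amdgpu_dm_debugfs.c hunks above plug the same leak: these
debugfs read handlers allocate rd_buf up front, copy it to userspace one
byte at a time with put_user(), and every early return taken after the
allocation must free the buffer first. A minimal sketch of the corrected
handler shape, with a hypothetical buffer size and the driver-specific
fill step elided:

    #include <linux/fs.h>
    #include <linux/slab.h>
    #include <linux/uaccess.h>

    #define RD_BUF_SIZE 100     /* illustrative, not the driver's value */

    static ssize_t example_read(struct file *f, char __user *buf,
                                size_t size, loff_t *pos)
    {
        char *rd_buf;
        ssize_t result = 0;
        int r;

        rd_buf = kcalloc(RD_BUF_SIZE, sizeof(char), GFP_KERNEL);
        if (!rd_buf)
            return -ENOMEM;

        /* ... format driver state into rd_buf ... */

        while (size) {
            if (*pos >= RD_BUF_SIZE)
                break;

            r = put_user(*(rd_buf + result), buf);
            if (r) {
                kfree(rd_buf);  /* the fix: free before bailing out */
                return r;       /* r = -EFAULT */
            }

            buf += 1;
            size -= 1;
            *pos += 1;
            result += 1;
        }

        kfree(rd_buf);
        return result;
    }
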
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index c510638b4f997..a009fc654ac95 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -149,10 +149,8 @@ bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream)
+
+ link = stream->link;
+
+- psr_config.psr_version = link->dpcd_caps.psr_caps.psr_version;
+-
+- if (psr_config.psr_version > 0) {
+- psr_config.psr_exit_link_training_required = 0x1;
++ if (link->psr_settings.psr_version != DC_PSR_VERSION_UNSUPPORTED) {
++ psr_config.psr_version = link->psr_settings.psr_version;
+ psr_config.psr_frame_capture_indication_req = 0;
+ psr_config.psr_rfb_setup_time = 0x37;
+ psr_config.psr_sdp_transmit_line_num_deadline = 0x20;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index ba1aa994db4b7..62bc6ce887535 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -76,6 +76,8 @@
+
+ #include "dc_trace.h"
+
++#include "dce/dmub_outbox.h"
++
+ #define CTX \
+ dc->ctx
+
+@@ -3707,13 +3709,23 @@ void dc_hardware_release(struct dc *dc)
+ }
+ #endif
+
+-/**
+- * dc_enable_dmub_notifications - Returns whether dmub notification can be enabled
+- * @dc: dc structure
++/*
++ *****************************************************************************
++ * Function: dc_is_dmub_outbox_supported -
++ *
++ * @brief
++ * Checks whether DMUB FW supports outbox notifications; if supported,
++ * the DM should register the outbox interrupt prior to actually enabling
++ * interrupts via dc_enable_dmub_outbox
+ *
+- * Returns: True to enable dmub notifications, False otherwise
++ * @param
++ * [in] dc: dc structure
++ *
++ * @return
++ * True if DMUB FW supports outbox notifications, False otherwise
++ *****************************************************************************
+ */
+-bool dc_enable_dmub_notifications(struct dc *dc)
++bool dc_is_dmub_outbox_supported(struct dc *dc)
+ {
+ #if defined(CONFIG_DRM_AMD_DC_DCN)
+ /* YELLOW_CARP B0 USB4 DPIA needs dmub notifications for interrupts */
+@@ -3728,6 +3740,48 @@ bool dc_enable_dmub_notifications(struct dc *dc)
+
+ /**
+ * dc_process_dmub_aux_transfer_async - Submits aux command to dmub via inbox message
++ * Function: dc_enable_dmub_notifications
++ *
++ * @brief
++ * Calls dc_is_dmub_outbox_supported to check if dmub fw supports outbox
++ * notifications. All DMs shall switch to dc_is_dmub_outbox_supported.
++ * This API shall be removed after switching.
++ *
++ * @param
++ * [in] dc: dc structure
++ *
++ * @return
++ * True if DMUB FW supports outbox notifications, False otherwise
++ *****************************************************************************
++ */
++bool dc_enable_dmub_notifications(struct dc *dc)
++{
++ return dc_is_dmub_outbox_supported(dc);
++}
++
++/**
++ *****************************************************************************
++ * Function: dc_enable_dmub_outbox
++ *
++ * @brief
++ * Enables DMUB unsolicited notifications to x86 via outbox
++ *
++ * @param
++ * [in] dc: dc structure
++ *
++ * @return
++ * None
++ *****************************************************************************
++ */
++void dc_enable_dmub_outbox(struct dc *dc)
++{
++ struct dc_context *dc_ctx = dc->ctx;
++
++ dmub_enable_outbox_notification(dc_ctx->dmub_srv);
++}
++
++/**
++ *****************************************************************************
+ * Sets port index appropriately for legacy DDC
+ * @dc: dc structure
+ * @link_index: link index
+@@ -3829,7 +3883,7 @@ uint8_t get_link_index_from_dpia_port_index(const struct dc *dc,
+ * [in] payload: aux payload
+ * [out] notify: set_config immediate reply
+ *
+- * @return
++ * @return
+ * True if successful, False if failure
+ *****************************************************************************
+ */
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 61b8f29a0c303..49d5271dcfdc8 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -2378,22 +2378,27 @@ static enum link_training_result dp_perform_8b_10b_link_training(
+ repeater_id--) {
+ status = perform_clock_recovery_sequence(link, link_res, lt_settings, repeater_id);
+
+- if (status != LINK_TRAINING_SUCCESS)
++ if (status != LINK_TRAINING_SUCCESS) {
++ repeater_training_done(link, repeater_id);
+ break;
++ }
+
+ status = perform_channel_equalization_sequence(link,
+ link_res,
+ lt_settings,
+ repeater_id);
+
++ repeater_training_done(link, repeater_id);
++
+ if (status != LINK_TRAINING_SUCCESS)
+ break;
+
+- repeater_training_done(link, repeater_id);
++ for (lane = 0; lane < LANE_COUNT_DP_MAX; lane++) {
++ lt_settings->dpcd_lane_settings[lane].raw = 0;
++ lt_settings->hw_lane_settings[lane].VOLTAGE_SWING = 0;
++ lt_settings->hw_lane_settings[lane].PRE_EMPHASIS = 0;
++ }
+ }
+-
+- for (lane = 0; lane < (uint8_t)lt_settings->link_settings.lane_count; lane++)
+- lt_settings->dpcd_lane_settings[lane].raw = 0;
+ }
+
+ if (status == LINK_TRAINING_SUCCESS) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 18757c1585232..ac3071e38e4a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1640,6 +1640,9 @@ static bool are_stream_backends_same(
+ if (is_timing_changed(stream_a, stream_b))
+ return false;
+
++ if (stream_a->signal != stream_b->signal)
++ return false;
++
+ if (stream_a->dpms_off != stream_b->dpms_off)
+ return false;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index b518648906212..6a8c100a3688d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1448,8 +1448,11 @@ void dc_z10_restore(const struct dc *dc);
+ void dc_z10_save_init(struct dc *dc);
+ #endif
+
++bool dc_is_dmub_outbox_supported(struct dc *dc);
+ bool dc_enable_dmub_notifications(struct dc *dc);
+
++void dc_enable_dmub_outbox(struct dc *dc);
++
+ bool dc_process_dmub_aux_transfer_async(struct dc *dc,
+ uint32_t link_index,
+ struct aux_payload *payload);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c
+index faad8555ddbb6..fff1d07d865d7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.c
+@@ -22,20 +22,23 @@
+ * Authors: AMD
+ */
+
+-#include "dmub_outbox.h"
++#include "dc.h"
+ #include "dc_dmub_srv.h"
++#include "dmub_outbox.h"
+ #include "dmub/inc/dmub_cmd.h"
+
+-/**
+- * dmub_enable_outbox_notification - Sends inbox cmd to dmub to enable outbox1
+- * messages with interrupt. Dmub sends outbox1
+- * message and triggers outbox1 interrupt.
+- * @dc: dc structure
++/*
++ * Function: dmub_enable_outbox_notification
++ *
++ * @brief
++ * Sends inbox cmd to dmub for enabling outbox notifications to x86.
++ *
++ * @param
++ * [in] dmub_srv: dmub_srv structure
+ */
+-void dmub_enable_outbox_notification(struct dc *dc)
++void dmub_enable_outbox_notification(struct dc_dmub_srv *dmub_srv)
+ {
+ union dmub_rb_cmd cmd;
+- struct dc_context *dc_ctx = dc->ctx;
+
+ memset(&cmd, 0x0, sizeof(cmd));
+ cmd.outbox1_enable.header.type = DMUB_CMD__OUTBOX1_ENABLE;
+@@ -45,7 +48,7 @@ void dmub_enable_outbox_notification(struct dc *dc)
+ sizeof(cmd.outbox1_enable.header);
+ cmd.outbox1_enable.enable = true;
+
+- dc_dmub_srv_cmd_queue(dc_ctx->dmub_srv, &cmd);
+- dc_dmub_srv_cmd_execute(dc_ctx->dmub_srv);
+- dc_dmub_srv_wait_idle(dc_ctx->dmub_srv);
++ dc_dmub_srv_cmd_queue(dmub_srv, &cmd);
++ dc_dmub_srv_cmd_execute(dmub_srv);
++ dc_dmub_srv_wait_idle(dmub_srv);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.h b/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.h
+index 4e0aa0d1a2d5c..58ceabb9d497d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_outbox.h
+@@ -26,8 +26,8 @@
+ #ifndef _DMUB_OUTBOX_H_
+ #define _DMUB_OUTBOX_H_
+
+-#include "dc.h"
++struct dc_dmub_srv;
+
+-void dmub_enable_outbox_notification(struct dc *dc);
++void dmub_enable_outbox_notification(struct dc_dmub_srv *dmub_srv);
+
+ #endif /* _DMUB_OUTBOX_H_ */
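
The dmub_outbox.h change above also swaps a heavyweight #include "dc.h"
for a forward declaration: the header only passes struct dc_dmub_srv by
pointer, so an incomplete type suffices, and only dmub_outbox.c needs
the full definitions. A generic sketch of that decoupling pattern, with
hypothetical names:

    /* widget.h -- pointer-only interface, no full type needed */
    #ifndef _WIDGET_H_
    #define _WIDGET_H_

    struct engine;              /* forward declaration, not #include */

    void widget_kick(struct engine *eng);

    #endif /* _WIDGET_H_ */

Only widget.c would then include engine.h for the real struct layout,
which keeps rebuild fan-out small when engine.h changes.
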
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 530a72e3eefe2..2cefdd96d0cbb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1493,17 +1493,6 @@ void dcn10_init_hw(struct dc *dc)
+ link->link_status.link_active = true;
+ }
+
+- /* Power gate DSCs */
+- if (!is_optimized_init_done) {
+- for (i = 0; i < res_pool->res_cap->num_dsc; i++)
+- if (hws->funcs.dsc_pg_control != NULL)
+- hws->funcs.dsc_pg_control(hws, res_pool->dscs[i]->inst, false);
+- }
+-
+- /* Enable outbox notification feature of dmub */
+- if (dc->debug.enable_dmub_aux_for_legacy_ddc)
+- dmub_enable_outbox_notification(dc);
+-
+ /* we want to turn off all dp displays before doing detection */
+ if (dc->config.power_down_display_on_boot)
+ dc_link_blank_all_dp_displays(dc);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index e5cc6bf45743a..ca1bbc942fd40 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -873,7 +873,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .clock_trace = true,
+ .disable_pplib_clock_request = true,
+ .min_disp_clk_khz = 100000,
+- .pipe_split_policy = MPC_SPLIT_DYNAMIC,
++ .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
+ .force_single_disp_pipe_split = false,
+ .disable_dcc = DCC_ENABLE,
+ .vsr_support = true,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+index 4206ce5bf9a92..1e156f3980656 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
+@@ -194,7 +194,7 @@ void dcn31_init_hw(struct dc *dc)
+
+ /* Enables outbox notifications for usb4 dpia */
+ if (dc->res_pool->usb4_dpia_count)
+- dmub_enable_outbox_notification(dc);
++ dmub_enable_outbox_notification(dc->ctx->dmub_srv);
+
+ /* we want to turn off all dp displays before doing detection */
+ if (dc->config.power_down_display_on_boot)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index 8d64187478e42..0e11285035b63 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -2025,7 +2025,9 @@ bool dcn31_validate_bandwidth(struct dc *dc,
+
+ BW_VAL_TRACE_COUNT();
+
++ DC_FP_START();
+ out = dcn30_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, fast_validate);
++ DC_FP_END();
+
+ // Disable fast_validate to set min dcfclk in calculate_wm_and_dlg
+ if (pipe_cnt == 0)
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+index 08362d506534b..a68496b3f9296 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+@@ -1045,6 +1045,17 @@ bool amdgpu_dpm_is_baco_supported(struct amdgpu_device *adev)
+
+ if (!pp_funcs || !pp_funcs->get_asic_baco_capability)
+ return false;
++ /* Don't use baco for reset in S3.
++ * This is a workaround for some platforms
++ * where entering BACO during suspend
++ * seems to cause reboots or hangs.
++ * This might be related to the fact that BACO controls
++ * power to the whole GPU including devices like audio and USB.
++ * Powering down/up everything may adversely affect these other
++ * devices. Needs more investigation.
++ */
++ if (adev->in_s3)
++ return false;
+
+ if (pp_funcs->get_asic_baco_capability(pp_handle, &baco_cap))
+ return false;
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+index 9ddd8491ff008..ede71de2343dc 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
+@@ -773,13 +773,13 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinFclkByFreq,
+ hwmgr->display_config->num_display > 3 ?
+- data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk :
++ (data->clock_vol_info.vdd_dep_on_fclk->entries[0].clk / 100) :
+ min_mclk,
+ NULL);
+
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinSocclkByFreq,
+- data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk,
++ data->clock_vol_info.vdd_dep_on_socclk->entries[0].clk / 100,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetHardMinVcn,
+@@ -792,11 +792,11 @@ static int smu10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetSoftMaxFclkByFreq,
+- data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk,
++ data->clock_vol_info.vdd_dep_on_fclk->entries[index_fclk].clk / 100,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetSoftMaxSocclkByFreq,
+- data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk,
++ data->clock_vol_info.vdd_dep_on_socclk->entries[index_socclk].clk / 100,
+ NULL);
+ smum_send_msg_to_smc_with_parameter(hwmgr,
+ PPSMC_MSG_SetSoftMaxVcn,
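
The repeated "/ 100" above reads as a unit conversion: judging from the
fix (this is an inference, the patch itself does not say so), the
vdd_dep clock-table entries are stored in 10 kHz units while the
PPSMC_MSG_SetHardMin*/SetSoftMax* message parameters expect MHz. A
hypothetical helper would make that intent explicit:

    /*
     * Hypothetical helper, assuming table entries are in 10 kHz units
     * and the SMC message parameter is in MHz.
     */
    static inline uint32_t smu10_clk_entry_to_mhz(uint32_t clk_10khz)
    {
        return clk_10khz / 100;
    }
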
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index 6e484d836cfe2..691039aba87f4 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -861,18 +861,19 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ memcpy(&dsi->mode, adjusted_mode, sizeof(dsi->mode));
+ drm_mode_debug_printmodeline(adjusted_mode);
+
+- pm_runtime_get_sync(dev);
++ if (pm_runtime_resume_and_get(dev) < 0)
++ return;
+
+ if (clk_prepare_enable(dsi->lcdif_clk) < 0)
+- return;
++ goto runtime_put;
+ if (clk_prepare_enable(dsi->core_clk) < 0)
+- return;
++ goto runtime_put;
+
+ /* Step 1 from DSI reset-out instructions */
+ ret = reset_control_deassert(dsi->rst_pclk);
+ if (ret < 0) {
+ DRM_DEV_ERROR(dev, "Failed to deassert PCLK: %d\n", ret);
+- return;
++ goto runtime_put;
+ }
+
+ /* Step 2 from DSI reset-out instructions */
+@@ -882,13 +883,18 @@ nwl_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ ret = reset_control_deassert(dsi->rst_esc);
+ if (ret < 0) {
+ DRM_DEV_ERROR(dev, "Failed to deassert ESC: %d\n", ret);
+- return;
++ goto runtime_put;
+ }
+ ret = reset_control_deassert(dsi->rst_byte);
+ if (ret < 0) {
+ DRM_DEV_ERROR(dev, "Failed to deassert BYTE: %d\n", ret);
+- return;
++ goto runtime_put;
+ }
++
++ return;
++
++runtime_put:
++ pm_runtime_put_sync(dev);
+ }
+
+ static void
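
The nwl-dsi fix swaps the bare pm_runtime_get_sync() for
pm_runtime_resume_and_get(), which drops its reference itself on
failure, and funnels every later error through a single runtime_put
label so the reference can never leak. A compact sketch of that shape,
with the hardware steps elided:

    #include <linux/clk.h>
    #include <linux/pm_runtime.h>

    static void example_mode_set(struct device *dev, struct clk *clk)
    {
        if (pm_runtime_resume_and_get(dev) < 0)
            return;                 /* no reference held on failure */

        if (clk_prepare_enable(clk) < 0)
            goto runtime_put;

        /* ... resets and register writes that may fail ... */

        return;

    runtime_put:
        pm_runtime_put_sync(dev);
    }
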
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index b8f5419e514ae..83e5c115e7547 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -212,9 +212,7 @@ static const struct edid_quirk {
+
+ /* Windows Mixed Reality Headsets */
+ EDID_QUIRK('A', 'C', 'R', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('H', 'P', 'N', 0x3515, EDID_QUIRK_NON_DESKTOP),
+ EDID_QUIRK('L', 'E', 'N', 0x0408, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('L', 'E', 'N', 0xb800, EDID_QUIRK_NON_DESKTOP),
+ EDID_QUIRK('F', 'U', 'J', 0x1970, EDID_QUIRK_NON_DESKTOP),
+ EDID_QUIRK('D', 'E', 'L', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+ EDID_QUIRK('S', 'E', 'C', 0x144a, EDID_QUIRK_NON_DESKTOP),
+@@ -5327,17 +5325,13 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+ info->width_mm = edid->width_cm * 10;
+ info->height_mm = edid->height_cm * 10;
+
+- info->non_desktop = !!(quirks & EDID_QUIRK_NON_DESKTOP);
+-
+ drm_get_monitor_range(connector, edid);
+
+- DRM_DEBUG_KMS("non_desktop set to %d\n", info->non_desktop);
+-
+ if (edid->revision < 3)
+- return quirks;
++ goto out;
+
+ if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
+- return quirks;
++ goto out;
+
+ info->color_formats |= DRM_COLOR_FORMAT_RGB444;
+ drm_parse_cea_ext(connector, edid);
+@@ -5358,7 +5352,7 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+
+ /* Only defined for 1.4 with digital displays */
+ if (edid->revision < 4)
+- return quirks;
++ goto out;
+
+ switch (edid->input & DRM_EDID_DIGITAL_DEPTH_MASK) {
+ case DRM_EDID_DIGITAL_DEPTH_6:
+@@ -5395,6 +5389,13 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
+
+ drm_update_mso(connector, edid);
+
++out:
++ if (quirks & EDID_QUIRK_NON_DESKTOP) {
++ drm_dbg_kms(connector->dev, "Non-desktop display%s\n",
++ info->non_desktop ? " (redundant quirk)" : "");
++ info->non_desktop = true;
++ }
++
+ return quirks;
+ }
+
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index b910978d3e480..4e853acfd1e8a 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -180,6 +180,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MicroPC"),
+ },
+ .driver_data = (void *)&lcd720x1280_rightside_up,
++ }, { /* GPD Win Max */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "G1619-01"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /*
+ * GPD Pocket, note that the DMI data is less generic than
+ * it seems, devices with a board-vendor of "AMI Corporation"
+diff --git a/drivers/gpu/drm/imx/dw_hdmi-imx.c b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+index 87428fb23d9ff..a2277a0d6d06f 100644
+--- a/drivers/gpu/drm/imx/dw_hdmi-imx.c
++++ b/drivers/gpu/drm/imx/dw_hdmi-imx.c
+@@ -222,6 +222,7 @@ static int dw_hdmi_imx_probe(struct platform_device *pdev)
+ struct device_node *np = pdev->dev.of_node;
+ const struct of_device_id *match = of_match_node(dw_hdmi_imx_dt_ids, np);
+ struct imx_hdmi *hdmi;
++ int ret;
+
+ hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL);
+ if (!hdmi)
+@@ -243,10 +244,15 @@ static int dw_hdmi_imx_probe(struct platform_device *pdev)
+ hdmi->bridge = of_drm_find_bridge(np);
+ if (!hdmi->bridge) {
+ dev_err(hdmi->dev, "Unable to find bridge\n");
++ dw_hdmi_remove(hdmi->hdmi);
+ return -ENODEV;
+ }
+
+- return component_add(&pdev->dev, &dw_hdmi_imx_ops);
++ ret = component_add(&pdev->dev, &dw_hdmi_imx_ops);
++ if (ret)
++ dw_hdmi_remove(hdmi->hdmi);
++
++ return ret;
+ }
+
+ static int dw_hdmi_imx_remove(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c
+index e5078d03020d9..fb0e951248f68 100644
+--- a/drivers/gpu/drm/imx/imx-ldb.c
++++ b/drivers/gpu/drm/imx/imx-ldb.c
+@@ -572,6 +572,8 @@ static int imx_ldb_panel_ddc(struct device *dev,
+ edidp = of_get_property(child, "edid", &edid_len);
+ if (edidp) {
+ channel->edid = kmemdup(edidp, edid_len, GFP_KERNEL);
++ if (!channel->edid)
++ return -ENOMEM;
+ } else if (!channel->panel) {
+ /* fallback to display-timings node */
+ ret = of_get_drm_display_mode(child,
+diff --git a/drivers/gpu/drm/imx/parallel-display.c b/drivers/gpu/drm/imx/parallel-display.c
+index 06cb1a59b9bcd..63ba2ad846791 100644
+--- a/drivers/gpu/drm/imx/parallel-display.c
++++ b/drivers/gpu/drm/imx/parallel-display.c
+@@ -75,8 +75,10 @@ static int imx_pd_connector_get_modes(struct drm_connector *connector)
+ ret = of_get_drm_display_mode(np, &imxpd->mode,
+ &imxpd->bus_flags,
+ OF_USE_NATIVE_MODE);
+- if (ret)
++ if (ret) {
++ drm_mode_destroy(connector->dev, mode);
+ return ret;
++ }
+
+ drm_mode_copy(mode, &imxpd->mode);
+ mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 6b3ced4aaaf5d..3a3f53f0c8ae1 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1877,7 +1877,7 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi)
+
+ /* do not autoenable, will be enabled later */
+ ret = devm_request_irq(&pdev->dev, msm_host->irq, dsi_host_irq,
+- IRQF_TRIGGER_HIGH | IRQF_ONESHOT | IRQF_NO_AUTOEN,
++ IRQF_TRIGGER_HIGH | IRQF_NO_AUTOEN,
+ "dsi_isr", msm_host);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to request IRQ%u: %d\n",
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+index e1772211b0a4b..612310d5d4812 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
+@@ -216,6 +216,7 @@ gm20b_pmu = {
+ .intr = gt215_pmu_intr,
+ .recv = gm20b_pmu_recv,
+ .initmsg = gm20b_pmu_initmsg,
++ .reset = gf100_pmu_reset,
+ };
+
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+index 6bf7fc1bd1e3b..1a6f9c3af5ecd 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp102.c
+@@ -23,7 +23,7 @@
+ */
+ #include "priv.h"
+
+-static void
++void
+ gp102_pmu_reset(struct nvkm_pmu *pmu)
+ {
+ struct nvkm_device *device = pmu->subdev.device;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+index ba1583bb618b2..94cfb1791af6e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+@@ -83,6 +83,7 @@ gp10b_pmu = {
+ .intr = gt215_pmu_intr,
+ .recv = gm20b_pmu_recv,
+ .initmsg = gm20b_pmu_initmsg,
++ .reset = gp102_pmu_reset,
+ };
+
+ #if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+index bcaade758ff72..21abf31f44420 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/priv.h
+@@ -41,6 +41,7 @@ int gt215_pmu_send(struct nvkm_pmu *, u32[2], u32, u32, u32, u32);
+
+ bool gf100_pmu_enabled(struct nvkm_pmu *);
+ void gf100_pmu_reset(struct nvkm_pmu *);
++void gp102_pmu_reset(struct nvkm_pmu *pmu);
+
+ void gk110_pmu_pgob(struct nvkm_pmu *, bool);
+
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9341.c b/drivers/gpu/drm/panel/panel-ilitek-ili9341.c
+index 2c3378a259b1e..e1542451ef9d0 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9341.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9341.c
+@@ -612,8 +612,10 @@ static int ili9341_dbi_probe(struct spi_device *spi, struct gpio_desc *dc,
+ int ret;
+
+ vcc = devm_regulator_get_optional(dev, "vcc");
+- if (IS_ERR(vcc))
++ if (IS_ERR(vcc)) {
+ dev_err(dev, "get optional vcc failed\n");
++ vcc = NULL;
++ }
+
+ dbidev = devm_drm_dev_alloc(dev, &ili9341_dbi_driver,
+ struct mipi_dbi_dev, drm);
+diff --git a/drivers/gpu/drm/sprd/sprd_dpu.c b/drivers/gpu/drm/sprd/sprd_dpu.c
+index 06a3414ee43a3..1637203ea1036 100644
+--- a/drivers/gpu/drm/sprd/sprd_dpu.c
++++ b/drivers/gpu/drm/sprd/sprd_dpu.c
+@@ -790,6 +790,11 @@ static int sprd_dpu_context_init(struct sprd_dpu *dpu,
+ int ret;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!res) {
++ dev_err(dev, "failed to get I/O resource\n");
++ return -EINVAL;
++ }
++
+ ctx->base = devm_ioremap(dev, res->start, resource_size(res));
+ if (!ctx->base) {
+ dev_err(dev, "failed to map dpu registers\n");
+diff --git a/drivers/gpu/drm/sprd/sprd_drm.c b/drivers/gpu/drm/sprd/sprd_drm.c
+index a077e2d4d7217..af2be97d5ed08 100644
+--- a/drivers/gpu/drm/sprd/sprd_drm.c
++++ b/drivers/gpu/drm/sprd/sprd_drm.c
+@@ -155,7 +155,7 @@ static void sprd_drm_shutdown(struct platform_device *pdev)
+ struct drm_device *drm = platform_get_drvdata(pdev);
+
+ if (!drm) {
+- drm_warn(drm, "drm device is not available, no shutdown\n");
++ dev_warn(&pdev->dev, "drm device is not available, no shutdown\n");
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/sprd/sprd_dsi.c b/drivers/gpu/drm/sprd/sprd_dsi.c
+index 911b3cddc2640..12b67a5d59231 100644
+--- a/drivers/gpu/drm/sprd/sprd_dsi.c
++++ b/drivers/gpu/drm/sprd/sprd_dsi.c
+@@ -907,6 +907,11 @@ static int sprd_dsi_context_init(struct sprd_dsi *dsi,
+ struct resource *res;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ if (!res) {
++ dev_err(dev, "failed to get I/O resource\n");
++ return -EINVAL;
++ }
++
+ ctx->base = devm_ioremap(dev, res->start, resource_size(res));
+ if (!ctx->base) {
+ drm_err(dsi->drm, "failed to map dsi host registers\n");
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index c7ed2e1cbab6e..92bc0faee84f3 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -798,7 +798,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+
+ if (!render->base.perfmon) {
+ ret = -ENOENT;
+- goto fail;
++ goto fail_perfmon;
+ }
+ }
+
+@@ -847,6 +847,7 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
+
+ fail_unreserve:
+ mutex_unlock(&v3d->sched_lock);
++fail_perfmon:
+ drm_gem_unlock_reservations(last_job->bo,
+ last_job->bo_count, &acquire_ctx);
+ fail:
+@@ -1027,7 +1028,7 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data,
+ args->perfmon_id);
+ if (!job->base.perfmon) {
+ ret = -ENOENT;
+- goto fail;
++ goto fail_perfmon;
+ }
+ }
+
+@@ -1056,6 +1057,7 @@ v3d_submit_csd_ioctl(struct drm_device *dev, void *data,
+
+ fail_unreserve:
+ mutex_unlock(&v3d->sched_lock);
++fail_perfmon:
+ drm_gem_unlock_reservations(clean_job->bo, clean_job->bo_count,
+ &acquire_ctx);
+ fail:
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 7dc89dc6b0f0e..590376d776a18 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -748,15 +748,15 @@ static const struct hid_device_id apple_devices[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY),
+ .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021),
+- .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY },
+ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021),
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021),
+- .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY },
+ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021),
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021),
+- .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY },
+ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021),
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 60375879612f3..67be81208a2d9 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -380,7 +380,7 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
+ * execute:
+ *
+ * (a) In the "normal (i.e., not resuming from hibernation)" path,
+- * the full barrier in smp_store_mb() guarantees that the store
++ * the full barrier in virt_store_mb() guarantees that the store
+ * is propagated to all CPUs before the add_channel_work work
+ * is queued. In turn, add_channel_work is queued before the
+ * channel's ring buffer is allocated/initialized and the
+@@ -392,14 +392,14 @@ void vmbus_channel_map_relid(struct vmbus_channel *channel)
+ * recv_int_page before retrieving the channel pointer from the
+ * array of channels.
+ *
+- * (b) In the "resuming from hibernation" path, the smp_store_mb()
++ * (b) In the "resuming from hibernation" path, the virt_store_mb()
+ * guarantees that the store is propagated to all CPUs before
+ * the VMBus connection is marked as ready for the resume event
+ * (cf. check_ready_for_resume_event()). The interrupt handler
+ * of the VMBus driver and vmbus_chan_sched() can not run before
+ * vmbus_bus_resume() has completed execution (cf. resume_noirq).
+ */
+- smp_store_mb(
++ virt_store_mb(
+ vmbus_connection.channels[channel->offermsg.child_relid],
+ channel);
+ }
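
The smp_store_mb() -> virt_store_mb() switch matters because the other
observer here is the hypervisor, not merely another guest CPU: the
virt_* barriers remain full barriers even on !CONFIG_SMP kernels, where
the smp_* forms can degrade to compiler barriers. A minimal sketch of
publishing a pointer this way (slot and the reader are illustrative):

    #include <asm/barrier.h>

    static void *slot;          /* read concurrently by an interrupt path */

    static void publish(void *channel)
    {
        /*
         * Store the pointer, then issue a full barrier that the
         * hypervisor honours as well, regardless of CONFIG_SMP.
         */
        virt_store_mb(slot, channel);
    }
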
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 12a2b37e87f30..4bea1dfa41cdc 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2097,6 +2097,10 @@ int vmbus_device_register(struct hv_device *child_device_obj)
+ child_device_obj->device.parent = &hv_acpi_dev->dev;
+ child_device_obj->device.release = vmbus_device_release;
+
++ child_device_obj->device.dma_parms = &child_device_obj->dma_parms;
++ child_device_obj->device.dma_mask = &child_device_obj->dma_mask;
++ dma_set_mask(&child_device_obj->device, DMA_BIT_MASK(64));
++
+ /*
+ * Register with the LDM. This will kick off the driver/device
+ * binding...which will eventually call vmbus_match() and vmbus_probe()
+@@ -2122,9 +2126,6 @@ int vmbus_device_register(struct hv_device *child_device_obj)
+ }
+ hv_debug_add_dev_dir(child_device_obj);
+
+- child_device_obj->device.dma_parms = &child_device_obj->dma_parms;
+- child_device_obj->device.dma_mask = &child_device_obj->dma_mask;
+- dma_set_mask(&child_device_obj->device, DMA_BIT_MASK(64));
+ return 0;
+
+ err_kset_unregister:
+@@ -2780,10 +2781,15 @@ static void __exit vmbus_exit(void)
+ if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
+ kmsg_dump_unregister(&hv_kmsg_dumper);
+ unregister_die_notifier(&hyperv_die_block);
+- atomic_notifier_chain_unregister(&panic_notifier_list,
+- &hyperv_panic_block);
+ }
+
++ /*
++ * The panic notifier is always registered, hence we should
++ * also unconditionally unregister it here.
++ */
++ atomic_notifier_chain_unregister(&panic_notifier_list,
++ &hyperv_panic_block);
++
+ free_page((unsigned long)hv_panic_page);
+ unregister_sysctl_table(hv_ctl_table_hdr);
+ hv_ctl_table_hdr = NULL;
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 35f0d5e7533d6..1c107d6d03b99 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -2824,6 +2824,7 @@ static int cm_dreq_handler(struct cm_work *work)
+ switch (cm_id_priv->id.state) {
+ case IB_CM_REP_SENT:
+ case IB_CM_DREQ_SENT:
++ case IB_CM_MRA_REP_RCVD:
+ ib_cancel_mad(cm_id_priv->msg);
+ break;
+ case IB_CM_ESTABLISHED:
+@@ -2831,8 +2832,6 @@ static int cm_dreq_handler(struct cm_work *work)
+ cm_id_priv->id.lap_state == IB_CM_MRA_LAP_RCVD)
+ ib_cancel_mad(cm_id_priv->msg);
+ break;
+- case IB_CM_MRA_REP_RCVD:
+- break;
+ case IB_CM_TIMEWAIT:
+ atomic_long_inc(&work->port->counters[CM_RECV_DUPLICATES]
+ [CM_DREQ_COUNTER]);
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index 876cc78a22cca..7333646021bb8 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -80,6 +80,9 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ unsigned long flags;
+ struct list_head del_list;
+
++ /* Prevent freeing of mm until we are completely finished. */
++ mmgrab(handler->mn.mm);
++
+ /* Unregister first so we don't get any more notifications. */
+ mmu_notifier_unregister(&handler->mn, handler->mn.mm);
+
+@@ -102,6 +105,9 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+
+ do_remove(handler, &del_list);
+
++ /* Now the mm may be freed. */
++ mmdrop(handler->mn.mm);
++
+ kfree(handler);
+ }
+
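
The hfi1 fix pins the mm_struct across teardown: once
mmu_notifier_unregister() runs, nothing else keeps the mm alive, yet
do_remove() may still dereference handler->mn.mm. mmgrab()/mmdrop()
bracket that window. The general shape, as a sketch:

    #include <linux/mmu_notifier.h>
    #include <linux/sched/mm.h>

    static void example_unregister(struct mmu_notifier *mn)
    {
        struct mm_struct *mm = mn->mm;

        mmgrab(mm);                     /* keep mm_struct alive */
        mmu_notifier_unregister(mn, mm);

        /* ... teardown that may still touch mm ... */

        mmdrop(mm);                     /* now mm may be freed */
    }
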
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 2910d78333130..d40a1460ef971 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -541,8 +541,10 @@ static void __cache_work_func(struct mlx5_cache_ent *ent)
+ spin_lock_irq(&ent->lock);
+ if (ent->disabled)
+ goto out;
+- if (need_delay)
++ if (need_delay) {
+ queue_delayed_work(cache->wq, &ent->dwork, 300 * HZ);
++ goto out;
++ }
+ remove_cache_mr_locked(ent);
+ queue_adjust_cache_locked(ent);
+ }
+@@ -630,6 +632,7 @@ static void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ {
+ struct mlx5_cache_ent *ent = mr->cache_ent;
+
++ WRITE_ONCE(dev->cache.last_add, jiffies);
+ spin_lock_irq(&ent->lock);
+ list_add_tail(&mr->list, &ent->head);
+ ent->available_mrs++;
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index ae50b56e89132..8ef112f883a77 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -3190,7 +3190,11 @@ serr_no_r_lock:
+ spin_lock_irqsave(&sqp->s_lock, flags);
+ rvt_send_complete(sqp, wqe, send_status);
+ if (sqp->ibqp.qp_type == IB_QPT_RC) {
+- int lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
++ int lastwqe;
++
++ spin_lock(&sqp->r_lock);
++ lastwqe = rvt_error_qp(sqp, IB_WC_WR_FLUSH_ERR);
++ spin_unlock(&sqp->r_lock);
+
+ sqp->s_flags &= ~RVT_S_BUSY;
+ spin_unlock_irqrestore(&sqp->s_lock, flags);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 759b85f033315..df4d06d4d183a 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -297,6 +297,7 @@ static bool rtrs_clt_change_state_from_to(struct rtrs_clt_path *clt_path,
+ return changed;
+ }
+
++static void rtrs_clt_stop_and_destroy_conns(struct rtrs_clt_path *clt_path);
+ static void rtrs_rdma_error_recovery(struct rtrs_clt_con *con)
+ {
+ struct rtrs_clt_path *clt_path = to_clt_path(con->c.path);
+@@ -304,16 +305,7 @@ static void rtrs_rdma_error_recovery(struct rtrs_clt_con *con)
+ if (rtrs_clt_change_state_from_to(clt_path,
+ RTRS_CLT_CONNECTED,
+ RTRS_CLT_RECONNECTING)) {
+- struct rtrs_clt_sess *clt = clt_path->clt;
+- unsigned int delay_ms;
+-
+- /*
+- * Normal scenario, reconnect if we were successfully connected
+- */
+- delay_ms = clt->reconnect_delay_sec * 1000;
+- queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork,
+- msecs_to_jiffies(delay_ms +
+- prandom_u32() % RTRS_RECONNECT_SEED));
++ queue_work(rtrs_wq, &clt_path->err_recovery_work);
+ } else {
+ /*
+ * Error can happen just on establishing new connection,
+@@ -1511,6 +1503,22 @@ static void rtrs_clt_init_hb(struct rtrs_clt_path *clt_path)
+ static void rtrs_clt_reconnect_work(struct work_struct *work);
+ static void rtrs_clt_close_work(struct work_struct *work);
+
++static void rtrs_clt_err_recovery_work(struct work_struct *work)
++{
++ struct rtrs_clt_path *clt_path;
++ struct rtrs_clt_sess *clt;
++ int delay_ms;
++
++ clt_path = container_of(work, struct rtrs_clt_path, err_recovery_work);
++ clt = clt_path->clt;
++ delay_ms = clt->reconnect_delay_sec * 1000;
++ rtrs_clt_stop_and_destroy_conns(clt_path);
++ queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork,
++ msecs_to_jiffies(delay_ms +
++ prandom_u32() %
++ RTRS_RECONNECT_SEED));
++}
++
+ static struct rtrs_clt_path *alloc_path(struct rtrs_clt_sess *clt,
+ const struct rtrs_addr *path,
+ size_t con_num, u32 nr_poll_queues)
+@@ -1562,6 +1570,7 @@ static struct rtrs_clt_path *alloc_path(struct rtrs_clt_sess *clt,
+ clt_path->state = RTRS_CLT_CONNECTING;
+ atomic_set(&clt_path->connected_cnt, 0);
+ INIT_WORK(&clt_path->close_work, rtrs_clt_close_work);
++ INIT_WORK(&clt_path->err_recovery_work, rtrs_clt_err_recovery_work);
+ INIT_DELAYED_WORK(&clt_path->reconnect_dwork, rtrs_clt_reconnect_work);
+ rtrs_clt_init_hb(clt_path);
+
+@@ -2326,6 +2335,7 @@ static void rtrs_clt_close_work(struct work_struct *work)
+
+ clt_path = container_of(work, struct rtrs_clt_path, close_work);
+
++ cancel_work_sync(&clt_path->err_recovery_work);
+ cancel_delayed_work_sync(&clt_path->reconnect_dwork);
+ rtrs_clt_stop_and_destroy_conns(clt_path);
+ rtrs_clt_change_state_get_old(clt_path, RTRS_CLT_CLOSED, NULL);
+@@ -2638,7 +2648,6 @@ static void rtrs_clt_reconnect_work(struct work_struct *work)
+ {
+ struct rtrs_clt_path *clt_path;
+ struct rtrs_clt_sess *clt;
+- unsigned int delay_ms;
+ int err;
+
+ clt_path = container_of(to_delayed_work(work), struct rtrs_clt_path,
+@@ -2655,8 +2664,6 @@ static void rtrs_clt_reconnect_work(struct work_struct *work)
+ }
+ clt_path->reconnect_attempts++;
+
+- /* Stop everything */
+- rtrs_clt_stop_and_destroy_conns(clt_path);
+ msleep(RTRS_RECONNECT_BACKOFF);
+ if (rtrs_clt_change_state_get_old(clt_path, RTRS_CLT_CONNECTING, NULL)) {
+ err = init_path(clt_path);
+@@ -2669,11 +2676,7 @@ static void rtrs_clt_reconnect_work(struct work_struct *work)
+ reconnect_again:
+ if (rtrs_clt_change_state_get_old(clt_path, RTRS_CLT_RECONNECTING, NULL)) {
+ clt_path->stats->reconnects.fail_cnt++;
+- delay_ms = clt->reconnect_delay_sec * 1000;
+- queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork,
+- msecs_to_jiffies(delay_ms +
+- prandom_u32() %
+- RTRS_RECONNECT_SEED));
++ queue_work(rtrs_wq, &clt_path->err_recovery_work);
+ }
+ }
+
+@@ -2908,6 +2911,7 @@ int rtrs_clt_reconnect_from_sysfs(struct rtrs_clt_path *clt_path)
+ &old_state);
+ if (changed) {
+ clt_path->reconnect_attempts = 0;
++ rtrs_clt_stop_and_destroy_conns(clt_path);
+ queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork, 0);
+ }
+ if (changed || old_state == RTRS_CLT_RECONNECTING) {
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+index d1b18a154ae03..f848c0392d982 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h
+@@ -134,6 +134,7 @@ struct rtrs_clt_path {
+ struct rtrs_clt_io_req *reqs;
+ struct delayed_work reconnect_dwork;
+ struct work_struct close_work;
++ struct work_struct err_recovery_work;
+ unsigned int reconnect_attempts;
+ bool established;
+ struct rtrs_rbuf *rbufs;
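
The rtrs-clt rework moves the blocking
rtrs_clt_stop_and_destroy_conns() out of the RDMA error path into a
dedicated err_recovery_work item, which then schedules the delayed
reconnect; the error path itself only queues the work. A stripped-down
sketch of that mechanism, with placeholder names (struct path, wq,
delay_ms, stop_and_destroy_conns):

    #include <linux/workqueue.h>

    struct path {
        struct work_struct err_recovery_work;
        struct delayed_work reconnect_dwork;
    };

    static struct workqueue_struct *wq;         /* placeholder */
    static const unsigned int delay_ms = 1000;  /* placeholder */

    static void stop_and_destroy_conns(struct path *p)
    {
        /* ... blocking teardown, safe in process context ... */
    }

    static void err_recovery_work_fn(struct work_struct *work)
    {
        struct path *p = container_of(work, struct path,
                                      err_recovery_work);

        stop_and_destroy_conns(p);
        queue_delayed_work(wq, &p->reconnect_dwork,
                           msecs_to_jiffies(delay_ms));
    }

    /* the error path stays cheap and non-blocking */
    static void on_rdma_error(struct path *p)
    {
        queue_work(wq, &p->err_recovery_work);
    }
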
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 6dc6d8b6b3686..f60381cdf1c48 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -1558,6 +1558,7 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
+ dev_info(smmu->dev, "\t0x%016llx\n",
+ (unsigned long long)evt[i]);
+
++ cond_resched();
+ }
+
+ /*
+diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
+index 980e4af3f06b6..d2e82a1b56d8a 100644
+--- a/drivers/iommu/omap-iommu.c
++++ b/drivers/iommu/omap-iommu.c
+@@ -1661,7 +1661,7 @@ static struct iommu_device *omap_iommu_probe_device(struct device *dev)
+ num_iommus = of_property_count_elems_of_size(dev->of_node, "iommus",
+ sizeof(phandle));
+ if (num_iommus < 0)
+- return 0;
++ return ERR_PTR(-ENODEV);
+
+ arch_data = kcalloc(num_iommus + 1, sizeof(*arch_data), GFP_KERNEL);
+ if (!arch_data)
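
The omap-iommu fix corrects a return-convention bug: .probe_device
returns a struct iommu_device pointer, so "no IOMMU upstream of this
device" must be reported as ERR_PTR(-ENODEV), not as a literal 0, which
the core would take for a successful probe. A sketch of the convention
(the helpers are hypothetical):

    #include <linux/err.h>
    #include <linux/iommu.h>

    static struct iommu_device *example_probe_device(struct device *dev)
    {
        int num = count_upstream_iommus(dev);   /* hypothetical helper */

        if (num < 0)
            return ERR_PTR(-ENODEV);            /* not "return 0;" */

        return lookup_iommu_device(dev);        /* hypothetical helper */
    }
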
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index cd772973114af..a0fc764ec9dc6 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -3011,18 +3011,12 @@ static int __init allocate_lpi_tables(void)
+ return 0;
+ }
+
+-static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
++static u64 read_vpend_dirty_clear(void __iomem *vlpi_base)
+ {
+ u32 count = 1000000; /* 1s! */
+ bool clean;
+ u64 val;
+
+- val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
+- val &= ~GICR_VPENDBASER_Valid;
+- val &= ~clr;
+- val |= set;
+- gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
+-
+ do {
+ val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
+ clean = !(val & GICR_VPENDBASER_Dirty);
+@@ -3033,10 +3027,26 @@ static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
+ }
+ } while (!clean && count);
+
+- if (unlikely(val & GICR_VPENDBASER_Dirty)) {
++ if (unlikely(!clean))
+ pr_err_ratelimited("ITS virtual pending table not cleaning\n");
++
++ return val;
++}
++
++static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
++{
++ u64 val;
++
++ /* Make sure we wait until the RD is done with the initial scan */
++ val = read_vpend_dirty_clear(vlpi_base);
++ val &= ~GICR_VPENDBASER_Valid;
++ val &= ~clr;
++ val |= set;
++ gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
++
++ val = read_vpend_dirty_clear(vlpi_base);
++ if (unlikely(val & GICR_VPENDBASER_Dirty))
+ val |= GICR_VPENDBASER_PendingLast;
+- }
+
+ return val;
+ }
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 5e935d97207dc..907af63d1bba9 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -206,11 +206,11 @@ static inline void __iomem *gic_dist_base(struct irq_data *d)
+ }
+ }
+
+-static void gic_do_wait_for_rwp(void __iomem *base)
++static void gic_do_wait_for_rwp(void __iomem *base, u32 bit)
+ {
+ u32 count = 1000000; /* 1s! */
+
+- while (readl_relaxed(base + GICD_CTLR) & GICD_CTLR_RWP) {
++ while (readl_relaxed(base + GICD_CTLR) & bit) {
+ count--;
+ if (!count) {
+ pr_err_ratelimited("RWP timeout, gone fishing\n");
+@@ -224,13 +224,13 @@ static void gic_do_wait_for_rwp(void __iomem *base)
+ /* Wait for completion of a distributor change */
+ static void gic_dist_wait_for_rwp(void)
+ {
+- gic_do_wait_for_rwp(gic_data.dist_base);
++ gic_do_wait_for_rwp(gic_data.dist_base, GICD_CTLR_RWP);
+ }
+
+ /* Wait for completion of a redistributor change */
+ static void gic_redist_wait_for_rwp(void)
+ {
+- gic_do_wait_for_rwp(gic_data_rdist_rd_base());
++ gic_do_wait_for_rwp(gic_data_rdist_rd_base(), GICR_CTLR_RWP);
+ }
+
+ #ifdef CONFIG_ARM64
+@@ -1466,6 +1466,12 @@ static int gic_irq_domain_translate(struct irq_domain *d,
+ if(fwspec->param_count != 2)
+ return -EINVAL;
+
++ if (fwspec->param[0] < 16) {
++ pr_err(FW_BUG "Illegal GSI%d translation request\n",
++ fwspec->param[0]);
++ return -EINVAL;
++ }
++
+ *hwirq = fwspec->param[0];
+ *type = fwspec->param[1];
+
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index b8bb46c65a97a..3dbac62c932dd 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -1085,6 +1085,12 @@ static int gic_irq_domain_translate(struct irq_domain *d,
+ if(fwspec->param_count != 2)
+ return -EINVAL;
+
++ if (fwspec->param[0] < 16) {
++ pr_err(FW_BUG "Illegal GSI%d translation request\n",
++ fwspec->param[0]);
++ return -EINVAL;
++ }
++
+ *hwirq = fwspec->param[0];
+ *type = fwspec->param[1];
+
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index 21fe8652b095b..901abd6dea419 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -18,6 +18,7 @@
+ #include <linux/dm-ioctl.h>
+ #include <linux/hdreg.h>
+ #include <linux/compat.h>
++#include <linux/nospec.h>
+
+ #include <linux/uaccess.h>
+ #include <linux/ima.h>
+@@ -1788,6 +1789,7 @@ static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
+ if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
+ return NULL;
+
++ cmd = array_index_nospec(cmd, ARRAY_SIZE(_ioctls));
+ *ioctl_flags = _ioctls[cmd].flags;
+ return _ioctls[cmd].fn;
+ }
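
array_index_nospec() is the standard Spectre-v1 hardening: the
architectural bounds check above it is not enough, because a
mispredicted branch can still index _ioctls[] out of bounds
speculatively and leak the result through the cache. Clamping the index
after the check closes that window. In isolation the pattern looks like
this:

    #include <linux/kernel.h>
    #include <linux/nospec.h>

    static const int table[16] = { /* ... */ };

    static int safe_lookup(unsigned int idx)
    {
        if (idx >= ARRAY_SIZE(table))
            return -EINVAL;

        /* clamp idx even on speculative, mispredicted paths */
        idx = array_index_nospec(idx, ARRAY_SIZE(table));
        return table[idx];
    }
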
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index 579ab6183d4d8..dffeb47a9efbc 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -499,8 +499,13 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
+
+ if (unlikely(!ti)) {
+ int srcu_idx;
+- struct dm_table *map = dm_get_live_table(md, &srcu_idx);
++ struct dm_table *map;
+
++ map = dm_get_live_table(md, &srcu_idx);
++ if (unlikely(!map)) {
++ dm_put_live_table(md, srcu_idx);
++ return BLK_STS_RESOURCE;
++ }
+ ti = dm_table_find_target(map, 0);
+ dm_put_live_table(md, srcu_idx);
+ }
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 394778d8bf549..dcb8d8fc7877a 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1519,15 +1519,10 @@ static void dm_submit_bio(struct bio *bio)
+ struct dm_table *map;
+
+ map = dm_get_live_table(md, &srcu_idx);
+- if (unlikely(!map)) {
+- DMERR_LIMIT("%s: mapping table unavailable, erroring io",
+- dm_device_name(md));
+- bio_io_error(bio);
+- goto out;
+- }
+
+- /* If suspended, queue this IO for later */
+- if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
++ /* If suspended, or map not yet available, queue this IO for later */
++ if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
++ unlikely(!map)) {
+ if (bio->bi_opf & REQ_NOWAIT)
+ bio_wouldblock_error(bio);
+ else if (bio->bi_opf & REQ_RAHEAD)
+diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
+index c1eefaebacb64..bcc6e547e0714 100644
+--- a/drivers/misc/habanalabs/common/memory.c
++++ b/drivers/misc/habanalabs/common/memory.c
+@@ -1967,16 +1967,15 @@ err_dec_exporting_cnt:
+ static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args)
+ {
+ struct hl_device *hdev = hpriv->hdev;
+- struct hl_ctx *ctx = hpriv->ctx;
+ u64 block_handle, device_addr = 0;
++ struct hl_ctx *ctx = hpriv->ctx;
+ u32 handle = 0, block_size;
+- int rc, dmabuf_fd = -EBADF;
++ int rc;
+
+ switch (args->in.op) {
+ case HL_MEM_OP_ALLOC:
+ if (args->in.alloc.mem_size == 0) {
+- dev_err(hdev->dev,
+- "alloc size must be larger than 0\n");
++ dev_err(hdev->dev, "alloc size must be larger than 0\n");
+ rc = -EINVAL;
+ goto out;
+ }
+@@ -1997,15 +1996,14 @@ static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args)
+
+ case HL_MEM_OP_MAP:
+ if (args->in.flags & HL_MEM_USERPTR) {
+- device_addr = args->in.map_host.host_virt_addr;
+- rc = 0;
++ dev_err(hdev->dev, "Failed to map host memory when MMU is disabled\n");
++ rc = -EPERM;
+ } else {
+- rc = get_paddr_from_handle(ctx, &args->in,
+- &device_addr);
++ rc = get_paddr_from_handle(ctx, &args->in, &device_addr);
++ memset(args, 0, sizeof(*args));
++ args->out.device_virt_addr = device_addr;
+ }
+
+- memset(args, 0, sizeof(*args));
+- args->out.device_virt_addr = device_addr;
+ break;
+
+ case HL_MEM_OP_UNMAP:
+@@ -2013,20 +2011,14 @@ static int mem_ioctl_no_mmu(struct hl_fpriv *hpriv, union hl_mem_args *args)
+ break;
+
+ case HL_MEM_OP_MAP_BLOCK:
+- rc = map_block(hdev, args->in.map_block.block_addr,
+- &block_handle, &block_size);
++ rc = map_block(hdev, args->in.map_block.block_addr, &block_handle, &block_size);
+ args->out.block_handle = block_handle;
+ args->out.block_size = block_size;
+ break;
+
+ case HL_MEM_OP_EXPORT_DMABUF_FD:
+- rc = export_dmabuf_from_addr(ctx,
+- args->in.export_dmabuf_fd.handle,
+- args->in.export_dmabuf_fd.mem_size,
+- args->in.flags,
+- &dmabuf_fd);
+- memset(args, 0, sizeof(*args));
+- args->out.fd = dmabuf_fd;
++ dev_err(hdev->dev, "Failed to export dma-buf object when MMU is disabled\n");
++ rc = -EPERM;
+ break;
+
+ default:
+diff --git a/drivers/misc/habanalabs/common/mmu/mmu_v1.c b/drivers/misc/habanalabs/common/mmu/mmu_v1.c
+index 6134b6ae76157..3cadef97817d6 100644
+--- a/drivers/misc/habanalabs/common/mmu/mmu_v1.c
++++ b/drivers/misc/habanalabs/common/mmu/mmu_v1.c
+@@ -467,7 +467,7 @@ static void hl_mmu_v1_fini(struct hl_device *hdev)
+ {
+ /* MMU H/W fini was already done in device hw_fini() */
+
+- if (!ZERO_OR_NULL_PTR(hdev->mmu_priv.hr.mmu_shadow_hop0)) {
++ if (!ZERO_OR_NULL_PTR(hdev->mmu_priv.dr.mmu_shadow_hop0)) {
+ kvfree(hdev->mmu_priv.dr.mmu_shadow_hop0);
+ gen_pool_destroy(hdev->mmu_priv.dr.mmu_pgt_pool);
+
+diff --git a/drivers/misc/habanalabs/gaudi/gaudi.c b/drivers/misc/habanalabs/gaudi/gaudi.c
+index 013c6da2e3ca1..b4dacea801511 100644
+--- a/drivers/misc/habanalabs/gaudi/gaudi.c
++++ b/drivers/misc/habanalabs/gaudi/gaudi.c
+@@ -7819,6 +7819,48 @@ static void gaudi_print_fw_alive_info(struct hl_device *hdev,
+ fw_alive->thread_id, fw_alive->uptime_seconds);
+ }
+
++static void gaudi_print_nic_axi_irq_info(struct hl_device *hdev, u16 event_type,
++ void *data)
++{
++ char desc[64] = "", *type;
++ struct eq_nic_sei_event *eq_nic_sei = data;
++ u16 nic_id = event_type - GAUDI_EVENT_NIC_SEI_0;
++
++ switch (eq_nic_sei->axi_error_cause) {
++ case RXB:
++ type = "RXB";
++ break;
++ case RXE:
++ type = "RXE";
++ break;
++ case TXS:
++ type = "TXS";
++ break;
++ case TXE:
++ type = "TXE";
++ break;
++ case QPC_RESP:
++ type = "QPC_RESP";
++ break;
++ case NON_AXI_ERR:
++ type = "NON_AXI_ERR";
++ break;
++ case TMR:
++ type = "TMR";
++ break;
++ default:
++ dev_err(hdev->dev, "unknown NIC AXI cause %d\n",
++ eq_nic_sei->axi_error_cause);
++ type = "N/A";
++ break;
++ }
++
++ snprintf(desc, sizeof(desc), "NIC%d_%s%d", nic_id, type,
++ eq_nic_sei->id);
++ dev_err_ratelimited(hdev->dev, "Received H/W interrupt %d [\"%s\"]\n",
++ event_type, desc);
++}
++
+ static int gaudi_non_hard_reset_late_init(struct hl_device *hdev)
+ {
+ /* GAUDI doesn't support any reset except hard-reset */
+@@ -8066,6 +8108,7 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ struct hl_eq_entry *eq_entry)
+ {
+ struct gaudi_device *gaudi = hdev->asic_specific;
++ u64 data = le64_to_cpu(eq_entry->data[0]);
+ u32 ctl = le32_to_cpu(eq_entry->hdr.ctl);
+ u32 fw_fatal_err_flag = 0;
+ u16 event_type = ((ctl & EQ_CTL_EVENT_TYPE_MASK)
+@@ -8263,6 +8306,11 @@ static void gaudi_handle_eqe(struct hl_device *hdev,
+ hl_fw_unmask_irq(hdev, event_type);
+ break;
+
++ case GAUDI_EVENT_NIC_SEI_0 ... GAUDI_EVENT_NIC_SEI_4:
++ gaudi_print_nic_axi_irq_info(hdev, event_type, &data);
++ hl_fw_unmask_irq(hdev, event_type);
++ break;
++
+ case GAUDI_EVENT_DMA_IF_SEI_0 ... GAUDI_EVENT_DMA_IF_SEI_3:
+ gaudi_print_irq_info(hdev, event_type, false);
+ gaudi_print_sm_sei_info(hdev, event_type,
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 4e67c1403cc93..db99882c95d86 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1880,6 +1880,31 @@ static inline bool mmc_blk_rq_error(struct mmc_blk_request *brq)
+ brq->data.error || brq->cmd.resp[0] & CMD_ERRORS;
+ }
+
++static int mmc_spi_err_check(struct mmc_card *card)
++{
++ u32 status = 0;
++ int err;
++
++ /*
++ * SPI does not have a TRAN state we have to wait on, instead the
++ * card is ready again when it no longer holds the line LOW.
++ * We still have to ensure two things here before we know the write
++ * was successful:
++ * 1. The card has not disconnected during busy and we actually read our
++ * own pull-up, thinking it was still connected, so ensure it
++ * still responds.
++ * 2. Check for any error bits, in particular R1_SPI_IDLE to catch a
++ * just reconnected card after being disconnected during busy.
++ */
++ err = __mmc_send_status(card, &status, 0);
++ if (err)
++ return err;
++ /* All R1 and R2 bits of SPI are errors in our case */
++ if (status)
++ return -EIO;
++ return 0;
++}
++
+ static int mmc_blk_busy_cb(void *cb_data, bool *busy)
+ {
+ struct mmc_blk_busy_data *data = cb_data;
+@@ -1903,9 +1928,16 @@ static int mmc_blk_card_busy(struct mmc_card *card, struct request *req)
+ struct mmc_blk_busy_data cb_data;
+ int err;
+
+- if (mmc_host_is_spi(card->host) || rq_data_dir(req) == READ)
++ if (rq_data_dir(req) == READ)
+ return 0;
+
++ if (mmc_host_is_spi(card->host)) {
++ err = mmc_spi_err_check(card);
++ if (err)
++ mqrq->brq.data.bytes_xfered = 0;
++ return err;
++ }
++
+ cb_data.card = card;
+ cb_data.status = 0;
+ err = __mmc_poll_for_busy(card->host, 0, MMC_BLK_TIMEOUT_MS,
+@@ -2350,6 +2382,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
+ struct mmc_blk_data *md;
+ int devidx, ret;
+ char cap_str[10];
++ bool cache_enabled = false;
++ bool fua_enabled = false;
+
+ devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
+ if (devidx < 0) {
+@@ -2429,13 +2463,17 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
+ md->flags |= MMC_BLK_CMD23;
+ }
+
+- if (mmc_card_mmc(card) &&
+- md->flags & MMC_BLK_CMD23 &&
++ if (md->flags & MMC_BLK_CMD23 &&
+ ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
+ card->ext_csd.rel_sectors)) {
+ md->flags |= MMC_BLK_REL_WR;
+- blk_queue_write_cache(md->queue.queue, true, true);
++ fua_enabled = true;
++ cache_enabled = true;
+ }
++ if (mmc_cache_enabled(card->host))
++ cache_enabled = true;
++
++ blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
+
+ string_get_size((u64)size, 512, STRING_UNITS_2,
+ cap_str, sizeof(cap_str));
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 20f5687272778..f879dc63d9364 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -149,6 +149,11 @@ static const struct mmc_fixup __maybe_unused sdio_fixup_methods[] = {
+ static const struct mmc_fixup __maybe_unused sdio_card_init_methods[] = {
+ SDIO_FIXUP_COMPATIBLE("ti,wl1251", wl1251_quirk, 0),
+
++ SDIO_FIXUP_COMPATIBLE("silabs,wf200", add_quirk,
++ MMC_QUIRK_BROKEN_BYTE_MODE_512 |
++ MMC_QUIRK_LENIENT_FN0 |
++ MMC_QUIRK_BLKSZ_FOR_BYTE_MODE),
++
+ END_FIXUP
+ };
+
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index 9c13f2c313658..4566d7fc9055a 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -62,8 +62,8 @@ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ * excepted the last element which has no constraint on idmasize
+ */
+ for_each_sg(data->sg, sg, data->sg_len - 1, i) {
+- if (!IS_ALIGNED(data->sg->offset, sizeof(u32)) ||
+- !IS_ALIGNED(data->sg->length, SDMMC_IDMA_BURST)) {
++ if (!IS_ALIGNED(sg->offset, sizeof(u32)) ||
++ !IS_ALIGNED(sg->length, SDMMC_IDMA_BURST)) {
+ dev_err(mmc_dev(host->mmc),
+ "unaligned scatterlist: ofst:%x length:%d\n",
+ data->sg->offset, data->sg->length);
+@@ -71,7 +71,7 @@ static int sdmmc_idma_validate_data(struct mmci_host *host,
+ }
+ }
+
+- if (!IS_ALIGNED(data->sg->offset, sizeof(u32))) {
++ if (!IS_ALIGNED(sg->offset, sizeof(u32))) {
+ dev_err(mmc_dev(host->mmc),
+ "unaligned last scatterlist: ofst:%x length:%d\n",
+ data->sg->offset, data->sg->length);
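
The sdmmc bug was iterating with for_each_sg() but testing data->sg,
which is always the first list element, instead of the loop cursor sg.
The fix also leans on a for_each_sg() property: after looping over
sg_len - 1 entries, the cursor is left pointing at the final element,
which the last check then validates. The idiom, reduced to its core:

    #include <linux/kernel.h>
    #include <linux/scatterlist.h>
    #include <linux/types.h>

    /* Validate all but the last element, then the last one separately. */
    static int check_alignment(struct scatterlist *sgl, unsigned int nents)
    {
        struct scatterlist *sg;
        unsigned int i;

        for_each_sg(sgl, sg, nents - 1, i) {
            if (!IS_ALIGNED(sg->offset, sizeof(u32)))
                return -EINVAL;     /* sg, not sgl: the cursor */
        }

        /* after the loop, sg points at the final element */
        if (!IS_ALIGNED(sg->offset, sizeof(u32)))
            return -EINVAL;

        return 0;
    }
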
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 2797a9c0f17d8..ddb5ca2f559e2 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -144,9 +144,9 @@ static unsigned int renesas_sdhi_clk_update(struct tmio_mmc_host *host,
+ return clk_get_rate(priv->clk);
+
+ if (priv->clkh) {
++ /* HS400 with 4TAP needs different clock settings */
+ bool use_4tap = priv->quirks && priv->quirks->hs400_4taps;
+- bool need_slow_clkh = (host->mmc->ios.timing == MMC_TIMING_UHS_SDR104) ||
+- (host->mmc->ios.timing == MMC_TIMING_MMC_HS400);
++ bool need_slow_clkh = host->mmc->ios.timing == MMC_TIMING_MMC_HS400;
+ clkh_shift = use_4tap && need_slow_clkh ? 1 : 2;
+ ref_clk = priv->clkh;
+ }
+@@ -396,10 +396,10 @@ static void renesas_sdhi_hs400_complete(struct mmc_host *mmc)
+ SH_MOBILE_SDHI_SCC_TMPPORT2_HS400OSEL) |
+ sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_TMPPORT2));
+
+- /* Set the sampling clock selection range of HS400 mode */
+ sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_DTCNTL,
+ SH_MOBILE_SDHI_SCC_DTCNTL_TAPEN |
+- 0x4 << SH_MOBILE_SDHI_SCC_DTCNTL_TAPNUM_SHIFT);
++ sd_scc_read32(host, priv,
++ SH_MOBILE_SDHI_SCC_DTCNTL));
+
+ /* Avoid bad TAP */
+ if (bad_taps & BIT(priv->tap_set)) {
+diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
+index 666cee4c7f7c6..08e838400b526 100644
+--- a/drivers/mmc/host/sdhci-xenon.c
++++ b/drivers/mmc/host/sdhci-xenon.c
+@@ -241,16 +241,6 @@ static void xenon_voltage_switch(struct sdhci_host *host)
+ {
+ /* Wait for 5ms after set 1.8V signal enable bit */
+ usleep_range(5000, 5500);
+-
+- /*
+- * For some reason the controller's Host Control2 register reports
+- * the bit representing 1.8V signaling as 0 when read after it was
+- * written as 1. Subsequent read reports 1.
+- *
+- * Since this may cause some issues, do an empty read of the Host
+- * Control2 register here to circumvent this.
+- */
+- sdhci_readw(host, SDHCI_HOST_CONTROL2);
+ }
+
+ static unsigned int xenon_get_max_clock(struct sdhci_host *host)
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_fd.c b/drivers/net/can/usb/etas_es58x/es58x_fd.c
+index ec87126e1a7df..8ccda748fd084 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_fd.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_fd.c
+@@ -172,12 +172,11 @@ static int es58x_fd_rx_event_msg(struct net_device *netdev,
+ const struct es58x_fd_rx_event_msg *rx_event_msg;
+ int ret;
+
++ rx_event_msg = &es58x_fd_urb_cmd->rx_event_msg;
+ ret = es58x_check_msg_len(es58x_dev->dev, *rx_event_msg, msg_len);
+ if (ret)
+ return ret;
+
+- rx_event_msg = &es58x_fd_urb_cmd->rx_event_msg;
+-
+ return es58x_rx_err_msg(netdev, rx_event_msg->error_code,
+ rx_event_msg->event_code,
+ get_unaligned_le64(&rx_event_msg->timestamp));
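
The es58x_fd hunk above moves the rx_event_msg assignment before the length check; prior to the fix, the check dereferenced a pointer that had not been set yet. A minimal reproduction of that ordering bug with hypothetical message types (handle_buggy is kept only to show the broken shape and is never executed):

#include <stddef.h>
#include <stdio.h>

struct msg { size_t len; };

static int check_len(const struct msg *m, size_t expected)
{
	return (m && m->len == expected) ? 0 : -1;  /* reads *m */
}

static int handle_buggy(struct msg *raw)
{
	struct msg *m;                 /* not yet assigned */

	if (check_len(m, sizeof(*m)))  /* UB: m read before initialization */
		return -1;
	m = raw;
	return 0;
}

static int handle_fixed(struct msg *raw)
{
	struct msg *m = raw;           /* assign first, then validate */

	if (check_len(m, sizeof(*m)))
		return -1;
	return 0;
}

int main(void)
{
	struct msg raw = { .len = sizeof(struct msg) };

	printf("fixed=%d\n", handle_fixed(&raw));
	(void)handle_buggy;
	return 0;
}
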
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 33f0ceae381d4..2875b52508567 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1940,6 +1940,10 @@ static int vsc9959_psfp_filter_add(struct ocelot *ocelot, int port,
+ case FLOW_ACTION_GATE:
+ size = struct_size(sgi, entries, a->gate.num_entries);
+ sgi = kzalloc(size, GFP_KERNEL);
++ if (!sgi) {
++ ret = -ENOMEM;
++ goto err;
++ }
+ vsc9959_psfp_parse_gate(a, sgi);
+ ret = vsc9959_psfp_sgi_table_add(ocelot, sgi);
+ if (ret) {
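
The felix_vsc9959 hunk above adds the missing allocation-failure check: kzalloc() can return NULL, and the old code passed that NULL straight into the gate parser. The standard pattern, sketched standalone with calloc standing in for kzalloc:

#include <stdlib.h>
#include <errno.h>

struct gate_cfg { int rate; int entries[8]; };

static int add_gate(struct gate_cfg **out)
{
	struct gate_cfg *sgi = calloc(1, sizeof(*sgi));

	if (!sgi)
		return -ENOMEM;   /* bail out before any dereference */
	sgi->rate = 1000;
	*out = sgi;
	return 0;
}

int main(void)
{
	struct gate_cfg *g = NULL;

	if (add_gate(&g) == 0)
		free(g);
	return 0;
}
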
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index b1c98d1408b82..6af0ae1d0c462 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -3224,6 +3224,7 @@ static int bnxt_alloc_tx_rings(struct bnxt *bp)
+ }
+ qidx = bp->tc_to_qidx[j];
+ ring->queue_id = bp->q_info[qidx].queue_id;
++ spin_lock_init(&txr->xdp_tx_lock);
+ if (i < bp->tx_nr_rings_xdp)
+ continue;
+ if (i % bp->tx_nr_rings_per_tc == (bp->tx_nr_rings_per_tc - 1))
+@@ -10294,6 +10295,12 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ if (irq_re_init)
+ udp_tunnel_nic_reset_ntf(bp->dev);
+
++ if (bp->tx_nr_rings_xdp < num_possible_cpus()) {
++ if (!static_key_enabled(&bnxt_xdp_locking_key))
++ static_branch_enable(&bnxt_xdp_locking_key);
++ } else if (static_key_enabled(&bnxt_xdp_locking_key)) {
++ static_branch_disable(&bnxt_xdp_locking_key);
++ }
+ set_bit(BNXT_STATE_OPEN, &bp->state);
+ bnxt_enable_int(bp);
+ /* Enable TX queues */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 666fc1e7a7d2f..d57bff46b5878 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -593,7 +593,8 @@ struct nqe_cn {
+ #define BNXT_MAX_MTU 9500
+ #define BNXT_MAX_PAGE_MODE_MTU \
+ ((unsigned int)PAGE_SIZE - VLAN_ETH_HLEN - NET_IP_ALIGN - \
+- XDP_PACKET_HEADROOM)
++ XDP_PACKET_HEADROOM - \
++ SKB_DATA_ALIGN((unsigned int)sizeof(struct skb_shared_info)))
+
+ #define BNXT_MIN_PKT_SIZE 52
+
+@@ -800,6 +801,8 @@ struct bnxt_tx_ring_info {
+ u32 dev_state;
+
+ struct bnxt_ring_struct tx_ring_struct;
++ /* Synchronize simultaneous xdp_xmit on same ring */
++ spinlock_t xdp_tx_lock;
+ };
+
+ #define BNXT_LEGACY_COAL_CMPL_PARAMS \
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 8aaa2335f848a..f09b04556c32e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2101,9 +2101,7 @@ static int bnxt_set_pauseparam(struct net_device *dev,
+ }
+
+ link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+- if (bp->hwrm_spec_code >= 0x10201)
+- link_info->req_flow_ctrl =
+- PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE;
++ link_info->req_flow_ctrl = 0;
+ } else {
+ /* when transition from auto pause to force pause,
+ * force a link change
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index 52fad0fdeacf3..03b1d6c045048 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -20,6 +20,8 @@
+ #include "bnxt.h"
+ #include "bnxt_xdp.h"
+
++DEFINE_STATIC_KEY_FALSE(bnxt_xdp_locking_key);
++
+ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
+ struct bnxt_tx_ring_info *txr,
+ dma_addr_t mapping, u32 len)
+@@ -227,11 +229,16 @@ int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
+ ring = smp_processor_id() % bp->tx_nr_rings_xdp;
+ txr = &bp->tx_ring[ring];
+
++ if (READ_ONCE(txr->dev_state) == BNXT_DEV_STATE_CLOSING)
++ return -EINVAL;
++
++ if (static_branch_unlikely(&bnxt_xdp_locking_key))
++ spin_lock(&txr->xdp_tx_lock);
++
+ for (i = 0; i < num_frames; i++) {
+ struct xdp_frame *xdp = frames[i];
+
+- if (!txr || !bnxt_tx_avail(bp, txr) ||
+- !(bp->bnapi[ring]->flags & BNXT_NAPI_FLAG_XDP))
++ if (!bnxt_tx_avail(bp, txr))
+ break;
+
+ mapping = dma_map_single(&pdev->dev, xdp->data, xdp->len,
+@@ -250,6 +257,9 @@ int bnxt_xdp_xmit(struct net_device *dev, int num_frames,
+ bnxt_db_write(bp, &txr->tx_db, txr->tx_prod);
+ }
+
++ if (static_branch_unlikely(&bnxt_xdp_locking_key))
++ spin_unlock(&txr->xdp_tx_lock);
++
+ return nxmit;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+index 0df40c3beb050..067bb5e821f54 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+@@ -10,6 +10,8 @@
+ #ifndef BNXT_XDP_H
+ #define BNXT_XDP_H
+
++DECLARE_STATIC_KEY_FALSE(bnxt_xdp_locking_key);
++
+ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
+ struct bnxt_tx_ring_info *txr,
+ dma_addr_t mapping, u32 len);
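
The bnxt hunks above serialize concurrent XDP transmit on a shared ring: with fewer XDP TX rings than CPUs, two CPUs can pick the same ring via smp_processor_id() % nr_rings, so a per-ring spinlock is taken, gated by a static key so the fast path stays lock-free when rings are dedicated. A userspace model of the same idea using pthreads; a plain bool stands in for the kernel's runtime-patched static key:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_RINGS 2

struct tx_ring {
	pthread_spinlock_t lock;
	long produced;
};

static struct tx_ring rings[NR_RINGS];
static bool need_locking;   /* kernel uses a static key patched at runtime */

static void xmit(int cpu, int frames)
{
	struct tx_ring *r = &rings[cpu % NR_RINGS]; /* CPUs may share a ring */

	if (need_locking)
		pthread_spin_lock(&r->lock);
	for (int i = 0; i < frames; i++)
		r->produced++;                 /* ring producer update */
	if (need_locking)
		pthread_spin_unlock(&r->lock);
}

int main(void)
{
	for (int i = 0; i < NR_RINGS; i++)
		pthread_spin_init(&rings[i].lock, PTHREAD_PROCESS_PRIVATE);

	need_locking = 4 /* CPUs */ > NR_RINGS;  /* mirrors the __bnxt_open_nic test */
	xmit(0, 8);
	xmit(2, 8);                              /* lands on the same ring as CPU 0 */
	printf("ring0 produced=%ld\n", rings[0].produced);
	return 0;
}
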
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
+index 5f5f8c53c4a0f..c8cb541572ffe 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ptp.c
+@@ -167,7 +167,7 @@ static int dpaa2_ptp_probe(struct fsl_mc_device *mc_dev)
+ base = of_iomap(node, 0);
+ if (!base) {
+ err = -ENOMEM;
+- goto err_close;
++ goto err_put;
+ }
+
+ err = fsl_mc_allocate_irqs(mc_dev);
+@@ -210,6 +210,8 @@ err_free_mc_irq:
+ fsl_mc_free_irqs(mc_dev);
+ err_unmap:
+ iounmap(base);
++err_put:
++ of_node_put(node);
+ err_close:
+ dprtc_close(mc_dev->mc_io, 0, mc_dev->mc_handle);
+ err_free_mcp:
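
The dpaa2-ptp hunk above is an error-unwind fix: an of_iomap() failure jumped to err_close and skipped the of_node_put() that balances the earlier node lookup, so a new err_put label releases the reference before falling through. The goto-ladder convention it relies on, in a standalone sketch with hypothetical get_ref/put_ref resources:

#include <stdio.h>
#include <stdlib.h>

static int refs;
static void *get_ref(void) { refs++; return &refs; }
static void  put_ref(void *r) { (void)r; refs--; }

static int probe(int fail_map)
{
	void *node = get_ref();
	void *base = fail_map ? NULL : malloc(16);

	if (!base)
		goto err_put;        /* unwind in strict reverse order */

	free(base);
	put_ref(node);
	return 0;

err_put:
	put_ref(node);               /* the step the original code skipped */
	return -1;
}

int main(void)
{
	probe(1);
	printf("leaked refs after failed probe: %d\n", refs);  /* 0 */
	return 0;
}
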
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
+index 9ddeb015eb7eb..e830987a8c6dd 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
+@@ -1899,8 +1899,9 @@ i40e_status i40e_aq_add_vsi(struct i40e_hw *hw,
+
+ desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+
+- status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info,
+- sizeof(vsi_ctx->info), cmd_details);
++ status = i40e_asq_send_command_atomic(hw, &desc, &vsi_ctx->info,
++ sizeof(vsi_ctx->info),
++ cmd_details, true);
+
+ if (status)
+ goto aq_add_vsi_exit;
+@@ -2287,8 +2288,9 @@ i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
+
+ desc.flags |= cpu_to_le16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD));
+
+- status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info,
+- sizeof(vsi_ctx->info), cmd_details);
++ status = i40e_asq_send_command_atomic(hw, &desc, &vsi_ctx->info,
++ sizeof(vsi_ctx->info),
++ cmd_details, true);
+
+ vsi_ctx->vsis_allocated = le16_to_cpu(resp->vsi_used);
+ vsi_ctx->vsis_unallocated = le16_to_cpu(resp->vsi_free);
+@@ -2673,8 +2675,8 @@ i40e_status i40e_aq_add_macvlan(struct i40e_hw *hw, u16 seid,
+ if (buf_size > I40E_AQ_LARGE_BUF)
+ desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_LB);
+
+- status = i40e_asq_send_command(hw, &desc, mv_list, buf_size,
+- cmd_details);
++ status = i40e_asq_send_command_atomic(hw, &desc, mv_list, buf_size,
++ cmd_details, true);
+
+ return status;
+ }
+@@ -2715,8 +2717,8 @@ i40e_status i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid,
+ if (buf_size > I40E_AQ_LARGE_BUF)
+ desc.flags |= cpu_to_le16((u16)I40E_AQ_FLAG_LB);
+
+- status = i40e_asq_send_command(hw, &desc, mv_list, buf_size,
+- cmd_details);
++ status = i40e_asq_send_command_atomic(hw, &desc, mv_list, buf_size,
++ cmd_details, true);
+
+ return status;
+ }
+@@ -3868,7 +3870,8 @@ i40e_status i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
+
+ cmd->seid = cpu_to_le16(seid);
+
+- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
++ status = i40e_asq_send_command_atomic(hw, &desc, NULL, 0,
++ cmd_details, true);
+
+ return status;
+ }
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 4babe4705a552..358a9b3031d5c 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -44,6 +44,9 @@
+ #define DEFAULT_DEBUG_LEVEL_SHIFT 3
+ #define PFX "iavf: "
+
++int iavf_status_to_errno(enum iavf_status status);
++int virtchnl_status_to_errno(enum virtchnl_status_code v_status);
++
+ /* VSI state flags shared with common code */
+ enum iavf_vsi_state_t {
+ __IAVF_VSI_DOWN,
+@@ -515,7 +518,7 @@ void iavf_add_vlans(struct iavf_adapter *adapter);
+ void iavf_del_vlans(struct iavf_adapter *adapter);
+ void iavf_set_promiscuous(struct iavf_adapter *adapter, int flags);
+ void iavf_request_stats(struct iavf_adapter *adapter);
+-void iavf_request_reset(struct iavf_adapter *adapter);
++int iavf_request_reset(struct iavf_adapter *adapter);
+ void iavf_get_hena(struct iavf_adapter *adapter);
+ void iavf_set_hena(struct iavf_adapter *adapter);
+ void iavf_set_rss_key(struct iavf_adapter *adapter);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 0e178a0a59c5d..d10e9a8e8011f 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -51,6 +51,113 @@ MODULE_LICENSE("GPL v2");
+ static const struct net_device_ops iavf_netdev_ops;
+ struct workqueue_struct *iavf_wq;
+
++int iavf_status_to_errno(enum iavf_status status)
++{
++ switch (status) {
++ case IAVF_SUCCESS:
++ return 0;
++ case IAVF_ERR_PARAM:
++ case IAVF_ERR_MAC_TYPE:
++ case IAVF_ERR_INVALID_MAC_ADDR:
++ case IAVF_ERR_INVALID_LINK_SETTINGS:
++ case IAVF_ERR_INVALID_PD_ID:
++ case IAVF_ERR_INVALID_QP_ID:
++ case IAVF_ERR_INVALID_CQ_ID:
++ case IAVF_ERR_INVALID_CEQ_ID:
++ case IAVF_ERR_INVALID_AEQ_ID:
++ case IAVF_ERR_INVALID_SIZE:
++ case IAVF_ERR_INVALID_ARP_INDEX:
++ case IAVF_ERR_INVALID_FPM_FUNC_ID:
++ case IAVF_ERR_QP_INVALID_MSG_SIZE:
++ case IAVF_ERR_INVALID_FRAG_COUNT:
++ case IAVF_ERR_INVALID_ALIGNMENT:
++ case IAVF_ERR_INVALID_PUSH_PAGE_INDEX:
++ case IAVF_ERR_INVALID_IMM_DATA_SIZE:
++ case IAVF_ERR_INVALID_VF_ID:
++ case IAVF_ERR_INVALID_HMCFN_ID:
++ case IAVF_ERR_INVALID_PBLE_INDEX:
++ case IAVF_ERR_INVALID_SD_INDEX:
++ case IAVF_ERR_INVALID_PAGE_DESC_INDEX:
++ case IAVF_ERR_INVALID_SD_TYPE:
++ case IAVF_ERR_INVALID_HMC_OBJ_INDEX:
++ case IAVF_ERR_INVALID_HMC_OBJ_COUNT:
++ case IAVF_ERR_INVALID_SRQ_ARM_LIMIT:
++ return -EINVAL;
++ case IAVF_ERR_NVM:
++ case IAVF_ERR_NVM_CHECKSUM:
++ case IAVF_ERR_PHY:
++ case IAVF_ERR_CONFIG:
++ case IAVF_ERR_UNKNOWN_PHY:
++ case IAVF_ERR_LINK_SETUP:
++ case IAVF_ERR_ADAPTER_STOPPED:
++ case IAVF_ERR_MASTER_REQUESTS_PENDING:
++ case IAVF_ERR_AUTONEG_NOT_COMPLETE:
++ case IAVF_ERR_RESET_FAILED:
++ case IAVF_ERR_BAD_PTR:
++ case IAVF_ERR_SWFW_SYNC:
++ case IAVF_ERR_QP_TOOMANY_WRS_POSTED:
++ case IAVF_ERR_QUEUE_EMPTY:
++ case IAVF_ERR_FLUSHED_QUEUE:
++ case IAVF_ERR_OPCODE_MISMATCH:
++ case IAVF_ERR_CQP_COMPL_ERROR:
++ case IAVF_ERR_BACKING_PAGE_ERROR:
++ case IAVF_ERR_NO_PBLCHUNKS_AVAILABLE:
++ case IAVF_ERR_MEMCPY_FAILED:
++ case IAVF_ERR_SRQ_ENABLED:
++ case IAVF_ERR_ADMIN_QUEUE_ERROR:
++ case IAVF_ERR_ADMIN_QUEUE_FULL:
++ case IAVF_ERR_BAD_IWARP_CQE:
++ case IAVF_ERR_NVM_BLANK_MODE:
++ case IAVF_ERR_PE_DOORBELL_NOT_ENABLED:
++ case IAVF_ERR_DIAG_TEST_FAILED:
++ case IAVF_ERR_FIRMWARE_API_VERSION:
++ case IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR:
++ return -EIO;
++ case IAVF_ERR_DEVICE_NOT_SUPPORTED:
++ return -ENODEV;
++ case IAVF_ERR_NO_AVAILABLE_VSI:
++ case IAVF_ERR_RING_FULL:
++ return -ENOSPC;
++ case IAVF_ERR_NO_MEMORY:
++ return -ENOMEM;
++ case IAVF_ERR_TIMEOUT:
++ case IAVF_ERR_ADMIN_QUEUE_TIMEOUT:
++ return -ETIMEDOUT;
++ case IAVF_ERR_NOT_IMPLEMENTED:
++ case IAVF_NOT_SUPPORTED:
++ return -EOPNOTSUPP;
++ case IAVF_ERR_ADMIN_QUEUE_NO_WORK:
++ return -EALREADY;
++ case IAVF_ERR_NOT_READY:
++ return -EBUSY;
++ case IAVF_ERR_BUF_TOO_SHORT:
++ return -EMSGSIZE;
++ }
++
++ return -EIO;
++}
++
++int virtchnl_status_to_errno(enum virtchnl_status_code v_status)
++{
++ switch (v_status) {
++ case VIRTCHNL_STATUS_SUCCESS:
++ return 0;
++ case VIRTCHNL_STATUS_ERR_PARAM:
++ case VIRTCHNL_STATUS_ERR_INVALID_VF_ID:
++ return -EINVAL;
++ case VIRTCHNL_STATUS_ERR_NO_MEMORY:
++ return -ENOMEM;
++ case VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH:
++ case VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR:
++ case VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR:
++ return -EIO;
++ case VIRTCHNL_STATUS_ERR_NOT_SUPPORTED:
++ return -EOPNOTSUPP;
++ }
++
++ return -EIO;
++}
++
+ /**
+ * iavf_pdev_to_adapter - go from pci_dev to adapter
+ * @pdev: pci_dev pointer
+@@ -1421,7 +1528,7 @@ static int iavf_config_rss_aq(struct iavf_adapter *adapter)
+ struct iavf_aqc_get_set_rss_key_data *rss_key =
+ (struct iavf_aqc_get_set_rss_key_data *)adapter->rss_key;
+ struct iavf_hw *hw = &adapter->hw;
+- int ret = 0;
++ enum iavf_status status;
+
+ if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
+ /* bail because we already have a command pending */
+@@ -1430,24 +1537,25 @@ static int iavf_config_rss_aq(struct iavf_adapter *adapter)
+ return -EBUSY;
+ }
+
+- ret = iavf_aq_set_rss_key(hw, adapter->vsi.id, rss_key);
+- if (ret) {
++ status = iavf_aq_set_rss_key(hw, adapter->vsi.id, rss_key);
++ if (status) {
+ dev_err(&adapter->pdev->dev, "Cannot set RSS key, err %s aq_err %s\n",
+- iavf_stat_str(hw, ret),
++ iavf_stat_str(hw, status),
+ iavf_aq_str(hw, hw->aq.asq_last_status));
+- return ret;
++ return iavf_status_to_errno(status);
+
+ }
+
+- ret = iavf_aq_set_rss_lut(hw, adapter->vsi.id, false,
+- adapter->rss_lut, adapter->rss_lut_size);
+- if (ret) {
++ status = iavf_aq_set_rss_lut(hw, adapter->vsi.id, false,
++ adapter->rss_lut, adapter->rss_lut_size);
++ if (status) {
+ dev_err(&adapter->pdev->dev, "Cannot set RSS lut, err %s aq_err %s\n",
+- iavf_stat_str(hw, ret),
++ iavf_stat_str(hw, status),
+ iavf_aq_str(hw, hw->aq.asq_last_status));
++ return iavf_status_to_errno(status);
+ }
+
+- return ret;
++ return 0;
+
+ }
+
+@@ -2003,23 +2111,24 @@ static void iavf_startup(struct iavf_adapter *adapter)
+ {
+ struct pci_dev *pdev = adapter->pdev;
+ struct iavf_hw *hw = &adapter->hw;
+- int err;
++ enum iavf_status status;
++ int ret;
+
+ WARN_ON(adapter->state != __IAVF_STARTUP);
+
+ /* driver loaded, probe complete */
+ adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
+ adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
+- err = iavf_set_mac_type(hw);
+- if (err) {
+- dev_err(&pdev->dev, "Failed to set MAC type (%d)\n", err);
++ status = iavf_set_mac_type(hw);
++ if (status) {
++ dev_err(&pdev->dev, "Failed to set MAC type (%d)\n", status);
+ goto err;
+ }
+
+- err = iavf_check_reset_complete(hw);
+- if (err) {
++ ret = iavf_check_reset_complete(hw);
++ if (ret) {
+ dev_info(&pdev->dev, "Device is still in reset (%d), retrying\n",
+- err);
++ ret);
+ goto err;
+ }
+ hw->aq.num_arq_entries = IAVF_AQ_LEN;
+@@ -2027,14 +2136,15 @@ static void iavf_startup(struct iavf_adapter *adapter)
+ hw->aq.arq_buf_size = IAVF_MAX_AQ_BUF_SIZE;
+ hw->aq.asq_buf_size = IAVF_MAX_AQ_BUF_SIZE;
+
+- err = iavf_init_adminq(hw);
+- if (err) {
+- dev_err(&pdev->dev, "Failed to init Admin Queue (%d)\n", err);
++ status = iavf_init_adminq(hw);
++ if (status) {
++ dev_err(&pdev->dev, "Failed to init Admin Queue (%d)\n",
++ status);
+ goto err;
+ }
+- err = iavf_send_api_ver(adapter);
+- if (err) {
+- dev_err(&pdev->dev, "Unable to send to PF (%d)\n", err);
++ ret = iavf_send_api_ver(adapter);
++ if (ret) {
++ dev_err(&pdev->dev, "Unable to send to PF (%d)\n", ret);
+ iavf_shutdown_adminq(hw);
+ goto err;
+ }
+@@ -2070,7 +2180,7 @@ static void iavf_init_version_check(struct iavf_adapter *adapter)
+ /* aq msg sent, awaiting reply */
+ err = iavf_verify_api_ver(adapter);
+ if (err) {
+- if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK)
++ if (err == -EALREADY)
+ err = iavf_send_api_ver(adapter);
+ else
+ dev_err(&pdev->dev, "Unsupported PF API version %d.%d, expected %d.%d\n",
+@@ -2171,11 +2281,11 @@ static void iavf_init_get_resources(struct iavf_adapter *adapter)
+ }
+ }
+ err = iavf_get_vf_config(adapter);
+- if (err == IAVF_ERR_ADMIN_QUEUE_NO_WORK) {
++ if (err == -EALREADY) {
+ err = iavf_send_vf_config_msg(adapter);
+ goto err_alloc;
+- } else if (err == IAVF_ERR_PARAM) {
+- /* We only get ERR_PARAM if the device is in a very bad
++ } else if (err == -EINVAL) {
++ /* We only get -EINVAL if the device is in a very bad
+ * state or if we've been disabled for previous bad
+ * behavior. Either way, we're done now.
+ */
+@@ -2626,6 +2736,7 @@ static void iavf_reset_task(struct work_struct *work)
+ struct iavf_hw *hw = &adapter->hw;
+ struct iavf_mac_filter *f, *ftmp;
+ struct iavf_cloud_filter *cf;
++ enum iavf_status status;
+ u32 reg_val;
+ int i = 0, err;
+ bool running;
+@@ -2727,10 +2838,12 @@ continue_reset:
+ /* kill and reinit the admin queue */
+ iavf_shutdown_adminq(hw);
+ adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+- err = iavf_init_adminq(hw);
+- if (err)
++ status = iavf_init_adminq(hw);
++ if (status) {
+ dev_info(&adapter->pdev->dev, "Failed to init adminq: %d\n",
+- err);
++ status);
++ goto reset_err;
++ }
+ adapter->aq_required = 0;
+
+ if ((adapter->flags & IAVF_FLAG_REINIT_MSIX_NEEDED) ||
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+index 5263cefe46f5f..b8c5837f8b505 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+@@ -22,17 +22,17 @@ static int iavf_send_pf_msg(struct iavf_adapter *adapter,
+ enum virtchnl_ops op, u8 *msg, u16 len)
+ {
+ struct iavf_hw *hw = &adapter->hw;
+- enum iavf_status err;
++ enum iavf_status status;
+
+ if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ return 0; /* nothing to see here, move along */
+
+- err = iavf_aq_send_msg_to_pf(hw, op, 0, msg, len, NULL);
+- if (err)
+- dev_dbg(&adapter->pdev->dev, "Unable to send opcode %d to PF, err %s, aq_err %s\n",
+- op, iavf_stat_str(hw, err),
++ status = iavf_aq_send_msg_to_pf(hw, op, 0, msg, len, NULL);
++ if (status)
++ dev_dbg(&adapter->pdev->dev, "Unable to send opcode %d to PF, status %s, aq_err %s\n",
++ op, iavf_stat_str(hw, status),
+ iavf_aq_str(hw, hw->aq.asq_last_status));
+- return err;
++ return iavf_status_to_errno(status);
+ }
+
+ /**
+@@ -1827,11 +1827,13 @@ void iavf_del_adv_rss_cfg(struct iavf_adapter *adapter)
+ *
+ * Request that the PF reset this VF. No response is expected.
+ **/
+-void iavf_request_reset(struct iavf_adapter *adapter)
++int iavf_request_reset(struct iavf_adapter *adapter)
+ {
++ int err;
+ /* Don't check CURRENT_OP - this is always higher priority */
+- iavf_send_pf_msg(adapter, VIRTCHNL_OP_RESET_VF, NULL, 0);
++ err = iavf_send_pf_msg(adapter, VIRTCHNL_OP_RESET_VF, NULL, 0);
+ adapter->current_op = VIRTCHNL_OP_UNKNOWN;
++ return err;
+ }
+
+ /**
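
The iavf hunks above stop leaking firmware-private iavf_status values to callers that expect negative errnos: every admin-queue result is now translated through iavf_status_to_errno()/virtchnl_status_to_errno() at the driver boundary. The wrapping pattern, reduced to a standalone sketch with a hypothetical hw_status enum:

#include <errno.h>
#include <stdio.h>

enum hw_status { HW_OK = 0, HW_PARAM = 5, HW_TIMEOUT = 12 };  /* firmware-private */

static int hw_status_to_errno(enum hw_status s)
{
	switch (s) {
	case HW_OK:      return 0;
	case HW_PARAM:   return -EINVAL;
	case HW_TIMEOUT: return -ETIMEDOUT;
	}
	return -EIO;   /* anything unrecognized maps to a generic I/O error */
}

static enum hw_status hw_send(int bad) { return bad ? HW_PARAM : HW_OK; }

/* Callers only ever see errnos, never raw enum hw_status values. */
static int send_command(int bad)
{
	return hw_status_to_errno(hw_send(bad));
}

int main(void)
{
	printf("%d %d\n", send_command(0), send_command(1)); /* 0 -22 */
	return 0;
}
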
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 2f60230d332a6..9c04a71a9fca3 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -674,7 +674,7 @@ static inline struct ice_pf *ice_netdev_to_pf(struct net_device *netdev)
+
+ static inline bool ice_is_xdp_ena_vsi(struct ice_vsi *vsi)
+ {
+- return !!vsi->xdp_prog;
++ return !!READ_ONCE(vsi->xdp_prog);
+ }
+
+ static inline void ice_set_ring_xdp(struct ice_tx_ring *ring)
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 53256aca27c78..5fd2bbeab2d15 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -1452,6 +1452,7 @@ static int ice_vsi_alloc_rings(struct ice_vsi *vsi)
+ ring->tx_tstamps = &pf->ptp.port.tx;
+ ring->dev = dev;
+ ring->count = vsi->num_tx_desc;
++ ring->txq_teid = ICE_INVAL_TEID;
+ WRITE_ONCE(vsi->tx_rings[i], ring);
+ }
+
+@@ -3147,6 +3148,8 @@ int ice_vsi_release(struct ice_vsi *vsi)
+ }
+ }
+
++ if (ice_is_vsi_dflt_vsi(pf->first_sw, vsi))
++ ice_clear_dflt_vsi(pf->first_sw);
+ ice_fltr_remove_all(vsi);
+ ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+ err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 296f9d5f74084..db2e02e673a77 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2546,7 +2546,7 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi)
+ spin_lock_init(&xdp_ring->tx_lock);
+ for (j = 0; j < xdp_ring->count; j++) {
+ tx_desc = ICE_TX_DESC(xdp_ring, j);
+- tx_desc->cmd_type_offset_bsz = cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE);
++ tx_desc->cmd_type_offset_bsz = 0;
+ }
+ }
+
+@@ -2742,8 +2742,10 @@ free_qmap:
+
+ ice_for_each_xdp_txq(vsi, i)
+ if (vsi->xdp_rings[i]) {
+- if (vsi->xdp_rings[i]->desc)
++ if (vsi->xdp_rings[i]->desc) {
++ synchronize_rcu();
+ ice_free_tx_ring(vsi->xdp_rings[i]);
++ }
+ kfree_rcu(vsi->xdp_rings[i], rcu);
+ vsi->xdp_rings[i] = NULL;
+ }
+@@ -5432,16 +5434,19 @@ static int ice_set_mac_address(struct net_device *netdev, void *pi)
+
+ /* Add filter for new MAC. If filter exists, return success */
+ err = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI);
+- if (err == -EEXIST)
++ if (err == -EEXIST) {
+ /* Although this MAC filter is already present in hardware it's
+ * possible in some cases (e.g. bonding) that dev_addr was
+ * modified outside of the driver and needs to be restored back
+ * to this value.
+ */
+ netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac);
+- else if (err)
++
++ return 0;
++ } else if (err) {
+ /* error if the new filter addition failed */
+ err = -EADDRNOTAVAIL;
++ }
+
+ err_update_filters:
+ if (err) {
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+index 1be3cd4b2bef7..2bee8f10ad89c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+@@ -3351,9 +3351,9 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
+ goto error_param;
+ }
+
+- /* Skip queue if not enabled */
+ if (!test_bit(vf_q_id, vf->txq_ena))
+- continue;
++ dev_dbg(ice_pf_to_dev(vsi->back), "Queue %u on VSI %u is not enabled, but stopping it anyway\n",
++ vf_q_id, vsi->vsi_num);
+
+ ice_fill_txq_meta(vsi, ring, &txq_meta);
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index feb874bde171f..30620b942fa09 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -41,8 +41,10 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
+ static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
+ {
+ ice_clean_tx_ring(vsi->tx_rings[q_idx]);
+- if (ice_is_xdp_ena_vsi(vsi))
++ if (ice_is_xdp_ena_vsi(vsi)) {
++ synchronize_rcu();
+ ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
++ }
+ ice_clean_rx_ring(vsi->rx_rings[q_idx]);
+ }
+
+@@ -763,7 +765,7 @@ ice_xsk_wakeup(struct net_device *netdev, u32 queue_id,
+ struct ice_vsi *vsi = np->vsi;
+ struct ice_tx_ring *ring;
+
+- if (test_bit(ICE_DOWN, vsi->state))
++ if (test_bit(ICE_VSI_DOWN, vsi->state))
+ return -ENETDOWN;
+
+ if (!ice_is_xdp_ena_vsi(vsi))
+diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
+index 143ca8be5eb59..4008596963be4 100644
+--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
+@@ -2751,7 +2751,7 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+ }
+
+ ret = of_get_mac_address(pnp, ppd.mac_addr);
+- if (ret)
++ if (ret == -EPROBE_DEFER)
+ return ret;
+
+ mv643xx_eth_property(pnp, "tx-queue-size", ppd.tx_queue_size);
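
The mv643xx_eth hunk above narrows the error handling around of_get_mac_address(): only -EPROBE_DEFER should abort port setup (the MAC may come from a provider that is not ready yet), while a merely missing DT property falls through to a fallback address. A sketch of treating one errno as fatal and the rest as soft, with dt_get_mac() as a hypothetical stand-in:

#include <errno.h>
#include <stdio.h>

/* Stand-in for of_get_mac_address(): 0 on success, -EINVAL when the
 * property is absent, -EPROBE_DEFER when the provider is not ready. */
static int dt_get_mac(int scenario)
{
	return scenario == 0 ? 0 : scenario == 1 ? -EINVAL : -EPROBE_DEFER;
}

static int setup_port(int scenario)
{
	int ret = dt_get_mac(scenario);

	if (ret == -EPROBE_DEFER)
		return ret;              /* retry probe later; the only hard failure */
	/* any other error: no DT MAC, keep going with a fallback address */
	return 0;
}

int main(void)
{
	for (int s = 0; s < 3; s++)
		printf("scenario %d -> %d\n", s, setup_port(s));
	return 0;
}
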
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/sample.c
+index 6699bdf5cf012..b895c378cfaf3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/sample.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/sample.c
+@@ -27,11 +27,7 @@ tc_act_parse_sample(struct mlx5e_tc_act_parse_state *parse_state,
+ struct mlx5e_priv *priv,
+ struct mlx5_flow_attr *attr)
+ {
+- struct mlx5e_sample_attr *sample_attr;
+-
+- sample_attr = kzalloc(sizeof(*attr->sample_attr), GFP_KERNEL);
+- if (!sample_attr)
+- return -ENOMEM;
++ struct mlx5e_sample_attr *sample_attr = &attr->sample_attr;
+
+ sample_attr->rate = act->sample.rate;
+ sample_attr->group_num = act->sample.psample_group->group_num;
+@@ -39,7 +35,6 @@ tc_act_parse_sample(struct mlx5e_tc_act_parse_state *parse_state,
+ if (act->sample.truncate)
+ sample_attr->trunc_size = act->sample.trunc_size;
+
+- attr->sample_attr = sample_attr;
+ flow_flag_set(parse_state->flow, SAMPLE);
+
+ return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
+index ff4b4f8a5a9db..0faaf9a4b5317 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/sample.c
+@@ -513,7 +513,7 @@ mlx5e_tc_sample_offload(struct mlx5e_tc_psample *tc_psample,
+ sample_flow = kzalloc(sizeof(*sample_flow), GFP_KERNEL);
+ if (!sample_flow)
+ return ERR_PTR(-ENOMEM);
+- sample_attr = attr->sample_attr;
++ sample_attr = &attr->sample_attr;
+ sample_attr->sample_flow = sample_flow;
+
+ /* For NICs with reg_c_preserve support or decap action, use
+@@ -546,6 +546,7 @@ mlx5e_tc_sample_offload(struct mlx5e_tc_psample *tc_psample,
+ err = PTR_ERR(sample_flow->sampler);
+ goto err_sampler;
+ }
++ sample_attr->sampler_id = sample_flow->sampler->sampler_id;
+
+ /* Create an id mapping reg_c0 value to sample object. */
+ restore_obj.type = MLX5_MAPPED_OBJ_SAMPLE;
+@@ -585,8 +586,7 @@ mlx5e_tc_sample_offload(struct mlx5e_tc_psample *tc_psample,
+ pre_attr->outer_match_level = attr->outer_match_level;
+ pre_attr->chain = attr->chain;
+ pre_attr->prio = attr->prio;
+- pre_attr->sample_attr = attr->sample_attr;
+- sample_attr->sampler_id = sample_flow->sampler->sampler_id;
++ pre_attr->sample_attr = *sample_attr;
+ pre_esw_attr = pre_attr->esw_attr;
+ pre_esw_attr->in_mdev = esw_attr->in_mdev;
+ pre_esw_attr->in_rep = esw_attr->in_rep;
+@@ -633,11 +633,11 @@ mlx5e_tc_sample_unoffload(struct mlx5e_tc_psample *tc_psample,
+ * will hit fw syndromes.
+ */
+ esw = tc_psample->esw;
+- sample_flow = attr->sample_attr->sample_flow;
++ sample_flow = attr->sample_attr.sample_flow;
+ mlx5_eswitch_del_offloaded_rule(esw, sample_flow->pre_rule, sample_flow->pre_attr);
+
+ sample_restore_put(tc_psample, sample_flow->restore);
+- mapping_remove(esw->offloads.reg_c0_obj_pool, attr->sample_attr->restore_obj_id);
++ mapping_remove(esw->offloads.reg_c0_obj_pool, attr->sample_attr.restore_obj_id);
+ sampler_put(tc_psample, sample_flow->sampler);
+ if (sample_flow->post_act_handle)
+ mlx5e_tc_post_act_del(tc_psample->post_act, sample_flow->post_act_handle);
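
The mlx5 sample hunks above replace a separately kzalloc'ed sample_attr with a field embedded in mlx5_flow_attr, removing an allocation that could fail and a kfree that could be missed; copies become plain struct assignment. The trade-off in miniature, with hypothetical flow/sample types:

#include <stdlib.h>

struct sample_attr { unsigned rate, group, trunc; };

/* Before: owner holds a pointer, so setup can fail and teardown must free. */
struct flow_pointer { struct sample_attr *sample; };

static int setup_ptr(struct flow_pointer *f)
{
	f->sample = calloc(1, sizeof(*f->sample));
	return f->sample ? 0 : -1;            /* extra failure path */
}

/* After: the attr lives inside the owner; no alloc, no free, copy by value. */
struct flow_embedded { struct sample_attr sample; };

static void setup_embedded(struct flow_embedded *f, unsigned rate)
{
	f->sample.rate = rate;                /* cannot fail */
}

int main(void)
{
	struct flow_pointer p = { 0 };
	struct flow_embedded a, b;

	if (setup_ptr(&p) == 0)
		free(p.sample);

	setup_embedded(&a, 100);
	b = a;                                /* pre_attr->sample_attr = *sample_attr */
	(void)b;
	return 0;
}
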
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 3667f5ef5990f..169e3524bb1c7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5345,6 +5345,7 @@ mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *prof
+ }
+
+ netif_carrier_off(netdev);
++ netif_tx_disable(netdev);
+ dev_net_set(netdev, mlx5_core_net(mdev));
+
+ return netdev;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index b27532a9301e7..7e5c00349ccf9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1634,7 +1634,6 @@ static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
+ if (flow_flag_test(flow, L3_TO_L2_DECAP))
+ mlx5e_detach_decap(priv, flow);
+
+- kfree(attr->sample_attr);
+ kvfree(attr->esw_attr->rx_tun_attr);
+ kvfree(attr->parse_attr);
+ kfree(flow->attr);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+index 5ffae9b130665..2f09e34db9ffe 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+@@ -71,7 +71,7 @@ struct mlx5_flow_attr {
+ struct mlx5_fc *counter;
+ struct mlx5_modify_hdr *modify_hdr;
+ struct mlx5_ct_attr ct_attr;
+- struct mlx5e_sample_attr *sample_attr;
++ struct mlx5e_sample_attr sample_attr;
+ struct mlx5e_tc_flow_parse_attr *parse_attr;
+ u32 chain;
+ u16 prio;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index cfcd72bad9af6..e7e7b4b0dcdb5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -201,12 +201,12 @@ esw_cleanup_decap_indir(struct mlx5_eswitch *esw,
+ static int
+ esw_setup_sampler_dest(struct mlx5_flow_destination *dest,
+ struct mlx5_flow_act *flow_act,
+- struct mlx5_flow_attr *attr,
++ u32 sampler_id,
+ int i)
+ {
+ flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL;
+ dest[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_SAMPLER;
+- dest[i].sampler_id = attr->sample_attr->sampler_id;
++ dest[i].sampler_id = sampler_id;
+
+ return 0;
+ }
+@@ -466,7 +466,7 @@ esw_setup_dests(struct mlx5_flow_destination *dest,
+ attr->flags |= MLX5_ESW_ATTR_FLAG_SRC_REWRITE;
+
+ if (attr->flags & MLX5_ESW_ATTR_FLAG_SAMPLE) {
+- esw_setup_sampler_dest(dest, flow_act, attr, *i);
++ esw_setup_sampler_dest(dest, flow_act, attr->sample_attr.sampler_id, *i);
+ (*i)++;
+ } else if (attr->dest_ft) {
+ esw_setup_ft_dest(dest, flow_act, esw, attr, spec, *i);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index 7b16a1188aabb..fd79860de723b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -433,35 +433,12 @@ int mlx5_query_module_eeprom_by_page(struct mlx5_core_dev *dev,
+ struct mlx5_module_eeprom_query_params *params,
+ u8 *data)
+ {
+- u8 module_id;
+ int err;
+
+ err = mlx5_query_module_num(dev, &params->module_number);
+ if (err)
+ return err;
+
+- err = mlx5_query_module_id(dev, params->module_number, &module_id);
+- if (err)
+- return err;
+-
+- switch (module_id) {
+- case MLX5_MODULE_ID_SFP:
+- if (params->page > 0)
+- return -EINVAL;
+- break;
+- case MLX5_MODULE_ID_QSFP:
+- case MLX5_MODULE_ID_QSFP28:
+- case MLX5_MODULE_ID_QSFP_PLUS:
+- if (params->page > 3)
+- return -EINVAL;
+- break;
+- case MLX5_MODULE_ID_DSFP:
+- break;
+- default:
+- mlx5_core_err(dev, "Module ID not recognized: 0x%x\n", module_id);
+- return -EINVAL;
+- }
+-
+ if (params->i2c_address != MLX5_I2C_ADDR_HIGH &&
+ params->i2c_address != MLX5_I2C_ADDR_LOW) {
+ mlx5_core_err(dev, "I2C address not recognized: 0x%x\n", params->i2c_address);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index aa411dec62f00..eb1319d63613e 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2148,13 +2148,11 @@ static void mlxsw_sp_pude_event_func(const struct mlxsw_reg_info *reg,
+ struct mlxsw_sp *mlxsw_sp = priv;
+ struct mlxsw_sp_port *mlxsw_sp_port;
+ enum mlxsw_reg_pude_oper_status status;
+- unsigned int max_ports;
+ u16 local_port;
+
+- max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
+ local_port = mlxsw_reg_pude_local_port_get(pude_pl);
+
+- if (WARN_ON_ONCE(!local_port || local_port >= max_ports))
++ if (WARN_ON_ONCE(!mlxsw_sp_local_port_is_valid(mlxsw_sp, local_port)))
+ return;
+ mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ if (!mlxsw_sp_port)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+index bb2442e1f7052..30942b6ffcf99 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+@@ -481,6 +481,13 @@ int
+ mlxsw_sp_port_vlan_classification_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ bool is_8021ad_tagged,
+ bool is_8021q_tagged);
++static inline bool
++mlxsw_sp_local_port_is_valid(struct mlxsw_sp *mlxsw_sp, u16 local_port)
++{
++ unsigned int max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
++
++ return local_port < max_ports && local_port;
++}
+
+ /* spectrum_buffers.c */
+ struct mlxsw_sp_hdroom_prio {
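
The mlxsw hunks above fold a bounds test that was open-coded at several event-handler entry points into one mlxsw_sp_local_port_is_valid() helper, so every caller rejects both port 0 and out-of-range ports the same way. The consolidation pattern, sketched standalone with a fixed MAX_PORTS standing in for mlxsw_core_max_ports():

#include <stdbool.h>
#include <stdio.h>

#define MAX_PORTS 64   /* stand-in for mlxsw_core_max_ports() */

static inline bool local_port_is_valid(unsigned int port)
{
	return port && port < MAX_PORTS;   /* 0 is reserved, upper bound exclusive */
}

static int handle_event(unsigned int port)
{
	if (!local_port_is_valid(port))
		return -1;                 /* one check, identical everywhere */
	return 0;
}

int main(void)
{
	printf("%d %d %d\n", handle_event(0), handle_event(5), handle_event(64));
	return 0;
}
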
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
+index 0ff163fbc7750..35422e64d89fc 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ptp.c
+@@ -568,12 +568,11 @@ void mlxsw_sp1_ptp_got_timestamp(struct mlxsw_sp *mlxsw_sp, bool ingress,
+ u8 domain_number, u16 sequence_id,
+ u64 timestamp)
+ {
+- unsigned int max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
+ struct mlxsw_sp_port *mlxsw_sp_port;
+ struct mlxsw_sp1_ptp_key key;
+ u8 types;
+
+- if (WARN_ON_ONCE(local_port >= max_ports))
++ if (WARN_ON_ONCE(!mlxsw_sp_local_port_is_valid(mlxsw_sp, local_port)))
+ return;
+ mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ if (!mlxsw_sp_port)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 65c1724c63b0a..bffdb41fc4edc 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -2616,7 +2616,6 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
+ char *sfn_pl, int rec_index,
+ bool adding)
+ {
+- unsigned int max_ports = mlxsw_core_max_ports(mlxsw_sp->core);
+ struct mlxsw_sp_port_vlan *mlxsw_sp_port_vlan;
+ struct mlxsw_sp_bridge_device *bridge_device;
+ struct mlxsw_sp_bridge_port *bridge_port;
+@@ -2630,7 +2629,7 @@ static void mlxsw_sp_fdb_notify_mac_process(struct mlxsw_sp *mlxsw_sp,
+
+ mlxsw_reg_sfn_mac_unpack(sfn_pl, rec_index, mac, &fid, &local_port);
+
+- if (WARN_ON_ONCE(local_port >= max_ports))
++ if (WARN_ON_ONCE(!mlxsw_sp_local_port_is_valid(mlxsw_sp, local_port)))
+ return;
+ mlxsw_sp_port = mlxsw_sp->ports[local_port];
+ if (!mlxsw_sp_port) {
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+index e3edca187ddfa..5250d1d1e49ca 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
+@@ -489,7 +489,7 @@ struct split_type_defs {
+
+ #define STATIC_DEBUG_LINE_DWORDS 9
+
+-#define NUM_COMMON_GLOBAL_PARAMS 11
++#define NUM_COMMON_GLOBAL_PARAMS 10
+
+ #define MAX_RECURSION_DEPTH 10
+
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+index b242000a77fd8..b7cc36589f592 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
+@@ -748,6 +748,9 @@ qede_build_skb(struct qede_rx_queue *rxq,
+ buf = page_address(bd->data) + bd->page_offset;
+ skb = build_skb(buf, rxq->rx_buf_seg_size);
+
++ if (unlikely(!skb))
++ return NULL;
++
+ skb_reserve(skb, pad);
+ skb_put(skb, len);
+
+diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
+index ead550ae27097..40bfd0ad7d053 100644
+--- a/drivers/net/ethernet/sfc/efx_channels.c
++++ b/drivers/net/ethernet/sfc/efx_channels.c
+@@ -764,6 +764,85 @@ void efx_remove_channels(struct efx_nic *efx)
+ kfree(efx->xdp_tx_queues);
+ }
+
++static int efx_set_xdp_tx_queue(struct efx_nic *efx, int xdp_queue_number,
++ struct efx_tx_queue *tx_queue)
++{
++ if (xdp_queue_number >= efx->xdp_tx_queue_count)
++ return -EINVAL;
++
++ netif_dbg(efx, drv, efx->net_dev,
++ "Channel %u TXQ %u is XDP %u, HW %u\n",
++ tx_queue->channel->channel, tx_queue->label,
++ xdp_queue_number, tx_queue->queue);
++ efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
++ return 0;
++}
++
++static void efx_set_xdp_channels(struct efx_nic *efx)
++{
++ struct efx_tx_queue *tx_queue;
++ struct efx_channel *channel;
++ unsigned int next_queue = 0;
++ int xdp_queue_number = 0;
++ int rc;
++
++ /* We need to mark which channels really have RX and TX
++ * queues, and adjust the TX queue numbers if we have separate
++ * RX-only and TX-only channels.
++ */
++ efx_for_each_channel(channel, efx) {
++ if (channel->channel < efx->tx_channel_offset)
++ continue;
++
++ if (efx_channel_is_xdp_tx(channel)) {
++ efx_for_each_channel_tx_queue(tx_queue, channel) {
++ tx_queue->queue = next_queue++;
++ rc = efx_set_xdp_tx_queue(efx, xdp_queue_number,
++ tx_queue);
++ if (rc == 0)
++ xdp_queue_number++;
++ }
++ } else {
++ efx_for_each_channel_tx_queue(tx_queue, channel) {
++ tx_queue->queue = next_queue++;
++ netif_dbg(efx, drv, efx->net_dev,
++ "Channel %u TXQ %u is HW %u\n",
++ channel->channel, tx_queue->label,
++ tx_queue->queue);
++ }
++
++ /* If XDP is borrowing queues from net stack, it must
++ * use the queue with no csum offload, which is the
++ * first one of the channel
++ * (note: tx_queue_by_type is not initialized yet)
++ */
++ if (efx->xdp_txq_queues_mode ==
++ EFX_XDP_TX_QUEUES_BORROWED) {
++ tx_queue = &channel->tx_queue[0];
++ rc = efx_set_xdp_tx_queue(efx, xdp_queue_number,
++ tx_queue);
++ if (rc == 0)
++ xdp_queue_number++;
++ }
++ }
++ }
++ WARN_ON(efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_DEDICATED &&
++ xdp_queue_number != efx->xdp_tx_queue_count);
++ WARN_ON(efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED &&
++ xdp_queue_number > efx->xdp_tx_queue_count);
++
++ /* If we have more CPUs than assigned XDP TX queues, assign the already
++ * existing queues to the exceeding CPUs
++ */
++ next_queue = 0;
++ while (xdp_queue_number < efx->xdp_tx_queue_count) {
++ tx_queue = efx->xdp_tx_queues[next_queue++];
++ rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
++ if (rc == 0)
++ xdp_queue_number++;
++ }
++}
++
+ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ {
+ struct efx_channel *other_channel[EFX_MAX_CHANNELS], *channel;
+@@ -835,6 +914,7 @@ int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
+ efx_init_napi_channel(efx->channel[i]);
+ }
+
++ efx_set_xdp_channels(efx);
+ out:
+ /* Destroy unused channel structures */
+ for (i = 0; i < efx->n_channels; i++) {
+@@ -867,26 +947,9 @@ rollback:
+ goto out;
+ }
+
+-static inline int
+-efx_set_xdp_tx_queue(struct efx_nic *efx, int xdp_queue_number,
+- struct efx_tx_queue *tx_queue)
+-{
+- if (xdp_queue_number >= efx->xdp_tx_queue_count)
+- return -EINVAL;
+-
+- netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
+- tx_queue->channel->channel, tx_queue->label,
+- xdp_queue_number, tx_queue->queue);
+- efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
+- return 0;
+-}
+-
+ int efx_set_channels(struct efx_nic *efx)
+ {
+- struct efx_tx_queue *tx_queue;
+ struct efx_channel *channel;
+- unsigned int next_queue = 0;
+- int xdp_queue_number;
+ int rc;
+
+ efx->tx_channel_offset =
+@@ -904,61 +967,14 @@ int efx_set_channels(struct efx_nic *efx)
+ return -ENOMEM;
+ }
+
+- /* We need to mark which channels really have RX and TX
+- * queues, and adjust the TX queue numbers if we have separate
+- * RX-only and TX-only channels.
+- */
+- xdp_queue_number = 0;
+ efx_for_each_channel(channel, efx) {
+ if (channel->channel < efx->n_rx_channels)
+ channel->rx_queue.core_index = channel->channel;
+ else
+ channel->rx_queue.core_index = -1;
+-
+- if (channel->channel >= efx->tx_channel_offset) {
+- if (efx_channel_is_xdp_tx(channel)) {
+- efx_for_each_channel_tx_queue(tx_queue, channel) {
+- tx_queue->queue = next_queue++;
+- rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+- if (rc == 0)
+- xdp_queue_number++;
+- }
+- } else {
+- efx_for_each_channel_tx_queue(tx_queue, channel) {
+- tx_queue->queue = next_queue++;
+- netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is HW %u\n",
+- channel->channel, tx_queue->label,
+- tx_queue->queue);
+- }
+-
+- /* If XDP is borrowing queues from net stack, it must use the queue
+- * with no csum offload, which is the first one of the channel
+- * (note: channel->tx_queue_by_type is not initialized yet)
+- */
+- if (efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_BORROWED) {
+- tx_queue = &channel->tx_queue[0];
+- rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+- if (rc == 0)
+- xdp_queue_number++;
+- }
+- }
+- }
+ }
+- WARN_ON(efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_DEDICATED &&
+- xdp_queue_number != efx->xdp_tx_queue_count);
+- WARN_ON(efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED &&
+- xdp_queue_number > efx->xdp_tx_queue_count);
+
+- /* If we have more CPUs than assigned XDP TX queues, assign the already
+- * existing queues to the exceeding CPUs
+- */
+- next_queue = 0;
+- while (xdp_queue_number < efx->xdp_tx_queue_count) {
+- tx_queue = efx->xdp_tx_queues[next_queue++];
+- rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+- if (rc == 0)
+- xdp_queue_number++;
+- }
++ efx_set_xdp_channels(efx);
+
+ rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels);
+ if (rc)
+@@ -1102,7 +1118,7 @@ void efx_start_channels(struct efx_nic *efx)
+ struct efx_rx_queue *rx_queue;
+ struct efx_channel *channel;
+
+- efx_for_each_channel(channel, efx) {
++ efx_for_each_channel_rev(channel, efx) {
+ efx_for_each_channel_tx_queue(tx_queue, channel) {
+ efx_init_tx_queue(tx_queue);
+ atomic_inc(&efx->active_queues);
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index 633ca77a26fd1..b925de9b43028 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -166,6 +166,9 @@ static void efx_fini_rx_recycle_ring(struct efx_rx_queue *rx_queue)
+ struct efx_nic *efx = rx_queue->efx;
+ int i;
+
++ if (unlikely(!rx_queue->page_ring))
++ return;
++
+ /* Unmap and release the pages in the recycle ring. Remove the ring. */
+ for (i = 0; i <= rx_queue->page_ptr_mask; i++) {
+ struct page *page = rx_queue->page_ring[i];
+diff --git a/drivers/net/ethernet/sfc/tx.c b/drivers/net/ethernet/sfc/tx.c
+index d16e031e95f44..6983799e1c05d 100644
+--- a/drivers/net/ethernet/sfc/tx.c
++++ b/drivers/net/ethernet/sfc/tx.c
+@@ -443,6 +443,9 @@ int efx_xdp_tx_buffers(struct efx_nic *efx, int n, struct xdp_frame **xdpfs,
+ if (unlikely(!tx_queue))
+ return -EINVAL;
+
++ if (!tx_queue->initialised)
++ return -EINVAL;
++
+ if (efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED)
+ HARD_TX_LOCK(efx->net_dev, tx_queue->core_txq, cpu);
+
+diff --git a/drivers/net/ethernet/sfc/tx_common.c b/drivers/net/ethernet/sfc/tx_common.c
+index d530cde2b8648..9bc8281b7f5bd 100644
+--- a/drivers/net/ethernet/sfc/tx_common.c
++++ b/drivers/net/ethernet/sfc/tx_common.c
+@@ -101,6 +101,8 @@ void efx_fini_tx_queue(struct efx_tx_queue *tx_queue)
+ netif_dbg(tx_queue->efx, drv, tx_queue->efx->net_dev,
+ "shutting down TX queue %d\n", tx_queue->queue);
+
++ tx_queue->initialised = false;
++
+ if (!tx_queue->buffer)
+ return;
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index 5d29f336315b7..11e1055e8260f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -431,8 +431,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ plat->phylink_node = np;
+
+ /* Get max speed of operation from device tree */
+- if (of_property_read_u32(np, "max-speed", &plat->max_speed))
+- plat->max_speed = -1;
++ of_property_read_u32(np, "max-speed", &plat->max_speed);
+
+ plat->bus_id = of_alias_get_id(np, "ethernet");
+ if (plat->bus_id < 0)
+diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
+index 6b12902a803f0..cecf8c63096cd 100644
+--- a/drivers/net/macvtap.c
++++ b/drivers/net/macvtap.c
+@@ -133,11 +133,17 @@ static void macvtap_setup(struct net_device *dev)
+ dev->tx_queue_len = TUN_READQ_SIZE;
+ }
+
++static struct net *macvtap_link_net(const struct net_device *dev)
++{
++ return dev_net(macvlan_dev_real_dev(dev));
++}
++
+ static struct rtnl_link_ops macvtap_link_ops __read_mostly = {
+ .kind = "macvtap",
+ .setup = macvtap_setup,
+ .newlink = macvtap_newlink,
+ .dellink = macvtap_dellink,
++ .get_link_net = macvtap_link_net,
+ .priv_size = sizeof(struct macvtap_dev),
+ };
+
+diff --git a/drivers/net/mdio/mdio-mscc-miim.c b/drivers/net/mdio/mdio-mscc-miim.c
+index 64fb76c1e3959..08381038810d6 100644
+--- a/drivers/net/mdio/mdio-mscc-miim.c
++++ b/drivers/net/mdio/mdio-mscc-miim.c
+@@ -93,6 +93,9 @@ static int mscc_miim_read(struct mii_bus *bus, int mii_id, int regnum)
+ u32 val;
+ int ret;
+
++ if (regnum & MII_ADDR_C45)
++ return -EOPNOTSUPP;
++
+ ret = mscc_miim_wait_pending(bus);
+ if (ret)
+ goto out;
+@@ -136,6 +139,9 @@ static int mscc_miim_write(struct mii_bus *bus, int mii_id,
+ struct mscc_miim_dev *miim = bus->priv;
+ int ret;
+
++ if (regnum & MII_ADDR_C45)
++ return -EOPNOTSUPP;
++
+ ret = mscc_miim_wait_pending(bus);
+ if (ret < 0)
+ goto out;
+diff --git a/drivers/net/phy/sfp-bus.c b/drivers/net/phy/sfp-bus.c
+index c1512c9925a66..15aa5ac1ff49c 100644
+--- a/drivers/net/phy/sfp-bus.c
++++ b/drivers/net/phy/sfp-bus.c
+@@ -74,6 +74,12 @@ static const struct sfp_quirk sfp_quirks[] = {
+ .vendor = "HUAWEI",
+ .part = "MA5671A",
+ .modes = sfp_quirk_2500basex,
++ }, {
++ // Lantech 8330-262D-E can operate at 2500base-X, but
++ // incorrectly report 2500MBd NRZ in their EEPROM
++ .vendor = "Lantech",
++ .part = "8330-262D-E",
++ .modes = sfp_quirk_2500basex,
+ }, {
+ .vendor = "UBNT",
+ .part = "UF-INSTANT",
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 8e3a28ba6b282..ba2ef5437e167 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1198,7 +1198,8 @@ static int tap_sendmsg(struct socket *sock, struct msghdr *m,
+ struct xdp_buff *xdp;
+ int i;
+
+- if (ctl && (ctl->type == TUN_MSG_PTR)) {
++ if (m->msg_controllen == sizeof(struct tun_msg_ctl) &&
++ ctl && ctl->type == TUN_MSG_PTR) {
+ for (i = 0; i < ctl->num; i++) {
+ xdp = &((struct xdp_buff *)ctl->ptr)[i];
+ tap_get_user_xdp(q, xdp);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index fed85447701a5..de999e0fedbca 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2489,7 +2489,8 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+ if (!tun)
+ return -EBADFD;
+
+- if (ctl && (ctl->type == TUN_MSG_PTR)) {
++ if (m->msg_controllen == sizeof(struct tun_msg_ctl) &&
++ ctl && ctl->type == TUN_MSG_PTR) {
+ struct tun_page tpage;
+ int n = ctl->num;
+ int flush = 0;
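
The tap/tun hunks above add a size check on msg_controllen before trusting msg_control as a struct tun_msg_ctl: the pointer is passed through from other in-kernel users, so type-punning it without checking the declared length could misinterpret foreign data. The validate-before-cast pattern, in a standalone sketch with a hypothetical ctl_msg type:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct ctl_msg { int type; int num; };
#define CTL_TYPE_PTR 1

static int handle(const void *control, size_t controllen)
{
	struct ctl_msg ctl;

	/* Only reinterpret the blob if it is exactly the expected size. */
	if (controllen != sizeof(struct ctl_msg) || !control)
		return 0;                   /* fall back to the plain path */

	memcpy(&ctl, control, sizeof(ctl));
	if (ctl.type == CTL_TYPE_PTR)
		return ctl.num;             /* batched-frame path */
	return 0;
}

int main(void)
{
	struct ctl_msg m = { CTL_TYPE_PTR, 4 };

	printf("%d %d\n", handle(&m, sizeof(m)), handle(&m, 3));  /* 4 0 */
	return 0;
}
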
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index e0b1ab99a359e..f37adcef4bef3 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1266,6 +1266,7 @@ static int vrf_prepare_mac_header(struct sk_buff *skb,
+ eth = (struct ethhdr *)skb->data;
+
+ skb_reset_mac_header(skb);
++ skb_reset_mac_len(skb);
+
+ /* we set the ethernet destination and the source addresses to the
+ * address of the VRF device.
+@@ -1295,9 +1296,9 @@ static int vrf_prepare_mac_header(struct sk_buff *skb,
+ */
+ static int vrf_add_mac_header_if_unset(struct sk_buff *skb,
+ struct net_device *vrf_dev,
+- u16 proto)
++ u16 proto, struct net_device *orig_dev)
+ {
+- if (skb_mac_header_was_set(skb))
++ if (skb_mac_header_was_set(skb) && dev_has_header(orig_dev))
+ return 0;
+
+ return vrf_prepare_mac_header(skb, vrf_dev, proto);
+@@ -1403,6 +1404,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+
+ /* if packet is NDISC then keep the ingress interface */
+ if (!is_ndisc) {
++ struct net_device *orig_dev = skb->dev;
++
+ vrf_rx_stats(vrf_dev, skb->len);
+ skb->dev = vrf_dev;
+ skb->skb_iif = vrf_dev->ifindex;
+@@ -1411,7 +1414,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ int err;
+
+ err = vrf_add_mac_header_if_unset(skb, vrf_dev,
+- ETH_P_IPV6);
++ ETH_P_IPV6,
++ orig_dev);
+ if (likely(!err)) {
+ skb_push(skb, skb->mac_len);
+ dev_queue_xmit_nit(skb, vrf_dev);
+@@ -1441,6 +1445,8 @@ static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ struct sk_buff *skb)
+ {
++ struct net_device *orig_dev = skb->dev;
++
+ skb->dev = vrf_dev;
+ skb->skb_iif = vrf_dev->ifindex;
+ IPCB(skb)->flags |= IPSKB_L3SLAVE;
+@@ -1461,7 +1467,8 @@ static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ if (!list_empty(&vrf_dev->ptype_all)) {
+ int err;
+
+- err = vrf_add_mac_header_if_unset(skb, vrf_dev, ETH_P_IP);
++ err = vrf_add_mac_header_if_unset(skb, vrf_dev, ETH_P_IP,
++ orig_dev);
+ if (likely(!err)) {
+ skb_push(skb, skb->mac_len);
+ dev_queue_xmit_nit(skb, vrf_dev);
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 3fb0aa0008259..24bd0520926bf 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -391,6 +391,8 @@ static void ath11k_ahb_free_ext_irq(struct ath11k_base *ab)
+
+ for (j = 0; j < irq_grp->num_irq; j++)
+ free_irq(ab->irq_num[irq_grp->irqs[j]], irq_grp);
++
++ netif_napi_del(&irq_grp->napi);
+ }
+ }
+
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 08e33778f63b8..28de877ad6c47 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -5574,7 +5574,7 @@ static int ath11k_mac_mgmt_tx(struct ath11k *ar, struct sk_buff *skb,
+
+ skb_queue_tail(q, skb);
+ atomic_inc(&ar->num_pending_mgmt_tx);
+- ieee80211_queue_work(ar->hw, &ar->wmi_mgmt_tx_work);
++ queue_work(ar->ab->workqueue, &ar->wmi_mgmt_tx_work);
+
+ return 0;
+ }
+diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c
+index cccaa348cf212..8b21438028169 100644
+--- a/drivers/net/wireless/ath/ath11k/mhi.c
++++ b/drivers/net/wireless/ath/ath11k/mhi.c
+@@ -561,7 +561,7 @@ static int ath11k_mhi_set_state(struct ath11k_pci *ab_pci,
+ ret = 0;
+ break;
+ case ATH11K_MHI_POWER_ON:
+- ret = mhi_async_power_up(ab_pci->mhi_ctrl);
++ ret = mhi_sync_power_up(ab_pci->mhi_ctrl);
+ break;
+ case ATH11K_MHI_POWER_OFF:
+ mhi_power_down(ab_pci->mhi_ctrl, true);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index de71ad594f347..903758751c99a 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -1571,6 +1571,11 @@ static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
+ struct ath11k_base *ab = dev_get_drvdata(dev);
+ int ret;
+
++ if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
++ ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot skipping pci suspend as qmi is not initialised\n");
++ return 0;
++ }
++
+ ret = ath11k_core_suspend(ab);
+ if (ret)
+ ath11k_warn(ab, "failed to suspend core: %d\n", ret);
+@@ -1583,6 +1588,11 @@ static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
+ struct ath11k_base *ab = dev_get_drvdata(dev);
+ int ret;
+
++ if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
++ ath11k_dbg(ab, ATH11K_DBG_BOOT, "boot skipping pci resume as qmi is not initialised\n");
++ return 0;
++ }
++
+ ret = ath11k_core_resume(ab);
+ if (ret)
+ ath11k_warn(ab, "failed to resume core: %d\n", ret);
+diff --git a/drivers/net/wireless/ath/ath5k/eeprom.c b/drivers/net/wireless/ath/ath5k/eeprom.c
+index 1fbc2c19848f2..d444b3d70ba2e 100644
+--- a/drivers/net/wireless/ath/ath5k/eeprom.c
++++ b/drivers/net/wireless/ath/ath5k/eeprom.c
+@@ -746,6 +746,9 @@ ath5k_eeprom_convert_pcal_info_5111(struct ath5k_hw *ah, int mode,
+ }
+ }
+
++ if (idx == AR5K_EEPROM_N_PD_CURVES)
++ goto err_out;
++
+ ee->ee_pd_gains[mode] = 1;
+
+ pd = &chinfo[pier].pd_curves[idx];
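
The ath5k hunk above guards against a search loop that exhausts all AR5K_EEPROM_N_PD_CURVES slots without a match: idx then equals the array size, and pd_curves[idx] indexes one past the end. Checking the sentinel before indexing, in miniature:

#include <stdio.h>

#define N_CURVES 4

static int find_free(const int *used)
{
	int idx;

	for (idx = 0; idx < N_CURVES; idx++)
		if (!used[idx])
			break;

	if (idx == N_CURVES)      /* loop fell through: no slot found */
		return -1;        /* without this, used[idx] is out of bounds */
	return idx;
}

int main(void)
{
	int all_used[N_CURVES] = { 1, 1, 1, 1 };
	int one_free[N_CURVES] = { 1, 0, 1, 1 };

	printf("%d %d\n", find_free(all_used), find_free(one_free)); /* -1 1 */
	return 0;
}
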
+diff --git a/drivers/net/wireless/intel/iwlwifi/Kconfig b/drivers/net/wireless/intel/iwlwifi/Kconfig
+index 85e7042837556..a647a406b87be 100644
+--- a/drivers/net/wireless/intel/iwlwifi/Kconfig
++++ b/drivers/net/wireless/intel/iwlwifi/Kconfig
+@@ -139,6 +139,7 @@ config IWLMEI
+ tristate "Intel Management Engine communication over WLAN"
+ depends on INTEL_MEI
+ depends on PM
++ depends on CFG80211
+ help
+ Enables the iwlmei kernel module.
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
+index 456b7eaac5700..061fe6cc6cf5b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2018-2021 Intel Corporation
++ * Copyright (C) 2018-2022 Intel Corporation
+ */
+ #ifndef __iwl_fw_dbg_tlv_h__
+ #define __iwl_fw_dbg_tlv_h__
+@@ -249,11 +249,10 @@ struct iwl_fw_ini_hcmd_tlv {
+ } __packed; /* FW_TLV_DEBUG_HCMD_API_S_VER_1 */
+
+ /**
+-* struct iwl_fw_ini_conf_tlv - preset configuration TLV
++* struct iwl_fw_ini_addr_val - Address and value to set it to
+ *
+ * @address: the base address
+ * @value: value to set at address
+-
+ */
+ struct iwl_fw_ini_addr_val {
+ __le32 address;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+index 9af40b0fa37ae..a6e6673bf4ee0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2021 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2013-2014 Intel Mobile Communications GmbH
+ * Copyright (C) 2017 Intel Deutschland GmbH
+ */
+@@ -349,18 +349,31 @@ void iwl_mvm_phy_ctxt_unref(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt)
+ * otherwise we might not be able to reuse this phy.
+ */
+ if (ctxt->ref == 0) {
+- struct ieee80211_channel *chan;
++ struct ieee80211_channel *chan = NULL;
+ struct cfg80211_chan_def chandef;
+- struct ieee80211_supported_band *sband = NULL;
+- enum nl80211_band band = NL80211_BAND_2GHZ;
++ struct ieee80211_supported_band *sband;
++ enum nl80211_band band;
++ int channel;
+
+- while (!sband && band < NUM_NL80211_BANDS)
+- sband = mvm->hw->wiphy->bands[band++];
++ for (band = NL80211_BAND_2GHZ; band < NUM_NL80211_BANDS; band++) {
++ sband = mvm->hw->wiphy->bands[band];
+
+- if (WARN_ON(!sband))
+- return;
++ if (!sband)
++ continue;
++
++ for (channel = 0; channel < sband->n_channels; channel++)
++ if (!(sband->channels[channel].flags &
++ IEEE80211_CHAN_DISABLED)) {
++ chan = &sband->channels[channel];
++ break;
++ }
+
+- chan = &sband->channels[0];
++ if (chan)
++ break;
++ }
++
++ if (WARN_ON(!chan))
++ return;
+
+ cfg80211_chandef_create(&chandef, chan, NL80211_CHAN_NO_HT);
+ iwl_mvm_phy_ctxt_changed(mvm, ctxt, &chandef, 1, 1);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+index 5f92a09db3742..4cd507cb412de 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+@@ -1893,7 +1893,10 @@ static u8 iwl_mvm_scan_umac_chan_flags_v2(struct iwl_mvm *mvm,
+ IWL_SCAN_CHANNEL_FLAG_CACHE_ADD;
+
+ /* set fragmented ebs for fragmented scan on HB channels */
+- if (iwl_mvm_is_scan_fragmented(params->hb_type))
++ if ((!iwl_mvm_is_cdb_supported(mvm) &&
++ iwl_mvm_is_scan_fragmented(params->type)) ||
++ (iwl_mvm_is_cdb_supported(mvm) &&
++ iwl_mvm_is_scan_fragmented(params->hb_type)))
+ flags |= IWL_SCAN_CHANNEL_FLAG_EBS_FRAG;
+
+ return flags;
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 3a9af8931c35a..3d644925a4e04 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -465,6 +465,7 @@ mt76_dma_rx_fill(struct mt76_dev *dev, struct mt76_queue *q)
+
+ qbuf.addr = addr + offset;
+ qbuf.len = len - offset;
++ qbuf.skip_unmap = false;
+ mt76_dma_add_buf(dev, q, &qbuf, 1, 0, buf, NULL);
+ frames++;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 1f6f7a44d3f00..5197fcb066492 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -19,7 +19,7 @@
+
+ #define MT_MCU_RING_SIZE 32
+ #define MT_RX_BUF_SIZE 2048
+-#define MT_SKB_HEAD_LEN 128
++#define MT_SKB_HEAD_LEN 256
+
+ #define MT_MAX_NON_AQL_PKT 16
+ #define MT_TXQ_FREE_THR 32
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index ba31bb7caaf90..5d69e77814c9d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -1841,7 +1841,7 @@ mt7615_mac_adjust_sensitivity(struct mt7615_phy *phy,
+ struct mt7615_dev *dev = phy->dev;
+ int false_cca = ofdm ? phy->false_cca_ofdm : phy->false_cca_cck;
+ bool ext_phy = phy != &dev->phy;
+- u16 def_th = ofdm ? -98 : -110;
++ s16 def_th = ofdm ? -98 : -110;
+ bool update = false;
+ s8 *sensitivity;
+ int signal;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index db267642924d0..e4c300aa15260 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1085,6 +1085,7 @@ mt7915_mac_write_txwi_80211(struct mt7915_dev *dev, __le32 *txwi,
+ val = MT_TXD3_SN_VALID |
+ FIELD_PREP(MT_TXD3_SEQ, IEEE80211_SEQ_TO_SN(seqno));
+ txwi[3] |= cpu_to_le32(val);
++ txwi[7] &= ~cpu_to_le32(MT_TXD7_HW_AMSDU);
+ }
+
+ val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 7a8d2596c2265..4abb7a6e775af 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -273,6 +273,7 @@ static void mt7921_stop(struct ieee80211_hw *hw)
+
+ cancel_delayed_work_sync(&dev->pm.ps_work);
+ cancel_work_sync(&dev->pm.wake_work);
++ cancel_work_sync(&dev->reset_work);
+ mt76_connac_free_pending_tx_skbs(&dev->pm, NULL);
+
+ mt7921_mutex_acquire(dev);
+diff --git a/drivers/net/wireless/realtek/rtw88/debug.c b/drivers/net/wireless/realtek/rtw88/debug.c
+index e429428232c15..e7e9f17df96a3 100644
+--- a/drivers/net/wireless/realtek/rtw88/debug.c
++++ b/drivers/net/wireless/realtek/rtw88/debug.c
+@@ -390,7 +390,7 @@ static ssize_t rtw_debugfs_set_h2c(struct file *filp,
+ &param[0], &param[1], &param[2], &param[3],
+ &param[4], &param[5], &param[6], &param[7]);
+ if (num != 8) {
+- rtw_info(rtwdev, "invalid H2C command format for debug\n");
++ rtw_warn(rtwdev, "invalid H2C command format for debug\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw88/debug.h b/drivers/net/wireless/realtek/rtw88/debug.h
+index 61f8369fe2d61..066792dd96afb 100644
+--- a/drivers/net/wireless/realtek/rtw88/debug.h
++++ b/drivers/net/wireless/realtek/rtw88/debug.h
+@@ -23,6 +23,7 @@ enum rtw_debug_mask {
+ RTW_DBG_PATH_DIV = 0x00004000,
+ RTW_DBG_ADAPTIVITY = 0x00008000,
+ RTW_DBG_HW_SCAN = 0x00010000,
++ RTW_DBG_STATE = 0x00020000,
+
+ RTW_DBG_ALL = 0xffffffff
+ };
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index 4c8e5ea5d069c..db90d75a86339 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -2131,7 +2131,7 @@ void rtw_hw_scan_status_report(struct rtw_dev *rtwdev, struct sk_buff *skb)
+ rtw_hw_scan_complete(rtwdev, vif, aborted);
+
+ if (aborted)
+- rtw_info(rtwdev, "HW scan aborted with code: %d\n", rc);
++ rtw_dbg(rtwdev, RTW_DBG_HW_SCAN, "HW scan aborted with code: %d\n", rc);
+ }
+
+ void rtw_store_op_chan(struct rtw_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index 647d2662955ba..5cdc54c9a9aae 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -208,7 +208,7 @@ static int rtw_ops_add_interface(struct ieee80211_hw *hw,
+
+ mutex_unlock(&rtwdev->mutex);
+
+- rtw_info(rtwdev, "start vif %pM on port %d\n", vif->addr, rtwvif->port);
++ rtw_dbg(rtwdev, RTW_DBG_STATE, "start vif %pM on port %d\n", vif->addr, rtwvif->port);
+ return 0;
+ }
+
+@@ -219,7 +219,7 @@ static void rtw_ops_remove_interface(struct ieee80211_hw *hw,
+ struct rtw_vif *rtwvif = (struct rtw_vif *)vif->drv_priv;
+ u32 config = 0;
+
+- rtw_info(rtwdev, "stop vif %pM on port %d\n", vif->addr, rtwvif->port);
++ rtw_dbg(rtwdev, RTW_DBG_STATE, "stop vif %pM on port %d\n", vif->addr, rtwvif->port);
+
+ mutex_lock(&rtwdev->mutex);
+
+@@ -245,8 +245,8 @@ static int rtw_ops_change_interface(struct ieee80211_hw *hw,
+ {
+ struct rtw_dev *rtwdev = hw->priv;
+
+- rtw_info(rtwdev, "change vif %pM (%d)->(%d), p2p (%d)->(%d)\n",
+- vif->addr, vif->type, type, vif->p2p, p2p);
++ rtw_dbg(rtwdev, RTW_DBG_STATE, "change vif %pM (%d)->(%d), p2p (%d)->(%d)\n",
++ vif->addr, vif->type, type, vif->p2p, p2p);
+
+ rtw_ops_remove_interface(hw, vif);
+
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 39c223a2e3e2d..b00200f81db7d 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -314,8 +314,8 @@ int rtw_sta_add(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+
+ rtwdev->sta_cnt++;
+ rtwdev->beacon_loss = false;
+- rtw_info(rtwdev, "sta %pM joined with macid %d\n",
+- sta->addr, si->mac_id);
++ rtw_dbg(rtwdev, RTW_DBG_STATE, "sta %pM joined with macid %d\n",
++ sta->addr, si->mac_id);
+
+ return 0;
+ }
+@@ -336,8 +336,8 @@ void rtw_sta_remove(struct rtw_dev *rtwdev, struct ieee80211_sta *sta,
+ kfree(si->mask);
+
+ rtwdev->sta_cnt--;
+- rtw_info(rtwdev, "sta %pM with macid %d left\n",
+- sta->addr, si->mac_id);
++ rtw_dbg(rtwdev, RTW_DBG_STATE, "sta %pM with macid %d left\n",
++ sta->addr, si->mac_id);
+ }
+
+ struct rtw_fwcd_hdr {
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index db078df63f855..80d4761796b15 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -499,7 +499,7 @@ static s8 get_cck_rx_pwr(struct rtw_dev *rtwdev, u8 lna_idx, u8 vga_idx)
+ }
+
+ if (lna_idx >= lna_gain_table_size) {
+- rtw_info(rtwdev, "incorrect lna index (%d)\n", lna_idx);
++ rtw_warn(rtwdev, "incorrect lna index (%d)\n", lna_idx);
+ return -120;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+index dd4fbb82750d5..a23806b69b0fa 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+@@ -1012,12 +1012,12 @@ static int rtw8822b_set_antenna(struct rtw_dev *rtwdev,
+ antenna_tx, antenna_rx);
+
+ if (!rtw8822b_check_rf_path(antenna_tx)) {
+- rtw_info(rtwdev, "unsupported tx path 0x%x\n", antenna_tx);
++ rtw_warn(rtwdev, "unsupported tx path 0x%x\n", antenna_tx);
+ return -EINVAL;
+ }
+
+ if (!rtw8822b_check_rf_path(antenna_rx)) {
+- rtw_info(rtwdev, "unsupported rx path 0x%x\n", antenna_rx);
++ rtw_warn(rtwdev, "unsupported rx path 0x%x\n", antenna_rx);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.c b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+index 35c46e5209de3..ddf4d1a23e605 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.c
+@@ -2798,7 +2798,7 @@ static int rtw8822c_set_antenna(struct rtw_dev *rtwdev,
+ case BB_PATH_AB:
+ break;
+ default:
+- rtw_info(rtwdev, "unsupported tx path 0x%x\n", antenna_tx);
++ rtw_warn(rtwdev, "unsupported tx path 0x%x\n", antenna_tx);
+ return -EINVAL;
+ }
+
+@@ -2808,7 +2808,7 @@ static int rtw8822c_set_antenna(struct rtw_dev *rtwdev,
+ case BB_PATH_AB:
+ break;
+ default:
+- rtw_info(rtwdev, "unsupported rx path 0x%x\n", antenna_rx);
++ rtw_warn(rtwdev, "unsupported rx path 0x%x\n", antenna_rx);
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw88/sar.c b/drivers/net/wireless/realtek/rtw88/sar.c
+index 3383726c4d90f..c472f1502b82a 100644
+--- a/drivers/net/wireless/realtek/rtw88/sar.c
++++ b/drivers/net/wireless/realtek/rtw88/sar.c
+@@ -91,10 +91,10 @@ int rtw_set_sar_specs(struct rtw_dev *rtwdev,
+ return -EINVAL;
+
+ power = sar->sub_specs[i].power;
+- rtw_info(rtwdev, "On freq %u to %u, set SAR %d in 1/%lu dBm\n",
+- rtw_common_sar_freq_ranges[idx].start_freq,
+- rtw_common_sar_freq_ranges[idx].end_freq,
+- power, BIT(RTW_COMMON_SAR_FCT));
++ rtw_dbg(rtwdev, RTW_DBG_REGD, "On freq %u to %u, set SAR %d in 1/%lu dBm\n",
++ rtw_common_sar_freq_ranges[idx].start_freq,
++ rtw_common_sar_freq_ranges[idx].end_freq,
++ power, BIT(RTW_COMMON_SAR_FCT));
+
+ for (j = 0; j < RTW_RF_PATH_MAX; j++) {
+ for (k = 0; k < RTW_RATE_SECTION_MAX; k++) {
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index a0737eea9f81d..9632e7f218dda 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -1509,11 +1509,12 @@ static void rtw89_core_txq_push(struct rtw89_dev *rtwdev,
+ unsigned long i;
+ int ret;
+
++ rcu_read_lock();
+ for (i = 0; i < frame_cnt; i++) {
+ skb = ieee80211_tx_dequeue_ni(rtwdev->hw, txq);
+ if (!skb) {
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX, "dequeue a NULL skb\n");
+- return;
++ goto out;
+ }
+ rtw89_core_txq_check_agg(rtwdev, rtwtxq, skb);
+ ret = rtw89_core_tx_write(rtwdev, vif, sta, skb, NULL);
+@@ -1523,6 +1524,8 @@ static void rtw89_core_txq_push(struct rtw89_dev *rtwdev,
+ break;
+ }
+ }
++out:
++ rcu_read_unlock();
+ }
+
+ static u32 rtw89_check_and_reclaim_tx_resource(struct rtw89_dev *rtwdev, u8 tid)
+diff --git a/drivers/opp/debugfs.c b/drivers/opp/debugfs.c
+index 596c185b5dda4..b5f2f9f393926 100644
+--- a/drivers/opp/debugfs.c
++++ b/drivers/opp/debugfs.c
+@@ -10,6 +10,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
++#include <linux/of.h>
+ #include <linux/init.h>
+ #include <linux/limits.h>
+ #include <linux/slab.h>
+@@ -131,9 +132,13 @@ void opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
+ debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend);
+ debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate);
+ debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate);
++ debugfs_create_u32("level", S_IRUGO, d, &opp->level);
+ debugfs_create_ulong("clock_latency_ns", S_IRUGO, d,
+ &opp->clock_latency_ns);
+
++ opp->of_name = of_node_full_name(opp->np);
++ debugfs_create_str("of_name", S_IRUGO, d, (char **)&opp->of_name);
++
+ opp_debug_create_supplies(opp, opp_table, d);
+ opp_debug_create_bw(opp, opp_table, d);
+
+diff --git a/drivers/opp/opp.h b/drivers/opp/opp.h
+index 407c3bfe51d96..45e3a55239a13 100644
+--- a/drivers/opp/opp.h
++++ b/drivers/opp/opp.h
+@@ -96,6 +96,7 @@ struct dev_pm_opp {
+
+ #ifdef CONFIG_DEBUG_FS
+ struct dentry *dentry;
++ const char *of_name;
+ #endif
+ };
+
+diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c
+index 952a92504df69..e33036281327d 100644
+--- a/drivers/parisc/dino.c
++++ b/drivers/parisc/dino.c
+@@ -142,9 +142,8 @@ struct dino_device
+ {
+ struct pci_hba_data hba; /* 'C' inheritance - must be first */
+ spinlock_t dinosaur_pen;
+- unsigned long txn_addr; /* EIR addr to generate interrupt */
+- u32 txn_data; /* EIR data assign to each dino */
+ u32 imr; /* IRQ's which are enabled */
++ struct gsc_irq gsc_irq;
+ int global_irq[DINO_LOCAL_IRQS]; /* map IMR bit to global irq */
+ #ifdef DINO_DEBUG
+ unsigned int dino_irr0; /* save most recent IRQ line stat */
+@@ -339,14 +338,43 @@ static void dino_unmask_irq(struct irq_data *d)
+ if (tmp & DINO_MASK_IRQ(local_irq)) {
+ DBG(KERN_WARNING "%s(): IRQ asserted! (ILR 0x%x)\n",
+ __func__, tmp);
+- gsc_writel(dino_dev->txn_data, dino_dev->txn_addr);
++ gsc_writel(dino_dev->gsc_irq.txn_data, dino_dev->gsc_irq.txn_addr);
+ }
+ }
+
++#ifdef CONFIG_SMP
++static int dino_set_affinity_irq(struct irq_data *d, const struct cpumask *dest,
++ bool force)
++{
++ struct dino_device *dino_dev = irq_data_get_irq_chip_data(d);
++ struct cpumask tmask;
++ int cpu_irq;
++ u32 eim;
++
++ if (!cpumask_and(&tmask, dest, cpu_online_mask))
++ return -EINVAL;
++
++ cpu_irq = cpu_check_affinity(d, &tmask);
++ if (cpu_irq < 0)
++ return cpu_irq;
++
++ dino_dev->gsc_irq.txn_addr = txn_affinity_addr(d->irq, cpu_irq);
++ eim = ((u32) dino_dev->gsc_irq.txn_addr) | dino_dev->gsc_irq.txn_data;
++ __raw_writel(eim, dino_dev->hba.base_addr+DINO_IAR0);
++
++ irq_data_update_effective_affinity(d, &tmask);
++
++ return IRQ_SET_MASK_OK;
++}
++#endif
++
+ static struct irq_chip dino_interrupt_type = {
+ .name = "GSC-PCI",
+ .irq_unmask = dino_unmask_irq,
+ .irq_mask = dino_mask_irq,
++#ifdef CONFIG_SMP
++ .irq_set_affinity = dino_set_affinity_irq,
++#endif
+ };
+
+
+@@ -806,7 +834,6 @@ static int __init dino_common_init(struct parisc_device *dev,
+ {
+ int status;
+ u32 eim;
+- struct gsc_irq gsc_irq;
+ struct resource *res;
+
+ pcibios_register_hba(&dino_dev->hba);
+@@ -821,10 +848,8 @@ static int __init dino_common_init(struct parisc_device *dev,
+ ** still only has 11 IRQ input lines - just map some of them
+ ** to a different processor.
+ */
+- dev->irq = gsc_alloc_irq(&gsc_irq);
+- dino_dev->txn_addr = gsc_irq.txn_addr;
+- dino_dev->txn_data = gsc_irq.txn_data;
+- eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++ dev->irq = gsc_alloc_irq(&dino_dev->gsc_irq);
++ eim = ((u32) dino_dev->gsc_irq.txn_addr) | dino_dev->gsc_irq.txn_data;
+
+ /*
+ ** Dino needs a PA "IRQ" to get a processor's attention.
+diff --git a/drivers/parisc/gsc.c b/drivers/parisc/gsc.c
+index ed9371acf37eb..ec175ae998733 100644
+--- a/drivers/parisc/gsc.c
++++ b/drivers/parisc/gsc.c
+@@ -135,10 +135,41 @@ static void gsc_asic_unmask_irq(struct irq_data *d)
+ */
+ }
+
++#ifdef CONFIG_SMP
++static int gsc_set_affinity_irq(struct irq_data *d, const struct cpumask *dest,
++ bool force)
++{
++ struct gsc_asic *gsc_dev = irq_data_get_irq_chip_data(d);
++ struct cpumask tmask;
++ int cpu_irq;
++
++ if (!cpumask_and(&tmask, dest, cpu_online_mask))
++ return -EINVAL;
++
++ cpu_irq = cpu_check_affinity(d, &tmask);
++ if (cpu_irq < 0)
++ return cpu_irq;
++
++ gsc_dev->gsc_irq.txn_addr = txn_affinity_addr(d->irq, cpu_irq);
++ gsc_dev->eim = ((u32) gsc_dev->gsc_irq.txn_addr) | gsc_dev->gsc_irq.txn_data;
++
++ /* switch IRQ's for devices below LASI/WAX to other CPU */
++ gsc_writel(gsc_dev->eim, gsc_dev->hpa + OFFSET_IAR);
++
++ irq_data_update_effective_affinity(d, &tmask);
++
++ return IRQ_SET_MASK_OK;
++}
++#endif
++
++
+ static struct irq_chip gsc_asic_interrupt_type = {
+ .name = "GSC-ASIC",
+ .irq_unmask = gsc_asic_unmask_irq,
+ .irq_mask = gsc_asic_mask_irq,
++#ifdef CONFIG_SMP
++ .irq_set_affinity = gsc_set_affinity_irq,
++#endif
+ };
+
+ int gsc_assign_irq(struct irq_chip *type, void *data)
+diff --git a/drivers/parisc/gsc.h b/drivers/parisc/gsc.h
+index 86abad3fa2150..73cbd0bb1975a 100644
+--- a/drivers/parisc/gsc.h
++++ b/drivers/parisc/gsc.h
+@@ -31,6 +31,7 @@ struct gsc_asic {
+ int version;
+ int type;
+ int eim;
++ struct gsc_irq gsc_irq;
+ int global_irq[32];
+ };
+
+diff --git a/drivers/parisc/lasi.c b/drivers/parisc/lasi.c
+index 4e4fd12c2112e..6ef621adb63a8 100644
+--- a/drivers/parisc/lasi.c
++++ b/drivers/parisc/lasi.c
+@@ -163,7 +163,6 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ {
+ extern void (*chassis_power_off)(void);
+ struct gsc_asic *lasi;
+- struct gsc_irq gsc_irq;
+ int ret;
+
+ lasi = kzalloc(sizeof(*lasi), GFP_KERNEL);
+@@ -185,7 +184,7 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ lasi_init_irq(lasi);
+
+ /* the IRQ lasi should use */
+- dev->irq = gsc_alloc_irq(&gsc_irq);
++ dev->irq = gsc_alloc_irq(&lasi->gsc_irq);
+ if (dev->irq < 0) {
+ printk(KERN_ERR "%s(): cannot get GSC irq\n",
+ __func__);
+@@ -193,9 +192,9 @@ static int __init lasi_init_chip(struct parisc_device *dev)
+ return -EBUSY;
+ }
+
+- lasi->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++ lasi->eim = ((u32) lasi->gsc_irq.txn_addr) | lasi->gsc_irq.txn_data;
+
+- ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "lasi", lasi);
++ ret = request_irq(lasi->gsc_irq.irq, gsc_asic_intr, 0, "lasi", lasi);
+ if (ret < 0) {
+ kfree(lasi);
+ return ret;
+diff --git a/drivers/parisc/wax.c b/drivers/parisc/wax.c
+index 5b6df15162354..73a2b01f8d9ca 100644
+--- a/drivers/parisc/wax.c
++++ b/drivers/parisc/wax.c
+@@ -68,7 +68,6 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ {
+ struct gsc_asic *wax;
+ struct parisc_device *parent;
+- struct gsc_irq gsc_irq;
+ int ret;
+
+ wax = kzalloc(sizeof(*wax), GFP_KERNEL);
+@@ -85,7 +84,7 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ wax_init_irq(wax);
+
+ /* the IRQ wax should use */
+- dev->irq = gsc_claim_irq(&gsc_irq, WAX_GSC_IRQ);
++ dev->irq = gsc_claim_irq(&wax->gsc_irq, WAX_GSC_IRQ);
+ if (dev->irq < 0) {
+ printk(KERN_ERR "%s(): cannot get GSC irq\n",
+ __func__);
+@@ -93,9 +92,9 @@ static int __init wax_init_chip(struct parisc_device *dev)
+ return -EBUSY;
+ }
+
+- wax->eim = ((u32) gsc_irq.txn_addr) | gsc_irq.txn_data;
++ wax->eim = ((u32) wax->gsc_irq.txn_addr) | wax->gsc_irq.txn_data;
+
+- ret = request_irq(gsc_irq.irq, gsc_asic_intr, 0, "wax", wax);
++ ret = request_irq(wax->gsc_irq.irq, gsc_asic_intr, 0, "wax", wax);
+ if (ret < 0) {
+ kfree(wax);
+ return ret;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 82e2c618d532d..15348be1a8aa5 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -1186,7 +1186,7 @@ static void advk_msi_irq_compose_msi_msg(struct irq_data *data,
+
+ msg->address_lo = lower_32_bits(msi_msg);
+ msg->address_hi = upper_32_bits(msi_msg);
+- msg->data = data->irq;
++ msg->data = data->hwirq;
+ }
+
+ static int advk_msi_set_affinity(struct irq_data *irq_data,
+@@ -1203,15 +1203,11 @@ static int advk_msi_irq_domain_alloc(struct irq_domain *domain,
+ int hwirq, i;
+
+ mutex_lock(&pcie->msi_used_lock);
+- hwirq = bitmap_find_next_zero_area(pcie->msi_used, MSI_IRQ_NUM,
+- 0, nr_irqs, 0);
+- if (hwirq >= MSI_IRQ_NUM) {
+- mutex_unlock(&pcie->msi_used_lock);
+- return -ENOSPC;
+- }
+-
+- bitmap_set(pcie->msi_used, hwirq, nr_irqs);
++ hwirq = bitmap_find_free_region(pcie->msi_used, MSI_IRQ_NUM,
++ order_base_2(nr_irqs));
+ mutex_unlock(&pcie->msi_used_lock);
++ if (hwirq < 0)
++ return -ENOSPC;
+
+ for (i = 0; i < nr_irqs; i++)
+ irq_domain_set_info(domain, virq + i, hwirq + i,
+@@ -1229,7 +1225,7 @@ static void advk_msi_irq_domain_free(struct irq_domain *domain,
+ struct advk_pcie *pcie = domain->host_data;
+
+ mutex_lock(&pcie->msi_used_lock);
+- bitmap_clear(pcie->msi_used, d->hwirq, nr_irqs);
++ bitmap_release_region(pcie->msi_used, d->hwirq, order_base_2(nr_irqs));
+ mutex_unlock(&pcie->msi_used_lock);
+ }
+
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 90d84d3bc868f..5b833f00e9800 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -285,7 +285,17 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
+ if (ret)
+ dev_err(dev, "Data transfer failed\n");
+ } else {
+- memcpy(dst_addr, src_addr, reg->size);
++ void *buf;
++
++ buf = kzalloc(reg->size, GFP_KERNEL);
++ if (!buf) {
++ ret = -ENOMEM;
++ goto err_map_addr;
++ }
++
++ memcpy_fromio(buf, src_addr, reg->size);
++ memcpy_toio(dst_addr, buf, reg->size);
++ kfree(buf);
+ }
+ ktime_get_ts64(&end);
+ pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
+@@ -441,7 +451,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
+ if (!epf_test->dma_supported) {
+ dev_err(dev, "Cannot transfer data using DMA\n");
+ ret = -EINVAL;
+- goto err_map_addr;
++ goto err_dma_map;
+ }
+
+ src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 85dce560831a8..040ae076ec0e9 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -1086,6 +1086,8 @@ static void quirk_cmd_compl(struct pci_dev *pdev)
+ }
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
++DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0110,
++ PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
+ PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401,
+diff --git a/drivers/perf/qcom_l2_pmu.c b/drivers/perf/qcom_l2_pmu.c
+index 7640491aab123..30234c261b05c 100644
+--- a/drivers/perf/qcom_l2_pmu.c
++++ b/drivers/perf/qcom_l2_pmu.c
+@@ -736,7 +736,7 @@ static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
+ {
+ u64 mpidr;
+ int cpu_cluster_id;
+- struct cluster_pmu *cluster = NULL;
++ struct cluster_pmu *cluster;
+
+ /*
+ * This assumes that the cluster_id is in MPIDR[aff1] for
+@@ -758,10 +758,10 @@ static struct cluster_pmu *l2_cache_associate_cpu_with_cluster(
+ cluster->cluster_id);
+ cpumask_set_cpu(cpu, &cluster->cluster_cpus);
+ *per_cpu_ptr(l2cache_pmu->pmu_cluster, cpu) = cluster;
+- break;
++ return cluster;
+ }
+
+- return cluster;
++ return NULL;
+ }
+
+ static int l2cache_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
+diff --git a/drivers/phy/amlogic/phy-meson-gxl-usb2.c b/drivers/phy/amlogic/phy-meson-gxl-usb2.c
+index 2b3c0d730f20f..db17c3448bfed 100644
+--- a/drivers/phy/amlogic/phy-meson-gxl-usb2.c
++++ b/drivers/phy/amlogic/phy-meson-gxl-usb2.c
+@@ -114,8 +114,10 @@ static int phy_meson_gxl_usb2_init(struct phy *phy)
+ return ret;
+
+ ret = clk_prepare_enable(priv->clk);
+- if (ret)
++ if (ret) {
++ reset_control_rearm(priv->reset);
+ return ret;
++ }
+
+ return 0;
+ }
+@@ -125,6 +127,7 @@ static int phy_meson_gxl_usb2_exit(struct phy *phy)
+ struct phy_meson_gxl_usb2_priv *priv = phy_get_drvdata(phy);
+
+ clk_disable_unprepare(priv->clk);
++ reset_control_rearm(priv->reset);
+
+ return 0;
+ }
+diff --git a/drivers/phy/amlogic/phy-meson8b-usb2.c b/drivers/phy/amlogic/phy-meson8b-usb2.c
+index cf10bed40528a..dd96763911b8b 100644
+--- a/drivers/phy/amlogic/phy-meson8b-usb2.c
++++ b/drivers/phy/amlogic/phy-meson8b-usb2.c
+@@ -154,6 +154,7 @@ static int phy_meson8b_usb2_power_on(struct phy *phy)
+ ret = clk_prepare_enable(priv->clk_usb_general);
+ if (ret) {
+ dev_err(&phy->dev, "Failed to enable USB general clock\n");
++ reset_control_rearm(priv->reset);
+ return ret;
+ }
+
+@@ -161,6 +162,7 @@ static int phy_meson8b_usb2_power_on(struct phy *phy)
+ if (ret) {
+ dev_err(&phy->dev, "Failed to enable USB DDR clock\n");
+ clk_disable_unprepare(priv->clk_usb_general);
++ reset_control_rearm(priv->reset);
+ return ret;
+ }
+
+@@ -199,6 +201,7 @@ static int phy_meson8b_usb2_power_on(struct phy *phy)
+ dev_warn(&phy->dev, "USB ID detect failed!\n");
+ clk_disable_unprepare(priv->clk_usb);
+ clk_disable_unprepare(priv->clk_usb_general);
++ reset_control_rearm(priv->reset);
+ return -EINVAL;
+ }
+ }
+@@ -218,6 +221,7 @@ static int phy_meson8b_usb2_power_off(struct phy *phy)
+
+ clk_disable_unprepare(priv->clk_usb);
+ clk_disable_unprepare(priv->clk_usb_general);
++ reset_control_rearm(priv->reset);
+
+ /* power off the PHY by putting it into reset mode */
+ regmap_update_bits(priv->regmap, REG_CTRL, REG_CTRL_POWER_ON_RESET,
+@@ -265,8 +269,9 @@ static int phy_meson8b_usb2_probe(struct platform_device *pdev)
+ return PTR_ERR(priv->clk_usb);
+
+ priv->reset = devm_reset_control_get_optional_shared(&pdev->dev, NULL);
+- if (PTR_ERR(priv->reset) == -EPROBE_DEFER)
+- return PTR_ERR(priv->reset);
++ if (IS_ERR(priv->reset))
++ return dev_err_probe(&pdev->dev, PTR_ERR(priv->reset),
++ "Failed to get the reset line");
+
+ priv->dr_mode = of_usb_get_dr_mode_by_phy(pdev->dev.of_node, -1);
+ if (priv->dr_mode == USB_DR_MODE_UNKNOWN) {
+diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
+index 24deeeb29af21..53abd553b842e 100644
+--- a/drivers/platform/x86/Kconfig
++++ b/drivers/platform/x86/Kconfig
+@@ -1027,7 +1027,7 @@ config TOUCHSCREEN_DMI
+
+ config X86_ANDROID_TABLETS
+ tristate "X86 Android tablet support"
+- depends on I2C && SERIAL_DEV_BUS && ACPI && GPIOLIB
++ depends on I2C && SPI && SERIAL_DEV_BUS && ACPI && EFI && GPIOLIB
+ help
+ X86 tablets which ship with Android as (part of) the factory image
+ typically have various problems with their DSDTs. The factory kernels
+diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
+index 48a46466f0862..88f0bfd6ecf1a 100644
+--- a/drivers/platform/x86/hp-wmi.c
++++ b/drivers/platform/x86/hp-wmi.c
+@@ -35,10 +35,6 @@ MODULE_LICENSE("GPL");
+ MODULE_ALIAS("wmi:95F24279-4D7B-4334-9387-ACCDC67EF61C");
+ MODULE_ALIAS("wmi:5FB7F034-2C63-45e9-BE91-3D44E2C707E4");
+
+-static int enable_tablet_mode_sw = -1;
+-module_param(enable_tablet_mode_sw, int, 0444);
+-MODULE_PARM_DESC(enable_tablet_mode_sw, "Enable SW_TABLET_MODE reporting (-1=auto, 0=no, 1=yes)");
+-
+ #define HPWMI_EVENT_GUID "95F24279-4D7B-4334-9387-ACCDC67EF61C"
+ #define HPWMI_BIOS_GUID "5FB7F034-2C63-45e9-BE91-3D44E2C707E4"
+ #define HP_OMEN_EC_THERMAL_PROFILE_OFFSET 0x95
+@@ -107,6 +103,7 @@ enum hp_wmi_commandtype {
+ HPWMI_FEATURE2_QUERY = 0x0d,
+ HPWMI_WIRELESS2_QUERY = 0x1b,
+ HPWMI_POSTCODEERROR_QUERY = 0x2a,
++ HPWMI_SYSTEM_DEVICE_MODE = 0x40,
+ HPWMI_THERMAL_PROFILE_QUERY = 0x4c,
+ };
+
+@@ -217,6 +214,19 @@ struct rfkill2_device {
+ static int rfkill2_count;
+ static struct rfkill2_device rfkill2[HPWMI_MAX_RFKILL2_DEVICES];
+
++/*
++ * Chassis Types values were obtained from SMBIOS reference
++ * specification version 3.00. A complete list of system enclosures
++ * and chassis types is available on Table 17.
++ */
++static const char * const tablet_chassis_types[] = {
++ "30", /* Tablet*/
++ "31", /* Convertible */
++ "32" /* Detachable */
++};
++
++#define DEVICE_MODE_TABLET 0x06
++
+ /* map output size to the corresponding WMI method id */
+ static inline int encode_outsize_for_pvsz(int outsize)
+ {
+@@ -320,7 +330,7 @@ static int hp_wmi_get_fan_speed(int fan)
+ char fan_data[4] = { fan, 0, 0, 0 };
+
+ int ret = hp_wmi_perform_query(HPWMI_FAN_SPEED_GET_QUERY, HPWMI_GM,
+- &fan_data, sizeof(fan_data),
++ &fan_data, sizeof(char),
+ sizeof(fan_data));
+
+ if (ret != 0)
+@@ -345,14 +355,39 @@ static int hp_wmi_read_int(int query)
+ return val;
+ }
+
+-static int hp_wmi_hw_state(int mask)
++static int hp_wmi_get_dock_state(void)
+ {
+ int state = hp_wmi_read_int(HPWMI_HARDWARE_QUERY);
+
+ if (state < 0)
+ return state;
+
+- return !!(state & mask);
++ return !!(state & HPWMI_DOCK_MASK);
++}
++
++static int hp_wmi_get_tablet_mode(void)
++{
++ char system_device_mode[4] = { 0 };
++ const char *chassis_type;
++ bool tablet_found;
++ int ret;
++
++ chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
++ if (!chassis_type)
++ return -ENODEV;
++
++ tablet_found = match_string(tablet_chassis_types,
++ ARRAY_SIZE(tablet_chassis_types),
++ chassis_type) >= 0;
++ if (!tablet_found)
++ return -ENODEV;
++
++ ret = hp_wmi_perform_query(HPWMI_SYSTEM_DEVICE_MODE, HPWMI_READ,
++ system_device_mode, 0, sizeof(system_device_mode));
++ if (ret < 0)
++ return ret;
++
++ return system_device_mode[0] == DEVICE_MODE_TABLET;
+ }
+
+ static int omen_thermal_profile_set(int mode)
+@@ -364,7 +399,7 @@ static int omen_thermal_profile_set(int mode)
+ return -EINVAL;
+
+ ret = hp_wmi_perform_query(HPWMI_SET_PERFORMANCE_MODE, HPWMI_GM,
+- &buffer, sizeof(buffer), sizeof(buffer));
++ &buffer, sizeof(buffer), 0);
+
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+@@ -401,7 +436,7 @@ static int hp_wmi_fan_speed_max_set(int enabled)
+ int ret;
+
+ ret = hp_wmi_perform_query(HPWMI_FAN_SPEED_MAX_SET_QUERY, HPWMI_GM,
+- &enabled, sizeof(enabled), sizeof(enabled));
++ &enabled, sizeof(enabled), 0);
+
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+@@ -414,7 +449,7 @@ static int hp_wmi_fan_speed_max_get(void)
+ int val = 0, ret;
+
+ ret = hp_wmi_perform_query(HPWMI_FAN_SPEED_MAX_GET_QUERY, HPWMI_GM,
+- &val, sizeof(val), sizeof(val));
++ &val, 0, sizeof(val));
+
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+@@ -426,7 +461,7 @@ static int __init hp_wmi_bios_2008_later(void)
+ {
+ int state = 0;
+ int ret = hp_wmi_perform_query(HPWMI_FEATURE_QUERY, HPWMI_READ, &state,
+- sizeof(state), sizeof(state));
++ 0, sizeof(state));
+ if (!ret)
+ return 1;
+
+@@ -437,7 +472,7 @@ static int __init hp_wmi_bios_2009_later(void)
+ {
+ u8 state[128];
+ int ret = hp_wmi_perform_query(HPWMI_FEATURE2_QUERY, HPWMI_READ, &state,
+- sizeof(state), sizeof(state));
++ 0, sizeof(state));
+ if (!ret)
+ return 1;
+
+@@ -515,7 +550,7 @@ static int hp_wmi_rfkill2_refresh(void)
+ int err, i;
+
+ err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+- sizeof(state), sizeof(state));
++ 0, sizeof(state));
+ if (err)
+ return err;
+
+@@ -568,7 +603,7 @@ static ssize_t als_show(struct device *dev, struct device_attribute *attr,
+ static ssize_t dock_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- int value = hp_wmi_hw_state(HPWMI_DOCK_MASK);
++ int value = hp_wmi_get_dock_state();
+ if (value < 0)
+ return value;
+ return sprintf(buf, "%d\n", value);
+@@ -577,7 +612,7 @@ static ssize_t dock_show(struct device *dev, struct device_attribute *attr,
+ static ssize_t tablet_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- int value = hp_wmi_hw_state(HPWMI_TABLET_MASK);
++ int value = hp_wmi_get_tablet_mode();
+ if (value < 0)
+ return value;
+ return sprintf(buf, "%d\n", value);
+@@ -604,7 +639,7 @@ static ssize_t als_store(struct device *dev, struct device_attribute *attr,
+ return ret;
+
+ ret = hp_wmi_perform_query(HPWMI_ALS_QUERY, HPWMI_WRITE, &tmp,
+- sizeof(tmp), sizeof(tmp));
++ sizeof(tmp), 0);
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+
+@@ -625,9 +660,9 @@ static ssize_t postcode_store(struct device *dev, struct device_attribute *attr,
+ if (clear == false)
+ return -EINVAL;
+
+- /* Clear the POST error code. It is kept until until cleared. */
++ /* Clear the POST error code. It is kept until cleared. */
+ ret = hp_wmi_perform_query(HPWMI_POSTCODEERROR_QUERY, HPWMI_WRITE, &tmp,
+- sizeof(tmp), sizeof(tmp));
++ sizeof(tmp), 0);
+ if (ret)
+ return ret < 0 ? ret : -EINVAL;
+
+@@ -699,10 +734,10 @@ static void hp_wmi_notify(u32 value, void *context)
+ case HPWMI_DOCK_EVENT:
+ if (test_bit(SW_DOCK, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_DOCK,
+- hp_wmi_hw_state(HPWMI_DOCK_MASK));
++ hp_wmi_get_dock_state());
+ if (test_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
+- hp_wmi_hw_state(HPWMI_TABLET_MASK));
++ hp_wmi_get_tablet_mode());
+ input_sync(hp_wmi_input_dev);
+ break;
+ case HPWMI_PARK_HDD:
+@@ -780,19 +815,17 @@ static int __init hp_wmi_input_setup(void)
+ __set_bit(EV_SW, hp_wmi_input_dev->evbit);
+
+ /* Dock */
+- val = hp_wmi_hw_state(HPWMI_DOCK_MASK);
++ val = hp_wmi_get_dock_state();
+ if (!(val < 0)) {
+ __set_bit(SW_DOCK, hp_wmi_input_dev->swbit);
+ input_report_switch(hp_wmi_input_dev, SW_DOCK, val);
+ }
+
+ /* Tablet mode */
+- if (enable_tablet_mode_sw > 0) {
+- val = hp_wmi_hw_state(HPWMI_TABLET_MASK);
+- if (val >= 0) {
+- __set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
+- input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE, val);
+- }
++ val = hp_wmi_get_tablet_mode();
++ if (!(val < 0)) {
++ __set_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit);
++ input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE, val);
+ }
+
+ err = sparse_keymap_setup(hp_wmi_input_dev, hp_wmi_keymap, NULL);
+@@ -919,7 +952,7 @@ static int __init hp_wmi_rfkill2_setup(struct platform_device *device)
+ int err, i;
+
+ err = hp_wmi_perform_query(HPWMI_WIRELESS2_QUERY, HPWMI_READ, &state,
+- sizeof(state), sizeof(state));
++ 0, sizeof(state));
+ if (err)
+ return err < 0 ? err : -EINVAL;
+
+@@ -1227,10 +1260,10 @@ static int hp_wmi_resume_handler(struct device *device)
+ if (hp_wmi_input_dev) {
+ if (test_bit(SW_DOCK, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_DOCK,
+- hp_wmi_hw_state(HPWMI_DOCK_MASK));
++ hp_wmi_get_dock_state());
+ if (test_bit(SW_TABLET_MODE, hp_wmi_input_dev->swbit))
+ input_report_switch(hp_wmi_input_dev, SW_TABLET_MODE,
+- hp_wmi_hw_state(HPWMI_TABLET_MASK));
++ hp_wmi_get_tablet_mode());
+ input_sync(hp_wmi_input_dev);
+ }
+
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 3424b080db772..3fb8cda31eb9e 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -8699,10 +8699,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_Q_LNV3('N', '2', 'N', TPACPI_FAN_2CTL), /* P53 / P73 */
+ TPACPI_Q_LNV3('N', '2', 'E', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (1st gen) */
+ TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (2nd gen) */
+- TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (3nd gen) */
+- TPACPI_Q_LNV3('N', '4', '0', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (4nd gen) */
+ TPACPI_Q_LNV3('N', '3', '0', TPACPI_FAN_2CTL), /* P15 (1st gen) / P15v (1st gen) */
+- TPACPI_Q_LNV3('N', '3', '2', TPACPI_FAN_2CTL), /* X1 Carbon (9th gen) */
+ TPACPI_Q_LNV3('N', '3', '7', TPACPI_FAN_2CTL), /* T15g (2nd gen) */
+ TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */
+ };
+@@ -8746,6 +8743,9 @@ static int __init fan_init(struct ibm_init_struct *iibm)
+ * ThinkPad ECs supports the fan control register */
+ if (likely(acpi_ec_read(fan_status_offset,
+ &fan_control_initial_status))) {
++ int res;
++ unsigned int speed;
++
+ fan_status_access_mode = TPACPI_FAN_RD_TPEC;
+ if (quirks & TPACPI_FAN_Q1)
+ fan_quirk1_setup();
+@@ -8758,6 +8758,15 @@ static int __init fan_init(struct ibm_init_struct *iibm)
+ tp_features.second_fan_ctl = 1;
+ pr_info("secondary fan control enabled\n");
+ }
++ /* Try and probe the 2nd fan */
++ res = fan2_get_speed(&speed);
++ if (res >= 0) {
++ /* It responded - so let's assume it's there */
++ tp_features.second_fan = 1;
++ tp_features.second_fan_ctl = 1;
++ pr_info("secondary fan control detected & enabled\n");
++ }
++
+ } else {
+ pr_err("ThinkPad ACPI EC access misbehaving, fan status and control unavailable\n");
+ return -ENODEV;
+diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig
+index b366e2fd8e97f..5e4a693528111 100644
+--- a/drivers/power/supply/Kconfig
++++ b/drivers/power/supply/Kconfig
+@@ -351,14 +351,14 @@ config AXP20X_POWER
+
+ config AXP288_CHARGER
+ tristate "X-Powers AXP288 Charger"
+- depends on MFD_AXP20X && EXTCON_AXP288 && IOSF_MBI
++ depends on MFD_AXP20X && EXTCON_AXP288 && IOSF_MBI && ACPI
+ help
+ Say yes here to have support X-Power AXP288 power management IC (PMIC)
+ integrated charger.
+
+ config AXP288_FUEL_GAUGE
+ tristate "X-Powers AXP288 Fuel Gauge"
+- depends on MFD_AXP20X && IIO && IOSF_MBI
++ depends on MFD_AXP20X && IIO && IOSF_MBI && ACPI
+ help
+ Say yes here to have support for X-Power power management IC (PMIC)
+ Fuel Gauge. The device provides battery statistics and status
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index 5d197141f4760..9106077c0dbb4 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -186,7 +186,6 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ union power_supply_propval *val)
+ {
+ struct axp20x_batt_ps *axp20x_batt = power_supply_get_drvdata(psy);
+- struct iio_channel *chan;
+ int ret = 0, reg, val1;
+
+ switch (psp) {
+@@ -266,12 +265,12 @@ static int axp20x_battery_get_prop(struct power_supply *psy,
+ if (ret)
+ return ret;
+
+- if (reg & AXP20X_PWR_STATUS_BAT_CHARGING)
+- chan = axp20x_batt->batt_chrg_i;
+- else
+- chan = axp20x_batt->batt_dischrg_i;
+-
+- ret = iio_read_channel_processed(chan, &val->intval);
++ if (reg & AXP20X_PWR_STATUS_BAT_CHARGING) {
++ ret = iio_read_channel_processed(axp20x_batt->batt_chrg_i, &val->intval);
++ } else {
++ ret = iio_read_channel_processed(axp20x_batt->batt_dischrg_i, &val1);
++ val->intval = -val1;
++ }
+ if (ret)
+ return ret;
+
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index ec41f6cd3f93f..19746e658a6a8 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -42,11 +42,11 @@
+ #define VBUS_ISPOUT_CUR_LIM_1500MA 0x1 /* 1500mA */
+ #define VBUS_ISPOUT_CUR_LIM_2000MA 0x2 /* 2000mA */
+ #define VBUS_ISPOUT_CUR_NO_LIM 0x3 /* 2500mA */
+-#define VBUS_ISPOUT_VHOLD_SET_MASK 0x31
++#define VBUS_ISPOUT_VHOLD_SET_MASK 0x38
+ #define VBUS_ISPOUT_VHOLD_SET_BIT_POS 0x3
+ #define VBUS_ISPOUT_VHOLD_SET_OFFSET 4000 /* 4000mV */
+ #define VBUS_ISPOUT_VHOLD_SET_LSB_RES 100 /* 100mV */
+-#define VBUS_ISPOUT_VHOLD_SET_4300MV 0x3 /* 4300mV */
++#define VBUS_ISPOUT_VHOLD_SET_4400MV 0x4 /* 4400mV */
+ #define VBUS_ISPOUT_VBUS_PATH_DIS BIT(7)
+
+ #define CHRG_CCCV_CC_MASK 0xf /* 4 bits */
+@@ -769,6 +769,16 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ ret = axp288_charger_vbus_path_select(info, true);
+ if (ret < 0)
+ return ret;
++ } else {
++ /* Set Vhold to the factory default / recommended 4.4V */
++ val = VBUS_ISPOUT_VHOLD_SET_4400MV << VBUS_ISPOUT_VHOLD_SET_BIT_POS;
++ ret = regmap_update_bits(info->regmap, AXP20X_VBUS_IPSOUT_MGMT,
++ VBUS_ISPOUT_VHOLD_SET_MASK, val);
++ if (ret < 0) {
++ dev_err(&info->pdev->dev, "register(%x) write error(%d)\n",
++ AXP20X_VBUS_IPSOUT_MGMT, ret);
++ return ret;
++ }
+ }
+
+ /* Read current charge voltage and current limit */
+@@ -828,6 +838,13 @@ static int axp288_charger_probe(struct platform_device *pdev)
+ struct power_supply_config charger_cfg = {};
+ unsigned int val;
+
++ /*
++ * Normally the native AXP288 fg/charger drivers are preferred but
++ * on some devices the ACPI drivers should be used instead.
++ */
++ if (!acpi_quirk_skip_acpi_ac_and_battery())
++ return -ENODEV;
++
+ /*
+ * On some devices the fuelgauge and charger parts of the axp288 are
+ * not used, check that the fuelgauge is enabled (CC_CTRL != 0).
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index c1da217fdb0e2..ce8ffd0a41b5a 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -9,6 +9,7 @@
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
++#include <linux/acpi.h>
+ #include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+@@ -560,12 +561,6 @@ static const struct dmi_system_id axp288_no_battery_list[] = {
+ DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
+ },
+ },
+- {
+- /* ECS EF20EA */
+- .matches = {
+- DMI_MATCH(DMI_PRODUCT_NAME, "EF20EA"),
+- },
+- },
+ {
+ /* Intel Cherry Trail Compute Stick, Windows version */
+ .matches = {
+@@ -624,6 +619,13 @@ static int axp288_fuel_gauge_probe(struct platform_device *pdev)
+ };
+ unsigned int val;
+
++ /*
++ * Normally the native AXP288 fg/charger drivers are preferred but
++ * on some devices the ACPI drivers should be used instead.
++ */
++ if (!acpi_quirk_skip_acpi_ac_and_battery())
++ return -ENODEV;
++
+ if (dmi_check_system(axp288_no_battery_list))
+ return -ENODEV;
+
+diff --git a/drivers/ptp/ptp_sysfs.c b/drivers/ptp/ptp_sysfs.c
+index 41b92dc2f011a..9233bfedeb174 100644
+--- a/drivers/ptp/ptp_sysfs.c
++++ b/drivers/ptp/ptp_sysfs.c
+@@ -14,7 +14,7 @@ static ssize_t clock_name_show(struct device *dev,
+ struct device_attribute *attr, char *page)
+ {
+ struct ptp_clock *ptp = dev_get_drvdata(dev);
+- return snprintf(page, PAGE_SIZE-1, "%s\n", ptp->info->name);
++ return sysfs_emit(page, "%s\n", ptp->info->name);
+ }
+ static DEVICE_ATTR_RO(clock_name);
+
+@@ -387,7 +387,7 @@ static ssize_t ptp_pin_show(struct device *dev, struct device_attribute *attr,
+
+ mutex_unlock(&ptp->pincfg_mux);
+
+- return snprintf(page, PAGE_SIZE, "%u %u\n", func, chan);
++ return sysfs_emit(page, "%u %u\n", func, chan);
+ }
+
+ static ssize_t ptp_pin_store(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/regulator/atc260x-regulator.c b/drivers/regulator/atc260x-regulator.c
+index 05147d2c38428..485e58b264c04 100644
+--- a/drivers/regulator/atc260x-regulator.c
++++ b/drivers/regulator/atc260x-regulator.c
+@@ -292,6 +292,7 @@ enum atc2603c_reg_ids {
+ .bypass_mask = BIT(5), \
+ .active_discharge_reg = ATC2603C_PMU_SWITCH_CTL, \
+ .active_discharge_mask = BIT(1), \
++ .active_discharge_on = BIT(1), \
+ .owner = THIS_MODULE, \
+ }
+
+diff --git a/drivers/regulator/rtq2134-regulator.c b/drivers/regulator/rtq2134-regulator.c
+index f21e3f8b21f23..8e13dea354a21 100644
+--- a/drivers/regulator/rtq2134-regulator.c
++++ b/drivers/regulator/rtq2134-regulator.c
+@@ -285,6 +285,7 @@ static const unsigned int rtq2134_buck_ramp_delay_table[] = {
+ .enable_mask = RTQ2134_VOUTEN_MASK, \
+ .active_discharge_reg = RTQ2134_REG_BUCK##_id##_CFG0, \
+ .active_discharge_mask = RTQ2134_ACTDISCHG_MASK, \
++ .active_discharge_on = RTQ2134_ACTDISCHG_MASK, \
+ .ramp_reg = RTQ2134_REG_BUCK##_id##_RSPCFG, \
+ .ramp_mask = RTQ2134_RSPUP_MASK, \
+ .ramp_delay_table = rtq2134_buck_ramp_delay_table, \
+diff --git a/drivers/rtc/rtc-wm8350.c b/drivers/rtc/rtc-wm8350.c
+index 2018614f258f6..6eaa9321c0741 100644
+--- a/drivers/rtc/rtc-wm8350.c
++++ b/drivers/rtc/rtc-wm8350.c
+@@ -432,14 +432,21 @@ static int wm8350_rtc_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- wm8350_register_irq(wm8350, WM8350_IRQ_RTC_SEC,
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_RTC_SEC,
+ wm8350_rtc_update_handler, 0,
+ "RTC Seconds", wm8350);
++ if (ret)
++ return ret;
++
+ wm8350_mask_irq(wm8350, WM8350_IRQ_RTC_SEC);
+
+- wm8350_register_irq(wm8350, WM8350_IRQ_RTC_ALM,
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_RTC_ALM,
+ wm8350_rtc_alarm_handler, 0,
+ "RTC Alarm", wm8350);
++ if (ret) {
++ wm8350_free_irq(wm8350, WM8350_IRQ_RTC_SEC, wm8350);
++ return ret;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/scsi/aha152x.c b/drivers/scsi/aha152x.c
+index d17880b57d17b..2449b4215b32d 100644
+--- a/drivers/scsi/aha152x.c
++++ b/drivers/scsi/aha152x.c
+@@ -3375,13 +3375,11 @@ static int __init aha152x_setup(char *str)
+ setup[setup_count].synchronous = ints[0] >= 6 ? ints[6] : 1;
+ setup[setup_count].delay = ints[0] >= 7 ? ints[7] : DELAY_DEFAULT;
+ setup[setup_count].ext_trans = ints[0] >= 8 ? ints[8] : 0;
+- if (ints[0] > 8) { /*}*/
++ if (ints[0] > 8)
+ printk(KERN_NOTICE "aha152x: usage: aha152x=<IOBASE>[,<IRQ>[,<SCSI ID>"
+ "[,<RECONNECT>[,<PARITY>[,<SYNCHRONOUS>[,<DELAY>[,<EXT_TRANS>]]]]]]]\n");
+- } else {
++ else
+ setup_count++;
+- return 0;
+- }
+
+ return 1;
+ }
+diff --git a/drivers/scsi/bfa/bfad_attr.c b/drivers/scsi/bfa/bfad_attr.c
+index f46989bd083cc..5a85401e9e2d3 100644
+--- a/drivers/scsi/bfa/bfad_attr.c
++++ b/drivers/scsi/bfa/bfad_attr.c
+@@ -711,7 +711,7 @@ bfad_im_serial_num_show(struct device *dev, struct device_attribute *attr,
+ char serial_num[BFA_ADAPTER_SERIAL_NUM_LEN];
+
+ bfa_get_adapter_serial_num(&bfad->bfa, serial_num);
+- return snprintf(buf, PAGE_SIZE, "%s\n", serial_num);
++ return sysfs_emit(buf, "%s\n", serial_num);
+ }
+
+ static ssize_t
+@@ -725,7 +725,7 @@ bfad_im_model_show(struct device *dev, struct device_attribute *attr,
+ char model[BFA_ADAPTER_MODEL_NAME_LEN];
+
+ bfa_get_adapter_model(&bfad->bfa, model);
+- return snprintf(buf, PAGE_SIZE, "%s\n", model);
++ return sysfs_emit(buf, "%s\n", model);
+ }
+
+ static ssize_t
+@@ -805,7 +805,7 @@ bfad_im_model_desc_show(struct device *dev, struct device_attribute *attr,
+ snprintf(model_descr, BFA_ADAPTER_MODEL_DESCR_LEN,
+ "Invalid Model");
+
+- return snprintf(buf, PAGE_SIZE, "%s\n", model_descr);
++ return sysfs_emit(buf, "%s\n", model_descr);
+ }
+
+ static ssize_t
+@@ -819,7 +819,7 @@ bfad_im_node_name_show(struct device *dev, struct device_attribute *attr,
+ u64 nwwn;
+
+ nwwn = bfa_fcs_lport_get_nwwn(port->fcs_port);
+- return snprintf(buf, PAGE_SIZE, "0x%llx\n", cpu_to_be64(nwwn));
++ return sysfs_emit(buf, "0x%llx\n", cpu_to_be64(nwwn));
+ }
+
+ static ssize_t
+@@ -836,7 +836,7 @@ bfad_im_symbolic_name_show(struct device *dev, struct device_attribute *attr,
+ bfa_fcs_lport_get_attr(&bfad->bfa_fcs.fabric.bport, &port_attr);
+ strlcpy(symname, port_attr.port_cfg.sym_name.symname,
+ BFA_SYMNAME_MAXLEN);
+- return snprintf(buf, PAGE_SIZE, "%s\n", symname);
++ return sysfs_emit(buf, "%s\n", symname);
+ }
+
+ static ssize_t
+@@ -850,14 +850,14 @@ bfad_im_hw_version_show(struct device *dev, struct device_attribute *attr,
+ char hw_ver[BFA_VERSION_LEN];
+
+ bfa_get_pci_chip_rev(&bfad->bfa, hw_ver);
+- return snprintf(buf, PAGE_SIZE, "%s\n", hw_ver);
++ return sysfs_emit(buf, "%s\n", hw_ver);
+ }
+
+ static ssize_t
+ bfad_im_drv_version_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_VERSION);
++ return sysfs_emit(buf, "%s\n", BFAD_DRIVER_VERSION);
+ }
+
+ static ssize_t
+@@ -871,7 +871,7 @@ bfad_im_optionrom_version_show(struct device *dev,
+ char optrom_ver[BFA_VERSION_LEN];
+
+ bfa_get_adapter_optrom_ver(&bfad->bfa, optrom_ver);
+- return snprintf(buf, PAGE_SIZE, "%s\n", optrom_ver);
++ return sysfs_emit(buf, "%s\n", optrom_ver);
+ }
+
+ static ssize_t
+@@ -885,7 +885,7 @@ bfad_im_fw_version_show(struct device *dev, struct device_attribute *attr,
+ char fw_ver[BFA_VERSION_LEN];
+
+ bfa_get_adapter_fw_ver(&bfad->bfa, fw_ver);
+- return snprintf(buf, PAGE_SIZE, "%s\n", fw_ver);
++ return sysfs_emit(buf, "%s\n", fw_ver);
+ }
+
+ static ssize_t
+@@ -897,7 +897,7 @@ bfad_im_num_of_ports_show(struct device *dev, struct device_attribute *attr,
+ (struct bfad_im_port_s *) shost->hostdata[0];
+ struct bfad_s *bfad = im_port->bfad;
+
+- return snprintf(buf, PAGE_SIZE, "%d\n",
++ return sysfs_emit(buf, "%d\n",
+ bfa_get_nports(&bfad->bfa));
+ }
+
+@@ -905,7 +905,7 @@ static ssize_t
+ bfad_im_drv_name_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- return snprintf(buf, PAGE_SIZE, "%s\n", BFAD_DRIVER_NAME);
++ return sysfs_emit(buf, "%s\n", BFAD_DRIVER_NAME);
+ }
+
+ static ssize_t
+@@ -924,14 +924,14 @@ bfad_im_num_of_discovered_ports_show(struct device *dev,
+ rports = kcalloc(nrports, sizeof(struct bfa_rport_qualifier_s),
+ GFP_ATOMIC);
+ if (rports == NULL)
+- return snprintf(buf, PAGE_SIZE, "Failed\n");
++ return sysfs_emit(buf, "Failed\n");
+
+ spin_lock_irqsave(&bfad->bfad_lock, flags);
+ bfa_fcs_lport_get_rport_quals(port->fcs_port, rports, &nrports);
+ spin_unlock_irqrestore(&bfad->bfad_lock, flags);
+ kfree(rports);
+
+- return snprintf(buf, PAGE_SIZE, "%d\n", nrports);
++ return sysfs_emit(buf, "%d\n", nrports);
+ }
+
+ static DEVICE_ATTR(serial_number, S_IRUGO,
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 70173389f6ebd..52089538e9de6 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2398,17 +2398,25 @@ static irqreturn_t cq_interrupt_v3_hw(int irq_no, void *p)
+ return IRQ_WAKE_THREAD;
+ }
+
++static void hisi_sas_v3_free_vectors(void *data)
++{
++ struct pci_dev *pdev = data;
++
++ pci_free_irq_vectors(pdev);
++}
++
+ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
+ {
+ int vectors;
+ int max_msi = HISI_SAS_MSI_COUNT_V3_HW, min_msi;
+ struct Scsi_Host *shost = hisi_hba->shost;
++ struct pci_dev *pdev = hisi_hba->pci_dev;
+ struct irq_affinity desc = {
+ .pre_vectors = BASE_VECTORS_V3_HW,
+ };
+
+ min_msi = MIN_AFFINE_VECTORS_V3_HW;
+- vectors = pci_alloc_irq_vectors_affinity(hisi_hba->pci_dev,
++ vectors = pci_alloc_irq_vectors_affinity(pdev,
+ min_msi, max_msi,
+ PCI_IRQ_MSI |
+ PCI_IRQ_AFFINITY,
+@@ -2420,6 +2428,7 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
+ hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW;
+ shost->nr_hw_queues = hisi_hba->cq_nvecs;
+
++ devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
+ return 0;
+ }
+
+@@ -3967,6 +3976,54 @@ static const struct file_operations debugfs_bist_phy_v3_hw_fops = {
+ .owner = THIS_MODULE,
+ };
+
++static ssize_t debugfs_bist_cnt_v3_hw_write(struct file *filp,
++ const char __user *buf,
++ size_t count, loff_t *ppos)
++{
++ struct seq_file *m = filp->private_data;
++ struct hisi_hba *hisi_hba = m->private;
++ unsigned int cnt;
++ int val;
++
++ if (hisi_hba->debugfs_bist_enable)
++ return -EPERM;
++
++ val = kstrtouint_from_user(buf, count, 0, &cnt);
++ if (val)
++ return val;
++
++ if (cnt)
++ return -EINVAL;
++
++ hisi_hba->debugfs_bist_cnt = 0;
++ return count;
++}
++
++static int debugfs_bist_cnt_v3_hw_show(struct seq_file *s, void *p)
++{
++ struct hisi_hba *hisi_hba = s->private;
++
++ seq_printf(s, "%u\n", hisi_hba->debugfs_bist_cnt);
++
++ return 0;
++}
++
++static int debugfs_bist_cnt_v3_hw_open(struct inode *inode,
++ struct file *filp)
++{
++ return single_open(filp, debugfs_bist_cnt_v3_hw_show,
++ inode->i_private);
++}
++
++static const struct file_operations debugfs_bist_cnt_v3_hw_ops = {
++ .open = debugfs_bist_cnt_v3_hw_open,
++ .read = seq_read,
++ .write = debugfs_bist_cnt_v3_hw_write,
++ .llseek = seq_lseek,
++ .release = single_release,
++ .owner = THIS_MODULE,
++};
++
+ static const struct {
+ int value;
+ char *name;
+@@ -4604,8 +4661,8 @@ static void debugfs_bist_init_v3_hw(struct hisi_hba *hisi_hba)
+ debugfs_create_file("phy_id", 0600, hisi_hba->debugfs_bist_dentry,
+ hisi_hba, &debugfs_bist_phy_v3_hw_fops);
+
+- debugfs_create_u32("cnt", 0600, hisi_hba->debugfs_bist_dentry,
+- &hisi_hba->debugfs_bist_cnt);
++ debugfs_create_file("cnt", 0600, hisi_hba->debugfs_bist_dentry,
++ hisi_hba, &debugfs_bist_cnt_v3_hw_ops);
+
+ debugfs_create_file("loopback_mode", 0600,
+ hisi_hba->debugfs_bist_dentry,
+@@ -4769,7 +4826,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+
+ rc = scsi_add_host(shost, dev);
+ if (rc)
+- goto err_out_free_irq_vectors;
++ goto err_out_debugfs;
+
+ rc = sas_register_ha(sha);
+ if (rc)
+@@ -4800,8 +4857,6 @@ err_out_hw_init:
+ sas_unregister_ha(sha);
+ err_out_register_ha:
+ scsi_remove_host(shost);
+-err_out_free_irq_vectors:
+- pci_free_irq_vectors(pdev);
+ err_out_debugfs:
+ debugfs_exit_v3_hw(hisi_hba);
+ err_out_ha:
+@@ -4825,7 +4880,6 @@ hisi_sas_v3_destroy_irqs(struct pci_dev *pdev, struct hisi_hba *hisi_hba)
+
+ devm_free_irq(&pdev->dev, pci_irq_vector(pdev, nr), cq);
+ }
+- pci_free_irq_vectors(pdev);
+ }
+
+ static void hisi_sas_v3_remove(struct pci_dev *pdev)
+diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
+index 841000445b9a1..aa223db4cf53c 100644
+--- a/drivers/scsi/libfc/fc_exch.c
++++ b/drivers/scsi/libfc/fc_exch.c
+@@ -1701,6 +1701,7 @@ static void fc_exch_abts_resp(struct fc_exch *ep, struct fc_frame *fp)
+ if (cancel_delayed_work_sync(&ep->timeout_work)) {
+ FC_EXCH_DBG(ep, "Exchange timer canceled due to ABTS response\n");
+ fc_exch_release(ep); /* release from pending timer hold */
++ return;
+ }
+
+ spin_lock_bh(&ep->ex_lock);
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index fc4eaf6d1e47e..d892ade421bf9 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -866,6 +866,8 @@ struct mpi3mr_ioc {
+ * @send_ack: Event acknowledgment required or not
+ * @process_evt: Bottomhalf processing required or not
+ * @evt_ctx: Event context to send in Ack
++ * @pending_at_sml: waiting for device add/remove API to complete
++ * @discard: discard this event
+ * @ref_count: kref count
+ * @event_data: Actual MPI3 event data
+ */
+@@ -877,6 +879,8 @@ struct mpi3mr_fwevt {
+ bool send_ack;
+ bool process_evt;
+ u32 evt_ctx;
++ bool pending_at_sml;
++ bool discard;
+ struct kref ref_count;
+ char event_data[0] __aligned(4);
+ };
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index 15bdc21ead669..e44868230197f 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -1520,7 +1520,7 @@ static void mpi3mr_free_op_req_q_segments(struct mpi3mr_ioc *mrioc, u16 q_idx)
+ MPI3MR_MAX_SEG_LIST_SIZE,
+ mrioc->req_qinfo[q_idx].q_segment_list,
+ mrioc->req_qinfo[q_idx].q_segment_list_dma);
+- mrioc->op_reply_qinfo[q_idx].q_segment_list = NULL;
++ mrioc->req_qinfo[q_idx].q_segment_list = NULL;
+ }
+ } else
+ size = mrioc->req_qinfo[q_idx].segment_qd *
+@@ -4353,8 +4353,8 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
+ memset(mrioc->devrem_bitmap, 0, mrioc->devrem_bitmap_sz);
+ memset(mrioc->removepend_bitmap, 0, mrioc->dev_handle_bitmap_sz);
+ memset(mrioc->evtack_cmds_bitmap, 0, mrioc->evtack_cmds_bitmap_sz);
+- mpi3mr_cleanup_fwevt_list(mrioc);
+ mpi3mr_flush_host_io(mrioc);
++ mpi3mr_cleanup_fwevt_list(mrioc);
+ mpi3mr_invalidate_devhandles(mrioc);
+ if (mrioc->prepare_for_reset) {
+ mrioc->prepare_for_reset = 0;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 284117da9086a..f7893de35b26b 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -285,6 +285,35 @@ static struct mpi3mr_fwevt *mpi3mr_dequeue_fwevt(
+ return fwevt;
+ }
+
++/**
++ * mpi3mr_cancel_work - cancel firmware event
++ * @fwevt: fwevt object which needs to be canceled
++ *
++ * Return: Nothing.
++ */
++static void mpi3mr_cancel_work(struct mpi3mr_fwevt *fwevt)
++{
++ /*
++ * Wait on the fwevt to complete. If this returns 1, then
++ * the event was never executed.
++ *
++ * If it did execute, we wait for it to finish, and the put will
++ * happen from mpi3mr_process_fwevt()
++ */
++ if (cancel_work_sync(&fwevt->work)) {
++ /*
++ * Put fwevt reference count after
++ * dequeuing it from worker queue
++ */
++ mpi3mr_fwevt_put(fwevt);
++ /*
++ * Put fwevt reference count to neutralize
++ * kref_init increment
++ */
++ mpi3mr_fwevt_put(fwevt);
++ }
++}
++
+ /**
+ * mpi3mr_cleanup_fwevt_list - Cleanup firmware event list
+ * @mrioc: Adapter instance reference
+@@ -302,28 +331,25 @@ void mpi3mr_cleanup_fwevt_list(struct mpi3mr_ioc *mrioc)
+ !mrioc->fwevt_worker_thread)
+ return;
+
+- while ((fwevt = mpi3mr_dequeue_fwevt(mrioc)) ||
+- (fwevt = mrioc->current_event)) {
++ while ((fwevt = mpi3mr_dequeue_fwevt(mrioc)))
++ mpi3mr_cancel_work(fwevt);
++
++ if (mrioc->current_event) {
++ fwevt = mrioc->current_event;
+ /*
+- * Wait on the fwevt to complete. If this returns 1, then
+- * the event was never executed, and we need a put for the
+- * reference the work had on the fwevt.
+- *
+- * If it did execute, we wait for it to finish, and the put will
+- * happen from mpi3mr_process_fwevt()
++ * Don't call cancel_work_sync() API for the
++ * fwevt work if the controller reset is
++ * get called as part of processing the
++ * same fwevt work (or) when worker thread is
++ * waiting for device add/remove APIs to complete.
++ * Otherwise we will see deadlock.
+ */
+- if (cancel_work_sync(&fwevt->work)) {
+- /*
+- * Put fwevt reference count after
+- * dequeuing it from worker queue
+- */
+- mpi3mr_fwevt_put(fwevt);
+- /*
+- * Put fwevt reference count to neutralize
+- * kref_init increment
+- */
+- mpi3mr_fwevt_put(fwevt);
++ if (current_work() == &fwevt->work || fwevt->pending_at_sml) {
++ fwevt->discard = 1;
++ return;
+ }
++
++ mpi3mr_cancel_work(fwevt);
+ }
+ }
+
+@@ -690,6 +716,24 @@ static struct mpi3mr_tgt_dev *__mpi3mr_get_tgtdev_from_tgtpriv(
+ return tgtdev;
+ }
+
++/**
++ * mpi3mr_print_device_event_notice - print notice related to post processing of
++ * device event after controller reset.
++ *
++ * @mrioc: Adapter instance reference
++ * @device_add: true for device add event and false for device removal event
++ *
++ * Return: None.
++ */
++static void mpi3mr_print_device_event_notice(struct mpi3mr_ioc *mrioc,
++ bool device_add)
++{
++ ioc_notice(mrioc, "Device %s was in progress before the reset and\n",
++ (device_add ? "addition" : "removal"));
++ ioc_notice(mrioc, "completed after reset, verify whether the exposed devices\n");
++ ioc_notice(mrioc, "are matched with attached devices for correctness\n");
++}
++
+ /**
+ * mpi3mr_remove_tgtdev_from_host - Remove dev from upper layers
+ * @mrioc: Adapter instance reference
+@@ -714,8 +758,17 @@ static void mpi3mr_remove_tgtdev_from_host(struct mpi3mr_ioc *mrioc,
+ }
+
+ if (tgtdev->starget) {
++ if (mrioc->current_event)
++ mrioc->current_event->pending_at_sml = 1;
+ scsi_remove_target(&tgtdev->starget->dev);
+ tgtdev->host_exposed = 0;
++ if (mrioc->current_event) {
++ mrioc->current_event->pending_at_sml = 0;
++ if (mrioc->current_event->discard) {
++ mpi3mr_print_device_event_notice(mrioc, false);
++ return;
++ }
++ }
+ }
+ ioc_info(mrioc, "%s :Removed handle(0x%04x), wwid(0x%016llx)\n",
+ __func__, tgtdev->dev_handle, (unsigned long long)tgtdev->wwid);
+@@ -749,11 +802,20 @@ static int mpi3mr_report_tgtdev_to_host(struct mpi3mr_ioc *mrioc,
+ }
+ if (!tgtdev->host_exposed && !mrioc->reset_in_progress) {
+ tgtdev->host_exposed = 1;
++ if (mrioc->current_event)
++ mrioc->current_event->pending_at_sml = 1;
+ scsi_scan_target(&mrioc->shost->shost_gendev, 0,
+ tgtdev->perst_id,
+ SCAN_WILD_CARD, SCSI_SCAN_INITIAL);
+ if (!tgtdev->starget)
+ tgtdev->host_exposed = 0;
++ if (mrioc->current_event) {
++ mrioc->current_event->pending_at_sml = 0;
++ if (mrioc->current_event->discard) {
++ mpi3mr_print_device_event_notice(mrioc, true);
++ goto out;
++ }
++ }
+ }
+ out:
+ if (tgtdev)
+@@ -1193,6 +1255,8 @@ static void mpi3mr_sastopochg_evt_bh(struct mpi3mr_ioc *mrioc,
+ mpi3mr_sastopochg_evt_debug(mrioc, event_data);
+
+ for (i = 0; i < event_data->num_entries; i++) {
++ if (fwevt->discard)
++ return;
+ handle = le16_to_cpu(event_data->phy_entry[i].attached_dev_handle);
+ if (!handle)
+ continue;
+@@ -1324,6 +1388,8 @@ static void mpi3mr_pcietopochg_evt_bh(struct mpi3mr_ioc *mrioc,
+ mpi3mr_pcietopochg_evt_debug(mrioc, event_data);
+
+ for (i = 0; i < event_data->num_entries; i++) {
++ if (fwevt->discard)
++ return;
+ handle =
+ le16_to_cpu(event_data->port_entry[i].attached_dev_handle);
+ if (!handle)
+@@ -1362,8 +1428,8 @@ static void mpi3mr_pcietopochg_evt_bh(struct mpi3mr_ioc *mrioc,
+ static void mpi3mr_fwevt_bh(struct mpi3mr_ioc *mrioc,
+ struct mpi3mr_fwevt *fwevt)
+ {
+- mrioc->current_event = fwevt;
+ mpi3mr_fwevt_del_from_list(mrioc, fwevt);
++ mrioc->current_event = fwevt;
+
+ if (mrioc->stop_drv_processing)
+ goto out;
+@@ -2551,6 +2617,8 @@ void mpi3mr_process_op_reply_desc(struct mpi3mr_ioc *mrioc,
+ scmd->result = DID_OK << 16;
+ goto out_success;
+ }
++
++ scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_count);
+ if (ioc_status == MPI3_IOCSTATUS_SCSI_DATA_UNDERRUN &&
+ xfer_count == 0 && (scsi_status == MPI3_SCSI_STATUS_BUSY ||
+ scsi_status == MPI3_SCSI_STATUS_RESERVATION_CONFLICT ||
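Note on the mpi3mr hunks above: a firmware event that is still queued holds two references, one from kref_init() at allocation and one taken when it was put on the worker queue, so a successful cancel_work_sync() must be followed by two puts, which the new mpi3mr_cancel_work() helper centralizes. A minimal userspace sketch of the same counting discipline; the struct and helpers below are illustrative, not the driver's:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct fwevt {
	int refs;		/* kref-style counter */
	bool queued;		/* stands in for a pending work item */
};

static struct fwevt *fwevt_alloc(void)
{
	struct fwevt *e = calloc(1, sizeof(*e));
	e->refs = 1;		/* the kref_init() reference */
	return e;
}

static void fwevt_get(struct fwevt *e) { e->refs++; }

static void fwevt_put(struct fwevt *e)
{
	if (--e->refs == 0) {
		printf("event freed\n");
		free(e);
	}
}

/* Mimics cancel_work_sync(): returns true if the work was still pending. */
static bool cancel_event(struct fwevt *e)
{
	bool was_queued = e->queued;
	e->queued = false;
	return was_queued;
}

int main(void)
{
	struct fwevt *e = fwevt_alloc();

	fwevt_get(e);		/* reference owned by the queued work */
	e->queued = true;

	if (cancel_event(e)) {
		fwevt_put(e);	/* drop the queue's reference */
		fwevt_put(e);	/* drop the kref_init() reference */
	}
	return 0;
}

If only one put were issued here, the object would never be freed; the patch keeps both puts together precisely so every caller of the cancel path gets this right.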
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 00792767c620d..7e476f50935b8 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -11035,6 +11035,7 @@ _scsih_expander_node_remove(struct MPT3SAS_ADAPTER *ioc,
+ {
+ struct _sas_port *mpt3sas_port, *next;
+ unsigned long flags;
++ int port_id;
+
+ /* remove sibling ports attached to this expander */
+ list_for_each_entry_safe(mpt3sas_port, next,
+@@ -11055,6 +11056,8 @@ _scsih_expander_node_remove(struct MPT3SAS_ADAPTER *ioc,
+ mpt3sas_port->hba_port);
+ }
+
++ port_id = sas_expander->port->port_id;
++
+ mpt3sas_transport_port_remove(ioc, sas_expander->sas_address,
+ sas_expander->sas_address_parent, sas_expander->port);
+
+@@ -11062,7 +11065,7 @@ _scsih_expander_node_remove(struct MPT3SAS_ADAPTER *ioc,
+ "expander_remove: handle(0x%04x), sas_addr(0x%016llx), port:%d\n",
+ sas_expander->handle, (unsigned long long)
+ sas_expander->sas_address,
+- sas_expander->port->port_id);
++ port_id);
+
+ spin_lock_irqsave(&ioc->sas_node_lock, flags);
+ list_del(&sas_expander->list);
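The mpt3sas change above is a use-after-free fix: mpt3sas_transport_port_remove() can free the object sas_expander->port points to, so port_id is copied to a local before the call and only the local is used in the later log message. A tiny standalone illustration of the idiom, with hypothetical types rather than the driver's:

#include <stdio.h>
#include <stdlib.h>

struct port { int port_id; };

static void port_remove(struct port *p)
{
	free(p);			/* the object is gone after this call */
}

int main(void)
{
	struct port *p = malloc(sizeof(*p));
	p->port_id = 3;

	int port_id = p->port_id;	/* snapshot before teardown */

	port_remove(p);
	/* reading p->port_id here would be a use-after-free; the copy is safe */
	printf("removed expander on port %d\n", port_id);
	return 0;
}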
+diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
+index dcae2d4464f90..44df7c03aab8d 100644
+--- a/drivers/scsi/mvsas/mv_init.c
++++ b/drivers/scsi/mvsas/mv_init.c
+@@ -696,7 +696,7 @@ static struct pci_driver mvs_pci_driver = {
+ static ssize_t driver_version_show(struct device *cdev,
+ struct device_attribute *attr, char *buffer)
+ {
+- return snprintf(buffer, PAGE_SIZE, "%s\n", DRV_VERSION);
++ return sysfs_emit(buffer, "%s\n", DRV_VERSION);
+ }
+
+ static DEVICE_ATTR_RO(driver_version);
+@@ -744,7 +744,7 @@ static ssize_t interrupt_coalescing_store(struct device *cdev,
+ static ssize_t interrupt_coalescing_show(struct device *cdev,
+ struct device_attribute *attr, char *buffer)
+ {
+- return snprintf(buffer, PAGE_SIZE, "%d\n", interrupt_coalescing);
++ return sysfs_emit(buffer, "%d\n", interrupt_coalescing);
+ }
+
+ static DEVICE_ATTR_RW(interrupt_coalescing);
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index d853e8d0195a6..27ead825c2bb6 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1522,7 +1522,6 @@ void pm8001_work_fn(struct work_struct *work)
+ case IO_XFER_ERROR_BREAK:
+ { /* This one stashes the sas_task instead */
+ struct sas_task *t = (struct sas_task *)pm8001_dev;
+- u32 tag;
+ struct pm8001_ccb_info *ccb;
+ struct pm8001_hba_info *pm8001_ha = pw->pm8001_ha;
+ unsigned long flags, flags1;
+@@ -1544,8 +1543,8 @@ void pm8001_work_fn(struct work_struct *work)
+ /* Search for a possible ccb that matches the task */
+ for (i = 0; ccb = NULL, i < PM8001_MAX_CCB; i++) {
+ ccb = &pm8001_ha->ccb_info[i];
+- tag = ccb->ccb_tag;
+- if ((tag != 0xFFFFFFFF) && (ccb->task == t))
++ if ((ccb->ccb_tag != PM8001_INVALID_TAG) &&
++ (ccb->task == t))
+ break;
+ }
+ if (!ccb) {
+@@ -1567,11 +1566,11 @@ void pm8001_work_fn(struct work_struct *work)
+ spin_unlock_irqrestore(&t->task_state_lock, flags1);
+ pm8001_dbg(pm8001_ha, FAIL, "task 0x%p done with event 0x%x resp 0x%x stat 0x%x but aborted by upper layer!\n",
+ t, pw->handler, ts->resp, ts->stat);
+- pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
++ pm8001_ccb_task_free(pm8001_ha, t, ccb, ccb->ccb_tag);
+ spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ } else {
+ spin_unlock_irqrestore(&t->task_state_lock, flags1);
+- pm8001_ccb_task_free(pm8001_ha, t, ccb, tag);
++ pm8001_ccb_task_free(pm8001_ha, t, ccb, ccb->ccb_tag);
+ mb();/* in order to force CPU ordering */
+ spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ t->task_done(t);
+@@ -1580,7 +1579,6 @@ void pm8001_work_fn(struct work_struct *work)
+ case IO_XFER_OPEN_RETRY_TIMEOUT:
+ { /* This one stashes the sas_task instead */
+ struct sas_task *t = (struct sas_task *)pm8001_dev;
+- u32 tag;
+ struct pm8001_ccb_info *ccb;
+ struct pm8001_hba_info *pm8001_ha = pw->pm8001_ha;
+ unsigned long flags, flags1;
+@@ -1614,8 +1612,8 @@ void pm8001_work_fn(struct work_struct *work)
+ /* Search for a possible ccb that matches the task */
+ for (i = 0; ccb = NULL, i < PM8001_MAX_CCB; i++) {
+ ccb = &pm8001_ha->ccb_info[i];
+- tag = ccb->ccb_tag;
+- if ((tag != 0xFFFFFFFF) && (ccb->task == t))
++ if ((ccb->ccb_tag != PM8001_INVALID_TAG) &&
++ (ccb->task == t))
+ break;
+ }
+ if (!ccb) {
+@@ -1686,19 +1684,13 @@ void pm8001_work_fn(struct work_struct *work)
+ struct task_status_struct *ts;
+ struct sas_task *task;
+ int i;
+- u32 tag, device_id;
++ u32 device_id;
+
+ for (i = 0; ccb = NULL, i < PM8001_MAX_CCB; i++) {
+ ccb = &pm8001_ha->ccb_info[i];
+ task = ccb->task;
+ ts = &task->task_status;
+- tag = ccb->ccb_tag;
+- /* check if tag is NULL */
+- if (!tag) {
+- pm8001_dbg(pm8001_ha, FAIL,
+- "tag Null\n");
+- continue;
+- }
++
+ if (task != NULL) {
+ dev = task->dev;
+ if (!dev) {
+@@ -1707,10 +1699,11 @@ void pm8001_work_fn(struct work_struct *work)
+ continue;
+ }
+ /*complete sas task and update to top layer */
+- pm8001_ccb_task_free(pm8001_ha, task, ccb, tag);
++ pm8001_ccb_task_free(pm8001_ha, task, ccb,
++ ccb->ccb_tag);
+ ts->resp = SAS_TASK_COMPLETE;
+ task->task_done(task);
+- } else if (tag != 0xFFFFFFFF) {
++ } else if (ccb->ccb_tag != PM8001_INVALID_TAG) {
+ /* complete the internal commands/non-sas task */
+ pm8001_dev = ccb->device;
+ if (pm8001_dev->dcompletion) {
+@@ -1718,7 +1711,7 @@ void pm8001_work_fn(struct work_struct *work)
+ pm8001_dev->dcompletion = NULL;
+ }
+ complete(pm8001_ha->nvmd_completion);
+- pm8001_tag_free(pm8001_ha, tag);
++ pm8001_tag_free(pm8001_ha, ccb->ccb_tag);
+ }
+ }
+ /* Deregister all the device ids */
+@@ -1772,7 +1765,6 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ }
+
+ task = sas_alloc_slow_task(GFP_ATOMIC);
+-
+ if (!task) {
+ pm8001_dbg(pm8001_ha, FAIL, "cannot allocate task\n");
+ return;
+@@ -1781,8 +1773,10 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ task->task_done = pm8001_task_done;
+
+ res = pm8001_tag_alloc(pm8001_ha, &ccb_tag);
+- if (res)
++ if (res) {
++ sas_free_task(task);
+ return;
++ }
+
+ ccb = &pm8001_ha->ccb_info[ccb_tag];
+ ccb->device = pm8001_ha_dev;
+@@ -1799,8 +1793,10 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+
+ ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &task_abort,
+ sizeof(task_abort), 0);
+- if (ret)
++ if (ret) {
++ sas_free_task(task);
+ pm8001_tag_free(pm8001_ha, ccb_tag);
++ }
+
+ }
+
+@@ -2316,11 +2312,6 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ param = le32_to_cpu(psataPayload->param);
+ tag = le32_to_cpu(psataPayload->tag);
+
+- if (!tag) {
+- pm8001_dbg(pm8001_ha, FAIL, "tag null\n");
+- return;
+- }
+-
+ ccb = &pm8001_ha->ccb_info[tag];
+ t = ccb->task;
+ pm8001_dev = ccb->device;
+@@ -3056,7 +3047,7 @@ void pm8001_mpi_set_dev_state_resp(struct pm8001_hba_info *pm8001_ha,
+ device_id, pds, nds, status);
+ complete(pm8001_dev->setds_completion);
+ ccb->task = NULL;
+- ccb->ccb_tag = 0xFFFFFFFF;
++ ccb->ccb_tag = PM8001_INVALID_TAG;
+ pm8001_tag_free(pm8001_ha, tag);
+ }
+
+@@ -3074,7 +3065,7 @@ void pm8001_mpi_set_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ dlen_status);
+ }
+ ccb->task = NULL;
+- ccb->ccb_tag = 0xFFFFFFFF;
++ ccb->ccb_tag = PM8001_INVALID_TAG;
+ pm8001_tag_free(pm8001_ha, tag);
+ }
+
+@@ -3101,7 +3092,7 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ * freed by requesting path anywhere.
+ */
+ ccb->task = NULL;
+- ccb->ccb_tag = 0xFFFFFFFF;
++ ccb->ccb_tag = PM8001_INVALID_TAG;
+ pm8001_tag_free(pm8001_ha, tag);
+ return;
+ }
+@@ -3147,7 +3138,7 @@ pm8001_mpi_get_nvmd_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ complete(pm8001_ha->nvmd_completion);
+ pm8001_dbg(pm8001_ha, MSG, "Get nvmd data complete!\n");
+ ccb->task = NULL;
+- ccb->ccb_tag = 0xFFFFFFFF;
++ ccb->ccb_tag = PM8001_INVALID_TAG;
+ pm8001_tag_free(pm8001_ha, tag);
+ }
+
+@@ -3560,7 +3551,7 @@ int pm8001_mpi_reg_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ }
+ complete(pm8001_dev->dcompletion);
+ ccb->task = NULL;
+- ccb->ccb_tag = 0xFFFFFFFF;
++ ccb->ccb_tag = PM8001_INVALID_TAG;
+ pm8001_tag_free(pm8001_ha, htag);
+ return 0;
+ }
+@@ -3632,7 +3623,7 @@ int pm8001_mpi_fw_flash_update_resp(struct pm8001_hba_info *pm8001_ha,
+ }
+ kfree(ccb->fw_control_context);
+ ccb->task = NULL;
+- ccb->ccb_tag = 0xFFFFFFFF;
++ ccb->ccb_tag = PM8001_INVALID_TAG;
+ pm8001_tag_free(pm8001_ha, tag);
+ complete(pm8001_ha->nvmd_completion);
+ return 0;
+@@ -3668,10 +3659,6 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+
+ status = le32_to_cpu(pPayload->status);
+ tag = le32_to_cpu(pPayload->tag);
+- if (!tag) {
+- pm8001_dbg(pm8001_ha, FAIL, " TAG NULL. RETURNING !!!\n");
+- return -1;
+- }
+
+ scp = le32_to_cpu(pPayload->scp);
+ ccb = &pm8001_ha->ccb_info[tag];
+@@ -3706,12 +3693,11 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ mb();
+
+ if (pm8001_dev->id & NCQ_ABORT_ALL_FLAG) {
+- pm8001_tag_free(pm8001_ha, tag);
+ sas_free_task(t);
+- /* clear the flag */
+- pm8001_dev->id &= 0xBFFFFFFF;
+- } else
++ pm8001_dev->id &= ~NCQ_ABORT_ALL_FLAG;
++ } else {
+ t->task_done(t);
++ }
+
+ return 0;
+ }
+@@ -4479,6 +4465,9 @@ static int pm8001_chip_reg_dev_req(struct pm8001_hba_info *pm8001_ha,
+ SAS_ADDR_SIZE);
+ rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ sizeof(payload), 0);
++ if (rc)
++ pm8001_tag_free(pm8001_ha, tag);
++
+ return rc;
+ }
+
+@@ -4891,6 +4880,11 @@ pm8001_chip_fw_flash_update_req(struct pm8001_hba_info *pm8001_ha,
+ ccb->ccb_tag = tag;
+ rc = pm8001_chip_fw_flash_update_build(pm8001_ha, &flash_update_info,
+ tag);
++ if (rc) {
++ kfree(fw_control_context);
++ pm8001_tag_free(pm8001_ha, tag);
++ }
++
+ return rc;
+ }
+
+@@ -4995,6 +4989,9 @@ pm8001_chip_set_dev_state_req(struct pm8001_hba_info *pm8001_ha,
+ payload.nds = cpu_to_le32(state);
+ rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ sizeof(payload), 0);
++ if (rc)
++ pm8001_tag_free(pm8001_ha, tag);
++
+ return rc;
+
+ }
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index d8a2121cb8d93..d2a2593f669d6 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -1216,10 +1216,11 @@ pm8001_init_ccb_tag(struct pm8001_hba_info *pm8001_ha, struct Scsi_Host *shost,
+ goto err_out;
+ }
+ pm8001_ha->ccb_info[i].task = NULL;
+- pm8001_ha->ccb_info[i].ccb_tag = 0xffffffff;
++ pm8001_ha->ccb_info[i].ccb_tag = PM8001_INVALID_TAG;
+ pm8001_ha->ccb_info[i].device = NULL;
+ ++pm8001_ha->tags_num;
+ }
++
+ return 0;
+
+ err_out_noccb:
+diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
+index 32edda3e55c6c..b68c8400ca158 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.c
++++ b/drivers/scsi/pm8001/pm8001_sas.c
+@@ -567,7 +567,7 @@ void pm8001_ccb_task_free(struct pm8001_hba_info *pm8001_ha,
+
+ task->lldd_task = NULL;
+ ccb->task = NULL;
+- ccb->ccb_tag = 0xFFFFFFFF;
++ ccb->ccb_tag = PM8001_INVALID_TAG;
+ ccb->open_retry = 0;
+ pm8001_tag_free(pm8001_ha, ccb_idx);
+ }
+@@ -847,10 +847,10 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
+
+ res = PM8001_CHIP_DISP->task_abort(pm8001_ha,
+ pm8001_dev, flag, task_tag, ccb_tag);
+-
+ if (res) {
+ del_timer(&task->slow_task->timer);
+ pm8001_dbg(pm8001_ha, FAIL, "Executing internal task failed\n");
++ pm8001_tag_free(pm8001_ha, ccb_tag);
+ goto ex_err;
+ }
+ wait_for_completion(&task->slow_task->completion);
+@@ -952,9 +952,11 @@ void pm8001_open_reject_retry(
+ struct task_status_struct *ts;
+ struct pm8001_device *pm8001_dev;
+ unsigned long flags1;
+- u32 tag;
+ struct pm8001_ccb_info *ccb = &pm8001_ha->ccb_info[i];
+
++ if (ccb->ccb_tag == PM8001_INVALID_TAG)
++ continue;
++
+ pm8001_dev = ccb->device;
+ if (!pm8001_dev || (pm8001_dev->dev_type == SAS_PHY_UNUSED))
+ continue;
+@@ -966,9 +968,6 @@ void pm8001_open_reject_retry(
+ continue;
+ } else if (pm8001_dev != device_to_close)
+ continue;
+- tag = ccb->ccb_tag;
+- if (!tag || (tag == 0xFFFFFFFF))
+- continue;
+ task = ccb->task;
+ if (!task || !task->task_done)
+ continue;
+@@ -989,11 +988,11 @@ void pm8001_open_reject_retry(
+ & SAS_TASK_STATE_ABORTED))) {
+ spin_unlock_irqrestore(&task->task_state_lock,
+ flags1);
+- pm8001_ccb_task_free(pm8001_ha, task, ccb, tag);
++ pm8001_ccb_task_free(pm8001_ha, task, ccb, ccb->ccb_tag);
+ } else {
+ spin_unlock_irqrestore(&task->task_state_lock,
+ flags1);
+- pm8001_ccb_task_free(pm8001_ha, task, ccb, tag);
++ pm8001_ccb_task_free(pm8001_ha, task, ccb, ccb->ccb_tag);
+ mb();/* in order to force CPU ordering */
+ spin_unlock_irqrestore(&pm8001_ha->lock, flags);
+ task->task_done(task);
+diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h
+index a17da1cebce17..1791cdf302762 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.h
++++ b/drivers/scsi/pm8001/pm8001_sas.h
+@@ -738,6 +738,8 @@ void pm8001_free_dev(struct pm8001_device *pm8001_dev);
+ /* ctl shared API */
+ extern const struct attribute_group *pm8001_host_groups[];
+
++#define PM8001_INVALID_TAG ((u32)-1)
++
+ static inline void
+ pm8001_ccb_task_free_done(struct pm8001_hba_info *pm8001_ha,
+ struct sas_task *task, struct pm8001_ccb_info *ccb,
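The new PM8001_INVALID_TAG define above replaces the scattered 0xFFFFFFFF literals and, indirectly, the bogus "tag == 0 means invalid" checks deleted elsewhere in this series: tag 0 is a valid index into ccb_info[], so only the all-ones value can serve as the "no tag" sentinel. A short demonstration, assuming a plain array of slots:

#include <stdint.h>
#include <stdio.h>

#define INVALID_TAG ((uint32_t)-1)	/* same trick as PM8001_INVALID_TAG */
#define MAX_CCB 4

static uint32_t slot_tag[MAX_CCB] = { 0, 1, INVALID_TAG, 3 };

int main(void)
{
	for (uint32_t i = 0; i < MAX_CCB; i++) {
		if (slot_tag[i] == INVALID_TAG) {
			printf("slot %u: free\n", i);
			continue;
		}
		/* tag 0 lands here: it is in use, not "null" */
		printf("slot %u: busy, tag %u\n", i, slot_tag[i]);
	}
	return 0;
}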
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 908dbac20b483..55163469030d3 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -67,18 +67,16 @@ int pm80xx_bar4_shift(struct pm8001_hba_info *pm8001_ha, u32 shift_value)
+ }
+
+ static void pm80xx_pci_mem_copy(struct pm8001_hba_info *pm8001_ha, u32 soffset,
+- const void *destination,
++ __le32 *destination,
+ u32 dw_count, u32 bus_base_number)
+ {
+ u32 index, value, offset;
+- u32 *destination1;
+- destination1 = (u32 *)destination;
+
+- for (index = 0; index < dw_count; index += 4, destination1++) {
++ for (index = 0; index < dw_count; index += 4, destination++) {
+ offset = (soffset + index);
+ if (offset < (64 * 1024)) {
+ value = pm8001_cr32(pm8001_ha, bus_base_number, offset);
+- *destination1 = cpu_to_le32(value);
++ *destination = cpu_to_le32(value);
+ }
+ }
+ return;
+@@ -2406,11 +2404,6 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
+ param = le32_to_cpu(psataPayload->param);
+ tag = le32_to_cpu(psataPayload->tag);
+
+- if (!tag) {
+- pm8001_dbg(pm8001_ha, FAIL, "tag null\n");
+- return;
+- }
+-
+ ccb = &pm8001_ha->ccb_info[tag];
+ t = ccb->task;
+ pm8001_dev = ccb->device;
+@@ -4927,8 +4920,13 @@ static int pm80xx_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
+ payload.tag = cpu_to_le32(tag);
+ payload.phyop_phyid =
+ cpu_to_le32(((phy_op & 0xFF) << 8) | (phyId & 0xFF));
+- return pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+- sizeof(payload), 0);
++
++ rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
++ sizeof(payload), 0);
++ if (rc)
++ pm8001_tag_free(pm8001_ha, tag);
++
++ return rc;
+ }
+
+ static u32 pm80xx_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha)
+diff --git a/drivers/scsi/scsi_logging.c b/drivers/scsi/scsi_logging.c
+index 1f8f80b2dbfcb..a9f8de5e9639a 100644
+--- a/drivers/scsi/scsi_logging.c
++++ b/drivers/scsi/scsi_logging.c
+@@ -30,7 +30,7 @@ static inline const char *scmd_name(const struct scsi_cmnd *scmd)
+ {
+ struct request *rq = scsi_cmd_to_rq((struct scsi_cmnd *)scmd);
+
+- if (!rq->q->disk)
++ if (!rq->q || !rq->q->disk)
+ return NULL;
+ return rq->q->disk->disk_name;
+ }
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index f4e6c68ac99ed..2ef78083f1eff 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -223,6 +223,8 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
+ int ret;
+ struct sbitmap sb_backup;
+
++ depth = min_t(unsigned int, depth, scsi_device_max_queue_depth(sdev));
++
+ /*
+ * realloc if new shift is calculated, which is caused by setting
+ * up one new default queue depth after calling ->slave_configure
+@@ -245,6 +247,9 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
+ scsi_device_max_queue_depth(sdev),
+ new_shift, GFP_KERNEL,
+ sdev->request_queue->node, false, true);
++ if (!ret)
++ sbitmap_resize(&sdev->budget_map, depth);
++
+ if (need_free) {
+ if (ret)
+ sdev->budget_map = sb_backup;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 66056806159a6..8b5d2a4076c21 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3320,6 +3320,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ sd_read_block_limits(sdkp);
+ sd_read_block_characteristics(sdkp);
+ sd_zbc_read_zones(sdkp, buffer);
++ sd_read_cpr(sdkp);
+ }
+
+ sd_print_capacity(sdkp, old_capacity);
+@@ -3329,7 +3330,6 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ sd_read_app_tag_own(sdkp, buffer);
+ sd_read_write_same(sdkp, buffer);
+ sd_read_security(sdkp, buffer);
+- sd_read_cpr(sdkp);
+ }
+
+ /*
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index f0897d587454a..f3749e5086737 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -2513,17 +2513,15 @@ static void pqi_remove_all_scsi_devices(struct pqi_ctrl_info *ctrl_info)
+ struct pqi_scsi_dev *device;
+ struct pqi_scsi_dev *next;
+
+- spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
+-
+ list_for_each_entry_safe(device, next, &ctrl_info->scsi_device_list,
+ scsi_device_list_entry) {
+ if (pqi_is_device_added(device))
+ pqi_remove_device(ctrl_info, device);
++ spin_lock_irqsave(&ctrl_info->scsi_device_list_lock, flags);
+ list_del(&device->scsi_device_list_entry);
+ pqi_free_device(device);
++ spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
+ }
+-
+- spin_unlock_irqrestore(&ctrl_info->scsi_device_list_lock, flags);
+ }
+
+ static int pqi_scan_scsi_devices(struct pqi_ctrl_info *ctrl_info)
+@@ -7857,6 +7855,21 @@ static int pqi_force_sis_mode(struct pqi_ctrl_info *ctrl_info)
+ return pqi_revert_to_sis_mode(ctrl_info);
+ }
+
++static void pqi_perform_lockup_action(void)
++{
++ switch (pqi_lockup_action) {
++ case PANIC:
++ panic("FATAL: Smart Family Controller lockup detected");
++ break;
++ case REBOOT:
++ emergency_restart();
++ break;
++ case NONE:
++ default:
++ break;
++ }
++}
++
+ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ {
+ int rc;
+@@ -7881,8 +7894,15 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ * commands.
+ */
+ rc = sis_wait_for_ctrl_ready(ctrl_info);
+- if (rc)
++ if (rc) {
++ if (reset_devices) {
++ dev_err(&ctrl_info->pci_dev->dev,
++ "kdump init failed with error %d\n", rc);
++ pqi_lockup_action = REBOOT;
++ pqi_perform_lockup_action();
++ }
+ return rc;
++ }
+
+ /*
+ * Get the controller properties. This allows us to determine
+@@ -8607,21 +8627,6 @@ static int pqi_ofa_ctrl_restart(struct pqi_ctrl_info *ctrl_info, unsigned int de
+ return pqi_ctrl_init_resume(ctrl_info);
+ }
+
+-static void pqi_perform_lockup_action(void)
+-{
+- switch (pqi_lockup_action) {
+- case PANIC:
+- panic("FATAL: Smart Family Controller lockup detected");
+- break;
+- case REBOOT:
+- emergency_restart();
+- break;
+- case NONE:
+- default:
+- break;
+- }
+-}
+-
+ static struct pqi_raid_error_info pqi_ctrl_offline_raid_error_info = {
+ .data_out_result = PQI_DATA_IN_OUT_HARDWARE_ERROR,
+ .status = SAM_STAT_CHECK_CONDITION,
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index f925b1f1f9ada..a0beb11abdc9d 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -578,7 +578,7 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
+
+ scsi_autopm_get_device(sdev);
+
+- if (ret != CDROMCLOSETRAY && ret != CDROMEJECT) {
++ if (cmd != CDROMCLOSETRAY && cmd != CDROMEJECT) {
+ ret = cdrom_ioctl(&cd->cdi, bdev, mode, cmd, arg);
+ if (ret != -ENOSYS)
+ goto put;
+diff --git a/drivers/scsi/ufs/ufshcd-pci.c b/drivers/scsi/ufs/ufshcd-pci.c
+index f76692053ca17..e892b9feffb11 100644
+--- a/drivers/scsi/ufs/ufshcd-pci.c
++++ b/drivers/scsi/ufs/ufshcd-pci.c
+@@ -428,6 +428,12 @@ static int ufs_intel_adl_init(struct ufs_hba *hba)
+ return ufs_intel_common_init(hba);
+ }
+
++static int ufs_intel_mtl_init(struct ufs_hba *hba)
++{
++ hba->caps |= UFSHCD_CAP_CRYPTO | UFSHCD_CAP_WB_EN;
++ return ufs_intel_common_init(hba);
++}
++
+ static struct ufs_hba_variant_ops ufs_intel_cnl_hba_vops = {
+ .name = "intel-pci",
+ .init = ufs_intel_common_init,
+@@ -465,6 +471,16 @@ static struct ufs_hba_variant_ops ufs_intel_adl_hba_vops = {
+ .device_reset = ufs_intel_device_reset,
+ };
+
++static struct ufs_hba_variant_ops ufs_intel_mtl_hba_vops = {
++ .name = "intel-pci",
++ .init = ufs_intel_mtl_init,
++ .exit = ufs_intel_common_exit,
++ .hce_enable_notify = ufs_intel_hce_enable_notify,
++ .link_startup_notify = ufs_intel_link_startup_notify,
++ .resume = ufs_intel_resume,
++ .device_reset = ufs_intel_device_reset,
++};
++
+ #ifdef CONFIG_PM_SLEEP
+ static int ufshcd_pci_restore(struct device *dev)
+ {
+@@ -579,6 +595,7 @@ static const struct pci_device_id ufshcd_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x98FA), (kernel_ulong_t)&ufs_intel_lkf_hba_vops },
+ { PCI_VDEVICE(INTEL, 0x51FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops },
+ { PCI_VDEVICE(INTEL, 0x54FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops },
++ { PCI_VDEVICE(INTEL, 0x7E47), (kernel_ulong_t)&ufs_intel_mtl_hba_vops },
+ { } /* terminate list */
+ };
+
+diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
+index 2d36a0715fca6..b34feba1f53de 100644
+--- a/drivers/scsi/ufs/ufshpb.c
++++ b/drivers/scsi/ufs/ufshpb.c
+@@ -869,12 +869,6 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
+ struct ufshpb_region *rgn, *victim_rgn = NULL;
+
+ list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn) {
+- if (!rgn) {
+- dev_err(&hpb->sdev_ufs_lu->sdev_dev,
+- "%s: no region allocated\n",
+- __func__);
+- return NULL;
+- }
+ if (ufshpb_check_srgns_issue_state(hpb, rgn))
+ continue;
+
+@@ -890,6 +884,11 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
+ break;
+ }
+
++ if (!victim_rgn)
++ dev_err(&hpb->sdev_ufs_lu->sdev_dev,
++ "%s: no region allocated\n",
++ __func__);
++
+ return victim_rgn;
+ }
+
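The ufshpb hunk above removes a NULL check that could never fire: list_for_each_entry() derives its cursor from the list node with container_of(), so inside the loop the cursor is never NULL, and "nothing found" has to be detected after the loop instead. A compilable miniature of that semantics (GCC/Clang, since it leans on typeof as the kernel macro does; the singly-linked list here is a simplification):

#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry(pos, head, member)				\
	for (pos = container_of((head)->next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = container_of(pos->member.next, typeof(*pos), member))

struct region { int id; int busy; struct list_head node; };

int main(void)
{
	struct list_head lru = { &lru };
	struct region a = { 1, 1 }, b = { 2, 1 };	/* every region busy */
	struct region *rgn, *victim = NULL;

	lru.next = &a.node;
	a.node.next = &b.node;
	b.node.next = &lru;

	list_for_each_entry(rgn, &lru, node) {
		if (rgn->busy)		/* rgn is never NULL in here */
			continue;
		victim = rgn;
		break;
	}

	if (!victim)			/* the check belongs after the loop */
		printf("no region allocated\n");
	return 0;
}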
+diff --git a/drivers/scsi/zorro7xx.c b/drivers/scsi/zorro7xx.c
+index 27b9e2baab1a6..7acf9193a9e80 100644
+--- a/drivers/scsi/zorro7xx.c
++++ b/drivers/scsi/zorro7xx.c
+@@ -159,6 +159,8 @@ static void zorro7xx_remove_one(struct zorro_dev *z)
+ scsi_remove_host(host);
+
+ NCR_700_release(host);
++ if (host->base > 0x01000000)
++ iounmap(hostdata->base);
+ kfree(hostdata);
+ free_irq(host->irq, host);
+ zorro_release_device(z);
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 86c76211b3d3d..cad2d55dcd3d2 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1205,7 +1205,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ addr = op->addr.val;
+ len = op->data.nbytes;
+
+- if (bcm_qspi_bspi_ver_three(qspi) == true) {
++ if (has_bspi(qspi) && bcm_qspi_bspi_ver_three(qspi) == true) {
+ /*
+ * The address coming into this function is a raw flash offset.
+ * But for BSPI <= V3, we need to convert it to a remapped BSPI
+@@ -1224,7 +1224,7 @@ static int bcm_qspi_exec_mem_op(struct spi_mem *mem,
+ len < 4)
+ mspi_read = true;
+
+- if (mspi_read)
++ if (!has_bspi(qspi) || mspi_read)
+ return bcm_qspi_mspi_exec_mem_op(spi, op);
+
+ ret = bcm_qspi_bspi_set_mode(qspi, op, 0);
+diff --git a/drivers/spi/spi-rpc-if.c b/drivers/spi/spi-rpc-if.c
+index fe82f3575df4f..24ec1c83f379c 100644
+--- a/drivers/spi/spi-rpc-if.c
++++ b/drivers/spi/spi-rpc-if.c
+@@ -158,14 +158,18 @@ static int rpcif_spi_probe(struct platform_device *pdev)
+
+ error = rpcif_hw_init(rpc, false);
+ if (error)
+- return error;
++ goto out_disable_rpm;
+
+ error = spi_register_controller(ctlr);
+ if (error) {
+ dev_err(&pdev->dev, "spi_register_controller failed\n");
+- rpcif_disable_rpm(rpc);
++ goto out_disable_rpm;
+ }
+
++ return 0;
++
++out_disable_rpm:
++ rpcif_disable_rpm(rpc);
+ return error;
+ }
+
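The rpcif fix above converges both probe failure paths on one out_disable_rpm label, so the runtime-PM enable done earlier in probe is undone no matter which later step fails. The single-exit idiom, reduced to a self-contained sketch with stand-in steps:

#include <stdio.h>

static void power_on(void)  { puts("power on");  }
static void power_off(void) { puts("power off"); }

static int hw_init(int ok)  { return ok ? 0 : -1; }
static int reg_ctlr(int ok) { return ok ? 0 : -1; }

static int probe(int init_ok, int reg_ok)
{
	int error;

	power_on();			/* must be undone on any failure */

	error = hw_init(init_ok);
	if (error)
		goto out_power_off;

	error = reg_ctlr(reg_ok);
	if (error)
		goto out_power_off;	/* same label: no duplicated cleanup */

	return 0;

out_power_off:
	power_off();
	return error;
}

int main(void)
{
	return probe(1, 0) ? 1 : 0;	/* registration fails, power is released */
}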
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index d96082dc3340d..bbf977a0d2c57 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1149,11 +1149,15 @@ static int __spi_unmap_msg(struct spi_controller *ctlr, struct spi_message *msg)
+
+ if (ctlr->dma_tx)
+ tx_dev = ctlr->dma_tx->device->dev;
++ else if (ctlr->dma_map_dev)
++ tx_dev = ctlr->dma_map_dev;
+ else
+ tx_dev = ctlr->dev.parent;
+
+ if (ctlr->dma_rx)
+ rx_dev = ctlr->dma_rx->device->dev;
++ else if (ctlr->dma_map_dev)
++ rx_dev = ctlr->dma_map_dev;
+ else
+ rx_dev = ctlr->dev.parent;
+
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 3a2e4582db8e1..a3e3c9f9aa181 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -1209,6 +1209,9 @@ int vchiq_dump_platform_instances(void *dump_context)
+ int len;
+ int i;
+
++ if (!state)
++ return -ENOTCONN;
++
+ /*
+ * There is no list of instances, so instead scan all services,
+ * marking those that have been dumped.
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 7fe20d4b7ba28..b7295236671c1 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -2306,6 +2306,9 @@ void vchiq_msg_queue_push(unsigned int handle, struct vchiq_header *header)
+ struct vchiq_service *service = find_service_by_handle(handle);
+ int pos;
+
++ if (!service)
++ return;
++
+ while (service->msg_queue_write == service->msg_queue_read +
+ VCHIQ_MAX_SLOTS) {
+ if (wait_for_completion_interruptible(&service->msg_queue_pop))
+@@ -2326,6 +2329,9 @@ struct vchiq_header *vchiq_msg_hold(unsigned int handle)
+ struct vchiq_header *header;
+ int pos;
+
++ if (!service)
++ return NULL;
++
+ if (service->msg_queue_write == service->msg_queue_read)
+ return NULL;
+
+diff --git a/drivers/staging/wfx/bus_sdio.c b/drivers/staging/wfx/bus_sdio.c
+index a670176ba06f0..0612f8a7c0857 100644
+--- a/drivers/staging/wfx/bus_sdio.c
++++ b/drivers/staging/wfx/bus_sdio.c
+@@ -207,9 +207,6 @@ static int wfx_sdio_probe(struct sdio_func *func,
+
+ bus->func = func;
+ sdio_set_drvdata(func, bus);
+- func->card->quirks |= MMC_QUIRK_LENIENT_FN0 |
+- MMC_QUIRK_BLKSZ_FOR_BYTE_MODE |
+- MMC_QUIRK_BROKEN_BYTE_MODE_512;
+
+ sdio_claim_host(func);
+ ret = sdio_enable_func(func);
+diff --git a/drivers/staging/wfx/main.c b/drivers/staging/wfx/main.c
+index 858d778cc5897..e3999e95ce851 100644
+--- a/drivers/staging/wfx/main.c
++++ b/drivers/staging/wfx/main.c
+@@ -322,7 +322,8 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ wdev->pdata.gpio_wakeup = devm_gpiod_get_optional(dev, "wakeup",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(wdev->pdata.gpio_wakeup))
+- return NULL;
++ goto err;
++
+ if (wdev->pdata.gpio_wakeup)
+ gpiod_set_consumer_name(wdev->pdata.gpio_wakeup, "wfx wakeup");
+
+@@ -341,6 +342,10 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ return NULL;
+
+ return wdev;
++
++err:
++ ieee80211_free_hw(hw);
++ return NULL;
+ }
+
+ int wfx_probe(struct wfx_dev *wdev)
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index d002a4e48ed93..0d94a7cb275e5 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -921,11 +921,8 @@ static void s3c24xx_serial_tx_chars(struct s3c24xx_uart_port *ourport)
+ return;
+ }
+
+- if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) {
+- spin_unlock(&port->lock);
++ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+- spin_lock(&port->lock);
+- }
+
+ if (uart_circ_empty(xmit))
+ s3c24xx_serial_stop_tx(port);
+diff --git a/drivers/usb/cdns3/cdnsp-debug.h b/drivers/usb/cdns3/cdnsp-debug.h
+index a8776df2d4e0c..f0ca865cce2a0 100644
+--- a/drivers/usb/cdns3/cdnsp-debug.h
++++ b/drivers/usb/cdns3/cdnsp-debug.h
+@@ -182,208 +182,211 @@ static inline const char *cdnsp_decode_trb(char *str, size_t size, u32 field0,
+ int ep_id = TRB_TO_EP_INDEX(field3) - 1;
+ int type = TRB_FIELD_TO_TYPE(field3);
+ unsigned int ep_num;
+- int ret = 0;
++ int ret;
+ u32 temp;
+
+ ep_num = DIV_ROUND_UP(ep_id, 2);
+
+ switch (type) {
+ case TRB_LINK:
+- ret += snprintf(str, size,
+- "LINK %08x%08x intr %ld type '%s' flags %c:%c:%c:%c",
+- field1, field0, GET_INTR_TARGET(field2),
+- cdnsp_trb_type_string(type),
+- field3 & TRB_IOC ? 'I' : 'i',
+- field3 & TRB_CHAIN ? 'C' : 'c',
+- field3 & TRB_TC ? 'T' : 't',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "LINK %08x%08x intr %ld type '%s' flags %c:%c:%c:%c",
++ field1, field0, GET_INTR_TARGET(field2),
++ cdnsp_trb_type_string(type),
++ field3 & TRB_IOC ? 'I' : 'i',
++ field3 & TRB_CHAIN ? 'C' : 'c',
++ field3 & TRB_TC ? 'T' : 't',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_TRANSFER:
+ case TRB_COMPLETION:
+ case TRB_PORT_STATUS:
+ case TRB_HC_EVENT:
+- ret += snprintf(str, size,
+- "ep%d%s(%d) type '%s' TRB %08x%08x status '%s'"
+- " len %ld slot %ld flags %c:%c",
+- ep_num, ep_id % 2 ? "out" : "in",
+- TRB_TO_EP_INDEX(field3),
+- cdnsp_trb_type_string(type), field1, field0,
+- cdnsp_trb_comp_code_string(GET_COMP_CODE(field2)),
+- EVENT_TRB_LEN(field2), TRB_TO_SLOT_ID(field3),
+- field3 & EVENT_DATA ? 'E' : 'e',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "ep%d%s(%d) type '%s' TRB %08x%08x status '%s'"
++ " len %ld slot %ld flags %c:%c",
++ ep_num, ep_id % 2 ? "out" : "in",
++ TRB_TO_EP_INDEX(field3),
++ cdnsp_trb_type_string(type), field1, field0,
++ cdnsp_trb_comp_code_string(GET_COMP_CODE(field2)),
++ EVENT_TRB_LEN(field2), TRB_TO_SLOT_ID(field3),
++ field3 & EVENT_DATA ? 'E' : 'e',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_MFINDEX_WRAP:
+- ret += snprintf(str, size, "%s: flags %c",
+- cdnsp_trb_type_string(type),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size, "%s: flags %c",
++ cdnsp_trb_type_string(type),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_SETUP:
+- ret += snprintf(str, size,
+- "type '%s' bRequestType %02x bRequest %02x "
+- "wValue %02x%02x wIndex %02x%02x wLength %d "
+- "length %ld TD size %ld intr %ld Setup ID %ld "
+- "flags %c:%c:%c",
+- cdnsp_trb_type_string(type),
+- field0 & 0xff,
+- (field0 & 0xff00) >> 8,
+- (field0 & 0xff000000) >> 24,
+- (field0 & 0xff0000) >> 16,
+- (field1 & 0xff00) >> 8,
+- field1 & 0xff,
+- (field1 & 0xff000000) >> 16 |
+- (field1 & 0xff0000) >> 16,
+- TRB_LEN(field2), GET_TD_SIZE(field2),
+- GET_INTR_TARGET(field2),
+- TRB_SETUPID_TO_TYPE(field3),
+- field3 & TRB_IDT ? 'D' : 'd',
+- field3 & TRB_IOC ? 'I' : 'i',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "type '%s' bRequestType %02x bRequest %02x "
++ "wValue %02x%02x wIndex %02x%02x wLength %d "
++ "length %ld TD size %ld intr %ld Setup ID %ld "
++ "flags %c:%c:%c",
++ cdnsp_trb_type_string(type),
++ field0 & 0xff,
++ (field0 & 0xff00) >> 8,
++ (field0 & 0xff000000) >> 24,
++ (field0 & 0xff0000) >> 16,
++ (field1 & 0xff00) >> 8,
++ field1 & 0xff,
++ (field1 & 0xff000000) >> 16 |
++ (field1 & 0xff0000) >> 16,
++ TRB_LEN(field2), GET_TD_SIZE(field2),
++ GET_INTR_TARGET(field2),
++ TRB_SETUPID_TO_TYPE(field3),
++ field3 & TRB_IDT ? 'D' : 'd',
++ field3 & TRB_IOC ? 'I' : 'i',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_DATA:
+- ret += snprintf(str, size,
+- "type '%s' Buffer %08x%08x length %ld TD size %ld "
+- "intr %ld flags %c:%c:%c:%c:%c:%c:%c",
+- cdnsp_trb_type_string(type),
+- field1, field0, TRB_LEN(field2),
+- GET_TD_SIZE(field2),
+- GET_INTR_TARGET(field2),
+- field3 & TRB_IDT ? 'D' : 'i',
+- field3 & TRB_IOC ? 'I' : 'i',
+- field3 & TRB_CHAIN ? 'C' : 'c',
+- field3 & TRB_NO_SNOOP ? 'S' : 's',
+- field3 & TRB_ISP ? 'I' : 'i',
+- field3 & TRB_ENT ? 'E' : 'e',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "type '%s' Buffer %08x%08x length %ld TD size %ld "
++ "intr %ld flags %c:%c:%c:%c:%c:%c:%c",
++ cdnsp_trb_type_string(type),
++ field1, field0, TRB_LEN(field2),
++ GET_TD_SIZE(field2),
++ GET_INTR_TARGET(field2),
++ field3 & TRB_IDT ? 'D' : 'i',
++ field3 & TRB_IOC ? 'I' : 'i',
++ field3 & TRB_CHAIN ? 'C' : 'c',
++ field3 & TRB_NO_SNOOP ? 'S' : 's',
++ field3 & TRB_ISP ? 'I' : 'i',
++ field3 & TRB_ENT ? 'E' : 'e',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_STATUS:
+- ret += snprintf(str, size,
+- "Buffer %08x%08x length %ld TD size %ld intr"
+- "%ld type '%s' flags %c:%c:%c:%c",
+- field1, field0, TRB_LEN(field2),
+- GET_TD_SIZE(field2),
+- GET_INTR_TARGET(field2),
+- cdnsp_trb_type_string(type),
+- field3 & TRB_IOC ? 'I' : 'i',
+- field3 & TRB_CHAIN ? 'C' : 'c',
+- field3 & TRB_ENT ? 'E' : 'e',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "Buffer %08x%08x length %ld TD size %ld intr"
++ "%ld type '%s' flags %c:%c:%c:%c",
++ field1, field0, TRB_LEN(field2),
++ GET_TD_SIZE(field2),
++ GET_INTR_TARGET(field2),
++ cdnsp_trb_type_string(type),
++ field3 & TRB_IOC ? 'I' : 'i',
++ field3 & TRB_CHAIN ? 'C' : 'c',
++ field3 & TRB_ENT ? 'E' : 'e',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_NORMAL:
+ case TRB_ISOC:
+ case TRB_EVENT_DATA:
+ case TRB_TR_NOOP:
+- ret += snprintf(str, size,
+- "type '%s' Buffer %08x%08x length %ld "
+- "TD size %ld intr %ld "
+- "flags %c:%c:%c:%c:%c:%c:%c:%c:%c",
+- cdnsp_trb_type_string(type),
+- field1, field0, TRB_LEN(field2),
+- GET_TD_SIZE(field2),
+- GET_INTR_TARGET(field2),
+- field3 & TRB_BEI ? 'B' : 'b',
+- field3 & TRB_IDT ? 'T' : 't',
+- field3 & TRB_IOC ? 'I' : 'i',
+- field3 & TRB_CHAIN ? 'C' : 'c',
+- field3 & TRB_NO_SNOOP ? 'S' : 's',
+- field3 & TRB_ISP ? 'I' : 'i',
+- field3 & TRB_ENT ? 'E' : 'e',
+- field3 & TRB_CYCLE ? 'C' : 'c',
+- !(field3 & TRB_EVENT_INVALIDATE) ? 'V' : 'v');
++ ret = snprintf(str, size,
++ "type '%s' Buffer %08x%08x length %ld "
++ "TD size %ld intr %ld "
++ "flags %c:%c:%c:%c:%c:%c:%c:%c:%c",
++ cdnsp_trb_type_string(type),
++ field1, field0, TRB_LEN(field2),
++ GET_TD_SIZE(field2),
++ GET_INTR_TARGET(field2),
++ field3 & TRB_BEI ? 'B' : 'b',
++ field3 & TRB_IDT ? 'T' : 't',
++ field3 & TRB_IOC ? 'I' : 'i',
++ field3 & TRB_CHAIN ? 'C' : 'c',
++ field3 & TRB_NO_SNOOP ? 'S' : 's',
++ field3 & TRB_ISP ? 'I' : 'i',
++ field3 & TRB_ENT ? 'E' : 'e',
++ field3 & TRB_CYCLE ? 'C' : 'c',
++ !(field3 & TRB_EVENT_INVALIDATE) ? 'V' : 'v');
+ break;
+ case TRB_CMD_NOOP:
+ case TRB_ENABLE_SLOT:
+- ret += snprintf(str, size, "%s: flags %c",
+- cdnsp_trb_type_string(type),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size, "%s: flags %c",
++ cdnsp_trb_type_string(type),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_DISABLE_SLOT:
+- ret += snprintf(str, size, "%s: slot %ld flags %c",
+- cdnsp_trb_type_string(type),
+- TRB_TO_SLOT_ID(field3),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size, "%s: slot %ld flags %c",
++ cdnsp_trb_type_string(type),
++ TRB_TO_SLOT_ID(field3),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_ADDR_DEV:
+- ret += snprintf(str, size,
+- "%s: ctx %08x%08x slot %ld flags %c:%c",
+- cdnsp_trb_type_string(type), field1, field0,
+- TRB_TO_SLOT_ID(field3),
+- field3 & TRB_BSR ? 'B' : 'b',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "%s: ctx %08x%08x slot %ld flags %c:%c",
++ cdnsp_trb_type_string(type), field1, field0,
++ TRB_TO_SLOT_ID(field3),
++ field3 & TRB_BSR ? 'B' : 'b',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_CONFIG_EP:
+- ret += snprintf(str, size,
+- "%s: ctx %08x%08x slot %ld flags %c:%c",
+- cdnsp_trb_type_string(type), field1, field0,
+- TRB_TO_SLOT_ID(field3),
+- field3 & TRB_DC ? 'D' : 'd',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "%s: ctx %08x%08x slot %ld flags %c:%c",
++ cdnsp_trb_type_string(type), field1, field0,
++ TRB_TO_SLOT_ID(field3),
++ field3 & TRB_DC ? 'D' : 'd',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_EVAL_CONTEXT:
+- ret += snprintf(str, size,
+- "%s: ctx %08x%08x slot %ld flags %c",
+- cdnsp_trb_type_string(type), field1, field0,
+- TRB_TO_SLOT_ID(field3),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "%s: ctx %08x%08x slot %ld flags %c",
++ cdnsp_trb_type_string(type), field1, field0,
++ TRB_TO_SLOT_ID(field3),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_RESET_EP:
+ case TRB_HALT_ENDPOINT:
+ case TRB_FLUSH_ENDPOINT:
+- ret += snprintf(str, size,
+- "%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c",
+- cdnsp_trb_type_string(type),
+- ep_num, ep_id % 2 ? "out" : "in",
+- TRB_TO_EP_INDEX(field3), field1, field0,
+- TRB_TO_SLOT_ID(field3),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "%s: ep%d%s(%d) ctx %08x%08x slot %ld flags %c",
++ cdnsp_trb_type_string(type),
++ ep_num, ep_id % 2 ? "out" : "in",
++ TRB_TO_EP_INDEX(field3), field1, field0,
++ TRB_TO_SLOT_ID(field3),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_STOP_RING:
+- ret += snprintf(str, size,
+- "%s: ep%d%s(%d) slot %ld sp %d flags %c",
+- cdnsp_trb_type_string(type),
+- ep_num, ep_id % 2 ? "out" : "in",
+- TRB_TO_EP_INDEX(field3),
+- TRB_TO_SLOT_ID(field3),
+- TRB_TO_SUSPEND_PORT(field3),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "%s: ep%d%s(%d) slot %ld sp %d flags %c",
++ cdnsp_trb_type_string(type),
++ ep_num, ep_id % 2 ? "out" : "in",
++ TRB_TO_EP_INDEX(field3),
++ TRB_TO_SLOT_ID(field3),
++ TRB_TO_SUSPEND_PORT(field3),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_SET_DEQ:
+- ret += snprintf(str, size,
+- "%s: ep%d%s(%d) deq %08x%08x stream %ld slot %ld flags %c",
+- cdnsp_trb_type_string(type),
+- ep_num, ep_id % 2 ? "out" : "in",
+- TRB_TO_EP_INDEX(field3), field1, field0,
+- TRB_TO_STREAM_ID(field2),
+- TRB_TO_SLOT_ID(field3),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size,
++ "%s: ep%d%s(%d) deq %08x%08x stream %ld slot %ld flags %c",
++ cdnsp_trb_type_string(type),
++ ep_num, ep_id % 2 ? "out" : "in",
++ TRB_TO_EP_INDEX(field3), field1, field0,
++ TRB_TO_STREAM_ID(field2),
++ TRB_TO_SLOT_ID(field3),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_RESET_DEV:
+- ret += snprintf(str, size, "%s: slot %ld flags %c",
+- cdnsp_trb_type_string(type),
+- TRB_TO_SLOT_ID(field3),
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ ret = snprintf(str, size, "%s: slot %ld flags %c",
++ cdnsp_trb_type_string(type),
++ TRB_TO_SLOT_ID(field3),
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ case TRB_ENDPOINT_NRDY:
+- temp = TRB_TO_HOST_STREAM(field2);
+-
+- ret += snprintf(str, size,
+- "%s: ep%d%s(%d) H_SID %x%s%s D_SID %lx flags %c:%c",
+- cdnsp_trb_type_string(type),
+- ep_num, ep_id % 2 ? "out" : "in",
+- TRB_TO_EP_INDEX(field3), temp,
+- temp == STREAM_PRIME_ACK ? "(PRIME)" : "",
+- temp == STREAM_REJECTED ? "(REJECTED)" : "",
+- TRB_TO_DEV_STREAM(field0),
+- field3 & TRB_STAT ? 'S' : 's',
+- field3 & TRB_CYCLE ? 'C' : 'c');
++ temp = TRB_TO_HOST_STREAM(field2);
++
++ ret = snprintf(str, size,
++ "%s: ep%d%s(%d) H_SID %x%s%s D_SID %lx flags %c:%c",
++ cdnsp_trb_type_string(type),
++ ep_num, ep_id % 2 ? "out" : "in",
++ TRB_TO_EP_INDEX(field3), temp,
++ temp == STREAM_PRIME_ACK ? "(PRIME)" : "",
++ temp == STREAM_REJECTED ? "(REJECTED)" : "",
++ TRB_TO_DEV_STREAM(field0),
++ field3 & TRB_STAT ? 'S' : 's',
++ field3 & TRB_CYCLE ? 'C' : 'c');
+ break;
+ default:
+- ret += snprintf(str, size,
+- "type '%s' -> raw %08x %08x %08x %08x",
+- cdnsp_trb_type_string(type),
+- field0, field1, field2, field3);
++ ret = snprintf(str, size,
++ "type '%s' -> raw %08x %08x %08x %08x",
++ cdnsp_trb_type_string(type),
++ field0, field1, field2, field3);
+ }
+
++ if (ret >= size)
++ pr_info("CDNSP: buffer overflowed.\n");
++
+ return str;
+ }
+
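Both the mvsas sysfs_emit conversion earlier in this patch and the cdnsp rework above lean on the same libc contract: snprintf() returns the length the output would have had, so a return value >= the buffer size signals truncation. The old += accumulation suggested chained appends that never happened and nothing checked for truncation; the rework assigns the return value directly and warns when it is >= size. A two-line userspace check of that contract:

#include <stdio.h>

int main(void)
{
	char buf[8];
	int ret = snprintf(buf, sizeof(buf), "%s",
			   "far too long for eight bytes");

	if (ret >= (int)sizeof(buf))
		printf("truncated: needed %d bytes, buf holds \"%s\"\n",
		       ret, buf);
	return 0;
}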
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index e196673f5c647..efaf0db595f46 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -242,7 +242,7 @@ static void dwc3_omap_set_mailbox(struct dwc3_omap *omap,
+ break;
+
+ case OMAP_DWC3_ID_FLOAT:
+- if (omap->vbus_reg)
++ if (omap->vbus_reg && regulator_is_enabled(omap->vbus_reg))
+ regulator_disable(omap->vbus_reg);
+ val = dwc3_omap_read_utmi_ctrl(omap);
+ val |= USBOTGSS_UTMI_OTG_CTRL_IDDIG;
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 06d0e88ec8af9..4d9608cc55f73 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -185,7 +185,8 @@ static const struct software_node dwc3_pci_amd_mr_swnode = {
+ .properties = dwc3_pci_mr_properties,
+ };
+
+-static int dwc3_pci_quirks(struct dwc3_pci *dwc)
++static int dwc3_pci_quirks(struct dwc3_pci *dwc,
++ const struct software_node *swnode)
+ {
+ struct pci_dev *pdev = dwc->pci;
+
+@@ -242,7 +243,7 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc)
+ }
+ }
+
+- return 0;
++ return device_add_software_node(&dwc->dwc3->dev, swnode);
+ }
+
+ #ifdef CONFIG_PM
+@@ -307,11 +308,7 @@ static int dwc3_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ dwc->dwc3->dev.parent = dev;
+ ACPI_COMPANION_SET(&dwc->dwc3->dev, ACPI_COMPANION(dev));
+
+- ret = device_add_software_node(&dwc->dwc3->dev, (void *)id->driver_data);
+- if (ret < 0)
+- goto err;
+-
+- ret = dwc3_pci_quirks(dwc);
++ ret = dwc3_pci_quirks(dwc, (void *)id->driver_data);
+ if (ret)
+ goto err;
+
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 43f1b0d461c1e..be76f891b9c52 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -32,9 +32,6 @@
+ #include <linux/workqueue.h>
+
+ /* XUSB_DEV registers */
+-#define SPARAM 0x000
+-#define SPARAM_ERSTMAX_MASK GENMASK(20, 16)
+-#define SPARAM_ERSTMAX(x) (((x) << 16) & SPARAM_ERSTMAX_MASK)
+ #define DB 0x004
+ #define DB_TARGET_MASK GENMASK(15, 8)
+ #define DB_TARGET(x) (((x) << 8) & DB_TARGET_MASK)
+@@ -275,8 +272,10 @@ BUILD_EP_CONTEXT_RW(deq_hi, deq_hi, 0, 0xffffffff)
+ BUILD_EP_CONTEXT_RW(avg_trb_len, tx_info, 0, 0xffff)
+ BUILD_EP_CONTEXT_RW(max_esit_payload, tx_info, 16, 0xffff)
+ BUILD_EP_CONTEXT_RW(edtla, rsvd[0], 0, 0xffffff)
+-BUILD_EP_CONTEXT_RW(seq_num, rsvd[0], 24, 0xff)
++BUILD_EP_CONTEXT_RW(rsvd, rsvd[0], 24, 0x1)
+ BUILD_EP_CONTEXT_RW(partial_td, rsvd[0], 25, 0x1)
++BUILD_EP_CONTEXT_RW(splitxstate, rsvd[0], 26, 0x1)
++BUILD_EP_CONTEXT_RW(seq_num, rsvd[0], 27, 0x1f)
+ BUILD_EP_CONTEXT_RW(cerrcnt, rsvd[1], 18, 0x3)
+ BUILD_EP_CONTEXT_RW(data_offset, rsvd[2], 0, 0x1ffff)
+ BUILD_EP_CONTEXT_RW(numtrbs, rsvd[2], 22, 0x1f)
+@@ -1557,6 +1556,9 @@ static int __tegra_xudc_ep_set_halt(struct tegra_xudc_ep *ep, bool halt)
+ ep_reload(xudc, ep->index);
+
+ ep_ctx_write_state(ep->context, EP_STATE_RUNNING);
++ ep_ctx_write_rsvd(ep->context, 0);
++ ep_ctx_write_partial_td(ep->context, 0);
++ ep_ctx_write_splitxstate(ep->context, 0);
+ ep_ctx_write_seq_num(ep->context, 0);
+
+ ep_reload(xudc, ep->index);
+@@ -2812,7 +2814,10 @@ static void tegra_xudc_reset(struct tegra_xudc *xudc)
+ xudc->setup_seq_num = 0;
+ xudc->queued_setup_packet = false;
+
+- ep_ctx_write_seq_num(ep0->context, xudc->setup_seq_num);
++ ep_ctx_write_rsvd(ep0->context, 0);
++ ep_ctx_write_partial_td(ep0->context, 0);
++ ep_ctx_write_splitxstate(ep0->context, 0);
++ ep_ctx_write_seq_num(ep0->context, 0);
+
+ deq_ptr = trb_virt_to_phys(ep0, &ep0->transfer_ring[ep0->deq_ptr]);
+
+@@ -3295,11 +3300,6 @@ static void tegra_xudc_init_event_ring(struct tegra_xudc *xudc)
+ unsigned int i;
+ u32 val;
+
+- val = xudc_readl(xudc, SPARAM);
+- val &= ~(SPARAM_ERSTMAX_MASK);
+- val |= SPARAM_ERSTMAX(XUDC_NR_EVENT_RINGS);
+- xudc_writel(xudc, val, SPARAM);
+-
+ for (i = 0; i < ARRAY_SIZE(xudc->event_ring); i++) {
+ memset(xudc->event_ring[i], 0, XUDC_EVENT_RING_SIZE *
+ sizeof(*xudc->event_ring[i]));
+diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
+index e87cf3a00fa4b..638f03b897394 100644
+--- a/drivers/usb/host/ehci-pci.c
++++ b/drivers/usb/host/ehci-pci.c
+@@ -21,6 +21,9 @@ static const char hcd_name[] = "ehci-pci";
+ /* defined here to avoid adding to pci_ids.h for single instance use */
+ #define PCI_DEVICE_ID_INTEL_CE4100_USB 0x2e70
+
++#define PCI_VENDOR_ID_ASPEED 0x1a03
++#define PCI_DEVICE_ID_ASPEED_EHCI 0x2603
++
+ /*-------------------------------------------------------------------------*/
+ #define PCI_DEVICE_ID_INTEL_QUARK_X1000_SOC 0x0939
+ static inline bool is_intel_quark_x1000(struct pci_dev *pdev)
+@@ -222,6 +225,12 @@ static int ehci_pci_setup(struct usb_hcd *hcd)
+ ehci->has_synopsys_hc_bug = 1;
+ }
+ break;
++ case PCI_VENDOR_ID_ASPEED:
++ if (pdev->device == PCI_DEVICE_ID_ASPEED_EHCI) {
++ ehci_info(ehci, "applying Aspeed HC workaround\n");
++ ehci->is_aspeed = 1;
++ }
++ break;
+ }
+
+ /* optional debug port, normally in the first BAR */
+diff --git a/drivers/usb/host/xen-hcd.c b/drivers/usb/host/xen-hcd.c
+index 19b8c7ed74cb1..4ed3ee328a4a6 100644
+--- a/drivers/usb/host/xen-hcd.c
++++ b/drivers/usb/host/xen-hcd.c
+@@ -51,6 +51,7 @@ struct vdevice_status {
+ struct usb_shadow {
+ struct xenusb_urb_request req;
+ struct urb *urb;
++ bool in_flight;
+ };
+
+ struct xenhcd_info {
+@@ -722,6 +723,12 @@ static void xenhcd_gnttab_done(struct xenhcd_info *info, unsigned int id)
+ int nr_segs = 0;
+ int i;
+
++ if (!shadow->in_flight) {
++ xenhcd_set_error(info, "Illegal request id");
++ return;
++ }
++ shadow->in_flight = false;
++
+ nr_segs = shadow->req.nr_buffer_segs;
+
+ if (xenusb_pipeisoc(shadow->req.pipe))
+@@ -805,6 +812,7 @@ static int xenhcd_do_request(struct xenhcd_info *info, struct urb_priv *urbp)
+
+ info->urb_ring.req_prod_pvt++;
+ info->shadow[id].urb = urb;
++ info->shadow[id].in_flight = true;
+
+ RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&info->urb_ring, notify);
+ if (notify)
+@@ -933,10 +941,27 @@ static int xenhcd_unlink_urb(struct xenhcd_info *info, struct urb_priv *urbp)
+ return ret;
+ }
+
+-static int xenhcd_urb_request_done(struct xenhcd_info *info)
++static void xenhcd_res_to_urb(struct xenhcd_info *info,
++ struct xenusb_urb_response *res, struct urb *urb)
++{
++ if (unlikely(!urb))
++ return;
++
++ if (res->actual_length > urb->transfer_buffer_length)
++ urb->actual_length = urb->transfer_buffer_length;
++ else if (res->actual_length < 0)
++ urb->actual_length = 0;
++ else
++ urb->actual_length = res->actual_length;
++ urb->error_count = res->error_count;
++ urb->start_frame = res->start_frame;
++ xenhcd_giveback_urb(info, urb, res->status);
++}
++
++static int xenhcd_urb_request_done(struct xenhcd_info *info,
++ unsigned int *eoiflag)
+ {
+ struct xenusb_urb_response res;
+- struct urb *urb;
+ RING_IDX i, rp;
+ __u16 id;
+ int more_to_do = 0;
+@@ -963,16 +988,12 @@ static int xenhcd_urb_request_done(struct xenhcd_info *info)
+ xenhcd_gnttab_done(info, id);
+ if (info->error)
+ goto err;
+- urb = info->shadow[id].urb;
+- if (likely(urb)) {
+- urb->actual_length = res.actual_length;
+- urb->error_count = res.error_count;
+- urb->start_frame = res.start_frame;
+- xenhcd_giveback_urb(info, urb, res.status);
+- }
++ xenhcd_res_to_urb(info, &res, info->shadow[id].urb);
+ }
+
+ xenhcd_add_id_to_freelist(info, id);
++
++ *eoiflag = 0;
+ }
+ info->urb_ring.rsp_cons = i;
+
+@@ -990,7 +1011,7 @@ static int xenhcd_urb_request_done(struct xenhcd_info *info)
+ return 0;
+ }
+
+-static int xenhcd_conn_notify(struct xenhcd_info *info)
++static int xenhcd_conn_notify(struct xenhcd_info *info, unsigned int *eoiflag)
+ {
+ struct xenusb_conn_response res;
+ struct xenusb_conn_request *req;
+@@ -1035,6 +1056,8 @@ static int xenhcd_conn_notify(struct xenhcd_info *info)
+ info->conn_ring.req_prod_pvt);
+ req->id = id;
+ info->conn_ring.req_prod_pvt++;
++
++ *eoiflag = 0;
+ }
+
+ if (rc != info->conn_ring.req_prod_pvt)
+@@ -1057,14 +1080,19 @@ static int xenhcd_conn_notify(struct xenhcd_info *info)
+ static irqreturn_t xenhcd_int(int irq, void *dev_id)
+ {
+ struct xenhcd_info *info = (struct xenhcd_info *)dev_id;
++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
+
+- if (unlikely(info->error))
++ if (unlikely(info->error)) {
++ xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
+ return IRQ_HANDLED;
++ }
+
+- while (xenhcd_urb_request_done(info) | xenhcd_conn_notify(info))
++ while (xenhcd_urb_request_done(info, &eoiflag) |
++ xenhcd_conn_notify(info, &eoiflag))
+ /* Yield point for this unbounded loop. */
+ cond_resched();
+
++ xen_irq_lateeoi(irq, eoiflag);
+ return IRQ_HANDLED;
+ }
+
+@@ -1141,9 +1169,9 @@ static int xenhcd_setup_rings(struct xenbus_device *dev,
+ goto fail;
+ }
+
+- err = bind_evtchn_to_irq(info->evtchn);
++ err = bind_evtchn_to_irq_lateeoi(info->evtchn);
+ if (err <= 0) {
+- xenbus_dev_fatal(dev, err, "bind_evtchn_to_irq");
++ xenbus_dev_fatal(dev, err, "bind_evtchn_to_irq_lateeoi");
+ goto fail;
+ }
+
+@@ -1496,6 +1524,7 @@ static struct usb_hcd *xenhcd_create_hcd(struct xenbus_device *dev)
+ for (i = 0; i < XENUSB_URB_RING_SIZE; i++) {
+ info->shadow[i].req.id = i + 1;
+ info->shadow[i].urb = NULL;
++ info->shadow[i].in_flight = false;
+ }
+ info->shadow[XENUSB_URB_RING_SIZE - 1].req.id = 0x0fff;
+
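The xen-hcd changes above treat the backend as untrusted on two fronts: the irq is bound with the lateeoi variant so a spurious-looking event can be reported as such, and a response id is only honored if the matching shadow slot was really in flight. The in_flight bookkeeping reduces to a small pattern, shown here with an illustrative slot table rather than the driver's structures (the lateeoi half is kernel-only and not sketched):

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 4

struct shadow { bool in_flight; };

static struct shadow shadow[RING_SIZE];

static int complete_response(unsigned int id)
{
	if (id >= RING_SIZE || !shadow[id].in_flight) {
		fprintf(stderr, "illegal request id %u\n", id);
		return -1;	/* don't touch state the bad id points at */
	}
	shadow[id].in_flight = false;
	printf("completed id %u\n", id);
	return 0;
}

int main(void)
{
	shadow[1].in_flight = true;	/* submit one request */

	complete_response(1);		/* ok */
	complete_response(1);		/* replayed id is rejected */
	complete_response(3);		/* never submitted, rejected */
	return 0;
}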
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 9fe1071a96444..1b5de3af1a627 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -163,6 +163,7 @@ struct mlx5_vdpa_net {
+ u32 cur_num_vqs;
+ struct notifier_block nb;
+ struct vdpa_callback config_cb;
++ struct mlx5_vdpa_wq_ent cvq_ent;
+ };
+
+ static void free_resources(struct mlx5_vdpa_net *ndev);
+@@ -1616,10 +1617,10 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
+ ndev = to_mlx5_vdpa_ndev(mvdev);
+ cvq = &mvdev->cvq;
+ if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
+- goto out;
++ return;
+
+ if (!cvq->ready)
+- goto out;
++ return;
+
+ while (true) {
+ err = vringh_getdesc_iotlb(&cvq->vring, &cvq->riov, &cvq->wiov, &cvq->head,
+@@ -1653,9 +1654,10 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
+
+ if (vringh_need_notify_iotlb(&cvq->vring))
+ vringh_notify(&cvq->vring);
++
++ queue_work(mvdev->wq, &wqent->work);
++ break;
+ }
+-out:
+- kfree(wqent);
+ }
+
+ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+@@ -1663,7 +1665,6 @@ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+ struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+ struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+ struct mlx5_vdpa_virtqueue *mvq;
+- struct mlx5_vdpa_wq_ent *wqent;
+
+ if (!is_index_valid(mvdev, idx))
+ return;
+@@ -1672,13 +1673,7 @@ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+ if (!mvdev->wq || !mvdev->cvq.ready)
+ return;
+
+- wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
+- if (!wqent)
+- return;
+-
+- wqent->mvdev = mvdev;
+- INIT_WORK(&wqent->work, mlx5_cvq_kick_handler);
+- queue_work(mvdev->wq, &wqent->work);
++ queue_work(mvdev->wq, &ndev->cvq_ent.work);
+ return;
+ }
+
+@@ -2668,6 +2663,8 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
+ if (err)
+ goto err_mr;
+
++ ndev->cvq_ent.mvdev = mvdev;
++ INIT_WORK(&ndev->cvq_ent.work, mlx5_cvq_kick_handler);
+ mvdev->wq = create_singlethread_workqueue("mlx5_vdpa_wq");
+ if (!mvdev->wq) {
+ err = -ENOMEM;
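The mlx5_vnet fix above replaces a GFP_ATOMIC allocation per control-VQ kick with one work entry embedded in the device and initialized once at add time; a kick then only re-queues it, which cannot fail under memory pressure and cannot leak. Sketched below with a trivial one-slot "queue" standing in for the kernel workqueue; the names mirror the patch loosely and are not real APIs:

#include <stddef.h>
#include <stdio.h>

struct work { void (*fn)(struct work *); };

struct net_dev {
	int kicks;
	struct work cvq_ent;		/* embedded, set up once at probe */
};

static struct work *pending;		/* one-slot stand-in for a workqueue */

static void queue_work(struct work *w) { pending = w; }

static void run_pending(void)
{
	if (pending) {
		struct work *w = pending;
		pending = NULL;
		w->fn(w);
	}
}

static void cvq_kick_handler(struct work *w)
{
	/* container_of by hand: recover the device from its embedded work */
	struct net_dev *ndev = (struct net_dev *)
		((char *)w - offsetof(struct net_dev, cvq_ent));
	ndev->kicks++;
}

int main(void)
{
	struct net_dev ndev = { .kicks = 0 };

	ndev.cvq_ent.fn = cvq_kick_handler;	/* INIT_WORK, done once */

	queue_work(&ndev.cvq_ent);		/* per kick: no allocation */
	run_pending();

	printf("handled %d kick(s)\n", ndev.kicks);
	return 0;
}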
+diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
+index 57d3b2cbbd8e5..82ac1569deb05 100644
+--- a/drivers/vfio/pci/vfio_pci_rdwr.c
++++ b/drivers/vfio/pci/vfio_pci_rdwr.c
+@@ -288,6 +288,7 @@ out:
+ return done;
+ }
+
++#ifdef CONFIG_VFIO_PCI_VGA
+ ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite)
+ {
+@@ -355,6 +356,7 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+
+ return done;
+ }
++#endif
+
+ static void vfio_pci_ioeventfd_do_write(struct vfio_pci_ioeventfd *ioeventfd,
+ bool test_mem)
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 28ef323882fb2..792ab5f236471 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -473,6 +473,7 @@ static void vhost_tx_batch(struct vhost_net *net,
+ goto signal_used;
+
+ msghdr->msg_control = &ctl;
++ msghdr->msg_controllen = sizeof(ctl);
+ err = sock->ops->sendmsg(sock, msghdr, 0);
+ if (unlikely(err < 0)) {
+ vq_err(&nvq->vq, "Fail to batch sending packets\n");
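The vhost fix above sets msg_controllen alongside msg_control: sendmsg() only consults as many control bytes as msg_controllen advertises, so a zero length makes the attached ancillary data invisible. The same pairing in ordinary userspace code, passing a file descriptor over a socketpair (a sketch, not the vhost path itself):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2];
	if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv))
		return 1;

	char data = 'x';
	struct iovec iov = { .iov_base = &data, .iov_len = 1 };
	union { struct cmsghdr align; char buf[CMSG_SPACE(sizeof(int))]; } u;
	memset(&u, 0, sizeof(u));
	struct msghdr msg = {
		.msg_iov	= &iov,
		.msg_iovlen	= 1,
		.msg_control	= u.buf,
		.msg_controllen	= sizeof(u.buf),  /* must accompany msg_control */
	};

	struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
	cm->cmsg_level = SOL_SOCKET;
	cm->cmsg_type = SCM_RIGHTS;
	cm->cmsg_len = CMSG_LEN(sizeof(int));
	int fd = 1;				/* send our stdout across */
	memcpy(CMSG_DATA(cm), &fd, sizeof(fd));

	if (sendmsg(sv[0], &msg, 0) < 0)
		perror("sendmsg");
	close(sv[0]);
	close(sv[1]);
	return 0;
}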
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index ad9aac06427a0..00f0f282e7a13 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1583,7 +1583,14 @@ static void do_remove_conflicting_framebuffers(struct apertures_struct *a,
+ * If it's not a platform device, at least print a warning. A
+ * fix would add code to remove the device from the system.
+ */
+- if (dev_is_platform(device)) {
++ if (!device) {
++ /* TODO: Represent each OF framebuffer as its own
++ * device in the device hierarchy. For now, offb
++ * doesn't have such a device, so unregister the
++ * framebuffer as before without warning.
++ */
++ do_unregister_framebuffer(registered_fb[i]);
++ } else if (dev_is_platform(device)) {
+ registered_fb[i]->forced_out = true;
+ platform_device_unregister(to_platform_device(device));
+ } else {
+diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
+index 565578002d79e..c7b8a8e787e23 100644
+--- a/drivers/w1/slaves/w1_therm.c
++++ b/drivers/w1/slaves/w1_therm.c
+@@ -2089,16 +2089,20 @@ static ssize_t w1_seq_show(struct device *device,
+ if (sl->reg_num.id == reg_num->id)
+ seq = i;
+
++ if (w1_reset_bus(sl->master))
++ goto error;
++
++ /* Put the device into chain DONE state */
++ w1_write_8(sl->master, W1_MATCH_ROM);
++ w1_write_block(sl->master, (u8 *)&rn, 8);
+ w1_write_8(sl->master, W1_42_CHAIN);
+ w1_write_8(sl->master, W1_42_CHAIN_DONE);
+ w1_write_8(sl->master, W1_42_CHAIN_DONE_INV);
+- w1_read_block(sl->master, &ack, sizeof(ack));
+
+ /* check for acknowledgment */
+ ack = w1_read_8(sl->master);
+ if (ack != W1_42_SUCCESS_CONFIRM_BYTE)
+ goto error;
+-
+ }
+
+ /* Exit from CHAIN state */
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 0399cf8e3c32c..151e9da5da2dc 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -118,7 +118,7 @@ struct btrfs_bio_ctrl {
+ */
+ struct extent_changeset {
+ /* How many bytes are set/cleared in this operation */
+- unsigned int bytes_changed;
++ u64 bytes_changed;
+
+ /* Changed ranges */
+ struct ulist range_changed;
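The extent_changeset fix above widens bytes_changed from unsigned int to u64: a byte count covering more than 4 GiB in one operation wraps a 32-bit accumulator. The wraparound is easy to reproduce:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t narrow = 0;
	uint64_t wide = 0;
	uint64_t range = 5ULL << 30;	/* 5 GiB changed in one operation */

	narrow += range;		/* silently truncated to 32 bits */
	wide += range;

	printf("u32: %u bytes, u64: %llu bytes\n",
	       narrow, (unsigned long long)wide);
	return 0;
}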
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 0f4408f9daddc..ec4950cfe1ea9 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4466,6 +4466,13 @@ int btrfs_delete_subvolume(struct inode *dir, struct dentry *dentry)
+ dest->root_key.objectid);
+ return -EPERM;
+ }
++ if (atomic_read(&dest->nr_swapfiles)) {
++ spin_unlock(&dest->root_item_lock);
++ btrfs_warn(fs_info,
++ "attempt to delete subvolume %llu with active swapfile",
++ root->root_key.objectid);
++ return -EPERM;
++ }
+ root_flags = btrfs_root_flags(&dest->root_item);
+ btrfs_set_root_flags(&dest->root_item,
+ root_flags | BTRFS_ROOT_SUBVOL_DEAD);
+@@ -10424,8 +10431,23 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ * set. We use this counter to prevent snapshots. We must increment it
+ * before walking the extents because we don't want a concurrent
+ * snapshot to run after we've already checked the extents.
++ *
++ * It is possible that the subvolume is marked for deletion but not
++ * yet removed. To prevent this race, we check the root status before
++ * activating the swapfile.
+ */
++ spin_lock(&root->root_item_lock);
++ if (btrfs_root_dead(root)) {
++ spin_unlock(&root->root_item_lock);
++
++ btrfs_exclop_finish(fs_info);
++ btrfs_warn(fs_info,
++ "cannot activate swapfile because subvolume %llu is being deleted",
++ root->root_key.objectid);
++ return -EPERM;
++ }
+ atomic_inc(&root->nr_swapfiles);
++ spin_unlock(&root->root_item_lock);
+
+ isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);
+
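Together with the nr_swapfiles check added to btrfs_delete_subvolume() above, this hunk closes the race symmetrically: both paths serialize on root_item_lock, and each side checks the other's state before committing its own. A compact userspace analogue of that two-way handshake (pthread mutex standing in for the spinlock; names are illustrative, not btrfs API):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

struct root_state {
	pthread_mutex_t lock;
	bool dead;		/* analogue of BTRFS_ROOT_SUBVOL_DEAD */
	int nr_swapfiles;	/* analogue of root->nr_swapfiles */
};

static int activate_swapfile(struct root_state *r)
{
	pthread_mutex_lock(&r->lock);
	if (r->dead) {			/* subvolume already being deleted */
		pthread_mutex_unlock(&r->lock);
		return -EPERM;
	}
	r->nr_swapfiles++;		/* committed under the same lock */
	pthread_mutex_unlock(&r->lock);
	return 0;
}

static int delete_subvolume(struct root_state *r)
{
	pthread_mutex_lock(&r->lock);
	if (r->nr_swapfiles) {		/* active swapfile blocks deletion */
		pthread_mutex_unlock(&r->lock);
		return -EPERM;
	}
	r->dead = true;			/* committed under the same lock */
	pthread_mutex_unlock(&r->lock);
	return 0;
}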
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 8d47ec5fc4f44..8fe9d55d68622 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -1215,7 +1215,7 @@ static u32 get_extent_max_capacity(const struct extent_map *em)
+ }
+
+ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+- bool locked)
++ u32 extent_thresh, u64 newer_than, bool locked)
+ {
+ struct extent_map *next;
+ bool ret = false;
+@@ -1225,11 +1225,12 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+ return false;
+
+ /*
+- * We want to check if the next extent can be merged with the current
+- * one, which can be an extent created in a past generation, so we pass
+- * a minimum generation of 0 to defrag_lookup_extent().
++ * Here we need to pass @newer_than when checking the next extent, or
++ * we will hit a case where we mark the current extent for defrag, but
++ * the next one will not be a target.
++ * This would just cause extra IO without really reducing the fragments.
+ */
+- next = defrag_lookup_extent(inode, em->start + em->len, 0, locked);
++ next = defrag_lookup_extent(inode, em->start + em->len, newer_than, locked);
+ /* No more em or hole */
+ if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
+ goto out;
+@@ -1241,6 +1242,13 @@ static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
+ */
+ if (next->len >= get_extent_max_capacity(em))
+ goto out;
++ /* Skip older extent */
++ if (next->generation < newer_than)
++ goto out;
++ /* Also check extent size */
++ if (next->len >= extent_thresh)
++ goto out;
++
+ ret = true;
+ out:
+ free_extent_map(next);
+@@ -1446,7 +1454,7 @@ static int defrag_collect_targets(struct btrfs_inode *inode,
+ goto next;
+
+ next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em,
+- locked);
++ extent_thresh, newer_than, locked);
+ if (!next_mergeable) {
+ struct defrag_target_range *last;
+
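The hunks above apply the same newer_than/extent_thresh target test to the next extent that is already applied to the current one, so defrag stops merging into extents it would never have selected on their own. A small sketch of that shared-predicate idea (simplified fields, not the btrfs extent_map layout):

#include <stdbool.h>
#include <stdint.h>

struct extent { uint64_t generation, len; };

static bool is_defrag_target(const struct extent *e,
			     uint64_t newer_than, uint32_t extent_thresh)
{
	return e->generation >= newer_than && e->len < extent_thresh;
}

static bool should_merge(const struct extent *cur, const struct extent *next,
			 uint64_t newer_than, uint32_t extent_thresh)
{
	/* Merging with a non-target next extent only adds IO. */
	return is_defrag_target(cur, newer_than, extent_thresh) &&
	       is_defrag_target(next, newer_than, extent_thresh);
}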
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index f5b11c1a000aa..f5688a25fca35 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -1945,23 +1945,18 @@ static void update_dev_time(const char *device_path)
+ path_put(&path);
+ }
+
+-static int btrfs_rm_dev_item(struct btrfs_device *device)
++static int btrfs_rm_dev_item(struct btrfs_trans_handle *trans,
++ struct btrfs_device *device)
+ {
+ struct btrfs_root *root = device->fs_info->chunk_root;
+ int ret;
+ struct btrfs_path *path;
+ struct btrfs_key key;
+- struct btrfs_trans_handle *trans;
+
+ path = btrfs_alloc_path();
+ if (!path)
+ return -ENOMEM;
+
+- trans = btrfs_start_transaction(root, 0);
+- if (IS_ERR(trans)) {
+- btrfs_free_path(path);
+- return PTR_ERR(trans);
+- }
+ key.objectid = BTRFS_DEV_ITEMS_OBJECTID;
+ key.type = BTRFS_DEV_ITEM_KEY;
+ key.offset = device->devid;
+@@ -1972,21 +1967,12 @@ static int btrfs_rm_dev_item(struct btrfs_device *device)
+ if (ret) {
+ if (ret > 0)
+ ret = -ENOENT;
+- btrfs_abort_transaction(trans, ret);
+- btrfs_end_transaction(trans);
+ goto out;
+ }
+
+ ret = btrfs_del_item(trans, root, path);
+- if (ret) {
+- btrfs_abort_transaction(trans, ret);
+- btrfs_end_transaction(trans);
+- }
+-
+ out:
+ btrfs_free_path(path);
+- if (!ret)
+- ret = btrfs_commit_transaction(trans);
+ return ret;
+ }
+
+@@ -2127,6 +2113,7 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ struct btrfs_dev_lookup_args *args,
+ struct block_device **bdev, fmode_t *mode)
+ {
++ struct btrfs_trans_handle *trans;
+ struct btrfs_device *device;
+ struct btrfs_fs_devices *cur_devices;
+ struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
+@@ -2142,7 +2129,7 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+
+ ret = btrfs_check_raid_min_devices(fs_info, num_devices - 1);
+ if (ret)
+- goto out;
++ return ret;
+
+ device = btrfs_find_device(fs_info->fs_devices, args);
+ if (!device) {
+@@ -2150,27 +2137,22 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ ret = BTRFS_ERROR_DEV_MISSING_NOT_FOUND;
+ else
+ ret = -ENOENT;
+- goto out;
++ return ret;
+ }
+
+ if (btrfs_pinned_by_swapfile(fs_info, device)) {
+ btrfs_warn_in_rcu(fs_info,
+ "cannot remove device %s (devid %llu) due to active swapfile",
+ rcu_str_deref(device->name), device->devid);
+- ret = -ETXTBSY;
+- goto out;
++ return -ETXTBSY;
+ }
+
+- if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) {
+- ret = BTRFS_ERROR_DEV_TGT_REPLACE;
+- goto out;
+- }
++ if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state))
++ return BTRFS_ERROR_DEV_TGT_REPLACE;
+
+ if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state) &&
+- fs_info->fs_devices->rw_devices == 1) {
+- ret = BTRFS_ERROR_DEV_ONLY_WRITABLE;
+- goto out;
+- }
++ fs_info->fs_devices->rw_devices == 1)
++ return BTRFS_ERROR_DEV_ONLY_WRITABLE;
+
+ if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
+ mutex_lock(&fs_info->chunk_mutex);
+@@ -2183,14 +2165,22 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ if (ret)
+ goto error_undo;
+
+- /*
+- * TODO: the superblock still includes this device in its num_devices
+- * counter although write_all_supers() is not locked out. This
+- * could give a filesystem state which requires a degraded mount.
+- */
+- ret = btrfs_rm_dev_item(device);
+- if (ret)
++ trans = btrfs_start_transaction(fs_info->chunk_root, 0);
++ if (IS_ERR(trans)) {
++ ret = PTR_ERR(trans);
+ goto error_undo;
++ }
++
++ ret = btrfs_rm_dev_item(trans, device);
++ if (ret) {
++ /* Any error in dev item removal is critical */
++ btrfs_crit(fs_info,
++ "failed to remove device item for devid %llu: %d",
++ device->devid, ret);
++ btrfs_abort_transaction(trans, ret);
++ btrfs_end_transaction(trans);
++ return ret;
++ }
+
+ clear_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state);
+ btrfs_scrub_cancel_dev(device);
+@@ -2273,7 +2263,8 @@ int btrfs_rm_device(struct btrfs_fs_info *fs_info,
+ free_fs_devices(cur_devices);
+ }
+
+-out:
++ ret = btrfs_commit_transaction(trans);
++
+ return ret;
+
+ error_undo:
+@@ -2284,7 +2275,7 @@ error_undo:
+ device->fs_devices->rw_devices++;
+ mutex_unlock(&fs_info->chunk_mutex);
+ }
+- goto out;
++ return ret;
+ }
+
+ void btrfs_rm_dev_replace_remove_srcdev(struct btrfs_device *srcdev)
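The refactor above hoists transaction ownership out of btrfs_rm_dev_item() and into btrfs_rm_device(), so start, abort, and commit all live in the caller and the helper can no longer half-manage the transaction. A minimal sketch of that ownership shape (hypothetical names, not btrfs API):

#include <stdio.h>

struct trans { int aborted; };

/* Helper uses the caller's transaction; it never starts or ends one. */
static int rm_dev_item(struct trans *t, int devid)
{
	(void)t;
	return devid >= 0 ? 0 : -1;	/* pretend lookup/delete of the item */
}

static int rm_device(int devid)
{
	struct trans t = { 0 };
	int ret = rm_dev_item(&t, devid);

	if (ret) {
		t.aborted = 1;		/* single abort path, in the owner */
		fprintf(stderr, "failed to remove dev item: %d\n", ret);
		return ret;
	}
	/* ... remaining removal steps under the same transaction ... */
	return 0;			/* commit once, at the end */
}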
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index f559d517c7c44..f03705d2f8a8c 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1927,18 +1927,19 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group)
+
+ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
+ {
++ struct btrfs_fs_info *fs_info = fs_devices->fs_info;
+ struct btrfs_device *device;
+ bool ret = false;
+
+- if (!btrfs_is_zoned(fs_devices->fs_info))
++ if (!btrfs_is_zoned(fs_info))
+ return true;
+
+ /* Non-single profiles are not supported yet */
+ ASSERT((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0);
+
+ /* Check if there is a device with active zones left */
+- mutex_lock(&fs_devices->device_list_mutex);
+- list_for_each_entry(device, &fs_devices->devices, dev_list) {
++ mutex_lock(&fs_info->chunk_mutex);
++ list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) {
+ struct btrfs_zoned_device_info *zinfo = device->zone_info;
+
+ if (!device->bdev)
+@@ -1950,7 +1951,7 @@ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
+ break;
+ }
+ }
+- mutex_unlock(&fs_devices->device_list_mutex);
++ mutex_unlock(&fs_info->chunk_mutex);
+
+ return ret;
+ }
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index 133dbd9338e73..d91fa53e12b33 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -478,8 +478,11 @@ more:
+ 2 : (fpos_off(rde->offset) + 1);
+ err = note_last_dentry(dfi, rde->name, rde->name_len,
+ next_offset);
+- if (err)
++ if (err) {
++ ceph_mdsc_put_request(dfi->last_readdir);
++ dfi->last_readdir = NULL;
+ return err;
++ }
+ } else if (req->r_reply_info.dir_end) {
+ dfi->next_offset = 2;
+ /* keep last name */
+@@ -520,6 +523,12 @@ more:
+ if (!dir_emit(ctx, rde->name, rde->name_len,
+ ceph_present_ino(inode->i_sb, le64_to_cpu(rde->inode.in->ino)),
+ le32_to_cpu(rde->inode.in->mode) >> 12)) {
++ /*
++ * NOTE: No need to put 'dfi->last_readdir' here, because when
++ * dir_emit() stops us it most likely just ran out of buffer
++ * space, so the next readdir call will continue from where we
++ * left off.
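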
++ */
+ dout("filldir stopping us...\n");
+ return 0;
+ }
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index ef4a980a7bf37..c092dce0485c7 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -87,13 +87,13 @@ struct inode *ceph_get_snapdir(struct inode *parent)
+ if (!S_ISDIR(parent->i_mode)) {
+ pr_warn_once("bad snapdir parent type (mode=0%o)\n",
+ parent->i_mode);
+- return ERR_PTR(-ENOTDIR);
++ goto err;
+ }
+
+ if (!(inode->i_state & I_NEW) && !S_ISDIR(inode->i_mode)) {
+ pr_warn_once("bad snapdir inode type (mode=0%o)\n",
+ inode->i_mode);
+- return ERR_PTR(-ENOTDIR);
++ goto err;
+ }
+
+ inode->i_mode = parent->i_mode;
+@@ -113,6 +113,12 @@ struct inode *ceph_get_snapdir(struct inode *parent)
+ }
+
+ return inode;
++err:
++ if ((inode->i_state & I_NEW))
++ discard_new_inode(inode);
++ else
++ iput(inode);
++ return ERR_PTR(-ENOTDIR);
+ }
+
+ const struct inode_operations ceph_file_iops = {
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index d6f8ccc7bfe2f..c3be6a541c8fc 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -453,9 +453,7 @@ static int reconnect_target_unlocked(struct TCP_Server_Info *server, struct dfs_
+ return rc;
+ }
+
+-static int
+-reconnect_dfs_server(struct TCP_Server_Info *server,
+- bool mark_smb_session)
++static int reconnect_dfs_server(struct TCP_Server_Info *server)
+ {
+ int rc = 0;
+ const char *refpath = server->current_fullpath + 1;
+@@ -479,7 +477,12 @@ reconnect_dfs_server(struct TCP_Server_Info *server,
+ if (!cifs_tcp_ses_needs_reconnect(server, num_targets))
+ return 0;
+
+- cifs_mark_tcp_ses_conns_for_reconnect(server, mark_smb_session);
++ /*
++ * Unconditionally mark all sessions & tcons for reconnect as we might be connecting to a
++ * different server or share during failover. It could be improved by adding some logic to
++ * only do that in case it connects to a different server or share, though.
++ */
++ cifs_mark_tcp_ses_conns_for_reconnect(server, true);
+
+ cifs_abort_connection(server);
+
+@@ -537,7 +540,7 @@ int cifs_reconnect(struct TCP_Server_Info *server, bool mark_smb_session)
+ }
+ spin_unlock(&cifs_tcp_ses_lock);
+
+- return reconnect_dfs_server(server, mark_smb_session);
++ return reconnect_dfs_server(server);
+ }
+ #else
+ int cifs_reconnect(struct TCP_Server_Info *server, bool mark_smb_session)
+@@ -4465,7 +4468,7 @@ static int tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *tco
+ */
+ if (rc && server->current_fullpath != server->origin_fullpath) {
+ server->current_fullpath = server->origin_fullpath;
+- cifs_reconnect(tcon->ses->server, true);
++ cifs_signal_cifsd_for_reconnect(server, true);
+ }
+
+ dfs_cache_free_tgts(tl);
+diff --git a/fs/cifs/netmisc.c b/fs/cifs/netmisc.c
+index ebe236b9d9f56..235aa1b395ebc 100644
+--- a/fs/cifs/netmisc.c
++++ b/fs/cifs/netmisc.c
+@@ -896,7 +896,7 @@ map_and_check_smb_error(struct mid_q_entry *mid, bool logErr)
+ if (class == ERRSRV && code == ERRbaduid) {
+ cifs_dbg(FYI, "Server returned 0x%x, reconnecting session...\n",
+ code);
+- cifs_reconnect(mid->server, false);
++ cifs_signal_cifsd_for_reconnect(mid->server, false);
+ }
+ }
+
+diff --git a/fs/file_table.c b/fs/file_table.c
+index 7d2e692b66a94..ada8fe814db97 100644
+--- a/fs/file_table.c
++++ b/fs/file_table.c
+@@ -412,6 +412,7 @@ void __fput_sync(struct file *file)
+ }
+
+ EXPORT_SYMBOL(fput);
++EXPORT_SYMBOL(__fput_sync);
+
+ void __init files_init(void)
+ {
+diff --git a/fs/io-wq.h b/fs/io-wq.h
+index dbecd27656c7c..04d374e65e546 100644
+--- a/fs/io-wq.h
++++ b/fs/io-wq.h
+@@ -155,6 +155,7 @@ struct io_wq_work_node *wq_stack_extract(struct io_wq_work_node *stack)
+ struct io_wq_work {
+ struct io_wq_work_node list;
+ unsigned flags;
++ int fd;
+ };
+
+ static inline struct io_wq_work *wq_next_work(struct io_wq_work *work)
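The fd field added to struct io_wq_work above carries the descriptor number so that, as the io_uring.c changes below show, file lookup can be deferred from submission to issue time. A simplified two-phase sketch with stand-in types (FILE * in place of struct file; not the real io_uring structures):

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct request {
	int fd;			/* recorded at submission, like work.fd */
	FILE *file;		/* resolved lazily when the request runs */
};

static FILE *lookup_file(int fd)
{
	/* stand-in for io_file_get_normal()/io_file_get_fixed() */
	return fd == 0 ? stdin : NULL;
}

static int assign_file(struct request *req)
{
	if (req->file)
		return 0;		/* already resolved */
	req->file = lookup_file(req->fd);
	return req->file ? 0 : -EBADF;
}

static int issue(struct request *req)
{
	int ret = assign_file(req);

	if (ret)
		return ret;		/* fail the request, never run on NULL */
	/* ... perform the actual IO on req->file ... */
	return 0;
}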
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 5e6788ab188fa..1a9630dc5361b 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -112,8 +112,7 @@
+ IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)
+
+ #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
+- REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
+- REQ_F_ASYNC_DATA)
++ REQ_F_POLLED | REQ_F_CREDS | REQ_F_ASYNC_DATA)
+
+ #define IO_TCTX_REFS_CACHE_NR (1U << 10)
+
+@@ -469,7 +468,6 @@ struct io_uring_task {
+ const struct io_ring_ctx *last;
+ struct io_wq *io_wq;
+ struct percpu_counter inflight;
+- atomic_t inflight_tracked;
+ atomic_t in_idle;
+
+ spinlock_t task_lock;
+@@ -560,7 +558,8 @@ struct io_rw {
+ /* NOTE: kiocb has the file as the first member, so don't do it here */
+ struct kiocb kiocb;
+ u64 addr;
+- u64 len;
++ u32 len;
++ u32 flags;
+ };
+
+ struct io_connect {
+@@ -621,10 +620,10 @@ struct io_epoll {
+
+ struct io_splice {
+ struct file *file_out;
+- struct file *file_in;
+ loff_t off_out;
+ loff_t off_in;
+ u64 len;
++ int splice_fd_in;
+ unsigned int flags;
+ };
+
+@@ -1127,8 +1126,11 @@ static int __io_register_rsrc_update(struct io_ring_ctx *ctx, unsigned type,
+ struct io_uring_rsrc_update2 *up,
+ unsigned nr_args);
+ static void io_clean_op(struct io_kiocb *req);
+-static struct file *io_file_get(struct io_ring_ctx *ctx,
+- struct io_kiocb *req, int fd, bool fixed);
++static inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
++ unsigned issue_flags);
++static inline struct file *io_file_get_normal(struct io_kiocb *req, int fd);
++static void io_drop_inflight_file(struct io_kiocb *req);
++static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags);
+ static void __io_queue_sqe(struct io_kiocb *req);
+ static void io_rsrc_put_work(struct work_struct *work);
+
+@@ -1257,13 +1259,20 @@ static void io_rsrc_refs_refill(struct io_ring_ctx *ctx)
+ }
+
+ static inline void io_req_set_rsrc_node(struct io_kiocb *req,
+- struct io_ring_ctx *ctx)
++ struct io_ring_ctx *ctx,
++ unsigned int issue_flags)
+ {
+ if (!req->fixed_rsrc_refs) {
+ req->fixed_rsrc_refs = &ctx->rsrc_node->refs;
+- ctx->rsrc_cached_refs--;
+- if (unlikely(ctx->rsrc_cached_refs < 0))
+- io_rsrc_refs_refill(ctx);
++
++ if (!(issue_flags & IO_URING_F_UNLOCKED)) {
++ lockdep_assert_held(&ctx->uring_lock);
++ ctx->rsrc_cached_refs--;
++ if (unlikely(ctx->rsrc_cached_refs < 0))
++ io_rsrc_refs_refill(ctx);
++ } else {
++ percpu_ref_get(req->fixed_rsrc_refs);
++ }
+ }
+ }
+
+@@ -1303,29 +1312,9 @@ static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
+ bool cancel_all)
+ __must_hold(&req->ctx->timeout_lock)
+ {
+- struct io_kiocb *req;
+-
+ if (task && head->task != task)
+ return false;
+- if (cancel_all)
+- return true;
+-
+- io_for_each_link(req, head) {
+- if (req->flags & REQ_F_INFLIGHT)
+- return true;
+- }
+- return false;
+-}
+-
+-static bool io_match_linked(struct io_kiocb *head)
+-{
+- struct io_kiocb *req;
+-
+- io_for_each_link(req, head) {
+- if (req->flags & REQ_F_INFLIGHT)
+- return true;
+- }
+- return false;
++ return cancel_all;
+ }
+
+ /*
+@@ -1335,24 +1324,9 @@ static bool io_match_linked(struct io_kiocb *head)
+ static bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
+ bool cancel_all)
+ {
+- bool matched;
+-
+ if (task && head->task != task)
+ return false;
+- if (cancel_all)
+- return true;
+-
+- if (head->flags & REQ_F_LINK_TIMEOUT) {
+- struct io_ring_ctx *ctx = head->ctx;
+-
+- /* protect against races with linked timeouts */
+- spin_lock_irq(&ctx->timeout_lock);
+- matched = io_match_linked(head);
+- spin_unlock_irq(&ctx->timeout_lock);
+- } else {
+- matched = io_match_linked(head);
+- }
+- return matched;
++ return cancel_all;
+ }
+
+ static inline bool req_has_async_data(struct io_kiocb *req)
+@@ -1500,14 +1474,6 @@ static inline bool io_req_ffs_set(struct io_kiocb *req)
+ return req->flags & REQ_F_FIXED_FILE;
+ }
+
+-static inline void io_req_track_inflight(struct io_kiocb *req)
+-{
+- if (!(req->flags & REQ_F_INFLIGHT)) {
+- req->flags |= REQ_F_INFLIGHT;
+- atomic_inc(&current->io_uring->inflight_tracked);
+- }
+-}
+-
+ static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
+ {
+ if (WARN_ON_ONCE(!req->link))
+@@ -1551,14 +1517,6 @@ static void io_prep_async_work(struct io_kiocb *req)
+ if (def->unbound_nonreg_file)
+ req->work.flags |= IO_WQ_WORK_UNBOUND;
+ }
+-
+- switch (req->opcode) {
+- case IORING_OP_SPLICE:
+- case IORING_OP_TEE:
+- if (!S_ISREG(file_inode(req->splice.file_in)->i_mode))
+- req->work.flags |= IO_WQ_WORK_UNBOUND;
+- break;
+- }
+ }
+
+ static void io_prep_async_link(struct io_kiocb *req)
+@@ -1652,12 +1610,11 @@ static __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
+ __must_hold(&ctx->completion_lock)
+ {
+ u32 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
++ struct io_kiocb *req, *tmp;
+
+ spin_lock_irq(&ctx->timeout_lock);
+- while (!list_empty(&ctx->timeout_list)) {
++ list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
+ u32 events_needed, events_got;
+- struct io_kiocb *req = list_first_entry(&ctx->timeout_list,
+- struct io_kiocb, timeout.list);
+
+ if (io_is_timeout_noseq(req))
+ break;
+@@ -1674,7 +1631,6 @@ static __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
+ if (events_got < events_needed)
+ break;
+
+- list_del_init(&req->timeout.list);
+ io_kill_timeout(req, 0);
+ }
+ ctx->cq_last_tm_flush = seq;
+@@ -2381,6 +2337,8 @@ static void io_req_task_work_add(struct io_kiocb *req, bool priority)
+
+ WARN_ON_ONCE(!tctx);
+
++ io_drop_inflight_file(req);
++
+ spin_lock_irqsave(&tctx->task_lock, flags);
+ if (priority)
+ wq_list_add_tail(&req->io_task_work.node, &tctx->prior_task_list);
+@@ -2994,50 +2952,11 @@ static inline bool io_file_supports_nowait(struct io_kiocb *req)
+
+ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+- struct io_ring_ctx *ctx = req->ctx;
+ struct kiocb *kiocb = &req->rw.kiocb;
+- struct file *file = req->file;
+ unsigned ioprio;
+ int ret;
+
+- if (!io_req_ffs_set(req))
+- req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT;
+-
+ kiocb->ki_pos = READ_ONCE(sqe->off);
+- if (kiocb->ki_pos == -1) {
+- if (!(file->f_mode & FMODE_STREAM)) {
+- req->flags |= REQ_F_CUR_POS;
+- kiocb->ki_pos = file->f_pos;
+- } else {
+- kiocb->ki_pos = 0;
+- }
+- }
+- kiocb->ki_flags = iocb_flags(file);
+- ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
+- if (unlikely(ret))
+- return ret;
+-
+- /*
+- * If the file is marked O_NONBLOCK, still allow retry for it if it
+- * supports async. Otherwise it's impossible to use O_NONBLOCK files
+- * reliably. If not, or it IOCB_NOWAIT is set, don't retry.
+- */
+- if ((kiocb->ki_flags & IOCB_NOWAIT) ||
+- ((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req)))
+- req->flags |= REQ_F_NOWAIT;
+-
+- if (ctx->flags & IORING_SETUP_IOPOLL) {
+- if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
+- return -EOPNOTSUPP;
+-
+- kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
+- kiocb->ki_complete = io_complete_rw_iopoll;
+- req->iopoll_completed = 0;
+- } else {
+- if (kiocb->ki_flags & IOCB_HIPRI)
+- return -EINVAL;
+- kiocb->ki_complete = io_complete_rw;
+- }
+
+ ioprio = READ_ONCE(sqe->ioprio);
+ if (ioprio) {
+@@ -3053,6 +2972,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ req->imu = NULL;
+ req->rw.addr = READ_ONCE(sqe->addr);
+ req->rw.len = READ_ONCE(sqe->len);
++ req->rw.flags = READ_ONCE(sqe->rw_flags);
+ req->buf_index = READ_ONCE(sqe->buf_index);
+ return 0;
+ }
+@@ -3169,7 +3089,8 @@ static int __io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter
+ return 0;
+ }
+
+-static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter)
++static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter,
++ unsigned int issue_flags)
+ {
+ struct io_mapped_ubuf *imu = req->imu;
+ u16 index, buf_index = req->buf_index;
+@@ -3179,7 +3100,7 @@ static int io_import_fixed(struct io_kiocb *req, int rw, struct iov_iter *iter)
+
+ if (unlikely(buf_index >= ctx->nr_user_bufs))
+ return -EFAULT;
+- io_req_set_rsrc_node(req, ctx);
++ io_req_set_rsrc_node(req, ctx, issue_flags);
+ index = array_index_nospec(buf_index, ctx->nr_user_bufs);
+ imu = READ_ONCE(ctx->user_bufs[index]);
+ req->imu = imu;
+@@ -3335,7 +3256,7 @@ static struct iovec *__io_import_iovec(int rw, struct io_kiocb *req,
+ ssize_t ret;
+
+ if (opcode == IORING_OP_READ_FIXED || opcode == IORING_OP_WRITE_FIXED) {
+- ret = io_import_fixed(req, rw, iter);
++ ret = io_import_fixed(req, rw, iter, issue_flags);
+ if (ret)
+ return ERR_PTR(ret);
+ return NULL;
+@@ -3533,13 +3454,6 @@ static inline int io_rw_prep_async(struct io_kiocb *req, int rw)
+ return 0;
+ }
+
+-static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(!(req->file->f_mode & FMODE_READ)))
+- return -EBADF;
+- return io_prep_rw(req, sqe);
+-}
+-
+ /*
+ * This is our waitqueue callback handler, registered through __folio_lock_async()
+ * when we initially tried to do the IO with the iocb armed our waitqueue.
+@@ -3627,6 +3541,58 @@ static bool need_read_all(struct io_kiocb *req)
+ S_ISBLK(file_inode(req->file)->i_mode);
+ }
+
++static int io_rw_init_file(struct io_kiocb *req, fmode_t mode)
++{
++ struct kiocb *kiocb = &req->rw.kiocb;
++ struct io_ring_ctx *ctx = req->ctx;
++ struct file *file = req->file;
++ int ret;
++
++ if (unlikely(!file || !(file->f_mode & mode)))
++ return -EBADF;
++
++ if (!io_req_ffs_set(req))
++ req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT;
++
++ if (kiocb->ki_pos == -1) {
++ if (!(file->f_mode & FMODE_STREAM)) {
++ req->flags |= REQ_F_CUR_POS;
++ kiocb->ki_pos = file->f_pos;
++ } else {
++ kiocb->ki_pos = 0;
++ }
++ }
++
++ kiocb->ki_flags = iocb_flags(file);
++ ret = kiocb_set_rw_flags(kiocb, req->rw.flags);
++ if (unlikely(ret))
++ return ret;
++
++ /*
++ * If the file is marked O_NONBLOCK, still allow retry for it if it
++ * supports async. Otherwise it's impossible to use O_NONBLOCK files
++ * reliably. If not, or if IOCB_NOWAIT is set, don't retry.
++ */
++ if ((kiocb->ki_flags & IOCB_NOWAIT) ||
++ ((file->f_flags & O_NONBLOCK) && !io_file_supports_nowait(req)))
++ req->flags |= REQ_F_NOWAIT;
++
++ if (ctx->flags & IORING_SETUP_IOPOLL) {
++ if (!(kiocb->ki_flags & IOCB_DIRECT) || !file->f_op->iopoll)
++ return -EOPNOTSUPP;
++
++ kiocb->ki_flags |= IOCB_HIPRI | IOCB_ALLOC_CACHE;
++ kiocb->ki_complete = io_complete_rw_iopoll;
++ req->iopoll_completed = 0;
++ } else {
++ if (kiocb->ki_flags & IOCB_HIPRI)
++ return -EINVAL;
++ kiocb->ki_complete = io_complete_rw;
++ }
++
++ return 0;
++}
++
+ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ struct io_rw_state __s, *s = &__s;
+@@ -3651,6 +3617,9 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
+ iov_iter_restore(&s->iter, &s->iter_state);
+ iovec = NULL;
+ }
++ ret = io_rw_init_file(req, FMODE_READ);
++ if (unlikely(ret))
++ return ret;
+ req->result = iov_iter_count(&s->iter);
+
+ if (force_nonblock) {
+@@ -3749,14 +3718,6 @@ out_free:
+ return 0;
+ }
+
+-static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+-{
+- if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
+- return -EBADF;
+- req->rw.kiocb.ki_hint = ki_hint_validate(file_write_hint(req->file));
+- return io_prep_rw(req, sqe);
+-}
+-
+ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ struct io_rw_state __s, *s = &__s;
+@@ -3776,6 +3737,9 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
+ iov_iter_restore(&s->iter, &s->iter_state);
+ iovec = NULL;
+ }
++ ret = io_rw_init_file(req, FMODE_WRITE);
++ if (unlikely(ret))
++ return ret;
+ req->result = iov_iter_count(&s->iter);
+
+ if (force_nonblock) {
+@@ -4144,18 +4108,11 @@ static int __io_splice_prep(struct io_kiocb *req,
+ if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ return -EINVAL;
+
+- sp->file_in = NULL;
+ sp->len = READ_ONCE(sqe->len);
+ sp->flags = READ_ONCE(sqe->splice_flags);
+-
+ if (unlikely(sp->flags & ~valid_flags))
+ return -EINVAL;
+-
+- sp->file_in = io_file_get(req->ctx, req, READ_ONCE(sqe->splice_fd_in),
+- (sp->flags & SPLICE_F_FD_IN_FIXED));
+- if (!sp->file_in)
+- return -EBADF;
+- req->flags |= REQ_F_NEED_CLEANUP;
++ sp->splice_fd_in = READ_ONCE(sqe->splice_fd_in);
+ return 0;
+ }
+
+@@ -4170,20 +4127,29 @@ static int io_tee_prep(struct io_kiocb *req,
+ static int io_tee(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ struct io_splice *sp = &req->splice;
+- struct file *in = sp->file_in;
+ struct file *out = sp->file_out;
+ unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
++ struct file *in;
+ long ret = 0;
+
+ if (issue_flags & IO_URING_F_NONBLOCK)
+ return -EAGAIN;
++
++ if (sp->flags & SPLICE_F_FD_IN_FIXED)
++ in = io_file_get_fixed(req, sp->splice_fd_in, IO_URING_F_UNLOCKED);
++ else
++ in = io_file_get_normal(req, sp->splice_fd_in);
++ if (!in) {
++ ret = -EBADF;
++ goto done;
++ }
++
+ if (sp->len)
+ ret = do_tee(in, out, sp->len, flags);
+
+ if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
+ io_put_file(in);
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+-
++done:
+ if (ret != sp->len)
+ req_set_fail(req);
+ io_req_complete(req, ret);
+@@ -4202,15 +4168,24 @@ static int io_splice_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ struct io_splice *sp = &req->splice;
+- struct file *in = sp->file_in;
+ struct file *out = sp->file_out;
+ unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
+ loff_t *poff_in, *poff_out;
++ struct file *in;
+ long ret = 0;
+
+ if (issue_flags & IO_URING_F_NONBLOCK)
+ return -EAGAIN;
+
++ if (sp->flags & SPLICE_F_FD_IN_FIXED)
++ in = io_file_get_fixed(req, sp->splice_fd_in, IO_URING_F_UNLOCKED);
++ else
++ in = io_file_get_normal(req, sp->splice_fd_in);
++ if (!in) {
++ ret = -EBADF;
++ goto done;
++ }
++
+ poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
+ poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
+
+@@ -4219,8 +4194,7 @@ static int io_splice(struct io_kiocb *req, unsigned int issue_flags)
+
+ if (!(sp->flags & SPLICE_F_FD_IN_FIXED))
+ io_put_file(in);
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+-
++done:
+ if (ret != sp->len)
+ req_set_fail(req);
+ io_req_complete(req, ret);
+@@ -4245,9 +4219,6 @@ static int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+
+- if (!req->file)
+- return -EBADF;
+-
+ if (unlikely(ctx->flags & IORING_SETUP_IOPOLL))
+ return -EINVAL;
+ if (unlikely(sqe->addr || sqe->ioprio || sqe->buf_index ||
+@@ -5514,7 +5485,7 @@ static void io_poll_remove_entries(struct io_kiocb *req)
+ * either spurious wakeup or multishot CQE is served. 0 when it's done with
+ * the request, then the mask is stored in req->result.
+ */
+-static int io_poll_check_events(struct io_kiocb *req)
++static int io_poll_check_events(struct io_kiocb *req, bool locked)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_poll_iocb *poll = io_poll_get_single(req);
+@@ -5536,7 +5507,10 @@ static int io_poll_check_events(struct io_kiocb *req)
+ if (!req->result) {
+ struct poll_table_struct pt = { ._key = poll->events };
+
+- req->result = vfs_poll(req->file, &pt) & poll->events;
++ if (unlikely(!io_assign_file(req, IO_URING_F_UNLOCKED)))
++ req->result = -EBADF;
++ else
++ req->result = vfs_poll(req->file, &pt) & poll->events;
+ }
+
+ /* multishot, just fill an CQE and proceed */
+@@ -5570,7 +5544,7 @@ static void io_poll_task_func(struct io_kiocb *req, bool *locked)
+ struct io_ring_ctx *ctx = req->ctx;
+ int ret;
+
+- ret = io_poll_check_events(req);
++ ret = io_poll_check_events(req, *locked);
+ if (ret > 0)
+ return;
+
+@@ -5595,7 +5569,7 @@ static void io_apoll_task_func(struct io_kiocb *req, bool *locked)
+ struct io_ring_ctx *ctx = req->ctx;
+ int ret;
+
+- ret = io_poll_check_events(req);
++ ret = io_poll_check_events(req, *locked);
+ if (ret > 0)
+ return;
+
+@@ -6281,6 +6255,7 @@ static int io_timeout_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ if (data->ts.tv_sec < 0 || data->ts.tv_nsec < 0)
+ return -EINVAL;
+
++ INIT_LIST_HEAD(&req->timeout.list);
+ data->mode = io_translate_timeout_mode(flags);
+ hrtimer_init(&data->timer, io_timeout_get_clock(data), data->mode);
+
+@@ -6507,11 +6482,10 @@ static int io_req_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ case IORING_OP_READV:
+ case IORING_OP_READ_FIXED:
+ case IORING_OP_READ:
+- return io_read_prep(req, sqe);
+ case IORING_OP_WRITEV:
+ case IORING_OP_WRITE_FIXED:
+ case IORING_OP_WRITE:
+- return io_write_prep(req, sqe);
++ return io_prep_rw(req, sqe);
+ case IORING_OP_POLL_ADD:
+ return io_poll_add_prep(req, sqe);
+ case IORING_OP_POLL_REMOVE:
+@@ -6689,11 +6663,6 @@ static void io_clean_op(struct io_kiocb *req)
+ kfree(io->free_iov);
+ break;
+ }
+- case IORING_OP_SPLICE:
+- case IORING_OP_TEE:
+- if (!(req->splice.flags & SPLICE_F_FD_IN_FIXED))
+- io_put_file(req->splice.file_in);
+- break;
+ case IORING_OP_OPENAT:
+ case IORING_OP_OPENAT2:
+ if (req->open.filename)
+@@ -6724,11 +6693,6 @@ static void io_clean_op(struct io_kiocb *req)
+ kfree(req->apoll);
+ req->apoll = NULL;
+ }
+- if (req->flags & REQ_F_INFLIGHT) {
+- struct io_uring_task *tctx = req->task->io_uring;
+-
+- atomic_dec(&tctx->inflight_tracked);
+- }
+ if (req->flags & REQ_F_CREDS)
+ put_cred(req->creds);
+ if (req->flags & REQ_F_ASYNC_DATA) {
+@@ -6738,6 +6702,23 @@ static void io_clean_op(struct io_kiocb *req)
+ req->flags &= ~IO_REQ_CLEAN_FLAGS;
+ }
+
++static bool io_assign_file(struct io_kiocb *req, unsigned int issue_flags)
++{
++ if (req->file || !io_op_defs[req->opcode].needs_file)
++ return true;
++
++ if (req->flags & REQ_F_FIXED_FILE)
++ req->file = io_file_get_fixed(req, req->work.fd, issue_flags);
++ else
++ req->file = io_file_get_normal(req, req->work.fd);
++ if (req->file)
++ return true;
++
++ req_set_fail(req);
++ req->result = -EBADF;
++ return false;
++}
++
+ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+ {
+ const struct cred *creds = NULL;
+@@ -6748,6 +6729,8 @@ static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
+
+ if (!io_op_defs[req->opcode].audit_skip)
+ audit_uring_entry(req->opcode);
++ if (unlikely(!io_assign_file(req, issue_flags)))
++ return -EBADF;
+
+ switch (req->opcode) {
+ case IORING_OP_NOP:
+@@ -6889,10 +6872,11 @@ static struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
+ static void io_wq_submit_work(struct io_wq_work *work)
+ {
+ struct io_kiocb *req = container_of(work, struct io_kiocb, work);
++ const struct io_op_def *def = &io_op_defs[req->opcode];
+ unsigned int issue_flags = IO_URING_F_UNLOCKED;
+ bool needs_poll = false;
+ struct io_kiocb *timeout;
+- int ret = 0;
++ int ret = 0, err = -ECANCELED;
+
+ /* one will be dropped by ->io_free_work() after returning to io-wq */
+ if (!(req->flags & REQ_F_REFCOUNT))
+@@ -6904,14 +6888,18 @@ static void io_wq_submit_work(struct io_wq_work *work)
+ if (timeout)
+ io_queue_linked_timeout(timeout);
+
++ if (!io_assign_file(req, issue_flags)) {
++ err = -EBADF;
++ work->flags |= IO_WQ_WORK_CANCEL;
++ }
++
+ /* either cancelled or io-wq is dying, so don't touch tctx->iowq */
+ if (work->flags & IO_WQ_WORK_CANCEL) {
+- io_req_task_queue_fail(req, -ECANCELED);
++ io_req_task_queue_fail(req, err);
+ return;
+ }
+
+ if (req->flags & REQ_F_FORCE_ASYNC) {
+- const struct io_op_def *def = &io_op_defs[req->opcode];
+ bool opcode_poll = def->pollin || def->pollout;
+
+ if (opcode_poll && file_can_poll(req->file)) {
+@@ -6968,46 +6956,56 @@ static void io_fixed_file_set(struct io_fixed_file *file_slot, struct file *file
+ file_slot->file_ptr = file_ptr;
+ }
+
+-static inline struct file *io_file_get_fixed(struct io_ring_ctx *ctx,
+- struct io_kiocb *req, int fd)
++static inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
++ unsigned int issue_flags)
+ {
+- struct file *file;
++ struct io_ring_ctx *ctx = req->ctx;
++ struct file *file = NULL;
+ unsigned long file_ptr;
+
++ if (issue_flags & IO_URING_F_UNLOCKED)
++ mutex_lock(&ctx->uring_lock);
++
+ if (unlikely((unsigned int)fd >= ctx->nr_user_files))
+- return NULL;
++ goto out;
+ fd = array_index_nospec(fd, ctx->nr_user_files);
+ file_ptr = io_fixed_file_slot(&ctx->file_table, fd)->file_ptr;
+ file = (struct file *) (file_ptr & FFS_MASK);
+ file_ptr &= ~FFS_MASK;
+ /* mask in overlapping REQ_F and FFS bits */
+ req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT);
+- io_req_set_rsrc_node(req, ctx);
++ io_req_set_rsrc_node(req, ctx, 0);
++out:
++ if (issue_flags & IO_URING_F_UNLOCKED)
++ mutex_unlock(&ctx->uring_lock);
+ return file;
+ }
+
+-static struct file *io_file_get_normal(struct io_ring_ctx *ctx,
+- struct io_kiocb *req, int fd)
++/*
++ * Drop the file for requeue operations. Only used if req->file is the
++ * io_uring descriptor itself.
++ */
++static void io_drop_inflight_file(struct io_kiocb *req)
++{
++ if (unlikely(req->flags & REQ_F_INFLIGHT)) {
++ fput(req->file);
++ req->file = NULL;
++ req->flags &= ~REQ_F_INFLIGHT;
++ }
++}
++
++static struct file *io_file_get_normal(struct io_kiocb *req, int fd)
+ {
+ struct file *file = fget(fd);
+
+- trace_io_uring_file_get(ctx, fd);
++ trace_io_uring_file_get(req->ctx, fd);
+
+ /* we don't allow fixed io_uring files */
+- if (file && unlikely(file->f_op == &io_uring_fops))
+- io_req_track_inflight(req);
++ if (file && file->f_op == &io_uring_fops)
++ req->flags |= REQ_F_INFLIGHT;
+ return file;
+ }
+
+-static inline struct file *io_file_get(struct io_ring_ctx *ctx,
+- struct io_kiocb *req, int fd, bool fixed)
+-{
+- if (fixed)
+- return io_file_get_fixed(ctx, req, fd);
+- else
+- return io_file_get_normal(ctx, req, fd);
+-}
+-
+ static void io_req_task_link_timeout(struct io_kiocb *req, bool *locked)
+ {
+ struct io_kiocb *prev = req->timeout.prev;
+@@ -7245,6 +7243,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ if (io_op_defs[opcode].needs_file) {
+ struct io_submit_state *state = &ctx->submit_state;
+
++ req->work.fd = READ_ONCE(sqe->fd);
++
+ /*
+ * Plug now if we have more than 2 IO left after this, and the
+ * target is potentially a read/write to block based storage.
+@@ -7254,11 +7254,6 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ state->need_plug = false;
+ blk_start_plug_nr_ios(&state->plug, state->submit_nr);
+ }
+-
+- req->file = io_file_get(ctx, req, READ_ONCE(sqe->fd),
+- (sqe_flags & IOSQE_FIXED_FILE));
+- if (unlikely(!req->file))
+- return -EBADF;
+ }
+
+ personality = READ_ONCE(sqe->personality);
+@@ -8237,8 +8232,12 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ refcount_add(skb->truesize, &sk->sk_wmem_alloc);
+ skb_queue_head(&sk->sk_receive_queue, skb);
+
+- for (i = 0; i < nr_files; i++)
+- fput(fpl->fp[i]);
++ for (i = 0; i < nr; i++) {
++ struct file *file = io_file_from_index(ctx, i + offset);
++
++ if (file)
++ fput(file);
++ }
+ } else {
+ kfree_skb(skb);
+ free_uid(fpl->user);
+@@ -8700,7 +8699,7 @@ static int __io_sqe_files_update(struct io_ring_ctx *ctx,
+ err = -EBADF;
+ break;
+ }
+- *io_get_tag_slot(data, up->offset + done) = tag;
++ *io_get_tag_slot(data, i) = tag;
+ io_fixed_file_set(file_slot, file);
+ err = io_sqe_file_register(ctx, file, i);
+ if (err) {
+@@ -8775,7 +8774,6 @@ static __cold int io_uring_alloc_task_context(struct task_struct *task,
+ xa_init(&tctx->xa);
+ init_waitqueue_head(&tctx->wait);
+ atomic_set(&tctx->in_idle, 0);
+- atomic_set(&tctx->inflight_tracked, 0);
+ task->io_uring = tctx;
+ spin_lock_init(&tctx->task_lock);
+ INIT_WQ_LIST(&tctx->task_list);
+@@ -9913,7 +9911,7 @@ static __cold void io_uring_clean_tctx(struct io_uring_task *tctx)
+ static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
+ {
+ if (tracked)
+- return atomic_read(&tctx->inflight_tracked);
++ return 0;
+ return percpu_counter_sum(&tctx->inflight);
+ }
+
+@@ -10866,7 +10864,15 @@ static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
+ if (len > cpumask_size())
+ len = cpumask_size();
+
+- if (copy_from_user(new_mask, arg, len)) {
++ if (in_compat_syscall()) {
++ ret = compat_get_bitmap(cpumask_bits(new_mask),
++ (const compat_ulong_t __user *)arg,
++ len * 8 /* CHAR_BIT */);
++ } else {
++ ret = copy_from_user(new_mask, arg, len);
++ }
++
++ if (ret) {
+ free_cpumask_var(new_mask);
+ return -EFAULT;
+ }
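The last hunk above makes IORING_REGISTER_IOWQ_AFF honor 32-bit callers: a compat task supplies the CPU mask as 32-bit words, which only matches a native unsigned-long bitmap layout by accident on little-endian 64-bit. A standalone sketch of the word-size repacking that compat_get_bitmap performs (assuming 64-bit destination words; not the kernel implementation):

#include <stddef.h>
#include <stdint.h>

/* Repack a bitmap supplied as 32-bit words into 64-bit words. */
static void bitmap_from_u32(uint64_t *dst, const uint32_t *src, size_t nbits)
{
	size_t i, words = (nbits + 31) / 32;

	for (i = 0; i < words; i++) {
		if ((i & 1) == 0)
			dst[i / 2] = src[i];
		else
			dst[i / 2] |= (uint64_t)src[i] << 32;
	}
}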
+diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
+index 57ab424c05ff0..072821b50ab91 100644
+--- a/fs/jfs/inode.c
++++ b/fs/jfs/inode.c
+@@ -146,12 +146,13 @@ void jfs_evict_inode(struct inode *inode)
+ dquot_initialize(inode);
+
+ if (JFS_IP(inode)->fileset == FILESYSTEM_I) {
++ struct inode *ipimap = JFS_SBI(inode->i_sb)->ipimap;
+ truncate_inode_pages_final(&inode->i_data);
+
+ if (test_cflag(COMMIT_Freewmap, inode))
+ jfs_free_zero_link(inode);
+
+- if (JFS_SBI(inode->i_sb)->ipimap)
++ if (ipimap && JFS_IP(ipimap)->i_imap)
+ diFree(inode);
+
+ /*
+diff --git a/fs/minix/inode.c b/fs/minix/inode.c
+index a71f1cf894b9f..d4bd94234ef73 100644
+--- a/fs/minix/inode.c
++++ b/fs/minix/inode.c
+@@ -447,7 +447,8 @@ static const struct address_space_operations minix_aops = {
+ .writepage = minix_writepage,
+ .write_begin = minix_write_begin,
+ .write_end = generic_write_end,
+- .bmap = minix_bmap
++ .bmap = minix_bmap,
++ .direct_IO = noop_direct_IO
+ };
+
+ static const struct inode_operations minix_symlink_inode_operations = {
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index 75cb1cbe4cdea..911bdb35eb085 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -1853,16 +1853,6 @@ const struct dentry_operations nfs4_dentry_operations = {
+ };
+ EXPORT_SYMBOL_GPL(nfs4_dentry_operations);
+
+-static fmode_t flags_to_mode(int flags)
+-{
+- fmode_t res = (__force fmode_t)flags & FMODE_EXEC;
+- if ((flags & O_ACCMODE) != O_WRONLY)
+- res |= FMODE_READ;
+- if ((flags & O_ACCMODE) != O_RDONLY)
+- res |= FMODE_WRITE;
+- return res;
+-}
+-
+ static struct nfs_open_context *create_nfs_open_context(struct dentry *dentry, int open_flags, struct file *filp)
+ {
+ return alloc_nfs_open_context(dentry, flags_to_mode(open_flags), filp);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index eabfdab543c8c..11c566d8769f6 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -173,8 +173,8 @@ ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ VM_BUG_ON(iov_iter_count(iter) != PAGE_SIZE);
+
+ if (iov_iter_rw(iter) == READ)
+- return nfs_file_direct_read(iocb, iter);
+- return nfs_file_direct_write(iocb, iter);
++ return nfs_file_direct_read(iocb, iter, true);
++ return nfs_file_direct_write(iocb, iter, true);
+ }
+
+ static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
+@@ -425,6 +425,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+ * nfs_file_direct_read - file direct read operation for NFS files
+ * @iocb: target I/O control block
+ * @iter: vector of user buffers into which to read data
++ * @swap: flag indicating this is swap IO, not O_DIRECT IO
+ *
+ * We use this function for direct reads instead of calling
+ * generic_file_aio_read() in order to avoid gfar's check to see if
+@@ -440,7 +441,8 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
+ * client must read the updated atime from the server back into its
+ * cache.
+ */
+-ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
++ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter,
++ bool swap)
+ {
+ struct file *file = iocb->ki_filp;
+ struct address_space *mapping = file->f_mapping;
+@@ -482,12 +484,14 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
+ if (iter_is_iovec(iter))
+ dreq->flags = NFS_ODIRECT_SHOULD_DIRTY;
+
+- nfs_start_io_direct(inode);
++ if (!swap)
++ nfs_start_io_direct(inode);
+
+ NFS_I(inode)->read_io += count;
+ requested = nfs_direct_read_schedule_iovec(dreq, iter, iocb->ki_pos);
+
+- nfs_end_io_direct(inode);
++ if (!swap)
++ nfs_end_io_direct(inode);
+
+ if (requested > 0) {
+ result = nfs_direct_wait(dreq);
+@@ -790,7 +794,7 @@ static const struct nfs_pgio_completion_ops nfs_direct_write_completion_ops = {
+ */
+ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ struct iov_iter *iter,
+- loff_t pos)
++ loff_t pos, int ioflags)
+ {
+ struct nfs_pageio_descriptor desc;
+ struct inode *inode = dreq->inode;
+@@ -798,7 +802,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ size_t requested_bytes = 0;
+ size_t wsize = max_t(size_t, NFS_SERVER(inode)->wsize, PAGE_SIZE);
+
+- nfs_pageio_init_write(&desc, inode, FLUSH_COND_STABLE, false,
++ nfs_pageio_init_write(&desc, inode, ioflags, false,
+ &nfs_direct_write_completion_ops);
+ desc.pg_dreq = dreq;
+ get_dreq(dreq);
+@@ -876,6 +880,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ * nfs_file_direct_write - file direct write operation for NFS files
+ * @iocb: target I/O control block
+ * @iter: vector of user buffers from which to write data
++ * @swap: flag indicating this is swap IO, not O_DIRECT IO
+ *
+ * We use this function for direct writes instead of calling
+ * generic_file_aio_write() in order to avoid taking the inode
+@@ -892,7 +897,8 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ * Note that O_APPEND is not supported for NFS direct writes, as there
+ * is no atomic O_APPEND write facility in the NFS protocol.
+ */
+-ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
++ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter,
++ bool swap)
+ {
+ ssize_t result, requested;
+ size_t count;
+@@ -906,7 +912,11 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
+ dfprintk(FILE, "NFS: direct write(%pD2, %zd@%Ld)\n",
+ file, iov_iter_count(iter), (long long) iocb->ki_pos);
+
+- result = generic_write_checks(iocb, iter);
++ if (swap)
++ /* bypass generic checks */
++ result = iov_iter_count(iter);
++ else
++ result = generic_write_checks(iocb, iter);
+ if (result <= 0)
+ return result;
+ count = result;
+@@ -937,16 +947,22 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
+ dreq->iocb = iocb;
+ pnfs_init_ds_commit_info_ops(&dreq->ds_cinfo, inode);
+
+- nfs_start_io_direct(inode);
++ if (swap) {
++ requested = nfs_direct_write_schedule_iovec(dreq, iter, pos,
++ FLUSH_STABLE);
++ } else {
++ nfs_start_io_direct(inode);
+
+- requested = nfs_direct_write_schedule_iovec(dreq, iter, pos);
++ requested = nfs_direct_write_schedule_iovec(dreq, iter, pos,
++ FLUSH_COND_STABLE);
+
+- if (mapping->nrpages) {
+- invalidate_inode_pages2_range(mapping,
+- pos >> PAGE_SHIFT, end);
+- }
++ if (mapping->nrpages) {
++ invalidate_inode_pages2_range(mapping,
++ pos >> PAGE_SHIFT, end);
++ }
+
+- nfs_end_io_direct(inode);
++ nfs_end_io_direct(inode);
++ }
+
+ if (requested > 0) {
+ result = nfs_direct_wait(dreq);
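The swap flag threaded through above lets the swap-out path skip generic_write_checks() and the inode IO lock, which could deadlock under reclaim, while forcing stable writes. A hedged userspace sketch of the flag-selected path (a pthread rwlock standing in for nfs_start_io_direct(); names illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

enum flush { FLUSH_COND_STABLE, FLUSH_STABLE };

static pthread_rwlock_t io_lock = PTHREAD_RWLOCK_INITIALIZER;

static long schedule_writes(const void *buf, size_t len, enum flush mode)
{
	(void)buf; (void)mode;
	return (long)len;		/* pretend everything was queued */
}

static long direct_write(const void *buf, size_t len, bool swap)
{
	long requested;

	if (swap)			/* no locking, force stable writes */
		return schedule_writes(buf, len, FLUSH_STABLE);

	pthread_rwlock_wrlock(&io_lock);
	requested = schedule_writes(buf, len, FLUSH_COND_STABLE);
	pthread_rwlock_unlock(&io_lock);
	return requested;
}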
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 76d76acbc5943..d8583f57ff99f 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -162,7 +162,7 @@ nfs_file_read(struct kiocb *iocb, struct iov_iter *to)
+ ssize_t result;
+
+ if (iocb->ki_flags & IOCB_DIRECT)
+- return nfs_file_direct_read(iocb, to);
++ return nfs_file_direct_read(iocb, to, false);
+
+ dprintk("NFS: read(%pD2, %zu@%lu)\n",
+ iocb->ki_filp,
+@@ -619,7 +619,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ return result;
+
+ if (iocb->ki_flags & IOCB_DIRECT)
+- return nfs_file_direct_write(iocb, from);
++ return nfs_file_direct_write(iocb, from, false);
+
+ dprintk("NFS: write(%pD2, %zu@%Ld)\n",
+ file, iov_iter_count(from), (long long) iocb->ki_pos);
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index d96baa4450e39..e4fb939a2904b 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1180,7 +1180,6 @@ int nfs_open(struct inode *inode, struct file *filp)
+ nfs_fscache_open_file(inode, filp);
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(nfs_open);
+
+ /*
+ * This function is called whenever some part of NFS notices that
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 2de7c56a1fbed..465e39ff018d4 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -42,6 +42,16 @@ static inline bool nfs_lookup_is_soft_revalidate(const struct dentry *dentry)
+ return true;
+ }
+
++static inline fmode_t flags_to_mode(int flags)
++{
++ fmode_t res = (__force fmode_t)flags & FMODE_EXEC;
++ if ((flags & O_ACCMODE) != O_WRONLY)
++ res |= FMODE_READ;
++ if ((flags & O_ACCMODE) != O_RDONLY)
++ res |= FMODE_WRITE;
++ return res;
++}
++
+ /*
+ * Note: RFC 1813 doesn't limit the number of auth flavors that
+ * a server can return, so make something up.
+@@ -573,6 +583,13 @@ nfs_write_match_verf(const struct nfs_writeverf *verf,
+ !nfs_write_verifier_cmp(&req->wb_verf, &verf->verifier);
+ }
+
++static inline gfp_t nfs_io_gfp_mask(void)
++{
++ if (current->flags & PF_WQ_WORKER)
++ return GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
++ return GFP_KERNEL;
++}
++
+ /* unlink.c */
+ extern struct rpc_task *
+ nfs_async_rename(struct inode *old_dir, struct inode *new_dir,
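nfs_io_gfp_mask() above encodes a classic writeback rule: an allocation made from the worker that services IO must not sleep waiting for reclaim, because reclaim may be waiting on that very worker. A small sketch of the same context-dependent policy (illustrative mode names, not real GFP values):

#include <stdbool.h>

enum gfp_mode {
	GFP_MAY_BLOCK,		/* GFP_KERNEL analogue: can sleep and retry */
	GFP_NO_RETRY,		/* __GFP_NORETRY | __GFP_NOWARN analogue */
};

struct task { bool wq_worker; };

static enum gfp_mode io_gfp_mode(const struct task *current_task)
{
	/* IO workers must not recurse into blocking reclaim. */
	return current_task->wq_worker ? GFP_NO_RETRY : GFP_MAY_BLOCK;
}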
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 32129446beca6..ca878d021faba 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -591,8 +591,10 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+
+ ctx = get_nfs_open_context(nfs_file_open_context(src));
+ l_ctx = nfs_get_lock_context(ctx);
+- if (IS_ERR(l_ctx))
+- return PTR_ERR(l_ctx);
++ if (IS_ERR(l_ctx)) {
++ status = PTR_ERR(l_ctx);
++ goto out;
++ }
+
+ status = nfs4_set_rw_stateid(&args->cna_src_stateid, ctx, l_ctx,
+ FMODE_READ);
+@@ -600,7 +602,7 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ if (status) {
+ if (status == -EAGAIN)
+ status = -NFS4ERR_BAD_STATEID;
+- return status;
++ goto out;
+ }
+
+ status = nfs4_call_sync(src_server->client, src_server, &msg,
+@@ -609,6 +611,7 @@ static int _nfs42_proc_copy_notify(struct file *src, struct file *dst,
+ if (status == -ENOTSUPP)
+ src_server->caps &= ~NFS_CAP_COPY_NOTIFY;
+
++out:
+ put_nfs_open_context(nfs_file_open_context(src));
+ return status;
+ }
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index e79ae4cbc395e..e34af48fb4f41 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -32,6 +32,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ struct dentry *parent = NULL;
+ struct inode *dir;
+ unsigned openflags = filp->f_flags;
++ fmode_t f_mode;
+ struct iattr attr;
+ int err;
+
+@@ -50,8 +51,9 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ if (err)
+ return err;
+
++ f_mode = filp->f_mode;
+ if ((openflags & O_ACCMODE) == 3)
+- return nfs_open(inode, filp);
++ f_mode |= flags_to_mode(openflags);
+
+ /* We can't create new files here */
+ openflags &= ~(O_CREAT|O_EXCL);
+@@ -59,7 +61,7 @@ nfs4_file_open(struct inode *inode, struct file *filp)
+ parent = dget_parent(dentry);
+ dir = d_inode(parent);
+
+- ctx = alloc_nfs_open_context(file_dentry(filp), filp->f_mode, filp);
++ ctx = alloc_nfs_open_context(file_dentry(filp), f_mode, filp);
+ err = PTR_ERR(ctx);
+ if (IS_ERR(ctx))
+ goto out;
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index f5a62c0d999b4..0f4818627ef0c 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -49,6 +49,7 @@
+ #include <linux/workqueue.h>
+ #include <linux/bitops.h>
+ #include <linux/jiffies.h>
++#include <linux/sched/mm.h>
+
+ #include <linux/sunrpc/clnt.h>
+
+@@ -2560,9 +2561,17 @@ static void nfs4_layoutreturn_any_run(struct nfs_client *clp)
+
+ static void nfs4_state_manager(struct nfs_client *clp)
+ {
++ unsigned int memflags;
+ int status = 0;
+ const char *section = "", *section_sep = "";
+
++ /*
++ * State recovery can deadlock if the direct reclaim code tries to
++ * start NFS writeback. So ensure memory allocations are all
++ * GFP_NOFS.
++ */
++ memflags = memalloc_nofs_save();
++
+ /* Ensure exclusive access to NFSv4 state */
+ do {
+ trace_nfs4_state_mgr(clp);
+@@ -2657,6 +2666,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state);
+ }
+
++ memalloc_nofs_restore(memflags);
+ nfs4_end_drain_session(clp);
+ nfs4_clear_state_manager_bit(clp);
+
+@@ -2674,6 +2684,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
+ return;
+ if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
+ return;
++ memflags = memalloc_nofs_save();
+ } while (refcount_read(&clp->cl_count) > 1 && !signalled());
+ goto out_drain;
+
+@@ -2686,6 +2697,7 @@ out_error:
+ clp->cl_hostname, -status);
+ ssleep(1);
+ out_drain:
++ memalloc_nofs_restore(memflags);
+ nfs4_end_drain_session(clp);
+ nfs4_clear_state_manager_bit(clp);
+ }
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 815d630802451..9157dd19b8b4f 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -90,10 +90,10 @@ void nfs_set_pgio_error(struct nfs_pgio_header *hdr, int error, loff_t pos)
+ }
+ }
+
+-static inline struct nfs_page *
+-nfs_page_alloc(void)
++static inline struct nfs_page *nfs_page_alloc(void)
+ {
+- struct nfs_page *p = kmem_cache_zalloc(nfs_page_cachep, GFP_KERNEL);
++ struct nfs_page *p =
++ kmem_cache_zalloc(nfs_page_cachep, nfs_io_gfp_mask());
+ if (p)
+ INIT_LIST_HEAD(&p->wb_list);
+ return p;
+@@ -892,7 +892,7 @@ int nfs_generic_pgio(struct nfs_pageio_descriptor *desc,
+ struct nfs_commit_info cinfo;
+ struct nfs_page_array *pg_array = &hdr->page_array;
+ unsigned int pagecount, pageused;
+- gfp_t gfp_flags = GFP_KERNEL;
++ gfp_t gfp_flags = nfs_io_gfp_mask();
+
+ pagecount = nfs_page_array_len(mirror->pg_base, mirror->pg_count);
+ pg_array->npages = pagecount;
+@@ -979,7 +979,7 @@ nfs_pageio_alloc_mirrors(struct nfs_pageio_descriptor *desc,
+ desc->pg_mirrors_dynamic = NULL;
+ if (mirror_count == 1)
+ return desc->pg_mirrors_static;
+- ret = kmalloc_array(mirror_count, sizeof(*ret), GFP_KERNEL);
++ ret = kmalloc_array(mirror_count, sizeof(*ret), nfs_io_gfp_mask());
+ if (ret != NULL) {
+ for (i = 0; i < mirror_count; i++)
+ nfs_pageio_mirror_init(&ret[i], desc->pg_bsize);
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index 316f68f96e573..657c242a18ff1 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -419,7 +419,7 @@ static struct nfs_commit_data *
+ pnfs_bucket_fetch_commitdata(struct pnfs_commit_bucket *bucket,
+ struct nfs_commit_info *cinfo)
+ {
+- struct nfs_commit_data *data = nfs_commitdata_alloc(false);
++ struct nfs_commit_data *data = nfs_commitdata_alloc();
+
+ if (!data)
+ return NULL;
+@@ -515,7 +515,11 @@ pnfs_generic_commit_pagelist(struct inode *inode, struct list_head *mds_pages,
+ unsigned int nreq = 0;
+
+ if (!list_empty(mds_pages)) {
+- data = nfs_commitdata_alloc(true);
++ data = nfs_commitdata_alloc();
++ if (!data) {
++ nfs_retry_commit(mds_pages, NULL, cinfo, -1);
++ return -ENOMEM;
++ }
+ data->ds_commit_index = -1;
+ list_splice_init(mds_pages, &data->pages);
+ list_add_tail(&data->list, &list);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 60693ab6a0325..9388503030992 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -70,27 +70,17 @@ static mempool_t *nfs_wdata_mempool;
+ static struct kmem_cache *nfs_cdata_cachep;
+ static mempool_t *nfs_commit_mempool;
+
+-struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail)
++struct nfs_commit_data *nfs_commitdata_alloc(void)
+ {
+ struct nfs_commit_data *p;
+
+- if (never_fail)
+- p = mempool_alloc(nfs_commit_mempool, GFP_NOIO);
+- else {
+- /* It is OK to do some reclaim, not no safe to wait
+- * for anything to be returned to the pool.
+- * mempool_alloc() cannot handle that particular combination,
+- * so we need two separate attempts.
+- */
++ p = kmem_cache_zalloc(nfs_cdata_cachep, nfs_io_gfp_mask());
++ if (!p) {
+ p = mempool_alloc(nfs_commit_mempool, GFP_NOWAIT);
+- if (!p)
+- p = kmem_cache_alloc(nfs_cdata_cachep, GFP_NOIO |
+- __GFP_NOWARN | __GFP_NORETRY);
+ if (!p)
+ return NULL;
++ memset(p, 0, sizeof(*p));
+ }
+-
+- memset(p, 0, sizeof(*p));
+ INIT_LIST_HEAD(&p->pages);
+ return p;
+ }
+@@ -104,9 +94,15 @@ EXPORT_SYMBOL_GPL(nfs_commit_free);
+
+ static struct nfs_pgio_header *nfs_writehdr_alloc(void)
+ {
+- struct nfs_pgio_header *p = mempool_alloc(nfs_wdata_mempool, GFP_KERNEL);
++ struct nfs_pgio_header *p;
+
+- memset(p, 0, sizeof(*p));
++ p = kmem_cache_zalloc(nfs_wdata_cachep, nfs_io_gfp_mask());
++ if (!p) {
++ p = mempool_alloc(nfs_wdata_mempool, GFP_NOWAIT);
++ if (!p)
++ return NULL;
++ memset(p, 0, sizeof(*p));
++ }
+ p->rw_mode = FMODE_WRITE;
+ return p;
+ }
+@@ -1826,7 +1822,11 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how,
+ if (list_empty(head))
+ return 0;
+
+- data = nfs_commitdata_alloc(true);
++ data = nfs_commitdata_alloc();
++ if (!data) {
++ nfs_retry_commit(head, NULL, cinfo, -1);
++ return -ENOMEM;
++ }
+
+ /* Set up the argument struct */
+ nfs_init_commit(data, head, NULL, cinfo);
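nfs_commitdata_alloc() above now tries an ordinary, context-aware allocation first and only then dips into the reserve pool without sleeping, returning NULL so callers can requeue via nfs_retry_commit(). A userspace sketch of that fallback ladder (calloc and a fixed array standing in for the slab cache and mempool):

#include <stdlib.h>
#include <string.h>

struct commit_data { char payload[128]; };

static struct commit_data reserve[4];	/* stand-in for the mempool reserve */
static int reserve_used;

static struct commit_data *pool_alloc_nowait(void)
{
	return reserve_used < 4 ? &reserve[reserve_used++] : NULL;
}

static struct commit_data *commitdata_alloc(void)
{
	/* First choice: normal allocation (kmem_cache_zalloc in the patch). */
	struct commit_data *p = calloc(1, sizeof(*p));

	if (!p) {
		p = pool_alloc_nowait();	/* emergency reserve, never sleeps */
		if (!p)
			return NULL;		/* caller requeues and retries */
		memset(p, 0, sizeof(*p));	/* reserve memory isn't zeroed */
	}
	return p;
}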
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index b0728c8ad90ce..f8996b46f430e 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -218,6 +218,15 @@ struct gpio_irq_chip {
+ */
+ bool per_parent_data;
+
++ /**
++ * @initialized:
++ *
++ * Flag to track the GPIO chip irq members' initialization.
++ * This flag will make sure GPIO chip irq members are not used
++ * before they are initialized.
++ */
++ bool initialized;
++
+ /**
+ * @init_hw: optional routine to initialize hardware before
+ * an IRQ chip will be added. This is quite useful when
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index a59d25f193857..b8641dc0ee661 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -51,7 +51,7 @@ struct ipv6_devconf {
+ __s32 use_optimistic;
+ #endif
+ #ifdef CONFIG_IPV6_MROUTE
+- __s32 mc_forwarding;
++ atomic_t mc_forwarding;
+ #endif
+ __s32 disable_ipv6;
+ __s32 drop_unicast_in_l2_multicast;
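Changing mc_forwarding from __s32 to atomic_t above makes a flag that is written from configuration context and read on the forwarding fast path safe without a lock. A C11 sketch of the same conversion (relaxed ordering is assumed sufficient for a simple on/off flag):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int mc_forwarding;

static void enable_forwarding(bool on)		/* control path */
{
	atomic_store_explicit(&mc_forwarding, on, memory_order_relaxed);
}

static bool forwarding_enabled(void)		/* per-packet fast path */
{
	return atomic_load_explicit(&mc_forwarding, memory_order_relaxed);
}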
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index aed44e9b5d899..c7a0d500b396f 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -1389,13 +1389,16 @@ static inline unsigned long *section_to_usemap(struct mem_section *ms)
+
+ static inline struct mem_section *__nr_to_section(unsigned long nr)
+ {
++ unsigned long root = SECTION_NR_TO_ROOT(nr);
++
++ if (unlikely(root >= NR_SECTION_ROOTS))
++ return NULL;
++
+ #ifdef CONFIG_SPARSEMEM_EXTREME
+- if (!mem_section)
++ if (!mem_section || !mem_section[root])
+ return NULL;
+ #endif
+- if (!mem_section[SECTION_NR_TO_ROOT(nr)])
+- return NULL;
+- return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
++ return &mem_section[root][nr & SECTION_ROOT_MASK];
+ }
+ extern size_t mem_section_usage_size(void);
+
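The hardened __nr_to_section() above computes the root index once and bounds-checks it before either level of the table is dereferenced, instead of only checking that the root slot is populated. A standalone sketch of the same guarded two-level lookup:

#include <stddef.h>

#define ROOTS		16
#define PER_ROOT	64

static int *table[ROOTS];		/* second level allocated lazily */

static int *nr_to_entry(size_t nr)
{
	size_t root = nr / PER_ROOT;

	if (root >= ROOTS)		/* reject out-of-range indices first */
		return NULL;
	if (!table[root])		/* level not populated yet */
		return NULL;
	return &table[root][nr % PER_ROOT];
}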
+diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
+index 68f81d8d36def..c9d3dc79d5876 100644
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -513,10 +513,10 @@ static inline const struct cred *nfs_file_cred(struct file *file)
+ * linux/fs/nfs/direct.c
+ */
+ extern ssize_t nfs_direct_IO(struct kiocb *, struct iov_iter *);
+-extern ssize_t nfs_file_direct_read(struct kiocb *iocb,
+- struct iov_iter *iter);
+-extern ssize_t nfs_file_direct_write(struct kiocb *iocb,
+- struct iov_iter *iter);
++ssize_t nfs_file_direct_read(struct kiocb *iocb,
++ struct iov_iter *iter, bool swap);
++ssize_t nfs_file_direct_write(struct kiocb *iocb,
++ struct iov_iter *iter, bool swap);
+
+ /*
+ * linux/fs/nfs/dir.c
+@@ -585,7 +585,7 @@ extern int nfs_wb_all(struct inode *inode);
+ extern int nfs_wb_page(struct inode *inode, struct page *page);
+ extern int nfs_wb_page_cancel(struct inode *inode, struct page* page);
+ extern int nfs_commit_inode(struct inode *, int);
+-extern struct nfs_commit_data *nfs_commitdata_alloc(bool never_fail);
++extern struct nfs_commit_data *nfs_commitdata_alloc(void);
+ extern void nfs_commit_free(struct nfs_commit_data *data);
+ bool nfs_commit_end(struct nfs_mds_commit_info *cinfo);
+
+diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
+index 60f3453be23e6..a443abda937d8 100644
+--- a/include/linux/ref_tracker.h
++++ b/include/linux/ref_tracker.h
+@@ -13,6 +13,7 @@ struct ref_tracker_dir {
+ spinlock_t lock;
+ unsigned int quarantine_avail;
+ refcount_t untracked;
++ bool dead;
+ struct list_head list; /* List of active trackers */
+ struct list_head quarantine; /* List of dead trackers */
+ #endif
+@@ -26,6 +27,7 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,
+ INIT_LIST_HEAD(&dir->quarantine);
+ spin_lock_init(&dir->lock);
+ dir->quarantine_avail = quarantine_count;
++ dir->dead = false;
+ refcount_set(&dir->untracked, 1);
+ stack_depot_init();
+ }
+diff --git a/include/linux/static_call.h b/include/linux/static_call.h
+index 3e56a9751c062..fcc5b48989b3c 100644
+--- a/include/linux/static_call.h
++++ b/include/linux/static_call.h
+@@ -248,10 +248,7 @@ static inline int static_call_text_reserved(void *start, void *end)
+ return 0;
+ }
+
+-static inline long __static_call_return0(void)
+-{
+- return 0;
+-}
++extern long __static_call_return0(void);
+
+ #define EXPORT_STATIC_CALL(name) \
+ EXPORT_SYMBOL(STATIC_CALL_KEY(name)); \
+diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
+index ef9a44b6cf5d5..ae6f4838ab755 100644
+--- a/include/linux/vfio_pci_core.h
++++ b/include/linux/vfio_pci_core.h
+@@ -159,8 +159,17 @@ extern ssize_t vfio_pci_config_rw(struct vfio_pci_core_device *vdev,
+ extern ssize_t vfio_pci_bar_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite);
+
++#ifdef CONFIG_VFIO_PCI_VGA
+ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite);
++#else
++static inline ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev,
++ char __user *buf, size_t count,
++ loff_t *ppos, bool iswrite)
++{
++ return -EINVAL;
++}
++#endif
+
+ extern long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
+ uint64_t data, int count, int fd);
+diff --git a/include/net/arp.h b/include/net/arp.h
+index 031374ac2f222..d7ef4ec71dfeb 100644
+--- a/include/net/arp.h
++++ b/include/net/arp.h
+@@ -65,6 +65,7 @@ void arp_send(int type, int ptype, __be32 dest_ip,
+ const unsigned char *src_hw, const unsigned char *th);
+ int arp_mc_map(__be32 addr, u8 *haddr, struct net_device *dev, int dir);
+ void arp_ifdown(struct net_device *dev);
++int arp_invalidate(struct net_device *dev, __be32 ip, bool force);
+
+ struct sk_buff *arp_create(int type, int ptype, __be32 dest_ip,
+ struct net_device *dev, __be32 src_ip,
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index a647e5fabdbd6..2aa5e95808a5a 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -204,19 +204,21 @@ void bt_err_ratelimited(const char *fmt, ...);
+ #define BT_DBG(fmt, ...) pr_debug(fmt "\n", ##__VA_ARGS__)
+ #endif
+
++#define bt_dev_name(hdev) ((hdev) ? (hdev)->name : "null")
++
+ #define bt_dev_info(hdev, fmt, ...) \
+- BT_INFO("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++ BT_INFO("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_warn(hdev, fmt, ...) \
+- BT_WARN("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++ BT_WARN("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_err(hdev, fmt, ...) \
+- BT_ERR("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++ BT_ERR("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_dbg(hdev, fmt, ...) \
+- BT_DBG("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++ BT_DBG("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+
+ #define bt_dev_warn_ratelimited(hdev, fmt, ...) \
+- bt_warn_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++ bt_warn_ratelimited("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+ #define bt_dev_err_ratelimited(hdev, fmt, ...) \
+- bt_err_ratelimited("%s: " fmt, (hdev)->name, ##__VA_ARGS__)
++ bt_err_ratelimited("%s: " fmt, bt_dev_name(hdev), ##__VA_ARGS__)
+
+ /* Connection and socket states */
+ enum {
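
The bt_dev_name() wrapper above makes every bt_dev_* logging macro safe to call with a NULL hdev, where the old macros dereferenced the pointer while formatting the message. The same defensive idiom in standalone form (the printf-based macros are stand-ins for the BT_* helpers):

#include <stdio.h>

struct hci_dev { char name[8]; };

/* Fall back to a fixed string when the device pointer is NULL. */
#define dev_name(d) ((d) ? (d)->name : "null")
#define dev_info(d, fmt, ...) \
        printf("%s: " fmt "\n", dev_name(d), ##__VA_ARGS__)

int main(void)
{
        struct hci_dev *hdev = NULL;

        dev_info(hdev, "probe failed"); /* prints "null: probe failed" */
        return 0;
}
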
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index e336e9c1dda4f..36d727f94ac29 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -294,6 +294,9 @@ struct adv_monitor {
+
+ #define HCI_MAX_SHORT_NAME_LENGTH 10
+
++#define HCI_CONN_HANDLE_UNSET 0xffff
++#define HCI_CONN_HANDLE_MAX 0x0eff
++
+ /* Min encryption key size to match with SMP */
+ #define HCI_MIN_ENC_KEY_SIZE 7
+
+diff --git a/include/net/mctp.h b/include/net/mctp.h
+index 7e35ec79b909c..204ae3aebc0da 100644
+--- a/include/net/mctp.h
++++ b/include/net/mctp.h
+@@ -36,8 +36,6 @@ struct mctp_hdr {
+ #define MCTP_HDR_TAG_SHIFT 0
+ #define MCTP_HDR_TAG_MASK GENMASK(2, 0)
+
+-#define MCTP_HEADER_MAXLEN 4
+-
+ #define MCTP_INITIAL_DEFAULT_NET 1
+
+ static inline bool mctp_address_ok(mctp_eid_t eid)
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 5b61c462e534b..374cc7b260fcd 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -513,4 +513,10 @@ static inline void fnhe_genid_bump(struct net *net)
+ atomic_inc(&net->fnhe_genid);
+ }
+
++#ifdef CONFIG_NET
++void net_ns_init(void);
++#else
++static inline void net_ns_init(void) {}
++#endif
++
+ #endif /* __NET_NET_NAMESPACE_H */
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 29982d60b68ab..5be3faf88c1a1 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -1005,7 +1005,6 @@ DEFINE_RPC_XPRT_LIFETIME_EVENT(connect);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_auto);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_done);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_force);
+-DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_cleanup);
+ DEFINE_RPC_XPRT_LIFETIME_EVENT(destroy);
+
+ DECLARE_EVENT_CLASS(rpc_xprt_event,
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 015bfec0dbfdb..4b46021eafa99 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -5500,7 +5500,8 @@ struct bpf_sock {
+ __u32 src_ip4;
+ __u32 src_ip6[4];
+ __u32 src_port; /* host byte order */
+- __u32 dst_port; /* network byte order */
++ __be16 dst_port; /* network byte order */
++ __u16 :16; /* zero padding */
+ __u32 dst_ip4;
+ __u32 dst_ip6[4];
+ __u32 state;
+@@ -6378,7 +6379,8 @@ struct bpf_sk_lookup {
+ __u32 protocol; /* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */
+ __u32 remote_ip4; /* Network byte order */
+ __u32 remote_ip6[4]; /* Network byte order */
+- __u32 remote_port; /* Network byte order */
++ __be16 remote_port; /* Network byte order */
++ __u16 :16; /* Zero padding */
+ __u32 local_ip4; /* Network byte order */
+ __u32 local_ip6[4]; /* Network byte order */
+ __u32 local_port; /* Host byte order */
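
Narrowing dst_port/remote_port from __u32 to __be16 plus an anonymous 16-bit pad gives the field its real width and byte order without moving any other member, so existing BPF programs keep working. A quick layout check of that idiom (hypothetical struct names; the kernel uses an unnamed bitfield where this sketch uses a named pad so offsetof can see it):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct sock_old {
        uint32_t src_port;
        uint32_t dst_port;      /* only the low 16 bits were ever valid */
        uint32_t dst_ip4;
};

struct sock_new {
        uint32_t src_port;
        uint16_t dst_port;      /* network byte order */
        uint16_t pad;           /* zero padding preserves the old layout */
        uint32_t dst_ip4;
};

static_assert(sizeof(struct sock_old) == sizeof(struct sock_new),
              "struct size must not change");
static_assert(offsetof(struct sock_old, dst_ip4) ==
              offsetof(struct sock_new, dst_ip4),
              "later fields must not move");

int main(void)
{
        return 0;
}
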
+diff --git a/include/uapi/linux/can/isotp.h b/include/uapi/linux/can/isotp.h
+index c55935b64ccc8..590f8aea2b6d2 100644
+--- a/include/uapi/linux/can/isotp.h
++++ b/include/uapi/linux/can/isotp.h
+@@ -137,20 +137,16 @@ struct can_isotp_ll_options {
+ #define CAN_ISOTP_WAIT_TX_DONE 0x400 /* wait for tx completion */
+ #define CAN_ISOTP_SF_BROADCAST 0x800 /* 1-to-N functional addressing */
+
+-/* default values */
++/* protocol machine default values */
+
+ #define CAN_ISOTP_DEFAULT_FLAGS 0
+ #define CAN_ISOTP_DEFAULT_EXT_ADDRESS 0x00
+ #define CAN_ISOTP_DEFAULT_PAD_CONTENT 0xCC /* prevent bit-stuffing */
+-#define CAN_ISOTP_DEFAULT_FRAME_TXTIME 0
++#define CAN_ISOTP_DEFAULT_FRAME_TXTIME 50000 /* 50 microseconds */
+ #define CAN_ISOTP_DEFAULT_RECV_BS 0
+ #define CAN_ISOTP_DEFAULT_RECV_STMIN 0x00
+ #define CAN_ISOTP_DEFAULT_RECV_WFTMAX 0
+
+-#define CAN_ISOTP_DEFAULT_LL_MTU CAN_MTU
+-#define CAN_ISOTP_DEFAULT_LL_TX_DL CAN_MAX_DLEN
+-#define CAN_ISOTP_DEFAULT_LL_TX_FLAGS 0
+-
+ /*
+ * Remark on CAN_ISOTP_DEFAULT_RECV_* values:
+ *
+@@ -162,4 +158,24 @@ struct can_isotp_ll_options {
+ * consistency and copied directly into the flow control (FC) frame.
+ */
+
++/* link layer default values => make use of Classical CAN frames */
++
++#define CAN_ISOTP_DEFAULT_LL_MTU CAN_MTU
++#define CAN_ISOTP_DEFAULT_LL_TX_DL CAN_MAX_DLEN
++#define CAN_ISOTP_DEFAULT_LL_TX_FLAGS 0
++
++/*
++ * CAN_ISOTP_DEFAULT_FRAME_TXTIME has become a non-zero value, as
++ * running without an N_As value only makes sense for isotp
++ * implementation tests. As user space applications usually do not
++ * set the frame_txtime element of struct can_isotp_options, the new
++ * in-kernel default would very likely be overwritten with zero when
++ * the CAN_ISOTP_OPTS sockopt() is invoked.
++ * To make sure an N_As value of zero is only set intentionally, the
++ * value '0' is now interpreted as 'do not change the current value'.
++ * When a frame_txtime of zero is required for testing purposes, this
++ * CAN_ISOTP_FRAME_TXTIME_ZERO u32 value has to be set in frame_txtime.
++ */
++#define CAN_ISOTP_FRAME_TXTIME_ZERO 0xFFFFFFFF
++
+ #endif /* !_UAPI_CAN_ISOTP_H */
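
The new frame_txtime semantics described in the comment above are easy to get wrong, so here is the decision logic in isolation, mirroring what isotp_setsockopt_locked() does in the net/can/isotp.c hunk further down (plain types instead of the socket state):

#include <stdint.h>

#define FRAME_TXTIME_ZERO 0xFFFFFFFFu   /* CAN_ISOTP_FRAME_TXTIME_ZERO */

/* Apply a user-supplied frame_txtime (nanoseconds) to the socket state:
 *   0                 -> keep the current value
 *   FRAME_TXTIME_ZERO -> explicitly request an N_As of zero
 *   anything else     -> take the value as-is
 */
static void apply_frame_txtime(uint32_t *cur, uint32_t requested)
{
        if (!requested)
                return;                 /* 0 now means "do not change" */
        if (requested == FRAME_TXTIME_ZERO)
                *cur = 0;
        else
                *cur = requested;
}

int main(void)
{
        uint32_t txtime = 50000;        /* the new in-kernel default */

        apply_frame_txtime(&txtime, 0);                 /* keeps 50000 */
        apply_frame_txtime(&txtime, FRAME_TXTIME_ZERO); /* forces 0 */
        return (int)txtime;
}
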
+diff --git a/init/main.c b/init/main.c
+index 65fa2e41a9c09..9a5097b2251a5 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -99,6 +99,7 @@
+ #include <linux/kcsan.h>
+ #include <linux/init_syscalls.h>
+ #include <linux/stackdepot.h>
++#include <net/net_namespace.h>
+
+ #include <asm/io.h>
+ #include <asm/bugs.h>
+@@ -1116,6 +1117,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
+ key_init();
+ security_init();
+ dbg_late_init();
++ net_ns_init();
+ vfs_caches_init();
+ pagecache_init();
+ signals_init();
+@@ -1190,7 +1192,7 @@ static int __init initcall_blacklist(char *str)
+ }
+ } while (str_entry);
+
+- return 0;
++ return 1;
+ }
+
+ static bool __init_or_module initcall_blacklisted(initcall_t fn)
+@@ -1452,7 +1454,9 @@ static noinline void __init kernel_init_freeable(void);
+ bool rodata_enabled __ro_after_init = true;
+ static int __init set_debug_rodata(char *str)
+ {
+- return strtobool(str, &rodata_enabled);
++ if (strtobool(str, &rodata_enabled))
++ pr_warn("Invalid option string for rodata: '%s'\n", str);
++ return 1;
+ }
+ __setup("rodata=", set_debug_rodata);
+ #endif
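
Both init/main.c fixes change early-param handlers to return 1: a nonzero return from a __setup() callback marks the option as consumed, while returning 0 makes the kernel treat the string as an unknown parameter and pass it on to init. A userspace analogue of that consumed/not-consumed contract (the parser and option names are made up):

#include <stdio.h>
#include <string.h>

/* Return 1 if the option was consumed, 0 to let it fall through. */
static int handle_rodata(const char *val)
{
        if (strcmp(val, "on") && strcmp(val, "off"))
                fprintf(stderr, "Invalid option string for rodata: '%s'\n",
                        val);
        return 1;       /* consumed either way; never re-queue the option */
}

static void parse(const char *opt)
{
        if (!strncmp(opt, "rodata=", 7) && handle_rodata(opt + 7))
                return;
        printf("Unknown kernel command line parameter: %s\n", opt);
}

int main(void)
{
        parse("rodata=bogus");  /* warns, but is not reported as unknown */
        parse("frobnicate=1");  /* falls through as unknown */
        return 0;
}
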
+diff --git a/kernel/Makefile b/kernel/Makefile
+index 56f4ee97f3284..a18d169732d2e 100644
+--- a/kernel/Makefile
++++ b/kernel/Makefile
+@@ -113,7 +113,8 @@ obj-$(CONFIG_CPU_PM) += cpu_pm.o
+ obj-$(CONFIG_BPF) += bpf/
+ obj-$(CONFIG_KCSAN) += kcsan/
+ obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
+-obj-$(CONFIG_HAVE_STATIC_CALL_INLINE) += static_call.o
++obj-$(CONFIG_HAVE_STATIC_CALL) += static_call.o
++obj-$(CONFIG_HAVE_STATIC_CALL_INLINE) += static_call_inline.o
+ obj-$(CONFIG_CFI_CLANG) += cfi.o
+
+ obj-$(CONFIG_PERF_EVENTS) += events/
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 69cf71d973121..0ee9ffceb9764 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11640,6 +11640,9 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+
+ event->state = PERF_EVENT_STATE_INACTIVE;
+
++ if (parent_event)
++ event->event_caps = parent_event->event_caps;
++
+ if (event->attr.sigtrap)
+ atomic_set(&event->event_limit, 1);
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 1620ae8535dcf..1eec4925b8c64 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5675,6 +5675,8 @@ static inline struct task_struct *pick_task(struct rq *rq)
+
+ extern void task_vruntime_update(struct rq *rq, struct task_struct *p, bool in_fi);
+
++static void queue_core_balance(struct rq *rq);
++
+ static struct task_struct *
+ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ {
+@@ -5724,7 +5726,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ }
+
+ rq->core_pick = NULL;
+- return next;
++ goto out;
+ }
+
+ put_prev_task_balance(rq, prev, rf);
+@@ -5774,7 +5776,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ */
+ WARN_ON_ONCE(fi_before);
+ task_vruntime_update(rq, next, false);
+- goto done;
++ goto out_set_next;
+ }
+ }
+
+@@ -5893,8 +5895,12 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ resched_curr(rq_i);
+ }
+
+-done:
++out_set_next:
+ set_next_task(rq, next);
++out:
++ if (rq->core->core_forceidle_count && next == rq->idle)
++ queue_core_balance(rq);
++
+ return next;
+ }
+
+@@ -5923,7 +5929,7 @@ static bool try_steal_cookie(int this, int that)
+ if (p == src->core_pick || p == src->curr)
+ goto next;
+
+- if (!cpumask_test_cpu(this, &p->cpus_mask))
++ if (!is_cpu_allowed(p, this))
+ goto next;
+
+ if (p->core_occupation > dst->idle->core_occupation)
+@@ -5989,7 +5995,7 @@ static void sched_core_balance(struct rq *rq)
+
+ static DEFINE_PER_CPU(struct callback_head, core_balance_head);
+
+-void queue_core_balance(struct rq *rq)
++static void queue_core_balance(struct rq *rq)
+ {
+ if (!sched_core_enabled(rq))
+ return;
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index d17b0a5ce6ac3..314c36fc9c42f 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -437,7 +437,6 @@ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool fir
+ {
+ update_idle_core(rq);
+ schedstat_inc(rq->sched_goidle);
+- queue_core_balance(rq);
+ }
+
+ #ifdef CONFIG_SMP
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 9b33ba9c3c420..e8a5549488dd1 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1247,8 +1247,6 @@ static inline bool sched_group_cookie_match(struct rq *rq,
+ return false;
+ }
+
+-extern void queue_core_balance(struct rq *rq);
+-
+ static inline bool sched_core_enqueued(struct task_struct *p)
+ {
+ return !RB_EMPTY_NODE(&p->core_node);
+@@ -1282,10 +1280,6 @@ static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
+ return &rq->__lock;
+ }
+
+-static inline void queue_core_balance(struct rq *rq)
+-{
+-}
+-
+ static inline bool sched_cpu_cookie_match(struct rq *rq, struct task_struct *p)
+ {
+ return true;
+diff --git a/kernel/static_call.c b/kernel/static_call.c
+index 43ba0b1e0edbb..e9c3e69f38379 100644
+--- a/kernel/static_call.c
++++ b/kernel/static_call.c
+@@ -1,548 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <linux/init.h>
+ #include <linux/static_call.h>
+-#include <linux/bug.h>
+-#include <linux/smp.h>
+-#include <linux/sort.h>
+-#include <linux/slab.h>
+-#include <linux/module.h>
+-#include <linux/cpu.h>
+-#include <linux/processor.h>
+-#include <asm/sections.h>
+-
+-extern struct static_call_site __start_static_call_sites[],
+- __stop_static_call_sites[];
+-extern struct static_call_tramp_key __start_static_call_tramp_key[],
+- __stop_static_call_tramp_key[];
+-
+-static bool static_call_initialized;
+-
+-/* mutex to protect key modules/sites */
+-static DEFINE_MUTEX(static_call_mutex);
+-
+-static void static_call_lock(void)
+-{
+- mutex_lock(&static_call_mutex);
+-}
+-
+-static void static_call_unlock(void)
+-{
+- mutex_unlock(&static_call_mutex);
+-}
+-
+-static inline void *static_call_addr(struct static_call_site *site)
+-{
+- return (void *)((long)site->addr + (long)&site->addr);
+-}
+-
+-static inline unsigned long __static_call_key(const struct static_call_site *site)
+-{
+- return (long)site->key + (long)&site->key;
+-}
+-
+-static inline struct static_call_key *static_call_key(const struct static_call_site *site)
+-{
+- return (void *)(__static_call_key(site) & ~STATIC_CALL_SITE_FLAGS);
+-}
+-
+-/* These assume the key is word-aligned. */
+-static inline bool static_call_is_init(struct static_call_site *site)
+-{
+- return __static_call_key(site) & STATIC_CALL_SITE_INIT;
+-}
+-
+-static inline bool static_call_is_tail(struct static_call_site *site)
+-{
+- return __static_call_key(site) & STATIC_CALL_SITE_TAIL;
+-}
+-
+-static inline void static_call_set_init(struct static_call_site *site)
+-{
+- site->key = (__static_call_key(site) | STATIC_CALL_SITE_INIT) -
+- (long)&site->key;
+-}
+-
+-static int static_call_site_cmp(const void *_a, const void *_b)
+-{
+- const struct static_call_site *a = _a;
+- const struct static_call_site *b = _b;
+- const struct static_call_key *key_a = static_call_key(a);
+- const struct static_call_key *key_b = static_call_key(b);
+-
+- if (key_a < key_b)
+- return -1;
+-
+- if (key_a > key_b)
+- return 1;
+-
+- return 0;
+-}
+-
+-static void static_call_site_swap(void *_a, void *_b, int size)
+-{
+- long delta = (unsigned long)_a - (unsigned long)_b;
+- struct static_call_site *a = _a;
+- struct static_call_site *b = _b;
+- struct static_call_site tmp = *a;
+-
+- a->addr = b->addr - delta;
+- a->key = b->key - delta;
+-
+- b->addr = tmp.addr + delta;
+- b->key = tmp.key + delta;
+-}
+-
+-static inline void static_call_sort_entries(struct static_call_site *start,
+- struct static_call_site *stop)
+-{
+- sort(start, stop - start, sizeof(struct static_call_site),
+- static_call_site_cmp, static_call_site_swap);
+-}
+-
+-static inline bool static_call_key_has_mods(struct static_call_key *key)
+-{
+- return !(key->type & 1);
+-}
+-
+-static inline struct static_call_mod *static_call_key_next(struct static_call_key *key)
+-{
+- if (!static_call_key_has_mods(key))
+- return NULL;
+-
+- return key->mods;
+-}
+-
+-static inline struct static_call_site *static_call_key_sites(struct static_call_key *key)
+-{
+- if (static_call_key_has_mods(key))
+- return NULL;
+-
+- return (struct static_call_site *)(key->type & ~1);
+-}
+-
+-void __static_call_update(struct static_call_key *key, void *tramp, void *func)
+-{
+- struct static_call_site *site, *stop;
+- struct static_call_mod *site_mod, first;
+-
+- cpus_read_lock();
+- static_call_lock();
+-
+- if (key->func == func)
+- goto done;
+-
+- key->func = func;
+-
+- arch_static_call_transform(NULL, tramp, func, false);
+-
+- /*
+- * If uninitialized, we'll not update the callsites, but they still
+- * point to the trampoline and we just patched that.
+- */
+- if (WARN_ON_ONCE(!static_call_initialized))
+- goto done;
+-
+- first = (struct static_call_mod){
+- .next = static_call_key_next(key),
+- .mod = NULL,
+- .sites = static_call_key_sites(key),
+- };
+-
+- for (site_mod = &first; site_mod; site_mod = site_mod->next) {
+- bool init = system_state < SYSTEM_RUNNING;
+- struct module *mod = site_mod->mod;
+-
+- if (!site_mod->sites) {
+- /*
+- * This can happen if the static call key is defined in
+- * a module which doesn't use it.
+- *
+- * It also happens in the has_mods case, where the
+- * 'first' entry has no sites associated with it.
+- */
+- continue;
+- }
+-
+- stop = __stop_static_call_sites;
+-
+- if (mod) {
+-#ifdef CONFIG_MODULES
+- stop = mod->static_call_sites +
+- mod->num_static_call_sites;
+- init = mod->state == MODULE_STATE_COMING;
+-#endif
+- }
+-
+- for (site = site_mod->sites;
+- site < stop && static_call_key(site) == key; site++) {
+- void *site_addr = static_call_addr(site);
+-
+- if (!init && static_call_is_init(site))
+- continue;
+-
+- if (!kernel_text_address((unsigned long)site_addr)) {
+- /*
+- * This skips patching built-in __exit, which
+- * is part of init_section_contains() but is
+- * not part of kernel_text_address().
+- *
+- * Skipping built-in __exit is fine since it
+- * will never be executed.
+- */
+- WARN_ONCE(!static_call_is_init(site),
+- "can't patch static call site at %pS",
+- site_addr);
+- continue;
+- }
+-
+- arch_static_call_transform(site_addr, NULL, func,
+- static_call_is_tail(site));
+- }
+- }
+-
+-done:
+- static_call_unlock();
+- cpus_read_unlock();
+-}
+-EXPORT_SYMBOL_GPL(__static_call_update);
+-
+-static int __static_call_init(struct module *mod,
+- struct static_call_site *start,
+- struct static_call_site *stop)
+-{
+- struct static_call_site *site;
+- struct static_call_key *key, *prev_key = NULL;
+- struct static_call_mod *site_mod;
+-
+- if (start == stop)
+- return 0;
+-
+- static_call_sort_entries(start, stop);
+-
+- for (site = start; site < stop; site++) {
+- void *site_addr = static_call_addr(site);
+-
+- if ((mod && within_module_init((unsigned long)site_addr, mod)) ||
+- (!mod && init_section_contains(site_addr, 1)))
+- static_call_set_init(site);
+-
+- key = static_call_key(site);
+- if (key != prev_key) {
+- prev_key = key;
+-
+- /*
+- * For vmlinux (!mod) avoid the allocation by storing
+- * the sites pointer in the key itself. Also see
+- * __static_call_update()'s @first.
+- *
+- * This allows architectures (eg. x86) to call
+- * static_call_init() before memory allocation works.
+- */
+- if (!mod) {
+- key->sites = site;
+- key->type |= 1;
+- goto do_transform;
+- }
+-
+- site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
+- if (!site_mod)
+- return -ENOMEM;
+-
+- /*
+- * When the key has a direct sites pointer, extract
+- * that into an explicit struct static_call_mod, so we
+- * can have a list of modules.
+- */
+- if (static_call_key_sites(key)) {
+- site_mod->mod = NULL;
+- site_mod->next = NULL;
+- site_mod->sites = static_call_key_sites(key);
+-
+- key->mods = site_mod;
+-
+- site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
+- if (!site_mod)
+- return -ENOMEM;
+- }
+-
+- site_mod->mod = mod;
+- site_mod->sites = site;
+- site_mod->next = static_call_key_next(key);
+- key->mods = site_mod;
+- }
+-
+-do_transform:
+- arch_static_call_transform(site_addr, NULL, key->func,
+- static_call_is_tail(site));
+- }
+-
+- return 0;
+-}
+-
+-static int addr_conflict(struct static_call_site *site, void *start, void *end)
+-{
+- unsigned long addr = (unsigned long)static_call_addr(site);
+-
+- if (addr <= (unsigned long)end &&
+- addr + CALL_INSN_SIZE > (unsigned long)start)
+- return 1;
+-
+- return 0;
+-}
+-
+-static int __static_call_text_reserved(struct static_call_site *iter_start,
+- struct static_call_site *iter_stop,
+- void *start, void *end, bool init)
+-{
+- struct static_call_site *iter = iter_start;
+-
+- while (iter < iter_stop) {
+- if (init || !static_call_is_init(iter)) {
+- if (addr_conflict(iter, start, end))
+- return 1;
+- }
+- iter++;
+- }
+-
+- return 0;
+-}
+-
+-#ifdef CONFIG_MODULES
+-
+-static int __static_call_mod_text_reserved(void *start, void *end)
+-{
+- struct module *mod;
+- int ret;
+-
+- preempt_disable();
+- mod = __module_text_address((unsigned long)start);
+- WARN_ON_ONCE(__module_text_address((unsigned long)end) != mod);
+- if (!try_module_get(mod))
+- mod = NULL;
+- preempt_enable();
+-
+- if (!mod)
+- return 0;
+-
+- ret = __static_call_text_reserved(mod->static_call_sites,
+- mod->static_call_sites + mod->num_static_call_sites,
+- start, end, mod->state == MODULE_STATE_COMING);
+-
+- module_put(mod);
+-
+- return ret;
+-}
+-
+-static unsigned long tramp_key_lookup(unsigned long addr)
+-{
+- struct static_call_tramp_key *start = __start_static_call_tramp_key;
+- struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
+- struct static_call_tramp_key *tramp_key;
+-
+- for (tramp_key = start; tramp_key != stop; tramp_key++) {
+- unsigned long tramp;
+-
+- tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
+- if (tramp == addr)
+- return (long)tramp_key->key + (long)&tramp_key->key;
+- }
+-
+- return 0;
+-}
+-
+-static int static_call_add_module(struct module *mod)
+-{
+- struct static_call_site *start = mod->static_call_sites;
+- struct static_call_site *stop = start + mod->num_static_call_sites;
+- struct static_call_site *site;
+-
+- for (site = start; site != stop; site++) {
+- unsigned long s_key = __static_call_key(site);
+- unsigned long addr = s_key & ~STATIC_CALL_SITE_FLAGS;
+- unsigned long key;
+-
+- /*
+- * Is the key is exported, 'addr' points to the key, which
+- * means modules are allowed to call static_call_update() on
+- * it.
+- *
+- * Otherwise, the key isn't exported, and 'addr' points to the
+- * trampoline so we need to lookup the key.
+- *
+- * We go through this dance to prevent crazy modules from
+- * abusing sensitive static calls.
+- */
+- if (!kernel_text_address(addr))
+- continue;
+-
+- key = tramp_key_lookup(addr);
+- if (!key) {
+- pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
+- static_call_addr(site));
+- return -EINVAL;
+- }
+-
+- key |= s_key & STATIC_CALL_SITE_FLAGS;
+- site->key = key - (long)&site->key;
+- }
+-
+- return __static_call_init(mod, start, stop);
+-}
+-
+-static void static_call_del_module(struct module *mod)
+-{
+- struct static_call_site *start = mod->static_call_sites;
+- struct static_call_site *stop = mod->static_call_sites +
+- mod->num_static_call_sites;
+- struct static_call_key *key, *prev_key = NULL;
+- struct static_call_mod *site_mod, **prev;
+- struct static_call_site *site;
+-
+- for (site = start; site < stop; site++) {
+- key = static_call_key(site);
+- if (key == prev_key)
+- continue;
+-
+- prev_key = key;
+-
+- for (prev = &key->mods, site_mod = key->mods;
+- site_mod && site_mod->mod != mod;
+- prev = &site_mod->next, site_mod = site_mod->next)
+- ;
+-
+- if (!site_mod)
+- continue;
+-
+- *prev = site_mod->next;
+- kfree(site_mod);
+- }
+-}
+-
+-static int static_call_module_notify(struct notifier_block *nb,
+- unsigned long val, void *data)
+-{
+- struct module *mod = data;
+- int ret = 0;
+-
+- cpus_read_lock();
+- static_call_lock();
+-
+- switch (val) {
+- case MODULE_STATE_COMING:
+- ret = static_call_add_module(mod);
+- if (ret) {
+- WARN(1, "Failed to allocate memory for static calls");
+- static_call_del_module(mod);
+- }
+- break;
+- case MODULE_STATE_GOING:
+- static_call_del_module(mod);
+- break;
+- }
+-
+- static_call_unlock();
+- cpus_read_unlock();
+-
+- return notifier_from_errno(ret);
+-}
+-
+-static struct notifier_block static_call_module_nb = {
+- .notifier_call = static_call_module_notify,
+-};
+-
+-#else
+-
+-static inline int __static_call_mod_text_reserved(void *start, void *end)
+-{
+- return 0;
+-}
+-
+-#endif /* CONFIG_MODULES */
+-
+-int static_call_text_reserved(void *start, void *end)
+-{
+- bool init = system_state < SYSTEM_RUNNING;
+- int ret = __static_call_text_reserved(__start_static_call_sites,
+- __stop_static_call_sites, start, end, init);
+-
+- if (ret)
+- return ret;
+-
+- return __static_call_mod_text_reserved(start, end);
+-}
+-
+-int __init static_call_init(void)
+-{
+- int ret;
+-
+- if (static_call_initialized)
+- return 0;
+-
+- cpus_read_lock();
+- static_call_lock();
+- ret = __static_call_init(NULL, __start_static_call_sites,
+- __stop_static_call_sites);
+- static_call_unlock();
+- cpus_read_unlock();
+-
+- if (ret) {
+- pr_err("Failed to allocate memory for static_call!\n");
+- BUG();
+- }
+-
+- static_call_initialized = true;
+-
+-#ifdef CONFIG_MODULES
+- register_module_notifier(&static_call_module_nb);
+-#endif
+- return 0;
+-}
+-early_initcall(static_call_init);
+
+ long __static_call_return0(void)
+ {
+ return 0;
+ }
+-
+-#ifdef CONFIG_STATIC_CALL_SELFTEST
+-
+-static int func_a(int x)
+-{
+- return x+1;
+-}
+-
+-static int func_b(int x)
+-{
+- return x+2;
+-}
+-
+-DEFINE_STATIC_CALL(sc_selftest, func_a);
+-
+-static struct static_call_data {
+- int (*func)(int);
+- int val;
+- int expect;
+-} static_call_data [] __initdata = {
+- { NULL, 2, 3 },
+- { func_b, 2, 4 },
+- { func_a, 2, 3 }
+-};
+-
+-static int __init test_static_call_init(void)
+-{
+- int i;
+-
+- for (i = 0; i < ARRAY_SIZE(static_call_data); i++ ) {
+- struct static_call_data *scd = &static_call_data[i];
+-
+- if (scd->func)
+- static_call_update(sc_selftest, scd->func);
+-
+- WARN_ON(static_call(sc_selftest)(scd->val) != scd->expect);
+- }
+-
+- return 0;
+-}
+-early_initcall(test_static_call_init);
+-
+-#endif /* CONFIG_STATIC_CALL_SELFTEST */
++EXPORT_SYMBOL_GPL(__static_call_return0);
+diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
+new file mode 100644
+index 0000000000000..dc5665b628140
+--- /dev/null
++++ b/kernel/static_call_inline.c
+@@ -0,0 +1,543 @@
++// SPDX-License-Identifier: GPL-2.0
++#include <linux/init.h>
++#include <linux/static_call.h>
++#include <linux/bug.h>
++#include <linux/smp.h>
++#include <linux/sort.h>
++#include <linux/slab.h>
++#include <linux/module.h>
++#include <linux/cpu.h>
++#include <linux/processor.h>
++#include <asm/sections.h>
++
++extern struct static_call_site __start_static_call_sites[],
++ __stop_static_call_sites[];
++extern struct static_call_tramp_key __start_static_call_tramp_key[],
++ __stop_static_call_tramp_key[];
++
++static bool static_call_initialized;
++
++/* mutex to protect key modules/sites */
++static DEFINE_MUTEX(static_call_mutex);
++
++static void static_call_lock(void)
++{
++ mutex_lock(&static_call_mutex);
++}
++
++static void static_call_unlock(void)
++{
++ mutex_unlock(&static_call_mutex);
++}
++
++static inline void *static_call_addr(struct static_call_site *site)
++{
++ return (void *)((long)site->addr + (long)&site->addr);
++}
++
++static inline unsigned long __static_call_key(const struct static_call_site *site)
++{
++ return (long)site->key + (long)&site->key;
++}
++
++static inline struct static_call_key *static_call_key(const struct static_call_site *site)
++{
++ return (void *)(__static_call_key(site) & ~STATIC_CALL_SITE_FLAGS);
++}
++
++/* These assume the key is word-aligned. */
++static inline bool static_call_is_init(struct static_call_site *site)
++{
++ return __static_call_key(site) & STATIC_CALL_SITE_INIT;
++}
++
++static inline bool static_call_is_tail(struct static_call_site *site)
++{
++ return __static_call_key(site) & STATIC_CALL_SITE_TAIL;
++}
++
++static inline void static_call_set_init(struct static_call_site *site)
++{
++ site->key = (__static_call_key(site) | STATIC_CALL_SITE_INIT) -
++ (long)&site->key;
++}
++
++static int static_call_site_cmp(const void *_a, const void *_b)
++{
++ const struct static_call_site *a = _a;
++ const struct static_call_site *b = _b;
++ const struct static_call_key *key_a = static_call_key(a);
++ const struct static_call_key *key_b = static_call_key(b);
++
++ if (key_a < key_b)
++ return -1;
++
++ if (key_a > key_b)
++ return 1;
++
++ return 0;
++}
++
++static void static_call_site_swap(void *_a, void *_b, int size)
++{
++ long delta = (unsigned long)_a - (unsigned long)_b;
++ struct static_call_site *a = _a;
++ struct static_call_site *b = _b;
++ struct static_call_site tmp = *a;
++
++ a->addr = b->addr - delta;
++ a->key = b->key - delta;
++
++ b->addr = tmp.addr + delta;
++ b->key = tmp.key + delta;
++}
++
++static inline void static_call_sort_entries(struct static_call_site *start,
++ struct static_call_site *stop)
++{
++ sort(start, stop - start, sizeof(struct static_call_site),
++ static_call_site_cmp, static_call_site_swap);
++}
++
++static inline bool static_call_key_has_mods(struct static_call_key *key)
++{
++ return !(key->type & 1);
++}
++
++static inline struct static_call_mod *static_call_key_next(struct static_call_key *key)
++{
++ if (!static_call_key_has_mods(key))
++ return NULL;
++
++ return key->mods;
++}
++
++static inline struct static_call_site *static_call_key_sites(struct static_call_key *key)
++{
++ if (static_call_key_has_mods(key))
++ return NULL;
++
++ return (struct static_call_site *)(key->type & ~1);
++}
++
++void __static_call_update(struct static_call_key *key, void *tramp, void *func)
++{
++ struct static_call_site *site, *stop;
++ struct static_call_mod *site_mod, first;
++
++ cpus_read_lock();
++ static_call_lock();
++
++ if (key->func == func)
++ goto done;
++
++ key->func = func;
++
++ arch_static_call_transform(NULL, tramp, func, false);
++
++ /*
++ * If uninitialized, we'll not update the callsites, but they still
++ * point to the trampoline and we just patched that.
++ */
++ if (WARN_ON_ONCE(!static_call_initialized))
++ goto done;
++
++ first = (struct static_call_mod){
++ .next = static_call_key_next(key),
++ .mod = NULL,
++ .sites = static_call_key_sites(key),
++ };
++
++ for (site_mod = &first; site_mod; site_mod = site_mod->next) {
++ bool init = system_state < SYSTEM_RUNNING;
++ struct module *mod = site_mod->mod;
++
++ if (!site_mod->sites) {
++ /*
++ * This can happen if the static call key is defined in
++ * a module which doesn't use it.
++ *
++ * It also happens in the has_mods case, where the
++ * 'first' entry has no sites associated with it.
++ */
++ continue;
++ }
++
++ stop = __stop_static_call_sites;
++
++ if (mod) {
++#ifdef CONFIG_MODULES
++ stop = mod->static_call_sites +
++ mod->num_static_call_sites;
++ init = mod->state == MODULE_STATE_COMING;
++#endif
++ }
++
++ for (site = site_mod->sites;
++ site < stop && static_call_key(site) == key; site++) {
++ void *site_addr = static_call_addr(site);
++
++ if (!init && static_call_is_init(site))
++ continue;
++
++ if (!kernel_text_address((unsigned long)site_addr)) {
++ /*
++ * This skips patching built-in __exit, which
++ * is part of init_section_contains() but is
++ * not part of kernel_text_address().
++ *
++ * Skipping built-in __exit is fine since it
++ * will never be executed.
++ */
++ WARN_ONCE(!static_call_is_init(site),
++ "can't patch static call site at %pS",
++ site_addr);
++ continue;
++ }
++
++ arch_static_call_transform(site_addr, NULL, func,
++ static_call_is_tail(site));
++ }
++ }
++
++done:
++ static_call_unlock();
++ cpus_read_unlock();
++}
++EXPORT_SYMBOL_GPL(__static_call_update);
++
++static int __static_call_init(struct module *mod,
++ struct static_call_site *start,
++ struct static_call_site *stop)
++{
++ struct static_call_site *site;
++ struct static_call_key *key, *prev_key = NULL;
++ struct static_call_mod *site_mod;
++
++ if (start == stop)
++ return 0;
++
++ static_call_sort_entries(start, stop);
++
++ for (site = start; site < stop; site++) {
++ void *site_addr = static_call_addr(site);
++
++ if ((mod && within_module_init((unsigned long)site_addr, mod)) ||
++ (!mod && init_section_contains(site_addr, 1)))
++ static_call_set_init(site);
++
++ key = static_call_key(site);
++ if (key != prev_key) {
++ prev_key = key;
++
++ /*
++ * For vmlinux (!mod) avoid the allocation by storing
++ * the sites pointer in the key itself. Also see
++ * __static_call_update()'s @first.
++ *
++ * This allows architectures (eg. x86) to call
++ * static_call_init() before memory allocation works.
++ */
++ if (!mod) {
++ key->sites = site;
++ key->type |= 1;
++ goto do_transform;
++ }
++
++ site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
++ if (!site_mod)
++ return -ENOMEM;
++
++ /*
++ * When the key has a direct sites pointer, extract
++ * that into an explicit struct static_call_mod, so we
++ * can have a list of modules.
++ */
++ if (static_call_key_sites(key)) {
++ site_mod->mod = NULL;
++ site_mod->next = NULL;
++ site_mod->sites = static_call_key_sites(key);
++
++ key->mods = site_mod;
++
++ site_mod = kzalloc(sizeof(*site_mod), GFP_KERNEL);
++ if (!site_mod)
++ return -ENOMEM;
++ }
++
++ site_mod->mod = mod;
++ site_mod->sites = site;
++ site_mod->next = static_call_key_next(key);
++ key->mods = site_mod;
++ }
++
++do_transform:
++ arch_static_call_transform(site_addr, NULL, key->func,
++ static_call_is_tail(site));
++ }
++
++ return 0;
++}
++
++static int addr_conflict(struct static_call_site *site, void *start, void *end)
++{
++ unsigned long addr = (unsigned long)static_call_addr(site);
++
++ if (addr <= (unsigned long)end &&
++ addr + CALL_INSN_SIZE > (unsigned long)start)
++ return 1;
++
++ return 0;
++}
++
++static int __static_call_text_reserved(struct static_call_site *iter_start,
++ struct static_call_site *iter_stop,
++ void *start, void *end, bool init)
++{
++ struct static_call_site *iter = iter_start;
++
++ while (iter < iter_stop) {
++ if (init || !static_call_is_init(iter)) {
++ if (addr_conflict(iter, start, end))
++ return 1;
++ }
++ iter++;
++ }
++
++ return 0;
++}
++
++#ifdef CONFIG_MODULES
++
++static int __static_call_mod_text_reserved(void *start, void *end)
++{
++ struct module *mod;
++ int ret;
++
++ preempt_disable();
++ mod = __module_text_address((unsigned long)start);
++ WARN_ON_ONCE(__module_text_address((unsigned long)end) != mod);
++ if (!try_module_get(mod))
++ mod = NULL;
++ preempt_enable();
++
++ if (!mod)
++ return 0;
++
++ ret = __static_call_text_reserved(mod->static_call_sites,
++ mod->static_call_sites + mod->num_static_call_sites,
++ start, end, mod->state == MODULE_STATE_COMING);
++
++ module_put(mod);
++
++ return ret;
++}
++
++static unsigned long tramp_key_lookup(unsigned long addr)
++{
++ struct static_call_tramp_key *start = __start_static_call_tramp_key;
++ struct static_call_tramp_key *stop = __stop_static_call_tramp_key;
++ struct static_call_tramp_key *tramp_key;
++
++ for (tramp_key = start; tramp_key != stop; tramp_key++) {
++ unsigned long tramp;
++
++ tramp = (long)tramp_key->tramp + (long)&tramp_key->tramp;
++ if (tramp == addr)
++ return (long)tramp_key->key + (long)&tramp_key->key;
++ }
++
++ return 0;
++}
++
++static int static_call_add_module(struct module *mod)
++{
++ struct static_call_site *start = mod->static_call_sites;
++ struct static_call_site *stop = start + mod->num_static_call_sites;
++ struct static_call_site *site;
++
++ for (site = start; site != stop; site++) {
++ unsigned long s_key = __static_call_key(site);
++ unsigned long addr = s_key & ~STATIC_CALL_SITE_FLAGS;
++ unsigned long key;
++
++ /*
++ * If the key is exported, 'addr' points to the key, which
++ * means modules are allowed to call static_call_update() on
++ * it.
++ *
++ * Otherwise, the key isn't exported, and 'addr' points to the
++ * trampoline, so we need to look up the key.
++ *
++ * We go through this dance to prevent crazy modules from
++ * abusing sensitive static calls.
++ */
++ if (!kernel_text_address(addr))
++ continue;
++
++ key = tramp_key_lookup(addr);
++ if (!key) {
++ pr_warn("Failed to fixup __raw_static_call() usage at: %ps\n",
++ static_call_addr(site));
++ return -EINVAL;
++ }
++
++ key |= s_key & STATIC_CALL_SITE_FLAGS;
++ site->key = key - (long)&site->key;
++ }
++
++ return __static_call_init(mod, start, stop);
++}
++
++static void static_call_del_module(struct module *mod)
++{
++ struct static_call_site *start = mod->static_call_sites;
++ struct static_call_site *stop = mod->static_call_sites +
++ mod->num_static_call_sites;
++ struct static_call_key *key, *prev_key = NULL;
++ struct static_call_mod *site_mod, **prev;
++ struct static_call_site *site;
++
++ for (site = start; site < stop; site++) {
++ key = static_call_key(site);
++ if (key == prev_key)
++ continue;
++
++ prev_key = key;
++
++ for (prev = &key->mods, site_mod = key->mods;
++ site_mod && site_mod->mod != mod;
++ prev = &site_mod->next, site_mod = site_mod->next)
++ ;
++
++ if (!site_mod)
++ continue;
++
++ *prev = site_mod->next;
++ kfree(site_mod);
++ }
++}
++
++static int static_call_module_notify(struct notifier_block *nb,
++ unsigned long val, void *data)
++{
++ struct module *mod = data;
++ int ret = 0;
++
++ cpus_read_lock();
++ static_call_lock();
++
++ switch (val) {
++ case MODULE_STATE_COMING:
++ ret = static_call_add_module(mod);
++ if (ret) {
++ WARN(1, "Failed to allocate memory for static calls");
++ static_call_del_module(mod);
++ }
++ break;
++ case MODULE_STATE_GOING:
++ static_call_del_module(mod);
++ break;
++ }
++
++ static_call_unlock();
++ cpus_read_unlock();
++
++ return notifier_from_errno(ret);
++}
++
++static struct notifier_block static_call_module_nb = {
++ .notifier_call = static_call_module_notify,
++};
++
++#else
++
++static inline int __static_call_mod_text_reserved(void *start, void *end)
++{
++ return 0;
++}
++
++#endif /* CONFIG_MODULES */
++
++int static_call_text_reserved(void *start, void *end)
++{
++ bool init = system_state < SYSTEM_RUNNING;
++ int ret = __static_call_text_reserved(__start_static_call_sites,
++ __stop_static_call_sites, start, end, init);
++
++ if (ret)
++ return ret;
++
++ return __static_call_mod_text_reserved(start, end);
++}
++
++int __init static_call_init(void)
++{
++ int ret;
++
++ if (static_call_initialized)
++ return 0;
++
++ cpus_read_lock();
++ static_call_lock();
++ ret = __static_call_init(NULL, __start_static_call_sites,
++ __stop_static_call_sites);
++ static_call_unlock();
++ cpus_read_unlock();
++
++ if (ret) {
++ pr_err("Failed to allocate memory for static_call!\n");
++ BUG();
++ }
++
++ static_call_initialized = true;
++
++#ifdef CONFIG_MODULES
++ register_module_notifier(&static_call_module_nb);
++#endif
++ return 0;
++}
++early_initcall(static_call_init);
++
++#ifdef CONFIG_STATIC_CALL_SELFTEST
++
++static int func_a(int x)
++{
++ return x+1;
++}
++
++static int func_b(int x)
++{
++ return x+2;
++}
++
++DEFINE_STATIC_CALL(sc_selftest, func_a);
++
++static struct static_call_data {
++ int (*func)(int);
++ int val;
++ int expect;
++} static_call_data [] __initdata = {
++ { NULL, 2, 3 },
++ { func_b, 2, 4 },
++ { func_a, 2, 3 }
++};
++
++static int __init test_static_call_init(void)
++{
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(static_call_data); i++ ) {
++ struct static_call_data *scd = &static_call_data[i];
++
++ if (scd->func)
++ static_call_update(sc_selftest, scd->func);
++
++ WARN_ON(static_call(sc_selftest)(scd->val) != scd->expect);
++ }
++
++ return 0;
++}
++early_initcall(test_static_call_init);
++
++#endif /* CONFIG_STATIC_CALL_SELFTEST */
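
After the split, kernel/static_call.c holds only __static_call_return0(), now a real exported function rather than a static inline: static-call updates need a callable symbol with a stable address to patch into trampolines, and an inline has no address to take. The "real default target" idea with an ordinary function pointer (a sketch, not the kernel's patching machinery):

#include <stdio.h>

/* A genuine function with a stable address, usable as a default target. */
static long return0(void)
{
        return 0;
}

static long (*callee)(void) = return0;

int main(void)
{
        printf("%ld\n", callee());      /* 0 until someone retargets it */
        return 0;
}
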
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 14b89aa37c5c9..440fd666c16d1 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -416,7 +416,8 @@ config SECTION_MISMATCH_WARN_ONLY
+ If unsure, say Y.
+
+ config DEBUG_FORCE_FUNCTION_ALIGN_64B
+- bool "Force all function address 64B aligned" if EXPERT
++ bool "Force all function address 64B aligned"
++ depends on EXPERT && (X86_64 || ARM64 || PPC32 || PPC64 || ARC)
+ help
+ There are cases that a commit from one domain changes the function
+ address alignment of other domains, and cause magic performance
+diff --git a/lib/logic_iomem.c b/lib/logic_iomem.c
+index 8c3365f26e51d..b247d412ddef7 100644
+--- a/lib/logic_iomem.c
++++ b/lib/logic_iomem.c
+@@ -68,7 +68,7 @@ int logic_iomem_add_region(struct resource *resource,
+ }
+ EXPORT_SYMBOL(logic_iomem_add_region);
+
+-#ifndef CONFIG_LOGIC_IOMEM_FALLBACK
++#ifndef CONFIG_INDIRECT_IOMEM_FALLBACK
+ static void __iomem *real_ioremap(phys_addr_t offset, size_t size)
+ {
+ WARN(1, "invalid ioremap(0x%llx, 0x%zx)\n",
+@@ -81,7 +81,7 @@ static void real_iounmap(volatile void __iomem *addr)
+ WARN(1, "invalid iounmap for addr 0x%llx\n",
+ (unsigned long long)(uintptr_t __force)addr);
+ }
+-#endif /* CONFIG_LOGIC_IOMEM_FALLBACK */
++#endif /* CONFIG_INDIRECT_IOMEM_FALLBACK */
+
+ void __iomem *ioremap(phys_addr_t offset, size_t size)
+ {
+@@ -168,7 +168,7 @@ void iounmap(volatile void __iomem *addr)
+ }
+ EXPORT_SYMBOL(iounmap);
+
+-#ifndef CONFIG_LOGIC_IOMEM_FALLBACK
++#ifndef CONFIG_INDIRECT_IOMEM_FALLBACK
+ #define MAKE_FALLBACK(op, sz) \
+ static u##sz real_raw_read ## op(const volatile void __iomem *addr) \
+ { \
+@@ -213,7 +213,7 @@ static void real_memcpy_toio(volatile void __iomem *addr, const void *buffer,
+ WARN(1, "Invalid memcpy_toio at address 0x%llx\n",
+ (unsigned long long)(uintptr_t __force)addr);
+ }
+-#endif /* CONFIG_LOGIC_IOMEM_FALLBACK */
++#endif /* CONFIG_INDIRECT_IOMEM_FALLBACK */
+
+ #define MAKE_OP(op, sz) \
+ u##sz __raw_read ## op(const volatile void __iomem *addr) \
+diff --git a/lib/lz4/lz4_decompress.c b/lib/lz4/lz4_decompress.c
+index 926f4823d5eac..fd1728d94babb 100644
+--- a/lib/lz4/lz4_decompress.c
++++ b/lib/lz4/lz4_decompress.c
+@@ -271,8 +271,12 @@ static FORCE_INLINE int LZ4_decompress_generic(
+ ip += length;
+ op += length;
+
+- /* Necessarily EOF, due to parsing restrictions */
+- if (!partialDecoding || (cpy == oend))
++ /* Necessarily EOF when !partialDecoding.
++ * When partialDecoding, it is EOF if we've either
++ * filled the output buffer or
++ * can't proceed with reading an offset for the following match.
++ */
++ if (!partialDecoding || (cpy == oend) || (ip >= (iend - 2)))
+ break;
+ } else {
+ /* may overwrite up to WILDCOPYLENGTH beyond cpy */
+diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
+index a6789c0c626b0..32ff6bd497f8e 100644
+--- a/lib/ref_tracker.c
++++ b/lib/ref_tracker.c
+@@ -20,6 +20,7 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
+ unsigned long flags;
+ bool leak = false;
+
++ dir->dead = true;
+ spin_lock_irqsave(&dir->lock, flags);
+ list_for_each_entry_safe(tracker, n, &dir->quarantine, head) {
+ list_del(&tracker->head);
+@@ -72,6 +73,8 @@ int ref_tracker_alloc(struct ref_tracker_dir *dir,
+ gfp_t gfp_mask = gfp;
+ unsigned long flags;
+
++ WARN_ON_ONCE(dir->dead);
++
+ if (gfp & __GFP_DIRECT_RECLAIM)
+ gfp_mask |= __GFP_NOFAIL;
+ *trackerp = tracker = kzalloc(sizeof(*tracker), gfp_mask);
+@@ -100,6 +103,8 @@ int ref_tracker_free(struct ref_tracker_dir *dir,
+ unsigned int nr_entries;
+ unsigned long flags;
+
++ WARN_ON_ONCE(dir->dead);
++
+ if (!tracker) {
+ refcount_dec(&dir->untracked);
+ return -EEXIST;
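
The ref_tracker change sets a dead flag in ref_tracker_dir_exit() so that any later ref_tracker_alloc() or ref_tracker_free() on that directory trips WARN_ON_ONCE() instead of silently touching torn-down state. The guard in miniature (stderr standing in for the kernel warning):

#include <stdbool.h>
#include <stdio.h>

struct dir {
        bool dead;
        int refs;
};

static void dir_exit(struct dir *d)
{
        d->dead = true;         /* from here on, the directory is off limits */
}

static void dir_take_ref(struct dir *d)
{
        if (d->dead)            /* kernel: WARN_ON_ONCE(dir->dead) */
                fprintf(stderr, "ref taken on dead dir\n");
        d->refs++;
}

int main(void)
{
        struct dir d = { 0 };

        dir_exit(&d);
        dir_take_ref(&d);       /* flagged: use after exit */
        return 0;
}
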
+diff --git a/mm/highmem.c b/mm/highmem.c
+index 762679050c9a0..916b66e0776c2 100644
+--- a/mm/highmem.c
++++ b/mm/highmem.c
+@@ -624,7 +624,7 @@ void __kmap_local_sched_out(void)
+
+ /* With debug all even slots are unmapped and act as guard */
+ if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {
+- WARN_ON_ONCE(!pte_none(pteval));
++ WARN_ON_ONCE(pte_val(pteval) != 0);
+ continue;
+ }
+ if (WARN_ON_ONCE(pte_none(pteval)))
+@@ -661,7 +661,7 @@ void __kmap_local_sched_in(void)
+
+ /* With debug all even slots are unmapped and act as guard */
+ if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {
+- WARN_ON_ONCE(!pte_none(pteval));
++ WARN_ON_ONCE(pte_val(pteval) != 0);
+ continue;
+ }
+ if (WARN_ON_ONCE(pte_none(pteval)))
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index 13128fa130625..d4c7978cd75e2 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -555,6 +555,8 @@ static bool __init kfence_init_pool(void)
+ * enters __slab_free() slow-path.
+ */
+ for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
++ struct slab *slab = page_slab(&pages[i]);
++
+ if (!i || (i % 2))
+ continue;
+
+@@ -562,7 +564,11 @@ static bool __init kfence_init_pool(void)
+ if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
+ goto err;
+
+- __SetPageSlab(&pages[i]);
++ __folio_set_slab(slab_folio(slab));
++#ifdef CONFIG_MEMCG
++ slab->memcg_data = (unsigned long)&kfence_metadata[i / 2 - 1].objcg |
++ MEMCG_DATA_OBJCGS;
++#endif
+ }
+
+ /*
+@@ -938,6 +944,9 @@ void __kfence_free(void *addr)
+ {
+ struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
+
++#ifdef CONFIG_MEMCG
++ KFENCE_WARN_ON(meta->objcg);
++#endif
+ /*
+ * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
+ * the object, as the object page may be recycled for other-typed
+diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
+index 2a2d5de9d3791..9a6c4b1b12a88 100644
+--- a/mm/kfence/kfence.h
++++ b/mm/kfence/kfence.h
+@@ -89,6 +89,9 @@ struct kfence_metadata {
+ struct kfence_track free_track;
+ /* For updating alloc_covered on frees. */
+ u32 alloc_stack_hash;
++#ifdef CONFIG_MEMCG
++ struct obj_cgroup *objcg;
++#endif
+ };
+
+ extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 1628cd90d9fcc..f468bd911c3b9 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2736,6 +2736,7 @@ alloc_new:
+ mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
+ if (!mpol_new)
+ goto err_out;
++ atomic_set(&mpol_new->refcnt, 1);
+ goto restart;
+ }
+
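
The mempolicy fix adds the one line that initializes the refcount of a policy freshly taken from the slab cache: kmem_cache_alloc() returns uninitialized memory, so without the atomic_set() the new object starts life with a garbage reference count. The rule in a small sketch (C11 atomics standing in for the kernel's atomic_t):

#include <stdatomic.h>
#include <stdlib.h>

struct mempolicy {
        atomic_int refcnt;
        int mode;
};

static struct mempolicy *mpol_alloc(int mode)
{
        struct mempolicy *p = malloc(sizeof(*p)); /* uninitialized, like slab */

        if (!p)
                return NULL;
        atomic_init(&p->refcnt, 1);     /* the step the patch adds */
        p->mode = mode;
        return p;
}

int main(void)
{
        free(mpol_alloc(0));
        return 0;
}
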
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 002eec83e91e5..0e175aef536e1 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -486,6 +486,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ pmd_t *old_pmd, *new_pmd;
+ pud_t *old_pud, *new_pud;
+
++ if (!len)
++ return 0;
++
+ old_end = old_addr + len;
+ flush_cache_range(vma, old_addr, old_end);
+
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 6a1e8c7f62136..9e27f9f038d39 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1599,7 +1599,30 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
+
+ /* MADV_FREE page check */
+ if (!PageSwapBacked(page)) {
+- if (!PageDirty(page)) {
++ int ref_count, map_count;
++
++ /*
++ * Synchronize with gup_pte_range():
++ * - clear PTE; barrier; read refcount
++ * - inc refcount; barrier; read PTE
++ */
++ smp_mb();
++
++ ref_count = page_ref_count(page);
++ map_count = page_mapcount(page);
++
++ /*
++ * Order reads for page refcount and dirty flag
++ * (see comments in __remove_mapping()).
++ */
++ smp_rmb();
++
++ /*
++ * The only page refs must be one from isolation
++ * plus the rmap(s) (dropped by discard:).
++ */
++ if (ref_count == 1 + map_count &&
++ !PageDirty(page)) {
+ /* Invalidate as we cleared the pte */
+ mmu_notifier_invalidate_range(mm,
+ address, address + PAGE_SIZE);
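
The reworked MADV_FREE check only discards a clean anonymous page when its reference count equals the isolation reference plus one per mapping, i.e. when no one else (such as a concurrent GUP) holds an extra pin; the smp_mb()/smp_rmb() pair orders those reads against the PTE clear. The arithmetic of the invariant, minus the barriers (illustrative only):

#include <stdio.h>

/* A page is safe to discard iff the only references are the one taken
 * by isolation plus one per rmap; any extra reference means a pin. */
static int can_discard(int ref_count, int map_count, int dirty)
{
        return ref_count == 1 + map_count && !dirty;
}

int main(void)
{
        printf("%d\n", can_discard(2, 1, 0));   /* 1: isolation + one rmap */
        printf("%d\n", can_discard(3, 1, 0));   /* 0: extra pin, keep page */
        return 0;
}
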
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index f4004cf0ff6fb..9f311fddfaf9a 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -134,7 +134,7 @@ static u8 batadv_mcast_mla_rtr_flags_softif_get_ipv6(struct net_device *dev)
+ {
+ struct inet6_dev *in6_dev = __in6_dev_get(dev);
+
+- if (in6_dev && in6_dev->cnf.mc_forwarding)
++ if (in6_dev && atomic_read(&in6_dev->cnf.mc_forwarding))
+ return BATADV_NO_FLAGS;
+ else
+ return BATADV_MCAST_WANT_NO_RTR6;
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3bb2b3b6a1c92..84312c8365493 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -691,6 +691,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+
+ bacpy(&conn->dst, dst);
+ bacpy(&conn->src, &hdev->bdaddr);
++ conn->handle = HCI_CONN_HANDLE_UNSET;
+ conn->hdev = hdev;
+ conn->type = type;
+ conn->role = role;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index a105b7317560c..d984777c9b58b 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3068,6 +3068,11 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ struct hci_ev_conn_complete *ev = data;
+ struct hci_conn *conn;
+
++ if (__le16_to_cpu(ev->handle) > HCI_CONN_HANDLE_MAX) {
++ bt_dev_err(hdev, "Ignoring HCI_Connection_Complete for invalid handle");
++ return;
++ }
++
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+
+ hci_dev_lock(hdev);
+@@ -3106,6 +3111,17 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ }
+ }
+
++ /* The HCI_Connection_Complete event is only sent once per connection.
++ * Processing it more than once per connection can corrupt kernel memory.
++ *
++ * As the connection handle is set here for the first time, it indicates
++ * whether the connection is already set up.
++ */
++ if (conn->handle != HCI_CONN_HANDLE_UNSET) {
++ bt_dev_err(hdev, "Ignoring HCI_Connection_Complete for existing connection");
++ goto unlock;
++ }
++
+ if (!ev->status) {
+ conn->handle = __le16_to_cpu(ev->handle);
+
+@@ -4661,6 +4677,24 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ struct hci_ev_sync_conn_complete *ev = data;
+ struct hci_conn *conn;
+
++ switch (ev->link_type) {
++ case SCO_LINK:
++ case ESCO_LINK:
++ break;
++ default:
++ /* As per Core 5.3 Vol 4 Part E 7.7.35 (p.2219), Link_Type
++ * for HCI_Synchronous_Connection_Complete is limited to
++ * either SCO or eSCO
++ */
++ bt_dev_err(hdev, "Ignoring connect complete event for invalid link type");
++ return;
++ }
++
++ if (__le16_to_cpu(ev->handle) > HCI_CONN_HANDLE_MAX) {
++ bt_dev_err(hdev, "Ignoring HCI_Sync_Conn_Complete for invalid handle");
++ return;
++ }
++
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+
+ hci_dev_lock(hdev);
+@@ -4684,23 +4718,19 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
++ /* The HCI_Synchronous_Connection_Complete event is only sent once per connection.
++ * Processing it more than once per connection can corrupt kernel memory.
++ *
++ * As the connection handle is set here for the first time, it indicates
++ * whether the connection is already set up.
++ */
++ if (conn->handle != HCI_CONN_HANDLE_UNSET) {
++ bt_dev_err(hdev, "Ignoring HCI_Sync_Conn_Complete event for existing connection");
++ goto unlock;
++ }
++
+ switch (ev->status) {
+ case 0x00:
+- /* The synchronous connection complete event should only be
+- * sent once per new connection. Receiving a successful
+- * complete event when the connection status is already
+- * BT_CONNECTED means that the device is misbehaving and sent
+- * multiple complete event packets for the same new connection.
+- *
+- * Registering the device more than once can corrupt kernel
+- * memory, hence upon detecting this invalid event, we report
+- * an error and ignore the packet.
+- */
+- if (conn->state == BT_CONNECTED) {
+- bt_dev_err(hdev, "Ignoring connect complete event for existing connection");
+- goto unlock;
+- }
+-
+ conn->handle = __le16_to_cpu(ev->handle);
+ conn->state = BT_CONNECTED;
+ conn->type = ev->link_type;
+@@ -5423,8 +5453,9 @@ static void hci_disconn_phylink_complete_evt(struct hci_dev *hdev, void *data,
+ hci_dev_lock(hdev);
+
+ hcon = hci_conn_hash_lookup_handle(hdev, ev->phy_handle);
+- if (hcon) {
++ if (hcon && hcon->type == AMP_LINK) {
+ hcon->state = BT_CLOSED;
++ hci_disconn_cfm(hcon, ev->reason);
+ hci_conn_del(hcon);
+ }
+
+@@ -5496,6 +5527,11 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ struct smp_irk *irk;
+ u8 addr_type;
+
++ if (handle > HCI_CONN_HANDLE_MAX) {
++ bt_dev_err(hdev, "Ignoring HCI_LE_Connection_Complete for invalid handle");
++ return;
++ }
++
+ hci_dev_lock(hdev);
+
+ /* All controllers implicitly stop advertising in the event of a
+@@ -5537,6 +5573,17 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ cancel_delayed_work(&conn->le_conn_timeout);
+ }
+
++ /* The HCI_LE_Connection_Complete event is only sent once per connection.
++ * Processing it more than once per connection can corrupt kernel memory.
++ *
++ * As the connection handle is set here for the first time, it indicates
++ * whether the connection is already set up.
++ */
++ if (conn->handle != HCI_CONN_HANDLE_UNSET) {
++ bt_dev_err(hdev, "Ignoring HCI_Connection_Complete for existing connection");
++ goto unlock;
++ }
++
+ le_conn_update_addr(conn, bdaddr, bdaddr_type, local_rpa);
+
+ /* Lookup the identity address from the stored connection
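
All three connection-complete paths in hci_event.c now apply the same two guards: drop events whose handle exceeds HCI_CONN_HANDLE_MAX, and treat a handle that is no longer HCI_CONN_HANDLE_UNSET as proof the event was already processed for this connection. The sentinel idiom on its own (the two constants are copied from the hci_core.h hunk; everything else is illustrative):

#include <stdint.h>
#include <stdio.h>

#define HANDLE_UNSET 0xffff     /* HCI_CONN_HANDLE_UNSET */
#define HANDLE_MAX   0x0eff     /* HCI_CONN_HANDLE_MAX */

struct conn { uint16_t handle; };

static int conn_complete(struct conn *c, uint16_t handle)
{
        if (handle > HANDLE_MAX)
                return -1;      /* spec: valid handles are <= 0x0eff */
        if (c->handle != HANDLE_UNSET)
                return -1;      /* duplicate event for a live connection */
        c->handle = handle;
        return 0;
}

int main(void)
{
        struct conn c = { .handle = HANDLE_UNSET };

        printf("%d\n", conn_complete(&c, 0x0001));      /* 0: first event */
        printf("%d\n", conn_complete(&c, 0x0001));      /* -1: duplicate */
        return 0;
}
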
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 405d48c3e63ed..8f4c5698913d7 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -379,6 +379,9 @@ int hci_cmd_sync_queue(struct hci_dev *hdev, hci_cmd_sync_work_func_t func,
+ {
+ struct hci_cmd_sync_work_entry *entry;
+
++ if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
++ return -ENODEV;
++
+ entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+@@ -5156,8 +5159,8 @@ static void set_ext_conn_params(struct hci_conn *conn,
+ p->max_ce_len = cpu_to_le16(0x0000);
+ }
+
+-int hci_le_ext_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+- u8 own_addr_type)
++static int hci_le_ext_create_conn_sync(struct hci_dev *hdev,
++ struct hci_conn *conn, u8 own_addr_type)
+ {
+ struct hci_cp_le_ext_create_conn *cp;
+ struct hci_cp_le_ext_conn_param *p;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index e817ff0607a06..8df99c07f2724 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1436,6 +1436,7 @@ static void l2cap_ecred_connect(struct l2cap_chan *chan)
+
+ l2cap_ecred_init(chan, 0);
+
++ memset(&data, 0, sizeof(data));
+ data.pdu.req.psm = chan->psm;
+ data.pdu.req.mtu = cpu_to_le16(chan->imtu);
+ data.pdu.req.mps = cpu_to_le16(chan->mps);
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 46dd957559672..eb43aaac0392c 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -960,7 +960,7 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kat
+ if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx)))
+ goto out;
+
+- if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) {
++ if (user_ctx->local_port > U16_MAX) {
+ ret = -ERANGE;
+ goto out;
+ }
+@@ -968,7 +968,7 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kat
+ ctx.family = (u16)user_ctx->family;
+ ctx.protocol = (u16)user_ctx->protocol;
+ ctx.dport = (u16)user_ctx->local_port;
+- ctx.sport = (__force __be16)user_ctx->remote_port;
++ ctx.sport = user_ctx->remote_port;
+
+ switch (ctx.family) {
+ case AF_INET:
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index a95d171b3a64b..5bce7c66c1219 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -141,6 +141,7 @@ struct isotp_sock {
+ struct can_isotp_options opt;
+ struct can_isotp_fc_options rxfc, txfc;
+ struct can_isotp_ll_options ll;
++ u32 frame_txtime;
+ u32 force_tx_stmin;
+ u32 force_rx_stmin;
+ struct tpcon rx, tx;
+@@ -360,7 +361,7 @@ static int isotp_rcv_fc(struct isotp_sock *so, struct canfd_frame *cf, int ae)
+
+ so->tx_gap = ktime_set(0, 0);
+ /* add transmission time for CAN frame N_As */
+- so->tx_gap = ktime_add_ns(so->tx_gap, so->opt.frame_txtime);
++ so->tx_gap = ktime_add_ns(so->tx_gap, so->frame_txtime);
+ /* add waiting time for consecutive frames N_Cs */
+ if (so->opt.flags & CAN_ISOTP_FORCE_TXSTMIN)
+ so->tx_gap = ktime_add_ns(so->tx_gap,
+@@ -1247,6 +1248,14 @@ static int isotp_setsockopt_locked(struct socket *sock, int level, int optname,
+ /* no separate rx_ext_address is given => use ext_address */
+ if (!(so->opt.flags & CAN_ISOTP_RX_EXT_ADDR))
+ so->opt.rx_ext_address = so->opt.ext_address;
++
++ /* check for frame_txtime changes (0 => no changes) */
++ if (so->opt.frame_txtime) {
++ if (so->opt.frame_txtime == CAN_ISOTP_FRAME_TXTIME_ZERO)
++ so->frame_txtime = 0;
++ else
++ so->frame_txtime = so->opt.frame_txtime;
++ }
+ break;
+
+ case CAN_ISOTP_RECV_FC:
+@@ -1448,6 +1457,7 @@ static int isotp_init(struct sock *sk)
+ so->opt.rxpad_content = CAN_ISOTP_DEFAULT_PAD_CONTENT;
+ so->opt.txpad_content = CAN_ISOTP_DEFAULT_PAD_CONTENT;
+ so->opt.frame_txtime = CAN_ISOTP_DEFAULT_FRAME_TXTIME;
++ so->frame_txtime = CAN_ISOTP_DEFAULT_FRAME_TXTIME;
+ so->rxfc.bs = CAN_ISOTP_DEFAULT_RECV_BS;
+ so->rxfc.stmin = CAN_ISOTP_DEFAULT_RECV_STMIN;
+ so->rxfc.wftmax = CAN_ISOTP_DEFAULT_RECV_WFTMAX;
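
The can/isotp change above decouples the effective N_As gap from the value stored in the socket options: a frame_txtime of 0 passed to setsockopt() now means "keep the current value", while a dedicated magic value requests a true zero. A small sketch of that update rule; CAN_ISOTP_FRAME_TXTIME_ZERO is 0xFFFFFFFF upstream, restated here as an assumption, as is the 50 us default:

    #include <stdint.h>
    #include <stdio.h>

    #define CAN_ISOTP_FRAME_TXTIME_ZERO 0xFFFFFFFFU

    static void update_frame_txtime(uint32_t *cached, uint32_t requested)
    {
        if (!requested)
            return;            /* 0 => no change */
        if (requested == CAN_ISOTP_FRAME_TXTIME_ZERO)
            *cached = 0;       /* magic value => really zero */
        else
            *cached = requested;
    }

    int main(void)
    {
        uint32_t txtime_ns = 50000;    /* assumed default: 50 us in ns */

        update_frame_txtime(&txtime_ns, 0);
        printf("%u\n", txtime_ns);     /* still 50000 */
        update_frame_txtime(&txtime_ns, CAN_ISOTP_FRAME_TXTIME_ZERO);
        printf("%u\n", txtime_ns);     /* 0 */
        return 0;
    }
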
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1baab07820f65..91cf709c98b37 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -10732,8 +10732,7 @@ static int __net_init netdev_init(struct net *net)
+ BUILD_BUG_ON(GRO_HASH_BUCKETS >
+ 8 * sizeof_field(struct napi_struct, gro_bitmask));
+
+- if (net != &init_net)
+- INIT_LIST_HEAD(&net->dev_base_head);
++ INIT_LIST_HEAD(&net->dev_base_head);
+
+ net->dev_name_head = netdev_create_hash();
+ if (net->dev_name_head == NULL)
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9eb785842258a..af0bafe9dcce2 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6777,24 +6777,33 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len
+ if (!th->ack || th->rst || th->syn)
+ return -ENOENT;
+
++ if (unlikely(iph_len < sizeof(struct iphdr)))
++ return -EINVAL;
++
+ if (tcp_synq_no_recent_overflow(sk))
+ return -ENOENT;
+
+ cookie = ntohl(th->ack_seq) - 1;
+
+- switch (sk->sk_family) {
+- case AF_INET:
+- if (unlikely(iph_len < sizeof(struct iphdr)))
++ /* Both struct iphdr and struct ipv6hdr have the version field at the
++ * same offset so we can cast to the shorter header (struct iphdr).
++ */
++ switch (((struct iphdr *)iph)->version) {
++ case 4:
++ if (sk->sk_family == AF_INET6 && ipv6_only_sock(sk))
+ return -EINVAL;
+
+ ret = __cookie_v4_check((struct iphdr *)iph, th, cookie);
+ break;
+
+ #if IS_BUILTIN(CONFIG_IPV6)
+- case AF_INET6:
++ case 6:
+ if (unlikely(iph_len < sizeof(struct ipv6hdr)))
+ return -EINVAL;
+
++ if (sk->sk_family != AF_INET6)
++ return -EINVAL;
++
+ ret = __cookie_v6_check((struct ipv6hdr *)iph, th, cookie);
+ break;
+ #endif /* CONFIG_IPV6 */
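
The bpf_tcp_check_syncookie() rework above stops dispatching on sk->sk_family, which is AF_INET6 even for v4-mapped traffic on dual-stack sockets, and instead reads the version nibble that struct iphdr and struct ipv6hdr share at byte 0. A self-contained sketch of that dispatch on raw wire bytes:

    #include <stdint.h>
    #include <stdio.h>

    /* Both IPv4 and IPv6 keep the version in the top nibble of byte 0. */
    static int ip_header_version(const void *hdr)
    {
        return ((const uint8_t *)hdr)[0] >> 4;
    }

    int main(void)
    {
        uint8_t v4[20] = { 0x45 };    /* version 4, IHL 5 */
        uint8_t v6[40] = { 0x60 };    /* version 6 */

        printf("%d %d\n", ip_header_version(v4), ip_header_version(v6));
        return 0;
    }
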
+@@ -8033,6 +8042,7 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ struct bpf_insn_access_aux *info)
+ {
+ const int size_default = sizeof(__u32);
++ int field_size;
+
+ if (off < 0 || off >= sizeof(struct bpf_sock))
+ return false;
+@@ -8044,7 +8054,6 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ case offsetof(struct bpf_sock, family):
+ case offsetof(struct bpf_sock, type):
+ case offsetof(struct bpf_sock, protocol):
+- case offsetof(struct bpf_sock, dst_port):
+ case offsetof(struct bpf_sock, src_port):
+ case offsetof(struct bpf_sock, rx_queue_mapping):
+ case bpf_ctx_range(struct bpf_sock, src_ip4):
+@@ -8053,6 +8062,14 @@ bool bpf_sock_is_valid_access(int off, int size, enum bpf_access_type type,
+ case bpf_ctx_range_till(struct bpf_sock, dst_ip6[0], dst_ip6[3]):
+ bpf_ctx_record_field_size(info, size_default);
+ return bpf_ctx_narrow_access_ok(off, size, size_default);
++ case bpf_ctx_range(struct bpf_sock, dst_port):
++ field_size = size == size_default ?
++ size_default : sizeof_field(struct bpf_sock, dst_port);
++ bpf_ctx_record_field_size(info, field_size);
++ return bpf_ctx_narrow_access_ok(off, size, field_size);
++ case offsetofend(struct bpf_sock, dst_port) ...
++ offsetof(struct bpf_sock, dst_ip4) - 1:
++ return false;
+ }
+
+ return size == size_default;
+@@ -10604,12 +10621,24 @@ static bool sk_lookup_is_valid_access(int off, int size,
+ case bpf_ctx_range(struct bpf_sk_lookup, local_ip4):
+ case bpf_ctx_range_till(struct bpf_sk_lookup, remote_ip6[0], remote_ip6[3]):
+ case bpf_ctx_range_till(struct bpf_sk_lookup, local_ip6[0], local_ip6[3]):
+- case bpf_ctx_range(struct bpf_sk_lookup, remote_port):
+ case bpf_ctx_range(struct bpf_sk_lookup, local_port):
+ case bpf_ctx_range(struct bpf_sk_lookup, ingress_ifindex):
+ bpf_ctx_record_field_size(info, sizeof(__u32));
+ return bpf_ctx_narrow_access_ok(off, size, sizeof(__u32));
+
++ case bpf_ctx_range(struct bpf_sk_lookup, remote_port):
++ /* Allow 4-byte access to 2-byte field for backward compatibility */
++ if (size == sizeof(__u32))
++ return true;
++ bpf_ctx_record_field_size(info, sizeof(__be16));
++ return bpf_ctx_narrow_access_ok(off, size, sizeof(__be16));
++
++ case offsetofend(struct bpf_sk_lookup, remote_port) ...
++ offsetof(struct bpf_sk_lookup, local_ip4) - 1:
++ /* Allow access to zero padding for backward compatibility */
++ bpf_ctx_record_field_size(info, sizeof(__u16));
++ return bpf_ctx_narrow_access_ok(off, size, sizeof(__u16));
++
+ default:
+ return false;
+ }
+@@ -10691,6 +10720,11 @@ static u32 sk_lookup_convert_ctx_access(enum bpf_access_type type,
+ sport, 2, target_size));
+ break;
+
++ case offsetofend(struct bpf_sk_lookup, remote_port):
++ *target_size = 2;
++ *insn++ = BPF_MOV32_IMM(si->dst_reg, 0);
++ break;
++
+ case offsetof(struct bpf_sk_lookup, local_port):
+ *insn++ = BPF_LDX_MEM(BPF_H, si->dst_reg, si->src_reg,
+ bpf_target_off(struct bpf_sk_lookup_kern,
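
The two filter.c verifier changes above both revolve around narrowed context loads: dst_port and remote_port are 2-byte fields, so 1- and 2-byte loads are now validated against a 2-byte field size, while the historical 4-byte load (and, for sk_lookup, the zero padding behind the port) stays accepted for backward compatibility. A simplified model of that acceptance rule follows; the real bpf_ctx_narrow_access_ok() has additional alignment handling:

    #include <stdbool.h>
    #include <stdio.h>

    static bool narrow_access_ok(int off, int size, int field_off, int field_size)
    {
        if (size == 4 && off == field_off)
            return true;    /* legacy full-word load */
        return off >= field_off && off + size <= field_off + field_size;
    }

    int main(void)
    {
        /* a 2-byte port field at offset 48, purely for illustration */
        printf("%d\n", narrow_access_ok(48, 2, 48, 2));    /* 1: whole field */
        printf("%d\n", narrow_access_ok(49, 1, 48, 2));    /* 1: one byte of it */
        printf("%d\n", narrow_access_ok(48, 4, 48, 2));    /* 1: legacy load */
        printf("%d\n", narrow_access_ok(46, 2, 48, 2));    /* 0: outside field */
        return 0;
    }
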
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index a5b5bb99c6446..212e65add9512 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -44,13 +44,7 @@ EXPORT_SYMBOL_GPL(net_rwsem);
+ static struct key_tag init_net_key_domain = { .usage = REFCOUNT_INIT(1) };
+ #endif
+
+-struct net init_net = {
+- .ns.count = REFCOUNT_INIT(1),
+- .dev_base_head = LIST_HEAD_INIT(init_net.dev_base_head),
+-#ifdef CONFIG_KEYS
+- .key_domain = &init_net_key_domain,
+-#endif
+-};
++struct net init_net;
+ EXPORT_SYMBOL(init_net);
+
+ static bool init_net_initialized;
+@@ -1084,7 +1078,7 @@ out:
+ rtnl_set_sk_err(net, RTNLGRP_NSID, err);
+ }
+
+-static int __init net_ns_init(void)
++void __init net_ns_init(void)
+ {
+ struct net_generic *ng;
+
+@@ -1105,6 +1099,9 @@ static int __init net_ns_init(void)
+
+ rcu_assign_pointer(init_net.gen, ng);
+
++#ifdef CONFIG_KEYS
++ init_net.key_domain = &init_net_key_domain;
++#endif
+ down_write(&pernet_ops_rwsem);
+ if (setup_net(&init_net, &init_user_ns))
+ panic("Could not setup the initial network namespace");
+@@ -1119,12 +1116,8 @@ static int __init net_ns_init(void)
+ RTNL_FLAG_DOIT_UNLOCKED);
+ rtnl_register(PF_UNSPEC, RTM_GETNSID, rtnl_net_getid, rtnl_net_dumpid,
+ RTNL_FLAG_DOIT_UNLOCKED);
+-
+- return 0;
+ }
+
+-pure_initcall(net_ns_init);
+-
+ static void free_exit_list(struct pernet_operations *ops, struct list_head *net_exit_list)
+ {
+ ops_pre_exit_list(ops, net_exit_list);
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 2fb8eb6791e8a..43b995e935cd6 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -3652,13 +3652,24 @@ static int rtnl_alt_ifname(int cmd, struct net_device *dev, struct nlattr *attr,
+ bool *changed, struct netlink_ext_ack *extack)
+ {
+ char *alt_ifname;
++ size_t size;
+ int err;
+
+ err = nla_validate(attr, attr->nla_len, IFLA_MAX, ifla_policy, extack);
+ if (err)
+ return err;
+
+- alt_ifname = nla_strdup(attr, GFP_KERNEL);
++ if (cmd == RTM_NEWLINKPROP) {
++ size = rtnl_prop_list_size(dev);
++ size += nla_total_size(ALTIFNAMSIZ);
++ if (size >= U16_MAX) {
++ NL_SET_ERR_MSG(extack,
++ "effective property list too long");
++ return -EINVAL;
++ }
++ }
++
++ alt_ifname = nla_strdup(attr, GFP_KERNEL_ACCOUNT);
+ if (!alt_ifname)
+ return -ENOMEM;
+
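
The rtnl_alt_ifname() guard above exists because netlink attribute lengths are carried in a 16-bit nla_len: once the property list plus one more ALTIFNAMSIZ name would reach U16_MAX, dumping the list back to userspace could no longer be encoded. A sketch of the size check, taking nla_total_size(ALTIFNAMSIZ) = NLA_ALIGN(4 + 128) = 132 as an assumption:

    #include <stdint.h>
    #include <stdio.h>

    #define ALTIFNAME_ATTR_SIZE 132    /* assumed nla_total_size(ALTIFNAMSIZ) */

    static int prop_list_would_fit(size_t current_size)
    {
        return current_size + ALTIFNAME_ATTR_SIZE < UINT16_MAX;
    }

    int main(void)
    {
        printf("%d\n", prop_list_would_fit(1024));     /* 1 */
        printf("%d\n", prop_list_would_fit(65500));    /* 0: reject */
        return 0;
    }
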
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index a8a2fb745274c..180fa6a26ad45 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5275,11 +5275,18 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
+ if (skb_cloned(to))
+ return false;
+
+- /* The page pool signature of struct page will eventually figure out
+- * which pages can be recycled or not but for now let's prohibit slab
+- * allocated and page_pool allocated SKBs from being coalesced.
++ /* In general, avoid mixing slab allocated and page_pool allocated
++ * pages within the same SKB. However when @to is not pp_recycle and
++ * @from is cloned, we can transition frag pages from page_pool to
++ * reference counted.
++ *
++ * On the other hand, don't allow coalescing two pp_recycle SKBs if
++ * @from is cloned, in case the SKB is using page_pool fragment
++ * references (PP_FLAG_PAGE_FRAG). Since we only take full page
++ * references for cloned SKBs at the moment that would result in
++ * inconsistent reference counts.
+ */
+- if (to->pp_recycle != from->pp_recycle)
++ if (to->pp_recycle != (from->pp_recycle && !skb_cloned(from)))
+ return false;
+
+ if (len <= skb_tailroom(to)) {
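
Enumerating the new skb_try_coalesce() condition makes the comment above concrete: a cloned page_pool skb may be folded into a non-pp skb, since its frag pages fall back to plain page refcounting, while two pp_recycle skbs only coalesce when @from is uncloned. A throwaway truth-table sketch:

    #include <stdbool.h>
    #include <stdio.h>

    static bool can_coalesce(bool to_pp, bool from_pp, bool from_cloned)
    {
        return to_pp == (from_pp && !from_cloned);
    }

    int main(void)
    {
        for (int t = 0; t <= 1; t++)
            for (int f = 0; f <= 1; f++)
                for (int c = 0; c <= 1; c++)
                    printf("to_pp=%d from_pp=%d cloned=%d -> %d\n",
                           t, f, c, can_coalesce(t, f, c));
        return 0;
    }
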
+diff --git a/net/dsa/master.c b/net/dsa/master.c
+index 880f910b23a99..10b51ffbb6f48 100644
+--- a/net/dsa/master.c
++++ b/net/dsa/master.c
+@@ -337,11 +337,24 @@ static const struct attribute_group dsa_group = {
+
+ static struct lock_class_key dsa_master_addr_list_lock_key;
+
++static void dsa_master_reset_mtu(struct net_device *dev)
++{
++ int err;
++
++ err = dev_set_mtu(dev, ETH_DATA_LEN);
++ if (err)
++ netdev_dbg(dev,
++ "Unable to reset MTU to exclude DSA overheads\n");
++}
++
+ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
+ {
++ const struct dsa_device_ops *tag_ops = cpu_dp->tag_ops;
+ struct dsa_switch *ds = cpu_dp->ds;
+ struct device_link *consumer_link;
+- int ret;
++ int mtu, ret;
++
++ mtu = ETH_DATA_LEN + dsa_tag_protocol_overhead(tag_ops);
+
+ /* The DSA master must use SET_NETDEV_DEV for this to work. */
+ consumer_link = device_link_add(ds->dev, dev->dev.parent,
+@@ -351,6 +364,15 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
+ "Failed to create a device link to DSA switch %s\n",
+ dev_name(ds->dev));
+
++	/* The switch driver may not implement ->port_change_mtu(); in that
++	 * case dsa_slave_change_mtu() will not update the master MTU either,
++	 * so we need to do that here.
++	 */
++ ret = dev_set_mtu(dev, mtu);
++ if (ret)
++ netdev_warn(dev, "error %d setting MTU to %d to include DSA overhead\n",
++ ret, mtu);
++
+ /* If we use a tagging format that doesn't have an ethertype
+ * field, make sure that all packets from this point on get
+ * sent to the tag format's receive function.
+@@ -388,6 +410,7 @@ void dsa_master_teardown(struct net_device *dev)
+ sysfs_remove_group(&dev->dev.kobj, &dsa_group);
+ dsa_netdev_ops_set(dev, NULL);
+ dsa_master_ethtool_teardown(dev);
++ dsa_master_reset_mtu(dev);
+ dsa_master_set_promiscuity(dev, -1);
+
+ dev->dsa_ptr = NULL;
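
The two dsa/master.c hunks above are symmetric: setup grows the master MTU by the tagging protocol overhead so full-size frames still fit once the DSA tag is prepended, and teardown returns the device to a plain ETH_DATA_LEN NIC. A trivial sketch of that bookkeeping, with an 8-byte tag assumed purely for illustration:

    #include <stdio.h>

    #define ETH_DATA_LEN 1500

    struct master { int mtu; };

    static void dsa_setup_mtu(struct master *m, int tag_overhead)
    {
        m->mtu = ETH_DATA_LEN + tag_overhead;    /* leave room for the tag */
    }

    static void dsa_reset_mtu(struct master *m)
    {
        m->mtu = ETH_DATA_LEN;                   /* back to a plain NIC */
    }

    int main(void)
    {
        struct master m;

        dsa_setup_mtu(&m, 8);
        printf("%d\n", m.mtu);    /* 1508 */
        dsa_reset_mtu(&m);
        printf("%d\n", m.mtu);    /* 1500 */
        return 0;
    }
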
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 4db0325f6e1af..dc28f0588e540 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -1116,13 +1116,18 @@ static int arp_req_get(struct arpreq *r, struct net_device *dev)
+ return err;
+ }
+
+-static int arp_invalidate(struct net_device *dev, __be32 ip)
++int arp_invalidate(struct net_device *dev, __be32 ip, bool force)
+ {
+ struct neighbour *neigh = neigh_lookup(&arp_tbl, &ip, dev);
+ int err = -ENXIO;
+ struct neigh_table *tbl = &arp_tbl;
+
+ if (neigh) {
++ if ((neigh->nud_state & NUD_VALID) && !force) {
++ neigh_release(neigh);
++ return 0;
++ }
++
+ if (neigh->nud_state & ~NUD_NOARP)
+ err = neigh_update(neigh, NULL, NUD_FAILED,
+ NEIGH_UPDATE_F_OVERRIDE|
+@@ -1169,7 +1174,7 @@ static int arp_req_delete(struct net *net, struct arpreq *r,
+ if (!dev)
+ return -EINVAL;
+ }
+- return arp_invalidate(dev, ip);
++ return arp_invalidate(dev, ip, true);
+ }
+
+ /*
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 85117b45216d4..89a5a48755950 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1115,9 +1115,11 @@ void fib_add_ifaddr(struct in_ifaddr *ifa)
+ return;
+
+ /* Add broadcast address, if it is explicitly assigned. */
+- if (ifa->ifa_broadcast && ifa->ifa_broadcast != htonl(0xFFFFFFFF))
++ if (ifa->ifa_broadcast && ifa->ifa_broadcast != htonl(0xFFFFFFFF)) {
+ fib_magic(RTM_NEWROUTE, RTN_BROADCAST, ifa->ifa_broadcast, 32,
+ prim, 0);
++ arp_invalidate(dev, ifa->ifa_broadcast, false);
++ }
+
+ if (!ipv4_is_zeronet(prefix) && !(ifa->ifa_flags & IFA_F_SECONDARY) &&
+ (prefix != addr || ifa->ifa_prefixlen < 32)) {
+@@ -1131,6 +1133,7 @@ void fib_add_ifaddr(struct in_ifaddr *ifa)
+ if (ifa->ifa_prefixlen < 31) {
+ fib_magic(RTM_NEWROUTE, RTN_BROADCAST, prefix | ~mask,
+ 32, prim, 0);
++ arp_invalidate(dev, prefix | ~mask, false);
+ }
+ }
+ }
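
The force parameter threaded through arp_invalidate() above draws the line between the two kinds of caller: SIOCDARP keeps flushing unconditionally, while the new fib_add_ifaddr() call sites only clear entries that are no longer in a valid state. A sketch of just that decision, with NUD_VALID reduced to a single stand-in flag:

    #include <stdbool.h>
    #include <stdio.h>

    #define NUD_VALID 0x01    /* stand-in for the real NUD_* state mask */

    static bool should_invalidate(unsigned char nud_state, bool force)
    {
        if ((nud_state & NUD_VALID) && !force)
            return false;    /* leave a live entry alone */
        return true;
    }

    int main(void)
    {
        printf("%d\n", should_invalidate(NUD_VALID, false));    /* 0 */
        printf("%d\n", should_invalidate(NUD_VALID, true));     /* 1 */
        printf("%d\n", should_invalidate(0, false));            /* 1 */
        return 0;
    }
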
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 2dd375f7407b6..0a0f497703450 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -888,8 +888,13 @@ int fib_nh_match(struct net *net, struct fib_config *cfg, struct fib_info *fi,
+ }
+
+ if (cfg->fc_oif || cfg->fc_gw_family) {
+- struct fib_nh *nh = fib_info_nh(fi, 0);
++ struct fib_nh *nh;
++
++ /* cannot match on nexthop object attributes */
++ if (fi->nh)
++ return 1;
+
++ nh = fib_info_nh(fi, 0);
+ if (cfg->fc_encap) {
+ if (fib_encap_match(net, cfg->fc_encap_type,
+ cfg->fc_encap, nh, cfg, extack))
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 30ab717ff1b81..17440840a7914 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -637,7 +637,9 @@ int __inet_hash(struct sock *sk, struct sock *osk)
+ int err = 0;
+
+ if (sk->sk_state != TCP_LISTEN) {
++ local_bh_disable();
+ inet_ehash_nolisten(sk, osk, NULL);
++ local_bh_enable();
+ return 0;
+ }
+ WARN_ON(!sk_unhashed(sk));
+@@ -669,45 +671,54 @@ int inet_hash(struct sock *sk)
+ {
+ int err = 0;
+
+- if (sk->sk_state != TCP_CLOSE) {
+- local_bh_disable();
++ if (sk->sk_state != TCP_CLOSE)
+ err = __inet_hash(sk, NULL);
+- local_bh_enable();
+- }
+
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(inet_hash);
+
+-void inet_unhash(struct sock *sk)
++static void __inet_unhash(struct sock *sk, struct inet_listen_hashbucket *ilb)
+ {
+- struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
+- struct inet_listen_hashbucket *ilb = NULL;
+- spinlock_t *lock;
+-
+ if (sk_unhashed(sk))
+ return;
+
+- if (sk->sk_state == TCP_LISTEN) {
+- ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
+- lock = &ilb->lock;
+- } else {
+- lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
+- }
+- spin_lock_bh(lock);
+- if (sk_unhashed(sk))
+- goto unlock;
+-
+ if (rcu_access_pointer(sk->sk_reuseport_cb))
+ reuseport_stop_listen_sock(sk);
+ if (ilb) {
++ struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
++
+ inet_unhash2(hashinfo, sk);
+ ilb->count--;
+ }
+ __sk_nulls_del_node_init_rcu(sk);
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+-unlock:
+- spin_unlock_bh(lock);
++}
++
++void inet_unhash(struct sock *sk)
++{
++ struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
++
++ if (sk_unhashed(sk))
++ return;
++
++ if (sk->sk_state == TCP_LISTEN) {
++ struct inet_listen_hashbucket *ilb;
++
++ ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
++ /* Don't disable bottom halves while acquiring the lock to
++ * avoid circular locking dependency on PREEMPT_RT.
++ */
++ spin_lock(&ilb->lock);
++ __inet_unhash(sk, ilb);
++ spin_unlock(&ilb->lock);
++ } else {
++ spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
++
++ spin_lock_bh(lock);
++ __inet_unhash(sk, NULL);
++ spin_unlock_bh(lock);
++ }
+ }
+ EXPORT_SYMBOL_GPL(inet_unhash);
+
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f908e2fd30b24..4df84013c4e6b 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -554,7 +554,7 @@ static int inet6_netconf_fill_devconf(struct sk_buff *skb, int ifindex,
+ #ifdef CONFIG_IPV6_MROUTE
+ if ((all || type == NETCONFA_MC_FORWARDING) &&
+ nla_put_s32(skb, NETCONFA_MC_FORWARDING,
+- devconf->mc_forwarding) < 0)
++ atomic_read(&devconf->mc_forwarding)) < 0)
+ goto nla_put_failure;
+ #endif
+ if ((all || type == NETCONFA_PROXY_NEIGH) &&
+@@ -5539,7 +5539,7 @@ static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
+ array[DEVCONF_USE_OPTIMISTIC] = cnf->use_optimistic;
+ #endif
+ #ifdef CONFIG_IPV6_MROUTE
+- array[DEVCONF_MC_FORWARDING] = cnf->mc_forwarding;
++ array[DEVCONF_MC_FORWARDING] = atomic_read(&cnf->mc_forwarding);
+ #endif
+ array[DEVCONF_DISABLE_IPV6] = cnf->disable_ipv6;
+ array[DEVCONF_ACCEPT_DAD] = cnf->accept_dad;
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index 8fe7900f19499..7d7b7523d1265 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -441,11 +441,14 @@ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ {
+ struct sock *sk = sock->sk;
+ u32 flags = BIND_WITH_LOCK;
++ const struct proto *prot;
+ int err = 0;
+
++ /* IPV6_ADDRFORM can change sk->sk_prot under us. */
++ prot = READ_ONCE(sk->sk_prot);
+ /* If the socket has its own bind function then use it. */
+- if (sk->sk_prot->bind)
+- return sk->sk_prot->bind(sk, uaddr, addr_len);
++ if (prot->bind)
++ return prot->bind(sk, uaddr, addr_len);
+
+ if (addr_len < SIN6_LEN_RFC2133)
+ return -EINVAL;
+@@ -555,6 +558,7 @@ int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ void __user *argp = (void __user *)arg;
+ struct sock *sk = sock->sk;
+ struct net *net = sock_net(sk);
++ const struct proto *prot;
+
+ switch (cmd) {
+ case SIOCADDRT:
+@@ -572,9 +576,11 @@ int inet6_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ case SIOCSIFDSTADDR:
+ return addrconf_set_dstaddr(net, argp);
+ default:
+- if (!sk->sk_prot->ioctl)
++ /* IPV6_ADDRFORM can change sk->sk_prot under us. */
++ prot = READ_ONCE(sk->sk_prot);
++ if (!prot->ioctl)
+ return -ENOIOCTLCMD;
+- return sk->sk_prot->ioctl(sk, cmd, arg);
++ return prot->ioctl(sk, cmd, arg);
+ }
+ /*NOTREACHED*/
+ return 0;
+@@ -636,11 +642,14 @@ INDIRECT_CALLABLE_DECLARE(int udpv6_sendmsg(struct sock *, struct msghdr *,
+ int inet6_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
+ {
+ struct sock *sk = sock->sk;
++ const struct proto *prot;
+
+ if (unlikely(inet_send_prepare(sk)))
+ return -EAGAIN;
+
+- return INDIRECT_CALL_2(sk->sk_prot->sendmsg, tcp_sendmsg, udpv6_sendmsg,
++ /* IPV6_ADDRFORM can change sk->sk_prot under us. */
++ prot = READ_ONCE(sk->sk_prot);
++ return INDIRECT_CALL_2(prot->sendmsg, tcp_sendmsg, udpv6_sendmsg,
+ sk, msg, size);
+ }
+
+@@ -650,13 +659,16 @@ int inet6_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ int flags)
+ {
+ struct sock *sk = sock->sk;
++ const struct proto *prot;
+ int addr_len = 0;
+ int err;
+
+ if (likely(!(flags & MSG_ERRQUEUE)))
+ sock_rps_record_flow(sk);
+
+- err = INDIRECT_CALL_2(sk->sk_prot->recvmsg, tcp_recvmsg, udpv6_recvmsg,
++ /* IPV6_ADDRFORM can change sk->sk_prot under us. */
++ prot = READ_ONCE(sk->sk_prot);
++ err = INDIRECT_CALL_2(prot->recvmsg, tcp_recvmsg, udpv6_recvmsg,
+ sk, msg, size, flags & MSG_DONTWAIT,
+ flags & ~MSG_DONTWAIT, &addr_len);
+ if (err >= 0)
+diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
+index 4514444e96c8d..4740afecf7c62 100644
+--- a/net/ipv6/inet6_hashtables.c
++++ b/net/ipv6/inet6_hashtables.c
+@@ -333,11 +333,8 @@ int inet6_hash(struct sock *sk)
+ {
+ int err = 0;
+
+- if (sk->sk_state != TCP_CLOSE) {
+- local_bh_disable();
++ if (sk->sk_state != TCP_CLOSE)
+ err = __inet_hash(sk, NULL);
+- local_bh_enable();
+- }
+
+ return err;
+ }
+diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
+index 80256717868e6..d4b1e2c5aa76d 100644
+--- a/net/ipv6/ip6_input.c
++++ b/net/ipv6/ip6_input.c
+@@ -508,7 +508,7 @@ int ip6_mc_input(struct sk_buff *skb)
+ /*
+ * IPv6 multicast router mode is now supported ;)
+ */
+- if (dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding &&
++ if (atomic_read(&dev_net(skb->dev)->ipv6.devconf_all->mc_forwarding) &&
+ !(ipv6_addr_type(&hdr->daddr) &
+ (IPV6_ADDR_LOOPBACK|IPV6_ADDR_LINKLOCAL)) &&
+ likely(!(IP6CB(skb)->flags & IP6SKB_FORWARDED))) {
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 8a2db926b5eb6..e3c884678dbe2 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -734,7 +734,7 @@ static int mif6_delete(struct mr_table *mrt, int vifi, int notify,
+
+ in6_dev = __in6_dev_get(dev);
+ if (in6_dev) {
+- in6_dev->cnf.mc_forwarding--;
++ atomic_dec(&in6_dev->cnf.mc_forwarding);
+ inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ NETCONFA_MC_FORWARDING,
+ dev->ifindex, &in6_dev->cnf);
+@@ -902,7 +902,7 @@ static int mif6_add(struct net *net, struct mr_table *mrt,
+
+ in6_dev = __in6_dev_get(dev);
+ if (in6_dev) {
+- in6_dev->cnf.mc_forwarding++;
++ atomic_inc(&in6_dev->cnf.mc_forwarding);
+ inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ NETCONFA_MC_FORWARDING,
+ dev->ifindex, &in6_dev->cnf);
+@@ -1553,7 +1553,7 @@ static int ip6mr_sk_init(struct mr_table *mrt, struct sock *sk)
+ } else {
+ rcu_assign_pointer(mrt->mroute_sk, sk);
+ sock_set_flag(sk, SOCK_RCU_FREE);
+- net->ipv6.devconf_all->mc_forwarding++;
++ atomic_inc(&net->ipv6.devconf_all->mc_forwarding);
+ }
+ write_unlock_bh(&mrt_lock);
+
+@@ -1586,7 +1586,7 @@ int ip6mr_sk_done(struct sock *sk)
+ * so the RCU grace period before sk freeing
+ * is guaranteed by sk_destruct()
+ */
+- net->ipv6.devconf_all->mc_forwarding--;
++ atomic_dec(&net->ipv6.devconf_all->mc_forwarding);
+ write_unlock_bh(&mrt_lock);
+ inet6_netconf_notify_devconf(net, RTM_NEWNETCONF,
+ NETCONFA_MC_FORWARDING,
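
mc_forwarding is converted to an atomic_t above because ip6_mc_input() reads it without holding mrt_lock; plain atomic increments and decrements on the writer side then pair with lockless reads. The userspace counterpart in C11 atomics:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int mc_forwarding;    /* kernel: atomic_t in ipv6_devconf */

    static void mif_add(void)    { atomic_fetch_add(&mc_forwarding, 1); }
    static void mif_delete(void) { atomic_fetch_sub(&mc_forwarding, 1); }
    static int  mc_enabled(void) { return atomic_load(&mc_forwarding); }

    int main(void)
    {
        mif_add();
        mif_add();
        mif_delete();
        printf("%d\n", mc_enabled());    /* 1 */
        return 0;
    }
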
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index a733803a710cf..222f6bf220ba0 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -475,7 +475,8 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ sock_prot_inuse_add(net, sk->sk_prot, -1);
+ sock_prot_inuse_add(net, &tcp_prot, 1);
+
+- sk->sk_prot = &tcp_prot;
++ /* Paired with READ_ONCE(sk->sk_prot) in net/ipv6/af_inet6.c */
++ WRITE_ONCE(sk->sk_prot, &tcp_prot);
+ icsk->icsk_af_ops = &ipv4_specific;
+ sk->sk_socket->ops = &inet_stream_ops;
+ sk->sk_family = PF_INET;
+@@ -489,7 +490,8 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ sock_prot_inuse_add(net, sk->sk_prot, -1);
+ sock_prot_inuse_add(net, prot, 1);
+
+- sk->sk_prot = prot;
++ /* Paired with READ_ONCE(sk->sk_prot) in net/ipv6/af_inet6.c */
++ WRITE_ONCE(sk->sk_prot, prot);
+ sk->sk_socket->ops = &inet_dgram_ops;
+ sk->sk_family = PF_INET;
+ }
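
The WRITE_ONCE() stores above pair with the READ_ONCE() loads added to af_inet6.c: IPV6_ADDRFORM can swap sk->sk_prot while another thread is dereferencing it, so both sides must be single marked accesses that the compiler cannot tear or re-read. A minimal sketch with the macros restated via volatile casts, the way the kernel implements them for word-sized types:

    #include <stdio.h>

    #define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
    #define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

    struct proto { const char *name; };

    static struct proto tcpv6_prot = { "tcpv6" };
    static struct proto tcp_prot   = { "tcp" };
    static struct proto *sk_prot   = &tcpv6_prot;

    int main(void)
    {
        const struct proto *prot = READ_ONCE(sk_prot);    /* reader side */

        printf("%s\n", prot->name);
        WRITE_ONCE(sk_prot, &tcp_prot);    /* IPV6_ADDRFORM side */
        printf("%s\n", READ_ONCE(sk_prot)->name);
        return 0;
    }
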
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index ea1cf414a92e7..da1bf48e79370 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -4495,7 +4495,7 @@ static int ip6_pkt_drop(struct sk_buff *skb, u8 code, int ipstats_mib_noroutes)
+ struct inet6_dev *idev;
+ int type;
+
+- if (netif_is_l3_master(skb->dev) &&
++ if (netif_is_l3_master(skb->dev) ||
+ dst->dev == net->loopback_dev)
+ idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif));
+ else
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index c921de63b494b..fc05351d3a82e 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -90,13 +90,13 @@ out_release:
+ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ {
+ DECLARE_SOCKADDR(struct sockaddr_mctp *, addr, msg->msg_name);
+- const int hlen = MCTP_HEADER_MAXLEN + sizeof(struct mctp_hdr);
+ int rc, addrlen = msg->msg_namelen;
+ struct sock *sk = sock->sk;
+ struct mctp_sock *msk = container_of(sk, struct mctp_sock, sk);
+ struct mctp_skb_cb *cb;
+ struct mctp_route *rt;
+- struct sk_buff *skb;
++ struct sk_buff *skb = NULL;
++ int hlen;
+
+ if (addr) {
+ if (addrlen < sizeof(struct sockaddr_mctp))
+@@ -119,6 +119,34 @@ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ if (addr->smctp_network == MCTP_NET_ANY)
+ addr->smctp_network = mctp_default_net(sock_net(sk));
+
++ /* direct addressing */
++ if (msk->addr_ext && addrlen >= sizeof(struct sockaddr_mctp_ext)) {
++ DECLARE_SOCKADDR(struct sockaddr_mctp_ext *,
++ extaddr, msg->msg_name);
++ struct net_device *dev;
++
++ rc = -EINVAL;
++ rcu_read_lock();
++ dev = dev_get_by_index_rcu(sock_net(sk), extaddr->smctp_ifindex);
++ /* check for correct halen */
++ if (dev && extaddr->smctp_halen == dev->addr_len) {
++ hlen = LL_RESERVED_SPACE(dev) + sizeof(struct mctp_hdr);
++ rc = 0;
++ }
++ rcu_read_unlock();
++ if (rc)
++ goto err_free;
++ rt = NULL;
++ } else {
++ rt = mctp_route_lookup(sock_net(sk), addr->smctp_network,
++ addr->smctp_addr.s_addr);
++ if (!rt) {
++ rc = -EHOSTUNREACH;
++ goto err_free;
++ }
++ hlen = LL_RESERVED_SPACE(rt->dev->dev) + sizeof(struct mctp_hdr);
++ }
++
+ skb = sock_alloc_send_skb(sk, hlen + 1 + len,
+ msg->msg_flags & MSG_DONTWAIT, &rc);
+ if (!skb)
+@@ -137,8 +165,8 @@ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ cb = __mctp_cb(skb);
+ cb->net = addr->smctp_network;
+
+- /* direct addressing */
+- if (msk->addr_ext && addrlen >= sizeof(struct sockaddr_mctp_ext)) {
++ if (!rt) {
++ /* fill extended address in cb */
+ DECLARE_SOCKADDR(struct sockaddr_mctp_ext *,
+ extaddr, msg->msg_name);
+
+@@ -149,17 +177,9 @@ static int mctp_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ }
+
+ cb->ifindex = extaddr->smctp_ifindex;
++ /* smctp_halen is checked above */
+ cb->halen = extaddr->smctp_halen;
+ memcpy(cb->haddr, extaddr->smctp_haddr, cb->halen);
+-
+- rt = NULL;
+- } else {
+- rt = mctp_route_lookup(sock_net(sk), addr->smctp_network,
+- addr->smctp_addr.s_addr);
+- if (!rt) {
+- rc = -EHOSTUNREACH;
+- goto err_free;
+- }
+ }
+
+ rc = mctp_local_output(sk, rt, skb, addr->smctp_addr.s_addr,
+diff --git a/net/mctp/device.c b/net/mctp/device.c
+index ef2755f82f87b..f86ef6d751bdc 100644
+--- a/net/mctp/device.c
++++ b/net/mctp/device.c
+@@ -24,12 +24,25 @@ struct mctp_dump_cb {
+ size_t a_idx;
+ };
+
+-/* unlocked: caller must hold rcu_read_lock */
++/* unlocked: caller must hold rcu_read_lock.
++ * The returned mctp_dev has its refcount incremented; NULL is returned
++ * if the device is unset or already being freed.
++ */
+ struct mctp_dev *__mctp_dev_get(const struct net_device *dev)
+ {
+- return rcu_dereference(dev->mctp_ptr);
++ struct mctp_dev *mdev = rcu_dereference(dev->mctp_ptr);
++
++ /* RCU guarantees that any mdev is still live.
++ * Zero refcount implies a pending free, return NULL.
++ */
++ if (mdev)
++ if (!refcount_inc_not_zero(&mdev->refs))
++ return NULL;
++ return mdev;
+ }
+
++/* Returned mctp_dev does not have refcount incremented. The returned pointer
++ * remains live while rtnl_lock is held, as that prevents mctp_unregister()
++ * from freeing it.
++ */
+ struct mctp_dev *mctp_dev_get_rtnl(const struct net_device *dev)
+ {
+ return rtnl_dereference(dev->mctp_ptr);
+@@ -123,6 +136,7 @@ static int mctp_dump_addrinfo(struct sk_buff *skb, struct netlink_callback *cb)
+ if (mdev) {
+ rc = mctp_dump_dev_addrinfo(mdev,
+ skb, cb);
++ mctp_dev_put(mdev);
+ // Error indicates full buffer, this
+ // callback will get retried.
+ if (rc < 0)
+@@ -297,7 +311,7 @@ void mctp_dev_hold(struct mctp_dev *mdev)
+
+ void mctp_dev_put(struct mctp_dev *mdev)
+ {
+- if (refcount_dec_and_test(&mdev->refs)) {
++ if (mdev && refcount_dec_and_test(&mdev->refs)) {
+ dev_put(mdev->dev);
+ kfree_rcu(mdev, rcu);
+ }
+@@ -369,6 +383,7 @@ static size_t mctp_get_link_af_size(const struct net_device *dev,
+ if (!mdev)
+ return 0;
+ ret = nla_total_size(4); /* IFLA_MCTP_NET */
++ mctp_dev_put(mdev);
+ return ret;
+ }
+
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index e52cef7505002..1a296e211a507 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -498,6 +498,11 @@ static int mctp_route_output(struct mctp_route *route, struct sk_buff *skb)
+
+ if (cb->ifindex) {
+ /* direct route; use the hwaddr we stashed in sendmsg */
++ if (cb->halen != skb->dev->addr_len) {
++ /* sanity check, sendmsg should have already caught this */
++ kfree_skb(skb);
++ return -EMSGSIZE;
++ }
+ daddr = cb->haddr;
+ } else {
+ /* If lookup fails let the device handle daddr==NULL */
+@@ -507,7 +512,7 @@ static int mctp_route_output(struct mctp_route *route, struct sk_buff *skb)
+
+ rc = dev_hard_header(skb, skb->dev, ntohs(skb->protocol),
+ daddr, skb->dev->dev_addr, skb->len);
+- if (rc) {
++ if (rc < 0) {
+ kfree_skb(skb);
+ return -EHOSTUNREACH;
+ }
+@@ -707,7 +712,7 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ {
+ const unsigned int hlen = sizeof(struct mctp_hdr);
+ struct mctp_hdr *hdr, *hdr2;
+- unsigned int pos, size;
++ unsigned int pos, size, headroom;
+ struct sk_buff *skb2;
+ int rc;
+ u8 seq;
+@@ -721,6 +726,9 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ return -EMSGSIZE;
+ }
+
++ /* keep same headroom as the original skb */
++ headroom = skb_headroom(skb);
++
+ /* we've got the header */
+ skb_pull(skb, hlen);
+
+@@ -728,7 +736,7 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ /* size of message payload */
+ size = min(mtu - hlen, skb->len - pos);
+
+- skb2 = alloc_skb(MCTP_HEADER_MAXLEN + hlen + size, GFP_KERNEL);
++ skb2 = alloc_skb(headroom + hlen + size, GFP_KERNEL);
+ if (!skb2) {
+ rc = -ENOMEM;
+ break;
+@@ -744,7 +752,7 @@ static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,
+ skb_set_owner_w(skb2, skb->sk);
+
+ /* establish packet */
+- skb_reserve(skb2, MCTP_HEADER_MAXLEN);
++ skb_reserve(skb2, headroom);
+ skb_reset_network_header(skb2);
+ skb_put(skb2, hlen + size);
+ skb2->transport_header = skb2->network_header + hlen;
+@@ -786,7 +794,7 @@ int mctp_local_output(struct sock *sk, struct mctp_route *rt,
+ {
+ struct mctp_sock *msk = container_of(sk, struct mctp_sock, sk);
+ struct mctp_skb_cb *cb = mctp_cb(skb);
+- struct mctp_route tmp_rt;
++ struct mctp_route tmp_rt = {0};
+ struct mctp_sk_key *key;
+ struct net_device *dev;
+ struct mctp_hdr *hdr;
+@@ -892,6 +900,7 @@ out_release:
+ mctp_route_release(rt);
+
+ dev_put(dev);
++ mctp_dev_put(tmp_rt.dev);
+
+ return rc;
+
+@@ -1057,11 +1066,13 @@ static int mctp_pkttype_receive(struct sk_buff *skb, struct net_device *dev,
+
+ rt->output(rt, skb);
+ mctp_route_release(rt);
++ mctp_dev_put(mdev);
+
+ return NET_RX_SUCCESS;
+
+ err_drop:
+ kfree_skb(skb);
++ mctp_dev_put(mdev);
+ return NET_RX_DROP;
+ }
+
+diff --git a/net/mctp/test/utils.c b/net/mctp/test/utils.c
+index 7b7918702592a..e03ba66bbe181 100644
+--- a/net/mctp/test/utils.c
++++ b/net/mctp/test/utils.c
+@@ -54,7 +54,6 @@ struct mctp_test_dev *mctp_test_create_dev(void)
+
+ rcu_read_lock();
+ dev->mdev = __mctp_dev_get(ndev);
+- mctp_dev_hold(dev->mdev);
+ rcu_read_unlock();
+
+ return dev;
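
Behind the mctp/device.c changes above sits the classic inc-not-zero lookup pattern: RCU keeps the object's memory valid during the lookup, but a refcount that has already hit zero means a free is pending, so the lookup must fail rather than resurrect the object. A standalone sketch of the primitive in C11 atomics:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool refcount_inc_not_zero(atomic_int *r)
    {
        int old = atomic_load(r);

        while (old != 0) {
            /* on failure, 'old' is reloaded with the current value */
            if (atomic_compare_exchange_weak(r, &old, old + 1))
                return true;
        }
        return false;    /* already zero: object is being freed */
    }

    int main(void)
    {
        atomic_int live = 1, dying = 0;

        printf("%d %d\n", refcount_inc_not_zero(&live),
               refcount_inc_not_zero(&dying));    /* 1 0 */
        return 0;
    }
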
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index bf1e17c678f13..7552e1e9fd629 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -67,6 +67,8 @@ EXPORT_SYMBOL_GPL(nf_conntrack_hash);
+ struct conntrack_gc_work {
+ struct delayed_work dwork;
+ u32 next_bucket;
++ u32 avg_timeout;
++ u32 start_time;
+ bool exiting;
+ bool early_drop;
+ };
+@@ -78,8 +80,19 @@ static __read_mostly bool nf_conntrack_locks_all;
+ /* serialize hash resizes and nf_ct_iterate_cleanup */
+ static DEFINE_MUTEX(nf_conntrack_mutex);
+
+-#define GC_SCAN_INTERVAL (120u * HZ)
++#define GC_SCAN_INTERVAL_MAX (60ul * HZ)
++#define GC_SCAN_INTERVAL_MIN (1ul * HZ)
++
++/* clamp timeouts to this value (TCP unacked) */
++#define GC_SCAN_INTERVAL_CLAMP (300ul * HZ)
++
++/* large initial bias so that we don't scan often just because we have
++ * three entries with a 1s timeout.
++ */
++#define GC_SCAN_INTERVAL_INIT INT_MAX
++
+ #define GC_SCAN_MAX_DURATION msecs_to_jiffies(10)
++#define GC_SCAN_EXPIRED_MAX (64000u / HZ)
+
+ #define MIN_CHAINLEN 8u
+ #define MAX_CHAINLEN (32u - MIN_CHAINLEN)
+@@ -1421,16 +1434,28 @@ static bool gc_worker_can_early_drop(const struct nf_conn *ct)
+
+ static void gc_worker(struct work_struct *work)
+ {
+- unsigned long end_time = jiffies + GC_SCAN_MAX_DURATION;
+ unsigned int i, hashsz, nf_conntrack_max95 = 0;
+- unsigned long next_run = GC_SCAN_INTERVAL;
++ u32 end_time, start_time = nfct_time_stamp;
+ struct conntrack_gc_work *gc_work;
++ unsigned int expired_count = 0;
++ unsigned long next_run;
++ s32 delta_time;
++
+ gc_work = container_of(work, struct conntrack_gc_work, dwork.work);
+
+ i = gc_work->next_bucket;
+ if (gc_work->early_drop)
+ nf_conntrack_max95 = nf_conntrack_max / 100u * 95u;
+
++ if (i == 0) {
++ gc_work->avg_timeout = GC_SCAN_INTERVAL_INIT;
++ gc_work->start_time = start_time;
++ }
++
++ next_run = gc_work->avg_timeout;
++
++ end_time = start_time + GC_SCAN_MAX_DURATION;
++
+ do {
+ struct nf_conntrack_tuple_hash *h;
+ struct hlist_nulls_head *ct_hash;
+@@ -1447,6 +1472,7 @@ static void gc_worker(struct work_struct *work)
+
+ hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) {
+ struct nf_conntrack_net *cnet;
++ unsigned long expires;
+ struct net *net;
+
+ tmp = nf_ct_tuplehash_to_ctrack(h);
+@@ -1456,11 +1482,29 @@ static void gc_worker(struct work_struct *work)
+ continue;
+ }
+
++ if (expired_count > GC_SCAN_EXPIRED_MAX) {
++ rcu_read_unlock();
++
++ gc_work->next_bucket = i;
++ gc_work->avg_timeout = next_run;
++
++ delta_time = nfct_time_stamp - gc_work->start_time;
++
++ /* re-sched immediately if total cycle time is exceeded */
++ next_run = delta_time < (s32)GC_SCAN_INTERVAL_MAX;
++ goto early_exit;
++ }
++
+ if (nf_ct_is_expired(tmp)) {
+ nf_ct_gc_expired(tmp);
++ expired_count++;
+ continue;
+ }
+
++ expires = clamp(nf_ct_expires(tmp), GC_SCAN_INTERVAL_MIN, GC_SCAN_INTERVAL_CLAMP);
++ next_run += expires;
++ next_run /= 2u;
++
+ if (nf_conntrack_max95 == 0 || gc_worker_skip_ct(tmp))
+ continue;
+
+@@ -1478,8 +1522,10 @@ static void gc_worker(struct work_struct *work)
+ continue;
+ }
+
+- if (gc_worker_can_early_drop(tmp))
++ if (gc_worker_can_early_drop(tmp)) {
+ nf_ct_kill(tmp);
++ expired_count++;
++ }
+
+ nf_ct_put(tmp);
+ }
+@@ -1492,33 +1538,38 @@ static void gc_worker(struct work_struct *work)
+ cond_resched();
+ i++;
+
+- if (time_after(jiffies, end_time) && i < hashsz) {
++ delta_time = nfct_time_stamp - end_time;
++ if (delta_time > 0 && i < hashsz) {
++ gc_work->avg_timeout = next_run;
+ gc_work->next_bucket = i;
+ next_run = 0;
+- break;
++ goto early_exit;
+ }
+ } while (i < hashsz);
+
++ gc_work->next_bucket = 0;
++
++ next_run = clamp(next_run, GC_SCAN_INTERVAL_MIN, GC_SCAN_INTERVAL_MAX);
++
++ delta_time = max_t(s32, nfct_time_stamp - gc_work->start_time, 1);
++ if (next_run > (unsigned long)delta_time)
++ next_run -= delta_time;
++ else
++ next_run = 1;
++
++early_exit:
+ if (gc_work->exiting)
+ return;
+
+- /*
+- * Eviction will normally happen from the packet path, and not
+- * from this gc worker.
+- *
+- * This worker is only here to reap expired entries when system went
+- * idle after a busy period.
+- */
+- if (next_run) {
++ if (next_run)
+ gc_work->early_drop = false;
+- gc_work->next_bucket = 0;
+- }
++
+ queue_delayed_work(system_power_efficient_wq, &gc_work->dwork, next_run);
+ }
+
+ static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work)
+ {
+- INIT_DEFERRABLE_WORK(&gc_work->dwork, gc_worker);
++ INIT_DELAYED_WORK(&gc_work->dwork, gc_worker);
+ gc_work->exiting = false;
+ }
+
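
The conntrack gc_worker() above replaces the fixed 2-minute rescan with a running average: every scanned entry folds its clamped time-to-expiry into next_run, the average is seeded with INT_MAX so a handful of short-lived entries cannot force constant scanning, and the final value is clamped to the 1s..60s window. A userspace walk-through of the arithmetic, with HZ fixed to 100 purely for illustration:

    #include <limits.h>
    #include <stdio.h>

    #define HZ                     100UL
    #define GC_SCAN_INTERVAL_MIN   (1UL * HZ)
    #define GC_SCAN_INTERVAL_MAX   (60UL * HZ)
    #define GC_SCAN_INTERVAL_CLAMP (300UL * HZ)

    static unsigned long clamp_ul(unsigned long v, unsigned long lo,
                                  unsigned long hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void)
    {
        unsigned long next_run = INT_MAX;    /* GC_SCAN_INTERVAL_INIT bias */
        unsigned long expiries[] = { 5 * HZ, 30 * HZ, 600 * HZ };

        for (int i = 0; i < 3; i++) {
            unsigned long e = clamp_ul(expiries[i], GC_SCAN_INTERVAL_MIN,
                                       GC_SCAN_INTERVAL_CLAMP);
            next_run = (next_run + e) / 2;    /* fold into the average */
        }
        next_run = clamp_ul(next_run, GC_SCAN_INTERVAL_MIN,
                            GC_SCAN_INTERVAL_MAX);
        printf("next_run = %lu jiffies\n", next_run);    /* 6000 = 60s cap */
        return 0;
    }
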
+diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c
+index 7b727d3ebf9df..04bd2f89afe88 100644
+--- a/net/netfilter/nft_bitwise.c
++++ b/net/netfilter/nft_bitwise.c
+@@ -287,7 +287,7 @@ static bool nft_bitwise_reduce(struct nft_regs_track *track,
+ if (!track->regs[priv->sreg].selector)
+ return false;
+
+- bitwise = nft_expr_priv(expr);
++ bitwise = nft_expr_priv(track->regs[priv->dreg].selector);
+ if (track->regs[priv->sreg].selector == track->regs[priv->dreg].selector &&
+ track->regs[priv->dreg].bitwise &&
+ track->regs[priv->dreg].bitwise->ops == expr->ops &&
+@@ -434,7 +434,7 @@ static bool nft_bitwise_fast_reduce(struct nft_regs_track *track,
+ if (!track->regs[priv->sreg].selector)
+ return false;
+
+- bitwise = nft_expr_priv(expr);
++ bitwise = nft_expr_priv(track->regs[priv->dreg].selector);
+ if (track->regs[priv->sreg].selector == track->regs[priv->dreg].selector &&
+ track->regs[priv->dreg].bitwise &&
+ track->regs[priv->dreg].bitwise->ops == expr->ops &&
+diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
+index beb0e573266d0..54c0830039470 100644
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -885,6 +885,8 @@ int netlbl_bitmap_walk(const unsigned char *bitmap, u32 bitmap_len,
+ unsigned char bitmask;
+ unsigned char byte;
+
++ if (offset >= bitmap_len)
++ return -1;
+ byte_offset = offset / 8;
+ byte = bitmap[byte_offset];
+ bit_spot = offset;
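
The netlbl_bitmap_walk() fix above is a plain bounds check: the start offset comes from parsed CIPSO/CALIPSO data, so walking from an offset at or past the bitmap length would read past the buffer. A compact sketch of the guarded walk, using MSB-first bit numbering as in the original:

    #include <stdio.h>

    static int bitmap_walk(const unsigned char *bitmap, unsigned int len_bits,
                           unsigned int offset)
    {
        if (offset >= len_bits)
            return -1;    /* out-of-range start: refuse to walk */

        for (unsigned int bit = offset; bit < len_bits; bit++)
            if (bitmap[bit / 8] & (0x80 >> (bit % 8)))
                return (int)bit;    /* first set bit */
        return -1;
    }

    int main(void)
    {
        unsigned char map[2] = { 0x00, 0x20 };    /* bit 10 set */

        printf("%d\n", bitmap_walk(map, 16, 0));     /* 10 */
        printf("%d\n", bitmap_walk(map, 16, 16));    /* -1: rejected */
        return 0;
    }
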
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 780d9e2246f39..8955f31fa47e9 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -1051,7 +1051,7 @@ static int clone(struct datapath *dp, struct sk_buff *skb,
+ int rem = nla_len(attr);
+ bool dont_clone_flow_key;
+
+- /* The first action is always 'OVS_CLONE_ATTR_ARG'. */
++ /* The first action is always 'OVS_CLONE_ATTR_EXEC'. */
+ clone_arg = nla_data(attr);
+ dont_clone_flow_key = nla_get_u32(clone_arg);
+ actions = nla_next(clone_arg, &rem);
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 0d677c9c2c805..c591b923016a6 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2288,6 +2288,62 @@ static struct sw_flow_actions *nla_alloc_flow_actions(int size)
+ return sfa;
+ }
+
++static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len);
++
++static void ovs_nla_free_check_pkt_len_action(const struct nlattr *action)
++{
++ const struct nlattr *a;
++ int rem;
++
++ nla_for_each_nested(a, action, rem) {
++ switch (nla_type(a)) {
++ case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_LESS_EQUAL:
++ case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_GREATER:
++ ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
++ break;
++ }
++ }
++}
++
++static void ovs_nla_free_clone_action(const struct nlattr *action)
++{
++ const struct nlattr *a = nla_data(action);
++ int rem = nla_len(action);
++
++ switch (nla_type(a)) {
++ case OVS_CLONE_ATTR_EXEC:
++ /* The real list of actions follows this attribute. */
++ a = nla_next(a, &rem);
++ ovs_nla_free_nested_actions(a, rem);
++ break;
++ }
++}
++
++static void ovs_nla_free_dec_ttl_action(const struct nlattr *action)
++{
++ const struct nlattr *a = nla_data(action);
++
++ switch (nla_type(a)) {
++ case OVS_DEC_TTL_ATTR_ACTION:
++ ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
++ break;
++ }
++}
++
++static void ovs_nla_free_sample_action(const struct nlattr *action)
++{
++ const struct nlattr *a = nla_data(action);
++ int rem = nla_len(action);
++
++ switch (nla_type(a)) {
++ case OVS_SAMPLE_ATTR_ARG:
++ /* The real list of actions follows this attribute. */
++ a = nla_next(a, &rem);
++ ovs_nla_free_nested_actions(a, rem);
++ break;
++ }
++}
++
+ static void ovs_nla_free_set_action(const struct nlattr *a)
+ {
+ const struct nlattr *ovs_key = nla_data(a);
+@@ -2301,25 +2357,54 @@ static void ovs_nla_free_set_action(const struct nlattr *a)
+ }
+ }
+
+-void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
++static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len)
+ {
+ const struct nlattr *a;
+ int rem;
+
+- if (!sf_acts)
++	/* Whenever new actions are added, consider whether this function
++	 * needs to be updated as well.
++	 */
++ BUILD_BUG_ON(OVS_ACTION_ATTR_MAX != 23);
++
++ if (!actions)
+ return;
+
+- nla_for_each_attr(a, sf_acts->actions, sf_acts->actions_len, rem) {
++ nla_for_each_attr(a, actions, len, rem) {
+ switch (nla_type(a)) {
+- case OVS_ACTION_ATTR_SET:
+- ovs_nla_free_set_action(a);
++ case OVS_ACTION_ATTR_CHECK_PKT_LEN:
++ ovs_nla_free_check_pkt_len_action(a);
++ break;
++
++ case OVS_ACTION_ATTR_CLONE:
++ ovs_nla_free_clone_action(a);
+ break;
++
+ case OVS_ACTION_ATTR_CT:
+ ovs_ct_free_action(a);
+ break;
++
++ case OVS_ACTION_ATTR_DEC_TTL:
++ ovs_nla_free_dec_ttl_action(a);
++ break;
++
++ case OVS_ACTION_ATTR_SAMPLE:
++ ovs_nla_free_sample_action(a);
++ break;
++
++ case OVS_ACTION_ATTR_SET:
++ ovs_nla_free_set_action(a);
++ break;
+ }
+ }
++}
++
++void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
++{
++ if (!sf_acts)
++ return;
+
++ ovs_nla_free_nested_actions(sf_acts->actions, sf_acts->actions_len);
+ kfree(sf_acts);
+ }
+
+@@ -3429,7 +3514,9 @@ static int clone_action_to_attr(const struct nlattr *attr,
+ if (!start)
+ return -EMSGSIZE;
+
+- err = ovs_nla_put_actions(nla_data(attr), rem, skb);
++ /* Skipping the OVS_CLONE_ATTR_EXEC that is always the first attribute. */
++ attr = nla_next(nla_data(attr), &rem);
++ err = ovs_nla_put_actions(attr, rem, skb);
+
+ if (err)
+ nla_nest_cancel(skb, start);
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index 25bbc4cc8b135..f15d6942da453 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -113,8 +113,8 @@ static __net_exit void rxrpc_exit_net(struct net *net)
+ struct rxrpc_net *rxnet = rxrpc_net(net);
+
+ rxnet->live = false;
+- del_timer_sync(&rxnet->peer_keepalive_timer);
+ cancel_work_sync(&rxnet->peer_keepalive_work);
++ del_timer_sync(&rxnet->peer_keepalive_timer);
+ rxrpc_destroy_all_calls(rxnet);
+ rxrpc_destroy_all_connections(rxnet);
+ rxrpc_destroy_all_peers(rxnet);
+diff --git a/net/sctp/outqueue.c b/net/sctp/outqueue.c
+index a18609f608fb7..e213aaf45d67c 100644
+--- a/net/sctp/outqueue.c
++++ b/net/sctp/outqueue.c
+@@ -914,6 +914,7 @@ static void sctp_outq_flush_ctrl(struct sctp_flush_ctx *ctx)
+ ctx->asoc->base.sk->sk_err = -error;
+ return;
+ }
++ ctx->asoc->stats.octrlchunks++;
+ break;
+
+ case SCTP_CID_ABORT:
+@@ -938,7 +939,10 @@ static void sctp_outq_flush_ctrl(struct sctp_flush_ctx *ctx)
+
+ case SCTP_CID_HEARTBEAT:
+ if (chunk->pmtu_probe) {
+- sctp_packet_singleton(ctx->transport, chunk, ctx->gfp);
++ error = sctp_packet_singleton(ctx->transport,
++ chunk, ctx->gfp);
++ if (!error)
++ ctx->asoc->stats.octrlchunks++;
+ break;
+ }
+ fallthrough;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 284befa909676..303c5e56e4df4 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2625,8 +2625,8 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ sk->sk_state != SMC_CLOSED) {
+ if (val) {
+ SMC_STAT_INC(smc, ndly_cnt);
+- mod_delayed_work(smc->conn.lgr->tx_wq,
+- &smc->conn.tx_work, 0);
++ smc_tx_pending(&smc->conn);
++ cancel_delayed_work(&smc->conn.tx_work);
+ }
+ }
+ break;
+@@ -2636,8 +2636,8 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
+ sk->sk_state != SMC_CLOSED) {
+ if (!val) {
+ SMC_STAT_INC(smc, cork_cnt);
+- mod_delayed_work(smc->conn.lgr->tx_wq,
+- &smc->conn.tx_work, 0);
++ smc_tx_pending(&smc->conn);
++ cancel_delayed_work(&smc->conn.tx_work);
+ }
+ }
+ break;
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index be7d704976ffb..f40f6ed0fbdb4 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1989,7 +1989,7 @@ static struct smc_buf_desc *smc_buf_get_slot(int compressed_bufsize,
+ */
+ static inline int smc_rmb_wnd_update_limit(int rmbe_size)
+ {
+- return min_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
++ return max_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
+ }
+
+ /* map an rmb buf to a link */
+diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
+index be241d53020f1..7b0b6e24582f9 100644
+--- a/net/smc/smc_tx.c
++++ b/net/smc/smc_tx.c
+@@ -597,27 +597,32 @@ int smc_tx_sndbuf_nonempty(struct smc_connection *conn)
+ return rc;
+ }
+
+-/* Wakeup sndbuf consumers from process context
+- * since there is more data to transmit
+- */
+-void smc_tx_work(struct work_struct *work)
++void smc_tx_pending(struct smc_connection *conn)
+ {
+- struct smc_connection *conn = container_of(to_delayed_work(work),
+- struct smc_connection,
+- tx_work);
+ struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+ int rc;
+
+- lock_sock(&smc->sk);
+ if (smc->sk.sk_err)
+- goto out;
++ return;
+
+ rc = smc_tx_sndbuf_nonempty(conn);
+ if (!rc && conn->local_rx_ctrl.prod_flags.write_blocked &&
+ !atomic_read(&conn->bytes_to_rcv))
+ conn->local_rx_ctrl.prod_flags.write_blocked = 0;
++}
++
++/* Wakeup sndbuf consumers from process context
++ * since there is more data to transmit
++ */
++void smc_tx_work(struct work_struct *work)
++{
++ struct smc_connection *conn = container_of(to_delayed_work(work),
++ struct smc_connection,
++ tx_work);
++ struct smc_sock *smc = container_of(conn, struct smc_sock, conn);
+
+-out:
++ lock_sock(&smc->sk);
++ smc_tx_pending(conn);
+ release_sock(&smc->sk);
+ }
+
+diff --git a/net/smc/smc_tx.h b/net/smc/smc_tx.h
+index 07e6ad76224a0..a59f370b8b432 100644
+--- a/net/smc/smc_tx.h
++++ b/net/smc/smc_tx.h
+@@ -27,6 +27,7 @@ static inline int smc_tx_prepared_sends(struct smc_connection *conn)
+ return smc_curs_diff(conn->sndbuf_desc->len, &sent, &prep);
+ }
+
++void smc_tx_pending(struct smc_connection *conn);
+ void smc_tx_work(struct work_struct *work);
+ void smc_tx_init(struct smc_sock *smc);
+ int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index b36d235d2d6d9..0222ad4523a9d 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2197,6 +2197,7 @@ call_transmit_status(struct rpc_task *task)
+ * socket just returned a connection error,
+ * then hold onto the transport lock.
+ */
++ case -ENOMEM:
+ case -ENOBUFS:
+ rpc_delay(task, HZ>>2);
+ fallthrough;
+@@ -2280,6 +2281,7 @@ call_bc_transmit_status(struct rpc_task *task)
+ case -ENOTCONN:
+ case -EPIPE:
+ break;
++ case -ENOMEM:
+ case -ENOBUFS:
+ rpc_delay(task, HZ>>2);
+ fallthrough;
+@@ -2362,6 +2364,11 @@ call_status(struct rpc_task *task)
+ case -EPIPE:
+ case -EAGAIN:
+ break;
++ case -ENFILE:
++ case -ENOBUFS:
++ case -ENOMEM:
++ rpc_delay(task, HZ>>2);
++ break;
+ case -EIO:
+ /* shutdown or soft timeout */
+ goto out_exit;
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index ae295844ac55a..9020cedb7c95a 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -186,11 +186,6 @@ static void __rpc_add_wait_queue_priority(struct rpc_wait_queue *queue,
+
+ /*
+ * Add new request to wait queue.
+- *
+- * Swapper tasks always get inserted at the head of the queue.
+- * This should avoid many nasty memory deadlocks and hopefully
+- * improve overall performance.
+- * Everyone else gets appended to the queue to ensure proper FIFO behavior.
+ */
+ static void __rpc_add_wait_queue(struct rpc_wait_queue *queue,
+ struct rpc_task *task,
+@@ -199,8 +194,6 @@ static void __rpc_add_wait_queue(struct rpc_wait_queue *queue,
+ INIT_LIST_HEAD(&task->u.tk_wait.timer_list);
+ if (RPC_IS_PRIORITY(queue))
+ __rpc_add_wait_queue_priority(queue, task, queue_priority);
+- else if (RPC_IS_SWAPPER(task))
+- list_add(&task->u.tk_wait.list, &queue->tasks[0]);
+ else
+ list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]);
+ task->tk_waitqueue = queue;
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 478f857cdaed4..6ea3d87e11475 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1096,7 +1096,9 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
+ int ret;
+
+ *sentp = 0;
+- xdr_alloc_bvec(xdr, GFP_KERNEL);
++ ret = xdr_alloc_bvec(xdr, GFP_KERNEL);
++ if (ret < 0)
++ return ret;
+
+ ret = kernel_sendmsg(sock, &msg, &rm, 1, rm.iov_len);
+ if (ret < 0)
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index 396a74974f60f..d557a3cb2ad4a 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -929,12 +929,7 @@ void xprt_connect(struct rpc_task *task)
+ if (!xprt_lock_write(xprt, task))
+ return;
+
+- if (test_and_clear_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
+- trace_xprt_disconnect_cleanup(xprt);
+- xprt->ops->close(xprt);
+- }
+-
+- if (!xprt_connected(xprt)) {
++ if (!xprt_connected(xprt) && !test_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
+ task->tk_rqstp->rq_connect_cookie = xprt->connect_cookie;
+ rpc_sleep_on_timeout(&xprt->pending, task, NULL,
+ xprt_request_timeout(task->tk_rqstp));
+@@ -1354,17 +1349,6 @@ xprt_request_enqueue_transmit(struct rpc_task *task)
+ INIT_LIST_HEAD(&req->rq_xmit2);
+ goto out;
+ }
+- } else if (RPC_IS_SWAPPER(task)) {
+- list_for_each_entry(pos, &xprt->xmit_queue, rq_xmit) {
+- if (pos->rq_cong || pos->rq_bytes_sent)
+- continue;
+- if (RPC_IS_SWAPPER(pos->rq_task))
+- continue;
+- /* Note: req is added _before_ pos */
+- list_add_tail(&req->rq_xmit, &pos->rq_xmit);
+- INIT_LIST_HEAD(&req->rq_xmit2);
+- goto out;
+- }
+ } else if (!req->rq_seqno) {
+ list_for_each_entry(pos, &xprt->xmit_queue, rq_xmit) {
+ if (pos->rq_task->tk_owner != task->tk_owner)
+@@ -1690,12 +1674,15 @@ out:
+ static struct rpc_rqst *xprt_dynamic_alloc_slot(struct rpc_xprt *xprt)
+ {
+ struct rpc_rqst *req = ERR_PTR(-EAGAIN);
++ gfp_t gfp_mask = GFP_KERNEL;
+
+ if (xprt->num_reqs >= xprt->max_reqs)
+ goto out;
+ ++xprt->num_reqs;
+ spin_unlock(&xprt->reserve_lock);
+- req = kzalloc(sizeof(struct rpc_rqst), GFP_NOFS);
++ if (current->flags & PF_WQ_WORKER)
++ gfp_mask |= __GFP_NORETRY | __GFP_NOWARN;
++ req = kzalloc(sizeof(*req), gfp_mask);
+ spin_lock(&xprt->reserve_lock);
+ if (req != NULL)
+ goto out;
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index ff78a296fa810..6b7e10e5a141d 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -521,7 +521,7 @@ xprt_rdma_alloc_slot(struct rpc_xprt *xprt, struct rpc_task *task)
+ return;
+
+ out_sleep:
+- task->tk_status = -EAGAIN;
++ task->tk_status = -ENOMEM;
+ xprt_add_backlog(xprt, task);
+ }
+
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 11eab0f0333b0..7aef2876beb38 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -763,12 +763,12 @@ xs_stream_start_connect(struct sock_xprt *transport)
+ /**
+ * xs_nospace - handle transmit was incomplete
+ * @req: pointer to RPC request
++ * @transport: pointer to struct sock_xprt
+ *
+ */
+-static int xs_nospace(struct rpc_rqst *req)
++static int xs_nospace(struct rpc_rqst *req, struct sock_xprt *transport)
+ {
+- struct rpc_xprt *xprt = req->rq_xprt;
+- struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
++ struct rpc_xprt *xprt = &transport->xprt;
+ struct sock *sk = transport->inet;
+ int ret = -EAGAIN;
+
+@@ -779,25 +779,49 @@ static int xs_nospace(struct rpc_rqst *req)
+
+ /* Don't race with disconnect */
+ if (xprt_connected(xprt)) {
++ struct socket_wq *wq;
++
++ rcu_read_lock();
++ wq = rcu_dereference(sk->sk_wq);
++ set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
++ rcu_read_unlock();
++
+ /* wait for more buffer space */
++ set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+ sk->sk_write_pending++;
+ xprt_wait_for_buffer_space(xprt);
+ } else
+ ret = -ENOTCONN;
+
+ spin_unlock(&xprt->transport_lock);
++ return ret;
++}
+
+- /* Race breaker in case memory is freed before above code is called */
+- if (ret == -EAGAIN) {
+- struct socket_wq *wq;
++static int xs_sock_nospace(struct rpc_rqst *req)
++{
++ struct sock_xprt *transport =
++ container_of(req->rq_xprt, struct sock_xprt, xprt);
++ struct sock *sk = transport->inet;
++ int ret = -EAGAIN;
+
+- rcu_read_lock();
+- wq = rcu_dereference(sk->sk_wq);
+- set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
+- rcu_read_unlock();
++ lock_sock(sk);
++ if (!sock_writeable(sk))
++ ret = xs_nospace(req, transport);
++ release_sock(sk);
++ return ret;
++}
+
+- sk->sk_write_space(sk);
+- }
++static int xs_stream_nospace(struct rpc_rqst *req)
++{
++ struct sock_xprt *transport =
++ container_of(req->rq_xprt, struct sock_xprt, xprt);
++ struct sock *sk = transport->inet;
++ int ret = -EAGAIN;
++
++ lock_sock(sk);
++ if (!sk_stream_memory_free(sk))
++ ret = xs_nospace(req, transport);
++ release_sock(sk);
+ return ret;
+ }
+
+@@ -856,7 +880,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
+
+ /* Close the stream if the previous transmission was incomplete */
+ if (xs_send_request_was_aborted(transport, req)) {
+- xs_close(xprt);
++ xprt_force_disconnect(xprt);
+ return -ENOTCONN;
+ }
+
+@@ -887,14 +911,14 @@ static int xs_local_send_request(struct rpc_rqst *req)
+ case -ENOBUFS:
+ break;
+ case -EAGAIN:
+- status = xs_nospace(req);
++ status = xs_stream_nospace(req);
+ break;
+ default:
+ dprintk("RPC: sendmsg returned unrecognized error %d\n",
+ -status);
+ fallthrough;
+ case -EPIPE:
+- xs_close(xprt);
++ xprt_force_disconnect(xprt);
+ status = -ENOTCONN;
+ }
+
+@@ -963,7 +987,7 @@ process_status:
+ /* Should we call xs_close() here? */
+ break;
+ case -EAGAIN:
+- status = xs_nospace(req);
++ status = xs_sock_nospace(req);
+ break;
+ case -ENETUNREACH:
+ case -ENOBUFS:
+@@ -1083,7 +1107,7 @@ static int xs_tcp_send_request(struct rpc_rqst *req)
+ /* Should we call xs_close() here? */
+ break;
+ case -EAGAIN:
+- status = xs_nospace(req);
++ status = xs_stream_nospace(req);
+ break;
+ case -ECONNRESET:
+ case -ECONNREFUSED:
+@@ -1179,6 +1203,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
+
+ if (sk == NULL)
+ return;
++ /*
++ * Make sure we're calling this in a context from which it is safe
++ * to call __fput_sync(). In practice that means rpciod and the
++ * system workqueue.
++ */
++ if (!(current->flags & PF_WQ_WORKER)) {
++ WARN_ON_ONCE(1);
++ set_bit(XPRT_CLOSE_WAIT, &xprt->state);
++ return;
++ }
+
+ if (atomic_read(&transport->xprt.swapper))
+ sk_clear_memalloc(sk);
+@@ -1202,7 +1236,7 @@ static void xs_reset_transport(struct sock_xprt *transport)
+ mutex_unlock(&transport->recv_mutex);
+
+ trace_rpc_socket_close(xprt, sock);
+- fput(filp);
++ __fput_sync(filp);
+
+ xprt_disconnect_done(xprt);
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index efc84845bb6b0..75a6995913839 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1495,7 +1495,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
+ if (prot->version == TLS_1_3_VERSION ||
+ prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305)
+ memcpy(iv + iv_offset, tls_ctx->rx.iv,
+- crypto_aead_ivsize(ctx->aead_recv));
++ prot->iv_size + prot->salt_size);
+ else
+ memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
+
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index b888522f133b3..b2fdac96bab07 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -700,8 +700,12 @@ static bool cfg80211_find_ssid_match(struct cfg80211_colocated_ap *ap,
+
+ for (i = 0; i < request->n_ssids; i++) {
+ /* wildcard ssid in the scan request */
+- if (!request->ssids[i].ssid_len)
++ if (!request->ssids[i].ssid_len) {
++ if (ap->multi_bss && !ap->transmitted_bssid)
++ continue;
++
+ return true;
++ }
+
+ if (ap->ssid_len &&
+ ap->ssid_len == request->ssids[i].ssid_len) {
+@@ -827,6 +831,9 @@ static int cfg80211_scan_6ghz(struct cfg80211_registered_device *rdev)
+ !cfg80211_find_ssid_match(ap, request))
+ continue;
+
++ if (!request->n_ssids && ap->multi_bss && !ap->transmitted_bssid)
++ continue;
++
+ cfg80211_scan_req_add_chan(request, chan, true);
+ memcpy(scan_6ghz_params->bssid, ap->bssid, ETH_ALEN);
+ scan_6ghz_params->short_ssid = ap->short_ssid;
+diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
+index 1480910c792e2..de66e1cc07348 100644
+--- a/tools/build/feature/Makefile
++++ b/tools/build/feature/Makefile
+@@ -217,9 +217,16 @@ strip-libs = $(filter-out -l%,$(1))
+ PERL_EMBED_LDOPTS = $(shell perl -MExtUtils::Embed -e ldopts 2>/dev/null)
+ PERL_EMBED_LDFLAGS = $(call strip-libs,$(PERL_EMBED_LDOPTS))
+ PERL_EMBED_LIBADD = $(call grep-libs,$(PERL_EMBED_LDOPTS))
+-PERL_EMBED_CCOPTS = `perl -MExtUtils::Embed -e ccopts 2>/dev/null`
++PERL_EMBED_CCOPTS = $(shell perl -MExtUtils::Embed -e ccopts 2>/dev/null)
+ FLAGS_PERL_EMBED=$(PERL_EMBED_CCOPTS) $(PERL_EMBED_LDOPTS)
+
++ifeq ($(CC_NO_CLANG), 0)
++ PERL_EMBED_LDOPTS := $(filter-out -specs=%,$(PERL_EMBED_LDOPTS))
++ PERL_EMBED_CCOPTS := $(filter-out -flto=auto -ffat-lto-objects, $(PERL_EMBED_CCOPTS))
++ PERL_EMBED_CCOPTS := $(filter-out -specs=%,$(PERL_EMBED_CCOPTS))
++ FLAGS_PERL_EMBED += -Wno-compound-token-split-by-macro
++endif
++
+ $(OUTPUT)test-libperl.bin:
+ $(BUILD) $(FLAGS_PERL_EMBED)
+
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index f947b61b21071..b8b37fe760069 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -131,7 +131,7 @@ GLOBAL_SYM_COUNT = $(shell readelf -s --wide $(BPF_IN_SHARED) | \
+ sort -u | wc -l)
+ VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+ sed 's/\[.*\]//' | \
+- awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
++ awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}' | \
+ grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l)
+
+ CMD_TARGETS = $(LIB_TARGET) $(PC_FILE)
+@@ -194,7 +194,7 @@ check_abi: $(OUTPUT)libbpf.so $(VERSION_SCRIPT)
+ sort -u > $(OUTPUT)libbpf_global_syms.tmp; \
+ readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+ sed 's/\[.*\]//' | \
+- awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
++ awk '/GLOBAL/ && /DEFAULT/ && !/UND|ABS/ {print $$NF}'| \
+ grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | \
+ sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; \
+ diff -u $(OUTPUT)libbpf_global_syms.tmp \
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index e1b5056068828..122d79c8f4b42 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -112,6 +112,10 @@
+
+ #elif defined(bpf_target_s390)
+
++struct pt_regs___s390 {
++ unsigned long orig_gpr2;
++};
++
+ /* s390 provides user_pt_regs instead of struct pt_regs to userspace */
+ #define __PT_REGS_CAST(x) ((const user_pt_regs *)(x))
+ #define __PT_PARM1_REG gprs[2]
+@@ -124,6 +128,8 @@
+ #define __PT_RC_REG gprs[2]
+ #define __PT_SP_REG gprs[15]
+ #define __PT_IP_REG psw.addr
++#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; })
++#define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___s390 *)(x), orig_gpr2)
+
+ #elif defined(bpf_target_arm)
+
+@@ -140,6 +146,10 @@
+
+ #elif defined(bpf_target_arm64)
+
++struct pt_regs___arm64 {
++ unsigned long orig_x0;
++};
++
+ /* arm64 provides struct user_pt_regs instead of struct pt_regs to userspace */
+ #define __PT_REGS_CAST(x) ((const struct user_pt_regs *)(x))
+ #define __PT_PARM1_REG regs[0]
+@@ -152,6 +162,8 @@
+ #define __PT_RC_REG regs[0]
+ #define __PT_SP_REG sp
+ #define __PT_IP_REG pc
++#define PT_REGS_PARM1_SYSCALL(x) ({ _Pragma("GCC error \"use PT_REGS_PARM1_CORE_SYSCALL() instead\""); 0l; })
++#define PT_REGS_PARM1_CORE_SYSCALL(x) BPF_CORE_READ((const struct pt_regs___arm64 *)(x), orig_x0)
+
+ #elif defined(bpf_target_mips)
+
+@@ -178,6 +190,8 @@
+ #define __PT_RC_REG gpr[3]
+ #define __PT_SP_REG sp
+ #define __PT_IP_REG nip
++/* powerpc does not select ARCH_HAS_SYSCALL_WRAPPER. */
++#define PT_REGS_SYSCALL_REGS(ctx) ctx
+
+ #elif defined(bpf_target_sparc)
+
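
Background for the new bpf_tracing.h macros: on arm64 and s390 the kernel-internal struct pt_regs preserves the original first syscall argument in a field (orig_x0, orig_gpr2) that the exported user_pt_regs lacks, so BPF programs must read it with a CO-RE relocation against a locally re-declared struct, which is what PT_REGS_PARM1_CORE_SYSCALL() does; the plain variant is turned into a compile error. A hedged usage sketch (assumes a vmlinux.h-based build with libbpf's helper headers; the probed symbol is illustrative):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>
    #include <bpf/bpf_core_read.h>

    SEC("kprobe/__arm64_sys_openat")
    int BPF_KPROBE(probe_openat, struct pt_regs *regs)
    {
            /* regs is the syscall pt_regs handed to the arch wrapper;
             * the macro CO-RE-reads orig_x0 out of it. */
            long dirfd = PT_REGS_PARM1_CORE_SYSCALL(regs);

            bpf_printk("openat dirfd=%ld", dirfd);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";
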
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 7c33ec67c4a95..3470813daf4aa 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1090,6 +1090,17 @@ static void annotate_call_site(struct objtool_file *file,
+ : arch_nop_insn(insn->len));
+
+ insn->type = sibling ? INSN_RETURN : INSN_NOP;
++
++ if (sibling) {
++ /*
++ * We've replaced the tail-call JMP insn by two new
++ * insn: RET; INT3, except we only have a single struct
++ * insn here. Mark it retpoline_safe to avoid the SLS
++ * warning, instead of adding another insn.
++ */
++ insn->retpoline_safe = true;
++ }
++
+ return;
+ }
+
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 96ad944ca6a88..f3bf9297bcc03 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -272,6 +272,9 @@ ifdef PYTHON_CONFIG
+ PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil
+ PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null)
+ FLAGS_PYTHON_EMBED := $(PYTHON_EMBED_CCOPTS) $(PYTHON_EMBED_LDOPTS)
++ ifeq ($(CC_NO_CLANG), 0)
++ PYTHON_EMBED_CCOPTS := $(filter-out -ffat-lto-objects, $(PYTHON_EMBED_CCOPTS))
++ endif
+ endif
+
+ FEATURE_CHECK_CFLAGS-libpython := $(PYTHON_EMBED_CCOPTS)
+@@ -790,6 +793,9 @@ else
+ LDFLAGS += $(PERL_EMBED_LDFLAGS)
+ EXTLIBS += $(PERL_EMBED_LIBADD)
+ CFLAGS += -DHAVE_LIBPERL_SUPPORT
++ ifeq ($(CC_NO_CLANG), 0)
++ CFLAGS += -Wno-compound-token-split-by-macro
++ endif
+ $(call detected,CONFIG_LIBPERL)
+ endif
+ endif
+diff --git a/tools/perf/arch/arm64/util/arm-spe.c b/tools/perf/arch/arm64/util/arm-spe.c
+index 2100d46ccf5e6..bb4ab99afa7f8 100644
+--- a/tools/perf/arch/arm64/util/arm-spe.c
++++ b/tools/perf/arch/arm64/util/arm-spe.c
+@@ -239,6 +239,12 @@ static int arm_spe_recording_options(struct auxtrace_record *itr,
+ arm_spe_set_timestamp(itr, arm_spe_evsel);
+ }
+
++ /*
++ * Set this only so that perf report knows that SPE generates memory info. It has no effect
++ * on the opening of the event or the SPE data produced.
++ */
++ evsel__set_sample_bit(arm_spe_evsel, DATA_SRC);
++
+ /* Add dummy event to keep tracking */
+ err = parse_events(evlist, "dummy:u", NULL);
+ if (err)
+diff --git a/tools/perf/perf.c b/tools/perf/perf.c
+index 2f6b67189b426..6aae7b6c376b4 100644
+--- a/tools/perf/perf.c
++++ b/tools/perf/perf.c
+@@ -434,7 +434,7 @@ void pthread__unblock_sigwinch(void)
+ static int libperf_print(enum libperf_print_level level,
+ const char *fmt, va_list ap)
+ {
+- return eprintf(level, verbose, fmt, ap);
++ return veprintf(level, verbose, fmt, ap);
+ }
+
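
The perf.c change is a one-word fix for a classic variadic bug: libperf invokes the print callback with a va_list, and passing that va_list on to the variadic eprintf() makes it be consumed as the first variadic argument instead of as an argument list; only the v-prefixed variant may take it. A self-contained illustration of the correct pairing:

    #include <stdarg.h>
    #include <stdio.h>

    /* The va_list-taking variant does the real work. */
    static int log_vprintf(const char *fmt, va_list ap)
    {
            return vfprintf(stderr, fmt, ap);
    }

    /* The variadic wrapper forwards the va_list, never '...' by value. */
    static int log_printf(const char *fmt, ...)
    {
            va_list ap;
            int ret;

            va_start(ap, fmt);
            ret = log_vprintf(fmt, ap);
            va_end(ap);
            return ret;
    }
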
+ int main(int argc, const char **argv)
+diff --git a/tools/perf/tests/dwarf-unwind.c b/tools/perf/tests/dwarf-unwind.c
+index 2dab2d2620608..afdca7f2959f0 100644
+--- a/tools/perf/tests/dwarf-unwind.c
++++ b/tools/perf/tests/dwarf-unwind.c
+@@ -122,7 +122,7 @@ NO_TAIL_CALL_ATTRIBUTE noinline int test_dwarf_unwind__thread(struct thread *thr
+ }
+
+ err = unwind__get_entries(unwind_entry, &cnt, thread,
+- &sample, MAX_STACK);
++ &sample, MAX_STACK, false);
+ if (err)
+ pr_debug("unwind failed\n");
+ else if (cnt != MAX_STACK) {
+diff --git a/tools/perf/util/arm64-frame-pointer-unwind-support.c b/tools/perf/util/arm64-frame-pointer-unwind-support.c
+index 2242a885fbd73..4940be4a0569c 100644
+--- a/tools/perf/util/arm64-frame-pointer-unwind-support.c
++++ b/tools/perf/util/arm64-frame-pointer-unwind-support.c
+@@ -53,7 +53,7 @@ u64 get_leaf_frame_caller_aarch64(struct perf_sample *sample, struct thread *thr
+ sample->user_regs.cache_regs[PERF_REG_ARM64_SP] = 0;
+ }
+
+- ret = unwind__get_entries(add_entry, &entries, thread, sample, 2);
++ ret = unwind__get_entries(add_entry, &entries, thread, sample, 2, true);
+ sample->user_regs = old_regs;
+
+ if (ret || entries.length != 2)
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 3945500036937..564abe17a0bd2 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2983,7 +2983,7 @@ static int thread__resolve_callchain_unwind(struct thread *thread,
+ return 0;
+
+ return unwind__get_entries(unwind_entry, cursor,
+- thread, sample, max_stack);
++ thread, sample, max_stack, false);
+ }
+
+ int thread__resolve_callchain(struct thread *thread,
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 498b05708db57..245dc70d1882a 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -2084,6 +2084,7 @@ prefetch_event(char *buf, u64 head, size_t mmap_size,
+ bool needs_swap, union perf_event *error)
+ {
+ union perf_event *event;
++ u16 event_size;
+
+ /*
+ * Ensure we have enough space remaining to read
+@@ -2096,15 +2097,23 @@ prefetch_event(char *buf, u64 head, size_t mmap_size,
+ if (needs_swap)
+ perf_event_header__bswap(&event->header);
+
+- if (head + event->header.size <= mmap_size)
++ event_size = event->header.size;
++ if (head + event_size <= mmap_size)
+ return event;
+
+ /* We're not fetching the event so swap back again */
+ if (needs_swap)
+ perf_event_header__bswap(&event->header);
+
+- pr_debug("%s: head=%#" PRIx64 " event->header_size=%#x, mmap_size=%#zx:"
+- " fuzzed or compressed perf.data?\n",__func__, head, event->header.size, mmap_size);
++ /* Check if the event fits into the next mmapped buf. */
++ if (event_size <= mmap_size - head % page_size) {
++ /* Remap buf and fetch again. */
++ return NULL;
++ }
++
++ /* Invalid input. Event size should never exceed mmap_size. */
++ pr_debug("%s: head=%#" PRIx64 " event->header.size=%#x, mmap_size=%#zx:"
++ " fuzzed or compressed perf.data?\n", __func__, head, event_size, mmap_size);
+
+ return error;
+ }
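
The reworked prefetch_event() check distinguishes three outcomes where there used to be two: the event fits in the current map, the event would fit after remapping at the next page boundary (return NULL so the caller remaps), or the size is impossible and the input is corrupt. A simplified, self-contained restatement of that decision (types narrowed; the real code also handles byte swapping):

    #include <stddef.h>
    #include <stdint.h>

    enum fetch_result { FETCH_OK, FETCH_REMAP, FETCH_ERROR };

    static enum fetch_result check_event(uint64_t head, uint16_t event_size,
                                         size_t mmap_size, size_t page_size)
    {
            if (head + event_size <= mmap_size)
                    return FETCH_OK;     /* fully inside this window */
            if (event_size <= mmap_size - head % page_size)
                    return FETCH_REMAP;  /* fits once the buf is remapped */
            return FETCH_ERROR;          /* cannot fit in any window */
    }
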
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 483f05004e682..c255a2c90cd67 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -1,12 +1,14 @@
+-from os import getenv
++from os import getenv, path
+ from subprocess import Popen, PIPE
+ from re import sub
+
+ cc = getenv("CC")
+ cc_is_clang = b"clang version" in Popen([cc.split()[0], "-v"], stderr=PIPE).stderr.readline()
++src_feature_tests = getenv('srctree') + '/tools/build/feature'
+
+ def clang_has_option(option):
+- return [o for o in Popen([cc, option], stderr=PIPE).stderr.readlines() if b"unknown argument" in o] == [ ]
++ cc_output = Popen([cc, option, path.join(src_feature_tests, "test-hello.c") ], stderr=PIPE).stderr.readlines()
++ return [o for o in cc_output if ((b"unknown argument" in o) or (b"is not supported" in o))] == [ ]
+
+ if cc_is_clang:
+ from distutils.sysconfig import get_config_vars
+@@ -23,6 +25,8 @@ if cc_is_clang:
+ vars[var] = sub("-fstack-protector-strong", "", vars[var])
+ if not clang_has_option("-fno-semantic-interposition"):
+ vars[var] = sub("-fno-semantic-interposition", "", vars[var])
++ if not clang_has_option("-ffat-lto-objects"):
++ vars[var] = sub("-ffat-lto-objects", "", vars[var])
+
+ from distutils.core import setup, Extension
+
+diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
+index a74b517f74974..94aa40f6e3482 100644
+--- a/tools/perf/util/unwind-libdw.c
++++ b/tools/perf/util/unwind-libdw.c
+@@ -200,7 +200,8 @@ frame_callback(Dwfl_Frame *state, void *arg)
+ bool isactivation;
+
+ if (!dwfl_frame_pc(state, &pc, NULL)) {
+- pr_err("%s", dwfl_errmsg(-1));
++ if (!ui->best_effort)
++ pr_err("%s", dwfl_errmsg(-1));
+ return DWARF_CB_ABORT;
+ }
+
+@@ -208,7 +209,8 @@ frame_callback(Dwfl_Frame *state, void *arg)
+ report_module(pc, ui);
+
+ if (!dwfl_frame_pc(state, &pc, &isactivation)) {
+- pr_err("%s", dwfl_errmsg(-1));
++ if (!ui->best_effort)
++ pr_err("%s", dwfl_errmsg(-1));
+ return DWARF_CB_ABORT;
+ }
+
+@@ -222,7 +224,8 @@ frame_callback(Dwfl_Frame *state, void *arg)
+ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
+ struct thread *thread,
+ struct perf_sample *data,
+- int max_stack)
++ int max_stack,
++ bool best_effort)
+ {
+ struct unwind_info *ui, ui_buf = {
+ .sample = data,
+@@ -231,6 +234,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
+ .cb = cb,
+ .arg = arg,
+ .max_stack = max_stack,
++ .best_effort = best_effort
+ };
+ Dwarf_Word ip;
+ int err = -EINVAL, i;
+diff --git a/tools/perf/util/unwind-libdw.h b/tools/perf/util/unwind-libdw.h
+index 0cbd2650e280e..8c88bc4f2304b 100644
+--- a/tools/perf/util/unwind-libdw.h
++++ b/tools/perf/util/unwind-libdw.h
+@@ -20,6 +20,7 @@ struct unwind_info {
+ void *arg;
+ int max_stack;
+ int idx;
++ bool best_effort;
+ struct unwind_entry entries[];
+ };
+
+diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
+index 71a3533491815..41e29fc7648ae 100644
+--- a/tools/perf/util/unwind-libunwind-local.c
++++ b/tools/perf/util/unwind-libunwind-local.c
+@@ -96,6 +96,7 @@ struct unwind_info {
+ struct perf_sample *sample;
+ struct machine *machine;
+ struct thread *thread;
++ bool best_effort;
+ };
+
+ #define dw_read(ptr, type, end) ({ \
+@@ -553,7 +554,8 @@ static int access_reg(unw_addr_space_t __maybe_unused as,
+
+ ret = perf_reg_value(&val, &ui->sample->user_regs, id);
+ if (ret) {
+- pr_err("unwind: can't read reg %d\n", regnum);
++ if (!ui->best_effort)
++ pr_err("unwind: can't read reg %d\n", regnum);
+ return ret;
+ }
+
+@@ -666,7 +668,7 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
+ return -1;
+
+ ret = unw_init_remote(&c, addr_space, ui);
+- if (ret)
++ if (ret && !ui->best_effort)
+ display_error(ret);
+
+ while (!ret && (unw_step(&c) > 0) && i < max_stack) {
+@@ -704,12 +706,14 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
+
+ static int _unwind__get_entries(unwind_entry_cb_t cb, void *arg,
+ struct thread *thread,
+- struct perf_sample *data, int max_stack)
++ struct perf_sample *data, int max_stack,
++ bool best_effort)
+ {
+ struct unwind_info ui = {
+ .sample = data,
+ .thread = thread,
+ .machine = thread->maps->machine,
++ .best_effort = best_effort
+ };
+
+ if (!data->user_regs.regs)
+diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
+index e89a5479b3613..509c287ee7628 100644
+--- a/tools/perf/util/unwind-libunwind.c
++++ b/tools/perf/util/unwind-libunwind.c
+@@ -80,9 +80,11 @@ void unwind__finish_access(struct maps *maps)
+
+ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
+ struct thread *thread,
+- struct perf_sample *data, int max_stack)
++ struct perf_sample *data, int max_stack,
++ bool best_effort)
+ {
+ if (thread->maps->unwind_libunwind_ops)
+- return thread->maps->unwind_libunwind_ops->get_entries(cb, arg, thread, data, max_stack);
++ return thread->maps->unwind_libunwind_ops->get_entries(cb, arg, thread, data,
++ max_stack, best_effort);
+ return 0;
+ }
+diff --git a/tools/perf/util/unwind.h b/tools/perf/util/unwind.h
+index ab8ad469c8de5..b2a03fa5289b3 100644
+--- a/tools/perf/util/unwind.h
++++ b/tools/perf/util/unwind.h
+@@ -23,13 +23,19 @@ struct unwind_libunwind_ops {
+ void (*finish_access)(struct maps *maps);
+ int (*get_entries)(unwind_entry_cb_t cb, void *arg,
+ struct thread *thread,
+- struct perf_sample *data, int max_stack);
++ struct perf_sample *data, int max_stack, bool best_effort);
+ };
+
+ #ifdef HAVE_DWARF_UNWIND_SUPPORT
++/*
++ * When best_effort is set, don't report errors and fail silently. This could
++ * be expanded in the future to be more permissive about things other than
++ * error messages.
++ */
+ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
+ struct thread *thread,
+- struct perf_sample *data, int max_stack);
++ struct perf_sample *data, int max_stack,
++ bool best_effort);
+ /* libunwind specific */
+ #ifdef HAVE_LIBUNWIND_SUPPORT
+ #ifndef LIBUNWIND__ARCH_REG_ID
+@@ -65,7 +71,8 @@ unwind__get_entries(unwind_entry_cb_t cb __maybe_unused,
+ void *arg __maybe_unused,
+ struct thread *thread __maybe_unused,
+ struct perf_sample *data __maybe_unused,
+- int max_stack __maybe_unused)
++ int max_stack __maybe_unused,
++ bool best_effort __maybe_unused)
+ {
+ return 0;
+ }
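
All of the unwind plumbing above threads a single boolean through the entry points: a caller that unwinds speculatively (the arm64 leaf-frame helper asking for only two frames) passes best_effort so that expected failures stay silent, while ordinary callchain resolution keeps reporting errors. The essence, as a generic sketch rather than perf code:

    #include <stdbool.h>
    #include <stdio.h>

    struct unwind_ctx {
            bool best_effort;  /* caller tolerates failure: stay quiet */
    };

    static void unwind_report(const struct unwind_ctx *ctx, const char *msg)
    {
            if (!ctx->best_effort)
                    fprintf(stderr, "unwind: %s\n", msg);
    }
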
+diff --git a/tools/testing/selftests/bpf/progs/test_sk_lookup.c b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+index 83b0aaa52ef77..a0e0d85e29da7 100644
+--- a/tools/testing/selftests/bpf/progs/test_sk_lookup.c
++++ b/tools/testing/selftests/bpf/progs/test_sk_lookup.c
+@@ -412,8 +412,7 @@ int ctx_narrow_access(struct bpf_sk_lookup *ctx)
+
+ /* Narrow loads from remote_port field. Expect SRC_PORT. */
+ if (LSB(ctx->remote_port, 0) != ((SRC_PORT >> 0) & 0xff) ||
+- LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff) ||
+- LSB(ctx->remote_port, 2) != 0 || LSB(ctx->remote_port, 3) != 0)
++ LSB(ctx->remote_port, 1) != ((SRC_PORT >> 8) & 0xff))
+ return SK_DROP;
+ if (LSW(ctx->remote_port, 0) != SRC_PORT)
+ return SK_DROP;
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
+index ffa5502ad95ed..5f8296d29e778 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.c
++++ b/tools/testing/selftests/bpf/xdpxceiver.c
+@@ -266,22 +266,24 @@ static int xsk_configure_umem(struct xsk_umem_info *umem, void *buffer, u64 size
+ }
+
+ static int xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_info *umem,
+- struct ifobject *ifobject, u32 qid)
++ struct ifobject *ifobject, bool shared)
+ {
+- struct xsk_socket_config cfg;
++ struct xsk_socket_config cfg = {};
+ struct xsk_ring_cons *rxr;
+ struct xsk_ring_prod *txr;
+
+ xsk->umem = umem;
+ cfg.rx_size = xsk->rxqsize;
+ cfg.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS;
+- cfg.libbpf_flags = 0;
++ cfg.libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD;
+ cfg.xdp_flags = ifobject->xdp_flags;
+ cfg.bind_flags = ifobject->bind_flags;
++ if (shared)
++ cfg.bind_flags |= XDP_SHARED_UMEM;
+
+ txr = ifobject->tx_on ? &xsk->tx : NULL;
+ rxr = ifobject->rx_on ? &xsk->rx : NULL;
+- return xsk_socket__create(&xsk->xsk, ifobject->ifname, qid, umem->umem, rxr, txr, &cfg);
++ return xsk_socket__create(&xsk->xsk, ifobject->ifname, 0, umem->umem, rxr, txr, &cfg);
+ }
+
+ static struct option long_options[] = {
+@@ -387,7 +389,6 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
+ for (i = 0; i < MAX_INTERFACES; i++) {
+ struct ifobject *ifobj = i ? ifobj_rx : ifobj_tx;
+
+- ifobj->umem = &ifobj->umem_arr[0];
+ ifobj->xsk = &ifobj->xsk_arr[0];
+ ifobj->use_poll = false;
+ ifobj->pacing_on = true;
+@@ -401,11 +402,12 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
+ ifobj->tx_on = false;
+ }
+
++ memset(ifobj->umem, 0, sizeof(*ifobj->umem));
++ ifobj->umem->num_frames = DEFAULT_UMEM_BUFFERS;
++ ifobj->umem->frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
++
+ for (j = 0; j < MAX_SOCKETS; j++) {
+- memset(&ifobj->umem_arr[j], 0, sizeof(ifobj->umem_arr[j]));
+ memset(&ifobj->xsk_arr[j], 0, sizeof(ifobj->xsk_arr[j]));
+- ifobj->umem_arr[j].num_frames = DEFAULT_UMEM_BUFFERS;
+- ifobj->umem_arr[j].frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE;
+ ifobj->xsk_arr[j].rxqsize = XSK_RING_CONS__DEFAULT_NUM_DESCS;
+ }
+ }
+@@ -950,7 +952,10 @@ static void tx_stats_validate(struct ifobject *ifobject)
+
+ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
+ {
++ u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
+ int mmap_flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE;
++ int ret, ifindex;
++ void *bufs;
+ u32 i;
+
+ ifobject->ns_fd = switch_namespace(ifobject->nsname);
+@@ -958,23 +963,20 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
+ if (ifobject->umem->unaligned_mode)
+ mmap_flags |= MAP_HUGETLB;
+
+- for (i = 0; i < test->nb_sockets; i++) {
+- u64 umem_sz = ifobject->umem->num_frames * ifobject->umem->frame_size;
+- u32 ctr = 0;
+- void *bufs;
+- int ret;
++ bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
++ if (bufs == MAP_FAILED)
++ exit_with_error(errno);
+
+- bufs = mmap(NULL, umem_sz, PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
+- if (bufs == MAP_FAILED)
+- exit_with_error(errno);
++ ret = xsk_configure_umem(ifobject->umem, bufs, umem_sz);
++ if (ret)
++ exit_with_error(-ret);
+
+- ret = xsk_configure_umem(&ifobject->umem_arr[i], bufs, umem_sz);
+- if (ret)
+- exit_with_error(-ret);
++ for (i = 0; i < test->nb_sockets; i++) {
++ u32 ctr = 0;
+
+ while (ctr++ < SOCK_RECONF_CTR) {
+- ret = xsk_configure_socket(&ifobject->xsk_arr[i], &ifobject->umem_arr[i],
+- ifobject, i);
++ ret = xsk_configure_socket(&ifobject->xsk_arr[i], ifobject->umem,
++ ifobject, !!i);
+ if (!ret)
+ break;
+
+@@ -985,8 +987,22 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
+ }
+ }
+
+- ifobject->umem = &ifobject->umem_arr[0];
+ ifobject->xsk = &ifobject->xsk_arr[0];
++
++ if (!ifobject->rx_on)
++ return;
++
++ ifindex = if_nametoindex(ifobject->ifname);
++ if (!ifindex)
++ exit_with_error(errno);
++
++ ret = xsk_setup_xdp_prog(ifindex, &ifobject->xsk_map_fd);
++ if (ret)
++ exit_with_error(-ret);
++
++ ret = xsk_socket__update_xskmap(ifobject->xsk->xsk, ifobject->xsk_map_fd);
++ if (ret)
++ exit_with_error(-ret);
+ }
+
+ static void testapp_cleanup_xsk_res(struct ifobject *ifobj)
+@@ -1142,14 +1158,16 @@ static void testapp_bidi(struct test_spec *test)
+
+ static void swap_xsk_resources(struct ifobject *ifobj_tx, struct ifobject *ifobj_rx)
+ {
++ int ret;
++
+ xsk_socket__delete(ifobj_tx->xsk->xsk);
+- xsk_umem__delete(ifobj_tx->umem->umem);
+ xsk_socket__delete(ifobj_rx->xsk->xsk);
+- xsk_umem__delete(ifobj_rx->umem->umem);
+- ifobj_tx->umem = &ifobj_tx->umem_arr[1];
+ ifobj_tx->xsk = &ifobj_tx->xsk_arr[1];
+- ifobj_rx->umem = &ifobj_rx->umem_arr[1];
+ ifobj_rx->xsk = &ifobj_rx->xsk_arr[1];
++
++ ret = xsk_socket__update_xskmap(ifobj_rx->xsk->xsk, ifobj_rx->xsk_map_fd);
++ if (ret)
++ exit_with_error(-ret);
+ }
+
+ static void testapp_bpf_res(struct test_spec *test)
+@@ -1408,13 +1426,13 @@ static struct ifobject *ifobject_create(void)
+ if (!ifobj->xsk_arr)
+ goto out_xsk_arr;
+
+- ifobj->umem_arr = calloc(MAX_SOCKETS, sizeof(*ifobj->umem_arr));
+- if (!ifobj->umem_arr)
+- goto out_umem_arr;
++ ifobj->umem = calloc(1, sizeof(*ifobj->umem));
++ if (!ifobj->umem)
++ goto out_umem;
+
+ return ifobj;
+
+-out_umem_arr:
++out_umem:
+ free(ifobj->xsk_arr);
+ out_xsk_arr:
+ free(ifobj);
+@@ -1423,7 +1441,7 @@ out_xsk_arr:
+
+ static void ifobject_delete(struct ifobject *ifobj)
+ {
+- free(ifobj->umem_arr);
++ free(ifobj->umem);
+ free(ifobj->xsk_arr);
+ free(ifobj);
+ }
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.h b/tools/testing/selftests/bpf/xdpxceiver.h
+index 2f705f44b7483..62a3e63886325 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.h
++++ b/tools/testing/selftests/bpf/xdpxceiver.h
+@@ -125,10 +125,10 @@ struct ifobject {
+ struct xsk_socket_info *xsk;
+ struct xsk_socket_info *xsk_arr;
+ struct xsk_umem_info *umem;
+- struct xsk_umem_info *umem_arr;
+ thread_func_t func_ptr;
+ struct pkt_stream *pkt_stream;
+ int ns_fd;
++ int xsk_map_fd;
+ u32 dst_ip;
+ u32 src_ip;
+ u32 xdp_flags;
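
The xdpxceiver refactor replaces one UMEM per socket with a single UMEM shared by every socket on the interface: with the xsk API, each socket after the first must bind with XDP_SHARED_UMEM, and because sockets now come and go independently of the program, the test inhibits the implicit program load and installs the XDP program and xskmap entry itself (xsk_setup_xdp_prog() plus xsk_socket__update_xskmap()). A hedged sketch of creating one extra shared socket (5.17-era libbpf xsk.h; error handling reduced to the return code):

    #include <bpf/xsk.h>
    #include <linux/if_xdp.h>

    static int add_shared_socket(struct xsk_socket **out, const char *ifname,
                                 struct xsk_umem *umem,
                                 struct xsk_ring_cons *rx,
                                 struct xsk_ring_prod *tx)
    {
            struct xsk_socket_config cfg = {
                    .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
                    .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
                    .libbpf_flags = XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD,
                    .bind_flags = XDP_SHARED_UMEM,
            };

            return xsk_socket__create(out, ifname, /*queue_id=*/0, umem,
                                      rx, tx, &cfg);
    }
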
+diff --git a/tools/testing/selftests/kvm/aarch64/vgic_irq.c b/tools/testing/selftests/kvm/aarch64/vgic_irq.c
+index 7eca977999170..554ca649d4701 100644
+--- a/tools/testing/selftests/kvm/aarch64/vgic_irq.c
++++ b/tools/testing/selftests/kvm/aarch64/vgic_irq.c
+@@ -306,7 +306,8 @@ static void guest_restore_active(struct test_args *args,
+ uint32_t prio, intid, ap1r;
+ int i;
+
+- /* Set the priorities of the first (KVM_NUM_PRIOS - 1) IRQs
++ /*
++ * Set the priorities of the first (KVM_NUM_PRIOS - 1) IRQs
+ * in descending order, so intid+1 can preempt intid.
+ */
+ for (i = 0, prio = (num - 1) * 8; i < num; i++, prio -= 8) {
+@@ -315,7 +316,8 @@ static void guest_restore_active(struct test_args *args,
+ gic_set_priority(intid, prio);
+ }
+
+- /* In a real migration, KVM would restore all GIC state before running
++ /*
++ * In a real migration, KVM would restore all GIC state before running
+ * guest code.
+ */
+ for (i = 0; i < num; i++) {
+@@ -472,10 +474,10 @@ static void test_restore_active(struct test_args *args, struct kvm_inject_desc *
+ guest_restore_active(args, MIN_SPI, 4, f->cmd);
+ }
+
+-static void guest_code(struct test_args args)
++static void guest_code(struct test_args *args)
+ {
+- uint32_t i, nr_irqs = args.nr_irqs;
+- bool level_sensitive = args.level_sensitive;
++ uint32_t i, nr_irqs = args->nr_irqs;
++ bool level_sensitive = args->level_sensitive;
+ struct kvm_inject_desc *f, *inject_fns;
+
+ gic_init(GIC_V3, 1, dist, redist);
+@@ -484,11 +486,11 @@ static void guest_code(struct test_args args)
+ gic_irq_enable(i);
+
+ for (i = MIN_SPI; i < nr_irqs; i++)
+- gic_irq_set_config(i, !args.level_sensitive);
++ gic_irq_set_config(i, !level_sensitive);
+
+- gic_set_eoi_split(args.eoi_split);
++ gic_set_eoi_split(args->eoi_split);
+
+- reset_priorities(&args);
++ reset_priorities(args);
+ gic_set_priority_mask(CPU_PRIO_MASK);
+
+ inject_fns = level_sensitive ? inject_level_fns
+@@ -497,17 +499,18 @@ static void guest_code(struct test_args args)
+ local_irq_enable();
+
+ /* Start the tests. */
+- for_each_supported_inject_fn(&args, inject_fns, f) {
+- test_injection(&args, f);
+- test_preemption(&args, f);
+- test_injection_failure(&args, f);
++ for_each_supported_inject_fn(args, inject_fns, f) {
++ test_injection(args, f);
++ test_preemption(args, f);
++ test_injection_failure(args, f);
+ }
+
+- /* Restore the active state of IRQs. This would happen when live
++ /*
++ * Restore the active state of IRQs. This would happen when live
+ * migrating IRQs in the middle of being handled.
+ */
+- for_each_supported_activate_fn(&args, set_active_fns, f)
+- test_restore_active(&args, f);
++ for_each_supported_activate_fn(args, set_active_fns, f)
++ test_restore_active(args, f);
+
+ GUEST_DONE();
+ }
+@@ -573,8 +576,8 @@ static void kvm_set_gsi_routing_irqchip_check(struct kvm_vm *vm,
+ kvm_gsi_routing_write(vm, routing);
+ } else {
+ ret = _kvm_gsi_routing_write(vm, routing);
+- /* The kernel only checks for KVM_IRQCHIP_NUM_PINS. */
+- if (intid >= KVM_IRQCHIP_NUM_PINS)
++ /* The kernel only checks e->irqchip.pin >= KVM_IRQCHIP_NUM_PINS */
++ if (((uint64_t)intid + num - 1 - MIN_SPI) >= KVM_IRQCHIP_NUM_PINS)
+ TEST_ASSERT(ret != 0 && errno == EINVAL,
+ "Bad intid %u did not cause KVM_SET_GSI_ROUTING "
+ "error: rc: %i errno: %i", intid, ret, errno);
+@@ -739,6 +742,7 @@ static void test_vgic(uint32_t nr_irqs, bool level_sensitive, bool eoi_split)
+ int gic_fd;
+ struct kvm_vm *vm;
+ struct kvm_inject_args inject_args;
++ vm_vaddr_t args_gva;
+
+ struct test_args args = {
+ .nr_irqs = nr_irqs,
+@@ -757,7 +761,9 @@ static void test_vgic(uint32_t nr_irqs, bool level_sensitive, bool eoi_split)
+ vcpu_init_descriptor_tables(vm, VCPU_ID);
+
+ /* Setup the guest args page (so it gets the args). */
+- vcpu_args_set(vm, 0, 1, args);
++ args_gva = vm_vaddr_alloc_page(vm);
++ memcpy(addr_gva2hva(vm, args_gva), &args, sizeof(args));
++ vcpu_args_set(vm, 0, 1, args_gva);
+
+ gic_fd = vgic_v3_setup(vm, 1, nr_irqs,
+ GICD_BASE_GPA, GICR_BASE_GPA);
+@@ -841,7 +847,8 @@ int main(int argc, char **argv)
+ }
+ }
+
+- /* If the user just specified nr_irqs and/or gic_version, then run all
++ /*
++ * If the user just specified nr_irqs and/or gic_version, then run all
+ * combinations.
+ */
+ if (default_args) {
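
Beyond the comment reflow, the functional fix in vgic_irq.c is that guest_code() now takes a pointer: vcpu_args_set() marshals register-sized integers, so passing struct test_args by value silently truncates it. The host instead copies the struct into a guest page and hands over its GVA. A hedged sketch of that host-side step (kvm selftest helpers as used above, inside the test's translation unit):

    static void set_guest_args(struct kvm_vm *vm, uint32_t vcpu_id,
                               const struct test_args *args)
    {
            vm_vaddr_t gva = vm_vaddr_alloc_page(vm);

            memcpy(addr_gva2hva(vm, gva), args, sizeof(*args));
            vcpu_args_set(vm, vcpu_id, /*num_args=*/1, gva);
    }
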
+diff --git a/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
+index 00f613c0583cd..263bf3ed8fd55 100644
+--- a/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
++++ b/tools/testing/selftests/kvm/lib/aarch64/gic_v3.c
+@@ -19,7 +19,7 @@ struct gicv3_data {
+ unsigned int nr_spis;
+ };
+
+-#define sgi_base_from_redist(redist_base) (redist_base + SZ_64K)
++#define sgi_base_from_redist(redist_base) (redist_base + SZ_64K)
+ #define DIST_BIT (1U << 31)
+
+ enum gicv3_intid_range {
+@@ -105,7 +105,8 @@ static void gicv3_set_eoi_split(bool split)
+ {
+ uint32_t val;
+
+- /* All other fields are read-only, so no need to read CTLR first. In
++ /*
++ * All other fields are read-only, so no need to read CTLR first. In
+ * fact, the kernel does the same.
+ */
+ val = split ? (1U << 1) : 0;
+@@ -159,9 +160,10 @@ static void gicv3_access_reg(uint32_t intid, uint64_t offset,
+ uint32_t cpu_or_dist;
+
+ GUEST_ASSERT(bits_per_field <= reg_bits);
+- GUEST_ASSERT(*val < (1U << bits_per_field));
+- /* Some registers like IROUTER are 64 bit long. Those are currently not
+- * supported by readl nor writel, so just asserting here until then.
++ GUEST_ASSERT(!write || *val < (1U << bits_per_field));
++ /*
++ * This function does not support 64 bit accesses. Just asserting here
++ * until we implement readq/writeq.
+ */
+ GUEST_ASSERT(reg_bits == 32);
+
+diff --git a/tools/testing/selftests/kvm/lib/aarch64/vgic.c b/tools/testing/selftests/kvm/lib/aarch64/vgic.c
+index f5cd0c536d85c..5d45046c1b805 100644
+--- a/tools/testing/selftests/kvm/lib/aarch64/vgic.c
++++ b/tools/testing/selftests/kvm/lib/aarch64/vgic.c
+@@ -140,9 +140,6 @@ static void vgic_poke_irq(int gic_fd, uint32_t intid,
+ uint64_t val;
+ bool intid_is_private = INTID_IS_SGI(intid) || INTID_IS_PPI(intid);
+
+- /* Check that the addr part of the attr is within 32 bits. */
+- assert(attr <= KVM_DEV_ARM_VGIC_OFFSET_MASK);
+-
+ uint32_t group = intid_is_private ? KVM_DEV_ARM_VGIC_GRP_REDIST_REGS
+ : KVM_DEV_ARM_VGIC_GRP_DIST_REGS;
+
+@@ -152,7 +149,11 @@ static void vgic_poke_irq(int gic_fd, uint32_t intid,
+ attr += SZ_64K;
+ }
+
+- /* All calls will succeed, even with invalid intid's, as long as the
++ /* Check that the addr part of the attr is within 32 bits. */
++ assert((attr & ~KVM_DEV_ARM_VGIC_OFFSET_MASK) == 0);
++
++ /*
++ * All calls will succeed, even with invalid intid's, as long as the
+ * addr part of the attr is within 32 bits (checked above). An invalid
+ * intid will just make the read/writes point to above the intended
+ * register space (i.e., ICPENDR after ISPENDR).
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 246168310a754..610cc7920c8a2 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -439,8 +439,8 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
+
+ static void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
+ {
+- kvm_dirty_ring_free(&vcpu->dirty_ring);
+ kvm_arch_vcpu_destroy(vcpu);
++ kvm_dirty_ring_free(&vcpu->dirty_ring);
+
+ /*
+ * No need for rcu_read_lock as VCPU_RUN is the only place that changes
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-12 19:12 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-12 19:12 UTC (permalink / raw
To: gentoo-commits
commit: 8a23feda2f868706ed10366da08ed63d7223976f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 12 19:11:29 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 12 19:12:43 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8a23feda
Remove deprecated select AUTOFS4_FS
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 9eefdc31..9843c3e2 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -8,7 +8,7 @@
+source "distro/Kconfig"
--- /dev/null 2022-04-12 05:39:54.696333295 -0400
+++ b/distro/Kconfig 2022-04-12 13:21:04.666379519 -0400
-@@ -0,0 +1,286 @@
+@@ -0,0 +1,285 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -121,7 +121,6 @@
+
+ depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
+
-+ select AUTOFS4_FS
+ select AUTOFS_FS
+ select BLK_DEV_BSG
+ select BPF_SYSCALL
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-12 17:40 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-12 17:40 UTC (permalink / raw
To: gentoo-commits
commit: 7045f1ebeb75f5974d8d390916b3727296e22d46
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 12 17:38:27 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 12 17:39:59 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7045f1eb
Select AUTOFS_FS when GENTOO_LINUX_INIT_SYSTEMD selected
Bug: https://bugs.gentoo.org/838082
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 3712fa96..9eefdc31 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig 2021-06-04 19:03:33.646823432 -0400
-+++ b/Kconfig 2021-06-04 19:03:40.508892817 -0400
+--- a/Kconfig 2022-04-12 13:11:48.403113171 -0400
++++ b/Kconfig 2022-04-12 13:12:36.530084675 -0400
@@ -30,3 +30,5 @@ source "lib/Kconfig"
source "lib/Kconfig.debug"
source "Documentation/Kconfig"
+
+source "distro/Kconfig"
---- /dev/null 2022-01-29 13:28:12.679255142 -0500
-+++ b/distro/Kconfig 2022-01-29 15:29:29.800465617 -0500
-@@ -0,0 +1,285 @@
+--- /dev/null 2022-04-12 05:39:54.696333295 -0400
++++ b/distro/Kconfig 2022-04-12 13:21:04.666379519 -0400
+@@ -0,0 +1,286 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -122,6 +122,7 @@
+ depends on GENTOO_LINUX && GENTOO_LINUX_UDEV
+
+ select AUTOFS4_FS
++ select AUTOFS_FS
+ select BLK_DEV_BSG
+ select BPF_SYSCALL
+ select CGROUP_BPF
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-08 13:09 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-08 13:09 UTC (permalink / raw
To: gentoo-commits
commit: 90bdc1445c17fabda7246e1e43ee93e26fba1c95
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 8 13:09:32 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 8 13:09:32 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=90bdc144
Removed redundant patches
Removed:
2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 -
...e-fix-possible-probe-failure-after-reboot.patch | 436 ---------------------
...rework-fix-info-leak-with-dma_from_device.patch | 187 ---------
3 files changed, 631 deletions(-)
diff --git a/0000_README b/0000_README
index 07650c38..18d9a104 100644
--- a/0000_README
+++ b/0000_README
@@ -63,14 +63,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
-From: https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
-Desc: mt76: mt7921e: fix possible probe failure after reboot
-
-Patch: 2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
-Desc: Revert swiotlb: rework fix info leak with DMA_FROM_DEVICE
-
Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requies REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
diff --git a/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch b/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
deleted file mode 100644
index 4440e910..00000000
--- a/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
+++ /dev/null
@@ -1,436 +0,0 @@
-From patchwork Fri Jan 7 07:30:03 2022
-Content-Type: text/plain; charset="utf-8"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 7bit
-X-Patchwork-Submitter: Sean Wang <sean.wang@mediatek.com>
-X-Patchwork-Id: 12706336
-X-Patchwork-Delegate: nbd@nbd.name
-Return-Path: <linux-wireless-owner@kernel.org>
-X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
- aws-us-west-2-korg-lkml-1.web.codeaurora.org
-Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
- by smtp.lore.kernel.org (Postfix) with ESMTP id 21BAFC433F5
- for <linux-wireless@archiver.kernel.org>;
- Fri, 7 Jan 2022 07:30:14 +0000 (UTC)
-Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
- id S234590AbiAGHaN (ORCPT
- <rfc822;linux-wireless@archiver.kernel.org>);
- Fri, 7 Jan 2022 02:30:13 -0500
-Received: from mailgw01.mediatek.com ([60.244.123.138]:50902 "EHLO
- mailgw01.mediatek.com" rhost-flags-OK-FAIL-OK-FAIL) by vger.kernel.org
- with ESMTP id S234589AbiAGHaM (ORCPT
- <rfc822;linux-wireless@vger.kernel.org>);
- Fri, 7 Jan 2022 02:30:12 -0500
-X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
-X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
-Received: from mtkcas10.mediatek.inc [(172.21.101.39)] by
- mailgw01.mediatek.com
- (envelope-from <sean.wang@mediatek.com>)
- (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-SHA384 256/256)
- with ESMTP id 982418171; Fri, 07 Jan 2022 15:30:06 +0800
-Received: from mtkcas10.mediatek.inc (172.21.101.39) by
- mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server
- (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.792.3;
- Fri, 7 Jan 2022 15:30:05 +0800
-Received: from mtkswgap22.mediatek.inc (172.21.77.33) by mtkcas10.mediatek.inc
- (172.21.101.73) with Microsoft SMTP Server id 15.0.1497.2 via Frontend
- Transport; Fri, 7 Jan 2022 15:30:04 +0800
-From: <sean.wang@mediatek.com>
-To: <nbd@nbd.name>, <lorenzo.bianconi@redhat.com>
-CC: <sean.wang@mediatek.com>, <Soul.Huang@mediatek.com>,
- <YN.Chen@mediatek.com>, <Leon.Yen@mediatek.com>,
- <Eric-SY.Chang@mediatek.com>, <Mark-YW.Chen@mediatek.com>,
- <Deren.Wu@mediatek.com>, <km.lin@mediatek.com>,
- <jenhao.yang@mediatek.com>, <robin.chiu@mediatek.com>,
- <Eddie.Chen@mediatek.com>, <ch.yeh@mediatek.com>,
- <posh.sun@mediatek.com>, <ted.huang@mediatek.com>,
- <Eric.Liang@mediatek.com>, <Stella.Chang@mediatek.com>,
- <Tom.Chou@mediatek.com>, <steve.lee@mediatek.com>,
- <jsiuda@google.com>, <frankgor@google.com>, <jemele@google.com>,
- <abhishekpandit@google.com>, <shawnku@google.com>,
- <linux-wireless@vger.kernel.org>,
- <linux-mediatek@lists.infradead.org>,
- "Deren Wu" <deren.wu@mediatek.com>
-Subject: [PATCH] mt76: mt7921e: fix possible probe failure after reboot
-Date: Fri, 7 Jan 2022 15:30:03 +0800
-Message-ID:
- <70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com>
-X-Mailer: git-send-email 1.7.9.5
-MIME-Version: 1.0
-X-MTK: N
-Precedence: bulk
-List-ID: <linux-wireless.vger.kernel.org>
-X-Mailing-List: linux-wireless@vger.kernel.org
-
-From: Sean Wang <sean.wang@mediatek.com>
-
-It doesn't guarantee the mt7921e gets started with ASPM L0 after each
-machine reboot on every platform.
-
-If the mt7921e does not start in ASPM L0, the driver can hit
-intermittent failures in mt7921_pci_probe, such as reading a bogus
-chip identifier
-
-[ 215.514503] mt7921e 0000:05:00.0: ASIC revision: feed0000
-[ 216.604741] mt7921e: probe of 0000:05:00.0 failed with error -110
-
-or failing to init the hardware, because the driver is not allowed to
-access the registers until the device is in the ASPM L0 state. So, we
-call __mt7921e_mcu_drv_pmctrl early in mt7921_pci_probe to force the
-device back to the L0 state so that we can safely access registers in
-any case.
-
-In the patch, we move these functions from dma.c to pci.c and register
-the mt76 bus operations earlier, which __mt7921e_mcu_drv_pmctrl depends
-on.
-
-Fixes: bf3747ae2e25 ("mt76: mt7921: enable aspm by default")
-Reported-by: Kai-Chuan Hsieh <kaichuan.hsieh@canonical.com>
-Co-developed-by: Deren Wu <deren.wu@mediatek.com>
-Signed-off-by: Deren Wu <deren.wu@mediatek.com>
-Signed-off-by: Sean Wang <sean.wang@mediatek.com>
----
- .../net/wireless/mediatek/mt76/mt7921/dma.c | 119 -----------------
- .../wireless/mediatek/mt76/mt7921/mt7921.h | 1 +
- .../net/wireless/mediatek/mt76/mt7921/pci.c | 124 ++++++++++++++++++
- .../wireless/mediatek/mt76/mt7921/pci_mcu.c | 18 ++-
- 4 files changed, 139 insertions(+), 123 deletions(-)
-
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
-index cdff1fd52d93..39d6ce4ecddd 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
-@@ -78,110 +78,6 @@ static void mt7921_dma_prefetch(struct mt7921_dev *dev)
- mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
- }
-
--static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
--{
-- static const struct {
-- u32 phys;
-- u32 mapped;
-- u32 size;
-- } fixed_map[] = {
-- { 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
-- { 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
-- { 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
-- { 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
-- { 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
-- { 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
-- { 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
-- { 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
-- { 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
-- { 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
-- { 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
-- { 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
-- { 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
-- { 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
-- { 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
-- { 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
-- { 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
-- { 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
-- { 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
-- { 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
-- { 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
-- { 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
-- { 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
-- { 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
-- { 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
-- { 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
-- { 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
-- { 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
-- { 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
-- { 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
-- { 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
-- { 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
-- { 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
-- { 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
-- { 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
-- { 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
-- { 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
-- { 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
-- { 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
-- { 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
-- { 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
-- { 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
-- { 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
-- };
-- int i;
--
-- if (addr < 0x100000)
-- return addr;
--
-- for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
-- u32 ofs;
--
-- if (addr < fixed_map[i].phys)
-- continue;
--
-- ofs = addr - fixed_map[i].phys;
-- if (ofs > fixed_map[i].size)
-- continue;
--
-- return fixed_map[i].mapped + ofs;
-- }
--
-- if ((addr >= 0x18000000 && addr < 0x18c00000) ||
-- (addr >= 0x70000000 && addr < 0x78000000) ||
-- (addr >= 0x7c000000 && addr < 0x7c400000))
-- return mt7921_reg_map_l1(dev, addr);
--
-- dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
-- addr);
--
-- return 0;
--}
--
--static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
--{
-- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-- u32 addr = __mt7921_reg_addr(dev, offset);
--
-- return dev->bus_ops->rr(mdev, addr);
--}
--
--static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
--{
-- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-- u32 addr = __mt7921_reg_addr(dev, offset);
--
-- dev->bus_ops->wr(mdev, addr, val);
--}
--
--static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
--{
-- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-- u32 addr = __mt7921_reg_addr(dev, offset);
--
-- return dev->bus_ops->rmw(mdev, addr, mask, val);
--}
--
- static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
- {
- if (force) {
-@@ -341,23 +237,8 @@ int mt7921_wpdma_reinit_cond(struct mt7921_dev *dev)
-
- int mt7921_dma_init(struct mt7921_dev *dev)
- {
-- struct mt76_bus_ops *bus_ops;
- int ret;
-
-- dev->phy.dev = dev;
-- dev->phy.mt76 = &dev->mt76.phy;
-- dev->mt76.phy.priv = &dev->phy;
-- dev->bus_ops = dev->mt76.bus;
-- bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
-- GFP_KERNEL);
-- if (!bus_ops)
-- return -ENOMEM;
--
-- bus_ops->rr = mt7921_rr;
-- bus_ops->wr = mt7921_wr;
-- bus_ops->rmw = mt7921_rmw;
-- dev->mt76.bus = bus_ops;
--
- mt76_dma_attach(&dev->mt76);
-
- ret = mt7921_dma_disable(dev, true);
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
-index 8b674e042568..63e3c7ef5e89 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
-@@ -443,6 +443,7 @@ int mt7921e_mcu_init(struct mt7921_dev *dev);
- int mt7921s_wfsys_reset(struct mt7921_dev *dev);
- int mt7921s_mac_reset(struct mt7921_dev *dev);
- int mt7921s_init_reset(struct mt7921_dev *dev);
-+int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
- int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
- int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev);
-
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
-index 1ae0d5826ca7..a0c82d19c4d9 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
-@@ -121,6 +121,110 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
- mt76_free_device(&dev->mt76);
- }
-
-+static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
-+{
-+ static const struct {
-+ u32 phys;
-+ u32 mapped;
-+ u32 size;
-+ } fixed_map[] = {
-+ { 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
-+ { 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
-+ { 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
-+ { 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
-+ { 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
-+ { 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
-+ { 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
-+ { 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
-+ { 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
-+ { 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
-+ { 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
-+ { 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
-+ { 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
-+ { 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
-+ { 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
-+ { 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
-+ { 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
-+ { 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
-+ { 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
-+ { 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
-+ { 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
-+ { 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
-+ { 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
-+ { 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
-+ { 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
-+ { 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
-+ { 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
-+ { 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
-+ { 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
-+ { 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
-+ { 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
-+ { 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
-+ { 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
-+ { 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
-+ { 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
-+ { 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
-+ { 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
-+ { 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
-+ { 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
-+ { 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
-+ { 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
-+ { 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
-+ { 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
-+ };
-+ int i;
-+
-+ if (addr < 0x100000)
-+ return addr;
-+
-+ for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
-+ u32 ofs;
-+
-+ if (addr < fixed_map[i].phys)
-+ continue;
-+
-+ ofs = addr - fixed_map[i].phys;
-+ if (ofs > fixed_map[i].size)
-+ continue;
-+
-+ return fixed_map[i].mapped + ofs;
-+ }
-+
-+ if ((addr >= 0x18000000 && addr < 0x18c00000) ||
-+ (addr >= 0x70000000 && addr < 0x78000000) ||
-+ (addr >= 0x7c000000 && addr < 0x7c400000))
-+ return mt7921_reg_map_l1(dev, addr);
-+
-+ dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
-+ addr);
-+
-+ return 0;
-+}
-+
-+static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
-+{
-+ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-+ u32 addr = __mt7921_reg_addr(dev, offset);
-+
-+ return dev->bus_ops->rr(mdev, addr);
-+}
-+
-+static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
-+{
-+ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-+ u32 addr = __mt7921_reg_addr(dev, offset);
-+
-+ dev->bus_ops->wr(mdev, addr, val);
-+}
-+
-+static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
-+{
-+ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
-+ u32 addr = __mt7921_reg_addr(dev, offset);
-+
-+ return dev->bus_ops->rmw(mdev, addr, mask, val);
-+}
-+
- static int mt7921_pci_probe(struct pci_dev *pdev,
- const struct pci_device_id *id)
- {
-@@ -152,6 +256,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
- .fw_own = mt7921e_mcu_fw_pmctrl,
- };
-
-+ struct mt76_bus_ops *bus_ops;
- struct mt7921_dev *dev;
- struct mt76_dev *mdev;
- int ret;
-@@ -189,6 +294,25 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
-
- mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
- tasklet_init(&dev->irq_tasklet, mt7921_irq_tasklet, (unsigned long)dev);
-+
-+ dev->phy.dev = dev;
-+ dev->phy.mt76 = &dev->mt76.phy;
-+ dev->mt76.phy.priv = &dev->phy;
-+ dev->bus_ops = dev->mt76.bus;
-+ bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
-+ GFP_KERNEL);
-+ if (!bus_ops)
-+ return -ENOMEM;
-+
-+ bus_ops->rr = mt7921_rr;
-+ bus_ops->wr = mt7921_wr;
-+ bus_ops->rmw = mt7921_rmw;
-+ dev->mt76.bus = bus_ops;
-+
-+ ret = __mt7921e_mcu_drv_pmctrl(dev);
-+ if (ret)
-+ return ret;
-+
- mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
- (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
- dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
-index f9e350b67fdc..36669e5aeef3 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
-@@ -59,10 +59,8 @@ int mt7921e_mcu_init(struct mt7921_dev *dev)
- return err;
- }
-
--int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
-+int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
- {
-- struct mt76_phy *mphy = &dev->mt76.phy;
-- struct mt76_connac_pm *pm = &dev->pm;
- int i, err = 0;
-
- for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
-@@ -75,9 +73,21 @@ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
- if (i == MT7921_DRV_OWN_RETRY_COUNT) {
- dev_err(dev->mt76.dev, "driver own failed\n");
- err = -EIO;
-- goto out;
- }
-
-+ return err;
-+}
-+
-+int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
-+{
-+ struct mt76_phy *mphy = &dev->mt76.phy;
-+ struct mt76_connac_pm *pm = &dev->pm;
-+ int err;
-+
-+ err = __mt7921e_mcu_drv_pmctrl(dev);
-+ if (err < 0)
-+ goto out;
-+
- mt7921_wpdma_reinit_cond(dev);
- clear_bit(MT76_STATE_PM, &mphy->state);
-
diff --git a/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch b/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
deleted file mode 100644
index 69476ab1..00000000
--- a/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
+++ /dev/null
@@ -1,187 +0,0 @@
-From bddac7c1e02ba47f0570e494c9289acea3062cc1 Mon Sep 17 00:00:00 2001
-From: Linus Torvalds <torvalds@linux-foundation.org>
-Date: Sat, 26 Mar 2022 10:42:04 -0700
-Subject: Revert "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-From: Linus Torvalds <torvalds@linux-foundation.org>
-
-commit bddac7c1e02ba47f0570e494c9289acea3062cc1 upstream.
-
-This reverts commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13.
-
-It turns out this breaks at least the ath9k wireless driver, and
-possibly others.
-
-What the ath9k driver does on packet receive is to set up the DMA
-transfer with:
-
- int ath_rx_init(..)
- ..
- bf->bf_buf_addr = dma_map_single(sc->dev, skb->data,
- common->rx_bufsize,
- DMA_FROM_DEVICE);
-
-and then the receive logic (through ath_rx_tasklet()) will fetch
-incoming packets
-
- static bool ath_edma_get_buffers(..)
- ..
- dma_sync_single_for_cpu(sc->dev, bf->bf_buf_addr,
- common->rx_bufsize, DMA_FROM_DEVICE);
-
- ret = ath9k_hw_process_rxdesc_edma(ah, rs, skb->data);
- if (ret == -EINPROGRESS) {
- /*let device gain the buffer again*/
- dma_sync_single_for_device(sc->dev, bf->bf_buf_addr,
- common->rx_bufsize, DMA_FROM_DEVICE);
- return false;
- }
-
-and it's worth noting how that first DMA sync:
-
- dma_sync_single_for_cpu(..DMA_FROM_DEVICE);
-
-is there to make sure the CPU can read the DMA buffer (possibly by
-copying it from the bounce buffer area, or by doing some cache flush).
-The iommu correctly turns that into a "copy from bounce bufer" so that
-the driver can look at the state of the packets.
-
-In the meantime, the device may continue to write to the DMA buffer, but
-we at least have a snapshot of the state due to that first DMA sync.
-
-But that _second_ DMA sync:
-
- dma_sync_single_for_device(..DMA_FROM_DEVICE);
-
-is telling the DMA mapping that the CPU wasn't interested in the area
-because the packet wasn't there. In the case of a DMA bounce buffer,
-that is a no-op.
-
-Note how it's not a sync for the CPU (the "for_device()" part), and it's
-not a sync for data written by the CPU (the "DMA_FROM_DEVICE" part).
-
-Or rather, it _should_ be a no-op. That's what commit aa6f8dcbab47
-broke: it made the code bounce the buffer unconditionally, and changed
-the DMA_FROM_DEVICE to just unconditionally and illogically be
-DMA_TO_DEVICE.
-
-[ Side note: purely within the confines of the swiotlb driver it wasn't
- entirely illogical: The reason it did that odd DMA_FROM_DEVICE ->
- DMA_TO_DEVICE conversion thing is because inside the swiotlb driver,
- it uses just a swiotlb_bounce() helper that doesn't care about the
- whole distinction of who the sync is for - only which direction to
- bounce.
-
- So it took the "sync for device" to mean that the CPU must have been
- the one writing, and thought it meant DMA_TO_DEVICE. ]
-
-Also note how the commentary in that commit was wrong, probably due to
-that whole confusion, claiming that the commit makes the swiotlb code
-
- "bounce unconditionally (that is, also
- when dir == DMA_TO_DEVICE) in order do avoid synchronising back stale
- data from the swiotlb buffer"
-
-which is nonsensical for two reasons:
-
- - that "also when dir == DMA_TO_DEVICE" is nonsensical, as that was
- exactly when it always did - and should do - the bounce.
-
- - since this is a sync for the device (not for the CPU), we're clearly
- fundamentally not coping back stale data from the bounce buffers at
- all, because we'd be copying *to* the bounce buffers.
-
-So that commit was just very confused. It confused the direction of the
-synchronization (to the device, not the cpu) with the direction of the
-DMA (from the device).
-
-Reported-and-bisected-by: Oleksandr Natalenko <oleksandr@natalenko.name>
-Reported-by: Olha Cherevyk <olha.cherevyk@gmail.com>
-Cc: Halil Pasic <pasic@linux.ibm.com>
-Cc: Christoph Hellwig <hch@lst.de>
-Cc: Kalle Valo <kvalo@kernel.org>
-Cc: Robin Murphy <robin.murphy@arm.com>
-Cc: Toke Høiland-Jørgensen <toke@toke.dk>
-Cc: Maxime Bizon <mbizon@freebox.fr>
-Cc: Johannes Berg <johannes@sipsolutions.net>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- Documentation/core-api/dma-attributes.rst | 8 ++++++++
- include/linux/dma-mapping.h | 8 ++++++++
- kernel/dma/swiotlb.c | 23 ++++++++---------------
- 3 files changed, 24 insertions(+), 15 deletions(-)
-
---- a/Documentation/core-api/dma-attributes.rst
-+++ b/Documentation/core-api/dma-attributes.rst
-@@ -130,3 +130,11 @@ accesses to DMA buffers in both privileg
- subsystem that the buffer is fully accessible at the elevated privilege
- level (and ideally inaccessible or at least read-only at the
- lesser-privileged levels).
-+
-+DMA_ATTR_OVERWRITE
-+------------------
-+
-+This is a hint to the DMA-mapping subsystem that the device is expected to
-+overwrite the entire mapped size, thus the caller does not require any of the
-+previous buffer contents to be preserved. This allows bounce-buffering
-+implementations to optimise DMA_FROM_DEVICE transfers.
---- a/include/linux/dma-mapping.h
-+++ b/include/linux/dma-mapping.h
-@@ -62,6 +62,14 @@
- #define DMA_ATTR_PRIVILEGED (1UL << 9)
-
- /*
-+ * This is a hint to the DMA-mapping subsystem that the device is expected
-+ * to overwrite the entire mapped size, thus the caller does not require any
-+ * of the previous buffer contents to be preserved. This allows
-+ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
-+ */
-+#define DMA_ATTR_OVERWRITE (1UL << 10)
-+
-+/*
- * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
- * be given to a device to use as a DMA source or target. It is specific to a
- * given device and there may be a translation between the CPU physical address
---- a/kernel/dma/swiotlb.c
-+++ b/kernel/dma/swiotlb.c
-@@ -627,14 +627,10 @@ phys_addr_t swiotlb_tbl_map_single(struc
- for (i = 0; i < nr_slots(alloc_size + offset); i++)
- mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
- tlb_addr = slot_addr(mem->start, index) + offset;
-- /*
-- * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
-- * to the tlb buffer, if we knew for sure the device will
-- * overwirte the entire current content. But we don't. Thus
-- * unconditional bounce may prevent leaking swiotlb content (i.e.
-- * kernel memory) to user-space.
-- */
-- swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
-+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-+ (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-+ dir == DMA_BIDIRECTIONAL))
-+ swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
- return tlb_addr;
- }
-
-@@ -701,13 +697,10 @@ void swiotlb_tbl_unmap_single(struct dev
- void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
- size_t size, enum dma_data_direction dir)
- {
-- /*
-- * Unconditional bounce is necessary to avoid corruption on
-- * sync_*_for_cpu or dma_ummap_* when the device didn't overwrite
-- * the whole lengt of the bounce buffer.
-- */
-- swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
-- BUG_ON(!valid_dma_direction(dir));
-+ if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-+ swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
-+ else
-+ BUG_ON(dir != DMA_FROM_DEVICE);
- }
-
- void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
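
[ A sketch for orientation: the streaming-DMA receive pattern described
  above, written against the generic DMA API from <linux/dma-mapping.h>.
  This is a minimal illustration rather than ath9k's actual code; "dev",
  "buf", "BUF_SIZE" and the packet_complete() helper are hypothetical
  names. It also shows the DMA_ATTR_OVERWRITE hint that this revert
  reinstates, which a driver may pass only when the device is expected
  to overwrite the entire mapped area.

  #include <linux/dma-mapping.h>

  #define BUF_SIZE 2048

  static dma_addr_t rx_map(struct device *dev, void *buf)
  {
          /* DMA_ATTR_OVERWRITE allows swiotlb to skip the initial
           * bounce into the tlb buffer for a DMA_FROM_DEVICE mapping.
           * Callers must still check dma_mapping_error() on the result. */
          return dma_map_single_attrs(dev, buf, BUF_SIZE,
                                      DMA_FROM_DEVICE, DMA_ATTR_OVERWRITE);
  }

  static bool rx_poll(struct device *dev, dma_addr_t addr, void *buf)
  {
          /* Snapshot for the CPU: swiotlb copies from the bounce buffer. */
          dma_sync_single_for_cpu(dev, addr, BUF_SIZE, DMA_FROM_DEVICE);

          if (!packet_complete(buf)) {
                  /* Hand the buffer back to the device. With this revert
                   * applied, this is a no-op for swiotlb again: nothing
                   * needs bouncing *to* the device for DMA_FROM_DEVICE. */
                  dma_sync_single_for_device(dev, addr, BUF_SIZE,
                                             DMA_FROM_DEVICE);
                  return false;
          }
          return true;
  } ]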
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-04-08 12:53 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-04-08 12:53 UTC (permalink / raw
To: gentoo-commits
commit: 9c9897d6615fc0de7efc1b691d9c9343f79e5f3f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 8 12:53:19 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 8 12:53:19 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9c9897d6
Linux patch 5.17.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-5.17.2.patch | 57034 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 57038 insertions(+)
diff --git a/0000_README b/0000_README
index 19269be2..07650c38 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-5.17.1.patch
From: http://www.kernel.org
Desc: Linux 5.17.1
+Patch: 1001_linux-5.17.2.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-5.17.2.patch b/1001_linux-5.17.2.patch
new file mode 100644
index 00000000..01314802
--- /dev/null
+++ b/1001_linux-5.17.2.patch
@@ -0,0 +1,57034 @@
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index 2416b03ff2837..137f16feee084 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -430,6 +430,7 @@ Description: Show status of f2fs superblock in real time.
+ 0x800 SBI_QUOTA_SKIP_FLUSH skip flushing quota in current CP
+ 0x1000 SBI_QUOTA_NEED_REPAIR quota file may be corrupted
+ 0x2000 SBI_IS_RESIZEFS resizefs is in process
++ 0x4000 SBI_IS_FREEZING freefs is in process
+ ====== ===================== =================================
+
+ What: /sys/fs/f2fs/<disk>/ckpt_thread_ioprio
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 7123524a86b8b..59f881f367793 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3485,8 +3485,7 @@
+ difficult since unequal pointers can no longer be
+ compared. However, if this command-line option is
+ specified, then all normal pointers will have their true
+- value printed. Pointers printed via %pK may still be
+- hashed. This option should only be specified when
++ value printed. This option should only be specified when
+ debugging the kernel. Please do not use on production
+ kernels.
+
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index d359bcfadd39a..0f86e9f931293 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -795,6 +795,7 @@ bit 1 print system memory info
+ bit 2 print timer info
+ bit 3 print locks info if ``CONFIG_LOCKDEP`` is on
+ bit 4 print ftrace buffer
++bit 5 print all printk messages in buffer
+ ===== ============================================
+
+ So for example to print tasks and memory info on panic, user can::
+diff --git a/Documentation/devicetree/bindings/iio/adc/xlnx,zynqmp-ams.yaml b/Documentation/devicetree/bindings/iio/adc/xlnx,zynqmp-ams.yaml
+index 87992db389b28..3698b4b0900f5 100644
+--- a/Documentation/devicetree/bindings/iio/adc/xlnx,zynqmp-ams.yaml
++++ b/Documentation/devicetree/bindings/iio/adc/xlnx,zynqmp-ams.yaml
+@@ -92,6 +92,10 @@ properties:
+ description: AMS Controller register space
+ maxItems: 1
+
++ clocks:
++ items:
++ - description: AMS reference clock
++
+ ranges:
+ description:
+ Maps the child address space for PS and/or PL.
+@@ -181,12 +185,15 @@ properties:
+ required:
+ - compatible
+ - reg
++ - clocks
+ - ranges
+
+ additionalProperties: false
+
+ examples:
+ - |
++ #include <dt-bindings/clock/xlnx-zynqmp-clk.h>
++
+ bus {
+ #address-cells = <2>;
+ #size-cells = <2>;
+@@ -196,6 +203,7 @@ examples:
+ interrupt-parent = <&gic>;
+ interrupts = <0 56 4>;
+ reg = <0x0 0xffa50000 0x0 0x800>;
++ clocks = <&zynqmp_clk AMS_REF>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ #io-channel-cells = <1>;
+diff --git a/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml b/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml
+index 85a8877c2f387..1e2df8cf2937b 100644
+--- a/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml
++++ b/Documentation/devicetree/bindings/media/i2c/hynix,hi846.yaml
+@@ -49,7 +49,8 @@ properties:
+ description: Definition of the regulator used for the VDDD power supply.
+
+ port:
+- $ref: /schemas/graph.yaml#/properties/port
++ $ref: /schemas/graph.yaml#/$defs/port-base
++ unevaluatedProperties: false
+
+ properties:
+ endpoint:
+@@ -68,8 +69,11 @@ properties:
+ - const: 1
+ - const: 2
+
++ link-frequencies: true
++
+ required:
+ - data-lanes
++ - link-frequencies
+
+ required:
+ - compatible
+diff --git a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
+index 3a82b0b27fa0a..4fca71f343109 100644
+--- a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
++++ b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
+@@ -88,10 +88,9 @@ allOf:
+ - mediatek,mt2701-smi-common
+ then:
+ properties:
+- clock:
+- items:
+- minItems: 3
+- maxItems: 3
++ clocks:
++ minItems: 3
++ maxItems: 3
+ clock-names:
+ items:
+ - const: apb
+@@ -108,10 +107,9 @@ allOf:
+ required:
+ - mediatek,smi
+ properties:
+- clock:
+- items:
+- minItems: 3
+- maxItems: 3
++ clocks:
++ minItems: 3
++ maxItems: 3
+ clock-names:
+ items:
+ - const: apb
+@@ -133,10 +131,9 @@ allOf:
+
+ then:
+ properties:
+- clock:
+- items:
+- minItems: 4
+- maxItems: 4
++ clocks:
++ minItems: 4
++ maxItems: 4
+ clock-names:
+ items:
+ - const: apb
+@@ -146,10 +143,9 @@ allOf:
+
+ else: # for gen2 HW that don't have gals
+ properties:
+- clock:
+- items:
+- minItems: 2
+- maxItems: 2
++ clocks:
++ minItems: 2
++ maxItems: 2
+ clock-names:
+ items:
+ - const: apb
+diff --git a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
+index eaeff1ada7f89..c5c32c9100457 100644
+--- a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
++++ b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
+@@ -79,11 +79,11 @@ allOf:
+
+ then:
+ properties:
+- clock:
+- items:
+- minItems: 3
+- maxItems: 3
++ clocks:
++ minItems: 2
++ maxItems: 3
+ clock-names:
++ minItems: 2
+ items:
+ - const: apb
+ - const: smi
+@@ -91,10 +91,9 @@ allOf:
+
+ else:
+ properties:
+- clock:
+- items:
+- minItems: 2
+- maxItems: 2
++ clocks:
++ minItems: 2
++ maxItems: 2
+ clock-names:
+ items:
+ - const: apb
+@@ -108,7 +107,6 @@ allOf:
+ - mediatek,mt2701-smi-larb
+ - mediatek,mt2712-smi-larb
+ - mediatek,mt6779-smi-larb
+- - mediatek,mt8167-smi-larb
+ - mediatek,mt8192-smi-larb
+ - mediatek,mt8195-smi-larb
+
+diff --git a/Documentation/devicetree/bindings/mtd/nand-controller.yaml b/Documentation/devicetree/bindings/mtd/nand-controller.yaml
+index bd217e6f5018a..5cd144a9ec992 100644
+--- a/Documentation/devicetree/bindings/mtd/nand-controller.yaml
++++ b/Documentation/devicetree/bindings/mtd/nand-controller.yaml
+@@ -55,7 +55,7 @@ patternProperties:
+ properties:
+ reg:
+ description:
+- Contains the native Ready/Busy IDs.
++ Contains the chip-select IDs.
+
+ nand-ecc-engine:
+ allOf:
+@@ -184,7 +184,7 @@ examples:
+ nand-use-soft-ecc-engine;
+ nand-ecc-algo = "bch";
+
+- /* controller specific properties */
++ /* NAND chip specific properties */
+ };
+
+ nand@1 {
+diff --git a/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml b/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml
+index cb554084bdf11..0df4e114fdd69 100644
+--- a/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/microchip,sparx5-sgpio.yaml
+@@ -145,7 +145,7 @@ examples:
+ clocks = <&sys_clk>;
+ pinctrl-0 = <&sgpio2_pins>;
+ pinctrl-names = "default";
+- reg = <0x1101059c 0x100>;
++ reg = <0x1101059c 0x118>;
+ microchip,sgpio-port-ranges = <0 0>, <16 18>, <28 31>;
+ bus-frequency = <25000000>;
+ sgpio_in2: gpio@0 {
+diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
+index 328ea59c5466f..8299662c2c096 100644
+--- a/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-mt8195.yaml
+@@ -99,6 +99,14 @@ patternProperties:
+ enum: [2, 4, 6, 8, 10, 12, 14, 16]
+
+ bias-pull-down:
++ oneOf:
++ - type: boolean
++ - enum: [100, 101, 102, 103]
++ description: mt8195 pull down PUPD/R0/R1 type define value.
++ - enum: [200, 201, 202, 203, 204, 205, 206, 207]
++ description: mt8195 pull down RSEL type define value.
++ - enum: [75000, 5000]
++ description: mt8195 pull down RSEL type si unit value(ohm).
+ description: |
+ For pull down type is normal, it don't need add RSEL & R1R0 define
+ and resistance value.
+@@ -115,13 +123,6 @@ patternProperties:
+ & "MTK_PULL_SET_RSEL_110" & "MTK_PULL_SET_RSEL_111"
+ define in mt8195. It can also support resistance value(ohm)
+ "75000" & "5000" in mt8195.
+- oneOf:
+- - enum: [100, 101, 102, 103]
+- - description: mt8195 pull down PUPD/R0/R1 type define value.
+- - enum: [200, 201, 202, 203, 204, 205, 206, 207]
+- - description: mt8195 pull down RSEL type define value.
+- - enum: [75000, 5000]
+- - description: mt8195 pull down RSEL type si unit value(ohm).
+
+ An example of using RSEL define:
+ pincontroller {
+@@ -146,6 +147,14 @@ patternProperties:
+ };
+
+ bias-pull-up:
++ oneOf:
++ - type: boolean
++ - enum: [100, 101, 102, 103]
++ description: mt8195 pull up PUPD/R0/R1 type define value.
++ - enum: [200, 201, 202, 203, 204, 205, 206, 207]
++ description: mt8195 pull up RSEL type define value.
++ - enum: [1000, 1500, 2000, 3000, 4000, 5000, 10000, 75000]
++ description: mt8195 pull up RSEL type si unit value(ohm).
+ description: |
+ For pull up type is normal, it don't need add RSEL & R1R0 define
+ and resistance value.
+@@ -163,13 +172,6 @@ patternProperties:
+ define in mt8195. It can also support resistance value(ohm)
+ "1000" & "1500" & "2000" & "3000" & "4000" & "5000" & "10000" &
+ "75000" in mt8195.
+- oneOf:
+- - enum: [100, 101, 102, 103]
+- - description: mt8195 pull up PUPD/R0/R1 type define value.
+- - enum: [200, 201, 202, 203, 204, 205, 206, 207]
+- - description: mt8195 pull up RSEL type define value.
+- - enum: [1000, 1500, 2000, 3000, 4000, 5000, 10000, 75000]
+- - description: mt8195 pull up RSEL type si unit value(ohm).
+ An example of using RSEL define:
+ pincontroller {
+ i2c0-pins {
+diff --git a/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml b/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml
+index 35a8045b2c70d..53627c6e2ae32 100644
+--- a/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml
++++ b/Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml
+@@ -106,7 +106,7 @@ examples:
+ dma-names = "rx", "tx";
+
+ flash@0 {
+- compatible = "spi-nor";
++ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <104000000>;
+ spi-tx-bus-width = <2>;
+diff --git a/Documentation/devicetree/bindings/spi/spi-mxic.txt b/Documentation/devicetree/bindings/spi/spi-mxic.txt
+index 529f2dab2648a..7bcbb229b78bb 100644
+--- a/Documentation/devicetree/bindings/spi/spi-mxic.txt
++++ b/Documentation/devicetree/bindings/spi/spi-mxic.txt
+@@ -8,11 +8,13 @@ Required properties:
+ - reg: should contain 2 entries, one for the registers and one for the direct
+ mapping area
+ - reg-names: should contain "regs" and "dirmap"
+-- interrupts: interrupt line connected to the SPI controller
+ - clock-names: should contain "ps_clk", "send_clk" and "send_dly_clk"
+ - clocks: should contain 3 entries for the "ps_clk", "send_clk" and
+ "send_dly_clk" clocks
+
++Optional properties:
++- interrupts: interrupt line connected to the SPI controller
++
+ Example:
+
+ spi@43c30000 {
+diff --git a/Documentation/devicetree/bindings/usb/usb-hcd.yaml b/Documentation/devicetree/bindings/usb/usb-hcd.yaml
+index 56853c17af667..1dc3d5d7b44fe 100644
+--- a/Documentation/devicetree/bindings/usb/usb-hcd.yaml
++++ b/Documentation/devicetree/bindings/usb/usb-hcd.yaml
+@@ -33,7 +33,7 @@ patternProperties:
+ "^.*@[0-9a-f]{1,2}$":
+ description: The hard wired USB devices
+ type: object
+- $ref: /usb/usb-device.yaml
++ $ref: /schemas/usb/usb-device.yaml
+
+ additionalProperties: true
+
+diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
+index 3b8f41395f6b5..c8f7a16cd0e3c 100644
+--- a/Documentation/driver-api/cxl/memory-devices.rst
++++ b/Documentation/driver-api/cxl/memory-devices.rst
+@@ -36,10 +36,10 @@ CXL Core
+ .. kernel-doc:: drivers/cxl/cxl.h
+ :internal:
+
+-.. kernel-doc:: drivers/cxl/core/bus.c
++.. kernel-doc:: drivers/cxl/core/port.c
+ :doc: cxl core
+
+-.. kernel-doc:: drivers/cxl/core/bus.c
++.. kernel-doc:: drivers/cxl/core/port.c
+ :identifiers:
+
+ .. kernel-doc:: drivers/cxl/core/pmem.c
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 003c865e9c212..fbcb48bc2a903 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -168,7 +168,16 @@ Trees
+ - The finalized and tagged releases of all stable kernels can be found
+ in separate branches per version at:
+
+- https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
++ https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
++
++ - The release candidate of all stable kernel versions can be found at:
++
++ https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git/
++
++ .. warning::
++ The -stable-rc tree is a snapshot in time of the stable-queue tree and
++ will change frequently, hence will be rebased often. It should only be
++ used for testing purposes (e.g. to be consumed by CI systems).
+
+
+ Review committee
+diff --git a/Documentation/security/SCTP.rst b/Documentation/security/SCTP.rst
+index d5fd6ccc3dcbd..b73eb764a0017 100644
+--- a/Documentation/security/SCTP.rst
++++ b/Documentation/security/SCTP.rst
+@@ -15,10 +15,7 @@ For security module support, three SCTP specific hooks have been implemented::
+ security_sctp_assoc_request()
+ security_sctp_bind_connect()
+ security_sctp_sk_clone()
+-
+-Also the following security hook has been utilised::
+-
+- security_inet_conn_established()
++ security_sctp_assoc_established()
+
+ The usage of these hooks are described below with the SELinux implementation
+ described in the `SCTP SELinux Support`_ chapter.
+@@ -122,11 +119,12 @@ calls **sctp_peeloff**\(3).
+ @newsk - pointer to new sock structure.
+
+
+-security_inet_conn_established()
+-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-Called when a COOKIE ACK is received::
++security_sctp_assoc_established()
++~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++Called when a COOKIE ACK is received, and the peer secid will be
++saved into ``@asoc->peer_secid`` for client::
+
+- @sk - pointer to sock structure.
++ @asoc - pointer to sctp association structure.
+ @skb - pointer to skbuff of the COOKIE ACK packet.
+
+
+@@ -134,7 +132,7 @@ Security Hooks used for Association Establishment
+ -------------------------------------------------
+
+ The following diagram shows the use of ``security_sctp_bind_connect()``,
+-``security_sctp_assoc_request()``, ``security_inet_conn_established()`` when
++``security_sctp_assoc_request()``, ``security_sctp_assoc_established()`` when
+ establishing an association.
+ ::
+
+@@ -172,7 +170,7 @@ establishing an association.
+ <------------------------------------------- COOKIE ACK
+ | |
+ sctp_sf_do_5_1E_ca |
+- Call security_inet_conn_established() |
++ Call security_sctp_assoc_established() |
+ to set the peer label. |
+ | |
+ | If SCTP_SOCKET_TCP or peeled off
+@@ -198,7 +196,7 @@ hooks with the SELinux specifics expanded below::
+ security_sctp_assoc_request()
+ security_sctp_bind_connect()
+ security_sctp_sk_clone()
+- security_inet_conn_established()
++ security_sctp_assoc_established()
+
+
+ security_sctp_assoc_request()
+@@ -271,12 +269,12 @@ sockets sid and peer sid to that contained in the ``@asoc sid`` and
+ @newsk - pointer to new sock structure.
+
+
+-security_inet_conn_established()
+-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++security_sctp_assoc_established()
++~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ Called when a COOKIE ACK is received where it sets the connection's peer sid
+ to that in ``@skb``::
+
+- @sk - pointer to sock structure.
++ @asoc - pointer to sctp association structure.
+ @skb - pointer to skbuff of the COOKIE ACK packet.
+
+
+diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst
+index d25335993e553..9b52f50a68542 100644
+--- a/Documentation/sound/hd-audio/models.rst
++++ b/Documentation/sound/hd-audio/models.rst
+@@ -261,6 +261,10 @@ alc-sense-combo
+ huawei-mbx-stereo
+ Enable initialization verbs for Huawei MBX stereo speakers;
+ might be risky, try this at your own risk
++alc298-samsung-headphone
++ Samsung laptops with ALC298
++alc256-samsung-headphone
++ Samsung laptops with ALC256
+
+ ALC66x/67x/892
+ ==============
+diff --git a/Documentation/sphinx/requirements.txt b/Documentation/sphinx/requirements.txt
+index 9a35f50798a65..2c573541ab712 100644
+--- a/Documentation/sphinx/requirements.txt
++++ b/Documentation/sphinx/requirements.txt
+@@ -1,2 +1,4 @@
++# jinja2>=3.1 is not compatible with Sphinx<4.0
++jinja2<3.1
+ sphinx_rtd_theme
+ Sphinx==2.4.4
+diff --git a/MAINTAINERS b/MAINTAINERS
+index cd0f68d4a34a6..d9b2f1731ee06 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -16373,8 +16373,7 @@ M: Linus Walleij <linus.walleij@linaro.org>
+ M: Alvin Šipraga <alsi@bang-olufsen.dk>
+ S: Maintained
+ F: Documentation/devicetree/bindings/net/dsa/realtek-smi.txt
+-F: drivers/net/dsa/realtek-smi*
+-F: drivers/net/dsa/rtl83*
++F: drivers/net/dsa/realtek/*
+
+ REALTEK WIRELESS DRIVER (rtlwifi family)
+ M: Ping-Ke Shih <pkshih@realtek.com>
+diff --git a/Makefile b/Makefile
+index 34f9f5a9457af..06d852cad74ff 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/Kconfig b/arch/Kconfig
+index 678a80713b213..5e88237f84d2d 100644
+--- a/arch/Kconfig
++++ b/arch/Kconfig
+@@ -1162,6 +1162,7 @@ config HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
+ config RANDOMIZE_KSTACK_OFFSET_DEFAULT
+ bool "Randomize kernel stack offset on syscall entry"
+ depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
++ depends on INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION >= 140000
+ help
+ The kernel stack offset can be randomized (after pt_regs) by
+ roughly 5 bits of entropy, frustrating memory corruption
+diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
+index 8e90052f6f056..5f7f5aab361f1 100644
+--- a/arch/arc/kernel/process.c
++++ b/arch/arc/kernel/process.c
+@@ -43,7 +43,7 @@ SYSCALL_DEFINE0(arc_gettls)
+ return task_thread_info(current)->thr_ptr;
+ }
+
+-SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
++SYSCALL_DEFINE3(arc_usr_cmpxchg, int __user *, uaddr, int, expected, int, new)
+ {
+ struct pt_regs *regs = current_pt_regs();
+ u32 uval;
+diff --git a/arch/arm/boot/dts/bcm2711.dtsi b/arch/arm/boot/dts/bcm2711.dtsi
+index 21294f775a20f..89af57482bc8f 100644
+--- a/arch/arm/boot/dts/bcm2711.dtsi
++++ b/arch/arm/boot/dts/bcm2711.dtsi
+@@ -459,12 +459,26 @@
+ #size-cells = <0>;
+ enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
+
++ /* Source for d/i-cache-line-size and d/i-cache-sets
++ * https://developer.arm.com/documentation/100095/0003
++ * /Level-1-Memory-System/About-the-L1-memory-system?lang=en
++ * Source for d/i-cache-size
++ * https://www.raspberrypi.com/documentation/computers
++ * /processors.html#bcm2711
++ */
+ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a72";
+ reg = <0>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000d8>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ i-cache-size = <0xc000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++ next-level-cache = <&l2>;
+ };
+
+ cpu1: cpu@1 {
+@@ -473,6 +487,13 @@
+ reg = <1>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000e0>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ i-cache-size = <0xc000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++ next-level-cache = <&l2>;
+ };
+
+ cpu2: cpu@2 {
+@@ -481,6 +502,13 @@
+ reg = <2>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000e8>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ i-cache-size = <0xc000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++ next-level-cache = <&l2>;
+ };
+
+ cpu3: cpu@3 {
+@@ -489,6 +517,28 @@
+ reg = <3>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000f0>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ i-cache-size = <0xc000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 48KiB(size)/64(line-size)=768ways/3-way set
++ next-level-cache = <&l2>;
++ };
++
++ /* Source for d/i-cache-line-size and d/i-cache-sets
++ * https://developer.arm.com/documentation/100095/0003
++ * /Level-2-Memory-System/About-the-L2-memory-system?lang=en
++ * Source for d/i-cache-size
++ * https://www.raspberrypi.com/documentation/computers
++ * /processors.html#bcm2711
++ */
++ l2: l2-cache0 {
++ compatible = "cache";
++ cache-size = <0x100000>;
++ cache-line-size = <64>;
++ cache-sets = <1024>; // 1MiB(size)/64(line-size)=16384ways/16-way set
++ cache-level = <2>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/bcm2837.dtsi b/arch/arm/boot/dts/bcm2837.dtsi
+index 0199ec98cd616..5dbdebc462594 100644
+--- a/arch/arm/boot/dts/bcm2837.dtsi
++++ b/arch/arm/boot/dts/bcm2837.dtsi
+@@ -40,12 +40,26 @@
+ #size-cells = <0>;
+ enable-method = "brcm,bcm2836-smp"; // for ARM 32-bit
+
++ /* Source for d/i-cache-line-size and d/i-cache-sets
++ * https://developer.arm.com/documentation/ddi0500/e/level-1-memory-system
++ * /about-the-l1-memory-system?lang=en
++ *
++ * Source for d/i-cache-size
++ * https://magpi.raspberrypi.com/articles/raspberry-pi-3-specs-benchmarks
++ */
+ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53";
+ reg = <0>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000d8>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++ i-cache-size = <0x8000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ next-level-cache = <&l2>;
+ };
+
+ cpu1: cpu@1 {
+@@ -54,6 +68,13 @@
+ reg = <1>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000e0>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++ i-cache-size = <0x8000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ next-level-cache = <&l2>;
+ };
+
+ cpu2: cpu@2 {
+@@ -62,6 +83,13 @@
+ reg = <2>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000e8>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++ i-cache-size = <0x8000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ next-level-cache = <&l2>;
+ };
+
+ cpu3: cpu@3 {
+@@ -70,6 +98,27 @@
+ reg = <3>;
+ enable-method = "spin-table";
+ cpu-release-addr = <0x0 0x000000f0>;
++ d-cache-size = <0x8000>;
++ d-cache-line-size = <64>;
++ d-cache-sets = <128>; // 32KiB(size)/64(line-size)=512ways/4-way set
++ i-cache-size = <0x8000>;
++ i-cache-line-size = <64>;
++ i-cache-sets = <256>; // 32KiB(size)/64(line-size)=512ways/2-way set
++ next-level-cache = <&l2>;
++ };
++
++ /* Source for cache-line-size + cache-sets
++ * https://developer.arm.com/documentation/ddi0500
++ * /e/level-2-memory-system/about-the-l2-memory-system?lang=en
++ * Source for cache-size
++ * https://datasheets.raspberrypi.com/cm/cm1-and-cm3-datasheet.pdf
++ */
++ l2: l2-cache0 {
++ compatible = "cache";
++ cache-size = <0x80000>;
++ cache-line-size = <64>;
++ cache-sets = <512>; // 512KiB(size)/64(line-size)=8192ways/16-way set
++ cache-level = <2>;
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/dra7-l4.dtsi b/arch/arm/boot/dts/dra7-l4.dtsi
+index 956a26d52a4c3..0a11bacffc1f1 100644
+--- a/arch/arm/boot/dts/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/dra7-l4.dtsi
+@@ -3482,8 +3482,7 @@
+ ti,timer-pwm;
+ };
+ };
+-
+- target-module@2c000 { /* 0x4882c000, ap 17 02.0 */
++ timer15_target: target-module@2c000 { /* 0x4882c000, ap 17 02.0 */
+ compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ reg = <0x2c000 0x4>,
+ <0x2c010 0x4>;
+@@ -3511,7 +3510,7 @@
+ };
+ };
+
+- target-module@2e000 { /* 0x4882e000, ap 19 14.0 */
++ timer16_target: target-module@2e000 { /* 0x4882e000, ap 19 14.0 */
+ compatible = "ti,sysc-omap4-timer", "ti,sysc";
+ reg = <0x2e000 0x4>,
+ <0x2e010 0x4>;
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index 42bff117656cf..97ce0c4f1df7e 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -1339,20 +1339,20 @@
+ };
+
+ /* Local timers, see ARM architected timer wrap erratum i940 */
+-&timer3_target {
++&timer15_target {
+ ti,no-reset-on-init;
+ ti,no-idle;
+ timer@0 {
+- assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
++ assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER15_CLKCTRL 24>;
+ assigned-clock-parents = <&timer_sys_clk_div>;
+ };
+ };
+
+-&timer4_target {
++&timer16_target {
+ ti,no-reset-on-init;
+ ti,no-idle;
+ timer@0 {
+- assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
++ assigned-clocks = <&l4per3_clkctrl DRA7_L4PER3_TIMER16_CLKCTRL 24>;
+ assigned-clock-parents = <&timer_sys_clk_div>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/exynos5250-pinctrl.dtsi b/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
+index d31a68672bfac..d7d756614edd1 100644
+--- a/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
++++ b/arch/arm/boot/dts/exynos5250-pinctrl.dtsi
+@@ -260,7 +260,7 @@
+ };
+
+ uart3_data: uart3-data {
+- samsung,pins = "gpa1-4", "gpa1-4";
++ samsung,pins = "gpa1-4", "gpa1-5";
+ samsung,pin-function = <EXYNOS_PIN_FUNC_2>;
+ samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+ samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index 39bbe18145cf2..f042954bdfa5d 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -118,6 +118,9 @@
+ status = "okay";
+ ddc = <&i2c_2>;
+ hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
++ vdd-supply = <&ldo8_reg>;
++ vdd_osc-supply = <&ldo10_reg>;
++ vdd_pll-supply = <&ldo8_reg>;
+ };
+
+ &i2c_0 {
+diff --git a/arch/arm/boot/dts/exynos5420-smdk5420.dts b/arch/arm/boot/dts/exynos5420-smdk5420.dts
+index a4f0e3ffedbd3..07f65213aae65 100644
+--- a/arch/arm/boot/dts/exynos5420-smdk5420.dts
++++ b/arch/arm/boot/dts/exynos5420-smdk5420.dts
+@@ -124,6 +124,9 @@
+ hpd-gpios = <&gpx3 7 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&hdmi_hpd_irq>;
++ vdd-supply = <&ldo6_reg>;
++ vdd_osc-supply = <&ldo7_reg>;
++ vdd_pll-supply = <&ldo6_reg>;
+ };
+
+ &hsi2c_4 {
+diff --git a/arch/arm/boot/dts/imx53-m53menlo.dts b/arch/arm/boot/dts/imx53-m53menlo.dts
+index 4f88e96d81ddb..d5c68d1ea707c 100644
+--- a/arch/arm/boot/dts/imx53-m53menlo.dts
++++ b/arch/arm/boot/dts/imx53-m53menlo.dts
+@@ -53,6 +53,31 @@
+ };
+ };
+
++ lvds-decoder {
++ compatible = "ti,ds90cf364a", "lvds-decoder";
++
++ ports {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ port@0 {
++ reg = <0>;
++
++ lvds_decoder_in: endpoint {
++ remote-endpoint = <&lvds0_out>;
++ };
++ };
++
++ port@1 {
++ reg = <1>;
++
++ lvds_decoder_out: endpoint {
++ remote-endpoint = <&panel_in>;
++ };
++ };
++ };
++ };
++
+ panel {
+ compatible = "edt,etm0700g0dh6";
+ pinctrl-0 = <&pinctrl_display_gpio>;
+@@ -61,7 +86,7 @@
+
+ port {
+ panel_in: endpoint {
+- remote-endpoint = <&lvds0_out>;
++ remote-endpoint = <&lvds_decoder_out>;
+ };
+ };
+ };
+@@ -450,7 +475,7 @@
+ reg = <2>;
+
+ lvds0_out: endpoint {
+- remote-endpoint = <&panel_in>;
++ remote-endpoint = <&lvds_decoder_in>;
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/imx7-colibri.dtsi b/arch/arm/boot/dts/imx7-colibri.dtsi
+index 62b771c1d5a9a..f1c60b0cb143e 100644
+--- a/arch/arm/boot/dts/imx7-colibri.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri.dtsi
+@@ -40,7 +40,7 @@
+
+ dailink_master: simple-audio-card,codec {
+ sound-dai = <&codec>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ };
+ };
+ };
+@@ -293,7 +293,7 @@
+ compatible = "fsl,sgtl5000";
+ #sound-dai-cells = <0>;
+ reg = <0x0a>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sai1_mclk>;
+ VDDA-supply = <&reg_module_3v3_avdd>;
+diff --git a/arch/arm/boot/dts/imx7-mba7.dtsi b/arch/arm/boot/dts/imx7-mba7.dtsi
+index 49086c6b6a0a2..3df6dff7734ae 100644
+--- a/arch/arm/boot/dts/imx7-mba7.dtsi
++++ b/arch/arm/boot/dts/imx7-mba7.dtsi
+@@ -302,7 +302,7 @@
+ tlv320aic32x4: audio-codec@18 {
+ compatible = "ti,tlv320aic32x4";
+ reg = <0x18>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ clock-names = "mclk";
+ ldoin-supply = <&reg_audio_3v3>;
+ iov-supply = <&reg_audio_3v3>;
+diff --git a/arch/arm/boot/dts/imx7d-nitrogen7.dts b/arch/arm/boot/dts/imx7d-nitrogen7.dts
+index e0751e6ba3c0f..a31de900139d6 100644
+--- a/arch/arm/boot/dts/imx7d-nitrogen7.dts
++++ b/arch/arm/boot/dts/imx7d-nitrogen7.dts
+@@ -288,7 +288,7 @@
+ codec: wm8960@1a {
+ compatible = "wlf,wm8960";
+ reg = <0x1a>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ clock-names = "mclk";
+ wlf,shared-lrclk;
+ };
+diff --git a/arch/arm/boot/dts/imx7d-pico-hobbit.dts b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+index 7b2198a9372c6..d917dc4f2f227 100644
+--- a/arch/arm/boot/dts/imx7d-pico-hobbit.dts
++++ b/arch/arm/boot/dts/imx7d-pico-hobbit.dts
+@@ -31,7 +31,7 @@
+
+ dailink_master: simple-audio-card,codec {
+ sound-dai = <&sgtl5000>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ };
+ };
+ };
+@@ -41,7 +41,7 @@
+ #sound-dai-cells = <0>;
+ reg = <0x0a>;
+ compatible = "fsl,sgtl5000";
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ VDDA-supply = <&reg_2p5v>;
+ VDDIO-supply = <&reg_vref_1v8>;
+ };
+diff --git a/arch/arm/boot/dts/imx7d-pico-pi.dts b/arch/arm/boot/dts/imx7d-pico-pi.dts
+index 70bea95c06d83..f263e391e24cb 100644
+--- a/arch/arm/boot/dts/imx7d-pico-pi.dts
++++ b/arch/arm/boot/dts/imx7d-pico-pi.dts
+@@ -31,7 +31,7 @@
+
+ dailink_master: simple-audio-card,codec {
+ sound-dai = <&sgtl5000>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ };
+ };
+ };
+@@ -41,7 +41,7 @@
+ #sound-dai-cells = <0>;
+ reg = <0x0a>;
+ compatible = "fsl,sgtl5000";
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ VDDA-supply = <&reg_2p5v>;
+ VDDIO-supply = <&reg_vref_1v8>;
+ };
+diff --git a/arch/arm/boot/dts/imx7d-sdb.dts b/arch/arm/boot/dts/imx7d-sdb.dts
+index 7813ef960f6ee..f053f51227417 100644
+--- a/arch/arm/boot/dts/imx7d-sdb.dts
++++ b/arch/arm/boot/dts/imx7d-sdb.dts
+@@ -385,14 +385,14 @@
+ codec: wm8960@1a {
+ compatible = "wlf,wm8960";
+ reg = <0x1a>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ clock-names = "mclk";
+ wlf,shared-lrclk;
+ wlf,hp-cfg = <2 2 3>;
+ wlf,gpio-cfg = <1 3>;
+ assigned-clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_SRC>,
+ <&clks IMX7D_PLL_AUDIO_POST_DIV>,
+- <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ assigned-clock-parents = <&clks IMX7D_PLL_AUDIO_POST_DIV>;
+ assigned-clock-rates = <0>, <884736000>, <12288000>;
+ };
+diff --git a/arch/arm/boot/dts/imx7s-warp.dts b/arch/arm/boot/dts/imx7s-warp.dts
+index 4f1edef06c922..e8734d218b9de 100644
+--- a/arch/arm/boot/dts/imx7s-warp.dts
++++ b/arch/arm/boot/dts/imx7s-warp.dts
+@@ -75,7 +75,7 @@
+
+ dailink_master: simple-audio-card,codec {
+ sound-dai = <&codec>;
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ };
+ };
+ };
+@@ -232,7 +232,7 @@
+ #sound-dai-cells = <0>;
+ reg = <0x0a>;
+ compatible = "fsl,sgtl5000";
+- clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
++ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_DIV>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sai1_mclk>;
+ VDDA-supply = <&vgen4_reg>;
+diff --git a/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi b/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi
+index 31f59de5190b8..7af41361c4800 100644
+--- a/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi
++++ b/arch/arm/boot/dts/openbmc-flash-layout-64.dtsi
+@@ -28,7 +28,7 @@ partitions {
+ label = "rofs";
+ };
+
+- rwfs@6000000 {
++ rwfs@2a00000 {
+ reg = <0x2a00000 0x1600000>; // 22MB
+ label = "rwfs";
+ };
+diff --git a/arch/arm/boot/dts/openbmc-flash-layout.dtsi b/arch/arm/boot/dts/openbmc-flash-layout.dtsi
+index 6c26524e93e11..b47e14063c380 100644
+--- a/arch/arm/boot/dts/openbmc-flash-layout.dtsi
++++ b/arch/arm/boot/dts/openbmc-flash-layout.dtsi
+@@ -20,7 +20,7 @@ partitions {
+ label = "kernel";
+ };
+
+- rofs@c0000 {
++ rofs@4c0000 {
+ reg = <0x4c0000 0x1740000>;
+ label = "rofs";
+ };
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index 7dec0553636e5..51c365fdf3bfd 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -142,7 +142,8 @@
+ clocks {
+ sleep_clk: sleep_clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32768>;
++ clock-frequency = <32000>;
++ clock-output-names = "gcc_sleep_clk_src";
+ #clock-cells = <0>;
+ };
+
+diff --git a/arch/arm/boot/dts/qcom-msm8960.dtsi b/arch/arm/boot/dts/qcom-msm8960.dtsi
+index 2a0ec97a264f2..a0f9ab7f08f34 100644
+--- a/arch/arm/boot/dts/qcom-msm8960.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8960.dtsi
+@@ -146,7 +146,9 @@
+ reg = <0x108000 0x1000>;
+ qcom,ipc = <&l2cc 0x8 2>;
+
+- interrupts = <0 19 0>, <0 21 0>, <0 22 0>;
++ interrupts = <GIC_SPI 19 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 21 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 22 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "ack", "err", "wakeup";
+
+ regulators {
+@@ -192,7 +194,7 @@
+ compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm";
+ reg = <0x16440000 0x1000>,
+ <0x16400000 0x1000>;
+- interrupts = <0 154 0x0>;
++ interrupts = <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&gcc GSBI5_UART_CLK>, <&gcc GSBI5_H_CLK>;
+ clock-names = "core", "iface";
+ status = "disabled";
+@@ -318,7 +320,7 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x16080000 0x1000>;
+- interrupts = <0 147 0>;
++ interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
+ spi-max-frequency = <24000000>;
+ cs-gpios = <&msmgpio 8 0>;
+
+diff --git a/arch/arm/boot/dts/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom-sdx55.dtsi
+index 8ac0492c76595..40f11159f061e 100644
+--- a/arch/arm/boot/dts/qcom-sdx55.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx55.dtsi
+@@ -413,7 +413,7 @@
+ <0x40000000 0xf1d>,
+ <0x40000f20 0xc8>,
+ <0x40001000 0x1000>,
+- <0x40002000 0x10000>,
++ <0x40200000 0x100000>,
+ <0x01c03000 0x3000>;
+ reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+ "mmio";
+diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi
+index 09c741e8ecb87..c700c3b19e4c4 100644
+--- a/arch/arm/boot/dts/sama5d2.dtsi
++++ b/arch/arm/boot/dts/sama5d2.dtsi
+@@ -415,7 +415,7 @@
+ pmecc: ecc-engine@f8014070 {
+ compatible = "atmel,sama5d2-pmecc";
+ reg = <0xf8014070 0x490>,
+- <0xf8014500 0x100>;
++ <0xf8014500 0x200>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/sama7g5.dtsi b/arch/arm/boot/dts/sama7g5.dtsi
+index eddcfbf4d2233..22520cdd37fc5 100644
+--- a/arch/arm/boot/dts/sama7g5.dtsi
++++ b/arch/arm/boot/dts/sama7g5.dtsi
+@@ -382,8 +382,6 @@
+ dmas = <&dma0 AT91_XDMAC_DT_PERID(7)>,
+ <&dma0 AT91_XDMAC_DT_PERID(8)>;
+ dma-names = "rx", "tx";
+- atmel,use-dma-rx;
+- atmel,use-dma-tx;
+ status = "disabled";
+ };
+ };
+@@ -558,8 +556,6 @@
+ dmas = <&dma0 AT91_XDMAC_DT_PERID(21)>,
+ <&dma0 AT91_XDMAC_DT_PERID(22)>;
+ dma-names = "rx", "tx";
+- atmel,use-dma-rx;
+- atmel,use-dma-tx;
+ status = "disabled";
+ };
+ };
+@@ -584,8 +580,6 @@
+ dmas = <&dma0 AT91_XDMAC_DT_PERID(23)>,
+ <&dma0 AT91_XDMAC_DT_PERID(24)>;
+ dma-names = "rx", "tx";
+- atmel,use-dma-rx;
+- atmel,use-dma-tx;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm/boot/dts/spear1340.dtsi b/arch/arm/boot/dts/spear1340.dtsi
+index 827e887afbda4..13e1bdb3ddbf1 100644
+--- a/arch/arm/boot/dts/spear1340.dtsi
++++ b/arch/arm/boot/dts/spear1340.dtsi
+@@ -134,9 +134,9 @@
+ reg = <0xb4100000 0x1000>;
+ interrupts = <0 105 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 12 0 1>,
+- <&dwdma0 13 1 0>;
+- dma-names = "tx", "rx";
++ dmas = <&dwdma0 13 0 1>,
++ <&dwdma0 12 1 0>;
++ dma-names = "rx", "tx";
+ };
+
+ thermal@e07008c4 {
+diff --git a/arch/arm/boot/dts/spear13xx.dtsi b/arch/arm/boot/dts/spear13xx.dtsi
+index c87b881b2c8bb..9135533676879 100644
+--- a/arch/arm/boot/dts/spear13xx.dtsi
++++ b/arch/arm/boot/dts/spear13xx.dtsi
+@@ -284,9 +284,9 @@
+ #size-cells = <0>;
+ interrupts = <0 31 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 4 0 0>,
+- <&dwdma0 5 0 0>;
+- dma-names = "tx", "rx";
++ dmas = <&dwdma0 5 0 0>,
++ <&dwdma0 4 0 0>;
++ dma-names = "rx", "tx";
+ };
+
+ rtc@e0580000 {
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index 3b65130affec8..6161f5906ec11 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1190,7 +1190,7 @@
+ };
+ };
+
+- sai2a_sleep_pins_c: sai2a-2 {
++ sai2a_sleep_pins_c: sai2a-sleep-2 {
+ pins {
+ pinmux = <STM32_PINMUX('D', 13, ANALOG)>, /* SAI2_SCK_A */
+ <STM32_PINMUX('D', 11, ANALOG)>, /* SAI2_SD_A */
+diff --git a/arch/arm/boot/dts/sun8i-v3s.dtsi b/arch/arm/boot/dts/sun8i-v3s.dtsi
+index b30bc1a25ebb9..084323d5c61cb 100644
+--- a/arch/arm/boot/dts/sun8i-v3s.dtsi
++++ b/arch/arm/boot/dts/sun8i-v3s.dtsi
+@@ -593,6 +593,17 @@
+ #size-cells = <0>;
+ };
+
++ gic: interrupt-controller@1c81000 {
++ compatible = "arm,gic-400";
++ reg = <0x01c81000 0x1000>,
++ <0x01c82000 0x2000>,
++ <0x01c84000 0x2000>,
++ <0x01c86000 0x2000>;
++ interrupt-controller;
++ #interrupt-cells = <3>;
++ interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
++ };
++
+ csi1: camera@1cb4000 {
+ compatible = "allwinner,sun8i-v3s-csi";
+ reg = <0x01cb4000 0x3000>;
+@@ -604,16 +615,5 @@
+ resets = <&ccu RST_BUS_CSI>;
+ status = "disabled";
+ };
+-
+- gic: interrupt-controller@1c81000 {
+- compatible = "arm,gic-400";
+- reg = <0x01c81000 0x1000>,
+- <0x01c82000 0x2000>,
+- <0x01c84000 0x2000>,
+- <0x01c86000 0x2000>;
+- interrupt-controller;
+- #interrupt-cells = <3>;
+- interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
+- };
+ };
+ };
+diff --git a/arch/arm/boot/dts/tegra20-asus-tf101.dts b/arch/arm/boot/dts/tegra20-asus-tf101.dts
+index 020172ee7340e..e3267cda15cc9 100644
+--- a/arch/arm/boot/dts/tegra20-asus-tf101.dts
++++ b/arch/arm/boot/dts/tegra20-asus-tf101.dts
+@@ -442,11 +442,13 @@
+
+ serial@70006040 {
+ compatible = "nvidia,tegra20-hsuart";
++ /delete-property/ reg-shift;
+ /* GPS BCM4751 */
+ };
+
+ serial@70006200 {
+ compatible = "nvidia,tegra20-hsuart";
++ /delete-property/ reg-shift;
+ status = "okay";
+
+ /* Azurewave AW-NH615 BCM4329B1 */
+diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+index de39c5465c0a9..0e19bd0a847c8 100644
+--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi
++++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi
+@@ -183,8 +183,8 @@
+ };
+ conf_ata {
+ nvidia,pins = "ata", "atb", "atc", "atd", "ate",
+- "cdev1", "cdev2", "dap1", "dtb", "gma",
+- "gmb", "gmc", "gmd", "gme", "gpu7",
++ "cdev1", "cdev2", "dap1", "dtb", "dtf",
++ "gma", "gmb", "gmc", "gmd", "gme", "gpu7",
+ "gpv", "i2cp", "irrx", "irtx", "pta",
+ "rm", "slxa", "slxk", "spia", "spib",
+ "uac";
+@@ -203,7 +203,7 @@
+ };
+ conf_crtp {
+ nvidia,pins = "crtp", "dap2", "dap3", "dap4",
+- "dtc", "dte", "dtf", "gpu", "sdio1",
++ "dtc", "dte", "gpu", "sdio1",
+ "slxc", "slxd", "spdi", "spdo", "spig",
+ "uda";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+diff --git a/arch/arm/boot/dts/tegra30-asus-transformer-common.dtsi b/arch/arm/boot/dts/tegra30-asus-transformer-common.dtsi
+index 85b43a86a26d9..c662ab261ed5f 100644
+--- a/arch/arm/boot/dts/tegra30-asus-transformer-common.dtsi
++++ b/arch/arm/boot/dts/tegra30-asus-transformer-common.dtsi
+@@ -1080,6 +1080,7 @@
+
+ serial@70006040 {
+ compatible = "nvidia,tegra30-hsuart";
++ /delete-property/ reg-shift;
+ status = "okay";
+
+ /* Broadcom GPS BCM47511 */
+@@ -1087,6 +1088,7 @@
+
+ serial@70006200 {
+ compatible = "nvidia,tegra30-hsuart";
++ /delete-property/ reg-shift;
+ status = "okay";
+
+ nvidia,adjust-baud-rates = <0 9600 100>,
+diff --git a/arch/arm/boot/dts/tegra30-pegatron-chagall.dts b/arch/arm/boot/dts/tegra30-pegatron-chagall.dts
+index f4b2d4218849c..8ce61035290b5 100644
+--- a/arch/arm/boot/dts/tegra30-pegatron-chagall.dts
++++ b/arch/arm/boot/dts/tegra30-pegatron-chagall.dts
+@@ -1103,6 +1103,7 @@
+
+ uartb: serial@70006040 {
+ compatible = "nvidia,tegra30-hsuart";
++ /delete-property/ reg-shift;
+ status = "okay";
+
+ /* Broadcom GPS BCM47511 */
+@@ -1110,6 +1111,7 @@
+
+ uartc: serial@70006200 {
+ compatible = "nvidia,tegra30-hsuart";
++ /delete-property/ reg-shift;
+ status = "okay";
+
+ nvidia,adjust-baud-rates = <0 9600 100>,
+diff --git a/arch/arm/configs/multi_v5_defconfig b/arch/arm/configs/multi_v5_defconfig
+index fe8d760256a4c..3e3beb0cc33de 100644
+--- a/arch/arm/configs/multi_v5_defconfig
++++ b/arch/arm/configs/multi_v5_defconfig
+@@ -188,6 +188,7 @@ CONFIG_REGULATOR=y
+ CONFIG_REGULATOR_FIXED_VOLTAGE=y
+ CONFIG_MEDIA_SUPPORT=y
+ CONFIG_MEDIA_CAMERA_SUPPORT=y
++CONFIG_MEDIA_PLATFORM_SUPPORT=y
+ CONFIG_V4L_PLATFORM_DRIVERS=y
+ CONFIG_VIDEO_ASPEED=m
+ CONFIG_VIDEO_ATMEL_ISI=m
+@@ -196,6 +197,7 @@ CONFIG_DRM_ATMEL_HLCDC=m
+ CONFIG_DRM_PANEL_SIMPLE=y
+ CONFIG_DRM_PANEL_EDP=y
+ CONFIG_DRM_ASPEED_GFX=m
++CONFIG_FB=y
+ CONFIG_FB_IMX=y
+ CONFIG_FB_ATMEL=y
+ CONFIG_BACKLIGHT_ATMEL_LCDC=y
+diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig
+index 2b575792363e5..e4dba5461cb3e 100644
+--- a/arch/arm/crypto/Kconfig
++++ b/arch/arm/crypto/Kconfig
+@@ -102,6 +102,8 @@ config CRYPTO_AES_ARM_BS
+ depends on KERNEL_MODE_NEON
+ select CRYPTO_SKCIPHER
+ select CRYPTO_LIB_AES
++ select CRYPTO_AES
++ select CRYPTO_CBC
+ select CRYPTO_SIMD
+ help
+ Use a faster and more secure NEON based implementation of AES in CBC,
+diff --git a/arch/arm/kernel/entry-ftrace.S b/arch/arm/kernel/entry-ftrace.S
+index a74289ebc8036..5f1b1ce10473a 100644
+--- a/arch/arm/kernel/entry-ftrace.S
++++ b/arch/arm/kernel/entry-ftrace.S
+@@ -22,10 +22,7 @@
+ * mcount can be thought of as a function called in the middle of a subroutine
+ * call. As such, it needs to be transparent for both the caller and the
+ * callee: the original lr needs to be restored when leaving mcount, and no
+- * registers should be clobbered. (In the __gnu_mcount_nc implementation, we
+- * clobber the ip register. This is OK because the ARM calling convention
+- * allows it to be clobbered in subroutines and doesn't use it to hold
+- * parameters.)
++ * registers should be clobbered.
+ *
+ * When using dynamic ftrace, we patch out the mcount call by a "pop {lr}"
+ * instead of the __gnu_mcount_nc call (see arch/arm/kernel/ftrace.c).
+@@ -70,26 +67,25 @@
+
+ .macro __ftrace_regs_caller
+
+- sub sp, sp, #8 @ space for PC and CPSR OLD_R0,
++ str lr, [sp, #-8]! @ store LR as PC and make space for CPSR/OLD_R0,
+ @ OLD_R0 will overwrite previous LR
+
+- add ip, sp, #12 @ move in IP the value of SP as it was
+- @ before the push {lr} of the mcount mechanism
++ ldr lr, [sp, #8] @ get previous LR
+
+- str lr, [sp, #0] @ store LR instead of PC
++ str r0, [sp, #8] @ write r0 as OLD_R0 over previous LR
+
+- ldr lr, [sp, #8] @ get previous LR
++ str lr, [sp, #-4]! @ store previous LR as LR
+
+- str r0, [sp, #8] @ write r0 as OLD_R0 over previous LR
++ add lr, sp, #16 @ move in LR the value of SP as it was
++ @ before the push {lr} of the mcount mechanism
+
+- stmdb sp!, {ip, lr}
+- stmdb sp!, {r0-r11, lr}
++ push {r0-r11, ip, lr}
+
+ @ stack content at this point:
+ @ 0 4 48 52 56 60 64 68 72
+- @ R0 | R1 | ... | LR | SP + 4 | previous LR | LR | PSR | OLD_R0 |
++ @ R0 | R1 | ... | IP | SP + 4 | previous LR | LR | PSR | OLD_R0 |
+
+- mov r3, sp @ struct pt_regs*
++ mov r3, sp @ struct pt_regs*
+
+ ldr r2, =function_trace_op
+ ldr r2, [r2] @ pointer to the current
+@@ -112,11 +108,9 @@ ftrace_graph_regs_call:
+ #endif
+
+ @ pop saved regs
+- ldmia sp!, {r0-r12} @ restore r0 through r12
+- ldr ip, [sp, #8] @ restore PC
+- ldr lr, [sp, #4] @ restore LR
+- ldr sp, [sp, #0] @ restore SP
+- mov pc, ip @ return
++ pop {r0-r11, ip, lr} @ restore r0 through r12
++ ldr lr, [sp], #4 @ restore LR
++ ldr pc, [sp], #12
+ .endm
+
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+@@ -132,11 +126,9 @@ ftrace_graph_regs_call:
+ bl prepare_ftrace_return
+
+ @ pop registers saved in ftrace_regs_caller
+- ldmia sp!, {r0-r12} @ restore r0 through r12
+- ldr ip, [sp, #8] @ restore PC
+- ldr lr, [sp, #4] @ restore LR
+- ldr sp, [sp, #0] @ restore SP
+- mov pc, ip @ return
++ pop {r0-r11, ip, lr} @ restore r0 through r12
++ ldr lr, [sp], #4 @ restore LR
++ ldr pc, [sp], #12
+
+ .endm
+ #endif
+@@ -202,16 +194,17 @@ ftrace_graph_call\suffix:
+ .endm
+
+ .macro mcount_exit
+- ldmia sp!, {r0-r3, ip, lr}
+- ret ip
++ ldmia sp!, {r0-r3}
++ ldr lr, [sp, #4]
++ ldr pc, [sp], #8
+ .endm
+
+ ENTRY(__gnu_mcount_nc)
+ UNWIND(.fnstart)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+- mov ip, lr
+- ldmia sp!, {lr}
+- ret ip
++ push {lr}
++ ldr lr, [sp, #4]
++ ldr pc, [sp], #8
+ #else
+ __mcount
+ #endif
+diff --git a/arch/arm/kernel/swp_emulate.c b/arch/arm/kernel/swp_emulate.c
+index 6166ba38bf994..b74bfcf94fb1a 100644
+--- a/arch/arm/kernel/swp_emulate.c
++++ b/arch/arm/kernel/swp_emulate.c
+@@ -195,7 +195,7 @@ static int swp_handler(struct pt_regs *regs, unsigned int instr)
+ destreg, EXTRACT_REG_NUM(instr, RT2_OFFSET), data);
+
+ /* Check access in reasonable access range for both SWP and SWPB */
+- if (!access_ok((address & ~3), 4)) {
++ if (!access_ok((void __user *)(address & ~3), 4)) {
+ pr_debug("SWP{B} emulation: access to %p not allowed!\n",
+ (void *)address);
+ res = -EFAULT;
+diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
+index cae4a748811f8..5d58aee24087f 100644
+--- a/arch/arm/kernel/traps.c
++++ b/arch/arm/kernel/traps.c
+@@ -577,7 +577,7 @@ do_cache_op(unsigned long start, unsigned long end, int flags)
+ if (end < start || flags)
+ return -EINVAL;
+
+- if (!access_ok(start, end - start))
++ if (!access_ok((void __user *)start, end - start))
+ return -EFAULT;
+
+ return __do_cache_op(start, end);
+diff --git a/arch/arm/mach-iop32x/include/mach/entry-macro.S b/arch/arm/mach-iop32x/include/mach/entry-macro.S
+index 8e6766d4621eb..341e5d9a6616d 100644
+--- a/arch/arm/mach-iop32x/include/mach/entry-macro.S
++++ b/arch/arm/mach-iop32x/include/mach/entry-macro.S
+@@ -20,7 +20,7 @@
+ mrc p6, 0, \irqstat, c8, c0, 0 @ Read IINTSRC
+ cmp \irqstat, #0
+ clzne \irqnr, \irqstat
+- rsbne \irqnr, \irqnr, #31
++ rsbne \irqnr, \irqnr, #32
+ .endm
+
+ .macro arch_ret_to_user, tmp1, tmp2
+diff --git a/arch/arm/mach-iop32x/include/mach/irqs.h b/arch/arm/mach-iop32x/include/mach/irqs.h
+index c4e78df428e86..e09ae5f48aec5 100644
+--- a/arch/arm/mach-iop32x/include/mach/irqs.h
++++ b/arch/arm/mach-iop32x/include/mach/irqs.h
+@@ -9,6 +9,6 @@
+ #ifndef __IRQS_H
+ #define __IRQS_H
+
+-#define NR_IRQS 32
++#define NR_IRQS 33
+
+ #endif
+diff --git a/arch/arm/mach-iop32x/irq.c b/arch/arm/mach-iop32x/irq.c
+index 2d48bf1398c10..d1e8824cbd824 100644
+--- a/arch/arm/mach-iop32x/irq.c
++++ b/arch/arm/mach-iop32x/irq.c
+@@ -32,14 +32,14 @@ static void intstr_write(u32 val)
+ static void
+ iop32x_irq_mask(struct irq_data *d)
+ {
+- iop32x_mask &= ~(1 << d->irq);
++ iop32x_mask &= ~(1 << (d->irq - 1));
+ intctl_write(iop32x_mask);
+ }
+
+ static void
+ iop32x_irq_unmask(struct irq_data *d)
+ {
+- iop32x_mask |= 1 << d->irq;
++ iop32x_mask |= 1 << (d->irq - 1);
+ intctl_write(iop32x_mask);
+ }
+
+@@ -65,7 +65,7 @@ void __init iop32x_init_irq(void)
+ machine_is_em7210())
+ *IOP3XX_PCIIRSR = 0x0f;
+
+- for (i = 0; i < NR_IRQS; i++) {
++ for (i = 1; i < NR_IRQS; i++) {
+ irq_set_chip_and_handler(i, &ext_chip, handle_level_irq);
+ irq_clear_status_flags(i, IRQ_NOREQUEST | IRQ_NOPROBE);
+ }
+diff --git a/arch/arm/mach-iop32x/irqs.h b/arch/arm/mach-iop32x/irqs.h
+index 69858e4e905d1..e1dfc8b4e7d7e 100644
+--- a/arch/arm/mach-iop32x/irqs.h
++++ b/arch/arm/mach-iop32x/irqs.h
+@@ -7,36 +7,40 @@
+ #ifndef __IOP32X_IRQS_H
+ #define __IOP32X_IRQS_H
+
++/* Interrupts in Linux start at 1, hardware starts at 0 */
++
++#define IOP_IRQ(x) ((x) + 1)
++
+ /*
+ * IOP80321 chipset interrupts
+ */
+-#define IRQ_IOP32X_DMA0_EOT 0
+-#define IRQ_IOP32X_DMA0_EOC 1
+-#define IRQ_IOP32X_DMA1_EOT 2
+-#define IRQ_IOP32X_DMA1_EOC 3
+-#define IRQ_IOP32X_AA_EOT 6
+-#define IRQ_IOP32X_AA_EOC 7
+-#define IRQ_IOP32X_CORE_PMON 8
+-#define IRQ_IOP32X_TIMER0 9
+-#define IRQ_IOP32X_TIMER1 10
+-#define IRQ_IOP32X_I2C_0 11
+-#define IRQ_IOP32X_I2C_1 12
+-#define IRQ_IOP32X_MESSAGING 13
+-#define IRQ_IOP32X_ATU_BIST 14
+-#define IRQ_IOP32X_PERFMON 15
+-#define IRQ_IOP32X_CORE_PMU 16
+-#define IRQ_IOP32X_BIU_ERR 17
+-#define IRQ_IOP32X_ATU_ERR 18
+-#define IRQ_IOP32X_MCU_ERR 19
+-#define IRQ_IOP32X_DMA0_ERR 20
+-#define IRQ_IOP32X_DMA1_ERR 21
+-#define IRQ_IOP32X_AA_ERR 23
+-#define IRQ_IOP32X_MSG_ERR 24
+-#define IRQ_IOP32X_SSP 25
+-#define IRQ_IOP32X_XINT0 27
+-#define IRQ_IOP32X_XINT1 28
+-#define IRQ_IOP32X_XINT2 29
+-#define IRQ_IOP32X_XINT3 30
+-#define IRQ_IOP32X_HPI 31
++#define IRQ_IOP32X_DMA0_EOT IOP_IRQ(0)
++#define IRQ_IOP32X_DMA0_EOC IOP_IRQ(1)
++#define IRQ_IOP32X_DMA1_EOT IOP_IRQ(2)
++#define IRQ_IOP32X_DMA1_EOC IOP_IRQ(3)
++#define IRQ_IOP32X_AA_EOT IOP_IRQ(6)
++#define IRQ_IOP32X_AA_EOC IOP_IRQ(7)
++#define IRQ_IOP32X_CORE_PMON IOP_IRQ(8)
++#define IRQ_IOP32X_TIMER0 IOP_IRQ(9)
++#define IRQ_IOP32X_TIMER1 IOP_IRQ(10)
++#define IRQ_IOP32X_I2C_0 IOP_IRQ(11)
++#define IRQ_IOP32X_I2C_1 IOP_IRQ(12)
++#define IRQ_IOP32X_MESSAGING IOP_IRQ(13)
++#define IRQ_IOP32X_ATU_BIST IOP_IRQ(14)
++#define IRQ_IOP32X_PERFMON IOP_IRQ(15)
++#define IRQ_IOP32X_CORE_PMU IOP_IRQ(16)
++#define IRQ_IOP32X_BIU_ERR IOP_IRQ(17)
++#define IRQ_IOP32X_ATU_ERR IOP_IRQ(18)
++#define IRQ_IOP32X_MCU_ERR IOP_IRQ(19)
++#define IRQ_IOP32X_DMA0_ERR IOP_IRQ(20)
++#define IRQ_IOP32X_DMA1_ERR IOP_IRQ(21)
++#define IRQ_IOP32X_AA_ERR IOP_IRQ(23)
++#define IRQ_IOP32X_MSG_ERR IOP_IRQ(24)
++#define IRQ_IOP32X_SSP IOP_IRQ(25)
++#define IRQ_IOP32X_XINT0 IOP_IRQ(27)
++#define IRQ_IOP32X_XINT1 IOP_IRQ(28)
++#define IRQ_IOP32X_XINT2 IOP_IRQ(29)
++#define IRQ_IOP32X_XINT3 IOP_IRQ(30)
++#define IRQ_IOP32X_HPI IOP_IRQ(31)
+
+ #endif
+diff --git a/arch/arm/mach-mmp/sram.c b/arch/arm/mach-mmp/sram.c
+index 6794e2db1ad5f..ecc46c31004f6 100644
+--- a/arch/arm/mach-mmp/sram.c
++++ b/arch/arm/mach-mmp/sram.c
+@@ -72,6 +72,8 @@ static int sram_probe(struct platform_device *pdev)
+ if (!info)
+ return -ENOMEM;
+
++ platform_set_drvdata(pdev, info);
++
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (res == NULL) {
+ dev_err(&pdev->dev, "no memory resource defined\n");
+@@ -107,8 +109,6 @@ static int sram_probe(struct platform_device *pdev)
+ list_add(&info->node, &sram_bank_list);
+ mutex_unlock(&sram_lock);
+
+- platform_set_drvdata(pdev, info);
+-
+ dev_info(&pdev->dev, "initialized\n");
+ return 0;
+
+@@ -127,17 +127,19 @@ static int sram_remove(struct platform_device *pdev)
+ struct sram_bank_info *info;
+
+ info = platform_get_drvdata(pdev);
+- if (info == NULL)
+- return -ENODEV;
+
+- mutex_lock(&sram_lock);
+- list_del(&info->node);
+- mutex_unlock(&sram_lock);
++ if (info->sram_size) {
++ mutex_lock(&sram_lock);
++ list_del(&info->node);
++ mutex_unlock(&sram_lock);
++
++ gen_pool_destroy(info->gpool);
++ iounmap(info->sram_virt);
++ kfree(info->pool_name);
++ }
+
+- gen_pool_destroy(info->gpool);
+- iounmap(info->sram_virt);
+- kfree(info->pool_name);
+ kfree(info);
++
+ return 0;
+ }
+
+diff --git a/arch/arm/mach-s3c/mach-jive.c b/arch/arm/mach-s3c/mach-jive.c
+index 285e1f0f4145a..0d7d408c37291 100644
+--- a/arch/arm/mach-s3c/mach-jive.c
++++ b/arch/arm/mach-s3c/mach-jive.c
+@@ -236,11 +236,11 @@ static int __init jive_mtdset(char *options)
+ unsigned long set;
+
+ if (options == NULL || options[0] == '\0')
+- return 0;
++ return 1;
+
+ if (kstrtoul(options, 10, &set)) {
+ printk(KERN_ERR "failed to parse mtdset=%s\n", options);
+- return 0;
++ return 1;
+ }
+
+ switch (set) {
+@@ -256,7 +256,7 @@ static int __init jive_mtdset(char *options)
+ "using default.", set);
+ }
+
+- return 0;
++ return 1;
+ }
+
+ /* parse the mtdset= option given to the kernel command line */
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index c842878f81331..baa0e9bbe7547 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -683,6 +683,7 @@ config ARM64_ERRATUM_2051678
+
+ config ARM64_ERRATUM_2077057
+ bool "Cortex-A510: 2077057: workaround software-step corrupting SPSR_EL2"
++ default y
+ help
+ This option adds the workaround for ARM Cortex-A510 erratum 2077057.
+ Affected Cortex-A510 may corrupt SPSR_EL2 when the a step exception is
+diff --git a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+index 984c737fa627a..6e738f2a37013 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+@@ -273,9 +273,9 @@
+ #size-cells = <1>;
+ ranges = <0x00 0x00 0xff800000 0x3000>;
+
+- timer: timer@400 {
+- compatible = "brcm,bcm6328-timer", "syscon";
+- reg = <0x400 0x3c>;
++ twd: timer-mfd@400 {
++ compatible = "brcm,bcm4908-twd", "simple-mfd", "syscon";
++ reg = <0x400 0x4c>;
+ };
+
+ gpio0: gpio-controller@500 {
+@@ -330,7 +330,7 @@
+
+ reboot {
+ compatible = "syscon-reboot";
+- regmap = <&timer>;
++ regmap = <&twd>;
+ offset = <0x34>;
+ mask = <1>;
+ };
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts b/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
+index ec19fbf928a14..12a4b1c03390c 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2-svk.dts
+@@ -111,8 +111,8 @@
+ compatible = "silabs,si3226x";
+ reg = <0>;
+ spi-max-frequency = <5000000>;
+- spi-cpha = <1>;
+- spi-cpol = <1>;
++ spi-cpha;
++ spi-cpol;
+ pl022,hierarchy = <0>;
+ pl022,interface = <0>;
+ pl022,slave-tx-disable = <0>;
+@@ -135,8 +135,8 @@
+ at25,byte-len = <0x8000>;
+ at25,addr-mode = <2>;
+ at25,page-size = <64>;
+- spi-cpha = <1>;
+- spi-cpol = <1>;
++ spi-cpha;
++ spi-cpol;
+ pl022,hierarchy = <0>;
+ pl022,interface = <0>;
+ pl022,slave-tx-disable = <0>;
+diff --git a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+index 2cfeaf3b0a876..8c218689fef70 100644
+--- a/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
++++ b/arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
+@@ -687,7 +687,7 @@
+ };
+ };
+
+- sata: ahci@663f2000 {
++ sata: sata@663f2000 {
+ compatible = "brcm,iproc-ahci", "generic-ahci";
+ reg = <0x663f2000 0x1000>;
+ dma-coherent;
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
+index 01b01e3204118..35d1939e690b0 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
+@@ -536,9 +536,9 @@
+ clock-names = "i2c";
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(1)>;
+- dmas = <&edma0 1 39>,
+- <&edma0 1 38>;
+- dma-names = "tx", "rx";
++ dmas = <&edma0 1 38>,
++ <&edma0 1 39>;
++ dma-names = "rx", "tx";
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+index 687fea6d8afa4..4e7bd04d97984 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+@@ -499,9 +499,9 @@
+ interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
+ QORIQ_CLK_PLL_DIV(2)>;
+- dmas = <&edma0 1 39>,
+- <&edma0 1 38>;
+- dma-names = "tx", "rx";
++ dmas = <&edma0 1 38>,
++ <&edma0 1 39>;
++ dma-names = "rx", "tx";
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index 66ec5615651d4..5dea37651adfc 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -748,7 +748,7 @@
+ snps,hird-threshold = /bits/ 8 <0x0>;
+ snps,dis_u2_susphy_quirk;
+ snps,dis_u3_susphy_quirk;
+- snps,ref-clock-period-ns = <0x32>;
++ snps,ref-clock-period-ns = <0x29>;
+ dr_mode = "host";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-j5.dts b/arch/arm64/boot/dts/qcom/msm8916-samsung-j5.dts
+index 687bea438a571..6c408d61de75a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-j5.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-j5.dts
+@@ -41,7 +41,7 @@
+ };
+
+ home-key {
+- lable = "Home Key";
++ label = "Home Key";
+ gpios = <&msmgpio 109 GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_HOMEPAGE>;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 5a9a5ed0565f6..215f56daa26c2 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -713,6 +713,9 @@
+ #reset-cells = <1>;
+ #power-domain-cells = <1>;
+ reg = <0xfc400000 0x2000>;
++
++ clock-names = "xo", "sleep_clk";
++ clocks = <&xo_board>, <&sleep_clk>;
+ };
+
+ rpm_msg_ram: sram@fc428000 {
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 937c2e0e93eb9..eab7a85050531 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -1790,7 +1790,7 @@
+ };
+ };
+
+- gmu: gmu@3d69000 {
++ gmu: gmu@3d6a000 {
+ compatible="qcom,adreno-gmu-635.0", "qcom,adreno-gmu";
+ reg = <0 0x03d6a000 0 0x34000>,
+ <0 0x3de0000 0 0x10000>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index cfdeaa81f1bbc..1bb4d98db96fa 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -3613,10 +3613,10 @@
+ #clock-cells = <0>;
+ clock-frequency = <9600000>;
+ clock-output-names = "mclk";
+- qcom,micbias1-millivolt = <1800>;
+- qcom,micbias2-millivolt = <1800>;
+- qcom,micbias3-millivolt = <1800>;
+- qcom,micbias4-millivolt = <1800>;
++ qcom,micbias1-microvolt = <1800000>;
++ qcom,micbias2-microvolt = <1800000>;
++ qcom,micbias3-microvolt = <1800000>;
++ qcom,micbias4-microvolt = <1800000>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index 6012322a59846..78265646feff7 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -3556,9 +3556,9 @@
+ qcom,tcs-offset = <0xd00>;
+ qcom,drv-id = <2>;
+ qcom,tcs-config = <ACTIVE_TCS 2>,
+- <SLEEP_TCS 1>,
+- <WAKE_TCS 1>,
+- <CONTROL_TCS 0>;
++ <SLEEP_TCS 3>,
++ <WAKE_TCS 3>,
++ <CONTROL_TCS 1>;
+
+ rpmhcc: clock-controller {
+ compatible = "qcom,sm8150-rpmh-clk";
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 5617a46e5ccdd..a92230bec1ddb 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -1740,8 +1740,8 @@
+ phys = <&pcie0_lane>;
+ phy-names = "pciephy";
+
+- perst-gpio = <&tlmm 79 GPIO_ACTIVE_LOW>;
+- enable-gpio = <&tlmm 81 GPIO_ACTIVE_HIGH>;
++ perst-gpios = <&tlmm 79 GPIO_ACTIVE_LOW>;
++ wake-gpios = <&tlmm 81 GPIO_ACTIVE_HIGH>;
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie0_default_state>;
+@@ -1801,7 +1801,7 @@
+ ranges = <0x01000000 0x0 0x40200000 0x0 0x40200000 0x0 0x100000>,
+ <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>;
+
+- interrupts = <GIC_SPI 306 IRQ_TYPE_EDGE_RISING>;
++ interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "msi";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+@@ -1844,8 +1844,8 @@
+ phys = <&pcie1_lane>;
+ phy-names = "pciephy";
+
+- perst-gpio = <&tlmm 82 GPIO_ACTIVE_LOW>;
+- enable-gpio = <&tlmm 84 GPIO_ACTIVE_HIGH>;
++ perst-gpios = <&tlmm 82 GPIO_ACTIVE_LOW>;
++ wake-gpios = <&tlmm 84 GPIO_ACTIVE_HIGH>;
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie1_default_state>;
+@@ -1907,7 +1907,7 @@
+ ranges = <0x01000000 0x0 0x64200000 0x0 0x64200000 0x0 0x100000>,
+ <0x02000000 0x0 0x64300000 0x0 0x64300000 0x0 0x3d00000>;
+
+- interrupts = <GIC_SPI 236 IRQ_TYPE_EDGE_RISING>;
++ interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "msi";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0x7>;
+@@ -1950,8 +1950,8 @@
+ phys = <&pcie2_lane>;
+ phy-names = "pciephy";
+
+- perst-gpio = <&tlmm 85 GPIO_ACTIVE_LOW>;
+- enable-gpio = <&tlmm 87 GPIO_ACTIVE_HIGH>;
++ perst-gpios = <&tlmm 85 GPIO_ACTIVE_LOW>;
++ wake-gpios = <&tlmm 87 GPIO_ACTIVE_HIGH>;
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie2_default_state>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 4b19744bcfb34..765d018e6306c 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1820,7 +1820,7 @@
+ qcom,tcs-offset = <0xd00>;
+ qcom,drv-id = <2>;
+ qcom,tcs-config = <ACTIVE_TCS 2>, <SLEEP_TCS 3>,
+- <WAKE_TCS 3>, <CONTROL_TCS 1>;
++ <WAKE_TCS 3>, <CONTROL_TCS 0>;
+
+ rpmhcc: clock-controller {
+ compatible = "qcom,sm8350-rpmh-clk";
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 02b97e838c474..9ee055143f8a8 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -203,9 +203,9 @@
+ compatible = "arm,idle-state";
+ idle-state-name = "silver-rail-power-collapse";
+ arm,psci-suspend-param = <0x40000004>;
+- entry-latency-us = <274>;
+- exit-latency-us = <480>;
+- min-residency-us = <3934>;
++ entry-latency-us = <800>;
++ exit-latency-us = <750>;
++ min-residency-us = <4090>;
+ local-timer-stop;
+ };
+
+@@ -213,9 +213,9 @@
+ compatible = "arm,idle-state";
+ idle-state-name = "gold-rail-power-collapse";
+ arm,psci-suspend-param = <0x40000004>;
+- entry-latency-us = <327>;
+- exit-latency-us = <1502>;
+- min-residency-us = <4488>;
++ entry-latency-us = <600>;
++ exit-latency-us = <1550>;
++ min-residency-us = <4791>;
+ local-timer-stop;
+ };
+ };
+@@ -224,10 +224,10 @@
+ CLUSTER_SLEEP_0: cluster-sleep-0 {
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-l3-off";
+- arm,psci-suspend-param = <0x4100c344>;
+- entry-latency-us = <584>;
+- exit-latency-us = <2332>;
+- min-residency-us = <6118>;
++ arm,psci-suspend-param = <0x41000044>;
++ entry-latency-us = <1050>;
++ exit-latency-us = <2500>;
++ min-residency-us = <5309>;
+ local-timer-stop;
+ };
+
+@@ -235,9 +235,9 @@
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-power-collapse";
+ arm,psci-suspend-param = <0x4100c344>;
+- entry-latency-us = <2893>;
+- exit-latency-us = <4023>;
+- min-residency-us = <9987>;
++ entry-latency-us = <2700>;
++ exit-latency-us = <3500>;
++ min-residency-us = <13959>;
+ local-timer-stop;
+ };
+ };
+@@ -315,7 +315,7 @@
+
+ CLUSTER_PD: cpu-cluster0 {
+ #power-domain-cells = <0>;
+- domain-idle-states = <&CLUSTER_SLEEP_0>;
++ domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts b/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
+index c4dd2a6b48368..f81ce3240342c 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-firefly.dts
+@@ -770,8 +770,8 @@
+ sd-uhs-sdr104;
+
+ /* Power supply */
+- vqmmc-supply = &vcc1v8_s3; /* IO line */
+- vmmc-supply = &vcc_sdio; /* card's power */
++ vqmmc-supply = <&vcc1v8_s3>; /* IO line */
++ vmmc-supply = <&vcc_sdio>; /* card's power */
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+index 012011dc619a5..ce4daff758e7e 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-main.dtsi
+@@ -59,7 +59,10 @@
+ #interrupt-cells = <3>;
+ interrupt-controller;
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+- <0x00 0x01840000 0x00 0xC0000>; /* GICR */
++ <0x00 0x01840000 0x00 0xC0000>, /* GICR */
++ <0x01 0x00000000 0x00 0x2000>, /* GICC */
++ <0x01 0x00010000 0x00 0x1000>, /* GICH */
++ <0x01 0x00020000 0x00 0x2000>; /* GICV */
+ /*
+ * vcpumntirq:
+ * virtual CPU interface maintenance interrupt
+diff --git a/arch/arm64/boot/dts/ti/k3-am64.dtsi b/arch/arm64/boot/dts/ti/k3-am64.dtsi
+index 120974726be81..19684865d0d68 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64.dtsi
+@@ -87,6 +87,7 @@
+ <0x00 0x68000000 0x00 0x68000000 0x00 0x08000000>, /* PCIe DAT0 */
+ <0x00 0x70000000 0x00 0x70000000 0x00 0x00200000>, /* OC SRAM */
+ <0x00 0x78000000 0x00 0x78000000 0x00 0x00800000>, /* Main R5FSS */
++ <0x01 0x00000000 0x01 0x00000000 0x00 0x00310000>, /* A53 PERIPHBASE */
+ <0x06 0x00000000 0x06 0x00000000 0x01 0x00000000>, /* PCIe DAT1 */
+ <0x05 0x00000000 0x05 0x00000000 0x01 0x00000000>, /* FSS0 DAT3 */
+
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index ce8bb4a61011e..e749343accedd 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -35,7 +35,10 @@
+ #interrupt-cells = <3>;
+ interrupt-controller;
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+- <0x00 0x01880000 0x00 0x90000>; /* GICR */
++ <0x00 0x01880000 0x00 0x90000>, /* GICR */
++ <0x00 0x6f000000 0x00 0x2000>, /* GICC */
++ <0x00 0x6f010000 0x00 0x1000>, /* GICH */
++ <0x00 0x6f020000 0x00 0x2000>; /* GICV */
+ /*
+ * vcpumntirq:
+ * virtual CPU interface maintenance interrupt
+diff --git a/arch/arm64/boot/dts/ti/k3-am65.dtsi b/arch/arm64/boot/dts/ti/k3-am65.dtsi
+index a58a39fa42dbc..c538a0bf3cdda 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65.dtsi
+@@ -86,6 +86,7 @@
+ <0x00 0x46000000 0x00 0x46000000 0x00 0x00200000>,
+ <0x00 0x47000000 0x00 0x47000000 0x00 0x00068400>,
+ <0x00 0x50000000 0x00 0x50000000 0x00 0x8000000>,
++ <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A53 PERIPHBASE */
+ <0x00 0x70000000 0x00 0x70000000 0x00 0x200000>,
+ <0x05 0x00000000 0x05 0x00000000 0x01 0x0000000>,
+ <0x07 0x00000000 0x07 0x00000000 0x01 0x0000000>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 05a627ad6cdc4..16684a2f054d9 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -54,7 +54,10 @@
+ #interrupt-cells = <3>;
+ interrupt-controller;
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+- <0x00 0x01900000 0x00 0x100000>; /* GICR */
++ <0x00 0x01900000 0x00 0x100000>, /* GICR */
++ <0x00 0x6f000000 0x00 0x2000>, /* GICC */
++ <0x00 0x6f010000 0x00 0x1000>, /* GICH */
++ <0x00 0x6f020000 0x00 0x2000>; /* GICV */
+
+ /* vcpumntirq: virtual CPU interface maintenance interrupt */
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200.dtsi b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+index 64fef4e67d76a..b6da0454cc5bd 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200.dtsi
+@@ -129,6 +129,7 @@
+ <0x00 0x00a40000 0x00 0x00a40000 0x00 0x00000800>, /* timesync router */
+ <0x00 0x01000000 0x00 0x01000000 0x00 0x0d000000>, /* Most peripherals */
+ <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>, /* MAIN NAVSS */
++ <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
+ <0x00 0x70000000 0x00 0x70000000 0x00 0x00800000>, /* MSMC RAM */
+ <0x00 0x18000000 0x00 0x18000000 0x00 0x08000000>, /* PCIe1 DAT0 */
+ <0x41 0x00000000 0x41 0x00000000 0x01 0x00000000>, /* PCIe1 DAT1 */
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index 599861259a30f..db0669985e42a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -76,7 +76,10 @@
+ #interrupt-cells = <3>;
+ interrupt-controller;
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+- <0x00 0x01900000 0x00 0x100000>; /* GICR */
++ <0x00 0x01900000 0x00 0x100000>, /* GICR */
++ <0x00 0x6f000000 0x00 0x2000>, /* GICC */
++ <0x00 0x6f010000 0x00 0x1000>, /* GICH */
++ <0x00 0x6f020000 0x00 0x2000>; /* GICV */
+
+ /* vcpumntirq: virtual CPU interface maintenance interrupt */
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e.dtsi b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+index 4a3872fce5339..0e23886c9fd1d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e.dtsi
+@@ -139,6 +139,7 @@
+ <0x00 0x0e000000 0x00 0x0e000000 0x00 0x01800000>, /* PCIe Core*/
+ <0x00 0x10000000 0x00 0x10000000 0x00 0x10000000>, /* PCIe DAT */
+ <0x00 0x64800000 0x00 0x64800000 0x00 0x00800000>, /* C71 */
++ <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
+ <0x44 0x00000000 0x44 0x00000000 0x00 0x08000000>, /* PCIe2 DAT */
+ <0x44 0x10000000 0x44 0x10000000 0x00 0x08000000>, /* PCIe3 DAT */
+ <0x4d 0x80800000 0x4d 0x80800000 0x00 0x00800000>, /* C66_0 */
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+index b04db1d3ab617..be7f39299894e 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+@@ -34,7 +34,10 @@
+ #interrupt-cells = <3>;
+ interrupt-controller;
+ reg = <0x00 0x01800000 0x00 0x200000>, /* GICD */
+- <0x00 0x01900000 0x00 0x100000>; /* GICR */
++ <0x00 0x01900000 0x00 0x100000>, /* GICR */
++ <0x00 0x6f000000 0x00 0x2000>, /* GICC */
++ <0x00 0x6f010000 0x00 0x1000>, /* GICH */
++ <0x00 0x6f020000 0x00 0x2000>; /* GICV */
+
+ /* vcpumntirq: virtual CPU interface maintenance interrupt */
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+index 7521963719ff9..6c5c02edb375d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+@@ -108,7 +108,7 @@
+ reg = <0x00 0x42110000 0x00 0x100>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupt-parent = <&main_gpio_intr>;
++ interrupt-parent = <&wkup_gpio_intr>;
+ interrupts = <103>, <104>, <105>, <106>, <107>, <108>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+@@ -124,7 +124,7 @@
+ reg = <0x00 0x42100000 0x00 0x100>;
+ gpio-controller;
+ #gpio-cells = <2>;
+- interrupt-parent = <&main_gpio_intr>;
++ interrupt-parent = <&wkup_gpio_intr>;
+ interrupts = <112>, <113>, <114>, <115>, <116>, <117>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2.dtsi
+index fe5234c40f6ce..7b930a85a29d6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2.dtsi
+@@ -119,6 +119,7 @@
+ <0x00 0x18000000 0x00 0x18000000 0x00 0x08000000>, /* PCIe1 DAT0 */
+ <0x00 0x64800000 0x00 0x64800000 0x00 0x0070c000>, /* C71_1 */
+ <0x00 0x65800000 0x00 0x65800000 0x00 0x0070c000>, /* C71_2 */
++ <0x00 0x6f000000 0x00 0x6f000000 0x00 0x00310000>, /* A72 PERIPHBASE */
+ <0x00 0x70000000 0x00 0x70000000 0x00 0x00400000>, /* MSMC RAM */
+ <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>, /* MAIN NAVSS */
+ <0x41 0x00000000 0x41 0x00000000 0x01 0x00000000>, /* PCIe1 DAT1 */
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 30516dc0b70ec..7411e4f9b5545 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -939,7 +939,7 @@ CONFIG_DMADEVICES=y
+ CONFIG_DMA_BCM2835=y
+ CONFIG_DMA_SUN6I=m
+ CONFIG_FSL_EDMA=y
+-CONFIG_IMX_SDMA=y
++CONFIG_IMX_SDMA=m
+ CONFIG_K3_DMA=y
+ CONFIG_MV_XOR=y
+ CONFIG_MV_XOR_V2=y
+diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h
+index a11ccadd47d29..094701ec5500b 100644
+--- a/arch/arm64/include/asm/module.lds.h
++++ b/arch/arm64/include/asm/module.lds.h
+@@ -1,8 +1,8 @@
+ SECTIONS {
+ #ifdef CONFIG_ARM64_MODULE_PLTS
+- .plt 0 (NOLOAD) : { BYTE(0) }
+- .init.plt 0 (NOLOAD) : { BYTE(0) }
+- .text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
++ .plt 0 : { BYTE(0) }
++ .init.plt 0 : { BYTE(0) }
++ .text.ftrace_trampoline 0 : { BYTE(0) }
+ #endif
+
+ #ifdef CONFIG_KASAN_SW_TAGS
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index 86e0cc9b9c685..aa3d3607d5c8d 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -67,7 +67,8 @@ struct bp_hardening_data {
+
+ DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
+
+-static inline void arm64_apply_bp_hardening(void)
++/* Called during entry so must be __always_inline */
++static __always_inline void arm64_apply_bp_hardening(void)
+ {
+ struct bp_hardening_data *d;
+
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index 6d45c63c64548..5777929d35bf4 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -233,17 +233,20 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn)
+ __this_cpu_write(bp_hardening_data.slot, HYP_VECTOR_SPECTRE_DIRECT);
+ }
+
+-static void call_smc_arch_workaround_1(void)
++/* Called during entry so must be noinstr */
++static noinstr void call_smc_arch_workaround_1(void)
+ {
+ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+ }
+
+-static void call_hvc_arch_workaround_1(void)
++/* Called during entry so must be noinstr */
++static noinstr void call_hvc_arch_workaround_1(void)
+ {
+ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
+ }
+
+-static void qcom_link_stack_sanitisation(void)
++/* Called during entry so must be noinstr */
++static noinstr void qcom_link_stack_sanitisation(void)
+ {
+ u64 tmp;
+
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index d8aaf4b6f4320..3d66fba69016f 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -577,10 +577,12 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
+ {
+ int err;
+
+- err = sigframe_alloc(user, &user->fpsimd_offset,
+- sizeof(struct fpsimd_context));
+- if (err)
+- return err;
++ if (system_supports_fpsimd()) {
++ err = sigframe_alloc(user, &user->fpsimd_offset,
++ sizeof(struct fpsimd_context));
++ if (err)
++ return err;
++ }
+
+ /* fault information, if valid */
+ if (add_all || current->thread.fault_code) {
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index db63cc885771a..9e26ec80d3175 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -61,8 +61,34 @@ EXPORT_SYMBOL(memstart_addr);
+ * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4).
+ * In such case, ZONE_DMA32 covers the rest of the 32-bit addressable memory,
+ * otherwise it is empty.
++ *
++ * Memory reservation for the crash kernel is either done early or deferred,
++ * depending on the DMA memory zone configs (ZONE_DMA) --
++ *
++ * In the absence of ZONE_DMA configs, arm64_dma_phys_limit is initialized
++ * here instead of in max_zone_phys(). This allows early reservation of the
++ * crash kernel memory, which has a dependency on arm64_dma_phys_limit.
++ * Reserving memory early for the crash kernel allows linear creation of
++ * block mappings (greater than page granularity) for all memory bank ranges.
++ * In this scheme a comparatively quicker boot is observed.
++ *
++ * If ZONE_DMA configs are defined, crash kernel memory reservation is
++ * delayed until the DMA zone memory range sizes are initialized in
++ * zone_sizes_init(). The deferral is necessary to steer clear of the DMA
++ * zone memory range and avoid overlapping allocations. As a result, the
++ * crash kernel boundaries are not known when mapping all the bank memory
++ * ranges, so the crash kernel range cannot be excluded from block mappings
++ * and page-granularity mappings are created for the entire memory range.
++ * Hence a slightly slower boot is observed.
++ *
++ * Note: Page-granularity mappings are necessary for the crash kernel memory
++ * range so it can be shrunk via the /sys/kernel/kexec_crash_size interface.
+ */
+-phys_addr_t arm64_dma_phys_limit __ro_after_init;
++#if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
++phys_addr_t __ro_after_init arm64_dma_phys_limit;
++#else
++phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
++#endif
+
+ #ifdef CONFIG_KEXEC_CORE
+ /*
+@@ -153,8 +179,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
+ if (!arm64_dma_phys_limit)
+ arm64_dma_phys_limit = dma32_phys_limit;
+ #endif
+- if (!arm64_dma_phys_limit)
+- arm64_dma_phys_limit = PHYS_MASK + 1;
+ max_zone_pfns[ZONE_NORMAL] = max;
+
+ free_area_init(max_zone_pfns);
+@@ -315,6 +339,9 @@ void __init arm64_memblock_init(void)
+
+ early_init_fdt_scan_reserved_mem();
+
++ if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
++ reserve_crashkernel();
++
+ high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
+ }
+
+@@ -361,7 +388,8 @@ void __init bootmem_init(void)
+ * request_standard_resources() depends on crashkernel's memory being
+ * reserved, so do it here.
+ */
+- reserve_crashkernel();
++ if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
++ reserve_crashkernel();
+
+ memblock_dump_all();
+ }
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 49abbf43bf355..37b8230cda6a8 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -63,6 +63,7 @@ static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
+ static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
+
+ static DEFINE_SPINLOCK(swapper_pgdir_lock);
++static DEFINE_MUTEX(fixmap_lock);
+
+ void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
+ {
+@@ -329,6 +330,12 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ }
+ BUG_ON(p4d_bad(p4d));
+
++ /*
++ * No need for locking during early boot. And it doesn't work as
++ * expected with KASLR enabled.
++ */
++ if (system_state != SYSTEM_BOOTING)
++ mutex_lock(&fixmap_lock);
+ pudp = pud_set_fixmap_offset(p4dp, addr);
+ do {
+ pud_t old_pud = READ_ONCE(*pudp);
+@@ -359,6 +366,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ } while (pudp++, addr = next, addr != end);
+
+ pud_clear_fixmap();
++ if (system_state != SYSTEM_BOOTING)
++ mutex_unlock(&fixmap_lock);
+ }
+
+ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+@@ -517,7 +526,7 @@ static void __init map_mem(pgd_t *pgdp)
+ */
+ BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
+
+- if (can_set_direct_map() || crash_mem_map || IS_ENABLED(CONFIG_KFENCE))
++ if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
+ flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+
+ /*
+@@ -528,6 +537,17 @@ static void __init map_mem(pgd_t *pgdp)
+ */
+ memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
+
++#ifdef CONFIG_KEXEC_CORE
++ if (crash_mem_map) {
++ if (IS_ENABLED(CONFIG_ZONE_DMA) ||
++ IS_ENABLED(CONFIG_ZONE_DMA32))
++ flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
++ else if (crashk_res.end)
++ memblock_mark_nomap(crashk_res.start,
++ resource_size(&crashk_res));
++ }
++#endif
++
+ /* map all the memory banks */
+ for_each_mem_range(i, &start, &end) {
+ if (start >= end)
+@@ -554,6 +574,25 @@ static void __init map_mem(pgd_t *pgdp)
+ __map_memblock(pgdp, kernel_start, kernel_end,
+ PAGE_KERNEL, NO_CONT_MAPPINGS);
+ memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
++
++ /*
++ * Use page-level mappings here so that we can shrink the region
++ * at page granularity and put unused memory back to the buddy system
++ * through the /sys/kernel/kexec_crash_size interface.
++ */
++#ifdef CONFIG_KEXEC_CORE
++ if (crash_mem_map &&
++ !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
++ if (crashk_res.end) {
++ __map_memblock(pgdp, crashk_res.start,
++ crashk_res.end + 1,
++ PAGE_KERNEL,
++ NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
++ memblock_clear_nomap(crashk_res.start,
++ resource_size(&crashk_res));
++ }
++ }
++#endif
+ }
+
+ void mark_rodata_ro(void)
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index e96d4d87291f3..cbc41e261f1e7 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1049,15 +1049,18 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ goto out_off;
+ }
+
+- /* 1. Initial fake pass to compute ctx->idx. */
+-
+- /* Fake pass to fill in ctx->offset. */
+- if (build_body(&ctx, extra_pass)) {
++ /*
++ * 1. Initial fake pass to compute ctx->idx and ctx->offset.
++ *
++ * BPF line info needs ctx->offset[i] to be the offset of
++ * instruction[i] in the jited image, so build the prologue first.
++ */
++ if (build_prologue(&ctx, was_classic)) {
+ prog = orig_prog;
+ goto out_off;
+ }
+
+- if (build_prologue(&ctx, was_classic)) {
++ if (build_body(&ctx, extra_pass)) {
+ prog = orig_prog;
+ goto out_off;
+ }
+@@ -1130,6 +1133,11 @@ skip_init_ctx:
+ prog->jited_len = prog_size;
+
+ if (!prog->is_func || extra_pass) {
++ int i;
++
++ /* offset[prog->len] is the size of the program */
++ for (i = 0; i <= prog->len; i++)
++ ctx.offset[i] *= AARCH64_INSN_SIZE;
+ bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
+ out_off:
+ kfree(ctx.offset);
+diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
+index 92057de08f4f0..1612f43540877 100644
+--- a/arch/csky/kernel/perf_callchain.c
++++ b/arch/csky/kernel/perf_callchain.c
+@@ -49,7 +49,7 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ {
+ struct stackframe buftail;
+ unsigned long lr = 0;
+- unsigned long *user_frame_tail = (unsigned long *)fp;
++ unsigned long __user *user_frame_tail = (unsigned long __user *)fp;
+
+ /* Check accessibility of one struct frame_tail beyond */
+ if (!access_ok(user_frame_tail, sizeof(buftail)))
+diff --git a/arch/csky/kernel/signal.c b/arch/csky/kernel/signal.c
+index c7b763d2f526e..8867ddf3e6c77 100644
+--- a/arch/csky/kernel/signal.c
++++ b/arch/csky/kernel/signal.c
+@@ -136,7 +136,7 @@ static inline void __user *get_sigframe(struct ksignal *ksig,
+ static int
+ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs)
+ {
+- struct rt_sigframe *frame;
++ struct rt_sigframe __user *frame;
+ int err = 0;
+
+ frame = get_sigframe(ksig, regs, sizeof(*frame));
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 0386252e9d043..4218750414bbf 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -480,7 +480,7 @@ static struct platform_device mcf_i2c5 = {
+ #endif /* MCFI2C_BASE5 */
+ #endif /* IS_ENABLED(CONFIG_I2C_IMX) */
+
+-#if IS_ENABLED(CONFIG_MCF_EDMA)
++#ifdef MCFEDMA_BASE
+
+ static const struct dma_slave_map mcf_edma_map[] = {
+ { "dreq0", "rx-tx", MCF_EDMA_FILTER_PARAM(0) },
+@@ -552,7 +552,7 @@ static struct platform_device mcf_edma = {
+ .platform_data = &mcf_edma_data,
+ }
+ };
+-#endif /* IS_ENABLED(CONFIG_MCF_EDMA) */
++#endif /* MCFEDMA_BASE */
+
+ #ifdef MCFSDHC_BASE
+ static struct mcf_esdhc_platform_data mcf_esdhc_data = {
+@@ -651,7 +651,7 @@ static struct platform_device *mcf_devices[] __initdata = {
+ &mcf_i2c5,
+ #endif
+ #endif
+-#if IS_ENABLED(CONFIG_MCF_EDMA)
++#ifdef MCFEDMA_BASE
+ &mcf_edma,
+ #endif
+ #ifdef MCFSDHC_BASE
+diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
+index 5b6e0e7788f44..3fe96979d2c62 100644
+--- a/arch/microblaze/include/asm/uaccess.h
++++ b/arch/microblaze/include/asm/uaccess.h
+@@ -130,27 +130,27 @@ extern long __user_bad(void);
+
+ #define __get_user(x, ptr) \
+ ({ \
+- unsigned long __gu_val = 0; \
+ long __gu_err; \
+ switch (sizeof(*(ptr))) { \
+ case 1: \
+- __get_user_asm("lbu", (ptr), __gu_val, __gu_err); \
++ __get_user_asm("lbu", (ptr), x, __gu_err); \
+ break; \
+ case 2: \
+- __get_user_asm("lhu", (ptr), __gu_val, __gu_err); \
++ __get_user_asm("lhu", (ptr), x, __gu_err); \
+ break; \
+ case 4: \
+- __get_user_asm("lw", (ptr), __gu_val, __gu_err); \
++ __get_user_asm("lw", (ptr), x, __gu_err); \
+ break; \
+- case 8: \
+- __gu_err = __copy_from_user(&__gu_val, ptr, 8); \
+- if (__gu_err) \
+- __gu_err = -EFAULT; \
++ case 8: { \
++ __u64 __x = 0; \
++ __gu_err = raw_copy_from_user(&__x, ptr, 8) ? \
++ -EFAULT : 0; \
++ (x) = (typeof(x))(typeof((x) - (x)))__x; \
+ break; \
++ } \
+ default: \
+ /* __gu_val = 0; __gu_err = -EINVAL;*/ __gu_err = __user_bad();\
+ } \
+- x = (__force __typeof__(*(ptr))) __gu_val; \
+ __gu_err; \
+ })
+
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 058446f01487c..651d4fe355da6 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -101,6 +101,7 @@ config MIPS
+ select TRACE_IRQFLAGS_SUPPORT
+ select VIRT_TO_BUS
+ select ARCH_HAS_ELFCORE_COMPAT
++ select HAVE_ARCH_KCSAN if 64BIT
+
+ config MIPS_FIXUP_BIGPHYS_ADDR
+ bool
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index e036fc025cccb..4478c5661d61d 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -340,14 +340,12 @@ drivers-$(CONFIG_PM) += arch/mips/power/
+ boot-y := vmlinux.bin
+ boot-y += vmlinux.ecoff
+ boot-y += vmlinux.srec
+-ifeq ($(shell expr $(load-y) \< 0xffffffff80000000 2> /dev/null), 0)
+ boot-y += uImage
+ boot-y += uImage.bin
+ boot-y += uImage.bz2
+ boot-y += uImage.gz
+ boot-y += uImage.lzma
+ boot-y += uImage.lzo
+-endif
+ boot-y += vmlinux.itb
+ boot-y += vmlinux.gz.itb
+ boot-y += vmlinux.bz2.itb
+@@ -359,9 +357,7 @@ bootz-y := vmlinuz
+ bootz-y += vmlinuz.bin
+ bootz-y += vmlinuz.ecoff
+ bootz-y += vmlinuz.srec
+-ifeq ($(shell expr $(zload-y) \< 0xffffffff80000000 2> /dev/null), 0)
+ bootz-y += uzImage.bin
+-endif
+ bootz-y += vmlinuz.itb
+
+ #
+diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
+index 5a15d51e88841..6cc28173bee89 100644
+--- a/arch/mips/boot/compressed/Makefile
++++ b/arch/mips/boot/compressed/Makefile
+@@ -38,6 +38,7 @@ KBUILD_AFLAGS := $(KBUILD_AFLAGS) -D__ASSEMBLY__ \
+ KCOV_INSTRUMENT := n
+ GCOV_PROFILE := n
+ UBSAN_SANITIZE := n
++KCSAN_SANITIZE := n
+
+ # decompressor objects (linked with vmlinuz)
+ vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o $(obj)/bswapsi.o
+diff --git a/arch/mips/crypto/crc32-mips.c b/arch/mips/crypto/crc32-mips.c
+index 0a03529cf3178..3e4f5ba104f89 100644
+--- a/arch/mips/crypto/crc32-mips.c
++++ b/arch/mips/crypto/crc32-mips.c
+@@ -28,7 +28,7 @@ enum crc_type {
+ };
+
+ #ifndef TOOLCHAIN_SUPPORTS_CRC
+-#define _ASM_MACRO_CRC32(OP, SZ, TYPE) \
++#define _ASM_SET_CRC(OP, SZ, TYPE) \
+ _ASM_MACRO_3R(OP, rt, rs, rt2, \
+ ".ifnc \\rt, \\rt2\n\t" \
+ ".error \"invalid operands \\\"" #OP " \\rt,\\rs,\\rt2\\\"\"\n\t" \
+@@ -37,30 +37,36 @@ _ASM_MACRO_3R(OP, rt, rs, rt2, \
+ ((SZ) << 6) | ((TYPE) << 8)) \
+ _ASM_INSN32_IF_MM(0x00000030 | (__rs << 16) | (__rt << 21) | \
+ ((SZ) << 14) | ((TYPE) << 3)))
+-_ASM_MACRO_CRC32(crc32b, 0, 0);
+-_ASM_MACRO_CRC32(crc32h, 1, 0);
+-_ASM_MACRO_CRC32(crc32w, 2, 0);
+-_ASM_MACRO_CRC32(crc32d, 3, 0);
+-_ASM_MACRO_CRC32(crc32cb, 0, 1);
+-_ASM_MACRO_CRC32(crc32ch, 1, 1);
+-_ASM_MACRO_CRC32(crc32cw, 2, 1);
+-_ASM_MACRO_CRC32(crc32cd, 3, 1);
+-#define _ASM_SET_CRC ""
++#define _ASM_UNSET_CRC(op, SZ, TYPE) ".purgem " #op "\n\t"
+ #else /* !TOOLCHAIN_SUPPORTS_CRC */
+-#define _ASM_SET_CRC ".set\tcrc\n\t"
++#define _ASM_SET_CRC(op, SZ, TYPE) ".set\tcrc\n\t"
++#define _ASM_UNSET_CRC(op, SZ, TYPE)
+ #endif
+
+-#define _CRC32(crc, value, size, type) \
+-do { \
+- __asm__ __volatile__( \
+- ".set push\n\t" \
+- _ASM_SET_CRC \
+- #type #size " %0, %1, %0\n\t" \
+- ".set pop" \
+- : "+r" (crc) \
+- : "r" (value)); \
++#define __CRC32(crc, value, op, SZ, TYPE) \
++do { \
++ __asm__ __volatile__( \
++ ".set push\n\t" \
++ _ASM_SET_CRC(op, SZ, TYPE) \
++ #op " %0, %1, %0\n\t" \
++ _ASM_UNSET_CRC(op, SZ, TYPE) \
++ ".set pop" \
++ : "+r" (crc) \
++ : "r" (value)); \
+ } while (0)
+
++#define _CRC32_crc32b(crc, value) __CRC32(crc, value, crc32b, 0, 0)
++#define _CRC32_crc32h(crc, value) __CRC32(crc, value, crc32h, 1, 0)
++#define _CRC32_crc32w(crc, value) __CRC32(crc, value, crc32w, 2, 0)
++#define _CRC32_crc32d(crc, value) __CRC32(crc, value, crc32d, 3, 0)
++#define _CRC32_crc32cb(crc, value) __CRC32(crc, value, crc32cb, 0, 1)
++#define _CRC32_crc32ch(crc, value) __CRC32(crc, value, crc32ch, 1, 1)
++#define _CRC32_crc32cw(crc, value) __CRC32(crc, value, crc32cw, 2, 1)
++#define _CRC32_crc32cd(crc, value) __CRC32(crc, value, crc32cd, 3, 1)
++
++#define _CRC32(crc, value, size, op) \
++ _CRC32_##op##size(crc, value)
++
+ #define CRC32(crc, value, size) \
+ _CRC32(crc, value, size, crc32)
+
+diff --git a/arch/mips/dec/int-handler.S b/arch/mips/dec/int-handler.S
+index ea5b5a83f1e11..011d1d678840a 100644
+--- a/arch/mips/dec/int-handler.S
++++ b/arch/mips/dec/int-handler.S
+@@ -131,7 +131,7 @@
+ */
+ mfc0 t0,CP0_CAUSE # get pending interrupts
+ mfc0 t1,CP0_STATUS
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ lw t2,cpu_fpu_mask
+ #endif
+ andi t0,ST0_IM # CAUSE.CE may be non-zero!
+@@ -139,7 +139,7 @@
+
+ beqz t0,spurious
+
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ and t2,t0
+ bnez t2,fpu # handle FPU immediately
+ #endif
+@@ -280,7 +280,7 @@ handle_it:
+ j dec_irq_dispatch
+ nop
+
+-#ifdef CONFIG_32BIT
++#if defined(CONFIG_32BIT) && defined(CONFIG_MIPS_FP_SUPPORT)
+ fpu:
+ lw t0,fpu_kstat_irq
+ nop
+diff --git a/arch/mips/dec/prom/Makefile b/arch/mips/dec/prom/Makefile
+index d95016016b42b..2bad87551203b 100644
+--- a/arch/mips/dec/prom/Makefile
++++ b/arch/mips/dec/prom/Makefile
+@@ -6,4 +6,4 @@
+
+ lib-y += init.o memory.o cmdline.o identify.o console.o
+
+-lib-$(CONFIG_32BIT) += locore.o
++lib-$(CONFIG_CPU_R3000) += locore.o
+diff --git a/arch/mips/dec/setup.c b/arch/mips/dec/setup.c
+index a8a30bb1dee8c..82b00e45ce50a 100644
+--- a/arch/mips/dec/setup.c
++++ b/arch/mips/dec/setup.c
+@@ -746,7 +746,8 @@ void __init arch_init_irq(void)
+ dec_interrupt[DEC_IRQ_HALT] = -1;
+
+ /* Register board interrupts: FPU and cascade. */
+- if (dec_interrupt[DEC_IRQ_FPU] >= 0 && cpu_has_fpu) {
++ if (IS_ENABLED(CONFIG_MIPS_FP_SUPPORT) &&
++ dec_interrupt[DEC_IRQ_FPU] >= 0 && cpu_has_fpu) {
+ struct irq_desc *desc_fpu;
+ int irq_fpu;
+
+diff --git a/arch/mips/include/asm/dec/prom.h b/arch/mips/include/asm/dec/prom.h
+index 62c7dfb90e06c..1e1247add1cf8 100644
+--- a/arch/mips/include/asm/dec/prom.h
++++ b/arch/mips/include/asm/dec/prom.h
+@@ -43,16 +43,11 @@
+ */
+ #define REX_PROM_MAGIC 0x30464354
+
+-#ifdef CONFIG_64BIT
+-
+-#define prom_is_rex(magic) 1 /* KN04 and KN05 are REX PROMs. */
+-
+-#else /* !CONFIG_64BIT */
+-
+-#define prom_is_rex(magic) ((magic) == REX_PROM_MAGIC)
+-
+-#endif /* !CONFIG_64BIT */
+-
++/* KN04 and KN05 are REX PROMs, so only do the check for R3k systems. */
++static inline bool prom_is_rex(u32 magic)
++{
++ return !IS_ENABLED(CONFIG_CPU_R3000) || magic == REX_PROM_MAGIC;
++}
+
+ /*
+ * 3MIN/MAXINE PROM entry points for DS5000/1xx's, DS5000/xx's and
+diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
+index c7925d0e98746..867e9c3db76e9 100644
+--- a/arch/mips/include/asm/pgalloc.h
++++ b/arch/mips/include/asm/pgalloc.h
+@@ -15,6 +15,7 @@
+
+ #define __HAVE_ARCH_PMD_ALLOC_ONE
+ #define __HAVE_ARCH_PUD_ALLOC_ONE
++#define __HAVE_ARCH_PGD_FREE
+ #include <asm-generic/pgalloc.h>
+
+ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
+@@ -48,6 +49,11 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+ extern void pgd_init(unsigned long page);
+ extern pgd_t *pgd_alloc(struct mm_struct *mm);
+
++static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
++{
++ free_pages((unsigned long)pgd, PGD_ORDER);
++}
++
+ #define __pte_free_tlb(tlb,pte,address) \
+ do { \
+ pgtable_pte_page_dtor(pte); \
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index b131e6a773832..5cda07688f67a 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -2160,16 +2160,14 @@ static void build_r4000_tlb_load_handler(void)
+ uasm_i_tlbr(&p);
+
+ switch (current_cpu_type()) {
+- default:
+- if (cpu_has_mips_r2_exec_hazard) {
+- uasm_i_ehb(&p);
+- fallthrough;
+-
+ case CPU_CAVIUM_OCTEON:
+ case CPU_CAVIUM_OCTEON_PLUS:
+ case CPU_CAVIUM_OCTEON2:
+- break;
+- }
++ break;
++ default:
++ if (cpu_has_mips_r2_exec_hazard)
++ uasm_i_ehb(&p);
++ break;
+ }
+
+ /* Examine entrylo 0 or 1 based on ptr. */
+@@ -2236,15 +2234,14 @@ static void build_r4000_tlb_load_handler(void)
+ uasm_i_tlbr(&p);
+
+ switch (current_cpu_type()) {
+- default:
+- if (cpu_has_mips_r2_exec_hazard) {
+- uasm_i_ehb(&p);
+-
+ case CPU_CAVIUM_OCTEON:
+ case CPU_CAVIUM_OCTEON_PLUS:
+ case CPU_CAVIUM_OCTEON2:
+- break;
+- }
++ break;
++ default:
++ if (cpu_has_mips_r2_exec_hazard)
++ uasm_i_ehb(&p);
++ break;
+ }
+
+ /* Examine entrylo 0 or 1 based on ptr. */
+diff --git a/arch/mips/rb532/devices.c b/arch/mips/rb532/devices.c
+index 04684990e28ef..b7f6f782d9a13 100644
+--- a/arch/mips/rb532/devices.c
++++ b/arch/mips/rb532/devices.c
+@@ -301,11 +301,9 @@ static int __init plat_setup_devices(void)
+ static int __init setup_kmac(char *s)
+ {
+ printk(KERN_INFO "korina mac = %s\n", s);
+- if (!mac_pton(s, korina_dev0_data.mac)) {
++ if (!mac_pton(s, korina_dev0_data.mac))
+ printk(KERN_ERR "Invalid mac\n");
+- return -EINVAL;
+- }
+- return 0;
++ return 1;
+ }
+
+ __setup("kmac=", setup_kmac);
+diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
+index d65f55f67e19b..f72658b3a53f7 100644
+--- a/arch/mips/vdso/Makefile
++++ b/arch/mips/vdso/Makefile
+@@ -1,6 +1,9 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Objects to go into the VDSO.
+
++# Sanitizer runtimes are unavailable and cannot be linked here.
++ KCSAN_SANITIZE := n
++
+ # Absolute relocation type $(ARCH_REL_TYPE_ABS) needs to be defined before
+ # the inclusion of generic Makefile.
+ ARCH_REL_TYPE_ABS := R_MIPS_JUMP_SLOT|R_MIPS_GLOB_DAT
+diff --git a/arch/nios2/include/asm/uaccess.h b/arch/nios2/include/asm/uaccess.h
+index ba9340e96fd4c..ca9285a915efa 100644
+--- a/arch/nios2/include/asm/uaccess.h
++++ b/arch/nios2/include/asm/uaccess.h
+@@ -88,6 +88,7 @@ extern __must_check long strnlen_user(const char __user *s, long n);
+ /* Optimized macros */
+ #define __get_user_asm(val, insn, addr, err) \
+ { \
++ unsigned long __gu_val; \
+ __asm__ __volatile__( \
+ " movi %0, %3\n" \
+ "1: " insn " %1, 0(%2)\n" \
+@@ -96,14 +97,20 @@ extern __must_check long strnlen_user(const char __user *s, long n);
+ " .section __ex_table,\"a\"\n" \
+ " .word 1b, 2b\n" \
+ " .previous" \
+- : "=&r" (err), "=r" (val) \
++ : "=&r" (err), "=r" (__gu_val) \
+ : "r" (addr), "i" (-EFAULT)); \
++ val = (__force __typeof__(*(addr)))__gu_val; \
+ }
+
+-#define __get_user_unknown(val, size, ptr, err) do { \
++extern void __get_user_unknown(void);
++
++#define __get_user_8(val, ptr, err) do { \
++ u64 __val = 0; \
+ err = 0; \
+- if (__copy_from_user(&(val), ptr, size)) { \
++ if (raw_copy_from_user(&(__val), ptr, sizeof(val))) { \
+ err = -EFAULT; \
++ } else { \
++ val = (typeof(val))(typeof((val) - (val)))__val; \
+ } \
+ } while (0)
+
+@@ -119,8 +126,11 @@ do { \
+ case 4: \
+ __get_user_asm(val, "ldw", ptr, err); \
+ break; \
++ case 8: \
++ __get_user_8(val, ptr, err); \
++ break; \
+ default: \
+- __get_user_unknown(val, size, ptr, err); \
++ __get_user_unknown(); \
+ break; \
+ } \
+ } while (0)
+@@ -129,9 +139,7 @@ do { \
+ ({ \
+ long __gu_err = -EFAULT; \
+ const __typeof__(*(ptr)) __user *__gu_ptr = (ptr); \
+- unsigned long __gu_val = 0; \
+- __get_user_common(__gu_val, sizeof(*(ptr)), __gu_ptr, __gu_err);\
+- (x) = (__force __typeof__(x))__gu_val; \
++ __get_user_common(x, sizeof(*(ptr)), __gu_ptr, __gu_err); \
+ __gu_err; \
+ })
+
+@@ -139,11 +147,9 @@ do { \
+ ({ \
+ long __gu_err = -EFAULT; \
+ const __typeof__(*(ptr)) __user *__gu_ptr = (ptr); \
+- unsigned long __gu_val = 0; \
+ if (access_ok( __gu_ptr, sizeof(*__gu_ptr))) \
+- __get_user_common(__gu_val, sizeof(*__gu_ptr), \
++ __get_user_common(x, sizeof(*__gu_ptr), \
+ __gu_ptr, __gu_err); \
+- (x) = (__force __typeof__(x))__gu_val; \
+ __gu_err; \
+ })
+
+diff --git a/arch/nios2/kernel/signal.c b/arch/nios2/kernel/signal.c
+index 2009ae2d3c3bb..386e46443b605 100644
+--- a/arch/nios2/kernel/signal.c
++++ b/arch/nios2/kernel/signal.c
+@@ -36,10 +36,10 @@ struct rt_sigframe {
+
+ static inline int rt_restore_ucontext(struct pt_regs *regs,
+ struct switch_stack *sw,
+- struct ucontext *uc, int *pr2)
++ struct ucontext __user *uc, int *pr2)
+ {
+ int temp;
+- unsigned long *gregs = uc->uc_mcontext.gregs;
++ unsigned long __user *gregs = uc->uc_mcontext.gregs;
+ int err;
+
+ /* Always make any pending restarted system calls return -EINTR */
+@@ -102,10 +102,11 @@ asmlinkage int do_rt_sigreturn(struct switch_stack *sw)
+ {
+ struct pt_regs *regs = (struct pt_regs *)(sw + 1);
+ /* Verify, can we follow the stack back */
+- struct rt_sigframe *frame = (struct rt_sigframe *) regs->sp;
++ struct rt_sigframe __user *frame;
+ sigset_t set;
+ int rval;
+
++ frame = (struct rt_sigframe __user *) regs->sp;
+ if (!access_ok(frame, sizeof(*frame)))
+ goto badframe;
+
+@@ -124,10 +125,10 @@ badframe:
+ return 0;
+ }
+
+-static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
++static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *regs)
+ {
+ struct switch_stack *sw = (struct switch_stack *)regs - 1;
+- unsigned long *gregs = uc->uc_mcontext.gregs;
++ unsigned long __user *gregs = uc->uc_mcontext.gregs;
+ int err = 0;
+
+ err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
+@@ -162,8 +163,9 @@ static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
+ return err;
+ }
+
+-static inline void *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
+- size_t frame_size)
++static inline void __user *get_sigframe(struct ksignal *ksig,
++ struct pt_regs *regs,
++ size_t frame_size)
+ {
+ unsigned long usp;
+
+@@ -174,13 +176,13 @@ static inline void *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
+ usp = sigsp(usp, ksig);
+
+ /* Verify, is it 32 or 64 bit aligned */
+- return (void *)((usp - frame_size) & -8UL);
++ return (void __user *)((usp - frame_size) & -8UL);
+ }
+
+ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
+ struct pt_regs *regs)
+ {
+- struct rt_sigframe *frame;
++ struct rt_sigframe __user *frame;
+ int err = 0;
+
+ frame = get_sigframe(ksig, regs, sizeof(*frame));
+diff --git a/arch/parisc/include/asm/traps.h b/arch/parisc/include/asm/traps.h
+index 34619f010c631..0ccdb738a9a36 100644
+--- a/arch/parisc/include/asm/traps.h
++++ b/arch/parisc/include/asm/traps.h
+@@ -18,6 +18,7 @@ unsigned long parisc_acctyp(unsigned long code, unsigned int inst);
+ const char *trap_name(unsigned long code);
+ void do_page_fault(struct pt_regs *regs, unsigned long code,
+ unsigned long address);
++int handle_nadtlb_fault(struct pt_regs *regs);
+ #endif
+
+ #endif
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 94150b91c96fb..bce71cefe5724 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -558,15 +558,6 @@ static void flush_cache_pages(struct vm_area_struct *vma, struct mm_struct *mm,
+ }
+ }
+
+-static void flush_user_cache_tlb(struct vm_area_struct *vma,
+- unsigned long start, unsigned long end)
+-{
+- flush_user_dcache_range_asm(start, end);
+- if (vma->vm_flags & VM_EXEC)
+- flush_user_icache_range_asm(start, end);
+- flush_tlb_range(vma, start, end);
+-}
+-
+ void flush_cache_mm(struct mm_struct *mm)
+ {
+ struct vm_area_struct *vma;
+@@ -581,17 +572,8 @@ void flush_cache_mm(struct mm_struct *mm)
+ return;
+ }
+
+- preempt_disable();
+- if (mm->context == mfsp(3)) {
+- for (vma = mm->mmap; vma; vma = vma->vm_next)
+- flush_user_cache_tlb(vma, vma->vm_start, vma->vm_end);
+- preempt_enable();
+- return;
+- }
+-
+ for (vma = mm->mmap; vma; vma = vma->vm_next)
+ flush_cache_pages(vma, mm, vma->vm_start, vma->vm_end);
+- preempt_enable();
+ }
+
+ void flush_cache_range(struct vm_area_struct *vma,
+@@ -605,15 +587,7 @@ void flush_cache_range(struct vm_area_struct *vma,
+ return;
+ }
+
+- preempt_disable();
+- if (vma->vm_mm->context == mfsp(3)) {
+- flush_user_cache_tlb(vma, start, end);
+- preempt_enable();
+- return;
+- }
+-
+- flush_cache_pages(vma, vma->vm_mm, vma->vm_start, vma->vm_end);
+- preempt_enable();
++ flush_cache_pages(vma, vma->vm_mm, start, end);
+ }
+
+ void
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index b6fdebddc8e99..39576a9245c7f 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -662,6 +662,8 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
+ by hand. Technically we need to emulate:
+ fdc,fdce,pdc,"fic,4f",prober,probeir,probew, probeiw
+ */
++ if (code == 17 && handle_nadtlb_fault(regs))
++ return;
+ fault_address = regs->ior;
+ fault_space = regs->isr;
+ break;
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index e9eabf8f14d7e..f114e102aaf21 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -425,3 +425,92 @@ out_of_memory:
+ }
+ pagefault_out_of_memory();
+ }
++
++/* Handle non-access data TLB miss faults.
++ *
++ * For probe instructions, accesses to userspace are considered allowed
++ * if they lie in a valid VMA and the access type matches. We are not
++ * allowed to handle MM faults here so there may be situations where an
++ * actual access would fail even though a probe was successful.
++ */
++int
++handle_nadtlb_fault(struct pt_regs *regs)
++{
++ unsigned long insn = regs->iir;
++ int breg, treg, xreg, val = 0;
++ struct vm_area_struct *vma, *prev_vma;
++ struct task_struct *tsk;
++ struct mm_struct *mm;
++ unsigned long address;
++ unsigned long acc_type;
++
++ switch (insn & 0x380) {
++ case 0x280:
++ /* FDC instruction */
++ fallthrough;
++ case 0x380:
++ /* PDC and FIC instructions */
++ if (printk_ratelimit()) {
++ pr_warn("BUG: nullifying cache flush/purge instruction\n");
++ show_regs(regs);
++ }
++ if (insn & 0x20) {
++ /* Base modification */
++ breg = (insn >> 21) & 0x1f;
++ xreg = (insn >> 16) & 0x1f;
++ if (breg && xreg)
++ regs->gr[breg] += regs->gr[xreg];
++ }
++ regs->gr[0] |= PSW_N;
++ return 1;
++
++ case 0x180:
++ /* PROBE instruction */
++ treg = insn & 0x1f;
++ if (regs->isr) {
++ tsk = current;
++ mm = tsk->mm;
++ if (mm) {
++ /* Search for VMA */
++ address = regs->ior;
++ mmap_read_lock(mm);
++ vma = find_vma_prev(mm, address, &prev_vma);
++ mmap_read_unlock(mm);
++
++ /*
++ * Check if access to the VMA is okay.
++ * We don't allow for stack expansion.
++ */
++ acc_type = (insn & 0x40) ? VM_WRITE : VM_READ;
++ if (vma
++ && address >= vma->vm_start
++ && (vma->vm_flags & acc_type) == acc_type)
++ val = 1;
++ }
++ }
++ if (treg)
++ regs->gr[treg] = val;
++ regs->gr[0] |= PSW_N;
++ return 1;
++
++ case 0x300:
++ /* LPA instruction */
++ if (insn & 0x20) {
++ /* Base modification */
++ breg = (insn >> 21) & 0x1f;
++ xreg = (insn >> 16) & 0x1f;
++ if (breg && xreg)
++ regs->gr[breg] += regs->gr[xreg];
++ }
++ treg = insn & 0x1f;
++ if (treg)
++ regs->gr[treg] = 0;
++ regs->gr[0] |= PSW_N;
++ return 1;
++
++ default:
++ break;
++ }
++
++ return 0;
++}
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index 5f16ac1583c5d..887efa31f60ab 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -171,7 +171,7 @@ else
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,$(call cc-option,-mtune=power5))
+ CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mcpu=power5,-mcpu=power4)
+ endif
+-else
++else ifdef CONFIG_PPC_BOOK3E_64
+ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
+ endif
+
+diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+new file mode 100644
+index 0000000000000..73f8c998c64df
+--- /dev/null
++++ b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+@@ -0,0 +1,30 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * T1040RDB-REV-A Device Tree Source
++ *
++ * Copyright 2014 - 2015 Freescale Semiconductor Inc.
++ *
++ */
++
++#include "t1040rdb.dts"
++
++/ {
++ model = "fsl,T1040RDB-REV-A";
++ compatible = "fsl,T1040RDB-REV-A";
++};
++
++&seville_port0 {
++ label = "ETH5";
++};
++
++&seville_port2 {
++ label = "ETH7";
++};
++
++&seville_port4 {
++ label = "ETH9";
++};
++
++&seville_port6 {
++ label = "ETH11";
++};
+diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb.dts b/arch/powerpc/boot/dts/fsl/t1040rdb.dts
+index af0c8a6f56138..b6733e7e65805 100644
+--- a/arch/powerpc/boot/dts/fsl/t1040rdb.dts
++++ b/arch/powerpc/boot/dts/fsl/t1040rdb.dts
+@@ -119,7 +119,7 @@
+ managed = "in-band-status";
+ phy-handle = <&phy_qsgmii_0>;
+ phy-mode = "qsgmii";
+- label = "ETH5";
++ label = "ETH3";
+ status = "okay";
+ };
+
+@@ -135,7 +135,7 @@
+ managed = "in-band-status";
+ phy-handle = <&phy_qsgmii_2>;
+ phy-mode = "qsgmii";
+- label = "ETH7";
++ label = "ETH5";
+ status = "okay";
+ };
+
+@@ -151,7 +151,7 @@
+ managed = "in-band-status";
+ phy-handle = <&phy_qsgmii_4>;
+ phy-mode = "qsgmii";
+- label = "ETH9";
++ label = "ETH7";
+ status = "okay";
+ };
+
+@@ -167,7 +167,7 @@
+ managed = "in-band-status";
+ phy-handle = <&phy_qsgmii_6>;
+ phy-mode = "qsgmii";
+- label = "ETH11";
++ label = "ETH9";
+ status = "okay";
+ };
+
+diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
+index beba4979bff93..fee979d3a1aa4 100644
+--- a/arch/powerpc/include/asm/io.h
++++ b/arch/powerpc/include/asm/io.h
+@@ -359,25 +359,37 @@ static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
+ */
+ static inline void __raw_rm_writeb(u8 val, volatile void __iomem *paddr)
+ {
+- __asm__ __volatile__("stbcix %0,0,%1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ stbcix %0,0,%1; \
++ .machine pop;"
+ : : "r" (val), "r" (paddr) : "memory");
+ }
+
+ static inline void __raw_rm_writew(u16 val, volatile void __iomem *paddr)
+ {
+- __asm__ __volatile__("sthcix %0,0,%1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ sthcix %0,0,%1; \
++ .machine pop;"
+ : : "r" (val), "r" (paddr) : "memory");
+ }
+
+ static inline void __raw_rm_writel(u32 val, volatile void __iomem *paddr)
+ {
+- __asm__ __volatile__("stwcix %0,0,%1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ stwcix %0,0,%1; \
++ .machine pop;"
+ : : "r" (val), "r" (paddr) : "memory");
+ }
+
+ static inline void __raw_rm_writeq(u64 val, volatile void __iomem *paddr)
+ {
+- __asm__ __volatile__("stdcix %0,0,%1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ stdcix %0,0,%1; \
++ .machine pop;"
+ : : "r" (val), "r" (paddr) : "memory");
+ }
+
+@@ -389,7 +401,10 @@ static inline void __raw_rm_writeq_be(u64 val, volatile void __iomem *paddr)
+ static inline u8 __raw_rm_readb(volatile void __iomem *paddr)
+ {
+ u8 ret;
+- __asm__ __volatile__("lbzcix %0,0, %1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ lbzcix %0,0, %1; \
++ .machine pop;"
+ : "=r" (ret) : "r" (paddr) : "memory");
+ return ret;
+ }
+@@ -397,7 +412,10 @@ static inline u8 __raw_rm_readb(volatile void __iomem *paddr)
+ static inline u16 __raw_rm_readw(volatile void __iomem *paddr)
+ {
+ u16 ret;
+- __asm__ __volatile__("lhzcix %0,0, %1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ lhzcix %0,0, %1; \
++ .machine pop;"
+ : "=r" (ret) : "r" (paddr) : "memory");
+ return ret;
+ }
+@@ -405,7 +423,10 @@ static inline u16 __raw_rm_readw(volatile void __iomem *paddr)
+ static inline u32 __raw_rm_readl(volatile void __iomem *paddr)
+ {
+ u32 ret;
+- __asm__ __volatile__("lwzcix %0,0, %1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ lwzcix %0,0, %1; \
++ .machine pop;"
+ : "=r" (ret) : "r" (paddr) : "memory");
+ return ret;
+ }
+@@ -413,7 +434,10 @@ static inline u32 __raw_rm_readl(volatile void __iomem *paddr)
+ static inline u64 __raw_rm_readq(volatile void __iomem *paddr)
+ {
+ u64 ret;
+- __asm__ __volatile__("ldcix %0,0, %1"
++ __asm__ __volatile__(".machine push; \
++ .machine power6; \
++ ldcix %0,0, %1; \
++ .machine pop;"
+ : "=r" (ret) : "r" (paddr) : "memory");
+ return ret;
+ }
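The ".machine push / .machine pop" bracketing used throughout this hunk scopes a raised assembler target to a single instruction: the cache-inhibited store/load opcodes are POWER6-and-later, and an assembler told to target an older CPU will otherwise reject them. A minimal sketch of the pattern, with an illustrative helper name rather than the kernel's API:

	/* Byte store to a cache-inhibited real address.  The .machine
	 * directives raise the target only for the bracketed opcode. */
	static inline void ci_store_byte(unsigned char val, volatile void *paddr)
	{
		__asm__ __volatile__(".machine push\n"
				     ".machine power6\n"
				     "stbcix %0,0,%1\n"
				     ".machine pop\n"
				     : : "r" (val), "r" (paddr) : "memory");
	}

The same bracketing reappears below for popcntd (power7) and for the lvx/stvx and store-conditional emulation paths (altivec, power8).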
+diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
+index b040094f79202..7ebc807aa8cc8 100644
+--- a/arch/powerpc/include/asm/set_memory.h
++++ b/arch/powerpc/include/asm/set_memory.h
+@@ -6,6 +6,8 @@
+ #define SET_MEMORY_RW 1
+ #define SET_MEMORY_NX 2
+ #define SET_MEMORY_X 3
++#define SET_MEMORY_NP 4 /* Set memory non present */
++#define SET_MEMORY_P 5 /* Set memory present */
+
+ int change_memory_attr(unsigned long addr, int numpages, long action);
+
+@@ -29,6 +31,14 @@ static inline int set_memory_x(unsigned long addr, int numpages)
+ return change_memory_attr(addr, numpages, SET_MEMORY_X);
+ }
+
+-int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot);
++static inline int set_memory_np(unsigned long addr, int numpages)
++{
++ return change_memory_attr(addr, numpages, SET_MEMORY_NP);
++}
++
++static inline int set_memory_p(unsigned long addr, int numpages)
++{
++ return change_memory_attr(addr, numpages, SET_MEMORY_P);
++}
+
+ #endif
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 63316100080c1..4a35423f766db 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -125,8 +125,11 @@ do { \
+ */
+ #define __get_user_atomic_128_aligned(kaddr, uaddr, err) \
+ __asm__ __volatile__( \
++ ".machine push\n" \
++ ".machine altivec\n" \
+ "1: lvx 0,0,%1 # get user\n" \
+ " stvx 0,0,%2 # put kernel\n" \
++ ".machine pop\n" \
+ "2:\n" \
+ ".section .fixup,\"ax\"\n" \
+ "3: li %0,%3\n" \
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index cd0b8b71ecddc..384f58a3f373f 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -582,8 +582,9 @@ void timer_rearm_host_dec(u64 now)
+ local_paca->irq_happened |= PACA_IRQ_DEC;
+ } else {
+ now = *next_tb - now;
+- if (now <= decrementer_max)
+- set_dec_or_work(now);
++ if (now > decrementer_max)
++ now = decrementer_max;
++ set_dec_or_work(now);
+ }
+ }
+ EXPORT_SYMBOL_GPL(timer_rearm_host_dec);
+diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/tm.S
+index 3beecc32940bc..5a0f023a26e90 100644
+--- a/arch/powerpc/kernel/tm.S
++++ b/arch/powerpc/kernel/tm.S
+@@ -443,7 +443,8 @@ restore_gprs:
+
+ REST_GPR(0, r7) /* GPR0 */
+ REST_GPRS(2, 4, r7) /* GPR2-4 */
+- REST_GPRS(8, 31, r7) /* GPR8-31 */
++ REST_GPRS(8, 12, r7) /* GPR8-12 */
++ REST_GPRS(14, 31, r7) /* GPR14-31 */
+
+ /* Load up PPR and DSCR here so we don't run with user values for long */
+ mtspr SPRN_DSCR, r5
+@@ -479,18 +480,24 @@ restore_gprs:
+ REST_GPR(6, r7)
+
+ /*
+- * Store r1 and r5 on the stack so that we can access them after we
+- * clear MSR RI.
++ * Store user r1, r5 and r13 on the stack (in the unused save
++ * areas / compiler reserved areas), so that we can access them after
++ * we clear MSR RI.
+ */
+
+ REST_GPR(5, r7)
+ std r5, -8(r1)
+- ld r5, GPR1(r7)
++ ld r5, GPR13(r7)
+ std r5, -16(r1)
++ ld r5, GPR1(r7)
++ std r5, -24(r1)
+
+ REST_GPR(7, r7)
+
+- /* Clear MSR RI since we are about to use SCRATCH0. EE is already off */
++ /* Stash the stack pointer away for use after recheckpoint */
++ std r1, PACAR1(r13)
++
++ /* Clear MSR RI since we are about to clobber r13. EE is already off */
+ li r5, 0
+ mtmsrd r5, 1
+
+@@ -501,9 +508,9 @@ restore_gprs:
+ * until we turn MSR RI back on.
+ */
+
+- SET_SCRATCH0(r1)
+ ld r5, -8(r1)
+- ld r1, -16(r1)
++ ld r13, -16(r1)
++ ld r1, -24(r1)
+
+ /* Commit register state as checkpointed state: */
+ TRECHKPT
+@@ -519,9 +526,9 @@ restore_gprs:
+ */
+
+ GET_PACA(r13)
+- GET_SCRATCH0(r1)
++ ld r1, PACAR1(r13)
+
+- /* R1 is restored, so we are recoverable again. EE is still off */
++ /* R13 and R1 are restored, so we are recoverable again. EE is still off */
+ li r4, MSR_RI
+ mtmsrd r4, 1
+
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 84c89f08ae9aa..791db769080d2 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -6137,8 +6137,11 @@ static int kvmppc_book3s_init_hv(void)
+ if (r)
+ return r;
+
+- if (kvmppc_radix_possible())
++ if (kvmppc_radix_possible()) {
+ r = kvmppc_radix_init();
++ if (r)
++ return r;
++ }
+
+ r = kvmppc_uvmem_init();
+ if (r < 0)
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index 2ad0ccd202d5d..f0c4545dc3ab8 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -1499,7 +1499,7 @@ int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu,
+ {
+ enum emulation_result emulated = EMULATE_DONE;
+
+- if (vcpu->arch.mmio_vsx_copy_nums > 2)
++ if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ return EMULATE_FAIL;
+
+ while (vcpu->arch.mmio_vmx_copy_nums) {
+@@ -1596,7 +1596,7 @@ int kvmppc_handle_vmx_store(struct kvm_vcpu *vcpu,
+ unsigned int index = rs & KVM_MMIO_REG_MASK;
+ enum emulation_result emulated = EMULATE_DONE;
+
+- if (vcpu->arch.mmio_vsx_copy_nums > 2)
++ if (vcpu->arch.mmio_vmx_copy_nums > 2)
+ return EMULATE_FAIL;
+
+ vcpu->arch.io_gpr = rs;
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index bd3734d5be892..bf755a7be5147 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -112,9 +112,9 @@ static nokprobe_inline long address_ok(struct pt_regs *regs,
+ {
+ if (!user_mode(regs))
+ return 1;
+- if (__access_ok(ea, nb))
++ if (access_ok((void __user *)ea, nb))
+ return 1;
+- if (__access_ok(ea, 1))
++ if (access_ok((void __user *)ea, 1))
+ /* Access overlaps the end of the user region */
+ regs->dar = TASK_SIZE_MAX - 1;
+ else
+@@ -1097,7 +1097,10 @@ NOKPROBE_SYMBOL(emulate_dcbz);
+
+ #define __put_user_asmx(x, addr, err, op, cr) \
+ __asm__ __volatile__( \
++ ".machine push\n" \
++ ".machine power8\n" \
+ "1: " op " %2,0,%3\n" \
++ ".machine pop\n" \
+ " mfcr %1\n" \
+ "2:\n" \
+ ".section .fixup,\"ax\"\n" \
+@@ -1110,7 +1113,10 @@ NOKPROBE_SYMBOL(emulate_dcbz);
+
+ #define __get_user_asmx(x, addr, err, op) \
+ __asm__ __volatile__( \
++ ".machine push\n" \
++ ".machine power8\n" \
+ "1: "op" %1,0,%2\n" \
++ ".machine pop\n" \
+ "2:\n" \
+ ".section .fixup,\"ax\"\n" \
+ "3: li %0,%3\n" \
+@@ -3389,7 +3395,7 @@ int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op)
+ __put_user_asmx(op->val, ea, err, "stbcx.", cr);
+ break;
+ case 2:
+- __put_user_asmx(op->val, ea, err, "stbcx.", cr);
++ __put_user_asmx(op->val, ea, err, "sthcx.", cr);
+ break;
+ #endif
+ case 4:
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index eb8ecd7343a99..7ba6d3eff636d 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -567,18 +567,24 @@ NOKPROBE_SYMBOL(hash__do_page_fault);
+ static void __bad_page_fault(struct pt_regs *regs, int sig)
+ {
+ int is_write = page_fault_is_write(regs->dsisr);
++ const char *msg;
+
+ /* kernel has accessed a bad area */
+
++ if (regs->dar < PAGE_SIZE)
++ msg = "Kernel NULL pointer dereference";
++ else
++ msg = "Unable to handle kernel data access";
++
+ switch (TRAP(regs)) {
+ case INTERRUPT_DATA_STORAGE:
+- case INTERRUPT_DATA_SEGMENT:
+ case INTERRUPT_H_DATA_STORAGE:
+- pr_alert("BUG: %s on %s at 0x%08lx\n",
+- regs->dar < PAGE_SIZE ? "Kernel NULL pointer dereference" :
+- "Unable to handle kernel data access",
++ pr_alert("BUG: %s on %s at 0x%08lx\n", msg,
+ is_write ? "write" : "read", regs->dar);
+ break;
++ case INTERRUPT_DATA_SEGMENT:
++ pr_alert("BUG: %s at 0x%08lx\n", msg, regs->dar);
++ break;
+ case INTERRUPT_INST_STORAGE:
+ case INTERRUPT_INST_SEGMENT:
+ pr_alert("BUG: Unable to handle kernel instruction fetch%s",
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index cf8770b1a692e..f3e4d069e0ba7 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -83,13 +83,12 @@ void __init
+ kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t pte)
+ {
+ unsigned long k_cur;
+- phys_addr_t pa = __pa(kasan_early_shadow_page);
+
+ for (k_cur = k_start; k_cur != k_end; k_cur += PAGE_SIZE) {
+ pmd_t *pmd = pmd_off_k(k_cur);
+ pte_t *ptep = pte_offset_kernel(pmd, k_cur);
+
+- if ((pte_val(*ptep) & PTE_RPN_MASK) != pa)
++ if (pte_page(*ptep) != virt_to_page(lm_alias(kasan_early_shadow_page)))
+ continue;
+
+ __set_pte_at(&init_mm, k_cur, ptep, pte, 0);
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 9d5f710d2c205..b9b7fefbb64b9 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -956,7 +956,9 @@ static int __init parse_numa_properties(void)
+ of_node_put(cpu);
+ }
+
+- node_set_online(nid);
++ /* node_set_online() is UB if 'nid' is negative */
++ if (likely(nid >= 0))
++ node_set_online(nid);
+ }
+
+ get_n_mem_cells(&n_mem_addr_cells, &n_mem_size_cells);
+diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
+index edea388e9d3fb..3bb9d168e3b31 100644
+--- a/arch/powerpc/mm/pageattr.c
++++ b/arch/powerpc/mm/pageattr.c
+@@ -48,6 +48,12 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
+ case SET_MEMORY_X:
+ pte = pte_mkexec(pte);
+ break;
++ case SET_MEMORY_NP:
++ pte_update(&init_mm, addr, ptep, _PAGE_PRESENT, 0, 0);
++ break;
++ case SET_MEMORY_P:
++ pte_update(&init_mm, addr, ptep, 0, _PAGE_PRESENT, 0);
++ break;
+ default:
+ WARN_ON_ONCE(1);
+ break;
+@@ -96,36 +102,3 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
+ return apply_to_existing_page_range(&init_mm, start, size,
+ change_page_attr, (void *)action);
+ }
+-
+-/*
+- * Set the attributes of a page:
+- *
+- * This function is used by PPC32 at the end of init to set final kernel memory
+- * protection. It includes changing the maping of the page it is executing from
+- * and data pages it is using.
+- */
+-static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
+-{
+- pgprot_t prot = __pgprot((unsigned long)data);
+-
+- spin_lock(&init_mm.page_table_lock);
+-
+- set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
+- flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+-
+- spin_unlock(&init_mm.page_table_lock);
+-
+- return 0;
+-}
+-
+-int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
+-{
+- unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
+- unsigned long sz = numpages * PAGE_SIZE;
+-
+- if (numpages <= 0)
+- return 0;
+-
+- return apply_to_existing_page_range(&init_mm, start, sz, set_page_attr,
+- (void *)pgprot_val(prot));
+-}
+diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
+index 906e4e4328b2e..f71ededdc02a5 100644
+--- a/arch/powerpc/mm/pgtable_32.c
++++ b/arch/powerpc/mm/pgtable_32.c
+@@ -135,10 +135,12 @@ void mark_initmem_nx(void)
+ unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+ PFN_DOWN((unsigned long)_sinittext);
+
+- if (v_block_mapped((unsigned long)_sinittext))
++ if (v_block_mapped((unsigned long)_sinittext)) {
+ mmu_mark_initmem_nx();
+- else
+- set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
++ } else {
++ set_memory_nx((unsigned long)_sinittext, numpages);
++ set_memory_rw((unsigned long)_sinittext, numpages);
++ }
+ }
+
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+@@ -152,18 +154,14 @@ void mark_rodata_ro(void)
+ return;
+ }
+
+- numpages = PFN_UP((unsigned long)_etext) -
+- PFN_DOWN((unsigned long)_stext);
+-
+- set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
+ /*
+- * mark .rodata as read only. Use __init_begin rather than __end_rodata
+- * to cover NOTES and EXCEPTION_TABLE.
++ * mark .text and .rodata as read only. Use __init_begin rather than
++ * __end_rodata to cover NOTES and EXCEPTION_TABLE.
+ */
+ numpages = PFN_UP((unsigned long)__init_begin) -
+- PFN_DOWN((unsigned long)__start_rodata);
++ PFN_DOWN((unsigned long)_stext);
+
+- set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
++ set_memory_ro((unsigned long)_stext, numpages);
+
+ // mark_initmem_nx() should have already run by now
+ ptdump_check_wx();
+@@ -179,8 +177,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
+ return;
+
+ if (enable)
+- set_memory_attr(addr, numpages, PAGE_KERNEL);
++ set_memory_p(addr, numpages);
+ else
+- set_memory_attr(addr, numpages, __pgprot(0));
++ set_memory_np(addr, numpages);
+ }
+ #endif /* CONFIG_DEBUG_PAGEALLOC */
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index e106909ff9c37..e7583fbcc8fa1 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -1457,7 +1457,11 @@ static int trace_imc_event_init(struct perf_event *event)
+
+ event->hw.idx = -1;
+
+- event->pmu->task_ctx_nr = perf_hw_context;
++ /*
++ * There can be only a single PMU for perf_hw_context events, and it is assigned to
++ * the core PMU. Hence use "perf_sw_context" for trace_imc.
++ */
++ event->pmu->task_ctx_nr = perf_sw_context;
+ event->destroy = reset_global_refc;
+ return 0;
+ }
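Context for the hunk above: perf allows only one PMU to claim the perf_hw_context task context, and that slot belongs to the core PMU, so any further PMU has to register with perf_sw_context. A hedged sketch of an auxiliary PMU declaring that (callback fields elided, name illustrative):

	static struct pmu trace_pmu = {
		/* perf_hw_context is reserved for the core PMU */
		.task_ctx_nr = perf_sw_context,
		/* .event_init / .add / .del / .start / .stop / .read elided */
	};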
+diff --git a/arch/powerpc/platforms/8xx/pic.c b/arch/powerpc/platforms/8xx/pic.c
+index f2ba837249d69..04a6abf14c295 100644
+--- a/arch/powerpc/platforms/8xx/pic.c
++++ b/arch/powerpc/platforms/8xx/pic.c
+@@ -153,6 +153,7 @@ int __init mpc8xx_pic_init(void)
+ if (mpc8xx_pic_host == NULL) {
+ printk(KERN_ERR "MPC8xx PIC: failed to allocate irq host!\n");
+ ret = -ENOMEM;
++ goto out;
+ }
+
+ ret = 0;
+diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c
+index b4386714494a6..e3d44b36ae98f 100644
+--- a/arch/powerpc/platforms/powernv/rng.c
++++ b/arch/powerpc/platforms/powernv/rng.c
+@@ -43,7 +43,11 @@ static unsigned long rng_whiten(struct powernv_rng *rng, unsigned long val)
+ unsigned long parity;
+
+ /* Calculate the parity of the value */
+- asm ("popcntd %0,%1" : "=r" (parity) : "r" (val));
++ asm (".machine push; \
++ .machine power7; \
++ popcntd %0,%1; \
++ .machine pop;"
++ : "=r" (parity) : "r" (val));
+
+ /* xor our value with the previous mask */
+ val ^= rng->mask;
+diff --git a/arch/powerpc/platforms/pseries/pci_dlpar.c b/arch/powerpc/platforms/pseries/pci_dlpar.c
+index 90c9d3531694b..4ba8245681192 100644
+--- a/arch/powerpc/platforms/pseries/pci_dlpar.c
++++ b/arch/powerpc/platforms/pseries/pci_dlpar.c
+@@ -78,6 +78,9 @@ int remove_phb_dynamic(struct pci_controller *phb)
+
+ pseries_msi_free_domains(phb);
+
++ /* Keep a reference so phb isn't freed yet */
++ get_device(&host_bridge->dev);
++
+ /* Remove the PCI bus and unregister the bridge device from sysfs */
+ phb->bus = NULL;
+ pci_remove_bus(b);
+@@ -101,6 +104,7 @@ int remove_phb_dynamic(struct pci_controller *phb)
+ * the pcibios_free_controller_deferred() callback;
+ * see pseries_root_bridge_prepare().
+ */
++ put_device(&host_bridge->dev);
+
+ return 0;
+ }
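The get_device()/put_device() pair above is the standard way to keep a refcounted device alive across a teardown sequence that may drop its last reference midway. Reduced to a sketch (function name illustrative, not a kernel API):

	static void remove_host_bridge(struct pci_host_bridge *hb)
	{
		get_device(&hb->dev);	/* pin: removing the bus may drop the last ref */
		/* ... pci_remove_bus() and child unregistration ... */
		put_device(&hb->dev);	/* unpin: hb may be freed at this point */
	}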
+diff --git a/arch/powerpc/sysdev/fsl_gtm.c b/arch/powerpc/sysdev/fsl_gtm.c
+index 8963eaffb1b7b..39186ad6b3c3a 100644
+--- a/arch/powerpc/sysdev/fsl_gtm.c
++++ b/arch/powerpc/sysdev/fsl_gtm.c
+@@ -86,7 +86,7 @@ static LIST_HEAD(gtms);
+ */
+ struct gtm_timer *gtm_get_timer16(void)
+ {
+- struct gtm *gtm = NULL;
++ struct gtm *gtm;
+ int i;
+
+ list_for_each_entry(gtm, >ms, list_node) {
+@@ -103,7 +103,7 @@ struct gtm_timer *gtm_get_timer16(void)
+ spin_unlock_irq(>m->lock);
+ }
+
+- if (gtm)
++ if (!list_empty(>ms))
+ return ERR_PTR(-EBUSY);
+ return ERR_PTR(-ENODEV);
+ }
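The rewritten exit test above hinges on a property of list_for_each_entry() that is easy to forget: when the loop runs to completion the cursor is left pointing at the list head, reinterpreted as an entry, never NULL, so testing the cursor afterwards cannot distinguish "every entry was busy" from "the list was empty". A minimal sketch of the corrected shape, with illustrative types:

	#include <linux/err.h>
	#include <linux/list.h>

	struct item { struct list_head node; bool busy; };
	static LIST_HEAD(items);

	static struct item *grab_free_item(void)
	{
		struct item *it;

		list_for_each_entry(it, &items, node) {
			if (!it->busy) {
				it->busy = true;
				return it;	/* found a free entry */
			}
		}
		/* 'it' is never NULL here; ask the list itself instead. */
		return list_empty(&items) ? ERR_PTR(-ENODEV) : ERR_PTR(-EBUSY);
	}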
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 1ca5564bda9d0..89c86f32aff86 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -1708,20 +1708,20 @@ __be32 *xive_queue_page_alloc(unsigned int cpu, u32 queue_shift)
+ static int __init xive_off(char *arg)
+ {
+ xive_cmdline_disabled = true;
+- return 0;
++ return 1;
+ }
+ __setup("xive=off", xive_off);
+
+ static int __init xive_store_eoi_cmdline(char *arg)
+ {
+ if (!arg)
+- return -EINVAL;
++ return 1;
+
+ if (strncmp(arg, "off", 3) == 0) {
+ pr_info("StoreEOI disabled on kernel command line\n");
+ xive_store_eoi = false;
+ }
+- return 0;
++ return 1;
+ }
+ __setup("xive.store-eoi=", xive_store_eoi_cmdline);
+
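Both handlers above are brought in line with the __setup() contract: returning 1 tells the early parameter parser the option was consumed, while returning 0 makes the kernel treat the string as an unknown parameter and complain about it. A minimal sketch with an invented option name:

	static bool myfeature_off;

	static int __init myfeature_off_setup(char *arg)
	{
		myfeature_off = true;
		return 1;	/* consumed; 0 would flag the option as unknown */
	}
	__setup("myfeature=off", myfeature_off_setup);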
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts b/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts
+index 984872f3d3a9b..b9e30df127fef 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maix_bit.dts
+@@ -203,6 +203,8 @@
+ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <50000000>;
++ spi-tx-bus-width = <4>;
++ spi-rx-bus-width = <4>;
+ m25p,fast-read;
+ broken-flash-reset;
+ };
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts b/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts
+index 7ba99b4da3042..8d23401b0bbb6 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maix_dock.dts
+@@ -205,6 +205,8 @@
+ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <50000000>;
++ spi-tx-bus-width = <4>;
++ spi-rx-bus-width = <4>;
+ m25p,fast-read;
+ broken-flash-reset;
+ };
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts b/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts
+index be9b12c9b374a..24fd83b43d9d5 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maix_go.dts
+@@ -213,6 +213,8 @@
+ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <50000000>;
++ spi-tx-bus-width = <4>;
++ spi-rx-bus-width = <4>;
+ m25p,fast-read;
+ broken-flash-reset;
+ };
+diff --git a/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts b/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts
+index 031c0c28f8195..25341f38292aa 100644
+--- a/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts
++++ b/arch/riscv/boot/dts/canaan/sipeed_maixduino.dts
+@@ -178,6 +178,8 @@
+ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <50000000>;
++ spi-tx-bus-width = <4>;
++ spi-rx-bus-width = <4>;
+ m25p,fast-read;
+ broken-flash-reset;
+ };
+diff --git a/arch/riscv/include/asm/module.lds.h b/arch/riscv/include/asm/module.lds.h
+index 4254ff2ff0494..1075beae1ac64 100644
+--- a/arch/riscv/include/asm/module.lds.h
++++ b/arch/riscv/include/asm/module.lds.h
+@@ -2,8 +2,8 @@
+ /* Copyright (C) 2017 Andes Technology Corporation */
+ #ifdef CONFIG_MODULE_SECTIONS
+ SECTIONS {
+- .plt (NOLOAD) : { BYTE(0) }
+- .got (NOLOAD) : { BYTE(0) }
+- .got.plt (NOLOAD) : { BYTE(0) }
++ .plt : { BYTE(0) }
++ .got : { BYTE(0) }
++ .got.plt : { BYTE(0) }
+ }
+ #endif
+diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
+index 60da0dcacf145..74d888c8d631a 100644
+--- a/arch/riscv/include/asm/thread_info.h
++++ b/arch/riscv/include/asm/thread_info.h
+@@ -11,11 +11,17 @@
+ #include <asm/page.h>
+ #include <linux/const.h>
+
++#ifdef CONFIG_KASAN
++#define KASAN_STACK_ORDER 1
++#else
++#define KASAN_STACK_ORDER 0
++#endif
++
+ /* thread information allocation */
+ #ifdef CONFIG_64BIT
+-#define THREAD_SIZE_ORDER (2)
++#define THREAD_SIZE_ORDER (2 + KASAN_STACK_ORDER)
+ #else
+-#define THREAD_SIZE_ORDER (1)
++#define THREAD_SIZE_ORDER (1 + KASAN_STACK_ORDER)
+ #endif
+ #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
+
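Worked out with 4 KiB pages: on RV64 the order rises from 2 to 3 under KASAN, so THREAD_SIZE grows from 16 KiB to 32 KiB, and on RV32 from 8 KiB to 16 KiB, absorbing the larger stack frames that KASAN instrumentation produces.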
+diff --git a/arch/riscv/kernel/cpu_ops_sbi.c b/arch/riscv/kernel/cpu_ops_sbi.c
+index dae29cbfe550b..7f2ad008274f3 100644
+--- a/arch/riscv/kernel/cpu_ops_sbi.c
++++ b/arch/riscv/kernel/cpu_ops_sbi.c
+@@ -21,7 +21,7 @@ const struct cpu_operations cpu_ops_sbi;
+ * be invoked from multiple threads in parallel. Define a per cpu data
+ * to handle that.
+ */
+-DEFINE_PER_CPU(struct sbi_hart_boot_data, boot_data);
++static DEFINE_PER_CPU(struct sbi_hart_boot_data, boot_data);
+
+ static int sbi_hsm_hart_start(unsigned long hartid, unsigned long saddr,
+ unsigned long priv)
+diff --git a/arch/riscv/kernel/perf_callchain.c b/arch/riscv/kernel/perf_callchain.c
+index 1fc075b8f764a..3348a61de7d99 100644
+--- a/arch/riscv/kernel/perf_callchain.c
++++ b/arch/riscv/kernel/perf_callchain.c
+@@ -15,8 +15,8 @@ static unsigned long user_backtrace(struct perf_callchain_entry_ctx *entry,
+ {
+ struct stackframe buftail;
+ unsigned long ra = 0;
+- unsigned long *user_frame_tail =
+- (unsigned long *)(fp - sizeof(struct stackframe));
++ unsigned long __user *user_frame_tail =
++ (unsigned long __user *)(fp - sizeof(struct stackframe));
+
+ /* Check accessibility of one struct frame_tail beyond */
+ if (!access_ok(user_frame_tail, sizeof(buftail)))
+@@ -68,7 +68,7 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+
+ static bool fill_callchain(void *entry, unsigned long pc)
+ {
+- return perf_callchain_store(entry, pc);
++ return perf_callchain_store(entry, pc) == 0;
+ }
+
+ void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 2296b1ff1e023..4e3db4004bfdc 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -3869,14 +3869,12 @@ retry:
+ return 0;
+ }
+
+-void kvm_s390_set_tod_clock(struct kvm *kvm,
+- const struct kvm_s390_vm_tod_clock *gtod)
++static void __kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
+ {
+ struct kvm_vcpu *vcpu;
+ union tod_clock clk;
+ unsigned long i;
+
+- mutex_lock(&kvm->lock);
+ preempt_disable();
+
+ store_tod_clock_ext(&clk);
+@@ -3897,7 +3895,22 @@ void kvm_s390_set_tod_clock(struct kvm *kvm,
+
+ kvm_s390_vcpu_unblock_all(kvm);
+ preempt_enable();
++}
++
++void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
++{
++ mutex_lock(&kvm->lock);
++ __kvm_s390_set_tod_clock(kvm, gtod);
++ mutex_unlock(&kvm->lock);
++}
++
++int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod)
++{
++ if (!mutex_trylock(&kvm->lock))
++ return 0;
++ __kvm_s390_set_tod_clock(kvm, gtod);
+ mutex_unlock(&kvm->lock);
++ return 1;
+ }
+
+ /**
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index 098831e815e6c..f2c910763d7fa 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -349,8 +349,8 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
+ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu);
+
+ /* implemented in kvm-s390.c */
+-void kvm_s390_set_tod_clock(struct kvm *kvm,
+- const struct kvm_s390_vm_tod_clock *gtod);
++void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
++int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod);
+ long kvm_arch_fault_in_page(struct kvm_vcpu *vcpu, gpa_t gpa, int writable);
+ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
+ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 417154b314a64..6a765fe22eafc 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -102,7 +102,20 @@ static int handle_set_clock(struct kvm_vcpu *vcpu)
+ return kvm_s390_inject_prog_cond(vcpu, rc);
+
+ VCPU_EVENT(vcpu, 3, "SCK: setting guest TOD to 0x%llx", gtod.tod);
+- kvm_s390_set_tod_clock(vcpu->kvm, >od);
++ /*
++ * To set the TOD clock the kvm lock must be taken, but the vcpu lock
++ * is already held in handle_set_clock. The usual lock order is the
++ * opposite. As SCK is deprecated and should not be used in several
++ * cases, for example when the multiple epoch facility or TOD clock
++ * steering facility is installed (see Principles of Operation), a
++ * slow path can be used. If the lock cannot be taken via try_lock,
++ * the instruction will be retried via -EAGAIN at a later point in
++ * time.
++ */
++ if (!kvm_s390_try_set_tod_clock(vcpu->kvm, >od)) {
++ kvm_s390_retry_instr(vcpu);
++ return -EAGAIN;
++ }
+
+ kvm_s390_set_psw_cc(vcpu, 0);
+ return 0;
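The comment above spells out a standard escape from lock-order inversion: when a lock would have to be taken in the wrong order, acquire it without blocking and, on failure, rewind the guest instruction so the whole operation replays later. Skeleton of that shape (the __set_tod_clock() helper here is illustrative):

	/* Called with the vcpu lock held; kvm->lock normally orders first,
	 * so it must never be blocked on from this context. */
	static int set_clock_or_retry(struct kvm_vcpu *vcpu, u64 tod)
	{
		if (!mutex_trylock(&vcpu->kvm->lock)) {
			kvm_s390_retry_instr(vcpu);	/* replay SCK later */
			return -EAGAIN;
		}
		__set_tod_clock(vcpu->kvm, tod);	/* illustrative helper */
		mutex_unlock(&vcpu->kvm->lock);
		return 0;
	}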
+diff --git a/arch/sparc/kernel/signal_32.c b/arch/sparc/kernel/signal_32.c
+index ffab16369beac..74f80443b195f 100644
+--- a/arch/sparc/kernel/signal_32.c
++++ b/arch/sparc/kernel/signal_32.c
+@@ -65,7 +65,7 @@ struct rt_signal_frame {
+ */
+ static inline bool invalid_frame_pointer(void __user *fp, int fplen)
+ {
+- if ((((unsigned long) fp) & 15) || !__access_ok((unsigned long)fp, fplen))
++ if ((((unsigned long) fp) & 15) || !access_ok(fp, fplen))
+ return true;
+
+ return false;
+diff --git a/arch/um/drivers/mconsole_kern.c b/arch/um/drivers/mconsole_kern.c
+index 6ead1e2404576..8ca67a6926830 100644
+--- a/arch/um/drivers/mconsole_kern.c
++++ b/arch/um/drivers/mconsole_kern.c
+@@ -224,7 +224,7 @@ void mconsole_go(struct mc_request *req)
+
+ void mconsole_stop(struct mc_request *req)
+ {
+- deactivate_fd(req->originating_fd, MCONSOLE_IRQ);
++ block_signals();
+ os_set_fd_block(req->originating_fd, 1);
+ mconsole_reply(req, "stopped", 0, 0);
+ for (;;) {
+@@ -247,6 +247,7 @@ void mconsole_stop(struct mc_request *req)
+ }
+ os_set_fd_block(req->originating_fd, 0);
+ mconsole_reply(req, "", 0, 0);
++ unblock_signals();
+ }
+
+ static DEFINE_SPINLOCK(mc_devices_lock);
+diff --git a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
+index 71fae5a09e56d..2077ce7a56479 100644
+--- a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
++++ b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl
+@@ -297,7 +297,7 @@ ___
+ $code.=<<___;
+ mov \$1,%eax
+ .Lno_key:
+- ret
++ RET
+ ___
+ &end_function("poly1305_init_x86_64");
+
+@@ -373,7 +373,7 @@ $code.=<<___;
+ .cfi_adjust_cfa_offset -48
+ .Lno_data:
+ .Lblocks_epilogue:
+- ret
++ RET
+ .cfi_endproc
+ ___
+ &end_function("poly1305_blocks_x86_64");
+@@ -399,7 +399,7 @@ $code.=<<___;
+ mov %rax,0($mac) # write result
+ mov %rcx,8($mac)
+
+- ret
++ RET
+ ___
+ &end_function("poly1305_emit_x86_64");
+ if ($avx) {
+@@ -429,7 +429,7 @@ ___
+ &poly1305_iteration();
+ $code.=<<___;
+ pop $ctx
+- ret
++ RET
+ .size __poly1305_block,.-__poly1305_block
+
+ .type __poly1305_init_avx,\@abi-omnipotent
+@@ -594,7 +594,7 @@ __poly1305_init_avx:
+
+ lea -48-64($ctx),$ctx # size [de-]optimization
+ pop %rbp
+- ret
++ RET
+ .size __poly1305_init_avx,.-__poly1305_init_avx
+ ___
+
+@@ -747,7 +747,7 @@ $code.=<<___;
+ .cfi_restore %rbp
+ .Lno_data_avx:
+ .Lblocks_avx_epilogue:
+- ret
++ RET
+ .cfi_endproc
+
+ .align 32
+@@ -1452,7 +1452,7 @@ $code.=<<___ if (!$win64);
+ ___
+ $code.=<<___;
+ vzeroupper
+- ret
++ RET
+ .cfi_endproc
+ ___
+ &end_function("poly1305_blocks_avx");
+@@ -1508,7 +1508,7 @@ $code.=<<___;
+ mov %rax,0($mac) # write result
+ mov %rcx,8($mac)
+
+- ret
++ RET
+ ___
+ &end_function("poly1305_emit_avx");
+
+@@ -1675,7 +1675,7 @@ $code.=<<___;
+ .cfi_restore %rbp
+ .Lno_data_avx2$suffix:
+ .Lblocks_avx2_epilogue$suffix:
+- ret
++ RET
+ .cfi_endproc
+
+ .align 32
+@@ -2201,7 +2201,7 @@ $code.=<<___ if (!$win64);
+ ___
+ $code.=<<___;
+ vzeroupper
+- ret
++ RET
+ .cfi_endproc
+ ___
+ if($avx > 2 && $avx512) {
+@@ -2792,7 +2792,7 @@ $code.=<<___ if (!$win64);
+ .cfi_def_cfa_register %rsp
+ ___
+ $code.=<<___;
+- ret
++ RET
+ .cfi_endproc
+ ___
+
+@@ -2893,7 +2893,7 @@ $code.=<<___ if ($flavour =~ /elf32/);
+ ___
+ $code.=<<___;
+ mov \$1,%eax
+- ret
++ RET
+ .size poly1305_init_base2_44,.-poly1305_init_base2_44
+ ___
+ {
+@@ -3010,7 +3010,7 @@ poly1305_blocks_vpmadd52:
+ jnz .Lblocks_vpmadd52_4x
+
+ .Lno_data_vpmadd52:
+- ret
++ RET
+ .size poly1305_blocks_vpmadd52,.-poly1305_blocks_vpmadd52
+ ___
+ }
+@@ -3451,7 +3451,7 @@ poly1305_blocks_vpmadd52_4x:
+ vzeroall
+
+ .Lno_data_vpmadd52_4x:
+- ret
++ RET
+ .size poly1305_blocks_vpmadd52_4x,.-poly1305_blocks_vpmadd52_4x
+ ___
+ }
+@@ -3824,7 +3824,7 @@ $code.=<<___;
+ vzeroall
+
+ .Lno_data_vpmadd52_8x:
+- ret
++ RET
+ .size poly1305_blocks_vpmadd52_8x,.-poly1305_blocks_vpmadd52_8x
+ ___
+ }
+@@ -3861,7 +3861,7 @@ poly1305_emit_base2_44:
+ mov %rax,0($mac) # write result
+ mov %rcx,8($mac)
+
+- ret
++ RET
+ .size poly1305_emit_base2_44,.-poly1305_emit_base2_44
+ ___
+ } } }
+@@ -3916,7 +3916,7 @@ xor128_encrypt_n_pad:
+
+ .Ldone_enc:
+ mov $otp,%rax
+- ret
++ RET
+ .size xor128_encrypt_n_pad,.-xor128_encrypt_n_pad
+
+ .globl xor128_decrypt_n_pad
+@@ -3967,7 +3967,7 @@ xor128_decrypt_n_pad:
+
+ .Ldone_dec:
+ mov $otp,%rax
+- ret
++ RET
+ .size xor128_decrypt_n_pad,.-xor128_decrypt_n_pad
+ ___
+ }
+@@ -4109,7 +4109,7 @@ avx_handler:
+ pop %rbx
+ pop %rdi
+ pop %rsi
+- ret
++ RET
+ .size avx_handler,.-avx_handler
+
+ .section .pdata
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 2d33bba9a1440..215aed65e9782 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -472,7 +472,7 @@ static u64 pt_config_filters(struct perf_event *event)
+ pt->filters.filter[range].msr_b = filter->msr_b;
+ }
+
+- rtit_ctl |= filter->config << pt_address_ranges[range].reg_off;
++ rtit_ctl |= (u64)filter->config << pt_address_ranges[range].reg_off;
+ }
+
+ return rtit_ctl;
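The cast above matters because the config field is shifted by register offsets of 32 and up; when the left operand is only 32 bits wide (as on a 32-bit kernel) the shift is performed in 32-bit arithmetic, which is undefined for those counts, and the result is widened to u64 only after the bits are gone. The AVIC mask changes in the next file fix the same class of bug by making the constants ULL. A self-contained illustration in plain userspace C:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t config = 0x3;
		unsigned int reg_off = 36;

		uint64_t bad  = config << reg_off;            /* 32-bit shift: undefined */
		uint64_t good = (uint64_t)config << reg_off;  /* 0x3000000000 */

		printf("bad=%llx good=%llx\n",
		       (unsigned long long)bad, (unsigned long long)good);
		return 0;
	}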
+diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
+index bb2fb78523cee..ab572d8def2b7 100644
+--- a/arch/x86/include/asm/svm.h
++++ b/arch/x86/include/asm/svm.h
+@@ -222,17 +222,19 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
+
+
+ /* AVIC */
+-#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK (0xFF)
++#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK (0xFFULL)
+ #define AVIC_LOGICAL_ID_ENTRY_VALID_BIT 31
+ #define AVIC_LOGICAL_ID_ENTRY_VALID_MASK (1 << 31)
+
+-#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK (0xFFULL)
++#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK GENMASK_ULL(11, 0)
+ #define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK (0xFFFFFFFFFFULL << 12)
+ #define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK (1ULL << 62)
+ #define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK (1ULL << 63)
+-#define AVIC_PHYSICAL_ID_TABLE_SIZE_MASK (0xFF)
++#define AVIC_PHYSICAL_ID_TABLE_SIZE_MASK (0xFFULL)
+
+-#define AVIC_DOORBELL_PHYSICAL_ID_MASK (0xFF)
++#define AVIC_DOORBELL_PHYSICAL_ID_MASK GENMASK_ULL(11, 0)
++
++#define VMCB_AVIC_APIC_BAR_MASK 0xFFFFFFFFFF000ULL
+
+ #define AVIC_UNACCEL_ACCESS_WRITE_MASK 1
+ #define AVIC_UNACCEL_ACCESS_OFFSET_MASK 0xFF0
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index 7c7824ae78622..dc6d5e98d2963 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -1639,7 +1639,7 @@ static int __xstate_request_perm(u64 permitted, u64 requested, bool guest)
+
+ perm = guest ? &fpu->guest_perm : &fpu->perm;
+ /* Pairs with the READ_ONCE() in xstate_get_group_perm() */
+- WRITE_ONCE(perm->__state_perm, requested);
++ WRITE_ONCE(perm->__state_perm, mask);
+ /* Protected by sighand lock */
+ perm->__state_size = ksize;
+ perm->__user_state_size = usize;
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index d77481ecb0d5f..ed8a13ac4ab23 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -517,7 +517,7 @@ static void __send_ipi_mask(const struct cpumask *mask, int vector)
+ } else if (apic_id < min && max - apic_id < KVM_IPI_CLUSTER_SIZE) {
+ ipi_bitmap <<= min - apic_id;
+ min = apic_id;
+- } else if (apic_id < min + KVM_IPI_CLUSTER_SIZE) {
++ } else if (apic_id > min && apic_id < min + KVM_IPI_CLUSTER_SIZE) {
+ max = apic_id < max ? max : apic_id;
+ } else {
+ ret = kvm_hypercall4(KVM_HC_SEND_IPI, (unsigned long)ipi_bitmap,
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index e86d610dc6b7a..02d061a06aa19 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -1623,11 +1623,6 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ goto exception;
+ }
+
+- if (!seg_desc.p) {
+- err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
+- goto exception;
+- }
+-
+ dpl = seg_desc.dpl;
+
+ switch (seg) {
+@@ -1667,6 +1662,10 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ case VCPU_SREG_TR:
+ if (seg_desc.s || (seg_desc.type != 1 && seg_desc.type != 9))
+ goto exception;
++ if (!seg_desc.p) {
++ err_vec = NP_VECTOR;
++ goto exception;
++ }
+ old_desc = seg_desc;
+ seg_desc.type |= 2; /* busy */
+ ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
+@@ -1691,6 +1690,11 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
+ break;
+ }
+
++ if (!seg_desc.p) {
++ err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
++ goto exception;
++ }
++
+ if (seg_desc.s) {
+ /* mark segment as accessed */
+ if (!(seg_desc.type & 1)) {
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 6e38a7d22e97a..10bc257d3803b 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -236,7 +236,7 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ struct kvm_vcpu *vcpu = hv_synic_to_vcpu(synic);
+ int ret;
+
+- if (!synic->active && !host)
++ if (!synic->active && (!host || data))
+ return 1;
+
+ trace_kvm_hv_synic_set_msr(vcpu->vcpu_id, msr, data, host);
+@@ -282,6 +282,9 @@ static int synic_set_msr(struct kvm_vcpu_hv_synic *synic,
+ case HV_X64_MSR_EOM: {
+ int i;
+
++ if (!synic->active)
++ break;
++
+ for (i = 0; i < ARRAY_SIZE(synic->sint); i++)
+ kvm_hv_notify_acked_sint(vcpu, i);
+ break;
+@@ -446,6 +449,9 @@ static int synic_set_irq(struct kvm_vcpu_hv_synic *synic, u32 sint)
+ struct kvm_lapic_irq irq;
+ int ret, vector;
+
++ if (KVM_BUG_ON(!lapic_in_kernel(vcpu), vcpu->kvm))
++ return -EINVAL;
++
+ if (sint >= ARRAY_SIZE(synic->sint))
+ return -EINVAL;
+
+@@ -658,7 +664,7 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
+ struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
+ struct kvm_vcpu_hv_synic *synic = to_hv_synic(vcpu);
+
+- if (!synic->active && !host)
++ if (!synic->active && (!host || config))
+ return 1;
+
+ if (unlikely(!host && hv_vcpu->enforce_cpuid && new_config.direct_mode &&
+@@ -687,7 +693,7 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
+ struct kvm_vcpu *vcpu = hv_stimer_to_vcpu(stimer);
+ struct kvm_vcpu_hv_synic *synic = to_hv_synic(vcpu);
+
+- if (!synic->active && !host)
++ if (!synic->active && (!host || count))
+ return 1;
+
+ trace_kvm_hv_stimer_set_count(hv_stimer_to_vcpu(stimer)->vcpu_id,
+@@ -1750,7 +1756,7 @@ struct kvm_hv_hcall {
+ sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS];
+ };
+
+-static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
++static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
+ {
+ int i;
+ gpa_t gpa;
+@@ -1765,7 +1771,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ int sparse_banks_len;
+ bool all_cpus;
+
+- if (!ex) {
++ if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST ||
++ hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE) {
+ if (hc->fast) {
+ flush.address_space = hc->ingpa;
+ flush.flags = hc->outgpa;
+@@ -1819,7 +1826,8 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+
+ if (!all_cpus) {
+ if (hc->fast) {
+- if (sparse_banks_len > HV_HYPERCALL_MAX_XMM_REGISTERS - 1)
++ /* XMM0 is already consumed; each XMM holds two sparse banks. */
++ if (sparse_banks_len > 2 * (HV_HYPERCALL_MAX_XMM_REGISTERS - 1))
+ return HV_STATUS_INVALID_HYPERCALL_INPUT;
+ for (i = 0; i < sparse_banks_len; i += 2) {
+ sparse_banks[i] = sse128_lo(hc->xmm[i / 2 + 1]);
+@@ -1875,7 +1883,7 @@ static void kvm_send_ipi_to_many(struct kvm *kvm, u32 vector,
+ }
+ }
+
+-static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool ex)
++static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
+ {
+ struct kvm *kvm = vcpu->kvm;
+ struct hv_send_ipi_ex send_ipi_ex;
+@@ -1888,8 +1896,9 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ int sparse_banks_len;
+ u32 vector;
+ bool all_cpus;
++ int i;
+
+- if (!ex) {
++ if (hc->code == HVCALL_SEND_IPI) {
+ if (!hc->fast) {
+ if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,
+ sizeof(send_ipi))))
+@@ -1908,9 +1917,15 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+
+ trace_kvm_hv_send_ipi(vector, sparse_banks[0]);
+ } else {
+- if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi_ex,
+- sizeof(send_ipi_ex))))
+- return HV_STATUS_INVALID_HYPERCALL_INPUT;
++ if (!hc->fast) {
++ if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi_ex,
++ sizeof(send_ipi_ex))))
++ return HV_STATUS_INVALID_HYPERCALL_INPUT;
++ } else {
++ send_ipi_ex.vector = (u32)hc->ingpa;
++ send_ipi_ex.vp_set.format = hc->outgpa;
++ send_ipi_ex.vp_set.valid_bank_mask = sse128_lo(hc->xmm[0]);
++ }
+
+ trace_kvm_hv_send_ipi_ex(send_ipi_ex.vector,
+ send_ipi_ex.vp_set.format,
+@@ -1918,8 +1933,7 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+
+ vector = send_ipi_ex.vector;
+ valid_bank_mask = send_ipi_ex.vp_set.valid_bank_mask;
+- sparse_banks_len = bitmap_weight(&valid_bank_mask, 64) *
+- sizeof(sparse_banks[0]);
++ sparse_banks_len = bitmap_weight(&valid_bank_mask, 64);
+
+ all_cpus = send_ipi_ex.vp_set.format == HV_GENERIC_SET_ALL;
+
+@@ -1929,12 +1943,27 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc, bool
+ if (!sparse_banks_len)
+ goto ret_success;
+
+- if (kvm_read_guest(kvm,
+- hc->ingpa + offsetof(struct hv_send_ipi_ex,
+- vp_set.bank_contents),
+- sparse_banks,
+- sparse_banks_len))
+- return HV_STATUS_INVALID_HYPERCALL_INPUT;
++ if (!hc->fast) {
++ if (kvm_read_guest(kvm,
++ hc->ingpa + offsetof(struct hv_send_ipi_ex,
++ vp_set.bank_contents),
++ sparse_banks,
++ sparse_banks_len * sizeof(sparse_banks[0])))
++ return HV_STATUS_INVALID_HYPERCALL_INPUT;
++ } else {
++ /*
++ * The lower half of XMM0 is already consumed; each XMM holds
++ * two sparse banks.
++ */
++ if (sparse_banks_len > (2 * HV_HYPERCALL_MAX_XMM_REGISTERS - 1))
++ return HV_STATUS_INVALID_HYPERCALL_INPUT;
++ for (i = 0; i < sparse_banks_len; i++) {
++ if (i % 2)
++ sparse_banks[i] = sse128_lo(hc->xmm[(i + 1) / 2]);
++ else
++ sparse_banks[i] = sse128_hi(hc->xmm[i / 2]);
++ }
++ }
+ }
+
+ check_and_send_ipi:
+@@ -2096,6 +2125,7 @@ static bool is_xmm_fast_hypercall(struct kvm_hv_hcall *hc)
+ case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
+ case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+ case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
++ case HVCALL_SEND_IPI_EX:
+ return true;
+ }
+
+@@ -2247,46 +2277,28 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
+ kvm_hv_hypercall_complete_userspace;
+ return 0;
+ case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST:
+- if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
+- ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+- break;
+- }
+- ret = kvm_hv_flush_tlb(vcpu, &hc, false);
+- break;
+- case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
+- if (unlikely(hc.rep)) {
+- ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+- break;
+- }
+- ret = kvm_hv_flush_tlb(vcpu, &hc, false);
+- break;
+ case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+ if (unlikely(!hc.rep_cnt || hc.rep_idx)) {
+ ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+ break;
+ }
+- ret = kvm_hv_flush_tlb(vcpu, &hc, true);
++ ret = kvm_hv_flush_tlb(vcpu, &hc);
+ break;
++ case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE:
+ case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+ if (unlikely(hc.rep)) {
+ ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+ break;
+ }
+- ret = kvm_hv_flush_tlb(vcpu, &hc, true);
++ ret = kvm_hv_flush_tlb(vcpu, &hc);
+ break;
+ case HVCALL_SEND_IPI:
+- if (unlikely(hc.rep)) {
+- ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+- break;
+- }
+- ret = kvm_hv_send_ipi(vcpu, &hc, false);
+- break;
+ case HVCALL_SEND_IPI_EX:
+- if (unlikely(hc.fast || hc.rep)) {
++ if (unlikely(hc.rep)) {
+ ret = HV_STATUS_INVALID_HYPERCALL_INPUT;
+ break;
+ }
+- ret = kvm_hv_send_ipi(vcpu, &hc, true);
++ ret = kvm_hv_send_ipi(vcpu, &hc);
+ break;
+ case HVCALL_POST_DEBUG_DATA:
+ case HVCALL_RETRIEVE_DEBUG_DATA:
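For the fast-call path added above, the register layout implied by the comments is: the low half of XMM0 already carries the valid-bank mask, bank 0 sits in the high half of XMM0, and each following XMM register supplies two more banks, low half first. The unpacking therefore reduces to (sketch, reusing the patch's own helpers):

	for (i = 0; i < sparse_banks_len; i++)
		sparse_banks[i] = (i % 2) ? sse128_lo(hc->xmm[(i + 1) / 2])
					  : sse128_hi(hc->xmm[i / 2]);

With six XMM registers available that yields the 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - 1 bank capacity checked just before the loop.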
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 9322e6340a742..2a10d0033c964 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -992,6 +992,10 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
+ *r = -1;
+
+ if (irq->shorthand == APIC_DEST_SELF) {
++ if (KVM_BUG_ON(!src, kvm)) {
++ *r = 0;
++ return true;
++ }
+ *r = kvm_apic_set_irq(src->vcpu, irq, dest_map);
+ return true;
+ }
+@@ -2242,10 +2246,7 @@ void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
+
+ void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8)
+ {
+- struct kvm_lapic *apic = vcpu->arch.apic;
+-
+- apic_set_tpr(apic, ((cr8 & 0x0f) << 4)
+- | (kvm_lapic_get_reg(apic, APIC_TASKPRI) & 4));
++ apic_set_tpr(vcpu->arch.apic, (cr8 & 0x0f) << 4);
+ }
+
+ u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
+index e9fbb2c8bbe2d..c7070973f0de1 100644
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -48,6 +48,7 @@
+ X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE)
+
+ #define KVM_MMU_CR0_ROLE_BITS (X86_CR0_PG | X86_CR0_WP)
++#define KVM_MMU_EFER_ROLE_BITS (EFER_LME | EFER_NX)
+
+ static __always_inline u64 rsvd_bits(int s, int e)
+ {
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index 5b5bdac97c7b9..3821d5140ea31 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -34,9 +34,8 @@
+ #define PT_HAVE_ACCESSED_DIRTY(mmu) true
+ #ifdef CONFIG_X86_64
+ #define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
+- #define CMPXCHG cmpxchg
++ #define CMPXCHG "cmpxchgq"
+ #else
+- #define CMPXCHG cmpxchg64
+ #define PT_MAX_FULL_LEVELS 2
+ #endif
+ #elif PTTYPE == 32
+@@ -52,7 +51,7 @@
+ #define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
+ #define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
+ #define PT_HAVE_ACCESSED_DIRTY(mmu) true
+- #define CMPXCHG cmpxchg
++ #define CMPXCHG "cmpxchgl"
+ #elif PTTYPE == PTTYPE_EPT
+ #define pt_element_t u64
+ #define guest_walker guest_walkerEPT
+@@ -65,7 +64,9 @@
+ #define PT_GUEST_DIRTY_SHIFT 9
+ #define PT_GUEST_ACCESSED_SHIFT 8
+ #define PT_HAVE_ACCESSED_DIRTY(mmu) ((mmu)->ept_ad)
+- #define CMPXCHG cmpxchg64
++ #ifdef CONFIG_X86_64
++ #define CMPXCHG "cmpxchgq"
++ #endif
+ #define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
+ #else
+ #error Invalid PTTYPE value
+@@ -147,43 +148,36 @@ static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ pt_element_t __user *ptep_user, unsigned index,
+ pt_element_t orig_pte, pt_element_t new_pte)
+ {
+- int npages;
+- pt_element_t ret;
+- pt_element_t *table;
+- struct page *page;
+-
+- npages = get_user_pages_fast((unsigned long)ptep_user, 1, FOLL_WRITE, &page);
+- if (likely(npages == 1)) {
+- table = kmap_atomic(page);
+- ret = CMPXCHG(&table[index], orig_pte, new_pte);
+- kunmap_atomic(table);
+-
+- kvm_release_page_dirty(page);
+- } else {
+- struct vm_area_struct *vma;
+- unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK;
+- unsigned long pfn;
+- unsigned long paddr;
+-
+- mmap_read_lock(current->mm);
+- vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE);
+- if (!vma || !(vma->vm_flags & VM_PFNMAP)) {
+- mmap_read_unlock(current->mm);
+- return -EFAULT;
+- }
+- pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+- paddr = pfn << PAGE_SHIFT;
+- table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB);
+- if (!table) {
+- mmap_read_unlock(current->mm);
+- return -EFAULT;
+- }
+- ret = CMPXCHG(&table[index], orig_pte, new_pte);
+- memunmap(table);
+- mmap_read_unlock(current->mm);
+- }
++ signed char r;
+
+- return (ret != orig_pte);
++ if (!user_access_begin(ptep_user, sizeof(pt_element_t)))
++ return -EFAULT;
++
++#ifdef CMPXCHG
++ asm volatile("1:" LOCK_PREFIX CMPXCHG " %[new], %[ptr]\n"
++ "setnz %b[r]\n"
++ "2:"
++ _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %k[r])
++ : [ptr] "+m" (*ptep_user),
++ [old] "+a" (orig_pte),
++ [r] "=q" (r)
++ : [new] "r" (new_pte)
++ : "memory");
++#else
++ asm volatile("1:" LOCK_PREFIX "cmpxchg8b %[ptr]\n"
++ "setnz %b[r]\n"
++ "2:"
++ _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %k[r])
++ : [ptr] "+m" (*ptep_user),
++ [old] "+A" (orig_pte),
++ [r] "=q" (r)
++ : [new_lo] "b" ((u32)new_pte),
++ [new_hi] "c" ((u32)(new_pte >> 32))
++ : "memory");
++#endif
++
++ user_access_end();
++ return r;
+ }
+
+ static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index bc9e3553fba2d..d2e69b2ddbbee 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -99,15 +99,18 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ }
+
+ /*
+- * Finds the next valid root after root (or the first valid root if root
+- * is NULL), takes a reference on it, and returns that next root. If root
+- * is not NULL, this thread should have already taken a reference on it, and
+- * that reference will be dropped. If no valid root is found, this
+- * function will return NULL.
++ * Returns the next root after @prev_root (or the first root if @prev_root is
++ * NULL). A reference to the returned root is acquired, and the reference to
++ * @prev_root is released (the caller obviously must hold a reference to
++ * @prev_root if it's non-NULL).
++ *
++ * If @only_valid is true, invalid roots are skipped.
++ *
++ * Returns NULL if the end of tdp_mmu_roots was reached.
+ */
+ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
+ struct kvm_mmu_page *prev_root,
+- bool shared)
++ bool shared, bool only_valid)
+ {
+ struct kvm_mmu_page *next_root;
+
+@@ -121,9 +124,14 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
+ next_root = list_first_or_null_rcu(&kvm->arch.tdp_mmu_roots,
+ typeof(*next_root), link);
+
+- while (next_root && !kvm_tdp_mmu_get_root(kvm, next_root))
++ while (next_root) {
++ if ((!only_valid || !next_root->role.invalid) &&
++ kvm_tdp_mmu_get_root(kvm, next_root))
++ break;
++
+ next_root = list_next_or_null_rcu(&kvm->arch.tdp_mmu_roots,
+ &next_root->link, typeof(*next_root), link);
++ }
+
+ rcu_read_unlock();
+
+@@ -143,13 +151,19 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
+ * mode. In the unlikely event that this thread must free a root, the lock
+ * will be temporarily dropped and reacquired in write mode.
+ */
+-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \
+- for (_root = tdp_mmu_next_root(_kvm, NULL, _shared); \
+- _root; \
+- _root = tdp_mmu_next_root(_kvm, _root, _shared)) \
+- if (kvm_mmu_page_as_id(_root) != _as_id) { \
++#define __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, _only_valid)\
++ for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, _only_valid); \
++ _root; \
++ _root = tdp_mmu_next_root(_kvm, _root, _shared, _only_valid)) \
++ if (kvm_mmu_page_as_id(_root) != _as_id) { \
+ } else
+
++#define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \
++ __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
++
++#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \
++ __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, false)
++
+ #define for_each_tdp_mmu_root(_kvm, _root, _as_id) \
+ list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link, \
+ lockdep_is_held_type(&kvm->mmu_lock, 0) || \
+@@ -200,7 +214,10 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+
+ role = page_role_for_level(vcpu, vcpu->arch.mmu->shadow_root_level);
+
+- /* Check for an existing root before allocating a new one. */
++ /*
++ * Check for an existing root before allocating a new one. Note, the
++ * role check prevents consuming an invalid root.
++ */
+ for_each_tdp_mmu_root(kvm, root, kvm_mmu_role_as_id(role)) {
+ if (root->role.word == role.word &&
+ kvm_tdp_mmu_get_root(kvm, root))
+@@ -1032,13 +1049,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
+ bool flush)
+ {
+- struct kvm_mmu_page *root;
+-
+- for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
+- flush = zap_gfn_range(kvm, root, range->start, range->end,
+- range->may_block, flush, false);
+-
+- return flush;
++ return __kvm_tdp_mmu_zap_gfn_range(kvm, range->slot->as_id, range->start,
++ range->end, range->may_block, flush);
+ }
+
+ typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
+@@ -1221,7 +1233,7 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
+
+ lockdep_assert_held_read(&kvm->mmu_lock);
+
+- for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
++ for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ spte_set |= wrprot_gfn_range(kvm, root, slot->base_gfn,
+ slot->base_gfn + slot->npages, min_level);
+
+@@ -1249,6 +1261,9 @@ retry:
+ if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
+ continue;
+
++ if (!is_shadow_present_pte(iter.old_spte))
++ continue;
++
+ if (spte_ad_need_write_protect(iter.old_spte)) {
+ if (is_writable_pte(iter.old_spte))
+ new_spte = iter.old_spte & ~PT_WRITABLE_MASK;
+@@ -1291,7 +1306,7 @@ bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm,
+
+ lockdep_assert_held_read(&kvm->mmu_lock);
+
+- for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
++ for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ spte_set |= clear_dirty_gfn_range(kvm, root, slot->base_gfn,
+ slot->base_gfn + slot->npages);
+
+@@ -1416,7 +1431,7 @@ void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+
+ lockdep_assert_held_read(&kvm->mmu_lock);
+
+- for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
++ for_each_valid_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
+ zap_collapsible_spte_range(kvm, root, slot);
+ }
+
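The reworked tdp_mmu_next_root() near the top of this file's diff documents a hand-over-hand reference walk: each step pins the next root before unpinning the previous one, so the list can be traversed across points where the MMU lock is dropped. Stripped to its skeleton (the node type and list helpers are hypothetical, not kernel APIs):

	static struct node *next_pinned(struct list_head *head, struct node *prev)
	{
		struct node *n = node_after(head, prev);	/* hypothetical */

		while (n && !refcount_inc_not_zero(&n->ref))	/* skip dying nodes */
			n = node_after(head, n);		/* hypothetical */
		if (prev)
			node_put(prev);		/* drop the caller's pin */
		return n;
	}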
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
+index 3899004a5d91e..08c917511fedd 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.h
++++ b/arch/x86/kvm/mmu/tdp_mmu.h
+@@ -10,9 +10,6 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm *kvm,
+ struct kvm_mmu_page *root)
+ {
+- if (root->role.invalid)
+- return false;
+-
+ return refcount_inc_not_zero(&root->tdp_mmu_root_count);
+ }
+
+diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
+index fb3e207913388..7ef229011db8a 100644
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -783,7 +783,7 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+ {
+ struct kvm_kernel_irq_routing_entry *e;
+ struct kvm_irq_routing_table *irq_rt;
+- int idx, ret = -EINVAL;
++ int idx, ret = 0;
+
+ if (!kvm_arch_has_assigned_device(kvm) ||
+ !irq_remapping_cap(IRQ_POSTING_CAP))
+@@ -794,7 +794,13 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
+
+ idx = srcu_read_lock(&kvm->irq_srcu);
+ irq_rt = srcu_dereference(kvm->irq_routing, &kvm->irq_srcu);
+- WARN_ON(guest_irq >= irq_rt->nr_rt_entries);
++
++ if (guest_irq >= irq_rt->nr_rt_entries ||
++ hlist_empty(&irq_rt->map[guest_irq])) {
++ pr_warn_once("no route for guest_irq %u/%u (broken user space?)\n",
++ guest_irq, irq_rt->nr_rt_entries);
++ goto out;
++ }
+
+ hlist_for_each_entry(e, &irq_rt->map[guest_irq], link) {
+ struct vcpu_data vcpu_info;
+@@ -927,17 +933,12 @@ out:
+ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+ u64 entry;
+- /* ID = 0xff (broadcast), ID > 0xff (reserved) */
+ int h_physical_id = kvm_cpu_get_apicid(cpu);
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ lockdep_assert_preemption_disabled();
+
+- /*
+- * Since the host physical APIC id is 8 bits,
+- * we can support host APIC ID upto 255.
+- */
+- if (WARN_ON(h_physical_id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
++ if (WARN_ON(h_physical_id & ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
+ return;
+
+ /*
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 17b53457d8664..fef9758525826 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2358,7 +2358,7 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
+ memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
+ }
+
+-static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
++static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
+ {
+ struct kvm_vcpu *vcpu;
+ struct ghcb *ghcb;
+@@ -2463,7 +2463,7 @@ static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
+ goto vmgexit_err;
+ }
+
+- return true;
++ return 0;
+
+ vmgexit_err:
+ vcpu = &svm->vcpu;
+@@ -2486,7 +2486,8 @@ vmgexit_err:
+ ghcb_set_sw_exit_info_1(ghcb, 2);
+ ghcb_set_sw_exit_info_2(ghcb, reason);
+
+- return false;
++ /* Resume the guest to "return" the error code. */
++ return 1;
+ }
+
+ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
+@@ -2545,7 +2546,7 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
+ }
+
+ #define GHCB_SCRATCH_AREA_LIMIT (16ULL * PAGE_SIZE)
+-static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
++static int setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
+ {
+ struct vmcb_control_area *control = &svm->vmcb->control;
+ struct ghcb *ghcb = svm->sev_es.ghcb;
+@@ -2598,14 +2599,14 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
+ }
+ scratch_va = kvzalloc(len, GFP_KERNEL_ACCOUNT);
+ if (!scratch_va)
+- goto e_scratch;
++ return -ENOMEM;
+
+ if (kvm_read_guest(svm->vcpu.kvm, scratch_gpa_beg, scratch_va, len)) {
+ /* Unable to copy scratch area from guest */
+ pr_err("vmgexit: kvm_read_guest for scratch area failed\n");
+
+ kvfree(scratch_va);
+- goto e_scratch;
++ return -EFAULT;
+ }
+
+ /*
+@@ -2621,13 +2622,13 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
+ svm->sev_es.ghcb_sa = scratch_va;
+ svm->sev_es.ghcb_sa_len = len;
+
+- return true;
++ return 0;
+
+ e_scratch:
+ ghcb_set_sw_exit_info_1(ghcb, 2);
+ ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_SCRATCH_AREA);
+
+- return false;
++ return 1;
+ }
+
+ static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
+@@ -2765,17 +2766,18 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
+
+ exit_code = ghcb_get_sw_exit_code(ghcb);
+
+- if (!sev_es_validate_vmgexit(svm))
+- return 1;
++ ret = sev_es_validate_vmgexit(svm);
++ if (ret)
++ return ret;
+
+ sev_es_sync_from_ghcb(svm);
+ ghcb_set_sw_exit_info_1(ghcb, 0);
+ ghcb_set_sw_exit_info_2(ghcb, 0);
+
+- ret = 1;
+ switch (exit_code) {
+ case SVM_VMGEXIT_MMIO_READ:
+- if (!setup_vmgexit_scratch(svm, true, control->exit_info_2))
++ ret = setup_vmgexit_scratch(svm, true, control->exit_info_2);
++ if (ret)
+ break;
+
+ ret = kvm_sev_es_mmio_read(vcpu,
+@@ -2784,7 +2786,8 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
+ svm->sev_es.ghcb_sa);
+ break;
+ case SVM_VMGEXIT_MMIO_WRITE:
+- if (!setup_vmgexit_scratch(svm, false, control->exit_info_2))
++ ret = setup_vmgexit_scratch(svm, false, control->exit_info_2);
++ if (ret)
+ break;
+
+ ret = kvm_sev_es_mmio_write(vcpu,
+@@ -2817,6 +2820,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
+ ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_INPUT);
+ }
+
++ ret = 1;
+ break;
+ }
+ case SVM_VMGEXIT_UNSUPPORTED_EVENT:
+@@ -2836,6 +2840,7 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+ {
+ int count;
+ int bytes;
++ int r;
+
+ if (svm->vmcb->control.exit_info_2 > INT_MAX)
+ return -EINVAL;
+@@ -2844,8 +2849,9 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+ if (unlikely(check_mul_overflow(count, size, &bytes)))
+ return -EINVAL;
+
+- if (!setup_vmgexit_scratch(svm, in, bytes))
+- return 1;
++ r = setup_vmgexit_scratch(svm, in, bytes);
++ if (r)
++ return r;
+
+ return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa,
+ count, in);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index eb4029660bd9f..9b6166348c94f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1656,8 +1656,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ return r;
+ }
+
+- /* Update reserved bits */
+- if ((efer ^ old_efer) & EFER_NX)
++ if ((efer ^ old_efer) & KVM_MMU_EFER_ROLE_BITS)
+ kvm_mmu_reset_context(vcpu);
+
+ return 0;
+diff --git a/arch/x86/lib/iomem.c b/arch/x86/lib/iomem.c
+index df50451d94ef7..3e2f33fc33de2 100644
+--- a/arch/x86/lib/iomem.c
++++ b/arch/x86/lib/iomem.c
+@@ -22,7 +22,7 @@ static __always_inline void rep_movs(void *to, const void *from, size_t n)
+ : "memory");
+ }
+
+-void memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
++static void string_memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
+ {
+ if (unlikely(!n))
+ return;
+@@ -38,9 +38,8 @@ void memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
+ }
+ rep_movs(to, (const void *)from, n);
+ }
+-EXPORT_SYMBOL(memcpy_fromio);
+
+-void memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
++static void string_memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
+ {
+ if (unlikely(!n))
+ return;
+@@ -56,14 +55,64 @@ void memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
+ }
+ rep_movs((void *)to, (const void *) from, n);
+ }
++
++static void unrolled_memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
++{
++ const volatile char __iomem *in = from;
++ char *out = to;
++ int i;
++
++ for (i = 0; i < n; ++i)
++ out[i] = readb(&in[i]);
++}
++
++static void unrolled_memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
++{
++ volatile char __iomem *out = to;
++ const char *in = from;
++ int i;
++
++ for (i = 0; i < n; ++i)
++ writeb(in[i], &out[i]);
++}
++
++static void unrolled_memset_io(volatile void __iomem *a, int b, size_t c)
++{
++ volatile char __iomem *mem = a;
++ int i;
++
++ for (i = 0; i < c; ++i)
++ writeb(b, &mem[i]);
++}
++
++void memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
++{
++ if (cc_platform_has(CC_ATTR_GUEST_UNROLL_STRING_IO))
++ unrolled_memcpy_fromio(to, from, n);
++ else
++ string_memcpy_fromio(to, from, n);
++}
++EXPORT_SYMBOL(memcpy_fromio);
++
++void memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
++{
++ if (cc_platform_has(CC_ATTR_GUEST_UNROLL_STRING_IO))
++ unrolled_memcpy_toio(to, from, n);
++ else
++ string_memcpy_toio(to, from, n);
++}
+ EXPORT_SYMBOL(memcpy_toio);
+
+ void memset_io(volatile void __iomem *a, int b, size_t c)
+ {
+- /*
+- * TODO: memset can mangle the IO patterns quite a bit.
+- * perhaps it would be better to use a dumb one:
+- */
+- memset((void *)a, b, c);
++ if (cc_platform_has(CC_ATTR_GUEST_UNROLL_STRING_IO)) {
++ unrolled_memset_io(a, b, c);
++ } else {
++ /*
++ * TODO: memset can mangle the IO patterns quite a bit.
++ * perhaps it would be better to use a dumb one:
++ */
++ memset((void *)a, b, c);
++ }
+ }
+ EXPORT_SYMBOL(memset_io);
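
The iomem.c hunk replaces the unconditional REP-string MMIO helpers with a
runtime dispatch: byte-at-a-time accessors when the confidential-computing
platform needs string I/O unrolled, the fast bulk path otherwise. A
userspace-flavored sketch of the dispatch, where cc_unroll() is a hypothetical
stand-in for cc_platform_has(CC_ATTR_GUEST_UNROLL_STRING_IO):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool cc_unroll(void)             /* assumption: platform query */
{
        return false;
}

static void unrolled_copy(void *to, const volatile void *from, size_t n)
{
        const volatile unsigned char *in = from;
        unsigned char *out = to;

        for (size_t i = 0; i < n; i++)
                out[i] = in[i];         /* one access per byte, like readb() */
}

static void my_memcpy_fromio(void *to, const volatile void *from, size_t n)
{
        if (cc_unroll())
                unrolled_copy(to, from, n);
        else
                memcpy(to, (const void *)from, n);      /* fast path */
}

int main(void)
{
        char src[4] = "abc", dst[4];

        my_memcpy_fromio(dst, src, sizeof(src));
        return dst[0] == 'a' ? 0 : 1;
}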
+diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
+index 89dd6b1708b04..21ecbe754cb2f 100644
+--- a/arch/x86/xen/pmu.c
++++ b/arch/x86/xen/pmu.c
+@@ -506,10 +506,7 @@ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id)
+ return ret;
+ }
+
+-bool is_xen_pmu(int cpu)
+-{
+- return (get_xenpmu_data() != NULL);
+-}
++bool is_xen_pmu;
+
+ void xen_pmu_init(int cpu)
+ {
+@@ -520,7 +517,7 @@ void xen_pmu_init(int cpu)
+
+ BUILD_BUG_ON(sizeof(struct xen_pmu_data) > PAGE_SIZE);
+
+- if (xen_hvm_domain())
++ if (xen_hvm_domain() || (cpu != 0 && !is_xen_pmu))
+ return;
+
+ xenpmu_data = (struct xen_pmu_data *)get_zeroed_page(GFP_KERNEL);
+@@ -541,7 +538,8 @@ void xen_pmu_init(int cpu)
+ per_cpu(xenpmu_shared, cpu).xenpmu_data = xenpmu_data;
+ per_cpu(xenpmu_shared, cpu).flags = 0;
+
+- if (cpu == 0) {
++ if (!is_xen_pmu) {
++ is_xen_pmu = true;
+ perf_register_guest_info_callbacks(&xen_guest_cbs);
+ xen_pmu_arch_init();
+ }
+diff --git a/arch/x86/xen/pmu.h b/arch/x86/xen/pmu.h
+index 0e83a160589bc..65c58894fc79f 100644
+--- a/arch/x86/xen/pmu.h
++++ b/arch/x86/xen/pmu.h
+@@ -4,6 +4,8 @@
+
+ #include <xen/interface/xenpmu.h>
+
++extern bool is_xen_pmu;
++
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id);
+ #ifdef CONFIG_XEN_HAVE_VPMU
+ void xen_pmu_init(int cpu);
+@@ -12,7 +14,6 @@ void xen_pmu_finish(int cpu);
+ static inline void xen_pmu_init(int cpu) {}
+ static inline void xen_pmu_finish(int cpu) {}
+ #endif
+-bool is_xen_pmu(int cpu);
+ bool pmu_msr_read(unsigned int msr, uint64_t *val, int *err);
+ bool pmu_msr_write(unsigned int msr, uint32_t low, uint32_t high, int *err);
+ int pmu_apic_update(uint32_t reg);
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 4a6019238ee7d..688aa8b6ae29a 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -129,7 +129,7 @@ int xen_smp_intr_init_pv(unsigned int cpu)
+ per_cpu(xen_irq_work, cpu).irq = rc;
+ per_cpu(xen_irq_work, cpu).name = callfunc_name;
+
+- if (is_xen_pmu(cpu)) {
++ if (is_xen_pmu) {
+ pmu_name = kasprintf(GFP_KERNEL, "pmu%d", cpu);
+ rc = bind_virq_to_irqhandler(VIRQ_XENPMU, cpu,
+ xen_pmu_irq_handler,
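
The Xen PMU hunks replace the per-CPU is_xen_pmu(cpu) query with one global
flag that the first successfully initialized CPU sets, letting secondary CPUs
bail out early when CPU0's init never ran. A minimal sketch of that once-flag
pattern, names illustrative:

#include <stdbool.h>
#include <stdio.h>

static bool pmu_ready;

static void pmu_init(int cpu)
{
        if (cpu != 0 && !pmu_ready)
                return;                 /* CPU0 failed: skip secondaries */

        /* per-CPU setup would go here */

        if (!pmu_ready) {               /* first successful init only */
                pmu_ready = true;
                puts("registered global callbacks once");
        }
}

int main(void)
{
        pmu_init(0);
        pmu_init(1);
        return 0;
}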
+diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
+index bd5aeb7955675..a63eca1266577 100644
+--- a/arch/xtensa/include/asm/pgtable.h
++++ b/arch/xtensa/include/asm/pgtable.h
+@@ -411,6 +411,10 @@ extern void update_mmu_cache(struct vm_area_struct * vma,
+
+ typedef pte_t *pte_addr_t;
+
++void update_mmu_tlb(struct vm_area_struct *vma,
++ unsigned long address, pte_t *ptep);
++#define __HAVE_ARCH_UPDATE_MMU_TLB
++
+ #endif /* !defined (__ASSEMBLY__) */
+
+ #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index 37d3e9887fe7b..d68987d703e75 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -246,8 +246,8 @@ extern unsigned long __get_wchan(struct task_struct *p);
+
+ #define xtensa_set_sr(x, sr) \
+ ({ \
+- unsigned int v = (unsigned int)(x); \
+- __asm__ __volatile__ ("wsr %0, "__stringify(sr) :: "a"(v)); \
++ __asm__ __volatile__ ("wsr %0, "__stringify(sr) :: \
++ "a"((unsigned int)(x))); \
+ })
+
+ #define xtensa_get_sr(sr) \
+diff --git a/arch/xtensa/kernel/jump_label.c b/arch/xtensa/kernel/jump_label.c
+index 61cf6497a646b..0dde21e0d3de4 100644
+--- a/arch/xtensa/kernel/jump_label.c
++++ b/arch/xtensa/kernel/jump_label.c
+@@ -61,7 +61,7 @@ static void patch_text(unsigned long addr, const void *data, size_t sz)
+ .data = data,
+ };
+ stop_machine_cpuslocked(patch_text_stop_machine,
+- &patch, NULL);
++ &patch, cpu_online_mask);
+ } else {
+ unsigned long flags;
+
+diff --git a/arch/xtensa/kernel/mxhead.S b/arch/xtensa/kernel/mxhead.S
+index 9f38437427264..b702c0908b1f6 100644
+--- a/arch/xtensa/kernel/mxhead.S
++++ b/arch/xtensa/kernel/mxhead.S
+@@ -37,11 +37,13 @@ _SetupOCD:
+ * xt-gdb to single step via DEBUG exceptions received directly
+ * by ocd.
+ */
++#if XCHAL_HAVE_WINDOWED
+ movi a1, 1
+ movi a0, 0
+ wsr a1, windowstart
+ wsr a0, windowbase
+ rsync
++#endif
+
+ movi a1, LOCKLEVEL
+ wsr a1, ps
+diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
+index f436cf2efd8b7..27a477dae2322 100644
+--- a/arch/xtensa/mm/tlb.c
++++ b/arch/xtensa/mm/tlb.c
+@@ -162,6 +162,12 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
+ }
+ }
+
++void update_mmu_tlb(struct vm_area_struct *vma,
++ unsigned long address, pte_t *ptep)
++{
++ local_flush_tlb_page(vma, address);
++}
++
+ #ifdef CONFIG_DEBUG_TLB_SANITY
+
+ static unsigned get_pte_for_vaddr(unsigned vaddr)
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 24a5c5329bcd0..809bc612d96b3 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -646,6 +646,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ struct bfq_entity *entity = &bfqq->entity;
+
++ /*
++	 * oom_bfqq is not allowed to move; it will hold a ref to root_group
++	 * until elevator exit.
++ */
++ if (bfqq == &bfqd->oom_bfqq)
++ return;
+ /*
+ * Get extra reference to prevent bfqq from being freed in
+ * next possible expire or deactivate.
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 36a66e97e3c28..1dff82d34b44b 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2782,6 +2782,15 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ * are likely to increase the throughput.
+ */
+ bfqq->new_bfqq = new_bfqq;
++ /*
++ * The above assignment schedules the following redirections:
++ * each time some I/O for bfqq arrives, the process that
++ * generated that I/O is disassociated from bfqq and
++	 * associated with new_bfqq. Here we increase new_bfqq->ref
++ * in advance, adding the number of processes that are
++ * expected to be associated with new_bfqq as they happen to
++ * issue I/O.
++ */
+ new_bfqq->ref += process_refs;
+ return new_bfqq;
+ }
+@@ -2844,6 +2853,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ struct bfq_queue *in_service_bfqq, *new_bfqq;
+
++ /* if a merge has already been setup, then proceed with that first */
++ if (bfqq->new_bfqq)
++ return bfqq->new_bfqq;
++
+ /*
+ * Check delayed stable merge for rotational or non-queueing
+ * devs. For this branch to be executed, bfqq must not be
+@@ -2945,9 +2958,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ if (bfq_too_late_for_merging(bfqq))
+ return NULL;
+
+- if (bfqq->new_bfqq)
+- return bfqq->new_bfqq;
+-
+ if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
+ return NULL;
+
+@@ -5181,7 +5191,7 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
+ struct request *rq;
+ struct bfq_queue *in_serv_queue;
+- bool waiting_rq, idle_timer_disabled;
++ bool waiting_rq, idle_timer_disabled = false;
+
+ spin_lock_irq(&bfqd->lock);
+
+@@ -5189,14 +5199,15 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+ waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
+
+ rq = __bfq_dispatch_request(hctx);
+-
+- idle_timer_disabled =
+- waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
++ if (in_serv_queue == bfqd->in_service_queue) {
++ idle_timer_disabled =
++ waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
++ }
+
+ spin_unlock_irq(&bfqd->lock);
+-
+- bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
+- idle_timer_disabled);
++ bfq_update_dispatch_stats(hctx->queue, rq,
++ idle_timer_disabled ? in_serv_queue : NULL,
++ idle_timer_disabled);
+
+ return rq;
+ }
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index b74cc0da118ec..709b901de3ca9 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -519,7 +519,7 @@ unsigned short bfq_ioprio_to_weight(int ioprio)
+ static unsigned short bfq_weight_to_ioprio(int weight)
+ {
+ return max_t(int, 0,
+- IOPRIO_NR_LEVELS * BFQ_WEIGHT_CONVERSION_COEFF - weight);
++ IOPRIO_NR_LEVELS - weight / BFQ_WEIGHT_CONVERSION_COEFF);
+ }
+
+ static void bfq_get_entity(struct bfq_entity *entity)
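
The bfq-wf2q.c one-liner fixes the inverse of bfq_ioprio_to_weight(): since
weight = (IOPRIO_NR_LEVELS - ioprio) * BFQ_WEIGHT_CONVERSION_COEFF, inverting
it must divide by the coefficient rather than multiply by it. A round-trip
check with the kernel's constants (8 priority levels, coefficient 10), as a
standalone sketch:

#include <stdio.h>

#define IOPRIO_NR_LEVELS 8
#define COEFF 10        /* BFQ_WEIGHT_CONVERSION_COEFF */

static int ioprio_to_weight(int ioprio)
{
        return (IOPRIO_NR_LEVELS - ioprio) * COEFF;
}

static int weight_to_ioprio_old(int weight)     /* buggy inverse */
{
        return IOPRIO_NR_LEVELS * COEFF - weight;
}

static int weight_to_ioprio_new(int weight)     /* fixed inverse */
{
        return IOPRIO_NR_LEVELS - weight / COEFF;
}

int main(void)
{
        int w = ioprio_to_weight(4);    /* w == 40 */

        /* prints "w=40 old=40 new=4": only the fix recovers ioprio 4 */
        printf("w=%d old=%d new=%d\n", w,
               weight_to_ioprio_old(w), weight_to_ioprio_new(w));
        return 0;
}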
+diff --git a/block/bio.c b/block/bio.c
+index 4312a8085396b..1be1e360967d0 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1486,8 +1486,7 @@ again:
+ if (!bio_integrity_endio(bio))
+ return;
+
+- if (bio->bi_bdev && bio_flagged(bio, BIO_TRACKED))
+- rq_qos_done_bio(bdev_get_queue(bio->bi_bdev), bio);
++ rq_qos_done_bio(bio);
+
+ if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
+ trace_block_bio_complete(bdev_get_queue(bio->bi_bdev), bio);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 650f7e27989f1..87a1c0c3fa401 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -857,11 +857,11 @@ static void blkcg_fill_root_iostats(void)
+ blk_queue_root_blkg(bdev_get_queue(bdev));
+ struct blkg_iostat tmp;
+ int cpu;
++ unsigned long flags;
+
+ memset(&tmp, 0, sizeof(tmp));
+ for_each_possible_cpu(cpu) {
+ struct disk_stats *cpu_dkstats;
+- unsigned long flags;
+
+ cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
+ tmp.ios[BLKG_IOSTAT_READ] +=
+@@ -877,11 +877,11 @@ static void blkcg_fill_root_iostats(void)
+ cpu_dkstats->sectors[STAT_WRITE] << 9;
+ tmp.bytes[BLKG_IOSTAT_DISCARD] +=
+ cpu_dkstats->sectors[STAT_DISCARD] << 9;
+-
+- flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
+- blkg_iostat_set(&blkg->iostat.cur, &tmp);
+- u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ }
++
++ flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
++ blkg_iostat_set(&blkg->iostat.cur, &tmp);
++ u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ }
+ }
+
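The blk-cgroup.c hunk hoists the u64_stats update out of the per-CPU loop:
totals are accumulated into a local temporary and then published exactly once
under the writer-side sync. A sketch of that shape, with a pthread mutex
standing in for u64_stats_update_begin_irqsave()/end_irqrestore():

#include <pthread.h>
#include <stdio.h>

#define NCPU 4

static unsigned long percpu_bytes[NCPU] = { 10, 20, 30, 40 };
static unsigned long published;
static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;

static void fill_iostats(void)
{
        unsigned long tmp = 0;

        for (int cpu = 0; cpu < NCPU; cpu++)
                tmp += percpu_bytes[cpu];       /* lock-free accumulation */

        pthread_mutex_lock(&stats_lock);        /* publish exactly once */
        published = tmp;
        pthread_mutex_unlock(&stats_lock);
}

int main(void)
{
        fill_iostats();
        printf("%lu\n", published);
        return 0;
}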
+diff --git a/block/blk-ioc.c b/block/blk-ioc.c
+index 11f49f78db32b..df9cfe4ca5328 100644
+--- a/block/blk-ioc.c
++++ b/block/blk-ioc.c
+@@ -280,7 +280,6 @@ int set_task_ioprio(struct task_struct *task, int ioprio)
+
+ task_lock(task);
+ if (task->flags & PF_EXITING) {
+- err = -ESRCH;
+ kmem_cache_free(iocontext_cachep, ioc);
+ goto out;
+ }
+@@ -292,7 +291,7 @@ int set_task_ioprio(struct task_struct *task, int ioprio)
+ task->io_context->ioprio = ioprio;
+ out:
+ task_unlock(task);
+- return err;
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(set_task_ioprio);
+
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 6593c7123b97e..24d70e0555ddb 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -598,7 +598,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ int inflight = 0;
+
+ blkg = bio->bi_blkg;
+- if (!blkg || !bio_flagged(bio, BIO_TRACKED))
++ if (!blkg || !bio_flagged(bio, BIO_QOS_THROTTLED))
+ return;
+
+ iolat = blkg_to_lat(bio->bi_blkg);
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 4de34a332c9fd..ea6968313b4a8 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -9,6 +9,7 @@
+ #include <linux/blk-integrity.h>
+ #include <linux/scatterlist.h>
+ #include <linux/part_stat.h>
++#include <linux/blk-cgroup.h>
+
+ #include <trace/events/block.h>
+
+@@ -368,8 +369,6 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
+ trace_block_split(split, (*bio)->bi_iter.bi_sector);
+ submit_bio_noacct(*bio);
+ *bio = split;
+-
+- blk_throtl_charge_bio_split(*bio);
+ }
+ }
+
+@@ -600,6 +599,9 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
+ static inline int ll_new_hw_segment(struct request *req, struct bio *bio,
+ unsigned int nr_phys_segs)
+ {
++ if (!blk_cgroup_mergeable(req, bio))
++ goto no_merge;
++
+ if (blk_integrity_merge_bio(req->q, req, bio) == false)
+ goto no_merge;
+
+@@ -696,6 +698,9 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
+ if (total_phys_segments > blk_rq_get_max_segments(req))
+ return 0;
+
++ if (!blk_cgroup_mergeable(req, next->bio))
++ return 0;
++
+ if (blk_integrity_merge_rq(q, req, next) == false)
+ return 0;
+
+@@ -904,6 +909,10 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio)
+ if (bio_data_dir(bio) != rq_data_dir(rq))
+ return false;
+
++ /* don't merge across cgroup boundaries */
++ if (!blk_cgroup_mergeable(rq, bio))
++ return false;
++
+ /* only merge integrity protected bio into ditto rq */
+ if (blk_integrity_merge_bio(rq->q, rq, bio) == false)
+ return false;
+@@ -1089,12 +1098,20 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
+ if (!plug || rq_list_empty(plug->mq_list))
+ return false;
+
+- /* check the previously added entry for a quick merge attempt */
+- rq = rq_list_peek(&plug->mq_list);
+- if (rq->q == q) {
+- if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+- BIO_MERGE_OK)
+- return true;
++ rq_list_for_each(&plug->mq_list, rq) {
++ if (rq->q == q) {
++ if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
++ BIO_MERGE_OK)
++ return true;
++ break;
++ }
++
++ /*
++		 * Only keep iterating the plug list for merges if we have
++		 * multiple queues
++ */
++ if (!plug->multiple_queues)
++ break;
+ }
+ return false;
+ }
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 55488ba978232..80e0eb26b697c 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -180,11 +180,18 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+
+ static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
+ {
++ unsigned long end = jiffies + HZ;
+ int ret;
+
+ do {
+ ret = __blk_mq_do_dispatch_sched(hctx);
+- } while (ret == 1);
++ if (ret != 1)
++ break;
++ if (need_resched() || time_is_before_jiffies(end)) {
++ blk_mq_delay_run_hw_queue(hctx, 0);
++ break;
++ }
++ } while (1);
+
+ return ret;
+ }
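
The blk-mq-sched.c hunk bounds the previously unbounded dispatch loop with a
roughly one-second jiffies budget and a need_resched() check, punting whatever
remains to blk_mq_delay_run_hw_queue(). A sketch of the time-budgeted loop,
with clock() standing in for jiffies and defer() for the delayed queue run:

#include <stdio.h>
#include <time.h>

static void defer(void)
{
        puts("deferred remainder to the workqueue");
}

static void do_dispatch(int work)
{
        clock_t end = clock() + CLOCKS_PER_SEC;         /* ~1s budget */
        int done = 0;

        while (work > 0) {
                work--;
                done++;                                 /* dispatch one */
                if (work && clock() > end) {
                        defer();                        /* punt the rest */
                        break;
                }
        }
        printf("dispatched %d, left %d\n", done, work);
}

int main(void)
{
        do_dispatch(5);
        return 0;
}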
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9a9185a0a2d13..cb50097366cd4 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2561,13 +2561,36 @@ static void __blk_mq_flush_plug_list(struct request_queue *q,
+ q->mq_ops->queue_rqs(&plug->mq_list);
+ }
+
++static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
++{
++ struct blk_mq_hw_ctx *this_hctx = NULL;
++ struct blk_mq_ctx *this_ctx = NULL;
++ struct request *requeue_list = NULL;
++ unsigned int depth = 0;
++ LIST_HEAD(list);
++
++ do {
++ struct request *rq = rq_list_pop(&plug->mq_list);
++
++ if (!this_hctx) {
++ this_hctx = rq->mq_hctx;
++ this_ctx = rq->mq_ctx;
++ } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
++ rq_list_add(&requeue_list, rq);
++ continue;
++ }
++ list_add_tail(&rq->queuelist, &list);
++ depth++;
++ } while (!rq_list_empty(plug->mq_list));
++
++ plug->mq_list = requeue_list;
++ trace_block_unplug(this_hctx->queue, depth, !from_sched);
++ blk_mq_sched_insert_requests(this_hctx, this_ctx, &list, from_sched);
++}
++
+ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ {
+- struct blk_mq_hw_ctx *this_hctx;
+- struct blk_mq_ctx *this_ctx;
+ struct request *rq;
+- unsigned int depth;
+- LIST_HEAD(list);
+
+ if (rq_list_empty(plug->mq_list))
+ return;
+@@ -2603,35 +2626,9 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ return;
+ }
+
+- this_hctx = NULL;
+- this_ctx = NULL;
+- depth = 0;
+ do {
+- rq = rq_list_pop(&plug->mq_list);
+-
+- if (!this_hctx) {
+- this_hctx = rq->mq_hctx;
+- this_ctx = rq->mq_ctx;
+- } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
+- trace_block_unplug(this_hctx->queue, depth,
+- !from_schedule);
+- blk_mq_sched_insert_requests(this_hctx, this_ctx,
+- &list, from_schedule);
+- depth = 0;
+- this_hctx = rq->mq_hctx;
+- this_ctx = rq->mq_ctx;
+-
+- }
+-
+- list_add(&rq->queuelist, &list);
+- depth++;
++ blk_mq_dispatch_plug_list(plug, from_schedule);
+ } while (!rq_list_empty(plug->mq_list));
+-
+- if (!list_empty(&list)) {
+- trace_block_unplug(this_hctx->queue, depth, !from_schedule);
+- blk_mq_sched_insert_requests(this_hctx, this_ctx, &list,
+- from_schedule);
+- }
+ }
+
+ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
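
The blk_mq_flush_plug_list() refactor peels one (hctx, ctx)-homogeneous batch
off the plug list per pass, requeueing mismatched requests for the next pass
instead of flushing mid-scan as the old loop did. A sketch of that
partition-by-key loop, using small arrays in place of the kernel's rq_list:

#include <stdio.h>

static void dispatch_batches(const int *key, int n)
{
        int pending[16], npend = n;

        for (int i = 0; i < n; i++)
                pending[i] = key[i];

        while (npend) {
                int requeue[16], nreq = 0;
                int batch_key = pending[0], depth = 0;

                for (int i = 0; i < npend; i++) {
                        if (pending[i] == batch_key)
                                depth++;                /* joins this batch */
                        else
                                requeue[nreq++] = pending[i];
                }
                printf("batch key=%d depth=%d\n", batch_key, depth);

                for (int i = 0; i < nreq; i++)          /* next pass */
                        pending[i] = requeue[i];
                npend = nreq;
        }
}

int main(void)
{
        int keys[] = { 1, 1, 2, 1, 2 };

        dispatch_batches(keys, 5);      /* two batches: key 1 x3, key 2 x2 */
        return 0;
}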
+diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
+index 3cfbc8668cba9..68267007da1c6 100644
+--- a/block/blk-rq-qos.h
++++ b/block/blk-rq-qos.h
+@@ -177,20 +177,20 @@ static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
+ __rq_qos_requeue(q->rq_qos, rq);
+ }
+
+-static inline void rq_qos_done_bio(struct request_queue *q, struct bio *bio)
++static inline void rq_qos_done_bio(struct bio *bio)
+ {
+- if (q->rq_qos)
+- __rq_qos_done_bio(q->rq_qos, bio);
++ if (bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) ||
++ bio_flagged(bio, BIO_QOS_MERGED))) {
++ struct request_queue *q = bdev_get_queue(bio->bi_bdev);
++ if (q->rq_qos)
++ __rq_qos_done_bio(q->rq_qos, bio);
++ }
+ }
+
+ static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
+ {
+- /*
+- * BIO_TRACKED lets controllers know that a bio went through the
+- * normal rq_qos path.
+- */
+ if (q->rq_qos) {
+- bio_set_flag(bio, BIO_TRACKED);
++ bio_set_flag(bio, BIO_QOS_THROTTLED);
+ __rq_qos_throttle(q->rq_qos, bio);
+ }
+ }
+@@ -205,8 +205,10 @@ static inline void rq_qos_track(struct request_queue *q, struct request *rq,
+ static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
+ struct bio *bio)
+ {
+- if (q->rq_qos)
++ if (q->rq_qos) {
++ bio_set_flag(bio, BIO_QOS_MERGED);
+ __rq_qos_merge(q->rq_qos, rq, bio);
++ }
+ }
+
+ static inline void rq_qos_queue_depth_changed(struct request_queue *q)
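
The blk-rq-qos.h hunk splits the old single BIO_TRACKED flag into
BIO_QOS_THROTTLED and BIO_QOS_MERGED so completion can tell which rq_qos path
a bio actually took and skip the callback entirely otherwise. A toy model of
the flag test:

#include <stdio.h>

#define QOS_THROTTLED (1u << 0)
#define QOS_MERGED    (1u << 1)

static void qos_done(unsigned int flags)
{
        if (flags & (QOS_THROTTLED | QOS_MERGED))
                printf("qos callback (flags=0x%x)\n", flags);
        else
                puts("skipped: bio never entered rq_qos");
}

int main(void)
{
        qos_done(QOS_THROTTLED);
        qos_done(QOS_MERGED);
        qos_done(0);
        return 0;
}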
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 9f32882ceb2f6..7923f49f1046f 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -954,9 +954,6 @@ void blk_unregister_queue(struct gendisk *disk)
+ */
+ if (queue_is_mq(q))
+ blk_mq_unregister_dev(disk_to_dev(disk), q);
+-
+- kobject_uevent(&q->kobj, KOBJ_REMOVE);
+- kobject_del(&q->kobj);
+ blk_trace_remove_sysfs(disk_to_dev(disk));
+
+ mutex_lock(&q->sysfs_lock);
+@@ -964,6 +961,11 @@ void blk_unregister_queue(struct gendisk *disk)
+ elv_unregister_queue(q);
+ disk_unregister_independent_access_ranges(disk);
+ mutex_unlock(&q->sysfs_lock);
++
++ /* Now that we've deleted all child objects, we can delete the queue. */
++ kobject_uevent(&q->kobj, KOBJ_REMOVE);
++ kobject_del(&q->kobj);
++
+ mutex_unlock(&q->sysfs_dir_lock);
+
+ kobject_put(&disk_to_dev(disk)->kobj);
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 7c462c006b269..87769b337fc55 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -808,7 +808,8 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
+ unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
+ unsigned int bio_size = throtl_bio_data_size(bio);
+
+- if (bps_limit == U64_MAX) {
++ /* no need to throttle if this bio's bytes have been accounted */
++ if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED)) {
+ if (wait)
+ *wait = 0;
+ return true;
+@@ -920,9 +921,12 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
+ unsigned int bio_size = throtl_bio_data_size(bio);
+
+ /* Charge the bio to the group */
+- tg->bytes_disp[rw] += bio_size;
++ if (!bio_flagged(bio, BIO_THROTTLED)) {
++ tg->bytes_disp[rw] += bio_size;
++ tg->last_bytes_disp[rw] += bio_size;
++ }
++
+ tg->io_disp[rw]++;
+- tg->last_bytes_disp[rw] += bio_size;
+ tg->last_io_disp[rw]++;
+
+ /*
+diff --git a/block/blk-throttle.h b/block/blk-throttle.h
+index 175f03abd9e41..cb43f4417d6ea 100644
+--- a/block/blk-throttle.h
++++ b/block/blk-throttle.h
+@@ -170,8 +170,6 @@ static inline bool blk_throtl_bio(struct bio *bio)
+ {
+ struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
+
+- if (bio_flagged(bio, BIO_THROTTLED))
+- return false;
+ if (!tg->has_rules[bio_data_dir(bio)])
+ return false;
+
+diff --git a/block/genhd.c b/block/genhd.c
+index 9eca1f7d35c97..9d9d702d07787 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -330,7 +330,7 @@ int blk_alloc_ext_minor(void)
+ {
+ int idx;
+
+- idx = ida_alloc_range(&ext_devt_ida, 0, NR_EXT_DEVT, GFP_KERNEL);
++ idx = ida_alloc_range(&ext_devt_ida, 0, NR_EXT_DEVT - 1, GFP_KERNEL);
+ if (idx == -ENOSPC)
+ return -EBUSY;
+ return idx;
+@@ -927,12 +927,17 @@ ssize_t part_stat_show(struct device *dev,
+ struct disk_stats stat;
+ unsigned int inflight;
+
+- part_stat_read_all(bdev, &stat);
+ if (queue_is_mq(q))
+ inflight = blk_mq_in_flight(q, bdev);
+ else
+ inflight = part_in_flight(bdev);
+
++ if (inflight) {
++ part_stat_lock();
++ update_io_ticks(bdev, jiffies, true);
++ part_stat_unlock();
++ }
++ part_stat_read_all(bdev, &stat);
+ return sprintf(buf,
+ "%8lu %8lu %8llu %8u "
+ "%8lu %8lu %8llu %8u "
+@@ -1188,12 +1193,17 @@ static int diskstats_show(struct seq_file *seqf, void *v)
+ xa_for_each(&gp->part_tbl, idx, hd) {
+ if (bdev_is_partition(hd) && !bdev_nr_sectors(hd))
+ continue;
+- part_stat_read_all(hd, &stat);
+ if (queue_is_mq(gp->queue))
+ inflight = blk_mq_in_flight(gp->queue, hd);
+ else
+ inflight = part_in_flight(hd);
+
++ if (inflight) {
++ part_stat_lock();
++ update_io_ticks(hd, jiffies, true);
++ part_stat_unlock();
++ }
++ part_stat_read_all(hd, &stat);
+ seq_printf(seqf, "%4d %7d %pg "
+ "%lu %lu %lu %u "
+ "%lu %lu %lu %u "
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 442765219c375..2cca54c59fecd 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1847,6 +1847,7 @@ config CRYPTO_JITTERENTROPY
+
+ config CRYPTO_KDF800108_CTR
+ tristate
++ select CRYPTO_HMAC
+ select CRYPTO_SHA256
+
+ config CRYPTO_USER_API
+diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
+index 0b4d07aa88111..f94a1d1ad3a6c 100644
+--- a/crypto/asymmetric_keys/pkcs7_verify.c
++++ b/crypto/asymmetric_keys/pkcs7_verify.c
+@@ -174,12 +174,6 @@ static int pkcs7_find_key(struct pkcs7_message *pkcs7,
+ pr_devel("Sig %u: Found cert serial match X.509[%u]\n",
+ sinfo->index, certix);
+
+- if (strcmp(x509->pub->pkey_algo, sinfo->sig->pkey_algo) != 0) {
+- pr_warn("Sig %u: X.509 algo and PKCS#7 sig algo don't match\n",
+- sinfo->index);
+- continue;
+- }
+-
+ sinfo->signer = x509;
+ return 0;
+ }
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index 4fefb219bfdc8..7c9e6be35c30c 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -60,39 +60,83 @@ static void public_key_destroy(void *payload0, void *payload3)
+ }
+
+ /*
+- * Determine the crypto algorithm name.
++ * Given a public_key, and an encoding and hash_algo to be used for signing
++ * and/or verification with that key, determine the name of the corresponding
++ * akcipher algorithm. Also check that encoding and hash_algo are allowed.
+ */
+-static
+-int software_key_determine_akcipher(const char *encoding,
+- const char *hash_algo,
+- const struct public_key *pkey,
+- char alg_name[CRYPTO_MAX_ALG_NAME])
++static int
++software_key_determine_akcipher(const struct public_key *pkey,
++ const char *encoding, const char *hash_algo,
++ char alg_name[CRYPTO_MAX_ALG_NAME])
+ {
+ int n;
+
+- if (strcmp(encoding, "pkcs1") == 0) {
+- /* The data wangled by the RSA algorithm is typically padded
+- * and encoded in some manner, such as EMSA-PKCS1-1_5 [RFC3447
+- * sec 8.2].
++ if (!encoding)
++ return -EINVAL;
++
++ if (strcmp(pkey->pkey_algo, "rsa") == 0) {
++ /*
++ * RSA signatures usually use EMSA-PKCS1-1_5 [RFC3447 sec 8.2].
++ */
++ if (strcmp(encoding, "pkcs1") == 0) {
++ if (!hash_algo)
++ n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
++ "pkcs1pad(%s)",
++ pkey->pkey_algo);
++ else
++ n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
++ "pkcs1pad(%s,%s)",
++ pkey->pkey_algo, hash_algo);
++ return n >= CRYPTO_MAX_ALG_NAME ? -EINVAL : 0;
++ }
++ if (strcmp(encoding, "raw") != 0)
++ return -EINVAL;
++ /*
++ * Raw RSA cannot differentiate between different hash
++ * algorithms.
++ */
++ if (hash_algo)
++ return -EINVAL;
++ } else if (strncmp(pkey->pkey_algo, "ecdsa", 5) == 0) {
++ if (strcmp(encoding, "x962") != 0)
++ return -EINVAL;
++ /*
++ * ECDSA signatures are taken over a raw hash, so they don't
++ * differentiate between different hash algorithms. That means
++ * that the verifier should hard-code a specific hash algorithm.
++ * Unfortunately, in practice ECDSA is used with multiple SHAs,
++ * so we have to allow all of them and not just one.
+ */
+ if (!hash_algo)
+- n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
+- "pkcs1pad(%s)",
+- pkey->pkey_algo);
+- else
+- n = snprintf(alg_name, CRYPTO_MAX_ALG_NAME,
+- "pkcs1pad(%s,%s)",
+- pkey->pkey_algo, hash_algo);
+- return n >= CRYPTO_MAX_ALG_NAME ? -EINVAL : 0;
+- }
+-
+- if (strcmp(encoding, "raw") == 0 ||
+- strcmp(encoding, "x962") == 0) {
+- strcpy(alg_name, pkey->pkey_algo);
+- return 0;
++ return -EINVAL;
++ if (strcmp(hash_algo, "sha1") != 0 &&
++ strcmp(hash_algo, "sha224") != 0 &&
++ strcmp(hash_algo, "sha256") != 0 &&
++ strcmp(hash_algo, "sha384") != 0 &&
++ strcmp(hash_algo, "sha512") != 0)
++ return -EINVAL;
++ } else if (strcmp(pkey->pkey_algo, "sm2") == 0) {
++ if (strcmp(encoding, "raw") != 0)
++ return -EINVAL;
++ if (!hash_algo)
++ return -EINVAL;
++ if (strcmp(hash_algo, "sm3") != 0)
++ return -EINVAL;
++ } else if (strcmp(pkey->pkey_algo, "ecrdsa") == 0) {
++ if (strcmp(encoding, "raw") != 0)
++ return -EINVAL;
++ if (!hash_algo)
++ return -EINVAL;
++ if (strcmp(hash_algo, "streebog256") != 0 &&
++ strcmp(hash_algo, "streebog512") != 0)
++ return -EINVAL;
++ } else {
++ /* Unknown public key algorithm */
++ return -ENOPKG;
+ }
+-
+- return -ENOPKG;
++ if (strscpy(alg_name, pkey->pkey_algo, CRYPTO_MAX_ALG_NAME) < 0)
++ return -EINVAL;
++ return 0;
+ }
+
+ static u8 *pkey_pack_u32(u8 *dst, u32 val)
+@@ -113,9 +157,8 @@ static int software_key_query(const struct kernel_pkey_params *params,
+ u8 *key, *ptr;
+ int ret, len;
+
+- ret = software_key_determine_akcipher(params->encoding,
+- params->hash_algo,
+- pkey, alg_name);
++ ret = software_key_determine_akcipher(pkey, params->encoding,
++ params->hash_algo, alg_name);
+ if (ret < 0)
+ return ret;
+
+@@ -179,9 +222,8 @@ static int software_key_eds_op(struct kernel_pkey_params *params,
+
+ pr_devel("==>%s()\n", __func__);
+
+- ret = software_key_determine_akcipher(params->encoding,
+- params->hash_algo,
+- pkey, alg_name);
++ ret = software_key_determine_akcipher(pkey, params->encoding,
++ params->hash_algo, alg_name);
+ if (ret < 0)
+ return ret;
+
+@@ -325,9 +367,23 @@ int public_key_verify_signature(const struct public_key *pkey,
+ BUG_ON(!sig);
+ BUG_ON(!sig->s);
+
+- ret = software_key_determine_akcipher(sig->encoding,
+- sig->hash_algo,
+- pkey, alg_name);
++ /*
++ * If the signature specifies a public key algorithm, it *must* match
++ * the key's actual public key algorithm.
++ *
++ * Small exception: ECDSA signatures don't specify the curve, but ECDSA
++ * keys do. So the strings can mismatch slightly in that case:
++ * "ecdsa-nist-*" for the key, but "ecdsa" for the signature.
++ */
++ if (sig->pkey_algo) {
++ if (strcmp(pkey->pkey_algo, sig->pkey_algo) != 0 &&
++ (strncmp(pkey->pkey_algo, "ecdsa-", 6) != 0 ||
++ strcmp(sig->pkey_algo, "ecdsa") != 0))
++ return -EKEYREJECTED;
++ }
++
++ ret = software_key_determine_akcipher(pkey, sig->encoding,
++ sig->hash_algo, alg_name);
+ if (ret < 0)
+ return ret;
+
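The rewritten software_key_determine_akcipher() whitelists the (algorithm,
encoding, hash) combinations each key type actually supports instead of
accepting arbitrary strings. A much-reduced table-driven sketch of the same
idea; the table lists one representative hash per algorithm, whereas the code
above accepts several:

#include <errno.h>
#include <stdio.h>
#include <string.h>

struct rule {
        const char *algo, *encoding, *hash;
};

static const struct rule allowed[] = {
        { "rsa",    "pkcs1", "sha256"      },
        { "ecdsa",  "x962",  "sha256"      },
        { "sm2",    "raw",   "sm3"         },
        { "ecrdsa", "raw",   "streebog256" },
};

static int check(const char *algo, const char *enc, const char *hash)
{
        for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
                if (!strcmp(algo, allowed[i].algo) &&
                    !strcmp(enc, allowed[i].encoding) &&
                    !strcmp(hash, allowed[i].hash))
                        return 0;
        }
        return -EINVAL; /* an unknown algorithm would be -ENOPKG above */
}

int main(void)
{
        printf("%d %d\n", check("rsa", "pkcs1", "sha256"),
               check("rsa", "raw", "sha256"));          /* 0 and -EINVAL */
        return 0;
}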
+diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
+index fe14cae115b51..71cc1738fbfd2 100644
+--- a/crypto/asymmetric_keys/x509_public_key.c
++++ b/crypto/asymmetric_keys/x509_public_key.c
+@@ -128,12 +128,6 @@ int x509_check_for_self_signed(struct x509_certificate *cert)
+ goto out;
+ }
+
+- ret = -EKEYREJECTED;
+- if (strcmp(cert->pub->pkey_algo, cert->sig->pkey_algo) != 0 &&
+- (strncmp(cert->pub->pkey_algo, "ecdsa-", 6) != 0 ||
+- strcmp(cert->sig->pkey_algo, "ecdsa") != 0))
+- goto out;
+-
+ ret = public_key_verify_signature(cert->pub, cert->sig);
+ if (ret < 0) {
+ if (ret == -ENOPKG) {
+diff --git a/crypto/authenc.c b/crypto/authenc.c
+index 670bf1a01d00e..17f674a7cdff5 100644
+--- a/crypto/authenc.c
++++ b/crypto/authenc.c
+@@ -253,7 +253,7 @@ static int crypto_authenc_decrypt_tail(struct aead_request *req,
+ dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
+
+ skcipher_request_set_tfm(skreq, ctx->enc);
+- skcipher_request_set_callback(skreq, aead_request_flags(req),
++ skcipher_request_set_callback(skreq, flags,
+ req->base.complete, req->base.data);
+ skcipher_request_set_crypt(skreq, src, dst,
+ req->cryptlen - authsize, req->iv);
+diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
+index 8ac3e73e8ea65..9d804831c8b3f 100644
+--- a/crypto/rsa-pkcs1pad.c
++++ b/crypto/rsa-pkcs1pad.c
+@@ -476,6 +476,8 @@ static int pkcs1pad_verify_complete(struct akcipher_request *req, int err)
+ pos++;
+
+ if (digest_info) {
++ if (digest_info->size > dst_len - pos)
++ goto done;
+ if (crypto_memneq(out_buf + pos, digest_info->data,
+ digest_info->size))
+ goto done;
+@@ -495,7 +497,7 @@ static int pkcs1pad_verify_complete(struct akcipher_request *req, int err)
+ sg_nents_for_len(req->src,
+ req->src_len + req->dst_len),
+ req_ctx->out_buf + ctx->key_size,
+- req->dst_len, ctx->key_size);
++ req->dst_len, req->src_len);
+ /* Do the actual verification step. */
+ if (memcmp(req_ctx->out_buf + ctx->key_size, out_buf + pos,
+ req->dst_len) != 0)
+@@ -538,7 +540,7 @@ static int pkcs1pad_verify(struct akcipher_request *req)
+
+ if (WARN_ON(req->dst) ||
+ WARN_ON(!req->dst_len) ||
+- !ctx->key_size || req->src_len < ctx->key_size)
++ !ctx->key_size || req->src_len != ctx->key_size)
+ return -EINVAL;
+
+ req_ctx->out_buf = kmalloc(ctx->key_size + req->dst_len, GFP_KERNEL);
+@@ -621,6 +623,11 @@ static int pkcs1pad_create(struct crypto_template *tmpl, struct rtattr **tb)
+
+ rsa_alg = crypto_spawn_akcipher_alg(&ctx->spawn);
+
++ if (strcmp(rsa_alg->base.cra_name, "rsa") != 0) {
++ err = -EINVAL;
++ goto err_free_inst;
++ }
++
+ err = -ENAMETOOLONG;
+ hash_name = crypto_attr_alg_name(tb[2]);
+ if (IS_ERR(hash_name)) {
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 6c12f30dbdd6d..63c85b9e64e08 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -466,3 +466,4 @@ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("XTS block cipher mode");
+ MODULE_ALIAS_CRYPTO("xts");
+ MODULE_IMPORT_NS(CRYPTO_INTERNAL);
++MODULE_SOFTDEP("pre: ecb");
+diff --git a/drivers/acpi/acpica/nswalk.c b/drivers/acpi/acpica/nswalk.c
+index 915c2433463d7..e7c30ce06e189 100644
+--- a/drivers/acpi/acpica/nswalk.c
++++ b/drivers/acpi/acpica/nswalk.c
+@@ -169,6 +169,9 @@ acpi_ns_walk_namespace(acpi_object_type type,
+
+ if (start_node == ACPI_ROOT_OBJECT) {
+ start_node = acpi_gbl_root_node;
++ if (!start_node) {
++ return_ACPI_STATUS(AE_NO_NAMESPACE);
++ }
+ }
+
+ /* Null child means "get first node" */
+diff --git a/drivers/acpi/apei/bert.c b/drivers/acpi/apei/bert.c
+index 19e50fcbf4d6f..598fd19b65fa4 100644
+--- a/drivers/acpi/apei/bert.c
++++ b/drivers/acpi/apei/bert.c
+@@ -29,6 +29,7 @@
+
+ #undef pr_fmt
+ #define pr_fmt(fmt) "BERT: " fmt
++#define ACPI_BERT_PRINT_MAX_LEN 1024
+
+ static int bert_disable;
+
+@@ -58,8 +59,11 @@ static void __init bert_print_all(struct acpi_bert_region *region,
+ }
+
+ pr_info_once("Error records from previous boot:\n");
+-
+- cper_estatus_print(KERN_INFO HW_ERR, estatus);
++ if (region_len < ACPI_BERT_PRINT_MAX_LEN)
++ cper_estatus_print(KERN_INFO HW_ERR, estatus);
++ else
++ pr_info_once("Max print length exceeded, table data is available at:\n"
++ "/sys/firmware/acpi/tables/data/BERT");
+
+ /*
+ * Because the boot error source is "one-time polled" type,
+@@ -77,7 +81,7 @@ static int __init setup_bert_disable(char *str)
+ {
+ bert_disable = 1;
+
+- return 0;
++ return 1;
+ }
+ __setup("bert_disable", setup_bert_disable);
+
+diff --git a/drivers/acpi/apei/erst.c b/drivers/acpi/apei/erst.c
+index 242f3c2d55330..698d67cee0527 100644
+--- a/drivers/acpi/apei/erst.c
++++ b/drivers/acpi/apei/erst.c
+@@ -891,7 +891,7 @@ EXPORT_SYMBOL_GPL(erst_clear);
+ static int __init setup_erst_disable(char *str)
+ {
+ erst_disable = 1;
+- return 0;
++ return 1;
+ }
+
+ __setup("erst_disable", setup_erst_disable);
+diff --git a/drivers/acpi/apei/hest.c b/drivers/acpi/apei/hest.c
+index 0edc1ed476737..6aef1ee5e1bdb 100644
+--- a/drivers/acpi/apei/hest.c
++++ b/drivers/acpi/apei/hest.c
+@@ -224,7 +224,7 @@ err:
+ static int __init setup_hest_disable(char *str)
+ {
+ hest_disable = HEST_DISABLED;
+- return 0;
++ return 1;
+ }
+
+ __setup("hest_disable", setup_hest_disable);
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index 07f604832fd6b..079b952ab59f2 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -332,21 +332,32 @@ static void acpi_bus_osc_negotiate_platform_control(void)
+ if (ACPI_FAILURE(acpi_run_osc(handle, &context)))
+ return;
+
+- kfree(context.ret.pointer);
++ capbuf_ret = context.ret.pointer;
++ if (context.ret.length <= OSC_SUPPORT_DWORD) {
++ kfree(context.ret.pointer);
++ return;
++ }
+
+- /* Now run _OSC again with query flag clear */
++ /*
++ * Now run _OSC again with query flag clear and with the caps
++ * supported by both the OS and the platform.
++ */
+ capbuf[OSC_QUERY_DWORD] = 0;
++ capbuf[OSC_SUPPORT_DWORD] = capbuf_ret[OSC_SUPPORT_DWORD];
++ kfree(context.ret.pointer);
+
+ if (ACPI_FAILURE(acpi_run_osc(handle, &context)))
+ return;
+
+ capbuf_ret = context.ret.pointer;
+- osc_sb_apei_support_acked =
+- capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
+- osc_pc_lpi_support_confirmed =
+- capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT;
+- osc_sb_native_usb4_support_confirmed =
+- capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT;
++ if (context.ret.length > OSC_SUPPORT_DWORD) {
++ osc_sb_apei_support_acked =
++ capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT;
++ osc_pc_lpi_support_confirmed =
++ capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT;
++ osc_sb_native_usb4_support_confirmed =
++ capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT;
++ }
+
+ kfree(context.ret.pointer);
+ }
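
The _OSC hunk adds length checks before indexing the firmware-returned
capability buffer and re-sends the capabilities supported by both sides on the
second call. A sketch of the bounds-check half; the struct and index are
illustrative stand-ins, not ACPICA's:

#include <stdio.h>

#define SUPPORT_DWORD 1

struct fw_ret {
        unsigned int length;            /* bytes returned by firmware */
        unsigned int *pointer;
};

static unsigned int read_support(const struct fw_ret *ret)
{
        /* never index before verifying the dword actually exists */
        if (ret->length < (SUPPORT_DWORD + 1) * sizeof(unsigned int))
                return 0;
        return ret->pointer[SUPPORT_DWORD];
}

int main(void)
{
        unsigned int buf[2] = { 0, 0xabcd };
        struct fw_ret ok  = { sizeof(buf),    buf };
        struct fw_ret bad = { sizeof(buf[0]), buf };

        printf("0x%x 0x%x\n", read_support(&ok), read_support(&bad));
        return 0;
}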
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 866560cbb082c..123e98a765de7 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -676,6 +676,11 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ cpc_obj = &out_obj->package.elements[0];
+ if (cpc_obj->type == ACPI_TYPE_INTEGER) {
+ num_ent = cpc_obj->integer.value;
++ if (num_ent <= 1) {
++ pr_debug("Unexpected _CPC NumEntries value (%d) for CPU:%d\n",
++ num_ent, pr->id);
++ goto out_free;
++ }
+ } else {
+ pr_debug("Unexpected entry type(%d) for NumEntries\n",
+ cpc_obj->type);
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index d0986bda29640..3fceb4681ec9f 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -685,7 +685,7 @@ int __acpi_node_get_property_reference(const struct fwnode_handle *fwnode,
+ */
+ if (obj->type == ACPI_TYPE_LOCAL_REFERENCE) {
+ if (index)
+- return -EINVAL;
++ return -ENOENT;
+
+ device = acpi_fetch_acpi_dev(obj->reference.handle);
+ if (!device)
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index ffdeed5334d6f..664070fc83498 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -284,6 +284,27 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+ },
++ {
++ /* Lenovo Yoga Tablet 1050F/L */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corp."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "VALLEYVIEW C0 PLATFORM"),
++ DMI_MATCH(DMI_BOARD_NAME, "BYT-T FFD8"),
++ /* Partial match on beginning of BIOS version */
++ DMI_MATCH(DMI_BIOS_VERSION, "BLADE_21"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
++ },
++ {
++ /* Nextbook Ares 8 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "M890BAP"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
++ },
+ {
+ /* Whitelabel (sold as various brands) TM800A550L */
+ .matches = {
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index f47cab21430f9..752a11d16e262 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -810,7 +810,7 @@ static int __init save_async_options(char *buf)
+ pr_warn("Too long list of driver names for 'driver_async_probe'!\n");
+
+ strlcpy(async_probe_drv_names, buf, ASYNC_DRV_NAMES_MAX_LEN);
+- return 0;
++ return 1;
+ }
+ __setup("driver_async_probe=", save_async_options);
+
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 365cd4a7f2397..60c38f9cf1a75 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -663,14 +663,16 @@ static int init_memory_block(unsigned long block_id, unsigned long state,
+ mem->nr_vmemmap_pages = nr_vmemmap_pages;
+ INIT_LIST_HEAD(&mem->group_next);
+
++ ret = register_memory(mem);
++ if (ret)
++ return ret;
++
+ if (group) {
+ mem->group = group;
+ list_add(&mem->group_next, &group->memory_blocks);
+ }
+
+- ret = register_memory(mem);
+-
+- return ret;
++ return 0;
+ }
+
+ static int add_memory_block(unsigned long base_section_nr)
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 5db704f02e712..7e8039d1884cc 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -2058,9 +2058,9 @@ static int genpd_remove(struct generic_pm_domain *genpd)
+ kfree(link);
+ }
+
+- genpd_debug_remove(genpd);
+ list_del(&genpd->gpd_list_node);
+ genpd_unlock(genpd);
++ genpd_debug_remove(genpd);
+ cancel_work_sync(&genpd->power_off_work);
+ if (genpd_is_cpu_domain(genpd))
+ free_cpumask_var(genpd->cpus);
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 04ea92cbd9cfd..08c8a69d7b810 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -2018,7 +2018,9 @@ static bool pm_ops_is_empty(const struct dev_pm_ops *ops)
+
+ void device_pm_check_callbacks(struct device *dev)
+ {
+- spin_lock_irq(&dev->power.lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&dev->power.lock, flags);
+ dev->power.no_pm_callbacks =
+ (!dev->bus || (pm_ops_is_empty(dev->bus->pm) &&
+ !dev->bus->suspend && !dev->bus->resume)) &&
+@@ -2027,7 +2029,7 @@ void device_pm_check_callbacks(struct device *dev)
+ (!dev->pm_domain || pm_ops_is_empty(&dev->pm_domain->ops)) &&
+ (!dev->driver || (pm_ops_is_empty(dev->driver->pm) &&
+ !dev->driver->suspend && !dev->driver->resume));
+- spin_unlock_irq(&dev->power.lock);
++ spin_unlock_irqrestore(&dev->power.lock, flags);
+ }
+
+ bool dev_pm_skip_suspend(struct device *dev)
+diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
+index 3235532ae0778..8b26f631ebc15 100644
+--- a/drivers/block/drbd/drbd_req.c
++++ b/drivers/block/drbd/drbd_req.c
+@@ -180,7 +180,8 @@ void start_new_tl_epoch(struct drbd_connection *connection)
+ void complete_master_bio(struct drbd_device *device,
+ struct bio_and_error *m)
+ {
+- m->bio->bi_status = errno_to_blk_status(m->error);
++ if (unlikely(m->error))
++ m->bio->bi_status = errno_to_blk_status(m->error);
+ bio_endio(m->bio);
+ dec_ap_bio(device);
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 19fe19eaa50e9..d46a3d5d0c2ec 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -681,33 +681,33 @@ static ssize_t loop_attr_backing_file_show(struct loop_device *lo, char *buf)
+
+ static ssize_t loop_attr_offset_show(struct loop_device *lo, char *buf)
+ {
+- return sprintf(buf, "%llu\n", (unsigned long long)lo->lo_offset);
++ return sysfs_emit(buf, "%llu\n", (unsigned long long)lo->lo_offset);
+ }
+
+ static ssize_t loop_attr_sizelimit_show(struct loop_device *lo, char *buf)
+ {
+- return sprintf(buf, "%llu\n", (unsigned long long)lo->lo_sizelimit);
++ return sysfs_emit(buf, "%llu\n", (unsigned long long)lo->lo_sizelimit);
+ }
+
+ static ssize_t loop_attr_autoclear_show(struct loop_device *lo, char *buf)
+ {
+ int autoclear = (lo->lo_flags & LO_FLAGS_AUTOCLEAR);
+
+- return sprintf(buf, "%s\n", autoclear ? "1" : "0");
++ return sysfs_emit(buf, "%s\n", autoclear ? "1" : "0");
+ }
+
+ static ssize_t loop_attr_partscan_show(struct loop_device *lo, char *buf)
+ {
+ int partscan = (lo->lo_flags & LO_FLAGS_PARTSCAN);
+
+- return sprintf(buf, "%s\n", partscan ? "1" : "0");
++ return sysfs_emit(buf, "%s\n", partscan ? "1" : "0");
+ }
+
+ static ssize_t loop_attr_dio_show(struct loop_device *lo, char *buf)
+ {
+ int dio = (lo->lo_flags & LO_FLAGS_DIRECT_IO);
+
+- return sprintf(buf, "%s\n", dio ? "1" : "0");
++ return sysfs_emit(buf, "%s\n", dio ? "1" : "0");
+ }
+
+ LOOP_ATTR_RO(backing_file);
+@@ -1592,6 +1592,7 @@ struct compat_loop_info {
+ compat_ulong_t lo_inode; /* ioctl r/o */
+ compat_dev_t lo_rdevice; /* ioctl r/o */
+ compat_int_t lo_offset;
++ compat_int_t lo_encrypt_type; /* obsolete, ignored */
+ compat_int_t lo_encrypt_key_size; /* ioctl w/o */
+ compat_int_t lo_flags; /* ioctl r/o */
+ char lo_name[LO_NAME_SIZE];
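
The loop.c hunk converts the sysfs show() callbacks from sprintf() to
sysfs_emit(), which knows a sysfs buffer is exactly one page and bounds the
write instead of trusting the format string. A snprintf-based stand-in
modelling the behavior:

#include <stdio.h>

#define PAGE_SIZE 4096

static int my_sysfs_emit(char *buf, const char *fmt, unsigned long long v)
{
        /* never writes past the one-page sysfs buffer */
        return snprintf(buf, PAGE_SIZE, fmt, v);
}

int main(void)
{
        char page[PAGE_SIZE];
        int n = my_sysfs_emit(page, "%llu\n", 123456ULL);

        printf("%d bytes: %s", n, page);
        return 0;
}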
+diff --git a/drivers/block/n64cart.c b/drivers/block/n64cart.c
+index 4db9a8c244af5..e094d2b8b5a92 100644
+--- a/drivers/block/n64cart.c
++++ b/drivers/block/n64cart.c
+@@ -88,7 +88,7 @@ static void n64cart_submit_bio(struct bio *bio)
+ {
+ struct bio_vec bvec;
+ struct bvec_iter iter;
+- struct device *dev = bio->bi_disk->private_data;
++ struct device *dev = bio->bi_bdev->bd_disk->private_data;
+ u32 pos = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+
+ bio_for_each_segment(bvec, bio, iter) {
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 1a4f8b227eac0..06514ed660229 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2428,10 +2428,15 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+
+ /* Apply the device specific HCI quirks
+ *
+- * WBS for SdP - SdP and Stp have a same hw_varaint but
+- * different fw_variant
++	 * WBS for SdP - among the Legacy ROM products, only SdP
++	 * supports WBS. The version information is not enough to
++	 * tell them apart here because StP2 and SdP share the same
++	 * hw_variant and fw_variant, so this flag is set by the
++	 * transport driver (btusb) based on the HW info (idProduct).
+ */
+- if (ver.hw_variant == 0x08 && ver.fw_variant == 0x22)
++ if (!btintel_test_flag(hdev,
++ INTEL_ROM_LEGACY_NO_WBS_SUPPORT))
+ set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ &hdev->quirks);
+
+diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h
+index c9b24e9299e2a..e0060e58573c3 100644
+--- a/drivers/bluetooth/btintel.h
++++ b/drivers/bluetooth/btintel.h
+@@ -152,6 +152,7 @@ enum {
+ INTEL_BROKEN_INITIAL_NCMD,
+ INTEL_BROKEN_SHUTDOWN_LED,
+ INTEL_ROM_LEGACY,
++ INTEL_ROM_LEGACY_NO_WBS_SUPPORT,
+
+ __INTEL_NUM_FLAGS,
+ };
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index b5ea8d3bffaa7..9b868f187316d 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -38,21 +38,25 @@ static bool enable_autosuspend;
+ struct btmtksdio_data {
+ const char *fwname;
+ u16 chipid;
++ bool lp_mbox_supported;
+ };
+
+ static const struct btmtksdio_data mt7663_data = {
+ .fwname = FIRMWARE_MT7663,
+ .chipid = 0x7663,
++ .lp_mbox_supported = false,
+ };
+
+ static const struct btmtksdio_data mt7668_data = {
+ .fwname = FIRMWARE_MT7668,
+ .chipid = 0x7668,
++ .lp_mbox_supported = false,
+ };
+
+ static const struct btmtksdio_data mt7921_data = {
+ .fwname = FIRMWARE_MT7961,
+ .chipid = 0x7921,
++ .lp_mbox_supported = true,
+ };
+
+ static const struct sdio_device_id btmtksdio_table[] = {
+@@ -87,8 +91,17 @@ MODULE_DEVICE_TABLE(sdio, btmtksdio_table);
+ #define RX_DONE_INT BIT(1)
+ #define TX_EMPTY BIT(2)
+ #define TX_FIFO_OVERFLOW BIT(8)
++#define FW_MAILBOX_INT BIT(15)
++#define INT_MASK GENMASK(15, 0)
+ #define RX_PKT_LEN GENMASK(31, 16)
+
++#define MTK_REG_CSICR 0xc0
++#define CSICR_CLR_MBOX_ACK BIT(0)
++#define MTK_REG_PH2DSM0R 0xc4
++#define PH2DSM0R_DRIVER_OWN BIT(0)
++#define MTK_REG_PD2HRM0R 0xdc
++#define PD2HRM0R_DRV_OWN BIT(0)
++
+ #define MTK_REG_CTDR 0x18
+
+ #define MTK_REG_CRDR 0x1c
+@@ -100,6 +113,7 @@ MODULE_DEVICE_TABLE(sdio, btmtksdio_table);
+ #define BTMTKSDIO_TX_WAIT_VND_EVT 1
+ #define BTMTKSDIO_HW_TX_READY 2
+ #define BTMTKSDIO_FUNC_ENABLED 3
++#define BTMTKSDIO_PATCH_ENABLED 4
+
+ struct mtkbtsdio_hdr {
+ __le16 len;
+@@ -278,6 +292,78 @@ static u32 btmtksdio_drv_own_query(struct btmtksdio_dev *bdev)
+ return sdio_readl(bdev->func, MTK_REG_CHLPCR, NULL);
+ }
+
++static u32 btmtksdio_drv_own_query_79xx(struct btmtksdio_dev *bdev)
++{
++ return sdio_readl(bdev->func, MTK_REG_PD2HRM0R, NULL);
++}
++
++static int btmtksdio_fw_pmctrl(struct btmtksdio_dev *bdev)
++{
++ u32 status;
++ int err;
++
++ sdio_claim_host(bdev->func);
++
++ if (bdev->data->lp_mbox_supported &&
++ test_bit(BTMTKSDIO_PATCH_ENABLED, &bdev->tx_state)) {
++ sdio_writel(bdev->func, CSICR_CLR_MBOX_ACK, MTK_REG_CSICR,
++ &err);
++ err = readx_poll_timeout(btmtksdio_drv_own_query_79xx, bdev,
++ status, !(status & PD2HRM0R_DRV_OWN),
++ 2000, 1000000);
++ if (err < 0) {
++ bt_dev_err(bdev->hdev, "mailbox ACK not cleared");
++ goto out;
++ }
++ }
++
++ /* Return ownership to the device */
++ sdio_writel(bdev->func, C_FW_OWN_REQ_SET, MTK_REG_CHLPCR, &err);
++ if (err < 0)
++ goto out;
++
++ err = readx_poll_timeout(btmtksdio_drv_own_query, bdev, status,
++ !(status & C_COM_DRV_OWN), 2000, 1000000);
++
++out:
++ sdio_release_host(bdev->func);
++
++ if (err < 0)
++ bt_dev_err(bdev->hdev, "Cannot return ownership to device");
++
++ return err;
++}
++
++static int btmtksdio_drv_pmctrl(struct btmtksdio_dev *bdev)
++{
++ u32 status;
++ int err;
++
++ sdio_claim_host(bdev->func);
++
++ /* Get ownership from the device */
++ sdio_writel(bdev->func, C_FW_OWN_REQ_CLR, MTK_REG_CHLPCR, &err);
++ if (err < 0)
++ goto out;
++
++ err = readx_poll_timeout(btmtksdio_drv_own_query, bdev, status,
++ status & C_COM_DRV_OWN, 2000, 1000000);
++
++ if (!err && bdev->data->lp_mbox_supported &&
++ test_bit(BTMTKSDIO_PATCH_ENABLED, &bdev->tx_state))
++ err = readx_poll_timeout(btmtksdio_drv_own_query_79xx, bdev,
++ status, status & PD2HRM0R_DRV_OWN,
++ 2000, 1000000);
++
++out:
++ sdio_release_host(bdev->func);
++
++ if (err < 0)
++ bt_dev_err(bdev->hdev, "Cannot get ownership from device");
++
++ return err;
++}
++
+ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+@@ -480,6 +566,13 @@ static void btmtksdio_txrx_work(struct work_struct *work)
+ * FIFO.
+ */
+ sdio_writel(bdev->func, int_status, MTK_REG_CHISR, NULL);
++ int_status &= INT_MASK;
++
++ if ((int_status & FW_MAILBOX_INT) &&
++ bdev->data->chipid == 0x7921) {
++ sdio_writel(bdev->func, PH2DSM0R_DRIVER_OWN,
++ MTK_REG_PH2DSM0R, 0);
++ }
+
+ if (int_status & FW_OWN_BACK_INT)
+ bt_dev_dbg(bdev->hdev, "Get fw own back");
+@@ -531,7 +624,7 @@ static void btmtksdio_interrupt(struct sdio_func *func)
+ static int btmtksdio_open(struct hci_dev *hdev)
+ {
+ struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+- u32 status, val;
++ u32 val;
+ int err;
+
+ sdio_claim_host(bdev->func);
+@@ -542,18 +635,10 @@ static int btmtksdio_open(struct hci_dev *hdev)
+
+ set_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state);
+
+- /* Get ownership from the device */
+- sdio_writel(bdev->func, C_FW_OWN_REQ_CLR, MTK_REG_CHLPCR, &err);
++ err = btmtksdio_drv_pmctrl(bdev);
+ if (err < 0)
+ goto err_disable_func;
+
+- err = readx_poll_timeout(btmtksdio_drv_own_query, bdev, status,
+- status & C_COM_DRV_OWN, 2000, 1000000);
+- if (err < 0) {
+- bt_dev_err(bdev->hdev, "Cannot get ownership from device");
+- goto err_disable_func;
+- }
+-
+ /* Disable interrupt & mask out all interrupt sources */
+ sdio_writel(bdev->func, C_INT_EN_CLR, MTK_REG_CHLPCR, &err);
+ if (err < 0)
+@@ -623,8 +708,6 @@ err_release_host:
+ static int btmtksdio_close(struct hci_dev *hdev)
+ {
+ struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+- u32 status;
+- int err;
+
+ sdio_claim_host(bdev->func);
+
+@@ -635,13 +718,7 @@ static int btmtksdio_close(struct hci_dev *hdev)
+
+ cancel_work_sync(&bdev->txrx_work);
+
+- /* Return ownership to the device */
+- sdio_writel(bdev->func, C_FW_OWN_REQ_SET, MTK_REG_CHLPCR, NULL);
+-
+- err = readx_poll_timeout(btmtksdio_drv_own_query, bdev, status,
+- !(status & C_COM_DRV_OWN), 2000, 1000000);
+- if (err < 0)
+- bt_dev_err(bdev->hdev, "Cannot return ownership to device");
++ btmtksdio_fw_pmctrl(bdev);
+
+ clear_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state);
+ sdio_disable_func(bdev->func);
+@@ -686,6 +763,7 @@ static int btmtksdio_func_query(struct hci_dev *hdev)
+
+ static int mt76xx_setup(struct hci_dev *hdev, const char *fwname)
+ {
++ struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+ struct btmtk_hci_wmt_params wmt_params;
+ struct btmtk_tci_sleep tci_sleep;
+ struct sk_buff *skb;
+@@ -746,6 +824,8 @@ ignore_setup_fw:
+ return err;
+ }
+
++ set_bit(BTMTKSDIO_PATCH_ENABLED, &bdev->tx_state);
++
+ ignore_func_on:
+ /* Apply the low power environment setup */
+ tci_sleep.mode = 0x5;
+@@ -768,6 +848,7 @@ ignore_func_on:
+
+ static int mt79xx_setup(struct hci_dev *hdev, const char *fwname)
+ {
++ struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+ struct btmtk_hci_wmt_params wmt_params;
+ u8 param = 0x1;
+ int err;
+@@ -793,6 +874,7 @@ static int mt79xx_setup(struct hci_dev *hdev, const char *fwname)
+
+ hci_set_msft_opcode(hdev, 0xFD30);
+ hci_set_aosp_capable(hdev);
++ set_bit(BTMTKSDIO_PATCH_ENABLED, &bdev->tx_state);
+
+ return err;
+ }
+@@ -862,6 +944,15 @@ static int btmtksdio_setup(struct hci_dev *hdev)
+ err = mt79xx_setup(hdev, fwname);
+ if (err < 0)
+ return err;
++
++ err = btmtksdio_fw_pmctrl(bdev);
++ if (err < 0)
++ return err;
++
++ err = btmtksdio_drv_pmctrl(bdev);
++ if (err < 0)
++ return err;
++
+ break;
+ case 0x7663:
+ case 0x7668:
+@@ -1004,6 +1095,8 @@ static int btmtksdio_probe(struct sdio_func *func,
+ hdev->manufacturer = 70;
+ set_bit(HCI_QUIRK_NON_PERSISTENT_SETUP, &hdev->quirks);
+
++ sdio_set_drvdata(func, bdev);
++
+ err = hci_register_dev(hdev);
+ if (err < 0) {
+ dev_err(&func->dev, "Can't register HCI device\n");
+@@ -1011,8 +1104,6 @@ static int btmtksdio_probe(struct sdio_func *func,
+ return err;
+ }
+
+- sdio_set_drvdata(func, bdev);
+-
+ /* pm_runtime_enable would be done after the firmware is being
+ * downloaded because the core layer probably already enables
+ * runtime PM for this func such as the case host->caps &
+@@ -1058,7 +1149,6 @@ static int btmtksdio_runtime_suspend(struct device *dev)
+ {
+ struct sdio_func *func = dev_to_sdio_func(dev);
+ struct btmtksdio_dev *bdev;
+- u32 status;
+ int err;
+
+ bdev = sdio_get_drvdata(func);
+@@ -1070,19 +1160,10 @@ static int btmtksdio_runtime_suspend(struct device *dev)
+
+ sdio_set_host_pm_flags(func, MMC_PM_KEEP_POWER);
+
+- sdio_claim_host(bdev->func);
+-
+- sdio_writel(bdev->func, C_FW_OWN_REQ_SET, MTK_REG_CHLPCR, &err);
+- if (err < 0)
+- goto out;
++ err = btmtksdio_fw_pmctrl(bdev);
+
+- err = readx_poll_timeout(btmtksdio_drv_own_query, bdev, status,
+- !(status & C_COM_DRV_OWN), 2000, 1000000);
+-out:
+ bt_dev_info(bdev->hdev, "status (%d) return ownership to device", err);
+
+- sdio_release_host(bdev->func);
+-
+ return err;
+ }
+
+@@ -1090,7 +1171,6 @@ static int btmtksdio_runtime_resume(struct device *dev)
+ {
+ struct sdio_func *func = dev_to_sdio_func(dev);
+ struct btmtksdio_dev *bdev;
+- u32 status;
+ int err;
+
+ bdev = sdio_get_drvdata(func);
+@@ -1100,19 +1180,10 @@ static int btmtksdio_runtime_resume(struct device *dev)
+ if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state))
+ return 0;
+
+- sdio_claim_host(bdev->func);
++ err = btmtksdio_drv_pmctrl(bdev);
+
+- sdio_writel(bdev->func, C_FW_OWN_REQ_CLR, MTK_REG_CHLPCR, &err);
+- if (err < 0)
+- goto out;
+-
+- err = readx_poll_timeout(btmtksdio_drv_own_query, bdev, status,
+- status & C_COM_DRV_OWN, 2000, 1000000);
+-out:
+ bt_dev_info(bdev->hdev, "status (%d) get ownership from device", err);
+
+- sdio_release_host(bdev->func);
+-
+ return err;
+ }
+
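The btmtksdio hunks factor the repeated ownership handshakes into
btmtksdio_drv_pmctrl()/btmtksdio_fw_pmctrl(): write a request bit, then poll a
status register until the other side acknowledges or a timeout expires, with
an extra mailbox round-trip on chips that support the low-power mailbox. A
userspace sketch of the poll-with-timeout core, with usleep() polling standing
in for readx_poll_timeout():

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static unsigned int fake_reg = 1;       /* bit0 set == driver owns */

static unsigned int read_status(void)
{
        return fake_reg;
}

static int poll_for(unsigned int mask, int want, int tries)
{
        while (tries--) {
                if (!!(read_status() & mask) == want)
                        return 0;
                usleep(2000);           /* 2ms poll interval */
        }
        return -ETIMEDOUT;
}

int main(void)
{
        printf("drv own: %d\n", poll_for(1u, 1, 5));    /* acquire */
        fake_reg = 0;
        printf("fw own:  %d\n", poll_for(1u, 0, 5));    /* release */
        return 0;
}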
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 19d5686f8a2a1..2afbd87d77c9b 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -62,6 +62,7 @@ static struct usb_driver btusb_driver;
+ #define BTUSB_QCA_WCN6855 0x1000000
+ #define BTUSB_INTEL_BROKEN_SHUTDOWN_LED 0x2000000
+ #define BTUSB_INTEL_BROKEN_INITIAL_NCMD 0x4000000
++#define BTUSB_INTEL_NO_WBS_SUPPORT 0x8000000
+
+ static const struct usb_device_id btusb_table[] = {
+ /* Generic Bluetooth USB device */
+@@ -385,9 +386,11 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x8087, 0x0033), .driver_info = BTUSB_INTEL_COMBINED },
+ { USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR },
+ { USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL_COMBINED |
++ BTUSB_INTEL_NO_WBS_SUPPORT |
+ BTUSB_INTEL_BROKEN_INITIAL_NCMD |
+ BTUSB_INTEL_BROKEN_SHUTDOWN_LED },
+ { USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL_COMBINED |
++ BTUSB_INTEL_NO_WBS_SUPPORT |
+ BTUSB_INTEL_BROKEN_SHUTDOWN_LED },
+ { USB_DEVICE(0x8087, 0x0a2b), .driver_info = BTUSB_INTEL_COMBINED },
+ { USB_DEVICE(0x8087, 0x0aa7), .driver_info = BTUSB_INTEL_COMBINED |
+@@ -3743,6 +3746,9 @@ static int btusb_probe(struct usb_interface *intf,
+ hdev->send = btusb_send_frame_intel;
+ hdev->cmd_timeout = btusb_intel_cmd_timeout;
+
++ if (id->driver_info & BTUSB_INTEL_NO_WBS_SUPPORT)
++ btintel_set_flag(hdev, INTEL_ROM_LEGACY_NO_WBS_SUPPORT);
++
+ if (id->driver_info & BTUSB_INTEL_BROKEN_INITIAL_NCMD)
+ btintel_set_flag(hdev, INTEL_BROKEN_INITIAL_NCMD);
+
+diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
+index 34286ffe0568f..7ac6908a4dfb4 100644
+--- a/drivers/bluetooth/hci_h5.c
++++ b/drivers/bluetooth/hci_h5.c
+@@ -629,9 +629,11 @@ static int h5_enqueue(struct hci_uart *hu, struct sk_buff *skb)
+ break;
+ }
+
+- pm_runtime_get_sync(&hu->serdev->dev);
+- pm_runtime_mark_last_busy(&hu->serdev->dev);
+- pm_runtime_put_autosuspend(&hu->serdev->dev);
++ if (hu->serdev) {
++ pm_runtime_get_sync(&hu->serdev->dev);
++ pm_runtime_mark_last_busy(&hu->serdev->dev);
++ pm_runtime_put_autosuspend(&hu->serdev->dev);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 3b00d82d36cf7..4cda890ce6470 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -305,6 +305,8 @@ int hci_uart_register_device(struct hci_uart *hu,
+ if (err)
+ return err;
+
++ percpu_init_rwsem(&hu->proto_lock);
++
+ err = p->open(hu);
+ if (err)
+ goto err_open;
+@@ -327,7 +329,6 @@ int hci_uart_register_device(struct hci_uart *hu,
+
+ INIT_WORK(&hu->init_ready, hci_uart_init_work);
+ INIT_WORK(&hu->write_work, hci_uart_write_work);
+- percpu_init_rwsem(&hu->proto_lock);
+
+ /* Only when vendor specific setup callback is provided, consider
+ * the manufacturer information valid. This avoids filling in the
+diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
+index 858d7516410bb..d818586c229d2 100644
+--- a/drivers/bus/mhi/core/debugfs.c
++++ b/drivers/bus/mhi/core/debugfs.c
+@@ -60,16 +60,16 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
+ }
+
+ seq_printf(m, "Index: %d intmod count: %lu time: %lu",
+- i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
++ i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
+ EV_CTX_INTMODC_SHIFT,
+- (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
++ (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
+ EV_CTX_INTMODT_SHIFT);
+
+- seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
+- er_ctxt->rlen);
++ seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
++ le64_to_cpu(er_ctxt->rlen));
+
+- seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
+- er_ctxt->wp);
++ seq_printf(m, " rp: 0x%llx wp: 0x%llx", le64_to_cpu(er_ctxt->rp),
++ le64_to_cpu(er_ctxt->wp));
+
+ seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
+ &mhi_event->db_cfg.db_val);
+@@ -106,18 +106,18 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
+
+ seq_printf(m,
+ "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
+- mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
++ mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
+ CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
+- (chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
+- CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
++ (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
++ CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
+ CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
+
+- seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
+- chan_ctxt->erindex);
++ seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
++ le32_to_cpu(chan_ctxt->erindex));
+
+ seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
+- chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
+- chan_ctxt->wp);
++ le64_to_cpu(chan_ctxt->rbase), le64_to_cpu(chan_ctxt->rlen),
++ le64_to_cpu(chan_ctxt->rp), le64_to_cpu(chan_ctxt->wp));
+
+ seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
+ ring->rp, ring->wp,
+diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
+index 046f407dc5d6e..d8787aaa176ba 100644
+--- a/drivers/bus/mhi/core/init.c
++++ b/drivers/bus/mhi/core/init.c
+@@ -77,12 +77,14 @@ static const char * const mhi_pm_state_str[] = {
+ [MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "Linkdown or Error Fatal Detect",
+ };
+
+-const char *to_mhi_pm_state_str(enum mhi_pm_state state)
++const char *to_mhi_pm_state_str(u32 state)
+ {
+- unsigned long pm_state = state;
+- int index = find_last_bit(&pm_state, 32);
++ int index;
+
+- if (index >= ARRAY_SIZE(mhi_pm_state_str))
++ if (state)
++ index = __fls(state);
++
++ if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
+ return "Invalid State";
+
+ return mhi_pm_state_str[index];
+@@ -291,17 +293,17 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ if (mhi_chan->offload_ch)
+ continue;
+
+- tmp = chan_ctxt->chcfg;
++ tmp = le32_to_cpu(chan_ctxt->chcfg);
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+ tmp &= ~CHAN_CTX_BRSTMODE_MASK;
+ tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
+ tmp &= ~CHAN_CTX_POLLCFG_MASK;
+ tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
+- chan_ctxt->chcfg = tmp;
++ chan_ctxt->chcfg = cpu_to_le32(tmp);
+
+- chan_ctxt->chtype = mhi_chan->type;
+- chan_ctxt->erindex = mhi_chan->er_index;
++ chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
++ chan_ctxt->erindex = cpu_to_le32(mhi_chan->er_index);
+
+ mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+ mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
+@@ -326,14 +328,14 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ if (mhi_event->offload_ev)
+ continue;
+
+- tmp = er_ctxt->intmod;
++ tmp = le32_to_cpu(er_ctxt->intmod);
+ tmp &= ~EV_CTX_INTMODC_MASK;
+ tmp &= ~EV_CTX_INTMODT_MASK;
+ tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
+- er_ctxt->intmod = tmp;
++ er_ctxt->intmod = cpu_to_le32(tmp);
+
+- er_ctxt->ertype = MHI_ER_TYPE_VALID;
+- er_ctxt->msivec = mhi_event->irq;
++ er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
++ er_ctxt->msivec = cpu_to_le32(mhi_event->irq);
+ mhi_event->db_cfg.db_mode = true;
+
+ ring->el_size = sizeof(struct mhi_tre);
+@@ -347,9 +349,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ * ring is empty
+ */
+ ring->rp = ring->wp = ring->base;
+- er_ctxt->rbase = ring->iommu_base;
++ er_ctxt->rbase = cpu_to_le64(ring->iommu_base);
+ er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
+- er_ctxt->rlen = ring->len;
++ er_ctxt->rlen = cpu_to_le64(ring->len);
+ ring->ctxt_wp = &er_ctxt->wp;
+ }
+
+@@ -376,9 +378,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+ goto error_alloc_cmd;
+
+ ring->rp = ring->wp = ring->base;
+- cmd_ctxt->rbase = ring->iommu_base;
++ cmd_ctxt->rbase = cpu_to_le64(ring->iommu_base);
+ cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
+- cmd_ctxt->rlen = ring->len;
++ cmd_ctxt->rlen = cpu_to_le64(ring->len);
+ ring->ctxt_wp = &cmd_ctxt->wp;
+ }
+
+@@ -579,10 +581,10 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ chan_ctxt->rp = 0;
+ chan_ctxt->wp = 0;
+
+- tmp = chan_ctxt->chcfg;
++ tmp = le32_to_cpu(chan_ctxt->chcfg);
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+- chan_ctxt->chcfg = tmp;
++ chan_ctxt->chcfg = cpu_to_le32(tmp);
+
+ /* Update to all cores */
+ smp_wmb();
+@@ -616,14 +618,14 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ return -ENOMEM;
+ }
+
+- tmp = chan_ctxt->chcfg;
++ tmp = le32_to_cpu(chan_ctxt->chcfg);
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
+- chan_ctxt->chcfg = tmp;
++ chan_ctxt->chcfg = cpu_to_le32(tmp);
+
+- chan_ctxt->rbase = tre_ring->iommu_base;
++ chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
+ chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
+- chan_ctxt->rlen = tre_ring->len;
++ chan_ctxt->rlen = cpu_to_le64(tre_ring->len);
+ tre_ring->ctxt_wp = &chan_ctxt->wp;
+
+ tre_ring->rp = tre_ring->wp = tre_ring->base;
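The to_mhi_pm_state_str() hunk in init.c above fixes two things at once: the lookup now takes a raw u32 (PM states are bit flags, and transient values need not match any single enum constant), and a zero state no longer reaches the find-last-bit helper, whose result is undefined for 0. A runnable sketch of the guarded lookup (the state names are made up):

    #include <stdio.h>

    static const char * const state_str[] = {
        "POR", "M0", "M2", "M3",   /* illustrative names only */
    };

    /* The highest set bit selects the name; 0 has no set bit and must be
     * rejected before any __fls-style helper, whose result is undefined
     * for 0 -- exactly the case the hunk above guards. */
    static const char *pm_state_str(unsigned int state)
    {
        int index;

        if (!state)
            return "Invalid State";

        index = 31 - __builtin_clz(state);   /* same result as the kernel's __fls() */
        if (index >= (int)(sizeof(state_str) / sizeof(state_str[0])))
            return "Invalid State";
        return state_str[index];
    }

    int main(void)
    {
        printf("%s %s %s\n", pm_state_str(0), pm_state_str(2), pm_state_str(1u << 30));
        return 0;
    }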
+diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
+index e2e10474a9d92..37c39bf1c7a98 100644
+--- a/drivers/bus/mhi/core/internal.h
++++ b/drivers/bus/mhi/core/internal.h
+@@ -209,14 +209,14 @@ extern struct bus_type mhi_bus_type;
+ #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
+ #define EV_CTX_INTMODT_SHIFT 16
+ struct mhi_event_ctxt {
+- __u32 intmod;
+- __u32 ertype;
+- __u32 msivec;
+-
+- __u64 rbase __packed __aligned(4);
+- __u64 rlen __packed __aligned(4);
+- __u64 rp __packed __aligned(4);
+- __u64 wp __packed __aligned(4);
++ __le32 intmod;
++ __le32 ertype;
++ __le32 msivec;
++
++ __le64 rbase __packed __aligned(4);
++ __le64 rlen __packed __aligned(4);
++ __le64 rp __packed __aligned(4);
++ __le64 wp __packed __aligned(4);
+ };
+
+ #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
+@@ -227,25 +227,25 @@ struct mhi_event_ctxt {
+ #define CHAN_CTX_POLLCFG_SHIFT 10
+ #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
+ struct mhi_chan_ctxt {
+- __u32 chcfg;
+- __u32 chtype;
+- __u32 erindex;
+-
+- __u64 rbase __packed __aligned(4);
+- __u64 rlen __packed __aligned(4);
+- __u64 rp __packed __aligned(4);
+- __u64 wp __packed __aligned(4);
++ __le32 chcfg;
++ __le32 chtype;
++ __le32 erindex;
++
++ __le64 rbase __packed __aligned(4);
++ __le64 rlen __packed __aligned(4);
++ __le64 rp __packed __aligned(4);
++ __le64 wp __packed __aligned(4);
+ };
+
+ struct mhi_cmd_ctxt {
+- __u32 reserved0;
+- __u32 reserved1;
+- __u32 reserved2;
+-
+- __u64 rbase __packed __aligned(4);
+- __u64 rlen __packed __aligned(4);
+- __u64 rp __packed __aligned(4);
+- __u64 wp __packed __aligned(4);
++ __le32 reserved0;
++ __le32 reserved1;
++ __le32 reserved2;
++
++ __le64 rbase __packed __aligned(4);
++ __le64 rlen __packed __aligned(4);
++ __le64 rp __packed __aligned(4);
++ __le64 wp __packed __aligned(4);
+ };
+
+ struct mhi_ctxt {
+@@ -258,8 +258,8 @@ struct mhi_ctxt {
+ };
+
+ struct mhi_tre {
+- u64 ptr;
+- u32 dword[2];
++ __le64 ptr;
++ __le32 dword[2];
+ };
+
+ struct bhi_vec_entry {
+@@ -277,57 +277,58 @@ enum mhi_cmd_type {
+ /* No operation command */
+ #define MHI_TRE_CMD_NOOP_PTR (0)
+ #define MHI_TRE_CMD_NOOP_DWORD0 (0)
+-#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
++#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
+
+ /* Channel reset command */
+ #define MHI_TRE_CMD_RESET_PTR (0)
+ #define MHI_TRE_CMD_RESET_DWORD0 (0)
+-#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
+- (MHI_CMD_RESET_CHAN << 16))
++#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
++ (MHI_CMD_RESET_CHAN << 16)))
+
+ /* Channel stop command */
+ #define MHI_TRE_CMD_STOP_PTR (0)
+ #define MHI_TRE_CMD_STOP_DWORD0 (0)
+-#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
+- (MHI_CMD_STOP_CHAN << 16))
++#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
++ (MHI_CMD_STOP_CHAN << 16)))
+
+ /* Channel start command */
+ #define MHI_TRE_CMD_START_PTR (0)
+ #define MHI_TRE_CMD_START_DWORD0 (0)
+-#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
+- (MHI_CMD_START_CHAN << 16))
++#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
++ (MHI_CMD_START_CHAN << 16)))
+
+-#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
++#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
++#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
++#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+
+ /* Event descriptor macros */
+-#define MHI_TRE_EV_PTR(ptr) (ptr)
+-#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
+-#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
+-#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
+-#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
+-#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+-#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
+-#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
+-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
+-#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
+-#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
+-#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
++#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
++#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
++#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
++#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
++#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
++#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
++#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
++#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
++#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
++#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
++#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
++#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
+
+ /* Transfer descriptor macros */
+-#define MHI_TRE_DATA_PTR(ptr) (ptr)
+-#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
+-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+- | (ieot << 9) | (ieob << 8) | chain)
++#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
++#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
++#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
++ | (ieot << 9) | (ieob << 8) | chain))
+
+ /* RSC transfer descriptor macros */
+-#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
+-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
+-#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
++#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
++#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
++#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
+
+ enum mhi_pkt_type {
+ MHI_PKT_TYPE_INVALID = 0x0,
+@@ -500,7 +501,7 @@ struct state_transition {
+ struct mhi_ring {
+ dma_addr_t dma_handle;
+ dma_addr_t iommu_base;
+- u64 *ctxt_wp; /* point to ctxt wp */
++ __le64 *ctxt_wp; /* point to ctxt wp */
+ void *pre_aligned;
+ void *base;
+ void *rp;
+@@ -622,7 +623,7 @@ void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+ enum mhi_pm_state __must_check mhi_tryset_pm_state(
+ struct mhi_controller *mhi_cntrl,
+ enum mhi_pm_state state);
+-const char *to_mhi_pm_state_str(enum mhi_pm_state state);
++const char *to_mhi_pm_state_str(u32 state);
+ int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+ enum dev_st_transition state);
+ void mhi_pm_st_worker(struct work_struct *work);
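The internal.h hunk converts every device-shared context field from native __u32/__u64 to the sparse-checked __le32/__le64 types, which is what forces the le32_to_cpu()/cpu_to_le32() conversions scattered through the other MHI hunks: the device always reads and writes little-endian, so the structs must hold wire format regardless of host byte order. A small user-space analogue using glibc's <endian.h>, where htole64()/le64toh() play the role of cpu_to_le64()/le64_to_cpu():

    #include <endian.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Device-visible ring context: stores wire (little-endian) format. */
    struct ring_ctxt {
        uint64_t rbase;
        uint64_t wp;
    };

    int main(void)
    {
        struct ring_ctxt c;
        uint64_t iommu_base = 0x1000;

        c.rbase = htole64(iommu_base);   /* cpu_to_le64() in the kernel */
        c.wp = c.rbase;                  /* LE-to-LE copy: no swap needed */
        printf("wp = %#llx\n", (unsigned long long)le64toh(c.wp));
        return 0;
    }

This is also why assignments such as er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase in the init.c hunk need no conversion: both sides already hold little-endian values.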
+diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
+index ffde617f93a3b..85f4f7c8d7c60 100644
+--- a/drivers/bus/mhi/core/main.c
++++ b/drivers/bus/mhi/core/main.c
+@@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
+ struct mhi_ring *ring = &mhi_event->ring;
+
+ mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
+- ring->db_addr, *ring->ctxt_wp);
++ ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
+ }
+
+ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+@@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+ struct mhi_ring *ring = &mhi_cmd->ring;
+
+ db = ring->iommu_base + (ring->wp - ring->base);
+- *ring->ctxt_wp = db;
++ *ring->ctxt_wp = cpu_to_le64(db);
+ mhi_write_db(mhi_cntrl, ring->db_addr, db);
+ }
+
+@@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+ * before letting h/w know there is new element to fetch.
+ */
+ dma_wmb();
+- *ring->ctxt_wp = db;
++ *ring->ctxt_wp = cpu_to_le64(db);
+
+ mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
+ ring->db_addr, db);
+@@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
+ struct mhi_event_ctxt *er_ctxt =
+ &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ struct mhi_ring *ev_ring = &mhi_event->ring;
+- dma_addr_t ptr = er_ctxt->rp;
++ dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
+ void *dev_rp;
+
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+@@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
+
+ /* Update the WP */
+ ring->wp += ring->el_size;
+- ctxt_wp = *ring->ctxt_wp + ring->el_size;
++ ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
+
+ if (ring->wp >= (ring->base + ring->len)) {
+ ring->wp = ring->base;
+ ctxt_wp = ring->iommu_base;
+ }
+
+- *ring->ctxt_wp = ctxt_wp;
++ *ring->ctxt_wp = cpu_to_le64(ctxt_wp);
+
+ /* Update the RP */
+ ring->rp += ring->el_size;
+@@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ u32 chan;
+ int count = 0;
+- dma_addr_t ptr = er_ctxt->rp;
++ dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
+
+ /*
+ * This is a quick check to avoid unnecessary event processing
+@@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ local_rp = ev_ring->rp;
+
+- ptr = er_ctxt->rp;
++ ptr = le64_to_cpu(er_ctxt->rp);
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+@@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ int count = 0;
+ u32 chan;
+ struct mhi_chan *mhi_chan;
+- dma_addr_t ptr = er_ctxt->rp;
++ dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
+
+ if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ return -EIO;
+@@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ local_rp = ev_ring->rp;
+
+- ptr = er_ctxt->rp;
++ ptr = le64_to_cpu(er_ctxt->rp);
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+@@ -1533,7 +1533,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
+ /* mark all stale events related to channel as STALE event */
+ spin_lock_irqsave(&mhi_event->lock, flags);
+
+- ptr = er_ctxt->rp;
++ ptr = le64_to_cpu(er_ctxt->rp);
+ if (!is_valid_ring_ptr(ev_ring, ptr)) {
+ dev_err(&mhi_cntrl->mhi_dev->dev,
+ "Event ring rp points outside of the event ring\n");
+diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
+index 4aae0baea0084..c35c5ddc72207 100644
+--- a/drivers/bus/mhi/core/pm.c
++++ b/drivers/bus/mhi/core/pm.c
+@@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
+ continue;
+
+ ring->wp = ring->base + ring->len - ring->el_size;
+- *ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
++ *ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
+ /* Update all cores */
+ smp_wmb();
+
+@@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
+ continue;
+
+ ring->wp = ring->base + ring->len - ring->el_size;
+- *ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
++ *ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
+ /* Update to all cores */
+ smp_wmb();
+
+diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
+index b79895810c52f..9527b7d638401 100644
+--- a/drivers/bus/mhi/pci_generic.c
++++ b/drivers/bus/mhi/pci_generic.c
+@@ -327,6 +327,7 @@ static const struct mhi_pci_dev_info mhi_quectel_em1xx_info = {
+ .config = &modem_quectel_em1xx_config,
+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
+ .dma_data_width = 32,
++ .mru_default = 32768,
+ .sideband_wake = true,
+ };
+
+diff --git a/drivers/bus/mips_cdmm.c b/drivers/bus/mips_cdmm.c
+index 626dedd110cbc..fca0d0669aa97 100644
+--- a/drivers/bus/mips_cdmm.c
++++ b/drivers/bus/mips_cdmm.c
+@@ -351,6 +351,7 @@ phys_addr_t __weak mips_cdmm_phys_base(void)
+ np = of_find_compatible_node(NULL, NULL, "mti,mips-cdmm");
+ if (np) {
+ err = of_address_to_resource(np, 0, &res);
++ of_node_put(np);
+ if (!err)
+ return res.start;
+ }
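The mips_cdmm hunk is a device-tree reference-count fix: of_find_compatible_node() returns its node with an elevated refcount, and the caller must drop it on every path. A kernel-style sketch of the corrected pattern (compiles only in-tree; the function name is illustrative):

    #include <linux/of.h>
    #include <linux/of_address.h>

    static phys_addr_t cdmm_base_from_dt(void)
    {
        struct device_node *np;
        struct resource res;
        int err;

        np = of_find_compatible_node(NULL, NULL, "mti,mips-cdmm");
        if (!np)
            return 0;

        err = of_address_to_resource(np, 0, &res);
        of_node_put(np);   /* drop the reference on success and failure alike */
        return err ? 0 : res.start;
    }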
+diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
+index 9704963f9d500..a087156a58186 100644
+--- a/drivers/char/hw_random/Kconfig
++++ b/drivers/char/hw_random/Kconfig
+@@ -401,7 +401,7 @@ config HW_RANDOM_MESON
+
+ config HW_RANDOM_CAVIUM
+ tristate "Cavium ThunderX Random Number Generator support"
+- depends on HW_RANDOM && PCI && ARM64
++ depends on HW_RANDOM && PCI && ARCH_THUNDER
+ default HW_RANDOM
+ help
+ This driver provides kernel-side support for the Random Number
+diff --git a/drivers/char/hw_random/atmel-rng.c b/drivers/char/hw_random/atmel-rng.c
+index ecb71c4317a50..8cf0ef501341e 100644
+--- a/drivers/char/hw_random/atmel-rng.c
++++ b/drivers/char/hw_random/atmel-rng.c
+@@ -114,6 +114,7 @@ static int atmel_trng_probe(struct platform_device *pdev)
+
+ err_register:
+ clk_disable_unprepare(trng->clk);
++ atmel_trng_disable(trng);
+ return ret;
+ }
+
+diff --git a/drivers/char/hw_random/cavium-rng-vf.c b/drivers/char/hw_random/cavium-rng-vf.c
+index 6f66919652bf5..7c55f4cf4a8ba 100644
+--- a/drivers/char/hw_random/cavium-rng-vf.c
++++ b/drivers/char/hw_random/cavium-rng-vf.c
+@@ -179,7 +179,7 @@ static int cavium_map_pf_regs(struct cavium_rng *rng)
+ pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVID_CAVIUM_RNG_PF, NULL);
+ if (!pdev) {
+- dev_err(&pdev->dev, "Cannot find RNG PF device\n");
++ pr_err("Cannot find RNG PF device\n");
+ return -EIO;
+ }
+
+diff --git a/drivers/char/hw_random/nomadik-rng.c b/drivers/char/hw_random/nomadik-rng.c
+index 67947a19aa225..e8f9621e79541 100644
+--- a/drivers/char/hw_random/nomadik-rng.c
++++ b/drivers/char/hw_random/nomadik-rng.c
+@@ -65,14 +65,14 @@ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id)
+ out_release:
+ amba_release_regions(dev);
+ out_clk:
+- clk_disable(rng_clk);
++ clk_disable_unprepare(rng_clk);
+ return ret;
+ }
+
+ static void nmk_rng_remove(struct amba_device *dev)
+ {
+ amba_release_regions(dev);
+- clk_disable(rng_clk);
++ clk_disable_unprepare(rng_clk);
+ }
+
+ static const struct amba_id nmk_rng_ids[] = {
+diff --git a/drivers/clk/actions/owl-s700.c b/drivers/clk/actions/owl-s700.c
+index a2f34d13fb543..6ea7da1d6d755 100644
+--- a/drivers/clk/actions/owl-s700.c
++++ b/drivers/clk/actions/owl-s700.c
+@@ -162,6 +162,7 @@ static struct clk_div_table hdmia_div_table[] = {
+
+ static struct clk_div_table rmii_div_table[] = {
+ {0, 4}, {1, 10},
++ {0, 0}
+ };
+
+ /* divider clocks */
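This hunk, and the owl-s900, clps711x, hi3559a and loongson1c hunks that follow, all fix the same bug: the common clock framework walks a clk_div_table until it finds an entry whose .div is 0, so a table without that terminator sends the walk past the end of the array. A stand-alone sketch of the convention:

    #include <stdio.h>

    struct div_entry { unsigned int val, div; };

    /* The table must end with { 0, 0 }: .div == 0 terminates the walk. */
    static const struct div_entry rmii_div_table[] = {
        { 0, 4 }, { 1, 10 },
        { 0, 0 }   /* sentinel */
    };

    int main(void)
    {
        const struct div_entry *e;

        for (e = rmii_div_table; e->div; e++)
            printf("val=%u div=%u\n", e->val, e->div);
        return 0;
    }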
+diff --git a/drivers/clk/actions/owl-s900.c b/drivers/clk/actions/owl-s900.c
+index 790890978424a..5144ada2c7e1a 100644
+--- a/drivers/clk/actions/owl-s900.c
++++ b/drivers/clk/actions/owl-s900.c
+@@ -140,7 +140,7 @@ static struct clk_div_table rmii_ref_div_table[] = {
+
+ static struct clk_div_table usb3_mac_div_table[] = {
+ { 1, 2 }, { 2, 3 }, { 3, 4 },
+- { 0, 8 },
++ { 0, 0 }
+ };
+
+ static struct clk_div_table i2s_div_table[] = {
+diff --git a/drivers/clk/at91/sama7g5.c b/drivers/clk/at91/sama7g5.c
+index 369dfafabbca2..060e908086a13 100644
+--- a/drivers/clk/at91/sama7g5.c
++++ b/drivers/clk/at91/sama7g5.c
+@@ -696,16 +696,16 @@ static const struct {
+ { .n = "pdmc0_gclk",
+ .id = 68,
+ .r = { .max = 50000000 },
+- .pp = { "syspll_divpmcck", "baudpll_divpmcck", },
+- .pp_mux_table = { 5, 8, },
++ .pp = { "syspll_divpmcck", "audiopll_divpmcck", },
++ .pp_mux_table = { 5, 9, },
+ .pp_count = 2,
+ .pp_chg_id = INT_MIN, },
+
+ { .n = "pdmc1_gclk",
+ .id = 69,
+ .r = { .max = 50000000, },
+- .pp = { "syspll_divpmcck", "baudpll_divpmcck", },
+- .pp_mux_table = { 5, 8, },
++ .pp = { "syspll_divpmcck", "audiopll_divpmcck", },
++ .pp_mux_table = { 5, 9, },
+ .pp_count = 2,
+ .pp_chg_id = INT_MIN, },
+
+diff --git a/drivers/clk/clk-clps711x.c b/drivers/clk/clk-clps711x.c
+index a2c6486ef1708..f8417ee2961aa 100644
+--- a/drivers/clk/clk-clps711x.c
++++ b/drivers/clk/clk-clps711x.c
+@@ -28,11 +28,13 @@ static const struct clk_div_table spi_div_table[] = {
+ { .val = 1, .div = 8, },
+ { .val = 2, .div = 2, },
+ { .val = 3, .div = 1, },
++ { /* sentinel */ }
+ };
+
+ static const struct clk_div_table timer_div_table[] = {
+ { .val = 0, .div = 256, },
+ { .val = 1, .div = 1, },
++ { /* sentinel */ }
+ };
+
+ struct clps711x_clk {
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 8de6a22498e70..01b64b962e76f 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3456,6 +3456,19 @@ static void clk_core_reparent_orphans_nolock(void)
+ __clk_set_parent_after(orphan, parent, NULL);
+ __clk_recalc_accuracies(orphan);
+ __clk_recalc_rates(orphan, 0);
++
++ /*
++ * __clk_init_parent() will set the initial req_rate to
++ * 0 if the clock doesn't have clk_ops::recalc_rate and
++ * is an orphan when it's registered.
++ *
++ * 'req_rate' is used by clk_set_rate_range() and
++ * clk_put() to trigger a clk_set_rate() call whenever
++ * the boundaries are modified. Let's make sure
++ * 'req_rate' is set to something non-zero so that
++ * clk_set_rate_range() doesn't drop the frequency.
++ */
++ orphan->req_rate = orphan->rate;
+ }
+ }
+ }
+@@ -3773,8 +3786,9 @@ struct clk *clk_hw_create_clk(struct device *dev, struct clk_hw *hw,
+ struct clk *clk_hw_get_clk(struct clk_hw *hw, const char *con_id)
+ {
+ struct device *dev = hw->core->dev;
++ const char *name = dev ? dev_name(dev) : NULL;
+
+- return clk_hw_create_clk(dev, hw, dev_name(dev), con_id);
++ return clk_hw_create_clk(dev, hw, name, con_id);
+ }
+ EXPORT_SYMBOL(clk_hw_get_clk);
+
+diff --git a/drivers/clk/hisilicon/clk-hi3559a.c b/drivers/clk/hisilicon/clk-hi3559a.c
+index 56012a3d02192..9ea1a80acbe8b 100644
+--- a/drivers/clk/hisilicon/clk-hi3559a.c
++++ b/drivers/clk/hisilicon/clk-hi3559a.c
+@@ -611,8 +611,8 @@ static struct hisi_mux_clock hi3559av100_shub_mux_clks[] = {
+
+
+ /* shub div clk */
+-static struct clk_div_table shub_spi_clk_table[] = {{0, 8}, {1, 4}, {2, 2}};
+-static struct clk_div_table shub_uart_div_clk_table[] = {{1, 8}, {2, 4}};
++static struct clk_div_table shub_spi_clk_table[] = {{0, 8}, {1, 4}, {2, 2}, {/*sentinel*/}};
++static struct clk_div_table shub_uart_div_clk_table[] = {{1, 8}, {2, 4}, {/*sentinel*/}};
+
+ static struct hisi_divider_clock hi3559av100_shub_div_clks[] = {
+ { HI3559AV100_SHUB_SPI_SOURCE_CLK, "clk_spi_clk", "shub_clk", 0, 0x20, 24, 2,
+diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
+index c4e0f1c07192f..3f6fd7ef2a68f 100644
+--- a/drivers/clk/imx/clk-imx7d.c
++++ b/drivers/clk/imx/clk-imx7d.c
+@@ -849,7 +849,6 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ hws[IMX7D_WDOG4_ROOT_CLK] = imx_clk_hw_gate4("wdog4_root_clk", "wdog_post_div", base + 0x49f0, 0);
+ hws[IMX7D_KPP_ROOT_CLK] = imx_clk_hw_gate4("kpp_root_clk", "ipg_root_clk", base + 0x4aa0, 0);
+ hws[IMX7D_CSI_MCLK_ROOT_CLK] = imx_clk_hw_gate4("csi_mclk_root_clk", "csi_mclk_post_div", base + 0x4490, 0);
+- hws[IMX7D_AUDIO_MCLK_ROOT_CLK] = imx_clk_hw_gate4("audio_mclk_root_clk", "audio_mclk_post_div", base + 0x4790, 0);
+ hws[IMX7D_WRCLK_ROOT_CLK] = imx_clk_hw_gate4("wrclk_root_clk", "wrclk_post_div", base + 0x47a0, 0);
+ hws[IMX7D_USB_CTRL_CLK] = imx_clk_hw_gate4("usb_ctrl_clk", "ahb_root_clk", base + 0x4680, 0);
+ hws[IMX7D_USB_PHY1_CLK] = imx_clk_hw_gate4("usb_phy1_clk", "pll_usb1_main_clk", base + 0x46a0, 0);
+diff --git a/drivers/clk/imx/clk-imx8qxp-lpcg.c b/drivers/clk/imx/clk-imx8qxp-lpcg.c
+index b23758083ce52..5e31a6a24b3a3 100644
+--- a/drivers/clk/imx/clk-imx8qxp-lpcg.c
++++ b/drivers/clk/imx/clk-imx8qxp-lpcg.c
+@@ -248,7 +248,7 @@ static int imx_lpcg_parse_clks_from_dt(struct platform_device *pdev,
+
+ for (i = 0; i < count; i++) {
+ idx = bit_offset[i] / 4;
+- if (idx > IMX_LPCG_MAX_CLKS) {
++ if (idx >= IMX_LPCG_MAX_CLKS) {
+ dev_warn(&pdev->dev, "invalid bit offset of clock %d\n",
+ i);
+ ret = -EINVAL;
+diff --git a/drivers/clk/loongson1/clk-loongson1c.c b/drivers/clk/loongson1/clk-loongson1c.c
+index 703f87622cf5f..1ebf740380efb 100644
+--- a/drivers/clk/loongson1/clk-loongson1c.c
++++ b/drivers/clk/loongson1/clk-loongson1c.c
+@@ -37,6 +37,7 @@ static const struct clk_div_table ahb_div_table[] = {
+ [1] = { .val = 1, .div = 4 },
+ [2] = { .val = 2, .div = 3 },
+ [3] = { .val = 3, .div = 3 },
++ [4] = { /* sentinel */ }
+ };
+
+ void __init ls1x_clk_init(void)
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index e1b1b426fae4b..f675fd969c4de 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -264,7 +264,7 @@ static int clk_rcg2_determine_floor_rate(struct clk_hw *hw,
+
+ static int __clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ {
+- u32 cfg, mask;
++ u32 cfg, mask, d_val, not2d_val, n_minus_m;
+ struct clk_hw *hw = &rcg->clkr.hw;
+ int ret, index = qcom_find_src_index(hw, rcg->parent_map, f->src);
+
+@@ -283,8 +283,17 @@ static int __clk_rcg2_configure(struct clk_rcg2 *rcg, const struct freq_tbl *f)
+ if (ret)
+ return ret;
+
++ /* Calculate 2d value */
++ d_val = f->n;
++
++ n_minus_m = f->n - f->m;
++ n_minus_m *= 2;
++
++ d_val = clamp_t(u32, d_val, f->m, n_minus_m);
++ not2d_val = ~d_val & mask;
++
+ ret = regmap_update_bits(rcg->clkr.regmap,
+- RCG_D_OFFSET(rcg), mask, ~f->n);
++ RCG_D_OFFSET(rcg), mask, not2d_val);
+ if (ret)
+ return ret;
+ }
+@@ -720,6 +729,7 @@ static const struct frac_entry frac_table_pixel[] = {
+ { 2, 9 },
+ { 4, 9 },
+ { 1, 1 },
++ { 2, 3 },
+ { }
+ };
+
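In the __clk_rcg2_configure() hunk above, the blind ~f->n write is replaced by a properly computed "not 2D" value: the MND counter's D parameter starts at N and is clamped to the legal range [M, 2*(N-M)] before being inverted under the register mask. A runnable sketch of just that arithmetic (the M/N pair and mask width are made up):

    #include <stdio.h>

    static unsigned int clamp_u32(unsigned int v, unsigned int lo, unsigned int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void)
    {
        unsigned int m = 1, n = 2;   /* made-up M/N pair from a freq table */
        unsigned int mask = 0xff;    /* BIT(mnd_width) - 1 */
        unsigned int d = clamp_u32(n, m, 2 * (n - m));
        unsigned int not2d = ~d & mask;

        printf("d=%u, NOT(2D) register value=0x%02x\n", d, not2d);
        return 0;
    }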
+diff --git a/drivers/clk/qcom/gcc-ipq8074.c b/drivers/clk/qcom/gcc-ipq8074.c
+index 108fe27bee10f..541016db3c4bb 100644
+--- a/drivers/clk/qcom/gcc-ipq8074.c
++++ b/drivers/clk/qcom/gcc-ipq8074.c
+@@ -60,11 +60,6 @@ static const struct parent_map gcc_xo_gpll0_gpll0_out_main_div2_map[] = {
+ { P_GPLL0_DIV2, 4 },
+ };
+
+-static const char * const gcc_xo_gpll0[] = {
+- "xo",
+- "gpll0",
+-};
+-
+ static const struct parent_map gcc_xo_gpll0_map[] = {
+ { P_XO, 0 },
+ { P_GPLL0, 1 },
+@@ -956,6 +951,11 @@ static struct clk_rcg2 blsp1_uart6_apps_clk_src = {
+ },
+ };
+
++static const struct clk_parent_data gcc_xo_gpll0[] = {
++ { .fw_name = "xo" },
++ { .hw = &gpll0.clkr.hw },
++};
++
+ static const struct freq_tbl ftbl_pcie_axi_clk_src[] = {
+ F(19200000, P_XO, 1, 0, 0),
+ F(200000000, P_GPLL0, 4, 0, 0),
+@@ -969,7 +969,7 @@ static struct clk_rcg2 pcie0_axi_clk_src = {
+ .parent_map = gcc_xo_gpll0_map,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "pcie0_axi_clk_src",
+- .parent_names = gcc_xo_gpll0,
++ .parent_data = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+@@ -1016,7 +1016,7 @@ static struct clk_rcg2 pcie1_axi_clk_src = {
+ .parent_map = gcc_xo_gpll0_map,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "pcie1_axi_clk_src",
+- .parent_names = gcc_xo_gpll0,
++ .parent_data = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+@@ -1074,7 +1074,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
+ .name = "sdcc1_apps_clk_src",
+ .parent_names = gcc_xo_gpll0_gpll2_gpll0_out_main_div2,
+ .num_parents = 4,
+- .ops = &clk_rcg2_ops,
++ .ops = &clk_rcg2_floor_ops,
+ },
+ };
+
+@@ -1330,7 +1330,7 @@ static struct clk_rcg2 nss_ce_clk_src = {
+ .parent_map = gcc_xo_gpll0_map,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "nss_ce_clk_src",
+- .parent_names = gcc_xo_gpll0,
++ .parent_data = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+@@ -4329,8 +4329,7 @@ static struct clk_rcg2 pcie0_rchng_clk_src = {
+ .parent_map = gcc_xo_gpll0_map,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "pcie0_rchng_clk_src",
+- .parent_hws = (const struct clk_hw *[]) {
+- &gpll0.clkr.hw },
++ .parent_data = gcc_xo_gpll0,
+ .num_parents = 2,
+ .ops = &clk_rcg2_ops,
+ },
+diff --git a/drivers/clk/qcom/gcc-msm8994.c b/drivers/clk/qcom/gcc-msm8994.c
+index f09499999eb3a..6b702cdacbf2e 100644
+--- a/drivers/clk/qcom/gcc-msm8994.c
++++ b/drivers/clk/qcom/gcc-msm8994.c
+@@ -77,6 +77,7 @@ static struct clk_alpha_pll gpll4_early = {
+
+ static struct clk_alpha_pll_postdiv gpll4 = {
+ .offset = 0x1dc0,
++ .width = 4,
+ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT],
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gpll4",
+diff --git a/drivers/clk/renesas/r8a779f0-cpg-mssr.c b/drivers/clk/renesas/r8a779f0-cpg-mssr.c
+index e6ec02c2c2a8b..344957d533d81 100644
+--- a/drivers/clk/renesas/r8a779f0-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a779f0-cpg-mssr.c
+@@ -103,7 +103,7 @@ static const struct cpg_core_clk r8a779f0_core_clks[] __initconst = {
+ DEF_FIXED("s0d12_hsc", R8A779F0_CLK_S0D12_HSC, CLK_S0, 12, 1),
+ DEF_FIXED("cl16m_hsc", R8A779F0_CLK_CL16M_HSC, CLK_S0, 48, 1),
+ DEF_FIXED("s0d2_cc", R8A779F0_CLK_S0D2_CC, CLK_S0, 2, 1),
+- DEF_FIXED("rsw2", R8A779F0_CLK_RSW2, CLK_PLL5, 2, 1),
++ DEF_FIXED("rsw2", R8A779F0_CLK_RSW2, CLK_PLL5_DIV2, 5, 1),
+ DEF_FIXED("cbfusa", R8A779F0_CLK_CBFUSA, CLK_EXTAL, 2, 1),
+ DEF_FIXED("cpex", R8A779F0_CLK_CPEX, CLK_EXTAL, 2, 1),
+
+diff --git a/drivers/clk/renesas/r9a07g044-cpg.c b/drivers/clk/renesas/r9a07g044-cpg.c
+index 79042bf46fe85..46359afef0d43 100644
+--- a/drivers/clk/renesas/r9a07g044-cpg.c
++++ b/drivers/clk/renesas/r9a07g044-cpg.c
+@@ -88,8 +88,8 @@ static const struct cpg_core_clk r9a07g044_core_clks[] __initconst = {
+ DEF_FIXED(".osc", R9A07G044_OSCCLK, CLK_EXTAL, 1, 1),
+ DEF_FIXED(".osc_div1000", CLK_OSC_DIV1000, CLK_EXTAL, 1, 1000),
+ DEF_SAMPLL(".pll1", CLK_PLL1, CLK_EXTAL, PLL146_CONF(0)),
+- DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 133, 2),
+- DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 133, 2),
++ DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 200, 3),
++ DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 200, 3),
+ DEF_FIXED(".pll3_400", CLK_PLL3_400, CLK_PLL3, 1, 4),
+ DEF_FIXED(".pll3_533", CLK_PLL3_533, CLK_PLL3, 1, 3),
+
+diff --git a/drivers/clk/rockchip/clk.c b/drivers/clk/rockchip/clk.c
+index b7be7e11b0dfe..bb8a844309bf5 100644
+--- a/drivers/clk/rockchip/clk.c
++++ b/drivers/clk/rockchip/clk.c
+@@ -180,6 +180,7 @@ static void rockchip_fractional_approximation(struct clk_hw *hw,
+ unsigned long rate, unsigned long *parent_rate,
+ unsigned long *m, unsigned long *n)
+ {
++ struct clk_fractional_divider *fd = to_clk_fd(hw);
+ unsigned long p_rate, p_parent_rate;
+ struct clk_hw *p_parent;
+
+@@ -190,6 +191,8 @@ static void rockchip_fractional_approximation(struct clk_hw *hw,
+ *parent_rate = p_parent_rate;
+ }
+
++ fd->flags |= CLK_FRAC_DIVIDER_POWER_OF_TWO_PS;
++
+ clk_fractional_divider_general_approximation(hw, rate, parent_rate, m, n);
+ }
+
+diff --git a/drivers/clk/starfive/clk-starfive-jh7100.c b/drivers/clk/starfive/clk-starfive-jh7100.c
+index 25d31afa0f871..4b59338b5d7d4 100644
+--- a/drivers/clk/starfive/clk-starfive-jh7100.c
++++ b/drivers/clk/starfive/clk-starfive-jh7100.c
+@@ -32,6 +32,13 @@
+ #define JH7100_CLK_MUX_MASK GENMASK(27, 24)
+ #define JH7100_CLK_MUX_SHIFT 24
+ #define JH7100_CLK_DIV_MASK GENMASK(23, 0)
++#define JH7100_CLK_FRAC_MASK GENMASK(15, 8)
++#define JH7100_CLK_FRAC_SHIFT 8
++#define JH7100_CLK_INT_MASK GENMASK(7, 0)
++
++/* fractional divider min/max */
++#define JH7100_CLK_FRAC_MIN 100UL
++#define JH7100_CLK_FRAC_MAX 25599UL
+
+ /* clock data */
+ #define JH7100_GATE(_idx, _name, _flags, _parent) [_idx] = { \
+@@ -55,6 +62,13 @@
+ .parents = { [0] = _parent }, \
+ }
+
++#define JH7100_FDIV(_idx, _name, _parent) [_idx] = { \
++ .name = _name, \
++ .flags = 0, \
++ .max = JH7100_CLK_FRAC_MAX, \
++ .parents = { [0] = _parent }, \
++}
++
+ #define JH7100__MUX(_idx, _name, _nparents, ...) [_idx] = { \
+ .name = _name, \
+ .flags = 0, \
+@@ -225,7 +239,7 @@ static const struct {
+ JH7100__MUX(JH7100_CLK_USBPHY_25M, "usbphy_25m", 2,
+ JH7100_CLK_OSC_SYS,
+ JH7100_CLK_USBPHY_PLLDIV25M),
+- JH7100__DIV(JH7100_CLK_AUDIO_DIV, "audio_div", 131072, JH7100_CLK_AUDIO_ROOT),
++ JH7100_FDIV(JH7100_CLK_AUDIO_DIV, "audio_div", JH7100_CLK_AUDIO_ROOT),
+ JH7100_GATE(JH7100_CLK_AUDIO_SRC, "audio_src", 0, JH7100_CLK_AUDIO_DIV),
+ JH7100_GATE(JH7100_CLK_AUDIO_12288, "audio_12288", 0, JH7100_CLK_OSC_AUD),
+ JH7100_GDIV(JH7100_CLK_VIN_SRC, "vin_src", 0, 4, JH7100_CLK_VIN_ROOT),
+@@ -399,22 +413,13 @@ static unsigned long jh7100_clk_recalc_rate(struct clk_hw *hw,
+ return div ? parent_rate / div : 0;
+ }
+
+-static unsigned long jh7100_clk_bestdiv(struct jh7100_clk *clk,
+- unsigned long rate, unsigned long parent)
+-{
+- unsigned long max = clk->max_div;
+- unsigned long div = DIV_ROUND_UP(parent, rate);
+-
+- return min(div, max);
+-}
+-
+ static int jh7100_clk_determine_rate(struct clk_hw *hw,
+ struct clk_rate_request *req)
+ {
+ struct jh7100_clk *clk = jh7100_clk_from(hw);
+ unsigned long parent = req->best_parent_rate;
+ unsigned long rate = clamp(req->rate, req->min_rate, req->max_rate);
+- unsigned long div = jh7100_clk_bestdiv(clk, rate, parent);
++ unsigned long div = min_t(unsigned long, DIV_ROUND_UP(parent, rate), clk->max_div);
+ unsigned long result = parent / div;
+
+ /*
+@@ -442,12 +447,56 @@ static int jh7100_clk_set_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+ {
+ struct jh7100_clk *clk = jh7100_clk_from(hw);
+- unsigned long div = jh7100_clk_bestdiv(clk, rate, parent_rate);
++ unsigned long div = clamp(DIV_ROUND_CLOSEST(parent_rate, rate),
++ 1UL, (unsigned long)clk->max_div);
+
+ jh7100_clk_reg_rmw(clk, JH7100_CLK_DIV_MASK, div);
+ return 0;
+ }
+
++static unsigned long jh7100_clk_frac_recalc_rate(struct clk_hw *hw,
++ unsigned long parent_rate)
++{
++ struct jh7100_clk *clk = jh7100_clk_from(hw);
++ u32 reg = jh7100_clk_reg_get(clk);
++ unsigned long div100 = 100 * (reg & JH7100_CLK_INT_MASK) +
++ ((reg & JH7100_CLK_FRAC_MASK) >> JH7100_CLK_FRAC_SHIFT);
++
++ return (div100 >= JH7100_CLK_FRAC_MIN) ? 100 * parent_rate / div100 : 0;
++}
++
++static int jh7100_clk_frac_determine_rate(struct clk_hw *hw,
++ struct clk_rate_request *req)
++{
++ unsigned long parent100 = 100 * req->best_parent_rate;
++ unsigned long rate = clamp(req->rate, req->min_rate, req->max_rate);
++ unsigned long div100 = clamp(DIV_ROUND_CLOSEST(parent100, rate),
++ JH7100_CLK_FRAC_MIN, JH7100_CLK_FRAC_MAX);
++ unsigned long result = parent100 / div100;
++
++ /* clamp the result as in jh7100_clk_determine_rate() above */
++ if (result > req->max_rate && div100 < JH7100_CLK_FRAC_MAX)
++ result = parent100 / (div100 + 1);
++ if (result < req->min_rate && div100 > JH7100_CLK_FRAC_MIN)
++ result = parent100 / (div100 - 1);
++
++ req->rate = result;
++ return 0;
++}
++
++static int jh7100_clk_frac_set_rate(struct clk_hw *hw,
++ unsigned long rate,
++ unsigned long parent_rate)
++{
++ struct jh7100_clk *clk = jh7100_clk_from(hw);
++ unsigned long div100 = clamp(DIV_ROUND_CLOSEST(100 * parent_rate, rate),
++ JH7100_CLK_FRAC_MIN, JH7100_CLK_FRAC_MAX);
++ u32 value = ((div100 % 100) << JH7100_CLK_FRAC_SHIFT) | (div100 / 100);
++
++ jh7100_clk_reg_rmw(clk, JH7100_CLK_DIV_MASK, value);
++ return 0;
++}
++
+ static u8 jh7100_clk_get_parent(struct clk_hw *hw)
+ {
+ struct jh7100_clk *clk = jh7100_clk_from(hw);
+@@ -534,6 +583,13 @@ static const struct clk_ops jh7100_clk_div_ops = {
+ .debug_init = jh7100_clk_debug_init,
+ };
+
++static const struct clk_ops jh7100_clk_fdiv_ops = {
++ .recalc_rate = jh7100_clk_frac_recalc_rate,
++ .determine_rate = jh7100_clk_frac_determine_rate,
++ .set_rate = jh7100_clk_frac_set_rate,
++ .debug_init = jh7100_clk_debug_init,
++};
++
+ static const struct clk_ops jh7100_clk_gdiv_ops = {
+ .enable = jh7100_clk_enable,
+ .disable = jh7100_clk_disable,
+@@ -572,6 +628,8 @@ static const struct clk_ops *__init jh7100_clk_ops(u32 max)
+ if (max & JH7100_CLK_DIV_MASK) {
+ if (max & JH7100_CLK_ENABLE)
+ return &jh7100_clk_gdiv_ops;
++ if (max == JH7100_CLK_FRAC_MAX)
++ return &jh7100_clk_fdiv_ops;
+ return &jh7100_clk_div_ops;
+ }
+
+diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
+index 74c1d894cca86..219c80653dbdb 100644
+--- a/drivers/clk/tegra/clk-tegra124-emc.c
++++ b/drivers/clk/tegra/clk-tegra124-emc.c
+@@ -198,6 +198,7 @@ static struct tegra_emc *emc_ensure_emc_driver(struct tegra_clk_emc *tegra)
+
+ tegra->emc = platform_get_drvdata(pdev);
+ if (!tegra->emc) {
++ put_device(&pdev->dev);
+ pr_err("%s: cannot find EMC driver\n", __func__);
+ return NULL;
+ }
+diff --git a/drivers/clk/uniphier/clk-uniphier-fixed-rate.c b/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
+index 5319cd3804801..3bc55ab75314b 100644
+--- a/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
++++ b/drivers/clk/uniphier/clk-uniphier-fixed-rate.c
+@@ -24,6 +24,7 @@ struct clk_hw *uniphier_clk_register_fixed_rate(struct device *dev,
+
+ init.name = name;
+ init.ops = &clk_fixed_rate_ops;
++ init.flags = 0;
+ init.parent_names = NULL;
+ init.num_parents = 0;
+
+diff --git a/drivers/clk/visconti/clkc-tmpv770x.c b/drivers/clk/visconti/clkc-tmpv770x.c
+index c2b2f41a85a45..6c753b2cb558f 100644
+--- a/drivers/clk/visconti/clkc-tmpv770x.c
++++ b/drivers/clk/visconti/clkc-tmpv770x.c
+@@ -176,7 +176,7 @@ static const struct visconti_clk_gate_table clk_gate_tables[] = {
+ { TMPV770X_CLK_WRCK, "wrck",
+ clks_parent_data, ARRAY_SIZE(clks_parent_data),
+ 0, 0x68, 0x168, 9, 32,
+- -1, }, /* No reset */
++ NO_RESET, },
+ { TMPV770X_CLK_PICKMON, "pickmon",
+ clks_parent_data, ARRAY_SIZE(clks_parent_data),
+ 0, 0x10, 0x110, 8, 4,
+diff --git a/drivers/clk/visconti/clkc.c b/drivers/clk/visconti/clkc.c
+index 56a8a4ffebca8..d0b193b5d0b35 100644
+--- a/drivers/clk/visconti/clkc.c
++++ b/drivers/clk/visconti/clkc.c
+@@ -147,7 +147,7 @@ int visconti_clk_register_gates(struct visconti_clk_provider *ctx,
+ if (!dev_name)
+ return -ENOMEM;
+
+- if (clks[i].rs_id >= 0) {
++ if (clks[i].rs_id != NO_RESET) {
+ rson_offset = reset[clks[i].rs_id].rson_offset;
+ rsoff_offset = reset[clks[i].rs_id].rsoff_offset;
+ rs_idx = reset[clks[i].rs_id].rs_idx;
+diff --git a/drivers/clk/visconti/clkc.h b/drivers/clk/visconti/clkc.h
+index 09ed82ff64e45..8756a1ec42efc 100644
+--- a/drivers/clk/visconti/clkc.h
++++ b/drivers/clk/visconti/clkc.h
+@@ -73,4 +73,7 @@ int visconti_clk_register_gates(struct visconti_clk_provider *data,
+ int num_gate,
+ const struct visconti_reset_data *reset,
+ spinlock_t *lock);
++
++#define NO_RESET 0xFF
++
+ #endif /* _VISCONTI_CLKC_H_ */
+diff --git a/drivers/clocksource/acpi_pm.c b/drivers/clocksource/acpi_pm.c
+index eb596ff9e7bb3..279ddff81ab49 100644
+--- a/drivers/clocksource/acpi_pm.c
++++ b/drivers/clocksource/acpi_pm.c
+@@ -229,8 +229,10 @@ static int __init parse_pmtmr(char *arg)
+ int ret;
+
+ ret = kstrtouint(arg, 16, &base);
+- if (ret)
+- return ret;
++ if (ret) {
++ pr_warn("PMTMR: invalid 'pmtmr=' value: '%s'\n", arg);
++ return 1;
++ }
+
+ pr_info("PMTMR IOPort override: 0x%04x -> 0x%04x\n", pmtmr_ioport,
+ base);
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index 6db3d5511b0ff..03782b399ea1a 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -541,6 +541,11 @@ static int __init exynos4_timer_interrupts(struct device_node *np,
+ * irqs are specified.
+ */
+ nr_irqs = of_irq_count(np);
++ if (nr_irqs > ARRAY_SIZE(mct_irqs)) {
++ pr_err("exynos-mct: too many (%d) interrupts configured in DT\n",
++ nr_irqs);
++ nr_irqs = ARRAY_SIZE(mct_irqs);
++ }
+ for (i = MCT_L0_IRQ; i < nr_irqs; i++)
+ mct_irqs[i] = irq_of_parse_and_map(np, i);
+
+@@ -553,11 +558,14 @@ static int __init exynos4_timer_interrupts(struct device_node *np,
+ mct_irqs[MCT_L0_IRQ], err);
+ } else {
+ for_each_possible_cpu(cpu) {
+- int mct_irq = mct_irqs[MCT_L0_IRQ + cpu];
++ int mct_irq;
+ struct mct_clock_event_device *pcpu_mevt =
+ per_cpu_ptr(&percpu_mct_tick, cpu);
+
+ pcpu_mevt->evt.irq = -1;
++ if (MCT_L0_IRQ + cpu >= ARRAY_SIZE(mct_irqs))
++ break;
++ mct_irq = mct_irqs[MCT_L0_IRQ + cpu];
+
+ irq_set_status_flags(mct_irq, IRQ_NOAUTOEN);
+ if (request_irq(mct_irq,
+diff --git a/drivers/clocksource/timer-microchip-pit64b.c b/drivers/clocksource/timer-microchip-pit64b.c
+index cfa4ec7ef3968..790d2c9b42a70 100644
+--- a/drivers/clocksource/timer-microchip-pit64b.c
++++ b/drivers/clocksource/timer-microchip-pit64b.c
+@@ -165,7 +165,7 @@ static u64 mchp_pit64b_clksrc_read(struct clocksource *cs)
+ return mchp_pit64b_cnt_read(mchp_pit64b_cs_base);
+ }
+
+-static u64 mchp_pit64b_sched_read_clk(void)
++static u64 notrace mchp_pit64b_sched_read_clk(void)
+ {
+ return mchp_pit64b_cnt_read(mchp_pit64b_cs_base);
+ }
+diff --git a/drivers/clocksource/timer-of.c b/drivers/clocksource/timer-of.c
+index 529cc6a51cdb3..c3f54d9912be7 100644
+--- a/drivers/clocksource/timer-of.c
++++ b/drivers/clocksource/timer-of.c
+@@ -157,9 +157,9 @@ static __init int timer_of_base_init(struct device_node *np,
+ of_base->base = of_base->name ?
+ of_io_request_and_map(np, of_base->index, of_base->name) :
+ of_iomap(np, of_base->index);
+- if (IS_ERR(of_base->base)) {
+- pr_err("Failed to iomap (%s)\n", of_base->name);
+- return PTR_ERR(of_base->base);
++ if (IS_ERR_OR_NULL(of_base->base)) {
++ pr_err("Failed to iomap (%s:%s)\n", np->name, of_base->name);
++ return of_base->base ? PTR_ERR(of_base->base) : -ENOMEM;
+ }
+
+ return 0;
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 1fccb457fcc54..2737407ff0698 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -694,9 +694,9 @@ static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
+ return 0;
+ }
+
+- if (pa == 0x48034000) /* dra7 dmtimer3 */
++ if (pa == 0x4882c000) /* dra7 dmtimer15 */
+ return dmtimer_percpu_timer_init(np, 0);
+- else if (pa == 0x48036000) /* dra7 dmtimer4 */
++ else if (pa == 0x4882e000) /* dra7 dmtimer16 */
+ return dmtimer_percpu_timer_init(np, 1);
+
+ return 0;
+diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+index d1744b5d96190..6dfa86971a757 100644
+--- a/drivers/cpufreq/qcom-cpufreq-nvmem.c
++++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c
+@@ -130,7 +130,7 @@ static void get_krait_bin_format_b(struct device *cpu_dev,
+ }
+
+ /* Check PVS_BLOW_STATUS */
+- pte_efuse = *(((u32 *)buf) + 4);
++ pte_efuse = *(((u32 *)buf) + 1);
+ pte_efuse &= BIT(21);
+ if (pte_efuse) {
+ dev_dbg(cpu_dev, "PVS bin: %d\n", *pvs);
+diff --git a/drivers/cpuidle/cpuidle-qcom-spm.c b/drivers/cpuidle/cpuidle-qcom-spm.c
+index 01e77913a4144..5f27dcc6c110f 100644
+--- a/drivers/cpuidle/cpuidle-qcom-spm.c
++++ b/drivers/cpuidle/cpuidle-qcom-spm.c
+@@ -155,6 +155,22 @@ static struct platform_driver spm_cpuidle_driver = {
+ },
+ };
+
++static bool __init qcom_spm_find_any_cpu(void)
++{
++ struct device_node *cpu_node, *saw_node;
++
++ for_each_of_cpu_node(cpu_node) {
++ saw_node = of_parse_phandle(cpu_node, "qcom,saw", 0);
++ if (of_device_is_available(saw_node)) {
++ of_node_put(saw_node);
++ of_node_put(cpu_node);
++ return true;
++ }
++ of_node_put(saw_node);
++ }
++ return false;
++}
++
+ static int __init qcom_spm_cpuidle_init(void)
+ {
+ struct platform_device *pdev;
+@@ -164,6 +180,10 @@ static int __init qcom_spm_cpuidle_init(void)
+ if (ret)
+ return ret;
+
++ /* Make sure there is actually any CPU managed by the SPM */
++ if (!qcom_spm_find_any_cpu())
++ return 0;
++
+ pdev = platform_device_register_simple("qcom-spm-cpuidle",
+ -1, NULL, 0);
+ if (IS_ERR(pdev)) {
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+index 54ae8d16e4931..35e3cadccac2b 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
+@@ -11,6 +11,7 @@
+ * You could find a link for the datasheet in Documentation/arm/sunxi.rst
+ */
+
++#include <linux/bottom_half.h>
+ #include <linux/crypto.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+@@ -283,7 +284,9 @@ static int sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq)
+
+ flow = rctx->flow;
+ err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm));
++ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, breq, err);
++ local_bh_enable();
+ return 0;
+ }
+
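This local_bh_disable()/local_bh_enable() pairing recurs in the sun8i-ce-hash, sun8i-ss, amlogic-gxl and sl3516 hunks below: crypto_finalize_skcipher_request() (and its hash sibling) runs the user's completion callback, and callers have historically assumed completions arrive with bottom halves disabled, as they would from softirq context. A kernel-style sketch of the pattern (in-tree only; the wrapper name is illustrative):

    #include <linux/bottom_half.h>
    #include <crypto/engine.h>

    static void finalize_from_task_context(struct crypto_engine *engine,
                                           struct skcipher_request *req, int err)
    {
        /*
         * The completion callback may assume softirq context (for
         * example, taking locks without _bh variants), so disable
         * bottom halves around the finalization call.
         */
        local_bh_disable();
        crypto_finalize_skcipher_request(engine, req, err);
        local_bh_enable();
    }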
+diff --git a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+index 88194718a806c..859b7522faaac 100644
+--- a/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ce/sun8i-ce-hash.c
+@@ -9,6 +9,7 @@
+ *
+ * You could find the datasheet in Documentation/arm/sunxi.rst
+ */
++#include <linux/bottom_half.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
+@@ -414,6 +415,8 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
+ theend:
+ kfree(buf);
+ kfree(result);
++ local_bh_disable();
+ crypto_finalize_hash_request(engine, breq, err);
++ local_bh_enable();
+ return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 9ef1c85c4aaa5..554e400d41cad 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -11,6 +11,7 @@
+ * You could find a link for the datasheet in Documentation/arm/sunxi.rst
+ */
+
++#include <linux/bottom_half.h>
+ #include <linux/crypto.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/io.h>
+@@ -274,7 +275,9 @@ static int sun8i_ss_handle_cipher_request(struct crypto_engine *engine, void *ar
+ struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+
+ err = sun8i_ss_cipher(breq);
++ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, breq, err);
++ local_bh_enable();
+
+ return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 80e89066dbd1a..319fe3279a716 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -30,6 +30,8 @@
+ static const struct ss_variant ss_a80_variant = {
+ .alg_cipher = { SS_ALG_AES, SS_ALG_DES, SS_ALG_3DES,
+ },
++ .alg_hash = { SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP, SS_ID_NOTSUPP,
++ },
+ .op_mode = { SS_OP_ECB, SS_OP_CBC,
+ },
+ .ss_clks = {
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index 3c073eb3db038..1a71ed49d2333 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -9,6 +9,7 @@
+ *
+ * You could find the datasheet in Documentation/arm/sunxi.rst
+ */
++#include <linux/bottom_half.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
+@@ -442,6 +443,8 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ theend:
+ kfree(pad);
+ kfree(result);
++ local_bh_disable();
+ crypto_finalize_hash_request(engine, breq, err);
++ local_bh_enable();
+ return 0;
+ }
+diff --git a/drivers/crypto/amlogic/amlogic-gxl-cipher.c b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+index c6865cbd334b2..e79514fce731f 100644
+--- a/drivers/crypto/amlogic/amlogic-gxl-cipher.c
++++ b/drivers/crypto/amlogic/amlogic-gxl-cipher.c
+@@ -265,7 +265,9 @@ static int meson_handle_cipher_request(struct crypto_engine *engine,
+ struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+
+ err = meson_cipher(breq);
++ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, breq, err);
++ local_bh_enable();
+
+ return 0;
+ }
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index d718db224be42..7d4b4ad1db1f3 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -632,6 +632,20 @@ static int ccp_terminate_all(struct dma_chan *dma_chan)
+ return 0;
+ }
+
++static void ccp_dma_release(struct ccp_device *ccp)
++{
++ struct ccp_dma_chan *chan;
++ struct dma_chan *dma_chan;
++ unsigned int i;
++
++ for (i = 0; i < ccp->cmd_q_count; i++) {
++ chan = ccp->ccp_dma_chan + i;
++ dma_chan = &chan->dma_chan;
++ tasklet_kill(&chan->cleanup_tasklet);
++ list_del_rcu(&dma_chan->device_node);
++ }
++}
++
+ int ccp_dmaengine_register(struct ccp_device *ccp)
+ {
+ struct ccp_dma_chan *chan;
+@@ -736,6 +750,7 @@ int ccp_dmaengine_register(struct ccp_device *ccp)
+ return 0;
+
+ err_reg:
++ ccp_dma_release(ccp);
+ kmem_cache_destroy(ccp->dma_desc_cache);
+
+ err_cache:
+@@ -752,6 +767,7 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
+ return;
+
+ dma_async_device_unregister(dma_dev);
++ ccp_dma_release(ccp);
+
+ kmem_cache_destroy(ccp->dma_desc_cache);
+ kmem_cache_destroy(ccp->dma_cmd_cache);
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 8fd774a10edc3..6ab93dfd478a9 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -413,7 +413,7 @@ static int __sev_platform_init_locked(int *error)
+ {
+ struct psp_device *psp = psp_master;
+ struct sev_device *sev;
+- int rc, psp_ret;
++ int rc, psp_ret = -1;
+ int (*init_function)(int *error);
+
+ if (!psp || !psp->sev_data)
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index a5e041d9d2cf1..11e0278c8631d 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -258,6 +258,13 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ {
+ int ret = 0;
+
++ if (!nbytes) {
++ *mapped_nents = 0;
++ *lbytes = 0;
++ *nents = 0;
++ return 0;
++ }
++
+ *nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
+ if (*nents > max_sg_nents) {
+ *nents = 0;
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index 78833491f534d..309da6334a0a0 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -257,8 +257,8 @@ static void cc_cipher_exit(struct crypto_tfm *tfm)
+ &ctx_p->user.key_dma_addr);
+
+ /* Free key buffer in context */
+- kfree_sensitive(ctx_p->user.key);
+ dev_dbg(dev, "Free key buffer in context. key=@%p\n", ctx_p->user.key);
++ kfree_sensitive(ctx_p->user.key);
+ }
+
+ struct tdes_keys {
+diff --git a/drivers/crypto/gemini/sl3516-ce-cipher.c b/drivers/crypto/gemini/sl3516-ce-cipher.c
+index c1c2b1d866639..f2be0a7d7f7ac 100644
+--- a/drivers/crypto/gemini/sl3516-ce-cipher.c
++++ b/drivers/crypto/gemini/sl3516-ce-cipher.c
+@@ -264,7 +264,9 @@ static int sl3516_ce_handle_cipher_request(struct crypto_engine *engine, void *a
+ struct skcipher_request *breq = container_of(areq, struct skcipher_request, base);
+
+ err = sl3516_ce_cipher(breq);
++ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, breq, err);
++ local_bh_enable();
+
+ return 0;
+ }
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index c5b84a5ea3501..3b29c8993b8c7 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -4295,7 +4295,7 @@ static void qm_vf_get_qos(struct hisi_qm *qm, u32 fun_num)
+ static int qm_vf_read_qos(struct hisi_qm *qm)
+ {
+ int cnt = 0;
+- int ret;
++ int ret = -EINVAL;
+
+ /* reset mailbox qos val */
+ qm->mb_qos = 0;
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 6a45bd23b3635..090920ed50c8f 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -2284,9 +2284,10 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ struct aead_request *aead_req,
+ bool encrypt)
+ {
+- struct aead_request *subreq = aead_request_ctx(aead_req);
+ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+ struct device *dev = ctx->dev;
++ struct aead_request *subreq;
++ int ret;
+
+ /* Kunpeng920 aead mode not support input 0 size */
+ if (!a_ctx->fallback_aead_tfm) {
+@@ -2294,6 +2295,10 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ return -EINVAL;
+ }
+
++ subreq = aead_request_alloc(a_ctx->fallback_aead_tfm, GFP_KERNEL);
++ if (!subreq)
++ return -ENOMEM;
++
+ aead_request_set_tfm(subreq, a_ctx->fallback_aead_tfm);
+ aead_request_set_callback(subreq, aead_req->base.flags,
+ aead_req->base.complete, aead_req->base.data);
+@@ -2301,8 +2306,13 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ aead_req->cryptlen, aead_req->iv);
+ aead_request_set_ad(subreq, aead_req->assoclen);
+
+- return encrypt ? crypto_aead_encrypt(subreq) :
+- crypto_aead_decrypt(subreq);
++ if (encrypt)
++ ret = crypto_aead_encrypt(subreq);
++ else
++ ret = crypto_aead_decrypt(subreq);
++ aead_request_free(subreq);
++
++ return ret;
+ }
+
+ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index 26d3ab1d308ba..89d4cc767d361 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -443,9 +443,11 @@ static int sec_engine_init(struct hisi_qm *qm)
+
+ writel(SEC_SAA_ENABLE, qm->io_base + SEC_SAA_EN_REG);
+
+- /* Enable sm4 extra mode, as ctr/ecb */
+- writel_relaxed(SEC_BD_ERR_CHK_EN0,
+- qm->io_base + SEC_BD_ERR_CHK_EN_REG0);
++ /* HW V2 enable sm4 extra mode, as ctr/ecb */
++ if (qm->ver < QM_HW_V3)
++ writel_relaxed(SEC_BD_ERR_CHK_EN0,
++ qm->io_base + SEC_BD_ERR_CHK_EN_REG0);
++
+ /* Enable sm4 xts mode multiple iv */
+ writel_relaxed(SEC_BD_ERR_CHK_EN1,
+ qm->io_base + SEC_BD_ERR_CHK_EN_REG1);
+diff --git a/drivers/crypto/marvell/Kconfig b/drivers/crypto/marvell/Kconfig
+index 9125199f1702b..a48591af12d02 100644
+--- a/drivers/crypto/marvell/Kconfig
++++ b/drivers/crypto/marvell/Kconfig
+@@ -47,6 +47,7 @@ config CRYPTO_DEV_OCTEONTX2_CPT
+ select CRYPTO_SKCIPHER
+ select CRYPTO_HASH
+ select CRYPTO_AEAD
++ select NET_DEVLINK
+ help
+ This driver allows you to utilize the Marvell Cryptographic
+ Accelerator Unit(CPT) found in OcteonTX2 series of processors.
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+index 1b4d425bbf0e4..7fd4503d9cfc8 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptpf_ucode.c
+@@ -1076,6 +1076,39 @@ static void delete_engine_grps(struct pci_dev *pdev,
+ delete_engine_group(&pdev->dev, &eng_grps->grp[i]);
+ }
+
++#define PCI_DEVID_CN10K_RNM 0xA098
++#define RNM_ENTROPY_STATUS 0x8
++
++static void rnm_to_cpt_errata_fixup(struct device *dev)
++{
++ struct pci_dev *pdev;
++ void __iomem *base;
++ int timeout = 5000;
++
++ pdev = pci_get_device(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN10K_RNM, NULL);
++ if (!pdev)
++ return;
++
++ base = pci_ioremap_bar(pdev, 0);
++ if (!base)
++ goto put_pdev;
++
++ while ((readq(base + RNM_ENTROPY_STATUS) & 0x7F) != 0x40) {
++ cpu_relax();
++ udelay(1);
++ timeout--;
++ if (!timeout) {
++ dev_warn(dev, "RNM is not producing entropy\n");
++ break;
++ }
++ }
++
++ iounmap(base);
++
++put_pdev:
++ pci_dev_put(pdev);
++}
++
+ int otx2_cpt_get_eng_grp(struct otx2_cpt_eng_grps *eng_grps, int eng_type)
+ {
+
+@@ -1189,9 +1222,17 @@ int otx2_cpt_create_eng_grps(struct otx2_cptpf_dev *cptpf,
+
+ if (is_dev_otx2(pdev))
+ goto unlock;
++
++ /*
++ * Ensure RNM_ENTROPY_STATUS[NORMAL_CNT] = 0x40 before writing
++ * CPT_AF_CTL[RNM_REQ_EN] = 1 as a workaround for HW errata.
++ */
++ rnm_to_cpt_errata_fixup(&pdev->dev);
++
+ /*
+ * Configure engine group mask to allow context prefetching
+- * for the groups.
++ * for the groups and enable random number request, to enable
++ * CPT to request random numbers from RNM.
+ */
+ otx2_cpt_write_af_reg(&cptpf->afpf_mbox, pdev, CPT_AF_CTL,
+ OTX2_CPT_ALL_ENG_GRPS_MASK << 3 | BIT_ULL(16),
+diff --git a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+index 2748a3327e391..f8f8542ce3e47 100644
+--- a/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
++++ b/drivers/crypto/marvell/octeontx2/otx2_cptvf_algs.c
+@@ -1634,16 +1634,13 @@ static inline int cpt_register_algs(void)
+ {
+ int i, err = 0;
+
+- if (!IS_ENABLED(CONFIG_DM_CRYPT)) {
+- for (i = 0; i < ARRAY_SIZE(otx2_cpt_skciphers); i++)
+- otx2_cpt_skciphers[i].base.cra_flags &=
+- ~CRYPTO_ALG_DEAD;
+-
+- err = crypto_register_skciphers(otx2_cpt_skciphers,
+- ARRAY_SIZE(otx2_cpt_skciphers));
+- if (err)
+- return err;
+- }
++ for (i = 0; i < ARRAY_SIZE(otx2_cpt_skciphers); i++)
++ otx2_cpt_skciphers[i].base.cra_flags &= ~CRYPTO_ALG_DEAD;
++
++ err = crypto_register_skciphers(otx2_cpt_skciphers,
++ ARRAY_SIZE(otx2_cpt_skciphers));
++ if (err)
++ return err;
+
+ for (i = 0; i < ARRAY_SIZE(otx2_cpt_aeads); i++)
+ otx2_cpt_aeads[i].base.cra_flags &= ~CRYPTO_ALG_DEAD;
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index d19e5ffb5104b..d6f9e2fe863d7 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -331,7 +331,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq)
+ memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128);
+ }
+
+- for_each_sg(req->src, src, sg_nents(src), i) {
++ for_each_sg(req->src, src, sg_nents(req->src), i) {
+ src_buf = sg_virt(src);
+ len = sg_dma_len(src);
+ tlen += len;
+diff --git a/drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
+index 6d10edc40aca0..68d39c833332e 100644
+--- a/drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
++++ b/drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
+@@ -52,7 +52,7 @@ static const char *const dev_cfg_services[] = {
+ static int get_service_enabled(struct adf_accel_dev *accel_dev)
+ {
+ char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
+- u32 ret;
++ int ret;
+
+ ret = adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
+ ADF_SERVICES_ENABLED, services);
+diff --git a/drivers/crypto/qat/qat_common/adf_gen4_pfvf.c b/drivers/crypto/qat/qat_common/adf_gen4_pfvf.c
+index 8efbedf63bc80..3b3ea849c5e53 100644
+--- a/drivers/crypto/qat/qat_common/adf_gen4_pfvf.c
++++ b/drivers/crypto/qat/qat_common/adf_gen4_pfvf.c
+@@ -9,15 +9,12 @@
+ #include "adf_pfvf_pf_proto.h"
+ #include "adf_pfvf_utils.h"
+
+-#define ADF_4XXX_MAX_NUM_VFS 16
+-
+ #define ADF_4XXX_PF2VM_OFFSET(i) (0x40B010 + ((i) * 0x20))
+ #define ADF_4XXX_VM2PF_OFFSET(i) (0x40B014 + ((i) * 0x20))
+
+ /* VF2PF interrupt source registers */
+-#define ADF_4XXX_VM2PF_SOU(i) (0x41A180 + ((i) * 4))
+-#define ADF_4XXX_VM2PF_MSK(i) (0x41A1C0 + ((i) * 4))
+-#define ADF_4XXX_VM2PF_INT_EN_MSK BIT(0)
++#define ADF_4XXX_VM2PF_SOU 0x41A180
++#define ADF_4XXX_VM2PF_MSK 0x41A1C0
+
+ #define ADF_PFVF_GEN4_MSGTYPE_SHIFT 2
+ #define ADF_PFVF_GEN4_MSGTYPE_MASK 0x3F
+@@ -41,51 +38,30 @@ static u32 adf_gen4_pf_get_vf2pf_offset(u32 i)
+
+ static u32 adf_gen4_get_vf2pf_sources(void __iomem *pmisc_addr)
+ {
+- int i;
+ u32 sou, mask;
+- int num_csrs = ADF_4XXX_MAX_NUM_VFS;
+- u32 vf_mask = 0;
+
+- for (i = 0; i < num_csrs; i++) {
+- sou = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_SOU(i));
+- mask = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK(i));
+- sou &= ~mask;
+- vf_mask |= sou << i;
+- }
++ sou = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_SOU);
++ mask = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK);
+
+- return vf_mask;
++ return sou &= ~mask;
+ }
+
+ static void adf_gen4_enable_vf2pf_interrupts(void __iomem *pmisc_addr,
+ u32 vf_mask)
+ {
+- int num_csrs = ADF_4XXX_MAX_NUM_VFS;
+- unsigned long mask = vf_mask;
+ unsigned int val;
+- int i;
+-
+- for_each_set_bit(i, &mask, num_csrs) {
+- unsigned int offset = ADF_4XXX_VM2PF_MSK(i);
+
+- val = ADF_CSR_RD(pmisc_addr, offset) & ~ADF_4XXX_VM2PF_INT_EN_MSK;
+- ADF_CSR_WR(pmisc_addr, offset, val);
+- }
++ val = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK) & ~vf_mask;
++ ADF_CSR_WR(pmisc_addr, ADF_4XXX_VM2PF_MSK, val);
+ }
+
+ static void adf_gen4_disable_vf2pf_interrupts(void __iomem *pmisc_addr,
+ u32 vf_mask)
+ {
+- int num_csrs = ADF_4XXX_MAX_NUM_VFS;
+- unsigned long mask = vf_mask;
+ unsigned int val;
+- int i;
+-
+- for_each_set_bit(i, &mask, num_csrs) {
+- unsigned int offset = ADF_4XXX_VM2PF_MSK(i);
+
+- val = ADF_CSR_RD(pmisc_addr, offset) | ADF_4XXX_VM2PF_INT_EN_MSK;
+- ADF_CSR_WR(pmisc_addr, offset, val);
+- }
++ val = ADF_CSR_RD(pmisc_addr, ADF_4XXX_VM2PF_MSK) | vf_mask;
++ ADF_CSR_WR(pmisc_addr, ADF_4XXX_VM2PF_MSK, val);
+ }
+
+ static int adf_gen4_pfvf_send(struct adf_accel_dev *accel_dev,
+diff --git a/drivers/crypto/qat/qat_common/adf_pfvf_vf_msg.c b/drivers/crypto/qat/qat_common/adf_pfvf_vf_msg.c
+index 14b222691c9c2..1141258db4b65 100644
+--- a/drivers/crypto/qat/qat_common/adf_pfvf_vf_msg.c
++++ b/drivers/crypto/qat/qat_common/adf_pfvf_vf_msg.c
+@@ -96,7 +96,7 @@ int adf_vf2pf_request_version(struct adf_accel_dev *accel_dev)
+ int adf_vf2pf_get_capabilities(struct adf_accel_dev *accel_dev)
+ {
+ struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+- struct capabilities_v3 cap_msg = { { 0 }, };
++ struct capabilities_v3 cap_msg = { 0 };
+ unsigned int len = sizeof(cap_msg);
+
+ if (accel_dev->vf.pf_compat_ver < ADF_PFVF_COMPAT_CAPABILITIES)
+@@ -141,7 +141,7 @@ int adf_vf2pf_get_capabilities(struct adf_accel_dev *accel_dev)
+
+ int adf_vf2pf_get_ring_to_svc(struct adf_accel_dev *accel_dev)
+ {
+- struct ring_to_svc_map_v1 rts_map_msg = { { 0 }, };
++ struct ring_to_svc_map_v1 rts_map_msg = { 0 };
+ unsigned int len = sizeof(rts_map_msg);
+
+ if (accel_dev->vf.pf_compat_ver < ADF_PFVF_COMPAT_RING_TO_SVC_MAP)
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+index 1cece1a7d3f00..5bbf0d2722e11 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_skcipher.c
+@@ -506,7 +506,6 @@ struct rk_crypto_tmp rk_ecb_des3_ede_alg = {
+ .exit = rk_ablk_exit_tfm,
+ .min_keysize = DES3_EDE_KEY_SIZE,
+ .max_keysize = DES3_EDE_KEY_SIZE,
+- .ivsize = DES_BLOCK_SIZE,
+ .setkey = rk_tdes_setkey,
+ .encrypt = rk_des3_ede_ecb_encrypt,
+ .decrypt = rk_des3_ede_ecb_decrypt,
+diff --git a/drivers/crypto/vmx/Kconfig b/drivers/crypto/vmx/Kconfig
+index c85fab7ef0bdd..b2c28b87f14b3 100644
+--- a/drivers/crypto/vmx/Kconfig
++++ b/drivers/crypto/vmx/Kconfig
+@@ -2,7 +2,11 @@
+ config CRYPTO_DEV_VMX_ENCRYPT
+ tristate "Encryption acceleration support on P8 CPU"
+ depends on CRYPTO_DEV_VMX
++ select CRYPTO_AES
++ select CRYPTO_CBC
++ select CRYPTO_CTR
+ select CRYPTO_GHASH
++ select CRYPTO_XTS
+ default m
+ help
+ Support for VMX cryptographic acceleration instructions on Power8 CPU.
+diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
+index 40ab50318dafa..a90202ac88d2f 100644
+--- a/drivers/cxl/core/Makefile
++++ b/drivers/cxl/core/Makefile
+@@ -2,7 +2,7 @@
+ obj-$(CONFIG_CXL_BUS) += cxl_core.o
+
+ ccflags-y += -I$(srctree)/drivers/cxl
+-cxl_core-y := bus.o
++cxl_core-y := port.o
+ cxl_core-y += pmem.o
+ cxl_core-y += regs.o
+ cxl_core-y += memdev.o
+diff --git a/drivers/cxl/core/bus.c b/drivers/cxl/core/bus.c
+deleted file mode 100644
+index 3f9b98ecd18b7..0000000000000
+--- a/drivers/cxl/core/bus.c
++++ /dev/null
+@@ -1,675 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
+-#include <linux/io-64-nonatomic-lo-hi.h>
+-#include <linux/device.h>
+-#include <linux/module.h>
+-#include <linux/pci.h>
+-#include <linux/slab.h>
+-#include <linux/idr.h>
+-#include <cxlmem.h>
+-#include <cxl.h>
+-#include "core.h"
+-
+-/**
+- * DOC: cxl core
+- *
+- * The CXL core provides a set of interfaces that can be consumed by CXL aware
+- * drivers. The interfaces allow for creation, modification, and destruction of
+- * regions, memory devices, ports, and decoders. CXL aware drivers must register
+- * with the CXL core via these interfaces in order to be able to participate in
+- * cross-device interleave coordination. The CXL core also establishes and
+- * maintains the bridge to the nvdimm subsystem.
+- *
+- * CXL core introduces sysfs hierarchy to control the devices that are
+- * instantiated by the core.
+- */
+-
+-static DEFINE_IDA(cxl_port_ida);
+-
+-static ssize_t devtype_show(struct device *dev, struct device_attribute *attr,
+- char *buf)
+-{
+- return sysfs_emit(buf, "%s\n", dev->type->name);
+-}
+-static DEVICE_ATTR_RO(devtype);
+-
+-static struct attribute *cxl_base_attributes[] = {
+- &dev_attr_devtype.attr,
+- NULL,
+-};
+-
+-struct attribute_group cxl_base_attribute_group = {
+- .attrs = cxl_base_attributes,
+-};
+-
+-static ssize_t start_show(struct device *dev, struct device_attribute *attr,
+- char *buf)
+-{
+- struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-
+- return sysfs_emit(buf, "%#llx\n", cxld->range.start);
+-}
+-static DEVICE_ATTR_RO(start);
+-
+-static ssize_t size_show(struct device *dev, struct device_attribute *attr,
+- char *buf)
+-{
+- struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-
+- return sysfs_emit(buf, "%#llx\n", range_len(&cxld->range));
+-}
+-static DEVICE_ATTR_RO(size);
+-
+-#define CXL_DECODER_FLAG_ATTR(name, flag) \
+-static ssize_t name##_show(struct device *dev, \
+- struct device_attribute *attr, char *buf) \
+-{ \
+- struct cxl_decoder *cxld = to_cxl_decoder(dev); \
+- \
+- return sysfs_emit(buf, "%s\n", \
+- (cxld->flags & (flag)) ? "1" : "0"); \
+-} \
+-static DEVICE_ATTR_RO(name)
+-
+-CXL_DECODER_FLAG_ATTR(cap_pmem, CXL_DECODER_F_PMEM);
+-CXL_DECODER_FLAG_ATTR(cap_ram, CXL_DECODER_F_RAM);
+-CXL_DECODER_FLAG_ATTR(cap_type2, CXL_DECODER_F_TYPE2);
+-CXL_DECODER_FLAG_ATTR(cap_type3, CXL_DECODER_F_TYPE3);
+-CXL_DECODER_FLAG_ATTR(locked, CXL_DECODER_F_LOCK);
+-
+-static ssize_t target_type_show(struct device *dev,
+- struct device_attribute *attr, char *buf)
+-{
+- struct cxl_decoder *cxld = to_cxl_decoder(dev);
+-
+- switch (cxld->target_type) {
+- case CXL_DECODER_ACCELERATOR:
+- return sysfs_emit(buf, "accelerator\n");
+- case CXL_DECODER_EXPANDER:
+- return sysfs_emit(buf, "expander\n");
+- }
+- return -ENXIO;
+-}
+-static DEVICE_ATTR_RO(target_type);
+-
+-static ssize_t target_list_show(struct device *dev,
+- struct device_attribute *attr, char *buf)
+-{
+- struct cxl_decoder *cxld = to_cxl_decoder(dev);
+- ssize_t offset = 0;
+- int i, rc = 0;
+-
+- device_lock(dev);
+- for (i = 0; i < cxld->interleave_ways; i++) {
+- struct cxl_dport *dport = cxld->target[i];
+- struct cxl_dport *next = NULL;
+-
+- if (!dport)
+- break;
+-
+- if (i + 1 < cxld->interleave_ways)
+- next = cxld->target[i + 1];
+- rc = sysfs_emit_at(buf, offset, "%d%s", dport->port_id,
+- next ? "," : "");
+- if (rc < 0)
+- break;
+- offset += rc;
+- }
+- device_unlock(dev);
+-
+- if (rc < 0)
+- return rc;
+-
+- rc = sysfs_emit_at(buf, offset, "\n");
+- if (rc < 0)
+- return rc;
+-
+- return offset + rc;
+-}
+-static DEVICE_ATTR_RO(target_list);
+-
+-static struct attribute *cxl_decoder_base_attrs[] = {
+- &dev_attr_start.attr,
+- &dev_attr_size.attr,
+- &dev_attr_locked.attr,
+- &dev_attr_target_list.attr,
+- NULL,
+-};
+-
+-static struct attribute_group cxl_decoder_base_attribute_group = {
+- .attrs = cxl_decoder_base_attrs,
+-};
+-
+-static struct attribute *cxl_decoder_root_attrs[] = {
+- &dev_attr_cap_pmem.attr,
+- &dev_attr_cap_ram.attr,
+- &dev_attr_cap_type2.attr,
+- &dev_attr_cap_type3.attr,
+- NULL,
+-};
+-
+-static struct attribute_group cxl_decoder_root_attribute_group = {
+- .attrs = cxl_decoder_root_attrs,
+-};
+-
+-static const struct attribute_group *cxl_decoder_root_attribute_groups[] = {
+- &cxl_decoder_root_attribute_group,
+- &cxl_decoder_base_attribute_group,
+- &cxl_base_attribute_group,
+- NULL,
+-};
+-
+-static struct attribute *cxl_decoder_switch_attrs[] = {
+- &dev_attr_target_type.attr,
+- NULL,
+-};
+-
+-static struct attribute_group cxl_decoder_switch_attribute_group = {
+- .attrs = cxl_decoder_switch_attrs,
+-};
+-
+-static const struct attribute_group *cxl_decoder_switch_attribute_groups[] = {
+- &cxl_decoder_switch_attribute_group,
+- &cxl_decoder_base_attribute_group,
+- &cxl_base_attribute_group,
+- NULL,
+-};
+-
+-static void cxl_decoder_release(struct device *dev)
+-{
+- struct cxl_decoder *cxld = to_cxl_decoder(dev);
+- struct cxl_port *port = to_cxl_port(dev->parent);
+-
+- ida_free(&port->decoder_ida, cxld->id);
+- kfree(cxld);
+-}
+-
+-static const struct device_type cxl_decoder_switch_type = {
+- .name = "cxl_decoder_switch",
+- .release = cxl_decoder_release,
+- .groups = cxl_decoder_switch_attribute_groups,
+-};
+-
+-static const struct device_type cxl_decoder_root_type = {
+- .name = "cxl_decoder_root",
+- .release = cxl_decoder_release,
+- .groups = cxl_decoder_root_attribute_groups,
+-};
+-
+-bool is_root_decoder(struct device *dev)
+-{
+- return dev->type == &cxl_decoder_root_type;
+-}
+-EXPORT_SYMBOL_NS_GPL(is_root_decoder, CXL);
+-
+-struct cxl_decoder *to_cxl_decoder(struct device *dev)
+-{
+- if (dev_WARN_ONCE(dev, dev->type->release != cxl_decoder_release,
+- "not a cxl_decoder device\n"))
+- return NULL;
+- return container_of(dev, struct cxl_decoder, dev);
+-}
+-EXPORT_SYMBOL_NS_GPL(to_cxl_decoder, CXL);
+-
+-static void cxl_dport_release(struct cxl_dport *dport)
+-{
+- list_del(&dport->list);
+- put_device(dport->dport);
+- kfree(dport);
+-}
+-
+-static void cxl_port_release(struct device *dev)
+-{
+- struct cxl_port *port = to_cxl_port(dev);
+- struct cxl_dport *dport, *_d;
+-
+- device_lock(dev);
+- list_for_each_entry_safe(dport, _d, &port->dports, list)
+- cxl_dport_release(dport);
+- device_unlock(dev);
+- ida_free(&cxl_port_ida, port->id);
+- kfree(port);
+-}
+-
+-static const struct attribute_group *cxl_port_attribute_groups[] = {
+- &cxl_base_attribute_group,
+- NULL,
+-};
+-
+-static const struct device_type cxl_port_type = {
+- .name = "cxl_port",
+- .release = cxl_port_release,
+- .groups = cxl_port_attribute_groups,
+-};
+-
+-struct cxl_port *to_cxl_port(struct device *dev)
+-{
+- if (dev_WARN_ONCE(dev, dev->type != &cxl_port_type,
+- "not a cxl_port device\n"))
+- return NULL;
+- return container_of(dev, struct cxl_port, dev);
+-}
+-
+-static void unregister_port(void *_port)
+-{
+- struct cxl_port *port = _port;
+- struct cxl_dport *dport;
+-
+- device_lock(&port->dev);
+- list_for_each_entry(dport, &port->dports, list) {
+- char link_name[CXL_TARGET_STRLEN];
+-
+- if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d",
+- dport->port_id) >= CXL_TARGET_STRLEN)
+- continue;
+- sysfs_remove_link(&port->dev.kobj, link_name);
+- }
+- device_unlock(&port->dev);
+- device_unregister(&port->dev);
+-}
+-
+-static void cxl_unlink_uport(void *_port)
+-{
+- struct cxl_port *port = _port;
+-
+- sysfs_remove_link(&port->dev.kobj, "uport");
+-}
+-
+-static int devm_cxl_link_uport(struct device *host, struct cxl_port *port)
+-{
+- int rc;
+-
+- rc = sysfs_create_link(&port->dev.kobj, &port->uport->kobj, "uport");
+- if (rc)
+- return rc;
+- return devm_add_action_or_reset(host, cxl_unlink_uport, port);
+-}
+-
+-static struct cxl_port *cxl_port_alloc(struct device *uport,
+- resource_size_t component_reg_phys,
+- struct cxl_port *parent_port)
+-{
+- struct cxl_port *port;
+- struct device *dev;
+- int rc;
+-
+- port = kzalloc(sizeof(*port), GFP_KERNEL);
+- if (!port)
+- return ERR_PTR(-ENOMEM);
+-
+- rc = ida_alloc(&cxl_port_ida, GFP_KERNEL);
+- if (rc < 0)
+- goto err;
+- port->id = rc;
+-
+- /*
+- * The top-level cxl_port "cxl_root" does not have a cxl_port as
+- * its parent and it does not have any corresponding component
+- * registers as its decode is described by a fixed platform
+- * description.
+- */
+- dev = &port->dev;
+- if (parent_port)
+- dev->parent = &parent_port->dev;
+- else
+- dev->parent = uport;
+-
+- port->uport = uport;
+- port->component_reg_phys = component_reg_phys;
+- ida_init(&port->decoder_ida);
+- INIT_LIST_HEAD(&port->dports);
+-
+- device_initialize(dev);
+- device_set_pm_not_required(dev);
+- dev->bus = &cxl_bus_type;
+- dev->type = &cxl_port_type;
+-
+- return port;
+-
+-err:
+- kfree(port);
+- return ERR_PTR(rc);
+-}
+-
+-/**
+- * devm_cxl_add_port - register a cxl_port in CXL memory decode hierarchy
+- * @host: host device for devm operations
+- * @uport: "physical" device implementing this upstream port
+- * @component_reg_phys: (optional) for configurable cxl_port instances
+- * @parent_port: next hop up in the CXL memory decode hierarchy
+- */
+-struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport,
+- resource_size_t component_reg_phys,
+- struct cxl_port *parent_port)
+-{
+- struct cxl_port *port;
+- struct device *dev;
+- int rc;
+-
+- port = cxl_port_alloc(uport, component_reg_phys, parent_port);
+- if (IS_ERR(port))
+- return port;
+-
+- dev = &port->dev;
+- if (parent_port)
+- rc = dev_set_name(dev, "port%d", port->id);
+- else
+- rc = dev_set_name(dev, "root%d", port->id);
+- if (rc)
+- goto err;
+-
+- rc = device_add(dev);
+- if (rc)
+- goto err;
+-
+- rc = devm_add_action_or_reset(host, unregister_port, port);
+- if (rc)
+- return ERR_PTR(rc);
+-
+- rc = devm_cxl_link_uport(host, port);
+- if (rc)
+- return ERR_PTR(rc);
+-
+- return port;
+-
+-err:
+- put_device(dev);
+- return ERR_PTR(rc);
+-}
+-EXPORT_SYMBOL_NS_GPL(devm_cxl_add_port, CXL);
+-
+-static struct cxl_dport *find_dport(struct cxl_port *port, int id)
+-{
+- struct cxl_dport *dport;
+-
+- device_lock_assert(&port->dev);
+- list_for_each_entry (dport, &port->dports, list)
+- if (dport->port_id == id)
+- return dport;
+- return NULL;
+-}
+-
+-static int add_dport(struct cxl_port *port, struct cxl_dport *new)
+-{
+- struct cxl_dport *dup;
+-
+- device_lock(&port->dev);
+- dup = find_dport(port, new->port_id);
+- if (dup)
+- dev_err(&port->dev,
+- "unable to add dport%d-%s non-unique port id (%s)\n",
+- new->port_id, dev_name(new->dport),
+- dev_name(dup->dport));
+- else
+- list_add_tail(&new->list, &port->dports);
+- device_unlock(&port->dev);
+-
+- return dup ? -EEXIST : 0;
+-}
+-
+-/**
+- * cxl_add_dport - append downstream port data to a cxl_port
+- * @port: the cxl_port that references this dport
+- * @dport_dev: firmware or PCI device representing the dport
+- * @port_id: identifier for this dport in a decoder's target list
+- * @component_reg_phys: optional location of CXL component registers
+- *
+- * Note that all allocations and links are undone by cxl_port deletion
+- * and release.
+- */
+-int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id,
+- resource_size_t component_reg_phys)
+-{
+- char link_name[CXL_TARGET_STRLEN];
+- struct cxl_dport *dport;
+- int rc;
+-
+- if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d", port_id) >=
+- CXL_TARGET_STRLEN)
+- return -EINVAL;
+-
+- dport = kzalloc(sizeof(*dport), GFP_KERNEL);
+- if (!dport)
+- return -ENOMEM;
+-
+- INIT_LIST_HEAD(&dport->list);
+- dport->dport = get_device(dport_dev);
+- dport->port_id = port_id;
+- dport->component_reg_phys = component_reg_phys;
+- dport->port = port;
+-
+- rc = add_dport(port, dport);
+- if (rc)
+- goto err;
+-
+- rc = sysfs_create_link(&port->dev.kobj, &dport_dev->kobj, link_name);
+- if (rc)
+- goto err;
+-
+- return 0;
+-err:
+- cxl_dport_release(dport);
+- return rc;
+-}
+-EXPORT_SYMBOL_NS_GPL(cxl_add_dport, CXL);
+-
+-static int decoder_populate_targets(struct cxl_decoder *cxld,
+- struct cxl_port *port, int *target_map)
+-{
+- int rc = 0, i;
+-
+- if (!target_map)
+- return 0;
+-
+- device_lock(&port->dev);
+- if (list_empty(&port->dports)) {
+- rc = -EINVAL;
+- goto out_unlock;
+- }
+-
+- for (i = 0; i < cxld->nr_targets; i++) {
+- struct cxl_dport *dport = find_dport(port, target_map[i]);
+-
+- if (!dport) {
+- rc = -ENXIO;
+- goto out_unlock;
+- }
+- cxld->target[i] = dport;
+- }
+-
+-out_unlock:
+- device_unlock(&port->dev);
+-
+- return rc;
+-}
+-
+-struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
+-{
+- struct cxl_decoder *cxld;
+- struct device *dev;
+- int rc = 0;
+-
+- if (nr_targets > CXL_DECODER_MAX_INTERLEAVE || nr_targets < 1)
+- return ERR_PTR(-EINVAL);
+-
+- cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL);
+- if (!cxld)
+- return ERR_PTR(-ENOMEM);
+-
+- rc = ida_alloc(&port->decoder_ida, GFP_KERNEL);
+- if (rc < 0)
+- goto err;
+-
+- cxld->id = rc;
+- cxld->nr_targets = nr_targets;
+- dev = &cxld->dev;
+- device_initialize(dev);
+- device_set_pm_not_required(dev);
+- dev->parent = &port->dev;
+- dev->bus = &cxl_bus_type;
+-
+- /* root ports do not have a cxl_port_type parent */
+- if (port->dev.parent->type == &cxl_port_type)
+- dev->type = &cxl_decoder_switch_type;
+- else
+- dev->type = &cxl_decoder_root_type;
+-
+- return cxld;
+-err:
+- kfree(cxld);
+- return ERR_PTR(rc);
+-}
+-EXPORT_SYMBOL_NS_GPL(cxl_decoder_alloc, CXL);
+-
+-int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
+-{
+- struct cxl_port *port;
+- struct device *dev;
+- int rc;
+-
+- if (WARN_ON_ONCE(!cxld))
+- return -EINVAL;
+-
+- if (WARN_ON_ONCE(IS_ERR(cxld)))
+- return PTR_ERR(cxld);
+-
+- if (cxld->interleave_ways < 1)
+- return -EINVAL;
+-
+- port = to_cxl_port(cxld->dev.parent);
+- rc = decoder_populate_targets(cxld, port, target_map);
+- if (rc)
+- return rc;
+-
+- dev = &cxld->dev;
+- rc = dev_set_name(dev, "decoder%d.%d", port->id, cxld->id);
+- if (rc)
+- return rc;
+-
+- return device_add(dev);
+-}
+-EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL);
+-
+-static void cxld_unregister(void *dev)
+-{
+- device_unregister(dev);
+-}
+-
+-int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld)
+-{
+- return devm_add_action_or_reset(host, cxld_unregister, &cxld->dev);
+-}
+-EXPORT_SYMBOL_NS_GPL(cxl_decoder_autoremove, CXL);
+-
+-/**
+- * __cxl_driver_register - register a driver for the cxl bus
+- * @cxl_drv: cxl driver structure to attach
+- * @owner: owning module/driver
+- * @modname: KBUILD_MODNAME for parent driver
+- */
+-int __cxl_driver_register(struct cxl_driver *cxl_drv, struct module *owner,
+- const char *modname)
+-{
+- if (!cxl_drv->probe) {
+- pr_debug("%s ->probe() must be specified\n", modname);
+- return -EINVAL;
+- }
+-
+- if (!cxl_drv->name) {
+- pr_debug("%s ->name must be specified\n", modname);
+- return -EINVAL;
+- }
+-
+- if (!cxl_drv->id) {
+- pr_debug("%s ->id must be specified\n", modname);
+- return -EINVAL;
+- }
+-
+- cxl_drv->drv.bus = &cxl_bus_type;
+- cxl_drv->drv.owner = owner;
+- cxl_drv->drv.mod_name = modname;
+- cxl_drv->drv.name = cxl_drv->name;
+-
+- return driver_register(&cxl_drv->drv);
+-}
+-EXPORT_SYMBOL_NS_GPL(__cxl_driver_register, CXL);
+-
+-void cxl_driver_unregister(struct cxl_driver *cxl_drv)
+-{
+- driver_unregister(&cxl_drv->drv);
+-}
+-EXPORT_SYMBOL_NS_GPL(cxl_driver_unregister, CXL);
+-
+-static int cxl_device_id(struct device *dev)
+-{
+- if (dev->type == &cxl_nvdimm_bridge_type)
+- return CXL_DEVICE_NVDIMM_BRIDGE;
+- if (dev->type == &cxl_nvdimm_type)
+- return CXL_DEVICE_NVDIMM;
+- return 0;
+-}
+-
+-static int cxl_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
+-{
+- return add_uevent_var(env, "MODALIAS=" CXL_MODALIAS_FMT,
+- cxl_device_id(dev));
+-}
+-
+-static int cxl_bus_match(struct device *dev, struct device_driver *drv)
+-{
+- return cxl_device_id(dev) == to_cxl_drv(drv)->id;
+-}
+-
+-static int cxl_bus_probe(struct device *dev)
+-{
+- return to_cxl_drv(dev->driver)->probe(dev);
+-}
+-
+-static void cxl_bus_remove(struct device *dev)
+-{
+- struct cxl_driver *cxl_drv = to_cxl_drv(dev->driver);
+-
+- if (cxl_drv->remove)
+- cxl_drv->remove(dev);
+-}
+-
+-struct bus_type cxl_bus_type = {
+- .name = "cxl",
+- .uevent = cxl_bus_uevent,
+- .match = cxl_bus_match,
+- .probe = cxl_bus_probe,
+- .remove = cxl_bus_remove,
+-};
+-EXPORT_SYMBOL_NS_GPL(cxl_bus_type, CXL);
+-
+-static __init int cxl_core_init(void)
+-{
+- int rc;
+-
+- cxl_mbox_init();
+-
+- rc = cxl_memdev_init();
+- if (rc)
+- return rc;
+-
+- rc = bus_register(&cxl_bus_type);
+- if (rc)
+- goto err;
+- return 0;
+-
+-err:
+- cxl_memdev_exit();
+- cxl_mbox_exit();
+- return rc;
+-}
+-
+-static void cxl_core_exit(void)
+-{
+- bus_unregister(&cxl_bus_type);
+- cxl_memdev_exit();
+- cxl_mbox_exit();
+-}
+-
+-module_init(cxl_core_init);
+-module_exit(cxl_core_exit);
+-MODULE_LICENSE("GPL v2");
+diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
+new file mode 100644
+index 0000000000000..aa5239ac67c67
+--- /dev/null
++++ b/drivers/cxl/core/port.c
+@@ -0,0 +1,679 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/* Copyright(c) 2020 Intel Corporation. All rights reserved. */
++#include <linux/io-64-nonatomic-lo-hi.h>
++#include <linux/device.h>
++#include <linux/module.h>
++#include <linux/pci.h>
++#include <linux/slab.h>
++#include <linux/idr.h>
++#include <cxlmem.h>
++#include <cxl.h>
++#include "core.h"
++
++/**
++ * DOC: cxl core
++ *
++ * The CXL core provides a set of interfaces that can be consumed by CXL aware
++ * drivers. The interfaces allow for creation, modification, and destruction of
++ * regions, memory devices, ports, and decoders. CXL aware drivers must register
++ * with the CXL core via these interfaces in order to be able to participate in
++ * cross-device interleave coordination. The CXL core also establishes and
++ * maintains the bridge to the nvdimm subsystem.
++ *
++ * CXL core introduces sysfs hierarchy to control the devices that are
++ * instantiated by the core.
++ */
++
++static DEFINE_IDA(cxl_port_ida);
++
++static ssize_t devtype_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
++{
++ return sysfs_emit(buf, "%s\n", dev->type->name);
++}
++static DEVICE_ATTR_RO(devtype);
++
++static struct attribute *cxl_base_attributes[] = {
++ &dev_attr_devtype.attr,
++ NULL,
++};
++
++struct attribute_group cxl_base_attribute_group = {
++ .attrs = cxl_base_attributes,
++};
++
++static ssize_t start_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
++{
++ struct cxl_decoder *cxld = to_cxl_decoder(dev);
++
++ return sysfs_emit(buf, "%#llx\n", cxld->range.start);
++}
++static DEVICE_ATTR_RO(start);
++
++static ssize_t size_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
++{
++ struct cxl_decoder *cxld = to_cxl_decoder(dev);
++
++ return sysfs_emit(buf, "%#llx\n", range_len(&cxld->range));
++}
++static DEVICE_ATTR_RO(size);
++
++#define CXL_DECODER_FLAG_ATTR(name, flag) \
++static ssize_t name##_show(struct device *dev, \
++ struct device_attribute *attr, char *buf) \
++{ \
++ struct cxl_decoder *cxld = to_cxl_decoder(dev); \
++ \
++ return sysfs_emit(buf, "%s\n", \
++ (cxld->flags & (flag)) ? "1" : "0"); \
++} \
++static DEVICE_ATTR_RO(name)
++
++CXL_DECODER_FLAG_ATTR(cap_pmem, CXL_DECODER_F_PMEM);
++CXL_DECODER_FLAG_ATTR(cap_ram, CXL_DECODER_F_RAM);
++CXL_DECODER_FLAG_ATTR(cap_type2, CXL_DECODER_F_TYPE2);
++CXL_DECODER_FLAG_ATTR(cap_type3, CXL_DECODER_F_TYPE3);
++CXL_DECODER_FLAG_ATTR(locked, CXL_DECODER_F_LOCK);
++
++static ssize_t target_type_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ struct cxl_decoder *cxld = to_cxl_decoder(dev);
++
++ switch (cxld->target_type) {
++ case CXL_DECODER_ACCELERATOR:
++ return sysfs_emit(buf, "accelerator\n");
++ case CXL_DECODER_EXPANDER:
++ return sysfs_emit(buf, "expander\n");
++ }
++ return -ENXIO;
++}
++static DEVICE_ATTR_RO(target_type);
++
++static ssize_t target_list_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ struct cxl_decoder *cxld = to_cxl_decoder(dev);
++ ssize_t offset = 0;
++ int i, rc = 0;
++
++ device_lock(dev);
++ for (i = 0; i < cxld->interleave_ways; i++) {
++ struct cxl_dport *dport = cxld->target[i];
++ struct cxl_dport *next = NULL;
++
++ if (!dport)
++ break;
++
++ if (i + 1 < cxld->interleave_ways)
++ next = cxld->target[i + 1];
++ rc = sysfs_emit_at(buf, offset, "%d%s", dport->port_id,
++ next ? "," : "");
++ if (rc < 0)
++ break;
++ offset += rc;
++ }
++ device_unlock(dev);
++
++ if (rc < 0)
++ return rc;
++
++ rc = sysfs_emit_at(buf, offset, "\n");
++ if (rc < 0)
++ return rc;
++
++ return offset + rc;
++}
++static DEVICE_ATTR_RO(target_list);
++
++static struct attribute *cxl_decoder_base_attrs[] = {
++ &dev_attr_start.attr,
++ &dev_attr_size.attr,
++ &dev_attr_locked.attr,
++ &dev_attr_target_list.attr,
++ NULL,
++};
++
++static struct attribute_group cxl_decoder_base_attribute_group = {
++ .attrs = cxl_decoder_base_attrs,
++};
++
++static struct attribute *cxl_decoder_root_attrs[] = {
++ &dev_attr_cap_pmem.attr,
++ &dev_attr_cap_ram.attr,
++ &dev_attr_cap_type2.attr,
++ &dev_attr_cap_type3.attr,
++ NULL,
++};
++
++static struct attribute_group cxl_decoder_root_attribute_group = {
++ .attrs = cxl_decoder_root_attrs,
++};
++
++static const struct attribute_group *cxl_decoder_root_attribute_groups[] = {
++ &cxl_decoder_root_attribute_group,
++ &cxl_decoder_base_attribute_group,
++ &cxl_base_attribute_group,
++ NULL,
++};
++
++static struct attribute *cxl_decoder_switch_attrs[] = {
++ &dev_attr_target_type.attr,
++ NULL,
++};
++
++static struct attribute_group cxl_decoder_switch_attribute_group = {
++ .attrs = cxl_decoder_switch_attrs,
++};
++
++static const struct attribute_group *cxl_decoder_switch_attribute_groups[] = {
++ &cxl_decoder_switch_attribute_group,
++ &cxl_decoder_base_attribute_group,
++ &cxl_base_attribute_group,
++ NULL,
++};
++
++static void cxl_decoder_release(struct device *dev)
++{
++ struct cxl_decoder *cxld = to_cxl_decoder(dev);
++ struct cxl_port *port = to_cxl_port(dev->parent);
++
++ ida_free(&port->decoder_ida, cxld->id);
++ kfree(cxld);
++ put_device(&port->dev);
++}
++
++static const struct device_type cxl_decoder_switch_type = {
++ .name = "cxl_decoder_switch",
++ .release = cxl_decoder_release,
++ .groups = cxl_decoder_switch_attribute_groups,
++};
++
++static const struct device_type cxl_decoder_root_type = {
++ .name = "cxl_decoder_root",
++ .release = cxl_decoder_release,
++ .groups = cxl_decoder_root_attribute_groups,
++};
++
++bool is_root_decoder(struct device *dev)
++{
++ return dev->type == &cxl_decoder_root_type;
++}
++EXPORT_SYMBOL_NS_GPL(is_root_decoder, CXL);
++
++struct cxl_decoder *to_cxl_decoder(struct device *dev)
++{
++ if (dev_WARN_ONCE(dev, dev->type->release != cxl_decoder_release,
++ "not a cxl_decoder device\n"))
++ return NULL;
++ return container_of(dev, struct cxl_decoder, dev);
++}
++EXPORT_SYMBOL_NS_GPL(to_cxl_decoder, CXL);
++
++static void cxl_dport_release(struct cxl_dport *dport)
++{
++ list_del(&dport->list);
++ put_device(dport->dport);
++ kfree(dport);
++}
++
++static void cxl_port_release(struct device *dev)
++{
++ struct cxl_port *port = to_cxl_port(dev);
++ struct cxl_dport *dport, *_d;
++
++ device_lock(dev);
++ list_for_each_entry_safe(dport, _d, &port->dports, list)
++ cxl_dport_release(dport);
++ device_unlock(dev);
++ ida_free(&cxl_port_ida, port->id);
++ kfree(port);
++}
++
++static const struct attribute_group *cxl_port_attribute_groups[] = {
++ &cxl_base_attribute_group,
++ NULL,
++};
++
++static const struct device_type cxl_port_type = {
++ .name = "cxl_port",
++ .release = cxl_port_release,
++ .groups = cxl_port_attribute_groups,
++};
++
++struct cxl_port *to_cxl_port(struct device *dev)
++{
++ if (dev_WARN_ONCE(dev, dev->type != &cxl_port_type,
++ "not a cxl_port device\n"))
++ return NULL;
++ return container_of(dev, struct cxl_port, dev);
++}
++
++static void unregister_port(void *_port)
++{
++ struct cxl_port *port = _port;
++ struct cxl_dport *dport;
++
++ device_lock(&port->dev);
++ list_for_each_entry(dport, &port->dports, list) {
++ char link_name[CXL_TARGET_STRLEN];
++
++ if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d",
++ dport->port_id) >= CXL_TARGET_STRLEN)
++ continue;
++ sysfs_remove_link(&port->dev.kobj, link_name);
++ }
++ device_unlock(&port->dev);
++ device_unregister(&port->dev);
++}
++
++static void cxl_unlink_uport(void *_port)
++{
++ struct cxl_port *port = _port;
++
++ sysfs_remove_link(&port->dev.kobj, "uport");
++}
++
++static int devm_cxl_link_uport(struct device *host, struct cxl_port *port)
++{
++ int rc;
++
++ rc = sysfs_create_link(&port->dev.kobj, &port->uport->kobj, "uport");
++ if (rc)
++ return rc;
++ return devm_add_action_or_reset(host, cxl_unlink_uport, port);
++}
++
++static struct cxl_port *cxl_port_alloc(struct device *uport,
++ resource_size_t component_reg_phys,
++ struct cxl_port *parent_port)
++{
++ struct cxl_port *port;
++ struct device *dev;
++ int rc;
++
++ port = kzalloc(sizeof(*port), GFP_KERNEL);
++ if (!port)
++ return ERR_PTR(-ENOMEM);
++
++ rc = ida_alloc(&cxl_port_ida, GFP_KERNEL);
++ if (rc < 0)
++ goto err;
++ port->id = rc;
++
++ /*
++ * The top-level cxl_port "cxl_root" does not have a cxl_port as
++ * its parent and it does not have any corresponding component
++ * registers as its decode is described by a fixed platform
++ * description.
++ */
++ dev = &port->dev;
++ if (parent_port)
++ dev->parent = &parent_port->dev;
++ else
++ dev->parent = uport;
++
++ port->uport = uport;
++ port->component_reg_phys = component_reg_phys;
++ ida_init(&port->decoder_ida);
++ INIT_LIST_HEAD(&port->dports);
++
++ device_initialize(dev);
++ device_set_pm_not_required(dev);
++ dev->bus = &cxl_bus_type;
++ dev->type = &cxl_port_type;
++
++ return port;
++
++err:
++ kfree(port);
++ return ERR_PTR(rc);
++}
++
++/**
++ * devm_cxl_add_port - register a cxl_port in CXL memory decode hierarchy
++ * @host: host device for devm operations
++ * @uport: "physical" device implementing this upstream port
++ * @component_reg_phys: (optional) for configurable cxl_port instances
++ * @parent_port: next hop up in the CXL memory decode hierarchy
++ */
++struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport,
++ resource_size_t component_reg_phys,
++ struct cxl_port *parent_port)
++{
++ struct cxl_port *port;
++ struct device *dev;
++ int rc;
++
++ port = cxl_port_alloc(uport, component_reg_phys, parent_port);
++ if (IS_ERR(port))
++ return port;
++
++ dev = &port->dev;
++ if (parent_port)
++ rc = dev_set_name(dev, "port%d", port->id);
++ else
++ rc = dev_set_name(dev, "root%d", port->id);
++ if (rc)
++ goto err;
++
++ rc = device_add(dev);
++ if (rc)
++ goto err;
++
++ rc = devm_add_action_or_reset(host, unregister_port, port);
++ if (rc)
++ return ERR_PTR(rc);
++
++ rc = devm_cxl_link_uport(host, port);
++ if (rc)
++ return ERR_PTR(rc);
++
++ return port;
++
++err:
++ put_device(dev);
++ return ERR_PTR(rc);
++}
++EXPORT_SYMBOL_NS_GPL(devm_cxl_add_port, CXL);
++
++static struct cxl_dport *find_dport(struct cxl_port *port, int id)
++{
++ struct cxl_dport *dport;
++
++ device_lock_assert(&port->dev);
++ list_for_each_entry (dport, &port->dports, list)
++ if (dport->port_id == id)
++ return dport;
++ return NULL;
++}
++
++static int add_dport(struct cxl_port *port, struct cxl_dport *new)
++{
++ struct cxl_dport *dup;
++
++ device_lock(&port->dev);
++ dup = find_dport(port, new->port_id);
++ if (dup)
++ dev_err(&port->dev,
++ "unable to add dport%d-%s non-unique port id (%s)\n",
++ new->port_id, dev_name(new->dport),
++ dev_name(dup->dport));
++ else
++ list_add_tail(&new->list, &port->dports);
++ device_unlock(&port->dev);
++
++ return dup ? -EEXIST : 0;
++}
++
++/**
++ * cxl_add_dport - append downstream port data to a cxl_port
++ * @port: the cxl_port that references this dport
++ * @dport_dev: firmware or PCI device representing the dport
++ * @port_id: identifier for this dport in a decoder's target list
++ * @component_reg_phys: optional location of CXL component registers
++ *
++ * Note that all allocations and links are undone by cxl_port deletion
++ * and release.
++ */
++int cxl_add_dport(struct cxl_port *port, struct device *dport_dev, int port_id,
++ resource_size_t component_reg_phys)
++{
++ char link_name[CXL_TARGET_STRLEN];
++ struct cxl_dport *dport;
++ int rc;
++
++ if (snprintf(link_name, CXL_TARGET_STRLEN, "dport%d", port_id) >=
++ CXL_TARGET_STRLEN)
++ return -EINVAL;
++
++ dport = kzalloc(sizeof(*dport), GFP_KERNEL);
++ if (!dport)
++ return -ENOMEM;
++
++ INIT_LIST_HEAD(&dport->list);
++ dport->dport = get_device(dport_dev);
++ dport->port_id = port_id;
++ dport->component_reg_phys = component_reg_phys;
++ dport->port = port;
++
++ rc = add_dport(port, dport);
++ if (rc)
++ goto err;
++
++ rc = sysfs_create_link(&port->dev.kobj, &dport_dev->kobj, link_name);
++ if (rc)
++ goto err;
++
++ return 0;
++err:
++ cxl_dport_release(dport);
++ return rc;
++}
++EXPORT_SYMBOL_NS_GPL(cxl_add_dport, CXL);
++
++static int decoder_populate_targets(struct cxl_decoder *cxld,
++ struct cxl_port *port, int *target_map)
++{
++ int rc = 0, i;
++
++ if (!target_map)
++ return 0;
++
++ device_lock(&port->dev);
++ if (list_empty(&port->dports)) {
++ rc = -EINVAL;
++ goto out_unlock;
++ }
++
++ for (i = 0; i < cxld->nr_targets; i++) {
++ struct cxl_dport *dport = find_dport(port, target_map[i]);
++
++ if (!dport) {
++ rc = -ENXIO;
++ goto out_unlock;
++ }
++ cxld->target[i] = dport;
++ }
++
++out_unlock:
++ device_unlock(&port->dev);
++
++ return rc;
++}
++
++struct cxl_decoder *cxl_decoder_alloc(struct cxl_port *port, int nr_targets)
++{
++ struct cxl_decoder *cxld;
++ struct device *dev;
++ int rc = 0;
++
++ if (nr_targets > CXL_DECODER_MAX_INTERLEAVE || nr_targets < 1)
++ return ERR_PTR(-EINVAL);
++
++ cxld = kzalloc(struct_size(cxld, target, nr_targets), GFP_KERNEL);
++ if (!cxld)
++ return ERR_PTR(-ENOMEM);
++
++ rc = ida_alloc(&port->decoder_ida, GFP_KERNEL);
++ if (rc < 0)
++ goto err;
++
++ /* need parent to stick around to release the id */
++ get_device(&port->dev);
++ cxld->id = rc;
++
++ cxld->nr_targets = nr_targets;
++ dev = &cxld->dev;
++ device_initialize(dev);
++ device_set_pm_not_required(dev);
++ dev->parent = &port->dev;
++ dev->bus = &cxl_bus_type;
++
++ /* root ports do not have a cxl_port_type parent */
++ if (port->dev.parent->type == &cxl_port_type)
++ dev->type = &cxl_decoder_switch_type;
++ else
++ dev->type = &cxl_decoder_root_type;
++
++ return cxld;
++err:
++ kfree(cxld);
++ return ERR_PTR(rc);
++}
++EXPORT_SYMBOL_NS_GPL(cxl_decoder_alloc, CXL);
++
++int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
++{
++ struct cxl_port *port;
++ struct device *dev;
++ int rc;
++
++ if (WARN_ON_ONCE(!cxld))
++ return -EINVAL;
++
++ if (WARN_ON_ONCE(IS_ERR(cxld)))
++ return PTR_ERR(cxld);
++
++ if (cxld->interleave_ways < 1)
++ return -EINVAL;
++
++ port = to_cxl_port(cxld->dev.parent);
++ rc = decoder_populate_targets(cxld, port, target_map);
++ if (rc)
++ return rc;
++
++ dev = &cxld->dev;
++ rc = dev_set_name(dev, "decoder%d.%d", port->id, cxld->id);
++ if (rc)
++ return rc;
++
++ return device_add(dev);
++}
++EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL);
++
++static void cxld_unregister(void *dev)
++{
++ device_unregister(dev);
++}
++
++int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld)
++{
++ return devm_add_action_or_reset(host, cxld_unregister, &cxld->dev);
++}
++EXPORT_SYMBOL_NS_GPL(cxl_decoder_autoremove, CXL);
++
++/**
++ * __cxl_driver_register - register a driver for the cxl bus
++ * @cxl_drv: cxl driver structure to attach
++ * @owner: owning module/driver
++ * @modname: KBUILD_MODNAME for parent driver
++ */
++int __cxl_driver_register(struct cxl_driver *cxl_drv, struct module *owner,
++ const char *modname)
++{
++ if (!cxl_drv->probe) {
++ pr_debug("%s ->probe() must be specified\n", modname);
++ return -EINVAL;
++ }
++
++ if (!cxl_drv->name) {
++ pr_debug("%s ->name must be specified\n", modname);
++ return -EINVAL;
++ }
++
++ if (!cxl_drv->id) {
++ pr_debug("%s ->id must be specified\n", modname);
++ return -EINVAL;
++ }
++
++ cxl_drv->drv.bus = &cxl_bus_type;
++ cxl_drv->drv.owner = owner;
++ cxl_drv->drv.mod_name = modname;
++ cxl_drv->drv.name = cxl_drv->name;
++
++ return driver_register(&cxl_drv->drv);
++}
++EXPORT_SYMBOL_NS_GPL(__cxl_driver_register, CXL);
++
++void cxl_driver_unregister(struct cxl_driver *cxl_drv)
++{
++ driver_unregister(&cxl_drv->drv);
++}
++EXPORT_SYMBOL_NS_GPL(cxl_driver_unregister, CXL);
++
++static int cxl_device_id(struct device *dev)
++{
++ if (dev->type == &cxl_nvdimm_bridge_type)
++ return CXL_DEVICE_NVDIMM_BRIDGE;
++ if (dev->type == &cxl_nvdimm_type)
++ return CXL_DEVICE_NVDIMM;
++ return 0;
++}
++
++static int cxl_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
++{
++ return add_uevent_var(env, "MODALIAS=" CXL_MODALIAS_FMT,
++ cxl_device_id(dev));
++}
++
++static int cxl_bus_match(struct device *dev, struct device_driver *drv)
++{
++ return cxl_device_id(dev) == to_cxl_drv(drv)->id;
++}
++
++static int cxl_bus_probe(struct device *dev)
++{
++ return to_cxl_drv(dev->driver)->probe(dev);
++}
++
++static void cxl_bus_remove(struct device *dev)
++{
++ struct cxl_driver *cxl_drv = to_cxl_drv(dev->driver);
++
++ if (cxl_drv->remove)
++ cxl_drv->remove(dev);
++}
++
++struct bus_type cxl_bus_type = {
++ .name = "cxl",
++ .uevent = cxl_bus_uevent,
++ .match = cxl_bus_match,
++ .probe = cxl_bus_probe,
++ .remove = cxl_bus_remove,
++};
++EXPORT_SYMBOL_NS_GPL(cxl_bus_type, CXL);
++
++static __init int cxl_core_init(void)
++{
++ int rc;
++
++ cxl_mbox_init();
++
++ rc = cxl_memdev_init();
++ if (rc)
++ return rc;
++
++ rc = bus_register(&cxl_bus_type);
++ if (rc)
++ goto err;
++ return 0;
++
++err:
++ cxl_memdev_exit();
++ cxl_mbox_exit();
++ return rc;
++}
++
++static void cxl_core_exit(void)
++{
++ bus_unregister(&cxl_bus_type);
++ cxl_memdev_exit();
++ cxl_mbox_exit();
++}
++
++module_init(cxl_core_init);
++module_exit(cxl_core_exit);
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/cxl/core/regs.c b/drivers/cxl/core/regs.c
+index e37e23bf43553..6a18ff8739e00 100644
+--- a/drivers/cxl/core/regs.c
++++ b/drivers/cxl/core/regs.c
+@@ -35,7 +35,7 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
+ struct cxl_component_reg_map *map)
+ {
+ int cap, cap_count;
+- u64 cap_array;
++ u32 cap_array;
+
+ *map = (struct cxl_component_reg_map) { 0 };
+
+@@ -45,11 +45,11 @@ void cxl_probe_component_regs(struct device *dev, void __iomem *base,
+ */
+ base += CXL_CM_OFFSET;
+
+- cap_array = readq(base + CXL_CM_CAP_HDR_OFFSET);
++ cap_array = readl(base + CXL_CM_CAP_HDR_OFFSET);
+
+ if (FIELD_GET(CXL_CM_CAP_HDR_ID_MASK, cap_array) != CM_CAP_HDR_CAP_ID) {
+ dev_err(dev,
+- "Couldn't locate the CXL.cache and CXL.mem capability array header./n");
++ "Couldn't locate the CXL.cache and CXL.mem capability array header.\n");
+ return;
+ }
+
+diff --git a/drivers/dax/super.c b/drivers/dax/super.c
+index e3029389d8097..6bd565fe2e63b 100644
+--- a/drivers/dax/super.c
++++ b/drivers/dax/super.c
+@@ -476,6 +476,7 @@ static int dax_fs_init(void)
+ static void dax_fs_exit(void)
+ {
+ kern_unmount(dax_mnt);
++ rcu_barrier();
+ kmem_cache_destroy(dax_cache);
+ }
+
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index c57a609db75be..e7330684d3b82 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -190,6 +190,10 @@ static long udmabuf_create(struct miscdevice *device,
+ if (ubuf->pagecount > pglimit)
+ goto err;
+ }
++
++ if (!ubuf->pagecount)
++ goto err;
++
+ ubuf->pages = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->pages),
+ GFP_KERNEL);
+ if (!ubuf->pages) {
+diff --git a/drivers/dma/hisi_dma.c b/drivers/dma/hisi_dma.c
+index 97c87a7cba879..43817ced3a3e1 100644
+--- a/drivers/dma/hisi_dma.c
++++ b/drivers/dma/hisi_dma.c
+@@ -30,7 +30,7 @@
+ #define HISI_DMA_MODE 0x217c
+ #define HISI_DMA_OFFSET 0x100
+
+-#define HISI_DMA_MSI_NUM 30
++#define HISI_DMA_MSI_NUM 32
+ #define HISI_DMA_CHAN_NUM 30
+ #define HISI_DMA_Q_DEPTH_VAL 1024
+
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index 573ad8b86804e..3061fe857d69f 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -681,8 +681,13 @@ static void idxd_groups_clear_state(struct idxd_device *idxd)
+ group->use_rdbuf_limit = false;
+ group->rdbufs_allowed = 0;
+ group->rdbufs_reserved = 0;
+- group->tc_a = -1;
+- group->tc_b = -1;
++ if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override) {
++ group->tc_a = 1;
++ group->tc_b = 1;
++ } else {
++ group->tc_a = -1;
++ group->tc_b = -1;
++ }
+ }
+ }
+
+diff --git a/drivers/firmware/efi/efi-pstore.c b/drivers/firmware/efi/efi-pstore.c
+index 0ef086e43090b..7e771c56c13c6 100644
+--- a/drivers/firmware/efi/efi-pstore.c
++++ b/drivers/firmware/efi/efi-pstore.c
+@@ -266,7 +266,7 @@ static int efi_pstore_write(struct pstore_record *record)
+ efi_name[i] = name[i];
+
+ ret = efivar_entry_set_safe(efi_name, vendor, PSTORE_EFI_ATTRIBUTES,
+- preemptible(), record->size, record->psi->buf);
++ false, record->size, record->psi->buf);
+
+ if (record->reason == KMSG_DUMP_OOPS && try_module_get(THIS_MODULE))
+ if (!schedule_work(&efivar_work))
+diff --git a/drivers/firmware/google/Kconfig b/drivers/firmware/google/Kconfig
+index 931544c9f63d4..983e07dc022ed 100644
+--- a/drivers/firmware/google/Kconfig
++++ b/drivers/firmware/google/Kconfig
+@@ -21,7 +21,7 @@ config GOOGLE_SMI
+
+ config GOOGLE_COREBOOT_TABLE
+ tristate "Coreboot Table Access"
+- depends on ACPI || OF
++ depends on HAS_IOMEM && (ACPI || OF)
+ help
+ This option enables the coreboot_table module, which provides other
+ firmware modules access to the coreboot table. The coreboot table
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index 7db8066b19fd5..3f67bf774821d 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -749,12 +749,6 @@ int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare)
+ };
+ int ret;
+
+- desc.args[0] = addr;
+- desc.args[1] = size;
+- desc.args[2] = spare;
+- desc.arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL,
+- QCOM_SCM_VAL);
+-
+ ret = qcom_scm_call(__scm->dev, &desc, NULL);
+
+ /* the pg table has been initialized already, ignore the error */
+diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
+index 29c0a616b3177..c4bf934e3553e 100644
+--- a/drivers/firmware/stratix10-svc.c
++++ b/drivers/firmware/stratix10-svc.c
+@@ -477,7 +477,7 @@ static int svc_normal_to_secure_thread(void *data)
+ case INTEL_SIP_SMC_RSU_ERROR:
+ pr_err("%s: STATUS_ERROR\n", __func__);
+ cbdata->status = BIT(SVC_STATUS_ERROR);
+- cbdata->kaddr1 = NULL;
++ cbdata->kaddr1 = &res.a1;
+ cbdata->kaddr2 = NULL;
+ cbdata->kaddr3 = NULL;
+ pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata);
+diff --git a/drivers/firmware/sysfb_simplefb.c b/drivers/firmware/sysfb_simplefb.c
+index 303a491e520d1..757cc8b9f3de9 100644
+--- a/drivers/firmware/sysfb_simplefb.c
++++ b/drivers/firmware/sysfb_simplefb.c
+@@ -113,16 +113,21 @@ __init int sysfb_create_simplefb(const struct screen_info *si,
+ sysfb_apply_efi_quirks(pd);
+
+ ret = platform_device_add_resources(pd, &res, 1);
+- if (ret) {
+- platform_device_put(pd);
+- return ret;
+- }
++ if (ret)
++ goto err_put_device;
+
+ ret = platform_device_add_data(pd, mode, sizeof(*mode));
+- if (ret) {
+- platform_device_put(pd);
+- return ret;
+- }
++ if (ret)
++ goto err_put_device;
++
++ ret = platform_device_add(pd);
++ if (ret)
++ goto err_put_device;
++
++ return 0;
++
++err_put_device:
++ platform_device_put(pd);
+
+- return platform_device_add(pd);
++ return ret;
+ }
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index 8606e55c1721c..0bed2fab80558 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -542,25 +542,28 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ return rc;
+ }
+
+- aspeed = devm_kzalloc(&pdev->dev, sizeof(*aspeed), GFP_KERNEL);
++ aspeed = kzalloc(sizeof(*aspeed), GFP_KERNEL);
+ if (!aspeed)
+ return -ENOMEM;
+
+ aspeed->dev = &pdev->dev;
+
+ aspeed->base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(aspeed->base))
+- return PTR_ERR(aspeed->base);
++ if (IS_ERR(aspeed->base)) {
++ rc = PTR_ERR(aspeed->base);
++ goto err_free_aspeed;
++ }
+
+ aspeed->clk = devm_clk_get(aspeed->dev, NULL);
+ if (IS_ERR(aspeed->clk)) {
+ dev_err(aspeed->dev, "couldn't get clock\n");
+- return PTR_ERR(aspeed->clk);
++ rc = PTR_ERR(aspeed->clk);
++ goto err_free_aspeed;
+ }
+ rc = clk_prepare_enable(aspeed->clk);
+ if (rc) {
+ dev_err(aspeed->dev, "couldn't enable clock\n");
+- return rc;
++ goto err_free_aspeed;
+ }
+
+ rc = setup_cfam_reset(aspeed);
+@@ -595,7 +598,7 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+ rc = opb_readl(aspeed, ctrl_base + FSI_MVER, &raw);
+ if (rc) {
+ dev_err(&pdev->dev, "failed to read hub version\n");
+- return rc;
++ goto err_release;
+ }
+
+ reg = be32_to_cpu(raw);
+@@ -634,6 +637,8 @@ static int fsi_master_aspeed_probe(struct platform_device *pdev)
+
+ err_release:
+ clk_disable_unprepare(aspeed->clk);
++err_free_aspeed:
++ kfree(aspeed);
+ return rc;
+ }
+
+diff --git a/drivers/fsi/fsi-scom.c b/drivers/fsi/fsi-scom.c
+index da1486bb6a144..bcb756dc98663 100644
+--- a/drivers/fsi/fsi-scom.c
++++ b/drivers/fsi/fsi-scom.c
+@@ -145,7 +145,7 @@ static int put_indirect_scom_form0(struct scom_device *scom, uint64_t value,
+ uint64_t addr, uint32_t *status)
+ {
+ uint64_t ind_data, ind_addr;
+- int rc, retries, err = 0;
++ int rc, err;
+
+ if (value & ~XSCOM_DATA_IND_DATA)
+ return -EINVAL;
+@@ -156,19 +156,14 @@ static int put_indirect_scom_form0(struct scom_device *scom, uint64_t value,
+ if (rc || (*status & SCOM_STATUS_ANY_ERR))
+ return rc;
+
+- for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) {
+- rc = __get_scom(scom, &ind_data, addr, status);
+- if (rc || (*status & SCOM_STATUS_ANY_ERR))
+- return rc;
++ rc = __get_scom(scom, &ind_data, addr, status);
++ if (rc || (*status & SCOM_STATUS_ANY_ERR))
++ return rc;
+
+- err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
+- *status = err << SCOM_STATUS_PIB_RESP_SHIFT;
+- if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED))
+- return 0;
++ err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
++ *status = err << SCOM_STATUS_PIB_RESP_SHIFT;
+
+- msleep(1);
+- }
+- return rc;
++ return 0;
+ }
+
+ static int put_indirect_scom_form1(struct scom_device *scom, uint64_t value,
+@@ -188,7 +183,7 @@ static int get_indirect_scom_form0(struct scom_device *scom, uint64_t *value,
+ uint64_t addr, uint32_t *status)
+ {
+ uint64_t ind_data, ind_addr;
+- int rc, retries, err = 0;
++ int rc, err;
+
+ ind_addr = addr & XSCOM_ADDR_DIRECT_PART;
+ ind_data = (addr & XSCOM_ADDR_INDIRECT_PART) | XSCOM_DATA_IND_READ;
+@@ -196,21 +191,15 @@ static int get_indirect_scom_form0(struct scom_device *scom, uint64_t *value,
+ if (rc || (*status & SCOM_STATUS_ANY_ERR))
+ return rc;
+
+- for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) {
+- rc = __get_scom(scom, &ind_data, addr, status);
+- if (rc || (*status & SCOM_STATUS_ANY_ERR))
+- return rc;
+-
+- err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
+- *status = err << SCOM_STATUS_PIB_RESP_SHIFT;
+- *value = ind_data & XSCOM_DATA_IND_DATA;
++ rc = __get_scom(scom, &ind_data, addr, status);
++ if (rc || (*status & SCOM_STATUS_ANY_ERR))
++ return rc;
+
+- if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED))
+- return 0;
++ err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT;
++ *status = err << SCOM_STATUS_PIB_RESP_SHIFT;
++ *value = ind_data & XSCOM_DATA_IND_DATA;
+
+- msleep(1);
+- }
+- return rc;
++ return 0;
+ }
+
+ static int raw_put_scom(struct scom_device *scom, uint64_t value,
+@@ -289,7 +278,7 @@ static int put_scom(struct scom_device *scom, uint64_t value,
+ int rc;
+
+ rc = raw_put_scom(scom, value, addr, &status);
+- if (rc == -ENODEV)
++ if (rc)
+ return rc;
+
+ rc = handle_fsi2pib_status(scom, status);
+@@ -308,7 +297,7 @@ static int get_scom(struct scom_device *scom, uint64_t *value,
+ int rc;
+
+ rc = raw_get_scom(scom, value, addr, &status);
+- if (rc == -ENODEV)
++ if (rc)
+ return rc;
+
+ rc = handle_fsi2pib_status(scom, status);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index c16a2704ced65..f3160b951df3a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -175,7 +175,7 @@ int amdgpu_connector_get_monitor_bpc(struct drm_connector *connector)
+
+ /* Check if bpc is within clock limit. Try to degrade gracefully otherwise */
+ if ((bpc == 12) && (mode_clock * 3/2 > max_tmds_clock)) {
+- if ((connector->display_info.edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30) &&
++ if ((connector->display_info.edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30) &&
+ (mode_clock * 5/4 <= max_tmds_clock))
+ bpc = 10;
+ else
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index ed077de426d9b..f18c698137a6b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -31,6 +31,7 @@
+ #include <linux/console.h>
+ #include <linux/slab.h>
+ #include <linux/iommu.h>
++#include <linux/pci.h>
+
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_probe_helper.h>
+@@ -2073,6 +2074,8 @@ out:
+ */
+ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
+ {
++ struct drm_device *dev = adev_to_drm(adev);
++ struct pci_dev *parent;
+ int i, r;
+
+ amdgpu_device_enable_virtual_display(adev);
+@@ -2137,6 +2140,18 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
+ break;
+ }
+
++ if (amdgpu_has_atpx() &&
++ (amdgpu_is_atpx_hybrid() ||
++ amdgpu_has_atpx_dgpu_power_cntl()) &&
++ ((adev->flags & AMD_IS_APU) == 0) &&
++ !pci_is_thunderbolt_attached(to_pci_dev(dev->dev)))
++ adev->flags |= AMD_IS_PX;
++
++ if (!(adev->flags & AMD_IS_APU)) {
++ parent = pci_upstream_bridge(adev->pdev);
++ adev->has_pr3 = parent ? pci_pr3_present(parent) : false;
++ }
++
+ amdgpu_amdkfd_device_probe(adev);
+
+ adev->pm.pp_feature = amdgpu_pp_feature_mask;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+index 2a786e7886277..978c46395ced0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
+@@ -91,17 +91,13 @@ static int amdgpu_fru_read_eeprom(struct amdgpu_device *adev, uint32_t addrptr,
+
+ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ {
+- unsigned char buff[AMDGPU_PRODUCT_NAME_LEN+2];
++ unsigned char buff[AMDGPU_PRODUCT_NAME_LEN];
+ u32 addrptr;
+ int size, len;
+- int offset = 2;
+
+ if (!is_fru_eeprom_supported(adev))
+ return 0;
+
+- if (adev->asic_type == CHIP_ALDEBARAN)
+- offset = 0;
+-
+ /* If algo exists, it means that the i2c_adapter's initialized */
+ if (!adev->pm.smu_i2c.algo) {
+ DRM_WARN("Cannot access FRU, EEPROM accessor not initialized");
+@@ -143,8 +139,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ AMDGPU_PRODUCT_NAME_LEN);
+ len = AMDGPU_PRODUCT_NAME_LEN - 1;
+ }
+- /* Start at 2 due to buff using fields 0 and 1 for the address */
+- memcpy(adev->product_name, &buff[offset], len);
++ memcpy(adev->product_name, buff, len);
+ adev->product_name[len] = '\0';
+
+ addrptr += size + 1;
+@@ -162,7 +157,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ DRM_WARN("FRU Product Number is larger than 16 characters. This is likely a mistake");
+ len = sizeof(adev->product_number) - 1;
+ }
+- memcpy(adev->product_number, &buff[offset], len);
++ memcpy(adev->product_number, buff, len);
+ adev->product_number[len] = '\0';
+
+ addrptr += size + 1;
+@@ -189,7 +184,7 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
+ DRM_WARN("FRU Serial Number is larger than 16 characters. This is likely a mistake");
+ len = sizeof(adev->serial) - 1;
+ }
+- memcpy(adev->serial, &buff[offset], len);
++ memcpy(adev->serial, buff, len);
+ adev->serial[len] = '\0';
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 1ebb91db22743..11a385264bbd2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -152,21 +152,10 @@ static void amdgpu_get_audio_func(struct amdgpu_device *adev)
+ int amdgpu_driver_load_kms(struct amdgpu_device *adev, unsigned long flags)
+ {
+ struct drm_device *dev;
+- struct pci_dev *parent;
+ int r, acpi_status;
+
+ dev = adev_to_drm(adev);
+
+- if (amdgpu_has_atpx() &&
+- (amdgpu_is_atpx_hybrid() ||
+- amdgpu_has_atpx_dgpu_power_cntl()) &&
+- ((flags & AMD_IS_APU) == 0) &&
+- !pci_is_thunderbolt_attached(to_pci_dev(dev->dev)))
+- flags |= AMD_IS_PX;
+-
+- parent = pci_upstream_bridge(adev->pdev);
+- adev->has_pr3 = parent ? pci_pr3_present(parent) : false;
+-
+ /* amdgpu_device_init should report only fatal error
+ * like memory allocation failure or iomapping failure,
+ * or memory manager initialization failure, it must
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 075429bea4275..b28b5c4908601 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2613,10 +2613,13 @@ static int dm_resume(void *handle)
+ * before the 0 streams commit.
+ *
+ * DC expects that link encoder assignments are *not* valid
+- * when committing a state, so as a workaround it needs to be
+- * cleared here.
++ * when committing a state, so as a workaround we can copy
++ * from the current state.
++ *
++ * We lose the previous assignments, but we had already
++ * committed 0 streams anyway.
+ */
+- link_enc_cfg_init(dm->dc, dc_state);
++ link_enc_cfg_copy(adev->dm.dc->current_state, dc_state);
+
+ if (dc_enable_dmub_notifications(adev->dm.dc))
+ amdgpu_dm_outbox_init(adev);
+@@ -8144,6 +8147,9 @@ static void amdgpu_dm_connector_add_common_modes(struct drm_encoder *encoder,
+ mode = amdgpu_dm_create_common_mode(encoder,
+ common_modes[i].name, common_modes[i].w,
+ common_modes[i].h);
++ if (!mode)
++ continue;
++
+ drm_mode_probed_add(connector, mode);
+ amdgpu_dm_connector->num_modes++;
+ }
+@@ -10858,10 +10864,13 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ static int add_affected_mst_dsc_crtcs(struct drm_atomic_state *state, struct drm_crtc *crtc)
+ {
+ struct drm_connector *connector;
+- struct drm_connector_state *conn_state;
++ struct drm_connector_state *conn_state, *old_conn_state;
+ struct amdgpu_dm_connector *aconnector = NULL;
+ int i;
+- for_each_new_connector_in_state(state, connector, conn_state, i) {
++ for_each_oldnew_connector_in_state(state, connector, old_conn_state, conn_state, i) {
++ if (!conn_state->crtc)
++ conn_state = old_conn_state;
++
+ if (conn_state->crtc != crtc)
+ continue;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_enc_cfg.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_enc_cfg.c
+index a55944da8d53f..72a3fded7142a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_enc_cfg.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_enc_cfg.c
+@@ -122,6 +122,7 @@ static void remove_link_enc_assignment(
+ stream->link_enc = NULL;
+ state->res_ctx.link_enc_cfg_ctx.link_enc_assignments[i].eng_id = ENGINE_ID_UNKNOWN;
+ state->res_ctx.link_enc_cfg_ctx.link_enc_assignments[i].stream = NULL;
++ dc_stream_release(stream);
+ break;
+ }
+ }
+@@ -271,6 +272,13 @@ void link_enc_cfg_init(
+ state->res_ctx.link_enc_cfg_ctx.mode = LINK_ENC_CFG_STEADY;
+ }
+
++void link_enc_cfg_copy(const struct dc_state *src_ctx, struct dc_state *dst_ctx)
++{
++ memcpy(&dst_ctx->res_ctx.link_enc_cfg_ctx,
++ &src_ctx->res_ctx.link_enc_cfg_ctx,
++ sizeof(dst_ctx->res_ctx.link_enc_cfg_ctx));
++}
++
+ void link_enc_cfg_link_encs_assign(
+ struct dc *dc,
+ struct dc_state *state,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/link_enc_cfg.h b/drivers/gpu/drm/amd/display/dc/inc/link_enc_cfg.h
+index a4e43b4826e0e..59ceb9ed385db 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/link_enc_cfg.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/link_enc_cfg.h
+@@ -39,6 +39,11 @@ void link_enc_cfg_init(
+ const struct dc *dc,
+ struct dc_state *state);
+
++/*
++ * Copies the link encoder assignments from another state.
++ */
++void link_enc_cfg_copy(const struct dc_state *src_ctx, struct dc_state *dst_ctx);
++
+ /*
+ * Algorithm for assigning available DIG link encoders to streams.
+ *
+diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+index 0f15bcada4e99..717977aec6d06 100644
+--- a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c
+@@ -265,14 +265,6 @@ static const struct irq_source_info_funcs vline0_irq_info_funcs = {
+ .funcs = &pflip_irq_info_funcs\
+ }
+
+-#define vupdate_int_entry(reg_num)\
+- [DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\
+- IRQ_REG_ENTRY(OTG, reg_num,\
+- OTG_GLOBAL_SYNC_STATUS, VUPDATE_INT_EN,\
+- OTG_GLOBAL_SYNC_STATUS, VUPDATE_EVENT_CLEAR),\
+- .funcs = &vblank_irq_info_funcs\
+- }
+-
+ /* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match semantic
+ * of DCE's DC_IRQ_SOURCE_VUPDATEx.
+ */
+@@ -401,12 +393,6 @@ irq_source_info_dcn21[DAL_IRQ_SOURCES_NUMBER] = {
+ dc_underflow_int_entry(6),
+ [DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+ [DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+- vupdate_int_entry(0),
+- vupdate_int_entry(1),
+- vupdate_int_entry(2),
+- vupdate_int_entry(3),
+- vupdate_int_entry(4),
+- vupdate_int_entry(5),
+ vupdate_no_lock_int_entry(0),
+ vupdate_no_lock_int_entry(1),
+ vupdate_no_lock_int_entry(2),
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 48cc009d9bdf3..dc910003f3cab 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2134,8 +2134,8 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
+ }
+ }
+
+- /* setting should not be allowed from VF */
+- if (amdgpu_sriov_vf(adev)) {
++ /* setting should not be allowed from VF if not in one VF mode */
++ if (amdgpu_sriov_vf(adev) && !amdgpu_sriov_is_pp_one_vf(adev)) {
+ dev_attr->attr.mode &= ~S_IWUGO;
+ dev_attr->store = NULL;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index d93d28c1af95b..b51368fa30253 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -138,7 +138,7 @@ int smu_get_dpm_freq_range(struct smu_context *smu,
+ uint32_t *min,
+ uint32_t *max)
+ {
+- int ret = 0;
++ int ret = -ENOTSUPP;
+
+ if (!min && !max)
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511.h b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+index 592ecfcf00caf..6a882891d91c5 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511.h
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511.h
+@@ -169,6 +169,7 @@
+ #define ADV7511_PACKET_ENABLE_SPARE2 BIT(1)
+ #define ADV7511_PACKET_ENABLE_SPARE1 BIT(0)
+
++#define ADV7535_REG_POWER2_HPD_OVERRIDE BIT(6)
+ #define ADV7511_REG_POWER2_HPD_SRC_MASK 0xc0
+ #define ADV7511_REG_POWER2_HPD_SRC_BOTH 0x00
+ #define ADV7511_REG_POWER2_HPD_SRC_HPD 0x40
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index f8e5da1485999..77118c3395bf0 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -351,11 +351,17 @@ static void __adv7511_power_on(struct adv7511 *adv7511)
+ * from standby or are enabled. When the HPD goes low the adv7511 is
+ * reset and the outputs are disabled which might cause the monitor to
+ * go to standby again. To avoid this we ignore the HPD pin for the
+- * first few seconds after enabling the output.
++ * first few seconds after enabling the output. The ADV7535, on the
++ * other hand, requires the HPD Override bit for proper HPD operation.
+ */
+- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+- ADV7511_REG_POWER2_HPD_SRC_MASK,
+- ADV7511_REG_POWER2_HPD_SRC_NONE);
++ if (adv7511->type == ADV7535)
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++ ADV7535_REG_POWER2_HPD_OVERRIDE,
++ ADV7535_REG_POWER2_HPD_OVERRIDE);
++ else
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++ ADV7511_REG_POWER2_HPD_SRC_MASK,
++ ADV7511_REG_POWER2_HPD_SRC_NONE);
+ }
+
+ static void adv7511_power_on(struct adv7511 *adv7511)
+@@ -375,6 +381,10 @@ static void adv7511_power_on(struct adv7511 *adv7511)
+ static void __adv7511_power_off(struct adv7511 *adv7511)
+ {
+ /* TODO: setup additional power down modes */
++ if (adv7511->type == ADV7535)
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++ ADV7535_REG_POWER2_HPD_OVERRIDE, 0);
++
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER,
+ ADV7511_POWER_POWER_DOWN,
+ ADV7511_POWER_POWER_DOWN);
+@@ -672,9 +682,14 @@ adv7511_detect(struct adv7511 *adv7511, struct drm_connector *connector)
+ status = connector_status_disconnected;
+ } else {
+ /* Renable HPD sensing */
+- regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
+- ADV7511_REG_POWER2_HPD_SRC_MASK,
+- ADV7511_REG_POWER2_HPD_SRC_BOTH);
++ if (adv7511->type == ADV7535)
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++ ADV7535_REG_POWER2_HPD_OVERRIDE,
++ ADV7535_REG_POWER2_HPD_OVERRIDE);
++ else
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER2,
++ ADV7511_REG_POWER2_HPD_SRC_MASK,
++ ADV7511_REG_POWER2_HPD_SRC_BOTH);
+ }
+
+ adv7511->status = status;
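The ADV7535 change above toggles a single register bit via
regmap_update_bits(). A short sketch of that set/clear idiom, with a
hypothetical helper name; regmap_update_bits() is the real regmap API:

#include <linux/regmap.h>

/* mask selects the bit; val == mask sets it, val == 0 clears it */
static int hpd_override_set(struct regmap *map, unsigned int reg,
			    unsigned int bit, bool enable)
{
	return regmap_update_bits(map, reg, bit, enable ? bit : 0);
}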
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 2346dbcc505f2..e596cacce9e3e 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -846,7 +846,8 @@ static int segments_edid_read(struct anx7625_data *ctx,
+ static int sp_tx_edid_read(struct anx7625_data *ctx,
+ u8 *pedid_blocks_buf)
+ {
+- u8 offset, edid_pos;
++ u8 offset;
++ int edid_pos;
+ int count, blocks_num;
+ u8 pblock_buf[MAX_DPCD_BUFFER_SIZE];
+ u8 i, j;
+diff --git a/drivers/gpu/drm/bridge/cdns-dsi.c b/drivers/gpu/drm/bridge/cdns-dsi.c
+index d8a15c459b42c..829e1a1446567 100644
+--- a/drivers/gpu/drm/bridge/cdns-dsi.c
++++ b/drivers/gpu/drm/bridge/cdns-dsi.c
+@@ -1284,6 +1284,7 @@ static const struct of_device_id cdns_dsi_of_match[] = {
+ { .compatible = "cdns,dsi" },
+ { },
+ };
++MODULE_DEVICE_TABLE(of, cdns_dsi_of_match);
+
+ static struct platform_driver cdns_dsi_platform_driver = {
+ .probe = cdns_dsi_drm_probe,
+diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
+index dafb1b47c15fb..00597eb54661f 100644
+--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
++++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
+@@ -1164,7 +1164,11 @@ static int lt9611_probe(struct i2c_client *client,
+
+ lt9611_enable_hpd_interrupts(lt9611);
+
+- return lt9611_audio_init(dev, lt9611);
++ ret = lt9611_audio_init(dev, lt9611);
++ if (ret)
++ goto err_remove_bridge;
++
++ return 0;
+
+ err_remove_bridge:
+ drm_bridge_remove(<9611->bridge);
+diff --git a/drivers/gpu/drm/bridge/nwl-dsi.c b/drivers/gpu/drm/bridge/nwl-dsi.c
+index af07eeb47ca02..6e484d836cfe2 100644
+--- a/drivers/gpu/drm/bridge/nwl-dsi.c
++++ b/drivers/gpu/drm/bridge/nwl-dsi.c
+@@ -1204,6 +1204,7 @@ static int nwl_dsi_probe(struct platform_device *pdev)
+
+ ret = nwl_dsi_select_input(dsi);
+ if (ret < 0) {
++ pm_runtime_disable(dev);
+ mipi_dsi_host_unregister(&dsi->dsi_host);
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/bridge/sil-sii8620.c b/drivers/gpu/drm/bridge/sil-sii8620.c
+index 843265d7f1b12..ec7745c31da07 100644
+--- a/drivers/gpu/drm/bridge/sil-sii8620.c
++++ b/drivers/gpu/drm/bridge/sil-sii8620.c
+@@ -2120,7 +2120,7 @@ static void sii8620_init_rcp_input_dev(struct sii8620 *ctx)
+ if (ret) {
+ dev_err(ctx->dev, "Failed to register RC device\n");
+ ctx->error = ret;
+- rc_free_device(ctx->rc_dev);
++ rc_free_device(rc_dev);
+ return;
+ }
+ ctx->rc_dev = rc_dev;
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index 54d8fdad395f5..97cdc61b57f61 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -2551,8 +2551,9 @@ static u32 *dw_hdmi_bridge_atomic_get_output_bus_fmts(struct drm_bridge *bridge,
+ if (!output_fmts)
+ return NULL;
+
+- /* If dw-hdmi is the only bridge, avoid negociating with ourselves */
+- if (list_is_singular(&bridge->encoder->bridge_chain)) {
++ /* If dw-hdmi is the first or only bridge, avoid negotiating with ourselves */
++ if (list_is_singular(&bridge->encoder->bridge_chain) ||
++ list_is_first(&bridge->chain_node, &bridge->encoder->bridge_chain)) {
+ *num_output_fmts = 1;
+ output_fmts[0] = MEDIA_BUS_FMT_FIXED;
+
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+index e44e18a0112af..56c3fd08c6a0b 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+@@ -1199,6 +1199,7 @@ __dw_mipi_dsi_probe(struct platform_device *pdev,
+ ret = mipi_dsi_host_register(&dsi->dsi_host);
+ if (ret) {
+ dev_err(dev, "Failed to register MIPI host: %d\n", ret);
++ pm_runtime_disable(dev);
+ dw_mipi_dsi_debugfs_remove(dsi);
+ return ERR_PTR(ret);
+ }
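Both MIPI-DSI fixes above (nwl-dsi and dw-mipi-dsi) repair the same leak:
pm_runtime_enable() was left active when probe bailed out. A minimal sketch of
the balanced pattern, with hypothetical names for everything except the
pm_runtime_* calls:

#include <linux/pm_runtime.h>

static int dsi_register_host_sketch(struct device *dev); /* hypothetical */

static int dsi_probe_sketch(struct device *dev)
{
	int ret;

	pm_runtime_enable(dev);

	ret = dsi_register_host_sketch(dev);
	if (ret) {
		/* undo pm_runtime_enable() on every failing exit path */
		pm_runtime_disable(dev);
		return ret;
	}

	return 0;
}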
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+index 945f08de45f1d..314a84ffcea3d 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -560,10 +560,14 @@ static int sn65dsi83_parse_dt(struct sn65dsi83 *ctx, enum sn65dsi83_model model)
+ ctx->host_node = of_graph_get_remote_port_parent(endpoint);
+ of_node_put(endpoint);
+
+- if (ctx->dsi_lanes < 0 || ctx->dsi_lanes > 4)
+- return -EINVAL;
+- if (!ctx->host_node)
+- return -ENODEV;
++ if (ctx->dsi_lanes < 0 || ctx->dsi_lanes > 4) {
++ ret = -EINVAL;
++ goto err_put_node;
++ }
++ if (!ctx->host_node) {
++ ret = -ENODEV;
++ goto err_put_node;
++ }
+
+ ctx->lvds_dual_link = false;
+ ctx->lvds_dual_link_even_odd_swap = false;
+@@ -590,16 +594,22 @@ static int sn65dsi83_parse_dt(struct sn65dsi83 *ctx, enum sn65dsi83_model model)
+
+ ret = drm_of_find_panel_or_bridge(dev->of_node, 2, 0, &panel, &panel_bridge);
+ if (ret < 0)
+- return ret;
++ goto err_put_node;
+ if (panel) {
+ panel_bridge = devm_drm_panel_bridge_add(dev, panel);
+- if (IS_ERR(panel_bridge))
+- return PTR_ERR(panel_bridge);
++ if (IS_ERR(panel_bridge)) {
++ ret = PTR_ERR(panel_bridge);
++ goto err_put_node;
++ }
+ }
+
+ ctx->panel_bridge = panel_bridge;
+
+ return 0;
++
++err_put_node:
++ of_node_put(ctx->host_node);
++ return ret;
+ }
+
+ static int sn65dsi83_host_attach(struct sn65dsi83 *ctx)
+@@ -673,8 +683,10 @@ static int sn65dsi83_probe(struct i2c_client *client,
+ return ret;
+
+ ctx->regmap = devm_regmap_init_i2c(client, &sn65dsi83_regmap_config);
+- if (IS_ERR(ctx->regmap))
+- return PTR_ERR(ctx->regmap);
++ if (IS_ERR(ctx->regmap)) {
++ ret = PTR_ERR(ctx->regmap);
++ goto err_put_node;
++ }
+
+ dev_set_drvdata(dev, ctx);
+ i2c_set_clientdata(client, ctx);
+@@ -691,6 +703,8 @@ static int sn65dsi83_probe(struct i2c_client *client,
+
+ err_remove_bridge:
+ drm_bridge_remove(&ctx->bridge);
++err_put_node:
++ of_node_put(ctx->host_node);
+ return ret;
+ }
+
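The sn65dsi83 fix above is a reference-counting repair: every error path taken
after of_graph_get_remote_port_parent() must drop the node reference it holds.
A sketch of the goto-unwind idiom, assuming a hypothetical validation step and
DT property name; of_parse_phandle() and of_node_put() are real OF APIs:

#include <linux/of.h>

static int host_node_validate(struct device_node *host); /* hypothetical */

static int host_node_sketch(struct device_node *np, struct device_node **out)
{
	/* "host" is an assumed DT property name for this sketch */
	struct device_node *host = of_parse_phandle(np, "host", 0);
	int ret;

	if (!host)
		return -ENODEV;

	ret = host_node_validate(host);
	if (ret)
		goto err_put_node;

	*out = host;
	return 0;

err_put_node:
	of_node_put(host);	/* balance the reference on failure */
	return ret;
}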
+diff --git a/drivers/gpu/drm/drm_dp_helper.c b/drivers/gpu/drm/drm_dp_helper.c
+index 23f9073bc473a..c9528aa62c9c9 100644
+--- a/drivers/gpu/drm/drm_dp_helper.c
++++ b/drivers/gpu/drm/drm_dp_helper.c
+@@ -144,16 +144,6 @@ u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
+ }
+ EXPORT_SYMBOL(drm_dp_get_adjust_tx_ffe_preset);
+
+-u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
+- unsigned int lane)
+-{
+- unsigned int offset = DP_ADJUST_REQUEST_POST_CURSOR2;
+- u8 value = dp_link_status(link_status, offset);
+-
+- return (value >> (lane << 1)) & 0x3;
+-}
+-EXPORT_SYMBOL(drm_dp_get_adjust_request_post_cursor);
+-
+ static int __8b10b_clock_recovery_delay_us(const struct drm_dp_aux *aux, u8 rd_interval)
+ {
+ if (rd_interval > 4)
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index f5f5de362ff2c..b8f5419e514ae 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -4848,7 +4848,8 @@ bool drm_detect_monitor_audio(struct edid *edid)
+ if (!edid_ext)
+ goto end;
+
+- has_audio = ((edid_ext[3] & EDID_BASIC_AUDIO) != 0);
++ has_audio = (edid_ext[0] == CEA_EXT &&
++ (edid_ext[3] & EDID_BASIC_AUDIO) != 0);
+
+ if (has_audio) {
+ DRM_DEBUG_KMS("Monitor has basic audio support\n");
+@@ -5075,21 +5076,21 @@ static void drm_parse_hdmi_deep_color_info(struct drm_connector *connector,
+
+ if (hdmi[6] & DRM_EDID_HDMI_DC_30) {
+ dc_bpc = 10;
+- info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_30;
++ info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_30;
+ DRM_DEBUG("%s: HDMI sink does deep color 30.\n",
+ connector->name);
+ }
+
+ if (hdmi[6] & DRM_EDID_HDMI_DC_36) {
+ dc_bpc = 12;
+- info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_36;
++ info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_36;
+ DRM_DEBUG("%s: HDMI sink does deep color 36.\n",
+ connector->name);
+ }
+
+ if (hdmi[6] & DRM_EDID_HDMI_DC_48) {
+ dc_bpc = 16;
+- info->edid_hdmi_dc_modes |= DRM_EDID_HDMI_DC_48;
++ info->edid_hdmi_rgb444_dc_modes |= DRM_EDID_HDMI_DC_48;
+ DRM_DEBUG("%s: HDMI sink does deep color 48.\n",
+ connector->name);
+ }
+@@ -5104,16 +5105,9 @@ static void drm_parse_hdmi_deep_color_info(struct drm_connector *connector,
+ connector->name, dc_bpc);
+ info->bpc = dc_bpc;
+
+- /*
+- * Deep color support mandates RGB444 support for all video
+- * modes and forbids YCRCB422 support for all video modes per
+- * HDMI 1.3 spec.
+- */
+- info->color_formats = DRM_COLOR_FORMAT_RGB444;
+-
+ /* YCRCB444 is optional according to spec. */
+ if (hdmi[6] & DRM_EDID_HDMI_DC_Y444) {
+- info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
++ info->edid_hdmi_ycbcr444_dc_modes = info->edid_hdmi_rgb444_dc_modes;
+ DRM_DEBUG("%s: HDMI sink does YCRCB444 in deep color.\n",
+ connector->name);
+ }
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index ed43b987d306a..f15127a32f7a7 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -2346,6 +2346,7 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
+ fbi->fbops = &drm_fbdev_fb_ops;
+ fbi->screen_size = sizes->surface_height * fb->pitches[0];
+ fbi->fix.smem_len = fbi->screen_size;
++ fbi->flags = FBINFO_DEFAULT;
+
+ drm_fb_helper_fill_info(fbi, fb_helper, sizes);
+
+@@ -2353,19 +2354,21 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
+ fbi->screen_buffer = vzalloc(fbi->screen_size);
+ if (!fbi->screen_buffer)
+ return -ENOMEM;
++ fbi->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
+
+ fbi->fbdefio = &drm_fbdev_defio;
+-
+ fb_deferred_io_init(fbi);
+ } else {
+ /* buffer is mapped for HW framebuffer */
+ ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+ if (ret)
+ return ret;
+- if (map.is_iomem)
++ if (map.is_iomem) {
+ fbi->screen_base = map.vaddr_iomem;
+- else
++ } else {
+ fbi->screen_buffer = map.vaddr;
++ fbi->flags |= FBINFO_VIRTFB;
++ }
+
+ /*
+ * Shamelessly leak the physical address to user-space. As
+diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
+index c313a5b4549c4..7e48dcd1bee4d 100644
+--- a/drivers/gpu/drm/drm_syncobj.c
++++ b/drivers/gpu/drm/drm_syncobj.c
+@@ -853,12 +853,57 @@ drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
+ &args->handle);
+ }
+
++
++/*
++ * Try to flatten a dma_fence_chain into a dma_fence_array so that it can be
++ * added as timeline fence to a chain again.
++ */
++static int drm_syncobj_flatten_chain(struct dma_fence **f)
++{
++ struct dma_fence_chain *chain = to_dma_fence_chain(*f);
++ struct dma_fence *tmp, **fences;
++ struct dma_fence_array *array;
++ unsigned int count;
++
++ if (!chain)
++ return 0;
++
++ count = 0;
++ dma_fence_chain_for_each(tmp, &chain->base)
++ ++count;
++
++ fences = kmalloc_array(count, sizeof(*fences), GFP_KERNEL);
++ if (!fences)
++ return -ENOMEM;
++
++ count = 0;
++ dma_fence_chain_for_each(tmp, &chain->base)
++ fences[count++] = dma_fence_get(tmp);
++
++ array = dma_fence_array_create(count, fences,
++ dma_fence_context_alloc(1),
++ 1, false);
++ if (!array)
++ goto free_fences;
++
++ dma_fence_put(*f);
++ *f = &array->base;
++ return 0;
++
++free_fences:
++ while (count--)
++ dma_fence_put(fences[count]);
++
++ kfree(fences);
++ return -ENOMEM;
++}
++
+ static int drm_syncobj_transfer_to_timeline(struct drm_file *file_private,
+ struct drm_syncobj_transfer *args)
+ {
+ struct drm_syncobj *timeline_syncobj = NULL;
+- struct dma_fence *fence;
+ struct dma_fence_chain *chain;
++ struct dma_fence *fence;
+ int ret;
+
+ timeline_syncobj = drm_syncobj_find(file_private, args->dst_handle);
+@@ -869,16 +914,22 @@ static int drm_syncobj_transfer_to_timeline(struct drm_file *file_private,
+ args->src_point, args->flags,
+ &fence);
+ if (ret)
+- goto err;
++ goto err_put_timeline;
++
++ ret = drm_syncobj_flatten_chain(&fence);
++ if (ret)
++ goto err_free_fence;
++
+ chain = dma_fence_chain_alloc();
+ if (!chain) {
+ ret = -ENOMEM;
+- goto err1;
++ goto err_free_fence;
+ }
++
+ drm_syncobj_add_point(timeline_syncobj, chain, fence, args->dst_point);
+-err1:
++err_free_fence:
+ dma_fence_put(fence);
+-err:
++err_put_timeline:
+ drm_syncobj_put(timeline_syncobj);
+
+ return ret;
+diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
+index 8ac196e814d5d..d351b834a6551 100644
+--- a/drivers/gpu/drm/i915/display/intel_bw.c
++++ b/drivers/gpu/drm/i915/display/intel_bw.c
+@@ -966,7 +966,8 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
+ * cause.
+ */
+ if (!intel_can_enable_sagv(dev_priv, new_bw_state)) {
+- allowed_points = BIT(max_bw_point);
++ allowed_points &= ADLS_PSF_PT_MASK;
++ allowed_points |= BIT(max_bw_point);
+ drm_dbg_kms(&dev_priv->drm, "No SAGV, using single QGV point %d\n",
+ max_bw_point);
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index b5e2508db1cfe..62e763faf0aa5 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -4831,7 +4831,7 @@ intel_dp_hpd_pulse(struct intel_digital_port *dig_port, bool long_hpd)
+ struct intel_dp *intel_dp = &dig_port->dp;
+
+ if (dig_port->base.type == INTEL_OUTPUT_EDP &&
+- (long_hpd || !intel_pps_have_power(intel_dp))) {
++ (long_hpd || !intel_pps_have_panel_power_or_vdd(intel_dp))) {
+ /*
+ * vdd off can generate a long/short pulse on eDP which
+ * would require vdd on to handle it, and thus we
+diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c b/drivers/gpu/drm/i915/display/intel_hdmi.c
+index 3b5b9e7b05b7b..866ac090e3e32 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/display/intel_hdmi.c
+@@ -1836,6 +1836,7 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
+ bool has_hdmi_sink)
+ {
+ struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi);
++ enum phy phy = intel_port_to_phy(dev_priv, hdmi_to_dig_port(hdmi)->base.port);
+
+ if (clock < 25000)
+ return MODE_CLOCK_LOW;
+@@ -1856,6 +1857,14 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
+ if (IS_CHERRYVIEW(dev_priv) && clock > 216000 && clock < 240000)
+ return MODE_CLOCK_RANGE;
+
++ /* ICL+ combo PHY PLL can't generate 500-533.2 MHz */
++ if (intel_phy_is_combo(dev_priv, phy) && clock > 500000 && clock < 533200)
++ return MODE_CLOCK_RANGE;
++
++ /* ICL+ TC PHY PLL can't generate 500-532.8 MHz */
++ if (intel_phy_is_tc(dev_priv, phy) && clock > 500000 && clock < 532800)
++ return MODE_CLOCK_RANGE;
++
+ /*
+ * SNPS PHYs' MPLLB table-based programming can only handle a fixed
+ * set of link rates.
+@@ -1912,7 +1921,7 @@ static bool intel_hdmi_sink_bpc_possible(struct drm_connector *connector,
+ if (ycbcr420_output)
+ return hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_36;
+ else
+- return info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_36;
++ return info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_36;
+ case 10:
+ if (!has_hdmi_sink)
+ return false;
+@@ -1920,7 +1929,7 @@ static bool intel_hdmi_sink_bpc_possible(struct drm_connector *connector,
+ if (ycbcr420_output)
+ return hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_30;
+ else
+- return info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30;
++ return info->edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30;
+ case 8:
+ return true;
+ default:
+diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c
+index 4a2662838cd8d..df10b6898987a 100644
+--- a/drivers/gpu/drm/i915/display/intel_opregion.c
++++ b/drivers/gpu/drm/i915/display/intel_opregion.c
+@@ -375,6 +375,21 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
+ return -EINVAL;
+ }
+
++ /*
++ * The port numbering and mapping here is bizarre. The now-obsolete
++ * swsci spec supports ports numbered [0..4]. Port E is handled as a
++ * special case, but port F and beyond are not. The functionality is
++ * supposed to be obsolete for new platforms. Just bail out if the port
++ * number is out of bounds after mapping.
++ */
++ if (port > 4) {
++ drm_dbg_kms(&dev_priv->drm,
++ "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
++ intel_encoder->base.base.id, intel_encoder->base.name,
++ port_name(intel_encoder->port), port);
++ return -EINVAL;
++ }
++
+ if (!enable)
+ parm |= 4 << 8;
+
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.c b/drivers/gpu/drm/i915/display/intel_pps.c
+index e9c679bb1b2eb..5edd188d97479 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.c
++++ b/drivers/gpu/drm/i915/display/intel_pps.c
+@@ -1075,14 +1075,14 @@ static void intel_pps_vdd_sanitize(struct intel_dp *intel_dp)
+ edp_panel_vdd_schedule_off(intel_dp);
+ }
+
+-bool intel_pps_have_power(struct intel_dp *intel_dp)
++bool intel_pps_have_panel_power_or_vdd(struct intel_dp *intel_dp)
+ {
+ intel_wakeref_t wakeref;
+ bool have_power = false;
+
+ with_intel_pps_lock(intel_dp, wakeref) {
+- have_power = edp_have_panel_power(intel_dp) &&
+- edp_have_panel_vdd(intel_dp);
++ have_power = edp_have_panel_power(intel_dp) ||
++ edp_have_panel_vdd(intel_dp);
+ }
+
+ return have_power;
+diff --git a/drivers/gpu/drm/i915/display/intel_pps.h b/drivers/gpu/drm/i915/display/intel_pps.h
+index fbb47f6f453e4..e64144659d31f 100644
+--- a/drivers/gpu/drm/i915/display/intel_pps.h
++++ b/drivers/gpu/drm/i915/display/intel_pps.h
+@@ -37,7 +37,7 @@ void intel_pps_vdd_on(struct intel_dp *intel_dp);
+ void intel_pps_on(struct intel_dp *intel_dp);
+ void intel_pps_off(struct intel_dp *intel_dp);
+ void intel_pps_vdd_off_sync(struct intel_dp *intel_dp);
+-bool intel_pps_have_power(struct intel_dp *intel_dp);
++bool intel_pps_have_panel_power_or_vdd(struct intel_dp *intel_dp);
+ void intel_pps_wait_power_cycle(struct intel_dp *intel_dp);
+
+ void intel_pps_init(struct intel_dp *intel_dp);
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 00279e8c27756..b00de57cc957e 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -1816,6 +1816,9 @@ static void _intel_psr_post_plane_update(const struct intel_atomic_state *state,
+
+ mutex_lock(&psr->lock);
+
++ if (psr->sink_not_reliable)
++ goto exit;
++
+ drm_WARN_ON(&dev_priv->drm, psr->enabled && !crtc_state->active_planes);
+
+ /* Only enable if there is active planes */
+@@ -1826,6 +1829,7 @@ static void _intel_psr_post_plane_update(const struct intel_atomic_state *state,
+ if (crtc_state->crc_enabled && psr->enabled)
+ psr_force_hw_tracking_exit(intel_dp);
+
++exit:
+ mutex_unlock(&psr->lock);
+ }
+ }
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 1478c02a82cbe..936a257b511c8 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -439,7 +439,7 @@ vm_access(struct vm_area_struct *area, unsigned long addr,
+ return -EACCES;
+
+ addr -= area->vm_start;
+- if (addr >= obj->base.size)
++ if (range_overflows_t(u64, addr, len, obj->base.size))
+ return -EINVAL;
+
+ i915_gem_ww_ctx_init(&ww, true);
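The vm_access() fix above replaces "addr >= obj->base.size" with a range
check, since addr alone being in bounds says nothing about addr + len, which
can also wrap around. An equivalent overflow-safe check, open-coded here
rather than using i915's range_overflows_t() macro:

#include <linux/types.h>

/* true iff [addr, addr + len) fits inside [0, size) without wrapping */
static inline bool range_ok(u64 addr, u64 len, u64 size)
{
	return addr < size && len <= size - addr;
}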
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 0c70ab08fc0c9..73efed2f30ca7 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1146,7 +1146,7 @@ static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
+ (GRAPHICS_VER(i915) >= (from) && GRAPHICS_VER(i915) <= (until))
+
+ #define MEDIA_VER(i915) (INTEL_INFO(i915)->media.ver)
+-#define MEDIA_VER_FULL(i915) IP_VER(INTEL_INFO(i915)->media.arch, \
++#define MEDIA_VER_FULL(i915) IP_VER(INTEL_INFO(i915)->media.ver, \
+ INTEL_INFO(i915)->media.rel)
+ #define IS_MEDIA_VER(i915, from, until) \
+ (MEDIA_VER(i915) >= (from) && MEDIA_VER(i915) <= (until))
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index fae4f7818d28b..12120474c80c7 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -3722,8 +3722,7 @@ skl_setup_sagv_block_time(struct drm_i915_private *dev_priv)
+ MISSING_CASE(DISPLAY_VER(dev_priv));
+ }
+
+- /* Default to an unusable block time */
+- dev_priv->sagv_block_time_us = -1;
++ dev_priv->sagv_block_time_us = 0;
+ }
+
+ /*
+@@ -5652,7 +5651,7 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
+ result->min_ddb_alloc = max(min_ddb_alloc, blocks) + 1;
+ result->enable = true;
+
+- if (DISPLAY_VER(dev_priv) < 12)
++ if (DISPLAY_VER(dev_priv) < 12 && dev_priv->sagv_block_time_us)
+ result->can_sagv = latency >= dev_priv->sagv_block_time_us;
+ }
+
+@@ -5683,7 +5682,10 @@ static void tgl_compute_sagv_wm(const struct intel_crtc_state *crtc_state,
+ struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
+ struct skl_wm_level *sagv_wm = &plane_wm->sagv.wm0;
+ struct skl_wm_level *levels = plane_wm->wm;
+- unsigned int latency = dev_priv->wm.skl_latency[0] + dev_priv->sagv_block_time_us;
++ unsigned int latency = 0;
++
++ if (dev_priv->sagv_block_time_us)
++ latency = dev_priv->sagv_block_time_us + dev_priv->wm.skl_latency[0];
+
+ skl_compute_plane_wm(crtc_state, 0, latency,
+ wm_params, &levels[0],
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 80f1d439841a6..26aeaf0ab86ef 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -302,42 +302,42 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ if (priv->afbcd.ops) {
+ ret = priv->afbcd.ops->init(priv);
+ if (ret)
+- return ret;
++ goto free_drm;
+ }
+
+ /* Encoder Initialization */
+
+ ret = meson_encoder_cvbs_init(priv);
+ if (ret)
+- goto free_drm;
++ goto exit_afbcd;
+
+ if (has_components) {
+ ret = component_bind_all(drm->dev, drm);
+ if (ret) {
+ dev_err(drm->dev, "Couldn't bind all components\n");
+- goto free_drm;
++ goto exit_afbcd;
+ }
+ }
+
+ ret = meson_encoder_hdmi_init(priv);
+ if (ret)
+- goto free_drm;
++ goto exit_afbcd;
+
+ ret = meson_plane_create(priv);
+ if (ret)
+- goto free_drm;
++ goto exit_afbcd;
+
+ ret = meson_overlay_create(priv);
+ if (ret)
+- goto free_drm;
++ goto exit_afbcd;
+
+ ret = meson_crtc_create(priv);
+ if (ret)
+- goto free_drm;
++ goto exit_afbcd;
+
+ ret = request_irq(priv->vsync_irq, meson_irq, 0, drm->driver->name, drm);
+ if (ret)
+- goto free_drm;
++ goto exit_afbcd;
+
+ drm_mode_config_reset(drm);
+
+@@ -355,6 +355,9 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+
+ uninstall_irq:
+ free_irq(priv->vsync_irq, drm);
++exit_afbcd:
++ if (priv->afbcd.ops)
++ priv->afbcd.ops->exit(priv);
+ free_drm:
+ drm_dev_put(drm);
+
+@@ -385,10 +388,8 @@ static void meson_drv_unbind(struct device *dev)
+ free_irq(priv->vsync_irq, drm);
+ drm_dev_put(drm);
+
+- if (priv->afbcd.ops) {
+- priv->afbcd.ops->reset(priv);
+- meson_rdma_free(priv);
+- }
++ if (priv->afbcd.ops)
++ priv->afbcd.ops->exit(priv);
+ }
+
+ static const struct component_master_ops meson_drv_master_ops = {
+diff --git a/drivers/gpu/drm/meson/meson_osd_afbcd.c b/drivers/gpu/drm/meson/meson_osd_afbcd.c
+index ffc6b584dbf85..0cdbe899402f8 100644
+--- a/drivers/gpu/drm/meson/meson_osd_afbcd.c
++++ b/drivers/gpu/drm/meson/meson_osd_afbcd.c
+@@ -79,11 +79,6 @@ static bool meson_gxm_afbcd_supported_fmt(u64 modifier, uint32_t format)
+ return meson_gxm_afbcd_pixel_fmt(modifier, format) >= 0;
+ }
+
+-static int meson_gxm_afbcd_init(struct meson_drm *priv)
+-{
+- return 0;
+-}
+-
+ static int meson_gxm_afbcd_reset(struct meson_drm *priv)
+ {
+ writel_relaxed(VIU_SW_RESET_OSD1_AFBCD,
+@@ -93,6 +88,16 @@ static int meson_gxm_afbcd_reset(struct meson_drm *priv)
+ return 0;
+ }
+
++static int meson_gxm_afbcd_init(struct meson_drm *priv)
++{
++ return 0;
++}
++
++static void meson_gxm_afbcd_exit(struct meson_drm *priv)
++{
++ meson_gxm_afbcd_reset(priv);
++}
++
+ static int meson_gxm_afbcd_enable(struct meson_drm *priv)
+ {
+ writel_relaxed(FIELD_PREP(OSD1_AFBCD_ID_FIFO_THRD, 0x40) |
+@@ -172,6 +177,7 @@ static int meson_gxm_afbcd_setup(struct meson_drm *priv)
+
+ struct meson_afbcd_ops meson_afbcd_gxm_ops = {
+ .init = meson_gxm_afbcd_init,
++ .exit = meson_gxm_afbcd_exit,
+ .reset = meson_gxm_afbcd_reset,
+ .enable = meson_gxm_afbcd_enable,
+ .disable = meson_gxm_afbcd_disable,
+@@ -269,6 +275,18 @@ static bool meson_g12a_afbcd_supported_fmt(u64 modifier, uint32_t format)
+ return meson_g12a_afbcd_pixel_fmt(modifier, format) >= 0;
+ }
+
++static int meson_g12a_afbcd_reset(struct meson_drm *priv)
++{
++ meson_rdma_reset(priv);
++
++ meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB |
++ VIU_SW_RESET_G12A_OSD1_AFBCD,
++ VIU_SW_RESET);
++ meson_rdma_writel_sync(priv, 0, VIU_SW_RESET);
++
++ return 0;
++}
++
+ static int meson_g12a_afbcd_init(struct meson_drm *priv)
+ {
+ int ret;
+@@ -286,16 +304,10 @@ static int meson_g12a_afbcd_init(struct meson_drm *priv)
+ return 0;
+ }
+
+-static int meson_g12a_afbcd_reset(struct meson_drm *priv)
++static void meson_g12a_afbcd_exit(struct meson_drm *priv)
+ {
+- meson_rdma_reset(priv);
+-
+- meson_rdma_writel_sync(priv, VIU_SW_RESET_G12A_AFBC_ARB |
+- VIU_SW_RESET_G12A_OSD1_AFBCD,
+- VIU_SW_RESET);
+- meson_rdma_writel_sync(priv, 0, VIU_SW_RESET);
+-
+- return 0;
++ meson_g12a_afbcd_reset(priv);
++ meson_rdma_free(priv);
+ }
+
+ static int meson_g12a_afbcd_enable(struct meson_drm *priv)
+@@ -380,6 +392,7 @@ static int meson_g12a_afbcd_setup(struct meson_drm *priv)
+
+ struct meson_afbcd_ops meson_afbcd_g12a_ops = {
+ .init = meson_g12a_afbcd_init,
++ .exit = meson_g12a_afbcd_exit,
+ .reset = meson_g12a_afbcd_reset,
+ .enable = meson_g12a_afbcd_enable,
+ .disable = meson_g12a_afbcd_disable,
+diff --git a/drivers/gpu/drm/meson/meson_osd_afbcd.h b/drivers/gpu/drm/meson/meson_osd_afbcd.h
+index 5e5523304f42f..e77ddeb6416f3 100644
+--- a/drivers/gpu/drm/meson/meson_osd_afbcd.h
++++ b/drivers/gpu/drm/meson/meson_osd_afbcd.h
+@@ -14,6 +14,7 @@
+
+ struct meson_afbcd_ops {
+ int (*init)(struct meson_drm *priv);
++ void (*exit)(struct meson_drm *priv);
+ int (*reset)(struct meson_drm *priv);
+ int (*enable)(struct meson_drm *priv);
+ int (*disable)(struct meson_drm *priv);
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index b983541a4c530..cd9ba13ad5fc8 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -529,7 +529,10 @@ static void mgag200_set_format_regs(struct mga_device *mdev,
+ WREG_GFX(3, 0x00);
+ WREG_GFX(4, 0x00);
+ WREG_GFX(5, 0x40);
+- WREG_GFX(6, 0x05);
++ /* GCTL6 should be 0x05, but we configure memmapsl to 0xb8000 (text mode),
++ * so that it doesn't hang when running kexec/kdump on G200_SE rev42.
++ */
++ WREG_GFX(6, 0x0d);
+ WREG_GFX(7, 0x0f);
+ WREG_GFX(8, 0x0f);
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 17cfad6424db6..616be7265da4d 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -655,19 +655,23 @@ static void a6xx_set_cp_protect(struct msm_gpu *gpu)
+ {
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ const u32 *regs = a6xx_protect;
+- unsigned i, count = ARRAY_SIZE(a6xx_protect), count_max = 32;
+-
+- BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
+- BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
++ unsigned i, count, count_max;
+
+ if (adreno_is_a650(adreno_gpu)) {
+ regs = a650_protect;
+ count = ARRAY_SIZE(a650_protect);
+ count_max = 48;
++ BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
+ } else if (adreno_is_a660_family(adreno_gpu)) {
+ regs = a660_protect;
+ count = ARRAY_SIZE(a660_protect);
+ count_max = 48;
++ BUILD_BUG_ON(ARRAY_SIZE(a660_protect) > 48);
++ } else {
++ regs = a6xx_protect;
++ count = ARRAY_SIZE(a6xx_protect);
++ count_max = 32;
++ BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 1e648db439f9b..16ae0cccbbb1e 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -168,7 +168,6 @@ enum dpu_enc_rc_states {
+ * @vsync_event_work: worker to handle vsync event for autorefresh
+ * @topology: topology of the display
+ * @idle_timeout: idle timeout duration in milliseconds
+- * @dp: msm_dp pointer, for DP encoders
+ */
+ struct dpu_encoder_virt {
+ struct drm_encoder base;
+@@ -207,8 +206,6 @@ struct dpu_encoder_virt {
+ struct msm_display_topology topology;
+
+ u32 idle_timeout;
+-
+- struct msm_dp *dp;
+ };
+
+ #define to_dpu_encoder_virt(x) container_of(x, struct dpu_encoder_virt, base)
+@@ -1099,7 +1096,7 @@ static void _dpu_encoder_virt_enable_helper(struct drm_encoder *drm_enc)
+ }
+
+
+- if (dpu_enc->disp_info.intf_type == DRM_MODE_CONNECTOR_DisplayPort &&
++ if (dpu_enc->disp_info.intf_type == DRM_MODE_ENCODER_TMDS &&
+ dpu_enc->cur_master->hw_mdptop &&
+ dpu_enc->cur_master->hw_mdptop->ops.intf_audio_select)
+ dpu_enc->cur_master->hw_mdptop->ops.intf_audio_select(
+@@ -2128,8 +2125,6 @@ int dpu_encoder_setup(struct drm_device *dev, struct drm_encoder *enc,
+ timer_setup(&dpu_enc->vsync_event_timer,
+ dpu_encoder_vsync_event_handler,
+ 0);
+- else if (disp_info->intf_type == DRM_MODE_ENCODER_TMDS)
+- dpu_enc->dp = priv->dp[disp_info->h_tile_instance[0]];
+
+ INIT_DELAYED_WORK(&dpu_enc->delayed_off_work,
+ dpu_encoder_off_work);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+index f9c83d6e427ad..24fbaf562d418 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+@@ -35,6 +35,14 @@ int dpu_rm_destroy(struct dpu_rm *rm)
+ {
+ int i;
+
++ for (i = 0; i < ARRAY_SIZE(rm->dspp_blks); i++) {
++ struct dpu_hw_dspp *hw;
++
++ if (rm->dspp_blks[i]) {
++ hw = to_dpu_hw_dspp(rm->dspp_blks[i]);
++ dpu_hw_dspp_destroy(hw);
++ }
++ }
+ for (i = 0; i < ARRAY_SIZE(rm->pingpong_blks); i++) {
+ struct dpu_hw_pingpong *hw;
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index c724cb0bde9dc..8d1ea694d06cd 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1365,60 +1365,44 @@ static int dp_ctrl_enable_stream_clocks(struct dp_ctrl_private *ctrl)
+ return ret;
+ }
+
+-int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip, bool reset)
++void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable)
++{
++ struct dp_ctrl_private *ctrl;
++
++ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
++
++ dp_catalog_ctrl_reset(ctrl->catalog);
++
++ if (enable)
++ dp_catalog_ctrl_enable_irq(ctrl->catalog, enable);
++}
++
++void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl)
+ {
+ struct dp_ctrl_private *ctrl;
+ struct dp_io *dp_io;
+ struct phy *phy;
+
+- if (!dp_ctrl) {
+- DRM_ERROR("Invalid input data\n");
+- return -EINVAL;
+- }
+-
+ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
+ dp_io = &ctrl->parser->io;
+ phy = dp_io->phy;
+
+- ctrl->dp_ctrl.orientation = flip;
+-
+- if (reset)
+- dp_catalog_ctrl_reset(ctrl->catalog);
+-
+- DRM_DEBUG_DP("flip=%d\n", flip);
+ dp_catalog_ctrl_phy_reset(ctrl->catalog);
+ phy_init(phy);
+- dp_catalog_ctrl_enable_irq(ctrl->catalog, true);
+-
+- return 0;
+ }
+
+-/**
+- * dp_ctrl_host_deinit() - Uninitialize DP controller
+- * @dp_ctrl: Display Port Driver data
+- *
+- * Perform required steps to uninitialize DP controller
+- * and its resources.
+- */
+-void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl)
++void dp_ctrl_phy_exit(struct dp_ctrl *dp_ctrl)
+ {
+ struct dp_ctrl_private *ctrl;
+ struct dp_io *dp_io;
+ struct phy *phy;
+
+- if (!dp_ctrl) {
+- DRM_ERROR("Invalid input data\n");
+- return;
+- }
+-
+ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
+ dp_io = &ctrl->parser->io;
+ phy = dp_io->phy;
+
+- dp_catalog_ctrl_enable_irq(ctrl->catalog, false);
++ dp_catalog_ctrl_phy_reset(ctrl->catalog);
+ phy_exit(phy);
+-
+- DRM_DEBUG_DP("Host deinitialized successfully\n");
+ }
+
+ static bool dp_ctrl_use_fixed_nvid(struct dp_ctrl_private *ctrl)
+@@ -1488,7 +1472,10 @@ static int dp_ctrl_deinitialize_mainlink(struct dp_ctrl_private *ctrl)
+ }
+
+ phy_power_off(phy);
++
++ /* aux channel down, reinit phy */
+ phy_exit(phy);
++ phy_init(phy);
+
+ return 0;
+ }
+@@ -1761,6 +1748,9 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ /* end with failure */
+ break; /* lane == 1 already */
+ }
++
++ /* stop link training before starting retraining */
++ dp_ctrl_clear_training_pattern(ctrl);
+ }
+ }
+
+@@ -1893,8 +1883,14 @@ int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl)
+ return ret;
+ }
+
++ DRM_DEBUG_DP("Before, phy=%x init_count=%d power_on=%d\n",
++ (u32)(uintptr_t)phy, phy->init_count, phy->power_count);
++
+ phy_power_off(phy);
+
++ DRM_DEBUG_DP("After, phy=%x init_count=%d power_on=%d\n",
++ (u32)(uintptr_t)phy, phy->init_count, phy->power_count);
++
+ /* aux channel down, reinit phy */
+ phy_exit(phy);
+ phy_init(phy);
+@@ -1903,23 +1899,6 @@ int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl)
+ return ret;
+ }
+
+-void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl)
+-{
+- struct dp_ctrl_private *ctrl;
+- struct dp_io *dp_io;
+- struct phy *phy;
+-
+- ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);
+- dp_io = &ctrl->parser->io;
+- phy = dp_io->phy;
+-
+- dp_catalog_ctrl_reset(ctrl->catalog);
+-
+- phy_exit(phy);
+-
+- DRM_DEBUG_DP("DP off phy done\n");
+-}
+-
+ int dp_ctrl_off(struct dp_ctrl *dp_ctrl)
+ {
+ struct dp_ctrl_private *ctrl;
+@@ -1947,10 +1926,14 @@ int dp_ctrl_off(struct dp_ctrl *dp_ctrl)
+ DRM_ERROR("Failed to disable link clocks. ret=%d\n", ret);
+ }
+
++ DRM_DEBUG_DP("Before, phy=%x init_count=%d power_on=%d\n",
++ (u32)(uintptr_t)phy, phy->init_count, phy->power_count);
++
+ phy_power_off(phy);
+- phy_exit(phy);
+
+- DRM_DEBUG_DP("DP off done\n");
++ DRM_DEBUG_DP("After, phy=%x init_count=%d power_on=%d\n",
++ (u32)(uintptr_t)phy, phy->init_count, phy->power_count);
++
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.h b/drivers/gpu/drm/msm/dp/dp_ctrl.h
+index 2363a2df9597b..2433edbc70a6d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.h
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.h
+@@ -19,12 +19,9 @@ struct dp_ctrl {
+ u32 pixel_rate;
+ };
+
+-int dp_ctrl_host_init(struct dp_ctrl *dp_ctrl, bool flip, bool reset);
+-void dp_ctrl_host_deinit(struct dp_ctrl *dp_ctrl);
+ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl);
+ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl);
+ int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl);
+-void dp_ctrl_off_phy(struct dp_ctrl *dp_ctrl);
+ int dp_ctrl_off(struct dp_ctrl *dp_ctrl);
+ void dp_ctrl_push_idle(struct dp_ctrl *dp_ctrl);
+ void dp_ctrl_isr(struct dp_ctrl *dp_ctrl);
+@@ -34,4 +31,9 @@ struct dp_ctrl *dp_ctrl_get(struct device *dev, struct dp_link *link,
+ struct dp_power *power, struct dp_catalog *catalog,
+ struct dp_parser *parser);
+
++void dp_ctrl_reset_irq_ctrl(struct dp_ctrl *dp_ctrl, bool enable);
++void dp_ctrl_phy_init(struct dp_ctrl *dp_ctrl);
++void dp_ctrl_phy_exit(struct dp_ctrl *dp_ctrl);
++void dp_ctrl_irq_phy_exit(struct dp_ctrl *dp_ctrl);
++
+ #endif /* _DP_CTRL_H_ */
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 7cc4d21f20911..1d7f82e6eafea 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -83,6 +83,7 @@ struct dp_display_private {
+
+ /* state variables */
+ bool core_initialized;
++ bool phy_initialized;
+ bool hpd_irq_on;
+ bool audio_supported;
+
+@@ -372,36 +373,45 @@ end:
+ return rc;
+ }
+
+-static void dp_display_host_init(struct dp_display_private *dp, int reset)
++static void dp_display_host_phy_init(struct dp_display_private *dp)
+ {
+- bool flip = false;
++ DRM_DEBUG_DP("core_init=%d phy_init=%d\n",
++ dp->core_initialized, dp->phy_initialized);
+
+- DRM_DEBUG_DP("core_initialized=%d\n", dp->core_initialized);
+- if (dp->core_initialized) {
+- DRM_DEBUG_DP("DP core already initialized\n");
+- return;
++ if (!dp->phy_initialized) {
++ dp_ctrl_phy_init(dp->ctrl);
++ dp->phy_initialized = true;
++ }
++}
++
++static void dp_display_host_phy_exit(struct dp_display_private *dp)
++{
++ DRM_DEBUG_DP("core_init=%d phy_init=%d\n",
++ dp->core_initialized, dp->phy_initialized);
++
++ if (dp->phy_initialized) {
++ dp_ctrl_phy_exit(dp->ctrl);
++ dp->phy_initialized = false;
+ }
++}
+
+- if (dp->usbpd->orientation == ORIENTATION_CC2)
+- flip = true;
++static void dp_display_host_init(struct dp_display_private *dp)
++{
++ DRM_DEBUG_DP("core_initialized=%d\n", dp->core_initialized);
+
+- dp_power_init(dp->power, flip);
+- dp_ctrl_host_init(dp->ctrl, flip, reset);
++ dp_power_init(dp->power, false);
++ dp_ctrl_reset_irq_ctrl(dp->ctrl, true);
+ dp_aux_init(dp->aux);
+ dp->core_initialized = true;
+ }
+
+ static void dp_display_host_deinit(struct dp_display_private *dp)
+ {
+- if (!dp->core_initialized) {
+- DRM_DEBUG_DP("DP core not initialized\n");
+- return;
+- }
++ DRM_DEBUG_DP("core_initialized=%d\n", dp->core_initialized);
+
+- dp_ctrl_host_deinit(dp->ctrl);
++ dp_ctrl_reset_irq_ctrl(dp->ctrl, false);
+ dp_aux_deinit(dp->aux);
+ dp_power_deinit(dp->power);
+-
+ dp->core_initialized = false;
+ }
+
+@@ -409,7 +419,7 @@ static int dp_display_usbpd_configure_cb(struct device *dev)
+ {
+ struct dp_display_private *dp = dev_get_dp_display_private(dev);
+
+- dp_display_host_init(dp, false);
++ dp_display_host_phy_init(dp);
+
+ return dp_display_process_hpd_high(dp);
+ }
+@@ -530,11 +540,6 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ ret = dp_display_usbpd_configure_cb(&dp->pdev->dev);
+ if (ret) { /* link train failed */
+ dp->hpd_state = ST_DISCONNECTED;
+-
+- if (ret == -ECONNRESET) { /* cable unplugged */
+- dp->core_initialized = false;
+- }
+-
+ } else {
+ /* start sentinel checking in case of missing uevent */
+ dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
+@@ -604,8 +609,7 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ if (state == ST_DISCONNECTED) {
+ /* triggered by irq_hdp with sink_count = 0 */
+ if (dp->link->sink_count == 0) {
+- dp_ctrl_off_phy(dp->ctrl);
+- dp->core_initialized = false;
++ dp_display_host_phy_exit(dp);
+ }
+ mutex_unlock(&dp->event_mutex);
+ return 0;
+@@ -667,7 +671,6 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data
+ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+ {
+ u32 state;
+- int ret;
+
+ mutex_lock(&dp->event_mutex);
+
+@@ -692,16 +695,8 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+ return 0;
+ }
+
+- /*
+- * dp core (ahb/aux clks) must be initialized before
+- * irq_hpd be handled
+- */
+- if (dp->core_initialized) {
+- ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
+- if (ret == -ECONNRESET) { /* cable unplugged */
+- dp->core_initialized = false;
+- }
+- }
++ dp_display_usbpd_attention_cb(&dp->pdev->dev);
++
+ DRM_DEBUG_DP("hpd_state=%d\n", state);
+
+ mutex_unlock(&dp->event_mutex);
+@@ -892,12 +887,19 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+
+ dp_display->audio_enabled = false;
+
+- /* triggered by irq_hpd with sink_count = 0 */
+ if (dp->link->sink_count == 0) {
++ /*
++ * irq_hpd with sink_count = 0
++ * HDMI unplugged from the dongle
++ */
+ dp_ctrl_off_link_stream(dp->ctrl);
+ } else {
++ /*
++ * unplugged interrupt
++ * dongle unplugged from the DUT
++ */
+ dp_ctrl_off(dp->ctrl);
+- dp->core_initialized = false;
++ dp_display_host_phy_exit(dp);
+ }
+
+ dp_display->power_on = false;
+@@ -1027,7 +1029,7 @@ void msm_dp_snapshot(struct msm_disp_state *disp_state, struct msm_dp *dp)
+ static void dp_display_config_hpd(struct dp_display_private *dp)
+ {
+
+- dp_display_host_init(dp, true);
++ dp_display_host_init(dp);
+ dp_catalog_ctrl_hpd_config(dp->catalog);
+
+ /* Enable interrupt first time
+@@ -1306,20 +1308,23 @@ static int dp_pm_resume(struct device *dev)
+ dp->hpd_state = ST_DISCONNECTED;
+
+ /* turn on dp ctrl/phy */
+- dp_display_host_init(dp, true);
++ dp_display_host_init(dp);
+
+ dp_catalog_ctrl_hpd_config(dp->catalog);
+
+- /*
+- * set sink to normal operation mode -- D0
+- * before dpcd read
+- */
+- dp_link_psm_config(dp->link, &dp->panel->link_info, false);
+
+ if (dp_catalog_link_is_connected(dp->catalog)) {
++ /*
++ * set sink to normal operation mode -- D0
++ * before dpcd read
++ */
++ dp_display_host_phy_init(dp);
++ dp_link_psm_config(dp->link, &dp->panel->link_info, false);
+ sink_count = drm_dp_read_sink_count(dp->aux);
+ if (sink_count < 0)
+ sink_count = 0;
++
++ dp_display_host_phy_exit(dp);
+ }
+
+ dp->link->sink_count = sink_count;
+@@ -1358,18 +1363,16 @@ static int dp_pm_suspend(struct device *dev)
+ DRM_DEBUG_DP("Before, core_inited=%d power_on=%d\n",
+ dp->core_initialized, dp_display->power_on);
+
+- if (dp->core_initialized == true) {
+- /* mainlink enabled */
+- if (dp_power_clk_status(dp->power, DP_CTRL_PM))
+- dp_ctrl_off_link_stream(dp->ctrl);
+-
+- dp_display_host_deinit(dp);
+- }
++ /* mainlink enabled */
++ if (dp_power_clk_status(dp->power, DP_CTRL_PM))
++ dp_ctrl_off_link_stream(dp->ctrl);
+
+- dp->hpd_state = ST_SUSPENDED;
++ dp_display_host_phy_exit(dp);
+
+ /* host_init will be called at pm_resume */
+- dp->core_initialized = false;
++ dp_display_host_deinit(dp);
++
++ dp->hpd_state = ST_SUSPENDED;
+
+ DRM_DEBUG_DP("After, core_inited=%d power_on=%d\n",
+ dp->core_initialized, dp_display->power_on);
+@@ -1460,6 +1463,7 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ struct drm_encoder *encoder)
+ {
+ struct msm_drm_private *priv;
++ struct dp_display_private *dp_priv;
+ int ret;
+
+ if (WARN_ON(!encoder) || WARN_ON(!dp_display) || WARN_ON(!dev))
+@@ -1468,6 +1472,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ priv = dev->dev_private;
+ dp_display->drm_dev = dev;
+
++ dp_priv = container_of(dp_display, struct dp_display_private, dp_display);
++
+ ret = dp_display_request_irq(dp_display);
+ if (ret) {
+ DRM_ERROR("request_irq failed, ret=%d\n", ret);
+@@ -1485,6 +1491,8 @@ int msm_dp_modeset_init(struct msm_dp *dp_display, struct drm_device *dev,
+ return ret;
+ }
+
++ dp_priv->panel->connector = dp_display->connector;
++
+ priv->connectors[priv->num_connectors++] = dp_display->connector;
+
+ dp_display->bridge = msm_dp_bridge_init(dp_display, dev, encoder);
+@@ -1535,7 +1543,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ state = dp_display->hpd_state;
+
+ if (state == ST_DISPLAY_OFF)
+- dp_display_host_init(dp_display, true);
++ dp_display_host_phy_init(dp_display);
+
+ dp_display_enable(dp_display, 0);
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c
+index d4d360d19ebad..26ef41a4c1b68 100644
+--- a/drivers/gpu/drm/msm/dp/dp_drm.c
++++ b/drivers/gpu/drm/msm/dp/dp_drm.c
+@@ -169,16 +169,6 @@ struct drm_connector *dp_drm_connector_init(struct msm_dp *dp_display)
+
+ drm_connector_attach_encoder(connector, dp_display->encoder);
+
+- if (dp_display->panel_bridge) {
+- ret = drm_bridge_attach(dp_display->encoder,
+- dp_display->panel_bridge, NULL,
+- DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+- if (ret < 0) {
+- DRM_ERROR("failed to attach panel bridge: %d\n", ret);
+- return ERR_PTR(ret);
+- }
+- }
+-
+ return connector;
+ }
+
+@@ -246,5 +236,16 @@ struct drm_bridge *msm_dp_bridge_init(struct msm_dp *dp_display, struct drm_devi
+ return ERR_PTR(rc);
+ }
+
++ if (dp_display->panel_bridge) {
++ rc = drm_bridge_attach(dp_display->encoder,
++ dp_display->panel_bridge, bridge,
++ DRM_BRIDGE_ATTACH_NO_CONNECTOR);
++ if (rc < 0) {
++ DRM_ERROR("failed to attach panel bridge: %d\n", rc);
++ drm_bridge_remove(bridge);
++ return ERR_PTR(rc);
++ }
++ }
++
+ return bridge;
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 71db10c0f262d..f1418722c5492 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -212,6 +212,11 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ if (drm_add_modes_noedid(connector, 640, 480))
+ drm_set_preferred_mode(connector, 640, 480);
+ mutex_unlock(&connector->dev->mode_config.mutex);
++ } else {
++ /* always add fail-safe mode as backup mode */
++ mutex_lock(&connector->dev->mode_config.mutex);
++ drm_add_modes_noedid(connector, 640, 480);
++ mutex_unlock(&connector->dev->mode_config.mutex);
+ }
+
+ if (panel->aux_cfg_update_done) {
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+index d8128f50b0dd5..0b782cc18b3f4 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
+@@ -562,7 +562,9 @@ static int pll_10nm_register(struct dsi_pll_10nm *pll_10nm, struct clk_hw **prov
+ char clk_name[32], parent[32], vco_name[32];
+ char parent2[32], parent3[32], parent4[32];
+ struct clk_init_data vco_init = {
+- .parent_names = (const char *[]){ "xo" },
++ .parent_data = &(const struct clk_parent_data) {
++ .fw_name = "ref",
++ },
+ .num_parents = 1,
+ .name = vco_name,
+ .flags = CLK_IGNORE_UNUSED,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 7414966f198e3..75557ac99adf1 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -802,7 +802,9 @@ static int pll_14nm_register(struct dsi_pll_14nm *pll_14nm, struct clk_hw **prov
+ {
+ char clk_name[32], parent[32], vco_name[32];
+ struct clk_init_data vco_init = {
+- .parent_names = (const char *[]){ "xo" },
++ .parent_data = &(const struct clk_parent_data) {
++ .fw_name = "ref",
++ },
+ .num_parents = 1,
+ .name = vco_name,
+ .flags = CLK_IGNORE_UNUSED,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
+index 2da673a2add69..48eab80b548e1 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
+@@ -521,7 +521,9 @@ static int pll_28nm_register(struct dsi_pll_28nm *pll_28nm, struct clk_hw **prov
+ {
+ char clk_name[32], parent1[32], parent2[32], vco_name[32];
+ struct clk_init_data vco_init = {
+- .parent_names = (const char *[]){ "xo" },
++ .parent_data = &(const struct clk_parent_data) {
++ .fw_name = "ref", .name = "xo",
++ },
+ .num_parents = 1,
+ .name = vco_name,
+ .flags = CLK_IGNORE_UNUSED,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
+index 71ed4aa0dc67e..fc56cdcc9ad64 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
+@@ -385,7 +385,9 @@ static int pll_28nm_register(struct dsi_pll_28nm *pll_28nm, struct clk_hw **prov
+ {
+ char *clk_name, *parent_name, *vco_name;
+ struct clk_init_data vco_init = {
+- .parent_names = (const char *[]){ "pxo" },
++ .parent_data = &(const struct clk_parent_data) {
++ .fw_name = "ref",
++ },
+ .num_parents = 1,
+ .flags = CLK_IGNORE_UNUSED,
+ .ops = &clk_ops_dsi_pll_28nm_vco,
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 079613d2aaa98..6e506feb111fd 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -588,7 +588,9 @@ static int pll_7nm_register(struct dsi_pll_7nm *pll_7nm, struct clk_hw **provide
+ char clk_name[32], parent[32], vco_name[32];
+ char parent2[32], parent3[32], parent4[32];
+ struct clk_init_data vco_init = {
+- .parent_names = (const char *[]){ "bi_tcxo" },
++ .parent_data = &(const struct clk_parent_data) {
++ .fw_name = "ref",
++ },
+ .num_parents = 1,
+ .name = vco_name,
+ .flags = CLK_IGNORE_UNUSED,
+@@ -862,20 +864,26 @@ static int dsi_7nm_phy_enable(struct msm_dsi_phy *phy,
+ /* Alter PHY configurations if data rate less than 1.5GHZ*/
+ less_than_1500_mhz = (clk_req->bitclk_rate <= 1500000000);
+
+- /* For C-PHY, no low power settings for lower clk rate */
+- if (phy->cphy_mode)
+- less_than_1500_mhz = false;
+-
+ if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_1) {
+ vreg_ctrl_0 = less_than_1500_mhz ? 0x53 : 0x52;
+- glbl_rescode_top_ctrl = less_than_1500_mhz ? 0x3d : 0x00;
+- glbl_rescode_bot_ctrl = less_than_1500_mhz ? 0x39 : 0x3c;
++ if (phy->cphy_mode) {
++ glbl_rescode_top_ctrl = 0x00;
++ glbl_rescode_bot_ctrl = 0x3c;
++ } else {
++ glbl_rescode_top_ctrl = less_than_1500_mhz ? 0x3d : 0x00;
++ glbl_rescode_bot_ctrl = less_than_1500_mhz ? 0x39 : 0x3c;
++ }
+ glbl_str_swi_cal_sel_ctrl = 0x00;
+ glbl_hstx_str_ctrl_0 = 0x88;
+ } else {
+ vreg_ctrl_0 = less_than_1500_mhz ? 0x5B : 0x59;
+- glbl_str_swi_cal_sel_ctrl = less_than_1500_mhz ? 0x03 : 0x00;
+- glbl_hstx_str_ctrl_0 = less_than_1500_mhz ? 0x66 : 0x88;
++ if (phy->cphy_mode) {
++ glbl_str_swi_cal_sel_ctrl = 0x03;
++ glbl_hstx_str_ctrl_0 = 0x66;
++ } else {
++ glbl_str_swi_cal_sel_ctrl = less_than_1500_mhz ? 0x03 : 0x00;
++ glbl_hstx_str_ctrl_0 = less_than_1500_mhz ? 0x66 : 0x88;
++ }
+ glbl_rescode_top_ctrl = 0x03;
+ glbl_rescode_bot_ctrl = 0x3c;
+ }
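+
The five DSI PHY hunks above share one idiom: the VCO's parent moves from a legacy global-name lookup ("xo", "pxo", "bi_tcxo") to a devicetree-based clk_parent_data lookup. A minimal sketch of the construct, with a hypothetical VCO and placeholder ops; .fw_name is resolved against the consumer node's clock-names property, and the optional .name keeps a global-name fallback for older devicetrees:

#include <linux/clk-provider.h>

static const struct clk_ops example_vco_ops; /* placeholder, illustration only */

static const struct clk_init_data example_vco_init = {
	.name = "example_dsi_vco",
	.parent_data = &(const struct clk_parent_data) {
		.fw_name = "ref",	/* matched against DT clock-names */
		.name = "xo",		/* optional legacy global-name fallback */
	},
	.num_parents = 1,
	.flags = CLK_IGNORE_UNUSED,
	.ops = &example_vco_ops,
};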
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index ae2f2abc8f5a5..daf9f87477ba1 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -101,7 +101,6 @@ nv40_backlight_init(struct nouveau_encoder *encoder,
+ if (!(nvif_rd32(device, NV40_PMC_BACKLIGHT) & NV40_PMC_BACKLIGHT_MASK))
+ return -ENODEV;
+
+- props->type = BACKLIGHT_RAW;
+ props->max_brightness = 31;
+ *ops = &nv40_bl_ops;
+ return 0;
+@@ -294,7 +293,8 @@ nv50_backlight_init(struct nouveau_backlight *bl,
+ struct nouveau_drm *drm = nouveau_drm(nv_encoder->base.base.dev);
+ struct nvif_object *device = &drm->client.device.object;
+
+- if (!nvif_rd32(device, NV50_PDISP_SOR_PWM_CTL(ffs(nv_encoder->dcb->or) - 1)))
++ if (!nvif_rd32(device, NV50_PDISP_SOR_PWM_CTL(ffs(nv_encoder->dcb->or) - 1)) ||
++ nv_conn->base.status != connector_status_connected)
+ return -ENODEV;
+
+ if (nv_conn->type == DCB_CONNECTOR_eDP) {
+@@ -342,7 +342,6 @@ nv50_backlight_init(struct nouveau_backlight *bl,
+ else
+ *ops = &nva3_bl_ops;
+
+- props->type = BACKLIGHT_RAW;
+ props->max_brightness = 100;
+
+ return 0;
+@@ -410,6 +409,7 @@ nouveau_backlight_init(struct drm_connector *connector)
+ goto fail_alloc;
+ }
+
++ props.type = BACKLIGHT_RAW;
+ bl->dev = backlight_device_register(backlight_name, connector->kdev,
+ nv_encoder, ops, &props);
+ if (IS_ERR(bl->dev)) {
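+
With the per-chipset assignments of props->type removed above, the type is set once at the common registration point. A minimal sketch of that pattern, names hypothetical:

#include <linux/backlight.h>

static struct backlight_device *
example_register_backlight(struct device *parent, void *priv,
			   const struct backlight_ops *ops, int max_brightness)
{
	struct backlight_properties props = {
		.type = BACKLIGHT_RAW,	/* set centrally, not per chipset */
		.max_brightness = max_brightness,
	};

	return backlight_device_register("example_bl", parent, priv, ops,
					 &props);
}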
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
+index 667fa016496ee..a6ea89a5d51ab 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/hsfw.c
+@@ -142,11 +142,12 @@ nvkm_acr_hsfw_load_bl(struct nvkm_acr *acr, const char *name, int ver,
+
+ hsfw->imem_size = desc->code_size;
+ hsfw->imem_tag = desc->start_tag;
+- hsfw->imem = kmalloc(desc->code_size, GFP_KERNEL);
+- memcpy(hsfw->imem, data + desc->code_off, desc->code_size);
+-
++ hsfw->imem = kmemdup(data + desc->code_off, desc->code_size, GFP_KERNEL);
+ nvkm_firmware_put(fw);
+- return 0;
++ if (!hsfw->imem)
++ return -ENOMEM;
++ else
++ return 0;
+ }
+
+ int
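+
Beyond adding the missing allocation-failure check, the hunk above swaps an unchecked kmalloc()+memcpy() pair for kmemdup(). A minimal sketch of the idiom, names assumed:

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>

static int example_copy_firmware(const u8 *src, size_t len, u8 **out)
{
	*out = kmemdup(src, len, GFP_KERNEL);	/* allocate and copy in one step */
	if (!*out)
		return -ENOMEM;
	return 0;
}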
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index bbe628b306ee3..f8355de6e335d 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -360,8 +360,11 @@ int panfrost_gpu_init(struct panfrost_device *pfdev)
+
+ panfrost_gpu_init_features(pfdev);
+
+- dma_set_mask_and_coherent(pfdev->dev,
++ err = dma_set_mask_and_coherent(pfdev->dev,
+ DMA_BIT_MASK(FIELD_GET(0xff00, pfdev->features.mmu_features)));
++ if (err)
++ return err;
++
+ dma_set_max_seg_size(pfdev->dev, UINT_MAX);
+
+ irq = platform_get_irq_byname(to_platform_device(pfdev->dev), "gpu");
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 607ad5620bd99..1546abcadacf4 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -204,7 +204,7 @@ int radeon_get_monitor_bpc(struct drm_connector *connector)
+
+ /* Check if bpc is within clock limit. Try to degrade gracefully otherwise */
+ if ((bpc == 12) && (mode_clock * 3/2 > max_tmds_clock)) {
+- if ((connector->display_info.edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_30) &&
++ if ((connector->display_info.edid_hdmi_rgb444_dc_modes & DRM_EDID_HDMI_DC_30) &&
+ (mode_clock * 5/4 <= max_tmds_clock))
+ bpc = 10;
+ else
+diff --git a/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c b/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
+index 6b4759ed6bfd4..c491429f1a029 100644
+--- a/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
++++ b/drivers/gpu/drm/selftests/test-drm_dp_mst_helper.c
+@@ -131,8 +131,10 @@ sideband_msg_req_encode_decode(struct drm_dp_sideband_msg_req_body *in)
+ return false;
+
+ txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
+- if (!txmsg)
++ if (!txmsg) {
++ kfree(out);
+ return false;
++ }
+
+ drm_dp_encode_sideband_req(in, txmsg);
+ ret = drm_dp_decode_sideband_req(txmsg, out);
+diff --git a/drivers/gpu/drm/tegra/dp.c b/drivers/gpu/drm/tegra/dp.c
+index 70dfb7d1dec55..f5535eb04c6b1 100644
+--- a/drivers/gpu/drm/tegra/dp.c
++++ b/drivers/gpu/drm/tegra/dp.c
+@@ -549,6 +549,15 @@ static void drm_dp_link_get_adjustments(struct drm_dp_link *link,
+ {
+ struct drm_dp_link_train_set *adjust = &link->train.adjust;
+ unsigned int i;
++ u8 post_cursor;
++ int err;
++
++ err = drm_dp_dpcd_read(link->aux, DP_ADJUST_REQUEST_POST_CURSOR2,
++ &post_cursor, sizeof(post_cursor));
++ if (err < 0) {
++ DRM_ERROR("failed to read post_cursor2: %d\n", err);
++ post_cursor = 0;
++ }
+
+ for (i = 0; i < link->lanes; i++) {
+ adjust->voltage_swing[i] =
+@@ -560,7 +569,7 @@ static void drm_dp_link_get_adjustments(struct drm_dp_link *link,
+ DP_TRAIN_PRE_EMPHASIS_SHIFT;
+
+ adjust->post_cursor[i] =
+- drm_dp_get_adjust_request_post_cursor(status, i);
++ (post_cursor >> (i << 1)) & 0x3;
+ }
+ }
+
+diff --git a/drivers/gpu/drm/tegra/dsi.c b/drivers/gpu/drm/tegra/dsi.c
+index f46d377f0c304..de1333dc0d867 100644
+--- a/drivers/gpu/drm/tegra/dsi.c
++++ b/drivers/gpu/drm/tegra/dsi.c
+@@ -1538,8 +1538,10 @@ static int tegra_dsi_ganged_probe(struct tegra_dsi *dsi)
+ dsi->slave = platform_get_drvdata(gangster);
+ of_node_put(np);
+
+- if (!dsi->slave)
++ if (!dsi->slave) {
++ put_device(&gangster->dev);
+ return -EPROBE_DEFER;
++ }
+
+ dsi->slave->master = dsi;
+ }
+diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
+index 04146da2d1d8e..11576e0297e41 100644
+--- a/drivers/gpu/drm/tiny/simpledrm.c
++++ b/drivers/gpu/drm/tiny/simpledrm.c
+@@ -798,6 +798,9 @@ static int simpledrm_device_init_modeset(struct simpledrm_device *sdev)
+ if (ret)
+ return ret;
+ drm_connector_helper_add(connector, &simpledrm_connector_helper_funcs);
++ drm_connector_set_panel_orientation_with_quirk(connector,
++ DRM_MODE_PANEL_ORIENTATION_UNKNOWN,
++ mode->hdisplay, mode->vdisplay);
+
+ formats = simpledrm_device_formats(sdev, &nformats);
+
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
+index bd46396a1ae07..1afcd54fbbd53 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.c
++++ b/drivers/gpu/drm/v3d/v3d_drv.c
+@@ -219,6 +219,7 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ int ret;
+ u32 mmu_debug;
+ u32 ident1;
++ u64 mask;
+
+ v3d = devm_drm_dev_alloc(dev, &v3d_drm_driver, struct v3d_dev, drm);
+ if (IS_ERR(v3d))
+@@ -237,8 +238,11 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ return ret;
+
+ mmu_debug = V3D_READ(V3D_MMU_DEBUG_INFO);
+- dma_set_mask_and_coherent(dev,
+- DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH)));
++ mask = DMA_BIT_MASK(30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_PA_WIDTH));
++ ret = dma_set_mask_and_coherent(dev, mask);
++ if (ret)
++ return ret;
++
+ v3d->va_width = 30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_VA_WIDTH);
+
+ ident1 = V3D_READ(V3D_HUB_IDENT1);
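+
The panfrost and v3d hunks fix the same omission: dma_set_mask_and_coherent() fails when the platform cannot satisfy the requested addressing width, and continuing with a rejected mask risks silent DMA corruption. A sketch of the checked form, width parameter assumed:

#include <linux/dma-mapping.h>

static int example_configure_dma(struct device *dev, unsigned int pa_bits)
{
	int err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(pa_bits));

	if (err)
		return err;	/* do not touch the device with a rejected mask */

	return 0;
}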
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 6994f8c0e02ea..80c685ab3e30d 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -447,7 +447,6 @@ static int host1x_probe(struct platform_device *pdev)
+ if (syncpt_irq < 0)
+ return syncpt_irq;
+
+- host1x_bo_cache_init(&host->cache);
+ mutex_init(&host->devices_lock);
+ INIT_LIST_HEAD(&host->devices);
+ INIT_LIST_HEAD(&host->list);
+@@ -489,10 +488,12 @@ static int host1x_probe(struct platform_device *pdev)
+ if (err)
+ return err;
+
++ host1x_bo_cache_init(&host->cache);
++
+ err = host1x_iommu_init(host);
+ if (err < 0) {
+ dev_err(&pdev->dev, "failed to setup IOMMU: %d\n", err);
+- return err;
++ goto destroy_cache;
+ }
+
+ err = host1x_channel_list_init(&host->channel_list,
+@@ -553,6 +554,8 @@ free_channels:
+ host1x_channel_list_free(&host->channel_list);
+ iommu_exit:
+ host1x_iommu_exit(host);
++destroy_cache:
++ host1x_bo_cache_destroy(&host->cache);
+
+ return err;
+ }
+@@ -568,6 +571,7 @@ static int host1x_remove(struct platform_device *pdev)
+
+ host1x_intr_deinit(host);
+ host1x_syncpt_deinit(host);
++ host1x_channel_list_free(&host->channel_list);
+ host1x_iommu_exit(host);
+ host1x_bo_cache_destroy(&host->cache);
+
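The host1x fix restores the usual probe discipline: acquire resources in order, and unwind them in exact reverse order on failure, one goto label per acquired resource. A minimal self-contained sketch with hypothetical helpers standing in for the cache, IOMMU and channel-list setup:

static int example_init_a(void) { return 0; }	/* stubs for illustration */
static void example_exit_a(void) { }
static int example_init_b(void) { return 0; }
static void example_exit_b(void) { }
static int example_init_c(void) { return 0; }

static int example_probe(void)
{
	int err;

	err = example_init_a();
	if (err)
		return err;

	err = example_init_b();
	if (err)
		goto exit_a;

	err = example_init_c();
	if (err)
		goto exit_b;

	return 0;

exit_b:
	example_exit_b();
exit_a:
	example_exit_a();
	return err;
}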
+diff --git a/drivers/greybus/svc.c b/drivers/greybus/svc.c
+index ce7740ef449ba..51d0875a34800 100644
+--- a/drivers/greybus/svc.c
++++ b/drivers/greybus/svc.c
+@@ -866,8 +866,14 @@ static int gb_svc_hello(struct gb_operation *op)
+
+ gb_svc_debugfs_init(svc);
+
+- return gb_svc_queue_deferred_request(op);
++ ret = gb_svc_queue_deferred_request(op);
++ if (ret)
++ goto err_remove_debugfs;
++
++ return 0;
+
++err_remove_debugfs:
++ gb_svc_debugfs_exit(svc);
+ err_unregister_device:
+ gb_svc_watchdog_destroy(svc);
+ device_del(&svc->dev);
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 6726567d72976..8d6fc50dab65f 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -618,6 +618,17 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+ if (report_type == HID_OUTPUT_REPORT)
+ return -EINVAL;
+
++ /*
++ * In case of unnumbered reports the response from the device will
++ * not have the report ID that the upper layers expect, so we need
++ * to stash it in the buffer ourselves and adjust the data size.
++ */
++ if (!report_number) {
++ buf[0] = 0;
++ buf++;
++ count--;
++ }
++
+ /* +2 bytes to include the size of the reply in the query buffer */
+ ask_count = min(count + 2, (size_t)ihid->bufsize);
+
+@@ -639,6 +650,9 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+ count = min(count, ret_count - 2);
+ memcpy(buf, ihid->rawbuf + 2, count);
+
++ if (!report_number)
++ count++;
++
+ return count;
+ }
+
+@@ -655,17 +669,19 @@ static int i2c_hid_output_raw_report(struct hid_device *hid, __u8 *buf,
+
+ mutex_lock(&ihid->reset_lock);
+
+- if (report_id) {
+- buf++;
+- count--;
+- }
+-
++ /*
++ * Note that both numbered and unnumbered reports passed here
++ * are supposed to have the report ID stored in the 1st byte of the
++ * buffer, so we strip it off unconditionally before passing payload
++ * to i2c_hid_set_or_send_report which takes care of encoding
++ * everything properly.
++ */
+ ret = i2c_hid_set_or_send_report(client,
+ report_type == HID_FEATURE_REPORT ? 0x03 : 0x02,
+- report_id, buf, count, use_data);
++ report_id, buf + 1, count - 1, use_data);
+
+- if (report_id && ret >= 0)
+- ret++; /* add report_id to the number of transfered bytes */
++ if (ret >= 0)
++ ret++; /* add report_id to the number of transferred bytes */
+
+ mutex_unlock(&ihid->reset_lock);
+
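The whole i2c-hid fix rests on one HID-core convention: byte 0 of a raw report buffer carries the report ID, and ID 0 marks a device that uses unnumbered reports. A hedged sketch (helper and names hypothetical) of normalizing a payload to that convention:

#include <linux/string.h>
#include <linux/types.h>

/* dst must have room for len + 1 bytes; returns the normalized length. */
static size_t example_prefix_report_id(u8 *dst, const u8 *payload,
				       size_t len, u8 report_id)
{
	dst[0] = report_id;		/* 0 == unnumbered report */
	memcpy(dst + 1, payload, len);
	return len + 1;
}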
+diff --git a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+index e24988586710d..16aa030af8453 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
++++ b/drivers/hid/intel-ish-hid/ishtp-fw-loader.c
+@@ -661,21 +661,12 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ */
+ payload_max_size &= ~(L1_CACHE_BYTES - 1);
+
+- dma_buf = kmalloc(payload_max_size, GFP_KERNEL | GFP_DMA32);
++ dma_buf = dma_alloc_coherent(devc, payload_max_size, &dma_buf_phy, GFP_KERNEL);
+ if (!dma_buf) {
+ client_data->flag_retry = true;
+ return -ENOMEM;
+ }
+
+- dma_buf_phy = dma_map_single(devc, dma_buf, payload_max_size,
+- DMA_TO_DEVICE);
+- if (dma_mapping_error(devc, dma_buf_phy)) {
+- dev_err(cl_data_to_dev(client_data), "DMA map failed\n");
+- client_data->flag_retry = true;
+- rv = -ENOMEM;
+- goto end_err_dma_buf_release;
+- }
+-
+ ldr_xfer_dma_frag.fragment.hdr.command = LOADER_CMD_XFER_FRAGMENT;
+ ldr_xfer_dma_frag.fragment.xfer_mode = LOADER_XFER_MODE_DIRECT_DMA;
+ ldr_xfer_dma_frag.ddr_phys_addr = (u64)dma_buf_phy;
+@@ -695,14 +686,7 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ ldr_xfer_dma_frag.fragment.size = fragment_size;
+ memcpy(dma_buf, &fw->data[fragment_offset], fragment_size);
+
+- dma_sync_single_for_device(devc, dma_buf_phy,
+- payload_max_size,
+- DMA_TO_DEVICE);
+-
+- /*
+- * Flush cache here because the dma_sync_single_for_device()
+- * does not do for x86.
+- */
++ /* Flush cache to be sure the data is in main memory. */
+ clflush_cache_range(dma_buf, payload_max_size);
+
+ dev_dbg(cl_data_to_dev(client_data),
+@@ -725,15 +709,8 @@ static int ish_fw_xfer_direct_dma(struct ishtp_cl_data *client_data,
+ fragment_offset += fragment_size;
+ }
+
+- dma_unmap_single(devc, dma_buf_phy, payload_max_size, DMA_TO_DEVICE);
+- kfree(dma_buf);
+- return 0;
+-
+ end_err_resp_buf_release:
+- /* Free ISH buffer if not done already, in error case */
+- dma_unmap_single(devc, dma_buf_phy, payload_max_size, DMA_TO_DEVICE);
+-end_err_dma_buf_release:
+- kfree(dma_buf);
++ dma_free_coherent(devc, payload_max_size, dma_buf, dma_buf_phy);
+ return rv;
+ }
+
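Replacing kmalloc()+dma_map_single() with dma_alloc_coherent() removes the explicit map/unmap and mapping-error handling, since the buffer is allocated already visible to the device. A sketch of the resulting lifecycle, sizes and names assumed:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int example_dma_transfer(struct device *dev, size_t size)
{
	dma_addr_t phys;
	void *buf = dma_alloc_coherent(dev, size, &phys, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	/* ... fill buf, hand phys to the device, wait for completion ... */

	dma_free_coherent(dev, size, buf, phys);
	return 0;
}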
+diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
+index f2d05bff42453..439f99b8b5de2 100644
+--- a/drivers/hv/hv_balloon.c
++++ b/drivers/hv/hv_balloon.c
+@@ -1563,7 +1563,7 @@ static void balloon_onchannelcallback(void *context)
+ break;
+
+ default:
+- pr_warn("Unhandled message: type: %d\n", dm_hdr->type);
++ pr_warn_ratelimited("Unhandled message: type: %d\n", dm_hdr->type);
+
+ }
+ }
+diff --git a/drivers/hwmon/pmbus/pmbus.h b/drivers/hwmon/pmbus/pmbus.h
+index e0aa8aa46d8c4..ef3a8ecde4dfc 100644
+--- a/drivers/hwmon/pmbus/pmbus.h
++++ b/drivers/hwmon/pmbus/pmbus.h
+@@ -319,6 +319,7 @@ enum pmbus_fan_mode { percent = 0, rpm };
+ /*
+ * STATUS_VOUT, STATUS_INPUT
+ */
++#define PB_VOLTAGE_VIN_OFF BIT(3)
+ #define PB_VOLTAGE_UV_FAULT BIT(4)
+ #define PB_VOLTAGE_UV_WARNING BIT(5)
+ #define PB_VOLTAGE_OV_WARNING BIT(6)
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index ac2fbee1ba9c0..ca0bfaf2f6911 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -1373,7 +1373,7 @@ static const struct pmbus_limit_attr vin_limit_attrs[] = {
+ .reg = PMBUS_VIN_UV_FAULT_LIMIT,
+ .attr = "lcrit",
+ .alarm = "lcrit_alarm",
+- .sbit = PB_VOLTAGE_UV_FAULT,
++ .sbit = PB_VOLTAGE_UV_FAULT | PB_VOLTAGE_VIN_OFF,
+ }, {
+ .reg = PMBUS_VIN_OV_WARN_LIMIT,
+ .attr = "max",
+@@ -2391,10 +2391,14 @@ static int pmbus_regulator_is_enabled(struct regulator_dev *rdev)
+ {
+ struct device *dev = rdev_get_dev(rdev);
+ struct i2c_client *client = to_i2c_client(dev->parent);
++ struct pmbus_data *data = i2c_get_clientdata(client);
+ u8 page = rdev_get_id(rdev);
+ int ret;
+
++ mutex_lock(&data->update_lock);
+ ret = pmbus_read_byte_data(client, page, PMBUS_OPERATION);
++ mutex_unlock(&data->update_lock);
++
+ if (ret < 0)
+ return ret;
+
+@@ -2405,11 +2409,17 @@ static int _pmbus_regulator_on_off(struct regulator_dev *rdev, bool enable)
+ {
+ struct device *dev = rdev_get_dev(rdev);
+ struct i2c_client *client = to_i2c_client(dev->parent);
++ struct pmbus_data *data = i2c_get_clientdata(client);
+ u8 page = rdev_get_id(rdev);
++ int ret;
+
+- return pmbus_update_byte_data(client, page, PMBUS_OPERATION,
+- PB_OPERATION_CONTROL_ON,
+- enable ? PB_OPERATION_CONTROL_ON : 0);
++ mutex_lock(&data->update_lock);
++ ret = pmbus_update_byte_data(client, page, PMBUS_OPERATION,
++ PB_OPERATION_CONTROL_ON,
++ enable ? PB_OPERATION_CONTROL_ON : 0);
++ mutex_unlock(&data->update_lock);
++
++ return ret;
+ }
+
+ static int pmbus_regulator_enable(struct regulator_dev *rdev)
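+
Both regulator hunks wrap the paged register access in data->update_lock: pmbus_read_byte_data() selects the PMBus page and then reads, a two-step sequence that must not interleave with the hwmon update path. A generic sketch of the locked-accessor shape, types assumed:

#include <linux/mutex.h>

static int example_locked_read(struct mutex *lock, int (*read)(void *ctx),
			       void *ctx)
{
	int ret;

	mutex_lock(lock);
	ret = read(ctx);	/* page select + read now happen atomically */
	mutex_unlock(lock);

	return ret;
}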
+diff --git a/drivers/hwmon/sch56xx-common.c b/drivers/hwmon/sch56xx-common.c
+index 40cdadad35e52..f85eede6d7663 100644
+--- a/drivers/hwmon/sch56xx-common.c
++++ b/drivers/hwmon/sch56xx-common.c
+@@ -422,7 +422,7 @@ void sch56xx_watchdog_register(struct device *parent, u16 addr, u32 revision,
+ data->wddev.max_timeout = 255 * 60;
+ watchdog_set_nowayout(&data->wddev, nowayout);
+ if (output_enable & SCH56XX_WDOG_OUTPUT_ENABLE)
+- set_bit(WDOG_ACTIVE, &data->wddev.status);
++ set_bit(WDOG_HW_RUNNING, &data->wddev.status);
+
+ /* Since the watchdog uses a downcounter there is no register to read
+ the BIOS set timeout from (if any was set at all) ->
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index a0640fa5c55bd..57e94424a8d65 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -367,8 +367,12 @@ static ssize_t mode_store(struct device *dev,
+ mode = ETM_MODE_QELEM(config->mode);
+ /* start by clearing QE bits */
+ config->cfg &= ~(BIT(13) | BIT(14));
+- /* if supported, Q elements with instruction counts are enabled */
+- if ((mode & BIT(0)) && (drvdata->q_support & BIT(0)))
++ /*
++ * if supported, Q elements with instruction counts are enabled.
++ * Always set the low bit for any requested mode. Valid combos are
++ * 0b00, 0b01 and 0b11.
++ */
++ if (mode && drvdata->q_support)
+ config->cfg |= BIT(13);
+ /*
+ * if supported, Q elements with and without instruction
+diff --git a/drivers/hwtracing/coresight/coresight-syscfg.c b/drivers/hwtracing/coresight/coresight-syscfg.c
+index 098fc34c48293..11850fd8c3b5b 100644
+--- a/drivers/hwtracing/coresight/coresight-syscfg.c
++++ b/drivers/hwtracing/coresight/coresight-syscfg.c
+@@ -1049,7 +1049,7 @@ static int cscfg_create_device(void)
+
+ err = device_register(dev);
+ if (err)
+- cscfg_dev_release(dev);
++ put_device(dev);
+
+ create_dev_exit_unlock:
+ mutex_unlock(&cscfg_mutex);
+diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c
+index 5149454eef4a5..f72c6576d8a36 100644
+--- a/drivers/i2c/busses/i2c-bcm2835.c
++++ b/drivers/i2c/busses/i2c-bcm2835.c
+@@ -454,18 +454,20 @@ static int bcm2835_i2c_probe(struct platform_device *pdev)
+ ret = clk_prepare_enable(i2c_dev->bus_clk);
+ if (ret) {
+ dev_err(&pdev->dev, "Couldn't prepare clock");
+- return ret;
++ goto err_put_exclusive_rate;
+ }
+
+ i2c_dev->irq = platform_get_irq(pdev, 0);
+- if (i2c_dev->irq < 0)
+- return i2c_dev->irq;
++ if (i2c_dev->irq < 0) {
++ ret = i2c_dev->irq;
++ goto err_disable_unprepare_clk;
++ }
+
+ ret = request_irq(i2c_dev->irq, bcm2835_i2c_isr, IRQF_SHARED,
+ dev_name(&pdev->dev), i2c_dev);
+ if (ret) {
+ dev_err(&pdev->dev, "Could not request IRQ\n");
+- return -ENODEV;
++ goto err_disable_unprepare_clk;
+ }
+
+ adap = &i2c_dev->adapter;
+@@ -489,7 +491,16 @@ static int bcm2835_i2c_probe(struct platform_device *pdev)
+
+ ret = i2c_add_adapter(adap);
+ if (ret)
+- free_irq(i2c_dev->irq, i2c_dev);
++ goto err_free_irq;
++
++ return 0;
++
++err_free_irq:
++ free_irq(i2c_dev->irq, i2c_dev);
++err_disable_unprepare_clk:
++ clk_disable_unprepare(i2c_dev->bus_clk);
++err_put_exclusive_rate:
++ clk_rate_exclusive_put(i2c_dev->bus_clk);
+
+ return ret;
+ }
+diff --git a/drivers/i2c/busses/i2c-meson.c b/drivers/i2c/busses/i2c-meson.c
+index ef73a42577cc7..07eb819072c4f 100644
+--- a/drivers/i2c/busses/i2c-meson.c
++++ b/drivers/i2c/busses/i2c-meson.c
+@@ -465,18 +465,18 @@ static int meson_i2c_probe(struct platform_device *pdev)
+ */
+ meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_START, 0);
+
+- ret = i2c_add_adapter(&i2c->adap);
+- if (ret < 0) {
+- clk_disable_unprepare(i2c->clk);
+- return ret;
+- }
+-
+ /* Disable filtering */
+ meson_i2c_set_mask(i2c, REG_SLAVE_ADDR,
+ REG_SLV_SDA_FILTER | REG_SLV_SCL_FILTER, 0);
+
+ meson_i2c_set_clk_div(i2c, timings.bus_freq_hz);
+
++ ret = i2c_add_adapter(&i2c->adap);
++ if (ret < 0) {
++ clk_disable_unprepare(i2c->clk);
++ return ret;
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-pasemi-core.c b/drivers/i2c/busses/i2c-pasemi-core.c
+index 4e161a4089d85..7728c8460dc0f 100644
+--- a/drivers/i2c/busses/i2c-pasemi-core.c
++++ b/drivers/i2c/busses/i2c-pasemi-core.c
+@@ -333,7 +333,6 @@ int pasemi_i2c_common_probe(struct pasemi_smbus *smbus)
+ smbus->adapter.owner = THIS_MODULE;
+ snprintf(smbus->adapter.name, sizeof(smbus->adapter.name),
+ "PA Semi SMBus adapter (%s)", dev_name(smbus->dev));
+- smbus->adapter.class = I2C_CLASS_HWMON | I2C_CLASS_SPD;
+ smbus->adapter.algo = &smbus_algorithm;
+ smbus->adapter.algo_data = smbus;
+
+diff --git a/drivers/i2c/busses/i2c-pasemi-pci.c b/drivers/i2c/busses/i2c-pasemi-pci.c
+index 1ab1f28744fb2..cfc89e04eb94c 100644
+--- a/drivers/i2c/busses/i2c-pasemi-pci.c
++++ b/drivers/i2c/busses/i2c-pasemi-pci.c
+@@ -56,6 +56,7 @@ static int pasemi_smb_pci_probe(struct pci_dev *dev,
+ if (!smbus->ioaddr)
+ return -EBUSY;
+
++ smbus->adapter.class = I2C_CLASS_HWMON | I2C_CLASS_SPD;
+ error = pasemi_i2c_common_probe(smbus);
+ if (error)
+ return error;
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index eb789cfb99739..ffefe3c482e9c 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -734,7 +734,6 @@ static const struct i2c_adapter_quirks xiic_quirks = {
+
+ static const struct i2c_adapter xiic_adapter = {
+ .owner = THIS_MODULE,
+- .name = DRIVER_NAME,
+ .class = I2C_CLASS_DEPRECATED,
+ .algo = &xiic_algorithm,
+ .quirks = &xiic_quirks,
+@@ -771,6 +770,8 @@ static int xiic_i2c_probe(struct platform_device *pdev)
+ i2c_set_adapdata(&i2c->adap, i2c);
+ i2c->adap.dev.parent = &pdev->dev;
+ i2c->adap.dev.of_node = pdev->dev.of_node;
++ snprintf(i2c->adap.name, sizeof(i2c->adap.name),
++ DRIVER_NAME " %s", pdev->name);
+
+ mutex_init(&i2c->lock);
+
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index 5365199a31f41..f7a7405d4350a 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -261,7 +261,7 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+
+ err = device_create_file(&pdev->dev, &dev_attr_available_masters);
+ if (err)
+- goto err_rollback;
++ goto err_rollback_activation;
+
+ err = device_create_file(&pdev->dev, &dev_attr_current_master);
+ if (err)
+@@ -271,8 +271,9 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+
+ err_rollback_available:
+ device_remove_file(&pdev->dev, &dev_attr_available_masters);
+-err_rollback:
++err_rollback_activation:
+ i2c_demux_deactivate_master(priv);
++err_rollback:
+ for (j = 0; j < i; j++) {
+ of_node_put(priv->chan[j].parent_np);
+ of_changeset_destroy(&priv->chan[j].chgset);
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index 64b82b4503ada..a21fdb015c6c0 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -176,6 +176,7 @@ static const struct mma8452_event_regs trans_ev_regs = {
+ * @enabled_events: event flags enabled and handled by this driver
+ */
+ struct mma_chip_info {
++ const char *name;
+ u8 chip_id;
+ const struct iio_chan_spec *channels;
+ int num_channels;
+@@ -379,8 +380,8 @@ static ssize_t mma8452_show_scale_avail(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- struct mma8452_data *data = iio_priv(i2c_get_clientdata(
+- to_i2c_client(dev)));
++ struct iio_dev *indio_dev = dev_to_iio_dev(dev);
++ struct mma8452_data *data = iio_priv(indio_dev);
+
+ return mma8452_show_int_plus_micros(buf, data->chip_info->mma_scales,
+ ARRAY_SIZE(data->chip_info->mma_scales));
+@@ -1301,6 +1302,7 @@ enum {
+
+ static const struct mma_chip_info mma_chip_info_table[] = {
+ [mma8451] = {
++ .name = "mma8451",
+ .chip_id = MMA8451_DEVICE_ID,
+ .channels = mma8451_channels,
+ .num_channels = ARRAY_SIZE(mma8451_channels),
+@@ -1325,6 +1327,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ MMA8452_INT_FF_MT,
+ },
+ [mma8452] = {
++ .name = "mma8452",
+ .chip_id = MMA8452_DEVICE_ID,
+ .channels = mma8452_channels,
+ .num_channels = ARRAY_SIZE(mma8452_channels),
+@@ -1341,6 +1344,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ MMA8452_INT_FF_MT,
+ },
+ [mma8453] = {
++ .name = "mma8453",
+ .chip_id = MMA8453_DEVICE_ID,
+ .channels = mma8453_channels,
+ .num_channels = ARRAY_SIZE(mma8453_channels),
+@@ -1357,6 +1361,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ MMA8452_INT_FF_MT,
+ },
+ [mma8652] = {
++ .name = "mma8652",
+ .chip_id = MMA8652_DEVICE_ID,
+ .channels = mma8652_channels,
+ .num_channels = ARRAY_SIZE(mma8652_channels),
+@@ -1366,6 +1371,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ .enabled_events = MMA8452_INT_FF_MT,
+ },
+ [mma8653] = {
++ .name = "mma8653",
+ .chip_id = MMA8653_DEVICE_ID,
+ .channels = mma8653_channels,
+ .num_channels = ARRAY_SIZE(mma8653_channels),
+@@ -1380,6 +1386,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
+ .enabled_events = MMA8452_INT_FF_MT,
+ },
+ [fxls8471] = {
++ .name = "fxls8471",
+ .chip_id = FXLS8471_DEVICE_ID,
+ .channels = mma8451_channels,
+ .num_channels = ARRAY_SIZE(mma8451_channels),
+@@ -1522,13 +1529,6 @@ static int mma8452_probe(struct i2c_client *client,
+ struct mma8452_data *data;
+ struct iio_dev *indio_dev;
+ int ret;
+- const struct of_device_id *match;
+-
+- match = of_match_device(mma8452_dt_ids, &client->dev);
+- if (!match) {
+- dev_err(&client->dev, "unknown device model\n");
+- return -ENODEV;
+- }
+
+ indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ if (!indio_dev)
+@@ -1537,7 +1537,14 @@ static int mma8452_probe(struct i2c_client *client,
+ data = iio_priv(indio_dev);
+ data->client = client;
+ mutex_init(&data->lock);
+- data->chip_info = match->data;
++
++ data->chip_info = device_get_match_data(&client->dev);
++ if (!data->chip_info && id)
++ data->chip_info = &mma_chip_info_table[id->driver_data];
++ if (!data->chip_info) {
++ dev_err(&client->dev, "unknown device model\n");
++ return -ENODEV;
++ }
+
+ data->vdd_reg = devm_regulator_get(&client->dev, "vdd");
+ if (IS_ERR(data->vdd_reg))
+@@ -1581,11 +1588,11 @@ static int mma8452_probe(struct i2c_client *client,
+ }
+
+ dev_info(&client->dev, "registering %s accelerometer; ID 0x%x\n",
+- match->compatible, data->chip_info->chip_id);
++ data->chip_info->name, data->chip_info->chip_id);
+
+ i2c_set_clientdata(client, indio_dev);
+ indio_dev->info = &mma8452_info;
+- indio_dev->name = id->name;
++ indio_dev->name = data->chip_info->name;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->channels = data->chip_info->channels;
+ indio_dev->num_channels = data->chip_info->num_channels;
+@@ -1810,7 +1817,7 @@ MODULE_DEVICE_TABLE(i2c, mma8452_id);
+ static struct i2c_driver mma8452_driver = {
+ .driver = {
+ .name = "mma8452",
+- .of_match_table = of_match_ptr(mma8452_dt_ids),
++ .of_match_table = mma8452_dt_ids,
+ .pm = &mma8452_pm_ops,
+ },
+ .probe = mma8452_probe,
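+
The probe rework leans on device_get_match_data(), which returns the per-chip data for both OF and ACPI enumeration, leaving the legacy i2c_device_id table as a fallback for plain i2c instantiation. A condensed sketch of that lookup order, table and types hypothetical:

#include <linux/i2c.h>
#include <linux/property.h>

struct example_chip_info { const char *name; };

static const struct example_chip_info example_chip_table[2];

static const struct example_chip_info *
example_get_chip_info(struct i2c_client *client,
		      const struct i2c_device_id *id)
{
	const struct example_chip_info *info;

	info = device_get_match_data(&client->dev);	/* OF or ACPI match */
	if (!info && id)
		info = &example_chip_table[id->driver_data]; /* legacy table */

	return info;	/* NULL means unknown model */
}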
+diff --git a/drivers/iio/adc/aspeed_adc.c b/drivers/iio/adc/aspeed_adc.c
+index e939b84cbb561..0793d2474cdcf 100644
+--- a/drivers/iio/adc/aspeed_adc.c
++++ b/drivers/iio/adc/aspeed_adc.c
+@@ -539,7 +539,9 @@ static int aspeed_adc_probe(struct platform_device *pdev)
+ data->clk_scaler = devm_clk_hw_register_divider(
+ &pdev->dev, clk_name, clk_parent_name, scaler_flags,
+ data->base + ASPEED_REG_CLOCK_CONTROL, 0,
+- data->model_data->scaler_bit_width, 0, &data->clk_lock);
++ data->model_data->scaler_bit_width,
++ data->model_data->need_prescaler ? CLK_DIVIDER_ONE_BASED : 0,
++ &data->clk_lock);
+ if (IS_ERR(data->clk_scaler))
+ return PTR_ERR(data->clk_scaler);
+
+diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c
+index afdb59e0b5267..d0223e39d59af 100644
+--- a/drivers/iio/adc/twl6030-gpadc.c
++++ b/drivers/iio/adc/twl6030-gpadc.c
+@@ -911,6 +911,8 @@ static int twl6030_gpadc_probe(struct platform_device *pdev)
+ ret = devm_request_threaded_irq(dev, irq, NULL,
+ twl6030_gpadc_irq_handler,
+ IRQF_ONESHOT, "twl6030_gpadc", indio_dev);
++ if (ret)
++ return ret;
+
+ ret = twl6030_gpadc_enable_irq(TWL6030_GPADC_RT_SW1_EOC_MASK);
+ if (ret < 0) {
+diff --git a/drivers/iio/adc/xilinx-ams.c b/drivers/iio/adc/xilinx-ams.c
+index 8343c5f74121e..7bf097fa10cb7 100644
+--- a/drivers/iio/adc/xilinx-ams.c
++++ b/drivers/iio/adc/xilinx-ams.c
+@@ -91,8 +91,8 @@
+
+ #define AMS_CONF1_SEQ_MASK GENMASK(15, 12)
+ #define AMS_CONF1_SEQ_DEFAULT FIELD_PREP(AMS_CONF1_SEQ_MASK, 0)
+-#define AMS_CONF1_SEQ_CONTINUOUS FIELD_PREP(AMS_CONF1_SEQ_MASK, 1)
+-#define AMS_CONF1_SEQ_SINGLE_CHANNEL FIELD_PREP(AMS_CONF1_SEQ_MASK, 2)
++#define AMS_CONF1_SEQ_CONTINUOUS FIELD_PREP(AMS_CONF1_SEQ_MASK, 2)
++#define AMS_CONF1_SEQ_SINGLE_CHANNEL FIELD_PREP(AMS_CONF1_SEQ_MASK, 3)
+
+ #define AMS_REG_SEQ0_MASK GENMASK(15, 0)
+ #define AMS_REG_SEQ2_MASK GENMASK(21, 16)
+@@ -530,14 +530,18 @@ static int ams_enable_single_channel(struct ams *ams, unsigned int offset)
+ return -EINVAL;
+ }
+
+- /* set single channel, sequencer off mode */
++ /* put sysmon in a soft reset to change the sequence */
+ ams_ps_update_reg(ams, AMS_REG_CONFIG1, AMS_CONF1_SEQ_MASK,
+- AMS_CONF1_SEQ_SINGLE_CHANNEL);
++ AMS_CONF1_SEQ_DEFAULT);
+
+ /* write the channel number */
+ ams_ps_update_reg(ams, AMS_REG_CONFIG0, AMS_CONF0_CHANNEL_NUM_MASK,
+ channel_num);
+
++ /* set single channel, sequencer off mode */
++ ams_ps_update_reg(ams, AMS_REG_CONFIG1, AMS_CONF1_SEQ_MASK,
++ AMS_CONF1_SEQ_SINGLE_CHANNEL);
++
+ return 0;
+ }
+
+@@ -551,6 +555,8 @@ static int ams_read_vcc_reg(struct ams *ams, unsigned int offset, u32 *data)
+ if (ret)
+ return ret;
+
++ /* clear end-of-conversion flag, wait for next conversion to complete */
++ writel(expect, ams->base + AMS_ISR_1);
+ ret = readl_poll_timeout(ams->base + AMS_ISR_1, reg, (reg & expect),
+ AMS_INIT_POLL_TIME_US, AMS_INIT_TIMEOUT_US);
+ if (ret)
+@@ -1224,6 +1230,7 @@ static int ams_init_module(struct iio_dev *indio_dev,
+
+ /* add PS channels to iio device channels */
+ memcpy(channels, ams_ps_channels, sizeof(ams_ps_channels));
++ num_channels = ARRAY_SIZE(ams_ps_channels);
+ } else if (fwnode_property_match_string(fwnode, "compatible",
+ "xlnx,zynqmp-ams-pl") == 0) {
+ ams->pl_base = fwnode_iomap(fwnode, 0);
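+
The single-channel fix works because AMS_ISR_1 latches the end-of-conversion flag: unless it is cleared first, the poll returns immediately with the result of the previous conversion. A sketch of the clear-then-poll idiom, register offset assumed:

#include <linux/io.h>
#include <linux/iopoll.h>

static int example_wait_for_eoc(void __iomem *base, u32 isr_off, u32 mask)
{
	u32 reg;

	writel(mask, base + isr_off);	/* clear the latched EOC bit first */
	return readl_poll_timeout(base + isr_off, reg, (reg & mask),
				  100, 100000);
}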
+diff --git a/drivers/iio/afe/iio-rescale.c b/drivers/iio/afe/iio-rescale.c
+index 774eb3044edd8..271d73e420c42 100644
+--- a/drivers/iio/afe/iio-rescale.c
++++ b/drivers/iio/afe/iio-rescale.c
+@@ -39,7 +39,7 @@ static int rescale_read_raw(struct iio_dev *indio_dev,
+ int *val, int *val2, long mask)
+ {
+ struct rescale *rescale = iio_priv(indio_dev);
+- unsigned long long tmp;
++ s64 tmp;
+ int ret;
+
+ switch (mask) {
+@@ -77,10 +77,10 @@ static int rescale_read_raw(struct iio_dev *indio_dev,
+ *val2 = rescale->denominator;
+ return IIO_VAL_FRACTIONAL;
+ case IIO_VAL_FRACTIONAL_LOG2:
+- tmp = *val * 1000000000LL;
+- do_div(tmp, rescale->denominator);
++ tmp = (s64)*val * 1000000000LL;
++ tmp = div_s64(tmp, rescale->denominator);
+ tmp *= rescale->numerator;
+- do_div(tmp, 1000000000LL);
++ tmp = div_s64(tmp, 1000000000LL);
+ *val = tmp;
+ return ret;
+ default:
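+
The switch to div_s64() matters because do_div() treats its 64-bit dividend as unsigned, so a negative reading would be rescaled into garbage. A sketch of the signed fixed-point rescale, ratio fields assumed:

#include <linux/math64.h>

static int example_rescale(int raw, int numerator, int denominator)
{
	s64 tmp = (s64)raw * 1000000000LL;

	tmp = div_s64(tmp, denominator);	/* signed, unlike do_div() */
	tmp *= numerator;

	return div_s64(tmp, 1000000000LL);
}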
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 93f0c6bce502c..b1d8d5a66f01f 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -1633,7 +1633,7 @@ st_lsm6dsx_sysfs_sampling_frequency_avail(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- struct st_lsm6dsx_sensor *sensor = iio_priv(dev_get_drvdata(dev));
++ struct st_lsm6dsx_sensor *sensor = iio_priv(dev_to_iio_dev(dev));
+ const struct st_lsm6dsx_odr_table_entry *odr_table;
+ int i, len = 0;
+
+@@ -1651,7 +1651,7 @@ static ssize_t st_lsm6dsx_sysfs_scale_avail(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- struct st_lsm6dsx_sensor *sensor = iio_priv(dev_get_drvdata(dev));
++ struct st_lsm6dsx_sensor *sensor = iio_priv(dev_to_iio_dev(dev));
+ const struct st_lsm6dsx_fs_table_entry *fs_table;
+ struct st_lsm6dsx_hw *hw = sensor->hw;
+ int i, len = 0;
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 0222885b334c1..df74765d33dcb 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -595,28 +595,50 @@ EXPORT_SYMBOL_GPL(iio_read_channel_average_raw);
+ static int iio_convert_raw_to_processed_unlocked(struct iio_channel *chan,
+ int raw, int *processed, unsigned int scale)
+ {
+- int scale_type, scale_val, scale_val2, offset;
++ int scale_type, scale_val, scale_val2;
++ int offset_type, offset_val, offset_val2;
+ s64 raw64 = raw;
+- int ret;
+
+- ret = iio_channel_read(chan, &offset, NULL, IIO_CHAN_INFO_OFFSET);
+- if (ret >= 0)
+- raw64 += offset;
++ offset_type = iio_channel_read(chan, &offset_val, &offset_val2,
++ IIO_CHAN_INFO_OFFSET);
++ if (offset_type >= 0) {
++ switch (offset_type) {
++ case IIO_VAL_INT:
++ break;
++ case IIO_VAL_INT_PLUS_MICRO:
++ case IIO_VAL_INT_PLUS_NANO:
++ /*
++ * Both IIO_VAL_INT_PLUS_MICRO and IIO_VAL_INT_PLUS_NANO
++ * implicitly truncate the offset to its integer form.
++ */
++ break;
++ case IIO_VAL_FRACTIONAL:
++ offset_val /= offset_val2;
++ break;
++ case IIO_VAL_FRACTIONAL_LOG2:
++ offset_val >>= offset_val2;
++ break;
++ default:
++ return -EINVAL;
++ }
++
++ raw64 += offset_val;
++ }
+
+ scale_type = iio_channel_read(chan, &scale_val, &scale_val2,
+ IIO_CHAN_INFO_SCALE);
+ if (scale_type < 0) {
+ /*
+- * Just pass raw values as processed if no scaling is
+- * available.
++ * If no channel scaling is available apply consumer scale to
++ * raw value and return.
+ */
+- *processed = raw;
++ *processed = raw * scale;
+ return 0;
+ }
+
+ switch (scale_type) {
+ case IIO_VAL_INT:
+- *processed = raw64 * scale_val;
++ *processed = raw64 * scale_val * scale;
+ break;
+ case IIO_VAL_INT_PLUS_MICRO:
+ if (scale_val2 < 0)
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 50c53409ceb61..fabca5e51e3d4 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2642,7 +2642,7 @@ int rdma_set_ack_timeout(struct rdma_cm_id *id, u8 timeout)
+ {
+ struct rdma_id_private *id_priv;
+
+- if (id->qp_type != IB_QPT_RC)
++ if (id->qp_type != IB_QPT_RC && id->qp_type != IB_QPT_XRC_INI)
+ return -EINVAL;
+
+ id_priv = container_of(id, struct rdma_id_private, id);
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index f5aacaf7fb8ef..ca24ce34da766 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -1951,9 +1951,10 @@ static int nldev_stat_set_counter_dynamic_doit(struct nlattr *tb[],
+ u32 port)
+ {
+ struct rdma_hw_stats *stats;
+- int rem, i, index, ret = 0;
+ struct nlattr *entry_attr;
+ unsigned long *target;
++ int rem, i, ret = 0;
++ u32 index;
+
+ stats = ib_get_hw_stats_port(device, port);
+ if (!stats)
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index c18634bec2126..e821dc94a43ed 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -2153,6 +2153,7 @@ struct ib_mr *ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ return mr;
+
+ mr->device = pd->device;
++ mr->type = IB_MR_TYPE_USER;
+ mr->pd = pd;
+ mr->dm = NULL;
+ atomic_inc(&pd->usecnt);
+diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
+index dc9211f3a0098..99d0743133cac 100644
+--- a/drivers/infiniband/hw/hfi1/verbs.c
++++ b/drivers/infiniband/hw/hfi1/verbs.c
+@@ -1397,8 +1397,7 @@ static int query_port(struct rvt_dev_info *rdi, u32 port_num,
+ 4096 : hfi1_max_mtu), IB_MTU_4096);
+ props->active_mtu = !valid_ib_mtu(ppd->ibmtu) ? props->max_mtu :
+ mtu_to_enum(ppd->ibmtu, IB_MTU_4096);
+- props->phys_mtu = HFI1_CAP_IS_KSET(AIP) ? hfi1_max_mtu :
+- ib_mtu_enum_to_int(props->max_mtu);
++ props->phys_mtu = hfi1_max_mtu;
+
+ return 0;
+ }
+diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c
+index 3141a9c85de5a..e7554b6043e4b 100644
+--- a/drivers/infiniband/hw/irdma/ctrl.c
++++ b/drivers/infiniband/hw/irdma/ctrl.c
+@@ -433,7 +433,7 @@ enum irdma_status_code irdma_sc_qp_create(struct irdma_sc_qp *qp, struct irdma_c
+
+ cqp = qp->dev->cqp;
+ if (qp->qp_uk.qp_id < cqp->dev->hw_attrs.min_hw_qp_id ||
+- qp->qp_uk.qp_id > (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt - 1))
++ qp->qp_uk.qp_id >= (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_QP].max_cnt))
+ return IRDMA_ERR_INVALID_QP_ID;
+
+ wqe = irdma_sc_cqp_get_next_send_wqe(cqp, scratch);
+@@ -2512,10 +2512,10 @@ static enum irdma_status_code irdma_sc_cq_create(struct irdma_sc_cq *cq,
+ enum irdma_status_code ret_code = 0;
+
+ cqp = cq->dev->cqp;
+- if (cq->cq_uk.cq_id > (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].max_cnt - 1))
++ if (cq->cq_uk.cq_id >= (cqp->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_CQ].max_cnt))
+ return IRDMA_ERR_INVALID_CQ_ID;
+
+- if (cq->ceq_id > (cq->dev->hmc_fpm_misc.max_ceqs - 1))
++ if (cq->ceq_id >= (cq->dev->hmc_fpm_misc.max_ceqs))
+ return IRDMA_ERR_INVALID_CEQ_ID;
+
+ ceq = cq->dev->ceq[cq->ceq_id];
+@@ -3617,7 +3617,7 @@ enum irdma_status_code irdma_sc_ceq_init(struct irdma_sc_ceq *ceq,
+ info->elem_cnt > info->dev->hw_attrs.max_hw_ceq_size)
+ return IRDMA_ERR_INVALID_SIZE;
+
+- if (info->ceq_id > (info->dev->hmc_fpm_misc.max_ceqs - 1))
++ if (info->ceq_id >= (info->dev->hmc_fpm_misc.max_ceqs))
+ return IRDMA_ERR_INVALID_CEQ_ID;
+ pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt;
+
+@@ -4166,7 +4166,7 @@ enum irdma_status_code irdma_sc_ccq_init(struct irdma_sc_cq *cq,
+ info->num_elem > info->dev->hw_attrs.uk_attrs.max_hw_cq_size)
+ return IRDMA_ERR_INVALID_SIZE;
+
+- if (info->ceq_id > (info->dev->hmc_fpm_misc.max_ceqs - 1))
++ if (info->ceq_id >= (info->dev->hmc_fpm_misc.max_ceqs))
+ return IRDMA_ERR_INVALID_CEQ_ID;
+
+ pble_obj_cnt = info->dev->hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt;
+diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
+index 89234d04cc652..e46e3240cc9fd 100644
+--- a/drivers/infiniband/hw/irdma/hw.c
++++ b/drivers/infiniband/hw/irdma/hw.c
+@@ -1608,7 +1608,7 @@ static enum irdma_status_code irdma_initialize_dev(struct irdma_pci_f *rf)
+ info.fpm_commit_buf = mem.va;
+
+ info.bar0 = rf->hw.hw_addr;
+- info.hmc_fn_id = PCI_FUNC(rf->pcidev->devfn);
++ info.hmc_fn_id = rf->pf_id;
+ info.hw = &rf->hw;
+ status = irdma_sc_dev_init(rf->rdma_ver, &rf->sc_dev, &info);
+ if (status)
+diff --git a/drivers/infiniband/hw/irdma/i40iw_if.c b/drivers/infiniband/hw/irdma/i40iw_if.c
+index 43e962b97d6a3..0886783db647c 100644
+--- a/drivers/infiniband/hw/irdma/i40iw_if.c
++++ b/drivers/infiniband/hw/irdma/i40iw_if.c
+@@ -77,6 +77,7 @@ static void i40iw_fill_device_info(struct irdma_device *iwdev, struct i40e_info
+ rf->rdma_ver = IRDMA_GEN_1;
+ rf->gen_ops.request_reset = i40iw_request_reset;
+ rf->pcidev = cdev_info->pcidev;
++ rf->pf_id = cdev_info->fid;
+ rf->hw.hw_addr = cdev_info->hw_addr;
+ rf->cdev = cdev_info;
+ rf->msix_count = cdev_info->msix_count;
+diff --git a/drivers/infiniband/hw/irdma/main.c b/drivers/infiniband/hw/irdma/main.c
+index 9fab29039f1c0..5e8e8860686dc 100644
+--- a/drivers/infiniband/hw/irdma/main.c
++++ b/drivers/infiniband/hw/irdma/main.c
+@@ -226,6 +226,7 @@ static void irdma_fill_device_info(struct irdma_device *iwdev, struct ice_pf *pf
+ rf->hw.hw_addr = pf->hw.hw_addr;
+ rf->pcidev = pf->pdev;
+ rf->msix_count = pf->num_rdma_msix;
++ rf->pf_id = pf->hw.pf_id;
+ rf->msix_entries = &pf->msix_entries[pf->rdma_base_vector];
+ rf->default_vsi.vsi_idx = vsi->vsi_num;
+ rf->protocol_used = pf->rdma_mode & IIDC_RDMA_PROTOCOL_ROCEV2 ?
+diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h
+index cb218cab79ac1..fb7faa85e4c9d 100644
+--- a/drivers/infiniband/hw/irdma/main.h
++++ b/drivers/infiniband/hw/irdma/main.h
+@@ -257,6 +257,7 @@ struct irdma_pci_f {
+ u8 *mem_rsrc;
+ u8 rdma_ver;
+ u8 rst_to;
++ u8 pf_id;
+ enum irdma_protocol_used protocol_used;
+ u32 sd_type;
+ u32 msix_count;
+diff --git a/drivers/infiniband/hw/irdma/utils.c b/drivers/infiniband/hw/irdma/utils.c
+index 398736d8c78a4..e81b74a518dd0 100644
+--- a/drivers/infiniband/hw/irdma/utils.c
++++ b/drivers/infiniband/hw/irdma/utils.c
+@@ -150,31 +150,35 @@ int irdma_inetaddr_event(struct notifier_block *notifier, unsigned long event,
+ void *ptr)
+ {
+ struct in_ifaddr *ifa = ptr;
+- struct net_device *netdev = ifa->ifa_dev->dev;
++ struct net_device *real_dev, *netdev = ifa->ifa_dev->dev;
+ struct irdma_device *iwdev;
+ struct ib_device *ibdev;
+ u32 local_ipaddr;
+
+- ibdev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_IRDMA);
++ real_dev = rdma_vlan_dev_real_dev(netdev);
++ if (!real_dev)
++ real_dev = netdev;
++
++ ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+ if (!ibdev)
+ return NOTIFY_DONE;
+
+ iwdev = to_iwdev(ibdev);
+ local_ipaddr = ntohl(ifa->ifa_address);
+ ibdev_dbg(&iwdev->ibdev,
+- "DEV: netdev %p event %lu local_ip=%pI4 MAC=%pM\n", netdev,
+- event, &local_ipaddr, netdev->dev_addr);
++ "DEV: netdev %p event %lu local_ip=%pI4 MAC=%pM\n", real_dev,
++ event, &local_ipaddr, real_dev->dev_addr);
+ switch (event) {
+ case NETDEV_DOWN:
+- irdma_manage_arp_cache(iwdev->rf, netdev->dev_addr,
++ irdma_manage_arp_cache(iwdev->rf, real_dev->dev_addr,
+ &local_ipaddr, true, IRDMA_ARP_DELETE);
+- irdma_if_notify(iwdev, netdev, &local_ipaddr, true, false);
++ irdma_if_notify(iwdev, real_dev, &local_ipaddr, true, false);
+ irdma_gid_change_event(&iwdev->ibdev);
+ break;
+ case NETDEV_UP:
+ case NETDEV_CHANGEADDR:
+- irdma_add_arp(iwdev->rf, &local_ipaddr, true, netdev->dev_addr);
+- irdma_if_notify(iwdev, netdev, &local_ipaddr, true, true);
++ irdma_add_arp(iwdev->rf, &local_ipaddr, true, real_dev->dev_addr);
++ irdma_if_notify(iwdev, real_dev, &local_ipaddr, true, true);
+ irdma_gid_change_event(&iwdev->ibdev);
+ break;
+ default:
+@@ -196,32 +200,36 @@ int irdma_inet6addr_event(struct notifier_block *notifier, unsigned long event,
+ void *ptr)
+ {
+ struct inet6_ifaddr *ifa = ptr;
+- struct net_device *netdev = ifa->idev->dev;
++ struct net_device *real_dev, *netdev = ifa->idev->dev;
+ struct irdma_device *iwdev;
+ struct ib_device *ibdev;
+ u32 local_ipaddr6[4];
+
+- ibdev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_IRDMA);
++ real_dev = rdma_vlan_dev_real_dev(netdev);
++ if (!real_dev)
++ real_dev = netdev;
++
++ ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+ if (!ibdev)
+ return NOTIFY_DONE;
+
+ iwdev = to_iwdev(ibdev);
+ irdma_copy_ip_ntohl(local_ipaddr6, ifa->addr.in6_u.u6_addr32);
+ ibdev_dbg(&iwdev->ibdev,
+- "DEV: netdev %p event %lu local_ip=%pI6 MAC=%pM\n", netdev,
+- event, local_ipaddr6, netdev->dev_addr);
++ "DEV: netdev %p event %lu local_ip=%pI6 MAC=%pM\n", real_dev,
++ event, local_ipaddr6, real_dev->dev_addr);
+ switch (event) {
+ case NETDEV_DOWN:
+- irdma_manage_arp_cache(iwdev->rf, netdev->dev_addr,
++ irdma_manage_arp_cache(iwdev->rf, real_dev->dev_addr,
+ local_ipaddr6, false, IRDMA_ARP_DELETE);
+- irdma_if_notify(iwdev, netdev, local_ipaddr6, false, false);
++ irdma_if_notify(iwdev, real_dev, local_ipaddr6, false, false);
+ irdma_gid_change_event(&iwdev->ibdev);
+ break;
+ case NETDEV_UP:
+ case NETDEV_CHANGEADDR:
+ irdma_add_arp(iwdev->rf, local_ipaddr6, false,
+- netdev->dev_addr);
+- irdma_if_notify(iwdev, netdev, local_ipaddr6, false, true);
++ real_dev->dev_addr);
++ irdma_if_notify(iwdev, real_dev, local_ipaddr6, false, true);
+ irdma_gid_change_event(&iwdev->ibdev);
+ break;
+ default:
+@@ -243,14 +251,18 @@ int irdma_net_event(struct notifier_block *notifier, unsigned long event,
+ void *ptr)
+ {
+ struct neighbour *neigh = ptr;
++ struct net_device *real_dev, *netdev = (struct net_device *)neigh->dev;
+ struct irdma_device *iwdev;
+ struct ib_device *ibdev;
+ __be32 *p;
+ u32 local_ipaddr[4] = {};
+ bool ipv4 = true;
+
+- ibdev = ib_device_get_by_netdev((struct net_device *)neigh->dev,
+- RDMA_DRIVER_IRDMA);
++ real_dev = rdma_vlan_dev_real_dev(netdev);
++ if (!real_dev)
++ real_dev = netdev;
++
++ ibdev = ib_device_get_by_netdev(real_dev, RDMA_DRIVER_IRDMA);
+ if (!ibdev)
+ return NOTIFY_DONE;
+
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 460e757d3fe61..1bf6404ec8340 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -2509,7 +2509,7 @@ static int irdma_dealloc_mw(struct ib_mw *ibmw)
+ cqp_info = &cqp_request->info;
+ info = &cqp_info->in.u.dealloc_stag.info;
+ memset(info, 0, sizeof(*info));
+- info->pd_id = iwpd->sc_pd.pd_id & 0x00007fff;
++ info->pd_id = iwpd->sc_pd.pd_id;
+ info->stag_idx = ibmw->rkey >> IRDMA_CQPSQ_STAG_IDX_S;
+ info->mr = false;
+ cqp_info->cqp_cmd = IRDMA_OP_DEALLOC_STAG;
+@@ -3021,7 +3021,7 @@ static int irdma_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata)
+ cqp_info = &cqp_request->info;
+ info = &cqp_info->in.u.dealloc_stag.info;
+ memset(info, 0, sizeof(*info));
+- info->pd_id = iwpd->sc_pd.pd_id & 0x00007fff;
++ info->pd_id = iwpd->sc_pd.pd_id;
+ info->stag_idx = ib_mr->rkey >> IRDMA_CQPSQ_STAG_IDX_S;
+ info->mr = true;
+ if (iwpbl->pbl_allocated)
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index 08b7f6bc56c37..15c0884d1f498 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1886,8 +1886,10 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ key_level2,
+ obj_event,
+ GFP_KERNEL);
+- if (err)
++ if (err) {
++ kfree(obj_event);
+ return err;
++ }
+ INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 157d862fb8642..2910d78333130 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -585,6 +585,8 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
+ ent = &cache->ent[entry];
+ spin_lock_irq(&ent->lock);
+ if (list_empty(&ent->head)) {
++ queue_adjust_cache_locked(ent);
++ ent->miss++;
+ spin_unlock_irq(&ent->lock);
+ mr = create_cache_mr(ent);
+ if (IS_ERR(mr))
+diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c
+index 38c7b6fb39d70..360a567159fe5 100644
+--- a/drivers/infiniband/sw/rxe/rxe_av.c
++++ b/drivers/infiniband/sw/rxe/rxe_av.c
+@@ -99,11 +99,14 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr)
+ av->network_type = type;
+ }
+
+-struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt)
++struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp)
+ {
+ struct rxe_ah *ah;
+ u32 ah_num;
+
++ if (ahp)
++ *ahp = NULL;
++
+ if (!pkt || !pkt->qp)
+ return NULL;
+
+@@ -117,10 +120,22 @@ struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt)
+ if (ah_num) {
+ /* only new user provider or kernel client */
+ ah = rxe_pool_get_index(&pkt->rxe->ah_pool, ah_num);
+- if (!ah || ah->ah_num != ah_num || rxe_ah_pd(ah) != pkt->qp->pd) {
++ if (!ah) {
+ pr_warn("Unable to find AH matching ah_num\n");
+ return NULL;
+ }
++
++ if (rxe_ah_pd(ah) != pkt->qp->pd) {
++ pr_warn("PDs don't match for AH and QP\n");
++ rxe_drop_ref(ah);
++ return NULL;
++ }
++
++ if (ahp)
++ *ahp = ah;
++ else
++ rxe_drop_ref(ah);
++
+ return &ah->av;
+ }
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index b1e174afb1d49..b92bb7a152905 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -19,7 +19,7 @@ void rxe_av_to_attr(struct rxe_av *av, struct rdma_ah_attr *attr);
+
+ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr);
+
+-struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt);
++struct rxe_av *rxe_get_av(struct rxe_pkt_info *pkt, struct rxe_ah **ahp);
+
+ /* rxe_cq.c */
+ int rxe_cq_chk_attr(struct rxe_dev *rxe, struct rxe_cq *cq,
+@@ -102,7 +102,8 @@ void rxe_mw_cleanup(struct rxe_pool_elem *arg);
+ /* rxe_net.c */
+ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
+ int paylen, struct rxe_pkt_info *pkt);
+-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb);
++int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
++ struct sk_buff *skb);
+ int rxe_xmit_packet(struct rxe_qp *qp, struct rxe_pkt_info *pkt,
+ struct sk_buff *skb);
+ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num);
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index be72bdbfb4ba7..580cfd742dd2f 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -289,13 +289,13 @@ static void prepare_ipv6_hdr(struct dst_entry *dst, struct sk_buff *skb,
+ ip6h->payload_len = htons(skb->len - sizeof(*ip6h));
+ }
+
+-static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb)
++static int prepare4(struct rxe_av *av, struct rxe_pkt_info *pkt,
++ struct sk_buff *skb)
+ {
+ struct rxe_qp *qp = pkt->qp;
+ struct dst_entry *dst;
+ bool xnet = false;
+ __be16 df = htons(IP_DF);
+- struct rxe_av *av = rxe_get_av(pkt);
+ struct in_addr *saddr = &av->sgid_addr._sockaddr_in.sin_addr;
+ struct in_addr *daddr = &av->dgid_addr._sockaddr_in.sin_addr;
+
+@@ -315,11 +315,11 @@ static int prepare4(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+ return 0;
+ }
+
+-static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb)
++static int prepare6(struct rxe_av *av, struct rxe_pkt_info *pkt,
++ struct sk_buff *skb)
+ {
+ struct rxe_qp *qp = pkt->qp;
+ struct dst_entry *dst;
+- struct rxe_av *av = rxe_get_av(pkt);
+ struct in6_addr *saddr = &av->sgid_addr._sockaddr_in6.sin6_addr;
+ struct in6_addr *daddr = &av->dgid_addr._sockaddr_in6.sin6_addr;
+
+@@ -340,16 +340,17 @@ static int prepare6(struct rxe_pkt_info *pkt, struct sk_buff *skb)
+ return 0;
+ }
+
+-int rxe_prepare(struct rxe_pkt_info *pkt, struct sk_buff *skb)
++int rxe_prepare(struct rxe_av *av, struct rxe_pkt_info *pkt,
++ struct sk_buff *skb)
+ {
+ int err = 0;
+
+ if (skb->protocol == htons(ETH_P_IP))
+- err = prepare4(pkt, skb);
++ err = prepare4(av, pkt, skb);
+ else if (skb->protocol == htons(ETH_P_IPV6))
+- err = prepare6(pkt, skb);
++ err = prepare6(av, pkt, skb);
+
+- if (ether_addr_equal(skb->dev->dev_addr, rxe_get_av(pkt)->dmac))
++ if (ether_addr_equal(skb->dev->dev_addr, av->dmac))
+ pkt->mask |= RXE_LOOPBACK_MASK;
+
+ return err;
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 5eb89052dd668..204e31bbd61f7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -358,14 +358,14 @@ static inline int get_mtu(struct rxe_qp *qp)
+ }
+
+ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
++ struct rxe_av *av,
+ struct rxe_send_wqe *wqe,
+- int opcode, int payload,
++ int opcode, u32 payload,
+ struct rxe_pkt_info *pkt)
+ {
+ struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ struct sk_buff *skb;
+ struct rxe_send_wr *ibwr = &wqe->wr;
+- struct rxe_av *av;
+ int pad = (-payload) & 0x3;
+ int paylen;
+ int solicited;
+@@ -374,21 +374,9 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+
+ /* length from start of bth to end of icrc */
+ paylen = rxe_opcode[opcode].length + payload + pad + RXE_ICRC_SIZE;
+-
+- /* pkt->hdr, port_num and mask are initialized in ifc layer */
+- pkt->rxe = rxe;
+- pkt->opcode = opcode;
+- pkt->qp = qp;
+- pkt->psn = qp->req.psn;
+- pkt->mask = rxe_opcode[opcode].mask;
+- pkt->paylen = paylen;
+- pkt->wqe = wqe;
++ pkt->paylen = paylen;
+
+ /* init skb */
+- av = rxe_get_av(pkt);
+- if (!av)
+- return NULL;
+-
+ skb = rxe_init_packet(rxe, av, paylen, pkt);
+ if (unlikely(!skb))
+ return NULL;
+@@ -447,13 +435,13 @@ static struct sk_buff *init_req_packet(struct rxe_qp *qp,
+ return skb;
+ }
+
+-static int finish_packet(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+- struct rxe_pkt_info *pkt, struct sk_buff *skb,
+- int paylen)
++static int finish_packet(struct rxe_qp *qp, struct rxe_av *av,
++ struct rxe_send_wqe *wqe, struct rxe_pkt_info *pkt,
++ struct sk_buff *skb, u32 paylen)
+ {
+ int err;
+
+- err = rxe_prepare(pkt, skb);
++ err = rxe_prepare(av, pkt, skb);
+ if (err)
+ return err;
+
+@@ -497,7 +485,7 @@ static void update_wqe_state(struct rxe_qp *qp,
+ static void update_wqe_psn(struct rxe_qp *qp,
+ struct rxe_send_wqe *wqe,
+ struct rxe_pkt_info *pkt,
+- int payload)
++ u32 payload)
+ {
+ /* number of packets left to send including current one */
+ int num_pkt = (wqe->dma.resid + payload + qp->mtu - 1) / qp->mtu;
+@@ -540,7 +528,7 @@ static void rollback_state(struct rxe_send_wqe *wqe,
+ }
+
+ static void update_state(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+- struct rxe_pkt_info *pkt, int payload)
++ struct rxe_pkt_info *pkt, u32 payload)
+ {
+ qp->req.opcode = pkt->opcode;
+
+@@ -608,17 +596,20 @@ static int rxe_do_local_ops(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ int rxe_requester(void *arg)
+ {
+ struct rxe_qp *qp = (struct rxe_qp *)arg;
++ struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ struct rxe_pkt_info pkt;
+ struct sk_buff *skb;
+ struct rxe_send_wqe *wqe;
+ enum rxe_hdr_mask mask;
+- int payload;
++ u32 payload;
+ int mtu;
+ int opcode;
+ int ret;
+ struct rxe_send_wqe rollback_wqe;
+ u32 rollback_psn;
+ struct rxe_queue *q = qp->sq.queue;
++ struct rxe_ah *ah;
++ struct rxe_av *av;
+
+ rxe_add_ref(qp);
+
+@@ -705,14 +696,28 @@ next_wqe:
+ payload = mtu;
+ }
+
+- skb = init_req_packet(qp, wqe, opcode, payload, &pkt);
++ pkt.rxe = rxe;
++ pkt.opcode = opcode;
++ pkt.qp = qp;
++ pkt.psn = qp->req.psn;
++ pkt.mask = rxe_opcode[opcode].mask;
++ pkt.wqe = wqe;
++
++ av = rxe_get_av(&pkt, &ah);
++ if (unlikely(!av)) {
++ pr_err("qp#%d Failed no address vector\n", qp_num(qp));
++ wqe->status = IB_WC_LOC_QP_OP_ERR;
++ goto err_drop_ah;
++ }
++
++ skb = init_req_packet(qp, av, wqe, opcode, payload, &pkt);
+ if (unlikely(!skb)) {
+ pr_err("qp#%d Failed allocating skb\n", qp_num(qp));
+ wqe->status = IB_WC_LOC_QP_OP_ERR;
+- goto err;
++ goto err_drop_ah;
+ }
+
+- ret = finish_packet(qp, wqe, &pkt, skb, payload);
++ ret = finish_packet(qp, av, wqe, &pkt, skb, payload);
+ if (unlikely(ret)) {
+ pr_debug("qp#%d Error during finish packet\n", qp_num(qp));
+ if (ret == -EFAULT)
+@@ -720,9 +725,12 @@ next_wqe:
+ else
+ wqe->status = IB_WC_LOC_QP_OP_ERR;
+ kfree_skb(skb);
+- goto err;
++ goto err_drop_ah;
+ }
+
++ if (ah)
++ rxe_drop_ref(ah);
++
+ /*
+ * To prevent a race on wqe access between requester and completer,
+ * wqe members state and psn need to be set before calling
+@@ -751,6 +759,9 @@ next_wqe:
+
+ goto next_wqe;
+
++err_drop_ah:
++ if (ah)
++ rxe_drop_ref(ah);
+ err:
+ wqe->state = wqe_state_error;
+ __rxe_do_task(&qp->comp.task);
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index e8f435fa6e4d7..192cb9a096a14 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -632,7 +632,7 @@ static struct sk_buff *prepare_ack_packet(struct rxe_qp *qp,
+ if (ack->mask & RXE_ATMACK_MASK)
+ atmack_set_orig(ack, qp->resp.atomic_orig);
+
+- err = rxe_prepare(ack, skb);
++ err = rxe_prepare(&qp->pri_av, ack, skb);
+ if (err) {
+ kfree_skb(skb);
+ return NULL;
+@@ -814,6 +814,10 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ return RESPST_ERR_INVALIDATE_RKEY;
+ }
+
++ if (pkt->mask & RXE_END_MASK)
++ /* We successfully processed this new request. */
++ qp->resp.msn++;
++
+ /* next expected psn, read handles this separately */
+ qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
+ qp->resp.ack_psn = qp->resp.psn;
+@@ -821,11 +825,9 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ qp->resp.opcode = pkt->opcode;
+ qp->resp.status = IB_WC_SUCCESS;
+
+- if (pkt->mask & RXE_COMP_MASK) {
+- /* We successfully processed this new request. */
+- qp->resp.msn++;
++ if (pkt->mask & RXE_COMP_MASK)
+ return RESPST_COMPLETE;
+- } else if (qp_type(qp) == IB_QPT_RC)
++ else if (qp_type(qp) == IB_QPT_RC)
+ return RESPST_ACKNOWLEDGE;
+ else
+ return RESPST_CLEANUP;
+diff --git a/drivers/input/input.c b/drivers/input/input.c
+index c3139bc2aa0db..ccaeb24263854 100644
+--- a/drivers/input/input.c
++++ b/drivers/input/input.c
+@@ -2285,12 +2285,6 @@ int input_register_device(struct input_dev *dev)
+ /* KEY_RESERVED is not supposed to be transmitted to userspace. */
+ __clear_bit(KEY_RESERVED, dev->keybit);
+
+- /* Buttonpads should not map BTN_RIGHT and/or BTN_MIDDLE. */
+- if (test_bit(INPUT_PROP_BUTTONPAD, dev->propbit)) {
+- __clear_bit(BTN_RIGHT, dev->keybit);
+- __clear_bit(BTN_MIDDLE, dev->keybit);
+- }
+-
+ /* Make sure that bitmasks not mentioned in dev->evbit are clean. */
+ input_cleanse_bitmasks(dev);
+
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index b28c9435b898d..170e0f33040e8 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -95,10 +95,11 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
+ cached_iova = to_iova(iovad->cached32_node);
+ if (free == cached_iova ||
+ (free->pfn_hi < iovad->dma_32bit_pfn &&
+- free->pfn_lo >= cached_iova->pfn_lo)) {
++ free->pfn_lo >= cached_iova->pfn_lo))
+ iovad->cached32_node = rb_next(&free->node);
++
++ if (free->pfn_lo < iovad->dma_32bit_pfn)
+ iovad->max32_alloc_size = iovad->dma_32bit_pfn;
+- }
+
+ cached_iova = to_iova(iovad->cached_node);
+ if (free->pfn_lo >= cached_iova->pfn_lo)
+diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
+index ca752bdc710f6..61bd9a3004ede 100644
+--- a/drivers/iommu/ipmmu-vmsa.c
++++ b/drivers/iommu/ipmmu-vmsa.c
+@@ -1006,7 +1006,9 @@ static int ipmmu_probe(struct platform_device *pdev)
+ bitmap_zero(mmu->ctx, IPMMU_CTX_MAX);
+ mmu->features = of_device_get_match_data(&pdev->dev);
+ memset(mmu->utlb_ctx, IPMMU_CTX_INVALID, mmu->features->num_utlbs);
+- dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
++ ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40));
++ if (ret)
++ return ret;
+
+ /* Map I/O memory and request IRQ. */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 25b834104790c..5971a11686662 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -562,22 +562,52 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ {
+ struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+ struct mtk_iommu_data *data;
++ struct device_link *link;
++ struct device *larbdev;
++ unsigned int larbid, larbidx, i;
+
+ if (!fwspec || fwspec->ops != &mtk_iommu_ops)
+ return ERR_PTR(-ENODEV); /* Not a iommu client device */
+
+ data = dev_iommu_priv_get(dev);
+
++ /*
++ * Link the consumer device with the smi-larb device (supplier).
++ * The device that connects with each larb is an independent HW.
++ * All the ports in each device should be in the same larb.
++ */
++ larbid = MTK_M4U_TO_LARB(fwspec->ids[0]);
++ for (i = 1; i < fwspec->num_ids; i++) {
++ larbidx = MTK_M4U_TO_LARB(fwspec->ids[i]);
++ if (larbid != larbidx) {
++ dev_err(dev, "Can only use one larb. Fail@larb%d-%d.\n",
++ larbid, larbidx);
++ return ERR_PTR(-EINVAL);
++ }
++ }
++ larbdev = data->larb_imu[larbid].dev;
++ link = device_link_add(dev, larbdev,
++ DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
++ if (!link)
++ dev_err(dev, "Unable to link %s\n", dev_name(larbdev));
+ return &data->iommu;
+ }
+
+ static void mtk_iommu_release_device(struct device *dev)
+ {
+ struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
++ struct mtk_iommu_data *data;
++ struct device *larbdev;
++ unsigned int larbid;
+
+ if (!fwspec || fwspec->ops != &mtk_iommu_ops)
+ return;
+
++ data = dev_iommu_priv_get(dev);
++ larbid = MTK_M4U_TO_LARB(fwspec->ids[0]);
++ larbdev = data->larb_imu[larbid].dev;
++ device_link_remove(dev, larbdev);
++
+ iommu_fwspec_free(dev);
+ }
+
+@@ -848,7 +878,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ plarbdev = of_find_device_by_node(larbnode);
+ if (!plarbdev) {
+ of_node_put(larbnode);
+- return -EPROBE_DEFER;
++ return -ENODEV;
+ }
+ data->larb_imu[id].dev = &plarbdev->dev;
+
+diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
+index be22fcf988cee..bc7ee90b9373d 100644
+--- a/drivers/iommu/mtk_iommu_v1.c
++++ b/drivers/iommu/mtk_iommu_v1.c
+@@ -423,7 +423,18 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+ struct of_phandle_args iommu_spec;
+ struct mtk_iommu_data *data;
+- int err, idx = 0;
++ int err, idx = 0, larbid, larbidx;
++ struct device_link *link;
++ struct device *larbdev;
++
++ /*
++ * In the deferred case, free the existing fwspec.
++ * Always initialize the fwspec internally.
++ */
++ if (fwspec) {
++ iommu_fwspec_free(dev);
++ fwspec = dev_iommu_fwspec_get(dev);
++ }
+
+ while (!of_parse_phandle_with_args(dev->of_node, "iommus",
+ "#iommu-cells",
+@@ -444,6 +455,23 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+
+ data = dev_iommu_priv_get(dev);
+
++ /* Link the consumer device with the smi-larb device (supplier) */
++ larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
++ for (idx = 1; idx < fwspec->num_ids; idx++) {
++ larbidx = mt2701_m4u_to_larb(fwspec->ids[idx]);
++ if (larbid != larbidx) {
++ dev_err(dev, "Can only use one larb. Fail@larb%d-%d.\n",
++ larbid, larbidx);
++ return ERR_PTR(-EINVAL);
++ }
++ }
++
++ larbdev = data->larb_imu[larbid].dev;
++ link = device_link_add(dev, larbdev,
++ DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
++ if (!link)
++ dev_err(dev, "Unable to link %s\n", dev_name(larbdev));
++
+ return &data->iommu;
+ }
+
+@@ -464,10 +492,18 @@ static void mtk_iommu_probe_finalize(struct device *dev)
+ static void mtk_iommu_release_device(struct device *dev)
+ {
+ struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
++ struct mtk_iommu_data *data;
++ struct device *larbdev;
++ unsigned int larbid;
+
+ if (!fwspec || fwspec->ops != &mtk_iommu_ops)
+ return;
+
++ data = dev_iommu_priv_get(dev);
++ larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
++ larbdev = data->larb_imu[larbid].dev;
++ device_link_remove(dev, larbdev);
++
+ iommu_fwspec_free(dev);
+ }
+
+@@ -595,7 +631,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ plarbdev = of_find_device_by_node(larbnode);
+ if (!plarbdev) {
+ of_node_put(larbnode);
+- return -EPROBE_DEFER;
++ return -ENODEV;
+ }
+ data->larb_imu[i].dev = &plarbdev->dev;
+
+diff --git a/drivers/irqchip/irq-nvic.c b/drivers/irqchip/irq-nvic.c
+index ba4759b3e2693..94230306e0eee 100644
+--- a/drivers/irqchip/irq-nvic.c
++++ b/drivers/irqchip/irq-nvic.c
+@@ -107,6 +107,7 @@ static int __init nvic_of_init(struct device_node *node,
+
+ if (!nvic_irq_domain) {
+ pr_warn("Failed to allocate irq domain\n");
++ iounmap(nvic_base);
+ return -ENOMEM;
+ }
+
+@@ -116,6 +117,7 @@ static int __init nvic_of_init(struct device_node *node,
+ if (ret) {
+ pr_warn("Failed to allocate irq chips\n");
+ irq_domain_remove(nvic_irq_domain);
++ iounmap(nvic_base);
+ return ret;
+ }
+
+diff --git a/drivers/irqchip/qcom-pdc.c b/drivers/irqchip/qcom-pdc.c
+index 173e6520e06ec..c0b457f26ec41 100644
+--- a/drivers/irqchip/qcom-pdc.c
++++ b/drivers/irqchip/qcom-pdc.c
+@@ -56,17 +56,18 @@ static u32 pdc_reg_read(int reg, u32 i)
+ static void pdc_enable_intr(struct irq_data *d, bool on)
+ {
+ int pin_out = d->hwirq;
++ unsigned long flags;
+ u32 index, mask;
+ u32 enable;
+
+ index = pin_out / 32;
+ mask = pin_out % 32;
+
+- raw_spin_lock(&pdc_lock);
++ raw_spin_lock_irqsave(&pdc_lock, flags);
+ enable = pdc_reg_read(IRQ_ENABLE_BANK, index);
+ enable = on ? ENABLE_INTR(enable, mask) : CLEAR_INTR(enable, mask);
+ pdc_reg_write(IRQ_ENABLE_BANK, index, enable);
+- raw_spin_unlock(&pdc_lock);
++ raw_spin_unlock_irqrestore(&pdc_lock, flags);
+ }
+
+ static void qcom_pdc_gic_disable(struct irq_data *d)
+diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
+index 544de2db64531..a0c252415c868 100644
+--- a/drivers/mailbox/imx-mailbox.c
++++ b/drivers/mailbox/imx-mailbox.c
+@@ -14,6 +14,7 @@
+ #include <linux/module.h>
+ #include <linux/of_device.h>
+ #include <linux/pm_runtime.h>
++#include <linux/suspend.h>
+ #include <linux/slab.h>
+
+ #define IMX_MU_CHANS 16
+@@ -76,6 +77,7 @@ struct imx_mu_priv {
+ const struct imx_mu_dcfg *dcfg;
+ struct clk *clk;
+ int irq;
++ bool suspend;
+
+ u32 xcr[4];
+
+@@ -334,6 +336,9 @@ static irqreturn_t imx_mu_isr(int irq, void *p)
+ return IRQ_NONE;
+ }
+
++ if (priv->suspend)
++ pm_system_wakeup();
++
+ return IRQ_HANDLED;
+ }
+
+@@ -702,6 +707,8 @@ static int __maybe_unused imx_mu_suspend_noirq(struct device *dev)
+ priv->xcr[i] = imx_mu_read(priv, priv->dcfg->xCR[i]);
+ }
+
++ priv->suspend = true;
++
+ return 0;
+ }
+
+@@ -718,11 +725,13 @@ static int __maybe_unused imx_mu_resume_noirq(struct device *dev)
+ * send failed, may lead to system freeze. This issue
+ * is observed by testing freeze mode suspend.
+ */
+- if (!imx_mu_read(priv, priv->dcfg->xCR[0]) && !priv->clk) {
++ if (!priv->clk && !imx_mu_read(priv, priv->dcfg->xCR[0])) {
+ for (i = 0; i < IMX_MU_xCR_MAX; i++)
+ imx_mu_write(priv, priv->xcr[i], priv->dcfg->xCR[i]);
+ }
+
++ priv->suspend = false;
++
+ return 0;
+ }
+
+diff --git a/drivers/mailbox/tegra-hsp.c b/drivers/mailbox/tegra-hsp.c
+index acd0675da681e..78f7265039c66 100644
+--- a/drivers/mailbox/tegra-hsp.c
++++ b/drivers/mailbox/tegra-hsp.c
+@@ -412,6 +412,11 @@ static int tegra_hsp_mailbox_flush(struct mbox_chan *chan,
+ value = tegra_hsp_channel_readl(ch, HSP_SM_SHRD_MBOX);
+ if ((value & HSP_SM_SHRD_MBOX_FULL) == 0) {
+ mbox_chan_txdone(chan, 0);
++
++ /* Wait until channel is empty */
++ if (chan->active_req != NULL)
++ continue;
++
+ return 0;
+ }
+
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 88c573eeb5982..ad9f16689419d 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2060,9 +2060,11 @@ int bch_btree_check(struct cache_set *c)
+ }
+ }
+
++ /*
++ * Must wait for all threads to stop.
++ */
+ wait_event_interruptible(check_state->wait,
+- atomic_read(&check_state->started) == 0 ||
+- test_bit(CACHE_SET_IO_DISABLE, &c->flags));
++ atomic_read(&check_state->started) == 0);
+
+ for (i = 0; i < check_state->total_threads; i++) {
+ if (check_state->infos[i].result) {
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index c7560f66dca88..68d3dd6b4f119 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -998,9 +998,11 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ }
+ }
+
++ /*
++ * Must wait for all threads to stop.
++ */
+ wait_event_interruptible(state->wait,
+- atomic_read(&state->started) == 0 ||
+- test_bit(CACHE_SET_IO_DISABLE, &c->flags));
++ atomic_read(&state->started) == 0);
+
+ out:
+ kfree(state);
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index b855fef4f38a6..adb9604e85ac4 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -65,6 +65,8 @@ struct mapped_device {
+ struct gendisk *disk;
+ struct dax_device *dax_dev;
+
++ unsigned long __percpu *pending_io;
++
+ /*
+ * A list of ios that arrived while we were suspended.
+ */
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index d4ae31558826a..f51aea71cb036 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2590,7 +2590,7 @@ static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string
+
+ static int get_key_size(char **key_string)
+ {
+- return (*key_string[0] == ':') ? -EINVAL : strlen(*key_string) >> 1;
++ return (*key_string[0] == ':') ? -EINVAL : (int)(strlen(*key_string) >> 1);
+ }
+
+ #endif /* CONFIG_KEYS */
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index eb4b5e52bd6ff..9399006dbc546 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2473,9 +2473,11 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
+ dm_integrity_io_error(ic, "invalid sector in journal", -EIO);
+ sec &= ~(sector_t)(ic->sectors_per_block - 1);
+ }
++ if (unlikely(sec >= ic->provided_data_sectors)) {
++ journal_entry_set_unused(je);
++ continue;
++ }
+ }
+- if (unlikely(sec >= ic->provided_data_sectors))
+- continue;
+ get_area_and_offset(ic, sec, &area, &offset);
+ restore_last_bytes(ic, access_journal_data(ic, i, j), je);
+ for (k = j + 1; k < ic->journal_section_entries; k++) {
+diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
+index 35d368c418d03..0e039a8c0bf2e 100644
+--- a/drivers/md/dm-stats.c
++++ b/drivers/md/dm-stats.c
+@@ -195,6 +195,7 @@ void dm_stats_init(struct dm_stats *stats)
+
+ mutex_init(&stats->mutex);
+ INIT_LIST_HEAD(&stats->list);
++ stats->precise_timestamps = false;
+ stats->last = alloc_percpu(struct dm_stats_last_position);
+ for_each_possible_cpu(cpu) {
+ last = per_cpu_ptr(stats->last, cpu);
+@@ -231,6 +232,22 @@ void dm_stats_cleanup(struct dm_stats *stats)
+ mutex_destroy(&stats->mutex);
+ }
+
++static void dm_stats_recalc_precise_timestamps(struct dm_stats *stats)
++{
++ struct list_head *l;
++ struct dm_stat *tmp_s;
++ bool precise_timestamps = false;
++
++ list_for_each(l, &stats->list) {
++ tmp_s = container_of(l, struct dm_stat, list_entry);
++ if (tmp_s->stat_flags & STAT_PRECISE_TIMESTAMPS) {
++ precise_timestamps = true;
++ break;
++ }
++ }
++ stats->precise_timestamps = precise_timestamps;
++}
++
+ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ sector_t step, unsigned stat_flags,
+ unsigned n_histogram_entries,
+@@ -376,6 +393,9 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
+ }
+ ret_id = s->id;
+ list_add_tail_rcu(&s->list_entry, l);
++
++ dm_stats_recalc_precise_timestamps(stats);
++
+ mutex_unlock(&stats->mutex);
+
+ resume_callback(md);
+@@ -418,6 +438,9 @@ static int dm_stats_delete(struct dm_stats *stats, int id)
+ }
+
+ list_del_rcu(&s->list_entry);
++
++ dm_stats_recalc_precise_timestamps(stats);
++
+ mutex_unlock(&stats->mutex);
+
+ /*
+@@ -621,13 +644,14 @@ static void __dm_stat_bio(struct dm_stat *s, int bi_rw,
+
+ void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw,
+ sector_t bi_sector, unsigned bi_sectors, bool end,
+- unsigned long duration_jiffies,
++ unsigned long start_time,
+ struct dm_stats_aux *stats_aux)
+ {
+ struct dm_stat *s;
+ sector_t end_sector;
+ struct dm_stats_last_position *last;
+ bool got_precise_time;
++ unsigned long duration_jiffies = 0;
+
+ if (unlikely(!bi_sectors))
+ return;
+@@ -647,16 +671,16 @@ void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw,
+ ));
+ WRITE_ONCE(last->last_sector, end_sector);
+ WRITE_ONCE(last->last_rw, bi_rw);
+- }
++ } else
++ duration_jiffies = jiffies - start_time;
+
+ rcu_read_lock();
+
+ got_precise_time = false;
+ list_for_each_entry_rcu(s, &stats->list, list_entry) {
+ if (s->stat_flags & STAT_PRECISE_TIMESTAMPS && !got_precise_time) {
+- if (!end)
+- stats_aux->duration_ns = ktime_to_ns(ktime_get());
+- else
++ /* start (!end) duration_ns is set by DM core's alloc_io() */
++ if (end)
+ stats_aux->duration_ns = ktime_to_ns(ktime_get()) - stats_aux->duration_ns;
+ got_precise_time = true;
+ }
+diff --git a/drivers/md/dm-stats.h b/drivers/md/dm-stats.h
+index 2ddfae678f320..09c81a1ec057d 100644
+--- a/drivers/md/dm-stats.h
++++ b/drivers/md/dm-stats.h
+@@ -13,8 +13,7 @@ struct dm_stats {
+ struct mutex mutex;
+ struct list_head list; /* list of struct dm_stat */
+ struct dm_stats_last_position __percpu *last;
+- sector_t last_sector;
+- unsigned last_rw;
++ bool precise_timestamps;
+ };
+
+ struct dm_stats_aux {
+@@ -32,7 +31,7 @@ int dm_stats_message(struct mapped_device *md, unsigned argc, char **argv,
+
+ void dm_stats_account_io(struct dm_stats *stats, unsigned long bi_rw,
+ sector_t bi_sector, unsigned bi_sectors, bool end,
+- unsigned long duration_jiffies,
++ unsigned long start_time,
+ struct dm_stats_aux *aux);
+
+ static inline bool dm_stats_used(struct dm_stats *st)
+@@ -40,4 +39,10 @@ static inline bool dm_stats_used(struct dm_stats *st)
+ return !list_empty(&st->list);
+ }
+
++static inline void dm_stats_record_start(struct dm_stats *stats, struct dm_stats_aux *aux)
++{
++ if (unlikely(stats->precise_timestamps))
++ aux->duration_ns = ktime_to_ns(ktime_get());
++}
++
+ #endif
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 997ace47bbd54..394778d8bf549 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -484,33 +484,48 @@ u64 dm_start_time_ns_from_clone(struct bio *bio)
+ }
+ EXPORT_SYMBOL_GPL(dm_start_time_ns_from_clone);
+
+-static void start_io_acct(struct dm_io *io)
++static bool bio_is_flush_with_data(struct bio *bio)
+ {
+- struct mapped_device *md = io->md;
+- struct bio *bio = io->orig_bio;
+-
+- bio_start_io_acct_time(bio, io->start_time);
+- if (unlikely(dm_stats_used(&md->stats)))
+- dm_stats_account_io(&md->stats, bio_data_dir(bio),
+- bio->bi_iter.bi_sector, bio_sectors(bio),
+- false, 0, &io->stats_aux);
++ return ((bio->bi_opf & REQ_PREFLUSH) && bio->bi_iter.bi_size);
+ }
+
+-static void end_io_acct(struct mapped_device *md, struct bio *bio,
+- unsigned long start_time, struct dm_stats_aux *stats_aux)
++static void dm_io_acct(bool end, struct mapped_device *md, struct bio *bio,
++ unsigned long start_time, struct dm_stats_aux *stats_aux)
+ {
+- unsigned long duration = jiffies - start_time;
++ bool is_flush_with_data;
++ unsigned int bi_size;
+
+- bio_end_io_acct(bio, start_time);
++ /* If REQ_PREFLUSH is set, save any payload but do not account it */
++ is_flush_with_data = bio_is_flush_with_data(bio);
++ if (is_flush_with_data) {
++ bi_size = bio->bi_iter.bi_size;
++ bio->bi_iter.bi_size = 0;
++ }
++
++ if (!end)
++ bio_start_io_acct_time(bio, start_time);
++ else
++ bio_end_io_acct(bio, start_time);
+
+ if (unlikely(dm_stats_used(&md->stats)))
+ dm_stats_account_io(&md->stats, bio_data_dir(bio),
+ bio->bi_iter.bi_sector, bio_sectors(bio),
+- true, duration, stats_aux);
++ end, start_time, stats_aux);
++
++ /* Restore bio's payload so it does get accounted upon requeue */
++ if (is_flush_with_data)
++ bio->bi_iter.bi_size = bi_size;
++}
++
++static void start_io_acct(struct dm_io *io)
++{
++ dm_io_acct(false, io->md, io->orig_bio, io->start_time, &io->stats_aux);
++}
+
+- /* nudge anyone waiting on suspend queue */
+- if (unlikely(wq_has_sleeper(&md->wait)))
+- wake_up(&md->wait);
++static void end_io_acct(struct mapped_device *md, struct bio *bio,
++ unsigned long start_time, struct dm_stats_aux *stats_aux)
++{
++ dm_io_acct(true, md, bio, start_time, stats_aux);
+ }
+
+ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
+@@ -531,12 +546,15 @@ static struct dm_io *alloc_io(struct mapped_device *md, struct bio *bio)
+ io->magic = DM_IO_MAGIC;
+ io->status = 0;
+ atomic_set(&io->io_count, 1);
++ this_cpu_inc(*md->pending_io);
+ io->orig_bio = bio;
+ io->md = md;
+ spin_lock_init(&io->endio_lock);
+
+ io->start_time = jiffies;
+
++ dm_stats_record_start(&md->stats, &io->stats_aux);
++
+ return io;
+ }
+
+@@ -826,11 +844,17 @@ void dm_io_dec_pending(struct dm_io *io, blk_status_t error)
+ stats_aux = io->stats_aux;
+ free_io(md, io);
+ end_io_acct(md, bio, start_time, &stats_aux);
++ smp_wmb();
++ this_cpu_dec(*md->pending_io);
++
++ /* nudge anyone waiting on suspend queue */
++ if (unlikely(wq_has_sleeper(&md->wait)))
++ wake_up(&md->wait);
+
+ if (io_error == BLK_STS_DM_REQUEUE)
+ return;
+
+- if ((bio->bi_opf & REQ_PREFLUSH) && bio->bi_iter.bi_size) {
++ if (bio_is_flush_with_data(bio)) {
+ /*
+ * Preflush done for flush with data, reissue
+ * without REQ_PREFLUSH.
+@@ -1607,6 +1631,7 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ md->dax_dev = NULL;
+ }
+
++ dm_cleanup_zoned_dev(md);
+ if (md->disk) {
+ spin_lock(&_minor_lock);
+ md->disk->private_data = NULL;
+@@ -1619,6 +1644,11 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ blk_cleanup_disk(md->disk);
+ }
+
++ if (md->pending_io) {
++ free_percpu(md->pending_io);
++ md->pending_io = NULL;
++ }
++
+ cleanup_srcu_struct(&md->io_barrier);
+
+ mutex_destroy(&md->suspend_lock);
+@@ -1627,7 +1657,6 @@ static void cleanup_mapped_device(struct mapped_device *md)
+ mutex_destroy(&md->swap_bios_lock);
+
+ dm_mq_cleanup_mapped_device(md);
+- dm_cleanup_zoned_dev(md);
+ }
+
+ /*
+@@ -1721,6 +1750,10 @@ static struct mapped_device *alloc_dev(int minor)
+ if (!md->wq)
+ goto bad;
+
++ md->pending_io = alloc_percpu(unsigned long);
++ if (!md->pending_io)
++ goto bad;
++
+ dm_stats_init(&md->stats);
+
+ /* Populate the mapping, nobody knows we exist yet */
+@@ -2128,16 +2161,13 @@ void dm_put(struct mapped_device *md)
+ }
+ EXPORT_SYMBOL_GPL(dm_put);
+
+-static bool md_in_flight_bios(struct mapped_device *md)
++static bool dm_in_flight_bios(struct mapped_device *md)
+ {
+ int cpu;
+- struct block_device *part = dm_disk(md)->part0;
+- long sum = 0;
++ unsigned long sum = 0;
+
+- for_each_possible_cpu(cpu) {
+- sum += part_stat_local_read_cpu(part, in_flight[0], cpu);
+- sum += part_stat_local_read_cpu(part, in_flight[1], cpu);
+- }
++ for_each_possible_cpu(cpu)
++ sum += *per_cpu_ptr(md->pending_io, cpu);
+
+ return sum != 0;
+ }
+@@ -2150,7 +2180,7 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, unsigned int ta
+ while (true) {
+ prepare_to_wait(&md->wait, &wait, task_state);
+
+- if (!md_in_flight_bios(md))
++ if (!dm_in_flight_bios(md))
+ break;
+
+ if (signal_pending_state(task_state, current)) {
+@@ -2162,6 +2192,8 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, unsigned int ta
+ }
+ finish_wait(&md->wait, &wait);
+
++ smp_rmb();
++
+ return r;
+ }
+
+diff --git a/drivers/media/i2c/adv7511-v4l2.c b/drivers/media/i2c/adv7511-v4l2.c
+index 8e13cae40ec5b..db7f41a80770d 100644
+--- a/drivers/media/i2c/adv7511-v4l2.c
++++ b/drivers/media/i2c/adv7511-v4l2.c
+@@ -522,7 +522,7 @@ static void log_infoframe(struct v4l2_subdev *sd, const struct adv7511_cfg_read_
+ buffer[3] = 0;
+ buffer[3] = hdmi_infoframe_checksum(buffer, len + 4);
+
+- if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
++ if (hdmi_infoframe_unpack(&frame, buffer, len + 4) < 0) {
+ v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
+ return;
+ }
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index a2fa408d2d9f5..bb0c8fc6d3832 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -2484,7 +2484,7 @@ static int adv76xx_read_infoframe(struct v4l2_subdev *sd, int index,
+ buffer[i + 3] = infoframe_read(sd,
+ adv76xx_cri[index].payload_addr + i);
+
+- if (hdmi_infoframe_unpack(frame, buffer, sizeof(buffer)) < 0) {
++ if (hdmi_infoframe_unpack(frame, buffer, len + 3) < 0) {
+ v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__,
+ adv76xx_cri[index].desc);
+ return -ENOENT;
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index 9d6eed0f82819..22caa070273b4 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -2583,7 +2583,7 @@ static void log_infoframe(struct v4l2_subdev *sd, const struct adv7842_cfg_read_
+ for (i = 0; i < len; i++)
+ buffer[i + 3] = infoframe_read(sd, cri->payload_addr + i);
+
+- if (hdmi_infoframe_unpack(&frame, buffer, sizeof(buffer)) < 0) {
++ if (hdmi_infoframe_unpack(&frame, buffer, len + 3) < 0) {
+ v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc);
+ return;
+ }
+diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c
+index bab720c7c1de1..d5f0eabf20c6a 100644
+--- a/drivers/media/i2c/ov2740.c
++++ b/drivers/media/i2c/ov2740.c
+@@ -1162,6 +1162,7 @@ static int ov2740_probe(struct i2c_client *client)
+ if (!ov2740)
+ return -ENOMEM;
+
++ v4l2_i2c_subdev_init(&ov2740->sd, client, &ov2740_subdev_ops);
+ full_power = acpi_dev_state_d0(&client->dev);
+ if (full_power) {
+ ret = ov2740_identify_module(ov2740);
+@@ -1171,13 +1172,6 @@ static int ov2740_probe(struct i2c_client *client)
+ }
+ }
+
+- v4l2_i2c_subdev_init(&ov2740->sd, client, &ov2740_subdev_ops);
+- ret = ov2740_identify_module(ov2740);
+- if (ret) {
+- dev_err(&client->dev, "failed to find sensor: %d", ret);
+- return ret;
+- }
+-
+ mutex_init(&ov2740->mutex);
+ ov2740->cur_mode = &supported_modes[0];
+ ret = ov2740_init_controls(ov2740);
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index ddbd71394db33..db5a19babe67d 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -2293,7 +2293,6 @@ static int ov5640_set_fmt(struct v4l2_subdev *sd,
+ struct ov5640_dev *sensor = to_ov5640_dev(sd);
+ const struct ov5640_mode_info *new_mode;
+ struct v4l2_mbus_framefmt *mbus_fmt = &format->format;
+- struct v4l2_mbus_framefmt *fmt;
+ int ret;
+
+ if (format->pad != 0)
+@@ -2311,12 +2310,10 @@ static int ov5640_set_fmt(struct v4l2_subdev *sd,
+ if (ret)
+ goto out;
+
+- if (format->which == V4L2_SUBDEV_FORMAT_TRY)
+- fmt = v4l2_subdev_get_try_format(sd, sd_state, 0);
+- else
+- fmt = &sensor->fmt;
+-
+- *fmt = *mbus_fmt;
++ if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
++ *v4l2_subdev_get_try_format(sd, sd_state, 0) = *mbus_fmt;
++ goto out;
++ }
+
+ if (new_mode != sensor->current_mode) {
+ sensor->current_mode = new_mode;
+@@ -2325,6 +2322,9 @@ static int ov5640_set_fmt(struct v4l2_subdev *sd,
+ if (mbus_fmt->code != sensor->fmt.code)
+ sensor->pending_fmt_change = true;
+
++ /* update format even if code is unchanged; resolution might change */
++ sensor->fmt = *mbus_fmt;
++
+ __v4l2_ctrl_s_ctrl_int64(sensor->ctrls.pixel_rate,
+ ov5640_calc_pixel_rate(sensor));
+ out:
+diff --git a/drivers/media/i2c/ov5648.c b/drivers/media/i2c/ov5648.c
+index 947d437ed0efe..ef8b52dc9401d 100644
+--- a/drivers/media/i2c/ov5648.c
++++ b/drivers/media/i2c/ov5648.c
+@@ -639,7 +639,7 @@ struct ov5648_ctrls {
+ struct v4l2_ctrl *pixel_rate;
+
+ struct v4l2_ctrl_handler handler;
+-} __packed;
++};
+
+ struct ov5648_sensor {
+ struct device *dev;
+@@ -1778,8 +1778,14 @@ static int ov5648_state_configure(struct ov5648_sensor *sensor,
+
+ static int ov5648_state_init(struct ov5648_sensor *sensor)
+ {
+- return ov5648_state_configure(sensor, &ov5648_modes[0],
+- ov5648_mbus_codes[0]);
++ int ret;
++
++ mutex_lock(&sensor->mutex);
++ ret = ov5648_state_configure(sensor, &ov5648_modes[0],
++ ov5648_mbus_codes[0]);
++ mutex_unlock(&sensor->mutex);
++
++ return ret;
+ }
+
+ /* Sensor Base */
+diff --git a/drivers/media/i2c/ov6650.c b/drivers/media/i2c/ov6650.c
+index f67412150b16b..eb59dc8bb5929 100644
+--- a/drivers/media/i2c/ov6650.c
++++ b/drivers/media/i2c/ov6650.c
+@@ -472,9 +472,16 @@ static int ov6650_get_selection(struct v4l2_subdev *sd,
+ {
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct ov6650 *priv = to_ov6650(client);
++ struct v4l2_rect *rect;
+
+- if (sel->which != V4L2_SUBDEV_FORMAT_ACTIVE)
+- return -EINVAL;
++ if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
++ /* pre-select try crop rectangle */
++ rect = &sd_state->pads->try_crop;
++
++ } else {
++ /* pre-select active crop rectangle */
++ rect = &priv->rect;
++ }
+
+ switch (sel->target) {
+ case V4L2_SEL_TGT_CROP_BOUNDS:
+@@ -483,14 +490,33 @@ static int ov6650_get_selection(struct v4l2_subdev *sd,
+ sel->r.width = W_CIF;
+ sel->r.height = H_CIF;
+ return 0;
++
+ case V4L2_SEL_TGT_CROP:
+- sel->r = priv->rect;
++ /* use selected crop rectangle */
++ sel->r = *rect;
+ return 0;
++
+ default:
+ return -EINVAL;
+ }
+ }
+
++static bool is_unscaled_ok(int width, int height, struct v4l2_rect *rect)
++{
++ return width > rect->width >> 1 || height > rect->height >> 1;
++}
++
++static void ov6650_bind_align_crop_rectangle(struct v4l2_rect *rect)
++{
++ v4l_bound_align_image(&rect->width, 2, W_CIF, 1,
++ &rect->height, 2, H_CIF, 1, 0);
++ v4l_bound_align_image(&rect->left, DEF_HSTRT << 1,
++ (DEF_HSTRT << 1) + W_CIF - (__s32)rect->width, 1,
++ &rect->top, DEF_VSTRT << 1,
++ (DEF_VSTRT << 1) + H_CIF - (__s32)rect->height,
++ 1, 0);
++}
++
+ static int ov6650_set_selection(struct v4l2_subdev *sd,
+ struct v4l2_subdev_state *sd_state,
+ struct v4l2_subdev_selection *sel)
+@@ -499,18 +525,30 @@ static int ov6650_set_selection(struct v4l2_subdev *sd,
+ struct ov6650 *priv = to_ov6650(client);
+ int ret;
+
+- if (sel->which != V4L2_SUBDEV_FORMAT_ACTIVE ||
+- sel->target != V4L2_SEL_TGT_CROP)
++ if (sel->target != V4L2_SEL_TGT_CROP)
+ return -EINVAL;
+
+- v4l_bound_align_image(&sel->r.width, 2, W_CIF, 1,
+- &sel->r.height, 2, H_CIF, 1, 0);
+- v4l_bound_align_image(&sel->r.left, DEF_HSTRT << 1,
+- (DEF_HSTRT << 1) + W_CIF - (__s32)sel->r.width, 1,
+- &sel->r.top, DEF_VSTRT << 1,
+- (DEF_VSTRT << 1) + H_CIF - (__s32)sel->r.height,
+- 1, 0);
++ ov6650_bind_align_crop_rectangle(&sel->r);
++
++ if (sel->which == V4L2_SUBDEV_FORMAT_TRY) {
++ struct v4l2_rect *crop = &sd_state->pads->try_crop;
++ struct v4l2_mbus_framefmt *mf = &sd_state->pads->try_fmt;
++ /* detect current pad config scaling factor */
++ bool half_scale = !is_unscaled_ok(mf->width, mf->height, crop);
++
++ /* store new crop rectangle */
++ *crop = sel->r;
+
++ /* adjust frame size */
++ mf->width = crop->width >> half_scale;
++ mf->height = crop->height >> half_scale;
++
++ return 0;
++ }
++
++ /* V4L2_SUBDEV_FORMAT_ACTIVE */
++
++ /* apply new crop rectangle */
+ ret = ov6650_reg_write(client, REG_HSTRT, sel->r.left >> 1);
+ if (!ret) {
+ priv->rect.width += priv->rect.left - sel->r.left;
+@@ -562,30 +600,13 @@ static int ov6650_get_fmt(struct v4l2_subdev *sd,
+ return 0;
+ }
+
+-static bool is_unscaled_ok(int width, int height, struct v4l2_rect *rect)
+-{
+- return width > rect->width >> 1 || height > rect->height >> 1;
+-}
+-
+ #define to_clkrc(div) ((div) - 1)
+
+ /* set the format we will capture in */
+-static int ov6650_s_fmt(struct v4l2_subdev *sd, struct v4l2_mbus_framefmt *mf)
++static int ov6650_s_fmt(struct v4l2_subdev *sd, u32 code, bool half_scale)
+ {
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct ov6650 *priv = to_ov6650(client);
+- bool half_scale = !is_unscaled_ok(mf->width, mf->height, &priv->rect);
+- struct v4l2_subdev_selection sel = {
+- .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+- .target = V4L2_SEL_TGT_CROP,
+- .r.left = priv->rect.left + (priv->rect.width >> 1) -
+- (mf->width >> (1 - half_scale)),
+- .r.top = priv->rect.top + (priv->rect.height >> 1) -
+- (mf->height >> (1 - half_scale)),
+- .r.width = mf->width << half_scale,
+- .r.height = mf->height << half_scale,
+- };
+- u32 code = mf->code;
+ u8 coma_set = 0, coma_mask = 0, coml_set, coml_mask;
+ int ret;
+
+@@ -653,9 +674,7 @@ static int ov6650_s_fmt(struct v4l2_subdev *sd, struct v4l2_mbus_framefmt *mf)
+ coma_mask |= COMA_QCIF;
+ }
+
+- ret = ov6650_set_selection(sd, NULL, &sel);
+- if (!ret)
+- ret = ov6650_reg_rmw(client, REG_COMA, coma_set, coma_mask);
++ ret = ov6650_reg_rmw(client, REG_COMA, coma_set, coma_mask);
+ if (!ret) {
+ priv->half_scale = half_scale;
+
+@@ -674,14 +693,12 @@ static int ov6650_set_fmt(struct v4l2_subdev *sd,
+ struct v4l2_mbus_framefmt *mf = &format->format;
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct ov6650 *priv = to_ov6650(client);
++ struct v4l2_rect *crop;
++ bool half_scale;
+
+ if (format->pad)
+ return -EINVAL;
+
+- if (is_unscaled_ok(mf->width, mf->height, &priv->rect))
+- v4l_bound_align_image(&mf->width, 2, W_CIF, 1,
+- &mf->height, 2, H_CIF, 1, 0);
+-
+ switch (mf->code) {
+ case MEDIA_BUS_FMT_Y10_1X10:
+ mf->code = MEDIA_BUS_FMT_Y8_1X8;
+@@ -699,10 +716,17 @@ static int ov6650_set_fmt(struct v4l2_subdev *sd,
+ break;
+ }
+
++ if (format->which == V4L2_SUBDEV_FORMAT_TRY)
++ crop = &sd_state->pads->try_crop;
++ else
++ crop = &priv->rect;
++
++ half_scale = !is_unscaled_ok(mf->width, mf->height, crop);
++
+ if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
+- /* store media bus format code and frame size in pad config */
+- sd_state->pads->try_fmt.width = mf->width;
+- sd_state->pads->try_fmt.height = mf->height;
++ /* store new mbus frame format code and size in pad config */
++ sd_state->pads->try_fmt.width = crop->width >> half_scale;
++ sd_state->pads->try_fmt.height = crop->height >> half_scale;
+ sd_state->pads->try_fmt.code = mf->code;
+
+ /* return default mbus frame format updated with pad config */
+@@ -712,9 +736,11 @@ static int ov6650_set_fmt(struct v4l2_subdev *sd,
+ mf->code = sd_state->pads->try_fmt.code;
+
+ } else {
+- /* apply new media bus format code and frame size */
+- int ret = ov6650_s_fmt(sd, mf);
++ int ret = 0;
+
++ /* apply new media bus frame format and scaling if changed */
++ if (mf->code != priv->code || half_scale != priv->half_scale)
++ ret = ov6650_s_fmt(sd, mf->code, half_scale);
+ if (ret)
+ return ret;
+
+@@ -890,9 +916,8 @@ static int ov6650_video_probe(struct v4l2_subdev *sd)
+ if (!ret)
+ ret = ov6650_prog_dflt(client, xclk->clkrc);
+ if (!ret) {
+- struct v4l2_mbus_framefmt mf = ov6650_def_fmt;
+-
+- ret = ov6650_s_fmt(sd, &mf);
++ /* driver default frame format, no scaling */
++ ret = ov6650_s_fmt(sd, ov6650_def_fmt.code, false);
+ }
+ if (!ret)
+ ret = v4l2_ctrl_handler_setup(&priv->hdl);
+diff --git a/drivers/media/i2c/ov8865.c b/drivers/media/i2c/ov8865.c
+index d9d016cfa9ac0..e0dd0f4849a7a 100644
+--- a/drivers/media/i2c/ov8865.c
++++ b/drivers/media/i2c/ov8865.c
+@@ -457,8 +457,8 @@
+
+ #define OV8865_NATIVE_WIDTH 3296
+ #define OV8865_NATIVE_HEIGHT 2528
+-#define OV8865_ACTIVE_START_TOP 32
+-#define OV8865_ACTIVE_START_LEFT 80
++#define OV8865_ACTIVE_START_LEFT 16
++#define OV8865_ACTIVE_START_TOP 40
+ #define OV8865_ACTIVE_WIDTH 3264
+ #define OV8865_ACTIVE_HEIGHT 2448
+
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 8cc9bec43688e..5ca3d0cc653a8 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -3890,7 +3890,7 @@ static int bttv_register_video(struct bttv *btv)
+
+ /* video */
+ vdev_init(btv, &btv->video_dev, &bttv_video_template, "video");
+- btv->video_dev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_TUNER |
++ btv->video_dev.device_caps = V4L2_CAP_VIDEO_CAPTURE |
+ V4L2_CAP_READWRITE | V4L2_CAP_STREAMING;
+ if (btv->tuner_type != TUNER_ABSENT)
+ btv->video_dev.device_caps |= V4L2_CAP_TUNER;
+@@ -3911,7 +3911,7 @@ static int bttv_register_video(struct bttv *btv)
+ /* vbi */
+ vdev_init(btv, &btv->vbi_dev, &bttv_video_template, "vbi");
+ btv->vbi_dev.device_caps = V4L2_CAP_VBI_CAPTURE | V4L2_CAP_READWRITE |
+- V4L2_CAP_STREAMING | V4L2_CAP_TUNER;
++ V4L2_CAP_STREAMING;
+ if (btv->tuner_type != TUNER_ABSENT)
+ btv->vbi_dev.device_caps |= V4L2_CAP_TUNER;
+
+diff --git a/drivers/media/pci/cx88/cx88-mpeg.c b/drivers/media/pci/cx88/cx88-mpeg.c
+index 680e1e3fe89b7..2c1d5137ac470 100644
+--- a/drivers/media/pci/cx88/cx88-mpeg.c
++++ b/drivers/media/pci/cx88/cx88-mpeg.c
+@@ -162,6 +162,9 @@ int cx8802_start_dma(struct cx8802_dev *dev,
+ cx_write(MO_TS_GPCNTRL, GP_COUNT_CONTROL_RESET);
+ q->count = 0;
+
++ /* clear interrupt status register */
++ cx_write(MO_TS_INTSTAT, 0x1f1111);
++
+ /* enable irqs */
+ dprintk(1, "setting the interrupt mask\n");
+ cx_set(MO_PCI_INTMSK, core->pci_irqmask | PCI_INT_TSINT);
+diff --git a/drivers/media/pci/ivtv/ivtv-driver.h b/drivers/media/pci/ivtv/ivtv-driver.h
+index 4cf92dee65271..ce3a7ca51736e 100644
+--- a/drivers/media/pci/ivtv/ivtv-driver.h
++++ b/drivers/media/pci/ivtv/ivtv-driver.h
+@@ -330,7 +330,6 @@ struct ivtv_stream {
+ struct ivtv *itv; /* for ease of use */
+ const char *name; /* name of the stream */
+ int type; /* stream type */
+- u32 caps; /* V4L2 capabilities */
+
+ struct v4l2_fh *fh; /* pointer to the streaming filehandle */
+ spinlock_t qlock; /* locks access to the queues */
+diff --git a/drivers/media/pci/ivtv/ivtv-ioctl.c b/drivers/media/pci/ivtv/ivtv-ioctl.c
+index 0cdf6b3210c2f..fee460e2ca863 100644
+--- a/drivers/media/pci/ivtv/ivtv-ioctl.c
++++ b/drivers/media/pci/ivtv/ivtv-ioctl.c
+@@ -438,7 +438,7 @@ static int ivtv_g_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2_f
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ struct v4l2_window *winfmt = &fmt->fmt.win;
+
+- if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++ if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -EINVAL;
+ if (!itv->osd_video_pbase)
+ return -EINVAL;
+@@ -549,7 +549,7 @@ static int ivtv_try_fmt_vid_out_overlay(struct file *file, void *fh, struct v4l2
+ u32 chromakey = fmt->fmt.win.chromakey;
+ u8 global_alpha = fmt->fmt.win.global_alpha;
+
+- if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++ if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -EINVAL;
+ if (!itv->osd_video_pbase)
+ return -EINVAL;
+@@ -1383,7 +1383,7 @@ static int ivtv_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *fb)
+ 0,
+ };
+
+- if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++ if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -ENOTTY;
+ if (!itv->osd_video_pbase)
+ return -ENOTTY;
+@@ -1450,7 +1450,7 @@ static int ivtv_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffe
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+ struct yuv_playback_info *yi = &itv->yuv_info;
+
+- if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++ if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -ENOTTY;
+ if (!itv->osd_video_pbase)
+ return -ENOTTY;
+@@ -1470,7 +1470,7 @@ static int ivtv_overlay(struct file *file, void *fh, unsigned int on)
+ struct ivtv *itv = id->itv;
+ struct ivtv_stream *s = &itv->streams[fh2id(fh)->type];
+
+- if (!(s->caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
++ if (!(s->vdev.device_caps & V4L2_CAP_VIDEO_OUTPUT_OVERLAY))
+ return -ENOTTY;
+ if (!itv->osd_video_pbase)
+ return -ENOTTY;
+diff --git a/drivers/media/pci/ivtv/ivtv-streams.c b/drivers/media/pci/ivtv/ivtv-streams.c
+index 6e455948cc77a..13d7d55e65949 100644
+--- a/drivers/media/pci/ivtv/ivtv-streams.c
++++ b/drivers/media/pci/ivtv/ivtv-streams.c
+@@ -176,7 +176,7 @@ static void ivtv_stream_init(struct ivtv *itv, int type)
+ s->itv = itv;
+ s->type = type;
+ s->name = ivtv_stream_info[type].name;
+- s->caps = ivtv_stream_info[type].v4l2_caps;
++ s->vdev.device_caps = ivtv_stream_info[type].v4l2_caps;
+
+ if (ivtv_stream_info[type].pio)
+ s->dma = DMA_NONE;
+@@ -299,12 +299,9 @@ static int ivtv_reg_dev(struct ivtv *itv, int type)
+ if (s_mpg->vdev.v4l2_dev)
+ num = s_mpg->vdev.num + ivtv_stream_info[type].num_offset;
+ }
+- s->vdev.device_caps = s->caps;
+- if (itv->osd_video_pbase) {
+- itv->streams[IVTV_DEC_STREAM_TYPE_YUV].vdev.device_caps |=
+- V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+- itv->streams[IVTV_DEC_STREAM_TYPE_MPG].vdev.device_caps |=
+- V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
++ if (itv->osd_video_pbase && (type == IVTV_DEC_STREAM_TYPE_YUV ||
++ type == IVTV_DEC_STREAM_TYPE_MPG)) {
++ s->vdev.device_caps |= V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ itv->v4l2_cap |= V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+ }
+ video_set_drvdata(&s->vdev, s);
+diff --git a/drivers/media/pci/saa7134/saa7134-alsa.c b/drivers/media/pci/saa7134/saa7134-alsa.c
+index fb24d2ed3621b..d3cde05a6ebab 100644
+--- a/drivers/media/pci/saa7134/saa7134-alsa.c
++++ b/drivers/media/pci/saa7134/saa7134-alsa.c
+@@ -1214,7 +1214,7 @@ static int alsa_device_exit(struct saa7134_dev *dev)
+
+ static int saa7134_alsa_init(void)
+ {
+- struct saa7134_dev *dev = NULL;
++ struct saa7134_dev *dev;
+
+ saa7134_dmasound_init = alsa_device_init;
+ saa7134_dmasound_exit = alsa_device_exit;
+@@ -1229,7 +1229,7 @@ static int saa7134_alsa_init(void)
+ alsa_device_init(dev);
+ }
+
+- if (dev == NULL)
++ if (list_empty(&saa7134_devlist))
+ pr_info("saa7134 ALSA: no saa7134 cards found\n");
+
+ return 0;
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index 7a24daf7165a4..bdeecde0d9978 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -153,7 +153,7 @@
+ #define VE_SRC_TB_EDGE_DET_BOT GENMASK(28, VE_SRC_TB_EDGE_DET_BOT_SHF)
+
+ #define VE_MODE_DETECT_STATUS 0x098
+-#define VE_MODE_DETECT_H_PIXELS GENMASK(11, 0)
++#define VE_MODE_DETECT_H_PERIOD GENMASK(11, 0)
+ #define VE_MODE_DETECT_V_LINES_SHF 16
+ #define VE_MODE_DETECT_V_LINES GENMASK(27, VE_MODE_DETECT_V_LINES_SHF)
+ #define VE_MODE_DETECT_STATUS_VSYNC BIT(28)
+@@ -164,6 +164,8 @@
+ #define VE_SYNC_STATUS_VSYNC_SHF 16
+ #define VE_SYNC_STATUS_VSYNC GENMASK(27, VE_SYNC_STATUS_VSYNC_SHF)
+
++#define VE_H_TOTAL_PIXELS 0x0A0
++
+ #define VE_INTERRUPT_CTRL 0x304
+ #define VE_INTERRUPT_STATUS 0x308
+ #define VE_INTERRUPT_MODE_DETECT_WD BIT(0)
+@@ -802,6 +804,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ u32 src_lr_edge;
+ u32 src_tb_edge;
+ u32 sync;
++ u32 htotal;
+ struct v4l2_bt_timings *det = &video->detected_timings;
+
+ det->width = MIN_WIDTH;
+@@ -847,6 +850,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ src_tb_edge = aspeed_video_read(video, VE_SRC_TB_EDGE_DET);
+ mds = aspeed_video_read(video, VE_MODE_DETECT_STATUS);
+ sync = aspeed_video_read(video, VE_SYNC_STATUS);
++ htotal = aspeed_video_read(video, VE_H_TOTAL_PIXELS);
+
+ video->frame_bottom = (src_tb_edge & VE_SRC_TB_EDGE_DET_BOT) >>
+ VE_SRC_TB_EDGE_DET_BOT_SHF;
+@@ -863,8 +867,7 @@ static void aspeed_video_get_resolution(struct aspeed_video *video)
+ VE_SRC_LR_EDGE_DET_RT_SHF;
+ video->frame_left = src_lr_edge & VE_SRC_LR_EDGE_DET_LEFT;
+ det->hfrontporch = video->frame_left;
+- det->hbackporch = (mds & VE_MODE_DETECT_H_PIXELS) -
+- video->frame_right;
++ det->hbackporch = htotal - video->frame_right;
+ det->hsync = sync & VE_SYNC_STATUS_HSYNC;
+ if (video->frame_left > video->frame_right)
+ continue;
+diff --git a/drivers/media/platform/atmel/atmel-isc-base.c b/drivers/media/platform/atmel/atmel-isc-base.c
+index 660cd0ab6749a..24807782c9e50 100644
+--- a/drivers/media/platform/atmel/atmel-isc-base.c
++++ b/drivers/media/platform/atmel/atmel-isc-base.c
+@@ -1369,14 +1369,12 @@ static int isc_enum_framesizes(struct file *file, void *fh,
+ struct v4l2_frmsizeenum *fsize)
+ {
+ struct isc_device *isc = video_drvdata(file);
+- struct v4l2_subdev_frame_size_enum fse = {
+- .code = isc->config.sd_format->mbus_code,
+- .index = fsize->index,
+- .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+- };
+ int ret = -EINVAL;
+ int i;
+
++ if (fsize->index)
++ return -EINVAL;
++
+ for (i = 0; i < isc->num_user_formats; i++)
+ if (isc->user_formats[i]->fourcc == fsize->pixel_format)
+ ret = 0;
+@@ -1388,14 +1386,14 @@ static int isc_enum_framesizes(struct file *file, void *fh,
+ if (ret)
+ return ret;
+
+- ret = v4l2_subdev_call(isc->current_subdev->sd, pad, enum_frame_size,
+- NULL, &fse);
+- if (ret)
+- return ret;
++ fsize->type = V4L2_FRMSIZE_TYPE_CONTINUOUS;
+
+- fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
+- fsize->discrete.width = fse.max_width;
+- fsize->discrete.height = fse.max_height;
++ fsize->stepwise.min_width = 16;
++ fsize->stepwise.max_width = isc->max_width;
++ fsize->stepwise.min_height = 16;
++ fsize->stepwise.max_height = isc->max_height;
++ fsize->stepwise.step_width = 1;
++ fsize->stepwise.step_height = 1;
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/atmel/atmel-sama7g5-isc.c b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+index 5d1c76f680f37..2b1082295c130 100644
+--- a/drivers/media/platform/atmel/atmel-sama7g5-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama7g5-isc.c
+@@ -556,7 +556,6 @@ static int microchip_xisc_remove(struct platform_device *pdev)
+
+ v4l2_device_unregister(&isc->v4l2_dev);
+
+- clk_disable_unprepare(isc->ispck);
+ clk_disable_unprepare(isc->hclock);
+
+ isc_clk_cleanup(isc);
+@@ -568,7 +567,6 @@ static int __maybe_unused xisc_runtime_suspend(struct device *dev)
+ {
+ struct isc_device *isc = dev_get_drvdata(dev);
+
+- clk_disable_unprepare(isc->ispck);
+ clk_disable_unprepare(isc->hclock);
+
+ return 0;
+@@ -583,10 +581,6 @@ static int __maybe_unused xisc_runtime_resume(struct device *dev)
+ if (ret)
+ return ret;
+
+- ret = clk_prepare_enable(isc->ispck);
+- if (ret)
+- clk_disable_unprepare(isc->hclock);
+-
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index 3cd47ba26357e..a57822b050706 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -409,6 +409,7 @@ static struct vdoa_data *coda_get_vdoa_data(void)
+ if (!vdoa_data)
+ vdoa_data = ERR_PTR(-EPROBE_DEFER);
+
++ put_device(&vdoa_pdev->dev);
+ out:
+ of_node_put(vdoa_node);
+
+diff --git a/drivers/media/platform/davinci/vpif.c b/drivers/media/platform/davinci/vpif.c
+index 5a89d885d0e3b..4a260f4ed236b 100644
+--- a/drivers/media/platform/davinci/vpif.c
++++ b/drivers/media/platform/davinci/vpif.c
+@@ -41,6 +41,11 @@ MODULE_ALIAS("platform:" VPIF_DRIVER_NAME);
+ #define VPIF_CH2_MAX_MODES 15
+ #define VPIF_CH3_MAX_MODES 2
+
++struct vpif_data {
++ struct platform_device *capture;
++ struct platform_device *display;
++};
++
+ DEFINE_SPINLOCK(vpif_lock);
+ EXPORT_SYMBOL_GPL(vpif_lock);
+
+@@ -423,16 +428,31 @@ int vpif_channel_getfid(u8 channel_id)
+ }
+ EXPORT_SYMBOL(vpif_channel_getfid);
+
++static void vpif_pdev_release(struct device *dev)
++{
++ struct platform_device *pdev = to_platform_device(dev);
++
++ kfree(pdev);
++}
++
+ static int vpif_probe(struct platform_device *pdev)
+ {
+ static struct resource *res_irq;
+ struct platform_device *pdev_capture, *pdev_display;
+ struct device_node *endpoint = NULL;
++ struct vpif_data *data;
++ int ret;
+
+ vpif_base = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(vpif_base))
+ return PTR_ERR(vpif_base);
+
++ data = kzalloc(sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++
++ platform_set_drvdata(pdev, data);
++
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_get(&pdev->dev);
+
+@@ -456,46 +476,79 @@ static int vpif_probe(struct platform_device *pdev)
+ res_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (!res_irq) {
+ dev_warn(&pdev->dev, "Missing IRQ resource.\n");
+- pm_runtime_put(&pdev->dev);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto err_put_rpm;
+ }
+
+- pdev_capture = devm_kzalloc(&pdev->dev, sizeof(*pdev_capture),
+- GFP_KERNEL);
+- if (pdev_capture) {
+- pdev_capture->name = "vpif_capture";
+- pdev_capture->id = -1;
+- pdev_capture->resource = res_irq;
+- pdev_capture->num_resources = 1;
+- pdev_capture->dev.dma_mask = pdev->dev.dma_mask;
+- pdev_capture->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
+- pdev_capture->dev.parent = &pdev->dev;
+- platform_device_register(pdev_capture);
+- } else {
+- dev_warn(&pdev->dev, "Unable to allocate memory for pdev_capture.\n");
++ pdev_capture = kzalloc(sizeof(*pdev_capture), GFP_KERNEL);
++ if (!pdev_capture) {
++ ret = -ENOMEM;
++ goto err_put_rpm;
+ }
+
+- pdev_display = devm_kzalloc(&pdev->dev, sizeof(*pdev_display),
+- GFP_KERNEL);
+- if (pdev_display) {
+- pdev_display->name = "vpif_display";
+- pdev_display->id = -1;
+- pdev_display->resource = res_irq;
+- pdev_display->num_resources = 1;
+- pdev_display->dev.dma_mask = pdev->dev.dma_mask;
+- pdev_display->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
+- pdev_display->dev.parent = &pdev->dev;
+- platform_device_register(pdev_display);
+- } else {
+- dev_warn(&pdev->dev, "Unable to allocate memory for pdev_display.\n");
++ pdev_capture->name = "vpif_capture";
++ pdev_capture->id = -1;
++ pdev_capture->resource = res_irq;
++ pdev_capture->num_resources = 1;
++ pdev_capture->dev.dma_mask = pdev->dev.dma_mask;
++ pdev_capture->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
++ pdev_capture->dev.parent = &pdev->dev;
++ pdev_capture->dev.release = vpif_pdev_release;
++
++ ret = platform_device_register(pdev_capture);
++ if (ret)
++ goto err_put_pdev_capture;
++
++ pdev_display = kzalloc(sizeof(*pdev_display), GFP_KERNEL);
++ if (!pdev_display) {
++ ret = -ENOMEM;
++ goto err_put_pdev_capture;
+ }
+
++ pdev_display->name = "vpif_display";
++ pdev_display->id = -1;
++ pdev_display->resource = res_irq;
++ pdev_display->num_resources = 1;
++ pdev_display->dev.dma_mask = pdev->dev.dma_mask;
++ pdev_display->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
++ pdev_display->dev.parent = &pdev->dev;
++ pdev_display->dev.release = vpif_pdev_release;
++
++ ret = platform_device_register(pdev_display);
++ if (ret)
++ goto err_put_pdev_display;
++
++ data->capture = pdev_capture;
++ data->display = pdev_display;
++
+ return 0;
++
++err_put_pdev_display:
++ platform_device_put(pdev_display);
++err_put_pdev_capture:
++ platform_device_put(pdev_capture);
++err_put_rpm:
++ pm_runtime_put(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
++ kfree(data);
++
++ return ret;
+ }
+
+ static int vpif_remove(struct platform_device *pdev)
+ {
++ struct vpif_data *data = platform_get_drvdata(pdev);
++
++ if (data->capture)
++ platform_device_unregister(data->capture);
++ if (data->display)
++ platform_device_unregister(data->display);
++
++ pm_runtime_put(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++
++ kfree(data);
++
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
+index 4ca96cf9def76..83a2b4d13bad3 100644
+--- a/drivers/media/platform/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
+@@ -947,8 +947,13 @@ static void mxc_jpeg_device_run(void *priv)
+ v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, true);
+
+ jpeg_src_buf = vb2_to_mxc_buf(&src_buf->vb2_buf);
++ if (q_data_cap->fmt->colplanes != dst_buf->vb2_buf.num_planes) {
++ dev_err(dev, "Capture format %s has %d planes, but capture buffer has %d planes\n",
++ q_data_cap->fmt->name, q_data_cap->fmt->colplanes,
++ dst_buf->vb2_buf.num_planes);
++ jpeg_src_buf->jpeg_parse_error = true;
++ }
+ if (jpeg_src_buf->jpeg_parse_error) {
+- jpeg->slot_data[ctx->slot].used = false;
+ v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+ v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_ERROR);
+diff --git a/drivers/media/platform/meson/ge2d/ge2d.c b/drivers/media/platform/meson/ge2d/ge2d.c
+index ccda18e5a3774..5e7b319f300df 100644
+--- a/drivers/media/platform/meson/ge2d/ge2d.c
++++ b/drivers/media/platform/meson/ge2d/ge2d.c
+@@ -215,35 +215,35 @@ static void ge2d_hw_start(struct meson_ge2d *ge2d)
+
+ regmap_write(ge2d->map, GE2D_SRC1_CLIPY_START_END,
+ FIELD_PREP(GE2D_START, ctx->in.crop.top) |
+- FIELD_PREP(GE2D_END, ctx->in.crop.top + ctx->in.crop.height));
++ FIELD_PREP(GE2D_END, ctx->in.crop.top + ctx->in.crop.height - 1));
+ regmap_write(ge2d->map, GE2D_SRC1_CLIPX_START_END,
+ FIELD_PREP(GE2D_START, ctx->in.crop.left) |
+- FIELD_PREP(GE2D_END, ctx->in.crop.left + ctx->in.crop.width));
++ FIELD_PREP(GE2D_END, ctx->in.crop.left + ctx->in.crop.width - 1));
+ regmap_write(ge2d->map, GE2D_SRC2_CLIPY_START_END,
+ FIELD_PREP(GE2D_START, ctx->out.crop.top) |
+- FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height));
++ FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height - 1));
+ regmap_write(ge2d->map, GE2D_SRC2_CLIPX_START_END,
+ FIELD_PREP(GE2D_START, ctx->out.crop.left) |
+- FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width));
++ FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width - 1));
+ regmap_write(ge2d->map, GE2D_DST_CLIPY_START_END,
+ FIELD_PREP(GE2D_START, ctx->out.crop.top) |
+- FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height));
++ FIELD_PREP(GE2D_END, ctx->out.crop.top + ctx->out.crop.height - 1));
+ regmap_write(ge2d->map, GE2D_DST_CLIPX_START_END,
+ FIELD_PREP(GE2D_START, ctx->out.crop.left) |
+- FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width));
++ FIELD_PREP(GE2D_END, ctx->out.crop.left + ctx->out.crop.width - 1));
+
+ regmap_write(ge2d->map, GE2D_SRC1_Y_START_END,
+- FIELD_PREP(GE2D_END, ctx->in.pix_fmt.height));
++ FIELD_PREP(GE2D_END, ctx->in.pix_fmt.height - 1));
+ regmap_write(ge2d->map, GE2D_SRC1_X_START_END,
+- FIELD_PREP(GE2D_END, ctx->in.pix_fmt.width));
++ FIELD_PREP(GE2D_END, ctx->in.pix_fmt.width - 1));
+ regmap_write(ge2d->map, GE2D_SRC2_Y_START_END,
+- FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height));
++ FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height - 1));
+ regmap_write(ge2d->map, GE2D_SRC2_X_START_END,
+- FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width));
++ FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width - 1));
+ regmap_write(ge2d->map, GE2D_DST_Y_START_END,
+- FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height));
++ FIELD_PREP(GE2D_END, ctx->out.pix_fmt.height - 1));
+ regmap_write(ge2d->map, GE2D_DST_X_START_END,
+- FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width));
++ FIELD_PREP(GE2D_END, ctx->out.pix_fmt.width - 1));
+
+ /* Color, no blend, use source color */
+ reg = GE2D_ALU_DO_COLOR_OPERATION_LOGIC(LOGIC_OPERATION_COPY,
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+index cd27f637dbe7c..cfc7ebed8fb7a 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
+@@ -102,6 +102,8 @@ struct mtk_vcodec_fw *mtk_vcodec_fw_vpu_init(struct mtk_vcodec_dev *dev,
+ vpu_wdt_reg_handler(fw_pdev, mtk_vcodec_vpu_reset_handler, dev, rst_id);
+
+ fw = devm_kzalloc(&dev->plat_dev->dev, sizeof(*fw), GFP_KERNEL);
++ if (!fw)
++ return ERR_PTR(-ENOMEM);
+ fw->type = VPU;
+ fw->ops = &mtk_vcodec_vpu_msg;
+ fw->pdev = fw_pdev;
+diff --git a/drivers/media/platform/omap3isp/ispstat.c b/drivers/media/platform/omap3isp/ispstat.c
+index 5b9b57f4d9bf8..68cf68dbcace2 100644
+--- a/drivers/media/platform/omap3isp/ispstat.c
++++ b/drivers/media/platform/omap3isp/ispstat.c
+@@ -512,7 +512,7 @@ int omap3isp_stat_request_statistics(struct ispstat *stat,
+ int omap3isp_stat_request_statistics_time32(struct ispstat *stat,
+ struct omap3isp_stat_data_time32 *data)
+ {
+- struct omap3isp_stat_data data64;
++ struct omap3isp_stat_data data64 = { };
+ int ret;
+
+ ret = omap3isp_stat_request_statistics(stat, &data64);
+@@ -521,7 +521,8 @@ int omap3isp_stat_request_statistics_time32(struct ispstat *stat,
+
+ data->ts.tv_sec = data64.ts.tv_sec;
+ data->ts.tv_usec = data64.ts.tv_usec;
+- memcpy(&data->buf, &data64.buf, sizeof(*data) - sizeof(data->ts));
++ data->buf = (uintptr_t)data64.buf;
++ memcpy(&data->frame, &data64.frame, sizeof(data->frame));
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/qcom/camss/camss-csid-170.c b/drivers/media/platform/qcom/camss/camss-csid-170.c
+index ac22ff29d2a9f..82f59933ad7b3 100644
+--- a/drivers/media/platform/qcom/camss/camss-csid-170.c
++++ b/drivers/media/platform/qcom/camss/camss-csid-170.c
+@@ -105,7 +105,8 @@
+ #define CSID_RDI_CTRL(rdi) ((IS_LITE ? 0x208 : 0x308)\
+ + 0x100 * (rdi))
+ #define RDI_CTRL_HALT_CMD 0
+-#define ALT_CMD_RESUME_AT_FRAME_BOUNDARY 1
++#define HALT_CMD_HALT_AT_FRAME_BOUNDARY 0
++#define HALT_CMD_RESUME_AT_FRAME_BOUNDARY 1
+ #define RDI_CTRL_HALT_MODE 2
+
+ #define CSID_RDI_FRM_DROP_PATTERN(rdi) ((IS_LITE ? 0x20C : 0x30C)\
+@@ -366,7 +367,7 @@ static void csid_configure_stream(struct csid_device *csid, u8 enable)
+ val |= input_format->width & 0x1fff << TPG_DT_n_CFG_0_FRAME_WIDTH;
+ writel_relaxed(val, csid->base + CSID_TPG_DT_n_CFG_0(0));
+
+- val = DATA_TYPE_RAW_10BIT << TPG_DT_n_CFG_1_DATA_TYPE;
++ val = format->data_type << TPG_DT_n_CFG_1_DATA_TYPE;
+ writel_relaxed(val, csid->base + CSID_TPG_DT_n_CFG_1(0));
+
+ val = tg->mode << TPG_DT_n_CFG_2_PAYLOAD_MODE;
+@@ -382,8 +383,9 @@ static void csid_configure_stream(struct csid_device *csid, u8 enable)
+ val = 1 << RDI_CFG0_BYTE_CNTR_EN;
+ val |= 1 << RDI_CFG0_FORMAT_MEASURE_EN;
+ val |= 1 << RDI_CFG0_TIMESTAMP_EN;
++ /* note: for non-RDI path, this should be format->decode_format */
+ val |= DECODE_FORMAT_PAYLOAD_ONLY << RDI_CFG0_DECODE_FORMAT;
+- val |= DATA_TYPE_RAW_10BIT << RDI_CFG0_DATA_TYPE;
++ val |= format->data_type << RDI_CFG0_DATA_TYPE;
+ val |= vc << RDI_CFG0_VIRTUAL_CHANNEL;
+ val |= dt_id << RDI_CFG0_DT_ID;
+ writel_relaxed(val, csid->base + CSID_RDI_CFG0(0));
+@@ -443,13 +445,10 @@ static void csid_configure_stream(struct csid_device *csid, u8 enable)
+ val |= 1 << CSI2_RX_CFG1_MISR_EN;
+ writel_relaxed(val, csid->base + CSID_CSI2_RX_CFG1); // csi2_vc_mode_shift_val ?
+
+- /* error irqs start at BIT(11) */
+- writel_relaxed(~0u, csid->base + CSID_CSI2_RX_IRQ_MASK);
+-
+- /* RDI irq */
+- writel_relaxed(~0u, csid->base + CSID_TOP_IRQ_MASK);
+-
+- val = 1 << RDI_CTRL_HALT_CMD;
++ if (enable)
++ val = HALT_CMD_RESUME_AT_FRAME_BOUNDARY << RDI_CTRL_HALT_CMD;
++ else
++ val = HALT_CMD_HALT_AT_FRAME_BOUNDARY << RDI_CTRL_HALT_CMD;
+ writel_relaxed(val, csid->base + CSID_RDI_CTRL(0));
+ }
+
+diff --git a/drivers/media/platform/qcom/camss/camss-vfe-170.c b/drivers/media/platform/qcom/camss/camss-vfe-170.c
+index f524af712a843..600150cfc4f70 100644
+--- a/drivers/media/platform/qcom/camss/camss-vfe-170.c
++++ b/drivers/media/platform/qcom/camss/camss-vfe-170.c
+@@ -395,17 +395,7 @@ static irqreturn_t vfe_isr(int irq, void *dev)
+ */
+ static int vfe_halt(struct vfe_device *vfe)
+ {
+- unsigned long time;
+-
+- reinit_completion(&vfe->halt_complete);
+-
+- time = wait_for_completion_timeout(&vfe->halt_complete,
+- msecs_to_jiffies(VFE_HALT_TIMEOUT_MS));
+- if (!time) {
+- dev_err(vfe->camss->dev, "VFE halt timeout\n");
+- return -EIO;
+- }
+-
++ /* rely on vfe_disable_output() to stop the VFE */
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index 84c3a511ec31e..0bca95d016507 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -189,7 +189,6 @@ int venus_helper_alloc_dpb_bufs(struct venus_inst *inst)
+ buf->va = dma_alloc_attrs(dev, buf->size, &buf->da, GFP_KERNEL,
+ buf->attrs);
+ if (!buf->va) {
+- kfree(buf);
+ ret = -ENOMEM;
+ goto fail;
+ }
+@@ -209,6 +208,7 @@ int venus_helper_alloc_dpb_bufs(struct venus_inst *inst)
+ return 0;
+
+ fail:
++ kfree(buf);
+ venus_helper_free_dpb_bufs(inst);
+ return ret;
+ }
+diff --git a/drivers/media/platform/qcom/venus/hfi_cmds.c b/drivers/media/platform/qcom/venus/hfi_cmds.c
+index 5aea07307e02e..4ecd444050bb6 100644
+--- a/drivers/media/platform/qcom/venus/hfi_cmds.c
++++ b/drivers/media/platform/qcom/venus/hfi_cmds.c
+@@ -1054,6 +1054,8 @@ static int pkt_session_set_property_1x(struct hfi_session_set_property_pkt *pkt,
+ pkt->shdr.hdr.size += sizeof(u32) + sizeof(*info);
+ break;
+ }
++ case HFI_PROPERTY_PARAM_VENC_HDR10_PQ_SEI:
++ return -ENOTSUPP;
+
+ /* FOLLOWING PROPERTIES ARE NOT IMPLEMENTED IN CORE YET */
+ case HFI_PROPERTY_CONFIG_BUFFER_REQUIREMENTS:
+diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c
+index 84bafc3118cc6..adea4c3b8c204 100644
+--- a/drivers/media/platform/qcom/venus/venc.c
++++ b/drivers/media/platform/qcom/venus/venc.c
+@@ -662,8 +662,8 @@ static int venc_set_properties(struct venus_inst *inst)
+
+ ptype = HFI_PROPERTY_PARAM_VENC_H264_TRANSFORM_8X8;
+ h264_transform.enable_type = 0;
+- if (ctr->profile.h264 == HFI_H264_PROFILE_HIGH ||
+- ctr->profile.h264 == HFI_H264_PROFILE_CONSTRAINED_HIGH)
++ if (ctr->profile.h264 == V4L2_MPEG_VIDEO_H264_PROFILE_HIGH ||
++ ctr->profile.h264 == V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_HIGH)
+ h264_transform.enable_type = ctr->h264_8x8_transform;
+
+ ret = hfi_session_set_property(inst, ptype, &h264_transform);
+diff --git a/drivers/media/platform/qcom/venus/venc_ctrls.c b/drivers/media/platform/qcom/venus/venc_ctrls.c
+index 1ada42df314dc..ea5805e71c143 100644
+--- a/drivers/media/platform/qcom/venus/venc_ctrls.c
++++ b/drivers/media/platform/qcom/venus/venc_ctrls.c
+@@ -320,8 +320,8 @@ static int venc_op_s_ctrl(struct v4l2_ctrl *ctrl)
+ ctr->intra_refresh_period = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM:
+- if (ctr->profile.h264 != HFI_H264_PROFILE_HIGH &&
+- ctr->profile.h264 != HFI_H264_PROFILE_CONSTRAINED_HIGH)
++ if (ctr->profile.h264 != V4L2_MPEG_VIDEO_H264_PROFILE_HIGH &&
++ ctr->profile.h264 != V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_HIGH)
+ return -EINVAL;
+
+ /*
+@@ -457,7 +457,7 @@ int venc_ctrl_init(struct venus_inst *inst)
+ V4L2_CID_MPEG_VIDEO_H264_I_FRAME_MIN_QP, 1, 51, 1, 1);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+- V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM, 0, 1, 1, 0);
++ V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM, 0, 1, 1, 1);
+
+ v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_P_FRAME_MIN_QP, 1, 51, 1, 1);
+diff --git a/drivers/media/platform/ti-vpe/cal-video.c b/drivers/media/platform/ti-vpe/cal-video.c
+index 7799da1cc261b..3e936a2ca36c6 100644
+--- a/drivers/media/platform/ti-vpe/cal-video.c
++++ b/drivers/media/platform/ti-vpe/cal-video.c
+@@ -823,6 +823,9 @@ static int cal_ctx_v4l2_init_formats(struct cal_ctx *ctx)
+ /* Enumerate sub device formats and enable all matching local formats */
+ ctx->active_fmt = devm_kcalloc(ctx->cal->dev, cal_num_formats,
+ sizeof(*ctx->active_fmt), GFP_KERNEL);
++ if (!ctx->active_fmt)
++ return -ENOMEM;
++
+ ctx->num_active_fmt = 0;
+
+ for (j = 0, i = 0; ; ++j) {
+diff --git a/drivers/media/rc/gpio-ir-tx.c b/drivers/media/rc/gpio-ir-tx.c
+index c6cd2e6d8e654..a50701cfbbd7b 100644
+--- a/drivers/media/rc/gpio-ir-tx.c
++++ b/drivers/media/rc/gpio-ir-tx.c
+@@ -48,11 +48,29 @@ static int gpio_ir_tx_set_carrier(struct rc_dev *dev, u32 carrier)
+ return 0;
+ }
+
++static void delay_until(ktime_t until)
++{
++ /*
++ * delta should never exceed 0.5 seconds (IR_MAX_DURATION) and on
++ * m68k ndelay(s64) does not compile; so use s32 rather than s64.
++ */
++ s32 delta;
++
++ while (true) {
++ delta = ktime_us_delta(until, ktime_get());
++ if (delta <= 0)
++ return;
++
++ /* udelay more than 1ms may not work */
++ delta = min(delta, 1000);
++ udelay(delta);
++ }
++}
++
+ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ uint count)
+ {
+ ktime_t edge;
+- s32 delta;
+ int i;
+
+ local_irq_disable();
+@@ -63,9 +81,7 @@ static void gpio_ir_tx_unmodulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ gpiod_set_value(gpio_ir->gpio, !(i % 2));
+
+ edge = ktime_add_us(edge, txbuf[i]);
+- delta = ktime_us_delta(edge, ktime_get());
+- if (delta > 0)
+- udelay(delta);
++ delay_until(edge);
+ }
+
+ gpiod_set_value(gpio_ir->gpio, 0);
+@@ -97,9 +113,7 @@ static void gpio_ir_tx_modulated(struct gpio_ir *gpio_ir, uint *txbuf,
+ if (i % 2) {
+ // space
+ edge = ktime_add_us(edge, txbuf[i]);
+- delta = ktime_us_delta(edge, ktime_get());
+- if (delta > 0)
+- udelay(delta);
++ delay_until(edge);
+ } else {
+ // pulse
+ ktime_t last = ktime_add_us(edge, txbuf[i]);
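
/*
 * [Illustrative sketch, not part of the upstream patch.] The delay_until()
 * helper added above replaces one potentially large udelay() with a loop
 * that re-reads the clock and delays in bounded chunks, so the wait stays
 * accurate and no single delay exceeds what udelay() reliably handles.
 * A minimal userspace analogue, assuming POSIX clock_gettime()/usleep():
 */
#include <time.h>
#include <unistd.h>

static long us_until(const struct timespec *until)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (until->tv_sec - now.tv_sec) * 1000000L +
	       (until->tv_nsec - now.tv_nsec) / 1000L;
}

static void delay_until_us(const struct timespec *until)
{
	long delta;

	/* sleep in chunks of at most 1 ms, then re-check the clock */
	while ((delta = us_until(until)) > 0)
		usleep(delta > 1000 ? 1000 : delta);
}

int main(void)
{
	struct timespec until;

	clock_gettime(CLOCK_MONOTONIC, &until);
	until.tv_nsec += 5 * 1000 * 1000;	/* deadline 5 ms from now */
	if (until.tv_nsec >= 1000000000L) {
		until.tv_nsec -= 1000000000L;
		until.tv_sec++;
	}
	delay_until_us(&until);
	return 0;
}
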
+diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c
+index 7e98e7e3aacec..1968067092594 100644
+--- a/drivers/media/rc/ir_toy.c
++++ b/drivers/media/rc/ir_toy.c
+@@ -458,7 +458,7 @@ static int irtoy_probe(struct usb_interface *intf,
+ err = usb_submit_urb(irtoy->urb_in, GFP_KERNEL);
+ if (err != 0) {
+ dev_err(irtoy->dev, "fail to submit in urb: %d\n", err);
+- return err;
++ goto free_rcdev;
+ }
+
+ err = irtoy_setup(irtoy);
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_s302m.c b/drivers/media/test-drivers/vidtv/vidtv_s302m.c
+index d79b65854627c..4676083cee3b8 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_s302m.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_s302m.c
+@@ -455,6 +455,9 @@ struct vidtv_encoder
+ e->name = kstrdup(args.name, GFP_KERNEL);
+
+ e->encoder_buf = vzalloc(VIDTV_S302M_BUF_SZ);
++ if (!e->encoder_buf)
++ goto out_kfree_e;
++
+ e->encoder_buf_sz = VIDTV_S302M_BUF_SZ;
+ e->encoder_buf_offset = 0;
+
+@@ -467,10 +470,8 @@ struct vidtv_encoder
+ e->is_video_encoder = false;
+
+ ctx = kzalloc(priv_sz, GFP_KERNEL);
+- if (!ctx) {
+- kfree(e);
+- return NULL;
+- }
++ if (!ctx)
++ goto out_kfree_buf;
+
+ e->ctx = ctx;
+ ctx->last_duration = 0;
+@@ -498,6 +499,14 @@ struct vidtv_encoder
+ e->next = NULL;
+
+ return e;
++
++out_kfree_buf:
++ kfree(e->encoder_buf);
++
++out_kfree_e:
++ kfree(e->name);
++ kfree(e);
++ return NULL;
+ }
+
+ void vidtv_s302m_encoder_destroy(struct vidtv_encoder *e)
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index b451ce3cb169a..ae25d2cbfdfee 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -3936,6 +3936,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ goto err_free;
+ }
+
++ kref_init(&dev->ref);
++
+ dev->devno = nr;
+ dev->model = id->driver_info;
+ dev->alt = -1;
+@@ -4036,6 +4038,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ }
+
+ if (dev->board.has_dual_ts && em28xx_duplicate_dev(dev) == 0) {
++ kref_init(&dev->dev_next->ref);
++
+ dev->dev_next->ts = SECONDARY_TS;
+ dev->dev_next->alt = -1;
+ dev->dev_next->is_audio_only = has_vendor_audio &&
+@@ -4090,12 +4094,8 @@ static int em28xx_usb_probe(struct usb_interface *intf,
+ em28xx_write_reg(dev, 0x0b, 0x82);
+ mdelay(100);
+ }
+-
+- kref_init(&dev->dev_next->ref);
+ }
+
+- kref_init(&dev->ref);
+-
+ request_modules(dev);
+
+ /*
+@@ -4150,11 +4150,8 @@ static void em28xx_usb_disconnect(struct usb_interface *intf)
+
+ em28xx_close_extension(dev);
+
+- if (dev->dev_next) {
+- em28xx_close_extension(dev->dev_next);
++ if (dev->dev_next)
+ em28xx_release_resources(dev->dev_next);
+- }
+-
+ em28xx_release_resources(dev);
+
+ if (dev->dev_next) {
+diff --git a/drivers/media/usb/go7007/s2250-board.c b/drivers/media/usb/go7007/s2250-board.c
+index c742cc88fac5c..1fa6f10ee157b 100644
+--- a/drivers/media/usb/go7007/s2250-board.c
++++ b/drivers/media/usb/go7007/s2250-board.c
+@@ -504,6 +504,7 @@ static int s2250_probe(struct i2c_client *client,
+ u8 *data;
+ struct go7007 *go = i2c_get_adapdata(adapter);
+ struct go7007_usb *usb = go->hpi_context;
++ int err = -EIO;
+
+ audio = i2c_new_dummy_device(adapter, TLV320_ADDRESS >> 1);
+ if (IS_ERR(audio))
+@@ -532,11 +533,8 @@ static int s2250_probe(struct i2c_client *client,
+ V4L2_CID_HUE, -512, 511, 1, 0);
+ sd->ctrl_handler = &state->hdl;
+ if (state->hdl.error) {
+- int err = state->hdl.error;
+-
+- v4l2_ctrl_handler_free(&state->hdl);
+- kfree(state);
+- return err;
++ err = state->hdl.error;
++ goto fail;
+ }
+
+ state->std = V4L2_STD_NTSC;
+@@ -600,7 +598,7 @@ fail:
+ i2c_unregister_device(audio);
+ v4l2_ctrl_handler_free(&state->hdl);
+ kfree(state);
+- return -EIO;
++ return err;
+ }
+
+ static int s2250_remove(struct i2c_client *client)
+diff --git a/drivers/media/usb/hdpvr/hdpvr-video.c b/drivers/media/usb/hdpvr/hdpvr-video.c
+index 563128d117317..60e57e0f19272 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-video.c
++++ b/drivers/media/usb/hdpvr/hdpvr-video.c
+@@ -308,7 +308,6 @@ static int hdpvr_start_streaming(struct hdpvr_device *dev)
+
+ dev->status = STATUS_STREAMING;
+
+- INIT_WORK(&dev->worker, hdpvr_transmit_buffers);
+ schedule_work(&dev->worker);
+
+ v4l2_dbg(MSG_BUFFER, hdpvr_debug, &dev->v4l2_dev,
+@@ -1165,6 +1164,9 @@ int hdpvr_register_videodev(struct hdpvr_device *dev, struct device *parent,
+ bool ac3 = dev->flags & HDPVR_FLAG_AC3_CAP;
+ int res;
+
++ // initialize dev->worker
++ INIT_WORK(&dev->worker, hdpvr_transmit_buffers);
++
+ dev->cur_std = V4L2_STD_525_60;
+ dev->width = 720;
+ dev->height = 480;
+diff --git a/drivers/media/usb/stk1160/stk1160-core.c b/drivers/media/usb/stk1160/stk1160-core.c
+index 4e1698f788187..ce717502ea4c3 100644
+--- a/drivers/media/usb/stk1160/stk1160-core.c
++++ b/drivers/media/usb/stk1160/stk1160-core.c
+@@ -403,7 +403,7 @@ static void stk1160_disconnect(struct usb_interface *interface)
+ /* Here is the only place where isoc get released */
+ stk1160_uninit_isoc(dev);
+
+- stk1160_clear_queue(dev);
++ stk1160_clear_queue(dev, VB2_BUF_STATE_ERROR);
+
+ video_unregister_device(&dev->vdev);
+ v4l2_device_disconnect(&dev->v4l2_dev);
+diff --git a/drivers/media/usb/stk1160/stk1160-v4l.c b/drivers/media/usb/stk1160/stk1160-v4l.c
+index 6a4eb616d5160..1aa953469402f 100644
+--- a/drivers/media/usb/stk1160/stk1160-v4l.c
++++ b/drivers/media/usb/stk1160/stk1160-v4l.c
+@@ -258,7 +258,7 @@ out_uninit:
+ stk1160_uninit_isoc(dev);
+ out_stop_hw:
+ usb_set_interface(dev->udev, 0, 0);
+- stk1160_clear_queue(dev);
++ stk1160_clear_queue(dev, VB2_BUF_STATE_QUEUED);
+
+ mutex_unlock(&dev->v4l_lock);
+
+@@ -306,7 +306,7 @@ static int stk1160_stop_streaming(struct stk1160 *dev)
+
+ stk1160_stop_hw(dev);
+
+- stk1160_clear_queue(dev);
++ stk1160_clear_queue(dev, VB2_BUF_STATE_ERROR);
+
+ stk1160_dbg("streaming stopped\n");
+
+@@ -745,7 +745,7 @@ static const struct video_device v4l_template = {
+ /********************************************************************/
+
+ /* Must be called with both v4l_lock and vb_queue_lock hold */
+-void stk1160_clear_queue(struct stk1160 *dev)
++void stk1160_clear_queue(struct stk1160 *dev, enum vb2_buffer_state vb2_state)
+ {
+ struct stk1160_buffer *buf;
+ unsigned long flags;
+@@ -756,7 +756,7 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ buf = list_first_entry(&dev->avail_bufs,
+ struct stk1160_buffer, list);
+ list_del(&buf->list);
+- vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++ vb2_buffer_done(&buf->vb.vb2_buf, vb2_state);
+ stk1160_dbg("buffer [%p/%d] aborted\n",
+ buf, buf->vb.vb2_buf.index);
+ }
+@@ -766,7 +766,7 @@ void stk1160_clear_queue(struct stk1160 *dev)
+ buf = dev->isoc_ctl.buf;
+ dev->isoc_ctl.buf = NULL;
+
+- vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
++ vb2_buffer_done(&buf->vb.vb2_buf, vb2_state);
+ stk1160_dbg("buffer [%p/%d] aborted\n",
+ buf, buf->vb.vb2_buf.index);
+ }
+diff --git a/drivers/media/usb/stk1160/stk1160.h b/drivers/media/usb/stk1160/stk1160.h
+index a31ea1c80f255..a70963ce87533 100644
+--- a/drivers/media/usb/stk1160/stk1160.h
++++ b/drivers/media/usb/stk1160/stk1160.h
+@@ -166,7 +166,7 @@ struct regval {
+ int stk1160_vb2_setup(struct stk1160 *dev);
+ int stk1160_video_register(struct stk1160 *dev);
+ void stk1160_video_unregister(struct stk1160 *dev);
+-void stk1160_clear_queue(struct stk1160 *dev);
++void stk1160_clear_queue(struct stk1160 *dev, enum vb2_buffer_state vb2_state);
+
+ /* Provided by stk1160-video.c */
+ int stk1160_alloc_isoc(struct stk1160 *dev);
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+index 54abe5245dcc4..df8cff47a7fb5 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
+@@ -112,7 +112,9 @@ static void std_init_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ struct v4l2_ctrl_mpeg2_picture *p_mpeg2_picture;
+ struct v4l2_ctrl_mpeg2_quantisation *p_mpeg2_quant;
+ struct v4l2_ctrl_vp8_frame *p_vp8_frame;
++ struct v4l2_ctrl_vp9_frame *p_vp9_frame;
+ struct v4l2_ctrl_fwht_params *p_fwht_params;
++ struct v4l2_ctrl_h264_scaling_matrix *p_h264_scaling_matrix;
+ void *p = ptr.p + idx * ctrl->elem_size;
+
+ if (ctrl->p_def.p_const)
+@@ -152,6 +154,13 @@ static void std_init_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ p_vp8_frame = p;
+ p_vp8_frame->num_dct_parts = 1;
+ break;
++ case V4L2_CTRL_TYPE_VP9_FRAME:
++ p_vp9_frame = p;
++ p_vp9_frame->profile = 0;
++ p_vp9_frame->bit_depth = 8;
++ p_vp9_frame->flags |= V4L2_VP9_FRAME_FLAG_X_SUBSAMPLING |
++ V4L2_VP9_FRAME_FLAG_Y_SUBSAMPLING;
++ break;
+ case V4L2_CTRL_TYPE_FWHT_PARAMS:
+ p_fwht_params = p;
+ p_fwht_params->version = V4L2_FWHT_VERSION;
+@@ -160,6 +169,15 @@ static void std_init_compound(const struct v4l2_ctrl *ctrl, u32 idx,
+ p_fwht_params->flags = V4L2_FWHT_FL_PIXENC_YUV |
+ (2 << V4L2_FWHT_FL_COMPONENTS_NUM_OFFSET);
+ break;
++ case V4L2_CTRL_TYPE_H264_SCALING_MATRIX:
++ p_h264_scaling_matrix = p;
++ /*
++ * The default (flat) H.264 scaling matrix when none are
++ * specified in the bitstream, this is according to formulas
++ * (7-8) and (7-9) of the specification.
++ */
++ memset(p_h264_scaling_matrix, 16, sizeof(*p_h264_scaling_matrix));
++ break;
+ }
+ }
+
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 9ac557b8e1467..642cb90f457c6 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -279,8 +279,8 @@ static void v4l_print_format(const void *arg, bool write_only)
+ const struct v4l2_vbi_format *vbi;
+ const struct v4l2_sliced_vbi_format *sliced;
+ const struct v4l2_window *win;
+- const struct v4l2_sdr_format *sdr;
+ const struct v4l2_meta_format *meta;
++ u32 pixelformat;
+ u32 planes;
+ unsigned i;
+
+@@ -299,8 +299,9 @@ static void v4l_print_format(const void *arg, bool write_only)
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+ mp = &p->fmt.pix_mp;
++ pixelformat = mp->pixelformat;
+ pr_cont(", width=%u, height=%u, format=%p4cc, field=%s, colorspace=%d, num_planes=%u, flags=0x%x, ycbcr_enc=%u, quantization=%u, xfer_func=%u\n",
+- mp->width, mp->height, &mp->pixelformat,
++ mp->width, mp->height, &pixelformat,
+ prt_names(mp->field, v4l2_field_names),
+ mp->colorspace, mp->num_planes, mp->flags,
+ mp->ycbcr_enc, mp->quantization, mp->xfer_func);
+@@ -343,14 +344,15 @@ static void v4l_print_format(const void *arg, bool write_only)
+ break;
+ case V4L2_BUF_TYPE_SDR_CAPTURE:
+ case V4L2_BUF_TYPE_SDR_OUTPUT:
+- sdr = &p->fmt.sdr;
+- pr_cont(", pixelformat=%p4cc\n", &sdr->pixelformat);
++ pixelformat = p->fmt.sdr.pixelformat;
++ pr_cont(", pixelformat=%p4cc\n", &pixelformat);
+ break;
+ case V4L2_BUF_TYPE_META_CAPTURE:
+ case V4L2_BUF_TYPE_META_OUTPUT:
+ meta = &p->fmt.meta;
++ pixelformat = meta->dataformat;
+ pr_cont(", dataformat=%p4cc, buffersize=%u\n",
+- &meta->dataformat, meta->buffersize);
++ &pixelformat, meta->buffersize);
+ break;
+ }
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
+index e2654b422334c..675e22895ebe6 100644
+--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
++++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
+@@ -585,19 +585,14 @@ int v4l2_m2m_reqbufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_reqbufs);
+
+-int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+- struct v4l2_buffer *buf)
++static void v4l2_m2m_adjust_mem_offset(struct vb2_queue *vq,
++ struct v4l2_buffer *buf)
+ {
+- struct vb2_queue *vq;
+- int ret = 0;
+- unsigned int i;
+-
+- vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+- ret = vb2_querybuf(vq, buf);
+-
+ /* Adjust MMAP memory offsets for the CAPTURE queue */
+ if (buf->memory == V4L2_MEMORY_MMAP && V4L2_TYPE_IS_CAPTURE(vq->type)) {
+ if (V4L2_TYPE_IS_MULTIPLANAR(vq->type)) {
++ unsigned int i;
++
+ for (i = 0; i < buf->length; ++i)
+ buf->m.planes[i].m.mem_offset
+ += DST_QUEUE_OFF_BASE;
+@@ -605,8 +600,23 @@ int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ buf->m.offset += DST_QUEUE_OFF_BASE;
+ }
+ }
++}
+
+- return ret;
++int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
++ struct v4l2_buffer *buf)
++{
++ struct vb2_queue *vq;
++ int ret;
++
++ vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
++ ret = vb2_querybuf(vq, buf);
++ if (ret)
++ return ret;
++
++ /* Adjust MMAP memory offsets for the CAPTURE queue */
++ v4l2_m2m_adjust_mem_offset(vq, buf);
++
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_querybuf);
+
+@@ -763,6 +773,9 @@ int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ if (ret)
+ return ret;
+
++ /* Adjust MMAP memory offsets for the CAPTURE queue */
++ v4l2_m2m_adjust_mem_offset(vq, buf);
++
+ /*
+ * If the capture queue is streaming, but streaming hasn't started
+ * on the device, but was asked to stop, mark the previously queued
+@@ -784,9 +797,17 @@ int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_buffer *buf)
+ {
+ struct vb2_queue *vq;
++ int ret;
+
+ vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+- return vb2_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
++ ret = vb2_dqbuf(vq, buf, file->f_flags & O_NONBLOCK);
++ if (ret)
++ return ret;
++
++ /* Adjust MMAP memory offsets for the CAPTURE queue */
++ v4l2_m2m_adjust_mem_offset(vq, buf);
++
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf);
+
+@@ -795,9 +816,17 @@ int v4l2_m2m_prepare_buf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ {
+ struct video_device *vdev = video_devdata(file);
+ struct vb2_queue *vq;
++ int ret;
+
+ vq = v4l2_m2m_get_vq(m2m_ctx, buf->type);
+- return vb2_prepare_buf(vq, vdev->v4l2_dev->mdev, buf);
++ ret = vb2_prepare_buf(vq, vdev->v4l2_dev->mdev, buf);
++ if (ret)
++ return ret;
++
++ /* Adjust MMAP memory offsets for the CAPTURE queue */
++ v4l2_m2m_adjust_mem_offset(vq, buf);
++
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_m2m_prepare_buf);
+
+diff --git a/drivers/memory/emif.c b/drivers/memory/emif.c
+index 762d0c0f0716f..ecc78d6f89ed2 100644
+--- a/drivers/memory/emif.c
++++ b/drivers/memory/emif.c
+@@ -1025,7 +1025,7 @@ static struct emif_data *__init_or_module get_device_details(
+ temp = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
+ dev_info = devm_kzalloc(dev, sizeof(*dev_info), GFP_KERNEL);
+
+- if (!emif || !pd || !dev_info) {
++ if (!emif || !temp || !dev_info) {
+ dev_err(dev, "%s:%d: allocation error\n", __func__, __LINE__);
+ goto error;
+ }
+@@ -1117,7 +1117,7 @@ static int __init_or_module emif_probe(struct platform_device *pdev)
+ {
+ struct emif_data *emif;
+ struct resource *res;
+- int irq;
++ int irq, ret;
+
+ if (pdev->dev.of_node)
+ emif = of_get_memory_device_details(pdev->dev.of_node, &pdev->dev);
+@@ -1147,7 +1147,9 @@ static int __init_or_module emif_probe(struct platform_device *pdev)
+ emif_onetime_settings(emif);
+ emif_debugfs_init(emif);
+ disable_and_clear_all_interrupts(emif);
+- setup_interrupts(emif, irq);
++ ret = setup_interrupts(emif, irq);
++ if (ret)
++ goto error;
+
+ /* One-time actions taken on probing the first device */
+ if (!emif1) {
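
/*
 * [Illustrative sketch, not part of the upstream patch.] The emif hunk
 * above fixes a copy/paste slip: the code allocated into `temp` but
 * NULL-checked `pd`, a pointer never assigned from that allocation.
 * Always test the variable the allocator actually returned into:
 */
#include <stdlib.h>

struct platform_data { int dummy; };

static struct platform_data *get_pdata(void)
{
	struct platform_data *temp = malloc(sizeof(*temp));

	if (!temp)		/* check temp, not some other pointer */
		return NULL;

	temp->dummy = 0;
	return temp;
}
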
+diff --git a/drivers/memory/tegra/tegra20-emc.c b/drivers/memory/tegra/tegra20-emc.c
+index 497b6edbf3ca1..25ba3c5e4ad6a 100644
+--- a/drivers/memory/tegra/tegra20-emc.c
++++ b/drivers/memory/tegra/tegra20-emc.c
+@@ -540,7 +540,7 @@ static int emc_read_lpddr_mode_register(struct tegra_emc *emc,
+ unsigned int register_addr,
+ unsigned int *register_data)
+ {
+- u32 memory_dev = emem_dev + 1;
++ u32 memory_dev = emem_dev ? 1 : 2;
+ u32 val, mr_mask = 0xff;
+ int err;
+
+diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
+index c0450397b6735..7ea312f0840e0 100644
+--- a/drivers/memstick/core/mspro_block.c
++++ b/drivers/memstick/core/mspro_block.c
+@@ -186,13 +186,8 @@ static int mspro_block_bd_open(struct block_device *bdev, fmode_t mode)
+
+ mutex_lock(&mspro_block_disk_lock);
+
+- if (msb && msb->card) {
++ if (msb && msb->card)
+ msb->usage_count++;
+- if ((mode & FMODE_WRITE) && msb->read_only)
+- rc = -EROFS;
+- else
+- rc = 0;
+- }
+
+ mutex_unlock(&mspro_block_disk_lock);
+
+@@ -1239,6 +1234,9 @@ static int mspro_block_init_disk(struct memstick_dev *card)
+ set_capacity(msb->disk, capacity);
+ dev_dbg(&card->dev, "capacity set %ld\n", capacity);
+
++ if (msb->read_only)
++ set_disk_ro(msb->disk, true);
++
+ rc = device_add_disk(&card->dev, msb->disk, NULL);
+ if (rc)
+ goto out_cleanup_disk;
+diff --git a/drivers/mfd/asic3.c b/drivers/mfd/asic3.c
+index 8d58c8df46cfb..56338f9dbd0ba 100644
+--- a/drivers/mfd/asic3.c
++++ b/drivers/mfd/asic3.c
+@@ -906,14 +906,14 @@ static int __init asic3_mfd_probe(struct platform_device *pdev,
+ ret = mfd_add_devices(&pdev->dev, pdev->id,
+ &asic3_cell_ds1wm, 1, mem, asic->irq_base, NULL);
+ if (ret < 0)
+- goto out;
++ goto out_unmap;
+ }
+
+ if (mem_sdio && (irq >= 0)) {
+ ret = mfd_add_devices(&pdev->dev, pdev->id,
+ &asic3_cell_mmc, 1, mem_sdio, irq, NULL);
+ if (ret < 0)
+- goto out;
++ goto out_unmap;
+ }
+
+ ret = 0;
+@@ -927,8 +927,12 @@ static int __init asic3_mfd_probe(struct platform_device *pdev,
+ ret = mfd_add_devices(&pdev->dev, 0,
+ asic3_cell_leds, ASIC3_NUM_LEDS, NULL, 0, NULL);
+ }
++ return ret;
+
+- out:
++out_unmap:
++ if (asic->tmio_cnf)
++ iounmap(asic->tmio_cnf);
++out:
+ return ret;
+ }
+
+diff --git a/drivers/mfd/mc13xxx-core.c b/drivers/mfd/mc13xxx-core.c
+index 8a4f1d90dcfd1..1000572761a84 100644
+--- a/drivers/mfd/mc13xxx-core.c
++++ b/drivers/mfd/mc13xxx-core.c
+@@ -323,8 +323,10 @@ int mc13xxx_adc_do_conversion(struct mc13xxx *mc13xxx, unsigned int mode,
+ adc1 |= MC13783_ADC1_ATOX;
+
+ dev_dbg(mc13xxx->dev, "%s: request irq\n", __func__);
+- mc13xxx_irq_request(mc13xxx, MC13XXX_IRQ_ADCDONE,
++ ret = mc13xxx_irq_request(mc13xxx, MC13XXX_IRQ_ADCDONE,
+ mc13xxx_handler_adcdone, __func__, &adcdone_data);
++ if (ret)
++ goto out;
+
+ mc13xxx_reg_write(mc13xxx, MC13XXX_ADC0, adc0);
+ mc13xxx_reg_write(mc13xxx, MC13XXX_ADC1, adc1);
+diff --git a/drivers/misc/cardreader/alcor_pci.c b/drivers/misc/cardreader/alcor_pci.c
+index de6d44a158bba..3f514d77a843f 100644
+--- a/drivers/misc/cardreader/alcor_pci.c
++++ b/drivers/misc/cardreader/alcor_pci.c
+@@ -266,7 +266,7 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ if (!priv)
+ return -ENOMEM;
+
+- ret = ida_simple_get(&alcor_pci_idr, 0, 0, GFP_KERNEL);
++ ret = ida_alloc(&alcor_pci_idr, GFP_KERNEL);
+ if (ret < 0)
+ return ret;
+ priv->id = ret;
+@@ -280,7 +280,8 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+ ret = pci_request_regions(pdev, DRV_NAME_ALCOR_PCI);
+ if (ret) {
+ dev_err(&pdev->dev, "Cannot request region\n");
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto error_free_ida;
+ }
+
+ if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) {
+@@ -324,6 +325,8 @@ static int alcor_pci_probe(struct pci_dev *pdev,
+
+ error_release_regions:
+ pci_release_regions(pdev);
++error_free_ida:
++ ida_free(&alcor_pci_idr, priv->id);
+ return ret;
+ }
+
+@@ -337,7 +340,7 @@ static void alcor_pci_remove(struct pci_dev *pdev)
+
+ mfd_remove_devices(&pdev->dev);
+
+- ida_simple_remove(&alcor_pci_idr, priv->id);
++ ida_free(&alcor_pci_idr, priv->id);
+
+ pci_release_regions(pdev);
+ pci_set_drvdata(pdev, NULL);
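
/*
 * [Illustrative sketch, not part of the upstream patch.] The alcor_pci
 * hunks add an error_free_ida label so the id from ida_alloc() is
 * returned on every failure path, not only in remove(). This is the
 * standard kernel unwind shape, shown here with stub helpers standing in
 * for ida_alloc()/pci_request_regions()/etc.:
 */
#include <stdio.h>

static int acquire_id(void)	  { return 1; }	  /* ida_alloc() stand-in */
static void release_id(int id)	  { (void)id; }	  /* ida_free() stand-in  */
static int acquire_regions(void)  { return 0; }	  /* request_regions stand-in */
static void release_regions(void) { }
static int start_device(void)	  { return -1; }  /* fails, to exercise unwind */

static int probe_example(void)
{
	int id, err;

	id = acquire_id();
	if (id < 0)
		return id;

	err = acquire_regions();
	if (err)
		goto err_free_id;

	err = start_device();
	if (err)
		goto err_release_regions;

	return 0;

err_release_regions:
	release_regions();	/* undo in reverse acquisition order */
err_free_id:
	release_id(id);
	return err;
}

int main(void)
{
	printf("probe_example() = %d\n", probe_example());
	return 0;
}
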
+diff --git a/drivers/misc/habanalabs/common/debugfs.c b/drivers/misc/habanalabs/common/debugfs.c
+index fc084ee5106ec..09001fd9db85f 100644
+--- a/drivers/misc/habanalabs/common/debugfs.c
++++ b/drivers/misc/habanalabs/common/debugfs.c
+@@ -890,6 +890,8 @@ static ssize_t hl_set_power_state(struct file *f, const char __user *buf,
+ pci_set_power_state(hdev->pdev, PCI_D0);
+ pci_restore_state(hdev->pdev);
+ rc = pci_enable_device(hdev->pdev);
++ if (rc < 0)
++ return rc;
+ } else if (value == 2) {
+ pci_save_state(hdev->pdev);
+ pci_disable_device(hdev->pdev);
+diff --git a/drivers/misc/kgdbts.c b/drivers/misc/kgdbts.c
+index 67c5b452dd356..88b91ad8e5413 100644
+--- a/drivers/misc/kgdbts.c
++++ b/drivers/misc/kgdbts.c
+@@ -1070,10 +1070,10 @@ static int kgdbts_option_setup(char *opt)
+ {
+ if (strlen(opt) >= MAX_CONFIG_LEN) {
+ printk(KERN_ERR "kgdbts: config string too long\n");
+- return -ENOSPC;
++ return 1;
+ }
+ strcpy(config, opt);
+- return 0;
++ return 1;
+ }
+
+ __setup("kgdbts=", kgdbts_option_setup);
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 67bb6a25fd0a0..64ce3f830262b 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -107,6 +107,7 @@
+ #define MEI_DEV_ID_ADP_S 0x7AE8 /* Alder Lake Point S */
+ #define MEI_DEV_ID_ADP_LP 0x7A60 /* Alder Lake Point LP */
+ #define MEI_DEV_ID_ADP_P 0x51E0 /* Alder Lake Point P */
++#define MEI_DEV_ID_ADP_N 0x54E0 /* Alder Lake Point N */
+
+ /*
+ * MEI HW Section
+@@ -120,6 +121,7 @@
+ #define PCI_CFG_HFS_2 0x48
+ #define PCI_CFG_HFS_3 0x60
+ # define PCI_CFG_HFS_3_FW_SKU_MSK 0x00000070
++# define PCI_CFG_HFS_3_FW_SKU_IGN 0x00000000
+ # define PCI_CFG_HFS_3_FW_SKU_SPS 0x00000060
+ #define PCI_CFG_HFS_4 0x64
+ #define PCI_CFG_HFS_5 0x68
+diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
+index d3a6c07286451..fbc4c95818645 100644
+--- a/drivers/misc/mei/hw-me.c
++++ b/drivers/misc/mei/hw-me.c
+@@ -1405,16 +1405,16 @@ static bool mei_me_fw_type_sps_4(const struct pci_dev *pdev)
+ .quirk_probe = mei_me_fw_type_sps_4
+
+ /**
+- * mei_me_fw_type_sps() - check for sps sku
++ * mei_me_fw_type_sps_ign() - check for sps or ign sku
+ *
+- * Read ME FW Status register to check for SPS Firmware.
+- * The SPS FW is only signaled in pci function 0
++ * Read ME FW Status register to check for SPS or IGN Firmware.
++ * The SPS/IGN FW is only signaled in pci function 0
+ *
+ * @pdev: pci device
+ *
+- * Return: true in case of SPS firmware
++ * Return: true in case of SPS/IGN firmware
+ */
+-static bool mei_me_fw_type_sps(const struct pci_dev *pdev)
++static bool mei_me_fw_type_sps_ign(const struct pci_dev *pdev)
+ {
+ u32 reg;
+ u32 fw_type;
+@@ -1427,14 +1427,15 @@ static bool mei_me_fw_type_sps(const struct pci_dev *pdev)
+
+ dev_dbg(&pdev->dev, "fw type is %d\n", fw_type);
+
+- return fw_type == PCI_CFG_HFS_3_FW_SKU_SPS;
++ return fw_type == PCI_CFG_HFS_3_FW_SKU_IGN ||
++ fw_type == PCI_CFG_HFS_3_FW_SKU_SPS;
+ }
+
+ #define MEI_CFG_KIND_ITOUCH \
+ .kind = "itouch"
+
+-#define MEI_CFG_FW_SPS \
+- .quirk_probe = mei_me_fw_type_sps
++#define MEI_CFG_FW_SPS_IGN \
++ .quirk_probe = mei_me_fw_type_sps_ign
+
+ #define MEI_CFG_FW_VER_SUPP \
+ .fw_ver_supported = 1
+@@ -1535,7 +1536,7 @@ static const struct mei_cfg mei_me_pch12_sps_cfg = {
+ MEI_CFG_PCH8_HFS,
+ MEI_CFG_FW_VER_SUPP,
+ MEI_CFG_DMA_128,
+- MEI_CFG_FW_SPS,
++ MEI_CFG_FW_SPS_IGN,
+ };
+
+ /* Cannon Lake itouch with quirk for SPS 5.0 and newer Firmware exclusion
+@@ -1545,7 +1546,7 @@ static const struct mei_cfg mei_me_pch12_itouch_sps_cfg = {
+ MEI_CFG_KIND_ITOUCH,
+ MEI_CFG_PCH8_HFS,
+ MEI_CFG_FW_VER_SUPP,
+- MEI_CFG_FW_SPS,
++ MEI_CFG_FW_SPS_IGN,
+ };
+
+ /* Tiger Lake and newer devices */
+@@ -1562,7 +1563,7 @@ static const struct mei_cfg mei_me_pch15_sps_cfg = {
+ MEI_CFG_FW_VER_SUPP,
+ MEI_CFG_DMA_128,
+ MEI_CFG_TRC,
+- MEI_CFG_FW_SPS,
++ MEI_CFG_FW_SPS_IGN,
+ };
+
+ /*
+diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
+index a67f4f2d33a93..0706322154cbe 100644
+--- a/drivers/misc/mei/interrupt.c
++++ b/drivers/misc/mei/interrupt.c
+@@ -424,31 +424,26 @@ int mei_irq_read_handler(struct mei_device *dev,
+ list_for_each_entry(cl, &dev->file_list, link) {
+ if (mei_cl_hbm_equal(cl, mei_hdr)) {
+ cl_dbg(dev, cl, "got a message\n");
+- break;
++ ret = mei_cl_irq_read_msg(cl, mei_hdr, meta_hdr, cmpl_list);
++ goto reset_slots;
+ }
+ }
+
+ /* if no recipient cl was found we assume corrupted header */
+- if (&cl->link == &dev->file_list) {
+- /* A message for not connected fixed address clients
+- * should be silently discarded
+- * On power down client may be force cleaned,
+- * silently discard such messages
+- */
+- if (hdr_is_fixed(mei_hdr) ||
+- dev->dev_state == MEI_DEV_POWER_DOWN) {
+- mei_irq_discard_msg(dev, mei_hdr, mei_hdr->length);
+- ret = 0;
+- goto reset_slots;
+- }
+- dev_err(dev->dev, "no destination client found 0x%08X\n",
+- dev->rd_msg_hdr[0]);
+- ret = -EBADMSG;
+- goto end;
++ /* A message for not connected fixed address clients
++ * should be silently discarded
++ * On power down client may be force cleaned,
++ * silently discard such messages
++ */
++ if (hdr_is_fixed(mei_hdr) ||
++ dev->dev_state == MEI_DEV_POWER_DOWN) {
++ mei_irq_discard_msg(dev, mei_hdr, mei_hdr->length);
++ ret = 0;
++ goto reset_slots;
+ }
+-
+- ret = mei_cl_irq_read_msg(cl, mei_hdr, meta_hdr, cmpl_list);
+-
++ dev_err(dev->dev, "no destination client found 0x%08X\n", dev->rd_msg_hdr[0]);
++ ret = -EBADMSG;
++ goto end;
+
+ reset_slots:
+ /* reset the number of slots and header */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 3a45aaf002ac8..a738253dbd056 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -113,6 +113,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
+
+ /* required last entry */
+ {0, }
+diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
+index 096ae624be9aa..58a60afa650b6 100644
+--- a/drivers/mmc/core/bus.c
++++ b/drivers/mmc/core/bus.c
+@@ -15,6 +15,7 @@
+ #include <linux/stat.h>
+ #include <linux/of.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sysfs.h>
+
+ #include <linux/mmc/card.h>
+ #include <linux/mmc/host.h>
+@@ -34,13 +35,13 @@ static ssize_t type_show(struct device *dev,
+
+ switch (card->type) {
+ case MMC_TYPE_MMC:
+- return sprintf(buf, "MMC\n");
++ return sysfs_emit(buf, "MMC\n");
+ case MMC_TYPE_SD:
+- return sprintf(buf, "SD\n");
++ return sysfs_emit(buf, "SD\n");
+ case MMC_TYPE_SDIO:
+- return sprintf(buf, "SDIO\n");
++ return sysfs_emit(buf, "SDIO\n");
+ case MMC_TYPE_SD_COMBO:
+- return sprintf(buf, "SDcombo\n");
++ return sysfs_emit(buf, "SDcombo\n");
+ default:
+ return -EFAULT;
+ }
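
/*
 * [Illustrative sketch, not part of the upstream patch.] The sprintf() to
 * sysfs_emit() conversions here and in the following mmc files matter
 * because a sysfs ->show() callback writes into a buffer of exactly
 * PAGE_SIZE bytes; sysfs_emit() enforces that bound where sprintf()
 * simply trusts the caller. A rough userspace analogue of the bounded
 * helper (BUF_SIZE standing in for PAGE_SIZE):
 */
#include <stdarg.h>
#include <stdio.h>

#define BUF_SIZE 4096

static int bounded_emit(char *buf, const char *fmt, ...)
{
	va_list args;
	int len;

	va_start(args, fmt);
	len = vsnprintf(buf, BUF_SIZE, fmt, args);
	va_end(args);

	/* report what actually fit, never more than the buffer holds */
	return len >= BUF_SIZE ? BUF_SIZE - 1 : len;
}

int main(void)
{
	char buf[BUF_SIZE];
	int n = bounded_emit(buf, "0x%x\n", 0x404);

	printf("%d bytes: %s", n, buf);
	return 0;
}
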
+diff --git a/drivers/mmc/core/bus.h b/drivers/mmc/core/bus.h
+index 8105852c4b62f..3996b191b68d1 100644
+--- a/drivers/mmc/core/bus.h
++++ b/drivers/mmc/core/bus.h
+@@ -9,6 +9,7 @@
+ #define _MMC_CORE_BUS_H
+
+ #include <linux/device.h>
++#include <linux/sysfs.h>
+
+ struct mmc_host;
+ struct mmc_card;
+@@ -17,7 +18,7 @@ struct mmc_card;
+ static ssize_t mmc_##name##_show (struct device *dev, struct device_attribute *attr, char *buf) \
+ { \
+ struct mmc_card *card = mmc_dev_to_card(dev); \
+- return sprintf(buf, fmt, args); \
++ return sysfs_emit(buf, fmt, args); \
+ } \
+ static DEVICE_ATTR(name, S_IRUGO, mmc_##name##_show, NULL)
+
+diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
+index cf140f4ec8643..d739e2b631fe8 100644
+--- a/drivers/mmc/core/host.c
++++ b/drivers/mmc/core/host.c
+@@ -588,6 +588,16 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
+
+ EXPORT_SYMBOL(mmc_alloc_host);
+
++static int mmc_validate_host_caps(struct mmc_host *host)
++{
++ if (host->caps & MMC_CAP_SDIO_IRQ && !host->ops->enable_sdio_irq) {
++ dev_warn(host->parent, "missing ->enable_sdio_irq() ops\n");
++ return -EINVAL;
++ }
++
++ return 0;
++}
++
+ /**
+ * mmc_add_host - initialise host hardware
+ * @host: mmc host
+@@ -600,8 +610,9 @@ int mmc_add_host(struct mmc_host *host)
+ {
+ int err;
+
+- WARN_ON((host->caps & MMC_CAP_SDIO_IRQ) &&
+- !host->ops->enable_sdio_irq);
++ err = mmc_validate_host_caps(host);
++ if (err)
++ return err;
+
+ err = device_add(&host->class_dev);
+ if (err)
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 8421519c2a983..43d1b9b2fa499 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -12,6 +12,7 @@
+ #include <linux/slab.h>
+ #include <linux/stat.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sysfs.h>
+
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/card.h>
+@@ -812,12 +813,11 @@ static ssize_t mmc_fwrev_show(struct device *dev,
+ {
+ struct mmc_card *card = mmc_dev_to_card(dev);
+
+- if (card->ext_csd.rev < 7) {
+- return sprintf(buf, "0x%x\n", card->cid.fwrev);
+- } else {
+- return sprintf(buf, "0x%*phN\n", MMC_FIRMWARE_LEN,
+- card->ext_csd.fwrev);
+- }
++ if (card->ext_csd.rev < 7)
++ return sysfs_emit(buf, "0x%x\n", card->cid.fwrev);
++ else
++ return sysfs_emit(buf, "0x%*phN\n", MMC_FIRMWARE_LEN,
++ card->ext_csd.fwrev);
+ }
+
+ static DEVICE_ATTR(fwrev, S_IRUGO, mmc_fwrev_show, NULL);
+@@ -830,10 +830,10 @@ static ssize_t mmc_dsr_show(struct device *dev,
+ struct mmc_host *host = card->host;
+
+ if (card->csd.dsr_imp && host->dsr_req)
+- return sprintf(buf, "0x%x\n", host->dsr);
++ return sysfs_emit(buf, "0x%x\n", host->dsr);
+ else
+ /* return default DSR value */
+- return sprintf(buf, "0x%x\n", 0x404);
++ return sysfs_emit(buf, "0x%x\n", 0x404);
+ }
+
+ static DEVICE_ATTR(dsr, S_IRUGO, mmc_dsr_show, NULL);
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index bfbfed30dc4d8..68df6b2f49cc7 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -13,6 +13,7 @@
+ #include <linux/stat.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/scatterlist.h>
++#include <linux/sysfs.h>
+
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/card.h>
+@@ -708,18 +709,16 @@ MMC_DEV_ATTR(ocr, "0x%08x\n", card->ocr);
+ MMC_DEV_ATTR(rca, "0x%04x\n", card->rca);
+
+
+-static ssize_t mmc_dsr_show(struct device *dev,
+- struct device_attribute *attr,
+- char *buf)
++static ssize_t mmc_dsr_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
+ {
+- struct mmc_card *card = mmc_dev_to_card(dev);
+- struct mmc_host *host = card->host;
+-
+- if (card->csd.dsr_imp && host->dsr_req)
+- return sprintf(buf, "0x%x\n", host->dsr);
+- else
+- /* return default DSR value */
+- return sprintf(buf, "0x%x\n", 0x404);
++ struct mmc_card *card = mmc_dev_to_card(dev);
++ struct mmc_host *host = card->host;
++
++ if (card->csd.dsr_imp && host->dsr_req)
++ return sysfs_emit(buf, "0x%x\n", host->dsr);
++ /* return default DSR value */
++ return sysfs_emit(buf, "0x%x\n", 0x404);
+ }
+
+ static DEVICE_ATTR(dsr, S_IRUGO, mmc_dsr_show, NULL);
+@@ -735,9 +734,9 @@ static ssize_t info##num##_show(struct device *dev, struct device_attribute *att
+ \
+ if (num > card->num_info) \
+ return -ENODATA; \
+- if (!card->info[num-1][0]) \
++ if (!card->info[num - 1][0]) \
+ return 0; \
+- return sprintf(buf, "%s\n", card->info[num-1]); \
++ return sysfs_emit(buf, "%s\n", card->info[num - 1]); \
+ } \
+ static DEVICE_ATTR_RO(info##num)
+
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 41164748723d2..25799accf8a02 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -7,6 +7,7 @@
+
+ #include <linux/err.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sysfs.h>
+
+ #include <linux/mmc/host.h>
+ #include <linux/mmc/card.h>
+@@ -40,9 +41,9 @@ static ssize_t info##num##_show(struct device *dev, struct device_attribute *att
+ \
+ if (num > card->num_info) \
+ return -ENODATA; \
+- if (!card->info[num-1][0]) \
++ if (!card->info[num - 1][0]) \
+ return 0; \
+- return sprintf(buf, "%s\n", card->info[num-1]); \
++ return sysfs_emit(buf, "%s\n", card->info[num - 1]); \
+ } \
+ static DEVICE_ATTR_RO(info##num)
+
+diff --git a/drivers/mmc/core/sdio_bus.c b/drivers/mmc/core/sdio_bus.c
+index fda03b35c14a5..c6268c38c69e5 100644
+--- a/drivers/mmc/core/sdio_bus.c
++++ b/drivers/mmc/core/sdio_bus.c
+@@ -14,6 +14,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/pm_domain.h>
+ #include <linux/acpi.h>
++#include <linux/sysfs.h>
+
+ #include <linux/mmc/card.h>
+ #include <linux/mmc/host.h>
+@@ -35,7 +36,7 @@ field##_show(struct device *dev, struct device_attribute *attr, char *buf) \
+ struct sdio_func *func; \
+ \
+ func = dev_to_sdio_func (dev); \
+- return sprintf(buf, format_string, args); \
++ return sysfs_emit(buf, format_string, args); \
+ } \
+ static DEVICE_ATTR_RO(field)
+
+@@ -52,9 +53,9 @@ static ssize_t info##num##_show(struct device *dev, struct device_attribute *att
+ \
+ if (num > func->num_info) \
+ return -ENODATA; \
+- if (!func->info[num-1][0]) \
++ if (!func->info[num - 1][0]) \
+ return 0; \
+- return sprintf(buf, "%s\n", func->info[num-1]); \
++ return sysfs_emit(buf, "%s\n", func->info[num - 1]); \
+ } \
+ static DEVICE_ATTR_RO(info##num)
+
+diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c
+index 2a757c88f9d21..80de660027d89 100644
+--- a/drivers/mmc/host/davinci_mmc.c
++++ b/drivers/mmc/host/davinci_mmc.c
+@@ -1375,8 +1375,12 @@ static int davinci_mmcsd_suspend(struct device *dev)
+ static int davinci_mmcsd_resume(struct device *dev)
+ {
+ struct mmc_davinci_host *host = dev_get_drvdata(dev);
++ int ret;
++
++ ret = clk_enable(host->clk);
++ if (ret)
++ return ret;
+
+- clk_enable(host->clk);
+ mmc_davinci_reset_ctrl(host, 0);
+
+ return 0;
+diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c
+index 58cfaffa3c2d8..f7c384db89bf3 100644
+--- a/drivers/mmc/host/rtsx_pci_sdmmc.c
++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
+@@ -1495,12 +1495,12 @@ static int rtsx_pci_sdmmc_drv_probe(struct platform_device *pdev)
+
+ realtek_init_host(host);
+
+- if (pcr->rtd3_en) {
+- pm_runtime_set_autosuspend_delay(&pdev->dev, 5000);
+- pm_runtime_use_autosuspend(&pdev->dev);
+- pm_runtime_enable(&pdev->dev);
+- }
+-
++ pm_runtime_no_callbacks(&pdev->dev);
++ pm_runtime_set_active(&pdev->dev);
++ pm_runtime_enable(&pdev->dev);
++ pm_runtime_set_autosuspend_delay(&pdev->dev, 200);
++ pm_runtime_mark_last_busy(&pdev->dev);
++ pm_runtime_use_autosuspend(&pdev->dev);
+
+ mmc_add_host(mmc);
+
+@@ -1521,11 +1521,6 @@ static int rtsx_pci_sdmmc_drv_remove(struct platform_device *pdev)
+ pcr->slots[RTSX_SD_CARD].card_event = NULL;
+ mmc = host->mmc;
+
+- if (pcr->rtd3_en) {
+- pm_runtime_dont_use_autosuspend(&pdev->dev);
+- pm_runtime_disable(&pdev->dev);
+- }
+-
+ cancel_work_sync(&host->work);
+
+ mutex_lock(&host->host_mutex);
+@@ -1548,6 +1543,9 @@ static int rtsx_pci_sdmmc_drv_remove(struct platform_device *pdev)
+
+ flush_work(&host->work);
+
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
++
+ mmc_free_host(mmc);
+
+ dev_dbg(&(pdev->dev),
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index f654afbe8e83c..b4891bb266485 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -514,26 +514,6 @@ static const struct sdhci_am654_driver_data sdhci_j721e_4bit_drvdata = {
+ .flags = IOMUX_PRESENT,
+ };
+
+-static const struct sdhci_pltfm_data sdhci_am64_8bit_pdata = {
+- .ops = &sdhci_j721e_8bit_ops,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+-};
+-
+-static const struct sdhci_am654_driver_data sdhci_am64_8bit_drvdata = {
+- .pdata = &sdhci_am64_8bit_pdata,
+- .flags = DLL_PRESENT | DLL_CALIB,
+-};
+-
+-static const struct sdhci_pltfm_data sdhci_am64_4bit_pdata = {
+- .ops = &sdhci_j721e_4bit_ops,
+- .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
+-};
+-
+-static const struct sdhci_am654_driver_data sdhci_am64_4bit_drvdata = {
+- .pdata = &sdhci_am64_4bit_pdata,
+- .flags = IOMUX_PRESENT,
+-};
+-
+ static const struct soc_device_attribute sdhci_am654_devices[] = {
+ { .family = "AM65X",
+ .revision = "SR1.0",
+@@ -759,11 +739,11 @@ static const struct of_device_id sdhci_am654_of_match[] = {
+ },
+ {
+ .compatible = "ti,am64-sdhci-8bit",
+- .data = &sdhci_am64_8bit_drvdata,
++ .data = &sdhci_j721e_8bit_drvdata,
+ },
+ {
+ .compatible = "ti,am64-sdhci-4bit",
+- .data = &sdhci_am64_4bit_drvdata,
++ .data = &sdhci_j721e_4bit_drvdata,
+ },
+ { /* sentinel */ }
+ };
+diff --git a/drivers/mtd/devices/mchp23k256.c b/drivers/mtd/devices/mchp23k256.c
+index a8b31bddf14b8..1a840db207b5a 100644
+--- a/drivers/mtd/devices/mchp23k256.c
++++ b/drivers/mtd/devices/mchp23k256.c
+@@ -231,6 +231,19 @@ static const struct of_device_id mchp23k256_of_table[] = {
+ };
+ MODULE_DEVICE_TABLE(of, mchp23k256_of_table);
+
++static const struct spi_device_id mchp23k256_spi_ids[] = {
++ {
++ .name = "mchp23k256",
++ .driver_data = (kernel_ulong_t)&mchp23k256_caps,
++ },
++ {
++ .name = "mchp23lcv1024",
++ .driver_data = (kernel_ulong_t)&mchp23lcv1024_caps,
++ },
++ {}
++};
++MODULE_DEVICE_TABLE(spi, mchp23k256_spi_ids);
++
+ static struct spi_driver mchp23k256_driver = {
+ .driver = {
+ .name = "mchp23k256",
+@@ -238,6 +251,7 @@ static struct spi_driver mchp23k256_driver = {
+ },
+ .probe = mchp23k256_probe,
+ .remove = mchp23k256_remove,
++ .id_table = mchp23k256_spi_ids,
+ };
+
+ module_spi_driver(mchp23k256_driver);
+diff --git a/drivers/mtd/devices/mchp48l640.c b/drivers/mtd/devices/mchp48l640.c
+index 231a107901960..b9cf2b4415a54 100644
+--- a/drivers/mtd/devices/mchp48l640.c
++++ b/drivers/mtd/devices/mchp48l640.c
+@@ -359,6 +359,15 @@ static const struct of_device_id mchp48l640_of_table[] = {
+ };
+ MODULE_DEVICE_TABLE(of, mchp48l640_of_table);
+
++static const struct spi_device_id mchp48l640_spi_ids[] = {
++ {
++ .name = "48l640",
++ .driver_data = (kernel_ulong_t)&mchp48l640_caps,
++ },
++ {}
++};
++MODULE_DEVICE_TABLE(spi, mchp48l640_spi_ids);
++
+ static struct spi_driver mchp48l640_driver = {
+ .driver = {
+ .name = "mchp48l640",
+@@ -366,6 +375,7 @@ static struct spi_driver mchp48l640_driver = {
+ },
+ .probe = mchp48l640_probe,
+ .remove = mchp48l640_remove,
++ .id_table = mchp48l640_spi_ids,
+ };
+
+ module_spi_driver(mchp48l640_driver);
+diff --git a/drivers/mtd/nand/onenand/generic.c b/drivers/mtd/nand/onenand/generic.c
+index 8b6f4da5d7201..a4b8b65fe15f5 100644
+--- a/drivers/mtd/nand/onenand/generic.c
++++ b/drivers/mtd/nand/onenand/generic.c
+@@ -53,7 +53,12 @@ static int generic_onenand_probe(struct platform_device *pdev)
+ }
+
+ info->onenand.mmcontrol = pdata ? pdata->mmcontrol : NULL;
+- info->onenand.irq = platform_get_irq(pdev, 0);
++
++ err = platform_get_irq(pdev, 0);
++ if (err < 0)
++ goto out_iounmap;
++
++ info->onenand.irq = err;
+
+ info->mtd.dev.parent = &pdev->dev;
+ info->mtd.priv = &info->onenand;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index f3276ee9e4fe7..ddd93bc38ea6c 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -2060,13 +2060,15 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ nc->mck = of_clk_get(dev->parent->of_node, 0);
+ if (IS_ERR(nc->mck)) {
+ dev_err(dev, "Failed to retrieve MCK clk\n");
+- return PTR_ERR(nc->mck);
++ ret = PTR_ERR(nc->mck);
++ goto out_release_dma;
+ }
+
+ np = of_parse_phandle(dev->parent->of_node, "atmel,smc", 0);
+ if (!np) {
+ dev_err(dev, "Missing or invalid atmel,smc property\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_release_dma;
+ }
+
+ nc->smc = syscon_node_to_regmap(np);
+@@ -2074,10 +2076,16 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
+ if (IS_ERR(nc->smc)) {
+ ret = PTR_ERR(nc->smc);
+ dev_err(dev, "Could not get SMC regmap (err = %d)\n", ret);
+- return ret;
++ goto out_release_dma;
+ }
+
+ return 0;
++
++out_release_dma:
++ if (nc->dmac)
++ dma_release_channel(nc->dmac);
++
++ return ret;
+ }
+
+ static int
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index ded4df4739280..e50db25e5ddcb 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -648,6 +648,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ const struct nand_sdr_timings *sdr)
+ {
+ struct gpmi_nfc_hardware_timing *hw = &this->hw;
++ struct resources *r = &this->resources;
+ unsigned int dll_threshold_ps = this->devdata->max_chain_delay;
+ unsigned int period_ps, reference_period_ps;
+ unsigned int data_setup_cycles, data_hold_cycles, addr_setup_cycles;
+@@ -671,6 +672,8 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this,
+ wrn_dly_sel = BV_GPMI_CTRL1_WRN_DLY_SEL_NO_DELAY;
+ }
+
++ hw->clk_rate = clk_round_rate(r->clock[0], hw->clk_rate);
++
+ /* SDR core timings are given in picoseconds */
+ period_ps = div_u64((u64)NSEC_PER_SEC * 1000, hw->clk_rate);
+
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index e7b2ba016d8c6..8daaba96edb2c 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -338,16 +338,19 @@ static int nand_isbad_bbm(struct nand_chip *chip, loff_t ofs)
+ *
+ * Return: -EBUSY if the chip has been suspended, 0 otherwise
+ */
+-static int nand_get_device(struct nand_chip *chip)
++static void nand_get_device(struct nand_chip *chip)
+ {
+- mutex_lock(&chip->lock);
+- if (chip->suspended) {
++ /* Wait until the device is resumed. */
++ while (1) {
++ mutex_lock(&chip->lock);
++ if (!chip->suspended) {
++ mutex_lock(&chip->controller->lock);
++ return;
++ }
+ mutex_unlock(&chip->lock);
+- return -EBUSY;
+- }
+- mutex_lock(&chip->controller->lock);
+
+- return 0;
++ wait_event(chip->resume_wq, !chip->suspended);
++ }
+ }
+
+ /**
+@@ -576,9 +579,7 @@ static int nand_block_markbad_lowlevel(struct nand_chip *chip, loff_t ofs)
+ nand_erase_nand(chip, &einfo, 0);
+
+ /* Write bad block marker to OOB */
+- ret = nand_get_device(chip);
+- if (ret)
+- return ret;
++ nand_get_device(chip);
+
+ ret = nand_markbad_bbm(chip, ofs);
+ nand_release_device(chip);
+@@ -3826,9 +3827,7 @@ static int nand_read_oob(struct mtd_info *mtd, loff_t from,
+ ops->mode != MTD_OPS_RAW)
+ return -ENOTSUPP;
+
+- ret = nand_get_device(chip);
+- if (ret)
+- return ret;
++ nand_get_device(chip);
+
+ if (!ops->datbuf)
+ ret = nand_do_read_oob(chip, from, ops);
+@@ -4415,13 +4414,11 @@ static int nand_write_oob(struct mtd_info *mtd, loff_t to,
+ struct mtd_oob_ops *ops)
+ {
+ struct nand_chip *chip = mtd_to_nand(mtd);
+- int ret;
++ int ret = 0;
+
+ ops->retlen = 0;
+
+- ret = nand_get_device(chip);
+- if (ret)
+- return ret;
++ nand_get_device(chip);
+
+ switch (ops->mode) {
+ case MTD_OPS_PLACE_OOB:
+@@ -4481,9 +4478,7 @@ int nand_erase_nand(struct nand_chip *chip, struct erase_info *instr,
+ return -EIO;
+
+ /* Grab the lock and see if the device is available */
+- ret = nand_get_device(chip);
+- if (ret)
+- return ret;
++ nand_get_device(chip);
+
+ /* Shift to get first page */
+ page = (int)(instr->addr >> chip->page_shift);
+@@ -4570,7 +4565,7 @@ static void nand_sync(struct mtd_info *mtd)
+ pr_debug("%s: called\n", __func__);
+
+ /* Grab the lock and see if the device is available */
+- WARN_ON(nand_get_device(chip));
++ nand_get_device(chip);
+ /* Release it and go back */
+ nand_release_device(chip);
+ }
+@@ -4587,9 +4582,7 @@ static int nand_block_isbad(struct mtd_info *mtd, loff_t offs)
+ int ret;
+
+ /* Select the NAND device */
+- ret = nand_get_device(chip);
+- if (ret)
+- return ret;
++ nand_get_device(chip);
+
+ nand_select_target(chip, chipnr);
+
+@@ -4660,6 +4653,8 @@ static void nand_resume(struct mtd_info *mtd)
+ __func__);
+ }
+ mutex_unlock(&chip->lock);
++
++ wake_up_all(&chip->resume_wq);
+ }
+
+ /**
+@@ -5437,6 +5432,7 @@ static int nand_scan_ident(struct nand_chip *chip, unsigned int maxchips,
+ chip->cur_cs = -1;
+
+ mutex_init(&chip->lock);
++ init_waitqueue_head(&chip->resume_wq);
+
+ /* Enforce the right timings for reset/detection */
+ chip->current_interface_config = nand_get_reset_interface_config();
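
/*
 * [Illustrative sketch, not part of the upstream patch.] nand_get_device()
 * used to fail with -EBUSY while the chip was suspended; the hunks above
 * make it sleep on chip->resume_wq until nand_resume() calls
 * wake_up_all(). A pthread analogue of that wait/wake pairing:
 */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t chip_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t resume_wq = PTHREAD_COND_INITIALIZER;
static bool suspended;

/* like nand_get_device(): returns with the lock held and chip resumed */
static void get_device(void)
{
	pthread_mutex_lock(&chip_lock);
	while (suspended)			/* wait_event() analogue */
		pthread_cond_wait(&resume_wq, &chip_lock);
}

static void release_device(void)
{
	pthread_mutex_unlock(&chip_lock);
}

/* like nand_resume(): clear the flag, then wake every waiter */
static void resume_device(void)
{
	pthread_mutex_lock(&chip_lock);
	suspended = false;
	pthread_cond_broadcast(&resume_wq);	/* wake_up_all() analogue */
	pthread_mutex_unlock(&chip_lock);
}
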
+diff --git a/drivers/mtd/nand/raw/pl35x-nand-controller.c b/drivers/mtd/nand/raw/pl35x-nand-controller.c
+index 8a91e069ee2e9..3c6f6aff649f8 100644
+--- a/drivers/mtd/nand/raw/pl35x-nand-controller.c
++++ b/drivers/mtd/nand/raw/pl35x-nand-controller.c
+@@ -1062,7 +1062,7 @@ static int pl35x_nand_chip_init(struct pl35x_nandc *nfc,
+ chip->controller = &nfc->controller;
+ mtd = nand_to_mtd(chip);
+ mtd->dev.parent = nfc->dev;
+- nand_set_flash_node(chip, nfc->dev->of_node);
++ nand_set_flash_node(chip, np);
+ if (!mtd->name) {
+ mtd->name = devm_kasprintf(nfc->dev, GFP_KERNEL,
+ "%s", PL35X_NANDC_DRIVER_NAME);
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 04ea180118e33..cc155f6c6c68c 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -3181,10 +3181,11 @@ static void spi_nor_set_mtd_info(struct spi_nor *nor)
+ mtd->flags = MTD_CAP_NORFLASH;
+ if (nor->info->flags & SPI_NOR_NO_ERASE)
+ mtd->flags |= MTD_NO_ERASE;
++ else
++ mtd->_erase = spi_nor_erase;
+ mtd->writesize = nor->params->writesize;
+ mtd->writebufsize = nor->params->page_size;
+ mtd->size = nor->params->size;
+- mtd->_erase = spi_nor_erase;
+ mtd->_read = spi_nor_read;
+ /* Might be already set by some SST flashes. */
+ if (!mtd->_write)
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index a7e3eb9befb62..a32050fecabf3 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -351,9 +351,6 @@ static ssize_t dev_attribute_show(struct device *dev,
+ * we still can use 'ubi->ubi_num'.
+ */
+ ubi = container_of(dev, struct ubi_device, dev);
+- ubi = ubi_get_device(ubi->ubi_num);
+- if (!ubi)
+- return -ENODEV;
+
+ if (attr == &dev_eraseblock_size)
+ ret = sprintf(buf, "%d\n", ubi->leb_size);
+@@ -382,7 +379,6 @@ static ssize_t dev_attribute_show(struct device *dev,
+ else
+ ret = -EINVAL;
+
+- ubi_put_device(ubi);
+ return ret;
+ }
+
+@@ -979,9 +975,6 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
+ goto out_detach;
+ }
+
+- /* Make device "available" before it becomes accessible via sysfs */
+- ubi_devices[ubi_num] = ubi;
+-
+ err = uif_init(ubi);
+ if (err)
+ goto out_detach;
+@@ -1026,6 +1019,7 @@ int ubi_attach_mtd_dev(struct mtd_info *mtd, int ubi_num,
+ wake_up_process(ubi->bgt_thread);
+ spin_unlock(&ubi->wl_lock);
+
++ ubi_devices[ubi_num] = ubi;
+ ubi_notify_all(ubi, UBI_VOLUME_ADDED, NULL);
+ return ubi_num;
+
+@@ -1034,7 +1028,6 @@ out_debugfs:
+ out_uif:
+ uif_close(ubi);
+ out_detach:
+- ubi_devices[ubi_num] = NULL;
+ ubi_wl_close(ubi);
+ ubi_free_all_volumes(ubi);
+ vfree(ubi->vtbl);
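
/*
 * [Illustrative sketch, not part of the upstream patch.] The ubi hunks
 * move the ubi_devices[ubi_num] assignment to the very end of attach, so
 * sysfs readers can never look up a half-initialized device; that is what
 * lets the show() callbacks drop their ubi_get_device()/ubi_put_device()
 * dance. The publish-after-init shape, with a hypothetical device table
 * (real code would also order the stores or hold the table lock):
 */
struct ubi_dev { int initialized; };

static struct ubi_dev *devices[4];	/* the table readers consult */

static int attach(int num, struct ubi_dev *dev)
{
	dev->initialized = 1;		/* complete every init step ...   */
	/* ... including everything that can fail ... */

	devices[num] = dev;		/* ... and publish only at the end */
	return num;
}
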
+diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
+index 022af59906aa9..6b5f1ffd961b9 100644
+--- a/drivers/mtd/ubi/fastmap.c
++++ b/drivers/mtd/ubi/fastmap.c
+@@ -468,7 +468,9 @@ static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai,
+ if (err == UBI_IO_FF_BITFLIPS)
+ scrub = 1;
+
+- add_aeb(ai, free, pnum, ec, scrub);
++ ret = add_aeb(ai, free, pnum, ec, scrub);
++ if (ret)
++ goto out;
+ continue;
+ } else if (err == 0 || err == UBI_IO_BITFLIPS) {
+ dbg_bld("Found non empty PEB:%i in pool", pnum);
+@@ -638,8 +640,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ if (fm_pos >= fm_size)
+ goto fail_bad;
+
+- add_aeb(ai, &ai->free, be32_to_cpu(fmec->pnum),
+- be32_to_cpu(fmec->ec), 0);
++ ret = add_aeb(ai, &ai->free, be32_to_cpu(fmec->pnum),
++ be32_to_cpu(fmec->ec), 0);
++ if (ret)
++ goto fail;
+ }
+
+ /* read EC values from used list */
+@@ -649,8 +653,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ if (fm_pos >= fm_size)
+ goto fail_bad;
+
+- add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
+- be32_to_cpu(fmec->ec), 0);
++ ret = add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
++ be32_to_cpu(fmec->ec), 0);
++ if (ret)
++ goto fail;
+ }
+
+ /* read EC values from scrub list */
+@@ -660,8 +666,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ if (fm_pos >= fm_size)
+ goto fail_bad;
+
+- add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
+- be32_to_cpu(fmec->ec), 1);
++ ret = add_aeb(ai, &used, be32_to_cpu(fmec->pnum),
++ be32_to_cpu(fmec->ec), 1);
++ if (ret)
++ goto fail;
+ }
+
+ /* read EC values from erase list */
+@@ -671,8 +679,10 @@ static int ubi_attach_fastmap(struct ubi_device *ubi,
+ if (fm_pos >= fm_size)
+ goto fail_bad;
+
+- add_aeb(ai, &ai->erase, be32_to_cpu(fmec->pnum),
+- be32_to_cpu(fmec->ec), 1);
++ ret = add_aeb(ai, &ai->erase, be32_to_cpu(fmec->pnum),
++ be32_to_cpu(fmec->ec), 1);
++ if (ret)
++ goto fail;
+ }
+
+ ai->mean_ec = div_u64(ai->ec_sum, ai->ec_count);
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 139ee132bfbcf..1bc7b3a056046 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -56,16 +56,11 @@ static ssize_t vol_attribute_show(struct device *dev,
+ {
+ int ret;
+ struct ubi_volume *vol = container_of(dev, struct ubi_volume, dev);
+- struct ubi_device *ubi;
+-
+- ubi = ubi_get_device(vol->ubi->ubi_num);
+- if (!ubi)
+- return -ENODEV;
++ struct ubi_device *ubi = vol->ubi;
+
+ spin_lock(&ubi->volumes_lock);
+ if (!ubi->volumes[vol->vol_id]) {
+ spin_unlock(&ubi->volumes_lock);
+- ubi_put_device(ubi);
+ return -ENODEV;
+ }
+ /* Take a reference to prevent volume removal */
+@@ -103,7 +98,6 @@ static ssize_t vol_attribute_show(struct device *dev,
+ vol->ref_count -= 1;
+ ubi_assert(vol->ref_count >= 0);
+ spin_unlock(&ubi->volumes_lock);
+- ubi_put_device(ubi);
+ return ret;
+ }
+
+diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
+index ba587e5fc24fc..683203f87ae2b 100644
+--- a/drivers/net/bareudp.c
++++ b/drivers/net/bareudp.c
+@@ -148,14 +148,14 @@ static int bareudp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ skb_reset_network_header(skb);
+ skb_reset_mac_header(skb);
+
+- if (!IS_ENABLED(CONFIG_IPV6) || family == AF_INET)
++ if (!ipv6_mod_enabled() || family == AF_INET)
+ err = IP_ECN_decapsulate(oiph, skb);
+ else
+ err = IP6_ECN_decapsulate(oiph, skb);
+
+ if (unlikely(err)) {
+ if (log_ecn_error) {
+- if (!IS_ENABLED(CONFIG_IPV6) || family == AF_INET)
++ if (!ipv6_mod_enabled() || family == AF_INET)
+ net_info_ratelimited("non-ECT from %pI4 "
+ "with TOS=%#x\n",
+ &((struct iphdr *)oiph)->saddr,
+@@ -221,11 +221,12 @@ static struct socket *bareudp_create_sock(struct net *net, __be16 port)
+ int err;
+
+ memset(&udp_conf, 0, sizeof(udp_conf));
+-#if IS_ENABLED(CONFIG_IPV6)
+- udp_conf.family = AF_INET6;
+-#else
+- udp_conf.family = AF_INET;
+-#endif
++
++ if (ipv6_mod_enabled())
++ udp_conf.family = AF_INET6;
++ else
++ udp_conf.family = AF_INET;
++
+ udp_conf.local_udp_port = port;
+ /* Open UDP socket */
+ err = udp_sock_create(net, &udp_conf, &sock);
+@@ -448,7 +449,7 @@ static netdev_tx_t bareudp_xmit(struct sk_buff *skb, struct net_device *dev)
+ }
+
+ rcu_read_lock();
+- if (IS_ENABLED(CONFIG_IPV6) && info->mode & IP_TUNNEL_INFO_IPV6)
++ if (ipv6_mod_enabled() && info->mode & IP_TUNNEL_INFO_IPV6)
+ err = bareudp6_xmit_skb(skb, dev, bareudp, info);
+ else
+ err = bareudp_xmit_skb(skb, dev, bareudp, info);
+@@ -478,7 +479,7 @@ static int bareudp_fill_metadata_dst(struct net_device *dev,
+
+ use_cache = ip_tunnel_dst_cache_usable(skb, info);
+
+- if (!IS_ENABLED(CONFIG_IPV6) || ip_tunnel_info_af(info) == AF_INET) {
++ if (!ipv6_mod_enabled() || ip_tunnel_info_af(info) == AF_INET) {
+ struct rtable *rt;
+ __be32 saddr;
+
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 1a4b56f6fa8c6..b3b5bc1c803b3 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1637,8 +1637,6 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ if (err)
+ goto out_fail;
+
+- can_put_echo_skb(skb, dev, 0, 0);
+-
+ if (cdev->can.ctrlmode & CAN_CTRLMODE_FD) {
+ cccr = m_can_read(cdev, M_CAN_CCCR);
+ cccr &= ~CCCR_CMR_MASK;
+@@ -1655,6 +1653,9 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+ m_can_write(cdev, M_CAN_CCCR, cccr);
+ }
+ m_can_write(cdev, M_CAN_TXBTIE, 0x1);
++
++ can_put_echo_skb(skb, dev, 0, 0);
++
+ m_can_write(cdev, M_CAN_TXBAR, 0x1);
+ /* End of xmit function for version 3.0.x */
+ } else {
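The m_can hunk moves can_put_echo_skb() past the last point at which the transmit path can still fail: once the skb is handed to the echo stack the driver no longer owns it, so registering the echo before a possible error return would leave the frame in inconsistent ownership. The ordering rule, sketched with a hypothetical setup_tx() standing in for all the fallible steps:

static netdev_tx_t m_can_tx_sketch(struct m_can_classdev *cdev,
				   struct net_device *dev,
				   struct sk_buff *skb)
{
	if (setup_tx(cdev))		/* hypothetical: all fallible setup */
		return NETDEV_TX_BUSY;	/* skb still owned by the driver */

	can_put_echo_skb(skb, dev, 0, 0);    /* ownership moves to CAN core */
	m_can_write(cdev, M_CAN_TXBAR, 0x1); /* only now start transmission */

	return NETDEV_TX_OK;
}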
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+index b5986df6eca0b..1c192554209a3 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+@@ -1657,7 +1657,7 @@ mcp251xfd_register_get_dev_id(const struct mcp251xfd_priv *priv,
+ out_kfree_buf_rx:
+ kfree(buf_rx);
+
+- return 0;
++ return err;
+ }
+
+ #define MCP251XFD_QUIRK_ACTIVE(quirk) \
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 7bedceffdfa36..bbec3311d8934 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -819,7 +819,6 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
+
+ usb_unanchor_urb(urb);
+ usb_free_coherent(dev->udev, size, buf, urb->transfer_dma);
+- dev_kfree_skb(skb);
+
+ atomic_dec(&dev->active_tx_urbs);
+
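This deletion (and the matching one in mcba_usb below) removes a double free: after can_put_echo_skb() succeeds, the CAN core owns the skb, so an error path may only release it through can_free_echo_skb(); calling dev_kfree_skb() on top of that frees the same buffer twice. The ownership rule, sketched with an illustrative unwind helper:

/* Once can_put_echo_skb() has stored the skb for TX echo, the driver
 * must not free it directly any more; the error path drops it via
 * can_free_echo_skb() only.
 */
static void tx_error_unwind(struct net_device *netdev, unsigned int idx)
{
	can_free_echo_skb(netdev, idx, NULL);	/* releases the stored skb */
	/* no dev_kfree_skb() here -- that would free the skb twice */
}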
+diff --git a/drivers/net/can/usb/mcba_usb.c b/drivers/net/can/usb/mcba_usb.c
+index 77bddff86252b..c45a814e1de2f 100644
+--- a/drivers/net/can/usb/mcba_usb.c
++++ b/drivers/net/can/usb/mcba_usb.c
+@@ -33,10 +33,6 @@
+ #define MCBA_USB_RX_BUFF_SIZE 64
+ #define MCBA_USB_TX_BUFF_SIZE (sizeof(struct mcba_usb_msg))
+
+-/* MCBA endpoint numbers */
+-#define MCBA_USB_EP_IN 1
+-#define MCBA_USB_EP_OUT 1
+-
+ /* Microchip command id */
+ #define MBCA_CMD_RECEIVE_MESSAGE 0xE3
+ #define MBCA_CMD_I_AM_ALIVE_FROM_CAN 0xF5
+@@ -83,6 +79,8 @@ struct mcba_priv {
+ atomic_t free_ctx_cnt;
+ void *rxbuf[MCBA_MAX_RX_URBS];
+ dma_addr_t rxbuf_dma[MCBA_MAX_RX_URBS];
++ int rx_pipe;
++ int tx_pipe;
+ };
+
+ /* CAN frame */
+@@ -268,10 +266,8 @@ static netdev_tx_t mcba_usb_xmit(struct mcba_priv *priv,
+
+ memcpy(buf, usb_msg, MCBA_USB_TX_BUFF_SIZE);
+
+- usb_fill_bulk_urb(urb, priv->udev,
+- usb_sndbulkpipe(priv->udev, MCBA_USB_EP_OUT), buf,
+- MCBA_USB_TX_BUFF_SIZE, mcba_usb_write_bulk_callback,
+- ctx);
++ usb_fill_bulk_urb(urb, priv->udev, priv->tx_pipe, buf, MCBA_USB_TX_BUFF_SIZE,
++ mcba_usb_write_bulk_callback, ctx);
+
+ urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+ usb_anchor_urb(urb, &priv->tx_submitted);
+@@ -364,7 +360,6 @@ static netdev_tx_t mcba_usb_start_xmit(struct sk_buff *skb,
+ xmit_failed:
+ can_free_echo_skb(priv->netdev, ctx->ndx, NULL);
+ mcba_usb_free_ctx(ctx);
+- dev_kfree_skb(skb);
+ stats->tx_dropped++;
+
+ return NETDEV_TX_OK;
+@@ -608,7 +603,7 @@ static void mcba_usb_read_bulk_callback(struct urb *urb)
+ resubmit_urb:
+
+ usb_fill_bulk_urb(urb, priv->udev,
+- usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_OUT),
++ priv->rx_pipe,
+ urb->transfer_buffer, MCBA_USB_RX_BUFF_SIZE,
+ mcba_usb_read_bulk_callback, priv);
+
+@@ -653,7 +648,7 @@ static int mcba_usb_start(struct mcba_priv *priv)
+ urb->transfer_dma = buf_dma;
+
+ usb_fill_bulk_urb(urb, priv->udev,
+- usb_rcvbulkpipe(priv->udev, MCBA_USB_EP_IN),
++ priv->rx_pipe,
+ buf, MCBA_USB_RX_BUFF_SIZE,
+ mcba_usb_read_bulk_callback, priv);
+ urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
+@@ -807,6 +802,13 @@ static int mcba_usb_probe(struct usb_interface *intf,
+ struct mcba_priv *priv;
+ int err;
+ struct usb_device *usbdev = interface_to_usbdev(intf);
++ struct usb_endpoint_descriptor *in, *out;
++
++ err = usb_find_common_endpoints(intf->cur_altsetting, &in, &out, NULL, NULL);
++ if (err) {
++ dev_err(&intf->dev, "Can't find endpoints\n");
++ return err;
++ }
+
+ netdev = alloc_candev(sizeof(struct mcba_priv), MCBA_MAX_TX_URBS);
+ if (!netdev) {
+@@ -852,6 +854,9 @@ static int mcba_usb_probe(struct usb_interface *intf,
+ goto cleanup_free_candev;
+ }
+
++ priv->rx_pipe = usb_rcvbulkpipe(priv->udev, in->bEndpointAddress);
++ priv->tx_pipe = usb_sndbulkpipe(priv->udev, out->bEndpointAddress);
++
+ devm_can_led_init(netdev);
+
+ /* Start USB dev only if we have successfully registered CAN device */
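Besides the same double-free fix, the mcba_usb hunks stop hard-coding bulk endpoint numbers: probe now asks the USB core for the interface's actual bulk-in/bulk-out endpoints and derives the pipes from their addresses, which both fixes the copy-paste use of MCBA_USB_EP_OUT in the read path and rejects broken or malicious devices that lack the expected endpoints. The probe-time pattern, sketched as an illustrative function:

#include <linux/usb.h>

/* usb_find_common_endpoints() fails with -ENXIO unless one bulk-in
 * and one bulk-out endpoint exist, so the pipes are always built from
 * descriptors the device really exposes.
 */
static int probe_find_pipes(struct usb_interface *intf,
			    struct usb_device *udev,
			    int *rx_pipe, int *tx_pipe)
{
	struct usb_endpoint_descriptor *in, *out;
	int err;

	err = usb_find_common_endpoints(intf->cur_altsetting,
					&in, &out, NULL, NULL);
	if (err)
		return err;	/* no usable bulk endpoint pair */

	*rx_pipe = usb_rcvbulkpipe(udev, in->bEndpointAddress);
	*tx_pipe = usb_sndbulkpipe(udev, out->bEndpointAddress);

	return 0;
}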
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index 431af1ec1e3ca..b638604bf1eef 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -663,9 +663,20 @@ static netdev_tx_t usb_8dev_start_xmit(struct sk_buff *skb,
+ atomic_inc(&priv->active_tx_urbs);
+
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+- if (unlikely(err))
+- goto failed;
+- else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
++ if (unlikely(err)) {
++ can_free_echo_skb(netdev, context->echo_index, NULL);
++
++ usb_unanchor_urb(urb);
++ usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
++
++ atomic_dec(&priv->active_tx_urbs);
++
++ if (err == -ENODEV)
++ netif_device_detach(netdev);
++ else
++ netdev_warn(netdev, "failed tx_urb %d\n", err);
++ stats->tx_dropped++;
++ } else if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS)
+ /* Slow down tx path */
+ netif_stop_queue(netdev);
+
+@@ -684,19 +695,6 @@ nofreecontext:
+
+ return NETDEV_TX_BUSY;
+
+-failed:
+- can_free_echo_skb(netdev, context->echo_index, NULL);
+-
+- usb_unanchor_urb(urb);
+- usb_free_coherent(priv->udev, size, buf, urb->transfer_dma);
+-
+- atomic_dec(&priv->active_tx_urbs);
+-
+- if (err == -ENODEV)
+- netif_device_detach(netdev);
+- else
+- netdev_warn(netdev, "failed tx_urb %d\n", err);
+-
+ nomembuf:
+ usb_free_urb(urb);
+
+diff --git a/drivers/net/can/vxcan.c b/drivers/net/can/vxcan.c
+index 47ccc15a3486b..191ffa7776e8d 100644
+--- a/drivers/net/can/vxcan.c
++++ b/drivers/net/can/vxcan.c
+@@ -148,7 +148,7 @@ static void vxcan_setup(struct net_device *dev)
+ dev->hard_header_len = 0;
+ dev->addr_len = 0;
+ dev->tx_queue_len = 0;
+- dev->flags = (IFF_NOARP|IFF_ECHO);
++ dev->flags = IFF_NOARP;
+ dev->netdev_ops = &vxcan_netdev_ops;
+ dev->needs_free_netdev = true;
+
+diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
+index 0029d279616fd..37a3dabdce313 100644
+--- a/drivers/net/dsa/Kconfig
++++ b/drivers/net/dsa/Kconfig
+@@ -68,17 +68,7 @@ config NET_DSA_QCA8K
+ This enables support for the Qualcomm Atheros QCA8K Ethernet
+ switch chips.
+
+-config NET_DSA_REALTEK_SMI
+- tristate "Realtek SMI Ethernet switch family support"
+- select NET_DSA_TAG_RTL4_A
+- select NET_DSA_TAG_RTL8_4
+- select FIXED_PHY
+- select IRQ_DOMAIN
+- select REALTEK_PHY
+- select REGMAP
+- help
+- This enables support for the Realtek SMI-based switch
+- chips, currently only RTL8366RB.
++source "drivers/net/dsa/realtek/Kconfig"
+
+ config NET_DSA_SMSC_LAN9303
+ tristate
+diff --git a/drivers/net/dsa/Makefile b/drivers/net/dsa/Makefile
+index 8da1569a34e6e..e73838c122560 100644
+--- a/drivers/net/dsa/Makefile
++++ b/drivers/net/dsa/Makefile
+@@ -9,8 +9,6 @@ obj-$(CONFIG_NET_DSA_LANTIQ_GSWIP) += lantiq_gswip.o
+ obj-$(CONFIG_NET_DSA_MT7530) += mt7530.o
+ obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o
+ obj-$(CONFIG_NET_DSA_QCA8K) += qca8k.o
+-obj-$(CONFIG_NET_DSA_REALTEK_SMI) += realtek-smi.o
+-realtek-smi-objs := realtek-smi-core.o rtl8366.o rtl8366rb.o rtl8365mb.o
+ obj-$(CONFIG_NET_DSA_SMSC_LAN9303) += lan9303-core.o
+ obj-$(CONFIG_NET_DSA_SMSC_LAN9303_I2C) += lan9303_i2c.o
+ obj-$(CONFIG_NET_DSA_SMSC_LAN9303_MDIO) += lan9303_mdio.o
+@@ -23,5 +21,6 @@ obj-y += microchip/
+ obj-y += mv88e6xxx/
+ obj-y += ocelot/
+ obj-y += qca/
++obj-y += realtek/
+ obj-y += sja1105/
+ obj-y += xrs700x/
+diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c
+index a7e2fcf2df2c9..edbe5e7f1cb6b 100644
+--- a/drivers/net/dsa/bcm_sf2_cfp.c
++++ b/drivers/net/dsa/bcm_sf2_cfp.c
+@@ -567,14 +567,14 @@ static void bcm_sf2_cfp_slice_ipv6(struct bcm_sf2_priv *priv,
+ static struct cfp_rule *bcm_sf2_cfp_rule_find(struct bcm_sf2_priv *priv,
+ int port, u32 location)
+ {
+- struct cfp_rule *rule = NULL;
++ struct cfp_rule *rule;
+
+ list_for_each_entry(rule, &priv->cfp.rules_list, next) {
+ if (rule->port == port && rule->fs.location == location)
+- break;
++ return rule;
+ }
+
+- return rule;
++ return NULL;
+ }
+
+ static int bcm_sf2_cfp_rule_cmp(struct bcm_sf2_priv *priv, int port,
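The bcm_sf2_cfp_rule_find() rewrite closes a classic list_for_each_entry() pitfall: when the loop runs to completion the cursor does not end up NULL, it points at offset-adjusted memory computed from the list head, so returning it after the loop hands back a bogus entry. Returning from inside the loop on a match, and NULL afterwards, is the safe shape; types here are illustrative stand-ins:

#include <linux/list.h>
#include <linux/types.h>

struct cfp_rule_sketch {
	struct list_head next;
	int port;
	u32 location;
};

/* The iterator variable is only meaningful inside the loop body;
 * after exhaustion it must not be dereferenced or returned.
 */
static struct cfp_rule_sketch *rule_find(struct list_head *rules,
					 int port, u32 location)
{
	struct cfp_rule_sketch *rule;

	list_for_each_entry(rule, rules, next) {
		if (rule->port == port && rule->location == location)
			return rule;
	}

	return NULL;			/* never the exhausted cursor */
}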
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index ab1676553714c..cf7754dddad78 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3639,6 +3639,7 @@ static const struct mv88e6xxx_ops mv88e6097_ops = {
+ .port_sync_link = mv88e6185_port_sync_link,
+ .port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
+ .port_tag_remap = mv88e6095_port_tag_remap,
++ .port_set_policy = mv88e6352_port_set_policy,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_ucast_flood = mv88e6352_port_set_ucast_flood,
+ .port_set_mcast_flood = mv88e6352_port_set_mcast_flood,
+diff --git a/drivers/net/dsa/realtek-smi-core.c b/drivers/net/dsa/realtek-smi-core.c
+deleted file mode 100644
+index aae46ada8d839..0000000000000
+--- a/drivers/net/dsa/realtek-smi-core.c
++++ /dev/null
+@@ -1,523 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/* Realtek Simple Management Interface (SMI) driver
+- * It can be discussed how "simple" this interface is.
+- *
+- * The SMI protocol piggy-backs on the MDIO MDC and MDIO signal levels,
+- * but the protocol is not MDIO at all. Instead it is a Realtek
+- * peculiarity that needs to bit-bang the lines in a special way to
+- * communicate with the switch.
+- *
+- * ASICs we intend to support with this driver:
+- *
+- * RTL8366 - The original version, apparently
+- * RTL8369 - Similar enough to have the same datasheet as RTL8366
+- * RTL8366RB - Probably reads out "RTL8366 revision B", has a quite
+- * different register layout from the other two
+- * RTL8366S - Is this "RTL8366 super"?
+- * RTL8367 - Has an OpenWRT driver as well
+- * RTL8368S - Seems to be an alternative name for RTL8366RB
+- * RTL8370 - Also uses SMI
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
+- * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
+- * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- */
+-
+-#include <linux/kernel.h>
+-#include <linux/module.h>
+-#include <linux/device.h>
+-#include <linux/spinlock.h>
+-#include <linux/skbuff.h>
+-#include <linux/of.h>
+-#include <linux/of_device.h>
+-#include <linux/of_mdio.h>
+-#include <linux/delay.h>
+-#include <linux/gpio/consumer.h>
+-#include <linux/platform_device.h>
+-#include <linux/regmap.h>
+-#include <linux/bitops.h>
+-#include <linux/if_bridge.h>
+-
+-#include "realtek-smi-core.h"
+-
+-#define REALTEK_SMI_ACK_RETRY_COUNT 5
+-#define REALTEK_SMI_HW_STOP_DELAY 25 /* msecs */
+-#define REALTEK_SMI_HW_START_DELAY 100 /* msecs */
+-
+-static inline void realtek_smi_clk_delay(struct realtek_smi *smi)
+-{
+- ndelay(smi->clk_delay);
+-}
+-
+-static void realtek_smi_start(struct realtek_smi *smi)
+-{
+- /* Set GPIO pins to output mode, with initial state:
+- * SCK = 0, SDA = 1
+- */
+- gpiod_direction_output(smi->mdc, 0);
+- gpiod_direction_output(smi->mdio, 1);
+- realtek_smi_clk_delay(smi);
+-
+- /* CLK 1: 0 -> 1, 1 -> 0 */
+- gpiod_set_value(smi->mdc, 1);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 0);
+- realtek_smi_clk_delay(smi);
+-
+- /* CLK 2: */
+- gpiod_set_value(smi->mdc, 1);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdio, 0);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 0);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdio, 1);
+-}
+-
+-static void realtek_smi_stop(struct realtek_smi *smi)
+-{
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdio, 0);
+- gpiod_set_value(smi->mdc, 1);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdio, 1);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 1);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 0);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 1);
+-
+- /* Add a click */
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 0);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 1);
+-
+- /* Set GPIO pins to input mode */
+- gpiod_direction_input(smi->mdio);
+- gpiod_direction_input(smi->mdc);
+-}
+-
+-static void realtek_smi_write_bits(struct realtek_smi *smi, u32 data, u32 len)
+-{
+- for (; len > 0; len--) {
+- realtek_smi_clk_delay(smi);
+-
+- /* Prepare data */
+- gpiod_set_value(smi->mdio, !!(data & (1 << (len - 1))));
+- realtek_smi_clk_delay(smi);
+-
+- /* Clocking */
+- gpiod_set_value(smi->mdc, 1);
+- realtek_smi_clk_delay(smi);
+- gpiod_set_value(smi->mdc, 0);
+- }
+-}
+-
+-static void realtek_smi_read_bits(struct realtek_smi *smi, u32 len, u32 *data)
+-{
+- gpiod_direction_input(smi->mdio);
+-
+- for (*data = 0; len > 0; len--) {
+- u32 u;
+-
+- realtek_smi_clk_delay(smi);
+-
+- /* Clocking */
+- gpiod_set_value(smi->mdc, 1);
+- realtek_smi_clk_delay(smi);
+- u = !!gpiod_get_value(smi->mdio);
+- gpiod_set_value(smi->mdc, 0);
+-
+- *data |= (u << (len - 1));
+- }
+-
+- gpiod_direction_output(smi->mdio, 0);
+-}
+-
+-static int realtek_smi_wait_for_ack(struct realtek_smi *smi)
+-{
+- int retry_cnt;
+-
+- retry_cnt = 0;
+- do {
+- u32 ack;
+-
+- realtek_smi_read_bits(smi, 1, &ack);
+- if (ack == 0)
+- break;
+-
+- if (++retry_cnt > REALTEK_SMI_ACK_RETRY_COUNT) {
+- dev_err(smi->dev, "ACK timeout\n");
+- return -ETIMEDOUT;
+- }
+- } while (1);
+-
+- return 0;
+-}
+-
+-static int realtek_smi_write_byte(struct realtek_smi *smi, u8 data)
+-{
+- realtek_smi_write_bits(smi, data, 8);
+- return realtek_smi_wait_for_ack(smi);
+-}
+-
+-static int realtek_smi_write_byte_noack(struct realtek_smi *smi, u8 data)
+-{
+- realtek_smi_write_bits(smi, data, 8);
+- return 0;
+-}
+-
+-static int realtek_smi_read_byte0(struct realtek_smi *smi, u8 *data)
+-{
+- u32 t;
+-
+- /* Read data */
+- realtek_smi_read_bits(smi, 8, &t);
+- *data = (t & 0xff);
+-
+- /* Send an ACK */
+- realtek_smi_write_bits(smi, 0x00, 1);
+-
+- return 0;
+-}
+-
+-static int realtek_smi_read_byte1(struct realtek_smi *smi, u8 *data)
+-{
+- u32 t;
+-
+- /* Read data */
+- realtek_smi_read_bits(smi, 8, &t);
+- *data = (t & 0xff);
+-
+- /* Send an ACK */
+- realtek_smi_write_bits(smi, 0x01, 1);
+-
+- return 0;
+-}
+-
+-static int realtek_smi_read_reg(struct realtek_smi *smi, u32 addr, u32 *data)
+-{
+- unsigned long flags;
+- u8 lo = 0;
+- u8 hi = 0;
+- int ret;
+-
+- spin_lock_irqsave(&smi->lock, flags);
+-
+- realtek_smi_start(smi);
+-
+- /* Send READ command */
+- ret = realtek_smi_write_byte(smi, smi->cmd_read);
+- if (ret)
+- goto out;
+-
+- /* Set ADDR[7:0] */
+- ret = realtek_smi_write_byte(smi, addr & 0xff);
+- if (ret)
+- goto out;
+-
+- /* Set ADDR[15:8] */
+- ret = realtek_smi_write_byte(smi, addr >> 8);
+- if (ret)
+- goto out;
+-
+- /* Read DATA[7:0] */
+- realtek_smi_read_byte0(smi, &lo);
+- /* Read DATA[15:8] */
+- realtek_smi_read_byte1(smi, &hi);
+-
+- *data = ((u32)lo) | (((u32)hi) << 8);
+-
+- ret = 0;
+-
+- out:
+- realtek_smi_stop(smi);
+- spin_unlock_irqrestore(&smi->lock, flags);
+-
+- return ret;
+-}
+-
+-static int realtek_smi_write_reg(struct realtek_smi *smi,
+- u32 addr, u32 data, bool ack)
+-{
+- unsigned long flags;
+- int ret;
+-
+- spin_lock_irqsave(&smi->lock, flags);
+-
+- realtek_smi_start(smi);
+-
+- /* Send WRITE command */
+- ret = realtek_smi_write_byte(smi, smi->cmd_write);
+- if (ret)
+- goto out;
+-
+- /* Set ADDR[7:0] */
+- ret = realtek_smi_write_byte(smi, addr & 0xff);
+- if (ret)
+- goto out;
+-
+- /* Set ADDR[15:8] */
+- ret = realtek_smi_write_byte(smi, addr >> 8);
+- if (ret)
+- goto out;
+-
+- /* Write DATA[7:0] */
+- ret = realtek_smi_write_byte(smi, data & 0xff);
+- if (ret)
+- goto out;
+-
+- /* Write DATA[15:8] */
+- if (ack)
+- ret = realtek_smi_write_byte(smi, data >> 8);
+- else
+- ret = realtek_smi_write_byte_noack(smi, data >> 8);
+- if (ret)
+- goto out;
+-
+- ret = 0;
+-
+- out:
+- realtek_smi_stop(smi);
+- spin_unlock_irqrestore(&smi->lock, flags);
+-
+- return ret;
+-}
+-
+-/* There is one single case when we need to use this accessor and that
+- * is when issuing a soft reset. Since the device resets as soon as we write
+- * that bit, no ACK will come back for natural reasons.
+- */
+-int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
+- u32 data)
+-{
+- return realtek_smi_write_reg(smi, addr, data, false);
+-}
+-EXPORT_SYMBOL_GPL(realtek_smi_write_reg_noack);
+-
+-/* Regmap accessors */
+-
+-static int realtek_smi_write(void *ctx, u32 reg, u32 val)
+-{
+- struct realtek_smi *smi = ctx;
+-
+- return realtek_smi_write_reg(smi, reg, val, true);
+-}
+-
+-static int realtek_smi_read(void *ctx, u32 reg, u32 *val)
+-{
+- struct realtek_smi *smi = ctx;
+-
+- return realtek_smi_read_reg(smi, reg, val);
+-}
+-
+-static const struct regmap_config realtek_smi_mdio_regmap_config = {
+- .reg_bits = 10, /* A4..A0 R4..R0 */
+- .val_bits = 16,
+- .reg_stride = 1,
+- /* PHY regs are at 0x8000 */
+- .max_register = 0xffff,
+- .reg_format_endian = REGMAP_ENDIAN_BIG,
+- .reg_read = realtek_smi_read,
+- .reg_write = realtek_smi_write,
+- .cache_type = REGCACHE_NONE,
+-};
+-
+-static int realtek_smi_mdio_read(struct mii_bus *bus, int addr, int regnum)
+-{
+- struct realtek_smi *smi = bus->priv;
+-
+- return smi->ops->phy_read(smi, addr, regnum);
+-}
+-
+-static int realtek_smi_mdio_write(struct mii_bus *bus, int addr, int regnum,
+- u16 val)
+-{
+- struct realtek_smi *smi = bus->priv;
+-
+- return smi->ops->phy_write(smi, addr, regnum, val);
+-}
+-
+-int realtek_smi_setup_mdio(struct realtek_smi *smi)
+-{
+- struct device_node *mdio_np;
+- int ret;
+-
+- mdio_np = of_get_compatible_child(smi->dev->of_node, "realtek,smi-mdio");
+- if (!mdio_np) {
+- dev_err(smi->dev, "no MDIO bus node\n");
+- return -ENODEV;
+- }
+-
+- smi->slave_mii_bus = devm_mdiobus_alloc(smi->dev);
+- if (!smi->slave_mii_bus) {
+- ret = -ENOMEM;
+- goto err_put_node;
+- }
+- smi->slave_mii_bus->priv = smi;
+- smi->slave_mii_bus->name = "SMI slave MII";
+- smi->slave_mii_bus->read = realtek_smi_mdio_read;
+- smi->slave_mii_bus->write = realtek_smi_mdio_write;
+- snprintf(smi->slave_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d",
+- smi->ds->index);
+- smi->slave_mii_bus->dev.of_node = mdio_np;
+- smi->slave_mii_bus->parent = smi->dev;
+- smi->ds->slave_mii_bus = smi->slave_mii_bus;
+-
+- ret = devm_of_mdiobus_register(smi->dev, smi->slave_mii_bus, mdio_np);
+- if (ret) {
+- dev_err(smi->dev, "unable to register MDIO bus %s\n",
+- smi->slave_mii_bus->id);
+- goto err_put_node;
+- }
+-
+- return 0;
+-
+-err_put_node:
+- of_node_put(mdio_np);
+-
+- return ret;
+-}
+-
+-static int realtek_smi_probe(struct platform_device *pdev)
+-{
+- const struct realtek_smi_variant *var;
+- struct device *dev = &pdev->dev;
+- struct realtek_smi *smi;
+- struct device_node *np;
+- int ret;
+-
+- var = of_device_get_match_data(dev);
+- np = dev->of_node;
+-
+- smi = devm_kzalloc(dev, sizeof(*smi) + var->chip_data_sz, GFP_KERNEL);
+- if (!smi)
+- return -ENOMEM;
+- smi->chip_data = (void *)smi + sizeof(*smi);
+- smi->map = devm_regmap_init(dev, NULL, smi,
+- &realtek_smi_mdio_regmap_config);
+- if (IS_ERR(smi->map)) {
+- ret = PTR_ERR(smi->map);
+- dev_err(dev, "regmap init failed: %d\n", ret);
+- return ret;
+- }
+-
+- /* Link forward and backward */
+- smi->dev = dev;
+- smi->clk_delay = var->clk_delay;
+- smi->cmd_read = var->cmd_read;
+- smi->cmd_write = var->cmd_write;
+- smi->ops = var->ops;
+-
+- dev_set_drvdata(dev, smi);
+- spin_lock_init(&smi->lock);
+-
+- /* TODO: if power is software controlled, set up any regulators here */
+-
+- /* Assert then deassert RESET */
+- smi->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
+- if (IS_ERR(smi->reset)) {
+- dev_err(dev, "failed to get RESET GPIO\n");
+- return PTR_ERR(smi->reset);
+- }
+- msleep(REALTEK_SMI_HW_STOP_DELAY);
+- gpiod_set_value(smi->reset, 0);
+- msleep(REALTEK_SMI_HW_START_DELAY);
+- dev_info(dev, "deasserted RESET\n");
+-
+- /* Fetch MDIO pins */
+- smi->mdc = devm_gpiod_get_optional(dev, "mdc", GPIOD_OUT_LOW);
+- if (IS_ERR(smi->mdc))
+- return PTR_ERR(smi->mdc);
+- smi->mdio = devm_gpiod_get_optional(dev, "mdio", GPIOD_OUT_LOW);
+- if (IS_ERR(smi->mdio))
+- return PTR_ERR(smi->mdio);
+-
+- smi->leds_disabled = of_property_read_bool(np, "realtek,disable-leds");
+-
+- ret = smi->ops->detect(smi);
+- if (ret) {
+- dev_err(dev, "unable to detect switch\n");
+- return ret;
+- }
+-
+- smi->ds = devm_kzalloc(dev, sizeof(*smi->ds), GFP_KERNEL);
+- if (!smi->ds)
+- return -ENOMEM;
+-
+- smi->ds->dev = dev;
+- smi->ds->num_ports = smi->num_ports;
+- smi->ds->priv = smi;
+-
+- smi->ds->ops = var->ds_ops;
+- ret = dsa_register_switch(smi->ds);
+- if (ret) {
+- dev_err_probe(dev, ret, "unable to register switch\n");
+- return ret;
+- }
+- return 0;
+-}
+-
+-static int realtek_smi_remove(struct platform_device *pdev)
+-{
+- struct realtek_smi *smi = platform_get_drvdata(pdev);
+-
+- if (!smi)
+- return 0;
+-
+- dsa_unregister_switch(smi->ds);
+- if (smi->slave_mii_bus)
+- of_node_put(smi->slave_mii_bus->dev.of_node);
+- gpiod_set_value(smi->reset, 1);
+-
+- platform_set_drvdata(pdev, NULL);
+-
+- return 0;
+-}
+-
+-static void realtek_smi_shutdown(struct platform_device *pdev)
+-{
+- struct realtek_smi *smi = platform_get_drvdata(pdev);
+-
+- if (!smi)
+- return;
+-
+- dsa_switch_shutdown(smi->ds);
+-
+- platform_set_drvdata(pdev, NULL);
+-}
+-
+-static const struct of_device_id realtek_smi_of_match[] = {
+- {
+- .compatible = "realtek,rtl8366rb",
+- .data = &rtl8366rb_variant,
+- },
+- {
+- /* FIXME: add support for RTL8366S and more */
+- .compatible = "realtek,rtl8366s",
+- .data = NULL,
+- },
+- {
+- .compatible = "realtek,rtl8365mb",
+- .data = &rtl8365mb_variant,
+- },
+- { /* sentinel */ },
+-};
+-MODULE_DEVICE_TABLE(of, realtek_smi_of_match);
+-
+-static struct platform_driver realtek_smi_driver = {
+- .driver = {
+- .name = "realtek-smi",
+- .of_match_table = of_match_ptr(realtek_smi_of_match),
+- },
+- .probe = realtek_smi_probe,
+- .remove = realtek_smi_remove,
+- .shutdown = realtek_smi_shutdown,
+-};
+-module_platform_driver(realtek_smi_driver);
+-
+-MODULE_LICENSE("GPL");
+diff --git a/drivers/net/dsa/realtek-smi-core.h b/drivers/net/dsa/realtek-smi-core.h
+deleted file mode 100644
+index 5bfa53e2480ae..0000000000000
+--- a/drivers/net/dsa/realtek-smi-core.h
++++ /dev/null
+@@ -1,145 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0+ */
+-/* Realtek SMI interface driver defines
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- */
+-
+-#ifndef _REALTEK_SMI_H
+-#define _REALTEK_SMI_H
+-
+-#include <linux/phy.h>
+-#include <linux/platform_device.h>
+-#include <linux/gpio/consumer.h>
+-#include <net/dsa.h>
+-
+-struct realtek_smi_ops;
+-struct dentry;
+-struct inode;
+-struct file;
+-
+-struct rtl8366_mib_counter {
+- unsigned int base;
+- unsigned int offset;
+- unsigned int length;
+- const char *name;
+-};
+-
+-/**
+- * struct rtl8366_vlan_mc - Virtual LAN member configuration
+- */
+-struct rtl8366_vlan_mc {
+- u16 vid;
+- u16 untag;
+- u16 member;
+- u8 fid;
+- u8 priority;
+-};
+-
+-struct rtl8366_vlan_4k {
+- u16 vid;
+- u16 untag;
+- u16 member;
+- u8 fid;
+-};
+-
+-struct realtek_smi {
+- struct device *dev;
+- struct gpio_desc *reset;
+- struct gpio_desc *mdc;
+- struct gpio_desc *mdio;
+- struct regmap *map;
+- struct mii_bus *slave_mii_bus;
+-
+- unsigned int clk_delay;
+- u8 cmd_read;
+- u8 cmd_write;
+- spinlock_t lock; /* Locks around command writes */
+- struct dsa_switch *ds;
+- struct irq_domain *irqdomain;
+- bool leds_disabled;
+-
+- unsigned int cpu_port;
+- unsigned int num_ports;
+- unsigned int num_vlan_mc;
+- unsigned int num_mib_counters;
+- struct rtl8366_mib_counter *mib_counters;
+-
+- const struct realtek_smi_ops *ops;
+-
+- int vlan_enabled;
+- int vlan4k_enabled;
+-
+- char buf[4096];
+- void *chip_data; /* Per-chip extra variant data */
+-};
+-
+-/**
+- * struct realtek_smi_ops - vtable for the per-SMI-chiptype operations
+- * @detect: detects the chiptype
+- */
+-struct realtek_smi_ops {
+- int (*detect)(struct realtek_smi *smi);
+- int (*reset_chip)(struct realtek_smi *smi);
+- int (*setup)(struct realtek_smi *smi);
+- void (*cleanup)(struct realtek_smi *smi);
+- int (*get_mib_counter)(struct realtek_smi *smi,
+- int port,
+- struct rtl8366_mib_counter *mib,
+- u64 *mibvalue);
+- int (*get_vlan_mc)(struct realtek_smi *smi, u32 index,
+- struct rtl8366_vlan_mc *vlanmc);
+- int (*set_vlan_mc)(struct realtek_smi *smi, u32 index,
+- const struct rtl8366_vlan_mc *vlanmc);
+- int (*get_vlan_4k)(struct realtek_smi *smi, u32 vid,
+- struct rtl8366_vlan_4k *vlan4k);
+- int (*set_vlan_4k)(struct realtek_smi *smi,
+- const struct rtl8366_vlan_4k *vlan4k);
+- int (*get_mc_index)(struct realtek_smi *smi, int port, int *val);
+- int (*set_mc_index)(struct realtek_smi *smi, int port, int index);
+- bool (*is_vlan_valid)(struct realtek_smi *smi, unsigned int vlan);
+- int (*enable_vlan)(struct realtek_smi *smi, bool enable);
+- int (*enable_vlan4k)(struct realtek_smi *smi, bool enable);
+- int (*enable_port)(struct realtek_smi *smi, int port, bool enable);
+- int (*phy_read)(struct realtek_smi *smi, int phy, int regnum);
+- int (*phy_write)(struct realtek_smi *smi, int phy, int regnum,
+- u16 val);
+-};
+-
+-struct realtek_smi_variant {
+- const struct dsa_switch_ops *ds_ops;
+- const struct realtek_smi_ops *ops;
+- unsigned int clk_delay;
+- u8 cmd_read;
+- u8 cmd_write;
+- size_t chip_data_sz;
+-};
+-
+-/* SMI core calls */
+-int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
+- u32 data);
+-int realtek_smi_setup_mdio(struct realtek_smi *smi);
+-
+-/* RTL8366 library helpers */
+-int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used);
+-int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+- u32 untag, u32 fid);
+-int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
+- unsigned int vid);
+-int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable);
+-int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable);
+-int rtl8366_reset_vlan(struct realtek_smi *smi);
+-int rtl8366_vlan_add(struct dsa_switch *ds, int port,
+- const struct switchdev_obj_port_vlan *vlan,
+- struct netlink_ext_ack *extack);
+-int rtl8366_vlan_del(struct dsa_switch *ds, int port,
+- const struct switchdev_obj_port_vlan *vlan);
+-void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+- uint8_t *data);
+-int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset);
+-void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data);
+-
+-extern const struct realtek_smi_variant rtl8366rb_variant;
+-extern const struct realtek_smi_variant rtl8365mb_variant;
+-
+-#endif /* _REALTEK_SMI_H */
+diff --git a/drivers/net/dsa/realtek/Kconfig b/drivers/net/dsa/realtek/Kconfig
+new file mode 100644
+index 0000000000000..1c62212fb0ecb
+--- /dev/null
++++ b/drivers/net/dsa/realtek/Kconfig
+@@ -0,0 +1,20 @@
++# SPDX-License-Identifier: GPL-2.0-only
++menuconfig NET_DSA_REALTEK
++ tristate "Realtek Ethernet switch family support"
++ depends on NET_DSA
++ select NET_DSA_TAG_RTL4_A
++ select NET_DSA_TAG_RTL8_4
++ select FIXED_PHY
++ select IRQ_DOMAIN
++ select REALTEK_PHY
++ select REGMAP
++ help
++ Select to enable support for Realtek Ethernet switch chips.
++
++config NET_DSA_REALTEK_SMI
++ tristate "Realtek SMI connected switch driver"
++ depends on NET_DSA_REALTEK
++ default y
++ help
++ Select to enable support for registering switches connected
++ through SMI.
+diff --git a/drivers/net/dsa/realtek/Makefile b/drivers/net/dsa/realtek/Makefile
+new file mode 100644
+index 0000000000000..323b921bfce0f
+--- /dev/null
++++ b/drivers/net/dsa/realtek/Makefile
+@@ -0,0 +1,3 @@
++# SPDX-License-Identifier: GPL-2.0
++obj-$(CONFIG_NET_DSA_REALTEK_SMI) += realtek-smi.o
++realtek-smi-objs := realtek-smi-core.o rtl8366.o rtl8366rb.o rtl8365mb.o
+diff --git a/drivers/net/dsa/realtek/realtek-smi-core.c b/drivers/net/dsa/realtek/realtek-smi-core.c
+new file mode 100644
+index 0000000000000..aae46ada8d839
+--- /dev/null
++++ b/drivers/net/dsa/realtek/realtek-smi-core.c
+@@ -0,0 +1,523 @@
++// SPDX-License-Identifier: GPL-2.0+
++/* Realtek Simple Management Interface (SMI) driver
++ * It can be discussed how "simple" this interface is.
++ *
++ * The SMI protocol piggy-backs on the MDIO MDC and MDIO signal levels,
++ * but the protocol is not MDIO at all. Instead it is a Realtek
++ * peculiarity that needs to bit-bang the lines in a special way to
++ * communicate with the switch.
++ *
++ * ASICs we intend to support with this driver:
++ *
++ * RTL8366 - The original version, apparently
++ * RTL8369 - Similar enough to have the same datasheet as RTL8366
++ * RTL8366RB - Probably reads out "RTL8366 revision B", has a quite
++ * different register layout from the other two
++ * RTL8366S - Is this "RTL8366 super"?
++ * RTL8367 - Has an OpenWRT driver as well
++ * RTL8368S - Seems to be an alternative name for RTL8366RB
++ * RTL8370 - Also uses SMI
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
++ * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
++ * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ */
++
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/device.h>
++#include <linux/spinlock.h>
++#include <linux/skbuff.h>
++#include <linux/of.h>
++#include <linux/of_device.h>
++#include <linux/of_mdio.h>
++#include <linux/delay.h>
++#include <linux/gpio/consumer.h>
++#include <linux/platform_device.h>
++#include <linux/regmap.h>
++#include <linux/bitops.h>
++#include <linux/if_bridge.h>
++
++#include "realtek-smi-core.h"
++
++#define REALTEK_SMI_ACK_RETRY_COUNT 5
++#define REALTEK_SMI_HW_STOP_DELAY 25 /* msecs */
++#define REALTEK_SMI_HW_START_DELAY 100 /* msecs */
++
++static inline void realtek_smi_clk_delay(struct realtek_smi *smi)
++{
++ ndelay(smi->clk_delay);
++}
++
++static void realtek_smi_start(struct realtek_smi *smi)
++{
++ /* Set GPIO pins to output mode, with initial state:
++ * SCK = 0, SDA = 1
++ */
++ gpiod_direction_output(smi->mdc, 0);
++ gpiod_direction_output(smi->mdio, 1);
++ realtek_smi_clk_delay(smi);
++
++ /* CLK 1: 0 -> 1, 1 -> 0 */
++ gpiod_set_value(smi->mdc, 1);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 0);
++ realtek_smi_clk_delay(smi);
++
++ /* CLK 2: */
++ gpiod_set_value(smi->mdc, 1);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdio, 0);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 0);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdio, 1);
++}
++
++static void realtek_smi_stop(struct realtek_smi *smi)
++{
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdio, 0);
++ gpiod_set_value(smi->mdc, 1);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdio, 1);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 1);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 0);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 1);
++
++ /* Add a click */
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 0);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 1);
++
++ /* Set GPIO pins to input mode */
++ gpiod_direction_input(smi->mdio);
++ gpiod_direction_input(smi->mdc);
++}
++
++static void realtek_smi_write_bits(struct realtek_smi *smi, u32 data, u32 len)
++{
++ for (; len > 0; len--) {
++ realtek_smi_clk_delay(smi);
++
++ /* Prepare data */
++ gpiod_set_value(smi->mdio, !!(data & (1 << (len - 1))));
++ realtek_smi_clk_delay(smi);
++
++ /* Clocking */
++ gpiod_set_value(smi->mdc, 1);
++ realtek_smi_clk_delay(smi);
++ gpiod_set_value(smi->mdc, 0);
++ }
++}
++
++static void realtek_smi_read_bits(struct realtek_smi *smi, u32 len, u32 *data)
++{
++ gpiod_direction_input(smi->mdio);
++
++ for (*data = 0; len > 0; len--) {
++ u32 u;
++
++ realtek_smi_clk_delay(smi);
++
++ /* Clocking */
++ gpiod_set_value(smi->mdc, 1);
++ realtek_smi_clk_delay(smi);
++ u = !!gpiod_get_value(smi->mdio);
++ gpiod_set_value(smi->mdc, 0);
++
++ *data |= (u << (len - 1));
++ }
++
++ gpiod_direction_output(smi->mdio, 0);
++}
++
++static int realtek_smi_wait_for_ack(struct realtek_smi *smi)
++{
++ int retry_cnt;
++
++ retry_cnt = 0;
++ do {
++ u32 ack;
++
++ realtek_smi_read_bits(smi, 1, &ack);
++ if (ack == 0)
++ break;
++
++ if (++retry_cnt > REALTEK_SMI_ACK_RETRY_COUNT) {
++ dev_err(smi->dev, "ACK timeout\n");
++ return -ETIMEDOUT;
++ }
++ } while (1);
++
++ return 0;
++}
++
++static int realtek_smi_write_byte(struct realtek_smi *smi, u8 data)
++{
++ realtek_smi_write_bits(smi, data, 8);
++ return realtek_smi_wait_for_ack(smi);
++}
++
++static int realtek_smi_write_byte_noack(struct realtek_smi *smi, u8 data)
++{
++ realtek_smi_write_bits(smi, data, 8);
++ return 0;
++}
++
++static int realtek_smi_read_byte0(struct realtek_smi *smi, u8 *data)
++{
++ u32 t;
++
++ /* Read data */
++ realtek_smi_read_bits(smi, 8, &t);
++ *data = (t & 0xff);
++
++ /* Send an ACK */
++ realtek_smi_write_bits(smi, 0x00, 1);
++
++ return 0;
++}
++
++static int realtek_smi_read_byte1(struct realtek_smi *smi, u8 *data)
++{
++ u32 t;
++
++ /* Read data */
++ realtek_smi_read_bits(smi, 8, &t);
++ *data = (t & 0xff);
++
++ /* Send an ACK */
++ realtek_smi_write_bits(smi, 0x01, 1);
++
++ return 0;
++}
++
++static int realtek_smi_read_reg(struct realtek_smi *smi, u32 addr, u32 *data)
++{
++ unsigned long flags;
++ u8 lo = 0;
++ u8 hi = 0;
++ int ret;
++
++ spin_lock_irqsave(&smi->lock, flags);
++
++ realtek_smi_start(smi);
++
++ /* Send READ command */
++ ret = realtek_smi_write_byte(smi, smi->cmd_read);
++ if (ret)
++ goto out;
++
++ /* Set ADDR[7:0] */
++ ret = realtek_smi_write_byte(smi, addr & 0xff);
++ if (ret)
++ goto out;
++
++ /* Set ADDR[15:8] */
++ ret = realtek_smi_write_byte(smi, addr >> 8);
++ if (ret)
++ goto out;
++
++ /* Read DATA[7:0] */
++ realtek_smi_read_byte0(smi, &lo);
++ /* Read DATA[15:8] */
++ realtek_smi_read_byte1(smi, &hi);
++
++ *data = ((u32)lo) | (((u32)hi) << 8);
++
++ ret = 0;
++
++ out:
++ realtek_smi_stop(smi);
++ spin_unlock_irqrestore(&smi->lock, flags);
++
++ return ret;
++}
++
++static int realtek_smi_write_reg(struct realtek_smi *smi,
++ u32 addr, u32 data, bool ack)
++{
++ unsigned long flags;
++ int ret;
++
++ spin_lock_irqsave(&smi->lock, flags);
++
++ realtek_smi_start(smi);
++
++ /* Send WRITE command */
++ ret = realtek_smi_write_byte(smi, smi->cmd_write);
++ if (ret)
++ goto out;
++
++ /* Set ADDR[7:0] */
++ ret = realtek_smi_write_byte(smi, addr & 0xff);
++ if (ret)
++ goto out;
++
++ /* Set ADDR[15:8] */
++ ret = realtek_smi_write_byte(smi, addr >> 8);
++ if (ret)
++ goto out;
++
++ /* Write DATA[7:0] */
++ ret = realtek_smi_write_byte(smi, data & 0xff);
++ if (ret)
++ goto out;
++
++ /* Write DATA[15:8] */
++ if (ack)
++ ret = realtek_smi_write_byte(smi, data >> 8);
++ else
++ ret = realtek_smi_write_byte_noack(smi, data >> 8);
++ if (ret)
++ goto out;
++
++ ret = 0;
++
++ out:
++ realtek_smi_stop(smi);
++ spin_unlock_irqrestore(&smi->lock, flags);
++
++ return ret;
++}
++
++/* There is one single case when we need to use this accessor and that
++ * is when issuing a soft reset. Since the device resets as soon as we write
++ * that bit, no ACK will come back for natural reasons.
++ */
++int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
++ u32 data)
++{
++ return realtek_smi_write_reg(smi, addr, data, false);
++}
++EXPORT_SYMBOL_GPL(realtek_smi_write_reg_noack);
++
++/* Regmap accessors */
++
++static int realtek_smi_write(void *ctx, u32 reg, u32 val)
++{
++ struct realtek_smi *smi = ctx;
++
++ return realtek_smi_write_reg(smi, reg, val, true);
++}
++
++static int realtek_smi_read(void *ctx, u32 reg, u32 *val)
++{
++ struct realtek_smi *smi = ctx;
++
++ return realtek_smi_read_reg(smi, reg, val);
++}
++
++static const struct regmap_config realtek_smi_mdio_regmap_config = {
++ .reg_bits = 10, /* A4..A0 R4..R0 */
++ .val_bits = 16,
++ .reg_stride = 1,
++ /* PHY regs are at 0x8000 */
++ .max_register = 0xffff,
++ .reg_format_endian = REGMAP_ENDIAN_BIG,
++ .reg_read = realtek_smi_read,
++ .reg_write = realtek_smi_write,
++ .cache_type = REGCACHE_NONE,
++};
++
++static int realtek_smi_mdio_read(struct mii_bus *bus, int addr, int regnum)
++{
++ struct realtek_smi *smi = bus->priv;
++
++ return smi->ops->phy_read(smi, addr, regnum);
++}
++
++static int realtek_smi_mdio_write(struct mii_bus *bus, int addr, int regnum,
++ u16 val)
++{
++ struct realtek_smi *smi = bus->priv;
++
++ return smi->ops->phy_write(smi, addr, regnum, val);
++}
++
++int realtek_smi_setup_mdio(struct realtek_smi *smi)
++{
++ struct device_node *mdio_np;
++ int ret;
++
++ mdio_np = of_get_compatible_child(smi->dev->of_node, "realtek,smi-mdio");
++ if (!mdio_np) {
++ dev_err(smi->dev, "no MDIO bus node\n");
++ return -ENODEV;
++ }
++
++ smi->slave_mii_bus = devm_mdiobus_alloc(smi->dev);
++ if (!smi->slave_mii_bus) {
++ ret = -ENOMEM;
++ goto err_put_node;
++ }
++ smi->slave_mii_bus->priv = smi;
++ smi->slave_mii_bus->name = "SMI slave MII";
++ smi->slave_mii_bus->read = realtek_smi_mdio_read;
++ smi->slave_mii_bus->write = realtek_smi_mdio_write;
++ snprintf(smi->slave_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d",
++ smi->ds->index);
++ smi->slave_mii_bus->dev.of_node = mdio_np;
++ smi->slave_mii_bus->parent = smi->dev;
++ smi->ds->slave_mii_bus = smi->slave_mii_bus;
++
++ ret = devm_of_mdiobus_register(smi->dev, smi->slave_mii_bus, mdio_np);
++ if (ret) {
++ dev_err(smi->dev, "unable to register MDIO bus %s\n",
++ smi->slave_mii_bus->id);
++ goto err_put_node;
++ }
++
++ return 0;
++
++err_put_node:
++ of_node_put(mdio_np);
++
++ return ret;
++}
++
++static int realtek_smi_probe(struct platform_device *pdev)
++{
++ const struct realtek_smi_variant *var;
++ struct device *dev = &pdev->dev;
++ struct realtek_smi *smi;
++ struct device_node *np;
++ int ret;
++
++ var = of_device_get_match_data(dev);
++ np = dev->of_node;
++
++ smi = devm_kzalloc(dev, sizeof(*smi) + var->chip_data_sz, GFP_KERNEL);
++ if (!smi)
++ return -ENOMEM;
++ smi->chip_data = (void *)smi + sizeof(*smi);
++ smi->map = devm_regmap_init(dev, NULL, smi,
++ &realtek_smi_mdio_regmap_config);
++ if (IS_ERR(smi->map)) {
++ ret = PTR_ERR(smi->map);
++ dev_err(dev, "regmap init failed: %d\n", ret);
++ return ret;
++ }
++
++ /* Link forward and backward */
++ smi->dev = dev;
++ smi->clk_delay = var->clk_delay;
++ smi->cmd_read = var->cmd_read;
++ smi->cmd_write = var->cmd_write;
++ smi->ops = var->ops;
++
++ dev_set_drvdata(dev, smi);
++ spin_lock_init(&smi->lock);
++
++ /* TODO: if power is software controlled, set up any regulators here */
++
++ /* Assert then deassert RESET */
++ smi->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
++ if (IS_ERR(smi->reset)) {
++ dev_err(dev, "failed to get RESET GPIO\n");
++ return PTR_ERR(smi->reset);
++ }
++ msleep(REALTEK_SMI_HW_STOP_DELAY);
++ gpiod_set_value(smi->reset, 0);
++ msleep(REALTEK_SMI_HW_START_DELAY);
++ dev_info(dev, "deasserted RESET\n");
++
++ /* Fetch MDIO pins */
++ smi->mdc = devm_gpiod_get_optional(dev, "mdc", GPIOD_OUT_LOW);
++ if (IS_ERR(smi->mdc))
++ return PTR_ERR(smi->mdc);
++ smi->mdio = devm_gpiod_get_optional(dev, "mdio", GPIOD_OUT_LOW);
++ if (IS_ERR(smi->mdio))
++ return PTR_ERR(smi->mdio);
++
++ smi->leds_disabled = of_property_read_bool(np, "realtek,disable-leds");
++
++ ret = smi->ops->detect(smi);
++ if (ret) {
++ dev_err(dev, "unable to detect switch\n");
++ return ret;
++ }
++
++ smi->ds = devm_kzalloc(dev, sizeof(*smi->ds), GFP_KERNEL);
++ if (!smi->ds)
++ return -ENOMEM;
++
++ smi->ds->dev = dev;
++ smi->ds->num_ports = smi->num_ports;
++ smi->ds->priv = smi;
++
++ smi->ds->ops = var->ds_ops;
++ ret = dsa_register_switch(smi->ds);
++ if (ret) {
++ dev_err_probe(dev, ret, "unable to register switch\n");
++ return ret;
++ }
++ return 0;
++}
++
++static int realtek_smi_remove(struct platform_device *pdev)
++{
++ struct realtek_smi *smi = platform_get_drvdata(pdev);
++
++ if (!smi)
++ return 0;
++
++ dsa_unregister_switch(smi->ds);
++ if (smi->slave_mii_bus)
++ of_node_put(smi->slave_mii_bus->dev.of_node);
++ gpiod_set_value(smi->reset, 1);
++
++ platform_set_drvdata(pdev, NULL);
++
++ return 0;
++}
++
++static void realtek_smi_shutdown(struct platform_device *pdev)
++{
++ struct realtek_smi *smi = platform_get_drvdata(pdev);
++
++ if (!smi)
++ return;
++
++ dsa_switch_shutdown(smi->ds);
++
++ platform_set_drvdata(pdev, NULL);
++}
++
++static const struct of_device_id realtek_smi_of_match[] = {
++ {
++ .compatible = "realtek,rtl8366rb",
++ .data = &rtl8366rb_variant,
++ },
++ {
++ /* FIXME: add support for RTL8366S and more */
++ .compatible = "realtek,rtl8366s",
++ .data = NULL,
++ },
++ {
++ .compatible = "realtek,rtl8365mb",
++ .data = &rtl8365mb_variant,
++ },
++ { /* sentinel */ },
++};
++MODULE_DEVICE_TABLE(of, realtek_smi_of_match);
++
++static struct platform_driver realtek_smi_driver = {
++ .driver = {
++ .name = "realtek-smi",
++ .of_match_table = of_match_ptr(realtek_smi_of_match),
++ },
++ .probe = realtek_smi_probe,
++ .remove = realtek_smi_remove,
++ .shutdown = realtek_smi_shutdown,
++};
++module_platform_driver(realtek_smi_driver);
++
++MODULE_LICENSE("GPL");
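realtek-smi-core.c arrives in its new directory unchanged. One detail worth noting in it: because SMI is a bit-banged GPIO protocol rather than a standard bus, the driver registers a regmap with custom reg_read/reg_write callbacks, so the rest of the driver can use plain regmap accessors while the wire protocol stays confined to two functions. The shape of such a regmap, sketched with illustrative names:

#include <linux/regmap.h>

/* Illustrative context and accessors standing in for the SMI ones. */
struct my_bus {
	struct device *dev;
};

static int my_bus_read(void *ctx, unsigned int reg, unsigned int *val)
{
	/* bit-bang (or otherwise fetch) *val for reg here */
	return 0;
}

static int my_bus_write(void *ctx, unsigned int reg, unsigned int val)
{
	/* bit-bang (or otherwise push) val to reg here */
	return 0;
}

static const struct regmap_config my_bus_regmap_config = {
	.reg_bits = 10,
	.val_bits = 16,
	.reg_read = my_bus_read,	/* custom bus: no MMIO/I2C/SPI core */
	.reg_write = my_bus_write,
	.cache_type = REGCACHE_NONE,
};

/* Usage: map = devm_regmap_init(dev, NULL, bus, &my_bus_regmap_config); */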
+diff --git a/drivers/net/dsa/realtek/realtek-smi-core.h b/drivers/net/dsa/realtek/realtek-smi-core.h
+new file mode 100644
+index 0000000000000..faed387d8db38
+--- /dev/null
++++ b/drivers/net/dsa/realtek/realtek-smi-core.h
+@@ -0,0 +1,145 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++/* Realtek SMI interface driver defines
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ */
++
++#ifndef _REALTEK_SMI_H
++#define _REALTEK_SMI_H
++
++#include <linux/phy.h>
++#include <linux/platform_device.h>
++#include <linux/gpio/consumer.h>
++#include <net/dsa.h>
++
++struct realtek_smi_ops;
++struct dentry;
++struct inode;
++struct file;
++
++struct rtl8366_mib_counter {
++ unsigned int base;
++ unsigned int offset;
++ unsigned int length;
++ const char *name;
++};
++
++/*
++ * struct rtl8366_vlan_mc - Virtual LAN member configuration
++ */
++struct rtl8366_vlan_mc {
++ u16 vid;
++ u16 untag;
++ u16 member;
++ u8 fid;
++ u8 priority;
++};
++
++struct rtl8366_vlan_4k {
++ u16 vid;
++ u16 untag;
++ u16 member;
++ u8 fid;
++};
++
++struct realtek_smi {
++ struct device *dev;
++ struct gpio_desc *reset;
++ struct gpio_desc *mdc;
++ struct gpio_desc *mdio;
++ struct regmap *map;
++ struct mii_bus *slave_mii_bus;
++
++ unsigned int clk_delay;
++ u8 cmd_read;
++ u8 cmd_write;
++ spinlock_t lock; /* Locks around command writes */
++ struct dsa_switch *ds;
++ struct irq_domain *irqdomain;
++ bool leds_disabled;
++
++ unsigned int cpu_port;
++ unsigned int num_ports;
++ unsigned int num_vlan_mc;
++ unsigned int num_mib_counters;
++ struct rtl8366_mib_counter *mib_counters;
++
++ const struct realtek_smi_ops *ops;
++
++ int vlan_enabled;
++ int vlan4k_enabled;
++
++ char buf[4096];
++ void *chip_data; /* Per-chip extra variant data */
++};
++
++/*
++ * struct realtek_smi_ops - vtable for the per-SMI-chiptype operations
++ * @detect: detects the chiptype
++ */
++struct realtek_smi_ops {
++ int (*detect)(struct realtek_smi *smi);
++ int (*reset_chip)(struct realtek_smi *smi);
++ int (*setup)(struct realtek_smi *smi);
++ void (*cleanup)(struct realtek_smi *smi);
++ int (*get_mib_counter)(struct realtek_smi *smi,
++ int port,
++ struct rtl8366_mib_counter *mib,
++ u64 *mibvalue);
++ int (*get_vlan_mc)(struct realtek_smi *smi, u32 index,
++ struct rtl8366_vlan_mc *vlanmc);
++ int (*set_vlan_mc)(struct realtek_smi *smi, u32 index,
++ const struct rtl8366_vlan_mc *vlanmc);
++ int (*get_vlan_4k)(struct realtek_smi *smi, u32 vid,
++ struct rtl8366_vlan_4k *vlan4k);
++ int (*set_vlan_4k)(struct realtek_smi *smi,
++ const struct rtl8366_vlan_4k *vlan4k);
++ int (*get_mc_index)(struct realtek_smi *smi, int port, int *val);
++ int (*set_mc_index)(struct realtek_smi *smi, int port, int index);
++ bool (*is_vlan_valid)(struct realtek_smi *smi, unsigned int vlan);
++ int (*enable_vlan)(struct realtek_smi *smi, bool enable);
++ int (*enable_vlan4k)(struct realtek_smi *smi, bool enable);
++ int (*enable_port)(struct realtek_smi *smi, int port, bool enable);
++ int (*phy_read)(struct realtek_smi *smi, int phy, int regnum);
++ int (*phy_write)(struct realtek_smi *smi, int phy, int regnum,
++ u16 val);
++};
++
++struct realtek_smi_variant {
++ const struct dsa_switch_ops *ds_ops;
++ const struct realtek_smi_ops *ops;
++ unsigned int clk_delay;
++ u8 cmd_read;
++ u8 cmd_write;
++ size_t chip_data_sz;
++};
++
++/* SMI core calls */
++int realtek_smi_write_reg_noack(struct realtek_smi *smi, u32 addr,
++ u32 data);
++int realtek_smi_setup_mdio(struct realtek_smi *smi);
++
++/* RTL8366 library helpers */
++int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used);
++int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
++ u32 untag, u32 fid);
++int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
++ unsigned int vid);
++int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable);
++int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable);
++int rtl8366_reset_vlan(struct realtek_smi *smi);
++int rtl8366_vlan_add(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan,
++ struct netlink_ext_ack *extack);
++int rtl8366_vlan_del(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan);
++void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
++ uint8_t *data);
++int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset);
++void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data);
++
++extern const struct realtek_smi_variant rtl8366rb_variant;
++extern const struct realtek_smi_variant rtl8365mb_variant;
++
++#endif /* _REALTEK_SMI_H */
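The header likewise moves with only cosmetic comment changes. Its realtek_smi_ops / realtek_smi_variant split is the per-chip dispatch mechanism: each supported switch exports a variant (an ops table plus timing and command constants), the OF match table binds a compatible string to that variant, and probe copies the constants into the runtime structure. A reduced sketch of the pattern, with illustrative types:

#include <linux/types.h>

/* Chip-specific behaviour lives behind an ops vtable; a variant
 * bundles it with per-chip constants selected via OF match data.
 */
struct chip;

struct chip_ops {
	int (*detect)(struct chip *c);
	int (*setup)(struct chip *c);
};

struct chip_variant {
	const struct chip_ops *ops;
	unsigned int clk_delay;		/* per-chip timing */
	u8 cmd_read, cmd_write;		/* per-chip SMI opcodes */
};

/* probe side:
 *	var = of_device_get_match_data(dev);
 *	c->ops = var->ops;
 *	c->clk_delay = var->clk_delay;
 */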
+diff --git a/drivers/net/dsa/realtek/rtl8365mb.c b/drivers/net/dsa/realtek/rtl8365mb.c
+new file mode 100644
+index 0000000000000..3b729544798b1
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8365mb.c
+@@ -0,0 +1,1987 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Realtek SMI subdriver for the Realtek RTL8365MB-VC ethernet switch.
++ *
++ * Copyright (C) 2021 Alvin Šipraga <alsi@bang-olufsen.dk>
++ * Copyright (C) 2021 Michael Rasmussen <mir@bang-olufsen.dk>
++ *
++ * The RTL8365MB-VC is a 4+1 port 10/100/1000M switch controller. It includes 4
++ * integrated PHYs for the user facing ports, and an extension interface which
++ * can be connected to the CPU - or another PHY - via either MII, RMII, or
++ * RGMII. The switch is configured via the Realtek Simple Management Interface
++ * (SMI), which uses the MDIO/MDC lines.
++ *
++ * Below is a simplified block diagram of the chip and its relevant interfaces.
++ *
++ * .-----------------------------------.
++ * | |
++ * UTP <---------------> Giga PHY <-> PCS <-> P0 GMAC |
++ * UTP <---------------> Giga PHY <-> PCS <-> P1 GMAC |
++ * UTP <---------------> Giga PHY <-> PCS <-> P2 GMAC |
++ * UTP <---------------> Giga PHY <-> PCS <-> P3 GMAC |
++ * | |
++ * CPU/PHY <-MII/RMII/RGMII---> Extension <---> Extension |
++ * | interface 1 GMAC 1 |
++ * | |
++ * SMI driver/ <-MDC/SCL---> Management ~~~~~~~~~~~~~~ |
++ * EEPROM <-MDIO/SDA--> interface ~REALTEK ~~~~~ |
++ * | ~RTL8365MB ~~~ |
++ * | ~GXXXC TAIWAN~ |
++ * GPIO <--------------> Reset ~~~~~~~~~~~~~~ |
++ * | |
++ * Interrupt <----------> Link UP/DOWN events |
++ * controller | |
++ * '-----------------------------------'
++ *
++ * The driver uses DSA to integrate the 4 user and 1 extension ports into the
++ * kernel. Netdevices are created for the user ports, as are PHY devices for
++ * their integrated PHYs. The device tree firmware should also specify the link
++ * partner of the extension port - either via a fixed-link or other phy-handle.
++ * See the device tree bindings for more detailed information. Note that the
++ * driver has only been tested with a fixed-link, but in principle it should not
++ * matter.
++ *
++ * NOTE: Currently, only the RGMII interface is implemented in this driver.
++ *
++ * The interrupt line is asserted on link UP/DOWN events. The driver creates a
++ * custom irqchip to handle this interrupt and demultiplex the events by reading
++ * the status registers via SMI. Interrupts are then propagated to the relevant
++ * PHY device.
++ *
++ * The EEPROM contains initial register values which the chip will read over I2C
++ * upon hardware reset. It is also possible to omit the EEPROM. In both cases,
++ * the driver will manually reprogram some registers using jam tables to reach
++ * an initial state defined by the vendor driver.
++ *
++ * This Linux driver is written based on an OS-agnostic vendor driver from
++ * Realtek. The reference GPL-licensed sources can be found in the OpenWrt
++ * source tree under the name rtl8367c. The vendor driver claims to support a
++ * number of similar switch controllers from Realtek, but the only hardware we
++ * have is the RTL8365MB-VC. Moreover, there does not seem to be any chip under
++ * the name RTL8367C. Although one wishes that the 'C' stood for some kind of
++ * common hardware revision, there exist examples of chips with the suffix -VC
++ * which are explicitly not supported by the rtl8367c driver and which instead
++ * require the rtl8367d vendor driver. With all this uncertainty, the driver has
++ * been modestly named rtl8365mb. Future implementors may wish to rename things
++ * accordingly.
++ *
++ * In the same family of chips, some carry up to 8 user ports and up to 2
++ * extension ports. Where possible this driver tries to make things generic, but
++ * more work must be done to support these configurations. According to
++ * documentation from Realtek, the family should include the following chips:
++ *
++ * - RTL8363NB
++ * - RTL8363NB-VB
++ * - RTL8363SC
++ * - RTL8363SC-VB
++ * - RTL8364NB
++ * - RTL8364NB-VB
++ * - RTL8365MB-VC
++ * - RTL8366SC
++ * - RTL8367RB-VB
++ * - RTL8367SB
++ * - RTL8367S
++ * - RTL8370MB
++ * - RTL8310SR
++ *
++ * Some of the register logic for these additional chips has been skipped over
++ * while implementing this driver. It is therefore not possible to assume that
++ * things will work out-of-the-box for other chips, and a careful review of the
++ * vendor driver may be needed to expand support. The RTL8365MB-VC seems to be
++ * one of the simpler chips.
++ */
++
++#include <linux/bitfield.h>
++#include <linux/bitops.h>
++#include <linux/interrupt.h>
++#include <linux/irqdomain.h>
++#include <linux/mutex.h>
++#include <linux/of_irq.h>
++#include <linux/regmap.h>
++#include <linux/if_bridge.h>
++
++#include "realtek-smi-core.h"
++
++/* Chip-specific data and limits */
++#define RTL8365MB_CHIP_ID_8365MB_VC 0x6367
++#define RTL8365MB_CPU_PORT_NUM_8365MB_VC 6
++#define RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC 2112
++
++/* Family-specific data and limits */
++#define RTL8365MB_PHYADDRMAX 7
++#define RTL8365MB_NUM_PHYREGS 32
++#define RTL8365MB_PHYREGMAX (RTL8365MB_NUM_PHYREGS - 1)
++#define RTL8365MB_MAX_NUM_PORTS (RTL8365MB_CPU_PORT_NUM_8365MB_VC + 1)
++
++/* Chip identification registers */
++#define RTL8365MB_CHIP_ID_REG 0x1300
++
++#define RTL8365MB_CHIP_VER_REG 0x1301
++
++#define RTL8365MB_MAGIC_REG 0x13C2
++#define RTL8365MB_MAGIC_VALUE 0x0249
++
++/* Chip reset register */
++#define RTL8365MB_CHIP_RESET_REG 0x1322
++#define RTL8365MB_CHIP_RESET_SW_MASK 0x0002
++#define RTL8365MB_CHIP_RESET_HW_MASK 0x0001
++
++/* Interrupt polarity register */
++#define RTL8365MB_INTR_POLARITY_REG 0x1100
++#define RTL8365MB_INTR_POLARITY_MASK 0x0001
++#define RTL8365MB_INTR_POLARITY_HIGH 0
++#define RTL8365MB_INTR_POLARITY_LOW 1
++
++/* Interrupt control/status register - enable/check specific interrupt types */
++#define RTL8365MB_INTR_CTRL_REG 0x1101
++#define RTL8365MB_INTR_STATUS_REG 0x1102
++#define RTL8365MB_INTR_SLIENT_START_2_MASK 0x1000
++#define RTL8365MB_INTR_SLIENT_START_MASK 0x0800
++#define RTL8365MB_INTR_ACL_ACTION_MASK 0x0200
++#define RTL8365MB_INTR_CABLE_DIAG_FIN_MASK 0x0100
++#define RTL8365MB_INTR_INTERRUPT_8051_MASK 0x0080
++#define RTL8365MB_INTR_LOOP_DETECTION_MASK 0x0040
++#define RTL8365MB_INTR_GREEN_TIMER_MASK 0x0020
++#define RTL8365MB_INTR_SPECIAL_CONGEST_MASK 0x0010
++#define RTL8365MB_INTR_SPEED_CHANGE_MASK 0x0008
++#define RTL8365MB_INTR_LEARN_OVER_MASK 0x0004
++#define RTL8365MB_INTR_METER_EXCEEDED_MASK 0x0002
++#define RTL8365MB_INTR_LINK_CHANGE_MASK 0x0001
++#define RTL8365MB_INTR_ALL_MASK \
++ (RTL8365MB_INTR_SLIENT_START_2_MASK | \
++ RTL8365MB_INTR_SLIENT_START_MASK | \
++ RTL8365MB_INTR_ACL_ACTION_MASK | \
++ RTL8365MB_INTR_CABLE_DIAG_FIN_MASK | \
++ RTL8365MB_INTR_INTERRUPT_8051_MASK | \
++ RTL8365MB_INTR_LOOP_DETECTION_MASK | \
++ RTL8365MB_INTR_GREEN_TIMER_MASK | \
++ RTL8365MB_INTR_SPECIAL_CONGEST_MASK | \
++ RTL8365MB_INTR_SPEED_CHANGE_MASK | \
++ RTL8365MB_INTR_LEARN_OVER_MASK | \
++ RTL8365MB_INTR_METER_EXCEEDED_MASK | \
++ RTL8365MB_INTR_LINK_CHANGE_MASK)
++
++/* Per-port interrupt type status registers */
++#define RTL8365MB_PORT_LINKDOWN_IND_REG 0x1106
++#define RTL8365MB_PORT_LINKDOWN_IND_MASK 0x07FF
++
++#define RTL8365MB_PORT_LINKUP_IND_REG 0x1107
++#define RTL8365MB_PORT_LINKUP_IND_MASK 0x07FF
++
++/* PHY indirect access registers */
++#define RTL8365MB_INDIRECT_ACCESS_CTRL_REG 0x1F00
++#define RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK 0x0002
++#define RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ 0
++#define RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE 1
++#define RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK 0x0001
++#define RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE 1
++#define RTL8365MB_INDIRECT_ACCESS_STATUS_REG 0x1F01
++#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG 0x1F02
++#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK GENMASK(4, 0)
++#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK GENMASK(7, 5)
++#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK GENMASK(11, 8)
++#define RTL8365MB_PHY_BASE 0x2000
++#define RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG 0x1F03
++#define RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG 0x1F04
++
++/* PHY OCP address prefix register */
++#define RTL8365MB_GPHY_OCP_MSB_0_REG 0x1D15
++#define RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK 0x0FC0
++#define RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK 0xFC00
++
++/* The PHY OCP addresses of PHY registers 0~31 start here */
++#define RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE 0xA400
++
++/* EXT port interface mode values - used in DIGITAL_INTERFACE_SELECT */
++#define RTL8365MB_EXT_PORT_MODE_DISABLE 0
++#define RTL8365MB_EXT_PORT_MODE_RGMII 1
++#define RTL8365MB_EXT_PORT_MODE_MII_MAC 2
++#define RTL8365MB_EXT_PORT_MODE_MII_PHY 3
++#define RTL8365MB_EXT_PORT_MODE_TMII_MAC 4
++#define RTL8365MB_EXT_PORT_MODE_TMII_PHY 5
++#define RTL8365MB_EXT_PORT_MODE_GMII 6
++#define RTL8365MB_EXT_PORT_MODE_RMII_MAC 7
++#define RTL8365MB_EXT_PORT_MODE_RMII_PHY 8
++#define RTL8365MB_EXT_PORT_MODE_SGMII 9
++#define RTL8365MB_EXT_PORT_MODE_HSGMII 10
++#define RTL8365MB_EXT_PORT_MODE_1000X_100FX 11
++#define RTL8365MB_EXT_PORT_MODE_1000X 12
++#define RTL8365MB_EXT_PORT_MODE_100FX 13
++
++/* EXT port interface mode configuration registers 0~1 */
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0 0x1305
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG1 0x13C3
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(_extport) \
++ (RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0 + \
++ ((_extport) >> 1) * (0x13C3 - 0x1305))
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(_extport) \
++ (0xF << (((_extport) % 2)))
++#define RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(_extport) \
++ (((_extport) % 2) * 4)
++
++/* EXT port RGMII TX/RX delay configuration registers 1~2 */
++#define RTL8365MB_EXT_RGMXF_REG1 0x1307
++#define RTL8365MB_EXT_RGMXF_REG2 0x13C5
++#define RTL8365MB_EXT_RGMXF_REG(_extport) \
++ (RTL8365MB_EXT_RGMXF_REG1 + \
++ (((_extport) >> 1) * (0x13C5 - 0x1307)))
++#define RTL8365MB_EXT_RGMXF_RXDELAY_MASK 0x0007
++#define RTL8365MB_EXT_RGMXF_TXDELAY_MASK 0x0008
++
++/* External port speed values - used in DIGITAL_INTERFACE_FORCE */
++#define RTL8365MB_PORT_SPEED_10M 0
++#define RTL8365MB_PORT_SPEED_100M 1
++#define RTL8365MB_PORT_SPEED_1000M 2
++
++/* EXT port force configuration registers 0~2 */
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0 0x1310
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG1 0x1311
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG2 0x13C4
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(_extport) \
++ (RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0 + \
++ ((_extport) & 0x1) + \
++ ((((_extport) >> 1) & 0x1) * (0x13C4 - 0x1310)))
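++
++/* Example: the force register is 0x1310 for EXT port 0, 0x1311 for EXT
++ * port 1 and 0x13C4 for EXT port 2.
++ */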
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK 0x1000
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_NWAY_MASK 0x0080
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK 0x0040
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK 0x0020
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK 0x0010
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK 0x0004
++#define RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK 0x0003
++
++/* CPU port mask register - controls which ports are treated as CPU ports */
++#define RTL8365MB_CPU_PORT_MASK_REG 0x1219
++#define RTL8365MB_CPU_PORT_MASK_MASK 0x07FF
++
++/* CPU control register */
++#define RTL8365MB_CPU_CTRL_REG 0x121A
++#define RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK 0x0400
++#define RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK 0x0200
++#define RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK 0x0080
++#define RTL8365MB_CPU_CTRL_TAG_POSITION_MASK 0x0040
++#define RTL8365MB_CPU_CTRL_TRAP_PORT_MASK 0x0038
++#define RTL8365MB_CPU_CTRL_INSERTMODE_MASK 0x0006
++#define RTL8365MB_CPU_CTRL_EN_MASK 0x0001
++
++/* Maximum packet length register */
++#define RTL8365MB_CFG0_MAX_LEN_REG 0x088C
++#define RTL8365MB_CFG0_MAX_LEN_MASK 0x3FFF
++
++/* Port learning limit registers */
++#define RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE 0x0A20
++#define RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(_physport) \
++ (RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE + (_physport))
++
++/* Port isolation (forwarding mask) registers */
++#define RTL8365MB_PORT_ISOLATION_REG_BASE 0x08A2
++#define RTL8365MB_PORT_ISOLATION_REG(_physport) \
++ (RTL8365MB_PORT_ISOLATION_REG_BASE + (_physport))
++#define RTL8365MB_PORT_ISOLATION_MASK 0x07FF
++
++/* MSTP port state registers - indexed by tree instance */
++#define RTL8365MB_MSTI_CTRL_BASE 0x0A00
++#define RTL8365MB_MSTI_CTRL_REG(_msti, _physport) \
++ (RTL8365MB_MSTI_CTRL_BASE + ((_msti) << 1) + ((_physport) >> 3))
++#define RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(_physport) ((_physport) << 1)
++#define RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(_physport) \
++ (0x3 << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET((_physport)))
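++
++/* Example: for MSTI 0, the 2-bit state of physical port 3 lives in register
++ * 0x0A00 at bits 7:6 (mask 0x3 << 6 = 0x00C0).
++ */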
++
++/* MIB counter value registers */
++#define RTL8365MB_MIB_COUNTER_BASE 0x1000
++#define RTL8365MB_MIB_COUNTER_REG(_x) (RTL8365MB_MIB_COUNTER_BASE + (_x))
++
++/* MIB counter address register */
++#define RTL8365MB_MIB_ADDRESS_REG 0x1004
++#define RTL8365MB_MIB_ADDRESS_PORT_OFFSET 0x007C
++#define RTL8365MB_MIB_ADDRESS(_p, _x) \
++ (((RTL8365MB_MIB_ADDRESS_PORT_OFFSET) * (_p) + (_x)) >> 2)
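++
++/* Example: the 2-word dot3StatsFCSErrors counter (offset 4) of port 1 is
++ * requested with address (0x7C * 1 + 4) >> 2 = 0x20.
++ */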
++
++#define RTL8365MB_MIB_CTRL0_REG 0x1005
++#define RTL8365MB_MIB_CTRL0_RESET_MASK 0x0002
++#define RTL8365MB_MIB_CTRL0_BUSY_MASK 0x0001
++
++/* The DSA callback .get_stats64 runs in atomic context, so we are not allowed
++ * to block. On the other hand, accessing MIB counters absolutely requires us to
++ * block. The solution is thus to schedule work which polls the MIB counters
++ * asynchronously and updates some private data, which the callback can then
++ * fetch atomically. Three seconds should be a good enough polling interval.
++ */
++#define RTL8365MB_STATS_INTERVAL_JIFFIES (3 * HZ)
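++/* (HZ jiffies correspond to one second, so this is always 3 seconds,
++ * e.g. 750 jiffies with CONFIG_HZ=250.)
++ */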
++
++enum rtl8365mb_mib_counter_index {
++ RTL8365MB_MIB_ifInOctets,
++ RTL8365MB_MIB_dot3StatsFCSErrors,
++ RTL8365MB_MIB_dot3StatsSymbolErrors,
++ RTL8365MB_MIB_dot3InPauseFrames,
++ RTL8365MB_MIB_dot3ControlInUnknownOpcodes,
++ RTL8365MB_MIB_etherStatsFragments,
++ RTL8365MB_MIB_etherStatsJabbers,
++ RTL8365MB_MIB_ifInUcastPkts,
++ RTL8365MB_MIB_etherStatsDropEvents,
++ RTL8365MB_MIB_ifInMulticastPkts,
++ RTL8365MB_MIB_ifInBroadcastPkts,
++ RTL8365MB_MIB_inMldChecksumError,
++ RTL8365MB_MIB_inIgmpChecksumError,
++ RTL8365MB_MIB_inMldSpecificQuery,
++ RTL8365MB_MIB_inMldGeneralQuery,
++ RTL8365MB_MIB_inIgmpSpecificQuery,
++ RTL8365MB_MIB_inIgmpGeneralQuery,
++ RTL8365MB_MIB_inMldLeaves,
++ RTL8365MB_MIB_inIgmpLeaves,
++ RTL8365MB_MIB_etherStatsOctets,
++ RTL8365MB_MIB_etherStatsUnderSizePkts,
++ RTL8365MB_MIB_etherOversizeStats,
++ RTL8365MB_MIB_etherStatsPkts64Octets,
++ RTL8365MB_MIB_etherStatsPkts65to127Octets,
++ RTL8365MB_MIB_etherStatsPkts128to255Octets,
++ RTL8365MB_MIB_etherStatsPkts256to511Octets,
++ RTL8365MB_MIB_etherStatsPkts512to1023Octets,
++ RTL8365MB_MIB_etherStatsPkts1024to1518Octets,
++ RTL8365MB_MIB_ifOutOctets,
++ RTL8365MB_MIB_dot3StatsSingleCollisionFrames,
++ RTL8365MB_MIB_dot3StatsMultipleCollisionFrames,
++ RTL8365MB_MIB_dot3StatsDeferredTransmissions,
++ RTL8365MB_MIB_dot3StatsLateCollisions,
++ RTL8365MB_MIB_etherStatsCollisions,
++ RTL8365MB_MIB_dot3StatsExcessiveCollisions,
++ RTL8365MB_MIB_dot3OutPauseFrames,
++ RTL8365MB_MIB_ifOutDiscards,
++ RTL8365MB_MIB_dot1dTpPortInDiscards,
++ RTL8365MB_MIB_ifOutUcastPkts,
++ RTL8365MB_MIB_ifOutMulticastPkts,
++ RTL8365MB_MIB_ifOutBroadcastPkts,
++ RTL8365MB_MIB_outOampduPkts,
++ RTL8365MB_MIB_inOampduPkts,
++ RTL8365MB_MIB_inIgmpJoinsSuccess,
++ RTL8365MB_MIB_inIgmpJoinsFail,
++ RTL8365MB_MIB_inMldJoinsSuccess,
++ RTL8365MB_MIB_inMldJoinsFail,
++ RTL8365MB_MIB_inReportSuppressionDrop,
++ RTL8365MB_MIB_inLeaveSuppressionDrop,
++ RTL8365MB_MIB_outIgmpReports,
++ RTL8365MB_MIB_outIgmpLeaves,
++ RTL8365MB_MIB_outIgmpGeneralQuery,
++ RTL8365MB_MIB_outIgmpSpecificQuery,
++ RTL8365MB_MIB_outMldReports,
++ RTL8365MB_MIB_outMldLeaves,
++ RTL8365MB_MIB_outMldGeneralQuery,
++ RTL8365MB_MIB_outMldSpecificQuery,
++ RTL8365MB_MIB_inKnownMulticastPkts,
++ RTL8365MB_MIB_END,
++};
++
++struct rtl8365mb_mib_counter {
++ u32 offset;
++ u32 length;
++ const char *name;
++};
++
++#define RTL8365MB_MAKE_MIB_COUNTER(_offset, _length, _name) \
++ [RTL8365MB_MIB_ ## _name] = { _offset, _length, #_name }
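++
++/* For instance, RTL8365MB_MAKE_MIB_COUNTER(0, 4, ifInOctets) expands to
++ * [RTL8365MB_MIB_ifInOctets] = { 0, 4, "ifInOctets" }, pairing each enum
++ * index with the counter's SRAM offset, word length and ethtool name.
++ */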
++
++static struct rtl8365mb_mib_counter rtl8365mb_mib_counters[] = {
++ RTL8365MB_MAKE_MIB_COUNTER(0, 4, ifInOctets),
++ RTL8365MB_MAKE_MIB_COUNTER(4, 2, dot3StatsFCSErrors),
++ RTL8365MB_MAKE_MIB_COUNTER(6, 2, dot3StatsSymbolErrors),
++ RTL8365MB_MAKE_MIB_COUNTER(8, 2, dot3InPauseFrames),
++ RTL8365MB_MAKE_MIB_COUNTER(10, 2, dot3ControlInUnknownOpcodes),
++ RTL8365MB_MAKE_MIB_COUNTER(12, 2, etherStatsFragments),
++ RTL8365MB_MAKE_MIB_COUNTER(14, 2, etherStatsJabbers),
++ RTL8365MB_MAKE_MIB_COUNTER(16, 2, ifInUcastPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(18, 2, etherStatsDropEvents),
++ RTL8365MB_MAKE_MIB_COUNTER(20, 2, ifInMulticastPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(22, 2, ifInBroadcastPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(24, 2, inMldChecksumError),
++ RTL8365MB_MAKE_MIB_COUNTER(26, 2, inIgmpChecksumError),
++ RTL8365MB_MAKE_MIB_COUNTER(28, 2, inMldSpecificQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(30, 2, inMldGeneralQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(32, 2, inIgmpSpecificQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(34, 2, inIgmpGeneralQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(36, 2, inMldLeaves),
++ RTL8365MB_MAKE_MIB_COUNTER(38, 2, inIgmpLeaves),
++ RTL8365MB_MAKE_MIB_COUNTER(40, 4, etherStatsOctets),
++ RTL8365MB_MAKE_MIB_COUNTER(44, 2, etherStatsUnderSizePkts),
++ RTL8365MB_MAKE_MIB_COUNTER(46, 2, etherOversizeStats),
++ RTL8365MB_MAKE_MIB_COUNTER(48, 2, etherStatsPkts64Octets),
++ RTL8365MB_MAKE_MIB_COUNTER(50, 2, etherStatsPkts65to127Octets),
++ RTL8365MB_MAKE_MIB_COUNTER(52, 2, etherStatsPkts128to255Octets),
++ RTL8365MB_MAKE_MIB_COUNTER(54, 2, etherStatsPkts256to511Octets),
++ RTL8365MB_MAKE_MIB_COUNTER(56, 2, etherStatsPkts512to1023Octets),
++ RTL8365MB_MAKE_MIB_COUNTER(58, 2, etherStatsPkts1024to1518Octets),
++ RTL8365MB_MAKE_MIB_COUNTER(60, 4, ifOutOctets),
++ RTL8365MB_MAKE_MIB_COUNTER(64, 2, dot3StatsSingleCollisionFrames),
++ RTL8365MB_MAKE_MIB_COUNTER(66, 2, dot3StatsMultipleCollisionFrames),
++ RTL8365MB_MAKE_MIB_COUNTER(68, 2, dot3StatsDeferredTransmissions),
++ RTL8365MB_MAKE_MIB_COUNTER(70, 2, dot3StatsLateCollisions),
++ RTL8365MB_MAKE_MIB_COUNTER(72, 2, etherStatsCollisions),
++ RTL8365MB_MAKE_MIB_COUNTER(74, 2, dot3StatsExcessiveCollisions),
++ RTL8365MB_MAKE_MIB_COUNTER(76, 2, dot3OutPauseFrames),
++ RTL8365MB_MAKE_MIB_COUNTER(78, 2, ifOutDiscards),
++ RTL8365MB_MAKE_MIB_COUNTER(80, 2, dot1dTpPortInDiscards),
++ RTL8365MB_MAKE_MIB_COUNTER(82, 2, ifOutUcastPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(84, 2, ifOutMulticastPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(86, 2, ifOutBroadcastPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(88, 2, outOampduPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(90, 2, inOampduPkts),
++ RTL8365MB_MAKE_MIB_COUNTER(92, 4, inIgmpJoinsSuccess),
++ RTL8365MB_MAKE_MIB_COUNTER(96, 2, inIgmpJoinsFail),
++ RTL8365MB_MAKE_MIB_COUNTER(98, 2, inMldJoinsSuccess),
++ RTL8365MB_MAKE_MIB_COUNTER(100, 2, inMldJoinsFail),
++ RTL8365MB_MAKE_MIB_COUNTER(102, 2, inReportSuppressionDrop),
++ RTL8365MB_MAKE_MIB_COUNTER(104, 2, inLeaveSuppressionDrop),
++ RTL8365MB_MAKE_MIB_COUNTER(106, 2, outIgmpReports),
++ RTL8365MB_MAKE_MIB_COUNTER(108, 2, outIgmpLeaves),
++ RTL8365MB_MAKE_MIB_COUNTER(110, 2, outIgmpGeneralQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(112, 2, outIgmpSpecificQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(114, 2, outMldReports),
++ RTL8365MB_MAKE_MIB_COUNTER(116, 2, outMldLeaves),
++ RTL8365MB_MAKE_MIB_COUNTER(118, 2, outMldGeneralQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(120, 2, outMldSpecificQuery),
++ RTL8365MB_MAKE_MIB_COUNTER(122, 2, inKnownMulticastPkts),
++};
++
++static_assert(ARRAY_SIZE(rtl8365mb_mib_counters) == RTL8365MB_MIB_END);
++
++struct rtl8365mb_jam_tbl_entry {
++ u16 reg;
++ u16 val;
++};
++
++/* Lifted from the vendor driver sources */
++static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_8365mb_vc[] = {
++ { 0x13EB, 0x15BB }, { 0x1303, 0x06D6 }, { 0x1304, 0x0700 },
++ { 0x13E2, 0x003F }, { 0x13F9, 0x0090 }, { 0x121E, 0x03CA },
++ { 0x1233, 0x0352 }, { 0x1237, 0x00A0 }, { 0x123A, 0x0030 },
++ { 0x1239, 0x0084 }, { 0x0301, 0x1000 }, { 0x1349, 0x001F },
++ { 0x18E0, 0x4004 }, { 0x122B, 0x241C }, { 0x1305, 0xC000 },
++ { 0x13F0, 0x0000 },
++};
++
++static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_common[] = {
++ { 0x1200, 0x7FCB }, { 0x0884, 0x0003 }, { 0x06EB, 0x0001 },
++	{ 0x03FA, 0x0007 }, { 0x08C8, 0x00C0 }, { 0x0A30, 0x020E },
++ { 0x0800, 0x0000 }, { 0x0802, 0x0000 }, { 0x09DA, 0x0013 },
++ { 0x1D32, 0x0002 },
++};
++
++enum rtl8365mb_stp_state {
++ RTL8365MB_STP_STATE_DISABLED = 0,
++ RTL8365MB_STP_STATE_BLOCKING = 1,
++ RTL8365MB_STP_STATE_LEARNING = 2,
++ RTL8365MB_STP_STATE_FORWARDING = 3,
++};
++
++enum rtl8365mb_cpu_insert {
++ RTL8365MB_CPU_INSERT_TO_ALL = 0,
++ RTL8365MB_CPU_INSERT_TO_TRAPPING = 1,
++ RTL8365MB_CPU_INSERT_TO_NONE = 2,
++};
++
++enum rtl8365mb_cpu_position {
++ RTL8365MB_CPU_POS_AFTER_SA = 0,
++ RTL8365MB_CPU_POS_BEFORE_CRC = 1,
++};
++
++enum rtl8365mb_cpu_format {
++ RTL8365MB_CPU_FORMAT_8BYTES = 0,
++ RTL8365MB_CPU_FORMAT_4BYTES = 1,
++};
++
++enum rtl8365mb_cpu_rxlen {
++ RTL8365MB_CPU_RXLEN_72BYTES = 0,
++ RTL8365MB_CPU_RXLEN_64BYTES = 1,
++};
++
++/**
++ * struct rtl8365mb_cpu - CPU port configuration
++ * @enable: enable/disable hardware insertion of CPU tag in switch->CPU frames
++ * @mask: port mask of ports that should parse CPU tags
++ * @trap_port: forward trapped frames to this port
++ * @insert: CPU tag insertion mode in switch->CPU frames
++ * @position: position of CPU tag in frame
++ * @rx_length: minimum CPU RX length
++ * @format: CPU tag format
++ *
++ * Represents the CPU tagging and CPU port configuration of the switch. These
++ * settings are configurable at runtime.
++ */
++struct rtl8365mb_cpu {
++ bool enable;
++ u32 mask;
++ u32 trap_port;
++ enum rtl8365mb_cpu_insert insert;
++ enum rtl8365mb_cpu_position position;
++ enum rtl8365mb_cpu_rxlen rx_length;
++ enum rtl8365mb_cpu_format format;
++};
++
++/**
++ * struct rtl8365mb_port - private per-port data
++ * @smi: pointer to parent realtek_smi data
++ * @index: DSA port index, same as dsa_port::index
++ * @stats: link statistics populated by rtl8365mb_stats_poll, ready for atomic
++ * access via rtl8365mb_get_stats64
++ * @stats_lock: protect the stats structure during read/update
++ * @mib_work: delayed work for polling MIB counters
++ */
++struct rtl8365mb_port {
++ struct realtek_smi *smi;
++ unsigned int index;
++ struct rtnl_link_stats64 stats;
++ spinlock_t stats_lock;
++ struct delayed_work mib_work;
++};
++
++/**
++ * struct rtl8365mb - private chip-specific driver data
++ * @smi: pointer to parent realtek_smi data
++ * @irq: registered IRQ or zero
++ * @chip_id: chip identifier
++ * @chip_ver: chip silicon revision
++ * @port_mask: mask of all ports
++ * @learn_limit_max: maximum number of L2 addresses the chip can learn
++ * @cpu: CPU tagging and CPU port configuration for this chip
++ * @mib_lock: prevent concurrent reads of MIB counters
++ * @ports: per-port data
++ * @jam_table: chip-specific initialization jam table
++ * @jam_size: size of the chip's jam table
++ *
++ * Private data for this driver.
++ */
++struct rtl8365mb {
++ struct realtek_smi *smi;
++ int irq;
++ u32 chip_id;
++ u32 chip_ver;
++ u32 port_mask;
++ u32 learn_limit_max;
++ struct rtl8365mb_cpu cpu;
++ struct mutex mib_lock;
++ struct rtl8365mb_port ports[RTL8365MB_MAX_NUM_PORTS];
++ const struct rtl8365mb_jam_tbl_entry *jam_table;
++ size_t jam_size;
++};
++
++static int rtl8365mb_phy_poll_busy(struct realtek_smi *smi)
++{
++ u32 val;
++
++ return regmap_read_poll_timeout(smi->map,
++ RTL8365MB_INDIRECT_ACCESS_STATUS_REG,
++ val, !val, 10, 100);
++}
++
++static int rtl8365mb_phy_ocp_prepare(struct realtek_smi *smi, int phy,
++ u32 ocp_addr)
++{
++ u32 val;
++ int ret;
++
++ /* Set OCP prefix */
++ val = FIELD_GET(RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK, ocp_addr);
++ ret = regmap_update_bits(
++ smi->map, RTL8365MB_GPHY_OCP_MSB_0_REG,
++ RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK,
++ FIELD_PREP(RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK, val));
++ if (ret)
++ return ret;
++
++ /* Set PHY register address */
++ val = RTL8365MB_PHY_BASE;
++ val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK, phy);
++ val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK,
++ ocp_addr >> 1);
++ val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK,
++ ocp_addr >> 6);
++ ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG,
++ val);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static int rtl8365mb_phy_ocp_read(struct realtek_smi *smi, int phy,
++ u32 ocp_addr, u16 *data)
++{
++ u32 val;
++ int ret;
++
++ ret = rtl8365mb_phy_poll_busy(smi);
++ if (ret)
++ return ret;
++
++ ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
++ if (ret)
++ return ret;
++
++ /* Execute read operation */
++ val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
++ RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
++ FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
++ RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ);
++ ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
++ if (ret)
++ return ret;
++
++ ret = rtl8365mb_phy_poll_busy(smi);
++ if (ret)
++ return ret;
++
++ /* Get PHY register data */
++ ret = regmap_read(smi->map, RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG,
++ &val);
++ if (ret)
++ return ret;
++
++ *data = val & 0xFFFF;
++
++ return 0;
++}
++
++static int rtl8365mb_phy_ocp_write(struct realtek_smi *smi, int phy,
++ u32 ocp_addr, u16 data)
++{
++ u32 val;
++ int ret;
++
++ ret = rtl8365mb_phy_poll_busy(smi);
++ if (ret)
++ return ret;
++
++ ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
++ if (ret)
++ return ret;
++
++ /* Set PHY register data */
++ ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG,
++ data);
++ if (ret)
++ return ret;
++
++ /* Execute write operation */
++ val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
++ RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
++ FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
++ RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE);
++ ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
++ if (ret)
++ return ret;
++
++ ret = rtl8365mb_phy_poll_busy(smi);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static int rtl8365mb_phy_read(struct realtek_smi *smi, int phy, int regnum)
++{
++ u32 ocp_addr;
++ u16 val;
++ int ret;
++
++ if (phy > RTL8365MB_PHYADDRMAX)
++ return -EINVAL;
++
++ if (regnum > RTL8365MB_PHYREGMAX)
++ return -EINVAL;
++
++ ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
++
++ ret = rtl8365mb_phy_ocp_read(smi, phy, ocp_addr, &val);
++ if (ret) {
++ dev_err(smi->dev,
++ "failed to read PHY%d reg %02x @ %04x, ret %d\n", phy,
++ regnum, ocp_addr, ret);
++ return ret;
++ }
++
++ dev_dbg(smi->dev, "read PHY%d register 0x%02x @ %04x, val <- %04x\n",
++ phy, regnum, ocp_addr, val);
++
++ return val;
++}
++
++static int rtl8365mb_phy_write(struct realtek_smi *smi, int phy, int regnum,
++ u16 val)
++{
++ u32 ocp_addr;
++ int ret;
++
++ if (phy > RTL8365MB_PHYADDRMAX)
++ return -EINVAL;
++
++ if (regnum > RTL8365MB_PHYREGMAX)
++ return -EINVAL;
++
++ ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
++
++ ret = rtl8365mb_phy_ocp_write(smi, phy, ocp_addr, val);
++ if (ret) {
++ dev_err(smi->dev,
++ "failed to write PHY%d reg %02x @ %04x, ret %d\n", phy,
++ regnum, ocp_addr, ret);
++ return ret;
++ }
++
++ dev_dbg(smi->dev, "write PHY%d register 0x%02x @ %04x, val -> %04x\n",
++ phy, regnum, ocp_addr, val);
++
++ return 0;
++}
++
++static enum dsa_tag_protocol
++rtl8365mb_get_tag_protocol(struct dsa_switch *ds, int port,
++ enum dsa_tag_protocol mp)
++{
++ return DSA_TAG_PROTO_RTL8_4;
++}
++
++static int rtl8365mb_ext_config_rgmii(struct realtek_smi *smi, int port,
++ phy_interface_t interface)
++{
++ struct device_node *dn;
++ struct dsa_port *dp;
++ int tx_delay = 0;
++ int rx_delay = 0;
++ int ext_port;
++ u32 val;
++ int ret;
++
++ if (port == smi->cpu_port) {
++ ext_port = 1;
++ } else {
++ dev_err(smi->dev, "only one EXT port is currently supported\n");
++ return -EINVAL;
++ }
++
++ dp = dsa_to_port(smi->ds, port);
++ dn = dp->dn;
++
++ /* Set the RGMII TX/RX delay
++ *
++ * The Realtek vendor driver indicates the following possible
++ * configuration settings:
++ *
++ * TX delay:
++ * 0 = no delay, 1 = 2 ns delay
++ * RX delay:
++ * 0 = no delay, 7 = maximum delay
++ * Each step is approximately 0.3 ns, so the maximum delay is about
++ * 2.1 ns.
++ *
++ * The vendor driver also states that this must be configured *before*
++ * forcing the external interface into a particular mode, which is done
++ * in the rtl8365mb_phylink_mac_link_{up,down} functions.
++ *
++ * Only configure an RGMII TX (resp. RX) delay if the
++ * tx-internal-delay-ps (resp. rx-internal-delay-ps) OF property is
++ * specified. We ignore the detail of the RGMII interface mode
++ * (RGMII_{RXID, TXID, etc.}), as this is considered to be a PHY-only
++ * property.
++ */
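++	/* For illustration, a hypothetical device tree fragment such as:
++	 *
++	 *   tx-internal-delay-ps = <2000>;
++	 *   rx-internal-delay-ps = <900>;
++	 *
++	 * would result in tx_delay = 1 (2 ns) and rx_delay = 3 (~0.9 ns)
++	 * below.
++	 */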
++ if (!of_property_read_u32(dn, "tx-internal-delay-ps", &val)) {
++ val = val / 1000; /* convert to ns */
++
++ if (val == 0 || val == 2)
++ tx_delay = val / 2;
++ else
++ dev_warn(smi->dev,
++ "EXT port TX delay must be 0 or 2 ns\n");
++ }
++
++ if (!of_property_read_u32(dn, "rx-internal-delay-ps", &val)) {
++ val = DIV_ROUND_CLOSEST(val, 300); /* convert to 0.3 ns step */
++
++ if (val <= 7)
++ rx_delay = val;
++ else
++ dev_warn(smi->dev,
++ "EXT port RX delay must be 0 to 2.1 ns\n");
++ }
++
++ ret = regmap_update_bits(
++ smi->map, RTL8365MB_EXT_RGMXF_REG(ext_port),
++ RTL8365MB_EXT_RGMXF_TXDELAY_MASK |
++ RTL8365MB_EXT_RGMXF_RXDELAY_MASK,
++ FIELD_PREP(RTL8365MB_EXT_RGMXF_TXDELAY_MASK, tx_delay) |
++ FIELD_PREP(RTL8365MB_EXT_RGMXF_RXDELAY_MASK, rx_delay));
++ if (ret)
++ return ret;
++
++ ret = regmap_update_bits(
++ smi->map, RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(ext_port),
++ RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(ext_port),
++ RTL8365MB_EXT_PORT_MODE_RGMII
++ << RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(
++ ext_port));
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static int rtl8365mb_ext_config_forcemode(struct realtek_smi *smi, int port,
++ bool link, int speed, int duplex,
++ bool tx_pause, bool rx_pause)
++{
++ u32 r_tx_pause;
++ u32 r_rx_pause;
++ u32 r_duplex;
++ u32 r_speed;
++ u32 r_link;
++ int ext_port;
++ int val;
++ int ret;
++
++ if (port == smi->cpu_port) {
++ ext_port = 1;
++ } else {
++ dev_err(smi->dev, "only one EXT port is currently supported\n");
++ return -EINVAL;
++ }
++
++ if (link) {
++ /* Force the link up with the desired configuration */
++ r_link = 1;
++ r_rx_pause = rx_pause ? 1 : 0;
++ r_tx_pause = tx_pause ? 1 : 0;
++
++ if (speed == SPEED_1000) {
++ r_speed = RTL8365MB_PORT_SPEED_1000M;
++ } else if (speed == SPEED_100) {
++ r_speed = RTL8365MB_PORT_SPEED_100M;
++ } else if (speed == SPEED_10) {
++ r_speed = RTL8365MB_PORT_SPEED_10M;
++ } else {
++ dev_err(smi->dev, "unsupported port speed %s\n",
++ phy_speed_to_str(speed));
++ return -EINVAL;
++ }
++
++ if (duplex == DUPLEX_FULL) {
++ r_duplex = 1;
++ } else if (duplex == DUPLEX_HALF) {
++ r_duplex = 0;
++ } else {
++ dev_err(smi->dev, "unsupported duplex %s\n",
++ phy_duplex_to_str(duplex));
++ return -EINVAL;
++ }
++ } else {
++ /* Force the link down and reset any programmed configuration */
++ r_link = 0;
++ r_tx_pause = 0;
++ r_rx_pause = 0;
++ r_speed = 0;
++ r_duplex = 0;
++ }
++
++ val = FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK, 1) |
++ FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK,
++ r_tx_pause) |
++ FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK,
++ r_rx_pause) |
++ FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK, r_link) |
++ FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK,
++ r_duplex) |
++ FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK, r_speed);
++ ret = regmap_write(smi->map,
++ RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(ext_port),
++ val);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static bool rtl8365mb_phy_mode_supported(struct dsa_switch *ds, int port,
++ phy_interface_t interface)
++{
++ if (dsa_is_user_port(ds, port) &&
++ (interface == PHY_INTERFACE_MODE_NA ||
++ interface == PHY_INTERFACE_MODE_INTERNAL ||
++ interface == PHY_INTERFACE_MODE_GMII))
++ /* Internal PHY */
++ return true;
++ else if (dsa_is_cpu_port(ds, port) &&
++ phy_interface_mode_is_rgmii(interface))
++ /* Extension MAC */
++ return true;
++
++ return false;
++}
++
++static void rtl8365mb_phylink_validate(struct dsa_switch *ds, int port,
++ unsigned long *supported,
++ struct phylink_link_state *state)
++{
++ struct realtek_smi *smi = ds->priv;
++ __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0 };
++
++ /* include/linux/phylink.h says:
++ * When @state->interface is %PHY_INTERFACE_MODE_NA, phylink
++ * expects the MAC driver to return all supported link modes.
++ */
++ if (state->interface != PHY_INTERFACE_MODE_NA &&
++ !rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
++ dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
++ phy_modes(state->interface), port);
++ linkmode_zero(supported);
++ return;
++ }
++
++ phylink_set_port_modes(mask);
++
++ phylink_set(mask, Autoneg);
++ phylink_set(mask, Pause);
++ phylink_set(mask, Asym_Pause);
++
++ phylink_set(mask, 10baseT_Half);
++ phylink_set(mask, 10baseT_Full);
++ phylink_set(mask, 100baseT_Half);
++ phylink_set(mask, 100baseT_Full);
++ phylink_set(mask, 1000baseT_Full);
++
++ linkmode_and(supported, supported, mask);
++ linkmode_and(state->advertising, state->advertising, mask);
++}
++
++static void rtl8365mb_phylink_mac_config(struct dsa_switch *ds, int port,
++ unsigned int mode,
++ const struct phylink_link_state *state)
++{
++ struct realtek_smi *smi = ds->priv;
++ int ret;
++
++ if (!rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
++ dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
++ phy_modes(state->interface), port);
++ return;
++ }
++
++ if (mode != MLO_AN_PHY && mode != MLO_AN_FIXED) {
++ dev_err(smi->dev,
++ "port %d supports only conventional PHY or fixed-link\n",
++ port);
++ return;
++ }
++
++ if (phy_interface_mode_is_rgmii(state->interface)) {
++ ret = rtl8365mb_ext_config_rgmii(smi, port, state->interface);
++ if (ret)
++ dev_err(smi->dev,
++ "failed to configure RGMII mode on port %d: %d\n",
++ port, ret);
++ return;
++ }
++
++ /* TODO: Implement MII and RMII modes, which the RTL8365MB-VC also
++ * supports
++ */
++}
++
++static void rtl8365mb_phylink_mac_link_down(struct dsa_switch *ds, int port,
++ unsigned int mode,
++ phy_interface_t interface)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb_port *p;
++ struct rtl8365mb *mb;
++ int ret;
++
++ mb = smi->chip_data;
++ p = &mb->ports[port];
++ cancel_delayed_work_sync(&p->mib_work);
++
++ if (phy_interface_mode_is_rgmii(interface)) {
++ ret = rtl8365mb_ext_config_forcemode(smi, port, false, 0, 0,
++ false, false);
++ if (ret)
++ dev_err(smi->dev,
++ "failed to reset forced mode on port %d: %d\n",
++ port, ret);
++
++ return;
++ }
++}
++
++static void rtl8365mb_phylink_mac_link_up(struct dsa_switch *ds, int port,
++ unsigned int mode,
++ phy_interface_t interface,
++ struct phy_device *phydev, int speed,
++ int duplex, bool tx_pause,
++ bool rx_pause)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb_port *p;
++ struct rtl8365mb *mb;
++ int ret;
++
++ mb = smi->chip_data;
++ p = &mb->ports[port];
++ schedule_delayed_work(&p->mib_work, 0);
++
++ if (phy_interface_mode_is_rgmii(interface)) {
++ ret = rtl8365mb_ext_config_forcemode(smi, port, true, speed,
++ duplex, tx_pause,
++ rx_pause);
++ if (ret)
++ dev_err(smi->dev,
++ "failed to force mode on port %d: %d\n", port,
++ ret);
++
++ return;
++ }
++}
++
++static void rtl8365mb_port_stp_state_set(struct dsa_switch *ds, int port,
++ u8 state)
++{
++ struct realtek_smi *smi = ds->priv;
++ enum rtl8365mb_stp_state val;
++ int msti = 0;
++
++ switch (state) {
++ case BR_STATE_DISABLED:
++ val = RTL8365MB_STP_STATE_DISABLED;
++ break;
++ case BR_STATE_BLOCKING:
++ case BR_STATE_LISTENING:
++ val = RTL8365MB_STP_STATE_BLOCKING;
++ break;
++ case BR_STATE_LEARNING:
++ val = RTL8365MB_STP_STATE_LEARNING;
++ break;
++ case BR_STATE_FORWARDING:
++ val = RTL8365MB_STP_STATE_FORWARDING;
++ break;
++ default:
++ dev_err(smi->dev, "invalid STP state: %u\n", state);
++ return;
++ }
++
++ regmap_update_bits(smi->map, RTL8365MB_MSTI_CTRL_REG(msti, port),
++ RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(port),
++ val << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(port));
++}
++
++static int rtl8365mb_port_set_learning(struct realtek_smi *smi, int port,
++ bool enable)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++
++ /* Enable/disable learning by limiting the number of L2 addresses the
++ * port can learn. Realtek documentation states that a limit of zero
++ * disables learning. When enabling learning, set it to the chip's
++ * maximum.
++ */
++ return regmap_write(smi->map, RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(port),
++ enable ? mb->learn_limit_max : 0);
++}
++
++static int rtl8365mb_port_set_isolation(struct realtek_smi *smi, int port,
++ u32 mask)
++{
++ return regmap_write(smi->map, RTL8365MB_PORT_ISOLATION_REG(port), mask);
++}
++
++static int rtl8365mb_mib_counter_read(struct realtek_smi *smi, int port,
++ u32 offset, u32 length, u64 *mibvalue)
++{
++ u64 tmpvalue = 0;
++ u32 val;
++ int ret;
++ int i;
++
++ /* The MIB address is an SRAM address. We request a particular address
++ * and then poll the control register before reading the value from some
++ * counter registers.
++ */
++ ret = regmap_write(smi->map, RTL8365MB_MIB_ADDRESS_REG,
++ RTL8365MB_MIB_ADDRESS(port, offset));
++ if (ret)
++ return ret;
++
++ /* Poll for completion */
++ ret = regmap_read_poll_timeout(smi->map, RTL8365MB_MIB_CTRL0_REG, val,
++ !(val & RTL8365MB_MIB_CTRL0_BUSY_MASK),
++ 10, 100);
++ if (ret)
++ return ret;
++
++ /* Presumably this indicates a MIB counter read failure */
++ if (val & RTL8365MB_MIB_CTRL0_RESET_MASK)
++ return -EIO;
++
++ /* There are four MIB counter registers each holding a 16 bit word of a
++ * MIB counter. Depending on the offset, we should read from the upper
++ * two or lower two registers. In case the MIB counter is 4 words, we
++ * read from all four registers.
++ */
++ if (length == 4)
++ offset = 3;
++ else
++ offset = (offset + 1) % 4;
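++
++	/* Example: a 2-word counter at SRAM offset 6 ends up reading counter
++	 * registers 3 and 2, since (6 + 1) % 4 = 3; a 4-word counter always
++	 * reads registers 3 down to 0.
++	 */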
++
++ /* Read the MIB counter 16 bits at a time */
++ for (i = 0; i < length; i++) {
++ ret = regmap_read(smi->map,
++ RTL8365MB_MIB_COUNTER_REG(offset - i), &val);
++ if (ret)
++ return ret;
++
++ tmpvalue = ((tmpvalue) << 16) | (val & 0xFFFF);
++ }
++
++ /* Only commit the result if no error occurred */
++ *mibvalue = tmpvalue;
++
++ return 0;
++}
++
++static void rtl8365mb_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb *mb;
++ int ret;
++ int i;
++
++ mb = smi->chip_data;
++
++ mutex_lock(&mb->mib_lock);
++ for (i = 0; i < RTL8365MB_MIB_END; i++) {
++ struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
++
++ ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
++ mib->length, &data[i]);
++ if (ret) {
++ dev_err(smi->dev,
++ "failed to read port %d counters: %d\n", port,
++ ret);
++ break;
++ }
++ }
++ mutex_unlock(&mb->mib_lock);
++}
++
++static void rtl8365mb_get_strings(struct dsa_switch *ds, int port, u32 stringset, u8 *data)
++{
++ int i;
++
++ if (stringset != ETH_SS_STATS)
++ return;
++
++ for (i = 0; i < RTL8365MB_MIB_END; i++) {
++ struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
++
++ strncpy(data + i * ETH_GSTRING_LEN, mib->name, ETH_GSTRING_LEN);
++ }
++}
++
++static int rtl8365mb_get_sset_count(struct dsa_switch *ds, int port, int sset)
++{
++ if (sset != ETH_SS_STATS)
++ return -EOPNOTSUPP;
++
++ return RTL8365MB_MIB_END;
++}
++
++static void rtl8365mb_get_phy_stats(struct dsa_switch *ds, int port,
++ struct ethtool_eth_phy_stats *phy_stats)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb_mib_counter *mib;
++ struct rtl8365mb *mb;
++
++ mb = smi->chip_data;
++ mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3StatsSymbolErrors];
++
++ mutex_lock(&mb->mib_lock);
++ rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
++ &phy_stats->SymbolErrorDuringCarrier);
++ mutex_unlock(&mb->mib_lock);
++}
++
++static void rtl8365mb_get_mac_stats(struct dsa_switch *ds, int port,
++ struct ethtool_eth_mac_stats *mac_stats)
++{
++ u64 cnt[RTL8365MB_MIB_END] = {
++ [RTL8365MB_MIB_ifOutOctets] = 1,
++ [RTL8365MB_MIB_ifOutUcastPkts] = 1,
++ [RTL8365MB_MIB_ifOutMulticastPkts] = 1,
++ [RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
++ [RTL8365MB_MIB_dot3OutPauseFrames] = 1,
++ [RTL8365MB_MIB_ifOutDiscards] = 1,
++ [RTL8365MB_MIB_ifInOctets] = 1,
++ [RTL8365MB_MIB_ifInUcastPkts] = 1,
++ [RTL8365MB_MIB_ifInMulticastPkts] = 1,
++ [RTL8365MB_MIB_ifInBroadcastPkts] = 1,
++ [RTL8365MB_MIB_dot3InPauseFrames] = 1,
++ [RTL8365MB_MIB_dot3StatsSingleCollisionFrames] = 1,
++ [RTL8365MB_MIB_dot3StatsMultipleCollisionFrames] = 1,
++ [RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
++ [RTL8365MB_MIB_dot3StatsDeferredTransmissions] = 1,
++ [RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
++ [RTL8365MB_MIB_dot3StatsExcessiveCollisions] = 1,
++ };
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb *mb;
++ int ret;
++ int i;
++
++ mb = smi->chip_data;
++
++ mutex_lock(&mb->mib_lock);
++ for (i = 0; i < RTL8365MB_MIB_END; i++) {
++ struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
++
++ /* Only fetch required MIB counters (marked = 1 above) */
++ if (!cnt[i])
++ continue;
++
++ ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
++ mib->length, &cnt[i]);
++ if (ret)
++ break;
++ }
++ mutex_unlock(&mb->mib_lock);
++
++ /* The RTL8365MB-VC exposes MIB objects, which we have to translate into
++ * IEEE 802.3 Managed Objects. This is not always completely faithful,
++	 * but we try our best. See RFC 3635 for a detailed treatment of the
++ * subject.
++ */
++
++ mac_stats->FramesTransmittedOK = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
++ cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
++ cnt[RTL8365MB_MIB_ifOutBroadcastPkts] +
++ cnt[RTL8365MB_MIB_dot3OutPauseFrames] -
++ cnt[RTL8365MB_MIB_ifOutDiscards];
++ mac_stats->SingleCollisionFrames =
++ cnt[RTL8365MB_MIB_dot3StatsSingleCollisionFrames];
++ mac_stats->MultipleCollisionFrames =
++ cnt[RTL8365MB_MIB_dot3StatsMultipleCollisionFrames];
++ mac_stats->FramesReceivedOK = cnt[RTL8365MB_MIB_ifInUcastPkts] +
++ cnt[RTL8365MB_MIB_ifInMulticastPkts] +
++ cnt[RTL8365MB_MIB_ifInBroadcastPkts] +
++ cnt[RTL8365MB_MIB_dot3InPauseFrames];
++ mac_stats->FrameCheckSequenceErrors =
++ cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
++ mac_stats->OctetsTransmittedOK = cnt[RTL8365MB_MIB_ifOutOctets] -
++ 18 * mac_stats->FramesTransmittedOK;
++ mac_stats->FramesWithDeferredXmissions =
++ cnt[RTL8365MB_MIB_dot3StatsDeferredTransmissions];
++ mac_stats->LateCollisions = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
++ mac_stats->FramesAbortedDueToXSColls =
++ cnt[RTL8365MB_MIB_dot3StatsExcessiveCollisions];
++ mac_stats->OctetsReceivedOK = cnt[RTL8365MB_MIB_ifInOctets] -
++ 18 * mac_stats->FramesReceivedOK;
++ mac_stats->MulticastFramesXmittedOK =
++ cnt[RTL8365MB_MIB_ifOutMulticastPkts];
++ mac_stats->BroadcastFramesXmittedOK =
++ cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
++ mac_stats->MulticastFramesReceivedOK =
++ cnt[RTL8365MB_MIB_ifInMulticastPkts];
++ mac_stats->BroadcastFramesReceivedOK =
++ cnt[RTL8365MB_MIB_ifInBroadcastPkts];
++}
++
++static void rtl8365mb_get_ctrl_stats(struct dsa_switch *ds, int port,
++ struct ethtool_eth_ctrl_stats *ctrl_stats)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb_mib_counter *mib;
++ struct rtl8365mb *mb;
++
++ mb = smi->chip_data;
++ mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3ControlInUnknownOpcodes];
++
++ mutex_lock(&mb->mib_lock);
++ rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
++ &ctrl_stats->UnsupportedOpcodesReceived);
++ mutex_unlock(&mb->mib_lock);
++}
++
++static void rtl8365mb_stats_update(struct realtek_smi *smi, int port)
++{
++ u64 cnt[RTL8365MB_MIB_END] = {
++ [RTL8365MB_MIB_ifOutOctets] = 1,
++ [RTL8365MB_MIB_ifOutUcastPkts] = 1,
++ [RTL8365MB_MIB_ifOutMulticastPkts] = 1,
++ [RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
++ [RTL8365MB_MIB_ifOutDiscards] = 1,
++ [RTL8365MB_MIB_ifInOctets] = 1,
++ [RTL8365MB_MIB_ifInUcastPkts] = 1,
++ [RTL8365MB_MIB_ifInMulticastPkts] = 1,
++ [RTL8365MB_MIB_ifInBroadcastPkts] = 1,
++ [RTL8365MB_MIB_etherStatsDropEvents] = 1,
++ [RTL8365MB_MIB_etherStatsCollisions] = 1,
++ [RTL8365MB_MIB_etherStatsFragments] = 1,
++ [RTL8365MB_MIB_etherStatsJabbers] = 1,
++ [RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
++ [RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
++ };
++ struct rtl8365mb *mb = smi->chip_data;
++ struct rtnl_link_stats64 *stats;
++ int ret;
++ int i;
++
++ stats = &mb->ports[port].stats;
++
++ mutex_lock(&mb->mib_lock);
++ for (i = 0; i < RTL8365MB_MIB_END; i++) {
++ struct rtl8365mb_mib_counter *c = &rtl8365mb_mib_counters[i];
++
++ /* Only fetch required MIB counters (marked = 1 above) */
++ if (!cnt[i])
++ continue;
++
++ ret = rtl8365mb_mib_counter_read(smi, port, c->offset,
++ c->length, &cnt[i]);
++ if (ret)
++ break;
++ }
++ mutex_unlock(&mb->mib_lock);
++
++ /* Don't update statistics if there was an error reading the counters */
++ if (ret)
++ return;
++
++ spin_lock(&mb->ports[port].stats_lock);
++
++ stats->rx_packets = cnt[RTL8365MB_MIB_ifInUcastPkts] +
++ cnt[RTL8365MB_MIB_ifInMulticastPkts] +
++ cnt[RTL8365MB_MIB_ifInBroadcastPkts] -
++ cnt[RTL8365MB_MIB_ifOutDiscards];
++
++ stats->tx_packets = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
++ cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
++ cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
++
++ /* if{In,Out}Octets includes FCS - remove it */
++ stats->rx_bytes = cnt[RTL8365MB_MIB_ifInOctets] - 4 * stats->rx_packets;
++ stats->tx_bytes =
++ cnt[RTL8365MB_MIB_ifOutOctets] - 4 * stats->tx_packets;
++
++ stats->rx_dropped = cnt[RTL8365MB_MIB_etherStatsDropEvents];
++ stats->tx_dropped = cnt[RTL8365MB_MIB_ifOutDiscards];
++
++ stats->multicast = cnt[RTL8365MB_MIB_ifInMulticastPkts];
++ stats->collisions = cnt[RTL8365MB_MIB_etherStatsCollisions];
++
++ stats->rx_length_errors = cnt[RTL8365MB_MIB_etherStatsFragments] +
++ cnt[RTL8365MB_MIB_etherStatsJabbers];
++ stats->rx_crc_errors = cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
++ stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors;
++
++ stats->tx_aborted_errors = cnt[RTL8365MB_MIB_ifOutDiscards];
++ stats->tx_window_errors = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
++ stats->tx_errors = stats->tx_aborted_errors + stats->tx_window_errors;
++
++ spin_unlock(&mb->ports[port].stats_lock);
++}
++
++static void rtl8365mb_stats_poll(struct work_struct *work)
++{
++ struct rtl8365mb_port *p = container_of(to_delayed_work(work),
++ struct rtl8365mb_port,
++ mib_work);
++ struct realtek_smi *smi = p->smi;
++
++ rtl8365mb_stats_update(smi, p->index);
++
++ schedule_delayed_work(&p->mib_work, RTL8365MB_STATS_INTERVAL_JIFFIES);
++}
++
++static void rtl8365mb_get_stats64(struct dsa_switch *ds, int port,
++ struct rtnl_link_stats64 *s)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb_port *p;
++ struct rtl8365mb *mb;
++
++ mb = smi->chip_data;
++ p = &mb->ports[port];
++
++ spin_lock(&p->stats_lock);
++ memcpy(s, &p->stats, sizeof(*s));
++ spin_unlock(&p->stats_lock);
++}
++
++static void rtl8365mb_stats_setup(struct realtek_smi *smi)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++ int i;
++
++ /* Per-chip global mutex to protect MIB counter access, since doing
++ * so requires accessing a series of registers in a particular order.
++ */
++ mutex_init(&mb->mib_lock);
++
++ for (i = 0; i < smi->num_ports; i++) {
++ struct rtl8365mb_port *p = &mb->ports[i];
++
++ if (dsa_is_unused_port(smi->ds, i))
++ continue;
++
++ /* Per-port spinlock to protect the stats64 data */
++ spin_lock_init(&p->stats_lock);
++
++ /* This work polls the MIB counters and keeps the stats64 data
++ * up-to-date.
++ */
++ INIT_DELAYED_WORK(&p->mib_work, rtl8365mb_stats_poll);
++ }
++}
++
++static void rtl8365mb_stats_teardown(struct realtek_smi *smi)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++ int i;
++
++ for (i = 0; i < smi->num_ports; i++) {
++ struct rtl8365mb_port *p = &mb->ports[i];
++
++ if (dsa_is_unused_port(smi->ds, i))
++ continue;
++
++ cancel_delayed_work_sync(&p->mib_work);
++ }
++}
++
++static int rtl8365mb_get_and_clear_status_reg(struct realtek_smi *smi, u32 reg,
++ u32 *val)
++{
++ int ret;
++
++ ret = regmap_read(smi->map, reg, val);
++ if (ret)
++ return ret;
++
++ return regmap_write(smi->map, reg, *val);
++}
++
++static irqreturn_t rtl8365mb_irq(int irq, void *data)
++{
++ struct realtek_smi *smi = data;
++ unsigned long line_changes = 0;
++ struct rtl8365mb *mb;
++ u32 stat;
++ int line;
++ int ret;
++
++ mb = smi->chip_data;
++
++ ret = rtl8365mb_get_and_clear_status_reg(smi, RTL8365MB_INTR_STATUS_REG,
++ &stat);
++ if (ret)
++ goto out_error;
++
++ if (stat & RTL8365MB_INTR_LINK_CHANGE_MASK) {
++ u32 linkdown_ind;
++ u32 linkup_ind;
++ u32 val;
++
++ ret = rtl8365mb_get_and_clear_status_reg(
++ smi, RTL8365MB_PORT_LINKUP_IND_REG, &val);
++ if (ret)
++ goto out_error;
++
++ linkup_ind = FIELD_GET(RTL8365MB_PORT_LINKUP_IND_MASK, val);
++
++ ret = rtl8365mb_get_and_clear_status_reg(
++ smi, RTL8365MB_PORT_LINKDOWN_IND_REG, &val);
++ if (ret)
++ goto out_error;
++
++ linkdown_ind = FIELD_GET(RTL8365MB_PORT_LINKDOWN_IND_MASK, val);
++
++ line_changes = (linkup_ind | linkdown_ind) & mb->port_mask;
++ }
++
++ if (!line_changes)
++ goto out_none;
++
++ for_each_set_bit(line, &line_changes, smi->num_ports) {
++ int child_irq = irq_find_mapping(smi->irqdomain, line);
++
++ handle_nested_irq(child_irq);
++ }
++
++ return IRQ_HANDLED;
++
++out_error:
++ dev_err(smi->dev, "failed to read interrupt status: %d\n", ret);
++
++out_none:
++ return IRQ_NONE;
++}
++
++static struct irq_chip rtl8365mb_irq_chip = {
++ .name = "rtl8365mb",
++ /* The hardware doesn't support masking IRQs on a per-port basis */
++};
++
++static int rtl8365mb_irq_map(struct irq_domain *domain, unsigned int irq,
++ irq_hw_number_t hwirq)
++{
++ irq_set_chip_data(irq, domain->host_data);
++ irq_set_chip_and_handler(irq, &rtl8365mb_irq_chip, handle_simple_irq);
++ irq_set_nested_thread(irq, 1);
++ irq_set_noprobe(irq);
++
++ return 0;
++}
++
++static void rtl8365mb_irq_unmap(struct irq_domain *d, unsigned int irq)
++{
++ irq_set_nested_thread(irq, 0);
++ irq_set_chip_and_handler(irq, NULL, NULL);
++ irq_set_chip_data(irq, NULL);
++}
++
++static const struct irq_domain_ops rtl8365mb_irqdomain_ops = {
++ .map = rtl8365mb_irq_map,
++ .unmap = rtl8365mb_irq_unmap,
++ .xlate = irq_domain_xlate_onecell,
++};
++
++static int rtl8365mb_set_irq_enable(struct realtek_smi *smi, bool enable)
++{
++ return regmap_update_bits(smi->map, RTL8365MB_INTR_CTRL_REG,
++ RTL8365MB_INTR_LINK_CHANGE_MASK,
++ FIELD_PREP(RTL8365MB_INTR_LINK_CHANGE_MASK,
++ enable ? 1 : 0));
++}
++
++static int rtl8365mb_irq_enable(struct realtek_smi *smi)
++{
++ return rtl8365mb_set_irq_enable(smi, true);
++}
++
++static int rtl8365mb_irq_disable(struct realtek_smi *smi)
++{
++ return rtl8365mb_set_irq_enable(smi, false);
++}
++
++static int rtl8365mb_irq_setup(struct realtek_smi *smi)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++ struct device_node *intc;
++ u32 irq_trig;
++ int virq;
++ int irq;
++ u32 val;
++ int ret;
++ int i;
++
++ intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
++ if (!intc) {
++ dev_err(smi->dev, "missing child interrupt-controller node\n");
++ return -EINVAL;
++ }
++
++ /* rtl8365mb IRQs cascade off this one */
++ irq = of_irq_get(intc, 0);
++ if (irq <= 0) {
++ if (irq != -EPROBE_DEFER)
++ dev_err(smi->dev, "failed to get parent irq: %d\n",
++ irq);
++ ret = irq ? irq : -EINVAL;
++ goto out_put_node;
++ }
++
++ smi->irqdomain = irq_domain_add_linear(intc, smi->num_ports,
++ &rtl8365mb_irqdomain_ops, smi);
++ if (!smi->irqdomain) {
++ dev_err(smi->dev, "failed to add irq domain\n");
++ ret = -ENOMEM;
++ goto out_put_node;
++ }
++
++ for (i = 0; i < smi->num_ports; i++) {
++ virq = irq_create_mapping(smi->irqdomain, i);
++ if (!virq) {
++ dev_err(smi->dev,
++ "failed to create irq domain mapping\n");
++ ret = -EINVAL;
++ goto out_remove_irqdomain;
++ }
++
++ irq_set_parent(virq, irq);
++ }
++
++ /* Configure chip interrupt signal polarity */
++ irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
++ switch (irq_trig) {
++ case IRQF_TRIGGER_RISING:
++ case IRQF_TRIGGER_HIGH:
++ val = RTL8365MB_INTR_POLARITY_HIGH;
++ break;
++ case IRQF_TRIGGER_FALLING:
++ case IRQF_TRIGGER_LOW:
++ val = RTL8365MB_INTR_POLARITY_LOW;
++ break;
++ default:
++ dev_err(smi->dev, "unsupported irq trigger type %u\n",
++ irq_trig);
++ ret = -EINVAL;
++ goto out_remove_irqdomain;
++ }
++
++ ret = regmap_update_bits(smi->map, RTL8365MB_INTR_POLARITY_REG,
++ RTL8365MB_INTR_POLARITY_MASK,
++ FIELD_PREP(RTL8365MB_INTR_POLARITY_MASK, val));
++ if (ret)
++ goto out_remove_irqdomain;
++
++ /* Disable the interrupt in case the chip has it enabled on reset */
++ ret = rtl8365mb_irq_disable(smi);
++ if (ret)
++ goto out_remove_irqdomain;
++
++ /* Clear the interrupt status register */
++ ret = regmap_write(smi->map, RTL8365MB_INTR_STATUS_REG,
++ RTL8365MB_INTR_ALL_MASK);
++ if (ret)
++ goto out_remove_irqdomain;
++
++ ret = request_threaded_irq(irq, NULL, rtl8365mb_irq, IRQF_ONESHOT,
++ "rtl8365mb", smi);
++ if (ret) {
++ dev_err(smi->dev, "failed to request irq: %d\n", ret);
++ goto out_remove_irqdomain;
++ }
++
++ /* Store the irq so that we know to free it during teardown */
++ mb->irq = irq;
++
++ ret = rtl8365mb_irq_enable(smi);
++ if (ret)
++ goto out_free_irq;
++
++ of_node_put(intc);
++
++ return 0;
++
++out_free_irq:
++ free_irq(mb->irq, smi);
++ mb->irq = 0;
++
++out_remove_irqdomain:
++ for (i = 0; i < smi->num_ports; i++) {
++ virq = irq_find_mapping(smi->irqdomain, i);
++ irq_dispose_mapping(virq);
++ }
++
++ irq_domain_remove(smi->irqdomain);
++ smi->irqdomain = NULL;
++
++out_put_node:
++ of_node_put(intc);
++
++ return ret;
++}
++
++static void rtl8365mb_irq_teardown(struct realtek_smi *smi)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++ int virq;
++ int i;
++
++ if (mb->irq) {
++ free_irq(mb->irq, smi);
++ mb->irq = 0;
++ }
++
++ if (smi->irqdomain) {
++ for (i = 0; i < smi->num_ports; i++) {
++ virq = irq_find_mapping(smi->irqdomain, i);
++ irq_dispose_mapping(virq);
++ }
++
++ irq_domain_remove(smi->irqdomain);
++ smi->irqdomain = NULL;
++ }
++}
++
++static int rtl8365mb_cpu_config(struct realtek_smi *smi)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++ struct rtl8365mb_cpu *cpu = &mb->cpu;
++ u32 val;
++ int ret;
++
++ ret = regmap_update_bits(smi->map, RTL8365MB_CPU_PORT_MASK_REG,
++ RTL8365MB_CPU_PORT_MASK_MASK,
++ FIELD_PREP(RTL8365MB_CPU_PORT_MASK_MASK,
++ cpu->mask));
++ if (ret)
++ return ret;
++
++ val = FIELD_PREP(RTL8365MB_CPU_CTRL_EN_MASK, cpu->enable ? 1 : 0) |
++ FIELD_PREP(RTL8365MB_CPU_CTRL_INSERTMODE_MASK, cpu->insert) |
++ FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_POSITION_MASK, cpu->position) |
++ FIELD_PREP(RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK, cpu->rx_length) |
++ FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK, cpu->format) |
++ FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_MASK, cpu->trap_port) |
++ FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK,
++ cpu->trap_port >> 3);
++ ret = regmap_write(smi->map, RTL8365MB_CPU_CTRL_REG, val);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static int rtl8365mb_switch_init(struct realtek_smi *smi)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++ int ret;
++ int i;
++
++ /* Do any chip-specific init jam before getting to the common stuff */
++ if (mb->jam_table) {
++ for (i = 0; i < mb->jam_size; i++) {
++ ret = regmap_write(smi->map, mb->jam_table[i].reg,
++ mb->jam_table[i].val);
++ if (ret)
++ return ret;
++ }
++ }
++
++ /* Common init jam */
++ for (i = 0; i < ARRAY_SIZE(rtl8365mb_init_jam_common); i++) {
++ ret = regmap_write(smi->map, rtl8365mb_init_jam_common[i].reg,
++ rtl8365mb_init_jam_common[i].val);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
++static int rtl8365mb_reset_chip(struct realtek_smi *smi)
++{
++ u32 val;
++
++ realtek_smi_write_reg_noack(smi, RTL8365MB_CHIP_RESET_REG,
++ FIELD_PREP(RTL8365MB_CHIP_RESET_HW_MASK,
++ 1));
++
++ /* Realtek documentation says the chip needs 1 second to reset. Sleep
++ * for 100 ms before accessing any registers to prevent ACK timeouts.
++ */
++ msleep(100);
++ return regmap_read_poll_timeout(smi->map, RTL8365MB_CHIP_RESET_REG, val,
++ !(val & RTL8365MB_CHIP_RESET_HW_MASK),
++ 20000, 1e6);
++}
++
++static int rtl8365mb_setup(struct dsa_switch *ds)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8365mb *mb;
++ int ret;
++ int i;
++
++ mb = smi->chip_data;
++
++ ret = rtl8365mb_reset_chip(smi);
++ if (ret) {
++ dev_err(smi->dev, "failed to reset chip: %d\n", ret);
++ goto out_error;
++ }
++
++ /* Configure switch to vendor-defined initial state */
++ ret = rtl8365mb_switch_init(smi);
++ if (ret) {
++ dev_err(smi->dev, "failed to initialize switch: %d\n", ret);
++ goto out_error;
++ }
++
++ /* Set up cascading IRQs */
++ ret = rtl8365mb_irq_setup(smi);
++ if (ret == -EPROBE_DEFER)
++ return ret;
++ else if (ret)
++ dev_info(smi->dev, "no interrupt support\n");
++
++ /* Configure CPU tagging */
++ ret = rtl8365mb_cpu_config(smi);
++ if (ret)
++ goto out_teardown_irq;
++
++ /* Configure ports */
++ for (i = 0; i < smi->num_ports; i++) {
++ struct rtl8365mb_port *p = &mb->ports[i];
++
++ if (dsa_is_unused_port(smi->ds, i))
++ continue;
++
++ /* Set up per-port private data */
++ p->smi = smi;
++ p->index = i;
++
++ /* Forward only to the CPU */
++ ret = rtl8365mb_port_set_isolation(smi, i, BIT(smi->cpu_port));
++ if (ret)
++ goto out_teardown_irq;
++
++ /* Disable learning */
++ ret = rtl8365mb_port_set_learning(smi, i, false);
++ if (ret)
++ goto out_teardown_irq;
++
++ /* Set the initial STP state of all ports to DISABLED, otherwise
++ * ports will still forward frames to the CPU despite being
++ * administratively down by default.
++ */
++ rtl8365mb_port_stp_state_set(smi->ds, i, BR_STATE_DISABLED);
++ }
++
++ /* Set maximum packet length to 1536 bytes */
++ ret = regmap_update_bits(smi->map, RTL8365MB_CFG0_MAX_LEN_REG,
++ RTL8365MB_CFG0_MAX_LEN_MASK,
++ FIELD_PREP(RTL8365MB_CFG0_MAX_LEN_MASK, 1536));
++ if (ret)
++ goto out_teardown_irq;
++
++ ret = realtek_smi_setup_mdio(smi);
++ if (ret) {
++ dev_err(smi->dev, "could not set up MDIO bus\n");
++ goto out_teardown_irq;
++ }
++
++ /* Start statistics counter polling */
++ rtl8365mb_stats_setup(smi);
++
++ return 0;
++
++out_teardown_irq:
++ rtl8365mb_irq_teardown(smi);
++
++out_error:
++ return ret;
++}
++
++static void rtl8365mb_teardown(struct dsa_switch *ds)
++{
++ struct realtek_smi *smi = ds->priv;
++
++ rtl8365mb_stats_teardown(smi);
++ rtl8365mb_irq_teardown(smi);
++}
++
++static int rtl8365mb_get_chip_id_and_ver(struct regmap *map, u32 *id, u32 *ver)
++{
++ int ret;
++
++ /* For some reason we have to write a magic value to an arbitrary
++ * register whenever accessing the chip ID/version registers.
++ */
++ ret = regmap_write(map, RTL8365MB_MAGIC_REG, RTL8365MB_MAGIC_VALUE);
++ if (ret)
++ return ret;
++
++ ret = regmap_read(map, RTL8365MB_CHIP_ID_REG, id);
++ if (ret)
++ return ret;
++
++ ret = regmap_read(map, RTL8365MB_CHIP_VER_REG, ver);
++ if (ret)
++ return ret;
++
++ /* Reset magic register */
++ ret = regmap_write(map, RTL8365MB_MAGIC_REG, 0);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static int rtl8365mb_detect(struct realtek_smi *smi)
++{
++ struct rtl8365mb *mb = smi->chip_data;
++ u32 chip_id;
++ u32 chip_ver;
++ int ret;
++
++ ret = rtl8365mb_get_chip_id_and_ver(smi->map, &chip_id, &chip_ver);
++ if (ret) {
++ dev_err(smi->dev, "failed to read chip id and version: %d\n",
++ ret);
++ return ret;
++ }
++
++ switch (chip_id) {
++ case RTL8365MB_CHIP_ID_8365MB_VC:
++ dev_info(smi->dev,
++ "found an RTL8365MB-VC switch (ver=0x%04x)\n",
++ chip_ver);
++
++ smi->cpu_port = RTL8365MB_CPU_PORT_NUM_8365MB_VC;
++ smi->num_ports = smi->cpu_port + 1;
++
++ mb->smi = smi;
++ mb->chip_id = chip_id;
++ mb->chip_ver = chip_ver;
++ mb->port_mask = BIT(smi->num_ports) - 1;
++ mb->learn_limit_max = RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC;
++ mb->jam_table = rtl8365mb_init_jam_8365mb_vc;
++ mb->jam_size = ARRAY_SIZE(rtl8365mb_init_jam_8365mb_vc);
++
++ mb->cpu.enable = 1;
++ mb->cpu.mask = BIT(smi->cpu_port);
++ mb->cpu.trap_port = smi->cpu_port;
++ mb->cpu.insert = RTL8365MB_CPU_INSERT_TO_ALL;
++ mb->cpu.position = RTL8365MB_CPU_POS_AFTER_SA;
++ mb->cpu.rx_length = RTL8365MB_CPU_RXLEN_64BYTES;
++ mb->cpu.format = RTL8365MB_CPU_FORMAT_8BYTES;
++
++ break;
++ default:
++ dev_err(smi->dev,
++ "found an unknown Realtek switch (id=0x%04x, ver=0x%04x)\n",
++ chip_id, chip_ver);
++ return -ENODEV;
++ }
++
++ return 0;
++}
++
++static const struct dsa_switch_ops rtl8365mb_switch_ops = {
++ .get_tag_protocol = rtl8365mb_get_tag_protocol,
++ .setup = rtl8365mb_setup,
++ .teardown = rtl8365mb_teardown,
++ .phylink_validate = rtl8365mb_phylink_validate,
++ .phylink_mac_config = rtl8365mb_phylink_mac_config,
++ .phylink_mac_link_down = rtl8365mb_phylink_mac_link_down,
++ .phylink_mac_link_up = rtl8365mb_phylink_mac_link_up,
++ .port_stp_state_set = rtl8365mb_port_stp_state_set,
++ .get_strings = rtl8365mb_get_strings,
++ .get_ethtool_stats = rtl8365mb_get_ethtool_stats,
++ .get_sset_count = rtl8365mb_get_sset_count,
++ .get_eth_phy_stats = rtl8365mb_get_phy_stats,
++ .get_eth_mac_stats = rtl8365mb_get_mac_stats,
++ .get_eth_ctrl_stats = rtl8365mb_get_ctrl_stats,
++ .get_stats64 = rtl8365mb_get_stats64,
++};
++
++static const struct realtek_smi_ops rtl8365mb_smi_ops = {
++ .detect = rtl8365mb_detect,
++ .phy_read = rtl8365mb_phy_read,
++ .phy_write = rtl8365mb_phy_write,
++};
++
++const struct realtek_smi_variant rtl8365mb_variant = {
++ .ds_ops = &rtl8365mb_switch_ops,
++ .ops = &rtl8365mb_smi_ops,
++ .clk_delay = 10,
++ .cmd_read = 0xb9,
++ .cmd_write = 0xb8,
++ .chip_data_sz = sizeof(struct rtl8365mb),
++};
++EXPORT_SYMBOL_GPL(rtl8365mb_variant);
+diff --git a/drivers/net/dsa/realtek/rtl8366.c b/drivers/net/dsa/realtek/rtl8366.c
+new file mode 100644
+index 0000000000000..bdb8d8d348807
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366.c
+@@ -0,0 +1,448 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Realtek SMI library helpers for the RTL8366x variants
++ * RTL8366RB and RTL8366S
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
++ * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
++ * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
++ */
++#include <linux/if_bridge.h>
++#include <net/dsa.h>
++
++#include "realtek-smi-core.h"
++
++int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used)
++{
++ int ret;
++ int i;
++
++ *used = 0;
++ for (i = 0; i < smi->num_ports; i++) {
++ int index = 0;
++
++ ret = smi->ops->get_mc_index(smi, i, &index);
++ if (ret)
++ return ret;
++
++ if (mc_index == index) {
++ *used = 1;
++ break;
++ }
++ }
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_mc_is_used);
++
++/**
++ * rtl8366_obtain_mc() - retrieve or allocate a VLAN member configuration
++ * @smi: the Realtek SMI device instance
++ * @vid: the VLAN ID to look up or allocate
++ * @vlanmc: pointer to a member config, filled in with a valid entry on
++ *	    success
++ * @return: index of the new or existing member config, or a negative error
++ *	    number
++ */
++static int rtl8366_obtain_mc(struct realtek_smi *smi, int vid,
++ struct rtl8366_vlan_mc *vlanmc)
++{
++ struct rtl8366_vlan_4k vlan4k;
++ int ret;
++ int i;
++
++ /* Try to find an existing member config entry for this VID */
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ if (vid == vlanmc->vid)
++ return i;
++ }
++
++ /* We have no MC entry for this VID, try to find an empty one */
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ if (vlanmc->vid == 0 && vlanmc->member == 0) {
++ /* Update the entry from the 4K table */
++ ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++ if (ret) {
++ dev_err(smi->dev, "error looking for 4K VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ vlanmc->vid = vid;
++ vlanmc->member = vlan4k.member;
++ vlanmc->untag = vlan4k.untag;
++ vlanmc->fid = vlan4k.fid;
++ ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++
++ dev_dbg(smi->dev, "created new MC at index %d for VID %d\n",
++ i, vid);
++ return i;
++ }
++ }
++
++ /* MC table is full, try to find an unused entry and replace it */
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ int used;
++
++ ret = rtl8366_mc_is_used(smi, i, &used);
++ if (ret)
++ return ret;
++
++ if (!used) {
++ /* Update the entry from the 4K table */
++ ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++ if (ret)
++ return ret;
++
++ vlanmc->vid = vid;
++ vlanmc->member = vlan4k.member;
++ vlanmc->untag = vlan4k.untag;
++ vlanmc->fid = vlan4k.fid;
++ ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
++ if (ret) {
++ dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
++ i, vid);
++ return ret;
++ }
++ dev_dbg(smi->dev, "recycled MC at index %i for VID %d\n",
++ i, vid);
++ return i;
++ }
++ }
++
++ dev_err(smi->dev, "all VLAN member configurations are in use\n");
++ return -ENOSPC;
++}
++
++int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
++ u32 untag, u32 fid)
++{
++ struct rtl8366_vlan_mc vlanmc;
++ struct rtl8366_vlan_4k vlan4k;
++ int mc;
++ int ret;
++
++ if (!smi->ops->is_vlan_valid(smi, vid))
++ return -EINVAL;
++
++ dev_dbg(smi->dev,
++ "setting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++ vid, member, untag);
++
++ /* Update the 4K table */
++ ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
++ if (ret)
++ return ret;
++
++ vlan4k.member |= member;
++ vlan4k.untag |= untag;
++ vlan4k.fid = fid;
++ ret = smi->ops->set_vlan_4k(smi, &vlan4k);
++ if (ret)
++ return ret;
++
++ dev_dbg(smi->dev,
++ "resulting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
++ vid, vlan4k.member, vlan4k.untag);
++
++ /* Find or allocate a member config for this VID */
++ ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
++ if (ret < 0)
++ return ret;
++ mc = ret;
++
++ /* Update the MC entry */
++ vlanmc.member |= member;
++ vlanmc.untag |= untag;
++ vlanmc.fid = fid;
++
++ /* Commit updates to the MC entry */
++ ret = smi->ops->set_vlan_mc(smi, mc, &vlanmc);
++ if (ret)
++ dev_err(smi->dev, "failed to commit changes to VLAN MC index %d for VID %d\n",
++ mc, vid);
++ else
++ dev_dbg(smi->dev,
++ "resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
++ vid, vlanmc.member, vlanmc.untag);
++
++ return ret;
++}
++EXPORT_SYMBOL_GPL(rtl8366_set_vlan);
++
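++/* A minimal usage sketch (hypothetical caller, not part of the driver):
++ * to join user port 0 untagged and CPU port 5 tagged to VID 100, a
++ * subdriver could do:
++ *
++ *	err = rtl8366_set_vlan(smi, 100, BIT(0) | BIT(5), BIT(0), 0);
++ *
++ * Both masks are OR'd into the existing 4K table entry, so repeated
++ * calls accumulate ports rather than replace the member set.
++ */
++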
++int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
++ unsigned int vid)
++{
++ struct rtl8366_vlan_mc vlanmc;
++ int mc;
++ int ret;
++
++ if (!smi->ops->is_vlan_valid(smi, vid))
++ return -EINVAL;
++
++ /* Find or allocate a member config for this VID */
++ ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
++ if (ret < 0)
++ return ret;
++ mc = ret;
++
++ ret = smi->ops->set_mc_index(smi, port, mc);
++ if (ret) {
++ dev_err(smi->dev, "set PVID: failed to set MC index %d for port %d\n",
++ mc, port);
++ return ret;
++ }
++
++ dev_dbg(smi->dev, "set PVID: the PVID for port %d set to %d using existing MC index %d\n",
++ port, vid, mc);
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_set_pvid);
++
++int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable)
++{
++ int ret;
++
++ /* To enable 4k VLAN, ordinary VLAN must be enabled first,
++ * but if we disable 4k VLAN it is fine to leave ordinary
++ * VLAN enabled.
++ */
++ if (enable) {
++ /* Make sure VLAN is ON */
++ ret = smi->ops->enable_vlan(smi, true);
++ if (ret)
++ return ret;
++
++ smi->vlan_enabled = true;
++ }
++
++ ret = smi->ops->enable_vlan4k(smi, enable);
++ if (ret)
++ return ret;
++
++ smi->vlan4k_enabled = enable;
++ return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_enable_vlan4k);
++
++int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable)
++{
++ int ret;
++
++ ret = smi->ops->enable_vlan(smi, enable);
++ if (ret)
++ return ret;
++
++ smi->vlan_enabled = enable;
++
++ /* If we turn VLAN off, make sure that we turn off
++ * 4k VLAN as well, if that happened to be on.
++ */
++ if (!enable) {
++ smi->vlan4k_enabled = false;
++ ret = smi->ops->enable_vlan4k(smi, false);
++ }
++
++ return ret;
++}
++EXPORT_SYMBOL_GPL(rtl8366_enable_vlan);
++
++int rtl8366_reset_vlan(struct realtek_smi *smi)
++{
++ struct rtl8366_vlan_mc vlanmc;
++ int ret;
++ int i;
++
++ rtl8366_enable_vlan(smi, false);
++ rtl8366_enable_vlan4k(smi, false);
++
++ /* Clear the 16 VLAN member configurations */
++ vlanmc.vid = 0;
++ vlanmc.priority = 0;
++ vlanmc.member = 0;
++ vlanmc.untag = 0;
++ vlanmc.fid = 0;
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_reset_vlan);
++
++int rtl8366_vlan_add(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan,
++ struct netlink_ext_ack *extack)
++{
++ bool untagged = !!(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED);
++ bool pvid = !!(vlan->flags & BRIDGE_VLAN_INFO_PVID);
++ struct realtek_smi *smi = ds->priv;
++ u32 member = 0;
++ u32 untag = 0;
++ int ret;
++
++ if (!smi->ops->is_vlan_valid(smi, vlan->vid)) {
++ NL_SET_ERR_MSG_MOD(extack, "VLAN ID not valid");
++ return -EINVAL;
++ }
++
++ /* Enable VLAN in the hardware
++ * FIXME: what's with this 4k business?
++ * Just rtl8366_enable_vlan() seems inconclusive.
++ */
++ ret = rtl8366_enable_vlan4k(smi, true);
++ if (ret) {
++ NL_SET_ERR_MSG_MOD(extack, "Failed to enable VLAN 4K");
++ return ret;
++ }
++
++ dev_dbg(smi->dev, "add VLAN %d on port %d, %s, %s\n",
++ vlan->vid, port, untagged ? "untagged" : "tagged",
++ pvid ? "PVID" : "no PVID");
++
++ member |= BIT(port);
++
++ if (untagged)
++ untag |= BIT(port);
++
++ ret = rtl8366_set_vlan(smi, vlan->vid, member, untag, 0);
++ if (ret) {
++ dev_err(smi->dev, "failed to set up VLAN %04x", vlan->vid);
++ return ret;
++ }
++
++ if (!pvid)
++ return 0;
++
++ ret = rtl8366_set_pvid(smi, port, vlan->vid);
++ if (ret) {
++ dev_err(smi->dev, "failed to set PVID on port %d to VLAN %04x",
++ port, vlan->vid);
++ return ret;
++ }
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_vlan_add);
++
++int rtl8366_vlan_del(struct dsa_switch *ds, int port,
++ const struct switchdev_obj_port_vlan *vlan)
++{
++ struct realtek_smi *smi = ds->priv;
++ int ret, i;
++
++ dev_dbg(smi->dev, "del VLAN %d on port %d\n", vlan->vid, port);
++
++ for (i = 0; i < smi->num_vlan_mc; i++) {
++ struct rtl8366_vlan_mc vlanmc;
++
++ ret = smi->ops->get_vlan_mc(smi, i, &vlanmc);
++ if (ret)
++ return ret;
++
++ if (vlan->vid == vlanmc.vid) {
++ /* Remove this port from the VLAN */
++ vlanmc.member &= ~BIT(port);
++ vlanmc.untag &= ~BIT(port);
++ /*
++ * If no ports are members of this VLAN
++ * anymore then clear the whole member
++ * config so it can be reused.
++ */
++ if (!vlanmc.member) {
++ vlanmc.vid = 0;
++ vlanmc.priority = 0;
++ vlanmc.fid = 0;
++ }
++ ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
++ if (ret) {
++ dev_err(smi->dev,
++ "failed to remove VLAN %04x\n",
++ vlan->vid);
++ return ret;
++ }
++ break;
++ }
++ }
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(rtl8366_vlan_del);
++
++void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
++ uint8_t *data)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8366_mib_counter *mib;
++ int i;
++
++ if (port >= smi->num_ports)
++ return;
++
++ for (i = 0; i < smi->num_mib_counters; i++) {
++ mib = &smi->mib_counters[i];
++ strncpy(data + i * ETH_GSTRING_LEN,
++ mib->name, ETH_GSTRING_LEN);
++ }
++}
++EXPORT_SYMBOL_GPL(rtl8366_get_strings);
++
++int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset)
++{
++ struct realtek_smi *smi = ds->priv;
++
++ /* We only support SS_STATS */
++ if (sset != ETH_SS_STATS)
++ return 0;
++ if (port >= smi->num_ports)
++ return -EINVAL;
++
++ return smi->num_mib_counters;
++}
++EXPORT_SYMBOL_GPL(rtl8366_get_sset_count);
++
++void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data)
++{
++ struct realtek_smi *smi = ds->priv;
++ int i;
++ int ret;
++
++ if (port >= smi->num_ports)
++ return;
++
++ for (i = 0; i < smi->num_mib_counters; i++) {
++ struct rtl8366_mib_counter *mib;
++ u64 mibvalue = 0;
++
++ mib = &smi->mib_counters[i];
++ ret = smi->ops->get_mib_counter(smi, port, mib, &mibvalue);
++ if (ret) {
++ dev_err(smi->dev, "error reading MIB counter %s\n",
++ mib->name);
++ }
++ data[i] = mibvalue;
++ }
++}
++EXPORT_SYMBOL_GPL(rtl8366_get_ethtool_stats);
+diff --git a/drivers/net/dsa/realtek/rtl8366rb.c b/drivers/net/dsa/realtek/rtl8366rb.c
+new file mode 100644
+index 0000000000000..4f8c06d7ab3a9
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366rb.c
+@@ -0,0 +1,1816 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Realtek SMI subdriver for the Realtek RTL8366RB ethernet switch
++ *
++ * This is a sparsely documented chip; the only viable documentation
++ * seems to be a patched-up code drop from the vendor that appears in
++ * various GPL source trees.
++ *
++ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
++ * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
++ * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
++ * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
++ * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
++ */
++
++#include <linux/bitops.h>
++#include <linux/etherdevice.h>
++#include <linux/if_bridge.h>
++#include <linux/interrupt.h>
++#include <linux/irqdomain.h>
++#include <linux/irqchip/chained_irq.h>
++#include <linux/of_irq.h>
++#include <linux/regmap.h>
++
++#include "realtek-smi-core.h"
++
++#define RTL8366RB_PORT_NUM_CPU 5
++#define RTL8366RB_NUM_PORTS 6
++#define RTL8366RB_PHY_NO_MAX 4
++#define RTL8366RB_PHY_ADDR_MAX 31
++
++/* Switch Global Configuration register */
++#define RTL8366RB_SGCR 0x0000
++#define RTL8366RB_SGCR_EN_BC_STORM_CTRL BIT(0)
++#define RTL8366RB_SGCR_MAX_LENGTH(a) ((a) << 4)
++#define RTL8366RB_SGCR_MAX_LENGTH_MASK RTL8366RB_SGCR_MAX_LENGTH(0x3)
++#define RTL8366RB_SGCR_MAX_LENGTH_1522 RTL8366RB_SGCR_MAX_LENGTH(0x0)
++#define RTL8366RB_SGCR_MAX_LENGTH_1536 RTL8366RB_SGCR_MAX_LENGTH(0x1)
++#define RTL8366RB_SGCR_MAX_LENGTH_1552 RTL8366RB_SGCR_MAX_LENGTH(0x2)
++#define RTL8366RB_SGCR_MAX_LENGTH_16000 RTL8366RB_SGCR_MAX_LENGTH(0x3)
++#define RTL8366RB_SGCR_EN_VLAN BIT(13)
++#define RTL8366RB_SGCR_EN_VLAN_4KTB BIT(14)
++
++/* Port Enable Control register */
++#define RTL8366RB_PECR 0x0001
++
++/* Switch per-port learning disablement register */
++#define RTL8366RB_PORT_LEARNDIS_CTRL 0x0002
++
++/* Security control, actually aging register */
++#define RTL8366RB_SECURITY_CTRL 0x0003
++
++#define RTL8366RB_SSCR2 0x0004
++#define RTL8366RB_SSCR2_DROP_UNKNOWN_DA BIT(0)
++
++/* Port Mode Control registers */
++#define RTL8366RB_PMC0 0x0005
++#define RTL8366RB_PMC0_SPI BIT(0)
++#define RTL8366RB_PMC0_EN_AUTOLOAD BIT(1)
++#define RTL8366RB_PMC0_PROBE BIT(2)
++#define RTL8366RB_PMC0_DIS_BISR BIT(3)
++#define RTL8366RB_PMC0_ADCTEST BIT(4)
++#define RTL8366RB_PMC0_SRAM_DIAG BIT(5)
++#define RTL8366RB_PMC0_EN_SCAN BIT(6)
++#define RTL8366RB_PMC0_P4_IOMODE_SHIFT 7
++#define RTL8366RB_PMC0_P4_IOMODE_MASK GENMASK(9, 7)
++#define RTL8366RB_PMC0_P5_IOMODE_SHIFT 10
++#define RTL8366RB_PMC0_P5_IOMODE_MASK GENMASK(12, 10)
++#define RTL8366RB_PMC0_SDSMODE_SHIFT 13
++#define RTL8366RB_PMC0_SDSMODE_MASK GENMASK(15, 13)
++#define RTL8366RB_PMC1 0x0006
++
++/* Port Mirror Control Register */
++#define RTL8366RB_PMCR 0x0007
++#define RTL8366RB_PMCR_SOURCE_PORT(a) (a)
++#define RTL8366RB_PMCR_SOURCE_PORT_MASK 0x000f
++#define RTL8366RB_PMCR_MONITOR_PORT(a) ((a) << 4)
++#define RTL8366RB_PMCR_MONITOR_PORT_MASK 0x00f0
++#define RTL8366RB_PMCR_MIRROR_RX BIT(8)
++#define RTL8366RB_PMCR_MIRROR_TX BIT(9)
++#define RTL8366RB_PMCR_MIRROR_SPC BIT(10)
++#define RTL8366RB_PMCR_MIRROR_ISO BIT(11)
++
++/* bits 0..7 = port 0, bits 8..15 = port 1 */
++#define RTL8366RB_PAACR0 0x0010
++/* bits 0..7 = port 2, bits 8..15 = port 3 */
++#define RTL8366RB_PAACR1 0x0011
++/* bits 0..7 = port 4, bits 8..15 = port 5 */
++#define RTL8366RB_PAACR2 0x0012
++#define RTL8366RB_PAACR_SPEED_10M 0
++#define RTL8366RB_PAACR_SPEED_100M 1
++#define RTL8366RB_PAACR_SPEED_1000M 2
++#define RTL8366RB_PAACR_FULL_DUPLEX BIT(2)
++#define RTL8366RB_PAACR_LINK_UP BIT(4)
++#define RTL8366RB_PAACR_TX_PAUSE BIT(5)
++#define RTL8366RB_PAACR_RX_PAUSE BIT(6)
++#define RTL8366RB_PAACR_AN BIT(7)
++
++#define RTL8366RB_PAACR_CPU_PORT (RTL8366RB_PAACR_SPEED_1000M | \
++ RTL8366RB_PAACR_FULL_DUPLEX | \
++ RTL8366RB_PAACR_LINK_UP | \
++ RTL8366RB_PAACR_TX_PAUSE | \
++ RTL8366RB_PAACR_RX_PAUSE)
++
++/* bits 0..7 = port 0, bits 8..15 = port 1 */
++#define RTL8366RB_PSTAT0 0x0014
++/* bits 0..7 = port 2, bits 8..15 = port 3 */
++#define RTL8366RB_PSTAT1 0x0015
++/* bits 0..7 = port 4, bits 8..15 = port 5 */
++#define RTL8366RB_PSTAT2 0x0016
++
++#define RTL8366RB_POWER_SAVING_REG 0x0021
++
++/* Spanning tree status (STP) control, two bits per port per FID */
++#define RTL8366RB_STP_STATE_BASE 0x0050 /* 0x0050..0x0057 */
++#define RTL8366RB_STP_STATE_DISABLED 0x0
++#define RTL8366RB_STP_STATE_BLOCKING 0x1
++#define RTL8366RB_STP_STATE_LEARNING 0x2
++#define RTL8366RB_STP_STATE_FORWARDING 0x3
++#define RTL8366RB_STP_MASK GENMASK(1, 0)
++#define RTL8366RB_STP_STATE(port, state) \
++ ((state) << ((port) * 2))
++#define RTL8366RB_STP_STATE_MASK(port) \
++ RTL8366RB_STP_STATE((port), RTL8366RB_STP_MASK)
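++/* Worked example (illustrative): putting port 3 into forwarding state
++ * gives RTL8366RB_STP_STATE(3, RTL8366RB_STP_STATE_FORWARDING)
++ * = 0x3 << 6 = 0x00c0, with RTL8366RB_STP_STATE_MASK(3) = 0x00c0.
++ */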
++
++/* CPU port control reg */
++#define RTL8368RB_CPU_CTRL_REG 0x0061
++#define RTL8368RB_CPU_PORTS_MSK 0x00FF
++/* Disables inserting custom tag length/type 0x8899 */
++#define RTL8368RB_CPU_NO_TAG BIT(15)
++
++#define RTL8366RB_SMAR0 0x0070 /* bits 0..15 */
++#define RTL8366RB_SMAR1 0x0071 /* bits 16..31 */
++#define RTL8366RB_SMAR2 0x0072 /* bits 32..47 */
++
++#define RTL8366RB_RESET_CTRL_REG 0x0100
++#define RTL8366RB_CHIP_CTRL_RESET_HW BIT(0)
++#define RTL8366RB_CHIP_CTRL_RESET_SW BIT(1)
++
++#define RTL8366RB_CHIP_ID_REG 0x0509
++#define RTL8366RB_CHIP_ID_8366 0x5937
++#define RTL8366RB_CHIP_VERSION_CTRL_REG 0x050A
++#define RTL8366RB_CHIP_VERSION_MASK 0xf
++
++/* PHY registers control */
++#define RTL8366RB_PHY_ACCESS_CTRL_REG 0x8000
++#define RTL8366RB_PHY_CTRL_READ BIT(0)
++#define RTL8366RB_PHY_CTRL_WRITE 0
++#define RTL8366RB_PHY_ACCESS_BUSY_REG 0x8001
++#define RTL8366RB_PHY_INT_BUSY BIT(0)
++#define RTL8366RB_PHY_EXT_BUSY BIT(4)
++#define RTL8366RB_PHY_ACCESS_DATA_REG 0x8002
++#define RTL8366RB_PHY_EXT_CTRL_REG 0x8010
++#define RTL8366RB_PHY_EXT_WRDATA_REG 0x8011
++#define RTL8366RB_PHY_EXT_RDDATA_REG 0x8012
++
++#define RTL8366RB_PHY_REG_MASK 0x1f
++#define RTL8366RB_PHY_PAGE_OFFSET 5
++#define RTL8366RB_PHY_PAGE_MASK (0xf << 5)
++#define RTL8366RB_PHY_NO_OFFSET 9
++#define RTL8366RB_PHY_NO_MASK (0x1f << 9)
++
++/* VLAN Ingress Control Register 1, one bit per port.
++ * bits 0..5 make the switch drop ingress frames without a VID,
++ * such as untagged or priority-tagged frames, for the respective
++ * port.
++ * bits 6..11 make the switch drop ingress frames carrying a C-tag
++ * with VID != 0 for the respective port.
++ */
++#define RTL8366RB_VLAN_INGRESS_CTRL1_REG 0x037E
++#define RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port) (BIT((port)) | BIT((port) + 6))
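++/* e.g. RTL8366RB_VLAN_INGRESS_CTRL1_DROP(2) = BIT(2) | BIT(8) = 0x0104 */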
++
++/* VLAN Ingress Control Register 2, one bit per port.
++ * bits 0..5 make the switch drop all ingress frames with a VLAN
++ * classification that does not include the port in its member
++ * set.
++ */
++#define RTL8366RB_VLAN_INGRESS_CTRL2_REG 0x037f
++
++/* LED control registers */
++#define RTL8366RB_LED_BLINKRATE_REG 0x0430
++#define RTL8366RB_LED_BLINKRATE_MASK 0x0007
++#define RTL8366RB_LED_BLINKRATE_28MS 0x0000
++#define RTL8366RB_LED_BLINKRATE_56MS 0x0001
++#define RTL8366RB_LED_BLINKRATE_84MS 0x0002
++#define RTL8366RB_LED_BLINKRATE_111MS 0x0003
++#define RTL8366RB_LED_BLINKRATE_222MS 0x0004
++#define RTL8366RB_LED_BLINKRATE_446MS 0x0005
++
++#define RTL8366RB_LED_CTRL_REG 0x0431
++#define RTL8366RB_LED_OFF 0x0
++#define RTL8366RB_LED_DUP_COL 0x1
++#define RTL8366RB_LED_LINK_ACT 0x2
++#define RTL8366RB_LED_SPD1000 0x3
++#define RTL8366RB_LED_SPD100 0x4
++#define RTL8366RB_LED_SPD10 0x5
++#define RTL8366RB_LED_SPD1000_ACT 0x6
++#define RTL8366RB_LED_SPD100_ACT 0x7
++#define RTL8366RB_LED_SPD10_ACT 0x8
++#define RTL8366RB_LED_SPD100_10_ACT 0x9
++#define RTL8366RB_LED_FIBER 0xa
++#define RTL8366RB_LED_AN_FAULT 0xb
++#define RTL8366RB_LED_LINK_RX 0xc
++#define RTL8366RB_LED_LINK_TX 0xd
++#define RTL8366RB_LED_MASTER 0xe
++#define RTL8366RB_LED_FORCE 0xf
++#define RTL8366RB_LED_0_1_CTRL_REG 0x0432
++#define RTL8366RB_LED_1_OFFSET 6
++#define RTL8366RB_LED_2_3_CTRL_REG 0x0433
++#define RTL8366RB_LED_3_OFFSET 6
++
++#define RTL8366RB_MIB_COUNT 33
++#define RTL8366RB_GLOBAL_MIB_COUNT 1
++#define RTL8366RB_MIB_COUNTER_PORT_OFFSET 0x0050
++#define RTL8366RB_MIB_COUNTER_BASE 0x1000
++#define RTL8366RB_MIB_CTRL_REG 0x13F0
++#define RTL8366RB_MIB_CTRL_USER_MASK 0x0FFC
++#define RTL8366RB_MIB_CTRL_BUSY_MASK BIT(0)
++#define RTL8366RB_MIB_CTRL_RESET_MASK BIT(1)
++#define RTL8366RB_MIB_CTRL_PORT_RESET(_p) BIT(2 + (_p))
++#define RTL8366RB_MIB_CTRL_GLOBAL_RESET BIT(11)
++
++#define RTL8366RB_PORT_VLAN_CTRL_BASE 0x0063
++#define RTL8366RB_PORT_VLAN_CTRL_REG(_p) \
++ (RTL8366RB_PORT_VLAN_CTRL_BASE + (_p) / 4)
++#define RTL8366RB_PORT_VLAN_CTRL_MASK 0xf
++#define RTL8366RB_PORT_VLAN_CTRL_SHIFT(_p) (4 * ((_p) % 4))
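++/* e.g. port 5's 4-bit member config index lives in register 0x0064
++ * (0x0063 + 5 / 4) at bit shift 4 (4 * (5 % 4))
++ */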
++
++#define RTL8366RB_VLAN_TABLE_READ_BASE 0x018C
++#define RTL8366RB_VLAN_TABLE_WRITE_BASE 0x0185
++
++#define RTL8366RB_TABLE_ACCESS_CTRL_REG 0x0180
++#define RTL8366RB_TABLE_VLAN_READ_CTRL 0x0E01
++#define RTL8366RB_TABLE_VLAN_WRITE_CTRL 0x0F01
++
++#define RTL8366RB_VLAN_MC_BASE(_x) (0x0020 + (_x) * 3)
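++/* Each member config occupies three consecutive configuration words,
++ * e.g. index 5 maps to 0x002f..0x0031 (0x0020 + 5 * 3)
++ */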
++
++#define RTL8366RB_PORT_LINK_STATUS_BASE 0x0014
++#define RTL8366RB_PORT_STATUS_SPEED_MASK 0x0003
++#define RTL8366RB_PORT_STATUS_DUPLEX_MASK 0x0004
++#define RTL8366RB_PORT_STATUS_LINK_MASK 0x0010
++#define RTL8366RB_PORT_STATUS_TXPAUSE_MASK 0x0020
++#define RTL8366RB_PORT_STATUS_RXPAUSE_MASK 0x0040
++#define RTL8366RB_PORT_STATUS_AN_MASK 0x0080
++
++#define RTL8366RB_NUM_VLANS 16
++#define RTL8366RB_NUM_LEDGROUPS 4
++#define RTL8366RB_NUM_VIDS 4096
++#define RTL8366RB_PRIORITYMAX 7
++#define RTL8366RB_NUM_FIDS 8
++#define RTL8366RB_FIDMAX 7
++
++#define RTL8366RB_PORT_1 BIT(0) /* In userspace port 0 */
++#define RTL8366RB_PORT_2 BIT(1) /* In userspace port 1 */
++#define RTL8366RB_PORT_3 BIT(2) /* In userspace port 2 */
++#define RTL8366RB_PORT_4 BIT(3) /* In userspace port 3 */
++#define RTL8366RB_PORT_5 BIT(4) /* In userspace port 4 */
++
++#define RTL8366RB_PORT_CPU BIT(5) /* CPU port */
++
++#define RTL8366RB_PORT_ALL (RTL8366RB_PORT_1 | \
++ RTL8366RB_PORT_2 | \
++ RTL8366RB_PORT_3 | \
++ RTL8366RB_PORT_4 | \
++ RTL8366RB_PORT_5 | \
++ RTL8366RB_PORT_CPU)
++
++#define RTL8366RB_PORT_ALL_BUT_CPU (RTL8366RB_PORT_1 | \
++ RTL8366RB_PORT_2 | \
++ RTL8366RB_PORT_3 | \
++ RTL8366RB_PORT_4 | \
++ RTL8366RB_PORT_5)
++
++#define RTL8366RB_PORT_ALL_EXTERNAL (RTL8366RB_PORT_1 | \
++ RTL8366RB_PORT_2 | \
++ RTL8366RB_PORT_3 | \
++ RTL8366RB_PORT_4)
++
++#define RTL8366RB_PORT_ALL_INTERNAL RTL8366RB_PORT_CPU
++
++/* First configuration word per member config, VID and prio */
++#define RTL8366RB_VLAN_VID_MASK 0xfff
++#define RTL8366RB_VLAN_PRIORITY_SHIFT 12
++#define RTL8366RB_VLAN_PRIORITY_MASK 0x7
++/* Second configuration word per member config, member and untagged */
++#define RTL8366RB_VLAN_UNTAG_SHIFT 8
++#define RTL8366RB_VLAN_UNTAG_MASK 0xff
++#define RTL8366RB_VLAN_MEMBER_MASK 0xff
++/* Third config word per member config, STAG currently unused */
++#define RTL8366RB_VLAN_STAG_MBR_MASK 0xff
++#define RTL8366RB_VLAN_STAG_MBR_SHIFT 8
++#define RTL8366RB_VLAN_STAG_IDX_MASK 0x7
++#define RTL8366RB_VLAN_STAG_IDX_SHIFT 5
++#define RTL8366RB_VLAN_FID_MASK 0x7
++
++/* Port ingress bandwidth control */
++#define RTL8366RB_IB_BASE 0x0200
++#define RTL8366RB_IB_REG(pnum) (RTL8366RB_IB_BASE + (pnum))
++#define RTL8366RB_IB_BDTH_MASK 0x3fff
++#define RTL8366RB_IB_PREIFG BIT(14)
++
++/* Port egress bandwidth control */
++#define RTL8366RB_EB_BASE 0x02d1
++#define RTL8366RB_EB_REG(pnum) (RTL8366RB_EB_BASE + (pnum))
++#define RTL8366RB_EB_BDTH_MASK 0x3fff
++#define RTL8366RB_EB_PREIFG_REG 0x02f8
++#define RTL8366RB_EB_PREIFG BIT(9)
++
++#define RTL8366RB_BDTH_SW_MAX 1048512 /* 1048576? */
++#define RTL8366RB_BDTH_UNIT 64
++#define RTL8366RB_BDTH_REG_DEFAULT 16383
++
++/* QOS */
++#define RTL8366RB_QOS BIT(15)
++/* Include/Exclude Preamble and IFG (20 bytes). 0:Exclude, 1:Include. */
++#define RTL8366RB_QOS_DEFAULT_PREIFG 1
++
++/* Interrupt handling */
++#define RTL8366RB_INTERRUPT_CONTROL_REG 0x0440
++#define RTL8366RB_INTERRUPT_POLARITY BIT(0)
++#define RTL8366RB_P4_RGMII_LED BIT(2)
++#define RTL8366RB_INTERRUPT_MASK_REG 0x0441
++#define RTL8366RB_INTERRUPT_LINK_CHGALL GENMASK(11, 0)
++#define RTL8366RB_INTERRUPT_ACLEXCEED BIT(8)
++#define RTL8366RB_INTERRUPT_STORMEXCEED BIT(9)
++#define RTL8366RB_INTERRUPT_P4_FIBER BIT(12)
++#define RTL8366RB_INTERRUPT_P4_UTP BIT(13)
++#define RTL8366RB_INTERRUPT_VALID (RTL8366RB_INTERRUPT_LINK_CHGALL | \
++ RTL8366RB_INTERRUPT_ACLEXCEED | \
++ RTL8366RB_INTERRUPT_STORMEXCEED | \
++ RTL8366RB_INTERRUPT_P4_FIBER | \
++ RTL8366RB_INTERRUPT_P4_UTP)
++#define RTL8366RB_INTERRUPT_STATUS_REG 0x0442
++#define RTL8366RB_NUM_INTERRUPT 14 /* 0..13 */
++
++/* Port isolation registers */
++#define RTL8366RB_PORT_ISO_BASE 0x0F08
++#define RTL8366RB_PORT_ISO(pnum) (RTL8366RB_PORT_ISO_BASE + (pnum))
++#define RTL8366RB_PORT_ISO_EN BIT(0)
++#define RTL8366RB_PORT_ISO_PORTS_MASK GENMASK(7, 1)
++#define RTL8366RB_PORT_ISO_PORTS(pmask) ((pmask) << 1)
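++/* Worked example: allowing a port to reach only CPU port 5 means
++ * writing RTL8366RB_PORT_ISO_PORTS(BIT(5)) | RTL8366RB_PORT_ISO_EN,
++ * i.e. (0x20 << 1) | 0x1 = 0x41, as rtl8366rb_setup() does below
++ */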
++
++/* bits 0..5 enable force when cleared */
++#define RTL8366RB_MAC_FORCE_CTRL_REG 0x0F11
++
++#define RTL8366RB_OAM_PARSER_REG 0x0F14
++#define RTL8366RB_OAM_MULTIPLEXER_REG 0x0F15
++
++#define RTL8366RB_GREEN_FEATURE_REG 0x0F51
++#define RTL8366RB_GREEN_FEATURE_MSK 0x0007
++#define RTL8366RB_GREEN_FEATURE_TX BIT(0)
++#define RTL8366RB_GREEN_FEATURE_RX BIT(2)
++
++/**
++ * struct rtl8366rb - RTL8366RB-specific data
++ * @max_mtu: per-port max MTU setting
++ * @pvid_enabled: whether PVID is enabled for the respective port
++ */
++struct rtl8366rb {
++ unsigned int max_mtu[RTL8366RB_NUM_PORTS];
++ bool pvid_enabled[RTL8366RB_NUM_PORTS];
++};
++
++static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = {
++ { 0, 0, 4, "IfInOctets" },
++ { 0, 4, 4, "EtherStatsOctets" },
++ { 0, 8, 2, "EtherStatsUnderSizePkts" },
++ { 0, 10, 2, "EtherFragments" },
++ { 0, 12, 2, "EtherStatsPkts64Octets" },
++ { 0, 14, 2, "EtherStatsPkts65to127Octets" },
++ { 0, 16, 2, "EtherStatsPkts128to255Octets" },
++ { 0, 18, 2, "EtherStatsPkts256to511Octets" },
++ { 0, 20, 2, "EtherStatsPkts512to1023Octets" },
++ { 0, 22, 2, "EtherStatsPkts1024to1518Octets" },
++ { 0, 24, 2, "EtherOversizeStats" },
++ { 0, 26, 2, "EtherStatsJabbers" },
++ { 0, 28, 2, "IfInUcastPkts" },
++ { 0, 30, 2, "EtherStatsMulticastPkts" },
++ { 0, 32, 2, "EtherStatsBroadcastPkts" },
++ { 0, 34, 2, "EtherStatsDropEvents" },
++ { 0, 36, 2, "Dot3StatsFCSErrors" },
++ { 0, 38, 2, "Dot3StatsSymbolErrors" },
++ { 0, 40, 2, "Dot3InPauseFrames" },
++ { 0, 42, 2, "Dot3ControlInUnknownOpcodes" },
++ { 0, 44, 4, "IfOutOctets" },
++ { 0, 48, 2, "Dot3StatsSingleCollisionFrames" },
++ { 0, 50, 2, "Dot3StatMultipleCollisionFrames" },
++ { 0, 52, 2, "Dot3sDeferredTransmissions" },
++ { 0, 54, 2, "Dot3StatsLateCollisions" },
++ { 0, 56, 2, "EtherStatsCollisions" },
++ { 0, 58, 2, "Dot3StatsExcessiveCollisions" },
++ { 0, 60, 2, "Dot3OutPauseFrames" },
++ { 0, 62, 2, "Dot1dBasePortDelayExceededDiscards" },
++ { 0, 64, 2, "Dot1dTpPortInDiscards" },
++ { 0, 66, 2, "IfOutUcastPkts" },
++ { 0, 68, 2, "IfOutMulticastPkts" },
++ { 0, 70, 2, "IfOutBroadcastPkts" },
++};
++
++static int rtl8366rb_get_mib_counter(struct realtek_smi *smi,
++ int port,
++ struct rtl8366_mib_counter *mib,
++ u64 *mibvalue)
++{
++ u32 addr, val;
++ int ret;
++ int i;
++
++ addr = RTL8366RB_MIB_COUNTER_BASE +
++ RTL8366RB_MIB_COUNTER_PORT_OFFSET * (port) +
++ mib->offset;
++
++	/* Write the accessed counter address first; the ASIC will then
++	 * prepare the 64-bit counter to be retrieved
++ */
++ ret = regmap_write(smi->map, addr, 0); /* Write whatever */
++ if (ret)
++ return ret;
++
++ /* Read MIB control register */
++ ret = regmap_read(smi->map, RTL8366RB_MIB_CTRL_REG, &val);
++ if (ret)
++ return -EIO;
++
++ if (val & RTL8366RB_MIB_CTRL_BUSY_MASK)
++ return -EBUSY;
++
++ if (val & RTL8366RB_MIB_CTRL_RESET_MASK)
++ return -EIO;
++
++	/* Read each individual MIB 16 bits at a time */
++ *mibvalue = 0;
++ for (i = mib->length; i > 0; i--) {
++ ret = regmap_read(smi->map, addr + (i - 1), &val);
++ if (ret)
++ return ret;
++ *mibvalue = (*mibvalue << 16) | (val & 0xFFFF);
++ }
++ return 0;
++}
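++
++/* Worked example of the address computation above (illustrative): for
++ * port 2 and the 2-word "Dot3StatsFCSErrors" counter at offset 36,
++ * addr = 0x1000 + 0x0050 * 2 + 36 = 0x10c4, and the two 16-bit reads
++ * are folded into the 64-bit result.
++ */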
++
++static u32 rtl8366rb_get_irqmask(struct irq_data *d)
++{
++ int line = irqd_to_hwirq(d);
++ u32 val;
++
++ /* For line interrupts we combine link down in bits
++ * 6..11 with link up in bits 0..5 into one interrupt.
++ */
++ if (line < 12)
++ val = BIT(line) | BIT(line + 6);
++ else
++ val = BIT(line);
++ return val;
++}
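++
++/* e.g. hwirq line 3 (link change on port 3) masks/unmasks both the
++ * link-up and link-down status bits: BIT(3) | BIT(9) = 0x0208
++ */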
++
++static void rtl8366rb_mask_irq(struct irq_data *d)
++{
++ struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
++ int ret;
++
++ ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
++ rtl8366rb_get_irqmask(d), 0);
++ if (ret)
++ dev_err(smi->dev, "could not mask IRQ\n");
++}
++
++static void rtl8366rb_unmask_irq(struct irq_data *d)
++{
++ struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
++ int ret;
++
++ ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
++ rtl8366rb_get_irqmask(d),
++ rtl8366rb_get_irqmask(d));
++ if (ret)
++ dev_err(smi->dev, "could not unmask IRQ\n");
++}
++
++static irqreturn_t rtl8366rb_irq(int irq, void *data)
++{
++ struct realtek_smi *smi = data;
++ u32 stat;
++ int ret;
++
++ /* This clears the IRQ status register */
++ ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
++ &stat);
++ if (ret) {
++ dev_err(smi->dev, "can't read interrupt status\n");
++ return IRQ_NONE;
++ }
++ stat &= RTL8366RB_INTERRUPT_VALID;
++ if (!stat)
++ return IRQ_NONE;
++ while (stat) {
++ int line = __ffs(stat);
++ int child_irq;
++
++ stat &= ~BIT(line);
++ /* For line interrupts we combine link down in bits
++ * 6..11 with link up in bits 0..5 into one interrupt.
++ */
++ if (line < 12 && line > 5)
++ line -= 5;
++ child_irq = irq_find_mapping(smi->irqdomain, line);
++ handle_nested_irq(child_irq);
++ }
++ return IRQ_HANDLED;
++}
++
++static struct irq_chip rtl8366rb_irq_chip = {
++ .name = "RTL8366RB",
++ .irq_mask = rtl8366rb_mask_irq,
++ .irq_unmask = rtl8366rb_unmask_irq,
++};
++
++static int rtl8366rb_irq_map(struct irq_domain *domain, unsigned int irq,
++ irq_hw_number_t hwirq)
++{
++ irq_set_chip_data(irq, domain->host_data);
++ irq_set_chip_and_handler(irq, &rtl8366rb_irq_chip, handle_simple_irq);
++ irq_set_nested_thread(irq, 1);
++ irq_set_noprobe(irq);
++
++ return 0;
++}
++
++static void rtl8366rb_irq_unmap(struct irq_domain *d, unsigned int irq)
++{
++ irq_set_nested_thread(irq, 0);
++ irq_set_chip_and_handler(irq, NULL, NULL);
++ irq_set_chip_data(irq, NULL);
++}
++
++static const struct irq_domain_ops rtl8366rb_irqdomain_ops = {
++ .map = rtl8366rb_irq_map,
++ .unmap = rtl8366rb_irq_unmap,
++ .xlate = irq_domain_xlate_onecell,
++};
++
++static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
++{
++ struct device_node *intc;
++ unsigned long irq_trig;
++ int irq;
++ int ret;
++ u32 val;
++ int i;
++
++ intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
++ if (!intc) {
++ dev_err(smi->dev, "missing child interrupt-controller node\n");
++ return -EINVAL;
++ }
++	/* RTL8366RB IRQs cascade off this one */
++ irq = of_irq_get(intc, 0);
++ if (irq <= 0) {
++ dev_err(smi->dev, "failed to get parent IRQ\n");
++ ret = irq ? irq : -EINVAL;
++ goto out_put_node;
++ }
++
++ /* This clears the IRQ status register */
++ ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
++ &val);
++ if (ret) {
++ dev_err(smi->dev, "can't read interrupt status\n");
++ goto out_put_node;
++ }
++
++ /* Fetch IRQ edge information from the descriptor */
++ irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
++ switch (irq_trig) {
++ case IRQF_TRIGGER_RISING:
++ case IRQF_TRIGGER_HIGH:
++ dev_info(smi->dev, "active high/rising IRQ\n");
++ val = 0;
++ break;
++ case IRQF_TRIGGER_FALLING:
++ case IRQF_TRIGGER_LOW:
++ dev_info(smi->dev, "active low/falling IRQ\n");
++ val = RTL8366RB_INTERRUPT_POLARITY;
++ break;
++ }
++ ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_CONTROL_REG,
++ RTL8366RB_INTERRUPT_POLARITY,
++ val);
++ if (ret) {
++ dev_err(smi->dev, "could not configure IRQ polarity\n");
++ goto out_put_node;
++ }
++
++ ret = devm_request_threaded_irq(smi->dev, irq, NULL,
++ rtl8366rb_irq, IRQF_ONESHOT,
++ "RTL8366RB", smi);
++ if (ret) {
++ dev_err(smi->dev, "unable to request irq: %d\n", ret);
++ goto out_put_node;
++ }
++ smi->irqdomain = irq_domain_add_linear(intc,
++ RTL8366RB_NUM_INTERRUPT,
++ &rtl8366rb_irqdomain_ops,
++ smi);
++ if (!smi->irqdomain) {
++ dev_err(smi->dev, "failed to create IRQ domain\n");
++ ret = -EINVAL;
++ goto out_put_node;
++ }
++ for (i = 0; i < smi->num_ports; i++)
++ irq_set_parent(irq_create_mapping(smi->irqdomain, i), irq);
++
++out_put_node:
++ of_node_put(intc);
++ return ret;
++}
++
++static int rtl8366rb_set_addr(struct realtek_smi *smi)
++{
++ u8 addr[ETH_ALEN];
++ u16 val;
++ int ret;
++
++ eth_random_addr(addr);
++
++ dev_info(smi->dev, "set MAC: %02X:%02X:%02X:%02X:%02X:%02X\n",
++ addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
++ val = addr[0] << 8 | addr[1];
++ ret = regmap_write(smi->map, RTL8366RB_SMAR0, val);
++ if (ret)
++ return ret;
++ val = addr[2] << 8 | addr[3];
++ ret = regmap_write(smi->map, RTL8366RB_SMAR1, val);
++ if (ret)
++ return ret;
++ val = addr[4] << 8 | addr[5];
++ ret = regmap_write(smi->map, RTL8366RB_SMAR2, val);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++/* Found in a vendor driver */
++
++/* Struct for handling the jam tables' entries */
++struct rtl8366rb_jam_tbl_entry {
++ u16 reg;
++ u16 val;
++};
++
++/* For the "version 0" early silicon; appears in most source releases */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_0[] = {
++ {0x000B, 0x0001}, {0x03A6, 0x0100}, {0x03A7, 0x0001}, {0x02D1, 0x3FFF},
++ {0x02D2, 0x3FFF}, {0x02D3, 0x3FFF}, {0x02D4, 0x3FFF}, {0x02D5, 0x3FFF},
++ {0x02D6, 0x3FFF}, {0x02D7, 0x3FFF}, {0x02D8, 0x3FFF}, {0x022B, 0x0688},
++ {0x022C, 0x0FAC}, {0x03D0, 0x4688}, {0x03D1, 0x01F5}, {0x0000, 0x0830},
++ {0x02F9, 0x0200}, {0x02F7, 0x7FFF}, {0x02F8, 0x03FF}, {0x0080, 0x03E8},
++ {0x0081, 0x00CE}, {0x0082, 0x00DA}, {0x0083, 0x0230}, {0xBE0F, 0x2000},
++ {0x0231, 0x422A}, {0x0232, 0x422A}, {0x0233, 0x422A}, {0x0234, 0x422A},
++ {0x0235, 0x422A}, {0x0236, 0x422A}, {0x0237, 0x422A}, {0x0238, 0x422A},
++ {0x0239, 0x422A}, {0x023A, 0x422A}, {0x023B, 0x422A}, {0x023C, 0x422A},
++ {0x023D, 0x422A}, {0x023E, 0x422A}, {0x023F, 0x422A}, {0x0240, 0x422A},
++ {0x0241, 0x422A}, {0x0242, 0x422A}, {0x0243, 0x422A}, {0x0244, 0x422A},
++ {0x0245, 0x422A}, {0x0246, 0x422A}, {0x0247, 0x422A}, {0x0248, 0x422A},
++ {0x0249, 0x0146}, {0x024A, 0x0146}, {0x024B, 0x0146}, {0xBE03, 0xC961},
++ {0x024D, 0x0146}, {0x024E, 0x0146}, {0x024F, 0x0146}, {0x0250, 0x0146},
++ {0xBE64, 0x0226}, {0x0252, 0x0146}, {0x0253, 0x0146}, {0x024C, 0x0146},
++ {0x0251, 0x0146}, {0x0254, 0x0146}, {0xBE62, 0x3FD0}, {0x0084, 0x0320},
++ {0x0255, 0x0146}, {0x0256, 0x0146}, {0x0257, 0x0146}, {0x0258, 0x0146},
++ {0x0259, 0x0146}, {0x025A, 0x0146}, {0x025B, 0x0146}, {0x025C, 0x0146},
++ {0x025D, 0x0146}, {0x025E, 0x0146}, {0x025F, 0x0146}, {0x0260, 0x0146},
++ {0x0261, 0xA23F}, {0x0262, 0x0294}, {0x0263, 0xA23F}, {0x0264, 0x0294},
++ {0x0265, 0xA23F}, {0x0266, 0x0294}, {0x0267, 0xA23F}, {0x0268, 0x0294},
++ {0x0269, 0xA23F}, {0x026A, 0x0294}, {0x026B, 0xA23F}, {0x026C, 0x0294},
++ {0x026D, 0xA23F}, {0x026E, 0x0294}, {0x026F, 0xA23F}, {0x0270, 0x0294},
++ {0x02F5, 0x0048}, {0xBE09, 0x0E00}, {0xBE1E, 0x0FA0}, {0xBE14, 0x8448},
++ {0xBE15, 0x1007}, {0xBE4A, 0xA284}, {0xC454, 0x3F0B}, {0xC474, 0x3F0B},
++ {0xBE48, 0x3672}, {0xBE4B, 0x17A7}, {0xBE4C, 0x0B15}, {0xBE52, 0x0EDD},
++ {0xBE49, 0x8C00}, {0xBE5B, 0x785C}, {0xBE5C, 0x785C}, {0xBE5D, 0x785C},
++ {0xBE61, 0x368A}, {0xBE63, 0x9B84}, {0xC456, 0xCC13}, {0xC476, 0xCC13},
++ {0xBE65, 0x307D}, {0xBE6D, 0x0005}, {0xBE6E, 0xE120}, {0xBE2E, 0x7BAF},
++};
++
++/* This v1 init sequence is from Belkin F5D8235 U-Boot release */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_1[] = {
++ {0x0000, 0x0830}, {0x0001, 0x8000}, {0x0400, 0x8130}, {0xBE78, 0x3C3C},
++ {0x0431, 0x5432}, {0xBE37, 0x0CE4}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
++ {0xC44C, 0x1585}, {0xC44C, 0x1185}, {0xC44C, 0x1585}, {0xC46C, 0x1585},
++ {0xC46C, 0x1185}, {0xC46C, 0x1585}, {0xC451, 0x2135}, {0xC471, 0x2135},
++ {0xBE10, 0x8140}, {0xBE15, 0x0007}, {0xBE6E, 0xE120}, {0xBE69, 0xD20F},
++ {0xBE6B, 0x0320}, {0xBE24, 0xB000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF20},
++ {0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800}, {0xBE24, 0x0000},
++ {0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60}, {0xBE21, 0x0140},
++ {0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000}, {0xBE2E, 0x7B7A},
++ {0xBE36, 0x0CE4}, {0x02F5, 0x0048}, {0xBE77, 0x2940}, {0x000A, 0x83E0},
++ {0xBE79, 0x3C3C}, {0xBE00, 0x1340},
++};
++
++/* This v2 init sequence is from Belkin F5D8235 U-Boot release */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_2[] = {
++ {0x0450, 0x0000}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
++ {0xC44F, 0x6250}, {0xC46F, 0x6250}, {0xC456, 0x0C14}, {0xC476, 0x0C14},
++ {0xC44C, 0x1C85}, {0xC44C, 0x1885}, {0xC44C, 0x1C85}, {0xC46C, 0x1C85},
++ {0xC46C, 0x1885}, {0xC46C, 0x1C85}, {0xC44C, 0x0885}, {0xC44C, 0x0881},
++ {0xC44C, 0x0885}, {0xC46C, 0x0885}, {0xC46C, 0x0881}, {0xC46C, 0x0885},
++ {0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
++ {0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6E, 0x0320},
++ {0xBE77, 0x2940}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
++ {0x8000, 0x0001}, {0xBE15, 0x1007}, {0x8000, 0x0000}, {0xBE15, 0x1007},
++ {0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160}, {0xBE10, 0x8140},
++ {0xBE00, 0x1340}, {0x0F51, 0x0010},
++};
++
++/* Appears in a DD-WRT code dump */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_3[] = {
++ {0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
++ {0x0F51, 0x0017}, {0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
++ {0xC456, 0x0C14}, {0xC476, 0x0C14}, {0xC454, 0x3F8B}, {0xC474, 0x3F8B},
++ {0xC450, 0x2071}, {0xC470, 0x2071}, {0xC451, 0x226B}, {0xC471, 0x226B},
++ {0xC452, 0xA293}, {0xC472, 0xA293}, {0xC44C, 0x1585}, {0xC44C, 0x1185},
++ {0xC44C, 0x1585}, {0xC46C, 0x1585}, {0xC46C, 0x1185}, {0xC46C, 0x1585},
++ {0xC44C, 0x0185}, {0xC44C, 0x0181}, {0xC44C, 0x0185}, {0xC46C, 0x0185},
++ {0xC46C, 0x0181}, {0xC46C, 0x0185}, {0xBE24, 0xB000}, {0xBE23, 0xFF51},
++ {0xBE22, 0xDF20}, {0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800},
++ {0xBE24, 0x0000}, {0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60},
++ {0xBE21, 0x0140}, {0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000},
++ {0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
++ {0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6B, 0x0320},
++ {0xBE77, 0x2800}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
++ {0x8000, 0x0001}, {0xBE10, 0x8140}, {0x8000, 0x0000}, {0xBE10, 0x8140},
++ {0xBE15, 0x1007}, {0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160},
++ {0xBE10, 0x8140}, {0xBE00, 0x1340}, {0x0450, 0x0000}, {0x0401, 0x0000},
++};
++
++/* Belkin F5D8235 v1, "belkin,f5d8235-v1" */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_f5d8235[] = {
++ {0x0242, 0x02BF}, {0x0245, 0x02BF}, {0x0248, 0x02BF}, {0x024B, 0x02BF},
++ {0x024E, 0x02BF}, {0x0251, 0x02BF}, {0x0254, 0x0A3F}, {0x0256, 0x0A3F},
++ {0x0258, 0x0A3F}, {0x025A, 0x0A3F}, {0x025C, 0x0A3F}, {0x025E, 0x0A3F},
++ {0x0263, 0x007C}, {0x0100, 0x0004}, {0xBE5B, 0x3500}, {0x800E, 0x200F},
++ {0xBE1D, 0x0F00}, {0x8001, 0x5011}, {0x800A, 0xA2F4}, {0x800B, 0x17A3},
++ {0xBE4B, 0x17A3}, {0xBE41, 0x5011}, {0xBE17, 0x2100}, {0x8000, 0x8304},
++ {0xBE40, 0x8304}, {0xBE4A, 0xA2F4}, {0x800C, 0xA8D5}, {0x8014, 0x5500},
++ {0x8015, 0x0004}, {0xBE4C, 0xA8D5}, {0xBE59, 0x0008}, {0xBE09, 0x0E00},
++ {0xBE36, 0x1036}, {0xBE37, 0x1036}, {0x800D, 0x00FF}, {0xBE4D, 0x00FF},
++};
++
++/* DGN3500, "netgear,dgn3500", "netgear,dgn3500b" */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_dgn3500[] = {
++ {0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0F51, 0x0017},
++ {0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0}, {0x0450, 0x0000},
++ {0x0401, 0x0000}, {0x0431, 0x0960},
++};
++
++/* This jam table activates "green ethernet", a low-power mode which
++ * is claimed to detect the cable length and use no more power than
++ * necessary; the ports should enter power-saving mode 10 seconds
++ * after a cable is disconnected. The table seems to always be the
++ * same.
++ */
++static const struct rtl8366rb_jam_tbl_entry rtl8366rb_green_jam[] = {
++ {0xBE78, 0x323C}, {0xBE77, 0x5000}, {0xBE2E, 0x7BA7},
++ {0xBE59, 0x3459}, {0xBE5A, 0x745A}, {0xBE5B, 0x785C},
++ {0xBE5C, 0x785C}, {0xBE6E, 0xE120}, {0xBE79, 0x323C},
++};
++
++/* Jam one of the tables above into the proper registers */
++static int rtl8366rb_jam_table(const struct rtl8366rb_jam_tbl_entry *jam_table,
++ int jam_size, struct realtek_smi *smi,
++ bool write_dbg)
++{
++ u32 val;
++ int ret;
++ int i;
++
++ for (i = 0; i < jam_size; i++) {
++ if ((jam_table[i].reg & 0xBE00) == 0xBE00) {
++ ret = regmap_read(smi->map,
++ RTL8366RB_PHY_ACCESS_BUSY_REG,
++ &val);
++ if (ret)
++ return ret;
++ if (!(val & RTL8366RB_PHY_INT_BUSY)) {
++ ret = regmap_write(smi->map,
++ RTL8366RB_PHY_ACCESS_CTRL_REG,
++ RTL8366RB_PHY_CTRL_WRITE);
++ if (ret)
++ return ret;
++ }
++ }
++ if (write_dbg)
++ dev_dbg(smi->dev, "jam %04x into register %04x\n",
++ jam_table[i].val,
++ jam_table[i].reg);
++ ret = regmap_write(smi->map,
++ jam_table[i].reg,
++ jam_table[i].val);
++ if (ret)
++ return ret;
++ }
++ return 0;
++}
++
++static int rtl8366rb_setup(struct dsa_switch *ds)
++{
++ struct realtek_smi *smi = ds->priv;
++ const struct rtl8366rb_jam_tbl_entry *jam_table;
++ struct rtl8366rb *rb;
++ u32 chip_ver = 0;
++ u32 chip_id = 0;
++ int jam_size;
++ u32 val;
++ int ret;
++ int i;
++
++ rb = smi->chip_data;
++
++ ret = regmap_read(smi->map, RTL8366RB_CHIP_ID_REG, &chip_id);
++ if (ret) {
++ dev_err(smi->dev, "unable to read chip id\n");
++ return ret;
++ }
++
++ switch (chip_id) {
++ case RTL8366RB_CHIP_ID_8366:
++ break;
++ default:
++ dev_err(smi->dev, "unknown chip id (%04x)\n", chip_id);
++ return -ENODEV;
++ }
++
++ ret = regmap_read(smi->map, RTL8366RB_CHIP_VERSION_CTRL_REG,
++ &chip_ver);
++ if (ret) {
++ dev_err(smi->dev, "unable to read chip version\n");
++ return ret;
++ }
++
++ dev_info(smi->dev, "RTL%04x ver %u chip found\n",
++ chip_id, chip_ver & RTL8366RB_CHIP_VERSION_MASK);
++
++ /* Do the init dance using the right jam table */
++ switch (chip_ver) {
++ case 0:
++ jam_table = rtl8366rb_init_jam_ver_0;
++ jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_0);
++ break;
++ case 1:
++ jam_table = rtl8366rb_init_jam_ver_1;
++ jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_1);
++ break;
++ case 2:
++ jam_table = rtl8366rb_init_jam_ver_2;
++ jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_2);
++ break;
++ default:
++ jam_table = rtl8366rb_init_jam_ver_3;
++ jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_3);
++ break;
++ }
++
++ /* Special jam tables for special routers
++ * TODO: are these necessary? Maintainers, please test
++ * without them, using just the off-the-shelf tables.
++ */
++ if (of_machine_is_compatible("belkin,f5d8235-v1")) {
++ jam_table = rtl8366rb_init_jam_f5d8235;
++ jam_size = ARRAY_SIZE(rtl8366rb_init_jam_f5d8235);
++ }
++ if (of_machine_is_compatible("netgear,dgn3500") ||
++ of_machine_is_compatible("netgear,dgn3500b")) {
++ jam_table = rtl8366rb_init_jam_dgn3500;
++ jam_size = ARRAY_SIZE(rtl8366rb_init_jam_dgn3500);
++ }
++
++ ret = rtl8366rb_jam_table(jam_table, jam_size, smi, true);
++ if (ret)
++ return ret;
++
++	/* Isolate all user ports so they can only send packets to themselves and the CPU port */
++ for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
++ ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(i),
++ RTL8366RB_PORT_ISO_PORTS(BIT(RTL8366RB_PORT_NUM_CPU)) |
++ RTL8366RB_PORT_ISO_EN);
++ if (ret)
++ return ret;
++ }
++ /* CPU port can send packets to all ports */
++ ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(RTL8366RB_PORT_NUM_CPU),
++ RTL8366RB_PORT_ISO_PORTS(dsa_user_ports(ds)) |
++ RTL8366RB_PORT_ISO_EN);
++ if (ret)
++ return ret;
++
++ /* Set up the "green ethernet" feature */
++ ret = rtl8366rb_jam_table(rtl8366rb_green_jam,
++ ARRAY_SIZE(rtl8366rb_green_jam), smi, false);
++ if (ret)
++ return ret;
++
++ ret = regmap_write(smi->map,
++ RTL8366RB_GREEN_FEATURE_REG,
++ (chip_ver == 1) ? 0x0007 : 0x0003);
++ if (ret)
++ return ret;
++
++ /* Vendor driver sets 0x240 in registers 0xc and 0xd (undocumented) */
++ ret = regmap_write(smi->map, 0x0c, 0x240);
++ if (ret)
++ return ret;
++ ret = regmap_write(smi->map, 0x0d, 0x240);
++ if (ret)
++ return ret;
++
++ /* Set some random MAC address */
++ ret = rtl8366rb_set_addr(smi);
++ if (ret)
++ return ret;
++
++ /* Enable CPU port with custom DSA tag 8899.
++ *
++	 * If you set RTL8368RB_CPU_NO_TAG (bit 15) in this register,
++ * the custom tag is turned off.
++ */
++ ret = regmap_update_bits(smi->map, RTL8368RB_CPU_CTRL_REG,
++ 0xFFFF,
++ BIT(smi->cpu_port));
++ if (ret)
++ return ret;
++
++ /* Make sure we default-enable the fixed CPU port */
++ ret = regmap_update_bits(smi->map, RTL8366RB_PECR,
++ BIT(smi->cpu_port),
++ 0);
++ if (ret)
++ return ret;
++
++ /* Set maximum packet length to 1536 bytes */
++ ret = regmap_update_bits(smi->map, RTL8366RB_SGCR,
++ RTL8366RB_SGCR_MAX_LENGTH_MASK,
++ RTL8366RB_SGCR_MAX_LENGTH_1536);
++ if (ret)
++ return ret;
++ for (i = 0; i < RTL8366RB_NUM_PORTS; i++)
++ /* layer 2 size, see rtl8366rb_change_mtu() */
++ rb->max_mtu[i] = 1532;
++
++ /* Disable learning for all ports */
++ ret = regmap_write(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
++ RTL8366RB_PORT_ALL);
++ if (ret)
++ return ret;
++
++ /* Enable auto ageing for all ports */
++ ret = regmap_write(smi->map, RTL8366RB_SECURITY_CTRL, 0);
++ if (ret)
++ return ret;
++
++	/* Port 4 setup: this enables Port 4, usually the WAN port.
++	 * The common PHY IO mode is apparently mode 0, which is not what
++	 * the port is initialized to. There is no explanation of the
++	 * IO modes in the Realtek source code; if your WAN port is
++	 * connected to something exotic such as fiber, this might be
++	 * worth experimenting with.
++	 */
++ ret = regmap_update_bits(smi->map, RTL8366RB_PMC0,
++ RTL8366RB_PMC0_P4_IOMODE_MASK,
++ 0 << RTL8366RB_PMC0_P4_IOMODE_SHIFT);
++ if (ret)
++ return ret;
++
++	/* Accept all packets by default; we enable filtering on demand */
++ ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
++ 0);
++ if (ret)
++ return ret;
++ ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
++ 0);
++ if (ret)
++ return ret;
++
++ /* Don't drop packets whose DA has not been learned */
++ ret = regmap_update_bits(smi->map, RTL8366RB_SSCR2,
++ RTL8366RB_SSCR2_DROP_UNKNOWN_DA, 0);
++ if (ret)
++ return ret;
++
++ /* Set blinking, TODO: make this configurable */
++ ret = regmap_update_bits(smi->map, RTL8366RB_LED_BLINKRATE_REG,
++ RTL8366RB_LED_BLINKRATE_MASK,
++ RTL8366RB_LED_BLINKRATE_56MS);
++ if (ret)
++ return ret;
++
++ /* Set up LED activity:
++	 * Each port has 4 LEDs; we configure all ports to the same
++	 * behaviour (no individual config), but each LED can be set
++	 * up separately.
++ */
++ if (smi->leds_disabled) {
++ /* Turn everything off */
++ regmap_update_bits(smi->map,
++ RTL8366RB_LED_0_1_CTRL_REG,
++ 0x0FFF, 0);
++ regmap_update_bits(smi->map,
++ RTL8366RB_LED_2_3_CTRL_REG,
++ 0x0FFF, 0);
++ regmap_update_bits(smi->map,
++ RTL8366RB_INTERRUPT_CONTROL_REG,
++ RTL8366RB_P4_RGMII_LED,
++ 0);
++ val = RTL8366RB_LED_OFF;
++ } else {
++ /* TODO: make this configurable per LED */
++ val = RTL8366RB_LED_FORCE;
++ }
++ for (i = 0; i < 4; i++) {
++ ret = regmap_update_bits(smi->map,
++ RTL8366RB_LED_CTRL_REG,
++ 0xf << (i * 4),
++ val << (i * 4));
++ if (ret)
++ return ret;
++ }
++
++ ret = rtl8366_reset_vlan(smi);
++ if (ret)
++ return ret;
++
++ ret = rtl8366rb_setup_cascaded_irq(smi);
++ if (ret)
++ dev_info(smi->dev, "no interrupt support\n");
++
++ ret = realtek_smi_setup_mdio(smi);
++ if (ret) {
++ dev_info(smi->dev, "could not set up MDIO bus\n");
++ return -ENODEV;
++ }
++
++ return 0;
++}
++
++static enum dsa_tag_protocol rtl8366_get_tag_protocol(struct dsa_switch *ds,
++ int port,
++ enum dsa_tag_protocol mp)
++{
++	/* This switch uses the 4-byte protocol A Realtek DSA tag */
++ return DSA_TAG_PROTO_RTL4_A;
++}
++
++static void
++rtl8366rb_mac_link_up(struct dsa_switch *ds, int port, unsigned int mode,
++ phy_interface_t interface, struct phy_device *phydev,
++ int speed, int duplex, bool tx_pause, bool rx_pause)
++{
++ struct realtek_smi *smi = ds->priv;
++ int ret;
++
++ if (port != smi->cpu_port)
++ return;
++
++ dev_dbg(smi->dev, "MAC link up on CPU port (%d)\n", port);
++
++ /* Force the fixed CPU port into 1Gbit mode, no autonegotiation */
++ ret = regmap_update_bits(smi->map, RTL8366RB_MAC_FORCE_CTRL_REG,
++ BIT(port), BIT(port));
++ if (ret) {
++ dev_err(smi->dev, "failed to force 1Gbit on CPU port\n");
++ return;
++ }
++
++ ret = regmap_update_bits(smi->map, RTL8366RB_PAACR2,
++ 0xFF00U,
++ RTL8366RB_PAACR_CPU_PORT << 8);
++ if (ret) {
++ dev_err(smi->dev, "failed to set PAACR on CPU port\n");
++ return;
++ }
++
++ /* Enable the CPU port */
++ ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++ 0);
++ if (ret) {
++ dev_err(smi->dev, "failed to enable the CPU port\n");
++ return;
++ }
++}
++
++static void
++rtl8366rb_mac_link_down(struct dsa_switch *ds, int port, unsigned int mode,
++ phy_interface_t interface)
++{
++ struct realtek_smi *smi = ds->priv;
++ int ret;
++
++ if (port != smi->cpu_port)
++ return;
++
++ dev_dbg(smi->dev, "MAC link down on CPU port (%d)\n", port);
++
++ /* Disable the CPU port */
++ ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++ BIT(port));
++ if (ret) {
++ dev_err(smi->dev, "failed to disable the CPU port\n");
++ return;
++ }
++}
++
++static void rb8366rb_set_port_led(struct realtek_smi *smi,
++ int port, bool enable)
++{
++ u16 val = enable ? 0x3f : 0;
++ int ret;
++
++ if (smi->leds_disabled)
++ return;
++
++ switch (port) {
++ case 0:
++ ret = regmap_update_bits(smi->map,
++ RTL8366RB_LED_0_1_CTRL_REG,
++ 0x3F, val);
++ break;
++ case 1:
++ ret = regmap_update_bits(smi->map,
++ RTL8366RB_LED_0_1_CTRL_REG,
++ 0x3F << RTL8366RB_LED_1_OFFSET,
++ val << RTL8366RB_LED_1_OFFSET);
++ break;
++ case 2:
++ ret = regmap_update_bits(smi->map,
++ RTL8366RB_LED_2_3_CTRL_REG,
++ 0x3F, val);
++ break;
++ case 3:
++ ret = regmap_update_bits(smi->map,
++ RTL8366RB_LED_2_3_CTRL_REG,
++ 0x3F << RTL8366RB_LED_3_OFFSET,
++ val << RTL8366RB_LED_3_OFFSET);
++ break;
++ case 4:
++ ret = regmap_update_bits(smi->map,
++ RTL8366RB_INTERRUPT_CONTROL_REG,
++ RTL8366RB_P4_RGMII_LED,
++ enable ? RTL8366RB_P4_RGMII_LED : 0);
++ break;
++ default:
++ dev_err(smi->dev, "no LED for port %d\n", port);
++ return;
++ }
++ if (ret)
++ dev_err(smi->dev, "error updating LED on port %d\n", port);
++}
++
++static int
++rtl8366rb_port_enable(struct dsa_switch *ds, int port,
++ struct phy_device *phy)
++{
++ struct realtek_smi *smi = ds->priv;
++ int ret;
++
++ dev_dbg(smi->dev, "enable port %d\n", port);
++ ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++ 0);
++ if (ret)
++ return ret;
++
++ rb8366rb_set_port_led(smi, port, true);
++ return 0;
++}
++
++static void
++rtl8366rb_port_disable(struct dsa_switch *ds, int port)
++{
++ struct realtek_smi *smi = ds->priv;
++ int ret;
++
++ dev_dbg(smi->dev, "disable port %d\n", port);
++ ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
++ BIT(port));
++ if (ret)
++ return;
++
++ rb8366rb_set_port_led(smi, port, false);
++}
++
++static int
++rtl8366rb_port_bridge_join(struct dsa_switch *ds, int port,
++ struct dsa_bridge bridge,
++ bool *tx_fwd_offload)
++{
++ struct realtek_smi *smi = ds->priv;
++ unsigned int port_bitmap = 0;
++ int ret, i;
++
++	/* Loop over all ports other than the current one */
++ for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
++ /* Current port handled last */
++ if (i == port)
++ continue;
++ /* Not on this bridge */
++ if (!dsa_port_offloads_bridge(dsa_to_port(ds, i), &bridge))
++ continue;
++ /* Join this port to each other port on the bridge */
++ ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
++ RTL8366RB_PORT_ISO_PORTS(BIT(port)),
++ RTL8366RB_PORT_ISO_PORTS(BIT(port)));
++ if (ret)
++ dev_err(smi->dev, "failed to join port %d\n", port);
++
++ port_bitmap |= BIT(i);
++ }
++
++ /* Set the bits for the ports we can access */
++ return regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
++ RTL8366RB_PORT_ISO_PORTS(port_bitmap),
++ RTL8366RB_PORT_ISO_PORTS(port_bitmap));
++}
++
++static void
++rtl8366rb_port_bridge_leave(struct dsa_switch *ds, int port,
++ struct dsa_bridge bridge)
++{
++ struct realtek_smi *smi = ds->priv;
++ unsigned int port_bitmap = 0;
++ int ret, i;
++
++	/* Loop over all ports other than this one */
++ for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
++ /* Current port handled last */
++ if (i == port)
++ continue;
++ /* Not on this bridge */
++ if (!dsa_port_offloads_bridge(dsa_to_port(ds, i), &bridge))
++ continue;
++ /* Remove this port from any other port on the bridge */
++ ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
++ RTL8366RB_PORT_ISO_PORTS(BIT(port)), 0);
++ if (ret)
++ dev_err(smi->dev, "failed to leave port %d\n", port);
++
++ port_bitmap |= BIT(i);
++ }
++
++	/* Clear the bits for the ports we cannot access, leave ourselves */
++ regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
++ RTL8366RB_PORT_ISO_PORTS(port_bitmap), 0);
++}
++
++/**
++ * rtl8366rb_drop_untagged() - make the switch drop untagged and C-tagged frames
++ * @smi: SMI state container
++ * @port: the port to drop untagged and C-tagged frames on
++ * @drop: whether to drop or pass untagged and C-tagged frames
++ *
++ * Return: zero for success, a negative number on error.
++ */
++static int rtl8366rb_drop_untagged(struct realtek_smi *smi, int port, bool drop)
++{
++ return regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
++ RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port),
++ drop ? RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port) : 0);
++}
++
++static int rtl8366rb_vlan_filtering(struct dsa_switch *ds, int port,
++ bool vlan_filtering,
++ struct netlink_ext_ack *extack)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8366rb *rb;
++ int ret;
++
++ rb = smi->chip_data;
++
++ dev_dbg(smi->dev, "port %d: %s VLAN filtering\n", port,
++ vlan_filtering ? "enable" : "disable");
++
++ /* If the port is not in the member set, the frame will be dropped */
++ ret = regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
++ BIT(port), vlan_filtering ? BIT(port) : 0);
++ if (ret)
++ return ret;
++
++ /* If VLAN filtering is enabled and PVID is also enabled, we must
++ * not drop any untagged or C-tagged frames. If we turn off VLAN
++ * filtering on a port, we need to accept any frames.
++ */
++ if (vlan_filtering)
++ ret = rtl8366rb_drop_untagged(smi, port, !rb->pvid_enabled[port]);
++ else
++ ret = rtl8366rb_drop_untagged(smi, port, false);
++
++ return ret;
++}
++
++static int
++rtl8366rb_port_pre_bridge_flags(struct dsa_switch *ds, int port,
++ struct switchdev_brport_flags flags,
++ struct netlink_ext_ack *extack)
++{
++ /* We support enabling/disabling learning */
++ if (flags.mask & ~(BR_LEARNING))
++ return -EINVAL;
++
++ return 0;
++}
++
++static int
++rtl8366rb_port_bridge_flags(struct dsa_switch *ds, int port,
++ struct switchdev_brport_flags flags,
++ struct netlink_ext_ack *extack)
++{
++ struct realtek_smi *smi = ds->priv;
++ int ret;
++
++ if (flags.mask & BR_LEARNING) {
++ ret = regmap_update_bits(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
++ BIT(port),
++ (flags.val & BR_LEARNING) ? 0 : BIT(port));
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
++static void
++rtl8366rb_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
++{
++ struct realtek_smi *smi = ds->priv;
++ u32 val;
++ int i;
++
++ switch (state) {
++ case BR_STATE_DISABLED:
++ val = RTL8366RB_STP_STATE_DISABLED;
++ break;
++ case BR_STATE_BLOCKING:
++ case BR_STATE_LISTENING:
++ val = RTL8366RB_STP_STATE_BLOCKING;
++ break;
++ case BR_STATE_LEARNING:
++ val = RTL8366RB_STP_STATE_LEARNING;
++ break;
++ case BR_STATE_FORWARDING:
++ val = RTL8366RB_STP_STATE_FORWARDING;
++ break;
++ default:
++ dev_err(smi->dev, "unknown bridge state requested\n");
++ return;
++ }
++
++ /* Set the same status for the port on all the FIDs */
++ for (i = 0; i < RTL8366RB_NUM_FIDS; i++) {
++ regmap_update_bits(smi->map, RTL8366RB_STP_STATE_BASE + i,
++ RTL8366RB_STP_STATE_MASK(port),
++ RTL8366RB_STP_STATE(port, val));
++ }
++}
++
++static void
++rtl8366rb_port_fast_age(struct dsa_switch *ds, int port)
++{
++ struct realtek_smi *smi = ds->priv;
++
++ /* This will age out any learned L2 entries */
++ regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
++ BIT(port), BIT(port));
++ /* Restore the normal state of things */
++ regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
++ BIT(port), 0);
++}
++
++static int rtl8366rb_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
++{
++ struct realtek_smi *smi = ds->priv;
++ struct rtl8366rb *rb;
++ unsigned int max_mtu;
++ u32 len;
++ int i;
++
++ /* Cache the per-port MTU setting */
++ rb = smi->chip_data;
++ rb->max_mtu[port] = new_mtu;
++
++	/* Cap the MTU for the entire switch to the largest value
++	 * set on any one port: the biggest MTU set for any port
++	 * governs the whole switch.
++	 *
++	 * The first setting, 1522 bytes, is the max 1500-byte IP
++	 * packet plus ethernet header (1518 bytes), plus the 4-byte
++	 * CPU tag. This function should consider the parameter an
++	 * SDU, so the MTU passed for this setting is 1518 bytes.
++	 * The same logic of subtracting the 4-byte DSA tag applies
++	 * to the other settings.
++	 */
++ max_mtu = 1518;
++ for (i = 0; i < RTL8366RB_NUM_PORTS; i++) {
++ if (rb->max_mtu[i] > max_mtu)
++ max_mtu = rb->max_mtu[i];
++ }
++ if (max_mtu <= 1518)
++ len = RTL8366RB_SGCR_MAX_LENGTH_1522;
++ else if (max_mtu > 1518 && max_mtu <= 1532)
++ len = RTL8366RB_SGCR_MAX_LENGTH_1536;
++ else if (max_mtu > 1532 && max_mtu <= 1548)
++ len = RTL8366RB_SGCR_MAX_LENGTH_1552;
++ else
++ len = RTL8366RB_SGCR_MAX_LENGTH_16000;
++
++ return regmap_update_bits(smi->map, RTL8366RB_SGCR,
++ RTL8366RB_SGCR_MAX_LENGTH_MASK,
++ len);
++}
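++
++/* Worked example (illustrative): a standard 1500-byte IP MTU arrives
++ * here as a 1518-byte L2 SDU; adding the 4-byte CPU tag gives 1522,
++ * so RTL8366RB_SGCR_MAX_LENGTH_1522 is selected.
++ */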
++
++static int rtl8366rb_max_mtu(struct dsa_switch *ds, int port)
++{
++ /* The max MTU is 16000 bytes, so we subtract the CPU tag
++ * and the max presented to the system is 15996 bytes.
++ */
++ return 15996;
++}
++
++static int rtl8366rb_get_vlan_4k(struct realtek_smi *smi, u32 vid,
++ struct rtl8366_vlan_4k *vlan4k)
++{
++ u32 data[3];
++ int ret;
++ int i;
++
++ memset(vlan4k, '\0', sizeof(struct rtl8366_vlan_4k));
++
++ if (vid >= RTL8366RB_NUM_VIDS)
++ return -EINVAL;
++
++ /* write VID */
++ ret = regmap_write(smi->map, RTL8366RB_VLAN_TABLE_WRITE_BASE,
++ vid & RTL8366RB_VLAN_VID_MASK);
++ if (ret)
++ return ret;
++
++ /* write table access control word */
++ ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
++ RTL8366RB_TABLE_VLAN_READ_CTRL);
++ if (ret)
++ return ret;
++
++ for (i = 0; i < 3; i++) {
++ ret = regmap_read(smi->map,
++ RTL8366RB_VLAN_TABLE_READ_BASE + i,
++ &data[i]);
++ if (ret)
++ return ret;
++ }
++
++ vlan4k->vid = vid;
++ vlan4k->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
++ RTL8366RB_VLAN_UNTAG_MASK;
++ vlan4k->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
++ vlan4k->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
++
++ return 0;
++}
++
++static int rtl8366rb_set_vlan_4k(struct realtek_smi *smi,
++ const struct rtl8366_vlan_4k *vlan4k)
++{
++ u32 data[3];
++ int ret;
++ int i;
++
++ if (vlan4k->vid >= RTL8366RB_NUM_VIDS ||
++ vlan4k->member > RTL8366RB_VLAN_MEMBER_MASK ||
++ vlan4k->untag > RTL8366RB_VLAN_UNTAG_MASK ||
++ vlan4k->fid > RTL8366RB_FIDMAX)
++ return -EINVAL;
++
++ data[0] = vlan4k->vid & RTL8366RB_VLAN_VID_MASK;
++ data[1] = (vlan4k->member & RTL8366RB_VLAN_MEMBER_MASK) |
++ ((vlan4k->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
++ RTL8366RB_VLAN_UNTAG_SHIFT);
++ data[2] = vlan4k->fid & RTL8366RB_VLAN_FID_MASK;
++
++ for (i = 0; i < 3; i++) {
++ ret = regmap_write(smi->map,
++ RTL8366RB_VLAN_TABLE_WRITE_BASE + i,
++ data[i]);
++ if (ret)
++ return ret;
++ }
++
++ /* write table access control word */
++ ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
++ RTL8366RB_TABLE_VLAN_WRITE_CTRL);
++
++ return ret;
++}
++
++static int rtl8366rb_get_vlan_mc(struct realtek_smi *smi, u32 index,
++ struct rtl8366_vlan_mc *vlanmc)
++{
++ u32 data[3];
++ int ret;
++ int i;
++
++ memset(vlanmc, '\0', sizeof(struct rtl8366_vlan_mc));
++
++ if (index >= RTL8366RB_NUM_VLANS)
++ return -EINVAL;
++
++ for (i = 0; i < 3; i++) {
++ ret = regmap_read(smi->map,
++ RTL8366RB_VLAN_MC_BASE(index) + i,
++ &data[i]);
++ if (ret)
++ return ret;
++ }
++
++ vlanmc->vid = data[0] & RTL8366RB_VLAN_VID_MASK;
++ vlanmc->priority = (data[0] >> RTL8366RB_VLAN_PRIORITY_SHIFT) &
++ RTL8366RB_VLAN_PRIORITY_MASK;
++ vlanmc->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
++ RTL8366RB_VLAN_UNTAG_MASK;
++ vlanmc->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
++ vlanmc->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
++
++ return 0;
++}
++
++static int rtl8366rb_set_vlan_mc(struct realtek_smi *smi, u32 index,
++ const struct rtl8366_vlan_mc *vlanmc)
++{
++ u32 data[3];
++ int ret;
++ int i;
++
++ if (index >= RTL8366RB_NUM_VLANS ||
++ vlanmc->vid >= RTL8366RB_NUM_VIDS ||
++ vlanmc->priority > RTL8366RB_PRIORITYMAX ||
++ vlanmc->member > RTL8366RB_VLAN_MEMBER_MASK ||
++ vlanmc->untag > RTL8366RB_VLAN_UNTAG_MASK ||
++ vlanmc->fid > RTL8366RB_FIDMAX)
++ return -EINVAL;
++
++ data[0] = (vlanmc->vid & RTL8366RB_VLAN_VID_MASK) |
++ ((vlanmc->priority & RTL8366RB_VLAN_PRIORITY_MASK) <<
++ RTL8366RB_VLAN_PRIORITY_SHIFT);
++ data[1] = (vlanmc->member & RTL8366RB_VLAN_MEMBER_MASK) |
++ ((vlanmc->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
++ RTL8366RB_VLAN_UNTAG_SHIFT);
++ data[2] = vlanmc->fid & RTL8366RB_VLAN_FID_MASK;
++
++ for (i = 0; i < 3; i++) {
++ ret = regmap_write(smi->map,
++ RTL8366RB_VLAN_MC_BASE(index) + i,
++ data[i]);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
++static int rtl8366rb_get_mc_index(struct realtek_smi *smi, int port, int *val)
++{
++ u32 data;
++ int ret;
++
++ if (port >= smi->num_ports)
++ return -EINVAL;
++
++ ret = regmap_read(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
++ &data);
++ if (ret)
++ return ret;
++
++ *val = (data >> RTL8366RB_PORT_VLAN_CTRL_SHIFT(port)) &
++ RTL8366RB_PORT_VLAN_CTRL_MASK;
++
++ return 0;
++}
++
++static int rtl8366rb_set_mc_index(struct realtek_smi *smi, int port, int index)
++{
++ struct rtl8366rb *rb;
++ bool pvid_enabled;
++ int ret;
++
++ rb = smi->chip_data;
++ pvid_enabled = !!index;
++
++ if (port >= smi->num_ports || index >= RTL8366RB_NUM_VLANS)
++ return -EINVAL;
++
++ ret = regmap_update_bits(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
++ RTL8366RB_PORT_VLAN_CTRL_MASK <<
++ RTL8366RB_PORT_VLAN_CTRL_SHIFT(port),
++ (index & RTL8366RB_PORT_VLAN_CTRL_MASK) <<
++ RTL8366RB_PORT_VLAN_CTRL_SHIFT(port));
++ if (ret)
++ return ret;
++
++ rb->pvid_enabled[port] = pvid_enabled;
++
++ /* If VLAN filtering is enabled and PVID is also enabled, we must
++ * not drop any untagged or C-tagged frames. Make sure to update the
++ * filtering setting.
++ */
++ if (dsa_port_is_vlan_filtering(dsa_to_port(smi->ds, port)))
++ ret = rtl8366rb_drop_untagged(smi, port, !pvid_enabled);
++
++ return ret;
++}
++
++static bool rtl8366rb_is_vlan_valid(struct realtek_smi *smi, unsigned int vlan)
++{
++ unsigned int max = RTL8366RB_NUM_VLANS - 1;
++
++ if (smi->vlan4k_enabled)
++ max = RTL8366RB_NUM_VIDS - 1;
++
++ if (vlan > max)
++ return false;
++
++ return true;
++}
++
++static int rtl8366rb_enable_vlan(struct realtek_smi *smi, bool enable)
++{
++ dev_dbg(smi->dev, "%s VLAN\n", enable ? "enable" : "disable");
++ return regmap_update_bits(smi->map,
++ RTL8366RB_SGCR, RTL8366RB_SGCR_EN_VLAN,
++ enable ? RTL8366RB_SGCR_EN_VLAN : 0);
++}
++
++static int rtl8366rb_enable_vlan4k(struct realtek_smi *smi, bool enable)
++{
++ dev_dbg(smi->dev, "%s VLAN 4k\n", enable ? "enable" : "disable");
++ return regmap_update_bits(smi->map, RTL8366RB_SGCR,
++ RTL8366RB_SGCR_EN_VLAN_4KTB,
++ enable ? RTL8366RB_SGCR_EN_VLAN_4KTB : 0);
++}
++
++static int rtl8366rb_phy_read(struct realtek_smi *smi, int phy, int regnum)
++{
++ u32 val;
++ u32 reg;
++ int ret;
++
++ if (phy > RTL8366RB_PHY_NO_MAX)
++ return -EINVAL;
++
++ ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
++ RTL8366RB_PHY_CTRL_READ);
++ if (ret)
++ return ret;
++
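++	/* Bit 15 opens the PHY access window, a one-hot bit shifted by
++	 * RTL8366RB_PHY_NO_OFFSET selects the PHY, and the low bits carry
++	 * the register number.
++	 */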
++ reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
++
++ ret = regmap_write(smi->map, reg, 0);
++ if (ret) {
++ dev_err(smi->dev,
++ "failed to write PHY%d reg %04x @ %04x, ret %d\n",
++ phy, regnum, reg, ret);
++ return ret;
++ }
++
++ ret = regmap_read(smi->map, RTL8366RB_PHY_ACCESS_DATA_REG, &val);
++ if (ret)
++ return ret;
++
++ dev_dbg(smi->dev, "read PHY%d register 0x%04x @ %08x, val <- %04x\n",
++ phy, regnum, reg, val);
++
++ return val;
++}
++
++static int rtl8366rb_phy_write(struct realtek_smi *smi, int phy, int regnum,
++ u16 val)
++{
++ u32 reg;
++ int ret;
++
++ if (phy > RTL8366RB_PHY_NO_MAX)
++ return -EINVAL;
++
++ ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
++ RTL8366RB_PHY_CTRL_WRITE);
++ if (ret)
++ return ret;
++
++ reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
++
++ dev_dbg(smi->dev, "write PHY%d register 0x%04x @ %04x, val -> %04x\n",
++ phy, regnum, reg, val);
++
++ ret = regmap_write(smi->map, reg, val);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static int rtl8366rb_reset_chip(struct realtek_smi *smi)
++{
++ int timeout = 10;
++ u32 val;
++ int ret;
++
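++	/* No ack is expected for this write: the chip presumably starts
++	 * resetting before it can acknowledge, hence the _noack variant.
++	 */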
++ realtek_smi_write_reg_noack(smi, RTL8366RB_RESET_CTRL_REG,
++ RTL8366RB_CHIP_CTRL_RESET_HW);
++ do {
++ usleep_range(20000, 25000);
++ ret = regmap_read(smi->map, RTL8366RB_RESET_CTRL_REG, &val);
++ if (ret)
++ return ret;
++
++ if (!(val & RTL8366RB_CHIP_CTRL_RESET_HW))
++ break;
++ } while (--timeout);
++
++ if (!timeout) {
++ dev_err(smi->dev, "timeout waiting for the switch to reset\n");
++ return -EIO;
++ }
++
++ return 0;
++}
++
++static int rtl8366rb_detect(struct realtek_smi *smi)
++{
++ struct device *dev = smi->dev;
++ int ret;
++ u32 val;
++
++ /* Detect device */
++ ret = regmap_read(smi->map, 0x5c, &val);
++ if (ret) {
++ dev_err(dev, "can't get chip ID (%d)\n", ret);
++ return ret;
++ }
++
++ switch (val) {
++ case 0x6027:
++ dev_info(dev, "found an RTL8366S switch\n");
++ dev_err(dev, "this switch is not yet supported, submit patches!\n");
++ return -ENODEV;
++ case 0x5937:
++ dev_info(dev, "found an RTL8366RB switch\n");
++ smi->cpu_port = RTL8366RB_PORT_NUM_CPU;
++ smi->num_ports = RTL8366RB_NUM_PORTS;
++ smi->num_vlan_mc = RTL8366RB_NUM_VLANS;
++ smi->mib_counters = rtl8366rb_mib_counters;
++ smi->num_mib_counters = ARRAY_SIZE(rtl8366rb_mib_counters);
++ break;
++ default:
++ dev_info(dev, "found an Unknown Realtek switch (id=0x%04x)\n",
++ val);
++ break;
++ }
++
++ ret = rtl8366rb_reset_chip(smi);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
++static const struct dsa_switch_ops rtl8366rb_switch_ops = {
++ .get_tag_protocol = rtl8366_get_tag_protocol,
++ .setup = rtl8366rb_setup,
++ .phylink_mac_link_up = rtl8366rb_mac_link_up,
++ .phylink_mac_link_down = rtl8366rb_mac_link_down,
++ .get_strings = rtl8366_get_strings,
++ .get_ethtool_stats = rtl8366_get_ethtool_stats,
++ .get_sset_count = rtl8366_get_sset_count,
++ .port_bridge_join = rtl8366rb_port_bridge_join,
++ .port_bridge_leave = rtl8366rb_port_bridge_leave,
++ .port_vlan_filtering = rtl8366rb_vlan_filtering,
++ .port_vlan_add = rtl8366_vlan_add,
++ .port_vlan_del = rtl8366_vlan_del,
++ .port_enable = rtl8366rb_port_enable,
++ .port_disable = rtl8366rb_port_disable,
++ .port_pre_bridge_flags = rtl8366rb_port_pre_bridge_flags,
++ .port_bridge_flags = rtl8366rb_port_bridge_flags,
++ .port_stp_state_set = rtl8366rb_port_stp_state_set,
++ .port_fast_age = rtl8366rb_port_fast_age,
++ .port_change_mtu = rtl8366rb_change_mtu,
++ .port_max_mtu = rtl8366rb_max_mtu,
++};
++
++static const struct realtek_smi_ops rtl8366rb_smi_ops = {
++ .detect = rtl8366rb_detect,
++ .get_vlan_mc = rtl8366rb_get_vlan_mc,
++ .set_vlan_mc = rtl8366rb_set_vlan_mc,
++ .get_vlan_4k = rtl8366rb_get_vlan_4k,
++ .set_vlan_4k = rtl8366rb_set_vlan_4k,
++ .get_mc_index = rtl8366rb_get_mc_index,
++ .set_mc_index = rtl8366rb_set_mc_index,
++ .get_mib_counter = rtl8366rb_get_mib_counter,
++ .is_vlan_valid = rtl8366rb_is_vlan_valid,
++ .enable_vlan = rtl8366rb_enable_vlan,
++ .enable_vlan4k = rtl8366rb_enable_vlan4k,
++ .phy_read = rtl8366rb_phy_read,
++ .phy_write = rtl8366rb_phy_write,
++};
++
++const struct realtek_smi_variant rtl8366rb_variant = {
++ .ds_ops = &rtl8366rb_switch_ops,
++ .ops = &rtl8366rb_smi_ops,
++ .clk_delay = 10,
++ .cmd_read = 0xa9,
++ .cmd_write = 0xa8,
++ .chip_data_sz = sizeof(struct rtl8366rb),
++};
++EXPORT_SYMBOL_GPL(rtl8366rb_variant);
+diff --git a/drivers/net/dsa/rtl8365mb.c b/drivers/net/dsa/rtl8365mb.c
+deleted file mode 100644
+index 3b729544798b1..0000000000000
+--- a/drivers/net/dsa/rtl8365mb.c
++++ /dev/null
+@@ -1,1987 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Realtek SMI subdriver for the Realtek RTL8365MB-VC ethernet switch.
+- *
+- * Copyright (C) 2021 Alvin Šipraga <alsi@bang-olufsen.dk>
+- * Copyright (C) 2021 Michael Rasmussen <mir@bang-olufsen.dk>
+- *
+- * The RTL8365MB-VC is a 4+1 port 10/100/1000M switch controller. It includes 4
+- * integrated PHYs for the user facing ports, and an extension interface which
+- * can be connected to the CPU - or another PHY - via either MII, RMII, or
+- * RGMII. The switch is configured via the Realtek Simple Management Interface
+- * (SMI), which uses the MDIO/MDC lines.
+- *
+- * Below is a simplified block diagram of the chip and its relevant interfaces.
+- *
+- * .-----------------------------------.
+- * | |
+- * UTP <---------------> Giga PHY <-> PCS <-> P0 GMAC |
+- * UTP <---------------> Giga PHY <-> PCS <-> P1 GMAC |
+- * UTP <---------------> Giga PHY <-> PCS <-> P2 GMAC |
+- * UTP <---------------> Giga PHY <-> PCS <-> P3 GMAC |
+- * | |
+- * CPU/PHY <-MII/RMII/RGMII---> Extension <---> Extension |
+- * | interface 1 GMAC 1 |
+- * | |
+- * SMI driver/ <-MDC/SCL---> Management ~~~~~~~~~~~~~~ |
+- * EEPROM <-MDIO/SDA--> interface ~REALTEK ~~~~~ |
+- * | ~RTL8365MB ~~~ |
+- * | ~GXXXC TAIWAN~ |
+- * GPIO <--------------> Reset ~~~~~~~~~~~~~~ |
+- * | |
+- * Interrupt <----------> Link UP/DOWN events |
+- * controller | |
+- * '-----------------------------------'
+- *
+- * The driver uses DSA to integrate the 4 user and 1 extension ports into the
+- * kernel. Netdevices are created for the user ports, as are PHY devices for
+- * their integrated PHYs. The device tree firmware should also specify the link
+- * partner of the extension port - either via a fixed-link or other phy-handle.
+- * See the device tree bindings for more detailed information. Note that the
+- * driver has only been tested with a fixed-link, but in principle it should not
+- * matter.
+- *
+- * NOTE: Currently, only the RGMII interface is implemented in this driver.
+- *
+- * The interrupt line is asserted on link UP/DOWN events. The driver creates a
+- * custom irqchip to handle this interrupt and demultiplex the events by reading
+- * the status registers via SMI. Interrupts are then propagated to the relevant
+- * PHY device.
+- *
+- * The EEPROM contains initial register values which the chip will read over I2C
+- * upon hardware reset. It is also possible to omit the EEPROM. In both cases,
+- * the driver will manually reprogram some registers using jam tables to reach
+- * an initial state defined by the vendor driver.
+- *
+- * This Linux driver is written based on an OS-agnostic vendor driver from
+- * Realtek. The reference GPL-licensed sources can be found in the OpenWrt
+- * source tree under the name rtl8367c. The vendor driver claims to support a
+- * number of similar switch controllers from Realtek, but the only hardware we
+- * have is the RTL8365MB-VC. Moreover, there does not seem to be any chip under
+- * the name RTL8367C. Although one wishes that the 'C' stood for some kind of
+- * common hardware revision, there exist examples of chips with the suffix -VC
+- * which are explicitly not supported by the rtl8367c driver and which instead
+- * require the rtl8367d vendor driver. With all this uncertainty, the driver has
+- * been modestly named rtl8365mb. Future implementors may wish to rename things
+- * accordingly.
+- *
+- * In the same family of chips, some carry up to 8 user ports and up to 2
+- * extension ports. Where possible this driver tries to make things generic, but
+- * more work must be done to support these configurations. According to
+- * documentation from Realtek, the family should include the following chips:
+- *
+- * - RTL8363NB
+- * - RTL8363NB-VB
+- * - RTL8363SC
+- * - RTL8363SC-VB
+- * - RTL8364NB
+- * - RTL8364NB-VB
+- * - RTL8365MB-VC
+- * - RTL8366SC
+- * - RTL8367RB-VB
+- * - RTL8367SB
+- * - RTL8367S
+- * - RTL8370MB
+- * - RTL8310SR
+- *
+- * Some of the register logic for these additional chips has been skipped over
+- * while implementing this driver. It is therefore not possible to assume that
+- * things will work out-of-the-box for other chips, and a careful review of the
+- * vendor driver may be needed to expand support. The RTL8365MB-VC seems to be
+- * one of the simpler chips.
+- */
+-
+-#include <linux/bitfield.h>
+-#include <linux/bitops.h>
+-#include <linux/interrupt.h>
+-#include <linux/irqdomain.h>
+-#include <linux/mutex.h>
+-#include <linux/of_irq.h>
+-#include <linux/regmap.h>
+-#include <linux/if_bridge.h>
+-
+-#include "realtek-smi-core.h"
+-
+-/* Chip-specific data and limits */
+-#define RTL8365MB_CHIP_ID_8365MB_VC 0x6367
+-#define RTL8365MB_CPU_PORT_NUM_8365MB_VC 6
+-#define RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC 2112
+-
+-/* Family-specific data and limits */
+-#define RTL8365MB_PHYADDRMAX 7
+-#define RTL8365MB_NUM_PHYREGS 32
+-#define RTL8365MB_PHYREGMAX (RTL8365MB_NUM_PHYREGS - 1)
+-#define RTL8365MB_MAX_NUM_PORTS (RTL8365MB_CPU_PORT_NUM_8365MB_VC + 1)
+-
+-/* Chip identification registers */
+-#define RTL8365MB_CHIP_ID_REG 0x1300
+-
+-#define RTL8365MB_CHIP_VER_REG 0x1301
+-
+-#define RTL8365MB_MAGIC_REG 0x13C2
+-#define RTL8365MB_MAGIC_VALUE 0x0249
+-
+-/* Chip reset register */
+-#define RTL8365MB_CHIP_RESET_REG 0x1322
+-#define RTL8365MB_CHIP_RESET_SW_MASK 0x0002
+-#define RTL8365MB_CHIP_RESET_HW_MASK 0x0001
+-
+-/* Interrupt polarity register */
+-#define RTL8365MB_INTR_POLARITY_REG 0x1100
+-#define RTL8365MB_INTR_POLARITY_MASK 0x0001
+-#define RTL8365MB_INTR_POLARITY_HIGH 0
+-#define RTL8365MB_INTR_POLARITY_LOW 1
+-
+-/* Interrupt control/status register - enable/check specific interrupt types */
+-#define RTL8365MB_INTR_CTRL_REG 0x1101
+-#define RTL8365MB_INTR_STATUS_REG 0x1102
+-#define RTL8365MB_INTR_SLIENT_START_2_MASK 0x1000
+-#define RTL8365MB_INTR_SLIENT_START_MASK 0x0800
+-#define RTL8365MB_INTR_ACL_ACTION_MASK 0x0200
+-#define RTL8365MB_INTR_CABLE_DIAG_FIN_MASK 0x0100
+-#define RTL8365MB_INTR_INTERRUPT_8051_MASK 0x0080
+-#define RTL8365MB_INTR_LOOP_DETECTION_MASK 0x0040
+-#define RTL8365MB_INTR_GREEN_TIMER_MASK 0x0020
+-#define RTL8365MB_INTR_SPECIAL_CONGEST_MASK 0x0010
+-#define RTL8365MB_INTR_SPEED_CHANGE_MASK 0x0008
+-#define RTL8365MB_INTR_LEARN_OVER_MASK 0x0004
+-#define RTL8365MB_INTR_METER_EXCEEDED_MASK 0x0002
+-#define RTL8365MB_INTR_LINK_CHANGE_MASK 0x0001
+-#define RTL8365MB_INTR_ALL_MASK \
+- (RTL8365MB_INTR_SLIENT_START_2_MASK | \
+- RTL8365MB_INTR_SLIENT_START_MASK | \
+- RTL8365MB_INTR_ACL_ACTION_MASK | \
+- RTL8365MB_INTR_CABLE_DIAG_FIN_MASK | \
+- RTL8365MB_INTR_INTERRUPT_8051_MASK | \
+- RTL8365MB_INTR_LOOP_DETECTION_MASK | \
+- RTL8365MB_INTR_GREEN_TIMER_MASK | \
+- RTL8365MB_INTR_SPECIAL_CONGEST_MASK | \
+- RTL8365MB_INTR_SPEED_CHANGE_MASK | \
+- RTL8365MB_INTR_LEARN_OVER_MASK | \
+- RTL8365MB_INTR_METER_EXCEEDED_MASK | \
+- RTL8365MB_INTR_LINK_CHANGE_MASK)
+-
+-/* Per-port interrupt type status registers */
+-#define RTL8365MB_PORT_LINKDOWN_IND_REG 0x1106
+-#define RTL8365MB_PORT_LINKDOWN_IND_MASK 0x07FF
+-
+-#define RTL8365MB_PORT_LINKUP_IND_REG 0x1107
+-#define RTL8365MB_PORT_LINKUP_IND_MASK 0x07FF
+-
+-/* PHY indirect access registers */
+-#define RTL8365MB_INDIRECT_ACCESS_CTRL_REG 0x1F00
+-#define RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK 0x0002
+-#define RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ 0
+-#define RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE 1
+-#define RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK 0x0001
+-#define RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE 1
+-#define RTL8365MB_INDIRECT_ACCESS_STATUS_REG 0x1F01
+-#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG 0x1F02
+-#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK GENMASK(4, 0)
+-#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK GENMASK(7, 5)
+-#define RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK GENMASK(11, 8)
+-#define RTL8365MB_PHY_BASE 0x2000
+-#define RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG 0x1F03
+-#define RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG 0x1F04
+-
+-/* PHY OCP address prefix register */
+-#define RTL8365MB_GPHY_OCP_MSB_0_REG 0x1D15
+-#define RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK 0x0FC0
+-#define RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK 0xFC00
+-
+-/* The PHY OCP addresses of PHY registers 0~31 start here */
+-#define RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE 0xA400
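+-/* e.g. standard PHY register 4 maps to OCP address 0xA400 + 4 * 2 = 0xA408 */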
+-
+-/* EXT port interface mode values - used in DIGITAL_INTERFACE_SELECT */
+-#define RTL8365MB_EXT_PORT_MODE_DISABLE 0
+-#define RTL8365MB_EXT_PORT_MODE_RGMII 1
+-#define RTL8365MB_EXT_PORT_MODE_MII_MAC 2
+-#define RTL8365MB_EXT_PORT_MODE_MII_PHY 3
+-#define RTL8365MB_EXT_PORT_MODE_TMII_MAC 4
+-#define RTL8365MB_EXT_PORT_MODE_TMII_PHY 5
+-#define RTL8365MB_EXT_PORT_MODE_GMII 6
+-#define RTL8365MB_EXT_PORT_MODE_RMII_MAC 7
+-#define RTL8365MB_EXT_PORT_MODE_RMII_PHY 8
+-#define RTL8365MB_EXT_PORT_MODE_SGMII 9
+-#define RTL8365MB_EXT_PORT_MODE_HSGMII 10
+-#define RTL8365MB_EXT_PORT_MODE_1000X_100FX 11
+-#define RTL8365MB_EXT_PORT_MODE_1000X 12
+-#define RTL8365MB_EXT_PORT_MODE_100FX 13
+-
+-/* EXT port interface mode configuration registers 0~1 */
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0 0x1305
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG1 0x13C3
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(_extport) \
+- (RTL8365MB_DIGITAL_INTERFACE_SELECT_REG0 + \
+- ((_extport) >> 1) * (0x13C3 - 0x1305))
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(_extport) \
+- (0xF << (((_extport) % 2)))
+-#define RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(_extport) \
+- (((_extport) % 2) * 4)
+-
+-/* EXT port RGMII TX/RX delay configuration registers 1~2 */
+-#define RTL8365MB_EXT_RGMXF_REG1 0x1307
+-#define RTL8365MB_EXT_RGMXF_REG2 0x13C5
+-#define RTL8365MB_EXT_RGMXF_REG(_extport) \
+- (RTL8365MB_EXT_RGMXF_REG1 + \
+- (((_extport) >> 1) * (0x13C5 - 0x1307)))
+-#define RTL8365MB_EXT_RGMXF_RXDELAY_MASK 0x0007
+-#define RTL8365MB_EXT_RGMXF_TXDELAY_MASK 0x0008
+-
+-/* External port speed values - used in DIGITAL_INTERFACE_FORCE */
+-#define RTL8365MB_PORT_SPEED_10M 0
+-#define RTL8365MB_PORT_SPEED_100M 1
+-#define RTL8365MB_PORT_SPEED_1000M 2
+-
+-/* EXT port force configuration registers 0~2 */
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0 0x1310
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG1 0x1311
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG2 0x13C4
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(_extport) \
+- (RTL8365MB_DIGITAL_INTERFACE_FORCE_REG0 + \
+- ((_extport) & 0x1) + \
+- ((((_extport) >> 1) & 0x1) * (0x13C4 - 0x1310)))
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK 0x1000
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_NWAY_MASK 0x0080
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK 0x0040
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK 0x0020
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK 0x0010
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK 0x0004
+-#define RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK 0x0003
+-
+-/* CPU port mask register - controls which ports are treated as CPU ports */
+-#define RTL8365MB_CPU_PORT_MASK_REG 0x1219
+-#define RTL8365MB_CPU_PORT_MASK_MASK 0x07FF
+-
+-/* CPU control register */
+-#define RTL8365MB_CPU_CTRL_REG 0x121A
+-#define RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK 0x0400
+-#define RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK 0x0200
+-#define RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK 0x0080
+-#define RTL8365MB_CPU_CTRL_TAG_POSITION_MASK 0x0040
+-#define RTL8365MB_CPU_CTRL_TRAP_PORT_MASK 0x0038
+-#define RTL8365MB_CPU_CTRL_INSERTMODE_MASK 0x0006
+-#define RTL8365MB_CPU_CTRL_EN_MASK 0x0001
+-
+-/* Maximum packet length register */
+-#define RTL8365MB_CFG0_MAX_LEN_REG 0x088C
+-#define RTL8365MB_CFG0_MAX_LEN_MASK 0x3FFF
+-
+-/* Port learning limit registers */
+-#define RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE 0x0A20
+-#define RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(_physport) \
+- (RTL8365MB_LUT_PORT_LEARN_LIMIT_BASE + (_physport))
+-
+-/* Port isolation (forwarding mask) registers */
+-#define RTL8365MB_PORT_ISOLATION_REG_BASE 0x08A2
+-#define RTL8365MB_PORT_ISOLATION_REG(_physport) \
+- (RTL8365MB_PORT_ISOLATION_REG_BASE + (_physport))
+-#define RTL8365MB_PORT_ISOLATION_MASK 0x07FF
+-
+-/* MSTP port state registers - indexed by tree instance */
+-#define RTL8365MB_MSTI_CTRL_BASE 0x0A00
+-#define RTL8365MB_MSTI_CTRL_REG(_msti, _physport) \
+- (RTL8365MB_MSTI_CTRL_BASE + ((_msti) << 1) + ((_physport) >> 3))
+-#define RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(_physport) ((_physport) << 1)
+-#define RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(_physport) \
+- (0x3 << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET((_physport)))
+-
+-/* MIB counter value registers */
+-#define RTL8365MB_MIB_COUNTER_BASE 0x1000
+-#define RTL8365MB_MIB_COUNTER_REG(_x) (RTL8365MB_MIB_COUNTER_BASE + (_x))
+-
+-/* MIB counter address register */
+-#define RTL8365MB_MIB_ADDRESS_REG 0x1004
+-#define RTL8365MB_MIB_ADDRESS_PORT_OFFSET 0x007C
+-#define RTL8365MB_MIB_ADDRESS(_p, _x) \
+- (((RTL8365MB_MIB_ADDRESS_PORT_OFFSET) * (_p) + (_x)) >> 2)
+-
+-#define RTL8365MB_MIB_CTRL0_REG 0x1005
+-#define RTL8365MB_MIB_CTRL0_RESET_MASK 0x0002
+-#define RTL8365MB_MIB_CTRL0_BUSY_MASK 0x0001
+-
+-/* The DSA callback .get_stats64 runs in atomic context, so we are not allowed
+- * to block. On the other hand, accessing MIB counters absolutely requires us to
+- * block. The solution is thus to schedule work which polls the MIB counters
+- * asynchronously and updates some private data, which the callback can then
+- * fetch atomically. Three seconds should be a good enough polling interval.
+- */
+-#define RTL8365MB_STATS_INTERVAL_JIFFIES (3 * HZ)
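+-
+-/* A sketch of how the pieces below fit together:
+- *
+- *   rtl8365mb_phylink_mac_link_up()   -> schedule_delayed_work(&p->mib_work, 0)
+- *   rtl8365mb_stats_poll()            -> rtl8365mb_stats_update() (may block),
+- *                                        then re-schedules itself
+- *   rtl8365mb_get_stats64()           -> copies p->stats under p->stats_lock
+- *   rtl8365mb_phylink_mac_link_down() -> cancel_delayed_work_sync(&p->mib_work)
+- */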
+-
+-enum rtl8365mb_mib_counter_index {
+- RTL8365MB_MIB_ifInOctets,
+- RTL8365MB_MIB_dot3StatsFCSErrors,
+- RTL8365MB_MIB_dot3StatsSymbolErrors,
+- RTL8365MB_MIB_dot3InPauseFrames,
+- RTL8365MB_MIB_dot3ControlInUnknownOpcodes,
+- RTL8365MB_MIB_etherStatsFragments,
+- RTL8365MB_MIB_etherStatsJabbers,
+- RTL8365MB_MIB_ifInUcastPkts,
+- RTL8365MB_MIB_etherStatsDropEvents,
+- RTL8365MB_MIB_ifInMulticastPkts,
+- RTL8365MB_MIB_ifInBroadcastPkts,
+- RTL8365MB_MIB_inMldChecksumError,
+- RTL8365MB_MIB_inIgmpChecksumError,
+- RTL8365MB_MIB_inMldSpecificQuery,
+- RTL8365MB_MIB_inMldGeneralQuery,
+- RTL8365MB_MIB_inIgmpSpecificQuery,
+- RTL8365MB_MIB_inIgmpGeneralQuery,
+- RTL8365MB_MIB_inMldLeaves,
+- RTL8365MB_MIB_inIgmpLeaves,
+- RTL8365MB_MIB_etherStatsOctets,
+- RTL8365MB_MIB_etherStatsUnderSizePkts,
+- RTL8365MB_MIB_etherOversizeStats,
+- RTL8365MB_MIB_etherStatsPkts64Octets,
+- RTL8365MB_MIB_etherStatsPkts65to127Octets,
+- RTL8365MB_MIB_etherStatsPkts128to255Octets,
+- RTL8365MB_MIB_etherStatsPkts256to511Octets,
+- RTL8365MB_MIB_etherStatsPkts512to1023Octets,
+- RTL8365MB_MIB_etherStatsPkts1024to1518Octets,
+- RTL8365MB_MIB_ifOutOctets,
+- RTL8365MB_MIB_dot3StatsSingleCollisionFrames,
+- RTL8365MB_MIB_dot3StatsMultipleCollisionFrames,
+- RTL8365MB_MIB_dot3StatsDeferredTransmissions,
+- RTL8365MB_MIB_dot3StatsLateCollisions,
+- RTL8365MB_MIB_etherStatsCollisions,
+- RTL8365MB_MIB_dot3StatsExcessiveCollisions,
+- RTL8365MB_MIB_dot3OutPauseFrames,
+- RTL8365MB_MIB_ifOutDiscards,
+- RTL8365MB_MIB_dot1dTpPortInDiscards,
+- RTL8365MB_MIB_ifOutUcastPkts,
+- RTL8365MB_MIB_ifOutMulticastPkts,
+- RTL8365MB_MIB_ifOutBroadcastPkts,
+- RTL8365MB_MIB_outOampduPkts,
+- RTL8365MB_MIB_inOampduPkts,
+- RTL8365MB_MIB_inIgmpJoinsSuccess,
+- RTL8365MB_MIB_inIgmpJoinsFail,
+- RTL8365MB_MIB_inMldJoinsSuccess,
+- RTL8365MB_MIB_inMldJoinsFail,
+- RTL8365MB_MIB_inReportSuppressionDrop,
+- RTL8365MB_MIB_inLeaveSuppressionDrop,
+- RTL8365MB_MIB_outIgmpReports,
+- RTL8365MB_MIB_outIgmpLeaves,
+- RTL8365MB_MIB_outIgmpGeneralQuery,
+- RTL8365MB_MIB_outIgmpSpecificQuery,
+- RTL8365MB_MIB_outMldReports,
+- RTL8365MB_MIB_outMldLeaves,
+- RTL8365MB_MIB_outMldGeneralQuery,
+- RTL8365MB_MIB_outMldSpecificQuery,
+- RTL8365MB_MIB_inKnownMulticastPkts,
+- RTL8365MB_MIB_END,
+-};
+-
+-struct rtl8365mb_mib_counter {
+- u32 offset;
+- u32 length;
+- const char *name;
+-};
+-
+-#define RTL8365MB_MAKE_MIB_COUNTER(_offset, _length, _name) \
+- [RTL8365MB_MIB_ ## _name] = { _offset, _length, #_name }
+-
+-static struct rtl8365mb_mib_counter rtl8365mb_mib_counters[] = {
+- RTL8365MB_MAKE_MIB_COUNTER(0, 4, ifInOctets),
+- RTL8365MB_MAKE_MIB_COUNTER(4, 2, dot3StatsFCSErrors),
+- RTL8365MB_MAKE_MIB_COUNTER(6, 2, dot3StatsSymbolErrors),
+- RTL8365MB_MAKE_MIB_COUNTER(8, 2, dot3InPauseFrames),
+- RTL8365MB_MAKE_MIB_COUNTER(10, 2, dot3ControlInUnknownOpcodes),
+- RTL8365MB_MAKE_MIB_COUNTER(12, 2, etherStatsFragments),
+- RTL8365MB_MAKE_MIB_COUNTER(14, 2, etherStatsJabbers),
+- RTL8365MB_MAKE_MIB_COUNTER(16, 2, ifInUcastPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(18, 2, etherStatsDropEvents),
+- RTL8365MB_MAKE_MIB_COUNTER(20, 2, ifInMulticastPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(22, 2, ifInBroadcastPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(24, 2, inMldChecksumError),
+- RTL8365MB_MAKE_MIB_COUNTER(26, 2, inIgmpChecksumError),
+- RTL8365MB_MAKE_MIB_COUNTER(28, 2, inMldSpecificQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(30, 2, inMldGeneralQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(32, 2, inIgmpSpecificQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(34, 2, inIgmpGeneralQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(36, 2, inMldLeaves),
+- RTL8365MB_MAKE_MIB_COUNTER(38, 2, inIgmpLeaves),
+- RTL8365MB_MAKE_MIB_COUNTER(40, 4, etherStatsOctets),
+- RTL8365MB_MAKE_MIB_COUNTER(44, 2, etherStatsUnderSizePkts),
+- RTL8365MB_MAKE_MIB_COUNTER(46, 2, etherOversizeStats),
+- RTL8365MB_MAKE_MIB_COUNTER(48, 2, etherStatsPkts64Octets),
+- RTL8365MB_MAKE_MIB_COUNTER(50, 2, etherStatsPkts65to127Octets),
+- RTL8365MB_MAKE_MIB_COUNTER(52, 2, etherStatsPkts128to255Octets),
+- RTL8365MB_MAKE_MIB_COUNTER(54, 2, etherStatsPkts256to511Octets),
+- RTL8365MB_MAKE_MIB_COUNTER(56, 2, etherStatsPkts512to1023Octets),
+- RTL8365MB_MAKE_MIB_COUNTER(58, 2, etherStatsPkts1024to1518Octets),
+- RTL8365MB_MAKE_MIB_COUNTER(60, 4, ifOutOctets),
+- RTL8365MB_MAKE_MIB_COUNTER(64, 2, dot3StatsSingleCollisionFrames),
+- RTL8365MB_MAKE_MIB_COUNTER(66, 2, dot3StatsMultipleCollisionFrames),
+- RTL8365MB_MAKE_MIB_COUNTER(68, 2, dot3StatsDeferredTransmissions),
+- RTL8365MB_MAKE_MIB_COUNTER(70, 2, dot3StatsLateCollisions),
+- RTL8365MB_MAKE_MIB_COUNTER(72, 2, etherStatsCollisions),
+- RTL8365MB_MAKE_MIB_COUNTER(74, 2, dot3StatsExcessiveCollisions),
+- RTL8365MB_MAKE_MIB_COUNTER(76, 2, dot3OutPauseFrames),
+- RTL8365MB_MAKE_MIB_COUNTER(78, 2, ifOutDiscards),
+- RTL8365MB_MAKE_MIB_COUNTER(80, 2, dot1dTpPortInDiscards),
+- RTL8365MB_MAKE_MIB_COUNTER(82, 2, ifOutUcastPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(84, 2, ifOutMulticastPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(86, 2, ifOutBroadcastPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(88, 2, outOampduPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(90, 2, inOampduPkts),
+- RTL8365MB_MAKE_MIB_COUNTER(92, 4, inIgmpJoinsSuccess),
+- RTL8365MB_MAKE_MIB_COUNTER(96, 2, inIgmpJoinsFail),
+- RTL8365MB_MAKE_MIB_COUNTER(98, 2, inMldJoinsSuccess),
+- RTL8365MB_MAKE_MIB_COUNTER(100, 2, inMldJoinsFail),
+- RTL8365MB_MAKE_MIB_COUNTER(102, 2, inReportSuppressionDrop),
+- RTL8365MB_MAKE_MIB_COUNTER(104, 2, inLeaveSuppressionDrop),
+- RTL8365MB_MAKE_MIB_COUNTER(106, 2, outIgmpReports),
+- RTL8365MB_MAKE_MIB_COUNTER(108, 2, outIgmpLeaves),
+- RTL8365MB_MAKE_MIB_COUNTER(110, 2, outIgmpGeneralQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(112, 2, outIgmpSpecificQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(114, 2, outMldReports),
+- RTL8365MB_MAKE_MIB_COUNTER(116, 2, outMldLeaves),
+- RTL8365MB_MAKE_MIB_COUNTER(118, 2, outMldGeneralQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(120, 2, outMldSpecificQuery),
+- RTL8365MB_MAKE_MIB_COUNTER(122, 2, inKnownMulticastPkts),
+-};
+-
+-static_assert(ARRAY_SIZE(rtl8365mb_mib_counters) == RTL8365MB_MIB_END);
+-
+-struct rtl8365mb_jam_tbl_entry {
+- u16 reg;
+- u16 val;
+-};
+-
+-/* Lifted from the vendor driver sources */
+-static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_8365mb_vc[] = {
+- { 0x13EB, 0x15BB }, { 0x1303, 0x06D6 }, { 0x1304, 0x0700 },
+- { 0x13E2, 0x003F }, { 0x13F9, 0x0090 }, { 0x121E, 0x03CA },
+- { 0x1233, 0x0352 }, { 0x1237, 0x00A0 }, { 0x123A, 0x0030 },
+- { 0x1239, 0x0084 }, { 0x0301, 0x1000 }, { 0x1349, 0x001F },
+- { 0x18E0, 0x4004 }, { 0x122B, 0x241C }, { 0x1305, 0xC000 },
+- { 0x13F0, 0x0000 },
+-};
+-
+-static const struct rtl8365mb_jam_tbl_entry rtl8365mb_init_jam_common[] = {
+- { 0x1200, 0x7FCB }, { 0x0884, 0x0003 }, { 0x06EB, 0x0001 },
+-	{ 0x03FA, 0x0007 }, { 0x08C8, 0x00C0 }, { 0x0A30, 0x020E },
+- { 0x0800, 0x0000 }, { 0x0802, 0x0000 }, { 0x09DA, 0x0013 },
+- { 0x1D32, 0x0002 },
+-};
+-
+-enum rtl8365mb_stp_state {
+- RTL8365MB_STP_STATE_DISABLED = 0,
+- RTL8365MB_STP_STATE_BLOCKING = 1,
+- RTL8365MB_STP_STATE_LEARNING = 2,
+- RTL8365MB_STP_STATE_FORWARDING = 3,
+-};
+-
+-enum rtl8365mb_cpu_insert {
+- RTL8365MB_CPU_INSERT_TO_ALL = 0,
+- RTL8365MB_CPU_INSERT_TO_TRAPPING = 1,
+- RTL8365MB_CPU_INSERT_TO_NONE = 2,
+-};
+-
+-enum rtl8365mb_cpu_position {
+- RTL8365MB_CPU_POS_AFTER_SA = 0,
+- RTL8365MB_CPU_POS_BEFORE_CRC = 1,
+-};
+-
+-enum rtl8365mb_cpu_format {
+- RTL8365MB_CPU_FORMAT_8BYTES = 0,
+- RTL8365MB_CPU_FORMAT_4BYTES = 1,
+-};
+-
+-enum rtl8365mb_cpu_rxlen {
+- RTL8365MB_CPU_RXLEN_72BYTES = 0,
+- RTL8365MB_CPU_RXLEN_64BYTES = 1,
+-};
+-
+-/**
+- * struct rtl8365mb_cpu - CPU port configuration
+- * @enable: enable/disable hardware insertion of CPU tag in switch->CPU frames
+- * @mask: port mask of ports that should parse CPU tags
+- * @trap_port: forward trapped frames to this port
+- * @insert: CPU tag insertion mode in switch->CPU frames
+- * @position: position of CPU tag in frame
+- * @rx_length: minimum CPU RX length
+- * @format: CPU tag format
+- *
+- * Represents the CPU tagging and CPU port configuration of the switch. These
+- * settings are configurable at runtime.
+- */
+-struct rtl8365mb_cpu {
+- bool enable;
+- u32 mask;
+- u32 trap_port;
+- enum rtl8365mb_cpu_insert insert;
+- enum rtl8365mb_cpu_position position;
+- enum rtl8365mb_cpu_rxlen rx_length;
+- enum rtl8365mb_cpu_format format;
+-};
+-
+-/**
+- * struct rtl8365mb_port - private per-port data
+- * @smi: pointer to parent realtek_smi data
+- * @index: DSA port index, same as dsa_port::index
+- * @stats: link statistics populated by rtl8365mb_stats_poll, ready for atomic
+- * access via rtl8365mb_get_stats64
+- * @stats_lock: protect the stats structure during read/update
+- * @mib_work: delayed work for polling MIB counters
+- */
+-struct rtl8365mb_port {
+- struct realtek_smi *smi;
+- unsigned int index;
+- struct rtnl_link_stats64 stats;
+- spinlock_t stats_lock;
+- struct delayed_work mib_work;
+-};
+-
+-/**
+- * struct rtl8365mb - private chip-specific driver data
+- * @smi: pointer to parent realtek_smi data
+- * @irq: registered IRQ or zero
+- * @chip_id: chip identifier
+- * @chip_ver: chip silicon revision
+- * @port_mask: mask of all ports
+- * @learn_limit_max: maximum number of L2 addresses the chip can learn
+- * @cpu: CPU tagging and CPU port configuration for this chip
+- * @mib_lock: prevent concurrent reads of MIB counters
+- * @ports: per-port data
+- * @jam_table: chip-specific initialization jam table
+- * @jam_size: size of the chip's jam table
+- *
+- * Private data for this driver.
+- */
+-struct rtl8365mb {
+- struct realtek_smi *smi;
+- int irq;
+- u32 chip_id;
+- u32 chip_ver;
+- u32 port_mask;
+- u32 learn_limit_max;
+- struct rtl8365mb_cpu cpu;
+- struct mutex mib_lock;
+- struct rtl8365mb_port ports[RTL8365MB_MAX_NUM_PORTS];
+- const struct rtl8365mb_jam_tbl_entry *jam_table;
+- size_t jam_size;
+-};
+-
+-static int rtl8365mb_phy_poll_busy(struct realtek_smi *smi)
+-{
+- u32 val;
+-
+- return regmap_read_poll_timeout(smi->map,
+- RTL8365MB_INDIRECT_ACCESS_STATUS_REG,
+- val, !val, 10, 100);
+-}
+-
+-static int rtl8365mb_phy_ocp_prepare(struct realtek_smi *smi, int phy,
+- u32 ocp_addr)
+-{
+- u32 val;
+- int ret;
+-
+- /* Set OCP prefix */
+- val = FIELD_GET(RTL8365MB_PHY_OCP_ADDR_PREFIX_MASK, ocp_addr);
+- ret = regmap_update_bits(
+- smi->map, RTL8365MB_GPHY_OCP_MSB_0_REG,
+- RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK,
+- FIELD_PREP(RTL8365MB_GPHY_OCP_MSB_0_CFG_CPU_OCPADR_MASK, val));
+- if (ret)
+- return ret;
+-
+- /* Set PHY register address */
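+-	/* An OCP address is split across two registers: bits [15:10] form
+-	 * the prefix programmed above, bits [9:6] and [5:1] are packed into
+-	 * the indirect access address register, and bit 0 is dropped (PHY
+-	 * registers sit at even OCP addresses).
+-	 */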
+- val = RTL8365MB_PHY_BASE;
+- val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK, phy);
+- val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK,
+- ocp_addr >> 1);
+- val |= FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK,
+- ocp_addr >> 6);
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG,
+- val);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+-static int rtl8365mb_phy_ocp_read(struct realtek_smi *smi, int phy,
+- u32 ocp_addr, u16 *data)
+-{
+- u32 val;
+- int ret;
+-
+- ret = rtl8365mb_phy_poll_busy(smi);
+- if (ret)
+- return ret;
+-
+- ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
+- if (ret)
+- return ret;
+-
+- /* Execute read operation */
+- val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
+- RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
+- FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
+- RTL8365MB_INDIRECT_ACCESS_CTRL_RW_READ);
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
+- if (ret)
+- return ret;
+-
+- ret = rtl8365mb_phy_poll_busy(smi);
+- if (ret)
+- return ret;
+-
+- /* Get PHY register data */
+- ret = regmap_read(smi->map, RTL8365MB_INDIRECT_ACCESS_READ_DATA_REG,
+- &val);
+- if (ret)
+- return ret;
+-
+- *data = val & 0xFFFF;
+-
+- return 0;
+-}
+-
+-static int rtl8365mb_phy_ocp_write(struct realtek_smi *smi, int phy,
+- u32 ocp_addr, u16 data)
+-{
+- u32 val;
+- int ret;
+-
+- ret = rtl8365mb_phy_poll_busy(smi);
+- if (ret)
+- return ret;
+-
+- ret = rtl8365mb_phy_ocp_prepare(smi, phy, ocp_addr);
+- if (ret)
+- return ret;
+-
+- /* Set PHY register data */
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG,
+- data);
+- if (ret)
+- return ret;
+-
+- /* Execute write operation */
+- val = FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_MASK,
+- RTL8365MB_INDIRECT_ACCESS_CTRL_CMD_VALUE) |
+- FIELD_PREP(RTL8365MB_INDIRECT_ACCESS_CTRL_RW_MASK,
+- RTL8365MB_INDIRECT_ACCESS_CTRL_RW_WRITE);
+- ret = regmap_write(smi->map, RTL8365MB_INDIRECT_ACCESS_CTRL_REG, val);
+- if (ret)
+- return ret;
+-
+- ret = rtl8365mb_phy_poll_busy(smi);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+-static int rtl8365mb_phy_read(struct realtek_smi *smi, int phy, int regnum)
+-{
+- u32 ocp_addr;
+- u16 val;
+- int ret;
+-
+- if (phy > RTL8365MB_PHYADDRMAX)
+- return -EINVAL;
+-
+- if (regnum > RTL8365MB_PHYREGMAX)
+- return -EINVAL;
+-
+- ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
+-
+- ret = rtl8365mb_phy_ocp_read(smi, phy, ocp_addr, &val);
+- if (ret) {
+- dev_err(smi->dev,
+- "failed to read PHY%d reg %02x @ %04x, ret %d\n", phy,
+- regnum, ocp_addr, ret);
+- return ret;
+- }
+-
+- dev_dbg(smi->dev, "read PHY%d register 0x%02x @ %04x, val <- %04x\n",
+- phy, regnum, ocp_addr, val);
+-
+- return val;
+-}
+-
+-static int rtl8365mb_phy_write(struct realtek_smi *smi, int phy, int regnum,
+- u16 val)
+-{
+- u32 ocp_addr;
+- int ret;
+-
+- if (phy > RTL8365MB_PHYADDRMAX)
+- return -EINVAL;
+-
+- if (regnum > RTL8365MB_PHYREGMAX)
+- return -EINVAL;
+-
+- ocp_addr = RTL8365MB_PHY_OCP_ADDR_PHYREG_BASE + regnum * 2;
+-
+- ret = rtl8365mb_phy_ocp_write(smi, phy, ocp_addr, val);
+- if (ret) {
+- dev_err(smi->dev,
+- "failed to write PHY%d reg %02x @ %04x, ret %d\n", phy,
+- regnum, ocp_addr, ret);
+- return ret;
+- }
+-
+- dev_dbg(smi->dev, "write PHY%d register 0x%02x @ %04x, val -> %04x\n",
+- phy, regnum, ocp_addr, val);
+-
+- return 0;
+-}
+-
+-static enum dsa_tag_protocol
+-rtl8365mb_get_tag_protocol(struct dsa_switch *ds, int port,
+- enum dsa_tag_protocol mp)
+-{
+- return DSA_TAG_PROTO_RTL8_4;
+-}
+-
+-static int rtl8365mb_ext_config_rgmii(struct realtek_smi *smi, int port,
+- phy_interface_t interface)
+-{
+- struct device_node *dn;
+- struct dsa_port *dp;
+- int tx_delay = 0;
+- int rx_delay = 0;
+- int ext_port;
+- u32 val;
+- int ret;
+-
+- if (port == smi->cpu_port) {
+- ext_port = 1;
+- } else {
+- dev_err(smi->dev, "only one EXT port is currently supported\n");
+- return -EINVAL;
+- }
+-
+- dp = dsa_to_port(smi->ds, port);
+- dn = dp->dn;
+-
+- /* Set the RGMII TX/RX delay
+- *
+- * The Realtek vendor driver indicates the following possible
+- * configuration settings:
+- *
+- * TX delay:
+- * 0 = no delay, 1 = 2 ns delay
+- * RX delay:
+- * 0 = no delay, 7 = maximum delay
+- * Each step is approximately 0.3 ns, so the maximum delay is about
+- * 2.1 ns.
+- *
+- * The vendor driver also states that this must be configured *before*
+- * forcing the external interface into a particular mode, which is done
+- * in the rtl8365mb_phylink_mac_link_{up,down} functions.
+- *
+- * Only configure an RGMII TX (resp. RX) delay if the
+- * tx-internal-delay-ps (resp. rx-internal-delay-ps) OF property is
+- * specified. We ignore the detail of the RGMII interface mode
+- * (RGMII_{RXID, TXID, etc.}), as this is considered to be a PHY-only
+- * property.
+- */
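+-
+-	/* A hypothetical device tree fragment exercising both properties:
+-	 *
+-	 *   tx-internal-delay-ps = <2000>;  (2 ns   -> tx_delay = 1)
+-	 *   rx-internal-delay-ps = <900>;   (0.9 ns -> rx_delay = 3)
+-	 */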
+- if (!of_property_read_u32(dn, "tx-internal-delay-ps", &val)) {
+- val = val / 1000; /* convert to ns */
+-
+- if (val == 0 || val == 2)
+- tx_delay = val / 2;
+- else
+- dev_warn(smi->dev,
+- "EXT port TX delay must be 0 or 2 ns\n");
+- }
+-
+- if (!of_property_read_u32(dn, "rx-internal-delay-ps", &val)) {
+- val = DIV_ROUND_CLOSEST(val, 300); /* convert to 0.3 ns step */
+-
+- if (val <= 7)
+- rx_delay = val;
+- else
+- dev_warn(smi->dev,
+- "EXT port RX delay must be 0 to 2.1 ns\n");
+- }
+-
+- ret = regmap_update_bits(
+- smi->map, RTL8365MB_EXT_RGMXF_REG(ext_port),
+- RTL8365MB_EXT_RGMXF_TXDELAY_MASK |
+- RTL8365MB_EXT_RGMXF_RXDELAY_MASK,
+- FIELD_PREP(RTL8365MB_EXT_RGMXF_TXDELAY_MASK, tx_delay) |
+- FIELD_PREP(RTL8365MB_EXT_RGMXF_RXDELAY_MASK, rx_delay));
+- if (ret)
+- return ret;
+-
+- ret = regmap_update_bits(
+- smi->map, RTL8365MB_DIGITAL_INTERFACE_SELECT_REG(ext_port),
+- RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_MASK(ext_port),
+- RTL8365MB_EXT_PORT_MODE_RGMII
+- << RTL8365MB_DIGITAL_INTERFACE_SELECT_MODE_OFFSET(
+- ext_port));
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+-static int rtl8365mb_ext_config_forcemode(struct realtek_smi *smi, int port,
+- bool link, int speed, int duplex,
+- bool tx_pause, bool rx_pause)
+-{
+- u32 r_tx_pause;
+- u32 r_rx_pause;
+- u32 r_duplex;
+- u32 r_speed;
+- u32 r_link;
+- int ext_port;
+- int val;
+- int ret;
+-
+- if (port == smi->cpu_port) {
+- ext_port = 1;
+- } else {
+- dev_err(smi->dev, "only one EXT port is currently supported\n");
+- return -EINVAL;
+- }
+-
+- if (link) {
+- /* Force the link up with the desired configuration */
+- r_link = 1;
+- r_rx_pause = rx_pause ? 1 : 0;
+- r_tx_pause = tx_pause ? 1 : 0;
+-
+- if (speed == SPEED_1000) {
+- r_speed = RTL8365MB_PORT_SPEED_1000M;
+- } else if (speed == SPEED_100) {
+- r_speed = RTL8365MB_PORT_SPEED_100M;
+- } else if (speed == SPEED_10) {
+- r_speed = RTL8365MB_PORT_SPEED_10M;
+- } else {
+- dev_err(smi->dev, "unsupported port speed %s\n",
+- phy_speed_to_str(speed));
+- return -EINVAL;
+- }
+-
+- if (duplex == DUPLEX_FULL) {
+- r_duplex = 1;
+- } else if (duplex == DUPLEX_HALF) {
+- r_duplex = 0;
+- } else {
+- dev_err(smi->dev, "unsupported duplex %s\n",
+- phy_duplex_to_str(duplex));
+- return -EINVAL;
+- }
+- } else {
+- /* Force the link down and reset any programmed configuration */
+- r_link = 0;
+- r_tx_pause = 0;
+- r_rx_pause = 0;
+- r_speed = 0;
+- r_duplex = 0;
+- }
+-
+- val = FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_EN_MASK, 1) |
+- FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_TXPAUSE_MASK,
+- r_tx_pause) |
+- FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_RXPAUSE_MASK,
+- r_rx_pause) |
+- FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_LINK_MASK, r_link) |
+- FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_DUPLEX_MASK,
+- r_duplex) |
+- FIELD_PREP(RTL8365MB_DIGITAL_INTERFACE_FORCE_SPEED_MASK, r_speed);
+- ret = regmap_write(smi->map,
+- RTL8365MB_DIGITAL_INTERFACE_FORCE_REG(ext_port),
+- val);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+-static bool rtl8365mb_phy_mode_supported(struct dsa_switch *ds, int port,
+- phy_interface_t interface)
+-{
+- if (dsa_is_user_port(ds, port) &&
+- (interface == PHY_INTERFACE_MODE_NA ||
+- interface == PHY_INTERFACE_MODE_INTERNAL ||
+- interface == PHY_INTERFACE_MODE_GMII))
+- /* Internal PHY */
+- return true;
+- else if (dsa_is_cpu_port(ds, port) &&
+- phy_interface_mode_is_rgmii(interface))
+- /* Extension MAC */
+- return true;
+-
+- return false;
+-}
+-
+-static void rtl8365mb_phylink_validate(struct dsa_switch *ds, int port,
+- unsigned long *supported,
+- struct phylink_link_state *state)
+-{
+- struct realtek_smi *smi = ds->priv;
+- __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0 };
+-
+- /* include/linux/phylink.h says:
+- * When @state->interface is %PHY_INTERFACE_MODE_NA, phylink
+- * expects the MAC driver to return all supported link modes.
+- */
+- if (state->interface != PHY_INTERFACE_MODE_NA &&
+- !rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
+- dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
+- phy_modes(state->interface), port);
+- linkmode_zero(supported);
+- return;
+- }
+-
+- phylink_set_port_modes(mask);
+-
+- phylink_set(mask, Autoneg);
+- phylink_set(mask, Pause);
+- phylink_set(mask, Asym_Pause);
+-
+- phylink_set(mask, 10baseT_Half);
+- phylink_set(mask, 10baseT_Full);
+- phylink_set(mask, 100baseT_Half);
+- phylink_set(mask, 100baseT_Full);
+- phylink_set(mask, 1000baseT_Full);
+-
+- linkmode_and(supported, supported, mask);
+- linkmode_and(state->advertising, state->advertising, mask);
+-}
+-
+-static void rtl8365mb_phylink_mac_config(struct dsa_switch *ds, int port,
+- unsigned int mode,
+- const struct phylink_link_state *state)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int ret;
+-
+- if (!rtl8365mb_phy_mode_supported(ds, port, state->interface)) {
+- dev_err(smi->dev, "phy mode %s is unsupported on port %d\n",
+- phy_modes(state->interface), port);
+- return;
+- }
+-
+- if (mode != MLO_AN_PHY && mode != MLO_AN_FIXED) {
+- dev_err(smi->dev,
+- "port %d supports only conventional PHY or fixed-link\n",
+- port);
+- return;
+- }
+-
+- if (phy_interface_mode_is_rgmii(state->interface)) {
+- ret = rtl8365mb_ext_config_rgmii(smi, port, state->interface);
+- if (ret)
+- dev_err(smi->dev,
+- "failed to configure RGMII mode on port %d: %d\n",
+- port, ret);
+- return;
+- }
+-
+- /* TODO: Implement MII and RMII modes, which the RTL8365MB-VC also
+- * supports
+- */
+-}
+-
+-static void rtl8365mb_phylink_mac_link_down(struct dsa_switch *ds, int port,
+- unsigned int mode,
+- phy_interface_t interface)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb_port *p;
+- struct rtl8365mb *mb;
+- int ret;
+-
+- mb = smi->chip_data;
+- p = &mb->ports[port];
+- cancel_delayed_work_sync(&p->mib_work);
+-
+- if (phy_interface_mode_is_rgmii(interface)) {
+- ret = rtl8365mb_ext_config_forcemode(smi, port, false, 0, 0,
+- false, false);
+- if (ret)
+- dev_err(smi->dev,
+- "failed to reset forced mode on port %d: %d\n",
+- port, ret);
+-
+- return;
+- }
+-}
+-
+-static void rtl8365mb_phylink_mac_link_up(struct dsa_switch *ds, int port,
+- unsigned int mode,
+- phy_interface_t interface,
+- struct phy_device *phydev, int speed,
+- int duplex, bool tx_pause,
+- bool rx_pause)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb_port *p;
+- struct rtl8365mb *mb;
+- int ret;
+-
+- mb = smi->chip_data;
+- p = &mb->ports[port];
+- schedule_delayed_work(&p->mib_work, 0);
+-
+- if (phy_interface_mode_is_rgmii(interface)) {
+- ret = rtl8365mb_ext_config_forcemode(smi, port, true, speed,
+- duplex, tx_pause,
+- rx_pause);
+- if (ret)
+- dev_err(smi->dev,
+- "failed to force mode on port %d: %d\n", port,
+- ret);
+-
+- return;
+- }
+-}
+-
+-static void rtl8365mb_port_stp_state_set(struct dsa_switch *ds, int port,
+- u8 state)
+-{
+- struct realtek_smi *smi = ds->priv;
+- enum rtl8365mb_stp_state val;
+- int msti = 0;
+-
+- switch (state) {
+- case BR_STATE_DISABLED:
+- val = RTL8365MB_STP_STATE_DISABLED;
+- break;
+- case BR_STATE_BLOCKING:
+- case BR_STATE_LISTENING:
+- val = RTL8365MB_STP_STATE_BLOCKING;
+- break;
+- case BR_STATE_LEARNING:
+- val = RTL8365MB_STP_STATE_LEARNING;
+- break;
+- case BR_STATE_FORWARDING:
+- val = RTL8365MB_STP_STATE_FORWARDING;
+- break;
+- default:
+- dev_err(smi->dev, "invalid STP state: %u\n", state);
+- return;
+- }
+-
+- regmap_update_bits(smi->map, RTL8365MB_MSTI_CTRL_REG(msti, port),
+- RTL8365MB_MSTI_CTRL_PORT_STATE_MASK(port),
+- val << RTL8365MB_MSTI_CTRL_PORT_STATE_OFFSET(port));
+-}
+-
+-static int rtl8365mb_port_set_learning(struct realtek_smi *smi, int port,
+- bool enable)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+-
+- /* Enable/disable learning by limiting the number of L2 addresses the
+- * port can learn. Realtek documentation states that a limit of zero
+- * disables learning. When enabling learning, set it to the chip's
+- * maximum.
+- */
+- return regmap_write(smi->map, RTL8365MB_LUT_PORT_LEARN_LIMIT_REG(port),
+- enable ? mb->learn_limit_max : 0);
+-}
+-
+-static int rtl8365mb_port_set_isolation(struct realtek_smi *smi, int port,
+- u32 mask)
+-{
+- return regmap_write(smi->map, RTL8365MB_PORT_ISOLATION_REG(port), mask);
+-}
+-
+-static int rtl8365mb_mib_counter_read(struct realtek_smi *smi, int port,
+- u32 offset, u32 length, u64 *mibvalue)
+-{
+- u64 tmpvalue = 0;
+- u32 val;
+- int ret;
+- int i;
+-
+- /* The MIB address is an SRAM address. We request a particular address
+-	 * and then poll the control register before reading the value from the
+-	 * counter registers.
+- */
+- ret = regmap_write(smi->map, RTL8365MB_MIB_ADDRESS_REG,
+- RTL8365MB_MIB_ADDRESS(port, offset));
+- if (ret)
+- return ret;
+-
+- /* Poll for completion */
+- ret = regmap_read_poll_timeout(smi->map, RTL8365MB_MIB_CTRL0_REG, val,
+- !(val & RTL8365MB_MIB_CTRL0_BUSY_MASK),
+- 10, 100);
+- if (ret)
+- return ret;
+-
+- /* Presumably this indicates a MIB counter read failure */
+- if (val & RTL8365MB_MIB_CTRL0_RESET_MASK)
+- return -EIO;
+-
+-	/* There are four MIB counter registers each holding a 16-bit word of a
+- * MIB counter. Depending on the offset, we should read from the upper
+- * two or lower two registers. In case the MIB counter is 4 words, we
+- * read from all four registers.
+- */
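+-	/* Worked example: a two-word counter at word offset 6 gives
+-	 * (6 + 1) % 4 = 3, so registers 3 and 2 are read; at word offset 4 it
+-	 * gives 1, so registers 1 and 0 are read. A four-word counter always
+-	 * reads registers 3 down to 0.
+-	 */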
+- if (length == 4)
+- offset = 3;
+- else
+- offset = (offset + 1) % 4;
+-
+- /* Read the MIB counter 16 bits at a time */
+- for (i = 0; i < length; i++) {
+- ret = regmap_read(smi->map,
+- RTL8365MB_MIB_COUNTER_REG(offset - i), &val);
+- if (ret)
+- return ret;
+-
+- tmpvalue = ((tmpvalue) << 16) | (val & 0xFFFF);
+- }
+-
+- /* Only commit the result if no error occurred */
+- *mibvalue = tmpvalue;
+-
+- return 0;
+-}
+-
+-static void rtl8365mb_get_ethtool_stats(struct dsa_switch *ds, int port, u64 *data)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb *mb;
+- int ret;
+- int i;
+-
+- mb = smi->chip_data;
+-
+- mutex_lock(&mb->mib_lock);
+- for (i = 0; i < RTL8365MB_MIB_END; i++) {
+- struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
+-
+- ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
+- mib->length, &data[i]);
+- if (ret) {
+- dev_err(smi->dev,
+- "failed to read port %d counters: %d\n", port,
+- ret);
+- break;
+- }
+- }
+- mutex_unlock(&mb->mib_lock);
+-}
+-
+-static void rtl8365mb_get_strings(struct dsa_switch *ds, int port, u32 stringset, u8 *data)
+-{
+- int i;
+-
+- if (stringset != ETH_SS_STATS)
+- return;
+-
+- for (i = 0; i < RTL8365MB_MIB_END; i++) {
+- struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
+-
+- strncpy(data + i * ETH_GSTRING_LEN, mib->name, ETH_GSTRING_LEN);
+- }
+-}
+-
+-static int rtl8365mb_get_sset_count(struct dsa_switch *ds, int port, int sset)
+-{
+- if (sset != ETH_SS_STATS)
+- return -EOPNOTSUPP;
+-
+- return RTL8365MB_MIB_END;
+-}
+-
+-static void rtl8365mb_get_phy_stats(struct dsa_switch *ds, int port,
+- struct ethtool_eth_phy_stats *phy_stats)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb_mib_counter *mib;
+- struct rtl8365mb *mb;
+-
+- mb = smi->chip_data;
+- mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3StatsSymbolErrors];
+-
+- mutex_lock(&mb->mib_lock);
+- rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
+- &phy_stats->SymbolErrorDuringCarrier);
+- mutex_unlock(&mb->mib_lock);
+-}
+-
+-static void rtl8365mb_get_mac_stats(struct dsa_switch *ds, int port,
+- struct ethtool_eth_mac_stats *mac_stats)
+-{
+- u64 cnt[RTL8365MB_MIB_END] = {
+- [RTL8365MB_MIB_ifOutOctets] = 1,
+- [RTL8365MB_MIB_ifOutUcastPkts] = 1,
+- [RTL8365MB_MIB_ifOutMulticastPkts] = 1,
+- [RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
+- [RTL8365MB_MIB_dot3OutPauseFrames] = 1,
+- [RTL8365MB_MIB_ifOutDiscards] = 1,
+- [RTL8365MB_MIB_ifInOctets] = 1,
+- [RTL8365MB_MIB_ifInUcastPkts] = 1,
+- [RTL8365MB_MIB_ifInMulticastPkts] = 1,
+- [RTL8365MB_MIB_ifInBroadcastPkts] = 1,
+- [RTL8365MB_MIB_dot3InPauseFrames] = 1,
+- [RTL8365MB_MIB_dot3StatsSingleCollisionFrames] = 1,
+- [RTL8365MB_MIB_dot3StatsMultipleCollisionFrames] = 1,
+- [RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
+- [RTL8365MB_MIB_dot3StatsDeferredTransmissions] = 1,
+- [RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
+- [RTL8365MB_MIB_dot3StatsExcessiveCollisions] = 1,
+- };
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb *mb;
+- int ret;
+- int i;
+-
+- mb = smi->chip_data;
+-
+- mutex_lock(&mb->mib_lock);
+- for (i = 0; i < RTL8365MB_MIB_END; i++) {
+- struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
+-
+- /* Only fetch required MIB counters (marked = 1 above) */
+- if (!cnt[i])
+- continue;
+-
+- ret = rtl8365mb_mib_counter_read(smi, port, mib->offset,
+- mib->length, &cnt[i]);
+- if (ret)
+- break;
+- }
+- mutex_unlock(&mb->mib_lock);
+-
+- /* The RTL8365MB-VC exposes MIB objects, which we have to translate into
+- * IEEE 802.3 Managed Objects. This is not always completely faithful,
+-	 * but we try our best. See RFC 3635 for a detailed treatment of the
+- * subject.
+- */
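+-
+-	/* About the "- 18 * frames" terms below: the chip's if{In,Out}Octets
+-	 * include the 14-byte MAC header and 4-byte FCS of every frame, while
+-	 * the IEEE Octets{Transmitted,Received}OK objects count only data and
+-	 * padding octets.
+-	 */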
+-
+- mac_stats->FramesTransmittedOK = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
+- cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
+- cnt[RTL8365MB_MIB_ifOutBroadcastPkts] +
+- cnt[RTL8365MB_MIB_dot3OutPauseFrames] -
+- cnt[RTL8365MB_MIB_ifOutDiscards];
+- mac_stats->SingleCollisionFrames =
+- cnt[RTL8365MB_MIB_dot3StatsSingleCollisionFrames];
+- mac_stats->MultipleCollisionFrames =
+- cnt[RTL8365MB_MIB_dot3StatsMultipleCollisionFrames];
+- mac_stats->FramesReceivedOK = cnt[RTL8365MB_MIB_ifInUcastPkts] +
+- cnt[RTL8365MB_MIB_ifInMulticastPkts] +
+- cnt[RTL8365MB_MIB_ifInBroadcastPkts] +
+- cnt[RTL8365MB_MIB_dot3InPauseFrames];
+- mac_stats->FrameCheckSequenceErrors =
+- cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
+- mac_stats->OctetsTransmittedOK = cnt[RTL8365MB_MIB_ifOutOctets] -
+- 18 * mac_stats->FramesTransmittedOK;
+- mac_stats->FramesWithDeferredXmissions =
+- cnt[RTL8365MB_MIB_dot3StatsDeferredTransmissions];
+- mac_stats->LateCollisions = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
+- mac_stats->FramesAbortedDueToXSColls =
+- cnt[RTL8365MB_MIB_dot3StatsExcessiveCollisions];
+- mac_stats->OctetsReceivedOK = cnt[RTL8365MB_MIB_ifInOctets] -
+- 18 * mac_stats->FramesReceivedOK;
+- mac_stats->MulticastFramesXmittedOK =
+- cnt[RTL8365MB_MIB_ifOutMulticastPkts];
+- mac_stats->BroadcastFramesXmittedOK =
+- cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
+- mac_stats->MulticastFramesReceivedOK =
+- cnt[RTL8365MB_MIB_ifInMulticastPkts];
+- mac_stats->BroadcastFramesReceivedOK =
+- cnt[RTL8365MB_MIB_ifInBroadcastPkts];
+-}
+-
+-static void rtl8365mb_get_ctrl_stats(struct dsa_switch *ds, int port,
+- struct ethtool_eth_ctrl_stats *ctrl_stats)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb_mib_counter *mib;
+- struct rtl8365mb *mb;
+-
+- mb = smi->chip_data;
+- mib = &rtl8365mb_mib_counters[RTL8365MB_MIB_dot3ControlInUnknownOpcodes];
+-
+- mutex_lock(&mb->mib_lock);
+- rtl8365mb_mib_counter_read(smi, port, mib->offset, mib->length,
+- &ctrl_stats->UnsupportedOpcodesReceived);
+- mutex_unlock(&mb->mib_lock);
+-}
+-
+-static void rtl8365mb_stats_update(struct realtek_smi *smi, int port)
+-{
+- u64 cnt[RTL8365MB_MIB_END] = {
+- [RTL8365MB_MIB_ifOutOctets] = 1,
+- [RTL8365MB_MIB_ifOutUcastPkts] = 1,
+- [RTL8365MB_MIB_ifOutMulticastPkts] = 1,
+- [RTL8365MB_MIB_ifOutBroadcastPkts] = 1,
+- [RTL8365MB_MIB_ifOutDiscards] = 1,
+- [RTL8365MB_MIB_ifInOctets] = 1,
+- [RTL8365MB_MIB_ifInUcastPkts] = 1,
+- [RTL8365MB_MIB_ifInMulticastPkts] = 1,
+- [RTL8365MB_MIB_ifInBroadcastPkts] = 1,
+- [RTL8365MB_MIB_etherStatsDropEvents] = 1,
+- [RTL8365MB_MIB_etherStatsCollisions] = 1,
+- [RTL8365MB_MIB_etherStatsFragments] = 1,
+- [RTL8365MB_MIB_etherStatsJabbers] = 1,
+- [RTL8365MB_MIB_dot3StatsFCSErrors] = 1,
+- [RTL8365MB_MIB_dot3StatsLateCollisions] = 1,
+- };
+- struct rtl8365mb *mb = smi->chip_data;
+- struct rtnl_link_stats64 *stats;
+- int ret;
+- int i;
+-
+- stats = &mb->ports[port].stats;
+-
+- mutex_lock(&mb->mib_lock);
+- for (i = 0; i < RTL8365MB_MIB_END; i++) {
+- struct rtl8365mb_mib_counter *c = &rtl8365mb_mib_counters[i];
+-
+- /* Only fetch required MIB counters (marked = 1 above) */
+- if (!cnt[i])
+- continue;
+-
+- ret = rtl8365mb_mib_counter_read(smi, port, c->offset,
+- c->length, &cnt[i]);
+- if (ret)
+- break;
+- }
+- mutex_unlock(&mb->mib_lock);
+-
+- /* Don't update statistics if there was an error reading the counters */
+- if (ret)
+- return;
+-
+- spin_lock(&mb->ports[port].stats_lock);
+-
+- stats->rx_packets = cnt[RTL8365MB_MIB_ifInUcastPkts] +
+- cnt[RTL8365MB_MIB_ifInMulticastPkts] +
+- cnt[RTL8365MB_MIB_ifInBroadcastPkts] -
+- cnt[RTL8365MB_MIB_ifOutDiscards];
+-
+- stats->tx_packets = cnt[RTL8365MB_MIB_ifOutUcastPkts] +
+- cnt[RTL8365MB_MIB_ifOutMulticastPkts] +
+- cnt[RTL8365MB_MIB_ifOutBroadcastPkts];
+-
+- /* if{In,Out}Octets includes FCS - remove it */
+- stats->rx_bytes = cnt[RTL8365MB_MIB_ifInOctets] - 4 * stats->rx_packets;
+- stats->tx_bytes =
+- cnt[RTL8365MB_MIB_ifOutOctets] - 4 * stats->tx_packets;
+-
+- stats->rx_dropped = cnt[RTL8365MB_MIB_etherStatsDropEvents];
+- stats->tx_dropped = cnt[RTL8365MB_MIB_ifOutDiscards];
+-
+- stats->multicast = cnt[RTL8365MB_MIB_ifInMulticastPkts];
+- stats->collisions = cnt[RTL8365MB_MIB_etherStatsCollisions];
+-
+- stats->rx_length_errors = cnt[RTL8365MB_MIB_etherStatsFragments] +
+- cnt[RTL8365MB_MIB_etherStatsJabbers];
+- stats->rx_crc_errors = cnt[RTL8365MB_MIB_dot3StatsFCSErrors];
+- stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors;
+-
+- stats->tx_aborted_errors = cnt[RTL8365MB_MIB_ifOutDiscards];
+- stats->tx_window_errors = cnt[RTL8365MB_MIB_dot3StatsLateCollisions];
+- stats->tx_errors = stats->tx_aborted_errors + stats->tx_window_errors;
+-
+- spin_unlock(&mb->ports[port].stats_lock);
+-}
+-
+-static void rtl8365mb_stats_poll(struct work_struct *work)
+-{
+- struct rtl8365mb_port *p = container_of(to_delayed_work(work),
+- struct rtl8365mb_port,
+- mib_work);
+- struct realtek_smi *smi = p->smi;
+-
+- rtl8365mb_stats_update(smi, p->index);
+-
+- schedule_delayed_work(&p->mib_work, RTL8365MB_STATS_INTERVAL_JIFFIES);
+-}
+-
+-static void rtl8365mb_get_stats64(struct dsa_switch *ds, int port,
+- struct rtnl_link_stats64 *s)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb_port *p;
+- struct rtl8365mb *mb;
+-
+- mb = smi->chip_data;
+- p = &mb->ports[port];
+-
+- spin_lock(&p->stats_lock);
+- memcpy(s, &p->stats, sizeof(*s));
+- spin_unlock(&p->stats_lock);
+-}
+-
+-static void rtl8365mb_stats_setup(struct realtek_smi *smi)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+- int i;
+-
+- /* Per-chip global mutex to protect MIB counter access, since doing
+- * so requires accessing a series of registers in a particular order.
+- */
+- mutex_init(&mb->mib_lock);
+-
+- for (i = 0; i < smi->num_ports; i++) {
+- struct rtl8365mb_port *p = &mb->ports[i];
+-
+- if (dsa_is_unused_port(smi->ds, i))
+- continue;
+-
+- /* Per-port spinlock to protect the stats64 data */
+- spin_lock_init(&p->stats_lock);
+-
+- /* This work polls the MIB counters and keeps the stats64 data
+- * up-to-date.
+- */
+- INIT_DELAYED_WORK(&p->mib_work, rtl8365mb_stats_poll);
+- }
+-}
+-
+-static void rtl8365mb_stats_teardown(struct realtek_smi *smi)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+- int i;
+-
+- for (i = 0; i < smi->num_ports; i++) {
+- struct rtl8365mb_port *p = &mb->ports[i];
+-
+- if (dsa_is_unused_port(smi->ds, i))
+- continue;
+-
+- cancel_delayed_work_sync(&p->mib_work);
+- }
+-}
+-
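+-/* The status registers appear to be write-1-to-clear: writing back the value
+- * just read acknowledges exactly the events that were observed, so events
+- * racing in between the read and the clear are not lost.
+- */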
+-static int rtl8365mb_get_and_clear_status_reg(struct realtek_smi *smi, u32 reg,
+- u32 *val)
+-{
+- int ret;
+-
+- ret = regmap_read(smi->map, reg, val);
+- if (ret)
+- return ret;
+-
+- return regmap_write(smi->map, reg, *val);
+-}
+-
+-static irqreturn_t rtl8365mb_irq(int irq, void *data)
+-{
+- struct realtek_smi *smi = data;
+- unsigned long line_changes = 0;
+- struct rtl8365mb *mb;
+- u32 stat;
+- int line;
+- int ret;
+-
+- mb = smi->chip_data;
+-
+- ret = rtl8365mb_get_and_clear_status_reg(smi, RTL8365MB_INTR_STATUS_REG,
+- &stat);
+- if (ret)
+- goto out_error;
+-
+- if (stat & RTL8365MB_INTR_LINK_CHANGE_MASK) {
+- u32 linkdown_ind;
+- u32 linkup_ind;
+- u32 val;
+-
+- ret = rtl8365mb_get_and_clear_status_reg(
+- smi, RTL8365MB_PORT_LINKUP_IND_REG, &val);
+- if (ret)
+- goto out_error;
+-
+- linkup_ind = FIELD_GET(RTL8365MB_PORT_LINKUP_IND_MASK, val);
+-
+- ret = rtl8365mb_get_and_clear_status_reg(
+- smi, RTL8365MB_PORT_LINKDOWN_IND_REG, &val);
+- if (ret)
+- goto out_error;
+-
+- linkdown_ind = FIELD_GET(RTL8365MB_PORT_LINKDOWN_IND_MASK, val);
+-
+- line_changes = (linkup_ind | linkdown_ind) & mb->port_mask;
+- }
+-
+- if (!line_changes)
+- goto out_none;
+-
+- for_each_set_bit(line, &line_changes, smi->num_ports) {
+- int child_irq = irq_find_mapping(smi->irqdomain, line);
+-
+- handle_nested_irq(child_irq);
+- }
+-
+- return IRQ_HANDLED;
+-
+-out_error:
+- dev_err(smi->dev, "failed to read interrupt status: %d\n", ret);
+-
+-out_none:
+- return IRQ_NONE;
+-}
+-
+-static struct irq_chip rtl8365mb_irq_chip = {
+- .name = "rtl8365mb",
+- /* The hardware doesn't support masking IRQs on a per-port basis */
+-};
+-
+-static int rtl8365mb_irq_map(struct irq_domain *domain, unsigned int irq,
+- irq_hw_number_t hwirq)
+-{
+- irq_set_chip_data(irq, domain->host_data);
+- irq_set_chip_and_handler(irq, &rtl8365mb_irq_chip, handle_simple_irq);
+- irq_set_nested_thread(irq, 1);
+- irq_set_noprobe(irq);
+-
+- return 0;
+-}
+-
+-static void rtl8365mb_irq_unmap(struct irq_domain *d, unsigned int irq)
+-{
+- irq_set_nested_thread(irq, 0);
+- irq_set_chip_and_handler(irq, NULL, NULL);
+- irq_set_chip_data(irq, NULL);
+-}
+-
+-static const struct irq_domain_ops rtl8365mb_irqdomain_ops = {
+- .map = rtl8365mb_irq_map,
+- .unmap = rtl8365mb_irq_unmap,
+- .xlate = irq_domain_xlate_onecell,
+-};
+-
+-static int rtl8365mb_set_irq_enable(struct realtek_smi *smi, bool enable)
+-{
+- return regmap_update_bits(smi->map, RTL8365MB_INTR_CTRL_REG,
+- RTL8365MB_INTR_LINK_CHANGE_MASK,
+- FIELD_PREP(RTL8365MB_INTR_LINK_CHANGE_MASK,
+- enable ? 1 : 0));
+-}
+-
+-static int rtl8365mb_irq_enable(struct realtek_smi *smi)
+-{
+- return rtl8365mb_set_irq_enable(smi, true);
+-}
+-
+-static int rtl8365mb_irq_disable(struct realtek_smi *smi)
+-{
+- return rtl8365mb_set_irq_enable(smi, false);
+-}
+-
+-static int rtl8365mb_irq_setup(struct realtek_smi *smi)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+- struct device_node *intc;
+- u32 irq_trig;
+- int virq;
+- int irq;
+- u32 val;
+- int ret;
+- int i;
+-
+- intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
+- if (!intc) {
+- dev_err(smi->dev, "missing child interrupt-controller node\n");
+- return -EINVAL;
+- }
+-
+- /* rtl8365mb IRQs cascade off this one */
+- irq = of_irq_get(intc, 0);
+- if (irq <= 0) {
+- if (irq != -EPROBE_DEFER)
+- dev_err(smi->dev, "failed to get parent irq: %d\n",
+- irq);
+- ret = irq ? irq : -EINVAL;
+- goto out_put_node;
+- }
+-
+- smi->irqdomain = irq_domain_add_linear(intc, smi->num_ports,
+- &rtl8365mb_irqdomain_ops, smi);
+- if (!smi->irqdomain) {
+- dev_err(smi->dev, "failed to add irq domain\n");
+- ret = -ENOMEM;
+- goto out_put_node;
+- }
+-
+- for (i = 0; i < smi->num_ports; i++) {
+- virq = irq_create_mapping(smi->irqdomain, i);
+- if (!virq) {
+- dev_err(smi->dev,
+- "failed to create irq domain mapping\n");
+- ret = -EINVAL;
+- goto out_remove_irqdomain;
+- }
+-
+- irq_set_parent(virq, irq);
+- }
+-
+- /* Configure chip interrupt signal polarity */
+- irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
+- switch (irq_trig) {
+- case IRQF_TRIGGER_RISING:
+- case IRQF_TRIGGER_HIGH:
+- val = RTL8365MB_INTR_POLARITY_HIGH;
+- break;
+- case IRQF_TRIGGER_FALLING:
+- case IRQF_TRIGGER_LOW:
+- val = RTL8365MB_INTR_POLARITY_LOW;
+- break;
+- default:
+- dev_err(smi->dev, "unsupported irq trigger type %u\n",
+- irq_trig);
+- ret = -EINVAL;
+- goto out_remove_irqdomain;
+- }
+-
+- ret = regmap_update_bits(smi->map, RTL8365MB_INTR_POLARITY_REG,
+- RTL8365MB_INTR_POLARITY_MASK,
+- FIELD_PREP(RTL8365MB_INTR_POLARITY_MASK, val));
+- if (ret)
+- goto out_remove_irqdomain;
+-
+- /* Disable the interrupt in case the chip has it enabled on reset */
+- ret = rtl8365mb_irq_disable(smi);
+- if (ret)
+- goto out_remove_irqdomain;
+-
+- /* Clear the interrupt status register */
+- ret = regmap_write(smi->map, RTL8365MB_INTR_STATUS_REG,
+- RTL8365MB_INTR_ALL_MASK);
+- if (ret)
+- goto out_remove_irqdomain;
+-
+- ret = request_threaded_irq(irq, NULL, rtl8365mb_irq, IRQF_ONESHOT,
+- "rtl8365mb", smi);
+- if (ret) {
+- dev_err(smi->dev, "failed to request irq: %d\n", ret);
+- goto out_remove_irqdomain;
+- }
+-
+- /* Store the irq so that we know to free it during teardown */
+- mb->irq = irq;
+-
+- ret = rtl8365mb_irq_enable(smi);
+- if (ret)
+- goto out_free_irq;
+-
+- of_node_put(intc);
+-
+- return 0;
+-
+-out_free_irq:
+- free_irq(mb->irq, smi);
+- mb->irq = 0;
+-
+-out_remove_irqdomain:
+- for (i = 0; i < smi->num_ports; i++) {
+- virq = irq_find_mapping(smi->irqdomain, i);
+- irq_dispose_mapping(virq);
+- }
+-
+- irq_domain_remove(smi->irqdomain);
+- smi->irqdomain = NULL;
+-
+-out_put_node:
+- of_node_put(intc);
+-
+- return ret;
+-}
+-
+-static void rtl8365mb_irq_teardown(struct realtek_smi *smi)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+- int virq;
+- int i;
+-
+- if (mb->irq) {
+- free_irq(mb->irq, smi);
+- mb->irq = 0;
+- }
+-
+- if (smi->irqdomain) {
+- for (i = 0; i < smi->num_ports; i++) {
+- virq = irq_find_mapping(smi->irqdomain, i);
+- irq_dispose_mapping(virq);
+- }
+-
+- irq_domain_remove(smi->irqdomain);
+- smi->irqdomain = NULL;
+- }
+-}
+-
+-static int rtl8365mb_cpu_config(struct realtek_smi *smi)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+- struct rtl8365mb_cpu *cpu = &mb->cpu;
+- u32 val;
+- int ret;
+-
+- ret = regmap_update_bits(smi->map, RTL8365MB_CPU_PORT_MASK_REG,
+- RTL8365MB_CPU_PORT_MASK_MASK,
+- FIELD_PREP(RTL8365MB_CPU_PORT_MASK_MASK,
+- cpu->mask));
+- if (ret)
+- return ret;
+-
+- val = FIELD_PREP(RTL8365MB_CPU_CTRL_EN_MASK, cpu->enable ? 1 : 0) |
+- FIELD_PREP(RTL8365MB_CPU_CTRL_INSERTMODE_MASK, cpu->insert) |
+- FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_POSITION_MASK, cpu->position) |
+- FIELD_PREP(RTL8365MB_CPU_CTRL_RXBYTECOUNT_MASK, cpu->rx_length) |
+- FIELD_PREP(RTL8365MB_CPU_CTRL_TAG_FORMAT_MASK, cpu->format) |
+- FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_MASK, cpu->trap_port) |
+- FIELD_PREP(RTL8365MB_CPU_CTRL_TRAP_PORT_EXT_MASK,
+- cpu->trap_port >> 3);
+- ret = regmap_write(smi->map, RTL8365MB_CPU_CTRL_REG, val);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
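+-
+-/* The trap port is split across two fields above; the ">> 3" implies the
+- * base TRAP_PORT field is 3 bits wide. E.g. trap_port = 6 gives
+- * TRAP_PORT = 6 and TRAP_PORT_EXT = 0, while a hypothetical
+- * trap_port = 10 would give TRAP_PORT = 2 and TRAP_PORT_EXT = 1.
+- */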
+-
+-static int rtl8365mb_switch_init(struct realtek_smi *smi)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+- int ret;
+- int i;
+-
+- /* Do any chip-specific init jam before getting to the common stuff */
+- if (mb->jam_table) {
+- for (i = 0; i < mb->jam_size; i++) {
+- ret = regmap_write(smi->map, mb->jam_table[i].reg,
+- mb->jam_table[i].val);
+- if (ret)
+- return ret;
+- }
+- }
+-
+- /* Common init jam */
+- for (i = 0; i < ARRAY_SIZE(rtl8365mb_init_jam_common); i++) {
+- ret = regmap_write(smi->map, rtl8365mb_init_jam_common[i].reg,
+- rtl8365mb_init_jam_common[i].val);
+- if (ret)
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+-static int rtl8365mb_reset_chip(struct realtek_smi *smi)
+-{
+- u32 val;
+-
+- realtek_smi_write_reg_noack(smi, RTL8365MB_CHIP_RESET_REG,
+- FIELD_PREP(RTL8365MB_CHIP_RESET_HW_MASK,
+- 1));
+-
+- /* Realtek documentation says the chip needs 1 second to reset. Sleep
+- * for 100 ms before accessing any registers to prevent ACK timeouts.
+- */
+- msleep(100);
+- return regmap_read_poll_timeout(smi->map, RTL8365MB_CHIP_RESET_REG, val,
+- !(val & RTL8365MB_CHIP_RESET_HW_MASK),
+- 20000, 1e6);
+-}
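+-
+-/* The poll above re-reads the reset register every 20 ms (20000 us) and
+- * gives up after 1e6 us, i.e. the one second of reset time quoted by the
+- * Realtek documentation.
+- */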
+-
+-static int rtl8365mb_setup(struct dsa_switch *ds)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8365mb *mb;
+- int ret;
+- int i;
+-
+- mb = smi->chip_data;
+-
+- ret = rtl8365mb_reset_chip(smi);
+- if (ret) {
+- dev_err(smi->dev, "failed to reset chip: %d\n", ret);
+- goto out_error;
+- }
+-
+- /* Configure switch to vendor-defined initial state */
+- ret = rtl8365mb_switch_init(smi);
+- if (ret) {
+- dev_err(smi->dev, "failed to initialize switch: %d\n", ret);
+- goto out_error;
+- }
+-
+- /* Set up cascading IRQs */
+- ret = rtl8365mb_irq_setup(smi);
+- if (ret == -EPROBE_DEFER)
+- return ret;
+- else if (ret)
+- dev_info(smi->dev, "no interrupt support\n");
+-
+- /* Configure CPU tagging */
+- ret = rtl8365mb_cpu_config(smi);
+- if (ret)
+- goto out_teardown_irq;
+-
+- /* Configure ports */
+- for (i = 0; i < smi->num_ports; i++) {
+- struct rtl8365mb_port *p = &mb->ports[i];
+-
+- if (dsa_is_unused_port(smi->ds, i))
+- continue;
+-
+- /* Set up per-port private data */
+- p->smi = smi;
+- p->index = i;
+-
+- /* Forward only to the CPU */
+- ret = rtl8365mb_port_set_isolation(smi, i, BIT(smi->cpu_port));
+- if (ret)
+- goto out_teardown_irq;
+-
+- /* Disable learning */
+- ret = rtl8365mb_port_set_learning(smi, i, false);
+- if (ret)
+- goto out_teardown_irq;
+-
+- /* Set the initial STP state of all ports to DISABLED, otherwise
+- * ports will still forward frames to the CPU despite being
+- * administratively down by default.
+- */
+- rtl8365mb_port_stp_state_set(smi->ds, i, BR_STATE_DISABLED);
+- }
+-
+- /* Set maximum packet length to 1536 bytes */
+- ret = regmap_update_bits(smi->map, RTL8365MB_CFG0_MAX_LEN_REG,
+- RTL8365MB_CFG0_MAX_LEN_MASK,
+- FIELD_PREP(RTL8365MB_CFG0_MAX_LEN_MASK, 1536));
+- if (ret)
+- goto out_teardown_irq;
+-
+- ret = realtek_smi_setup_mdio(smi);
+- if (ret) {
+- dev_err(smi->dev, "could not set up MDIO bus\n");
+- goto out_teardown_irq;
+- }
+-
+- /* Start statistics counter polling */
+- rtl8365mb_stats_setup(smi);
+-
+- return 0;
+-
+-out_teardown_irq:
+- rtl8365mb_irq_teardown(smi);
+-
+-out_error:
+- return ret;
+-}
+-
+-static void rtl8365mb_teardown(struct dsa_switch *ds)
+-{
+- struct realtek_smi *smi = ds->priv;
+-
+- rtl8365mb_stats_teardown(smi);
+- rtl8365mb_irq_teardown(smi);
+-}
+-
+-static int rtl8365mb_get_chip_id_and_ver(struct regmap *map, u32 *id, u32 *ver)
+-{
+- int ret;
+-
+- /* For some reason we have to write a magic value to an arbitrary
+- * register whenever accessing the chip ID/version registers.
+- */
+- ret = regmap_write(map, RTL8365MB_MAGIC_REG, RTL8365MB_MAGIC_VALUE);
+- if (ret)
+- return ret;
+-
+- ret = regmap_read(map, RTL8365MB_CHIP_ID_REG, id);
+- if (ret)
+- return ret;
+-
+- ret = regmap_read(map, RTL8365MB_CHIP_VER_REG, ver);
+- if (ret)
+- return ret;
+-
+- /* Reset magic register */
+- ret = regmap_write(map, RTL8365MB_MAGIC_REG, 0);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+-static int rtl8365mb_detect(struct realtek_smi *smi)
+-{
+- struct rtl8365mb *mb = smi->chip_data;
+- u32 chip_id;
+- u32 chip_ver;
+- int ret;
+-
+- ret = rtl8365mb_get_chip_id_and_ver(smi->map, &chip_id, &chip_ver);
+- if (ret) {
+- dev_err(smi->dev, "failed to read chip id and version: %d\n",
+- ret);
+- return ret;
+- }
+-
+- switch (chip_id) {
+- case RTL8365MB_CHIP_ID_8365MB_VC:
+- dev_info(smi->dev,
+- "found an RTL8365MB-VC switch (ver=0x%04x)\n",
+- chip_ver);
+-
+- smi->cpu_port = RTL8365MB_CPU_PORT_NUM_8365MB_VC;
+- smi->num_ports = smi->cpu_port + 1;
+-
+- mb->smi = smi;
+- mb->chip_id = chip_id;
+- mb->chip_ver = chip_ver;
+- mb->port_mask = BIT(smi->num_ports) - 1;
+- mb->learn_limit_max = RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC;
+- mb->jam_table = rtl8365mb_init_jam_8365mb_vc;
+- mb->jam_size = ARRAY_SIZE(rtl8365mb_init_jam_8365mb_vc);
+-
+- mb->cpu.enable = 1;
+- mb->cpu.mask = BIT(smi->cpu_port);
+- mb->cpu.trap_port = smi->cpu_port;
+- mb->cpu.insert = RTL8365MB_CPU_INSERT_TO_ALL;
+- mb->cpu.position = RTL8365MB_CPU_POS_AFTER_SA;
+- mb->cpu.rx_length = RTL8365MB_CPU_RXLEN_64BYTES;
+- mb->cpu.format = RTL8365MB_CPU_FORMAT_8BYTES;
+-
+- break;
+- default:
+- dev_err(smi->dev,
+- "found an unknown Realtek switch (id=0x%04x, ver=0x%04x)\n",
+- chip_id, chip_ver);
+- return -ENODEV;
+- }
+-
+- return 0;
+-}
+-
+-static const struct dsa_switch_ops rtl8365mb_switch_ops = {
+- .get_tag_protocol = rtl8365mb_get_tag_protocol,
+- .setup = rtl8365mb_setup,
+- .teardown = rtl8365mb_teardown,
+- .phylink_validate = rtl8365mb_phylink_validate,
+- .phylink_mac_config = rtl8365mb_phylink_mac_config,
+- .phylink_mac_link_down = rtl8365mb_phylink_mac_link_down,
+- .phylink_mac_link_up = rtl8365mb_phylink_mac_link_up,
+- .port_stp_state_set = rtl8365mb_port_stp_state_set,
+- .get_strings = rtl8365mb_get_strings,
+- .get_ethtool_stats = rtl8365mb_get_ethtool_stats,
+- .get_sset_count = rtl8365mb_get_sset_count,
+- .get_eth_phy_stats = rtl8365mb_get_phy_stats,
+- .get_eth_mac_stats = rtl8365mb_get_mac_stats,
+- .get_eth_ctrl_stats = rtl8365mb_get_ctrl_stats,
+- .get_stats64 = rtl8365mb_get_stats64,
+-};
+-
+-static const struct realtek_smi_ops rtl8365mb_smi_ops = {
+- .detect = rtl8365mb_detect,
+- .phy_read = rtl8365mb_phy_read,
+- .phy_write = rtl8365mb_phy_write,
+-};
+-
+-const struct realtek_smi_variant rtl8365mb_variant = {
+- .ds_ops = &rtl8365mb_switch_ops,
+- .ops = &rtl8365mb_smi_ops,
+- .clk_delay = 10,
+- .cmd_read = 0xb9,
+- .cmd_write = 0xb8,
+- .chip_data_sz = sizeof(struct rtl8365mb),
+-};
+-EXPORT_SYMBOL_GPL(rtl8365mb_variant);
+diff --git a/drivers/net/dsa/rtl8366.c b/drivers/net/dsa/rtl8366.c
+deleted file mode 100644
+index bdb8d8d348807..0000000000000
+--- a/drivers/net/dsa/rtl8366.c
++++ /dev/null
+@@ -1,448 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Realtek SMI library helpers for the RTL8366x variants
+- * RTL8366RB and RTL8366S
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
+- * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
+- * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
+- */
+-#include <linux/if_bridge.h>
+-#include <net/dsa.h>
+-
+-#include "realtek-smi-core.h"
+-
+-int rtl8366_mc_is_used(struct realtek_smi *smi, int mc_index, int *used)
+-{
+- int ret;
+- int i;
+-
+- *used = 0;
+- for (i = 0; i < smi->num_ports; i++) {
+- int index = 0;
+-
+- ret = smi->ops->get_mc_index(smi, i, &index);
+- if (ret)
+- return ret;
+-
+- if (mc_index == index) {
+- *used = 1;
+- break;
+- }
+- }
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_mc_is_used);
+-
+-/**
+- * rtl8366_obtain_mc() - retrieve or allocate a VLAN member configuration
+- * @smi: the Realtek SMI device instance
+- * @vid: the VLAN ID to look up or allocate
+- * @vlanmc: will be filled in with a valid member config if successful
+- * @return: index of the member config or a negative error number
+- */
+-static int rtl8366_obtain_mc(struct realtek_smi *smi, int vid,
+- struct rtl8366_vlan_mc *vlanmc)
+-{
+- struct rtl8366_vlan_4k vlan4k;
+- int ret;
+- int i;
+-
+- /* Try to find an existing member config entry for this VID */
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
+- if (ret) {
+- dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
+- i, vid);
+- return ret;
+- }
+-
+- if (vid == vlanmc->vid)
+- return i;
+- }
+-
+- /* We have no MC entry for this VID, try to find an empty one */
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- ret = smi->ops->get_vlan_mc(smi, i, vlanmc);
+- if (ret) {
+- dev_err(smi->dev, "error searching for VLAN MC %d for VID %d\n",
+- i, vid);
+- return ret;
+- }
+-
+- if (vlanmc->vid == 0 && vlanmc->member == 0) {
+- /* Update the entry from the 4K table */
+- ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+- if (ret) {
+- dev_err(smi->dev, "error looking for 4K VLAN MC %d for VID %d\n",
+- i, vid);
+- return ret;
+- }
+-
+- vlanmc->vid = vid;
+- vlanmc->member = vlan4k.member;
+- vlanmc->untag = vlan4k.untag;
+- vlanmc->fid = vlan4k.fid;
+- ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
+- if (ret) {
+- dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
+- i, vid);
+- return ret;
+- }
+-
+- dev_dbg(smi->dev, "created new MC at index %d for VID %d\n",
+- i, vid);
+- return i;
+- }
+- }
+-
+- /* MC table is full, try to find an unused entry and replace it */
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- int used;
+-
+- ret = rtl8366_mc_is_used(smi, i, &used);
+- if (ret)
+- return ret;
+-
+- if (!used) {
+- /* Update the entry from the 4K table */
+- ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+- if (ret)
+- return ret;
+-
+- vlanmc->vid = vid;
+- vlanmc->member = vlan4k.member;
+- vlanmc->untag = vlan4k.untag;
+- vlanmc->fid = vlan4k.fid;
+- ret = smi->ops->set_vlan_mc(smi, i, vlanmc);
+- if (ret) {
+- dev_err(smi->dev, "unable to set/update VLAN MC %d for VID %d\n",
+- i, vid);
+- return ret;
+- }
+- dev_dbg(smi->dev, "recycled MC at index %i for VID %d\n",
+- i, vid);
+- return i;
+- }
+- }
+-
+- dev_err(smi->dev, "all VLAN member configurations are in use\n");
+- return -ENOSPC;
+-}
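+-
+-/* Summary of the lookup strategy above: (1) reuse an entry that already
+- * holds this VID; (2) otherwise seed a free entry (vid == 0 and
+- * member == 0) from the 4K table; (3) otherwise recycle an entry that no
+- * port's MC index currently references. Only when all three passes fail
+- * is -ENOSPC returned.
+- */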
+-
+-int rtl8366_set_vlan(struct realtek_smi *smi, int vid, u32 member,
+- u32 untag, u32 fid)
+-{
+- struct rtl8366_vlan_mc vlanmc;
+- struct rtl8366_vlan_4k vlan4k;
+- int mc;
+- int ret;
+-
+- if (!smi->ops->is_vlan_valid(smi, vid))
+- return -EINVAL;
+-
+- dev_dbg(smi->dev,
+- "setting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
+- vid, member, untag);
+-
+- /* Update the 4K table */
+- ret = smi->ops->get_vlan_4k(smi, vid, &vlan4k);
+- if (ret)
+- return ret;
+-
+- vlan4k.member |= member;
+- vlan4k.untag |= untag;
+- vlan4k.fid = fid;
+- ret = smi->ops->set_vlan_4k(smi, &vlan4k);
+- if (ret)
+- return ret;
+-
+- dev_dbg(smi->dev,
+- "resulting VLAN%d 4k members: 0x%02x, untagged: 0x%02x\n",
+- vid, vlan4k.member, vlan4k.untag);
+-
+- /* Find or allocate a member config for this VID */
+- ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
+- if (ret < 0)
+- return ret;
+- mc = ret;
+-
+- /* Update the MC entry */
+- vlanmc.member |= member;
+- vlanmc.untag |= untag;
+- vlanmc.fid = fid;
+-
+- /* Commit updates to the MC entry */
+- ret = smi->ops->set_vlan_mc(smi, mc, &vlanmc);
+- if (ret)
+- dev_err(smi->dev, "failed to commit changes to VLAN MC index %d for VID %d\n",
+- mc, vid);
+- else
+- dev_dbg(smi->dev,
+- "resulting VLAN%d MC members: 0x%02x, untagged: 0x%02x\n",
+- vid, vlanmc.member, vlanmc.untag);
+-
+- return ret;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_set_vlan);
+-
+-int rtl8366_set_pvid(struct realtek_smi *smi, unsigned int port,
+- unsigned int vid)
+-{
+- struct rtl8366_vlan_mc vlanmc;
+- int mc;
+- int ret;
+-
+- if (!smi->ops->is_vlan_valid(smi, vid))
+- return -EINVAL;
+-
+- /* Find or allocate a member config for this VID */
+- ret = rtl8366_obtain_mc(smi, vid, &vlanmc);
+- if (ret < 0)
+- return ret;
+- mc = ret;
+-
+- ret = smi->ops->set_mc_index(smi, port, mc);
+- if (ret) {
+- dev_err(smi->dev, "set PVID: failed to set MC index %d for port %d\n",
+- mc, port);
+- return ret;
+- }
+-
+- dev_dbg(smi->dev, "set PVID: the PVID for port %d set to %d using existing MC index %d\n",
+- port, vid, mc);
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_set_pvid);
+-
+-int rtl8366_enable_vlan4k(struct realtek_smi *smi, bool enable)
+-{
+- int ret;
+-
+- /* To enable 4k VLAN, ordinary VLAN must be enabled first,
+- * but if we disable 4k VLAN it is fine to leave ordinary
+- * VLAN enabled.
+- */
+- if (enable) {
+- /* Make sure VLAN is ON */
+- ret = smi->ops->enable_vlan(smi, true);
+- if (ret)
+- return ret;
+-
+- smi->vlan_enabled = true;
+- }
+-
+- ret = smi->ops->enable_vlan4k(smi, enable);
+- if (ret)
+- return ret;
+-
+- smi->vlan4k_enabled = enable;
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_enable_vlan4k);
+-
+-int rtl8366_enable_vlan(struct realtek_smi *smi, bool enable)
+-{
+- int ret;
+-
+- ret = smi->ops->enable_vlan(smi, enable);
+- if (ret)
+- return ret;
+-
+- smi->vlan_enabled = enable;
+-
+- /* If we turn VLAN off, make sure that we turn off
+- * 4k VLAN as well, if that happened to be on.
+- */
+- if (!enable) {
+- smi->vlan4k_enabled = false;
+- ret = smi->ops->enable_vlan4k(smi, false);
+- }
+-
+- return ret;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_enable_vlan);
+-
+-int rtl8366_reset_vlan(struct realtek_smi *smi)
+-{
+- struct rtl8366_vlan_mc vlanmc;
+- int ret;
+- int i;
+-
+- rtl8366_enable_vlan(smi, false);
+- rtl8366_enable_vlan4k(smi, false);
+-
+- /* Clear the 16 VLAN member configurations */
+- vlanmc.vid = 0;
+- vlanmc.priority = 0;
+- vlanmc.member = 0;
+- vlanmc.untag = 0;
+- vlanmc.fid = 0;
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
+- }
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_reset_vlan);
+-
+-int rtl8366_vlan_add(struct dsa_switch *ds, int port,
+- const struct switchdev_obj_port_vlan *vlan,
+- struct netlink_ext_ack *extack)
+-{
+- bool untagged = !!(vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED);
+- bool pvid = !!(vlan->flags & BRIDGE_VLAN_INFO_PVID);
+- struct realtek_smi *smi = ds->priv;
+- u32 member = 0;
+- u32 untag = 0;
+- int ret;
+-
+- if (!smi->ops->is_vlan_valid(smi, vlan->vid)) {
+- NL_SET_ERR_MSG_MOD(extack, "VLAN ID not valid");
+- return -EINVAL;
+- }
+-
+- /* Enable VLAN in the hardware
+- * FIXME: what's with this 4k business?
+- * Just rtl8366_enable_vlan() seems inconclusive.
+- */
+- ret = rtl8366_enable_vlan4k(smi, true);
+- if (ret) {
+- NL_SET_ERR_MSG_MOD(extack, "Failed to enable VLAN 4K");
+- return ret;
+- }
+-
+- dev_dbg(smi->dev, "add VLAN %d on port %d, %s, %s\n",
+- vlan->vid, port, untagged ? "untagged" : "tagged",
+- pvid ? "PVID" : "no PVID");
+-
+- member |= BIT(port);
+-
+- if (untagged)
+- untag |= BIT(port);
+-
+- ret = rtl8366_set_vlan(smi, vlan->vid, member, untag, 0);
+- if (ret) {
+- dev_err(smi->dev, "failed to set up VLAN %04x", vlan->vid);
+- return ret;
+- }
+-
+- if (!pvid)
+- return 0;
+-
+- ret = rtl8366_set_pvid(smi, port, vlan->vid);
+- if (ret) {
+- dev_err(smi->dev, "failed to set PVID on port %d to VLAN %04x",
+- port, vlan->vid);
+- return ret;
+- }
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_vlan_add);
+-
+-int rtl8366_vlan_del(struct dsa_switch *ds, int port,
+- const struct switchdev_obj_port_vlan *vlan)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int ret, i;
+-
+- dev_dbg(smi->dev, "del VLAN %d on port %d\n", vlan->vid, port);
+-
+- for (i = 0; i < smi->num_vlan_mc; i++) {
+- struct rtl8366_vlan_mc vlanmc;
+-
+- ret = smi->ops->get_vlan_mc(smi, i, &vlanmc);
+- if (ret)
+- return ret;
+-
+- if (vlan->vid == vlanmc.vid) {
+- /* Remove this port from the VLAN */
+- vlanmc.member &= ~BIT(port);
+- vlanmc.untag &= ~BIT(port);
+- /*
+- * If no ports are members of this VLAN
+- * anymore then clear the whole member
+- * config so it can be reused.
+- */
+- if (!vlanmc.member) {
+- vlanmc.vid = 0;
+- vlanmc.priority = 0;
+- vlanmc.fid = 0;
+- }
+- ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
+- if (ret) {
+- dev_err(smi->dev,
+- "failed to remove VLAN %04x\n",
+- vlan->vid);
+- return ret;
+- }
+- break;
+- }
+- }
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_vlan_del);
+-
+-void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+- uint8_t *data)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8366_mib_counter *mib;
+- int i;
+-
+- if (port >= smi->num_ports)
+- return;
+-
+- for (i = 0; i < smi->num_mib_counters; i++) {
+- mib = &smi->mib_counters[i];
+- strncpy(data + i * ETH_GSTRING_LEN,
+- mib->name, ETH_GSTRING_LEN);
+- }
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_get_strings);
+-
+-int rtl8366_get_sset_count(struct dsa_switch *ds, int port, int sset)
+-{
+- struct realtek_smi *smi = ds->priv;
+-
+- /* We only support SS_STATS */
+- if (sset != ETH_SS_STATS)
+- return 0;
+- if (port >= smi->num_ports)
+- return -EINVAL;
+-
+- return smi->num_mib_counters;
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_get_sset_count);
+-
+-void rtl8366_get_ethtool_stats(struct dsa_switch *ds, int port, uint64_t *data)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int i;
+- int ret;
+-
+- if (port >= smi->num_ports)
+- return;
+-
+- for (i = 0; i < smi->num_mib_counters; i++) {
+- struct rtl8366_mib_counter *mib;
+- u64 mibvalue = 0;
+-
+- mib = &smi->mib_counters[i];
+- ret = smi->ops->get_mib_counter(smi, port, mib, &mibvalue);
+- if (ret) {
+- dev_err(smi->dev, "error reading MIB counter %s\n",
+- mib->name);
+- }
+- data[i] = mibvalue;
+- }
+-}
+-EXPORT_SYMBOL_GPL(rtl8366_get_ethtool_stats);
+diff --git a/drivers/net/dsa/rtl8366rb.c b/drivers/net/dsa/rtl8366rb.c
+deleted file mode 100644
+index ecc19bd5115f0..0000000000000
+--- a/drivers/net/dsa/rtl8366rb.c
++++ /dev/null
+@@ -1,1814 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Realtek SMI subdriver for the Realtek RTL8366RB ethernet switch
+- *
+- * This is a sparsely documented chip; the only viable documentation seems
+- * to be a patched-up code drop from the vendor that appears in various
+- * GPL source trees.
+- *
+- * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+- * Copyright (C) 2009-2010 Gabor Juhos <juhosg@openwrt.org>
+- * Copyright (C) 2010 Antti Seppälä <a.seppala@gmail.com>
+- * Copyright (C) 2010 Roman Yeryomin <roman@advem.lv>
+- * Copyright (C) 2011 Colin Leitner <colin.leitner@googlemail.com>
+- */
+-
+-#include <linux/bitops.h>
+-#include <linux/etherdevice.h>
+-#include <linux/if_bridge.h>
+-#include <linux/interrupt.h>
+-#include <linux/irqdomain.h>
+-#include <linux/irqchip/chained_irq.h>
+-#include <linux/of_irq.h>
+-#include <linux/regmap.h>
+-
+-#include "realtek-smi-core.h"
+-
+-#define RTL8366RB_PORT_NUM_CPU 5
+-#define RTL8366RB_NUM_PORTS 6
+-#define RTL8366RB_PHY_NO_MAX 4
+-#define RTL8366RB_PHY_ADDR_MAX 31
+-
+-/* Switch Global Configuration register */
+-#define RTL8366RB_SGCR 0x0000
+-#define RTL8366RB_SGCR_EN_BC_STORM_CTRL BIT(0)
+-#define RTL8366RB_SGCR_MAX_LENGTH(a) ((a) << 4)
+-#define RTL8366RB_SGCR_MAX_LENGTH_MASK RTL8366RB_SGCR_MAX_LENGTH(0x3)
+-#define RTL8366RB_SGCR_MAX_LENGTH_1522 RTL8366RB_SGCR_MAX_LENGTH(0x0)
+-#define RTL8366RB_SGCR_MAX_LENGTH_1536 RTL8366RB_SGCR_MAX_LENGTH(0x1)
+-#define RTL8366RB_SGCR_MAX_LENGTH_1552 RTL8366RB_SGCR_MAX_LENGTH(0x2)
+-#define RTL8366RB_SGCR_MAX_LENGTH_16000 RTL8366RB_SGCR_MAX_LENGTH(0x3)
+-#define RTL8366RB_SGCR_EN_VLAN BIT(13)
+-#define RTL8366RB_SGCR_EN_VLAN_4KTB BIT(14)
+-
+-/* Port Enable Control register */
+-#define RTL8366RB_PECR 0x0001
+-
+-/* Switch per-port learning disablement register */
+-#define RTL8366RB_PORT_LEARNDIS_CTRL 0x0002
+-
+-/* Security control, actually aging register */
+-#define RTL8366RB_SECURITY_CTRL 0x0003
+-
+-#define RTL8366RB_SSCR2 0x0004
+-#define RTL8366RB_SSCR2_DROP_UNKNOWN_DA BIT(0)
+-
+-/* Port Mode Control registers */
+-#define RTL8366RB_PMC0 0x0005
+-#define RTL8366RB_PMC0_SPI BIT(0)
+-#define RTL8366RB_PMC0_EN_AUTOLOAD BIT(1)
+-#define RTL8366RB_PMC0_PROBE BIT(2)
+-#define RTL8366RB_PMC0_DIS_BISR BIT(3)
+-#define RTL8366RB_PMC0_ADCTEST BIT(4)
+-#define RTL8366RB_PMC0_SRAM_DIAG BIT(5)
+-#define RTL8366RB_PMC0_EN_SCAN BIT(6)
+-#define RTL8366RB_PMC0_P4_IOMODE_SHIFT 7
+-#define RTL8366RB_PMC0_P4_IOMODE_MASK GENMASK(9, 7)
+-#define RTL8366RB_PMC0_P5_IOMODE_SHIFT 10
+-#define RTL8366RB_PMC0_P5_IOMODE_MASK GENMASK(12, 10)
+-#define RTL8366RB_PMC0_SDSMODE_SHIFT 13
+-#define RTL8366RB_PMC0_SDSMODE_MASK GENMASK(15, 13)
+-#define RTL8366RB_PMC1 0x0006
+-
+-/* Port Mirror Control Register */
+-#define RTL8366RB_PMCR 0x0007
+-#define RTL8366RB_PMCR_SOURCE_PORT(a) (a)
+-#define RTL8366RB_PMCR_SOURCE_PORT_MASK 0x000f
+-#define RTL8366RB_PMCR_MONITOR_PORT(a) ((a) << 4)
+-#define RTL8366RB_PMCR_MONITOR_PORT_MASK 0x00f0
+-#define RTL8366RB_PMCR_MIRROR_RX BIT(8)
+-#define RTL8366RB_PMCR_MIRROR_TX BIT(9)
+-#define RTL8366RB_PMCR_MIRROR_SPC BIT(10)
+-#define RTL8366RB_PMCR_MIRROR_ISO BIT(11)
+-
+-/* bits 0..7 = port 0, bits 8..15 = port 1 */
+-#define RTL8366RB_PAACR0 0x0010
+-/* bits 0..7 = port 2, bits 8..15 = port 3 */
+-#define RTL8366RB_PAACR1 0x0011
+-/* bits 0..7 = port 4, bits 8..15 = port 5 */
+-#define RTL8366RB_PAACR2 0x0012
+-#define RTL8366RB_PAACR_SPEED_10M 0
+-#define RTL8366RB_PAACR_SPEED_100M 1
+-#define RTL8366RB_PAACR_SPEED_1000M 2
+-#define RTL8366RB_PAACR_FULL_DUPLEX BIT(2)
+-#define RTL8366RB_PAACR_LINK_UP BIT(4)
+-#define RTL8366RB_PAACR_TX_PAUSE BIT(5)
+-#define RTL8366RB_PAACR_RX_PAUSE BIT(6)
+-#define RTL8366RB_PAACR_AN BIT(7)
+-
+-#define RTL8366RB_PAACR_CPU_PORT (RTL8366RB_PAACR_SPEED_1000M | \
+- RTL8366RB_PAACR_FULL_DUPLEX | \
+- RTL8366RB_PAACR_LINK_UP | \
+- RTL8366RB_PAACR_TX_PAUSE | \
+- RTL8366RB_PAACR_RX_PAUSE)
+-
+-/* bits 0..7 = port 0, bits 8..15 = port 1 */
+-#define RTL8366RB_PSTAT0 0x0014
+-/* bits 0..7 = port 2, bits 8..15 = port 3 */
+-#define RTL8366RB_PSTAT1 0x0015
+-/* bits 0..7 = port 4, bits 8..15 = port 5 */
+-#define RTL8366RB_PSTAT2 0x0016
+-
+-#define RTL8366RB_POWER_SAVING_REG 0x0021
+-
+-/* Spanning tree status (STP) control, two bits per port per FID */
+-#define RTL8366RB_STP_STATE_BASE 0x0050 /* 0x0050..0x0057 */
+-#define RTL8366RB_STP_STATE_DISABLED 0x0
+-#define RTL8366RB_STP_STATE_BLOCKING 0x1
+-#define RTL8366RB_STP_STATE_LEARNING 0x2
+-#define RTL8366RB_STP_STATE_FORWARDING 0x3
+-#define RTL8366RB_STP_MASK GENMASK(1, 0)
+-#define RTL8366RB_STP_STATE(port, state) \
+- ((state) << ((port) * 2))
+-#define RTL8366RB_STP_STATE_MASK(port) \
+- RTL8366RB_STP_STATE((port), RTL8366RB_STP_MASK)
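+-
+-/* Worked example: RTL8366RB_STP_STATE(2, RTL8366RB_STP_STATE_FORWARDING)
+- * is 0x3 << (2 * 2) = 0x30, and RTL8366RB_STP_STATE_MASK(2) is likewise
+- * 0x30, i.e. bits 5:4 of the per-FID register hold port 2's state.
+- */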
+-
+-/* CPU port control reg */
+-#define RTL8368RB_CPU_CTRL_REG 0x0061
+-#define RTL8368RB_CPU_PORTS_MSK 0x00FF
+-/* Disables inserting custom tag length/type 0x8899 */
+-#define RTL8368RB_CPU_NO_TAG BIT(15)
+-
+-#define RTL8366RB_SMAR0 0x0070 /* bits 0..15 */
+-#define RTL8366RB_SMAR1 0x0071 /* bits 16..31 */
+-#define RTL8366RB_SMAR2 0x0072 /* bits 32..47 */
+-
+-#define RTL8366RB_RESET_CTRL_REG 0x0100
+-#define RTL8366RB_CHIP_CTRL_RESET_HW BIT(0)
+-#define RTL8366RB_CHIP_CTRL_RESET_SW BIT(1)
+-
+-#define RTL8366RB_CHIP_ID_REG 0x0509
+-#define RTL8366RB_CHIP_ID_8366 0x5937
+-#define RTL8366RB_CHIP_VERSION_CTRL_REG 0x050A
+-#define RTL8366RB_CHIP_VERSION_MASK 0xf
+-
+-/* PHY registers control */
+-#define RTL8366RB_PHY_ACCESS_CTRL_REG 0x8000
+-#define RTL8366RB_PHY_CTRL_READ BIT(0)
+-#define RTL8366RB_PHY_CTRL_WRITE 0
+-#define RTL8366RB_PHY_ACCESS_BUSY_REG 0x8001
+-#define RTL8366RB_PHY_INT_BUSY BIT(0)
+-#define RTL8366RB_PHY_EXT_BUSY BIT(4)
+-#define RTL8366RB_PHY_ACCESS_DATA_REG 0x8002
+-#define RTL8366RB_PHY_EXT_CTRL_REG 0x8010
+-#define RTL8366RB_PHY_EXT_WRDATA_REG 0x8011
+-#define RTL8366RB_PHY_EXT_RDDATA_REG 0x8012
+-
+-#define RTL8366RB_PHY_REG_MASK 0x1f
+-#define RTL8366RB_PHY_PAGE_OFFSET 5
+-#define RTL8366RB_PHY_PAGE_MASK (0xf << 5)
+-#define RTL8366RB_PHY_NO_OFFSET 9
+-#define RTL8366RB_PHY_NO_MASK (0x1f << 9)
+-
+-/* VLAN Ingress Control Register 1, one bit per port.
+- * Bits 0..5 make the switch drop ingress frames without a VID, such as
+- * untagged or priority-tagged frames, for the respective port.
+- * Bits 6..11 make the switch drop ingress frames carrying a C-tag with
+- * VID != 0 for the respective port.
+- */
+-#define RTL8366RB_VLAN_INGRESS_CTRL1_REG 0x037E
+-#define RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port) (BIT((port)) | BIT((port) + 6))
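+-
+-/* Worked example: RTL8366RB_VLAN_INGRESS_CTRL1_DROP(2) is
+- * BIT(2) | BIT(8) = 0x0104, i.e. drop both VID-less frames and C-tagged
+- * frames with VID != 0 on port 2.
+- */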
+-
+-/* VLAN Ingress Control Register 2, one bit per port.
+- * Bits 0..5 make the switch drop all ingress frames with a VLAN
+- * classification that does not include the port in its member set.
+- */
+-#define RTL8366RB_VLAN_INGRESS_CTRL2_REG 0x037f
+-
+-/* LED control registers */
+-#define RTL8366RB_LED_BLINKRATE_REG 0x0430
+-#define RTL8366RB_LED_BLINKRATE_MASK 0x0007
+-#define RTL8366RB_LED_BLINKRATE_28MS 0x0000
+-#define RTL8366RB_LED_BLINKRATE_56MS 0x0001
+-#define RTL8366RB_LED_BLINKRATE_84MS 0x0002
+-#define RTL8366RB_LED_BLINKRATE_111MS 0x0003
+-#define RTL8366RB_LED_BLINKRATE_222MS 0x0004
+-#define RTL8366RB_LED_BLINKRATE_446MS 0x0005
+-
+-#define RTL8366RB_LED_CTRL_REG 0x0431
+-#define RTL8366RB_LED_OFF 0x0
+-#define RTL8366RB_LED_DUP_COL 0x1
+-#define RTL8366RB_LED_LINK_ACT 0x2
+-#define RTL8366RB_LED_SPD1000 0x3
+-#define RTL8366RB_LED_SPD100 0x4
+-#define RTL8366RB_LED_SPD10 0x5
+-#define RTL8366RB_LED_SPD1000_ACT 0x6
+-#define RTL8366RB_LED_SPD100_ACT 0x7
+-#define RTL8366RB_LED_SPD10_ACT 0x8
+-#define RTL8366RB_LED_SPD100_10_ACT 0x9
+-#define RTL8366RB_LED_FIBER 0xa
+-#define RTL8366RB_LED_AN_FAULT 0xb
+-#define RTL8366RB_LED_LINK_RX 0xc
+-#define RTL8366RB_LED_LINK_TX 0xd
+-#define RTL8366RB_LED_MASTER 0xe
+-#define RTL8366RB_LED_FORCE 0xf
+-#define RTL8366RB_LED_0_1_CTRL_REG 0x0432
+-#define RTL8366RB_LED_1_OFFSET 6
+-#define RTL8366RB_LED_2_3_CTRL_REG 0x0433
+-#define RTL8366RB_LED_3_OFFSET 6
+-
+-#define RTL8366RB_MIB_COUNT 33
+-#define RTL8366RB_GLOBAL_MIB_COUNT 1
+-#define RTL8366RB_MIB_COUNTER_PORT_OFFSET 0x0050
+-#define RTL8366RB_MIB_COUNTER_BASE 0x1000
+-#define RTL8366RB_MIB_CTRL_REG 0x13F0
+-#define RTL8366RB_MIB_CTRL_USER_MASK 0x0FFC
+-#define RTL8366RB_MIB_CTRL_BUSY_MASK BIT(0)
+-#define RTL8366RB_MIB_CTRL_RESET_MASK BIT(1)
+-#define RTL8366RB_MIB_CTRL_PORT_RESET(_p) BIT(2 + (_p))
+-#define RTL8366RB_MIB_CTRL_GLOBAL_RESET BIT(11)
+-
+-#define RTL8366RB_PORT_VLAN_CTRL_BASE 0x0063
+-#define RTL8366RB_PORT_VLAN_CTRL_REG(_p) \
+- (RTL8366RB_PORT_VLAN_CTRL_BASE + (_p) / 4)
+-#define RTL8366RB_PORT_VLAN_CTRL_MASK 0xf
+-#define RTL8366RB_PORT_VLAN_CTRL_SHIFT(_p) (4 * ((_p) % 4))
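+-
+-/* Worked example: for port 5, RTL8366RB_PORT_VLAN_CTRL_REG(5) is
+- * 0x0063 + 5 / 4 = 0x0064 and RTL8366RB_PORT_VLAN_CTRL_SHIFT(5) is
+- * 4 * (5 % 4) = 4, i.e. four 4-bit MC indexes are packed per register.
+- */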
+-
+-#define RTL8366RB_VLAN_TABLE_READ_BASE 0x018C
+-#define RTL8366RB_VLAN_TABLE_WRITE_BASE 0x0185
+-
+-#define RTL8366RB_TABLE_ACCESS_CTRL_REG 0x0180
+-#define RTL8366RB_TABLE_VLAN_READ_CTRL 0x0E01
+-#define RTL8366RB_TABLE_VLAN_WRITE_CTRL 0x0F01
+-
+-#define RTL8366RB_VLAN_MC_BASE(_x) (0x0020 + (_x) * 3)
+-
+-#define RTL8366RB_PORT_LINK_STATUS_BASE 0x0014
+-#define RTL8366RB_PORT_STATUS_SPEED_MASK 0x0003
+-#define RTL8366RB_PORT_STATUS_DUPLEX_MASK 0x0004
+-#define RTL8366RB_PORT_STATUS_LINK_MASK 0x0010
+-#define RTL8366RB_PORT_STATUS_TXPAUSE_MASK 0x0020
+-#define RTL8366RB_PORT_STATUS_RXPAUSE_MASK 0x0040
+-#define RTL8366RB_PORT_STATUS_AN_MASK 0x0080
+-
+-#define RTL8366RB_NUM_VLANS 16
+-#define RTL8366RB_NUM_LEDGROUPS 4
+-#define RTL8366RB_NUM_VIDS 4096
+-#define RTL8366RB_PRIORITYMAX 7
+-#define RTL8366RB_NUM_FIDS 8
+-#define RTL8366RB_FIDMAX 7
+-
+-#define RTL8366RB_PORT_1 BIT(0) /* In userspace port 0 */
+-#define RTL8366RB_PORT_2 BIT(1) /* In userspace port 1 */
+-#define RTL8366RB_PORT_3 BIT(2) /* In userspace port 2 */
+-#define RTL8366RB_PORT_4 BIT(3) /* In userspace port 3 */
+-#define RTL8366RB_PORT_5 BIT(4) /* In userspace port 4 */
+-
+-#define RTL8366RB_PORT_CPU BIT(5) /* CPU port */
+-
+-#define RTL8366RB_PORT_ALL (RTL8366RB_PORT_1 | \
+- RTL8366RB_PORT_2 | \
+- RTL8366RB_PORT_3 | \
+- RTL8366RB_PORT_4 | \
+- RTL8366RB_PORT_5 | \
+- RTL8366RB_PORT_CPU)
+-
+-#define RTL8366RB_PORT_ALL_BUT_CPU (RTL8366RB_PORT_1 | \
+- RTL8366RB_PORT_2 | \
+- RTL8366RB_PORT_3 | \
+- RTL8366RB_PORT_4 | \
+- RTL8366RB_PORT_5)
+-
+-#define RTL8366RB_PORT_ALL_EXTERNAL (RTL8366RB_PORT_1 | \
+- RTL8366RB_PORT_2 | \
+- RTL8366RB_PORT_3 | \
+- RTL8366RB_PORT_4)
+-
+-#define RTL8366RB_PORT_ALL_INTERNAL RTL8366RB_PORT_CPU
+-
+-/* First configuration word per member config, VID and prio */
+-#define RTL8366RB_VLAN_VID_MASK 0xfff
+-#define RTL8366RB_VLAN_PRIORITY_SHIFT 12
+-#define RTL8366RB_VLAN_PRIORITY_MASK 0x7
+-/* Second configuration word per member config, member and untagged */
+-#define RTL8366RB_VLAN_UNTAG_SHIFT 8
+-#define RTL8366RB_VLAN_UNTAG_MASK 0xff
+-#define RTL8366RB_VLAN_MEMBER_MASK 0xff
+-/* Third config word per member config, STAG currently unused */
+-#define RTL8366RB_VLAN_STAG_MBR_MASK 0xff
+-#define RTL8366RB_VLAN_STAG_MBR_SHIFT 8
+-#define RTL8366RB_VLAN_STAG_IDX_MASK 0x7
+-#define RTL8366RB_VLAN_STAG_IDX_SHIFT 5
+-#define RTL8366RB_VLAN_FID_MASK 0x7
+-
+-/* Port ingress bandwidth control */
+-#define RTL8366RB_IB_BASE 0x0200
+-#define RTL8366RB_IB_REG(pnum) (RTL8366RB_IB_BASE + (pnum))
+-#define RTL8366RB_IB_BDTH_MASK 0x3fff
+-#define RTL8366RB_IB_PREIFG BIT(14)
+-
+-/* Port egress bandwidth control */
+-#define RTL8366RB_EB_BASE 0x02d1
+-#define RTL8366RB_EB_REG(pnum) (RTL8366RB_EB_BASE + (pnum))
+-#define RTL8366RB_EB_BDTH_MASK 0x3fff
+-#define RTL8366RB_EB_PREIFG_REG 0x02f8
+-#define RTL8366RB_EB_PREIFG BIT(9)
+-
+-#define RTL8366RB_BDTH_SW_MAX 1048512 /* 1048576? */
+-#define RTL8366RB_BDTH_UNIT 64
+-#define RTL8366RB_BDTH_REG_DEFAULT 16383
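+-
+-/* Presumably the answer to the "1048576?" question above: the bandwidth
+- * field is 14 bits wide (mask 0x3fff, max 16383) in units of
+- * RTL8366RB_BDTH_UNIT, and 16383 * 64 = 1048512, just short of
+- * 2^20 = 1048576.
+- */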
+-
+-/* QOS */
+-#define RTL8366RB_QOS BIT(15)
+-/* Include/Exclude Preamble and IFG (20 bytes). 0:Exclude, 1:Include. */
+-#define RTL8366RB_QOS_DEFAULT_PREIFG 1
+-
+-/* Interrupt handling */
+-#define RTL8366RB_INTERRUPT_CONTROL_REG 0x0440
+-#define RTL8366RB_INTERRUPT_POLARITY BIT(0)
+-#define RTL8366RB_P4_RGMII_LED BIT(2)
+-#define RTL8366RB_INTERRUPT_MASK_REG 0x0441
+-#define RTL8366RB_INTERRUPT_LINK_CHGALL GENMASK(11, 0)
+-#define RTL8366RB_INTERRUPT_ACLEXCEED BIT(8)
+-#define RTL8366RB_INTERRUPT_STORMEXCEED BIT(9)
+-#define RTL8366RB_INTERRUPT_P4_FIBER BIT(12)
+-#define RTL8366RB_INTERRUPT_P4_UTP BIT(13)
+-#define RTL8366RB_INTERRUPT_VALID (RTL8366RB_INTERRUPT_LINK_CHGALL | \
+- RTL8366RB_INTERRUPT_ACLEXCEED | \
+- RTL8366RB_INTERRUPT_STORMEXCEED | \
+- RTL8366RB_INTERRUPT_P4_FIBER | \
+- RTL8366RB_INTERRUPT_P4_UTP)
+-#define RTL8366RB_INTERRUPT_STATUS_REG 0x0442
+-#define RTL8366RB_NUM_INTERRUPT 14 /* 0..13 */
+-
+-/* Port isolation registers */
+-#define RTL8366RB_PORT_ISO_BASE 0x0F08
+-#define RTL8366RB_PORT_ISO(pnum) (RTL8366RB_PORT_ISO_BASE + (pnum))
+-#define RTL8366RB_PORT_ISO_EN BIT(0)
+-#define RTL8366RB_PORT_ISO_PORTS_MASK GENMASK(7, 1)
+-#define RTL8366RB_PORT_ISO_PORTS(pmask) ((pmask) << 1)
+-
+-/* bits 0..5 enable force when cleared */
+-#define RTL8366RB_MAC_FORCE_CTRL_REG 0x0F11
+-
+-#define RTL8366RB_OAM_PARSER_REG 0x0F14
+-#define RTL8366RB_OAM_MULTIPLEXER_REG 0x0F15
+-
+-#define RTL8366RB_GREEN_FEATURE_REG 0x0F51
+-#define RTL8366RB_GREEN_FEATURE_MSK 0x0007
+-#define RTL8366RB_GREEN_FEATURE_TX BIT(0)
+-#define RTL8366RB_GREEN_FEATURE_RX BIT(2)
+-
+-/**
+- * struct rtl8366rb - RTL8366RB-specific data
+- * @max_mtu: per-port max MTU setting
+- * @pvid_enabled: if PVID is set for respective port
+- */
+-struct rtl8366rb {
+- unsigned int max_mtu[RTL8366RB_NUM_PORTS];
+- bool pvid_enabled[RTL8366RB_NUM_PORTS];
+-};
+-
+-static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = {
+- { 0, 0, 4, "IfInOctets" },
+- { 0, 4, 4, "EtherStatsOctets" },
+- { 0, 8, 2, "EtherStatsUnderSizePkts" },
+- { 0, 10, 2, "EtherFragments" },
+- { 0, 12, 2, "EtherStatsPkts64Octets" },
+- { 0, 14, 2, "EtherStatsPkts65to127Octets" },
+- { 0, 16, 2, "EtherStatsPkts128to255Octets" },
+- { 0, 18, 2, "EtherStatsPkts256to511Octets" },
+- { 0, 20, 2, "EtherStatsPkts512to1023Octets" },
+- { 0, 22, 2, "EtherStatsPkts1024to1518Octets" },
+- { 0, 24, 2, "EtherOversizeStats" },
+- { 0, 26, 2, "EtherStatsJabbers" },
+- { 0, 28, 2, "IfInUcastPkts" },
+- { 0, 30, 2, "EtherStatsMulticastPkts" },
+- { 0, 32, 2, "EtherStatsBroadcastPkts" },
+- { 0, 34, 2, "EtherStatsDropEvents" },
+- { 0, 36, 2, "Dot3StatsFCSErrors" },
+- { 0, 38, 2, "Dot3StatsSymbolErrors" },
+- { 0, 40, 2, "Dot3InPauseFrames" },
+- { 0, 42, 2, "Dot3ControlInUnknownOpcodes" },
+- { 0, 44, 4, "IfOutOctets" },
+- { 0, 48, 2, "Dot3StatsSingleCollisionFrames" },
+- { 0, 50, 2, "Dot3StatMultipleCollisionFrames" },
+- { 0, 52, 2, "Dot3sDeferredTransmissions" },
+- { 0, 54, 2, "Dot3StatsLateCollisions" },
+- { 0, 56, 2, "EtherStatsCollisions" },
+- { 0, 58, 2, "Dot3StatsExcessiveCollisions" },
+- { 0, 60, 2, "Dot3OutPauseFrames" },
+- { 0, 62, 2, "Dot1dBasePortDelayExceededDiscards" },
+- { 0, 64, 2, "Dot1dTpPortInDiscards" },
+- { 0, 66, 2, "IfOutUcastPkts" },
+- { 0, 68, 2, "IfOutMulticastPkts" },
+- { 0, 70, 2, "IfOutBroadcastPkts" },
+-};
+-
+-static int rtl8366rb_get_mib_counter(struct realtek_smi *smi,
+- int port,
+- struct rtl8366_mib_counter *mib,
+- u64 *mibvalue)
+-{
+- u32 addr, val;
+- int ret;
+- int i;
+-
+- addr = RTL8366RB_MIB_COUNTER_BASE +
+- RTL8366RB_MIB_COUNTER_PORT_OFFSET * (port) +
+- mib->offset;
+-
+- /* Write the counter's address first; the ASIC will then prepare the
+- * 64-bit counter value to be retrieved.
+- */
+- ret = regmap_write(smi->map, addr, 0); /* Write whatever */
+- if (ret)
+- return ret;
+-
+- /* Read MIB control register */
+- ret = regmap_read(smi->map, RTL8366RB_MIB_CTRL_REG, &val);
+- if (ret)
+- return -EIO;
+-
+- if (val & RTL8366RB_MIB_CTRL_BUSY_MASK)
+- return -EBUSY;
+-
+- if (val & RTL8366RB_MIB_CTRL_RESET_MASK)
+- return -EIO;
+-
+- /* Read each individual MIB 16 bits at a time */
+- *mibvalue = 0;
+- for (i = mib->length; i > 0; i--) {
+- ret = regmap_read(smi->map, addr + (i - 1), &val);
+- if (ret)
+- return ret;
+- *mibvalue = (*mibvalue << 16) | (val & 0xFFFF);
+- }
+- return 0;
+-}
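+-
+-/* Worked example of the address computation above: reading "IfInOctets"
+- * (offset 0, length 4) on port 2 starts at addr = 0x1000 + 0x0050 * 2 =
+- * 0x10A0; the loop then reads registers 0x10A3 down to 0x10A0, so the
+- * highest address supplies the most significant 16 bits.
+- */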
+-
+-static u32 rtl8366rb_get_irqmask(struct irq_data *d)
+-{
+- int line = irqd_to_hwirq(d);
+- u32 val;
+-
+- /* For line interrupts we combine link down in bits
+- * 6..11 with link up in bits 0..5 into one interrupt.
+- */
+- if (line < 12)
+- val = BIT(line) | BIT(line + 6);
+- else
+- val = BIT(line);
+- return val;
+-}
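+-
+-/* Worked example: for hwirq line 3 the mask is BIT(3) | BIT(9) = 0x0208,
+- * covering both the link-up (bit 3) and link-down (bit 9) status bits of
+- * port 3; lines 12 and above would pass through unchanged.
+- */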
+-
+-static void rtl8366rb_mask_irq(struct irq_data *d)
+-{
+- struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
+- int ret;
+-
+- ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
+- rtl8366rb_get_irqmask(d), 0);
+- if (ret)
+- dev_err(smi->dev, "could not mask IRQ\n");
+-}
+-
+-static void rtl8366rb_unmask_irq(struct irq_data *d)
+-{
+- struct realtek_smi *smi = irq_data_get_irq_chip_data(d);
+- int ret;
+-
+- ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_MASK_REG,
+- rtl8366rb_get_irqmask(d),
+- rtl8366rb_get_irqmask(d));
+- if (ret)
+- dev_err(smi->dev, "could not unmask IRQ\n");
+-}
+-
+-static irqreturn_t rtl8366rb_irq(int irq, void *data)
+-{
+- struct realtek_smi *smi = data;
+- u32 stat;
+- int ret;
+-
+- /* This clears the IRQ status register */
+- ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
+- &stat);
+- if (ret) {
+- dev_err(smi->dev, "can't read interrupt status\n");
+- return IRQ_NONE;
+- }
+- stat &= RTL8366RB_INTERRUPT_VALID;
+- if (!stat)
+- return IRQ_NONE;
+- while (stat) {
+- int line = __ffs(stat);
+- int child_irq;
+-
+- stat &= ~BIT(line);
+- /* For line interrupts we combine link down in bits
+- * 6..11 with link up in bits 0..5 into one interrupt.
+- */
+- if (line < 12 && line > 5)
+- line -= 5;
+- child_irq = irq_find_mapping(smi->irqdomain, line);
+- handle_nested_irq(child_irq);
+- }
+- return IRQ_HANDLED;
+-}
+-
+-static struct irq_chip rtl8366rb_irq_chip = {
+- .name = "RTL8366RB",
+- .irq_mask = rtl8366rb_mask_irq,
+- .irq_unmask = rtl8366rb_unmask_irq,
+-};
+-
+-static int rtl8366rb_irq_map(struct irq_domain *domain, unsigned int irq,
+- irq_hw_number_t hwirq)
+-{
+- irq_set_chip_data(irq, domain->host_data);
+- irq_set_chip_and_handler(irq, &rtl8366rb_irq_chip, handle_simple_irq);
+- irq_set_nested_thread(irq, 1);
+- irq_set_noprobe(irq);
+-
+- return 0;
+-}
+-
+-static void rtl8366rb_irq_unmap(struct irq_domain *d, unsigned int irq)
+-{
+- irq_set_nested_thread(irq, 0);
+- irq_set_chip_and_handler(irq, NULL, NULL);
+- irq_set_chip_data(irq, NULL);
+-}
+-
+-static const struct irq_domain_ops rtl8366rb_irqdomain_ops = {
+- .map = rtl8366rb_irq_map,
+- .unmap = rtl8366rb_irq_unmap,
+- .xlate = irq_domain_xlate_onecell,
+-};
+-
+-static int rtl8366rb_setup_cascaded_irq(struct realtek_smi *smi)
+-{
+- struct device_node *intc;
+- unsigned long irq_trig;
+- int irq;
+- int ret;
+- u32 val;
+- int i;
+-
+- intc = of_get_child_by_name(smi->dev->of_node, "interrupt-controller");
+- if (!intc) {
+- dev_err(smi->dev, "missing child interrupt-controller node\n");
+- return -EINVAL;
+- }
+- /* RTL8366RB IRQs cascade off this one */
+- irq = of_irq_get(intc, 0);
+- if (irq <= 0) {
+- dev_err(smi->dev, "failed to get parent IRQ\n");
+- ret = irq ? irq : -EINVAL;
+- goto out_put_node;
+- }
+-
+- /* This clears the IRQ status register */
+- ret = regmap_read(smi->map, RTL8366RB_INTERRUPT_STATUS_REG,
+- &val);
+- if (ret) {
+- dev_err(smi->dev, "can't read interrupt status\n");
+- goto out_put_node;
+- }
+-
+- /* Fetch IRQ edge information from the descriptor */
+- irq_trig = irqd_get_trigger_type(irq_get_irq_data(irq));
+- switch (irq_trig) {
+- case IRQF_TRIGGER_RISING:
+- case IRQF_TRIGGER_HIGH:
+- dev_info(smi->dev, "active high/rising IRQ\n");
+- val = 0;
+- break;
+- case IRQF_TRIGGER_FALLING:
+- case IRQF_TRIGGER_LOW:
+- dev_info(smi->dev, "active low/falling IRQ\n");
+- val = RTL8366RB_INTERRUPT_POLARITY;
+- break;
+- }
+- ret = regmap_update_bits(smi->map, RTL8366RB_INTERRUPT_CONTROL_REG,
+- RTL8366RB_INTERRUPT_POLARITY,
+- val);
+- if (ret) {
+- dev_err(smi->dev, "could not configure IRQ polarity\n");
+- goto out_put_node;
+- }
+-
+- ret = devm_request_threaded_irq(smi->dev, irq, NULL,
+- rtl8366rb_irq, IRQF_ONESHOT,
+- "RTL8366RB", smi);
+- if (ret) {
+- dev_err(smi->dev, "unable to request irq: %d\n", ret);
+- goto out_put_node;
+- }
+- smi->irqdomain = irq_domain_add_linear(intc,
+- RTL8366RB_NUM_INTERRUPT,
+- &rtl8366rb_irqdomain_ops,
+- smi);
+- if (!smi->irqdomain) {
+- dev_err(smi->dev, "failed to create IRQ domain\n");
+- ret = -EINVAL;
+- goto out_put_node;
+- }
+- for (i = 0; i < smi->num_ports; i++)
+- irq_set_parent(irq_create_mapping(smi->irqdomain, i), irq);
+-
+-out_put_node:
+- of_node_put(intc);
+- return ret;
+-}
+-
+-static int rtl8366rb_set_addr(struct realtek_smi *smi)
+-{
+- u8 addr[ETH_ALEN];
+- u16 val;
+- int ret;
+-
+- eth_random_addr(addr);
+-
+- dev_info(smi->dev, "set MAC: %02X:%02X:%02X:%02X:%02X:%02X\n",
+- addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
+- val = addr[0] << 8 | addr[1];
+- ret = regmap_write(smi->map, RTL8366RB_SMAR0, val);
+- if (ret)
+- return ret;
+- val = addr[2] << 8 | addr[3];
+- ret = regmap_write(smi->map, RTL8366RB_SMAR1, val);
+- if (ret)
+- return ret;
+- val = addr[4] << 8 | addr[5];
+- ret = regmap_write(smi->map, RTL8366RB_SMAR2, val);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+-/* Found in a vendor driver */
+-
+-/* Struct for handling the jam tables' entries */
+-struct rtl8366rb_jam_tbl_entry {
+- u16 reg;
+- u16 val;
+-};
+-
+-/* For the "version 0" early silicon, appear in most source releases */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_0[] = {
+- {0x000B, 0x0001}, {0x03A6, 0x0100}, {0x03A7, 0x0001}, {0x02D1, 0x3FFF},
+- {0x02D2, 0x3FFF}, {0x02D3, 0x3FFF}, {0x02D4, 0x3FFF}, {0x02D5, 0x3FFF},
+- {0x02D6, 0x3FFF}, {0x02D7, 0x3FFF}, {0x02D8, 0x3FFF}, {0x022B, 0x0688},
+- {0x022C, 0x0FAC}, {0x03D0, 0x4688}, {0x03D1, 0x01F5}, {0x0000, 0x0830},
+- {0x02F9, 0x0200}, {0x02F7, 0x7FFF}, {0x02F8, 0x03FF}, {0x0080, 0x03E8},
+- {0x0081, 0x00CE}, {0x0082, 0x00DA}, {0x0083, 0x0230}, {0xBE0F, 0x2000},
+- {0x0231, 0x422A}, {0x0232, 0x422A}, {0x0233, 0x422A}, {0x0234, 0x422A},
+- {0x0235, 0x422A}, {0x0236, 0x422A}, {0x0237, 0x422A}, {0x0238, 0x422A},
+- {0x0239, 0x422A}, {0x023A, 0x422A}, {0x023B, 0x422A}, {0x023C, 0x422A},
+- {0x023D, 0x422A}, {0x023E, 0x422A}, {0x023F, 0x422A}, {0x0240, 0x422A},
+- {0x0241, 0x422A}, {0x0242, 0x422A}, {0x0243, 0x422A}, {0x0244, 0x422A},
+- {0x0245, 0x422A}, {0x0246, 0x422A}, {0x0247, 0x422A}, {0x0248, 0x422A},
+- {0x0249, 0x0146}, {0x024A, 0x0146}, {0x024B, 0x0146}, {0xBE03, 0xC961},
+- {0x024D, 0x0146}, {0x024E, 0x0146}, {0x024F, 0x0146}, {0x0250, 0x0146},
+- {0xBE64, 0x0226}, {0x0252, 0x0146}, {0x0253, 0x0146}, {0x024C, 0x0146},
+- {0x0251, 0x0146}, {0x0254, 0x0146}, {0xBE62, 0x3FD0}, {0x0084, 0x0320},
+- {0x0255, 0x0146}, {0x0256, 0x0146}, {0x0257, 0x0146}, {0x0258, 0x0146},
+- {0x0259, 0x0146}, {0x025A, 0x0146}, {0x025B, 0x0146}, {0x025C, 0x0146},
+- {0x025D, 0x0146}, {0x025E, 0x0146}, {0x025F, 0x0146}, {0x0260, 0x0146},
+- {0x0261, 0xA23F}, {0x0262, 0x0294}, {0x0263, 0xA23F}, {0x0264, 0x0294},
+- {0x0265, 0xA23F}, {0x0266, 0x0294}, {0x0267, 0xA23F}, {0x0268, 0x0294},
+- {0x0269, 0xA23F}, {0x026A, 0x0294}, {0x026B, 0xA23F}, {0x026C, 0x0294},
+- {0x026D, 0xA23F}, {0x026E, 0x0294}, {0x026F, 0xA23F}, {0x0270, 0x0294},
+- {0x02F5, 0x0048}, {0xBE09, 0x0E00}, {0xBE1E, 0x0FA0}, {0xBE14, 0x8448},
+- {0xBE15, 0x1007}, {0xBE4A, 0xA284}, {0xC454, 0x3F0B}, {0xC474, 0x3F0B},
+- {0xBE48, 0x3672}, {0xBE4B, 0x17A7}, {0xBE4C, 0x0B15}, {0xBE52, 0x0EDD},
+- {0xBE49, 0x8C00}, {0xBE5B, 0x785C}, {0xBE5C, 0x785C}, {0xBE5D, 0x785C},
+- {0xBE61, 0x368A}, {0xBE63, 0x9B84}, {0xC456, 0xCC13}, {0xC476, 0xCC13},
+- {0xBE65, 0x307D}, {0xBE6D, 0x0005}, {0xBE6E, 0xE120}, {0xBE2E, 0x7BAF},
+-};
+-
+-/* This v1 init sequence is from Belkin F5D8235 U-Boot release */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_1[] = {
+- {0x0000, 0x0830}, {0x0001, 0x8000}, {0x0400, 0x8130}, {0xBE78, 0x3C3C},
+- {0x0431, 0x5432}, {0xBE37, 0x0CE4}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
+- {0xC44C, 0x1585}, {0xC44C, 0x1185}, {0xC44C, 0x1585}, {0xC46C, 0x1585},
+- {0xC46C, 0x1185}, {0xC46C, 0x1585}, {0xC451, 0x2135}, {0xC471, 0x2135},
+- {0xBE10, 0x8140}, {0xBE15, 0x0007}, {0xBE6E, 0xE120}, {0xBE69, 0xD20F},
+- {0xBE6B, 0x0320}, {0xBE24, 0xB000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF20},
+- {0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800}, {0xBE24, 0x0000},
+- {0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60}, {0xBE21, 0x0140},
+- {0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000}, {0xBE2E, 0x7B7A},
+- {0xBE36, 0x0CE4}, {0x02F5, 0x0048}, {0xBE77, 0x2940}, {0x000A, 0x83E0},
+- {0xBE79, 0x3C3C}, {0xBE00, 0x1340},
+-};
+-
+-/* This v2 init sequence is from Belkin F5D8235 U-Boot release */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_2[] = {
+- {0x0450, 0x0000}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
+- {0xC44F, 0x6250}, {0xC46F, 0x6250}, {0xC456, 0x0C14}, {0xC476, 0x0C14},
+- {0xC44C, 0x1C85}, {0xC44C, 0x1885}, {0xC44C, 0x1C85}, {0xC46C, 0x1C85},
+- {0xC46C, 0x1885}, {0xC46C, 0x1C85}, {0xC44C, 0x0885}, {0xC44C, 0x0881},
+- {0xC44C, 0x0885}, {0xC46C, 0x0885}, {0xC46C, 0x0881}, {0xC46C, 0x0885},
+- {0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
+- {0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6E, 0x0320},
+- {0xBE77, 0x2940}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
+- {0x8000, 0x0001}, {0xBE15, 0x1007}, {0x8000, 0x0000}, {0xBE15, 0x1007},
+- {0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160}, {0xBE10, 0x8140},
+- {0xBE00, 0x1340}, {0x0F51, 0x0010},
+-};
+-
+-/* Appears in a DDWRT code dump */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_ver_3[] = {
+- {0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0431, 0x5432},
+- {0x0F51, 0x0017}, {0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0},
+- {0xC456, 0x0C14}, {0xC476, 0x0C14}, {0xC454, 0x3F8B}, {0xC474, 0x3F8B},
+- {0xC450, 0x2071}, {0xC470, 0x2071}, {0xC451, 0x226B}, {0xC471, 0x226B},
+- {0xC452, 0xA293}, {0xC472, 0xA293}, {0xC44C, 0x1585}, {0xC44C, 0x1185},
+- {0xC44C, 0x1585}, {0xC46C, 0x1585}, {0xC46C, 0x1185}, {0xC46C, 0x1585},
+- {0xC44C, 0x0185}, {0xC44C, 0x0181}, {0xC44C, 0x0185}, {0xC46C, 0x0185},
+- {0xC46C, 0x0181}, {0xC46C, 0x0185}, {0xBE24, 0xB000}, {0xBE23, 0xFF51},
+- {0xBE22, 0xDF20}, {0xBE21, 0x0140}, {0xBE20, 0x00BB}, {0xBE24, 0xB800},
+- {0xBE24, 0x0000}, {0xBE24, 0x7000}, {0xBE23, 0xFF51}, {0xBE22, 0xDF60},
+- {0xBE21, 0x0140}, {0xBE20, 0x0077}, {0xBE24, 0x7800}, {0xBE24, 0x0000},
+- {0xBE2E, 0x7BA7}, {0xBE36, 0x1000}, {0xBE37, 0x1000}, {0x8000, 0x0001},
+- {0xBE69, 0xD50F}, {0x8000, 0x0000}, {0xBE69, 0xD50F}, {0xBE6B, 0x0320},
+- {0xBE77, 0x2800}, {0xBE78, 0x3C3C}, {0xBE79, 0x3C3C}, {0xBE6E, 0xE120},
+- {0x8000, 0x0001}, {0xBE10, 0x8140}, {0x8000, 0x0000}, {0xBE10, 0x8140},
+- {0xBE15, 0x1007}, {0xBE14, 0x0448}, {0xBE1E, 0x00A0}, {0xBE10, 0x8160},
+- {0xBE10, 0x8140}, {0xBE00, 0x1340}, {0x0450, 0x0000}, {0x0401, 0x0000},
+-};
+-
+-/* Belkin F5D8235 v1, "belkin,f5d8235-v1" */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_f5d8235[] = {
+- {0x0242, 0x02BF}, {0x0245, 0x02BF}, {0x0248, 0x02BF}, {0x024B, 0x02BF},
+- {0x024E, 0x02BF}, {0x0251, 0x02BF}, {0x0254, 0x0A3F}, {0x0256, 0x0A3F},
+- {0x0258, 0x0A3F}, {0x025A, 0x0A3F}, {0x025C, 0x0A3F}, {0x025E, 0x0A3F},
+- {0x0263, 0x007C}, {0x0100, 0x0004}, {0xBE5B, 0x3500}, {0x800E, 0x200F},
+- {0xBE1D, 0x0F00}, {0x8001, 0x5011}, {0x800A, 0xA2F4}, {0x800B, 0x17A3},
+- {0xBE4B, 0x17A3}, {0xBE41, 0x5011}, {0xBE17, 0x2100}, {0x8000, 0x8304},
+- {0xBE40, 0x8304}, {0xBE4A, 0xA2F4}, {0x800C, 0xA8D5}, {0x8014, 0x5500},
+- {0x8015, 0x0004}, {0xBE4C, 0xA8D5}, {0xBE59, 0x0008}, {0xBE09, 0x0E00},
+- {0xBE36, 0x1036}, {0xBE37, 0x1036}, {0x800D, 0x00FF}, {0xBE4D, 0x00FF},
+-};
+-
+-/* DGN3500, "netgear,dgn3500", "netgear,dgn3500b" */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_init_jam_dgn3500[] = {
+- {0x0000, 0x0830}, {0x0400, 0x8130}, {0x000A, 0x83ED}, {0x0F51, 0x0017},
+- {0x02F5, 0x0048}, {0x02FA, 0xFFDF}, {0x02FB, 0xFFE0}, {0x0450, 0x0000},
+- {0x0401, 0x0000}, {0x0431, 0x0960},
+-};
+-
+-/* This jam table activates "green ethernet", a low-power mode which is
+- * claimed to detect cable length and use no more power than necessary;
+- * ports should enter power-saving mode 10 seconds after a cable is
+- * disconnected. The table seems to always be the same.
+- */
+-static const struct rtl8366rb_jam_tbl_entry rtl8366rb_green_jam[] = {
+- {0xBE78, 0x323C}, {0xBE77, 0x5000}, {0xBE2E, 0x7BA7},
+- {0xBE59, 0x3459}, {0xBE5A, 0x745A}, {0xBE5B, 0x785C},
+- {0xBE5C, 0x785C}, {0xBE6E, 0xE120}, {0xBE79, 0x323C},
+-};
+-
+-/* Function that jams a table's entries into the proper registers */
+-static int rtl8366rb_jam_table(const struct rtl8366rb_jam_tbl_entry *jam_table,
+- int jam_size, struct realtek_smi *smi,
+- bool write_dbg)
+-{
+- u32 val;
+- int ret;
+- int i;
+-
+- for (i = 0; i < jam_size; i++) {
+- if ((jam_table[i].reg & 0xBE00) == 0xBE00) {
+- ret = regmap_read(smi->map,
+- RTL8366RB_PHY_ACCESS_BUSY_REG,
+- &val);
+- if (ret)
+- return ret;
+- if (!(val & RTL8366RB_PHY_INT_BUSY)) {
+- ret = regmap_write(smi->map,
+- RTL8366RB_PHY_ACCESS_CTRL_REG,
+- RTL8366RB_PHY_CTRL_WRITE);
+- if (ret)
+- return ret;
+- }
+- }
+- if (write_dbg)
+- dev_dbg(smi->dev, "jam %04x into register %04x\n",
+- jam_table[i].val,
+- jam_table[i].reg);
+- ret = regmap_write(smi->map,
+- jam_table[i].reg,
+- jam_table[i].val);
+- if (ret)
+- return ret;
+- }
+- return 0;
+-}
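+-
+-/* On the 0xBE00 check above: registers matching that pattern are
+- * apparently PHY registers; before jamming one, the routine checks the
+- * PHY busy flag and, if the internal PHY interface is idle, first arms a
+- * write cycle via RTL8366RB_PHY_ACCESS_CTRL_REG.
+- */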
+-
+-static int rtl8366rb_setup(struct dsa_switch *ds)
+-{
+- struct realtek_smi *smi = ds->priv;
+- const struct rtl8366rb_jam_tbl_entry *jam_table;
+- struct rtl8366rb *rb;
+- u32 chip_ver = 0;
+- u32 chip_id = 0;
+- int jam_size;
+- u32 val;
+- int ret;
+- int i;
+-
+- rb = smi->chip_data;
+-
+- ret = regmap_read(smi->map, RTL8366RB_CHIP_ID_REG, &chip_id);
+- if (ret) {
+- dev_err(smi->dev, "unable to read chip id\n");
+- return ret;
+- }
+-
+- switch (chip_id) {
+- case RTL8366RB_CHIP_ID_8366:
+- break;
+- default:
+- dev_err(smi->dev, "unknown chip id (%04x)\n", chip_id);
+- return -ENODEV;
+- }
+-
+- ret = regmap_read(smi->map, RTL8366RB_CHIP_VERSION_CTRL_REG,
+- &chip_ver);
+- if (ret) {
+- dev_err(smi->dev, "unable to read chip version\n");
+- return ret;
+- }
+-
+- dev_info(smi->dev, "RTL%04x ver %u chip found\n",
+- chip_id, chip_ver & RTL8366RB_CHIP_VERSION_MASK);
+-
+- /* Do the init dance using the right jam table */
+- switch (chip_ver) {
+- case 0:
+- jam_table = rtl8366rb_init_jam_ver_0;
+- jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_0);
+- break;
+- case 1:
+- jam_table = rtl8366rb_init_jam_ver_1;
+- jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_1);
+- break;
+- case 2:
+- jam_table = rtl8366rb_init_jam_ver_2;
+- jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_2);
+- break;
+- default:
+- jam_table = rtl8366rb_init_jam_ver_3;
+- jam_size = ARRAY_SIZE(rtl8366rb_init_jam_ver_3);
+- break;
+- }
+-
+- /* Special jam tables for special routers
+- * TODO: are these necessary? Maintainers, please test
+- * without them, using just the off-the-shelf tables.
+- */
+- if (of_machine_is_compatible("belkin,f5d8235-v1")) {
+- jam_table = rtl8366rb_init_jam_f5d8235;
+- jam_size = ARRAY_SIZE(rtl8366rb_init_jam_f5d8235);
+- }
+- if (of_machine_is_compatible("netgear,dgn3500") ||
+- of_machine_is_compatible("netgear,dgn3500b")) {
+- jam_table = rtl8366rb_init_jam_dgn3500;
+- jam_size = ARRAY_SIZE(rtl8366rb_init_jam_dgn3500);
+- }
+-
+- ret = rtl8366rb_jam_table(jam_table, jam_size, smi, true);
+- if (ret)
+- return ret;
+-
+- /* Isolate all user ports so each can only send packets to itself and the CPU port */
+- for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
+- ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(i),
+- RTL8366RB_PORT_ISO_PORTS(BIT(RTL8366RB_PORT_NUM_CPU)) |
+- RTL8366RB_PORT_ISO_EN);
+- if (ret)
+- return ret;
+- }
+- /* CPU port can send packets to all ports */
+- ret = regmap_write(smi->map, RTL8366RB_PORT_ISO(RTL8366RB_PORT_NUM_CPU),
+- RTL8366RB_PORT_ISO_PORTS(dsa_user_ports(ds)) |
+- RTL8366RB_PORT_ISO_EN);
+- if (ret)
+- return ret;
+-
+- /* Set up the "green ethernet" feature */
+- ret = rtl8366rb_jam_table(rtl8366rb_green_jam,
+- ARRAY_SIZE(rtl8366rb_green_jam), smi, false);
+- if (ret)
+- return ret;
+-
+- ret = regmap_write(smi->map,
+- RTL8366RB_GREEN_FEATURE_REG,
+- (chip_ver == 1) ? 0x0007 : 0x0003);
+- if (ret)
+- return ret;
+-
+- /* Vendor driver sets 0x240 in registers 0xc and 0xd (undocumented) */
+- ret = regmap_write(smi->map, 0x0c, 0x240);
+- if (ret)
+- return ret;
+- ret = regmap_write(smi->map, 0x0d, 0x240);
+- if (ret)
+- return ret;
+-
+- /* Set some random MAC address */
+- ret = rtl8366rb_set_addr(smi);
+- if (ret)
+- return ret;
+-
+- /* Enable CPU port with custom DSA tag 8899.
+- *
+- * If you set RTL8368RB_CPU_NO_TAG (bit 15) in this register,
+- * the custom tag is turned off.
+- */
+- ret = regmap_update_bits(smi->map, RTL8368RB_CPU_CTRL_REG,
+- 0xFFFF,
+- BIT(smi->cpu_port));
+- if (ret)
+- return ret;
+-
+- /* Make sure we default-enable the fixed CPU port */
+- ret = regmap_update_bits(smi->map, RTL8366RB_PECR,
+- BIT(smi->cpu_port),
+- 0);
+- if (ret)
+- return ret;
+-
+- /* Set maximum packet length to 1536 bytes */
+- ret = regmap_update_bits(smi->map, RTL8366RB_SGCR,
+- RTL8366RB_SGCR_MAX_LENGTH_MASK,
+- RTL8366RB_SGCR_MAX_LENGTH_1536);
+- if (ret)
+- return ret;
+- for (i = 0; i < RTL8366RB_NUM_PORTS; i++)
+- /* layer 2 size, see rtl8366rb_change_mtu() */
+- rb->max_mtu[i] = 1532;
+-
+- /* Disable learning for all ports */
+- ret = regmap_write(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
+- RTL8366RB_PORT_ALL);
+- if (ret)
+- return ret;
+-
+- /* Enable auto ageing for all ports */
+- ret = regmap_write(smi->map, RTL8366RB_SECURITY_CTRL, 0);
+- if (ret)
+- return ret;
+-
+-	/* Port 4 setup: this enables Port 4, usually the WAN port.
+-	 * The common PHY IO mode is apparently mode 0, which is not
+-	 * what the port is initialized to. There is no explanation of
+-	 * the IO modes in the Realtek source code; if your WAN port is
+-	 * connected to something exotic such as fiber, then this might
+- * be worth experimenting with.
+- */
+- ret = regmap_update_bits(smi->map, RTL8366RB_PMC0,
+- RTL8366RB_PMC0_P4_IOMODE_MASK,
+- 0 << RTL8366RB_PMC0_P4_IOMODE_SHIFT);
+- if (ret)
+- return ret;
+-
+- /* Accept all packets by default, we enable filtering on-demand */
+- ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
+- 0);
+- if (ret)
+- return ret;
+- ret = regmap_write(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
+- 0);
+- if (ret)
+- return ret;
+-
+- /* Don't drop packets whose DA has not been learned */
+- ret = regmap_update_bits(smi->map, RTL8366RB_SSCR2,
+- RTL8366RB_SSCR2_DROP_UNKNOWN_DA, 0);
+- if (ret)
+- return ret;
+-
+- /* Set blinking, TODO: make this configurable */
+- ret = regmap_update_bits(smi->map, RTL8366RB_LED_BLINKRATE_REG,
+- RTL8366RB_LED_BLINKRATE_MASK,
+- RTL8366RB_LED_BLINKRATE_56MS);
+- if (ret)
+- return ret;
+-
+- /* Set up LED activity:
+- * Each port has 4 LEDs, we configure all ports to the same
+- * behaviour (no individual config) but we can set up each
+- * LED separately.
+- */
+- if (smi->leds_disabled) {
+- /* Turn everything off */
+- regmap_update_bits(smi->map,
+- RTL8366RB_LED_0_1_CTRL_REG,
+- 0x0FFF, 0);
+- regmap_update_bits(smi->map,
+- RTL8366RB_LED_2_3_CTRL_REG,
+- 0x0FFF, 0);
+- regmap_update_bits(smi->map,
+- RTL8366RB_INTERRUPT_CONTROL_REG,
+- RTL8366RB_P4_RGMII_LED,
+- 0);
+- val = RTL8366RB_LED_OFF;
+- } else {
+- /* TODO: make this configurable per LED */
+- val = RTL8366RB_LED_FORCE;
+- }
+- for (i = 0; i < 4; i++) {
+- ret = regmap_update_bits(smi->map,
+- RTL8366RB_LED_CTRL_REG,
+- 0xf << (i * 4),
+- val << (i * 4));
+- if (ret)
+- return ret;
+- }
+-
+- ret = rtl8366_reset_vlan(smi);
+- if (ret)
+- return ret;
+-
+- ret = rtl8366rb_setup_cascaded_irq(smi);
+- if (ret)
+- dev_info(smi->dev, "no interrupt support\n");
+-
+- ret = realtek_smi_setup_mdio(smi);
+- if (ret) {
+- dev_info(smi->dev, "could not set up MDIO bus\n");
+- return -ENODEV;
+- }
+-
+- return 0;
+-}
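
The isolation setup in rtl8366rb_setup() above programs one register per user port: each user port may forward only to the CPU port, while the CPU port may forward to every user port. Below is a minimal userspace sketch of the same bitmask arithmetic; the <<1 placement of the ports field and the 5-user-port layout with CPU port 5 are assumptions made for the example, not read out of this patch.

#include <stdio.h>

#define BIT(n) (1U << (n))
#define PORT_NUM_CPU 5                       /* CPU port index; user ports 0..4 */
#define PORT_ISO_PORTS(pmask) ((pmask) << 1) /* assumed field placement */
#define PORT_ISO_EN BIT(0)

int main(void)
{
	unsigned int user_ports = BIT(PORT_NUM_CPU) - 1; /* 0b11111 */
	int i;

	/* each user port may forward to the CPU port only */
	for (i = 0; i < PORT_NUM_CPU; i++)
		printf("ISO(%d) = 0x%04x\n", i,
		       PORT_ISO_PORTS(BIT(PORT_NUM_CPU)) | PORT_ISO_EN);

	/* the CPU port may forward to every user port */
	printf("ISO(%d) = 0x%04x\n", PORT_NUM_CPU,
	       PORT_ISO_PORTS(user_ports) | PORT_ISO_EN);
	return 0;
}
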
+-
+-static enum dsa_tag_protocol rtl8366_get_tag_protocol(struct dsa_switch *ds,
+- int port,
+- enum dsa_tag_protocol mp)
+-{
+- /* This switch uses the 4 byte protocol A Realtek DSA tag */
+- return DSA_TAG_PROTO_RTL4_A;
+-}
+-
+-static void
+-rtl8366rb_mac_link_up(struct dsa_switch *ds, int port, unsigned int mode,
+- phy_interface_t interface, struct phy_device *phydev,
+- int speed, int duplex, bool tx_pause, bool rx_pause)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int ret;
+-
+- if (port != smi->cpu_port)
+- return;
+-
+- dev_dbg(smi->dev, "MAC link up on CPU port (%d)\n", port);
+-
+- /* Force the fixed CPU port into 1Gbit mode, no autonegotiation */
+- ret = regmap_update_bits(smi->map, RTL8366RB_MAC_FORCE_CTRL_REG,
+- BIT(port), BIT(port));
+- if (ret) {
+- dev_err(smi->dev, "failed to force 1Gbit on CPU port\n");
+- return;
+- }
+-
+- ret = regmap_update_bits(smi->map, RTL8366RB_PAACR2,
+- 0xFF00U,
+- RTL8366RB_PAACR_CPU_PORT << 8);
+- if (ret) {
+- dev_err(smi->dev, "failed to set PAACR on CPU port\n");
+- return;
+- }
+-
+- /* Enable the CPU port */
+- ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+- 0);
+- if (ret) {
+- dev_err(smi->dev, "failed to enable the CPU port\n");
+- return;
+- }
+-}
+-
+-static void
+-rtl8366rb_mac_link_down(struct dsa_switch *ds, int port, unsigned int mode,
+- phy_interface_t interface)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int ret;
+-
+- if (port != smi->cpu_port)
+- return;
+-
+- dev_dbg(smi->dev, "MAC link down on CPU port (%d)\n", port);
+-
+- /* Disable the CPU port */
+- ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+- BIT(port));
+- if (ret) {
+- dev_err(smi->dev, "failed to disable the CPU port\n");
+- return;
+- }
+-}
+-
+-static void rb8366rb_set_port_led(struct realtek_smi *smi,
+- int port, bool enable)
+-{
+- u16 val = enable ? 0x3f : 0;
+- int ret;
+-
+- if (smi->leds_disabled)
+- return;
+-
+- switch (port) {
+- case 0:
+- ret = regmap_update_bits(smi->map,
+- RTL8366RB_LED_0_1_CTRL_REG,
+- 0x3F, val);
+- break;
+- case 1:
+- ret = regmap_update_bits(smi->map,
+- RTL8366RB_LED_0_1_CTRL_REG,
+- 0x3F << RTL8366RB_LED_1_OFFSET,
+- val << RTL8366RB_LED_1_OFFSET);
+- break;
+- case 2:
+- ret = regmap_update_bits(smi->map,
+- RTL8366RB_LED_2_3_CTRL_REG,
+- 0x3F, val);
+- break;
+- case 3:
+- ret = regmap_update_bits(smi->map,
+- RTL8366RB_LED_2_3_CTRL_REG,
+- 0x3F << RTL8366RB_LED_3_OFFSET,
+- val << RTL8366RB_LED_3_OFFSET);
+- break;
+- case 4:
+- ret = regmap_update_bits(smi->map,
+- RTL8366RB_INTERRUPT_CONTROL_REG,
+- RTL8366RB_P4_RGMII_LED,
+- enable ? RTL8366RB_P4_RGMII_LED : 0);
+- break;
+- default:
+- dev_err(smi->dev, "no LED for port %d\n", port);
+- return;
+- }
+- if (ret)
+- dev_err(smi->dev, "error updating LED on port %d\n", port);
+-}
+-
+-static int
+-rtl8366rb_port_enable(struct dsa_switch *ds, int port,
+- struct phy_device *phy)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int ret;
+-
+- dev_dbg(smi->dev, "enable port %d\n", port);
+- ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+- 0);
+- if (ret)
+- return ret;
+-
+- rb8366rb_set_port_led(smi, port, true);
+- return 0;
+-}
+-
+-static void
+-rtl8366rb_port_disable(struct dsa_switch *ds, int port)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int ret;
+-
+- dev_dbg(smi->dev, "disable port %d\n", port);
+- ret = regmap_update_bits(smi->map, RTL8366RB_PECR, BIT(port),
+- BIT(port));
+- if (ret)
+- return;
+-
+- rb8366rb_set_port_led(smi, port, false);
+-}
+-
+-static int
+-rtl8366rb_port_bridge_join(struct dsa_switch *ds, int port,
+- struct dsa_bridge bridge,
+- bool *tx_fwd_offload)
+-{
+- struct realtek_smi *smi = ds->priv;
+- unsigned int port_bitmap = 0;
+- int ret, i;
+-
+- /* Loop over all other ports than the current one */
+- for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
+- /* Current port handled last */
+- if (i == port)
+- continue;
+- /* Not on this bridge */
+- if (!dsa_port_offloads_bridge(dsa_to_port(ds, i), &bridge))
+- continue;
+- /* Join this port to each other port on the bridge */
+- ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
+- RTL8366RB_PORT_ISO_PORTS(BIT(port)),
+- RTL8366RB_PORT_ISO_PORTS(BIT(port)));
+- if (ret)
+- dev_err(smi->dev, "failed to join port %d\n", port);
+-
+- port_bitmap |= BIT(i);
+- }
+-
+- /* Set the bits for the ports we can access */
+- return regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
+- RTL8366RB_PORT_ISO_PORTS(port_bitmap),
+- RTL8366RB_PORT_ISO_PORTS(port_bitmap));
+-}
+-
+-static void
+-rtl8366rb_port_bridge_leave(struct dsa_switch *ds, int port,
+- struct dsa_bridge bridge)
+-{
+- struct realtek_smi *smi = ds->priv;
+- unsigned int port_bitmap = 0;
+- int ret, i;
+-
+- /* Loop over all other ports than this one */
+- for (i = 0; i < RTL8366RB_PORT_NUM_CPU; i++) {
+- /* Current port handled last */
+- if (i == port)
+- continue;
+- /* Not on this bridge */
+- if (!dsa_port_offloads_bridge(dsa_to_port(ds, i), &bridge))
+- continue;
+- /* Remove this port from any other port on the bridge */
+- ret = regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(i),
+- RTL8366RB_PORT_ISO_PORTS(BIT(port)), 0);
+- if (ret)
+- dev_err(smi->dev, "failed to leave port %d\n", port);
+-
+- port_bitmap |= BIT(i);
+- }
+-
+-	/* Clear the bits for the ports we cannot access, leave ourselves */
+- regmap_update_bits(smi->map, RTL8366RB_PORT_ISO(port),
+- RTL8366RB_PORT_ISO_PORTS(port_bitmap), 0);
+-}
+-
+-/**
+- * rtl8366rb_drop_untagged() - make the switch drop untagged and C-tagged frames
+- * @smi: SMI state container
+- * @port: the port to drop untagged and C-tagged frames on
+- * @drop: whether to drop or pass untagged and C-tagged frames
+- */
+-static int rtl8366rb_drop_untagged(struct realtek_smi *smi, int port, bool drop)
+-{
+- return regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL1_REG,
+- RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port),
+- drop ? RTL8366RB_VLAN_INGRESS_CTRL1_DROP(port) : 0);
+-}
+-
+-static int rtl8366rb_vlan_filtering(struct dsa_switch *ds, int port,
+- bool vlan_filtering,
+- struct netlink_ext_ack *extack)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8366rb *rb;
+- int ret;
+-
+- rb = smi->chip_data;
+-
+- dev_dbg(smi->dev, "port %d: %s VLAN filtering\n", port,
+- vlan_filtering ? "enable" : "disable");
+-
+- /* If the port is not in the member set, the frame will be dropped */
+- ret = regmap_update_bits(smi->map, RTL8366RB_VLAN_INGRESS_CTRL2_REG,
+- BIT(port), vlan_filtering ? BIT(port) : 0);
+- if (ret)
+- return ret;
+-
+- /* If VLAN filtering is enabled and PVID is also enabled, we must
+- * not drop any untagged or C-tagged frames. If we turn off VLAN
+- * filtering on a port, we need to accept any frames.
+- */
+- if (vlan_filtering)
+- ret = rtl8366rb_drop_untagged(smi, port, !rb->pvid_enabled[port]);
+- else
+- ret = rtl8366rb_drop_untagged(smi, port, false);
+-
+- return ret;
+-}
+-
+-static int
+-rtl8366rb_port_pre_bridge_flags(struct dsa_switch *ds, int port,
+- struct switchdev_brport_flags flags,
+- struct netlink_ext_ack *extack)
+-{
+- /* We support enabling/disabling learning */
+- if (flags.mask & ~(BR_LEARNING))
+- return -EINVAL;
+-
+- return 0;
+-}
+-
+-static int
+-rtl8366rb_port_bridge_flags(struct dsa_switch *ds, int port,
+- struct switchdev_brport_flags flags,
+- struct netlink_ext_ack *extack)
+-{
+- struct realtek_smi *smi = ds->priv;
+- int ret;
+-
+- if (flags.mask & BR_LEARNING) {
+- ret = regmap_update_bits(smi->map, RTL8366RB_PORT_LEARNDIS_CTRL,
+- BIT(port),
+- (flags.val & BR_LEARNING) ? 0 : BIT(port));
+- if (ret)
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+-static void
+-rtl8366rb_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
+-{
+- struct realtek_smi *smi = ds->priv;
+- u32 val;
+- int i;
+-
+- switch (state) {
+- case BR_STATE_DISABLED:
+- val = RTL8366RB_STP_STATE_DISABLED;
+- break;
+- case BR_STATE_BLOCKING:
+- case BR_STATE_LISTENING:
+- val = RTL8366RB_STP_STATE_BLOCKING;
+- break;
+- case BR_STATE_LEARNING:
+- val = RTL8366RB_STP_STATE_LEARNING;
+- break;
+- case BR_STATE_FORWARDING:
+- val = RTL8366RB_STP_STATE_FORWARDING;
+- break;
+- default:
+- dev_err(smi->dev, "unknown bridge state requested\n");
+- return;
+- }
+-
+- /* Set the same status for the port on all the FIDs */
+- for (i = 0; i < RTL8366RB_NUM_FIDS; i++) {
+- regmap_update_bits(smi->map, RTL8366RB_STP_STATE_BASE + i,
+- RTL8366RB_STP_STATE_MASK(port),
+- RTL8366RB_STP_STATE(port, val));
+- }
+-}
+-
+-static void
+-rtl8366rb_port_fast_age(struct dsa_switch *ds, int port)
+-{
+- struct realtek_smi *smi = ds->priv;
+-
+- /* This will age out any learned L2 entries */
+- regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
+- BIT(port), BIT(port));
+- /* Restore the normal state of things */
+- regmap_update_bits(smi->map, RTL8366RB_SECURITY_CTRL,
+- BIT(port), 0);
+-}
+-
+-static int rtl8366rb_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+-{
+- struct realtek_smi *smi = ds->priv;
+- struct rtl8366rb *rb;
+- unsigned int max_mtu;
+- u32 len;
+- int i;
+-
+- /* Cache the per-port MTU setting */
+- rb = smi->chip_data;
+- rb->max_mtu[port] = new_mtu;
+-
+-	/* Cap the MTU for the entire switch at the largest per-port
+-	 * setting: the biggest MTU set for any one port becomes the
+-	 * MTU for the whole switch.
+-	 *
+-	 * The first setting, 1522 bytes, is max IP packet 1500 bytes,
+-	 * plus ethernet header, 1518 bytes, plus CPU tag, 4 bytes.
+-	 * This function should consider the parameter an SDU, so the
+-	 * MTU passed for this setting is 1518 bytes. The same logic
+-	 * of subtracting the DSA tag of 4 bytes applies to the other
+-	 * settings.
+-	 */
+- max_mtu = 1518;
+- for (i = 0; i < RTL8366RB_NUM_PORTS; i++) {
+- if (rb->max_mtu[i] > max_mtu)
+- max_mtu = rb->max_mtu[i];
+- }
+- if (max_mtu <= 1518)
+- len = RTL8366RB_SGCR_MAX_LENGTH_1522;
+- else if (max_mtu > 1518 && max_mtu <= 1532)
+- len = RTL8366RB_SGCR_MAX_LENGTH_1536;
+- else if (max_mtu > 1532 && max_mtu <= 1548)
+- len = RTL8366RB_SGCR_MAX_LENGTH_1552;
+- else
+- len = RTL8366RB_SGCR_MAX_LENGTH_16000;
+-
+- return regmap_update_bits(smi->map, RTL8366RB_SGCR,
+- RTL8366RB_SGCR_MAX_LENGTH_MASK,
+- len);
+-}
+-
+-static int rtl8366rb_max_mtu(struct dsa_switch *ds, int port)
+-{
+- /* The max MTU is 16000 bytes, so we subtract the CPU tag
+- * and the max presented to the system is 15996 bytes.
+- */
+- return 15996;
+-}
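
The change_mtu path above caches the requested MTU per port and reprograms the switch-wide frame length to the smallest hardware bucket that covers the largest cached value. A standalone sketch of that bucket selection follows; the enum values are stand-ins for the real SGCR register encodings.

#include <stdio.h>

enum { LEN_1522, LEN_1536, LEN_1552, LEN_16000 }; /* stand-ins for SGCR encodings */

/* Map the largest L2 frame size (SDU; the 4-byte CPU tag is already
 * accounted for) to a hardware length bucket, as in the code above.
 */
static int pick_len_bucket(const int *max_mtu, int nports)
{
	int i, max = 1518; /* 1500-byte IP MTU plus Ethernet header */

	for (i = 0; i < nports; i++)
		if (max_mtu[i] > max)
			max = max_mtu[i];

	if (max <= 1518)
		return LEN_1522;
	if (max <= 1532)
		return LEN_1536;
	if (max <= 1548)
		return LEN_1552;
	return LEN_16000;
}

int main(void)
{
	int mtus[5] = { 1518, 1518, 1530, 1518, 1518 };

	printf("bucket = %d\n", pick_len_bucket(mtus, 5)); /* LEN_1536 */
	return 0;
}
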
+-
+-static int rtl8366rb_get_vlan_4k(struct realtek_smi *smi, u32 vid,
+- struct rtl8366_vlan_4k *vlan4k)
+-{
+- u32 data[3];
+- int ret;
+- int i;
+-
+- memset(vlan4k, '\0', sizeof(struct rtl8366_vlan_4k));
+-
+- if (vid >= RTL8366RB_NUM_VIDS)
+- return -EINVAL;
+-
+- /* write VID */
+- ret = regmap_write(smi->map, RTL8366RB_VLAN_TABLE_WRITE_BASE,
+- vid & RTL8366RB_VLAN_VID_MASK);
+- if (ret)
+- return ret;
+-
+- /* write table access control word */
+- ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
+- RTL8366RB_TABLE_VLAN_READ_CTRL);
+- if (ret)
+- return ret;
+-
+- for (i = 0; i < 3; i++) {
+- ret = regmap_read(smi->map,
+- RTL8366RB_VLAN_TABLE_READ_BASE + i,
+- &data[i]);
+- if (ret)
+- return ret;
+- }
+-
+- vlan4k->vid = vid;
+- vlan4k->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
+- RTL8366RB_VLAN_UNTAG_MASK;
+- vlan4k->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
+- vlan4k->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
+-
+- return 0;
+-}
+-
+-static int rtl8366rb_set_vlan_4k(struct realtek_smi *smi,
+- const struct rtl8366_vlan_4k *vlan4k)
+-{
+- u32 data[3];
+- int ret;
+- int i;
+-
+- if (vlan4k->vid >= RTL8366RB_NUM_VIDS ||
+- vlan4k->member > RTL8366RB_VLAN_MEMBER_MASK ||
+- vlan4k->untag > RTL8366RB_VLAN_UNTAG_MASK ||
+- vlan4k->fid > RTL8366RB_FIDMAX)
+- return -EINVAL;
+-
+- data[0] = vlan4k->vid & RTL8366RB_VLAN_VID_MASK;
+- data[1] = (vlan4k->member & RTL8366RB_VLAN_MEMBER_MASK) |
+- ((vlan4k->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
+- RTL8366RB_VLAN_UNTAG_SHIFT);
+- data[2] = vlan4k->fid & RTL8366RB_VLAN_FID_MASK;
+-
+- for (i = 0; i < 3; i++) {
+- ret = regmap_write(smi->map,
+- RTL8366RB_VLAN_TABLE_WRITE_BASE + i,
+- data[i]);
+- if (ret)
+- return ret;
+- }
+-
+- /* write table access control word */
+- ret = regmap_write(smi->map, RTL8366RB_TABLE_ACCESS_CTRL_REG,
+- RTL8366RB_TABLE_VLAN_WRITE_CTRL);
+-
+- return ret;
+-}
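
Both 4K VLAN table accessors above marshal an entry through three 16-bit data words: VID in word 0, member and untag sets in word 1, FID in word 2. The sketch below redoes the packing and unpacking in isolation; the concrete mask and shift values are assumptions for the example, not copied from the driver headers.

#include <stdint.h>
#include <stdio.h>

/* Field layout assumed to mirror the driver's masks and shifts */
#define VLAN_VID_MASK     0xfff
#define VLAN_MEMBER_MASK  0xff
#define VLAN_UNTAG_MASK   0xff
#define VLAN_UNTAG_SHIFT  8
#define VLAN_FID_MASK     0x7

struct vlan4k { uint16_t vid; uint8_t member, untag, fid; };

static void pack(const struct vlan4k *v, uint16_t data[3])
{
	data[0] = v->vid & VLAN_VID_MASK;
	data[1] = (v->member & VLAN_MEMBER_MASK) |
		  ((v->untag & VLAN_UNTAG_MASK) << VLAN_UNTAG_SHIFT);
	data[2] = v->fid & VLAN_FID_MASK;
}

static void unpack(const uint16_t data[3], struct vlan4k *v)
{
	v->vid = data[0] & VLAN_VID_MASK;
	v->member = data[1] & VLAN_MEMBER_MASK;
	v->untag = (data[1] >> VLAN_UNTAG_SHIFT) & VLAN_UNTAG_MASK;
	v->fid = data[2] & VLAN_FID_MASK;
}

int main(void)
{
	struct vlan4k in = { .vid = 100, .member = 0x21, .untag = 0x01, .fid = 1 }, out;
	uint16_t data[3];

	pack(&in, data);
	unpack(data, &out);
	printf("vid=%u member=0x%02x untag=0x%02x fid=%u\n",
	       out.vid, out.member, out.untag, out.fid);
	return 0;
}
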
+-
+-static int rtl8366rb_get_vlan_mc(struct realtek_smi *smi, u32 index,
+- struct rtl8366_vlan_mc *vlanmc)
+-{
+- u32 data[3];
+- int ret;
+- int i;
+-
+- memset(vlanmc, '\0', sizeof(struct rtl8366_vlan_mc));
+-
+- if (index >= RTL8366RB_NUM_VLANS)
+- return -EINVAL;
+-
+- for (i = 0; i < 3; i++) {
+- ret = regmap_read(smi->map,
+- RTL8366RB_VLAN_MC_BASE(index) + i,
+- &data[i]);
+- if (ret)
+- return ret;
+- }
+-
+- vlanmc->vid = data[0] & RTL8366RB_VLAN_VID_MASK;
+- vlanmc->priority = (data[0] >> RTL8366RB_VLAN_PRIORITY_SHIFT) &
+- RTL8366RB_VLAN_PRIORITY_MASK;
+- vlanmc->untag = (data[1] >> RTL8366RB_VLAN_UNTAG_SHIFT) &
+- RTL8366RB_VLAN_UNTAG_MASK;
+- vlanmc->member = data[1] & RTL8366RB_VLAN_MEMBER_MASK;
+- vlanmc->fid = data[2] & RTL8366RB_VLAN_FID_MASK;
+-
+- return 0;
+-}
+-
+-static int rtl8366rb_set_vlan_mc(struct realtek_smi *smi, u32 index,
+- const struct rtl8366_vlan_mc *vlanmc)
+-{
+- u32 data[3];
+- int ret;
+- int i;
+-
+- if (index >= RTL8366RB_NUM_VLANS ||
+- vlanmc->vid >= RTL8366RB_NUM_VIDS ||
+- vlanmc->priority > RTL8366RB_PRIORITYMAX ||
+- vlanmc->member > RTL8366RB_VLAN_MEMBER_MASK ||
+- vlanmc->untag > RTL8366RB_VLAN_UNTAG_MASK ||
+- vlanmc->fid > RTL8366RB_FIDMAX)
+- return -EINVAL;
+-
+- data[0] = (vlanmc->vid & RTL8366RB_VLAN_VID_MASK) |
+- ((vlanmc->priority & RTL8366RB_VLAN_PRIORITY_MASK) <<
+- RTL8366RB_VLAN_PRIORITY_SHIFT);
+- data[1] = (vlanmc->member & RTL8366RB_VLAN_MEMBER_MASK) |
+- ((vlanmc->untag & RTL8366RB_VLAN_UNTAG_MASK) <<
+- RTL8366RB_VLAN_UNTAG_SHIFT);
+- data[2] = vlanmc->fid & RTL8366RB_VLAN_FID_MASK;
+-
+- for (i = 0; i < 3; i++) {
+- ret = regmap_write(smi->map,
+- RTL8366RB_VLAN_MC_BASE(index) + i,
+- data[i]);
+- if (ret)
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+-static int rtl8366rb_get_mc_index(struct realtek_smi *smi, int port, int *val)
+-{
+- u32 data;
+- int ret;
+-
+- if (port >= smi->num_ports)
+- return -EINVAL;
+-
+- ret = regmap_read(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
+- &data);
+- if (ret)
+- return ret;
+-
+- *val = (data >> RTL8366RB_PORT_VLAN_CTRL_SHIFT(port)) &
+- RTL8366RB_PORT_VLAN_CTRL_MASK;
+-
+- return 0;
+-}
+-
+-static int rtl8366rb_set_mc_index(struct realtek_smi *smi, int port, int index)
+-{
+- struct rtl8366rb *rb;
+- bool pvid_enabled;
+- int ret;
+-
+- rb = smi->chip_data;
+- pvid_enabled = !!index;
+-
+- if (port >= smi->num_ports || index >= RTL8366RB_NUM_VLANS)
+- return -EINVAL;
+-
+- ret = regmap_update_bits(smi->map, RTL8366RB_PORT_VLAN_CTRL_REG(port),
+- RTL8366RB_PORT_VLAN_CTRL_MASK <<
+- RTL8366RB_PORT_VLAN_CTRL_SHIFT(port),
+- (index & RTL8366RB_PORT_VLAN_CTRL_MASK) <<
+- RTL8366RB_PORT_VLAN_CTRL_SHIFT(port));
+- if (ret)
+- return ret;
+-
+- rb->pvid_enabled[port] = pvid_enabled;
+-
+- /* If VLAN filtering is enabled and PVID is also enabled, we must
+- * not drop any untagged or C-tagged frames. Make sure to update the
+- * filtering setting.
+- */
+- if (dsa_port_is_vlan_filtering(dsa_to_port(smi->ds, port)))
+- ret = rtl8366rb_drop_untagged(smi, port, !pvid_enabled);
+-
+- return ret;
+-}
+-
+-static bool rtl8366rb_is_vlan_valid(struct realtek_smi *smi, unsigned int vlan)
+-{
+- unsigned int max = RTL8366RB_NUM_VLANS - 1;
+-
+- if (smi->vlan4k_enabled)
+- max = RTL8366RB_NUM_VIDS - 1;
+-
+- if (vlan > max)
+- return false;
+-
+- return true;
+-}
+-
+-static int rtl8366rb_enable_vlan(struct realtek_smi *smi, bool enable)
+-{
+- dev_dbg(smi->dev, "%s VLAN\n", enable ? "enable" : "disable");
+- return regmap_update_bits(smi->map,
+- RTL8366RB_SGCR, RTL8366RB_SGCR_EN_VLAN,
+- enable ? RTL8366RB_SGCR_EN_VLAN : 0);
+-}
+-
+-static int rtl8366rb_enable_vlan4k(struct realtek_smi *smi, bool enable)
+-{
+- dev_dbg(smi->dev, "%s VLAN 4k\n", enable ? "enable" : "disable");
+- return regmap_update_bits(smi->map, RTL8366RB_SGCR,
+- RTL8366RB_SGCR_EN_VLAN_4KTB,
+- enable ? RTL8366RB_SGCR_EN_VLAN_4KTB : 0);
+-}
+-
+-static int rtl8366rb_phy_read(struct realtek_smi *smi, int phy, int regnum)
+-{
+- u32 val;
+- u32 reg;
+- int ret;
+-
+- if (phy > RTL8366RB_PHY_NO_MAX)
+- return -EINVAL;
+-
+- ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
+- RTL8366RB_PHY_CTRL_READ);
+- if (ret)
+- return ret;
+-
+- reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
+-
+- ret = regmap_write(smi->map, reg, 0);
+- if (ret) {
+- dev_err(smi->dev,
+- "failed to write PHY%d reg %04x @ %04x, ret %d\n",
+- phy, regnum, reg, ret);
+- return ret;
+- }
+-
+- ret = regmap_read(smi->map, RTL8366RB_PHY_ACCESS_DATA_REG, &val);
+- if (ret)
+- return ret;
+-
+- dev_dbg(smi->dev, "read PHY%d register 0x%04x @ %08x, val <- %04x\n",
+- phy, regnum, reg, val);
+-
+- return val;
+-}
+-
+-static int rtl8366rb_phy_write(struct realtek_smi *smi, int phy, int regnum,
+- u16 val)
+-{
+- u32 reg;
+- int ret;
+-
+- if (phy > RTL8366RB_PHY_NO_MAX)
+- return -EINVAL;
+-
+- ret = regmap_write(smi->map, RTL8366RB_PHY_ACCESS_CTRL_REG,
+- RTL8366RB_PHY_CTRL_WRITE);
+- if (ret)
+- return ret;
+-
+- reg = 0x8000 | (1 << (phy + RTL8366RB_PHY_NO_OFFSET)) | regnum;
+-
+- dev_dbg(smi->dev, "write PHY%d register 0x%04x @ %04x, val -> %04x\n",
+- phy, regnum, reg, val);
+-
+- ret = regmap_write(smi->map, reg, val);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
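
The two PHY accessors above build the indirect-access address the same way: bit 15 selects the PHY access window, a one-hot bit selects the PHY, and the low bits carry the register number. Here is a sketch of that computation; the offset constant is a placeholder, since the real RTL8366RB_PHY_NO_OFFSET value is not visible in this hunk.

#include <stdio.h>

#define PHY_NO_OFFSET 5 /* placeholder: PHY select bits sit above the regnum bits */

static unsigned int phy_access_reg(int phy, int regnum)
{
	/* 0x8000 selects the PHY access window; one-hot PHY select; regnum low */
	return 0x8000 | (1u << (phy + PHY_NO_OFFSET)) | (unsigned int)regnum;
}

int main(void)
{
	printf("PHY2 reg 0x01 -> 0x%04x\n", phy_access_reg(2, 0x01));
	return 0;
}
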
+-static int rtl8366rb_reset_chip(struct realtek_smi *smi)
+-{
+- int timeout = 10;
+- u32 val;
+- int ret;
+-
+- realtek_smi_write_reg_noack(smi, RTL8366RB_RESET_CTRL_REG,
+- RTL8366RB_CHIP_CTRL_RESET_HW);
+- do {
+- usleep_range(20000, 25000);
+- ret = regmap_read(smi->map, RTL8366RB_RESET_CTRL_REG, &val);
+- if (ret)
+- return ret;
+-
+- if (!(val & RTL8366RB_CHIP_CTRL_RESET_HW))
+- break;
+- } while (--timeout);
+-
+- if (!timeout) {
+- dev_err(smi->dev, "timeout waiting for the switch to reset\n");
+- return -EIO;
+- }
+-
+- return 0;
+-}
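
rtl8366rb_reset_chip() above is the usual poll-until-clear-or-timeout shape: kick a self-clearing reset bit, then re-read it a bounded number of times. The same control flow reduced to a runnable sketch with a faked register:

#include <stdio.h>

#define RESET_BIT 0x1

static unsigned int fake_reg = RESET_BIT; /* pretend hw clears this on poll 3 */

static int read_reset_reg(unsigned int *val)
{
	static int polls;

	if (++polls >= 3)
		fake_reg &= ~RESET_BIT;
	*val = fake_reg;
	return 0;
}

int main(void)
{
	int timeout = 10;
	unsigned int val;

	do {
		/* a real driver sleeps here, e.g. usleep_range(20000, 25000) */
		if (read_reset_reg(&val))
			return 1;
		if (!(val & RESET_BIT))
			break;
	} while (--timeout);

	if (!timeout) {
		fprintf(stderr, "timeout waiting for reset\n");
		return 1;
	}
	printf("reset complete, %d attempts left\n", timeout);
	return 0;
}
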
+-
+-static int rtl8366rb_detect(struct realtek_smi *smi)
+-{
+- struct device *dev = smi->dev;
+- int ret;
+- u32 val;
+-
+- /* Detect device */
+- ret = regmap_read(smi->map, 0x5c, &val);
+- if (ret) {
+- dev_err(dev, "can't get chip ID (%d)\n", ret);
+- return ret;
+- }
+-
+- switch (val) {
+- case 0x6027:
+- dev_info(dev, "found an RTL8366S switch\n");
+- dev_err(dev, "this switch is not yet supported, submit patches!\n");
+- return -ENODEV;
+- case 0x5937:
+- dev_info(dev, "found an RTL8366RB switch\n");
+- smi->cpu_port = RTL8366RB_PORT_NUM_CPU;
+- smi->num_ports = RTL8366RB_NUM_PORTS;
+- smi->num_vlan_mc = RTL8366RB_NUM_VLANS;
+- smi->mib_counters = rtl8366rb_mib_counters;
+- smi->num_mib_counters = ARRAY_SIZE(rtl8366rb_mib_counters);
+- break;
+- default:
+- dev_info(dev, "found an Unknown Realtek switch (id=0x%04x)\n",
+- val);
+- break;
+- }
+-
+- ret = rtl8366rb_reset_chip(smi);
+- if (ret)
+- return ret;
+-
+- return 0;
+-}
+-
+-static const struct dsa_switch_ops rtl8366rb_switch_ops = {
+- .get_tag_protocol = rtl8366_get_tag_protocol,
+- .setup = rtl8366rb_setup,
+- .phylink_mac_link_up = rtl8366rb_mac_link_up,
+- .phylink_mac_link_down = rtl8366rb_mac_link_down,
+- .get_strings = rtl8366_get_strings,
+- .get_ethtool_stats = rtl8366_get_ethtool_stats,
+- .get_sset_count = rtl8366_get_sset_count,
+- .port_bridge_join = rtl8366rb_port_bridge_join,
+- .port_bridge_leave = rtl8366rb_port_bridge_leave,
+- .port_vlan_filtering = rtl8366rb_vlan_filtering,
+- .port_vlan_add = rtl8366_vlan_add,
+- .port_vlan_del = rtl8366_vlan_del,
+- .port_enable = rtl8366rb_port_enable,
+- .port_disable = rtl8366rb_port_disable,
+- .port_pre_bridge_flags = rtl8366rb_port_pre_bridge_flags,
+- .port_bridge_flags = rtl8366rb_port_bridge_flags,
+- .port_stp_state_set = rtl8366rb_port_stp_state_set,
+- .port_fast_age = rtl8366rb_port_fast_age,
+- .port_change_mtu = rtl8366rb_change_mtu,
+- .port_max_mtu = rtl8366rb_max_mtu,
+-};
+-
+-static const struct realtek_smi_ops rtl8366rb_smi_ops = {
+- .detect = rtl8366rb_detect,
+- .get_vlan_mc = rtl8366rb_get_vlan_mc,
+- .set_vlan_mc = rtl8366rb_set_vlan_mc,
+- .get_vlan_4k = rtl8366rb_get_vlan_4k,
+- .set_vlan_4k = rtl8366rb_set_vlan_4k,
+- .get_mc_index = rtl8366rb_get_mc_index,
+- .set_mc_index = rtl8366rb_set_mc_index,
+- .get_mib_counter = rtl8366rb_get_mib_counter,
+- .is_vlan_valid = rtl8366rb_is_vlan_valid,
+- .enable_vlan = rtl8366rb_enable_vlan,
+- .enable_vlan4k = rtl8366rb_enable_vlan4k,
+- .phy_read = rtl8366rb_phy_read,
+- .phy_write = rtl8366rb_phy_write,
+-};
+-
+-const struct realtek_smi_variant rtl8366rb_variant = {
+- .ds_ops = &rtl8366rb_switch_ops,
+- .ops = &rtl8366rb_smi_ops,
+- .clk_delay = 10,
+- .cmd_read = 0xa9,
+- .cmd_write = 0xa8,
+- .chip_data_sz = sizeof(struct rtl8366rb),
+-};
+-EXPORT_SYMBOL_GPL(rtl8366rb_variant);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index 48520967746ff..c75c5ae64d5d8 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -329,7 +329,7 @@ static int bnxt_ptp_enable(struct ptp_clock_info *ptp_info,
+ struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg,
+ ptp_info);
+ struct bnxt *bp = ptp->bp;
+- u8 pin_id;
++ int pin_id;
+ int rc;
+
+ switch (rq->type) {
+@@ -337,6 +337,8 @@ static int bnxt_ptp_enable(struct ptp_clock_info *ptp_info,
+ /* Configure an External PPS IN */
+ pin_id = ptp_find_pin(ptp->ptp_clock, PTP_PF_EXTTS,
+ rq->extts.index);
++ if (!TSIO_PIN_VALID(pin_id))
++ return -EOPNOTSUPP;
+ if (!on)
+ break;
+ rc = bnxt_ptp_cfg_pin(bp, pin_id, BNXT_PPS_PIN_PPS_IN);
+@@ -350,6 +352,8 @@ static int bnxt_ptp_enable(struct ptp_clock_info *ptp_info,
+ /* Configure a Periodic PPS OUT */
+ pin_id = ptp_find_pin(ptp->ptp_clock, PTP_PF_PEROUT,
+ rq->perout.index);
++ if (!TSIO_PIN_VALID(pin_id))
++ return -EOPNOTSUPP;
+ if (!on)
+ break;
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index 7c528e1f8713e..8205140db829e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -31,7 +31,7 @@ struct pps_pin {
+ u8 state;
+ };
+
+-#define TSIO_PIN_VALID(pin) ((pin) < (BNXT_MAX_TSIO_PINS))
++#define TSIO_PIN_VALID(pin) ((pin) >= 0 && (pin) < (BNXT_MAX_TSIO_PINS))
+
+ #define EVENT_DATA2_PPS_EVENT_TYPE(data2) \
+ ((data2) & ASYNC_EVENT_CMPL_PPS_TIMESTAMP_EVENT_DATA2_EVENT_TYPE)
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 2da804f84b480..bd5998012a876 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -76,7 +76,7 @@ static inline void bcmgenet_writel(u32 value, void __iomem *offset)
+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ __raw_writel(value, offset);
+ else
+- writel_relaxed(value, offset);
++ writel(value, offset);
+ }
+
+ static inline u32 bcmgenet_readl(void __iomem *offset)
+@@ -84,7 +84,7 @@ static inline u32 bcmgenet_readl(void __iomem *offset)
+ if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
+ return __raw_readl(offset);
+ else
+- return readl_relaxed(offset);
++ return readl(offset);
+ }
+
+ static inline void dmadesc_set_length_status(struct bcmgenet_priv *priv,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index fa5b4f885b177..60ec64bfb3f0b 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -674,7 +674,10 @@ static int enetc_get_ts_info(struct net_device *ndev,
+ #ifdef CONFIG_FSL_ENETC_PTP_CLOCK
+ info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+ SOF_TIMESTAMPING_RX_HARDWARE |
+- SOF_TIMESTAMPING_RAW_HARDWARE;
++ SOF_TIMESTAMPING_RAW_HARDWARE |
++ SOF_TIMESTAMPING_TX_SOFTWARE |
++ SOF_TIMESTAMPING_RX_SOFTWARE |
++ SOF_TIMESTAMPING_SOFTWARE;
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+ (1 << HWTSTAMP_TX_ON) |
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+index 3555c12edb45a..d3d7172e0fcc5 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+@@ -45,6 +45,7 @@ void enetc_sched_speed_set(struct enetc_ndev_priv *priv, int speed)
+ | pspeed);
+ }
+
++#define ENETC_QOS_ALIGN 64
+ static int enetc_setup_taprio(struct net_device *ndev,
+ struct tc_taprio_qopt_offload *admin_conf)
+ {
+@@ -52,10 +53,11 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ struct enetc_cbd cbd = {.cmd = 0};
+ struct tgs_gcl_conf *gcl_config;
+ struct tgs_gcl_data *gcl_data;
++ dma_addr_t dma, dma_align;
+ struct gce *gce;
+- dma_addr_t dma;
+ u16 data_size;
+ u16 gcl_len;
++ void *tmp;
+ u32 tge;
+ int err;
+ int i;
+@@ -82,9 +84,16 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ gcl_config = &cbd.gcl_conf;
+
+ data_size = struct_size(gcl_data, entry, gcl_len);
+- gcl_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+- if (!gcl_data)
++ tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++ data_size + ENETC_QOS_ALIGN,
++ &dma, GFP_KERNEL);
++ if (!tmp) {
++ dev_err(&priv->si->pdev->dev,
++ "DMA mapping of taprio gate list failed!\n");
+ return -ENOMEM;
++ }
++ dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++ gcl_data = (struct tgs_gcl_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
+
+ gce = (struct gce *)(gcl_data + 1);
+
+@@ -110,16 +119,8 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ cbd.length = cpu_to_le16(data_size);
+ cbd.status_flags = 0;
+
+- dma = dma_map_single(&priv->si->pdev->dev, gcl_data,
+- data_size, DMA_TO_DEVICE);
+- if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+- netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+- kfree(gcl_data);
+- return -ENOMEM;
+- }
+-
+- cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+- cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++ cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++ cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+ cbd.cls = BDCR_CMD_PORT_GCL;
+ cbd.status_flags = 0;
+
+@@ -132,8 +133,8 @@ static int enetc_setup_taprio(struct net_device *ndev,
+ ENETC_QBV_PTGCR_OFFSET,
+ tge & (~ENETC_QBV_TGE));
+
+- dma_unmap_single(&priv->si->pdev->dev, dma, data_size, DMA_TO_DEVICE);
+- kfree(gcl_data);
++ dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++ tmp, dma);
+
+ return err;
+ }
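
The enetc conversion above trades kzalloc(__GFP_DMA) plus dma_map_single() for one coherent allocation that is over-allocated by the alignment and then rounded up on both the bus address (ALIGN) and the CPU pointer (PTR_ALIGN). The alignment arithmetic in isolation, with ALIGN reimplemented for the sketch and the patch's 64-byte ENETC_QOS_ALIGN:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define QOS_ALIGN 64
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

int main(void)
{
	size_t data_size = 200;
	/* over-allocate so an aligned region of data_size always fits */
	void *tmp = malloc(data_size + QOS_ALIGN);
	if (!tmp)
		return 1;

	/* the driver rounds the DMA handle and the CPU pointer the same way */
	uintptr_t raw = (uintptr_t)tmp;
	void *aligned = (void *)ALIGN_UP(raw, QOS_ALIGN);

	printf("raw=%p aligned=%p slack=%zu\n",
	       tmp, aligned, (size_t)((uintptr_t)aligned - raw));

	/* free with the original pointer, just as the driver frees tmp/dma */
	free(tmp);
	return 0;
}
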
+@@ -463,8 +464,9 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ struct enetc_cbd cbd = {.cmd = 0};
+ struct streamid_data *si_data;
+ struct streamid_conf *si_conf;
++ dma_addr_t dma, dma_align;
+ u16 data_size;
+- dma_addr_t dma;
++ void *tmp;
+ int port;
+ int err;
+
+@@ -485,21 +487,20 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+ cbd.status_flags = 0;
+
+ data_size = sizeof(struct streamid_data);
+- si_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+- if (!si_data)
++ tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++ data_size + ENETC_QOS_ALIGN,
++ &dma, GFP_KERNEL);
++ if (!tmp) {
++ dev_err(&priv->si->pdev->dev,
++ "DMA mapping of stream identify failed!\n");
+ return -ENOMEM;
+- cbd.length = cpu_to_le16(data_size);
+-
+- dma = dma_map_single(&priv->si->pdev->dev, si_data,
+- data_size, DMA_FROM_DEVICE);
+- if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+- netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+- err = -ENOMEM;
+- goto out;
+ }
++ dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++ si_data = (struct streamid_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
+
+- cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+- cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++ cbd.length = cpu_to_le16(data_size);
++ cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++ cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+ eth_broadcast_addr(si_data->dmac);
+ si_data->vid_vidm_tg = (ENETC_CBDR_SID_VID_MASK
+ + ((0x3 << 14) | ENETC_CBDR_SID_VIDM));
+@@ -539,8 +540,8 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+
+ cbd.length = cpu_to_le16(data_size);
+
+- cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+- cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++ cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++ cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+
+ /* VIDM default to be 1.
+ * VID Match. If set (b1) then the VID must match, otherwise
+@@ -561,10 +562,8 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv *priv,
+
+ err = enetc_send_cmd(priv->si, &cbd);
+ out:
+- if (!dma_mapping_error(&priv->si->pdev->dev, dma))
+- dma_unmap_single(&priv->si->pdev->dev, dma, data_size, DMA_FROM_DEVICE);
+-
+- kfree(si_data);
++ dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++ tmp, dma);
+
+ return err;
+ }
+@@ -633,8 +632,9 @@ static int enetc_streamcounter_hw_get(struct enetc_ndev_priv *priv,
+ {
+ struct enetc_cbd cbd = { .cmd = 2 };
+ struct sfi_counter_data *data_buf;
+- dma_addr_t dma;
++ dma_addr_t dma, dma_align;
+ u16 data_size;
++ void *tmp;
+ int err;
+
+ cbd.index = cpu_to_le16((u16)index);
+@@ -643,19 +643,19 @@ static int enetc_streamcounter_hw_get(struct enetc_ndev_priv *priv,
+ cbd.status_flags = 0;
+
+ data_size = sizeof(struct sfi_counter_data);
+- data_buf = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+- if (!data_buf)
++ tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++ data_size + ENETC_QOS_ALIGN,
++ &dma, GFP_KERNEL);
++ if (!tmp) {
++ dev_err(&priv->si->pdev->dev,
++ "DMA mapping of stream counter failed!\n");
+ return -ENOMEM;
+-
+- dma = dma_map_single(&priv->si->pdev->dev, data_buf,
+- data_size, DMA_FROM_DEVICE);
+- if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+- netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+- err = -ENOMEM;
+- goto exit;
+ }
+- cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+- cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++ dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++ data_buf = (struct sfi_counter_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
++
++ cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++ cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+
+ cbd.length = cpu_to_le16(data_size);
+
+@@ -684,7 +684,9 @@ static int enetc_streamcounter_hw_get(struct enetc_ndev_priv *priv,
+ data_buf->flow_meter_dropl;
+
+ exit:
+- kfree(data_buf);
++ dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++ tmp, dma);
++
+ return err;
+ }
+
+@@ -723,9 +725,10 @@ static int enetc_streamgate_hw_set(struct enetc_ndev_priv *priv,
+ struct sgcl_conf *sgcl_config;
+ struct sgcl_data *sgcl_data;
+ struct sgce *sgce;
+- dma_addr_t dma;
++ dma_addr_t dma, dma_align;
+ u16 data_size;
+ int err, i;
++ void *tmp;
+ u64 now;
+
+ cbd.index = cpu_to_le16(sgi->index);
+@@ -772,24 +775,20 @@ static int enetc_streamgate_hw_set(struct enetc_ndev_priv *priv,
+ sgcl_config->acl_len = (sgi->num_entries - 1) & 0x3;
+
+ data_size = struct_size(sgcl_data, sgcl, sgi->num_entries);
+-
+- sgcl_data = kzalloc(data_size, __GFP_DMA | GFP_KERNEL);
+- if (!sgcl_data)
+- return -ENOMEM;
+-
+- cbd.length = cpu_to_le16(data_size);
+-
+- dma = dma_map_single(&priv->si->pdev->dev,
+- sgcl_data, data_size,
+- DMA_FROM_DEVICE);
+- if (dma_mapping_error(&priv->si->pdev->dev, dma)) {
+- netdev_err(priv->si->ndev, "DMA mapping failed!\n");
+- kfree(sgcl_data);
++ tmp = dma_alloc_coherent(&priv->si->pdev->dev,
++ data_size + ENETC_QOS_ALIGN,
++ &dma, GFP_KERNEL);
++ if (!tmp) {
++ dev_err(&priv->si->pdev->dev,
++ "DMA mapping of stream counter failed!\n");
+ return -ENOMEM;
+ }
++ dma_align = ALIGN(dma, ENETC_QOS_ALIGN);
++ sgcl_data = (struct sgcl_data *)PTR_ALIGN(tmp, ENETC_QOS_ALIGN);
+
+- cbd.addr[0] = cpu_to_le32(lower_32_bits(dma));
+- cbd.addr[1] = cpu_to_le32(upper_32_bits(dma));
++ cbd.length = cpu_to_le16(data_size);
++ cbd.addr[0] = cpu_to_le32(lower_32_bits(dma_align));
++ cbd.addr[1] = cpu_to_le32(upper_32_bits(dma_align));
+
+ sgce = &sgcl_data->sgcl[0];
+
+@@ -844,7 +843,8 @@ static int enetc_streamgate_hw_set(struct enetc_ndev_priv *priv,
+ err = enetc_send_cmd(priv->si, &cbd);
+
+ exit:
+- kfree(sgcl_data);
++ dma_free_coherent(&priv->si->pdev->dev, data_size + ENETC_QOS_ALIGN,
++ tmp, dma);
+
+ return err;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 9298fbecb31ac..8184a954f6481 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -536,6 +536,8 @@ struct hnae3_ae_dev {
+ * Get 1588 rx hwstamp
+ * get_ts_info
+ * Get phc info
++ * clean_vf_config
++ *   Clean residual VF info after disabling SR-IOV
+ */
+ struct hnae3_ae_ops {
+ int (*init_ae_dev)(struct hnae3_ae_dev *ae_dev);
+@@ -729,6 +731,7 @@ struct hnae3_ae_ops {
+ struct ethtool_ts_info *info);
+ int (*get_link_diagnosis_info)(struct hnae3_handle *handle,
+ u32 *status_code);
++ void (*clean_vf_config)(struct hnae3_ae_dev *ae_dev, int num_vfs);
+ };
+
+ struct hnae3_dcb_ops {
+@@ -841,6 +844,7 @@ struct hnae3_handle {
+ struct dentry *hnae3_dbgfs;
+ /* protects concurrent contention between debugfs commands */
+ struct mutex dbgfs_lock;
++ char **dbgfs_buf;
+
+ /* Network interface message level enabled bits */
+ u32 msg_enable;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index f726a5b70f9e2..44d9b560b3374 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -1227,7 +1227,7 @@ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
+ return ret;
+
+ mutex_lock(&handle->dbgfs_lock);
+- save_buf = &hns3_dbg_cmd[index].buf;
++ save_buf = &handle->dbgfs_buf[index];
+
+ if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
+ test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) {
+@@ -1332,6 +1332,13 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ int ret;
+ u32 i;
+
++ handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev,
++ ARRAY_SIZE(hns3_dbg_cmd),
++ sizeof(*handle->dbgfs_buf),
++ GFP_KERNEL);
++ if (!handle->dbgfs_buf)
++ return -ENOMEM;
++
+ hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry =
+ debugfs_create_dir(name, hns3_dbgfs_root);
+ handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry;
+@@ -1380,9 +1387,9 @@ void hns3_dbg_uninit(struct hnae3_handle *handle)
+ u32 i;
+
+ for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++)
+- if (hns3_dbg_cmd[i].buf) {
+- kvfree(hns3_dbg_cmd[i].buf);
+- hns3_dbg_cmd[i].buf = NULL;
++ if (handle->dbgfs_buf[i]) {
++ kvfree(handle->dbgfs_buf[i]);
++ handle->dbgfs_buf[i] = NULL;
+ }
+
+ mutex_destroy(&handle->dbgfs_lock);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
+index 83aa1450ab9fe..97578eabb7d8b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.h
+@@ -49,7 +49,6 @@ struct hns3_dbg_cmd_info {
+ enum hnae3_dbg_cmd cmd;
+ enum hns3_dbg_dentry_type dentry;
+ u32 buf_len;
+- char *buf;
+ int (*init)(struct hnae3_handle *handle, unsigned int cmd);
+ };
+
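
The hns3 debugfs change above moves the saved dump buffer out of the shared hns3_dbg_cmd[] table and into a per-handle array, so two adapters no longer share one pointer and the uninit path frees only its own buffers. A sketch of that per-instance ownership with hypothetical names (the kernel version uses devm_kcalloc, so the array itself is device-managed):

#include <stdlib.h>

#define NUM_CMDS 8 /* stand-in for ARRAY_SIZE(hns3_dbg_cmd) */

struct handle {
	char **dbgfs_buf; /* one saved buffer per debugfs command, per device */
};

static int dbg_init(struct handle *h)
{
	h->dbgfs_buf = calloc(NUM_CMDS, sizeof(*h->dbgfs_buf));
	return h->dbgfs_buf ? 0 : -1;
}

static void dbg_uninit(struct handle *h)
{
	for (int i = 0; i < NUM_CMDS; i++)
		free(h->dbgfs_buf[i]); /* each device frees only its own buffers */
	free(h->dbgfs_buf);
}

int main(void)
{
	struct handle a = { 0 }, b = { 0 };

	if (dbg_init(&a))
		return 1;
	if (dbg_init(&b)) {
		dbg_uninit(&a);
		return 1;
	}
	/* the two handles own disjoint arrays; no cross-device sharing */
	dbg_uninit(&a);
	dbg_uninit(&b);
	return 0;
}
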
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index babc5d7a3b526..f6082be7481c1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1028,46 +1028,56 @@ static bool hns3_can_use_tx_sgl(struct hns3_enet_ring *ring,
+
+ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
+ {
++ u32 alloc_size = ring->tqp->handle->kinfo.tx_spare_buf_size;
+ struct hns3_tx_spare *tx_spare;
+ struct page *page;
+- u32 alloc_size;
+ dma_addr_t dma;
+ int order;
+
+- alloc_size = ring->tqp->handle->kinfo.tx_spare_buf_size;
+ if (!alloc_size)
+ return;
+
+ order = get_order(alloc_size);
++ if (order >= MAX_ORDER) {
++ if (net_ratelimit())
++			dev_warn(ring_to_dev(ring), "failed to allocate tx spare buffer, order exceeds MAX_ORDER\n");
++ return;
++ }
++
+ tx_spare = devm_kzalloc(ring_to_dev(ring), sizeof(*tx_spare),
+ GFP_KERNEL);
+ if (!tx_spare) {
+		/* The driver still works without the tx spare buffer */
+ dev_warn(ring_to_dev(ring), "failed to allocate hns3_tx_spare\n");
+- return;
++ goto devm_kzalloc_error;
+ }
+
+ page = alloc_pages_node(dev_to_node(ring_to_dev(ring)),
+ GFP_KERNEL, order);
+ if (!page) {
+ dev_warn(ring_to_dev(ring), "failed to allocate tx spare pages\n");
+- devm_kfree(ring_to_dev(ring), tx_spare);
+- return;
++ goto alloc_pages_error;
+ }
+
+ dma = dma_map_page(ring_to_dev(ring), page, 0,
+ PAGE_SIZE << order, DMA_TO_DEVICE);
+ if (dma_mapping_error(ring_to_dev(ring), dma)) {
+ dev_warn(ring_to_dev(ring), "failed to map pages for tx spare\n");
+- put_page(page);
+- devm_kfree(ring_to_dev(ring), tx_spare);
+- return;
++ goto dma_mapping_error;
+ }
+
+ tx_spare->dma = dma;
+ tx_spare->buf = page_address(page);
+ tx_spare->len = PAGE_SIZE << order;
+ ring->tx_spare = tx_spare;
++ return;
++
++dma_mapping_error:
++ put_page(page);
++alloc_pages_error:
++ devm_kfree(ring_to_dev(ring), tx_spare);
++devm_kzalloc_error:
++ ring->tqp->handle->kinfo.tx_spare_buf_size = 0;
+ }
+
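
The tx-spare rework above converts the error paths to the kernel's goto-unwind ladder, which releases only what was already set up and finally zeroes the requested buffer size so later queries see a consistent "no spare buffer" state. The shape of that ladder as a standalone sketch:

#include <stdio.h>
#include <stdlib.h>

static int setup(size_t *requested_size)
{
	void *meta, *pages;

	meta = malloc(64);
	if (!meta)
		goto err_meta;

	pages = malloc(4096);
	if (!pages)
		goto err_pages;

	if (1 /* pretend the DMA-mapping step failed */)
		goto err_map;

	return 0;

err_map:
	free(pages);         /* undo the page allocation */
err_pages:
	free(meta);          /* undo the metadata allocation */
err_meta:
	*requested_size = 0; /* as in the patch: report "no spare buffer" */
	return -1;
}

int main(void)
{
	size_t size = 4096;

	setup(&size);
	printf("tx spare size after failure: %zu\n", size);
	return 0;
}
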
+ /* Use hns3_tx_spare_space() to make sure there is enough buffer
+@@ -2982,6 +2992,21 @@ static int hns3_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ return ret;
+ }
+
++/**
++ * hns3_clean_vf_config
++ * @pdev: pointer to a pci_dev structure
++ * @num_vfs: number of VFs allocated
++ *
++ * Clean residual VF config after disabling SR-IOV
++ **/
++static void hns3_clean_vf_config(struct pci_dev *pdev, int num_vfs)
++{
++ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
++
++ if (ae_dev->ops->clean_vf_config)
++ ae_dev->ops->clean_vf_config(ae_dev, num_vfs);
++}
++
+ /* hns3_remove - Device removal routine
+ * @pdev: PCI device information struct
+ */
+@@ -3020,7 +3045,10 @@ static int hns3_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
+ else
+ return num_vfs;
+ } else if (!pci_vfs_assigned(pdev)) {
++ int num_vfs_pre = pci_num_vf(pdev);
++
+ pci_disable_sriov(pdev);
++ hns3_clean_vf_config(pdev, num_vfs_pre);
+ } else {
+ dev_warn(&pdev->dev,
+ "Unable to free VFs because some are assigned to VMs.\n");
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index c06c39ece80da..cbf36cc86803a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -651,8 +651,8 @@ static void hns3_get_ringparam(struct net_device *netdev,
+ struct hnae3_handle *h = priv->ae_handle;
+ int rx_queue_index = h->kinfo.num_tqps;
+
+- if (hns3_nic_resetting(netdev)) {
+- netdev_err(netdev, "dev resetting!");
++ if (hns3_nic_resetting(netdev) || !priv->ring) {
++ netdev_err(netdev, "failed to get ringparam value, due to dev resetting or uninited\n");
+ return;
+ }
+
+@@ -1072,8 +1072,14 @@ static int hns3_check_ringparam(struct net_device *ndev,
+ {
+ #define RX_BUF_LEN_2K 2048
+ #define RX_BUF_LEN_4K 4096
+- if (hns3_nic_resetting(ndev))
++
++ struct hns3_nic_priv *priv = netdev_priv(ndev);
++
++ if (hns3_nic_resetting(ndev) || !priv->ring) {
++		netdev_err(ndev, "failed to set ringparam value: dev is resetting or uninitialized\n");
+ return -EBUSY;
++ }
++
+
+ if (param->rx_mini_pending || param->rx_jumbo_pending)
+ return -EINVAL;
+@@ -1764,9 +1770,6 @@ static int hns3_set_tx_spare_buf_size(struct net_device *netdev,
+ struct hnae3_handle *h = priv->ae_handle;
+ int ret;
+
+- if (hns3_nic_resetting(netdev))
+- return -EBUSY;
+-
+ h->kinfo.tx_spare_buf_size = data;
+
+ ret = hns3_reset_notify(h, HNAE3_DOWN_CLIENT);
+@@ -1797,6 +1800,11 @@ static int hns3_set_tunable(struct net_device *netdev,
+ struct hnae3_handle *h = priv->ae_handle;
+ int i, ret = 0;
+
++ if (hns3_nic_resetting(netdev) || !priv->ring) {
++ netdev_err(netdev, "failed to set tunable value, dev resetting!");
++ return -EBUSY;
++ }
++
+ switch (tuna->id) {
+ case ETHTOOL_TX_COPYBREAK:
+ priv->tx_copybreak = *(u32 *)data;
+@@ -1816,7 +1824,8 @@ static int hns3_set_tunable(struct net_device *netdev,
+ old_tx_spare_buf_size = h->kinfo.tx_spare_buf_size;
+ new_tx_spare_buf_size = *(u32 *)data;
+ ret = hns3_set_tx_spare_buf_size(netdev, new_tx_spare_buf_size);
+- if (ret) {
++ if (ret ||
++ (!priv->ring->tx_spare && new_tx_spare_buf_size != 0)) {
+ int ret1;
+
+ netdev_warn(netdev,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 24f7afacae028..e96bc61a0a877 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -1863,6 +1863,7 @@ static int hclge_alloc_vport(struct hclge_dev *hdev)
+ vport->vf_info.link_state = IFLA_VF_LINK_STATE_AUTO;
+ vport->mps = HCLGE_MAC_DEFAULT_FRAME;
+ vport->port_base_vlan_cfg.state = HNAE3_PORT_BASE_VLAN_DISABLE;
++ vport->port_base_vlan_cfg.tbl_sta = true;
+ vport->rxvlan_cfg.rx_vlan_offload_en = true;
+ vport->req_vlan_fltr_en = true;
+ INIT_LIST_HEAD(&vport->vlan_list);
+@@ -8429,12 +8430,11 @@ int hclge_rm_uc_addr_common(struct hclge_vport *vport,
+ hnae3_set_bit(req.entry_type, HCLGE_MAC_VLAN_BIT0_EN_B, 0);
+ hclge_prepare_mac_addr(&req, addr, false);
+ ret = hclge_remove_mac_vlan_tbl(vport, &req);
+- if (!ret) {
++ if (!ret || ret == -ENOENT) {
+ mutex_lock(&hdev->vport_lock);
+ hclge_update_umv_space(vport, true);
+ mutex_unlock(&hdev->vport_lock);
+- } else if (ret == -ENOENT) {
+- ret = 0;
++ return 0;
+ }
+
+ return ret;
+@@ -8984,11 +8984,16 @@ static int hclge_set_vf_mac(struct hnae3_handle *handle, int vf,
+
+ ether_addr_copy(vport->vf_info.mac, mac_addr);
+
++	/* There is a time window before the PF learns the VF is not
++	 * alive, which may make the mailbox send fail. That is fine:
++	 * the VF will query the state again when it reinitializes.
++ */
+ if (test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state)) {
+ dev_info(&hdev->pdev->dev,
+ "MAC of VF %d has been set to %s, and it will be reinitialized!\n",
+ vf, format_mac_addr);
+- return hclge_inform_reset_assert_to_vf(vport);
++ (void)hclge_inform_reset_assert_to_vf(vport);
++ return 0;
+ }
+
+ dev_info(&hdev->pdev->dev, "MAC of VF %d has been set to %s\n",
+@@ -9809,19 +9814,28 @@ static void hclge_add_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ bool writen_to_tbl)
+ {
+ struct hclge_vport_vlan_cfg *vlan, *tmp;
++ struct hclge_dev *hdev = vport->back;
+
+- list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node)
+- if (vlan->vlan_id == vlan_id)
++ mutex_lock(&hdev->vport_lock);
++
++ list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
++ if (vlan->vlan_id == vlan_id) {
++ mutex_unlock(&hdev->vport_lock);
+ return;
++ }
++ }
+
+ vlan = kzalloc(sizeof(*vlan), GFP_KERNEL);
+- if (!vlan)
++ if (!vlan) {
++ mutex_unlock(&hdev->vport_lock);
+ return;
++ }
+
+ vlan->hd_tbl_status = writen_to_tbl;
+ vlan->vlan_id = vlan_id;
+
+ list_add_tail(&vlan->node, &vport->vlan_list);
++ mutex_unlock(&hdev->vport_lock);
+ }
+
+ static int hclge_add_vport_all_vlan_table(struct hclge_vport *vport)
+@@ -9830,6 +9844,8 @@ static int hclge_add_vport_all_vlan_table(struct hclge_vport *vport)
+ struct hclge_dev *hdev = vport->back;
+ int ret;
+
++ mutex_lock(&hdev->vport_lock);
++
+ list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+ if (!vlan->hd_tbl_status) {
+ ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
+@@ -9839,12 +9855,16 @@ static int hclge_add_vport_all_vlan_table(struct hclge_vport *vport)
+ dev_err(&hdev->pdev->dev,
+ "restore vport vlan list failed, ret=%d\n",
+ ret);
++
++ mutex_unlock(&hdev->vport_lock);
+ return ret;
+ }
+ }
+ vlan->hd_tbl_status = true;
+ }
+
++ mutex_unlock(&hdev->vport_lock);
++
+ return 0;
+ }
+
+@@ -9854,6 +9874,8 @@ static void hclge_rm_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ struct hclge_vport_vlan_cfg *vlan, *tmp;
+ struct hclge_dev *hdev = vport->back;
+
++ mutex_lock(&hdev->vport_lock);
++
+ list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+ if (vlan->vlan_id == vlan_id) {
+ if (is_write_tbl && vlan->hd_tbl_status)
+@@ -9868,6 +9890,8 @@ static void hclge_rm_vport_vlan_table(struct hclge_vport *vport, u16 vlan_id,
+ break;
+ }
+ }
++
++ mutex_unlock(&hdev->vport_lock);
+ }
+
+ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list)
+@@ -9875,6 +9899,8 @@ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list)
+ struct hclge_vport_vlan_cfg *vlan, *tmp;
+ struct hclge_dev *hdev = vport->back;
+
++ mutex_lock(&hdev->vport_lock);
++
+ list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+ if (vlan->hd_tbl_status)
+ hclge_set_vlan_filter_hw(hdev,
+@@ -9890,6 +9916,7 @@ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list)
+ }
+ }
+ clear_bit(vport->vport_id, hdev->vf_vlan_full);
++ mutex_unlock(&hdev->vport_lock);
+ }
+
+ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
+@@ -9898,6 +9925,8 @@ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
+ struct hclge_vport *vport;
+ int i;
+
++ mutex_lock(&hdev->vport_lock);
++
+ for (i = 0; i < hdev->num_alloc_vport; i++) {
+ vport = &hdev->vport[i];
+ list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+@@ -9905,37 +9934,61 @@ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
+ kfree(vlan);
+ }
+ }
++
++ mutex_unlock(&hdev->vport_lock);
+ }
+
+-void hclge_restore_vport_vlan_table(struct hclge_vport *vport)
++void hclge_restore_vport_port_base_vlan_config(struct hclge_dev *hdev)
+ {
+- struct hclge_vport_vlan_cfg *vlan, *tmp;
+- struct hclge_dev *hdev = vport->back;
++ struct hclge_vlan_info *vlan_info;
++ struct hclge_vport *vport;
+ u16 vlan_proto;
+ u16 vlan_id;
+ u16 state;
++ int vf_id;
+ int ret;
+
+- vlan_proto = vport->port_base_vlan_cfg.vlan_info.vlan_proto;
+- vlan_id = vport->port_base_vlan_cfg.vlan_info.vlan_tag;
+- state = vport->port_base_vlan_cfg.state;
++ /* PF should restore all vfs port base vlan */
++ for (vf_id = 0; vf_id < hdev->num_alloc_vfs; vf_id++) {
++ vport = &hdev->vport[vf_id + HCLGE_VF_VPORT_START_NUM];
++ vlan_info = vport->port_base_vlan_cfg.tbl_sta ?
++ &vport->port_base_vlan_cfg.vlan_info :
++ &vport->port_base_vlan_cfg.old_vlan_info;
+
+- if (state != HNAE3_PORT_BASE_VLAN_DISABLE) {
+- clear_bit(vport->vport_id, hdev->vlan_table[vlan_id]);
+- hclge_set_vlan_filter_hw(hdev, htons(vlan_proto),
+- vport->vport_id, vlan_id,
+- false);
+- return;
++ vlan_id = vlan_info->vlan_tag;
++ vlan_proto = vlan_info->vlan_proto;
++ state = vport->port_base_vlan_cfg.state;
++
++ if (state != HNAE3_PORT_BASE_VLAN_DISABLE) {
++ clear_bit(vport->vport_id, hdev->vlan_table[vlan_id]);
++ ret = hclge_set_vlan_filter_hw(hdev, htons(vlan_proto),
++ vport->vport_id,
++ vlan_id, false);
++ vport->port_base_vlan_cfg.tbl_sta = ret == 0;
++ }
+ }
++}
+
+- list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
+- ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
+- vport->vport_id,
+- vlan->vlan_id, false);
+- if (ret)
+- break;
+- vlan->hd_tbl_status = true;
++void hclge_restore_vport_vlan_table(struct hclge_vport *vport)
++{
++ struct hclge_vport_vlan_cfg *vlan, *tmp;
++ struct hclge_dev *hdev = vport->back;
++ int ret;
++
++ mutex_lock(&hdev->vport_lock);
++
++ if (vport->port_base_vlan_cfg.state == HNAE3_PORT_BASE_VLAN_DISABLE) {
++ list_for_each_entry_safe(vlan, tmp, &vport->vlan_list, node) {
++ ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
++ vport->vport_id,
++ vlan->vlan_id, false);
++ if (ret)
++ break;
++ vlan->hd_tbl_status = true;
++ }
+ }
++
++ mutex_unlock(&hdev->vport_lock);
+ }
+
+ /* For global reset and imp reset, hardware will clear the mac table,
+@@ -9975,6 +10028,7 @@ static void hclge_restore_hw_table(struct hclge_dev *hdev)
+ struct hnae3_handle *handle = &vport->nic;
+
+ hclge_restore_mac_table_common(vport);
++ hclge_restore_vport_port_base_vlan_config(hdev);
+ hclge_restore_vport_vlan_table(vport);
+ set_bit(HCLGE_STATE_FD_USER_DEF_CHANGED, &hdev->state);
+ hclge_restore_fd_entries(handle);
+@@ -10031,6 +10085,8 @@ static int hclge_update_vlan_filter_entries(struct hclge_vport *vport,
+ false);
+ }
+
++ vport->port_base_vlan_cfg.tbl_sta = false;
++
+ /* force add VLAN 0 */
+ ret = hclge_set_vf_vlan_common(hdev, vport->vport_id, false, 0);
+ if (ret)
+@@ -10120,7 +10176,9 @@ out:
+ else
+ nic->port_base_vlan_state = HNAE3_PORT_BASE_VLAN_ENABLE;
+
++ vport->port_base_vlan_cfg.old_vlan_info = *old_vlan_info;
+ vport->port_base_vlan_cfg.vlan_info = *vlan_info;
++ vport->port_base_vlan_cfg.tbl_sta = true;
+ hclge_set_vport_vlan_fltr_change(vport);
+
+ return 0;
+@@ -10188,14 +10246,17 @@ static int hclge_set_vf_vlan_filter(struct hnae3_handle *handle, int vfid,
+ return ret;
+ }
+
+- /* for DEVICE_VERSION_V3, vf doesn't need to know about the port based
++	/* There is a time window before the PF learns the VF is not
++	 * alive, which may make the mailbox send fail. That is fine:
++	 * the VF will query the state again when it reinitializes.
++	 * For DEVICE_VERSION_V3, the VF doesn't need to know about the port based
+ * VLAN state.
+ */
+ if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3 &&
+ test_bit(HCLGE_VPORT_STATE_ALIVE, &vport->state))
+- hclge_push_vf_port_base_vlan_info(&hdev->vport[0],
+- vport->vport_id, state,
+- &vlan_info);
++ (void)hclge_push_vf_port_base_vlan_info(&hdev->vport[0],
++ vport->vport_id,
++ state, &vlan_info);
+
+ return 0;
+ }
+@@ -10253,11 +10314,11 @@ int hclge_set_vlan_filter(struct hnae3_handle *handle, __be16 proto,
+ }
+
+ if (!ret) {
+- if (is_kill)
+- hclge_rm_vport_vlan_table(vport, vlan_id, false);
+- else
++ if (!is_kill)
+ hclge_add_vport_vlan_table(vport, vlan_id,
+ writen_to_tbl);
++ else if (is_kill && vlan_id != 0)
++ hclge_rm_vport_vlan_table(vport, vlan_id, false);
+ } else if (is_kill) {
+ 		/* When removing the hw vlan filter fails, record the vlan id,
+ 		 * and try to remove it from hw later, for consistency
+@@ -11831,8 +11892,8 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+ hclge_misc_irq_uninit(hdev);
+ hclge_devlink_uninit(hdev);
+ hclge_pci_uninit(hdev);
+- mutex_destroy(&hdev->vport_lock);
+ hclge_uninit_vport_vlan_table(hdev);
++ mutex_destroy(&hdev->vport_lock);
+ ae_dev->priv = NULL;
+ }
+
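
The two-line reorder above is a teardown-ordering fix: with this patch, hclge_uninit_vport_vlan_table() takes hdev->vport_lock, so the mutex may only be destroyed after its last user. The same rule in a pthread sketch:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t vport_lock = PTHREAD_MUTEX_INITIALIZER;

static void uninit_vlan_table(void)
{
	pthread_mutex_lock(&vport_lock);   /* last user of the lock */
	/* ... free the vlan lists ... */
	pthread_mutex_unlock(&vport_lock);
}

int main(void)
{
	/* destroying first would make the later lock undefined behavior */
	uninit_vlan_table();
	pthread_mutex_destroy(&vport_lock); /* strictly after the last user */
	printf("teardown ok\n");
	return 0;
}
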
+@@ -12656,6 +12717,55 @@ static int hclge_get_link_diagnosis_info(struct hnae3_handle *handle,
+ return 0;
+ }
+
++/* After SR-IOV is disabled, the VF still has some config and info that
++ * needs cleaning, all of which was configured by the PF.
++ */
++static void hclge_clear_vport_vf_info(struct hclge_vport *vport, int vfid)
++{
++ struct hclge_dev *hdev = vport->back;
++ struct hclge_vlan_info vlan_info;
++ int ret;
++
++	/* after disabling SR-IOV, clean the VF rate configured by the PF */
++ ret = hclge_tm_qs_shaper_cfg(vport, 0);
++ if (ret)
++ dev_err(&hdev->pdev->dev,
++ "failed to clean vf%d rate config, ret = %d\n",
++ vfid, ret);
++
++ vlan_info.vlan_tag = 0;
++ vlan_info.qos = 0;
++ vlan_info.vlan_proto = ETH_P_8021Q;
++ ret = hclge_update_port_base_vlan_cfg(vport,
++ HNAE3_PORT_BASE_VLAN_DISABLE,
++ &vlan_info);
++ if (ret)
++ dev_err(&hdev->pdev->dev,
++ "failed to clean vf%d port base vlan, ret = %d\n",
++ vfid, ret);
++
++ ret = hclge_set_vf_spoofchk_hw(hdev, vport->vport_id, false);
++ if (ret)
++ dev_err(&hdev->pdev->dev,
++ "failed to clean vf%d spoof config, ret = %d\n",
++ vfid, ret);
++
++ memset(&vport->vf_info, 0, sizeof(vport->vf_info));
++}
++
++static void hclge_clean_vport_config(struct hnae3_ae_dev *ae_dev, int num_vfs)
++{
++ struct hclge_dev *hdev = ae_dev->priv;
++ struct hclge_vport *vport;
++ int i;
++
++ for (i = 0; i < num_vfs; i++) {
++ vport = &hdev->vport[i + HCLGE_VF_VPORT_START_NUM];
++
++ hclge_clear_vport_vf_info(vport, i);
++ }
++}
++
+ static const struct hnae3_ae_ops hclge_ops = {
+ .init_ae_dev = hclge_init_ae_dev,
+ .uninit_ae_dev = hclge_uninit_ae_dev,
+@@ -12757,6 +12867,7 @@ static const struct hnae3_ae_ops hclge_ops = {
+ .get_rx_hwts = hclge_ptp_get_rx_hwts,
+ .get_ts_info = hclge_ptp_get_ts_info,
+ .get_link_diagnosis_info = hclge_get_link_diagnosis_info,
++ .clean_vf_config = hclge_clean_vport_config,
+ };
+
+ static struct hnae3_ae_algo ae_algo = {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index adfb26e792621..63197257dd4e4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -977,7 +977,9 @@ struct hclge_vlan_info {
+
+ struct hclge_port_base_vlan_config {
+ u16 state;
++ bool tbl_sta;
+ struct hclge_vlan_info vlan_info;
++ struct hclge_vlan_info old_vlan_info;
+ };
+
+ struct hclge_vf_info {
+@@ -1023,6 +1025,7 @@ struct hclge_vport {
+ spinlock_t mac_list_lock; /* protect mac address need to add/detele */
+ struct list_head uc_mac_list; /* Store VF unicast table */
+ struct list_head mc_mac_list; /* Store VF multicast table */
++
+ struct list_head vlan_list; /* Store VF vlan table */
+ };
+
+@@ -1097,6 +1100,7 @@ void hclge_rm_vport_all_mac_table(struct hclge_vport *vport, bool is_del_list,
+ void hclge_rm_vport_all_vlan_table(struct hclge_vport *vport, bool is_del_list);
+ void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev);
+ void hclge_restore_mac_table_common(struct hclge_vport *vport);
++void hclge_restore_vport_port_base_vlan_config(struct hclge_dev *hdev);
+ void hclge_restore_vport_vlan_table(struct hclge_vport *vport);
+ int hclge_update_port_base_vlan_cfg(struct hclge_vport *vport, u16 state,
+ struct hclge_vlan_info *vlan_info);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index 63d2be4349e3e..03d63b6a9b2bc 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -48,7 +48,7 @@ static int hclge_mdio_write(struct mii_bus *bus, int phyid, int regnum,
+ int ret;
+
+ if (test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state))
+- return 0;
++ return -EBUSY;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_MDIO_CONFIG, false);
+
+@@ -86,7 +86,7 @@ static int hclge_mdio_read(struct mii_bus *bus, int phyid, int regnum)
+ int ret;
+
+ if (test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state))
+- return 0;
++ return -EBUSY;
+
+ hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_MDIO_CONFIG, true);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 21442a9bb9961..90c6197d9374c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2855,6 +2855,11 @@ static int hclgevf_reset_hdev(struct hclgevf_dev *hdev)
+ return ret;
+ }
+
++ /* get the current port-based VLAN state from the PF */
++ ret = hclgevf_get_port_base_vlan_filter_state(hdev);
++ if (ret)
++ return ret;
++
+ set_bit(HCLGEVF_STATE_PROMISC_CHANGED, &hdev->state);
+
+ hclgevf_init_rxd_adv_layout(hdev);
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index b423e94956f10..b4804ce63151f 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1429,6 +1429,15 @@ static int __ibmvnic_open(struct net_device *netdev)
+ return rc;
+ }
+
++ adapter->tx_queues_active = true;
++
++ /* Since queues were stopped until now, there shouldn't be
++ * anyone in ibmvnic_complete_tx() or ibmvnic_xmit(), so maybe we
++ * don't need the synchronize_rcu()? Leaving it for consistency
++ * with setting ->tx_queues_active = false.
++ */
++ synchronize_rcu();
++
+ netif_tx_start_all_queues(netdev);
+
+ if (prev_state == VNIC_CLOSED) {
+@@ -1603,6 +1612,14 @@ static void ibmvnic_cleanup(struct net_device *netdev)
+ struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+
+ /* ensure that transmissions are stopped if called by do_reset */
++
++ adapter->tx_queues_active = false;
++
++ /* Ensure complete_tx() and ibmvnic_xmit() see ->tx_queues_active
++ * update so they don't restart a queue after we stop it below.
++ */
++ synchronize_rcu();
++
+ if (test_bit(0, &adapter->resetting))
+ netif_tx_disable(netdev);
+ else
+@@ -1842,14 +1859,21 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
+ tx_buff->skb = NULL;
+ adapter->netdev->stats.tx_dropped++;
+ }
++
+ ind_bufp->index = 0;
++
+ if (atomic_sub_return(entries, &tx_scrq->used) <=
+ (adapter->req_tx_entries_per_subcrq / 2) &&
+- __netif_subqueue_stopped(adapter->netdev, queue_num) &&
+- !test_bit(0, &adapter->resetting)) {
+- netif_wake_subqueue(adapter->netdev, queue_num);
+- netdev_dbg(adapter->netdev, "Started queue %d\n",
+- queue_num);
++ __netif_subqueue_stopped(adapter->netdev, queue_num)) {
++ rcu_read_lock();
++
++ if (adapter->tx_queues_active) {
++ netif_wake_subqueue(adapter->netdev, queue_num);
++ netdev_dbg(adapter->netdev, "Started queue %d\n",
++ queue_num);
++ }
++
++ rcu_read_unlock();
+ }
+ }
+
+@@ -1904,11 +1928,12 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ int index = 0;
+ u8 proto = 0;
+
+- tx_scrq = adapter->tx_scrq[queue_num];
+- txq = netdev_get_tx_queue(netdev, queue_num);
+- ind_bufp = &tx_scrq->ind_buf;
+-
+- if (test_bit(0, &adapter->resetting)) {
++ /* If a reset is in progress, drop the packet since
++ * the scrqs may get torn down. Otherwise use
++ * RCU to ensure the reset waits for us to complete.
++ */
++ rcu_read_lock();
++ if (!adapter->tx_queues_active) {
+ dev_kfree_skb_any(skb);
+
+ tx_send_failed++;
+@@ -1917,6 +1942,10 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ goto out;
+ }
+
++ tx_scrq = adapter->tx_scrq[queue_num];
++ txq = netdev_get_tx_queue(netdev, queue_num);
++ ind_bufp = &tx_scrq->ind_buf;
++
+ if (ibmvnic_xmit_workarounds(skb, netdev)) {
+ tx_dropped++;
+ tx_send_failed++;
+@@ -1924,6 +1953,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ ibmvnic_tx_scrq_flush(adapter, tx_scrq);
+ goto out;
+ }
++
+ if (skb_is_gso(skb))
+ tx_pool = &adapter->tso_pool[queue_num];
+ else
+@@ -2078,6 +2108,7 @@ tx_err:
+ netif_carrier_off(netdev);
+ }
+ out:
++ rcu_read_unlock();
+ netdev->stats.tx_dropped += tx_dropped;
+ netdev->stats.tx_bytes += tx_bytes;
+ netdev->stats.tx_packets += tx_packets;
+@@ -3732,9 +3763,15 @@ restart_loop:
+ (adapter->req_tx_entries_per_subcrq / 2) &&
+ __netif_subqueue_stopped(adapter->netdev,
+ scrq->pool_index)) {
+- netif_wake_subqueue(adapter->netdev, scrq->pool_index);
+- netdev_dbg(adapter->netdev, "Started queue %d\n",
+- scrq->pool_index);
++ rcu_read_lock();
++ if (adapter->tx_queues_active) {
++ netif_wake_subqueue(adapter->netdev,
++ scrq->pool_index);
++ netdev_dbg(adapter->netdev,
++ "Started queue %d\n",
++ scrq->pool_index);
++ }
++ rcu_read_unlock();
+ }
+ }
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index fa2d607a7b1b9..8f5cefb932dd1 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -1006,11 +1006,14 @@ struct ibmvnic_adapter {
+ struct work_struct ibmvnic_reset;
+ struct delayed_work ibmvnic_delayed_reset;
+ unsigned long resetting;
+- bool napi_enabled, from_passive_init;
+- bool login_pending;
+ /* last device reset time */
+ unsigned long last_reset_time;
+
++ bool napi_enabled;
++ bool from_passive_init;
++ bool login_pending;
++ /* protected by rcu */
++ bool tx_queues_active;
+ bool failover_pending;
+ bool force_reset_recovery;
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+index 945b1bb9c6f40..e5e72b5bb6196 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+@@ -218,7 +218,6 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
+ ntu += nb_buffs;
+ if (ntu == rx_ring->count) {
+ rx_desc = I40E_RX_DESC(rx_ring, 0);
+- xdp = i40e_rx_bi(rx_ring, 0);
+ ntu = 0;
+ }
+
+@@ -241,21 +240,25 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
+ static struct sk_buff *i40e_construct_skb_zc(struct i40e_ring *rx_ring,
+ struct xdp_buff *xdp)
+ {
++ unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ unsigned int metasize = xdp->data - xdp->data_meta;
+- unsigned int datasize = xdp->data_end - xdp->data;
+ struct sk_buff *skb;
+
++ net_prefetch(xdp->data_meta);
++
+ /* allocate a skb to store the frags */
+- skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+- xdp->data_end - xdp->data_hard_start,
++ skb = __napi_alloc_skb(&rx_ring->q_vector->napi, totalsize,
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb))
+ goto out;
+
+- skb_reserve(skb, xdp->data - xdp->data_hard_start);
+- memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+- if (metasize)
++ memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++ ALIGN(totalsize, sizeof(long)));
++
++ if (metasize) {
+ skb_metadata_set(skb, metasize);
++ __skb_pull(skb, metasize);
++ }
+
+ out:
+ xsk_buff_free(xdp);
+@@ -324,11 +327,11 @@ static void i40e_handle_xdp_result_zc(struct i40e_ring *rx_ring,
+ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
+ {
+ unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+- u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
+ u16 next_to_clean = rx_ring->next_to_clean;
+ u16 count_mask = rx_ring->count - 1;
+ unsigned int xdp_res, xdp_xmit = 0;
+ bool failure = false;
++ u16 cleaned_count;
+
+ while (likely(total_rx_packets < (unsigned int)budget)) {
+ union i40e_rx_desc *rx_desc;
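
This i40e hunk and the matching ice, igc, and ixgbe hunks later in the patch all converge on one construct-skb-from-XDP-buffer shape: size the skb from data_meta..data_end, copy metadata and payload together in a single long-aligned memcpy, then skb_metadata_set() plus __skb_pull() so skb->data lands on the packet while the metadata stays just ahead of it. Condensed to the shared core (kernel APIs; an illustrative composite, not any one driver's function):

    static struct sk_buff *construct_skb_zc(struct napi_struct *napi,
                                            struct xdp_buff *xdp)
    {
            unsigned int totalsize = xdp->data_end - xdp->data_meta;
            unsigned int metasize = xdp->data - xdp->data_meta;
            struct sk_buff *skb;

            net_prefetch(xdp->data_meta);

            skb = __napi_alloc_skb(napi, totalsize, GFP_ATOMIC | __GFP_NOWARN);
            if (unlikely(!skb))
                    return NULL;

            /* metadata and payload land in one copy; the ALIGN rounding
             * only touches allocator slack past the tail */
            memcpy(__skb_put(skb, totalsize), xdp->data_meta,
                   ALIGN(totalsize, sizeof(long)));

            if (metasize) {
                    skb_metadata_set(skb, metasize);
                    __skb_pull(skb, metasize); /* data now points at the packet */
            }
            return skb;
    }
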
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index bea1d1e39fa27..2f60230d332a6 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -290,6 +290,7 @@ enum ice_pf_state {
+ ICE_LINK_DEFAULT_OVERRIDE_PENDING,
+ ICE_PHY_INIT_COMPLETE,
+ ICE_FD_VF_FLUSH_CTX, /* set at FD Rx IRQ or timeout */
++ ICE_AUX_ERR_PENDING,
+ ICE_STATE_NBITS /* must be last */
+ };
+
+@@ -559,6 +560,7 @@ struct ice_pf {
+ wait_queue_head_t reset_wait_queue;
+
+ u32 hw_csum_rx_error;
++ u32 oicr_err_reg;
+ u16 oicr_idx; /* Other interrupt cause MSIX vector index */
+ u16 num_avail_sw_msix; /* remaining MSIX SW vectors left unclaimed */
+ u16 max_pf_txqs; /* Total Tx queues PF wide */
+@@ -710,7 +712,7 @@ static inline struct xsk_buff_pool *ice_tx_xsk_pool(struct ice_tx_ring *ring)
+ struct ice_vsi *vsi = ring->vsi;
+ u16 qid;
+
+- qid = ring->q_index - vsi->num_xdp_txq;
++ qid = ring->q_index - vsi->alloc_txq;
+
+ if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps))
+ return NULL;
+diff --git a/drivers/net/ethernet/intel/ice/ice_idc.c b/drivers/net/ethernet/intel/ice/ice_idc.c
+index fc3580167e7b5..5559230eff8b5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_idc.c
++++ b/drivers/net/ethernet/intel/ice/ice_idc.c
+@@ -34,6 +34,9 @@ void ice_send_event_to_aux(struct ice_pf *pf, struct iidc_event *event)
+ {
+ struct iidc_auxiliary_drv *iadrv;
+
++ if (WARN_ON_ONCE(!in_task()))
++ return;
++
+ if (!pf->adev)
+ return;
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index b7e8744b0c0a6..296f9d5f74084 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -2255,6 +2255,19 @@ static void ice_service_task(struct work_struct *work)
+ return;
+ }
+
++ if (test_and_clear_bit(ICE_AUX_ERR_PENDING, pf->state)) {
++ struct iidc_event *event;
++
++ event = kzalloc(sizeof(*event), GFP_KERNEL);
++ if (event) {
++ set_bit(IIDC_EVENT_CRIT_ERR, event->type);
++ /* report the entire OICR value to AUX driver */
++ swap(event->reg, pf->oicr_err_reg);
++ ice_send_event_to_aux(pf, event);
++ kfree(event);
++ }
++ }
++
+ if (test_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) {
+ /* Plug aux device per request */
+ ice_plug_aux_dev(pf);
+@@ -3041,17 +3054,9 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
+
+ #define ICE_AUX_CRIT_ERR (PFINT_OICR_PE_CRITERR_M | PFINT_OICR_HMC_ERR_M | PFINT_OICR_PE_PUSH_M)
+ if (oicr & ICE_AUX_CRIT_ERR) {
+- struct iidc_event *event;
+-
++ pf->oicr_err_reg |= oicr;
++ set_bit(ICE_AUX_ERR_PENDING, pf->state);
+ ena_mask &= ~ICE_AUX_CRIT_ERR;
+- event = kzalloc(sizeof(*event), GFP_ATOMIC);
+- if (event) {
+- set_bit(IIDC_EVENT_CRIT_ERR, event->type);
+- /* report the entire OICR value to AUX driver */
+- event->reg = oicr;
+- ice_send_event_to_aux(pf, event);
+- kfree(event);
+- }
+ }
+
+ /* Report any remaining unexpected interrupts */
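
The two ice_main.c hunks above are one change seen from both ends: the hard-IRQ handler stops allocating and sending the event itself (which forced GFP_ATOMIC) and instead just accumulates the OICR bits and sets a state flag, while the service task, running in process context, consumes the flag with test_and_clear_bit(), allocates with GFP_KERNEL, and hands the register snapshot off via swap() so the latch is cleared for the next error. Generic shape of the split (kernel APIs; names like err_reg and report() are hypothetical):

    /* hard IRQ: record the cause, defer the sleeping work */
    if (cause & CRIT_ERR_MASK) {
            pf->err_reg |= cause;
            set_bit(ERR_PENDING, pf->state);
    }

    /* service task (process context): consume the record exactly once */
    if (test_and_clear_bit(ERR_PENDING, pf->state)) {
            struct event *ev = kzalloc(sizeof(*ev), GFP_KERNEL);

            if (ev) {
                    swap(ev->reg, pf->err_reg); /* hand off and zero the latch */
                    report(ev);
                    kfree(ev);
            }
    }
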
+diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
+index 2388837d6d6c9..feb874bde171f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -428,20 +428,24 @@ static void ice_bump_ntc(struct ice_rx_ring *rx_ring)
+ static struct sk_buff *
+ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
+ {
+- unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
++ unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ unsigned int metasize = xdp->data - xdp->data_meta;
+- unsigned int datasize = xdp->data_end - xdp->data;
+ struct sk_buff *skb;
+
+- skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,
++ net_prefetch(xdp->data_meta);
++
++ skb = __napi_alloc_skb(&rx_ring->q_vector->napi, totalsize,
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb))
+ return NULL;
+
+- skb_reserve(skb, xdp->data - xdp->data_hard_start);
+- memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+- if (metasize)
++ memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++ ALIGN(totalsize, sizeof(long)));
++
++ if (metasize) {
+ skb_metadata_set(skb, metasize);
++ __skb_pull(skb, metasize);
++ }
+
+ xsk_buff_free(xdp);
+ return skb;
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index 51a2dcaf553de..2a5782063f4c8 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -965,10 +965,6 @@ static int igb_set_ringparam(struct net_device *netdev,
+ memcpy(&temp_ring[i], adapter->rx_ring[i],
+ sizeof(struct igb_ring));
+
+- /* Clear copied XDP RX-queue info */
+- memset(&temp_ring[i].xdp_rxq, 0,
+- sizeof(temp_ring[i].xdp_rxq));
+-
+ temp_ring[i].count = new_rx_count;
+ err = igb_setup_rx_resources(&temp_ring[i]);
+ if (err) {
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 38ba92022cd45..c1e4ad65b02de 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4352,7 +4352,18 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
+ {
+ struct igb_adapter *adapter = netdev_priv(rx_ring->netdev);
+ struct device *dev = rx_ring->dev;
+- int size;
++ int size, res;
++
++ /* XDP RX-queue info */
++ if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
++ xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
++ res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
++ rx_ring->queue_index, 0);
++ if (res < 0) {
++ dev_err(dev, "Failed to register xdp_rxq index %u\n",
++ rx_ring->queue_index);
++ return res;
++ }
+
+ size = sizeof(struct igb_rx_buffer) * rx_ring->count;
+
+@@ -4375,14 +4386,10 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
+
+ rx_ring->xdp_prog = adapter->xdp_prog;
+
+- /* XDP RX-queue info */
+- if (xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
+- rx_ring->queue_index, 0) < 0)
+- goto err;
+-
+ return 0;
+
+ err:
++ xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+ vfree(rx_ring->rx_buffer_info);
+ rx_ring->rx_buffer_info = NULL;
+ dev_err(dev, "Unable to allocate memory for the Rx descriptor ring\n");
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 2f17f36e94fde..2a9ae53238f73 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -505,6 +505,9 @@ int igc_setup_rx_resources(struct igc_ring *rx_ring)
+ u8 index = rx_ring->queue_index;
+ int size, desc_len, res;
+
++ /* XDP RX-queue info */
++ if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
++ xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+ res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, ndev, index,
+ rx_ring->q_vector->napi.napi_id);
+ if (res < 0) {
+@@ -2446,19 +2449,20 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
+ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+ struct xdp_buff *xdp)
+ {
++ unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ unsigned int metasize = xdp->data - xdp->data_meta;
+- unsigned int datasize = xdp->data_end - xdp->data;
+- unsigned int totalsize = metasize + datasize;
+ struct sk_buff *skb;
+
+- skb = __napi_alloc_skb(&ring->q_vector->napi,
+- xdp->data_end - xdp->data_hard_start,
++ net_prefetch(xdp->data_meta);
++
++ skb = __napi_alloc_skb(&ring->q_vector->napi, totalsize,
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb))
+ return NULL;
+
+- skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
+- memcpy(__skb_put(skb, totalsize), xdp->data_meta, totalsize);
++ memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++ ALIGN(totalsize, sizeof(long)));
++
+ if (metasize) {
+ skb_metadata_set(skb, metasize);
+ __skb_pull(skb, metasize);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+index 6a5e9cf6b5dac..dd7ff66d422f0 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+@@ -207,26 +207,28 @@ bool ixgbe_alloc_rx_buffers_zc(struct ixgbe_ring *rx_ring, u16 count)
+ }
+
+ static struct sk_buff *ixgbe_construct_skb_zc(struct ixgbe_ring *rx_ring,
+- struct ixgbe_rx_buffer *bi)
++ const struct xdp_buff *xdp)
+ {
+- unsigned int metasize = bi->xdp->data - bi->xdp->data_meta;
+- unsigned int datasize = bi->xdp->data_end - bi->xdp->data;
++ unsigned int totalsize = xdp->data_end - xdp->data_meta;
++ unsigned int metasize = xdp->data - xdp->data_meta;
+ struct sk_buff *skb;
+
++ net_prefetch(xdp->data_meta);
++
+ /* allocate a skb to store the frags */
+- skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+- bi->xdp->data_end - bi->xdp->data_hard_start,
++ skb = __napi_alloc_skb(&rx_ring->q_vector->napi, totalsize,
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb))
+ return NULL;
+
+- skb_reserve(skb, bi->xdp->data - bi->xdp->data_hard_start);
+- memcpy(__skb_put(skb, datasize), bi->xdp->data, datasize);
+- if (metasize)
++ memcpy(__skb_put(skb, totalsize), xdp->data_meta,
++ ALIGN(totalsize, sizeof(long)));
++
++ if (metasize) {
+ skb_metadata_set(skb, metasize);
++ __skb_pull(skb, metasize);
++ }
+
+- xsk_buff_free(bi->xdp);
+- bi->xdp = NULL;
+ return skb;
+ }
+
+@@ -317,12 +319,15 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
+ }
+
+ /* XDP_PASS path */
+- skb = ixgbe_construct_skb_zc(rx_ring, bi);
++ skb = ixgbe_construct_skb_zc(rx_ring, bi->xdp);
+ if (!skb) {
+ rx_ring->rx_stats.alloc_rx_buff_failed++;
+ break;
+ }
+
++ xsk_buff_free(bi->xdp);
++ bi->xdp = NULL;
++
+ cleaned_count++;
+ ixgbe_inc_ntc(rx_ring);
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+index 91f86d77cd41b..3a31fb8cc1554 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+@@ -605,7 +605,7 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
+ struct npc_install_flow_req req = { 0 };
+ struct npc_install_flow_rsp rsp = { 0 };
+ struct npc_mcam *mcam = &rvu->hw->mcam;
+- struct nix_rx_action action;
++ struct nix_rx_action action = { 0 };
+ int blkaddr, index;
+
+ /* AF's and SDP VFs work in promiscuous mode */
+@@ -626,7 +626,6 @@ void rvu_npc_install_ucast_entry(struct rvu *rvu, u16 pcifunc,
+ *(u64 *)&action = npc_get_mcam_action(rvu, mcam,
+ blkaddr, index);
+ } else {
+- *(u64 *)&action = 0x00;
+ action.op = NIX_RX_ACTIONOP_UCAST;
+ action.pf_func = pcifunc;
+ }
+@@ -657,7 +656,7 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
+ struct npc_mcam *mcam = &rvu->hw->mcam;
+ struct rvu_hwinfo *hw = rvu->hw;
+ int blkaddr, ucast_idx, index;
+- struct nix_rx_action action;
++ struct nix_rx_action action = { 0 };
+ u64 relaxed_mask;
+
+ if (!hw->cap.nix_rx_multicast && is_cgx_vf(rvu, pcifunc))
+@@ -685,14 +684,14 @@ void rvu_npc_install_promisc_entry(struct rvu *rvu, u16 pcifunc,
+ blkaddr, ucast_idx);
+
+ if (action.op != NIX_RX_ACTIONOP_RSS) {
+- *(u64 *)&action = 0x00;
++ *(u64 *)&action = 0;
+ action.op = NIX_RX_ACTIONOP_UCAST;
+ }
+
+ /* RX_ACTION set to MCAST for CGX PF's */
+ if (hw->cap.nix_rx_multicast && pfvf->use_mce_list &&
+ is_pf_cgxmapped(rvu, rvu_get_pf(pcifunc))) {
+- *(u64 *)&action = 0x00;
++ *(u64 *)&action = 0;
+ action.op = NIX_RX_ACTIONOP_MCAST;
+ pfvf = rvu_get_pfvf(rvu, pcifunc & ~RVU_PFVF_FUNC_MASK);
+ action.index = pfvf->promisc_mce_idx;
+@@ -832,7 +831,7 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
+ struct rvu_hwinfo *hw = rvu->hw;
+ int blkaddr, ucast_idx, index;
+ u8 mac_addr[ETH_ALEN] = { 0 };
+- struct nix_rx_action action;
++ struct nix_rx_action action = { 0 };
+ struct rvu_pfvf *pfvf;
+ u16 vf_func;
+
+@@ -861,14 +860,14 @@ void rvu_npc_install_allmulti_entry(struct rvu *rvu, u16 pcifunc, int nixlf,
+ blkaddr, ucast_idx);
+
+ if (action.op != NIX_RX_ACTIONOP_RSS) {
+- *(u64 *)&action = 0x00;
++ *(u64 *)&action = 0;
+ action.op = NIX_RX_ACTIONOP_UCAST;
+ action.pf_func = pcifunc;
+ }
+
+ /* RX_ACTION set to MCAST for CGX PF's */
+ if (hw->cap.nix_rx_multicast && pfvf->use_mce_list) {
+- *(u64 *)&action = 0x00;
++ *(u64 *)&action = 0;
+ action.op = NIX_RX_ACTIONOP_MCAST;
+ action.index = pfvf->mcast_mce_idx;
+ }
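
The rvu_npc hunks replace scattered *(u64 *)&action = 0x00; writes with a { 0 } initializer at the declaration, so the bitfield-packed action word is guaranteed zero on every path before individual fields are assigned, including the paths that previously read it uninitialized. A runnable illustration of the idiom (plain C; the struct layout is a simplified stand-in for nix_rx_action):

    #include <stdio.h>
    #include <stdint.h>

    struct rx_action {              /* stand-in for nix_rx_action */
            uint64_t op       : 4;
            uint64_t pf_func  : 16;
            uint64_t index    : 20;
            uint64_t reserved : 24;
    };

    int main(void)
    {
            struct rx_action a = { 0 };     /* all 64 bits zeroed up front */

            a.op = 1;                       /* only the intended fields set */
            a.pf_func = 0x42;
            printf("raw = %#llx\n", (unsigned long long)*(uint64_t *)&a);
            return 0;
    }
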
+diff --git a/drivers/net/ethernet/microchip/sparx5/Kconfig b/drivers/net/ethernet/microchip/sparx5/Kconfig
+index 7bdbb2d09a148..cc5e48e1bb4c3 100644
+--- a/drivers/net/ethernet/microchip/sparx5/Kconfig
++++ b/drivers/net/ethernet/microchip/sparx5/Kconfig
+@@ -4,6 +4,8 @@ config SPARX5_SWITCH
+ depends on HAS_IOMEM
+ depends on OF
+ depends on ARCH_SPARX5 || COMPILE_TEST
++ depends on PTP_1588_CLOCK_OPTIONAL
++ depends on BRIDGE || BRIDGE=n
+ select PHYLINK
+ select PHY_SPARX5_SERDES
+ select RESET_CONTROLLER
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c b/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c
+index 7436f62fa1525..174ad95e746a3 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_fdma.c
+@@ -420,6 +420,8 @@ static int sparx5_fdma_tx_alloc(struct sparx5 *sparx5)
+ db_hw->dataptr = phys;
+ db_hw->status = 0;
+ db = devm_kzalloc(sparx5->dev, sizeof(*db), GFP_KERNEL);
++ if (!db)
++ return -ENOMEM;
+ db->cpu_addr = cpu_addr;
+ list_add_tail(&db->list, &tx->db_list);
+ }
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+index 7e296fa71b368..40fa5bce2ac2c 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_bus_pci.c
+@@ -331,6 +331,9 @@ static int ionic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_out_deregister_lifs;
+ }
+
++ mod_timer(&ionic->watchdog_timer,
++ round_jiffies(jiffies + ionic->watchdog_period));
++
+ return 0;
+
+ err_out_deregister_lifs:
+@@ -348,7 +351,6 @@ err_out_port_reset:
+ err_out_reset:
+ ionic_reset(ionic);
+ err_out_teardown:
+- del_timer_sync(&ionic->watchdog_timer);
+ pci_clear_master(pdev);
+ /* Don't fail the probe for these errors, keep
+ * the hw interface around for inspection
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+index d57e80d44c9df..2c7ce820a1fa7 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+@@ -122,9 +122,6 @@ int ionic_dev_setup(struct ionic *ionic)
+ idev->fw_generation = IONIC_FW_STS_F_GENERATION &
+ ioread8(&idev->dev_info_regs->fw_status);
+
+- mod_timer(&ionic->watchdog_timer,
+- round_jiffies(jiffies + ionic->watchdog_period));
+-
+ idev->db_pages = bar->vaddr;
+ idev->phy_db_pages = bar->bus_addr;
+
+@@ -132,6 +129,16 @@ int ionic_dev_setup(struct ionic *ionic)
+ }
+
+ /* Devcmd Interface */
++bool ionic_is_fw_running(struct ionic_dev *idev)
++{
++ u8 fw_status = ioread8(&idev->dev_info_regs->fw_status);
++
++ /* firmware is useful only if the running bit is set and
++ * fw_status != 0xff (bad PCI read)
++ */
++ return (fw_status != 0xff) && (fw_status & IONIC_FW_STS_F_RUNNING);
++}
++
+ int ionic_heartbeat_check(struct ionic *ionic)
+ {
+ struct ionic_dev *idev = &ionic->idev;
+@@ -155,13 +162,10 @@ do_check_time:
+ goto do_check_time;
+ }
+
+- /* firmware is useful only if the running bit is set and
+- * fw_status != 0xff (bad PCI read)
+- * If fw_status is not ready don't bother with the generation.
+- */
+ fw_status = ioread8(&idev->dev_info_regs->fw_status);
+
+- if (fw_status == 0xff || !(fw_status & IONIC_FW_STS_F_RUNNING)) {
++ /* If fw_status is not ready don't bother with the generation */
++ if (!ionic_is_fw_running(idev)) {
+ fw_status_ready = false;
+ } else {
+ fw_generation = fw_status & IONIC_FW_STS_F_GENERATION;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+index e5acf3bd62b2d..73b950ac12722 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+@@ -353,5 +353,6 @@ void ionic_q_rewind(struct ionic_queue *q, struct ionic_desc_info *start);
+ void ionic_q_service(struct ionic_queue *q, struct ionic_cq_info *cq_info,
+ unsigned int stop_index);
+ int ionic_heartbeat_check(struct ionic *ionic);
++bool ionic_is_fw_running(struct ionic_dev *idev);
+
+ #endif /* _IONIC_DEV_H_ */
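
The new ionic_is_fw_running() predicate gives the heartbeat check and the reset paths in ionic_main.c below one shared definition of "alive": a fw_status of 0xff means the PCI read itself failed, so no bit in it can be trusted, and only otherwise is the RUNNING bit meaningful. The test in isolation, with a tiny harness (plain C; the register read is stubbed out):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FW_STS_F_RUNNING 0x01

    static bool fw_running(uint8_t fw_status)
    {
            /* 0xff means the PCI read failed; the bits are garbage then */
            return fw_status != 0xff && (fw_status & FW_STS_F_RUNNING);
    }

    int main(void)
    {
            printf("%d %d %d\n",
                   fw_running(0x01),        /* 1: running */
                   fw_running(0x00),        /* 0: halted */
                   fw_running(0xff));       /* 0: dead PCI read */
            return 0;
    }
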
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+index 875f4ec42efee..a0f9136b2d899 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
+@@ -215,9 +215,13 @@ static void ionic_adminq_flush(struct ionic_lif *lif)
+ void ionic_adminq_netdev_err_print(struct ionic_lif *lif, u8 opcode,
+ u8 status, int err)
+ {
++ const char *stat_str;
++
++ stat_str = (err == -ETIMEDOUT) ? "TIMEOUT" :
++ ionic_error_to_str(status);
++
+ netdev_err(lif->netdev, "%s (%d) failed: %s (%d)\n",
+- ionic_opcode_to_str(opcode), opcode,
+- ionic_error_to_str(status), err);
++ ionic_opcode_to_str(opcode), opcode, stat_str, err);
+ }
+
+ static int ionic_adminq_check_err(struct ionic_lif *lif,
+@@ -318,6 +322,7 @@ int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
+ if (do_msg && !test_bit(IONIC_LIF_F_FW_RESET, lif->state))
+ netdev_err(netdev, "Posting of %s (%d) failed: %d\n",
+ name, ctx->cmd.cmd.opcode, err);
++ ctx->comp.comp.status = IONIC_RC_ERROR;
+ return err;
+ }
+
+@@ -336,6 +341,7 @@ int ionic_adminq_wait(struct ionic_lif *lif, struct ionic_admin_ctx *ctx,
+ if (do_msg)
+ netdev_err(netdev, "%s (%d) interrupted, FW in reset\n",
+ name, ctx->cmd.cmd.opcode);
++ ctx->comp.comp.status = IONIC_RC_ERROR;
+ return -ENXIO;
+ }
+
+@@ -370,10 +376,10 @@ int ionic_adminq_post_wait_nomsg(struct ionic_lif *lif, struct ionic_admin_ctx *
+
+ static void ionic_dev_cmd_clean(struct ionic *ionic)
+ {
+- union __iomem ionic_dev_cmd_regs *regs = ionic->idev.dev_cmd_regs;
++ struct ionic_dev *idev = &ionic->idev;
+
+- iowrite32(0, &regs->doorbell);
+- memset_io(&regs->cmd, 0, sizeof(regs->cmd));
++ iowrite32(0, &idev->dev_cmd_regs->doorbell);
++ memset_io(&idev->dev_cmd_regs->cmd, 0, sizeof(idev->dev_cmd_regs->cmd));
+ }
+
+ int ionic_dev_cmd_wait(struct ionic *ionic, unsigned long max_seconds)
+@@ -540,6 +546,9 @@ int ionic_reset(struct ionic *ionic)
+ struct ionic_dev *idev = &ionic->idev;
+ int err;
+
++ if (!ionic_is_fw_running(idev))
++ return 0;
++
+ mutex_lock(&ionic->dev_cmd_lock);
+ ionic_dev_cmd_reset(idev);
+ err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+@@ -612,15 +621,17 @@ int ionic_port_init(struct ionic *ionic)
+ int ionic_port_reset(struct ionic *ionic)
+ {
+ struct ionic_dev *idev = &ionic->idev;
+- int err;
++ int err = 0;
+
+ if (!idev->port_info)
+ return 0;
+
+- mutex_lock(&ionic->dev_cmd_lock);
+- ionic_dev_cmd_port_reset(idev);
+- err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
+- mutex_unlock(&ionic->dev_cmd_lock);
++ if (ionic_is_fw_running(idev)) {
++ mutex_lock(&ionic->dev_cmd_lock);
++ ionic_dev_cmd_port_reset(idev);
++ err = ionic_dev_cmd_wait(ionic, DEVCMD_TIMEOUT);
++ mutex_unlock(&ionic->dev_cmd_lock);
++ }
+
+ dma_free_coherent(ionic->dev, idev->port_info_sz,
+ idev->port_info, idev->port_info_pa);
+@@ -628,9 +639,6 @@ int ionic_port_reset(struct ionic *ionic)
+ idev->port_info = NULL;
+ idev->port_info_pa = 0;
+
+- if (err)
+- dev_err(ionic->dev, "Failed to reset port\n");
+-
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+index 48cf4355bc47a..0848b5529d48a 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+@@ -2984,12 +2984,16 @@ static int qed_iov_pre_update_vport(struct qed_hwfn *hwfn,
+ u8 mask = QED_ACCEPT_UCAST_UNMATCHED | QED_ACCEPT_MCAST_UNMATCHED;
+ struct qed_filter_accept_flags *flags = &params->accept_flags;
+ struct qed_public_vf_info *vf_info;
++ u16 tlv_mask;
++
++ tlv_mask = BIT(QED_IOV_VP_UPDATE_ACCEPT_PARAM) |
++ BIT(QED_IOV_VP_UPDATE_ACCEPT_ANY_VLAN);
+
+ /* Untrusted VFs can't even be trusted to know that fact.
+ * Simply indicate everything is configured fine, and trace
+ * configuration 'behind their back'.
+ */
+- if (!(*tlvs & BIT(QED_IOV_VP_UPDATE_ACCEPT_PARAM)))
++ if (!(*tlvs & tlv_mask))
+ return 0;
+
+ vf_info = qed_iov_get_public_vf_info(hwfn, vfid, true);
+@@ -3006,6 +3010,13 @@ static int qed_iov_pre_update_vport(struct qed_hwfn *hwfn,
+ flags->tx_accept_filter &= ~mask;
+ }
+
++ if (params->update_accept_any_vlan_flg) {
++ vf_info->accept_any_vlan = params->accept_any_vlan;
++
++ if (vf_info->forced_vlan && !vf_info->is_trusted_configured)
++ params->accept_any_vlan = false;
++ }
++
+ return 0;
+ }
+
+@@ -4719,6 +4730,7 @@ static int qed_get_vf_config(struct qed_dev *cdev,
+ tx_rate = vf_info->tx_rate;
+ ivi->max_tx_rate = tx_rate ? tx_rate : link.speed;
+ ivi->min_tx_rate = qed_iov_get_vf_min_rate(hwfn, vf_id);
++ ivi->trusted = vf_info->is_trusted_request;
+
+ return 0;
+ }
+@@ -5149,6 +5161,12 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
+
+ params.update_ctl_frame_check = 1;
+ params.mac_chk_en = !vf_info->is_trusted_configured;
++ params.update_accept_any_vlan_flg = 0;
++
++ if (vf_info->accept_any_vlan && vf_info->forced_vlan) {
++ params.update_accept_any_vlan_flg = 1;
++ params.accept_any_vlan = vf_info->accept_any_vlan;
++ }
+
+ if (vf_info->rx_accept_mode & mask) {
+ flags->update_rx_mode_config = 1;
+@@ -5164,13 +5182,20 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
+ if (!vf_info->is_trusted_configured) {
+ flags->rx_accept_filter &= ~mask;
+ flags->tx_accept_filter &= ~mask;
++ params.accept_any_vlan = false;
+ }
+
+ if (flags->update_rx_mode_config ||
+ flags->update_tx_mode_config ||
+- params.update_ctl_frame_check)
++ params.update_ctl_frame_check ||
++ params.update_accept_any_vlan_flg) {
++ DP_VERBOSE(hwfn, QED_MSG_IOV,
++ "vport update config for %s VF[abs 0x%x rel 0x%x]\n",
++ vf_info->is_trusted_configured ? "trusted" : "untrusted",
++ vf->abs_vf_id, vf->relative_vf_id);
+ qed_sp_vport_update(hwfn, &params,
+ QED_SPQ_MODE_EBLOCK, NULL);
++ }
+ }
+ }
+
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.h b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
+index f448e3dd6c8ba..6ee2493de1642 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
+@@ -62,6 +62,7 @@ struct qed_public_vf_info {
+ bool is_trusted_request;
+ u8 rx_accept_mode;
+ u8 tx_accept_mode;
++ bool accept_any_vlan;
+ };
+
+ struct qed_iov_vf_init_params {
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+index 5d79ee4370bcd..7519773eaca6e 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_dcb.h
+@@ -51,7 +51,7 @@ static inline int qlcnic_dcb_get_hw_capability(struct qlcnic_dcb *dcb)
+ if (dcb && dcb->ops->get_hw_capability)
+ return dcb->ops->get_hw_capability(dcb);
+
+- return 0;
++ return -EOPNOTSUPP;
+ }
+
+ static inline void qlcnic_dcb_free(struct qlcnic_dcb *dcb)
+@@ -65,7 +65,7 @@ static inline int qlcnic_dcb_attach(struct qlcnic_dcb *dcb)
+ if (dcb && dcb->ops->attach)
+ return dcb->ops->attach(dcb);
+
+- return 0;
++ return -EOPNOTSUPP;
+ }
+
+ static inline int
+@@ -74,7 +74,7 @@ qlcnic_dcb_query_hw_capability(struct qlcnic_dcb *dcb, char *buf)
+ if (dcb && dcb->ops->query_hw_capability)
+ return dcb->ops->query_hw_capability(dcb, buf);
+
+- return 0;
++ return -EOPNOTSUPP;
+ }
+
+ static inline void qlcnic_dcb_get_info(struct qlcnic_dcb *dcb)
+@@ -89,7 +89,7 @@ qlcnic_dcb_query_cee_param(struct qlcnic_dcb *dcb, char *buf, u8 type)
+ if (dcb && dcb->ops->query_cee_param)
+ return dcb->ops->query_cee_param(dcb, buf, type);
+
+- return 0;
++ return -EOPNOTSUPP;
+ }
+
+ static inline int qlcnic_dcb_get_cee_cfg(struct qlcnic_dcb *dcb)
+@@ -97,7 +97,7 @@ static inline int qlcnic_dcb_get_cee_cfg(struct qlcnic_dcb *dcb)
+ if (dcb && dcb->ops->get_cee_cfg)
+ return dcb->ops->get_cee_cfg(dcb);
+
+- return 0;
++ return -EOPNOTSUPP;
+ }
+
+ static inline void qlcnic_dcb_aen_handler(struct qlcnic_dcb *dcb, void *msg)
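
Switching the qlcnic DCB stubs from return 0 to return -EOPNOTSUPP lets callers distinguish "the op ran and succeeded" from "the op isn't wired up at all". The wrapper idiom in miniature (plain C, runnable; the names are hypothetical):

    #include <errno.h>
    #include <stdio.h>

    struct dcb_ops {
            int (*attach)(void *dcb);
    };

    struct dcb {
            const struct dcb_ops *ops;
    };

    /* 0 only when the op actually ran; -EOPNOTSUPP when unimplemented */
    static int dcb_attach(struct dcb *dcb)
    {
            if (dcb && dcb->ops && dcb->ops->attach)
                    return dcb->ops->attach(dcb);
            return -EOPNOTSUPP;
    }

    int main(void)
    {
            struct dcb no_ops = { 0 };

            printf("%d\n", dcb_attach(&no_ops));    /* -95 on Linux */
            return 0;
    }
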
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+index 2ffa0a11eea56..569683f33804c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+@@ -460,6 +460,13 @@ static int ethqos_clks_config(void *priv, bool enabled)
+ dev_err(ðqos->pdev->dev, "rgmii_clk enable failed\n");
+ return ret;
+ }
++
++ /* Enable the functional clock to prevent the DMA reset from
++ * timing out due to a missing PHY clock after the hardware block
++ * has been power cycled. The actual configuration will be
++ * adjusted once ethqos_fix_mac_speed() is invoked.
++ */
++ ethqos_set_func_clk_en(ethqos);
+ } else {
+ clk_disable_unprepare(ethqos->rgmii_clk);
+ }
+diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
+index aa42141be3c0e..a557a477d0393 100644
+--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
++++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
+@@ -364,11 +364,9 @@ int cpsw_ethtool_op_begin(struct net_device *ndev)
+ struct cpsw_common *cpsw = priv->cpsw;
+ int ret;
+
+- ret = pm_runtime_get_sync(cpsw->dev);
+- if (ret < 0) {
++ ret = pm_runtime_resume_and_get(cpsw->dev);
++ if (ret < 0)
+ cpsw_err(priv, drv, "ethtool begin failed %d\n", ret);
+- pm_runtime_put_noidle(cpsw->dev);
+- }
+
+ return ret;
+ }
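
The cpsw hunk is a mechanical conversion to pm_runtime_resume_and_get(), which was added precisely because the old pairing was easy to get wrong: pm_runtime_get_sync() bumps the usage count even when it fails, so the error branch must remember a pm_runtime_put_noidle(). Side by side (kernel runtime-PM API; illustrative fragment):

    /* old idiom: the count stays raised even when resume fails */
    ret = pm_runtime_get_sync(dev);
    if (ret < 0) {
            pm_runtime_put_noidle(dev);     /* mandatory, and easy to forget */
            return ret;
    }

    /* new idiom: one call, balanced automatically on failure */
    ret = pm_runtime_resume_and_get(dev);
    if (ret < 0)
            return ret;
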
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 377c94ec24869..90d96eb79984e 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -857,46 +857,53 @@ static void axienet_recv(struct net_device *ndev)
+ while ((cur_p->status & XAXIDMA_BD_STS_COMPLETE_MASK)) {
+ dma_addr_t phys;
+
+- tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
+-
+ /* Ensure we see complete descriptor update */
+ dma_rmb();
+- phys = desc_get_phys_addr(lp, cur_p);
+- dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
+- DMA_FROM_DEVICE);
+
+ skb = cur_p->skb;
+ cur_p->skb = NULL;
+- length = cur_p->app4 & 0x0000FFFF;
+-
+- skb_put(skb, length);
+- skb->protocol = eth_type_trans(skb, ndev);
+- /*skb_checksum_none_assert(skb);*/
+- skb->ip_summed = CHECKSUM_NONE;
+-
+- /* if we're doing Rx csum offload, set it up */
+- if (lp->features & XAE_FEATURE_FULL_RX_CSUM) {
+- csumstatus = (cur_p->app2 &
+- XAE_FULL_CSUM_STATUS_MASK) >> 3;
+- if ((csumstatus == XAE_IP_TCP_CSUM_VALIDATED) ||
+- (csumstatus == XAE_IP_UDP_CSUM_VALIDATED)) {
+- skb->ip_summed = CHECKSUM_UNNECESSARY;
++
++ /* skb could be NULL if a previous pass already received the
++ * packet for this slot in the ring, but failed to refill it
++ * with a newly allocated buffer. In this case, don't try to
++ * receive it again.
++ */
++ if (likely(skb)) {
++ length = cur_p->app4 & 0x0000FFFF;
++
++ phys = desc_get_phys_addr(lp, cur_p);
++ dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size,
++ DMA_FROM_DEVICE);
++
++ skb_put(skb, length);
++ skb->protocol = eth_type_trans(skb, ndev);
++ /*skb_checksum_none_assert(skb);*/
++ skb->ip_summed = CHECKSUM_NONE;
++
++ /* if we're doing Rx csum offload, set it up */
++ if (lp->features & XAE_FEATURE_FULL_RX_CSUM) {
++ csumstatus = (cur_p->app2 &
++ XAE_FULL_CSUM_STATUS_MASK) >> 3;
++ if (csumstatus == XAE_IP_TCP_CSUM_VALIDATED ||
++ csumstatus == XAE_IP_UDP_CSUM_VALIDATED) {
++ skb->ip_summed = CHECKSUM_UNNECESSARY;
++ }
++ } else if ((lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) != 0 &&
++ skb->protocol == htons(ETH_P_IP) &&
++ skb->len > 64) {
++ skb->csum = be32_to_cpu(cur_p->app3 & 0xFFFF);
++ skb->ip_summed = CHECKSUM_COMPLETE;
+ }
+- } else if ((lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) != 0 &&
+- skb->protocol == htons(ETH_P_IP) &&
+- skb->len > 64) {
+- skb->csum = be32_to_cpu(cur_p->app3 & 0xFFFF);
+- skb->ip_summed = CHECKSUM_COMPLETE;
+- }
+
+- netif_rx(skb);
++ netif_rx(skb);
+
+- size += length;
+- packets++;
++ size += length;
++ packets++;
++ }
+
+ new_skb = netdev_alloc_skb_ip_align(ndev, lp->max_frm_size);
+ if (!new_skb)
+- return;
++ break;
+
+ phys = dma_map_single(ndev->dev.parent, new_skb->data,
+ lp->max_frm_size,
+@@ -905,7 +912,7 @@ static void axienet_recv(struct net_device *ndev)
+ if (net_ratelimit())
+ netdev_err(ndev, "RX DMA mapping error\n");
+ dev_kfree_skb(new_skb);
+- return;
++ break;
+ }
+ desc_set_phys_addr(lp, phys, cur_p);
+
+@@ -913,6 +920,11 @@ static void axienet_recv(struct net_device *ndev)
+ cur_p->status = 0;
+ cur_p->skb = new_skb;
+
++ /* Only update tail_p to mark this slot as usable after it has
++ * been successfully refilled.
++ */
++ tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci;
++
+ if (++lp->rx_bd_ci >= lp->rx_bd_num)
+ lp->rx_bd_ci = 0;
+ cur_p = &lp->rx_bd_v[lp->rx_bd_ci];
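
Two invariants fall out of the axienet receive rework above: a descriptor slot is handed back to the hardware (by advancing the tail pointer) only after it holds a fresh buffer, and a slot whose refill failed on an earlier pass is recognized by its NULL skb and skipped rather than delivered twice. The loop stripped to its shape (pseudocode-level C; every helper name here is hypothetical):

    while (desc_complete(cur)) {
            skb = cur->skb;
            cur->skb = NULL;

            if (skb)                /* NULL: refill failed on a prior pass */
                    deliver(skb, cur->len); /* unmap, push to the stack */

            new = alloc_rx_buffer();
            if (!new)
                    break;          /* leave the slot empty, retry next IRQ */
            attach(cur, new);

            tail = slot_phys_addr(cur); /* only now may hardware reuse it */
            cur = advance(cur);
    }
    if (tail)
            write_tail_pointer(tail);
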
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index afa81a9480ccd..e675d1016c3c8 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -154,19 +154,15 @@ static void free_netvsc_device(struct rcu_head *head)
+
+ kfree(nvdev->extension);
+
+- if (nvdev->recv_original_buf) {
+- hv_unmap_memory(nvdev->recv_buf);
++ if (nvdev->recv_original_buf)
+ vfree(nvdev->recv_original_buf);
+- } else {
++ else
+ vfree(nvdev->recv_buf);
+- }
+
+- if (nvdev->send_original_buf) {
+- hv_unmap_memory(nvdev->send_buf);
++ if (nvdev->send_original_buf)
+ vfree(nvdev->send_original_buf);
+- } else {
++ else
+ vfree(nvdev->send_buf);
+- }
+
+ bitmap_free(nvdev->send_section_map);
+
+@@ -765,6 +761,12 @@ void netvsc_device_remove(struct hv_device *device)
+ netvsc_teardown_send_gpadl(device, net_device, ndev);
+ }
+
++ if (net_device->recv_original_buf)
++ hv_unmap_memory(net_device->recv_buf);
++
++ if (net_device->send_original_buf)
++ hv_unmap_memory(net_device->send_buf);
++
+ /* Release all resources */
+ free_netvsc_device_rcu(net_device);
+ }
+@@ -1821,6 +1823,12 @@ cleanup:
+ netif_napi_del(&net_device->chan_table[0].napi);
+
+ cleanup2:
++ if (net_device->recv_original_buf)
++ hv_unmap_memory(net_device->recv_buf);
++
++ if (net_device->send_original_buf)
++ hv_unmap_memory(net_device->send_buf);
++
+ free_netvsc_device(&net_device->rcu);
+
+ return ERR_PTR(ret);
+diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c
+index 29aa811af430f..a8794065b250b 100644
+--- a/drivers/net/phy/at803x.c
++++ b/drivers/net/phy/at803x.c
+@@ -784,25 +784,7 @@ static int at803x_probe(struct phy_device *phydev)
+ return ret;
+ }
+
+- /* Some bootloaders leave the fiber page selected.
+- * Switch to the copper page, as otherwise we read
+- * the PHY capabilities from the fiber side.
+- */
+- if (phydev->drv->phy_id == ATH8031_PHY_ID) {
+- phy_lock_mdio_bus(phydev);
+- ret = at803x_write_page(phydev, AT803X_PAGE_COPPER);
+- phy_unlock_mdio_bus(phydev);
+- if (ret)
+- goto err;
+- }
+-
+ return 0;
+-
+-err:
+- if (priv->vddio)
+- regulator_disable(priv->vddio);
+-
+- return ret;
+ }
+
+ static void at803x_remove(struct phy_device *phydev)
+@@ -912,6 +894,22 @@ static int at803x_config_init(struct phy_device *phydev)
+ {
+ int ret;
+
++ if (phydev->drv->phy_id == ATH8031_PHY_ID) {
++ /* Some bootloaders leave the fiber page selected.
++ * Switch to the copper page, as otherwise we read
++ * the PHY capabilities from the fiber side.
++ */
++ phy_lock_mdio_bus(phydev);
++ ret = at803x_write_page(phydev, AT803X_PAGE_COPPER);
++ phy_unlock_mdio_bus(phydev);
++ if (ret)
++ return ret;
++
++ ret = at8031_pll_config(phydev);
++ if (ret < 0)
++ return ret;
++ }
++
+ /* The RX and TX delay default is:
+ * after HW reset: RX delay enabled and TX delay disabled
+ * after SW reset: RX delay enabled, while TX delay retains the
+@@ -941,12 +939,6 @@ static int at803x_config_init(struct phy_device *phydev)
+ if (ret < 0)
+ return ret;
+
+- if (phydev->drv->phy_id == ATH8031_PHY_ID) {
+- ret = at8031_pll_config(phydev);
+- if (ret < 0)
+- return ret;
+- }
+-
+ /* Ar803x extended next page bit is enabled by default. Cisco
+ * multigig switches read this bit and attempt to negotiate 10Gbps
+ * rates even if the next page bit is disabled. This is incorrect
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index 3c683e0e40e9e..e36809aa6d300 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -11,6 +11,7 @@
+ */
+
+ #include "bcm-phy-lib.h"
++#include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/phy.h>
+ #include <linux/brcmphy.h>
+@@ -602,6 +603,26 @@ static int brcm_fet_config_init(struct phy_device *phydev)
+ if (err < 0)
+ return err;
+
++ /* The datasheet indicates the PHY needs up to 1us to complete a reset;
++ * build in some slack here.
++ */
++ usleep_range(1000, 2000);
++
++ /* The PHY requires 65 MDC clock cycles to complete a write operation
++ * and turnaround the line properly.
++ *
++ * We ignore -EIO here as the MDIO controller (e.g.: mdio-bcm-unimac)
++ * may flag the lack of turn-around as a read failure. This is
++ * particularly true with this combination since the MDIO controller
++ * only uses 64 MDC cycles. This is not a critical failure in this
++ * specific case and it has no functional impact otherwise, so we let
++ * that one go through. If there is a genuine bus error, the next read
++ * of MII_BRCM_FET_INTREG will error out.
++ */
++ err = phy_read(phydev, MII_BMCR);
++ if (err < 0 && err != -EIO)
++ return err;
++
+ reg = phy_read(phydev, MII_BRCM_FET_INTREG);
+ if (reg < 0)
+ return reg;
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index a7ebcdab415b5..281cebc3d00cc 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1596,11 +1596,13 @@ static int lanphy_read_page_reg(struct phy_device *phydev, int page, u32 addr)
+ {
+ u32 data;
+
+- phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
+- phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
+- phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
+- (page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC));
+- data = phy_read(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA);
++ phy_lock_mdio_bus(phydev);
++ __phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
++ __phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
++ __phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
++ (page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC));
++ data = __phy_read(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA);
++ phy_unlock_mdio_bus(phydev);
+
+ return data;
+ }
+@@ -1608,18 +1610,18 @@ static int lanphy_read_page_reg(struct phy_device *phydev, int page, u32 addr)
+ static int lanphy_write_page_reg(struct phy_device *phydev, int page, u16 addr,
+ u16 val)
+ {
+- phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
+- phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
+- phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
+- (page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC));
++ phy_lock_mdio_bus(phydev);
++ __phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL, page);
++ __phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, addr);
++ __phy_write(phydev, LAN_EXT_PAGE_ACCESS_CONTROL,
++ page | LAN_EXT_PAGE_ACCESS_CTRL_EP_FUNC);
+
+- val = phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, val);
+- if (val) {
++ val = __phy_write(phydev, LAN_EXT_PAGE_ACCESS_ADDRESS_DATA, val);
++ if (val != 0)
+ phydev_err(phydev, "Error: phy_write has returned error %d\n",
+ val);
+- return val;
+- }
+- return 0;
++ phy_unlock_mdio_bus(phydev);
++ return val;
+ }
+
+ static int lan8814_config_init(struct phy_device *phydev)
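
Each micrel accessor is really one logical operation spread across several MDIO registers (select page, latch address, function select, then data), so the fix takes phy_lock_mdio_bus() around the whole sequence and uses the unlocked __phy_read()/__phy_write() variants inside, preventing another MDIO user from interleaving between the steps. The locking shape (kernel phylib API; the PAGE_* register names are hypothetical placeholders):

    static int read_page_reg(struct phy_device *phydev, int page, u32 addr)
    {
            int data;

            phy_lock_mdio_bus(phydev);  /* one critical section for all steps */
            __phy_write(phydev, PAGE_CTRL, page);
            __phy_write(phydev, PAGE_ADDR, addr);
            __phy_write(phydev, PAGE_CTRL, page | PAGE_FUNC_DATA);
            data = __phy_read(phydev, PAGE_DATA);
            phy_unlock_mdio_bus(phydev);

            return data;
    }
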
+diff --git a/drivers/net/usb/asix.h b/drivers/net/usb/asix.h
+index 2a1e31defe718..4334aafab59a4 100644
+--- a/drivers/net/usb/asix.h
++++ b/drivers/net/usb/asix.h
+@@ -192,8 +192,8 @@ extern const struct driver_info ax88172a_info;
+ /* ASIX specific flags */
+ #define FLAG_EEPROM_MAC (1UL << 0) /* init device MAC from eeprom */
+
+-int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+- u16 size, void *data, int in_pm);
++int __must_check asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
++ u16 size, void *data, int in_pm);
+
+ int asix_write_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+ u16 size, void *data, int in_pm);
+diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
+index 71682970be584..524805285019a 100644
+--- a/drivers/net/usb/asix_common.c
++++ b/drivers/net/usb/asix_common.c
+@@ -11,8 +11,8 @@
+
+ #define AX_HOST_EN_RETRIES 30
+
+-int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+- u16 size, void *data, int in_pm)
++int __must_check asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
++ u16 size, void *data, int in_pm)
+ {
+ int ret;
+ int (*fn)(struct usbnet *, u8, u8, u16, u16, void *, u16);
+@@ -27,9 +27,12 @@ int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
+ ret = fn(dev, cmd, USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ value, index, data, size);
+
+- if (unlikely(ret < 0))
++ if (unlikely(ret < size)) {
++ ret = ret < 0 ? ret : -ENODATA;
++
+ netdev_warn(dev->net, "Failed to read reg index 0x%04x: %d\n",
+ index, ret);
++ }
+
+ return ret;
+ }
+@@ -79,7 +82,7 @@ static int asix_check_host_enable(struct usbnet *dev, int in_pm)
+ 0, 0, 1, &smsr, in_pm);
+ if (ret == -ENODEV)
+ break;
+- else if (ret < sizeof(smsr))
++ else if (ret < 0)
+ continue;
+ else if (smsr & AX_HOST_EN)
+ break;
+@@ -579,8 +582,12 @@ int asix_mdio_read_nopm(struct net_device *netdev, int phy_id, int loc)
+ return ret;
+ }
+
+- asix_read_cmd(dev, AX_CMD_READ_MII_REG, phy_id,
+- (__u16)loc, 2, &res, 1);
++ ret = asix_read_cmd(dev, AX_CMD_READ_MII_REG, phy_id,
++ (__u16)loc, 2, &res, 1);
++ if (ret < 0) {
++ mutex_unlock(&dev->phy_mutex);
++ return ret;
++ }
+ asix_set_hw_mii(dev, 1);
+ mutex_unlock(&dev->phy_mutex);
+
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 4514d35ef4c48..6b2fbdf4e0fde 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -755,7 +755,12 @@ static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf)
+ priv->phy_addr = ret;
+ priv->embd_phy = ((priv->phy_addr & 0x1f) == 0x10);
+
+- asix_read_cmd(dev, AX_CMD_STATMNGSTS_REG, 0, 0, 1, &chipcode, 0);
++ ret = asix_read_cmd(dev, AX_CMD_STATMNGSTS_REG, 0, 0, 1, &chipcode, 0);
++ if (ret < 0) {
++ netdev_dbg(dev->net, "Failed to read STATMNGSTS_REG: %d\n", ret);
++ return ret;
++ }
++
+ chipcode &= AX_CHIPCODE_MASK;
+
+ ret = (chipcode == AX_AX88772_CHIPCODE) ? ax88772_hw_reset(dev, 0) :
+@@ -920,11 +925,21 @@ static int ax88178_reset(struct usbnet *dev)
+ int gpio0 = 0;
+ u32 phyid;
+
+- asix_read_cmd(dev, AX_CMD_READ_GPIOS, 0, 0, 1, &status, 0);
++ ret = asix_read_cmd(dev, AX_CMD_READ_GPIOS, 0, 0, 1, &status, 0);
++ if (ret < 0) {
++ netdev_dbg(dev->net, "Failed to read GPIOS: %d\n", ret);
++ return ret;
++ }
++
+ netdev_dbg(dev->net, "GPIO Status: 0x%04x\n", status);
+
+ asix_write_cmd(dev, AX_CMD_WRITE_ENABLE, 0, 0, 0, NULL, 0);
+- asix_read_cmd(dev, AX_CMD_READ_EEPROM, 0x0017, 0, 2, &eeprom, 0);
++ ret = asix_read_cmd(dev, AX_CMD_READ_EEPROM, 0x0017, 0, 2, &eeprom, 0);
++ if (ret < 0) {
++ netdev_dbg(dev->net, "Failed to read EEPROM: %d\n", ret);
++ return ret;
++ }
++
+ asix_write_cmd(dev, AX_CMD_WRITE_DISABLE, 0, 0, 0, NULL, 0);
+
+ netdev_dbg(dev->net, "EEPROM index 0x17 is 0x%04x\n", eeprom);
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 1de413b19e342..8084e7408c0ae 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -4,6 +4,7 @@
+ */
+
+ #include "queueing.h"
++#include <linux/skb_array.h>
+
+ struct multicore_worker __percpu *
+ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
+@@ -42,7 +43,7 @@ void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
+ {
+ free_percpu(queue->worker);
+ WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
+- ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL);
++ ptr_ring_cleanup(&queue->ring, purge ? __skb_array_destroy_skb : NULL);
+ }
+
+ #define NEXT(skb) ((skb)->prev)
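
The wg_packet_queue_free() one-liner above removes a cast of kfree_skb, which takes a struct sk_buff *, to void (*)(void *): calling a function through a pointer of mismatched type is undefined behavior in C and is rejected at runtime by kernel CFI, and __skb_array_destroy_skb already has the signature ptr_ring_cleanup() wants. The safe idiom in standalone form (plain C, runnable; the types are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    struct item { int v; };

    static void item_free(struct item *it) { free(it); }

    /* matching-signature wrapper instead of casting item_free */
    static void item_destroy(void *ptr)
    {
            item_free(ptr);     /* implicit void* -> struct item* is fine */
    }

    static void cleanup(void **ring, int n, void (*destroy)(void *))
    {
            for (int i = 0; i < n; i++)
                    if (ring[i])
                            destroy(ring[i]);
    }

    int main(void)
    {
            void *ring[2] = { malloc(sizeof(struct item)), NULL };

            cleanup(ring, 2, item_destroy); /* not (void (*)(void *))item_free */
            return 0;
    }
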
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index 6f07b949cb81d..0414d7a6ce741 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -160,6 +160,7 @@ out:
+ rcu_read_unlock_bh();
+ return ret;
+ #else
++ kfree_skb(skb);
+ return -EAFNOSUPPORT;
+ #endif
+ }
+@@ -241,7 +242,7 @@ int wg_socket_endpoint_from_skb(struct endpoint *endpoint,
+ endpoint->addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
+ endpoint->src4.s_addr = ip_hdr(skb)->daddr;
+ endpoint->src_if4 = skb->skb_iif;
+- } else if (skb->protocol == htons(ETH_P_IPV6)) {
++ } else if (IS_ENABLED(CONFIG_IPV6) && skb->protocol == htons(ETH_P_IPV6)) {
+ endpoint->addr6.sin6_family = AF_INET6;
+ endpoint->addr6.sin6_port = udp_hdr(skb)->source;
+ endpoint->addr6.sin6_addr = ipv6_hdr(skb)->saddr;
+@@ -284,7 +285,7 @@ void wg_socket_set_peer_endpoint(struct wg_peer *peer,
+ peer->endpoint.addr4 = endpoint->addr4;
+ peer->endpoint.src4 = endpoint->src4;
+ peer->endpoint.src_if4 = endpoint->src_if4;
+- } else if (endpoint->addr.sa_family == AF_INET6) {
++ } else if (IS_ENABLED(CONFIG_IPV6) && endpoint->addr.sa_family == AF_INET6) {
+ peer->endpoint.addr6 = endpoint->addr6;
+ peer->endpoint.src6 = endpoint->src6;
+ } else {
+diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
+index 9513ab696fff1..f79dd9a716906 100644
+--- a/drivers/net/wireless/ath/ath10k/snoc.c
++++ b/drivers/net/wireless/ath/ath10k/snoc.c
+@@ -1556,11 +1556,11 @@ static int ath10k_setup_msa_resources(struct ath10k *ar, u32 msa_size)
+ node = of_parse_phandle(dev->of_node, "memory-region", 0);
+ if (node) {
+ ret = of_address_to_resource(node, 0, &r);
++ of_node_put(node);
+ if (ret) {
+ dev_err(dev, "failed to resolve msa fixed region\n");
+ return ret;
+ }
+- of_node_put(node);
+
+ ar->msa.paddr = r.start;
+ ar->msa.mem_size = resource_size(&r);
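
This ath10k hunk, and the identical ath11k mhi.c and qmi.c hunks further down, all move of_node_put() to directly after the last use of the node, so the reference is also dropped when of_address_to_resource() fails; previously the early error return leaked it. The corrected shape (kernel OF API; illustrative fragment):

    node = of_parse_phandle(dev->of_node, "memory-region", 0);
    if (node) {
            ret = of_address_to_resource(node, 0, &r);
            of_node_put(node);  /* drop the ref on success and failure alike */
            if (ret)
                    return ret; /* the old ordering leaked the node here */
            /* ... use r ... */
    }
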
+diff --git a/drivers/net/wireless/ath/ath10k/wow.c b/drivers/net/wireless/ath/ath10k/wow.c
+index 7d65c115669fe..20b9aa8ddf7d5 100644
+--- a/drivers/net/wireless/ath/ath10k/wow.c
++++ b/drivers/net/wireless/ath/ath10k/wow.c
+@@ -337,14 +337,15 @@ static int ath10k_vif_wow_set_wakeups(struct ath10k_vif *arvif,
+ if (patterns[i].mask[j / 8] & BIT(j % 8))
+ bitmask[j] = 0xff;
+ old_pattern.mask = bitmask;
+- new_pattern = old_pattern;
+
+ if (ar->wmi.rx_decap_mode == ATH10K_HW_TXRX_NATIVE_WIFI) {
+- if (patterns[i].pkt_offset < ETH_HLEN)
++ if (patterns[i].pkt_offset < ETH_HLEN) {
+ ath10k_wow_convert_8023_to_80211(&new_pattern,
+ &old_pattern);
+- else
++ } else {
++ new_pattern = old_pattern;
+ new_pattern.pkt_offset += WOW_HDR_LEN - ETH_HLEN;
++ }
+ }
+
+ if (WARN_ON(new_pattern.pattern_len > WOW_MAX_PATTERN_SIZE))
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index c212a789421ee..e432f8dc05d61 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2642,9 +2642,9 @@ int ath11k_dp_process_rx(struct ath11k_base *ab, int ring_id,
+
+ spin_lock_bh(&srng->lock);
+
++try_again:
+ ath11k_hal_srng_access_begin(ab, srng);
+
+-try_again:
+ while (likely(desc =
+ (struct hal_reo_dest_ring *)ath11k_hal_srng_dst_get_next_entry(ab,
+ srng))) {
+diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c
+index 91d6244b65435..8402961c66887 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_tx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c
+@@ -426,7 +426,7 @@ void ath11k_dp_tx_update_txcompl(struct ath11k *ar, struct hal_tx_status *ts)
+ struct ath11k_sta *arsta;
+ struct ieee80211_sta *sta;
+ u16 rate, ru_tones;
+- u8 mcs, rate_idx, ofdma;
++ u8 mcs, rate_idx = 0, ofdma;
+ int ret;
+
+ spin_lock_bh(&ab->base_lock);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 07f499d5ec92b..08e33778f63b8 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -2319,6 +2319,9 @@ static void ath11k_peer_assoc_h_he_6ghz(struct ath11k *ar,
+ if (!arg->he_flag || band != NL80211_BAND_6GHZ || !sta->he_6ghz_capa.capa)
+ return;
+
++ if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
++ arg->bw_40 = true;
++
+ if (sta->bandwidth == IEEE80211_STA_RX_BW_80)
+ arg->bw_80 = true;
+
+@@ -4504,24 +4507,30 @@ static int ath11k_mac_op_sta_state(struct ieee80211_hw *hw,
+ sta->addr, arvif->vdev_id);
+ } else if ((old_state == IEEE80211_STA_NONE &&
+ new_state == IEEE80211_STA_NOTEXIST)) {
+- ath11k_dp_peer_cleanup(ar, arvif->vdev_id, sta->addr);
++ bool skip_peer_delete = ar->ab->hw_params.vdev_start_delay &&
++ vif->type == NL80211_IFTYPE_STATION;
+
+- if (ar->ab->hw_params.vdev_start_delay &&
+- vif->type == NL80211_IFTYPE_STATION)
+- goto free;
++ ath11k_dp_peer_cleanup(ar, arvif->vdev_id, sta->addr);
+
+- ret = ath11k_peer_delete(ar, arvif->vdev_id, sta->addr);
+- if (ret)
+- ath11k_warn(ar->ab, "Failed to delete peer: %pM for VDEV: %d\n",
+- sta->addr, arvif->vdev_id);
+- else
+- ath11k_dbg(ar->ab, ATH11K_DBG_MAC, "Removed peer: %pM for VDEV: %d\n",
+- sta->addr, arvif->vdev_id);
++ if (!skip_peer_delete) {
++ ret = ath11k_peer_delete(ar, arvif->vdev_id, sta->addr);
++ if (ret)
++ ath11k_warn(ar->ab,
++ "Failed to delete peer: %pM for VDEV: %d\n",
++ sta->addr, arvif->vdev_id);
++ else
++ ath11k_dbg(ar->ab,
++ ATH11K_DBG_MAC,
++ "Removed peer: %pM for VDEV: %d\n",
++ sta->addr, arvif->vdev_id);
++ }
+
+ ath11k_mac_dec_num_stations(arvif, sta);
+ spin_lock_bh(&ar->ab->base_lock);
+ peer = ath11k_peer_find(ar->ab, arvif->vdev_id, sta->addr);
+- if (peer && peer->sta == sta) {
++ if (skip_peer_delete && peer) {
++ peer->sta = NULL;
++ } else if (peer && peer->sta == sta) {
+ ath11k_warn(ar->ab, "Found peer entry %pM n vdev %i after it was supposedly removed\n",
+ vif->addr, arvif->vdev_id);
+ peer->sta = NULL;
+@@ -4531,7 +4540,6 @@ static int ath11k_mac_op_sta_state(struct ieee80211_hw *hw,
+ }
+ spin_unlock_bh(&ar->ab->base_lock);
+
+-free:
+ kfree(arsta->tx_stats);
+ arsta->tx_stats = NULL;
+
+diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c
+index e4250ba8dfee2..cccaa348cf212 100644
+--- a/drivers/net/wireless/ath/ath11k/mhi.c
++++ b/drivers/net/wireless/ath/ath11k/mhi.c
+@@ -332,6 +332,7 @@ static int ath11k_mhi_read_addr_from_dt(struct mhi_controller *mhi_ctrl)
+ return -ENOENT;
+
+ ret = of_address_to_resource(np, 0, &res);
++ of_node_put(np);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index 65d3c6ba35ae6..d0701e8eca9c0 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -1932,10 +1932,11 @@ static int ath11k_qmi_assign_target_mem_chunk(struct ath11k_base *ab)
+ if (!hremote_node) {
+ ath11k_dbg(ab, ATH11K_DBG_QMI,
+ "qmi fail to get hremote_node\n");
+- return ret;
++ return -ENODEV;
+ }
+
+ ret = of_address_to_resource(hremote_node, 0, &res);
++ of_node_put(hremote_node);
+ if (ret) {
+ ath11k_dbg(ab, ATH11K_DBG_QMI,
+ "qmi fail to get reg from hremote\n");
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index 510e61e97dbcb..994ec48b2f669 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -30,6 +30,7 @@ static int htc_issue_send(struct htc_target *target, struct sk_buff* skb,
+ hdr->endpoint_id = epid;
+ hdr->flags = flags;
+ hdr->payload_len = cpu_to_be16(len);
++ memset(hdr->control, 0, sizeof(hdr->control));
+
+ status = target->hif->send(target->hif_dev, endpoint->ul_pipeid, skb);
+
+@@ -272,6 +273,10 @@ int htc_connect_service(struct htc_target *target,
+ conn_msg->dl_pipeid = endpoint->dl_pipeid;
+ conn_msg->ul_pipeid = endpoint->ul_pipeid;
+
++ /* To prevent infoleak */
++ conn_msg->svc_meta_len = 0;
++ conn_msg->pad = 0;
++
+ ret = htc_issue_send(target, skb, skb->len, 0, ENDPOINT0);
+ if (ret)
+ goto err;
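
Both ath9k hunks above are info-leak fixes: the HTC frame header and the service-connect message are assembled in skb memory that is never zeroed, so fields the driver doesn't explicitly set (hdr->control, svc_meta_len, pad) would go out to the device as stale heap bytes. The rule is that every byte of a message must be defined before it leaves the kernel; in miniature (plain C, runnable; the struct is a simplified stand-in for the connect message):

    #include <stdio.h>
    #include <string.h>

    struct conn_msg {                   /* stand-in for htc_conn_svc_msg */
            unsigned short service_id;
            unsigned short con_flags;
            unsigned char  dl_pipeid;
            unsigned char  ul_pipeid;
            unsigned char  svc_meta_len; /* never set by the driver... */
            unsigned char  pad;          /* ...nor this: both must be zeroed */
    };

    int main(void)
    {
            struct conn_msg msg;        /* imagine this is fresh skb memory */

            memset(&msg, 0, sizeof(msg)); /* every byte defined before send */
            msg.service_id = 0x0100;
            msg.dl_pipeid = 3;
            msg.ul_pipeid = 4;

            for (size_t i = 0; i < sizeof(msg); i++)
                    printf("%02x ", ((unsigned char *)&msg)[i]);
            printf("\n");
            return 0;
    }
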
+diff --git a/drivers/net/wireless/ath/carl9170/main.c b/drivers/net/wireless/ath/carl9170/main.c
+index 49f7ee1c912b8..2208ec8004821 100644
+--- a/drivers/net/wireless/ath/carl9170/main.c
++++ b/drivers/net/wireless/ath/carl9170/main.c
+@@ -1914,7 +1914,7 @@ static int carl9170_parse_eeprom(struct ar9170 *ar)
+ WARN_ON(!(tx_streams >= 1 && tx_streams <=
+ IEEE80211_HT_MCS_TX_MAX_STREAMS));
+
+- tx_params = (tx_streams - 1) <<
++ tx_params |= (tx_streams - 1) <<
+ IEEE80211_HT_MCS_TX_MAX_STREAMS_SHIFT;
+
+ carl9170_band_2GHz.ht_cap.mcs.tx_params |= tx_params;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+index d99140960a820..dcbe55b56e437 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+@@ -207,6 +207,8 @@ static int brcmf_init_nvram_parser(struct nvram_parser *nvp,
+ size = BRCMF_FW_MAX_NVRAM_SIZE;
+ else
+ size = data_len;
++ /* Add space for properties we may add */
++ size += strlen(BRCMF_FW_DEFAULT_BOARDREV) + 1;
+ /* Alloc for extra 0 byte + roundup by 4 + length field */
+ size += 1 + 3 + sizeof(u32);
+ nvp->nvram = kzalloc(size, GFP_KERNEL);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+index 8b149996fc000..3ff4997e1c97a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+@@ -12,6 +12,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/bcma/bcma.h>
+ #include <linux/sched.h>
++#include <linux/io.h>
+ #include <asm/unaligned.h>
+
+ #include <soc.h>
+@@ -59,6 +60,13 @@ BRCMF_FW_DEF(4366B, "brcmfmac4366b-pcie");
+ BRCMF_FW_DEF(4366C, "brcmfmac4366c-pcie");
+ BRCMF_FW_DEF(4371, "brcmfmac4371-pcie");
+
++/* firmware config files */
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.txt");
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.txt");
++
++/* per-board firmware binaries */
++MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.bin");
++
+ static const struct brcmf_firmware_mapping brcmf_pcie_fwnames[] = {
+ BRCMF_FW_ENTRY(BRCM_CC_43602_CHIP_ID, 0xFFFFFFFF, 43602),
+ BRCMF_FW_ENTRY(BRCM_CC_43465_CHIP_ID, 0xFFFFFFF0, 4366C),
+@@ -447,47 +455,6 @@ brcmf_pcie_write_ram32(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+ }
+
+
+-static void
+-brcmf_pcie_copy_mem_todev(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+- void *srcaddr, u32 len)
+-{
+- void __iomem *address = devinfo->tcm + mem_offset;
+- __le32 *src32;
+- __le16 *src16;
+- u8 *src8;
+-
+- if (((ulong)address & 4) || ((ulong)srcaddr & 4) || (len & 4)) {
+- if (((ulong)address & 2) || ((ulong)srcaddr & 2) || (len & 2)) {
+- src8 = (u8 *)srcaddr;
+- while (len) {
+- iowrite8(*src8, address);
+- address++;
+- src8++;
+- len--;
+- }
+- } else {
+- len = len / 2;
+- src16 = (__le16 *)srcaddr;
+- while (len) {
+- iowrite16(le16_to_cpu(*src16), address);
+- address += 2;
+- src16++;
+- len--;
+- }
+- }
+- } else {
+- len = len / 4;
+- src32 = (__le32 *)srcaddr;
+- while (len) {
+- iowrite32(le32_to_cpu(*src32), address);
+- address += 4;
+- src32++;
+- len--;
+- }
+- }
+-}
+-
+-
+ static void
+ brcmf_pcie_copy_dev_tomem(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
+ void *dstaddr, u32 len)
+@@ -1348,6 +1315,18 @@ static void brcmf_pcie_down(struct device *dev)
+ {
+ }
+
++static int brcmf_pcie_preinit(struct device *dev)
++{
++ struct brcmf_bus *bus_if = dev_get_drvdata(dev);
++ struct brcmf_pciedev *buspub = bus_if->bus_priv.pcie;
++
++ brcmf_dbg(PCIE, "Enter\n");
++
++ brcmf_pcie_intr_enable(buspub->devinfo);
++ brcmf_pcie_hostready(buspub->devinfo);
++
++ return 0;
++}
+
+ static int brcmf_pcie_tx(struct device *dev, struct sk_buff *skb)
+ {
+@@ -1456,6 +1435,7 @@ static int brcmf_pcie_reset(struct device *dev)
+ }
+
+ static const struct brcmf_bus_ops brcmf_pcie_bus_ops = {
++ .preinit = brcmf_pcie_preinit,
+ .txdata = brcmf_pcie_tx,
+ .stop = brcmf_pcie_down,
+ .txctl = brcmf_pcie_tx_ctlpkt,
+@@ -1563,8 +1543,8 @@ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ return err;
+
+ brcmf_dbg(PCIE, "Download FW %s\n", devinfo->fw_name);
+- brcmf_pcie_copy_mem_todev(devinfo, devinfo->ci->rambase,
+- (void *)fw->data, fw->size);
++ memcpy_toio(devinfo->tcm + devinfo->ci->rambase,
++ (void *)fw->data, fw->size);
+
+ resetintr = get_unaligned_le32(fw->data);
+ release_firmware(fw);
+@@ -1578,7 +1558,7 @@ static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo,
+ brcmf_dbg(PCIE, "Download NVRAM %s\n", devinfo->nvram_name);
+ address = devinfo->ci->rambase + devinfo->ci->ramsize -
+ nvram_len;
+- brcmf_pcie_copy_mem_todev(devinfo, address, nvram, nvram_len);
++ memcpy_toio(devinfo->tcm + address, nvram, nvram_len);
+ brcmf_fw_nvram_free(nvram);
+ } else {
+ brcmf_dbg(PCIE, "No matching NVRAM file found %s\n",
+@@ -1777,6 +1757,8 @@ static void brcmf_pcie_setup(struct device *dev, int ret,
+ ret = brcmf_chip_get_raminfo(devinfo->ci);
+ if (ret) {
+ brcmf_err(bus, "Failed to get RAM info\n");
++ release_firmware(fw);
++ brcmf_fw_nvram_free(nvram);
+ goto fail;
+ }
+
+@@ -1826,9 +1808,6 @@ static void brcmf_pcie_setup(struct device *dev, int ret,
+
+ init_waitqueue_head(&devinfo->mbdata_resp_wait);
+
+- brcmf_pcie_intr_enable(devinfo);
+- brcmf_pcie_hostready(devinfo);
+-
+ ret = brcmf_attach(&devinfo->pdev->dev);
+ if (ret)
+ goto fail;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index 8effeb7a7269b..5d156e591b35c 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -629,7 +629,6 @@ BRCMF_FW_CLM_DEF(43752, "brcmfmac43752-sdio");
+
+ /* firmware config files */
+ MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-sdio.*.txt");
+-MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-pcie.*.txt");
+
+ /* per-board firmware binaries */
+ MODULE_FIRMWARE(BRCMF_FW_DEFAULT_PATH "brcmfmac*-sdio.*.bin");
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
+index 754876cd27ce8..e8bd4f0e3d2dc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
+@@ -299,7 +299,7 @@ static int iwlagn_mac_start(struct ieee80211_hw *hw)
+
+ priv->is_open = 1;
+ IWL_DEBUG_MAC80211(priv, "leave\n");
+- return 0;
++ return ret;
+ }
+
+ static void iwlagn_mac_stop(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index 7ad9cee925da5..372cc950cc884 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -1561,8 +1561,6 @@ iwl_dump_ini_dbgi_sram_iter(struct iwl_fw_runtime *fwrt,
+ return -EBUSY;
+
+ range->range_data_size = reg->dev_addr.size;
+- iwl_write_prph_no_grab(fwrt->trans, DBGI_SRAM_TARGET_ACCESS_CFG,
+- DBGI_SRAM_TARGET_ACCESS_CFG_RESET_ADDRESS_MSK);
+ for (i = 0; i < (le32_to_cpu(reg->dev_addr.size) / 4); i++) {
+ prph_data = iwl_read_prph_no_grab(fwrt->trans, (i % 2) ?
+ DBGI_SRAM_TARGET_ACCESS_RDATA_MSB :
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index c73672d613562..42f6f8bb83be9 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -861,11 +861,18 @@ static void iwl_dbg_tlv_apply_config(struct iwl_fw_runtime *fwrt,
+ case IWL_FW_INI_CONFIG_SET_TYPE_DBGC_DRAM_ADDR: {
+ struct iwl_dbgc1_info dram_info = {};
+ struct iwl_dram_data *frags = &fwrt->trans->dbg.fw_mon_ini[1].frags[0];
+- __le64 dram_base_addr = cpu_to_le64(frags->physical);
+- __le32 dram_size = cpu_to_le32(frags->size);
+- u64 dram_addr = le64_to_cpu(dram_base_addr);
++ __le64 dram_base_addr;
++ __le32 dram_size;
++ u64 dram_addr;
+ u32 ret;
+
++ if (!frags)
++ break;
++
++ dram_base_addr = cpu_to_le64(frags->physical);
++ dram_size = cpu_to_le32(frags->size);
++ dram_addr = le64_to_cpu(dram_base_addr);
++
+ IWL_DEBUG_FW(fwrt, "WRT: dram_base_addr 0x%016llx, dram_size 0x%x\n",
+ dram_base_addr, dram_size);
+ IWL_DEBUG_FW(fwrt, "WRT: config_list->addr_offset: %u\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+index 95b3dae7b504b..9331a6b6bf36c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+@@ -354,8 +354,6 @@
+ #define WFPM_GP2 0xA030B4
+
+ /* DBGI SRAM Register details */
+-#define DBGI_SRAM_TARGET_ACCESS_CFG 0x00A2E14C
+-#define DBGI_SRAM_TARGET_ACCESS_CFG_RESET_ADDRESS_MSK 0x10000
+ #define DBGI_SRAM_TARGET_ACCESS_RDATA_LSB 0x00A2E154
+ #define DBGI_SRAM_TARGET_ACCESS_RDATA_MSB 0x00A2E158
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index b400867e94f0a..3f284836e7076 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2704,7 +2704,9 @@ static int iwl_mvm_d3_test_open(struct inode *inode, struct file *file)
+
+ /* start pseudo D3 */
+ rtnl_lock();
++ wiphy_lock(mvm->hw->wiphy);
+ err = __iwl_mvm_suspend(mvm->hw, mvm->hw->wiphy->wowlan_config, true);
++ wiphy_unlock(mvm->hw->wiphy);
+ rtnl_unlock();
+ if (err > 0)
+ err = -EINVAL;
+@@ -2760,7 +2762,9 @@ static int iwl_mvm_d3_test_release(struct inode *inode, struct file *file)
+ iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt);
+
+ rtnl_lock();
++ wiphy_lock(mvm->hw->wiphy);
+ __iwl_mvm_resume(mvm, true);
++ wiphy_unlock(mvm->hw->wiphy);
+ rtnl_unlock();
+
+ iwl_mvm_resume_tcm(mvm);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index ae589b3b8c46e..ee031a5897140 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1658,8 +1658,10 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ while (!sband && i < NUM_NL80211_BANDS)
+ sband = mvm->hw->wiphy->bands[i++];
+
+- if (WARN_ON_ONCE(!sband))
++ if (WARN_ON_ONCE(!sband)) {
++ ret = -ENODEV;
+ goto error;
++ }
+
+ chan = &sband->channels[0];
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 1f8b97995b943..069d54501e30e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -235,7 +235,8 @@ static void iwl_mvm_rx_thermal_dual_chain_req(struct iwl_mvm *mvm,
+ */
+ mvm->fw_static_smps_request =
+ req->event == cpu_to_le32(THERMAL_DUAL_CHAIN_REQ_DISABLE);
+- ieee80211_iterate_interfaces(mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
++ ieee80211_iterate_interfaces(mvm->hw,
++ IEEE80211_IFACE_SKIP_SDATA_NOT_IN_DRIVER,
+ iwl_mvm_intf_dual_chain_req, NULL);
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+index 64446a11ef980..9a46468bd4345 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+@@ -640,7 +640,7 @@ static void iwl_mvm_stat_iterator_all_macs(void *_data, u8 *mac,
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ u16 vif_id = mvmvif->id;
+
+- if (WARN_ONCE(vif_id > MAC_INDEX_AUX, "invalid vif id: %d", vif_id))
++ if (WARN_ONCE(vif_id >= MAC_INDEX_AUX, "invalid vif id: %d", vif_id))
+ return;
+
+ if (vif->type != NL80211_IFTYPE_STATION)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 9213f8518f10d..40daced97b9e8 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -318,15 +318,14 @@ static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm,
+
+ /* info->control is only relevant for non HW rate control */
+ if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {
+- struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+-
+ /* HT rate doesn't make sense for a non data frame */
+ WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS &&
+ !ieee80211_is_data(fc),
+ "Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n",
+ info->control.rates[0].flags,
+ info->control.rates[0].idx,
+- le16_to_cpu(fc), sta ? mvmsta->sta_state : -1);
++ le16_to_cpu(fc),
++ sta ? iwl_mvm_sta_from_mac80211(sta)->sta_state : -1);
+
+ rate_idx = info->control.rates[0].idx;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index ef14584fc0a17..4b08eb46617c7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1112,7 +1112,7 @@ static const struct iwl_causes_list causes_list_pre_bz[] = {
+ };
+
+ static const struct iwl_causes_list causes_list_bz[] = {
+- {MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ, CSR_MSIX_HW_INT_MASK_AD, 0x29},
++ {MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ, CSR_MSIX_HW_INT_MASK_AD, 0x15},
+ };
+
+ static void iwl_pcie_map_list(struct iwl_trans *trans,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 404c3d1a70d69..1f6f7a44d3f00 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -224,7 +224,7 @@ enum mt76_wcid_flags {
+ MT_WCID_FLAG_HDR_TRANS,
+ };
+
+-#define MT76_N_WCIDS 288
++#define MT76_N_WCIDS 544
+
+ /* stored in ieee80211_tx_info::hw_queue */
+ #define MT_TX_HW_QUEUE_EXT_PHY BIT(3)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/main.c b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+index 2b546bc05d822..83c5eec5b1633 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+@@ -641,6 +641,9 @@ mt7603_sta_rate_tbl_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_sta_rates *sta_rates = rcu_dereference(sta->rates);
+ int i;
+
++ if (!sta_rates)
++ return;
++
+ spin_lock_bh(&dev->mt76.lock);
+ for (i = 0; i < ARRAY_SIZE(msta->rates); i++) {
+ msta->rates[i].idx = sta_rates->rate[i].idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index ec25e5a95d442..ba31bb7caaf90 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -253,13 +253,13 @@ static void mt7615_mac_fill_tm_rx(struct mt7615_phy *phy, __le32 *rxv)
+ static int mt7615_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+ {
+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
++ struct ethhdr *eth_hdr = (struct ethhdr *)(skb->data + hdr_gap);
+ struct mt7615_sta *msta = (struct mt7615_sta *)status->wcid;
++ __le32 *rxd = (__le32 *)skb->data;
+ struct ieee80211_sta *sta;
+ struct ieee80211_vif *vif;
+ struct ieee80211_hdr hdr;
+- struct ethhdr eth_hdr;
+- __le32 *rxd = (__le32 *)skb->data;
+- __le32 qos_ctrl, ht_ctrl;
++ u16 frame_control;
+
+ if (FIELD_GET(MT_RXD1_NORMAL_ADDR_TYPE, le32_to_cpu(rxd[1])) !=
+ MT_RXD1_NORMAL_U2M)
+@@ -275,47 +275,53 @@ static int mt7615_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+ vif = container_of((void *)msta->vif, struct ieee80211_vif, drv_priv);
+
+ /* store the info from RXD and ethhdr to avoid being overridden */
+- memcpy(&eth_hdr, skb->data + hdr_gap, sizeof(eth_hdr));
+- hdr.frame_control = FIELD_GET(MT_RXD4_FRAME_CONTROL, rxd[4]);
+- hdr.seq_ctrl = FIELD_GET(MT_RXD6_SEQ_CTRL, rxd[6]);
+- qos_ctrl = FIELD_GET(MT_RXD6_QOS_CTL, rxd[6]);
+- ht_ctrl = FIELD_GET(MT_RXD7_HT_CONTROL, rxd[7]);
+-
++ frame_control = le32_get_bits(rxd[4], MT_RXD4_FRAME_CONTROL);
++ hdr.frame_control = cpu_to_le16(frame_control);
++ hdr.seq_ctrl = cpu_to_le16(le32_get_bits(rxd[6], MT_RXD6_SEQ_CTRL));
+ hdr.duration_id = 0;
++
+ ether_addr_copy(hdr.addr1, vif->addr);
+ ether_addr_copy(hdr.addr2, sta->addr);
+- switch (le16_to_cpu(hdr.frame_control) &
+- (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) {
++ switch (frame_control & (IEEE80211_FCTL_TODS |
++ IEEE80211_FCTL_FROMDS)) {
+ case 0:
+ ether_addr_copy(hdr.addr3, vif->bss_conf.bssid);
+ break;
+ case IEEE80211_FCTL_FROMDS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_source);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_source);
+ break;
+ case IEEE80211_FCTL_TODS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_dest);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
+ break;
+ case IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_dest);
+- ether_addr_copy(hdr.addr4, eth_hdr.h_source);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
++ ether_addr_copy(hdr.addr4, eth_hdr->h_source);
+ break;
+ default:
+ break;
+ }
+
+ skb_pull(skb, hdr_gap + sizeof(struct ethhdr) - 2);
+- if (eth_hdr.h_proto == htons(ETH_P_AARP) ||
+- eth_hdr.h_proto == htons(ETH_P_IPX))
++ if (eth_hdr->h_proto == cpu_to_be16(ETH_P_AARP) ||
++ eth_hdr->h_proto == cpu_to_be16(ETH_P_IPX))
+ ether_addr_copy(skb_push(skb, ETH_ALEN), bridge_tunnel_header);
+- else if (eth_hdr.h_proto >= htons(ETH_P_802_3_MIN))
++ else if (be16_to_cpu(eth_hdr->h_proto) >= ETH_P_802_3_MIN)
+ ether_addr_copy(skb_push(skb, ETH_ALEN), rfc1042_header);
+ else
+ skb_pull(skb, 2);
+
+ if (ieee80211_has_order(hdr.frame_control))
+- memcpy(skb_push(skb, 2), &ht_ctrl, 2);
+- if (ieee80211_is_data_qos(hdr.frame_control))
+- memcpy(skb_push(skb, 2), &qos_ctrl, 2);
++ memcpy(skb_push(skb, IEEE80211_HT_CTL_LEN), &rxd[7],
++ IEEE80211_HT_CTL_LEN);
++
++ if (ieee80211_is_data_qos(hdr.frame_control)) {
++ __le16 qos_ctrl;
++
++ qos_ctrl = cpu_to_le16(le32_get_bits(rxd[6], MT_RXD6_QOS_CTL));
++ memcpy(skb_push(skb, IEEE80211_QOS_CTL_LEN), &qos_ctrl,
++ IEEE80211_QOS_CTL_LEN);
++ }
++
+ if (ieee80211_has_a4(hdr.frame_control))
+ memcpy(skb_push(skb, sizeof(hdr)), &hdr, sizeof(hdr));
+ else
+@@ -2103,6 +2109,14 @@ void mt7615_pm_power_save_work(struct work_struct *work)
+ test_bit(MT76_HW_SCHED_SCANNING, &dev->mphy.state))
+ goto out;
+
++ if (mutex_is_locked(&dev->mt76.mutex))
++ /* if mt76 mutex is held we should not put the device
++ * to sleep since we are currently accessing device
++ * register map. We need to wait for the next power_save
++ * trigger.
++ */
++ goto out;
++
+ if (time_is_after_jiffies(dev->pm.last_activity + delta)) {
+ delta = dev->pm.last_activity + delta - jiffies;
+ goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 82d625a16a62c..ce902b107ce33 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -683,6 +683,9 @@ static void mt7615_sta_rate_tbl_update(struct ieee80211_hw *hw,
+ struct ieee80211_sta_rates *sta_rates = rcu_dereference(sta->rates);
+ int i;
+
++ if (!sta_rates)
++ return;
++
+ spin_lock_bh(&dev->mt76.lock);
+ for (i = 0; i < ARRAY_SIZE(msta->rates); i++) {
+ msta->rates[i].idx = sta_rates->rate[i].idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index f79e3d5084f39..5664f119447bc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -310,7 +310,7 @@ mt76_connac_mcu_alloc_wtbl_req(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ }
+
+ if (sta_hdr)
+- sta_hdr->len = cpu_to_le16(sizeof(hdr));
++ le16_add_cpu(&sta_hdr->len, sizeof(hdr));
+
+ return skb_put_data(nskb, &hdr, sizeof(hdr));
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 5baf8370b7bd8..93c783a3af7c5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -996,7 +996,8 @@ enum {
+ MCU_CE_CMD_SET_BSS_CONNECTED = 0x16,
+ MCU_CE_CMD_SET_BSS_ABORT = 0x17,
+ MCU_CE_CMD_CANCEL_HW_SCAN = 0x1b,
+- MCU_CE_CMD_SET_ROC = 0x1d,
++ MCU_CE_CMD_SET_ROC = 0x1c,
++ MCU_CE_CMD_SET_EDCA_PARMS = 0x1d,
+ MCU_CE_CMD_SET_P2P_OPPPS = 0x33,
+ MCU_CE_CMD_SET_RATE_TX_POWER = 0x5d,
+ MCU_CE_CMD_SCHED_SCAN_ENABLE = 0x61,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index d054cdecd5f70..29517ca08de0c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -399,7 +399,7 @@ static void mt7915_mac_init(struct mt7915_dev *dev)
+ /* enable hardware de-agg */
+ mt76_set(dev, MT_MDP_DCR0, MT_MDP_DCR0_DAMSDU_EN);
+
+- for (i = 0; i < MT7915_WTBL_SIZE; i++)
++ for (i = 0; i < mt7915_wtbl_size(dev); i++)
+ mt7915_mac_wtbl_update(dev, i,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+ for (i = 0; i < 2; i++)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index 48f1155022823..db267642924d0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -391,13 +391,13 @@ mt7915_mac_decode_he_radiotap(struct sk_buff *skb, __le32 *rxv, u32 mode)
+ static int mt7915_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+ {
+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
++ struct ethhdr *eth_hdr = (struct ethhdr *)(skb->data + hdr_gap);
+ struct mt7915_sta *msta = (struct mt7915_sta *)status->wcid;
++ __le32 *rxd = (__le32 *)skb->data;
+ struct ieee80211_sta *sta;
+ struct ieee80211_vif *vif;
+ struct ieee80211_hdr hdr;
+- struct ethhdr eth_hdr;
+- __le32 *rxd = (__le32 *)skb->data;
+- __le32 qos_ctrl, ht_ctrl;
++ u16 frame_control;
+
+ if (FIELD_GET(MT_RXD3_NORMAL_ADDR_TYPE, le32_to_cpu(rxd[3])) !=
+ MT_RXD3_NORMAL_U2M)
+@@ -413,47 +413,52 @@ static int mt7915_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+ vif = container_of((void *)msta->vif, struct ieee80211_vif, drv_priv);
+
+ /* store the info from RXD and ethhdr to avoid being overridden */
+- memcpy(&eth_hdr, skb->data + hdr_gap, sizeof(eth_hdr));
+- hdr.frame_control = FIELD_GET(MT_RXD6_FRAME_CONTROL, rxd[6]);
+- hdr.seq_ctrl = FIELD_GET(MT_RXD8_SEQ_CTRL, rxd[8]);
+- qos_ctrl = FIELD_GET(MT_RXD8_QOS_CTL, rxd[8]);
+- ht_ctrl = FIELD_GET(MT_RXD9_HT_CONTROL, rxd[9]);
+-
++ frame_control = le32_get_bits(rxd[6], MT_RXD6_FRAME_CONTROL);
++ hdr.frame_control = cpu_to_le16(frame_control);
++ hdr.seq_ctrl = cpu_to_le16(le32_get_bits(rxd[8], MT_RXD8_SEQ_CTRL));
+ hdr.duration_id = 0;
++
+ ether_addr_copy(hdr.addr1, vif->addr);
+ ether_addr_copy(hdr.addr2, sta->addr);
+- switch (le16_to_cpu(hdr.frame_control) &
+- (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) {
++ switch (frame_control & (IEEE80211_FCTL_TODS |
++ IEEE80211_FCTL_FROMDS)) {
+ case 0:
+ ether_addr_copy(hdr.addr3, vif->bss_conf.bssid);
+ break;
+ case IEEE80211_FCTL_FROMDS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_source);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_source);
+ break;
+ case IEEE80211_FCTL_TODS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_dest);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
+ break;
+ case IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_dest);
+- ether_addr_copy(hdr.addr4, eth_hdr.h_source);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
++ ether_addr_copy(hdr.addr4, eth_hdr->h_source);
+ break;
+ default:
+ break;
+ }
+
+ skb_pull(skb, hdr_gap + sizeof(struct ethhdr) - 2);
+- if (eth_hdr.h_proto == htons(ETH_P_AARP) ||
+- eth_hdr.h_proto == htons(ETH_P_IPX))
++ if (eth_hdr->h_proto == cpu_to_be16(ETH_P_AARP) ||
++ eth_hdr->h_proto == cpu_to_be16(ETH_P_IPX))
+ ether_addr_copy(skb_push(skb, ETH_ALEN), bridge_tunnel_header);
+- else if (eth_hdr.h_proto >= htons(ETH_P_802_3_MIN))
++ else if (be16_to_cpu(eth_hdr->h_proto) >= ETH_P_802_3_MIN)
+ ether_addr_copy(skb_push(skb, ETH_ALEN), rfc1042_header);
+ else
+ skb_pull(skb, 2);
+
+ if (ieee80211_has_order(hdr.frame_control))
+- memcpy(skb_push(skb, 2), &ht_ctrl, 2);
+- if (ieee80211_is_data_qos(hdr.frame_control))
+- memcpy(skb_push(skb, 2), &qos_ctrl, 2);
++ memcpy(skb_push(skb, IEEE80211_HT_CTL_LEN), &rxd[9],
++ IEEE80211_HT_CTL_LEN);
++ if (ieee80211_is_data_qos(hdr.frame_control)) {
++ __le16 qos_ctrl;
++
++ qos_ctrl = cpu_to_le16(le32_get_bits(rxd[8], MT_RXD8_QOS_CTL));
++ memcpy(skb_push(skb, IEEE80211_QOS_CTL_LEN), &qos_ctrl,
++ IEEE80211_QOS_CTL_LEN);
++ }
++
+ if (ieee80211_has_a4(hdr.frame_control))
+ memcpy(skb_push(skb, sizeof(hdr)), &hdr, sizeof(hdr));
+ else
+@@ -1512,7 +1517,6 @@ mt7915_mac_add_txs_skb(struct mt7915_dev *dev, struct mt76_wcid *wcid, int pid,
+ break;
+ case MT_PHY_TYPE_HT:
+ case MT_PHY_TYPE_HT_GF:
+- rate.mcs += (rate.nss - 1) * 8;
+ if (rate.mcs > 31)
+ goto out;
+
+@@ -1594,7 +1598,7 @@ static void mt7915_mac_add_txs(struct mt7915_dev *dev, void *data)
+ if (pid < MT_PACKET_ID_FIRST)
+ return;
+
+- if (wcidx >= MT7915_WTBL_SIZE)
++ if (wcidx >= mt7915_wtbl_size(dev))
+ return;
+
+ rcu_read_lock();
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 0911b6f973b5a..31634d7ed1737 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -211,24 +211,12 @@ mt7915_mcu_get_sta_nss(u16 mcs_map)
+
+ static void
+ mt7915_mcu_set_sta_he_mcs(struct ieee80211_sta *sta, __le16 *he_mcs,
+- const u16 *mask)
++ u16 mcs_map)
+ {
+ struct mt7915_sta *msta = (struct mt7915_sta *)sta->drv_priv;
+- struct cfg80211_chan_def *chandef = &msta->vif->phy->mt76->chandef;
++ enum nl80211_band band = msta->vif->phy->mt76->chandef.chan->band;
++ const u16 *mask = msta->vif->bitrate_mask.control[band].he_mcs;
+ int nss, max_nss = sta->rx_nss > 3 ? 4 : sta->rx_nss;
+- u16 mcs_map;
+-
+- switch (chandef->width) {
+- case NL80211_CHAN_WIDTH_80P80:
+- mcs_map = le16_to_cpu(sta->he_cap.he_mcs_nss_supp.rx_mcs_80p80);
+- break;
+- case NL80211_CHAN_WIDTH_160:
+- mcs_map = le16_to_cpu(sta->he_cap.he_mcs_nss_supp.rx_mcs_160);
+- break;
+- default:
+- mcs_map = le16_to_cpu(sta->he_cap.he_mcs_nss_supp.rx_mcs_80);
+- break;
+- }
+
+ for (nss = 0; nss < max_nss; nss++) {
+ int mcs;
+@@ -1264,8 +1252,11 @@ mt7915_mcu_wtbl_generic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ generic = (struct wtbl_generic *)tlv;
+
+ if (sta) {
++ if (vif->type == NL80211_IFTYPE_STATION)
++ generic->partial_aid = cpu_to_le16(vif->bss_conf.aid);
++ else
++ generic->partial_aid = cpu_to_le16(sta->aid);
+ memcpy(generic->peer_addr, sta->addr, ETH_ALEN);
+- generic->partial_aid = cpu_to_le16(sta->aid);
+ generic->muar_idx = mvif->mt76.omac_idx;
+ generic->qos = sta->wme;
+ } else {
+@@ -1319,12 +1310,15 @@ mt7915_mcu_sta_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ case NL80211_IFTYPE_MESH_POINT:
+ case NL80211_IFTYPE_AP:
+ basic->conn_type = cpu_to_le32(CONNECTION_INFRA_STA);
++ basic->aid = cpu_to_le16(sta->aid);
+ break;
+ case NL80211_IFTYPE_STATION:
+ basic->conn_type = cpu_to_le32(CONNECTION_INFRA_AP);
++ basic->aid = cpu_to_le16(vif->bss_conf.aid);
+ break;
+ case NL80211_IFTYPE_ADHOC:
+ basic->conn_type = cpu_to_le32(CONNECTION_IBSS_ADHOC);
++ basic->aid = cpu_to_le16(sta->aid);
+ break;
+ default:
+ WARN_ON(1);
+@@ -1332,7 +1326,6 @@ mt7915_mcu_sta_basic_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ }
+
+ memcpy(basic->peer_addr, sta->addr, ETH_ALEN);
+- basic->aid = cpu_to_le16(sta->aid);
+ basic->qos = sta->wme;
+ }
+
+@@ -1340,11 +1333,9 @@ static void
+ mt7915_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ struct ieee80211_vif *vif)
+ {
+- struct mt7915_sta *msta = (struct mt7915_sta *)sta->drv_priv;
+ struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
+ struct ieee80211_he_cap_elem *elem = &sta->he_cap.he_cap_elem;
+- enum nl80211_band band = msta->vif->phy->mt76->chandef.chan->band;
+- const u16 *mcs_mask = msta->vif->bitrate_mask.control[band].he_mcs;
++ struct ieee80211_he_mcs_nss_supp mcs_map;
+ struct sta_rec_he *he;
+ struct tlv *tlv;
+ u32 cap = 0;
+@@ -1434,22 +1425,23 @@ mt7915_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+
+ he->he_cap = cpu_to_le32(cap);
+
++ mcs_map = sta->he_cap.he_mcs_nss_supp;
+ switch (sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_160:
+ if (elem->phy_cap_info[0] &
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G)
+ mt7915_mcu_set_sta_he_mcs(sta,
+ &he->max_nss_mcs[CMD_HE_MCS_BW8080],
+- mcs_mask);
++ le16_to_cpu(mcs_map.rx_mcs_80p80));
+
+ mt7915_mcu_set_sta_he_mcs(sta,
+ &he->max_nss_mcs[CMD_HE_MCS_BW160],
+- mcs_mask);
++ le16_to_cpu(mcs_map.rx_mcs_160));
+ fallthrough;
+ default:
+ mt7915_mcu_set_sta_he_mcs(sta,
+ &he->max_nss_mcs[CMD_HE_MCS_BW80],
+- mcs_mask);
++ le16_to_cpu(mcs_map.rx_mcs_80));
+ break;
+ }
+
+@@ -1524,9 +1516,6 @@ mt7915_mcu_sta_muru_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ vif->type != NL80211_IFTYPE_AP)
+ return;
+
+- if (!sta->vht_cap.vht_supported)
+- return;
+-
+ tlv = mt7915_mcu_add_tlv(skb, STA_REC_MURU, sizeof(*muru));
+
+ muru = (struct sta_rec_muru *)tlv;
+@@ -1534,9 +1523,12 @@ mt7915_mcu_sta_muru_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ muru->cfg.mimo_dl_en = mvif->cap.he_mu_ebfer ||
+ mvif->cap.vht_mu_ebfer ||
+ mvif->cap.vht_mu_ebfee;
++ muru->cfg.mimo_ul_en = true;
++ muru->cfg.ofdma_dl_en = true;
+
+- muru->mimo_dl.vht_mu_bfee =
+- !!(sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
++ if (sta->vht_cap.vht_supported)
++ muru->mimo_dl.vht_mu_bfee =
++ !!(sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
+
+ if (!sta->he_cap.has_he)
+ return;
+@@ -1544,13 +1536,11 @@ mt7915_mcu_sta_muru_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ muru->mimo_dl.partial_bw_dl_mimo =
+ HE_PHY(CAP6_PARTIAL_BANDWIDTH_DL_MUMIMO, elem->phy_cap_info[6]);
+
+- muru->cfg.mimo_ul_en = true;
+ muru->mimo_ul.full_ul_mimo =
+ HE_PHY(CAP2_UL_MU_FULL_MU_MIMO, elem->phy_cap_info[2]);
+ muru->mimo_ul.partial_ul_mimo =
+ HE_PHY(CAP2_UL_MU_PARTIAL_MU_MIMO, elem->phy_cap_info[2]);
+
+- muru->cfg.ofdma_dl_en = true;
+ muru->ofdma_dl.punc_pream_rx =
+ HE_PHY(CAP1_PREAMBLE_PUNC_RX_MASK, elem->phy_cap_info[1]);
+ muru->ofdma_dl.he_20m_in_40m_2g =
+@@ -2134,9 +2124,12 @@ mt7915_mcu_add_rate_ctrl_fixed(struct mt7915_dev *dev,
+ phy.sgi |= gi << (i << (_he)); \
+ phy.he_ltf |= mask->control[band].he_ltf << (i << (_he));\
+ } \
+- for (i = 0; i < ARRAY_SIZE(mask->control[band]._mcs); i++) \
+- nrates += hweight16(mask->control[band]._mcs[i]); \
+- phy.mcs = ffs(mask->control[band]._mcs[0]) - 1; \
++ for (i = 0; i < ARRAY_SIZE(mask->control[band]._mcs); i++) { \
++ if (!mask->control[band]._mcs[i]) \
++ continue; \
++ nrates += hweight16(mask->control[band]._mcs[i]); \
++ phy.mcs = ffs(mask->control[band]._mcs[i]) - 1; \
++ } \
+ } while (0)
+
+ if (sta->he_cap.has_he) {
+@@ -2394,8 +2387,10 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ }
+
+ ret = mt7915_mcu_sta_wtbl_tlv(dev, skb, vif, sta);
+- if (ret)
++ if (ret) {
++ dev_kfree_skb(skb);
+ return ret;
++ }
+
+ if (sta && sta->ht_cap.ht_supported) {
+ /* starec amsdu */
+@@ -2409,8 +2404,10 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ }
+
+ ret = mt7915_mcu_add_group(dev, vif, sta);
+- if (ret)
++ if (ret) {
++ dev_kfree_skb(skb);
+ return ret;
++ }
+ out:
+ return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_EXT_CMD(STA_REC_UPDATE), true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index 42d887383e8d8..12ca545664614 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -12,7 +12,8 @@
+ #define MT7915_MAX_INTERFACES 19
+ #define MT7915_MAX_WMM_SETS 4
+ #define MT7915_WTBL_SIZE 288
+-#define MT7915_WTBL_RESERVED (MT7915_WTBL_SIZE - 1)
++#define MT7916_WTBL_SIZE 544
++#define MT7915_WTBL_RESERVED (mt7915_wtbl_size(dev) - 1)
+ #define MT7915_WTBL_STA (MT7915_WTBL_RESERVED - \
+ MT7915_MAX_INTERFACES)
+
+@@ -449,6 +450,11 @@ static inline bool is_mt7915(struct mt76_dev *dev)
+ return mt76_chip(dev) == 0x7915;
+ }
+
++static inline u16 mt7915_wtbl_size(struct mt7915_dev *dev)
++{
++ return is_mt7915(&dev->mt76) ? MT7915_WTBL_SIZE : MT7916_WTBL_SIZE;
++}
++
+ void mt7915_dual_hif_set_irq_mask(struct mt7915_dev *dev, bool write_reg,
+ u32 clear, u32 set);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+index 86fd7292b229f..196b50e616fe0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+@@ -129,23 +129,22 @@ mt7921_queues_acq(struct seq_file *s, void *data)
+
+ mt7921_mutex_acquire(dev);
+
+- for (i = 0; i < 16; i++) {
+- int j, acs = i / 4, index = i % 4;
++ for (i = 0; i < 4; i++) {
+ u32 ctrl, val, qlen = 0;
++ int j;
+
+- val = mt76_rr(dev, MT_PLE_AC_QEMPTY(acs, index));
+- ctrl = BIT(31) | BIT(15) | (acs << 8);
++ val = mt76_rr(dev, MT_PLE_AC_QEMPTY(i));
++ ctrl = BIT(31) | BIT(11) | (i << 24);
+
+ for (j = 0; j < 32; j++) {
+ if (val & BIT(j))
+ continue;
+
+- mt76_wr(dev, MT_PLE_FL_Q0_CTRL,
+- ctrl | (j + (index << 5)));
++ mt76_wr(dev, MT_PLE_FL_Q0_CTRL, ctrl | j);
+ qlen += mt76_get_field(dev, MT_PLE_FL_Q3_CTRL,
+ GENMASK(11, 0));
+ }
+- seq_printf(s, "AC%d%d: queued=%d\n", acs, index, qlen);
++ seq_printf(s, "AC%d: queued=%d\n", i, qlen);
+ }
+
+ mt7921_mutex_release(dev);
+@@ -291,13 +290,12 @@ mt7921_pm_set(void *data, u64 val)
+ pm->enable = false;
+ mt76_connac_pm_wake(&dev->mphy, pm);
+
++ pm->enable = val;
+ ieee80211_iterate_active_interfaces(mt76_hw(dev),
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7921_pm_interface_iter, dev);
+
+ mt76_connac_mcu_set_deep_sleep(&dev->mt76, pm->ds_enable);
+-
+- pm->enable = val;
+ mt76_connac_power_save_sched(&dev->mphy, pm);
+ out:
+ mutex_unlock(&dev->mt76.mutex);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index cdff1fd52d93a..39d6ce4ecddd7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -78,110 +78,6 @@ static void mt7921_dma_prefetch(struct mt7921_dev *dev)
+ mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
+ }
+
+-static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
+-{
+- static const struct {
+- u32 phys;
+- u32 mapped;
+- u32 size;
+- } fixed_map[] = {
+- { 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
+- { 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
+- { 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
+- { 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
+- { 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
+- { 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
+- { 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
+- { 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
+- { 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
+- { 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
+- { 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
+- { 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
+- { 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
+- { 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
+- { 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
+- { 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
+- { 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
+- { 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
+- { 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
+- { 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
+- { 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
+- { 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
+- { 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
+- { 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
+- { 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
+- { 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
+- { 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
+- { 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
+- { 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
+- { 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
+- { 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
+- { 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
+- { 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
+- { 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
+- { 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
+- { 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
+- { 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
+- { 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
+- { 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
+- { 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
+- { 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
+- { 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
+- { 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
+- };
+- int i;
+-
+- if (addr < 0x100000)
+- return addr;
+-
+- for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
+- u32 ofs;
+-
+- if (addr < fixed_map[i].phys)
+- continue;
+-
+- ofs = addr - fixed_map[i].phys;
+- if (ofs > fixed_map[i].size)
+- continue;
+-
+- return fixed_map[i].mapped + ofs;
+- }
+-
+- if ((addr >= 0x18000000 && addr < 0x18c00000) ||
+- (addr >= 0x70000000 && addr < 0x78000000) ||
+- (addr >= 0x7c000000 && addr < 0x7c400000))
+- return mt7921_reg_map_l1(dev, addr);
+-
+- dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
+- addr);
+-
+- return 0;
+-}
+-
+-static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
+-{
+- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+- u32 addr = __mt7921_reg_addr(dev, offset);
+-
+- return dev->bus_ops->rr(mdev, addr);
+-}
+-
+-static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
+-{
+- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+- u32 addr = __mt7921_reg_addr(dev, offset);
+-
+- dev->bus_ops->wr(mdev, addr, val);
+-}
+-
+-static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
+-{
+- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+- u32 addr = __mt7921_reg_addr(dev, offset);
+-
+- return dev->bus_ops->rmw(mdev, addr, mask, val);
+-}
+-
+ static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
+ {
+ if (force) {
+@@ -341,23 +237,8 @@ int mt7921_wpdma_reinit_cond(struct mt7921_dev *dev)
+
+ int mt7921_dma_init(struct mt7921_dev *dev)
+ {
+- struct mt76_bus_ops *bus_ops;
+ int ret;
+
+- dev->phy.dev = dev;
+- dev->phy.mt76 = &dev->mt76.phy;
+- dev->mt76.phy.priv = &dev->phy;
+- dev->bus_ops = dev->mt76.bus;
+- bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
+- GFP_KERNEL);
+- if (!bus_ops)
+- return -ENOMEM;
+-
+- bus_ops->rr = mt7921_rr;
+- bus_ops->wr = mt7921_wr;
+- bus_ops->rmw = mt7921_rmw;
+- dev->mt76.bus = bus_ops;
+-
+ mt76_dma_attach(&dev->mt76);
+
+ ret = mt7921_dma_disable(dev, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index ec10f95a46495..84f72dd1bf930 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -402,13 +402,13 @@ mt7921_mac_assoc_rssi(struct mt7921_dev *dev, struct sk_buff *skb)
+ static int mt7921_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+ {
+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
++ struct ethhdr *eth_hdr = (struct ethhdr *)(skb->data + hdr_gap);
+ struct mt7921_sta *msta = (struct mt7921_sta *)status->wcid;
++ __le32 *rxd = (__le32 *)skb->data;
+ struct ieee80211_sta *sta;
+ struct ieee80211_vif *vif;
+ struct ieee80211_hdr hdr;
+- struct ethhdr eth_hdr;
+- __le32 *rxd = (__le32 *)skb->data;
+- __le32 qos_ctrl, ht_ctrl;
++ u16 frame_control;
+
+ if (FIELD_GET(MT_RXD3_NORMAL_ADDR_TYPE, le32_to_cpu(rxd[3])) !=
+ MT_RXD3_NORMAL_U2M)
+@@ -424,47 +424,52 @@ static int mt7921_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+ vif = container_of((void *)msta->vif, struct ieee80211_vif, drv_priv);
+
+ /* store the info from RXD and ethhdr to avoid being overridden */
+- memcpy(&eth_hdr, skb->data + hdr_gap, sizeof(eth_hdr));
+- hdr.frame_control = FIELD_GET(MT_RXD6_FRAME_CONTROL, rxd[6]);
+- hdr.seq_ctrl = FIELD_GET(MT_RXD8_SEQ_CTRL, rxd[8]);
+- qos_ctrl = FIELD_GET(MT_RXD8_QOS_CTL, rxd[8]);
+- ht_ctrl = FIELD_GET(MT_RXD9_HT_CONTROL, rxd[9]);
+-
++ frame_control = le32_get_bits(rxd[6], MT_RXD6_FRAME_CONTROL);
++ hdr.frame_control = cpu_to_le16(frame_control);
++ hdr.seq_ctrl = cpu_to_le16(le32_get_bits(rxd[8], MT_RXD8_SEQ_CTRL));
+ hdr.duration_id = 0;
++
+ ether_addr_copy(hdr.addr1, vif->addr);
+ ether_addr_copy(hdr.addr2, sta->addr);
+- switch (le16_to_cpu(hdr.frame_control) &
+- (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) {
++ switch (frame_control & (IEEE80211_FCTL_TODS |
++ IEEE80211_FCTL_FROMDS)) {
+ case 0:
+ ether_addr_copy(hdr.addr3, vif->bss_conf.bssid);
+ break;
+ case IEEE80211_FCTL_FROMDS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_source);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_source);
+ break;
+ case IEEE80211_FCTL_TODS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_dest);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
+ break;
+ case IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS:
+- ether_addr_copy(hdr.addr3, eth_hdr.h_dest);
+- ether_addr_copy(hdr.addr4, eth_hdr.h_source);
++ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
++ ether_addr_copy(hdr.addr4, eth_hdr->h_source);
+ break;
+ default:
+ break;
+ }
+
+ skb_pull(skb, hdr_gap + sizeof(struct ethhdr) - 2);
+- if (eth_hdr.h_proto == htons(ETH_P_AARP) ||
+- eth_hdr.h_proto == htons(ETH_P_IPX))
++ if (eth_hdr->h_proto == cpu_to_be16(ETH_P_AARP) ||
++ eth_hdr->h_proto == cpu_to_be16(ETH_P_IPX))
+ ether_addr_copy(skb_push(skb, ETH_ALEN), bridge_tunnel_header);
+- else if (eth_hdr.h_proto >= htons(ETH_P_802_3_MIN))
++ else if (be16_to_cpu(eth_hdr->h_proto) >= ETH_P_802_3_MIN)
+ ether_addr_copy(skb_push(skb, ETH_ALEN), rfc1042_header);
+ else
+ skb_pull(skb, 2);
+
+ if (ieee80211_has_order(hdr.frame_control))
+- memcpy(skb_push(skb, 2), &ht_ctrl, 2);
+- if (ieee80211_is_data_qos(hdr.frame_control))
+- memcpy(skb_push(skb, 2), &qos_ctrl, 2);
++ memcpy(skb_push(skb, IEEE80211_HT_CTL_LEN), &rxd[9],
++ IEEE80211_HT_CTL_LEN);
++ if (ieee80211_is_data_qos(hdr.frame_control)) {
++ __le16 qos_ctrl;
++
++ qos_ctrl = cpu_to_le16(le32_get_bits(rxd[8], MT_RXD8_QOS_CTL));
++ memcpy(skb_push(skb, IEEE80211_QOS_CTL_LEN), &qos_ctrl,
++ IEEE80211_QOS_CTL_LEN);
++ }
++
+ if (ieee80211_has_a4(hdr.frame_control))
+ memcpy(skb_push(skb, sizeof(hdr)), &hdr, sizeof(hdr));
+ else
+@@ -914,9 +919,15 @@ mt7921_mac_write_txwi_80211(struct mt7921_dev *dev, __le32 *txwi,
+ txwi[3] |= cpu_to_le32(val);
+ }
+
+- val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
+- FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
+- txwi[7] |= cpu_to_le32(val);
++ if (mt76_is_mmio(&dev->mt76)) {
++ val = FIELD_PREP(MT_TXD7_TYPE, fc_type) |
++ FIELD_PREP(MT_TXD7_SUB_TYPE, fc_stype);
++ txwi[7] |= cpu_to_le32(val);
++ } else {
++ val = FIELD_PREP(MT_TXD8_L_TYPE, fc_type) |
++ FIELD_PREP(MT_TXD8_L_SUB_TYPE, fc_stype);
++ txwi[8] |= cpu_to_le32(val);
++ }
+ }
+
+ void mt7921_mac_write_txwi(struct mt7921_dev *dev, __le32 *txwi,
+@@ -1092,7 +1103,6 @@ mt7921_mac_add_txs_skb(struct mt7921_dev *dev, struct mt76_wcid *wcid, int pid,
+ break;
+ case MT_PHY_TYPE_HT:
+ case MT_PHY_TYPE_HT_GF:
+- rate.mcs += (rate.nss - 1) * 8;
+ if (rate.mcs > 31)
+ goto out;
+
+@@ -1551,6 +1561,14 @@ void mt7921_pm_power_save_work(struct work_struct *work)
+ test_bit(MT76_HW_SCHED_SCANNING, &mphy->state))
+ goto out;
+
++ if (mutex_is_locked(&dev->mt76.mutex))
++ /* if mt76 mutex is held we should not put the device
++ * to sleep since we are currently accessing device
++ * register map. We need to wait for the next power_save
++ * trigger.
++ */
++ goto out;
++
+ if (time_is_after_jiffies(dev->pm.last_activity + delta)) {
+ delta = dev->pm.last_activity + delta - jiffies;
+ goto out;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+index 544a1c33126a4..12e1cf8abe6ea 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+@@ -284,6 +284,9 @@ enum tx_mcu_port_q_idx {
+ #define MT_TXD7_HW_AMSDU BIT(10)
+ #define MT_TXD7_TX_TIME GENMASK(9, 0)
+
++#define MT_TXD8_L_TYPE GENMASK(5, 4)
++#define MT_TXD8_L_SUB_TYPE GENMASK(3, 0)
++
+ #define MT_TX_RATE_STBC BIT(13)
+ #define MT_TX_RATE_NSS GENMASK(12, 10)
+ #define MT_TX_RATE_MODE GENMASK(9, 6)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index ef1e1ef91611b..e82545a7fcc11 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -707,12 +707,8 @@ static int mt7921_load_patch(struct mt7921_dev *dev)
+ if (mt76_is_sdio(&dev->mt76)) {
+ /* activate again */
+ ret = __mt7921_mcu_fw_pmctrl(dev);
+- if (ret)
+- return ret;
+-
+- ret = __mt7921_mcu_drv_pmctrl(dev);
+- if (ret)
+- return ret;
++ if (!ret)
++ ret = __mt7921_mcu_drv_pmctrl(dev);
+ }
+
+ out:
+@@ -920,33 +916,28 @@ EXPORT_SYMBOL_GPL(mt7921_mcu_exit);
+
+ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ {
+-#define WMM_AIFS_SET BIT(0)
+-#define WMM_CW_MIN_SET BIT(1)
+-#define WMM_CW_MAX_SET BIT(2)
+-#define WMM_TXOP_SET BIT(3)
+-#define WMM_PARAM_SET GENMASK(3, 0)
+-#define TX_CMD_MODE 1
++ struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
++
+ struct edca {
+- u8 queue;
+- u8 set;
+- u8 aifs;
+- u8 cw_min;
++ __le16 cw_min;
+ __le16 cw_max;
+ __le16 txop;
+- };
++ __le16 aifs;
++ u8 guardtime;
++ u8 acm;
++ } __packed;
+ struct mt7921_mcu_tx {
+- u8 total;
+- u8 action;
+- u8 valid;
+- u8 mode;
+-
+ struct edca edca[IEEE80211_NUM_ACS];
++ u8 bss_idx;
++ u8 qos;
++ u8 wmm_idx;
++ u8 pad;
+ } __packed req = {
+- .valid = true,
+- .mode = TX_CMD_MODE,
+- .total = IEEE80211_NUM_ACS,
++ .bss_idx = mvif->mt76.idx,
++ .qos = vif->bss_conf.qos,
++ .wmm_idx = mvif->mt76.wmm_idx,
+ };
+- struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
++
+ struct mu_edca {
+ u8 cw_min;
+ u8 cw_max;
+@@ -970,30 +961,29 @@ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ .qos = vif->bss_conf.qos,
+ .wmm_idx = mvif->mt76.wmm_idx,
+ };
++ int to_aci[] = {1, 0, 2, 3};
+ int ac, ret;
+
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+ struct ieee80211_tx_queue_params *q = &mvif->queue_params[ac];
+- struct edca *e = &req.edca[ac];
++ struct edca *e = &req.edca[to_aci[ac]];
+
+- e->set = WMM_PARAM_SET;
+- e->queue = ac + mvif->mt76.wmm_idx * MT7921_MAX_WMM_SETS;
+ e->aifs = q->aifs;
+ e->txop = cpu_to_le16(q->txop);
+
+ if (q->cw_min)
+- e->cw_min = fls(q->cw_min);
++ e->cw_min = cpu_to_le16(q->cw_min);
+ else
+ e->cw_min = 5;
+
+ if (q->cw_max)
+- e->cw_max = cpu_to_le16(fls(q->cw_max));
++ e->cw_max = cpu_to_le16(q->cw_max);
+ else
+ e->cw_max = cpu_to_le16(10);
+ }
+
+- ret = mt76_mcu_send_msg(&dev->mt76, MCU_EXT_CMD(EDCA_UPDATE),
+- &req, sizeof(req), true);
++ ret = mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(SET_EDCA_PARMS), &req,
++ sizeof(req), false);
+ if (ret)
+ return ret;
+
+@@ -1003,7 +993,6 @@ int mt7921_mcu_set_tx(struct mt7921_dev *dev, struct ieee80211_vif *vif)
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+ struct ieee80211_he_mu_edca_param_ac_rec *q;
+ struct mu_edca *e;
+- int to_aci[] = {1, 0, 2, 3};
+
+ if (!mvif->queue_params[ac].mu_edca)
+ break;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 96647801850a5..33f8e5b541b35 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -452,6 +452,7 @@ int mt7921e_mcu_init(struct mt7921_dev *dev);
+ int mt7921s_wfsys_reset(struct mt7921_dev *dev);
+ int mt7921s_mac_reset(struct mt7921_dev *dev);
+ int mt7921s_init_reset(struct mt7921_dev *dev);
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 9dae2f5972bf9..9a71a5d864819 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -121,6 +121,110 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
+ mt76_free_device(&dev->mt76);
+ }
+
++static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
++{
++ static const struct {
++ u32 phys;
++ u32 mapped;
++ u32 size;
++ } fixed_map[] = {
++ { 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
++ { 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
++ { 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
++ { 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
++ { 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
++ { 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
++ { 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
++ { 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
++ { 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
++ { 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
++ { 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
++ { 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
++ { 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
++ { 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
++ { 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
++ { 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
++ { 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
++ { 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
++ { 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
++ { 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
++ { 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
++ { 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
++ { 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
++ { 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
++ { 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
++ { 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
++ { 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
++ { 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
++ { 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
++ { 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
++ { 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
++ { 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
++ { 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
++ { 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
++ { 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
++ { 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
++ { 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
++ { 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
++ { 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
++ { 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
++ { 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
++ { 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
++ { 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
++ };
++ int i;
++
++ if (addr < 0x100000)
++ return addr;
++
++ for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
++ u32 ofs;
++
++ if (addr < fixed_map[i].phys)
++ continue;
++
++ ofs = addr - fixed_map[i].phys;
++ if (ofs > fixed_map[i].size)
++ continue;
++
++ return fixed_map[i].mapped + ofs;
++ }
++
++ if ((addr >= 0x18000000 && addr < 0x18c00000) ||
++ (addr >= 0x70000000 && addr < 0x78000000) ||
++ (addr >= 0x7c000000 && addr < 0x7c400000))
++ return mt7921_reg_map_l1(dev, addr);
++
++ dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
++ addr);
++
++ return 0;
++}
++
++static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
++{
++ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++ u32 addr = __mt7921_reg_addr(dev, offset);
++
++ return dev->bus_ops->rr(mdev, addr);
++}
++
++static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
++{
++ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++ u32 addr = __mt7921_reg_addr(dev, offset);
++
++ dev->bus_ops->wr(mdev, addr, val);
++}
++
++static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
++{
++ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++ u32 addr = __mt7921_reg_addr(dev, offset);
++
++ return dev->bus_ops->rmw(mdev, addr, mask, val);
++}
++
+ static int mt7921_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+ {
+@@ -151,6 +255,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ .fw_own = mt7921e_mcu_fw_pmctrl,
+ };
+
++ struct mt76_bus_ops *bus_ops;
+ struct mt7921_dev *dev;
+ struct mt76_dev *mdev;
+ int ret;
+@@ -188,6 +293,25 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+
+ mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
+ tasklet_init(&dev->irq_tasklet, mt7921_irq_tasklet, (unsigned long)dev);
++
++ dev->phy.dev = dev;
++ dev->phy.mt76 = &dev->mt76.phy;
++ dev->mt76.phy.priv = &dev->phy;
++ dev->bus_ops = dev->mt76.bus;
++ bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
++ GFP_KERNEL);
++ if (!bus_ops)
++ return -ENOMEM;
++
++ bus_ops->rr = mt7921_rr;
++ bus_ops->wr = mt7921_wr;
++ bus_ops->rmw = mt7921_rmw;
++ dev->mt76.bus = bus_ops;
++
++ ret = __mt7921e_mcu_drv_pmctrl(dev);
++ if (ret)
++ return ret;
++
+ mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
+ (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
+ dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+index a020352122a12..daa73c92426ca 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+@@ -59,10 +59,8 @@ int mt7921e_mcu_init(struct mt7921_dev *dev)
+ return err;
+ }
+
+-int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ {
+- struct mt76_phy *mphy = &dev->mt76.phy;
+- struct mt76_connac_pm *pm = &dev->pm;
+ int i, err = 0;
+
+ for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
+@@ -75,9 +73,21 @@ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ if (i == MT7921_DRV_OWN_RETRY_COUNT) {
+ dev_err(dev->mt76.dev, "driver own failed\n");
+ err = -EIO;
+- goto out;
+ }
+
++ return err;
++}
++
++int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++{
++ struct mt76_phy *mphy = &dev->mt76.phy;
++ struct mt76_connac_pm *pm = &dev->pm;
++ int err;
++
++ err = __mt7921e_mcu_drv_pmctrl(dev);
++ if (err < 0)
++ goto out;
++
+ mt7921_wpdma_reinit_cond(dev);
+ clear_bit(MT76_STATE_PM, &mphy->state);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+index cbd38122c510f..c8c92faa4624f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+@@ -17,13 +17,12 @@
+ #define MT_PLE_BASE 0x820c0000
+ #define MT_PLE(ofs) (MT_PLE_BASE + (ofs))
+
+-#define MT_PLE_FL_Q0_CTRL MT_PLE(0x1b0)
+-#define MT_PLE_FL_Q1_CTRL MT_PLE(0x1b4)
+-#define MT_PLE_FL_Q2_CTRL MT_PLE(0x1b8)
+-#define MT_PLE_FL_Q3_CTRL MT_PLE(0x1bc)
++#define MT_PLE_FL_Q0_CTRL MT_PLE(0x3e0)
++#define MT_PLE_FL_Q1_CTRL MT_PLE(0x3e4)
++#define MT_PLE_FL_Q2_CTRL MT_PLE(0x3e8)
++#define MT_PLE_FL_Q3_CTRL MT_PLE(0x3ec)
+
+-#define MT_PLE_AC_QEMPTY(ac, n) MT_PLE(0x300 + 0x10 * (ac) + \
+- ((n) << 2))
++#define MT_PLE_AC_QEMPTY(_n) MT_PLE(0x500 + 0x40 * (_n))
+ #define MT_PLE_AMSDU_PACK_MSDU_CNT(n) MT_PLE(0x10e0 + ((n) << 2))
+
+ #define MT_MDP_BASE 0x820cd000
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
+index d20f2ff01be17..5d8af18c70267 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
+@@ -49,6 +49,26 @@ mt7921s_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ return ret;
+ }
+
++static u32 mt7921s_read_rm3r(struct mt7921_dev *dev)
++{
++ struct mt76_sdio *sdio = &dev->mt76.sdio;
++
++ return sdio_readl(sdio->func, MCR_D2HRM3R, NULL);
++}
++
++static u32 mt7921s_clear_rm3r_drv_own(struct mt7921_dev *dev)
++{
++ struct mt76_sdio *sdio = &dev->mt76.sdio;
++ u32 val;
++
++ val = sdio_readl(sdio->func, MCR_D2HRM3R, NULL);
++ if (val)
++ sdio_writel(sdio->func, H2D_SW_INT_CLEAR_MAILBOX_ACK,
++ MCR_WSICR, NULL);
++
++ return val;
++}
++
+ int mt7921s_mcu_init(struct mt7921_dev *dev)
+ {
+ static const struct mt76_mcu_ops mt7921s_mcu_ops = {
+@@ -88,6 +108,12 @@ int mt7921s_mcu_drv_pmctrl(struct mt7921_dev *dev)
+
+ err = readx_poll_timeout(mt76s_read_pcr, &dev->mt76, status,
+ status & WHLPCR_IS_DRIVER_OWN, 2000, 1000000);
++
++ if (!err && test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state))
++ err = readx_poll_timeout(mt7921s_read_rm3r, dev, status,
++ status & D2HRM3R_IS_DRIVER_OWN,
++ 2000, 1000000);
++
+ sdio_release_host(func);
+
+ if (err < 0) {
+@@ -115,12 +141,24 @@ int mt7921s_mcu_fw_pmctrl(struct mt7921_dev *dev)
+
+ sdio_claim_host(func);
+
++ if (test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state)) {
++ err = readx_poll_timeout(mt7921s_clear_rm3r_drv_own,
++ dev, status,
++ !(status & D2HRM3R_IS_DRIVER_OWN),
++ 2000, 1000000);
++ if (err < 0) {
++ dev_err(dev->mt76.dev, "mailbox ACK not cleared\n");
++ goto err;
++ }
++ }
++
+ sdio_writel(func, WHLPCR_FW_OWN_REQ_SET, MCR_WHLPCR, NULL);
+
+ err = readx_poll_timeout(mt76s_read_pcr, &dev->mt76, status,
+ !(status & WHLPCR_IS_DRIVER_OWN), 2000, 1000000);
+ sdio_release_host(func);
+
++err:
+ if (err < 0) {
+ dev_err(dev->mt76.dev, "firmware own failed\n");
+ clear_bit(MT76_STATE_PM, &mphy->state);
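
The sdio_mcu.c hunks above all lean on the readx_poll_timeout() pattern: re-read a status register until a bit is set (or cleared) or a deadline passes. A minimal userspace sketch of that shape follows; poll_timeout() and fake_pcr() are illustrative stand-ins rather than kernel API, and the 2000/1000000 microsecond arguments simply mirror the calls in the hunk.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Re-read a status word until (status & mask) is non-zero or the
 * timeout expires -- the shape of readx_poll_timeout() used above. */
static int poll_timeout(uint32_t (*read)(void), uint32_t mask,
			long sleep_us, long timeout_us)
{
	struct timespec ts = { 0, sleep_us * 1000 };
	long waited = 0;

	while (!(read() & mask)) {
		if (waited >= timeout_us)
			return -1;	/* like -ETIMEDOUT */
		nanosleep(&ts, NULL);
		waited += sleep_us;
	}
	return 0;
}

static uint32_t fake_pcr(void)
{
	return 0x1;	/* pretend driver-own is already granted */
}

int main(void)
{
	printf("%d\n", poll_timeout(fake_pcr, 0x1, 2000, 1000000));
	return 0;
}
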
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.h b/drivers/net/wireless/mediatek/mt76/sdio.h
+index 99db4ad93b7c7..27d5d2077ebae 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.h
++++ b/drivers/net/wireless/mediatek/mt76/sdio.h
+@@ -65,6 +65,7 @@
+ #define MCR_H2DSM0R 0x0070
+ #define H2D_SW_INT_READ BIT(16)
+ #define H2D_SW_INT_WRITE BIT(17)
++#define H2D_SW_INT_CLEAR_MAILBOX_ACK BIT(22)
+
+ #define MCR_H2DSM1R 0x0074
+ #define MCR_D2HRM0R 0x0078
+@@ -109,6 +110,7 @@
+ #define MCR_H2DSM2R 0x0160 /* supported in CONNAC2 */
+ #define MCR_H2DSM3R 0x0164 /* supported in CONNAC2 */
+ #define MCR_D2HRM3R 0x0174 /* supported in CONNAC2 */
++#define D2HRM3R_IS_DRIVER_OWN BIT(0)
+ #define MCR_WTQCR8 0x0190 /* supported in CONNAC2 */
+ #define MCR_WTQCR9 0x0194 /* supported in CONNAC2 */
+ #define MCR_WTQCR10 0x0198 /* supported in CONNAC2 */
+diff --git a/drivers/net/wireless/ray_cs.c b/drivers/net/wireless/ray_cs.c
+index 2987ad9271f64..87e98ab068ed7 100644
+--- a/drivers/net/wireless/ray_cs.c
++++ b/drivers/net/wireless/ray_cs.c
+@@ -382,6 +382,8 @@ static int ray_config(struct pcmcia_device *link)
+ goto failed;
+ local->sram = ioremap(link->resource[2]->start,
+ resource_size(link->resource[2]));
++ if (!local->sram)
++ goto failed;
+
+ /*** Set up 16k window for shared memory (receive buffer) ***************/
+ link->resource[3]->flags |=
+@@ -396,6 +398,8 @@ static int ray_config(struct pcmcia_device *link)
+ goto failed;
+ local->rmem = ioremap(link->resource[3]->start,
+ resource_size(link->resource[3]));
++ if (!local->rmem)
++ goto failed;
+
+ /*** Set up window for attribute memory ***********************************/
+ link->resource[4]->flags |=
+@@ -410,6 +414,8 @@ static int ray_config(struct pcmcia_device *link)
+ goto failed;
+ local->amem = ioremap(link->resource[4]->start,
+ resource_size(link->resource[4]));
++ if (!local->amem)
++ goto failed;
+
+ dev_dbg(&link->dev, "ray_config sram=%p\n", local->sram);
+ dev_dbg(&link->dev, "ray_config rmem=%p\n", local->rmem);
+diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
+index 2f7c036f90221..4c8e5ea5d069c 100644
+--- a/drivers/net/wireless/realtek/rtw88/fw.c
++++ b/drivers/net/wireless/realtek/rtw88/fw.c
+@@ -1784,9 +1784,9 @@ void rtw_fw_scan_notify(struct rtw_dev *rtwdev, bool start)
+ rtw_fw_send_h2c_command(rtwdev, h2c_pkt);
+ }
+
+-static void rtw_append_probe_req_ie(struct rtw_dev *rtwdev, struct sk_buff *skb,
+- struct sk_buff_head *list,
+- struct rtw_vif *rtwvif)
++static int rtw_append_probe_req_ie(struct rtw_dev *rtwdev, struct sk_buff *skb,
++ struct sk_buff_head *list, u8 *bands,
++ struct rtw_vif *rtwvif)
+ {
+ struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
+ struct rtw_chip_info *chip = rtwdev->chip;
+@@ -1797,19 +1797,24 @@ static void rtw_append_probe_req_ie(struct rtw_dev *rtwdev, struct sk_buff *skb,
+ if (!(BIT(idx) & chip->band))
+ continue;
+ new = skb_copy(skb, GFP_KERNEL);
++ if (!new)
++ return -ENOMEM;
+ skb_put_data(new, ies->ies[idx], ies->len[idx]);
+ skb_put_data(new, ies->common_ies, ies->common_ie_len);
+ skb_queue_tail(list, new);
++ (*bands)++;
+ }
++
++ return 0;
+ }
+
+-static int _rtw_hw_scan_update_probe_req(struct rtw_dev *rtwdev, u8 num_ssids,
++static int _rtw_hw_scan_update_probe_req(struct rtw_dev *rtwdev, u8 num_probes,
+ struct sk_buff_head *probe_req_list)
+ {
+ struct rtw_chip_info *chip = rtwdev->chip;
+ struct sk_buff *skb, *tmp;
+ u8 page_offset = 1, *buf, page_size = chip->page_size;
+- u8 pages = page_offset + num_ssids * RTW_PROBE_PG_CNT;
++ u8 pages = page_offset + num_probes * RTW_PROBE_PG_CNT;
+ u16 pg_addr = rtwdev->fifo.rsvd_h2c_info_addr, loc;
+ u16 buf_offset = page_size * page_offset;
+ u8 tx_desc_sz = chip->tx_pkt_desc_sz;
+@@ -1848,6 +1853,8 @@ static int _rtw_hw_scan_update_probe_req(struct rtw_dev *rtwdev, u8 num_ssids,
+ rtwdev->scan_info.probe_pg_size = page_offset;
+ out:
+ kfree(buf);
++ skb_queue_walk_safe(probe_req_list, skb, tmp)
++ kfree_skb(skb);
+
+ return ret;
+ }
+@@ -1857,8 +1864,9 @@ static int rtw_hw_scan_update_probe_req(struct rtw_dev *rtwdev,
+ {
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct sk_buff_head list;
+- struct sk_buff *skb;
+- u8 num = req->n_ssids, i;
++ struct sk_buff *skb, *tmp;
++ u8 num = req->n_ssids, i, bands = 0;
++ int ret;
+
+ skb_queue_head_init(&list);
+ for (i = 0; i < num; i++) {
+@@ -1866,11 +1874,25 @@ static int rtw_hw_scan_update_probe_req(struct rtw_dev *rtwdev,
+ req->ssids[i].ssid,
+ req->ssids[i].ssid_len,
+ req->ie_len);
+- rtw_append_probe_req_ie(rtwdev, skb, &list, rtwvif);
++ if (!skb) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ ret = rtw_append_probe_req_ie(rtwdev, skb, &list, &bands,
++ rtwvif);
++ if (ret)
++ goto out;
++
+ kfree_skb(skb);
+ }
+
+- return _rtw_hw_scan_update_probe_req(rtwdev, num, &list);
++ return _rtw_hw_scan_update_probe_req(rtwdev, num * bands, &list);
++
++out:
++ skb_queue_walk_safe(&list, skb, tmp)
++ kfree_skb(skb);
++
++ return ret;
+ }
+
+ static int rtw_add_chan_info(struct rtw_dev *rtwdev, struct rtw_chan_info *info,
+@@ -2022,7 +2044,7 @@ void rtw_hw_scan_complete(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
+ rtwdev->hal.rcr |= BIT_CBSSID_BCN;
+ rtw_write32(rtwdev, REG_RCR, rtwdev->hal.rcr);
+
+- rtw_core_scan_complete(rtwdev, vif);
++ rtw_core_scan_complete(rtwdev, vif, true);
+
+ ieee80211_wake_queues(rtwdev->hw);
+ ieee80211_scan_completed(rtwdev->hw, &info);
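
The probe-request hunks in the fw.c diff above change the reserved-page accounting: the list built by rtw_append_probe_req_ie() holds one frame per (SSID, band) pair, so the page count must be derived from num_ssids * bands rather than num_ssids alone. A small arithmetic sketch; RTW_PROBE_PG_CNT = 2 is an assumed value taken from the driver headers.

#include <stdio.h>

#define RTW_PROBE_PG_CNT 2	/* assumed, from rtw88 headers */

int main(void)
{
	unsigned int num_ssids = 4, bands = 2, page_offset = 1;

	/* pre-patch: undercounts when one probe is queued per band */
	unsigned int old_pages = page_offset + num_ssids * RTW_PROBE_PG_CNT;
	/* post-patch: one list entry per (SSID, band) pair */
	unsigned int new_pages = page_offset +
				 num_ssids * bands * RTW_PROBE_PG_CNT;

	printf("old=%u new=%u\n", old_pages, new_pages);	/* old=9 new=17 */
	return 0;
}
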
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index ae7d97de5fdf4..647d2662955ba 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -72,6 +72,9 @@ static int rtw_ops_config(struct ieee80211_hw *hw, u32 changed)
+ struct rtw_dev *rtwdev = hw->priv;
+ int ret = 0;
+
++	/* let the previous ips work item finish to ensure we don't leave ips twice */
++ cancel_work_sync(&rtwdev->ips_work);
++
+ mutex_lock(&rtwdev->mutex);
+
+ rtw_leave_lps_deep(rtwdev);
+@@ -614,7 +617,7 @@ static void rtw_ops_sw_scan_complete(struct ieee80211_hw *hw,
+ struct rtw_dev *rtwdev = hw->priv;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw_core_scan_complete(rtwdev, vif);
++ rtw_core_scan_complete(rtwdev, vif, false);
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
+index 38252113c4a87..39c223a2e3e2d 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.c
++++ b/drivers/net/wireless/realtek/rtw88/main.c
+@@ -272,6 +272,15 @@ static void rtw_c2h_work(struct work_struct *work)
+ }
+ }
+
++static void rtw_ips_work(struct work_struct *work)
++{
++ struct rtw_dev *rtwdev = container_of(work, struct rtw_dev, ips_work);
++
++ mutex_lock(&rtwdev->mutex);
++ rtw_enter_ips(rtwdev);
++ mutex_unlock(&rtwdev->mutex);
++}
++
+ static u8 rtw_acquire_macid(struct rtw_dev *rtwdev)
+ {
+ unsigned long mac_id;
+@@ -1339,7 +1348,8 @@ void rtw_core_scan_start(struct rtw_dev *rtwdev, struct rtw_vif *rtwvif,
+ set_bit(RTW_FLAG_SCANNING, rtwdev->flags);
+ }
+
+-void rtw_core_scan_complete(struct rtw_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw_core_scan_complete(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
++ bool hw_scan)
+ {
+ struct rtw_vif *rtwvif = (struct rtw_vif *)vif->drv_priv;
+ u32 config = 0;
+@@ -1354,6 +1364,9 @@ void rtw_core_scan_complete(struct rtw_dev *rtwdev, struct ieee80211_vif *vif)
+ rtw_vif_port_config(rtwdev, rtwvif, config);
+
+ rtw_coex_scan_notify(rtwdev, COEX_SCAN_FINISH);
++
++ if (rtwvif->net_type == RTW_NET_NO_LINK && hw_scan)
++ ieee80211_queue_work(rtwdev->hw, &rtwdev->ips_work);
+ }
+
+ int rtw_core_start(struct rtw_dev *rtwdev)
+@@ -1919,6 +1932,7 @@ int rtw_core_init(struct rtw_dev *rtwdev)
+ INIT_DELAYED_WORK(&coex->wl_ccklock_work, rtw_coex_wl_ccklock_work);
+ INIT_WORK(&rtwdev->tx_work, rtw_tx_work);
+ INIT_WORK(&rtwdev->c2h_work, rtw_c2h_work);
++ INIT_WORK(&rtwdev->ips_work, rtw_ips_work);
+ INIT_WORK(&rtwdev->fw_recovery_work, rtw_fw_recovery_work);
+ INIT_WORK(&rtwdev->ba_work, rtw_txq_ba_work);
+ skb_queue_head_init(&rtwdev->c2h_queue);
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index dc1cd9bd4b8a3..36e1e408933db 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -1960,6 +1960,7 @@ struct rtw_dev {
+ /* c2h cmd queue & handler work */
+ struct sk_buff_head c2h_queue;
+ struct work_struct c2h_work;
++ struct work_struct ips_work;
+ struct work_struct fw_recovery_work;
+
+ /* used to protect txqs list */
+@@ -2101,7 +2102,8 @@ void rtw_tx_report_purge_timer(struct timer_list *t);
+ void rtw_update_sta_info(struct rtw_dev *rtwdev, struct rtw_sta_info *si);
+ void rtw_core_scan_start(struct rtw_dev *rtwdev, struct rtw_vif *rtwvif,
+ const u8 *mac_addr, bool hw_scan);
+-void rtw_core_scan_complete(struct rtw_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw_core_scan_complete(struct rtw_dev *rtwdev, struct ieee80211_vif *vif,
++ bool hw_scan);
+ int rtw_core_start(struct rtw_dev *rtwdev);
+ void rtw_core_stop(struct rtw_dev *rtwdev);
+ int rtw_chip_info_setup(struct rtw_dev *rtwdev);
+diff --git a/drivers/net/wwan/qcom_bam_dmux.c b/drivers/net/wwan/qcom_bam_dmux.c
+index 5dfa2eba6014c..17d46f4d29139 100644
+--- a/drivers/net/wwan/qcom_bam_dmux.c
++++ b/drivers/net/wwan/qcom_bam_dmux.c
+@@ -755,7 +755,7 @@ static int __maybe_unused bam_dmux_runtime_resume(struct device *dev)
+ return 0;
+
+ dmux->tx = dma_request_chan(dev, "tx");
+- if (IS_ERR(dmux->rx)) {
++ if (IS_ERR(dmux->tx)) {
+ dev_err(dev, "Failed to request TX DMA channel: %pe\n", dmux->tx);
+ dmux->tx = NULL;
+ bam_dmux_runtime_suspend(dev);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index 9ccf3d6087993..70ad891a76bae 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -1025,6 +1025,9 @@ static unsigned long default_align(struct nd_region *nd_region)
+ }
+ }
+
++ if (nd_region->ndr_size < MEMREMAP_COMPAT_ALIGN_MAX)
++ align = PAGE_SIZE;
++
+ mappings = max_t(u16, 1, nd_region->ndr_mappings);
+ div_u64_rem(align, mappings, &remainder);
+ if (remainder)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index fd4720d37cc0b..6215d50ed3e7d 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1683,13 +1683,6 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
+ blk_queue_max_write_zeroes_sectors(queue, UINT_MAX);
+ }
+
+-static bool nvme_ns_ids_valid(struct nvme_ns_ids *ids)
+-{
+- return !uuid_is_null(&ids->uuid) ||
+- memchr_inv(ids->nguid, 0, sizeof(ids->nguid)) ||
+- memchr_inv(ids->eui64, 0, sizeof(ids->eui64));
+-}
+-
+ static bool nvme_ns_ids_equal(struct nvme_ns_ids *a, struct nvme_ns_ids *b)
+ {
+ return uuid_equal(&a->uuid, &b->uuid) &&
+@@ -1864,9 +1857,6 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ nvme_config_discard(disk, ns);
+ blk_queue_max_write_zeroes_sectors(disk->queue,
+ ns->ctrl->max_zeroes_sectors);
+-
+- set_disk_ro(disk, (id->nsattr & NVME_NS_ATTR_RO) ||
+- test_bit(NVME_NS_FORCE_RO, &ns->flags));
+ }
+
+ static inline bool nvme_first_scan(struct gendisk *disk)
+@@ -1925,6 +1915,8 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ goto out_unfreeze;
+ }
+
++ set_disk_ro(ns->disk, (id->nsattr & NVME_NS_ATTR_RO) ||
++ test_bit(NVME_NS_FORCE_RO, &ns->flags));
+ set_bit(NVME_NS_READY, &ns->flags);
+ blk_mq_unfreeze_queue(ns->disk->queue);
+
+@@ -1937,6 +1929,9 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id)
+ if (nvme_ns_head_multipath(ns->head)) {
+ blk_mq_freeze_queue(ns->head->disk->queue);
+ nvme_update_disk_info(ns->head->disk, ns, id);
++ set_disk_ro(ns->head->disk,
++ (id->nsattr & NVME_NS_ATTR_RO) ||
++ test_bit(NVME_NS_FORCE_RO, &ns->flags));
+ nvme_mpath_revalidate_paths(ns);
+ blk_stack_limits(&ns->head->disk->queue->limits,
+ &ns->queue->limits, 0);
+@@ -3581,15 +3576,20 @@ static const struct attribute_group *nvme_dev_attr_groups[] = {
+ NULL,
+ };
+
+-static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
++static struct nvme_ns_head *nvme_find_ns_head(struct nvme_ctrl *ctrl,
+ unsigned nsid)
+ {
+ struct nvme_ns_head *h;
+
+- lockdep_assert_held(&subsys->lock);
++ lockdep_assert_held(&ctrl->subsys->lock);
+
+- list_for_each_entry(h, &subsys->nsheads, entry) {
+- if (h->ns_id != nsid)
++ list_for_each_entry(h, &ctrl->subsys->nsheads, entry) {
++ /*
++ * Private namespaces can share NSIDs under some conditions.
++ * In that case we can't use the same ns_head for namespaces
++ * with the same NSID.
++ */
++ if (h->ns_id != nsid || !nvme_is_unique_nsid(ctrl, h))
+ continue;
+ if (!list_empty(&h->list) && nvme_tryget_ns_head(h))
+ return h;
+@@ -3598,16 +3598,24 @@ static struct nvme_ns_head *nvme_find_ns_head(struct nvme_subsystem *subsys,
+ return NULL;
+ }
+
+-static int __nvme_check_ids(struct nvme_subsystem *subsys,
+- struct nvme_ns_head *new)
++static int nvme_subsys_check_duplicate_ids(struct nvme_subsystem *subsys,
++ struct nvme_ns_ids *ids)
+ {
++ bool has_uuid = !uuid_is_null(&ids->uuid);
++ bool has_nguid = memchr_inv(ids->nguid, 0, sizeof(ids->nguid));
++ bool has_eui64 = memchr_inv(ids->eui64, 0, sizeof(ids->eui64));
+ struct nvme_ns_head *h;
+
+ lockdep_assert_held(&subsys->lock);
+
+ list_for_each_entry(h, &subsys->nsheads, entry) {
+- if (nvme_ns_ids_valid(&new->ids) &&
+- nvme_ns_ids_equal(&new->ids, &h->ids))
++ if (has_uuid && uuid_equal(&ids->uuid, &h->ids.uuid))
++ return -EINVAL;
++ if (has_nguid &&
++ memcmp(&ids->nguid, &h->ids.nguid, sizeof(ids->nguid)) == 0)
++ return -EINVAL;
++ if (has_eui64 &&
++ memcmp(&ids->eui64, &h->ids.eui64, sizeof(ids->eui64)) == 0)
+ return -EINVAL;
+ }
+
+@@ -3706,7 +3714,7 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
+ head->ids = *ids;
+ kref_init(&head->ref);
+
+- ret = __nvme_check_ids(ctrl->subsys, head);
++ ret = nvme_subsys_check_duplicate_ids(ctrl->subsys, &head->ids);
+ if (ret) {
+ dev_err(ctrl->device,
+ "duplicate IDs for nsid %d\n", nsid);
+@@ -3749,7 +3757,7 @@ static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
+ int ret = 0;
+
+ mutex_lock(&ctrl->subsys->lock);
+- head = nvme_find_ns_head(ctrl->subsys, nsid);
++ head = nvme_find_ns_head(ctrl, nsid);
+ if (!head) {
+ head = nvme_alloc_ns_head(ctrl, nsid, ids);
+ if (IS_ERR(head)) {
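
The nvme_subsys_check_duplicate_ids() rework above compares each identifier type only when it is actually present, so an absent (all-zero) NGUID or EUI64 no longer "matches" another namespace that also lacks one. A userspace sketch of the same per-identifier logic; struct ns_ids, is_set() and ids_conflict() are illustrative stand-ins (the kernel uses memchr_inv() for the presence test).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct ns_ids {
	uint8_t uuid[16];
	uint8_t nguid[16];
	uint8_t eui64[8];
};

/* An all-zero identifier is "not reported" and must never match. */
static bool is_set(const uint8_t *id, size_t len)
{
	static const uint8_t zero[16];

	return memcmp(id, zero, len) != 0;
}

static bool ids_conflict(const struct ns_ids *a, const struct ns_ids *b)
{
	if (is_set(a->uuid, sizeof(a->uuid)) &&
	    !memcmp(a->uuid, b->uuid, sizeof(a->uuid)))
		return true;
	if (is_set(a->nguid, sizeof(a->nguid)) &&
	    !memcmp(a->nguid, b->nguid, sizeof(a->nguid)))
		return true;
	if (is_set(a->eui64, sizeof(a->eui64)) &&
	    !memcmp(a->eui64, b->eui64, sizeof(a->eui64)))
		return true;
	return false;
}

int main(void)
{
	struct ns_ids a = { .eui64 = { 1 } }, b = { .eui64 = { 1 } };
	struct ns_ids c = { 0 }, d = { 0 };	/* nothing reported */

	/* prints "1 0": real duplicates collide, absent IDs do not */
	printf("%d %d\n", ids_conflict(&a, &b), ids_conflict(&c, &d));
	return 0;
}
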
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index ff775235534cf..a703f1f5fb64c 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -504,10 +504,11 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
+
+ /*
+ * Add a multipath node if the subsystem supports multiple controllers.
+- * We also do this for private namespaces as the namespace sharing data could
+- * change after a rescan.
++ * We also do this for private namespaces as the namespace sharing flag
++ * could change after a rescan.
+ */
+- if (!(ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) || !multipath)
++ if (!(ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) ||
++ !nvme_is_unique_nsid(ctrl, head) || !multipath)
+ return 0;
+
+ head->disk = blk_alloc_disk(ctrl->numa_node);
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index a162f6c6da6e1..730cc80d84ff7 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -716,6 +716,25 @@ static inline bool nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+ return queue_live;
+ return __nvme_check_ready(ctrl, rq, queue_live);
+ }
++
++/*
++ * The NSID shall be unique for all shared namespaces, or if at least one of
++ * the following conditions is met:
++ * 1. Namespace Management is supported by the controller
++ * 2. ANA is supported by the controller
++ * 3. NVM Sets are supported by the controller
++ *
++ * Otherwise, private namespaces are not required to report a unique NSID.
++ */
++static inline bool nvme_is_unique_nsid(struct nvme_ctrl *ctrl,
++ struct nvme_ns_head *head)
++{
++ return head->shared ||
++ (ctrl->oacs & NVME_CTRL_OACS_NS_MNGT_SUPP) ||
++ (ctrl->subsys->cmic & NVME_CTRL_CMIC_ANA) ||
++ (ctrl->ctratt & NVME_CTRL_CTRATT_NVM_SETS);
++}
++
+ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
+ void *buf, unsigned bufflen);
+ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 65e00c64a588b..d66e2de044e0a 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -30,6 +30,44 @@ static int so_priority;
+ module_param(so_priority, int, 0644);
+ MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
+
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++/* lockdep can detect a circular dependency of the form
++ * sk_lock -> mmap_lock (page fault) -> fs locks -> sk_lock
++ * because dependencies are tracked for both nvme-tcp and user contexts. Using
++ * a separate class prevents lockdep from conflating nvme-tcp socket use with
++ * user-space socket API use.
++ */
++static struct lock_class_key nvme_tcp_sk_key[2];
++static struct lock_class_key nvme_tcp_slock_key[2];
++
++static void nvme_tcp_reclassify_socket(struct socket *sock)
++{
++ struct sock *sk = sock->sk;
++
++ if (WARN_ON_ONCE(!sock_allow_reclassification(sk)))
++ return;
++
++ switch (sk->sk_family) {
++ case AF_INET:
++ sock_lock_init_class_and_name(sk, "slock-AF_INET-NVME",
++ &nvme_tcp_slock_key[0],
++ "sk_lock-AF_INET-NVME",
++ &nvme_tcp_sk_key[0]);
++ break;
++ case AF_INET6:
++ sock_lock_init_class_and_name(sk, "slock-AF_INET6-NVME",
++ &nvme_tcp_slock_key[1],
++ "sk_lock-AF_INET6-NVME",
++ &nvme_tcp_sk_key[1]);
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ }
++}
++#else
++static void nvme_tcp_reclassify_socket(struct socket *sock) { }
++#endif
++
+ enum nvme_tcp_send_state {
+ NVME_TCP_SEND_CMD_PDU = 0,
+ NVME_TCP_SEND_H2C_PDU,
+@@ -1469,6 +1507,8 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl,
+ goto err_destroy_mutex;
+ }
+
++ nvme_tcp_reclassify_socket(queue->sock);
++
+ /* Single syn retry */
+ tcp_sock_set_syncnt(queue->sock->sk, 1);
+
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index 0d9f6b21babb1..708c7529647fd 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -159,9 +159,12 @@ int pci_generic_config_write32(struct pci_bus *bus, unsigned int devfn,
+ * write happen to have any RW1C (write-one-to-clear) bits set, we
+ * just inadvertently cleared something we shouldn't have.
+ */
+- dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
+- size, pci_domain_nr(bus), bus->number,
+- PCI_SLOT(devfn), PCI_FUNC(devfn), where);
++ if (!bus->unsafe_warn) {
++ dev_warn(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
++ size, pci_domain_nr(bus), bus->number,
++ PCI_SLOT(devfn), PCI_FUNC(devfn), where);
++ bus->unsafe_warn = 1;
++ }
+
+ mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
+ tmp = readl(addr) & mask;
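
The mask computed in this accessor is the crux of the warning text: a sub-dword config write is emulated by a read-modify-write of the whole dword, so any RW1C bit that happens to read back as 1 gets written back as 1 and is cleared. A standalone sketch of that merge (sizes 1 and 2 only, as in the driver path):

#include <stdint.h>
#include <stdio.h>

/* Merge a 1- or 2-byte write at byte offset (where & 3) into the
 * containing dword, as a 32-bit-only config accessor must. Bits outside
 * the mask are re-written with their read-back value, which is exactly
 * how adjacent RW1C bits get inadvertently cleared. */
static uint32_t merge_config_write(uint32_t dword, unsigned int where,
				   unsigned int size, uint32_t val)
{
	uint32_t mask = ~(((1u << (size * 8)) - 1) << ((where & 0x3) * 8));

	return (dword & mask) | (val << ((where & 0x3) * 8));
}

int main(void)
{
	/* write 0x12 to byte 1 of a dword whose RW1C bit 31 reads as 1 */
	printf("0x%08x\n", merge_config_write(0x8000beef, 1, 1, 0x12));
	return 0;	/* bit 31 is written back as 1 -> cleared in HW */
}
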
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 6974bd5aa1165..343fe1429e3c2 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -453,10 +453,6 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
+ case IMX7D:
+ break;
+ case IMX8MM:
+- ret = clk_prepare_enable(imx6_pcie->pcie_aux);
+- if (ret)
+- dev_err(dev, "unable to enable pcie_aux clock\n");
+- break;
+ case IMX8MQ:
+ ret = clk_prepare_enable(imx6_pcie->pcie_aux);
+ if (ret) {
+@@ -809,9 +805,7 @@ static int imx6_pcie_start_link(struct dw_pcie *pci)
+ /* Start LTSSM. */
+ imx6_pcie_ltssm_enable(dev);
+
+- ret = dw_pcie_wait_for_link(pci);
+- if (ret)
+- goto err_reset_phy;
++ dw_pcie_wait_for_link(pci);
+
+ if (pci->link_gen == 2) {
+ /* Allow Gen2 mode after the link is up. */
+@@ -847,11 +841,7 @@ static int imx6_pcie_start_link(struct dw_pcie *pci)
+ }
+
+ /* Make sure link training is finished as well! */
+- ret = dw_pcie_wait_for_link(pci);
+- if (ret) {
+- dev_err(dev, "Failed to bring link up!\n");
+- goto err_reset_phy;
+- }
++ dw_pcie_wait_for_link(pci);
+ } else {
+ dev_info(dev, "Link: Gen2 disabled\n");
+ }
+@@ -983,6 +973,7 @@ static int imx6_pcie_suspend_noirq(struct device *dev)
+ case IMX8MM:
+ if (phy_power_off(imx6_pcie->phy))
+ dev_err(dev, "unable to power off PHY\n");
++ phy_exit(imx6_pcie->phy);
+ break;
+ default:
+ break;
+diff --git a/drivers/pci/controller/dwc/pcie-fu740.c b/drivers/pci/controller/dwc/pcie-fu740.c
+index 00cde9a248b5a..78d002be4f821 100644
+--- a/drivers/pci/controller/dwc/pcie-fu740.c
++++ b/drivers/pci/controller/dwc/pcie-fu740.c
+@@ -181,10 +181,59 @@ static int fu740_pcie_start_link(struct dw_pcie *pci)
+ {
+ struct device *dev = pci->dev;
+ struct fu740_pcie *afp = dev_get_drvdata(dev);
++ u8 cap_exp = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
++ int ret;
++ u32 orig, tmp;
++
++ /*
++ * Force 2.5GT/s when starting the link, due to some devices not
++ * probing at higher speeds. This happens with the PCIe switch
++ * on the Unmatched board when U-Boot has not initialised the PCIe controller.
++ * The fix in U-Boot is to force 2.5GT/s, which then gets cleared
++ * by the soft reset done by this driver.
++ */
++ dev_dbg(dev, "cap_exp at %x\n", cap_exp);
++ dw_pcie_dbi_ro_wr_en(pci);
++
++ tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
++ orig = tmp & PCI_EXP_LNKCAP_SLS;
++ tmp &= ~PCI_EXP_LNKCAP_SLS;
++ tmp |= PCI_EXP_LNKCAP_SLS_2_5GB;
++ dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);
+
+ /* Enable LTSSM */
+ writel_relaxed(0x1, afp->mgmt_base + PCIEX8MGMT_APP_LTSSM_ENABLE);
+- return 0;
++
++ ret = dw_pcie_wait_for_link(pci);
++ if (ret) {
++ dev_err(dev, "error: link did not start\n");
++ goto err;
++ }
++
++ tmp = dw_pcie_readl_dbi(pci, cap_exp + PCI_EXP_LNKCAP);
++ if ((tmp & PCI_EXP_LNKCAP_SLS) != orig) {
++ dev_dbg(dev, "changing speed back to original\n");
++
++ tmp &= ~PCI_EXP_LNKCAP_SLS;
++ tmp |= orig;
++ dw_pcie_writel_dbi(pci, cap_exp + PCI_EXP_LNKCAP, tmp);
++
++ tmp = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
++ tmp |= PORT_LOGIC_SPEED_CHANGE;
++ dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp);
++
++ ret = dw_pcie_wait_for_link(pci);
++ if (ret) {
++ dev_err(dev, "error: link did not start at new speed\n");
++ goto err;
++ }
++ }
++
++ ret = 0;
++err:
++ WARN_ON(ret); /* we assume that errors will be very rare */
++ dw_pcie_dbi_ro_wr_dis(pci);
++ return ret;
+ }
+
+ static int fu740_pcie_host_init(struct pcie_port *pp)
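
The forced-speed dance above is a clear-and-restore of the Supported Link Speeds field in the Link Capabilities register. A minimal sketch of that field update, using the standard pci_regs.h bit definitions; the sample lnkcap value is made up.

#include <stdint.h>
#include <stdio.h>

#define PCI_EXP_LNKCAP_SLS	 0x0000000f /* Supported Link Speeds */
#define PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001

static uint32_t set_sls(uint32_t lnkcap, uint32_t sls)
{
	return (lnkcap & ~PCI_EXP_LNKCAP_SLS) | sls;
}

int main(void)
{
	uint32_t lnkcap = 0x00477c03;	/* hypothetical capability value */
	uint32_t orig = lnkcap & PCI_EXP_LNKCAP_SLS;

	/* force 2.5GT/s for the initial probe, then put the field back */
	uint32_t forced = set_sls(lnkcap, PCI_EXP_LNKCAP_SLS_2_5GB);

	printf("forced=0x%08x restored=0x%08x\n",
	       forced, set_sls(forced, orig));
	return 0;
}
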
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 4f5b44827d213..82e2c618d532d 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -846,7 +846,9 @@ advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge,
+ case PCI_EXP_RTSTA: {
+ u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG);
+ u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG);
+- *value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16);
++ *value = msglog >> 16;
++ if (isr0 & PCIE_MSG_PM_PME_MASK)
++ *value |= PCI_EXP_RTSTA_PME;
+ return PCI_BRIDGE_EMUL_HANDLED;
+ }
+
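
The PCI_EXP_RTSTA fix above matters because the controller's internal PME flag does not live at the architected bit position: shifting the masked ISR0 value left by 16 lands on the wrong bit. A sketch, assuming PCIE_MSG_PM_PME_MASK is BIT(7) as in the mainline driver:

#include <stdint.h>
#include <stdio.h>

#define BIT(n)			(1u << (n))
#define PCIE_MSG_PM_PME_MASK	BIT(7)	 /* controller's internal PME flag */
#define PCI_EXP_RTSTA_PME	0x10000	 /* architected Root Status PME bit */

int main(void)
{
	uint32_t isr0 = PCIE_MSG_PM_PME_MASK;
	uint32_t value = 0;

	/* pre-patch: (isr0 & mask) << 16 sets bit 23, not bit 16 */
	printf("old=0x%08x\n", (isr0 & PCIE_MSG_PM_PME_MASK) << 16);

	/* post-patch: test the internal flag, set the architected bit */
	if (isr0 & PCIE_MSG_PM_PME_MASK)
		value |= PCI_EXP_RTSTA_PME;
	printf("new=0x%08x\n", value);
	return 0;
}
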
+@@ -1388,7 +1390,6 @@ static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie)
+ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ {
+ u32 msi_val, msi_mask, msi_status, msi_idx;
+- u16 msi_data;
+
+ msi_mask = advk_readl(pcie, PCIE_MSI_MASK_REG);
+ msi_val = advk_readl(pcie, PCIE_MSI_STATUS_REG);
+@@ -1398,13 +1399,9 @@ static void advk_pcie_handle_msi(struct advk_pcie *pcie)
+ if (!(BIT(msi_idx) & msi_status))
+ continue;
+
+- /*
+- * msi_idx contains bits [4:0] of the msi_data and msi_data
+- * contains 16bit MSI interrupt number
+- */
+ advk_writel(pcie, BIT(msi_idx), PCIE_MSI_STATUS_REG);
+- msi_data = advk_readl(pcie, PCIE_MSI_PAYLOAD_REG) & PCIE_MSI_DATA_MASK;
+- generic_handle_irq(msi_data);
++ if (generic_handle_domain_irq(pcie->msi_inner_domain, msi_idx) == -EINVAL)
++ dev_err_ratelimited(&pcie->pdev->dev, "unexpected MSI 0x%02x\n", msi_idx);
+ }
+
+ advk_writel(pcie, PCIE_ISR0_MSI_INT_PENDING,
+diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
+index 0d5acbfc7143f..7c763d820c52c 100644
+--- a/drivers/pci/controller/pci-xgene.c
++++ b/drivers/pci/controller/pci-xgene.c
+@@ -465,7 +465,7 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ return 1;
+ }
+
+- if ((size > SZ_1K) && (size < SZ_4G) && !(*ib_reg_mask & (1 << 0))) {
++ if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) {
+ *ib_reg_mask |= (1 << 0);
+ return 0;
+ }
+@@ -479,28 +479,27 @@ static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size)
+ }
+
+ static void xgene_pcie_setup_ib_reg(struct xgene_pcie *port,
+- struct resource_entry *entry,
+- u8 *ib_reg_mask)
++ struct of_pci_range *range, u8 *ib_reg_mask)
+ {
+ void __iomem *cfg_base = port->cfg_base;
+ struct device *dev = port->dev;
+ void __iomem *bar_addr;
+ u32 pim_reg;
+- u64 cpu_addr = entry->res->start;
+- u64 pci_addr = cpu_addr - entry->offset;
+- u64 size = resource_size(entry->res);
++ u64 cpu_addr = range->cpu_addr;
++ u64 pci_addr = range->pci_addr;
++ u64 size = range->size;
+ u64 mask = ~(size - 1) | EN_REG;
+ u32 flags = PCI_BASE_ADDRESS_MEM_TYPE_64;
+ u32 bar_low;
+ int region;
+
+- region = xgene_pcie_select_ib_reg(ib_reg_mask, size);
++ region = xgene_pcie_select_ib_reg(ib_reg_mask, range->size);
+ if (region < 0) {
+ dev_warn(dev, "invalid pcie dma-range config\n");
+ return;
+ }
+
+- if (entry->res->flags & IORESOURCE_PREFETCH)
++ if (range->flags & IORESOURCE_PREFETCH)
+ flags |= PCI_BASE_ADDRESS_MEM_PREFETCH;
+
+ bar_low = pcie_bar_low_val((u32)cpu_addr, flags);
+@@ -531,13 +530,25 @@ static void xgene_pcie_setup_ib_reg(struct xgene_pcie *port,
+
+ static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie *port)
+ {
+- struct pci_host_bridge *bridge = pci_host_bridge_from_priv(port);
+- struct resource_entry *entry;
++ struct device_node *np = port->node;
++ struct of_pci_range range;
++ struct of_pci_range_parser parser;
++ struct device *dev = port->dev;
+ u8 ib_reg_mask = 0;
+
+- resource_list_for_each_entry(entry, &bridge->dma_ranges)
+- xgene_pcie_setup_ib_reg(port, entry, &ib_reg_mask);
++ if (of_pci_dma_range_parser_init(&parser, np)) {
++ dev_err(dev, "missing dma-ranges property\n");
++ return -EINVAL;
++ }
++
++ /* Get the dma-ranges from DT */
++ for_each_of_pci_range(&parser, &range) {
++ u64 end = range.cpu_addr + range.size - 1;
+
++ dev_dbg(dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n",
++ range.flags, range.cpu_addr, end, range.pci_addr);
++ xgene_pcie_setup_ib_reg(port, &range, &ib_reg_mask);
++ }
+ return 0;
+ }
+
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 1c1ebf3dad43c..85dce560831a8 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -98,6 +98,8 @@ static int pcie_poll_cmd(struct controller *ctrl, int timeout)
+ if (slot_status & PCI_EXP_SLTSTA_CC) {
+ pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
+ PCI_EXP_SLTSTA_CC);
++ ctrl->cmd_busy = 0;
++ smp_mb();
+ return 1;
+ }
+ msleep(10);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 65f7f6b0576c6..da829274fc66d 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1811,6 +1811,18 @@ static void quirk_alder_ioapic(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EESSC, quirk_alder_ioapic);
+ #endif
+
++static void quirk_no_msi(struct pci_dev *dev)
++{
++ pci_info(dev, "avoiding MSI to work around a hardware defect\n");
++ dev->no_msi = 1;
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4386, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4387, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4388, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x4389, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438a, quirk_no_msi);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x438b, quirk_no_msi);
++
+ static void quirk_pcie_mch(struct pci_dev *pdev)
+ {
+ pdev->no_msi = 1;
+diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
+index e1a0c44bc6864..7d6ffdf44a415 100644
+--- a/drivers/perf/Kconfig
++++ b/drivers/perf/Kconfig
+@@ -141,7 +141,7 @@ config ARM_DMC620_PMU
+
+ config MARVELL_CN10K_TAD_PMU
+ tristate "Marvell CN10K LLC-TAD PMU"
+- depends on ARM64 || (COMPILE_TEST && 64BIT)
++ depends on ARCH_THUNDER || (COMPILE_TEST && 64BIT)
+ help
+ Provides support for Last-Level cache Tag-and-data Units (LLC-TAD)
+ performance monitors on CN10K family silicons.
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 0e48adce57ef3..71448229bc5e9 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -71,9 +71,11 @@
+ #define CMN_DTM_WPn(n) (0x1A0 + (n) * 0x18)
+ #define CMN_DTM_WPn_CONFIG(n) (CMN_DTM_WPn(n) + 0x00)
+ #define CMN_DTM_WPn_CONFIG_WP_DEV_SEL2 GENMASK_ULL(18,17)
+-#define CMN_DTM_WPn_CONFIG_WP_COMBINE BIT(6)
+-#define CMN_DTM_WPn_CONFIG_WP_EXCLUSIVE BIT(5)
+-#define CMN_DTM_WPn_CONFIG_WP_GRP BIT(4)
++#define CMN_DTM_WPn_CONFIG_WP_COMBINE BIT(9)
++#define CMN_DTM_WPn_CONFIG_WP_EXCLUSIVE BIT(8)
++#define CMN600_WPn_CONFIG_WP_COMBINE BIT(6)
++#define CMN600_WPn_CONFIG_WP_EXCLUSIVE BIT(5)
++#define CMN_DTM_WPn_CONFIG_WP_GRP GENMASK_ULL(5, 4)
+ #define CMN_DTM_WPn_CONFIG_WP_CHN_SEL GENMASK_ULL(3, 1)
+ #define CMN_DTM_WPn_CONFIG_WP_DEV_SEL BIT(0)
+ #define CMN_DTM_WPn_VAL(n) (CMN_DTM_WPn(n) + 0x08)
+@@ -155,6 +157,7 @@
+ #define CMN_CONFIG_WP_COMBINE GENMASK_ULL(27, 24)
+ #define CMN_CONFIG_WP_DEV_SEL GENMASK_ULL(50, 48)
+ #define CMN_CONFIG_WP_CHN_SEL GENMASK_ULL(55, 51)
++/* Note that we don't yet support the tertiary match group on newer IPs */
+ #define CMN_CONFIG_WP_GRP BIT_ULL(56)
+ #define CMN_CONFIG_WP_EXCLUSIVE BIT_ULL(57)
+ #define CMN_CONFIG1_WP_VAL GENMASK_ULL(63, 0)
+@@ -595,6 +598,9 @@ static umode_t arm_cmn_event_attr_is_visible(struct kobject *kobj,
+ if ((intf & 4) && !(cmn->ports_used & BIT(intf & 3)))
+ return 0;
+
++ if (chan == 4 && cmn->model == CMN600)
++ return 0;
++
+ if ((chan == 5 && cmn->rsp_vc_num < 2) ||
+ (chan == 6 && cmn->dat_vc_num < 2))
+ return 0;
+@@ -905,15 +911,18 @@ static u32 arm_cmn_wp_config(struct perf_event *event)
+ u32 grp = CMN_EVENT_WP_GRP(event);
+ u32 exc = CMN_EVENT_WP_EXCLUSIVE(event);
+ u32 combine = CMN_EVENT_WP_COMBINE(event);
++ bool is_cmn600 = to_cmn(event->pmu)->model == CMN600;
+
+ config = FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_DEV_SEL, dev) |
+ FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_CHN_SEL, chn) |
+ FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_GRP, grp) |
+- FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_EXCLUSIVE, exc) |
+ FIELD_PREP(CMN_DTM_WPn_CONFIG_WP_DEV_SEL2, dev >> 1);
++ if (exc)
++ config |= is_cmn600 ? CMN600_WPn_CONFIG_WP_EXCLUSIVE :
++ CMN_DTM_WPn_CONFIG_WP_EXCLUSIVE;
+ if (combine && !grp)
+- config |= CMN_DTM_WPn_CONFIG_WP_COMBINE;
+-
++ config |= is_cmn600 ? CMN600_WPn_CONFIG_WP_COMBINE :
++ CMN_DTM_WPn_CONFIG_WP_COMBINE;
+ return config;
+ }
+
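
arm_cmn_wp_config() above builds the watchpoint register with FIELD_PREP(), which shifts a value into the position of a contiguous bitmask, and the point of the patch is that those positions moved on post-CMN-600 IPs. A userspace reimplementation of the packing for illustration; GENMASK_ULL() and field_prep() here are stand-ins for the kernel macros.

#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l) \
	(((~0ull) >> (63 - (h))) & ((~0ull) << (l)))

/* shift val into the field described by a contiguous mask */
static uint64_t field_prep(uint64_t mask, uint64_t val)
{
	return (val << __builtin_ctzll(mask)) & mask;
}

int main(void)
{
	uint64_t wp_grp_new = GENMASK_ULL(5, 4);	/* two bits on newer IPs */
	uint64_t wp_grp_600 = 1ull << 4;		/* one bit on CMN-600 */

	printf("new=0x%llx cmn600=0x%llx\n",
	       (unsigned long long)field_prep(wp_grp_new, 2),
	       (unsigned long long)field_prep(wp_grp_600, 1));
	return 0;
}
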
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init.c b/drivers/phy/broadcom/phy-brcm-usb-init.c
+index 9391ab42a12b3..dd0f66288fbdd 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init.c
++++ b/drivers/phy/broadcom/phy-brcm-usb-init.c
+@@ -79,6 +79,7 @@
+
+ enum brcm_family_type {
+ BRCM_FAMILY_3390A0,
++ BRCM_FAMILY_4908,
+ BRCM_FAMILY_7250B0,
+ BRCM_FAMILY_7271A0,
+ BRCM_FAMILY_7364A0,
+@@ -96,6 +97,7 @@ enum brcm_family_type {
+
+ static const char *family_names[BRCM_FAMILY_COUNT] = {
+ USB_BRCM_FAMILY(3390A0),
++ USB_BRCM_FAMILY(4908),
+ USB_BRCM_FAMILY(7250B0),
+ USB_BRCM_FAMILY(7271A0),
+ USB_BRCM_FAMILY(7364A0),
+@@ -203,6 +205,27 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ USB_CTRL_USB_PM_USB20_HC_RESETB_VAR_MASK,
+ ENDIAN_SETTINGS, /* USB_CTRL_SETUP ENDIAN bits */
+ },
++ /* 4908 */
++ [BRCM_FAMILY_4908] = {
++ 0, /* USB_CTRL_SETUP_SCB1_EN_MASK */
++ 0, /* USB_CTRL_SETUP_SCB2_EN_MASK */
++ 0, /* USB_CTRL_SETUP_SS_EHCI64BIT_EN_MASK */
++ 0, /* USB_CTRL_SETUP_STRAP_IPP_SEL_MASK */
++ 0, /* USB_CTRL_SETUP_OC3_DISABLE_MASK */
++ 0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
++ 0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
++ USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK,
++ USB_CTRL_USB_PM_USB_PWRDN_MASK,
++ 0, /* USB_CTRL_USB30_CTL1_XHC_SOFT_RESETB_MASK */
++ 0, /* USB_CTRL_USB30_CTL1_USB3_IOC_MASK */
++ 0, /* USB_CTRL_USB30_CTL1_USB3_IPP_MASK */
++ 0, /* USB_CTRL_USB_DEVICE_CTL1_PORT_MODE_MASK */
++ 0, /* USB_CTRL_USB_PM_SOFT_RESET_MASK */
++ 0, /* USB_CTRL_SETUP_CC_DRD_MODE_ENABLE_MASK */
++ 0, /* USB_CTRL_SETUP_STRAP_CC_DRD_MODE_ENABLE_SEL_MASK */
++ 0, /* USB_CTRL_USB_PM_USB20_HC_RESETB_VAR_MASK */
++ 0, /* USB_CTRL_SETUP ENDIAN bits */
++ },
+ /* 7250b0 */
+ [BRCM_FAMILY_7250B0] = {
+ USB_CTRL_SETUP_SCB1_EN_MASK,
+@@ -559,6 +582,7 @@ static void brcmusb_usb3_pll_54mhz(struct brcm_usb_init_params *params)
+ */
+ switch (params->selected_family) {
+ case BRCM_FAMILY_3390A0:
++ case BRCM_FAMILY_4908:
+ case BRCM_FAMILY_7250B0:
+ case BRCM_FAMILY_7366C0:
+ case BRCM_FAMILY_74371A0:
+@@ -1004,6 +1028,18 @@ static const struct brcm_usb_init_ops bcm7445_ops = {
+ .set_dual_select = usb_set_dual_select,
+ };
+
++void brcm_usb_dvr_init_4908(struct brcm_usb_init_params *params)
++{
++ int fam;
++
++ fam = BRCM_FAMILY_4908;
++ params->selected_family = fam;
++ params->usb_reg_bits_map =
++ &usb_reg_bits_map_table[fam][0];
++ params->family_name = family_names[fam];
++ params->ops = &bcm7445_ops;
++}
++
+ void brcm_usb_dvr_init_7445(struct brcm_usb_init_params *params)
+ {
+ int fam;
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init.h b/drivers/phy/broadcom/phy-brcm-usb-init.h
+index a39f30fa2e991..1ccb5ddab865c 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init.h
++++ b/drivers/phy/broadcom/phy-brcm-usb-init.h
+@@ -64,6 +64,7 @@ struct brcm_usb_init_params {
+ bool suspend_with_clocks;
+ };
+
++void brcm_usb_dvr_init_4908(struct brcm_usb_init_params *params);
+ void brcm_usb_dvr_init_7445(struct brcm_usb_init_params *params);
+ void brcm_usb_dvr_init_7216(struct brcm_usb_init_params *params);
+ void brcm_usb_dvr_init_7211b0(struct brcm_usb_init_params *params);
+diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c
+index 0f1deb6e0eabf..2cb3779fcdf82 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -283,6 +283,15 @@ static const struct attribute_group brcm_usb_phy_group = {
+ .attrs = brcm_usb_phy_attrs,
+ };
+
++static const struct match_chip_info chip_info_4908 = {
++ .init_func = &brcm_usb_dvr_init_4908,
++ .required_regs = {
++ BRCM_REGS_CTRL,
++ BRCM_REGS_XHCI_EC,
++ -1,
++ },
++};
++
+ static const struct match_chip_info chip_info_7216 = {
+ .init_func = &brcm_usb_dvr_init_7216,
+ .required_regs = {
+@@ -318,7 +327,7 @@ static const struct match_chip_info chip_info_7445 = {
+ static const struct of_device_id brcm_usb_dt_ids[] = {
+ {
+ .compatible = "brcm,bcm4908-usb-phy",
+- .data = &chip_info_7445,
++ .data = &chip_info_4908,
+ },
+ {
+ .compatible = "brcm,bcm7216-usb-phy",
+diff --git a/drivers/phy/phy-core-mipi-dphy.c b/drivers/phy/phy-core-mipi-dphy.c
+index ccb4045685cdd..929e86d6558e0 100644
+--- a/drivers/phy/phy-core-mipi-dphy.c
++++ b/drivers/phy/phy-core-mipi-dphy.c
+@@ -64,10 +64,10 @@ int phy_mipi_dphy_get_default_config(unsigned long pixel_clock,
+ cfg->hs_trail = max(4 * 8 * ui, 60000 + 4 * 4 * ui);
+
+ cfg->init = 100;
+- cfg->lpx = 60000;
++ cfg->lpx = 50000;
+ cfg->ta_get = 5 * cfg->lpx;
+ cfg->ta_go = 4 * cfg->lpx;
+- cfg->ta_sure = 2 * cfg->lpx;
++ cfg->ta_sure = cfg->lpx;
+ cfg->wakeup = 1000;
+
+ cfg->hs_clk_rate = hs_clk_rate;
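
The timing change above drops TLPX from 60 ns to the D-PHY minimum of 50 ns and TA_SURE from 2 x TLPX to 1 x TLPX (the spec allows 1 to 2 x TLPX). Since the turnaround times are all derived from TLPX, the knock-on values follow directly; a quick check, with values in picoseconds to match the driver's units:

#include <stdio.h>

int main(void)
{
	unsigned long lpx = 50000;	/* TLPX = 50 ns, in ps */

	printf("ta_get=%lu ta_go=%lu ta_sure=%lu (ps)\n",
	       5 * lpx, 4 * lpx, 1 * lpx);
	return 0;
}
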
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+index 5f7c421ab6e76..334cb85855a93 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+@@ -1038,6 +1038,7 @@ int mtk_pctrl_init(struct platform_device *pdev,
+ node = of_parse_phandle(np, "mediatek,pctl-regmap", 0);
+ if (node) {
+ pctl->regmap1 = syscon_node_to_regmap(node);
++ of_node_put(node);
+ if (IS_ERR(pctl->regmap1))
+ return PTR_ERR(pctl->regmap1);
+ } else if (regmap) {
+@@ -1051,6 +1052,7 @@ int mtk_pctrl_init(struct platform_device *pdev,
+ node = of_parse_phandle(np, "mediatek,pctl-regmap", 1);
+ if (node) {
+ pctl->regmap2 = syscon_node_to_regmap(node);
++ of_node_put(node);
+ if (IS_ERR(pctl->regmap2))
+ return PTR_ERR(pctl->regmap2);
+ }
+diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
+index f9f9110f2107d..fe6cf068c4f41 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
+@@ -96,20 +96,16 @@ static int mtk_pinconf_get(struct pinctrl_dev *pctldev,
+ err = hw->soc->bias_get_combo(hw, desc, &pullup, &ret);
+ if (err)
+ goto out;
++ if (ret == MTK_PUPD_SET_R1R0_00)
++ ret = MTK_DISABLE;
+ if (param == PIN_CONFIG_BIAS_DISABLE) {
+- if (ret == MTK_PUPD_SET_R1R0_00)
+- ret = MTK_DISABLE;
++ if (ret != MTK_DISABLE)
++ err = -EINVAL;
+ } else if (param == PIN_CONFIG_BIAS_PULL_UP) {
+- /* When desire to get pull-up value, return
+- * error if current setting is pull-down
+- */
+- if (!pullup)
++ if (!pullup || ret == MTK_DISABLE)
+ err = -EINVAL;
+ } else if (param == PIN_CONFIG_BIAS_PULL_DOWN) {
+- /* When desire to get pull-down value, return
+- * error if current setting is pull-up
+- */
+- if (pullup)
++ if (pullup || ret == MTK_DISABLE)
+ err = -EINVAL;
+ }
+ } else {
+@@ -188,8 +184,7 @@ out:
+ }
+
+ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+- enum pin_config_param param,
+- enum pin_config_param arg)
++ enum pin_config_param param, u32 arg)
+ {
+ struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
+ const struct mtk_pin_desc *desc;
+@@ -586,6 +581,9 @@ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
+ if (gpio >= hw->soc->npins)
+ return -EINVAL;
+
++ if (mtk_is_virt_gpio(hw, gpio))
++ return -EINVAL;
++
+ desc = (const struct mtk_pin_desc *)&hw->soc->pins[gpio];
+ pinmux = mtk_pctrl_get_pinmux(hw, gpio);
+ if (pinmux >= hw->soc->nfuncs)
+@@ -737,10 +735,10 @@ static int mtk_pconf_group_get(struct pinctrl_dev *pctldev, unsigned group,
+ unsigned long *config)
+ {
+ struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
++ struct mtk_pinctrl_group *grp = &hw->groups[group];
+
+- *config = hw->groups[group].config;
+-
+- return 0;
++ /* One pin per group only */
++ return mtk_pinconf_get(pctldev, grp->pin, config);
+ }
+
+ static int mtk_pconf_group_set(struct pinctrl_dev *pctldev, unsigned group,
+@@ -756,8 +754,6 @@ static int mtk_pconf_group_set(struct pinctrl_dev *pctldev, unsigned group,
+ pinconf_to_config_argument(configs[i]));
+ if (ret < 0)
+ return ret;
+-
+- grp->config = configs[i];
+ }
+
+ return 0;
+@@ -988,7 +984,7 @@ int mtk_paris_pinctrl_probe(struct platform_device *pdev,
+ hw->nbase = hw->soc->nbase_names;
+
+ if (of_find_property(hw->dev->of_node,
+- "mediatek,rsel_resistance_in_si_unit", NULL))
++ "mediatek,rsel-resistance-in-si-unit", NULL))
+ hw->rsel_si_unit = true;
+ else
+ hw->rsel_si_unit = false;
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index 39828e9c3120a..4757bf964d3cd 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -1883,8 +1883,10 @@ static int nmk_pinctrl_probe(struct platform_device *pdev)
+ }
+
+ prcm_np = of_parse_phandle(np, "prcm", 0);
+- if (prcm_np)
++ if (prcm_np) {
+ npct->prcm_base = of_iomap(prcm_np, 0);
++ of_node_put(prcm_np);
++ }
+ if (!npct->prcm_base) {
+ if (version == PINCTRL_NMK_STN8815) {
+ dev_info(&pdev->dev,
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+index 4d81908d6725d..41136f63014a4 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+@@ -78,7 +78,6 @@ struct npcm7xx_gpio {
+ struct gpio_chip gc;
+ int irqbase;
+ int irq;
+- void *priv;
+ struct irq_chip irq_chip;
+ u32 pinctrl_id;
+ int (*direction_input)(struct gpio_chip *chip, unsigned offset);
+@@ -226,7 +225,7 @@ static void npcmgpio_irq_handler(struct irq_desc *desc)
+ chained_irq_enter(chip, desc);
+ sts = ioread32(bank->base + NPCM7XX_GP_N_EVST);
+ en = ioread32(bank->base + NPCM7XX_GP_N_EVEN);
+- dev_dbg(chip->parent_device, "==> got irq sts %.8x %.8x\n", sts,
++ dev_dbg(bank->gc.parent, "==> got irq sts %.8x %.8x\n", sts,
+ en);
+
+ sts &= en;
+@@ -241,33 +240,33 @@ static int npcmgpio_set_irq_type(struct irq_data *d, unsigned int type)
+ gpiochip_get_data(irq_data_get_irq_chip_data(d));
+ unsigned int gpio = BIT(d->hwirq);
+
+- dev_dbg(d->chip->parent_device, "setirqtype: %u.%u = %u\n", gpio,
++ dev_dbg(bank->gc.parent, "setirqtype: %u.%u = %u\n", gpio,
+ d->irq, type);
+ switch (type) {
+ case IRQ_TYPE_EDGE_RISING:
+- dev_dbg(d->chip->parent_device, "edge.rising\n");
++ dev_dbg(bank->gc.parent, "edge.rising\n");
+ npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ break;
+ case IRQ_TYPE_EDGE_FALLING:
+- dev_dbg(d->chip->parent_device, "edge.falling\n");
++ dev_dbg(bank->gc.parent, "edge.falling\n");
+ npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ break;
+ case IRQ_TYPE_EDGE_BOTH:
+- dev_dbg(d->chip->parent_device, "edge.both\n");
++ dev_dbg(bank->gc.parent, "edge.both\n");
+ npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_EVBE, gpio);
+ break;
+ case IRQ_TYPE_LEVEL_LOW:
+- dev_dbg(d->chip->parent_device, "level.low\n");
++ dev_dbg(bank->gc.parent, "level.low\n");
+ npcm_gpio_set(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ break;
+ case IRQ_TYPE_LEVEL_HIGH:
+- dev_dbg(d->chip->parent_device, "level.high\n");
++ dev_dbg(bank->gc.parent, "level.high\n");
+ npcm_gpio_clr(&bank->gc, bank->base + NPCM7XX_GP_N_POL, gpio);
+ break;
+ default:
+- dev_dbg(d->chip->parent_device, "invalid irq type\n");
++ dev_dbg(bank->gc.parent, "invalid irq type\n");
+ return -EINVAL;
+ }
+
+@@ -289,7 +288,7 @@ static void npcmgpio_irq_ack(struct irq_data *d)
+ gpiochip_get_data(irq_data_get_irq_chip_data(d));
+ unsigned int gpio = d->hwirq;
+
+- dev_dbg(d->chip->parent_device, "irq_ack: %u.%u\n", gpio, d->irq);
++ dev_dbg(bank->gc.parent, "irq_ack: %u.%u\n", gpio, d->irq);
+ iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVST);
+ }
+
+@@ -301,7 +300,7 @@ static void npcmgpio_irq_mask(struct irq_data *d)
+ unsigned int gpio = d->hwirq;
+
+ /* Clear events */
+- dev_dbg(d->chip->parent_device, "irq_mask: %u.%u\n", gpio, d->irq);
++ dev_dbg(bank->gc.parent, "irq_mask: %u.%u\n", gpio, d->irq);
+ iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVENC);
+ }
+
+@@ -313,7 +312,7 @@ static void npcmgpio_irq_unmask(struct irq_data *d)
+ unsigned int gpio = d->hwirq;
+
+ /* Enable events */
+- dev_dbg(d->chip->parent_device, "irq_unmask: %u.%u\n", gpio, d->irq);
++ dev_dbg(bank->gc.parent, "irq_unmask: %u.%u\n", gpio, d->irq);
+ iowrite32(BIT(gpio), bank->base + NPCM7XX_GP_N_EVENS);
+ }
+
+@@ -323,7 +322,7 @@ static unsigned int npcmgpio_irq_startup(struct irq_data *d)
+ unsigned int gpio = d->hwirq;
+
+ /* active-high, input, clear interrupt, enable interrupt */
+- dev_dbg(d->chip->parent_device, "startup: %u.%u\n", gpio, d->irq);
++ dev_dbg(gc->parent, "startup: %u.%u\n", gpio, d->irq);
+ npcmgpio_direction_input(gc, gpio);
+ npcmgpio_irq_ack(d);
+ npcmgpio_irq_unmask(d);
+@@ -905,7 +904,7 @@ static struct npcm7xx_func npcm7xx_funcs[] = {
+ #define DRIVE_STRENGTH_HI_SHIFT 12
+ #define DRIVE_STRENGTH_MASK 0x0000FF00
+
+-#define DS(lo, hi) (((lo) << DRIVE_STRENGTH_LO_SHIFT) | \
++#define DSTR(lo, hi) (((lo) << DRIVE_STRENGTH_LO_SHIFT) | \
+ ((hi) << DRIVE_STRENGTH_HI_SHIFT))
+ #define DSLO(x) (((x) >> DRIVE_STRENGTH_LO_SHIFT) & 0xF)
+ #define DSHI(x) (((x) >> DRIVE_STRENGTH_HI_SHIFT) & 0xF)
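
The DS() to DSTR() rename avoids a macro-name collision without changing the encoding: a low and a high drive-strength value, in mA, packed into one flags word. A standalone sketch of the pack/unpack round trip; DRIVE_STRENGTH_LO_SHIFT is assumed to be 8, consistent with the 0x0000FF00 mask in the context above.

#include <stdio.h>

#define DRIVE_STRENGTH_LO_SHIFT	8	/* assumed, per the 0x0000FF00 mask */
#define DRIVE_STRENGTH_HI_SHIFT	12

#define DSTR(lo, hi) (((lo) << DRIVE_STRENGTH_LO_SHIFT) | \
		      ((hi) << DRIVE_STRENGTH_HI_SHIFT))
#define DSLO(x) (((x) >> DRIVE_STRENGTH_LO_SHIFT) & 0xF)
#define DSHI(x) (((x) >> DRIVE_STRENGTH_HI_SHIFT) & 0xF)

int main(void)
{
	unsigned int flags = DSTR(8, 12);	/* 8 mA low, 12 mA high */

	printf("flags=0x%x lo=%u mA hi=%u mA\n",
	       flags, DSLO(flags), DSHI(flags));
	return 0;
}
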
+@@ -925,31 +924,31 @@ struct npcm7xx_pincfg {
+ static const struct npcm7xx_pincfg pincfg[] = {
+ /* PIN FUNCTION 1 FUNCTION 2 FUNCTION 3 FLAGS */
+ NPCM7XX_PINCFG(0, iox1, MFSEL1, 30, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(1, iox1, MFSEL1, 30, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(2, iox1, MFSEL1, 30, none, NONE, 0, none, NONE, 0, DS(8, 12)),
++ NPCM7XX_PINCFG(1, iox1, MFSEL1, 30, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(2, iox1, MFSEL1, 30, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
+ NPCM7XX_PINCFG(3, iox1, MFSEL1, 30, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(4, iox2, MFSEL3, 14, smb1d, I2CSEGSEL, 7, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(5, iox2, MFSEL3, 14, smb1d, I2CSEGSEL, 7, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(6, iox2, MFSEL3, 14, smb2d, I2CSEGSEL, 10, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(7, iox2, MFSEL3, 14, smb2d, I2CSEGSEL, 10, none, NONE, 0, SLEW),
+- NPCM7XX_PINCFG(8, lkgpo1, FLOCKR1, 4, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(9, lkgpo2, FLOCKR1, 8, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(10, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(11, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DS(8, 12)),
++ NPCM7XX_PINCFG(8, lkgpo1, FLOCKR1, 4, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(9, lkgpo2, FLOCKR1, 8, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(10, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(11, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
+ NPCM7XX_PINCFG(12, gspi, MFSEL1, 24, smb5b, I2CSEGSEL, 19, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(13, gspi, MFSEL1, 24, smb5b, I2CSEGSEL, 19, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(14, gspi, MFSEL1, 24, smb5c, I2CSEGSEL, 20, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(15, gspi, MFSEL1, 24, smb5c, I2CSEGSEL, 20, none, NONE, 0, SLEW),
+- NPCM7XX_PINCFG(16, lkgpo0, FLOCKR1, 0, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(17, pspi2, MFSEL3, 13, smb4den, I2CSEGSEL, 23, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(18, pspi2, MFSEL3, 13, smb4b, I2CSEGSEL, 14, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(19, pspi2, MFSEL3, 13, smb4b, I2CSEGSEL, 14, none, NONE, 0, DS(8, 12)),
++ NPCM7XX_PINCFG(16, lkgpo0, FLOCKR1, 0, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(17, pspi2, MFSEL3, 13, smb4den, I2CSEGSEL, 23, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(18, pspi2, MFSEL3, 13, smb4b, I2CSEGSEL, 14, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(19, pspi2, MFSEL3, 13, smb4b, I2CSEGSEL, 14, none, NONE, 0, DSTR(8, 12)),
+ NPCM7XX_PINCFG(20, smb4c, I2CSEGSEL, 15, smb15, MFSEL3, 8, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(21, smb4c, I2CSEGSEL, 15, smb15, MFSEL3, 8, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(22, smb4d, I2CSEGSEL, 16, smb14, MFSEL3, 7, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(23, smb4d, I2CSEGSEL, 16, smb14, MFSEL3, 7, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(24, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(25, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DS(8, 12)),
++ NPCM7XX_PINCFG(24, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(25, ioxh, MFSEL3, 18, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
+ NPCM7XX_PINCFG(26, smb5, MFSEL1, 2, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(27, smb5, MFSEL1, 2, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(28, smb4, MFSEL1, 1, none, NONE, 0, none, NONE, 0, 0),
+@@ -965,12 +964,12 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ NPCM7XX_PINCFG(39, smb3b, I2CSEGSEL, 11, none, NONE, 0, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(40, smb3b, I2CSEGSEL, 11, none, NONE, 0, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(41, bmcuart0a, MFSEL1, 9, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(42, bmcuart0a, MFSEL1, 9, none, NONE, 0, none, NONE, 0, DS(2, 4) | GPO),
++ NPCM7XX_PINCFG(42, bmcuart0a, MFSEL1, 9, none, NONE, 0, none, NONE, 0, DSTR(2, 4) | GPO),
+ NPCM7XX_PINCFG(43, uart1, MFSEL1, 10, jtag2, MFSEL4, 0, bmcuart1, MFSEL3, 24, 0),
+ NPCM7XX_PINCFG(44, uart1, MFSEL1, 10, jtag2, MFSEL4, 0, bmcuart1, MFSEL3, 24, 0),
+ NPCM7XX_PINCFG(45, uart1, MFSEL1, 10, jtag2, MFSEL4, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(46, uart1, MFSEL1, 10, jtag2, MFSEL4, 0, none, NONE, 0, DS(2, 8)),
+- NPCM7XX_PINCFG(47, uart1, MFSEL1, 10, jtag2, MFSEL4, 0, none, NONE, 0, DS(2, 8)),
++ NPCM7XX_PINCFG(46, uart1, MFSEL1, 10, jtag2, MFSEL4, 0, none, NONE, 0, DSTR(2, 8)),
++ NPCM7XX_PINCFG(47, uart1, MFSEL1, 10, jtag2, MFSEL4, 0, none, NONE, 0, DSTR(2, 8)),
+ NPCM7XX_PINCFG(48, uart2, MFSEL1, 11, bmcuart0b, MFSEL4, 1, none, NONE, 0, GPO),
+ NPCM7XX_PINCFG(49, uart2, MFSEL1, 11, bmcuart0b, MFSEL4, 1, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(50, uart2, MFSEL1, 11, none, NONE, 0, none, NONE, 0, 0),
+@@ -980,8 +979,8 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ NPCM7XX_PINCFG(54, uart2, MFSEL1, 11, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(55, uart2, MFSEL1, 11, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(56, r1err, MFSEL1, 12, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(57, r1md, MFSEL1, 13, none, NONE, 0, none, NONE, 0, DS(2, 4)),
+- NPCM7XX_PINCFG(58, r1md, MFSEL1, 13, none, NONE, 0, none, NONE, 0, DS(2, 4)),
++ NPCM7XX_PINCFG(57, r1md, MFSEL1, 13, none, NONE, 0, none, NONE, 0, DSTR(2, 4)),
++ NPCM7XX_PINCFG(58, r1md, MFSEL1, 13, none, NONE, 0, none, NONE, 0, DSTR(2, 4)),
+ NPCM7XX_PINCFG(59, smb3d, I2CSEGSEL, 13, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(60, smb3d, I2CSEGSEL, 13, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(61, uart1, MFSEL1, 10, none, NONE, 0, none, NONE, 0, GPO),
+@@ -1004,19 +1003,19 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ NPCM7XX_PINCFG(77, fanin13, MFSEL2, 13, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(78, fanin14, MFSEL2, 14, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(79, fanin15, MFSEL2, 15, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(80, pwm0, MFSEL2, 16, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(81, pwm1, MFSEL2, 17, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(82, pwm2, MFSEL2, 18, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(83, pwm3, MFSEL2, 19, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(84, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(85, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(86, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
++ NPCM7XX_PINCFG(80, pwm0, MFSEL2, 16, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(81, pwm1, MFSEL2, 17, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(82, pwm2, MFSEL2, 18, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(83, pwm3, MFSEL2, 19, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(84, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(85, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(86, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
+ NPCM7XX_PINCFG(87, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(88, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(89, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(90, r2err, MFSEL1, 15, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(91, r2md, MFSEL1, 16, none, NONE, 0, none, NONE, 0, DS(2, 4)),
+- NPCM7XX_PINCFG(92, r2md, MFSEL1, 16, none, NONE, 0, none, NONE, 0, DS(2, 4)),
++ NPCM7XX_PINCFG(91, r2md, MFSEL1, 16, none, NONE, 0, none, NONE, 0, DSTR(2, 4)),
++ NPCM7XX_PINCFG(92, r2md, MFSEL1, 16, none, NONE, 0, none, NONE, 0, DSTR(2, 4)),
+ NPCM7XX_PINCFG(93, ga20kbc, MFSEL1, 17, smb5d, I2CSEGSEL, 21, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(94, ga20kbc, MFSEL1, 17, smb5d, I2CSEGSEL, 21, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(95, lpc, NONE, 0, espi, MFSEL4, 8, gpio, MFSEL1, 26, 0),
+@@ -1062,34 +1061,34 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ NPCM7XX_PINCFG(133, smb10, MFSEL4, 13, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(134, smb11, MFSEL4, 14, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(135, smb11, MFSEL4, 14, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(136, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(137, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(138, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(139, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(140, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
++ NPCM7XX_PINCFG(136, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(137, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(138, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(139, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(140, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
+ NPCM7XX_PINCFG(141, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(142, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
++ NPCM7XX_PINCFG(142, sd1, MFSEL3, 12, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
+ NPCM7XX_PINCFG(143, sd1, MFSEL3, 12, sd1pwr, MFSEL4, 5, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(144, pwm4, MFSEL2, 20, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(145, pwm5, MFSEL2, 21, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(146, pwm6, MFSEL2, 22, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(147, pwm7, MFSEL2, 23, none, NONE, 0, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(148, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(149, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(150, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(151, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(152, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
++ NPCM7XX_PINCFG(144, pwm4, MFSEL2, 20, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(145, pwm5, MFSEL2, 21, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(146, pwm6, MFSEL2, 22, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(147, pwm7, MFSEL2, 23, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(148, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(149, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(150, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(151, mmc8, MFSEL3, 11, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(152, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
+ NPCM7XX_PINCFG(153, mmcwp, FLOCKR1, 24, none, NONE, 0, none, NONE, 0, 0), /* Z1/A1 */
+- NPCM7XX_PINCFG(154, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
++ NPCM7XX_PINCFG(154, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
+ NPCM7XX_PINCFG(155, mmccd, MFSEL3, 25, mmcrst, MFSEL4, 6, none, NONE, 0, 0), /* Z1/A1 */
+- NPCM7XX_PINCFG(156, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(157, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(158, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(159, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+-
+- NPCM7XX_PINCFG(160, clkout, MFSEL1, 21, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(161, lpc, NONE, 0, espi, MFSEL4, 8, gpio, MFSEL1, 26, DS(8, 12)),
+- NPCM7XX_PINCFG(162, serirq, NONE, 0, gpio, MFSEL1, 31, none, NONE, 0, DS(8, 12)),
++ NPCM7XX_PINCFG(156, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(157, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(158, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(159, mmc, MFSEL3, 10, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++
++ NPCM7XX_PINCFG(160, clkout, MFSEL1, 21, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(161, lpc, NONE, 0, espi, MFSEL4, 8, gpio, MFSEL1, 26, DSTR(8, 12)),
++ NPCM7XX_PINCFG(162, serirq, NONE, 0, gpio, MFSEL1, 31, none, NONE, 0, DSTR(8, 12)),
+ NPCM7XX_PINCFG(163, lpc, NONE, 0, espi, MFSEL4, 8, gpio, MFSEL1, 26, 0),
+ NPCM7XX_PINCFG(164, lpc, NONE, 0, espi, MFSEL4, 8, gpio, MFSEL1, 26, SLEWLPC),
+ NPCM7XX_PINCFG(165, lpc, NONE, 0, espi, MFSEL4, 8, gpio, MFSEL1, 26, SLEWLPC),
+@@ -1102,25 +1101,25 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ NPCM7XX_PINCFG(172, smb6, MFSEL3, 1, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(173, smb7, MFSEL3, 2, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(174, smb7, MFSEL3, 2, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(175, pspi1, MFSEL3, 4, faninx, MFSEL3, 3, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(176, pspi1, MFSEL3, 4, faninx, MFSEL3, 3, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(177, pspi1, MFSEL3, 4, faninx, MFSEL3, 3, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(178, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(179, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(180, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
++ NPCM7XX_PINCFG(175, pspi1, MFSEL3, 4, faninx, MFSEL3, 3, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(176, pspi1, MFSEL3, 4, faninx, MFSEL3, 3, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(177, pspi1, MFSEL3, 4, faninx, MFSEL3, 3, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(178, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(179, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(180, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
+ NPCM7XX_PINCFG(181, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(182, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(183, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(184, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW | GPO),
+- NPCM7XX_PINCFG(185, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW | GPO),
+- NPCM7XX_PINCFG(186, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(187, spi3cs1, MFSEL4, 17, none, NONE, 0, none, NONE, 0, DS(8, 12)),
+- NPCM7XX_PINCFG(188, spi3quad, MFSEL4, 20, spi3cs2, MFSEL4, 18, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(189, spi3quad, MFSEL4, 20, spi3cs3, MFSEL4, 19, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(190, gpio, FLOCKR1, 20, nprd_smi, NONE, 0, none, NONE, 0, DS(2, 4)),
+- NPCM7XX_PINCFG(191, none, NONE, 0, none, NONE, 0, none, NONE, 0, DS(8, 12)), /* XX */
+-
+- NPCM7XX_PINCFG(192, none, NONE, 0, none, NONE, 0, none, NONE, 0, DS(8, 12)), /* XX */
++ NPCM7XX_PINCFG(183, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(184, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW | GPO),
++ NPCM7XX_PINCFG(185, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW | GPO),
++ NPCM7XX_PINCFG(186, spi3, MFSEL4, 16, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(187, spi3cs1, MFSEL4, 17, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
++ NPCM7XX_PINCFG(188, spi3quad, MFSEL4, 20, spi3cs2, MFSEL4, 18, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(189, spi3quad, MFSEL4, 20, spi3cs3, MFSEL4, 19, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(190, gpio, FLOCKR1, 20, nprd_smi, NONE, 0, none, NONE, 0, DSTR(2, 4)),
++ NPCM7XX_PINCFG(191, none, NONE, 0, none, NONE, 0, none, NONE, 0, DSTR(8, 12)), /* XX */
++
++ NPCM7XX_PINCFG(192, none, NONE, 0, none, NONE, 0, none, NONE, 0, DSTR(8, 12)), /* XX */
+ NPCM7XX_PINCFG(193, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(194, smb0b, I2CSEGSEL, 0, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(195, smb0b, I2CSEGSEL, 0, none, NONE, 0, none, NONE, 0, 0),
+@@ -1131,11 +1130,11 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ NPCM7XX_PINCFG(200, r2, MFSEL1, 14, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(201, r1, MFSEL3, 9, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(202, smb0c, I2CSEGSEL, 1, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(203, faninx, MFSEL3, 3, none, NONE, 0, none, NONE, 0, DS(8, 12)),
++ NPCM7XX_PINCFG(203, faninx, MFSEL3, 3, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
+ NPCM7XX_PINCFG(204, ddc, NONE, 0, gpio, MFSEL3, 22, none, NONE, 0, SLEW),
+ NPCM7XX_PINCFG(205, ddc, NONE, 0, gpio, MFSEL3, 22, none, NONE, 0, SLEW),
+- NPCM7XX_PINCFG(206, ddc, NONE, 0, gpio, MFSEL3, 22, none, NONE, 0, DS(4, 8)),
+- NPCM7XX_PINCFG(207, ddc, NONE, 0, gpio, MFSEL3, 22, none, NONE, 0, DS(4, 8)),
++ NPCM7XX_PINCFG(206, ddc, NONE, 0, gpio, MFSEL3, 22, none, NONE, 0, DSTR(4, 8)),
++ NPCM7XX_PINCFG(207, ddc, NONE, 0, gpio, MFSEL3, 22, none, NONE, 0, DSTR(4, 8)),
+ NPCM7XX_PINCFG(208, rg2, MFSEL4, 24, ddr, MFSEL3, 26, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(209, rg2, MFSEL4, 24, ddr, MFSEL3, 26, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(210, rg2, MFSEL4, 24, ddr, MFSEL3, 26, none, NONE, 0, 0),
+@@ -1147,20 +1146,20 @@ static const struct npcm7xx_pincfg pincfg[] = {
+ NPCM7XX_PINCFG(216, rg2mdio, MFSEL4, 23, ddr, MFSEL3, 26, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(217, rg2mdio, MFSEL4, 23, ddr, MFSEL3, 26, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(218, wdog1, MFSEL3, 19, none, NONE, 0, none, NONE, 0, 0),
+- NPCM7XX_PINCFG(219, wdog2, MFSEL3, 20, none, NONE, 0, none, NONE, 0, DS(4, 8)),
++ NPCM7XX_PINCFG(219, wdog2, MFSEL3, 20, none, NONE, 0, none, NONE, 0, DSTR(4, 8)),
+ NPCM7XX_PINCFG(220, smb12, MFSEL3, 5, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(221, smb12, MFSEL3, 5, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(222, smb13, MFSEL3, 6, none, NONE, 0, none, NONE, 0, 0),
+ NPCM7XX_PINCFG(223, smb13, MFSEL3, 6, none, NONE, 0, none, NONE, 0, 0),
+
+ NPCM7XX_PINCFG(224, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, SLEW),
+- NPCM7XX_PINCFG(225, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW | GPO),
+- NPCM7XX_PINCFG(226, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW | GPO),
+- NPCM7XX_PINCFG(227, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(228, spixcs1, MFSEL4, 28, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(229, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(230, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DS(8, 12) | SLEW),
+- NPCM7XX_PINCFG(231, clkreq, MFSEL4, 9, none, NONE, 0, none, NONE, 0, DS(8, 12)),
++ NPCM7XX_PINCFG(225, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW | GPO),
++ NPCM7XX_PINCFG(226, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW | GPO),
++ NPCM7XX_PINCFG(227, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(228, spixcs1, MFSEL4, 28, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(229, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(230, spix, MFSEL4, 27, none, NONE, 0, none, NONE, 0, DSTR(8, 12) | SLEW),
++ NPCM7XX_PINCFG(231, clkreq, MFSEL4, 9, none, NONE, 0, none, NONE, 0, DSTR(8, 12)),
+ NPCM7XX_PINCFG(253, none, NONE, 0, none, NONE, 0, none, NONE, 0, GPI), /* SDHC1 power */
+ NPCM7XX_PINCFG(254, none, NONE, 0, none, NONE, 0, none, NONE, 0, GPI), /* SDHC2 power */
+ NPCM7XX_PINCFG(255, none, NONE, 0, none, NONE, 0, none, NONE, 0, GPI), /* DACOSEL */
+@@ -1561,7 +1560,7 @@ static int npcm7xx_get_groups_count(struct pinctrl_dev *pctldev)
+ {
+ struct npcm7xx_pinctrl *npcm = pinctrl_dev_get_drvdata(pctldev);
+
+- dev_dbg(npcm->dev, "group size: %d\n", ARRAY_SIZE(npcm7xx_groups));
++ dev_dbg(npcm->dev, "group size: %zu\n", ARRAY_SIZE(npcm7xx_groups));
+ return ARRAY_SIZE(npcm7xx_groups);
+ }
+
+diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
+index f8edcc88ac013..415d1df8f46a5 100644
+--- a/drivers/pinctrl/pinconf-generic.c
++++ b/drivers/pinctrl/pinconf-generic.c
+@@ -30,10 +30,10 @@ static const struct pin_config_item conf_items[] = {
+ PCONFDUMP(PIN_CONFIG_BIAS_BUS_HOLD, "input bias bus hold", NULL, false),
+ PCONFDUMP(PIN_CONFIG_BIAS_DISABLE, "input bias disabled", NULL, false),
+ PCONFDUMP(PIN_CONFIG_BIAS_HIGH_IMPEDANCE, "input bias high impedance", NULL, false),
+- PCONFDUMP(PIN_CONFIG_BIAS_PULL_DOWN, "input bias pull down", NULL, false),
++ PCONFDUMP(PIN_CONFIG_BIAS_PULL_DOWN, "input bias pull down", "ohms", true),
+ PCONFDUMP(PIN_CONFIG_BIAS_PULL_PIN_DEFAULT,
+- "input bias pull to pin specific state", NULL, false),
+- PCONFDUMP(PIN_CONFIG_BIAS_PULL_UP, "input bias pull up", NULL, false),
++ "input bias pull to pin specific state", "ohms", true),
++ PCONFDUMP(PIN_CONFIG_BIAS_PULL_UP, "input bias pull up", "ohms", true),
+ PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_DRAIN, "output drive open drain", NULL, false),
+ PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_SOURCE, "output drive open source", NULL, false),
+ PCONFDUMP(PIN_CONFIG_DRIVE_PUSH_PULL, "output drive push pull", NULL, false),
+diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c
+index 2712f51eb2381..fa6becca17889 100644
+--- a/drivers/pinctrl/pinctrl-ingenic.c
++++ b/drivers/pinctrl/pinctrl-ingenic.c
+@@ -119,6 +119,8 @@ struct ingenic_chip_info {
+ unsigned int num_functions;
+
+ const u32 *pull_ups, *pull_downs;
++
++ const struct regmap_access_table *access_table;
+ };
+
+ struct ingenic_pinctrl {
+@@ -2179,6 +2181,17 @@ static const struct function_desc x1000_functions[] = {
+ { "mac", x1000_mac_groups, ARRAY_SIZE(x1000_mac_groups), },
+ };
+
++static const struct regmap_range x1000_access_ranges[] = {
++ regmap_reg_range(0x000, 0x400 - 4),
++ regmap_reg_range(0x700, 0x800 - 4),
++};
++
++/* shared with X1500 */
++static const struct regmap_access_table x1000_access_table = {
++ .yes_ranges = x1000_access_ranges,
++ .n_yes_ranges = ARRAY_SIZE(x1000_access_ranges),
++};
++
+ static const struct ingenic_chip_info x1000_chip_info = {
+ .num_chips = 4,
+ .reg_offset = 0x100,
+@@ -2189,6 +2202,7 @@ static const struct ingenic_chip_info x1000_chip_info = {
+ .num_functions = ARRAY_SIZE(x1000_functions),
+ .pull_ups = x1000_pull_ups,
+ .pull_downs = x1000_pull_downs,
++ .access_table = &x1000_access_table,
+ };
+
+ static int x1500_uart0_data_pins[] = { 0x4a, 0x4b, };
+@@ -2300,6 +2314,7 @@ static const struct ingenic_chip_info x1500_chip_info = {
+ .num_functions = ARRAY_SIZE(x1500_functions),
+ .pull_ups = x1000_pull_ups,
+ .pull_downs = x1000_pull_downs,
++ .access_table = &x1000_access_table,
+ };
+
+ static const u32 x1830_pull_ups[4] = {
+@@ -2506,6 +2521,16 @@ static const struct function_desc x1830_functions[] = {
+ { "mac", x1830_mac_groups, ARRAY_SIZE(x1830_mac_groups), },
+ };
+
++static const struct regmap_range x1830_access_ranges[] = {
++ regmap_reg_range(0x0000, 0x4000 - 4),
++ regmap_reg_range(0x7000, 0x8000 - 4),
++};
++
++static const struct regmap_access_table x1830_access_table = {
++ .yes_ranges = x1830_access_ranges,
++ .n_yes_ranges = ARRAY_SIZE(x1830_access_ranges),
++};
++
+ static const struct ingenic_chip_info x1830_chip_info = {
+ .num_chips = 4,
+ .reg_offset = 0x1000,
+@@ -2516,6 +2541,7 @@ static const struct ingenic_chip_info x1830_chip_info = {
+ .num_functions = ARRAY_SIZE(x1830_functions),
+ .pull_ups = x1830_pull_ups,
+ .pull_downs = x1830_pull_downs,
++ .access_table = &x1830_access_table,
+ };
+
+ static const u32 x2000_pull_ups[5] = {
+@@ -2969,6 +2995,17 @@ static const struct function_desc x2000_functions[] = {
+ { "otg", x2000_otg_groups, ARRAY_SIZE(x2000_otg_groups), },
+ };
+
++static const struct regmap_range x2000_access_ranges[] = {
++ regmap_reg_range(0x000, 0x500 - 4),
++ regmap_reg_range(0x700, 0x800 - 4),
++};
++
++/* shared with X2100 */
++static const struct regmap_access_table x2000_access_table = {
++ .yes_ranges = x2000_access_ranges,
++ .n_yes_ranges = ARRAY_SIZE(x2000_access_ranges),
++};
++
+ static const struct ingenic_chip_info x2000_chip_info = {
+ .num_chips = 5,
+ .reg_offset = 0x100,
+@@ -2979,6 +3016,7 @@ static const struct ingenic_chip_info x2000_chip_info = {
+ .num_functions = ARRAY_SIZE(x2000_functions),
+ .pull_ups = x2000_pull_ups,
+ .pull_downs = x2000_pull_downs,
++ .access_table = &x2000_access_table,
+ };
+
+ static const u32 x2100_pull_ups[5] = {
+@@ -3189,6 +3227,7 @@ static const struct ingenic_chip_info x2100_chip_info = {
+ .num_functions = ARRAY_SIZE(x2100_functions),
+ .pull_ups = x2100_pull_ups,
+ .pull_downs = x2100_pull_downs,
++ .access_table = &x2000_access_table,
+ };
+
+ static u32 ingenic_gpio_read_reg(struct ingenic_gpio_chip *jzgc, u8 reg)
+@@ -4168,7 +4207,12 @@ static int __init ingenic_pinctrl_probe(struct platform_device *pdev)
+ return PTR_ERR(base);
+
+ regmap_config = ingenic_pinctrl_regmap_config;
+- regmap_config.max_register = chip_info->num_chips * chip_info->reg_offset;
++ if (chip_info->access_table) {
++ regmap_config.rd_table = chip_info->access_table;
++ regmap_config.wr_table = chip_info->access_table;
++ } else {
++ regmap_config.max_register = chip_info->num_chips * chip_info->reg_offset - 4;
++ }
+
+ 	jzpc->map = devm_regmap_init_mmio(dev, base, &regmap_config);
+ if (IS_ERR(jzpc->map)) {
+diff --git a/drivers/pinctrl/pinctrl-microchip-sgpio.c b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+index 639f1130e9892..666f1e3889e00 100644
+--- a/drivers/pinctrl/pinctrl-microchip-sgpio.c
++++ b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+@@ -19,6 +19,7 @@
+ #include <linux/property.h>
+ #include <linux/regmap.h>
+ #include <linux/reset.h>
++#include <linux/spinlock.h>
+
+ #include "core.h"
+ #include "pinconf.h"
+@@ -116,6 +117,7 @@ struct sgpio_priv {
+ u32 clock;
+ struct regmap *regs;
+ const struct sgpio_properties *properties;
++ spinlock_t lock;
+ };
+
+ struct sgpio_port_addr {
+@@ -229,6 +231,7 @@ static void sgpio_output_set(struct sgpio_priv *priv,
+ int value)
+ {
+ unsigned int bit = SGPIO_SRC_BITS * addr->bit;
++ unsigned long flags;
+ u32 clr, set;
+
+ switch (priv->properties->arch) {
+@@ -247,7 +250,10 @@ static void sgpio_output_set(struct sgpio_priv *priv,
+ default:
+ return;
+ }
++
++ spin_lock_irqsave(&priv->lock, flags);
+ sgpio_clrsetbits(priv, REG_PORT_CONFIG, addr->port, clr, set);
++ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+ static int sgpio_output_get(struct sgpio_priv *priv,
+@@ -575,10 +581,13 @@ static void microchip_sgpio_irq_settype(struct irq_data *data,
+ struct sgpio_bank *bank = gpiochip_get_data(chip);
+ unsigned int gpio = irqd_to_hwirq(data);
+ struct sgpio_port_addr addr;
++ unsigned long flags;
+ u32 ena;
+
+ sgpio_pin_to_addr(bank->priv, gpio, &addr);
+
++ spin_lock_irqsave(&bank->priv->lock, flags);
++
+ /* Disable interrupt while changing type */
+ ena = sgpio_readl(bank->priv, REG_INT_ENABLE, addr.bit);
+ sgpio_writel(bank->priv, ena & ~BIT(addr.port), REG_INT_ENABLE, addr.bit);
+@@ -595,6 +604,8 @@ static void microchip_sgpio_irq_settype(struct irq_data *data,
+
+ /* Possibly re-enable interrupts */
+ sgpio_writel(bank->priv, ena, REG_INT_ENABLE, addr.bit);
++
++ spin_unlock_irqrestore(&bank->priv->lock, flags);
+ }
+
+ static void microchip_sgpio_irq_setreg(struct irq_data *data,
+@@ -605,13 +616,16 @@ static void microchip_sgpio_irq_setreg(struct irq_data *data,
+ struct sgpio_bank *bank = gpiochip_get_data(chip);
+ unsigned int gpio = irqd_to_hwirq(data);
+ struct sgpio_port_addr addr;
++ unsigned long flags;
+
+ sgpio_pin_to_addr(bank->priv, gpio, &addr);
+
++ spin_lock_irqsave(&bank->priv->lock, flags);
+ if (clear)
+ sgpio_clrsetbits(bank->priv, reg, addr.bit, BIT(addr.port), 0);
+ else
+ sgpio_clrsetbits(bank->priv, reg, addr.bit, 0, BIT(addr.port));
++ spin_unlock_irqrestore(&bank->priv->lock, flags);
+ }
+
+ static void microchip_sgpio_irq_mask(struct irq_data *data)
+@@ -833,6 +847,7 @@ static int microchip_sgpio_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ priv->dev = dev;
++ spin_lock_init(&priv->lock);
+
+ reset = devm_reset_control_get_optional_shared(&pdev->dev, "switch");
+ if (IS_ERR(reset))
+diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
+index fc969208d904c..370459243007b 100644
+--- a/drivers/pinctrl/pinctrl-ocelot.c
++++ b/drivers/pinctrl/pinctrl-ocelot.c
+@@ -1750,8 +1750,8 @@ static int ocelot_gpiochip_register(struct platform_device *pdev,
+ gc->base = -1;
+ gc->label = "ocelot-gpio";
+
+- irq = irq_of_parse_and_map(gc->of_node, 0);
+- if (irq) {
++ irq = platform_get_irq_optional(pdev, 0);
++ if (irq > 0) {
+ girq = &gc->irq;
+ girq->chip = &ocelot_irqchip;
+ girq->parent_handler = ocelot_irq_handler;
+@@ -1788,9 +1788,10 @@ static struct regmap *ocelot_pinctrl_create_pincfg(struct platform_device *pdev)
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 32,
++ .name = "pincfg",
+ };
+
+- base = devm_platform_ioremap_resource(pdev, 0);
++ base = devm_platform_ioremap_resource(pdev, 1);
+ if (IS_ERR(base)) {
+ dev_dbg(&pdev->dev, "Failed to ioremap config registers (no extended pinconf)\n");
+ return NULL;
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index d8dd8415fa81b..a1b598b86aa9f 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -2693,6 +2693,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
+ node = of_parse_phandle(np, "rockchip,grf", 0);
+ if (node) {
+ info->regmap_base = syscon_node_to_regmap(node);
++ of_node_put(node);
+ if (IS_ERR(info->regmap_base))
+ return PTR_ERR(info->regmap_base);
+ } else {
+@@ -2725,6 +2726,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
+ node = of_parse_phandle(np, "rockchip,pmu", 0);
+ if (node) {
+ info->regmap_pmu = syscon_node_to_regmap(node);
++ of_node_put(node);
+ if (IS_ERR(info->regmap_pmu))
+ return PTR_ERR(info->regmap_pmu);
+ }
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index 0d4ea2e22a535..12d41ac017b53 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -741,7 +741,7 @@ static int sh_pfc_suspend_init(struct sh_pfc *pfc) { return 0; }
+
+ #ifdef DEBUG
+ #define SH_PFC_MAX_REGS 300
+-#define SH_PFC_MAX_ENUMS 3000
++#define SH_PFC_MAX_ENUMS 5000
+
+ static unsigned int sh_pfc_errors __initdata;
+ static unsigned int sh_pfc_warnings __initdata;
+@@ -865,7 +865,8 @@ static void __init sh_pfc_check_cfg_reg(const char *drvname,
+ GENMASK(cfg_reg->reg_width - 1, 0));
+
+ if (cfg_reg->field_width) {
+- n = cfg_reg->reg_width / cfg_reg->field_width;
++ fw = cfg_reg->field_width;
++ n = (cfg_reg->reg_width / fw) << fw;
+ /* Skip field checks (done at build time) */
+ goto check_enum_ids;
+ }
+diff --git a/drivers/pinctrl/renesas/pfc-r8a77470.c b/drivers/pinctrl/renesas/pfc-r8a77470.c
+index e6e5487691c16..cf7153d06a953 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a77470.c
++++ b/drivers/pinctrl/renesas/pfc-r8a77470.c
+@@ -2140,7 +2140,7 @@ static const unsigned int vin0_clk_mux[] = {
+ VI0_CLK_MARK,
+ };
+ /* - VIN1 ------------------------------------------------------------------- */
+-static const union vin_data vin1_data_pins = {
++static const union vin_data12 vin1_data_pins = {
+ .data12 = {
+ RCAR_GP_PIN(3, 1), RCAR_GP_PIN(3, 2),
+ RCAR_GP_PIN(3, 3), RCAR_GP_PIN(3, 4),
+@@ -2150,7 +2150,7 @@ static const union vin_data vin1_data_pins = {
+ RCAR_GP_PIN(3, 15), RCAR_GP_PIN(3, 16),
+ },
+ };
+-static const union vin_data vin1_data_mux = {
++static const union vin_data12 vin1_data_mux = {
+ .data12 = {
+ VI1_DATA0_MARK, VI1_DATA1_MARK,
+ VI1_DATA2_MARK, VI1_DATA3_MARK,
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+index 2e490e7696f47..4102ce955bd7f 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+@@ -585,13 +585,11 @@ static const struct samsung_pin_ctrl exynos850_pin_ctrl[] __initconst = {
+ /* pin-controller instance 0 ALIVE data */
+ .pin_banks = exynos850_pin_banks0,
+ .nr_banks = ARRAY_SIZE(exynos850_pin_banks0),
+- .eint_gpio_init = exynos_eint_gpio_init,
+ .eint_wkup_init = exynos_eint_wkup_init,
+ }, {
+ /* pin-controller instance 1 CMGP data */
+ .pin_banks = exynos850_pin_banks1,
+ .nr_banks = ARRAY_SIZE(exynos850_pin_banks1),
+- .eint_gpio_init = exynos_eint_gpio_init,
+ .eint_wkup_init = exynos_eint_wkup_init,
+ }, {
+ /* pin-controller instance 2 AUD data */
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 0f6e9305fec58..c4175fea7d741 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1002,6 +1002,16 @@ samsung_pinctrl_get_soc_data_for_of_alias(struct platform_device *pdev)
+ return &(of_data->ctrl[id]);
+ }
+
++static void samsung_banks_of_node_put(struct samsung_pinctrl_drv_data *d)
++{
++ struct samsung_pin_bank *bank;
++ unsigned int i;
++
++ bank = d->pin_banks;
++ for (i = 0; i < d->nr_banks; ++i, ++bank)
++ of_node_put(bank->of_node);
++}
++
+ /* retrieve the soc specific data */
+ static const struct samsung_pin_ctrl *
+ samsung_pinctrl_get_soc_data(struct samsung_pinctrl_drv_data *d,
+@@ -1117,19 +1127,19 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ if (ctrl->retention_data) {
+ drvdata->retention_ctrl = ctrl->retention_data->init(drvdata,
+ ctrl->retention_data);
+- if (IS_ERR(drvdata->retention_ctrl))
+- return PTR_ERR(drvdata->retention_ctrl);
++ if (IS_ERR(drvdata->retention_ctrl)) {
++ ret = PTR_ERR(drvdata->retention_ctrl);
++ goto err_put_banks;
++ }
+ }
+
+ ret = samsung_pinctrl_register(pdev, drvdata);
+ if (ret)
+- return ret;
++ goto err_put_banks;
+
+ ret = samsung_gpiolib_register(pdev, drvdata);
+- if (ret) {
+- samsung_pinctrl_unregister(pdev, drvdata);
+- return ret;
+- }
++ if (ret)
++ goto err_unregister;
+
+ if (ctrl->eint_gpio_init)
+ ctrl->eint_gpio_init(drvdata);
+@@ -1139,6 +1149,12 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, drvdata);
+
+ return 0;
++
++err_unregister:
++ samsung_pinctrl_unregister(pdev, drvdata);
++err_put_banks:
++ samsung_banks_of_node_put(drvdata);
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/platform/chrome/Makefile b/drivers/platform/chrome/Makefile
+index f901d2e43166c..88cbc434c06b2 100644
+--- a/drivers/platform/chrome/Makefile
++++ b/drivers/platform/chrome/Makefile
+@@ -2,6 +2,7 @@
+
+ # tell define_trace.h where to find the cros ec trace header
+ CFLAGS_cros_ec_trace.o:= -I$(src)
++CFLAGS_cros_ec_sensorhub_ring.o:= -I$(src)
+
+ obj-$(CONFIG_CHROMEOS_LAPTOP) += chromeos_laptop.o
+ obj-$(CONFIG_CHROMEOS_PSTORE) += chromeos_pstore.o
+@@ -20,7 +21,7 @@ obj-$(CONFIG_CROS_EC_CHARDEV) += cros_ec_chardev.o
+ obj-$(CONFIG_CROS_EC_LIGHTBAR) += cros_ec_lightbar.o
+ obj-$(CONFIG_CROS_EC_VBC) += cros_ec_vbc.o
+ obj-$(CONFIG_CROS_EC_DEBUGFS) += cros_ec_debugfs.o
+-cros-ec-sensorhub-objs := cros_ec_sensorhub.o cros_ec_sensorhub_ring.o cros_ec_trace.o
++cros-ec-sensorhub-objs := cros_ec_sensorhub.o cros_ec_sensorhub_ring.o
+ obj-$(CONFIG_CROS_EC_SENSORHUB) += cros-ec-sensorhub.o
+ obj-$(CONFIG_CROS_EC_SYSFS) += cros_ec_sysfs.o
+ obj-$(CONFIG_CROS_USBPD_LOGGER) += cros_usbpd_logger.o
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_ring.c b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+index 98e37080f7609..71948dade0e2a 100644
+--- a/drivers/platform/chrome/cros_ec_sensorhub_ring.c
++++ b/drivers/platform/chrome/cros_ec_sensorhub_ring.c
+@@ -17,7 +17,8 @@
+ #include <linux/sort.h>
+ #include <linux/slab.h>
+
+-#include "cros_ec_trace.h"
++#define CREATE_TRACE_POINTS
++#include "cros_ec_sensorhub_trace.h"
+
+ /* Precision of fixed point for the m values from the filter */
+ #define M_PRECISION BIT(23)
+diff --git a/drivers/platform/chrome/cros_ec_sensorhub_trace.h b/drivers/platform/chrome/cros_ec_sensorhub_trace.h
+new file mode 100644
+index 0000000000000..57d9b47859692
+--- /dev/null
++++ b/drivers/platform/chrome/cros_ec_sensorhub_trace.h
+@@ -0,0 +1,123 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Trace events for the ChromeOS Sensorhub kernel module
++ *
++ * Copyright 2021 Google LLC.
++ */
++
++#undef TRACE_SYSTEM
++#define TRACE_SYSTEM cros_ec
++
++#if !defined(_CROS_EC_SENSORHUB_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
++#define _CROS_EC_SENSORHUB_TRACE_H_
++
++#include <linux/types.h>
++#include <linux/platform_data/cros_ec_sensorhub.h>
++
++#include <linux/tracepoint.h>
++
++TRACE_EVENT(cros_ec_sensorhub_timestamp,
++ TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++ s64 current_timestamp, s64 current_time),
++ TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
++ current_time),
++ TP_STRUCT__entry(
++ __field(u32, ec_sample_timestamp)
++ __field(u32, ec_fifo_timestamp)
++ __field(s64, fifo_timestamp)
++ __field(s64, current_timestamp)
++ __field(s64, current_time)
++ __field(s64, delta)
++ ),
++ TP_fast_assign(
++ __entry->ec_sample_timestamp = ec_sample_timestamp;
++ __entry->ec_fifo_timestamp = ec_fifo_timestamp;
++ __entry->fifo_timestamp = fifo_timestamp;
++ __entry->current_timestamp = current_timestamp;
++ __entry->current_time = current_time;
++ __entry->delta = current_timestamp - current_time;
++ ),
++ TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++ __entry->ec_sample_timestamp,
++ __entry->ec_fifo_timestamp,
++ __entry->fifo_timestamp,
++ __entry->current_timestamp,
++ __entry->current_time,
++ __entry->delta
++ )
++);
++
++TRACE_EVENT(cros_ec_sensorhub_data,
++ TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
++ s64 current_timestamp, s64 current_time),
++ TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
++ TP_STRUCT__entry(
++ __field(u32, ec_sensor_num)
++ __field(u32, ec_fifo_timestamp)
++ __field(s64, fifo_timestamp)
++ __field(s64, current_timestamp)
++ __field(s64, current_time)
++ __field(s64, delta)
++ ),
++ TP_fast_assign(
++ __entry->ec_sensor_num = ec_sensor_num;
++ __entry->ec_fifo_timestamp = ec_fifo_timestamp;
++ __entry->fifo_timestamp = fifo_timestamp;
++ __entry->current_timestamp = current_timestamp;
++ __entry->current_time = current_time;
++ __entry->delta = current_timestamp - current_time;
++ ),
++ TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
++ __entry->ec_sensor_num,
++ __entry->ec_fifo_timestamp,
++ __entry->fifo_timestamp,
++ __entry->current_timestamp,
++ __entry->current_time,
++ __entry->delta
++ )
++);
++
++TRACE_EVENT(cros_ec_sensorhub_filter,
++ TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
++ TP_ARGS(state, dx, dy),
++ TP_STRUCT__entry(
++ __field(s64, dx)
++ __field(s64, dy)
++ __field(s64, median_m)
++ __field(s64, median_error)
++ __field(s64, history_len)
++ __field(s64, x)
++ __field(s64, y)
++ ),
++ TP_fast_assign(
++ __entry->dx = dx;
++ __entry->dy = dy;
++ __entry->median_m = state->median_m;
++ __entry->median_error = state->median_error;
++ __entry->history_len = state->history_len;
++ __entry->x = state->x_offset;
++ __entry->y = state->y_offset;
++ ),
++ TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
++ __entry->dx,
++ __entry->dy,
++ __entry->median_m,
++ __entry->median_error,
++ __entry->history_len,
++ __entry->x,
++ __entry->y
++ )
++);
++
++
++#endif /* _CROS_EC_SENSORHUB_TRACE_H_ */
++
++/* this part must be outside header guard */
++
++#undef TRACE_INCLUDE_PATH
++#define TRACE_INCLUDE_PATH .
++
++#undef TRACE_INCLUDE_FILE
++#define TRACE_INCLUDE_FILE cros_ec_sensorhub_trace
++
++#include <trace/define_trace.h>
+diff --git a/drivers/platform/chrome/cros_ec_trace.h b/drivers/platform/chrome/cros_ec_trace.h
+index 7e7cfc98657a4..9bb5cd2c98b8b 100644
+--- a/drivers/platform/chrome/cros_ec_trace.h
++++ b/drivers/platform/chrome/cros_ec_trace.h
+@@ -15,7 +15,6 @@
+ #include <linux/types.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
+-#include <linux/platform_data/cros_ec_sensorhub.h>
+
+ #include <linux/tracepoint.h>
+
+@@ -71,100 +70,6 @@ TRACE_EVENT(cros_ec_request_done,
+ __entry->retval)
+ );
+
+-TRACE_EVENT(cros_ec_sensorhub_timestamp,
+- TP_PROTO(u32 ec_sample_timestamp, u32 ec_fifo_timestamp, s64 fifo_timestamp,
+- s64 current_timestamp, s64 current_time),
+- TP_ARGS(ec_sample_timestamp, ec_fifo_timestamp, fifo_timestamp, current_timestamp,
+- current_time),
+- TP_STRUCT__entry(
+- __field(u32, ec_sample_timestamp)
+- __field(u32, ec_fifo_timestamp)
+- __field(s64, fifo_timestamp)
+- __field(s64, current_timestamp)
+- __field(s64, current_time)
+- __field(s64, delta)
+- ),
+- TP_fast_assign(
+- __entry->ec_sample_timestamp = ec_sample_timestamp;
+- __entry->ec_fifo_timestamp = ec_fifo_timestamp;
+- __entry->fifo_timestamp = fifo_timestamp;
+- __entry->current_timestamp = current_timestamp;
+- __entry->current_time = current_time;
+- __entry->delta = current_timestamp - current_time;
+- ),
+- TP_printk("ec_ts: %9u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
+- __entry->ec_sample_timestamp,
+- __entry->ec_fifo_timestamp,
+- __entry->fifo_timestamp,
+- __entry->current_timestamp,
+- __entry->current_time,
+- __entry->delta
+- )
+-);
+-
+-TRACE_EVENT(cros_ec_sensorhub_data,
+- TP_PROTO(u32 ec_sensor_num, u32 ec_fifo_timestamp, s64 fifo_timestamp,
+- s64 current_timestamp, s64 current_time),
+- TP_ARGS(ec_sensor_num, ec_fifo_timestamp, fifo_timestamp, current_timestamp, current_time),
+- TP_STRUCT__entry(
+- __field(u32, ec_sensor_num)
+- __field(u32, ec_fifo_timestamp)
+- __field(s64, fifo_timestamp)
+- __field(s64, current_timestamp)
+- __field(s64, current_time)
+- __field(s64, delta)
+- ),
+- TP_fast_assign(
+- __entry->ec_sensor_num = ec_sensor_num;
+- __entry->ec_fifo_timestamp = ec_fifo_timestamp;
+- __entry->fifo_timestamp = fifo_timestamp;
+- __entry->current_timestamp = current_timestamp;
+- __entry->current_time = current_time;
+- __entry->delta = current_timestamp - current_time;
+- ),
+- TP_printk("ec_num: %4u, ec_fifo_ts: %9u, fifo_ts: %12lld, curr_ts: %12lld, curr_time: %12lld, delta %12lld",
+- __entry->ec_sensor_num,
+- __entry->ec_fifo_timestamp,
+- __entry->fifo_timestamp,
+- __entry->current_timestamp,
+- __entry->current_time,
+- __entry->delta
+- )
+-);
+-
+-TRACE_EVENT(cros_ec_sensorhub_filter,
+- TP_PROTO(struct cros_ec_sensors_ts_filter_state *state, s64 dx, s64 dy),
+- TP_ARGS(state, dx, dy),
+- TP_STRUCT__entry(
+- __field(s64, dx)
+- __field(s64, dy)
+- __field(s64, median_m)
+- __field(s64, median_error)
+- __field(s64, history_len)
+- __field(s64, x)
+- __field(s64, y)
+- ),
+- TP_fast_assign(
+- __entry->dx = dx;
+- __entry->dy = dy;
+- __entry->median_m = state->median_m;
+- __entry->median_error = state->median_error;
+- __entry->history_len = state->history_len;
+- __entry->x = state->x_offset;
+- __entry->y = state->y_offset;
+- ),
+- TP_printk("dx: %12lld. dy: %12lld median_m: %12lld median_error: %12lld len: %lld x: %12lld y: %12lld",
+- __entry->dx,
+- __entry->dy,
+- __entry->median_m,
+- __entry->median_error,
+- __entry->history_len,
+- __entry->x,
+- __entry->y
+- )
+-);
+-
+-
+ #endif /* _CROS_EC_TRACE_H_ */
+
+ /* this part must be outside header guard */
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index 5de0bfb0bc4d9..952c1756f59ee 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -1075,7 +1075,13 @@ static int cros_typec_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ typec->dev = dev;
++
+ typec->ec = dev_get_drvdata(pdev->dev.parent);
++ if (!typec->ec) {
++ dev_err(dev, "couldn't find parent EC device\n");
++ return -ENODEV;
++ }
++
+ platform_set_drvdata(pdev, typec);
+
+ ret = cros_typec_get_cmd_version(typec);
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 2104a2621e507..adab31b52f2af 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -2059,7 +2059,7 @@ static int fan_boost_mode_check_present(struct asus_wmi *asus)
+ err = asus_wmi_get_devstate(asus, ASUS_WMI_DEVID_FAN_BOOST_MODE,
+ &result);
+ if (err) {
+- if (err == -ENODEV)
++ if (err == -ENODEV || err == -ENODATA)
+ return 0;
+ else
+ return err;
+diff --git a/drivers/platform/x86/huawei-wmi.c b/drivers/platform/x86/huawei-wmi.c
+index a2d846c4a7eef..eac3e6b4ea113 100644
+--- a/drivers/platform/x86/huawei-wmi.c
++++ b/drivers/platform/x86/huawei-wmi.c
+@@ -470,10 +470,17 @@ static DEVICE_ATTR_RW(charge_control_thresholds);
+
+ static int huawei_wmi_battery_add(struct power_supply *battery)
+ {
+- device_create_file(&battery->dev, &dev_attr_charge_control_start_threshold);
+- device_create_file(&battery->dev, &dev_attr_charge_control_end_threshold);
++ int err = 0;
+
+- return 0;
++ err = device_create_file(&battery->dev, &dev_attr_charge_control_start_threshold);
++ if (err)
++ return err;
++
++ err = device_create_file(&battery->dev, &dev_attr_charge_control_end_threshold);
++ if (err)
++ device_remove_file(&battery->dev, &dev_attr_charge_control_start_threshold);
++
++ return err;
+ }
+
+ static int huawei_wmi_battery_remove(struct power_supply *battery)
+diff --git a/drivers/power/reset/gemini-poweroff.c b/drivers/power/reset/gemini-poweroff.c
+index 90e35c07240ae..b7f7a8225f22e 100644
+--- a/drivers/power/reset/gemini-poweroff.c
++++ b/drivers/power/reset/gemini-poweroff.c
+@@ -107,8 +107,8 @@ static int gemini_poweroff_probe(struct platform_device *pdev)
+ return PTR_ERR(gpw->base);
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq)
+- return -EINVAL;
++ if (irq < 0)
++ return irq;
+
+ gpw->dev = dev;
+
+diff --git a/drivers/power/supply/ab8500_bmdata.c b/drivers/power/supply/ab8500_bmdata.c
+index 7ae95f5375801..9a8334a65de1b 100644
+--- a/drivers/power/supply/ab8500_bmdata.c
++++ b/drivers/power/supply/ab8500_bmdata.c
+@@ -188,13 +188,11 @@ int ab8500_bm_of_probe(struct power_supply *psy,
+ * fall back to safe defaults.
+ */
+ if ((bi->voltage_min_design_uv < 0) ||
+- (bi->voltage_max_design_uv < 0) ||
+- (bi->overvoltage_limit_uv < 0)) {
++ (bi->voltage_max_design_uv < 0)) {
+ /* Nominal voltage is 3.7V for unknown batteries */
+ bi->voltage_min_design_uv = 3700000;
+- bi->voltage_max_design_uv = 3700000;
+- /* Termination voltage (overcharge limit) 4.05V */
+- bi->overvoltage_limit_uv = 4050000;
++ /* Termination voltage 4.05V */
++ bi->voltage_max_design_uv = 4050000;
+ }
+
+ if (bi->constant_charge_current_max_ua < 0)
+diff --git a/drivers/power/supply/ab8500_chargalg.c b/drivers/power/supply/ab8500_chargalg.c
+index c4a2fe07126c3..da490e090ce48 100644
+--- a/drivers/power/supply/ab8500_chargalg.c
++++ b/drivers/power/supply/ab8500_chargalg.c
+@@ -802,7 +802,7 @@ static void ab8500_chargalg_end_of_charge(struct ab8500_chargalg *di)
+ if (di->charge_status == POWER_SUPPLY_STATUS_CHARGING &&
+ di->charge_state == STATE_NORMAL &&
+ !di->maintenance_chg && (di->batt_data.volt_uv >=
+- di->bm->bi->overvoltage_limit_uv ||
++ di->bm->bi->voltage_max_design_uv ||
+ di->events.usb_cv_active || di->events.ac_cv_active) &&
+ di->batt_data.avg_curr_ua <
+ di->bm->bi->charge_term_current_ua &&
+@@ -2020,11 +2020,11 @@ static int ab8500_chargalg_probe(struct platform_device *pdev)
+ psy_cfg.drv_data = di;
+
+ /* Initilialize safety timer */
+- hrtimer_init(&di->safety_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
++ hrtimer_init(&di->safety_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ di->safety_timer.function = ab8500_chargalg_safety_timer_expired;
+
+ /* Initilialize maintenance timer */
+- hrtimer_init(&di->maintenance_timer, CLOCK_REALTIME, HRTIMER_MODE_ABS);
++ hrtimer_init(&di->maintenance_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ di->maintenance_timer.function =
+ ab8500_chargalg_maintenance_timer_expired;
+
+diff --git a/drivers/power/supply/ab8500_fg.c b/drivers/power/supply/ab8500_fg.c
+index b0919a6a65878..09a4cbd69676a 100644
+--- a/drivers/power/supply/ab8500_fg.c
++++ b/drivers/power/supply/ab8500_fg.c
+@@ -2263,7 +2263,13 @@ static int ab8500_fg_init_hw_registers(struct ab8500_fg *di)
+ {
+ int ret;
+
+- /* Set VBAT OVV threshold */
++ /*
++ * Set VBAT OVV (overvoltage) threshold to 4.75V (typ) this is what
++ * the hardware supports, nothing else can be configured in hardware.
++ * See this as an "outer limit" where the charger will certainly
++ * shut down. Other (lower) overvoltage levels need to be implemented
++ * in software.
++ */
+ ret = abx500_mask_and_set_register_interruptible(di->dev,
+ AB8500_CHARGER,
+ AB8500_BATT_OVV,
+@@ -2521,8 +2527,10 @@ static int ab8500_fg_sysfs_init(struct ab8500_fg *di)
+ ret = kobject_init_and_add(&di->fg_kobject,
+ &ab8500_fg_ktype,
+ NULL, "battery");
+- if (ret < 0)
++ if (ret < 0) {
++ kobject_put(&di->fg_kobject);
+ dev_err(di->dev, "failed to create sysfs entry\n");
++ }
+
+ return ret;
+ }
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index 06c34b09349ca..8ad1b3b02490c 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -39,6 +39,7 @@
+ #define BQ24190_REG_POC_CHG_CONFIG_DISABLE 0x0
+ #define BQ24190_REG_POC_CHG_CONFIG_CHARGE 0x1
+ #define BQ24190_REG_POC_CHG_CONFIG_OTG 0x2
++#define BQ24190_REG_POC_CHG_CONFIG_OTG_ALT 0x3
+ #define BQ24190_REG_POC_SYS_MIN_MASK (BIT(3) | BIT(2) | BIT(1))
+ #define BQ24190_REG_POC_SYS_MIN_SHIFT 1
+ #define BQ24190_REG_POC_SYS_MIN_MIN 3000
+@@ -550,7 +551,11 @@ static int bq24190_vbus_is_enabled(struct regulator_dev *dev)
+ pm_runtime_mark_last_busy(bdi->dev);
+ pm_runtime_put_autosuspend(bdi->dev);
+
+- return ret ? ret : val == BQ24190_REG_POC_CHG_CONFIG_OTG;
++ if (ret)
++ return ret;
++
++ return (val == BQ24190_REG_POC_CHG_CONFIG_OTG ||
++ val == BQ24190_REG_POC_CHG_CONFIG_OTG_ALT);
+ }
+
+ static const struct regulator_ops bq24190_vbus_ops = {
+diff --git a/drivers/power/supply/sbs-charger.c b/drivers/power/supply/sbs-charger.c
+index 6fa65d118ec12..b08f7d0c41815 100644
+--- a/drivers/power/supply/sbs-charger.c
++++ b/drivers/power/supply/sbs-charger.c
+@@ -18,6 +18,7 @@
+ #include <linux/interrupt.h>
+ #include <linux/regmap.h>
+ #include <linux/bitops.h>
++#include <linux/devm-helpers.h>
+
+ #define SBS_CHARGER_REG_SPEC_INFO 0x11
+ #define SBS_CHARGER_REG_STATUS 0x13
+@@ -209,7 +210,12 @@ static int sbs_probe(struct i2c_client *client,
+ if (ret)
+ return dev_err_probe(&client->dev, ret, "Failed to request irq\n");
+ } else {
+- INIT_DELAYED_WORK(&chip->work, sbs_delayed_work);
++ ret = devm_delayed_work_autocancel(&client->dev, &chip->work,
++ sbs_delayed_work);
++ if (ret)
++ return dev_err_probe(&client->dev, ret,
++ "Failed to init work for polling\n");
++
+ schedule_delayed_work(&chip->work,
+ msecs_to_jiffies(SBS_CHARGER_POLL_TIME));
+ }
+@@ -220,15 +226,6 @@ static int sbs_probe(struct i2c_client *client,
+ return 0;
+ }
+
+-static int sbs_remove(struct i2c_client *client)
+-{
+- struct sbs_info *chip = i2c_get_clientdata(client);
+-
+- cancel_delayed_work_sync(&chip->work);
+-
+- return 0;
+-}
+-
+ #ifdef CONFIG_OF
+ static const struct of_device_id sbs_dt_ids[] = {
+ { .compatible = "sbs,sbs-charger" },
+@@ -245,7 +242,6 @@ MODULE_DEVICE_TABLE(i2c, sbs_id);
+
+ static struct i2c_driver sbs_driver = {
+ .probe = sbs_probe,
+- .remove = sbs_remove,
+ .id_table = sbs_id,
+ .driver = {
+ .name = "sbs-charger",
+diff --git a/drivers/power/supply/wm8350_power.c b/drivers/power/supply/wm8350_power.c
+index e05cee457471b..908cfd45d2624 100644
+--- a/drivers/power/supply/wm8350_power.c
++++ b/drivers/power/supply/wm8350_power.c
+@@ -408,44 +408,112 @@ static const struct power_supply_desc wm8350_usb_desc = {
+ * Initialisation
+ *********************************************************************/
+
+-static void wm8350_init_charger(struct wm8350 *wm8350)
++static int wm8350_init_charger(struct wm8350 *wm8350)
+ {
++ int ret;
++
+ /* register our interest in charger events */
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT,
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT,
+ wm8350_charger_handler, 0, "Battery hot", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD,
++ if (ret)
++ goto err;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD,
+ wm8350_charger_handler, 0, "Battery cold", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL,
++ if (ret)
++ goto free_chg_bat_hot;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL,
+ wm8350_charger_handler, 0, "Battery fail", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_TO,
++ if (ret)
++ goto free_chg_bat_cold;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_TO,
+ wm8350_charger_handler, 0,
+ "Charger timeout", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_END,
++ if (ret)
++ goto free_chg_bat_fail;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_END,
+ wm8350_charger_handler, 0,
+ "Charge end", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_START,
++ if (ret)
++ goto free_chg_to;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_START,
+ wm8350_charger_handler, 0,
+ "Charge start", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY,
++ if (ret)
++ goto free_chg_end;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY,
+ wm8350_charger_handler, 0,
+ "Fast charge ready", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9,
++ if (ret)
++ goto free_chg_start;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9,
+ wm8350_charger_handler, 0,
+ "Battery <3.9V", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1,
++ if (ret)
++ goto free_chg_fast_rdy;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1,
+ wm8350_charger_handler, 0,
+ "Battery <3.1V", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85,
++ if (ret)
++ goto free_chg_vbatt_lt_3p9;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85,
+ wm8350_charger_handler, 0,
+ "Battery <2.85V", wm8350);
++ if (ret)
++ goto free_chg_vbatt_lt_3p1;
+
+ /* and supply change events */
+- wm8350_register_irq(wm8350, WM8350_IRQ_EXT_USB_FB,
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_USB_FB,
+ wm8350_charger_handler, 0, "USB", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_EXT_WALL_FB,
++ if (ret)
++ goto free_chg_vbatt_lt_2p85;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_WALL_FB,
+ wm8350_charger_handler, 0, "Wall", wm8350);
+- wm8350_register_irq(wm8350, WM8350_IRQ_EXT_BAT_FB,
++ if (ret)
++ goto free_ext_usb_fb;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_EXT_BAT_FB,
+ wm8350_charger_handler, 0, "Battery", wm8350);
++ if (ret)
++ goto free_ext_wall_fb;
++
++ return 0;
++
++free_ext_wall_fb:
++ wm8350_free_irq(wm8350, WM8350_IRQ_EXT_WALL_FB, wm8350);
++free_ext_usb_fb:
++ wm8350_free_irq(wm8350, WM8350_IRQ_EXT_USB_FB, wm8350);
++free_chg_vbatt_lt_2p85:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85, wm8350);
++free_chg_vbatt_lt_3p1:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1, wm8350);
++free_chg_vbatt_lt_3p9:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9, wm8350);
++free_chg_fast_rdy:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY, wm8350);
++free_chg_start:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_START, wm8350);
++free_chg_end:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_END, wm8350);
++free_chg_to:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_TO, wm8350);
++free_chg_bat_fail:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_FAIL, wm8350);
++free_chg_bat_cold:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_COLD, wm8350);
++free_chg_bat_hot:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_BAT_HOT, wm8350);
++err:
++ return ret;
+ }
+
+ static void free_charger_irq(struct wm8350 *wm8350)
+@@ -456,6 +524,7 @@ static void free_charger_irq(struct wm8350 *wm8350)
+ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_TO, wm8350);
+ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_END, wm8350);
+ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_START, wm8350);
++ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_FAST_RDY, wm8350);
+ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P9, wm8350);
+ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_3P1, wm8350);
+ wm8350_free_irq(wm8350, WM8350_IRQ_CHG_VBATT_LT_2P85, wm8350);
+diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
+index b740866b228d9..1e8cac699646c 100644
+--- a/drivers/powercap/dtpm_cpu.c
++++ b/drivers/powercap/dtpm_cpu.c
+@@ -150,10 +150,17 @@ static int update_pd_power_uw(struct dtpm *dtpm)
+ static void pd_release(struct dtpm *dtpm)
+ {
+ struct dtpm_cpu *dtpm_cpu = to_dtpm_cpu(dtpm);
++ struct cpufreq_policy *policy;
+
+ if (freq_qos_request_active(&dtpm_cpu->qos_req))
+ freq_qos_remove_request(&dtpm_cpu->qos_req);
+
++ policy = cpufreq_cpu_get(dtpm_cpu->cpu);
++ if (policy) {
++ for_each_cpu(dtpm_cpu->cpu, policy->related_cpus)
++ per_cpu(dtpm_per_cpu, dtpm_cpu->cpu) = NULL;
++ }
++
+ kfree(dtpm_cpu);
+ }
+
+diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c
+index 35799e6401c99..2f4b11b4dfcd9 100644
+--- a/drivers/pps/clients/pps-gpio.c
++++ b/drivers/pps/clients/pps-gpio.c
+@@ -169,7 +169,7 @@ static int pps_gpio_probe(struct platform_device *pdev)
+ /* GPIO setup */
+ ret = pps_gpio_setup(dev);
+ if (ret)
+- return -EINVAL;
++ return ret;
+
+ /* IRQ setup */
+ ret = gpiod_to_irq(data->gpio_pin);
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 0e4bc8b9329dd..b6f2cfd15dd2d 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -317,11 +317,18 @@ no_memory:
+ }
+ EXPORT_SYMBOL(ptp_clock_register);
+
++static int unregister_vclock(struct device *dev, void *data)
++{
++ struct ptp_clock *ptp = dev_get_drvdata(dev);
++
++ ptp_vclock_unregister(info_to_vclock(ptp->info));
++ return 0;
++}
++
+ int ptp_clock_unregister(struct ptp_clock *ptp)
+ {
+ if (ptp_vclock_in_use(ptp)) {
+- pr_err("ptp: virtual clock in use\n");
+- return -EBUSY;
++ device_for_each_child(&ptp->dev, NULL, unregister_vclock);
+ }
+
+ ptp->defunct = 1;
+diff --git a/drivers/pwm/pwm-lpc18xx-sct.c b/drivers/pwm/pwm-lpc18xx-sct.c
+index 8e461f3baa05a..8cc8ae16553cf 100644
+--- a/drivers/pwm/pwm-lpc18xx-sct.c
++++ b/drivers/pwm/pwm-lpc18xx-sct.c
+@@ -395,12 +395,6 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ lpc18xx_pwm_writel(lpc18xx_pwm, LPC18XX_PWM_LIMIT,
+ BIT(lpc18xx_pwm->period_event));
+
+- ret = pwmchip_add(&lpc18xx_pwm->chip);
+- if (ret < 0) {
+- dev_err(&pdev->dev, "pwmchip_add failed: %d\n", ret);
+- goto disable_pwmclk;
+- }
+-
+ for (i = 0; i < lpc18xx_pwm->chip.npwm; i++) {
+ struct lpc18xx_pwm_data *data;
+
+@@ -410,14 +404,12 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ GFP_KERNEL);
+ if (!data) {
+ ret = -ENOMEM;
+- goto remove_pwmchip;
++ goto disable_pwmclk;
+ }
+
+ pwm_set_chip_data(pwm, data);
+ }
+
+- platform_set_drvdata(pdev, lpc18xx_pwm);
+-
+ val = lpc18xx_pwm_readl(lpc18xx_pwm, LPC18XX_PWM_CTRL);
+ val &= ~LPC18XX_PWM_BIDIR;
+ val &= ~LPC18XX_PWM_CTRL_HALT;
+@@ -425,10 +417,16 @@ static int lpc18xx_pwm_probe(struct platform_device *pdev)
+ val |= LPC18XX_PWM_PRE(0);
+ lpc18xx_pwm_writel(lpc18xx_pwm, LPC18XX_PWM_CTRL, val);
+
++ ret = pwmchip_add(&lpc18xx_pwm->chip);
++ if (ret < 0) {
++ dev_err(&pdev->dev, "pwmchip_add failed: %d\n", ret);
++ goto disable_pwmclk;
++ }
++
++ platform_set_drvdata(pdev, lpc18xx_pwm);
++
+ return 0;
+
+-remove_pwmchip:
+- pwmchip_remove(&lpc18xx_pwm->chip);
+ disable_pwmclk:
+ clk_disable_unprepare(lpc18xx_pwm->pwm_clk);
+ return ret;
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 9fc666107a06c..8490aa8eecb1a 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -1317,8 +1317,10 @@ static int rpm_reg_probe(struct platform_device *pdev)
+
+ for_each_available_child_of_node(dev->of_node, node) {
+ vreg = devm_kzalloc(&pdev->dev, sizeof(*vreg), GFP_KERNEL);
+- if (!vreg)
++ if (!vreg) {
++ of_node_put(node);
+ return -ENOMEM;
++ }
+
+ ret = rpm_regulator_init_vreg(vreg, dev, node, rpm, vreg_data);
+
+diff --git a/drivers/regulator/rpi-panel-attiny-regulator.c b/drivers/regulator/rpi-panel-attiny-regulator.c
+index ee46bfbf5eee7..991b4730d7687 100644
+--- a/drivers/regulator/rpi-panel-attiny-regulator.c
++++ b/drivers/regulator/rpi-panel-attiny-regulator.c
+@@ -37,11 +37,24 @@ static const struct regmap_config attiny_regmap_config = {
+ static int attiny_lcd_power_enable(struct regulator_dev *rdev)
+ {
+ unsigned int data;
++ int ret, i;
+
+ regmap_write(rdev->regmap, REG_POWERON, 1);
++ msleep(80);
++
+ /* Wait for nPWRDWN to go low to indicate poweron is done. */
+- regmap_read_poll_timeout(rdev->regmap, REG_PORTB, data,
+- data & BIT(0), 10, 1000000);
++ for (i = 0; i < 20; i++) {
++ ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++ if (!ret) {
++ if (data & BIT(0))
++ break;
++ }
++ usleep_range(10000, 12000);
++ }
++ usleep_range(10000, 12000);
++
++ if (ret)
++ pr_err("%s: regmap_read_poll_timeout failed %d\n", __func__, ret);
+
+ /* Default to the same orientation as the closed source
+ * firmware used for the panel. Runtime rotation
+@@ -57,23 +70,34 @@ static int attiny_lcd_power_disable(struct regulator_dev *rdev)
+ {
+ regmap_write(rdev->regmap, REG_PWM, 0);
+ regmap_write(rdev->regmap, REG_POWERON, 0);
+- udelay(1);
++ msleep(30);
+ return 0;
+ }
+
+ static int attiny_lcd_power_is_enabled(struct regulator_dev *rdev)
+ {
+ unsigned int data;
+- int ret;
++ int ret, i;
+
+- ret = regmap_read(rdev->regmap, REG_POWERON, &data);
++ for (i = 0; i < 10; i++) {
++ ret = regmap_read(rdev->regmap, REG_POWERON, &data);
++ if (!ret)
++ break;
++ usleep_range(10000, 12000);
++ }
+ if (ret < 0)
+ return ret;
+
+ if (!(data & BIT(0)))
+ return 0;
+
+- ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++ for (i = 0; i < 10; i++) {
++ ret = regmap_read(rdev->regmap, REG_PORTB, &data);
++ if (!ret)
++ break;
++ usleep_range(10000, 12000);
++ }
++
+ if (ret < 0)
+ return ret;
+
+@@ -103,20 +127,32 @@ static int attiny_update_status(struct backlight_device *bl)
+ {
+ struct regmap *regmap = bl_get_data(bl);
+ int brightness = bl->props.brightness;
++ int ret, i;
+
+ if (bl->props.power != FB_BLANK_UNBLANK ||
+ bl->props.fb_blank != FB_BLANK_UNBLANK)
+ brightness = 0;
+
+- return regmap_write(regmap, REG_PWM, brightness);
++ for (i = 0; i < 10; i++) {
++ ret = regmap_write(regmap, REG_PWM, brightness);
++ if (!ret)
++ break;
++ }
++
++ return ret;
+ }
+
+ static int attiny_get_brightness(struct backlight_device *bl)
+ {
+ struct regmap *regmap = bl_get_data(bl);
+- int ret, brightness;
++ int ret, brightness, i;
++
++ for (i = 0; i < 10; i++) {
++ ret = regmap_read(regmap, REG_PWM, &brightness);
++ if (!ret)
++ break;
++ }
+
+- ret = regmap_read(regmap, REG_PWM, &brightness);
+ if (ret)
+ return ret;
+
+@@ -166,7 +202,7 @@ static int attiny_i2c_probe(struct i2c_client *i2c,
+ }
+
+ regmap_write(regmap, REG_POWERON, 0);
+- mdelay(1);
++ msleep(30);
+
+ config.dev = &i2c->dev;
+ config.regmap = regmap;
+diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
+index 098362e6e233b..7c02bc1322479 100644
+--- a/drivers/remoteproc/qcom_q6v5_adsp.c
++++ b/drivers/remoteproc/qcom_q6v5_adsp.c
+@@ -408,6 +408,7 @@ static int adsp_alloc_memory_region(struct qcom_adsp *adsp)
+ }
+
+ ret = of_address_to_resource(node, 0, &r);
++ of_node_put(node);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 43ea8455546ca..b9ab91540b00d 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1806,18 +1806,20 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ * reserved memory regions from device's memory-region property.
+ */
+ child = of_get_child_by_name(qproc->dev->of_node, "mba");
+- if (!child)
++ if (!child) {
+ node = of_parse_phandle(qproc->dev->of_node,
+ "memory-region", 0);
+- else
++ } else {
+ node = of_parse_phandle(child, "memory-region", 0);
++ of_node_put(child);
++ }
+
+ ret = of_address_to_resource(node, 0, &r);
++ of_node_put(node);
+ if (ret) {
+ dev_err(qproc->dev, "unable to resolve mba region\n");
+ return ret;
+ }
+- of_node_put(node);
+
+ qproc->mba_phys = r.start;
+ qproc->mba_size = resource_size(&r);
+@@ -1828,14 +1830,15 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
+ } else {
+ child = of_get_child_by_name(qproc->dev->of_node, "mpss");
+ node = of_parse_phandle(child, "memory-region", 0);
++ of_node_put(child);
+ }
+
+ ret = of_address_to_resource(node, 0, &r);
++ of_node_put(node);
+ if (ret) {
+ dev_err(qproc->dev, "unable to resolve mpss region\n");
+ return ret;
+ }
+- of_node_put(node);
+
+ qproc->mpss_phys = qproc->mpss_reloc = r.start;
+ qproc->mpss_size = resource_size(&r);
+diff --git a/drivers/remoteproc/qcom_wcnss.c b/drivers/remoteproc/qcom_wcnss.c
+index 80bbafee98463..9a223d394087f 100644
+--- a/drivers/remoteproc/qcom_wcnss.c
++++ b/drivers/remoteproc/qcom_wcnss.c
+@@ -500,6 +500,7 @@ static int wcnss_alloc_memory_region(struct qcom_wcnss *wcnss)
+ }
+
+ ret = of_address_to_resource(node, 0, &r);
++ of_node_put(node);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/remoteproc/remoteproc_debugfs.c b/drivers/remoteproc/remoteproc_debugfs.c
+index b5a1e3b697d9f..581930483ef84 100644
+--- a/drivers/remoteproc/remoteproc_debugfs.c
++++ b/drivers/remoteproc/remoteproc_debugfs.c
+@@ -76,7 +76,7 @@ static ssize_t rproc_coredump_write(struct file *filp,
+ int ret, err = 0;
+ char buf[20];
+
+- if (count > sizeof(buf))
++ if (count < 1 || count > sizeof(buf))
+ return -EINVAL;
+
+ ret = copy_from_user(buf, user_buf, count);
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index d8e8357981537..9edd662c69ace 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -804,9 +804,13 @@ static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
+ struct timerqueue_node *next = timerqueue_getnext(&rtc->timerqueue);
+ struct rtc_time tm;
+ ktime_t now;
++ int err;
++
++ err = __rtc_read_time(rtc, &tm);
++ if (err)
++ return err;
+
+ timer->enabled = 1;
+- __rtc_read_time(rtc, &tm);
+ now = rtc_tm_to_ktime(tm);
+
+ /* Skip over expired timers */
+@@ -820,7 +824,6 @@ static int rtc_timer_enqueue(struct rtc_device *rtc, struct rtc_timer *timer)
+ trace_rtc_timer_enqueue(timer);
+ if (!next || ktime_before(timer->node.expires, next->expires)) {
+ struct rtc_wkalrm alarm;
+- int err;
+
+ alarm.time = rtc_ktime_to_tm(timer->node.expires);
+ alarm.enabled = 1;
+diff --git a/drivers/rtc/rtc-gamecube.c b/drivers/rtc/rtc-gamecube.c
+index f717b36f4738c..18ca3b38b2d04 100644
+--- a/drivers/rtc/rtc-gamecube.c
++++ b/drivers/rtc/rtc-gamecube.c
+@@ -235,6 +235,7 @@ static int gamecube_rtc_read_offset_from_sram(struct priv *d)
+ }
+
+ ret = of_address_to_resource(np, 0, &res);
++ of_node_put(np);
+ if (ret) {
+ pr_err("no io memory range found\n");
+ return -1;
+diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c
+index ae9f131b43c0c..562f99b664a24 100644
+--- a/drivers/rtc/rtc-mc146818-lib.c
++++ b/drivers/rtc/rtc-mc146818-lib.c
+@@ -232,8 +232,10 @@ int mc146818_set_time(struct rtc_time *time)
+ if (yrs >= 100)
+ yrs -= 100;
+
+- if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY)
+- || RTC_ALWAYS_BCD) {
++ spin_lock_irqsave(&rtc_lock, flags);
++ save_control = CMOS_READ(RTC_CONTROL);
++ spin_unlock_irqrestore(&rtc_lock, flags);
++ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ sec = bin2bcd(sec);
+ min = bin2bcd(min);
+ hrs = bin2bcd(hrs);
+diff --git a/drivers/rtc/rtc-pl031.c b/drivers/rtc/rtc-pl031.c
+index e38ee88483855..bad6a5d9c6839 100644
+--- a/drivers/rtc/rtc-pl031.c
++++ b/drivers/rtc/rtc-pl031.c
+@@ -350,9 +350,6 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+ }
+ }
+
+- if (!adev->irq[0])
+- clear_bit(RTC_FEATURE_ALARM, ldata->rtc->features);
+-
+ device_init_wakeup(&adev->dev, true);
+ ldata->rtc = devm_rtc_allocate_device(&adev->dev);
+ if (IS_ERR(ldata->rtc)) {
+@@ -360,6 +357,9 @@ static int pl031_probe(struct amba_device *adev, const struct amba_id *id)
+ goto out;
+ }
+
++ if (!adev->irq[0])
++ clear_bit(RTC_FEATURE_ALARM, ldata->rtc->features);
++
+ ldata->rtc->ops = ops;
+ ldata->rtc->range_min = vendor->range_min;
+ ldata->rtc->range_max = vendor->range_max;
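
The pl031 hunks fix a use-before-allocation: the alarm feature bit was being cleared through ldata->rtc before devm_rtc_allocate_device() had assigned that pointer. The fix is pure ordering, allocate first, then configure. Reduced to its skeleton (the driver's actual error path is a goto, not a return):

	ldata->rtc = devm_rtc_allocate_device(&adev->dev);
	if (IS_ERR(ldata->rtc))
		return PTR_ERR(ldata->rtc);

	/* only now is ldata->rtc valid to configure */
	if (!adev->irq[0])
		clear_bit(RTC_FEATURE_ALARM, ldata->rtc->features);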
+diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
+index 40a52feb315da..65047806a5410 100644
+--- a/drivers/scsi/fnic/fnic_scsi.c
++++ b/drivers/scsi/fnic/fnic_scsi.c
+@@ -604,7 +604,7 @@ out:
+
+ FNIC_TRACE(fnic_queuecommand, sc->device->host->host_no,
+ tag, sc, io_req, sg_count, cmd_trace,
+- (((u64)CMD_FLAGS(sc) >> 32) | CMD_STATE(sc)));
++ (((u64)CMD_FLAGS(sc) << 32) | CMD_STATE(sc)));
+
+ /* if only we issued IO, will we have the io lock */
+ if (io_lock_acquired)
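
The fnic change flips a shift direction. The trace word is meant to carry the command flags in its upper 32 bits and the command state in the lower 32, but shifting right by 32 discards the flags instead of positioning them. A self-contained illustration (plain C, not driver code):

#include <stdint.h>
#include <stdio.h>

/* Pack two 32-bit values into one 64-bit trace word: flags high,
 * state low.  The broken form shifted right and always produced
 * zero in the high half. */
static uint64_t pack_trace(uint32_t flags, uint32_t state)
{
	return ((uint64_t)flags << 32) | state;
}

int main(void)
{
	printf("0x%016llx\n",
	       (unsigned long long)pack_trace(0xdead, 0xbeef));
	return 0;	/* prints 0x0000dead0000beef */
}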
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index a01a3a7b706b5..70173389f6ebd 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -530,7 +530,7 @@ MODULE_PARM_DESC(intr_conv, "interrupt converge enable (0-1)");
+
+ /* permit overriding the host protection capabilities mask (EEDP/T10 PI) */
+ static int prot_mask;
+-module_param(prot_mask, int, 0);
++module_param(prot_mask, int, 0444);
+ MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=0x0 ");
+
+ static void debugfs_work_handler_v3_hw(struct work_struct *work);
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index a315715b36227..7e0cde710fc3c 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -197,7 +197,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ task->total_xfer_len = qc->nbytes;
+ task->num_scatter = qc->n_elem;
+ task->data_dir = qc->dma_dir;
+- } else if (qc->tf.protocol == ATA_PROT_NODATA) {
++ } else if (!ata_is_data(qc->tf.protocol)) {
+ task->data_dir = DMA_NONE;
+ } else {
+ for_each_sg(qc->sg, sg, qc->n_elem, si)
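
The libsas change widens an exact-protocol test into the ata_is_data() predicate. ATA_PROT_NODATA is not the only protocol that moves no payload (NCQ NODATA commands are the notable other case), and a non-data command that falls through to the scatter-gather branch gets a bogus transfer description. With the predicate, every non-data protocol takes the DMA_NONE path:

	} else if (!ata_is_data(qc->tf.protocol)) {
		/* NODATA and NCQ NODATA alike carry no payload */
		task->data_dir = DMA_NONE;
	}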
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 76229b839560a..fb5a3a348dbec 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -5736,14 +5736,13 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
+ */
+
+ static int
+-mpt3sas_check_same_4gb_region(long reply_pool_start_address, u32 pool_sz)
++mpt3sas_check_same_4gb_region(dma_addr_t start_address, u32 pool_sz)
+ {
+- long reply_pool_end_address;
++ dma_addr_t end_address;
+
+- reply_pool_end_address = reply_pool_start_address + pool_sz;
++ end_address = start_address + pool_sz - 1;
+
+- if (upper_32_bits(reply_pool_start_address) ==
+- upper_32_bits(reply_pool_end_address))
++ if (upper_32_bits(start_address) == upper_32_bits(end_address))
+ return 1;
+ else
+ return 0;
+@@ -5804,7 +5803,7 @@ _base_allocate_pcie_sgl_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ }
+
+ if (!mpt3sas_check_same_4gb_region(
+- (long)ioc->pcie_sg_lookup[i].pcie_sgl, sz)) {
++ ioc->pcie_sg_lookup[i].pcie_sgl_dma, sz)) {
+ ioc_err(ioc, "PCIE SGLs are not in same 4G !! pcie sgl (0x%p) dma = (0x%llx)\n",
+ ioc->pcie_sg_lookup[i].pcie_sgl,
+ (unsigned long long)
+@@ -5859,8 +5858,8 @@ _base_allocate_chain_dma_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ GFP_KERNEL, &ctr->chain_buffer_dma);
+ if (!ctr->chain_buffer)
+ return -EAGAIN;
+- if (!mpt3sas_check_same_4gb_region((long)
+- ctr->chain_buffer, ioc->chain_segment_sz)) {
++ if (!mpt3sas_check_same_4gb_region(
++ ctr->chain_buffer_dma, ioc->chain_segment_sz)) {
+ ioc_err(ioc,
+ "Chain buffers are not in same 4G !!! Chain buff (0x%p) dma = (0x%llx)\n",
+ ctr->chain_buffer,
+@@ -5896,7 +5895,7 @@ _base_allocate_sense_dma_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ GFP_KERNEL, &ioc->sense_dma);
+ if (!ioc->sense)
+ return -EAGAIN;
+- if (!mpt3sas_check_same_4gb_region((long)ioc->sense, sz)) {
++ if (!mpt3sas_check_same_4gb_region(ioc->sense_dma, sz)) {
+ dinitprintk(ioc, pr_err(
+ "Bad Sense Pool! sense (0x%p) sense_dma = (0x%llx)\n",
+ ioc->sense, (unsigned long long) ioc->sense_dma));
+@@ -5929,7 +5928,7 @@ _base_allocate_reply_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ &ioc->reply_dma);
+ if (!ioc->reply)
+ return -EAGAIN;
+- if (!mpt3sas_check_same_4gb_region((long)ioc->reply_free, sz)) {
++ if (!mpt3sas_check_same_4gb_region(ioc->reply_dma, sz)) {
+ dinitprintk(ioc, pr_err(
+ "Bad Reply Pool! Reply (0x%p) Reply dma = (0x%llx)\n",
+ ioc->reply, (unsigned long long) ioc->reply_dma));
+@@ -5964,7 +5963,7 @@ _base_allocate_reply_free_dma_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
+ GFP_KERNEL, &ioc->reply_free_dma);
+ if (!ioc->reply_free)
+ return -EAGAIN;
+- if (!mpt3sas_check_same_4gb_region((long)ioc->reply_free, sz)) {
++ if (!mpt3sas_check_same_4gb_region(ioc->reply_free_dma, sz)) {
+ dinitprintk(ioc,
+ pr_err("Bad Reply Free Pool! Reply Free (0x%p) Reply Free dma = (0x%llx)\n",
+ ioc->reply_free, (unsigned long long) ioc->reply_free_dma));
+@@ -6003,7 +6002,7 @@ _base_allocate_reply_post_free_array(struct MPT3SAS_ADAPTER *ioc,
+ GFP_KERNEL, &ioc->reply_post_free_array_dma);
+ if (!ioc->reply_post_free_array)
+ return -EAGAIN;
+- if (!mpt3sas_check_same_4gb_region((long)ioc->reply_post_free_array,
++ if (!mpt3sas_check_same_4gb_region(ioc->reply_post_free_array_dma,
+ reply_post_free_array_sz)) {
+ dinitprintk(ioc, pr_err(
+ "Bad Reply Free Pool! Reply Free (0x%p) Reply Free dma = (0x%llx)\n",
+@@ -6068,7 +6067,7 @@ base_alloc_rdpq_dma_pool(struct MPT3SAS_ADAPTER *ioc, int sz)
+ * resources and set DMA mask to 32 and allocate.
+ */
+ if (!mpt3sas_check_same_4gb_region(
+- (long)ioc->reply_post[i].reply_post_free, sz)) {
++ ioc->reply_post[i].reply_post_free_dma, sz)) {
+ dinitprintk(ioc,
+ ioc_err(ioc, "bad Replypost free pool(0x%p)"
+ "reply_post_free_dma = (0x%llx)\n",
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 9ec310b795c33..d853e8d0195a6 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -1788,6 +1788,7 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ ccb->device = pm8001_ha_dev;
+ ccb->ccb_tag = ccb_tag;
+ ccb->task = task;
++ ccb->n_elem = 0;
+
+ circularQ = &pm8001_ha->inbnd_q_tbl[0];
+
+@@ -1849,6 +1850,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+ ccb->device = pm8001_ha_dev;
+ ccb->ccb_tag = ccb_tag;
+ ccb->task = task;
++ ccb->n_elem = 0;
+ pm8001_ha_dev->id |= NCQ_READ_LOG_FLAG;
+ pm8001_ha_dev->id |= NCQ_2ND_RLE_FLAG;
+
+@@ -1865,7 +1867,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
+
+ sata_cmd.tag = cpu_to_le32(ccb_tag);
+ sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
+- sata_cmd.ncqtag_atap_dir_m |= ((0x1 << 7) | (0x5 << 9));
++ sata_cmd.ncqtag_atap_dir_m = cpu_to_le32((0x1 << 7) | (0x5 << 9));
+ memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis));
+
+ res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd,
+@@ -2418,7 +2420,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha, void *piomb)
+ len = sizeof(struct pio_setup_fis);
+ pm8001_dbg(pm8001_ha, IO,
+ "PIO read len = %d\n", len);
+- } else if (t->ata_task.use_ncq) {
++ } else if (t->ata_task.use_ncq &&
++ t->data_dir != DMA_NONE) {
+ len = sizeof(struct set_dev_bits_fis);
+ pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
+ len);
+@@ -4271,22 +4274,22 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ u32 opc = OPC_INB_SATA_HOST_OPSTART;
+ memset(&sata_cmd, 0, sizeof(sata_cmd));
+ circularQ = &pm8001_ha->inbnd_q_tbl[0];
+- if (task->data_dir == DMA_NONE) {
++
++ if (task->data_dir == DMA_NONE && !task->ata_task.use_ncq) {
+ ATAP = 0x04; /* no data*/
+ pm8001_dbg(pm8001_ha, IO, "no data\n");
+ } else if (likely(!task->ata_task.device_control_reg_update)) {
+- if (task->ata_task.dma_xfer) {
++ if (task->ata_task.use_ncq &&
++ dev->sata_dev.class != ATA_DEV_ATAPI) {
++ ATAP = 0x07; /* FPDMA */
++ pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
++ } else if (task->ata_task.dma_xfer) {
+ ATAP = 0x06; /* DMA */
+ pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ } else {
+ ATAP = 0x05; /* PIO*/
+ pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ }
+- if (task->ata_task.use_ncq &&
+- dev->sata_dev.class != ATA_DEV_ATAPI) {
+- ATAP = 0x07; /* FPDMA */
+- pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+- }
+ }
+ if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+ task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3);
+@@ -4626,7 +4629,7 @@ int pm8001_chip_ssp_tm_req(struct pm8001_hba_info *pm8001_ha,
+ memcpy(sspTMCmd.lun, task->ssp_task.LUN, 8);
+ sspTMCmd.tag = cpu_to_le32(ccb->ccb_tag);
+ if (pm8001_ha->chip_id != chip_8001)
+- sspTMCmd.ds_ads_m = 0x08;
++ sspTMCmd.ds_ads_m = cpu_to_le32(0x08);
+ circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ ret = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sspTMCmd,
+ sizeof(sspTMCmd), 0);
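
Beyond the ordering fix (FPDMA has to be chosen before the generic DMA test, and an NCQ command must not be classified as no-data merely because data_dir is DMA_NONE), the pm8001 hunks are little-endian hygiene: the inbound-queue descriptor fields are __le32, so they must be composed with cpu_to_le32() rather than OR'd with host-order constants, which happens to work on x86 and silently builds a corrupt descriptor on big-endian hosts. Replacing |= with = is safe where the structure was just zeroed, as the visible memset of sata_cmd shows. A userspace illustration of the convention, with glibc's <endian.h> standing in for the kernel helpers:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* compose in CPU order, convert once on store ... */
	uint32_t ncqtag_atap_dir_m = htole32((0x1 << 7) | (0x5 << 9));

	/* ... and convert back for any debug print */
	printf("stored 0x%08x, logical 0x%08x\n",
	       (unsigned)ncqtag_atap_dir_m,
	       (unsigned)le32toh(ncqtag_atap_dir_m));
	return 0;
}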
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 9d20f8009b89f..908dbac20b483 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -1203,9 +1203,11 @@ pm80xx_set_thermal_config(struct pm8001_hba_info *pm8001_ha)
+ else
+ page_code = THERMAL_PAGE_CODE_8H;
+
+- payload.cfg_pg[0] = (THERMAL_LOG_ENABLE << 9) |
+- (THERMAL_ENABLE << 8) | page_code;
+- payload.cfg_pg[1] = (LTEMPHIL << 24) | (RTEMPHIL << 8);
++ payload.cfg_pg[0] =
++ cpu_to_le32((THERMAL_LOG_ENABLE << 9) |
++ (THERMAL_ENABLE << 8) | page_code);
++ payload.cfg_pg[1] =
++ cpu_to_le32((LTEMPHIL << 24) | (RTEMPHIL << 8));
+
+ pm8001_dbg(pm8001_ha, DEV,
+ "Setting up thermal config. cfg_pg 0 0x%x cfg_pg 1 0x%x\n",
+@@ -1245,43 +1247,41 @@ pm80xx_set_sas_protocol_timer_config(struct pm8001_hba_info *pm8001_ha)
+ circularQ = &pm8001_ha->inbnd_q_tbl[0];
+ payload.tag = cpu_to_le32(tag);
+
+- SASConfigPage.pageCode = SAS_PROTOCOL_TIMER_CONFIG_PAGE;
+- SASConfigPage.MST_MSI = 3 << 15;
+- SASConfigPage.STP_SSP_MCT_TMO = (STP_MCT_TMO << 16) | SSP_MCT_TMO;
+- SASConfigPage.STP_FRM_TMO = (SAS_MAX_OPEN_TIME << 24) |
+- (SMP_MAX_CONN_TIMER << 16) | STP_FRM_TIMER;
+- SASConfigPage.STP_IDLE_TMO = STP_IDLE_TIME;
+-
+- if (SASConfigPage.STP_IDLE_TMO > 0x3FFFFFF)
+- SASConfigPage.STP_IDLE_TMO = 0x3FFFFFF;
+-
+-
+- SASConfigPage.OPNRJT_RTRY_INTVL = (SAS_MFD << 16) |
+- SAS_OPNRJT_RTRY_INTVL;
+- SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO = (SAS_DOPNRJT_RTRY_TMO << 16)
+- | SAS_COPNRJT_RTRY_TMO;
+- SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR = (SAS_DOPNRJT_RTRY_THR << 16)
+- | SAS_COPNRJT_RTRY_THR;
+- SASConfigPage.MAX_AIP = SAS_MAX_AIP;
++ SASConfigPage.pageCode = cpu_to_le32(SAS_PROTOCOL_TIMER_CONFIG_PAGE);
++ SASConfigPage.MST_MSI = cpu_to_le32(3 << 15);
++ SASConfigPage.STP_SSP_MCT_TMO =
++ cpu_to_le32((STP_MCT_TMO << 16) | SSP_MCT_TMO);
++ SASConfigPage.STP_FRM_TMO =
++ cpu_to_le32((SAS_MAX_OPEN_TIME << 24) |
++ (SMP_MAX_CONN_TIMER << 16) | STP_FRM_TIMER);
++ SASConfigPage.STP_IDLE_TMO = cpu_to_le32(STP_IDLE_TIME);
++
++ SASConfigPage.OPNRJT_RTRY_INTVL =
++ cpu_to_le32((SAS_MFD << 16) | SAS_OPNRJT_RTRY_INTVL);
++ SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO =
++ cpu_to_le32((SAS_DOPNRJT_RTRY_TMO << 16) | SAS_COPNRJT_RTRY_TMO);
++ SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR =
++ cpu_to_le32((SAS_DOPNRJT_RTRY_THR << 16) | SAS_COPNRJT_RTRY_THR);
++ SASConfigPage.MAX_AIP = cpu_to_le32(SAS_MAX_AIP);
+
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.pageCode 0x%08x\n",
+- SASConfigPage.pageCode);
++ le32_to_cpu(SASConfigPage.pageCode));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MST_MSI 0x%08x\n",
+- SASConfigPage.MST_MSI);
++ le32_to_cpu(SASConfigPage.MST_MSI));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_SSP_MCT_TMO 0x%08x\n",
+- SASConfigPage.STP_SSP_MCT_TMO);
++ le32_to_cpu(SASConfigPage.STP_SSP_MCT_TMO));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_FRM_TMO 0x%08x\n",
+- SASConfigPage.STP_FRM_TMO);
++ le32_to_cpu(SASConfigPage.STP_FRM_TMO));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.STP_IDLE_TMO 0x%08x\n",
+- SASConfigPage.STP_IDLE_TMO);
++ le32_to_cpu(SASConfigPage.STP_IDLE_TMO));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.OPNRJT_RTRY_INTVL 0x%08x\n",
+- SASConfigPage.OPNRJT_RTRY_INTVL);
++ le32_to_cpu(SASConfigPage.OPNRJT_RTRY_INTVL));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO 0x%08x\n",
+- SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO);
++ le32_to_cpu(SASConfigPage.Data_Cmd_OPNRJT_RTRY_TMO));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR 0x%08x\n",
+- SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR);
++ le32_to_cpu(SASConfigPage.Data_Cmd_OPNRJT_RTRY_THR));
+ pm8001_dbg(pm8001_ha, INIT, "SASConfigPage.MAX_AIP 0x%08x\n",
+- SASConfigPage.MAX_AIP);
++ le32_to_cpu(SASConfigPage.MAX_AIP));
+
+ memcpy(&payload.cfg_pg, &SASConfigPage,
+ sizeof(SASProtocolTimerConfig_t));
+@@ -1407,12 +1407,13 @@ static int pm80xx_encrypt_update(struct pm8001_hba_info *pm8001_ha)
+ /* Currently only one key is used. New KEK index is 1.
+ * Current KEK index is 1. Store KEK to NVRAM is 1.
+ */
+- payload.new_curidx_ksop = ((1 << 24) | (1 << 16) | (1 << 8) |
+- KEK_MGMT_SUBOP_KEYCARDUPDATE);
++ payload.new_curidx_ksop =
++ cpu_to_le32(((1 << 24) | (1 << 16) | (1 << 8) |
++ KEK_MGMT_SUBOP_KEYCARDUPDATE));
+
+ pm8001_dbg(pm8001_ha, DEV,
+ "Saving Encryption info to flash. payload 0x%x\n",
+- payload.new_curidx_ksop);
++ le32_to_cpu(payload.new_curidx_ksop));
+
+ rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &payload,
+ sizeof(payload), 0);
+@@ -1801,6 +1802,7 @@ static void pm80xx_send_abort_all(struct pm8001_hba_info *pm8001_ha,
+ ccb->device = pm8001_ha_dev;
+ ccb->ccb_tag = ccb_tag;
+ ccb->task = task;
++ ccb->n_elem = 0;
+
+ circularQ = &pm8001_ha->inbnd_q_tbl[0];
+
+@@ -1882,7 +1884,7 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
+
+ sata_cmd.tag = cpu_to_le32(ccb_tag);
+ sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
+- sata_cmd.ncqtag_atap_dir_m_dad |= ((0x1 << 7) | (0x5 << 9));
++ sata_cmd.ncqtag_atap_dir_m_dad = cpu_to_le32(((0x1 << 7) | (0x5 << 9)));
+ memcpy(&sata_cmd.sata_fis, &fis, sizeof(struct host_to_dev_fis));
+
+ res = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &sata_cmd,
+@@ -2510,7 +2512,8 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
+ len = sizeof(struct pio_setup_fis);
+ pm8001_dbg(pm8001_ha, IO,
+ "PIO read len = %d\n", len);
+- } else if (t->ata_task.use_ncq) {
++ } else if (t->ata_task.use_ncq &&
++ t->data_dir != DMA_NONE) {
+ len = sizeof(struct set_dev_bits_fis);
+ pm8001_dbg(pm8001_ha, IO, "FPDMA len = %d\n",
+ len);
+@@ -4379,13 +4382,15 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ struct ssp_ini_io_start_req ssp_cmd;
+ u32 tag = ccb->ccb_tag;
+ int ret;
+- u64 phys_addr, start_addr, end_addr;
++ u64 phys_addr, end_addr;
+ u32 end_addr_high, end_addr_low;
+ struct inbound_queue_table *circularQ;
+ u32 q_index, cpu_id;
+ u32 opc = OPC_INB_SSPINIIOSTART;
++
+ memset(&ssp_cmd, 0, sizeof(ssp_cmd));
+ memcpy(ssp_cmd.ssp_iu.lun, task->ssp_task.LUN, 8);
++
+ /* data address domain added for spcv; set to 0 by host,
+ * used internally by controller
+ * 0 for SAS 1.1 and SAS 2.0 compatible TLR
+@@ -4396,7 +4401,7 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ ssp_cmd.device_id = cpu_to_le32(pm8001_dev->device_id);
+ ssp_cmd.tag = cpu_to_le32(tag);
+ if (task->ssp_task.enable_first_burst)
+- ssp_cmd.ssp_iu.efb_prio_attr |= 0x80;
++ ssp_cmd.ssp_iu.efb_prio_attr = 0x80;
+ ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_prio << 3);
+ ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_attr & 7);
+ memcpy(ssp_cmd.ssp_iu.cdb, task->ssp_task.cmd->cmnd,
+@@ -4428,21 +4433,24 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ ssp_cmd.enc_esgl = cpu_to_le32(1<<31);
+ } else if (task->num_scatter == 1) {
+ u64 dma_addr = sg_dma_address(task->scatter);
++
+ ssp_cmd.enc_addr_low =
+ cpu_to_le32(lower_32_bits(dma_addr));
+ ssp_cmd.enc_addr_high =
+ cpu_to_le32(upper_32_bits(dma_addr));
+ ssp_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ ssp_cmd.enc_esgl = 0;
++
+ /* Check 4G Boundary */
+- start_addr = cpu_to_le64(dma_addr);
+- end_addr = (start_addr + ssp_cmd.enc_len) - 1;
+- end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+- end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+- if (end_addr_high != ssp_cmd.enc_addr_high) {
++ end_addr = dma_addr + le32_to_cpu(ssp_cmd.enc_len) - 1;
++ end_addr_low = lower_32_bits(end_addr);
++ end_addr_high = upper_32_bits(end_addr);
++
++ if (end_addr_high != le32_to_cpu(ssp_cmd.enc_addr_high)) {
+ pm8001_dbg(pm8001_ha, FAIL,
+ "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+- start_addr, ssp_cmd.enc_len,
++ dma_addr,
++ le32_to_cpu(ssp_cmd.enc_len),
+ end_addr_high, end_addr_low);
+ pm8001_chip_make_sg(task->scatter, 1,
+ ccb->buf_prd);
+@@ -4451,7 +4459,7 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ cpu_to_le32(lower_32_bits(phys_addr));
+ ssp_cmd.enc_addr_high =
+ cpu_to_le32(upper_32_bits(phys_addr));
+- ssp_cmd.enc_esgl = cpu_to_le32(1<<31);
++ ssp_cmd.enc_esgl = cpu_to_le32(1U<<31);
+ }
+ } else if (task->num_scatter == 0) {
+ ssp_cmd.enc_addr_low = 0;
+@@ -4459,8 +4467,10 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ ssp_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ ssp_cmd.enc_esgl = 0;
+ }
++
+ /* XTS mode. All other fields are 0 */
+- ssp_cmd.key_cmode = 0x6 << 4;
++ ssp_cmd.key_cmode = cpu_to_le32(0x6 << 4);
++
+ /* set tweak values. Should be the start lba */
+ ssp_cmd.twk_val0 = cpu_to_le32((task->ssp_task.cmd->cmnd[2] << 24) |
+ (task->ssp_task.cmd->cmnd[3] << 16) |
+@@ -4482,20 +4492,22 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
+ ssp_cmd.esgl = cpu_to_le32(1<<31);
+ } else if (task->num_scatter == 1) {
+ u64 dma_addr = sg_dma_address(task->scatter);
++
+ ssp_cmd.addr_low = cpu_to_le32(lower_32_bits(dma_addr));
+ ssp_cmd.addr_high =
+ cpu_to_le32(upper_32_bits(dma_addr));
+ ssp_cmd.len = cpu_to_le32(task->total_xfer_len);
+ ssp_cmd.esgl = 0;
++
+ /* Check 4G Boundary */
+- start_addr = cpu_to_le64(dma_addr);
+- end_addr = (start_addr + ssp_cmd.len) - 1;
+- end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+- end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+- if (end_addr_high != ssp_cmd.addr_high) {
++ end_addr = dma_addr + le32_to_cpu(ssp_cmd.len) - 1;
++ end_addr_low = lower_32_bits(end_addr);
++ end_addr_high = upper_32_bits(end_addr);
++ if (end_addr_high != le32_to_cpu(ssp_cmd.addr_high)) {
+ pm8001_dbg(pm8001_ha, FAIL,
+ "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+- start_addr, ssp_cmd.len,
++ dma_addr,
++ le32_to_cpu(ssp_cmd.len),
+ end_addr_high, end_addr_low);
+ pm8001_chip_make_sg(task->scatter, 1,
+ ccb->buf_prd);
+@@ -4530,7 +4542,7 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ u32 q_index, cpu_id;
+ struct sata_start_req sata_cmd;
+ u32 hdr_tag, ncg_tag = 0;
+- u64 phys_addr, start_addr, end_addr;
++ u64 phys_addr, end_addr;
+ u32 end_addr_high, end_addr_low;
+ u32 ATAP = 0x0;
+ u32 dir;
+@@ -4542,22 +4554,21 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ q_index = (u32) (cpu_id) % (pm8001_ha->max_q_num);
+ circularQ = &pm8001_ha->inbnd_q_tbl[q_index];
+
+- if (task->data_dir == DMA_NONE) {
++ if (task->data_dir == DMA_NONE && !task->ata_task.use_ncq) {
+ ATAP = 0x04; /* no data*/
+ pm8001_dbg(pm8001_ha, IO, "no data\n");
+ } else if (likely(!task->ata_task.device_control_reg_update)) {
+- if (task->ata_task.dma_xfer) {
++ if (task->ata_task.use_ncq &&
++ dev->sata_dev.class != ATA_DEV_ATAPI) {
++ ATAP = 0x07; /* FPDMA */
++ pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
++ } else if (task->ata_task.dma_xfer) {
+ ATAP = 0x06; /* DMA */
+ pm8001_dbg(pm8001_ha, IO, "DMA\n");
+ } else {
+ ATAP = 0x05; /* PIO*/
+ pm8001_dbg(pm8001_ha, IO, "PIO\n");
+ }
+- if (task->ata_task.use_ncq &&
+- dev->sata_dev.class != ATA_DEV_ATAPI) {
+- ATAP = 0x07; /* FPDMA */
+- pm8001_dbg(pm8001_ha, IO, "FPDMA\n");
+- }
+ }
+ if (task->ata_task.use_ncq && pm8001_get_ncq_tag(task, &hdr_tag)) {
+ task->ata_task.fis.sector_count |= (u8) (hdr_tag << 3);
+@@ -4591,32 +4602,38 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ pm8001_chip_make_sg(task->scatter,
+ ccb->n_elem, ccb->buf_prd);
+ phys_addr = ccb->ccb_dma_handle;
+- sata_cmd.enc_addr_low = lower_32_bits(phys_addr);
+- sata_cmd.enc_addr_high = upper_32_bits(phys_addr);
++ sata_cmd.enc_addr_low =
++ cpu_to_le32(lower_32_bits(phys_addr));
++ sata_cmd.enc_addr_high =
++ cpu_to_le32(upper_32_bits(phys_addr));
+ sata_cmd.enc_esgl = cpu_to_le32(1 << 31);
+ } else if (task->num_scatter == 1) {
+ u64 dma_addr = sg_dma_address(task->scatter);
+- sata_cmd.enc_addr_low = lower_32_bits(dma_addr);
+- sata_cmd.enc_addr_high = upper_32_bits(dma_addr);
++
++ sata_cmd.enc_addr_low =
++ cpu_to_le32(lower_32_bits(dma_addr));
++ sata_cmd.enc_addr_high =
++ cpu_to_le32(upper_32_bits(dma_addr));
+ sata_cmd.enc_len = cpu_to_le32(task->total_xfer_len);
+ sata_cmd.enc_esgl = 0;
++
+ /* Check 4G Boundary */
+- start_addr = cpu_to_le64(dma_addr);
+- end_addr = (start_addr + sata_cmd.enc_len) - 1;
+- end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+- end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
+- if (end_addr_high != sata_cmd.enc_addr_high) {
++ end_addr = dma_addr + le32_to_cpu(sata_cmd.enc_len) - 1;
++ end_addr_low = lower_32_bits(end_addr);
++ end_addr_high = upper_32_bits(end_addr);
++ if (end_addr_high != le32_to_cpu(sata_cmd.enc_addr_high)) {
+ pm8001_dbg(pm8001_ha, FAIL,
+ "The sg list address start_addr=0x%016llx data_len=0x%x end_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+- start_addr, sata_cmd.enc_len,
++ dma_addr,
++ le32_to_cpu(sata_cmd.enc_len),
+ end_addr_high, end_addr_low);
+ pm8001_chip_make_sg(task->scatter, 1,
+ ccb->buf_prd);
+ phys_addr = ccb->ccb_dma_handle;
+ sata_cmd.enc_addr_low =
+- lower_32_bits(phys_addr);
++ cpu_to_le32(lower_32_bits(phys_addr));
+ sata_cmd.enc_addr_high =
+- upper_32_bits(phys_addr);
++ cpu_to_le32(upper_32_bits(phys_addr));
+ sata_cmd.enc_esgl =
+ cpu_to_le32(1 << 31);
+ }
+@@ -4627,7 +4644,8 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ sata_cmd.enc_esgl = 0;
+ }
+ /* XTS mode. All other fields are 0 */
+- sata_cmd.key_index_mode = 0x6 << 4;
++ sata_cmd.key_index_mode = cpu_to_le32(0x6 << 4);
++
+ /* set tweak values. Should be the start lba */
+ sata_cmd.twk_val0 =
+ cpu_to_le32((sata_cmd.sata_fis.lbal_exp << 24) |
+@@ -4653,31 +4671,31 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ phys_addr = ccb->ccb_dma_handle;
+ sata_cmd.addr_low = lower_32_bits(phys_addr);
+ sata_cmd.addr_high = upper_32_bits(phys_addr);
+- sata_cmd.esgl = cpu_to_le32(1 << 31);
++ sata_cmd.esgl = cpu_to_le32(1U << 31);
+ } else if (task->num_scatter == 1) {
+ u64 dma_addr = sg_dma_address(task->scatter);
++
+ sata_cmd.addr_low = lower_32_bits(dma_addr);
+ sata_cmd.addr_high = upper_32_bits(dma_addr);
+ sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+ sata_cmd.esgl = 0;
++
+ /* Check 4G Boundary */
+- start_addr = cpu_to_le64(dma_addr);
+- end_addr = (start_addr + sata_cmd.len) - 1;
+- end_addr_low = cpu_to_le32(lower_32_bits(end_addr));
+- end_addr_high = cpu_to_le32(upper_32_bits(end_addr));
++ end_addr = dma_addr + le32_to_cpu(sata_cmd.len) - 1;
++ end_addr_low = lower_32_bits(end_addr);
++ end_addr_high = upper_32_bits(end_addr);
+ if (end_addr_high != sata_cmd.addr_high) {
+ pm8001_dbg(pm8001_ha, FAIL,
+ "The sg list address start_addr=0x%016llx data_len=0x%xend_addr_high=0x%08x end_addr_low=0x%08x has crossed 4G boundary\n",
+- start_addr, sata_cmd.len,
++ dma_addr,
++ le32_to_cpu(sata_cmd.len),
+ end_addr_high, end_addr_low);
+ pm8001_chip_make_sg(task->scatter, 1,
+ ccb->buf_prd);
+ phys_addr = ccb->ccb_dma_handle;
+- sata_cmd.addr_low =
+- lower_32_bits(phys_addr);
+- sata_cmd.addr_high =
+- upper_32_bits(phys_addr);
+- sata_cmd.esgl = cpu_to_le32(1 << 31);
++ sata_cmd.addr_low = lower_32_bits(phys_addr);
++ sata_cmd.addr_high = upper_32_bits(phys_addr);
++ sata_cmd.esgl = cpu_to_le32(1U << 31);
+ }
+ } else if (task->num_scatter == 0) {
+ sata_cmd.addr_low = 0;
+@@ -4685,27 +4703,28 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
+ sata_cmd.len = cpu_to_le32(task->total_xfer_len);
+ sata_cmd.esgl = 0;
+ }
++
+ /* scsi cdb */
+ sata_cmd.atapi_scsi_cdb[0] =
+ cpu_to_le32(((task->ata_task.atapi_packet[0]) |
+- (task->ata_task.atapi_packet[1] << 8) |
+- (task->ata_task.atapi_packet[2] << 16) |
+- (task->ata_task.atapi_packet[3] << 24)));
++ (task->ata_task.atapi_packet[1] << 8) |
++ (task->ata_task.atapi_packet[2] << 16) |
++ (task->ata_task.atapi_packet[3] << 24)));
+ sata_cmd.atapi_scsi_cdb[1] =
+ cpu_to_le32(((task->ata_task.atapi_packet[4]) |
+- (task->ata_task.atapi_packet[5] << 8) |
+- (task->ata_task.atapi_packet[6] << 16) |
+- (task->ata_task.atapi_packet[7] << 24)));
++ (task->ata_task.atapi_packet[5] << 8) |
++ (task->ata_task.atapi_packet[6] << 16) |
++ (task->ata_task.atapi_packet[7] << 24)));
+ sata_cmd.atapi_scsi_cdb[2] =
+ cpu_to_le32(((task->ata_task.atapi_packet[8]) |
+- (task->ata_task.atapi_packet[9] << 8) |
+- (task->ata_task.atapi_packet[10] << 16) |
+- (task->ata_task.atapi_packet[11] << 24)));
++ (task->ata_task.atapi_packet[9] << 8) |
++ (task->ata_task.atapi_packet[10] << 16) |
++ (task->ata_task.atapi_packet[11] << 24)));
+ sata_cmd.atapi_scsi_cdb[3] =
+ cpu_to_le32(((task->ata_task.atapi_packet[12]) |
+- (task->ata_task.atapi_packet[13] << 8) |
+- (task->ata_task.atapi_packet[14] << 16) |
+- (task->ata_task.atapi_packet[15] << 24)));
++ (task->ata_task.atapi_packet[13] << 8) |
++ (task->ata_task.atapi_packet[14] << 16) |
++ (task->ata_task.atapi_packet[15] << 24)));
+ }
+
+ /* Check for read log for failed drive and return */
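
The pm80xx block continues the same endianness discipline and repairs a subtler variant of it: the old 4 GiB boundary check did arithmetic on byte-swapped values (start_addr = cpu_to_le64(dma_addr), then start_addr + enc_len), which is meaningless except on little-endian machines. The rewritten form keeps all arithmetic in CPU order and converts exactly once at the descriptor boundary:

	u64 end_addr = dma_addr + le32_to_cpu(cmd->enc_len) - 1;

	if (upper_32_bits(end_addr) != le32_to_cpu(cmd->enc_addr_high)) {
		/* the single segment straddles 4 GiB: fall back to an
		 * external scatter-gather list instead */
	}

(cmd here stands for the ssp_cmd/sata_cmd descriptors above; the 1U << 31 changes in the same hunks avoid shifting into the sign bit of a plain int.)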
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index db55737000ab5..3b3e4234f37a0 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -555,7 +555,7 @@ qla2x00_sysfs_read_vpd(struct file *filp, struct kobject *kobj,
+ if (!capable(CAP_SYS_ADMIN))
+ return -EINVAL;
+
+- if (IS_NOCACHE_VPD_TYPE(ha))
++ if (!IS_NOCACHE_VPD_TYPE(ha))
+ goto skip;
+
+ faddr = ha->flt_region_vpd << 2;
+@@ -745,7 +745,7 @@ qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
+ ql_log(ql_log_info, vha, 0x706f,
+ "Issuing MPI reset.\n");
+
+- if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
++ if (IS_QLA83XX(ha)) {
+ uint32_t idc_control;
+
+ qla83xx_idc_lock(vha, 0);
+@@ -1056,9 +1056,6 @@ qla2x00_free_sysfs_attr(scsi_qla_host_t *vha, bool stop_beacon)
+ continue;
+ if (iter->type == 3 && !(IS_CNA_CAPABLE(ha)))
+ continue;
+- if (iter->type == 0x27 &&
+- (!IS_QLA27XX(ha) || !IS_QLA28XX(ha)))
+- continue;
+
+ sysfs_remove_bin_file(&host->shost_gendev.kobj,
+ iter->attr);
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 9da8034ccad40..c2f00f076f799 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -29,7 +29,8 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
+ "%s: sp hdl %x, result=%x bsg ptr %p\n",
+ __func__, sp->handle, res, bsg_job);
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+
+ bsg_reply->result = res;
+ bsg_job_done(bsg_job, bsg_reply->result,
+@@ -3013,7 +3014,8 @@ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+
+ done:
+ spin_unlock_irqrestore(&ha->hardware_lock, flags);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return 0;
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 9ebf4a234d9a9..aefb29d7c7aee 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -726,6 +726,11 @@ typedef struct srb {
+ * code.
+ */
+ void (*put_fn)(struct kref *kref);
++
++ /*
++ * Report completion for asynchronous commands.
++ */
++ void (*async_done)(struct srb *sp, int res);
+ } srb_t;
+
+ #define GET_CMD_SP(sp) (sp->u.scmd.cmd)
+@@ -2886,7 +2891,11 @@ struct ct_fdmi2_hba_attributes {
+ #define FDMI_PORT_SPEED_8GB 0x10
+ #define FDMI_PORT_SPEED_16GB 0x20
+ #define FDMI_PORT_SPEED_32GB 0x40
+-#define FDMI_PORT_SPEED_64GB 0x80
++#define FDMI_PORT_SPEED_20GB 0x80
++#define FDMI_PORT_SPEED_40GB 0x100
++#define FDMI_PORT_SPEED_128GB 0x200
++#define FDMI_PORT_SPEED_64GB 0x400
++#define FDMI_PORT_SPEED_256GB 0x800
+ #define FDMI_PORT_SPEED_UNKNOWN 0x8000
+
+ #define FC_CLASS_2 0x04
+@@ -4262,8 +4271,10 @@ struct qla_hw_data {
+ #define QLA_ABTS_WAIT_ENABLED(_sp) \
+ (QLA_NVME_IOS(_sp) && QLA_ABTS_FW_ENABLED(_sp->fcport->vha->hw))
+
+-#define IS_PI_UNINIT_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
+-#define IS_PI_IPGUARD_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha))
++#define IS_PI_UNINIT_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
++ IS_QLA28XX(ha))
++#define IS_PI_IPGUARD_CAPABLE(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
++ IS_QLA28XX(ha))
+ #define IS_PI_DIFB_DIX0_CAPABLE(ha) (0)
+ #define IS_PI_SPLIT_DET_CAPABLE_HBA(ha) (IS_QLA83XX(ha) || IS_QLA27XX(ha) || \
+ IS_QLA28XX(ha))
+@@ -4610,6 +4621,7 @@ struct qla_hw_data {
+ struct workqueue_struct *wq;
+ struct work_struct heartbeat_work;
+ struct qlfc_fw fw_buf;
++ unsigned long last_heartbeat_run_jiffies;
+
+ /* FCP_CMND priority support */
+ struct qla_fcp_prio_cfg *fcp_prio_cfg;
+@@ -5427,4 +5439,8 @@ struct ql_vnd_tgt_stats_resp {
+ #include "qla_gbl.h"
+ #include "qla_dbg.h"
+ #include "qla_inline.h"
++
++#define IS_SESSION_DELETED(_fcport) (_fcport->disc_state == DSC_DELETE_PEND || \
++ _fcport->disc_state == DSC_DELETED)
++
+ #endif
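
IS_SESSION_DELETED() gives the later hunks (qla2x00_async_adisc() below, for one) a one-line guard against queuing new commands on a session already being torn down. One portability note for reuse: as merged, the macro expands _fcport without parentheses, which is harmless for the plain-pointer arguments in this patch but worth tightening if the macro ever takes a non-trivial expression. A hardened variant, offered only as a suggestion:

#define IS_SESSION_DELETED(_fcport) \
	(((_fcport)->disc_state == DSC_DELETE_PEND) || \
	 ((_fcport)->disc_state == DSC_DELETED))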
+diff --git a/drivers/scsi/qla2xxx/qla_edif.c b/drivers/scsi/qla2xxx/qla_edif.c
+index 53d2b85620271..0628633c7c7e9 100644
+--- a/drivers/scsi/qla2xxx/qla_edif.c
++++ b/drivers/scsi/qla2xxx/qla_edif.c
+@@ -668,6 +668,11 @@ qla_edif_app_authok(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ bsg_job->request_payload.sg_cnt, &appplogiok,
+ sizeof(struct auth_complete_cmd));
+
++ /* silence unaligned access warning */
++ portid.b.domain = appplogiok.u.d_id.b.domain;
++ portid.b.area = appplogiok.u.d_id.b.area;
++ portid.b.al_pa = appplogiok.u.d_id.b.al_pa;
++
+ switch (appplogiok.type) {
+ case PL_TYPE_WWPN:
+ fcport = qla2x00_find_fcport_by_wwpn(vha,
+@@ -678,7 +683,7 @@ qla_edif_app_authok(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ __func__, appplogiok.u.wwpn);
+ break;
+ case PL_TYPE_DID:
+- fcport = qla2x00_find_fcport_by_pid(vha, &appplogiok.u.d_id);
++ fcport = qla2x00_find_fcport_by_pid(vha, &portid);
+ if (!fcport)
+ ql_dbg(ql_dbg_edif, vha, 0x911d,
+ "%s d_id lookup failed: %x\n", __func__,
+@@ -777,6 +782,11 @@ qla_edif_app_authfail(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ bsg_job->request_payload.sg_cnt, &appplogifail,
+ sizeof(struct auth_complete_cmd));
+
++ /* silence unaligned access warning */
++ portid.b.domain = appplogifail.u.d_id.b.domain;
++ portid.b.area = appplogifail.u.d_id.b.area;
++ portid.b.al_pa = appplogifail.u.d_id.b.al_pa;
++
+ /*
+ * TODO: edif: app has failed this plogi. Inform driver to
+ * take any action (if any).
+@@ -788,7 +798,7 @@ qla_edif_app_authfail(scsi_qla_host_t *vha, struct bsg_job *bsg_job)
+ SET_DID_STATUS(bsg_reply->result, DID_OK);
+ break;
+ case PL_TYPE_DID:
+- fcport = qla2x00_find_fcport_by_pid(vha, &appplogifail.u.d_id);
++ fcport = qla2x00_find_fcport_by_pid(vha, &portid);
+ if (!fcport)
+ ql_dbg(ql_dbg_edif, vha, 0x911d,
+ "%s d_id lookup failed: %x\n", __func__,
+@@ -1253,6 +1263,7 @@ qla24xx_sadb_update(struct bsg_job *bsg_job)
+ int result = 0;
+ struct qla_sa_update_frame sa_frame;
+ struct srb_iocb *iocb_cmd;
++ port_id_t portid;
+
+ ql_dbg(ql_dbg_edif + ql_dbg_verbose, vha, 0x911d,
+ "%s entered, vha: 0x%p\n", __func__, vha);
+@@ -1276,7 +1287,12 @@ qla24xx_sadb_update(struct bsg_job *bsg_job)
+ goto done;
+ }
+
+- fcport = qla2x00_find_fcport_by_pid(vha, &sa_frame.port_id);
++ /* silence unaligned access warning */
++ portid.b.domain = sa_frame.port_id.b.domain;
++ portid.b.area = sa_frame.port_id.b.area;
++ portid.b.al_pa = sa_frame.port_id.b.al_pa;
++
++ fcport = qla2x00_find_fcport_by_pid(vha, &portid);
+ if (fcport) {
+ found = 1;
+ if (sa_frame.flags == QLA_SA_UPDATE_FLAGS_TX_KEY)
+@@ -2146,7 +2162,8 @@ edif_doorbell_show(struct device *dev, struct device_attribute *attr,
+
+ static void qla_noop_sp_done(srb_t *sp, int res)
+ {
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ /*
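
The three qla_edif hunks share one cause: the auth and SA-update payloads are copied out of the bsg request into local structures with no alignment guarantee, so the port_id_t embedded in them may be misaligned, and passing its address to qla2x00_find_fcport_by_pid() trips unaligned-access warnings on strict architectures. Copying the three bytes into an aligned stack variable first is the whole fix:

	port_id_t portid;	/* aligned local, safe to pass by address */

	portid.b.domain = appplogiok.u.d_id.b.domain;
	portid.b.area   = appplogiok.u.d_id.b.area;
	portid.b.al_pa  = appplogiok.u.d_id.b.al_pa;

	fcport = qla2x00_find_fcport_by_pid(vha, &portid);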
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index 8d8503a284790..3f8b8bbabe6de 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -316,7 +316,8 @@ extern int qla2x00_start_sp(srb_t *);
+ extern int qla24xx_dif_start_scsi(srb_t *);
+ extern int qla2x00_start_bidir(srb_t *, struct scsi_qla_host *, uint32_t);
+ extern int qla2xxx_dif_start_scsi_mq(srb_t *);
+-extern void qla2x00_init_timer(srb_t *sp, unsigned long tmo);
++extern void qla2x00_init_async_sp(srb_t *sp, unsigned long tmo,
++ void (*done)(struct srb *, int));
+ extern unsigned long qla2x00_get_async_timeout(struct scsi_qla_host *);
+
+ extern void *qla2x00_alloc_iocbs(struct scsi_qla_host *, srb_t *);
+@@ -332,6 +333,7 @@ extern int qla24xx_get_one_block_sg(uint32_t, struct qla2_sgx *, uint32_t *);
+ extern int qla24xx_configure_prot_mode(srb_t *, uint16_t *);
+ extern int qla24xx_issue_sa_replace_iocb(scsi_qla_host_t *vha,
+ struct qla_work_evt *e);
++void qla2x00_sp_release(struct kref *kref);
+
+ /*
+ * Global Function Prototypes in qla_mbx.c source file.
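
qla_gbl.h swaps the exported qla2x00_init_timer() for qla2x00_init_async_sp(), and the qla_gs.c hunks that follow show why: every async-command constructor used to repeat the same three-line setup (timeout callback, timer arm, done callback), and the helper folds them into a single call. Its body is not visible in this excerpt; judging purely from what the call sites stop doing, it plausibly reduces to something like the sketch below, though the real definition elsewhere in the patch may differ (it may set the timer up directly, since the qla2x00_init_timer() export is dropped above, and it may also take the TMR reference that qla2x00_sp_timeout() now puts):

/* Inferred shape only, not the patch's actual definition. */
static inline void qla2x00_init_async_sp(srb_t *sp, unsigned long tmo,
					 void (*done)(struct srb *, int))
{
	sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
	qla2x00_init_timer(sp, tmo);
	sp->done = done;
}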
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index 28b574e20ef32..6b67bd561810d 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -529,7 +529,6 @@ static void qla2x00_async_sns_sp_done(srb_t *sp, int rc)
+ if (!e)
+ goto err2;
+
+- del_timer(&sp->u.iocb_cmd.timer);
+ e->u.iosb.sp = sp;
+ qla2x00_post_work(vha, e);
+ return;
+@@ -556,8 +555,8 @@ err2:
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ }
+
+- sp->free(sp);
+-
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return;
+ }
+
+@@ -592,13 +591,15 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ if (!vha->flags.online)
+ goto done;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_CT_PTHRU_CMD;
+ sp->name = "rft_id";
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_sns_sp_done);
+
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -638,8 +639,6 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ sp->u.iocb_cmd.u.ctarg.req_size = RFT_ID_REQ_SIZE;
+ sp->u.iocb_cmd.u.ctarg.rsp_size = RFT_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- sp->done = qla2x00_async_sns_sp_done;
+
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s - hdl=%x portid %06x.\n",
+@@ -653,7 +652,8 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id)
+ }
+ return rval;
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+@@ -676,8 +676,7 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type)
+ return (QLA_SUCCESS);
+ }
+
+- return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha),
+- FC4_TYPE_FCP_SCSI);
++ return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha), type);
+ }
+
+ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+@@ -688,13 +687,15 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ srb_t *sp;
+ struct ct_sns_pkt *ct_sns;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_CT_PTHRU_CMD;
+ sp->name = "rff_id";
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_sns_sp_done);
+
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -727,13 +728,11 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ /* Prepare CT arguments -- port_id, FC-4 feature, FC-4 type */
+ ct_req->req.rff_id.port_id = port_id_to_be_id(*d_id);
+ ct_req->req.rff_id.fc4_feature = fc4feature;
+- ct_req->req.rff_id.fc4_type = fc4type; /* SCSI - FCP */
++ ct_req->req.rff_id.fc4_type = fc4type; /* SCSI-FCP or FC-NVMe */
+
+ sp->u.iocb_cmd.u.ctarg.req_size = RFF_ID_REQ_SIZE;
+ sp->u.iocb_cmd.u.ctarg.rsp_size = RFF_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- sp->done = qla2x00_async_sns_sp_done;
+
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s - hdl=%x portid %06x feature %x type %x.\n",
+@@ -749,7 +748,8 @@ static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+@@ -779,13 +779,15 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ srb_t *sp;
+ struct ct_sns_pkt *ct_sns;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_CT_PTHRU_CMD;
+ sp->name = "rnid";
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_sns_sp_done);
+
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -823,9 +825,6 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ sp->u.iocb_cmd.u.ctarg.rsp_size = RNN_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- sp->done = qla2x00_async_sns_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s - hdl=%x portid %06x\n",
+ sp->name, sp->handle, d_id->b24);
+@@ -840,7 +839,8 @@ static int qla_async_rnnid(scsi_qla_host_t *vha, port_id_t *d_id,
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+@@ -886,13 +886,15 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ srb_t *sp;
+ struct ct_sns_pkt *ct_sns;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_CT_PTHRU_CMD;
+ sp->name = "rsnn_nn";
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_sns_sp_done);
+
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+@@ -936,9 +938,6 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ sp->u.iocb_cmd.u.ctarg.rsp_size = RSNN_NN_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- sp->done = qla2x00_async_sns_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s - hdl=%x.\n",
+ sp->name, sp->handle);
+@@ -953,7 +952,8 @@ static int qla_async_rsnn_nn(scsi_qla_host_t *vha)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+@@ -2893,7 +2893,8 @@ static void qla24xx_async_gpsc_sp_done(srb_t *sp, int res)
+ qla24xx_handle_gpsc_event(vha, &ea);
+
+ done:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+@@ -2905,6 +2906,7 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ return rval;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+@@ -2913,8 +2915,8 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->name = "gpsc";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+-
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla24xx_async_gpsc_sp_done);
+
+ /* CT_IU preamble */
+ ct_req = qla24xx_prep_ct_fm_req(fcport->ct_desc.ct_sns, GPSC_CMD,
+@@ -2932,9 +2934,6 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->u.iocb_cmd.u.ctarg.rsp_size = GPSC_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = vha->mgmt_svr_loop_id;
+
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- sp->done = qla24xx_async_gpsc_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0x205e,
+ "Async-%s %8phC hdl=%x loopid=%x portid=%02x%02x%02x.\n",
+ sp->name, fcport->port_name, sp->handle,
+@@ -2947,7 +2946,8 @@ int qla24xx_async_gpsc(scsi_qla_host_t *vha, fc_port_t *fcport)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+@@ -2996,7 +2996,8 @@ void qla24xx_sp_unmap(scsi_qla_host_t *vha, srb_t *sp)
+ break;
+ }
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ void qla24xx_handle_gpnid_event(scsi_qla_host_t *vha, struct event_arg *ea)
+@@ -3135,13 +3136,15 @@ static void qla2x00_async_gpnid_sp_done(srb_t *sp, int res)
+ if (res) {
+ if (res == QLA_FUNCTION_TIMEOUT) {
+ qla24xx_post_gpnid_work(sp->vha, &ea.id);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return;
+ }
+ } else if (sp->gen1) {
+ /* There was another RSCN for this Nport ID */
+ qla24xx_post_gpnid_work(sp->vha, &ea.id);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return;
+ }
+
+@@ -3162,7 +3165,8 @@ static void qla2x00_async_gpnid_sp_done(srb_t *sp, int res)
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return;
+ }
+
+@@ -3182,6 +3186,7 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ if (!vha->flags.online)
+ goto done;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ if (!sp)
+ goto done;
+@@ -3190,14 +3195,16 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ sp->name = "gpnid";
+ sp->u.iocb_cmd.u.ctarg.id = *id;
+ sp->gen1 = 0;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_gpnid_sp_done);
+
+ spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+ list_for_each_entry(tsp, &vha->gpnid_list, elem) {
+ if (tsp->u.iocb_cmd.u.ctarg.id.b24 == id->b24) {
+ tsp->gen1++;
+ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ goto done;
+ }
+ }
+@@ -3238,9 +3245,6 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ sp->u.iocb_cmd.u.ctarg.rsp_size = GPN_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- sp->done = qla2x00_async_gpnid_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0x2067,
+ "Async-%s hdl=%x ID %3phC.\n", sp->name,
+ sp->handle, &ct_req->req.port_id.port_id);
+@@ -3270,8 +3274,8 @@ done_free_sp:
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ }
+-
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+@@ -3326,7 +3330,8 @@ void qla24xx_async_gffid_sp_done(srb_t *sp, int res)
+ ea.rc = res;
+
+ qla24xx_handle_gffid_event(vha, &ea);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ /* Get FC4 Feature with Nport ID. */
+@@ -3339,6 +3344,7 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ return rval;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ return rval;
+@@ -3348,9 +3354,8 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->name = "gffid";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+-
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla24xx_async_gffid_sp_done);
+
+ /* CT_IU preamble */
+ ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GFF_ID_CMD,
+@@ -3368,8 +3373,6 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->u.iocb_cmd.u.ctarg.rsp_size = GFF_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->done = qla24xx_async_gffid_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0x2132,
+ "Async-%s hdl=%x %8phC.\n", sp->name,
+ sp->handle, fcport->port_name);
+@@ -3380,7 +3383,8 @@ int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport)
+
+ return rval;
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ fcport->flags &= ~FCF_ASYNC_SENT;
+ return rval;
+ }
+@@ -3767,7 +3771,6 @@ static void qla2x00_async_gpnft_gnnft_sp_done(srb_t *sp, int res)
+ "Async done-%s res %x FC4Type %x\n",
+ sp->name, res, sp->gen2);
+
+- del_timer(&sp->u.iocb_cmd.timer);
+ sp->rc = res;
+ if (res) {
+ unsigned long flags;
+@@ -3892,9 +3895,8 @@ static int qla24xx_async_gnnft(scsi_qla_host_t *vha, struct srb *sp,
+ sp->name = "gnnft";
+ sp->gen1 = vha->hw->base_qpair->chip_reset;
+ sp->gen2 = fc4_type;
+-
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_gpnft_gnnft_sp_done);
+
+ memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
+ memset(sp->u.iocb_cmd.u.ctarg.req, 0, sp->u.iocb_cmd.u.ctarg.req_size);
+@@ -3910,8 +3912,6 @@ static int qla24xx_async_gnnft(scsi_qla_host_t *vha, struct srb *sp,
+ sp->u.iocb_cmd.u.ctarg.req_size = GNN_FT_REQ_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->done = qla2x00_async_gpnft_gnnft_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s hdl=%x FC4Type %x.\n", sp->name,
+ sp->handle, ct_req->req.gpn_ft.port_type);
+@@ -3938,8 +3938,8 @@ done_free_sp:
+ sp->u.iocb_cmd.u.ctarg.rsp_dma);
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ }
+-
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+
+ spin_lock_irqsave(&vha->work_lock, flags);
+ vha->scan.scan_flags &= ~SF_SCANNING;
+@@ -3991,9 +3991,12 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xffff,
+ "%s: Performing FCP Scan\n", __func__);
+
+- if (sp)
+- sp->free(sp); /* should not happen */
++ if (sp) {
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
++ }
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ if (!sp) {
+ spin_lock_irqsave(&vha->work_lock, flags);
+@@ -4038,6 +4041,7 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ sp->u.iocb_cmd.u.ctarg.req,
+ sp->u.iocb_cmd.u.ctarg.req_dma);
+ sp->u.iocb_cmd.u.ctarg.req = NULL;
++ /* ref: INIT */
+ qla2x00_rel_sp(sp);
+ return rval;
+ }
+@@ -4057,9 +4061,8 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+ sp->name = "gpnft";
+ sp->gen1 = vha->hw->base_qpair->chip_reset;
+ sp->gen2 = fc4_type;
+-
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_gpnft_gnnft_sp_done);
+
+ rspsz = sp->u.iocb_cmd.u.ctarg.rsp_size;
+ memset(sp->u.iocb_cmd.u.ctarg.rsp, 0, sp->u.iocb_cmd.u.ctarg.rsp_size);
+@@ -4074,8 +4077,6 @@ int qla24xx_async_gpnft(scsi_qla_host_t *vha, u8 fc4_type, srb_t *sp)
+
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->done = qla2x00_async_gpnft_gnnft_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s hdl=%x FC4Type %x.\n", sp->name,
+ sp->handle, ct_req->req.gpn_ft.port_type);
+@@ -4103,7 +4104,8 @@ done_free_sp:
+ sp->u.iocb_cmd.u.ctarg.rsp = NULL;
+ }
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+
+ spin_lock_irqsave(&vha->work_lock, flags);
+ vha->scan.scan_flags &= ~SF_SCANNING;
+@@ -4167,7 +4169,8 @@ static void qla2x00_async_gnnid_sp_done(srb_t *sp, int res)
+
+ qla24xx_handle_gnnid_event(vha, &ea);
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+@@ -4180,6 +4183,7 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ return rval;
+
+ qla2x00_set_fcport_disc_state(fcport, DSC_GNN_ID);
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
+ if (!sp)
+ goto done;
+@@ -4189,9 +4193,8 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->name = "gnnid";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+-
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_gnnid_sp_done);
+
+ /* CT_IU preamble */
+ ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GNN_ID_CMD,
+@@ -4210,8 +4213,6 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->u.iocb_cmd.u.ctarg.rsp_size = GNN_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->done = qla2x00_async_gnnid_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s - %8phC hdl=%x loopid=%x portid %06x.\n",
+ sp->name, fcport->port_name,
+@@ -4223,7 +4224,8 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ return rval;
+@@ -4297,7 +4299,8 @@ static void qla2x00_async_gfpnid_sp_done(srb_t *sp, int res)
+
+ qla24xx_handle_gfpnid_event(vha, &ea);
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+@@ -4309,6 +4312,7 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ return rval;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC);
+ if (!sp)
+ goto done;
+@@ -4317,9 +4321,8 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->name = "gfpnid";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+-
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_gfpnid_sp_done);
+
+ /* CT_IU preamble */
+ ct_req = qla2x00_prep_ct_req(fcport->ct_desc.ct_sns, GFPN_ID_CMD,
+@@ -4338,8 +4341,6 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ sp->u.iocb_cmd.u.ctarg.rsp_size = GFPN_ID_RSP_SIZE;
+ sp->u.iocb_cmd.u.ctarg.nport_handle = NPH_SNS;
+
+- sp->done = qla2x00_async_gfpnid_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0xffff,
+ "Async-%s - %8phC hdl=%x loopid=%x portid %06x.\n",
+ sp->name, fcport->port_name,
+@@ -4352,7 +4353,8 @@ int qla24xx_async_gfpnid(scsi_qla_host_t *vha, fc_port_t *fcport)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
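
The long run of qla_gs.c changes is one mechanical conversion: every sp->free(sp) becomes kref_put(&sp->cmd_kref, qla2x00_sp_release), each annotated with a /* ref: INIT */ marker naming the reference being dropped. SRB lifetime moves from an explicit free to reference counting, so a timer, an abort, and the normal completion can each hold the object alive independently (the qla_init.c hunks just below add the matching TMR reference drop in qla2x00_sp_timeout()), and whichever path drops the last reference frees it. The generic pattern, with illustrative names rather than the driver's real types:

#include <linux/kref.h>
#include <linux/slab.h>

struct cmd {
	struct kref ref;
	/* ... payload ... */
};

static void cmd_release(struct kref *kref)
{
	kfree(container_of(kref, struct cmd, ref));
}

static struct cmd *cmd_alloc(void)
{
	struct cmd *c = kzalloc(sizeof(*c), GFP_KERNEL);

	if (c)
		kref_init(&c->ref);	/* the "ref: INIT" reference */
	return c;
}

/* every completion or error path drops exactly one reference;
 * the last put triggers cmd_release() */
static void cmd_put(struct cmd *c)
{
	kref_put(&c->ref, cmd_release);
}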
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 1fe4966fc2f68..7f81525c4fb32 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -51,6 +51,9 @@ qla2x00_sp_timeout(struct timer_list *t)
+ WARN_ON(irqs_disabled());
+ iocb = &sp->u.iocb_cmd;
+ iocb->timeout(sp);
++
++ /* ref: TMR */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ void qla2x00_sp_free(srb_t *sp)
+@@ -125,8 +128,13 @@ static void qla24xx_abort_iocb_timeout(void *data)
+ }
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+
+- if (sp->cmd_sp)
++ if (sp->cmd_sp) {
++ /*
++ * This done function should take care of
++ * original command ref: INIT
++ */
+ sp->cmd_sp->done(sp->cmd_sp, QLA_OS_TIMER_EXPIRED);
++ }
+
+ abt->u.abt.comp_status = cpu_to_le16(CS_TIMEOUT);
+ sp->done(sp, QLA_OS_TIMER_EXPIRED);
+@@ -140,11 +148,11 @@ static void qla24xx_abort_sp_done(srb_t *sp, int res)
+ if (orig_sp)
+ qla_wait_nvme_release_cmd_kref(orig_sp);
+
+- del_timer(&sp->u.iocb_cmd.timer);
+ if (sp->flags & SRB_WAKEUP_ON_COMP)
+ complete(&abt->u.abt.comp);
+ else
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+@@ -154,6 +162,7 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ srb_t *sp;
+ int rval = QLA_FUNCTION_FAILED;
+
++ /* ref: INIT for ABTS command */
+ sp = qla2xxx_get_qpair_sp(cmd_sp->vha, cmd_sp->qpair, cmd_sp->fcport,
+ GFP_ATOMIC);
+ if (!sp)
+@@ -167,23 +176,22 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ if (wait)
+ sp->flags = SRB_WAKEUP_ON_COMP;
+
+- abt_iocb->timeout = qla24xx_abort_iocb_timeout;
+ init_completion(&abt_iocb->u.abt.comp);
+ /* FW can send 2 x ABTS's timeout/20s */
+- qla2x00_init_timer(sp, 42);
++ qla2x00_init_async_sp(sp, 42, qla24xx_abort_sp_done);
++ sp->u.iocb_cmd.timeout = qla24xx_abort_iocb_timeout;
+
+ abt_iocb->u.abt.cmd_hndl = cmd_sp->handle;
+ abt_iocb->u.abt.req_que_no = cpu_to_le16(cmd_sp->qpair->req->id);
+
+- sp->done = qla24xx_abort_sp_done;
+-
+ ql_dbg(ql_dbg_async, vha, 0x507c,
+ "Abort command issued - hdl=%x, type=%x\n", cmd_sp->handle,
+ cmd_sp->type);
+
+ rval = qla2x00_start_sp(sp);
+ if (rval != QLA_SUCCESS) {
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return rval;
+ }
+
+@@ -191,7 +199,8 @@ int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait)
+ wait_for_completion(&abt_iocb->u.abt.comp);
+ rval = abt_iocb->u.abt.comp_status == CS_COMPLETE ?
+ QLA_SUCCESS : QLA_ERR_FROM_FW;
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ return rval;
+@@ -286,10 +295,13 @@ static void qla2x00_async_login_sp_done(srb_t *sp, int res)
+ ea.iop[0] = lio->u.logio.iop[0];
+ ea.iop[1] = lio->u.logio.iop[1];
+ ea.sp = sp;
++ if (res)
++ ea.data[0] = MBS_COMMAND_ERROR;
+ qla24xx_handle_plogi_done_event(vha, &ea);
+ }
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int
+@@ -308,6 +320,7 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ return rval;
+ }
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+@@ -320,12 +333,10 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ sp->name = "login";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_login_sp_done);
+
+ lio = &sp->u.iocb_cmd;
+- lio->timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+- sp->done = qla2x00_async_login_sp_done;
+ if (N2N_TOPO(fcport->vha->hw) && fcport_is_bigger(fcport)) {
+ lio->u.logio.flags |= SRB_LOGIN_PRLI_ONLY;
+ } else {
+@@ -358,7 +369,8 @@ qla2x00_async_login(struct scsi_qla_host *vha, fc_port_t *fcport,
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ fcport->flags &= ~FCF_ASYNC_ACTIVE;
+@@ -370,29 +382,26 @@ static void qla2x00_async_logout_sp_done(srb_t *sp, int res)
+ sp->fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ sp->fcport->login_gen++;
+ qlt_logo_completion_handler(sp->fcport, sp->u.iocb_cmd.u.logio.data[0]);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int
+ qla2x00_async_logout(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ srb_t *sp;
+- struct srb_iocb *lio;
+ int rval = QLA_FUNCTION_FAILED;
+
+ fcport->flags |= FCF_ASYNC_SENT;
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_LOGOUT_CMD;
+ sp->name = "logout";
+-
+- lio = &sp->u.iocb_cmd;
+- lio->timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+- sp->done = qla2x00_async_logout_sp_done;
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++			      qla2x00_async_logout_sp_done);
+
+ ql_dbg(ql_dbg_disc, vha, 0x2070,
+ "Async-logout - hdl=%x loop-id=%x portid=%02x%02x%02x %8phC explicit %d.\n",
+@@ -406,7 +415,8 @@ qla2x00_async_logout(struct scsi_qla_host *vha, fc_port_t *fcport)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ return rval;
+@@ -432,29 +442,26 @@ static void qla2x00_async_prlo_sp_done(srb_t *sp, int res)
+ if (!test_bit(UNLOADING, &vha->dpc_flags))
+ qla2x00_post_async_prlo_done_work(sp->fcport->vha, sp->fcport,
+ lio->u.logio.data);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int
+ qla2x00_async_prlo(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ srb_t *sp;
+- struct srb_iocb *lio;
+ int rval;
+
+ rval = QLA_FUNCTION_FAILED;
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_PRLO_CMD;
+ sp->name = "prlo";
+-
+- lio = &sp->u.iocb_cmd;
+- lio->timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+- sp->done = qla2x00_async_prlo_sp_done;
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_prlo_sp_done);
+
+ ql_dbg(ql_dbg_disc, vha, 0x2070,
+ "Async-prlo - hdl=%x loop-id=%x portid=%02x%02x%02x.\n",
+@@ -468,7 +475,8 @@ qla2x00_async_prlo(struct scsi_qla_host *vha, fc_port_t *fcport)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ fcport->flags &= ~FCF_ASYNC_ACTIVE;
+ return rval;
+@@ -551,10 +559,12 @@ static void qla2x00_async_adisc_sp_done(srb_t *sp, int res)
+ ea.iop[1] = lio->u.logio.iop[1];
+ ea.fcport = sp->fcport;
+ ea.sp = sp;
++ if (res)
++ ea.data[0] = MBS_COMMAND_ERROR;
+
+ qla24xx_handle_adisc_event(vha, &ea);
+-
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int
+@@ -565,26 +575,34 @@ qla2x00_async_adisc(struct scsi_qla_host *vha, fc_port_t *fcport,
+ struct srb_iocb *lio;
+ int rval = QLA_FUNCTION_FAILED;
+
++ if (IS_SESSION_DELETED(fcport)) {
++ ql_log(ql_log_warn, vha, 0xffff,
++ "%s: %8phC is being delete - not sending command.\n",
++ __func__, fcport->port_name);
++ fcport->flags &= ~FCF_ASYNC_ACTIVE;
++ return rval;
++ }
++
+ if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
+ return rval;
+
+ fcport->flags |= FCF_ASYNC_SENT;
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_ADISC_CMD;
+ sp->name = "adisc";
+-
+- lio = &sp->u.iocb_cmd;
+- lio->timeout = qla2x00_async_iocb_timeout;
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_adisc_sp_done);
+
+- sp->done = qla2x00_async_adisc_sp_done;
+- if (data[1] & QLA_LOGIO_LOGIN_RETRIED)
++ if (data[1] & QLA_LOGIO_LOGIN_RETRIED) {
++ lio = &sp->u.iocb_cmd;
+ lio->u.logio.flags |= SRB_LOGIN_RETRIED;
++ }
+
+ ql_dbg(ql_dbg_disc, vha, 0x206f,
+ "Async-adisc - hdl=%x loopid=%x portid=%06x %8phC.\n",
+@@ -597,7 +615,8 @@ qla2x00_async_adisc(struct scsi_qla_host *vha, fc_port_t *fcport,
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ qla2x00_post_async_adisc_work(vha, fcport, data);
+@@ -963,6 +982,9 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ }
+ break;
++ case ISP_CFG_NL:
++ qla24xx_fcport_handle_login(vha, fcport);
++ break;
+ default:
+ break;
+ }
+@@ -1078,13 +1100,13 @@ static void qla24xx_async_gnl_sp_done(srb_t *sp, int res)
+ }
+ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ {
+ srb_t *sp;
+- struct srb_iocb *mbx;
+ int rval = QLA_FUNCTION_FAILED;
+ unsigned long flags;
+ u16 *mb;
+@@ -1109,6 +1131,7 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ vha->gnl.sent = 1;
+ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+@@ -1117,10 +1140,8 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ sp->name = "gnlist";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+-
+- mbx = &sp->u.iocb_cmd;
+- mbx->timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha)+2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla24xx_async_gnl_sp_done);
+
+ mb = sp->u.iocb_cmd.u.mbx.out_mb;
+ mb[0] = MBC_PORT_NODE_NAME_LIST;
+@@ -1132,8 +1153,6 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ mb[8] = vha->gnl.size;
+ mb[9] = vha->vp_idx;
+
+- sp->done = qla24xx_async_gnl_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0x20da,
+ "Async-%s - OUT WWPN %8phC hndl %x\n",
+ sp->name, fcport->port_name, sp->handle);
+@@ -1145,7 +1164,8 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ fcport->flags &= ~(FCF_ASYNC_ACTIVE | FCF_ASYNC_SENT);
+ return rval;
+@@ -1191,7 +1211,7 @@ done:
+ dma_pool_free(ha->s_dma_pool, sp->u.iocb_cmd.u.mbx.in,
+ sp->u.iocb_cmd.u.mbx.in_dma);
+
+- sp->free(sp);
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport)
+@@ -1232,11 +1252,13 @@ static void qla2x00_async_prli_sp_done(srb_t *sp, int res)
+ ea.sp = sp;
+ if (res == QLA_OS_TIMER_EXPIRED)
+ ea.data[0] = QLA_OS_TIMER_EXPIRED;
++ else if (res)
++ ea.data[0] = MBS_COMMAND_ERROR;
+
+ qla24xx_handle_prli_done_event(vha, &ea);
+ }
+
+- sp->free(sp);
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int
+@@ -1269,12 +1291,10 @@ qla24xx_async_prli(struct scsi_qla_host *vha, fc_port_t *fcport)
+
+ sp->type = SRB_PRLI_CMD;
+ sp->name = "prli";
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_prli_sp_done);
+
+ lio = &sp->u.iocb_cmd;
+- lio->timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+-
+- sp->done = qla2x00_async_prli_sp_done;
+ lio->u.logio.flags = 0;
+
+ if (NVME_TARGET(vha->hw, fcport))
+@@ -1296,7 +1316,8 @@ qla24xx_async_prli(struct scsi_qla_host *vha, fc_port_t *fcport)
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ fcport->flags &= ~FCF_ASYNC_SENT;
+ return rval;
+ }
+@@ -1325,14 +1346,21 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ struct port_database_24xx *pd;
+ struct qla_hw_data *ha = vha->hw;
+
+- if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT) ||
+- fcport->loop_id == FC_NO_LOOP_ID) {
++ if (IS_SESSION_DELETED(fcport)) {
+ ql_log(ql_log_warn, vha, 0xffff,
+- "%s: %8phC - not sending command.\n",
+- __func__, fcport->port_name);
++ "%s: %8phC is being delete - not sending command.\n",
++ __func__, fcport->port_name);
++ fcport->flags &= ~FCF_ASYNC_ACTIVE;
+ return rval;
+ }
+
++ if (!vha->flags.online || fcport->flags & FCF_ASYNC_SENT) {
++ ql_log(ql_log_warn, vha, 0xffff,
++ "%s: %8phC online %d flags %x - not sending command.\n",
++ __func__, fcport->port_name, vha->flags.online, fcport->flags);
++ goto done;
++ }
++
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+@@ -1344,10 +1372,8 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ sp->name = "gpdb";
+ sp->gen1 = fcport->rscn_gen;
+ sp->gen2 = fcport->login_gen;
+-
+- mbx = &sp->u.iocb_cmd;
+- mbx->timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla24xx_async_gpdb_sp_done);
+
+ pd = dma_pool_zalloc(ha->s_dma_pool, GFP_KERNEL, &pd_dma);
+ if (pd == NULL) {
+@@ -1366,11 +1392,10 @@ int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt)
+ mb[9] = vha->vp_idx;
+ mb[10] = opt;
+
+- mbx->u.mbx.in = pd;
++ mbx = &sp->u.iocb_cmd;
++ mbx->u.mbx.in = (void *)pd;
+ mbx->u.mbx.in_dma = pd_dma;
+
+- sp->done = qla24xx_async_gpdb_sp_done;
+-
+ ql_dbg(ql_dbg_disc, vha, 0x20dc,
+ "Async-%s %8phC hndl %x opt %x\n",
+ sp->name, fcport->port_name, sp->handle, opt);
+@@ -1384,7 +1409,7 @@ done_free_sp:
+ if (pd)
+ dma_pool_free(ha->s_dma_pool, pd, pd_dma);
+
+- sp->free(sp);
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ fcport->flags &= ~FCF_ASYNC_ACTIVE;
+@@ -1556,6 +1581,11 @@ static void qla_chk_n2n_b4_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ u8 login = 0;
+ int rc;
+
++ ql_dbg(ql_dbg_disc, vha, 0x307b,
++ "%s %8phC DS %d LS %d lid %d retries=%d\n",
++ __func__, fcport->port_name, fcport->disc_state,
++ fcport->fw_login_state, fcport->loop_id, fcport->login_retry);
++
+ if (qla_tgt_mode_enabled(vha))
+ return;
+
+@@ -1614,7 +1644,8 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ fcport->login_gen, fcport->loop_id, fcport->scan_state,
+ fcport->fc4_type);
+
+- if (fcport->scan_state != QLA_FCPORT_FOUND)
++ if (fcport->scan_state != QLA_FCPORT_FOUND ||
++ fcport->disc_state == DSC_DELETE_PEND)
+ return 0;
+
+ if ((fcport->loop_id != FC_NO_LOOP_ID) &&
+@@ -1635,7 +1666,7 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ if (vha->host->active_mode == MODE_TARGET && !N2N_TOPO(vha->hw))
+ return 0;
+
+- if (fcport->flags & FCF_ASYNC_SENT) {
++ if (fcport->flags & (FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE)) {
+ set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+ return 0;
+ }
+@@ -1970,22 +2001,21 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+ srb_t *sp;
+ int rval = QLA_FUNCTION_FAILED;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+- tm_iocb = &sp->u.iocb_cmd;
+ sp->type = SRB_TM_CMD;
+ sp->name = "tmf";
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha),
++ qla2x00_tmf_sp_done);
++ sp->u.iocb_cmd.timeout = qla2x00_tmf_iocb_timeout;
+
+- tm_iocb->timeout = qla2x00_tmf_iocb_timeout;
++ tm_iocb = &sp->u.iocb_cmd;
+ init_completion(&tm_iocb->u.tmf.comp);
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha));
+-
+ tm_iocb->u.tmf.flags = flags;
+ tm_iocb->u.tmf.lun = lun;
+- tm_iocb->u.tmf.data = tag;
+- sp->done = qla2x00_tmf_sp_done;
+
+ ql_dbg(ql_dbg_taskm, vha, 0x802f,
+ "Async-tmf hdl=%x loop-id=%x portid=%02x%02x%02x.\n",
+@@ -2015,7 +2045,8 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
+ }
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ fcport->flags &= ~FCF_ASYNC_SENT;
+ done:
+ return rval;
+@@ -2074,13 +2105,6 @@ qla24xx_handle_prli_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ qla24xx_post_gpdb_work(vha, ea->fcport, 0);
+ break;
+ default:
+- if ((ea->iop[0] == LSC_SCODE_ELS_REJECT) &&
+- (ea->iop[1] == 0x50000)) { /* reson 5=busy expl:0x0 */
+- set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+- ea->fcport->fw_login_state = DSC_LS_PLOGI_COMP;
+- break;
+- }
+-
+ sp = ea->sp;
+ ql_dbg(ql_dbg_disc, vha, 0x2118,
+ "%s %d %8phC priority %s, fc4type %x prev try %s\n",
+@@ -2224,12 +2248,7 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ ql_dbg(ql_dbg_disc, vha, 0x20eb, "%s %d %8phC cmd error %x\n",
+ __func__, __LINE__, ea->fcport->port_name, ea->data[1]);
+
+- ea->fcport->flags &= ~FCF_ASYNC_SENT;
+- qla2x00_set_fcport_disc_state(ea->fcport, DSC_LOGIN_FAILED);
+- if (ea->data[1] & QLA_LOGIO_LOGIN_RETRIED)
+- set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
+- else
+- qla2x00_mark_device_lost(vha, ea->fcport, 1);
++ qlt_schedule_sess_for_deletion(ea->fcport);
+ break;
+ case MBS_LOOP_ID_USED:
+ /* data[1] = IO PARAM 1 = nport ID */
+@@ -3472,6 +3491,14 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ struct rsp_que *rsp = ha->rsp_q_map[0];
+ struct qla2xxx_fw_dump *fw_dump;
+
++ if (ha->fw_dump) {
++ ql_dbg(ql_dbg_init, vha, 0x00bd,
++ "Firmware dump already allocated.\n");
++ return;
++ }
++
++ ha->fw_dumped = 0;
++ ha->fw_dump_cap_flags = 0;
+ dump_size = fixed_size = mem_size = eft_size = fce_size = mq_size = 0;
+ req_q_size = rsp_q_size = 0;
+
+@@ -3482,7 +3509,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ mem_size = (ha->fw_memory_size - 0x11000 + 1) *
+ sizeof(uint16_t);
+ } else if (IS_FWI2_CAPABLE(ha)) {
+- if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
++ if (IS_QLA83XX(ha))
+ fixed_size = offsetof(struct qla83xx_fw_dump, ext_mem);
+ else if (IS_QLA81XX(ha))
+ fixed_size = offsetof(struct qla81xx_fw_dump, ext_mem);
+@@ -3494,8 +3521,7 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ mem_size = (ha->fw_memory_size - 0x100000 + 1) *
+ sizeof(uint32_t);
+ if (ha->mqenable) {
+- if (!IS_QLA83XX(ha) && !IS_QLA27XX(ha) &&
+- !IS_QLA28XX(ha))
++ if (!IS_QLA83XX(ha))
+ mq_size = sizeof(struct qla2xxx_mq_chain);
+ /*
+ * Allocate maximum buffer size for all queues - Q0.
+@@ -4056,8 +4082,7 @@ enable_82xx_npiv:
+ ha->fw_major_version, ha->fw_minor_version,
+ ha->fw_subminor_version);
+
+- if (IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+- IS_QLA28XX(ha)) {
++ if (IS_QLA83XX(ha)) {
+ ha->flags.fac_supported = 0;
+ rval = QLA_SUCCESS;
+ }
+@@ -5602,6 +5627,13 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ memcpy(fcport->node_name, new_fcport->node_name,
+ WWN_SIZE);
+ fcport->scan_state = QLA_FCPORT_FOUND;
++ if (fcport->login_retry == 0) {
++ fcport->login_retry = vha->hw->login_retry_count;
++ ql_dbg(ql_dbg_disc, vha, 0x2135,
++ "Port login retry %8phN, lid 0x%04x retry cnt=%d.\n",
++ fcport->port_name, fcport->loop_id,
++ fcport->login_retry);
++ }
+ found++;
+ break;
+ }
+@@ -5735,6 +5767,8 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (atomic_read(&fcport->state) == FCS_ONLINE)
+ return;
+
++ qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ rport_ids.node_name = wwn_to_u64(fcport->node_name);
+ rport_ids.port_name = wwn_to_u64(fcport->port_name);
+ rport_ids.port_id = fcport->d_id.b.domain << 16 |
+@@ -5835,6 +5869,7 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ qla2x00_reg_remote_port(vha, fcport);
+ break;
+ case MODE_TARGET:
++ qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+ if (!vha->vha_tgt.qla_tgt->tgt_stop &&
+ !vha->vha_tgt.qla_tgt->tgt_stopped)
+ qlt_fc_port_added(vha, fcport);
+@@ -5852,8 +5887,6 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
+ if (NVME_TARGET(vha->hw, fcport))
+ qla_nvme_register_remote(vha, fcport);
+
+- qla2x00_set_fcport_state(fcport, FCS_ONLINE);
+-
+ if (IS_IIDMA_CAPABLE(vha->hw) && vha->hw->flags.gpsc_supported) {
+ if (fcport->id_changed) {
+ fcport->id_changed = 0;
+@@ -9390,7 +9423,7 @@ struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *vha, int qos,
+ qpair->rsp->req = qpair->req;
+ qpair->rsp->qpair = qpair;
+ /* init qpair to this cpu. Will adjust at run time. */
+- qla_cpu_update(qpair, smp_processor_id());
++ qla_cpu_update(qpair, raw_smp_processor_id());
+
+ if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) {
+ if (ha->fw_attributes & BIT_4)
+diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
+index 5f3b7995cc8f3..db17f7f410cdd 100644
+--- a/drivers/scsi/qla2xxx/qla_inline.h
++++ b/drivers/scsi/qla2xxx/qla_inline.h
+@@ -184,6 +184,8 @@ static void qla2xxx_init_sp(srb_t *sp, scsi_qla_host_t *vha,
+ sp->vha = vha;
+ sp->qpair = qpair;
+ sp->cmd_type = TYPE_SRB;
++ /* ref : INIT - normal flow */
++ kref_init(&sp->cmd_kref);
+ INIT_LIST_HEAD(&sp->elem);
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index ed604f2185bf2..e0fe9ddb4bd2c 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2560,11 +2560,38 @@ qla24xx_tm_iocb(srb_t *sp, struct tsk_mgmt_entry *tsk)
+ }
+ }
+
+-void qla2x00_init_timer(srb_t *sp, unsigned long tmo)
++static void
++qla2x00_async_done(struct srb *sp, int res)
++{
++ if (del_timer(&sp->u.iocb_cmd.timer)) {
++ /*
++ * Successfully cancelled the timeout handler
++ * ref: TMR
++ */
++ if (kref_put(&sp->cmd_kref, qla2x00_sp_release))
++ return;
++ }
++ sp->async_done(sp, res);
++}
++
++void
++qla2x00_sp_release(struct kref *kref)
++{
++ struct srb *sp = container_of(kref, struct srb, cmd_kref);
++
++ sp->free(sp);
++}
++
++void
++qla2x00_init_async_sp(srb_t *sp, unsigned long tmo,
++ void (*done)(struct srb *sp, int res))
+ {
+ timer_setup(&sp->u.iocb_cmd.timer, qla2x00_sp_timeout, 0);
+- sp->u.iocb_cmd.timer.expires = jiffies + tmo * HZ;
++ sp->done = qla2x00_async_done;
++ sp->async_done = done;
+ sp->free = qla2x00_sp_free;
++ sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
++ sp->u.iocb_cmd.timer.expires = jiffies + tmo * HZ;
+ if (IS_QLAFX00(sp->vha->hw) && sp->type == SRB_FXIOCB_DCMD)
+ init_completion(&sp->u.iocb_cmd.u.fxiocb.fxiocb_comp);
+ sp->start_timer = 1;
+@@ -2651,7 +2678,9 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ return -ENOMEM;
+ }
+
+- /* Alloc SRB structure */
++ /* Alloc SRB structure
++ * ref: INIT
++ */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp) {
+ kfree(fcport);
+@@ -2672,18 +2701,19 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+ sp->type = SRB_ELS_DCMD;
+ sp->name = "ELS_DCMD";
+ sp->fcport = fcport;
+- elsio->timeout = qla2x00_els_dcmd_iocb_timeout;
+- qla2x00_init_timer(sp, ELS_DCMD_TIMEOUT);
+- init_completion(&sp->u.iocb_cmd.u.els_logo.comp);
+- sp->done = qla2x00_els_dcmd_sp_done;
++ qla2x00_init_async_sp(sp, ELS_DCMD_TIMEOUT,
++ qla2x00_els_dcmd_sp_done);
+ sp->free = qla2x00_els_dcmd_sp_free;
++ sp->u.iocb_cmd.timeout = qla2x00_els_dcmd_iocb_timeout;
++ init_completion(&sp->u.iocb_cmd.u.els_logo.comp);
+
+ elsio->u.els_logo.els_logo_pyld = dma_alloc_coherent(&ha->pdev->dev,
+ DMA_POOL_SIZE, &elsio->u.els_logo.els_logo_pyld_dma,
+ GFP_KERNEL);
+
+ if (!elsio->u.els_logo.els_logo_pyld) {
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return QLA_FUNCTION_FAILED;
+ }
+
+@@ -2706,7 +2736,8 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+
+ rval = qla2x00_start_sp(sp);
+ if (rval != QLA_SUCCESS) {
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return QLA_FUNCTION_FAILED;
+ }
+
+@@ -2717,7 +2748,8 @@ qla24xx_els_dcmd_iocb(scsi_qla_host_t *vha, int els_opcode,
+
+ wait_for_completion(&elsio->u.els_logo.comp);
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return rval;
+ }
+
+@@ -2850,7 +2882,6 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ sp->name, res, sp->handle, fcport->d_id.b24, fcport->port_name);
+
+ fcport->flags &= ~(FCF_ASYNC_SENT|FCF_ASYNC_ACTIVE);
+- del_timer(&sp->u.iocb_cmd.timer);
+
+ if (sp->flags & SRB_WAKEUP_ON_COMP)
+ complete(&lio->u.els_plogi.comp);
+@@ -2927,6 +2958,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ set_bit(ISP_ABORT_NEEDED,
+ &vha->dpc_flags);
+ qla2xxx_wake_dpc(vha);
++ break;
+ }
+ fallthrough;
+ default:
+@@ -2936,9 +2968,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ fw_status[0], fw_status[1], fw_status[2]);
+
+ fcport->flags &= ~FCF_ASYNC_SENT;
+- qla2x00_set_fcport_disc_state(fcport,
+- DSC_LOGIN_FAILED);
+- set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++ qlt_schedule_sess_for_deletion(fcport);
+ break;
+ }
+ break;
+@@ -2950,8 +2980,7 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ fw_status[0], fw_status[1], fw_status[2]);
+
+ sp->fcport->flags &= ~FCF_ASYNC_SENT;
+- qla2x00_set_fcport_disc_state(fcport, DSC_LOGIN_FAILED);
+- set_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++ qlt_schedule_sess_for_deletion(fcport);
+ break;
+ }
+
+@@ -2960,7 +2989,8 @@ static void qla2x00_els_dcmd2_sp_done(srb_t *sp, int res)
+ struct srb_iocb *elsio = &sp->u.iocb_cmd;
+
+ qla2x00_els_dcmd2_free(vha, &elsio->u.els_plogi);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return;
+ }
+ e->u.iosb.sp = sp;
+@@ -2978,7 +3008,9 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ int rval = QLA_SUCCESS;
+ void *ptr, *resp_ptr;
+
+- /* Alloc SRB structure */
++ /* Alloc SRB structure
++ * ref: INIT
++ */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp) {
+ ql_log(ql_log_info, vha, 0x70e6,
+@@ -2993,17 +3025,16 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ ql_dbg(ql_dbg_io, vha, 0x3073,
+ "%s Enter: PLOGI portid=%06x\n", __func__, fcport->d_id.b24);
+
+- sp->type = SRB_ELS_DCMD;
+- sp->name = "ELS_DCMD";
+- sp->fcport = fcport;
+-
+- elsio->timeout = qla2x00_els_dcmd2_iocb_timeout;
+ if (wait)
+ sp->flags = SRB_WAKEUP_ON_COMP;
+
+- qla2x00_init_timer(sp, ELS_DCMD_TIMEOUT + 2);
++ sp->type = SRB_ELS_DCMD;
++ sp->name = "ELS_DCMD";
++ sp->fcport = fcport;
++ qla2x00_init_async_sp(sp, ELS_DCMD_TIMEOUT + 2,
++ qla2x00_els_dcmd2_sp_done);
++ sp->u.iocb_cmd.timeout = qla2x00_els_dcmd2_iocb_timeout;
+
+- sp->done = qla2x00_els_dcmd2_sp_done;
+ elsio->u.els_plogi.tx_size = elsio->u.els_plogi.rx_size = DMA_POOL_SIZE;
+
+ ptr = elsio->u.els_plogi.els_plogi_pyld =
+@@ -3068,7 +3099,8 @@ qla24xx_els_dcmd2_iocb(scsi_qla_host_t *vha, int els_opcode,
+ out:
+ fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE);
+ qla2x00_els_dcmd2_free(vha, &elsio->u.els_plogi);
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+@@ -3879,8 +3911,15 @@ qla2x00_start_sp(srb_t *sp)
+ break;
+ }
+
+- if (sp->start_timer)
++ if (sp->start_timer) {
++ /* ref: TMR timer ref
++	 * This code must stay just before the start_iocbs call so that
++	 * the caller never has to do a kref_put, even on failure.
++ */
++ kref_get(&sp->cmd_kref);
+ add_timer(&sp->u.iocb_cmd.timer);
++ }
+
+ wmb();
+ qla2x00_start_iocbs(vha, qp->req);
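
The qla_iocb.c changes above are the heart of this series: every async SRB now carries an explicit kref. One reference is taken at init (INIT) and a second just before the timer is armed (TMR); whichever path cancels the timer drops the TMR reference, and the INIT reference is dropped when the command completes or fails to start. A minimal user-space model of that two-reference protocol follows; the names and types are illustrative stand-ins, not the driver's API.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct sp {
	int kref;		/* models sp->cmd_kref */
	bool timer_armed;	/* models add_timer()/del_timer() */
};

static void sp_release(struct sp *sp)	/* models qla2x00_sp_release() */
{
	printf("releasing sp\n");
	free(sp);
}

static void kref_put(struct sp *sp)
{
	if (--sp->kref == 0)
		sp_release(sp);
}

static struct sp *sp_get(void)		/* ref: INIT */
{
	struct sp *sp = calloc(1, sizeof(*sp));

	sp->kref = 1;
	return sp;
}

static void start_sp(struct sp *sp)	/* ref: TMR, taken with the timer */
{
	sp->kref++;
	sp->timer_armed = true;
}

static bool del_timer(struct sp *sp)	/* true if we cancelled it first */
{
	bool was_armed = sp->timer_armed;

	sp->timer_armed = false;
	return was_armed;
}

static void async_done(struct sp *sp)	/* models qla2x00_async_done() */
{
	if (del_timer(sp))
		kref_put(sp);	/* drop TMR only if the timer was pending */
	/* normal completion work runs here, then the owner drops INIT */
}

int main(void)
{
	struct sp *sp = sp_get();	/* INIT */

	start_sp(sp);			/* INIT + TMR */
	async_done(sp);			/* drops TMR */
	kref_put(sp);			/* drops INIT -> sp_release() */
	return 0;
}

Taking the TMR reference only at the point where the IOCB is actually issued is what lets every earlier error path get away with a single kref_put for INIT, exactly as the comment in qla2x00_start_sp() above describes.
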
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index aaf6504570fdd..198b782d77901 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -2498,6 +2498,7 @@ qla24xx_tm_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, void *tsk)
+ iocb->u.tmf.data = QLA_FUNCTION_FAILED;
+ } else if ((le16_to_cpu(sts->scsi_status) &
+ SS_RESPONSE_INFO_LEN_VALID)) {
++ host_to_fcp_swap(sts->data, sizeof(sts->data));
+ if (le32_to_cpu(sts->rsp_data_len) < 4) {
+ ql_log(ql_log_warn, fcport->vha, 0x503b,
+ "Async-%s error - hdl=%x not enough response(%d).\n",
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 10d2655ef6767..7f236db058869 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -9,6 +9,12 @@
+ #include <linux/delay.h>
+ #include <linux/gfp.h>
+
++#ifdef CONFIG_PPC
++#define IS_PPCARCH true
++#else
++#define IS_PPCARCH false
++#endif
++
+ static struct mb_cmd_name {
+ uint16_t cmd;
+ const char *str;
+@@ -728,6 +734,9 @@ again:
+ vha->min_supported_speed =
+ nv->min_supported_speed;
+ }
++
++ if (IS_PPCARCH)
++ mcp->mb[11] |= BIT_4;
+ }
+
+ if (ha->flags.exlogins_enabled)
+@@ -3029,8 +3038,7 @@ qla2x00_get_resource_cnts(scsi_qla_host_t *vha)
+ ha->orig_fw_iocb_count = mcp->mb[10];
+ if (ha->flags.npiv_supported)
+ ha->max_npiv_vports = mcp->mb[11];
+- if (IS_QLA81XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+- IS_QLA28XX(ha))
++ if (IS_QLA81XX(ha) || IS_QLA83XX(ha))
+ ha->fw_max_fcf_count = mcp->mb[12];
+ }
+
+@@ -5621,7 +5629,7 @@ qla2x00_get_data_rate(scsi_qla_host_t *vha)
+ mcp->out_mb = MBX_1|MBX_0;
+ mcp->in_mb = MBX_2|MBX_1|MBX_0;
+ if (IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha))
+- mcp->in_mb |= MBX_3;
++ mcp->in_mb |= MBX_4|MBX_3;
+ mcp->tov = MBX_TOV_SECONDS;
+ mcp->flags = 0;
+ rval = qla2x00_mailbox_command(vha, mcp);
+@@ -6479,23 +6487,21 @@ int qla24xx_send_mb_cmd(struct scsi_qla_host *vha, mbx_cmd_t *mcp)
+ if (!vha->hw->flags.fw_started)
+ goto done;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+- sp->type = SRB_MB_IOCB;
+- sp->name = mb_to_str(mcp->mb[0]);
+-
+ c = &sp->u.iocb_cmd;
+- c->timeout = qla2x00_async_iocb_timeout;
+ init_completion(&c->u.mbx.comp);
+
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ sp->type = SRB_MB_IOCB;
++ sp->name = mb_to_str(mcp->mb[0]);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_mb_sp_done);
+
+ memcpy(sp->u.iocb_cmd.u.mbx.out_mb, mcp->mb, SIZEOF_IOCB_MB_REG);
+
+- sp->done = qla2x00_async_mb_sp_done;
+-
+ rval = qla2x00_start_sp(sp);
+ if (rval != QLA_SUCCESS) {
+ ql_dbg(ql_dbg_mbx, vha, 0x1018,
+@@ -6527,7 +6533,8 @@ int qla24xx_send_mb_cmd(struct scsi_qla_host *vha, mbx_cmd_t *mcp)
+ }
+
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index 1c024055f8c50..e6b5c4ccce97b 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -965,6 +965,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ if (vp_index == 0 || vp_index >= ha->max_npiv_vports)
+ return QLA_PARAMETER_ERROR;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(base_vha, NULL, GFP_KERNEL);
+ if (!sp)
+ return rval;
+@@ -972,9 +973,8 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ sp->type = SRB_CTRL_VP;
+ sp->name = "ctrl_vp";
+	sp->comp = &comp;
+- sp->done = qla_ctrlvp_sp_done;
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla_ctrlvp_sp_done);
+ sp->u.iocb_cmd.u.ctrlvp.cmd = cmd;
+ sp->u.iocb_cmd.u.ctrlvp.vp_index = vp_index;
+
+@@ -1008,6 +1008,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd)
+ break;
+ }
+ done:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mr.c b/drivers/scsi/qla2xxx/qla_mr.c
+index 350b0c4346fb6..f726eb8449c5e 100644
+--- a/drivers/scsi/qla2xxx/qla_mr.c
++++ b/drivers/scsi/qla2xxx/qla_mr.c
+@@ -1787,17 +1787,18 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
+ struct register_host_info *preg_hsi;
+ struct new_utsname *p_sysid = NULL;
+
++ /* ref: INIT */
+ sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL);
+ if (!sp)
+ goto done;
+
+ sp->type = SRB_FXIOCB_DCMD;
+ sp->name = "fxdisc";
++ qla2x00_init_async_sp(sp, FXDISC_TIMEOUT,
++ qla2x00_fxdisc_sp_done);
++ sp->u.iocb_cmd.timeout = qla2x00_fxdisc_iocb_timeout;
+
+ fdisc = &sp->u.iocb_cmd;
+- fdisc->timeout = qla2x00_fxdisc_iocb_timeout;
+- qla2x00_init_timer(sp, FXDISC_TIMEOUT);
+-
+ switch (fx_type) {
+ case FXDISC_GET_CONFIG_INFO:
+ fdisc->u.fxiocb.flags =
+@@ -1898,7 +1899,6 @@ qlafx00_fx_disc(scsi_qla_host_t *vha, fc_port_t *fcport, uint16_t fx_type)
+ }
+
+ fdisc->u.fxiocb.req_func_type = cpu_to_le16(fx_type);
+- sp->done = qla2x00_fxdisc_sp_done;
+
+ rval = qla2x00_start_sp(sp);
+ if (rval != QLA_SUCCESS)
+@@ -1974,7 +1974,8 @@ done_unmap_req:
+ dma_free_coherent(&ha->pdev->dev, fdisc->u.fxiocb.req_len,
+ fdisc->u.fxiocb.req_addr, fdisc->u.fxiocb.req_dma_handle);
+ done_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ return rval;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index e22ec7cb65db5..4cfc2efdf7766 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -37,6 +37,11 @@ int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport)
+ (fcport->nvme_flag & NVME_FLAG_REGISTERED))
+ return 0;
+
++ if (atomic_read(&fcport->state) == FCS_ONLINE)
++ return 0;
++
++ qla2x00_set_fcport_state(fcport, FCS_ONLINE);
++
+ fcport->nvme_flag &= ~NVME_FLAG_RESETTING;
+
+ memset(&req, 0, sizeof(struct nvme_fc_port_info));
+@@ -170,6 +175,18 @@ out:
+ qla2xxx_rel_qpair_sp(sp->qpair, sp);
+ }
+
++static void qla_nvme_ls_unmap(struct srb *sp, struct nvmefc_ls_req *fd)
++{
++ if (sp->flags & SRB_DMA_VALID) {
++ struct srb_iocb *nvme = &sp->u.iocb_cmd;
++ struct qla_hw_data *ha = sp->fcport->vha->hw;
++
++ dma_unmap_single(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
++ fd->rqstlen, DMA_TO_DEVICE);
++ sp->flags &= ~SRB_DMA_VALID;
++ }
++}
++
+ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ {
+ struct srb *sp = container_of(kref, struct srb, cmd_kref);
+@@ -186,6 +203,8 @@ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
+ spin_unlock_irqrestore(&priv->cmd_lock, flags);
+
+ fd = priv->fd;
++
++ qla_nvme_ls_unmap(sp, fd);
+ fd->done(fd, priv->comp_status);
+ out:
+ qla2x00_rel_sp(sp);
+@@ -356,6 +375,8 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
+ fd->rqstlen, DMA_TO_DEVICE);
+
++ sp->flags |= SRB_DMA_VALID;
++
+ rval = qla2x00_start_sp(sp);
+ if (rval != QLA_SUCCESS) {
+ ql_log(ql_log_warn, vha, 0x700e,
+@@ -363,6 +384,7 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
+ wake_up(&sp->nvme_ls_waitq);
+ sp->priv = NULL;
+ priv->sp = NULL;
++ qla_nvme_ls_unmap(sp, fd);
+ qla2x00_rel_sp(sp);
+ return rval;
+ }
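
The qla_nvme.c hunks above pair the new SRB_DMA_VALID flag with qla_nvme_ls_unmap() so the LS request buffer is unmapped exactly once, whether the command completes through the kref release or qla2x00_start_sp() fails early. A tiny sketch of the flag-guarded teardown idiom, with user-space stand-ins in place of the real DMA API:

#include <stdbool.h>
#include <stdio.h>

struct req {
	bool dma_valid;		/* models SRB_DMA_VALID */
};

static void map_buf(struct req *r)
{
	/* dma_map_single() would happen here */
	r->dma_valid = true;
}

static void unmap_buf(struct req *r)	/* safe from any teardown path */
{
	if (!r->dma_valid)
		return;			/* already unmapped, or never mapped */
	/* dma_unmap_single() would happen here */
	r->dma_valid = false;
	printf("unmapped once\n");
}

int main(void)
{
	struct req r = { false };

	map_buf(&r);
	unmap_buf(&r);	/* e.g. the start_sp() failure path */
	unmap_buf(&r);	/* e.g. the kref release path: now a no-op */
	return 0;
}
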
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index abcd309172638..6dc2189badd33 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -728,7 +728,8 @@ void qla2x00_sp_compl(srb_t *sp, int res)
+ struct scsi_cmnd *cmd = GET_CMD_SP(sp);
+ struct completion *comp = sp->comp;
+
+- sp->free(sp);
++ /* kref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ cmd->result = res;
+ CMD_SP(cmd) = NULL;
+ scsi_done(cmd);
+@@ -819,7 +820,8 @@ void qla2xxx_qpair_sp_compl(srb_t *sp, int res)
+ struct scsi_cmnd *cmd = GET_CMD_SP(sp);
+ struct completion *comp = sp->comp;
+
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ cmd->result = res;
+ CMD_SP(cmd) = NULL;
+ scsi_done(cmd);
+@@ -919,6 +921,7 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ goto qc24_target_busy;
+
+ sp = scsi_cmd_priv(cmd);
++ /* ref: INIT */
+ qla2xxx_init_sp(sp, vha, vha->hw->base_qpair, fcport);
+
+ sp->u.scmd.cmd = cmd;
+@@ -938,7 +941,8 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ return 0;
+
+ qc24_host_busy_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+
+ qc24_target_busy:
+ return SCSI_MLQUEUE_TARGET_BUSY;
+@@ -1008,6 +1012,7 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ goto qc24_target_busy;
+
+ sp = scsi_cmd_priv(cmd);
++ /* ref: INIT */
+ qla2xxx_init_sp(sp, vha, qpair, fcport);
+
+ sp->u.scmd.cmd = cmd;
+@@ -1026,7 +1031,8 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd,
+ return 0;
+
+ qc24_host_busy_free_sp:
+- sp->free(sp);
++ /* ref: INIT */
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+
+ qc24_target_busy:
+ return SCSI_MLQUEUE_TARGET_BUSY;
+@@ -3748,8 +3754,7 @@ qla2x00_unmap_iobases(struct qla_hw_data *ha)
+ if (ha->mqiobase)
+ iounmap(ha->mqiobase);
+
+- if ((IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) &&
+- ha->msixbase)
++ if (ha->msixbase)
+ iounmap(ha->msixbase);
+ }
+ }
+@@ -3891,6 +3896,8 @@ qla24xx_free_purex_list(struct purex_list *list)
+ spin_lock_irqsave(&list->lock, flags);
+ list_for_each_entry_safe(item, next, &list->head, list) {
+ list_del(&item->list);
++ if (item == &item->vha->default_item)
++ continue;
+ kfree(item);
+ }
+ spin_unlock_irqrestore(&list->lock, flags);
+@@ -5526,6 +5533,11 @@ void qla2x00_relogin(struct scsi_qla_host *vha)
+ memset(&ea, 0, sizeof(ea));
+ ea.fcport = fcport;
+ qla24xx_handle_relogin_event(vha, &ea);
++ } else if (vha->hw->current_topology ==
++ ISP_CFG_NL &&
++ IS_QLA2XXX_MIDTYPE(vha->hw)) {
++ (void)qla24xx_fcport_handle_login(vha,
++ fcport);
+ } else if (vha->hw->current_topology ==
+ ISP_CFG_NL) {
+ fcport->login_retry--;
+@@ -7199,7 +7211,7 @@ skip:
+ return do_heartbeat;
+ }
+
+-static void qla_heart_beat(struct scsi_qla_host *vha)
++static void qla_heart_beat(struct scsi_qla_host *vha, u16 dpc_started)
+ {
+ struct qla_hw_data *ha = vha->hw;
+
+@@ -7209,8 +7221,19 @@ static void qla_heart_beat(struct scsi_qla_host *vha)
+ if (vha->hw->flags.eeh_busy || qla2x00_chip_is_down(vha))
+ return;
+
+- if (qla_do_heartbeat(vha))
++ /*
++	 * The dpc thread cannot run while the heartbeat is running, but
++	 * we also do not want to starve the heartbeat task. Therefore,
++	 * run the heartbeat task at least once every 5 seconds.
++ */
++ if (dpc_started &&
++ time_before(jiffies, ha->last_heartbeat_run_jiffies + 5 * HZ))
++ return;
++
++ if (qla_do_heartbeat(vha)) {
++ ha->last_heartbeat_run_jiffies = jiffies;
+ queue_work(ha->wq, &ha->heartbeat_work);
++ }
+ }
+
+ /**************************************************************************
+@@ -7401,6 +7424,8 @@ qla2x00_timer(struct timer_list *t)
+ start_dpc++;
+ }
+
++	/* borrow w to signal that the dpc thread will run */
++ w = 0;
+ /* Schedule the DPC routine if needed */
+ if ((test_bit(ISP_ABORT_NEEDED, &vha->dpc_flags) ||
+ test_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags) ||
+@@ -7433,9 +7458,10 @@ qla2x00_timer(struct timer_list *t)
+ test_bit(RELOGIN_NEEDED, &vha->dpc_flags),
+ test_bit(PROCESS_PUREX_IOCB, &vha->dpc_flags));
+ qla2xxx_wake_dpc(vha);
++ w = 1;
+ }
+
+- qla_heart_beat(vha);
++ qla_heart_beat(vha, w);
+
+ qla2x00_restart_timer(vha, WATCH_INTERVAL);
+ }
+@@ -7633,7 +7659,7 @@ qla2xxx_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
+
+ switch (state) {
+ case pci_channel_io_normal:
+- ha->flags.eeh_busy = 0;
++ qla_pci_set_eeh_busy(vha);
+ if (ql2xmqsupport || ql2xnvmeenable) {
+ set_bit(QPAIR_ONLINE_CHECK_NEEDED, &vha->dpc_flags);
+ qla2xxx_wake_dpc(vha);
+@@ -7674,9 +7700,16 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
+ "mmio enabled\n");
+
+ ha->pci_error_state = QLA_PCI_MMIO_ENABLED;
++
+ if (IS_QLA82XX(ha))
+ return PCI_ERS_RESULT_RECOVERED;
+
++ if (qla2x00_isp_reg_stat(ha)) {
++ ql_log(ql_log_info, base_vha, 0x803f,
++ "During mmio enabled, PCI/Register disconnect still detected.\n");
++ goto out;
++ }
++
+ spin_lock_irqsave(&ha->hardware_lock, flags);
+ if (IS_QLA2100(ha) || IS_QLA2200(ha)){
+ 		stat = rd_reg_word(&reg->hccr);
+@@ -7698,6 +7731,7 @@ qla2xxx_pci_mmio_enabled(struct pci_dev *pdev)
+ "RISC paused -- mmio_enabled, Dumping firmware.\n");
+ qla2xxx_dump_fw(base_vha);
+ }
++out:
+ /* set PCI_ERS_RESULT_NEED_RESET to trigger call to qla2xxx_pci_slot_reset */
+ ql_dbg(ql_dbg_aer, base_vha, 0x600d,
+ "mmio enabled returning.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_sup.c b/drivers/scsi/qla2xxx/qla_sup.c
+index a0aeba69513d4..c092a6b1ced4f 100644
+--- a/drivers/scsi/qla2xxx/qla_sup.c
++++ b/drivers/scsi/qla2xxx/qla_sup.c
+@@ -844,7 +844,7 @@ qla2xxx_get_flt_info(scsi_qla_host_t *vha, uint32_t flt_addr)
+ ha->flt_region_nvram = start;
+ break;
+ case FLT_REG_IMG_PRI_27XX:
+- if (IS_QLA27XX(ha) && !IS_QLA28XX(ha))
++ if (IS_QLA27XX(ha) || IS_QLA28XX(ha))
+ ha->flt_region_img_status_pri = start;
+ break;
+ case FLT_REG_IMG_SEC_27XX:
+@@ -1356,7 +1356,7 @@ next:
+ flash_data_addr(ha, faddr), le32_to_cpu(*dwptr));
+ if (ret) {
+ ql_dbg(ql_dbg_user, vha, 0x7006,
+- "Failed slopw write %x (%x)\n", faddr, *dwptr);
++ "Failed slow write %x (%x)\n", faddr, *dwptr);
+ break;
+ }
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 8993d438e0b72..b109716d44fb7 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -620,7 +620,7 @@ static void qla2x00_async_nack_sp_done(srb_t *sp, int res)
+ }
+ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+
+- sp->free(sp);
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ }
+
+ int qla24xx_async_notify_ack(scsi_qla_host_t *vha, fc_port_t *fcport,
+@@ -656,12 +656,10 @@ int qla24xx_async_notify_ack(scsi_qla_host_t *vha, fc_port_t *fcport,
+
+ sp->type = type;
+ sp->name = "nack";
+-
+- sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+- qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha)+2);
++ qla2x00_init_async_sp(sp, qla2x00_get_async_timeout(vha) + 2,
++ qla2x00_async_nack_sp_done);
+
+ sp->u.iocb_cmd.u.nack.ntfy = ntfy;
+- sp->done = qla2x00_async_nack_sp_done;
+
+ ql_dbg(ql_dbg_disc, vha, 0x20f4,
+ "Async-%s %8phC hndl %x %s\n",
+@@ -674,7 +672,7 @@ int qla24xx_async_notify_ack(scsi_qla_host_t *vha, fc_port_t *fcport,
+ return rval;
+
+ done_free_sp:
+- sp->free(sp);
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
+ done:
+ fcport->flags &= ~FCF_ASYNC_SENT;
+ return rval;
+@@ -3320,6 +3318,7 @@ int qlt_xmit_response(struct qla_tgt_cmd *cmd, int xmit_type,
+ "RESET-RSP online/active/old-count/new-count = %d/%d/%d/%d.\n",
+ vha->flags.online, qla2x00_reset_active(vha),
+ cmd->reset_count, qpair->chip_reset);
++ res = 0;
+ goto out_unmap_unlock;
+ }
+
+@@ -7221,8 +7220,7 @@ qlt_probe_one_stage1(struct scsi_qla_host *base_vha, struct qla_hw_data *ha)
+ if (!QLA_TGT_MODE_ENABLED())
+ return;
+
+- if ((ql2xenablemsix == 0) || IS_QLA83XX(ha) || IS_QLA27XX(ha) ||
+- IS_QLA28XX(ha)) {
++ if (ha->mqenable || IS_QLA83XX(ha) || IS_QLA27XX(ha) || IS_QLA28XX(ha)) {
+ ISP_ATIO_Q_IN(base_vha) = &ha->mqiobase->isp25mq.atio_q_in;
+ ISP_ATIO_Q_OUT(base_vha) = &ha->mqiobase->isp25mq.atio_q_out;
+ } else {
+diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
+index 26c13a953b975..b0a74b036cf4b 100644
+--- a/drivers/scsi/qla2xxx/qla_tmpl.c
++++ b/drivers/scsi/qla2xxx/qla_tmpl.c
+@@ -435,8 +435,13 @@ qla27xx_fwdt_entry_t266(struct scsi_qla_host *vha,
+ {
+ ql_dbg(ql_dbg_misc, vha, 0xd20a,
+ "%s: reset risc [%lx]\n", __func__, *len);
+- if (buf)
+- WARN_ON_ONCE(qla24xx_soft_reset(vha->hw) != QLA_SUCCESS);
++ if (buf) {
++ if (qla24xx_soft_reset(vha->hw) != QLA_SUCCESS) {
++ ql_dbg(ql_dbg_async, vha, 0x5001,
++ "%s: unable to soft reset\n", __func__);
++ return INVALID_ENTRY;
++ }
++ }
+
+ return qla27xx_next_entry(ent);
+ }
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 60a6ae9d1219f..a75499616f5ef 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -484,8 +484,13 @@ static void scsi_report_sense(struct scsi_device *sdev,
+
+ if (sshdr->asc == 0x29) {
+ evt_type = SDEV_EVT_POWER_ON_RESET_OCCURRED;
+- sdev_printk(KERN_WARNING, sdev,
+- "Power-on or device reset occurred\n");
++ /*
++	 * Do not print the message if it is an expected side effect
++	 * of runtime PM.
++ */
++ if (!sdev->silence_suspend)
++ sdev_printk(KERN_WARNING, sdev,
++ "Power-on or device reset occurred\n");
+ }
+
+ if (sshdr->asc == 0x2a && sshdr->ascq == 0x01) {
+diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
+index 60e406bcf42a9..a2524106206db 100644
+--- a/drivers/scsi/scsi_transport_fc.c
++++ b/drivers/scsi/scsi_transport_fc.c
+@@ -34,7 +34,7 @@ static int fc_bsg_hostadd(struct Scsi_Host *, struct fc_host_attrs *);
+ static int fc_bsg_rportadd(struct Scsi_Host *, struct fc_rport *);
+ static void fc_bsg_remove(struct request_queue *);
+ static void fc_bsg_goose_queue(struct fc_rport *);
+-static void fc_li_stats_update(struct fc_fn_li_desc *li_desc,
++static void fc_li_stats_update(u16 event_type,
+ struct fc_fpin_stats *stats);
+ static void fc_delivery_stats_update(u32 reason_code,
+ struct fc_fpin_stats *stats);
+@@ -670,42 +670,34 @@ fc_find_rport_by_wwpn(struct Scsi_Host *shost, u64 wwpn)
+ EXPORT_SYMBOL(fc_find_rport_by_wwpn);
+
+ static void
+-fc_li_stats_update(struct fc_fn_li_desc *li_desc,
++fc_li_stats_update(u16 event_type,
+ struct fc_fpin_stats *stats)
+ {
+- stats->li += be32_to_cpu(li_desc->event_count);
+- switch (be16_to_cpu(li_desc->event_type)) {
++ stats->li++;
++ switch (event_type) {
+ case FPIN_LI_UNKNOWN:
+- stats->li_failure_unknown +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_failure_unknown++;
+ break;
+ case FPIN_LI_LINK_FAILURE:
+- stats->li_link_failure_count +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_link_failure_count++;
+ break;
+ case FPIN_LI_LOSS_OF_SYNC:
+- stats->li_loss_of_sync_count +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_loss_of_sync_count++;
+ break;
+ case FPIN_LI_LOSS_OF_SIG:
+- stats->li_loss_of_signals_count +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_loss_of_signals_count++;
+ break;
+ case FPIN_LI_PRIM_SEQ_ERR:
+- stats->li_prim_seq_err_count +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_prim_seq_err_count++;
+ break;
+ case FPIN_LI_INVALID_TX_WD:
+- stats->li_invalid_tx_word_count +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_invalid_tx_word_count++;
+ break;
+ case FPIN_LI_INVALID_CRC:
+- stats->li_invalid_crc_count +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_invalid_crc_count++;
+ break;
+ case FPIN_LI_DEVICE_SPEC:
+- stats->li_device_specific +=
+- be32_to_cpu(li_desc->event_count);
++ stats->li_device_specific++;
+ break;
+ }
+ }
+@@ -767,6 +759,7 @@ fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
+ struct fc_rport *attach_rport = NULL;
+ struct fc_host_attrs *fc_host = shost_to_fc_host(shost);
+ struct fc_fn_li_desc *li_desc = (struct fc_fn_li_desc *)tlv;
++ u16 event_type = be16_to_cpu(li_desc->event_type);
+ u64 wwpn;
+
+ rport = fc_find_rport_by_wwpn(shost,
+@@ -775,7 +768,7 @@ fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
+ (rport->roles & FC_PORT_ROLE_FCP_TARGET ||
+ rport->roles & FC_PORT_ROLE_NVME_TARGET)) {
+ attach_rport = rport;
+- fc_li_stats_update(li_desc, &attach_rport->fpin_stats);
++ fc_li_stats_update(event_type, &attach_rport->fpin_stats);
+ }
+
+ if (be32_to_cpu(li_desc->pname_count) > 0) {
+@@ -789,14 +782,14 @@ fc_fpin_li_stats_update(struct Scsi_Host *shost, struct fc_tlv_desc *tlv)
+ rport->roles & FC_PORT_ROLE_NVME_TARGET)) {
+ if (rport == attach_rport)
+ continue;
+- fc_li_stats_update(li_desc,
++ fc_li_stats_update(event_type,
+ &rport->fpin_stats);
+ }
+ }
+ }
+
+ if (fc_host->port_name == be64_to_cpu(li_desc->attached_wwpn))
+- fc_li_stats_update(li_desc, &fc_host->fpin_stats);
++ fc_li_stats_update(event_type, &fc_host->fpin_stats);
+ }
+
+ /*
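
The scsi_transport_fc.c rework above changes FPIN link-integrity accounting: instead of adding the descriptor's 32-bit event_count into each bucket, every received notification now bumps the matching counter by one. A reduced model of the new accounting, with the event list trimmed to three entries for brevity:

#include <stdio.h>

enum li_event { LI_UNKNOWN, LI_LINK_FAILURE, LI_LOSS_OF_SYNC };

struct li_stats {
	unsigned long li, unknown, link_failure, loss_of_sync;
};

/* One FPIN notification is one tick, regardless of event_count. */
static void li_stats_update(enum li_event type, struct li_stats *s)
{
	s->li++;
	switch (type) {
	case LI_UNKNOWN:
		s->unknown++;
		break;
	case LI_LINK_FAILURE:
		s->link_failure++;
		break;
	case LI_LOSS_OF_SYNC:
		s->loss_of_sync++;
		break;
	}
}

int main(void)
{
	struct li_stats s = { 0 };

	li_stats_update(LI_LINK_FAILURE, &s);
	printf("li=%lu link_failure=%lu\n", s.li, s.link_failure);
	return 0;
}
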
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 62eb9921cc947..66056806159a6 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3752,7 +3752,8 @@ static int sd_suspend_common(struct device *dev, bool ignore_stop_errors)
+ return 0;
+
+ if (sdkp->WCE && sdkp->media_present) {
+- sd_printk(KERN_NOTICE, sdkp, "Synchronizing SCSI cache\n");
++ if (!sdkp->device->silence_suspend)
++ sd_printk(KERN_NOTICE, sdkp, "Synchronizing SCSI cache\n");
+ ret = sd_sync_cache(sdkp, &sshdr);
+
+ if (ret) {
+@@ -3774,7 +3775,8 @@ static int sd_suspend_common(struct device *dev, bool ignore_stop_errors)
+ }
+
+ if (sdkp->device->manage_start_stop) {
+- sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
++ if (!sdkp->device->silence_suspend)
++ sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
+ /* an error is not worth aborting a system sleep */
+ ret = sd_start_stop_device(sdkp, 0);
+ if (ignore_stop_errors)
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 9349557b8a01b..cb285d277201c 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -585,7 +585,12 @@ static void ufshcd_print_pwr_info(struct ufs_hba *hba)
+ "INVALID MODE",
+ };
+
+- dev_err(hba->dev, "%s:[RX, TX]: gear=[%d, %d], lane[%d, %d], pwr[%s, %s], rate = %d\n",
++ /*
++	 * Use dev_dbg to suppress messages during runtime PM; otherwise
++	 * messages written back to storage by user space cause a runtime
++	 * resume, which causes more messages, and so on without end.
++ */
++ dev_dbg(hba->dev, "%s:[RX, TX]: gear=[%d, %d], lane[%d, %d], pwr[%s, %s], rate = %d\n",
+ __func__,
+ hba->pwr_info.gear_rx, hba->pwr_info.gear_tx,
+ hba->pwr_info.lane_rx, hba->pwr_info.lane_tx,
+@@ -5024,6 +5029,12 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
+ pm_runtime_get_noresume(&sdev->sdev_gendev);
+ else if (ufshcd_is_rpm_autosuspend_allowed(hba))
+ sdev->rpm_autosuspend = 1;
++ /*
++	 * Do not print messages during runtime PM; otherwise messages
++	 * written back to storage by user space cause a runtime resume,
++	 * which causes more messages, and so on without end.
++ */
++ sdev->silence_suspend = 1;
+
+ ufshcd_crypto_register(hba, q);
+
+@@ -7339,7 +7350,13 @@ static u32 ufshcd_find_max_sup_active_icc_level(struct ufs_hba *hba,
+
+ if (!hba->vreg_info.vcc || !hba->vreg_info.vccq ||
+ !hba->vreg_info.vccq2) {
+- dev_err(hba->dev,
++ /*
++	 * Use dev_dbg to suppress messages during runtime PM;
++	 * otherwise messages written back to storage by user space
++	 * cause a runtime resume, which causes more messages, and so
++	 * on without end.
++ */
++ dev_dbg(hba->dev,
+ "%s: Regulator capability was not set, actvIccLevel=%d",
+ __func__, icc_level);
+ goto out;
+diff --git a/drivers/soc/mediatek/mtk-pm-domains.c b/drivers/soc/mediatek/mtk-pm-domains.c
+index b762bc40f56bd..afd2fd74802d2 100644
+--- a/drivers/soc/mediatek/mtk-pm-domains.c
++++ b/drivers/soc/mediatek/mtk-pm-domains.c
+@@ -443,6 +443,9 @@ generic_pm_domain *scpsys_add_one_domain(struct scpsys *scpsys, struct device_no
+ pd->genpd.power_off = scpsys_power_off;
+ pd->genpd.power_on = scpsys_power_on;
+
++ if (MTK_SCPD_CAPS(pd, MTK_SCPD_ACTIVE_WAKEUP))
++ pd->genpd.flags |= GENPD_FLAG_ACTIVE_WAKEUP;
++
+ if (MTK_SCPD_CAPS(pd, MTK_SCPD_KEEP_DEFAULT_OFF))
+ pm_genpd_init(&pd->genpd, NULL, true);
+ else
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index d2dacbbaafbd1..97fd24c178f8d 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -206,6 +206,7 @@ struct ocmem *of_get_ocmem(struct device *dev)
+ ocmem = platform_get_drvdata(pdev);
+ if (!ocmem) {
+ dev_err(dev, "Cannot get ocmem\n");
++ put_device(&pdev->dev);
+ return ERR_PTR(-ENODEV);
+ }
+ return ocmem;
+diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c
+index cbe5e39fdaeb0..a59bb34e5ebaf 100644
+--- a/drivers/soc/qcom/qcom_aoss.c
++++ b/drivers/soc/qcom/qcom_aoss.c
+@@ -451,7 +451,11 @@ struct qmp *qmp_get(struct device *dev)
+
+ qmp = platform_get_drvdata(pdev);
+
+- return qmp ? qmp : ERR_PTR(-EPROBE_DEFER);
++ if (!qmp) {
++ put_device(&pdev->dev);
++ return ERR_PTR(-EPROBE_DEFER);
++ }
++ return qmp;
+ }
+ EXPORT_SYMBOL(qmp_get);
+
+@@ -497,7 +501,7 @@ static int qmp_probe(struct platform_device *pdev)
+ }
+
+ irq = platform_get_irq(pdev, 0);
+- ret = devm_request_irq(&pdev->dev, irq, qmp_intr, IRQF_ONESHOT,
++ ret = devm_request_irq(&pdev->dev, irq, qmp_intr, 0,
+ "aoss-qmp", qmp);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to request interrupt\n");
+diff --git a/drivers/soc/qcom/rpmpd.c b/drivers/soc/qcom/rpmpd.c
+index 0a8d8d24bfb77..624b5630feb87 100644
+--- a/drivers/soc/qcom/rpmpd.c
++++ b/drivers/soc/qcom/rpmpd.c
+@@ -610,6 +610,9 @@ static int rpmpd_probe(struct platform_device *pdev)
+
+ data->domains = devm_kcalloc(&pdev->dev, num, sizeof(*data->domains),
+ GFP_KERNEL);
++ if (!data->domains)
++ return -ENOMEM;
++
+ data->num_domains = num;
+
+ for (i = 0; i < num; i++) {
+diff --git a/drivers/soc/ti/wkup_m3_ipc.c b/drivers/soc/ti/wkup_m3_ipc.c
+index 72386bd393fed..2f03ced0f4113 100644
+--- a/drivers/soc/ti/wkup_m3_ipc.c
++++ b/drivers/soc/ti/wkup_m3_ipc.c
+@@ -450,9 +450,9 @@ static int wkup_m3_ipc_probe(struct platform_device *pdev)
+ return PTR_ERR(m3_ipc->ipc_mem_base);
+
+ irq = platform_get_irq(pdev, 0);
+- if (!irq) {
++ if (irq < 0) {
+ dev_err(&pdev->dev, "no irq resource\n");
+- return -ENXIO;
++ return irq;
+ }
+
+ ret = devm_request_irq(dev, irq, wkup_m3_txev_handler,
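
The wkup_m3_ipc fix above, like the tegra114 and tegra210-quad SPI hunks further down, corrects the same mistake: platform_get_irq() reports failure as a negative errno and never returns 0, so testing !irq both misses real errors and substitutes a made-up -ENXIO for the kernel's own error code. The corrected control flow, reduced to a stub:

#include <errno.h>
#include <stdio.h>

/* Stand-in for platform_get_irq(): negative errno on failure, never 0. */
static int platform_get_irq_stub(int fail)
{
	return fail ? -EINVAL : 42;
}

static int probe(int fail)
{
	int irq = platform_get_irq_stub(fail);

	if (irq < 0)		/* not "!irq": zero is not the error value */
		return irq;	/* propagate the real errno, not -ENXIO */

	printf("got irq %d\n", irq);
	return 0;
}

int main(void)
{
	probe(0);
	printf("failure path returns %d\n", probe(1));
	return 0;
}
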
+diff --git a/drivers/soundwire/dmi-quirks.c b/drivers/soundwire/dmi-quirks.c
+index 0ca2a3e3a02e2..747983743a14b 100644
+--- a/drivers/soundwire/dmi-quirks.c
++++ b/drivers/soundwire/dmi-quirks.c
+@@ -59,7 +59,7 @@ static const struct dmi_system_id adr_remap_quirk_table[] = {
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Convertible"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Conv"),
+ },
+ .driver_data = (void *)intel_tgl_bios,
+ },
+diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
+index 122f7a29d8ca9..63101f1ba2713 100644
+--- a/drivers/soundwire/intel.c
++++ b/drivers/soundwire/intel.c
+@@ -448,8 +448,8 @@ static void intel_shim_wake(struct sdw_intel *sdw, bool wake_enable)
+
+ /* Clear wake status */
+ wake_sts = intel_readw(shim, SDW_SHIM_WAKESTS);
+- wake_sts |= (SDW_SHIM_WAKEEN_ENABLE << link_id);
+- intel_writew(shim, SDW_SHIM_WAKESTS_STATUS, wake_sts);
++ wake_sts |= (SDW_SHIM_WAKESTS_STATUS << link_id);
++ intel_writew(shim, SDW_SHIM_WAKESTS, wake_sts);
+ }
+ mutex_unlock(sdw->link_res->shim_lock);
+ }
+diff --git a/drivers/spi/spi-fsi.c b/drivers/spi/spi-fsi.c
+index b6c7467f0b590..d403a7a3021d0 100644
+--- a/drivers/spi/spi-fsi.c
++++ b/drivers/spi/spi-fsi.c
+@@ -25,6 +25,7 @@
+
+ #define SPI_FSI_BASE 0x70000
+ #define SPI_FSI_INIT_TIMEOUT_MS 1000
++#define SPI_FSI_STATUS_TIMEOUT_MS 100
+ #define SPI_FSI_MAX_RX_SIZE 8
+ #define SPI_FSI_MAX_TX_SIZE 40
+
+@@ -299,6 +300,7 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ struct spi_transfer *transfer)
+ {
+ int rc = 0;
++ unsigned long end;
+ u64 status = 0ULL;
+
+ if (transfer->tx_buf) {
+@@ -315,10 +317,14 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ if (rc)
+ return rc;
+
++ end = jiffies + msecs_to_jiffies(SPI_FSI_STATUS_TIMEOUT_MS);
+ do {
+ rc = fsi_spi_status(ctx, &status, "TX");
+ if (rc)
+ return rc;
++
++ if (time_after(jiffies, end))
++ return -ETIMEDOUT;
+ } while (status & SPI_FSI_STATUS_TDR_FULL);
+
+ sent += nb;
+@@ -329,10 +335,14 @@ static int fsi_spi_transfer_data(struct fsi_spi *ctx,
+ u8 *rx = transfer->rx_buf;
+
+ while (transfer->len > recv) {
++ end = jiffies + msecs_to_jiffies(SPI_FSI_STATUS_TIMEOUT_MS);
+ do {
+ rc = fsi_spi_status(ctx, &status, "RX");
+ if (rc)
+ return rc;
++
++ if (time_after(jiffies, end))
++ return -ETIMEDOUT;
+ } while (!(status & SPI_FSI_STATUS_RDR_FULL));
+
+ rc = fsi_spi_read_reg(ctx, SPI_FSI_DATA_RX, &in);
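
The spi-fsi change above bounds each status-poll loop with a 100 ms deadline so a wedged controller returns -ETIMEDOUT instead of spinning forever; the kernel form pairs jiffies + msecs_to_jiffies() with time_after(). A portable sketch of the same deadline loop, with the FIFO status bit stubbed out:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

static bool tdr_full(void)	/* stand-in for SPI_FSI_STATUS_TDR_FULL */
{
	return false;
}

static int wait_tx_room(void)
{
	long end = now_ms() + 100;	/* SPI_FSI_STATUS_TIMEOUT_MS */

	do {
		if (now_ms() > end)
			return -ETIMEDOUT;	/* give up, don't spin */
	} while (tdr_full());

	return 0;
}

int main(void)
{
	printf("wait_tx_room() = %d\n", wait_tx_room());
	return 0;
}
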
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 753bd313e6fda..2ca19b01948a2 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -43,8 +43,11 @@
+ #define SPI_CFG1_PACKET_LOOP_OFFSET 8
+ #define SPI_CFG1_PACKET_LENGTH_OFFSET 16
+ #define SPI_CFG1_GET_TICK_DLY_OFFSET 29
++#define SPI_CFG1_GET_TICK_DLY_OFFSET_V1 30
+
+ #define SPI_CFG1_GET_TICK_DLY_MASK 0xe0000000
++#define SPI_CFG1_GET_TICK_DLY_MASK_V1 0xc0000000
++
+ #define SPI_CFG1_CS_IDLE_MASK 0xff
+ #define SPI_CFG1_PACKET_LOOP_MASK 0xff00
+ #define SPI_CFG1_PACKET_LENGTH_MASK 0x3ff0000
+@@ -346,9 +349,15 @@ static int mtk_spi_prepare_message(struct spi_master *master,
+
+ /* tick delay */
+ reg_val = readl(mdata->base + SPI_CFG1_REG);
+- reg_val &= ~SPI_CFG1_GET_TICK_DLY_MASK;
+- reg_val |= ((chip_config->tick_delay & 0x7)
+- << SPI_CFG1_GET_TICK_DLY_OFFSET);
++ if (mdata->dev_comp->enhance_timing) {
++ reg_val &= ~SPI_CFG1_GET_TICK_DLY_MASK;
++ reg_val |= ((chip_config->tick_delay & 0x7)
++ << SPI_CFG1_GET_TICK_DLY_OFFSET);
++ } else {
++ reg_val &= ~SPI_CFG1_GET_TICK_DLY_MASK_V1;
++ reg_val |= ((chip_config->tick_delay & 0x3)
++ << SPI_CFG1_GET_TICK_DLY_OFFSET_V1);
++ }
+ writel(reg_val, mdata->base + SPI_CFG1_REG);
+
+ /* set hw cs timing */
+diff --git a/drivers/spi/spi-mxic.c b/drivers/spi/spi-mxic.c
+index 45889947afed8..03fce4493aa79 100644
+--- a/drivers/spi/spi-mxic.c
++++ b/drivers/spi/spi-mxic.c
+@@ -304,25 +304,21 @@ static int mxic_spi_data_xfer(struct mxic_spi *mxic, const void *txbuf,
+
+ writel(data, mxic->regs + TXD(nbytes % 4));
+
++ ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
++ sts & INT_TX_EMPTY, 0, USEC_PER_SEC);
++ if (ret)
++ return ret;
++
++ ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
++ sts & INT_RX_NOT_EMPTY, 0,
++ USEC_PER_SEC);
++ if (ret)
++ return ret;
++
++ data = readl(mxic->regs + RXD);
+ if (rxbuf) {
+- ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
+- sts & INT_TX_EMPTY, 0,
+- USEC_PER_SEC);
+- if (ret)
+- return ret;
+-
+- ret = readl_poll_timeout(mxic->regs + INT_STS, sts,
+- sts & INT_RX_NOT_EMPTY, 0,
+- USEC_PER_SEC);
+- if (ret)
+- return ret;
+-
+- data = readl(mxic->regs + RXD);
+ data >>= (8 * (4 - nbytes));
+ memcpy(rxbuf + pos, &data, nbytes);
+- WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY);
+- } else {
+- readl(mxic->regs + RXD);
+ }
+ WARN_ON(readl(mxic->regs + INT_STS) & INT_RX_NOT_EMPTY);
+
+diff --git a/drivers/spi/spi-pxa2xx-pci.c b/drivers/spi/spi-pxa2xx-pci.c
+index 2e134eb4bd2c9..6502fda6243e0 100644
+--- a/drivers/spi/spi-pxa2xx-pci.c
++++ b/drivers/spi/spi-pxa2xx-pci.c
+@@ -76,14 +76,23 @@ static bool lpss_dma_filter(struct dma_chan *chan, void *param)
+ return true;
+ }
+
++static void lpss_dma_put_device(void *dma_dev)
++{
++ pci_dev_put(dma_dev);
++}
++
+ static int lpss_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ {
+ struct pci_dev *dma_dev;
++ int ret;
+
+ c->num_chipselect = 1;
+ c->max_clk_rate = 50000000;
+
+ dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
++ ret = devm_add_action_or_reset(&dev->dev, lpss_dma_put_device, dma_dev);
++ if (ret)
++ return ret;
+
+ if (c->tx_param) {
+ struct dw_dma_slave *slave = c->tx_param;
+@@ -107,8 +116,9 @@ static int lpss_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+
+ static int mrfld_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ {
+- struct pci_dev *dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(21, 0));
+ struct dw_dma_slave *tx, *rx;
++ struct pci_dev *dma_dev;
++ int ret;
+
+ switch (PCI_FUNC(dev->devfn)) {
+ case 0:
+@@ -133,6 +143,11 @@ static int mrfld_spi_setup(struct pci_dev *dev, struct pxa_spi_info *c)
+ return -ENODEV;
+ }
+
++ dma_dev = pci_get_slot(dev->bus, PCI_DEVFN(21, 0));
++ ret = devm_add_action_or_reset(&dev->dev, lpss_dma_put_device, dma_dev);
++ if (ret)
++ return ret;
++
+ tx = c->tx_param;
+ tx->dma_dev = &dma_dev->dev;
+
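
The spi-pxa2xx-pci hunks above hang the pci_get_slot() reference on devm_add_action_or_reset(), so the DMA device reference is dropped automatically at unbind, and immediately if registering the action itself fails. A user-space model of registering the cleanup right next to the acquisition; the fixed-size table is purely illustrative:

#include <stdio.h>

typedef void (*action_fn)(void *);

struct devres {
	action_fn fn;
	void *data;
};

static struct devres actions[8];
static int nactions;

/* Models devm_add_action_or_reset(): if registration fails, the
 * cleanup runs immediately so the caller never leaks the resource. */
static int add_action_or_reset(action_fn fn, void *data)
{
	if (nactions == 8) {
		fn(data);
		return -1;
	}
	actions[nactions].fn = fn;
	actions[nactions].data = data;
	nactions++;
	return 0;
}

static void detach(void)	/* driver unbind: run actions in reverse */
{
	for (int i = nactions - 1; i >= 0; i--)
		actions[i].fn(actions[i].data);
	nactions = 0;
}

static void put_ref(void *data)	/* models lpss_dma_put_device() */
{
	printf("dropping reference to %s\n", (const char *)data);
}

int main(void)
{
	if (add_action_or_reset(put_ref, "dma_dev"))
		return 1;	/* reference was already dropped for us */
	detach();
	return 0;
}
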
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index e9de1d958bbd2..8f345247a8c32 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -1352,6 +1352,10 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ tspi->phys = r->start;
+
+ spi_irq = platform_get_irq(pdev, 0);
++ if (spi_irq < 0) {
++ ret = spi_irq;
++ goto exit_free_master;
++ }
+ tspi->irq = spi_irq;
+
+ tspi->clk = devm_clk_get(&pdev->dev, "spi");
+diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
+index 2a03739a0c609..80c3787deea9d 100644
+--- a/drivers/spi/spi-tegra20-slink.c
++++ b/drivers/spi/spi-tegra20-slink.c
+@@ -1006,14 +1006,8 @@ static int tegra_slink_probe(struct platform_device *pdev)
+ struct resource *r;
+ int ret, spi_irq;
+ const struct tegra_slink_chip_data *cdata = NULL;
+- const struct of_device_id *match;
+
+- match = of_match_device(tegra_slink_of_match, &pdev->dev);
+- if (!match) {
+- dev_err(&pdev->dev, "Error: No device match found\n");
+- return -ENODEV;
+- }
+- cdata = match->data;
++ cdata = of_device_get_match_data(&pdev->dev);
+
+ master = spi_alloc_master(&pdev->dev, sizeof(*tspi));
+ if (!master) {
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index ce1bdb4767ea3..cb00ac2fc7d8e 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -1240,6 +1240,8 @@ static int tegra_qspi_probe(struct platform_device *pdev)
+
+ tqspi->phys = r->start;
+ qspi_irq = platform_get_irq(pdev, 0);
++ if (qspi_irq < 0)
++ return qspi_irq;
+ tqspi->irq = qspi_irq;
+
+ tqspi->clk = devm_clk_get(&pdev->dev, "qspi");
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index 328b6559bb19a..2b5afae8ff7fc 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1172,7 +1172,10 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ goto clk_dis_all;
+ }
+
+- dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++ ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
++ if (ret)
++ goto clk_dis_all;
++
+ ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
+ ctlr->num_chipselect = GQSPI_DEFAULT_NUM_CS;
+ ctlr->mem_ops = &zynqmp_qspi_mem_ops;
+diff --git a/drivers/staging/iio/adc/ad7280a.c b/drivers/staging/iio/adc/ad7280a.c
+index fef0055b89909..20183b2ea1279 100644
+--- a/drivers/staging/iio/adc/ad7280a.c
++++ b/drivers/staging/iio/adc/ad7280a.c
+@@ -107,9 +107,9 @@
+ static unsigned int ad7280a_devaddr(unsigned int addr)
+ {
+ return ((addr & 0x1) << 4) |
+- ((addr & 0x2) << 3) |
++ ((addr & 0x2) << 2) |
+ (addr & 0x4) |
+- ((addr & 0x8) >> 3) |
++ ((addr & 0x8) >> 2) |
+ ((addr & 0x10) >> 4);
+ }
+
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_acc.c b/drivers/staging/media/atomisp/pci/atomisp_acc.c
+index 9a1751895ab03..28cb271663c47 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_acc.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_acc.c
+@@ -439,6 +439,18 @@ int atomisp_acc_s_mapped_arg(struct atomisp_sub_device *asd,
+ return 0;
+ }
+
++static void atomisp_acc_unload_some_extensions(struct atomisp_sub_device *asd,
++ int i,
++ struct atomisp_acc_fw *acc_fw)
++{
++ while (--i >= 0) {
++ if (acc_fw->flags & acc_flag_to_pipe[i].flag) {
++ atomisp_css_unload_acc_extension(asd, acc_fw->fw,
++ acc_flag_to_pipe[i].pipe_id);
++ }
++ }
++}
++
+ /*
+ * Appends the loaded acceleration binary extensions to the
+ * current ISP mode. Must be called just before sh_css_start().
+@@ -479,16 +491,20 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ acc_fw->fw,
+ acc_flag_to_pipe[i].pipe_id,
+ acc_fw->type);
+- if (ret)
++ if (ret) {
++ atomisp_acc_unload_some_extensions(asd, i, acc_fw);
+ goto error;
++ }
+
+ ext_loaded = true;
+ }
+ }
+
+ ret = atomisp_css_set_acc_parameters(acc_fw);
+- if (ret < 0)
++ if (ret < 0) {
++ atomisp_acc_unload_some_extensions(asd, i, acc_fw);
+ goto error;
++ }
+ }
+
+ if (!ext_loaded)
+@@ -497,6 +513,7 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ ret = atomisp_css_update_stream(asd);
+ if (ret) {
+ dev_err(isp->dev, "%s: update stream failed.\n", __func__);
++ atomisp_acc_unload_extensions(asd);
+ goto error;
+ }
+
+@@ -504,13 +521,6 @@ int atomisp_acc_load_extensions(struct atomisp_sub_device *asd)
+ return 0;
+
+ error:
+- while (--i >= 0) {
+- if (acc_fw->flags & acc_flag_to_pipe[i].flag) {
+- atomisp_css_unload_acc_extension(asd, acc_fw->fw,
+- acc_flag_to_pipe[i].pipe_id);
+- }
+- }
+-
+ list_for_each_entry_continue_reverse(acc_fw, &asd->acc.fw, list) {
+ if (acc_fw->type != ATOMISP_ACC_FW_LOAD_TYPE_OUTPUT &&
+ acc_fw->type != ATOMISP_ACC_FW_LOAD_TYPE_VIEWFINDER)
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index 1cc581074ba76..9a194fbb305b7 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -748,6 +748,21 @@ static int axp_regulator_set(struct device *dev, struct gmin_subdev *gs,
+ return 0;
+ }
+
++/*
++ * Some boards contain a hw-bug where turning eldo2 back on after having turned
++ * it off causes the CPLM3218 ambient-light-sensor on the image-sensor's I2C bus
++ * to crash, hanging the bus. Do not turn eldo2 off on these systems.
++ */
++static const struct dmi_system_id axp_leave_eldo2_on_ids[] = {
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TrekStor"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "SurfTab duo W1 10.1 (VT4)"),
++ },
++ },
++ { }
++};
++
+ static int axp_v1p8_on(struct device *dev, struct gmin_subdev *gs)
+ {
+ int ret;
+@@ -782,6 +797,9 @@ static int axp_v1p8_off(struct device *dev, struct gmin_subdev *gs)
+ if (ret)
+ return ret;
+
++ if (dmi_check_system(axp_leave_eldo2_on_ids))
++ return 0;
++
+ ret = axp_regulator_set(dev, gs, gs->eldo2_sel_reg, gs->eldo2_1p8v,
+ ELDO_CTRL_REG, gs->eldo2_ctrl_shift, false);
+ return ret;
+diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+index 6a5ee46070898..c1cda16f2dc01 100644
+--- a/drivers/staging/media/atomisp/pci/hmm/hmm.c
++++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c
+@@ -39,7 +39,7 @@
+ struct hmm_bo_device bo_device;
+ struct hmm_pool dynamic_pool;
+ struct hmm_pool reserved_pool;
+-static ia_css_ptr dummy_ptr;
++static ia_css_ptr dummy_ptr = mmgr_EXCEPTION;
+ static bool hmm_initialized;
+ struct _hmm_mem_stat hmm_mem_stat;
+
+@@ -209,7 +209,7 @@ int hmm_init(void)
+
+ void hmm_cleanup(void)
+ {
+- if (!dummy_ptr)
++ if (dummy_ptr == mmgr_EXCEPTION)
+ return;
+ sysfs_remove_group(&atomisp_dev->kobj, atomisp_attribute_group);
+
+@@ -288,7 +288,8 @@ void hmm_free(ia_css_ptr virt)
+
+ dev_dbg(atomisp_dev, "%s: free 0x%08x\n", __func__, virt);
+
+- WARN_ON(!virt);
++ if (WARN_ON(virt == mmgr_EXCEPTION))
++ return;
+
+ bo = hmm_bo_device_search_start(&bo_device, (unsigned int)virt);
+
+diff --git a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+index 1450013d3685d..c5d32048d90ff 100644
+--- a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
++++ b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+@@ -23,7 +23,7 @@ static void hantro_h1_set_src_img_ctrl(struct hantro_dev *vpu,
+
+ reg = H1_REG_IN_IMG_CTRL_ROW_LEN(pix_fmt->width)
+ | H1_REG_IN_IMG_CTRL_OVRFLR_D4(0)
+- | H1_REG_IN_IMG_CTRL_OVRFLB_D4(0)
++ | H1_REG_IN_IMG_CTRL_OVRFLB(0)
+ | H1_REG_IN_IMG_CTRL_FMT(ctx->vpu_src_fmt->enc_fmt);
+ vepu_write_relaxed(vpu, reg, H1_REG_IN_IMG_CTRL);
+ }
+diff --git a/drivers/staging/media/hantro/hantro_h1_regs.h b/drivers/staging/media/hantro/hantro_h1_regs.h
+index d6e9825bb5c7b..30e7e7b920b55 100644
+--- a/drivers/staging/media/hantro/hantro_h1_regs.h
++++ b/drivers/staging/media/hantro/hantro_h1_regs.h
+@@ -47,7 +47,7 @@
+ #define H1_REG_IN_IMG_CTRL 0x03c
+ #define H1_REG_IN_IMG_CTRL_ROW_LEN(x) ((x) << 12)
+ #define H1_REG_IN_IMG_CTRL_OVRFLR_D4(x) ((x) << 10)
+-#define H1_REG_IN_IMG_CTRL_OVRFLB_D4(x) ((x) << 6)
++#define H1_REG_IN_IMG_CTRL_OVRFLB(x) ((x) << 6)
+ #define H1_REG_IN_IMG_CTRL_FMT(x) ((x) << 2)
+ #define H1_REG_ENC_CTRL0 0x040
+ #define H1_REG_ENC_CTRL0_INIT_QP(x) ((x) << 26)
+diff --git a/drivers/staging/media/hantro/sunxi_vpu_hw.c b/drivers/staging/media/hantro/sunxi_vpu_hw.c
+index 90633406c4eb8..c0edd5856a0c8 100644
+--- a/drivers/staging/media/hantro/sunxi_vpu_hw.c
++++ b/drivers/staging/media/hantro/sunxi_vpu_hw.c
+@@ -29,10 +29,10 @@ static const struct hantro_fmt sunxi_vpu_dec_fmts[] = {
+ .frmsize = {
+ .min_width = 48,
+ .max_width = 3840,
+- .step_width = MB_DIM,
++ .step_width = 32,
+ .min_height = 48,
+ .max_height = 2160,
+- .step_height = MB_DIM,
++ .step_height = 32,
+ },
+ },
+ };
+diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
+index 2b73fa55c938b..9ea723bb5f209 100644
+--- a/drivers/staging/media/imx/imx7-mipi-csis.c
++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
+@@ -32,7 +32,6 @@
+ #include <media/v4l2-subdev.h>
+
+ #define CSIS_DRIVER_NAME "imx7-mipi-csis"
+-#define CSIS_SUBDEV_NAME CSIS_DRIVER_NAME
+
+ #define CSIS_PAD_SINK 0
+ #define CSIS_PAD_SOURCE 1
+@@ -311,7 +310,6 @@ struct csi_state {
+ struct reset_control *mrst;
+ struct regulator *mipi_phy_regulator;
+ const struct mipi_csis_info *info;
+- u8 index;
+
+ struct v4l2_subdev sd;
+ struct media_pad pads[CSIS_PADS_NUM];
+@@ -1303,8 +1301,8 @@ static int mipi_csis_subdev_init(struct csi_state *state)
+
+ v4l2_subdev_init(sd, &mipi_csis_subdev_ops);
+ sd->owner = THIS_MODULE;
+- snprintf(sd->name, sizeof(sd->name), "%s.%d",
+- CSIS_SUBDEV_NAME, state->index);
++ snprintf(sd->name, sizeof(sd->name), "csis-%s",
++ dev_name(state->dev));
+
+ sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+ sd->ctrl_handler = NULL;
+diff --git a/drivers/staging/media/imx/imx8mq-mipi-csi2.c b/drivers/staging/media/imx/imx8mq-mipi-csi2.c
+index 7adbdd14daa93..3b9fa75efac6b 100644
+--- a/drivers/staging/media/imx/imx8mq-mipi-csi2.c
++++ b/drivers/staging/media/imx/imx8mq-mipi-csi2.c
+@@ -398,9 +398,6 @@ static int imx8mq_mipi_csi_s_stream(struct v4l2_subdev *sd, int enable)
+ struct csi_state *state = mipi_sd_to_csi2_state(sd);
+ int ret = 0;
+
+- imx8mq_mipi_csi_write(state, CSI2RX_IRQ_MASK,
+- CSI2RX_IRQ_MASK_ULPS_STATUS_CHANGE);
+-
+ if (enable) {
+ ret = pm_runtime_resume_and_get(state->dev);
+ if (ret < 0)
+@@ -696,7 +693,7 @@ err_parse:
+ * Suspend/resume
+ */
+
+-static int imx8mq_mipi_csi_pm_suspend(struct device *dev, bool runtime)
++static int imx8mq_mipi_csi_pm_suspend(struct device *dev)
+ {
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct csi_state *state = mipi_sd_to_csi2_state(sd);
+@@ -708,36 +705,21 @@ static int imx8mq_mipi_csi_pm_suspend(struct device *dev, bool runtime)
+ imx8mq_mipi_csi_stop_stream(state);
+ imx8mq_mipi_csi_clk_disable(state);
+ state->state &= ~ST_POWERED;
+- if (!runtime)
+- state->state |= ST_SUSPENDED;
+ }
+
+ mutex_unlock(&state->lock);
+
+- ret = icc_set_bw(state->icc_path, 0, 0);
+- if (ret)
+- dev_err(dev, "icc_set_bw failed with %d\n", ret);
+-
+ return ret ? -EAGAIN : 0;
+ }
+
+-static int imx8mq_mipi_csi_pm_resume(struct device *dev, bool runtime)
++static int imx8mq_mipi_csi_pm_resume(struct device *dev)
+ {
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct csi_state *state = mipi_sd_to_csi2_state(sd);
+ int ret = 0;
+
+- ret = icc_set_bw(state->icc_path, 0, state->icc_path_bw);
+- if (ret) {
+- dev_err(dev, "icc_set_bw failed with %d\n", ret);
+- return ret;
+- }
+-
+ mutex_lock(&state->lock);
+
+- if (!runtime && !(state->state & ST_SUSPENDED))
+- goto unlock;
+-
+ if (!(state->state & ST_POWERED)) {
+ state->state |= ST_POWERED;
+ ret = imx8mq_mipi_csi_clk_enable(state);
+@@ -758,22 +740,60 @@ unlock:
+
+ static int __maybe_unused imx8mq_mipi_csi_suspend(struct device *dev)
+ {
+- return imx8mq_mipi_csi_pm_suspend(dev, false);
++ struct v4l2_subdev *sd = dev_get_drvdata(dev);
++ struct csi_state *state = mipi_sd_to_csi2_state(sd);
++ int ret;
++
++ ret = imx8mq_mipi_csi_pm_suspend(dev);
++ if (ret)
++ return ret;
++
++ state->state |= ST_SUSPENDED;
++
++ return ret;
+ }
+
+ static int __maybe_unused imx8mq_mipi_csi_resume(struct device *dev)
+ {
+- return imx8mq_mipi_csi_pm_resume(dev, false);
++ struct v4l2_subdev *sd = dev_get_drvdata(dev);
++ struct csi_state *state = mipi_sd_to_csi2_state(sd);
++
++ if (!(state->state & ST_SUSPENDED))
++ return 0;
++
++ return imx8mq_mipi_csi_pm_resume(dev);
+ }
+
+ static int __maybe_unused imx8mq_mipi_csi_runtime_suspend(struct device *dev)
+ {
+- return imx8mq_mipi_csi_pm_suspend(dev, true);
++ struct v4l2_subdev *sd = dev_get_drvdata(dev);
++ struct csi_state *state = mipi_sd_to_csi2_state(sd);
++ int ret;
++
++ ret = imx8mq_mipi_csi_pm_suspend(dev);
++ if (ret)
++ return ret;
++
++ ret = icc_set_bw(state->icc_path, 0, 0);
++ if (ret)
++ dev_err(dev, "icc_set_bw failed with %d\n", ret);
++
++ return ret;
+ }
+
+ static int __maybe_unused imx8mq_mipi_csi_runtime_resume(struct device *dev)
+ {
+- return imx8mq_mipi_csi_pm_resume(dev, true);
++ struct v4l2_subdev *sd = dev_get_drvdata(dev);
++ struct csi_state *state = mipi_sd_to_csi2_state(sd);
++ int ret;
++
++ ret = icc_set_bw(state->icc_path, 0, state->icc_path_bw);
++ if (ret) {
++ dev_err(dev, "icc_set_bw failed with %d\n", ret);
++ return ret;
++ }
++
++ return imx8mq_mipi_csi_pm_resume(dev);
+ }
+
+ static const struct dev_pm_ops imx8mq_mipi_csi_pm_ops = {
+@@ -921,7 +941,7 @@ static int imx8mq_mipi_csi_probe(struct platform_device *pdev)
+ /* Enable runtime PM. */
+ pm_runtime_enable(dev);
+ if (!pm_runtime_enabled(dev)) {
+- ret = imx8mq_mipi_csi_pm_resume(dev, true);
++ ret = imx8mq_mipi_csi_runtime_resume(dev);
+ if (ret < 0)
+ goto icc;
+ }
+@@ -934,7 +954,7 @@ static int imx8mq_mipi_csi_probe(struct platform_device *pdev)
+
+ cleanup:
+ pm_runtime_disable(&pdev->dev);
+- imx8mq_mipi_csi_pm_suspend(&pdev->dev, true);
++ imx8mq_mipi_csi_runtime_suspend(&pdev->dev);
+
+ media_entity_cleanup(&state->sd.entity);
+ v4l2_async_nf_unregister(&state->notifier);
+@@ -958,7 +978,7 @@ static int imx8mq_mipi_csi_remove(struct platform_device *pdev)
+ v4l2_async_unregister_subdev(&state->sd);
+
+ pm_runtime_disable(&pdev->dev);
+- imx8mq_mipi_csi_pm_suspend(&pdev->dev, true);
++ imx8mq_mipi_csi_runtime_suspend(&pdev->dev);
+ media_entity_cleanup(&state->sd.entity);
+ mutex_destroy(&state->lock);
+ pm_runtime_set_suspended(&pdev->dev);
+diff --git a/drivers/staging/media/meson/vdec/esparser.c b/drivers/staging/media/meson/vdec/esparser.c
+index db7022707ff8d..86ccc8937afca 100644
+--- a/drivers/staging/media/meson/vdec/esparser.c
++++ b/drivers/staging/media/meson/vdec/esparser.c
+@@ -328,7 +328,12 @@ esparser_queue(struct amvdec_session *sess, struct vb2_v4l2_buffer *vbuf)
+
+ offset = esparser_get_offset(sess);
+
+- amvdec_add_ts(sess, vb->timestamp, vbuf->timecode, offset, vbuf->flags);
++ ret = amvdec_add_ts(sess, vb->timestamp, vbuf->timecode, offset, vbuf->flags);
++ if (ret) {
++ v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR);
++ return ret;
++ }
++
+ dev_dbg(core->dev, "esparser: ts = %llu pld_size = %u offset = %08X flags = %08X\n",
+ vb->timestamp, payload_size, offset, vbuf->flags);
+
+diff --git a/drivers/staging/media/meson/vdec/vdec_helpers.c b/drivers/staging/media/meson/vdec/vdec_helpers.c
+index 203d7afa085d7..7d2a756532503 100644
+--- a/drivers/staging/media/meson/vdec/vdec_helpers.c
++++ b/drivers/staging/media/meson/vdec/vdec_helpers.c
+@@ -227,13 +227,16 @@ int amvdec_set_canvases(struct amvdec_session *sess,
+ }
+ EXPORT_SYMBOL_GPL(amvdec_set_canvases);
+
+-void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+- struct v4l2_timecode tc, u32 offset, u32 vbuf_flags)
++int amvdec_add_ts(struct amvdec_session *sess, u64 ts,
++ struct v4l2_timecode tc, u32 offset, u32 vbuf_flags)
+ {
+ struct amvdec_timestamp *new_ts;
+ unsigned long flags;
+
+ new_ts = kzalloc(sizeof(*new_ts), GFP_KERNEL);
++ if (!new_ts)
++ return -ENOMEM;
++
+ new_ts->ts = ts;
+ new_ts->tc = tc;
+ new_ts->offset = offset;
+@@ -242,6 +245,7 @@ void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+ spin_lock_irqsave(&sess->ts_spinlock, flags);
+ list_add_tail(&new_ts->list, &sess->timestamps);
+ spin_unlock_irqrestore(&sess->ts_spinlock, flags);
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(amvdec_add_ts);
+
+diff --git a/drivers/staging/media/meson/vdec/vdec_helpers.h b/drivers/staging/media/meson/vdec/vdec_helpers.h
+index 88137d15aa3ad..4bf3e61d081b3 100644
+--- a/drivers/staging/media/meson/vdec/vdec_helpers.h
++++ b/drivers/staging/media/meson/vdec/vdec_helpers.h
+@@ -56,8 +56,8 @@ void amvdec_dst_buf_done_offset(struct amvdec_session *sess,
+ * @offset: offset in the VIFIFO where the associated packet was written
+ * @flags: the vb2_v4l2_buffer flags
+ */
+-void amvdec_add_ts(struct amvdec_session *sess, u64 ts,
+- struct v4l2_timecode tc, u32 offset, u32 flags);
++int amvdec_add_ts(struct amvdec_session *sess, u64 ts,
++ struct v4l2_timecode tc, u32 offset, u32 flags);
+ void amvdec_remove_ts(struct amvdec_session *sess, u64 ts);
+
+ /**
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+index b4173a8926d69..d8fb93035470e 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h264.c
+@@ -38,7 +38,7 @@ struct cedrus_h264_sram_ref_pic {
+
+ #define CEDRUS_H264_FRAME_NUM 18
+
+-#define CEDRUS_NEIGHBOR_INFO_BUF_SIZE (16 * SZ_1K)
++#define CEDRUS_NEIGHBOR_INFO_BUF_SIZE (32 * SZ_1K)
+ #define CEDRUS_MIN_PIC_INFO_BUF_SIZE (130 * SZ_1K)
+
+ static void cedrus_h264_write_sram(struct cedrus_dev *dev,
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+index 8829a7bab07ec..ffade5cbd2e40 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_h265.c
+@@ -23,7 +23,7 @@
+ * Subsequent BSP implementations seem to double the neighbor info buffer size
+ * for the H6 SoC, which may be related to 10 bit H265 support.
+ */
+-#define CEDRUS_H265_NEIGHBOR_INFO_BUF_SIZE (397 * SZ_1K)
++#define CEDRUS_H265_NEIGHBOR_INFO_BUF_SIZE (794 * SZ_1K)
+ #define CEDRUS_H265_ENTRY_POINTS_BUF_SIZE (4 * SZ_1K)
+ #define CEDRUS_H265_MV_COL_BUF_UNIT_CTB_SIZE 160
+
+diff --git a/drivers/staging/media/zoran/zoran.h b/drivers/staging/media/zoran/zoran.h
+index b1ad2a2b914cd..50d5a7acfab6c 100644
+--- a/drivers/staging/media/zoran/zoran.h
++++ b/drivers/staging/media/zoran/zoran.h
+@@ -313,6 +313,6 @@ static inline struct zoran *to_zoran(struct v4l2_device *v4l2_dev)
+
+ #endif
+
+-int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq);
++int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq, int dir);
+ void zoran_queue_exit(struct zoran *zr);
+ int zr_set_buf(struct zoran *zr);
+diff --git a/drivers/staging/media/zoran/zoran_card.c b/drivers/staging/media/zoran/zoran_card.c
+index f259585b06897..11d415c0c05d2 100644
+--- a/drivers/staging/media/zoran/zoran_card.c
++++ b/drivers/staging/media/zoran/zoran_card.c
+@@ -803,6 +803,52 @@ int zoran_check_jpg_settings(struct zoran *zr,
+ return 0;
+ }
+
++static int zoran_init_video_device(struct zoran *zr, struct video_device *video_dev, int dir)
++{
++ int err;
++
++ /* Now add the template and register the device unit. */
++ *video_dev = zoran_template;
++ video_dev->v4l2_dev = &zr->v4l2_dev;
++ video_dev->lock = &zr->lock;
++ video_dev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_READWRITE | dir;
++
++ strscpy(video_dev->name, ZR_DEVNAME(zr), sizeof(video_dev->name));
++ /*
++ * It's not a mem2mem device, but you can both capture and output from one and the same
++ * device. This should really be split up into two device nodes, but that's a job for
++ * another day.
++ */
++ video_dev->vfl_dir = VFL_DIR_M2M;
++ zoran_queue_init(zr, &zr->vq, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++
++ err = video_register_device(video_dev, VFL_TYPE_VIDEO, video_nr[zr->id]);
++ if (err < 0)
++ return err;
++ video_set_drvdata(video_dev, zr);
++ return 0;
++}
++
++static void zoran_exit_video_devices(struct zoran *zr)
++{
++ video_unregister_device(zr->video_dev);
++ kfree(zr->video_dev);
++}
++
++static int zoran_init_video_devices(struct zoran *zr)
++{
++ int err;
++
++ zr->video_dev = video_device_alloc();
++ if (!zr->video_dev)
++ return -ENOMEM;
++
++ err = zoran_init_video_device(zr, zr->video_dev, V4L2_CAP_VIDEO_CAPTURE);
++ if (err)
++ kfree(zr->video_dev);
++ return err;
++}
++
+ void zoran_open_init_params(struct zoran *zr)
+ {
+ int i;
+@@ -874,17 +920,11 @@ static int zr36057_init(struct zoran *zr)
+ zoran_open_init_params(zr);
+
+ /* allocate memory *before* doing anything to the hardware in case allocation fails */
+- zr->video_dev = video_device_alloc();
+- if (!zr->video_dev) {
+- err = -ENOMEM;
+- goto exit;
+- }
+ zr->stat_com = dma_alloc_coherent(&zr->pci_dev->dev,
+ BUZ_NUM_STAT_COM * sizeof(u32),
+ &zr->p_sc, GFP_KERNEL);
+ if (!zr->stat_com) {
+- err = -ENOMEM;
+- goto exit_video;
++ return -ENOMEM;
+ }
+ for (j = 0; j < BUZ_NUM_STAT_COM; j++)
+ zr->stat_com[j] = cpu_to_le32(1); /* mark as unavailable to zr36057 */
+@@ -897,26 +937,9 @@ static int zr36057_init(struct zoran *zr)
+ goto exit_statcom;
+ }
+
+- /* Now add the template and register the device unit. */
+- *zr->video_dev = zoran_template;
+- zr->video_dev->v4l2_dev = &zr->v4l2_dev;
+- zr->video_dev->lock = &zr->lock;
+- zr->video_dev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_VIDEO_CAPTURE;
+-
+- strscpy(zr->video_dev->name, ZR_DEVNAME(zr), sizeof(zr->video_dev->name));
+- /*
+- * It's not a mem2mem device, but you can both capture and output from one and the same
+- * device. This should really be split up into two device nodes, but that's a job for
+- * another day.
+- */
+- zr->video_dev->vfl_dir = VFL_DIR_M2M;
+-
+- zoran_queue_init(zr, &zr->vq);
+-
+- err = video_register_device(zr->video_dev, VFL_TYPE_VIDEO, video_nr[zr->id]);
+- if (err < 0)
++ err = zoran_init_video_devices(zr);
++ if (err)
+ goto exit_statcomb;
+- video_set_drvdata(zr->video_dev, zr);
+
+ zoran_init_hardware(zr);
+ if (!pass_through) {
+@@ -931,9 +954,6 @@ exit_statcomb:
+ dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32) * 2, zr->stat_comb, zr->p_scb);
+ exit_statcom:
+ dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32), zr->stat_com, zr->p_sc);
+-exit_video:
+- kfree(zr->video_dev);
+-exit:
+ return err;
+ }
+
+@@ -965,7 +985,7 @@ static void zoran_remove(struct pci_dev *pdev)
+ dma_free_coherent(&zr->pci_dev->dev, BUZ_NUM_STAT_COM * sizeof(u32) * 2, zr->stat_comb, zr->p_scb);
+ pci_release_regions(pdev);
+ pci_disable_device(zr->pci_dev);
+- video_unregister_device(zr->video_dev);
++ zoran_exit_video_devices(zr);
+ exit_free:
+ v4l2_ctrl_handler_free(&zr->hdl);
+ v4l2_device_unregister(&zr->v4l2_dev);
+@@ -1069,8 +1089,10 @@ static int zoran_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+ if (err)
+- return -ENODEV;
+- vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
++ return err;
++ err = vb2_dma_contig_set_max_seg_size(&pdev->dev, U32_MAX);
++ if (err)
++ return err;
+
+ nr = zoran_num++;
+ if (nr >= BUZ_MAX) {
+diff --git a/drivers/staging/media/zoran/zoran_device.c b/drivers/staging/media/zoran/zoran_device.c
+index 5b12a730a2290..fb1f0465ca87f 100644
+--- a/drivers/staging/media/zoran/zoran_device.c
++++ b/drivers/staging/media/zoran/zoran_device.c
+@@ -814,7 +814,7 @@ static void zoran_reap_stat_com(struct zoran *zr)
+ if (zr->jpg_settings.tmp_dcm == 1)
+ i = (zr->jpg_dma_tail - zr->jpg_err_shift) & BUZ_MASK_STAT_COM;
+ else
+- i = ((zr->jpg_dma_tail - zr->jpg_err_shift) & 1) * 2 + 1;
++ i = ((zr->jpg_dma_tail - zr->jpg_err_shift) & 1) * 2;
+
+ stat_com = le32_to_cpu(zr->stat_com[i]);
+ if ((stat_com & 1) == 0) {
+@@ -826,6 +826,11 @@ static void zoran_reap_stat_com(struct zoran *zr)
+ size = (stat_com & GENMASK(22, 1)) >> 1;
+
+ buf = zr->inuse[i];
++ if (!buf) {
++ spin_unlock_irqrestore(&zr->queued_bufs_lock, flags);
++ pci_err(zr->pci_dev, "No buffer at slot %d\n", i);
++ return;
++ }
+ buf->vbuf.vb2_buf.timestamp = ktime_get_ns();
+
+ if (zr->codec_mode == BUZ_MODE_MOTION_COMPRESS) {
+diff --git a/drivers/staging/media/zoran/zoran_driver.c b/drivers/staging/media/zoran/zoran_driver.c
+index 46382e43f1bf7..84665637ebb79 100644
+--- a/drivers/staging/media/zoran/zoran_driver.c
++++ b/drivers/staging/media/zoran/zoran_driver.c
+@@ -255,8 +255,6 @@ static int zoran_querycap(struct file *file, void *__fh, struct v4l2_capability
+ strscpy(cap->card, ZR_DEVNAME(zr), sizeof(cap->card));
+ strscpy(cap->driver, "zoran", sizeof(cap->driver));
+ snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s", pci_name(zr->pci_dev));
+- cap->device_caps = zr->video_dev->device_caps;
+- cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+ return 0;
+ }
+
+@@ -582,6 +580,9 @@ static int zoran_s_std(struct file *file, void *__fh, v4l2_std_id std)
+ struct zoran *zr = video_drvdata(file);
+ int res = 0;
+
++ if (zr->norm == std)
++ return 0;
++
+ if (zr->running != ZORAN_MAP_MODE_NONE)
+ return -EBUSY;
+
+@@ -739,6 +740,7 @@ static int zoran_g_parm(struct file *file, void *priv, struct v4l2_streamparm *p
+ if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
++ parm->parm.capture.readbuffers = 9;
+ return 0;
+ }
+
+@@ -869,6 +871,10 @@ int zr_set_buf(struct zoran *zr)
+ vbuf = &buf->vbuf;
+
+ buf->vbuf.field = V4L2_FIELD_INTERLACED;
++ if (BUZ_MAX_HEIGHT < (zr->v4l_settings.height * 2))
++ buf->vbuf.field = V4L2_FIELD_INTERLACED;
++ else
++ buf->vbuf.field = V4L2_FIELD_TOP;
+ vb2_set_plane_payload(&buf->vbuf.vb2_buf, 0, zr->buffer_size);
+ vb2_buffer_done(&buf->vbuf.vb2_buf, VB2_BUF_STATE_DONE);
+ zr->inuse[0] = NULL;
+@@ -928,6 +934,7 @@ static int zr_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
+ zr->stat_com[j] = cpu_to_le32(1);
+ zr->inuse[j] = NULL;
+ }
++ zr->vbseq = 0;
+
+ if (zr->map_mode != ZORAN_MAP_MODE_RAW) {
+ pci_info(zr->pci_dev, "START JPG\n");
+@@ -1008,7 +1015,7 @@ static const struct vb2_ops zr_video_qops = {
+ .wait_finish = vb2_ops_wait_finish,
+ };
+
+-int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq)
++int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq, int dir)
+ {
+ int err;
+
+@@ -1016,8 +1023,9 @@ int zoran_queue_init(struct zoran *zr, struct vb2_queue *vq)
+ INIT_LIST_HEAD(&zr->queued_bufs);
+
+ vq->dev = &zr->pci_dev->dev;
+- vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+- vq->io_modes = VB2_USERPTR | VB2_DMABUF | VB2_MMAP | VB2_READ | VB2_WRITE;
++ vq->type = dir;
++
++ vq->io_modes = VB2_DMABUF | VB2_MMAP | VB2_READ | VB2_WRITE;
+ vq->drv_priv = zr;
+ vq->buf_struct_size = sizeof(struct zr_buffer);
+ vq->ops = &zr_video_qops;
+diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
+index e38a083811e54..5ae94b1ad5998 100644
+--- a/drivers/staging/mt7621-dts/gbpc1.dts
++++ b/drivers/staging/mt7621-dts/gbpc1.dts
+@@ -12,7 +12,8 @@
+
+ memory@0 {
+ device_type = "memory";
+- reg = <0x0 0x1c000000>, <0x20000000 0x4000000>;
++ reg = <0x00000000 0x1c000000>,
++ <0x20000000 0x04000000>;
+ };
+
+ chosen {
+@@ -38,24 +39,16 @@
+ gpio-leds {
+ compatible = "gpio-leds";
+
+- system {
+- label = "gb-pc1:green:system";
++ power {
++ label = "green:power";
+ gpios = <&gpio 6 GPIO_ACTIVE_LOW>;
++ linux,default-trigger = "default-on";
+ };
+
+- status {
+- label = "gb-pc1:green:status";
++ system {
++ label = "green:system";
+ gpios = <&gpio 8 GPIO_ACTIVE_LOW>;
+- };
+-
+- lan1 {
+- label = "gb-pc1:green:lan1";
+- gpios = <&gpio 24 GPIO_ACTIVE_LOW>;
+- };
+-
+- lan2 {
+- label = "gb-pc1:green:lan2";
+- gpios = <&gpio 25 GPIO_ACTIVE_LOW>;
++ linux,default-trigger = "disk-activity";
+ };
+ };
+ };
+@@ -95,9 +88,8 @@
+
+ partition@50000 {
+ label = "firmware";
+- reg = <0x50000 0x1FB0000>;
++ reg = <0x50000 0x1fb0000>;
+ };
+-
+ };
+ };
+
+@@ -106,9 +98,12 @@
+ };
+
+ &pinctrl {
+- state_default: pinctrl0 {
+- default_gpio: gpio {
+- groups = "wdt", "rgmii2", "uart3";
++ pinctrl-names = "default";
++ pinctrl-0 = <&state_default>;
++
++ state_default: state-default {
++ gpio-pinmux {
++ groups = "rgmii2", "uart3", "wdt";
+ function = "gpio";
+ };
+ };
+@@ -117,12 +112,13 @@
+ &switch0 {
+ ports {
+ port@0 {
++ status = "okay";
+ label = "ethblack";
+- status = "ok";
+ };
++
+ port@4 {
++ status = "okay";
+ label = "ethblue";
+- status = "ok";
+ };
+ };
+ };
+diff --git a/drivers/staging/mt7621-dts/gbpc2.dts b/drivers/staging/mt7621-dts/gbpc2.dts
+index 6fe603c7711d7..a7fce8de61472 100644
+--- a/drivers/staging/mt7621-dts/gbpc2.dts
++++ b/drivers/staging/mt7621-dts/gbpc2.dts
+@@ -1,22 +1,122 @@
+ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ /dts-v1/;
+
+-#include "gbpc1.dts"
++#include "mt7621.dtsi"
++
++#include <dt-bindings/gpio/gpio.h>
++#include <dt-bindings/input/input.h>
+
+ / {
+ compatible = "gnubee,gb-pc2", "mediatek,mt7621-soc";
+ model = "GB-PC2";
++
++ memory@0 {
++ device_type = "memory";
++ reg = <0x00000000 0x1c000000>,
++ <0x20000000 0x04000000>;
++ };
++
++ chosen {
++ bootargs = "console=ttyS0,57600";
++ };
++
++ palmbus: palmbus@1e000000 {
++ i2c@900 {
++ status = "okay";
++ };
++ };
++
++ gpio-keys {
++ compatible = "gpio-keys";
++
++ reset {
++ label = "reset";
++ gpios = <&gpio 18 GPIO_ACTIVE_HIGH>;
++ linux,code = <KEY_RESTART>;
++ };
++ };
++};
++
++&sdhci {
++ status = "okay";
++};
++
++&spi0 {
++ status = "okay";
++
++ m25p80@0 {
++ #address-cells = <1>;
++ #size-cells = <1>;
++ compatible = "jedec,spi-nor";
++ reg = <0>;
++ spi-max-frequency = <50000000>;
++ broken-flash-reset;
++
++ partition@0 {
++ label = "u-boot";
++ reg = <0x0 0x30000>;
++ read-only;
++ };
++
++ partition@30000 {
++ label = "u-boot-env";
++ reg = <0x30000 0x10000>;
++ read-only;
++ };
++
++ factory: partition@40000 {
++ label = "factory";
++ reg = <0x40000 0x10000>;
++ read-only;
++ };
++
++ partition@50000 {
++ label = "firmware";
++ reg = <0x50000 0x1fb0000>;
++ };
++ };
+ };
+
+-&default_gpio {
+- groups = "wdt", "uart3";
+- function = "gpio";
++&pcie {
++ status = "okay";
+ };
+
+-&gmac1 {
+- status = "ok";
++&pinctrl {
++ pinctrl-names = "default";
++ pinctrl-0 = <&state_default>;
++
++ state_default: state-default {
++ gpio-pinmux {
++ groups = "wdt";
++ function = "gpio";
++ };
++ };
+ };
+
+-&phy_external {
+- status = "ok";
++&ethernet {
++ gmac1: mac@1 {
++ status = "okay";
++ phy-handle = <ðphy7>;
++ };
++
++ mdio-bus {
++ ethphy7: ethernet-phy@7 {
++ reg = <7>;
++ phy-mode = "rgmii-rxid";
++ };
++ };
++};
++
++&switch0 {
++ ports {
++ port@0 {
++ status = "okay";
++ label = "ethblack";
++ };
++
++ port@4 {
++ status = "okay";
++ label = "ethblue";
++ };
++ };
+ };
+diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
+index 644a65d1a6a16..786cdb5fc4da1 100644
+--- a/drivers/staging/mt7621-dts/mt7621.dtsi
++++ b/drivers/staging/mt7621-dts/mt7621.dtsi
+@@ -44,9 +44,9 @@
+ regulator-max-microvolt = <3300000>;
+ enable-active-high;
+ regulator-always-on;
+- };
++ };
+
+- mmc_fixed_1v8_io: fixedregulator@1 {
++ mmc_fixed_1v8_io: fixedregulator@1 {
+ compatible = "regulator-fixed";
+ regulator-name = "mmc_io";
+ regulator-min-microvolt = <1800000>;
+@@ -325,37 +325,32 @@
+
+ mediatek,ethsys = <&sysc>;
+
++ pinctrl-names = "default";
++ pinctrl-0 = <&mdio_pins>, <&rgmii1_pins>, <&rgmii2_pins>;
+
+ gmac0: mac@0 {
+ compatible = "mediatek,eth-mac";
+ reg = <0>;
+ phy-mode = "rgmii";
++
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ pause;
+ };
+ };
++
+ gmac1: mac@1 {
+ compatible = "mediatek,eth-mac";
+ reg = <1>;
+ status = "off";
+ phy-mode = "rgmii-rxid";
+- phy-handle = <&phy_external>;
+ };
++
+ mdio-bus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+- phy_external: ethernet-phy@5 {
+- status = "off";
+- reg = <5>;
+- phy-mode = "rgmii-rxid";
+-
+- pinctrl-names = "default";
+- pinctrl-0 = <&rgmii2_pins>;
+- };
+-
+ switch0: switch0@0 {
+ compatible = "mediatek,mt7621";
+ #address-cells = <1>;
+@@ -373,36 +368,43 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
++
+ port@0 {
+ status = "off";
+ reg = <0>;
+ label = "lan0";
+ };
++
+ port@1 {
+ status = "off";
+ reg = <1>;
+ label = "lan1";
+ };
++
+ port@2 {
+ status = "off";
+ reg = <2>;
+ label = "lan2";
+ };
++
+ port@3 {
+ status = "off";
+ reg = <3>;
+ label = "lan3";
+ };
++
+ port@4 {
+ status = "off";
+ reg = <4>;
+ label = "lan4";
+ };
++
+ port@6 {
+ reg = <6>;
+ label = "cpu";
+ ethernet = <&gmac0>;
+ phy-mode = "trgmii";
++
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
+index 9873bb2a9ee4f..113a3efd12e95 100644
+--- a/drivers/staging/qlge/qlge_main.c
++++ b/drivers/staging/qlge/qlge_main.c
+@@ -4605,14 +4605,12 @@ static int qlge_probe(struct pci_dev *pdev,
+ err = register_netdev(ndev);
+ if (err) {
+ dev_err(&pdev->dev, "net device registration failed.\n");
+- qlge_release_all(pdev);
+- pci_disable_device(pdev);
+- goto netdev_free;
++ goto cleanup_pdev;
+ }
+
+ err = qlge_health_create_reporters(qdev);
+ if (err)
+- goto netdev_free;
++ goto unregister_netdev;
+
+ /* Start up the timer to trigger EEH if
+ * the bus goes dead
+@@ -4626,6 +4624,11 @@ static int qlge_probe(struct pci_dev *pdev,
+ devlink_register(devlink);
+ return 0;
+
++unregister_netdev:
++ unregister_netdev(ndev);
++cleanup_pdev:
++ qlge_release_all(pdev);
++ pci_disable_device(pdev);
+ netdev_free:
+ free_netdev(ndev);
+ devlink_free:
+diff --git a/drivers/staging/r8188eu/core/rtw_recv.c b/drivers/staging/r8188eu/core/rtw_recv.c
+index 51a13262a226f..d120d61454a35 100644
+--- a/drivers/staging/r8188eu/core/rtw_recv.c
++++ b/drivers/staging/r8188eu/core/rtw_recv.c
+@@ -1853,8 +1853,7 @@ static int recv_func(struct adapter *padapter, struct recv_frame *rframe)
+ struct recv_frame *pending_frame;
+ int cnt = 0;
+
+- pending_frame = rtw_alloc_recvframe(&padapter->recvpriv.uc_swdec_pending_queue);
+- while (pending_frame) {
++ while ((pending_frame = rtw_alloc_recvframe(&padapter->recvpriv.uc_swdec_pending_queue))) {
+ cnt++;
+ recv_func_posthandle(padapter, pending_frame);
+ }
+diff --git a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+index b818872e0d194..31a9b7500a7b6 100644
+--- a/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
++++ b/drivers/staging/r8188eu/hal/rtl8188e_hal_init.c
+@@ -538,10 +538,10 @@ static int load_firmware(struct rt_firmware *pFirmware, struct device *device)
+ }
+ memcpy(pFirmware->szFwBuffer, fw->data, fw->size);
+ pFirmware->ulFwLength = fw->size;
+- release_firmware(fw);
+ dev_dbg(device, "!bUsedWoWLANFw, FmrmwareLen:%d+\n", pFirmware->ulFwLength);
+
+ Exit:
++ release_firmware(fw);
+ return rtStatus;
+ }
+
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 4f478812cb514..a0b599100106b 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -53,7 +53,7 @@ struct int3400_thermal_priv {
+ struct art *arts;
+ int trt_count;
+ struct trt *trts;
+- u8 uuid_bitmap;
++ u32 uuid_bitmap;
+ int rel_misc_dev_res;
+ int current_uuid_index;
+ char *data_vault;
+@@ -468,6 +468,11 @@ static void int3400_setup_gddv(struct int3400_thermal_priv *priv)
+ priv->data_vault = kmemdup(obj->package.elements[0].buffer.pointer,
+ obj->package.elements[0].buffer.length,
+ GFP_KERNEL);
++ if (!priv->data_vault) {
++ kfree(buffer.pointer);
++ return;
++ }
++
+ bin_attr_data_vault.private = priv->data_vault;
+ bin_attr_data_vault.size = obj->package.elements[0].buffer.length;
+ kfree(buffer.pointer);
+diff --git a/drivers/tty/hvc/hvc_iucv.c b/drivers/tty/hvc/hvc_iucv.c
+index 82a76cac94deb..32366caca6623 100644
+--- a/drivers/tty/hvc/hvc_iucv.c
++++ b/drivers/tty/hvc/hvc_iucv.c
+@@ -1417,7 +1417,9 @@ out_error:
+ */
+ static int __init hvc_iucv_config(char *val)
+ {
+- return kstrtoul(val, 10, &hvc_iucv_devices);
++ if (kstrtoul(val, 10, &hvc_iucv_devices))
++ pr_warn("hvc_iucv= invalid parameter value '%s'\n", val);
++ return 1;
+ }
+
+
+diff --git a/drivers/tty/mxser.c b/drivers/tty/mxser.c
+index c858aff721c41..fbb796f837532 100644
+--- a/drivers/tty/mxser.c
++++ b/drivers/tty/mxser.c
+@@ -744,6 +744,7 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ struct mxser_port *info = container_of(port, struct mxser_port, port);
+ unsigned long page;
+ unsigned long flags;
++ int ret;
+
+ page = __get_free_page(GFP_KERNEL);
+ if (!page)
+@@ -753,9 +754,9 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+
+ if (!info->type) {
+ set_bit(TTY_IO_ERROR, &tty->flags);
+- free_page(page);
+ spin_unlock_irqrestore(&info->slock, flags);
+- return 0;
++ ret = 0;
++ goto err_free_xmit;
+ }
+ info->port.xmit_buf = (unsigned char *) page;
+
+@@ -775,8 +776,10 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ if (capable(CAP_SYS_ADMIN)) {
+ set_bit(TTY_IO_ERROR, &tty->flags);
+ return 0;
+- } else
+- return -ENODEV;
++ }
++
++ ret = -ENODEV;
++ goto err_free_xmit;
+ }
+
+ /*
+@@ -821,6 +824,10 @@ static int mxser_activate(struct tty_port *port, struct tty_struct *tty)
+ spin_unlock_irqrestore(&info->slock, flags);
+
+ return 0;
++err_free_xmit:
++ free_page(page);
++ info->port.xmit_buf = NULL;
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/tty/serial/8250/8250_aspeed_vuart.c b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+index 2350fb3bb5e4c..c2cecc6f47db4 100644
+--- a/drivers/tty/serial/8250/8250_aspeed_vuart.c
++++ b/drivers/tty/serial/8250/8250_aspeed_vuart.c
+@@ -487,7 +487,7 @@ static int aspeed_vuart_probe(struct platform_device *pdev)
+ port.port.irq = irq_of_parse_and_map(np, 0);
+ port.port.handle_irq = aspeed_vuart_handle_irq;
+ port.port.iotype = UPIO_MEM;
+- port.port.type = PORT_16550A;
++ port.port.type = PORT_ASPEED_VUART;
+ port.port.uartclk = clk;
+ port.port.flags = UPF_SHARE_IRQ | UPF_BOOT_AUTOCONF | UPF_IOREMAP
+ | UPF_FIXED_PORT | UPF_FIXED_TYPE | UPF_NO_THRE_TEST;
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index 890fa7ddaa7f3..b3c3f7e5851ab 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -64,10 +64,19 @@ int serial8250_tx_dma(struct uart_8250_port *p)
+ struct uart_8250_dma *dma = p->dma;
+ struct circ_buf *xmit = &p->port.state->xmit;
+ struct dma_async_tx_descriptor *desc;
++ struct uart_port *up = &p->port;
+ int ret;
+
+- if (dma->tx_running)
++ if (dma->tx_running) {
++ if (up->x_char) {
++ dmaengine_pause(dma->txchan);
++ uart_xchar_out(up, UART_TX);
++ dmaengine_resume(dma->txchan);
++ }
+ return 0;
++ } else if (up->x_char) {
++ uart_xchar_out(up, UART_TX);
++ }
+
+ if (uart_tx_stopped(&p->port) || uart_circ_empty(xmit)) {
+ /* We have been called from __dma_tx_complete() */
+diff --git a/drivers/tty/serial/8250/8250_lpss.c b/drivers/tty/serial/8250/8250_lpss.c
+index d3bafec7619da..0f5af061e0b45 100644
+--- a/drivers/tty/serial/8250/8250_lpss.c
++++ b/drivers/tty/serial/8250/8250_lpss.c
+@@ -117,8 +117,7 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ {
+ struct dw_dma_slave *param = &lpss->dma_param;
+ struct pci_dev *pdev = to_pci_dev(port->dev);
+- unsigned int dma_devfn = PCI_DEVFN(PCI_SLOT(pdev->devfn), 0);
+- struct pci_dev *dma_dev = pci_get_slot(pdev->bus, dma_devfn);
++ struct pci_dev *dma_dev;
+
+ switch (pdev->device) {
+ case PCI_DEVICE_ID_INTEL_BYT_UART1:
+@@ -137,6 +136,8 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ return -EINVAL;
+ }
+
++ dma_dev = pci_get_slot(pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), 0));
++
+ param->dma_dev = &dma_dev->dev;
+ param->m_master = 0;
+ param->p_master = 1;
+@@ -152,6 +153,14 @@ static int byt_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ return 0;
+ }
+
++static void byt_serial_exit(struct lpss8250 *lpss)
++{
++ struct dw_dma_slave *param = &lpss->dma_param;
++
++ /* Paired with pci_get_slot() in the byt_serial_setup() above */
++ put_device(param->dma_dev);
++}
++
+ static int ehl_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ {
+ struct uart_8250_dma *dma = &lpss->data.dma;
+@@ -170,6 +179,13 @@ static int ehl_serial_setup(struct lpss8250 *lpss, struct uart_port *port)
+ return 0;
+ }
+
++static void ehl_serial_exit(struct lpss8250 *lpss)
++{
++ struct uart_8250_port *up = serial8250_get_port(lpss->data.line);
++
++ up->dma = NULL;
++}
++
+ #ifdef CONFIG_SERIAL_8250_DMA
+ static const struct dw_dma_platform_data qrk_serial_dma_pdata = {
+ .nr_channels = 2,
+@@ -344,8 +360,7 @@ static int lpss8250_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ return 0;
+
+ err_exit:
+- if (lpss->board->exit)
+- lpss->board->exit(lpss);
++ lpss->board->exit(lpss);
+ pci_free_irq_vectors(pdev);
+ return ret;
+ }
+@@ -356,8 +371,7 @@ static void lpss8250_remove(struct pci_dev *pdev)
+
+ serial8250_unregister_port(lpss->data.line);
+
+- if (lpss->board->exit)
+- lpss->board->exit(lpss);
++ lpss->board->exit(lpss);
+ pci_free_irq_vectors(pdev);
+ }
+
+@@ -365,12 +379,14 @@ static const struct lpss8250_board byt_board = {
+ .freq = 100000000,
+ .base_baud = 2764800,
+ .setup = byt_serial_setup,
++ .exit = byt_serial_exit,
+ };
+
+ static const struct lpss8250_board ehl_board = {
+ .freq = 200000000,
+ .base_baud = 12500000,
+ .setup = ehl_serial_setup,
++ .exit = ehl_serial_exit,
+ };
+
+ static const struct lpss8250_board qrk_board = {
+diff --git a/drivers/tty/serial/8250/8250_mid.c b/drivers/tty/serial/8250/8250_mid.c
+index efa0515139f8e..e6c1791609ddf 100644
+--- a/drivers/tty/serial/8250/8250_mid.c
++++ b/drivers/tty/serial/8250/8250_mid.c
+@@ -73,6 +73,11 @@ static int pnw_setup(struct mid8250 *mid, struct uart_port *p)
+ return 0;
+ }
+
++static void pnw_exit(struct mid8250 *mid)
++{
++ pci_dev_put(mid->dma_dev);
++}
++
+ static int tng_handle_irq(struct uart_port *p)
+ {
+ struct mid8250 *mid = p->private_data;
+@@ -124,6 +129,11 @@ static int tng_setup(struct mid8250 *mid, struct uart_port *p)
+ return 0;
+ }
+
++static void tng_exit(struct mid8250 *mid)
++{
++ pci_dev_put(mid->dma_dev);
++}
++
+ static int dnv_handle_irq(struct uart_port *p)
+ {
+ struct mid8250 *mid = p->private_data;
+@@ -330,9 +340,9 @@ static int mid8250_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+
+ pci_set_drvdata(pdev, mid);
+ return 0;
++
+ err:
+- if (mid->board->exit)
+- mid->board->exit(mid);
++ mid->board->exit(mid);
+ return ret;
+ }
+
+@@ -342,8 +352,7 @@ static void mid8250_remove(struct pci_dev *pdev)
+
+ serial8250_unregister_port(mid->line);
+
+- if (mid->board->exit)
+- mid->board->exit(mid);
++ mid->board->exit(mid);
+ }
+
+ static const struct mid8250_board pnw_board = {
+@@ -351,6 +360,7 @@ static const struct mid8250_board pnw_board = {
+ .freq = 50000000,
+ .base_baud = 115200,
+ .setup = pnw_setup,
++ .exit = pnw_exit,
+ };
+
+ static const struct mid8250_board tng_board = {
+@@ -358,6 +368,7 @@ static const struct mid8250_board tng_board = {
+ .freq = 38400000,
+ .base_baud = 1843200,
+ .setup = tng_setup,
++ .exit = tng_exit,
+ };
+
+ static const struct mid8250_board dnv_board = {
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 3b12bfc1ed67b..9f116e75956e2 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -307,6 +307,14 @@ static const struct serial8250_config uart_config[] = {
+ .rxtrig_bytes = {1, 32, 64, 112},
+ .flags = UART_CAP_FIFO | UART_CAP_SLEEP,
+ },
++ [PORT_ASPEED_VUART] = {
++ .name = "ASPEED VUART",
++ .fifo_size = 16,
++ .tx_loadsz = 16,
++ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_00,
++ .rxtrig_bytes = {1, 4, 8, 14},
++ .flags = UART_CAP_FIFO,
++ },
+ };
+
+ /* Uart divisor latch read */
+@@ -1615,6 +1623,18 @@ static inline void start_tx_rs485(struct uart_port *port)
+ struct uart_8250_port *up = up_to_u8250p(port);
+ struct uart_8250_em485 *em485 = up->em485;
+
++ /*
++ * While serial8250_em485_handle_stop_tx() is a noop if
++ * em485->active_timer != &em485->stop_tx_timer, it might happen that
++ * the timer is still armed and triggers only after the current bunch of
++ * chars is send and em485->active_timer == &em485->stop_tx_timer again.
++ * So cancel the timer. There is still a theoretical race condition if
++ * the timer is already running and only comes around to check for
++ * em485->active_timer when &em485->stop_tx_timer is armed again.
++ */
++ if (em485->active_timer == &em485->stop_tx_timer)
++ hrtimer_try_to_cancel(&em485->stop_tx_timer);
++
+ em485->active_timer = NULL;
+
+ if (em485->tx_stopped) {
+@@ -1799,9 +1819,7 @@ void serial8250_tx_chars(struct uart_8250_port *up)
+ int count;
+
+ if (port->x_char) {
+- serial_out(up, UART_TX, port->x_char);
+- port->icount.tx++;
+- port->x_char = 0;
++ uart_xchar_out(port, UART_TX);
+ return;
+ }
+ if (uart_tx_stopped(port)) {
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index 49d0c7f2b29b8..79b7db8580e05 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -403,16 +403,16 @@ static int kgdboc_option_setup(char *opt)
+ {
+ if (!opt) {
+ pr_err("config string not provided\n");
+- return -EINVAL;
++ return 1;
+ }
+
+ if (strlen(opt) >= MAX_CONFIG_LEN) {
+ pr_err("config string too long\n");
+- return -ENOSPC;
++ return 1;
+ }
+ strcpy(config, opt);
+
+- return 0;
++ return 1;
+ }
+
+ __setup("kgdboc=", kgdboc_option_setup);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 0db90be4c3bc3..f67540ae2a883 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -644,6 +644,20 @@ static void uart_flush_buffer(struct tty_struct *tty)
+ tty_port_tty_wakeup(&state->port);
+ }
+
++/*
++ * This function performs low-level write of high-priority XON/XOFF
++ * character and accounting for it.
++ *
++ * Requires uart_port to implement .serial_out().
++ */
++void uart_xchar_out(struct uart_port *uport, int offset)
++{
++ serial_port_out(uport, offset, uport->x_char);
++ uport->icount.tx++;
++ uport->x_char = 0;
++}
++EXPORT_SYMBOL_GPL(uart_xchar_out);
++
+ /*
+ * This function is used to send a high-priority XON/XOFF character to
+ * the device
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index df3522dab31b5..1e7dc130c39a6 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -762,7 +762,7 @@ static int xhci_exit_test_mode(struct xhci_hcd *xhci)
+ }
+ pm_runtime_allow(xhci_to_hcd(xhci)->self.controller);
+ xhci->test_mode = 0;
+- return xhci_reset(xhci);
++ return xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ }
+
+ void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port,
+@@ -1088,6 +1088,9 @@ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status,
+ if (link_state == XDEV_U2)
+ *status |= USB_PORT_STAT_L1;
+ if (link_state == XDEV_U0) {
++ if (bus_state->resume_done[portnum])
++ usb_hcd_end_port_resume(&port->rhub->hcd->self,
++ portnum);
+ bus_state->resume_done[portnum] = 0;
+ clear_bit(portnum, &bus_state->resuming_ports);
+ if (bus_state->suspended_ports & (1 << portnum)) {
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 0e312066c5c63..b398d3fdabf61 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2583,7 +2583,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+
+ fail:
+ xhci_halt(xhci);
+- xhci_reset(xhci);
++ xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ xhci_mem_cleanup(xhci);
+ return -ENOMEM;
+ }
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 2d378543bc3aa..7d1ad8d654cbb 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -65,7 +65,7 @@ static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
+ * handshake done). There are two failure modes: "usec" have passed (major
+ * hardware flakeout), or the register reads as all-ones (hardware removed).
+ */
+-int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
++int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us)
+ {
+ u32 result;
+ int ret;
+@@ -73,7 +73,7 @@ int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
+ ret = readl_poll_timeout_atomic(ptr, result,
+ (result & mask) == done ||
+ result == U32_MAX,
+- 1, usec);
++ 1, timeout_us);
+ if (result == U32_MAX) /* card removed */
+ return -ENODEV;
+
+@@ -162,7 +162,7 @@ int xhci_start(struct xhci_hcd *xhci)
+ * Transactions will be terminated immediately, and operational registers
+ * will be set to their defaults.
+ */
+-int xhci_reset(struct xhci_hcd *xhci)
++int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
+ {
+ u32 command;
+ u32 state;
+@@ -195,8 +195,7 @@ int xhci_reset(struct xhci_hcd *xhci)
+ if (xhci->quirks & XHCI_INTEL_HOST)
+ udelay(1000);
+
+- ret = xhci_handshake(&xhci->op_regs->command,
+- CMD_RESET, 0, 10 * 1000 * 1000);
++ ret = xhci_handshake(&xhci->op_regs->command, CMD_RESET, 0, timeout_us);
+ if (ret)
+ return ret;
+
+@@ -209,8 +208,7 @@ int xhci_reset(struct xhci_hcd *xhci)
+ * xHCI cannot write to any doorbells or operational registers other
+ * than status until the "Controller Not Ready" flag is cleared.
+ */
+- ret = xhci_handshake(&xhci->op_regs->status,
+- STS_CNR, 0, 10 * 1000 * 1000);
++ ret = xhci_handshake(&xhci->op_regs->status, STS_CNR, 0, timeout_us);
+
+ xhci->usb2_rhub.bus_state.port_c_suspend = 0;
+ xhci->usb2_rhub.bus_state.suspended_ports = 0;
+@@ -731,7 +729,7 @@ static void xhci_stop(struct usb_hcd *hcd)
+ xhci->xhc_state |= XHCI_STATE_HALTED;
+ xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+ xhci_halt(xhci);
+- xhci_reset(xhci);
++ xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ spin_unlock_irq(&xhci->lock);
+
+ xhci_cleanup_msix(xhci);
+@@ -784,7 +782,7 @@ void xhci_shutdown(struct usb_hcd *hcd)
+ xhci_halt(xhci);
+ /* Workaround for spurious wakeups at shutdown with HSW */
+ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+- xhci_reset(xhci);
++ xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+ spin_unlock_irq(&xhci->lock);
+
+ xhci_cleanup_msix(xhci);
+@@ -1170,7 +1168,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ xhci_dbg(xhci, "Stop HCD\n");
+ xhci_halt(xhci);
+ xhci_zero_64b_regs(xhci);
+- retval = xhci_reset(xhci);
++ retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
+ spin_unlock_irq(&xhci->lock);
+ if (retval)
+ return retval;
+@@ -5316,7 +5314,7 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks)
+
+ xhci_dbg(xhci, "Resetting HCD\n");
+ /* Reset the internal HC memory state and registers. */
+- retval = xhci_reset(xhci);
++ retval = xhci_reset(xhci, XHCI_RESET_LONG_USEC);
+ if (retval)
+ return retval;
+ xhci_dbg(xhci, "Reset complete\n");
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 5a75fe5631238..bc0789229527f 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -229,6 +229,9 @@ struct xhci_op_regs {
+ #define CMD_ETE (1 << 14)
+ /* bits 15:31 are reserved (and should be preserved on writes). */
+
++#define XHCI_RESET_LONG_USEC (10 * 1000 * 1000)
++#define XHCI_RESET_SHORT_USEC (250 * 1000)
++
+ /* IMAN - Interrupt Management Register */
+ #define IMAN_IE (1 << 1)
+ #define IMAN_IP (1 << 0)
+@@ -2083,11 +2086,11 @@ void xhci_free_container_ctx(struct xhci_hcd *xhci,
+
+ /* xHCI host controller glue */
+ typedef void (*xhci_get_quirks_t)(struct device *, struct xhci_hcd *);
+-int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec);
++int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, u64 timeout_us);
+ void xhci_quiesce(struct xhci_hcd *xhci);
+ int xhci_halt(struct xhci_hcd *xhci);
+ int xhci_start(struct xhci_hcd *xhci);
+-int xhci_reset(struct xhci_hcd *xhci);
++int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us);
+ int xhci_run(struct usb_hcd *hcd);
+ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks);
+ void xhci_shutdown(struct usb_hcd *hcd);
+@@ -2467,6 +2470,8 @@ static inline const char *xhci_decode_ctrl_ctx(char *str,
+ unsigned int bit;
+ int ret = 0;
+
++ str[0] = '\0';
++
+ if (drop) {
+ ret = sprintf(str, "Drop:");
+ for_each_set_bit(bit, &drop, 32)
+@@ -2624,8 +2629,11 @@ static inline const char *xhci_decode_usbsts(char *str, u32 usbsts)
+ {
+ int ret = 0;
+
++ ret = sprintf(str, " 0x%08x", usbsts);
++
+ if (usbsts == ~(u32)0)
+- return " 0xffffffff";
++ return str;
++
+ if (usbsts & STS_HALT)
+ ret += sprintf(str + ret, " HCHalted");
+ if (usbsts & STS_FATAL)
+diff --git a/drivers/usb/serial/Kconfig b/drivers/usb/serial/Kconfig
+index de5c012570603..ef8d1c73c7545 100644
+--- a/drivers/usb/serial/Kconfig
++++ b/drivers/usb/serial/Kconfig
+@@ -66,6 +66,7 @@ config USB_SERIAL_SIMPLE
+ - Libtransistor USB console
+ - a number of Motorola phones
+ - Motorola Tetra devices
++ - Nokia mobile phones
+ - Novatel Wireless GPS receivers
+ - Siemens USB/MPI adapter.
+ - ViVOtech ViVOpay USB device.
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index a70fd86f735ca..88b284d61681a 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -116,6 +116,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(ADLINK_VENDOR_ID, ADLINK_ND6530GC_PRODUCT_ID) },
+ { USB_DEVICE(SMART_VENDOR_ID, SMART_PRODUCT_ID) },
+ { USB_DEVICE(AT_VENDOR_ID, AT_VTKIT3_PRODUCT_ID) },
++ { USB_DEVICE(IBM_VENDOR_ID, IBM_PRODUCT_ID) },
+ { } /* Terminating entry */
+ };
+
+@@ -435,6 +436,7 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ case 0x105:
+ case 0x305:
+ case 0x405:
++ case 0x605:
+ /*
+ * Assume it's an HXN-type if the device doesn't
+ * support the old read request value.
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index 6097ee8fccb25..c5406452b774e 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -35,6 +35,9 @@
+ #define ATEN_PRODUCT_UC232B 0x2022
+ #define ATEN_PRODUCT_ID2 0x2118
+
++#define IBM_VENDOR_ID 0x04b3
++#define IBM_PRODUCT_ID 0x4016
++
+ #define IODATA_VENDOR_ID 0x04bb
+ #define IODATA_PRODUCT_ID 0x0a03
+ #define IODATA_PRODUCT_ID_RSAQ5 0x0a0e
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index bd23a7cb1be2b..4c6747889a194 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -91,6 +91,11 @@ DEVICE(moto_modem, MOTO_IDS);
+ { USB_DEVICE(0x0cad, 0x9016) } /* TPG2200 */
+ DEVICE(motorola_tetra, MOTOROLA_TETRA_IDS);
+
++/* Nokia mobile phone driver */
++#define NOKIA_IDS() \
++ { USB_DEVICE(0x0421, 0x069a) } /* Nokia 130 (RM-1035) */
++DEVICE(nokia, NOKIA_IDS);
++
+ /* Novatel Wireless GPS driver */
+ #define NOVATEL_IDS() \
+ { USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */
+@@ -123,6 +128,7 @@ static struct usb_serial_driver * const serial_drivers[] = {
+ &vivopay_device,
+ &moto_modem_device,
+ &motorola_tetra_device,
++ &nokia_device,
+ &novatel_gps_device,
+ &hp4x_device,
+ &suunto_device,
+@@ -140,6 +146,7 @@ static const struct usb_device_id id_table[] = {
+ VIVOPAY_IDS(),
+ MOTO_IDS(),
+ MOTOROLA_TETRA_IDS(),
++ NOKIA_IDS(),
+ NOVATEL_IDS(),
+ HP4X_IDS(),
+ SUUNTO_IDS(),
+diff --git a/drivers/usb/storage/ene_ub6250.c b/drivers/usb/storage/ene_ub6250.c
+index 5f7d678502be4..6012603f3630e 100644
+--- a/drivers/usb/storage/ene_ub6250.c
++++ b/drivers/usb/storage/ene_ub6250.c
+@@ -237,36 +237,33 @@ static struct us_unusual_dev ene_ub6250_unusual_dev_list[] = {
+ #define memstick_logaddr(logadr1, logadr0) ((((u16)(logadr1)) << 8) | (logadr0))
+
+
+-struct SD_STATUS {
+- u8 Insert:1;
+- u8 Ready:1;
+- u8 MediaChange:1;
+- u8 IsMMC:1;
+- u8 HiCapacity:1;
+- u8 HiSpeed:1;
+- u8 WtP:1;
+- u8 Reserved:1;
+-};
+-
+-struct MS_STATUS {
+- u8 Insert:1;
+- u8 Ready:1;
+- u8 MediaChange:1;
+- u8 IsMSPro:1;
+- u8 IsMSPHG:1;
+- u8 Reserved1:1;
+- u8 WtP:1;
+- u8 Reserved2:1;
+-};
+-
+-struct SM_STATUS {
+- u8 Insert:1;
+- u8 Ready:1;
+- u8 MediaChange:1;
+- u8 Reserved:3;
+- u8 WtP:1;
+- u8 IsMS:1;
+-};
++/* SD_STATUS bits */
++#define SD_Insert BIT(0)
++#define SD_Ready BIT(1)
++#define SD_MediaChange BIT(2)
++#define SD_IsMMC BIT(3)
++#define SD_HiCapacity BIT(4)
++#define SD_HiSpeed BIT(5)
++#define SD_WtP BIT(6)
++ /* Bit 7 reserved */
++
++/* MS_STATUS bits */
++#define MS_Insert BIT(0)
++#define MS_Ready BIT(1)
++#define MS_MediaChange BIT(2)
++#define MS_IsMSPro BIT(3)
++#define MS_IsMSPHG BIT(4)
++ /* Bit 5 reserved */
++#define MS_WtP BIT(6)
++ /* Bit 7 reserved */
++
++/* SM_STATUS bits */
++#define SM_Insert BIT(0)
++#define SM_Ready BIT(1)
++#define SM_MediaChange BIT(2)
++ /* Bits 3-5 reserved */
++#define SM_WtP BIT(6)
++#define SM_IsMS BIT(7)
+
+ struct ms_bootblock_cis {
+ u8 bCistplDEVICE[6]; /* 0 */
+@@ -437,9 +434,9 @@ struct ene_ub6250_info {
+ u8 *bbuf;
+
+ /* for 6250 code */
+- struct SD_STATUS SD_Status;
+- struct MS_STATUS MS_Status;
+- struct SM_STATUS SM_Status;
++ u8 SD_Status;
++ u8 MS_Status;
++ u8 SM_Status;
+
+ /* ----- SD Control Data ---------------- */
+ /*SD_REGISTER SD_Regs; */
+@@ -602,7 +599,7 @@ static int sd_scsi_test_unit_ready(struct us_data *us, struct scsi_cmnd *srb)
+ {
+ struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+
+- if (info->SD_Status.Insert && info->SD_Status.Ready)
++ if ((info->SD_Status & SD_Insert) && (info->SD_Status & SD_Ready))
+ return USB_STOR_TRANSPORT_GOOD;
+ else {
+ ene_sd_init(us);
+@@ -622,7 +619,7 @@ static int sd_scsi_mode_sense(struct us_data *us, struct scsi_cmnd *srb)
+ 0x0b, 0x00, 0x80, 0x08, 0x00, 0x00,
+ 0x71, 0xc0, 0x00, 0x00, 0x02, 0x00 };
+
+- if (info->SD_Status.WtP)
++ if (info->SD_Status & SD_WtP)
+ usb_stor_set_xfer_buf(mediaWP, 12, srb);
+ else
+ usb_stor_set_xfer_buf(mediaNoWP, 12, srb);
+@@ -641,9 +638,9 @@ static int sd_scsi_read_capacity(struct us_data *us, struct scsi_cmnd *srb)
+ struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+
+ usb_stor_dbg(us, "sd_scsi_read_capacity\n");
+- if (info->SD_Status.HiCapacity) {
++ if (info->SD_Status & SD_HiCapacity) {
+ bl_len = 0x200;
+- if (info->SD_Status.IsMMC)
++ if (info->SD_Status & SD_IsMMC)
+ bl_num = info->HC_C_SIZE-1;
+ else
+ bl_num = (info->HC_C_SIZE + 1) * 1024 - 1;
+@@ -693,7 +690,7 @@ static int sd_scsi_read(struct us_data *us, struct scsi_cmnd *srb)
+ return USB_STOR_TRANSPORT_ERROR;
+ }
+
+- if (info->SD_Status.HiCapacity)
++ if (info->SD_Status & SD_HiCapacity)
+ bnByte = bn;
+
+ /* set up the command wrapper */
+@@ -733,7 +730,7 @@ static int sd_scsi_write(struct us_data *us, struct scsi_cmnd *srb)
+ return USB_STOR_TRANSPORT_ERROR;
+ }
+
+- if (info->SD_Status.HiCapacity)
++ if (info->SD_Status & SD_HiCapacity)
+ bnByte = bn;
+
+ /* set up the command wrapper */
+@@ -1456,7 +1453,7 @@ static int ms_scsi_test_unit_ready(struct us_data *us, struct scsi_cmnd *srb)
+ struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+
+ /* pr_info("MS_SCSI_Test_Unit_Ready\n"); */
+- if (info->MS_Status.Insert && info->MS_Status.Ready) {
++ if ((info->MS_Status & MS_Insert) && (info->MS_Status & MS_Ready)) {
+ return USB_STOR_TRANSPORT_GOOD;
+ } else {
+ ene_ms_init(us);
+@@ -1476,7 +1473,7 @@ static int ms_scsi_mode_sense(struct us_data *us, struct scsi_cmnd *srb)
+ 0x0b, 0x00, 0x80, 0x08, 0x00, 0x00,
+ 0x71, 0xc0, 0x00, 0x00, 0x02, 0x00 };
+
+- if (info->MS_Status.WtP)
++ if (info->MS_Status & MS_WtP)
+ usb_stor_set_xfer_buf(mediaWP, 12, srb);
+ else
+ usb_stor_set_xfer_buf(mediaNoWP, 12, srb);
+@@ -1495,7 +1492,7 @@ static int ms_scsi_read_capacity(struct us_data *us, struct scsi_cmnd *srb)
+
+ usb_stor_dbg(us, "ms_scsi_read_capacity\n");
+ bl_len = 0x200;
+- if (info->MS_Status.IsMSPro)
++ if (info->MS_Status & MS_IsMSPro)
+ bl_num = info->MSP_TotalBlock - 1;
+ else
+ bl_num = info->MS_Lib.NumberOfLogBlock * info->MS_Lib.blockSize * 2 - 1;
+@@ -1650,7 +1647,7 @@ static int ms_scsi_read(struct us_data *us, struct scsi_cmnd *srb)
+ if (bn > info->bl_num)
+ return USB_STOR_TRANSPORT_ERROR;
+
+- if (info->MS_Status.IsMSPro) {
++ if (info->MS_Status & MS_IsMSPro) {
+ result = ene_load_bincode(us, MSP_RW_PATTERN);
+ if (result != USB_STOR_XFER_GOOD) {
+ usb_stor_dbg(us, "Load MPS RW pattern Fail !!\n");
+@@ -1751,7 +1748,7 @@ static int ms_scsi_write(struct us_data *us, struct scsi_cmnd *srb)
+ if (bn > info->bl_num)
+ return USB_STOR_TRANSPORT_ERROR;
+
+- if (info->MS_Status.IsMSPro) {
++ if (info->MS_Status & MS_IsMSPro) {
+ result = ene_load_bincode(us, MSP_RW_PATTERN);
+ if (result != USB_STOR_XFER_GOOD) {
+ pr_info("Load MSP RW pattern Fail !!\n");
+@@ -1859,12 +1856,12 @@ static int ene_get_card_status(struct us_data *us, u8 *buf)
+
+ tmpreg = (u16) reg4b;
+ reg4b = *(u32 *)(&buf[0x14]);
+- if (info->SD_Status.HiCapacity && !info->SD_Status.IsMMC)
++ if ((info->SD_Status & SD_HiCapacity) && !(info->SD_Status & SD_IsMMC))
+ info->HC_C_SIZE = (reg4b >> 8) & 0x3fffff;
+
+ info->SD_C_SIZE = ((tmpreg & 0x03) << 10) | (u16)(reg4b >> 22);
+ info->SD_C_SIZE_MULT = (u8)(reg4b >> 7) & 0x07;
+- if (info->SD_Status.HiCapacity && info->SD_Status.IsMMC)
++ if ((info->SD_Status & SD_HiCapacity) && (info->SD_Status & SD_IsMMC))
+ info->HC_C_SIZE = *(u32 *)(&buf[0x100]);
+
+ if (info->SD_READ_BL_LEN > SD_BLOCK_LEN) {
+@@ -2076,6 +2073,7 @@ static int ene_ms_init(struct us_data *us)
+ u16 MSP_BlockSize, MSP_UserAreaBlocks;
+ struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
+ u8 *bbuf = info->bbuf;
++ unsigned int s;
+
+ printk(KERN_INFO "transport --- ENE_MSInit\n");
+
+@@ -2100,15 +2098,16 @@ static int ene_ms_init(struct us_data *us)
+ return USB_STOR_TRANSPORT_ERROR;
+ }
+ /* the same part to test ENE */
+- info->MS_Status = *(struct MS_STATUS *) bbuf;
+-
+- if (info->MS_Status.Insert && info->MS_Status.Ready) {
+- printk(KERN_INFO "Insert = %x\n", info->MS_Status.Insert);
+- printk(KERN_INFO "Ready = %x\n", info->MS_Status.Ready);
+- printk(KERN_INFO "IsMSPro = %x\n", info->MS_Status.IsMSPro);
+- printk(KERN_INFO "IsMSPHG = %x\n", info->MS_Status.IsMSPHG);
+- printk(KERN_INFO "WtP= %x\n", info->MS_Status.WtP);
+- if (info->MS_Status.IsMSPro) {
++ info->MS_Status = bbuf[0];
++
++ s = info->MS_Status;
++ if ((s & MS_Insert) && (s & MS_Ready)) {
++ printk(KERN_INFO "Insert = %x\n", !!(s & MS_Insert));
++ printk(KERN_INFO "Ready = %x\n", !!(s & MS_Ready));
++ printk(KERN_INFO "IsMSPro = %x\n", !!(s & MS_IsMSPro));
++ printk(KERN_INFO "IsMSPHG = %x\n", !!(s & MS_IsMSPHG));
++ printk(KERN_INFO "WtP= %x\n", !!(s & MS_WtP));
++ if (s & MS_IsMSPro) {
+ MSP_BlockSize = (bbuf[6] << 8) | bbuf[7];
+ MSP_UserAreaBlocks = (bbuf[10] << 8) | bbuf[11];
+ info->MSP_TotalBlock = MSP_BlockSize * MSP_UserAreaBlocks;
+@@ -2169,17 +2168,17 @@ static int ene_sd_init(struct us_data *us)
+ return USB_STOR_TRANSPORT_ERROR;
+ }
+
+- info->SD_Status = *(struct SD_STATUS *) bbuf;
+- if (info->SD_Status.Insert && info->SD_Status.Ready) {
+- struct SD_STATUS *s = &info->SD_Status;
++ info->SD_Status = bbuf[0];
++ if ((info->SD_Status & SD_Insert) && (info->SD_Status & SD_Ready)) {
++ unsigned int s = info->SD_Status;
+
+ ene_get_card_status(us, bbuf);
+- usb_stor_dbg(us, "Insert = %x\n", s->Insert);
+- usb_stor_dbg(us, "Ready = %x\n", s->Ready);
+- usb_stor_dbg(us, "IsMMC = %x\n", s->IsMMC);
+- usb_stor_dbg(us, "HiCapacity = %x\n", s->HiCapacity);
+- usb_stor_dbg(us, "HiSpeed = %x\n", s->HiSpeed);
+- usb_stor_dbg(us, "WtP = %x\n", s->WtP);
++ usb_stor_dbg(us, "Insert = %x\n", !!(s & SD_Insert));
++ usb_stor_dbg(us, "Ready = %x\n", !!(s & SD_Ready));
++ usb_stor_dbg(us, "IsMMC = %x\n", !!(s & SD_IsMMC));
++ usb_stor_dbg(us, "HiCapacity = %x\n", !!(s & SD_HiCapacity));
++ usb_stor_dbg(us, "HiSpeed = %x\n", !!(s & SD_HiSpeed));
++ usb_stor_dbg(us, "WtP = %x\n", !!(s & SD_WtP));
+ } else {
+ usb_stor_dbg(us, "SD Card Not Ready --- %x\n", bbuf[0]);
+ return USB_STOR_TRANSPORT_ERROR;
+@@ -2201,14 +2200,14 @@ static int ene_init(struct us_data *us)
+
+ misc_reg03 = bbuf[0];
+ if (misc_reg03 & 0x01) {
+- if (!info->SD_Status.Ready) {
++ if (!(info->SD_Status & SD_Ready)) {
+ result = ene_sd_init(us);
+ if (result != USB_STOR_XFER_GOOD)
+ return USB_STOR_TRANSPORT_ERROR;
+ }
+ }
+ if (misc_reg03 & 0x02) {
+- if (!info->MS_Status.Ready) {
++ if (!(info->MS_Status & MS_Ready)) {
+ result = ene_ms_init(us);
+ if (result != USB_STOR_XFER_GOOD)
+ return USB_STOR_TRANSPORT_ERROR;
+@@ -2307,14 +2306,14 @@ static int ene_transport(struct scsi_cmnd *srb, struct us_data *us)
+
+ /*US_DEBUG(usb_stor_show_command(us, srb)); */
+ scsi_set_resid(srb, 0);
+- if (unlikely(!(info->SD_Status.Ready || info->MS_Status.Ready)))
++ if (unlikely(!(info->SD_Status & SD_Ready) || (info->MS_Status & MS_Ready)))
+ result = ene_init(us);
+ if (result == USB_STOR_XFER_GOOD) {
+ result = USB_STOR_TRANSPORT_ERROR;
+- if (info->SD_Status.Ready)
++ if (info->SD_Status & SD_Ready)
+ result = sd_scsi_irp(us, srb);
+
+- if (info->MS_Status.Ready)
++ if (info->MS_Status & MS_Ready)
+ result = ms_scsi_irp(us, srb);
+ }
+ return result;
+@@ -2378,7 +2377,6 @@ static int ene_ub6250_probe(struct usb_interface *intf,
+
+ static int ene_ub6250_resume(struct usb_interface *iface)
+ {
+- u8 tmp = 0;
+ struct us_data *us = usb_get_intfdata(iface);
+ struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+
+@@ -2390,17 +2388,16 @@ static int ene_ub6250_resume(struct usb_interface *iface)
+ mutex_unlock(&us->dev_mutex);
+
+ info->Power_IsResum = true;
+- /*info->SD_Status.Ready = 0; */
+- info->SD_Status = *(struct SD_STATUS *)&tmp;
+- info->MS_Status = *(struct MS_STATUS *)&tmp;
+- info->SM_Status = *(struct SM_STATUS *)&tmp;
++ /* info->SD_Status &= ~SD_Ready; */
++ info->SD_Status = 0;
++ info->MS_Status = 0;
++ info->SM_Status = 0;
+
+ return 0;
+ }
+
+ static int ene_ub6250_reset_resume(struct usb_interface *iface)
+ {
+- u8 tmp = 0;
+ struct us_data *us = usb_get_intfdata(iface);
+ struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra);
+
+@@ -2412,10 +2409,10 @@ static int ene_ub6250_reset_resume(struct usb_interface *iface)
+ * the device
+ */
+ info->Power_IsResum = true;
+- /*info->SD_Status.Ready = 0; */
+- info->SD_Status = *(struct SD_STATUS *)&tmp;
+- info->MS_Status = *(struct MS_STATUS *)&tmp;
+- info->SM_Status = *(struct SM_STATUS *)&tmp;
++ /* info->SD_Status &= ~SD_Ready; */
++ info->SD_Status = 0;
++ info->MS_Status = 0;
++ info->SM_Status = 0;
+
+ return 0;
+ }
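
The ene_ub6250 hunks above replace the SD/MS/SM bitfield structs with
explicit BIT() masks. Bitfield ordering within a byte is
implementation-defined in C, so overlaying a raw status byte from the
device onto such a struct only works by accident of the ABI; plain
masks make the layout explicit. A minimal user-space sketch of the
idiom, with illustrative names and a locally defined BIT() standing in
for <linux/bits.h>:

/* User-space sketch of the bit-mask idiom; not kernel code. */
#include <stdio.h>

#define BIT(n)     (1u << (n))
#define SD_INSERT  BIT(0)
#define SD_READY   BIT(1)
#define SD_WTP     BIT(6)

int main(void)
{
        unsigned char status = 0x43;    /* pretend byte read from device */

        /* Testing a flag is a plain AND; no layout assumptions. */
        if ((status & SD_INSERT) && (status & SD_READY))
                printf("card ready, write-protect=%d\n",
                       !!(status & SD_WTP));
        return 0;
}
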
+diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c
+index 3789698d9d3c6..0c423916d7bfa 100644
+--- a/drivers/usb/storage/realtek_cr.c
++++ b/drivers/usb/storage/realtek_cr.c
+@@ -365,7 +365,7 @@ static int rts51x_read_mem(struct us_data *us, u16 addr, u8 *data, u16 len)
+
+ buf = kmalloc(len, GFP_NOIO);
+ if (buf == NULL)
+- return USB_STOR_TRANSPORT_ERROR;
++ return -ENOMEM;
+
+ usb_stor_dbg(us, "addr = 0x%x, len = %d\n", addr, len);
+
+diff --git a/drivers/usb/typec/tipd/core.c b/drivers/usb/typec/tipd/core.c
+index 7ffcda94d323a..16b4560216ba6 100644
+--- a/drivers/usb/typec/tipd/core.c
++++ b/drivers/usb/typec/tipd/core.c
+@@ -256,6 +256,10 @@ static int tps6598x_connect(struct tps6598x *tps, u32 status)
+ typec_set_pwr_opmode(tps->port, mode);
+ typec_set_pwr_role(tps->port, TPS_STATUS_TO_TYPEC_PORTROLE(status));
+ typec_set_vconn_role(tps->port, TPS_STATUS_TO_TYPEC_VCONN(status));
++ if (TPS_STATUS_TO_UPSIDE_DOWN(status))
++ typec_set_orientation(tps->port, TYPEC_ORIENTATION_REVERSE);
++ else
++ typec_set_orientation(tps->port, TYPEC_ORIENTATION_NORMAL);
+ tps6598x_set_data_role(tps, TPS_STATUS_TO_TYPEC_DATAROLE(status), true);
+
+ tps->partner = typec_register_partner(tps->port, &desc);
+@@ -278,6 +282,7 @@ static void tps6598x_disconnect(struct tps6598x *tps, u32 status)
+ typec_set_pwr_opmode(tps->port, TYPEC_PWR_MODE_USB);
+ typec_set_pwr_role(tps->port, TPS_STATUS_TO_TYPEC_PORTROLE(status));
+ typec_set_vconn_role(tps->port, TPS_STATUS_TO_TYPEC_VCONN(status));
++ typec_set_orientation(tps->port, TYPEC_ORIENTATION_NONE);
+ tps6598x_set_data_role(tps, TPS_STATUS_TO_TYPEC_DATAROLE(status), false);
+
+ power_supply_changed(tps->psy);
+diff --git a/drivers/usb/typec/tipd/tps6598x.h b/drivers/usb/typec/tipd/tps6598x.h
+index 3dae84c524fb5..527857549d699 100644
+--- a/drivers/usb/typec/tipd/tps6598x.h
++++ b/drivers/usb/typec/tipd/tps6598x.h
+@@ -17,6 +17,7 @@
+ /* TPS_REG_STATUS bits */
+ #define TPS_STATUS_PLUG_PRESENT BIT(0)
+ #define TPS_STATUS_PLUG_UPSIDE_DOWN BIT(4)
++#define TPS_STATUS_TO_UPSIDE_DOWN(s) (!!((s) & TPS_STATUS_PLUG_UPSIDE_DOWN))
+ #define TPS_STATUS_PORTROLE BIT(5)
+ #define TPS_STATUS_TO_TYPEC_PORTROLE(s) (!!((s) & TPS_STATUS_PORTROLE))
+ #define TPS_STATUS_DATAROLE BIT(6)
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index d0f91078600e9..9fe1071a96444 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -1669,7 +1669,7 @@ static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
+ return;
+
+ if (unlikely(is_ctrl_vq_idx(mvdev, idx))) {
+- if (!mvdev->cvq.ready)
++ if (!mvdev->wq || !mvdev->cvq.ready)
+ return;
+
+ wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
+@@ -2707,9 +2707,12 @@ static void mlx5_vdpa_dev_del(struct vdpa_mgmt_dev *v_mdev, struct vdpa_device *
+ struct mlx5_vdpa_mgmtdev *mgtdev = container_of(v_mdev, struct mlx5_vdpa_mgmtdev, mgtdev);
+ struct mlx5_vdpa_dev *mvdev = to_mvdev(dev);
+ struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
++ struct workqueue_struct *wq;
+
+ mlx5_notifier_unregister(mvdev->mdev, &ndev->nb);
+- destroy_workqueue(mvdev->wq);
++ wq = mvdev->wq;
++ mvdev->wq = NULL;
++ destroy_workqueue(wq);
+ _vdpa_unregister_device(dev);
+ mgtdev->ndev = NULL;
+ }
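
The mlx5_vnet change above closes a window where mlx5_vdpa_kick_vq()
could queue work on a workqueue that mlx5_vdpa_dev_del() was
concurrently destroying: the pointer is unpublished (set to NULL)
before destroy_workqueue() runs, and the kick path now bails out on a
NULL wq. A simplified, single-threaded sketch of that ordering with
hypothetical types; the real driver additionally relies on
destroy_workqueue() flushing in-flight work:

/* Illustrative teardown ordering only; the kernel version also
 * depends on destroy_workqueue() flushing queued work. */
#include <stdio.h>
#include <stdlib.h>

struct wq  { int dummy; };
struct dev { struct wq *wq; };

static void destroy_wq(struct wq *wq) { free(wq); }

static void kick(struct dev *d)
{
        if (!d->wq)             /* removal already started: bail out */
                return;
        printf("work queued\n");
}

static void remove_dev(struct dev *d)
{
        struct wq *wq = d->wq;  /* take a local copy ... */

        d->wq = NULL;           /* ... unpublish before destroying */
        destroy_wq(wq);
}

int main(void)
{
        struct dev d = { .wq = malloc(sizeof(struct wq)) };

        remove_dev(&d);
        kick(&d);               /* now a harmless no-op */
        return 0;
}
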
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index f948e6cd29939..2e6409cc11ada 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -228,6 +228,19 @@ int vfio_pci_set_power_state(struct vfio_pci_core_device *vdev, pci_power_t stat
+ if (!ret) {
+ /* D3 might be unsupported via quirk, skip unless in D3 */
+ if (needs_save && pdev->current_state >= PCI_D3hot) {
++ /*
++ * The current PCI state will be saved locally in
++ * 'pm_save' during the D3hot transition. When the
++ * device state is changed to D0 again with the current
++ * function, then pci_store_saved_state() will restore
++ * the state and will free the memory pointed by
++ * 'pm_save'. There are few cases where the PCI power
++ * state can be changed to D0 without the involvement
++ * of the driver. For these cases, free the earlier
++ * allocated memory first before overwriting 'pm_save'
++ * to prevent the memory leak.
++ */
++ kfree(vdev->pm_save);
+ vdev->pm_save = pci_store_saved_state(pdev);
+ } else if (needs_restore) {
+ pci_load_and_free_saved_state(pdev, &vdev->pm_save);
+@@ -322,6 +335,17 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
+ /* For needs_reset */
+ lockdep_assert_held(&vdev->vdev.dev_set->lock);
+
++ /*
++ * This function can be invoked while the power state is non-D0.
++ * This function calls __pci_reset_function_locked() which internally
++ * can use pci_pm_reset() for the function reset. pci_pm_reset() will
++ * fail if the power state is non-D0. Also, for the devices which
++ * have NoSoftRst-, the reset function can cause the PCI config space
++ * reset without restoring the original state (saved locally in
++ * 'vdev->pm_save').
++ */
++ vfio_pci_set_power_state(vdev, PCI_D0);
++
+ /* Stop the device from further DMA */
+ pci_clear_master(pdev);
+
+@@ -921,6 +945,19 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
+ return -EINVAL;
+
+ vfio_pci_zap_and_down_write_memory_lock(vdev);
++
++ /*
++ * This function can be invoked while the power state is non-D0.
++ * If pci_try_reset_function() has been called while the power
++ * state is non-D0, then pci_try_reset_function() will
++ * internally set the power state to D0 without vfio driver
++ * involvement. For the devices which have NoSoftRst-, the
++ * reset function can cause the PCI config space reset without
++ * restoring the original state (saved locally in
++ * 'vdev->pm_save').
++ */
++ vfio_pci_set_power_state(vdev, PCI_D0);
++
+ ret = pci_try_reset_function(vdev->pdev);
+ up_write(&vdev->memory_lock);
+
+@@ -2055,6 +2092,18 @@ static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set,
+ }
+ cur_mem = NULL;
+
++ /*
++ * The pci_reset_bus() will reset all the devices in the bus.
++ * The power state can be non-D0 for some of the devices in the bus.
++ * For these devices, the pci_reset_bus() will internally set
++ * the power state to D0 without vfio driver involvement.
++ * For the devices which have NoSoftRst-, the reset function can
++ * cause the PCI config space reset without restoring the original
++ * state (saved locally in 'vdev->pm_save').
++ */
++ list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
++ vfio_pci_set_power_state(cur, PCI_D0);
++
+ ret = pci_reset_bus(pdev);
+
+ err_undo:
+@@ -2108,6 +2157,18 @@ static bool vfio_pci_dev_set_try_reset(struct vfio_device_set *dev_set)
+ if (!pdev)
+ return false;
+
++ /*
++ * The pci_reset_bus() will reset all the devices in the bus.
++ * The power state can be non-D0 for some of the devices in the bus.
++ * For these devices, the pci_reset_bus() will internally set
++ * the power state to D0 without vfio driver involvement.
++ * For the devices which have NoSoftRst-, the reset function can
++ * cause the PCI config space reset without restoring the original
++ * state (saved locally in 'vdev->pm_save').
++ */
++ list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list)
++ vfio_pci_set_power_state(cur, PCI_D0);
++
+ ret = pci_reset_bus(pdev);
+ if (ret)
+ return false;
+diff --git a/drivers/vhost/iotlb.c b/drivers/vhost/iotlb.c
+index 40b098320b2a7..5829cf2d0552d 100644
+--- a/drivers/vhost/iotlb.c
++++ b/drivers/vhost/iotlb.c
+@@ -62,8 +62,12 @@ int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
+ */
+ if (start == 0 && last == ULONG_MAX) {
+ u64 mid = last / 2;
++ int err = vhost_iotlb_add_range_ctx(iotlb, start, mid, addr,
++ perm, opaque);
++
++ if (err)
++ return err;
+
+- vhost_iotlb_add_range_ctx(iotlb, start, mid, addr, perm, opaque);
+ addr += mid + 1;
+ start = mid + 1;
+ }
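
In the vhost iotlb hunk, the special case exists because the mapping
covers the inclusive range [0, ULONG_MAX], whose size last - start + 1
wraps to 0 in unsigned arithmetic, so the range must be inserted as
two halves; the fix also propagates a failure from the first half
instead of silently ignoring it. The wraparound, demonstrated:

/* Why [0, ULONG_MAX] is split in two: the inclusive size wraps to 0. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
        unsigned long start = 0, last = ULONG_MAX;
        unsigned long size = last - start + 1;  /* wraps to 0 */
        unsigned long mid = last / 2;

        printf("naive size  = %lu\n", size);
        printf("first half  = [%lu, %lu]\n", start, mid);
        printf("second half = [%lu, %lu]\n", mid + 1, last);
        return 0;
}
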
+diff --git a/drivers/video/fbdev/atafb.c b/drivers/video/fbdev/atafb.c
+index e3812a8ff55a4..29e650ecfceb1 100644
+--- a/drivers/video/fbdev/atafb.c
++++ b/drivers/video/fbdev/atafb.c
+@@ -1683,9 +1683,9 @@ static int falcon_setcolreg(unsigned int regno, unsigned int red,
+ ((blue & 0xfc00) >> 8));
+ if (regno < 16) {
+ shifter_tt.color_reg[regno] =
+- (((red & 0xe000) >> 13) | ((red & 0x1000) >> 12) << 8) |
+- (((green & 0xe000) >> 13) | ((green & 0x1000) >> 12) << 4) |
+- ((blue & 0xe000) >> 13) | ((blue & 0x1000) >> 12);
++ ((((red & 0xe000) >> 13) | ((red & 0x1000) >> 12)) << 8) |
++ ((((green & 0xe000) >> 13) | ((green & 0x1000) >> 12)) << 4) |
++ ((blue & 0xe000) >> 13) | ((blue & 0x1000) >> 12);
+ ((u32 *)info->pseudo_palette)[regno] = ((red & 0xf800) |
+ ((green & 0xfc00) >> 5) |
+ ((blue & 0xf800) >> 11));
+@@ -1971,9 +1971,9 @@ static int stste_setcolreg(unsigned int regno, unsigned int red,
+ green >>= 12;
+ if (ATARIHW_PRESENT(EXTD_SHIFTER))
+ shifter_tt.color_reg[regno] =
+- (((red & 0xe) >> 1) | ((red & 1) << 3) << 8) |
+- (((green & 0xe) >> 1) | ((green & 1) << 3) << 4) |
+- ((blue & 0xe) >> 1) | ((blue & 1) << 3);
++ ((((red & 0xe) >> 1) | ((red & 1) << 3)) << 8) |
++ ((((green & 0xe) >> 1) | ((green & 1) << 3)) << 4) |
++ ((blue & 0xe) >> 1) | ((blue & 1) << 3);
+ else
+ shifter_tt.color_reg[regno] =
+ ((red & 0xe) << 7) |
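
The two atafb hunks fix an operator-precedence bug: << binds tighter
than |, so the old expressions shifted only the second operand rather
than the combined value. A small demonstration with invented register
values:

/* << binds tighter than |, the bug the atafb hunks correct. */
#include <stdio.h>

int main(void)
{
        unsigned red_hi = 0x7, red_lo = 0x8;

        unsigned buggy = red_hi | red_lo << 8;   /* red_hi | (red_lo << 8) */
        unsigned fixed = (red_hi | red_lo) << 8;

        printf("buggy: 0x%03x  fixed: 0x%03x\n", buggy, fixed);
        /* prints: buggy: 0x807  fixed: 0xf00 */
        return 0;
}
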
+diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c
+index 355b6120dc4f0..1fc8de4ecbebf 100644
+--- a/drivers/video/fbdev/atmel_lcdfb.c
++++ b/drivers/video/fbdev/atmel_lcdfb.c
+@@ -1062,15 +1062,16 @@ static int __init atmel_lcdfb_probe(struct platform_device *pdev)
+
+ INIT_LIST_HEAD(&info->modelist);
+
+- if (pdev->dev.of_node) {
+- ret = atmel_lcdfb_of_init(sinfo);
+- if (ret)
+- goto free_info;
+- } else {
++ if (!pdev->dev.of_node) {
+ dev_err(dev, "cannot get default configuration\n");
+ goto free_info;
+ }
+
++ ret = atmel_lcdfb_of_init(sinfo);
++ if (ret)
++ goto free_info;
++
++ ret = -ENODEV;
+ if (!sinfo->config)
+ goto free_info;
+
+diff --git a/drivers/video/fbdev/cirrusfb.c b/drivers/video/fbdev/cirrusfb.c
+index 93802abbbc72a..3d47c347b8970 100644
+--- a/drivers/video/fbdev/cirrusfb.c
++++ b/drivers/video/fbdev/cirrusfb.c
+@@ -469,7 +469,7 @@ static int cirrusfb_check_mclk(struct fb_info *info, long freq)
+ return 0;
+ }
+
+-static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
++static int cirrusfb_check_pixclock(struct fb_var_screeninfo *var,
+ struct fb_info *info)
+ {
+ long freq;
+@@ -478,9 +478,7 @@ static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
+ unsigned maxclockidx = var->bits_per_pixel >> 3;
+
+ /* convert from ps to kHz */
+- freq = PICOS2KHZ(var->pixclock);
+-
+- dev_dbg(info->device, "desired pixclock: %ld kHz\n", freq);
++ freq = PICOS2KHZ(var->pixclock ? : 1);
+
+ maxclock = cirrusfb_board_info[cinfo->btype].maxclock[maxclockidx];
+ cinfo->multiplexing = 0;
+@@ -488,11 +486,13 @@ static int cirrusfb_check_pixclock(const struct fb_var_screeninfo *var,
+ /* If the frequency is greater than we can support, we might be able
+ * to use multiplexing for the video mode */
+ if (freq > maxclock) {
+- dev_err(info->device,
+- "Frequency greater than maxclock (%ld kHz)\n",
+- maxclock);
+- return -EINVAL;
++ var->pixclock = KHZ2PICOS(maxclock);
++
++ while ((freq = PICOS2KHZ(var->pixclock)) > maxclock)
++ var->pixclock++;
+ }
++ dev_dbg(info->device, "desired pixclock: %ld kHz\n", freq);
++
+ /*
+ * Additional constraint: 8bpp uses DAC clock doubling to allow maximum
+ * pixel clock
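
The cirrusfb change stops rejecting an over-limit pixel clock and
clamps it instead. var->pixclock is a period in picoseconds, so
PICOS2KHZ(p) is roughly 10^9 / p, and because the two integer
conversions do not round-trip exactly, the loop nudges the period up
until the frequency fits; the `?: 1` guard also avoids dividing by a
zero pixclock. A user-space sketch with the macros reimplemented (in
the kernel they live in <linux/fb.h>) and an invented clock limit:

/* Sketch of the picoseconds <-> kHz clamp; invented limit. */
#include <stdio.h>

#define PICOS2KHZ(p)    (1000000000UL / (p))
#define KHZ2PICOS(k)    (1000000000UL / (k))

int main(void)
{
        unsigned long maxclock = 85500;         /* kHz */
        unsigned long pixclock = 9000;          /* ps, ~111 MHz: too fast */
        unsigned long freq = PICOS2KHZ(pixclock);

        if (freq > maxclock) {
                pixclock = KHZ2PICOS(maxclock);
                /* integer rounding can still land just above the limit */
                while ((freq = PICOS2KHZ(pixclock)) > maxclock)
                        pixclock++;
        }
        printf("clamped pixclock = %lu ps (%lu kHz)\n", pixclock, freq);
        return 0;
}
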
+diff --git a/drivers/video/fbdev/controlfb.c b/drivers/video/fbdev/controlfb.c
+index 509311471d515..bd59e7b11ed53 100644
+--- a/drivers/video/fbdev/controlfb.c
++++ b/drivers/video/fbdev/controlfb.c
+@@ -67,7 +67,9 @@
+ #define out_8(addr, val) (void)(val)
+ #define in_le32(addr) 0
+ #define out_le32(addr, val) (void)(val)
++#ifndef pgprot_cached_wthru
+ #define pgprot_cached_wthru(prot) (prot)
++#endif
+ #else
+ static void invalid_vram_cache(void __force *addr)
+ {
+diff --git a/drivers/video/fbdev/core/fbcvt.c b/drivers/video/fbdev/core/fbcvt.c
+index 55d2bd0ce5c02..64843464c6613 100644
+--- a/drivers/video/fbdev/core/fbcvt.c
++++ b/drivers/video/fbdev/core/fbcvt.c
+@@ -214,9 +214,11 @@ static u32 fb_cvt_aspect_ratio(struct fb_cvt_data *cvt)
+ static void fb_cvt_print_name(struct fb_cvt_data *cvt)
+ {
+ u32 pixcount, pixcount_mod;
+- int cnt = 255, offset = 0, read = 0;
+- u8 *buf = kzalloc(256, GFP_KERNEL);
++ int size = 256;
++ int off = 0;
++ u8 *buf;
+
++ buf = kzalloc(size, GFP_KERNEL);
+ if (!buf)
+ return;
+
+@@ -224,43 +226,30 @@ static void fb_cvt_print_name(struct fb_cvt_data *cvt)
+ pixcount_mod = (cvt->xres * (cvt->yres/cvt->interlace)) % 1000000;
+ pixcount_mod /= 1000;
+
+- read = snprintf(buf+offset, cnt, "fbcvt: %dx%d@%d: CVT Name - ",
+- cvt->xres, cvt->yres, cvt->refresh);
+- offset += read;
+- cnt -= read;
++ off += scnprintf(buf + off, size - off, "fbcvt: %dx%d@%d: CVT Name - ",
++ cvt->xres, cvt->yres, cvt->refresh);
+
+- if (cvt->status)
+- snprintf(buf+offset, cnt, "Not a CVT standard - %d.%03d Mega "
+- "Pixel Image\n", pixcount, pixcount_mod);
+- else {
+- if (pixcount) {
+- read = snprintf(buf+offset, cnt, "%d", pixcount);
+- cnt -= read;
+- offset += read;
+- }
++ if (cvt->status) {
++ off += scnprintf(buf + off, size - off,
++ "Not a CVT standard - %d.%03d Mega Pixel Image\n",
++ pixcount, pixcount_mod);
++ } else {
++ if (pixcount)
++ off += scnprintf(buf + off, size - off, "%d", pixcount);
+
+- read = snprintf(buf+offset, cnt, ".%03dM", pixcount_mod);
+- cnt -= read;
+- offset += read;
++ off += scnprintf(buf + off, size - off, ".%03dM", pixcount_mod);
+
+ if (cvt->aspect_ratio == 0)
+- read = snprintf(buf+offset, cnt, "3");
++ off += scnprintf(buf + off, size - off, "3");
+ else if (cvt->aspect_ratio == 3)
+- read = snprintf(buf+offset, cnt, "4");
++ off += scnprintf(buf + off, size - off, "4");
+ else if (cvt->aspect_ratio == 1 || cvt->aspect_ratio == 4)
+- read = snprintf(buf+offset, cnt, "9");
++ off += scnprintf(buf + off, size - off, "9");
+ else if (cvt->aspect_ratio == 2)
+- read = snprintf(buf+offset, cnt, "A");
+- else
+- read = 0;
+- cnt -= read;
+- offset += read;
+-
+- if (cvt->flags & FB_CVT_FLAG_REDUCED_BLANK) {
+- read = snprintf(buf+offset, cnt, "-R");
+- cnt -= read;
+- offset += read;
+- }
++ off += scnprintf(buf + off, size - off, "A");
++
++ if (cvt->flags & FB_CVT_FLAG_REDUCED_BLANK)
++ off += scnprintf(buf + off, size - off, "-R");
+ }
+
+ printk(KERN_INFO "%s\n", buf);
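
The fbcvt rewrite swaps hand-tracked snprintf() bookkeeping for
scnprintf(). snprintf() returns the length the output would have had,
so after truncation the old `offset += read` could step past the
buffer; scnprintf() returns the bytes actually written and can be
accumulated safely. A user-space contrast (scnprintf() is kernel-only,
so its contract is approximated here):

/* Contrast snprintf()'s would-be length with a scnprintf()-style
 * helper that clamps to what actually fits. */
#include <stdarg.h>
#include <stdio.h>

static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
        va_list args;
        int i;

        if (!size)
                return 0;
        va_start(args, fmt);
        i = vsnprintf(buf, size, fmt, args);
        va_end(args);
        return i >= (int)size ? (int)size - 1 : i;
}

int main(void)
{
        char buf[8];
        int want = snprintf(buf, sizeof(buf), "0123456789");
        int got  = my_scnprintf(buf, sizeof(buf), "0123456789");

        /* 10 vs 7: only 'got' is safe to add to a running offset */
        printf("snprintf: %d, scnprintf-style: %d\n", want, got);
        return 0;
}
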
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 13083ad8d7515..ad9aac06427a0 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -25,6 +25,7 @@
+ #include <linux/init.h>
+ #include <linux/linux_logo.h>
+ #include <linux/proc_fs.h>
++#include <linux/platform_device.h>
+ #include <linux/seq_file.h>
+ #include <linux/console.h>
+ #include <linux/kmod.h>
+@@ -1559,18 +1560,36 @@ static void do_remove_conflicting_framebuffers(struct apertures_struct *a,
+ /* check all firmware fbs and kick off if the base addr overlaps */
+ for_each_registered_fb(i) {
+ struct apertures_struct *gen_aper;
++ struct device *device;
+
+ if (!(registered_fb[i]->flags & FBINFO_MISC_FIRMWARE))
+ continue;
+
+ gen_aper = registered_fb[i]->apertures;
++ device = registered_fb[i]->device;
+ if (fb_do_apertures_overlap(gen_aper, a) ||
+ (primary && gen_aper && gen_aper->count &&
+ gen_aper->ranges[0].base == VGA_FB_PHYS)) {
+
+ printk(KERN_INFO "fb%d: switching to %s from %s\n",
+ i, name, registered_fb[i]->fix.id);
+- do_unregister_framebuffer(registered_fb[i]);
++
++ /*
++ * If we kick-out a firmware driver, we also want to remove
++ * the underlying platform device, such as simple-framebuffer,
++ * VESA, EFI, etc. A native driver will then be able to
++ * allocate the memory range.
++ *
++ * If it's not a platform device, at least print a warning. A
++ * fix would add code to remove the device from the system.
++ */
++ if (dev_is_platform(device)) {
++ registered_fb[i]->forced_out = true;
++ platform_device_unregister(to_platform_device(device));
++ } else {
++ pr_warn("fb%d: cannot remove device\n", i);
++ do_unregister_framebuffer(registered_fb[i]);
++ }
+ }
+ }
+ }
+@@ -1900,9 +1919,13 @@ EXPORT_SYMBOL(register_framebuffer);
+ void
+ unregister_framebuffer(struct fb_info *fb_info)
+ {
+- mutex_lock(&registration_lock);
++ bool forced_out = fb_info->forced_out;
++
++ if (!forced_out)
++ mutex_lock(&registration_lock);
+ do_unregister_framebuffer(fb_info);
+- mutex_unlock(&registration_lock);
++ if (!forced_out)
++ mutex_unlock(&registration_lock);
+ }
+ EXPORT_SYMBOL(unregister_framebuffer);
+
+diff --git a/drivers/video/fbdev/matrox/matroxfb_base.c b/drivers/video/fbdev/matrox/matroxfb_base.c
+index 5c82611e93d99..236521b19daf7 100644
+--- a/drivers/video/fbdev/matrox/matroxfb_base.c
++++ b/drivers/video/fbdev/matrox/matroxfb_base.c
+@@ -1377,7 +1377,7 @@ static struct video_board vbG200 = {
+ .lowlevel = &matrox_G100
+ };
+ static struct video_board vbG200eW = {
+- .maxvram = 0x800000,
++ .maxvram = 0x100000,
+ .maxdisplayable = 0x800000,
+ .accelID = FB_ACCEL_MATROX_MGAG200,
+ .lowlevel = &matrox_G100
+diff --git a/drivers/video/fbdev/nvidia/nv_i2c.c b/drivers/video/fbdev/nvidia/nv_i2c.c
+index d7994a1732459..0b48965a6420c 100644
+--- a/drivers/video/fbdev/nvidia/nv_i2c.c
++++ b/drivers/video/fbdev/nvidia/nv_i2c.c
+@@ -86,7 +86,7 @@ static int nvidia_setup_i2c_bus(struct nvidia_i2c_chan *chan, const char *name,
+ {
+ int rc;
+
+- strcpy(chan->adapter.name, name);
++ strscpy(chan->adapter.name, name, sizeof(chan->adapter.name));
+ chan->adapter.owner = THIS_MODULE;
+ chan->adapter.class = i2c_class;
+ chan->adapter.algo_data = &chan->algo;
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c b/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
+index 2fa436475b406..c8ad3ef42bd31 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/connector-dvi.c
+@@ -246,6 +246,7 @@ static int dvic_probe_of(struct platform_device *pdev)
+ adapter_node = of_parse_phandle(node, "ddc-i2c-bus", 0);
+ if (adapter_node) {
+ adapter = of_get_i2c_adapter_by_node(adapter_node);
++ of_node_put(adapter_node);
+ if (adapter == NULL) {
+ dev_err(&pdev->dev, "failed to parse ddc-i2c-bus\n");
+ omap_dss_put_device(ddata->in);
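
The connector-dvi fix pairs of_parse_phandle(), which returns a device
node with an elevated reference count, with the of_node_put() that was
missing once the I2C adapter had been resolved. A toy user-space mock
of that get/put discipline (the real helpers are kernel-only):

/* Mock of the of_node get/put pairing; illustrative only. */
#include <stdio.h>

struct node { int refcount; };

static struct node *parse_phandle(struct node *n)
{
        n->refcount++;          /* lookup hands back a reference */
        return n;
}

static void node_put(struct node *n)
{
        n->refcount--;
}

int main(void)
{
        struct node adapter_node = { .refcount = 1 };
        struct node *np = parse_phandle(&adapter_node);

        /* ... resolve the adapter from np ... */
        node_put(np);           /* the fix: drop the reference */

        printf("refcount back to %d\n", adapter_node.refcount);
        return 0;
}
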
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
+index 4b0793abdd84b..a2c7c5cb15234 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
+@@ -409,7 +409,7 @@ static ssize_t dsicm_num_errors_show(struct device *dev,
+ if (r)
+ return r;
+
+- return snprintf(buf, PAGE_SIZE, "%d\n", errors);
++ return sysfs_emit(buf, "%d\n", errors);
+ }
+
+ static ssize_t dsicm_hw_revision_show(struct device *dev,
+@@ -439,7 +439,7 @@ static ssize_t dsicm_hw_revision_show(struct device *dev,
+ if (r)
+ return r;
+
+- return snprintf(buf, PAGE_SIZE, "%02x.%02x.%02x\n", id1, id2, id3);
++ return sysfs_emit(buf, "%02x.%02x.%02x\n", id1, id2, id3);
+ }
+
+ static ssize_t dsicm_store_ulps(struct device *dev,
+@@ -487,7 +487,7 @@ static ssize_t dsicm_show_ulps(struct device *dev,
+ t = ddata->ulps_enabled;
+ mutex_unlock(&ddata->lock);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n", t);
++ return sysfs_emit(buf, "%u\n", t);
+ }
+
+ static ssize_t dsicm_store_ulps_timeout(struct device *dev,
+@@ -532,7 +532,7 @@ static ssize_t dsicm_show_ulps_timeout(struct device *dev,
+ t = ddata->ulps_timeout;
+ mutex_unlock(&ddata->lock);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n", t);
++ return sysfs_emit(buf, "%u\n", t);
+ }
+
+ static DEVICE_ATTR(num_dsi_errors, S_IRUGO, dsicm_num_errors_show, NULL);
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
+index 8d8b5ff7d43c8..3696eb09b69b4 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-sony-acx565akm.c
+@@ -476,7 +476,7 @@ static ssize_t show_cabc_available_modes(struct device *dev,
+ int i;
+
+ if (!ddata->has_cabc)
+- return snprintf(buf, PAGE_SIZE, "%s\n", cabc_modes[0]);
++ return sysfs_emit(buf, "%s\n", cabc_modes[0]);
+
+ for (i = 0, len = 0;
+ len < PAGE_SIZE && i < ARRAY_SIZE(cabc_modes); i++)
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
+index afac1d9445aa2..57b7d1f490962 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td043mtea1.c
+@@ -169,7 +169,7 @@ static ssize_t tpo_td043_vmirror_show(struct device *dev,
+ {
+ struct panel_drv_data *ddata = dev_get_drvdata(dev);
+
+- return snprintf(buf, PAGE_SIZE, "%d\n", ddata->vmirror);
++ return sysfs_emit(buf, "%d\n", ddata->vmirror);
+ }
+
+ static ssize_t tpo_td043_vmirror_store(struct device *dev,
+@@ -199,7 +199,7 @@ static ssize_t tpo_td043_mode_show(struct device *dev,
+ {
+ struct panel_drv_data *ddata = dev_get_drvdata(dev);
+
+- return snprintf(buf, PAGE_SIZE, "%d\n", ddata->mode);
++ return sysfs_emit(buf, "%d\n", ddata->mode);
+ }
+
+ static ssize_t tpo_td043_mode_store(struct device *dev,
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 0dbc6bf8268ac..092a1caa1208e 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -1047,7 +1047,7 @@ static ssize_t smtcfb_read(struct fb_info *info, char __user *buf,
+ if (count + p > total_size)
+ count = total_size - p;
+
+- buffer = kmalloc((count > PAGE_SIZE) ? PAGE_SIZE : count, GFP_KERNEL);
++ buffer = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+@@ -1059,25 +1059,14 @@ static ssize_t smtcfb_read(struct fb_info *info, char __user *buf,
+ while (count) {
+ c = (count > PAGE_SIZE) ? PAGE_SIZE : count;
+ dst = buffer;
+- for (i = c >> 2; i--;) {
+- *dst = fb_readl(src++);
+- *dst = big_swap(*dst);
++ for (i = (c + 3) >> 2; i--;) {
++ u32 val;
++
++ val = fb_readl(src);
++ *dst = big_swap(val);
++ src++;
+ dst++;
+ }
+- if (c & 3) {
+- u8 *dst8 = (u8 *)dst;
+- u8 __iomem *src8 = (u8 __iomem *)src;
+-
+- for (i = c & 3; i--;) {
+- if (i & 1) {
+- *dst8++ = fb_readb(++src8);
+- } else {
+- *dst8++ = fb_readb(--src8);
+- src8 += 2;
+- }
+- }
+- src = (u32 __iomem *)src8;
+- }
+
+ if (copy_to_user(buf, buffer, c)) {
+ err = -EFAULT;
+@@ -1130,7 +1119,7 @@ static ssize_t smtcfb_write(struct fb_info *info, const char __user *buf,
+ count = total_size - p;
+ }
+
+- buffer = kmalloc((count > PAGE_SIZE) ? PAGE_SIZE : count, GFP_KERNEL);
++ buffer = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+@@ -1148,24 +1137,11 @@ static ssize_t smtcfb_write(struct fb_info *info, const char __user *buf,
+ break;
+ }
+
+- for (i = c >> 2; i--;) {
+- fb_writel(big_swap(*src), dst++);
++ for (i = (c + 3) >> 2; i--;) {
++ fb_writel(big_swap(*src), dst);
++ dst++;
+ src++;
+ }
+- if (c & 3) {
+- u8 *src8 = (u8 *)src;
+- u8 __iomem *dst8 = (u8 __iomem *)dst;
+-
+- for (i = c & 3; i--;) {
+- if (i & 1) {
+- fb_writeb(*src8++, ++dst8);
+- } else {
+- fb_writeb(*src8++, --dst8);
+- dst8 += 2;
+- }
+- }
+- dst = (u32 __iomem *)dst8;
+- }
+
+ *ppos += c;
+ buf += c;
+diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
+index bfac3ee4a6422..28768c272b73d 100644
+--- a/drivers/video/fbdev/smscufx.c
++++ b/drivers/video/fbdev/smscufx.c
+@@ -1656,6 +1656,7 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ info->par = dev;
+ info->pseudo_palette = dev->pseudo_palette;
+ info->fbops = &ufx_ops;
++ INIT_LIST_HEAD(&info->modelist);
+
+ retval = fb_alloc_cmap(&info->cmap, 256, 0);
+ if (retval < 0) {
+@@ -1666,8 +1667,6 @@ static int ufx_usb_probe(struct usb_interface *interface,
+ INIT_DELAYED_WORK(&dev->free_framebuffer_work,
+ ufx_free_framebuffer_work);
+
+- INIT_LIST_HEAD(&info->modelist);
+-
+ retval = ufx_reg_read(dev, 0x3000, &id_rev);
+ check_warn_goto_error(retval, "error %d reading 0x3000 register from device", retval);
+ dev_dbg(dev->gdev, "ID_REV register value 0x%08x", id_rev);
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index b9cdd02c10009..90f48b71fd8f7 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -1426,7 +1426,7 @@ static ssize_t metrics_bytes_rendered_show(struct device *fbdev,
+ struct device_attribute *a, char *buf) {
+ struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ struct dlfb_data *dlfb = fb_info->par;
+- return snprintf(buf, PAGE_SIZE, "%u\n",
++ return sysfs_emit(buf, "%u\n",
+ atomic_read(&dlfb->bytes_rendered));
+ }
+
+@@ -1434,7 +1434,7 @@ static ssize_t metrics_bytes_identical_show(struct device *fbdev,
+ struct device_attribute *a, char *buf) {
+ struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ struct dlfb_data *dlfb = fb_info->par;
+- return snprintf(buf, PAGE_SIZE, "%u\n",
++ return sysfs_emit(buf, "%u\n",
+ atomic_read(&dlfb->bytes_identical));
+ }
+
+@@ -1442,7 +1442,7 @@ static ssize_t metrics_bytes_sent_show(struct device *fbdev,
+ struct device_attribute *a, char *buf) {
+ struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ struct dlfb_data *dlfb = fb_info->par;
+- return snprintf(buf, PAGE_SIZE, "%u\n",
++ return sysfs_emit(buf, "%u\n",
+ atomic_read(&dlfb->bytes_sent));
+ }
+
+@@ -1450,7 +1450,7 @@ static ssize_t metrics_cpu_kcycles_used_show(struct device *fbdev,
+ struct device_attribute *a, char *buf) {
+ struct fb_info *fb_info = dev_get_drvdata(fbdev);
+ struct dlfb_data *dlfb = fb_info->par;
+- return snprintf(buf, PAGE_SIZE, "%u\n",
++ return sysfs_emit(buf, "%u\n",
+ atomic_read(&dlfb->cpu_kcycles_used));
+ }
+
+diff --git a/drivers/video/fbdev/w100fb.c b/drivers/video/fbdev/w100fb.c
+index d96ab28f8ce4a..4e641a780726e 100644
+--- a/drivers/video/fbdev/w100fb.c
++++ b/drivers/video/fbdev/w100fb.c
+@@ -770,12 +770,18 @@ out:
+ fb_dealloc_cmap(&info->cmap);
+ kfree(info->pseudo_palette);
+ }
+- if (remapped_fbuf != NULL)
++ if (remapped_fbuf != NULL) {
+ iounmap(remapped_fbuf);
+- if (remapped_regs != NULL)
++ remapped_fbuf = NULL;
++ }
++ if (remapped_regs != NULL) {
+ iounmap(remapped_regs);
+- if (remapped_base != NULL)
++ remapped_regs = NULL;
++ }
++ if (remapped_base != NULL) {
+ iounmap(remapped_base);
++ remapped_base = NULL;
++ }
+ if (info)
+ framebuffer_release(info);
+ return err;
+@@ -795,8 +801,11 @@ static int w100fb_remove(struct platform_device *pdev)
+ fb_dealloc_cmap(&info->cmap);
+
+ iounmap(remapped_base);
++ remapped_base = NULL;
+ iounmap(remapped_regs);
++ remapped_regs = NULL;
+ iounmap(remapped_fbuf);
++ remapped_fbuf = NULL;
+
+ framebuffer_release(info);
+
+diff --git a/drivers/virt/acrn/hsm.c b/drivers/virt/acrn/hsm.c
+index 5419794fccf1e..423ea888d79af 100644
+--- a/drivers/virt/acrn/hsm.c
++++ b/drivers/virt/acrn/hsm.c
+@@ -136,8 +136,10 @@ static long acrn_dev_ioctl(struct file *filp, unsigned int cmd,
+ if (IS_ERR(vm_param))
+ return PTR_ERR(vm_param);
+
+- if ((vm_param->reserved0 | vm_param->reserved1) != 0)
++ if ((vm_param->reserved0 | vm_param->reserved1) != 0) {
++ kfree(vm_param);
+ return -EINVAL;
++ }
+
+ vm = acrn_vm_create(vm, vm_param);
+ if (!vm) {
+@@ -182,21 +184,29 @@ static long acrn_dev_ioctl(struct file *filp, unsigned int cmd,
+ return PTR_ERR(cpu_regs);
+
+ for (i = 0; i < ARRAY_SIZE(cpu_regs->reserved); i++)
+- if (cpu_regs->reserved[i])
++ if (cpu_regs->reserved[i]) {
++ kfree(cpu_regs);
+ return -EINVAL;
++ }
+
+ for (i = 0; i < ARRAY_SIZE(cpu_regs->vcpu_regs.reserved_32); i++)
+- if (cpu_regs->vcpu_regs.reserved_32[i])
++ if (cpu_regs->vcpu_regs.reserved_32[i]) {
++ kfree(cpu_regs);
+ return -EINVAL;
++ }
+
+ for (i = 0; i < ARRAY_SIZE(cpu_regs->vcpu_regs.reserved_64); i++)
+- if (cpu_regs->vcpu_regs.reserved_64[i])
++ if (cpu_regs->vcpu_regs.reserved_64[i]) {
++ kfree(cpu_regs);
+ return -EINVAL;
++ }
+
+ for (i = 0; i < ARRAY_SIZE(cpu_regs->vcpu_regs.gdt.reserved); i++)
+ if (cpu_regs->vcpu_regs.gdt.reserved[i] |
+- cpu_regs->vcpu_regs.idt.reserved[i])
++ cpu_regs->vcpu_regs.idt.reserved[i]) {
++ kfree(cpu_regs);
+ return -EINVAL;
++ }
+
+ ret = hcall_set_vcpu_regs(vm->vmid, virt_to_phys(cpu_regs));
+ if (ret < 0)
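
The acrn hsm.c hunks fix memory leaks where a failed validation of a
structure just copied from user space returned -EINVAL without freeing
it; the patch adds kfree() before each early return. An alternative
shape many kernel call sites prefer, sketched here in user space with
errno values hard-coded, is a single cleanup label so that new checks
cannot forget the free:

/* Leak-free validation via one cleanup label; user-space stand-in. */
#include <stdlib.h>

struct vm_param { unsigned long reserved0, reserved1; };

static int handle_create(const struct vm_param *src)
{
        struct vm_param *p = malloc(sizeof(*p));
        int ret = 0;

        if (!p)
                return -12;             /* -ENOMEM */
        *p = *src;

        if (p->reserved0 | p->reserved1) {
                ret = -22;              /* -EINVAL */
                goto out_free;          /* no early return: no leak */
        }
        /* ... use p ... */
out_free:
        free(p);
        return ret;
}

int main(void)
{
        struct vm_param ok = { 0, 0 }, bad = { 1, 0 };

        return handle_create(&ok) || handle_create(&bad) != -22;
}
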
+diff --git a/drivers/virt/acrn/mm.c b/drivers/virt/acrn/mm.c
+index c4f2e15c8a2ba..3b1b1e7a844b4 100644
+--- a/drivers/virt/acrn/mm.c
++++ b/drivers/virt/acrn/mm.c
+@@ -162,10 +162,34 @@ int acrn_vm_ram_map(struct acrn_vm *vm, struct acrn_vm_memmap *memmap)
+ void *remap_vaddr;
+ int ret, pinned;
+ u64 user_vm_pa;
++ unsigned long pfn;
++ struct vm_area_struct *vma;
+
+ if (!vm || !memmap)
+ return -EINVAL;
+
++ mmap_read_lock(current->mm);
++ vma = vma_lookup(current->mm, memmap->vma_base);
++ if (vma && ((vma->vm_flags & VM_PFNMAP) != 0)) {
++ if ((memmap->vma_base + memmap->len) > vma->vm_end) {
++ mmap_read_unlock(current->mm);
++ return -EINVAL;
++ }
++
++ ret = follow_pfn(vma, memmap->vma_base, &pfn);
++ mmap_read_unlock(current->mm);
++ if (ret < 0) {
++ dev_dbg(acrn_dev.this_device,
++ "Failed to lookup PFN at VMA:%pK.\n", (void *)memmap->vma_base);
++ return ret;
++ }
++
++ return acrn_mm_region_add(vm, memmap->user_vm_pa,
++ PFN_PHYS(pfn), memmap->len,
++ ACRN_MEM_TYPE_WB, memmap->attr);
++ }
++ mmap_read_unlock(current->mm);
++
+ /* Get the page number of the map region */
+ nr_pages = memmap->len >> PAGE_SHIFT;
+ pages = vzalloc(nr_pages * sizeof(struct page *));
+diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
+index 22f15f444f757..75c8d560bbd36 100644
+--- a/drivers/virtio/virtio.c
++++ b/drivers/virtio/virtio.c
+@@ -526,8 +526,9 @@ int virtio_device_restore(struct virtio_device *dev)
+ goto err;
+ }
+
+- /* Finally, tell the device we're all set */
+- virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
++ /* If restore didn't do it, mark device DRIVER_OK ourselves. */
++ if (!(dev->config->get_status(dev) & VIRTIO_CONFIG_S_DRIVER_OK))
++ virtio_device_ready(dev);
+
+ virtio_config_enable(dev);
+
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index fdbde1db5ec59..d724f676608ba 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -24,46 +24,17 @@ MODULE_PARM_DESC(force_legacy,
+ "Force legacy mode for transitional virtio 1 devices");
+ #endif
+
+-/* disable irq handlers */
+-void vp_disable_cbs(struct virtio_device *vdev)
++/* wait for pending irq handlers */
++void vp_synchronize_vectors(struct virtio_device *vdev)
+ {
+ struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+ int i;
+
+- if (vp_dev->intx_enabled) {
+- /*
+- * The below synchronize() guarantees that any
+- * interrupt for this line arriving after
+- * synchronize_irq() has completed is guaranteed to see
+- * intx_soft_enabled == false.
+- */
+- WRITE_ONCE(vp_dev->intx_soft_enabled, false);
++ if (vp_dev->intx_enabled)
+ synchronize_irq(vp_dev->pci_dev->irq);
+- }
+-
+- for (i = 0; i < vp_dev->msix_vectors; ++i)
+- disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
+-}
+-
+-/* enable irq handlers */
+-void vp_enable_cbs(struct virtio_device *vdev)
+-{
+- struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+- int i;
+-
+- if (vp_dev->intx_enabled) {
+- disable_irq(vp_dev->pci_dev->irq);
+- /*
+- * The above disable_irq() provides TSO ordering and
+- * as such promotes the below store to store-release.
+- */
+- WRITE_ONCE(vp_dev->intx_soft_enabled, true);
+- enable_irq(vp_dev->pci_dev->irq);
+- return;
+- }
+
+ for (i = 0; i < vp_dev->msix_vectors; ++i)
+- enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
++ synchronize_irq(pci_irq_vector(vp_dev->pci_dev, i));
+ }
+
+ /* the notify function used when creating a virt queue */
+@@ -113,9 +84,6 @@ static irqreturn_t vp_interrupt(int irq, void *opaque)
+ struct virtio_pci_device *vp_dev = opaque;
+ u8 isr;
+
+- if (!READ_ONCE(vp_dev->intx_soft_enabled))
+- return IRQ_NONE;
+-
+ /* reading the ISR has the effect of also clearing it so it's very
+ * important to save off the value. */
+ isr = ioread8(vp_dev->isr);
+@@ -173,8 +141,7 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
+ snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
+ "%s-config", name);
+ err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
+- vp_config_changed, IRQF_NO_AUTOEN,
+- vp_dev->msix_names[v],
++ vp_config_changed, 0, vp_dev->msix_names[v],
+ vp_dev);
+ if (err)
+ goto error;
+@@ -193,8 +160,7 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
+ snprintf(vp_dev->msix_names[v], sizeof *vp_dev->msix_names,
+ "%s-virtqueues", name);
+ err = request_irq(pci_irq_vector(vp_dev->pci_dev, v),
+- vp_vring_interrupt, IRQF_NO_AUTOEN,
+- vp_dev->msix_names[v],
++ vp_vring_interrupt, 0, vp_dev->msix_names[v],
+ vp_dev);
+ if (err)
+ goto error;
+@@ -371,7 +337,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs,
+ "%s-%s",
+ dev_name(&vp_dev->vdev.dev), names[i]);
+ err = request_irq(pci_irq_vector(vp_dev->pci_dev, msix_vec),
+- vring_interrupt, IRQF_NO_AUTOEN,
++ vring_interrupt, 0,
+ vp_dev->msix_names[msix_vec],
+ vqs[i]);
+ if (err)
+diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
+index 23f6c5c678d5e..eb17a29fc7ef1 100644
+--- a/drivers/virtio/virtio_pci_common.h
++++ b/drivers/virtio/virtio_pci_common.h
+@@ -63,7 +63,6 @@ struct virtio_pci_device {
+ /* MSI-X support */
+ int msix_enabled;
+ int intx_enabled;
+- bool intx_soft_enabled;
+ cpumask_var_t *msix_affinity_masks;
+ /* Name strings for interrupts. This size should be enough,
+ * and I'm too lazy to allocate each name separately. */
+@@ -102,10 +101,8 @@ static struct virtio_pci_device *to_vp_device(struct virtio_device *vdev)
+ return container_of(vdev, struct virtio_pci_device, vdev);
+ }
+
+-/* disable irq handlers */
+-void vp_disable_cbs(struct virtio_device *vdev);
+-/* enable irq handlers */
+-void vp_enable_cbs(struct virtio_device *vdev);
++/* wait for pending irq handlers */
++void vp_synchronize_vectors(struct virtio_device *vdev);
+ /* the notify function used when creating a virt queue */
+ bool vp_notify(struct virtqueue *vq);
+ /* the config->del_vqs() implementation */
+diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
+index 34141b9abe278..6f4e34ce96b81 100644
+--- a/drivers/virtio/virtio_pci_legacy.c
++++ b/drivers/virtio/virtio_pci_legacy.c
+@@ -98,8 +98,8 @@ static void vp_reset(struct virtio_device *vdev)
+ /* Flush out the status write, and flush in device writes,
+ * including MSi-X interrupts, if any. */
+ vp_legacy_get_status(&vp_dev->ldev);
+- /* Disable VQ/configuration callbacks. */
+- vp_disable_cbs(vdev);
++ /* Flush pending VQ/configuration callbacks. */
++ vp_synchronize_vectors(vdev);
+ }
+
+ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
+@@ -185,7 +185,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
+ }
+
+ static const struct virtio_config_ops virtio_pci_config_ops = {
+- .enable_cbs = vp_enable_cbs,
+ .get = vp_get,
+ .set = vp_set,
+ .get_status = vp_get_status,
+diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
+index 5455bc041fb69..30654d3a0b41e 100644
+--- a/drivers/virtio/virtio_pci_modern.c
++++ b/drivers/virtio/virtio_pci_modern.c
+@@ -172,8 +172,8 @@ static void vp_reset(struct virtio_device *vdev)
+ */
+ while (vp_modern_get_status(mdev))
+ msleep(1);
+- /* Disable VQ/configuration callbacks. */
+- vp_disable_cbs(vdev);
++ /* Flush pending VQ/configuration callbacks. */
++ vp_synchronize_vectors(vdev);
+ }
+
+ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
+@@ -380,7 +380,6 @@ static bool vp_get_shm_region(struct virtio_device *vdev,
+ }
+
+ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
+- .enable_cbs = vp_enable_cbs,
+ .get = NULL,
+ .set = NULL,
+ .generation = vp_generation,
+@@ -398,7 +397,6 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
+ };
+
+ static const struct virtio_config_ops virtio_pci_config_ops = {
+- .enable_cbs = vp_enable_cbs,
+ .get = vp_get,
+ .set = vp_set,
+ .generation = vp_generation,
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 117bc2a8eb0a4..db843f8258602 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -228,6 +228,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ ret = pm_runtime_get_sync(dev);
+ if (ret) {
+ pm_runtime_put_noidle(dev);
++ pm_runtime_disable(&pdev->dev);
+ return dev_err_probe(dev, ret, "runtime pm failed\n");
+ }
+
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index d61543fbd6528..11273b70271d1 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -170,8 +170,8 @@ static int padzero(unsigned long elf_bss)
+
+ static int
+ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+- unsigned long load_addr, unsigned long interp_load_addr,
+- unsigned long e_entry)
++ unsigned long interp_load_addr,
++ unsigned long e_entry, unsigned long phdr_addr)
+ {
+ struct mm_struct *mm = current->mm;
+ unsigned long p = bprm->p;
+@@ -257,7 +257,7 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
+ NEW_AUX_ENT(AT_HWCAP, ELF_HWCAP);
+ NEW_AUX_ENT(AT_PAGESZ, ELF_EXEC_PAGESIZE);
+ NEW_AUX_ENT(AT_CLKTCK, CLOCKS_PER_SEC);
+- NEW_AUX_ENT(AT_PHDR, load_addr + exec->e_phoff);
++ NEW_AUX_ENT(AT_PHDR, phdr_addr);
+ NEW_AUX_ENT(AT_PHENT, sizeof(struct elf_phdr));
+ NEW_AUX_ENT(AT_PHNUM, exec->e_phnum);
+ NEW_AUX_ENT(AT_BASE, interp_load_addr);
+@@ -823,7 +823,7 @@ static int parse_elf_properties(struct file *f, const struct elf_phdr *phdr,
+ static int load_elf_binary(struct linux_binprm *bprm)
+ {
+ struct file *interpreter = NULL; /* to shut gcc up */
+- unsigned long load_addr = 0, load_bias = 0;
++ unsigned long load_addr, load_bias = 0, phdr_addr = 0;
+ int load_addr_set = 0;
+ unsigned long error;
+ struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL;
+@@ -1180,6 +1180,17 @@ out_free_interp:
+ reloc_func_desc = load_bias;
+ }
+ }
++
++ /*
++ * Figure out which segment in the file contains the Program
++ * Header table, and map to the associated memory address.
++ */
++ if (elf_ppnt->p_offset <= elf_ex->e_phoff &&
++ elf_ex->e_phoff < elf_ppnt->p_offset + elf_ppnt->p_filesz) {
++ phdr_addr = elf_ex->e_phoff - elf_ppnt->p_offset +
++ elf_ppnt->p_vaddr;
++ }
++
+ k = elf_ppnt->p_vaddr;
+ if ((elf_ppnt->p_flags & PF_X) && k < start_code)
+ start_code = k;
+@@ -1215,6 +1226,7 @@ out_free_interp:
+ }
+
+ e_entry = elf_ex->e_entry + load_bias;
++ phdr_addr += load_bias;
+ elf_bss += load_bias;
+ elf_brk += load_bias;
+ start_code += load_bias;
+@@ -1278,8 +1290,8 @@ out_free_interp:
+ goto out;
+ #endif /* ARCH_HAS_SETUP_ADDITIONAL_PAGES */
+
+- retval = create_elf_tables(bprm, elf_ex,
+- load_addr, interp_load_addr, e_entry);
++ retval = create_elf_tables(bprm, elf_ex, interp_load_addr,
++ e_entry, phdr_addr);
+ if (retval < 0)
+ goto out;
+
+@@ -1630,17 +1642,16 @@ static void fill_siginfo_note(struct memelfnote *note, user_siginfo_t *csigdata,
+ * long file_ofs
+ * followed by COUNT filenames in ASCII: "FILE1" NUL "FILE2" NUL...
+ */
+-static int fill_files_note(struct memelfnote *note)
++static int fill_files_note(struct memelfnote *note, struct coredump_params *cprm)
+ {
+- struct mm_struct *mm = current->mm;
+- struct vm_area_struct *vma;
+ unsigned count, size, names_ofs, remaining, n;
+ user_long_t *data;
+ user_long_t *start_end_ofs;
+ char *name_base, *name_curpos;
++ int i;
+
+ /* *Estimated* file count and total data size needed */
+- count = mm->map_count;
++ count = cprm->vma_count;
+ if (count > UINT_MAX / 64)
+ return -EINVAL;
+ size = count * 64;
+@@ -1662,11 +1673,12 @@ static int fill_files_note(struct memelfnote *note)
+ name_base = name_curpos = ((char *)data) + names_ofs;
+ remaining = size - names_ofs;
+ count = 0;
+- for (vma = mm->mmap; vma != NULL; vma = vma->vm_next) {
++ for (i = 0; i < cprm->vma_count; i++) {
++ struct core_vma_metadata *m = &cprm->vma_meta[i];
+ struct file *file;
+ const char *filename;
+
+- file = vma->vm_file;
++ file = m->file;
+ if (!file)
+ continue;
+ filename = file_path(file, name_curpos, remaining);
+@@ -1686,9 +1698,9 @@ static int fill_files_note(struct memelfnote *note)
+ memmove(name_curpos, filename, n);
+ name_curpos += n;
+
+- *start_end_ofs++ = vma->vm_start;
+- *start_end_ofs++ = vma->vm_end;
+- *start_end_ofs++ = vma->vm_pgoff;
++ *start_end_ofs++ = m->start;
++ *start_end_ofs++ = m->end;
++ *start_end_ofs++ = m->pgoff;
+ count++;
+ }
+
+@@ -1699,7 +1711,7 @@ static int fill_files_note(struct memelfnote *note)
+ * Count usually is less than mm->map_count,
+ * we need to move filenames down.
+ */
+- n = mm->map_count - count;
++ n = cprm->vma_count - count;
+ if (n != 0) {
+ unsigned shift_bytes = n * 3 * sizeof(data[0]);
+ memmove(name_base - shift_bytes, name_base,
+@@ -1811,7 +1823,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
+
+ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ struct elf_note_info *info,
+- const kernel_siginfo_t *siginfo, struct pt_regs *regs)
++ struct coredump_params *cprm)
+ {
+ struct task_struct *dump_task = current;
+ const struct user_regset_view *view = task_user_regset_view(dump_task);
+@@ -1883,7 +1895,7 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ * Now fill in each thread's information.
+ */
+ for (t = info->thread; t != NULL; t = t->next)
+- if (!fill_thread_core_info(t, view, siginfo->si_signo, &info->size))
++ if (!fill_thread_core_info(t, view, cprm->siginfo->si_signo, &info->size))
+ return 0;
+
+ /*
+@@ -1892,13 +1904,13 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ fill_psinfo(psinfo, dump_task->group_leader, dump_task->mm);
+ info->size += notesize(&info->psinfo);
+
+- fill_siginfo_note(&info->signote, &info->csigdata, siginfo);
++ fill_siginfo_note(&info->signote, &info->csigdata, cprm->siginfo);
+ info->size += notesize(&info->signote);
+
+ fill_auxv_note(&info->auxv, current->mm);
+ info->size += notesize(&info->auxv);
+
+- if (fill_files_note(&info->files) == 0)
++ if (fill_files_note(&info->files, cprm) == 0)
+ info->size += notesize(&info->files);
+
+ return 1;
+@@ -2040,7 +2052,7 @@ static int elf_note_info_init(struct elf_note_info *info)
+
+ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ struct elf_note_info *info,
+- const kernel_siginfo_t *siginfo, struct pt_regs *regs)
++ struct coredump_params *cprm)
+ {
+ struct core_thread *ct;
+ struct elf_thread_status *ets;
+@@ -2061,13 +2073,13 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ list_for_each_entry(ets, &info->thread_list, list) {
+ int sz;
+
+- sz = elf_dump_thread_status(siginfo->si_signo, ets);
++ sz = elf_dump_thread_status(cprm->siginfo->si_signo, ets);
+ info->thread_status_size += sz;
+ }
+ /* now collect the dump for the current */
+ memset(info->prstatus, 0, sizeof(*info->prstatus));
+- fill_prstatus(&info->prstatus->common, current, siginfo->si_signo);
+- elf_core_copy_regs(&info->prstatus->pr_reg, regs);
++ fill_prstatus(&info->prstatus->common, current, cprm->siginfo->si_signo);
++ elf_core_copy_regs(&info->prstatus->pr_reg, cprm->regs);
+
+ /* Set up header */
+ fill_elf_header(elf, phdrs, ELF_ARCH, ELF_CORE_EFLAGS);
+@@ -2083,18 +2095,18 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
+ fill_note(info->notes + 1, "CORE", NT_PRPSINFO,
+ sizeof(*info->psinfo), info->psinfo);
+
+- fill_siginfo_note(info->notes + 2, &info->csigdata, siginfo);
++ fill_siginfo_note(info->notes + 2, &info->csigdata, cprm->siginfo);
+ fill_auxv_note(info->notes + 3, current->mm);
+ info->numnote = 4;
+
+- if (fill_files_note(info->notes + info->numnote) == 0) {
++ if (fill_files_note(info->notes + info->numnote, cprm) == 0) {
+ info->notes_files = info->notes + info->numnote;
+ info->numnote++;
+ }
+
+ /* Try to dump the FPU. */
+- info->prstatus->pr_fpvalid = elf_core_copy_task_fpregs(current, regs,
+- info->fpu);
++ info->prstatus->pr_fpvalid =
++ elf_core_copy_task_fpregs(current, cprm->regs, info->fpu);
+ if (info->prstatus->pr_fpvalid)
+ fill_note(info->notes + info->numnote++,
+ "CORE", NT_PRFPREG, sizeof(*info->fpu), info->fpu);
+@@ -2180,8 +2192,7 @@ static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
+ static int elf_core_dump(struct coredump_params *cprm)
+ {
+ int has_dumped = 0;
+- int vma_count, segs, i;
+- size_t vma_data_size;
++ int segs, i;
+ struct elfhdr elf;
+ loff_t offset = 0, dataoff;
+ struct elf_note_info info = { };
+@@ -2189,16 +2200,12 @@ static int elf_core_dump(struct coredump_params *cprm)
+ struct elf_shdr *shdr4extnum = NULL;
+ Elf_Half e_phnum;
+ elf_addr_t e_shoff;
+- struct core_vma_metadata *vma_meta;
+-
+- if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, &vma_data_size))
+- return 0;
+
+ /*
+ * The number of segs are recored into ELF header as 16bit value.
+ * Please check DEFAULT_MAX_MAP_COUNT definition when you modify here.
+ */
+- segs = vma_count + elf_core_extra_phdrs();
++ segs = cprm->vma_count + elf_core_extra_phdrs();
+
+ /* for notes section */
+ segs++;
+@@ -2212,7 +2219,7 @@ static int elf_core_dump(struct coredump_params *cprm)
+ * Collect all the non-memory information about the process for the
+ * notes. This also sets up the file header.
+ */
+- if (!fill_note_info(&elf, e_phnum, &info, cprm->siginfo, cprm->regs))
++ if (!fill_note_info(&elf, e_phnum, &info, cprm))
+ goto end_coredump;
+
+ has_dumped = 1;
+@@ -2237,7 +2244,7 @@ static int elf_core_dump(struct coredump_params *cprm)
+
+ dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
+
+- offset += vma_data_size;
++ offset += cprm->vma_data_size;
+ offset += elf_core_extra_data_size();
+ e_shoff = offset;
+
+@@ -2257,8 +2264,8 @@ static int elf_core_dump(struct coredump_params *cprm)
+ goto end_coredump;
+
+ /* Write program headers for segments dump */
+- for (i = 0; i < vma_count; i++) {
+- struct core_vma_metadata *meta = vma_meta + i;
++ for (i = 0; i < cprm->vma_count; i++) {
++ struct core_vma_metadata *meta = cprm->vma_meta + i;
+ struct elf_phdr phdr;
+
+ phdr.p_type = PT_LOAD;
+@@ -2295,8 +2302,8 @@ static int elf_core_dump(struct coredump_params *cprm)
+ /* Align to page */
+ dump_skip_to(cprm, dataoff);
+
+- for (i = 0; i < vma_count; i++) {
+- struct core_vma_metadata *meta = vma_meta + i;
++ for (i = 0; i < cprm->vma_count; i++) {
++ struct core_vma_metadata *meta = cprm->vma_meta + i;
+
+ if (!dump_user_range(cprm, meta->start, meta->dump_size))
+ goto end_coredump;
+@@ -2313,7 +2320,6 @@ static int elf_core_dump(struct coredump_params *cprm)
+ end_coredump:
+ free_note_info(&info);
+ kfree(shdr4extnum);
+- kvfree(vma_meta);
+ kfree(phdr4note);
+ return has_dumped;
+ }
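
The binfmt_elf change computes AT_PHDR from whichever PT_LOAD segment
actually contains the program header table, rather than assuming
load_addr + e_phoff; that assumption breaks when the first segment
does not begin at file offset 0. The formula
e_phoff - p_offset + p_vaddr (+ load_bias) as a worked example with
invented numbers:

/* Worked AT_PHDR example; the numbers are invented, only the formula
 * matches the patch. */
#include <stdio.h>

int main(void)
{
        unsigned long e_phoff   = 0x40;         /* phdrs at file offset 64 */
        unsigned long p_offset  = 0x0;          /* segment's file offset   */
        unsigned long p_filesz  = 0x1000;
        unsigned long p_vaddr   = 0x400000;     /* segment's link address  */
        unsigned long load_bias = 0x1000000;    /* e.g. PIE randomization  */

        if (p_offset <= e_phoff && e_phoff < p_offset + p_filesz) {
                unsigned long phdr_addr =
                        e_phoff - p_offset + p_vaddr + load_bias;

                printf("AT_PHDR = %#lx\n", phdr_addr); /* 0x1400040 */
                return 0;
        }
        return 1;
}
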
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index c6f588dc4a9db..1a25536b01201 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -1465,7 +1465,7 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm,
+ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ {
+ int has_dumped = 0;
+- int vma_count, segs;
++ int segs;
+ int i;
+ struct elfhdr *elf = NULL;
+ loff_t offset = 0, dataoff;
+@@ -1480,8 +1480,6 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ elf_addr_t e_shoff;
+ struct core_thread *ct;
+ struct elf_thread_status *tmp;
+- struct core_vma_metadata *vma_meta = NULL;
+- size_t vma_data_size;
+
+ /* alloc memory for large data structures: too large to be on stack */
+ elf = kmalloc(sizeof(*elf), GFP_KERNEL);
+@@ -1491,9 +1489,6 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ if (!psinfo)
+ goto end_coredump;
+
+- if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, &vma_data_size))
+- goto end_coredump;
+-
+ for (ct = current->signal->core_state->dumper.next;
+ ct; ct = ct->next) {
+ tmp = elf_dump_thread_status(cprm->siginfo->si_signo,
+@@ -1513,7 +1508,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ tmp->next = thread_list;
+ thread_list = tmp;
+
+- segs = vma_count + elf_core_extra_phdrs();
++ segs = cprm->vma_count + elf_core_extra_phdrs();
+
+ /* for notes section */
+ segs++;
+@@ -1558,7 +1553,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ /* Page-align dumped data */
+ dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
+
+- offset += vma_data_size;
++ offset += cprm->vma_data_size;
+ offset += elf_core_extra_data_size();
+ e_shoff = offset;
+
+@@ -1578,8 +1573,8 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+ goto end_coredump;
+
+ /* write program headers for segments dump */
+- for (i = 0; i < vma_count; i++) {
+- struct core_vma_metadata *meta = vma_meta + i;
++ for (i = 0; i < cprm->vma_count; i++) {
++ struct core_vma_metadata *meta = cprm->vma_meta + i;
+ struct elf_phdr phdr;
+ size_t sz;
+
+@@ -1628,7 +1623,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
+
+ dump_skip_to(cprm, dataoff);
+
+- if (!elf_fdpic_dump_segments(cprm, vma_meta, vma_count))
++ if (!elf_fdpic_dump_segments(cprm, cprm->vma_meta, cprm->vma_count))
+ goto end_coredump;
+
+ if (!elf_core_write_extra_data(cprm))
+@@ -1652,7 +1647,6 @@ end_coredump:
+ thread_list = thread_list->next;
+ kfree(tmp);
+ }
+- kvfree(vma_meta);
+ kfree(phdr4note);
+ kfree(elf);
+ kfree(psinfo);
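
Both ELF core dumpers above stop taking their own VMA snapshot and instead
consume the one that do_coredump() now stores in struct coredump_params (see
the fs/coredump.c hunk further down). A reduced, standalone C sketch of that
shape; the field names follow the patch, everything else is an illustrative
stand-in:

#include <stddef.h>
#include <stdio.h>

struct core_vma_metadata {
    unsigned long start, end, flags;
    size_t dump_size;
};

struct coredump_params {
    struct core_vma_metadata *vma_meta;   /* filled once by do_coredump() */
    int vma_count;
    size_t vma_data_size;
};

/* A binfmt dumper no longer snapshots VMAs itself; it only reads what
 * the core prepared, so every format sees the same consistent view. */
static size_t segments_payload(const struct coredump_params *cprm)
{
    size_t total = 0;
    for (int i = 0; i < cprm->vma_count; i++)
        total += cprm->vma_meta[i].dump_size;
    return total;   /* equals cprm->vma_data_size by construction */
}

int main(void)
{
    struct core_vma_metadata m[2] = {
        { 0x1000, 0x2000, 0, 4096 },
        { 0x2000, 0x4000, 0, 0 },
    };
    struct coredump_params cprm = { m, 2, 4096 };

    printf("%zu\n", segments_payload(&cprm));   /* 4096 */
    return 0;
}

Centralizing the snapshot is also why the kvfree(vma_meta) calls disappear
from both end_coredump paths: the owner of the data now frees it.
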
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 8202ad6aa1317..a0aa6c7e23351 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1522,8 +1522,12 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ if (!test_bit(BTRFS_FS_OPEN, &fs_info->flags))
+ return;
+
+- if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE))
++ sb_start_write(fs_info->sb);
++
++ if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE)) {
++ sb_end_write(fs_info->sb);
+ return;
++ }
+
+ /*
+ * Long running balances can keep us blocked here for eternity, so
+@@ -1531,6 +1535,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ */
+ if (!mutex_trylock(&fs_info->reclaim_bgs_lock)) {
+ btrfs_exclop_finish(fs_info);
++ sb_end_write(fs_info->sb);
+ return;
+ }
+
+@@ -1605,6 +1610,7 @@ next:
+ spin_unlock(&fs_info->unused_bgs_lock);
+ mutex_unlock(&fs_info->reclaim_bgs_lock);
+ btrfs_exclop_finish(fs_info);
++ sb_end_write(fs_info->sb);
+ }
+
+ void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info)
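
The block-group.c hunk above wraps the whole reclaim worker in
sb_start_write()/sb_end_write() so it cannot race a filesystem freeze, which
means every early return must also drop that protection. A reduced sketch of
the pairing rule with stub lock functions (nothing here is the real API):

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins: the real sb_start_write() blocks a freeze of the filesystem
 * while held, so it must be released on every exit path. */
static int freeze_refs;
static void start_write(void) { freeze_refs++; }
static void end_write(void)   { freeze_refs--; }

static bool exclop_start(void)    { return true; }
static void exclop_finish(void)   { }
static bool reclaim_trylock(void) { return true; }
static void reclaim_unlock(void)  { }

static void reclaim_worker(void)
{
    start_write();

    if (!exclop_start()) {
        end_write();            /* early exit #1 */
        return;
    }
    if (!reclaim_trylock()) {
        exclop_finish();
        end_write();            /* early exit #2 */
        return;
    }

    /* ... reclaim block groups ... */

    reclaim_unlock();
    exclop_finish();
    end_write();                /* normal exit */
}

int main(void)
{
    reclaim_worker();
    printf("freeze protection balanced: %s\n", freeze_refs == 0 ? "yes" : "no");
    return 0;
}
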
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 71e5b2e9a1ba8..6158b870a269d 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -808,7 +808,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ u64 em_len;
+ u64 em_start;
+ struct extent_map *em;
+- blk_status_t ret = BLK_STS_RESOURCE;
++ blk_status_t ret;
+ int faili = 0;
+ u8 *sums;
+
+@@ -821,14 +821,18 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ read_lock(&em_tree->lock);
+ em = lookup_extent_mapping(em_tree, file_offset, fs_info->sectorsize);
+ read_unlock(&em_tree->lock);
+- if (!em)
+- return BLK_STS_IOERR;
++ if (!em) {
++ ret = BLK_STS_IOERR;
++ goto out;
++ }
+
+ ASSERT(em->compress_type != BTRFS_COMPRESS_NONE);
+ compressed_len = em->block_len;
+ cb = kmalloc(compressed_bio_size(fs_info, compressed_len), GFP_NOFS);
+- if (!cb)
++ if (!cb) {
++ ret = BLK_STS_RESOURCE;
+ goto out;
++ }
+
+ refcount_set(&cb->pending_sectors, compressed_len >> fs_info->sectorsize_bits);
+ cb->errors = 0;
+@@ -851,8 +855,10 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ nr_pages = DIV_ROUND_UP(compressed_len, PAGE_SIZE);
+ cb->compressed_pages = kcalloc(nr_pages, sizeof(struct page *),
+ GFP_NOFS);
+- if (!cb->compressed_pages)
++ if (!cb->compressed_pages) {
++ ret = BLK_STS_RESOURCE;
+ goto fail1;
++ }
+
+ for (pg_index = 0; pg_index < nr_pages; pg_index++) {
+ cb->compressed_pages[pg_index] = alloc_page(GFP_NOFS);
+@@ -938,7 +944,7 @@ blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
+ comp_bio = NULL;
+ }
+ }
+- return 0;
++ return BLK_STS_OK;
+
+ fail2:
+ while (faili >= 0) {
+@@ -951,6 +957,8 @@ fail1:
+ kfree(cb);
+ out:
+ free_extent_map(em);
++ bio->bi_status = ret;
++ bio_endio(bio);
+ return ret;
+ finish_cb:
+ if (comp_bio) {
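
The compression hunk above makes every failure path set bio->bi_status and
call bio_endio() before returning, so the caller never has to guess whether a
failed submission completed the bio. A toy model of that rule, with stand-in
types (the enum values mirror the kernel's but everything else is
illustrative):

#include <stdio.h>

enum blk_status { BLK_STS_OK = 0, BLK_STS_RESOURCE = 9, BLK_STS_IOERR = 10 };

struct bio {
    enum blk_status bi_status;
    void (*bi_end_io)(struct bio *);
};

static void bio_endio_stub(struct bio *bio) { bio->bi_end_io(bio); }

static enum blk_status submit_compressed(struct bio *bio, int lookup_ok)
{
    enum blk_status ret;

    if (!lookup_ok) {
        ret = BLK_STS_IOERR;
        goto out;            /* error: complete the bio ourselves */
    }
    /* ... build and chain the real compressed read ... */
    return BLK_STS_OK;       /* success: endio runs when the I/O finishes */

out:
    bio->bi_status = ret;
    bio_endio_stub(bio);     /* the bio is never left dangling */
    return ret;
}

static void done(struct bio *bio) { printf("completed, status=%d\n", bio->bi_status); }

int main(void)
{
    struct bio b = { BLK_STS_OK, done };
    submit_compressed(&b, 0);    /* prints "completed, status=10" */
    return 0;
}
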
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 48590a3807621..117afcda5affb 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -441,17 +441,31 @@ static int csum_one_extent_buffer(struct extent_buffer *eb)
+ else
+ ret = btrfs_check_leaf_full(eb);
+
+- if (ret < 0) {
+- btrfs_print_tree(eb, 0);
++ if (ret < 0)
++ goto error;
++
++ /*
++ * Also check the generation: the eb that reached here must be newer
++ * than the last committed one, or something seriously wrong happened.
++ */
++ if (unlikely(btrfs_header_generation(eb) <= fs_info->last_trans_committed)) {
++ ret = -EUCLEAN;
+ btrfs_err(fs_info,
+- "block=%llu write time tree block corruption detected",
+- eb->start);
+- WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+- return ret;
++ "block=%llu bad generation, have %llu expect > %llu",
++ eb->start, btrfs_header_generation(eb),
++ fs_info->last_trans_committed);
++ goto error;
+ }
+ write_extent_buffer(eb, result, 0, fs_info->csum_size);
+
+ return 0;
++
++error:
++ btrfs_print_tree(eb, 0);
++ btrfs_err(fs_info, "block=%llu write time tree block corruption detected",
++ eb->start);
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++ return ret;
+ }
+
+ /* Checksum all dirty extent buffers in one bio_vec */
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 4c91060d103ae..99028984340aa 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2639,7 +2639,6 @@ int btrfs_repair_one_sector(struct inode *inode,
+ const int icsum = bio_offset >> fs_info->sectorsize_bits;
+ struct bio *repair_bio;
+ struct btrfs_bio *repair_bbio;
+- blk_status_t status;
+
+ btrfs_debug(fs_info,
+ "repair read error: read error at %llu", start);
+@@ -2678,13 +2677,13 @@ int btrfs_repair_one_sector(struct inode *inode,
+ "repair read error: submitting new read to mirror %d",
+ failrec->this_mirror);
+
+- status = submit_bio_hook(inode, repair_bio, failrec->this_mirror,
+- failrec->bio_flags);
+- if (status) {
+- free_io_failure(failure_tree, tree, failrec);
+- bio_put(repair_bio);
+- }
+- return blk_status_to_errno(status);
++ /*
++ * At this point we have a bio, so any errors from submit_bio_hook()
++ * will be handled by the endio on the repair_bio, so we can't return an
++ * error here.
++ */
++ submit_bio_hook(inode, repair_bio, failrec->this_mirror, failrec->bio_flags);
++ return BLK_STS_OK;
+ }
+
+ static void end_page_read(struct page *page, bool uptodate, u64 start, u32 len)
+@@ -4780,11 +4779,12 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
+ return ret;
+ }
+ if (cache) {
+- /* Impiles write in zoned mode */
+- btrfs_put_block_group(cache);
+- /* Mark the last eb in a block group */
++ /*
++ * Implies write in zoned mode. Mark the last eb in a block group.
++ */
+ if (cache->seq_zone && eb->start + eb->len == cache->zone_capacity)
+ set_bit(EXTENT_BUFFER_ZONE_FINISH, &eb->bflags);
++ btrfs_put_block_group(cache);
+ }
+ ret = write_one_eb(eb, wbc, epd);
+ free_extent_buffer(eb);
+diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
+index 90c5c38836ab3..77c8f298f52e2 100644
+--- a/fs/btrfs/file-item.c
++++ b/fs/btrfs/file-item.c
+@@ -305,7 +305,7 @@ found:
+ read_extent_buffer(path->nodes[0], dst, (unsigned long)item,
+ ret * csum_size);
+ out:
+- if (ret == -ENOENT)
++ if (ret == -ENOENT || ret == -EFBIG)
+ ret = 0;
+ return ret;
+ }
+@@ -368,6 +368,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ {
+ struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
++ struct btrfs_bio *bbio = NULL;
+ struct btrfs_path *path;
+ const u32 sectorsize = fs_info->sectorsize;
+ const u32 csum_size = fs_info->csum_size;
+@@ -377,6 +378,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ u8 *csum;
+ const unsigned int nblocks = orig_len >> fs_info->sectorsize_bits;
+ int count = 0;
++ blk_status_t ret = BLK_STS_OK;
+
+ if ((BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM) ||
+ test_bit(BTRFS_FS_STATE_NO_CSUMS, &fs_info->fs_state))
+@@ -400,7 +402,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ return BLK_STS_RESOURCE;
+
+ if (!dst) {
+- struct btrfs_bio *bbio = btrfs_bio(bio);
++ bbio = btrfs_bio(bio);
+
+ if (nblocks * csum_size > BTRFS_BIO_INLINE_CSUM_SIZE) {
+ bbio->csum = kmalloc_array(nblocks, csum_size, GFP_NOFS);
+@@ -456,21 +458,27 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+
+ count = search_csum_tree(fs_info, path, cur_disk_bytenr,
+ search_len, csum_dst);
+- if (count <= 0) {
+- /*
+- * Either we hit a critical error or we didn't find
+- * the csum.
+- * Either way, we put zero into the csums dst, and skip
+- * to the next sector.
+- */
++ if (count < 0) {
++ ret = errno_to_blk_status(count);
++ if (bbio)
++ btrfs_bio_free_csum(bbio);
++ break;
++ }
++
++ /*
++ * We didn't find a csum for this range. We need to make sure
++ * we complain loudly about this, because we are not NODATASUM.
++ *
++ * However for the DATA_RELOC inode we could potentially be
++ * relocating data extents for a NODATASUM inode, so the inode
++ * itself won't be marked with NODATASUM, but the extent we're
++ * copying is in fact NODATASUM. If we don't find a csum we
++ * assume this is the case.
++ */
++ if (count == 0) {
+ memset(csum_dst, 0, csum_size);
+ count = 1;
+
+- /*
+- * For data reloc inode, we need to mark the range
+- * NODATASUM so that balance won't report false csum
+- * error.
+- */
+ if (BTRFS_I(inode)->root->root_key.objectid ==
+ BTRFS_DATA_RELOC_TREE_OBJECTID) {
+ u64 file_offset;
+@@ -491,7 +499,7 @@ blk_status_t btrfs_lookup_bio_sums(struct inode *inode, struct bio *bio, u8 *dst
+ }
+
+ btrfs_free_path(path);
+- return BLK_STS_OK;
++ return ret;
+ }
+
+ int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end,
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 5bbea5ec31fc5..0f4408f9daddc 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2538,10 +2538,15 @@ blk_status_t btrfs_submit_data_bio(struct inode *inode, struct bio *bio,
+ goto out;
+
+ if (bio_flags & EXTENT_BIO_COMPRESSED) {
++ /*
++ * btrfs_submit_compressed_read will handle completing
++ * the bio if there were any errors, so just return
++ * here.
++ */
+ ret = btrfs_submit_compressed_read(inode, bio,
+ mirror_num,
+ bio_flags);
+- goto out;
++ goto out_no_endio;
+ } else {
+ /*
+ * Lookup bio sums does extra checks around whether we
+@@ -2575,6 +2580,7 @@ out:
+ bio->bi_status = ret;
+ bio_endio(bio);
+ }
++out_no_endio:
+ return ret;
+ }
+
+diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
+index a3930da4eb3fb..e437238cc603e 100644
+--- a/fs/btrfs/reflink.c
++++ b/fs/btrfs/reflink.c
+@@ -505,8 +505,11 @@ process_slot:
+ */
+ ASSERT(key.offset == 0);
+ ASSERT(datal <= fs_info->sectorsize);
+- if (key.offset != 0 || datal > fs_info->sectorsize)
+- return -EUCLEAN;
++ if (WARN_ON(key.offset != 0) ||
++ WARN_ON(datal > fs_info->sectorsize)) {
++ ret = -EUCLEAN;
++ goto out;
++ }
+
+ ret = clone_copy_inline_extent(inode, path, &new_key,
+ drop_start, datal, size,
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 294242c194d80..62382ae1eb02a 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -1061,7 +1061,6 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+ trans_rsv->reserved;
+ if (block_rsv_size < space_info->bytes_may_use)
+ delalloc_size = space_info->bytes_may_use - block_rsv_size;
+- spin_unlock(&space_info->lock);
+
+ /*
+ * We don't want to include the global_rsv in our calculation,
+@@ -1092,6 +1091,8 @@ static void btrfs_preempt_reclaim_metadata_space(struct work_struct *work)
+ flush = FLUSH_DELAYED_REFS_NR;
+ }
+
++ spin_unlock(&space_info->lock);
++
+ /*
+ * We don't want to reclaim everything, just a portion, so scale
+ * down the to_reclaim by 1/4. If it takes us down to 0,
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index b07d382d53a86..f5b11c1a000aa 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -534,15 +534,48 @@ error:
+ return ret;
+ }
+
+-static bool device_path_matched(const char *path, struct btrfs_device *device)
++/*
++ * Check if the device in the path matches the device in the given btrfs_device.
++ *
++ * Returns:
++ * true If it is the same device.
++ * false If it is not the same device or on error.
++ */
++static bool device_matched(const struct btrfs_device *device, const char *path)
+ {
+- int found;
++ char *device_name;
++ dev_t dev_old;
++ dev_t dev_new;
++ int ret;
++
++ /*
++ * If we are looking for a device with the matching dev_t, then skip
++ * device without a name (a missing device).
++ */
++ if (!device->name)
++ return false;
++
++ device_name = kzalloc(BTRFS_PATH_NAME_MAX, GFP_KERNEL);
++ if (!device_name)
++ return false;
+
+ rcu_read_lock();
+- found = strcmp(rcu_str_deref(device->name), path);
++ scnprintf(device_name, BTRFS_PATH_NAME_MAX, "%s", rcu_str_deref(device->name));
+ rcu_read_unlock();
+
+- return found == 0;
++ ret = lookup_bdev(device_name, &dev_old);
++ kfree(device_name);
++ if (ret)
++ return false;
++
++ ret = lookup_bdev(path, &dev_new);
++ if (ret)
++ return false;
++
++ if (dev_old == dev_new)
++ return true;
++
++ return false;
+ }
+
+ /*
+@@ -575,9 +608,7 @@ static int btrfs_free_stale_devices(const char *path,
+ &fs_devices->devices, dev_list) {
+ if (skip_device && skip_device == device)
+ continue;
+- if (path && !device->name)
+- continue;
+- if (path && !device_path_matched(path, device))
++ if (path && !device_matched(device, path))
+ continue;
+ if (fs_devices->opened) {
+ /* for an already deleted device return 0 */
+@@ -8299,10 +8330,12 @@ static int relocating_repair_kthread(void *data)
+ target = cache->start;
+ btrfs_put_block_group(cache);
+
++ sb_start_write(fs_info->sb);
+ if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_BALANCE)) {
+ btrfs_info(fs_info,
+ "zoned: skip relocating block group %llu to repair: EBUSY",
+ target);
++ sb_end_write(fs_info->sb);
+ return -EBUSY;
+ }
+
+@@ -8330,6 +8363,7 @@ out:
+ btrfs_put_block_group(cache);
+ mutex_unlock(&fs_info->reclaim_bgs_lock);
+ btrfs_exclop_finish(fs_info);
++ sb_end_write(fs_info->sb);
+
+ return ret;
+ }
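
device_matched() above switches from comparing path strings to resolving both
paths to a dev_t with lookup_bdev() and comparing those, so different names
for the same block device (symlinks, duplicate device nodes) still match. A
userspace analogue using stat(), which plays the role lookup_bdev() plays in
the patch; like the patch, errors are folded into "not the same device":

#include <stdio.h>
#include <sys/stat.h>

static int same_device(const char *a, const char *b)
{
    struct stat sa, sb;

    if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
        return 0;                    /* on error: not the same device */
    return sa.st_rdev == sb.st_rdev; /* compare device numbers, not names */
}

int main(void)
{
    /* A plain strcmp() would miss aliases that name the same device;
     * comparing st_rdev does not. */
    printf("%d\n", same_device("/dev/null", "/dev/null"));
    return 0;
}
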
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 8e112b6bd3719..c76a8ef60a758 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -1235,16 +1235,18 @@ static void bh_lru_install(struct buffer_head *bh)
+ int i;
+
+ check_irqs_on();
++ bh_lru_lock();
++
+ /*
+ * the refcount of buffer_head in bh_lru prevents dropping the
+ * attached page (i.e., try_to_free_buffers) so it could cause
+ * failing page migration.
+ * Skip putting upcoming bh into bh_lru until migration is done.
+ */
+- if (lru_cache_disabled())
++ if (lru_cache_disabled()) {
++ bh_lru_unlock();
+ return;
+-
+- bh_lru_lock();
++ }
+
+ b = this_cpu_ptr(&bh_lrus);
+ for (i = 0; i < BH_LRU_SIZE; i++) {
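
The buffer.c hunk moves the lru_cache_disabled() test inside bh_lru_lock(),
closing the window in which the cache could be disabled and drained between
an unlocked check and the insert. A reduced pthread model of the
check-under-lock pattern (all names illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;
static bool lru_disabled;
static int lru_entries;

static void lru_install(void)
{
    pthread_mutex_lock(&lru_lock);
    if (lru_disabled) {              /* checked under the lock */
        pthread_mutex_unlock(&lru_lock);
        return;
    }
    lru_entries++;                   /* disable cannot race us here */
    pthread_mutex_unlock(&lru_lock);
}

static void lru_disable_and_drain(void)
{
    pthread_mutex_lock(&lru_lock);
    lru_disabled = true;
    lru_entries = 0;                 /* drain; no new entries can appear */
    pthread_mutex_unlock(&lru_lock);
}

int main(void)
{
    lru_install();
    lru_disable_and_drain();
    lru_install();                   /* correctly refused */
    printf("entries after disable: %d\n", lru_entries);   /* 0 */
    return 0;
}
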
+diff --git a/fs/cifs/cifs_swn.c b/fs/cifs/cifs_swn.c
+index cdce1609c5c26..180c234c2f46c 100644
+--- a/fs/cifs/cifs_swn.c
++++ b/fs/cifs/cifs_swn.c
+@@ -396,11 +396,11 @@ static int cifs_swn_resource_state_changed(struct cifs_swn_reg *swnreg, const ch
+ switch (state) {
+ case CIFS_SWN_RESOURCE_STATE_UNAVAILABLE:
+ cifs_dbg(FYI, "%s: resource name '%s' become unavailable\n", __func__, name);
+- cifs_mark_tcp_ses_conns_for_reconnect(swnreg->tcon->ses->server, true);
++ cifs_signal_cifsd_for_reconnect(swnreg->tcon->ses->server, true);
+ break;
+ case CIFS_SWN_RESOURCE_STATE_AVAILABLE:
+ cifs_dbg(FYI, "%s: resource name '%s' become available\n", __func__, name);
+- cifs_mark_tcp_ses_conns_for_reconnect(swnreg->tcon->ses->server, true);
++ cifs_signal_cifsd_for_reconnect(swnreg->tcon->ses->server, true);
+ break;
+ case CIFS_SWN_RESOURCE_STATE_UNKNOWN:
+ cifs_dbg(FYI, "%s: resource name '%s' changed to unknown state\n", __func__, name);
+@@ -498,7 +498,7 @@ static int cifs_swn_reconnect(struct cifs_tcon *tcon, struct sockaddr_storage *a
+ goto unlock;
+ }
+
+- cifs_mark_tcp_ses_conns_for_reconnect(tcon->ses->server, false);
++ cifs_signal_cifsd_for_reconnect(tcon->ses->server, false);
+
+ unlock:
+ mutex_unlock(&tcon->ses->server->srv_mutex);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 082c214786867..6e5246122ee2c 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -210,6 +210,9 @@ cifs_read_super(struct super_block *sb)
+ if (rc)
+ goto out_no_root;
+ /* tune readahead according to rsize if readahead size not set on mount */
++ if (cifs_sb->ctx->rsize == 0)
++ cifs_sb->ctx->rsize =
++ tcon->ses->server->ops->negotiate_rsize(tcon, cifs_sb->ctx);
+ if (cifs_sb->ctx->rasize)
+ sb->s_bdi->ra_pages = cifs_sb->ctx->rasize / PAGE_SIZE;
+ else
+@@ -254,6 +257,9 @@ static void cifs_kill_sb(struct super_block *sb)
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ struct cifs_tcon *tcon;
+ struct cached_fid *cfid;
++ struct rb_root *root = &cifs_sb->tlink_tree;
++ struct rb_node *node;
++ struct tcon_link *tlink;
+
+ /*
+ * We need to release all dentries for the cached directories
+@@ -263,16 +269,18 @@ static void cifs_kill_sb(struct super_block *sb)
+ dput(cifs_sb->root);
+ cifs_sb->root = NULL;
+ }
+- tcon = cifs_sb_master_tcon(cifs_sb);
+- if (tcon) {
++ node = rb_first(root);
++ while (node != NULL) {
++ tlink = rb_entry(node, struct tcon_link, tl_rbnode);
++ tcon = tlink_tcon(tlink);
+ cfid = &tcon->crfid;
+ mutex_lock(&cfid->fid_mutex);
+ if (cfid->dentry) {
+-
+ dput(cfid->dentry);
+ cfid->dentry = NULL;
+ }
+ mutex_unlock(&cfid->fid_mutex);
++ node = rb_next(node);
+ }
+
+ kill_anon_super(sb);
+diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
+index d3701295402d2..0df3b24a0bf4c 100644
+--- a/fs/cifs/cifsproto.h
++++ b/fs/cifs/cifsproto.h
+@@ -132,6 +132,9 @@ extern int SendReceiveBlockingLock(const unsigned int xid,
+ struct smb_hdr *out_buf,
+ int *bytes_returned);
+ void
++cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server,
++ bool all_channels);
++void
+ cifs_mark_tcp_ses_conns_for_reconnect(struct TCP_Server_Info *server,
+ bool mark_smb_session);
+ extern int cifs_reconnect(struct TCP_Server_Info *server,
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index d3020abfe404a..d6f8ccc7bfe2f 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -162,11 +162,51 @@ static void cifs_resolve_server(struct work_struct *work)
+ mutex_unlock(&server->srv_mutex);
+ }
+
++/*
++ * Update the tcpStatus for the server.
++ * This is used to signal the cifsd thread to call cifs_reconnect.
++ * ONLY the cifsd thread should call cifs_reconnect; any other
++ * thread must use this function.
++ *
++ * @server: the tcp ses for which reconnect is needed
++ * @all_channels: if this needs to be done for all channels
++ */
++void
++cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server,
++ bool all_channels)
++{
++ struct TCP_Server_Info *pserver;
++ struct cifs_ses *ses;
++ int i;
++
++ /* If server is a channel, select the primary channel */
++ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
++
++ spin_lock(&cifs_tcp_ses_lock);
++ if (!all_channels) {
++ pserver->tcpStatus = CifsNeedReconnect;
++ spin_unlock(&cifs_tcp_ses_lock);
++ return;
++ }
++
++ list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {
++ spin_lock(&ses->chan_lock);
++ for (i = 0; i < ses->chan_count; i++)
++ ses->chans[i].server->tcpStatus = CifsNeedReconnect;
++ spin_unlock(&ses->chan_lock);
++ }
++ spin_unlock(&cifs_tcp_ses_lock);
++}
++
+ /*
+ * Mark all sessions and tcons for reconnect.
++ * IMPORTANT: make sure that this gets called only from
++ * the cifsd thread. For any other thread, use
++ * cifs_signal_cifsd_for_reconnect().
+ *
++ * @server: the tcp ses for which reconnect is needed
+ * @server needs to be previously set to CifsNeedReconnect.
+- *
++ * @mark_smb_session: whether even sessions need to be marked
+ */
+ void
+ cifs_mark_tcp_ses_conns_for_reconnect(struct TCP_Server_Info *server,
+@@ -3473,6 +3513,9 @@ static int connect_dfs_target(struct mount_ctx *mnt_ctx, const char *full_path,
+ struct cifs_sb_info *cifs_sb = mnt_ctx->cifs_sb;
+ char *oldmnt = cifs_sb->ctx->mount_options;
+
++ cifs_dbg(FYI, "%s: full_path=%s ref_path=%s target=%s\n", __func__, full_path, ref_path,
++ dfs_cache_get_tgt_name(tit));
++
+ rc = dfs_cache_get_tgt_referral(ref_path, tit, &ref);
+ if (rc)
+ goto out;
+@@ -3571,13 +3614,18 @@ static int __follow_dfs_link(struct mount_ctx *mnt_ctx)
+ if (rc)
+ goto out;
+
+- /* Try all dfs link targets */
++ /* Try all dfs link targets. If an I/O fails from the currently connected DFS target with an
++ * error other than STATUS_PATH_NOT_COVERED (-EREMOTE), then retry it from other targets as
++ * specified in MS-DFSC "3.1.5.2 I/O Operation to Target Fails with an Error Other Than
++ * STATUS_PATH_NOT_COVERED."
++ */
+ for (rc = -ENOENT, tit = dfs_cache_get_tgt_iterator(&tl);
+ tit; tit = dfs_cache_get_next_tgt(&tl, tit)) {
+ rc = connect_dfs_target(mnt_ctx, full_path, mnt_ctx->leaf_fullpath + 1, tit);
+ if (!rc) {
+ rc = is_path_remote(mnt_ctx);
+- break;
++ if (!rc || rc == -EREMOTE)
++ break;
+ }
+ }
+
+@@ -3651,7 +3699,7 @@ int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb3_fs_context *ctx)
+ goto error;
+
+ rc = is_path_remote(&mnt_ctx);
+- if (rc == -EREMOTE)
++ if (rc)
+ rc = follow_dfs_link(&mnt_ctx);
+ if (rc)
+ goto error;
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 831f42458bf6d..30e040da4f096 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -1355,7 +1355,7 @@ static void mark_for_reconnect_if_needed(struct cifs_tcon *tcon, struct dfs_cach
+ }
+
+ cifs_dbg(FYI, "%s: no cached or matched targets. mark dfs share for reconnect.\n", __func__);
+- cifs_mark_tcp_ses_conns_for_reconnect(tcon->ses->server, true);
++ cifs_signal_cifsd_for_reconnect(tcon->ses->server, true);
+ }
+
+ /* Refresh dfs referral of tcon and mark it for reconnect if needed */
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index e7af802dcfa60..a2723f7cb5e9d 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3740,6 +3740,11 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
+ break;
+ }
+
++ if (cifs_sb->ctx->rsize == 0)
++ cifs_sb->ctx->rsize =
++ server->ops->negotiate_rsize(tlink_tcon(open_file->tlink),
++ cifs_sb->ctx);
++
+ rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
+ &rsize, credits);
+ if (rc)
+@@ -4474,6 +4479,11 @@ static void cifs_readahead(struct readahead_control *ractl)
+ }
+ }
+
++ if (cifs_sb->ctx->rsize == 0)
++ cifs_sb->ctx->rsize =
++ server->ops->negotiate_rsize(tlink_tcon(open_file->tlink),
++ cifs_sb->ctx);
++
+ rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
+ &rsize, credits);
+ if (rc)
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index b2fb7bd119366..c71c9a44bef4b 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -228,7 +228,7 @@ cifs_get_next_mid(struct TCP_Server_Info *server)
+ spin_unlock(&GlobalMid_Lock);
+
+ if (reconnect) {
+- cifs_mark_tcp_ses_conns_for_reconnect(server, false);
++ cifs_signal_cifsd_for_reconnect(server, false);
+ }
+
+ return mid;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index af5d0830bc8a8..5d120cd8bc78f 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -25,6 +25,7 @@
+ #include "smb2glob.h"
+ #include "cifs_ioctl.h"
+ #include "smbdirect.h"
++#include "fscache.h"
+ #include "fs_context.h"
+
+ /* Change credits for different ops and return the total number of credits */
+@@ -1642,6 +1643,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ unsigned int size[2];
+ void *data[2];
+ int create_options = is_dir ? CREATE_NOT_FILE : CREATE_NOT_DIR;
++ void (*free_req1_func)(struct smb_rqst *r);
+
+ vars = kzalloc(sizeof(*vars), GFP_ATOMIC);
+ if (vars == NULL)
+@@ -1651,27 +1653,29 @@ smb2_ioctl_query_info(const unsigned int xid,
+
+ resp_buftype[0] = resp_buftype[1] = resp_buftype[2] = CIFS_NO_BUFFER;
+
+- if (copy_from_user(&qi, arg, sizeof(struct smb_query_info)))
+- goto e_fault;
+-
++ if (copy_from_user(&qi, arg, sizeof(struct smb_query_info))) {
++ rc = -EFAULT;
++ goto free_vars;
++ }
+ if (qi.output_buffer_length > 1024) {
+- kfree(vars);
+- return -EINVAL;
++ rc = -EINVAL;
++ goto free_vars;
+ }
+
+ if (!ses || !server) {
+- kfree(vars);
+- return -EIO;
++ rc = -EIO;
++ goto free_vars;
+ }
+
+ if (smb3_encryption_required(tcon))
+ flags |= CIFS_TRANSFORM_REQ;
+
+- buffer = memdup_user(arg + sizeof(struct smb_query_info),
+- qi.output_buffer_length);
+- if (IS_ERR(buffer)) {
+- kfree(vars);
+- return PTR_ERR(buffer);
++ if (qi.output_buffer_length) {
++ buffer = memdup_user(arg + sizeof(struct smb_query_info), qi.output_buffer_length);
++ if (IS_ERR(buffer)) {
++ rc = PTR_ERR(buffer);
++ goto free_vars;
++ }
+ }
+
+ /* Open */
+@@ -1709,45 +1713,45 @@ smb2_ioctl_query_info(const unsigned int xid,
+ rc = SMB2_open_init(tcon, server,
+ &rqst[0], &oplock, &oparms, path);
+ if (rc)
+- goto iqinf_exit;
++ goto free_output_buffer;
+ smb2_set_next_command(tcon, &rqst[0]);
+
+ /* Query */
+ if (qi.flags & PASSTHRU_FSCTL) {
+ /* Can eventually relax perm check since server enforces too */
+- if (!capable(CAP_SYS_ADMIN))
++ if (!capable(CAP_SYS_ADMIN)) {
+ rc = -EPERM;
+- else {
+- rqst[1].rq_iov = &vars->io_iov[0];
+- rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
+-
+- rc = SMB2_ioctl_init(tcon, server,
+- &rqst[1],
+- COMPOUND_FID, COMPOUND_FID,
+- qi.info_type, true, buffer,
+- qi.output_buffer_length,
+- CIFSMaxBufSize -
+- MAX_SMB2_CREATE_RESPONSE_SIZE -
+- MAX_SMB2_CLOSE_RESPONSE_SIZE);
++ goto free_open_req;
+ }
++ rqst[1].rq_iov = &vars->io_iov[0];
++ rqst[1].rq_nvec = SMB2_IOCTL_IOV_SIZE;
++
++ rc = SMB2_ioctl_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
++ qi.info_type, true, buffer, qi.output_buffer_length,
++ CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
++ MAX_SMB2_CLOSE_RESPONSE_SIZE);
++ free_req1_func = SMB2_ioctl_free;
+ } else if (qi.flags == PASSTHRU_SET_INFO) {
+ /* Can eventually relax perm check since server enforces too */
+- if (!capable(CAP_SYS_ADMIN))
++ if (!capable(CAP_SYS_ADMIN)) {
+ rc = -EPERM;
+- else {
+- rqst[1].rq_iov = &vars->si_iov[0];
+- rqst[1].rq_nvec = 1;
+-
+- size[0] = 8;
+- data[0] = buffer;
+-
+- rc = SMB2_set_info_init(tcon, server,
+- &rqst[1],
+- COMPOUND_FID, COMPOUND_FID,
+- current->tgid,
+- FILE_END_OF_FILE_INFORMATION,
+- SMB2_O_INFO_FILE, 0, data, size);
++ goto free_open_req;
++ }
++ if (qi.output_buffer_length < 8) {
++ rc = -EINVAL;
++ goto free_open_req;
+ }
++ rqst[1].rq_iov = &vars->si_iov[0];
++ rqst[1].rq_nvec = 1;
++
++ /* MS-FSCC 2.4.13 FileEndOfFileInformation */
++ size[0] = 8;
++ data[0] = buffer;
++
++ rc = SMB2_set_info_init(tcon, server, &rqst[1], COMPOUND_FID, COMPOUND_FID,
++ current->tgid, FILE_END_OF_FILE_INFORMATION,
++ SMB2_O_INFO_FILE, 0, data, size);
++ free_req1_func = SMB2_set_info_free;
+ } else if (qi.flags == PASSTHRU_QUERY_INFO) {
+ rqst[1].rq_iov = &vars->qi_iov[0];
+ rqst[1].rq_nvec = 1;
+@@ -1758,6 +1762,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ qi.info_type, qi.additional_information,
+ qi.input_buffer_length,
+ qi.output_buffer_length, buffer);
++ free_req1_func = SMB2_query_info_free;
+ } else { /* unknown flags */
+ cifs_tcon_dbg(VFS, "Invalid passthru query flags: 0x%x\n",
+ qi.flags);
+@@ -1765,7 +1770,7 @@ smb2_ioctl_query_info(const unsigned int xid,
+ }
+
+ if (rc)
+- goto iqinf_exit;
++ goto free_open_req;
+ smb2_set_next_command(tcon, &rqst[1]);
+ smb2_set_related(&rqst[1]);
+
+@@ -1776,14 +1781,14 @@ smb2_ioctl_query_info(const unsigned int xid,
+ rc = SMB2_close_init(tcon, server,
+ &rqst[2], COMPOUND_FID, COMPOUND_FID, false);
+ if (rc)
+- goto iqinf_exit;
++ goto free_req_1;
+ smb2_set_related(&rqst[2]);
+
+ rc = compound_send_recv(xid, ses, server,
+ flags, 3, rqst,
+ resp_buftype, rsp_iov);
+ if (rc)
+- goto iqinf_exit;
++ goto out;
+
+ /* No need to bump num_remote_opens since handle immediately closed */
+ if (qi.flags & PASSTHRU_FSCTL) {
+@@ -1793,18 +1798,22 @@ smb2_ioctl_query_info(const unsigned int xid,
+ qi.input_buffer_length = le32_to_cpu(io_rsp->OutputCount);
+ if (qi.input_buffer_length > 0 &&
+ le32_to_cpu(io_rsp->OutputOffset) + qi.input_buffer_length
+- > rsp_iov[1].iov_len)
+- goto e_fault;
++ > rsp_iov[1].iov_len) {
++ rc = -EFAULT;
++ goto out;
++ }
+
+ if (copy_to_user(&pqi->input_buffer_length,
+ &qi.input_buffer_length,
+- sizeof(qi.input_buffer_length)))
+- goto e_fault;
++ sizeof(qi.input_buffer_length))) {
++ rc = -EFAULT;
++ goto out;
++ }
+
+ if (copy_to_user((void __user *)pqi + sizeof(struct smb_query_info),
+ (const void *)io_rsp + le32_to_cpu(io_rsp->OutputOffset),
+ qi.input_buffer_length))
+- goto e_fault;
++ rc = -EFAULT;
+ } else {
+ pqi = (struct smb_query_info __user *)arg;
+ qi_rsp = (struct smb2_query_info_rsp *)rsp_iov[1].iov_base;
+@@ -1812,28 +1821,30 @@ smb2_ioctl_query_info(const unsigned int xid,
+ qi.input_buffer_length = le32_to_cpu(qi_rsp->OutputBufferLength);
+ if (copy_to_user(&pqi->input_buffer_length,
+ &qi.input_buffer_length,
+- sizeof(qi.input_buffer_length)))
+- goto e_fault;
++ sizeof(qi.input_buffer_length))) {
++ rc = -EFAULT;
++ goto out;
++ }
+
+ if (copy_to_user(pqi + 1, qi_rsp->Buffer,
+ qi.input_buffer_length))
+- goto e_fault;
++ rc = -EFAULT;
+ }
+
+- iqinf_exit:
+- cifs_small_buf_release(rqst[0].rq_iov[0].iov_base);
+- cifs_small_buf_release(rqst[1].rq_iov[0].iov_base);
+- cifs_small_buf_release(rqst[2].rq_iov[0].iov_base);
++out:
+ free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+ free_rsp_buf(resp_buftype[2], rsp_iov[2].iov_base);
+- kfree(vars);
++ SMB2_close_free(&rqst[2]);
++free_req_1:
++ free_req1_func(&rqst[1]);
++free_open_req:
++ SMB2_open_free(&rqst[0]);
++free_output_buffer:
+ kfree(buffer);
++free_vars:
++ kfree(vars);
+ return rc;
+-
+-e_fault:
+- rc = -EFAULT;
+- goto iqinf_exit;
+ }
+
+ static ssize_t
+@@ -3887,29 +3898,38 @@ static long smb3_collapse_range(struct file *file, struct cifs_tcon *tcon,
+ {
+ int rc;
+ unsigned int xid;
++ struct inode *inode;
+ struct cifsFileInfo *cfile = file->private_data;
++ struct cifsInodeInfo *cifsi;
+ __le64 eof;
+
+ xid = get_xid();
+
+- if (off >= i_size_read(file->f_inode) ||
+- off + len >= i_size_read(file->f_inode)) {
++ inode = d_inode(cfile->dentry);
++ cifsi = CIFS_I(inode);
++
++ if (off >= i_size_read(inode) ||
++ off + len >= i_size_read(inode)) {
+ rc = -EINVAL;
+ goto out;
+ }
+
+ rc = smb2_copychunk_range(xid, cfile, cfile, off + len,
+- i_size_read(file->f_inode) - off - len, off);
++ i_size_read(inode) - off - len, off);
+ if (rc < 0)
+ goto out;
+
+- eof = cpu_to_le64(i_size_read(file->f_inode) - len);
++ eof = cpu_to_le64(i_size_read(inode) - len);
+ rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid, cfile->pid, &eof);
+ if (rc < 0)
+ goto out;
+
+ rc = 0;
++
++ cifsi->server_eof = i_size_read(inode) - len;
++ truncate_setsize(inode, cifsi->server_eof);
++ fscache_resize_cookie(cifs_inode_cookie(inode), cifsi->server_eof);
+ out:
+ free_xid(xid);
+ return rc;
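
The smb2_ioctl_query_info() rework further up in this file replaces the
single iqinf_exit label with a ladder of labels that each free exactly what
has been set up so far, plus a free_req1_func pointer recorded when the
branch-specific request is built. A reduced, compilable model of the pattern
(all names illustrative):

#include <stdio.h>
#include <stdlib.h>

struct req { const char *kind; };

static void free_ioctl_req(struct req *r)    { printf("freed %s req\n", r->kind); }
static void free_set_info_req(struct req *r) { printf("freed %s req\n", r->kind); }

static int compound_op(int set_info, int fail_at)
{
    void (*free_req1_func)(struct req *) = NULL;
    struct req req1;
    char *buffer;
    int rc = 0;

    buffer = malloc(64);
    if (!buffer)
        return -1;

    if (fail_at == 1) { rc = -1; goto free_buffer; }   /* open failed */

    /* the destructor is chosen where the request is initialized */
    req1.kind = set_info ? "set_info" : "ioctl";
    free_req1_func = set_info ? free_set_info_req : free_ioctl_req;

    if (fail_at == 2) { rc = -1; goto free_req1; }     /* send failed */

    printf("sent %s request\n", req1.kind);

free_req1:
    free_req1_func(&req1);
free_buffer:
    free(buffer);
    return rc;
}

int main(void)
{
    compound_op(1, 0);   /* success path unwinds the full ladder */
    compound_op(0, 2);   /* failure unwinds only what was built */
    return 0;
}
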
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 7e7909b1ae118..f82d6fcb5c646 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -3858,8 +3858,10 @@ void smb2_reconnect_server(struct work_struct *work)
+ tcon = kzalloc(sizeof(struct cifs_tcon), GFP_KERNEL);
+ if (!tcon) {
+ resched = true;
+- list_del_init(&ses->rlist);
+- cifs_put_smb_ses(ses);
++ list_for_each_entry_safe(ses, ses2, &tmp_ses_list, rlist) {
++ list_del_init(&ses->rlist);
++ cifs_put_smb_ses(ses);
++ }
+ goto done;
+ }
+
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index a4c3e027cca25..eeb1a699bd6f2 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -430,7 +430,7 @@ unmask:
+ * be taken as the remainder of this one. We need to kill the
+ * socket so the server throws away the partial SMB
+ */
+- cifs_mark_tcp_ses_conns_for_reconnect(server, false);
++ cifs_signal_cifsd_for_reconnect(server, false);
+ trace_smb3_partial_send_reconnect(server->CurrentMid,
+ server->conn_id, server->hostname);
+ }
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 1c060c0a2d72f..7ed7d601e5e00 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -42,6 +42,7 @@
+ #include <linux/path.h>
+ #include <linux/timekeeping.h>
+ #include <linux/sysctl.h>
++#include <linux/elf.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/mmu_context.h>
+@@ -53,6 +54,9 @@
+
+ #include <trace/events/sched.h>
+
++static bool dump_vma_snapshot(struct coredump_params *cprm);
++static void free_vma_snapshot(struct coredump_params *cprm);
++
+ static int core_uses_pid;
+ static unsigned int core_pipe_limit;
+ static char core_pattern[CORENAME_MAX_SIZE] = "core";
+@@ -531,6 +535,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ * by any locks.
+ */
+ .mm_flags = mm->flags,
++ .vma_meta = NULL,
+ };
+
+ audit_core_dumps(siginfo->si_signo);
+@@ -745,6 +750,9 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ pr_info("Core dump to |%s disabled\n", cn.corename);
+ goto close_fail;
+ }
++ if (!dump_vma_snapshot(&cprm))
++ goto close_fail;
++
+ file_start_write(cprm.file);
+ core_dumped = binfmt->core_dump(&cprm);
+ /*
+@@ -758,6 +766,7 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ dump_emit(&cprm, "", 1);
+ }
+ file_end_write(cprm.file);
++ free_vma_snapshot(&cprm);
+ }
+ if (ispipe && core_pipe_limit)
+ wait_for_dump_helpers(cprm.file);
+@@ -980,6 +989,8 @@ static bool always_dump_vma(struct vm_area_struct *vma)
+ return false;
+ }
+
++#define DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER 1
++
+ /*
+ * Decide how much of @vma's contents should be included in a core dump.
+ */
+@@ -1039,9 +1050,20 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
+ * dump the first page to aid in determining what was mapped here.
+ */
+ if (FILTER(ELF_HEADERS) &&
+- vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ) &&
+- (READ_ONCE(file_inode(vma->vm_file)->i_mode) & 0111) != 0)
+- return PAGE_SIZE;
++ vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ)) {
++ if ((READ_ONCE(file_inode(vma->vm_file)->i_mode) & 0111) != 0)
++ return PAGE_SIZE;
++
++ /*
++ * ELF libraries aren't always executable.
++ * We'll want to check whether the mapping starts with the ELF
++ * magic, but not now - we're holding the mmap lock,
++ * so copy_from_user() doesn't work here.
++ * Use a placeholder instead, and fix it up later in
++ * dump_vma_snapshot().
++ */
++ return DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER;
++ }
+
+ #undef FILTER
+
+@@ -1078,18 +1100,29 @@ static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
+ return gate_vma;
+ }
+
++static void free_vma_snapshot(struct coredump_params *cprm)
++{
++ if (cprm->vma_meta) {
++ int i;
++ for (i = 0; i < cprm->vma_count; i++) {
++ struct file *file = cprm->vma_meta[i].file;
++ if (file)
++ fput(file);
++ }
++ kvfree(cprm->vma_meta);
++ cprm->vma_meta = NULL;
++ }
++}
++
+ /*
+ * Under the mmap_lock, take a snapshot of relevant information about the task's
+ * VMAs.
+ */
+-int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+- struct core_vma_metadata **vma_meta,
+- size_t *vma_data_size_ptr)
++static bool dump_vma_snapshot(struct coredump_params *cprm)
+ {
+ struct vm_area_struct *vma, *gate_vma;
+ struct mm_struct *mm = current->mm;
+ int i;
+- size_t vma_data_size = 0;
+
+ /*
+ * Once the stack expansion code is fixed to not change VMA bounds
+@@ -1097,36 +1130,51 @@ int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+ * mmap_lock in read mode.
+ */
+ if (mmap_write_lock_killable(mm))
+- return -EINTR;
++ return false;
+
++ cprm->vma_data_size = 0;
+ gate_vma = get_gate_vma(mm);
+- *vma_count = mm->map_count + (gate_vma ? 1 : 0);
++ cprm->vma_count = mm->map_count + (gate_vma ? 1 : 0);
+
+- *vma_meta = kvmalloc_array(*vma_count, sizeof(**vma_meta), GFP_KERNEL);
+- if (!*vma_meta) {
++ cprm->vma_meta = kvmalloc_array(cprm->vma_count, sizeof(*cprm->vma_meta), GFP_KERNEL);
++ if (!cprm->vma_meta) {
+ mmap_write_unlock(mm);
+- return -ENOMEM;
++ return false;
+ }
+
+ for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
+ vma = next_vma(vma, gate_vma), i++) {
+- struct core_vma_metadata *m = (*vma_meta) + i;
++ struct core_vma_metadata *m = cprm->vma_meta + i;
+
+ m->start = vma->vm_start;
+ m->end = vma->vm_end;
+ m->flags = vma->vm_flags;
+ m->dump_size = vma_dump_size(vma, cprm->mm_flags);
++ m->pgoff = vma->vm_pgoff;
+
+- vma_data_size += m->dump_size;
++ m->file = vma->vm_file;
++ if (m->file)
++ get_file(m->file);
+ }
+
+ mmap_write_unlock(mm);
+
+- if (WARN_ON(i != *vma_count)) {
+- kvfree(*vma_meta);
+- return -EFAULT;
++ for (i = 0; i < cprm->vma_count; i++) {
++ struct core_vma_metadata *m = cprm->vma_meta + i;
++
++ if (m->dump_size == DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER) {
++ char elfmag[SELFMAG];
++
++ if (copy_from_user(elfmag, (void __user *)m->start, SELFMAG) ||
++ memcmp(elfmag, ELFMAG, SELFMAG) != 0) {
++ m->dump_size = 0;
++ } else {
++ m->dump_size = PAGE_SIZE;
++ }
++ }
++
++ cprm->vma_data_size += m->dump_size;
+ }
+
+- *vma_data_size_ptr = vma_data_size;
+- return 0;
++ return true;
+ }
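
dump_vma_snapshot() above becomes two passes: sizes are computed under the
mmap lock, leaving DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER wherever the answer
would need copy_from_user(), and the placeholders are resolved after the lock
is dropped, since faulting reads must not run under it. A reduced standalone
sketch of the two passes (illustrative stand-ins throughout; memcmp() here
stands in for the deferred user-memory read):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

enum { DUMP_MAYBE_ELFHDR = 1, PAGE_SZ = 4096 };

struct vma_meta { const unsigned char *start; size_t dump_size; };

static size_t size_under_lock(bool readable_file_first_page)
{
    /* A readable file mapping at offset 0 *might* be an ELF header, but
     * we cannot look while holding the lock: defer with a placeholder. */
    return readable_file_first_page ? DUMP_MAYBE_ELFHDR : 0;
}

static void resolve_after_unlock(struct vma_meta *m, int n)
{
    for (int i = 0; i < n; i++)
        if (m[i].dump_size == DUMP_MAYBE_ELFHDR)
            m[i].dump_size =
                memcmp(m[i].start, "\177ELF", 4) == 0 ? PAGE_SZ : 0;
}

int main(void)
{
    static const unsigned char elf[4] = "\177ELF";
    static const unsigned char other[4] = "abcd";
    struct vma_meta m[2] = { { elf, 0 }, { other, 0 } };

    for (int i = 0; i < 2; i++)
        m[i].dump_size = size_under_lock(true);   /* pass 1, "locked" */
    resolve_after_unlock(m, 2);                   /* pass 2, unlocked */

    printf("%zu %zu\n", m[0].dump_size, m[1].dump_size);   /* 4096 0 */
    return 0;
}
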
+diff --git a/fs/erofs/sysfs.c b/fs/erofs/sysfs.c
+index dac252bc92281..f3babf1e66083 100644
+--- a/fs/erofs/sysfs.c
++++ b/fs/erofs/sysfs.c
+@@ -221,9 +221,11 @@ void erofs_unregister_sysfs(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+- kobject_del(&sbi->s_kobj);
+- kobject_put(&sbi->s_kobj);
+- wait_for_completion(&sbi->s_kobj_unregister);
++ if (sbi->s_kobj.state_in_sysfs) {
++ kobject_del(&sbi->s_kobj);
++ kobject_put(&sbi->s_kobj);
++ wait_for_completion(&sbi->s_kobj_unregister);
++ }
+ }
+
+ int __init erofs_init_sysfs(void)
+diff --git a/fs/exec.c b/fs/exec.c
+index 79f2c9483302d..40b1008fb0f79 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -495,8 +495,14 @@ static int bprm_stack_limits(struct linux_binprm *bprm)
+ * the stack. They aren't stored until much later when we can't
+ * signal to the parent that the child has run out of stack space.
+ * Instead, calculate it here so it's possible to fail gracefully.
++ *
++ * In the case of argc = 0, make sure there is space for adding an
++ * empty string (which will bump argc to 1), to ensure confused
++ * userspace programs don't start processing from argv[1], thinking
++ * argc can never be 0, to keep them from walking envp by accident.
++ * See do_execveat_common().
+ */
+- ptr_size = (bprm->argc + bprm->envc) * sizeof(void *);
++ ptr_size = (max(bprm->argc, 1) + bprm->envc) * sizeof(void *);
+ if (limit <= ptr_size)
+ return -E2BIG;
+ limit -= ptr_size;
+@@ -1897,6 +1903,9 @@ static int do_execveat_common(int fd, struct filename *filename,
+ }
+
+ retval = count(argv, MAX_ARG_STRINGS);
++ if (retval == 0)
++ pr_warn_once("process '%s' launched '%s' with NULL argv: empty string added\n",
++ current->comm, bprm->filename);
+ if (retval < 0)
+ goto out_free;
+ bprm->argc = retval;
+@@ -1923,6 +1932,19 @@ static int do_execveat_common(int fd, struct filename *filename,
+ if (retval < 0)
+ goto out_free;
+
++ /*
++ * When argv is empty, add an empty string ("") as argv[0] to
++ * ensure confused userspace programs that start processing
++ * from argv[1] won't end up walking envp. See also
++ * bprm_stack_limits().
++ */
++ if (bprm->argc == 0) {
++ retval = copy_string_kernel("", bprm);
++ if (retval < 0)
++ goto out_free;
++ bprm->argc = 1;
++ }
++
+ retval = bprm_execve(bprm, fd, filename, flags);
+ out_free:
+ free_bprm(bprm);
+@@ -1951,6 +1973,8 @@ int kernel_execve(const char *kernel_filename,
+ }
+
+ retval = count_strings_kernel(argv);
++ if (WARN_ON_ONCE(retval == 0))
++ retval = -EINVAL;
+ if (retval < 0)
+ goto out_free;
+ bprm->argc = retval;
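
The exec.c hunks above guarantee argc >= 1 by injecting "" as argv[0] when
userspace passes an empty argv, so a program that starts scanning at argv[1]
cannot wander into envp (the pattern behind CVE-2021-4034). A small userspace
model of what the fixed kernel delivers; the adjacent-stack layout that made
the pre-fix behavior exploitable is deliberately not reproduced here:

#include <stdio.h>

/* Models a program with the classic assumption that options start at
 * argv[1]; with the injected "", it stops at the NULL terminator. */
static void naive_main(int argc, char *argv[])
{
    (void)argc;
    for (char **p = &argv[1]; *p; p++)   /* assumes argv[0] exists */
        printf("option: %s\n", *p);
}

int main(void)
{
    char *fixed[] = { "", NULL };   /* what execve(..., {NULL}, ...) now yields */
    naive_main(1, fixed);           /* prints nothing; argv[1] is NULL */
    return 0;
}
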
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 94f1fbd7d3ac2..6d4f5ef747660 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -753,8 +753,12 @@ static loff_t ext2_max_size(int bits)
+ res += 1LL << (bits-2);
+ res += 1LL << (2*(bits-2));
+ res += 1LL << (3*(bits-2));
++ /* Compute how many metadata blocks are needed */
++ meta_blocks = 1;
++ meta_blocks += 1 + ppb;
++ meta_blocks += 1 + ppb + ppb * ppb;
+ /* Does block tree limit file size? */
+- if (res < upper_limit)
++ if (res + meta_blocks <= upper_limit)
+ goto check_lfs;
+
+ res = upper_limit;
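
The ext2 hunk above counts the indirection metadata (one single-indirect
block, 1+ppb for double, 1+ppb+ppb^2 for triple, where ppb is pointers per
block) and includes it when testing the block-tree limit. A worked example of
that arithmetic for common block sizes; the 12 direct blocks are the usual
ext2 layout assumption, and bits is the block-size log2 as in the patch:

#include <stdio.h>

int main(void)
{
    for (int bits = 10; bits <= 12; bits++) {            /* 1K..4K blocks */
        unsigned long long ppb = 1ULL << (bits - 2);     /* 4-byte ptrs */
        unsigned long long data =
            12 + ppb + ppb * ppb + ppb * ppb * ppb;      /* mappable data */
        unsigned long long meta =
            1 + (1 + ppb) + (1 + ppb + ppb * ppb);       /* indirect blks */

        printf("%5d-byte blocks: data=%llu meta=%llu blocks\n",
               1 << bits, data, meta);
    }
    return 0;
}

The fix compares data + meta against the i_blocks limit; comparing data alone
overstated the maximum file size by exactly the meta count.
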
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index e429418036050..9c076262770d9 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1783,19 +1783,20 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ void *inline_pos;
+ unsigned int offset;
+ struct ext4_dir_entry_2 *de;
+- bool ret = true;
++ bool ret = false;
+
+ err = ext4_get_inode_loc(dir, &iloc);
+ if (err) {
+ EXT4_ERROR_INODE_ERR(dir, -err,
+ "error %d getting inode %lu block",
+ err, dir->i_ino);
+- return true;
++ return false;
+ }
+
+ down_read(&EXT4_I(dir)->xattr_sem);
+ if (!ext4_has_inline_data(dir)) {
+ *has_inline_data = 0;
++ ret = true;
+ goto out;
+ }
+
+@@ -1804,7 +1805,6 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ ext4_warning(dir->i_sb,
+ "bad inline directory (dir #%lu) - no `..'",
+ dir->i_ino);
+- ret = true;
+ goto out;
+ }
+
+@@ -1823,16 +1823,15 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+ dir->i_ino, le32_to_cpu(de->inode),
+ le16_to_cpu(de->rec_len), de->name_len,
+ inline_size);
+- ret = true;
+ goto out;
+ }
+ if (le32_to_cpu(de->inode)) {
+- ret = false;
+ goto out;
+ }
+ offset += ext4_rec_len_from_disk(de->rec_len, inline_size);
+ }
+
++ ret = true;
+ out:
+ up_read(&EXT4_I(dir)->xattr_sem);
+ brelse(iloc.bh);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 01c9e4f743ba9..531a94f48637c 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1993,6 +1993,15 @@ static int ext4_writepage(struct page *page,
+ else
+ len = PAGE_SIZE;
+
++ /* Should never happen but for bugs in other kernel subsystems */
++ if (!page_has_buffers(page)) {
++ ext4_warning_inode(inode,
++ "page %lu does not have buffers attached", page->index);
++ ClearPageDirty(page);
++ unlock_page(page);
++ return 0;
++ }
++
+ page_bufs = page_buffers(page);
+ /*
+ * We cannot do block allocation or other extent handling in this
+@@ -2594,6 +2603,22 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
+ wait_on_page_writeback(page);
+ BUG_ON(PageWriteback(page));
+
++ /*
++ * Should never happen but for buggy code in
++ * other subsystems that call
++ * set_page_dirty() without properly warning
++ * the file system first. See [1] for more
++ * information.
++ *
++ * [1] https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz
++ */
++ if (!page_has_buffers(page)) {
++ ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", page->index);
++ ClearPageDirty(page);
++ unlock_page(page);
++ continue;
++ }
++
+ if (mpd->map.m_len == 0)
+ mpd->first_page = page->index;
+ mpd->next_page = page->index + 1;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 67ac95c4cd9b8..1f37eb0176ccc 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1000,7 +1000,7 @@ static inline int should_optimize_scan(struct ext4_allocation_context *ac)
+ return 0;
+ if (ac->ac_criteria >= 2)
+ return 0;
+- if (ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS))
++ if (!ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS))
+ return 0;
+ return 1;
+ }
+@@ -3899,69 +3899,95 @@ void ext4_mb_mark_bb(struct super_block *sb, ext4_fsblk_t block,
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ ext4_group_t group;
+ ext4_grpblk_t blkoff;
+- int i, clen, err;
++ int i, err;
+ int already;
++ unsigned int clen, clen_changed, thisgrp_len;
+
+- clen = EXT4_B2C(sbi, len);
++ while (len > 0) {
++ ext4_get_group_no_and_offset(sb, block, &group, &blkoff);
+
+- ext4_get_group_no_and_offset(sb, block, &group, &blkoff);
+- bitmap_bh = ext4_read_block_bitmap(sb, group);
+- if (IS_ERR(bitmap_bh)) {
+- err = PTR_ERR(bitmap_bh);
+- bitmap_bh = NULL;
+- goto out_err;
+- }
++ /*
++ * Check to see if we are freeing blocks across a group
++ * boundary.
++ * In case of flex_bg, this can happen that (block, len) may
++ * span across more than one group. In that case we need to
++ * get the corresponding group metadata to work with.
++ * For this we have goto again loop.
++ */
++ thisgrp_len = min_t(unsigned int, (unsigned int)len,
++ EXT4_BLOCKS_PER_GROUP(sb) - EXT4_C2B(sbi, blkoff));
++ clen = EXT4_NUM_B2C(sbi, thisgrp_len);
+
+- err = -EIO;
+- gdp = ext4_get_group_desc(sb, group, &gdp_bh);
+- if (!gdp)
+- goto out_err;
++ bitmap_bh = ext4_read_block_bitmap(sb, group);
++ if (IS_ERR(bitmap_bh)) {
++ err = PTR_ERR(bitmap_bh);
++ bitmap_bh = NULL;
++ break;
++ }
+
+- ext4_lock_group(sb, group);
+- already = 0;
+- for (i = 0; i < clen; i++)
+- if (!mb_test_bit(blkoff + i, bitmap_bh->b_data) == !state)
+- already++;
++ err = -EIO;
++ gdp = ext4_get_group_desc(sb, group, &gdp_bh);
++ if (!gdp)
++ break;
+
+- if (state)
+- ext4_set_bits(bitmap_bh->b_data, blkoff, clen);
+- else
+- mb_test_and_clear_bits(bitmap_bh->b_data, blkoff, clen);
+- if (ext4_has_group_desc_csum(sb) &&
+- (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
+- gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
+- ext4_free_group_clusters_set(sb, gdp,
+- ext4_free_clusters_after_init(sb,
+- group, gdp));
+- }
+- if (state)
+- clen = ext4_free_group_clusters(sb, gdp) - clen + already;
+- else
+- clen = ext4_free_group_clusters(sb, gdp) + clen - already;
++ ext4_lock_group(sb, group);
++ already = 0;
++ for (i = 0; i < clen; i++)
++ if (!mb_test_bit(blkoff + i, bitmap_bh->b_data) ==
++ !state)
++ already++;
++
++ clen_changed = clen - already;
++ if (state)
++ ext4_set_bits(bitmap_bh->b_data, blkoff, clen);
++ else
++ mb_test_and_clear_bits(bitmap_bh->b_data, blkoff, clen);
++ if (ext4_has_group_desc_csum(sb) &&
++ (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {
++ gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);
++ ext4_free_group_clusters_set(sb, gdp,
++ ext4_free_clusters_after_init(sb, group, gdp));
++ }
++ if (state)
++ clen = ext4_free_group_clusters(sb, gdp) - clen_changed;
++ else
++ clen = ext4_free_group_clusters(sb, gdp) + clen_changed;
+
+- ext4_free_group_clusters_set(sb, gdp, clen);
+- ext4_block_bitmap_csum_set(sb, group, gdp, bitmap_bh);
+- ext4_group_desc_csum_set(sb, group, gdp);
++ ext4_free_group_clusters_set(sb, gdp, clen);
++ ext4_block_bitmap_csum_set(sb, group, gdp, bitmap_bh);
++ ext4_group_desc_csum_set(sb, group, gdp);
+
+- ext4_unlock_group(sb, group);
++ ext4_unlock_group(sb, group);
+
+- if (sbi->s_log_groups_per_flex) {
+- ext4_group_t flex_group = ext4_flex_group(sbi, group);
++ if (sbi->s_log_groups_per_flex) {
++ ext4_group_t flex_group = ext4_flex_group(sbi, group);
++ struct flex_groups *fg = sbi_array_rcu_deref(sbi,
++ s_flex_groups, flex_group);
+
+- atomic64_sub(len,
+- &sbi_array_rcu_deref(sbi, s_flex_groups,
+- flex_group)->free_clusters);
++ if (state)
++ atomic64_sub(clen_changed, &fg->free_clusters);
++ else
++ atomic64_add(clen_changed, &fg->free_clusters);
++
++ }
++
++ err = ext4_handle_dirty_metadata(NULL, NULL, bitmap_bh);
++ if (err)
++ break;
++ sync_dirty_buffer(bitmap_bh);
++ err = ext4_handle_dirty_metadata(NULL, NULL, gdp_bh);
++ sync_dirty_buffer(gdp_bh);
++ if (err)
++ break;
++
++ block += thisgrp_len;
++ len -= thisgrp_len;
++ brelse(bitmap_bh);
++ BUG_ON(len < 0);
+ }
+
+- err = ext4_handle_dirty_metadata(NULL, NULL, bitmap_bh);
+ if (err)
+- goto out_err;
+- sync_dirty_buffer(bitmap_bh);
+- err = ext4_handle_dirty_metadata(NULL, NULL, gdp_bh);
+- sync_dirty_buffer(gdp_bh);
+-
+-out_err:
+- brelse(bitmap_bh);
++ brelse(bitmap_bh);
+ }
+
+ /*
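
The reworked ext4_mb_mark_bb() above consumes a (block, len) range one block
group at a time, clamping each chunk with min_t() at the group boundary and
re-deriving (group, offset) per iteration. A reduced model of just the
splitting loop (the group size is illustrative):

#include <stdio.h>

enum { BLOCKS_PER_GROUP = 32768 };

static unsigned min_u(unsigned a, unsigned b) { return a < b ? a : b; }

static void mark_range(unsigned long long block, unsigned len)
{
    while (len > 0) {
        unsigned long long group = block / BLOCKS_PER_GROUP;
        unsigned offset = (unsigned)(block % BLOCKS_PER_GROUP);
        unsigned chunk = min_u(len, BLOCKS_PER_GROUP - offset);

        printf("group %llu: mark %u blocks at offset %u\n",
               group, chunk, offset);

        block += chunk;   /* advance into the next group, if any */
        len -= chunk;
    }
}

int main(void)
{
    mark_range(32760, 20);   /* 8 blocks in group 0, then 12 in group 1 */
    return 0;
}
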
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 8cf0a924a49bf..39e223f7bf64d 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2997,14 +2997,14 @@ bool ext4_empty_dir(struct inode *inode)
+ if (inode->i_size < ext4_dir_rec_len(1, NULL) +
+ ext4_dir_rec_len(2, NULL)) {
+ EXT4_ERROR_INODE(inode, "invalid size");
+- return true;
++ return false;
+ }
+ /* The first directory block must not be a hole,
+ * so treat it as DIRENT_HTREE
+ */
+ bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
+ if (IS_ERR(bh))
+- return true;
++ return false;
+
+ de = (struct ext4_dir_entry_2 *) bh->b_data;
+ if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, bh->b_size,
+@@ -3012,7 +3012,7 @@ bool ext4_empty_dir(struct inode *inode)
+ le32_to_cpu(de->inode) != inode->i_ino || strcmp(".", de->name)) {
+ ext4_warning_inode(inode, "directory missing '.'");
+ brelse(bh);
+- return true;
++ return false;
+ }
+ offset = ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize);
+ de = ext4_next_entry(de, sb->s_blocksize);
+@@ -3021,7 +3021,7 @@ bool ext4_empty_dir(struct inode *inode)
+ le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) {
+ ext4_warning_inode(inode, "directory missing '..'");
+ brelse(bh);
+- return true;
++ return false;
+ }
+ offset += ext4_rec_len_from_disk(de->rec_len, sb->s_blocksize);
+ while (offset < inode->i_size) {
+@@ -3035,7 +3035,7 @@ bool ext4_empty_dir(struct inode *inode)
+ continue;
+ }
+ if (IS_ERR(bh))
+- return true;
++ return false;
+ }
+ de = (struct ext4_dir_entry_2 *) (bh->b_data +
+ (offset & (sb->s_blocksize - 1)));
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index c5021ca0a28ad..bed29f96ccc7e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2021,12 +2021,12 @@ static int ext4_set_test_dummy_encryption(struct super_block *sb, char *arg)
+ #define EXT4_SPEC_s_commit_interval (1 << 16)
+ #define EXT4_SPEC_s_fc_debug_max_replay (1 << 17)
+ #define EXT4_SPEC_s_sb_block (1 << 18)
++#define EXT4_SPEC_mb_optimize_scan (1 << 19)
+
+ struct ext4_fs_context {
+ char *s_qf_names[EXT4_MAXQUOTAS];
+ char *test_dummy_enc_arg;
+ int s_jquota_fmt; /* Format of quota to use */
+- int mb_optimize_scan;
+ #ifdef CONFIG_EXT4_DEBUG
+ int s_fc_debug_max_replay;
+ #endif
+@@ -2045,8 +2045,8 @@ struct ext4_fs_context {
+ unsigned int mask_s_mount_opt;
+ unsigned int vals_s_mount_opt2;
+ unsigned int mask_s_mount_opt2;
+- unsigned int vals_s_mount_flags;
+- unsigned int mask_s_mount_flags;
++ unsigned long vals_s_mount_flags;
++ unsigned long mask_s_mount_flags;
+ unsigned int opt_flags; /* MOPT flags */
+ unsigned int spec;
+ u32 s_max_batch_time;
+@@ -2149,23 +2149,36 @@ static inline void ctx_set_##name(struct ext4_fs_context *ctx, \
+ { \
+ ctx->mask_s_##name |= flag; \
+ ctx->vals_s_##name |= flag; \
+-} \
++}
++
++#define EXT4_CLEAR_CTX(name) \
+ static inline void ctx_clear_##name(struct ext4_fs_context *ctx, \
+ unsigned long flag) \
+ { \
+ ctx->mask_s_##name |= flag; \
+ ctx->vals_s_##name &= ~flag; \
+-} \
++}
++
++#define EXT4_TEST_CTX(name) \
+ static inline unsigned long \
+ ctx_test_##name(struct ext4_fs_context *ctx, unsigned long flag) \
+ { \
+ return (ctx->vals_s_##name & flag); \
+-} \
++}
+
+-EXT4_SET_CTX(flags);
++EXT4_SET_CTX(flags); /* set only */
+ EXT4_SET_CTX(mount_opt);
++EXT4_CLEAR_CTX(mount_opt);
++EXT4_TEST_CTX(mount_opt);
+ EXT4_SET_CTX(mount_opt2);
+-EXT4_SET_CTX(mount_flags);
++EXT4_CLEAR_CTX(mount_opt2);
++EXT4_TEST_CTX(mount_opt2);
++
++static inline void ctx_set_mount_flag(struct ext4_fs_context *ctx, int bit)
++{
++ set_bit(bit, &ctx->mask_s_mount_flags);
++ set_bit(bit, &ctx->vals_s_mount_flags);
++}
+
+ static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ {
+@@ -2235,7 +2248,7 @@ static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ param->key);
+ return 0;
+ case Opt_abort:
+- ctx_set_mount_flags(ctx, EXT4_MF_FS_ABORTED);
++ ctx_set_mount_flag(ctx, EXT4_MF_FS_ABORTED);
+ return 0;
+ case Opt_i_version:
+ ext4_msg(NULL, KERN_WARNING, deprecated_msg, param->key, "5.20");
+@@ -2451,12 +2464,17 @@ static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ ctx_clear_mount_opt(ctx, m->mount_opt);
+ return 0;
+ case Opt_mb_optimize_scan:
+- if (result.int_32 != 0 && result.int_32 != 1) {
++ if (result.int_32 == 1) {
++ ctx_set_mount_opt2(ctx, EXT4_MOUNT2_MB_OPTIMIZE_SCAN);
++ ctx->spec |= EXT4_SPEC_mb_optimize_scan;
++ } else if (result.int_32 == 0) {
++ ctx_clear_mount_opt2(ctx, EXT4_MOUNT2_MB_OPTIMIZE_SCAN);
++ ctx->spec |= EXT4_SPEC_mb_optimize_scan;
++ } else {
+ ext4_msg(NULL, KERN_WARNING,
+ "mb_optimize_scan should be set to 0 or 1.");
+ return -EINVAL;
+ }
+- ctx->mb_optimize_scan = result.int_32;
+ return 0;
+ }
+
+@@ -4369,7 +4387,6 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+
+ /* Set defaults for the variables that will be set during parsing */
+ ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+- ctx->mb_optimize_scan = DEFAULT_MB_OPTIMIZE_SCAN;
+
+ sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
+ sbi->s_sectors_written_start =
+@@ -5320,12 +5337,12 @@ no_journal:
+ * turned off by passing "mb_optimize_scan=0". This can also be
+ * turned on forcefully by passing "mb_optimize_scan=1".
+ */
+- if (ctx->mb_optimize_scan == 1)
+- set_opt2(sb, MB_OPTIMIZE_SCAN);
+- else if (ctx->mb_optimize_scan == 0)
+- clear_opt2(sb, MB_OPTIMIZE_SCAN);
+- else if (sbi->s_groups_count >= MB_DEFAULT_LINEAR_SCAN_THRESHOLD)
+- set_opt2(sb, MB_OPTIMIZE_SCAN);
++ if (!(ctx->spec & EXT4_SPEC_mb_optimize_scan)) {
++ if (sbi->s_groups_count >= MB_DEFAULT_LINEAR_SCAN_THRESHOLD)
++ set_opt2(sb, MB_OPTIMIZE_SCAN);
++ else
++ clear_opt2(sb, MB_OPTIMIZE_SCAN);
++ }
+
+ err = ext4_mb_init(sb);
+ if (err) {
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 982f0170639fc..bf3ba85cf325b 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -864,6 +864,7 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ struct page *cp_page_1 = NULL, *cp_page_2 = NULL;
+ struct f2fs_checkpoint *cp_block = NULL;
+ unsigned long long cur_version = 0, pre_version = 0;
++ unsigned int cp_blocks;
+ int err;
+
+ err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+@@ -871,15 +872,16 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+ if (err)
+ return NULL;
+
+- if (le32_to_cpu(cp_block->cp_pack_total_block_count) >
+- sbi->blocks_per_seg) {
++ cp_blocks = le32_to_cpu(cp_block->cp_pack_total_block_count);
++
++ if (cp_blocks > sbi->blocks_per_seg || cp_blocks <= F2FS_CP_PACKS) {
+ f2fs_warn(sbi, "invalid cp_pack_total_block_count:%u",
+ le32_to_cpu(cp_block->cp_pack_total_block_count));
+ goto invalid_cp;
+ }
+ pre_version = *version;
+
+- cp_addr += le32_to_cpu(cp_block->cp_pack_total_block_count) - 1;
++ cp_addr += cp_blocks - 1;
+ err = get_checkpoint_version(sbi, cp_addr, &cp_block,
+ &cp_page_2, version);
+ if (err)
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index d0c3aeba59454..3b162506b269a 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -314,10 +314,9 @@ static int lz4_decompress_pages(struct decompress_io_ctx *dic)
+ }
+
+ if (ret != PAGE_SIZE << dic->log_cluster_size) {
+- printk_ratelimited("%sF2FS-fs (%s): lz4 invalid rlen:%zu, "
++ printk_ratelimited("%sF2FS-fs (%s): lz4 invalid ret:%d, "
+ "expected:%lu\n", KERN_ERR,
+- F2FS_I_SB(dic->inode)->sb->s_id,
+- dic->rlen,
++ F2FS_I_SB(dic->inode)->sb->s_id, ret,
+ PAGE_SIZE << dic->log_cluster_size);
+ return -EIO;
+ }
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 8c417864c66ae..bdfa8bed10b2c 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -3163,8 +3163,12 @@ static int __f2fs_write_data_pages(struct address_space *mapping,
+ /* to avoid spliting IOs due to mixed WB_SYNC_ALL and WB_SYNC_NONE */
+ if (wbc->sync_mode == WB_SYNC_ALL)
+ atomic_inc(&sbi->wb_sync_req[DATA]);
+- else if (atomic_read(&sbi->wb_sync_req[DATA]))
++ else if (atomic_read(&sbi->wb_sync_req[DATA])) {
++ /* to avoid potential deadlock */
++ if (current->plug)
++ blk_finish_plug(current->plug);
+ goto skip_write;
++ }
+
+ if (__should_serialize_io(inode, wbc)) {
+ mutex_lock(&sbi->writepages);
+@@ -3353,7 +3357,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
+
+ *fsdata = NULL;
+
+- if (len == PAGE_SIZE)
++ if (len == PAGE_SIZE && !(f2fs_is_atomic_file(inode)))
+ goto repeat;
+
+ ret = f2fs_prepare_compress_overwrite(inode, pagep,
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index 8c50518475a99..b449c7a372a4b 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -21,7 +21,7 @@
+ #include "gc.h"
+
+ static LIST_HEAD(f2fs_stat_list);
+-static DEFINE_MUTEX(f2fs_stat_mutex);
++static DEFINE_RAW_SPINLOCK(f2fs_stat_lock);
+ #ifdef CONFIG_DEBUG_FS
+ static struct dentry *f2fs_debugfs_root;
+ #endif
+@@ -338,14 +338,16 @@ static char *s_flag[] = {
+ [SBI_QUOTA_SKIP_FLUSH] = " quota_skip_flush",
+ [SBI_QUOTA_NEED_REPAIR] = " quota_need_repair",
+ [SBI_IS_RESIZEFS] = " resizefs",
++ [SBI_IS_FREEZING] = " freezefs",
+ };
+
+ static int stat_show(struct seq_file *s, void *v)
+ {
+ struct f2fs_stat_info *si;
+ int i = 0, j = 0;
++ unsigned long flags;
+
+- mutex_lock(&f2fs_stat_mutex);
++ raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
+ list_for_each_entry(si, &f2fs_stat_list, stat_list) {
+ update_general_status(si->sbi);
+
+@@ -573,7 +575,7 @@ static int stat_show(struct seq_file *s, void *v)
+ seq_printf(s, " - paged : %llu KB\n",
+ si->page_mem >> 10);
+ }
+- mutex_unlock(&f2fs_stat_mutex);
++ raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
+ return 0;
+ }
+
+@@ -584,6 +586,7 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ {
+ struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
+ struct f2fs_stat_info *si;
++ unsigned long flags;
+ int i;
+
+ si = f2fs_kzalloc(sbi, sizeof(struct f2fs_stat_info), GFP_KERNEL);
+@@ -619,9 +622,9 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ atomic_set(&sbi->max_aw_cnt, 0);
+ atomic_set(&sbi->max_vw_cnt, 0);
+
+- mutex_lock(&f2fs_stat_mutex);
++ raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
+ list_add_tail(&si->stat_list, &f2fs_stat_list);
+- mutex_unlock(&f2fs_stat_mutex);
++ raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
+
+ return 0;
+ }
+@@ -629,10 +632,11 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ void f2fs_destroy_stats(struct f2fs_sb_info *sbi)
+ {
+ struct f2fs_stat_info *si = F2FS_STAT(sbi);
++ unsigned long flags;
+
+- mutex_lock(&f2fs_stat_mutex);
++ raw_spin_lock_irqsave(&f2fs_stat_lock, flags);
+ list_del(&si->stat_list);
+- mutex_unlock(&f2fs_stat_mutex);
++ raw_spin_unlock_irqrestore(&f2fs_stat_lock, flags);
+
+ kfree(si);
+ }
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 68b44015514f5..2514597f5b26b 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1267,6 +1267,7 @@ enum {
+ SBI_QUOTA_SKIP_FLUSH, /* skip flushing quota in current CP */
+ SBI_QUOTA_NEED_REPAIR, /* quota file may be corrupted */
+ SBI_IS_RESIZEFS, /* resizefs is in process */
++ SBI_IS_FREEZING, /* freezefs is in process */
+ };
+
+ enum {
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 3c98ef6af97d1..b110c3a7db6ae 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2008,7 +2008,10 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+
+ inode_lock(inode);
+
+- f2fs_disable_compressed_file(inode);
++ if (!f2fs_disable_compressed_file(inode)) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ if (f2fs_is_atomic_file(inode)) {
+ if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST))
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index ee308a8de4327..e020804f7b075 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1038,8 +1038,10 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+ }
+
+- if (f2fs_check_nid_range(sbi, dni->ino))
++ if (f2fs_check_nid_range(sbi, dni->ino)) {
++ f2fs_put_page(node_page, 1);
+ return false;
++ }
+
+ *nofs = ofs_of_node(node_page);
+ source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 0ec8e32a00b47..71f232dcf3c20 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -778,7 +778,8 @@ void f2fs_evict_inode(struct inode *inode)
+ f2fs_remove_ino_entry(sbi, inode->i_ino, UPDATE_INO);
+ f2fs_remove_ino_entry(sbi, inode->i_ino, FLUSH_INO);
+
+- sb_start_intwrite(inode->i_sb);
++ if (!is_sbi_flag_set(sbi, SBI_IS_FREEZING))
++ sb_start_intwrite(inode->i_sb);
+ set_inode_flag(inode, FI_NO_ALLOC);
+ i_size_write(inode, 0);
+ retry:
+@@ -809,7 +810,8 @@ retry:
+ if (dquot_initialize_needed(inode))
+ set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR);
+ }
+- sb_end_intwrite(inode->i_sb);
++ if (!is_sbi_flag_set(sbi, SBI_IS_FREEZING))
++ sb_end_intwrite(inode->i_sb);
+ no_delete:
+ dquot_drop(inode);
+
+@@ -885,6 +887,7 @@ void f2fs_handle_failed_inode(struct inode *inode)
+ err = f2fs_get_node_info(sbi, inode->i_ino, &ni, false);
+ if (err) {
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ set_inode_flag(inode, FI_FREE_NID);
+ f2fs_warn(sbi, "May loss orphan inode, run fsck to fix.");
+ goto out;
+ }
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 50b2874e758c9..4ff7dfb542502 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -2111,8 +2111,12 @@ static int f2fs_write_node_pages(struct address_space *mapping,
+
+ if (wbc->sync_mode == WB_SYNC_ALL)
+ atomic_inc(&sbi->wb_sync_req[NODE]);
+- else if (atomic_read(&sbi->wb_sync_req[NODE]))
++ else if (atomic_read(&sbi->wb_sync_req[NODE])) {
++ /* to avoid potential deadlock */
++ if (current->plug)
++ blk_finish_plug(current->plug);
+ goto skip_write;
++ }
+
+ trace_f2fs_writepages(mapping->host, wbc, NODE);
+
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 1dabc8244083d..416d802ebbea6 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -4789,6 +4789,13 @@ static int sanity_check_curseg(struct f2fs_sb_info *sbi)
+
+ sanity_check_seg_type(sbi, curseg->seg_type);
+
++ if (curseg->alloc_type != LFS && curseg->alloc_type != SSR) {
++ f2fs_err(sbi,
++ "Current segment has invalid alloc_type:%d",
++ curseg->alloc_type);
++ return -EFSCORRUPTED;
++ }
++
+ if (f2fs_test_bit(blkofs, se->cur_valid_map))
+ goto out;
+
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index baefd398ec1a3..c4f8510fac930 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1662,11 +1662,15 @@ static int f2fs_freeze(struct super_block *sb)
+ /* ensure no checkpoint required */
+ if (!llist_empty(&F2FS_SB(sb)->cprc_info.issue_list))
+ return -EINVAL;
++
++ /* to avoid deadlock on f2fs_evict_inode->SB_FREEZE_FS */
++ set_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
+ return 0;
+ }
+
+ static int f2fs_unfreeze(struct super_block *sb)
+ {
++ clear_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
+ return 0;
+ }
+
+@@ -2688,7 +2692,7 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ struct f2fs_sb_info *sbi = F2FS_SB(sb);
+ struct quota_info *dqopt = sb_dqopt(sb);
+ int cnt;
+- int ret;
++ int ret = 0;
+
+ /*
+ * Now when everything is written we can discard the pagecache so
+@@ -2699,8 +2703,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ if (type != -1 && cnt != type)
+ continue;
+
+- if (!sb_has_quota_active(sb, type))
+- return 0;
++ if (!sb_has_quota_active(sb, cnt))
++ continue;
+
+ inode_lock(dqopt->files[cnt]);
+
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 8ac5066712454..bdb1b5c05be2e 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -481,7 +481,7 @@ out:
+ } else if (t == GC_IDLE_AT) {
+ if (!sbi->am.atgc_enabled)
+ return -EINVAL;
+- sbi->gc_mode = GC_AT;
++ sbi->gc_mode = GC_IDLE_AT;
+ } else {
+ sbi->gc_mode = GC_NORMAL;
+ }
+diff --git a/fs/file.c b/fs/file.c
+index 97d212a9b8144..ee93173467025 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -87,6 +87,21 @@ static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
+ copy_fd_bitmaps(nfdt, ofdt, ofdt->max_fds);
+ }
+
++/*
++ * Note how the fdtable bitmap allocations very much have to be a multiple of
++ * BITS_PER_LONG. This is not only because we walk those things in chunks of
++ * 'unsigned long' in some places, but simply because that is how the Linux
++ * kernel bitmaps are defined to work: they are not "bits in an array of bytes",
++ * they are very much "bits in an array of unsigned long".
++ *
++ * The ALIGN(nr, BITS_PER_LONG) here is for clarity: since we just multiplied
++ * by that "1024/sizeof(ptr)" before, we already know there are sufficient
++ * clear low bits. Clang seems to realize that, gcc ends up being confused.
++ *
++ * On a 128-bit machine, the ALIGN() would actually matter. In the meantime,
++ * let's consider it documentation (and maybe a test-case for gcc to improve
++ * its code generation ;)
++ */
+ static struct fdtable * alloc_fdtable(unsigned int nr)
+ {
+ struct fdtable *fdt;
+@@ -102,6 +117,7 @@ static struct fdtable * alloc_fdtable(unsigned int nr)
+ nr /= (1024 / sizeof(struct file *));
+ nr = roundup_pow_of_two(nr + 1);
+ nr *= (1024 / sizeof(struct file *));
++ nr = ALIGN(nr, BITS_PER_LONG);
+ /*
+ * Note that this can drive nr *below* what we had passed if sysctl_nr_open
+ * had been set lower between the check in expand_files() and here. Deal
+@@ -269,6 +285,19 @@ static unsigned int count_open_files(struct fdtable *fdt)
+ return i;
+ }
+
++/*
++ * Note that a sane fdtable size always has to be a multiple of
++ * BITS_PER_LONG, since we have bitmaps that are sized by this.
++ *
++ * 'max_fds' will normally already be properly aligned, but it
++ * turns out that in the close_range() -> __close_range() ->
++ * unshare_fd() -> dup_fd() -> sane_fdtable_size() we can end
++ * up having a 'max_fds' value that isn't already aligned.
++ *
++ * Rather than make close_range() have to worry about this,
++ * just make that BITS_PER_LONG alignment be part of a sane
++ * fdtable size. Because that's really what it is.
++ */
+ static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
+ {
+ unsigned int count;
+@@ -276,7 +305,7 @@ static unsigned int sane_fdtable_size(struct fdtable *fdt, unsigned int max_fds)
+ count = count_open_files(fdt);
+ if (max_fds < NR_OPEN_DEFAULT)
+ max_fds = NR_OPEN_DEFAULT;
+- return min(count, max_fds);
++ return ALIGN(min(count, max_fds), BITS_PER_LONG);
+ }
+
+ /*
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index d67108489148e..fbdb7a30470a3 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -2146,7 +2146,7 @@ int gfs2_setattr_size(struct inode *inode, u64 newsize)
+
+ ret = do_shrink(inode, newsize);
+ out:
+- gfs2_rs_delete(ip, NULL);
++ gfs2_rs_delete(ip);
+ gfs2_qa_put(ip);
+ return ret;
+ }
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index 8c39a8571b1fa..b53ad18e5ccbf 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -706,7 +706,7 @@ static int gfs2_release(struct inode *inode, struct file *file)
+
+ if (file->f_mode & FMODE_WRITE) {
+ if (gfs2_rs_active(&ip->i_res))
+- gfs2_rs_delete(ip, &inode->i_writecount);
++ gfs2_rs_delete(ip);
+ gfs2_qa_put(ip);
+ }
+ return 0;
+@@ -1083,6 +1083,7 @@ out_uninit:
+ gfs2_holder_uninit(gh);
+ if (statfs_gh)
+ kfree(statfs_gh);
++ from->count = orig_count - read;
+ return read ? read : ret;
+ }
+
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index 89905f4f29bb6..66a123306aecb 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -793,7 +793,7 @@ fail_free_inode:
+ if (free_vfs_inode) /* else evict will do the put for us */
+ gfs2_glock_put(ip->i_gl);
+ }
+- gfs2_rs_delete(ip, NULL);
++ gfs2_rs_deltree(&ip->i_res);
+ gfs2_qa_put(ip);
+ fail_free_acls:
+ posix_acl_release(default_acl);
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 0fb3c01bc5577..3b34bb24d0af4 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -680,13 +680,14 @@ void gfs2_rs_deltree(struct gfs2_blkreserv *rs)
+ /**
+ * gfs2_rs_delete - delete a multi-block reservation
+ * @ip: The inode for this reservation
+- * @wcount: The inode's write count, or NULL
+ *
+ */
+-void gfs2_rs_delete(struct gfs2_inode *ip, atomic_t *wcount)
++void gfs2_rs_delete(struct gfs2_inode *ip)
+ {
++ struct inode *inode = &ip->i_inode;
++
+ down_write(&ip->i_rw_mutex);
+- if ((wcount == NULL) || (atomic_read(wcount) <= 1))
++ if (atomic_read(&inode->i_writecount) <= 1)
+ gfs2_rs_deltree(&ip->i_res);
+ up_write(&ip->i_rw_mutex);
+ }
+@@ -1415,7 +1416,8 @@ int gfs2_fitrim(struct file *filp, void __user *argp)
+
+ start = r.start >> bs_shift;
+ end = start + (r.len >> bs_shift);
+- minlen = max_t(u64, r.minlen,
++ minlen = max_t(u64, r.minlen, sdp->sd_sb.sb_bsize);
++ minlen = max_t(u64, minlen,
+ q->limits.discard_granularity) >> bs_shift;
+
+ if (end <= start || minlen > sdp->sd_max_rg_data)
+diff --git a/fs/gfs2/rgrp.h b/fs/gfs2/rgrp.h
+index 3e2ca1fb43056..46dd94e9e085c 100644
+--- a/fs/gfs2/rgrp.h
++++ b/fs/gfs2/rgrp.h
+@@ -45,7 +45,7 @@ extern int gfs2_alloc_blocks(struct gfs2_inode *ip, u64 *bn, unsigned int *n,
+ bool dinode, u64 *generation);
+
+ extern void gfs2_rs_deltree(struct gfs2_blkreserv *rs);
+-extern void gfs2_rs_delete(struct gfs2_inode *ip, atomic_t *wcount);
++extern void gfs2_rs_delete(struct gfs2_inode *ip);
+ extern void __gfs2_free_blocks(struct gfs2_inode *ip, struct gfs2_rgrpd *rgd,
+ u64 bstart, u32 blen, int meta);
+ extern void gfs2_free_meta(struct gfs2_inode *ip, struct gfs2_rgrpd *rgd,
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 64c67090f5036..143a47359d1b8 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1396,7 +1396,7 @@ out:
+ truncate_inode_pages_final(&inode->i_data);
+ if (ip->i_qadata)
+ gfs2_assert_warn(sdp, ip->i_qadata->qa_ref == 0);
+- gfs2_rs_delete(ip, NULL);
++ gfs2_rs_deltree(&ip->i_res);
+ gfs2_ordered_del_inode(ip);
+ clear_inode(inode);
+ gfs2_dir_hash_inval(ip);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 4715980e90150..5e6788ab188fa 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -2813,8 +2813,12 @@ static bool io_rw_should_reissue(struct io_kiocb *req)
+
+ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
+ {
+- if (req->rw.kiocb.ki_flags & IOCB_WRITE)
++ if (req->rw.kiocb.ki_flags & IOCB_WRITE) {
+ kiocb_end_write(req);
++ fsnotify_modify(req->file);
++ } else {
++ fsnotify_access(req->file);
++ }
+ if (unlikely(res != req->result)) {
+ if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
+ io_rw_should_reissue(req)) {
+@@ -3436,13 +3440,15 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
+ ret = nr;
+ break;
+ }
++ ret += nr;
+ if (!iov_iter_is_bvec(iter)) {
+ iov_iter_advance(iter, nr);
+ } else {
+- req->rw.len -= nr;
+ req->rw.addr += nr;
++ req->rw.len -= nr;
++ if (!req->rw.len)
++ break;
+ }
+- ret += nr;
+ if (nr != iovec.iov_len)
+ break;
+ }
+@@ -4301,6 +4307,8 @@ static int io_fallocate(struct io_kiocb *req, unsigned int issue_flags)
+ req->sync.len);
+ if (ret < 0)
+ req_set_fail(req);
++ else
++ fsnotify_modify(req->file);
+ io_req_complete(req, ret);
+ return 0;
+ }
+@@ -5258,8 +5266,7 @@ static int io_accept_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ accept->nofile = rlimit(RLIMIT_NOFILE);
+
+ accept->file_slot = READ_ONCE(sqe->file_index);
+- if (accept->file_slot && ((req->open.how.flags & O_CLOEXEC) ||
+- (accept->flags & SOCK_CLOEXEC)))
++ if (accept->file_slot && (accept->flags & SOCK_CLOEXEC))
+ return -EINVAL;
+ if (accept->flags & ~(SOCK_CLOEXEC | SOCK_NONBLOCK))
+ return -EINVAL;
+@@ -5407,7 +5414,7 @@ struct io_poll_table {
+ };
+
+ #define IO_POLL_CANCEL_FLAG BIT(31)
+-#define IO_POLL_REF_MASK ((1u << 20)-1)
++#define IO_POLL_REF_MASK GENMASK(30, 0)
+
+ /*
+ * If refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, it's free. We can
+@@ -5863,6 +5870,7 @@ static __cold bool io_poll_remove_all(struct io_ring_ctx *ctx,
+ list = &ctx->cancel_hash[i];
+ hlist_for_each_entry_safe(req, tmp, list, hash_node) {
+ if (io_match_task_safe(req, tsk, cancel_all)) {
++ hlist_del_init(&req->hash_node);
+ io_poll_cancel_req(req);
+ found = true;
+ }
+@@ -8233,6 +8241,7 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ fput(fpl->fp[i]);
+ } else {
+ kfree_skb(skb);
++ free_uid(fpl->user);
+ kfree(fpl);
+ }
+
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 6c51a75d0be61..d020a2e81a24c 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -480,7 +480,8 @@ EXPORT_SYMBOL_GPL(iomap_releasepage);
+
+ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
+ {
+- trace_iomap_invalidatepage(folio->mapping->host, offset, len);
++ trace_iomap_invalidatepage(folio->mapping->host,
++ folio_pos(folio) + offset, len);
+
+ /*
+ * If we're invalidating the entire folio, clear the dirty state
+diff --git a/fs/jffs2/build.c b/fs/jffs2/build.c
+index b288c8ae1236b..837cd55fd4c5e 100644
+--- a/fs/jffs2/build.c
++++ b/fs/jffs2/build.c
+@@ -415,13 +415,15 @@ int jffs2_do_mount_fs(struct jffs2_sb_info *c)
+ jffs2_free_ino_caches(c);
+ jffs2_free_raw_node_refs(c);
+ ret = -EIO;
+- goto out_free;
++ goto out_sum_exit;
+ }
+
+ jffs2_calc_trigger_levels(c);
+
+ return 0;
+
++ out_sum_exit:
++ jffs2_sum_exit(c);
+ out_free:
+ kvfree(c->blocks);
+
+diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c
+index 2ac410477c4f4..71f03a5d36ed2 100644
+--- a/fs/jffs2/fs.c
++++ b/fs/jffs2/fs.c
+@@ -603,8 +603,8 @@ out_root:
+ jffs2_free_ino_caches(c);
+ jffs2_free_raw_node_refs(c);
+ kvfree(c->blocks);
+- out_inohash:
+ jffs2_clear_xattr_subsystem(c);
++ out_inohash:
+ kfree(c->inocache_list);
+ out_wbuf:
+ jffs2_flash_cleanup(c);
+diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
+index b676056826beb..29671e33a1714 100644
+--- a/fs/jffs2/scan.c
++++ b/fs/jffs2/scan.c
+@@ -136,7 +136,7 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
+ if (!s) {
+ JFFS2_WARNING("Can't allocate memory for summary\n");
+ ret = -ENOMEM;
+- goto out;
++ goto out_buf;
+ }
+ }
+
+@@ -275,13 +275,15 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
+ }
+ ret = 0;
+ out:
++ jffs2_sum_reset_collected(s);
++ kfree(s);
++ out_buf:
+ if (buf_size)
+ kfree(flashbuf);
+ #ifndef __ECOS
+ else
+ mtd_unpoint(c->mtd, 0, c->mtd->size);
+ #endif
+- kfree(s);
+ return ret;
+ }
+
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 91f4ec93dab1f..d8502f4989d9d 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -148,6 +148,7 @@ static const s8 budtab[256] = {
+ * 0 - success
+ * -ENOMEM - insufficient memory
+ * -EIO - i/o error
++ * -EINVAL - wrong bmap data
+ */
+ int dbMount(struct inode *ipbmap)
+ {
+@@ -179,6 +180,12 @@ int dbMount(struct inode *ipbmap)
+ bmp->db_nfree = le64_to_cpu(dbmp_le->dn_nfree);
+ bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
+ bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
++ if (!bmp->db_numag) {
++ release_metapage(mp);
++ kfree(bmp);
++ return -EINVAL;
++ }
++
+ bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
+ bmp->db_maxag = le32_to_cpu(dbmp_le->dn_maxag);
+ bmp->db_agpref = le32_to_cpu(dbmp_le->dn_agpref);
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index c343666d9a428..6464dde03705c 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -358,12 +358,11 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ struct cb_process_state *cps)
+ {
+ struct cb_devicenotifyargs *args = argp;
++ const struct pnfs_layoutdriver_type *ld = NULL;
+ uint32_t i;
+ __be32 res = 0;
+- struct nfs_client *clp = cps->clp;
+- struct nfs_server *server = NULL;
+
+- if (!clp) {
++ if (!cps->clp) {
+ res = cpu_to_be32(NFS4ERR_OP_NOT_IN_SESSION);
+ goto out;
+ }
+@@ -371,23 +370,15 @@ __be32 nfs4_callback_devicenotify(void *argp, void *resp,
+ for (i = 0; i < args->ndevs; i++) {
+ struct cb_devicenotifyitem *dev = &args->devs[i];
+
+- if (!server ||
+- server->pnfs_curr_ld->id != dev->cbd_layout_type) {
+- rcu_read_lock();
+- list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link)
+- if (server->pnfs_curr_ld &&
+- server->pnfs_curr_ld->id == dev->cbd_layout_type) {
+- rcu_read_unlock();
+- goto found;
+- }
+- rcu_read_unlock();
+- continue;
++ if (!ld || ld->id != dev->cbd_layout_type) {
++ pnfs_put_layoutdriver(ld);
++ ld = pnfs_find_layoutdriver(dev->cbd_layout_type);
++ if (!ld)
++ continue;
+ }
+-
+- found:
+- nfs4_delete_deviceid(server->pnfs_curr_ld, clp, &dev->cbd_dev_id);
++ nfs4_delete_deviceid(ld, cps->clp, &dev->cbd_dev_id);
+ }
+-
++ pnfs_put_layoutdriver(ld);
+ out:
+ kfree(args->devs);
+ return res;
+diff --git a/fs/nfs/callback_xdr.c b/fs/nfs/callback_xdr.c
+index f90de8043b0f9..8dcb08e1a885d 100644
+--- a/fs/nfs/callback_xdr.c
++++ b/fs/nfs/callback_xdr.c
+@@ -271,10 +271,6 @@ __be32 decode_devicenotify_args(struct svc_rqst *rqstp,
+ n = ntohl(*p++);
+ if (n == 0)
+ goto out;
+- if (n > ULONG_MAX / sizeof(*args->devs)) {
+- status = htonl(NFS4ERR_BADXDR);
+- goto out;
+- }
+
+ args->devs = kmalloc_array(n, sizeof(*args->devs), GFP_KERNEL);
+ if (!args->devs) {
+diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
+index 7fba7711e6b3a..3d5ba43f44bb6 100644
+--- a/fs/nfs/nfs2xdr.c
++++ b/fs/nfs/nfs2xdr.c
+@@ -949,7 +949,7 @@ int nfs2_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+
+ error = decode_filename_inline(xdr, &entry->name, &entry->len);
+ if (unlikely(error))
+- return error;
++ return -EAGAIN;
+
+ /*
+ * The type (size and byte order) of nfscookie isn't defined in
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index 9274c9c5efea6..7ab60ad98776f 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -1967,7 +1967,6 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ bool plus)
+ {
+ struct user_namespace *userns = rpc_userns(entry->server->client);
+- struct nfs_entry old = *entry;
+ __be32 *p;
+ int error;
+ u64 new_cookie;
+@@ -1987,15 +1986,15 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+
+ error = decode_fileid3(xdr, &entry->ino);
+ if (unlikely(error))
+- return error;
++ return -EAGAIN;
+
+ error = decode_inline_filename3(xdr, &entry->name, &entry->len);
+ if (unlikely(error))
+- return error;
++ return -EAGAIN;
+
+ error = decode_cookie3(xdr, &new_cookie);
+ if (unlikely(error))
+- return error;
++ return -EAGAIN;
+
+ entry->d_type = DT_UNKNOWN;
+
+@@ -2003,7 +2002,7 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ entry->fattr->valid = 0;
+ error = decode_post_op_attr(xdr, entry->fattr, userns);
+ if (unlikely(error))
+- return error;
++ return -EAGAIN;
+ if (entry->fattr->valid & NFS_ATTR_FATTR_V3)
+ entry->d_type = nfs_umode_to_dtype(entry->fattr->mode);
+
+@@ -2018,11 +2017,8 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ return -EAGAIN;
+ if (*p != xdr_zero) {
+ error = decode_nfs_fh3(xdr, entry->fh);
+- if (unlikely(error)) {
+- if (error == -E2BIG)
+- goto out_truncated;
+- return error;
+- }
++ if (unlikely(error))
++ return -EAGAIN;
+ } else
+ zero_nfs_fh3(entry->fh);
+ }
+@@ -2031,11 +2027,6 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ entry->cookie = new_cookie;
+
+ return 0;
+-
+-out_truncated:
+- dprintk("NFS: directory entry contains invalid file handle\n");
+- *entry = old;
+- return -EAGAIN;
+ }
+
+ /*
+@@ -2228,6 +2219,7 @@ static int decode_fsinfo3resok(struct xdr_stream *xdr,
+ /* ignore properties */
+ result->lease_time = 0;
+ result->change_attr_type = NFS4_CHANGE_TYPE_IS_UNDEFINED;
++ result->xattr_support = 0;
+ return 0;
+ }
+
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 0e0db6c276196..c36fa0d0d438b 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -8333,6 +8333,7 @@ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
+ case -NFS4ERR_DEADSESSION:
+ nfs4_schedule_session_recovery(clp->cl_session,
+ task->tk_status);
++ return;
+ }
+ if (args->dir == NFS4_CDFC4_FORE_OR_BOTH &&
+ res->dir != NFS4_CDFS4_BOTH) {
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index ad7f83dc9a2df..815d630802451 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1218,6 +1218,7 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+
+ do {
+ list_splice_init(&mirror->pg_list, &head);
++ mirror->pg_recoalesce = 0;
+
+ while (!list_empty(&head)) {
+ struct nfs_page *req;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 7c9090a28e5c3..7ddd003ab8b1a 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -92,6 +92,17 @@ find_pnfs_driver(u32 id)
+ return local;
+ }
+
++const struct pnfs_layoutdriver_type *pnfs_find_layoutdriver(u32 id)
++{
++ return find_pnfs_driver(id);
++}
++
++void pnfs_put_layoutdriver(const struct pnfs_layoutdriver_type *ld)
++{
++ if (ld)
++ module_put(ld->owner);
++}
++
+ void
+ unset_pnfs_layoutdriver(struct nfs_server *nfss)
+ {
+diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
+index f4d7548d67b24..07f11489e4e9f 100644
+--- a/fs/nfs/pnfs.h
++++ b/fs/nfs/pnfs.h
+@@ -234,6 +234,8 @@ struct pnfs_devicelist {
+
+ extern int pnfs_register_layoutdriver(struct pnfs_layoutdriver_type *);
+ extern void pnfs_unregister_layoutdriver(struct pnfs_layoutdriver_type *);
++extern const struct pnfs_layoutdriver_type *pnfs_find_layoutdriver(u32 id);
++extern void pnfs_put_layoutdriver(const struct pnfs_layoutdriver_type *ld);
+
+ /* nfs4proc.c */
+ extern size_t max_response_pages(struct nfs_server *server);
+diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c
+index 73dcaa99fa9ba..e3570c656b0f9 100644
+--- a/fs/nfs/proc.c
++++ b/fs/nfs/proc.c
+@@ -92,6 +92,7 @@ nfs_proc_get_root(struct nfs_server *server, struct nfs_fh *fhandle,
+ info->maxfilesize = 0x7FFFFFFF;
+ info->lease_time = 0;
+ info->change_attr_type = NFS4_CHANGE_TYPE_IS_UNDEFINED;
++ info->xattr_support = 0;
+ return 0;
+ }
+
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 987a187bd39aa..60693ab6a0325 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -316,7 +316,10 @@ static void nfs_mapping_set_error(struct page *page, int error)
+ struct address_space *mapping = page_file_mapping(page);
+
+ SetPageError(page);
+- mapping_set_error(mapping, error);
++ filemap_set_wb_err(mapping, error);
++ if (mapping->host)
++ errseq_set(&mapping->host->i_sb->s_wb_err,
++ error == -ENOSPC ? -ENOSPC : -EIO);
+ nfs_set_pageerror(mapping);
+ }
+
+@@ -1409,6 +1412,8 @@ static void nfs_initiate_write(struct nfs_pgio_header *hdr,
+ {
+ int priority = flush_task_priority(how);
+
++ if (IS_SWAPFILE(hdr->inode))
++ task_setup_data->flags |= RPC_TASK_SWAPPER;
+ task_setup_data->priority = priority;
+ rpc_ops->write_setup(hdr, msg, &task_setup_data->rpc_client);
+ trace_nfs_initiate_write(hdr);
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 8bc807c5fea4c..cc2831cec6695 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -632,7 +632,7 @@ nfsd_file_cache_init(void)
+ if (!nfsd_filecache_wq)
+ goto out;
+
+- nfsd_file_hashtbl = kcalloc(NFSD_FILE_HASH_SIZE,
++ nfsd_file_hashtbl = kvcalloc(NFSD_FILE_HASH_SIZE,
+ sizeof(*nfsd_file_hashtbl), GFP_KERNEL);
+ if (!nfsd_file_hashtbl) {
+ pr_err("nfsd: unable to allocate nfsd_file_hashtbl\n");
+@@ -700,7 +700,7 @@ out_err:
+ nfsd_file_slab = NULL;
+ kmem_cache_destroy(nfsd_file_mark_slab);
+ nfsd_file_mark_slab = NULL;
+- kfree(nfsd_file_hashtbl);
++ kvfree(nfsd_file_hashtbl);
+ nfsd_file_hashtbl = NULL;
+ destroy_workqueue(nfsd_filecache_wq);
+ nfsd_filecache_wq = NULL;
+@@ -811,7 +811,7 @@ nfsd_file_cache_shutdown(void)
+ fsnotify_wait_marks_destroyed();
+ kmem_cache_destroy(nfsd_file_mark_slab);
+ nfsd_file_mark_slab = NULL;
+- kfree(nfsd_file_hashtbl);
++ kvfree(nfsd_file_hashtbl);
+ nfsd_file_hashtbl = NULL;
+ destroy_workqueue(nfsd_filecache_wq);
+ nfsd_filecache_wq = NULL;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 32063733443d4..f3b71fd1d1341 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -4711,6 +4711,14 @@ nfsd_break_deleg_cb(struct file_lock *fl)
+ return ret;
+ }
+
++/**
++ * nfsd_breaker_owns_lease - Check if lease conflict was resolved
++ * @fl: Lock state to check
++ *
++ * Return values:
++ * %true: Lease conflict was resolved
++ * %false: Lease conflict was not resolved.
++ */
+ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+ {
+ struct nfs4_delegation *dl = fl->fl_owner;
+@@ -4718,11 +4726,11 @@ static bool nfsd_breaker_owns_lease(struct file_lock *fl)
+ struct nfs4_client *clp;
+
+ if (!i_am_nfsd())
+- return NULL;
++ return false;
+ rqst = kthread_data(current);
+ /* Note rq_prog == NFS_ACL_PROGRAM is also possible: */
+ if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4)
+- return NULL;
++ return false;
+ clp = *(rqst->rq_lease_breaker);
+ return dl->dl_stid.sc_client == clp;
+ }
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index 18b8eb43a19bc..fcdab8a8a41f4 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -230,7 +230,7 @@ nfsd_proc_write(struct svc_rqst *rqstp)
+ unsigned long cnt = argp->len;
+ unsigned int nvecs;
+
+- dprintk("nfsd: WRITE %s %d bytes at %d\n",
++ dprintk("nfsd: WRITE %s %u bytes at %d\n",
+ SVCFH_fmt(&argp->fh),
+ argp->len, argp->offset);
+
+diff --git a/fs/nfsd/xdr.h b/fs/nfsd/xdr.h
+index 528fb299430e6..852f71580bd06 100644
+--- a/fs/nfsd/xdr.h
++++ b/fs/nfsd/xdr.h
+@@ -32,7 +32,7 @@ struct nfsd_readargs {
+ struct nfsd_writeargs {
+ svc_fh fh;
+ __u32 offset;
+- int len;
++ __u32 len;
+ struct xdr_buf payload;
+ };
+
+diff --git a/fs/ntfs/inode.c b/fs/ntfs/inode.c
+index 4474adb393ca8..517b71c73aa96 100644
+--- a/fs/ntfs/inode.c
++++ b/fs/ntfs/inode.c
+@@ -1881,6 +1881,10 @@ int ntfs_read_inode_mount(struct inode *vi)
+ }
+ /* Now allocate memory for the attribute list. */
+ ni->attr_list_size = (u32)ntfs_attr_size(a);
++ if (!ni->attr_list_size) {
++ ntfs_error(sb, "Attr_list_size is zero");
++ goto put_err_out;
++ }
+ ni->attr_list = ntfs_malloc_nofs(ni->attr_list_size);
+ if (!ni->attr_list) {
+ ntfs_error(sb, "Not enough memory to allocate buffer "
+diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
+index f033de733adb3..effe92c7d6937 100644
+--- a/fs/ocfs2/quota_global.c
++++ b/fs/ocfs2/quota_global.c
+@@ -337,7 +337,6 @@ void ocfs2_unlock_global_qf(struct ocfs2_mem_dqinfo *oinfo, int ex)
+ /* Read information header from global quota file */
+ int ocfs2_global_read_info(struct super_block *sb, int type)
+ {
+- struct inode *gqinode = NULL;
+ unsigned int ino[OCFS2_MAXQUOTAS] = { USER_QUOTA_SYSTEM_INODE,
+ GROUP_QUOTA_SYSTEM_INODE };
+ struct ocfs2_global_disk_dqinfo dinfo;
+@@ -346,29 +345,31 @@ int ocfs2_global_read_info(struct super_block *sb, int type)
+ u64 pcount;
+ int status;
+
++ oinfo->dqi_gi.dqi_sb = sb;
++ oinfo->dqi_gi.dqi_type = type;
++ ocfs2_qinfo_lock_res_init(&oinfo->dqi_gqlock, oinfo);
++ oinfo->dqi_gi.dqi_entry_size = sizeof(struct ocfs2_global_disk_dqblk);
++ oinfo->dqi_gi.dqi_ops = &ocfs2_global_ops;
++ oinfo->dqi_gqi_bh = NULL;
++ oinfo->dqi_gqi_count = 0;
++
+ /* Read global header */
+- gqinode = ocfs2_get_system_file_inode(OCFS2_SB(sb), ino[type],
++ oinfo->dqi_gqinode = ocfs2_get_system_file_inode(OCFS2_SB(sb), ino[type],
+ OCFS2_INVALID_SLOT);
+- if (!gqinode) {
++ if (!oinfo->dqi_gqinode) {
+ mlog(ML_ERROR, "failed to get global quota inode (type=%d)\n",
+ type);
+ status = -EINVAL;
+ goto out_err;
+ }
+- oinfo->dqi_gi.dqi_sb = sb;
+- oinfo->dqi_gi.dqi_type = type;
+- oinfo->dqi_gi.dqi_entry_size = sizeof(struct ocfs2_global_disk_dqblk);
+- oinfo->dqi_gi.dqi_ops = &ocfs2_global_ops;
+- oinfo->dqi_gqi_bh = NULL;
+- oinfo->dqi_gqi_count = 0;
+- oinfo->dqi_gqinode = gqinode;
++
+ status = ocfs2_lock_global_qf(oinfo, 0);
+ if (status < 0) {
+ mlog_errno(status);
+ goto out_err;
+ }
+
+- status = ocfs2_extent_map_get_blocks(gqinode, 0, &oinfo->dqi_giblk,
++ status = ocfs2_extent_map_get_blocks(oinfo->dqi_gqinode, 0, &oinfo->dqi_giblk,
+ &pcount, NULL);
+ if (status < 0)
+ goto out_unlock;
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index 0e4b16d4c037f..b1a8b046f4c22 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -702,8 +702,6 @@ static int ocfs2_local_read_info(struct super_block *sb, int type)
+ info->dqi_priv = oinfo;
+ oinfo->dqi_type = type;
+ INIT_LIST_HEAD(&oinfo->dqi_chunk);
+- oinfo->dqi_gqinode = NULL;
+- ocfs2_qinfo_lock_res_init(&oinfo->dqi_gqlock, oinfo);
+ oinfo->dqi_rec = NULL;
+ oinfo->dqi_lqi_bh = NULL;
+ oinfo->dqi_libh = NULL;
+diff --git a/fs/proc/bootconfig.c b/fs/proc/bootconfig.c
+index 6d8d4bf208377..2e244ada1f970 100644
+--- a/fs/proc/bootconfig.c
++++ b/fs/proc/bootconfig.c
+@@ -32,6 +32,8 @@ static int __init copy_xbc_key_value_list(char *dst, size_t size)
+ int ret = 0;
+
+ key = kzalloc(XBC_KEYLEN_MAX, GFP_KERNEL);
++ if (!key)
++ return -ENOMEM;
+
+ xbc_for_each_key_value(leaf, val) {
+ ret = xbc_node_compose_key(leaf, key, XBC_KEYLEN_MAX);
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index f243cb5e6a4fb..e26162f102ffe 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -143,21 +143,22 @@ static void pstore_timer_kick(void)
+ mod_timer(&pstore_timer, jiffies + msecs_to_jiffies(pstore_update_ms));
+ }
+
+-/*
+- * Should pstore_dump() wait for a concurrent pstore_dump()? If
+- * not, the current pstore_dump() will report a failure to dump
+- * and return.
+- */
+-static bool pstore_cannot_wait(enum kmsg_dump_reason reason)
++static bool pstore_cannot_block_path(enum kmsg_dump_reason reason)
+ {
+- /* In NMI path, pstore shouldn't block regardless of reason. */
++ /*
++ * In case of NMI path, pstore shouldn't be blocked
++ * regardless of reason.
++ */
+ if (in_nmi())
+ return true;
+
+ switch (reason) {
+ /* In panic case, other cpus are stopped by smp_send_stop(). */
+ case KMSG_DUMP_PANIC:
+- /* Emergency restart shouldn't be blocked. */
++ /*
++ * Emergency restart shouldn't be blocked by spinning on
++ * pstore_info::buf_lock.
++ */
+ case KMSG_DUMP_EMERG:
+ return true;
+ default:
+@@ -389,21 +390,19 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ unsigned long total = 0;
+ const char *why;
+ unsigned int part = 1;
++ unsigned long flags = 0;
+ int ret;
+
+ why = kmsg_dump_reason_str(reason);
+
+- if (down_trylock(&psinfo->buf_lock)) {
+- /* Failed to acquire lock: give up if we cannot wait. */
+- if (pstore_cannot_wait(reason)) {
+- pr_err("dump skipped in %s path: may corrupt error record\n",
+- in_nmi() ? "NMI" : why);
+- return;
+- }
+- if (down_interruptible(&psinfo->buf_lock)) {
+- pr_err("could not grab semaphore?!\n");
++ if (pstore_cannot_block_path(reason)) {
++ if (!spin_trylock_irqsave(&psinfo->buf_lock, flags)) {
++ pr_err("dump skipped in %s path because of concurrent dump\n",
++ in_nmi() ? "NMI" : why);
+ return;
+ }
++ } else {
++ spin_lock_irqsave(&psinfo->buf_lock, flags);
+ }
+
+ kmsg_dump_rewind(&iter);
+@@ -467,8 +466,7 @@ static void pstore_dump(struct kmsg_dumper *dumper,
+ total += record.size;
+ part++;
+ }
+-
+- up(&psinfo->buf_lock);
++ spin_unlock_irqrestore(&psinfo->buf_lock, flags);
+ }
+
+ static struct kmsg_dumper pstore_dumper = {
+@@ -594,7 +592,7 @@ int pstore_register(struct pstore_info *psi)
+ psi->write_user = pstore_write_user_compat;
+ psinfo = psi;
+ mutex_init(&psinfo->read_mutex);
+- sema_init(&psinfo->buf_lock, 1);
++ spin_lock_init(&psinfo->buf_lock);
+
+ if (psi->flags & PSTORE_FLAGS_DMESG)
+ allocate_buf_for_compression();
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index dbe72f664abf3..86151889548e3 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -349,20 +349,97 @@ out_budg:
+ return err;
+ }
+
+-static int do_tmpfile(struct inode *dir, struct dentry *dentry,
+- umode_t mode, struct inode **whiteout)
++static struct inode *create_whiteout(struct inode *dir, struct dentry *dentry)
++{
++ int err;
++ umode_t mode = S_IFCHR | WHITEOUT_MODE;
++ struct inode *inode;
++ struct ubifs_info *c = dir->i_sb->s_fs_info;
++ struct fscrypt_name nm;
++
++ /*
++	 * Create an inode ('nlink = 1') for the whiteout without updating the
++	 * journal; let ubifs_jnl_rename() store it on flash to complete the
++	 * rename whiteout atomically.
++ */
++
++ dbg_gen("dent '%pd', mode %#hx in dir ino %lu",
++ dentry, mode, dir->i_ino);
++
++ err = fscrypt_setup_filename(dir, &dentry->d_name, 0, &nm);
++ if (err)
++ return ERR_PTR(err);
++
++ inode = ubifs_new_inode(c, dir, mode);
++ if (IS_ERR(inode)) {
++ err = PTR_ERR(inode);
++ goto out_free;
++ }
++
++ init_special_inode(inode, inode->i_mode, WHITEOUT_DEV);
++ ubifs_assert(c, inode->i_op == &ubifs_file_inode_operations);
++
++ err = ubifs_init_security(dir, inode, &dentry->d_name);
++ if (err)
++ goto out_inode;
++
++ /* The dir size is updated by do_rename. */
++ insert_inode_hash(inode);
++
++ return inode;
++
++out_inode:
++ make_bad_inode(inode);
++ iput(inode);
++out_free:
++ fscrypt_free_filename(&nm);
++ ubifs_err(c, "cannot create whiteout file, error %d", err);
++ return ERR_PTR(err);
++}
++
++/**
++ * lock_2_inodes - a wrapper for locking two UBIFS inodes.
++ * @inode1: first inode
++ * @inode2: second inode
++ *
++ * We do not implement any tricks to guarantee strict lock ordering, because
++ * VFS has already done it for us on the @i_mutex. So this is just a simple
++ * wrapper function.
++ */
++static void lock_2_inodes(struct inode *inode1, struct inode *inode2)
++{
++ mutex_lock_nested(&ubifs_inode(inode1)->ui_mutex, WB_MUTEX_1);
++ mutex_lock_nested(&ubifs_inode(inode2)->ui_mutex, WB_MUTEX_2);
++}
++
++/**
++ * unlock_2_inodes - a wrapper for unlocking two UBIFS inodes.
++ * @inode1: first inode
++ * @inode2: second inode
++ */
++static void unlock_2_inodes(struct inode *inode1, struct inode *inode2)
++{
++ mutex_unlock(&ubifs_inode(inode2)->ui_mutex);
++ mutex_unlock(&ubifs_inode(inode1)->ui_mutex);
++}
++
++static int ubifs_tmpfile(struct user_namespace *mnt_userns, struct inode *dir,
++ struct dentry *dentry, umode_t mode)
+ {
+ struct inode *inode;
+ struct ubifs_info *c = dir->i_sb->s_fs_info;
+- struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1};
++ struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
++ .dirtied_ino = 1};
+ struct ubifs_budget_req ino_req = { .dirtied_ino = 1 };
+- struct ubifs_inode *ui, *dir_ui = ubifs_inode(dir);
++ struct ubifs_inode *ui;
+ int err, instantiated = 0;
+ struct fscrypt_name nm;
+
+ /*
+- * Budget request settings: new dirty inode, new direntry,
+- * budget for dirtied inode will be released via writeback.
++ * Budget request settings: new inode, new direntry, changing the
++ * parent directory inode.
++ * Allocate budget separately for new dirtied inode, the budget will
++ * be released via writeback.
+ */
+
+ dbg_gen("dent '%pd', mode %#hx in dir ino %lu",
+@@ -392,42 +469,30 @@ static int do_tmpfile(struct inode *dir, struct dentry *dentry,
+ }
+ ui = ubifs_inode(inode);
+
+- if (whiteout) {
+- init_special_inode(inode, inode->i_mode, WHITEOUT_DEV);
+- ubifs_assert(c, inode->i_op == &ubifs_file_inode_operations);
+- }
+-
+ err = ubifs_init_security(dir, inode, &dentry->d_name);
+ if (err)
+ goto out_inode;
+
+ mutex_lock(&ui->ui_mutex);
+ insert_inode_hash(inode);
+-
+- if (whiteout) {
+- mark_inode_dirty(inode);
+- drop_nlink(inode);
+- *whiteout = inode;
+- } else {
+- d_tmpfile(dentry, inode);
+- }
++ d_tmpfile(dentry, inode);
+ ubifs_assert(c, ui->dirty);
+
+ instantiated = 1;
+ mutex_unlock(&ui->ui_mutex);
+
+- mutex_lock(&dir_ui->ui_mutex);
++ lock_2_inodes(dir, inode);
+ err = ubifs_jnl_update(c, dir, &nm, inode, 1, 0);
+ if (err)
+ goto out_cancel;
+- mutex_unlock(&dir_ui->ui_mutex);
++ unlock_2_inodes(dir, inode);
+
+ ubifs_release_budget(c, &req);
+
+ return 0;
+
+ out_cancel:
+- mutex_unlock(&dir_ui->ui_mutex);
++ unlock_2_inodes(dir, inode);
+ out_inode:
+ make_bad_inode(inode);
+ if (!instantiated)
+@@ -441,12 +506,6 @@ out_budg:
+ return err;
+ }
+
+-static int ubifs_tmpfile(struct user_namespace *mnt_userns, struct inode *dir,
+- struct dentry *dentry, umode_t mode)
+-{
+- return do_tmpfile(dir, dentry, mode, NULL);
+-}
+-
+ /**
+ * vfs_dent_type - get VFS directory entry type.
+ * @type: UBIFS directory entry type
+@@ -660,32 +719,6 @@ static int ubifs_dir_release(struct inode *dir, struct file *file)
+ return 0;
+ }
+
+-/**
+- * lock_2_inodes - a wrapper for locking two UBIFS inodes.
+- * @inode1: first inode
+- * @inode2: second inode
+- *
+- * We do not implement any tricks to guarantee strict lock ordering, because
+- * VFS has already done it for us on the @i_mutex. So this is just a simple
+- * wrapper function.
+- */
+-static void lock_2_inodes(struct inode *inode1, struct inode *inode2)
+-{
+- mutex_lock_nested(&ubifs_inode(inode1)->ui_mutex, WB_MUTEX_1);
+- mutex_lock_nested(&ubifs_inode(inode2)->ui_mutex, WB_MUTEX_2);
+-}
+-
+-/**
+- * unlock_2_inodes - a wrapper for unlocking two UBIFS inodes.
+- * @inode1: first inode
+- * @inode2: second inode
+- */
+-static void unlock_2_inodes(struct inode *inode1, struct inode *inode2)
+-{
+- mutex_unlock(&ubifs_inode(inode2)->ui_mutex);
+- mutex_unlock(&ubifs_inode(inode1)->ui_mutex);
+-}
+-
+ static int ubifs_link(struct dentry *old_dentry, struct inode *dir,
+ struct dentry *dentry)
+ {
+@@ -949,7 +982,8 @@ static int ubifs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
+ struct ubifs_inode *dir_ui = ubifs_inode(dir);
+ struct ubifs_info *c = dir->i_sb->s_fs_info;
+ int err, sz_change;
+- struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1 };
++ struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
++ .dirtied_ino = 1};
+ struct fscrypt_name nm;
+
+ /*
+@@ -1264,17 +1298,19 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ .dirtied_ino = 3 };
+ struct ubifs_budget_req ino_req = { .dirtied_ino = 1,
+ .dirtied_ino_d = ALIGN(old_inode_ui->data_len, 8) };
++ struct ubifs_budget_req wht_req;
+ struct timespec64 time;
+ unsigned int saved_nlink;
+ struct fscrypt_name old_nm, new_nm;
+
+ /*
+- * Budget request settings: deletion direntry, new direntry, removing
+- * the old inode, and changing old and new parent directory inodes.
++ * Budget request settings:
++ * req: deletion direntry, new direntry, removing the old inode,
++ * and changing old and new parent directory inodes.
++ *
++ * wht_req: new whiteout inode for RENAME_WHITEOUT.
+ *
+- * However, this operation also marks the target inode as dirty and
+- * does not write it, so we allocate budget for the target inode
+- * separately.
++ * ino_req: marks the target inode as dirty and does not write it.
+ */
+
+ dbg_gen("dent '%pd' ino %lu in dir ino %lu to dent '%pd' in dir ino %lu flags 0x%x",
+@@ -1331,20 +1367,44 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ goto out_release;
+ }
+
+- err = do_tmpfile(old_dir, old_dentry, S_IFCHR | WHITEOUT_MODE, &whiteout);
+- if (err) {
++ /*
++		 * The whiteout inode, which has no dentry, is pinned in memory;
++		 * umount cannot happen during the rename process because we
++		 * hold the parent dentry.
++ */
++ whiteout = create_whiteout(old_dir, old_dentry);
++ if (IS_ERR(whiteout)) {
++ err = PTR_ERR(whiteout);
+ kfree(dev);
+ goto out_release;
+ }
+
+- spin_lock(&whiteout->i_lock);
+- whiteout->i_state |= I_LINKABLE;
+- spin_unlock(&whiteout->i_lock);
+-
+ whiteout_ui = ubifs_inode(whiteout);
+ whiteout_ui->data = dev;
+ whiteout_ui->data_len = ubifs_encode_dev(dev, MKDEV(0, 0));
+ ubifs_assert(c, !whiteout_ui->dirty);
++
++ memset(&wht_req, 0, sizeof(struct ubifs_budget_req));
++ wht_req.new_ino = 1;
++ wht_req.new_ino_d = ALIGN(whiteout_ui->data_len, 8);
++ /*
++		 * To avoid a deadlock between the space budget (which holds
++		 * ui_mutex and waits for wb work) and the writeback work (which
++		 * waits for ui_mutex), do the space budgeting before the ubifs
++		 * inodes are locked.
++ */
++ err = ubifs_budget_space(c, &wht_req);
++ if (err) {
++ /*
++ * Whiteout inode can not be written on flash by
++ * ubifs_jnl_write_inode(), because it's neither
++ * dirty nor zero-nlink.
++ */
++ iput(whiteout);
++ goto out_release;
++ }
++
++ /* Add the old_dentry size to the old_dir size. */
++ old_sz -= CALC_DENT_SIZE(fname_len(&old_nm));
+ }
+
+ lock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+@@ -1416,29 +1476,11 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ sync = IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir);
+ if (unlink && IS_SYNC(new_inode))
+ sync = 1;
+- }
+-
+- if (whiteout) {
+- struct ubifs_budget_req wht_req = { .dirtied_ino = 1,
+- .dirtied_ino_d = \
+- ALIGN(ubifs_inode(whiteout)->data_len, 8) };
+-
+- err = ubifs_budget_space(c, &wht_req);
+- if (err) {
+- kfree(whiteout_ui->data);
+- whiteout_ui->data_len = 0;
+- iput(whiteout);
+- goto out_release;
+- }
+-
+- inc_nlink(whiteout);
+- mark_inode_dirty(whiteout);
+-
+- spin_lock(&whiteout->i_lock);
+- whiteout->i_state &= ~I_LINKABLE;
+- spin_unlock(&whiteout->i_lock);
+-
+- iput(whiteout);
++ /*
++		 * The S_SYNC flag of the whiteout is inherited from old_dir, and
++		 * we have already checked the old dir inode, so there is no need
++		 * to check the whiteout.
++ */
+ }
+
+ err = ubifs_jnl_rename(c, old_dir, old_inode, &old_nm, new_dir,
+@@ -1449,6 +1491,11 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ unlock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+ ubifs_release_budget(c, &req);
+
++ if (whiteout) {
++ ubifs_release_budget(c, &wht_req);
++ iput(whiteout);
++ }
++
+ mutex_lock(&old_inode_ui->ui_mutex);
+ release = old_inode_ui->dirty;
+ mark_inode_dirty_sync(old_inode);
+@@ -1457,11 +1504,16 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ if (release)
+ ubifs_release_budget(c, &ino_req);
+ if (IS_SYNC(old_inode))
+- err = old_inode->i_sb->s_op->write_inode(old_inode, NULL);
++ /*
++		 * The rename has finished here. Although the old inode cannot be
++		 * updated on flash, a stale ctime is not a big problem; don't
++		 * return an error code to userspace.
++ */
++ old_inode->i_sb->s_op->write_inode(old_inode, NULL);
+
+ fscrypt_free_filename(&old_nm);
+ fscrypt_free_filename(&new_nm);
+- return err;
++ return 0;
+
+ out_cancel:
+ if (unlink) {
+@@ -1482,11 +1534,11 @@ out_cancel:
+ inc_nlink(old_dir);
+ }
+ }
++ unlock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+ if (whiteout) {
+- drop_nlink(whiteout);
++ ubifs_release_budget(c, &wht_req);
+ iput(whiteout);
+ }
+- unlock_4_inodes(old_dir, new_dir, new_inode, whiteout);
+ out_release:
+ ubifs_release_budget(c, &ino_req);
+ ubifs_release_budget(c, &req);
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 5cfa28cd00cdc..6b45a037a0471 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -570,7 +570,7 @@ static int ubifs_write_end(struct file *file, struct address_space *mapping,
+ }
+
+ if (!PagePrivate(page)) {
+- SetPagePrivate(page);
++ attach_page_private(page, (void *)1);
+ atomic_long_inc(&c->dirty_pg_cnt);
+ __set_page_dirty_nobuffers(page);
+ }
+@@ -947,7 +947,7 @@ static int do_writepage(struct page *page, int len)
+ release_existing_page_budget(c);
+
+ atomic_long_dec(&c->dirty_pg_cnt);
+- ClearPagePrivate(page);
++ detach_page_private(page);
+ ClearPageChecked(page);
+
+ kunmap(page);
+@@ -1304,7 +1304,7 @@ static void ubifs_invalidatepage(struct page *page, unsigned int offset,
+ release_existing_page_budget(c);
+
+ atomic_long_dec(&c->dirty_pg_cnt);
+- ClearPagePrivate(page);
++ detach_page_private(page);
+ ClearPageChecked(page);
+ }
+
+@@ -1471,8 +1471,8 @@ static int ubifs_migrate_page(struct address_space *mapping,
+ return rc;
+
+ if (PagePrivate(page)) {
+- ClearPagePrivate(page);
+- SetPagePrivate(newpage);
++ detach_page_private(page);
++ attach_page_private(newpage, (void *)1);
+ }
+
+ if (mode != MIGRATE_SYNC_NO_COPY)
+@@ -1496,7 +1496,7 @@ static int ubifs_releasepage(struct page *page, gfp_t unused_gfp_flags)
+ return 0;
+ ubifs_assert(c, PagePrivate(page));
+ ubifs_assert(c, 0);
+- ClearPagePrivate(page);
++ detach_page_private(page);
+ ClearPageChecked(page);
+ return 1;
+ }
+@@ -1567,7 +1567,7 @@ static vm_fault_t ubifs_vm_page_mkwrite(struct vm_fault *vmf)
+ else {
+ if (!PageChecked(page))
+ ubifs_convert_page_budget(c);
+- SetPagePrivate(page);
++ attach_page_private(page, (void *)1);
+ atomic_long_inc(&c->dirty_pg_cnt);
+ __set_page_dirty_nobuffers(page);
+ }
+diff --git a/fs/ubifs/io.c b/fs/ubifs/io.c
+index 789a7813f3fa2..1607a3c76681a 100644
+--- a/fs/ubifs/io.c
++++ b/fs/ubifs/io.c
+@@ -854,16 +854,42 @@ int ubifs_wbuf_write_nolock(struct ubifs_wbuf *wbuf, void *buf, int len)
+ */
+ n = aligned_len >> c->max_write_shift;
+ if (n) {
+- n <<= c->max_write_shift;
++ int m = n - 1;
++
+ dbg_io("write %d bytes to LEB %d:%d", n, wbuf->lnum,
+ wbuf->offs);
+- err = ubifs_leb_write(c, wbuf->lnum, buf + written,
+- wbuf->offs, n);
++
++ if (m) {
++ /* '(n-1)<<c->max_write_shift < len' is always true. */
++ m <<= c->max_write_shift;
++ err = ubifs_leb_write(c, wbuf->lnum, buf + written,
++ wbuf->offs, m);
++ if (err)
++ goto out;
++ wbuf->offs += m;
++ aligned_len -= m;
++ len -= m;
++ written += m;
++ }
++
++ /*
++ * The non-written len of buf may be less than 'n' because
++ * parameter 'len' is not 8 bytes aligned, so here we read
++ * min(len, n) bytes from buf.
++ */
++ n = 1 << c->max_write_shift;
++ memcpy(wbuf->buf, buf + written, min(len, n));
++ if (n > len) {
++ ubifs_assert(c, n - len < 8);
++ ubifs_pad(c, wbuf->buf + len, n - len);
++ }
++
++ err = ubifs_leb_write(c, wbuf->lnum, wbuf->buf, wbuf->offs, n);
+ if (err)
+ goto out;
+ wbuf->offs += n;
+ aligned_len -= n;
+- len -= n;
++ len -= min(len, n);
+ written += n;
+ }
+
+diff --git a/fs/ubifs/ioctl.c b/fs/ubifs/ioctl.c
+index c6a8634877803..71bcebe45f9c5 100644
+--- a/fs/ubifs/ioctl.c
++++ b/fs/ubifs/ioctl.c
+@@ -108,7 +108,7 @@ static int setflags(struct inode *inode, int flags)
+ struct ubifs_inode *ui = ubifs_inode(inode);
+ struct ubifs_info *c = inode->i_sb->s_fs_info;
+ struct ubifs_budget_req req = { .dirtied_ino = 1,
+- .dirtied_ino_d = ui->data_len };
++ .dirtied_ino_d = ALIGN(ui->data_len, 8) };
+
+ err = ubifs_budget_space(c, &req);
+ if (err)
+diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
+index 8ea680dba61e3..75dab0ae3939d 100644
+--- a/fs/ubifs/journal.c
++++ b/fs/ubifs/journal.c
+@@ -1207,9 +1207,9 @@ out_free:
+ * @sync: non-zero if the write-buffer has to be synchronized
+ *
+ * This function implements the re-name operation which may involve writing up
+- * to 4 inodes and 2 directory entries. It marks the written inodes as clean
+- * and returns zero on success. In case of failure, a negative error code is
+- * returned.
++ * to 4 inodes (new inode, whiteout inode, old and new parent directory inodes)
++ * and 2 directory entries. It marks the written inodes as clean and returns
++ * zero on success. In case of failure, a negative error code is returned.
+ */
+ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ const struct inode *old_inode,
+@@ -1222,14 +1222,15 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ void *p;
+ union ubifs_key key;
+ struct ubifs_dent_node *dent, *dent2;
+- int err, dlen1, dlen2, ilen, lnum, offs, len, orphan_added = 0;
++ int err, dlen1, dlen2, ilen, wlen, lnum, offs, len, orphan_added = 0;
+ int aligned_dlen1, aligned_dlen2, plen = UBIFS_INO_NODE_SZ;
+ int last_reference = !!(new_inode && new_inode->i_nlink == 0);
+ int move = (old_dir != new_dir);
+- struct ubifs_inode *new_ui;
++ struct ubifs_inode *new_ui, *whiteout_ui;
+ u8 hash_old_dir[UBIFS_HASH_ARR_SZ];
+ u8 hash_new_dir[UBIFS_HASH_ARR_SZ];
+ u8 hash_new_inode[UBIFS_HASH_ARR_SZ];
++ u8 hash_whiteout_inode[UBIFS_HASH_ARR_SZ];
+ u8 hash_dent1[UBIFS_HASH_ARR_SZ];
+ u8 hash_dent2[UBIFS_HASH_ARR_SZ];
+
+@@ -1249,9 +1250,20 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ } else
+ ilen = 0;
+
++ if (whiteout) {
++ whiteout_ui = ubifs_inode(whiteout);
++ ubifs_assert(c, mutex_is_locked(&whiteout_ui->ui_mutex));
++ ubifs_assert(c, whiteout->i_nlink == 1);
++ ubifs_assert(c, !whiteout_ui->dirty);
++ wlen = UBIFS_INO_NODE_SZ;
++ wlen += whiteout_ui->data_len;
++ } else
++ wlen = 0;
++
+ aligned_dlen1 = ALIGN(dlen1, 8);
+ aligned_dlen2 = ALIGN(dlen2, 8);
+- len = aligned_dlen1 + aligned_dlen2 + ALIGN(ilen, 8) + ALIGN(plen, 8);
++ len = aligned_dlen1 + aligned_dlen2 + ALIGN(ilen, 8) +
++ ALIGN(wlen, 8) + ALIGN(plen, 8);
+ if (move)
+ len += plen;
+
+@@ -1313,6 +1325,15 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ p += ALIGN(ilen, 8);
+ }
+
++ if (whiteout) {
++ pack_inode(c, p, whiteout, 0);
++ err = ubifs_node_calc_hash(c, p, hash_whiteout_inode);
++ if (err)
++ goto out_release;
++
++ p += ALIGN(wlen, 8);
++ }
++
+ if (!move) {
+ pack_inode(c, p, old_dir, 1);
+ err = ubifs_node_calc_hash(c, p, hash_old_dir);
+@@ -1352,6 +1373,9 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ if (new_inode)
+ ubifs_wbuf_add_ino_nolock(&c->jheads[BASEHD].wbuf,
+ new_inode->i_ino);
++ if (whiteout)
++ ubifs_wbuf_add_ino_nolock(&c->jheads[BASEHD].wbuf,
++ whiteout->i_ino);
+ }
+ release_head(c, BASEHD);
+
+@@ -1368,8 +1392,6 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ err = ubifs_tnc_add_nm(c, &key, lnum, offs, dlen2, hash_dent2, old_nm);
+ if (err)
+ goto out_ro;
+-
+- ubifs_delete_orphan(c, whiteout->i_ino);
+ } else {
+ err = ubifs_add_dirt(c, lnum, dlen2);
+ if (err)
+@@ -1390,6 +1412,15 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ offs += ALIGN(ilen, 8);
+ }
+
++ if (whiteout) {
++ ino_key_init(c, &key, whiteout->i_ino);
++ err = ubifs_tnc_add(c, &key, lnum, offs, wlen,
++ hash_whiteout_inode);
++ if (err)
++ goto out_ro;
++ offs += ALIGN(wlen, 8);
++ }
++
+ ino_key_init(c, &key, old_dir->i_ino);
+ err = ubifs_tnc_add(c, &key, lnum, offs, plen, hash_old_dir);
+ if (err)
+@@ -1410,6 +1441,11 @@ int ubifs_jnl_rename(struct ubifs_info *c, const struct inode *old_dir,
+ new_ui->synced_i_size = new_ui->ui_size;
+ spin_unlock(&new_ui->ui_lock);
+ }
++ /*
++	 * There is no need to mark the whiteout inode clean. The whiteout
++	 * has zero size, so there is no need to update synced_i_size for
++	 * whiteout_ui.
++ */
+ mark_inode_clean(c, ubifs_inode(old_dir));
+ if (move)
+ mark_inode_clean(c, ubifs_inode(new_dir));
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index b501d0badaea2..5f88e484515a1 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -592,10 +592,16 @@ struct drm_display_info {
+ bool rgb_quant_range_selectable;
+
+ /**
+- * @edid_hdmi_dc_modes: Mask of supported hdmi deep color modes. Even
+- * more stuff redundant with @bus_formats.
++ * @edid_hdmi_rgb444_dc_modes: Mask of supported hdmi deep color modes
++ * in RGB 4:4:4. Even more stuff redundant with @bus_formats.
+ */
+- u8 edid_hdmi_dc_modes;
++ u8 edid_hdmi_rgb444_dc_modes;
++
++ /**
++ * @edid_hdmi_ycbcr444_dc_modes: Mask of supported hdmi deep color
++ * modes in YCbCr 4:4:4. Even more stuff redundant with @bus_formats.
++ */
++ u8 edid_hdmi_ycbcr444_dc_modes;
+
+ /**
+ * @cea_rev: CEA revision of the HDMI sink.
+diff --git a/include/drm/drm_dp_helper.h b/include/drm/drm_dp_helper.h
+index 30359e434c3f3..fdf3cf6ccc021 100644
+--- a/include/drm/drm_dp_helper.h
++++ b/include/drm/drm_dp_helper.h
+@@ -456,7 +456,7 @@ struct drm_panel;
+ #define DP_FEC_CAPABILITY_1 0x091 /* 2.0 */
+
+ /* DP-HDMI2.1 PCON DSC ENCODER SUPPORT */
+-#define DP_PCON_DSC_ENCODER_CAP_SIZE 0xC /* 0x9E - 0x92 */
++#define DP_PCON_DSC_ENCODER_CAP_SIZE 0xD /* 0x92 through 0x9E */
+ #define DP_PCON_DSC_ENCODER 0x092
+ # define DP_PCON_DSC_ENCODER_SUPPORTED (1 << 0)
+ # define DP_PCON_DSC_PPS_ENC_OVERRIDE (1 << 1)
+@@ -1528,8 +1528,6 @@ u8 drm_dp_get_adjust_request_pre_emphasis(const u8 link_status[DP_LINK_STATUS_SI
+ int lane);
+ u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
+ int lane);
+-u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
+- unsigned int lane);
+
+ #define DP_BRANCH_OUI_HEADER_SIZE 0xc
+ #define DP_RECEIVER_CAP_SIZE 0xf
+diff --git a/include/drm/drm_modeset_lock.h b/include/drm/drm_modeset_lock.h
+index b84693fbd2b50..ec4f543c3d950 100644
+--- a/include/drm/drm_modeset_lock.h
++++ b/include/drm/drm_modeset_lock.h
+@@ -34,6 +34,7 @@ struct drm_modeset_lock;
+ * struct drm_modeset_acquire_ctx - locking context (see ww_acquire_ctx)
+ * @ww_ctx: base acquire ctx
+ * @contended: used internally for -EDEADLK handling
++ * @stack_depot: used internally for contention debugging
+ * @locked: list of held locks
+ * @trylock_only: trylock mode used in atomic contexts/panic notifiers
+ * @interruptible: whether interruptible locking should be used.
+diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
+index a3dba31df01e9..6db58d1808665 100644
+--- a/include/linux/atomic/atomic-arch-fallback.h
++++ b/include/linux/atomic/atomic-arch-fallback.h
+@@ -151,7 +151,16 @@
+ static __always_inline int
+ arch_atomic_read_acquire(const atomic_t *v)
+ {
+- return smp_load_acquire(&(v)->counter);
++ int ret;
++
++ if (__native_word(atomic_t)) {
++ ret = smp_load_acquire(&(v)->counter);
++ } else {
++ ret = arch_atomic_read(v);
++ __atomic_acquire_fence();
++ }
++
++ return ret;
+ }
+ #define arch_atomic_read_acquire arch_atomic_read_acquire
+ #endif
+@@ -160,7 +169,12 @@ arch_atomic_read_acquire(const atomic_t *v)
+ static __always_inline void
+ arch_atomic_set_release(atomic_t *v, int i)
+ {
+- smp_store_release(&(v)->counter, i);
++ if (__native_word(atomic_t)) {
++ smp_store_release(&(v)->counter, i);
++ } else {
++ __atomic_release_fence();
++ arch_atomic_set(v, i);
++ }
+ }
+ #define arch_atomic_set_release arch_atomic_set_release
+ #endif
+@@ -1258,7 +1272,16 @@ arch_atomic_dec_if_positive(atomic_t *v)
+ static __always_inline s64
+ arch_atomic64_read_acquire(const atomic64_t *v)
+ {
+- return smp_load_acquire(&(v)->counter);
++ s64 ret;
++
++ if (__native_word(atomic64_t)) {
++ ret = smp_load_acquire(&(v)->counter);
++ } else {
++ ret = arch_atomic64_read(v);
++ __atomic_acquire_fence();
++ }
++
++ return ret;
+ }
+ #define arch_atomic64_read_acquire arch_atomic64_read_acquire
+ #endif
+@@ -1267,7 +1290,12 @@ arch_atomic64_read_acquire(const atomic64_t *v)
+ static __always_inline void
+ arch_atomic64_set_release(atomic64_t *v, s64 i)
+ {
+- smp_store_release(&(v)->counter, i);
++ if (__native_word(atomic64_t)) {
++ smp_store_release(&(v)->counter, i);
++ } else {
++ __atomic_release_fence();
++ arch_atomic64_set(v, i);
++ }
+ }
+ #define arch_atomic64_set_release arch_atomic64_set_release
+ #endif
+@@ -2358,4 +2386,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
+ #endif
+
+ #endif /* _LINUX_ATOMIC_FALLBACK_H */
+-// cca554917d7ea73d5e3e7397dd70c484cad9b2c4
++// 8e2cc06bc0d2c0967d2f8424762bd48555ee40ae
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index 049cf9421d831..f821b72433613 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -87,6 +87,9 @@ struct coredump_params {
+ loff_t written;
+ loff_t pos;
+ loff_t to_skip;
++ int vma_count;
++ size_t vma_data_size;
++ struct core_vma_metadata *vma_meta;
+ };
+
+ /*
+diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
+index b4de2010fba55..bc5c04d711bbc 100644
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -24,6 +24,7 @@
+ #include <linux/atomic.h>
+ #include <linux/kthread.h>
+ #include <linux/fs.h>
++#include <linux/blk-mq.h>
+
+ /* percpu_counter batch for blkg_[rw]stats, per-cpu drift doesn't matter */
+ #define BLKG_STAT_CPU_BATCH (INT_MAX / 2)
+@@ -604,6 +605,21 @@ static inline void blkcg_clear_delay(struct blkcg_gq *blkg)
+ atomic_dec(&blkg->blkcg->css.cgroup->congestion_count);
+ }
+
++/**
++ * blk_cgroup_mergeable - Determine whether to allow or disallow merges
++ * @rq: request to merge into
++ * @bio: bio to merge
++ *
++ * @bio and @rq should belong to the same cgroup and their issue_as_root should
++ * match. The latter is necessary as we don't want to throttle e.g. a metadata
++ * update because it happens to be next to a regular IO.
++ */
++static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio)
++{
++ return rq->bio->bi_blkg == bio->bi_blkg &&
++ bio_issue_as_root_blkg(rq->bio) == bio_issue_as_root_blkg(bio);
++}
++
+ void blk_cgroup_bio_start(struct bio *bio);
+ void blkcg_add_delay(struct blkcg_gq *blkg, u64 now, u64 delta);
+ void blkcg_schedule_throttle(struct request_queue *q, bool use_memdelay);
+@@ -659,6 +675,7 @@ static inline void blkg_put(struct blkcg_gq *blkg) { }
+ static inline bool blkcg_punt_bio_submit(struct bio *bio) { return false; }
+ static inline void blkcg_bio_issue_init(struct bio *bio) { }
+ static inline void blk_cgroup_bio_start(struct bio *bio) { }
++static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio) { return true; }
+
+ #define blk_queue_for_each_rl(rl, q) \
+ for ((rl) = &(q)->root_rl; (rl); (rl) = NULL)
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index fe065c394fff6..86c0f85df8bb4 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -317,7 +317,8 @@ enum {
+ BIO_TRACE_COMPLETION, /* bio_endio() should trace the final completion
+ * of this bio. */
+ BIO_CGROUP_ACCT, /* has been accounted to a cgroup */
+- BIO_TRACKED, /* set if bio goes through the rq_qos path */
++ BIO_QOS_THROTTLED, /* bio went through rq_qos throttle path */
++ BIO_QOS_MERGED, /* but went through rq_qos merge path */
+ BIO_REMAPPED,
+ BIO_ZONE_WRITE_LOCKED, /* Owns a zoned device zone write lock */
+ BIO_PERCPU_CACHE, /* can participate in per-cpu alloc cache */
+diff --git a/include/linux/coredump.h b/include/linux/coredump.h
+index 248a68c668b45..aa12ec94fae28 100644
+--- a/include/linux/coredump.h
++++ b/include/linux/coredump.h
+@@ -12,6 +12,8 @@ struct core_vma_metadata {
+ unsigned long start, end;
+ unsigned long flags;
+ unsigned long dump_size;
++ unsigned long pgoff;
++ struct file *file;
+ };
+
+ /*
+@@ -25,9 +27,6 @@ extern int dump_emit(struct coredump_params *cprm, const void *addr, int nr);
+ extern int dump_align(struct coredump_params *cprm, int align);
+ int dump_user_range(struct coredump_params *cprm, unsigned long start,
+ unsigned long len);
+-int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+- struct core_vma_metadata **vma_meta,
+- size_t *vma_data_size_ptr);
+ extern void do_coredump(const kernel_siginfo_t *siginfo);
+ #else
+ static inline void do_coredump(const kernel_siginfo_t *siginfo) {}
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index 02f362c661c80..3d7306c9a7065 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -502,6 +502,7 @@ struct fb_info {
+ } *apertures;
+
+ bool skip_vt_switch; /* no VT switch on suspend/resume required */
++ bool forced_out; /* set when being removed by another driver */
+ };
+
+ static inline struct apertures_struct *alloc_apertures(unsigned int max_num) {
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 819ec92dc2a82..db924fe379c9c 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -332,6 +332,8 @@ LSM_HOOK(int, 0, sctp_bind_connect, struct sock *sk, int optname,
+ struct sockaddr *address, int addrlen)
+ LSM_HOOK(void, LSM_RET_VOID, sctp_sk_clone, struct sctp_association *asoc,
+ struct sock *sk, struct sock *newsk)
++LSM_HOOK(int, 0, sctp_assoc_established, struct sctp_association *asoc,
++ struct sk_buff *skb)
+ #endif /* CONFIG_SECURITY_NETWORK */
+
+ #ifdef CONFIG_SECURITY_INFINIBAND
+diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
+index 3bf5c658bc448..419b5febc3ca5 100644
+--- a/include/linux/lsm_hooks.h
++++ b/include/linux/lsm_hooks.h
+@@ -1046,6 +1046,11 @@
+ * @asoc pointer to current sctp association structure.
+ * @sk pointer to current sock structure.
+ * @newsk pointer to new sock structure.
++ * @sctp_assoc_established:
++ * Passes the @asoc and @chunk->skb of the association COOKIE_ACK packet
++ * to the security module.
++ * @asoc pointer to sctp association structure.
++ * @skb pointer to skbuff of association packet.
+ *
+ * Security hooks for Infiniband
+ *
+diff --git a/include/linux/migrate.h b/include/linux/migrate.h
+index db96e10eb8da2..90e75d5a54d66 100644
+--- a/include/linux/migrate.h
++++ b/include/linux/migrate.h
+@@ -48,7 +48,15 @@ int folio_migrate_mapping(struct address_space *mapping,
+ struct folio *newfolio, struct folio *folio, int extra_count);
+
+ extern bool numa_demotion_enabled;
++extern void migrate_on_reclaim_init(void);
++#ifdef CONFIG_HOTPLUG_CPU
++extern void set_migration_target_nodes(void);
+ #else
++static inline void set_migration_target_nodes(void) {}
++#endif
++#else
++
++static inline void set_migration_target_nodes(void) {}
+
+ static inline void putback_movable_pages(struct list_head *l) {}
+ static inline int migrate_pages(struct list_head *l, new_page_t new,
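[Annotation, not part of the patch] The preprocessor nesting in the migrate.h hunk above is easy to misread: the pre-existing #else (shown as context) is reused as the else-branch of the newly added CONFIG_HOTPLUG_CPU conditional, which the added #endif then closes, and the added second #else becomes the else-branch of the enclosing CONFIG_MIGRATION block. Condensed, the resulting header reads like this (illustrative sketch of the surrounding code):

    #ifdef CONFIG_MIGRATION
    /* ... migration API declarations ... */
    extern void migrate_on_reclaim_init(void);
    #ifdef CONFIG_HOTPLUG_CPU
    extern void set_migration_target_nodes(void);
    #else
    static inline void set_migration_target_nodes(void) {}
    #endif
    #else  /* !CONFIG_MIGRATION */
    static inline void set_migration_target_nodes(void) {}
    /* ... no-op stubs such as putback_movable_pages() ... */
    #endif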
+diff --git a/include/linux/mtd/rawnand.h b/include/linux/mtd/rawnand.h
+index 5b88cd51fadb5..dcf90144d70b7 100644
+--- a/include/linux/mtd/rawnand.h
++++ b/include/linux/mtd/rawnand.h
+@@ -1240,6 +1240,7 @@ struct nand_secure_region {
+ * @lock: Lock protecting the suspended field. Also used to serialize accesses
+ * to the NAND device
+ * @suspended: Set to 1 when the device is suspended, 0 when it's not
++ * @resume_wq: wait queue to sleep if rawnand is in suspended state.
+ * @cur_cs: Currently selected target. -1 means no target selected, otherwise we
+ * should always have cur_cs >= 0 && cur_cs < nanddev_ntargets().
+ * NAND Controller drivers should not modify this value, but they're
+@@ -1294,6 +1295,7 @@ struct nand_chip {
+ /* Internals */
+ struct mutex lock;
+ unsigned int suspended : 1;
++ wait_queue_head_t resume_wq;
+ int cur_cs;
+ int read_retries;
+ struct nand_secure_region *secure_regions;
+diff --git a/include/linux/netfilter_netdev.h b/include/linux/netfilter_netdev.h
+index e6487a6911360..8676316547cc4 100644
+--- a/include/linux/netfilter_netdev.h
++++ b/include/linux/netfilter_netdev.h
+@@ -99,7 +99,7 @@ static inline struct sk_buff *nf_hook_egress(struct sk_buff *skb, int *rc,
+ return skb;
+
+ nf_hook_state_init(&state, NF_NETDEV_EGRESS,
+- NFPROTO_NETDEV, dev, NULL, NULL,
++ NFPROTO_NETDEV, NULL, dev, NULL,
+ dev_net(dev), NULL);
+
+ /* nf assumes rcu_read_lock, not just read_lock_bh */
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index 855dd9b3e84be..a662435c9b6f1 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -337,6 +337,7 @@ enum {
+ NVME_CTRL_ONCS_TIMESTAMP = 1 << 6,
+ NVME_CTRL_VWC_PRESENT = 1 << 0,
+ NVME_CTRL_OACS_SEC_SUPP = 1 << 0,
++ NVME_CTRL_OACS_NS_MNGT_SUPP = 1 << 3,
+ NVME_CTRL_OACS_DIRECTIVES = 1 << 5,
+ NVME_CTRL_OACS_DBBUF_SUPP = 1 << 8,
+ NVME_CTRL_LPA_CMD_EFFECTS_LOG = 1 << 1,
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 8253a5413d7c4..678fecdf6b812 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -668,6 +668,7 @@ struct pci_bus {
+ struct bin_attribute *legacy_io; /* Legacy I/O for this bus */
+ struct bin_attribute *legacy_mem; /* Legacy mem */
+ unsigned int is_added:1;
++ unsigned int unsafe_warn:1; /* warned about RW1C config write */
+ };
+
+ #define to_pci_bus(n) container_of(n, struct pci_bus, dev)
+diff --git a/include/linux/pstore.h b/include/linux/pstore.h
+index eb93a54cff31f..e97a8188f0fd8 100644
+--- a/include/linux/pstore.h
++++ b/include/linux/pstore.h
+@@ -14,7 +14,7 @@
+ #include <linux/errno.h>
+ #include <linux/kmsg_dump.h>
+ #include <linux/mutex.h>
+-#include <linux/semaphore.h>
++#include <linux/spinlock.h>
+ #include <linux/time.h>
+ #include <linux/types.h>
+
+@@ -87,7 +87,7 @@ struct pstore_record {
+ * @owner: module which is responsible for this backend driver
+ * @name: name of the backend driver
+ *
+- * @buf_lock: semaphore to serialize access to @buf
++ * @buf_lock: spinlock to serialize access to @buf
+ * @buf: preallocated crash dump buffer
+ * @bufsize: size of @buf available for crash dump bytes (must match
+ * smallest number of bytes available for writing to a
+@@ -178,7 +178,7 @@ struct pstore_info {
+ struct module *owner;
+ const char *name;
+
+- struct semaphore buf_lock;
++ spinlock_t buf_lock;
+ char *buf;
+ size_t bufsize;
+
+diff --git a/include/linux/randomize_kstack.h b/include/linux/randomize_kstack.h
+index bebc911161b6f..d373f1bcbf7ca 100644
+--- a/include/linux/randomize_kstack.h
++++ b/include/linux/randomize_kstack.h
+@@ -16,8 +16,20 @@ DECLARE_PER_CPU(u32, kstack_offset);
+ * alignment. Also, since this use is being explicitly masked to a max of
+ * 10 bits, stack-clash style attacks are unlikely. For more details see
+ * "VLAs" in Documentation/process/deprecated.rst
++ *
++ * The normal __builtin_alloca() is initialized with INIT_STACK_ALL (currently
++ * only with Clang and not GCC). Initializing the unused area on each syscall
++ * entry is expensive, and generating an implicit call to memset() may also be
++ * problematic (such as in noinstr functions). Therefore, if the compiler
++ * supports it (which it should if it initializes allocas), always use the
++ * "uninitialized" variant of the builtin.
+ */
+-void *__builtin_alloca(size_t size);
++#if __has_builtin(__builtin_alloca_uninitialized)
++#define __kstack_alloca __builtin_alloca_uninitialized
++#else
++#define __kstack_alloca __builtin_alloca
++#endif
++
+ /*
+ * Use, at most, 10 bits of entropy. We explicitly cap this to keep the
+ * "VLA" from being unbounded (see above). 10 bits leaves enough room for
+@@ -36,7 +48,7 @@ void *__builtin_alloca(size_t size);
+ if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
+ &randomize_kstack_offset)) { \
+ u32 offset = raw_cpu_read(kstack_offset); \
+- u8 *ptr = __builtin_alloca(KSTACK_OFFSET_MAX(offset)); \
++ u8 *ptr = __kstack_alloca(KSTACK_OFFSET_MAX(offset)); \
+ /* Keep allocation even after "ptr" loses scope. */ \
+ asm volatile("" :: "r"(ptr) : "memory"); \
+ } \
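[Annotation, not part of the patch] The randomize_kstack change above switches to __builtin_alloca_uninitialized() when the compiler offers it, so automatic stack-variable initialization does not pay for a memset() of the randomized gap on every syscall entry, as the new comment explains. The feature test is the standard __has_builtin pattern; a self-contained sketch (names hypothetical):

    /* Guard for compilers that predate __has_builtin itself. */
    #ifndef __has_builtin
    #define __has_builtin(x) 0
    #endif

    #if __has_builtin(__builtin_alloca_uninitialized)
    #define stack_alloc(size) __builtin_alloca_uninitialized(size)
    #else
    #define stack_alloc(size) __builtin_alloca(size) /* may be auto-initialized */
    #endif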
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 75ba8aa60248b..e806326eca723 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1630,6 +1630,14 @@ static inline unsigned int task_state_index(struct task_struct *tsk)
+ if (tsk_state == TASK_IDLE)
+ state = TASK_REPORT_IDLE;
+
++ /*
++ * We're lying here, but rather than expose a completely new task state
++ * to userspace, we can make this appear as if the task has gone through
++ * a regular rt_mutex_lock() call.
++ */
++ if (tsk_state == TASK_RTLOCK_WAIT)
++ state = TASK_UNINTERRUPTIBLE;
++
+ return fls(state);
+ }
+
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 6d72772182c82..25b3ef71f495e 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -1422,6 +1422,8 @@ int security_sctp_bind_connect(struct sock *sk, int optname,
+ struct sockaddr *address, int addrlen);
+ void security_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
+ struct sock *newsk);
++int security_sctp_assoc_established(struct sctp_association *asoc,
++ struct sk_buff *skb);
+
+ #else /* CONFIG_SECURITY_NETWORK */
+ static inline int security_unix_stream_connect(struct sock *sock,
+@@ -1641,6 +1643,12 @@ static inline void security_sctp_sk_clone(struct sctp_association *asoc,
+ struct sock *newsk)
+ {
+ }
++
++static inline int security_sctp_assoc_established(struct sctp_association *asoc,
++ struct sk_buff *skb)
++{
++ return 0;
++}
+ #endif /* CONFIG_SECURITY_NETWORK */
+
+ #ifdef CONFIG_SECURITY_INFINIBAND
+diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h
+index c58cc142d23f4..8c32935e1059d 100644
+--- a/include/linux/serial_core.h
++++ b/include/linux/serial_core.h
+@@ -458,6 +458,8 @@ extern void uart_handle_cts_change(struct uart_port *uport,
+ extern void uart_insert_char(struct uart_port *port, unsigned int status,
+ unsigned int overrun, unsigned int ch, unsigned int flag);
+
++void uart_xchar_out(struct uart_port *uport, int offset);
++
+ #ifdef CONFIG_MAGIC_SYSRQ_SERIAL
+ #define SYSRQ_TIMEOUT (HZ * 5)
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 8a636e678902d..42f885f0ce8af 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1475,6 +1475,11 @@ static inline unsigned int skb_end_offset(const struct sk_buff *skb)
+ {
+ return skb->end;
+ }
++
++static inline void skb_set_end_offset(struct sk_buff *skb, unsigned int offset)
++{
++ skb->end = offset;
++}
+ #else
+ static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
+ {
+@@ -1485,6 +1490,11 @@ static inline unsigned int skb_end_offset(const struct sk_buff *skb)
+ {
+ return skb->end - skb->head;
+ }
++
++static inline void skb_set_end_offset(struct sk_buff *skb, unsigned int offset)
++{
++ skb->end = skb->head + offset;
++}
+ #endif
+
+ /* Internal */
+@@ -1724,19 +1734,19 @@ static inline int skb_unclone(struct sk_buff *skb, gfp_t pri)
+ return 0;
+ }
+
+-/* This variant of skb_unclone() makes sure skb->truesize is not changed */
++/* This variant of skb_unclone() makes sure skb->truesize
++ * and skb_end_offset() are not changed, whenever a new skb->head is needed.
++ *
++ * Indeed there is no guarantee that ksize(kmalloc(X)) == ksize(kmalloc(X))
++ * when various debugging features are in place.
++ */
++int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri);
+ static inline int skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
+ {
+ might_sleep_if(gfpflags_allow_blocking(pri));
+
+- if (skb_cloned(skb)) {
+- unsigned int save = skb->truesize;
+- int res;
+-
+- res = pskb_expand_head(skb, 0, 0, pri);
+- skb->truesize = save;
+- return res;
+- }
++ if (skb_cloned(skb))
++ return __skb_unclone_keeptruesize(skb, pri);
+ return 0;
+ }
+
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 18a717fe62eb0..7f32dd59e7513 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -310,21 +310,16 @@ static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
+ kfree_skb(skb);
+ }
+
+-static inline void drop_sk_msg(struct sk_psock *psock, struct sk_msg *msg)
+-{
+- if (msg->skb)
+- sock_drop(psock->sk, msg->skb);
+- kfree(msg);
+-}
+-
+ static inline void sk_psock_queue_msg(struct sk_psock *psock,
+ struct sk_msg *msg)
+ {
+ spin_lock_bh(&psock->ingress_lock);
+ if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+ list_add_tail(&msg->list, &psock->ingress_msg);
+- else
+- drop_sk_msg(psock, msg);
++ else {
++ sk_msg_free(psock->sk, msg);
++ kfree(msg);
++ }
+ spin_unlock_bh(&psock->ingress_lock);
+ }
+
+diff --git a/include/linux/soc/ti/ti_sci_protocol.h b/include/linux/soc/ti/ti_sci_protocol.h
+index 0aad7009b50e6..bd0d11af76c5e 100644
+--- a/include/linux/soc/ti/ti_sci_protocol.h
++++ b/include/linux/soc/ti/ti_sci_protocol.h
+@@ -645,7 +645,7 @@ devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
+
+ static inline struct ti_sci_resource *
+ devm_ti_sci_get_resource(const struct ti_sci_handle *handle, struct device *dev,
+- u32 dev_id, u32 sub_type);
++ u32 dev_id, u32 sub_type)
+ {
+ return ERR_PTR(-EINVAL);
+ }
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index b519609af1d02..4417f667c757e 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -731,6 +731,8 @@ xdr_stream_decode_uint32_array(struct xdr_stream *xdr,
+
+ if (unlikely(xdr_stream_decode_u32(xdr, &len) < 0))
+ return -EBADMSG;
++ if (len > SIZE_MAX / sizeof(*p))
++ return -EBADMSG;
+ p = xdr_inline_decode(xdr, len * sizeof(*p));
+ if (unlikely(!p))
+ return -EBADMSG;
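[Annotation, not part of the patch] The added check above guards the multiplication len * sizeof(*p) against wrapping when len comes off the wire: dividing the type maximum by the element size turns an unsafe multiply into a safe compare. The same idiom in standalone C:

    #include <stdint.h>
    #include <stdlib.h>

    /* Reject counts that would overflow count * size before forming
     * the product; the division cannot wrap, the multiply could. */
    void *alloc_array(size_t count, size_t size)
    {
            if (size != 0 && count > SIZE_MAX / size)
                    return NULL;
            return malloc(count * size);
    }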
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
+index 955ea4d7af0b2..eef5e87c03b43 100644
+--- a/include/linux/sunrpc/xprt.h
++++ b/include/linux/sunrpc/xprt.h
+@@ -139,6 +139,9 @@ struct rpc_xprt_ops {
+ void (*rpcbind)(struct rpc_task *task);
+ void (*set_port)(struct rpc_xprt *xprt, unsigned short port);
+ void (*connect)(struct rpc_xprt *xprt, struct rpc_task *task);
++ int (*get_srcaddr)(struct rpc_xprt *xprt, char *buf,
++ size_t buflen);
++ unsigned short (*get_srcport)(struct rpc_xprt *xprt);
+ int (*buf_alloc)(struct rpc_task *task);
+ void (*buf_free)(struct rpc_task *task);
+ void (*prepare_request)(struct rpc_rqst *req);
+diff --git a/include/linux/sunrpc/xprtsock.h b/include/linux/sunrpc/xprtsock.h
+index 8c2a712cb2420..fed813ffe7db1 100644
+--- a/include/linux/sunrpc/xprtsock.h
++++ b/include/linux/sunrpc/xprtsock.h
+@@ -10,7 +10,6 @@
+
+ int init_socket_xprt(void);
+ void cleanup_socket_xprt(void);
+-unsigned short get_srcport(struct rpc_xprt *);
+
+ #define RPC_MIN_RESVPORT (1U)
+ #define RPC_MAX_RESVPORT (65535U)
+@@ -89,5 +88,6 @@ struct sock_xprt {
+ #define XPRT_SOCK_WAKE_WRITE (5)
+ #define XPRT_SOCK_WAKE_PENDING (6)
+ #define XPRT_SOCK_WAKE_DISCONNECT (7)
++#define XPRT_SOCK_CONNECT_SENT (8)
+
+ #endif /* _LINUX_SUNRPC_XPRTSOCK_H */
+diff --git a/include/linux/xarray.h b/include/linux/xarray.h
+index d6d5da6ed7354..66e28bc1a023f 100644
+--- a/include/linux/xarray.h
++++ b/include/linux/xarray.h
+@@ -9,6 +9,7 @@
+ * See Documentation/core-api/xarray.rst for how to use the XArray.
+ */
+
++#include <linux/bitmap.h>
+ #include <linux/bug.h>
+ #include <linux/compiler.h>
+ #include <linux/gfp.h>
+diff --git a/include/net/netfilter/nf_conntrack_helper.h b/include/net/netfilter/nf_conntrack_helper.h
+index 37f0fbefb060f..9939c366f720d 100644
+--- a/include/net/netfilter/nf_conntrack_helper.h
++++ b/include/net/netfilter/nf_conntrack_helper.h
+@@ -177,4 +177,5 @@ void nf_nat_helper_unregister(struct nf_conntrack_nat_helper *nat);
+ int nf_nat_helper_try_module_get(const char *name, u16 l3num,
+ u8 protonum);
+ void nf_nat_helper_put(struct nf_conntrack_helper *helper);
++void nf_ct_set_auto_assign_helper_warned(struct net *net);
+ #endif /*_NF_CONNTRACK_HELPER_H*/
+diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h
+index bd59e950f4d67..64daafd1fc41c 100644
+--- a/include/net/netfilter/nf_flow_table.h
++++ b/include/net/netfilter/nf_flow_table.h
+@@ -10,6 +10,8 @@
+ #include <linux/netfilter/nf_conntrack_tuple_common.h>
+ #include <net/flow_offload.h>
+ #include <net/dst.h>
++#include <linux/if_pppox.h>
++#include <linux/ppp_defs.h>
+
+ struct nf_flowtable;
+ struct nf_flow_rule;
+@@ -317,4 +319,20 @@ int nf_flow_rule_route_ipv6(struct net *net, const struct flow_offload *flow,
+ int nf_flow_table_offload_init(void);
+ void nf_flow_table_offload_exit(void);
+
++static inline __be16 nf_flow_pppoe_proto(const struct sk_buff *skb)
++{
++ __be16 proto;
++
++ proto = *((__be16 *)(skb_mac_header(skb) + ETH_HLEN +
++ sizeof(struct pppoe_hdr)));
++ switch (proto) {
++ case htons(PPP_IP):
++ return htons(ETH_P_IP);
++ case htons(PPP_IPV6):
++ return htons(ETH_P_IPV6);
++ }
++
++ return 0;
++}
++
+ #endif /* _NF_FLOW_TABLE_H */
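[Annotation, not part of the patch] nf_flow_pppoe_proto() above reads the 2-byte PPP protocol field that directly follows the PPPoE session header and maps it to the equivalent ethertype. A userspace sketch of the same parse, with constants copied from ppp_defs.h and if_ether.h and with bounds checks omitted for brevity:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    #define PPP_IP     0x21    /* Internet Protocol */
    #define PPP_IPV6   0x57    /* Internet Protocol v6 */
    #define ETH_P_IP   0x0800
    #define ETH_P_IPV6 0x86DD

    /* l2 points at the Ethernet header; the PPP protocol field sits
     * 14 (Ethernet) + 6 (PPPoE session header) bytes in. */
    static uint16_t pppoe_inner_ethertype(const uint8_t *l2)
    {
            uint16_t proto;

            memcpy(&proto, l2 + 14 + 6, sizeof(proto));
            switch (ntohs(proto)) {
            case PPP_IP:
                    return ETH_P_IP;
            case PPP_IPV6:
                    return ETH_P_IPV6;
            }
            return 0;
    }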
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index 647c53b261051..57e3e239a1fce 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -206,6 +206,7 @@ struct scsi_device {
+ unsigned rpm_autosuspend:1; /* Enable runtime autosuspend at device
+ * creation time */
+ unsigned ignore_media_change:1; /* Ignore MEDIA CHANGE on resume */
++ unsigned silence_suspend:1; /* Do not print runtime PM related messages */
+
+ unsigned int queue_stopped; /* request queue is quiesced */
+ bool offline_already; /* Device offline message logged */
+diff --git a/include/sound/intel-nhlt.h b/include/sound/intel-nhlt.h
+index 089a760d36eb7..6fb2d5e378fdd 100644
+--- a/include/sound/intel-nhlt.h
++++ b/include/sound/intel-nhlt.h
+@@ -18,6 +18,13 @@ enum nhlt_link_type {
+ NHLT_LINK_INVALID
+ };
+
++enum nhlt_device_type {
++ NHLT_DEVICE_BT = 0,
++ NHLT_DEVICE_DMIC = 1,
++ NHLT_DEVICE_I2S = 4,
++ NHLT_DEVICE_INVALID
++};
++
+ #if IS_ENABLED(CONFIG_ACPI) && IS_ENABLED(CONFIG_SND_INTEL_NHLT)
+
+ struct wav_fmt {
+@@ -41,13 +48,6 @@ struct wav_fmt_ext {
+ u8 sub_fmt[16];
+ } __packed;
+
+-enum nhlt_device_type {
+- NHLT_DEVICE_BT = 0,
+- NHLT_DEVICE_DMIC = 1,
+- NHLT_DEVICE_I2S = 4,
+- NHLT_DEVICE_INVALID
+-};
+-
+ struct nhlt_specific_cfg {
+ u32 size;
+ u8 caps[];
+@@ -133,6 +133,9 @@ void intel_nhlt_free(struct nhlt_acpi_table *addr);
+ int intel_nhlt_get_dmic_geo(struct device *dev, struct nhlt_acpi_table *nhlt);
+
+ bool intel_nhlt_has_endpoint_type(struct nhlt_acpi_table *nhlt, u8 link_type);
++
++int intel_nhlt_ssp_endpoint_mask(struct nhlt_acpi_table *nhlt, u8 device_type);
++
+ struct nhlt_specific_cfg *
+ intel_nhlt_get_endpoint_blob(struct device *dev, struct nhlt_acpi_table *nhlt,
+ u32 bus_id, u8 link_type, u8 vbps, u8 bps,
+@@ -163,6 +166,11 @@ static inline bool intel_nhlt_has_endpoint_type(struct nhlt_acpi_table *nhlt,
+ return false;
+ }
+
++static inline int intel_nhlt_ssp_endpoint_mask(struct nhlt_acpi_table *nhlt, u8 device_type)
++{
++ return 0;
++}
++
+ static inline struct nhlt_specific_cfg *
+ intel_nhlt_get_endpoint_blob(struct device *dev, struct nhlt_acpi_table *nhlt,
+ u32 bus_id, u8 link_type, u8 vbps, u8 bps,
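[Annotation, not part of the patch] Moving nhlt_device_type out of the IS_ENABLED() block follows the usual layout for optional-subsystem headers: type definitions stay visible unconditionally so callers and the new intel_nhlt_ssp_endpoint_mask() stub can name them, while only the function bodies are swapped for inline no-ops. The pattern in miniature (hypothetical names):

    /* Types are always visible... */
    enum widget_type { WIDGET_A, WIDGET_B };

    /* ...only the implementations are conditional. */
    #ifdef CONFIG_WIDGET
    int widget_mask(enum widget_type type);
    #else
    static inline int widget_mask(enum widget_type type) { return 0; }
    #endif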
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 314f2779cab52..6b99310b5b889 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -402,6 +402,7 @@ struct snd_pcm_runtime {
+ struct fasync_struct *fasync;
+ bool stop_operating; /* sync_stop will be called */
+ struct mutex buffer_mutex; /* protect for buffer changes */
++ atomic_t buffer_accessing; /* >0: in r/w operation, <0: blocked */
+
+ /* -- private section -- */
+ void *private_data;
+diff --git a/include/trace/events/ext4.h b/include/trace/events/ext4.h
+index 19e957b7f9410..1a0b7030f72a3 100644
+--- a/include/trace/events/ext4.h
++++ b/include/trace/events/ext4.h
+@@ -95,6 +95,17 @@ TRACE_DEFINE_ENUM(ES_REFERENCED_B);
+ { FALLOC_FL_COLLAPSE_RANGE, "COLLAPSE_RANGE"}, \
+ { FALLOC_FL_ZERO_RANGE, "ZERO_RANGE"})
+
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_XATTR);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_CROSS_RENAME);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_NOMEM);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_SWAP_BOOT);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_RESIZE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_RENAME_DIR);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_FALLOC_RANGE);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_INODE_JOURNAL_DATA);
++TRACE_DEFINE_ENUM(EXT4_FC_REASON_MAX);
++
+ #define show_fc_reason(reason) \
+ __print_symbolic(reason, \
+ { EXT4_FC_REASON_XATTR, "XATTR"}, \
+@@ -2723,41 +2734,50 @@ TRACE_EVENT(ext4_fc_commit_stop,
+
+ #define FC_REASON_NAME_STAT(reason) \
+ show_fc_reason(reason), \
+- __entry->sbi->s_fc_stats.fc_ineligible_reason_count[reason]
++ __entry->fc_ineligible_rc[reason]
+
+ TRACE_EVENT(ext4_fc_stats,
+- TP_PROTO(struct super_block *sb),
+-
+- TP_ARGS(sb),
++ TP_PROTO(struct super_block *sb),
+
+- TP_STRUCT__entry(
+- __field(dev_t, dev)
+- __field(struct ext4_sb_info *, sbi)
+- __field(int, count)
+- ),
++ TP_ARGS(sb),
+
+- TP_fast_assign(
+- __entry->dev = sb->s_dev;
+- __entry->sbi = EXT4_SB(sb);
+- ),
++ TP_STRUCT__entry(
++ __field(dev_t, dev)
++ __array(unsigned int, fc_ineligible_rc, EXT4_FC_REASON_MAX)
++ __field(unsigned long, fc_commits)
++ __field(unsigned long, fc_ineligible_commits)
++ __field(unsigned long, fc_numblks)
++ ),
+
+- TP_printk("dev %d:%d fc ineligible reasons:\n"
+- "%s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d, %s:%d; "
+- "num_commits:%ld, ineligible: %ld, numblks: %ld",
+- MAJOR(__entry->dev), MINOR(__entry->dev),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_XATTR),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_CROSS_RENAME),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_NOMEM),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_SWAP_BOOT),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_RESIZE),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_RENAME_DIR),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_FALLOC_RANGE),
+- FC_REASON_NAME_STAT(EXT4_FC_REASON_INODE_JOURNAL_DATA),
+- __entry->sbi->s_fc_stats.fc_num_commits,
+- __entry->sbi->s_fc_stats.fc_ineligible_commits,
+- __entry->sbi->s_fc_stats.fc_numblks)
++ TP_fast_assign(
++ int i;
+
++ __entry->dev = sb->s_dev;
++ for (i = 0; i < EXT4_FC_REASON_MAX; i++) {
++ __entry->fc_ineligible_rc[i] =
++ EXT4_SB(sb)->s_fc_stats.fc_ineligible_reason_count[i];
++ }
++ __entry->fc_commits = EXT4_SB(sb)->s_fc_stats.fc_num_commits;
++ __entry->fc_ineligible_commits =
++ EXT4_SB(sb)->s_fc_stats.fc_ineligible_commits;
++ __entry->fc_numblks = EXT4_SB(sb)->s_fc_stats.fc_numblks;
++ ),
++
++ TP_printk("dev %d,%d fc ineligible reasons:\n"
++ "%s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u, %s:%u "
++ "num_commits:%lu, ineligible: %lu, numblks: %lu",
++ MAJOR(__entry->dev), MINOR(__entry->dev),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_XATTR),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_CROSS_RENAME),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_JOURNAL_FLAG_CHANGE),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_NOMEM),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_SWAP_BOOT),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_RESIZE),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_RENAME_DIR),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_FALLOC_RANGE),
++ FC_REASON_NAME_STAT(EXT4_FC_REASON_INODE_JOURNAL_DATA),
++ __entry->fc_commits, __entry->fc_ineligible_commits,
++ __entry->fc_numblks)
+ );
+
+ #define DEFINE_TRACE_DENTRY_EVENT(__type) \
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index e70c90116edae..4a3ab0ed6e062 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -83,12 +83,15 @@ enum rxrpc_call_trace {
+ rxrpc_call_error,
+ rxrpc_call_got,
+ rxrpc_call_got_kernel,
++ rxrpc_call_got_timer,
+ rxrpc_call_got_userid,
+ rxrpc_call_new_client,
+ rxrpc_call_new_service,
+ rxrpc_call_put,
+ rxrpc_call_put_kernel,
+ rxrpc_call_put_noqueue,
++ rxrpc_call_put_notimer,
++ rxrpc_call_put_timer,
+ rxrpc_call_put_userid,
+ rxrpc_call_queued,
+ rxrpc_call_queued_ref,
+@@ -278,12 +281,15 @@ enum rxrpc_tx_point {
+ EM(rxrpc_call_error, "*E*") \
+ EM(rxrpc_call_got, "GOT") \
+ EM(rxrpc_call_got_kernel, "Gke") \
++ EM(rxrpc_call_got_timer, "GTM") \
+ EM(rxrpc_call_got_userid, "Gus") \
+ EM(rxrpc_call_new_client, "NWc") \
+ EM(rxrpc_call_new_service, "NWs") \
+ EM(rxrpc_call_put, "PUT") \
+ EM(rxrpc_call_put_kernel, "Pke") \
+- EM(rxrpc_call_put_noqueue, "PNQ") \
++ EM(rxrpc_call_put_noqueue, "PnQ") \
++ EM(rxrpc_call_put_notimer, "PnT") \
++ EM(rxrpc_call_put_timer, "PTM") \
+ EM(rxrpc_call_put_userid, "Pus") \
+ EM(rxrpc_call_queued, "QUE") \
+ EM(rxrpc_call_queued_ref, "QUR") \
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index b0383d371b9af..015bfec0dbfdb 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -2286,8 +2286,8 @@ union bpf_attr {
+ * Return
+ * The return value depends on the result of the test, and can be:
+ *
+- * * 0, if current task belongs to the cgroup2.
+- * * 1, if current task does not belong to the cgroup2.
++ * * 1, if current task belongs to the cgroup2.
++ * * 0, if current task does not belong to the cgroup2.
+ * * A negative error code, if an error occurred.
+ *
+ * long bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+@@ -2975,8 +2975,8 @@ union bpf_attr {
+ *
+ * # sysctl kernel.perf_event_max_stack=<new value>
+ * Return
+- * A non-negative value equal to or less than *size* on success,
+- * or a negative error in case of failure.
++ * The non-negative copied *buf* length equal to or less than
++ * *size* on success, or a negative error in case of failure.
+ *
+ * long bpf_skb_load_bytes_relative(const void *skb, u32 offset, void *to, u32 len, u32 start_header)
+ * Description
+@@ -4279,8 +4279,8 @@ union bpf_attr {
+ *
+ * # sysctl kernel.perf_event_max_stack=<new value>
+ * Return
+- * A non-negative value equal to or less than *size* on success,
+- * or a negative error in case of failure.
++ * The non-negative copied *buf* length equal to or less than
++ * *size* on success, or a negative error in case of failure.
+ *
+ * long bpf_load_hdr_opt(struct bpf_sock_ops *skops, void *searchby_res, u32 len, u64 flags)
+ * Description
+diff --git a/include/uapi/linux/loop.h b/include/uapi/linux/loop.h
+index 24a1c45bd1ae2..98e60801195e2 100644
+--- a/include/uapi/linux/loop.h
++++ b/include/uapi/linux/loop.h
+@@ -45,7 +45,7 @@ struct loop_info {
+ unsigned long lo_inode; /* ioctl r/o */
+ __kernel_old_dev_t lo_rdevice; /* ioctl r/o */
+ int lo_offset;
+- int lo_encrypt_type;
++ int lo_encrypt_type; /* obsolete, ignored */
+ int lo_encrypt_key_size; /* ioctl w/o */
+ int lo_flags;
+ char lo_name[LO_NAME_SIZE];
+@@ -61,7 +61,7 @@ struct loop_info64 {
+ __u64 lo_offset;
+ __u64 lo_sizelimit;/* bytes, 0 == max available */
+ __u32 lo_number; /* ioctl r/o */
+- __u32 lo_encrypt_type;
++ __u32 lo_encrypt_type; /* obsolete, ignored */
+ __u32 lo_encrypt_key_size; /* ioctl w/o */
+ __u32 lo_flags;
+ __u8 lo_file_name[LO_NAME_SIZE];
+diff --git a/include/uapi/linux/omap3isp.h b/include/uapi/linux/omap3isp.h
+index 87b55755f4ffe..d9db7ad438908 100644
+--- a/include/uapi/linux/omap3isp.h
++++ b/include/uapi/linux/omap3isp.h
+@@ -162,6 +162,7 @@ struct omap3isp_h3a_aewb_config {
+ * struct omap3isp_stat_data - Statistic data sent to or received from user
+ * @ts: Timestamp of returned framestats.
+ * @buf: Pointer to pass to user.
++ * @buf_size: Size of buffer.
+ * @frame_number: Frame number of requested stats.
+ * @cur_frame: Current frame number being processed.
+ * @config_counter: Number of the configuration associated with the data.
+@@ -176,10 +177,12 @@ struct omap3isp_stat_data {
+ struct timeval ts;
+ #endif
+ void __user *buf;
+- __u32 buf_size;
+- __u16 frame_number;
+- __u16 cur_frame;
+- __u16 config_counter;
++ __struct_group(/* no tag */, frame, /* no attrs */,
++ __u32 buf_size;
++ __u16 frame_number;
++ __u16 cur_frame;
++ __u16 config_counter;
++ );
+ };
+
+ #ifdef __KERNEL__
+@@ -189,10 +192,12 @@ struct omap3isp_stat_data_time32 {
+ __s32 tv_usec;
+ } ts;
+ __u32 buf;
+- __u32 buf_size;
+- __u16 frame_number;
+- __u16 cur_frame;
+- __u16 config_counter;
++ __struct_group(/* no tag */, frame, /* no attrs */,
++ __u32 buf_size;
++ __u16 frame_number;
++ __u16 cur_frame;
++ __u16 config_counter;
++ );
+ };
+ #endif
+
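[Annotation, not part of the patch] The __struct_group() rework above gives the four trailing members a named mirror ("frame") without changing layout or ABI: the macro emits an anonymous union holding both the bare members and a tagged struct of the same members, so existing field accesses keep working while code that copies all four at once can memcpy() one sized object instead of writing across member boundaries, which fortified string functions reject. A condensed sketch of roughly what the macro expands to:

    #include <string.h>

    struct stat_data {
            void *buf;
            union {
                    struct {                /* bare members, old accessors */
                            unsigned int   buf_size;
                            unsigned short frame_number;
                            unsigned short cur_frame;
                            unsigned short config_counter;
                    };
                    struct {                /* the grouped view */
                            unsigned int   buf_size;
                            unsigned short frame_number;
                            unsigned short cur_frame;
                            unsigned short config_counter;
                    } frame;
            };
    };

    /* One bounded copy of the whole group. */
    static void copy_frame(struct stat_data *dst, const struct stat_data *src)
    {
            memcpy(&dst->frame, &src->frame, sizeof(dst->frame));
    }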
+diff --git a/include/uapi/linux/rfkill.h b/include/uapi/linux/rfkill.h
+index 9b77cfc42efa3..283c5a7b3f2c8 100644
+--- a/include/uapi/linux/rfkill.h
++++ b/include/uapi/linux/rfkill.h
+@@ -159,8 +159,16 @@ struct rfkill_event_ext {
+ * old behaviour for all userspace, unless it explicitly opts in to the
+ * rules outlined here by using the new &struct rfkill_event_ext.
+ *
+- * Userspace using &struct rfkill_event_ext must adhere to the following
+- * rules
++ * Additionally, some other userspace (bluez, g-s-d) was reading with a
++ * large size but as streaming reads rather than message-based, or with
++ * too strict checks for the returned size. So eventually, we completely
++ * reverted this, and extended messages need to be opted in to by using
++ * an ioctl:
++ *
++ * ioctl(fd, RFKILL_IOCTL_MAX_SIZE, sizeof(struct rfkill_event_ext));
++ *
++ * Userspace using &struct rfkill_event_ext and the ioctl must adhere to
++ * the following rules:
+ *
+ * 1. accept short writes, optionally using them to detect that it's
+ * running on an older kernel;
+@@ -175,6 +183,8 @@ struct rfkill_event_ext {
+ #define RFKILL_IOC_MAGIC 'R'
+ #define RFKILL_IOC_NOINPUT 1
+ #define RFKILL_IOCTL_NOINPUT _IO(RFKILL_IOC_MAGIC, RFKILL_IOC_NOINPUT)
++#define RFKILL_IOC_MAX_SIZE 2
++#define RFKILL_IOCTL_MAX_SIZE _IOW(RFKILL_IOC_MAGIC, RFKILL_IOC_EXT_SIZE, __u32)
+
+ /* and that's all userspace gets */
+
+diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
+index 9a402fdb60e97..77ee207623a9b 100644
+--- a/include/uapi/linux/rseq.h
++++ b/include/uapi/linux/rseq.h
+@@ -105,23 +105,11 @@ struct rseq {
+ * Read and set by the kernel. Set by user-space with single-copy
+ * atomicity semantics. This field should only be updated by the
+ * thread which registered this data structure. Aligned on 64-bit.
++ *
++ * 32-bit architectures should update the low order bits of the
++ * rseq_cs field, leaving the high order bits initialized to 0.
+ */
+- union {
+- __u64 ptr64;
+-#ifdef __LP64__
+- __u64 ptr;
+-#else
+- struct {
+-#if (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || defined(__BIG_ENDIAN)
+- __u32 padding; /* Initialized to zero. */
+- __u32 ptr32;
+-#else /* LITTLE */
+- __u32 ptr32;
+- __u32 padding; /* Initialized to zero. */
+-#endif /* ENDIAN */
+- } ptr;
+-#endif
+- } rseq_cs;
++ __u64 rseq_cs;
+
+ /*
+ * Restartable sequences flags field.
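[Annotation, not part of the patch] Replacing the endian-dependent union with a plain __u64 shifts the burden onto 32-bit userspace, as the new comment states: store the pointer into the low-order 32 bits and leave the high bits zero. On a little-endian 32-bit target that is a single aligned 32-bit store; a sketch (big-endian would have to address the word at offset 4 instead):

    #include <stdint.h>

    /* 32-bit little-endian sketch: update only the low word of the
     * __u64 rseq_cs slot; the high word was initialized to zero and
     * never changes. */
    static void set_rseq_cs(uint64_t *slot, const void *cs)
    {
            *(volatile uint32_t *)slot = (uint32_t)(uintptr_t)cs;
    }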
+diff --git a/include/uapi/linux/serial_core.h b/include/uapi/linux/serial_core.h
+index c4042dcfdc0c3..8885e69178bd7 100644
+--- a/include/uapi/linux/serial_core.h
++++ b/include/uapi/linux/serial_core.h
+@@ -68,6 +68,9 @@
+ /* NVIDIA Tegra Combined UART */
+ #define PORT_TEGRA_TCU 41
+
++/* ASPEED AST2x00 virtual UART */
++#define PORT_ASPEED_VUART 42
++
+ /* Intel EG20 */
+ #define PORT_PCH_8LINE 44
+ #define PORT_PCH_2LINE 45
+diff --git a/kernel/audit.h b/kernel/audit.h
+index c4498090a5bd6..58b66543b4d57 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -201,6 +201,10 @@ struct audit_context {
+ struct {
+ char *name;
+ } module;
++ struct {
++ struct audit_ntp_data ntp_data;
++ struct timespec64 tk_injoffset;
++ } time;
+ };
+ int fds[2];
+ struct audit_proctitle proctitle;
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index a83928cbdcb7c..ea2ee1181921e 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1340,6 +1340,53 @@ static void audit_log_fcaps(struct audit_buffer *ab, struct audit_names *name)
+ from_kuid(&init_user_ns, name->fcap.rootid));
+ }
+
++static void audit_log_time(struct audit_context *context, struct audit_buffer **ab)
++{
++ const struct audit_ntp_data *ntp = &context->time.ntp_data;
++ const struct timespec64 *tk = &context->time.tk_injoffset;
++ static const char * const ntp_name[] = {
++ "offset",
++ "freq",
++ "status",
++ "tai",
++ "tick",
++ "adjust",
++ };
++ int type;
++
++ if (context->type == AUDIT_TIME_ADJNTPVAL) {
++ for (type = 0; type < AUDIT_NTP_NVALS; type++) {
++ if (ntp->vals[type].newval != ntp->vals[type].oldval) {
++ if (!*ab) {
++ *ab = audit_log_start(context,
++ GFP_KERNEL,
++ AUDIT_TIME_ADJNTPVAL);
++ if (!*ab)
++ return;
++ }
++ audit_log_format(*ab, "op=%s old=%lli new=%lli",
++ ntp_name[type],
++ ntp->vals[type].oldval,
++ ntp->vals[type].newval);
++ audit_log_end(*ab);
++ *ab = NULL;
++ }
++ }
++ }
++ if (tk->tv_sec != 0 || tk->tv_nsec != 0) {
++ if (!*ab) {
++ *ab = audit_log_start(context, GFP_KERNEL,
++ AUDIT_TIME_INJOFFSET);
++ if (!*ab)
++ return;
++ }
++ audit_log_format(*ab, "sec=%lli nsec=%li",
++ (long long)tk->tv_sec, tk->tv_nsec);
++ audit_log_end(*ab);
++ *ab = NULL;
++ }
++}
++
+ static void show_special(struct audit_context *context, int *call_panic)
+ {
+ struct audit_buffer *ab;
+@@ -1454,6 +1501,11 @@ static void show_special(struct audit_context *context, int *call_panic)
+ audit_log_format(ab, "(null)");
+
+ break;
++ case AUDIT_TIME_ADJNTPVAL:
++ case AUDIT_TIME_INJOFFSET:
++ /* this call deviates from the rest, eating the buffer */
++ audit_log_time(context, &ab);
++ break;
+ }
+ audit_log_end(ab);
+ }
+@@ -2849,31 +2901,26 @@ void __audit_fanotify(unsigned int response)
+
+ void __audit_tk_injoffset(struct timespec64 offset)
+ {
+- audit_log(audit_context(), GFP_KERNEL, AUDIT_TIME_INJOFFSET,
+- "sec=%lli nsec=%li",
+- (long long)offset.tv_sec, offset.tv_nsec);
+-}
+-
+-static void audit_log_ntp_val(const struct audit_ntp_data *ad,
+- const char *op, enum audit_ntp_type type)
+-{
+- const struct audit_ntp_val *val = &ad->vals[type];
+-
+- if (val->newval == val->oldval)
+- return;
++ struct audit_context *context = audit_context();
+
+- audit_log(audit_context(), GFP_KERNEL, AUDIT_TIME_ADJNTPVAL,
+- "op=%s old=%lli new=%lli", op, val->oldval, val->newval);
++ /* only set type if not already set by NTP */
++ if (!context->type)
++ context->type = AUDIT_TIME_INJOFFSET;
++ memcpy(&context->time.tk_injoffset, &offset, sizeof(offset));
+ }
+
+ void __audit_ntp_log(const struct audit_ntp_data *ad)
+ {
+- audit_log_ntp_val(ad, "offset", AUDIT_NTP_OFFSET);
+- audit_log_ntp_val(ad, "freq", AUDIT_NTP_FREQ);
+- audit_log_ntp_val(ad, "status", AUDIT_NTP_STATUS);
+- audit_log_ntp_val(ad, "tai", AUDIT_NTP_TAI);
+- audit_log_ntp_val(ad, "tick", AUDIT_NTP_TICK);
+- audit_log_ntp_val(ad, "adjust", AUDIT_NTP_ADJUST);
++ struct audit_context *context = audit_context();
++ int type;
++
++ for (type = 0; type < AUDIT_NTP_NVALS; type++)
++ if (ad->vals[type].newval != ad->vals[type].oldval) {
++ /* unconditionally set type, overwriting TK */
++ context->type = AUDIT_TIME_ADJNTPVAL;
++ memcpy(&context->time.ntp_data, ad, sizeof(*ad));
++ break;
++ }
+ }
+
+ void __audit_log_nfcfg(const char *name, u8 af, unsigned int nentries,
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 3e23b3fa79ff6..ac89e65d1692e 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -403,6 +403,9 @@ static struct btf_type btf_void;
+ static int btf_resolve(struct btf_verifier_env *env,
+ const struct btf_type *t, u32 type_id);
+
++static int btf_func_check(struct btf_verifier_env *env,
++ const struct btf_type *t);
++
+ static bool btf_type_is_modifier(const struct btf_type *t)
+ {
+ /* Some of them is not strictly a C modifier
+@@ -579,6 +582,7 @@ static bool btf_type_needs_resolve(const struct btf_type *t)
+ btf_type_is_struct(t) ||
+ btf_type_is_array(t) ||
+ btf_type_is_var(t) ||
++ btf_type_is_func(t) ||
+ btf_type_is_decl_tag(t) ||
+ btf_type_is_datasec(t);
+ }
+@@ -3533,9 +3537,24 @@ static s32 btf_func_check_meta(struct btf_verifier_env *env,
+ return 0;
+ }
+
++static int btf_func_resolve(struct btf_verifier_env *env,
++ const struct resolve_vertex *v)
++{
++ const struct btf_type *t = v->t;
++ u32 next_type_id = t->type;
++ int err;
++
++ err = btf_func_check(env, t);
++ if (err)
++ return err;
++
++ env_stack_pop_resolved(env, next_type_id, 0);
++ return 0;
++}
++
+ static struct btf_kind_operations func_ops = {
+ .check_meta = btf_func_check_meta,
+- .resolve = btf_df_resolve,
++ .resolve = btf_func_resolve,
+ .check_member = btf_df_check_member,
+ .check_kflag_member = btf_df_check_kflag_member,
+ .log_details = btf_ref_type_log,
+@@ -4156,7 +4175,7 @@ static bool btf_resolve_valid(struct btf_verifier_env *env,
+ return !btf_resolved_type_id(btf, type_id) &&
+ !btf_resolved_type_size(btf, type_id);
+
+- if (btf_type_is_decl_tag(t))
++ if (btf_type_is_decl_tag(t) || btf_type_is_func(t))
+ return btf_resolved_type_id(btf, type_id) &&
+ !btf_resolved_type_size(btf, type_id);
+
+@@ -4246,12 +4265,6 @@ static int btf_check_all_types(struct btf_verifier_env *env)
+ if (err)
+ return err;
+ }
+-
+- if (btf_type_is_func(t)) {
+- err = btf_func_check(env, t);
+- if (err)
+- return err;
+- }
+ }
+
+ return 0;
+@@ -6201,12 +6214,17 @@ bool btf_id_set_contains(const struct btf_id_set *set, u32 id)
+ return bsearch(&id, set->ids, set->cnt, sizeof(u32), btf_id_cmp_func) != NULL;
+ }
+
++enum {
++ BTF_MODULE_F_LIVE = (1 << 0),
++};
++
+ #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
+ struct btf_module {
+ struct list_head list;
+ struct module *module;
+ struct btf *btf;
+ struct bin_attribute *sysfs_attr;
++ int flags;
+ };
+
+ static LIST_HEAD(btf_modules);
+@@ -6234,7 +6252,8 @@ static int btf_module_notify(struct notifier_block *nb, unsigned long op,
+ int err = 0;
+
+ if (mod->btf_data_size == 0 ||
+- (op != MODULE_STATE_COMING && op != MODULE_STATE_GOING))
++ (op != MODULE_STATE_COMING && op != MODULE_STATE_LIVE &&
++ op != MODULE_STATE_GOING))
+ goto out;
+
+ switch (op) {
+@@ -6292,6 +6311,17 @@ static int btf_module_notify(struct notifier_block *nb, unsigned long op,
+ btf_mod->sysfs_attr = attr;
+ }
+
++ break;
++ case MODULE_STATE_LIVE:
++ mutex_lock(&btf_module_mutex);
++ list_for_each_entry_safe(btf_mod, tmp, &btf_modules, list) {
++ if (btf_mod->module != module)
++ continue;
++
++ btf_mod->flags |= BTF_MODULE_F_LIVE;
++ break;
++ }
++ mutex_unlock(&btf_module_mutex);
+ break;
+ case MODULE_STATE_GOING:
+ mutex_lock(&btf_module_mutex);
+@@ -6339,7 +6369,12 @@ struct module *btf_try_get_module(const struct btf *btf)
+ if (btf_mod->btf != btf)
+ continue;
+
+- if (try_module_get(btf_mod->module))
++ /* We must only consider module whose __init routine has
++ * finished, hence we must check for BTF_MODULE_F_LIVE flag,
++ * which is set from the notifier callback for
++ * MODULE_STATE_LIVE.
++ */
++ if ((btf_mod->flags & BTF_MODULE_F_LIVE) && try_module_get(btf_mod->module))
+ res = btf_mod->module;
+
+ break;
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index 22c8ae94e4c1c..2823dcefae10e 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -166,7 +166,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
+ }
+
+ static struct perf_callchain_entry *
+-get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
++get_callchain_entry_for_task(struct task_struct *task, u32 max_depth)
+ {
+ #ifdef CONFIG_STACKTRACE
+ struct perf_callchain_entry *entry;
+@@ -177,9 +177,8 @@ get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
+ if (!entry)
+ return NULL;
+
+- entry->nr = init_nr +
+- stack_trace_save_tsk(task, (unsigned long *)(entry->ip + init_nr),
+- sysctl_perf_event_max_stack - init_nr, 0);
++ entry->nr = stack_trace_save_tsk(task, (unsigned long *)entry->ip,
++ max_depth, 0);
+
+ /* stack_trace_save_tsk() works on unsigned long array, while
+ * perf_callchain_entry uses u64 array. For 32-bit systems, it is
+@@ -191,7 +190,7 @@ get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
+ int i;
+
+ /* copy data from the end to avoid using extra buffer */
+- for (i = entry->nr - 1; i >= (int)init_nr; i--)
++ for (i = entry->nr - 1; i >= 0; i--)
+ to[i] = (u64)(from[i]);
+ }
+
+@@ -208,27 +207,19 @@ static long __bpf_get_stackid(struct bpf_map *map,
+ {
+ struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
+ struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
+- u32 max_depth = map->value_size / stack_map_data_size(map);
+- /* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
+- u32 init_nr = sysctl_perf_event_max_stack - max_depth;
+ u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ u32 hash, id, trace_nr, trace_len;
+ bool user = flags & BPF_F_USER_STACK;
+ u64 *ips;
+ bool hash_matches;
+
+- /* get_perf_callchain() guarantees that trace->nr >= init_nr
+- * and trace-nr <= sysctl_perf_event_max_stack, so trace_nr <= max_depth
+- */
+- trace_nr = trace->nr - init_nr;
+-
+- if (trace_nr <= skip)
++ if (trace->nr <= skip)
+ /* skipping more than usable stack trace */
+ return -EFAULT;
+
+- trace_nr -= skip;
++ trace_nr = trace->nr - skip;
+ trace_len = trace_nr * sizeof(u64);
+- ips = trace->ip + skip + init_nr;
++ ips = trace->ip + skip;
+ hash = jhash2((u32 *)ips, trace_len / sizeof(u32), 0);
+ id = hash & (smap->n_buckets - 1);
+ bucket = READ_ONCE(smap->buckets[id]);
+@@ -285,8 +276,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
+ u64, flags)
+ {
+ u32 max_depth = map->value_size / stack_map_data_size(map);
+- /* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
+- u32 init_nr = sysctl_perf_event_max_stack - max_depth;
++ u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ bool user = flags & BPF_F_USER_STACK;
+ struct perf_callchain_entry *trace;
+ bool kernel = !user;
+@@ -295,8 +285,12 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
+ BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
+ return -EINVAL;
+
+- trace = get_perf_callchain(regs, init_nr, kernel, user,
+- sysctl_perf_event_max_stack, false, false);
++ max_depth += skip;
++ if (max_depth > sysctl_perf_event_max_stack)
++ max_depth = sysctl_perf_event_max_stack;
++
++ trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
++ false, false);
+
+ if (unlikely(!trace))
+ /* couldn't fetch the stack trace */
+@@ -387,7 +381,7 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ struct perf_callchain_entry *trace_in,
+ void *buf, u32 size, u64 flags)
+ {
+- u32 init_nr, trace_nr, copy_len, elem_size, num_elem;
++ u32 trace_nr, copy_len, elem_size, num_elem, max_depth;
+ bool user_build_id = flags & BPF_F_USER_BUILD_ID;
+ u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
+ bool user = flags & BPF_F_USER_STACK;
+@@ -412,30 +406,28 @@ static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
+ goto err_fault;
+
+ num_elem = size / elem_size;
+- if (sysctl_perf_event_max_stack < num_elem)
+- init_nr = 0;
+- else
+- init_nr = sysctl_perf_event_max_stack - num_elem;
++ max_depth = num_elem + skip;
++ if (sysctl_perf_event_max_stack < max_depth)
++ max_depth = sysctl_perf_event_max_stack;
+
+ if (trace_in)
+ trace = trace_in;
+ else if (kernel && task)
+- trace = get_callchain_entry_for_task(task, init_nr);
++ trace = get_callchain_entry_for_task(task, max_depth);
+ else
+- trace = get_perf_callchain(regs, init_nr, kernel, user,
+- sysctl_perf_event_max_stack,
++ trace = get_perf_callchain(regs, 0, kernel, user, max_depth,
+ false, false);
+ if (unlikely(!trace))
+ goto err_fault;
+
+- trace_nr = trace->nr - init_nr;
+- if (trace_nr < skip)
++ if (trace->nr < skip)
+ goto err_fault;
+
+- trace_nr -= skip;
++ trace_nr = trace->nr - skip;
+ trace_nr = (trace_nr <= num_elem) ? trace_nr : num_elem;
+ copy_len = trace_nr * elem_size;
+- ips = trace->ip + skip + init_nr;
++
++ ips = trace->ip + skip;
+ if (user && user_build_id)
+ stack_map_get_build_id_offset(buf, ips, trace_nr, user);
+ else
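[Annotation, not part of the patch] The stackmap rework above drops the old init_nr offset bookkeeping and instead asks the unwinder for num_elem + skip frames, clamped to the perf sysctl limit, then discards the first skip entries. The new arithmetic, isolated:

    /* Frames to request: enough for the skipped prefix plus the
     * caller's buffer, but never more than the configured cap. */
    static unsigned int stack_depth(unsigned int num_elem,
                                    unsigned int skip,
                                    unsigned int cap)
    {
            unsigned int max_depth = num_elem + skip;

            return max_depth > cap ? cap : max_depth;
    }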
+diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
+index df2bface866ef..85cb51c4a17e6 100644
+--- a/kernel/debug/kdb/kdb_support.c
++++ b/kernel/debug/kdb/kdb_support.c
+@@ -291,7 +291,7 @@ int kdb_getarea_size(void *res, unsigned long addr, size_t size)
+ */
+ int kdb_putarea_size(unsigned long addr, void *res, size_t size)
+ {
+- int ret = copy_from_kernel_nofault((char *)addr, (char *)res, size);
++ int ret = copy_to_kernel_nofault((char *)addr, (char *)res, size);
+ if (ret) {
+ if (!KDB_STATE(SUPPRESS)) {
+ kdb_func_printf("Bad address 0x%lx\n", addr);
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 7a14ca29c3778..f8ff598596b85 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -927,7 +927,7 @@ static __init int dma_debug_cmdline(char *str)
+ global_disable = true;
+ }
+
+- return 0;
++ return 1;
+ }
+
+ static __init int dma_debug_entries_cmdline(char *str)
+@@ -936,7 +936,7 @@ static __init int dma_debug_entries_cmdline(char *str)
+ return -EINVAL;
+ if (!get_option(&str, &nr_prealloc_entries))
+ nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
+- return 0;
++ return 1;
+ }
+
+ __setup("dma_debug=", dma_debug_cmdline);
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 6db1c475ec827..6c350555e5a1c 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -701,13 +701,10 @@ void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+ void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+ size_t size, enum dma_data_direction dir)
+ {
+- /*
+- * Unconditional bounce is necessary to avoid corruption on
+- * sync_*_for_cpu or dma_ummap_* when the device didn't overwrite
+- * the whole lengt of the bounce buffer.
+- */
+- swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+- BUG_ON(!valid_dma_direction(dir));
++ if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
++ swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
++ else
++ BUG_ON(dir != DMA_FROM_DEVICE);
+ }
+
+ void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 6859229497b15..69cf71d973121 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10574,8 +10574,11 @@ perf_event_parse_addr_filter(struct perf_event *event, char *fstr,
+ }
+
+ /* ready to consume more filters */
++ kfree(filename);
++ filename = NULL;
+ state = IF_STATE_ACTION;
+ filter = NULL;
++ kernel = 0;
+ }
+ }
+
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index 585494ec464f9..bc475e62279d2 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -190,7 +190,7 @@ static int klp_find_object_symbol(const char *objname, const char *name,
+ return -EINVAL;
+ }
+
+-static int klp_resolve_symbols(Elf64_Shdr *sechdrs, const char *strtab,
++static int klp_resolve_symbols(Elf_Shdr *sechdrs, const char *strtab,
+ unsigned int symndx, Elf_Shdr *relasec,
+ const char *sec_objname)
+ {
+@@ -218,7 +218,7 @@ static int klp_resolve_symbols(Elf64_Shdr *sechdrs, const char *strtab,
+ relas = (Elf_Rela *) relasec->sh_addr;
+ /* For each rela in this klp relocation section */
+ for (i = 0; i < relasec->sh_size / sizeof(Elf_Rela); i++) {
+- sym = (Elf64_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
++ sym = (Elf_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
+ if (sym->st_shndx != SHN_LIVEPATCH) {
+ pr_err("symbol %s is not marked as a livepatch symbol\n",
+ strtab + sym->st_name);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index f8a0212189cad..4675a686f942f 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -183,11 +183,9 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
+ static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
+ unsigned long nr_lock_classes;
+ unsigned long nr_zapped_classes;
+-#ifndef CONFIG_DEBUG_LOCKDEP
+-static
+-#endif
++unsigned long max_lock_class_idx;
+ struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+-static DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
++DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
+
+ static inline struct lock_class *hlock_class(struct held_lock *hlock)
+ {
+@@ -338,7 +336,7 @@ static inline void lock_release_holdtime(struct held_lock *hlock)
+ * elements. These elements are linked together by the lock_entry member in
+ * struct lock_class.
+ */
+-LIST_HEAD(all_lock_classes);
++static LIST_HEAD(all_lock_classes);
+ static LIST_HEAD(free_lock_classes);
+
+ /**
+@@ -1252,6 +1250,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ struct lockdep_subclass_key *key;
+ struct hlist_head *hash_head;
+ struct lock_class *class;
++ int idx;
+
+ DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+
+@@ -1317,6 +1316,9 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
+ * of classes.
+ */
+ list_move_tail(&class->lock_entry, &all_lock_classes);
++ idx = class - lock_classes;
++ if (idx > max_lock_class_idx)
++ max_lock_class_idx = idx;
+
+ if (verbose(class)) {
+ graph_unlock();
+@@ -6000,6 +6002,8 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
+ WRITE_ONCE(class->name, NULL);
+ nr_lock_classes--;
+ __clear_bit(class - lock_classes, lock_classes_in_use);
++ if (class - lock_classes == max_lock_class_idx)
++ max_lock_class_idx--;
+ } else {
+ WARN_ONCE(true, "%s() failed for class %s\n", __func__,
+ class->name);
+@@ -6290,7 +6294,13 @@ void lockdep_reset_lock(struct lockdep_map *lock)
+ lockdep_reset_lock_reg(lock);
+ }
+
+-/* Unregister a dynamically allocated key. */
++/*
++ * Unregister a dynamically allocated key.
++ *
++ * Unlike lockdep_register_key(), a search is always done to find a matching
++ * key irrespective of debug_locks to avoid potential invalid access to freed
++ * memory in lock_class entry.
++ */
+ void lockdep_unregister_key(struct lock_class_key *key)
+ {
+ struct hlist_head *hash_head = keyhashentry(key);
+@@ -6305,10 +6315,8 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ return;
+
+ raw_local_irq_save(flags);
+- if (!graph_lock())
+- goto out_irq;
++ lockdep_lock();
+
+- pf = get_pending_free();
+ hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
+ if (k == key) {
+ hlist_del_rcu(&k->hash_entry);
+@@ -6316,11 +6324,13 @@ void lockdep_unregister_key(struct lock_class_key *key)
+ break;
+ }
+ }
+- WARN_ON_ONCE(!found);
+- __lockdep_free_key_range(pf, key, 1);
+- call_rcu_zapped(pf);
+- graph_unlock();
+-out_irq:
++ WARN_ON_ONCE(!found && debug_locks);
++ if (found) {
++ pf = get_pending_free();
++ __lockdep_free_key_range(pf, key, 1);
++ call_rcu_zapped(pf);
++ }
++ lockdep_unlock();
+ raw_local_irq_restore(flags);
+
+ /* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
+diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
+index ecb8662e7a4ed..bbe9000260d02 100644
+--- a/kernel/locking/lockdep_internals.h
++++ b/kernel/locking/lockdep_internals.h
+@@ -121,7 +121,6 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ =
+
+ #define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS*5)
+
+-extern struct list_head all_lock_classes;
+ extern struct lock_chain lock_chains[];
+
+ #define LOCK_USAGE_CHARS (2*XXX_LOCK_USAGE_STATES + 1)
+@@ -151,6 +150,10 @@ extern unsigned int nr_large_chain_blocks;
+
+ extern unsigned int max_lockdep_depth;
+ extern unsigned int max_bfs_queue_depth;
++extern unsigned long max_lock_class_idx;
++
++extern struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
++extern unsigned long lock_classes_in_use[];
+
+ #ifdef CONFIG_PROVE_LOCKING
+ extern unsigned long lockdep_count_forward_deps(struct lock_class *);
+@@ -205,7 +208,6 @@ struct lockdep_stats {
+ };
+
+ DECLARE_PER_CPU(struct lockdep_stats, lockdep_stats);
+-extern struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
+
+ #define __debug_atomic_inc(ptr) \
+ this_cpu_inc(lockdep_stats.ptr);
+diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
+index b8d9a050c337a..15fdc7fa5c688 100644
+--- a/kernel/locking/lockdep_proc.c
++++ b/kernel/locking/lockdep_proc.c
+@@ -24,14 +24,33 @@
+
+ #include "lockdep_internals.h"
+
++/*
++ * Since iteration of lock_classes is done without holding the lockdep lock,
++ * it is not safe to iterate all_lock_classes list directly as the iteration
++ * may branch off to free_lock_classes or the zapped list. Iteration is done
++ * directly on the lock_classes array by checking the lock_classes_in_use
++ * bitmap and max_lock_class_idx.
++ */
++#define iterate_lock_classes(idx, class) \
++ for (idx = 0, class = lock_classes; idx <= max_lock_class_idx; \
++ idx++, class++)
++
+ static void *l_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+- return seq_list_next(v, &all_lock_classes, pos);
++ struct lock_class *class = v;
++
++ ++class;
++ *pos = class - lock_classes;
++ return (*pos > max_lock_class_idx) ? NULL : class;
+ }
+
+ static void *l_start(struct seq_file *m, loff_t *pos)
+ {
+- return seq_list_start_head(&all_lock_classes, *pos);
++ unsigned long idx = *pos;
++
++ if (idx > max_lock_class_idx)
++ return NULL;
++ return lock_classes + idx;
+ }
+
+ static void l_stop(struct seq_file *m, void *v)
+@@ -57,14 +76,16 @@ static void print_name(struct seq_file *m, struct lock_class *class)
+
+ static int l_show(struct seq_file *m, void *v)
+ {
+- struct lock_class *class = list_entry(v, struct lock_class, lock_entry);
++ struct lock_class *class = v;
+ struct lock_list *entry;
+ char usage[LOCK_USAGE_CHARS];
++ int idx = class - lock_classes;
+
+- if (v == &all_lock_classes) {
++ if (v == lock_classes)
+ seq_printf(m, "all lock classes:\n");
++
++ if (!test_bit(idx, lock_classes_in_use))
+ return 0;
+- }
+
+ seq_printf(m, "%p", class->key);
+ #ifdef CONFIG_DEBUG_LOCKDEP
+@@ -220,8 +241,11 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+
+ #ifdef CONFIG_PROVE_LOCKING
+ struct lock_class *class;
++ unsigned long idx;
+
+- list_for_each_entry(class, &all_lock_classes, lock_entry) {
++ iterate_lock_classes(idx, class) {
++ if (!test_bit(idx, lock_classes_in_use))
++ continue;
+
+ if (class->usage_mask == 0)
+ nr_unused++;
+@@ -254,6 +278,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+
+ sum_forward_deps += lockdep_count_forward_deps(class);
+ }
++
+ #ifdef CONFIG_DEBUG_LOCKDEP
+ DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused);
+ #endif
+@@ -345,6 +370,8 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
+ seq_printf(m, " max bfs queue depth: %11u\n",
+ max_bfs_queue_depth);
+ #endif
++ seq_printf(m, " max lock class index: %11lu\n",
++ max_lock_class_idx);
+ lockdep_stats_debug_show(m);
+ seq_printf(m, " debug_locks: %11u\n",
+ debug_locks);
+@@ -622,12 +649,16 @@ static int lock_stat_open(struct inode *inode, struct file *file)
+ if (!res) {
+ struct lock_stat_data *iter = data->stats;
+ struct seq_file *m = file->private_data;
++ unsigned long idx;
+
+- list_for_each_entry(class, &all_lock_classes, lock_entry) {
++ iterate_lock_classes(idx, class) {
++ if (!test_bit(idx, lock_classes_in_use))
++ continue;
+ iter->class = class;
+ iter->stats = lock_stats(class);
+ iter++;
+ }
++
+ data->iter_end = iter;
+
+ sort(data->stats, data->iter_end - data->stats,
+@@ -645,6 +676,7 @@ static ssize_t lock_stat_write(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+ {
+ struct lock_class *class;
++ unsigned long idx;
+ char c;
+
+ if (count) {
+@@ -654,8 +686,11 @@ static ssize_t lock_stat_write(struct file *file, const char __user *buf,
+ if (c != '0')
+ return count;
+
+- list_for_each_entry(class, &all_lock_classes, lock_entry)
++ iterate_lock_classes(idx, class) {
++ if (!test_bit(idx, lock_classes_in_use))
++ continue;
+ clear_lock_stats(class);
++ }
+ }
+ return count;
+ }
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index e6af502c2fd77..08780a466fdf7 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -1328,7 +1328,7 @@ static int __init resumedelay_setup(char *str)
+ int rc = kstrtouint(str, 0, &resume_delay);
+
+ if (rc)
+- return rc;
++ pr_warn("resumedelay: bad option string '%s'\n", str);
+ return 1;
+ }
+
+diff --git a/kernel/power/suspend_test.c b/kernel/power/suspend_test.c
+index d20526c5be15b..b663a97f5867a 100644
+--- a/kernel/power/suspend_test.c
++++ b/kernel/power/suspend_test.c
+@@ -157,22 +157,22 @@ static int __init setup_test_suspend(char *value)
+ value++;
+ suspend_type = strsep(&value, ",");
+ if (!suspend_type)
+- return 0;
++ return 1;
+
+ repeat = strsep(&value, ",");
+ if (repeat) {
+ if (kstrtou32(repeat, 0, &test_repeat_count_max))
+- return 0;
++ return 1;
+ }
+
+ for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
+ if (!strcmp(pm_labels[i], suspend_type)) {
+ test_state_label = pm_labels[i];
+- return 0;
++ return 1;
+ }
+
+ printk(warn_bad_state, suspend_type);
+- return 0;
++ return 1;
+ }
+ __setup("test_suspend", setup_test_suspend);
+
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 82abfaf3c2aad..833e407545b82 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -146,8 +146,10 @@ static int __control_devkmsg(char *str)
+
+ static int __init control_devkmsg(char *str)
+ {
+- if (__control_devkmsg(str) < 0)
++ if (__control_devkmsg(str) < 0) {
++ pr_warn("printk.devkmsg: bad option string '%s'\n", str);
+ return 1;
++ }
+
+ /*
+ * Set sysctl string accordingly:
+@@ -166,7 +168,7 @@ static int __init control_devkmsg(char *str)
+ */
+ devkmsg_log |= DEVKMSG_LOG_MASK_LOCK;
+
+- return 0;
++ return 1;
+ }
+ __setup("printk.devkmsg=", control_devkmsg);
+
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index eea265082e975..ccc4b465775b8 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -371,6 +371,26 @@ bool ptrace_may_access(struct task_struct *task, unsigned int mode)
+ return !err;
+ }
+
++static int check_ptrace_options(unsigned long data)
++{
++ if (data & ~(unsigned long)PTRACE_O_MASK)
++ return -EINVAL;
++
++ if (unlikely(data & PTRACE_O_SUSPEND_SECCOMP)) {
++ if (!IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) ||
++ !IS_ENABLED(CONFIG_SECCOMP))
++ return -EINVAL;
++
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ if (seccomp_mode(&current->seccomp) != SECCOMP_MODE_DISABLED ||
++ current->ptrace & PT_SUSPEND_SECCOMP)
++ return -EPERM;
++ }
++ return 0;
++}
++
+ static int ptrace_attach(struct task_struct *task, long request,
+ unsigned long addr,
+ unsigned long flags)
+@@ -382,8 +402,16 @@ static int ptrace_attach(struct task_struct *task, long request,
+ if (seize) {
+ if (addr != 0)
+ goto out;
++ /*
++ * This duplicates the check in check_ptrace_options() because
++ * ptrace_attach() and ptrace_setoptions() have historically
++ * used different error codes for unknown ptrace options.
++ */
+ if (flags & ~(unsigned long)PTRACE_O_MASK)
+ goto out;
++ retval = check_ptrace_options(flags);
++ if (retval)
++ return retval;
+ flags = PT_PTRACED | PT_SEIZED | (flags << PT_OPT_FLAG_SHIFT);
+ } else {
+ flags = PT_PTRACED;
+@@ -654,22 +682,11 @@ int ptrace_writedata(struct task_struct *tsk, char __user *src, unsigned long ds
+ static int ptrace_setoptions(struct task_struct *child, unsigned long data)
+ {
+ unsigned flags;
++ int ret;
+
+- if (data & ~(unsigned long)PTRACE_O_MASK)
+- return -EINVAL;
+-
+- if (unlikely(data & PTRACE_O_SUSPEND_SECCOMP)) {
+- if (!IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) ||
+- !IS_ENABLED(CONFIG_SECCOMP))
+- return -EINVAL;
+-
+- if (!capable(CAP_SYS_ADMIN))
+- return -EPERM;
+-
+- if (seccomp_mode(&current->seccomp) != SECCOMP_MODE_DISABLED ||
+- current->ptrace & PT_SUSPEND_SECCOMP)
+- return -EPERM;
+- }
++ ret = check_ptrace_options(data);
++ if (ret)
++ return ret;
+
+ /* Avoid intermediate state when all opts are cleared */
+ flags = child->ptrace;
+diff --git a/kernel/rcu/rcu_segcblist.h b/kernel/rcu/rcu_segcblist.h
+index e373fbe44da5e..431cee212467d 100644
+--- a/kernel/rcu/rcu_segcblist.h
++++ b/kernel/rcu/rcu_segcblist.h
+@@ -56,13 +56,13 @@ static inline long rcu_segcblist_n_cbs(struct rcu_segcblist *rsclp)
+ static inline void rcu_segcblist_set_flags(struct rcu_segcblist *rsclp,
+ int flags)
+ {
+- rsclp->flags |= flags;
++ WRITE_ONCE(rsclp->flags, rsclp->flags | flags);
+ }
+
+ static inline void rcu_segcblist_clear_flags(struct rcu_segcblist *rsclp,
+ int flags)
+ {
+- rsclp->flags &= ~flags;
++ WRITE_ONCE(rsclp->flags, rsclp->flags & ~flags);
+ }
+
+ static inline bool rcu_segcblist_test_flags(struct rcu_segcblist *rsclp,
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index a4c25a6283b0b..73a4c9d07b865 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -91,7 +91,7 @@ static struct rcu_state rcu_state = {
+ .abbr = RCU_ABBR,
+ .exp_mutex = __MUTEX_INITIALIZER(rcu_state.exp_mutex),
+ .exp_wake_mutex = __MUTEX_INITIALIZER(rcu_state.exp_wake_mutex),
+- .ofl_lock = __RAW_SPIN_LOCK_UNLOCKED(rcu_state.ofl_lock),
++ .ofl_lock = __ARCH_SPIN_LOCK_UNLOCKED,
+ };
+
+ /* Dump rcu_node combining tree at boot to verify correct setup. */
+@@ -1175,7 +1175,15 @@ bool rcu_lockdep_current_cpu_online(void)
+ preempt_disable_notrace();
+ rdp = this_cpu_ptr(&rcu_data);
+ rnp = rdp->mynode;
+- if (rdp->grpmask & rcu_rnp_online_cpus(rnp) || READ_ONCE(rnp->ofl_seq) & 0x1)
++ /*
++ * Strictly, we care here about the case where the current CPU is
++ * in rcu_cpu_starting() and thus has an excuse for rdp->grpmask
++ * not being up to date. So arch_spin_is_locked() might have a
++ * false positive if it's held by some *other* CPU, but that's
++ * OK because that just means a false *negative* on the warning.
++ */
++ if (rdp->grpmask & rcu_rnp_online_cpus(rnp) ||
++ arch_spin_is_locked(&rcu_state.ofl_lock))
+ ret = true;
+ preempt_enable_notrace();
+ return ret;
+@@ -1739,7 +1747,6 @@ static void rcu_strict_gp_boundary(void *unused)
+ */
+ static noinline_for_stack bool rcu_gp_init(void)
+ {
+- unsigned long firstseq;
+ unsigned long flags;
+ unsigned long oldmask;
+ unsigned long mask;
+@@ -1782,22 +1789,17 @@ static noinline_for_stack bool rcu_gp_init(void)
+ * of RCU's Requirements documentation.
+ */
+ WRITE_ONCE(rcu_state.gp_state, RCU_GP_ONOFF);
++ /* Exclude CPU hotplug operations. */
+ rcu_for_each_leaf_node(rnp) {
+- // Wait for CPU-hotplug operations that might have
+- // started before this grace period did.
+- smp_mb(); // Pair with barriers used when updating ->ofl_seq to odd values.
+- firstseq = READ_ONCE(rnp->ofl_seq);
+- if (firstseq & 0x1)
+- while (firstseq == READ_ONCE(rnp->ofl_seq))
+- schedule_timeout_idle(1); // Can't wake unless RCU is watching.
+- smp_mb(); // Pair with barriers used when updating ->ofl_seq to even values.
+- raw_spin_lock(&rcu_state.ofl_lock);
+- raw_spin_lock_irq_rcu_node(rnp);
++ local_irq_save(flags);
++ arch_spin_lock(&rcu_state.ofl_lock);
++ raw_spin_lock_rcu_node(rnp);
+ if (rnp->qsmaskinit == rnp->qsmaskinitnext &&
+ !rnp->wait_blkd_tasks) {
+ /* Nothing to do on this leaf rcu_node structure. */
+- raw_spin_unlock_irq_rcu_node(rnp);
+- raw_spin_unlock(&rcu_state.ofl_lock);
++ raw_spin_unlock_rcu_node(rnp);
++ arch_spin_unlock(&rcu_state.ofl_lock);
++ local_irq_restore(flags);
+ continue;
+ }
+
+@@ -1832,8 +1834,9 @@ static noinline_for_stack bool rcu_gp_init(void)
+ rcu_cleanup_dead_rnp(rnp);
+ }
+
+- raw_spin_unlock_irq_rcu_node(rnp);
+- raw_spin_unlock(&rcu_state.ofl_lock);
++ raw_spin_unlock_rcu_node(rnp);
++ arch_spin_unlock(&rcu_state.ofl_lock);
++ local_irq_restore(flags);
+ }
+ rcu_gp_slow(gp_preinit_delay); /* Races with CPU hotplug. */
+
+@@ -4287,11 +4290,10 @@ void rcu_cpu_starting(unsigned int cpu)
+
+ rnp = rdp->mynode;
+ mask = rdp->grpmask;
+- WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+- WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
++ local_irq_save(flags);
++ arch_spin_lock(&rcu_state.ofl_lock);
+ rcu_dynticks_eqs_online();
+- smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+- raw_spin_lock_irqsave_rcu_node(rnp, flags);
++ raw_spin_lock_rcu_node(rnp);
+ WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
+ newcpu = !(rnp->expmaskinitnext & mask);
+ rnp->expmaskinitnext |= mask;
+@@ -4304,15 +4306,18 @@ void rcu_cpu_starting(unsigned int cpu)
+
+ /* An incoming CPU should never be blocking a grace period. */
+ if (WARN_ON_ONCE(rnp->qsmask & mask)) { /* RCU waiting on incoming CPU? */
++ /* rcu_report_qs_rnp() *really* wants some flags to restore */
++ unsigned long flags2;
++
++ local_irq_save(flags2);
+ rcu_disable_urgency_upon_qs(rdp);
+ /* Report QS -after- changing ->qsmaskinitnext! */
+- rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
++ rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags2);
+ } else {
+- raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
++ raw_spin_unlock_rcu_node(rnp);
+ }
+- smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+- WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+- WARN_ON_ONCE(rnp->ofl_seq & 0x1);
++ arch_spin_unlock(&rcu_state.ofl_lock);
++ local_irq_restore(flags);
+ smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
+ }
+
+@@ -4326,7 +4331,7 @@ void rcu_cpu_starting(unsigned int cpu)
+ */
+ void rcu_report_dead(unsigned int cpu)
+ {
+- unsigned long flags;
++ unsigned long flags, seq_flags;
+ unsigned long mask;
+ struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+ struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */
+@@ -4340,10 +4345,8 @@ void rcu_report_dead(unsigned int cpu)
+
+ /* Remove outgoing CPU from mask in the leaf rcu_node structure. */
+ mask = rdp->grpmask;
+- WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+- WARN_ON_ONCE(!(rnp->ofl_seq & 0x1));
+- smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+- raw_spin_lock(&rcu_state.ofl_lock);
++ local_irq_save(seq_flags);
++ arch_spin_lock(&rcu_state.ofl_lock);
+ raw_spin_lock_irqsave_rcu_node(rnp, flags); /* Enforce GP memory-order guarantee. */
+ rdp->rcu_ofl_gp_seq = READ_ONCE(rcu_state.gp_seq);
+ rdp->rcu_ofl_gp_flags = READ_ONCE(rcu_state.gp_flags);
+@@ -4354,10 +4357,8 @@ void rcu_report_dead(unsigned int cpu)
+ }
+ WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext & ~mask);
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+- raw_spin_unlock(&rcu_state.ofl_lock);
+- smp_mb(); // Pair with rcu_gp_cleanup()'s ->ofl_seq barrier().
+- WRITE_ONCE(rnp->ofl_seq, rnp->ofl_seq + 1);
+- WARN_ON_ONCE(rnp->ofl_seq & 0x1);
++ arch_spin_unlock(&rcu_state.ofl_lock);
++ local_irq_restore(seq_flags);
+
+ rdp->cpu_started = false;
+ }
+diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
+index 486fc901bd085..4b4bcef8a9743 100644
+--- a/kernel/rcu/tree.h
++++ b/kernel/rcu/tree.h
+@@ -56,8 +56,6 @@ struct rcu_node {
+ /* Initialized from ->qsmaskinitnext at the */
+ /* beginning of each grace period. */
+ unsigned long qsmaskinitnext;
+- unsigned long ofl_seq; /* CPU-hotplug operation sequence count. */
+- /* Online CPUs for next grace period. */
+ unsigned long expmask; /* CPUs or groups that need to check in */
+ /* to allow the current expedited GP */
+ /* to complete. */
+@@ -355,7 +353,7 @@ struct rcu_state {
+ const char *name; /* Name of structure. */
+ char abbr; /* Abbreviated name. */
+
+- raw_spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
++ arch_spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
+ /* Synchronize offline with */
+ /* GP pre-initialization. */
+ };
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 9c08d6e9eef27..34eaee179689a 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -56,14 +56,6 @@ struct resource_constraint {
+
+ static DEFINE_RWLOCK(resource_lock);
+
+-/*
+- * For memory hotplug, there is no way to free resource entries allocated
+- * by boot mem after the system is up. So for reusing the resource entry
+- * we need to remember the resource.
+- */
+-static struct resource *bootmem_resource_free;
+-static DEFINE_SPINLOCK(bootmem_resource_lock);
+-
+ static struct resource *next_resource(struct resource *p)
+ {
+ if (p->child)
+@@ -160,36 +152,19 @@ __initcall(ioresources_init);
+
+ static void free_resource(struct resource *res)
+ {
+- if (!res)
+- return;
+-
+- if (!PageSlab(virt_to_head_page(res))) {
+- spin_lock(&bootmem_resource_lock);
+- res->sibling = bootmem_resource_free;
+- bootmem_resource_free = res;
+- spin_unlock(&bootmem_resource_lock);
+- } else {
++ /**
++ * If the resource was allocated using memblock early during boot
++ * we'll leak it here: we can only return full pages back to the
++ * buddy and trying to be smart and reusing them eventually in
++ * alloc_resource() overcomplicates resource handling.
++ */
++ if (res && PageSlab(virt_to_head_page(res)))
+ kfree(res);
+- }
+ }
+
+ static struct resource *alloc_resource(gfp_t flags)
+ {
+- struct resource *res = NULL;
+-
+- spin_lock(&bootmem_resource_lock);
+- if (bootmem_resource_free) {
+- res = bootmem_resource_free;
+- bootmem_resource_free = res->sibling;
+- }
+- spin_unlock(&bootmem_resource_lock);
+-
+- if (res)
+- memset(res, 0, sizeof(struct resource));
+- else
+- res = kzalloc(sizeof(struct resource), flags);
+-
+- return res;
++ return kzalloc(sizeof(struct resource), flags);
+ }
+
+ /* Return the conflict entry if you can't request it */
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index 6d45ac3dae7fb..97ac20b4f7387 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -128,10 +128,10 @@ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
+ int ret;
+
+ #ifdef CONFIG_64BIT
+- if (get_user(ptr, &t->rseq->rseq_cs.ptr64))
++ if (get_user(ptr, &t->rseq->rseq_cs))
+ return -EFAULT;
+ #else
+- if (copy_from_user(&ptr, &t->rseq->rseq_cs.ptr64, sizeof(ptr)))
++ if (copy_from_user(&ptr, &t->rseq->rseq_cs, sizeof(ptr)))
+ return -EFAULT;
+ #endif
+ if (!ptr) {
+@@ -217,9 +217,9 @@ static int clear_rseq_cs(struct task_struct *t)
+ * Set rseq_cs to NULL.
+ */
+ #ifdef CONFIG_64BIT
+- return put_user(0UL, &t->rseq->rseq_cs.ptr64);
++ return put_user(0UL, &t->rseq->rseq_cs);
+ #else
+- if (clear_user(&t->rseq->rseq_cs.ptr64, sizeof(t->rseq->rseq_cs.ptr64)))
++ if (clear_user(&t->rseq->rseq_cs, sizeof(t->rseq->rseq_cs)))
+ return -EFAULT;
+ return 0;
+ #endif
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 9745613d531ce..1620ae8535dcf 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -36,6 +36,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_rt_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_dl_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_se_tp);
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_thermal_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_cpu_capacity_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_overutilized_tp);
+ EXPORT_TRACEPOINT_SYMBOL_GPL(sched_util_est_cfs_tp);
+diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
+index 3d06c5e4220d4..307800586ac81 100644
+--- a/kernel/sched/cpuacct.c
++++ b/kernel/sched/cpuacct.c
+@@ -334,12 +334,13 @@ static struct cftype files[] = {
+ */
+ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
+ {
++ unsigned int cpu = task_cpu(tsk);
+ struct cpuacct *ca;
+
+ rcu_read_lock();
+
+ for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
+- __this_cpu_add(*ca->cpuusage, cputime);
++ *per_cpu_ptr(ca->cpuusage, cpu) += cputime;
+
+ rcu_read_unlock();
+ }
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 26778884d9ab1..6d65ab6e484e2 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -289,6 +289,7 @@ static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time)
+ * into the same scale so we can compare.
+ */
+ boost = (sg_cpu->iowait_boost * sg_cpu->max) >> SCHED_CAPACITY_SHIFT;
++ boost = uclamp_rq_util_with(cpu_rq(sg_cpu->cpu), boost, NULL);
+ if (sg_cpu->util < boost)
+ sg_cpu->util = boost;
+ }
+@@ -348,8 +349,11 @@ static void sugov_update_single_freq(struct update_util_data *hook, u64 time,
+ /*
+ * Do not reduce the frequency if the CPU has not been idle
+ * recently, as the reduction is likely to be premature then.
++ *
++ * Except when the rq is capped by uclamp_max.
+ */
+- if (sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
++ if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
++ sugov_cpu_is_busy(sg_cpu) && next_f < sg_policy->next_freq) {
+ next_f = sg_policy->next_freq;
+
+ /* Restore cached freq as next_freq has changed */
+@@ -395,8 +399,11 @@ static void sugov_update_single_perf(struct update_util_data *hook, u64 time,
+ /*
+ * Do not reduce the target performance level if the CPU has not been
+ * idle recently, as the reduction is likely to be premature then.
++ *
++ * Except when the rq is capped by uclamp_max.
+ */
+- if (sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
++ if (!uclamp_rq_is_capped(cpu_rq(sg_cpu->cpu)) &&
++ sugov_cpu_is_busy(sg_cpu) && sg_cpu->util < prev_util)
+ sg_cpu->util = prev_util;
+
+ cpufreq_driver_adjust_perf(sg_cpu->cpu, map_util_perf(sg_cpu->bw_dl),
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index d2c072b0ef01f..62f0cf8422775 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2240,12 +2240,6 @@ static int push_dl_task(struct rq *rq)
+ return 0;
+
+ retry:
+- if (is_migration_disabled(next_task))
+- return 0;
+-
+- if (WARN_ON(next_task == rq->curr))
+- return 0;
+-
+ /*
+ * If next_task preempts rq->curr, and rq->curr
+ * can move away, it makes sense to just reschedule
+@@ -2258,6 +2252,12 @@ retry:
+ return 0;
+ }
+
++ if (is_migration_disabled(next_task))
++ return 0;
++
++ if (WARN_ON(next_task == rq->curr))
++ return 0;
++
+ /* We might release rq lock */
+ get_task_struct(next_task);
+
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index aa29211de1bf8..102d6f70e84d3 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -931,25 +931,15 @@ void print_numa_stats(struct seq_file *m, int node, unsigned long tsf,
+ static void sched_show_numa(struct task_struct *p, struct seq_file *m)
+ {
+ #ifdef CONFIG_NUMA_BALANCING
+- struct mempolicy *pol;
+-
+ if (p->mm)
+ P(mm->numa_scan_seq);
+
+- task_lock(p);
+- pol = p->mempolicy;
+- if (pol && !(pol->flags & MPOL_F_MORON))
+- pol = NULL;
+- mpol_get(pol);
+- task_unlock(p);
+-
+ P(numa_pages_migrated);
+ P(numa_preferred_nid);
+ P(total_numa_faults);
+ SEQ_printf(m, "current_node=%d, numa_group_id=%d\n",
+ task_node(p), task_numa_group_id(p));
+ show_numa_stats(p, m);
+- mpol_put(pol);
+ #endif
+ }
+
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 5146163bfabb9..cddcf2f4f5251 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -9040,9 +9040,10 @@ static bool update_pick_idlest(struct sched_group *idlest,
+ * This is an approximation as the number of running tasks may not be
+ * related to the number of busy CPUs due to sched_setaffinity.
+ */
+-static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
++static inline bool
++allow_numa_imbalance(unsigned int running, unsigned int weight)
+ {
+- return (dst_running < (dst_weight >> 2));
++ return (running < (weight >> 2));
+ }
+
+ /*
+@@ -9176,12 +9177,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
+ return idlest;
+ #endif
+ /*
+- * Otherwise, keep the task on this node to stay close
+- * its wakeup source and improve locality. If there is
+- * a real need of migration, periodic load balance will
+- * take care of it.
++ * Otherwise, keep the task close to the wakeup source
++ * and improve locality if the number of running tasks
++ * would remain below threshold where an imbalance is
++ * allowed. If there is a real need of migration,
++ * periodic load balance will take care of it.
+ */
+- if (allow_numa_imbalance(local_sgs.sum_nr_running, sd->span_weight))
++ if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, local_sgs.group_weight))
+ return NULL;
+ }
+
+@@ -9387,7 +9389,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
+ /* Consider allowing a small imbalance between NUMA groups */
+ if (env->sd->flags & SD_NUMA) {
+ env->imbalance = adjust_numa_imbalance(env->imbalance,
+- busiest->sum_nr_running, busiest->group_weight);
++ local->sum_nr_running + 1, local->group_weight);
+ }
+
+ return;
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 7b4f4fbbb4048..14f273c295183 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2026,6 +2026,16 @@ static int push_rt_task(struct rq *rq, bool pull)
+ return 0;
+
+ retry:
++ /*
++ * It's possible that the next_task slipped in of
++ * higher priority than current. If that's the case
++ * just reschedule current.
++ */
++ if (unlikely(next_task->prio < rq->curr->prio)) {
++ resched_curr(rq);
++ return 0;
++ }
++
+ if (is_migration_disabled(next_task)) {
+ struct task_struct *push_task = NULL;
+ int cpu;
+@@ -2033,6 +2043,18 @@ retry:
+ if (!pull || rq->push_busy)
+ return 0;
+
++ /*
++ * Invoking find_lowest_rq() on anything but an RT task doesn't
++ * make sense. Per the above priority check, curr has to
++ * be of higher priority than next_task, so no need to
++ * reschedule when bailing out.
++ *
++ * Note that the stoppers are masqueraded as SCHED_FIFO
++ * (cf. sched_set_stop_task()), so we can't rely on rt_task().
++ */
++ if (rq->curr->sched_class != &rt_sched_class)
++ return 0;
++
+ cpu = find_lowest_rq(rq->curr);
+ if (cpu == -1 || cpu == rq->cpu)
+ return 0;
+@@ -2057,16 +2079,6 @@ retry:
+ if (WARN_ON(next_task == rq->curr))
+ return 0;
+
+- /*
+- * It's possible that the next_task slipped in of
+- * higher priority than current. If that's the case
+- * just reschedule current.
+- */
+- if (unlikely(next_task->prio < rq->curr->prio)) {
+- resched_curr(rq);
+- return 0;
+- }
+-
+ /* We might release rq lock */
+ get_task_struct(next_task);
+
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index de53be9057390..9b33ba9c3c420 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2841,88 +2841,6 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
+ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
+ #endif /* CONFIG_CPU_FREQ */
+
+-#ifdef CONFIG_UCLAMP_TASK
+-unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id);
+-
+-/**
+- * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
+- * @rq: The rq to clamp against. Must not be NULL.
+- * @util: The util value to clamp.
+- * @p: The task to clamp against. Can be NULL if you want to clamp
+- * against @rq only.
+- *
+- * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
+- *
+- * If sched_uclamp_used static key is disabled, then just return the util
+- * without any clamping since uclamp aggregation at the rq level in the fast
+- * path is disabled, rendering this operation a NOP.
+- *
+- * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
+- * will return the correct effective uclamp value of the task even if the
+- * static key is disabled.
+- */
+-static __always_inline
+-unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+- struct task_struct *p)
+-{
+- unsigned long min_util = 0;
+- unsigned long max_util = 0;
+-
+- if (!static_branch_likely(&sched_uclamp_used))
+- return util;
+-
+- if (p) {
+- min_util = uclamp_eff_value(p, UCLAMP_MIN);
+- max_util = uclamp_eff_value(p, UCLAMP_MAX);
+-
+- /*
+- * Ignore last runnable task's max clamp, as this task will
+- * reset it. Similarly, no need to read the rq's min clamp.
+- */
+- if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
+- goto out;
+- }
+-
+- min_util = max_t(unsigned long, min_util, READ_ONCE(rq->uclamp[UCLAMP_MIN].value));
+- max_util = max_t(unsigned long, max_util, READ_ONCE(rq->uclamp[UCLAMP_MAX].value));
+-out:
+- /*
+- * Since CPU's {min,max}_util clamps are MAX aggregated considering
+- * RUNNABLE tasks with _different_ clamps, we can end up with an
+- * inversion. Fix it now when the clamps are applied.
+- */
+- if (unlikely(min_util >= max_util))
+- return min_util;
+-
+- return clamp(util, min_util, max_util);
+-}
+-
+-/*
+- * When uclamp is compiled in, the aggregation at rq level is 'turned off'
+- * by default in the fast path and only gets turned on once userspace performs
+- * an operation that requires it.
+- *
+- * Returns true if userspace opted-in to use uclamp and aggregation at rq level
+- * hence is active.
+- */
+-static inline bool uclamp_is_used(void)
+-{
+- return static_branch_likely(&sched_uclamp_used);
+-}
+-#else /* CONFIG_UCLAMP_TASK */
+-static inline
+-unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
+- struct task_struct *p)
+-{
+- return util;
+-}
+-
+-static inline bool uclamp_is_used(void)
+-{
+- return false;
+-}
+-#endif /* CONFIG_UCLAMP_TASK */
+-
+ #ifdef arch_scale_freq_capacity
+ # ifndef arch_scale_freq_invariant
+ # define arch_scale_freq_invariant() true
+@@ -3020,6 +2938,105 @@ static inline unsigned long cpu_util_rt(struct rq *rq)
+ }
+ #endif
+
++#ifdef CONFIG_UCLAMP_TASK
++unsigned long uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id);
++
++/**
++ * uclamp_rq_util_with - clamp @util with @rq and @p effective uclamp values.
++ * @rq: The rq to clamp against. Must not be NULL.
++ * @util: The util value to clamp.
++ * @p: The task to clamp against. Can be NULL if you want to clamp
++ * against @rq only.
++ *
++ * Clamps the passed @util to the max(@rq, @p) effective uclamp values.
++ *
++ * If sched_uclamp_used static key is disabled, then just return the util
++ * without any clamping since uclamp aggregation at the rq level in the fast
++ * path is disabled, rendering this operation a NOP.
++ *
++ * Use uclamp_eff_value() if you don't care about uclamp values at rq level. It
++ * will return the correct effective uclamp value of the task even if the
++ * static key is disabled.
++ */
++static __always_inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++ struct task_struct *p)
++{
++ unsigned long min_util = 0;
++ unsigned long max_util = 0;
++
++ if (!static_branch_likely(&sched_uclamp_used))
++ return util;
++
++ if (p) {
++ min_util = uclamp_eff_value(p, UCLAMP_MIN);
++ max_util = uclamp_eff_value(p, UCLAMP_MAX);
++
++ /*
++ * Ignore last runnable task's max clamp, as this task will
++ * reset it. Similarly, no need to read the rq's min clamp.
++ */
++ if (rq->uclamp_flags & UCLAMP_FLAG_IDLE)
++ goto out;
++ }
++
++ min_util = max_t(unsigned long, min_util, READ_ONCE(rq->uclamp[UCLAMP_MIN].value));
++ max_util = max_t(unsigned long, max_util, READ_ONCE(rq->uclamp[UCLAMP_MAX].value));
++out:
++ /*
++ * Since CPU's {min,max}_util clamps are MAX aggregated considering
++ * RUNNABLE tasks with _different_ clamps, we can end up with an
++ * inversion. Fix it now when the clamps are applied.
++ */
++ if (unlikely(min_util >= max_util))
++ return min_util;
++
++ return clamp(util, min_util, max_util);
++}
++
++/* Is the rq being capped/throttled by uclamp_max? */
++static inline bool uclamp_rq_is_capped(struct rq *rq)
++{
++ unsigned long rq_util;
++ unsigned long max_util;
++
++ if (!static_branch_likely(&sched_uclamp_used))
++ return false;
++
++ rq_util = cpu_util_cfs(cpu_of(rq)) + cpu_util_rt(rq);
++ max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);
++
++ return max_util != SCHED_CAPACITY_SCALE && rq_util >= max_util;
++}
++
++/*
++ * When uclamp is compiled in, the aggregation at rq level is 'turned off'
++ * by default in the fast path and only gets turned on once userspace performs
++ * an operation that requires it.
++ *
++ * Returns true if userspace opted-in to use uclamp and aggregation at rq level
++ * hence is active.
++ */
++static inline bool uclamp_is_used(void)
++{
++ return static_branch_likely(&sched_uclamp_used);
++}
++#else /* CONFIG_UCLAMP_TASK */
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++ struct task_struct *p)
++{
++ return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++static inline bool uclamp_is_used(void)
++{
++ return false;
++}
++#endif /* CONFIG_UCLAMP_TASK */
++
+ #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
+ static inline unsigned long cpu_util_irq(struct rq *rq)
+ {
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index eb44418574f9c..96265a717ca4e 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3663,12 +3663,17 @@ static char *trace_iter_expand_format(struct trace_iterator *iter)
+ }
+
+ /* Returns true if the string is safe to dereference from an event */
+-static bool trace_safe_str(struct trace_iterator *iter, const char *str)
++static bool trace_safe_str(struct trace_iterator *iter, const char *str,
++ bool star, int len)
+ {
+ unsigned long addr = (unsigned long)str;
+ struct trace_event *trace_event;
+ struct trace_event_call *event;
+
++ /* Ignore strings with no length */
++ if (star && !len)
++ return true;
++
+ /* OK if part of the event data */
+ if ((addr >= (unsigned long)iter->ent) &&
+ (addr < (unsigned long)iter->ent + iter->ent_size))
+@@ -3854,7 +3859,7 @@ void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+ * instead. See samples/trace_events/trace-events-sample.h
+ * for reference.
+ */
+- if (WARN_ONCE(!trace_safe_str(iter, str),
++ if (WARN_ONCE(!trace_safe_str(iter, str, star, len),
+ "fmt: '%s' current_buffer: '%s'",
+ fmt, show_buffer(&iter->seq))) {
+ int ret;
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 3147614c1812a..25b5d0f9f3758 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -40,6 +40,14 @@ static LIST_HEAD(ftrace_generic_fields);
+ static LIST_HEAD(ftrace_common_fields);
+ static bool eventdir_initialized;
+
++static LIST_HEAD(module_strings);
++
++struct module_string {
++ struct list_head next;
++ struct module *module;
++ char *str;
++};
++
+ #define GFP_TRACE (GFP_KERNEL | __GFP_ZERO)
+
+ static struct kmem_cache *field_cachep;
+@@ -2633,6 +2641,76 @@ static void update_event_printk(struct trace_event_call *call,
+ }
+ }
+
++static void add_str_to_module(struct module *module, char *str)
++{
++ struct module_string *modstr;
++
++ modstr = kmalloc(sizeof(*modstr), GFP_KERNEL);
++
++ /*
++ * If we failed to allocate memory here, then we'll just
++ * let the str memory leak when the module is removed.
++ * If this fails to allocate, there are worse problems than
++ * a leaked string on module removal.
++ */
++ if (WARN_ON_ONCE(!modstr))
++ return;
++
++ modstr->module = module;
++ modstr->str = str;
++
++ list_add(&modstr->next, &module_strings);
++}
++
++static void update_event_fields(struct trace_event_call *call,
++ struct trace_eval_map *map)
++{
++ struct ftrace_event_field *field;
++ struct list_head *head;
++ char *ptr;
++ char *str;
++ int len = strlen(map->eval_string);
++
++ /* Dynamic events should never have field maps */
++ if (WARN_ON_ONCE(call->flags & TRACE_EVENT_FL_DYNAMIC))
++ return;
++
++ head = trace_get_fields(call);
++ list_for_each_entry(field, head, link) {
++ ptr = strchr(field->type, '[');
++ if (!ptr)
++ continue;
++ ptr++;
++
++ if (!isalpha(*ptr) && *ptr != '_')
++ continue;
++
++ if (strncmp(map->eval_string, ptr, len) != 0)
++ continue;
++
++ str = kstrdup(field->type, GFP_KERNEL);
++ if (WARN_ON_ONCE(!str))
++ return;
++ ptr = str + (ptr - field->type);
++ ptr = eval_replace(ptr, map, len);
++ /* enum/sizeof string smaller than value */
++ if (WARN_ON_ONCE(!ptr)) {
++ kfree(str);
++ continue;
++ }
++
++ /*
++ * If the event is part of a module, then we need to free the string
++ * when the module is removed. Otherwise, it will stay allocated
++ * until a reboot.
++ */
++ if (call->module)
++ add_str_to_module(call->module, str);
++
++ field->type = str;
++ }
++}
++
+ void trace_event_eval_update(struct trace_eval_map **map, int len)
+ {
+ struct trace_event_call *call, *p;
+@@ -2668,6 +2746,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
+ first = false;
+ }
+ update_event_printk(call, map[i]);
++ update_event_fields(call, map[i]);
+ }
+ }
+ }
+@@ -2853,6 +2932,7 @@ static void trace_module_add_events(struct module *mod)
+ static void trace_module_remove_events(struct module *mod)
+ {
+ struct trace_event_call *call, *p;
++ struct module_string *modstr, *m;
+
+ down_write(&trace_event_sem);
+ list_for_each_entry_safe(call, p, &ftrace_events, list) {
+@@ -2861,6 +2941,14 @@ static void trace_module_remove_events(struct module *mod)
+ if (call->module == mod)
+ __trace_remove_event_call(call);
+ }
++ /* Check for any strings allocated for this module */
++ list_for_each_entry_safe(modstr, m, &module_strings, next) {
++ if (modstr->module != mod)
++ continue;
++ list_del(&modstr->next);
++ kfree(modstr->str);
++ kfree(modstr);
++ }
+ up_write(&trace_event_sem);
+
+ /*
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index 00703444a2194..230038d4f9081 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -271,7 +271,7 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ return 0;
+
+ error_p:
+- for (i = 0; i < nr_pages; i++)
++ while (--i >= 0)
+ __free_page(pages[i]);
+ kfree(pages);
+ error:
+@@ -370,6 +370,7 @@ static void __put_watch_queue(struct kref *kref)
+
+ for (i = 0; i < wqueue->nr_pages; i++)
+ __free_page(wqueue->notes[i]);
++ kfree(wqueue->notes);
+ bitmap_free(wqueue->notes_bitmap);
+
+ wfilter = rcu_access_pointer(wqueue->filter);
+@@ -395,6 +396,7 @@ static void free_watch(struct rcu_head *rcu)
+ put_watch_queue(rcu_access_pointer(watch->queue));
+ atomic_dec(&watch->cred->user->nr_watches);
+ put_cred(watch->cred);
++ kfree(watch);
+ }
+
+ static void __put_watch(struct kref *kref)
+diff --git a/lib/kunit/try-catch.c b/lib/kunit/try-catch.c
+index be38a2c5ecc2b..42825941f19f2 100644
+--- a/lib/kunit/try-catch.c
++++ b/lib/kunit/try-catch.c
+@@ -52,7 +52,7 @@ static unsigned long kunit_test_timeout(void)
+ * If tests timeout due to exceeding sysctl_hung_task_timeout_secs,
+ * the task will be killed and an oops generated.
+ */
+- return 300 * MSEC_PER_SEC; /* 5 min */
++ return 300 * msecs_to_jiffies(MSEC_PER_SEC); /* 5 min */
+ }
+
+ void kunit_try_catch_run(struct kunit_try_catch *try_catch, void *context)
+diff --git a/lib/raid6/test/Makefile b/lib/raid6/test/Makefile
+index a4c7cd74cff58..4fb7700a741bd 100644
+--- a/lib/raid6/test/Makefile
++++ b/lib/raid6/test/Makefile
+@@ -4,6 +4,8 @@
+ # from userspace.
+ #
+
++pound := \#
++
+ CC = gcc
+ OPTFLAGS = -O2 # Adjust as desired
+ CFLAGS = -I.. -I ../../../include -g $(OPTFLAGS)
+@@ -42,7 +44,7 @@ else ifeq ($(HAS_NEON),yes)
+ OBJS += neon.o neon1.o neon2.o neon4.o neon8.o recov_neon.o recov_neon_inner.o
+ CFLAGS += -DCONFIG_KERNEL_MODE_NEON=1
+ else
+- HAS_ALTIVEC := $(shell printf '\#include <altivec.h>\nvector int a;\n' |\
++ HAS_ALTIVEC := $(shell printf '$(pound)include <altivec.h>\nvector int a;\n' |\
+ gcc -c -x c - >/dev/null && rm ./-.o && echo yes)
+ ifeq ($(HAS_ALTIVEC),yes)
+ CFLAGS += -I../../../arch/powerpc/include
+diff --git a/lib/raid6/test/test.c b/lib/raid6/test/test.c
+index a3cf071941ab4..841a55242abaa 100644
+--- a/lib/raid6/test/test.c
++++ b/lib/raid6/test/test.c
+@@ -19,7 +19,6 @@
+ #define NDISKS 16 /* Including P and Q */
+
+ const char raid6_empty_zero_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+-struct raid6_calls raid6_call;
+
+ char *dataptrs[NDISKS];
+ char data[NDISKS][PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+diff --git a/lib/test_kmod.c b/lib/test_kmod.c
+index ce15893914131..cb800b1d0d99c 100644
+--- a/lib/test_kmod.c
++++ b/lib/test_kmod.c
+@@ -1149,6 +1149,7 @@ static struct kmod_test_device *register_test_dev_kmod(void)
+ if (ret) {
+ pr_err("could not register misc device: %d\n", ret);
+ free_test_dev_kmod(test_dev);
++ test_dev = NULL;
+ goto out;
+ }
+
+diff --git a/lib/test_lockup.c b/lib/test_lockup.c
+index 906b598740a7b..c3fd87d6c2dd0 100644
+--- a/lib/test_lockup.c
++++ b/lib/test_lockup.c
+@@ -417,9 +417,14 @@ static bool test_kernel_ptr(unsigned long addr, int size)
+ return false;
+
+ /* should be at least readable kernel address */
+- if (access_ok(ptr, 1) ||
+- access_ok(ptr + size - 1, 1) ||
+- get_kernel_nofault(buf, ptr) ||
++ if (!IS_ENABLED(CONFIG_ALTERNATE_USER_ADDRESS_SPACE) &&
++ (access_ok((void __user *)ptr, 1) ||
++ access_ok((void __user *)ptr + size - 1, 1))) {
++ pr_err("user space ptr invalid in kernel: %#lx\n", addr);
++ return true;
++ }
++
++ if (get_kernel_nofault(buf, ptr) ||
+ get_kernel_nofault(buf, ptr + size - 1)) {
+ pr_err("invalid kernel ptr: %#lx\n", addr);
+ return true;
+diff --git a/lib/test_xarray.c b/lib/test_xarray.c
+index 8b1c318189ce8..e77d4856442c3 100644
+--- a/lib/test_xarray.c
++++ b/lib/test_xarray.c
+@@ -1463,6 +1463,25 @@ unlock:
+ XA_BUG_ON(xa, !xa_empty(xa));
+ }
+
++static noinline void check_create_range_5(struct xarray *xa,
++ unsigned long index, unsigned int order)
++{
++ XA_STATE_ORDER(xas, xa, index, order);
++ unsigned int i;
++
++ xa_store_order(xa, index, order, xa_mk_index(index), GFP_KERNEL);
++
++ for (i = 0; i < order + 10; i++) {
++ do {
++ xas_lock(&xas);
++ xas_create_range(&xas);
++ xas_unlock(&xas);
++ } while (xas_nomem(&xas, GFP_KERNEL));
++ }
++
++ xa_destroy(xa);
++}
++
+ static noinline void check_create_range(struct xarray *xa)
+ {
+ unsigned int order;
+@@ -1490,6 +1509,9 @@ static noinline void check_create_range(struct xarray *xa)
+ check_create_range_4(xa, (3U << order) + 1, order);
+ check_create_range_4(xa, (3U << order) - 1, order);
+ check_create_range_4(xa, (1U << 24) + 1, order);
++
++ check_create_range_5(xa, 0, order);
++ check_create_range_5(xa, (1U << order), order);
+ }
+
+ check_create_range_3();
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 3b8129dd374cd..fbf261bbea950 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -49,10 +49,15 @@
+
+ #include <asm/page.h> /* for PAGE_SIZE */
+ #include <asm/byteorder.h> /* cpu_to_le16 */
++#include <asm/unaligned.h>
+
+ #include <linux/string_helpers.h>
+ #include "kstrtox.h"
+
++/* Disable pointer hashing if requested */
++bool no_hash_pointers __ro_after_init;
++EXPORT_SYMBOL_GPL(no_hash_pointers);
++
+ static noinline unsigned long long simple_strntoull(const char *startp, size_t max_chars, char **endp, unsigned int base)
+ {
+ const char *cp;
+@@ -848,6 +853,19 @@ static char *ptr_to_id(char *buf, char *end, const void *ptr,
+ return pointer_string(buf, end, (const void *)hashval, spec);
+ }
+
++static char *default_pointer(char *buf, char *end, const void *ptr,
++ struct printf_spec spec)
++{
++ /*
++ * default is to _not_ leak addresses, so hash before printing,
++ * unless no_hash_pointers is specified on the command line.
++ */
++ if (unlikely(no_hash_pointers))
++ return pointer_string(buf, end, ptr, spec);
++
++ return ptr_to_id(buf, end, ptr, spec);
++}
++
+ int kptr_restrict __read_mostly;
+
+ static noinline_for_stack
+@@ -857,7 +875,7 @@ char *restricted_pointer(char *buf, char *end, const void *ptr,
+ switch (kptr_restrict) {
+ case 0:
+ /* Handle as %p, hash and do _not_ leak addresses. */
+- return ptr_to_id(buf, end, ptr, spec);
++ return default_pointer(buf, end, ptr, spec);
+ case 1: {
+ const struct cred *cred;
+
+@@ -1761,7 +1779,7 @@ char *fourcc_string(char *buf, char *end, const u32 *fourcc,
+ char output[sizeof("0123 little-endian (0x01234567)")];
+ char *p = output;
+ unsigned int i;
+- u32 val;
++ u32 orig, val;
+
+ if (fmt[1] != 'c' || fmt[2] != 'c')
+ return error_string(buf, end, "(%p4?)", spec);
+@@ -1769,21 +1787,22 @@ char *fourcc_string(char *buf, char *end, const u32 *fourcc,
+ if (check_pointer(&buf, end, fourcc, spec))
+ return buf;
+
+- val = *fourcc & ~BIT(31);
++ orig = get_unaligned(fourcc);
++ val = orig & ~BIT(31);
+
+- for (i = 0; i < sizeof(*fourcc); i++) {
++ for (i = 0; i < sizeof(u32); i++) {
+ unsigned char c = val >> (i * 8);
+
+ /* Print non-control ASCII characters as-is, dot otherwise */
+ *p++ = isascii(c) && isprint(c) ? c : '.';
+ }
+
+- strcpy(p, *fourcc & BIT(31) ? " big-endian" : " little-endian");
++ strcpy(p, orig & BIT(31) ? " big-endian" : " little-endian");
+ p += strlen(p);
+
+ *p++ = ' ';
+ *p++ = '(';
+- p = special_hex_number(p, output + sizeof(output) - 2, *fourcc, sizeof(u32));
++ p = special_hex_number(p, output + sizeof(output) - 2, orig, sizeof(u32));
+ *p++ = ')';
+ *p = '\0';
+
+@@ -2223,10 +2242,6 @@ char *fwnode_string(char *buf, char *end, struct fwnode_handle *fwnode,
+ return widen_string(buf, buf - buf_start, end, spec);
+ }
+
+-/* Disable pointer hashing if requested */
+-bool no_hash_pointers __ro_after_init;
+-EXPORT_SYMBOL_GPL(no_hash_pointers);
+-
+ int __init no_hash_pointers_enable(char *str)
+ {
+ if (no_hash_pointers)
+@@ -2455,7 +2470,7 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ case 'e':
+ /* %pe with a non-ERR_PTR gets treated as plain %p */
+ if (!IS_ERR(ptr))
+- break;
++ return default_pointer(buf, end, ptr, spec);
+ return err_ptr(buf, end, ptr, spec);
+ case 'u':
+ case 'k':
+@@ -2465,16 +2480,9 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ default:
+ return error_string(buf, end, "(einval)", spec);
+ }
++ default:
++ return default_pointer(buf, end, ptr, spec);
+ }
+-
+- /*
+- * default is to _not_ leak addresses, so hash before printing,
+- * unless no_hash_pointers is specified on the command line.
+- */
+- if (unlikely(no_hash_pointers))
+- return pointer_string(buf, end, ptr, spec);
+- else
+- return ptr_to_id(buf, end, ptr, spec);
+ }
+
+ /*
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 6f47f6375808a..88ca87435e3da 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -722,6 +722,8 @@ void xas_create_range(struct xa_state *xas)
+
+ for (;;) {
+ struct xa_node *node = xas->xa_node;
++ if (node->shift >= shift)
++ break;
+ xas->xa_node = xa_parent_locked(xas->xa, node);
+ xas->xa_offset = node->offset - 1;
+ if (node->offset != 0)
+@@ -1079,6 +1081,7 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
+ xa_mk_node(child));
+ if (xa_is_value(curr))
+ values--;
++ xas_update(xas, child);
+ } else {
+ unsigned int canon = offset - xas->xa_sibs;
+
+@@ -1093,6 +1096,7 @@ void xas_split(struct xa_state *xas, void *entry, unsigned int order)
+ } while (offset-- > xas->xa_offset);
+
+ node->nr_values += values;
++ xas_update(xas, node);
+ }
+ EXPORT_SYMBOL_GPL(xas_split);
+ #endif
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 7580baa76af1c..acd7cbb82e160 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -796,6 +796,8 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ unsigned long flags;
+ struct kmemleak_object *object;
+ struct kmemleak_scan_area *area = NULL;
++ unsigned long untagged_ptr;
++ unsigned long untagged_objp;
+
+ object = find_and_get_object(ptr, 1);
+ if (!object) {
+@@ -804,6 +806,9 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ return;
+ }
+
++ untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
++ untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
++
+ if (scan_area_cache)
+ area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
+
+@@ -815,8 +820,8 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
+ goto out_unlock;
+ }
+ if (size == SIZE_MAX) {
+- size = object->pointer + object->size - ptr;
+- } else if (ptr + size > object->pointer + object->size) {
++ size = untagged_objp + object->size - untagged_ptr;
++ } else if (untagged_ptr + size > untagged_objp + object->size) {
+ kmemleak_warn("Scan area larger than object 0x%08lx\n", ptr);
+ dump_object_info(object);
+ kmem_cache_free(scan_area_cache, area);
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 38d0f515d5486..e97e6a93d5aee 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -1433,8 +1433,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec,
+ iov_iter_advance(&iter, iovec.iov_len);
+ }
+
+- if (ret == 0)
+- ret = total_len - iov_iter_count(&iter);
++ ret = (total_len - iov_iter_count(&iter)) ? : ret;
+
+ release_mm:
+ mmput(mm);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 36e9f38c919d0..9b89a340a6629 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -7053,7 +7053,7 @@ static int __init cgroup_memory(char *s)
+ if (!strcmp(token, "nokmem"))
+ cgroup_memory_nokmem = true;
+ }
+- return 0;
++ return 1;
+ }
+ __setup("cgroup.memory=", cgroup_memory);
+
+diff --git a/mm/memory.c b/mm/memory.c
+index c125c4969913a..b69afe3dd597a 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1313,6 +1313,17 @@ struct zap_details {
+ struct folio *single_folio; /* Locked folio to be unmapped */
+ };
+
++/* Whether we should zap all COWed (private) pages too */
++static inline bool should_zap_cows(struct zap_details *details)
++{
++ /* By default, zap all pages */
++ if (!details)
++ return true;
++
++ /* Or, we zap COWed pages only if the caller wants to */
++ return !details->zap_mapping;
++}
++
+ /*
+ * We set details->zap_mapping when we want to unmap shared but keep private
+ * pages. Return true if skip zapping this page, false otherwise.
+@@ -1320,11 +1331,15 @@ struct zap_details {
+ static inline bool
+ zap_skip_check_mapping(struct zap_details *details, struct page *page)
+ {
+- if (!details || !page)
++ /* If we can make a decision without *page.. */
++ if (should_zap_cows(details))
+ return false;
+
+- return details->zap_mapping &&
+- (details->zap_mapping != page_rmapping(page));
++ /* E.g. the caller passes NULL for the case of a zero page */
++ if (!page)
++ return false;
++
++ return details->zap_mapping != page_rmapping(page);
+ }
+
+ static unsigned long zap_pte_range(struct mmu_gather *tlb,
+@@ -1405,17 +1420,24 @@ again:
+ continue;
+ }
+
+- /* If details->check_mapping, we leave swap entries. */
+- if (unlikely(details))
+- continue;
+-
+- if (!non_swap_entry(entry))
++ if (!non_swap_entry(entry)) {
++ /* Genuine swap entry, hence a private anon page */
++ if (!should_zap_cows(details))
++ continue;
+ rss[MM_SWAPENTS]--;
+- else if (is_migration_entry(entry)) {
++ } else if (is_migration_entry(entry)) {
+ struct page *page;
+
+ page = pfn_swap_entry_to_page(entry);
++ if (zap_skip_check_mapping(details, page))
++ continue;
+ rss[mm_counter(page)]--;
++ } else if (is_hwpoison_entry(entry)) {
++ if (!should_zap_cows(details))
++ continue;
++ } else {
++ /* We should have covered all the swap entry types */
++ WARN_ON_ONCE(1);
+ }
+ if (unlikely(!free_swap_and_cache(entry)))
+ print_bad_pte(vma, addr, ptent, NULL);
+@@ -3871,11 +3893,20 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
+ return ret;
+
+ if (unlikely(PageHWPoison(vmf->page))) {
+- if (ret & VM_FAULT_LOCKED)
+- unlock_page(vmf->page);
+- put_page(vmf->page);
++ struct page *page = vmf->page;
++ vm_fault_t poisonret = VM_FAULT_HWPOISON;
++ if (ret & VM_FAULT_LOCKED) {
++ if (page_mapped(page))
++ unmap_mapping_pages(page_mapping(page),
++ page->index, 1, false);
++ /* Retry if a clean page was removed from the cache. */
++ if (invalidate_inode_page(page))
++ poisonret = VM_FAULT_NOPAGE;
++ unlock_page(page);
++ }
++ put_page(page);
+ vmf->page = NULL;
+- return VM_FAULT_HWPOISON;
++ return poisonret;
+ }
+
+ if (unlikely(!(ret & VM_FAULT_LOCKED)))
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 69284d3b5e53f..1628cd90d9fcc 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -786,7 +786,6 @@ static int vma_replace_policy(struct vm_area_struct *vma,
+ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ unsigned long end, struct mempolicy *new_pol)
+ {
+- struct vm_area_struct *next;
+ struct vm_area_struct *prev;
+ struct vm_area_struct *vma;
+ int err = 0;
+@@ -801,8 +800,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ if (start > vma->vm_start)
+ prev = vma;
+
+- for (; vma && vma->vm_start < end; prev = vma, vma = next) {
+- next = vma->vm_next;
++ for (; vma && vma->vm_start < end; prev = vma, vma = vma->vm_next) {
+ vmstart = max(start, vma->vm_start);
+ vmend = min(end, vma->vm_end);
+
+@@ -817,10 +815,6 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
+ anon_vma_name(vma));
+ if (prev) {
+ vma = prev;
+- next = vma->vm_next;
+- if (mpol_equal(vma_policy(vma), new_pol))
+- continue;
+- /* vma_merge() joined vma && vma->next, case 8 */
+ goto replace;
+ }
+ if (vma->vm_start != vmstart) {
+diff --git a/mm/migrate.c b/mm/migrate.c
+index c7da064b4781b..086a366374678 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -3190,7 +3190,7 @@ again:
+ /*
+ * For callers that do not hold get_online_mems() already.
+ */
+-static void set_migration_target_nodes(void)
++void set_migration_target_nodes(void)
+ {
+ get_online_mems();
+ __set_migration_target_nodes();
+@@ -3254,51 +3254,24 @@ static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
+ return notifier_from_errno(0);
+ }
+
+-/*
+- * React to hotplug events that might affect the migration targets
+- * like events that online or offline NUMA nodes.
+- *
+- * The ordering is also currently dependent on which nodes have
+- * CPUs. That means we need CPU on/offline notification too.
+- */
+-static int migration_online_cpu(unsigned int cpu)
+-{
+- set_migration_target_nodes();
+- return 0;
+-}
+-
+-static int migration_offline_cpu(unsigned int cpu)
++void __init migrate_on_reclaim_init(void)
+ {
+- set_migration_target_nodes();
+- return 0;
+-}
+-
+-static int __init migrate_on_reclaim_init(void)
+-{
+- int ret;
+-
+ node_demotion = kmalloc_array(nr_node_ids,
+ sizeof(struct demotion_nodes),
+ GFP_KERNEL);
+ WARN_ON(!node_demotion);
+
+- ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
+- NULL, migration_offline_cpu);
++ hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
+ /*
+- * In the unlikely case that this fails, the automatic
+- * migration targets may become suboptimal for nodes
+- * where N_CPU changes. With such a small impact in a
+- * rare case, do not bother trying to do anything special.
++ * At this point, all numa nodes with memory/CPUs have their state
++ * properly set, so we can build the demotion order now.
++ * Let us hold the cpu_hotplug lock, as we could possibly have
++ * CPU hotplug events during boot.
+ */
+- WARN_ON(ret < 0);
+- ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
+- migration_online_cpu, NULL);
+- WARN_ON(ret < 0);
+-
+- hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
+- return 0;
++ cpus_read_lock();
++ set_migration_target_nodes();
++ cpus_read_unlock();
+ }
+-late_initcall(migrate_on_reclaim_init);
+ #endif /* CONFIG_HOTPLUG_CPU */
+
+ bool numa_demotion_enabled = false;
+diff --git a/mm/mlock.c b/mm/mlock.c
+index 25934e7db3e10..37f969ec68fa4 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -827,13 +827,12 @@ int user_shm_lock(size_t size, struct ucounts *ucounts)
+
+ locked = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ lock_limit = rlimit(RLIMIT_MEMLOCK);
+- if (lock_limit == RLIM_INFINITY)
+- allowed = 1;
+- lock_limit >>= PAGE_SHIFT;
++ if (lock_limit != RLIM_INFINITY)
++ lock_limit >>= PAGE_SHIFT;
+ spin_lock(&shmlock_user_lock);
+ memlock = inc_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_MEMLOCK, locked);
+
+- if (!allowed && (memlock == LONG_MAX || memlock > lock_limit) && !capable(CAP_IPC_LOCK)) {
++ if ((memlock == LONG_MAX || memlock > lock_limit) && !capable(CAP_IPC_LOCK)) {
+ dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_MEMLOCK, locked);
+ goto out;
+ }
+diff --git a/mm/mmap.c b/mm/mmap.c
+index f61a15474dd6d..18875c216f8db 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2557,7 +2557,7 @@ static int __init cmdline_parse_stack_guard_gap(char *p)
+ if (!*endptr)
+ stack_guard_gap = val << PAGE_SHIFT;
+
+- return 0;
++ return 1;
+ }
+ __setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 3589febc6d319..a1fbf656e7dbd 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7972,10 +7972,17 @@ restart:
+
+ out2:
+ /* Align start of ZONE_MOVABLE on all nids to MAX_ORDER_NR_PAGES */
+- for (nid = 0; nid < MAX_NUMNODES; nid++)
++ for (nid = 0; nid < MAX_NUMNODES; nid++) {
++ unsigned long start_pfn, end_pfn;
++
+ zone_movable_pfn[nid] =
+ roundup(zone_movable_pfn[nid], MAX_ORDER_NR_PAGES);
+
++ get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
++ if (zone_movable_pfn[nid] >= end_pfn)
++ zone_movable_pfn[nid] = 0;
++ }
++
+ out:
+ /* restore the node_state */
+ node_states[N_MEMORY] = saved_node_state;
+diff --git a/mm/slab.c b/mm/slab.c
+index ddf5737c63d90..a36af26e15216 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -3421,6 +3421,7 @@ static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
+
+ if (is_kfence_address(objp)) {
+ kmemleak_free_recursive(objp, cachep->flags);
++ memcg_slab_free_hook(cachep, &objp, 1);
+ __kfence_free(objp);
+ return;
+ }
+diff --git a/mm/usercopy.c b/mm/usercopy.c
+index d0d268135d96d..21fd84ee7fcd4 100644
+--- a/mm/usercopy.c
++++ b/mm/usercopy.c
+@@ -295,7 +295,10 @@ static bool enable_checks __initdata = true;
+
+ static int __init parse_hardened_usercopy(char *str)
+ {
+- return strtobool(str, &enable_checks);
++ if (strtobool(str, &enable_checks))
++ pr_warn("Invalid option string for hardened_usercopy: '%s'\n",
++ str);
++ return 1;
+ }
+
+ __setup("hardened_usercopy=", parse_hardened_usercopy);
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 4057372745d04..9e9536df51b5a 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -28,6 +28,7 @@
+ #include <linux/mm_inline.h>
+ #include <linux/page_ext.h>
+ #include <linux/page_owner.h>
++#include <linux/migrate.h>
+
+ #include "internal.h"
+
+@@ -2043,7 +2044,12 @@ static void __init init_cpu_node_state(void)
+ static int vmstat_cpu_online(unsigned int cpu)
+ {
+ refresh_zone_stat_thresholds();
+- node_set_state(cpu_to_node(cpu), N_CPU);
++
++ if (!node_state(cpu_to_node(cpu), N_CPU)) {
++ node_set_state(cpu_to_node(cpu), N_CPU);
++ set_migration_target_nodes();
++ }
++
+ return 0;
+ }
+
+@@ -2066,6 +2072,8 @@ static int vmstat_cpu_dead(unsigned int cpu)
+ return 0;
+
+ node_clear_state(node, N_CPU);
++ set_migration_target_nodes();
++
+ return 0;
+ }
+
+@@ -2097,6 +2105,9 @@ void __init init_mm_internals(void)
+
+ start_shepherd_timer();
+ #endif
++#if defined(CONFIG_MIGRATION) && defined(CONFIG_HOTPLUG_CPU)
++ migrate_on_reclaim_init();
++#endif
+ #ifdef CONFIG_PROC_FS
+ proc_create_seq("buddyinfo", 0444, NULL, &fragmentation_op);
+ proc_create_seq("pagetypeinfo", 0400, NULL, &pagetypeinfo_op);
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index 6bd0971807721..f5686c463bc0d 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -89,18 +89,20 @@ again:
+ sk = s->sk;
+ if (!sk) {
+ spin_unlock_bh(&ax25_list_lock);
+- s->ax25_dev = NULL;
+ ax25_disconnect(s, ENETUNREACH);
++ s->ax25_dev = NULL;
+ spin_lock_bh(&ax25_list_lock);
+ goto again;
+ }
+ sock_hold(sk);
+ spin_unlock_bh(&ax25_list_lock);
+ lock_sock(sk);
+- s->ax25_dev = NULL;
+- dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
+- ax25_dev_put(ax25_dev);
+ ax25_disconnect(s, ENETUNREACH);
++ s->ax25_dev = NULL;
++ if (sk->sk_socket) {
++ dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
++ ax25_dev_put(ax25_dev);
++ }
+ release_sock(sk);
+ spin_lock_bh(&ax25_list_lock);
+ sock_put(sk);
+@@ -979,14 +981,16 @@ static int ax25_release(struct socket *sock)
+ {
+ struct sock *sk = sock->sk;
+ ax25_cb *ax25;
++ ax25_dev *ax25_dev;
+
+ if (sk == NULL)
+ return 0;
+
+ sock_hold(sk);
+- sock_orphan(sk);
+ lock_sock(sk);
++ sock_orphan(sk);
+ ax25 = sk_to_ax25(sk);
++ ax25_dev = ax25->ax25_dev;
+
+ if (sk->sk_type == SOCK_SEQPACKET) {
+ switch (ax25->state) {
+@@ -1048,6 +1052,10 @@ static int ax25_release(struct socket *sock)
+ sk->sk_state_change(sk);
+ ax25_destroy_socket(ax25);
+ }
++ if (ax25_dev) {
++ dev_put_track(ax25_dev->dev, &ax25_dev->dev_tracker);
++ ax25_dev_put(ax25_dev);
++ }
+
+ sock->sk = NULL;
+ release_sock(sk);
+diff --git a/net/ax25/ax25_subr.c b/net/ax25/ax25_subr.c
+index 15ab812c4fe4b..3a476e4f6cd0b 100644
+--- a/net/ax25/ax25_subr.c
++++ b/net/ax25/ax25_subr.c
+@@ -261,12 +261,20 @@ void ax25_disconnect(ax25_cb *ax25, int reason)
+ {
+ ax25_clear_queues(ax25);
+
+- if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
+- ax25_stop_heartbeat(ax25);
+- ax25_stop_t1timer(ax25);
+- ax25_stop_t2timer(ax25);
+- ax25_stop_t3timer(ax25);
+- ax25_stop_idletimer(ax25);
++ if (reason == ENETUNREACH) {
++ del_timer_sync(&ax25->timer);
++ del_timer_sync(&ax25->t1timer);
++ del_timer_sync(&ax25->t2timer);
++ del_timer_sync(&ax25->t3timer);
++ del_timer_sync(&ax25->idletimer);
++ } else {
++ if (!ax25->sk || !sock_flag(ax25->sk, SOCK_DESTROY))
++ ax25_stop_heartbeat(ax25);
++ ax25_stop_t1timer(ax25);
++ ax25_stop_t2timer(ax25);
++ ax25_stop_t3timer(ax25);
++ ax25_stop_idletimer(ax25);
++ }
+
+ ax25->state = AX25_STATE_0;
+
+diff --git a/net/bluetooth/eir.h b/net/bluetooth/eir.h
+index 05e2e917fc254..e5876751f07ed 100644
+--- a/net/bluetooth/eir.h
++++ b/net/bluetooth/eir.h
+@@ -15,6 +15,11 @@ u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr);
+ u8 eir_append_local_name(struct hci_dev *hdev, u8 *eir, u8 ad_len);
+ u8 eir_append_appearance(struct hci_dev *hdev, u8 *ptr, u8 ad_len);
+
++static inline u16 eir_precalc_len(u8 data_len)
++{
++ return sizeof(u8) * 2 + data_len;
++}
++
+ static inline u16 eir_append_data(u8 *eir, u16 eir_len, u8 type,
+ u8 *data, u8 data_len)
+ {
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 04ebe901e86f0..3bb2b3b6a1c92 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -669,7 +669,9 @@ static void le_conn_timeout(struct work_struct *work)
+ if (conn->role == HCI_ROLE_SLAVE) {
+ /* Disable LE Advertising */
+ le_disable_advertising(hdev);
++ hci_dev_lock(hdev);
+ hci_le_conn_failed(conn, HCI_ERROR_ADVERTISING_TIMEOUT);
++ hci_dev_unlock(hdev);
+ return;
+ }
+
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index fc30f4c03d292..a105b7317560c 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4534,7 +4534,7 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, void *edata,
+ if (!info) {
+ bt_dev_err(hdev, "Malformed HCI Event: 0x%2.2x",
+ HCI_EV_INQUIRY_RESULT_WITH_RSSI);
+- return;
++ goto unlock;
+ }
+
+ bacpy(&data.bdaddr, &info->bdaddr);
+@@ -4565,7 +4565,7 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, void *edata,
+ if (!info) {
+ bt_dev_err(hdev, "Malformed HCI Event: 0x%2.2x",
+ HCI_EV_INQUIRY_RESULT_WITH_RSSI);
+- return;
++ goto unlock;
+ }
+
+ bacpy(&data.bdaddr, &info->bdaddr);
+@@ -4587,7 +4587,7 @@ static void hci_inquiry_result_with_rssi_evt(struct hci_dev *hdev, void *edata,
+ bt_dev_err(hdev, "Malformed HCI Event: 0x%2.2x",
+ HCI_EV_INQUIRY_RESULT_WITH_RSSI);
+ }
+-
++unlock:
+ hci_dev_unlock(hdev);
+ }
+
+@@ -6798,7 +6798,7 @@ static const struct hci_ev {
+ HCI_EV(HCI_EV_NUM_COMP_BLOCKS, hci_num_comp_blocks_evt,
+ sizeof(struct hci_ev_num_comp_blocks)),
+ /* [0xff = HCI_EV_VENDOR] */
+- HCI_EV(HCI_EV_VENDOR, msft_vendor_evt, 0),
++ HCI_EV_VL(HCI_EV_VENDOR, msft_vendor_evt, 0, HCI_MAX_EVENT_SIZE),
+ };
+
+ static void hci_event_func(struct hci_dev *hdev, u8 event, struct sk_buff *skb,
+@@ -6823,8 +6823,9 @@ static void hci_event_func(struct hci_dev *hdev, u8 event, struct sk_buff *skb,
+ * decide if that is acceptable.
+ */
+ if (skb->len > ev->max_len)
+- bt_dev_warn(hdev, "unexpected event 0x%2.2x length: %u > %u",
+- event, skb->len, ev->max_len);
++ bt_dev_warn_ratelimited(hdev,
++ "unexpected event 0x%2.2x length: %u > %u",
++ event, skb->len, ev->max_len);
+
+ data = hci_ev_skb_pull(hdev, skb, event, ev->min_len);
+ if (!data)
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 5e93f37c2e04d..405d48c3e63ed 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4432,7 +4432,7 @@ static int hci_disconnect_all_sync(struct hci_dev *hdev, u8 reason)
+ return err;
+ }
+
+- return err;
++ return 0;
+ }
+
+ /* This function perform power off HCI command sequence as follows:
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 230a7a8196c07..15eab8b968ce8 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -9086,12 +9086,14 @@ void mgmt_device_connected(struct hci_dev *hdev, struct hci_conn *conn,
+ u16 eir_len = 0;
+ u32 flags = 0;
+
++ /* allocate buff for LE or BR/EDR adv */
+ if (conn->le_adv_data_len > 0)
+ skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_CONNECTED,
+- conn->le_adv_data_len);
++ sizeof(*ev) + conn->le_adv_data_len);
+ else
+ skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_CONNECTED,
+- 2 + name_len + 5);
++ sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0) +
++ eir_precalc_len(sizeof(conn->dev_class)));
+
+ ev = skb_put(skb, sizeof(*ev));
+ bacpy(&ev->addr.bdaddr, &conn->dst);
+@@ -9707,13 +9709,11 @@ void mgmt_remote_name(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+ {
+ struct sk_buff *skb;
+ struct mgmt_ev_device_found *ev;
+- u16 eir_len;
+- u32 flags;
++ u16 eir_len = 0;
++ u32 flags = 0;
+
+- if (name_len)
+- skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_FOUND, 2 + name_len);
+- else
+- skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_FOUND, 0);
++ skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_FOUND,
++ sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0));
+
+ ev = skb_put(skb, sizeof(*ev));
+ bacpy(&ev->addr.bdaddr, bdaddr);
+@@ -9723,10 +9723,8 @@ void mgmt_remote_name(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+ if (name) {
+ eir_len = eir_append_data(ev->eir, 0, EIR_NAME_COMPLETE, name,
+ name_len);
+- flags = 0;
+ skb_put(skb, eir_len);
+ } else {
+- eir_len = 0;
+ flags = MGMT_DEV_FOUND_NAME_REQUEST_FAILED;
+ }
+
+diff --git a/net/can/isotp.c b/net/can/isotp.c
+index d2a430b6a13bd..a95d171b3a64b 100644
+--- a/net/can/isotp.c
++++ b/net/can/isotp.c
+@@ -1005,26 +1005,29 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ {
+ struct sock *sk = sock->sk;
+ struct sk_buff *skb;
+- int err = 0;
+- int noblock;
++ struct isotp_sock *so = isotp_sk(sk);
++ int noblock = flags & MSG_DONTWAIT;
++ int ret = 0;
+
+- noblock = flags & MSG_DONTWAIT;
+- flags &= ~MSG_DONTWAIT;
++ if (flags & ~(MSG_DONTWAIT | MSG_TRUNC | MSG_PEEK))
++ return -EINVAL;
+
+- skb = skb_recv_datagram(sk, flags, noblock, &err);
++ if (!so->bound)
++ return -EADDRNOTAVAIL;
++
++ flags &= ~MSG_DONTWAIT;
++ skb = skb_recv_datagram(sk, flags, noblock, &ret);
+ if (!skb)
+- return err;
++ return ret;
+
+ if (size < skb->len)
+ msg->msg_flags |= MSG_TRUNC;
+ else
+ size = skb->len;
+
+- err = memcpy_to_msg(msg, skb->data, size);
+- if (err < 0) {
+- skb_free_datagram(sk, skb);
+- return err;
+- }
++ ret = memcpy_to_msg(msg, skb->data, size);
++ if (ret < 0)
++ goto out_err;
+
+ sock_recv_timestamp(msg, sk, skb);
+
+@@ -1034,9 +1037,13 @@ static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
+ memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
+ }
+
++ /* set length of return value */
++ ret = (flags & MSG_TRUNC) ? skb->len : size;
++
++out_err:
+ skb_free_datagram(sk, skb);
+
+- return size;
++ return ret;
+ }
+
+ static int isotp_release(struct socket *sock)
+@@ -1104,6 +1111,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ struct net *net = sock_net(sk);
+ int ifindex;
+ struct net_device *dev;
++ canid_t tx_id, rx_id;
+ int err = 0;
+ int notify_enetdown = 0;
+ int do_rx_reg = 1;
+@@ -1111,8 +1119,18 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ if (len < ISOTP_MIN_NAMELEN)
+ return -EINVAL;
+
+- if (addr->can_addr.tp.tx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG))
+- return -EADDRNOTAVAIL;
++ /* sanitize tx/rx CAN identifiers */
++ tx_id = addr->can_addr.tp.tx_id;
++ if (tx_id & CAN_EFF_FLAG)
++ tx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
++ else
++ tx_id &= CAN_SFF_MASK;
++
++ rx_id = addr->can_addr.tp.rx_id;
++ if (rx_id & CAN_EFF_FLAG)
++ rx_id &= (CAN_EFF_FLAG | CAN_EFF_MASK);
++ else
++ rx_id &= CAN_SFF_MASK;
+
+ if (!addr->can_ifindex)
+ return -ENODEV;
+@@ -1124,21 +1142,13 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ do_rx_reg = 0;
+
+ /* do not validate rx address for functional addressing */
+- if (do_rx_reg) {
+- if (addr->can_addr.tp.rx_id == addr->can_addr.tp.tx_id) {
+- err = -EADDRNOTAVAIL;
+- goto out;
+- }
+-
+- if (addr->can_addr.tp.rx_id & (CAN_ERR_FLAG | CAN_RTR_FLAG)) {
+- err = -EADDRNOTAVAIL;
+- goto out;
+- }
++ if (do_rx_reg && rx_id == tx_id) {
++ err = -EADDRNOTAVAIL;
++ goto out;
+ }
+
+ if (so->bound && addr->can_ifindex == so->ifindex &&
+- addr->can_addr.tp.rx_id == so->rxid &&
+- addr->can_addr.tp.tx_id == so->txid)
++ rx_id == so->rxid && tx_id == so->txid)
+ goto out;
+
+ dev = dev_get_by_index(net, addr->can_ifindex);
+@@ -1162,8 +1172,7 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ ifindex = dev->ifindex;
+
+ if (do_rx_reg)
+- can_rx_register(net, dev, addr->can_addr.tp.rx_id,
+- SINGLE_MASK(addr->can_addr.tp.rx_id),
++ can_rx_register(net, dev, rx_id, SINGLE_MASK(rx_id),
+ isotp_rcv, sk, "isotp", sk);
+
+ dev_put(dev);
+@@ -1183,8 +1192,8 @@ static int isotp_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+
+ /* switch to new settings */
+ so->ifindex = ifindex;
+- so->rxid = addr->can_addr.tp.rx_id;
+- so->txid = addr->can_addr.tp.tx_id;
++ so->rxid = rx_id;
++ so->txid = tx_id;
+ so->bound = 1;
+
+ out:
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index ea51e23e9247e..a8a2fb745274c 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -201,7 +201,7 @@ static void __build_skb_around(struct sk_buff *skb, void *data,
+ skb->head = data;
+ skb->data = data;
+ skb_reset_tail_pointer(skb);
+- skb->end = skb->tail + size;
++ skb_set_end_offset(skb, size);
+ skb->mac_header = (typeof(skb->mac_header))~0U;
+ skb->transport_header = (typeof(skb->transport_header))~0U;
+
+@@ -1736,11 +1736,10 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
+ skb->head = data;
+ skb->head_frag = 0;
+ skb->data += off;
++
++ skb_set_end_offset(skb, size);
+ #ifdef NET_SKBUFF_DATA_USES_OFFSET
+- skb->end = size;
+ off = nhead;
+-#else
+- skb->end = skb->head + size;
+ #endif
+ skb->tail += off;
+ skb_headers_offset_update(skb, nhead);
+@@ -1788,6 +1787,38 @@ struct sk_buff *skb_realloc_headroom(struct sk_buff *skb, unsigned int headroom)
+ }
+ EXPORT_SYMBOL(skb_realloc_headroom);
+
++int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
++{
++ unsigned int saved_end_offset, saved_truesize;
++ struct skb_shared_info *shinfo;
++ int res;
++
++ saved_end_offset = skb_end_offset(skb);
++ saved_truesize = skb->truesize;
++
++ res = pskb_expand_head(skb, 0, 0, pri);
++ if (res)
++ return res;
++
++ skb->truesize = saved_truesize;
++
++ if (likely(skb_end_offset(skb) == saved_end_offset))
++ return 0;
++
++ shinfo = skb_shinfo(skb);
++
++ /* We are about to change back skb->end,
++	 * so we need to move skb_shinfo() to its new location.
++ */
++ memmove(skb->head + saved_end_offset,
++ shinfo,
++ offsetof(struct skb_shared_info, frags[shinfo->nr_frags]));
++
++ skb_set_end_offset(skb, saved_end_offset);
++
++ return 0;
++}
++
+ /**
+ * skb_expand_head - reallocate header of &sk_buff
+ * @skb: buffer to reallocate
+@@ -6044,11 +6075,7 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
+ skb->head = data;
+ skb->data = data;
+ skb->head_frag = 0;
+-#ifdef NET_SKBUFF_DATA_USES_OFFSET
+- skb->end = size;
+-#else
+- skb->end = skb->head + size;
+-#endif
++ skb_set_end_offset(skb, size);
+ skb_set_tail_pointer(skb, skb_headlen(skb));
+ skb_headers_offset_update(skb, 0);
+ skb->cloned = 0;
+@@ -6186,11 +6213,7 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
+ skb->head = data;
+ skb->head_frag = 0;
+ skb->data = data;
+-#ifdef NET_SKBUFF_DATA_USES_OFFSET
+- skb->end = size;
+-#else
+- skb->end = skb->head + size;
+-#endif
++ skb_set_end_offset(skb, size);
+ skb_reset_tail_pointer(skb);
+ skb_headers_offset_update(skb, 0);
+ skb->cloned = 0;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 929a2b096b04e..cc381165ea080 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -27,6 +27,7 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ int elem_first_coalesce)
+ {
+ struct page_frag *pfrag = sk_page_frag(sk);
++ u32 osize = msg->sg.size;
+ int ret = 0;
+
+ len -= msg->sg.size;
+@@ -35,13 +36,17 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ u32 orig_offset;
+ int use, i;
+
+- if (!sk_page_frag_refill(sk, pfrag))
+- return -ENOMEM;
++ if (!sk_page_frag_refill(sk, pfrag)) {
++ ret = -ENOMEM;
++ goto msg_trim;
++ }
+
+ orig_offset = pfrag->offset;
+ use = min_t(int, len, pfrag->size - orig_offset);
+- if (!sk_wmem_schedule(sk, use))
+- return -ENOMEM;
++ if (!sk_wmem_schedule(sk, use)) {
++ ret = -ENOMEM;
++ goto msg_trim;
++ }
+
+ i = msg->sg.end;
+ sk_msg_iter_var_prev(i);
+@@ -71,6 +76,10 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
+ }
+
+ return ret;
++
++msg_trim:
++ sk_msg_trim(sk, msg, osize);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(sk_msg_alloc);
+
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index 88e2808019b47..a39bbed77f87d 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -1722,6 +1722,10 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
+ struct dsa_port *dp;
+
+ mutex_lock(&dsa2_mutex);
++
++ if (!ds->setup)
++ goto out;
++
+ rtnl_lock();
+
+ dsa_switch_for_each_user_port(dp, ds) {
+@@ -1738,6 +1742,7 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
+ dp->master->dsa_ptr = NULL;
+
+ rtnl_unlock();
++out:
+ mutex_unlock(&dsa2_mutex);
+ }
+ EXPORT_SYMBOL_GPL(dsa_switch_shutdown);
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index e3c7d2627a619..517cc83d13cc8 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -113,26 +113,15 @@ static int dsa_switch_bridge_join(struct dsa_switch *ds,
+ return dsa_tag_8021q_bridge_join(ds, info);
+ }
+
+-static int dsa_switch_bridge_leave(struct dsa_switch *ds,
+- struct dsa_notifier_bridge_info *info)
++static int dsa_switch_sync_vlan_filtering(struct dsa_switch *ds,
++ struct dsa_notifier_bridge_info *info)
+ {
+- struct dsa_switch_tree *dst = ds->dst;
+ struct netlink_ext_ack extack = {0};
+ bool change_vlan_filtering = false;
+ bool vlan_filtering;
+ struct dsa_port *dp;
+ int err;
+
+- if (dst->index == info->tree_index && ds->index == info->sw_index &&
+- ds->ops->port_bridge_leave)
+- ds->ops->port_bridge_leave(ds, info->port, info->bridge);
+-
+- if ((dst->index != info->tree_index || ds->index != info->sw_index) &&
+- ds->ops->crosschip_bridge_leave)
+- ds->ops->crosschip_bridge_leave(ds, info->tree_index,
+- info->sw_index, info->port,
+- info->bridge);
+-
+ if (ds->needs_standalone_vlan_filtering &&
+ !br_vlan_enabled(info->bridge.dev)) {
+ change_vlan_filtering = true;
+@@ -172,6 +161,31 @@ static int dsa_switch_bridge_leave(struct dsa_switch *ds,
+ return err;
+ }
+
++ return 0;
++}
++
++static int dsa_switch_bridge_leave(struct dsa_switch *ds,
++ struct dsa_notifier_bridge_info *info)
++{
++ struct dsa_switch_tree *dst = ds->dst;
++ int err;
++
++ if (dst->index == info->tree_index && ds->index == info->sw_index &&
++ ds->ops->port_bridge_leave)
++ ds->ops->port_bridge_leave(ds, info->port, info->bridge);
++
++ if ((dst->index != info->tree_index || ds->index != info->sw_index) &&
++ ds->ops->crosschip_bridge_leave)
++ ds->ops->crosschip_bridge_leave(ds, info->tree_index,
++ info->sw_index, info->port,
++ info->bridge);
++
++ if (ds->dst->index == info->tree_index && ds->index == info->sw_index) {
++ err = dsa_switch_sync_vlan_filtering(ds, info);
++ if (err)
++ return err;
++ }
++
+ return dsa_tag_8021q_bridge_leave(ds, info);
+ }
+
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index f33ad1f383b68..d5d058de36646 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -499,6 +499,15 @@ void __ip_select_ident(struct net *net, struct iphdr *iph, int segs)
+ }
+ EXPORT_SYMBOL(__ip_select_ident);
+
++static void ip_rt_fix_tos(struct flowi4 *fl4)
++{
++ __u8 tos = RT_FL_TOS(fl4);
++
++ fl4->flowi4_tos = tos & IPTOS_RT_MASK;
++ fl4->flowi4_scope = tos & RTO_ONLINK ?
++ RT_SCOPE_LINK : RT_SCOPE_UNIVERSE;
++}
++
+ static void __build_flow_key(const struct net *net, struct flowi4 *fl4,
+ const struct sock *sk,
+ const struct iphdr *iph,
+@@ -824,6 +833,7 @@ static void ip_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_buf
+ rt = (struct rtable *) dst;
+
+ __build_flow_key(net, &fl4, sk, iph, oif, tos, prot, mark, 0);
++ ip_rt_fix_tos(&fl4);
+ __ip_do_redirect(rt, skb, &fl4, true);
+ }
+
+@@ -1048,6 +1058,7 @@ static void ip_rt_update_pmtu(struct dst_entry *dst, struct sock *sk,
+ struct flowi4 fl4;
+
+ ip_rt_build_flow_key(&fl4, sk, skb);
++ ip_rt_fix_tos(&fl4);
+
+ /* Don't make lookup fail for bridged encapsulations */
+ if (skb && netif_is_any_bridge_port(skb->dev))
+@@ -1122,6 +1133,8 @@ void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+ goto out;
+
+ new = true;
++ } else {
++ ip_rt_fix_tos(&fl4);
+ }
+
+ __ip_rt_update_pmtu((struct rtable *)xfrm_dst_path(&rt->dst), &fl4, mtu);
+@@ -2603,7 +2616,6 @@ add:
+ struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
+ const struct sk_buff *skb)
+ {
+- __u8 tos = RT_FL_TOS(fl4);
+ struct fib_result res = {
+ .type = RTN_UNSPEC,
+ .fi = NULL,
+@@ -2613,9 +2625,7 @@ struct rtable *ip_route_output_key_hash(struct net *net, struct flowi4 *fl4,
+ struct rtable *rth;
+
+ fl4->flowi4_iif = LOOPBACK_IFINDEX;
+- fl4->flowi4_tos = tos & IPTOS_RT_MASK;
+- fl4->flowi4_scope = ((tos & RTO_ONLINK) ?
+- RT_SCOPE_LINK : RT_SCOPE_UNIVERSE);
++ ip_rt_fix_tos(fl4);
+
+ rcu_read_lock();
+ rth = ip_route_output_key_hash_rcu(net, fl4, &res, skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 9b9b02052fd36..1cdcb4df0eb7e 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -138,10 +138,9 @@ int tcp_bpf_sendmsg_redir(struct sock *sk, struct sk_msg *msg,
+ struct sk_psock *psock = sk_psock_get(sk);
+ int ret;
+
+- if (unlikely(!psock)) {
+- sk_msg_free(sk, msg);
+- return 0;
+- }
++ if (unlikely(!psock))
++ return -EPIPE;
++
+ ret = ingress ? bpf_tcp_ingress(sk, psock, msg, bytes, flags) :
+ tcp_bpf_push_locked(sk, msg, bytes, flags, false);
+ sk_psock_put(sk, psock);
+@@ -335,7 +334,7 @@ more_data:
+ cork = true;
+ psock->cork = NULL;
+ }
+- sk_msg_return(sk, msg, tosend);
++ sk_msg_return(sk, msg, msg->sg.size);
+ release_sock(sk);
+
+ ret = tcp_bpf_sendmsg_redir(sk_redir, msg, tosend, flags);
+@@ -375,8 +374,11 @@ more_data:
+ }
+ if (msg &&
+ msg->sg.data[msg->sg.start].page_link &&
+- msg->sg.data[msg->sg.start].length)
++ msg->sg.data[msg->sg.start].length) {
++ if (eval == __SK_REDIRECT)
++ sk_mem_charge(sk, msg->sg.size);
+ goto more_data;
++ }
+ }
+ return ret;
+ }
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 5079832af5c10..257780f93305f 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -3719,6 +3719,7 @@ static void tcp_connect_queue_skb(struct sock *sk, struct sk_buff *skb)
+ */
+ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ {
++ struct inet_connection_sock *icsk = inet_csk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_fastopen_request *fo = tp->fastopen_req;
+ int space, err = 0;
+@@ -3733,8 +3734,10 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
+ * private TCP options. The cost is reduced data space in SYN :(
+ */
+ tp->rx_opt.mss_clamp = tcp_mss_clamp(tp, tp->rx_opt.mss_clamp);
++ /* Sync mss_cache after updating the mss_clamp */
++ tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);
+
+- space = __tcp_mtu_to_mss(sk, inet_csk(sk)->icsk_pmtu_cookie) -
++ space = __tcp_mtu_to_mss(sk, icsk->icsk_pmtu_cookie) -
+ MAX_TCP_OPTION_SPACE;
+
+ space = min_t(size_t, space, fo->size);
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index e87bccaab561f..95aaf00c876c3 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -2380,7 +2380,7 @@ u8 *ieee80211_ie_build_vht_cap(u8 *pos, struct ieee80211_sta_vht_cap *vht_cap,
+ u8 *ieee80211_ie_build_vht_oper(u8 *pos, struct ieee80211_sta_vht_cap *vht_cap,
+ const struct cfg80211_chan_def *chandef);
+ u8 ieee80211_ie_len_he_cap(struct ieee80211_sub_if_data *sdata, u8 iftype);
+-u8 *ieee80211_ie_build_he_cap(u8 *pos,
++u8 *ieee80211_ie_build_he_cap(u32 disable_flags, u8 *pos,
+ const struct ieee80211_sta_he_cap *he_cap,
+ u8 *end);
+ void ieee80211_ie_build_he_6ghz_cap(struct ieee80211_sub_if_data *sdata,
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index 15ac08d111ea1..6847fdf934392 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -580,7 +580,7 @@ int mesh_add_he_cap_ie(struct ieee80211_sub_if_data *sdata,
+ return -ENOMEM;
+
+ pos = skb_put(skb, ie_len);
+- ieee80211_ie_build_he_cap(pos, he_cap, pos + ie_len);
++ ieee80211_ie_build_he_cap(0, pos, he_cap, pos + ie_len);
+
+ return 0;
+ }
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 744842c4513b1..c4d3e2da73f23 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -636,7 +636,7 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata,
+ struct sk_buff *skb,
+ struct ieee80211_supported_band *sband)
+ {
+- u8 *pos;
++ u8 *pos, *pre_he_pos;
+ const struct ieee80211_sta_he_cap *he_cap = NULL;
+ struct ieee80211_chanctx_conf *chanctx_conf;
+ u8 he_cap_size;
+@@ -653,16 +653,21 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata,
+
+ he_cap = ieee80211_get_he_iftype_cap(sband,
+ ieee80211_vif_type_p2p(&sdata->vif));
+- if (!he_cap || !reg_cap)
++ if (!he_cap || !chanctx_conf || !reg_cap)
+ return;
+
++ /* get a max size estimate */
+ he_cap_size =
+ 2 + 1 + sizeof(he_cap->he_cap_elem) +
+ ieee80211_he_mcs_nss_size(&he_cap->he_cap_elem) +
+ ieee80211_he_ppe_size(he_cap->ppe_thres[0],
+ he_cap->he_cap_elem.phy_cap_info);
+ pos = skb_put(skb, he_cap_size);
+- ieee80211_ie_build_he_cap(pos, he_cap, pos + he_cap_size);
++ pre_he_pos = pos;
++ pos = ieee80211_ie_build_he_cap(sdata->u.mgd.flags,
++ pos, he_cap, pos + he_cap_size);
++ /* trim excess if any */
++ skb_trim(skb, skb->len - (pre_he_pos + he_cap_size - pos));
+
+ ieee80211_ie_build_he_6ghz_cap(sdata, skb);
+ }
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index f71b042a5c8bb..342c2bfe27091 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1974,7 +1974,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
+ if (he_cap &&
+ cfg80211_any_usable_channels(local->hw.wiphy, BIT(sband->band),
+ IEEE80211_CHAN_NO_HE)) {
+- pos = ieee80211_ie_build_he_cap(pos, he_cap, end);
++ pos = ieee80211_ie_build_he_cap(0, pos, he_cap, end);
+ if (!pos)
+ goto out_err;
+ }
+@@ -2918,10 +2918,11 @@ u8 ieee80211_ie_len_he_cap(struct ieee80211_sub_if_data *sdata, u8 iftype)
+ he_cap->he_cap_elem.phy_cap_info);
+ }
+
+-u8 *ieee80211_ie_build_he_cap(u8 *pos,
++u8 *ieee80211_ie_build_he_cap(u32 disable_flags, u8 *pos,
+ const struct ieee80211_sta_he_cap *he_cap,
+ u8 *end)
+ {
++ struct ieee80211_he_cap_elem elem;
+ u8 n;
+ u8 ie_len;
+ u8 *orig_pos = pos;
+@@ -2934,7 +2935,23 @@ u8 *ieee80211_ie_build_he_cap(u8 *pos,
+ if (!he_cap)
+ return orig_pos;
+
+- n = ieee80211_he_mcs_nss_size(&he_cap->he_cap_elem);
++ /* modify on stack first to calculate 'n' and 'ie_len' correctly */
++ elem = he_cap->he_cap_elem;
++
++ if (disable_flags & IEEE80211_STA_DISABLE_40MHZ)
++ elem.phy_cap_info[0] &=
++ ~(IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G |
++ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G);
++
++ if (disable_flags & IEEE80211_STA_DISABLE_160MHZ)
++ elem.phy_cap_info[0] &=
++ ~IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
++
++ if (disable_flags & IEEE80211_STA_DISABLE_80P80MHZ)
++ elem.phy_cap_info[0] &=
++ ~IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G;
++
++ n = ieee80211_he_mcs_nss_size(&elem);
+ ie_len = 2 + 1 +
+ sizeof(he_cap->he_cap_elem) + n +
+ ieee80211_he_ppe_size(he_cap->ppe_thres[0],
+@@ -2948,8 +2965,8 @@ u8 *ieee80211_ie_build_he_cap(u8 *pos,
+ *pos++ = WLAN_EID_EXT_HE_CAPABILITY;
+
+ /* Fixed data */
+- memcpy(pos, &he_cap->he_cap_elem, sizeof(he_cap->he_cap_elem));
+- pos += sizeof(he_cap->he_cap_elem);
++ memcpy(pos, &elem, sizeof(elem));
++ pos += sizeof(elem);
+
+ memcpy(pos, &he_cap->he_mcs_nss_supp, n);
+ pos += n;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 1c72f25f083ea..014c9d88f9479 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1196,6 +1196,7 @@ static struct sk_buff *__mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, g
+ tcp_skb_entail(ssk, skb);
+ return skb;
+ }
++ tcp_skb_tsorted_anchor_cleanup(skb);
+ kfree_skb(skb);
+ return NULL;
+ }
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index ae4488a13c70c..ceb38a7b37cb7 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -556,6 +556,12 @@ static const struct nf_ct_ext_type helper_extend = {
+ .id = NF_CT_EXT_HELPER,
+ };
+
++void nf_ct_set_auto_assign_helper_warned(struct net *net)
++{
++ nf_ct_pernet(net)->auto_assign_helper_warned = true;
++}
++EXPORT_SYMBOL_GPL(nf_ct_set_auto_assign_helper_warned);
++
+ void nf_conntrack_helper_pernet_init(struct net *net)
+ {
+ struct nf_conntrack_net *cnet = nf_ct_pernet(net);
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index d1582b888c0d8..8ec55cd72572e 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -341,8 +341,8 @@ static void tcp_options(const struct sk_buff *skb,
+ if (!ptr)
+ return;
+
+- state->td_scale =
+- state->flags = 0;
++ state->td_scale = 0;
++ state->flags &= IP_CT_TCP_FLAG_BE_LIBERAL;
+
+ while (length > 0) {
+ int opcode=*ptr++;
+@@ -862,6 +862,16 @@ static bool tcp_can_early_drop(const struct nf_conn *ct)
+ return false;
+ }
+
++static void nf_ct_tcp_state_reset(struct ip_ct_tcp_state *state)
++{
++ state->td_end = 0;
++ state->td_maxend = 0;
++ state->td_maxwin = 0;
++ state->td_maxack = 0;
++ state->td_scale = 0;
++ state->flags &= IP_CT_TCP_FLAG_BE_LIBERAL;
++}
++
+ /* Returns verdict for packet, or -1 for invalid. */
+ int nf_conntrack_tcp_packet(struct nf_conn *ct,
+ struct sk_buff *skb,
+@@ -968,8 +978,7 @@ int nf_conntrack_tcp_packet(struct nf_conn *ct,
+ ct->proto.tcp.last_flags &= ~IP_CT_EXP_CHALLENGE_ACK;
+ ct->proto.tcp.seen[ct->proto.tcp.last_dir].flags =
+ ct->proto.tcp.last_flags;
+- memset(&ct->proto.tcp.seen[dir], 0,
+- sizeof(struct ip_ct_tcp_state));
++ nf_ct_tcp_state_reset(&ct->proto.tcp.seen[dir]);
+ break;
+ }
+ ct->proto.tcp.last_index = index;
+diff --git a/net/netfilter/nf_flow_table_inet.c b/net/netfilter/nf_flow_table_inet.c
+index 5c57ade6bd05a..0ccabf3fa6aa3 100644
+--- a/net/netfilter/nf_flow_table_inet.c
++++ b/net/netfilter/nf_flow_table_inet.c
+@@ -6,12 +6,29 @@
+ #include <linux/rhashtable.h>
+ #include <net/netfilter/nf_flow_table.h>
+ #include <net/netfilter/nf_tables.h>
++#include <linux/if_vlan.h>
+
+ static unsigned int
+ nf_flow_offload_inet_hook(void *priv, struct sk_buff *skb,
+ const struct nf_hook_state *state)
+ {
++ struct vlan_ethhdr *veth;
++ __be16 proto;
++
+ switch (skb->protocol) {
++ case htons(ETH_P_8021Q):
++ veth = (struct vlan_ethhdr *)skb_mac_header(skb);
++ proto = veth->h_vlan_encapsulated_proto;
++ break;
++ case htons(ETH_P_PPP_SES):
++ proto = nf_flow_pppoe_proto(skb);
++ break;
++ default:
++ proto = skb->protocol;
++ break;
++ }
++
++ switch (proto) {
+ case htons(ETH_P_IP):
+ return nf_flow_offload_ip_hook(priv, skb, state);
+ case htons(ETH_P_IPV6):
+diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c
+index 889cf88d3dba6..6257d87c3a56d 100644
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -8,8 +8,6 @@
+ #include <linux/ipv6.h>
+ #include <linux/netdevice.h>
+ #include <linux/if_ether.h>
+-#include <linux/if_pppox.h>
+-#include <linux/ppp_defs.h>
+ #include <net/ip.h>
+ #include <net/ipv6.h>
+ #include <net/ip6_route.h>
+@@ -239,22 +237,6 @@ static unsigned int nf_flow_xmit_xfrm(struct sk_buff *skb,
+ return NF_STOLEN;
+ }
+
+-static inline __be16 nf_flow_pppoe_proto(const struct sk_buff *skb)
+-{
+- __be16 proto;
+-
+- proto = *((__be16 *)(skb_mac_header(skb) + ETH_HLEN +
+- sizeof(struct pppoe_hdr)));
+- switch (proto) {
+- case htons(PPP_IP):
+- return htons(ETH_P_IP);
+- case htons(PPP_IPV6):
+- return htons(ETH_P_IPV6);
+- }
+-
+- return 0;
+-}
+-
+ static bool nf_flow_skb_encap_protocol(const struct sk_buff *skb, __be16 proto,
+ u32 *offset)
+ {
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 5adf8bb628a80..9c7472af9e4a1 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -1041,6 +1041,9 @@ static int nft_ct_helper_obj_init(const struct nft_ctx *ctx,
+ if (err < 0)
+ goto err_put_helper;
+
++	/* Avoid the bogus warning; the helper will be assigned after CT init */
++ nf_ct_set_auto_assign_helper_warned(ctx->net);
++
+ return 0;
+
+ err_put_helper:
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 7b344035bfe3f..47a876ccd2881 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -159,6 +159,8 @@ EXPORT_SYMBOL(do_trace_netlink_extack);
+
+ static inline u32 netlink_group_mask(u32 group)
+ {
++ if (group > 32)
++ return 0;
+ return group ? 1 << (group - 1) : 0;
+ }
+
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index c07afff57dd32..4a947c13c813a 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -734,6 +734,57 @@ static bool skb_nfct_cached(struct net *net,
+ }
+
+ #if IS_ENABLED(CONFIG_NF_NAT)
++static void ovs_nat_update_key(struct sw_flow_key *key,
++ const struct sk_buff *skb,
++ enum nf_nat_manip_type maniptype)
++{
++ if (maniptype == NF_NAT_MANIP_SRC) {
++ __be16 src;
++
++ key->ct_state |= OVS_CS_F_SRC_NAT;
++ if (key->eth.type == htons(ETH_P_IP))
++ key->ipv4.addr.src = ip_hdr(skb)->saddr;
++ else if (key->eth.type == htons(ETH_P_IPV6))
++ memcpy(&key->ipv6.addr.src, &ipv6_hdr(skb)->saddr,
++ sizeof(key->ipv6.addr.src));
++ else
++ return;
++
++ if (key->ip.proto == IPPROTO_UDP)
++ src = udp_hdr(skb)->source;
++ else if (key->ip.proto == IPPROTO_TCP)
++ src = tcp_hdr(skb)->source;
++ else if (key->ip.proto == IPPROTO_SCTP)
++ src = sctp_hdr(skb)->source;
++ else
++ return;
++
++ key->tp.src = src;
++ } else {
++ __be16 dst;
++
++ key->ct_state |= OVS_CS_F_DST_NAT;
++ if (key->eth.type == htons(ETH_P_IP))
++ key->ipv4.addr.dst = ip_hdr(skb)->daddr;
++ else if (key->eth.type == htons(ETH_P_IPV6))
++ memcpy(&key->ipv6.addr.dst, &ipv6_hdr(skb)->daddr,
++ sizeof(key->ipv6.addr.dst));
++ else
++ return;
++
++ if (key->ip.proto == IPPROTO_UDP)
++ dst = udp_hdr(skb)->dest;
++ else if (key->ip.proto == IPPROTO_TCP)
++ dst = tcp_hdr(skb)->dest;
++ else if (key->ip.proto == IPPROTO_SCTP)
++ dst = sctp_hdr(skb)->dest;
++ else
++ return;
++
++ key->tp.dst = dst;
++ }
++}
++
+ /* Modelled after nf_nat_ipv[46]_fn().
+ * range is only used for new, uninitialized NAT state.
+ * Returns either NF_ACCEPT or NF_DROP.
+@@ -741,7 +792,7 @@ static bool skb_nfct_cached(struct net *net,
+ static int ovs_ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ enum ip_conntrack_info ctinfo,
+ const struct nf_nat_range2 *range,
+- enum nf_nat_manip_type maniptype)
++ enum nf_nat_manip_type maniptype, struct sw_flow_key *key)
+ {
+ int hooknum, nh_off, err = NF_ACCEPT;
+
+@@ -813,58 +864,11 @@ static int ovs_ct_nat_execute(struct sk_buff *skb, struct nf_conn *ct,
+ push:
+ skb_push_rcsum(skb, nh_off);
+
+- return err;
+-}
+-
+-static void ovs_nat_update_key(struct sw_flow_key *key,
+- const struct sk_buff *skb,
+- enum nf_nat_manip_type maniptype)
+-{
+- if (maniptype == NF_NAT_MANIP_SRC) {
+- __be16 src;
+-
+- key->ct_state |= OVS_CS_F_SRC_NAT;
+- if (key->eth.type == htons(ETH_P_IP))
+- key->ipv4.addr.src = ip_hdr(skb)->saddr;
+- else if (key->eth.type == htons(ETH_P_IPV6))
+- memcpy(&key->ipv6.addr.src, &ipv6_hdr(skb)->saddr,
+- sizeof(key->ipv6.addr.src));
+- else
+- return;
+-
+- if (key->ip.proto == IPPROTO_UDP)
+- src = udp_hdr(skb)->source;
+- else if (key->ip.proto == IPPROTO_TCP)
+- src = tcp_hdr(skb)->source;
+- else if (key->ip.proto == IPPROTO_SCTP)
+- src = sctp_hdr(skb)->source;
+- else
+- return;
+-
+- key->tp.src = src;
+- } else {
+- __be16 dst;
+-
+- key->ct_state |= OVS_CS_F_DST_NAT;
+- if (key->eth.type == htons(ETH_P_IP))
+- key->ipv4.addr.dst = ip_hdr(skb)->daddr;
+- else if (key->eth.type == htons(ETH_P_IPV6))
+- memcpy(&key->ipv6.addr.dst, &ipv6_hdr(skb)->daddr,
+- sizeof(key->ipv6.addr.dst));
+- else
+- return;
+-
+- if (key->ip.proto == IPPROTO_UDP)
+- dst = udp_hdr(skb)->dest;
+- else if (key->ip.proto == IPPROTO_TCP)
+- dst = tcp_hdr(skb)->dest;
+- else if (key->ip.proto == IPPROTO_SCTP)
+- dst = sctp_hdr(skb)->dest;
+- else
+- return;
++ /* Update the flow key if NAT successful. */
++ if (err == NF_ACCEPT)
++ ovs_nat_update_key(key, skb, maniptype);
+
+- key->tp.dst = dst;
+- }
++ return err;
+ }
+
+ /* Returns NF_DROP if the packet should be dropped, NF_ACCEPT otherwise. */
+@@ -906,7 +910,7 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ } else {
+ return NF_ACCEPT; /* Connection is not NATed. */
+ }
+- err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype);
++ err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype, key);
+
+ if (err == NF_ACCEPT && ct->status & IPS_DST_NAT) {
+ if (ct->status & IPS_SRC_NAT) {
+@@ -916,17 +920,13 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key,
+ maniptype = NF_NAT_MANIP_SRC;
+
+ err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range,
+- maniptype);
++ maniptype, key);
+ } else if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) {
+ err = ovs_ct_nat_execute(skb, ct, ctinfo, NULL,
+- NF_NAT_MANIP_SRC);
++ NF_NAT_MANIP_SRC, key);
+ }
+ }
+
+- /* Mark NAT done if successful and update the flow key. */
+- if (err == NF_ACCEPT)
+- ovs_nat_update_key(key, skb, maniptype);
+-
+ return err;
+ }
+ #else /* !CONFIG_NF_NAT */
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index fd1f809e9bc1b..0d677c9c2c805 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2201,8 +2201,8 @@ static int __ovs_nla_put_key(const struct sw_flow_key *swkey,
+ icmpv6_key->icmpv6_type = ntohs(output->tp.src);
+ icmpv6_key->icmpv6_code = ntohs(output->tp.dst);
+
+- if (icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_SOLICITATION ||
+- icmpv6_key->icmpv6_type == NDISC_NEIGHBOUR_ADVERTISEMENT) {
++ if (swkey->tp.src == htons(NDISC_NEIGHBOUR_SOLICITATION) ||
++ swkey->tp.src == htons(NDISC_NEIGHBOUR_ADVERTISEMENT)) {
+ struct ovs_key_nd *nd_key;
+
+ nla = nla_reserve(skb, OVS_KEY_ATTR_ND, sizeof(*nd_key));
+diff --git a/net/rfkill/core.c b/net/rfkill/core.c
+index 5b1927d66f0da..dac4fdc7488a3 100644
+--- a/net/rfkill/core.c
++++ b/net/rfkill/core.c
+@@ -78,6 +78,7 @@ struct rfkill_data {
+ struct mutex mtx;
+ wait_queue_head_t read_wait;
+ bool input_handler;
++ u8 max_size;
+ };
+
+
+@@ -1153,6 +1154,8 @@ static int rfkill_fop_open(struct inode *inode, struct file *file)
+ if (!data)
+ return -ENOMEM;
+
++ data->max_size = RFKILL_EVENT_SIZE_V1;
++
+ INIT_LIST_HEAD(&data->events);
+ mutex_init(&data->mtx);
+ init_waitqueue_head(&data->read_wait);
+@@ -1235,6 +1238,7 @@ static ssize_t rfkill_fop_read(struct file *file, char __user *buf,
+ list);
+
+ sz = min_t(unsigned long, sizeof(ev->ev), count);
++ sz = min_t(unsigned long, sz, data->max_size);
+ ret = sz;
+ if (copy_to_user(buf, &ev->ev, sz))
+ ret = -EFAULT;
+@@ -1249,6 +1253,7 @@ static ssize_t rfkill_fop_read(struct file *file, char __user *buf,
+ static ssize_t rfkill_fop_write(struct file *file, const char __user *buf,
+ size_t count, loff_t *pos)
+ {
++ struct rfkill_data *data = file->private_data;
+ struct rfkill *rfkill;
+ struct rfkill_event_ext ev;
+ int ret;
+@@ -1263,6 +1268,7 @@ static ssize_t rfkill_fop_write(struct file *file, const char __user *buf,
+ * our API version even in a write() call, if it cares.
+ */
+ count = min(count, sizeof(ev));
++ count = min_t(size_t, count, data->max_size);
+ if (copy_from_user(&ev, buf, count))
+ return -EFAULT;
+
+@@ -1322,31 +1328,47 @@ static int rfkill_fop_release(struct inode *inode, struct file *file)
+ return 0;
+ }
+
+-#ifdef CONFIG_RFKILL_INPUT
+ static long rfkill_fop_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+ {
+ struct rfkill_data *data = file->private_data;
++ int ret = -ENOSYS;
++ u32 size;
+
+ if (_IOC_TYPE(cmd) != RFKILL_IOC_MAGIC)
+ return -ENOSYS;
+
+- if (_IOC_NR(cmd) != RFKILL_IOC_NOINPUT)
+- return -ENOSYS;
+-
+ mutex_lock(&data->mtx);
+-
+- if (!data->input_handler) {
+- if (atomic_inc_return(&rfkill_input_disabled) == 1)
+- printk(KERN_DEBUG "rfkill: input handler disabled\n");
+- data->input_handler = true;
++ switch (_IOC_NR(cmd)) {
++#ifdef CONFIG_RFKILL_INPUT
++ case RFKILL_IOC_NOINPUT:
++ if (!data->input_handler) {
++ if (atomic_inc_return(&rfkill_input_disabled) == 1)
++ printk(KERN_DEBUG "rfkill: input handler disabled\n");
++ data->input_handler = true;
++ }
++ ret = 0;
++ break;
++#endif
++ case RFKILL_IOC_MAX_SIZE:
++ if (get_user(size, (__u32 __user *)arg)) {
++ ret = -EFAULT;
++ break;
++ }
++ if (size < RFKILL_EVENT_SIZE_V1 || size > U8_MAX) {
++ ret = -EINVAL;
++ break;
++ }
++ data->max_size = size;
++ ret = 0;
++ break;
++ default:
++ break;
+ }
+-
+ mutex_unlock(&data->mtx);
+
+- return 0;
++ return ret;
+ }
+-#endif
+
+ static const struct file_operations rfkill_fops = {
+ .owner = THIS_MODULE,
+@@ -1355,10 +1377,8 @@ static const struct file_operations rfkill_fops = {
+ .write = rfkill_fop_write,
+ .poll = rfkill_fop_poll,
+ .release = rfkill_fop_release,
+-#ifdef CONFIG_RFKILL_INPUT
+ .unlocked_ioctl = rfkill_fop_ioctl,
+ .compat_ioctl = compat_ptr_ioctl,
+-#endif
+ .llseek = no_llseek,
+ };
+
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 7bd6f8a66a3ef..969e532f77a90 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -777,14 +777,12 @@ void rxrpc_propose_ACK(struct rxrpc_call *, u8, u32, bool, bool,
+ enum rxrpc_propose_ack_trace);
+ void rxrpc_process_call(struct work_struct *);
+
+-static inline void rxrpc_reduce_call_timer(struct rxrpc_call *call,
+- unsigned long expire_at,
+- unsigned long now,
+- enum rxrpc_timer_trace why)
+-{
+- trace_rxrpc_timer(call, why, now);
+- timer_reduce(&call->timer, expire_at);
+-}
++void rxrpc_reduce_call_timer(struct rxrpc_call *call,
++ unsigned long expire_at,
++ unsigned long now,
++ enum rxrpc_timer_trace why);
++
++void rxrpc_delete_call_timer(struct rxrpc_call *call);
+
+ /*
+ * call_object.c
+@@ -808,6 +806,7 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *);
+ bool __rxrpc_queue_call(struct rxrpc_call *);
+ bool rxrpc_queue_call(struct rxrpc_call *);
+ void rxrpc_see_call(struct rxrpc_call *);
++bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op);
+ void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
+ void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace);
+ void rxrpc_cleanup_call(struct rxrpc_call *);
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index df864e6922679..22e05de5d1ca9 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -310,7 +310,7 @@ recheck_state:
+ }
+
+ if (call->state == RXRPC_CALL_COMPLETE) {
+- del_timer_sync(&call->timer);
++ rxrpc_delete_call_timer(call);
+ goto out_put;
+ }
+
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 4eb91d958a48d..043508fd8d8a5 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -53,10 +53,30 @@ static void rxrpc_call_timer_expired(struct timer_list *t)
+
+ if (call->state < RXRPC_CALL_COMPLETE) {
+ trace_rxrpc_timer(call, rxrpc_timer_expired, jiffies);
+- rxrpc_queue_call(call);
++ __rxrpc_queue_call(call);
++ } else {
++ rxrpc_put_call(call, rxrpc_call_put);
++ }
++}
++
++void rxrpc_reduce_call_timer(struct rxrpc_call *call,
++ unsigned long expire_at,
++ unsigned long now,
++ enum rxrpc_timer_trace why)
++{
++ if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) {
++ trace_rxrpc_timer(call, why, now);
++ if (timer_reduce(&call->timer, expire_at))
++ rxrpc_put_call(call, rxrpc_call_put_notimer);
+ }
+ }
+
++void rxrpc_delete_call_timer(struct rxrpc_call *call)
++{
++ if (del_timer_sync(&call->timer))
++ rxrpc_put_call(call, rxrpc_call_put_timer);
++}
++
+ static struct lock_class_key rxrpc_call_user_mutex_lock_class_key;
+
+ /*
+@@ -463,6 +483,17 @@ void rxrpc_see_call(struct rxrpc_call *call)
+ }
+ }
+
++bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
++{
++ const void *here = __builtin_return_address(0);
++ int n = atomic_fetch_add_unless(&call->usage, 1, 0);
++
++ if (n == 0)
++ return false;
++ trace_rxrpc_call(call->debug_id, op, n, here, NULL);
++ return true;
++}
++
+ /*
+ * Note the addition of a ref on a call.
+ */
+@@ -510,8 +541,7 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
+ spin_unlock_bh(&call->lock);
+
+ rxrpc_put_call_slot(call);
+-
+- del_timer_sync(&call->timer);
++ rxrpc_delete_call_timer(call);
+
+ /* Make sure we don't get any more notifications */
+ write_lock_bh(&rx->recvmsg_lock);
+@@ -618,6 +648,8 @@ static void rxrpc_destroy_call(struct work_struct *work)
+ struct rxrpc_call *call = container_of(work, struct rxrpc_call, processor);
+ struct rxrpc_net *rxnet = call->rxnet;
+
++ rxrpc_delete_call_timer(call);
++
+ rxrpc_put_connection(call->conn);
+ rxrpc_put_peer(call->peer);
+ kfree(call->rxtx_buffer);
+@@ -652,8 +684,6 @@ void rxrpc_cleanup_call(struct rxrpc_call *call)
+
+ memset(&call->sock_node, 0xcd, sizeof(call->sock_node));
+
+- del_timer_sync(&call->timer);
+-
+ ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
+ ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
+
+diff --git a/net/rxrpc/server_key.c b/net/rxrpc/server_key.c
+index ead3471307ee5..ee269e0e6ee87 100644
+--- a/net/rxrpc/server_key.c
++++ b/net/rxrpc/server_key.c
+@@ -84,6 +84,9 @@ static int rxrpc_preparse_s(struct key_preparsed_payload *prep)
+
+ prep->payload.data[1] = (struct rxrpc_security *)sec;
+
++ if (!sec->preparse_server_key)
++ return -EINVAL;
++
+ return sec->preparse_server_key(prep);
+ }
+
+@@ -91,7 +94,7 @@ static void rxrpc_free_preparse_s(struct key_preparsed_payload *prep)
+ {
+ const struct rxrpc_security *sec = prep->payload.data[1];
+
+- if (sec)
++ if (sec && sec->free_preparse_server_key)
+ sec->free_preparse_server_key(prep);
+ }
+
+@@ -99,7 +102,7 @@ static void rxrpc_destroy_s(struct key *key)
+ {
+ const struct rxrpc_security *sec = key->payload.data[1];
+
+- if (sec)
++ if (sec && sec->destroy_server_key)
+ sec->destroy_server_key(key);
+ }
+
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index ec19f625863a0..25718acc0ff00 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -605,22 +605,25 @@ static bool tcf_ct_skb_nfct_cached(struct net *net, struct sk_buff *skb,
+ if (!ct)
+ return false;
+ if (!net_eq(net, read_pnet(&ct->ct_net)))
+- return false;
++ goto drop_ct;
+ if (nf_ct_zone(ct)->id != zone_id)
+- return false;
++ goto drop_ct;
+
+ /* Force conntrack entry direction. */
+ if (force && CTINFO2DIR(ctinfo) != IP_CT_DIR_ORIGINAL) {
+ if (nf_ct_is_confirmed(ct))
+ nf_ct_kill(ct);
+
+- nf_ct_put(ct);
+- nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
+-
+- return false;
++ goto drop_ct;
+ }
+
+ return true;
++
++drop_ct:
++ nf_ct_put(ct);
++ nf_ct_set(skb, NULL, IP_CT_UNTRACKED);
++
++ return false;
+ }
+
+ /* Trim the skb to the length specified by the IP/IPv6 header,
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index cc544a97c4afd..7f342bc127358 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -930,6 +930,11 @@ enum sctp_disposition sctp_sf_do_5_1E_ca(struct net *net,
+ if (!sctp_vtag_verify(chunk, asoc))
+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
+
++ /* Set peer label for connection. */
++ if (security_sctp_assoc_established((struct sctp_association *)asoc,
++ chunk->skb))
++ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
++
+ /* Verify that the chunk length for the COOKIE-ACK is OK.
+ * If we don't do this, any bundled chunks may be junked.
+ */
+@@ -945,9 +950,6 @@ enum sctp_disposition sctp_sf_do_5_1E_ca(struct net *net,
+ */
+ sctp_add_cmd_sf(commands, SCTP_CMD_INIT_COUNTER_RESET, SCTP_NULL());
+
+- /* Set peer label for connection. */
+- security_inet_conn_established(ep->base.sk, chunk->skb);
+-
+ /* RFC 2960 5.1 Normal Establishment of an Association
+ *
+ * E) Upon reception of the COOKIE ACK, endpoint "A" will move
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index c83fe618767c4..b36d235d2d6d9 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -1065,7 +1065,9 @@ rpc_task_get_next_xprt(struct rpc_clnt *clnt)
+ static
+ void rpc_task_set_transport(struct rpc_task *task, struct rpc_clnt *clnt)
+ {
+- if (task->tk_xprt)
++ if (task->tk_xprt &&
++ !(test_bit(XPRT_OFFLINE, &task->tk_xprt->state) &&
++ (task->tk_flags & RPC_TASK_MOVEABLE)))
+ return;
+ if (task->tk_flags & RPC_TASK_NO_ROUND_ROBIN)
+ task->tk_xprt = rpc_task_get_first_xprt(clnt);
+@@ -1085,8 +1087,6 @@ void rpc_task_set_client(struct rpc_task *task, struct rpc_clnt *clnt)
+ task->tk_flags |= RPC_TASK_TIMEOUT;
+ if (clnt->cl_noretranstimeo)
+ task->tk_flags |= RPC_TASK_NO_RETRANS_TIMEOUT;
+- if (atomic_read(&clnt->cl_swapper))
+- task->tk_flags |= RPC_TASK_SWAPPER;
+ /* Add to the client's list of all tasks */
+ spin_lock(&clnt->cl_lock);
+ list_add_tail(&task->tk_task, &clnt->cl_tasks);
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index e2c835482791e..ae295844ac55a 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -876,6 +876,15 @@ void rpc_release_calldata(const struct rpc_call_ops *ops, void *calldata)
+ ops->rpc_release(calldata);
+ }
+
++static bool xprt_needs_memalloc(struct rpc_xprt *xprt, struct rpc_task *tk)
++{
++ if (!xprt)
++ return false;
++ if (!atomic_read(&xprt->swapper))
++ return false;
++ return test_bit(XPRT_LOCKED, &xprt->state) && xprt->snd_task == tk;
++}
++
+ /*
+ * This is the RPC `scheduler' (or rather, the finite state machine).
+ */
+@@ -884,6 +893,7 @@ static void __rpc_execute(struct rpc_task *task)
+ struct rpc_wait_queue *queue;
+ int task_is_async = RPC_IS_ASYNC(task);
+ int status = 0;
++ unsigned long pflags = current->flags;
+
+ WARN_ON_ONCE(RPC_IS_QUEUED(task));
+ if (RPC_IS_QUEUED(task))
+@@ -906,6 +916,10 @@ static void __rpc_execute(struct rpc_task *task)
+ }
+ if (!do_action)
+ break;
++ if (RPC_IS_SWAPPER(task) ||
++ xprt_needs_memalloc(task->tk_xprt, task))
++ current->flags |= PF_MEMALLOC;
++
+ trace_rpc_task_run_action(task, do_action);
+ do_action(task);
+
+@@ -943,7 +957,7 @@ static void __rpc_execute(struct rpc_task *task)
+ rpc_clear_running(task);
+ spin_unlock(&queue->lock);
+ if (task_is_async)
+- return;
++ goto out;
+
+ /* sync task: sleep here */
+ trace_rpc_task_sync_sleep(task, task->tk_action);
+@@ -967,6 +981,8 @@ static void __rpc_execute(struct rpc_task *task)
+
+ /* Release all resources associated with the task */
+ rpc_release_task(task);
++out:
++ current_restore_flags(pflags, PF_MEMALLOC);
+ }
+
+ /*
+@@ -1023,8 +1039,8 @@ int rpc_malloc(struct rpc_task *task)
+ struct rpc_buffer *buf;
+ gfp_t gfp = GFP_NOFS;
+
+- if (RPC_IS_SWAPPER(task))
+- gfp = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
++ if (RPC_IS_ASYNC(task))
++ gfp = GFP_NOWAIT | __GFP_NOWARN;
+
+ size += sizeof(struct rpc_buffer);
+ if (size <= RPC_BUFFER_MAXSIZE)
+diff --git a/net/sunrpc/sysfs.c b/net/sunrpc/sysfs.c
+index 05c758da6a92a..9d8a7d9f3e412 100644
+--- a/net/sunrpc/sysfs.c
++++ b/net/sunrpc/sysfs.c
+@@ -97,7 +97,7 @@ static ssize_t rpc_sysfs_xprt_dstaddr_show(struct kobject *kobj,
+ return 0;
+ ret = sprintf(buf, "%s\n", xprt->address_strings[RPC_DISPLAY_ADDR]);
+ xprt_put(xprt);
+- return ret + 1;
++ return ret;
+ }
+
+ static ssize_t rpc_sysfs_xprt_srcaddr_show(struct kobject *kobj,
+@@ -105,33 +105,31 @@ static ssize_t rpc_sysfs_xprt_srcaddr_show(struct kobject *kobj,
+ char *buf)
+ {
+ struct rpc_xprt *xprt = rpc_sysfs_xprt_kobj_get_xprt(kobj);
+- struct sockaddr_storage saddr;
+- struct sock_xprt *sock;
+- ssize_t ret = -1;
++ size_t buflen = PAGE_SIZE;
++ ssize_t ret = -ENOTSOCK;
+
+ if (!xprt || !xprt_connected(xprt)) {
+- xprt_put(xprt);
+- return -ENOTCONN;
++ ret = -ENOTCONN;
++ } else if (xprt->ops->get_srcaddr) {
++ ret = xprt->ops->get_srcaddr(xprt, buf, buflen);
++ if (ret > 0) {
++ if (ret < buflen - 1) {
++ buf[ret] = '\n';
++ ret++;
++ buf[ret] = '\0';
++ }
++ }
+ }
+-
+- sock = container_of(xprt, struct sock_xprt, xprt);
+- mutex_lock(&sock->recv_mutex);
+- if (sock->sock == NULL ||
+- kernel_getsockname(sock->sock, (struct sockaddr *)&saddr) < 0)
+- goto out;
+-
+- ret = sprintf(buf, "%pISc\n", &saddr);
+-out:
+- mutex_unlock(&sock->recv_mutex);
+ xprt_put(xprt);
+- return ret + 1;
++ return ret;
+ }
+
+ static ssize_t rpc_sysfs_xprt_info_show(struct kobject *kobj,
+- struct kobj_attribute *attr,
+- char *buf)
++ struct kobj_attribute *attr, char *buf)
+ {
+ struct rpc_xprt *xprt = rpc_sysfs_xprt_kobj_get_xprt(kobj);
++ unsigned short srcport = 0;
++ size_t buflen = PAGE_SIZE;
+ ssize_t ret;
+
+ if (!xprt || !xprt_connected(xprt)) {
+@@ -139,7 +137,11 @@ static ssize_t rpc_sysfs_xprt_info_show(struct kobject *kobj,
+ return -ENOTCONN;
+ }
+
+- ret = sprintf(buf, "last_used=%lu\ncur_cong=%lu\ncong_win=%lu\n"
++ if (xprt->ops->get_srcport)
++ srcport = xprt->ops->get_srcport(xprt);
++
++ ret = snprintf(buf, buflen,
++ "last_used=%lu\ncur_cong=%lu\ncong_win=%lu\n"
+ "max_num_slots=%u\nmin_num_slots=%u\nnum_reqs=%u\n"
+ "binding_q_len=%u\nsending_q_len=%u\npending_q_len=%u\n"
+ "backlog_q_len=%u\nmain_xprt=%d\nsrc_port=%u\n"
+@@ -147,14 +149,11 @@ static ssize_t rpc_sysfs_xprt_info_show(struct kobject *kobj,
+ xprt->last_used, xprt->cong, xprt->cwnd, xprt->max_reqs,
+ xprt->min_reqs, xprt->num_reqs, xprt->binding.qlen,
+ xprt->sending.qlen, xprt->pending.qlen,
+- xprt->backlog.qlen, xprt->main,
+- (xprt->xprt_class->ident == XPRT_TRANSPORT_TCP) ?
+- get_srcport(xprt) : 0,
++ xprt->backlog.qlen, xprt->main, srcport,
+ atomic_long_read(&xprt->queuelen),
+- (xprt->xprt_class->ident == XPRT_TRANSPORT_TCP) ?
+- xprt->address_strings[RPC_DISPLAY_PORT] : "0");
++ xprt->address_strings[RPC_DISPLAY_PORT]);
+ xprt_put(xprt);
+- return ret + 1;
++ return ret;
+ }
+
+ static ssize_t rpc_sysfs_xprt_state_show(struct kobject *kobj,
+@@ -201,7 +200,7 @@ static ssize_t rpc_sysfs_xprt_state_show(struct kobject *kobj,
+ }
+
+ xprt_put(xprt);
+- return ret + 1;
++ return ret;
+ }
+
+ static ssize_t rpc_sysfs_xprt_switch_info_show(struct kobject *kobj,
+@@ -220,7 +219,7 @@ static ssize_t rpc_sysfs_xprt_switch_info_show(struct kobject *kobj,
+ xprt_switch->xps_nunique_destaddr_xprts,
+ atomic_long_read(&xprt_switch->xps_queuelen));
+ xprt_switch_put(xprt_switch);
+- return ret + 1;
++ return ret;
+ }
+
+ static ssize_t rpc_sysfs_xprt_dstaddr_store(struct kobject *kobj,
+diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
+index a02de2bddb28b..396a74974f60f 100644
+--- a/net/sunrpc/xprt.c
++++ b/net/sunrpc/xprt.c
+@@ -1503,6 +1503,9 @@ bool xprt_prepare_transmit(struct rpc_task *task)
+ return false;
+
+ }
++ if (atomic_read(&xprt->swapper))
++		/* This will be cleared in __rpc_execute */
++ current->flags |= PF_MEMALLOC;
+ return true;
+ }
+
+@@ -2112,7 +2115,14 @@ static void xprt_destroy(struct rpc_xprt *xprt)
+ */
+ wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE);
+
++ /*
++ * xprt_schedule_autodisconnect() can run after XPRT_LOCKED
++ * is cleared. We use ->transport_lock to ensure the mod_timer()
++	 * can only run *before* del_timer_sync(), never after.
++ */
++ spin_lock(&xprt->transport_lock);
+ del_timer_sync(&xprt->timer);
++ spin_unlock(&xprt->transport_lock);
+
+ /*
+ * Destroy sockets etc from the system workqueue so they can
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 42e375dbdadb4..ff78a296fa810 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -235,8 +235,11 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ struct rpcrdma_xprt *r_xprt = container_of(work, struct rpcrdma_xprt,
+ rx_connect_worker.work);
+ struct rpc_xprt *xprt = &r_xprt->rx_xprt;
++ unsigned int pflags = current->flags;
+ int rc;
+
++ if (atomic_read(&xprt->swapper))
++ current->flags |= PF_MEMALLOC;
+ rc = rpcrdma_xprt_connect(r_xprt);
+ xprt_clear_connecting(xprt);
+ if (!rc) {
+@@ -250,6 +253,7 @@ xprt_rdma_connect_worker(struct work_struct *work)
+ rpcrdma_xprt_disconnect(r_xprt);
+ xprt_unlock_connect(xprt, r_xprt);
+ xprt_wake_pending_tasks(xprt, rc);
++ current_restore_flags(pflags, PF_MEMALLOC);
+ }
+
+ /**
+@@ -570,8 +574,8 @@ xprt_rdma_allocate(struct rpc_task *task)
+ gfp_t flags;
+
+ flags = RPCRDMA_DEF_GFP;
+- if (RPC_IS_SWAPPER(task))
+- flags = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN;
++ if (RPC_IS_ASYNC(task))
++ flags = GFP_NOWAIT | __GFP_NOWARN;
+
+ if (!rpcrdma_check_regbuf(r_xprt, req->rl_sendbuf, rqst->rq_callsize,
+ flags))
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 0f39e08ee580e..11eab0f0333b0 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1638,7 +1638,7 @@ static int xs_get_srcport(struct sock_xprt *transport)
+ return port;
+ }
+
+-unsigned short get_srcport(struct rpc_xprt *xprt)
++static unsigned short xs_sock_srcport(struct rpc_xprt *xprt)
+ {
+ struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
+ unsigned short ret = 0;
+@@ -1648,7 +1648,25 @@ unsigned short get_srcport(struct rpc_xprt *xprt)
+ mutex_unlock(&sock->recv_mutex);
+ return ret;
+ }
+-EXPORT_SYMBOL(get_srcport);
++
++static int xs_sock_srcaddr(struct rpc_xprt *xprt, char *buf, size_t buflen)
++{
++ struct sock_xprt *sock = container_of(xprt, struct sock_xprt, xprt);
++ union {
++ struct sockaddr sa;
++ struct sockaddr_storage st;
++ } saddr;
++ int ret = -ENOTCONN;
++
++ mutex_lock(&sock->recv_mutex);
++ if (sock->sock) {
++ ret = kernel_getsockname(sock->sock, &saddr.sa);
++ if (ret >= 0)
++ ret = snprintf(buf, buflen, "%pISc", &saddr.sa);
++ }
++ mutex_unlock(&sock->recv_mutex);
++ return ret;
++}
+
+ static unsigned short xs_next_srcport(struct sock_xprt *transport, unsigned short port)
+ {
+@@ -2052,7 +2070,10 @@ static void xs_udp_setup_socket(struct work_struct *work)
+ struct rpc_xprt *xprt = &transport->xprt;
+ struct socket *sock;
+ int status = -EIO;
++ unsigned int pflags = current->flags;
+
++ if (atomic_read(&xprt->swapper))
++ current->flags |= PF_MEMALLOC;
+ sock = xs_create_sock(xprt, transport,
+ xs_addr(xprt)->sa_family, SOCK_DGRAM,
+ IPPROTO_UDP, false);
+@@ -2072,6 +2093,7 @@ out:
+ xprt_clear_connecting(xprt);
+ xprt_unlock_connect(xprt, transport);
+ xprt_wake_pending_tasks(xprt, status);
++ current_restore_flags(pflags, PF_MEMALLOC);
+ }
+
+ /**
+@@ -2231,11 +2253,19 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ struct socket *sock = transport->sock;
+ struct rpc_xprt *xprt = &transport->xprt;
+ int status;
++ unsigned int pflags = current->flags;
++
++ if (atomic_read(&xprt->swapper))
++ current->flags |= PF_MEMALLOC;
+
+- if (!sock) {
+- sock = xs_create_sock(xprt, transport,
+- xs_addr(xprt)->sa_family, SOCK_STREAM,
+- IPPROTO_TCP, true);
++ if (xprt_connected(xprt))
++ goto out;
++ if (test_and_clear_bit(XPRT_SOCK_CONNECT_SENT,
++ &transport->sock_state) ||
++ !sock) {
++ xs_reset_transport(transport);
++ sock = xs_create_sock(xprt, transport, xs_addr(xprt)->sa_family,
++ SOCK_STREAM, IPPROTO_TCP, true);
+ if (IS_ERR(sock)) {
+ xprt_wake_pending_tasks(xprt, PTR_ERR(sock));
+ goto out;
+@@ -2259,6 +2289,7 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ fallthrough;
+ case -EINPROGRESS:
+ /* SYN_SENT! */
++ set_bit(XPRT_SOCK_CONNECT_SENT, &transport->sock_state);
+ if (xprt->reestablish_timeout < XS_TCP_INIT_REEST_TO)
+ xprt->reestablish_timeout = XS_TCP_INIT_REEST_TO;
+ fallthrough;
+@@ -2296,6 +2327,7 @@ out:
+ xprt_clear_connecting(xprt);
+ out_unlock:
+ xprt_unlock_connect(xprt, transport);
++ current_restore_flags(pflags, PF_MEMALLOC);
+ }
+
+ /**
+@@ -2319,13 +2351,9 @@ static void xs_connect(struct rpc_xprt *xprt, struct rpc_task *task)
+
+ WARN_ON_ONCE(!xprt_lock_connect(xprt, task, transport));
+
+- if (transport->sock != NULL && !xprt_connecting(xprt)) {
++ if (transport->sock != NULL) {
+ dprintk("RPC: xs_connect delayed xprt %p for %lu "
+- "seconds\n",
+- xprt, xprt->reestablish_timeout / HZ);
+-
+- /* Start by resetting any existing state */
+- xs_reset_transport(transport);
++ "seconds\n", xprt, xprt->reestablish_timeout / HZ);
+
+ delay = xprt_reconnect_delay(xprt);
+ xprt_reconnect_backoff(xprt, XS_TCP_INIT_REEST_TO);
+@@ -2621,6 +2649,8 @@ static const struct rpc_xprt_ops xs_udp_ops = {
+ .rpcbind = rpcb_getport_async,
+ .set_port = xs_set_port,
+ .connect = xs_connect,
++ .get_srcaddr = xs_sock_srcaddr,
++ .get_srcport = xs_sock_srcport,
+ .buf_alloc = rpc_malloc,
+ .buf_free = rpc_free,
+ .send_request = xs_udp_send_request,
+@@ -2643,6 +2673,8 @@ static const struct rpc_xprt_ops xs_tcp_ops = {
+ .rpcbind = rpcb_getport_async,
+ .set_port = xs_set_port,
+ .connect = xs_connect,
++ .get_srcaddr = xs_sock_srcaddr,
++ .get_srcport = xs_sock_srcport,
+ .buf_alloc = rpc_malloc,
+ .buf_free = rpc_free,
+ .prepare_request = xs_stream_prepare_request,
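
get_srcport() loses its EXPORT_SYMBOL and both source-endpoint helpers become rpc_xprt_ops (get_srcaddr/get_srcport) instead of ad-hoc exports; xs_sock_srcaddr() reads the bound local address with kernel_getsockname() and formats it with the %pISc printk extension, which handles IPv4 and IPv6 sockaddrs alike. A sketch of that lookup, assuming only a struct socket:

    #include <linux/net.h>          /* kernel_getsockname() */
    #include <linux/socket.h>

    static int sketch_srcaddr(struct socket *sock, char *buf, size_t buflen)
    {
            union {
                    struct sockaddr sa;
                    struct sockaddr_storage st;     /* room for any family */
            } saddr;
            int ret;

            ret = kernel_getsockname(sock, &saddr.sa);
            if (ret >= 0)   /* returns the address length on success */
                    ret = snprintf(buf, buflen, "%pISc", &saddr.sa);
            return ret;     /* negative errno if the socket is gone */
    }
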
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 7545321c3440b..17f8c523e33b0 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2852,7 +2852,8 @@ static void tipc_sk_retry_connect(struct sock *sk, struct sk_buff_head *list)
+
+ /* Try again later if dest link is congested */
+ if (tsk->cong_link_cnt) {
+- sk_reset_timer(sk, &sk->sk_timer, msecs_to_jiffies(100));
++ sk_reset_timer(sk, &sk->sk_timer,
++ jiffies + msecs_to_jiffies(100));
+ return;
+ }
+ /* Prepare SYN for retransmit */
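
The tipc one-liner is a timer-API slip: sk_reset_timer() takes an absolute expiry in jiffies, so passing a bare msecs_to_jiffies(100) armed the timer at roughly 100 jiffies after boot, i.e. in the past. Side by side:

    /* wrong: msecs_to_jiffies() yields a delta, not a deadline */
    sk_reset_timer(sk, &sk->sk_timer, msecs_to_jiffies(100));

    /* right: deadline = now + delta */
    sk_reset_timer(sk, &sk->sk_timer, jiffies + msecs_to_jiffies(100));
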
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index c19569819866e..1e7ed5829ed51 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2084,7 +2084,7 @@ static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other
+ if (ousk->oob_skb)
+ consume_skb(ousk->oob_skb);
+
+- ousk->oob_skb = skb;
++ WRITE_ONCE(ousk->oob_skb, skb);
+
+ scm_stat_add(other, skb);
+ skb_queue_tail(&other->sk_receive_queue, skb);
+@@ -2602,9 +2602,8 @@ static int unix_stream_recv_urg(struct unix_stream_read_state *state)
+
+ oob_skb = u->oob_skb;
+
+- if (!(state->flags & MSG_PEEK)) {
+- u->oob_skb = NULL;
+- }
++ if (!(state->flags & MSG_PEEK))
++ WRITE_ONCE(u->oob_skb, NULL);
+
+ unix_state_unlock(sk);
+
+@@ -2639,7 +2638,7 @@ static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
+ skb = NULL;
+ } else if (sock_flag(sk, SOCK_URGINLINE)) {
+ if (!(flags & MSG_PEEK)) {
+- u->oob_skb = NULL;
++ WRITE_ONCE(u->oob_skb, NULL);
+ consume_skb(skb);
+ }
+ } else if (!(flags & MSG_PEEK)) {
+@@ -3094,11 +3093,10 @@ static int unix_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ case SIOCATMARK:
+ {
+ struct sk_buff *skb;
+- struct unix_sock *u = unix_sk(sk);
+ int answ = 0;
+
+ skb = skb_peek(&sk->sk_receive_queue);
+- if (skb && skb == u->oob_skb)
++ if (skb && skb == READ_ONCE(unix_sk(sk)->oob_skb))
+ answ = 1;
+ err = put_user(answ, (int __user *)arg);
+ }
+@@ -3139,6 +3137,10 @@ static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wa
+ mask |= EPOLLIN | EPOLLRDNORM;
+ if (sk_is_readable(sk))
+ mask |= EPOLLIN | EPOLLRDNORM;
++#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
++ if (READ_ONCE(unix_sk(sk)->oob_skb))
++ mask |= EPOLLPRI;
++#endif
+
+ /* Connection-based need to check for termination and startup */
+ if ((sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) &&
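
The af_unix hunks convert every store to and lockless load of unix_sk(sk)->oob_skb into WRITE_ONCE()/READ_ONCE() pairs, which lets unix_poll() report EPOLLPRI and SIOCATMARK test the pointer without taking the state lock. The idiom in general form, as a sketch:

    #include <linux/compiler.h>     /* READ_ONCE / WRITE_ONCE */

    /* writer, under the owning lock: marked stores so concurrent
     * lockless readers never observe a torn pointer */
    WRITE_ONCE(u->oob_skb, skb);    /* publish */
    WRITE_ONCE(u->oob_skb, NULL);   /* retract */

    /* reader, no lock held (poll/ioctl): one marked load, used once */
    if (READ_ONCE(u->oob_skb))
            mask |= EPOLLPRI;
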
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 5afc194a58bbd..ba1c8cc0c4671 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -622,6 +622,13 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
+ INIT_WORK(&vsock->event_work, virtio_transport_event_work);
+ INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
+
++ if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_SEQPACKET))
++ vsock->seqpacket_allow = true;
++
++ vdev->priv = vsock;
++
++ virtio_device_ready(vdev);
++
+ mutex_lock(&vsock->tx_lock);
+ vsock->tx_run = true;
+ mutex_unlock(&vsock->tx_lock);
+@@ -636,10 +643,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
+ vsock->event_run = true;
+ mutex_unlock(&vsock->event_lock);
+
+- if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_SEQPACKET))
+- vsock->seqpacket_allow = true;
+-
+- vdev->priv = vsock;
+ rcu_assign_pointer(the_virtio_vsock, vsock);
+
+ mutex_unlock(&the_virtio_vsock_mutex);
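
The virtio-vsock fix reorders probe so that feature negotiation, vdev->priv, and virtio_device_ready() all happen before the rx/tx/event workers are allowed to run; otherwise a fast device could raise callbacks into a half-initialised driver. The safe shape, with a hypothetical struct my_vsock and start_workers():

    /* sketch of the ordering the hunk enforces */
    static int sketch_probe(struct virtio_device *vdev, struct my_vsock *v)
    {
            if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_SEQPACKET))
                    v->seqpacket_allow = true;      /* negotiate first */

            vdev->priv = v;                 /* callbacks may look this up */
            virtio_device_ready(vdev);      /* sets DRIVER_OK */

            start_workers(v);               /* only now touch the vqs */
            return 0;
    }
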
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index 3583354a7d7fe..3a171828638b1 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -1765,10 +1765,15 @@ void x25_kill_by_neigh(struct x25_neigh *nb)
+
+ write_lock_bh(&x25_list_lock);
+
+- sk_for_each(s, &x25_list)
+- if (x25_sk(s)->neighbour == nb)
++ sk_for_each(s, &x25_list) {
++ if (x25_sk(s)->neighbour == nb) {
++ write_unlock_bh(&x25_list_lock);
++ lock_sock(s);
+ x25_disconnect(s, ENETUNREACH, 0, 0);
+-
++ release_sock(s);
++ write_lock_bh(&x25_list_lock);
++ }
++ }
+ write_unlock_bh(&x25_list_lock);
+
+ /* Remove any related forwards */
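
x25_disconnect() ends up taking the socket lock, which may sleep and so cannot nest inside the BH-disabling write_lock on x25_list_lock; the rewritten loop drops the list lock around each matching socket, locks and disconnects it, then re-takes the list lock to continue the walk. The same flow, annotated:

    write_lock_bh(&x25_list_lock);
    sk_for_each(s, &x25_list) {
            if (x25_sk(s)->neighbour == nb) {
                    write_unlock_bh(&x25_list_lock); /* can't sleep under it */
                    lock_sock(s);                    /* may sleep */
                    x25_disconnect(s, ENETUNREACH, 0, 0);
                    release_sock(s);
                    write_lock_bh(&x25_list_lock);   /* resume the walk */
            }
    }
    write_unlock_bh(&x25_list_lock);
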
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 28ef3f4465ae9..ac343cd8ff3f6 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -403,18 +403,8 @@ EXPORT_SYMBOL(xsk_tx_peek_release_desc_batch);
+ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
+ {
+ struct net_device *dev = xs->dev;
+- int err;
+-
+- rcu_read_lock();
+- err = dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
+- rcu_read_unlock();
+-
+- return err;
+-}
+
+-static int xsk_zc_xmit(struct xdp_sock *xs)
+-{
+- return xsk_wakeup(xs, XDP_WAKEUP_TX);
++ return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
+ }
+
+ static void xsk_destruct_skb(struct sk_buff *skb)
+@@ -533,6 +523,12 @@ static int xsk_generic_xmit(struct sock *sk)
+
+ mutex_lock(&xs->mutex);
+
++ /* Since we dropped the RCU read lock, the socket state might have changed. */
++ if (unlikely(!xsk_is_bound(xs))) {
++ err = -ENXIO;
++ goto out;
++ }
++
+ if (xs->queue_id >= xs->dev->real_num_tx_queues)
+ goto out;
+
+@@ -596,16 +592,26 @@ out:
+ return err;
+ }
+
+-static int __xsk_sendmsg(struct sock *sk)
++static int xsk_xmit(struct sock *sk)
+ {
+ struct xdp_sock *xs = xdp_sk(sk);
++ int ret;
+
+ if (unlikely(!(xs->dev->flags & IFF_UP)))
+ return -ENETDOWN;
+ if (unlikely(!xs->tx))
+ return -ENOBUFS;
+
+- return xs->zc ? xsk_zc_xmit(xs) : xsk_generic_xmit(sk);
++ if (xs->zc)
++ return xsk_wakeup(xs, XDP_WAKEUP_TX);
++
++ /* Drop the RCU lock since the SKB path might sleep. */
++ rcu_read_unlock();
++ ret = xsk_generic_xmit(sk);
++ /* Reacquire RCU lock before going into common code. */
++ rcu_read_lock();
++
++ return ret;
+ }
+
+ static bool xsk_no_wakeup(struct sock *sk)
+@@ -619,7 +625,7 @@ static bool xsk_no_wakeup(struct sock *sk)
+ #endif
+ }
+
+-static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
++static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+ {
+ bool need_wait = !(m->msg_flags & MSG_DONTWAIT);
+ struct sock *sk = sock->sk;
+@@ -639,11 +645,22 @@ static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
+
+ pool = xs->pool;
+ if (pool->cached_need_wakeup & XDP_WAKEUP_TX)
+- return __xsk_sendmsg(sk);
++ return xsk_xmit(sk);
+ return 0;
+ }
+
+-static int xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int flags)
++static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
++{
++ int ret;
++
++ rcu_read_lock();
++ ret = __xsk_sendmsg(sock, m, total_len);
++ rcu_read_unlock();
++
++ return ret;
++}
++
++static int __xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int flags)
+ {
+ bool need_wait = !(flags & MSG_DONTWAIT);
+ struct sock *sk = sock->sk;
+@@ -669,6 +686,17 @@ static int xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int fl
+ return 0;
+ }
+
++static int xsk_recvmsg(struct socket *sock, struct msghdr *m, size_t len, int flags)
++{
++ int ret;
++
++ rcu_read_lock();
++ ret = __xsk_recvmsg(sock, m, len, flags);
++ rcu_read_unlock();
++
++ return ret;
++}
++
+ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ struct poll_table_struct *wait)
+ {
+@@ -679,8 +707,11 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+
+ sock_poll_wait(file, sock, wait);
+
+- if (unlikely(!xsk_is_bound(xs)))
++ rcu_read_lock();
++ if (unlikely(!xsk_is_bound(xs))) {
++ rcu_read_unlock();
+ return mask;
++ }
+
+ pool = xs->pool;
+
+@@ -689,7 +720,7 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ xsk_wakeup(xs, pool->cached_need_wakeup);
+ else
+ /* Poll needs to drive Tx also in copy mode */
+- __xsk_sendmsg(sk);
++ xsk_xmit(sk);
+ }
+
+ if (xs->rx && !xskq_prod_is_empty(xs->rx))
+@@ -697,6 +728,7 @@ static __poll_t xsk_poll(struct file *file, struct socket *sock,
+ if (xs->tx && xsk_tx_writeable(xs))
+ mask |= EPOLLOUT | EPOLLWRNORM;
+
++ rcu_read_unlock();
+ return mask;
+ }
+
+@@ -728,7 +760,6 @@ static void xsk_unbind_dev(struct xdp_sock *xs)
+
+ /* Wait for driver to stop using the xdp socket. */
+ xp_del_xsk(xs->pool, xs);
+- xs->dev = NULL;
+ synchronize_net();
+ dev_put(dev);
+ }
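
The xsk series brackets sendmsg/recvmsg/poll in rcu_read_lock(), re-checks xsk_is_bound() after the copy path has to drop RCU to sleep, and stops clearing xs->dev in unbind so RCU readers keep a consistent view until synchronize_net() returns. The wrapper shape, as a sketch with a hypothetical __sketch_sendmsg():

    #include <linux/rcupdate.h>

    static int sketch_sendmsg(struct socket *sock, struct msghdr *m, size_t len)
    {
            int ret;

            rcu_read_lock();        /* netdev/bind state stable in here */
            ret = __sketch_sendmsg(sock, m, len);
            rcu_read_unlock();
            return ret;
    }

The generic xmit path releases and re-acquires the lock around the sleeping skb code, which is why it must re-validate the bound state under xs->mutex afterwards.
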
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index fd39bb660ebcd..0202a90b65e3a 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -584,9 +584,13 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+ u32 nb_entries1 = 0, nb_entries2;
+
+ if (unlikely(pool->dma_need_sync)) {
++ struct xdp_buff *buff;
++
+ /* Slow path */
+- *xdp = xp_alloc(pool);
+- return !!*xdp;
++ buff = xp_alloc(pool);
++ if (buff)
++ *xdp = buff;
++ return !!buff;
+ }
+
+ if (unlikely(pool->free_list_cnt)) {
+diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
+index aa50864e4415a..9f3446af50ce2 100644
+--- a/samples/bpf/xdpsock_user.c
++++ b/samples/bpf/xdpsock_user.c
+@@ -1984,15 +1984,15 @@ int main(int argc, char **argv)
+
+ setlocale(LC_ALL, "");
+
++ prev_time = get_nsecs();
++ start_time = prev_time;
++
+ if (!opt_quiet) {
+ ret = pthread_create(&pt, NULL, poller, NULL);
+ if (ret)
+ exit_with_error(ret);
+ }
+
+- prev_time = get_nsecs();
+- start_time = prev_time;
+-
+ /* Configure sched priority for better wake-up accuracy */
+ memset(&schparam, 0, sizeof(schparam));
+ schparam.sched_priority = opt_schprio;
+diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
+index 7a15910d21718..8859fc1935428 100644
+--- a/samples/landlock/sandboxer.c
++++ b/samples/landlock/sandboxer.c
+@@ -134,6 +134,7 @@ static int populate_ruleset(
+ ret = 0;
+
+ out_free_name:
++ free(path_list);
+ free(env_path_name);
+ return ret;
+ }
+diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
+index 803ba75610766..a0ea1d26e6b2e 100755
+--- a/scripts/atomic/fallbacks/read_acquire
++++ b/scripts/atomic/fallbacks/read_acquire
+@@ -2,6 +2,15 @@ cat <<EOF
+ static __always_inline ${ret}
+ arch_${atomic}_read_acquire(const ${atomic}_t *v)
+ {
+- return smp_load_acquire(&(v)->counter);
++ ${int} ret;
++
++ if (__native_word(${atomic}_t)) {
++ ret = smp_load_acquire(&(v)->counter);
++ } else {
++ ret = arch_${atomic}_read(v);
++ __atomic_acquire_fence();
++ }
++
++ return ret;
+ }
+ EOF
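
Both atomic fallback templates (this one and set_release below) now use smp_load_acquire()/smp_store_release() only when the atomic fits a native machine word, since those primitives are single-copy atomic only at native sizes; otherwise they emit a plain arch read/set next to an explicit acquire/release fence. For atomic_t on a typical 64-bit build, the template above expands roughly to:

    /* illustrative expansion with ${ret}=int, ${atomic}=atomic */
    static __always_inline int
    arch_atomic_read_acquire(const atomic_t *v)
    {
            int ret;

            if (__native_word(atomic_t)) {          /* 4- or 8-byte object */
                    ret = smp_load_acquire(&(v)->counter);
            } else {                                /* e.g. atomic64_t on 32-bit */
                    ret = arch_atomic_read(v);
                    __atomic_acquire_fence();
            }
            return ret;
    }
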
+diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
+index 86ede759f24ea..05cdb7f42477a 100755
+--- a/scripts/atomic/fallbacks/set_release
++++ b/scripts/atomic/fallbacks/set_release
+@@ -2,6 +2,11 @@ cat <<EOF
+ static __always_inline void
+ arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
+ {
+- smp_store_release(&(v)->counter, i);
++ if (__native_word(${atomic}_t)) {
++ smp_store_release(&(v)->counter, i);
++ } else {
++ __atomic_release_fence();
++ arch_${atomic}_set(v, i);
++ }
+ }
+ EOF
+diff --git a/scripts/dtc/Makefile b/scripts/dtc/Makefile
+index 95aaf7431bffa..1cba78e1dce68 100644
+--- a/scripts/dtc/Makefile
++++ b/scripts/dtc/Makefile
+@@ -29,7 +29,7 @@ dtc-objs += yamltree.o
+ # To include <yaml.h> installed in a non-default path
+ HOSTCFLAGS_yamltree.o := $(shell pkg-config --cflags yaml-0.1)
+ # To link libyaml installed in a non-default path
+-HOSTLDLIBS_dtc := $(shell pkg-config yaml-0.1 --libs)
++HOSTLDLIBS_dtc := $(shell pkg-config --libs yaml-0.1)
+ endif
+
+ # Generated files need one more search path to include headers in source tree
+diff --git a/scripts/gcc-plugins/stackleak_plugin.c b/scripts/gcc-plugins/stackleak_plugin.c
+index e9db7dcb3e5f4..b04aa8e91a41f 100644
+--- a/scripts/gcc-plugins/stackleak_plugin.c
++++ b/scripts/gcc-plugins/stackleak_plugin.c
+@@ -429,6 +429,23 @@ static unsigned int stackleak_cleanup_execute(void)
+ return 0;
+ }
+
++/*
++ * STRING_CST may or may not be NUL terminated:
++ * https://gcc.gnu.org/onlinedocs/gccint/Constant-expressions.html
++ */
++static inline bool string_equal(tree node, const char *string, int length)
++{
++ if (TREE_STRING_LENGTH(node) < length)
++ return false;
++ if (TREE_STRING_LENGTH(node) > length + 1)
++ return false;
++ if (TREE_STRING_LENGTH(node) == length + 1 &&
++ TREE_STRING_POINTER(node)[length] != '\0')
++ return false;
++ return !memcmp(TREE_STRING_POINTER(node), string, length);
++}
++#define STRING_EQUAL(node, str) string_equal(node, str, strlen(str))
++
+ static bool stackleak_gate(void)
+ {
+ tree section;
+@@ -438,13 +455,13 @@ static bool stackleak_gate(void)
+ if (section && TREE_VALUE(section)) {
+ section = TREE_VALUE(TREE_VALUE(section));
+
+- if (!strncmp(TREE_STRING_POINTER(section), ".init.text", 10))
++ if (STRING_EQUAL(section, ".init.text"))
+ return false;
+- if (!strncmp(TREE_STRING_POINTER(section), ".devinit.text", 13))
++ if (STRING_EQUAL(section, ".devinit.text"))
+ return false;
+- if (!strncmp(TREE_STRING_POINTER(section), ".cpuinit.text", 13))
++ if (STRING_EQUAL(section, ".cpuinit.text"))
+ return false;
+- if (!strncmp(TREE_STRING_POINTER(section), ".meminit.text", 13))
++ if (STRING_EQUAL(section, ".meminit.text"))
+ return false;
+ }
+
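
GCC's STRING_CST nodes may or may not include the trailing NUL, and the old strncmp() test was a prefix match, so a longer section name sharing the prefix would also have matched. string_equal() pins the length to exactly strlen(str), or strlen(str)+1 with a terminating NUL. A userspace analogue that compiles standalone:

    #include <stdbool.h>
    #include <string.h>

    /* buf of buflen bytes may or may not carry a trailing NUL,
     * like GCC's STRING_CST */
    static bool buf_string_equal(const char *buf, size_t buflen,
                                 const char *string)
    {
            size_t length = strlen(string);

            if (buflen < length)
                    return false;
            if (buflen > length + 1)
                    return false;
            if (buflen == length + 1 && buf[length] != '\0')
                    return false;
            return memcmp(buf, string, length) == 0;
    }
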
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 6bfa332179140..e04ae56931e2e 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -669,7 +669,7 @@ static void handle_modversion(const struct module *mod,
+ unsigned int crc;
+
+ if (sym->st_shndx == SHN_UNDEF) {
+- warn("EXPORT symbol \"%s\" [%s%s] version ...\n"
++ warn("EXPORT symbol \"%s\" [%s%s] version generation failed, symbol will not be versioned.\n"
+ "Is \"%s\" prototyped in <asm/asm-prototypes.h>?\n",
+ symname, mod->name, mod->is_vmlinux ? "" : ".ko",
+ symname);
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 08f907382c618..7d87772f0ce68 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -86,7 +86,7 @@ static int __init evm_set_fixmode(char *str)
+ else
+ pr_err("invalid \"%s\" mode", str);
+
+- return 0;
++ return 1;
+ }
+ __setup("evm=", evm_set_fixmode);
+
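
This evm= hunk (and the identical TOMOYO_loader=/TOMOYO_trigger= fixes further down) corrects the __setup() return convention: the handler must return 1 to mark the parameter as consumed; returning 0 makes the early-param code treat the string as unrecognised and pass it along to init. Skeleton:

    #include <linux/init.h>

    static int __init example_setup(char *str)
    {
            /* ... parse str ... */
            return 1;       /* handled; 0 would leak "example=..." to init */
    }
    __setup("example=", example_setup);
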
+diff --git a/security/keys/keyctl_pkey.c b/security/keys/keyctl_pkey.c
+index 5de0d599a2748..97bc27bbf0797 100644
+--- a/security/keys/keyctl_pkey.c
++++ b/security/keys/keyctl_pkey.c
+@@ -135,15 +135,23 @@ static int keyctl_pkey_params_get_2(const struct keyctl_pkey_params __user *_par
+
+ switch (op) {
+ case KEYCTL_PKEY_ENCRYPT:
++ if (uparams.in_len > info.max_dec_size ||
++ uparams.out_len > info.max_enc_size)
++ return -EINVAL;
++ break;
+ case KEYCTL_PKEY_DECRYPT:
+ if (uparams.in_len > info.max_enc_size ||
+ uparams.out_len > info.max_dec_size)
+ return -EINVAL;
+ break;
+ case KEYCTL_PKEY_SIGN:
++ if (uparams.in_len > info.max_data_size ||
++ uparams.out_len > info.max_sig_size)
++ return -EINVAL;
++ break;
+ case KEYCTL_PKEY_VERIFY:
+- if (uparams.in_len > info.max_sig_size ||
+- uparams.out_len > info.max_data_size)
++ if (uparams.in_len > info.max_data_size ||
++ uparams.in2_len > info.max_sig_size)
+ return -EINVAL;
+ break;
+ default:
+@@ -151,7 +159,7 @@ static int keyctl_pkey_params_get_2(const struct keyctl_pkey_params __user *_par
+ }
+
+ params->in_len = uparams.in_len;
+- params->out_len = uparams.out_len;
++ params->out_len = uparams.out_len; /* Note: same as in2_len */
+ return 0;
+ }
+
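
The keyctl hunk stops funneling all four public-key ops through one shared bounds check: each op now validates against the limits that actually apply to it, and for VERIFY the out_len slot aliases in2_len (they share a union in the uapi struct) and holds the signature length. Paraphrased as a sketch; the info.* limits come from the key type's query op:

    switch (op) {
    case KEYCTL_PKEY_ENCRYPT:       /* plaintext in, ciphertext out */
            ok = in_len <= info.max_dec_size && out_len <= info.max_enc_size;
            break;
    case KEYCTL_PKEY_DECRYPT:       /* ciphertext in, plaintext out */
            ok = in_len <= info.max_enc_size && out_len <= info.max_dec_size;
            break;
    case KEYCTL_PKEY_SIGN:          /* data in, signature out */
            ok = in_len <= info.max_data_size && out_len <= info.max_sig_size;
            break;
    case KEYCTL_PKEY_VERIFY:        /* data in, signature in (in2) */
            ok = in_len <= info.max_data_size && in2_len <= info.max_sig_size;
            break;
    }
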
+diff --git a/security/keys/trusted-keys/trusted_core.c b/security/keys/trusted-keys/trusted_core.c
+index d5c891d8d3534..9b9d3ef79cbe3 100644
+--- a/security/keys/trusted-keys/trusted_core.c
++++ b/security/keys/trusted-keys/trusted_core.c
+@@ -27,10 +27,10 @@ module_param_named(source, trusted_key_source, charp, 0);
+ MODULE_PARM_DESC(source, "Select trusted keys source (tpm or tee)");
+
+ static const struct trusted_key_source trusted_key_sources[] = {
+-#if defined(CONFIG_TCG_TPM)
++#if IS_REACHABLE(CONFIG_TCG_TPM)
+ { "tpm", &trusted_key_tpm_ops },
+ #endif
+-#if defined(CONFIG_TEE)
++#if IS_REACHABLE(CONFIG_TEE)
+ { "tee", &trusted_key_tee_ops },
+ #endif
+ };
+@@ -351,7 +351,7 @@ static int __init init_trusted(void)
+
+ static void __exit cleanup_trusted(void)
+ {
+- static_call(trusted_key_exit)();
++ static_call_cond(trusted_key_exit)();
+ }
+
+ late_initcall(init_trusted);
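
Two hardening tweaks in trusted_core.c: IS_REACHABLE() excludes a backend whose subsystem is built as a module while trusted keys are built in (defined(CONFIG_TCG_TPM) would still be true with the ops symbol unlinkable), and static_call_cond() makes the exit path a no-op when the static call was never filled in, instead of jumping through NULL after a failed init. A sketch of the conditional static call, assuming a void hook:

    #include <linux/static_call.h>

    static void sketch_exit_impl(void) { }  /* shape donor for typeof() */
    DEFINE_STATIC_CALL_NULL(sketch_exit, sketch_exit_impl);

    static void sketch_cleanup(void)
    {
            /* plain static_call() would fault if never updated;
             * the _cond form compiles to nothing in that case */
            static_call_cond(sketch_exit)();
    }
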
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index 32396962f04d6..7e27ce394020d 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -192,7 +192,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ return PTR_ERR(ruleset);
+
+ /* Creates anonymous FD referring to the ruleset. */
+- ruleset_fd = anon_inode_getfd("landlock-ruleset", &ruleset_fops,
++ ruleset_fd = anon_inode_getfd("[landlock-ruleset]", &ruleset_fops,
+ ruleset, O_RDWR | O_CLOEXEC);
+ if (ruleset_fd < 0)
+ landlock_put_ruleset(ruleset);
+diff --git a/security/security.c b/security/security.c
+index 22261d79f3333..b7cf5cbfdc677 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -884,9 +884,22 @@ int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc)
+ return call_int_hook(fs_context_dup, 0, fc, src_fc);
+ }
+
+-int security_fs_context_parse_param(struct fs_context *fc, struct fs_parameter *param)
++int security_fs_context_parse_param(struct fs_context *fc,
++ struct fs_parameter *param)
+ {
+- return call_int_hook(fs_context_parse_param, -ENOPARAM, fc, param);
++ struct security_hook_list *hp;
++ int trc;
++ int rc = -ENOPARAM;
++
++ hlist_for_each_entry(hp, &security_hook_heads.fs_context_parse_param,
++ list) {
++ trc = hp->hook.fs_context_parse_param(fc, param);
++ if (trc == 0)
++ rc = 0;
++ else if (trc != -ENOPARAM)
++ return trc;
++ }
++ return rc;
+ }
+
+ int security_sb_alloc(struct super_block *sb)
+@@ -2391,6 +2404,13 @@ void security_sctp_sk_clone(struct sctp_association *asoc, struct sock *sk,
+ }
+ EXPORT_SYMBOL(security_sctp_sk_clone);
+
++int security_sctp_assoc_established(struct sctp_association *asoc,
++ struct sk_buff *skb)
++{
++ return call_int_hook(sctp_assoc_established, 0, asoc, skb);
++}
++EXPORT_SYMBOL(security_sctp_assoc_established);
++
+ #endif /* CONFIG_SECURITY_NETWORK */
+
+ #ifdef CONFIG_SECURITY_INFINIBAND
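
security_fs_context_parse_param() can no longer use call_int_hook(): with stacked LSMs, one module may consume a mount parameter while the others report -ENOPARAM, and only a real error should abort. The aggregation rule, restated:

    /* any hard error -> return it; any 0 -> consumed;
     * all -ENOPARAM  -> nobody recognised the option */
    int rc = -ENOPARAM;

    hlist_for_each_entry(hp, &security_hook_heads.fs_context_parse_param,
                         list) {
            int trc = hp->hook.fs_context_parse_param(fc, param);

            if (trc == 0)
                    rc = 0;
            else if (trc != -ENOPARAM)
                    return trc;
    }
    return rc;

The matching SELinux change below drops its "return 1" special case, since the infrastructure no longer wants that signal.
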
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 5b6895e4fc29e..ea725891e566e 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -342,6 +342,10 @@ static void inode_free_security(struct inode *inode)
+
+ struct selinux_mnt_opts {
+ const char *fscontext, *context, *rootcontext, *defcontext;
++ u32 fscontext_sid;
++ u32 context_sid;
++ u32 rootcontext_sid;
++ u32 defcontext_sid;
+ };
+
+ static void selinux_free_mnt_opts(void *mnt_opts)
+@@ -479,7 +483,7 @@ static int selinux_is_sblabel_mnt(struct super_block *sb)
+
+ static int sb_check_xattr_support(struct super_block *sb)
+ {
+- struct superblock_security_struct *sbsec = sb->s_security;
++ struct superblock_security_struct *sbsec = selinux_superblock(sb);
+ struct dentry *root = sb->s_root;
+ struct inode *root_inode = d_backing_inode(root);
+ u32 sid;
+@@ -598,15 +602,14 @@ static int bad_option(struct superblock_security_struct *sbsec, char flag,
+ return 0;
+ }
+
+-static int parse_sid(struct super_block *sb, const char *s, u32 *sid,
+- gfp_t gfp)
++static int parse_sid(struct super_block *sb, const char *s, u32 *sid)
+ {
+ int rc = security_context_str_to_sid(&selinux_state, s,
+- sid, gfp);
++ sid, GFP_KERNEL);
+ if (rc)
+ pr_warn("SELinux: security_context_str_to_sid"
+ "(%s) failed for (dev %s, type %s) errno=%d\n",
+- s, sb->s_id, sb->s_type->name, rc);
++ s, sb ? sb->s_id : "?", sb ? sb->s_type->name : "?", rc);
+ return rc;
+ }
+
+@@ -673,8 +676,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ */
+ if (opts) {
+ if (opts->fscontext) {
+- rc = parse_sid(sb, opts->fscontext, &fscontext_sid,
+- GFP_KERNEL);
++ rc = parse_sid(sb, opts->fscontext, &fscontext_sid);
+ if (rc)
+ goto out;
+ if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid,
+@@ -683,8 +685,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ sbsec->flags |= FSCONTEXT_MNT;
+ }
+ if (opts->context) {
+- rc = parse_sid(sb, opts->context, &context_sid,
+- GFP_KERNEL);
++ rc = parse_sid(sb, opts->context, &context_sid);
+ if (rc)
+ goto out;
+ if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid,
+@@ -693,8 +694,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ sbsec->flags |= CONTEXT_MNT;
+ }
+ if (opts->rootcontext) {
+- rc = parse_sid(sb, opts->rootcontext, &rootcontext_sid,
+- GFP_KERNEL);
++ rc = parse_sid(sb, opts->rootcontext, &rootcontext_sid);
+ if (rc)
+ goto out;
+ if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid,
+@@ -703,8 +703,7 @@ static int selinux_set_mnt_opts(struct super_block *sb,
+ sbsec->flags |= ROOTCONTEXT_MNT;
+ }
+ if (opts->defcontext) {
+- rc = parse_sid(sb, opts->defcontext, &defcontext_sid,
+- GFP_KERNEL);
++ rc = parse_sid(sb, opts->defcontext, &defcontext_sid);
+ if (rc)
+ goto out;
+ if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid,
+@@ -996,21 +995,29 @@ static int selinux_add_opt(int token, const char *s, void **mnt_opts)
+ if (opts->context || opts->defcontext)
+ goto err;
+ opts->context = s;
++ if (selinux_initialized(&selinux_state))
++ parse_sid(NULL, s, &opts->context_sid);
+ break;
+ case Opt_fscontext:
+ if (opts->fscontext)
+ goto err;
+ opts->fscontext = s;
++ if (selinux_initialized(&selinux_state))
++ parse_sid(NULL, s, &opts->fscontext_sid);
+ break;
+ case Opt_rootcontext:
+ if (opts->rootcontext)
+ goto err;
+ opts->rootcontext = s;
++ if (selinux_initialized(&selinux_state))
++ parse_sid(NULL, s, &opts->rootcontext_sid);
+ break;
+ case Opt_defcontext:
+ if (opts->context || opts->defcontext)
+ goto err;
+ opts->defcontext = s;
++ if (selinux_initialized(&selinux_state))
++ parse_sid(NULL, s, &opts->defcontext_sid);
+ break;
+ }
+
+@@ -2647,9 +2654,7 @@ free_opt:
+ static int selinux_sb_mnt_opts_compat(struct super_block *sb, void *mnt_opts)
+ {
+ struct selinux_mnt_opts *opts = mnt_opts;
+- struct superblock_security_struct *sbsec = sb->s_security;
+- u32 sid;
+- int rc;
++ struct superblock_security_struct *sbsec = selinux_superblock(sb);
+
+ /*
+ * Superblock not initialized (i.e. no options) - reject if any
+@@ -2666,34 +2671,36 @@ static int selinux_sb_mnt_opts_compat(struct super_block *sb, void *mnt_opts)
+ return (sbsec->flags & SE_MNTMASK) ? 1 : 0;
+
+ if (opts->fscontext) {
+- rc = parse_sid(sb, opts->fscontext, &sid, GFP_NOWAIT);
+- if (rc)
++ if (opts->fscontext_sid == SECSID_NULL)
+ return 1;
+- if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid, sid))
++ else if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid,
++ opts->fscontext_sid))
+ return 1;
+ }
+ if (opts->context) {
+- rc = parse_sid(sb, opts->context, &sid, GFP_NOWAIT);
+- if (rc)
++ if (opts->context_sid == SECSID_NULL)
+ return 1;
+- if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid, sid))
++ else if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid,
++ opts->context_sid))
+ return 1;
+ }
+ if (opts->rootcontext) {
+- struct inode_security_struct *root_isec;
+-
+- root_isec = backing_inode_security(sb->s_root);
+- rc = parse_sid(sb, opts->rootcontext, &sid, GFP_NOWAIT);
+- if (rc)
+- return 1;
+- if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid, sid))
++ if (opts->rootcontext_sid == SECSID_NULL)
+ return 1;
++ else {
++ struct inode_security_struct *root_isec;
++
++ root_isec = backing_inode_security(sb->s_root);
++ if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid,
++ opts->rootcontext_sid))
++ return 1;
++ }
+ }
+ if (opts->defcontext) {
+- rc = parse_sid(sb, opts->defcontext, &sid, GFP_NOWAIT);
+- if (rc)
++ if (opts->defcontext_sid == SECSID_NULL)
+ return 1;
+- if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid, sid))
++ else if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid,
++ opts->defcontext_sid))
+ return 1;
+ }
+ return 0;
+@@ -2713,14 +2720,14 @@ static int selinux_sb_remount(struct super_block *sb, void *mnt_opts)
+ return 0;
+
+ if (opts->fscontext) {
+- rc = parse_sid(sb, opts->fscontext, &sid, GFP_KERNEL);
++ rc = parse_sid(sb, opts->fscontext, &sid);
+ if (rc)
+ return rc;
+ if (bad_option(sbsec, FSCONTEXT_MNT, sbsec->sid, sid))
+ goto out_bad_option;
+ }
+ if (opts->context) {
+- rc = parse_sid(sb, opts->context, &sid, GFP_KERNEL);
++ rc = parse_sid(sb, opts->context, &sid);
+ if (rc)
+ return rc;
+ if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid, sid))
+@@ -2729,14 +2736,14 @@ static int selinux_sb_remount(struct super_block *sb, void *mnt_opts)
+ if (opts->rootcontext) {
+ struct inode_security_struct *root_isec;
+ root_isec = backing_inode_security(sb->s_root);
+- rc = parse_sid(sb, opts->rootcontext, &sid, GFP_KERNEL);
++ rc = parse_sid(sb, opts->rootcontext, &sid);
+ if (rc)
+ return rc;
+ if (bad_option(sbsec, ROOTCONTEXT_MNT, root_isec->sid, sid))
+ goto out_bad_option;
+ }
+ if (opts->defcontext) {
+- rc = parse_sid(sb, opts->defcontext, &sid, GFP_KERNEL);
++ rc = parse_sid(sb, opts->defcontext, &sid);
+ if (rc)
+ return rc;
+ if (bad_option(sbsec, DEFCONTEXT_MNT, sbsec->def_sid, sid))
+@@ -2860,10 +2867,9 @@ static int selinux_fs_context_parse_param(struct fs_context *fc,
+ return opt;
+
+ rc = selinux_add_opt(opt, param->string, &fc->security);
+- if (!rc) {
++ if (!rc)
+ param->string = NULL;
+- rc = 1;
+- }
++
+ return rc;
+ }
+
+@@ -3745,6 +3751,12 @@ static int selinux_file_ioctl(struct file *file, unsigned int cmd,
+ CAP_OPT_NONE, true);
+ break;
+
++ case FIOCLEX:
++ case FIONCLEX:
++ if (!selinux_policycap_ioctl_skip_cloexec())
++ error = ioctl_has_perm(cred, file, FILE__IOCTL, (u16) cmd);
++ break;
++
+ /* default case assumes that the command will go
+ * to the file's ioctl() function.
+ */
+@@ -5299,37 +5311,38 @@ static void selinux_sock_graft(struct sock *sk, struct socket *parent)
+ sksec->sclass = isec->sclass;
+ }
+
+-/* Called whenever SCTP receives an INIT chunk. This happens when an incoming
+- * connect(2), sctp_connectx(3) or sctp_sendmsg(3) (with no association
+- * already present).
++/*
++ * Determines peer_secid for the asoc and updates socket's peer label
++ * if it's the first association on the socket.
+ */
+-static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+- struct sk_buff *skb)
++static int selinux_sctp_process_new_assoc(struct sctp_association *asoc,
++ struct sk_buff *skb)
+ {
+- struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++ struct sock *sk = asoc->base.sk;
++ u16 family = sk->sk_family;
++ struct sk_security_struct *sksec = sk->sk_security;
+ struct common_audit_data ad;
+ struct lsm_network_audit net = {0,};
+- u8 peerlbl_active;
+- u32 peer_sid = SECINITSID_UNLABELED;
+- u32 conn_sid;
+- int err = 0;
++ int err;
+
+- if (!selinux_policycap_extsockclass())
+- return 0;
++ /* handle mapped IPv4 packets arriving via IPv6 sockets */
++ if (family == PF_INET6 && skb->protocol == htons(ETH_P_IP))
++ family = PF_INET;
+
+- peerlbl_active = selinux_peerlbl_enabled();
++ if (selinux_peerlbl_enabled()) {
++ asoc->peer_secid = SECSID_NULL;
+
+- if (peerlbl_active) {
+ /* This will return peer_sid = SECSID_NULL if there are
+ * no peer labels, see security_net_peersid_resolve().
+ */
+- err = selinux_skb_peerlbl_sid(skb, asoc->base.sk->sk_family,
+- &peer_sid);
++ err = selinux_skb_peerlbl_sid(skb, family, &asoc->peer_secid);
+ if (err)
+ return err;
+
+- if (peer_sid == SECSID_NULL)
+- peer_sid = SECINITSID_UNLABELED;
++ if (asoc->peer_secid == SECSID_NULL)
++ asoc->peer_secid = SECINITSID_UNLABELED;
++ } else {
++ asoc->peer_secid = SECINITSID_UNLABELED;
+ }
+
+ if (sksec->sctp_assoc_state == SCTP_ASSOC_UNSET) {
+@@ -5340,8 +5353,8 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ * then it is approved by policy and used as the primary
+ * peer SID for getpeercon(3).
+ */
+- sksec->peer_sid = peer_sid;
+- } else if (sksec->peer_sid != peer_sid) {
++ sksec->peer_sid = asoc->peer_secid;
++ } else if (sksec->peer_sid != asoc->peer_secid) {
+ /* Other association peer SIDs are checked to enforce
+ * consistency among the peer SIDs.
+ */
+@@ -5349,11 +5362,32 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ ad.u.net = &net;
+ ad.u.net->sk = asoc->base.sk;
+ err = avc_has_perm(&selinux_state,
+- sksec->peer_sid, peer_sid, sksec->sclass,
+- SCTP_SOCKET__ASSOCIATION, &ad);
++ sksec->peer_sid, asoc->peer_secid,
++ sksec->sclass, SCTP_SOCKET__ASSOCIATION,
++ &ad);
+ if (err)
+ return err;
+ }
++ return 0;
++}
++
++/* Called whenever SCTP receives an INIT or COOKIE ECHO chunk. This
++ * happens on an incoming connect(2), sctp_connectx(3) or
++ * sctp_sendmsg(3) (with no association already present).
++ */
++static int selinux_sctp_assoc_request(struct sctp_association *asoc,
++ struct sk_buff *skb)
++{
++ struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++ u32 conn_sid;
++ int err;
++
++ if (!selinux_policycap_extsockclass())
++ return 0;
++
++ err = selinux_sctp_process_new_assoc(asoc, skb);
++ if (err)
++ return err;
+
+ /* Compute the MLS component for the connection and store
+ * the information in asoc. This will be used by SCTP TCP type
+@@ -5361,17 +5395,36 @@ static int selinux_sctp_assoc_request(struct sctp_association *asoc,
+ * socket to be generated. selinux_sctp_sk_clone() will then
+ * plug this into the new socket.
+ */
+- err = selinux_conn_sid(sksec->sid, peer_sid, &conn_sid);
++ err = selinux_conn_sid(sksec->sid, asoc->peer_secid, &conn_sid);
+ if (err)
+ return err;
+
+ asoc->secid = conn_sid;
+- asoc->peer_secid = peer_sid;
+
+ /* Set any NetLabel labels including CIPSO/CALIPSO options. */
+ return selinux_netlbl_sctp_assoc_request(asoc, skb);
+ }
+
++/* Called when SCTP receives a COOKIE ACK chunk as the final
++ * response to an association request (initiated by us).
++ */
++static int selinux_sctp_assoc_established(struct sctp_association *asoc,
++ struct sk_buff *skb)
++{
++ struct sk_security_struct *sksec = asoc->base.sk->sk_security;
++
++ if (!selinux_policycap_extsockclass())
++ return 0;
++
++ /* Inherit secid from the parent socket - this will be picked up
++ * by selinux_sctp_sk_clone() if the association gets peeled off
++ * into a new socket.
++ */
++ asoc->secid = sksec->sid;
++
++ return selinux_sctp_process_new_assoc(asoc, skb);
++}
++
+ /* Check if sctp IPv4/IPv6 addresses are valid for binding or connecting
+ * based on their @optname.
+ */
+@@ -7192,6 +7245,7 @@ static struct security_hook_list selinux_hooks[] __lsm_ro_after_init = {
+ LSM_HOOK_INIT(sctp_assoc_request, selinux_sctp_assoc_request),
+ LSM_HOOK_INIT(sctp_sk_clone, selinux_sctp_sk_clone),
+ LSM_HOOK_INIT(sctp_bind_connect, selinux_sctp_bind_connect),
++ LSM_HOOK_INIT(sctp_assoc_established, selinux_sctp_assoc_established),
+ LSM_HOOK_INIT(inet_conn_request, selinux_inet_conn_request),
+ LSM_HOOK_INIT(inet_csk_clone, selinux_inet_csk_clone),
+ LSM_HOOK_INIT(inet_conn_established, selinux_inet_conn_established),
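
The broad SELinux rework above caches each context string's SID in selinux_mnt_opts at option-parse time, so selinux_sb_mnt_opts_compat(), which can run where blocking is not allowed, compares precomputed SIDs instead of calling parse_sid() with GFP_NOWAIT and failing spuriously under memory pressure. The split, with a hypothetical policy_loaded() standing in for selinux_initialized():

    /* parse time (may sleep): resolve once, cache the SID */
    opts->context = s;
    if (policy_loaded())
            parse_sid(NULL, s, &opts->context_sid); /* SECSID_NULL on failure */

    /* check time (atomic context ok): compare the cached value only */
    if (opts->context) {
            if (opts->context_sid == SECSID_NULL)   /* never resolved */
                    return 1;                       /* incompatible */
            if (bad_option(sbsec, CONTEXT_MNT, sbsec->mntpoint_sid,
                           opts->context_sid))
                    return 1;
    }
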
+diff --git a/security/selinux/include/policycap.h b/security/selinux/include/policycap.h
+index 2ec038efbb03c..a9e572ca4fd96 100644
+--- a/security/selinux/include/policycap.h
++++ b/security/selinux/include/policycap.h
+@@ -11,6 +11,7 @@ enum {
+ POLICYDB_CAPABILITY_CGROUPSECLABEL,
+ POLICYDB_CAPABILITY_NNP_NOSUID_TRANSITION,
+ POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS,
++ POLICYDB_CAPABILITY_IOCTL_SKIP_CLOEXEC,
+ __POLICYDB_CAPABILITY_MAX
+ };
+ #define POLICYDB_CAPABILITY_MAX (__POLICYDB_CAPABILITY_MAX - 1)
+diff --git a/security/selinux/include/policycap_names.h b/security/selinux/include/policycap_names.h
+index b89289f092c93..ebd64afe1defd 100644
+--- a/security/selinux/include/policycap_names.h
++++ b/security/selinux/include/policycap_names.h
+@@ -12,7 +12,8 @@ const char *selinux_policycap_names[__POLICYDB_CAPABILITY_MAX] = {
+ "always_check_network",
+ "cgroup_seclabel",
+ "nnp_nosuid_transition",
+- "genfs_seclabel_symlinks"
++ "genfs_seclabel_symlinks",
++ "ioctl_skip_cloexec"
+ };
+
+ #endif /* _SELINUX_POLICYCAP_NAMES_H_ */
+diff --git a/security/selinux/include/security.h b/security/selinux/include/security.h
+index ac0ece01305a6..c0d966020ebdd 100644
+--- a/security/selinux/include/security.h
++++ b/security/selinux/include/security.h
+@@ -219,6 +219,13 @@ static inline bool selinux_policycap_genfs_seclabel_symlinks(void)
+ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_GENFS_SECLABEL_SYMLINKS]);
+ }
+
++static inline bool selinux_policycap_ioctl_skip_cloexec(void)
++{
++ struct selinux_state *state = &selinux_state;
++
++ return READ_ONCE(state->policycap[POLICYDB_CAPABILITY_IOCTL_SKIP_CLOEXEC]);
++}
++
+ struct selinux_policy_convert_data;
+
+ struct selinux_load_state {
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index e4cd7cb856f37..f2f6203e0fff5 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -2127,6 +2127,8 @@ static int sel_fill_super(struct super_block *sb, struct fs_context *fc)
+ }
+
+ ret = sel_make_avc_files(dentry);
++ if (ret)
++ goto err;
+
+ dentry = sel_make_dir(sb->s_root, "ss", &fsi->last_ino);
+ if (IS_ERR(dentry)) {
+diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c
+index 90697317895fb..c576832febc67 100644
+--- a/security/selinux/xfrm.c
++++ b/security/selinux/xfrm.c
+@@ -347,7 +347,7 @@ int selinux_xfrm_state_alloc_acquire(struct xfrm_state *x,
+ int rc;
+ struct xfrm_sec_ctx *ctx;
+ char *ctx_str = NULL;
+- int str_len;
++ u32 str_len;
+
+ if (!polsec)
+ return 0;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 14b279cc75c96..6207762dbdb13 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -2510,7 +2510,7 @@ static int smk_ipv6_check(struct smack_known *subject,
+ #ifdef CONFIG_AUDIT
+ smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net);
+ ad.a.u.net->family = PF_INET6;
+- ad.a.u.net->dport = ntohs(address->sin6_port);
++ ad.a.u.net->dport = address->sin6_port;
+ if (act == SMK_RECEIVING)
+ ad.a.u.net->v6info.saddr = address->sin6_addr;
+ else
+diff --git a/security/tomoyo/load_policy.c b/security/tomoyo/load_policy.c
+index 3445ae6fd4794..363b65be87ab7 100644
+--- a/security/tomoyo/load_policy.c
++++ b/security/tomoyo/load_policy.c
+@@ -24,7 +24,7 @@ static const char *tomoyo_loader;
+ static int __init tomoyo_loader_setup(char *str)
+ {
+ tomoyo_loader = str;
+- return 0;
++ return 1;
+ }
+
+ __setup("TOMOYO_loader=", tomoyo_loader_setup);
+@@ -64,7 +64,7 @@ static const char *tomoyo_trigger;
+ static int __init tomoyo_trigger_setup(char *str)
+ {
+ tomoyo_trigger = str;
+- return 0;
++ return 1;
+ }
+
+ __setup("TOMOYO_trigger=", tomoyo_trigger_setup);
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index edd9849210f2d..977d54320a5ca 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -970,6 +970,7 @@ int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream,
+
+ runtime->status->state = SNDRV_PCM_STATE_OPEN;
+ mutex_init(&runtime->buffer_mutex);
++ atomic_set(&runtime->buffer_accessing, 0);
+
+ substream->runtime = runtime;
+ substream->private_data = pcm->private_data;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index a40a35e51fad7..1fc7c50ffa625 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1906,11 +1906,9 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ if (avail >= runtime->twake)
+ break;
+ snd_pcm_stream_unlock_irq(substream);
+- mutex_unlock(&runtime->buffer_mutex);
+
+ tout = schedule_timeout(wait_time);
+
+- mutex_lock(&runtime->buffer_mutex);
+ snd_pcm_stream_lock_irq(substream);
+ set_current_state(TASK_INTERRUPTIBLE);
+ switch (runtime->status->state) {
+@@ -2221,7 +2219,6 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+
+ nonblock = !!(substream->f_flags & O_NONBLOCK);
+
+- mutex_lock(&runtime->buffer_mutex);
+ snd_pcm_stream_lock_irq(substream);
+ err = pcm_accessible_state(runtime);
+ if (err < 0)
+@@ -2276,6 +2273,10 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ err = -EINVAL;
+ goto _end_unlock;
+ }
++ if (!atomic_inc_unless_negative(&runtime->buffer_accessing)) {
++ err = -EBUSY;
++ goto _end_unlock;
++ }
+ snd_pcm_stream_unlock_irq(substream);
+ if (!is_playback)
+ snd_pcm_dma_buffer_sync(substream, SNDRV_DMA_SYNC_CPU);
+@@ -2284,6 +2285,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ if (is_playback)
+ snd_pcm_dma_buffer_sync(substream, SNDRV_DMA_SYNC_DEVICE);
+ snd_pcm_stream_lock_irq(substream);
++ atomic_dec(&runtime->buffer_accessing);
+ if (err < 0)
+ goto _end_unlock;
+ err = pcm_accessible_state(runtime);
+@@ -2313,7 +2315,6 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ if (xfer > 0 && err >= 0)
+ snd_pcm_update_state(substream, runtime);
+ snd_pcm_stream_unlock_irq(substream);
+- mutex_unlock(&runtime->buffer_mutex);
+ return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
+ }
+ EXPORT_SYMBOL(__snd_pcm_lib_xfer);
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 704fdc9ebf911..4adaee62ef333 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -685,6 +685,24 @@ static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
+ return 0;
+ }
+
++/* acquire buffer_mutex; if it's in r/w operation, return -EBUSY, otherwise
++ * block the further r/w operations
++ */
++static int snd_pcm_buffer_access_lock(struct snd_pcm_runtime *runtime)
++{
++ if (!atomic_dec_unless_positive(&runtime->buffer_accessing))
++ return -EBUSY;
++ mutex_lock(&runtime->buffer_mutex);
++ return 0; /* keep buffer_mutex, unlocked by below */
++}
++
++/* release buffer_mutex and clear r/w access flag */
++static void snd_pcm_buffer_access_unlock(struct snd_pcm_runtime *runtime)
++{
++ mutex_unlock(&runtime->buffer_mutex);
++ atomic_inc(&runtime->buffer_accessing);
++}
++
+ #if IS_ENABLED(CONFIG_SND_PCM_OSS)
+ #define is_oss_stream(substream) ((substream)->oss.oss)
+ #else
+@@ -695,14 +713,16 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params)
+ {
+ struct snd_pcm_runtime *runtime;
+- int err = 0, usecs;
++ int err, usecs;
+ unsigned int bits;
+ snd_pcm_uframes_t frames;
+
+ if (PCM_RUNTIME_CHECK(substream))
+ return -ENXIO;
+ runtime = substream->runtime;
+- mutex_lock(&runtime->buffer_mutex);
++ err = snd_pcm_buffer_access_lock(runtime);
++ if (err < 0)
++ return err;
+ snd_pcm_stream_lock_irq(substream);
+ switch (runtime->status->state) {
+ case SNDRV_PCM_STATE_OPEN:
+@@ -820,7 +840,7 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ snd_pcm_lib_free_pages(substream);
+ }
+ unlock:
+- mutex_unlock(&runtime->buffer_mutex);
++ snd_pcm_buffer_access_unlock(runtime);
+ return err;
+ }
+
+@@ -865,7 +885,9 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ if (PCM_RUNTIME_CHECK(substream))
+ return -ENXIO;
+ runtime = substream->runtime;
+- mutex_lock(&runtime->buffer_mutex);
++ result = snd_pcm_buffer_access_lock(runtime);
++ if (result < 0)
++ return result;
+ snd_pcm_stream_lock_irq(substream);
+ switch (runtime->status->state) {
+ case SNDRV_PCM_STATE_SETUP:
+@@ -884,7 +906,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+ cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
+ unlock:
+- mutex_unlock(&runtime->buffer_mutex);
++ snd_pcm_buffer_access_unlock(runtime);
+ return result;
+ }
+
+@@ -1369,12 +1391,15 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
+
+ /* Guarantee the group members won't change during non-atomic action */
+ down_read(&snd_pcm_link_rwsem);
+- mutex_lock(&substream->runtime->buffer_mutex);
++ res = snd_pcm_buffer_access_lock(substream->runtime);
++ if (res < 0)
++ goto unlock;
+ if (snd_pcm_stream_linked(substream))
+ res = snd_pcm_action_group(ops, substream, state, false);
+ else
+ res = snd_pcm_action_single(ops, substream, state);
+- mutex_unlock(&substream->runtime->buffer_mutex);
++ snd_pcm_buffer_access_unlock(substream->runtime);
++ unlock:
+ up_read(&snd_pcm_link_rwsem);
+ return res;
+ }
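
Together with the buffer_accessing init in sound/core/pcm.c above, these hunks replace holding buffer_mutex across a whole read/write (which could deadlock against hw_params/hw_free) with an atomic gate: data paths enter with atomic_inc_unless_negative(), and reconfiguration claims the gate by driving the counter negative with atomic_dec_unless_positive(), still taking buffer_mutex to serialise against other config paths. The two-sided gate in isolation:

    #include <linux/atomic.h>

    static atomic_t accessing = ATOMIC_INIT(0); /* >=0: N data paths inside */

    static bool data_enter(void)   /* fails while a config holds the gate */
    {
            return atomic_inc_unless_negative(&accessing);
    }
    static void data_exit(void)    { atomic_dec(&accessing); }

    static bool config_enter(void) /* 0 -> -1 claims; >0 means -EBUSY */
    {
            return atomic_dec_unless_positive(&accessing);
    }
    static void config_exit(void)  { atomic_inc(&accessing); }
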
+diff --git a/sound/firewire/fcp.c b/sound/firewire/fcp.c
+index bbfbebf4affbc..df44dd5dc4b22 100644
+--- a/sound/firewire/fcp.c
++++ b/sound/firewire/fcp.c
+@@ -240,9 +240,7 @@ int fcp_avc_transaction(struct fw_unit *unit,
+ t.response_match_bytes = response_match_bytes;
+ t.state = STATE_PENDING;
+ init_waitqueue_head(&t.wait);
+-
+- if (*(const u8 *)command == 0x00 || *(const u8 *)command == 0x03)
+- t.deferrable = true;
++ t.deferrable = (*(const u8 *)command == 0x00 || *(const u8 *)command == 0x03);
+
+ spin_lock_irq(&transactions_lock);
+ list_add_tail(&t.list, &transactions);
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 4fb90ceb4053b..70fd8b13938ed 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -11,6 +11,7 @@
+ #include <sound/core.h>
+ #include <sound/intel-dsp-config.h>
+ #include <sound/intel-nhlt.h>
++#include <sound/soc-acpi.h>
+
+ static int dsp_driver;
+
+@@ -31,7 +32,12 @@ struct config_entry {
+ u16 device;
+ u8 acpi_hid[ACPI_ID_LEN];
+ const struct dmi_system_id *dmi_table;
+- u8 codec_hid[ACPI_ID_LEN];
++ const struct snd_soc_acpi_codecs *codec_hid;
++};
++
++static const struct snd_soc_acpi_codecs __maybe_unused essx_83x6 = {
++ .num_codecs = 3,
++ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
+ };
+
+ /*
+@@ -77,7 +83,7 @@ static const struct config_entry config_table[] = {
+ {
+ .flags = FLAG_SOF,
+ .device = 0x5a98,
+- .codec_hid = "ESSX8336",
++ .codec_hid = &essx_83x6,
+ },
+ #endif
+ #if IS_ENABLED(CONFIG_SND_SOC_INTEL_APL)
+@@ -163,7 +169,7 @@ static const struct config_entry config_table[] = {
+ {
+ .flags = FLAG_SOF,
+ .device = 0x3198,
+- .codec_hid = "ESSX8336",
++ .codec_hid = &essx_83x6,
+ },
+ #endif
+
+@@ -193,6 +199,11 @@ static const struct config_entry config_table[] = {
+ {}
+ }
+ },
++ {
++ .flags = FLAG_SOF,
++ .device = 0x09dc8,
++ .codec_hid = &essx_83x6,
++ },
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = 0x9dc8,
+@@ -251,7 +262,7 @@ static const struct config_entry config_table[] = {
+ {
+ .flags = FLAG_SOF,
+ .device = 0x02c8,
+- .codec_hid = "ESSX8336",
++ .codec_hid = &essx_83x6,
+ },
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+@@ -280,7 +291,7 @@ static const struct config_entry config_table[] = {
+ {
+ .flags = FLAG_SOF,
+ .device = 0x06c8,
+- .codec_hid = "ESSX8336",
++ .codec_hid = &essx_83x6,
+ },
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+@@ -327,7 +338,7 @@ static const struct config_entry config_table[] = {
+ {
+ .flags = FLAG_SOF,
+ .device = 0x4dc8,
+- .codec_hid = "ESSX8336",
++ .codec_hid = &essx_83x6,
+ },
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC,
+@@ -353,7 +364,7 @@ static const struct config_entry config_table[] = {
+ {
+ .flags = FLAG_SOF,
+ .device = 0xa0c8,
+- .codec_hid = "ESSX8336",
++ .codec_hid = &essx_83x6,
+ },
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+@@ -414,8 +425,15 @@ static const struct config_entry *snd_intel_dsp_find_config
+ continue;
+ if (table->dmi_table && !dmi_check_system(table->dmi_table))
+ continue;
+- if (table->codec_hid[0] && !acpi_dev_present(table->codec_hid, NULL, -1))
+- continue;
++ if (table->codec_hid) {
++ int i;
++
++ for (i = 0; i < table->codec_hid->num_codecs; i++)
++ if (acpi_dev_present(table->codec_hid->codecs[i], NULL, -1))
++ break;
++ if (i == table->codec_hid->num_codecs)
++ continue;
++ }
+ return table;
+ }
+ return NULL;
+diff --git a/sound/hda/intel-nhlt.c b/sound/hda/intel-nhlt.c
+index 128476aa7c61d..4063da3782833 100644
+--- a/sound/hda/intel-nhlt.c
++++ b/sound/hda/intel-nhlt.c
+@@ -130,6 +130,28 @@ bool intel_nhlt_has_endpoint_type(struct nhlt_acpi_table *nhlt, u8 link_type)
+ }
+ EXPORT_SYMBOL(intel_nhlt_has_endpoint_type);
+
++int intel_nhlt_ssp_endpoint_mask(struct nhlt_acpi_table *nhlt, u8 device_type)
++{
++ struct nhlt_endpoint *epnt;
++ int ssp_mask = 0;
++ int i;
++
++ if (!nhlt || (device_type != NHLT_DEVICE_BT && device_type != NHLT_DEVICE_I2S))
++ return 0;
++
++ epnt = (struct nhlt_endpoint *)nhlt->desc;
++ for (i = 0; i < nhlt->endpoint_count; i++) {
++ if (epnt->linktype == NHLT_LINK_SSP && epnt->device_type == device_type) {
++ /* for SSP the virtual bus id is the SSP port */
++ ssp_mask |= BIT(epnt->virtual_bus_id);
++ }
++ epnt = (struct nhlt_endpoint *)((u8 *)epnt + epnt->length);
++ }
++
++ return ssp_mask;
++}
++EXPORT_SYMBOL(intel_nhlt_ssp_endpoint_mask);
++
+ static struct nhlt_specific_cfg *
+ nhlt_get_specific_cfg(struct device *dev, struct nhlt_fmt *fmt, u8 num_ch,
+ u32 rate, u8 vbps, u8 bps)
+diff --git a/sound/isa/cs423x/cs4236.c b/sound/isa/cs423x/cs4236.c
+index b6bdebd9ef275..10112e1bb25dc 100644
+--- a/sound/isa/cs423x/cs4236.c
++++ b/sound/isa/cs423x/cs4236.c
+@@ -494,7 +494,7 @@ static int snd_cs423x_pnpbios_detect(struct pnp_dev *pdev,
+ static int dev;
+ int err;
+ struct snd_card *card;
+- struct pnp_dev *cdev;
++ struct pnp_dev *cdev, *iter;
+ char cid[PNP_ID_LEN];
+
+ if (pnp_device_is_isapnp(pdev))
+@@ -510,9 +510,11 @@ static int snd_cs423x_pnpbios_detect(struct pnp_dev *pdev,
+ strcpy(cid, pdev->id[0].id);
+ cid[5] = '1';
+ cdev = NULL;
+- list_for_each_entry(cdev, &(pdev->protocol->devices), protocol_list) {
+- if (!strcmp(cdev->id[0].id, cid))
++ list_for_each_entry(iter, &(pdev->protocol->devices), protocol_list) {
++ if (!strcmp(iter->id[0].id, cid)) {
++ cdev = iter;
+ break;
++ }
+ }
+ err = snd_cs423x_card_new(&pdev->dev, dev, &card);
+ if (err < 0)
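
The cs4236 fix is the classic list_for_each_entry() pitfall: when the loop runs to completion, the cursor does not become NULL, it points at a bogus container_of() of the list head, so testing the cursor after the loop is never safe. Find-style loops want a separate result variable:

    #include <linux/list.h>

    struct item { struct list_head node; int id; };

    static struct item *find_item(struct list_head *head, int id)
    {
            struct item *iter, *found = NULL;

            list_for_each_entry(iter, head, node) {
                    if (iter->id == id) {
                            found = iter;   /* record the hit */
                            break;
                    }
            }
            return found;   /* NULL on miss, never the head impostor */
    }
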
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 572ff0d1fafee..8eff25d2d9e67 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2066,14 +2066,16 @@ static const struct hda_controller_ops pci_hda_ops = {
+ .position_check = azx_position_check,
+ };
+
++static DECLARE_BITMAP(probed_devs, SNDRV_CARDS);
++
+ static int azx_probe(struct pci_dev *pci,
+ const struct pci_device_id *pci_id)
+ {
+- static int dev;
+ struct snd_card *card;
+ struct hda_intel *hda;
+ struct azx *chip;
+ bool schedule_probe;
++ int dev;
+ int err;
+
+ if (pci_match_id(driver_denylist, pci)) {
+@@ -2081,10 +2083,11 @@ static int azx_probe(struct pci_dev *pci,
+ return -ENODEV;
+ }
+
++ dev = find_first_zero_bit(probed_devs, SNDRV_CARDS);
+ if (dev >= SNDRV_CARDS)
+ return -ENODEV;
+ if (!enable[dev]) {
+- dev++;
++ set_bit(dev, probed_devs);
+ return -ENOENT;
+ }
+
+@@ -2151,7 +2154,7 @@ static int azx_probe(struct pci_dev *pci,
+ if (schedule_probe)
+ schedule_delayed_work(&hda->probe_work, 0);
+
+- dev++;
++ set_bit(dev, probed_devs);
+ if (chip->disabled)
+ complete_all(&hda->probe_wait);
+ return 0;
+@@ -2374,6 +2377,7 @@ static void azx_remove(struct pci_dev *pci)
+ cancel_delayed_work_sync(&hda->probe_work);
+ device_lock(&pci->dev);
+
++ clear_bit(chip->dev_index, probed_devs);
+ pci_set_drvdata(pci, NULL);
+ snd_card_free(card);
+ }
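
azx_probe() used a static dev counter that only ever grew, so repeated probe/remove cycles (a Thunderbolt dock, for instance) eventually exhausted all SNDRV_CARDS slots; the bitmap version hands slots back on remove. The allocator idiom on its own:

    #include <linux/bitmap.h>

    #define MAX_SLOTS 32
    static DECLARE_BITMAP(slots, MAX_SLOTS);

    static int slot_get(void)
    {
            int i = find_first_zero_bit(slots, MAX_SLOTS);

            if (i >= MAX_SLOTS)
                    return -ENODEV; /* every slot in use */
            set_bit(i, slots);      /* claim it */
            return i;
    }

    static void slot_put(int i)
    {
            clear_bit(i, slots);    /* slot becomes reusable */
    }
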
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 92df4f243ec65..cf4f277dccdda 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1617,6 +1617,7 @@ static void hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ struct hda_codec *codec = per_pin->codec;
+ struct hdmi_spec *spec = codec->spec;
+ struct hdmi_eld *eld = &spec->temp_eld;
++ struct device *dev = hda_codec_dev(codec);
+ hda_nid_t pin_nid = per_pin->pin_nid;
+ int dev_id = per_pin->dev_id;
+ /*
+@@ -1630,8 +1631,13 @@ static void hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ int present;
+ int ret;
+
++#ifdef CONFIG_PM
++ if (dev->power.runtime_status == RPM_SUSPENDING)
++ return;
++#endif
++
+ ret = snd_hda_power_up_pm(codec);
+- if (ret < 0 && pm_runtime_suspended(hda_codec_dev(codec)))
++ if (ret < 0 && pm_runtime_suspended(dev))
+ goto out;
+
+ present = snd_hda_jack_pin_sense(codec, pin_nid, dev_id);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 75ff7e8498b83..16e90524a4977 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3617,8 +3617,8 @@ static void alc256_shutup(struct hda_codec *codec)
+ /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ * when booting with headset plugged. So skip setting it for the codec alc257
+ */
+- if (spec->codec_variant != ALC269_TYPE_ALC257 &&
+- spec->codec_variant != ALC269_TYPE_ALC256)
++ if (codec->core.vendor_id != 0x10ec0236 &&
++ codec->core.vendor_id != 0x10ec0257)
+ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+
+ if (!spec->no_shutup_pins)
+@@ -6948,6 +6948,7 @@ enum {
+ ALC236_FIXUP_HP_MUTE_LED,
+ ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF,
+ ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
++ ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
+ ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+ ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS,
+ ALC269VC_FIXUP_ACER_HEADSET_MIC,
+@@ -8273,6 +8274,14 @@ static const struct hda_fixup alc269_fixups[] = {
+ { }
+ },
+ },
++ [ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x08},
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x2fcf},
++ { }
++ },
++ },
+ [ALC295_FIXUP_ASUS_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -9054,6 +9063,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++ SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+@@ -9400,6 +9410,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
+ {.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
+ {.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
++ {.id = ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc256-samsung-headphone"},
+ {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
+ {.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"},
+ {.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"},
+diff --git a/sound/soc/amd/acp/acp-mach-common.c b/sound/soc/amd/acp/acp-mach-common.c
+index cd05ee2802c9e..5247015e8b316 100644
+--- a/sound/soc/amd/acp/acp-mach-common.c
++++ b/sound/soc/amd/acp/acp-mach-common.c
+@@ -556,6 +556,8 @@ int acp_legacy_dai_links_create(struct snd_soc_card *card)
+ num_links++;
+
+ links = devm_kzalloc(dev, sizeof(struct snd_soc_dai_link) * num_links, GFP_KERNEL);
++ if (!links)
++ return -ENOMEM;
+
+ if (drv_data->hs_cpu_id == I2S_SP) {
+ links[i].name = "acp-headset-codec";
+diff --git a/sound/soc/amd/vangogh/acp5x-mach.c b/sound/soc/amd/vangogh/acp5x-mach.c
+index 14cf325e4b237..5d7a17755fa7f 100644
+--- a/sound/soc/amd/vangogh/acp5x-mach.c
++++ b/sound/soc/amd/vangogh/acp5x-mach.c
+@@ -165,6 +165,7 @@ static int acp5x_cs35l41_hw_params(struct snd_pcm_substream *substream,
+ unsigned int num_codecs = rtd->num_codecs;
+ unsigned int bclk_val;
+
++ ret = 0;
+ for (i = 0; i < num_codecs; i++) {
+ codec_dai = asoc_rtd_to_codec(rtd, i);
+ if ((strcmp(codec_dai->name, "spi-VLV1776:00") == 0) ||
+diff --git a/sound/soc/amd/vangogh/acp5x-pcm-dma.c b/sound/soc/amd/vangogh/acp5x-pcm-dma.c
+index f10de38976cb5..bfca4cf423cf1 100644
+--- a/sound/soc/amd/vangogh/acp5x-pcm-dma.c
++++ b/sound/soc/amd/vangogh/acp5x-pcm-dma.c
+@@ -281,7 +281,7 @@ static int acp5x_dma_hw_params(struct snd_soc_component *component,
+ return -EINVAL;
+ }
+ size = params_buffer_bytes(params);
+- rtd->dma_addr = substream->dma_buffer.addr;
++ rtd->dma_addr = substream->runtime->dma_addr;
+ rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT);
+ config_acp5x_dma(rtd, substream->stream);
+ return 0;
+@@ -426,51 +426,51 @@ static int acp5x_audio_remove(struct platform_device *pdev)
+ static int __maybe_unused acp5x_pcm_resume(struct device *dev)
+ {
+ struct i2s_dev_data *adata;
+- u32 val, reg_val, frmt_val;
++ struct i2s_stream_instance *rtd;
++ u32 val;
+
+- reg_val = 0;
+- frmt_val = 0;
+ adata = dev_get_drvdata(dev);
+
+ if (adata->play_stream && adata->play_stream->runtime) {
+- struct i2s_stream_instance *rtd =
+- adata->play_stream->runtime->private_data;
++ rtd = adata->play_stream->runtime->private_data;
+ config_acp5x_dma(rtd, SNDRV_PCM_STREAM_PLAYBACK);
+- switch (rtd->i2s_instance) {
+- case I2S_HS_INSTANCE:
+- reg_val = ACP_HSTDM_ITER;
+- frmt_val = ACP_HSTDM_TXFRMT;
+- break;
+- case I2S_SP_INSTANCE:
+- default:
+- reg_val = ACP_I2STDM_ITER;
+- frmt_val = ACP_I2STDM_TXFRMT;
++ acp_writel((rtd->xfer_resolution << 3), rtd->acp5x_base + ACP_HSTDM_ITER);
++ if (adata->tdm_mode == TDM_ENABLE) {
++ acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_HSTDM_TXFRMT);
++ val = acp_readl(adata->acp5x_base + ACP_HSTDM_ITER);
++ acp_writel(val | 0x2, adata->acp5x_base + ACP_HSTDM_ITER);
++ }
++ }
++ if (adata->i2ssp_play_stream && adata->i2ssp_play_stream->runtime) {
++ rtd = adata->i2ssp_play_stream->runtime->private_data;
++ config_acp5x_dma(rtd, SNDRV_PCM_STREAM_PLAYBACK);
++ acp_writel((rtd->xfer_resolution << 3), rtd->acp5x_base + ACP_I2STDM_ITER);
++ if (adata->tdm_mode == TDM_ENABLE) {
++ acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_I2STDM_TXFRMT);
++ val = acp_readl(adata->acp5x_base + ACP_I2STDM_ITER);
++ acp_writel(val | 0x2, adata->acp5x_base + ACP_I2STDM_ITER);
+ }
+- acp_writel((rtd->xfer_resolution << 3),
+- rtd->acp5x_base + reg_val);
+ }
+
+ if (adata->capture_stream && adata->capture_stream->runtime) {
+- struct i2s_stream_instance *rtd =
+- adata->capture_stream->runtime->private_data;
++ rtd = adata->capture_stream->runtime->private_data;
+ config_acp5x_dma(rtd, SNDRV_PCM_STREAM_CAPTURE);
+- switch (rtd->i2s_instance) {
+- case I2S_HS_INSTANCE:
+- reg_val = ACP_HSTDM_IRER;
+- frmt_val = ACP_HSTDM_RXFRMT;
+- break;
+- case I2S_SP_INSTANCE:
+- default:
+- reg_val = ACP_I2STDM_IRER;
+- frmt_val = ACP_I2STDM_RXFRMT;
++ acp_writel((rtd->xfer_resolution << 3), rtd->acp5x_base + ACP_HSTDM_IRER);
++ if (adata->tdm_mode == TDM_ENABLE) {
++ acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_HSTDM_RXFRMT);
++ val = acp_readl(adata->acp5x_base + ACP_HSTDM_IRER);
++ acp_writel(val | 0x2, adata->acp5x_base + ACP_HSTDM_IRER);
+ }
+- acp_writel((rtd->xfer_resolution << 3),
+- rtd->acp5x_base + reg_val);
+ }
+- if (adata->tdm_mode == TDM_ENABLE) {
+- acp_writel(adata->tdm_fmt, adata->acp5x_base + frmt_val);
+- val = acp_readl(adata->acp5x_base + reg_val);
+- acp_writel(val | 0x2, adata->acp5x_base + reg_val);
++ if (adata->i2ssp_capture_stream && adata->i2ssp_capture_stream->runtime) {
++ rtd = adata->i2ssp_capture_stream->runtime->private_data;
++ config_acp5x_dma(rtd, SNDRV_PCM_STREAM_CAPTURE);
++ acp_writel((rtd->xfer_resolution << 3), rtd->acp5x_base + ACP_I2STDM_IRER);
++ if (adata->tdm_mode == TDM_ENABLE) {
++ acp_writel(adata->tdm_fmt, adata->acp5x_base + ACP_I2STDM_RXFRMT);
++ val = acp_readl(adata->acp5x_base + ACP_I2STDM_IRER);
++ acp_writel(val | 0x2, adata->acp5x_base + ACP_I2STDM_IRER);
++ }
+ }
+ acp_writel(1, adata->acp5x_base + ACP_EXTERNAL_INTR_ENB);
+ return 0;
+diff --git a/sound/soc/atmel/atmel_ssc_dai.c b/sound/soc/atmel/atmel_ssc_dai.c
+index 26e2bc690d86e..c1dea8d624164 100644
+--- a/sound/soc/atmel/atmel_ssc_dai.c
++++ b/sound/soc/atmel/atmel_ssc_dai.c
+@@ -280,7 +280,10 @@ static int atmel_ssc_startup(struct snd_pcm_substream *substream,
+
+ /* Enable PMC peripheral clock for this SSC */
+ pr_debug("atmel_ssc_dai: Starting clock\n");
+- clk_enable(ssc_p->ssc->clk);
++ ret = clk_enable(ssc_p->ssc->clk);
++ if (ret)
++ return ret;
++
+ ssc_p->mck_rate = clk_get_rate(ssc_p->ssc->clk);
+
+ /* Reset the SSC unless initialized to keep it in a clean state */
+diff --git a/sound/soc/atmel/mikroe-proto.c b/sound/soc/atmel/mikroe-proto.c
+index 627564c18c270..ce46d8a0b7e43 100644
+--- a/sound/soc/atmel/mikroe-proto.c
++++ b/sound/soc/atmel/mikroe-proto.c
+@@ -115,7 +115,8 @@ static int snd_proto_probe(struct platform_device *pdev)
+ cpu_np = of_parse_phandle(np, "i2s-controller", 0);
+ if (!cpu_np) {
+ dev_err(&pdev->dev, "i2s-controller missing\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_codec_node;
+ }
+ dai->cpus->of_node = cpu_np;
+ dai->platforms->of_node = cpu_np;
+@@ -125,7 +126,8 @@ static int snd_proto_probe(struct platform_device *pdev)
+ &bitclkmaster, &framemaster);
+ if (bitclkmaster != framemaster) {
+ dev_err(&pdev->dev, "Must be the same bitclock and frame master\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_cpu_node;
+ }
+ if (bitclkmaster) {
+ if (codec_np == bitclkmaster)
+@@ -136,18 +138,20 @@ static int snd_proto_probe(struct platform_device *pdev)
+ dai_fmt |= snd_soc_daifmt_parse_clock_provider_as_flag(np, NULL);
+ }
+
+- of_node_put(bitclkmaster);
+- of_node_put(framemaster);
+- dai->dai_fmt = dai_fmt;
+-
+- of_node_put(codec_np);
+- of_node_put(cpu_np);
+
++ dai->dai_fmt = dai_fmt;
+ ret = snd_soc_register_card(&snd_proto);
+ if (ret)
+ dev_err_probe(&pdev->dev, ret,
+ "snd_soc_register_card() failed\n");
+
++
++put_cpu_node:
++ of_node_put(bitclkmaster);
++ of_node_put(framemaster);
++ of_node_put(cpu_np);
++put_codec_node:
++ of_node_put(codec_np);
+ return ret;
+ }
+
+diff --git a/sound/soc/atmel/sam9g20_wm8731.c b/sound/soc/atmel/sam9g20_wm8731.c
+index 915da92e1ec82..33e43013ff770 100644
+--- a/sound/soc/atmel/sam9g20_wm8731.c
++++ b/sound/soc/atmel/sam9g20_wm8731.c
+@@ -214,6 +214,7 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
+ cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0);
+ if (!cpu_np) {
+ dev_err(&pdev->dev, "dai and pcm info missing\n");
++ of_node_put(codec_np);
+ return -EINVAL;
+ }
+ at91sam9g20ek_dai.cpus->of_node = cpu_np;
+diff --git a/sound/soc/atmel/sam9x5_wm8731.c b/sound/soc/atmel/sam9x5_wm8731.c
+index 7c45dc4f8c1bb..99310e40e7a62 100644
+--- a/sound/soc/atmel/sam9x5_wm8731.c
++++ b/sound/soc/atmel/sam9x5_wm8731.c
+@@ -142,7 +142,7 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ if (!cpu_np) {
+ dev_err(&pdev->dev, "atmel,ssc-controller node missing\n");
+ ret = -EINVAL;
+- goto out;
++ goto out_put_codec_np;
+ }
+ dai->cpus->of_node = cpu_np;
+ dai->platforms->of_node = cpu_np;
+@@ -153,12 +153,9 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+ if (ret != 0) {
+ dev_err(&pdev->dev, "Failed to set SSC %d for audio: %d\n",
+ ret, priv->ssc_id);
+- goto out;
++ goto out_put_cpu_np;
+ }
+
+- of_node_put(codec_np);
+- of_node_put(cpu_np);
+-
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+ if (ret) {
+ dev_err(&pdev->dev, "Platform device allocation failed\n");
+@@ -167,10 +164,14 @@ static int sam9x5_wm8731_driver_probe(struct platform_device *pdev)
+
+ dev_dbg(&pdev->dev, "%s ok\n", __func__);
+
+- return ret;
++ goto out_put_cpu_np;
+
+ out_put_audio:
+ atmel_ssc_put_audio(priv->ssc_id);
++out_put_cpu_np:
++ of_node_put(cpu_np);
++out_put_codec_np:
++ of_node_put(codec_np);
+ out:
+ return ret;
+ }
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index d3e5ae8310ef2..30c00380499cd 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -733,6 +733,7 @@ config SND_SOC_CS4349
+
+ config SND_SOC_CS47L15
+ tristate
++ depends on MFD_CS47L15
+
+ config SND_SOC_CS47L24
+ tristate
+@@ -740,15 +741,19 @@ config SND_SOC_CS47L24
+
+ config SND_SOC_CS47L35
+ tristate
++ depends on MFD_CS47L35
+
+ config SND_SOC_CS47L85
+ tristate
++ depends on MFD_CS47L85
+
+ config SND_SOC_CS47L90
+ tristate
++ depends on MFD_CS47L90
+
+ config SND_SOC_CS47L92
+ tristate
++ depends on MFD_CS47L92
+
+ # Cirrus Logic Quad-Channel ADC
+ config SND_SOC_CS53L30
+diff --git a/sound/soc/codecs/cs35l41.c b/sound/soc/codecs/cs35l41.c
+index 77a0176946459..f3787d77f892b 100644
+--- a/sound/soc/codecs/cs35l41.c
++++ b/sound/soc/codecs/cs35l41.c
+@@ -1035,8 +1035,8 @@ static int cs35l41_irq_gpio_config(struct cs35l41_private *cs35l41)
+
+ regmap_update_bits(cs35l41->regmap, CS35L41_GPIO2_CTRL1,
+ CS35L41_GPIO_POL_MASK | CS35L41_GPIO_DIR_MASK,
+- irq_gpio_cfg1->irq_pol_inv << CS35L41_GPIO_POL_SHIFT |
+- !irq_gpio_cfg1->irq_out_en << CS35L41_GPIO_DIR_SHIFT);
++ irq_gpio_cfg2->irq_pol_inv << CS35L41_GPIO_POL_SHIFT |
++ !irq_gpio_cfg2->irq_out_en << CS35L41_GPIO_DIR_SHIFT);
+
+ regmap_update_bits(cs35l41->regmap, CS35L41_GPIO_PAD_CONTROL,
+ CS35L41_GPIO1_CTRL_MASK | CS35L41_GPIO2_CTRL_MASK,
+@@ -1091,7 +1091,7 @@ static struct snd_soc_dai_driver cs35l41_dai[] = {
+ .capture = {
+ .stream_name = "AMP Capture",
+ .channels_min = 1,
+- .channels_max = 8,
++ .channels_max = 4,
+ .rates = SNDRV_PCM_RATE_KNOT,
+ .formats = CS35L41_TX_FORMATS,
+ },
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index 43d98bdb5b5b0..2c294868008ed 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -1637,7 +1637,11 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+
+ mutex_lock(&cs42l42->jack_detect_mutex);
+
+- /* Check auto-detect status */
++ /*
++ * Check auto-detect status. Don't assume a previous unplug event has
++ * cleared the flags. If the jack is unplugged and plugged during
++ * system suspend there won't have been an unplug event.
++ */
+ if ((~masks[5]) & irq_params_table[5].mask) {
+ if (stickies[5] & CS42L42_HSDET_AUTO_DONE_MASK) {
+ cs42l42_process_hs_type_detect(cs42l42);
+@@ -1645,11 +1649,15 @@ static irqreturn_t cs42l42_irq_thread(int irq, void *data)
+ case CS42L42_PLUG_CTIA:
+ case CS42L42_PLUG_OMTP:
+ snd_soc_jack_report(cs42l42->jack, SND_JACK_HEADSET,
+- SND_JACK_HEADSET);
++ SND_JACK_HEADSET |
++ SND_JACK_BTN_0 | SND_JACK_BTN_1 |
++ SND_JACK_BTN_2 | SND_JACK_BTN_3);
+ break;
+ case CS42L42_PLUG_HEADPHONE:
+ snd_soc_jack_report(cs42l42->jack, SND_JACK_HEADPHONE,
+- SND_JACK_HEADPHONE);
++ SND_JACK_HEADSET |
++ SND_JACK_BTN_0 | SND_JACK_BTN_1 |
++ SND_JACK_BTN_2 | SND_JACK_BTN_3);
+ break;
+ default:
+ break;
+diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c
+index 6ffe88345de5f..3a3dc0539d921 100644
+--- a/sound/soc/codecs/lpass-rx-macro.c
++++ b/sound/soc/codecs/lpass-rx-macro.c
+@@ -2039,6 +2039,10 @@ static int rx_macro_load_compander_coeff(struct snd_soc_component *component,
+ int i;
+ int hph_pwr_mode;
+
++ /* AUX does not have compander */
++ if (comp == INTERP_AUX)
++ return 0;
++
+ if (!rx->comp_enabled[comp])
+ return 0;
+
+@@ -2268,7 +2272,7 @@ static int rx_macro_mux_get(struct snd_kcontrol *kcontrol,
+ struct snd_soc_component *component = snd_soc_dapm_to_component(widget->dapm);
+ struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+
+- ucontrol->value.integer.value[0] =
++ ucontrol->value.enumerated.item[0] =
+ rx->rx_port_value[widget->shift];
+ return 0;
+ }
+@@ -2280,7 +2284,7 @@ static int rx_macro_mux_put(struct snd_kcontrol *kcontrol,
+ struct snd_soc_component *component = snd_soc_dapm_to_component(widget->dapm);
+ struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ struct snd_soc_dapm_update *update = NULL;
+- u32 rx_port_value = ucontrol->value.integer.value[0];
++ u32 rx_port_value = ucontrol->value.enumerated.item[0];
+ u32 aif_rst;
+ struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+
+@@ -2392,7 +2396,7 @@ static int rx_macro_get_hph_pwr_mode(struct snd_kcontrol *kcontrol,
+ struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+ struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+
+- ucontrol->value.integer.value[0] = rx->hph_pwr_mode;
++ ucontrol->value.enumerated.item[0] = rx->hph_pwr_mode;
+ return 0;
+ }
+
+@@ -2402,7 +2406,7 @@ static int rx_macro_put_hph_pwr_mode(struct snd_kcontrol *kcontrol,
+ struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+ struct rx_macro *rx = snd_soc_component_get_drvdata(component);
+
+- rx->hph_pwr_mode = ucontrol->value.integer.value[0];
++ rx->hph_pwr_mode = ucontrol->value.enumerated.item[0];
+ return 0;
+ }
+
+@@ -3542,6 +3546,8 @@ static int rx_macro_probe(struct platform_device *pdev)
+ return PTR_ERR(base);
+
+ rx->regmap = devm_regmap_init_mmio(dev, base, &rx_regmap_config);
++ if (IS_ERR(rx->regmap))
++ return PTR_ERR(rx->regmap);
+
+ dev_set_drvdata(dev, rx);
+
+diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c
+index a4c0a155af565..9c96ab1bf84f9 100644
+--- a/sound/soc/codecs/lpass-tx-macro.c
++++ b/sound/soc/codecs/lpass-tx-macro.c
+@@ -1821,6 +1821,8 @@ static int tx_macro_probe(struct platform_device *pdev)
+ }
+
+ tx->regmap = devm_regmap_init_mmio(dev, base, &tx_regmap_config);
++ if (IS_ERR(tx->regmap))
++ return PTR_ERR(tx->regmap);
+
+ dev_set_drvdata(dev, tx);
+
+diff --git a/sound/soc/codecs/lpass-va-macro.c b/sound/soc/codecs/lpass-va-macro.c
+index 11147e35689b2..e14c277e6a8b6 100644
+--- a/sound/soc/codecs/lpass-va-macro.c
++++ b/sound/soc/codecs/lpass-va-macro.c
+@@ -780,7 +780,7 @@ static int va_macro_dec_mode_get(struct snd_kcontrol *kcontrol,
+ struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ int path = e->shift_l;
+
+- ucontrol->value.integer.value[0] = va->dec_mode[path];
++ ucontrol->value.enumerated.item[0] = va->dec_mode[path];
+
+ return 0;
+ }
+@@ -789,7 +789,7 @@ static int va_macro_dec_mode_put(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+ {
+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+- int value = ucontrol->value.integer.value[0];
++ int value = ucontrol->value.enumerated.item[0];
+ struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ int path = e->shift_l;
+ struct va_macro *va = snd_soc_component_get_drvdata(comp);
+diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c
+index 75baf8eb70299..69d2915f40d88 100644
+--- a/sound/soc/codecs/lpass-wsa-macro.c
++++ b/sound/soc/codecs/lpass-wsa-macro.c
+@@ -2405,6 +2405,8 @@ static int wsa_macro_probe(struct platform_device *pdev)
+ return PTR_ERR(base);
+
+ wsa->regmap = devm_regmap_init_mmio(dev, base, &wsa_regmap_config);
++ if (IS_ERR(wsa->regmap))
++ return PTR_ERR(wsa->regmap);
+
+ dev_set_drvdata(dev, wsa);
+
+diff --git a/sound/soc/codecs/max98927.c b/sound/soc/codecs/max98927.c
+index 5ba5f876eab87..fd84780bf689f 100644
+--- a/sound/soc/codecs/max98927.c
++++ b/sound/soc/codecs/max98927.c
+@@ -16,6 +16,7 @@
+ #include <sound/pcm_params.h>
+ #include <sound/soc.h>
+ #include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/of_gpio.h>
+ #include <sound/tlv.h>
+ #include "max98927.h"
+diff --git a/sound/soc/codecs/msm8916-wcd-analog.c b/sound/soc/codecs/msm8916-wcd-analog.c
+index 485cda46dbb9b..e52a559c52d68 100644
+--- a/sound/soc/codecs/msm8916-wcd-analog.c
++++ b/sound/soc/codecs/msm8916-wcd-analog.c
+@@ -1222,8 +1222,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ }
+
+ irq = platform_get_irq_byname(pdev, "mbhc_switch_int");
+- if (irq < 0)
+- return irq;
++ if (irq < 0) {
++ ret = irq;
++ goto err_disable_clk;
++ }
+
+ ret = devm_request_threaded_irq(dev, irq, NULL,
+ pm8916_mbhc_switch_irq_handler,
+@@ -1235,8 +1237,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+
+ if (priv->mbhc_btn_enabled) {
+ irq = platform_get_irq_byname(pdev, "mbhc_but_press_det");
+- if (irq < 0)
+- return irq;
++ if (irq < 0) {
++ ret = irq;
++ goto err_disable_clk;
++ }
+
+ ret = devm_request_threaded_irq(dev, irq, NULL,
+ mbhc_btn_press_irq_handler,
+@@ -1247,8 +1251,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ dev_err(dev, "cannot request mbhc button press irq\n");
+
+ irq = platform_get_irq_byname(pdev, "mbhc_but_rel_det");
+- if (irq < 0)
+- return irq;
++ if (irq < 0) {
++ ret = irq;
++ goto err_disable_clk;
++ }
+
+ ret = devm_request_threaded_irq(dev, irq, NULL,
+ mbhc_btn_release_irq_handler,
+@@ -1265,6 +1271,10 @@ static int pm8916_wcd_analog_spmi_probe(struct platform_device *pdev)
+ return devm_snd_soc_register_component(dev, &pm8916_wcd_analog,
+ pm8916_wcd_analog_dai,
+ ARRAY_SIZE(pm8916_wcd_analog_dai));
++
++err_disable_clk:
++ clk_disable_unprepare(priv->mclk);
++ return ret;
+ }
+
+ static int pm8916_wcd_analog_spmi_remove(struct platform_device *pdev)
+diff --git a/sound/soc/codecs/msm8916-wcd-digital.c b/sound/soc/codecs/msm8916-wcd-digital.c
+index fcc10c8bc6259..9ad7fc0baf072 100644
+--- a/sound/soc/codecs/msm8916-wcd-digital.c
++++ b/sound/soc/codecs/msm8916-wcd-digital.c
+@@ -1201,7 +1201,7 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+ ret = clk_prepare_enable(priv->mclk);
+ if (ret < 0) {
+ dev_err(dev, "failed to enable mclk %d\n", ret);
+- return ret;
++ goto err_clk;
+ }
+
+ dev_set_drvdata(dev, priv);
+@@ -1209,6 +1209,9 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
+ return devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
+ msm8916_wcd_digital_dai,
+ ARRAY_SIZE(msm8916_wcd_digital_dai));
++err_clk:
++ clk_disable_unprepare(priv->ahbclk);
++ return ret;
+ }
+
+ static int msm8916_wcd_digital_remove(struct platform_device *pdev)
+diff --git a/sound/soc/codecs/mt6358.c b/sound/soc/codecs/mt6358.c
+index 9b263a9a669dc..4c7b5d940799b 100644
+--- a/sound/soc/codecs/mt6358.c
++++ b/sound/soc/codecs/mt6358.c
+@@ -107,6 +107,7 @@ int mt6358_set_mtkaif_protocol(struct snd_soc_component *cmpnt,
+ priv->mtkaif_protocol = mtkaif_protocol;
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_set_mtkaif_protocol);
+
+ static void playback_gpio_set(struct mt6358_priv *priv)
+ {
+@@ -273,6 +274,7 @@ int mt6358_mtkaif_calibration_enable(struct snd_soc_component *cmpnt)
+ 1 << RG_AUD_PAD_TOP_DAT_MISO_LOOPBACK_SFT);
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_mtkaif_calibration_enable);
+
+ int mt6358_mtkaif_calibration_disable(struct snd_soc_component *cmpnt)
+ {
+@@ -296,6 +298,7 @@ int mt6358_mtkaif_calibration_disable(struct snd_soc_component *cmpnt)
+ capture_gpio_reset(priv);
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_mtkaif_calibration_disable);
+
+ int mt6358_set_mtkaif_calibration_phase(struct snd_soc_component *cmpnt,
+ int phase_1, int phase_2)
+@@ -310,6 +313,7 @@ int mt6358_set_mtkaif_calibration_phase(struct snd_soc_component *cmpnt,
+ phase_2 << RG_AUD_PAD_TOP_PHASE_MODE2_SFT);
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(mt6358_set_mtkaif_calibration_phase);
+
+ /* dl pga gain */
+ enum {
+diff --git a/sound/soc/codecs/rk817_codec.c b/sound/soc/codecs/rk817_codec.c
+index 03f24edfe4f64..8fffe378618d0 100644
+--- a/sound/soc/codecs/rk817_codec.c
++++ b/sound/soc/codecs/rk817_codec.c
+@@ -508,12 +508,14 @@ static int rk817_platform_probe(struct platform_device *pdev)
+ if (ret < 0) {
+ dev_err(&pdev->dev, "%s() register codec error %d\n",
+ __func__, ret);
+- goto err_;
++ goto err_clk;
+ }
+
+ return 0;
+-err_:
+
++err_clk:
++ clk_disable_unprepare(rk817_codec_data->mclk);
++err_:
+ return ret;
+ }
+
+diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c
+index 2138f62e6af5d..3a8fba101b20f 100644
+--- a/sound/soc/codecs/rt5663.c
++++ b/sound/soc/codecs/rt5663.c
+@@ -3478,6 +3478,8 @@ static int rt5663_parse_dp(struct rt5663_priv *rt5663, struct device *dev)
+ table_size = sizeof(struct impedance_mapping_table) *
+ rt5663->pdata.impedance_sensing_num;
+ rt5663->imp_table = devm_kzalloc(dev, table_size, GFP_KERNEL);
++ if (!rt5663->imp_table)
++ return -ENOMEM;
+ ret = device_property_read_u32_array(dev,
+ "realtek,impedance_sensing_table",
+ (u32 *)rt5663->imp_table, table_size);
+diff --git a/sound/soc/codecs/rt5682s.c b/sound/soc/codecs/rt5682s.c
+index 1e662d1be2b3e..92b8753f1267b 100644
+--- a/sound/soc/codecs/rt5682s.c
++++ b/sound/soc/codecs/rt5682s.c
+@@ -822,6 +822,7 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ {
+ struct rt5682s_priv *rt5682s =
+ container_of(work, struct rt5682s_priv, jack_detect_work.work);
++ struct snd_soc_dapm_context *dapm;
+ int val, btn_type;
+
+ if (!rt5682s->component || !rt5682s->component->card ||
+@@ -832,7 +833,9 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ return;
+ }
+
+- mutex_lock(&rt5682s->jdet_mutex);
++ dapm = snd_soc_component_get_dapm(rt5682s->component);
++
++ snd_soc_dapm_mutex_lock(dapm);
+ mutex_lock(&rt5682s->calibrate_mutex);
+
+ val = snd_soc_component_read(rt5682s->component, RT5682S_AJD1_CTRL)
+@@ -889,6 +892,9 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ rt5682s->irq_work_delay_time = 50;
+ }
+
++ mutex_unlock(&rt5682s->calibrate_mutex);
++ snd_soc_dapm_mutex_unlock(dapm);
++
+ snd_soc_jack_report(rt5682s->hs_jack, rt5682s->jack_type,
+ SND_JACK_HEADSET | SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+ SND_JACK_BTN_2 | SND_JACK_BTN_3);
+@@ -898,9 +904,6 @@ static void rt5682s_jack_detect_handler(struct work_struct *work)
+ schedule_delayed_work(&rt5682s->jd_check_work, 0);
+ else
+ cancel_delayed_work_sync(&rt5682s->jd_check_work);
+-
+- mutex_unlock(&rt5682s->calibrate_mutex);
+- mutex_unlock(&rt5682s->jdet_mutex);
+ }
+
+ static void rt5682s_jd_check_handler(struct work_struct *work)
+@@ -908,14 +911,9 @@ static void rt5682s_jd_check_handler(struct work_struct *work)
+ struct rt5682s_priv *rt5682s =
+ container_of(work, struct rt5682s_priv, jd_check_work.work);
+
+- if (snd_soc_component_read(rt5682s->component, RT5682S_AJD1_CTRL)
+- & RT5682S_JDH_RS_MASK) {
++ if (snd_soc_component_read(rt5682s->component, RT5682S_AJD1_CTRL) & RT5682S_JDH_RS_MASK) {
+ /* jack out */
+- rt5682s->jack_type = rt5682s_headset_detect(rt5682s->component, 0);
+-
+- snd_soc_jack_report(rt5682s->hs_jack, rt5682s->jack_type,
+- SND_JACK_HEADSET | SND_JACK_BTN_0 | SND_JACK_BTN_1 |
+- SND_JACK_BTN_2 | SND_JACK_BTN_3);
++ schedule_delayed_work(&rt5682s->jack_detect_work, 0);
+ } else {
+ schedule_delayed_work(&rt5682s->jd_check_work, 500);
+ }
+@@ -1323,7 +1321,6 @@ static int rt5682s_hp_amp_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+ {
+ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+- struct rt5682s_priv *rt5682s = snd_soc_component_get_drvdata(component);
+
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+@@ -1339,8 +1336,6 @@ static int rt5682s_hp_amp_event(struct snd_soc_dapm_widget *w,
+ snd_soc_component_write(component, RT5682S_BIAS_CUR_CTRL_11, 0x6666);
+ snd_soc_component_write(component, RT5682S_BIAS_CUR_CTRL_12, 0xa82a);
+
+- mutex_lock(&rt5682s->jdet_mutex);
+-
+ snd_soc_component_update_bits(component, RT5682S_HP_CTRL_2,
+ RT5682S_HPO_L_PATH_MASK | RT5682S_HPO_R_PATH_MASK |
+ RT5682S_HPO_SEL_IP_EN_SW, RT5682S_HPO_L_PATH_EN |
+@@ -1348,8 +1343,6 @@ static int rt5682s_hp_amp_event(struct snd_soc_dapm_widget *w,
+ usleep_range(5000, 10000);
+ snd_soc_component_update_bits(component, RT5682S_HP_AMP_DET_CTL_1,
+ RT5682S_CP_SW_SIZE_MASK, RT5682S_CP_SW_SIZE_L | RT5682S_CP_SW_SIZE_S);
+-
+- mutex_unlock(&rt5682s->jdet_mutex);
+ break;
+
+ case SND_SOC_DAPM_POST_PMD:
+@@ -3103,7 +3096,6 @@ static int rt5682s_i2c_probe(struct i2c_client *i2c,
+
+ mutex_init(&rt5682s->calibrate_mutex);
+ mutex_init(&rt5682s->sar_mutex);
+- mutex_init(&rt5682s->jdet_mutex);
+ rt5682s_calibrate(rt5682s);
+
+ regmap_update_bits(rt5682s->regmap, RT5682S_MICBIAS_2,
+diff --git a/sound/soc/codecs/rt5682s.h b/sound/soc/codecs/rt5682s.h
+index 1bf2ef7ce5784..397a2531b6f68 100644
+--- a/sound/soc/codecs/rt5682s.h
++++ b/sound/soc/codecs/rt5682s.h
+@@ -1446,7 +1446,6 @@ struct rt5682s_priv {
+ struct delayed_work jd_check_work;
+ struct mutex calibrate_mutex;
+ struct mutex sar_mutex;
+- struct mutex jdet_mutex;
+
+ #ifdef CONFIG_COMMON_CLK
+ struct clk_hw dai_clks_hw[RT5682S_DAI_NUM_CLKS];
+diff --git a/sound/soc/codecs/wcd934x.c b/sound/soc/codecs/wcd934x.c
+index 6c468527fec61..1e75e93cf28f2 100644
+--- a/sound/soc/codecs/wcd934x.c
++++ b/sound/soc/codecs/wcd934x.c
+@@ -3023,14 +3023,14 @@ static int wcd934x_hph_impedance_get(struct snd_kcontrol *kcontrol,
+ return 0;
+ }
+ static const struct snd_kcontrol_new hph_type_detect_controls[] = {
+- SOC_SINGLE_EXT("HPH Type", 0, 0, UINT_MAX, 0,
++ SOC_SINGLE_EXT("HPH Type", 0, 0, WCD_MBHC_HPH_STEREO, 0,
+ wcd934x_get_hph_type, NULL),
+ };
+
+ static const struct snd_kcontrol_new impedance_detect_controls[] = {
+- SOC_SINGLE_EXT("HPHL Impedance", 0, 0, UINT_MAX, 0,
++ SOC_SINGLE_EXT("HPHL Impedance", 0, 0, INT_MAX, 0,
+ wcd934x_hph_impedance_get, NULL),
+- SOC_SINGLE_EXT("HPHR Impedance", 0, 1, UINT_MAX, 0,
++ SOC_SINGLE_EXT("HPHR Impedance", 0, 1, INT_MAX, 0,
+ wcd934x_hph_impedance_get, NULL),
+ };
+
+@@ -3308,13 +3308,16 @@ static int wcd934x_rx_hph_mode_put(struct snd_kcontrol *kc,
+
+ mode_val = ucontrol->value.enumerated.item[0];
+
++ if (mode_val == wcd->hph_mode)
++ return 0;
++
+ if (mode_val == 0) {
+ dev_err(wcd->dev, "Invalid HPH Mode, default to ClSH HiFi\n");
+ mode_val = CLS_H_LOHIFI;
+ }
+ wcd->hph_mode = mode_val;
+
+- return 0;
++ return 1;
+ }
+
+ static int slim_rx_mux_get(struct snd_kcontrol *kc,
+@@ -5883,6 +5886,7 @@ static int wcd934x_codec_parse_data(struct wcd934x_codec *wcd)
+ }
+
+ wcd->sidev = of_slim_get_device(wcd->sdev->ctrl, ifc_dev_np);
++ of_node_put(ifc_dev_np);
+ if (!wcd->sidev) {
+ dev_err(dev, "Unable to get SLIM Interface device\n");
+ return -EINVAL;
+diff --git a/sound/soc/codecs/wcd938x.c b/sound/soc/codecs/wcd938x.c
+index 36cbc66914f90..9ae65cbabb1aa 100644
+--- a/sound/soc/codecs/wcd938x.c
++++ b/sound/soc/codecs/wcd938x.c
+@@ -2504,7 +2504,7 @@ static int wcd938x_tx_mode_get(struct snd_kcontrol *kcontrol,
+ struct soc_enum *e = (struct soc_enum *)kcontrol->private_value;
+ int path = e->shift_l;
+
+- ucontrol->value.integer.value[0] = wcd938x->tx_mode[path];
++ ucontrol->value.enumerated.item[0] = wcd938x->tx_mode[path];
+
+ return 0;
+ }
+@@ -2528,7 +2528,7 @@ static int wcd938x_rx_hph_mode_get(struct snd_kcontrol *kcontrol,
+ struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol);
+ struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component);
+
+- ucontrol->value.integer.value[0] = wcd938x->hph_mode;
++ ucontrol->value.enumerated.item[0] = wcd938x->hph_mode;
+
+ return 0;
+ }
+@@ -3575,14 +3575,14 @@ static int wcd938x_hph_impedance_get(struct snd_kcontrol *kcontrol,
+ }
+
+ static const struct snd_kcontrol_new hph_type_detect_controls[] = {
+- SOC_SINGLE_EXT("HPH Type", 0, 0, UINT_MAX, 0,
++ SOC_SINGLE_EXT("HPH Type", 0, 0, WCD_MBHC_HPH_STEREO, 0,
+ wcd938x_get_hph_type, NULL),
+ };
+
+ static const struct snd_kcontrol_new impedance_detect_controls[] = {
+- SOC_SINGLE_EXT("HPHL Impedance", 0, 0, UINT_MAX, 0,
++ SOC_SINGLE_EXT("HPHL Impedance", 0, 0, INT_MAX, 0,
+ wcd938x_hph_impedance_get, NULL),
+- SOC_SINGLE_EXT("HPHR Impedance", 0, 1, UINT_MAX, 0,
++ SOC_SINGLE_EXT("HPHR Impedance", 0, 1, INT_MAX, 0,
+ wcd938x_hph_impedance_get, NULL),
+ };
+
+diff --git a/sound/soc/codecs/wm8350.c b/sound/soc/codecs/wm8350.c
+index 15d42ce3b21d6..41504ce2a682f 100644
+--- a/sound/soc/codecs/wm8350.c
++++ b/sound/soc/codecs/wm8350.c
+@@ -1537,18 +1537,38 @@ static int wm8350_component_probe(struct snd_soc_component *component)
+ wm8350_clear_bits(wm8350, WM8350_JACK_DETECT,
+ WM8350_JDL_ENA | WM8350_JDR_ENA);
+
+- wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L,
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L,
+ wm8350_hpl_jack_handler, 0, "Left jack detect",
+ priv);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R,
++ if (ret != 0)
++ goto err;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R,
+ wm8350_hpr_jack_handler, 0, "Right jack detect",
+ priv);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICSCD,
++ if (ret != 0)
++ goto free_jck_det_l;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICSCD,
+ wm8350_mic_handler, 0, "Microphone short", priv);
+- wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICD,
++ if (ret != 0)
++ goto free_jck_det_r;
++
++ ret = wm8350_register_irq(wm8350, WM8350_IRQ_CODEC_MICD,
+ wm8350_mic_handler, 0, "Microphone detect", priv);
++ if (ret != 0)
++ goto free_micscd;
+
+ return 0;
++
++free_micscd:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_MICSCD, priv);
++free_jck_det_r:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_R, priv);
++free_jck_det_l:
++ wm8350_free_irq(wm8350, WM8350_IRQ_CODEC_JCK_DET_L, priv);
++err:
++ return ret;
+ }
+
+ static void wm8350_component_remove(struct snd_soc_component *component)
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 5cb58929090d4..1edac3e10f345 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -403,9 +403,13 @@ static int dw_i2s_runtime_suspend(struct device *dev)
+ static int dw_i2s_runtime_resume(struct device *dev)
+ {
+ struct dw_i2s_dev *dw_dev = dev_get_drvdata(dev);
++ int ret;
+
+- if (dw_dev->capability & DW_I2S_MASTER)
+- clk_enable(dw_dev->clk);
++ if (dw_dev->capability & DW_I2S_MASTER) {
++ ret = clk_enable(dw_dev->clk);
++ if (ret)
++ return ret;
++ }
+ return 0;
+ }
+
+@@ -422,10 +426,13 @@ static int dw_i2s_resume(struct snd_soc_component *component)
+ {
+ struct dw_i2s_dev *dev = snd_soc_component_get_drvdata(component);
+ struct snd_soc_dai *dai;
+- int stream;
++ int stream, ret;
+
+- if (dev->capability & DW_I2S_MASTER)
+- clk_enable(dev->clk);
++ if (dev->capability & DW_I2S_MASTER) {
++ ret = clk_enable(dev->clk);
++ if (ret)
++ return ret;
++ }
+
+ for_each_component_dais(component, dai) {
+ for_each_pcm_streams(stream)
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index d178b479c8bd4..06d4a014f296d 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -610,6 +610,8 @@ static void fsl_spdif_shutdown(struct snd_pcm_substream *substream,
+ mask = SCR_TXFIFO_AUTOSYNC_MASK | SCR_TXFIFO_CTRL_MASK |
+ SCR_TXSEL_MASK | SCR_USRC_SEL_MASK |
+ SCR_TXFIFO_FSEL_MASK;
++ /* Disable TX clock */
++ regmap_update_bits(regmap, REG_SPDIF_STC, STC_TXCLK_ALL_EN_MASK, 0);
+ } else {
+ scr = SCR_RXFIFO_OFF | SCR_RXFIFO_CTL_ZERO;
+ mask = SCR_RXFIFO_FSEL_MASK | SCR_RXFIFO_AUTOSYNC_MASK|
+diff --git a/sound/soc/fsl/imx-es8328.c b/sound/soc/fsl/imx-es8328.c
+index 09c674ee79f1a..168973035e35f 100644
+--- a/sound/soc/fsl/imx-es8328.c
++++ b/sound/soc/fsl/imx-es8328.c
+@@ -87,6 +87,7 @@ static int imx_es8328_probe(struct platform_device *pdev)
+ if (int_port > MUX_PORT_MAX || int_port == 0) {
+ dev_err(dev, "mux-int-port: hardware only has %d mux ports\n",
+ MUX_PORT_MAX);
++ ret = -EINVAL;
+ goto fail;
+ }
+
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index a81323d1691d0..9736102e68088 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -275,6 +275,7 @@ int asoc_simple_hw_params(struct snd_pcm_substream *substream,
+ mclk_fs = props->mclk_fs;
+
+ if (mclk_fs) {
++ struct snd_soc_component *component;
+ mclk = params_rate(params) * mclk_fs;
+
+ for_each_prop_dai_codec(props, i, pdai) {
+@@ -282,16 +283,30 @@ int asoc_simple_hw_params(struct snd_pcm_substream *substream,
+ if (ret < 0)
+ return ret;
+ }
++
+ for_each_prop_dai_cpu(props, i, pdai) {
+ ret = asoc_simple_set_clk_rate(pdai, mclk);
+ if (ret < 0)
+ return ret;
+ }
++
++ /* Ensure sysclk is set on all components in case any
++ * (such as platform components) are missed by calls to
++ * snd_soc_dai_set_sysclk.
++ */
++ for_each_rtd_components(rtd, i, component) {
++ ret = snd_soc_component_set_sysclk(component, 0, 0,
++ mclk, SND_SOC_CLOCK_IN);
++ if (ret && ret != -ENOTSUPP)
++ return ret;
++ }
++
+ for_each_rtd_codec_dais(rtd, i, sdai) {
+ ret = snd_soc_dai_set_sysclk(sdai, 0, mclk, SND_SOC_CLOCK_IN);
+ if (ret && ret != -ENOTSUPP)
+ return ret;
+ }
++
+ for_each_rtd_cpu_dais(rtd, i, sdai) {
+ ret = snd_soc_dai_set_sysclk(sdai, 0, mclk, SND_SOC_CLOCK_OUT);
+ if (ret && ret != -ENOTSUPP)
+diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/sof_es8336.c
+index 20d577eaab6d7..28d7670b8f8f8 100644
+--- a/sound/soc/intel/boards/sof_es8336.c
++++ b/sound/soc/intel/boards/sof_es8336.c
+@@ -63,7 +63,12 @@ static const struct acpi_gpio_mapping *gpio_mapping = acpi_es8336_gpios;
+
+ static void log_quirks(struct device *dev)
+ {
+- dev_info(dev, "quirk SSP%ld", SOF_ES8336_SSP_CODEC(quirk));
++ dev_info(dev, "quirk mask %#lx\n", quirk);
++ dev_info(dev, "quirk SSP%ld\n", SOF_ES8336_SSP_CODEC(quirk));
++ if (quirk & SOF_ES8336_ENABLE_DMIC)
++ dev_info(dev, "quirk DMIC enabled\n");
++ if (quirk & SOF_ES8336_TGL_GPIO_QUIRK)
++ dev_info(dev, "quirk TGL GPIO enabled\n");
+ }
+
+ static int sof_es8316_speaker_power_event(struct snd_soc_dapm_widget *w,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index da515eb1ddbe7..1f00679b42409 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -185,7 +185,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
+- DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Convertible"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Spectre x360 Conv"),
+ },
+ .driver_data = (void *)(SOF_SDW_TGL_HDMI |
+ SOF_SDW_PCH_DMIC |
+diff --git a/sound/soc/intel/common/soc-acpi-intel-bxt-match.c b/sound/soc/intel/common/soc-acpi-intel-bxt-match.c
+index 342d340522045..04a92e74d99bc 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-bxt-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-bxt-match.c
+@@ -41,6 +41,11 @@ static struct snd_soc_acpi_mach *apl_quirk(void *arg)
+ return mach;
+ }
+
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++ .num_codecs = 3,
++ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs bxt_codecs = {
+ .num_codecs = 1,
+ .codecs = {"MX98357A"}
+@@ -83,7 +88,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_bxt_machines[] = {
+ .sof_tplg_filename = "sof-apl-tdf8532.tplg",
+ },
+ {
+- .id = "ESSX8336",
++ .comp_ids = &essx_83x6,
+ .drv_name = "sof-essx8336",
+ .sof_fw_filename = "sof-apl.ri",
+ .sof_tplg_filename = "sof-apl-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-cml-match.c b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+index 4eebc79d4b486..14395833d89e8 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-cml-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-cml-match.c
+@@ -9,6 +9,11 @@
+ #include <sound/soc-acpi.h>
+ #include <sound/soc-acpi-intel-match.h>
+
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++ .num_codecs = 3,
++ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs rt1011_spk_codecs = {
+ .num_codecs = 1,
+ .codecs = {"10EC1011"}
+@@ -82,7 +87,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_cml_machines[] = {
+ .sof_tplg_filename = "sof-cml-da7219-max98390.tplg",
+ },
+ {
+- .id = "ESSX8336",
++ .comp_ids = &essx_83x6,
+ .drv_name = "sof-essx8336",
+ .sof_fw_filename = "sof-cml.ri",
+ .sof_tplg_filename = "sof-cml-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-glk-match.c b/sound/soc/intel/common/soc-acpi-intel-glk-match.c
+index 8492b7e2a9450..7aa6a870d5a5c 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-glk-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-glk-match.c
+@@ -9,6 +9,11 @@
+ #include <sound/soc-acpi.h>
+ #include <sound/soc-acpi-intel-match.h>
+
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++ .num_codecs = 3,
++ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs glk_codecs = {
+ .num_codecs = 1,
+ .codecs = {"MX98357A"}
+@@ -58,7 +63,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_glk_machines[] = {
+ .sof_tplg_filename = "sof-glk-cs42l42.tplg",
+ },
+ {
+- .id = "ESSX8336",
++ .comp_ids = &essx_83x6,
+ .drv_name = "sof-essx8336",
+ .sof_fw_filename = "sof-glk.ri",
+ .sof_tplg_filename = "sof-glk-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-jsl-match.c b/sound/soc/intel/common/soc-acpi-intel-jsl-match.c
+index 278ec196da7bf..9d0d0e1437a4b 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-jsl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-jsl-match.c
+@@ -9,6 +9,11 @@
+ #include <sound/soc-acpi.h>
+ #include <sound/soc-acpi-intel-match.h>
+
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++ .num_codecs = 3,
++ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs jsl_7219_98373_codecs = {
+ .num_codecs = 1,
+ .codecs = {"MX98373"}
+@@ -87,7 +92,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_jsl_machines[] = {
+ .sof_tplg_filename = "sof-jsl-cs42l42-mx98360a.tplg",
+ },
+ {
+- .id = "ESSX8336",
++ .comp_ids = &essx_83x6,
+ .drv_name = "sof-essx8336",
+ .sof_fw_filename = "sof-jsl.ri",
+ .sof_tplg_filename = "sof-jsl-es8336.tplg",
+diff --git a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+index da31bb3cca17c..e2658bca69318 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-tgl-match.c
+@@ -10,6 +10,11 @@
+ #include <sound/soc-acpi-intel-match.h>
+ #include "soc-acpi-intel-sdw-mockup-match.h"
+
++static const struct snd_soc_acpi_codecs essx_83x6 = {
++ .num_codecs = 3,
++ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
++};
++
+ static const struct snd_soc_acpi_codecs tgl_codecs = {
+ .num_codecs = 1,
+ .codecs = {"MX98357A"}
+@@ -389,7 +394,7 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_tgl_machines[] = {
+ .sof_tplg_filename = "sof-tgl-rt1011-rt5682.tplg",
+ },
+ {
+- .id = "ESSX8336",
++ .comp_ids = &essx_83x6,
+ .drv_name = "sof-essx8336",
+ .sof_fw_filename = "sof-tgl.ri",
+ .sof_tplg_filename = "sof-tgl-es8336.tplg",
+diff --git a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+index 718505c754188..f090dee0c7a4f 100644
+--- a/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
++++ b/sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
+@@ -695,8 +695,11 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ }
+
+ card = (struct snd_soc_card *)of_device_get_match_data(&pdev->dev);
+- if (!card)
+- return -EINVAL;
++ if (!card) {
++ ret = -EINVAL;
++ goto put_platform_node;
++ }
++
+ card->dev = &pdev->dev;
+
+ hdmi_codec = of_parse_phandle(pdev->dev.of_node,
+@@ -761,12 +764,15 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ if (!mt8183_da7219_max98357_headset_dev.dlc.of_node) {
+ dev_err(&pdev->dev,
+ "Property 'mediatek,headset-codec' missing/invalid\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_hdmi_codec;
+ }
+
+ priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+- if (!priv)
+- return -ENOMEM;
++ if (!priv) {
++ ret = -ENOMEM;
++ goto put_hdmi_codec;
++ }
+
+ snd_soc_card_set_drvdata(card, priv);
+
+@@ -775,13 +781,16 @@ static int mt8183_da7219_max98357_dev_probe(struct platform_device *pdev)
+ ret = PTR_ERR(pinctrl);
+ dev_err(&pdev->dev, "%s failed to select default state %d\n",
+ __func__, ret);
+- return ret;
++ goto put_hdmi_codec;
+ }
+
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+
+- of_node_put(platform_node);
++
++put_hdmi_codec:
+ of_node_put(hdmi_codec);
++put_platform_node:
++ of_node_put(platform_node);
+ return ret;
+ }
+
+diff --git a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+index f7daad1bfe1ed..ee91569c09117 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
++++ b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+@@ -1116,8 +1116,10 @@ static int mt8192_mt6359_dev_probe(struct platform_device *pdev)
+ }
+
+ card = (struct snd_soc_card *)of_device_get_match_data(&pdev->dev);
+- if (!card)
+- return -EINVAL;
++ if (!card) {
++ ret = -EINVAL;
++ goto put_platform_node;
++ }
+ card->dev = &pdev->dev;
+
+ hdmi_codec = of_parse_phandle(pdev->dev.of_node,
+@@ -1159,20 +1161,24 @@ static int mt8192_mt6359_dev_probe(struct platform_device *pdev)
+ }
+
+ priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+- if (!priv)
+- return -ENOMEM;
++ if (!priv) {
++ ret = -ENOMEM;
++ goto put_hdmi_codec;
++ }
+ snd_soc_card_set_drvdata(card, priv);
+
+ ret = mt8192_afe_gpio_init(&pdev->dev);
+ if (ret) {
+ dev_err(&pdev->dev, "init gpio error %d\n", ret);
+- return ret;
++ goto put_hdmi_codec;
+ }
+
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+
+- of_node_put(platform_node);
++put_hdmi_codec:
+ of_node_put(hdmi_codec);
++put_platform_node:
++ of_node_put(platform_node);
+ return ret;
+ }
+
+diff --git a/sound/soc/mediatek/mt8195/mt8195-mt6359-rt1019-rt5682.c b/sound/soc/mediatek/mt8195/mt8195-mt6359-rt1019-rt5682.c
+index 29c2d3407cc7c..e3146311722f8 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-mt6359-rt1019-rt5682.c
++++ b/sound/soc/mediatek/mt8195/mt8195-mt6359-rt1019-rt5682.c
+@@ -1342,7 +1342,8 @@ static int mt8195_mt6359_rt1019_rt5682_dev_probe(struct platform_device *pdev)
+ "mediatek,dai-link");
+ if (ret) {
+ dev_dbg(&pdev->dev, "Parse dai-link fail\n");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto put_node;
+ }
+ } else {
+ if (!sof_on)
+@@ -1398,6 +1399,7 @@ static int mt8195_mt6359_rt1019_rt5682_dev_probe(struct platform_device *pdev)
+
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
+
++put_node:
+ of_node_put(platform_node);
+ of_node_put(adsp_node);
+ of_node_put(dp_node);
+diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
+index 6a2d24d489647..879c1221a809b 100644
+--- a/sound/soc/mxs/mxs-saif.c
++++ b/sound/soc/mxs/mxs-saif.c
+@@ -455,7 +455,10 @@ static int mxs_saif_hw_params(struct snd_pcm_substream *substream,
+ * basic clock which should be fast enough for the internal
+ * logic.
+ */
+- clk_enable(saif->clk);
++ ret = clk_enable(saif->clk);
++ if (ret)
++ return ret;
++
+ ret = clk_set_rate(saif->clk, 24000000);
+ clk_disable(saif->clk);
+ if (ret)
+diff --git a/sound/soc/mxs/mxs-sgtl5000.c b/sound/soc/mxs/mxs-sgtl5000.c
+index 2412dc7e65d44..746f409386751 100644
+--- a/sound/soc/mxs/mxs-sgtl5000.c
++++ b/sound/soc/mxs/mxs-sgtl5000.c
+@@ -118,6 +118,9 @@ static int mxs_sgtl5000_probe(struct platform_device *pdev)
+ codec_np = of_parse_phandle(np, "audio-codec", 0);
+ if (!saif_np[0] || !saif_np[1] || !codec_np) {
+ dev_err(&pdev->dev, "phandle missing or invalid\n");
++ of_node_put(codec_np);
++ of_node_put(saif_np[0]);
++ of_node_put(saif_np[1]);
+ return -EINVAL;
+ }
+
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index a6d7656c206e5..4ce5d25793875 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -716,19 +716,23 @@ static int rockchip_i2s_probe(struct platform_device *pdev)
+ i2s->mclk = devm_clk_get(&pdev->dev, "i2s_clk");
+ if (IS_ERR(i2s->mclk)) {
+ dev_err(&pdev->dev, "Can't retrieve i2s master clock\n");
+- return PTR_ERR(i2s->mclk);
++ ret = PTR_ERR(i2s->mclk);
++ goto err_clk;
+ }
+
+ regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+- if (IS_ERR(regs))
+- return PTR_ERR(regs);
++ if (IS_ERR(regs)) {
++ ret = PTR_ERR(regs);
++ goto err_clk;
++ }
+
+ i2s->regmap = devm_regmap_init_mmio(&pdev->dev, regs,
+ &rockchip_i2s_regmap_config);
+ if (IS_ERR(i2s->regmap)) {
+ dev_err(&pdev->dev,
+ "Failed to initialise managed register map\n");
+- return PTR_ERR(i2s->regmap);
++ ret = PTR_ERR(i2s->regmap);
++ goto err_clk;
+ }
+
+ i2s->bclk_ratio = 64;
+@@ -768,7 +772,8 @@ err_suspend:
+ i2s_runtime_suspend(&pdev->dev);
+ err_pm_disable:
+ pm_runtime_disable(&pdev->dev);
+-
++err_clk:
++ clk_disable_unprepare(i2s->hclk);
+ return ret;
+ }
+
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index 5f9cb5c4c7f09..98700e75b82a1 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -469,14 +469,14 @@ static int rockchip_i2s_tdm_set_fmt(struct snd_soc_dai *cpu_dai,
+ txcr_val = I2S_TXCR_IBM_NORMAL;
+ rxcr_val = I2S_RXCR_IBM_NORMAL;
+ break;
+- case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */
+- txcr_val = I2S_TXCR_TFS_PCM;
+- rxcr_val = I2S_RXCR_TFS_PCM;
+- break;
+- case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */
++ case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 mode */
+ txcr_val = I2S_TXCR_TFS_PCM | I2S_TXCR_PBM_MODE(1);
+ rxcr_val = I2S_RXCR_TFS_PCM | I2S_RXCR_PBM_MODE(1);
+ break;
++ case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */
++ txcr_val = I2S_TXCR_TFS_PCM;
++ rxcr_val = I2S_RXCR_TFS_PCM;
++ break;
+ default:
+ ret = -EINVAL;
+ goto err_pm_put;
+@@ -1738,7 +1738,7 @@ static int __maybe_unused rockchip_i2s_tdm_resume(struct device *dev)
+ struct rk_i2s_tdm_dev *i2s_tdm = dev_get_drvdata(dev);
+ int ret;
+
+- ret = pm_runtime_get_sync(dev);
++ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0)
+ return ret;
+ ret = regcache_sync(i2s_tdm->regmap);
+diff --git a/sound/soc/sh/fsi.c b/sound/soc/sh/fsi.c
+index cdf3b7f69ba70..e9a1eb6bdf66a 100644
+--- a/sound/soc/sh/fsi.c
++++ b/sound/soc/sh/fsi.c
+@@ -816,14 +816,27 @@ static int fsi_clk_enable(struct device *dev,
+ return ret;
+ }
+
+- clk_enable(clock->xck);
+- clk_enable(clock->ick);
+- clk_enable(clock->div);
++ ret = clk_enable(clock->xck);
++ if (ret)
++ goto err;
++ ret = clk_enable(clock->ick);
++ if (ret)
++ goto disable_xck;
++ ret = clk_enable(clock->div);
++ if (ret)
++ goto disable_ick;
+
+ clock->count++;
+ }
+
+ return ret;
++
++disable_ick:
++ clk_disable(clock->ick);
++disable_xck:
++ clk_disable(clock->xck);
++err:
++ return ret;
+ }
+
+ static int fsi_clk_disable(struct device *dev,
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index e8d98b362f9db..7379b1489e358 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -411,54 +411,56 @@ static int rz_ssi_pio_recv(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
+ {
+ struct snd_pcm_substream *substream = strm->substream;
+ struct snd_pcm_runtime *runtime;
++ bool done = false;
+ u16 *buf;
+ int fifo_samples;
+ int frames_left;
+- int samples = 0;
++ int samples;
+ int i;
+
+ if (!rz_ssi_stream_is_valid(ssi, strm))
+ return -EINVAL;
+
+ runtime = substream->runtime;
+- /* frames left in this period */
+- frames_left = runtime->period_size - (strm->buffer_pos %
+- runtime->period_size);
+- if (frames_left == 0)
+- frames_left = runtime->period_size;
+
+- /* Samples in RX FIFO */
+- fifo_samples = (rz_ssi_reg_readl(ssi, SSIFSR) >>
+- SSIFSR_RDC_SHIFT) & SSIFSR_RDC_MASK;
++ while (!done) {
++ /* frames left in this period */
++ frames_left = runtime->period_size -
++ (strm->buffer_pos % runtime->period_size);
++ if (!frames_left)
++ frames_left = runtime->period_size;
++
++ /* Samples in RX FIFO */
++ fifo_samples = (rz_ssi_reg_readl(ssi, SSIFSR) >>
++ SSIFSR_RDC_SHIFT) & SSIFSR_RDC_MASK;
++
++ /* Only read full frames at a time */
++ samples = 0;
++ while (frames_left && (fifo_samples >= runtime->channels)) {
++ samples += runtime->channels;
++ fifo_samples -= runtime->channels;
++ frames_left--;
++ }
+
+- /* Only read full frames at a time */
+- while (frames_left && (fifo_samples >= runtime->channels)) {
+- samples += runtime->channels;
+- fifo_samples -= runtime->channels;
+- frames_left--;
+- }
++ /* not enough samples yet */
++ if (!samples)
++ break;
+
+- /* not enough samples yet */
+- if (samples == 0)
+- return 0;
++ /* calculate new buffer index */
++ buf = (u16 *)(runtime->dma_area);
++ buf += strm->buffer_pos * runtime->channels;
+
+- /* calculate new buffer index */
+- buf = (u16 *)(runtime->dma_area);
+- buf += strm->buffer_pos * runtime->channels;
+-
+- /* Note, only supports 16-bit samples */
+- for (i = 0; i < samples; i++)
+- *buf++ = (u16)(rz_ssi_reg_readl(ssi, SSIFRDR) >> 16);
++ /* Note, only supports 16-bit samples */
++ for (i = 0; i < samples; i++)
++ *buf++ = (u16)(rz_ssi_reg_readl(ssi, SSIFRDR) >> 16);
+
+- rz_ssi_reg_mask_setl(ssi, SSIFSR, SSIFSR_RDF, 0);
+- rz_ssi_pointer_update(strm, samples / runtime->channels);
++ rz_ssi_reg_mask_setl(ssi, SSIFSR, SSIFSR_RDF, 0);
++ rz_ssi_pointer_update(strm, samples / runtime->channels);
+
+- /*
+- * If we finished this period, but there are more samples in
+- * the RX FIFO, call this function again
+- */
+- if (frames_left == 0 && fifo_samples >= runtime->channels)
+- rz_ssi_pio_recv(ssi, strm);
++ /* check if there are no more samples in the RX FIFO */
++ if (!(!frames_left && fifo_samples >= runtime->channels))
++ done = true;
++ }
+
+ return 0;
+ }
+@@ -975,6 +977,9 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ ssi->playback.priv = ssi;
+ ssi->capture.priv = ssi;
+
++ spin_lock_init(&ssi->lock);
++ dev_set_drvdata(&pdev->dev, ssi);
++
+ /* Error Interrupt */
+ ssi->irq_int = platform_get_irq_byname(pdev, "int_req");
+ if (ssi->irq_int < 0)
+@@ -1027,8 +1032,6 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ return dev_err_probe(ssi->dev, ret, "pm_runtime_resume_and_get failed\n");
+ }
+
+- spin_lock_init(&ssi->lock);
+- dev_set_drvdata(&pdev->dev, ssi);
+ ret = devm_snd_soc_register_component(&pdev->dev, &rz_ssi_soc_component,
+ rz_ssi_soc_dai,
+ ARRAY_SIZE(rz_ssi_soc_dai));
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 8e2494a9f3a7f..e9dd25894dc0f 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -567,6 +567,11 @@ int snd_soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
+ return -EINVAL;
+ }
+
++ if (!codec_dai) {
++ dev_err(rtd->card->dev, "Missing codec\n");
++ return -EINVAL;
++ }
++
+ /* check client and interface hw capabilities */
+ if (snd_soc_dai_stream_valid(codec_dai, SNDRV_PCM_STREAM_PLAYBACK) &&
+ snd_soc_dai_stream_valid(cpu_dai, SNDRV_PCM_STREAM_PLAYBACK))
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 434e61b46983c..a088bc9f7dd7c 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -3233,7 +3233,7 @@ int snd_soc_get_dai_name(const struct of_phandle_args *args,
+ for_each_component(pos) {
+ struct device_node *component_of_node = soc_component_to_node(pos);
+
+- if (component_of_node != args->np)
++ if (component_of_node != args->np || !pos->num_dai)
+ continue;
+
+ ret = snd_soc_component_of_xlate_dai_name(pos, args, dai_name);
+diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c
+index c54c8ca8d7156..359987bf76d1b 100644
+--- a/sound/soc/soc-generic-dmaengine-pcm.c
++++ b/sound/soc/soc-generic-dmaengine-pcm.c
+@@ -86,10 +86,10 @@ static int dmaengine_pcm_hw_params(struct snd_soc_component *component,
+
+ memset(&slave_config, 0, sizeof(slave_config));
+
+- if (!pcm->config)
+- prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
+- else
++ if (pcm->config && pcm->config->prepare_slave_config)
+ prepare_slave_config = pcm->config->prepare_slave_config;
++ else
++ prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
+
+ if (prepare_slave_config) {
+ int ret = prepare_slave_config(substream, params, &slave_config);
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 2630df024dff3..cb24805668bd8 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -512,7 +512,8 @@ static int soc_tplg_kcontrol_bind_io(struct snd_soc_tplg_ctl_hdr *hdr,
+
+ if (le32_to_cpu(hdr->ops.info) == SND_SOC_TPLG_CTL_BYTES
+ && k->iface & SNDRV_CTL_ELEM_IFACE_MIXER
+- && k->access & SNDRV_CTL_ELEM_ACCESS_TLV_READWRITE
++ && (k->access & SNDRV_CTL_ELEM_ACCESS_TLV_READ
++ || k->access & SNDRV_CTL_ELEM_ACCESS_TLV_WRITE)
+ && k->access & SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK) {
+ struct soc_bytes_ext *sbe;
+ struct snd_soc_tplg_bytes_control *be;
+diff --git a/sound/soc/sof/debug.c b/sound/soc/sof/debug.c
+index 6d6757075f7c3..e755c0c5f86c0 100644
+--- a/sound/soc/sof/debug.c
++++ b/sound/soc/sof/debug.c
+@@ -960,7 +960,7 @@ static void snd_sof_dbg_print_fw_state(struct snd_sof_dev *sdev, const char *lev
+
+ void snd_sof_dsp_dbg_dump(struct snd_sof_dev *sdev, const char *msg, u32 flags)
+ {
+- char *level = flags & SOF_DBG_DUMP_OPTIONAL ? KERN_DEBUG : KERN_ERR;
++ char *level = (flags & SOF_DBG_DUMP_OPTIONAL) ? KERN_DEBUG : KERN_ERR;
+ bool print_all = sof_debug_check_flag(SOF_DBG_PRINT_ALL_DUMPS);
+
+ if (flags & SOF_DBG_DUMP_OPTIONAL && !print_all)
+diff --git a/sound/soc/sof/imx/imx8m.c b/sound/soc/sof/imx/imx8m.c
+index 788e77bcb6038..60251486b24b2 100644
+--- a/sound/soc/sof/imx/imx8m.c
++++ b/sound/soc/sof/imx/imx8m.c
+@@ -224,6 +224,7 @@ static int imx8m_probe(struct snd_sof_dev *sdev)
+ }
+
+ ret = of_address_to_resource(res_node, 0, &res);
++ of_node_put(res_node);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to get reserved region address\n");
+ goto exit_pdev_unregister;
+diff --git a/sound/soc/sof/intel/Kconfig b/sound/soc/sof/intel/Kconfig
+index 88b6176af021c..d83e1a36707af 100644
+--- a/sound/soc/sof/intel/Kconfig
++++ b/sound/soc/sof/intel/Kconfig
+@@ -84,6 +84,7 @@ if SND_SOC_SOF_PCI
+ config SND_SOC_SOF_MERRIFIELD
+ tristate "SOF support for Tangier/Merrifield"
+ default SND_SOC_SOF_PCI
++ select SND_SOC_SOF_PCI_DEV
+ select SND_SOC_SOF_INTEL_ATOM_HIFI_EP
+ help
+ This adds support for Sound Open Firmware for Intel(R) platforms
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index cd12589355eff..28a54145c1506 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -59,6 +59,8 @@ static struct hdac_ext_stream *
+ {
+ struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream);
+ struct sof_intel_hda_stream *hda_stream;
++ const struct sof_intel_dsp_desc *chip;
++ struct snd_sof_dev *sdev;
+ struct hdac_ext_stream *res = NULL;
+ struct hdac_stream *stream = NULL;
+
+@@ -77,9 +79,20 @@ static struct hdac_ext_stream *
+ continue;
+
+ hda_stream = hstream_to_sof_hda_stream(hstream);
++ sdev = hda_stream->sdev;
++ chip = get_chip_info(sdev->pdata);
+
+ /* check if link is available */
+ if (!hstream->link_locked) {
++ /*
++ * choose the first available link for platforms that do not have the
++ * PROCEN_FMT_QUIRK set.
++ */
++ if (!(chip->quirks & SOF_INTEL_PROCEN_FMT_QUIRK)) {
++ res = hstream;
++ break;
++ }
++
+ if (stream->opened) {
+ /*
+ * check if the stream tag matches the stream
+diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c
+index 33306d2023a78..9bbfdab8009de 100644
+--- a/sound/soc/sof/intel/hda-loader.c
++++ b/sound/soc/sof/intel/hda-loader.c
+@@ -47,7 +47,7 @@ static struct hdac_ext_stream *cl_stream_prepare(struct snd_sof_dev *sdev, unsig
+ ret = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG, &pci->dev, size, dmab);
+ if (ret < 0) {
+ dev_err(sdev->dev, "error: memory alloc failed: %d\n", ret);
+- goto error;
++ goto out_put;
+ }
+
+ hstream->period_bytes = 0;/* initialize period_bytes */
+@@ -58,22 +58,23 @@ static struct hdac_ext_stream *cl_stream_prepare(struct snd_sof_dev *sdev, unsig
+ ret = hda_dsp_iccmax_stream_hw_params(sdev, dsp_stream, dmab, NULL);
+ if (ret < 0) {
+ dev_err(sdev->dev, "error: iccmax stream prepare failed: %d\n", ret);
+- goto error;
++ goto out_free;
+ }
+ } else {
+ ret = hda_dsp_stream_hw_params(sdev, dsp_stream, dmab, NULL);
+ if (ret < 0) {
+ dev_err(sdev->dev, "error: hdac prepare failed: %d\n", ret);
+- goto error;
++ goto out_free;
+ }
+ hda_dsp_stream_spib_config(sdev, dsp_stream, HDA_DSP_SPIB_ENABLE, size);
+ }
+
+ return dsp_stream;
+
+-error:
+- hda_dsp_stream_put(sdev, direction, hstream->stream_tag);
++out_free:
+ snd_dma_free_pages(dmab);
++out_put:
++ hda_dsp_stream_put(sdev, direction, hstream->stream_tag);
+ return ERR_PTR(ret);
+ }
+
+diff --git a/sound/soc/sof/intel/hda-pcm.c b/sound/soc/sof/intel/hda-pcm.c
+index d78aa5d8552d5..8aeb00eacd219 100644
+--- a/sound/soc/sof/intel/hda-pcm.c
++++ b/sound/soc/sof/intel/hda-pcm.c
+@@ -315,6 +315,7 @@ int hda_dsp_pcm_open(struct snd_sof_dev *sdev,
+ runtime->hw.info &= ~SNDRV_PCM_INFO_PAUSE;
+
+ if (hda_always_enable_dmi_l1 ||
++ direction == SNDRV_PCM_STREAM_PLAYBACK ||
+ spcm->stream[substream->stream].d0i3_compatible)
+ flags |= SOF_HDA_STREAM_DMI_L1_COMPATIBLE;
+
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 1385695d77458..028751549f6da 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -432,11 +432,9 @@ static char *hda_model;
+ module_param(hda_model, charp, 0444);
+ MODULE_PARM_DESC(hda_model, "Use the given HDA board model.");
+
+-#if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) || IS_ENABLED(CONFIG_SND_SOC_SOF_INTEL_SOUNDWIRE)
+-static int hda_dmic_num = -1;
+-module_param_named(dmic_num, hda_dmic_num, int, 0444);
++static int dmic_num_override = -1;
++module_param_named(dmic_num, dmic_num_override, int, 0444);
+ MODULE_PARM_DESC(dmic_num, "SOF HDA DMIC number");
+-#endif
+
+ #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA)
+ static bool hda_codec_use_common_hdmi = IS_ENABLED(CONFIG_SND_HDA_CODEC_HDMI);
+@@ -644,24 +642,35 @@ static int hda_init(struct snd_sof_dev *sdev)
+ return ret;
+ }
+
+-#if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) || IS_ENABLED(CONFIG_SND_SOC_SOF_INTEL_SOUNDWIRE)
+-
+-static int check_nhlt_dmic(struct snd_sof_dev *sdev)
++static int check_dmic_num(struct snd_sof_dev *sdev)
+ {
+ struct nhlt_acpi_table *nhlt;
+- int dmic_num;
++ int dmic_num = 0;
+
+ nhlt = intel_nhlt_init(sdev->dev);
+ if (nhlt) {
+ dmic_num = intel_nhlt_get_dmic_geo(sdev->dev, nhlt);
+ intel_nhlt_free(nhlt);
+- if (dmic_num >= 1 && dmic_num <= 4)
+- return dmic_num;
+ }
+
+- return 0;
++ /* allow for module parameter override */
++ if (dmic_num_override != -1) {
++ dev_dbg(sdev->dev,
++ "overriding DMICs detected in NHLT tables %d by kernel param %d\n",
++ dmic_num, dmic_num_override);
++ dmic_num = dmic_num_override;
++ }
++
++ if (dmic_num < 0 || dmic_num > 4) {
++ dev_dbg(sdev->dev, "invalid dmic_number %d\n", dmic_num);
++ dmic_num = 0;
++ }
++
++ return dmic_num;
+ }
+
++#if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) || IS_ENABLED(CONFIG_SND_SOC_SOF_INTEL_SOUNDWIRE)
++
+ static const char *fixup_tplg_name(struct snd_sof_dev *sdev,
+ const char *sof_tplg_filename,
+ const char *idisp_str,
+@@ -697,16 +706,8 @@ static int dmic_topology_fixup(struct snd_sof_dev *sdev,
+ const char *dmic_str;
+ int dmic_num;
+
+- /* first check NHLT for DMICs */
+- dmic_num = check_nhlt_dmic(sdev);
+-
+- /* allow for module parameter override */
+- if (hda_dmic_num != -1) {
+- dev_dbg(sdev->dev,
+- "overriding DMICs detected in NHLT tables %d by kernel param %d\n",
+- dmic_num, hda_dmic_num);
+- dmic_num = hda_dmic_num;
+- }
++ /* first check for DMICs (using NHLT or module parameter) */
++ dmic_num = check_dmic_num(sdev);
+
+ switch (dmic_num) {
+ case 1:
+@@ -1188,7 +1189,7 @@ static bool link_slaves_found(struct snd_sof_dev *sdev,
+ struct hdac_bus *bus = sof_to_bus(sdev);
+ struct sdw_intel_slave_id *ids = sdw->ids;
+ int num_slaves = sdw->num_slaves;
+- unsigned int part_id, link_id, unique_id, mfg_id;
++ unsigned int part_id, link_id, unique_id, mfg_id, version;
+ int i, j, k;
+
+ for (i = 0; i < link->num_adr; i++) {
+@@ -1198,12 +1199,14 @@ static bool link_slaves_found(struct snd_sof_dev *sdev,
+ mfg_id = SDW_MFG_ID(adr);
+ part_id = SDW_PART_ID(adr);
+ link_id = SDW_DISCO_LINK_ID(adr);
++ version = SDW_VERSION(adr);
+
+ for (j = 0; j < num_slaves; j++) {
+ /* find out how many identical parts were reported on that link */
+ if (ids[j].link_id == link_id &&
+ ids[j].id.part_id == part_id &&
+- ids[j].id.mfg_id == mfg_id)
++ ids[j].id.mfg_id == mfg_id &&
++ ids[j].id.sdw_version == version)
+ reported_part_count++;
+ }
+
+@@ -1212,21 +1215,24 @@ static bool link_slaves_found(struct snd_sof_dev *sdev,
+
+ if (ids[j].link_id != link_id ||
+ ids[j].id.part_id != part_id ||
+- ids[j].id.mfg_id != mfg_id)
++ ids[j].id.mfg_id != mfg_id ||
++ ids[j].id.sdw_version != version)
+ continue;
+
+ /* find out how many identical parts are expected */
+ for (k = 0; k < link->num_adr; k++) {
+ u64 adr2 = link->adr_d[k].adr;
+- unsigned int part_id2, link_id2, mfg_id2;
++ unsigned int part_id2, link_id2, mfg_id2, version2;
+
+ mfg_id2 = SDW_MFG_ID(adr2);
+ part_id2 = SDW_PART_ID(adr2);
+ link_id2 = SDW_DISCO_LINK_ID(adr2);
++ version2 = SDW_VERSION(adr2);
+
+ if (link_id2 == link_id &&
+ part_id2 == part_id &&
+- mfg_id2 == mfg_id)
++ mfg_id2 == mfg_id &&
++ version2 == version)
+ expected_part_count++;
+ }
+
+@@ -1387,6 +1393,9 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ if (!sof_pdata->tplg_filename)
+ sof_pdata->tplg_filename = mach->sof_tplg_filename;
+
++ /* report to machine driver if any DMICs are found */
++ mach->mach_params.dmic_num = check_dmic_num(sdev);
++
+ if (mach->link_mask) {
+ mach->mach_params.links = mach->links;
+ mach->mach_params.link_mask = mach->link_mask;
+diff --git a/sound/soc/ti/davinci-i2s.c b/sound/soc/ti/davinci-i2s.c
+index 6dca51862dd76..0363a088d2e00 100644
+--- a/sound/soc/ti/davinci-i2s.c
++++ b/sound/soc/ti/davinci-i2s.c
+@@ -708,7 +708,9 @@ static int davinci_i2s_probe(struct platform_device *pdev)
+ dev->clk = clk_get(&pdev->dev, NULL);
+ if (IS_ERR(dev->clk))
+ return -ENODEV;
+- clk_enable(dev->clk);
++ ret = clk_enable(dev->clk);
++ if (ret)
++ goto err_put_clk;
+
+ dev->dev = &pdev->dev;
+ dev_set_drvdata(&pdev->dev, dev);
+@@ -730,6 +732,7 @@ err_unregister_component:
+ snd_soc_unregister_component(&pdev->dev);
+ err_release_clk:
+ clk_disable(dev->clk);
++err_put_clk:
+ clk_put(dev->clk);
+ return ret;
+ }
+diff --git a/sound/soc/xilinx/xlnx_formatter_pcm.c b/sound/soc/xilinx/xlnx_formatter_pcm.c
+index ce19a6058b279..5c4158069a5a8 100644
+--- a/sound/soc/xilinx/xlnx_formatter_pcm.c
++++ b/sound/soc/xilinx/xlnx_formatter_pcm.c
+@@ -84,6 +84,7 @@ struct xlnx_pcm_drv_data {
+ struct snd_pcm_substream *play_stream;
+ struct snd_pcm_substream *capture_stream;
+ struct clk *axi_clk;
++ unsigned int sysclk;
+ };
+
+ /*
+@@ -314,6 +315,15 @@ static irqreturn_t xlnx_s2mm_irq_handler(int irq, void *arg)
+ return IRQ_NONE;
+ }
+
++static int xlnx_formatter_set_sysclk(struct snd_soc_component *component,
++ int clk_id, int source, unsigned int freq, int dir)
++{
++ struct xlnx_pcm_drv_data *adata = dev_get_drvdata(component->dev);
++
++ adata->sysclk = freq;
++ return 0;
++}
++
+ static int xlnx_formatter_pcm_open(struct snd_soc_component *component,
+ struct snd_pcm_substream *substream)
+ {
+@@ -450,11 +460,25 @@ static int xlnx_formatter_pcm_hw_params(struct snd_soc_component *component,
+ u64 size;
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct xlnx_pcm_stream_param *stream_data = runtime->private_data;
++ struct xlnx_pcm_drv_data *adata = dev_get_drvdata(component->dev);
+
+ active_ch = params_channels(params);
+ if (active_ch > stream_data->ch_limit)
+ return -EINVAL;
+
++ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
++ adata->sysclk) {
++ unsigned int mclk_fs = adata->sysclk / params_rate(params);
++
++ if (adata->sysclk % params_rate(params) != 0) {
++ dev_warn(component->dev, "sysclk %u not divisible by rate %u\n",
++ adata->sysclk, params_rate(params));
++ return -EINVAL;
++ }
++
++ writel(mclk_fs, stream_data->mmio + XLNX_AUD_FS_MULTIPLIER);
++ }
++
+ if (substream->stream == SNDRV_PCM_STREAM_CAPTURE &&
+ stream_data->xfer_mode == AES_TO_PCM) {
+ val = readl(stream_data->mmio + XLNX_AUD_STS);
+@@ -552,6 +576,7 @@ static int xlnx_formatter_pcm_new(struct snd_soc_component *component,
+
+ static const struct snd_soc_component_driver xlnx_asoc_component = {
+ .name = DRV_NAME,
++ .set_sysclk = xlnx_formatter_set_sysclk,
+ .open = xlnx_formatter_pcm_open,
+ .close = xlnx_formatter_pcm_close,
+ .hw_params = xlnx_formatter_pcm_hw_params,
+diff --git a/sound/spi/at73c213.c b/sound/spi/at73c213.c
+index 76c0e37a838cf..8a2da6b1012eb 100644
+--- a/sound/spi/at73c213.c
++++ b/sound/spi/at73c213.c
+@@ -218,7 +218,9 @@ static int snd_at73c213_pcm_open(struct snd_pcm_substream *substream)
+ runtime->hw = snd_at73c213_playback_hw;
+ chip->substream = substream;
+
+- clk_enable(chip->ssc->clk);
++ err = clk_enable(chip->ssc->clk);
++ if (err)
++ return err;
+
+ return 0;
+ }
+@@ -776,7 +778,9 @@ static int snd_at73c213_chip_init(struct snd_at73c213 *chip)
+ goto out;
+
+ /* Enable DAC master clock. */
+- clk_enable(chip->board->dac_clk);
++ retval = clk_enable(chip->board->dac_clk);
++ if (retval)
++ goto out;
+
+ /* Initialize at73c213 on SPI bus. */
+ retval = snd_at73c213_write_reg(chip, DAC_RST, 0x04);
+@@ -889,7 +893,9 @@ static int snd_at73c213_dev_init(struct snd_card *card,
+ chip->card = card;
+ chip->irq = -1;
+
+- clk_enable(chip->ssc->clk);
++ retval = clk_enable(chip->ssc->clk);
++ if (retval)
++ return retval;
+
+ retval = request_irq(irq, snd_at73c213_interrupt, 0, "at73c213", chip);
+ if (retval) {
+@@ -1008,7 +1014,9 @@ static int snd_at73c213_remove(struct spi_device *spi)
+ int retval;
+
+ /* Stop playback. */
+- clk_enable(chip->ssc->clk);
++ retval = clk_enable(chip->ssc->clk);
++ if (retval)
++ goto out;
+ ssc_writel(chip->ssc->regs, CR, SSC_BIT(CR_TXDIS));
+ clk_disable(chip->ssc->clk);
+
+@@ -1088,9 +1096,16 @@ static int snd_at73c213_resume(struct device *dev)
+ {
+ struct snd_card *card = dev_get_drvdata(dev);
+ struct snd_at73c213 *chip = card->private_data;
++ int retval;
+
+- clk_enable(chip->board->dac_clk);
+- clk_enable(chip->ssc->clk);
++ retval = clk_enable(chip->board->dac_clk);
++ if (retval)
++ return retval;
++ retval = clk_enable(chip->ssc->clk);
++ if (retval) {
++ clk_disable(chip->board->dac_clk);
++ return retval;
++ }
+ ssc_writel(chip->ssc->regs, CR, SSC_BIT(CR_TXEN));
+
+ return 0;
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index 59833125ac0a1..a2c665beda87c 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -902,7 +902,7 @@ static int do_show(int argc, char **argv)
+ equal_fn_for_key_as_id, NULL);
+ btf_map_table = hashmap__new(hash_fn_for_key_as_id,
+ equal_fn_for_key_as_id, NULL);
+- if (!btf_prog_table || !btf_map_table) {
++ if (IS_ERR(btf_prog_table) || IS_ERR(btf_map_table)) {
+ hashmap__free(btf_prog_table);
+ hashmap__free(btf_map_table);
+ if (fd >= 0)
+diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
+index b4695df2ea3d7..a7387c265e3cf 100644
+--- a/tools/bpf/bpftool/gen.c
++++ b/tools/bpf/bpftool/gen.c
+@@ -927,7 +927,6 @@ static int do_skeleton(int argc, char **argv)
+ s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\
+ if (!s) \n\
+ goto err; \n\
+- obj->skeleton = s; \n\
+ \n\
+ s->sz = sizeof(*s); \n\
+ s->name = \"%1$s\"; \n\
+@@ -1000,6 +999,7 @@ static int do_skeleton(int argc, char **argv)
+ \n\
+ s->data = (void *)%2$s__elf_bytes(&s->data_sz); \n\
+ \n\
++ obj->skeleton = s; \n\
+ return 0; \n\
+ err: \n\
+ bpf_object__destroy_skeleton(s); \n\
+diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
+index 2c258db0d3521..97dec81950e5d 100644
+--- a/tools/bpf/bpftool/link.c
++++ b/tools/bpf/bpftool/link.c
+@@ -2,6 +2,7 @@
+ /* Copyright (C) 2020 Facebook */
+
+ #include <errno.h>
++#include <linux/err.h>
+ #include <net/if.h>
+ #include <stdio.h>
+ #include <unistd.h>
+@@ -306,7 +307,7 @@ static int do_show(int argc, char **argv)
+ if (show_pinned) {
+ link_table = hashmap__new(hash_fn_for_key_as_id,
+ equal_fn_for_key_as_id, NULL);
+- if (!link_table) {
++ if (IS_ERR(link_table)) {
+ p_err("failed to create hashmap for pinned paths");
+ return -1;
+ }
+diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
+index cc530a2298124..0bba33729c7f0 100644
+--- a/tools/bpf/bpftool/map.c
++++ b/tools/bpf/bpftool/map.c
+@@ -620,17 +620,14 @@ static int show_map_close_plain(int fd, struct bpf_map_info *info)
+ u32_as_hash_field(info->id))
+ printf("\n\tpinned %s", (char *)entry->value);
+ }
+- printf("\n");
+
+ if (frozen_str) {
+ frozen = atoi(frozen_str);
+ free(frozen_str);
+ }
+
+- if (!info->btf_id && !frozen)
+- return 0;
+-
+- printf("\t");
++ if (info->btf_id || frozen)
++ printf("\n\t");
+
+ if (info->btf_id)
+ printf("btf_id %d", info->btf_id);
+@@ -699,7 +696,7 @@ static int do_show(int argc, char **argv)
+ if (show_pinned) {
+ map_table = hashmap__new(hash_fn_for_key_as_id,
+ equal_fn_for_key_as_id, NULL);
+- if (!map_table) {
++ if (IS_ERR(map_table)) {
+ p_err("failed to create hashmap for pinned paths");
+ return -1;
+ }
+@@ -805,29 +802,30 @@ static int maps_have_btf(int *fds, int nb_fds)
+
+ static struct btf *btf_vmlinux;
+
+-static struct btf *get_map_kv_btf(const struct bpf_map_info *info)
++static int get_map_kv_btf(const struct bpf_map_info *info, struct btf **btf)
+ {
+- struct btf *btf = NULL;
++ int err = 0;
+
+ if (info->btf_vmlinux_value_type_id) {
+ if (!btf_vmlinux) {
+ btf_vmlinux = libbpf_find_kernel_btf();
+- if (libbpf_get_error(btf_vmlinux))
++ err = libbpf_get_error(btf_vmlinux);
++ if (err) {
+ p_err("failed to get kernel btf");
++ return err;
++ }
+ }
+- return btf_vmlinux;
++ *btf = btf_vmlinux;
+ } else if (info->btf_value_type_id) {
+- int err;
+-
+- btf = btf__load_from_kernel_by_id(info->btf_id);
+- err = libbpf_get_error(btf);
+- if (err) {
++ *btf = btf__load_from_kernel_by_id(info->btf_id);
++ err = libbpf_get_error(*btf);
++ if (err)
+ p_err("failed to get btf");
+- btf = ERR_PTR(err);
+- }
++ } else {
++ *btf = NULL;
+ }
+
+- return btf;
++ return err;
+ }
+
+ static void free_map_kv_btf(struct btf *btf)
+@@ -862,8 +860,7 @@ map_dump(int fd, struct bpf_map_info *info, json_writer_t *wtr,
+ prev_key = NULL;
+
+ if (wtr) {
+- btf = get_map_kv_btf(info);
+- err = libbpf_get_error(btf);
++ err = get_map_kv_btf(info, &btf);
+ if (err) {
+ goto exit_free;
+ }
+@@ -1054,11 +1051,8 @@ static void print_key_value(struct bpf_map_info *info, void *key,
+ json_writer_t *btf_wtr;
+ struct btf *btf;
+
+- btf = btf__load_from_kernel_by_id(info->btf_id);
+- if (libbpf_get_error(btf)) {
+- p_err("failed to get btf");
++ if (get_map_kv_btf(info, &btf))
+ return;
+- }
+
+ if (json_output) {
+ print_entry_json(info, key, value, btf);
+diff --git a/tools/bpf/bpftool/pids.c b/tools/bpf/bpftool/pids.c
+index 56b598eee043a..7c384d10e95f8 100644
+--- a/tools/bpf/bpftool/pids.c
++++ b/tools/bpf/bpftool/pids.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ /* Copyright (C) 2020 Facebook */
+ #include <errno.h>
++#include <linux/err.h>
+ #include <stdbool.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+@@ -101,7 +102,7 @@ int build_obj_refs_table(struct hashmap **map, enum bpf_obj_type type)
+ libbpf_print_fn_t default_print;
+
+ *map = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL);
+- if (!*map) {
++ if (IS_ERR(*map)) {
+ p_err("failed to create hashmap for PID references");
+ return -1;
+ }
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 2a21d50516bc4..33ca834d5f510 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -641,7 +641,7 @@ static int do_show(int argc, char **argv)
+ if (show_pinned) {
+ prog_table = hashmap__new(hash_fn_for_key_as_id,
+ equal_fn_for_key_as_id, NULL);
+- if (!prog_table) {
++ if (IS_ERR(prog_table)) {
+ p_err("failed to create hashmap for pinned paths");
+ return -1;
+ }
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index b0383d371b9af..49340175feb94 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -2286,8 +2286,8 @@ union bpf_attr {
+ * Return
+ * The return value depends on the result of the test, and can be:
+ *
+- * * 0, if current task belongs to the cgroup2.
+- * * 1, if current task does not belong to the cgroup2.
++ * * 1, if current task belongs to the cgroup2.
++ * * 0, if current task does not belong to the cgroup2.
+ * * A negative error code, if an error occurred.
+ *
+ * long bpf_skb_change_tail(struct sk_buff *skb, u32 len, u64 flags)
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index 90f56b0f585f0..e1b5056068828 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -206,10 +206,10 @@
+ #define __PT_PARM4_REG a3
+ #define __PT_PARM5_REG a4
+ #define __PT_RET_REG ra
+-#define __PT_FP_REG fp
++#define __PT_FP_REG s0
+ #define __PT_RC_REG a5
+ #define __PT_SP_REG sp
+-#define __PT_IP_REG epc
++#define __PT_IP_REG pc
+
+ #endif
+
+diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
+index 061839f045255..51862fdee850b 100644
+--- a/tools/lib/bpf/btf.h
++++ b/tools/lib/bpf/btf.h
+@@ -375,8 +375,28 @@ btf_dump__dump_type_data(struct btf_dump *d, __u32 id,
+ const struct btf_dump_type_data_opts *opts);
+
+ /*
+- * A set of helpers for easier BTF types handling
++ * A set of helpers for easier BTF types handling.
++ *
++ * The inline functions below rely on constants from the kernel headers which
++ * may not be available for applications including this header file. To avoid
++ * compilation errors, we define all the constants here that were added after
++ * the initial introduction of the BTF_KIND* constants.
+ */
++#ifndef BTF_KIND_FUNC
++#define BTF_KIND_FUNC 12 /* Function */
++#define BTF_KIND_FUNC_PROTO 13 /* Function Proto */
++#endif
++#ifndef BTF_KIND_VAR
++#define BTF_KIND_VAR 14 /* Variable */
++#define BTF_KIND_DATASEC 15 /* Section */
++#endif
++#ifndef BTF_KIND_FLOAT
++#define BTF_KIND_FLOAT 16 /* Floating point */
++#endif
++/* The kernel header switched to enums, so these two were never #defined */
++#define BTF_KIND_DECL_TAG 17 /* Decl Tag */
++#define BTF_KIND_TYPE_TAG 18 /* Type Tag */
++
+ static inline __u16 btf_kind(const struct btf_type *t)
+ {
+ return BTF_INFO_KIND(t->info);
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index b9a3260c83cbd..6b1bc1f43728c 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -1505,6 +1505,11 @@ static const char *btf_dump_resolve_name(struct btf_dump *d, __u32 id,
+ if (s->name_resolved)
+ return *cached_name ? *cached_name : orig_name;
+
++ if (btf_is_fwd(t) || (btf_is_enum(t) && btf_vlen(t) == 0)) {
++ s->name_resolved = 1;
++ return orig_name;
++ }
++
+ dup_cnt = btf_dump_name_dups(d, name_map, orig_name);
+ if (dup_cnt > 1) {
+ const size_t max_len = 256;
+@@ -1861,14 +1866,16 @@ static int btf_dump_array_data(struct btf_dump *d,
+ {
+ const struct btf_array *array = btf_array(t);
+ const struct btf_type *elem_type;
+- __u32 i, elem_size = 0, elem_type_id;
++ __u32 i, elem_type_id;
++ __s64 elem_size;
+ bool is_array_member;
+
+ elem_type_id = array->type;
+ elem_type = skip_mods_and_typedefs(d->btf, elem_type_id, NULL);
+ elem_size = btf__resolve_size(d->btf, elem_type_id);
+ if (elem_size <= 0) {
+- pr_warn("unexpected elem size %d for array type [%u]\n", elem_size, id);
++ pr_warn("unexpected elem size %zd for array type [%u]\n",
++ (ssize_t)elem_size, id);
+ return -EINVAL;
+ }
+
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 7f10dd501a52b..94a6a8543cbc9 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4854,7 +4854,6 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ LIBBPF_OPTS(bpf_map_create_opts, create_attr);
+ struct bpf_map_def *def = &map->def;
+ const char *map_name = NULL;
+- __u32 max_entries;
+ int err = 0;
+
+ if (kernel_supports(obj, FEAT_PROG_NAME))
+@@ -4864,21 +4863,6 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ create_attr.numa_node = map->numa_node;
+ create_attr.map_extra = map->map_extra;
+
+- if (def->type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !def->max_entries) {
+- int nr_cpus;
+-
+- nr_cpus = libbpf_num_possible_cpus();
+- if (nr_cpus < 0) {
+- pr_warn("map '%s': failed to determine number of system CPUs: %d\n",
+- map->name, nr_cpus);
+- return nr_cpus;
+- }
+- pr_debug("map '%s': setting size to %d\n", map->name, nr_cpus);
+- max_entries = nr_cpus;
+- } else {
+- max_entries = def->max_entries;
+- }
+-
+ if (bpf_map__is_struct_ops(map))
+ create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
+
+@@ -4928,7 +4912,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+
+ if (obj->gen_loader) {
+ bpf_gen__map_create(obj->gen_loader, def->type, map_name,
+- def->key_size, def->value_size, max_entries,
++ def->key_size, def->value_size, def->max_entries,
+ &create_attr, is_inner ? -1 : map - obj->maps);
+ /* Pretend to have valid FD to pass various fd >= 0 checks.
+ * This fd == 0 will not be used with any syscall and will be reset to -1 eventually.
+@@ -4937,7 +4921,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ } else {
+ map->fd = bpf_map_create(def->type, map_name,
+ def->key_size, def->value_size,
+- max_entries, &create_attr);
++ def->max_entries, &create_attr);
+ }
+ if (map->fd < 0 && (create_attr.btf_key_type_id ||
+ create_attr.btf_value_type_id)) {
+@@ -4954,7 +4938,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
+ map->btf_value_type_id = 0;
+ map->fd = bpf_map_create(def->type, map_name,
+ def->key_size, def->value_size,
+- max_entries, &create_attr);
++ def->max_entries, &create_attr);
+ }
+
+ err = map->fd < 0 ? -errno : 0;
+@@ -5058,6 +5042,24 @@ static int bpf_object_init_prog_arrays(struct bpf_object *obj)
+ return 0;
+ }
+
++static int map_set_def_max_entries(struct bpf_map *map)
++{
++ if (map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !map->def.max_entries) {
++ int nr_cpus;
++
++ nr_cpus = libbpf_num_possible_cpus();
++ if (nr_cpus < 0) {
++ pr_warn("map '%s': failed to determine number of system CPUs: %d\n",
++ map->name, nr_cpus);
++ return nr_cpus;
++ }
++ pr_debug("map '%s': setting size to %d\n", map->name, nr_cpus);
++ map->def.max_entries = nr_cpus;
++ }
++
++ return 0;
++}
++
+ static int
+ bpf_object__create_maps(struct bpf_object *obj)
+ {
+@@ -5090,6 +5092,10 @@ bpf_object__create_maps(struct bpf_object *obj)
+ continue;
+ }
+
++ err = map_set_def_max_entries(map);
++ if (err)
++ goto err_out;
++
+ retried = false;
+ retry:
+ if (map->pin_path) {
+@@ -11795,6 +11801,9 @@ void bpf_object__detach_skeleton(struct bpf_object_skeleton *s)
+
+ void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
+ {
++ if (!s)
++ return;
++
+ if (s->progs)
+ bpf_object__detach_skeleton(s);
+ if (s->obj)
+diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
+index 5297839677930..9a89fdfe4987e 100644
+--- a/tools/lib/bpf/libbpf.map
++++ b/tools/lib/bpf/libbpf.map
+@@ -431,4 +431,4 @@ LIBBPF_0.7.0 {
+ libbpf_probe_bpf_map_type;
+ libbpf_probe_bpf_prog_type;
+ libbpf_set_memlock_rlim_max;
+-};
++} LIBBPF_0.6.0;
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index 39f25e09b51e2..fadde7d80a51c 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -87,29 +87,75 @@ enum {
+ NL_DONE,
+ };
+
++static int netlink_recvmsg(int sock, struct msghdr *mhdr, int flags)
++{
++ int len;
++
++ do {
++ len = recvmsg(sock, mhdr, flags);
++ } while (len < 0 && (errno == EINTR || errno == EAGAIN));
++
++ if (len < 0)
++ return -errno;
++ return len;
++}
++
++static int alloc_iov(struct iovec *iov, int len)
++{
++ void *nbuf;
++
++ nbuf = realloc(iov->iov_base, len);
++ if (!nbuf)
++ return -ENOMEM;
++
++ iov->iov_base = nbuf;
++ iov->iov_len = len;
++ return 0;
++}
++
+ static int libbpf_netlink_recv(int sock, __u32 nl_pid, int seq,
+ __dump_nlmsg_t _fn, libbpf_dump_nlmsg_t fn,
+ void *cookie)
+ {
++ struct iovec iov = {};
++ struct msghdr mhdr = {
++ .msg_iov = &iov,
++ .msg_iovlen = 1,
++ };
+ bool multipart = true;
+ struct nlmsgerr *err;
+ struct nlmsghdr *nh;
+- char buf[4096];
+ int len, ret;
+
++ ret = alloc_iov(&iov, 4096);
++ if (ret)
++ goto done;
++
+ while (multipart) {
+ start:
+ multipart = false;
+- len = recv(sock, buf, sizeof(buf), 0);
++ len = netlink_recvmsg(sock, &mhdr, MSG_PEEK | MSG_TRUNC);
++ if (len < 0) {
++ ret = len;
++ goto done;
++ }
++
++ if (len > iov.iov_len) {
++ ret = alloc_iov(&iov, len);
++ if (ret)
++ goto done;
++ }
++
++ len = netlink_recvmsg(sock, &mhdr, 0);
+ if (len < 0) {
+- ret = -errno;
++ ret = len;
+ goto done;
+ }
+
+ if (len == 0)
+ break;
+
+- for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
++ for (nh = (struct nlmsghdr *)iov.iov_base; NLMSG_OK(nh, len);
+ nh = NLMSG_NEXT(nh, len)) {
+ if (nh->nlmsg_pid != nl_pid) {
+ ret = -LIBBPF_ERRNO__WRNGPID;
+@@ -130,7 +176,8 @@ start:
+ libbpf_nla_dump_errormsg(nh);
+ goto done;
+ case NLMSG_DONE:
+- return 0;
++ ret = 0;
++ goto done;
+ default:
+ break;
+ }
+@@ -142,15 +189,17 @@ start:
+ case NL_NEXT:
+ goto start;
+ case NL_DONE:
+- return 0;
++ ret = 0;
++ goto done;
+ default:
+- return ret;
++ goto done;
+ }
+ }
+ }
+ }
+ ret = 0;
+ done:
++ free(iov.iov_base);
+ return ret;
+ }
+
+diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
+index edafe56664f3a..32a2f5749c711 100644
+--- a/tools/lib/bpf/xsk.c
++++ b/tools/lib/bpf/xsk.c
+@@ -1193,12 +1193,23 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
+
+ int xsk_umem__delete(struct xsk_umem *umem)
+ {
++ struct xdp_mmap_offsets off;
++ int err;
++
+ if (!umem)
+ return 0;
+
+ if (umem->refcount)
+ return -EBUSY;
+
++ err = xsk_get_mmap_offsets(umem->fd, &off);
++ if (!err && umem->fill_save && umem->comp_save) {
++ munmap(umem->fill_save->ring - off.fr.desc,
++ off.fr.desc + umem->config.fill_size * sizeof(__u64));
++ munmap(umem->comp_save->ring - off.cr.desc,
++ off.cr.desc + umem->config.comp_size * sizeof(__u64));
++ }
++
+ close(umem->fd);
+ free(umem);
+
+diff --git a/tools/lib/perf/tests/test-evlist.c b/tools/lib/perf/tests/test-evlist.c
+index fa854c83b7e7b..ed616fc19b4f2 100644
+--- a/tools/lib/perf/tests/test-evlist.c
++++ b/tools/lib/perf/tests/test-evlist.c
+@@ -69,7 +69,7 @@ static int test_stat_cpu(void)
+ perf_evlist__set_maps(evlist, cpus, NULL);
+
+ err = perf_evlist__open(evlist);
+- __T("failed to open evsel", err == 0);
++ __T("failed to open evlist", err == 0);
+
+ perf_evlist__for_each_evsel(evlist, evsel) {
+ cpus = perf_evsel__cpus(evsel);
+@@ -130,7 +130,7 @@ static int test_stat_thread(void)
+ perf_evlist__set_maps(evlist, NULL, threads);
+
+ err = perf_evlist__open(evlist);
+- __T("failed to open evsel", err == 0);
++ __T("failed to open evlist", err == 0);
+
+ perf_evlist__for_each_evsel(evlist, evsel) {
+ perf_evsel__read(evsel, 0, 0, &counts);
+@@ -187,7 +187,7 @@ static int test_stat_thread_enable(void)
+ perf_evlist__set_maps(evlist, NULL, threads);
+
+ err = perf_evlist__open(evlist);
+- __T("failed to open evsel", err == 0);
++ __T("failed to open evlist", err == 0);
+
+ perf_evlist__for_each_evsel(evlist, evsel) {
+ perf_evsel__read(evsel, 0, 0, &counts);
+@@ -507,7 +507,7 @@ static int test_stat_multiplexing(void)
+ perf_evlist__set_maps(evlist, NULL, threads);
+
+ err = perf_evlist__open(evlist);
+- __T("failed to open evsel", err == 0);
++ __T("failed to open evlist", err == 0);
+
+ perf_evlist__enable(evlist);
+
+diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
+index 8d9b55959256a..cfc208d71f00a 100644
+--- a/tools/perf/arch/x86/util/evlist.c
++++ b/tools/perf/arch/x86/util/evlist.c
+@@ -20,17 +20,27 @@ int arch_evlist__add_default_attrs(struct evlist *evlist)
+
+ struct evsel *arch_evlist__leader(struct list_head *list)
+ {
+- struct evsel *evsel, *first;
++ struct evsel *evsel, *first, *slots = NULL;
++ bool has_topdown = false;
+
+ first = list_first_entry(list, struct evsel, core.node);
+
+ if (!pmu_have_event("cpu", "slots"))
+ return first;
+
++ /* If there is a slots event and a topdown event then the slots event comes first. */
+ __evlist__for_each_entry(list, evsel) {
+- if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") &&
+- evsel->name && strcasestr(evsel->name, "slots"))
+- return evsel;
++ if (evsel->pmu_name && !strcmp(evsel->pmu_name, "cpu") && evsel->name) {
++ if (strcasestr(evsel->name, "slots")) {
++ slots = evsel;
++ if (slots == first)
++ return first;
++ }
++ if (!strncasecmp(evsel->name, "topdown", 7))
++ has_topdown = true;
++ if (slots && has_topdown)
++ return slots;
++ }
+ }
+ return first;
+ }
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 3f98689dd6878..60baa3dadc4b6 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -955,10 +955,10 @@ try_again_reset:
+ * Enable counters and exec the command:
+ */
+ if (forks) {
+- evlist__start_workload(evsel_list);
+ err = enable_counters();
+ if (err)
+ return -1;
++ evlist__start_workload(evsel_list);
+
+ t0 = rdclock();
+ clock_gettime(CLOCK_MONOTONIC, &ref_time);
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/cache.json b/tools/perf/pmu-events/arch/x86/skylakex/cache.json
+index 9ff67206ade4e..821d2f2a8f251 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/cache.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/cache.json
+@@ -314,6 +314,19 @@
+ "SampleAfterValue": "2000003",
+ "UMask": "0x82"
+ },
++ {
++ "BriefDescription": "All retired memory instructions.",
++ "Counter": "0,1,2,3",
++ "CounterHTOff": "0,1,2,3",
++ "Data_LA": "1",
++ "EventCode": "0xD0",
++ "EventName": "MEM_INST_RETIRED.ANY",
++ "L1_Hit_Indication": "1",
++ "PEBS": "1",
++ "PublicDescription": "Counts all retired memory instructions - loads and stores.",
++ "SampleAfterValue": "2000003",
++ "UMask": "0x83"
++ },
+ {
+ "BriefDescription": "Retired load instructions with locked access.",
+ "Counter": "0,1,2,3",
+@@ -358,6 +371,7 @@
+ "EventCode": "0xD0",
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_LOADS",
+ "PEBS": "1",
++ "PublicDescription": "Number of retired load instructions that (start a) miss in the 2nd-level TLB (STLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x11"
+ },
+@@ -370,6 +384,7 @@
+ "EventName": "MEM_INST_RETIRED.STLB_MISS_STORES",
+ "L1_Hit_Indication": "1",
+ "PEBS": "1",
++ "PublicDescription": "Number of retired store instructions that (start a) miss in the 2nd-level TLB (STLB).",
+ "SampleAfterValue": "100003",
+ "UMask": "0x12"
+ },
+@@ -733,7 +748,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010491",
++ "MSRValue": "0x10491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -772,7 +787,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0491",
++ "MSRValue": "0x4003C0491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -785,7 +800,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0491",
++ "MSRValue": "0x1003C0491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -798,7 +813,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0491",
++ "MSRValue": "0x8003C0491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -811,7 +826,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010490",
++ "MSRValue": "0x10490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -850,7 +865,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0490",
++ "MSRValue": "0x4003C0490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -863,7 +878,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0490",
++ "MSRValue": "0x1003C0490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -876,7 +891,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0490",
++ "MSRValue": "0x8003C0490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -889,7 +904,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010120",
++ "MSRValue": "0x10120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -928,7 +943,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0120",
++ "MSRValue": "0x4003C0120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -941,7 +956,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0120",
++ "MSRValue": "0x1003C0120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -954,7 +969,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0120",
++ "MSRValue": "0x8003C0120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -967,7 +982,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010122",
++ "MSRValue": "0x10122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1006,7 +1021,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0122",
++ "MSRValue": "0x4003C0122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1019,7 +1034,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0122",
++ "MSRValue": "0x1003C0122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1032,7 +1047,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0122",
++ "MSRValue": "0x8003C0122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1045,7 +1060,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010004",
++ "MSRValue": "0x10004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1084,7 +1099,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0004",
++ "MSRValue": "0x4003C0004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1097,7 +1112,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0004",
++ "MSRValue": "0x1003C0004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1110,7 +1125,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0004",
++ "MSRValue": "0x8003C0004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1123,7 +1138,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010001",
++ "MSRValue": "0x10001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1162,7 +1177,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0001",
++ "MSRValue": "0x4003C0001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1175,7 +1190,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0001",
++ "MSRValue": "0x1003C0001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1188,7 +1203,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0001",
++ "MSRValue": "0x8003C0001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1201,7 +1216,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010002",
++ "MSRValue": "0x10002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1240,7 +1255,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0002",
++ "MSRValue": "0x4003C0002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1253,7 +1268,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0002",
++ "MSRValue": "0x1003C0002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1266,7 +1281,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0002",
++ "MSRValue": "0x8003C0002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1279,7 +1294,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010400",
++ "MSRValue": "0x10400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1318,7 +1333,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0400",
++ "MSRValue": "0x4003C0400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1331,7 +1346,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0400",
++ "MSRValue": "0x1003C0400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1344,7 +1359,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0400",
++ "MSRValue": "0x8003C0400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1357,7 +1372,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010010",
++ "MSRValue": "0x10010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1396,7 +1411,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0010",
++ "MSRValue": "0x4003C0010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1409,7 +1424,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0010",
++ "MSRValue": "0x1003C0010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1422,7 +1437,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0010",
++ "MSRValue": "0x8003C0010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1435,7 +1450,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010020",
++ "MSRValue": "0x10020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1474,7 +1489,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0020",
++ "MSRValue": "0x4003C0020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1487,7 +1502,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0020",
++ "MSRValue": "0x1003C0020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1500,7 +1515,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0020",
++ "MSRValue": "0x8003C0020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1513,7 +1528,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010080",
++ "MSRValue": "0x10080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1552,7 +1567,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0080",
++ "MSRValue": "0x4003C0080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1565,7 +1580,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0080",
++ "MSRValue": "0x1003C0080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1578,7 +1593,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0080",
++ "MSRValue": "0x8003C0080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1591,7 +1606,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.ANY_RESPONSE",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0000010100",
++ "MSRValue": "0x10100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1630,7 +1645,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.HIT_OTHER_CORE_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x04003C0100",
++ "MSRValue": "0x4003C0100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1643,7 +1658,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.NO_SNOOP_NEEDED",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x01003C0100",
++ "MSRValue": "0x1003C0100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1656,7 +1671,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_HIT.SNOOP_HIT_WITH_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x08003C0100",
++ "MSRValue": "0x8003C0100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json b/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json
+index 503737ed3a83c..9e873ab224502 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/floating-point.json
+@@ -1,73 +1,81 @@
+ [
+ {
+- "BriefDescription": "Number of SSE/AVX computational 128-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT14 RCP14 DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++ "BriefDescription": "Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE",
++ "PublicDescription": "Counts once for most SIMD 128-bit packed computational double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 2 computation operations, one for each element. Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x4"
+ },
+ {
+- "BriefDescription": "Number of SSE/AVX computational 128-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++ "BriefDescription": "Counts once for most SIMD 128-bit packed computational single precision floating-point instruction retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE",
++ "PublicDescription": "Counts once for most SIMD 128-bit packed computational single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x8"
+ },
+ {
+- "BriefDescription": "Number of SSE/AVX computational 256-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++ "BriefDescription": "Counts once for most SIMD 256-bit packed double computational precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE",
++ "PublicDescription": "Counts once for most SIMD 256-bit packed double computational precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 4 computation operations, one for each element. Applies to packed double precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x10"
+ },
+ {
+- "BriefDescription": "Number of SSE/AVX computational 256-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++ "BriefDescription": "Counts once for most SIMD 256-bit packed single computational precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE",
++ "PublicDescription": "Counts once for most SIMD 256-bit packed single computational precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to packed single precision floating-point instructions: ADD SUB HADD HSUB SUBADD MUL DIV MIN MAX SQRT RSQRT RCP DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x20"
+ },
+ {
+- "BriefDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 8 calculations per element.",
++ "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE",
++ "PublicDescription": "Number of SSE/AVX computational 512-bit packed double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 8 computation operations, one for each element. Applies to SSE* and AVX* packed double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x40"
+ },
+ {
+- "BriefDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 16 calculations per element.",
++ "BriefDescription": "Counts number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE",
++ "PublicDescription": "Number of SSE/AVX computational 512-bit packed single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 16 computation operations, one for each element. Applies to SSE* and AVX* packed single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT14 RCP14 FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x80"
+ },
+ {
+- "BriefDescription": "Number of SSE/AVX computational scalar double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computation. Applies to SSE* and AVX* scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++ "BriefDescription": "Counts once for most SIMD scalar computational double precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
++ "PublicDescription": "Counts once for most SIMD scalar computational double precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar double precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x1"
+ },
+ {
+- "BriefDescription": "Number of SSE/AVX computational scalar single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computation. Applies to SSE* and AVX* scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX RCP14 RSQRT14 SQRT DPP FM(N)ADD/SUB. DPP and FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element.",
++ "BriefDescription": "Counts once for most SIMD scalar computational single precision floating-point instructions retired. Counts twice for DPP and FM(N)ADD/SUB instructions retired.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xC7",
+ "EventName": "FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
++ "PublicDescription": "Counts once for most SIMD scalar computational single precision floating-point instructions retired; some instructions will count twice as noted below. Each count represents 1 computational operation. Applies to SIMD scalar single precision floating-point instructions: ADD SUB MUL DIV MIN MAX SQRT RSQRT RCP FM(N)ADD/SUB. FM(N)ADD/SUB instructions count twice as they perform 2 calculations per element. The DAZ and FTZ flags in the MXCSR register need to be set when using these events.",
+ "SampleAfterValue": "2000003",
+ "UMask": "0x2"
+ },
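
The FP_ARITH_INST_RETIRED descriptions above each state how many computation operations one count represents (1 for scalar, 2/4 for 128-bit, 4/8 for 256-bit, 8/16 for 512-bit). A minimal sketch of how those weights turn raw counter readings into a FLOP estimate; the counter values here are placeholders, assumed to have been collected separately (e.g. with perf stat):

    # Per-event FLOP weights taken from the "Each count represents N
    # computation operations" text in the event descriptions above.
    WEIGHTS = {
        "fp_arith_inst_retired.scalar_single": 1,
        "fp_arith_inst_retired.scalar_double": 1,
        "fp_arith_inst_retired.128b_packed_double": 2,
        "fp_arith_inst_retired.128b_packed_single": 4,
        "fp_arith_inst_retired.256b_packed_double": 4,
        "fp_arith_inst_retired.256b_packed_single": 8,
        "fp_arith_inst_retired.512b_packed_double": 8,
        "fp_arith_inst_retired.512b_packed_single": 16,
    }

    def flops(counts):
        # counts: event name -> retired count. Per the descriptions, DPP and
        # FM(N)ADD/SUB already count twice, so no extra factor is applied.
        return sum(WEIGHTS[event] * n for event, n in counts.items())

    # Placeholder readings, not real measurements:
    print(flops({"fp_arith_inst_retired.scalar_double": 1000,
                 "fp_arith_inst_retired.256b_packed_double": 500}))
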
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/frontend.json b/tools/perf/pmu-events/arch/x86/skylakex/frontend.json
+index 078706a500919..ecce4273ae52c 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/frontend.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/frontend.json
+@@ -30,7 +30,21 @@
+ "UMask": "0x2"
+ },
+ {
+- "BriefDescription": "Retired Instructions who experienced decode stream buffer (DSB - the decoded instruction-cache) miss.",
++ "BriefDescription": "Retired Instructions who experienced DSB miss.",
++ "Counter": "0,1,2,3",
++ "CounterHTOff": "0,1,2,3",
++ "EventCode": "0xC6",
++ "EventName": "FRONTEND_RETIRED.ANY_DSB_MISS",
++ "MSRIndex": "0x3F7",
++ "MSRValue": "0x1",
++ "PEBS": "1",
++ "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
++ "SampleAfterValue": "100007",
++ "TakenAlone": "1",
++ "UMask": "0x1"
++ },
++ {
++ "BriefDescription": "Retired Instructions who experienced a critical DSB miss.",
+ "Counter": "0,1,2,3",
+ "CounterHTOff": "0,1,2,3",
+ "EventCode": "0xC6",
+@@ -38,7 +52,7 @@
+ "MSRIndex": "0x3F7",
+ "MSRValue": "0x11",
+ "PEBS": "1",
+- "PublicDescription": "Counts retired Instructions that experienced DSB (Decode stream buffer i.e. the decoded instruction-cache) miss.",
++ "PublicDescription": "Number of retired Instructions that experienced a critical DSB (Decode stream buffer i.e. the decoded instruction-cache) miss. Critical means stalls were exposed to the back-end as a result of the DSB miss.",
+ "SampleAfterValue": "100007",
+ "TakenAlone": "1",
+ "UMask": "0x1"
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/memory.json b/tools/perf/pmu-events/arch/x86/skylakex/memory.json
+index 6f29b02fa320c..60c286b4fe54c 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/memory.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/memory.json
+@@ -299,7 +299,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00491",
++ "MSRValue": "0x83FC00491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -312,7 +312,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00491",
++ "MSRValue": "0x63FC00491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -325,7 +325,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000491",
++ "MSRValue": "0x604000491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -338,7 +338,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800491",
++ "MSRValue": "0x63B800491",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -377,7 +377,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00490",
++ "MSRValue": "0x83FC00490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -390,7 +390,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00490",
++ "MSRValue": "0x63FC00490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -403,7 +403,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000490",
++ "MSRValue": "0x604000490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -416,7 +416,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800490",
++ "MSRValue": "0x63B800490",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -455,7 +455,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00120",
++ "MSRValue": "0x83FC00120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -468,7 +468,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00120",
++ "MSRValue": "0x63FC00120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -481,7 +481,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000120",
++ "MSRValue": "0x604000120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -494,7 +494,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_PF_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800120",
++ "MSRValue": "0x63B800120",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -533,7 +533,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00122",
++ "MSRValue": "0x83FC00122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -546,7 +546,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00122",
++ "MSRValue": "0x63FC00122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -559,7 +559,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000122",
++ "MSRValue": "0x604000122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -572,7 +572,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.ALL_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800122",
++ "MSRValue": "0x63B800122",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -611,7 +611,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00004",
++ "MSRValue": "0x83FC00004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -624,7 +624,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00004",
++ "MSRValue": "0x63FC00004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -637,7 +637,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000004",
++ "MSRValue": "0x604000004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -650,7 +650,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_CODE_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800004",
++ "MSRValue": "0x63B800004",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -689,7 +689,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00001",
++ "MSRValue": "0x83FC00001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -702,7 +702,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00001",
++ "MSRValue": "0x63FC00001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -715,7 +715,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000001",
++ "MSRValue": "0x604000001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -728,7 +728,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800001",
++ "MSRValue": "0x63B800001",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -767,7 +767,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00002",
++ "MSRValue": "0x83FC00002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -780,7 +780,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00002",
++ "MSRValue": "0x63FC00002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -793,7 +793,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000002",
++ "MSRValue": "0x604000002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -806,7 +806,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.DEMAND_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800002",
++ "MSRValue": "0x63B800002",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -845,7 +845,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00400",
++ "MSRValue": "0x83FC00400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -858,7 +858,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00400",
++ "MSRValue": "0x63FC00400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -871,7 +871,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000400",
++ "MSRValue": "0x604000400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -884,7 +884,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L1D_AND_SW.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800400",
++ "MSRValue": "0x63B800400",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -923,7 +923,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00010",
++ "MSRValue": "0x83FC00010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -936,7 +936,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00010",
++ "MSRValue": "0x63FC00010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -949,7 +949,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000010",
++ "MSRValue": "0x604000010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -962,7 +962,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800010",
++ "MSRValue": "0x63B800010",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1001,7 +1001,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00020",
++ "MSRValue": "0x83FC00020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1014,7 +1014,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00020",
++ "MSRValue": "0x63FC00020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1027,7 +1027,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000020",
++ "MSRValue": "0x604000020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1040,7 +1040,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L2_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800020",
++ "MSRValue": "0x63B800020",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1079,7 +1079,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00080",
++ "MSRValue": "0x83FC00080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1092,7 +1092,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00080",
++ "MSRValue": "0x63FC00080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1105,7 +1105,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000080",
++ "MSRValue": "0x604000080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1118,7 +1118,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_DATA_RD.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800080",
++ "MSRValue": "0x63B800080",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1157,7 +1157,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.REMOTE_HIT_FORWARD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x083FC00100",
++ "MSRValue": "0x83FC00100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1170,7 +1170,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063FC00100",
++ "MSRValue": "0x63FC00100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1183,7 +1183,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_LOCAL_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x0604000100",
++ "MSRValue": "0x604000100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+@@ -1196,7 +1196,7 @@
+ "EventCode": "0xB7, 0xBB",
+ "EventName": "OFFCORE_RESPONSE.PF_L3_RFO.L3_MISS_REMOTE_DRAM.SNOOP_MISS_OR_NO_FWD",
+ "MSRIndex": "0x1a6,0x1a7",
+- "MSRValue": "0x063B800100",
++ "MSRValue": "0x63B800100",
+ "Offcore": "1",
+ "PublicDescription": "Offcore response can be programmed only with a specific pair of event select and counter MSR, and with specific event codes and predefine mask bit value in a dedicated MSR to specify attributes of the offcore transaction.",
+ "SampleAfterValue": "100003",
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json b/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json
+index ca57481206660..12eabae3e2242 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/pipeline.json
+@@ -435,6 +435,17 @@
+ "PublicDescription": "Counts the number of instructions (EOMs) retired. Counting covers macro-fused instructions individually (that is, increments by two).",
+ "SampleAfterValue": "2000003"
+ },
++ {
++ "BriefDescription": "Number of all retired NOP instructions.",
++ "Counter": "0,1,2,3",
++ "CounterHTOff": "0,1,2,3,4,5,6,7",
++ "Errata": "SKL091, SKL044",
++ "EventCode": "0xC0",
++ "EventName": "INST_RETIRED.NOP",
++ "PEBS": "1",
++ "SampleAfterValue": "2000003",
++ "UMask": "0x2"
++ },
+ {
+ "BriefDescription": "Precise instruction retired event with HW to reduce effect of PEBS shadow in IP distribution",
+ "Counter": "1",
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
+index 863c9e103969e..b016f7d1ff3de 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/skx-metrics.json
+@@ -1,26 +1,167 @@
+ [
++ {
++ "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend",
++ "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)",
++ "MetricGroup": "TopdownL1",
++ "MetricName": "Frontend_Bound",
++ "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound."
++ },
++ {
++ "BriefDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++ "MetricGroup": "TopdownL1_SMT",
++ "MetricName": "Frontend_Bound_SMT",
++ "PublicDescription": "This category represents fraction of slots where the processor's Frontend undersupplies its Backend. Frontend denotes the first part of the processor core responsible to fetch operations that are executed later on by the Backend part. Within the Frontend; a branch predictor predicts the next address to fetch; cache-lines are fetched from the memory subsystem; parsed into instructions; and lastly decoded into micro-operations (uops). Ideally the Frontend can issue Machine_Width uops every cycle to the Backend. Frontend Bound denotes unutilized issue-slots when there is no Backend stall; i.e. bubbles where Frontend delivered no uops while Backend could have accepted them. For example; stalls due to instruction-cache misses would be categorized under Frontend Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
++ {
++ "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations",
++ "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)",
++ "MetricGroup": "TopdownL1",
++ "MetricName": "Bad_Speculation",
++ "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example."
++ },
++ {
++ "BriefDescription": "This category represents fraction of slots wasted due to incorrect speculations. SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++ "MetricGroup": "TopdownL1_SMT",
++ "MetricName": "Bad_Speculation_SMT",
++ "PublicDescription": "This category represents fraction of slots wasted due to incorrect speculations. This include slots used to issue uops that do not eventually get retired and slots for which the issue-pipeline was blocked due to recovery from earlier incorrect speculation. For example; wasted work due to miss-predicted branches are categorized under Bad Speculation category. Incorrect data speculation followed by Memory Ordering Nukes is another example. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
++ {
++ "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend",
++ "MetricConstraint": "NO_NMI_WATCHDOG",
++ "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD)",
++ "MetricGroup": "TopdownL1",
++ "MetricName": "Backend_Bound",
++ "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound."
++ },
++ {
++ "BriefDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++ "MetricGroup": "TopdownL1_SMT",
++ "MetricName": "Backend_Bound_SMT",
++ "PublicDescription": "This category represents fraction of slots where no uops are being delivered due to a lack of required resources for accepting new uops in the Backend. Backend is the portion of the processor core where the out-of-order scheduler dispatches ready uops into their respective execution units; and once completed these uops get retired according to program order. For example; stalls due to data-cache misses or stalls due to the divider unit being overloaded are both categorized under Backend Bound. Backend Bound is further divided into two main categories: Memory Bound and Core Bound. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
++ {
++ "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired",
++ "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)",
++ "MetricGroup": "TopdownL1",
++ "MetricName": "Retiring",
++ "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. "
++ },
++ {
++ "BriefDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))",
++ "MetricGroup": "TopdownL1_SMT",
++ "MetricName": "Retiring_SMT",
++ "PublicDescription": "This category represents fraction of slots utilized by useful work i.e. issued uops that eventually get retired. Ideally; all pipeline slots would be attributed to the Retiring category. Retiring of 100% would indicate the maximum Pipeline_Width throughput was achieved. Maximizing Retiring typically increases the Instructions-per-cycle (see IPC metric). Note that a high Retiring value does not necessary mean there is no room for more performance. For example; Heavy-operations or Microcode Assists are categorized under Retiring. They often indicate suboptimal performance and can often be optimized or avoided. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
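
The four TopdownL1 MetricExprs above share one slot budget of 4 uops per thread clock, and Backend_Bound is defined as the remainder. A minimal sketch of the same arithmetic in Python, using the non-SMT forms and placeholder counter values:

    def topdown_l1(clks, idq_uops_not_delivered, uops_issued,
                   uops_retired_slots, recovery_cycles):
        # SLOTS = 4 * CPU_CLK_UNHALTED.THREAD, as in the MetricExprs above.
        slots = 4 * clks
        frontend_bound = idq_uops_not_delivered / slots
        bad_speculation = (uops_issued - uops_retired_slots
                           + 4 * recovery_cycles) / slots
        retiring = uops_retired_slots / slots
        # Backend_Bound is the remainder, matching the "1 - ..." expression.
        backend_bound = (1 - frontend_bound
                         - (uops_issued + 4 * recovery_cycles) / slots)
        return frontend_bound, bad_speculation, backend_bound, retiring

    # Placeholder values, not a real measurement; the four fractions sum to 1.
    print(topdown_l1(1_000_000, 400_000, 2_600_000, 2_400_000, 10_000))
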
++ {
++ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
++ "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) )",
++ "MetricGroup": "Bad;BadSpec;BrMispredicts",
++ "MetricName": "Mispredictions"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of Branch Misprediction related bottlenecks",
++ "MetricExpr": "100 * ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) )",
++ "MetricGroup": "Bad;BadSpec;BrMispredicts_SMT",
++ "MetricName": "Mispredictions_SMT"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
++ "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES )
/ (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREAD) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CY
CLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (OFFCORE_REQUESTS_BUFFER.SQ_FULL / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES))
* (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( ((L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / CPU_CLK_UNHALTED.THREAD) / #(max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) ",
++ "MetricGroup": "Mem;MemoryBW;Offcore",
++ "MetricName": "Memory_Bandwidth"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of (external) Memory Bandwidth related bottlenecks",
++ "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (M
EM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREA
D) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THR
EAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( OFFCORE_REQUESTS_BUFFER.SQ_FULL / 2 ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) ) + ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_
NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( ((L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )) * cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ / CPU_CLK_UNHALTED.THREAD) / #(max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) ",
++ "MetricGroup": "Mem;MemoryBW;Offcore_SMT",
++ "MetricName": "Memory_Bandwidth_SMT"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
++ "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES )
/ (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD ) / CPU_CLK_UNHALTED.THREAD - (min( CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREAD)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D
_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (( (20.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) - (3.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) ) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_
UNHALTED.THREAD) ) + ( (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) )",
++ "MetricGroup": "Mem;MemoryLat;Offcore",
++ "MetricName": "Memory_Latency"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of Memory Latency related bottlenecks (external memory and off-core caches)",
++ "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * ( ( (CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (M
EM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min( CPU_CLK_UNHALTED.THREAD , OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD ) / CPU_CLK_UNHALTED.THREAD - (min(
CPU_CLK_UNHALTED.THREAD , cpu@OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD\\,cmask\\=4@ ) / CPU_CLK_UNHALTED.THREAD)) / #(CYCLE_ACTIVITY.STALLS_L3_MISS / CPU_CLK_UNHALTED.THREAD + (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD) - (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD))) ) + ( (( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_
PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( (20.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) - (3.5 * ((CPU_CLK_UNHALTED.THREAD / CPU_CLK_UNHALTED.REF_TSC) * msr@tsc@ / 1000000000 / duration_time)) ) * MEM_LOAD_RETIRED.L3_HIT * (1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) / 2) / CPU_CLK_UNHALTED.THREAD) / #(( CYCLE_ACTIVITY.STALLS_L2_MISS - CYCLE_ACTIVITY.STALLS_L3_MISS ) / CPU_CLK_UNHALTED.THREAD) ) + ( (( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.L1_MISS) )) / ( (MEM_LOAD_RETIRED.L2_HIT * ( 1 + (MEM_LOAD_RETIRED.FB_HIT / MEM_LOAD_RETIRED.
L1_MISS) )) + cpu@L1D_PEND_MISS.FB_FULL\\,cmask\\=1@ ) ) * (( CYCLE_ACTIVITY.STALLS_L1D_MISS - CYCLE_ACTIVITY.STALLS_L2_MISS ) / CPU_CLK_UNHALTED.THREAD)) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) )",
++ "MetricGroup": "Mem;MemoryLat;Offcore_SMT",
++ "MetricName": "Memory_Latency_SMT"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
++ "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) * ( ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (min( 9 * cpu@DTLB_LOAD_MISSES.ST
LB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 ) ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * CPU_CLK_UNHALTED.THREAD)) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - ( UOPS_ISSUED.ANY + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE ) / CPU_CLK_UNHALTED.THREAD) / #(EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) ) ",
++ "MetricGroup": "Mem;MemoryTLB",
++ "MetricName": "Memory_Data_TLBs"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of Memory Address Translation related bottlenecks (data-side TLBs)",
++ "MetricExpr": "100 * ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * ( ( (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) / ((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOPS_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHA
LTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (min( 9 * cpu@DTLB_LOAD_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_LOAD_MISSES.WALK_ACTIVE , max( CYCLE_ACTIVITY.CYCLES_MEM_ANY - CYCLE_ACTIVITY.CYCLES_L1D_MISS , 0 ) ) / CPU_CLK_UNHALTED.THREAD) / (max( ( CYCLE_ACTIVITY.STALLS_MEM_ANY - CYCLE_ACTIVITY.STALLS_L1D_MISS ) / CPU_CLK_UNHALTED.THREAD , 0 )) ) + ( (EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) / #((( CYCLE_ACTIVITY.STALLS_MEM_ANY + EXE_ACTIVITY.BOUND_ON_STORES ) / (CYCLE_ACTIVITY.STALLS_TOTAL + (EXE_ACTIVITY.1_PORTS_UTIL + (UOP
S_RETIRED.RETIRE_SLOTS / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * EXE_ACTIVITY.2_PORTS_UTIL) + EXE_ACTIVITY.BOUND_ON_STORES)) * (1 - (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - ( UOPS_ISSUED.ANY + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) ) * ( (( 9 * cpu@DTLB_STORE_MISSES.STLB_HIT\\,cmask\\=1@ + DTLB_STORE_MISSES.WALK_ACTIVE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / #(EXE_ACTIVITY.BOUND_ON_STORES / CPU_CLK_UNHALTED.THREAD) ) ) ",
++ "MetricGroup": "Mem;MemoryTLB;_SMT",
++ "MetricName": "Memory_Data_TLBs_SMT"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
++ "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 * CPU_CLK_UNHALTED.THREAD))",
++ "MetricGroup": "Ret",
++ "MetricName": "Branching_Overhead"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of branch related instructions (used for program control-flow including function calls)",
++ "MetricExpr": "100 * (( BR_INST_RETIRED.CONDITIONAL + 3 * BR_INST_RETIRED.NEAR_CALL + (BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))",
++ "MetricGroup": "Ret_SMT",
++ "MetricName": "Branching_Overhead_SMT"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
++ "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))",
++ "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB",
++ "MetricName": "Big_Code"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of instruction fetch related bottlenecks by large code footprint programs (i-side cache; TLB and BTB misses)",
++ "MetricExpr": "100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))",
++ "MetricGroup": "BigFoot;Fed;Frontend;IcMiss;MemoryTLB_SMT",
++ "MetricName": "Big_Code_SMT"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
++ "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@ ) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)))",
++ "MetricGroup": "Fed;FetchBW;Frontend",
++ "MetricName": "Instruction_Fetch_BW"
++ },
++ {
++ "BriefDescription": "Total pipeline cost of instruction fetch bandwidth related bottlenecks",
++ "MetricExpr": "100 * ( (IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) - (100 * (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ( (ICACHE_64B.IFTAG_STALL / CPU_CLK_UNHALTED.THREAD) + (( ICACHE_16B.IFDATA_STALL + 2 * cpu@ICACHE_16B.IFDATA_STALL\\,cmask\\=1\\,edge@
) / CPU_CLK_UNHALTED.THREAD) + (9 * BACLEARS.ANY / CPU_CLK_UNHALTED.THREAD) ) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))",
++ "MetricGroup": "Fed;FetchBW;Frontend_SMT",
++ "MetricName": "Instruction_Fetch_BW_SMT"
++ },
+ {
+ "BriefDescription": "Instructions Per Cycle (per Logical Processor)",
+ "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
+- "MetricGroup": "Summary",
++ "MetricGroup": "Ret;Summary",
+ "MetricName": "IPC"
+ },
+ {
+ "BriefDescription": "Uops Per Instruction",
+ "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / INST_RETIRED.ANY",
+- "MetricGroup": "Pipeline;Retire",
++ "MetricGroup": "Pipeline;Ret;Retire",
+ "MetricName": "UPI"
+ },
+ {
+ "BriefDescription": "Instruction per taken branch",
+- "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
+- "MetricGroup": "Branches;FetchBW;PGO",
+- "MetricName": "IpTB"
++ "MetricExpr": "UOPS_RETIRED.RETIRE_SLOTS / BR_INST_RETIRED.NEAR_TAKEN",
++ "MetricGroup": "Branches;Fed;FetchBW",
++ "MetricName": "UpTB"
+ },
+ {
+ "BriefDescription": "Cycles Per Instruction (per Logical Processor)",
+ "MetricExpr": "1 / (INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD)",
+- "MetricGroup": "Pipeline",
++ "MetricGroup": "Pipeline;Mem",
+ "MetricName": "CPI"
+ },
+ {
+@@ -30,39 +171,84 @@
+ "MetricName": "CLKS"
+ },
+ {
+- "BriefDescription": "Instructions Per Cycle (per physical core)",
++ "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
++ "MetricExpr": "4 * CPU_CLK_UNHALTED.THREAD",
++ "MetricGroup": "TmaL1",
++ "MetricName": "SLOTS"
++ },
++ {
++ "BriefDescription": "Total issue-pipeline slots (per-Physical Core till ICL; per-Logical Processor ICL onward)",
++ "MetricExpr": "4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++ "MetricGroup": "TmaL1_SMT",
++ "MetricName": "SLOTS_SMT"
++ },
++ {
++ "BriefDescription": "The ratio of Executed- by Issued-Uops",
++ "MetricExpr": "UOPS_EXECUTED.THREAD / UOPS_ISSUED.ANY",
++ "MetricGroup": "Cor;Pipeline",
++ "MetricName": "Execute_per_Issue",
++ "PublicDescription": "The ratio of Executed- by Issued-Uops. Ratio > 1 suggests high rate of uop micro-fusions. Ratio < 1 suggest high rate of \"execute\" at rename stage."
++ },
++ {
++ "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
+ "MetricExpr": "INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD",
+- "MetricGroup": "SMT;TmaL1",
++ "MetricGroup": "Ret;SMT;TmaL1",
+ "MetricName": "CoreIPC"
+ },
+ {
+- "BriefDescription": "Instructions Per Cycle (per physical core)",
++ "BriefDescription": "Instructions Per Cycle across hyper-threads (per physical core)",
+ "MetricExpr": "INST_RETIRED.ANY / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
+- "MetricGroup": "SMT;TmaL1",
++ "MetricGroup": "Ret;SMT;TmaL1_SMT",
+ "MetricName": "CoreIPC_SMT"
+ },
+ {
+ "BriefDescription": "Floating Point Operations Per Cycle",
+ "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / CPU_CLK_UNHALTED.THREAD",
+- "MetricGroup": "Flops",
++ "MetricGroup": "Ret;Flops",
+ "MetricName": "FLOPc"
+ },
+ {
+ "BriefDescription": "Floating Point Operations Per Cycle",
+ "MetricExpr": "( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
+- "MetricGroup": "Flops_SMT",
++ "MetricGroup": "Ret;Flops_SMT",
+ "MetricName": "FLOPc_SMT"
+ },
++ {
++ "BriefDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width)",
++ "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
++ "MetricGroup": "Cor;Flops;HPC",
++ "MetricName": "FP_Arith_Utilization",
++ "PublicDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width). Values > 1 are possible due to Fused-Multiply Add (FMA) counting."
++ },
++ {
++ "BriefDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width). SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
++ "MetricGroup": "Cor;Flops;HPC_SMT",
++ "MetricName": "FP_Arith_Utilization_SMT",
++ "PublicDescription": "Actual per-core usage of the Floating Point execution units (regardless of the vector width). Values > 1 are possible due to Fused-Multiply Add (FMA) counting. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
+ {
+ "BriefDescription": "Instruction-Level-Parallelism (average number of uops executed when there is at least 1 uop executed)",
+ "MetricExpr": "UOPS_EXECUTED.THREAD / (( UOPS_EXECUTED.CORE_CYCLES_GE_1 / 2 ) if #SMT_on else UOPS_EXECUTED.CORE_CYCLES_GE_1)",
+- "MetricGroup": "Pipeline;PortsUtil",
++ "MetricGroup": "Backend;Cor;Pipeline;PortsUtil",
+ "MetricName": "ILP"
+ },
++ {
++ "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
++ "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * INT_MISC.RECOVERY_CYCLES ) / (4 * CPU_CLK_UNHALTED.THREAD))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) ) * (4 * CPU_CLK_UNHALTED.THREAD) / BR_MISP_RETIRED.ALL_BRANCHES",
++ "MetricGroup": "Bad;BrMispredicts",
++ "MetricName": "Branch_Misprediction_Cost"
++ },
++ {
++ "BriefDescription": "Branch Misprediction Cost: Fraction of TMA slots wasted per non-speculative branch misprediction (retired JEClear)",
++ "MetricExpr": " ( ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * (( UOPS_ISSUED.ANY - UOPS_RETIRED.RETIRE_SLOTS + 4 * ( INT_MISC.RECOVERY_CYCLES_ANY / 2 ) ) / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) + (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * ((BR_MISP_RETIRED.ALL_BRANCHES / ( BR_MISP_RETIRED.ALL_BRANCHES + MACHINE_CLEARS.COUNT )) * INT_MISC.CLEAR_RESTEER_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) ) * (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )) / BR_MISP_RETIRED.ALL_BRANCHES",
++ "MetricGroup": "Bad;BrMispredicts_SMT",
++ "MetricName": "Branch_Misprediction_Cost_SMT"
++ },
+ {
+ "BriefDescription": "Number of Instructions per non-speculative Branch Misprediction (JEClear)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_MISP_RETIRED.ALL_BRANCHES",
+- "MetricGroup": "BrMispredicts",
++ "MetricGroup": "Bad;BadSpec;BrMispredicts",
+ "MetricName": "IpMispredict"
+ },
+ {
+@@ -86,122 +272,249 @@
+ {
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
+- "MetricGroup": "Branches;InsType",
++ "MetricGroup": "Branches;Fed;InsType",
+ "MetricName": "IpBranch"
+ },
+ {
+ "BriefDescription": "Instructions per (near) call (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_CALL",
+- "MetricGroup": "Branches",
++ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "IpCall"
+ },
++ {
++ "BriefDescription": "Instruction per taken branch",
++ "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.NEAR_TAKEN",
++ "MetricGroup": "Branches;Fed;FetchBW;Frontend;PGO",
++ "MetricName": "IpTB"
++ },
+ {
+ "BriefDescription": "Branch instructions per taken branch. ",
+ "MetricExpr": "BR_INST_RETIRED.ALL_BRANCHES / BR_INST_RETIRED.NEAR_TAKEN",
+- "MetricGroup": "Branches;PGO",
++ "MetricGroup": "Branches;Fed;PGO",
+ "MetricName": "BpTkBranch"
+ },
+ {
+ "BriefDescription": "Instructions per Floating Point (FP) Operation (lower number means higher occurrence rate)",
+ "MetricExpr": "INST_RETIRED.ANY / ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )",
+- "MetricGroup": "Flops;FpArith;InsType",
++ "MetricGroup": "Flops;InsType",
+ "MetricName": "IpFLOP"
+ },
++ {
++ "BriefDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate)",
++ "MetricExpr": "INST_RETIRED.ANY / ( (FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE) + (FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE) )",
++ "MetricGroup": "Flops;InsType",
++ "MetricName": "IpArith",
++ "PublicDescription": "Instructions per FP Arithmetic instruction (lower number means higher occurrence rate). May undercount due to FMA double counting. Approximated prior to BDW."
++ },
++ {
++ "BriefDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate)",
++ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_SINGLE",
++ "MetricGroup": "Flops;FpScalar;InsType",
++ "MetricName": "IpArith_Scalar_SP",
++ "PublicDescription": "Instructions per FP Arithmetic Scalar Single-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++ },
++ {
++ "BriefDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate)",
++ "MetricExpr": "INST_RETIRED.ANY / FP_ARITH_INST_RETIRED.SCALAR_DOUBLE",
++ "MetricGroup": "Flops;FpScalar;InsType",
++ "MetricName": "IpArith_Scalar_DP",
++ "PublicDescription": "Instructions per FP Arithmetic Scalar Double-Precision instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++ },
++ {
++ "BriefDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate)",
++ "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE )",
++ "MetricGroup": "Flops;FpVector;InsType",
++ "MetricName": "IpArith_AVX128",
++ "PublicDescription": "Instructions per FP Arithmetic AVX/SSE 128-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++ },
++ {
++ "BriefDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate)",
++ "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE )",
++ "MetricGroup": "Flops;FpVector;InsType",
++ "MetricName": "IpArith_AVX256",
++ "PublicDescription": "Instructions per FP Arithmetic AVX* 256-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++ },
++ {
++ "BriefDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate)",
++ "MetricExpr": "INST_RETIRED.ANY / ( FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE + FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE )",
++ "MetricGroup": "Flops;FpVector;InsType",
++ "MetricName": "IpArith_AVX512",
++ "PublicDescription": "Instructions per FP Arithmetic AVX 512-bit instruction (lower number means higher occurrence rate). May undercount due to FMA double counting."
++ },
+ {
+ "BriefDescription": "Total number of retired Instructions, Sample with: INST_RETIRED.PREC_DIST",
+ "MetricExpr": "INST_RETIRED.ANY",
+ "MetricGroup": "Summary;TmaL1",
+ "MetricName": "Instructions"
+ },
++ {
++ "BriefDescription": "Average number of Uops issued by front-end when it issued something",
++ "MetricExpr": "UOPS_ISSUED.ANY / cpu@UOPS_ISSUED.ANY\\,cmask\\=1@",
++ "MetricGroup": "Fed;FetchBW",
++ "MetricName": "Fetch_UpC"
++ },
+ {
+ "BriefDescription": "Fraction of Uops delivered by the DSB (aka Decoded ICache; or Uop Cache)",
+ "MetricExpr": "IDQ.DSB_UOPS / (IDQ.DSB_UOPS + IDQ.MITE_UOPS + IDQ.MS_UOPS)",
+- "MetricGroup": "DSB;FetchBW",
++ "MetricGroup": "DSB;Fed;FetchBW",
+ "MetricName": "DSB_Coverage"
+ },
+ {
+- "BriefDescription": "Actual Average Latency for L1 data-cache miss demand loads (in core cycles)",
++ "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset/see of/the Instruction_Fetch_BW Bottleneck.",
++ "MetricExpr": "(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) + ((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / CPU_CLK_UNHALTED.THREAD / 2) / #((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * CPU_CLK_UNHALTED.THREAD)) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * CPU_CLK_UNHALTED.THREAD)))",
++ "MetricGroup": "DSBmiss;Fed",
++ "MetricName": "DSB_Misses_Cost"
++ },
++ {
++ "BriefDescription": "Total penalty related to DSB (uop cache) misses - subset/see of/the Instruction_Fetch_BW Bottleneck.",
++ "MetricExpr": "(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) * (DSB2MITE_SWITCHES.PENALTY_CYCLES / CPU_CLK_UNHALTED.THREAD) / #(4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) + ((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )))) * (( IDQ.ALL_MITE_CYCLES_ANY_UOPS - IDQ.ALL_MITE_CYCLES_4_UOPS ) / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) / 2) / #((IDQ_UOPS_NOT_DELIVERED.CORE / (4 * ( ( CPU_CLK_UNHALTED
.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))) - (4 * IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE / (4 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ))))",
++ "MetricGroup": "DSBmiss;Fed_SMT",
++ "MetricName": "DSB_Misses_Cost_SMT"
++ },
++ {
++ "BriefDescription": "Number of Instructions per non-speculative DSB miss",
++ "MetricExpr": "INST_RETIRED.ANY / FRONTEND_RETIRED.ANY_DSB_MISS",
++ "MetricGroup": "DSBmiss;Fed",
++ "MetricName": "IpDSB_Miss_Ret"
++ },
++ {
++ "BriefDescription": "Fraction of branches that are non-taken conditionals",
++ "MetricExpr": "BR_INST_RETIRED.NOT_TAKEN / BR_INST_RETIRED.ALL_BRANCHES",
++ "MetricGroup": "Bad;Branches;CodeGen;PGO",
++ "MetricName": "Cond_NT"
++ },
++ {
++ "BriefDescription": "Fraction of branches that are taken conditionals",
++ "MetricExpr": "( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) / BR_INST_RETIRED.ALL_BRANCHES",
++ "MetricGroup": "Bad;Branches;CodeGen;PGO",
++ "MetricName": "Cond_TK"
++ },
++ {
++ "BriefDescription": "Fraction of branches that are CALL or RET",
++ "MetricExpr": "( BR_INST_RETIRED.NEAR_CALL + BR_INST_RETIRED.NEAR_RETURN ) / BR_INST_RETIRED.ALL_BRANCHES",
++ "MetricGroup": "Bad;Branches",
++ "MetricName": "CallRet"
++ },
++ {
++ "BriefDescription": "Fraction of branches that are unconditional (direct or indirect) jumps",
++ "MetricExpr": "(BR_INST_RETIRED.NEAR_TAKEN - ( BR_INST_RETIRED.CONDITIONAL - BR_INST_RETIRED.NOT_TAKEN ) - 2 * BR_INST_RETIRED.NEAR_CALL) / BR_INST_RETIRED.ALL_BRANCHES",
++ "MetricGroup": "Bad;Branches",
++ "MetricName": "Jump"
++ },
++ {
++ "BriefDescription": "Actual Average Latency for L1 data-cache miss demand load instructions (in core cycles)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / ( MEM_LOAD_RETIRED.L1_MISS + MEM_LOAD_RETIRED.FB_HIT )",
+- "MetricGroup": "MemoryBound;MemoryLat",
+- "MetricName": "Load_Miss_Real_Latency"
++ "MetricGroup": "Mem;MemoryBound;MemoryLat",
++ "MetricName": "Load_Miss_Real_Latency",
++ "PublicDescription": "Actual Average Latency for L1 data-cache miss demand load instructions (in core cycles). Latency may be overestimated for multi-load instructions - e.g. repeat strings."
+ },
+ {
+ "BriefDescription": "Memory-Level-Parallelism (average number of L1 miss demand load when there is at least one such miss. Per-Logical Processor)",
+ "MetricExpr": "L1D_PEND_MISS.PENDING / L1D_PEND_MISS.PENDING_CYCLES",
+- "MetricGroup": "MemoryBound;MemoryBW",
++ "MetricGroup": "Mem;MemoryBound;MemoryBW",
+ "MetricName": "MLP"
+ },
+- {
+- "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
+- "MetricConstraint": "NO_NMI_WATCHDOG",
+- "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * CORE_CLKS )",
+- "MetricGroup": "MemoryTLB",
+- "MetricName": "Page_Walks_Utilization"
+- },
+ {
+ "BriefDescription": "Average data fill bandwidth to the L1 data cache [GB / sec]",
+ "MetricExpr": "64 * L1D.REPLACEMENT / 1000000000 / duration_time",
+- "MetricGroup": "MemoryBW",
++ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "L1D_Cache_Fill_BW"
+ },
+ {
+ "BriefDescription": "Average data fill bandwidth to the L2 cache [GB / sec]",
+ "MetricExpr": "64 * L2_LINES_IN.ALL / 1000000000 / duration_time",
+- "MetricGroup": "MemoryBW",
++ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "L2_Cache_Fill_BW"
+ },
+ {
+ "BriefDescription": "Average per-core data fill bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * LONGEST_LAT_CACHE.MISS / 1000000000 / duration_time",
+- "MetricGroup": "MemoryBW",
++ "MetricGroup": "Mem;MemoryBW",
+ "MetricName": "L3_Cache_Fill_BW"
+ },
+ {
+ "BriefDescription": "Average per-core data access bandwidth to the L3 cache [GB / sec]",
+ "MetricExpr": "64 * OFFCORE_REQUESTS.ALL_REQUESTS / 1000000000 / duration_time",
+- "MetricGroup": "MemoryBW;Offcore",
++ "MetricGroup": "Mem;MemoryBW;Offcore",
+ "MetricName": "L3_Cache_Access_BW"
+ },
+ {
+ "BriefDescription": "L1 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1000 * MEM_LOAD_RETIRED.L1_MISS / INST_RETIRED.ANY",
+- "MetricGroup": "CacheMisses",
++ "MetricGroup": "Mem;CacheMisses",
+ "MetricName": "L1MPKI"
+ },
++ {
++ "BriefDescription": "L1 cache true misses per kilo instruction for all demand loads (including speculative)",
++ "MetricExpr": "1000 * L2_RQSTS.ALL_DEMAND_DATA_RD / INST_RETIRED.ANY",
++ "MetricGroup": "Mem;CacheMisses",
++ "MetricName": "L1MPKI_Load"
++ },
+ {
+ "BriefDescription": "L2 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1000 * MEM_LOAD_RETIRED.L2_MISS / INST_RETIRED.ANY",
+- "MetricGroup": "CacheMisses",
++ "MetricGroup": "Mem;Backend;CacheMisses",
+ "MetricName": "L2MPKI"
+ },
+ {
+ "BriefDescription": "L2 cache misses per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1000 * L2_RQSTS.MISS / INST_RETIRED.ANY",
+- "MetricGroup": "CacheMisses;Offcore",
++ "MetricGroup": "Mem;CacheMisses;Offcore",
+ "MetricName": "L2MPKI_All"
+ },
++ {
++ "BriefDescription": "L2 cache misses per kilo instruction for all demand loads (including speculative)",
++ "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_MISS / INST_RETIRED.ANY",
++ "MetricGroup": "Mem;CacheMisses",
++ "MetricName": "L2MPKI_Load"
++ },
+ {
+ "BriefDescription": "L2 cache hits per kilo instruction for all request types (including speculative)",
+ "MetricExpr": "1000 * ( L2_RQSTS.REFERENCES - L2_RQSTS.MISS ) / INST_RETIRED.ANY",
+- "MetricGroup": "CacheMisses",
++ "MetricGroup": "Mem;CacheMisses",
+ "MetricName": "L2HPKI_All"
+ },
++ {
++ "BriefDescription": "L2 cache hits per kilo instruction for all demand loads (including speculative)",
++ "MetricExpr": "1000 * L2_RQSTS.DEMAND_DATA_RD_HIT / INST_RETIRED.ANY",
++ "MetricGroup": "Mem;CacheMisses",
++ "MetricName": "L2HPKI_Load"
++ },
+ {
+ "BriefDescription": "L3 cache true misses per kilo instruction for retired demand loads",
+ "MetricExpr": "1000 * MEM_LOAD_RETIRED.L3_MISS / INST_RETIRED.ANY",
+- "MetricGroup": "CacheMisses",
++ "MetricGroup": "Mem;CacheMisses",
+ "MetricName": "L3MPKI"
+ },
++ {
++ "BriefDescription": "Fill Buffer (FB) true hits per kilo instructions for retired demand loads",
++ "MetricExpr": "1000 * MEM_LOAD_RETIRED.FB_HIT / INST_RETIRED.ANY",
++ "MetricGroup": "Mem;CacheMisses",
++ "MetricName": "FB_HPKI"
++ },
++ {
++ "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
++ "MetricConstraint": "NO_NMI_WATCHDOG",
++ "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * CPU_CLK_UNHALTED.THREAD )",
++ "MetricGroup": "Mem;MemoryTLB",
++ "MetricName": "Page_Walks_Utilization"
++ },
++ {
++ "BriefDescription": "Utilization of the core's Page Walker(s) serving STLB misses triggered by instruction/Load/Store accesses",
++ "MetricExpr": "( ITLB_MISSES.WALK_PENDING + DTLB_LOAD_MISSES.WALK_PENDING + DTLB_STORE_MISSES.WALK_PENDING + EPT.WALK_PENDING ) / ( 2 * ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) ) )",
++ "MetricGroup": "Mem;MemoryTLB_SMT",
++ "MetricName": "Page_Walks_Utilization_SMT"
++ },
+ {
+ "BriefDescription": "Rate of silent evictions from the L2 cache per Kilo instruction where the evicted lines are dropped (no writeback to L3 or memory)",
+ "MetricExpr": "1000 * L2_LINES_OUT.SILENT / INST_RETIRED.ANY",
+- "MetricGroup": "L2Evicts;Server",
++ "MetricGroup": "L2Evicts;Mem;Server",
+ "MetricName": "L2_Evictions_Silent_PKI"
+ },
+ {
+ "BriefDescription": "Rate of non silent evictions from the L2 cache per Kilo instruction",
+ "MetricExpr": "1000 * L2_LINES_OUT.NON_SILENT / INST_RETIRED.ANY",
+- "MetricGroup": "L2Evicts;Server",
++ "MetricGroup": "L2Evicts;Mem;Server",
+ "MetricName": "L2_Evictions_NonSilent_PKI"
+ },
+ {
+@@ -219,7 +532,7 @@
+ {
+ "BriefDescription": "Giga Floating Point Operations Per Second",
+ "MetricExpr": "( ( 1 * ( FP_ARITH_INST_RETIRED.SCALAR_SINGLE + FP_ARITH_INST_RETIRED.SCALAR_DOUBLE ) + 2 * FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE + 4 * ( FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE ) + 8 * ( FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE + FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE ) + 16 * FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE ) / 1000000000 ) / duration_time",
+- "MetricGroup": "Flops;HPC",
++ "MetricGroup": "Cor;Flops;HPC",
+ "MetricName": "GFLOPs"
+ },
+ {
+@@ -228,6 +541,48 @@
+ "MetricGroup": "Power",
+ "MetricName": "Turbo_Utilization"
+ },
++ {
++ "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0",
++ "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
++ "MetricGroup": "Power",
++ "MetricName": "Power_License0_Utilization",
++ "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes."
++ },
++ {
++ "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "CORE_POWER.LVL0_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++ "MetricGroup": "Power_SMT",
++ "MetricName": "Power_License0_Utilization_SMT",
++ "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for baseline license level 0. This includes non-AVX codes, SSE, AVX 128-bit, and low-current AVX 256-bit codes. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
++ {
++ "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1",
++ "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
++ "MetricGroup": "Power",
++ "MetricName": "Power_License1_Utilization",
++ "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions."
++ },
++ {
++ "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1. SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "CORE_POWER.LVL1_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++ "MetricGroup": "Power_SMT",
++ "MetricName": "Power_License1_Utilization_SMT",
++ "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 1. This includes high current AVX 256-bit instructions as well as low current AVX 512-bit instructions. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
++ {
++ "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX)",
++ "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / CPU_CLK_UNHALTED.THREAD",
++ "MetricGroup": "Power",
++ "MetricName": "Power_License2_Utilization",
++ "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). This includes high current AVX 512-bit instructions."
++ },
++ {
++ "BriefDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). SMT version; use when SMT is enabled and measuring per logical CPU.",
++ "MetricExpr": "CORE_POWER.LVL2_TURBO_LICENSE / 2 / ( ( CPU_CLK_UNHALTED.THREAD / 2 ) * ( 1 + CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / CPU_CLK_UNHALTED.REF_XCLK ) )",
++ "MetricGroup": "Power_SMT",
++ "MetricName": "Power_License2_Utilization_SMT",
++ "PublicDescription": "Fraction of Core cycles where the core was running with power-delivery for license level 2 (introduced in SKX). This includes high current AVX 512-bit instructions. SMT version; use when SMT is enabled and measuring per logical CPU."
++ },
+ {
+ "BriefDescription": "Fraction of cycles where both hardware Logical Processors were active",
+ "MetricExpr": "1 - CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE / ( CPU_CLK_UNHALTED.REF_XCLK_ANY / 2 ) if #SMT_on else 0",
+@@ -240,34 +595,46 @@
+ "MetricGroup": "OS",
+ "MetricName": "Kernel_Utilization"
+ },
++ {
++ "BriefDescription": "Cycles Per Instruction for the Operating System (OS) Kernel mode",
++ "MetricExpr": "CPU_CLK_UNHALTED.THREAD_P:k / INST_RETIRED.ANY_P:k",
++ "MetricGroup": "OS",
++ "MetricName": "Kernel_CPI"
++ },
+ {
+ "BriefDescription": "Average external Memory Bandwidth Use for reads and writes [GB / sec]",
+ "MetricExpr": "( 64 * ( uncore_imc@cas_count_read@ + uncore_imc@cas_count_write@ ) / 1000000000 ) / duration_time",
+- "MetricGroup": "HPC;MemoryBW;SoC",
++ "MetricGroup": "HPC;Mem;MemoryBW;SoC",
+ "MetricName": "DRAM_BW_Use"
+ },
+ {
+ "BriefDescription": "Average latency of data read request to external memory (in nanoseconds). Accounts for demand loads and L1/L2 prefetches",
+ "MetricExpr": "1000000000 * ( cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433@ / cha@event\\=0x35\\,umask\\=0x21\\,config\\=0x40433@ ) / ( cha_0@event\\=0x0@ / duration_time )",
+- "MetricGroup": "MemoryLat;SoC",
++ "MetricGroup": "Mem;MemoryLat;SoC",
+ "MetricName": "MEM_Read_Latency"
+ },
+ {
+ "BriefDescription": "Average number of parallel data read requests to external memory. Accounts for demand loads and L1/L2 prefetches",
+ "MetricExpr": "cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433@ / cha@event\\=0x36\\,umask\\=0x21\\,config\\=0x40433\\,thresh\\=1@",
+- "MetricGroup": "MemoryBW;SoC",
++ "MetricGroup": "Mem;MemoryBW;SoC",
+ "MetricName": "MEM_Parallel_Reads"
+ },
++ {
++ "BriefDescription": "Average latency of data read request to external DRAM memory [in nanoseconds]. Accounts for demand loads and L1/L2 data-read prefetches",
++ "MetricExpr": "1000000000 * ( UNC_M_RPQ_OCCUPANCY / UNC_M_RPQ_INSERTS ) / imc_0@event\\=0x0@",
++ "MetricGroup": "Mem;MemoryLat;SoC;Server",
++ "MetricName": "MEM_DRAM_Read_Latency"
++ },
+ {
+ "BriefDescription": "Average IO (network or disk) Bandwidth Use for Writes [GB / sec]",
+ "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_READ.PART3 ) * 4 / 1000000000 / duration_time",
+- "MetricGroup": "IoBW;SoC;Server",
++ "MetricGroup": "IoBW;Mem;SoC;Server",
+ "MetricName": "IO_Write_BW"
+ },
+ {
+ "BriefDescription": "Average IO (network or disk) Bandwidth Use for Reads [GB / sec]",
+ "MetricExpr": "( UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART0 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART1 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART2 + UNC_IIO_DATA_REQ_OF_CPU.MEM_WRITE.PART3 ) * 4 / 1000000000 / duration_time",
+- "MetricGroup": "IoBW;SoC;Server",
++ "MetricGroup": "IoBW;Mem;SoC;Server",
+ "MetricName": "IO_Read_BW"
+ },
+ {
+diff --git a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json b/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
+index 6ed92bc5c129b..06c5ca26ca3f3 100644
+--- a/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
++++ b/tools/perf/pmu-events/arch/x86/skylakex/uncore-other.json
+@@ -537,6 +537,18 @@
+ "PublicDescription": "Counts clockticks of the 1GHz trafiic controller clock in the IIO unit.",
+ "Unit": "IIO"
+ },
++ {
++ "BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0-3",
++ "Counter": "0,1,2,3",
++ "EventCode": "0xC2",
++ "EventName": "UNC_IIO_COMP_BUF_INSERTS.CMPD.ALL_PARTS",
++ "FCMask": "0x4",
++ "PerPkg": "1",
++ "PortMask": "0x0f",
++ "PublicDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0-3",
++ "UMask": "0x03",
++ "Unit": "IIO"
++ },
+ {
+ "BriefDescription": "PCIe Completion Buffer Inserts of completions with data: Part 0",
+ "Counter": "0,1,2,3",
+@@ -585,6 +597,17 @@
+ "UMask": "0x03",
+ "Unit": "IIO"
+ },
++ {
++ "BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0-3",
++ "Counter": "2,3",
++ "EventCode": "0xD5",
++ "EventName": "UNC_IIO_COMP_BUF_OCCUPANCY.CMPD.ALL_PARTS",
++ "FCMask": "0x04",
++ "PerPkg": "1",
++ "PublicDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0-3",
++ "UMask": "0x0f",
++ "Unit": "IIO"
++ },
+ {
+ "BriefDescription": "PCIe Completion Buffer occupancy of completions with data: Part 0",
+ "Counter": "2,3",
+diff --git a/tools/perf/tests/shell/test_arm_callgraph_fp.sh b/tools/perf/tests/shell/test_arm_callgraph_fp.sh
+new file mode 100755
+index 0000000000000..6ffbb27afabac
+--- /dev/null
++++ b/tools/perf/tests/shell/test_arm_callgraph_fp.sh
+@@ -0,0 +1,68 @@
++#!/bin/sh
++# Check Arm64 callgraphs are complete in fp mode
++# SPDX-License-Identifier: GPL-2.0
++
++lscpu | grep -q "aarch64" || exit 2
++
++if ! [ -x "$(command -v cc)" ]; then
++ echo "failed: no compiler, install gcc"
++ exit 2
++fi
++
++PERF_DATA=$(mktemp /tmp/__perf_test.perf.data.XXXXX)
++TEST_PROGRAM_SOURCE=$(mktemp /tmp/test_program.XXXXX.c)
++TEST_PROGRAM=$(mktemp /tmp/test_program.XXXXX)
++
++cleanup_files()
++{
++ rm -f $PERF_DATA
++ rm -f $TEST_PROGRAM_SOURCE
++ rm -f $TEST_PROGRAM
++}
++
++trap cleanup_files exit term int
++
++cat << EOF > $TEST_PROGRAM_SOURCE
++int a = 0;
++void leaf(void) {
++ for (;;)
++ a += a;
++}
++void parent(void) {
++ leaf();
++}
++int main(void) {
++ parent();
++ return 0;
++}
++EOF
++
++echo " + Compiling test program ($TEST_PROGRAM)..."
++
++CFLAGS="-g -O0 -fno-inline -fno-omit-frame-pointer"
++cc $CFLAGS $TEST_PROGRAM_SOURCE -o $TEST_PROGRAM || exit 1
++
++# Add a 1 second delay to skip samples that are not in the leaf() function
++perf record -o $PERF_DATA --call-graph fp -e cycles//u -D 1000 -- $TEST_PROGRAM 2> /dev/null &
++PID=$!
++
++echo " + Recording (PID=$PID)..."
++sleep 2
++echo " + Stopping perf-record..."
++
++kill $PID
++wait $PID
++
++# expected perf-script output:
++#
++# program
++# 728 leaf
++# 753 parent
++# 76c main
++# ...
++
++perf script -i $PERF_DATA -F comm,ip,sym | head -n4
++perf script -i $PERF_DATA -F comm,ip,sym | head -n4 | \
++ awk '{ if ($2 != "") sym[i++] = $2 } END { if (sym[0] != "leaf" ||
++ sym[1] != "parent" ||
++ sym[2] != "main") exit 1 }'
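
Since the script is added under tools/perf/tests/shell/, perf's shell-test runner should pick it up automatically. A hedged invocation sketch, assuming an arm64 machine and an in-tree perf build (anywhere else the script exits 2, which perf test reports as a skip):

    cd tools/perf && make
    # select the test by a substring of its description
    ./perf test -v 'callgraph'
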
+diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
+index 1acdf2fc31c59..3299fb0977b27 100644
+--- a/tools/testing/cxl/Kbuild
++++ b/tools/testing/cxl/Kbuild
+@@ -25,7 +25,7 @@ cxl_pmem-y += config_check.o
+
+ obj-m += cxl_core.o
+
+-cxl_core-y := $(CXL_CORE_SRC)/bus.o
++cxl_core-y := $(CXL_CORE_SRC)/port.o
+ cxl_core-y += $(CXL_CORE_SRC)/pmem.o
+ cxl_core-y += $(CXL_CORE_SRC)/regs.o
+ cxl_core-y += $(CXL_CORE_SRC)/memdev.o
+diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
+index 736d99006fb7a..f0a410962af0d 100644
+--- a/tools/testing/cxl/test/cxl.c
++++ b/tools/testing/cxl/test/cxl.c
+@@ -511,7 +511,7 @@ static __init int cxl_test_init(void)
+
+ for (i = 0; i < ARRAY_SIZE(cxl_root_port); i++) {
+ struct platform_device *bridge =
+- cxl_host_bridge[i / NR_CXL_ROOT_PORTS];
++ cxl_host_bridge[i % ARRAY_SIZE(cxl_host_bridge)];
+ struct platform_device *pdev;
+
+ pdev = platform_device_alloc("cxl_root_port", i);
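
The switch from i / NR_CXL_ROOT_PORTS to i % ARRAY_SIZE(cxl_host_bridge) spreads the mocked root ports round-robin across every host bridge and, unlike the division, can never index past the bridge array. A shell illustration of the mapping with hypothetical sizes (8 ports over 4 bridges):

    for i in $(seq 0 7); do
        echo "root_port $i -> host_bridge $((i % 4))"   # 0 1 2 3 0 1 2 3
    done
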
+diff --git a/tools/testing/selftests/bpf/prog_tests/bind_perm.c b/tools/testing/selftests/bpf/prog_tests/bind_perm.c
+index d0f06e40c16d0..eac71fbb24ce2 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bind_perm.c
++++ b/tools/testing/selftests/bpf/prog_tests/bind_perm.c
+@@ -1,13 +1,24 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <test_progs.h>
+-#include "bind_perm.skel.h"
+-
++#define _GNU_SOURCE
++#include <sched.h>
++#include <stdlib.h>
+ #include <sys/types.h>
+ #include <sys/socket.h>
+ #include <sys/capability.h>
+
++#include "test_progs.h"
++#include "bind_perm.skel.h"
++
+ static int duration;
+
++static int create_netns(void)
++{
++ if (!ASSERT_OK(unshare(CLONE_NEWNET), "create netns"))
++ return -1;
++
++ return 0;
++}
++
+ void try_bind(int family, int port, int expected_errno)
+ {
+ struct sockaddr_storage addr = {};
+@@ -75,6 +86,9 @@ void test_bind_perm(void)
+ struct bind_perm *skel;
+ int cgroup_fd;
+
++ if (create_netns())
++ return;
++
+ cgroup_fd = test__join_cgroup("/bind_perm");
+ if (CHECK(cgroup_fd < 0, "cg-join", "errno %d", errno))
+ return;
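
unshare(CLONE_NEWNET) moves the test into a private network namespace, so the ports it binds can no longer collide with anything else running on the host. The same isolation can be reproduced by hand; a sketch assuming util-linux's unshare and the selftests' test_progs runner:

    # run just this test inside a fresh netns (needs root)
    unshare --net ./test_progs -t bind_perm
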
+diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
+new file mode 100644
+index 0000000000000..5bb11fe595a43
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef __BPF_MISC_H__
++#define __BPF_MISC_H__
++
++#if defined(__TARGET_ARCH_x86)
++#define SYSCALL_WRAPPER 1
++#define SYS_PREFIX "__x64_"
++#elif defined(__TARGET_ARCH_s390)
++#define SYSCALL_WRAPPER 1
++#define SYS_PREFIX "__s390x_"
++#elif defined(__TARGET_ARCH_arm64)
++#define SYSCALL_WRAPPER 1
++#define SYS_PREFIX "__arm64_"
++#else
++#define SYSCALL_WRAPPER 0
++#define SYS_PREFIX "__se_"
++#endif
++
++#endif
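
The SYS_PREFIX values mirror how each architecture names its syscall wrappers (e.g. __x64_sys_* on x86-64), which is the symbol a kprobe must attach to. A quick sanity check on a running kernel (x86-64 shown; the syscall picked is just an example):

    grep -w __x64_sys_nanosleep /proc/kallsyms
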
+diff --git a/tools/testing/selftests/bpf/progs/test_probe_user.c b/tools/testing/selftests/bpf/progs/test_probe_user.c
+index 8812a90da4eb8..702578a5e496d 100644
+--- a/tools/testing/selftests/bpf/progs/test_probe_user.c
++++ b/tools/testing/selftests/bpf/progs/test_probe_user.c
+@@ -7,20 +7,7 @@
+
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
+-
+-#if defined(__TARGET_ARCH_x86)
+-#define SYSCALL_WRAPPER 1
+-#define SYS_PREFIX "__x64_"
+-#elif defined(__TARGET_ARCH_s390)
+-#define SYSCALL_WRAPPER 1
+-#define SYS_PREFIX "__s390x_"
+-#elif defined(__TARGET_ARCH_arm64)
+-#define SYSCALL_WRAPPER 1
+-#define SYS_PREFIX "__arm64_"
+-#else
+-#define SYSCALL_WRAPPER 0
+-#define SYS_PREFIX ""
+-#endif
++#include "bpf_misc.h"
+
+ static struct sockaddr_in old;
+
+diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+index 81b57b9aaaeae..7967348b11af6 100644
+--- a/tools/testing/selftests/bpf/progs/test_sock_fields.c
++++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c
+@@ -113,7 +113,7 @@ static void tpcpy(struct bpf_tcp_sock *dst,
+
+ #define RET_LOG() ({ \
+ linum = __LINE__; \
+- bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_NOEXIST); \
++ bpf_map_update_elem(&linum_map, &linum_idx, &linum, BPF_ANY); \
+ return CG_OK; \
+ })
+
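
BPF_ANY means create-or-update, so logging a line number a second time no longer fails the way BPF_NOEXIST did on an existing key. The same flag is exposed by bpftool for manual map updates; a sketch with a made-up map id and little-endian u32 key/value bytes:

    bpftool map update id 42 key 0 0 0 0 value 7 0 0 0 any
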
+diff --git a/tools/testing/selftests/bpf/test_lirc_mode2.sh b/tools/testing/selftests/bpf/test_lirc_mode2.sh
+index ec4e15948e406..5252b91f48a18 100755
+--- a/tools/testing/selftests/bpf/test_lirc_mode2.sh
++++ b/tools/testing/selftests/bpf/test_lirc_mode2.sh
+@@ -3,6 +3,7 @@
+
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
++ret=$ksft_skip
+
+ msg="skip all tests:"
+ if [ $UID != 0 ]; then
+@@ -25,7 +26,7 @@ do
+ fi
+ done
+
+-if [ -n $LIRCDEV ];
++if [ -n "$LIRCDEV" ];
+ then
+ TYPE=lirc_mode2
+ ./test_lirc_mode2_user $LIRCDEV $INPUTDEV
+@@ -36,3 +37,5 @@ then
+ echo -e ${GREEN}"PASS: $TYPE"${NC}
+ fi
+ fi
++
++exit $ret
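
Besides propagating the kselftest SKIP code, the quoting fix matters on its own: with LIRCDEV empty, the unquoted test collapses to the one-argument form [ -n ], which is always true because the string "-n" itself is non-empty. A minimal demonstration:

    LIRCDEV=""
    [ -n $LIRCDEV ] && echo "unquoted: wrongly true"
    [ -n "$LIRCDEV" ] || echo "quoted: correctly false"
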
+diff --git a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+index b497bb85b667f..6c69c42b1d607 100755
+--- a/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
++++ b/tools/testing/selftests/bpf/test_lwt_ip_encap.sh
+@@ -120,6 +120,14 @@ setup()
+ ip netns exec ${NS2} sysctl -wq net.ipv4.conf.default.rp_filter=0
+ ip netns exec ${NS3} sysctl -wq net.ipv4.conf.default.rp_filter=0
+
++ # disable IPv6 DAD because it sometimes takes too long and fails tests
++ ip netns exec ${NS1} sysctl -wq net.ipv6.conf.all.accept_dad=0
++ ip netns exec ${NS2} sysctl -wq net.ipv6.conf.all.accept_dad=0
++ ip netns exec ${NS3} sysctl -wq net.ipv6.conf.all.accept_dad=0
++ ip netns exec ${NS1} sysctl -wq net.ipv6.conf.default.accept_dad=0
++ ip netns exec ${NS2} sysctl -wq net.ipv6.conf.default.accept_dad=0
++ ip netns exec ${NS3} sysctl -wq net.ipv6.conf.default.accept_dad=0
++
+ ip link add veth1 type veth peer name veth2
+ ip link add veth3 type veth peer name veth4
+ ip link add veth5 type veth peer name veth6
+@@ -289,7 +297,7 @@ test_ping()
+ ip netns exec ${NS1} ping -c 1 -W 1 -I veth1 ${IPv4_DST} 2>&1 > /dev/null
+ RET=$?
+ elif [ "${PROTO}" == "IPv6" ] ; then
+- ip netns exec ${NS1} ping6 -c 1 -W 6 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
++ ip netns exec ${NS1} ping6 -c 1 -W 1 -I veth1 ${IPv6_DST} 2>&1 > /dev/null
+ RET=$?
+ else
+ echo " test_ping: unknown PROTO: ${PROTO}"
+diff --git a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
+index 05f8727409997..cc57cb87e65f6 100755
+--- a/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
++++ b/tools/testing/selftests/bpf/test_xdp_redirect_multi.sh
+@@ -32,6 +32,11 @@ DRV_MODE="xdpgeneric xdpdrv xdpegress"
+ PASS=0
+ FAIL=0
+ LOG_DIR=$(mktemp -d)
++declare -a NS
++NS[0]="ns0-$(mktemp -u XXXXXX)"
++NS[1]="ns1-$(mktemp -u XXXXXX)"
++NS[2]="ns2-$(mktemp -u XXXXXX)"
++NS[3]="ns3-$(mktemp -u XXXXXX)"
+
+ test_pass()
+ {
+@@ -47,11 +52,9 @@ test_fail()
+
+ clean_up()
+ {
+- for i in $(seq $NUM); do
+- ip link del veth$i 2> /dev/null
+- ip netns del ns$i 2> /dev/null
++ for i in $(seq 0 $NUM); do
++ ip netns del ${NS[$i]} 2> /dev/null
+ done
+- ip netns del ns0 2> /dev/null
+ }
+
+ # Kselftest framework requirement - SKIP code is 4.
+@@ -79,23 +82,22 @@ setup_ns()
+ mode="xdpdrv"
+ fi
+
+- ip netns add ns0
++ ip netns add ${NS[0]}
+ for i in $(seq $NUM); do
+- ip netns add ns$i
+- ip -n ns$i link add veth0 index 2 type veth \
+- peer name veth$i netns ns0 index $((1 + $i))
+- ip -n ns0 link set veth$i up
+- ip -n ns$i link set veth0 up
+-
+- ip -n ns$i addr add 192.0.2.$i/24 dev veth0
+- ip -n ns$i addr add 2001:db8::$i/64 dev veth0
++ ip netns add ${NS[$i]}
++ ip -n ${NS[$i]} link add veth0 type veth peer name veth$i netns ${NS[0]}
++ ip -n ${NS[$i]} link set veth0 up
++ ip -n ${NS[0]} link set veth$i up
++
++ ip -n ${NS[$i]} addr add 192.0.2.$i/24 dev veth0
++ ip -n ${NS[$i]} addr add 2001:db8::$i/64 dev veth0
+ # Add a neigh entry for IPv4 ping test
+- ip -n ns$i neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
+- ip -n ns$i link set veth0 $mode obj \
++ ip -n ${NS[$i]} neigh add 192.0.2.253 lladdr 00:00:00:00:00:01 dev veth0
++ ip -n ${NS[$i]} link set veth0 $mode obj \
+ xdp_dummy.o sec xdp &> /dev/null || \
+ { test_fail "Unable to load dummy xdp" && exit 1; }
+ IFACES="$IFACES veth$i"
+- veth_mac[$i]=$(ip -n ns0 link show veth$i | awk '/link\/ether/ {print $2}')
++ veth_mac[$i]=$(ip -n ${NS[0]} link show veth$i | awk '/link\/ether/ {print $2}')
+ done
+ }
+
+@@ -104,10 +106,10 @@ do_egress_tests()
+ local mode=$1
+
+ # mac test
+- ip netns exec ns2 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
+- ip netns exec ns3 tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
++ ip netns exec ${NS[2]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-2_${mode}.log &
++ ip netns exec ${NS[3]} tcpdump -e -i veth0 -nn -l -e &> ${LOG_DIR}/mac_ns1-3_${mode}.log &
+ sleep 0.5
+- ip netns exec ns1 ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
++ ip netns exec ${NS[1]} ping 192.0.2.254 -i 0.1 -c 4 &> /dev/null
+ sleep 0.5
+ pkill tcpdump
+
+@@ -123,18 +125,18 @@ do_ping_tests()
+ local mode=$1
+
+ # ping6 test: echo request should be redirect back to itself, not others
+- ip netns exec ns1 ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
++ ip netns exec ${NS[1]} ip neigh add 2001:db8::2 dev veth0 lladdr 00:00:00:00:00:02
+
+- ip netns exec ns1 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
+- ip netns exec ns2 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
+- ip netns exec ns3 tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
++ ip netns exec ${NS[1]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-1_${mode}.log &
++ ip netns exec ${NS[2]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-2_${mode}.log &
++ ip netns exec ${NS[3]} tcpdump -i veth0 -nn -l -e &> ${LOG_DIR}/ns1-3_${mode}.log &
+ sleep 0.5
+ # ARP test
+- ip netns exec ns1 arping -q -c 2 -I veth0 192.0.2.254
++ ip netns exec ${NS[1]} arping -q -c 2 -I veth0 192.0.2.254
+ # IPv4 test
+- ip netns exec ns1 ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
++ ip netns exec ${NS[1]} ping 192.0.2.253 -i 0.1 -c 4 &> /dev/null
+ # IPv6 test
+- ip netns exec ns1 ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
++ ip netns exec ${NS[1]} ping6 2001:db8::2 -i 0.1 -c 2 &> /dev/null
+ sleep 0.5
+ pkill tcpdump
+
+@@ -180,7 +182,7 @@ do_tests()
+ xdpgeneric) drv_p="-S";;
+ esac
+
+- ip netns exec ns0 ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
++ ip netns exec ${NS[0]} ./xdp_redirect_multi $drv_p $IFACES &> ${LOG_DIR}/xdp_redirect_${mode}.log &
+ xdp_pid=$!
+ sleep 1
+ if ! ps -p $xdp_pid > /dev/null; then
+@@ -197,10 +199,10 @@ do_tests()
+ kill $xdp_pid
+ }
+
+-trap clean_up EXIT
+-
+ check_env
+
++trap clean_up EXIT
++
+ for mode in ${DRV_MODE}; do
+ setup_ns $mode
+ do_tests $mode
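# A sketch (sh) of the two hardening patterns applied above; names are
# illustrative and the bodies are condensed from the script. Randomized
# namespace names keep parallel test runs from colliding, and arming the
# EXIT trap only after check_env succeeds means a SKIP exit cannot tear
# down namespaces belonging to another run.
NS0="ns0-$(mktemp -u XXXXXX)"           # unique suffix, e.g. ns0-a1B2c3
clean_up() { ip netns del "$NS0" 2> /dev/null; }
check_env                               # may exit 4 (SKIP) while no state exists
trap clean_up EXIT                      # only now is there state to clean up
ip netns add "$NS0"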
+diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
+index 0a5d23da486df..ffa5502ad95ed 100644
+--- a/tools/testing/selftests/bpf/xdpxceiver.c
++++ b/tools/testing/selftests/bpf/xdpxceiver.c
+@@ -906,7 +906,10 @@ static bool rx_stats_are_valid(struct ifobject *ifobject)
+ return true;
+ case STAT_TEST_RX_FULL:
+ xsk_stat = stats.rx_ring_full;
+- expected_stat -= RX_FULL_RXQSIZE;
++ if (ifobject->umem->num_frames < XSK_RING_PROD__DEFAULT_NUM_DESCS)
++ expected_stat = ifobject->umem->num_frames - RX_FULL_RXQSIZE;
++ else
++ expected_stat = XSK_RING_PROD__DEFAULT_NUM_DESCS - RX_FULL_RXQSIZE;
+ break;
+ case STAT_TEST_RX_FILL_EMPTY:
+ xsk_stat = stats.rx_fill_ring_empty_descs;
+diff --git a/tools/testing/selftests/lkdtm/config b/tools/testing/selftests/lkdtm/config
+index a26a3fa9e9255..8bd847f0463cd 100644
+--- a/tools/testing/selftests/lkdtm/config
++++ b/tools/testing/selftests/lkdtm/config
+@@ -6,6 +6,7 @@ CONFIG_HARDENED_USERCOPY=y
+ # CONFIG_HARDENED_USERCOPY_FALLBACK is not set
+ CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
+ CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
++CONFIG_UBSAN=y
+ CONFIG_UBSAN_BOUNDS=y
+ CONFIG_UBSAN_TRAP=y
+ CONFIG_STACKPROTECTOR_STRONG=y
+diff --git a/tools/testing/selftests/net/af_unix/test_unix_oob.c b/tools/testing/selftests/net/af_unix/test_unix_oob.c
+index 3dece8b292536..b57e91e1c3f28 100644
+--- a/tools/testing/selftests/net/af_unix/test_unix_oob.c
++++ b/tools/testing/selftests/net/af_unix/test_unix_oob.c
+@@ -218,10 +218,10 @@ main(int argc, char **argv)
+
+ /* Test 1:
+ * verify that SIGURG is
+- * delivered and 63 bytes are
+- * read and oob is '@'
++ * delivered, 63 bytes are
++ * read, oob is '@', and POLLPRI works.
+ */
+- wait_for_data(pfd, POLLIN | POLLPRI);
++ wait_for_data(pfd, POLLPRI);
+ read_oob(pfd, &oob);
+ len = read_data(pfd, buf, 1024);
+ if (!signal_recvd || len != 63 || oob != '@') {
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+index f0f4ab96b8f3e..621af6895f4d5 100755
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+@@ -432,6 +432,8 @@ do_transfer()
+ local stat_ackrx_last_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableACKRX")
+ local stat_cookietx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesSent")
+ local stat_cookierx_last=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesRecv")
++ local stat_csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
++ local stat_csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
+
+ timeout ${timeout_test} \
+ ip netns exec ${listener_ns} \
+@@ -524,6 +526,23 @@ do_transfer()
+ fi
+ fi
+
++ if $checksum; then
++ local csum_err_s=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
++ local csum_err_c=$(get_mib_counter "${connector_ns}" "MPTcpExtDataCsumErr")
++
++ local csum_err_s_nr=$((csum_err_s - stat_csum_err_s))
++ if [ $csum_err_s_nr -gt 0 ]; then
++ printf "[ FAIL ]\nserver got $csum_err_s_nr data checksum error[s]"
++ rets=1
++ fi
++
++ local csum_err_c_nr=$((csum_err_c - stat_csum_err_c))
++ if [ $csum_err_c_nr -gt 0 ]; then
++ printf "[ FAIL ]\nclient got $csum_err_c_nr data checksum error[s]"
++ retc=1
++ fi
++ fi
++
+ if [ $retc -eq 0 ] && [ $rets -eq 0 ]; then
+ printf "[ OK ]"
+ fi
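# A sketch (sh) of the counter-delta pattern added above. The MIB
# counters are cumulative per namespace, so the test snapshots
# MPTcpExtDataCsumErr before the transfer and fails only on a positive
# delta; get_mib_counter is the helper defined earlier in this script,
# and run_transfer is a hypothetical stand-in for the transfer block.
before=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
run_transfer
after=$(get_mib_counter "${listener_ns}" "MPTcpExtDataCsumErr")
if [ $((after - before)) -gt 0 ]; then
        echo "FAIL: $((after - before)) new data checksum errors"
fi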
+diff --git a/tools/testing/selftests/net/test_vxlan_under_vrf.sh b/tools/testing/selftests/net/test_vxlan_under_vrf.sh
+index ea5a7a808f120..1fd1250ebc667 100755
+--- a/tools/testing/selftests/net/test_vxlan_under_vrf.sh
++++ b/tools/testing/selftests/net/test_vxlan_under_vrf.sh
+@@ -120,11 +120,11 @@ echo "[ OK ]"
+
+ # Move the underlay to a non-default VRF
+ ip -netns hv-1 link set veth0 vrf vrf-underlay
+-ip -netns hv-1 link set veth0 down
+-ip -netns hv-1 link set veth0 up
++ip -netns hv-1 link set vxlan0 down
++ip -netns hv-1 link set vxlan0 up
+ ip -netns hv-2 link set veth0 vrf vrf-underlay
+-ip -netns hv-2 link set veth0 down
+-ip -netns hv-2 link set veth0 up
++ip -netns hv-2 link set vxlan0 down
++ip -netns hv-2 link set vxlan0 up
+
+ echo -n "Check VM connectivity through VXLAN (underlay in a VRF) "
+ ip netns exec vm-1 ping -c 1 -W 1 10.0.0.2 &> /dev/null || (echo "[FAIL]"; false)
+diff --git a/tools/testing/selftests/net/timestamping.c b/tools/testing/selftests/net/timestamping.c
+index aee631c5284eb..044bc0e9ed81a 100644
+--- a/tools/testing/selftests/net/timestamping.c
++++ b/tools/testing/selftests/net/timestamping.c
+@@ -325,8 +325,8 @@ int main(int argc, char **argv)
+ struct ifreq device;
+ struct ifreq hwtstamp;
+ struct hwtstamp_config hwconfig, hwconfig_requested;
+- struct so_timestamping so_timestamping_get = { 0, -1 };
+- struct so_timestamping so_timestamping = { 0, -1 };
++ struct so_timestamping so_timestamping_get = { 0, 0 };
++ struct so_timestamping so_timestamping = { 0, 0 };
+ struct sockaddr_in addr;
+ struct ip_mreq imr;
+ struct in_addr iaddr;
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index 6e468e0f42f78..5d70b04c482c9 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -683,6 +683,9 @@ TEST_F(tls, splice_cmsg_to_pipe)
+ char buf[10];
+ int p[2];
+
++ if (self->notls)
++ SKIP(return, "no TLS support");
++
+ ASSERT_GE(pipe(p), 0);
+ EXPECT_EQ(tls_send_cmsg(self->fd, 100, test_str, send_len, 0), 10);
+ EXPECT_EQ(splice(self->cfd, NULL, p[1], NULL, send_len, 0), -1);
+@@ -703,6 +706,9 @@ TEST_F(tls, splice_dec_cmsg_to_pipe)
+ char buf[10];
+ int p[2];
+
++ if (self->notls)
++ SKIP(return, "no TLS support");
++
+ ASSERT_GE(pipe(p), 0);
+ EXPECT_EQ(tls_send_cmsg(self->fd, 100, test_str, send_len, 0), 10);
+ EXPECT_EQ(recv(self->cfd, buf, send_len, 0), -1);
+diff --git a/tools/testing/selftests/rcutorture/bin/torture.sh b/tools/testing/selftests/rcutorture/bin/torture.sh
+index eae88aacca2aa..b2929fb15f7e6 100755
+--- a/tools/testing/selftests/rcutorture/bin/torture.sh
++++ b/tools/testing/selftests/rcutorture/bin/torture.sh
+@@ -71,8 +71,8 @@ usage () {
+ echo " --configs-rcutorture \"config-file list w/ repeat factor (3*TINY01)\""
+ echo " --configs-locktorture \"config-file list w/ repeat factor (10*LOCK01)\""
+ echo " --configs-scftorture \"config-file list w/ repeat factor (2*CFLIST)\""
+- echo " --doall"
+- echo " --doallmodconfig / --do-no-allmodconfig"
++ echo " --do-all"
++ echo " --do-allmodconfig / --do-no-allmodconfig"
+ echo " --do-clocksourcewd / --do-no-clocksourcewd"
+ echo " --do-kasan / --do-no-kasan"
+ echo " --do-kcsan / --do-no-kcsan"
+diff --git a/tools/testing/selftests/sgx/Makefile b/tools/testing/selftests/sgx/Makefile
+index 2956584e1e37f..75af864e07b65 100644
+--- a/tools/testing/selftests/sgx/Makefile
++++ b/tools/testing/selftests/sgx/Makefile
+@@ -4,7 +4,7 @@ include ../lib.mk
+
+ .PHONY: all clean
+
+-CAN_BUILD_X86_64 := $(shell ../x86/check_cc.sh $(CC) \
++CAN_BUILD_X86_64 := $(shell ../x86/check_cc.sh "$(CC)" \
+ ../x86/trivial_64bit_program.c)
+
+ ifndef OBJCOPY
+diff --git a/tools/testing/selftests/sgx/load.c b/tools/testing/selftests/sgx/load.c
+index 9d4322c946e2b..006b464c8fc94 100644
+--- a/tools/testing/selftests/sgx/load.c
++++ b/tools/testing/selftests/sgx/load.c
+@@ -21,7 +21,7 @@
+
+ void encl_delete(struct encl *encl)
+ {
+- struct encl_segment *heap_seg = &encl->segment_tbl[encl->nr_segments - 1];
++ struct encl_segment *heap_seg;
+
+ if (encl->encl_base)
+ munmap((void *)encl->encl_base, encl->encl_size);
+@@ -32,10 +32,11 @@ void encl_delete(struct encl *encl)
+ if (encl->fd)
+ close(encl->fd);
+
+- munmap(heap_seg->src, heap_seg->size);
+-
+- if (encl->segment_tbl)
++ if (encl->segment_tbl) {
++ heap_seg = &encl->segment_tbl[encl->nr_segments - 1];
++ munmap(heap_seg->src, heap_seg->size);
+ free(encl->segment_tbl);
++ }
+
+ memset(encl, 0, sizeof(*encl));
+ }
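/*
 * A self-contained sketch (C) of the teardown rule the encl_delete()
 * hunk above enforces: derive nothing from a possibly-NULL table before
 * checking it. The struct names are illustrative, not the selftest's.
 */
#include <stdlib.h>

struct seg { void *src; };
struct obj { struct seg *tbl; size_t n; };

static void obj_delete(struct obj *o)
{
	/* Computing &o->tbl[o->n - 1] before this check would fault
	 * whenever setup failed early and tbl is still NULL. */
	if (o->tbl) {
		free(o->tbl[o->n - 1].src);
		free(o->tbl);
	}
}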
+diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
+index 370c4995f7c4a..b0bd95a4730d5 100644
+--- a/tools/testing/selftests/sgx/main.c
++++ b/tools/testing/selftests/sgx/main.c
+@@ -147,6 +147,7 @@ static bool setup_test_encl(unsigned long heap_size, struct encl *encl,
+ if (!encl_load("test_encl.elf", encl, heap_size)) {
+ encl_delete(encl);
+ TH_LOG("Failed to load the test enclave.\n");
++ return false;
+ }
+
+ if (!encl_measure(encl))
+@@ -185,8 +186,6 @@ static bool setup_test_encl(unsigned long heap_size, struct encl *encl,
+ return true;
+
+ err:
+- encl_delete(encl);
+-
+ for (i = 0; i < encl->nr_segments; i++) {
+ seg = &encl->segment_tbl[i];
+
+@@ -207,6 +206,8 @@ err:
+
+ TH_LOG("Failed to initialize the test enclave.\n");
+
++ encl_delete(encl);
++
+ return false;
+ }
+
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index a14b5b8008970..1530c3e0242ef 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -51,9 +51,9 @@ TEST_GEN_FILES += split_huge_page_test
+ TEST_GEN_FILES += ksm_tests
+
+ ifeq ($(MACHINE),x86_64)
+-CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_32bit_program.c -m32)
+-CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_64bit_program.c)
+-CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_program.c -no-pie)
++CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_32bit_program.c -m32)
++CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_64bit_program.c)
++CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh "$(CC)" ../x86/trivial_program.c -no-pie)
+
+ TARGETS := protection_keys
+ BINARIES_32 := $(TARGETS:%=%_32)
+diff --git a/tools/testing/selftests/vm/userfaultfd.c b/tools/testing/selftests/vm/userfaultfd.c
+index 3fc1d2ee29485..c964bfe9fbcda 100644
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -120,6 +120,9 @@ struct uffd_stats {
+ ~(unsigned long)(sizeof(unsigned long long) \
+ - 1)))
+
++#define swap(a, b) \
++ do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)
++
+ const char *examples =
+ "# Run anonymous memory test on 100MiB region with 99999 bounces:\n"
+ "./userfaultfd anon 100 99999\n\n"
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index 8a1f62ab3c8e6..53df7d3893d31 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -6,9 +6,9 @@ include ../lib.mk
+ .PHONY: all all_32 all_64 warn_32bit_failure clean
+
+ UNAME_M := $(shell uname -m)
+-CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
+-CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
+-CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh $(CC) trivial_program.c -no-pie)
++CAN_BUILD_I386 := $(shell ./check_cc.sh "$(CC)" trivial_32bit_program.c -m32)
++CAN_BUILD_X86_64 := $(shell ./check_cc.sh "$(CC)" trivial_64bit_program.c)
++CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh "$(CC)" trivial_program.c -no-pie)
+
+ TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
+ check_initial_reg_state sigreturn iopl ioperm \
+diff --git a/tools/testing/selftests/x86/check_cc.sh b/tools/testing/selftests/x86/check_cc.sh
+index 3e2089c8cf549..8c669c0d662ee 100755
+--- a/tools/testing/selftests/x86/check_cc.sh
++++ b/tools/testing/selftests/x86/check_cc.sh
+@@ -7,7 +7,7 @@ CC="$1"
+ TESTPROG="$2"
+ shift 2
+
+-if "$CC" -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then
++if [ -n "$CC" ] && $CC -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then
+ echo 1
+ else
+ echo 0
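# A sketch (sh) of the quoting asymmetry the check_cc.sh hunks rely on.
# A multi-word compiler such as CC="ccache gcc" must travel into the
# script as one argument (hence "$CC" at the Makefile call sites), then
# word-split back into wrapper plus command when invoked (hence the now
# unquoted $CC here); the [ -n "$CC" ] guard avoids executing an empty
# string when CC is unset.
CC="ccache gcc"                        # illustrative multi-word value
set -- "$CC" trivial.c                 # quoted: arrives as a single $1
if [ -n "$1" ] && $1 -o /dev/null "$2" -O0 2>/dev/null; then
        echo 1
else
        echo 0
fi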
+diff --git a/tools/tracing/rtla/src/osnoise_hist.c b/tools/tracing/rtla/src/osnoise_hist.c
+index 52c053cc1789d..e88f5c870141e 100644
+--- a/tools/tracing/rtla/src/osnoise_hist.c
++++ b/tools/tracing/rtla/src/osnoise_hist.c
+@@ -782,7 +782,7 @@ int osnoise_hist_main(int argc, char *argv[])
+ return_value = 0;
+
+ if (!tracefs_trace_is_on(trace->inst)) {
+- printf("rtla timelat hit stop tracing\n");
++ printf("rtla osnoise hit stop tracing\n");
+ if (params->trace_output) {
+ printf(" Saving trace to %s\n", params->trace_output);
+ save_trace_to_file(record->trace.inst, params->trace_output);
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 0afc016cc54d4..246168310a754 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -117,6 +117,8 @@ EXPORT_SYMBOL_GPL(kvm_debugfs_dir);
+
+ static const struct file_operations stat_fops_per_vm;
+
++static struct file_operations kvm_chardev_ops;
++
+ static long kvm_vcpu_ioctl(struct file *file, unsigned int ioctl,
+ unsigned long arg);
+ #ifdef CONFIG_KVM_COMPAT
+@@ -1137,6 +1139,16 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ preempt_notifier_inc();
+ kvm_init_pm_notifier(kvm);
+
++ /*
++ * When the fd passed to this ioctl() is opened it pins the module,
++ * but try_module_get() also prevents getting a reference if the module
++ * is in MODULE_STATE_GOING (e.g. if someone ran "rmmod --wait").
++ */
++ if (!try_module_get(kvm_chardev_ops.owner)) {
++ r = -ENODEV;
++ goto out_err;
++ }
++
+ return kvm;
+
+ out_err:
+@@ -1226,6 +1238,7 @@ static void kvm_destroy_vm(struct kvm *kvm)
+ preempt_notifier_dec();
+ hardware_disable_all();
+ mmdrop(mm);
++ module_put(kvm_chardev_ops.owner);
+ }
+
+ void kvm_get_kvm(struct kvm *kvm)
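/*
 * The hunk above reduced to its acquire/release pair -- a sketch
 * (C, kernel-style fragment), not a drop-in. try_module_get() refuses
 * new references once the module has entered MODULE_STATE_GOING, so a
 * VM created during an unload fails cleanly with -ENODEV instead of
 * outliving the module's code.
 */
	if (!try_module_get(kvm_chardev_ops.owner))    /* .owner is THIS_MODULE */
		return ERR_PTR(-ENODEV);
	/* ... while the VM exists, kvm.ko cannot be freed under it ... */
	module_put(kvm_chardev_ops.owner);             /* paired in kvm_destroy_vm() */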
+diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
+index ce878f4be4daa..1621f8efd9616 100644
+--- a/virt/kvm/pfncache.c
++++ b/virt/kvm/pfncache.c
+@@ -191,6 +191,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+ gpc->uhva = gfn_to_hva_memslot(gpc->memslot, gfn);
+
+ if (kvm_is_error_hva(gpc->uhva)) {
++ gpc->pfn = KVM_PFN_ERR_FAULT;
+ ret = -EFAULT;
+ goto out;
+ }
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-03-28 22:06 Mike Pagano
From: Mike Pagano @ 2022-03-28 22:06 UTC
To: gentoo-commits
commit: 58d68cf3c08d4109201e1a36068a7e366d6dde84
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 28 22:04:23 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 28 22:04:23 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=58d68cf3
Revert swiotlb: rework fix info leak with DMA_FROM_DEVICE
Bug: https://bugs.gentoo.org/835513
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
...rework-fix-info-leak-with-dma_from_device.patch | 187 +++++++++++++++++++++
2 files changed, 191 insertions(+)
diff --git a/0000_README b/0000_README
index 684989ae..19269be2 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
From: https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
Desc: mt76: mt7921e: fix possible probe failure after reboot
+Patch: 2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git
+Desc: Revert swiotlb: rework fix info leak with DMA_FROM_DEVICE
+
Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
From: https://bugs.gentoo.org/710790
Desc: tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
diff --git a/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch b/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
new file mode 100644
index 00000000..69476ab1
--- /dev/null
+++ b/2410_revert-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
@@ -0,0 +1,187 @@
+From bddac7c1e02ba47f0570e494c9289acea3062cc1 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds@linux-foundation.org>
+Date: Sat, 26 Mar 2022 10:42:04 -0700
+Subject: Revert "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Linus Torvalds <torvalds@linux-foundation.org>
+
+commit bddac7c1e02ba47f0570e494c9289acea3062cc1 upstream.
+
+This reverts commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13.
+
+It turns out this breaks at least the ath9k wireless driver, and
+possibly others.
+
+What the ath9k driver does on packet receive is to set up the DMA
+transfer with:
+
+ int ath_rx_init(..)
+ ..
+ bf->bf_buf_addr = dma_map_single(sc->dev, skb->data,
+ common->rx_bufsize,
+ DMA_FROM_DEVICE);
+
+and then the receive logic (through ath_rx_tasklet()) will fetch
+incoming packets
+
+ static bool ath_edma_get_buffers(..)
+ ..
+ dma_sync_single_for_cpu(sc->dev, bf->bf_buf_addr,
+ common->rx_bufsize, DMA_FROM_DEVICE);
+
+ ret = ath9k_hw_process_rxdesc_edma(ah, rs, skb->data);
+ if (ret == -EINPROGRESS) {
+ /*let device gain the buffer again*/
+ dma_sync_single_for_device(sc->dev, bf->bf_buf_addr,
+ common->rx_bufsize, DMA_FROM_DEVICE);
+ return false;
+ }
+
+and it's worth noting how that first DMA sync:
+
+ dma_sync_single_for_cpu(..DMA_FROM_DEVICE);
+
+is there to make sure the CPU can read the DMA buffer (possibly by
+copying it from the bounce buffer area, or by doing some cache flush).
+The iommu correctly turns that into a "copy from bounce buffer" so that
+the driver can look at the state of the packets.
+
+In the meantime, the device may continue to write to the DMA buffer, but
+we at least have a snapshot of the state due to that first DMA sync.
+
+But that _second_ DMA sync:
+
+ dma_sync_single_for_device(..DMA_FROM_DEVICE);
+
+is telling the DMA mapping that the CPU wasn't interested in the area
+because the packet wasn't there. In the case of a DMA bounce buffer,
+that is a no-op.
+
+Note how it's not a sync for the CPU (the "for_device()" part), and it's
+not a sync for data written by the CPU (the "DMA_FROM_DEVICE" part).
+
+Or rather, it _should_ be a no-op. That's what commit aa6f8dcbab47
+broke: it made the code bounce the buffer unconditionally, and changed
+the DMA_FROM_DEVICE to just unconditionally and illogically be
+DMA_TO_DEVICE.
+
+[ Side note: purely within the confines of the swiotlb driver it wasn't
+ entirely illogical: The reason it did that odd DMA_FROM_DEVICE ->
+ DMA_TO_DEVICE conversion thing is because inside the swiotlb driver,
+ it uses just a swiotlb_bounce() helper that doesn't care about the
+ whole distinction of who the sync is for - only which direction to
+ bounce.
+
+ So it took the "sync for device" to mean that the CPU must have been
+ the one writing, and thought it meant DMA_TO_DEVICE. ]
+
+Also note how the commentary in that commit was wrong, probably due to
+that whole confusion, claiming that the commit makes the swiotlb code
+
+ "bounce unconditionally (that is, also
+ when dir == DMA_TO_DEVICE) in order do avoid synchronising back stale
+ data from the swiotlb buffer"
+
+which is nonsensical for two reasons:
+
+ - that "also when dir == DMA_TO_DEVICE" is nonsensical, as that was
+ exactly when it always did - and should do - the bounce.
+
+ - since this is a sync for the device (not for the CPU), we're clearly
+ fundamentally not copying back stale data from the bounce buffers at
+ all, because we'd be copying *to* the bounce buffers.
+
+So that commit was just very confused. It confused the direction of the
+synchronization (to the device, not the cpu) with the direction of the
+DMA (from the device).
+
+Reported-and-bisected-by: Oleksandr Natalenko <oleksandr@natalenko.name>
+Reported-by: Olha Cherevyk <olha.cherevyk@gmail.com>
+Cc: Halil Pasic <pasic@linux.ibm.com>
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: Kalle Valo <kvalo@kernel.org>
+Cc: Robin Murphy <robin.murphy@arm.com>
+Cc: Toke Høiland-Jørgensen <toke@toke.dk>
+Cc: Maxime Bizon <mbizon@freebox.fr>
+Cc: Johannes Berg <johannes@sipsolutions.net>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/core-api/dma-attributes.rst | 8 ++++++++
+ include/linux/dma-mapping.h | 8 ++++++++
+ kernel/dma/swiotlb.c | 23 ++++++++---------------
+ 3 files changed, 24 insertions(+), 15 deletions(-)
+
+--- a/Documentation/core-api/dma-attributes.rst
++++ b/Documentation/core-api/dma-attributes.rst
+@@ -130,3 +130,11 @@ accesses to DMA buffers in both privileg
+ subsystem that the buffer is fully accessible at the elevated privilege
+ level (and ideally inaccessible or at least read-only at the
+ lesser-privileged levels).
++
++DMA_ATTR_OVERWRITE
++------------------
++
++This is a hint to the DMA-mapping subsystem that the device is expected to
++overwrite the entire mapped size, thus the caller does not require any of the
++previous buffer contents to be preserved. This allows bounce-buffering
++implementations to optimise DMA_FROM_DEVICE transfers.
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -62,6 +62,14 @@
+ #define DMA_ATTR_PRIVILEGED (1UL << 9)
+
+ /*
++ * This is a hint to the DMA-mapping subsystem that the device is expected
++ * to overwrite the entire mapped size, thus the caller does not require any
++ * of the previous buffer contents to be preserved. This allows
++ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
++ */
++#define DMA_ATTR_OVERWRITE (1UL << 10)
++
++/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
+ * be given to a device to use as a DMA source or target. It is specific to a
+ * given device and there may be a translation between the CPU physical address
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -627,14 +627,10 @@ phys_addr_t swiotlb_tbl_map_single(struc
+ for (i = 0; i < nr_slots(alloc_size + offset); i++)
+ mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
+ tlb_addr = slot_addr(mem->start, index) + offset;
+- /*
+- * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+- * to the tlb buffer, if we knew for sure the device will
+- * overwirte the entire current content. But we don't. Thus
+- * unconditional bounce may prevent leaking swiotlb content (i.e.
+- * kernel memory) to user-space.
+- */
+- swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
++ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
++ (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
++ dir == DMA_BIDIRECTIONAL))
++ swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+ return tlb_addr;
+ }
+
+@@ -701,13 +697,10 @@ void swiotlb_tbl_unmap_single(struct dev
+ void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+ size_t size, enum dma_data_direction dir)
+ {
+- /*
+- * Unconditional bounce is necessary to avoid corruption on
+- * sync_*_for_cpu or dma_ummap_* when the device didn't overwrite
+- * the whole lengt of the bounce buffer.
+- */
+- swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+- BUG_ON(!valid_dma_direction(dir));
++ if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
++ swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
++ else
++ BUG_ON(dir != DMA_FROM_DEVICE);
+ }
+
+ void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
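/*
 * A sketch (C, kernel-style) of the receive-side streaming-DMA pattern
 * this commit message walks through; dev, buf, len and
 * packet_complete() are illustrative. The crux of the revert: with
 * DMA_FROM_DEVICE the device is the writer, so handing the buffer back
 * via sync_for_device() must not copy CPU-side bytes into it.
 */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* snapshot for the CPU (may copy bounce buffer -> buf) */
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
	if (!packet_complete(buf))
		/* return it to the device; no data should flow CPU -> buffer */
		dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);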
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-03-28 10:53 Mike Pagano
From: Mike Pagano @ 2022-03-28 10:53 UTC
To: gentoo-commits
commit: 19741bd3d3d380bb76ef3a795e876d9af3481ea9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 28 10:53:05 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 28 10:53:05 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=19741bd3
Linux patch 5.17.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1000_linux-5.17.1.patch | 1721 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1725 insertions(+)
diff --git a/0000_README b/0000_README
index c012760e..684989ae 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-5.17.1.patch
+From: http://www.kernel.org
+Desc: Linux 5.17.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-5.17.1.patch b/1000_linux-5.17.1.patch
new file mode 100644
index 00000000..bfdbbde3
--- /dev/null
+++ b/1000_linux-5.17.1.patch
@@ -0,0 +1,1721 @@
+diff --git a/Makefile b/Makefile
+index 7214f075e1f06..34f9f5a9457af 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 17
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Superb Owl
+
+diff --git a/arch/csky/include/asm/uaccess.h b/arch/csky/include/asm/uaccess.h
+index c40f06ee8d3ef..ac5a54f57d407 100644
+--- a/arch/csky/include/asm/uaccess.h
++++ b/arch/csky/include/asm/uaccess.h
+@@ -3,14 +3,13 @@
+ #ifndef __ASM_CSKY_UACCESS_H
+ #define __ASM_CSKY_UACCESS_H
+
+-#define user_addr_max() \
+- (uaccess_kernel() ? KERNEL_DS.seg : get_fs().seg)
++#define user_addr_max() (current_thread_info()->addr_limit.seg)
+
+ static inline int __access_ok(unsigned long addr, unsigned long size)
+ {
+- unsigned long limit = current_thread_info()->addr_limit.seg;
++ unsigned long limit = user_addr_max();
+
+- return ((addr < limit) && ((addr + size) < limit));
++ return (size <= limit) && (addr <= (limit - size));
+ }
+ #define __access_ok __access_ok
+
+diff --git a/arch/hexagon/include/asm/uaccess.h b/arch/hexagon/include/asm/uaccess.h
+index ef5bfef8d490c..719ba3f3c45cd 100644
+--- a/arch/hexagon/include/asm/uaccess.h
++++ b/arch/hexagon/include/asm/uaccess.h
+@@ -25,17 +25,17 @@
+ * Returns true (nonzero) if the memory block *may* be valid, false (zero)
+ * if it is definitely invalid.
+ *
+- * User address space in Hexagon, like x86, goes to 0xbfffffff, so the
+- * simple MSB-based tests used by MIPS won't work. Some further
+- * optimization is probably possible here, but for now, keep it
+- * reasonably simple and not *too* slow. After all, we've got the
+- * MMU for backup.
+ */
++#define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
++#define user_addr_max() (uaccess_kernel() ? ~0UL : TASK_SIZE)
+
+-#define __access_ok(addr, size) \
+- ((get_fs().seg == KERNEL_DS.seg) || \
+- (((unsigned long)addr < get_fs().seg) && \
+- (unsigned long)size < (get_fs().seg - (unsigned long)addr)))
++static inline int __access_ok(unsigned long addr, unsigned long size)
++{
++ unsigned long limit = TASK_SIZE;
++
++ return (size <= limit) && (addr <= (limit - size));
++}
++#define __access_ok __access_ok
+
+ /*
+ * When a kernel-mode page fault is taken, the faulting instruction
+diff --git a/arch/m68k/include/asm/uaccess.h b/arch/m68k/include/asm/uaccess.h
+index ba670523885c8..60b786eb2254e 100644
+--- a/arch/m68k/include/asm/uaccess.h
++++ b/arch/m68k/include/asm/uaccess.h
+@@ -12,14 +12,17 @@
+ #include <asm/extable.h>
+
+ /* We let the MMU do all checking */
+-static inline int access_ok(const void __user *addr,
++static inline int access_ok(const void __user *ptr,
+ unsigned long size)
+ {
+- /*
+- * XXX: for !CONFIG_CPU_HAS_ADDRESS_SPACES this really needs to check
+- * for TASK_SIZE!
+- */
+- return 1;
++ unsigned long limit = TASK_SIZE;
++ unsigned long addr = (unsigned long)ptr;
++
++ if (IS_ENABLED(CONFIG_CPU_HAS_ADDRESS_SPACES) ||
++ !IS_ENABLED(CONFIG_MMU))
++ return 1;
++
++ return (size <= limit) && (addr <= (limit - size));
+ }
+
+ /*
+diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
+index d2a8ef9f89787..5b6e0e7788f44 100644
+--- a/arch/microblaze/include/asm/uaccess.h
++++ b/arch/microblaze/include/asm/uaccess.h
+@@ -39,24 +39,13 @@
+
+ # define uaccess_kernel() (get_fs().seg == KERNEL_DS.seg)
+
+-static inline int access_ok(const void __user *addr, unsigned long size)
++static inline int __access_ok(unsigned long addr, unsigned long size)
+ {
+- if (!size)
+- goto ok;
++ unsigned long limit = user_addr_max();
+
+- if ((get_fs().seg < ((unsigned long)addr)) ||
+- (get_fs().seg < ((unsigned long)addr + size - 1))) {
+- pr_devel("ACCESS fail at 0x%08x (size 0x%x), seg 0x%08x\n",
+- (__force u32)addr, (u32)size,
+- (u32)get_fs().seg);
+- return 0;
+- }
+-ok:
+- pr_devel("ACCESS OK at 0x%08x (size 0x%x), seg 0x%08x\n",
+- (__force u32)addr, (u32)size,
+- (u32)get_fs().seg);
+- return 1;
++ return (size <= limit) && (addr <= (limit - size));
+ }
++#define access_ok(addr, size) __access_ok((unsigned long)addr, size)
+
+ # define __FIXUP_SECTION ".section .fixup,\"ax\"\n"
+ # define __EX_TABLE_SECTION ".section __ex_table,\"a\"\n"
+diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h
+index d4cbf069dc224..37a40981deb3b 100644
+--- a/arch/nds32/include/asm/uaccess.h
++++ b/arch/nds32/include/asm/uaccess.h
+@@ -70,9 +70,7 @@ static inline void set_fs(mm_segment_t fs)
+ * versions are void (ie, don't return a value as such).
+ */
+
+-#define get_user __get_user \
+-
+-#define __get_user(x, ptr) \
++#define get_user(x, ptr) \
+ ({ \
+ long __gu_err = 0; \
+ __get_user_check((x), (ptr), __gu_err); \
+@@ -85,6 +83,14 @@ static inline void set_fs(mm_segment_t fs)
+ (void)0; \
+ })
+
++#define __get_user(x, ptr) \
++({ \
++ long __gu_err = 0; \
++ const __typeof__(*(ptr)) __user *__p = (ptr); \
++ __get_user_err((x), __p, (__gu_err)); \
++ __gu_err; \
++})
++
+ #define __get_user_check(x, ptr, err) \
+ ({ \
+ const __typeof__(*(ptr)) __user *__p = (ptr); \
+@@ -165,12 +171,18 @@ do { \
+ : "r"(addr), "i"(-EFAULT) \
+ : "cc")
+
+-#define put_user __put_user \
++#define put_user(x, ptr) \
++({ \
++ long __pu_err = 0; \
++ __put_user_check((x), (ptr), __pu_err); \
++ __pu_err; \
++})
+
+ #define __put_user(x, ptr) \
+ ({ \
+ long __pu_err = 0; \
+- __put_user_err((x), (ptr), __pu_err); \
++ __typeof__(*(ptr)) __user *__p = (ptr); \
++ __put_user_err((x), __p, __pu_err); \
+ __pu_err; \
+ })
+
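/*
 * A sketch (C) of the macro hygiene the nds32 hunk restores: evaluate
 * the pointer argument exactly once into a typed local. Without the
 * local, a ptr with side effects (e.g. *p++) is expanded -- and run --
 * more than once, and the implicit conversion to __typeof__(*(ptr)) *
 * is what gives the compiler a type check. __put_user_err is the
 * helper from the header above.
 */
#define __put_user_sketch(x, ptr)				\
({								\
	long __pu_err = 0;					\
	__typeof__(*(ptr)) *__p = (ptr);  /* single evaluation */ \
	__put_user_err((x), __p, __pu_err);			\
	__pu_err;						\
})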
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 5b6d1a95776f0..0d01e7f5078c2 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -1328,6 +1328,17 @@ static int __init disable_acpi_pci(const struct dmi_system_id *d)
+ return 0;
+ }
+
++static int __init disable_acpi_xsdt(const struct dmi_system_id *d)
++{
++ if (!acpi_force) {
++ pr_notice("%s detected: force use of acpi=rsdt\n", d->ident);
++ acpi_gbl_do_not_use_xsdt = TRUE;
++ } else {
++ pr_notice("Warning: DMI blacklist says broken, but acpi XSDT forced\n");
++ }
++ return 0;
++}
++
+ static int __init dmi_disable_acpi(const struct dmi_system_id *d)
+ {
+ if (!acpi_force) {
+@@ -1451,6 +1462,19 @@ static const struct dmi_system_id acpi_dmi_table[] __initconst = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate 360"),
+ },
+ },
++ /*
++ * Boxes that need ACPI XSDT use disabled due to corrupted tables
++ */
++ {
++ .callback = disable_acpi_xsdt,
++ .ident = "Advantech DAC-BJ01",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "NEC"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Bearlake CRB Board"),
++ DMI_MATCH(DMI_BIOS_VERSION, "V1.12"),
++ DMI_MATCH(DMI_BIOS_DATE, "02/01/2011"),
++ },
++ },
+ {}
+ };
+
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index ea31ae01458b4..dc208f5f5a1f7 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -59,6 +59,10 @@ MODULE_PARM_DESC(cache_time, "cache time in milliseconds");
+
+ static const struct acpi_device_id battery_device_ids[] = {
+ {"PNP0C0A", 0},
++
++ /* Microsoft Surface Go 3 */
++ {"MSHW0146", 0},
++
+ {"", 0},
+ };
+
+@@ -1148,6 +1152,14 @@ static const struct dmi_system_id bat_dmi_table[] __initconst = {
+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad"),
+ },
+ },
++ {
++ /* Microsoft Surface Go 3 */
++ .callback = battery_notification_delay_quirk,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
++ },
++ },
+ {},
+ };
+
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 4f64713e9917b..becc198e4c224 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -415,6 +415,81 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "GA503"),
+ },
+ },
++ /*
++ * Clevo NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2 have both a
++ * working native and video interface. However the default detection
++ * mechanism first registers the video interface before unregistering
++ * it again and switching to the native interface during boot. This
++ * results in a dangling SBIOS request for backlight change for some
++ * reason, causing the backlight to switch to ~2% once per boot on the
++ * first power cord connect or disconnect event. Setting the native
++ * interface explicitly circumvents this buggy behaviour, by avoiding
++ * the unregistering process.
++ */
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xRU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xRU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
++ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xRU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xRU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xRU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xNU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xNU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
++ DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++ },
++ },
++ {
++ .callback = video_detect_force_native,
++ .ident = "Clevo NL5xNU",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++ DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"),
++ },
++ },
+
+ /*
+ * Desktops which falsely report a backlight and which our heuristics
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index c30d131da7847..19d5686f8a2a1 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -405,6 +405,8 @@ static const struct usb_device_id blacklist_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+
+ /* Realtek 8852AE Bluetooth devices */
++ { USB_DEVICE(0x0bda, 0x2852), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0bda, 0xc852), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0bda, 0x385a), .driver_info = BTUSB_REALTEK |
+@@ -482,6 +484,8 @@ static const struct usb_device_id blacklist_table[] = {
+ /* Additional Realtek 8761BU Bluetooth devices */
+ { USB_DEVICE(0x0b05, 0x190e), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x2550, 0x8761), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
+
+ /* Additional Realtek 8821AE Bluetooth devices */
+ { USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK },
+@@ -2041,6 +2045,8 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ */
+ set_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks);
+ set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks);
++ set_bit(HCI_QUIRK_NO_SUSPEND_NOTIFIER, &hdev->quirks);
+
+ /* Clear the reset quirk since this is not an actual
+ * early Bluetooth 1.1 device from CSR.
+@@ -2051,7 +2057,7 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ /*
+ * Special workaround for these BT 4.0 chip clones, and potentially more:
+ *
+- * - 0x0134: a Barrot 8041a02 (HCI rev: 0x1012 sub: 0x0810)
++ * - 0x0134: a Barrot 8041a02 (HCI rev: 0x0810 sub: 0x1012)
+ * - 0x7558: IC markings FR3191AHAL 749H15143 (HCI rev/sub-version: 0x0709)
+ *
+ * These controllers are really messed-up.
+@@ -2080,7 +2086,7 @@ static int btusb_setup_csr(struct hci_dev *hdev)
+ if (ret >= 0)
+ msleep(200);
+ else
+- bt_dev_err(hdev, "CSR: Failed to suspend the device for our Barrot 8041a02 receive-issue workaround");
++ bt_dev_warn(hdev, "CSR: Couldn't suspend the device for our Barrot 8041a02 receive-issue workaround");
+
+ pm_runtime_forbid(&data->udev->dev);
+
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index b009e7479b702..783d65fc71f07 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -274,14 +274,6 @@ static void tpm_dev_release(struct device *dev)
+ kfree(chip);
+ }
+
+-static void tpm_devs_release(struct device *dev)
+-{
+- struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
+-
+- /* release the master device reference */
+- put_device(&chip->dev);
+-}
+-
+ /**
+ * tpm_class_shutdown() - prepare the TPM device for loss of power.
+ * @dev: device to which the chip is associated.
+@@ -344,7 +336,6 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ chip->dev_num = rc;
+
+ device_initialize(&chip->dev);
+- device_initialize(&chip->devs);
+
+ chip->dev.class = tpm_class;
+ chip->dev.class->shutdown_pre = tpm_class_shutdown;
+@@ -352,29 +343,12 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ chip->dev.parent = pdev;
+ chip->dev.groups = chip->groups;
+
+- chip->devs.parent = pdev;
+- chip->devs.class = tpmrm_class;
+- chip->devs.release = tpm_devs_release;
+- /* get extra reference on main device to hold on
+- * behalf of devs. This holds the chip structure
+- * while cdevs is in use. The corresponding put
+- * is in the tpm_devs_release (TPM2 only)
+- */
+- if (chip->flags & TPM_CHIP_FLAG_TPM2)
+- get_device(&chip->dev);
+-
+ if (chip->dev_num == 0)
+ chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR);
+ else
+ chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num);
+
+- chip->devs.devt =
+- MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
+-
+ rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num);
+- if (rc)
+- goto out;
+- rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
+ if (rc)
+ goto out;
+
+@@ -382,9 +356,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ chip->flags |= TPM_CHIP_FLAG_VIRTUAL;
+
+ cdev_init(&chip->cdev, &tpm_fops);
+- cdev_init(&chip->cdevs, &tpmrm_fops);
+ chip->cdev.owner = THIS_MODULE;
+- chip->cdevs.owner = THIS_MODULE;
+
+ rc = tpm2_init_space(&chip->work_space, TPM2_SPACE_BUFFER_SIZE);
+ if (rc) {
+@@ -396,7 +368,6 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ return chip;
+
+ out:
+- put_device(&chip->devs);
+ put_device(&chip->dev);
+ return ERR_PTR(rc);
+ }
+@@ -445,14 +416,9 @@ static int tpm_add_char_device(struct tpm_chip *chip)
+ }
+
+ if (chip->flags & TPM_CHIP_FLAG_TPM2 && !tpm_is_firmware_upgrade(chip)) {
+- rc = cdev_device_add(&chip->cdevs, &chip->devs);
+- if (rc) {
+- dev_err(&chip->devs,
+- "unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
+- dev_name(&chip->devs), MAJOR(chip->devs.devt),
+- MINOR(chip->devs.devt), rc);
+- return rc;
+- }
++ rc = tpm_devs_add(chip);
++ if (rc)
++ goto err_del_cdev;
+ }
+
+ /* Make the chip available. */
+@@ -460,6 +426,10 @@ static int tpm_add_char_device(struct tpm_chip *chip)
+ idr_replace(&dev_nums_idr, chip, chip->dev_num);
+ mutex_unlock(&idr_lock);
+
++ return 0;
++
++err_del_cdev:
++ cdev_device_del(&chip->cdev, &chip->dev);
+ return rc;
+ }
+
+@@ -654,7 +624,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
+ hwrng_unregister(&chip->hwrng);
+ tpm_bios_log_teardown(chip);
+ if (chip->flags & TPM_CHIP_FLAG_TPM2 && !tpm_is_firmware_upgrade(chip))
+- cdev_device_del(&chip->cdevs, &chip->devs);
++ tpm_devs_remove(chip);
+ tpm_del_char_device(chip);
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_unregister);
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index c08cbb306636b..dc4c0a0a51290 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -69,7 +69,13 @@ static void tpm_dev_async_work(struct work_struct *work)
+ ret = tpm_dev_transmit(priv->chip, priv->space, priv->data_buffer,
+ sizeof(priv->data_buffer));
+ tpm_put_ops(priv->chip);
+- if (ret > 0) {
++
++ /*
++ * If ret is > 0 then tpm_dev_transmit returned the size of the
++ * response. If ret is < 0 then tpm_dev_transmit failed and
++ * returned an error code.
++ */
++ if (ret != 0) {
+ priv->response_length = ret;
+ mod_timer(&priv->user_read_timer, jiffies + (120 * HZ));
+ }
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 283f78211c3a7..2163c6ee0d364 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -234,6 +234,8 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+ size_t cmdsiz);
+ int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf,
+ size_t *bufsiz);
++int tpm_devs_add(struct tpm_chip *chip);
++void tpm_devs_remove(struct tpm_chip *chip);
+
+ void tpm_bios_log_setup(struct tpm_chip *chip);
+ void tpm_bios_log_teardown(struct tpm_chip *chip);
+diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
+index 97e916856cf3e..ffb35f0154c16 100644
+--- a/drivers/char/tpm/tpm2-space.c
++++ b/drivers/char/tpm/tpm2-space.c
+@@ -58,12 +58,12 @@ int tpm2_init_space(struct tpm_space *space, unsigned int buf_size)
+
+ void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space)
+ {
+- mutex_lock(&chip->tpm_mutex);
+- if (!tpm_chip_start(chip)) {
++
++ if (tpm_try_get_ops(chip) == 0) {
+ tpm2_flush_sessions(chip, space);
+- tpm_chip_stop(chip);
++ tpm_put_ops(chip);
+ }
+- mutex_unlock(&chip->tpm_mutex);
++
+ kfree(space->context_buf);
+ kfree(space->session_buf);
+ }
+@@ -574,3 +574,68 @@ out:
+ dev_err(&chip->dev, "%s: error %d\n", __func__, rc);
+ return rc;
+ }
++
++/*
++ * Put the reference to the main device.
++ */
++static void tpm_devs_release(struct device *dev)
++{
++ struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
++
++ /* release the master device reference */
++ put_device(&chip->dev);
++}
++
++/*
++ * Remove the device file for exposed TPM spaces and release the device
++ * reference. This may also release the reference to the master device.
++ */
++void tpm_devs_remove(struct tpm_chip *chip)
++{
++ cdev_device_del(&chip->cdevs, &chip->devs);
++ put_device(&chip->devs);
++}
++
++/*
++ * Add a device file to expose TPM spaces. Also take a reference to the
++ * main device.
++ */
++int tpm_devs_add(struct tpm_chip *chip)
++{
++ int rc;
++
++ device_initialize(&chip->devs);
++ chip->devs.parent = chip->dev.parent;
++ chip->devs.class = tpmrm_class;
++
++ /*
++ * Get extra reference on main device to hold on behalf of devs.
++ * This holds the chip structure while cdevs is in use. The
++ * corresponding put is in the tpm_devs_release.
++ */
++ get_device(&chip->dev);
++ chip->devs.release = tpm_devs_release;
++ chip->devs.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
++ cdev_init(&chip->cdevs, &tpmrm_fops);
++ chip->cdevs.owner = THIS_MODULE;
++
++ rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
++ if (rc)
++ goto err_put_devs;
++
++ rc = cdev_device_add(&chip->cdevs, &chip->devs);
++ if (rc) {
++ dev_err(&chip->devs,
++ "unable to cdev_device_add() %s, major %d, minor %d, err=%d\n",
++ dev_name(&chip->devs), MAJOR(chip->devs.devt),
++ MINOR(chip->devs.devt), rc);
++ goto err_put_devs;
++ }
++
++ return 0;
++
++err_put_devs:
++ put_device(&chip->devs);
++
++ return rc;
++}
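/*
 * A sketch (C, kernel-style fragment) of the refcount discipline in the
 * new tpm_devs_add() above: once device_initialize() has run, the
 * struct device owns its own lifetime, so every error path unwinds with
 * put_device() -- which fires tpm_devs_release() -- and never kfree().
 */
	device_initialize(&chip->devs);   /* refcount 1, release callback armed */
	rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
	if (rc)
		goto err_put_devs;
	rc = cdev_device_add(&chip->cdevs, &chip->devs);
	if (rc)
		goto err_put_devs;
	return 0;
err_put_devs:
	put_device(&chip->devs);          /* drop the ref; never kfree here */
	return rc;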
+diff --git a/drivers/crypto/qat/qat_4xxx/adf_drv.c b/drivers/crypto/qat/qat_4xxx/adf_drv.c
+index a6c78b9c730bc..fa4c350c1bf92 100644
+--- a/drivers/crypto/qat/qat_4xxx/adf_drv.c
++++ b/drivers/crypto/qat/qat_4xxx/adf_drv.c
+@@ -75,6 +75,13 @@ static int adf_crypto_dev_config(struct adf_accel_dev *accel_dev)
+ if (ret)
+ goto err;
+
++ /* Temporarily set the number of crypto instances to zero to avoid
++ * registering the crypto algorithms.
++ * This will be removed when the algorithms support the
++ * CRYPTO_TFM_REQ_MAY_BACKLOG flag
++ */
++ instances = 0;
++
+ for (i = 0; i < instances; i++) {
+ val = i;
+ bank = i * 2;
+diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c
+index 7234c4940fae4..67c9588e89df9 100644
+--- a/drivers/crypto/qat/qat_common/qat_crypto.c
++++ b/drivers/crypto/qat/qat_common/qat_crypto.c
+@@ -161,6 +161,13 @@ int qat_crypto_dev_config(struct adf_accel_dev *accel_dev)
+ if (ret)
+ goto err;
+
++ /* Temporarily set the number of crypto instances to zero to avoid
++ * registering the crypto algorithms.
++ * This will be removed when the algorithms support the
++ * CRYPTO_TFM_REQ_MAY_BACKLOG flag
++ */
++ instances = 0;
++
+ for (i = 0; i < instances; i++) {
+ val = i;
+ snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_BANK_NUM, i);
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index 9bf319be11f60..12641616acd30 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -83,6 +83,12 @@ static struct devfreq_dev_profile msm_devfreq_profile = {
+ static void msm_devfreq_boost_work(struct kthread_work *work);
+ static void msm_devfreq_idle_work(struct kthread_work *work);
+
++static bool has_devfreq(struct msm_gpu *gpu)
++{
++ struct msm_gpu_devfreq *df = &gpu->devfreq;
++ return !!df->devfreq;
++}
++
+ void msm_devfreq_init(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+@@ -149,6 +155,9 @@ void msm_devfreq_cleanup(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+
++ if (!has_devfreq(gpu))
++ return;
++
+ devfreq_cooling_unregister(gpu->cooling);
+ dev_pm_qos_remove_request(&df->boost_freq);
+ dev_pm_qos_remove_request(&df->idle_freq);
+@@ -156,16 +165,24 @@ void msm_devfreq_cleanup(struct msm_gpu *gpu)
+
+ void msm_devfreq_resume(struct msm_gpu *gpu)
+ {
+- gpu->devfreq.busy_cycles = 0;
+- gpu->devfreq.time = ktime_get();
++ struct msm_gpu_devfreq *df = &gpu->devfreq;
+
+- devfreq_resume_device(gpu->devfreq.devfreq);
++ if (!has_devfreq(gpu))
++ return;
++
++ df->busy_cycles = 0;
++ df->time = ktime_get();
++
++ devfreq_resume_device(df->devfreq);
+ }
+
+ void msm_devfreq_suspend(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+
++ if (!has_devfreq(gpu))
++ return;
++
+ devfreq_suspend_device(df->devfreq);
+
+ cancel_idle_work(df);
+@@ -185,6 +202,9 @@ void msm_devfreq_boost(struct msm_gpu *gpu, unsigned factor)
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+ uint64_t freq;
+
++ if (!has_devfreq(gpu))
++ return;
++
+ freq = get_freq(gpu);
+ freq *= factor;
+
+@@ -207,7 +227,7 @@ void msm_devfreq_active(struct msm_gpu *gpu)
+ struct devfreq_dev_status status;
+ unsigned int idle_time;
+
+- if (!df->devfreq)
++ if (!has_devfreq(gpu))
+ return;
+
+ /*
+@@ -253,7 +273,7 @@ void msm_devfreq_idle(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+
+- if (!df->devfreq)
++ if (!has_devfreq(gpu))
+ return;
+
+ msm_hrtimer_queue_work(&df->idle_work, ms_to_ktime(1),
+diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
+index 2de61b63ef91d..48d3c9955f0dd 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
++++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
+@@ -248,6 +248,9 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs)
+ {
+ u32 i;
+
++ if (!objs)
++ return;
++
+ for (i = 0; i < objs->nents; i++)
+ drm_gem_object_put(objs->objs[i]);
+ virtio_gpu_array_free(objs);
+diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+index ff2d099aab218..53dc8d5fede86 100644
+--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
++++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+@@ -696,6 +696,12 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
+ buf_pool->rx_skb[skb_index] = NULL;
+
+ datalen = xgene_enet_get_data_len(le64_to_cpu(raw_desc->m1));
++
++ /* strip off CRC as HW isn't doing this */
++ nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
++ if (!nv)
++ datalen -= 4;
++
+ skb_put(skb, datalen);
+ prefetch(skb->data - NET_IP_ALIGN);
+ skb->protocol = eth_type_trans(skb, ndev);
+@@ -717,12 +723,8 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
+ }
+ }
+
+- nv = GET_VAL(NV, le64_to_cpu(raw_desc->m0));
+- if (!nv) {
+- /* strip off CRC as HW isn't doing this */
+- datalen -= 4;
++ if (!nv)
+ goto skip_jumbo;
+- }
+
+ slots = page_pool->slots - 1;
+ head = page_pool->head;
+diff --git a/drivers/net/wireless/ath/regd.c b/drivers/net/wireless/ath/regd.c
+index b2400e2417a55..f15e7bd690b5b 100644
+--- a/drivers/net/wireless/ath/regd.c
++++ b/drivers/net/wireless/ath/regd.c
+@@ -667,14 +667,14 @@ ath_regd_init_wiphy(struct ath_regulatory *reg,
+
+ /*
+ * Some users have reported their EEPROM programmed with
+- * 0x8000 or 0x0 set, this is not a supported regulatory
+- * domain but since we have more than one user with it we
+- * need a solution for them. We default to 0x64, which is
+- * the default Atheros world regulatory domain.
++ * 0x8000 set, this is not a supported regulatory domain
++ * but since we have more than one user with it we need
++ * a solution for them. We default to 0x64, which is the
++ * default Atheros world regulatory domain.
+ */
+ static void ath_regd_sanitize(struct ath_regulatory *reg)
+ {
+- if (reg->current_rd != COUNTRY_ERD_FLAG && reg->current_rd != 0)
++ if (reg->current_rd != COUNTRY_ERD_FLAG)
+ return;
+ printk(KERN_DEBUG "ath: EEPROM regdomain sanitized\n");
+ reg->current_rd = 0x64;
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 9575d7373bf27..ac2813ed851c4 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -1513,6 +1513,9 @@ static int wcn36xx_platform_get_resources(struct wcn36xx *wcn,
+ if (iris_node) {
+ if (of_device_is_compatible(iris_node, "qcom,wcn3620"))
+ wcn->rf_id = RF_IRIS_WCN3620;
++ if (of_device_is_compatible(iris_node, "qcom,wcn3660") ||
++ of_device_is_compatible(iris_node, "qcom,wcn3660b"))
++ wcn->rf_id = RF_IRIS_WCN3660;
+ if (of_device_is_compatible(iris_node, "qcom,wcn3680"))
+ wcn->rf_id = RF_IRIS_WCN3680;
+ of_node_put(iris_node);
+diff --git a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+index fbd0558c2c196..5d3f8f56e5681 100644
+--- a/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
++++ b/drivers/net/wireless/ath/wcn36xx/wcn36xx.h
+@@ -97,6 +97,7 @@ enum wcn36xx_ampdu_state {
+
+ #define RF_UNKNOWN 0x0000
+ #define RF_IRIS_WCN3620 0x3620
++#define RF_IRIS_WCN3660 0x3660
+ #define RF_IRIS_WCN3680 0x3680
+
+ static inline void buff_to_be(u32 *buf, size_t len)
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 8e2f8275a2535..259e00046a8bd 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -842,27 +842,38 @@ EXPORT_SYMBOL(jbd2_journal_restart);
+ */
+ void jbd2_journal_wait_updates(journal_t *journal)
+ {
+- transaction_t *commit_transaction = journal->j_running_transaction;
++ DEFINE_WAIT(wait);
+
+- if (!commit_transaction)
+- return;
++ while (1) {
++ /*
++ * Note that the running transaction can get freed under us if
++ * this transaction is getting committed in
++ * jbd2_journal_commit_transaction() ->
++ * jbd2_journal_free_transaction(). This can only happen when we
++ * release j_state_lock -> schedule() -> acquire j_state_lock.
++ * Hence we should retrieve the new j_running_transaction value every
++ * time (after a j_state_lock release/acquire cycle), else it may
++ * lead to a use-after-free of the old, freed transaction.
++ */
++ transaction_t *transaction = journal->j_running_transaction;
+
+- spin_lock(&commit_transaction->t_handle_lock);
+- while (atomic_read(&commit_transaction->t_updates)) {
+- DEFINE_WAIT(wait);
++ if (!transaction)
++ break;
+
++ spin_lock(&transaction->t_handle_lock);
+ prepare_to_wait(&journal->j_wait_updates, &wait,
+- TASK_UNINTERRUPTIBLE);
+- if (atomic_read(&commit_transaction->t_updates)) {
+- spin_unlock(&commit_transaction->t_handle_lock);
+- write_unlock(&journal->j_state_lock);
+- schedule();
+- write_lock(&journal->j_state_lock);
+- spin_lock(&commit_transaction->t_handle_lock);
++ TASK_UNINTERRUPTIBLE);
++ if (!atomic_read(&transaction->t_updates)) {
++ spin_unlock(&transaction->t_handle_lock);
++ finish_wait(&journal->j_wait_updates, &wait);
++ break;
+ }
++ spin_unlock(&transaction->t_handle_lock);
++ write_unlock(&journal->j_state_lock);
++ schedule();
+ finish_wait(&journal->j_wait_updates, &wait);
++ write_lock(&journal->j_state_lock);
+ }
+- spin_unlock(&commit_transaction->t_handle_lock);
+ }
+
+ /**
+@@ -877,8 +888,6 @@ void jbd2_journal_wait_updates(journal_t *journal)
+ */
+ void jbd2_journal_lock_updates(journal_t *journal)
+ {
+- DEFINE_WAIT(wait);
+-
+ jbd2_might_wait_for_commit(journal);
+
+ write_lock(&journal->j_state_lock);
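/*
 * A sketch (C, kernel-style fragment) of the wait-loop shape introduced
 * above. The running transaction must be re-read on every iteration:
 * releasing j_state_lock around schedule() lets the commit path free
 * the transaction, so any pointer cached from before the sleep may be
 * dangling by the time we wake.
 */
	while (1) {
		transaction_t *t = journal->j_running_transaction; /* fresh read */

		if (!t || !atomic_read(&t->t_updates))
			break;
		prepare_to_wait(&journal->j_wait_updates, &wait,
				TASK_UNINTERRUPTIBLE);
		write_unlock(&journal->j_state_lock); /* t may be freed from here */
		schedule();
		finish_wait(&journal->j_wait_updates, &wait);
		write_lock(&journal->j_state_lock);   /* loop: re-read the pointer */
	}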
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 35c073d44ec5a..5cb095b09a940 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -255,6 +255,16 @@ enum {
+ * during the hdev->setup vendor callback.
+ */
+ HCI_QUIRK_BROKEN_READ_TRANSMIT_POWER,
++
++ /* When this quirk is set, HCI_OP_SET_EVENT_FLT requests with
++ * HCI_FLT_CLEAR_ALL are ignored and event filtering is
++ * completely avoided. A subset of the CSR controller
++ * clones struggle with this and instantly lock up.
++ *
++ * Note that devices using this must (separately) disable
++ * runtime suspend, because event filtering takes place there.
++ */
++ HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL,
+ };
+
+ /* HCI device flags */
+diff --git a/include/sound/pcm.h b/include/sound/pcm.h
+index 36da42cd07748..314f2779cab52 100644
+--- a/include/sound/pcm.h
++++ b/include/sound/pcm.h
+@@ -401,6 +401,7 @@ struct snd_pcm_runtime {
+ wait_queue_head_t tsleep; /* transfer sleep */
+ struct fasync_struct *fasync;
+ bool stop_operating; /* sync_stop will be called */
++ struct mutex buffer_mutex; /* protect for buffer changes */
+
+ /* -- private section -- */
+ void *private_data;
+diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
+index c5b45c2f68a15..5678bee7aefee 100644
+--- a/kernel/rcu/tree_plugin.h
++++ b/kernel/rcu/tree_plugin.h
+@@ -556,16 +556,16 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ }
+
+- /* Unboost if we were boosted. */
+- if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
+- rt_mutex_futex_unlock(&rnp->boost_mtx.rtmutex);
+-
+ /*
+ * If this was the last task on the expedited lists,
+ * then we need to report up the rcu_node hierarchy.
+ */
+ if (!empty_exp && empty_exp_now)
+ rcu_report_exp_rnp(rnp, true);
++
++ /* Unboost if we were boosted. */
++ if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
++ rt_mutex_futex_unlock(&rnp->boost_mtx.rtmutex);
+ } else {
+ local_irq_restore(flags);
+ }
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index ab9aa700b6b33..5e93f37c2e04d 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -2806,6 +2806,9 @@ static int hci_set_event_filter_sync(struct hci_dev *hdev, u8 flt_type,
+ if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED))
+ return 0;
+
++ if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
++ return 0;
++
+ memset(&cp, 0, sizeof(cp));
+ cp.flt_type = flt_type;
+
+@@ -2826,6 +2829,13 @@ static int hci_clear_event_filter_sync(struct hci_dev *hdev)
+ if (!hci_dev_test_flag(hdev, HCI_EVENT_FILTER_CONFIGURED))
+ return 0;
+
++ /* In theory the state machine should not reach here unless
++ * a hci_set_event_filter_sync() call succeeds, but we do
++ * the check both for parity and as a future reminder.
++ */
++ if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
++ return 0;
++
+ return hci_set_event_filter_sync(hdev, HCI_FLT_CLEAR_ALL, 0x00,
+ BDADDR_ANY, 0x00);
+ }
+@@ -4825,6 +4835,12 @@ static int hci_update_event_filter_sync(struct hci_dev *hdev)
+ if (!hci_dev_test_flag(hdev, HCI_BREDR_ENABLED))
+ return 0;
+
++ /* Some fake CSR controllers lock up after setting this type of
++ * filter, so avoid sending the request altogether.
++ */
++ if (test_bit(HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL, &hdev->quirks))
++ return 0;
++
+ /* Always clear event filter when starting */
+ hci_clear_event_filter_sync(hdev);
+
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 26c00ebf4fbae..7f555d2e5357f 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -275,6 +275,7 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ {
+ struct sock *sk = sock->sk;
+ struct llc_sock *llc = llc_sk(sk);
++ struct net_device *dev = NULL;
+ struct llc_sap *sap;
+ int rc = -EINVAL;
+
+@@ -286,16 +287,15 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ goto out;
+ rc = -ENODEV;
+ if (sk->sk_bound_dev_if) {
+- llc->dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
+- if (llc->dev && addr->sllc_arphrd != llc->dev->type) {
+- dev_put(llc->dev);
+- llc->dev = NULL;
++ dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);
++ if (dev && addr->sllc_arphrd != dev->type) {
++ dev_put(dev);
++ dev = NULL;
+ }
+ } else
+- llc->dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
+- if (!llc->dev)
++ dev = dev_getfirstbyhwtype(&init_net, addr->sllc_arphrd);
++ if (!dev)
+ goto out;
+- netdev_tracker_alloc(llc->dev, &llc->dev_tracker, GFP_KERNEL);
+ rc = -EUSERS;
+ llc->laddr.lsap = llc_ui_autoport();
+ if (!llc->laddr.lsap)
+@@ -304,6 +304,12 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ sap = llc_sap_open(llc->laddr.lsap, NULL);
+ if (!sap)
+ goto out;
++
++ /* Note: We do not expect errors from this point. */
++ llc->dev = dev;
++ netdev_tracker_alloc(llc->dev, &llc->dev_tracker, GFP_KERNEL);
++ dev = NULL;
++
+ memcpy(llc->laddr.mac, llc->dev->dev_addr, IFHWADDRLEN);
+ memcpy(&llc->addr, addr, sizeof(llc->addr));
+ /* assign new connection to its SAP */
+@@ -311,6 +317,7 @@ static int llc_ui_autobind(struct socket *sock, struct sockaddr_llc *addr)
+ sock_reset_flag(sk, SOCK_ZAPPED);
+ rc = 0;
+ out:
++ dev_put(dev);
+ return rc;
+ }
+
+@@ -333,6 +340,7 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ struct sockaddr_llc *addr = (struct sockaddr_llc *)uaddr;
+ struct sock *sk = sock->sk;
+ struct llc_sock *llc = llc_sk(sk);
++ struct net_device *dev = NULL;
+ struct llc_sap *sap;
+ int rc = -EINVAL;
+
+@@ -348,25 +356,27 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ rc = -ENODEV;
+ rcu_read_lock();
+ if (sk->sk_bound_dev_if) {
+- llc->dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
+- if (llc->dev) {
++ dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);
++ if (dev) {
+ if (is_zero_ether_addr(addr->sllc_mac))
+- memcpy(addr->sllc_mac, llc->dev->dev_addr,
++ memcpy(addr->sllc_mac, dev->dev_addr,
+ IFHWADDRLEN);
+- if (addr->sllc_arphrd != llc->dev->type ||
++ if (addr->sllc_arphrd != dev->type ||
+ !ether_addr_equal(addr->sllc_mac,
+- llc->dev->dev_addr)) {
++ dev->dev_addr)) {
+ rc = -EINVAL;
+- llc->dev = NULL;
++ dev = NULL;
+ }
+ }
+- } else
+- llc->dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
++ } else {
++ dev = dev_getbyhwaddr_rcu(&init_net, addr->sllc_arphrd,
+ addr->sllc_mac);
+- dev_hold_track(llc->dev, &llc->dev_tracker, GFP_ATOMIC);
++ }
++ dev_hold(dev);
+ rcu_read_unlock();
+- if (!llc->dev)
++ if (!dev)
+ goto out;
++
+ if (!addr->sllc_sap) {
+ rc = -EUSERS;
+ addr->sllc_sap = llc_ui_autoport();
+@@ -398,6 +408,12 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ goto out_put;
+ }
+ }
++
++ /* Note: We do not expect errors from this point. */
++ llc->dev = dev;
++ netdev_tracker_alloc(llc->dev, &llc->dev_tracker, GFP_KERNEL);
++ dev = NULL;
++
+ llc->laddr.lsap = addr->sllc_sap;
+ memcpy(llc->laddr.mac, addr->sllc_mac, IFHWADDRLEN);
+ memcpy(&llc->addr, addr, sizeof(llc->addr));
+@@ -408,6 +424,7 @@ static int llc_ui_bind(struct socket *sock, struct sockaddr *uaddr, int addrlen)
+ out_put:
+ llc_sap_put(sap);
+ out:
++ dev_put(dev);
+ release_sock(sk);
+ return rc;
+ }
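Both llc_ui_autobind() and llc_ui_bind() now follow the same "publish last" idiom: the device reference lives in a local variable until no further failure is possible, ownership then moves to llc->dev, and the single dev_put() at the exit label covers every path because dev_put(NULL) is a no-op. The idiom, boiled down (do_fallible_setup() is hypothetical):

    struct net_device *dev = NULL;
    int rc = -ENODEV;

    dev = dev_get_by_index(&init_net, ifindex); /* local reference only */
    if (!dev)
            goto out;
    rc = do_fallible_setup(sk);                 /* may still fail */
    if (rc)
            goto out;
    llc->dev = dev;                             /* publish: ownership moves */
    dev = NULL;
    rc = 0;
out:
    dev_put(dev);                               /* no-op once ownership moved */
    return rc;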
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 87a208089caf7..58ff57dc669c4 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2148,14 +2148,12 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
+ const struct mesh_setup *setup)
+ {
+ u8 *new_ie;
+- const u8 *old_ie;
+ struct ieee80211_sub_if_data *sdata = container_of(ifmsh,
+ struct ieee80211_sub_if_data, u.mesh);
+ int i;
+
+ /* allocate information elements */
+ new_ie = NULL;
+- old_ie = ifmsh->ie;
+
+ if (setup->ie_len) {
+ new_ie = kmemdup(setup->ie, setup->ie_len,
+@@ -2165,7 +2163,6 @@ static int copy_mesh_setup(struct ieee80211_if_mesh *ifmsh,
+ }
+ ifmsh->ie_len = setup->ie_len;
+ ifmsh->ie = new_ie;
+- kfree(old_ie);
+
+ /* now copy the rest of the setup parameters */
+ ifmsh->mesh_id_len = setup->mesh_id_len;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index d71a33ae39b35..1f5a0eece0d14 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -9275,17 +9275,23 @@ int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest)
+ }
+ EXPORT_SYMBOL_GPL(nft_parse_u32_check);
+
+-static unsigned int nft_parse_register(const struct nlattr *attr)
++static unsigned int nft_parse_register(const struct nlattr *attr, u32 *preg)
+ {
+ unsigned int reg;
+
+ reg = ntohl(nla_get_be32(attr));
+ switch (reg) {
+ case NFT_REG_VERDICT...NFT_REG_4:
+- return reg * NFT_REG_SIZE / NFT_REG32_SIZE;
++ *preg = reg * NFT_REG_SIZE / NFT_REG32_SIZE;
++ break;
++ case NFT_REG32_00...NFT_REG32_15:
++ *preg = reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
++ break;
+ default:
+- return reg + NFT_REG_SIZE / NFT_REG32_SIZE - NFT_REG32_00;
++ return -ERANGE;
+ }
++
++ return 0;
+ }
+
+ /**
+@@ -9327,7 +9333,10 @@ int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len)
+ u32 reg;
+ int err;
+
+- reg = nft_parse_register(attr);
++ err = nft_parse_register(attr, ®);
++ if (err < 0)
++ return err;
++
+ err = nft_validate_register_load(reg, len);
+ if (err < 0)
+ return err;
+@@ -9382,7 +9391,10 @@ int nft_parse_register_store(const struct nft_ctx *ctx,
+ int err;
+ u32 reg;
+
+- reg = nft_parse_register(attr);
++ err = nft_parse_register(attr, ®);
++ if (err < 0)
++ return err;
++
+ err = nft_validate_register_store(ctx, reg, data, type, len);
+ if (err < 0)
+ return err;
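With the register sizes spelled out (per my reading of the uapi header, the five 16-byte registers are enumerated 0..4 and the sixteen 4-byte registers from 8), the validated parse reduces to this standalone miniature:

    /* NFT_REG_SIZE = 16, NFT_REG32_SIZE = 4, NFT_REG32_00 = 8 */
    static int parse_register(unsigned int reg, unsigned int *preg)
    {
            if (reg <= 4) {                 /* NFT_REG_VERDICT..NFT_REG_4 */
                    *preg = reg * (16 / 4);
                    return 0;
            }
            if (reg >= 8 && reg <= 23) {    /* NFT_REG32_00..NFT_REG32_15 */
                    *preg = reg + (16 / 4) - 8;
                    return 0;
            }
            return -ERANGE;                 /* previously mapped blindly */
    }

The old code funneled any out-of-range attribute through the default case, handing callers a bogus register index.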
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index 36e73f9828c50..8af98239655db 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -201,7 +201,7 @@ nft_do_chain(struct nft_pktinfo *pkt, void *priv)
+ const struct nft_rule_dp *rule, *last_rule;
+ const struct net *net = nft_net(pkt);
+ const struct nft_expr *expr, *last;
+- struct nft_regs regs;
++ struct nft_regs regs = {};
+ unsigned int stackptr = 0;
+ struct nft_jumpstack jumpstack[NFT_JUMP_STACK_SIZE];
+ bool genbit = READ_ONCE(net->nft.gencursor);
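The one-character change above relies on the empty-brace initializer zeroing the whole object, so rule evaluation can never read indeterminate stack bytes even if a bogus register index slips through:

    struct nft_regs regs_old;       /* bare local: contents indeterminate */
    struct nft_regs regs = {};      /* every member zero-initialized */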
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 3ee9edf858156..f158f0abd25d8 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -774,6 +774,11 @@ static int snd_pcm_oss_period_size(struct snd_pcm_substream *substream,
+
+ if (oss_period_size < 16)
+ return -EINVAL;
++
++ /* don't allocate too large period; 1MB period must be enough */
++ if (oss_period_size > 1024 * 1024)
++ return -ENOMEM;
++
+ runtime->oss.period_bytes = oss_period_size;
+ runtime->oss.period_frames = 1;
+ runtime->oss.periods = oss_periods;
+@@ -1043,10 +1048,9 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ goto failure;
+ }
+ #endif
+- oss_period_size *= oss_frame_size;
+-
+- oss_buffer_size = oss_period_size * runtime->oss.periods;
+- if (oss_buffer_size < 0) {
++ oss_period_size = array_size(oss_period_size, oss_frame_size);
++ oss_buffer_size = array_size(oss_period_size, runtime->oss.periods);
++ if (oss_buffer_size <= 0) {
+ err = -EINVAL;
+ goto failure;
+ }
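array_size() (and the three-operand array3_size() used in the next hunk) come from <linux/overflow.h> and saturate to SIZE_MAX when the multiplication overflows, which is what makes the <= 0 test sufficient here: SIZE_MAX reads as -1 through the signed ssize_t. Usage sketch:

    #include <linux/overflow.h>

    ssize_t bytes = array_size(period_frames, frame_bytes);

    if (bytes <= 0)         /* catches both zero and saturated overflow */
            return -EINVAL;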
+diff --git a/sound/core/oss/pcm_plugin.c b/sound/core/oss/pcm_plugin.c
+index 061ba06bc9262..82e180c776ae1 100644
+--- a/sound/core/oss/pcm_plugin.c
++++ b/sound/core/oss/pcm_plugin.c
+@@ -62,7 +62,10 @@ static int snd_pcm_plugin_alloc(struct snd_pcm_plugin *plugin, snd_pcm_uframes_t
+ width = snd_pcm_format_physical_width(format->format);
+ if (width < 0)
+ return width;
+- size = frames * format->channels * width;
++ size = array3_size(frames, format->channels, width);
++ /* check for too large period size once again */
++ if (size > 1024 * 1024)
++ return -ENOMEM;
+ if (snd_BUG_ON(size % 8))
+ return -ENXIO;
+ size /= 8;
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index ba4a987ed1c62..edd9849210f2d 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -969,6 +969,7 @@ int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream,
+ init_waitqueue_head(&runtime->tsleep);
+
+ runtime->status->state = SNDRV_PCM_STATE_OPEN;
++ mutex_init(&runtime->buffer_mutex);
+
+ substream->runtime = runtime;
+ substream->private_data = pcm->private_data;
+@@ -1002,6 +1003,7 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
+ } else {
+ substream->runtime = NULL;
+ }
++ mutex_destroy(&runtime->buffer_mutex);
+ kfree(runtime);
+ put_pid(substream->pid);
+ substream->pid = NULL;
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index f2090025236b9..a40a35e51fad7 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1906,9 +1906,11 @@ static int wait_for_avail(struct snd_pcm_substream *substream,
+ if (avail >= runtime->twake)
+ break;
+ snd_pcm_stream_unlock_irq(substream);
++ mutex_unlock(&runtime->buffer_mutex);
+
+ tout = schedule_timeout(wait_time);
+
++ mutex_lock(&runtime->buffer_mutex);
+ snd_pcm_stream_lock_irq(substream);
+ set_current_state(TASK_INTERRUPTIBLE);
+ switch (runtime->status->state) {
+@@ -2219,6 +2221,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+
+ nonblock = !!(substream->f_flags & O_NONBLOCK);
+
++ mutex_lock(&runtime->buffer_mutex);
+ snd_pcm_stream_lock_irq(substream);
+ err = pcm_accessible_state(runtime);
+ if (err < 0)
+@@ -2310,6 +2313,7 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,
+ if (xfer > 0 && err >= 0)
+ snd_pcm_update_state(substream, runtime);
+ snd_pcm_stream_unlock_irq(substream);
++ mutex_unlock(&runtime->buffer_mutex);
+ return xfer > 0 ? (snd_pcm_sframes_t)xfer : err;
+ }
+ EXPORT_SYMBOL(__snd_pcm_lib_xfer);
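The hunks above establish the ordering contract for the new buffer_mutex: the sleeping mutex always nests outside the stream spinlock, and a sleeper unwinds innermost-first before calling into the scheduler. A sketch of both halves (task-state handling elided):

    mutex_lock(&runtime->buffer_mutex);
    snd_pcm_stream_lock_irq(substream);
    /* ... buffer and hw-pointer state is stable here ... */
    snd_pcm_stream_unlock_irq(substream);
    mutex_unlock(&runtime->buffer_mutex);

    /* to sleep: release innermost-first, then re-take in the same order */
    snd_pcm_stream_unlock_irq(substream);
    mutex_unlock(&runtime->buffer_mutex);
    schedule_timeout(wait_time);
    mutex_lock(&runtime->buffer_mutex);
    snd_pcm_stream_lock_irq(substream);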
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index b70ce3b69ab4d..8848d2f3160d8 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -163,19 +163,20 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ size_t size;
+ struct snd_dma_buffer new_dmab;
+
++ mutex_lock(&substream->pcm->open_mutex);
+ if (substream->runtime) {
+ buffer->error = -EBUSY;
+- return;
++ goto unlock;
+ }
+ if (!snd_info_get_line(buffer, line, sizeof(line))) {
+ snd_info_get_str(str, line, sizeof(str));
+ size = simple_strtoul(str, NULL, 10) * 1024;
+ if ((size != 0 && size < 8192) || size > substream->dma_max) {
+ buffer->error = -EINVAL;
+- return;
++ goto unlock;
+ }
+ if (substream->dma_buffer.bytes == size)
+- return;
++ goto unlock;
+ memset(&new_dmab, 0, sizeof(new_dmab));
+ new_dmab.dev = substream->dma_buffer.dev;
+ if (size > 0) {
+@@ -189,7 +190,7 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ substream->pcm->card->number, substream->pcm->device,
+ substream->stream ? 'c' : 'p', substream->number,
+ substream->pcm->name, size);
+- return;
++ goto unlock;
+ }
+ substream->buffer_bytes_max = size;
+ } else {
+@@ -201,6 +202,8 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
+ } else {
+ buffer->error = -EINVAL;
+ }
++ unlock:
++ mutex_unlock(&substream->pcm->open_mutex);
+ }
+
+ static inline void preallocate_info_init(struct snd_pcm_substream *substream)
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index a056b3ef3c843..704fdc9ebf911 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -685,33 +685,40 @@ static int snd_pcm_hw_params_choose(struct snd_pcm_substream *pcm,
+ return 0;
+ }
+
++#if IS_ENABLED(CONFIG_SND_PCM_OSS)
++#define is_oss_stream(substream) ((substream)->oss.oss)
++#else
++#define is_oss_stream(substream) false
++#endif
++
+ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params)
+ {
+ struct snd_pcm_runtime *runtime;
+- int err, usecs;
++ int err = 0, usecs;
+ unsigned int bits;
+ snd_pcm_uframes_t frames;
+
+ if (PCM_RUNTIME_CHECK(substream))
+ return -ENXIO;
+ runtime = substream->runtime;
++ mutex_lock(&runtime->buffer_mutex);
+ snd_pcm_stream_lock_irq(substream);
+ switch (runtime->status->state) {
+ case SNDRV_PCM_STATE_OPEN:
+ case SNDRV_PCM_STATE_SETUP:
+ case SNDRV_PCM_STATE_PREPARED:
++ if (!is_oss_stream(substream) &&
++ atomic_read(&substream->mmap_count))
++ err = -EBADFD;
+ break;
+ default:
+- snd_pcm_stream_unlock_irq(substream);
+- return -EBADFD;
++ err = -EBADFD;
++ break;
+ }
+ snd_pcm_stream_unlock_irq(substream);
+-#if IS_ENABLED(CONFIG_SND_PCM_OSS)
+- if (!substream->oss.oss)
+-#endif
+- if (atomic_read(&substream->mmap_count))
+- return -EBADFD;
++ if (err)
++ goto unlock;
+
+ snd_pcm_sync_stop(substream, true);
+
+@@ -799,16 +806,21 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
+ if (usecs >= 0)
+ cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+ usecs);
+- return 0;
++ err = 0;
+ _error:
+- /* hardware might be unusable from this time,
+- so we force application to retry to set
+- the correct hardware parameter settings */
+- snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+- if (substream->ops->hw_free != NULL)
+- substream->ops->hw_free(substream);
+- if (substream->managed_buffer_alloc)
+- snd_pcm_lib_free_pages(substream);
++ if (err) {
++ /* hardware might be unusable from this time,
++ * so we force application to retry to set
++ * the correct hardware parameter settings
++ */
++ snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
++ if (substream->ops->hw_free != NULL)
++ substream->ops->hw_free(substream);
++ if (substream->managed_buffer_alloc)
++ snd_pcm_lib_free_pages(substream);
++ }
++ unlock:
++ mutex_unlock(&runtime->buffer_mutex);
+ return err;
+ }
+
+@@ -848,26 +860,31 @@ static int do_hw_free(struct snd_pcm_substream *substream)
+ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
+ {
+ struct snd_pcm_runtime *runtime;
+- int result;
++ int result = 0;
+
+ if (PCM_RUNTIME_CHECK(substream))
+ return -ENXIO;
+ runtime = substream->runtime;
++ mutex_lock(&runtime->buffer_mutex);
+ snd_pcm_stream_lock_irq(substream);
+ switch (runtime->status->state) {
+ case SNDRV_PCM_STATE_SETUP:
+ case SNDRV_PCM_STATE_PREPARED:
++ if (atomic_read(&substream->mmap_count))
++ result = -EBADFD;
+ break;
+ default:
+- snd_pcm_stream_unlock_irq(substream);
+- return -EBADFD;
++ result = -EBADFD;
++ break;
+ }
+ snd_pcm_stream_unlock_irq(substream);
+- if (atomic_read(&substream->mmap_count))
+- return -EBADFD;
++ if (result)
++ goto unlock;
+ result = do_hw_free(substream);
+ snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
+ cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
++ unlock:
++ mutex_unlock(&runtime->buffer_mutex);
+ return result;
+ }
+
+@@ -1173,15 +1190,17 @@ struct action_ops {
+ static int snd_pcm_action_group(const struct action_ops *ops,
+ struct snd_pcm_substream *substream,
+ snd_pcm_state_t state,
+- bool do_lock)
++ bool stream_lock)
+ {
+ struct snd_pcm_substream *s = NULL;
+ struct snd_pcm_substream *s1;
+ int res = 0, depth = 1;
+
+ snd_pcm_group_for_each_entry(s, substream) {
+- if (do_lock && s != substream) {
+- if (s->pcm->nonatomic)
++ if (s != substream) {
++ if (!stream_lock)
++ mutex_lock_nested(&s->runtime->buffer_mutex, depth);
++ else if (s->pcm->nonatomic)
+ mutex_lock_nested(&s->self_group.mutex, depth);
+ else
+ spin_lock_nested(&s->self_group.lock, depth);
+@@ -1209,18 +1228,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
+ ops->post_action(s, state);
+ }
+ _unlock:
+- if (do_lock) {
+- /* unlock streams */
+- snd_pcm_group_for_each_entry(s1, substream) {
+- if (s1 != substream) {
+- if (s1->pcm->nonatomic)
+- mutex_unlock(&s1->self_group.mutex);
+- else
+- spin_unlock(&s1->self_group.lock);
+- }
+- if (s1 == s) /* end */
+- break;
++ /* unlock streams */
++ snd_pcm_group_for_each_entry(s1, substream) {
++ if (s1 != substream) {
++ if (!stream_lock)
++ mutex_unlock(&s1->runtime->buffer_mutex);
++ else if (s1->pcm->nonatomic)
++ mutex_unlock(&s1->self_group.mutex);
++ else
++ spin_unlock(&s1->self_group.lock);
+ }
++ if (s1 == s) /* end */
++ break;
+ }
+ return res;
+ }
+@@ -1350,10 +1369,12 @@ static int snd_pcm_action_nonatomic(const struct action_ops *ops,
+
+ /* Guarantee the group members won't change during non-atomic action */
+ down_read(&snd_pcm_link_rwsem);
++ mutex_lock(&substream->runtime->buffer_mutex);
+ if (snd_pcm_stream_linked(substream))
+ res = snd_pcm_action_group(ops, substream, state, false);
+ else
+ res = snd_pcm_action_single(ops, substream, state);
++ mutex_unlock(&substream->runtime->buffer_mutex);
+ up_read(&snd_pcm_link_rwsem);
+ return res;
+ }
+@@ -1843,11 +1864,13 @@ static int snd_pcm_do_reset(struct snd_pcm_substream *substream,
+ int err = snd_pcm_ops_ioctl(substream, SNDRV_PCM_IOCTL1_RESET, NULL);
+ if (err < 0)
+ return err;
++ snd_pcm_stream_lock_irq(substream);
+ runtime->hw_ptr_base = 0;
+ runtime->hw_ptr_interrupt = runtime->status->hw_ptr -
+ runtime->status->hw_ptr % runtime->period_size;
+ runtime->silence_start = runtime->status->hw_ptr;
+ runtime->silence_filled = 0;
++ snd_pcm_stream_unlock_irq(substream);
+ return 0;
+ }
+
+@@ -1855,10 +1878,12 @@ static void snd_pcm_post_reset(struct snd_pcm_substream *substream,
+ snd_pcm_state_t state)
+ {
+ struct snd_pcm_runtime *runtime = substream->runtime;
++ snd_pcm_stream_lock_irq(substream);
+ runtime->control->appl_ptr = runtime->status->hw_ptr;
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
+ runtime->silence_size > 0)
+ snd_pcm_playback_silence(substream, ULONG_MAX);
++ snd_pcm_stream_unlock_irq(substream);
+ }
+
+ static const struct action_ops snd_pcm_action_reset = {
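The is_oss_stream() macro introduced at the top of the snd_pcm_hw_params() hunk compiles down to constant false when CONFIG_SND_PCM_OSS is off, letting one code path serve both configurations with no #ifdef at the call site; the compiler then discards the dead branch. The generic shape of the trick (CONFIG_SOME_FEATURE and feature_active() are placeholders, not real kernel symbols):

    #if IS_ENABLED(CONFIG_SOME_FEATURE)
    #define feature_active(obj)     ((obj)->feature.enabled)
    #else
    #define feature_active(obj)     false
    #endif

    if (!feature_active(substream) && atomic_read(&substream->mmap_count))
            err = -EBADFD;  /* branch is eliminated when the feature is off */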
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 01f296d524ce6..cb60a07d39a8e 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -938,8 +938,8 @@ static int snd_ac97_ad18xx_pcm_get_volume(struct snd_kcontrol *kcontrol, struct
+ int codec = kcontrol->private_value & 3;
+
+ mutex_lock(&ac97->page_mutex);
+- ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
+- ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
++ ucontrol->value.integer.value[0] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 8) & 31);
++ ucontrol->value.integer.value[1] = 31 - ((ac97->spec.ad18xx.pcmreg[codec] >> 0) & 31);
+ mutex_unlock(&ac97->page_mutex);
+ return 0;
+ }
+diff --git a/sound/pci/cmipci.c b/sound/pci/cmipci.c
+index 9a678b5cf2855..dab801d9d3b48 100644
+--- a/sound/pci/cmipci.c
++++ b/sound/pci/cmipci.c
+@@ -298,7 +298,6 @@ MODULE_PARM_DESC(joystick_port, "Joystick port address.");
+ #define CM_MICGAINZ 0x01 /* mic boost */
+ #define CM_MICGAINZ_SHIFT 0
+
+-#define CM_REG_MIXER3 0x24
+ #define CM_REG_AUX_VOL 0x26
+ #define CM_VAUXL_MASK 0xf0
+ #define CM_VAUXR_MASK 0x0f
+@@ -3265,7 +3264,7 @@ static int snd_cmipci_probe(struct pci_dev *pci,
+ */
+ static const unsigned char saved_regs[] = {
+ CM_REG_FUNCTRL1, CM_REG_CHFORMAT, CM_REG_LEGACY_CTRL, CM_REG_MISC_CTRL,
+- CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_MIXER3, CM_REG_PLL,
++ CM_REG_MIXER0, CM_REG_MIXER1, CM_REG_MIXER2, CM_REG_AUX_VOL, CM_REG_PLL,
+ CM_REG_CH0_FRAME1, CM_REG_CH0_FRAME2,
+ CM_REG_CH1_FRAME1, CM_REG_CH1_FRAME2, CM_REG_EXT_MISC,
+ CM_REG_INT_STATUS, CM_REG_INT_HLDCLR, CM_REG_FUNCTRL0,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3a42457984e98..75ff7e8498b83 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9020,6 +9020,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
++ SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x16b2, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+@@ -9103,6 +9104,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x8561, "Clevo NH[57][0-9][ER][ACDH]Q", ALC269_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[57][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
+ SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x866d, "Clevo NP5[05]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1558, 0x867d, "Clevo NP7[01]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
+ SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+@@ -11067,6 +11070,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
++ SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2),
+ SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+ SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
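For reference, each table entry pairs a PCI subsystem vendor/device ID with a display name and the fixup applied when a codec on a matching machine is probed; taking one of the new lines above apart:

    /* subsystem vendor 0x1043 (ASUS), subsystem device 0x1d42,
     * a human-readable name, and the fixup applied on match:
     */
    SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022",
                  ALC289_FIXUP_ASUS_GA401),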
+diff --git a/sound/soc/sti/uniperif_player.c b/sound/soc/sti/uniperif_player.c
+index 2ed92c990b97c..dd9013c476649 100644
+--- a/sound/soc/sti/uniperif_player.c
++++ b/sound/soc/sti/uniperif_player.c
+@@ -91,7 +91,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ SET_UNIPERIF_ITM_BCLR_FIFO_ERROR(player);
+
+ /* Stop the player */
+- snd_pcm_stop_xrun(player->substream);
++ snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+ }
+
+ ret = IRQ_HANDLED;
+@@ -105,7 +105,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ SET_UNIPERIF_ITM_BCLR_DMA_ERROR(player);
+
+ /* Stop the player */
+- snd_pcm_stop_xrun(player->substream);
++ snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+
+ ret = IRQ_HANDLED;
+ }
+@@ -138,7 +138,7 @@ static irqreturn_t uni_player_irq_handler(int irq, void *dev_id)
+ dev_err(player->dev, "Underflow recovery failed\n");
+
+ /* Stop the player */
+- snd_pcm_stop_xrun(player->substream);
++ snd_pcm_stop(player->substream, SNDRV_PCM_STATE_XRUN);
+
+ ret = IRQ_HANDLED;
+ }
+diff --git a/sound/soc/sti/uniperif_reader.c b/sound/soc/sti/uniperif_reader.c
+index 136059331211d..065c5f0d1f5f0 100644
+--- a/sound/soc/sti/uniperif_reader.c
++++ b/sound/soc/sti/uniperif_reader.c
+@@ -65,7 +65,7 @@ static irqreturn_t uni_reader_irq_handler(int irq, void *dev_id)
+ if (unlikely(status & UNIPERIF_ITS_FIFO_ERROR_MASK(reader))) {
+ dev_err(reader->dev, "FIFO error detected\n");
+
+- snd_pcm_stop_xrun(reader->substream);
++ snd_pcm_stop(reader->substream, SNDRV_PCM_STATE_XRUN);
+
+ ret = IRQ_HANDLED;
+ }
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 96991ddf5055d..64f5544d0a0aa 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -542,6 +542,16 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .id = USB_ID(0x05a7, 0x40fa),
+ .map = bose_soundlink_map,
+ },
++ {
++ /* Corsair Virtuoso SE Latest (wired mode) */
++ .id = USB_ID(0x1b1c, 0x0a3f),
++ .map = corsair_virtuoso_map,
++ },
++ {
++ /* Corsair Virtuoso SE Latest (wireless mode) */
++ .id = USB_ID(0x1b1c, 0x0a40),
++ .map = corsair_virtuoso_map,
++ },
+ {
+ /* Corsair Virtuoso SE (wired mode) */
+ .id = USB_ID(0x1b1c, 0x0a3d),
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index e447ddd6854cd..d35cf54cab338 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -3360,9 +3360,10 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ if (unitid == 7 && cval->control == UAC_FU_VOLUME)
+ snd_dragonfly_quirk_db_scale(mixer, cval, kctl);
+ break;
+- /* lowest playback value is muted on C-Media devices */
+- case USB_ID(0x0d8c, 0x000c):
+- case USB_ID(0x0d8c, 0x0014):
++ /* lowest playback value is muted on some devices */
++ case USB_ID(0x0d8c, 0x000c): /* C-Media */
++ case USB_ID(0x0d8c, 0x0014): /* C-Media */
++ case USB_ID(0x19f7, 0x0003): /* RODE NT-USB */
+ if (strstr(kctl->id.name, "Playback"))
+ cval->min_mute = 1;
+ break;
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-03-19 14:39 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-03-19 14:39 UTC (permalink / raw
To: gentoo-commits
commit: bc17fa051d4414e07a96d597b4273584ab6d48d1
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 19 14:39:15 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 19 14:39:15 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bc17fa05
Update CPU Optimization Patch for 5.17
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
5010_enable-cpu-optimizations-universal.patch | 73 +++++++++++++--------------
1 file changed, 34 insertions(+), 39 deletions(-)
diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index becfda36..b9c03cb6 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -1,7 +1,7 @@
-From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
+From b5892719c43f739343c628e3d357471a3bdaa368 Mon Sep 17 00:00:00 2001
From: graysky <graysky@archlinux.us>
-Date: Tue, 14 Sep 2021 15:35:34 -0400
-Subject: [PATCH] more uarches for kernel 5.15+
+Date: Tue, 15 Mar 2022 05:58:43 -0400
+Subject: [PATCH] more uarches for kernel 5.17+
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@@ -86,7 +86,7 @@ See the following experimental evidence supporting this statement:
https://github.com/graysky2/kernel_gcc_patch
REQUIREMENTS
-linux version >=5.15
+linux version 5.17+
gcc version >=9.0 or clang version >=9.0
ACKNOWLEDGMENTS
@@ -100,12 +100,6 @@ REFERENCES
5. http://www.linuxforge.net/docs/linux/linux-gcc.php
Signed-off-by: graysky <graysky@archlinux.us>
----
-From 1bfa1ef4e3a93e540a64cd1020863019dff3046e Mon Sep 17 00:00:00 2001
-From: graysky <graysky@archlinux.us>
-Date: Sun, 14 Nov 2021 16:08:29 -0500
-Subject: [PATCH] iiii
-
---
arch/x86/Kconfig.cpu | 332 ++++++++++++++++++++++++++++++--
arch/x86/Makefile | 40 +++-
@@ -113,12 +107,12 @@ Subject: [PATCH] iiii
3 files changed, 424 insertions(+), 14 deletions(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index eefc434351db..331f7631339a 100644
+index 542377cd419d..22b919cdb6d1 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -157,7 +157,7 @@ config MPENTIUM4
-
-
+
+
config MK6
- bool "K6/K6-II/K6-III"
+ bool "AMD K6/K6-II/K6-III"
@@ -127,7 +121,7 @@ index eefc434351db..331f7631339a 100644
Select this for an AMD K6-family processor. Enables use of
@@ -165,7 +165,7 @@ config MK6
flags to GCC.
-
+
config MK7
- bool "Athlon/Duron/K7"
+ bool "AMD Athlon/Duron/K7"
@@ -136,7 +130,7 @@ index eefc434351db..331f7631339a 100644
Select this for an AMD Athlon K7-family processor. Enables use of
@@ -173,12 +173,98 @@ config MK7
flags to GCC.
-
+
config MK8
- bool "Opteron/Athlon64/Hammer/K8"
+ bool "AMD Opteron/Athlon64/Hammer/K8"
@@ -144,7 +138,7 @@ index eefc434351db..331f7631339a 100644
Select this for an AMD Opteron or Athlon64 Hammer-family processor.
Enables use of some extended instructions, and passes appropriate
optimization flags to GCC.
-
+
+config MK8SSE3
+ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
+ help
@@ -236,17 +230,17 @@ index eefc434351db..331f7631339a 100644
depends on X86_32
@@ -270,7 +356,7 @@ config MPSC
in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
-
+
config MCORE2
- bool "Core 2/newer Xeon"
+ bool "Intel Core 2"
help
-
+
Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
@@ -278,6 +364,8 @@ config MCORE2
family in /proc/cpuinfo. Newer ones have 6 and older ones 15
(not a typo)
-
+
+ Enables -march=core2
+
config MATOM
@@ -255,7 +249,7 @@ index eefc434351db..331f7631339a 100644
@@ -287,6 +375,182 @@ config MATOM
accordingly optimized code. Use a recent GCC with specific Atom
support in order to fully benefit from selecting this option.
-
+
+config MNEHALEM
+ bool "Intel Nehalem"
+ select X86_P6_NOP
@@ -438,7 +432,7 @@ index eefc434351db..331f7631339a 100644
@@ -294,6 +558,50 @@ config GENERIC_CPU
Generic x86-64 CPU.
Run equally well on all x86-64 CPUs.
-
+
+config GENERIC_CPU2
+ bool "Generic-x86-64-v2"
+ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
@@ -484,7 +478,7 @@ index eefc434351db..331f7631339a 100644
+ Enables -march=native
+
endchoice
-
+
config X86_GENERIC
@@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
config X86_L1_CACHE_SHIFT
@@ -494,45 +488,45 @@ index eefc434351db..331f7631339a 100644
+ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
default "4" if MELAN || M486SX || M486 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-
+
@@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
-
+
config X86_INTEL_USERCOPY
def_bool y
- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
-
+
config X86_USE_PPRO_CHECKSUM
def_bool y
- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
+ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
- config X86_USE_3DNOW
- def_bool y
-@@ -360,26 +668,26 @@ config X86_USE_3DNOW
+
+ #
+ # P6_NOPs are a relatively minor optimization that require a family >=
+@@ -356,26 +664,26 @@ config X86_USE_PPRO_CHECKSUM
config X86_P6_NOP
def_bool y
depends on X86_64
- depends on (MCORE2 || MPENTIUM4 || MPSC)
+ depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
-
+
config X86_TSC
def_bool y
- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
+ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
-
+
config X86_CMPXCHG64
def_bool y
- depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
+ depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
-
+
# this should be set for all -march=.. options where the compiler
# generates cmov.
config X86_CMOV
def_bool y
- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
+ depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
-
+
config X86_MINIMUM_CPU_FAMILY
int
default "64" if X86_64
@@ -540,12 +534,12 @@ index eefc434351db..331f7631339a 100644
+ default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
default "5" if X86_32 && X86_CMPXCHG64
default "4"
-
+
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index 42243869216d..ab1ad6959b96 100644
+index e84cdd409b64..7d3bbf060079 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
-@@ -119,8 +119,44 @@ else
+@@ -131,8 +131,44 @@ else
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
cflags-$(CONFIG_MK8) += -march=k8
cflags-$(CONFIG_MPSC) += -march=nocona
@@ -591,7 +585,7 @@ index 42243869216d..ab1ad6959b96 100644
+ cflags-$(CONFIG_GENERIC_CPU4) += -march=x86-64-v4
cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
KBUILD_CFLAGS += $(cflags-y)
-
+
diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
index 75884d2cdec3..4e6a08d4c7e5 100644
--- a/arch/x86/include/asm/vermagic.h
@@ -676,5 +670,6 @@ index 75884d2cdec3..4e6a08d4c7e5 100644
#elif defined CONFIG_MELAN
#define MODULE_PROC_FAMILY "ELAN "
#elif defined CONFIG_MCRUSOE
---
-2.33.1
+--
+2.35.1
+
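Each of the new Kconfig symbols maps one-to-one onto a -march value in arch/x86/Makefile, in the same form as the existing cflags lines. Judging from the GENERIC_CPU4 line visible above, the generic levels pair up as follows (the v2/v3 lines are inferred from the pattern, not quoted from the patch):

    cflags-$(CONFIG_GENERIC_CPU2) += -march=x86-64-v2
    cflags-$(CONFIG_GENERIC_CPU3) += -march=x86-64-v3
    cflags-$(CONFIG_GENERIC_CPU4) += -march=x86-64-v4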
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-03-06 17:51 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-03-06 17:51 UTC (permalink / raw
To: gentoo-commits
commit: 41dc35b2785cc890101781fb03476966ef499337
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 6 17:50:51 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 6 17:50:51 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=41dc35b2
Update default security restrictions
Bug: https://bugs.gentoo.org/834085
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
1510_fs-enable-link-security-restrictions-by-default.patch | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
index b1f1a88d..e8c30157 100644
--- a/1510_fs-enable-link-security-restrictions-by-default.patch
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -1,13 +1,17 @@
--- a/fs/namei.c 2022-01-23 13:02:27.876558299 -0500
-+++ b/fs/namei.c 2022-01-23 14:01:29.634533326 -0500
-@@ -1020,8 +1020,8 @@ static inline void put_link(struct namei
++++ b/fs/namei.c 2022-03-06 12:47:39.375719693 -0500
+@@ -1020,10 +1020,10 @@ static inline void put_link(struct namei
path_put(&last->link);
}
-static int sysctl_protected_symlinks __read_mostly;
-static int sysctl_protected_hardlinks __read_mostly;
+-static int sysctl_protected_fifos __read_mostly;
+-static int sysctl_protected_regular __read_mostly;
+static int sysctl_protected_symlinks __read_mostly = 1;
+static int sysctl_protected_hardlinks __read_mostly = 1;
- static int sysctl_protected_fifos __read_mostly;
- static int sysctl_protected_regular __read_mostly;
++int sysctl_protected_fifos __read_mostly = 1;
++int sysctl_protected_regular __read_mostly = 1;
+ #ifdef CONFIG_SYSCTL
+ static struct ctl_table namei_sysctls[] = {
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-01-31 13:05 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-01-31 13:05 UTC (permalink / raw
To: gentoo-commits
commit: 97f20bd90f2c0efda3912a11e7ae4f0af22e927e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 29 20:43:23 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jan 31 13:05:01 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=97f20bd9
Select CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y as default
Bug: https://bugs.gentoo.org/832224
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 24b75095..3712fa96 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
source "Documentation/Kconfig"
+
+source "distro/Kconfig"
---- /dev/null 2021-12-21 08:57:43.779324794 -0500
-+++ b/distro/Kconfig 2021-12-21 14:12:07.964572417 -0500
-@@ -0,0 +1,283 @@
+--- /dev/null 2022-01-29 13:28:12.679255142 -0500
++++ b/distro/Kconfig 2022-01-29 15:29:29.800465617 -0500
+@@ -0,0 +1,285 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -16,6 +16,8 @@
+
+ default y
+
++ select CPU_FREQ_DEFAULT_GOV_SCHEDUTIL
++
+ help
+ In order to boot Gentoo Linux a minimal set of config settings needs to
+ be enabled in the kernel; to avoid the users from having to enable them
* [gentoo-commits] proj/linux-patches:5.17 commit in: /
@ 2022-01-23 20:00 Mike Pagano
0 siblings, 0 replies; 29+ messages in thread
From: Mike Pagano @ 2022-01-23 20:00 UTC (permalink / raw
To: gentoo-commits
commit: 1c02368330258d65db041918b557e2f5e58d7120
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 23 19:58:59 2022 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jan 23 19:58:59 2022 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1c023683
Add genpatches to 5.17
Patch to add support for the user.pax.* namespace on tmpfs.
Patch to enable link security restrictions by default.
Bluetooth: Check key sizes only when Secure Simple Pairing is
enabled. See bug #686758
mt76: mt7921e: fix possible probe failure after reboot
tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig.
See bug #710790. Thanks to Phil Stracchino
sign-file: full functionality with modern LibreSSL
mt76: mt7921e: fix possible probe failure after reboot
Add Gentoo Linux support config settings and defaults.
Kernel Self Protection patch
CPU Optimization patch
Print firmware info (requires CONFIG_GENTOO_PRINT_FIRMWARE_INFO)
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 32 +
1500_XATTR_USER_PREFIX.patch | 67 ++
...ble-link-security-restrictions-by-default.patch | 13 +
...zes-only-if-Secure-Simple-Pairing-enabled.patch | 37 ++
...e-fix-possible-probe-failure-after-reboot.patch | 436 +++++++++++++
...3-Fix-build-issue-by-selecting-CONFIG_REG.patch | 30 +
2920_sign-file-patch-for-libressl.patch | 16 +
3000_Support-printing-firmware-info.patch | 14 +
5010_enable-cpu-optimizations-universal.patch | 680 +++++++++++++++++++++
9 files changed, 1325 insertions(+)
diff --git a/0000_README b/0000_README
index 90189932..c012760e 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,38 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1500_XATTR_USER_PREFIX.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc: Support for namespace user.pax.* on tmpfs.
+
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
+From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
+Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+
+Patch: 2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
+From: https://patchwork.kernel.org/project/linux-wireless/patch/70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com/
+Desc: mt76: mt7921e: fix possible probe failure after reboot
+
+Patch: 2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+From: https://bugs.gentoo.org/710790
+Desc: tmp513 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+
+Patch: 2920_sign-file-patch-for-libressl.patch
+From: https://bugs.gentoo.org/717166
+Desc: sign-file: full functionality with modern LibreSSL
+
+Patch: 3000_Support-printing-firmware-info.patch
+From: https://bugs.gentoo.org/732852
+Desc: Print firmware info (Reqs CONFIG_GENTOO_PRINT_FIRMWARE_INFO). Thanks to Georgy Yakovlev
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5010_enable-cpu-optimizations-universal.patch
+From: https://github.com/graysky2/kernel_compiler_patch
+Desc: Kernel >= 5.15 patch enables gcc >= 11.1 optimizations for additional CPUs.
diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 00000000..245dcc29
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,67 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags. The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs. Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+
+ #endif /* _UAPI_LINUX_XATTR_H */
+--- a/mm/shmem.c 2020-05-04 15:30:27.042035334 -0400
++++ b/mm/shmem.c 2020-05-04 15:34:57.013881725 -0400
+@@ -3238,6 +3238,14 @@ static int shmem_xattr_handler_set(const
+ struct shmem_inode_info *info = SHMEM_I(inode);
+
+ name = xattr_full_name(handler, name);
++
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++ return -EOPNOTSUPP;
++ if (size > 8)
++ return -EINVAL;
++ }
++
+ return simple_xattr_set(&info->xattrs, name, value, size, flags, NULL);
+ }
+
+@@ -3253,6 +3261,12 @@ static const struct xattr_handler shmem_
+ .set = shmem_xattr_handler_set,
+ };
+
++static const struct xattr_handler shmem_user_xattr_handler = {
++ .prefix = XATTR_USER_PREFIX,
++ .get = shmem_xattr_handler_get,
++ .set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ &posix_acl_access_xattr_handler,
+@@ -3260,6 +3274,7 @@ static const struct xattr_handler *shmem
+ #endif
+ &shmem_security_xattr_handler,
+ &shmem_trusted_xattr_handler,
++ &shmem_user_xattr_handler,
+ NULL
+ };
+
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 00000000..b1f1a88d
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,13 @@
+--- a/fs/namei.c 2022-01-23 13:02:27.876558299 -0500
++++ b/fs/namei.c 2022-01-23 14:01:29.634533326 -0500
+@@ -1020,8 +1020,8 @@ static inline void put_link(struct namei
+ path_put(&last->link);
+ }
+
+-static int sysctl_protected_symlinks __read_mostly;
+-static int sysctl_protected_hardlinks __read_mostly;
++static int sysctl_protected_symlinks __read_mostly = 1;
++static int sysctl_protected_hardlinks __read_mostly = 1;
+ static int sysctl_protected_fifos __read_mostly;
+ static int sysctl_protected_regular __read_mostly;
+
diff --git a/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
new file mode 100644
index 00000000..394ad48f
--- /dev/null
+++ b/2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
@@ -0,0 +1,37 @@
+The encryption is only mandatory to be enforced when both sides are using
+Secure Simple Pairing and this means the key size check makes only sense
+in that case.
+
+On legacy Bluetooth 2.0 and earlier devices like mice the encryption was
+optional and thus causing an issue if the key size check is not bound to
+using Secure Simple Pairing.
+
+Fixes: d5bb334a8e17 ("Bluetooth: Align minimum encryption key size for LE and BR/EDR connections")
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Cc: stable@vger.kernel.org
+---
+ net/bluetooth/hci_conn.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 3cf0764d5793..7516cdde3373 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1272,8 +1272,13 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ return 0;
+ }
+
+- if (hci_conn_ssp_enabled(conn) &&
+- !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
++ /* If Secure Simple Pairing is not enabled, then legacy connection
++ * setup is used and no encryption or key sizes can be enforced.
++ */
++ if (!hci_conn_ssp_enabled(conn))
++ return 1;
++
++ if (!test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ return 0;
+
+ /* The minimum encryption key size needs to be enforced by the
+--
+2.20.1
diff --git a/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch b/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
new file mode 100644
index 00000000..4440e910
--- /dev/null
+++ b/2400_mt76-mt7921e-fix-possible-probe-failure-after-reboot.patch
@@ -0,0 +1,436 @@
+From patchwork Fri Jan 7 07:30:03 2022
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 7bit
+X-Patchwork-Submitter: Sean Wang <sean.wang@mediatek.com>
+X-Patchwork-Id: 12706336
+X-Patchwork-Delegate: nbd@nbd.name
+Return-Path: <linux-wireless-owner@kernel.org>
+X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
+ aws-us-west-2-korg-lkml-1.web.codeaurora.org
+Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
+ by smtp.lore.kernel.org (Postfix) with ESMTP id 21BAFC433F5
+ for <linux-wireless@archiver.kernel.org>;
+ Fri, 7 Jan 2022 07:30:14 +0000 (UTC)
+Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
+ id S234590AbiAGHaN (ORCPT
+ <rfc822;linux-wireless@archiver.kernel.org>);
+ Fri, 7 Jan 2022 02:30:13 -0500
+Received: from mailgw01.mediatek.com ([60.244.123.138]:50902 "EHLO
+ mailgw01.mediatek.com" rhost-flags-OK-FAIL-OK-FAIL) by vger.kernel.org
+ with ESMTP id S234589AbiAGHaM (ORCPT
+ <rfc822;linux-wireless@vger.kernel.org>);
+ Fri, 7 Jan 2022 02:30:12 -0500
+X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
+X-UUID: 13aa3d5f268849bb8556160d0ceca5f3-20220107
+Received: from mtkcas10.mediatek.inc [(172.21.101.39)] by
+ mailgw01.mediatek.com
+ (envelope-from <sean.wang@mediatek.com>)
+ (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-SHA384 256/256)
+ with ESMTP id 982418171; Fri, 07 Jan 2022 15:30:06 +0800
+Received: from mtkcas10.mediatek.inc (172.21.101.39) by
+ mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server
+ (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.792.3;
+ Fri, 7 Jan 2022 15:30:05 +0800
+Received: from mtkswgap22.mediatek.inc (172.21.77.33) by mtkcas10.mediatek.inc
+ (172.21.101.73) with Microsoft SMTP Server id 15.0.1497.2 via Frontend
+ Transport; Fri, 7 Jan 2022 15:30:04 +0800
+From: <sean.wang@mediatek.com>
+To: <nbd@nbd.name>, <lorenzo.bianconi@redhat.com>
+CC: <sean.wang@mediatek.com>, <Soul.Huang@mediatek.com>,
+ <YN.Chen@mediatek.com>, <Leon.Yen@mediatek.com>,
+ <Eric-SY.Chang@mediatek.com>, <Mark-YW.Chen@mediatek.com>,
+ <Deren.Wu@mediatek.com>, <km.lin@mediatek.com>,
+ <jenhao.yang@mediatek.com>, <robin.chiu@mediatek.com>,
+ <Eddie.Chen@mediatek.com>, <ch.yeh@mediatek.com>,
+ <posh.sun@mediatek.com>, <ted.huang@mediatek.com>,
+ <Eric.Liang@mediatek.com>, <Stella.Chang@mediatek.com>,
+ <Tom.Chou@mediatek.com>, <steve.lee@mediatek.com>,
+ <jsiuda@google.com>, <frankgor@google.com>, <jemele@google.com>,
+ <abhishekpandit@google.com>, <shawnku@google.com>,
+ <linux-wireless@vger.kernel.org>,
+ <linux-mediatek@lists.infradead.org>,
+ "Deren Wu" <deren.wu@mediatek.com>
+Subject: [PATCH] mt76: mt7921e: fix possible probe failure after reboot
+Date: Fri, 7 Jan 2022 15:30:03 +0800
+Message-ID:
+ <70e27cbc652cbdb78277b9c691a3a5ba02653afb.1641540175.git.objelf@gmail.com>
+X-Mailer: git-send-email 1.7.9.5
+MIME-Version: 1.0
+X-MTK: N
+Precedence: bulk
+List-ID: <linux-wireless.vger.kernel.org>
+X-Mailing-List: linux-wireless@vger.kernel.org
+
+From: Sean Wang <sean.wang@mediatek.com>
+
+There is no guarantee that the mt7921e starts in ASPM L0 after each
+machine reboot on every platform.
+
+If the mt7921e does not start in ASPM L0, the driver can fail
+intermittently in mt7921_pci_probe, e.g. a bogus chip identifier
+is read
+
+[ 215.514503] mt7921e 0000:05:00.0: ASIC revision: feed0000
+[ 216.604741] mt7921e: probe of 0000:05:00.0 failed with error -110
+
+or the hardware fails to initialize, because the driver is not allowed to
+access the registers until the device is in the ASPM L0 state. So we call
+__mt7921e_mcu_drv_pmctrl early in mt7921_pci_probe to force the device
+back to the L0 state, so that we can safely access registers in any
+case.
+
+In the patch, we move all functions from dma.c to pci.c and register the
+mt76 bus operations earlier, which is what __mt7921e_mcu_drv_pmctrl depends on.
+
+Fixes: bf3747ae2e25 ("mt76: mt7921: enable aspm by default")
+Reported-by: Kai-Chuan Hsieh <kaichuan.hsieh@canonical.com>
+Co-developed-by: Deren Wu <deren.wu@mediatek.com>
+Signed-off-by: Deren Wu <deren.wu@mediatek.com>
+Signed-off-by: Sean Wang <sean.wang@mediatek.com>
+---
+ .../net/wireless/mediatek/mt76/mt7921/dma.c | 119 -----------------
+ .../wireless/mediatek/mt76/mt7921/mt7921.h | 1 +
+ .../net/wireless/mediatek/mt76/mt7921/pci.c | 124 ++++++++++++++++++
+ .../wireless/mediatek/mt76/mt7921/pci_mcu.c | 18 ++-
+ 4 files changed, 139 insertions(+), 123 deletions(-)
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index cdff1fd52d93..39d6ce4ecddd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -78,110 +78,6 @@ static void mt7921_dma_prefetch(struct mt7921_dev *dev)
+ mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
+ }
+
+-static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
+-{
+- static const struct {
+- u32 phys;
+- u32 mapped;
+- u32 size;
+- } fixed_map[] = {
+- { 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
+- { 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
+- { 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
+- { 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
+- { 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
+- { 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
+- { 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
+- { 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
+- { 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
+- { 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
+- { 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
+- { 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
+- { 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
+- { 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
+- { 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
+- { 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
+- { 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
+- { 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
+- { 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
+- { 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
+- { 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
+- { 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
+- { 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
+- { 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
+- { 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
+- { 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
+- { 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
+- { 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
+- { 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
+- { 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
+- { 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
+- { 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
+- { 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
+- { 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
+- { 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
+- { 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
+- { 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
+- { 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
+- { 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
+- { 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
+- { 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
+- { 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
+- { 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
+- };
+- int i;
+-
+- if (addr < 0x100000)
+- return addr;
+-
+- for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
+- u32 ofs;
+-
+- if (addr < fixed_map[i].phys)
+- continue;
+-
+- ofs = addr - fixed_map[i].phys;
+- if (ofs > fixed_map[i].size)
+- continue;
+-
+- return fixed_map[i].mapped + ofs;
+- }
+-
+- if ((addr >= 0x18000000 && addr < 0x18c00000) ||
+- (addr >= 0x70000000 && addr < 0x78000000) ||
+- (addr >= 0x7c000000 && addr < 0x7c400000))
+- return mt7921_reg_map_l1(dev, addr);
+-
+- dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
+- addr);
+-
+- return 0;
+-}
+-
+-static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
+-{
+- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+- u32 addr = __mt7921_reg_addr(dev, offset);
+-
+- return dev->bus_ops->rr(mdev, addr);
+-}
+-
+-static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
+-{
+- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+- u32 addr = __mt7921_reg_addr(dev, offset);
+-
+- dev->bus_ops->wr(mdev, addr, val);
+-}
+-
+-static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
+-{
+- struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+- u32 addr = __mt7921_reg_addr(dev, offset);
+-
+- return dev->bus_ops->rmw(mdev, addr, mask, val);
+-}
+-
+ static int mt7921_dma_disable(struct mt7921_dev *dev, bool force)
+ {
+ if (force) {
+@@ -341,23 +237,8 @@ int mt7921_wpdma_reinit_cond(struct mt7921_dev *dev)
+
+ int mt7921_dma_init(struct mt7921_dev *dev)
+ {
+- struct mt76_bus_ops *bus_ops;
+ int ret;
+
+- dev->phy.dev = dev;
+- dev->phy.mt76 = &dev->mt76.phy;
+- dev->mt76.phy.priv = &dev->phy;
+- dev->bus_ops = dev->mt76.bus;
+- bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
+- GFP_KERNEL);
+- if (!bus_ops)
+- return -ENOMEM;
+-
+- bus_ops->rr = mt7921_rr;
+- bus_ops->wr = mt7921_wr;
+- bus_ops->rmw = mt7921_rmw;
+- dev->mt76.bus = bus_ops;
+-
+ mt76_dma_attach(&dev->mt76);
+
+ ret = mt7921_dma_disable(dev, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 8b674e042568..63e3c7ef5e89 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -443,6 +443,7 @@ int mt7921e_mcu_init(struct mt7921_dev *dev);
+ int mt7921s_wfsys_reset(struct mt7921_dev *dev);
+ int mt7921s_mac_reset(struct mt7921_dev *dev);
+ int mt7921s_init_reset(struct mt7921_dev *dev);
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev);
+ int mt7921e_mcu_fw_pmctrl(struct mt7921_dev *dev);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 1ae0d5826ca7..a0c82d19c4d9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -121,6 +121,110 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
+ mt76_free_device(&dev->mt76);
+ }
+
++static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
++{
++ static const struct {
++ u32 phys;
++ u32 mapped;
++ u32 size;
++ } fixed_map[] = {
++ { 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
++ { 0x820ed000, 0x24800, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
++ { 0x820e4000, 0x21000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
++ { 0x820e7000, 0x21e00, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
++ { 0x820eb000, 0x24200, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
++ { 0x820e2000, 0x20800, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
++ { 0x820e3000, 0x20c00, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
++ { 0x820e5000, 0x21400, 0x0800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
++ { 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
++ { 0x00410000, 0x90000, 0x10000 }, /* WF_MCU_SYSRAM (configure register) */
++ { 0x40000000, 0x70000, 0x10000 }, /* WF_UMAC_SYSRAM */
++ { 0x54000000, 0x02000, 0x1000 }, /* WFDMA PCIE0 MCU DMA0 */
++ { 0x55000000, 0x03000, 0x1000 }, /* WFDMA PCIE0 MCU DMA1 */
++ { 0x58000000, 0x06000, 0x1000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
++ { 0x59000000, 0x07000, 0x1000 }, /* WFDMA PCIE1 MCU DMA1 */
++ { 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
++ { 0x7c020000, 0xd0000, 0x10000 }, /* CONN_INFRA, WFDMA */
++ { 0x7c060000, 0xe0000, 0x10000 }, /* CONN_INFRA, conn_host_csr_top */
++ { 0x80020000, 0xb0000, 0x10000 }, /* WF_TOP_MISC_OFF */
++ { 0x81020000, 0xc0000, 0x10000 }, /* WF_TOP_MISC_ON */
++ { 0x820c0000, 0x08000, 0x4000 }, /* WF_UMAC_TOP (PLE) */
++ { 0x820c8000, 0x0c000, 0x2000 }, /* WF_UMAC_TOP (PSE) */
++ { 0x820cc000, 0x0e000, 0x1000 }, /* WF_UMAC_TOP (PP) */
++ { 0x820cd000, 0x0f000, 0x1000 }, /* WF_MDP_TOP */
++ { 0x820ce000, 0x21c00, 0x0200 }, /* WF_LMAC_TOP (WF_SEC) */
++ { 0x820cf000, 0x22000, 0x1000 }, /* WF_LMAC_TOP (WF_PF) */
++ { 0x820e0000, 0x20000, 0x0400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
++ { 0x820e1000, 0x20400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
++ { 0x820e9000, 0x23400, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
++ { 0x820ea000, 0x24000, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
++ { 0x820ec000, 0x24600, 0x0200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
++ { 0x820f0000, 0xa0000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
++ { 0x820f1000, 0xa0600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
++ { 0x820f2000, 0xa0800, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
++ { 0x820f3000, 0xa0c00, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
++ { 0x820f4000, 0xa1000, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
++ { 0x820f5000, 0xa1400, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
++ { 0x820f7000, 0xa1e00, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
++ { 0x820f9000, 0xa3400, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
++ { 0x820fa000, 0xa4000, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
++ { 0x820fb000, 0xa4200, 0x0400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
++ { 0x820fc000, 0xa4600, 0x0200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
++ { 0x820fd000, 0xa4800, 0x0800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
++ };
++ int i;
++
++ if (addr < 0x100000)
++ return addr;
++
++ for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
++ u32 ofs;
++
++ if (addr < fixed_map[i].phys)
++ continue;
++
++ ofs = addr - fixed_map[i].phys;
++ if (ofs > fixed_map[i].size)
++ continue;
++
++ return fixed_map[i].mapped + ofs;
++ }
++
++ if ((addr >= 0x18000000 && addr < 0x18c00000) ||
++ (addr >= 0x70000000 && addr < 0x78000000) ||
++ (addr >= 0x7c000000 && addr < 0x7c400000))
++ return mt7921_reg_map_l1(dev, addr);
++
++ dev_err(dev->mt76.dev, "Access currently unsupported address %08x\n",
++ addr);
++
++ return 0;
++}
++
++static u32 mt7921_rr(struct mt76_dev *mdev, u32 offset)
++{
++ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++ u32 addr = __mt7921_reg_addr(dev, offset);
++
++ return dev->bus_ops->rr(mdev, addr);
++}
++
++static void mt7921_wr(struct mt76_dev *mdev, u32 offset, u32 val)
++{
++ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++ u32 addr = __mt7921_reg_addr(dev, offset);
++
++ dev->bus_ops->wr(mdev, addr, val);
++}
++
++static u32 mt7921_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
++{
++ struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
++ u32 addr = __mt7921_reg_addr(dev, offset);
++
++ return dev->bus_ops->rmw(mdev, addr, mask, val);
++}
++
+ static int mt7921_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+ {
+@@ -152,6 +256,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ .fw_own = mt7921e_mcu_fw_pmctrl,
+ };
+
++ struct mt76_bus_ops *bus_ops;
+ struct mt7921_dev *dev;
+ struct mt76_dev *mdev;
+ int ret;
+@@ -189,6 +294,25 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+
+ mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
+ tasklet_init(&dev->irq_tasklet, mt7921_irq_tasklet, (unsigned long)dev);
++
++ dev->phy.dev = dev;
++ dev->phy.mt76 = &dev->mt76.phy;
++ dev->mt76.phy.priv = &dev->phy;
++ dev->bus_ops = dev->mt76.bus;
++ bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
++ GFP_KERNEL);
++ if (!bus_ops)
++ return -ENOMEM;
++
++ bus_ops->rr = mt7921_rr;
++ bus_ops->wr = mt7921_wr;
++ bus_ops->rmw = mt7921_rmw;
++ dev->mt76.bus = bus_ops;
++
++ ret = __mt7921e_mcu_drv_pmctrl(dev);
++ if (ret)
++ return ret;
++
+ mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
+ (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
+ dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+index f9e350b67fdc..36669e5aeef3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mcu.c
+@@ -59,10 +59,8 @@ int mt7921e_mcu_init(struct mt7921_dev *dev)
+ return err;
+ }
+
+-int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++int __mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ {
+- struct mt76_phy *mphy = &dev->mt76.phy;
+- struct mt76_connac_pm *pm = &dev->pm;
+ int i, err = 0;
+
+ for (i = 0; i < MT7921_DRV_OWN_RETRY_COUNT; i++) {
+@@ -75,9 +73,21 @@ int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
+ if (i == MT7921_DRV_OWN_RETRY_COUNT) {
+ dev_err(dev->mt76.dev, "driver own failed\n");
+ err = -EIO;
+- goto out;
+ }
+
++ return err;
++}
++
++int mt7921e_mcu_drv_pmctrl(struct mt7921_dev *dev)
++{
++ struct mt76_phy *mphy = &dev->mt76.phy;
++ struct mt76_connac_pm *pm = &dev->pm;
++ int err;
++
++ err = __mt7921e_mcu_drv_pmctrl(dev);
++ if (err < 0)
++ goto out;
++
+ mt7921_wpdma_reinit_cond(dev);
+ clear_bit(MT76_STATE_PM, &mphy->state);
+
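An aside on the remapping hunks just relocated from dma.c to pci.c: __mt7921_reg_addr is a plain linear scan over fixed physical-to-BAR windows (the mt7921_reg_map_l1() fallback for a few dynamic ranges is omitted here). Below is a minimal, self-contained userspace model of that lookup, using only a three-entry excerpt of the driver's fixed_map[] and arbitrary sample addresses:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct window { uint32_t phys, mapped, size; };

/* Three-entry excerpt of the driver's fixed_map[] shown above. */
static const struct window fixed_map[] = {
    { 0x820d0000, 0x30000, 0x10000 }, /* WF_LMAC_TOP (WF_WTBLON) */
    { 0x00400000, 0x80000, 0x10000 }, /* WF_MCU_SYSRAM */
    { 0x7c000000, 0xf0000, 0x10000 }, /* CONN_INFRA */
};

/* Mirrors the lookup in __mt7921_reg_addr, minus the L1 fallback. */
static uint32_t reg_addr(uint32_t addr)
{
    size_t i;

    if (addr < 0x100000)    /* low addresses pass through unchanged */
        return addr;

    for (i = 0; i < sizeof(fixed_map) / sizeof(fixed_map[0]); i++) {
        uint32_t ofs;

        if (addr < fixed_map[i].phys)
            continue;
        ofs = addr - fixed_map[i].phys;
        if (ofs > fixed_map[i].size)
            continue;
        return fixed_map[i].mapped + ofs;   /* window hit */
    }
    return 0;   /* the real driver logs "unsupported address" here */
}

int main(void)
{
    printf("0x820d0100 -> 0x%05x\n", reg_addr(0x820d0100)); /* 0x30100 */
    printf("0x00400080 -> 0x%05x\n", reg_addr(0x00400080)); /* 0x80080 */
    return 0;
}

The scan is linear and the table short, so correctness does not depend on the entries being sorted; the hunks above are a pure relocation of this helper, not a rewrite.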
diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
new file mode 100644
index 00000000..43356857
--- /dev/null
+++ b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
@@ -0,0 +1,30 @@
+From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Mon, 23 Mar 2020 08:20:06 -0400
+Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build. Select it by
+ default in Kconfig. Reported at gentoo bugzilla:
+ https://bugs.gentoo.org/710790
+Cc: mpagano@gentoo.org
+
+Reported-by: Phil Stracchino <phils@caerllewys.net>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hwmon/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..530b4f29ba85 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
+ config SENSORS_TMP513
+ tristate "Texas Instruments TMP513 and compatibles"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for Texas Instruments TMP512,
+ and TMP513 temperature and power supply sensor chips.
+--
+2.24.1
+
diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 00000000..e6ec017d
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c 2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c 2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+ * signing with anything other than SHA1 - so we're stuck with that if such is
+ * the case.
+ */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+- OPENSSL_VERSION_NUMBER < 0x10000000L || \
+- defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++ ( defined(LIBRESSL_VERSION_NUMBER) \
++ && (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++ OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7
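The rewritten condition above only reorders and tightens the clauses, so that LibreSSL 3.1.0 and newer take the CMS signing path instead of always being forced onto PKCS#7. Here is a standalone sketch of the gating; the version macros are defined by hand with illustrative values (modelling a LibreSSL 3.0.2 build, which reports itself as OpenSSL 2.0) purely so the sketch compiles without OpenSSL headers:

#include <stdio.h>

/* Hand-defined stand-ins for macros the real TLS headers provide.
 * Values model LibreSSL 3.0.2 and are illustrative only. */
#define LIBRESSL_VERSION_NUMBER 0x3000200fL
#define OPENSSL_VERSION_NUMBER  0x20000000L

/* The gate as rearranged by the patch above. */
#if defined(OPENSSL_NO_CMS) || \
    (defined(LIBRESSL_VERSION_NUMBER) && \
     (LIBRESSL_VERSION_NUMBER < 0x3010000fL)) || \
    OPENSSL_VERSION_NUMBER < 0x10000000L
#define USE_PKCS7
#endif

int main(void)
{
#ifdef USE_PKCS7
    puts("module signing falls back to PKCS#7");
#else
    puts("module signing uses CMS");
#endif
    return 0;
}

With the values above, the PKCS#7 branch is taken; bumping LIBRESSL_VERSION_NUMBER to 0x3010000fL or higher flips the sketch onto the CMS path, which is exactly the behaviour the patch enables.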
diff --git a/3000_Support-printing-firmware-info.patch b/3000_Support-printing-firmware-info.patch
new file mode 100644
index 00000000..a630cfbe
--- /dev/null
+++ b/3000_Support-printing-firmware-info.patch
@@ -0,0 +1,14 @@
+--- a/drivers/base/firmware_loader/main.c 2021-08-24 15:42:07.025482085 -0400
++++ b/drivers/base/firmware_loader/main.c 2021-08-24 15:44:40.782975313 -0400
+@@ -809,6 +809,11 @@ _request_firmware(const struct firmware
+
+ ret = _request_firmware_prepare(&fw, name, device, buf, size,
+ offset, opt_flags);
++
++#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
++ printk(KERN_NOTICE "Loading firmware: %s\n", name);
++#endif
++
+ if (ret <= 0) /* error or already assigned */
+ goto out;
+
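The hunk above is the entire feature: one printk gated on a Gentoo-specific Kconfig symbol. A userspace model of the same compile-time pattern follows, where defining the macro stands in for enabling CONFIG_GENTOO_PRINT_FIRMWARE_INFO and the firmware name is just an example string:

#include <stdio.h>

/* Stands in for enabling the Kconfig option. */
#define CONFIG_GENTOO_PRINT_FIRMWARE_INFO 1

static int request_firmware_model(const char *name)
{
#ifdef CONFIG_GENTOO_PRINT_FIRMWARE_INFO
    printf("Loading firmware: %s\n", name);  /* mirrors the added printk */
#endif
    /* the real _request_firmware() goes on to locate and load the blob */
    return 0;
}

int main(void)
{
    return request_firmware_model("example-firmware.bin");
}

Compiled without the #define, the notice disappears entirely, which is why the guard costs nothing on kernels built without the option.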
diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
new file mode 100644
index 00000000..becfda36
--- /dev/null
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -0,0 +1,680 @@
+From d31d2b0747ab55e65c2366d51149a0ec9896155e Mon Sep 17 00:00:00 2001
+From: graysky <graysky@archlinux.us>
+Date: Tue, 14 Sep 2021 15:35:34 -0400
+Subject: [PATCH] more uarches for kernel 5.15+
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+With the release of gcc 11.1 and clang 12.0, several generic 64-bit levels are
+offered which are good for supported Intel or AMD CPUs:
+• x86-64-v2
+• x86-64-v3
+• x86-64-v4
+
+Users of glibc 2.33 and above can see which level is supported by current
+hardware by running:
+ /lib/ld-linux-x86-64.so.2 --help | grep supported
+
+Alternatively, compare the flags from /proc/cpuinfo to this list.[1]
+
+CPU-specific microarchitectures include:
+• AMD Improved K8-family
+• AMD K10-family
+• AMD Family 10h (Barcelona)
+• AMD Family 14h (Bobcat)
+• AMD Family 16h (Jaguar)
+• AMD Family 15h (Bulldozer)
+• AMD Family 15h (Piledriver)
+• AMD Family 15h (Steamroller)
+• AMD Family 15h (Excavator)
+• AMD Family 17h (Zen)
+• AMD Family 17h (Zen 2)
+• AMD Family 19h (Zen 3)†
+• Intel Silvermont low-power processors
+• Intel Goldmont low-power processors (Apollo Lake and Denverton)
+• Intel Goldmont Plus low-power processors (Gemini Lake)
+• Intel 1st Gen Core i3/i5/i7 (Nehalem)
+• Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+• Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+• Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+• Intel 4th Gen Core i3/i5/i7 (Haswell)
+• Intel 5th Gen Core i3/i5/i7 (Broadwell)
+• Intel 6th Gen Core i3/i5/i7 (Skylake)
+• Intel 6th Gen Core i7/i9 (Skylake X)
+• Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+• Intel 10th Gen Core i7/i9 (Ice Lake)
+• Intel Xeon (Cascade Lake)
+• Intel Xeon (Cooper Lake)*
+• Intel 3rd Gen 10nm++ i3/i5/i7/i9-family (Tiger Lake)*
+• Intel 3rd Gen 10nm++ Xeon (Sapphire Rapids)‡
+• Intel 11th Gen i3/i5/i7/i9-family (Rocket Lake)‡
+• Intel 12th Gen i3/i5/i7/i9-family (Alder Lake)‡
+
+Notes: If not otherwise noted, gcc >=9.1 is required for support.
+ *Requires gcc >=10.1 or clang >=10.0
+ †Requires gcc >=10.3 or clang >=12.0
+ ‡Requires gcc >=11.1 or clang >=12.0
+
+It also offers compiling with the 'native' option, which "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[2]
+
+Users of Intel CPUs should select the 'Intel-Native' option and users of AMD
+CPUs should select the 'AMD-Native' option.
+
+MINOR NOTES RELATING TO INTEL ATOM PROCESSORS
+This patch also changes -march=atom to -march=bonnell in accordance with the
+gcc v4.9 changes. Upstream is using the deprecated -march=atom flag when I
+believe it should use the newer -march=bonnell flag for Atom processors.[3]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=5.15
+gcc version >=9.0 or clang version >=9.0
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gitlab.com/x86-psABIs/x86-64-ABI/-/commit/77566eb03bc6a326811cb7e9
+2. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html#index-x86-Options
+3. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+Signed-off-by: graysky <graysky@archlinux.us>
+---
+From 1bfa1ef4e3a93e540a64cd1020863019dff3046e Mon Sep 17 00:00:00 2001
+From: graysky <graysky@archlinux.us>
+Date: Sun, 14 Nov 2021 16:08:29 -0500
+Subject: [PATCH] iiii
+
+---
+ arch/x86/Kconfig.cpu | 332 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile | 40 +++-
+ arch/x86/include/asm/vermagic.h | 66 +++++++
+ 3 files changed, 424 insertions(+), 14 deletions(-)
+
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index eefc434351db..331f7631339a 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -157,7 +157,7 @@ config MPENTIUM4
+
+
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ help
+ Select this for an AMD K6-family processor. Enables use of
+@@ -165,7 +165,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ help
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -173,12 +173,98 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ help
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ help
++ Select this for AMD Opteron or Athlon64 Hammer-family processors with SSE3.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ help
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ help
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ help
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ help
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ help
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ help
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ help
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ help
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ help
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
++config MZEN2
++ bool "AMD Zen 2"
++ help
++ Select this for AMD Family 17h Zen 2 processors.
++
++ Enables -march=znver2
++
++config MZEN3
++ bool "AMD Zen 3"
++ depends on (CC_IS_GCC && GCC_VERSION >= 100300) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ help
++ Select this for AMD Family 19h Zen 3 processors.
++
++ Enables -march=znver3
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -270,7 +356,7 @@ config MPSC
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
+ help
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -278,6 +364,8 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
++ Enables -march=core2
++
+ config MATOM
+ bool "Intel Atom"
+ help
+@@ -287,6 +375,182 @@ config MATOM
+ accordingly optimized code. Use a recent GCC with specific Atom
+ support in order to fully benefit from selecting this option.
+
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
++ help
++
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Westmere (formerly Nehalem-C) family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MGOLDMONT
++ bool "Intel Goldmont"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Goldmont platform including Apollo Lake and Denverton.
++
++ Enables -march=goldmont
++
++config MGOLDMONTPLUS
++ bool "Intel Goldmont Plus"
++ select X86_P6_NOP
++ help
++
++ Select this for the Intel Goldmont Plus platform including Gemini Lake.
++
++ Enables -march=goldmont-plus
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ help
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ help
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ help
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ help
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ help
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
++
++config MSKYLAKEX
++ bool "Intel Skylake X"
++ select X86_P6_NOP
++ help
++
++ Select this for 6th Gen Core processors in the Skylake X family.
++
++ Enables -march=skylake-avx512
++
++config MCANNONLAKE
++ bool "Intel Cannon Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for 8th Gen Core processors in the Cannon Lake family.
++
++ Enables -march=cannonlake
++
++config MICELAKE
++ bool "Intel Ice Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for 10th Gen Core processors in the Ice Lake family.
++
++ Enables -march=icelake-client
++
++config MCASCADELAKE
++ bool "Intel Cascade Lake"
++ select X86_P6_NOP
++ help
++
++ Select this for Xeon processors in the Cascade Lake family.
++
++ Enables -march=cascadelake
++
++config MCOOPERLAKE
++ bool "Intel Cooper Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++ select X86_P6_NOP
++ help
++
++ Select this for Xeon processors in the Cooper Lake family.
++
++ Enables -march=cooperlake
++
++config MTIGERLAKE
++ bool "Intel Tiger Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 100100) || (CC_IS_CLANG && CLANG_VERSION >= 100000)
++ select X86_P6_NOP
++ help
++
++ Select this for third-generation 10 nm process processors in the Tiger Lake family.
++
++ Enables -march=tigerlake
++
++config MSAPPHIRERAPIDS
++ bool "Intel Sapphire Rapids"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ select X86_P6_NOP
++ help
++
++ Select this for third-generation 10 nm process processors in the Sapphire Rapids family.
++
++ Enables -march=sapphirerapids
++
++config MROCKETLAKE
++ bool "Intel Rocket Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ select X86_P6_NOP
++ help
++
++ Select this for eleventh-generation processors in the Rocket Lake family.
++
++ Enables -march=rocketlake
++
++config MALDERLAKE
++ bool "Intel Alder Lake"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ select X86_P6_NOP
++ help
++
++ Select this for twelfth-generation processors in the Alder Lake family.
++
++ Enables -march=alderlake
++
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+ depends on X86_64
+@@ -294,6 +558,50 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config GENERIC_CPU2
++ bool "Generic-x86-64-v2"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ depends on X86_64
++ help
++ Generic x86-64 CPU.
++ Run equally well on all x86-64 CPUs with min support of x86-64-v2.
++
++config GENERIC_CPU3
++ bool "Generic-x86-64-v3"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ depends on X86_64
++ help
++ Generic x86-64-v3 CPU with v3 instructions.
++ Run equally well on all x86-64 CPUs with min support of x86-64-v3.
++
++config GENERIC_CPU4
++ bool "Generic-x86-64-v4"
++ depends on (CC_IS_GCC && GCC_VERSION > 110000) || (CC_IS_CLANG && CLANG_VERSION >= 120000)
++ depends on X86_64
++ help
++ Generic x86-64 CPU with v4 instructions.
++ Run equally well on all x86-64 CPUs with min support of x86-64-v4.
++
++config MNATIVE_INTEL
++ bool "Intel-Native optimizations autodetected by the compiler"
++ help
++
++ Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. Do NOT use this
++ for AMD CPUs. Intel Only!
++
++ Enables -march=native
++
++config MNATIVE_AMD
++ bool "AMD-Native optimizations autodetected by the compiler"
++ help
++
++ Clang 3.8, GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. Do NOT use this
++ for Intel CPUs. AMD Only!
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -318,7 +626,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD || X86_GENERIC || GENERIC_CPU || GENERIC_CPU2 || GENERIC_CPU3 || GENERIC_CPU4
+ default "4" if MELAN || M486SX || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -336,11 +644,11 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
+
+ config X86_USE_3DNOW
+ def_bool y
+@@ -360,26 +668,26 @@ config X86_USE_3DNOW
+ config X86_P6_NOP
+ def_bool y
+ depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL)
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+- depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
++ depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+ default "64" if X86_64
+- default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8)
++ default "6" if X86_32 && (MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MEFFICEON || MATOM || MCRUSOE || MCORE2 || MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MNATIVE_INTEL || MNATIVE_AMD)
+ default "5" if X86_32 && X86_CMPXCHG64
+ default "4"
+
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 42243869216d..ab1ad6959b96 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -119,8 +119,44 @@ else
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
+ cflags-$(CONFIG_MK8) += -march=k8
+ cflags-$(CONFIG_MPSC) += -march=nocona
+- cflags-$(CONFIG_MCORE2) += -march=core2
+- cflags-$(CONFIG_MATOM) += -march=atom
++ cflags-$(CONFIG_MK8SSE3) += -march=k8-sse3
++ cflags-$(CONFIG_MK10) += -march=amdfam10
++ cflags-$(CONFIG_MBARCELONA) += -march=barcelona
++ cflags-$(CONFIG_MBOBCAT) += -march=btver1
++ cflags-$(CONFIG_MJAGUAR) += -march=btver2
++ cflags-$(CONFIG_MBULLDOZER) += -march=bdver1
++ cflags-$(CONFIG_MPILEDRIVER) += -march=bdver2 -mno-tbm
++ cflags-$(CONFIG_MSTEAMROLLER) += -march=bdver3 -mno-tbm
++ cflags-$(CONFIG_MEXCAVATOR) += -march=bdver4 -mno-tbm
++ cflags-$(CONFIG_MZEN) += -march=znver1
++ cflags-$(CONFIG_MZEN2) += -march=znver2
++ cflags-$(CONFIG_MZEN3) += -march=znver3
++ cflags-$(CONFIG_MNATIVE_INTEL) += -march=native
++ cflags-$(CONFIG_MNATIVE_AMD) += -march=native
++ cflags-$(CONFIG_MATOM) += -march=bonnell
++ cflags-$(CONFIG_MCORE2) += -march=core2
++ cflags-$(CONFIG_MNEHALEM) += -march=nehalem
++ cflags-$(CONFIG_MWESTMERE) += -march=westmere
++ cflags-$(CONFIG_MSILVERMONT) += -march=silvermont
++ cflags-$(CONFIG_MGOLDMONT) += -march=goldmont
++ cflags-$(CONFIG_MGOLDMONTPLUS) += -march=goldmont-plus
++ cflags-$(CONFIG_MSANDYBRIDGE) += -march=sandybridge
++ cflags-$(CONFIG_MIVYBRIDGE) += -march=ivybridge
++ cflags-$(CONFIG_MHASWELL) += -march=haswell
++ cflags-$(CONFIG_MBROADWELL) += -march=broadwell
++ cflags-$(CONFIG_MSKYLAKE) += -march=skylake
++ cflags-$(CONFIG_MSKYLAKEX) += -march=skylake-avx512
++ cflags-$(CONFIG_MCANNONLAKE) += -march=cannonlake
++ cflags-$(CONFIG_MICELAKE) += -march=icelake-client
++ cflags-$(CONFIG_MCASCADELAKE) += -march=cascadelake
++ cflags-$(CONFIG_MCOOPERLAKE) += -march=cooperlake
++ cflags-$(CONFIG_MTIGERLAKE) += -march=tigerlake
++ cflags-$(CONFIG_MSAPPHIRERAPIDS) += -march=sapphirerapids
++ cflags-$(CONFIG_MROCKETLAKE) += -march=rocketlake
++ cflags-$(CONFIG_MALDERLAKE) += -march=alderlake
++ cflags-$(CONFIG_GENERIC_CPU2) += -march=x86-64-v2
++ cflags-$(CONFIG_GENERIC_CPU3) += -march=x86-64-v3
++ cflags-$(CONFIG_GENERIC_CPU4) += -march=x86-64-v4
+ cflags-$(CONFIG_GENERIC_CPU) += -mtune=generic
+ KBUILD_CFLAGS += $(cflags-y)
+
+diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
+index 75884d2cdec3..4e6a08d4c7e5 100644
+--- a/arch/x86/include/asm/vermagic.h
++++ b/arch/x86/include/asm/vermagic.h
+@@ -17,6 +17,48 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE_INTEL
++#define MODULE_PROC_FAMILY "NATIVE_INTEL "
++#elif defined CONFIG_MNATIVE_AMD
++#define MODULE_PROC_FAMILY "NATIVE_AMD "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MGOLDMONT
++#define MODULE_PROC_FAMILY "GOLDMONT "
++#elif defined CONFIG_MGOLDMONTPLUS
++#define MODULE_PROC_FAMILY "GOLDMONTPLUS "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MCASCADELAKE
++#define MODULE_PROC_FAMILY "CASCADELAKE "
++#elif defined CONFIG_MCOOPERLAKE
++#define MODULE_PROC_FAMILY "COOPERLAKE "
++#elif defined CONFIG_MTIGERLAKE
++#define MODULE_PROC_FAMILY "TIGERLAKE "
++#elif defined CONFIG_MSAPPHIRERAPIDS
++#define MODULE_PROC_FAMILY "SAPPHIRERAPIDS "
++#elif defined CONFIG_MROCKETLAKE
++#define MODULE_PROC_FAMILY "ROCKETLAKE "
++#elif defined CONFIG_MALDERLAKE
++#define MODULE_PROC_FAMILY "ALDERLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -35,6 +77,30 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
++#elif defined CONFIG_MZEN2
++#define MODULE_PROC_FAMILY "ZEN2 "
++#elif defined CONFIG_MZEN3
++#define MODULE_PROC_FAMILY "ZEN3 "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--
+2.33.1
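Besides the glibc one-liner quoted in the patch description, GCC's __builtin_cpu_supports() gives a quick runtime probe of the generic feature levels from userspace. The sketch below checks a representative subset of the x86-64-v3 baseline (AVX2, BMI2, FMA); the full psABI definition lists more features, so treat a positive result as indicative rather than authoritative:

#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();   /* prime GCC's CPU-feature cache */

    /* Subset of the x86-64-v3 baseline, not the complete psABI list. */
    int v3 = __builtin_cpu_supports("avx2") &&
             __builtin_cpu_supports("bmi2") &&
             __builtin_cpu_supports("fma");

    printf("x86-64-v3 (subset check): %s\n",
           v3 ? "likely supported" : "not supported");
    return 0;
}

Built with a plain gcc invocation, this reports the subset as present on Haswell-class and newer hardware, which lines up with the GENERIC_CPU3 (x86-64-v3) option the patch adds.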
^ permalink raw reply related [flat|nested] 29+ messages in thread