From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (2048 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 74B221581D3
	for ; Fri, 17 May 2024 11:35:16 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id A9D2EE2A37;
	Fri, 17 May 2024 11:35:15 +0000 (UTC)
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id D7292E2A37
	for ; Fri, 17 May 2024 11:35:14 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 2ECD033C1C1
	for ; Fri, 17 May 2024 11:35:13 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id C600F98E
	for ; Fri, 17 May 2024 11:35:11 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1715945697.7e120e0a664dfafca277af3b11e4dc945823fd38.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.6 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1030_linux-6.6.31.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 7e120e0a664dfafca277af3b11e4dc945823fd38
X-VCS-Branch: 6.6
Date: Fri, 17 May 2024 11:35:11 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 45e6e6e7-fd2a-4099-9758-8a429266a96d
X-Archives-Hash: a44ecde13e13aa2a3e52437c13572d09

commit:     7e120e0a664dfafca277af3b11e4dc945823fd38
Author:     Mike Pagano  gentoo org>
AuthorDate: Fri May 17 11:34:57 2024 +0000
Commit:     Mike Pagano  gentoo org>
CommitDate: Fri May 17 11:34:57 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7e120e0a

Linux patch 6.6.31

Signed-off-by: Mike Pagano  gentoo.org>

 0000_README             |     4 +
 1030_linux-6.6.31.patch | 12777 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12781 insertions(+)

diff --git a/0000_README b/0000_README
index 8de2a6fd..badcf846 100644
--- a/0000_README
+++ b/0000_README
@@ -163,6 +163,10 @@ Patch:  1029_linux-6.6.30.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.6.30
 
+Patch:  1030_linux-6.6.31.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.6.31
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
diff --git a/1030_linux-6.6.31.patch b/1030_linux-6.6.31.patch
new file mode 100644
index 00000000..6eb335f8
--- /dev/null
+++ b/1030_linux-6.6.31.patch
@@ -0,0 +1,12777 @@
+diff --git a/Documentation/devicetree/bindings/iio/health/maxim,max30102.yaml b/Documentation/devicetree/bindings/iio/health/maxim,max30102.yaml
+index c13c10c8d65da..eed0df9d3a232 100644
+--- a/Documentation/devicetree/bindings/iio/health/maxim,max30102.yaml
++++ b/Documentation/devicetree/bindings/iio/health/maxim,max30102.yaml
+@@ -42,7 +42,7 @@ allOf:
+ properties:
+ compatible:
+ contains:
+- const: maxim,max30100
++ const: maxim,max30102
+ then:
+ properties:
+ maxim,green-led-current-microamp: false
+diff --git a/Documentation/devicetree/bindings/net/mediatek,net.yaml b/Documentation/devicetree/bindings/net/mediatek,net.yaml
+index e74502a0afe86..3202dc7967c5b 100644
+--- a/Documentation/devicetree/bindings/net/mediatek,net.yaml
++++ b/Documentation/devicetree/bindings/net/mediatek,net.yaml
+@@ -337,8 +337,8 @@ allOf:
+ minItems: 4
+
+ clocks:
+- minItems: 34
+- maxItems: 34
++ minItems: 24
++ maxItems: 24
+
+ clock-names:
+ items:
+@@ -351,18 +351,6 @@
+ - const: ethwarp_wocpu1
+ - const: ethwarp_wocpu0
+ - const: esw
+- - const: netsys0
+- - const: netsys1
+- - const: sgmii_tx250m
+- - const: sgmii_rx250m
+- - const: sgmii2_tx250m
+- - const: sgmii2_rx250m
+- - const: top_usxgmii0_sel
+- - const: top_usxgmii1_sel
+- - const: top_sgm0_sel
+- - const: top_sgm1_sel
+- - const: top_xfi_phy0_xtal_sel
+- - const: top_xfi_phy1_xtal_sel
+ - const: top_eth_gmii_sel
+ - const: top_eth_refck_50m_sel
+ - const: top_eth_sys_200m_sel
+@@ -375,16 +363,10 @@
+ - const: top_netsys_sync_250m_sel
+ - const: top_netsys_ppefb_250m_sel
+ - const: top_netsys_warp_sel
+- - const: wocpu1
+- - const: wocpu0
+ - const: xgp1
+ - const: xgp2
+ - const: xgp3
+
+- mediatek,sgmiisys:
+- minItems: 2
+- maxItems: 2
+-
+ patternProperties:
+ "^mac@[0-1]$":
+ type: object
+diff --git a/Makefile b/Makefile
+index 1c144301b02f6..f66238b5d9315 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 6
+-SUBLEVEL = 30
++SUBLEVEL = 31
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+
+diff --git a/arch/arm/kernel/sleep.S b/arch/arm/kernel/sleep.S
+index a86a1d4f34618..93afd1005b43c 100644
+--- a/arch/arm/kernel/sleep.S
++++ b/arch/arm/kernel/sleep.S
+@@ -127,6 +127,10 @@ cpu_resume_after_mmu:
+ instr_sync
+ #endif
+ bl cpu_init @ restore the und/abt/irq banked regs
++#if defined(CONFIG_KASAN) && defined(CONFIG_KASAN_STACK)
++ mov r0, sp
++ bl kasan_unpoison_task_stack_below
++#endif
+ mov r0, #0 @ return zero on success
+ ldmfd sp!, {r4 - r11, pc}
+ ENDPROC(cpu_resume_after_mmu)
+diff --git a/arch/arm64/boot/dts/qcom/sa8155p-adp.dts b/arch/arm64/boot/dts/qcom/sa8155p-adp.dts
+index 5e4287f8c8cd1..b2cf2c988336c 100644
+--- a/arch/arm64/boot/dts/qcom/sa8155p-adp.dts
++++ b/arch/arm64/boot/dts/qcom/sa8155p-adp.dts
+@@ -367,6 +367,16 @@ queue0 {
+ };
+ };
+
++&pmm8155au_1_gpios {
++ pmm8155au_1_sdc2_cd: sdc2-cd-default-state {
++ pins = "gpio4";
++ function = "normal";
++ input-enable;
++ bias-pull-up;
++ power-source = <0>;
++ };
++};
++
+ &qupv3_id_1 {
+ status = "okay";
+ };
+@@ -384,10 +394,10 @@ &remoteproc_cdsp {
+ &sdhc_2 {
+ status = "okay";
+
+- cd-gpios = <&tlmm 4 GPIO_ACTIVE_LOW>;
++ cd-gpios = <&pmm8155au_1_gpios 4 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default", "sleep";
+- pinctrl-0 = <&sdc2_on>;
+- pinctrl-1 = <&sdc2_off>;
++ pinctrl-0 = <&sdc2_on &pmm8155au_1_sdc2_cd>;
++ pinctrl-1 = <&sdc2_off &pmm8155au_1_sdc2_cd>;
+ vqmmc-supply = <&vreg_l13c_2p96>; /* IO line power */
+ vmmc-supply = <&vreg_l17a_2p96>; /* Card power line */
+ bus-width = <4>;
+@@ -505,13 +515,6 @@ data-pins {
+ bias-pull-up; /* pull up */
+ drive-strength = <16>; /* 16 MA */
+ };
+-
+- sd-cd-pins {
+- pins = "gpio96";
+- function = "gpio";
+- bias-pull-up; /* pull up */
+- drive-strength = <2>; /* 2 MA */
+- };
+ };
+
+ sdc2_off: sdc2-off-state {
+@@ -532,13 +535,6 @@ data-pins {
+ bias-pull-up; /* pull up */
+ drive-strength = <2>; /* 2 MA */
+ };
+-
+- sd-cd-pins {
+- pins = "gpio96";
+- function = "gpio";
+- bias-pull-up; /* pull up */
+- drive-strength = <2>; /* 2 MA */
+- };
+ };
+
+ usb2phy_ac_en1_default: usb2phy-ac-en1-default-state {
+diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+index 212b73a715c1c..2f9e8c611f642 100644
+--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
++++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+@@ -337,16 +337,12 @@ int kvm_register_vgic_device(unsigned long type)
+ int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr,
+ struct vgic_reg_attr *reg_attr)
+ {
+- int cpuid;
++ int cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr);
+
+- cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >>
+- KVM_DEV_ARM_VGIC_CPUID_SHIFT;
+-
+- if (cpuid >= atomic_read(&dev->kvm->online_vcpus))
+- return -EINVAL;
+-
+- reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid);
+ reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK;
++ reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid);
++ if (!reg_attr->vcpu)
++ return -EINVAL;
+
+ return 0;
+ }
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 29196dce9b91d..166619348b98e 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -1738,15 +1738,15 @@ static void invoke_bpf_prog(struct jit_ctx *ctx, struct bpf_tramp_link *l,
+
+ emit_call(enter_prog, ctx);
+
++ /* save return value to callee saved register x20 */
++ emit(A64_MOV(1, A64_R(20), A64_R(0)), ctx);
++
+ /* if (__bpf_prog_enter(prog) == 0)
+ * goto skip_exec_of_prog;
+ */
+ branch = ctx->image + ctx->idx;
+ emit(A64_NOP, ctx);
+
+- /* save return value to callee saved register x20 */
+- emit(A64_MOV(1, A64_R(20), A64_R(0)), ctx);
+-
+ emit(A64_ADD_I(1, A64_R(0), A64_SP, args_off), ctx);
+ if (!p->jited)
+ emit_addr_mov_i64(A64_R(1), (const u64)p->insnsi, ctx);
+diff --git a/arch/mips/include/asm/ptrace.h b/arch/mips/include/asm/ptrace.h
+index d14d0e37ad02d..4a2b40ce39e09 100644
+--- a/arch/mips/include/asm/ptrace.h
++++ b/arch/mips/include/asm/ptrace.h
+@@ -159,7 +159,7 @@ extern unsigned long exception_ip(struct pt_regs *regs);
+ #define exception_ip(regs) exception_ip(regs)
+ #define profile_pc(regs) instruction_pointer(regs)
+
+-extern asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall);
++extern asmlinkage long syscall_trace_enter(struct pt_regs *regs);
+ extern asmlinkage void syscall_trace_leave(struct pt_regs *regs);
+
+ extern void die(const char *, struct pt_regs *) __noreturn;
+diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
+index d1b11f66f748f..cb1045ebab062 100644
+--- a/arch/mips/kernel/asm-offsets.c
++++ b/arch/mips/kernel/asm-offsets.c
+@@ -101,6 +101,7 @@ void output_thread_info_defines(void)
+ OFFSET(TI_CPU, thread_info, cpu);
+ OFFSET(TI_PRE_COUNT, thread_info, preempt_count);
+ OFFSET(TI_REGS, thread_info, regs);
++ OFFSET(TI_SYSCALL, thread_info, syscall);
+ DEFINE(_THREAD_SIZE, THREAD_SIZE);
+ DEFINE(_THREAD_MASK, THREAD_MASK);
+ DEFINE(_IRQ_STACK_SIZE, IRQ_STACK_SIZE);
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index 59288c13b581b..61503a36067e9 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -1317,16 +1317,13 @@ long arch_ptrace(struct task_struct *child, long request,
+ * Notification of system call entry/exit
+ * - triggered by current->work.syscall_trace
+ */
+-asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
++asmlinkage long syscall_trace_enter(struct pt_regs *regs)
+ {
+ user_exit();
+
+- current_thread_info()->syscall = syscall;
+-
+ if (test_thread_flag(TIF_SYSCALL_TRACE)) {
+ if (ptrace_report_syscall_entry(regs))
+ return -1;
+- syscall = current_thread_info()->syscall;
+ }
+
+ #ifdef CONFIG_SECCOMP
+@@ -1335,7 +1332,7 @@ asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
+ struct seccomp_data sd;
+ unsigned long args[6];
+
+- sd.nr = syscall;
++ sd.nr = current_thread_info()->syscall;
+ sd.arch = syscall_get_arch(current);
+ syscall_get_arguments(current, regs, args);
+ for (i = 0; i < 6; i++)
+@@ -1345,23 +1342,23 @@ asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
+ ret = __secure_computing(&sd);
+ if (ret == -1)
+ return ret;
+- syscall = current_thread_info()->syscall;
+ }
+ #endif
+
+ if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+ trace_sys_enter(regs, regs->regs[2]);
+
+- audit_syscall_entry(syscall, regs->regs[4], regs->regs[5],
++ audit_syscall_entry(current_thread_info()->syscall,
++ regs->regs[4], regs->regs[5],
+ regs->regs[6], regs->regs[7]);
+
+ /*
+ * Negative syscall numbers are mistaken for rejected syscalls, but
+ * won't have had the return value set appropriately, so we do so now.
+ */
+- if (syscall < 0)
++ if (current_thread_info()->syscall < 0)
+ syscall_set_return_value(current, regs, -ENOSYS, 0);
+- return syscall;
++ return current_thread_info()->syscall;
+ }
+
+ /*
+diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S
+index 18dc9b3450561..2c604717e6308 100644
+--- a/arch/mips/kernel/scall32-o32.S
++++ b/arch/mips/kernel/scall32-o32.S
+@@ -77,6 +77,18 @@ loads_done:
+ PTR_WD load_a7, bad_stack_a7
+ .previous
+
++ /*
++ * syscall number is in v0 unless we called syscall(__NR_###)
++ * where the real syscall number is in a0
++ */
++ subu t2, v0, __NR_O32_Linux
++ bnez t2, 1f /* __NR_syscall at offset 0 */
++ LONG_S a0, TI_SYSCALL($28) # Save a0 as syscall number
++ b 2f
++1:
++ LONG_S v0, TI_SYSCALL($28) # Save v0 as syscall number
++2:
++
+ lw t0, TI_FLAGS($28) # syscall tracing enabled?
+ li t1, _TIF_WORK_SYSCALL_ENTRY
+ and t0, t1
+@@ -114,16 +126,7 @@ syscall_trace_entry:
+ SAVE_STATIC
+ move a0, sp
+
+- /*
+- * syscall number is in v0 unless we called syscall(__NR_###)
+- * where the real syscall number is in a0
+- */
+- move a1, v0
+- subu t2, v0, __NR_O32_Linux
+- bnez t2, 1f /* __NR_syscall at offset 0 */
+- lw a1, PT_R4(sp)
+-
+-1: jal syscall_trace_enter
++ jal syscall_trace_enter
+
+ bltz v0, 1f # seccomp failed? Skip syscall
+
+diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S
+index 97456b2ca7dc3..97788859238c3 100644
+--- a/arch/mips/kernel/scall64-n32.S
++++ b/arch/mips/kernel/scall64-n32.S
+@@ -44,6 +44,8 @@ NESTED(handle_sysn32, PT_SIZE, sp)
+
+ sd a3, PT_R26(sp) # save a3 for syscall restarting
+
++ LONG_S v0, TI_SYSCALL($28) # Store syscall number
++
+ li t1, _TIF_WORK_SYSCALL_ENTRY
+ LONG_L t0, TI_FLAGS($28) # syscall tracing enabled?
+ and t0, t1, t0
+@@ -72,7 +74,6 @@ syscall_common:
+ n32_syscall_trace_entry:
+ SAVE_STATIC
+ move a0, sp
+- move a1, v0
+ jal syscall_trace_enter
+
+ bltz v0, 1f # seccomp failed? Skip syscall
+diff --git a/arch/mips/kernel/scall64-n64.S b/arch/mips/kernel/scall64-n64.S
+index e6264aa62e457..be11ea5cc67e0 100644
+--- a/arch/mips/kernel/scall64-n64.S
++++ b/arch/mips/kernel/scall64-n64.S
+@@ -46,6 +46,8 @@ NESTED(handle_sys64, PT_SIZE, sp)
+
+ sd a3, PT_R26(sp) # save a3 for syscall restarting
+
++ LONG_S v0, TI_SYSCALL($28) # Store syscall number
++
+ li t1, _TIF_WORK_SYSCALL_ENTRY
+ LONG_L t0, TI_FLAGS($28) # syscall tracing enabled?
+ and t0, t1, t0
+@@ -82,7 +84,6 @@ n64_syscall_exit:
+ syscall_trace_entry:
+ SAVE_STATIC
+ move a0, sp
+- move a1, v0
+ jal syscall_trace_enter
+
+ bltz v0, 1f # seccomp failed? Skip syscall
+diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
+index d3c2616cba226..7a5abb73e5312 100644
+--- a/arch/mips/kernel/scall64-o32.S
++++ b/arch/mips/kernel/scall64-o32.S
+@@ -79,6 +79,22 @@ loads_done:
+ PTR_WD load_a7, bad_stack_a7
+ .previous
+
++ /*
++ * absolute syscall number is in v0 unless we called syscall(__NR_###)
++ * where the real syscall number is in a0
++ * note: NR_syscall is the first O32 syscall but the macro is
++ * only defined when compiling with -mabi=32 (CONFIG_32BIT)
++ * therefore __NR_O32_Linux is used (4000)
++ */
++
++ subu t2, v0, __NR_O32_Linux
++ bnez t2, 1f /* __NR_syscall at offset 0 */
++ LONG_S a0, TI_SYSCALL($28) # Save a0 as syscall number
++ b 2f
++1:
++ LONG_S v0, TI_SYSCALL($28) # Save v0 as syscall number
++2:
++
+ li t1, _TIF_WORK_SYSCALL_ENTRY
+ LONG_L t0, TI_FLAGS($28) # syscall tracing enabled?
+ and t0, t1, t0
+@@ -113,22 +129,7 @@ trace_a_syscall:
+ sd a7, PT_R11(sp) # For indirect syscalls
+
+ move a0, sp
+- /*
+- * absolute syscall number is in v0 unless we called syscall(__NR_###)
+- * where the real syscall number is in a0
+- * note: NR_syscall is the first O32 syscall but the macro is
+- * only defined when compiling with -mabi=32 (CONFIG_32BIT)
+- * therefore __NR_O32_Linux is used (4000)
+- */
+- .set push
+- .set reorder
+- subu t1, v0, __NR_O32_Linux
+- move a1, v0
+- bnez t1, 1f /* __NR_syscall at offset 0 */
+- ld a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
+- .set pop
+-
+-1: jal syscall_trace_enter
++ jal syscall_trace_enter
+
+ bltz v0, 1f # seccomp failed? Skip syscall
+
+diff --git a/arch/powerpc/crypto/chacha-p10-glue.c b/arch/powerpc/crypto/chacha-p10-glue.c
+index 74fb86b0d2097..7c728755852e1 100644
+--- a/arch/powerpc/crypto/chacha-p10-glue.c
++++ b/arch/powerpc/crypto/chacha-p10-glue.c
+@@ -197,6 +197,9 @@ static struct skcipher_alg algs[] = {
+
+ static int __init chacha_p10_init(void)
+ {
++ if (!cpu_has_feature(CPU_FTR_ARCH_31))
++ return 0;
++
+ static_branch_enable(&have_p10);
+
+ return crypto_register_skciphers(algs, ARRAY_SIZE(algs));
+@@ -204,10 +207,13 @@ static int __init chacha_p10_init(void)
+
+ static void __exit chacha_p10_exit(void)
+ {
++ if (!static_branch_likely(&have_p10))
++ return;
++
+ crypto_unregister_skciphers(algs, ARRAY_SIZE(algs));
+ }
+
+-module_cpu_feature_match(PPC_MODULE_FEATURE_P10, chacha_p10_init);
++module_init(chacha_p10_init);
+ module_exit(chacha_p10_exit);
+
+ MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers (P10 accelerated)");
+diff --git a/arch/powerpc/include/asm/plpks.h b/arch/powerpc/include/asm/plpks.h
+index 23b77027c9163..7a84069759b03 100644
+--- a/arch/powerpc/include/asm/plpks.h
++++ b/arch/powerpc/include/asm/plpks.h
+@@ -44,9 +44,8 @@
+ #define PLPKS_MAX_DATA_SIZE 4000
+
+ // Timeouts for PLPKS operations
+-#define PLPKS_MAX_TIMEOUT 5000 // msec
+-#define PLPKS_FLUSH_SLEEP 10 // msec
+-#define PLPKS_FLUSH_SLEEP_RANGE 400
++#define PLPKS_MAX_TIMEOUT (5 * USEC_PER_SEC)
++#define PLPKS_FLUSH_SLEEP 10000 // usec
+
+ struct plpks_var {
+ char *component;
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index e8c4129697b14..b1e6d275cda9e 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -786,8 +786,16 @@ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
+ * parent bus. During reboot, there will be ibm,dma-window property to
+ * define DMA window. For kdump, there will at least be default window or DDW
+ * or both.
++ * There is an exception to the above. In case the PE goes into frozen
++ * state, firmware may not provide ibm,dma-window property at the time
++ * of LPAR boot up.
+ */
+
++ if (!pdn) {
++ pr_debug(" no ibm,dma-window property !\n");
++ return;
++ }
++
+ ppci = PCI_DN(pdn);
+
+ pr_debug(" parent is %pOF, iommu_table: 0x%p\n",
+diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
+index 2d40304eb6c16..ed492d38f6ad6 100644
+--- a/arch/powerpc/platforms/pseries/plpks.c
++++ b/arch/powerpc/platforms/pseries/plpks.c
+@@ -415,8 +415,7 @@ static int plpks_confirm_object_flushed(struct label *label,
+ break;
+ }
+
+- usleep_range(PLPKS_FLUSH_SLEEP,
+- PLPKS_FLUSH_SLEEP + PLPKS_FLUSH_SLEEP_RANGE);
++ fsleep(PLPKS_FLUSH_SLEEP);
+ timeout = timeout + PLPKS_FLUSH_SLEEP;
+ } while (timeout < PLPKS_MAX_TIMEOUT);
+
+@@ -464,9 +463,10 @@ int plpks_signed_update_var(struct plpks_var *var, u64 flags)
+
+ continuetoken = retbuf[0];
+ if (pseries_status_to_err(rc) == -EBUSY) {
+- int delay_ms = get_longbusy_msecs(rc);
+- mdelay(delay_ms);
+- timeout += delay_ms;
++ int delay_us = get_longbusy_msecs(rc) * 1000;
++
++ fsleep(delay_us);
++ timeout += delay_us;
+ }
+ rc = pseries_status_to_err(rc);
+ } while (rc == -EBUSY && timeout < PLPKS_MAX_TIMEOUT);
+diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
+index 8581693e62d39..b3990874e4818 100644
+--- a/arch/riscv/net/bpf_jit_comp64.c
++++ b/arch/riscv/net/bpf_jit_comp64.c
+@@ -740,6 +740,9 @@ static int invoke_bpf_prog(struct bpf_tramp_link *l, int args_off, int retval_of
+ if (ret)
+ return ret;
+
++ /* store prog start time */
++ emit_mv(RV_REG_S1, RV_REG_A0, ctx);
++
+ /* if (__bpf_prog_enter(prog) == 0)
+ * goto skip_exec_of_prog;
+ */
+@@ -747,9 +750,6 @@ static int invoke_bpf_prog(struct bpf_tramp_link *l, int args_off, int retval_of
+ /* nop reserved for conditional jump */
+ emit(rv_nop(), ctx);
+
+- /* store prog start time */
+- emit_mv(RV_REG_S1, RV_REG_A0, ctx);
+-
+ /* arg1: &args_off */
+ emit_addi(RV_REG_A0, RV_REG_FP, -args_off, ctx);
+ if (!p->jited)
+diff --git a/arch/s390/include/asm/dwarf.h b/arch/s390/include/asm/dwarf.h
+index 4f21ae561e4dd..390906b8e386e 100644
+--- a/arch/s390/include/asm/dwarf.h
++++ b/arch/s390/include/asm/dwarf.h
+@@ -9,6 +9,7 @@
+ #define CFI_DEF_CFA_OFFSET .cfi_def_cfa_offset
+ #define CFI_ADJUST_CFA_OFFSET .cfi_adjust_cfa_offset
+ #define CFI_RESTORE .cfi_restore
++#define CFI_REL_OFFSET .cfi_rel_offset
+
+ #ifdef CONFIG_AS_CFI_VAL_OFFSET
+ #define CFI_VAL_OFFSET .cfi_val_offset
+diff --git a/arch/s390/kernel/vdso64/vdso_user_wrapper.S b/arch/s390/kernel/vdso64/vdso_user_wrapper.S
+index 57f62596e53b9..85247ef5a41b8 100644
+--- a/arch/s390/kernel/vdso64/vdso_user_wrapper.S
++++ b/arch/s390/kernel/vdso64/vdso_user_wrapper.S
+@@ -24,8 +24,10 @@ __kernel_\func:
+ CFI_DEF_CFA_OFFSET (STACK_FRAME_OVERHEAD + WRAPPER_FRAME_SIZE)
+ CFI_VAL_OFFSET 15, -STACK_FRAME_OVERHEAD
+ stg %r14,STACK_FRAME_OVERHEAD(%r15)
++ CFI_REL_OFFSET 14, STACK_FRAME_OVERHEAD
+ brasl %r14,__s390_vdso_\func
+ lg %r14,STACK_FRAME_OVERHEAD(%r15)
++ CFI_RESTORE 14
+ aghi %r15,WRAPPER_FRAME_SIZE
+ CFI_DEF_CFA_OFFSET STACK_FRAME_OVERHEAD
+ CFI_RESTORE 15
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index d17bb1ef63f41..0da54dc9430a9 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -2659,7 +2659,7 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr,
+ return 0;
+
+ start = pmd_val(*pmd) & HPAGE_MASK;
+- end = start + HPAGE_SIZE - 1;
++ end = start + HPAGE_SIZE;
+ __storage_key_init_range(start, end);
+ set_bit(PG_arch_1, &page->flags);
+ cond_resched();
+diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
+index 5f64f3d0fafbb..763469e518eec 100644
+--- a/arch/s390/mm/hugetlbpage.c
++++ b/arch/s390/mm/hugetlbpage.c
+@@ -139,7 +139,7 @@ static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
+ }
+
+ if (!test_and_set_bit(PG_arch_1, &page->flags))
+- __storage_key_init_range(paddr, paddr + size - 1);
++ __storage_key_init_range(paddr, paddr + size);
+ }
+
+ void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index ab88a27ff433f..06d5a7eeee81a 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -1808,7 +1808,7 @@ void x2apic_setup(void)
+ __x2apic_enable();
+ }
+
+-static __init void apic_set_fixmap(void);
++static __init void apic_set_fixmap(bool read_apic);
+
+ static __init void x2apic_disable(void)
+ {
+@@ -1830,7 +1830,12 @@ static __init void x2apic_disable(void)
+ }
+
+ __x2apic_disable();
+- apic_set_fixmap();
++ /*
++ * Don't reread the APIC ID as it was already done from
++ * check_x2apic() and the APIC driver still is a x2APIC variant,
++ * which fails to do the read after x2APIC was disabled.
++ */
++ apic_set_fixmap(false);
+ }
+
+ static __init void x2apic_enable(void)
+@@ -2095,13 +2100,14 @@ void __init init_apic_mappings(void)
+ }
+ }
+
+-static __init void apic_set_fixmap(void)
++static __init void apic_set_fixmap(bool read_apic)
+ {
+ set_fixmap_nocache(FIX_APIC_BASE, mp_lapic_addr);
+ apic_mmio_base = APIC_BASE;
+ apic_printk(APIC_VERBOSE, "mapped APIC to %16lx (%16lx)\n",
+ apic_mmio_base, mp_lapic_addr);
+- apic_read_boot_cpu_id(false);
++ if (read_apic)
++ apic_read_boot_cpu_id(false);
+ }
+
+ void __init register_lapic_address(unsigned long address)
+@@ -2111,7 +2117,7 @@ void __init register_lapic_address(unsigned long address)
+ mp_lapic_addr = address;
+
+ if (!x2apic_mode)
+- apic_set_fixmap();
++ apic_set_fixmap(true);
+ }
+
+ /*
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index d008a153a2b9f..7ed1a2085bd72 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -115,9 +115,9 @@
+ #define MAKE_RA_FOR_CALL(ra,ws) (((ra) & 0x3fffffff) | (ws) << 30)
+
+ /* Convert return address to a valid pc
+- * Note: We assume that the stack pointer is in the same 1GB ranges as the ra
++ * Note: 'text' is the address within the same 1GB range as the ra
+ */
+-#define MAKE_PC_FROM_RA(ra,sp) (((ra) & 0x3fffffff) | ((sp) & 0xc0000000))
++#define MAKE_PC_FROM_RA(ra, text) (((ra) & 0x3fffffff) | ((unsigned long)(text) & 0xc0000000))
+
+ #elif defined(__XTENSA_CALL0_ABI__)
+
+@@ -127,9 +127,9 @@
+ #define MAKE_RA_FOR_CALL(ra, ws) (ra)
+
+ /* Convert return address to a valid pc
+- * Note: We assume that the stack pointer is in the same 1GB ranges as the ra
++ * Note: 'text' is not used as 'ra' is always the full address
+ */
+-#define MAKE_PC_FROM_RA(ra, sp) (ra)
++#define MAKE_PC_FROM_RA(ra, text) (ra)
+
+ #else
+ #error Unsupported Xtensa ABI
+diff --git a/arch/xtensa/include/asm/ptrace.h b/arch/xtensa/include/asm/ptrace.h
+index a270467556dc8..86c70117371bb 100644
+--- a/arch/xtensa/include/asm/ptrace.h
++++ b/arch/xtensa/include/asm/ptrace.h
+@@ -87,7 +87,7 @@ struct pt_regs {
+ # define user_mode(regs) (((regs)->ps & 0x00000020)!=0)
+ # define instruction_pointer(regs) ((regs)->pc)
+ # define return_pointer(regs) (MAKE_PC_FROM_RA((regs)->areg[0], \
+- (regs)->areg[1]))
++ (regs)->pc))
+
+ # ifndef CONFIG_SMP
+ # define profile_pc(regs) instruction_pointer(regs)
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index a815577d25fd0..7bd66677f7b6d 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -47,6 +47,7 @@
+ #include 
+ #include 
+ #include 
++#include 
+ #include 
+
+ extern void ret_from_fork(void);
+@@ -380,7 +381,7 @@ unsigned long __get_wchan(struct task_struct *p)
+ int count = 0;
+
+ sp = p->thread.sp;
+- pc = MAKE_PC_FROM_RA(p->thread.ra, p->thread.sp);
++ pc = MAKE_PC_FROM_RA(p->thread.ra, _text);
+
+ do {
+ if (sp < stack_page + sizeof(struct task_struct) ||
+@@ -392,7 +393,7 @@ unsigned long __get_wchan(struct task_struct *p)
+
+ /* Stack layout: sp-4: ra, sp-3: sp' */
+
+- pc = MAKE_PC_FROM_RA(SPILL_SLOT(sp, 0), sp);
++ pc = MAKE_PC_FROM_RA(SPILL_SLOT(sp, 0), _text);
+ sp = SPILL_SLOT(sp, 1);
+ } while (count++ < 16);
+ return 0;
+diff --git a/arch/xtensa/kernel/stacktrace.c b/arch/xtensa/kernel/stacktrace.c
+index 831ffb648bda7..ed324fdf2a2f9 100644
+--- a/arch/xtensa/kernel/stacktrace.c
++++ b/arch/xtensa/kernel/stacktrace.c
+@@ -13,6 +13,7 @@
+ #include 
+
+ #include 
++#include 
+ #include 
+ #include 
+ #include 
+@@ -189,7 +190,7 @@ void walk_stackframe(unsigned long *sp,
+ if (a1 <= (unsigned long)sp)
+ break;
+
+- frame.pc = MAKE_PC_FROM_RA(a0, a1);
++ frame.pc = MAKE_PC_FROM_RA(a0, _text);
+ frame.sp = a1;
+
+ if (fn(&frame, data))
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 7ee8d85c2c68d..0dca77591d66c 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1347,7 +1347,7 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now)
+ {
+ struct ioc *ioc = iocg->ioc;
+ struct blkcg_gq *blkg = iocg_to_blkg(iocg);
+- u64 tdelta, delay, new_delay;
++ u64 tdelta, delay, new_delay, shift;
+ s64 vover, vover_pct;
+ u32 hwa;
+
+@@ -1362,8 +1362,9 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now)
+
+ /* calculate the current delay in effect - 1/2 every second */
+ tdelta = now->now - iocg->delay_at;
+- if (iocg->delay)
+- delay = iocg->delay >> div64_u64(tdelta, USEC_PER_SEC);
++ shift = div64_u64(tdelta, USEC_PER_SEC);
++ if (iocg->delay && shift < BITS_PER_LONG)
++ delay = iocg->delay >> shift;
+ else
+ delay = 0;
+
+@@ -1438,8 +1439,11 @@ static void iocg_pay_debt(struct ioc_gq *iocg, u64 abs_vpay,
+ lockdep_assert_held(&iocg->ioc->lock);
+ lockdep_assert_held(&iocg->waitq.lock);
+
+- /* make sure that nobody messed with @iocg */
+- WARN_ON_ONCE(list_empty(&iocg->active_list));
++ /*
++ * make sure that nobody messed with @iocg. Check iocg->pd.online
++ * to avoid warn when removing blkcg or disk.
++ */
++ WARN_ON_ONCE(list_empty(&iocg->active_list) && iocg->pd.online);
+ WARN_ON_ONCE(iocg->inuse > 1);
+
+ iocg->abs_vdebt -= min(abs_vpay, iocg->abs_vdebt);
+diff --git a/block/ioctl.c b/block/ioctl.c
+index d1d8e8391279a..68265f914c27b 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -89,7 +89,7 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ unsigned long arg)
+ {
+ uint64_t range[2];
+- uint64_t start, len;
++ uint64_t start, len, end;
+ struct inode *inode = bdev->bd_inode;
+ int err;
+
+@@ -110,7 +110,8 @@ static int blk_ioctl_discard(struct block_device *bdev, blk_mode_t mode,
+ if (len & 511)
+ return -EINVAL;
+
+- if (start + len > bdev_nr_bytes(bdev))
++ if (check_add_overflow(start, len, &end) ||
++ end > bdev_nr_bytes(bdev))
+ return -EINVAL;
+
+ filemap_invalidate_lock(inode->i_mapping);
+diff --git a/drivers/ata/sata_gemini.c b/drivers/ata/sata_gemini.c
+index 400b22ee99c33..4c270999ba3cc 100644
+--- a/drivers/ata/sata_gemini.c
++++ b/drivers/ata/sata_gemini.c
+@@ -200,7 +200,10 @@ int gemini_sata_start_bridge(struct sata_gemini *sg, unsigned int bridge)
+ pclk = sg->sata0_pclk;
+ else
+ pclk = sg->sata1_pclk;
+- clk_enable(pclk);
++ ret = clk_enable(pclk);
++ if (ret)
++ return ret;
++
+ msleep(10);
+
+ /* Do not keep clocking a bridge that is not online */
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index ea61577471994..c5b5241891a5a 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -2836,6 +2836,43 @@ int regmap_read(struct regmap *map, unsigned int reg, unsigned int *val)
+ }
+ EXPORT_SYMBOL_GPL(regmap_read);
+
++/**
++ * regmap_read_bypassed() - Read a value from a single register direct
++ * from the device, bypassing the cache
++ *
++ * @map: Register map to read from
++ * @reg: Register to be read from
++ * @val: Pointer to store read value
++ *
++ * A value of zero will be returned on success, a negative errno will
++ * be returned in error cases.
++ */
++int regmap_read_bypassed(struct regmap *map, unsigned int reg, unsigned int *val)
++{
++ int ret;
++ bool bypass, cache_only;
++
++ if (!IS_ALIGNED(reg, map->reg_stride))
++ return -EINVAL;
++
++ map->lock(map->lock_arg);
++
++ bypass = map->cache_bypass;
++ cache_only = map->cache_only;
++ map->cache_bypass = true;
++ map->cache_only = false;
++
++ ret = _regmap_read(map, reg, val);
++
++ map->cache_bypass = bypass;
++ map->cache_only = cache_only;
++
++ map->unlock(map->lock_arg);
++
++ return ret;
++}
++EXPORT_SYMBOL_GPL(regmap_read_bypassed);
++
+ /**
+ * regmap_raw_read() - Read raw data from the device
+ *
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index 5277090c6d6d7..638074992c829 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -99,7 +99,8 @@ static int qca_read_fw_build_info(struct hci_dev *hdev)
+ {
+ struct sk_buff *skb;
+ struct edl_event_hdr *edl;
+- char cmd, build_label[QCA_FW_BUILD_VER_LEN];
++ char *build_label;
++ char cmd;
+ int build_lbl_len, err = 0;
+
+ bt_dev_dbg(hdev, "QCA read fw build info");
+@@ -114,6 +115,11 @@ static int qca_read_fw_build_info(struct hci_dev *hdev)
+ return err;
+ }
+
++ if (skb->len < sizeof(*edl)) {
++ err = -EILSEQ;
++ goto out;
++ }
++
+ edl = (struct edl_event_hdr *)(skb->data);
+ if (!edl) {
+ bt_dev_err(hdev, "QCA read fw build info with no header");
+@@ -129,14 +135,25 @@ static int qca_read_fw_build_info(struct hci_dev *hdev)
+ goto out;
+ }
+
++ if (skb->len < sizeof(*edl) + 1) {
++ err = -EILSEQ;
++ goto out;
++ }
++
+ build_lbl_len = edl->data[0];
+- if (build_lbl_len <= QCA_FW_BUILD_VER_LEN - 1) {
+- memcpy(build_label, edl->data + 1, build_lbl_len);
+- *(build_label + build_lbl_len) = '\0';
++
++ if (skb->len < sizeof(*edl) + 1 + build_lbl_len) {
++ err = -EILSEQ;
++ goto out;
+ }
+
++ build_label = kstrndup(&edl->data[1], build_lbl_len, GFP_KERNEL);
++ if (!build_label)
++ goto out;
++
+ hci_set_fw_info(hdev, "%s", build_label);
+
++ kfree(build_label);
+ out:
+ kfree_skb(skb);
+ return err;
+@@ -205,6 +222,49 @@ static int qca_send_reset(struct hci_dev *hdev)
+ return 0;
+ }
+
++static int qca_read_fw_board_id(struct hci_dev *hdev, u16 *bid)
++{
++ u8 cmd;
++ struct sk_buff *skb;
++ struct edl_event_hdr *edl;
++ int err = 0;
++
++ cmd = EDL_GET_BID_REQ_CMD;
++ skb = __hci_cmd_sync_ev(hdev, EDL_PATCH_CMD_OPCODE, EDL_PATCH_CMD_LEN,
++ &cmd, 0, HCI_INIT_TIMEOUT);
++ if (IS_ERR(skb)) {
++ err = PTR_ERR(skb);
++ bt_dev_err(hdev, "Reading QCA board ID failed (%d)", err);
++ return err;
++ }
++
++ edl = skb_pull_data(skb, sizeof(*edl));
++ if (!edl) {
++ bt_dev_err(hdev, "QCA read board ID with no header");
++ err = -EILSEQ;
++ goto out;
++ }
++
++ if (edl->cresp != EDL_CMD_REQ_RES_EVT ||
++ edl->rtype != EDL_GET_BID_REQ_CMD) {
++ bt_dev_err(hdev, "QCA Wrong packet: %d %d", edl->cresp, edl->rtype);
++ err = -EIO;
++ goto out;
++ }
++
++ if (skb->len < 3) {
++ err = -EILSEQ;
++ goto out;
++ }
++
++ *bid = (edl->data[1] << 8) + edl->data[2];
++ bt_dev_dbg(hdev, "%s: bid = %x", __func__, *bid);
++
++out:
++ kfree_skb(skb);
++ return err;
++}
++
+ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
+ {
+ struct sk_buff *skb;
+@@ -227,9 +287,10 @@ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
+ }
+ EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
+
+-static void qca_tlv_check_data(struct hci_dev *hdev,
++static int qca_tlv_check_data(struct hci_dev *hdev,
+ struct qca_fw_config *config,
+- u8 *fw_data, enum qca_btsoc_type soc_type)
++ u8 *fw_data, size_t fw_size,
++ enum qca_btsoc_type soc_type)
+ {
+ const u8 *data;
+ u32 type_len;
+@@ -239,12 +300,16 @@ static void qca_tlv_check_data(struct hci_dev *hdev,
+ struct tlv_type_patch *tlv_patch;
+ struct tlv_type_nvm *tlv_nvm;
+ uint8_t nvm_baud_rate = config->user_baud_rate;
++ u8 type;
+
+ config->dnld_mode = QCA_SKIP_EVT_NONE;
+ config->dnld_type = QCA_SKIP_EVT_NONE;
+
+ switch (config->type) {
+ case ELF_TYPE_PATCH:
++ if (fw_size < 7)
++ return -EINVAL;
++
+ config->dnld_mode = QCA_SKIP_EVT_VSE_CC;
+ config->dnld_type = QCA_SKIP_EVT_VSE_CC;
+
+ bt_dev_dbg(hdev, "File version : 0x%x", fw_data[6]);
+ break;
+ case TLV_TYPE_PATCH:
++ if (fw_size < sizeof(struct tlv_type_hdr) + sizeof(struct tlv_type_patch))
++ return -EINVAL;
++
+ tlv = (struct tlv_type_hdr *)fw_data;
+ type_len = le32_to_cpu(tlv->type_len);
+ tlv_patch = (struct tlv_type_patch *)tlv->data;
+@@ -292,25 +360,64 @@ static void qca_tlv_check_data(struct hci_dev *hdev,
+ break;
+
+ case TLV_TYPE_NVM:
++ if (fw_size < sizeof(struct tlv_type_hdr))
++ return -EINVAL;
++
+ tlv = (struct tlv_type_hdr *)fw_data;
+
+ type_len = le32_to_cpu(tlv->type_len);
+- length = (type_len >> 8) & 0x00ffffff;
++ length = type_len >> 8;
++ type = type_len & 0xff;
+
+- BT_DBG("TLV Type\t\t : 0x%x", type_len & 0x000000ff);
++ /* Some NVM files have more than one set of tags, only parse
++ * the first set when it has type 2 for now. When there is
++ * more than one set there is an enclosing header of type 4.
++ */ ++ if (type == 4) { ++ if (fw_size < 2 * sizeof(struct tlv_type_hdr)) ++ return -EINVAL; ++ ++ tlv++; ++ ++ type_len = le32_to_cpu(tlv->type_len); ++ length = type_len >> 8; ++ type = type_len & 0xff; ++ } ++ ++ BT_DBG("TLV Type\t\t : 0x%x", type); + BT_DBG("Length\t\t : %d bytes", length); + ++ if (type != 2) ++ break; ++ ++ if (fw_size < length + (tlv->data - fw_data)) ++ return -EINVAL; ++ + idx = 0; + data = tlv->data; +- while (idx < length) { ++ while (idx < length - sizeof(struct tlv_type_nvm)) { + tlv_nvm = (struct tlv_type_nvm *)(data + idx); + + tag_id = le16_to_cpu(tlv_nvm->tag_id); + tag_len = le16_to_cpu(tlv_nvm->tag_len); + ++ if (length < idx + sizeof(struct tlv_type_nvm) + tag_len) ++ return -EINVAL; ++ + /* Update NVM tags as needed */ + switch (tag_id) { ++ case EDL_TAG_ID_BD_ADDR: ++ if (tag_len != sizeof(bdaddr_t)) ++ return -EINVAL; ++ ++ memcpy(&config->bdaddr, tlv_nvm->data, sizeof(bdaddr_t)); ++ ++ break; ++ + case EDL_TAG_ID_HCI: ++ if (tag_len < 3) ++ return -EINVAL; ++ + /* HCI transport layer parameters + * enabling software inband sleep + * onto controller side. +@@ -326,6 +433,9 @@ static void qca_tlv_check_data(struct hci_dev *hdev, + break; + + case EDL_TAG_ID_DEEP_SLEEP: ++ if (tag_len < 1) ++ return -EINVAL; ++ + /* Sleep enable mask + * enabling deep sleep feature on controller. 
+ */ +@@ -334,14 +444,16 @@ static void qca_tlv_check_data(struct hci_dev *hdev, + break; + } + +- idx += (sizeof(u16) + sizeof(u16) + 8 + tag_len); ++ idx += sizeof(struct tlv_type_nvm) + tag_len; + } + break; + + default: + BT_ERR("Unknown TLV type %d", config->type); +- break; ++ return -EINVAL; + } ++ ++ return 0; + } + + static int qca_tlv_send_segment(struct hci_dev *hdev, int seg_size, +@@ -491,7 +603,9 @@ static int qca_download_firmware(struct hci_dev *hdev, + memcpy(data, fw->data, size); + release_firmware(fw); + +- qca_tlv_check_data(hdev, config, data, soc_type); ++ ret = qca_tlv_check_data(hdev, config, data, size, soc_type); ++ if (ret) ++ goto out; + + segment = data; + remain = size; +@@ -574,14 +688,64 @@ int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr) + } + EXPORT_SYMBOL_GPL(qca_set_bdaddr_rome); + ++static int qca_check_bdaddr(struct hci_dev *hdev, const struct qca_fw_config *config) ++{ ++ struct hci_rp_read_bd_addr *bda; ++ struct sk_buff *skb; ++ int err; ++ ++ if (bacmp(&hdev->public_addr, BDADDR_ANY)) ++ return 0; ++ ++ skb = __hci_cmd_sync(hdev, HCI_OP_READ_BD_ADDR, 0, NULL, ++ HCI_INIT_TIMEOUT); ++ if (IS_ERR(skb)) { ++ err = PTR_ERR(skb); ++ bt_dev_err(hdev, "Failed to read device address (%d)", err); ++ return err; ++ } ++ ++ if (skb->len != sizeof(*bda)) { ++ bt_dev_err(hdev, "Device address length mismatch"); ++ kfree_skb(skb); ++ return -EIO; ++ } ++ ++ bda = (struct hci_rp_read_bd_addr *)skb->data; ++ if (!bacmp(&bda->bdaddr, &config->bdaddr)) ++ set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks); ++ ++ kfree_skb(skb); ++ ++ return 0; ++} ++ ++static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size, ++ struct qca_btsoc_version ver, u8 rom_ver, u16 bid) ++{ ++ const char *variant; ++ ++ /* hsp gf chip */ ++ if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID) ++ variant = "g"; ++ else ++ variant = ""; ++ ++ if (bid == 0x0) ++ snprintf(fwname, max_size, "qca/hpnv%02x%s.bin", 
rom_ver, variant); ++ else ++ snprintf(fwname, max_size, "qca/hpnv%02x%s.%x", rom_ver, variant, bid); ++} ++ + int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate, + enum qca_btsoc_type soc_type, struct qca_btsoc_version ver, + const char *firmware_name) + { +- struct qca_fw_config config; ++ struct qca_fw_config config = {}; + int err; + u8 rom_ver = 0; + u32 soc_ver; ++ u16 boardid = 0; + + bt_dev_dbg(hdev, "QCA setup on UART"); + +@@ -615,6 +779,10 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate, + snprintf(config.fwname, sizeof(config.fwname), + "qca/apbtfw%02x.tlv", rom_ver); + break; ++ case QCA_QCA2066: ++ snprintf(config.fwname, sizeof(config.fwname), ++ "qca/hpbtfw%02x.tlv", rom_ver); ++ break; + case QCA_QCA6390: + snprintf(config.fwname, sizeof(config.fwname), + "qca/htbtfw%02x.tlv", rom_ver); +@@ -649,6 +817,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate, + /* Give the controller some time to get ready to receive the NVM */ + msleep(10); + ++ if (soc_type == QCA_QCA2066) ++ qca_read_fw_board_id(hdev, &boardid); ++ + /* Download NVM configuration */ + config.type = TLV_TYPE_NVM; + if (firmware_name) { +@@ -671,6 +842,10 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate, + snprintf(config.fwname, sizeof(config.fwname), + "qca/apnv%02x.bin", rom_ver); + break; ++ case QCA_QCA2066: ++ qca_generate_hsp_nvm_name(config.fwname, ++ sizeof(config.fwname), ver, rom_ver, boardid); ++ break; + case QCA_QCA6390: + snprintf(config.fwname, sizeof(config.fwname), + "qca/htnv%02x.bin", rom_ver); +@@ -702,6 +877,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate, + + switch (soc_type) { + case QCA_WCN3991: ++ case QCA_QCA2066: + case QCA_QCA6390: + case QCA_WCN6750: + case QCA_WCN6855: +@@ -750,6 +926,10 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate, + break; + } + ++ err = qca_check_bdaddr(hdev, &config); ++ if (err) ++ return err; ++ + bt_dev_info(hdev, "QCA setup on UART is completed"); + + 
return 0; +diff --git a/drivers/bluetooth/btqca.h b/drivers/bluetooth/btqca.h +index 03bff5c0059de..215433fd76a10 100644 +--- a/drivers/bluetooth/btqca.h ++++ b/drivers/bluetooth/btqca.h +@@ -12,6 +12,7 @@ + #define EDL_PATCH_VER_REQ_CMD (0x19) + #define EDL_PATCH_TLV_REQ_CMD (0x1E) + #define EDL_GET_BUILD_INFO_CMD (0x20) ++#define EDL_GET_BID_REQ_CMD (0x23) + #define EDL_NVM_ACCESS_SET_REQ_CMD (0x01) + #define EDL_PATCH_CONFIG_CMD (0x28) + #define MAX_SIZE_PER_TLV_SEGMENT (243) +@@ -28,6 +29,7 @@ + #define EDL_PATCH_CONFIG_RES_EVT (0x00) + #define QCA_DISABLE_LOGGING_SUB_OP (0x14) + ++#define EDL_TAG_ID_BD_ADDR 2 + #define EDL_TAG_ID_HCI (17) + #define EDL_TAG_ID_DEEP_SLEEP (27) + +@@ -46,8 +48,8 @@ + #define get_soc_ver(soc_id, rom_ver) \ + ((le32_to_cpu(soc_id) << 16) | (le16_to_cpu(rom_ver))) + +-#define QCA_FW_BUILD_VER_LEN 255 +- ++#define QCA_HSP_GF_SOC_ID 0x1200 ++#define QCA_HSP_GF_SOC_MASK 0x0000ff00 + + enum qca_baudrate { + QCA_BAUDRATE_115200 = 0, +@@ -92,6 +94,7 @@ struct qca_fw_config { + uint8_t user_baud_rate; + enum qca_tlv_dnld_mode dnld_mode; + enum qca_tlv_dnld_mode dnld_type; ++ bdaddr_t bdaddr; + }; + + struct edl_event_hdr { +@@ -146,6 +149,7 @@ enum qca_btsoc_type { + QCA_WCN3990, + QCA_WCN3998, + QCA_WCN3991, ++ QCA_QCA2066, + QCA_QCA6390, + QCA_WCN6750, + QCA_WCN6855, +diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c +index 410f146e3f671..92341a87a5da7 100644 +--- a/drivers/bluetooth/hci_qca.c ++++ b/drivers/bluetooth/hci_qca.c +@@ -1845,6 +1845,10 @@ static int qca_setup(struct hci_uart *hu) + set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks); + + switch (soc_type) { ++ case QCA_QCA2066: ++ soc_name = "qca2066"; ++ break; ++ + case QCA_WCN3988: + case QCA_WCN3990: + case QCA_WCN3991: +@@ -1886,8 +1890,6 @@ static int qca_setup(struct hci_uart *hu) + case QCA_WCN6750: + case QCA_WCN6855: + case QCA_WCN7850: +- set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks); +- + qcadev = 
serdev_device_get_drvdata(hu->serdev); + if (qcadev->bdaddr_property_broken) + set_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks); +@@ -2043,6 +2045,11 @@ static const struct qca_device_data qca_soc_data_wcn3998 __maybe_unused = { + .num_vregs = 4, + }; + ++static const struct qca_device_data qca_soc_data_qca2066 __maybe_unused = { ++ .soc_type = QCA_QCA2066, ++ .num_vregs = 0, ++}; ++ + static const struct qca_device_data qca_soc_data_qca6390 __maybe_unused = { + .soc_type = QCA_QCA6390, + .num_vregs = 0, +@@ -2582,6 +2589,7 @@ static SIMPLE_DEV_PM_OPS(qca_pm_ops, qca_suspend, qca_resume); + + #ifdef CONFIG_OF + static const struct of_device_id qca_bluetooth_of_match[] = { ++ { .compatible = "qcom,qca2066-bt", .data = &qca_soc_data_qca2066}, + { .compatible = "qcom,qca6174-bt" }, + { .compatible = "qcom,qca6390-bt", .data = &qca_soc_data_qca6390}, + { .compatible = "qcom,qca9377-bt" }, +@@ -2599,6 +2607,7 @@ MODULE_DEVICE_TABLE(of, qca_bluetooth_of_match); + + #ifdef CONFIG_ACPI + static const struct acpi_device_id qca_bluetooth_acpi_match[] = { ++ { "QCOM2066", (kernel_ulong_t)&qca_soc_data_qca2066 }, + { "QCOM6390", (kernel_ulong_t)&qca_soc_data_qca6390 }, + { "DLA16390", (kernel_ulong_t)&qca_soc_data_qca6390 }, + { "DLB16390", (kernel_ulong_t)&qca_soc_data_qca6390 }, +diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c +index 4a67c0d4823cf..f8776065ad1f1 100644 +--- a/drivers/clk/clk.c ++++ b/drivers/clk/clk.c +@@ -4523,7 +4523,8 @@ void clk_unregister(struct clk *clk) + if (ops == &clk_nodrv_ops) { + pr_err("%s: unregistered clock: %s\n", __func__, + clk->core->name); +- goto unlock; ++ clk_prepare_unlock(); ++ return; + } + /* + * Assign empty clock ops for consumers that might still hold +@@ -4557,11 +4558,10 @@ void clk_unregister(struct clk *clk) + if (clk->core->protect_count) + pr_warn("%s: unregistering protected clock: %s\n", + __func__, clk->core->name); ++ clk_prepare_unlock(); + + kref_put(&clk->core->ref, __clk_release); + free_clk(clk); 
+-unlock: +- clk_prepare_unlock(); + } + EXPORT_SYMBOL_GPL(clk_unregister); + +@@ -4720,13 +4720,11 @@ void __clk_put(struct clk *clk) + if (clk->min_rate > 0 || clk->max_rate < ULONG_MAX) + clk_set_rate_range_nolock(clk, 0, ULONG_MAX); + +- owner = clk->core->owner; +- kref_put(&clk->core->ref, __clk_release); +- + clk_prepare_unlock(); + ++ owner = clk->core->owner; ++ kref_put(&clk->core->ref, __clk_release); + module_put(owner); +- + free_clk(clk); + } + +diff --git a/drivers/clk/qcom/clk-smd-rpm.c b/drivers/clk/qcom/clk-smd-rpm.c +index 0191fc0dd7dac..789903a1b3f2b 100644 +--- a/drivers/clk/qcom/clk-smd-rpm.c ++++ b/drivers/clk/qcom/clk-smd-rpm.c +@@ -758,6 +758,7 @@ static struct clk_smd_rpm *msm8976_clks[] = { + + static const struct rpm_smd_clk_desc rpm_clk_msm8976 = { + .clks = msm8976_clks, ++ .num_clks = ARRAY_SIZE(msm8976_clks), + .icc_clks = bimc_pcnoc_snoc_smmnoc_icc_clks, + .num_icc_clks = ARRAY_SIZE(bimc_pcnoc_snoc_smmnoc_icc_clks), + }; +diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c +index 8951ffc14ff52..6a4b2b9ef30a8 100644 +--- a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c ++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c +@@ -182,6 +182,8 @@ static struct ccu_nkm pll_mipi_clk = { + &ccu_nkm_ops, + CLK_SET_RATE_UNGATE | CLK_SET_RATE_PARENT), + .features = CCU_FEATURE_CLOSEST_RATE, ++ .min_rate = 500000000, ++ .max_rate = 1400000000, + }, + }; + +diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c +index 42568c6161814..892df807275c8 100644 +--- a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c ++++ b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c +@@ -1181,11 +1181,18 @@ static const u32 usb2_clk_regs[] = { + SUN50I_H6_USB3_CLK_REG, + }; + ++static struct ccu_mux_nb sun50i_h6_cpu_nb = { ++ .common = &cpux_clk.common, ++ .cm = &cpux_clk.mux, ++ .delay_us = 1, ++ .bypass_index = 0, /* index of 24 MHz oscillator */ ++}; ++ + static int sun50i_h6_ccu_probe(struct platform_device *pdev) + { + void 
__iomem *reg; ++ int i, ret; + u32 val; +- int i; + + reg = devm_platform_ioremap_resource(pdev, 0); + if (IS_ERR(reg)) +@@ -1252,7 +1259,15 @@ static int sun50i_h6_ccu_probe(struct platform_device *pdev) + val |= BIT(24); + writel(val, reg + SUN50I_H6_HDMI_CEC_CLK_REG); + +- return devm_sunxi_ccu_probe(&pdev->dev, reg, &sun50i_h6_ccu_desc); ++ ret = devm_sunxi_ccu_probe(&pdev->dev, reg, &sun50i_h6_ccu_desc); ++ if (ret) ++ return ret; ++ ++ /* Reparent CPU during PLL CPUX rate changes */ ++ ccu_mux_notifier_register(pll_cpux_clk.common.hw.clk, ++ &sun50i_h6_cpu_nb); ++ ++ return 0; + } + + static const struct of_device_id sun50i_h6_ccu_ids[] = { +diff --git a/drivers/clk/sunxi-ng/ccu_common.c b/drivers/clk/sunxi-ng/ccu_common.c +index 8babce55302f5..ac0091b4ce242 100644 +--- a/drivers/clk/sunxi-ng/ccu_common.c ++++ b/drivers/clk/sunxi-ng/ccu_common.c +@@ -44,6 +44,16 @@ bool ccu_is_better_rate(struct ccu_common *common, + unsigned long current_rate, + unsigned long best_rate) + { ++ unsigned long min_rate, max_rate; ++ ++ clk_hw_get_rate_range(&common->hw, &min_rate, &max_rate); ++ ++ if (current_rate > max_rate) ++ return false; ++ ++ if (current_rate < min_rate) ++ return false; ++ + if (common->features & CCU_FEATURE_CLOSEST_RATE) + return abs(current_rate - target_rate) < abs(best_rate - target_rate); + +@@ -122,6 +132,7 @@ static int sunxi_ccu_probe(struct sunxi_ccu *ccu, struct device *dev, + + for (i = 0; i < desc->hw_clks->num ; i++) { + struct clk_hw *hw = desc->hw_clks->hws[i]; ++ struct ccu_common *common = hw_to_ccu_common(hw); + const char *name; + + if (!hw) +@@ -136,6 +147,14 @@ static int sunxi_ccu_probe(struct sunxi_ccu *ccu, struct device *dev, + pr_err("Couldn't register clock %d - %s\n", i, name); + goto err_clk_unreg; + } ++ ++ if (common->max_rate) ++ clk_hw_set_rate_range(hw, common->min_rate, ++ common->max_rate); ++ else ++ WARN(common->min_rate, ++ "No max_rate, ignoring min_rate of clock %d - %s\n", ++ i, name); + } + + ret = 
of_clk_add_hw_provider(node, of_clk_hw_onecell_get, +diff --git a/drivers/clk/sunxi-ng/ccu_common.h b/drivers/clk/sunxi-ng/ccu_common.h +index 942a72c094374..329734f8cf42b 100644 +--- a/drivers/clk/sunxi-ng/ccu_common.h ++++ b/drivers/clk/sunxi-ng/ccu_common.h +@@ -31,6 +31,9 @@ struct ccu_common { + u16 lock_reg; + u32 prediv; + ++ unsigned long min_rate; ++ unsigned long max_rate; ++ + unsigned long features; + spinlock_t *lock; + struct clk_hw hw; +diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c +index ad7b55dd9596d..26f1dedc92d38 100644 +--- a/drivers/dma/idxd/cdev.c ++++ b/drivers/dma/idxd/cdev.c +@@ -400,6 +400,18 @@ static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma) + int rc; + + dev_dbg(&pdev->dev, "%s called\n", __func__); ++ ++ /* ++ * Due to an erratum in some of the devices supported by the driver, ++ * direct user submission to the device can be unsafe. ++ * (See the INTEL-SA-01084 security advisory) ++ * ++ * For the devices that exhibit this behavior, require that the user ++ * has CAP_SYS_RAWIO capabilities. ++ */ ++ if (!idxd->user_submission_safe && !capable(CAP_SYS_RAWIO)) ++ return -EPERM; ++ + rc = check_vma(wq, vma, __func__); + if (rc < 0) + return rc; +@@ -414,6 +426,70 @@ static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma) + vma->vm_page_prot); + } + ++static int idxd_submit_user_descriptor(struct idxd_user_context *ctx, ++ struct dsa_hw_desc __user *udesc) ++{ ++ struct idxd_wq *wq = ctx->wq; ++ struct idxd_dev *idxd_dev = &wq->idxd->idxd_dev; ++ const uint64_t comp_addr_align = is_dsa_dev(idxd_dev) ? 0x20 : 0x40; ++ void __iomem *portal = idxd_wq_portal_addr(wq); ++ struct dsa_hw_desc descriptor __aligned(64); ++ int rc; ++ ++ rc = copy_from_user(&descriptor, udesc, sizeof(descriptor)); ++ if (rc) ++ return -EFAULT; ++ ++ /* ++ * DSA devices are capable of indirect ("batch") command submission. 
++ * On devices where direct user submissions are not safe, we cannot ++ * allow this since there is no good way for us to verify these ++ * indirect commands. ++ */ ++ if (is_dsa_dev(idxd_dev) && descriptor.opcode == DSA_OPCODE_BATCH && ++ !wq->idxd->user_submission_safe) ++ return -EINVAL; ++ /* ++ * As per the programming specification, the completion address must be ++ * aligned to 32 or 64 bytes. If this is violated the hardware ++ * engine can get very confused (security issue). ++ */ ++ if (!IS_ALIGNED(descriptor.completion_addr, comp_addr_align)) ++ return -EINVAL; ++ ++ if (wq_dedicated(wq)) ++ iosubmit_cmds512(portal, &descriptor, 1); ++ else { ++ descriptor.priv = 0; ++ descriptor.pasid = ctx->pasid; ++ rc = idxd_enqcmds(wq, portal, &descriptor); ++ if (rc < 0) ++ return rc; ++ } ++ ++ return 0; ++} ++ ++static ssize_t idxd_cdev_write(struct file *filp, const char __user *buf, size_t len, ++ loff_t *unused) ++{ ++ struct dsa_hw_desc __user *udesc = (struct dsa_hw_desc __user *)buf; ++ struct idxd_user_context *ctx = filp->private_data; ++ ssize_t written = 0; ++ int i; ++ ++ for (i = 0; i < len/sizeof(struct dsa_hw_desc); i++) { ++ int rc = idxd_submit_user_descriptor(ctx, udesc + i); ++ ++ if (rc) ++ return written ? 
written : rc; ++ ++ written += sizeof(struct dsa_hw_desc); ++ } ++ ++ return written; ++} ++ + static __poll_t idxd_cdev_poll(struct file *filp, + struct poll_table_struct *wait) + { +@@ -436,6 +512,7 @@ static const struct file_operations idxd_cdev_fops = { + .open = idxd_cdev_open, + .release = idxd_cdev_release, + .mmap = idxd_cdev_mmap, ++ .write = idxd_cdev_write, + .poll = idxd_cdev_poll, + }; + +diff --git a/drivers/dma/idxd/idxd.h b/drivers/dma/idxd/idxd.h +index df62dd1291189..bea10c5cdb76b 100644 +--- a/drivers/dma/idxd/idxd.h ++++ b/drivers/dma/idxd/idxd.h +@@ -275,6 +275,7 @@ struct idxd_driver_data { + int evl_cr_off; + int cr_status_off; + int cr_result_off; ++ bool user_submission_safe; + }; + + struct idxd_evl { +@@ -360,6 +361,8 @@ struct idxd_device { + + struct dentry *dbgfs_dir; + struct dentry *dbgfs_evl_file; ++ ++ bool user_submission_safe; + }; + + static inline unsigned int evl_ent_size(struct idxd_device *idxd) +diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c +index 2e323c9b2068d..786afb256b6e0 100644 +--- a/drivers/dma/idxd/init.c ++++ b/drivers/dma/idxd/init.c +@@ -47,6 +47,7 @@ static struct idxd_driver_data idxd_driver_data[] = { + .align = 32, + .dev_type = &dsa_device_type, + .evl_cr_off = offsetof(struct dsa_evl_entry, cr), ++ .user_submission_safe = false, /* See INTEL-SA-01084 security advisory */ + .cr_status_off = offsetof(struct dsa_completion_record, status), + .cr_result_off = offsetof(struct dsa_completion_record, result), + }, +@@ -57,6 +58,7 @@ static struct idxd_driver_data idxd_driver_data[] = { + .align = 64, + .dev_type = &iax_device_type, + .evl_cr_off = offsetof(struct iax_evl_entry, cr), ++ .user_submission_safe = false, /* See INTEL-SA-01084 security advisory */ + .cr_status_off = offsetof(struct iax_completion_record, status), + .cr_result_off = offsetof(struct iax_completion_record, error_code), + }, +@@ -767,6 +769,8 @@ static int idxd_pci_probe(struct pci_dev *pdev, const struct pci_device_id 
*id) + dev_info(&pdev->dev, "Intel(R) Accelerator Device (v%x)\n", + idxd->hw.version); + ++ idxd->user_submission_safe = data->user_submission_safe; ++ + return 0; + + err_dev_register: +diff --git a/drivers/dma/idxd/registers.h b/drivers/dma/idxd/registers.h +index 7b54a3939ea13..cfbcd1adb1d1c 100644 +--- a/drivers/dma/idxd/registers.h ++++ b/drivers/dma/idxd/registers.h +@@ -6,9 +6,6 @@ + #include + + /* PCI Config */ +-#define PCI_DEVICE_ID_INTEL_DSA_SPR0 0x0b25 +-#define PCI_DEVICE_ID_INTEL_IAX_SPR0 0x0cfe +- + #define DEVICE_VERSION_1 0x100 + #define DEVICE_VERSION_2 0x200 + +diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c +index 7caba90d85b31..1fd5a93045f79 100644 +--- a/drivers/dma/idxd/sysfs.c ++++ b/drivers/dma/idxd/sysfs.c +@@ -1197,12 +1197,35 @@ static ssize_t wq_enqcmds_retries_store(struct device *dev, struct device_attrib + static struct device_attribute dev_attr_wq_enqcmds_retries = + __ATTR(enqcmds_retries, 0644, wq_enqcmds_retries_show, wq_enqcmds_retries_store); + ++static ssize_t op_cap_show_common(struct device *dev, char *buf, unsigned long *opcap_bmap) ++{ ++ ssize_t pos; ++ int i; ++ ++ pos = 0; ++ for (i = IDXD_MAX_OPCAP_BITS/64 - 1; i >= 0; i--) { ++ unsigned long val = opcap_bmap[i]; ++ ++ /* On systems where direct user submissions are not safe, we need to clear out ++ * the BATCH capability from the capability mask in sysfs since we cannot support ++ * that command on such systems. ++ */ ++ if (i == DSA_OPCODE_BATCH/64 && !confdev_to_idxd(dev)->user_submission_safe) ++ clear_bit(DSA_OPCODE_BATCH % 64, &val); ++ ++ pos += sysfs_emit_at(buf, pos, "%*pb", 64, &val); ++ pos += sysfs_emit_at(buf, pos, "%c", i == 0 ? 
'\n' : ','); ++ } ++ ++ return pos; ++} ++ + static ssize_t wq_op_config_show(struct device *dev, + struct device_attribute *attr, char *buf) + { + struct idxd_wq *wq = confdev_to_wq(dev); + +- return sysfs_emit(buf, "%*pb\n", IDXD_MAX_OPCAP_BITS, wq->opcap_bmap); ++ return op_cap_show_common(dev, buf, wq->opcap_bmap); + } + + static int idxd_verify_supported_opcap(struct idxd_device *idxd, unsigned long *opmask) +@@ -1421,7 +1444,7 @@ static ssize_t op_cap_show(struct device *dev, + { + struct idxd_device *idxd = confdev_to_idxd(dev); + +- return sysfs_emit(buf, "%*pb\n", IDXD_MAX_OPCAP_BITS, idxd->opcap_bmap); ++ return op_cap_show_common(dev, buf, idxd->opcap_bmap); + } + static DEVICE_ATTR_RO(op_cap); + +diff --git a/drivers/firewire/nosy.c b/drivers/firewire/nosy.c +index b0d671db178a8..ea31ac7ac1ca9 100644 +--- a/drivers/firewire/nosy.c ++++ b/drivers/firewire/nosy.c +@@ -148,10 +148,12 @@ packet_buffer_get(struct client *client, char __user *data, size_t user_length) + if (atomic_read(&buffer->size) == 0) + return -ENODEV; + +- /* FIXME: Check length <= user_length. 
*/ ++ length = buffer->head->length; ++ ++ if (length > user_length) ++ return 0; + + end = buffer->data + buffer->capacity; +- length = buffer->head->length; + + if (&buffer->head->data[length] < end) { + if (copy_to_user(data, buffer->head->data, length)) +diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c +index 7bc71f4be64a0..b9ae0340b8a70 100644 +--- a/drivers/firewire/ohci.c ++++ b/drivers/firewire/ohci.c +@@ -1556,6 +1556,8 @@ static int handle_at_packet(struct context *context, + #define HEADER_GET_DATA_LENGTH(q) (((q) >> 16) & 0xffff) + #define HEADER_GET_EXTENDED_TCODE(q) (((q) >> 0) & 0xffff) + ++static u32 get_cycle_time(struct fw_ohci *ohci); ++ + static void handle_local_rom(struct fw_ohci *ohci, + struct fw_packet *packet, u32 csr) + { +@@ -1580,6 +1582,8 @@ static void handle_local_rom(struct fw_ohci *ohci, + (void *) ohci->config_rom + i, length); + } + ++ // Timestamping on behalf of the hardware. ++ response.timestamp = cycle_time_to_ohci_tstamp(get_cycle_time(ohci)); + fw_core_handle_response(&ohci->card, &response); + } + +@@ -1628,6 +1632,8 @@ static void handle_local_lock(struct fw_ohci *ohci, + fw_fill_response(&response, packet->header, RCODE_BUSY, NULL, 0); + + out: ++ // Timestamping on behalf of the hardware. 
++ response.timestamp = cycle_time_to_ohci_tstamp(get_cycle_time(ohci)); + fw_core_handle_response(&ohci->card, &response); + } + +@@ -1670,8 +1676,6 @@ static void handle_local_request(struct context *ctx, struct fw_packet *packet) + } + } + +-static u32 get_cycle_time(struct fw_ohci *ohci); +- + static void at_context_transmit(struct context *ctx, struct fw_packet *packet) + { + unsigned long flags; +@@ -2060,6 +2064,8 @@ static void bus_reset_work(struct work_struct *work) + + ohci->generation = generation; + reg_write(ohci, OHCI1394_IntEventClear, OHCI1394_busReset); ++ if (param_debug & OHCI_PARAM_DEBUG_BUSRESETS) ++ reg_write(ohci, OHCI1394_IntMaskSet, OHCI1394_busReset); + + if (ohci->quirks & QUIRK_RESET_PACKET) + ohci->request_generation = generation; +@@ -2125,12 +2131,14 @@ static irqreturn_t irq_handler(int irq, void *data) + return IRQ_NONE; + + /* +- * busReset and postedWriteErr must not be cleared yet ++ * busReset and postedWriteErr events must not be cleared yet + * (OHCI 1.1 clauses 7.2.3.2 and 13.2.8.1) + */ + reg_write(ohci, OHCI1394_IntEventClear, + event & ~(OHCI1394_busReset | OHCI1394_postedWriteErr)); + log_irqs(ohci, event); ++ if (event & OHCI1394_busReset) ++ reg_write(ohci, OHCI1394_IntMaskClear, OHCI1394_busReset); + + if (event & OHCI1394_selfIDComplete) + queue_work(selfid_workqueue, &ohci->bus_reset_work); +diff --git a/drivers/gpio/gpio-crystalcove.c b/drivers/gpio/gpio-crystalcove.c +index 1ee62cd58582b..25db014494a4d 100644 +--- a/drivers/gpio/gpio-crystalcove.c ++++ b/drivers/gpio/gpio-crystalcove.c +@@ -92,7 +92,7 @@ static inline int to_reg(int gpio, enum ctrl_register reg_type) + case 0x5e: + return GPIOPANELCTL; + default: +- return -EOPNOTSUPP; ++ return -ENOTSUPP; + } + } + +diff --git a/drivers/gpio/gpio-lpc32xx.c b/drivers/gpio/gpio-lpc32xx.c +index 5ef8af8249806..c097e310c9e84 100644 +--- a/drivers/gpio/gpio-lpc32xx.c ++++ b/drivers/gpio/gpio-lpc32xx.c +@@ -529,6 +529,7 @@ static const struct of_device_id 
lpc32xx_gpio_of_match[] = { + { .compatible = "nxp,lpc3220-gpio", }, + { }, + }; ++MODULE_DEVICE_TABLE(of, lpc32xx_gpio_of_match); + + static struct platform_driver lpc32xx_gpio_driver = { + .driver = { +diff --git a/drivers/gpio/gpio-wcove.c b/drivers/gpio/gpio-wcove.c +index c18b6b47384f1..94ca9d03c0949 100644 +--- a/drivers/gpio/gpio-wcove.c ++++ b/drivers/gpio/gpio-wcove.c +@@ -104,7 +104,7 @@ static inline int to_reg(int gpio, enum ctrl_register type) + unsigned int reg = type == CTRL_IN ? GPIO_IN_CTRL_BASE : GPIO_OUT_CTRL_BASE; + + if (gpio >= WCOVE_GPIO_NUM) +- return -EOPNOTSUPP; ++ return -ENOTSUPP; + + return reg + gpio; + } +diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c +index 84125e55de101..b4b71e68b90de 100644 +--- a/drivers/gpio/gpiolib-cdev.c ++++ b/drivers/gpio/gpiolib-cdev.c +@@ -5,6 +5,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -21,6 +22,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -461,6 +463,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip) + + /** + * struct line - contains the state of a requested line ++ * @node: to store the object in supinfo_tree if supplemental + * @desc: the GPIO descriptor for this line. + * @req: the corresponding line request + * @irq: the interrupt triggered in response to events on this GPIO +@@ -473,6 +476,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip) + * @line_seqno: the seqno for the current edge event in the sequence of + * events for this line. 
+ * @work: the worker that implements software debouncing ++ * @debounce_period_us: the debounce period in microseconds + * @sw_debounced: flag indicating if the software debouncer is active + * @level: the current debounced physical level of the line + * @hdesc: the Hardware Timestamp Engine (HTE) descriptor +@@ -481,6 +485,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip) + * @last_seqno: the last sequence number before debounce period expires + */ + struct line { ++ struct rb_node node; + struct gpio_desc *desc; + /* + * -- edge detector specific fields -- +@@ -514,6 +519,15 @@ struct line { + * -- debouncer specific fields -- + */ + struct delayed_work work; ++ /* ++ * debounce_period_us is accessed by debounce_irq_handler() and ++ * process_hw_ts() which are disabled when modified by ++ * debounce_setup(), edge_detector_setup() or edge_detector_stop() ++ * or can live with a stale version when updated by ++ * edge_detector_update(). ++ * The modifying functions are themselves mutually exclusive. ++ */ ++ unsigned int debounce_period_us; + /* + * sw_debounce is accessed by linereq_set_config(), which is the + * only setter, and linereq_get_values(), which can live with a +@@ -546,6 +560,17 @@ struct line { + #endif /* CONFIG_HTE */ + }; + ++/* ++ * a rbtree of the struct lines containing supplemental info. ++ * Used to populate gpio_v2_line_info with cdev specific fields not contained ++ * in the struct gpio_desc. ++ * A line is determined to contain supplemental information by ++ * line_has_supinfo(). ++ */ ++static struct rb_root supinfo_tree = RB_ROOT; ++/* covers supinfo_tree */ ++static DEFINE_SPINLOCK(supinfo_lock); ++ + /** + * struct linereq - contains the state of a userspace line request + * @gdev: the GPIO device the line request pertains to +@@ -559,7 +584,8 @@ struct line { + * this line request. Note that this is not used when @num_lines is 1, as + * the line_seqno is then the same and is cheaper to calculate. 
+ * @config_mutex: mutex for serializing ioctl() calls to ensure consistency +- * of configuration, particularly multi-step accesses to desc flags. ++ * of configuration, particularly multi-step accesses to desc flags and ++ * changes to supinfo status. + * @lines: the lines held by this line request, with @num_lines elements. + */ + struct linereq { +@@ -575,6 +601,103 @@ struct linereq { + struct line lines[]; + }; + ++static void supinfo_insert(struct line *line) ++{ ++ struct rb_node **new = &(supinfo_tree.rb_node), *parent = NULL; ++ struct line *entry; ++ ++ guard(spinlock)(&supinfo_lock); ++ ++ while (*new) { ++ entry = container_of(*new, struct line, node); ++ ++ parent = *new; ++ if (line->desc < entry->desc) { ++ new = &((*new)->rb_left); ++ } else if (line->desc > entry->desc) { ++ new = &((*new)->rb_right); ++ } else { ++ /* this should never happen */ ++ WARN(1, "duplicate line inserted"); ++ return; ++ } ++ } ++ ++ rb_link_node(&line->node, parent, new); ++ rb_insert_color(&line->node, &supinfo_tree); ++} ++ ++static void supinfo_erase(struct line *line) ++{ ++ guard(spinlock)(&supinfo_lock); ++ ++ rb_erase(&line->node, &supinfo_tree); ++} ++ ++static struct line *supinfo_find(struct gpio_desc *desc) ++{ ++ struct rb_node *node = supinfo_tree.rb_node; ++ struct line *line; ++ ++ while (node) { ++ line = container_of(node, struct line, node); ++ if (desc < line->desc) ++ node = node->rb_left; ++ else if (desc > line->desc) ++ node = node->rb_right; ++ else ++ return line; ++ } ++ return NULL; ++} ++ ++static void supinfo_to_lineinfo(struct gpio_desc *desc, ++ struct gpio_v2_line_info *info) ++{ ++ struct gpio_v2_line_attribute *attr; ++ struct line *line; ++ ++ guard(spinlock)(&supinfo_lock); ++ ++ line = supinfo_find(desc); ++ if (!line) ++ return; ++ ++ attr = &info->attrs[info->num_attrs]; ++ attr->id = GPIO_V2_LINE_ATTR_ID_DEBOUNCE; ++ attr->debounce_period_us = READ_ONCE(line->debounce_period_us); ++ info->num_attrs++; ++} ++ ++static inline bool 
line_has_supinfo(struct line *line) ++{ ++ return READ_ONCE(line->debounce_period_us); ++} ++ ++/* ++ * Checks line_has_supinfo() before and after the change to avoid unnecessary ++ * supinfo_tree access. ++ * Called indirectly by linereq_create() or linereq_set_config() so line ++ * is already protected from concurrent changes. ++ */ ++static void line_set_debounce_period(struct line *line, ++ unsigned int debounce_period_us) ++{ ++ bool was_suppl = line_has_supinfo(line); ++ ++ WRITE_ONCE(line->debounce_period_us, debounce_period_us); ++ ++ /* if supinfo status is unchanged then we're done */ ++ if (line_has_supinfo(line) == was_suppl) ++ return; ++ ++ /* supinfo status has changed, so update the tree */ ++ if (was_suppl) ++ supinfo_erase(line); ++ else ++ supinfo_insert(line); ++} ++ + #define GPIO_V2_LINE_BIAS_FLAGS \ + (GPIO_V2_LINE_FLAG_BIAS_PULL_UP | \ + GPIO_V2_LINE_FLAG_BIAS_PULL_DOWN | \ +@@ -742,7 +865,7 @@ static enum hte_return process_hw_ts(struct hte_ts_data *ts, void *p) + line->total_discard_seq++; + line->last_seqno = ts->seq; + mod_delayed_work(system_wq, &line->work, +- usecs_to_jiffies(READ_ONCE(line->desc->debounce_period_us))); ++ usecs_to_jiffies(READ_ONCE(line->debounce_period_us))); + } else { + if (unlikely(ts->seq < line->line_seqno)) + return HTE_CB_HANDLED; +@@ -883,7 +1006,7 @@ static irqreturn_t debounce_irq_handler(int irq, void *p) + struct line *line = p; + + mod_delayed_work(system_wq, &line->work, +- usecs_to_jiffies(READ_ONCE(line->desc->debounce_period_us))); ++ usecs_to_jiffies(READ_ONCE(line->debounce_period_us))); + + return IRQ_HANDLED; + } +@@ -966,7 +1089,7 @@ static int debounce_setup(struct line *line, unsigned int debounce_period_us) + /* try hardware */ + ret = gpiod_set_debounce(line->desc, debounce_period_us); + if (!ret) { +- WRITE_ONCE(line->desc->debounce_period_us, debounce_period_us); ++ line_set_debounce_period(line, debounce_period_us); + return ret; + } + if (ret != -ENOTSUPP) +@@ -1051,8 +1174,7 @@ static 
void edge_detector_stop(struct line *line) + cancel_delayed_work_sync(&line->work); + WRITE_ONCE(line->sw_debounced, 0); + WRITE_ONCE(line->edflags, 0); +- if (line->desc) +- WRITE_ONCE(line->desc->debounce_period_us, 0); ++ line_set_debounce_period(line, 0); + /* do not change line->level - see comment in debounced_value() */ + } + +@@ -1078,7 +1200,7 @@ static int edge_detector_setup(struct line *line, + ret = debounce_setup(line, debounce_period_us); + if (ret) + return ret; +- WRITE_ONCE(line->desc->debounce_period_us, debounce_period_us); ++ line_set_debounce_period(line, debounce_period_us); + } + + /* detection disabled or sw debouncer will provide edge detection */ +@@ -1121,17 +1243,31 @@ static int edge_detector_update(struct line *line, + struct gpio_v2_line_config *lc, + unsigned int line_idx, u64 edflags) + { ++ u64 eflags; ++ int ret; + u64 active_edflags = READ_ONCE(line->edflags); + unsigned int debounce_period_us = + gpio_v2_line_config_debounce_period(lc, line_idx); + + if ((active_edflags == edflags) && +- (READ_ONCE(line->desc->debounce_period_us) == debounce_period_us)) ++ (READ_ONCE(line->debounce_period_us) == debounce_period_us)) + return 0; + + /* sw debounced and still will be...*/ + if (debounce_period_us && READ_ONCE(line->sw_debounced)) { +- WRITE_ONCE(line->desc->debounce_period_us, debounce_period_us); ++ line_set_debounce_period(line, debounce_period_us); ++ /* ++ * ensure event fifo is initialised if edge detection ++ * is now enabled. 
++ */ ++ eflags = edflags & GPIO_V2_LINE_EDGE_FLAGS; ++ if (eflags && !kfifo_initialized(&line->req->events)) { ++ ret = kfifo_alloc(&line->req->events, ++ line->req->event_buffer_size, ++ GFP_KERNEL); ++ if (ret) ++ return ret; ++ } + return 0; + } + +@@ -1606,6 +1742,7 @@ static ssize_t linereq_read(struct file *file, char __user *buf, + + static void linereq_free(struct linereq *lr) + { ++ struct line *line; + unsigned int i; + + if (lr->device_unregistered_nb.notifier_call) +@@ -1613,10 +1750,14 @@ static void linereq_free(struct linereq *lr) + &lr->device_unregistered_nb); + + for (i = 0; i < lr->num_lines; i++) { +- if (lr->lines[i].desc) { +- edge_detector_stop(&lr->lines[i]); +- gpiod_free(lr->lines[i].desc); +- } ++ line = &lr->lines[i]; ++ if (!line->desc) ++ continue; ++ ++ edge_detector_stop(line); ++ if (line_has_supinfo(line)) ++ supinfo_erase(line); ++ gpiod_free(line->desc); + } + kfifo_free(&lr->events); + kfree(lr->label); +@@ -2316,8 +2457,6 @@ static void gpio_desc_to_lineinfo(struct gpio_desc *desc, + struct gpio_chip *gc = desc->gdev->chip; + bool ok_for_pinctrl; + unsigned long flags; +- u32 debounce_period_us; +- unsigned int num_attrs = 0; + + memset(info, 0, sizeof(*info)); + info->offset = gpio_chip_hwgpio(desc); +@@ -2384,14 +2523,6 @@ static void gpio_desc_to_lineinfo(struct gpio_desc *desc, + else if (test_bit(FLAG_EVENT_CLOCK_HTE, &desc->flags)) + info->flags |= GPIO_V2_LINE_FLAG_EVENT_CLOCK_HTE; + +- debounce_period_us = READ_ONCE(desc->debounce_period_us); +- if (debounce_period_us) { +- info->attrs[num_attrs].id = GPIO_V2_LINE_ATTR_ID_DEBOUNCE; +- info->attrs[num_attrs].debounce_period_us = debounce_period_us; +- num_attrs++; +- } +- info->num_attrs = num_attrs; +- + spin_unlock_irqrestore(&gpio_lock, flags); + } + +@@ -2498,6 +2629,7 @@ static int lineinfo_get(struct gpio_chardev_data *cdev, void __user *ip, + return -EBUSY; + } + gpio_desc_to_lineinfo(desc, &lineinfo); ++ supinfo_to_lineinfo(desc, &lineinfo); + + if 
(copy_to_user(ip, &lineinfo, sizeof(lineinfo))) { + if (watch) +@@ -2596,6 +2728,7 @@ static int lineinfo_changed_notify(struct notifier_block *nb, + chg.event_type = action; + chg.timestamp_ns = ktime_get_ns(); + gpio_desc_to_lineinfo(desc, &chg.info); ++ supinfo_to_lineinfo(desc, &chg.info); + + ret = kfifo_in_spinlocked(&cdev->events, &chg, 1, &cdev->wait.lock); + if (ret) +@@ -2816,11 +2949,11 @@ static int gpio_chrdev_release(struct inode *inode, struct file *file) + struct gpio_chardev_data *cdev = file->private_data; + struct gpio_device *gdev = cdev->gdev; + +- bitmap_free(cdev->watched_lines); + blocking_notifier_chain_unregister(&gdev->device_notifier, + &cdev->device_unregistered_nb); + blocking_notifier_chain_unregister(&gdev->line_state_notifier, + &cdev->lineinfo_changed_nb); ++ bitmap_free(cdev->watched_lines); + gpio_device_put(gdev); + kfree(cdev); + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +index 78476bc75b4e1..de9d7f3dc2336 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +@@ -300,12 +300,15 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job) + dma_fence_set_error(finished, -ECANCELED); + + if (finished->error < 0) { +- DRM_INFO("Skip scheduling IBs!\n"); ++ dev_dbg(adev->dev, "Skip scheduling IBs in ring(%s)", ++ ring->name); + } else { + r = amdgpu_ib_schedule(ring, job->num_ibs, job->ibs, job, + &fence); + if (r) +- DRM_ERROR("Error scheduling IBs (%d)\n", r); ++ dev_err(adev->dev, ++ "Error scheduling IBs (%d) in ring(%s)", r, ++ ring->name); + } + + job->job_run_counter++; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +index 361f2cc94e8e5..a348d320575e0 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +@@ -1249,14 +1249,18 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer, + * 
amdgpu_bo_move_notify - notification about a memory move + * @bo: pointer to a buffer object + * @evict: if this move is evicting the buffer from the graphics address space ++ * @new_mem: new resource for backing the BO + * + * Marks the corresponding &amdgpu_bo buffer object as invalid, also performs + * bookkeeping. + * TTM driver callback which is called when ttm moves a buffer. + */ +-void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict) ++void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, ++ bool evict, ++ struct ttm_resource *new_mem) + { + struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev); ++ struct ttm_resource *old_mem = bo->resource; + struct amdgpu_bo *abo; + + if (!amdgpu_bo_is_amdgpu_bo(bo)) +@@ -1268,12 +1272,12 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict) + amdgpu_bo_kunmap(abo); + + if (abo->tbo.base.dma_buf && !abo->tbo.base.import_attach && +- bo->resource->mem_type != TTM_PL_SYSTEM) ++ old_mem && old_mem->mem_type != TTM_PL_SYSTEM) + dma_buf_move_notify(abo->tbo.base.dma_buf); + +- /* remember the eviction */ +- if (evict) +- atomic64_inc(&adev->num_evictions); ++ /* move_notify is called before move happens */ ++ trace_amdgpu_bo_move(abo, new_mem ? new_mem->mem_type : -1, ++ old_mem ? 
old_mem->mem_type : -1); + } + + void amdgpu_bo_get_memory(struct amdgpu_bo *bo, +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +index fa03d9e4874cc..bc42ccbde659a 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +@@ -328,7 +328,9 @@ int amdgpu_bo_set_metadata (struct amdgpu_bo *bo, void *metadata, + int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer, + size_t buffer_size, uint32_t *metadata_size, + uint64_t *flags); +-void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict); ++void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, ++ bool evict, ++ struct ttm_resource *new_mem); + void amdgpu_bo_release_notify(struct ttm_buffer_object *bo); + vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo); + void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence, +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +index d1687b5725693..8c3fb1562ffef 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +@@ -424,7 +424,7 @@ bool amdgpu_res_cpu_visible(struct amdgpu_device *adev, + return false; + + if (res->mem_type == TTM_PL_SYSTEM || res->mem_type == TTM_PL_TT || +- res->mem_type == AMDGPU_PL_PREEMPT) ++ res->mem_type == AMDGPU_PL_PREEMPT || res->mem_type == AMDGPU_PL_DOORBELL) + return true; + + if (res->mem_type != TTM_PL_VRAM) +@@ -432,7 +432,7 @@ bool amdgpu_res_cpu_visible(struct amdgpu_device *adev, + + amdgpu_res_first(res, 0, res->size, &cursor); + while (cursor.remaining) { +- if ((cursor.start + cursor.size) >= adev->gmc.visible_vram_size) ++ if ((cursor.start + cursor.size) > adev->gmc.visible_vram_size) + return false; + amdgpu_res_next(&cursor, cursor.size); + } +@@ -486,14 +486,16 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict, + + if (!old_mem || (old_mem->mem_type == TTM_PL_SYSTEM 
&& + bo->ttm == NULL)) { ++ amdgpu_bo_move_notify(bo, evict, new_mem); + ttm_bo_move_null(bo, new_mem); +- goto out; ++ return 0; + } + if (old_mem->mem_type == TTM_PL_SYSTEM && + (new_mem->mem_type == TTM_PL_TT || + new_mem->mem_type == AMDGPU_PL_PREEMPT)) { ++ amdgpu_bo_move_notify(bo, evict, new_mem); + ttm_bo_move_null(bo, new_mem); +- goto out; ++ return 0; + } + if ((old_mem->mem_type == TTM_PL_TT || + old_mem->mem_type == AMDGPU_PL_PREEMPT) && +@@ -503,9 +505,10 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict, + return r; + + amdgpu_ttm_backend_unbind(bo->bdev, bo->ttm); ++ amdgpu_bo_move_notify(bo, evict, new_mem); + ttm_resource_free(bo, &bo->resource); + ttm_bo_assign_mem(bo, new_mem); +- goto out; ++ return 0; + } + + if (old_mem->mem_type == AMDGPU_PL_GDS || +@@ -517,8 +520,9 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict, + new_mem->mem_type == AMDGPU_PL_OA || + new_mem->mem_type == AMDGPU_PL_DOORBELL) { + /* Nothing to save here */ ++ amdgpu_bo_move_notify(bo, evict, new_mem); + ttm_bo_move_null(bo, new_mem); +- goto out; ++ return 0; + } + + if (bo->type == ttm_bo_type_device && +@@ -530,22 +534,23 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict, + abo->flags &= ~AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED; + } + +- if (adev->mman.buffer_funcs_enabled) { +- if (((old_mem->mem_type == TTM_PL_SYSTEM && +- new_mem->mem_type == TTM_PL_VRAM) || +- (old_mem->mem_type == TTM_PL_VRAM && +- new_mem->mem_type == TTM_PL_SYSTEM))) { +- hop->fpfn = 0; +- hop->lpfn = 0; +- hop->mem_type = TTM_PL_TT; +- hop->flags = TTM_PL_FLAG_TEMPORARY; +- return -EMULTIHOP; +- } ++ if (adev->mman.buffer_funcs_enabled && ++ ((old_mem->mem_type == TTM_PL_SYSTEM && ++ new_mem->mem_type == TTM_PL_VRAM) || ++ (old_mem->mem_type == TTM_PL_VRAM && ++ new_mem->mem_type == TTM_PL_SYSTEM))) { ++ hop->fpfn = 0; ++ hop->lpfn = 0; ++ hop->mem_type = TTM_PL_TT; ++ hop->flags = TTM_PL_FLAG_TEMPORARY; ++ return -EMULTIHOP; ++ } + ++ 
amdgpu_bo_move_notify(bo, evict, new_mem); ++ if (adev->mman.buffer_funcs_enabled) + r = amdgpu_move_blit(bo, evict, new_mem, old_mem); +- } else { ++ else + r = -ENODEV; +- } + + if (r) { + /* Check that all memory is CPU accessible */ +@@ -560,11 +565,10 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict, + return r; + } + +- trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type); +-out: +- /* update statistics */ ++ /* update statistics after the move */ ++ if (evict) ++ atomic64_inc(&adev->num_evictions); + atomic64_add(bo->base.size, &adev->num_bytes_moved); +- amdgpu_bo_move_notify(bo, evict); + return 0; + } + +@@ -1568,7 +1572,7 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo, + static void + amdgpu_bo_delete_mem_notify(struct ttm_buffer_object *bo) + { +- amdgpu_bo_move_notify(bo, false); ++ amdgpu_bo_move_notify(bo, false, NULL); + } + + static struct ttm_device_funcs amdgpu_bo_driver = { +diff --git a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c +index d0fc62784e821..0284c9198a04a 100644 +--- a/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c ++++ b/drivers/gpu/drm/amd/amdgpu/aqua_vanjaram.c +@@ -61,6 +61,11 @@ void aqua_vanjaram_doorbell_index_init(struct amdgpu_device *adev) + adev->doorbell_index.max_assignment = AMDGPU_DOORBELL_LAYOUT1_MAX_ASSIGNMENT << 1; + } + ++static bool aqua_vanjaram_xcp_vcn_shared(struct amdgpu_device *adev) ++{ ++ return (adev->xcp_mgr->num_xcps > adev->vcn.num_vcn_inst); ++} ++ + static void aqua_vanjaram_set_xcp_id(struct amdgpu_device *adev, + uint32_t inst_idx, struct amdgpu_ring *ring) + { +@@ -86,7 +91,7 @@ static void aqua_vanjaram_set_xcp_id(struct amdgpu_device *adev, + case AMDGPU_RING_TYPE_VCN_ENC: + case AMDGPU_RING_TYPE_VCN_JPEG: + ip_blk = AMDGPU_XCP_VCN; +- if (adev->xcp_mgr->mode == AMDGPU_CPX_PARTITION_MODE) ++ if (aqua_vanjaram_xcp_vcn_shared(adev)) + inst_mask = 1 << (inst_idx * 2); + break; + default: +@@ -139,10 +144,12 
@@ static int aqua_vanjaram_xcp_sched_list_update( + + aqua_vanjaram_xcp_gpu_sched_update(adev, ring, ring->xcp_id); + +- /* VCN is shared by two partitions under CPX MODE */ ++ /* VCN may be shared by two partitions under CPX MODE in certain ++ * configs. ++ */ + if ((ring->funcs->type == AMDGPU_RING_TYPE_VCN_ENC || +- ring->funcs->type == AMDGPU_RING_TYPE_VCN_JPEG) && +- adev->xcp_mgr->mode == AMDGPU_CPX_PARTITION_MODE) ++ ring->funcs->type == AMDGPU_RING_TYPE_VCN_JPEG) && ++ aqua_vanjaram_xcp_vcn_shared(adev)) + aqua_vanjaram_xcp_gpu_sched_update(adev, ring, ring->xcp_id + 1); + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c +index e76e7e7cb554e..4e8d5e6a65e41 100644 +--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c ++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c +@@ -1613,19 +1613,9 @@ static int sdma_v4_4_2_set_ecc_irq_state(struct amdgpu_device *adev, + u32 sdma_cntl; + + sdma_cntl = RREG32_SDMA(type, regSDMA_CNTL); +- switch (state) { +- case AMDGPU_IRQ_STATE_DISABLE: +- sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA_CNTL, +- DRAM_ECC_INT_ENABLE, 0); +- WREG32_SDMA(type, regSDMA_CNTL, sdma_cntl); +- break; +- /* sdma ecc interrupt is enabled by default +- * driver doesn't need to do anything to +- * enable the interrupt */ +- case AMDGPU_IRQ_STATE_ENABLE: +- default: +- break; +- } ++ sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA_CNTL, DRAM_ECC_INT_ENABLE, ++ state == AMDGPU_IRQ_STATE_ENABLE ? 
1 : 0); ++ WREG32_SDMA(type, regSDMA_CNTL, sdma_cntl); + + return 0; + } +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c +index c157721214e8d..d33ba4fe9ad5b 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c +@@ -1138,7 +1138,7 @@ static int kfd_ioctl_alloc_memory_of_gpu(struct file *filep, + goto err_unlock; + } + offset = dev->adev->rmmio_remap.bus_addr; +- if (!offset) { ++ if (!offset || (PAGE_SIZE > 4096)) { + err = -ENOMEM; + goto err_unlock; + } +@@ -1516,7 +1516,7 @@ static int kfd_ioctl_get_dmabuf_info(struct file *filep, + + /* Find a KFD GPU device that supports the get_dmabuf_info query */ + for (i = 0; kfd_topology_enum_kfd_devices(i, &dev) == 0; i++) +- if (dev) ++ if (dev && !kfd_devcgroup_check_permission(dev)) + break; + if (!dev) + return -EINVAL; +@@ -1538,7 +1538,7 @@ static int kfd_ioctl_get_dmabuf_info(struct file *filep, + if (xcp_id >= 0) + args->gpu_id = dmabuf_adev->kfd.dev->nodes[xcp_id]->id; + else +- args->gpu_id = dmabuf_adev->kfd.dev->nodes[0]->id; ++ args->gpu_id = dev->id; + args->flags = flags; + + /* Copy metadata buffer to user mode */ +@@ -2307,7 +2307,7 @@ static int criu_restore_memory_of_gpu(struct kfd_process_device *pdd, + return -EINVAL; + } + offset = pdd->dev->adev->rmmio_remap.bus_addr; +- if (!offset) { ++ if (!offset || (PAGE_SIZE > 4096)) { + pr_err("amdgpu_amdkfd_get_mmio_remap_phys_addr failed\n"); + return -ENOMEM; + } +@@ -3348,6 +3348,9 @@ static int kfd_mmio_mmap(struct kfd_node *dev, struct kfd_process *process, + if (vma->vm_end - vma->vm_start != PAGE_SIZE) + return -EINVAL; + ++ if (PAGE_SIZE > 4096) ++ return -EINVAL; ++ + address = dev->adev->rmmio_remap.bus_addr; + + vm_flags_set(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE | +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c +index 93ce181eb3baa..913c70a0ef44f 100644 +--- 
a/drivers/gpu/drm/amd/amdkfd/kfd_device.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c +@@ -935,7 +935,6 @@ void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm) + { + struct kfd_node *node; + int i; +- int count; + + if (!kfd->init_complete) + return; +@@ -943,12 +942,10 @@ void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm) + /* for runtime suspend, skip locking kfd */ + if (!run_pm) { + mutex_lock(&kfd_processes_mutex); +- count = ++kfd_locked; +- mutex_unlock(&kfd_processes_mutex); +- + /* For first KFD device suspend all the KFD processes */ +- if (count == 1) ++ if (++kfd_locked == 1) + kfd_suspend_all_processes(); ++ mutex_unlock(&kfd_processes_mutex); + } + + for (i = 0; i < kfd->num_nodes; i++) { +@@ -959,7 +956,7 @@ void kgd2kfd_suspend(struct kfd_dev *kfd, bool run_pm) + + int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm) + { +- int ret, count, i; ++ int ret, i; + + if (!kfd->init_complete) + return 0; +@@ -973,12 +970,10 @@ int kgd2kfd_resume(struct kfd_dev *kfd, bool run_pm) + /* for runtime resume, skip unlocking kfd */ + if (!run_pm) { + mutex_lock(&kfd_processes_mutex); +- count = --kfd_locked; +- mutex_unlock(&kfd_processes_mutex); +- +- WARN_ONCE(count < 0, "KFD suspend / resume ref. error"); +- if (count == 0) ++ if (--kfd_locked == 0) + ret = kfd_resume_all_processes(); ++ WARN_ONCE(kfd_locked < 0, "KFD suspend / resume ref. 
error"); ++ mutex_unlock(&kfd_processes_mutex); + } + + return ret; +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v10.c +index a7697ec8188e0..f85ca6cb90f56 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v10.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v10.c +@@ -336,7 +336,8 @@ static void event_interrupt_wq_v10(struct kfd_node *dev, + break; + } + kfd_signal_event_interrupt(pasid, context_id0 & 0x7fffff, 23); +- } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE) { ++ } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE && ++ KFD_DBG_EC_TYPE_IS_PACKET(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0))) { + kfd_set_dbg_ev_from_interrupt(dev, pasid, + KFD_DEBUG_DOORBELL_ID(context_id0), + KFD_EC_MASK(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0)), +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c +index 2a65792fd1162..3ca9c160da7c2 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c +@@ -325,7 +325,8 @@ static void event_interrupt_wq_v11(struct kfd_node *dev, + /* CP */ + if (source_id == SOC15_INTSRC_CP_END_OF_PIPE) + kfd_signal_event_interrupt(pasid, context_id0, 32); +- else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE) ++ else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE && ++ KFD_DBG_EC_TYPE_IS_PACKET(KFD_CTXID0_CP_BAD_OP_ECODE(context_id0))) + kfd_set_dbg_ev_from_interrupt(dev, pasid, + KFD_CTXID0_DOORBELL_ID(context_id0), + KFD_EC_MASK(KFD_CTXID0_CP_BAD_OP_ECODE(context_id0)), +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c +index 27cdaea405017..8a6729939ae55 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c +@@ -385,7 +385,8 @@ static void event_interrupt_wq_v9(struct kfd_node *dev, + break; + } + kfd_signal_event_interrupt(pasid, sq_int_data, 
24); +- } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE) { ++ } else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE && ++ KFD_DBG_EC_TYPE_IS_PACKET(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0))) { + kfd_set_dbg_ev_from_interrupt(dev, pasid, + KFD_DEBUG_DOORBELL_ID(context_id0), + KFD_EC_MASK(KFD_DEBUG_CP_BAD_OP_ECODE(context_id0)), +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index 3442e08f47876..98dd07e3726af 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -2974,6 +2974,10 @@ static int dm_resume(void *handle) + /* Do mst topology probing after resuming cached state*/ + drm_connector_list_iter_begin(ddev, &iter); + drm_for_each_connector_iter(connector, &iter) { ++ ++ if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK) ++ continue; ++ + aconnector = to_amdgpu_dm_connector(connector); + if (aconnector->dc_link->type != dc_connection_mst_branch || + aconnector->mst_root) +@@ -5756,6 +5760,9 @@ get_highest_refresh_rate_mode(struct amdgpu_dm_connector *aconnector, + &aconnector->base.probed_modes : + &aconnector->base.modes; + ++ if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_WRITEBACK) ++ return NULL; ++ + if (aconnector->freesync_vid_base.clock != 0) + return &aconnector->freesync_vid_base; + +@@ -8445,6 +8452,9 @@ static void amdgpu_dm_commit_audio(struct drm_device *dev, + continue; + + notify: ++ if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK) ++ continue; ++ + aconnector = to_amdgpu_dm_connector(connector); + + mutex_lock(&adev->dm.audio_lock); +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c +index f4b9d43c5d979..2bc37c5a27605 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c +@@ -1465,7 +1465,9 @@ static ssize_t 
dp_dsc_clock_en_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -1566,7 +1568,9 @@ static ssize_t dp_dsc_clock_en_write(struct file *f, const char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -1651,7 +1655,9 @@ static ssize_t dp_dsc_slice_width_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -1750,7 +1756,9 @@ static ssize_t dp_dsc_slice_width_write(struct file *f, const char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -1835,7 +1843,9 @@ static ssize_t dp_dsc_slice_height_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == 
aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -1934,7 +1944,9 @@ static ssize_t dp_dsc_slice_height_write(struct file *f, const char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -2015,7 +2027,9 @@ static ssize_t dp_dsc_bits_per_pixel_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -2111,7 +2125,9 @@ static ssize_t dp_dsc_bits_per_pixel_write(struct file *f, const char __user *bu + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -2190,7 +2206,9 @@ static ssize_t dp_dsc_pic_width_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -2246,7 +2264,9 @@ static ssize_t dp_dsc_pic_height_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = 
&aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -2317,7 +2337,9 @@ static ssize_t dp_dsc_chunk_size_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +@@ -2388,7 +2410,9 @@ static ssize_t dp_dsc_slice_bpg_offset_read(struct file *f, char __user *buf, + for (i = 0; i < MAX_PIPES; i++) { + pipe_ctx = &aconnector->dc_link->dc->current_state->res_ctx.pipe_ctx[i]; + if (pipe_ctx->stream && +- pipe_ctx->stream->link == aconnector->dc_link) ++ pipe_ctx->stream->link == aconnector->dc_link && ++ pipe_ctx->stream->sink && ++ pipe_ctx->stream->sink == aconnector->dc_sink) + break; + } + +diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c +index 4c3c4c8de1cfc..93720cf069d7c 100644 +--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c ++++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c +@@ -2961,6 +2961,7 @@ static enum bp_result construct_integrated_info( + result = get_integrated_info_v2_1(bp, info); + break; + case 2: ++ case 3: + result = get_integrated_info_v2_2(bp, info); + break; + default: +diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c +index 5b7ad38f85e08..65e45a0b4ff34 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hpo_dp_link_encoder.c +@@ -395,6 +395,12 @@ void 
dcn31_hpo_dp_link_enc_set_throttled_vcp_size( + x), + 25)); + ++ // If y rounds up to integer, carry it over to x. ++ if (y >> 25) { ++ x += 1; ++ y = 0; ++ } ++ + switch (stream_encoder_inst) { + case 0: + REG_SET_2(DP_DPHY_SYM32_VC_RATE_CNTL0, 0, +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c +index 1d1917e1b63f4..cd674ef5adb19 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c +@@ -226,7 +226,7 @@ static int smu_v13_0_4_system_features_control(struct smu_context *smu, bool en) + struct amdgpu_device *adev = smu->adev; + int ret = 0; + +- if (!en && !adev->in_s0ix) { ++ if (!en && adev->in_s4) { + /* Adds a GFX reset as workaround just before sending the + * MP1_UNLOAD message to prevent GC/RLC/PMFW from entering + * an invalid state. +diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c +index c44d5bcf12847..309aad5f0c808 100644 +--- a/drivers/gpu/drm/drm_connector.c ++++ b/drivers/gpu/drm/drm_connector.c +@@ -2925,7 +2925,7 @@ int drm_mode_getconnector(struct drm_device *dev, void *data, + dev->mode_config.max_width, + dev->mode_config.max_height); + else +- drm_dbg_kms(dev, "User-space requested a forced probe on [CONNECTOR:%d:%s] but is not the DRM master, demoting to read-only probe", ++ drm_dbg_kms(dev, "User-space requested a forced probe on [CONNECTOR:%d:%s] but is not the DRM master, demoting to read-only probe\n", + connector->base.id, connector->name); + } + +diff --git a/drivers/gpu/drm/i915/display/intel_audio.c b/drivers/gpu/drm/i915/display/intel_audio.c +index 3d9c9b4f27f80..8b8d1d806566e 100644 +--- a/drivers/gpu/drm/i915/display/intel_audio.c ++++ b/drivers/gpu/drm/i915/display/intel_audio.c +@@ -75,19 +75,6 @@ struct intel_audio_funcs { + struct intel_crtc_state *crtc_state); + }; + +-/* DP N/M table */ +-#define LC_810M 810000 +-#define LC_540M 540000 +-#define LC_270M 
270000 +-#define LC_162M 162000 +- +-struct dp_aud_n_m { +- int sample_rate; +- int clock; +- u16 m; +- u16 n; +-}; +- + struct hdmi_aud_ncts { + int sample_rate; + int clock; +@@ -95,60 +82,6 @@ struct hdmi_aud_ncts { + int cts; + }; + +-/* Values according to DP 1.4 Table 2-104 */ +-static const struct dp_aud_n_m dp_aud_n_m[] = { +- { 32000, LC_162M, 1024, 10125 }, +- { 44100, LC_162M, 784, 5625 }, +- { 48000, LC_162M, 512, 3375 }, +- { 64000, LC_162M, 2048, 10125 }, +- { 88200, LC_162M, 1568, 5625 }, +- { 96000, LC_162M, 1024, 3375 }, +- { 128000, LC_162M, 4096, 10125 }, +- { 176400, LC_162M, 3136, 5625 }, +- { 192000, LC_162M, 2048, 3375 }, +- { 32000, LC_270M, 1024, 16875 }, +- { 44100, LC_270M, 784, 9375 }, +- { 48000, LC_270M, 512, 5625 }, +- { 64000, LC_270M, 2048, 16875 }, +- { 88200, LC_270M, 1568, 9375 }, +- { 96000, LC_270M, 1024, 5625 }, +- { 128000, LC_270M, 4096, 16875 }, +- { 176400, LC_270M, 3136, 9375 }, +- { 192000, LC_270M, 2048, 5625 }, +- { 32000, LC_540M, 1024, 33750 }, +- { 44100, LC_540M, 784, 18750 }, +- { 48000, LC_540M, 512, 11250 }, +- { 64000, LC_540M, 2048, 33750 }, +- { 88200, LC_540M, 1568, 18750 }, +- { 96000, LC_540M, 1024, 11250 }, +- { 128000, LC_540M, 4096, 33750 }, +- { 176400, LC_540M, 3136, 18750 }, +- { 192000, LC_540M, 2048, 11250 }, +- { 32000, LC_810M, 1024, 50625 }, +- { 44100, LC_810M, 784, 28125 }, +- { 48000, LC_810M, 512, 16875 }, +- { 64000, LC_810M, 2048, 50625 }, +- { 88200, LC_810M, 1568, 28125 }, +- { 96000, LC_810M, 1024, 16875 }, +- { 128000, LC_810M, 4096, 50625 }, +- { 176400, LC_810M, 3136, 28125 }, +- { 192000, LC_810M, 2048, 16875 }, +-}; +- +-static const struct dp_aud_n_m * +-audio_config_dp_get_n_m(const struct intel_crtc_state *crtc_state, int rate) +-{ +- int i; +- +- for (i = 0; i < ARRAY_SIZE(dp_aud_n_m); i++) { +- if (rate == dp_aud_n_m[i].sample_rate && +- crtc_state->port_clock == dp_aud_n_m[i].clock) +- return &dp_aud_n_m[i]; +- } +- +- return NULL; +-} +- + static const struct { + int clock; 
+ u32 config; +@@ -386,47 +319,17 @@ hsw_dp_audio_config_update(struct intel_encoder *encoder, + const struct intel_crtc_state *crtc_state) + { + struct drm_i915_private *i915 = to_i915(encoder->base.dev); +- struct i915_audio_component *acomp = i915->display.audio.component; + enum transcoder cpu_transcoder = crtc_state->cpu_transcoder; +- enum port port = encoder->port; +- const struct dp_aud_n_m *nm; +- int rate; +- u32 tmp; +- +- rate = acomp ? acomp->aud_sample_rate[port] : 0; +- nm = audio_config_dp_get_n_m(crtc_state, rate); +- if (nm) +- drm_dbg_kms(&i915->drm, "using Maud %u, Naud %u\n", nm->m, +- nm->n); +- else +- drm_dbg_kms(&i915->drm, "using automatic Maud, Naud\n"); +- +- tmp = intel_de_read(i915, HSW_AUD_CFG(cpu_transcoder)); +- tmp &= ~AUD_CONFIG_N_VALUE_INDEX; +- tmp &= ~AUD_CONFIG_PIXEL_CLOCK_HDMI_MASK; +- tmp &= ~AUD_CONFIG_N_PROG_ENABLE; +- tmp |= AUD_CONFIG_N_VALUE_INDEX; + +- if (nm) { +- tmp &= ~AUD_CONFIG_N_MASK; +- tmp |= AUD_CONFIG_N(nm->n); +- tmp |= AUD_CONFIG_N_PROG_ENABLE; +- } +- +- intel_de_write(i915, HSW_AUD_CFG(cpu_transcoder), tmp); +- +- tmp = intel_de_read(i915, HSW_AUD_M_CTS_ENABLE(cpu_transcoder)); +- tmp &= ~AUD_CONFIG_M_MASK; +- tmp &= ~AUD_M_CTS_M_VALUE_INDEX; +- tmp &= ~AUD_M_CTS_M_PROG_ENABLE; +- +- if (nm) { +- tmp |= nm->m; +- tmp |= AUD_M_CTS_M_VALUE_INDEX; +- tmp |= AUD_M_CTS_M_PROG_ENABLE; +- } ++ /* Enable time stamps. 
Let HW calculate Maud/Naud values */ ++ intel_de_rmw(i915, HSW_AUD_CFG(cpu_transcoder), ++ AUD_CONFIG_N_VALUE_INDEX | ++ AUD_CONFIG_PIXEL_CLOCK_HDMI_MASK | ++ AUD_CONFIG_UPPER_N_MASK | ++ AUD_CONFIG_LOWER_N_MASK | ++ AUD_CONFIG_N_PROG_ENABLE, ++ AUD_CONFIG_N_VALUE_INDEX); + +- intel_de_write(i915, HSW_AUD_M_CTS_ENABLE(cpu_transcoder), tmp); + } + + static void +diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c +index f282a2d89c78d..27d1c49b46ec4 100644 +--- a/drivers/gpu/drm/i915/display/intel_bios.c ++++ b/drivers/gpu/drm/i915/display/intel_bios.c +@@ -1035,22 +1035,11 @@ parse_lfp_backlight(struct drm_i915_private *i915, + panel->vbt.backlight.type = INTEL_BACKLIGHT_DISPLAY_DDI; + panel->vbt.backlight.controller = 0; + if (i915->display.vbt.version >= 191) { +- size_t exp_size; ++ const struct lfp_backlight_control_method *method; + +- if (i915->display.vbt.version >= 236) +- exp_size = sizeof(struct bdb_lfp_backlight_data); +- else if (i915->display.vbt.version >= 234) +- exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_234; +- else +- exp_size = EXP_BDB_LFP_BL_DATA_SIZE_REV_191; +- +- if (get_blocksize(backlight_data) >= exp_size) { +- const struct lfp_backlight_control_method *method; +- +- method = &backlight_data->backlight_control[panel_type]; +- panel->vbt.backlight.type = method->type; +- panel->vbt.backlight.controller = method->controller; +- } ++ method = &backlight_data->backlight_control[panel_type]; ++ panel->vbt.backlight.type = method->type; ++ panel->vbt.backlight.controller = method->controller; + } + + panel->vbt.backlight.pwm_freq_hz = entry->pwm_freq_hz; +diff --git a/drivers/gpu/drm/i915/display/intel_vbt_defs.h b/drivers/gpu/drm/i915/display/intel_vbt_defs.h +index a9f44abfc9fc2..b50cd0dcabda9 100644 +--- a/drivers/gpu/drm/i915/display/intel_vbt_defs.h ++++ b/drivers/gpu/drm/i915/display/intel_vbt_defs.h +@@ -897,11 +897,6 @@ struct lfp_brightness_level { + u16 reserved; + } __packed; + +-#define 
EXP_BDB_LFP_BL_DATA_SIZE_REV_191 \ +- offsetof(struct bdb_lfp_backlight_data, brightness_level) +-#define EXP_BDB_LFP_BL_DATA_SIZE_REV_234 \ +- offsetof(struct bdb_lfp_backlight_data, brightness_precision_bits) +- + struct bdb_lfp_backlight_data { + u8 entry_size; + struct lfp_backlight_data_entry data[16]; +diff --git a/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.c b/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.c +index 044219c5960a5..99b71bb7da0a6 100644 +--- a/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.c ++++ b/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.c +@@ -8,14 +8,14 @@ + #include "intel_gt_ccs_mode.h" + #include "intel_gt_regs.h" + +-void intel_gt_apply_ccs_mode(struct intel_gt *gt) ++unsigned int intel_gt_apply_ccs_mode(struct intel_gt *gt) + { + int cslice; + u32 mode = 0; + int first_ccs = __ffs(CCS_MASK(gt)); + + if (!IS_DG2(gt->i915)) +- return; ++ return 0; + + /* Build the value for the fixed CCS load balancing */ + for (cslice = 0; cslice < I915_MAX_CCS; cslice++) { +@@ -35,5 +35,5 @@ void intel_gt_apply_ccs_mode(struct intel_gt *gt) + XEHP_CCS_MODE_CSLICE_MASK); + } + +- intel_uncore_write(gt->uncore, XEHP_CCS_MODE, mode); ++ return mode; + } +diff --git a/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.h b/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.h +index 9e5549caeb269..55547f2ff426a 100644 +--- a/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.h ++++ b/drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.h +@@ -8,6 +8,6 @@ + + struct intel_gt; + +-void intel_gt_apply_ccs_mode(struct intel_gt *gt); ++unsigned int intel_gt_apply_ccs_mode(struct intel_gt *gt); + + #endif /* __INTEL_GT_CCS_MODE_H__ */ +diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c +index be060b32bd9ce..8fbb0686c5348 100644 +--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c ++++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c +@@ -2828,6 +2828,7 @@ add_render_compute_tuning_settings(struct intel_gt *gt, + static void ccs_engine_wa_mode(struct intel_engine_cs 
*engine, struct i915_wa_list *wal) + { + struct intel_gt *gt = engine->gt; ++ u32 mode; + + if (!IS_DG2(gt->i915)) + return; +@@ -2844,7 +2845,8 @@ static void ccs_engine_wa_mode(struct intel_engine_cs *engine, struct i915_wa_li + * After having disabled automatic load balancing we need to + * assign all slices to a single CCS. We will call it CCS mode 1 + */ +- intel_gt_apply_ccs_mode(gt); ++ mode = intel_gt_apply_ccs_mode(gt); ++ wa_masked_en(wal, XEHP_CCS_MODE, mode); + } + + /* +diff --git a/drivers/gpu/drm/meson/meson_dw_hdmi.c b/drivers/gpu/drm/meson/meson_dw_hdmi.c +index 5a9538bc0e26f..5565f7777529f 100644 +--- a/drivers/gpu/drm/meson/meson_dw_hdmi.c ++++ b/drivers/gpu/drm/meson/meson_dw_hdmi.c +@@ -106,6 +106,8 @@ + #define HHI_HDMI_CLK_CNTL 0x1cc /* 0x73 */ + #define HHI_HDMI_PHY_CNTL0 0x3a0 /* 0xe8 */ + #define HHI_HDMI_PHY_CNTL1 0x3a4 /* 0xe9 */ ++#define PHY_CNTL1_INIT 0x03900000 ++#define PHY_INVERT BIT(17) + #define HHI_HDMI_PHY_CNTL2 0x3a8 /* 0xea */ + #define HHI_HDMI_PHY_CNTL3 0x3ac /* 0xeb */ + #define HHI_HDMI_PHY_CNTL4 0x3b0 /* 0xec */ +@@ -130,6 +132,8 @@ struct meson_dw_hdmi_data { + unsigned int addr); + void (*dwc_write)(struct meson_dw_hdmi *dw_hdmi, + unsigned int addr, unsigned int data); ++ u32 cntl0_init; ++ u32 cntl1_init; + }; + + struct meson_dw_hdmi { +@@ -384,26 +388,6 @@ static int dw_hdmi_phy_init(struct dw_hdmi *hdmi, void *data, + dw_hdmi_bus_fmt_is_420(hdmi)) + mode_is_420 = true; + +- /* Enable clocks */ +- regmap_update_bits(priv->hhi, HHI_HDMI_CLK_CNTL, 0xffff, 0x100); +- +- /* Bring HDMITX MEM output of power down */ +- regmap_update_bits(priv->hhi, HHI_MEM_PD_REG0, 0xff << 8, 0); +- +- /* Bring out of reset */ +- dw_hdmi->data->top_write(dw_hdmi, HDMITX_TOP_SW_RESET, 0); +- +- /* Enable internal pixclk, tmds_clk, spdif_clk, i2s_clk, cecclk */ +- dw_hdmi_top_write_bits(dw_hdmi, HDMITX_TOP_CLK_CNTL, +- 0x3, 0x3); +- +- /* Enable cec_clk and hdcp22_tmdsclk_en */ +- dw_hdmi_top_write_bits(dw_hdmi, HDMITX_TOP_CLK_CNTL, +- 0x3 
<< 4, 0x3 << 4); +- +- /* Enable normal output to PHY */ +- dw_hdmi->data->top_write(dw_hdmi, HDMITX_TOP_BIST_CNTL, BIT(12)); +- + /* TMDS pattern setup */ + if (mode->clock > 340000 && !mode_is_420) { + dw_hdmi->data->top_write(dw_hdmi, HDMITX_TOP_TMDS_CLK_PTTN_01, +@@ -425,20 +409,6 @@ static int dw_hdmi_phy_init(struct dw_hdmi *hdmi, void *data, + /* Setup PHY parameters */ + meson_hdmi_phy_setup_mode(dw_hdmi, mode, mode_is_420); + +- /* Setup PHY */ +- regmap_update_bits(priv->hhi, HHI_HDMI_PHY_CNTL1, +- 0xffff << 16, 0x0390 << 16); +- +- /* BIT_INVERT */ +- if (dw_hdmi_is_compatible(dw_hdmi, "amlogic,meson-gxl-dw-hdmi") || +- dw_hdmi_is_compatible(dw_hdmi, "amlogic,meson-gxm-dw-hdmi") || +- dw_hdmi_is_compatible(dw_hdmi, "amlogic,meson-g12a-dw-hdmi")) +- regmap_update_bits(priv->hhi, HHI_HDMI_PHY_CNTL1, +- BIT(17), 0); +- else +- regmap_update_bits(priv->hhi, HHI_HDMI_PHY_CNTL1, +- BIT(17), BIT(17)); +- + /* Disable clock, fifo, fifo_wr */ + regmap_update_bits(priv->hhi, HHI_HDMI_PHY_CNTL1, 0xf, 0); + +@@ -492,7 +462,9 @@ static void dw_hdmi_phy_disable(struct dw_hdmi *hdmi, + + DRM_DEBUG_DRIVER("\n"); + +- regmap_write(priv->hhi, HHI_HDMI_PHY_CNTL0, 0); ++ /* Fallback to init mode */ ++ regmap_write(priv->hhi, HHI_HDMI_PHY_CNTL1, dw_hdmi->data->cntl1_init); ++ regmap_write(priv->hhi, HHI_HDMI_PHY_CNTL0, dw_hdmi->data->cntl0_init); + } + + static enum drm_connector_status dw_hdmi_read_hpd(struct dw_hdmi *hdmi, +@@ -610,11 +582,22 @@ static const struct regmap_config meson_dw_hdmi_regmap_config = { + .fast_io = true, + }; + +-static const struct meson_dw_hdmi_data meson_dw_hdmi_gx_data = { ++static const struct meson_dw_hdmi_data meson_dw_hdmi_gxbb_data = { + .top_read = dw_hdmi_top_read, + .top_write = dw_hdmi_top_write, + .dwc_read = dw_hdmi_dwc_read, + .dwc_write = dw_hdmi_dwc_write, ++ .cntl0_init = 0x0, ++ .cntl1_init = PHY_CNTL1_INIT | PHY_INVERT, ++}; ++ ++static const struct meson_dw_hdmi_data meson_dw_hdmi_gxl_data = { ++ .top_read = dw_hdmi_top_read, 
++ .top_write = dw_hdmi_top_write, ++ .dwc_read = dw_hdmi_dwc_read, ++ .dwc_write = dw_hdmi_dwc_write, ++ .cntl0_init = 0x0, ++ .cntl1_init = PHY_CNTL1_INIT, + }; + + static const struct meson_dw_hdmi_data meson_dw_hdmi_g12a_data = { +@@ -622,6 +605,8 @@ static const struct meson_dw_hdmi_data meson_dw_hdmi_g12a_data = { + .top_write = dw_hdmi_g12a_top_write, + .dwc_read = dw_hdmi_g12a_dwc_read, + .dwc_write = dw_hdmi_g12a_dwc_write, ++ .cntl0_init = 0x000b4242, /* Bandgap */ ++ .cntl1_init = PHY_CNTL1_INIT, + }; + + static void meson_dw_hdmi_init(struct meson_dw_hdmi *meson_dw_hdmi) +@@ -656,6 +641,13 @@ static void meson_dw_hdmi_init(struct meson_dw_hdmi *meson_dw_hdmi) + meson_dw_hdmi->data->top_write(meson_dw_hdmi, + HDMITX_TOP_CLK_CNTL, 0xff); + ++ /* Enable normal output to PHY */ ++ meson_dw_hdmi->data->top_write(meson_dw_hdmi, HDMITX_TOP_BIST_CNTL, BIT(12)); ++ ++ /* Setup PHY */ ++ regmap_write(priv->hhi, HHI_HDMI_PHY_CNTL1, meson_dw_hdmi->data->cntl1_init); ++ regmap_write(priv->hhi, HHI_HDMI_PHY_CNTL0, meson_dw_hdmi->data->cntl0_init); ++ + /* Enable HDMI-TX Interrupt */ + meson_dw_hdmi->data->top_write(meson_dw_hdmi, HDMITX_TOP_INTR_STAT_CLR, + HDMITX_TOP_INTR_CORE); +@@ -865,11 +857,11 @@ static const struct dev_pm_ops meson_dw_hdmi_pm_ops = { + + static const struct of_device_id meson_dw_hdmi_of_table[] = { + { .compatible = "amlogic,meson-gxbb-dw-hdmi", +- .data = &meson_dw_hdmi_gx_data }, ++ .data = &meson_dw_hdmi_gxbb_data }, + { .compatible = "amlogic,meson-gxl-dw-hdmi", +- .data = &meson_dw_hdmi_gx_data }, ++ .data = &meson_dw_hdmi_gxl_data }, + { .compatible = "amlogic,meson-gxm-dw-hdmi", +- .data = &meson_dw_hdmi_gx_data }, ++ .data = &meson_dw_hdmi_gxl_data }, + { .compatible = "amlogic,meson-g12a-dw-hdmi", + .data = &meson_dw_hdmi_g12a_data }, + { } +diff --git a/drivers/gpu/drm/nouveau/nouveau_dp.c b/drivers/gpu/drm/nouveau/nouveau_dp.c +index 6a4980b2d4d4e..bf2ae67b03d94 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_dp.c ++++ 
b/drivers/gpu/drm/nouveau/nouveau_dp.c +@@ -108,12 +108,15 @@ nouveau_dp_detect(struct nouveau_connector *nv_connector, + u8 *dpcd = nv_encoder->dp.dpcd; + int ret = NOUVEAU_DP_NONE, hpd; + +- /* If we've already read the DPCD on an eDP device, we don't need to +- * reread it as it won't change ++ /* eDP ports don't support hotplugging - so there's no point in probing eDP ports unless we ++ * haven't probed them once before. + */ +- if (connector->connector_type == DRM_MODE_CONNECTOR_eDP && +- dpcd[DP_DPCD_REV] != 0) +- return NOUVEAU_DP_SST; ++ if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { ++ if (connector->status == connector_status_connected) ++ return NOUVEAU_DP_SST; ++ else if (connector->status == connector_status_disconnected) ++ return NOUVEAU_DP_NONE; ++ } + + mutex_lock(&nv_encoder->dp.hpd_irq_lock); + if (mstm) { +diff --git a/drivers/gpu/drm/panel/Kconfig b/drivers/gpu/drm/panel/Kconfig +index 869e535faefa3..3a2f4a9f1d466 100644 +--- a/drivers/gpu/drm/panel/Kconfig ++++ b/drivers/gpu/drm/panel/Kconfig +@@ -184,7 +184,7 @@ config DRM_PANEL_ILITEK_IL9322 + + config DRM_PANEL_ILITEK_ILI9341 + tristate "Ilitek ILI9341 240x320 QVGA panels" +- depends on OF && SPI ++ depends on SPI + select DRM_KMS_HELPER + select DRM_GEM_DMA_HELPER + depends on BACKLIGHT_CLASS_DEVICE +diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9341.c b/drivers/gpu/drm/panel/panel-ilitek-ili9341.c +index 3574681891e81..b933380b7eb78 100644 +--- a/drivers/gpu/drm/panel/panel-ilitek-ili9341.c ++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9341.c +@@ -22,8 +22,9 @@ + #include + #include + #include ++#include + #include +-#include ++#include + #include + #include + +@@ -421,7 +422,7 @@ static int ili9341_dpi_prepare(struct drm_panel *panel) + + ili9341_dpi_init(ili); + +- return ret; ++ return 0; + } + + static int ili9341_dpi_enable(struct drm_panel *panel) +@@ -691,7 +692,7 @@ static int ili9341_dpi_probe(struct spi_device *spi, struct gpio_desc *dc, + * Every new incarnation 
of this display must have a unique + * data entry for the system in this driver. + */ +- ili->conf = of_device_get_match_data(dev); ++ ili->conf = device_get_match_data(dev); + if (!ili->conf) { + dev_err(dev, "missing device configuration\n"); + return -ENODEV; +@@ -714,18 +715,18 @@ static int ili9341_probe(struct spi_device *spi) + + reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); + if (IS_ERR(reset)) +- dev_err(dev, "Failed to get gpio 'reset'\n"); ++ return dev_err_probe(dev, PTR_ERR(reset), "Failed to get gpio 'reset'\n"); + + dc = devm_gpiod_get_optional(dev, "dc", GPIOD_OUT_LOW); + if (IS_ERR(dc)) +- dev_err(dev, "Failed to get gpio 'dc'\n"); ++ return dev_err_probe(dev, PTR_ERR(dc), "Failed to get gpio 'dc'\n"); + + if (!strcmp(id->name, "sf-tc240t-9370-t")) + return ili9341_dpi_probe(spi, dc, reset); + else if (!strcmp(id->name, "yx240qv29")) + return ili9341_dbi_probe(spi, dc, reset); + +- return -1; ++ return -ENODEV; + } + + static void ili9341_remove(struct spi_device *spi) +diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c +index 9febc8b73f09e..368d26da0d6a2 100644 +--- a/drivers/gpu/drm/qxl/qxl_release.c ++++ b/drivers/gpu/drm/qxl/qxl_release.c +@@ -58,56 +58,16 @@ static long qxl_fence_wait(struct dma_fence *fence, bool intr, + signed long timeout) + { + struct qxl_device *qdev; +- struct qxl_release *release; +- int count = 0, sc = 0; +- bool have_drawable_releases; + unsigned long cur, end = jiffies + timeout; + + qdev = container_of(fence->lock, struct qxl_device, release_lock); +- release = container_of(fence, struct qxl_release, base); +- have_drawable_releases = release->type == QXL_RELEASE_DRAWABLE; +- +-retry: +- sc++; +- +- if (dma_fence_is_signaled(fence)) +- goto signaled; +- +- qxl_io_notify_oom(qdev); +- +- for (count = 0; count < 11; count++) { +- if (!qxl_queue_garbage_collect(qdev, true)) +- break; +- +- if (dma_fence_is_signaled(fence)) +- goto signaled; +- } +- +- if 
(dma_fence_is_signaled(fence)) +- goto signaled; + +- if (have_drawable_releases || sc < 4) { +- if (sc > 2) +- /* back off */ +- usleep_range(500, 1000); +- +- if (time_after(jiffies, end)) +- return 0; +- +- if (have_drawable_releases && sc > 300) { +- DMA_FENCE_WARN(fence, +- "failed to wait on release %llu after spincount %d\n", +- fence->context & ~0xf0000000, sc); +- goto signaled; +- } +- goto retry; +- } +- /* +- * yeah, original sync_obj_wait gave up after 3 spins when +- * have_drawable_releases is not set. +- */ ++ if (!wait_event_timeout(qdev->release_event, ++ (dma_fence_is_signaled(fence) || ++ (qxl_io_notify_oom(qdev), 0)), ++ timeout)) ++ return 0; + +-signaled: + cur = jiffies; + if (time_after(cur, end)) + return 0; +diff --git a/drivers/gpu/drm/radeon/pptable.h b/drivers/gpu/drm/radeon/pptable.h +index 4c2eec49dadc9..844f0490bf31f 100644 +--- a/drivers/gpu/drm/radeon/pptable.h ++++ b/drivers/gpu/drm/radeon/pptable.h +@@ -424,7 +424,7 @@ typedef struct _ATOM_PPLIB_SUMO_CLOCK_INFO{ + typedef struct _ATOM_PPLIB_STATE_V2 + { + //number of valid dpm levels in this state; Driver uses it to calculate the whole +- //size of the state: sizeof(ATOM_PPLIB_STATE_V2) + (ucNumDPMLevels - 1) * sizeof(UCHAR) ++ //size of the state: struct_size(ATOM_PPLIB_STATE_V2, clockInfoIndex, ucNumDPMLevels) + UCHAR ucNumDPMLevels; + + //a index to the array of nonClockInfos +@@ -432,14 +432,14 @@ typedef struct _ATOM_PPLIB_STATE_V2 + /** + * Driver will read the first ucNumDPMLevels in this array + */ +- UCHAR clockInfoIndex[1]; ++ UCHAR clockInfoIndex[] __counted_by(ucNumDPMLevels); + } ATOM_PPLIB_STATE_V2; + + typedef struct _StateArray{ + //how many states we have + UCHAR ucNumEntries; + +- ATOM_PPLIB_STATE_V2 states[1]; ++ ATOM_PPLIB_STATE_V2 states[] __counted_by(ucNumEntries); + }StateArray; + + +@@ -450,7 +450,7 @@ typedef struct _ClockInfoArray{ + //sizeof(ATOM_PPLIB_CLOCK_INFO) + UCHAR ucEntrySize; + +- UCHAR clockInfo[1]; ++ UCHAR clockInfo[] 
__counted_by(ucNumEntries); + }ClockInfoArray; + + typedef struct _NonClockInfoArray{ +@@ -460,7 +460,7 @@ typedef struct _NonClockInfoArray{ + //sizeof(ATOM_PPLIB_NONCLOCK_INFO) + UCHAR ucEntrySize; + +- ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[1]; ++ ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[] __counted_by(ucNumEntries); + }NonClockInfoArray; + + typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Record +diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c +index 43eaffa7faae3..bf9601351fa35 100644 +--- a/drivers/gpu/drm/ttm/ttm_tt.c ++++ b/drivers/gpu/drm/ttm/ttm_tt.c +@@ -92,7 +92,7 @@ int ttm_tt_create(struct ttm_buffer_object *bo, bool zero_alloc) + */ + if (bdev->pool.use_dma_alloc && cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) { + page_flags |= TTM_TT_FLAG_DECRYPTED; +- drm_info(ddev, "TT memory decryption enabled."); ++ drm_info_once(ddev, "TT memory decryption enabled."); + } + + bo->ttm = bdev->funcs->ttm_tt_create(bo, page_flags); +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +index 4aac88cc5f913..ae796e0c64aa5 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c +@@ -204,6 +204,7 @@ int vmw_bo_pin_in_start_of_vram(struct vmw_private *dev_priv, + VMW_BO_DOMAIN_VRAM, + VMW_BO_DOMAIN_VRAM); + buf->places[0].lpfn = PFN_UP(bo->resource->size); ++ buf->busy_places[0].lpfn = PFN_UP(bo->resource->size); + ret = ttm_bo_validate(bo, &buf->placement, &ctx); + + /* For some reason we didn't end up at the start of vram */ +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c +index 2a0cda3247031..5efc6a766f64e 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c +@@ -991,7 +991,7 @@ static int vmw_event_fence_action_create(struct drm_file *file_priv, + } + + event->event.base.type = DRM_VMW_EVENT_FENCE_SIGNALED; +- event->event.base.length = sizeof(*event); ++ event->event.base.length = 
sizeof(event->event); + event->event.user_data = user_data; + + ret = drm_event_reserve_init(dev, file_priv, &event->base, &event->event.base); +diff --git a/drivers/gpu/host1x/bus.c b/drivers/gpu/host1x/bus.c +index 84d042796d2e6..3937889fa912d 100644 +--- a/drivers/gpu/host1x/bus.c ++++ b/drivers/gpu/host1x/bus.c +@@ -351,11 +351,6 @@ static int host1x_device_uevent(const struct device *dev, + return 0; + } + +-static int host1x_dma_configure(struct device *dev) +-{ +- return of_dma_configure(dev, dev->of_node, true); +-} +- + static const struct dev_pm_ops host1x_device_pm_ops = { + .suspend = pm_generic_suspend, + .resume = pm_generic_resume, +@@ -369,7 +364,6 @@ struct bus_type host1x_bus_type = { + .name = "host1x", + .match = host1x_device_match, + .uevent = host1x_device_uevent, +- .dma_configure = host1x_dma_configure, + .pm = &host1x_device_pm_ops, + }; + +@@ -458,8 +452,6 @@ static int host1x_device_add(struct host1x *host1x, + device->dev.bus = &host1x_bus_type; + device->dev.parent = host1x->dev; + +- of_dma_configure(&device->dev, host1x->dev->of_node, true); +- + device->dev.dma_parms = &device->dma_parms; + dma_set_max_seg_size(&device->dev, UINT_MAX); + +diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c +index 56f7e06c673e4..47e1bd8de9fcf 100644 +--- a/drivers/hv/channel.c ++++ b/drivers/hv/channel.c +@@ -153,7 +153,9 @@ void vmbus_free_ring(struct vmbus_channel *channel) + hv_ringbuffer_cleanup(&channel->inbound); + + if (channel->ringbuffer_page) { +- __free_pages(channel->ringbuffer_page, ++ /* In a CoCo VM leak the memory if it didn't get re-encrypted */ ++ if (!channel->ringbuffer_gpadlhandle.decrypted) ++ __free_pages(channel->ringbuffer_page, + get_order(channel->ringbuffer_pagecount + << PAGE_SHIFT)); + channel->ringbuffer_page = NULL; +@@ -472,9 +474,18 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, + (atomic_inc_return(&vmbus_connection.next_gpadl_handle) - 1); + + ret = create_gpadl_header(type, kbuffer, 
size, send_offset, &msginfo); +- if (ret) ++ if (ret) { ++ gpadl->decrypted = false; + return ret; ++ } + ++ /* ++ * Set the "decrypted" flag to true for the set_memory_decrypted() ++ * success case. In the failure case, the encryption state of the ++ * memory is unknown. Leave "decrypted" as true to ensure the ++ * memory will be leaked instead of going back on the free list. ++ */ ++ gpadl->decrypted = true; + ret = set_memory_decrypted((unsigned long)kbuffer, + PFN_UP(size)); + if (ret) { +@@ -563,9 +574,15 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, + + kfree(msginfo); + +- if (ret) +- set_memory_encrypted((unsigned long)kbuffer, +- PFN_UP(size)); ++ if (ret) { ++ /* ++ * If set_memory_encrypted() fails, the decrypted flag is ++ * left as true so the memory is leaked instead of being ++ * put back on the free list. ++ */ ++ if (!set_memory_encrypted((unsigned long)kbuffer, PFN_UP(size))) ++ gpadl->decrypted = false; ++ } + + return ret; + } +@@ -886,6 +903,8 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, struct vmbus_gpadl *gpad + if (ret) + pr_warn("Fail to set mem host visibility in GPADL teardown %d.\n", ret); + ++ gpadl->decrypted = ret; ++ + return ret; + } + EXPORT_SYMBOL_GPL(vmbus_teardown_gpadl); +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c +index 3cabeeabb1cac..f001ae880e1db 100644 +--- a/drivers/hv/connection.c ++++ b/drivers/hv/connection.c +@@ -237,8 +237,17 @@ int vmbus_connect(void) + vmbus_connection.monitor_pages[0], 1); + ret |= set_memory_decrypted((unsigned long) + vmbus_connection.monitor_pages[1], 1); +- if (ret) ++ if (ret) { ++ /* ++ * If set_memory_decrypted() fails, the encryption state ++ * of the memory is unknown. So leak the memory instead ++ * of risking returning decrypted memory to the free list. ++ * For simplicity, always handle both pages the same. 
++ */ ++ vmbus_connection.monitor_pages[0] = NULL; ++ vmbus_connection.monitor_pages[1] = NULL; + goto cleanup; ++ } + + /* + * Set_memory_decrypted() will change the memory contents if +@@ -337,13 +346,19 @@ void vmbus_disconnect(void) + vmbus_connection.int_page = NULL; + } + +- set_memory_encrypted((unsigned long)vmbus_connection.monitor_pages[0], 1); +- set_memory_encrypted((unsigned long)vmbus_connection.monitor_pages[1], 1); ++ if (vmbus_connection.monitor_pages[0]) { ++ if (!set_memory_encrypted( ++ (unsigned long)vmbus_connection.monitor_pages[0], 1)) ++ hv_free_hyperv_page(vmbus_connection.monitor_pages[0]); ++ vmbus_connection.monitor_pages[0] = NULL; ++ } + +- hv_free_hyperv_page(vmbus_connection.monitor_pages[0]); +- hv_free_hyperv_page(vmbus_connection.monitor_pages[1]); +- vmbus_connection.monitor_pages[0] = NULL; +- vmbus_connection.monitor_pages[1] = NULL; ++ if (vmbus_connection.monitor_pages[1]) { ++ if (!set_memory_encrypted( ++ (unsigned long)vmbus_connection.monitor_pages[1], 1)) ++ hv_free_hyperv_page(vmbus_connection.monitor_pages[1]); ++ vmbus_connection.monitor_pages[1] = NULL; ++ } + } + + /* +diff --git a/drivers/hwmon/corsair-cpro.c b/drivers/hwmon/corsair-cpro.c +index 463ab4296ede5..280b90646a873 100644 +--- a/drivers/hwmon/corsair-cpro.c ++++ b/drivers/hwmon/corsair-cpro.c +@@ -16,6 +16,7 @@ + #include + #include + #include ++#include + #include + + #define USB_VENDOR_ID_CORSAIR 0x1b1c +@@ -77,8 +78,11 @@ + struct ccp_device { + struct hid_device *hdev; + struct device *hwmon_dev; ++ /* For reinitializing the completion below */ ++ spinlock_t wait_input_report_lock; + struct completion wait_input_report; + struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */ ++ u8 *cmd_buffer; + u8 *buffer; + int target[6]; + DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS); +@@ -111,15 +115,23 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2, + unsigned long t; + int ret; + +- memset(ccp->buffer, 
0x00, OUT_BUFFER_SIZE); +- ccp->buffer[0] = command; +- ccp->buffer[1] = byte1; +- ccp->buffer[2] = byte2; +- ccp->buffer[3] = byte3; +- ++ memset(ccp->cmd_buffer, 0x00, OUT_BUFFER_SIZE); ++ ccp->cmd_buffer[0] = command; ++ ccp->cmd_buffer[1] = byte1; ++ ccp->cmd_buffer[2] = byte2; ++ ccp->cmd_buffer[3] = byte3; ++ ++ /* ++ * Disable raw event parsing for a moment to safely reinitialize the ++ * completion. Reinit is done because hidraw could have triggered ++ * the raw event parsing and marked the ccp->wait_input_report ++ * completion as done. ++ */ ++ spin_lock_bh(&ccp->wait_input_report_lock); + reinit_completion(&ccp->wait_input_report); ++ spin_unlock_bh(&ccp->wait_input_report_lock); + +- ret = hid_hw_output_report(ccp->hdev, ccp->buffer, OUT_BUFFER_SIZE); ++ ret = hid_hw_output_report(ccp->hdev, ccp->cmd_buffer, OUT_BUFFER_SIZE); + if (ret < 0) + return ret; + +@@ -135,11 +147,12 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8 + struct ccp_device *ccp = hid_get_drvdata(hdev); + + /* only copy buffer when requested */ +- if (completion_done(&ccp->wait_input_report)) +- return 0; +- +- memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size)); +- complete(&ccp->wait_input_report); ++ spin_lock(&ccp->wait_input_report_lock); ++ if (!completion_done(&ccp->wait_input_report)) { ++ memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size)); ++ complete_all(&ccp->wait_input_report); ++ } ++ spin_unlock(&ccp->wait_input_report_lock); + + return 0; + } +@@ -492,7 +505,11 @@ static int ccp_probe(struct hid_device *hdev, const struct hid_device_id *id) + if (!ccp) + return -ENOMEM; + +- ccp->buffer = devm_kmalloc(&hdev->dev, OUT_BUFFER_SIZE, GFP_KERNEL); ++ ccp->cmd_buffer = devm_kmalloc(&hdev->dev, OUT_BUFFER_SIZE, GFP_KERNEL); ++ if (!ccp->cmd_buffer) ++ return -ENOMEM; ++ ++ ccp->buffer = devm_kmalloc(&hdev->dev, IN_BUFFER_SIZE, GFP_KERNEL); + if (!ccp->buffer) + return -ENOMEM; + +@@ -510,7 +527,9 @@ static int ccp_probe(struct hid_device 
*hdev, const struct hid_device_id *id) + + ccp->hdev = hdev; + hid_set_drvdata(hdev, ccp); ++ + mutex_init(&ccp->mutex); ++ spin_lock_init(&ccp->wait_input_report_lock); + init_completion(&ccp->wait_input_report); + + hid_device_io_start(hdev); +diff --git a/drivers/hwmon/pmbus/ucd9000.c b/drivers/hwmon/pmbus/ucd9000.c +index 8d9d422450e5c..d817c719b90bd 100644 +--- a/drivers/hwmon/pmbus/ucd9000.c ++++ b/drivers/hwmon/pmbus/ucd9000.c +@@ -80,11 +80,11 @@ struct ucd9000_debugfs_entry { + * It has been observed that the UCD90320 randomly fails register access when + * doing another access right on the back of a register write. To mitigate this + * make sure that there is a minimum delay between a write access and the +- * following access. The 250us is based on experimental data. At a delay of +- * 200us the issue seems to go away. Add a bit of extra margin to allow for ++ * following access. The 500 is based on experimental data. At a delay of ++ * 350us the issue seems to go away. Add a bit of extra margin to allow for + * system to system differences. 
+ */ +-#define UCD90320_WAIT_DELAY_US 250 ++#define UCD90320_WAIT_DELAY_US 500 + + static inline void ucd90320_wait(const struct ucd9000_data *data) + { +diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c +index 75d142bc14b4f..b608aa5467175 100644 +--- a/drivers/iio/accel/mxc4005.c ++++ b/drivers/iio/accel/mxc4005.c +@@ -27,9 +27,13 @@ + #define MXC4005_REG_ZOUT_UPPER 0x07 + #define MXC4005_REG_ZOUT_LOWER 0x08 + ++#define MXC4005_REG_INT_MASK0 0x0A ++ + #define MXC4005_REG_INT_MASK1 0x0B + #define MXC4005_REG_INT_MASK1_BIT_DRDYE 0x01 + ++#define MXC4005_REG_INT_CLR0 0x00 ++ + #define MXC4005_REG_INT_CLR1 0x01 + #define MXC4005_REG_INT_CLR1_BIT_DRDYC 0x01 + +@@ -113,7 +117,9 @@ static bool mxc4005_is_readable_reg(struct device *dev, unsigned int reg) + static bool mxc4005_is_writeable_reg(struct device *dev, unsigned int reg) + { + switch (reg) { ++ case MXC4005_REG_INT_CLR0: + case MXC4005_REG_INT_CLR1: ++ case MXC4005_REG_INT_MASK0: + case MXC4005_REG_INT_MASK1: + case MXC4005_REG_CONTROL: + return true; +@@ -330,17 +336,13 @@ static int mxc4005_set_trigger_state(struct iio_trigger *trig, + { + struct iio_dev *indio_dev = iio_trigger_get_drvdata(trig); + struct mxc4005_data *data = iio_priv(indio_dev); ++ unsigned int val; + int ret; + + mutex_lock(&data->mutex); +- if (state) { +- ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1, +- MXC4005_REG_INT_MASK1_BIT_DRDYE); +- } else { +- ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1, +- ~MXC4005_REG_INT_MASK1_BIT_DRDYE); +- } + ++ val = state ? 
MXC4005_REG_INT_MASK1_BIT_DRDYE : 0; ++ ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1, val); + if (ret < 0) { + mutex_unlock(&data->mutex); + dev_err(data->dev, "failed to update reg_int_mask1"); +@@ -382,6 +384,14 @@ static int mxc4005_chip_init(struct mxc4005_data *data) + + dev_dbg(data->dev, "MXC4005 chip id %02x\n", reg); + ++ ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK0, 0); ++ if (ret < 0) ++ return dev_err_probe(data->dev, ret, "writing INT_MASK0\n"); ++ ++ ret = regmap_write(data->regmap, MXC4005_REG_INT_MASK1, 0); ++ if (ret < 0) ++ return dev_err_probe(data->dev, ret, "writing INT_MASK1\n"); ++ + return 0; + } + +diff --git a/drivers/iio/imu/adis16475.c b/drivers/iio/imu/adis16475.c +index 6c81dc5bf2c7a..b24e1e27f2da6 100644 +--- a/drivers/iio/imu/adis16475.c ++++ b/drivers/iio/imu/adis16475.c +@@ -1127,6 +1127,7 @@ static int adis16475_config_sync_mode(struct adis16475 *st) + struct device *dev = &st->adis.spi->dev; + const struct adis16475_sync *sync; + u32 sync_mode; ++ u16 val; + + /* default to internal clk */ + st->clk_freq = st->info->int_clk * 1000; +@@ -1188,8 +1189,9 @@ static int adis16475_config_sync_mode(struct adis16475 *st) + * I'm keeping this for simplicity and avoiding extra variables + * in chip_info. 
+ */ ++ val = ADIS16475_SYNC_MODE(sync->sync_mode); + ret = __adis_update_bits(&st->adis, ADIS16475_REG_MSG_CTRL, +- ADIS16475_SYNC_MODE_MASK, sync->sync_mode); ++ ADIS16475_SYNC_MODE_MASK, val); + if (ret) + return ret; + +diff --git a/drivers/iio/pressure/bmp280-spi.c b/drivers/iio/pressure/bmp280-spi.c +index 967de99c1bb97..9de923228a9f4 100644 +--- a/drivers/iio/pressure/bmp280-spi.c ++++ b/drivers/iio/pressure/bmp280-spi.c +@@ -83,7 +83,7 @@ static const struct of_device_id bmp280_of_spi_match[] = { + { .compatible = "bosch,bmp180", .data = &bmp180_chip_info }, + { .compatible = "bosch,bmp181", .data = &bmp180_chip_info }, + { .compatible = "bosch,bmp280", .data = &bmp280_chip_info }, +- { .compatible = "bosch,bme280", .data = &bmp280_chip_info }, ++ { .compatible = "bosch,bme280", .data = &bme280_chip_info }, + { .compatible = "bosch,bmp380", .data = &bmp380_chip_info }, + { .compatible = "bosch,bmp580", .data = &bmp580_chip_info }, + { }, +@@ -95,7 +95,7 @@ static const struct spi_device_id bmp280_spi_id[] = { + { "bmp180", (kernel_ulong_t)&bmp180_chip_info }, + { "bmp181", (kernel_ulong_t)&bmp180_chip_info }, + { "bmp280", (kernel_ulong_t)&bmp280_chip_info }, +- { "bme280", (kernel_ulong_t)&bmp280_chip_info }, ++ { "bme280", (kernel_ulong_t)&bme280_chip_info }, + { "bmp380", (kernel_ulong_t)&bmp380_chip_info }, + { "bmp580", (kernel_ulong_t)&bmp580_chip_info }, + { } +diff --git a/drivers/infiniband/hw/qib/qib_fs.c b/drivers/infiniband/hw/qib/qib_fs.c +index ed7d4b02f45a6..11155e0fb8395 100644 +--- a/drivers/infiniband/hw/qib/qib_fs.c ++++ b/drivers/infiniband/hw/qib/qib_fs.c +@@ -439,6 +439,7 @@ static int remove_device_files(struct super_block *sb, + return PTR_ERR(dir); + } + simple_recursive_removal(dir, NULL); ++ dput(dir); + return 0; + } + +diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c +index fab6c347ce578..de698463e94ad 100644 +--- a/drivers/iommu/mtk_iommu.c ++++ b/drivers/iommu/mtk_iommu.c +@@ -1773,6 +1773,7 @@ static const 
struct of_device_id mtk_iommu_of_ids[] = { + { .compatible = "mediatek,mt8365-m4u", .data = &mt8365_data}, + {} + }; ++MODULE_DEVICE_TABLE(of, mtk_iommu_of_ids); + + static struct platform_driver mtk_iommu_driver = { + .probe = mtk_iommu_probe, +diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c +index 8a0a5e5d049f4..f1754efcfe74e 100644 +--- a/drivers/iommu/mtk_iommu_v1.c ++++ b/drivers/iommu/mtk_iommu_v1.c +@@ -600,6 +600,7 @@ static const struct of_device_id mtk_iommu_v1_of_ids[] = { + { .compatible = "mediatek,mt2701-m4u", }, + {} + }; ++MODULE_DEVICE_TABLE(of, mtk_iommu_v1_of_ids); + + static const struct component_master_ops mtk_iommu_v1_com_ops = { + .bind = mtk_iommu_v1_bind, +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 624259f79337f..e4d3741234d90 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -2485,6 +2485,7 @@ static int bind_rdev_to_array(struct md_rdev *rdev, struct mddev *mddev) + fail: + pr_warn("md: failed to register dev-%s for %s\n", + b, mdname(mddev)); ++ mddev_destroy_serial_pool(mddev, rdev, false); + return err; + } + +diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c +index dbbf7db4ff2f4..c290e849b2ed8 100644 +--- a/drivers/misc/eeprom/at24.c ++++ b/drivers/misc/eeprom/at24.c +@@ -581,6 +581,31 @@ static unsigned int at24_get_offset_adj(u8 flags, unsigned int byte_len) + } + } + ++static void at24_probe_temp_sensor(struct i2c_client *client) ++{ ++ struct at24_data *at24 = i2c_get_clientdata(client); ++ struct i2c_board_info info = { .type = "jc42" }; ++ int ret; ++ u8 val; ++ ++ /* ++ * Byte 2 has value 11 for DDR3, earlier versions don't ++ * support the thermal sensor present flag ++ */ ++ ret = at24_read(at24, 2, &val, 1); ++ if (ret || val != 11) ++ return; ++ ++ /* Byte 32, bit 7 is set if temp sensor is present */ ++ ret = at24_read(at24, 32, &val, 1); ++ if (ret || !(val & BIT(7))) ++ return; ++ ++ info.addr = 0x18 | (client->addr & 7); ++ ++ 
i2c_new_client_device(client->adapter, &info); ++} ++ + static int at24_probe(struct i2c_client *client) + { + struct regmap_config regmap_config = { }; +@@ -756,15 +781,6 @@ static int at24_probe(struct i2c_client *client) + } + pm_runtime_enable(dev); + +- at24->nvmem = devm_nvmem_register(dev, &nvmem_config); +- if (IS_ERR(at24->nvmem)) { +- pm_runtime_disable(dev); +- if (!pm_runtime_status_suspended(dev)) +- regulator_disable(at24->vcc_reg); +- return dev_err_probe(dev, PTR_ERR(at24->nvmem), +- "failed to register nvmem\n"); +- } +- + /* + * Perform a one-byte test read to verify that the chip is functional, + * unless powering on the device is to be avoided during probe (i.e. +@@ -780,6 +796,19 @@ static int at24_probe(struct i2c_client *client) + } + } + ++ at24->nvmem = devm_nvmem_register(dev, &nvmem_config); ++ if (IS_ERR(at24->nvmem)) { ++ pm_runtime_disable(dev); ++ if (!pm_runtime_status_suspended(dev)) ++ regulator_disable(at24->vcc_reg); ++ return dev_err_probe(dev, PTR_ERR(at24->nvmem), ++ "failed to register nvmem\n"); ++ } ++ ++ /* If this a SPD EEPROM, probe for DDR3 thermal sensor */ ++ if (cdata == &at24_data_spd) ++ at24_probe_temp_sensor(client); ++ + pm_runtime_idle(dev); + + if (writable) +diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h +index 3390ff5111033..d3c03d4edbeff 100644 +--- a/drivers/misc/mei/hw-me-regs.h ++++ b/drivers/misc/mei/hw-me-regs.h +@@ -115,6 +115,8 @@ + #define MEI_DEV_ID_ARL_S 0x7F68 /* Arrow Lake Point S */ + #define MEI_DEV_ID_ARL_H 0x7770 /* Arrow Lake Point H */ + ++#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */ ++ + /* + * MEI HW Section + */ +diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c +index bd4e3df44865e..3c2c28c8ba30a 100644 +--- a/drivers/misc/mei/pci-me.c ++++ b/drivers/misc/mei/pci-me.c +@@ -122,6 +122,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = { + {MEI_PCI_DEVICE(MEI_DEV_ID_ARL_S, MEI_ME_PCH15_CFG)}, + 
{MEI_PCI_DEVICE(MEI_DEV_ID_ARL_H, MEI_ME_PCH15_CFG)}, + ++ {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)}, ++ + /* required last entry */ + {0, } + }; +diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c +index 9bd661be3ae93..5c32208b17a1d 100644 +--- a/drivers/mtd/mtdcore.c ++++ b/drivers/mtd/mtdcore.c +@@ -552,6 +552,7 @@ static int mtd_nvmem_add(struct mtd_info *mtd) + config.dev = &mtd->dev; + config.name = dev_name(&mtd->dev); + config.owner = THIS_MODULE; ++ config.add_legacy_fixed_of_cells = of_device_is_compatible(node, "nvmem-cells"); + config.reg_read = mtd_nvmem_reg_read; + config.size = mtd->size; + config.word_size = 1; +@@ -898,6 +899,7 @@ static struct nvmem_device *mtd_otp_nvmem_register(struct mtd_info *mtd, + config.name = compatible; + config.id = NVMEM_DEVID_AUTO; + config.owner = THIS_MODULE; ++ config.add_legacy_fixed_of_cells = !mtd_type_is_nand(mtd); + config.type = NVMEM_TYPE_OTP; + config.root_only = true; + config.ignore_wp = true; +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c +index 8556502f06721..e5bac87941f61 100644 +--- a/drivers/net/dsa/mv88e6xxx/chip.c ++++ b/drivers/net/dsa/mv88e6xxx/chip.c +@@ -697,6 +697,18 @@ static void mv88e6352_phylink_get_caps(struct mv88e6xxx_chip *chip, int port, + } + } + ++static void mv88e632x_phylink_get_caps(struct mv88e6xxx_chip *chip, int port, ++ struct phylink_config *config) ++{ ++ unsigned long *supported = config->supported_interfaces; ++ ++ /* Translate the default cmode */ ++ mv88e6xxx_translate_cmode(chip->ports[port].cmode, supported); ++ ++ config->mac_capabilities = MAC_SYM_PAUSE | MAC_10 | MAC_100 | ++ MAC_1000FD; ++} ++ + static void mv88e6341_phylink_get_caps(struct mv88e6xxx_chip *chip, int port, + struct phylink_config *config) + { +@@ -4976,7 +4988,7 @@ static const struct mv88e6xxx_ops mv88e6320_ops = { + .gpio_ops = &mv88e6352_gpio_ops, + .avb_ops = &mv88e6352_avb_ops, + .ptp_ops = &mv88e6352_ptp_ops, +- .phylink_get_caps = 
mv88e6185_phylink_get_caps, ++ .phylink_get_caps = mv88e632x_phylink_get_caps, + }; + + static const struct mv88e6xxx_ops mv88e6321_ops = { +@@ -5022,7 +5034,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = { + .gpio_ops = &mv88e6352_gpio_ops, + .avb_ops = &mv88e6352_avb_ops, + .ptp_ops = &mv88e6352_ptp_ops, +- .phylink_get_caps = mv88e6185_phylink_get_caps, ++ .phylink_get_caps = mv88e632x_phylink_get_caps, + }; + + static const struct mv88e6xxx_ops mv88e6341_ops = { +@@ -5588,7 +5600,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = { + .prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6141, + .family = MV88E6XXX_FAMILY_6341, + .name = "Marvell 88E6141", +- .num_databases = 4096, ++ .num_databases = 256, + .num_macs = 2048, + .num_ports = 6, + .num_internal_phys = 5, +@@ -6047,7 +6059,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = { + .prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6341, + .family = MV88E6XXX_FAMILY_6341, + .name = "Marvell 88E6341", +- .num_databases = 4096, ++ .num_databases = 256, + .num_macs = 2048, + .num_internal_phys = 5, + .num_ports = 6, +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +index 89c8ddc6565ae..79d096a371ae7 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +@@ -2,7 +2,7 @@ + /* + * Broadcom GENET (Gigabit Ethernet) controller driver + * +- * Copyright (c) 2014-2020 Broadcom ++ * Copyright (c) 2014-2024 Broadcom + */ + + #define pr_fmt(fmt) "bcmgenet: " fmt +@@ -2469,14 +2469,18 @@ static void umac_enable_set(struct bcmgenet_priv *priv, u32 mask, bool enable) + { + u32 reg; + ++ spin_lock_bh(&priv->reg_lock); + reg = bcmgenet_umac_readl(priv, UMAC_CMD); +- if (reg & CMD_SW_RESET) ++ if (reg & CMD_SW_RESET) { ++ spin_unlock_bh(&priv->reg_lock); + return; ++ } + if (enable) + reg |= mask; + else + reg &= ~mask; + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ 
spin_unlock_bh(&priv->reg_lock); + + /* UniMAC stops on a packet boundary, wait for a full-size packet + * to be processed +@@ -2492,8 +2496,10 @@ static void reset_umac(struct bcmgenet_priv *priv) + udelay(10); + + /* issue soft reset and disable MAC while updating its registers */ ++ spin_lock_bh(&priv->reg_lock); + bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD); + udelay(2); ++ spin_unlock_bh(&priv->reg_lock); + } + + static void bcmgenet_intr_disable(struct bcmgenet_priv *priv) +@@ -3299,7 +3305,7 @@ static void bcmgenet_get_hw_addr(struct bcmgenet_priv *priv, + } + + /* Returns a reusable dma control register value */ +-static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv) ++static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv, bool flush_rx) + { + unsigned int i; + u32 reg; +@@ -3324,6 +3330,14 @@ static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv) + udelay(10); + bcmgenet_umac_writel(priv, 0, UMAC_TX_FLUSH); + ++ if (flush_rx) { ++ reg = bcmgenet_rbuf_ctrl_get(priv); ++ bcmgenet_rbuf_ctrl_set(priv, reg | BIT(0)); ++ udelay(10); ++ bcmgenet_rbuf_ctrl_set(priv, reg); ++ udelay(10); ++ } ++ + return dma_ctrl; + } + +@@ -3345,7 +3359,9 @@ static void bcmgenet_netif_start(struct net_device *dev) + struct bcmgenet_priv *priv = netdev_priv(dev); + + /* Start the network engine */ ++ netif_addr_lock_bh(dev); + bcmgenet_set_rx_mode(dev); ++ netif_addr_unlock_bh(dev); + bcmgenet_enable_rx_napi(priv); + + umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, true); +@@ -3387,8 +3403,8 @@ static int bcmgenet_open(struct net_device *dev) + + bcmgenet_set_hw_addr(priv, dev->dev_addr); + +- /* Disable RX/TX DMA and flush TX queues */ +- dma_ctrl = bcmgenet_dma_disable(priv); ++ /* Disable RX/TX DMA and flush TX and RX queues */ ++ dma_ctrl = bcmgenet_dma_disable(priv, true); + + /* Reinitialize TDMA and RDMA and SW housekeeping */ + ret = bcmgenet_init_dma(priv); +@@ -3606,16 +3622,19 @@ static void bcmgenet_set_rx_mode(struct net_device *dev) + * 3. 
The number of filters needed exceeds the number filters + * supported by the hardware. + */ ++ spin_lock(&priv->reg_lock); + reg = bcmgenet_umac_readl(priv, UMAC_CMD); + if ((dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) || + (nfilter > MAX_MDF_FILTER)) { + reg |= CMD_PROMISC; + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ spin_unlock(&priv->reg_lock); + bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL); + return; + } else { + reg &= ~CMD_PROMISC; + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ spin_unlock(&priv->reg_lock); + } + + /* update MDF filter */ +@@ -4017,6 +4036,7 @@ static int bcmgenet_probe(struct platform_device *pdev) + goto err; + } + ++ spin_lock_init(&priv->reg_lock); + spin_lock_init(&priv->lock); + + /* Set default pause parameters */ +@@ -4259,7 +4279,7 @@ static int bcmgenet_resume(struct device *d) + bcmgenet_hfb_create_rxnfc_filter(priv, rule); + + /* Disable RX/TX DMA and flush TX queues */ +- dma_ctrl = bcmgenet_dma_disable(priv); ++ dma_ctrl = bcmgenet_dma_disable(priv, false); + + /* Reinitialize TDMA and RDMA and SW housekeeping */ + ret = bcmgenet_init_dma(priv); +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h +index 1985c0ec4da2a..28e2c94ef835c 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h +@@ -1,6 +1,6 @@ + /* SPDX-License-Identifier: GPL-2.0-only */ + /* +- * Copyright (c) 2014-2020 Broadcom ++ * Copyright (c) 2014-2024 Broadcom + */ + + #ifndef __BCMGENET_H__ +@@ -573,6 +573,8 @@ struct bcmgenet_rxnfc_rule { + /* device context */ + struct bcmgenet_priv { + void __iomem *base; ++ /* reg_lock: lock to serialize access to shared registers */ ++ spinlock_t reg_lock; + enum bcmgenet_version version; + struct net_device *dev; + +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c +index 7a41cad5788f4..1248792d7fd4d 100644 +--- 
a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c +@@ -2,7 +2,7 @@ + /* + * Broadcom GENET (Gigabit Ethernet) Wake-on-LAN support + * +- * Copyright (c) 2014-2020 Broadcom ++ * Copyright (c) 2014-2024 Broadcom + */ + + #define pr_fmt(fmt) "bcmgenet_wol: " fmt +@@ -151,6 +151,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv, + } + + /* Can't suspend with WoL if MAC is still in reset */ ++ spin_lock_bh(&priv->reg_lock); + reg = bcmgenet_umac_readl(priv, UMAC_CMD); + if (reg & CMD_SW_RESET) + reg &= ~CMD_SW_RESET; +@@ -158,6 +159,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv, + /* disable RX */ + reg &= ~CMD_RX_EN; + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ spin_unlock_bh(&priv->reg_lock); + mdelay(10); + + if (priv->wolopts & (WAKE_MAGIC | WAKE_MAGICSECURE)) { +@@ -203,6 +205,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv, + } + + /* Enable CRC forward */ ++ spin_lock_bh(&priv->reg_lock); + reg = bcmgenet_umac_readl(priv, UMAC_CMD); + priv->crc_fwd_en = 1; + reg |= CMD_CRC_FWD; +@@ -210,6 +213,7 @@ int bcmgenet_wol_power_down_cfg(struct bcmgenet_priv *priv, + /* Receiver must be enabled for WOL MP detection */ + reg |= CMD_RX_EN; + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ spin_unlock_bh(&priv->reg_lock); + + reg = UMAC_IRQ_MPD_R; + if (hfb_enable) +@@ -256,7 +260,9 @@ void bcmgenet_wol_power_up_cfg(struct bcmgenet_priv *priv, + } + + /* Disable CRC Forward */ ++ spin_lock_bh(&priv->reg_lock); + reg = bcmgenet_umac_readl(priv, UMAC_CMD); + reg &= ~CMD_CRC_FWD; + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ spin_unlock_bh(&priv->reg_lock); + } +diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c +index 97ea76d443abe..e7c659cd39746 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c ++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c +@@ -2,7 +2,7 @@ + /* + * Broadcom GENET MDIO routines + * 
+- * Copyright (c) 2014-2017 Broadcom ++ * Copyright (c) 2014-2024 Broadcom + */ + + #include +@@ -75,6 +75,7 @@ static void bcmgenet_mac_config(struct net_device *dev) + reg |= RGMII_LINK; + bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL); + ++ spin_lock_bh(&priv->reg_lock); + reg = bcmgenet_umac_readl(priv, UMAC_CMD); + reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) | + CMD_HD_EN | +@@ -87,6 +88,7 @@ static void bcmgenet_mac_config(struct net_device *dev) + reg |= CMD_TX_EN | CMD_RX_EN; + } + bcmgenet_umac_writel(priv, reg, UMAC_CMD); ++ spin_unlock_bh(&priv->reg_lock); + + priv->eee.eee_active = phy_init_eee(phydev, 0) >= 0; + bcmgenet_eee_enable_set(dev, +@@ -274,6 +276,7 @@ int bcmgenet_mii_config(struct net_device *dev, bool init) + * block for the interface to work, unconditionally clear the + * Out-of-band disable since we do not need it. + */ ++ mutex_lock(&phydev->lock); + reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL); + reg &= ~OOB_DISABLE; + if (priv->ext_phy) { +@@ -285,6 +288,7 @@ int bcmgenet_mii_config(struct net_device *dev, bool init) + reg |= RGMII_MODE_EN; + } + bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL); ++ mutex_unlock(&phydev->lock); + + if (init) + dev_info(kdev, "configuring instance for %s\n", phy_name); +diff --git a/drivers/net/ethernet/brocade/bna/bnad_debugfs.c b/drivers/net/ethernet/brocade/bna/bnad_debugfs.c +index 7246e13dd559f..97291bfbeea58 100644 +--- a/drivers/net/ethernet/brocade/bna/bnad_debugfs.c ++++ b/drivers/net/ethernet/brocade/bna/bnad_debugfs.c +@@ -312,7 +312,7 @@ bnad_debugfs_write_regrd(struct file *file, const char __user *buf, + void *kern_buf; + + /* Copy the user space buf */ +- kern_buf = memdup_user(buf, nbytes); ++ kern_buf = memdup_user_nul(buf, nbytes); + if (IS_ERR(kern_buf)) + return PTR_ERR(kern_buf); + +@@ -372,7 +372,7 @@ bnad_debugfs_write_regwr(struct file *file, const char __user *buf, + void *kern_buf; + + /* Copy the user space buf */ +- kern_buf = memdup_user(buf, nbytes); ++ 
kern_buf = memdup_user_nul(buf, nbytes); + if (IS_ERR(kern_buf)) + return PTR_ERR(kern_buf); + +diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c +index 98dd78551d89a..fff1ce835bc0d 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c +@@ -2684,12 +2684,12 @@ int cxgb4_selftest_lb_pkt(struct net_device *netdev) + lb->loopback = 1; + + q = &adap->sge.ethtxq[pi->first_qset]; +- __netif_tx_lock(q->txq, smp_processor_id()); ++ __netif_tx_lock_bh(q->txq); + + reclaim_completed_tx(adap, &q->q, -1, true); + credits = txq_avail(&q->q) - ndesc; + if (unlikely(credits < 0)) { +- __netif_tx_unlock(q->txq); ++ __netif_tx_unlock_bh(q->txq); + return -ENOMEM; + } + +@@ -2724,7 +2724,7 @@ int cxgb4_selftest_lb_pkt(struct net_device *netdev) + init_completion(&lb->completion); + txq_advance(&q->q, ndesc); + cxgb4_ring_tx_db(adap, &q->q, ndesc); +- __netif_tx_unlock(q->txq); ++ __netif_tx_unlock_bh(q->txq); + + /* wait for the pkt to return */ + ret = wait_for_completion_timeout(&lb->completion, 10 * HZ); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h +index aaf1f42624a79..57787c380fa07 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h +@@ -890,7 +890,7 @@ struct hnae3_handle { + struct hnae3_roce_private_info rinfo; + }; + +- u32 numa_node_mask; /* for multi-chip support */ ++ nodemask_t numa_node_mask; /* for multi-chip support */ + + enum hnae3_port_base_vlan_state port_base_vlan_state; + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index dfd0c5f4cb9f5..14713454e0d82 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -1526,6 +1526,9 @@ static int hclge_configure(struct hclge_dev *hdev) + 
cfg.default_speed, ret); + return ret; + } ++ hdev->hw.mac.req_speed = hdev->hw.mac.speed; ++ hdev->hw.mac.req_autoneg = AUTONEG_ENABLE; ++ hdev->hw.mac.req_duplex = DUPLEX_FULL; + + hclge_parse_link_mode(hdev, cfg.speed_ability); + +@@ -1755,7 +1758,8 @@ static int hclge_vport_setup(struct hclge_vport *vport, u16 num_tqps) + + nic->pdev = hdev->pdev; + nic->ae_algo = &ae_algo; +- nic->numa_node_mask = hdev->numa_node_mask; ++ bitmap_copy(nic->numa_node_mask.bits, hdev->numa_node_mask.bits, ++ MAX_NUMNODES); + nic->kinfo.io_base = hdev->hw.hw.io_base; + + ret = hclge_knic_setup(vport, num_tqps, +@@ -2447,7 +2451,8 @@ static int hclge_init_roce_base_info(struct hclge_vport *vport) + + roce->pdev = nic->pdev; + roce->ae_algo = nic->ae_algo; +- roce->numa_node_mask = nic->numa_node_mask; ++ bitmap_copy(roce->numa_node_mask.bits, nic->numa_node_mask.bits, ++ MAX_NUMNODES); + + return 0; + } +@@ -3331,9 +3336,9 @@ hclge_set_phy_link_ksettings(struct hnae3_handle *handle, + return ret; + } + +- hdev->hw.mac.autoneg = cmd->base.autoneg; +- hdev->hw.mac.speed = cmd->base.speed; +- hdev->hw.mac.duplex = cmd->base.duplex; ++ hdev->hw.mac.req_autoneg = cmd->base.autoneg; ++ hdev->hw.mac.req_speed = cmd->base.speed; ++ hdev->hw.mac.req_duplex = cmd->base.duplex; + linkmode_copy(hdev->hw.mac.advertising, cmd->link_modes.advertising); + + return 0; +@@ -3366,9 +3371,9 @@ static int hclge_tp_port_init(struct hclge_dev *hdev) + if (!hnae3_dev_phy_imp_supported(hdev)) + return 0; + +- cmd.base.autoneg = hdev->hw.mac.autoneg; +- cmd.base.speed = hdev->hw.mac.speed; +- cmd.base.duplex = hdev->hw.mac.duplex; ++ cmd.base.autoneg = hdev->hw.mac.req_autoneg; ++ cmd.base.speed = hdev->hw.mac.req_speed; ++ cmd.base.duplex = hdev->hw.mac.req_duplex; + linkmode_copy(cmd.link_modes.advertising, hdev->hw.mac.advertising); + + return hclge_set_phy_link_ksettings(&hdev->vport->nic, &cmd); +@@ -7940,8 +7945,7 @@ static void hclge_set_timer_task(struct hnae3_handle *handle, bool enable) + /* Set 
the DOWN flag here to disable link updating */ + set_bit(HCLGE_STATE_DOWN, &hdev->state); + +- /* flush memory to make sure DOWN is seen by service task */ +- smp_mb__before_atomic(); ++ smp_mb__after_atomic(); /* flush memory to make sure DOWN is seen by service task */ + hclge_flush_link_update(hdev); + } + } +@@ -9894,6 +9898,7 @@ static int hclge_set_vlan_protocol_type(struct hclge_dev *hdev) + static int hclge_init_vlan_filter(struct hclge_dev *hdev) + { + struct hclge_vport *vport; ++ bool enable = true; + int ret; + int i; + +@@ -9913,8 +9918,12 @@ static int hclge_init_vlan_filter(struct hclge_dev *hdev) + vport->cur_vlan_fltr_en = true; + } + ++ if (test_bit(HNAE3_DEV_SUPPORT_VLAN_FLTR_MDF_B, hdev->ae_dev->caps) && ++ !test_bit(HNAE3_DEV_SUPPORT_PORT_VLAN_BYPASS_B, hdev->ae_dev->caps)) ++ enable = false; ++ + return hclge_set_vlan_filter_ctrl(hdev, HCLGE_FILTER_TYPE_PORT, +- HCLGE_FILTER_FE_INGRESS, true, 0); ++ HCLGE_FILTER_FE_INGRESS, enable, 0); + } + + static int hclge_init_vlan_type(struct hclge_dev *hdev) +@@ -11610,16 +11619,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) + if (ret) + goto out; + +- ret = hclge_devlink_init(hdev); +- if (ret) +- goto err_pci_uninit; +- +- devl_lock(hdev->devlink); +- + /* Firmware command queue initialize */ + ret = hclge_comm_cmd_queue_init(hdev->pdev, &hdev->hw.hw); + if (ret) +- goto err_devlink_uninit; ++ goto err_pci_uninit; + + /* Firmware command initialize */ + ret = hclge_comm_cmd_init(hdev->ae_dev, &hdev->hw.hw, &hdev->fw_version, +@@ -11747,7 +11750,7 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) + + ret = hclge_update_port_info(hdev); + if (ret) +- goto err_mdiobus_unreg; ++ goto err_ptp_uninit; + + INIT_KFIFO(hdev->mac_tnl_log); + +@@ -11787,6 +11790,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) + dev_warn(&pdev->dev, + "failed to wake on lan init, ret = %d\n", ret); + ++ ret = hclge_devlink_init(hdev); ++ if (ret) ++ goto err_ptp_uninit; ++ + 
hclge_state_init(hdev); + hdev->last_reset_time = jiffies; + +@@ -11794,10 +11801,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) + HCLGE_DRIVER_NAME); + + hclge_task_schedule(hdev, round_jiffies_relative(HZ)); +- +- devl_unlock(hdev->devlink); + return 0; + ++err_ptp_uninit: ++ hclge_ptp_uninit(hdev); + err_mdiobus_unreg: + if (hdev->hw.mac.phydev) + mdiobus_unregister(hdev->hw.mac.mdio_bus); +@@ -11807,9 +11814,6 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) + pci_free_irq_vectors(pdev); + err_cmd_uninit: + hclge_comm_cmd_uninit(hdev->ae_dev, &hdev->hw.hw); +-err_devlink_uninit: +- devl_unlock(hdev->devlink); +- hclge_devlink_uninit(hdev); + err_pci_uninit: + pcim_iounmap(pdev, hdev->hw.hw.io_base); + pci_release_regions(pdev); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +index 7bc2049b723da..76a5edfe7d2e5 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +@@ -263,11 +263,14 @@ struct hclge_mac { + u8 media_type; /* port media type, e.g. fibre/copper/backplane */ + u8 mac_addr[ETH_ALEN]; + u8 autoneg; ++ u8 req_autoneg; + u8 duplex; ++ u8 req_duplex; + u8 support_autoneg; + u8 speed_type; /* 0: sfp speed, 1: active speed */ + u8 lane_num; + u32 speed; ++ u32 req_speed; + u32 max_speed; + u32 speed_ability; /* speed ability supported by current media */ + u32 module_type; /* sub media type, e.g. 
kr/cr/sr/lr */ +@@ -875,7 +878,7 @@ struct hclge_dev { + + u16 fdir_pf_filter_count; /* Num of guaranteed filters for this PF */ + u16 num_alloc_vport; /* Num vports this driver supports */ +- u32 numa_node_mask; ++ nodemask_t numa_node_mask; + u16 rx_buf_len; + u16 num_tx_desc; /* desc num of per tx queue */ + u16 num_rx_desc; /* desc num of per rx queue */ +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index 04ff9bf121853..877feee53804f 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -1077,12 +1077,13 @@ static void hclge_mbx_request_handling(struct hclge_mbx_ops_param *param) + + hdev = param->vport->back; + cmd_func = hclge_mbx_ops_list[param->req->msg.code]; +- if (cmd_func) +- ret = cmd_func(param); +- else ++ if (!cmd_func) { + dev_err(&hdev->pdev->dev, + "un-supported mailbox message, code = %u\n", + param->req->msg.code); ++ return; ++ } ++ ret = cmd_func(param); + + /* PF driver should not reply IMP */ + if (hnae3_get_bit(param->req->mbx_need_resp, HCLGE_MBX_NEED_RESP_B) && +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +index 0aa9beefd1c7e..43ee20eb03d1f 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +@@ -412,7 +412,8 @@ static int hclgevf_set_handle_info(struct hclgevf_dev *hdev) + + nic->ae_algo = &ae_algovf; + nic->pdev = hdev->pdev; +- nic->numa_node_mask = hdev->numa_node_mask; ++ bitmap_copy(nic->numa_node_mask.bits, hdev->numa_node_mask.bits, ++ MAX_NUMNODES); + nic->flags |= HNAE3_SUPPORT_VF; + nic->kinfo.io_base = hdev->hw.hw.io_base; + +@@ -2082,8 +2083,8 @@ static int hclgevf_init_roce_base_info(struct hclgevf_dev *hdev) + + roce->pdev = nic->pdev; + roce->ae_algo = nic->ae_algo; +- roce->numa_node_mask = 
nic->numa_node_mask; +- ++ bitmap_copy(roce->numa_node_mask.bits, nic->numa_node_mask.bits, ++ MAX_NUMNODES); + return 0; + } + +@@ -2180,8 +2181,7 @@ static void hclgevf_set_timer_task(struct hnae3_handle *handle, bool enable) + } else { + set_bit(HCLGEVF_STATE_DOWN, &hdev->state); + +- /* flush memory to make sure DOWN is seen by service task */ +- smp_mb__before_atomic(); ++ smp_mb__after_atomic(); /* flush memory to make sure DOWN is seen by service task */ + hclgevf_flush_link_update(hdev); + } + } +@@ -2845,10 +2845,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev) + if (ret) + return ret; + +- ret = hclgevf_devlink_init(hdev); +- if (ret) +- goto err_devlink_init; +- + ret = hclge_comm_cmd_queue_init(hdev->pdev, &hdev->hw.hw); + if (ret) + goto err_cmd_queue_init; +@@ -2941,6 +2937,10 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev) + + hclgevf_init_rxd_adv_layout(hdev); + ++ ret = hclgevf_devlink_init(hdev); ++ if (ret) ++ goto err_config; ++ + set_bit(HCLGEVF_STATE_SERVICE_INITED, &hdev->state); + + hdev->last_reset_time = jiffies; +@@ -2960,8 +2960,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev) + err_cmd_init: + hclge_comm_cmd_uninit(hdev->ae_dev, &hdev->hw.hw); + err_cmd_queue_init: +- hclgevf_devlink_uninit(hdev); +-err_devlink_init: + hclgevf_pci_uninit(hdev); + clear_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state); + return ret; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h +index a73f2bf3a56a6..cccef32284616 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h +@@ -236,7 +236,7 @@ struct hclgevf_dev { + u16 rss_size_max; /* HW defined max RSS task queue */ + + u16 num_alloc_vport; /* num vports this driver supports */ +- u32 numa_node_mask; ++ nodemask_t numa_node_mask; + u16 rx_buf_len; + u16 num_tx_desc; /* desc num of per tx queue */ + u16 num_rx_desc; 
/* desc num of per rx queue */ +diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c +index 395746bcf8f7c..8bf44103fb910 100644 +--- a/drivers/net/ethernet/intel/e1000e/phy.c ++++ b/drivers/net/ethernet/intel/e1000e/phy.c +@@ -157,7 +157,7 @@ s32 e1000e_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data) + * the lower time out + */ + for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) { +- usleep_range(50, 60); ++ udelay(50); + mdic = er32(MDIC); + if (mdic & E1000_MDIC_READY) + break; +@@ -181,7 +181,7 @@ s32 e1000e_read_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 *data) + * reading duplicate data in the next MDIC transaction. + */ + if (hw->mac.type == e1000_pch2lan) +- usleep_range(100, 150); ++ udelay(100); + + if (success) { + *data = (u16)mdic; +@@ -237,7 +237,7 @@ s32 e1000e_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 data) + * the lower time out + */ + for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) { +- usleep_range(50, 60); ++ udelay(50); + mdic = er32(MDIC); + if (mdic & E1000_MDIC_READY) + break; +@@ -261,7 +261,7 @@ s32 e1000e_write_phy_reg_mdic(struct e1000_hw *hw, u32 offset, u16 data) + * reading duplicate data in the next MDIC transaction. 
+ */ + if (hw->mac.type == e1000_pch2lan) +- usleep_range(100, 150); ++ udelay(100); + + if (success) + return 0; +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +index d30e84803481d..feca86e429df2 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c ++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +@@ -999,12 +999,10 @@ static ssize_t rvu_dbg_qsize_write(struct file *filp, + u16 pcifunc; + int ret, lf; + +- cmd_buf = memdup_user(buffer, count + 1); ++ cmd_buf = memdup_user_nul(buffer, count); + if (IS_ERR(cmd_buf)) + return -ENOMEM; + +- cmd_buf[count] = '\0'; +- + cmd_buf_tmp = strchr(cmd_buf, '\n'); + if (cmd_buf_tmp) { + *cmd_buf_tmp = '\0'; +diff --git a/drivers/net/ethernet/micrel/ks8851_common.c b/drivers/net/ethernet/micrel/ks8851_common.c +index d4cdf3d4f5525..502518cdb4618 100644 +--- a/drivers/net/ethernet/micrel/ks8851_common.c ++++ b/drivers/net/ethernet/micrel/ks8851_common.c +@@ -234,12 +234,13 @@ static void ks8851_dbg_dumpkkt(struct ks8851_net *ks, u8 *rxpkt) + /** + * ks8851_rx_pkts - receive packets from the host + * @ks: The device information. ++ * @rxq: Queue of packets received in this function. + * + * This is called from the IRQ work queue when the system detects that there + * are packets in the receive queue. Find out how many packets there are and + * read them from the FIFO. 
+ */ +-static void ks8851_rx_pkts(struct ks8851_net *ks) ++static void ks8851_rx_pkts(struct ks8851_net *ks, struct sk_buff_head *rxq) + { + struct sk_buff *skb; + unsigned rxfc; +@@ -299,7 +300,7 @@ static void ks8851_rx_pkts(struct ks8851_net *ks) + ks8851_dbg_dumpkkt(ks, rxpkt); + + skb->protocol = eth_type_trans(skb, ks->netdev); +- __netif_rx(skb); ++ __skb_queue_tail(rxq, skb); + + ks->netdev->stats.rx_packets++; + ks->netdev->stats.rx_bytes += rxlen; +@@ -326,11 +327,11 @@ static void ks8851_rx_pkts(struct ks8851_net *ks) + static irqreturn_t ks8851_irq(int irq, void *_ks) + { + struct ks8851_net *ks = _ks; ++ struct sk_buff_head rxq; + unsigned handled = 0; + unsigned long flags; + unsigned int status; +- +- local_bh_disable(); ++ struct sk_buff *skb; + + ks8851_lock(ks, &flags); + +@@ -384,7 +385,8 @@ static irqreturn_t ks8851_irq(int irq, void *_ks) + * from the device so do not bother masking just the RX + * from the device. */ + +- ks8851_rx_pkts(ks); ++ __skb_queue_head_init(&rxq); ++ ks8851_rx_pkts(ks, &rxq); + } + + /* if something stopped the rx process, probably due to wanting +@@ -408,7 +410,9 @@ static irqreturn_t ks8851_irq(int irq, void *_ks) + if (status & IRQ_LCI) + mii_check_link(&ks->mii); + +- local_bh_enable(); ++ if (status & IRQ_RXI) ++ while ((skb = __skb_dequeue(&rxq))) ++ netif_rx(skb); + + return IRQ_HANDLED; + } +diff --git a/drivers/net/ethernet/qlogic/qede/qede_filter.c b/drivers/net/ethernet/qlogic/qede/qede_filter.c +index a5ac21a0ee33f..cb6b33a228ea2 100644 +--- a/drivers/net/ethernet/qlogic/qede/qede_filter.c ++++ b/drivers/net/ethernet/qlogic/qede/qede_filter.c +@@ -1868,8 +1868,8 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto, + struct flow_cls_offload *f) + { + struct qede_arfs_fltr_node *n; +- int min_hlen, rc = -EINVAL; + struct qede_arfs_tuple t; ++ int min_hlen, rc; + + __qede_lock(edev); + +@@ -1879,7 +1879,8 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto, + } + + /* parse 
flower attribute and prepare filter */ +- if (qede_parse_flow_attr(edev, proto, f->rule, &t)) ++ rc = qede_parse_flow_attr(edev, proto, f->rule, &t); ++ if (rc) + goto unlock; + + /* Validate profile mode and number of filters */ +@@ -1888,11 +1889,13 @@ int qede_add_tc_flower_fltr(struct qede_dev *edev, __be16 proto, + DP_NOTICE(edev, + "Filter configuration invalidated, filter mode=0x%x, configured mode=0x%x, filter count=0x%x\n", + t.mode, edev->arfs->mode, edev->arfs->filter_count); ++ rc = -EINVAL; + goto unlock; + } + + /* parse tc actions and get the vf_id */ +- if (qede_parse_actions(edev, &f->rule->action, f->common.extack)) ++ rc = qede_parse_actions(edev, &f->rule->action, f->common.extack); ++ if (rc) + goto unlock; + + if (qede_flow_find_fltr(edev, &t)) { +@@ -1998,10 +2001,9 @@ static int qede_flow_spec_to_rule(struct qede_dev *edev, + if (IS_ERR(flow)) + return PTR_ERR(flow); + +- if (qede_parse_flow_attr(edev, proto, flow->rule, t)) { +- err = -EINVAL; ++ err = qede_parse_flow_attr(edev, proto, flow->rule, t); ++ if (err) + goto err_out; +- } + + /* Make sure location is valid and filter isn't already set */ + err = qede_flow_spec_validate(edev, &flow->rule->action, t, +diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c +index 4f9658a741024..b2f27e505f76c 100644 +--- a/drivers/net/hyperv/netvsc.c ++++ b/drivers/net/hyperv/netvsc.c +@@ -154,8 +154,11 @@ static void free_netvsc_device(struct rcu_head *head) + int i; + + kfree(nvdev->extension); +- vfree(nvdev->recv_buf); +- vfree(nvdev->send_buf); ++ ++ if (!nvdev->recv_buf_gpadl_handle.decrypted) ++ vfree(nvdev->recv_buf); ++ if (!nvdev->send_buf_gpadl_handle.decrypted) ++ vfree(nvdev->send_buf); + bitmap_free(nvdev->send_section_map); + + for (i = 0; i < VRSS_CHANNEL_MAX; i++) { +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index e2e181378f412..edc34402e787f 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -1431,6 +1431,7 @@ 
static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */ + {QMI_QUIRK_SET_DTR(0x1546, 0x1312, 4)}, /* u-blox LARA-R6 01B */ + {QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)}, /* u-blox LARA-L6 */ ++ {QMI_QUIRK_SET_DTR(0x33f8, 0x0104, 4)}, /* Rolling RW101 RMNET */ + + /* 4. Gobi 1000 devices */ + {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */ +diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c +index ecdf0276004f9..f98069920e27f 100644 +--- a/drivers/net/vxlan/vxlan_core.c ++++ b/drivers/net/vxlan/vxlan_core.c +@@ -1674,6 +1674,7 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb) + bool raw_proto = false; + void *oiph; + __be32 vni = 0; ++ int nh; + + /* Need UDP and VXLAN header to be present */ + if (!pskb_may_pull(skb, VXLAN_HLEN)) +@@ -1762,12 +1763,28 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb) + skb->pkt_type = PACKET_HOST; + } + +- oiph = skb_network_header(skb); ++ /* Save offset of outer header relative to skb->head, ++ * because we are going to reset the network header to the inner header ++ * and might change skb->head. ++ */ ++ nh = skb_network_header(skb) - skb->head; ++ + skb_reset_network_header(skb); + ++ if (!pskb_inet_may_pull(skb)) { ++ DEV_STATS_INC(vxlan->dev, rx_length_errors); ++ DEV_STATS_INC(vxlan->dev, rx_errors); ++ vxlan_vnifilter_count(vxlan, vni, vninode, ++ VXLAN_VNI_STATS_RX_ERRORS, 0); ++ goto drop; ++ } ++ ++ /* Get the outer header. 
*/ ++ oiph = skb->head + nh; ++ + if (!vxlan_ecn_decapsulate(vs, oiph, skb)) { +- ++vxlan->dev->stats.rx_frame_errors; +- ++vxlan->dev->stats.rx_errors; ++ DEV_STATS_INC(vxlan->dev, rx_frame_errors); ++ DEV_STATS_INC(vxlan->dev, rx_errors); + vxlan_vnifilter_count(vxlan, vni, vninode, + VXLAN_VNI_STATS_RX_ERRORS, 0); + goto drop; +@@ -1837,7 +1854,9 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni) + goto out; + + if (!pskb_may_pull(skb, arp_hdr_len(dev))) { +- dev->stats.tx_dropped++; ++ dev_core_stats_tx_dropped_inc(dev); ++ vxlan_vnifilter_count(vxlan, vni, NULL, ++ VXLAN_VNI_STATS_TX_DROPS, 0); + goto out; + } + parp = arp_hdr(skb); +@@ -1893,7 +1912,7 @@ static int arp_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni) + reply->pkt_type = PACKET_HOST; + + if (netif_rx(reply) == NET_RX_DROP) { +- dev->stats.rx_dropped++; ++ dev_core_stats_rx_dropped_inc(dev); + vxlan_vnifilter_count(vxlan, vni, NULL, + VXLAN_VNI_STATS_RX_DROPS, 0); + } +@@ -2052,7 +2071,7 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb, __be32 vni) + goto out; + + if (netif_rx(reply) == NET_RX_DROP) { +- dev->stats.rx_dropped++; ++ dev_core_stats_rx_dropped_inc(dev); + vxlan_vnifilter_count(vxlan, vni, NULL, + VXLAN_VNI_STATS_RX_DROPS, 0); + } +@@ -2371,7 +2390,7 @@ static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan, + len); + } else { + drop: +- dev->stats.rx_dropped++; ++ dev_core_stats_rx_dropped_inc(dev); + vxlan_vnifilter_count(dst_vxlan, vni, NULL, + VXLAN_VNI_STATS_RX_DROPS, 0); + } +@@ -2403,7 +2422,7 @@ static int encap_bypass_if_local(struct sk_buff *skb, struct net_device *dev, + daddr->sa.sa_family, dst_port, + vxlan->cfg.flags); + if (!dst_vxlan) { +- dev->stats.tx_errors++; ++ DEV_STATS_INC(dev, tx_errors); + vxlan_vnifilter_count(vxlan, vni, NULL, + VXLAN_VNI_STATS_TX_ERRORS, 0); + kfree_skb(skb); +@@ -2664,7 +2683,7 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, 
+ return; + + drop: +- dev->stats.tx_dropped++; ++ dev_core_stats_tx_dropped_inc(dev); + vxlan_vnifilter_count(vxlan, vni, NULL, VXLAN_VNI_STATS_TX_DROPS, 0); + dev_kfree_skb(skb); + return; +@@ -2672,11 +2691,11 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, + tx_error: + rcu_read_unlock(); + if (err == -ELOOP) +- dev->stats.collisions++; ++ DEV_STATS_INC(dev, collisions); + else if (err == -ENETUNREACH) +- dev->stats.tx_carrier_errors++; ++ DEV_STATS_INC(dev, tx_carrier_errors); + dst_release(ndst); +- dev->stats.tx_errors++; ++ DEV_STATS_INC(dev, tx_errors); + vxlan_vnifilter_count(vxlan, vni, NULL, VXLAN_VNI_STATS_TX_ERRORS, 0); + kfree_skb(skb); + } +@@ -2709,7 +2728,7 @@ static void vxlan_xmit_nh(struct sk_buff *skb, struct net_device *dev, + return; + + drop: +- dev->stats.tx_dropped++; ++ dev_core_stats_tx_dropped_inc(dev); + vxlan_vnifilter_count(netdev_priv(dev), vni, NULL, + VXLAN_VNI_STATS_TX_DROPS, 0); + dev_kfree_skb(skb); +@@ -2747,7 +2766,7 @@ static netdev_tx_t vxlan_xmit_nhid(struct sk_buff *skb, struct net_device *dev, + return NETDEV_TX_OK; + + drop: +- dev->stats.tx_dropped++; ++ dev_core_stats_tx_dropped_inc(dev); + vxlan_vnifilter_count(netdev_priv(dev), vni, NULL, + VXLAN_VNI_STATS_TX_DROPS, 0); + dev_kfree_skb(skb); +@@ -2844,7 +2863,7 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev) + !is_multicast_ether_addr(eth->h_dest)) + vxlan_fdb_miss(vxlan, eth->h_dest); + +- dev->stats.tx_dropped++; ++ dev_core_stats_tx_dropped_inc(dev); + vxlan_vnifilter_count(vxlan, vni, NULL, + VXLAN_VNI_STATS_TX_DROPS, 0); + kfree_skb(skb); +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c +index 1ccbe8c1eeb42..278454c116fee 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c +@@ -853,10 +853,15 @@ int iwl_mvm_mld_rm_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif, + + int 
iwl_mvm_mld_rm_sta_id(struct iwl_mvm *mvm, u8 sta_id) + { +- int ret = iwl_mvm_mld_rm_sta_from_fw(mvm, sta_id); ++ int ret; + + lockdep_assert_held(&mvm->mutex); + ++ if (WARN_ON(sta_id == IWL_MVM_INVALID_STA)) ++ return 0; ++ ++ ret = iwl_mvm_mld_rm_sta_from_fw(mvm, sta_id); ++ + RCU_INIT_POINTER(mvm->fw_id_to_mac_id[sta_id], NULL); + RCU_INIT_POINTER(mvm->fw_id_to_link_sta[sta_id], NULL); + return ret; +diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c +index ca74b1b63cac1..0efa304904bd3 100644 +--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c ++++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c +@@ -1588,9 +1588,9 @@ void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn, + return; + + tfd_num = iwl_txq_get_cmd_index(txq, ssn); +- read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr); + + spin_lock_bh(&txq->lock); ++ read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr); + + if (!test_bit(txq_id, trans->txqs.queue_used)) { + IWL_DEBUG_TX_QUEUES(trans, "Q %d inactive - ignoring idx %d\n", +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 012c8b3f5f9c9..2db71e222fa7d 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -3540,7 +3540,7 @@ static int nvme_init_ns_head(struct nvme_ns *ns, struct nvme_ns_info *info) + "Found shared namespace %d, but multipathing not supported.\n", + info->nsid); + dev_warn_once(ctrl->device, +- "Support for shared namespaces without CONFIG_NVME_MULTIPATH is deprecated and will be removed in Linux 6.0\n."); ++ "Support for shared namespaces without CONFIG_NVME_MULTIPATH is deprecated and will be removed in Linux 6.0.\n"); + } + } + +diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h +index ba62d42d2a8b7..176e372a5a796 100644 +--- a/drivers/nvme/host/nvme.h ++++ b/drivers/nvme/host/nvme.h +@@ -161,6 +161,11 @@ enum nvme_quirks { + * Disables simple suspend/resume path. 
+ */ + NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND = (1 << 20), ++ ++ /* ++ * MSI (but not MSI-X) interrupts are broken and never fire. ++ */ ++ NVME_QUIRK_BROKEN_MSI = (1 << 21), + }; + + /* +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index b985142fb84b9..e47172bd84efe 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -2216,6 +2216,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues) + .priv = dev, + }; + unsigned int irq_queues, poll_queues; ++ unsigned int flags = PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY; + + /* + * Poll queues don't need interrupts, but we need at least one I/O queue +@@ -2239,8 +2240,10 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues) + irq_queues = 1; + if (!(dev->ctrl.quirks & NVME_QUIRK_SINGLE_VECTOR)) + irq_queues += (nr_io_queues - poll_queues); +- return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues, +- PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd); ++ if (dev->ctrl.quirks & NVME_QUIRK_BROKEN_MSI) ++ flags &= ~PCI_IRQ_MSI; ++ return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues, flags, ++ &affd); + } + + static unsigned int nvme_max_io_queues(struct nvme_dev *dev) +@@ -2469,6 +2472,7 @@ static int nvme_pci_enable(struct nvme_dev *dev) + { + int result = -ENOMEM; + struct pci_dev *pdev = to_pci_dev(dev->dev); ++ unsigned int flags = PCI_IRQ_ALL_TYPES; + + if (pci_enable_device_mem(pdev)) + return result; +@@ -2485,7 +2489,9 @@ static int nvme_pci_enable(struct nvme_dev *dev) + * interrupts. Pre-enable a single MSIX or MSI vec for setup. We'll + * adjust this later. 
+ */ +- result = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES); ++ if (dev->ctrl.quirks & NVME_QUIRK_BROKEN_MSI) ++ flags &= ~PCI_IRQ_MSI; ++ result = pci_alloc_irq_vectors(pdev, 1, 1, flags); + if (result < 0) + goto disable; + +@@ -3382,6 +3388,8 @@ static const struct pci_device_id nvme_id_table[] = { + .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY | + NVME_QUIRK_DISABLE_WRITE_ZEROES| + NVME_QUIRK_IGNORE_DEV_SUBNQN, }, ++ { PCI_DEVICE(0x15b7, 0x5008), /* Sandisk SN530 */ ++ .driver_data = NVME_QUIRK_BROKEN_MSI }, + { PCI_DEVICE(0x1987, 0x5012), /* Phison E12 */ + .driver_data = NVME_QUIRK_BOGUS_NID, }, + { PCI_DEVICE(0x1987, 0x5016), /* Phison E16 */ +diff --git a/drivers/nvmem/apple-efuses.c b/drivers/nvmem/apple-efuses.c +index 9b7c871021043..d3d49d22338b3 100644 +--- a/drivers/nvmem/apple-efuses.c ++++ b/drivers/nvmem/apple-efuses.c +@@ -36,6 +36,7 @@ static int apple_efuses_probe(struct platform_device *pdev) + struct resource *res; + struct nvmem_config config = { + .dev = &pdev->dev, ++ .add_legacy_fixed_of_cells = true, + .read_only = true, + .reg_read = apple_efuses_read, + .stride = sizeof(u32), +diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c +index 5b3955ad40534..e26f79a132bb5 100644 +--- a/drivers/nvmem/core.c ++++ b/drivers/nvmem/core.c +@@ -1003,9 +1003,11 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) + if (rval) + goto err_remove_cells; + +- rval = nvmem_add_cells_from_legacy_of(nvmem); +- if (rval) +- goto err_remove_cells; ++ if (config->add_legacy_fixed_of_cells) { ++ rval = nvmem_add_cells_from_legacy_of(nvmem); ++ if (rval) ++ goto err_remove_cells; ++ } + + rval = nvmem_add_cells_from_fixed_layout(nvmem); + if (rval) +diff --git a/drivers/nvmem/imx-ocotp-scu.c b/drivers/nvmem/imx-ocotp-scu.c +index c38d9c1c3f486..517d83e11af2c 100644 +--- a/drivers/nvmem/imx-ocotp-scu.c ++++ b/drivers/nvmem/imx-ocotp-scu.c +@@ -220,6 +220,7 @@ static int imx_scu_ocotp_write(void *context, unsigned int offset, + + 
static struct nvmem_config imx_scu_ocotp_nvmem_config = { + .name = "imx-scu-ocotp", ++ .add_legacy_fixed_of_cells = true, + .read_only = false, + .word_size = 4, + .stride = 1, +diff --git a/drivers/nvmem/imx-ocotp.c b/drivers/nvmem/imx-ocotp.c +index e8b6f194925df..f1e202efaa497 100644 +--- a/drivers/nvmem/imx-ocotp.c ++++ b/drivers/nvmem/imx-ocotp.c +@@ -615,6 +615,7 @@ static int imx_ocotp_probe(struct platform_device *pdev) + return PTR_ERR(priv->clk); + + priv->params = of_device_get_match_data(&pdev->dev); ++ imx_ocotp_nvmem_config.add_legacy_fixed_of_cells = true; + imx_ocotp_nvmem_config.size = 4 * priv->params->nregs; + imx_ocotp_nvmem_config.dev = dev; + imx_ocotp_nvmem_config.priv = priv; +diff --git a/drivers/nvmem/meson-efuse.c b/drivers/nvmem/meson-efuse.c +index ba2714bef8d0e..33678d0af2c24 100644 +--- a/drivers/nvmem/meson-efuse.c ++++ b/drivers/nvmem/meson-efuse.c +@@ -74,6 +74,7 @@ static int meson_efuse_probe(struct platform_device *pdev) + + econfig->dev = dev; + econfig->name = dev_name(dev); ++ econfig->add_legacy_fixed_of_cells = true; + econfig->stride = 1; + econfig->word_size = 1; + econfig->reg_read = meson_efuse_read; +diff --git a/drivers/nvmem/meson-mx-efuse.c b/drivers/nvmem/meson-mx-efuse.c +index d6d7aeda31f92..3ff04d5ca8f85 100644 +--- a/drivers/nvmem/meson-mx-efuse.c ++++ b/drivers/nvmem/meson-mx-efuse.c +@@ -210,6 +210,7 @@ static int meson_mx_efuse_probe(struct platform_device *pdev) + efuse->config.owner = THIS_MODULE; + efuse->config.dev = &pdev->dev; + efuse->config.priv = efuse; ++ efuse->config.add_legacy_fixed_of_cells = true; + efuse->config.stride = drvdata->word_size; + efuse->config.word_size = drvdata->word_size; + efuse->config.size = SZ_512; +diff --git a/drivers/nvmem/microchip-otpc.c b/drivers/nvmem/microchip-otpc.c +index 436e0dc4f3375..7cf81738a3e0a 100644 +--- a/drivers/nvmem/microchip-otpc.c ++++ b/drivers/nvmem/microchip-otpc.c +@@ -261,6 +261,7 @@ static int mchp_otpc_probe(struct platform_device *pdev) + 
return ret; + + mchp_nvmem_config.dev = otpc->dev; ++ mchp_nvmem_config.add_legacy_fixed_of_cells = true; + mchp_nvmem_config.size = size; + mchp_nvmem_config.priv = otpc; + nvmem = devm_nvmem_register(&pdev->dev, &mchp_nvmem_config); +diff --git a/drivers/nvmem/mtk-efuse.c b/drivers/nvmem/mtk-efuse.c +index b36cd0dcc8c7f..87c94686cfd21 100644 +--- a/drivers/nvmem/mtk-efuse.c ++++ b/drivers/nvmem/mtk-efuse.c +@@ -83,6 +83,7 @@ static int mtk_efuse_probe(struct platform_device *pdev) + return PTR_ERR(priv->base); + + pdata = device_get_match_data(dev); ++ econfig.add_legacy_fixed_of_cells = true; + econfig.stride = 1; + econfig.word_size = 1; + econfig.reg_read = mtk_reg_read; +diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c +index 70f2d4f2efbf1..9aa8f42faa4c9 100644 +--- a/drivers/nvmem/qcom-spmi-sdam.c ++++ b/drivers/nvmem/qcom-spmi-sdam.c +@@ -142,6 +142,7 @@ static int sdam_probe(struct platform_device *pdev) + sdam->sdam_config.name = "spmi_sdam"; + sdam->sdam_config.id = NVMEM_DEVID_AUTO; + sdam->sdam_config.owner = THIS_MODULE; ++ sdam->sdam_config.add_legacy_fixed_of_cells = true; + sdam->sdam_config.stride = 1; + sdam->sdam_config.word_size = 1; + sdam->sdam_config.reg_read = sdam_read; +diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c +index 14814cba2dd65..6c554040c6e67 100644 +--- a/drivers/nvmem/qfprom.c ++++ b/drivers/nvmem/qfprom.c +@@ -357,6 +357,7 @@ static int qfprom_probe(struct platform_device *pdev) + { + struct nvmem_config econfig = { + .name = "qfprom", ++ .add_legacy_fixed_of_cells = true, + .stride = 1, + .word_size = 1, + .id = NVMEM_DEVID_AUTO, +diff --git a/drivers/nvmem/rave-sp-eeprom.c b/drivers/nvmem/rave-sp-eeprom.c +index df6a1c594b781..9ecf3873cbb76 100644 +--- a/drivers/nvmem/rave-sp-eeprom.c ++++ b/drivers/nvmem/rave-sp-eeprom.c +@@ -328,6 +328,7 @@ static int rave_sp_eeprom_probe(struct platform_device *pdev) + of_property_read_string(np, "zii,eeprom-name", &config.name); + config.priv = 
eeprom; + config.dev = dev; ++ config.add_legacy_fixed_of_cells = true; + config.size = size; + config.reg_read = rave_sp_eeprom_reg_read; + config.reg_write = rave_sp_eeprom_reg_write; +diff --git a/drivers/nvmem/rockchip-efuse.c b/drivers/nvmem/rockchip-efuse.c +index 4004c5bece423..2b40978ddb18c 100644 +--- a/drivers/nvmem/rockchip-efuse.c ++++ b/drivers/nvmem/rockchip-efuse.c +@@ -205,6 +205,7 @@ static int rockchip_rk3399_efuse_read(void *context, unsigned int offset, + + static struct nvmem_config econfig = { + .name = "rockchip-efuse", ++ .add_legacy_fixed_of_cells = true, + .stride = 1, + .word_size = 1, + .read_only = true, +diff --git a/drivers/nvmem/sc27xx-efuse.c b/drivers/nvmem/sc27xx-efuse.c +index 2210da40dfbd7..bff27011f4ff2 100644 +--- a/drivers/nvmem/sc27xx-efuse.c ++++ b/drivers/nvmem/sc27xx-efuse.c +@@ -247,6 +247,7 @@ static int sc27xx_efuse_probe(struct platform_device *pdev) + econfig.reg_read = sc27xx_efuse_read; + econfig.priv = efuse; + econfig.dev = &pdev->dev; ++ econfig.add_legacy_fixed_of_cells = true; + nvmem = devm_nvmem_register(&pdev->dev, &econfig); + if (IS_ERR(nvmem)) { + dev_err(&pdev->dev, "failed to register nvmem config\n"); +diff --git a/drivers/nvmem/sec-qfprom.c b/drivers/nvmem/sec-qfprom.c +index e48c2dc0c44b3..19799b3fe00aa 100644 +--- a/drivers/nvmem/sec-qfprom.c ++++ b/drivers/nvmem/sec-qfprom.c +@@ -47,6 +47,7 @@ static int sec_qfprom_probe(struct platform_device *pdev) + { + struct nvmem_config econfig = { + .name = "sec-qfprom", ++ .add_legacy_fixed_of_cells = true, + .stride = 1, + .word_size = 1, + .id = NVMEM_DEVID_AUTO, +diff --git a/drivers/nvmem/sprd-efuse.c b/drivers/nvmem/sprd-efuse.c +index 7e6e31db4baae..bb3105f3291fc 100644 +--- a/drivers/nvmem/sprd-efuse.c ++++ b/drivers/nvmem/sprd-efuse.c +@@ -408,6 +408,7 @@ static int sprd_efuse_probe(struct platform_device *pdev) + econfig.read_only = false; + econfig.name = "sprd-efuse"; + econfig.size = efuse->data->blk_nums * SPRD_EFUSE_BLOCK_WIDTH; ++ 
econfig.add_legacy_fixed_of_cells = true; + econfig.reg_read = sprd_efuse_read; + econfig.reg_write = sprd_efuse_write; + econfig.priv = efuse; +diff --git a/drivers/nvmem/stm32-romem.c b/drivers/nvmem/stm32-romem.c +index 0f84044bd1ade..1541c20709d25 100644 +--- a/drivers/nvmem/stm32-romem.c ++++ b/drivers/nvmem/stm32-romem.c +@@ -207,6 +207,7 @@ static int stm32_romem_probe(struct platform_device *pdev) + priv->cfg.priv = priv; + priv->cfg.owner = THIS_MODULE; + priv->cfg.type = NVMEM_TYPE_OTP; ++ priv->cfg.add_legacy_fixed_of_cells = true; + + priv->lower = 0; + +diff --git a/drivers/nvmem/sunplus-ocotp.c b/drivers/nvmem/sunplus-ocotp.c +index f3a18aa0a6c73..38f5d9df39cd5 100644 +--- a/drivers/nvmem/sunplus-ocotp.c ++++ b/drivers/nvmem/sunplus-ocotp.c +@@ -145,6 +145,7 @@ static int sp_ocotp_read(void *priv, unsigned int offset, void *value, size_t by + + static struct nvmem_config sp_ocotp_nvmem_config = { + .name = "sp-ocotp", ++ .add_legacy_fixed_of_cells = true, + .read_only = true, + .word_size = 1, + .size = QAC628_OTP_SIZE, +diff --git a/drivers/nvmem/sunxi_sid.c b/drivers/nvmem/sunxi_sid.c +index 5d364d85347fc..ba14a76208ab6 100644 +--- a/drivers/nvmem/sunxi_sid.c ++++ b/drivers/nvmem/sunxi_sid.c +@@ -153,6 +153,7 @@ static int sunxi_sid_probe(struct platform_device *pdev) + nvmem_cfg->dev = dev; + nvmem_cfg->name = "sunxi-sid"; + nvmem_cfg->type = NVMEM_TYPE_OTP; ++ nvmem_cfg->add_legacy_fixed_of_cells = true; + nvmem_cfg->read_only = true; + nvmem_cfg->size = cfg->size; + nvmem_cfg->word_size = 1; +diff --git a/drivers/nvmem/uniphier-efuse.c b/drivers/nvmem/uniphier-efuse.c +index 0a1dbb80537ec..6ad3295d31951 100644 +--- a/drivers/nvmem/uniphier-efuse.c ++++ b/drivers/nvmem/uniphier-efuse.c +@@ -52,6 +52,7 @@ static int uniphier_efuse_probe(struct platform_device *pdev) + econfig.size = resource_size(res); + econfig.priv = priv; + econfig.dev = dev; ++ econfig.add_legacy_fixed_of_cells = true; + nvmem = devm_nvmem_register(dev, &econfig); + + return 
PTR_ERR_OR_ZERO(nvmem); +diff --git a/drivers/nvmem/zynqmp_nvmem.c b/drivers/nvmem/zynqmp_nvmem.c +index f49bb9a26d053..7f15aa89a9d09 100644 +--- a/drivers/nvmem/zynqmp_nvmem.c ++++ b/drivers/nvmem/zynqmp_nvmem.c +@@ -58,6 +58,7 @@ static int zynqmp_nvmem_probe(struct platform_device *pdev) + + priv->dev = dev; + econfig.dev = dev; ++ econfig.add_legacy_fixed_of_cells = true; + econfig.reg_read = zynqmp_nvmem_read; + econfig.priv = priv; + +diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c +index 80838dc54b3ab..7938741136a2c 100644 +--- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c ++++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c +@@ -43,7 +43,7 @@ + #define SCU614 0x614 /* Disable GPIO Internal Pull-Down #1 */ + #define SCU618 0x618 /* Disable GPIO Internal Pull-Down #2 */ + #define SCU61C 0x61c /* Disable GPIO Internal Pull-Down #3 */ +-#define SCU620 0x620 /* Disable GPIO Internal Pull-Down #4 */ ++#define SCU630 0x630 /* Disable GPIO Internal Pull-Down #4 */ + #define SCU634 0x634 /* Disable GPIO Internal Pull-Down #5 */ + #define SCU638 0x638 /* Disable GPIO Internal Pull-Down #6 */ + #define SCU690 0x690 /* Multi-function Pin Control #24 */ +@@ -2494,38 +2494,38 @@ static struct aspeed_pin_config aspeed_g6_configs[] = { + ASPEED_PULL_DOWN_PINCONF(D14, SCU61C, 0), + + /* GPIOS7 */ +- ASPEED_PULL_DOWN_PINCONF(T24, SCU620, 23), ++ ASPEED_PULL_DOWN_PINCONF(T24, SCU630, 23), + /* GPIOS6 */ +- ASPEED_PULL_DOWN_PINCONF(P23, SCU620, 22), ++ ASPEED_PULL_DOWN_PINCONF(P23, SCU630, 22), + /* GPIOS5 */ +- ASPEED_PULL_DOWN_PINCONF(P24, SCU620, 21), ++ ASPEED_PULL_DOWN_PINCONF(P24, SCU630, 21), + /* GPIOS4 */ +- ASPEED_PULL_DOWN_PINCONF(R26, SCU620, 20), ++ ASPEED_PULL_DOWN_PINCONF(R26, SCU630, 20), + /* GPIOS3*/ +- ASPEED_PULL_DOWN_PINCONF(R24, SCU620, 19), ++ ASPEED_PULL_DOWN_PINCONF(R24, SCU630, 19), + /* GPIOS2 */ +- ASPEED_PULL_DOWN_PINCONF(T26, SCU620, 18), ++ ASPEED_PULL_DOWN_PINCONF(T26, SCU630, 18), + /* GPIOS1 */ 
+- ASPEED_PULL_DOWN_PINCONF(T25, SCU620, 17), ++ ASPEED_PULL_DOWN_PINCONF(T25, SCU630, 17), + /* GPIOS0 */ +- ASPEED_PULL_DOWN_PINCONF(R23, SCU620, 16), ++ ASPEED_PULL_DOWN_PINCONF(R23, SCU630, 16), + + /* GPIOR7 */ +- ASPEED_PULL_DOWN_PINCONF(U26, SCU620, 15), ++ ASPEED_PULL_DOWN_PINCONF(U26, SCU630, 15), + /* GPIOR6 */ +- ASPEED_PULL_DOWN_PINCONF(W26, SCU620, 14), ++ ASPEED_PULL_DOWN_PINCONF(W26, SCU630, 14), + /* GPIOR5 */ +- ASPEED_PULL_DOWN_PINCONF(T23, SCU620, 13), ++ ASPEED_PULL_DOWN_PINCONF(T23, SCU630, 13), + /* GPIOR4 */ +- ASPEED_PULL_DOWN_PINCONF(U25, SCU620, 12), ++ ASPEED_PULL_DOWN_PINCONF(U25, SCU630, 12), + /* GPIOR3*/ +- ASPEED_PULL_DOWN_PINCONF(V26, SCU620, 11), ++ ASPEED_PULL_DOWN_PINCONF(V26, SCU630, 11), + /* GPIOR2 */ +- ASPEED_PULL_DOWN_PINCONF(V24, SCU620, 10), ++ ASPEED_PULL_DOWN_PINCONF(V24, SCU630, 10), + /* GPIOR1 */ +- ASPEED_PULL_DOWN_PINCONF(U24, SCU620, 9), ++ ASPEED_PULL_DOWN_PINCONF(U24, SCU630, 9), + /* GPIOR0 */ +- ASPEED_PULL_DOWN_PINCONF(V25, SCU620, 8), ++ ASPEED_PULL_DOWN_PINCONF(V25, SCU630, 8), + + /* GPIOX7 */ + ASPEED_PULL_DOWN_PINCONF(AB10, SCU634, 31), +diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c +index 184ec92241ca8..9e7b3e6c79cb1 100644 +--- a/drivers/pinctrl/core.c ++++ b/drivers/pinctrl/core.c +@@ -2116,13 +2116,7 @@ int pinctrl_enable(struct pinctrl_dev *pctldev) + + error = pinctrl_claim_hogs(pctldev); + if (error) { +- dev_err(pctldev->dev, "could not claim hogs: %i\n", +- error); +- pinctrl_free_pindescs(pctldev, pctldev->desc->pins, +- pctldev->desc->npins); +- mutex_destroy(&pctldev->mutex); +- kfree(pctldev); +- ++ dev_err(pctldev->dev, "could not claim hogs: %i\n", error); + return error; + } + +diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c +index 6e0a40962f384..5ee746cb81f59 100644 +--- a/drivers/pinctrl/devicetree.c ++++ b/drivers/pinctrl/devicetree.c +@@ -220,14 +220,16 @@ int pinctrl_dt_to_map(struct pinctrl *p, struct pinctrl_dev *pctldev) + for (state = 0; ; 
state++) { + /* Retrieve the pinctrl-* property */ + propname = kasprintf(GFP_KERNEL, "pinctrl-%d", state); +- if (!propname) +- return -ENOMEM; ++ if (!propname) { ++ ret = -ENOMEM; ++ goto err; ++ } + prop = of_find_property(np, propname, &size); + kfree(propname); + if (!prop) { + if (state == 0) { +- of_node_put(np); +- return -ENODEV; ++ ret = -ENODEV; ++ goto err; + } + break; + } +diff --git a/drivers/pinctrl/intel/pinctrl-baytrail.c b/drivers/pinctrl/intel/pinctrl-baytrail.c +index 95a8a3a22b2b4..0aaeb54a64765 100644 +--- a/drivers/pinctrl/intel/pinctrl-baytrail.c ++++ b/drivers/pinctrl/intel/pinctrl-baytrail.c +@@ -277,33 +277,33 @@ static const unsigned int byt_score_plt_clk5_pins[] = { 101 }; + static const unsigned int byt_score_smbus_pins[] = { 51, 52, 53 }; + + static const struct intel_pingroup byt_score_groups[] = { +- PIN_GROUP("uart1_grp", byt_score_uart1_pins, 1), +- PIN_GROUP("uart2_grp", byt_score_uart2_pins, 1), +- PIN_GROUP("pwm0_grp", byt_score_pwm0_pins, 1), +- PIN_GROUP("pwm1_grp", byt_score_pwm1_pins, 1), +- PIN_GROUP("ssp2_grp", byt_score_ssp2_pins, 1), +- PIN_GROUP("sio_spi_grp", byt_score_sio_spi_pins, 1), +- PIN_GROUP("i2c5_grp", byt_score_i2c5_pins, 1), +- PIN_GROUP("i2c6_grp", byt_score_i2c6_pins, 1), +- PIN_GROUP("i2c4_grp", byt_score_i2c4_pins, 1), +- PIN_GROUP("i2c3_grp", byt_score_i2c3_pins, 1), +- PIN_GROUP("i2c2_grp", byt_score_i2c2_pins, 1), +- PIN_GROUP("i2c1_grp", byt_score_i2c1_pins, 1), +- PIN_GROUP("i2c0_grp", byt_score_i2c0_pins, 1), +- PIN_GROUP("ssp0_grp", byt_score_ssp0_pins, 1), +- PIN_GROUP("ssp1_grp", byt_score_ssp1_pins, 1), +- PIN_GROUP("sdcard_grp", byt_score_sdcard_pins, byt_score_sdcard_mux_values), +- PIN_GROUP("sdio_grp", byt_score_sdio_pins, 1), +- PIN_GROUP("emmc_grp", byt_score_emmc_pins, 1), +- PIN_GROUP("lpc_grp", byt_score_ilb_lpc_pins, 1), +- PIN_GROUP("sata_grp", byt_score_sata_pins, 1), +- PIN_GROUP("plt_clk0_grp", byt_score_plt_clk0_pins, 1), +- PIN_GROUP("plt_clk1_grp", byt_score_plt_clk1_pins, 
1), +- PIN_GROUP("plt_clk2_grp", byt_score_plt_clk2_pins, 1), +- PIN_GROUP("plt_clk3_grp", byt_score_plt_clk3_pins, 1), +- PIN_GROUP("plt_clk4_grp", byt_score_plt_clk4_pins, 1), +- PIN_GROUP("plt_clk5_grp", byt_score_plt_clk5_pins, 1), +- PIN_GROUP("smbus_grp", byt_score_smbus_pins, 1), ++ PIN_GROUP_GPIO("uart1_grp", byt_score_uart1_pins, 1), ++ PIN_GROUP_GPIO("uart2_grp", byt_score_uart2_pins, 1), ++ PIN_GROUP_GPIO("pwm0_grp", byt_score_pwm0_pins, 1), ++ PIN_GROUP_GPIO("pwm1_grp", byt_score_pwm1_pins, 1), ++ PIN_GROUP_GPIO("ssp2_grp", byt_score_ssp2_pins, 1), ++ PIN_GROUP_GPIO("sio_spi_grp", byt_score_sio_spi_pins, 1), ++ PIN_GROUP_GPIO("i2c5_grp", byt_score_i2c5_pins, 1), ++ PIN_GROUP_GPIO("i2c6_grp", byt_score_i2c6_pins, 1), ++ PIN_GROUP_GPIO("i2c4_grp", byt_score_i2c4_pins, 1), ++ PIN_GROUP_GPIO("i2c3_grp", byt_score_i2c3_pins, 1), ++ PIN_GROUP_GPIO("i2c2_grp", byt_score_i2c2_pins, 1), ++ PIN_GROUP_GPIO("i2c1_grp", byt_score_i2c1_pins, 1), ++ PIN_GROUP_GPIO("i2c0_grp", byt_score_i2c0_pins, 1), ++ PIN_GROUP_GPIO("ssp0_grp", byt_score_ssp0_pins, 1), ++ PIN_GROUP_GPIO("ssp1_grp", byt_score_ssp1_pins, 1), ++ PIN_GROUP_GPIO("sdcard_grp", byt_score_sdcard_pins, byt_score_sdcard_mux_values), ++ PIN_GROUP_GPIO("sdio_grp", byt_score_sdio_pins, 1), ++ PIN_GROUP_GPIO("emmc_grp", byt_score_emmc_pins, 1), ++ PIN_GROUP_GPIO("lpc_grp", byt_score_ilb_lpc_pins, 1), ++ PIN_GROUP_GPIO("sata_grp", byt_score_sata_pins, 1), ++ PIN_GROUP_GPIO("plt_clk0_grp", byt_score_plt_clk0_pins, 1), ++ PIN_GROUP_GPIO("plt_clk1_grp", byt_score_plt_clk1_pins, 1), ++ PIN_GROUP_GPIO("plt_clk2_grp", byt_score_plt_clk2_pins, 1), ++ PIN_GROUP_GPIO("plt_clk3_grp", byt_score_plt_clk3_pins, 1), ++ PIN_GROUP_GPIO("plt_clk4_grp", byt_score_plt_clk4_pins, 1), ++ PIN_GROUP_GPIO("plt_clk5_grp", byt_score_plt_clk5_pins, 1), ++ PIN_GROUP_GPIO("smbus_grp", byt_score_smbus_pins, 1), + }; + + static const char * const byt_score_uart_groups[] = { +@@ -331,12 +331,14 @@ static const char * const 
byt_score_plt_clk_groups[] = { + }; + static const char * const byt_score_smbus_groups[] = { "smbus_grp" }; + static const char * const byt_score_gpio_groups[] = { +- "uart1_grp", "uart2_grp", "pwm0_grp", "pwm1_grp", "ssp0_grp", +- "ssp1_grp", "ssp2_grp", "sio_spi_grp", "i2c0_grp", "i2c1_grp", +- "i2c2_grp", "i2c3_grp", "i2c4_grp", "i2c5_grp", "i2c6_grp", +- "sdcard_grp", "sdio_grp", "emmc_grp", "lpc_grp", "sata_grp", +- "plt_clk0_grp", "plt_clk1_grp", "plt_clk2_grp", "plt_clk3_grp", +- "plt_clk4_grp", "plt_clk5_grp", "smbus_grp", ++ "uart1_grp_gpio", "uart2_grp_gpio", "pwm0_grp_gpio", ++ "pwm1_grp_gpio", "ssp0_grp_gpio", "ssp1_grp_gpio", "ssp2_grp_gpio", ++ "sio_spi_grp_gpio", "i2c0_grp_gpio", "i2c1_grp_gpio", "i2c2_grp_gpio", ++ "i2c3_grp_gpio", "i2c4_grp_gpio", "i2c5_grp_gpio", "i2c6_grp_gpio", ++ "sdcard_grp_gpio", "sdio_grp_gpio", "emmc_grp_gpio", "lpc_grp_gpio", ++ "sata_grp_gpio", "plt_clk0_grp_gpio", "plt_clk1_grp_gpio", ++ "plt_clk2_grp_gpio", "plt_clk3_grp_gpio", "plt_clk4_grp_gpio", ++ "plt_clk5_grp_gpio", "smbus_grp_gpio", + }; + + static const struct intel_function byt_score_functions[] = { +@@ -455,8 +457,8 @@ static const struct intel_pingroup byt_sus_groups[] = { + PIN_GROUP("usb_oc_grp_gpio", byt_sus_usb_over_current_pins, byt_sus_usb_over_current_gpio_mode_values), + PIN_GROUP("usb_ulpi_grp_gpio", byt_sus_usb_ulpi_pins, byt_sus_usb_ulpi_gpio_mode_values), + PIN_GROUP("pcu_spi_grp_gpio", byt_sus_pcu_spi_pins, byt_sus_pcu_spi_gpio_mode_values), +- PIN_GROUP("pmu_clk1_grp", byt_sus_pmu_clk1_pins, 1), +- PIN_GROUP("pmu_clk2_grp", byt_sus_pmu_clk2_pins, 1), ++ PIN_GROUP_GPIO("pmu_clk1_grp", byt_sus_pmu_clk1_pins, 1), ++ PIN_GROUP_GPIO("pmu_clk2_grp", byt_sus_pmu_clk2_pins, 1), + }; + + static const char * const byt_sus_usb_groups[] = { +@@ -468,7 +470,7 @@ static const char * const byt_sus_pmu_clk_groups[] = { + }; + static const char * const byt_sus_gpio_groups[] = { + "usb_oc_grp_gpio", "usb_ulpi_grp_gpio", "pcu_spi_grp_gpio", +- "pmu_clk1_grp", 
"pmu_clk2_grp", ++ "pmu_clk1_grp_gpio", "pmu_clk2_grp_gpio", + }; + + static const struct intel_function byt_sus_functions[] = { +diff --git a/drivers/pinctrl/intel/pinctrl-intel.h b/drivers/pinctrl/intel/pinctrl-intel.h +index cee512f97b566..45216b9e852dc 100644 +--- a/drivers/pinctrl/intel/pinctrl-intel.h ++++ b/drivers/pinctrl/intel/pinctrl-intel.h +@@ -179,6 +179,10 @@ struct intel_community { + .modes = __builtin_choose_expr(__builtin_constant_p((m)), NULL, (m)), \ + } + ++#define PIN_GROUP_GPIO(n, p, m) \ ++ PIN_GROUP(n, p, m), \ ++ PIN_GROUP(n "_gpio", p, 0) ++ + #define FUNCTION(n, g) \ + { \ + .func = PINCTRL_PINFUNCTION((n), (g), ARRAY_SIZE(g)), \ +diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c +index 33d6c3fb79080..9cd7fe3c3e0df 100644 +--- a/drivers/pinctrl/mediatek/pinctrl-paris.c ++++ b/drivers/pinctrl/mediatek/pinctrl-paris.c +@@ -165,20 +165,21 @@ static int mtk_pinconf_get(struct pinctrl_dev *pctldev, + err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_SR, &ret); + break; + case PIN_CONFIG_INPUT_ENABLE: +- case PIN_CONFIG_OUTPUT_ENABLE: ++ err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_IES, &ret); ++ if (!ret) ++ err = -EINVAL; ++ break; ++ case PIN_CONFIG_OUTPUT: + err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DIR, &ret); + if (err) + break; +- /* CONFIG Current direction return value +- * ------------- ----------------- ---------------------- +- * OUTPUT_ENABLE output 1 (= HW value) +- * input 0 (= HW value) +- * INPUT_ENABLE output 0 (= reverse HW value) +- * input 1 (= reverse HW value) +- */ +- if (param == PIN_CONFIG_INPUT_ENABLE) +- ret = !ret; + ++ if (!ret) { ++ err = -EINVAL; ++ break; ++ } ++ ++ err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DO, &ret); + break; + case PIN_CONFIG_INPUT_SCHMITT_ENABLE: + err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DIR, &ret); +@@ -193,6 +194,8 @@ static int mtk_pinconf_get(struct pinctrl_dev *pctldev, + } + + err = mtk_hw_get_value(hw, desc, 
PINCTRL_PIN_REG_SMT, &ret); ++ if (!ret) ++ err = -EINVAL; + break; + case PIN_CONFIG_DRIVE_STRENGTH: + if (!hw->soc->drive_get) +@@ -281,26 +284,9 @@ static int mtk_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin, + break; + err = hw->soc->bias_set_combo(hw, desc, 0, arg); + break; +- case PIN_CONFIG_OUTPUT_ENABLE: +- err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_SMT, +- MTK_DISABLE); +- /* Keep set direction to consider the case that a GPIO pin +- * does not have SMT control +- */ +- if (err != -ENOTSUPP) +- break; +- +- err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR, +- MTK_OUTPUT); +- break; + case PIN_CONFIG_INPUT_ENABLE: + /* regard all non-zero value as enable */ + err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_IES, !!arg); +- if (err) +- break; +- +- err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR, +- MTK_INPUT); + break; + case PIN_CONFIG_SLEW_RATE: + /* regard all non-zero value as enable */ +diff --git a/drivers/pinctrl/meson/pinctrl-meson-a1.c b/drivers/pinctrl/meson/pinctrl-meson-a1.c +index 79f5d753d7e1a..50a87d9618a8e 100644 +--- a/drivers/pinctrl/meson/pinctrl-meson-a1.c ++++ b/drivers/pinctrl/meson/pinctrl-meson-a1.c +@@ -250,7 +250,7 @@ static const unsigned int pdm_dclk_x_pins[] = { GPIOX_10 }; + static const unsigned int pdm_din2_a_pins[] = { GPIOA_6 }; + static const unsigned int pdm_din1_a_pins[] = { GPIOA_7 }; + static const unsigned int pdm_din0_a_pins[] = { GPIOA_8 }; +-static const unsigned int pdm_dclk_pins[] = { GPIOA_9 }; ++static const unsigned int pdm_dclk_a_pins[] = { GPIOA_9 }; + + /* gen_clk */ + static const unsigned int gen_clk_x_pins[] = { GPIOX_7 }; +@@ -591,7 +591,7 @@ static struct meson_pmx_group meson_a1_periphs_groups[] = { + GROUP(pdm_din2_a, 3), + GROUP(pdm_din1_a, 3), + GROUP(pdm_din0_a, 3), +- GROUP(pdm_dclk, 3), ++ GROUP(pdm_dclk_a, 3), + GROUP(pwm_c_a, 3), + GROUP(pwm_b_a, 3), + +@@ -755,7 +755,7 @@ static const char * const spi_a_groups[] = { + + static const char * const pdm_groups[] = { + 
"pdm_din0_x", "pdm_din1_x", "pdm_din2_x", "pdm_dclk_x", "pdm_din2_a", +- "pdm_din1_a", "pdm_din0_a", "pdm_dclk", ++ "pdm_din1_a", "pdm_din0_a", "pdm_dclk_a", + }; + + static const char * const gen_clk_groups[] = { +diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +index 08df9494603c5..30951f7131cd9 100644 +--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c ++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +@@ -719,6 +719,7 @@ static struct miscdevice isst_if_char_driver = { + }; + + static const struct x86_cpu_id hpm_cpu_ids[] = { ++ X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_D, NULL), + X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_X, NULL), + X86_MATCH_INTEL_FAM6_MODEL(ATOM_CRESTMONT_X, NULL), + {} +diff --git a/drivers/power/supply/mt6360_charger.c b/drivers/power/supply/mt6360_charger.c +index 1305cba61edd4..aca123783efcc 100644 +--- a/drivers/power/supply/mt6360_charger.c ++++ b/drivers/power/supply/mt6360_charger.c +@@ -588,7 +588,7 @@ static const struct regulator_ops mt6360_chg_otg_ops = { + }; + + static const struct regulator_desc mt6360_otg_rdesc = { +- .of_match = "usb-otg-vbus", ++ .of_match = "usb-otg-vbus-regulator", + .name = "usb-otg-vbus", + .ops = &mt6360_chg_otg_ops, + .owner = THIS_MODULE, +diff --git a/drivers/power/supply/rt9455_charger.c b/drivers/power/supply/rt9455_charger.c +index c345a77f9f78c..e4dbacd50a437 100644 +--- a/drivers/power/supply/rt9455_charger.c ++++ b/drivers/power/supply/rt9455_charger.c +@@ -192,6 +192,7 @@ static const int rt9455_voreg_values[] = { + 4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000 + }; + ++#if IS_ENABLED(CONFIG_USB_PHY) + /* + * When the charger is in boost mode, REG02[7:2] represent boost output + * voltage. 
+@@ -207,6 +208,7 @@ static const int rt9455_boost_voltage_values[] = { + 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, + 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, + }; ++#endif + + /* REG07[3:0] (VMREG) in uV */ + static const int rt9455_vmreg_values[] = { +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c +index a7b3e548ea5ac..dfb986377a989 100644 +--- a/drivers/regulator/core.c ++++ b/drivers/regulator/core.c +@@ -1909,19 +1909,24 @@ static struct regulator *create_regulator(struct regulator_dev *rdev, + } + } + +- if (err != -EEXIST) ++ if (err != -EEXIST) { + regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs); +- if (IS_ERR(regulator->debugfs)) +- rdev_dbg(rdev, "Failed to create debugfs directory\n"); ++ if (IS_ERR(regulator->debugfs)) { ++ rdev_dbg(rdev, "Failed to create debugfs directory\n"); ++ regulator->debugfs = NULL; ++ } ++ } + +- debugfs_create_u32("uA_load", 0444, regulator->debugfs, +- ®ulator->uA_load); +- debugfs_create_u32("min_uV", 0444, regulator->debugfs, +- ®ulator->voltage[PM_SUSPEND_ON].min_uV); +- debugfs_create_u32("max_uV", 0444, regulator->debugfs, +- ®ulator->voltage[PM_SUSPEND_ON].max_uV); +- debugfs_create_file("constraint_flags", 0444, regulator->debugfs, +- regulator, &constraint_flags_fops); ++ if (regulator->debugfs) { ++ debugfs_create_u32("uA_load", 0444, regulator->debugfs, ++ ®ulator->uA_load); ++ debugfs_create_u32("min_uV", 0444, regulator->debugfs, ++ ®ulator->voltage[PM_SUSPEND_ON].min_uV); ++ debugfs_create_u32("max_uV", 0444, regulator->debugfs, ++ ®ulator->voltage[PM_SUSPEND_ON].max_uV); ++ debugfs_create_file("constraint_flags", 0444, regulator->debugfs, ++ regulator, &constraint_flags_fops); ++ } + + /* + * Check now if the regulator is an always on regulator - if +diff --git a/drivers/regulator/mt6360-regulator.c b/drivers/regulator/mt6360-regulator.c +index ad6587a378d09..24cc9fc94e900 100644 +--- 
a/drivers/regulator/mt6360-regulator.c ++++ b/drivers/regulator/mt6360-regulator.c +@@ -319,15 +319,15 @@ static unsigned int mt6360_regulator_of_map_mode(unsigned int hw_mode) + } + } + +-#define MT6360_REGULATOR_DESC(_name, _sname, ereg, emask, vreg, vmask, \ +- mreg, mmask, streg, stmask, vranges, \ +- vcnts, offon_delay, irq_tbls) \ ++#define MT6360_REGULATOR_DESC(match, _name, _sname, ereg, emask, vreg, \ ++ vmask, mreg, mmask, streg, stmask, \ ++ vranges, vcnts, offon_delay, irq_tbls) \ + { \ + .desc = { \ + .name = #_name, \ + .supply_name = #_sname, \ + .id = MT6360_REGULATOR_##_name, \ +- .of_match = of_match_ptr(#_name), \ ++ .of_match = of_match_ptr(match), \ + .regulators_node = of_match_ptr("regulator"), \ + .of_map_mode = mt6360_regulator_of_map_mode, \ + .owner = THIS_MODULE, \ +@@ -351,21 +351,29 @@ static unsigned int mt6360_regulator_of_map_mode(unsigned int hw_mode) + } + + static const struct mt6360_regulator_desc mt6360_regulator_descs[] = { +- MT6360_REGULATOR_DESC(BUCK1, BUCK1_VIN, 0x117, 0x40, 0x110, 0xff, 0x117, 0x30, 0x117, 0x04, ++ MT6360_REGULATOR_DESC("buck1", BUCK1, BUCK1_VIN, ++ 0x117, 0x40, 0x110, 0xff, 0x117, 0x30, 0x117, 0x04, + buck_vout_ranges, 256, 0, buck1_irq_tbls), +- MT6360_REGULATOR_DESC(BUCK2, BUCK2_VIN, 0x127, 0x40, 0x120, 0xff, 0x127, 0x30, 0x127, 0x04, ++ MT6360_REGULATOR_DESC("buck2", BUCK2, BUCK2_VIN, ++ 0x127, 0x40, 0x120, 0xff, 0x127, 0x30, 0x127, 0x04, + buck_vout_ranges, 256, 0, buck2_irq_tbls), +- MT6360_REGULATOR_DESC(LDO6, LDO_VIN3, 0x137, 0x40, 0x13B, 0xff, 0x137, 0x30, 0x137, 0x04, ++ MT6360_REGULATOR_DESC("ldo6", LDO6, LDO_VIN3, ++ 0x137, 0x40, 0x13B, 0xff, 0x137, 0x30, 0x137, 0x04, + ldo_vout_ranges1, 256, 0, ldo6_irq_tbls), +- MT6360_REGULATOR_DESC(LDO7, LDO_VIN3, 0x131, 0x40, 0x135, 0xff, 0x131, 0x30, 0x131, 0x04, ++ MT6360_REGULATOR_DESC("ldo7", LDO7, LDO_VIN3, ++ 0x131, 0x40, 0x135, 0xff, 0x131, 0x30, 0x131, 0x04, + ldo_vout_ranges1, 256, 0, ldo7_irq_tbls), +- MT6360_REGULATOR_DESC(LDO1, LDO_VIN1, 
0x217, 0x40, 0x21B, 0xff, 0x217, 0x30, 0x217, 0x04, ++ MT6360_REGULATOR_DESC("ldo1", LDO1, LDO_VIN1, ++ 0x217, 0x40, 0x21B, 0xff, 0x217, 0x30, 0x217, 0x04, + ldo_vout_ranges2, 256, 0, ldo1_irq_tbls), +- MT6360_REGULATOR_DESC(LDO2, LDO_VIN1, 0x211, 0x40, 0x215, 0xff, 0x211, 0x30, 0x211, 0x04, ++ MT6360_REGULATOR_DESC("ldo2", LDO2, LDO_VIN1, ++ 0x211, 0x40, 0x215, 0xff, 0x211, 0x30, 0x211, 0x04, + ldo_vout_ranges2, 256, 0, ldo2_irq_tbls), +- MT6360_REGULATOR_DESC(LDO3, LDO_VIN1, 0x205, 0x40, 0x209, 0xff, 0x205, 0x30, 0x205, 0x04, ++ MT6360_REGULATOR_DESC("ldo3", LDO3, LDO_VIN1, ++ 0x205, 0x40, 0x209, 0xff, 0x205, 0x30, 0x205, 0x04, + ldo_vout_ranges2, 256, 100, ldo3_irq_tbls), +- MT6360_REGULATOR_DESC(LDO5, LDO_VIN2, 0x20B, 0x40, 0x20F, 0x7f, 0x20B, 0x30, 0x20B, 0x04, ++ MT6360_REGULATOR_DESC("ldo5", LDO5, LDO_VIN2, ++ 0x20B, 0x40, 0x20F, 0x7f, 0x20B, 0x30, 0x20B, 0x04, + ldo_vout_ranges3, 128, 100, ldo5_irq_tbls), + }; + +diff --git a/drivers/regulator/tps65132-regulator.c b/drivers/regulator/tps65132-regulator.c +index a06f5f2d79329..9c2f0dd42613d 100644 +--- a/drivers/regulator/tps65132-regulator.c ++++ b/drivers/regulator/tps65132-regulator.c +@@ -267,10 +267,17 @@ static const struct i2c_device_id tps65132_id[] = { + }; + MODULE_DEVICE_TABLE(i2c, tps65132_id); + ++static const struct of_device_id __maybe_unused tps65132_of_match[] = { ++ { .compatible = "ti,tps65132" }, ++ {}, ++}; ++MODULE_DEVICE_TABLE(of, tps65132_of_match); ++ + static struct i2c_driver tps65132_i2c_driver = { + .driver = { + .name = "tps65132", + .probe_type = PROBE_PREFER_ASYNCHRONOUS, ++ .of_match_table = of_match_ptr(tps65132_of_match), + }, + .probe = tps65132_probe, + .id_table = tps65132_id, +diff --git a/drivers/rtc/nvmem.c b/drivers/rtc/nvmem.c +index 07ede21cee347..37df7e80525b4 100644 +--- a/drivers/rtc/nvmem.c ++++ b/drivers/rtc/nvmem.c +@@ -21,6 +21,7 @@ int devm_rtc_nvmem_register(struct rtc_device *rtc, + + nvmem_config->dev = dev; + nvmem_config->owner = rtc->owner; ++ 
nvmem_config->add_legacy_fixed_of_cells = true; + nvmem = devm_nvmem_register(dev, nvmem_config); + if (IS_ERR(nvmem)) + dev_err(dev, "failed to register nvmem device for RTC\n"); +diff --git a/drivers/s390/cio/cio_inject.c b/drivers/s390/cio/cio_inject.c +index 8613fa937237b..a2e771ebae8eb 100644 +--- a/drivers/s390/cio/cio_inject.c ++++ b/drivers/s390/cio/cio_inject.c +@@ -95,7 +95,7 @@ static ssize_t crw_inject_write(struct file *file, const char __user *buf, + return -EINVAL; + } + +- buffer = vmemdup_user(buf, lbuf); ++ buffer = memdup_user_nul(buf, lbuf); + if (IS_ERR(buffer)) + return -ENOMEM; + +diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c +index 1148b4ecabdde..f0f3b6272d5b8 100644 +--- a/drivers/s390/net/qeth_core_main.c ++++ b/drivers/s390/net/qeth_core_main.c +@@ -364,30 +364,33 @@ static int qeth_cq_init(struct qeth_card *card) + return rc; + } + ++static void qeth_free_cq(struct qeth_card *card) ++{ ++ if (card->qdio.c_q) { ++ qeth_free_qdio_queue(card->qdio.c_q); ++ card->qdio.c_q = NULL; ++ } ++} ++ + static int qeth_alloc_cq(struct qeth_card *card) + { + if (card->options.cq == QETH_CQ_ENABLED) { + QETH_CARD_TEXT(card, 2, "cqon"); +- card->qdio.c_q = qeth_alloc_qdio_queue(); + if (!card->qdio.c_q) { +- dev_err(&card->gdev->dev, "Failed to create completion queue\n"); +- return -ENOMEM; ++ card->qdio.c_q = qeth_alloc_qdio_queue(); ++ if (!card->qdio.c_q) { ++ dev_err(&card->gdev->dev, ++ "Failed to create completion queue\n"); ++ return -ENOMEM; ++ } + } + } else { + QETH_CARD_TEXT(card, 2, "nocq"); +- card->qdio.c_q = NULL; ++ qeth_free_cq(card); + } + return 0; + } + +-static void qeth_free_cq(struct qeth_card *card) +-{ +- if (card->qdio.c_q) { +- qeth_free_qdio_queue(card->qdio.c_q); +- card->qdio.c_q = NULL; +- } +-} +- + static enum iucv_tx_notify qeth_compute_cq_notification(int sbalf15, + int delayed) + { +@@ -2628,6 +2631,10 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card) + + 
QETH_CARD_TEXT(card, 2, "allcqdbf"); + ++ /* completion */ ++ if (qeth_alloc_cq(card)) ++ goto out_err; ++ + if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED, + QETH_QDIO_ALLOCATED) != QETH_QDIO_UNINITIALIZED) + return 0; +@@ -2663,10 +2670,6 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card) + queue->priority = QETH_QIB_PQUE_PRIO_DEFAULT; + } + +- /* completion */ +- if (qeth_alloc_cq(card)) +- goto out_freeoutq; +- + return 0; + + out_freeoutq: +@@ -2677,6 +2680,8 @@ static int qeth_alloc_qdio_queues(struct qeth_card *card) + qeth_free_buffer_pool(card); + out_buffer_pool: + atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED); ++ qeth_free_cq(card); ++out_err: + return -ENOMEM; + } + +@@ -2684,11 +2689,12 @@ static void qeth_free_qdio_queues(struct qeth_card *card) + { + int i, j; + ++ qeth_free_cq(card); ++ + if (atomic_xchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED) == + QETH_QDIO_UNINITIALIZED) + return; + +- qeth_free_cq(card); + for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) { + if (card->qdio.in_q->bufs[j].rx_skb) { + consume_skb(card->qdio.in_q->bufs[j].rx_skb); +@@ -3742,24 +3748,11 @@ static void qeth_qdio_poll(struct ccw_device *cdev, unsigned long card_ptr) + + int qeth_configure_cq(struct qeth_card *card, enum qeth_cq cq) + { +- int rc; +- +- if (card->options.cq == QETH_CQ_NOTAVAILABLE) { +- rc = -1; +- goto out; +- } else { +- if (card->options.cq == cq) { +- rc = 0; +- goto out; +- } +- +- qeth_free_qdio_queues(card); +- card->options.cq = cq; +- rc = 0; +- } +-out: +- return rc; ++ if (card->options.cq == QETH_CQ_NOTAVAILABLE) ++ return -1; + ++ card->options.cq = cq; ++ return 0; + } + EXPORT_SYMBOL_GPL(qeth_configure_cq); + +diff --git a/drivers/scsi/bnx2fc/bnx2fc_tgt.c b/drivers/scsi/bnx2fc/bnx2fc_tgt.c +index 2c246e80c1c4d..d91659811eb3c 100644 +--- a/drivers/scsi/bnx2fc/bnx2fc_tgt.c ++++ b/drivers/scsi/bnx2fc/bnx2fc_tgt.c +@@ -833,7 +833,6 @@ static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba, + + 
BNX2FC_TGT_DBG(tgt, "Freeing up session resources\n"); + +- spin_lock_bh(&tgt->cq_lock); + ctx_base_ptr = tgt->ctx_base; + tgt->ctx_base = NULL; + +@@ -889,7 +888,6 @@ static void bnx2fc_free_session_resc(struct bnx2fc_hba *hba, + tgt->sq, tgt->sq_dma); + tgt->sq = NULL; + } +- spin_unlock_bh(&tgt->cq_lock); + + if (ctx_base_ptr) + iounmap(ctx_base_ptr); +diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +index e914c0c13bb5b..6b97c066e6631 100644 +--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c ++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +@@ -2245,7 +2245,15 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task, + case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP: + if ((dw0 & CMPLT_HDR_RSPNS_XFRD_MSK) && + (sipc_rx_err_type & RX_FIS_STATUS_ERR_MSK)) { +- ts->stat = SAS_PROTO_RESPONSE; ++ if (task->ata_task.use_ncq) { ++ struct domain_device *device = task->dev; ++ struct hisi_sas_device *sas_dev = device->lldd_dev; ++ ++ sas_dev->dev_status = HISI_SAS_DEV_NCQ_ERR; ++ slot->abort = 1; ++ } else { ++ ts->stat = SAS_PROTO_RESPONSE; ++ } + } else if (dma_rx_err_type & RX_DATA_LEN_UNDERFLOW_MSK) { + ts->residual = trans_tx_fail_type; + ts->stat = SAS_DATA_UNDERRUN; +diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c +index 5c261005b74e4..f6e6db8b8aba9 100644 +--- a/drivers/scsi/libsas/sas_expander.c ++++ b/drivers/scsi/libsas/sas_expander.c +@@ -135,7 +135,7 @@ static int smp_execute_task(struct domain_device *dev, void *req, int req_size, + + static inline void *alloc_smp_req(int size) + { +- u8 *p = kzalloc(size, GFP_KERNEL); ++ u8 *p = kzalloc(ALIGN(size, ARCH_DMA_MINALIGN), GFP_KERNEL); + if (p) + p[0] = SMP_REQUEST; + return p; +diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h +index 04d608ea91060..9670cb2bf198e 100644 +--- a/drivers/scsi/lpfc/lpfc.h ++++ b/drivers/scsi/lpfc/lpfc.h +@@ -1325,7 +1325,6 @@ struct lpfc_hba { + struct timer_list fabric_block_timer; + 
unsigned long bit_flags; + atomic_t num_rsrc_err; +- atomic_t num_cmd_success; + unsigned long last_rsrc_error_time; + unsigned long last_ramp_down_time; + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS +@@ -1430,6 +1429,7 @@ struct lpfc_hba { + struct timer_list inactive_vmid_poll; + + /* RAS Support */ ++ spinlock_t ras_fwlog_lock; /* do not take while holding another lock */ + struct lpfc_ras_fwlog ras_fwlog; + + uint32_t iocb_cnt; +diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c +index b1c9107d34083..79b45ea5fdb5e 100644 +--- a/drivers/scsi/lpfc/lpfc_attr.c ++++ b/drivers/scsi/lpfc/lpfc_attr.c +@@ -5864,9 +5864,9 @@ lpfc_ras_fwlog_buffsize_set(struct lpfc_hba *phba, uint val) + if (phba->cfg_ras_fwlog_func != PCI_FUNC(phba->pcidev->devfn)) + return -EINVAL; + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + state = phba->ras_fwlog.state; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + if (state == REG_INPROGRESS) { + lpfc_printf_log(phba, KERN_ERR, LOG_SLI, "6147 RAS Logging " +diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c +index 2919579fa0846..c305d16cfae9a 100644 +--- a/drivers/scsi/lpfc/lpfc_bsg.c ++++ b/drivers/scsi/lpfc/lpfc_bsg.c +@@ -5070,12 +5070,12 @@ lpfc_bsg_get_ras_config(struct bsg_job *job) + bsg_reply->reply_data.vendor_reply.vendor_rsp; + + /* Current logging state */ +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + if (ras_fwlog->state == ACTIVE) + ras_reply->state = LPFC_RASLOG_STATE_RUNNING; + else + ras_reply->state = LPFC_RASLOG_STATE_STOPPED; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + ras_reply->log_level = phba->ras_fwlog.fw_loglevel; + ras_reply->log_buff_sz = phba->cfg_ras_fwlog_buffsize; +@@ -5132,13 +5132,13 @@ lpfc_bsg_set_ras_config(struct bsg_job *job) + + if (action == LPFC_RASACTION_STOP_LOGGING) { + /* Check if already disabled */ +- 
spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + if (ras_fwlog->state != ACTIVE) { +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + rc = -ESRCH; + goto ras_job_error; + } +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + /* Disable logging */ + lpfc_ras_stop_fwlog(phba); +@@ -5149,10 +5149,10 @@ lpfc_bsg_set_ras_config(struct bsg_job *job) + * FW-logging with new log-level. Return status + * "Logging already Running" to caller. + **/ +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + if (ras_fwlog->state != INACTIVE) + action_status = -EINPROGRESS; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + /* Enable logging */ + rc = lpfc_sli4_ras_fwlog_init(phba, log_level, +@@ -5268,13 +5268,13 @@ lpfc_bsg_get_ras_fwlog(struct bsg_job *job) + goto ras_job_error; + + /* Logging to be stopped before reading */ +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + if (ras_fwlog->state == ACTIVE) { +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + rc = -EINPROGRESS; + goto ras_job_error; + } +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + if (job->request_len < + sizeof(struct fc_bsg_request) + +diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c +index ea9b42225e629..20662b4f339eb 100644 +--- a/drivers/scsi/lpfc/lpfc_debugfs.c ++++ b/drivers/scsi/lpfc/lpfc_debugfs.c +@@ -2196,12 +2196,12 @@ static int lpfc_debugfs_ras_log_data(struct lpfc_hba *phba, + + memset(buffer, 0, size); + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + if (phba->ras_fwlog.state != ACTIVE) { +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + return -EINVAL; + } +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + list_for_each_entry_safe(dmabuf, next, 
+ &phba->ras_fwlog.fwlog_buff_list, list) { +@@ -2252,13 +2252,13 @@ lpfc_debugfs_ras_log_open(struct inode *inode, struct file *file) + int size; + int rc = -ENOMEM; + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + if (phba->ras_fwlog.state != ACTIVE) { +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + rc = -EINVAL; + goto out; + } +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + if (check_mul_overflow(LPFC_RAS_MIN_BUFF_POST_SIZE, + phba->cfg_ras_fwlog_buffsize, &size)) +diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c +index 18b8325fd419e..44d3ada9fbbcb 100644 +--- a/drivers/scsi/lpfc/lpfc_els.c ++++ b/drivers/scsi/lpfc/lpfc_els.c +@@ -4432,23 +4432,23 @@ lpfc_els_retry_delay(struct timer_list *t) + unsigned long flags; + struct lpfc_work_evt *evtp = &ndlp->els_retry_evt; + ++ /* Hold a node reference for outstanding queued work */ ++ if (!lpfc_nlp_get(ndlp)) ++ return; ++ + spin_lock_irqsave(&phba->hbalock, flags); + if (!list_empty(&evtp->evt_listp)) { + spin_unlock_irqrestore(&phba->hbalock, flags); ++ lpfc_nlp_put(ndlp); + return; + } + +- /* We need to hold the node by incrementing the reference +- * count until the queued work is done +- */ +- evtp->evt_arg1 = lpfc_nlp_get(ndlp); +- if (evtp->evt_arg1) { +- evtp->evt = LPFC_EVT_ELS_RETRY; +- list_add_tail(&evtp->evt_listp, &phba->work_list); +- lpfc_worker_wake_up(phba); +- } ++ evtp->evt_arg1 = ndlp; ++ evtp->evt = LPFC_EVT_ELS_RETRY; ++ list_add_tail(&evtp->evt_listp, &phba->work_list); + spin_unlock_irqrestore(&phba->hbalock, flags); +- return; ++ ++ lpfc_worker_wake_up(phba); + } + + /** +diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c +index 5154eeaee0ec3..93703ab6ce037 100644 +--- a/drivers/scsi/lpfc/lpfc_hbadisc.c ++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c +@@ -257,7 +257,9 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport) + if (evtp->evt_arg1) { + 
evtp->evt = LPFC_EVT_DEV_LOSS; + list_add_tail(&evtp->evt_listp, &phba->work_list); ++ spin_unlock_irqrestore(&phba->hbalock, iflags); + lpfc_worker_wake_up(phba); ++ return; + } + spin_unlock_irqrestore(&phba->hbalock, iflags); + } else { +@@ -275,10 +277,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport) + lpfc_disc_state_machine(vport, ndlp, NULL, + NLP_EVT_DEVICE_RM); + } +- + } +- +- return; + } + + /** +diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c +index 2c336953e56ca..416816d74ea1c 100644 +--- a/drivers/scsi/lpfc/lpfc_init.c ++++ b/drivers/scsi/lpfc/lpfc_init.c +@@ -7698,6 +7698,9 @@ lpfc_setup_driver_resource_phase1(struct lpfc_hba *phba) + "NVME" : " "), + (phba->nvmet_support ? "NVMET" : " ")); + ++ /* ras_fwlog state */ ++ spin_lock_init(&phba->ras_fwlog_lock); ++ + /* Initialize the IO buffer list used by driver for SLI3 SCSI */ + spin_lock_init(&phba->scsi_buf_list_get_lock); + INIT_LIST_HEAD(&phba->lpfc_scsi_buf_list_get); +@@ -13051,7 +13054,7 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba) + rc = request_threaded_irq(eqhdl->irq, + &lpfc_sli4_hba_intr_handler, + &lpfc_sli4_hba_intr_handler_th, +- IRQF_ONESHOT, name, eqhdl); ++ 0, name, eqhdl); + if (rc) { + lpfc_printf_log(phba, KERN_WARNING, LOG_INIT, + "0486 MSI-X fast-path (%d) " +diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c +index 96e11a26c297e..a7479258e8055 100644 +--- a/drivers/scsi/lpfc/lpfc_nvme.c ++++ b/drivers/scsi/lpfc/lpfc_nvme.c +@@ -2614,9 +2614,9 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + /* No concern about the role change on the nvme remoteport. + * The transport will update it. + */ +- spin_lock_irq(&vport->phba->hbalock); ++ spin_lock_irq(&ndlp->lock); + ndlp->fc4_xpt_flags |= NVME_XPT_UNREG_WAIT; +- spin_unlock_irq(&vport->phba->hbalock); ++ spin_unlock_irq(&ndlp->lock); + + /* Don't let the host nvme transport keep sending keep-alives + * on this remoteport. 
Vport is unloading, no recovery. The +diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c +index bf879d81846b6..cf506556f3b0b 100644 +--- a/drivers/scsi/lpfc/lpfc_scsi.c ++++ b/drivers/scsi/lpfc/lpfc_scsi.c +@@ -167,11 +167,10 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba) + struct Scsi_Host *shost; + struct scsi_device *sdev; + unsigned long new_queue_depth; +- unsigned long num_rsrc_err, num_cmd_success; ++ unsigned long num_rsrc_err; + int i; + + num_rsrc_err = atomic_read(&phba->num_rsrc_err); +- num_cmd_success = atomic_read(&phba->num_cmd_success); + + /* + * The error and success command counters are global per +@@ -186,20 +185,16 @@ lpfc_ramp_down_queue_handler(struct lpfc_hba *phba) + for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) { + shost = lpfc_shost_from_vport(vports[i]); + shost_for_each_device(sdev, shost) { +- new_queue_depth = +- sdev->queue_depth * num_rsrc_err / +- (num_rsrc_err + num_cmd_success); +- if (!new_queue_depth) +- new_queue_depth = sdev->queue_depth - 1; ++ if (num_rsrc_err >= sdev->queue_depth) ++ new_queue_depth = 1; + else + new_queue_depth = sdev->queue_depth - +- new_queue_depth; ++ num_rsrc_err; + scsi_change_queue_depth(sdev, new_queue_depth); + } + } + lpfc_destroy_vport_work_array(phba, vports); + atomic_set(&phba->num_rsrc_err, 0); +- atomic_set(&phba->num_cmd_success, 0); + } + + /** +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c +index 4dfadf254a727..5af669b930193 100644 +--- a/drivers/scsi/lpfc/lpfc_sli.c ++++ b/drivers/scsi/lpfc/lpfc_sli.c +@@ -1217,9 +1217,9 @@ lpfc_set_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp, + empty = list_empty(&phba->active_rrq_list); + list_add_tail(&rrq->list, &phba->active_rrq_list); + phba->hba_flag |= HBA_RRQ_ACTIVE; ++ spin_unlock_irqrestore(&phba->hbalock, iflags); + if (empty) + lpfc_worker_wake_up(phba); +- spin_unlock_irqrestore(&phba->hbalock, iflags); + return 0; + out: + 
spin_unlock_irqrestore(&phba->hbalock, iflags); +@@ -6844,9 +6844,9 @@ lpfc_ras_stop_fwlog(struct lpfc_hba *phba) + { + struct lpfc_ras_fwlog *ras_fwlog = &phba->ras_fwlog; + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + ras_fwlog->state = INACTIVE; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + /* Disable FW logging to host memory */ + writel(LPFC_CTL_PDEV_CTL_DDL_RAS, +@@ -6889,9 +6889,9 @@ lpfc_sli4_ras_dma_free(struct lpfc_hba *phba) + ras_fwlog->lwpd.virt = NULL; + } + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + ras_fwlog->state = INACTIVE; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + } + + /** +@@ -6993,9 +6993,9 @@ lpfc_sli4_ras_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + goto disable_ras; + } + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + ras_fwlog->state = ACTIVE; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + mempool_free(pmb, phba->mbox_mem_pool); + + return; +@@ -7027,9 +7027,9 @@ lpfc_sli4_ras_fwlog_init(struct lpfc_hba *phba, + uint32_t len = 0, fwlog_buffsize, fwlog_entry_count; + int rc = 0; + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + ras_fwlog->state = INACTIVE; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + + fwlog_buffsize = (LPFC_RAS_MIN_BUFF_POST_SIZE * + phba->cfg_ras_fwlog_buffsize); +@@ -7090,9 +7090,9 @@ lpfc_sli4_ras_fwlog_init(struct lpfc_hba *phba, + mbx_fwlog->u.request.lwpd.addr_lo = putPaddrLow(ras_fwlog->lwpd.phys); + mbx_fwlog->u.request.lwpd.addr_hi = putPaddrHigh(ras_fwlog->lwpd.phys); + +- spin_lock_irq(&phba->hbalock); ++ spin_lock_irq(&phba->ras_fwlog_lock); + ras_fwlog->state = REG_INPROGRESS; +- spin_unlock_irq(&phba->hbalock); ++ spin_unlock_irq(&phba->ras_fwlog_lock); + mbox->vport = phba->pport; + mbox->mbox_cmpl = lpfc_sli4_ras_mbox_cmpl; + +@@ 
-11369,18 +11369,18 @@ lpfc_sli_post_recovery_event(struct lpfc_hba *phba, + unsigned long iflags; + struct lpfc_work_evt *evtp = &ndlp->recovery_evt; + ++ /* Hold a node reference for outstanding queued work */ ++ if (!lpfc_nlp_get(ndlp)) ++ return; ++ + spin_lock_irqsave(&phba->hbalock, iflags); + if (!list_empty(&evtp->evt_listp)) { + spin_unlock_irqrestore(&phba->hbalock, iflags); ++ lpfc_nlp_put(ndlp); + return; + } + +- /* Incrementing the reference count until the queued work is done. */ +- evtp->evt_arg1 = lpfc_nlp_get(ndlp); +- if (!evtp->evt_arg1) { +- spin_unlock_irqrestore(&phba->hbalock, iflags); +- return; +- } ++ evtp->evt_arg1 = ndlp; + evtp->evt = LPFC_EVT_RECOVER_PORT; + list_add_tail(&evtp->evt_listp, &phba->work_list); + spin_unlock_irqrestore(&phba->hbalock, iflags); +diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c +index 6c7559cf1a4b6..9e0e9e02d2c47 100644 +--- a/drivers/scsi/lpfc/lpfc_vport.c ++++ b/drivers/scsi/lpfc/lpfc_vport.c +@@ -683,10 +683,6 @@ lpfc_vport_delete(struct fc_vport *fc_vport) + lpfc_free_sysfs_attr(vport); + lpfc_debugfs_terminate(vport); + +- /* Remove FC host to break driver binding. */ +- fc_remove_host(shost); +- scsi_remove_host(shost); +- + /* Send the DA_ID and Fabric LOGO to cleanup Nameserver entries. */ + ndlp = lpfc_findnode_did(vport, Fabric_DID); + if (!ndlp) +@@ -730,6 +726,10 @@ lpfc_vport_delete(struct fc_vport *fc_vport) + + skip_logo: + ++ /* Remove FC host to break driver binding. */ ++ fc_remove_host(shost); ++ scsi_remove_host(shost); ++ + lpfc_cleanup(vport); + + /* Remove scsi host now. The nodes are cleaned up. 
*/ +diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c +index 9dacbb8570c93..aa5b535e6662b 100644 +--- a/drivers/scsi/mpi3mr/mpi3mr_app.c ++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c +@@ -1345,7 +1345,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job, unsigned int *reply + if ((mpirep_offset != 0xFF) && + drv_bufs[mpirep_offset].bsg_buf_len) { + drv_buf_iter = &drv_bufs[mpirep_offset]; +- drv_buf_iter->kern_buf_len = (sizeof(*bsg_reply_buf) - 1 + ++ drv_buf_iter->kern_buf_len = (sizeof(*bsg_reply_buf) + + mrioc->reply_sz); + bsg_reply_buf = kzalloc(drv_buf_iter->kern_buf_len, GFP_KERNEL); + +diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c +index 77aa6d26476cd..0da5d9d1af037 100644 +--- a/drivers/slimbus/qcom-ngd-ctrl.c ++++ b/drivers/slimbus/qcom-ngd-ctrl.c +@@ -1451,7 +1451,11 @@ static void qcom_slim_ngd_up_worker(struct work_struct *work) + ctrl = container_of(work, struct qcom_slim_ngd_ctrl, ngd_up_work); + + /* Make sure qmi service is up before continuing */ +- wait_for_completion_interruptible(&ctrl->qmi_up); ++ if (!wait_for_completion_interruptible_timeout(&ctrl->qmi_up, ++ msecs_to_jiffies(MSEC_PER_SEC))) { ++ dev_err(ctrl->dev, "QMI wait timeout\n"); ++ return; ++ } + + mutex_lock(&ctrl->ssr_lock); + qcom_slim_ngd_enable(ctrl, true); +diff --git a/drivers/spi/spi-axi-spi-engine.c b/drivers/spi/spi-axi-spi-engine.c +index 0258c9a72fdcc..9faee4fcc049a 100644 +--- a/drivers/spi/spi-axi-spi-engine.c ++++ b/drivers/spi/spi-axi-spi-engine.c +@@ -6,6 +6,8 @@ + */ + + #include ++#include ++#include + #include + #include + #include +@@ -13,12 +15,6 @@ + #include + #include + +-#define SPI_ENGINE_VERSION_MAJOR(x) ((x >> 16) & 0xff) +-#define SPI_ENGINE_VERSION_MINOR(x) ((x >> 8) & 0xff) +-#define SPI_ENGINE_VERSION_PATCH(x) (x & 0xff) +- +-#define SPI_ENGINE_REG_VERSION 0x00 +- + #define SPI_ENGINE_REG_RESET 0x40 + + #define SPI_ENGINE_REG_INT_ENABLE 0x80 +@@ -78,28 +74,42 @@ struct 
spi_engine_program { + uint16_t instructions[]; + }; + +-struct spi_engine { +- struct clk *clk; +- struct clk *ref_clk; +- +- spinlock_t lock; +- +- void __iomem *base; +- +- struct spi_message *msg; ++/** ++ * struct spi_engine_message_state - SPI engine per-message state ++ */ ++struct spi_engine_message_state { ++ /** Instructions for executing this message. */ + struct spi_engine_program *p; ++ /** Number of elements in cmd_buf array. */ + unsigned cmd_length; ++ /** Array of commands not yet written to CMD FIFO. */ + const uint16_t *cmd_buf; +- ++ /** Next xfer with tx_buf not yet fully written to TX FIFO. */ + struct spi_transfer *tx_xfer; ++ /** Size of tx_buf in bytes. */ + unsigned int tx_length; ++ /** Bytes not yet written to TX FIFO. */ + const uint8_t *tx_buf; +- ++ /** Next xfer with rx_buf not yet fully written to RX FIFO. */ + struct spi_transfer *rx_xfer; ++ /** Size of tx_buf in bytes. */ + unsigned int rx_length; ++ /** Bytes not yet written to the RX FIFO. */ + uint8_t *rx_buf; ++ /** ID to correlate SYNC interrupts with this message. 
*/ ++ u8 sync_id; ++}; ++ ++struct spi_engine { ++ struct clk *clk; ++ struct clk *ref_clk; ++ ++ spinlock_t lock; ++ ++ void __iomem *base; + +- unsigned int sync_id; ++ struct spi_message *msg; ++ struct ida sync_ida; + unsigned int completed_id; + + unsigned int int_enable; +@@ -258,100 +268,105 @@ static void spi_engine_xfer_next(struct spi_engine *spi_engine, + + static void spi_engine_tx_next(struct spi_engine *spi_engine) + { +- struct spi_transfer *xfer = spi_engine->tx_xfer; ++ struct spi_engine_message_state *st = spi_engine->msg->state; ++ struct spi_transfer *xfer = st->tx_xfer; + + do { + spi_engine_xfer_next(spi_engine, &xfer); + } while (xfer && !xfer->tx_buf); + +- spi_engine->tx_xfer = xfer; ++ st->tx_xfer = xfer; + if (xfer) { +- spi_engine->tx_length = xfer->len; +- spi_engine->tx_buf = xfer->tx_buf; ++ st->tx_length = xfer->len; ++ st->tx_buf = xfer->tx_buf; + } else { +- spi_engine->tx_buf = NULL; ++ st->tx_buf = NULL; + } + } + + static void spi_engine_rx_next(struct spi_engine *spi_engine) + { +- struct spi_transfer *xfer = spi_engine->rx_xfer; ++ struct spi_engine_message_state *st = spi_engine->msg->state; ++ struct spi_transfer *xfer = st->rx_xfer; + + do { + spi_engine_xfer_next(spi_engine, &xfer); + } while (xfer && !xfer->rx_buf); + +- spi_engine->rx_xfer = xfer; ++ st->rx_xfer = xfer; + if (xfer) { +- spi_engine->rx_length = xfer->len; +- spi_engine->rx_buf = xfer->rx_buf; ++ st->rx_length = xfer->len; ++ st->rx_buf = xfer->rx_buf; + } else { +- spi_engine->rx_buf = NULL; ++ st->rx_buf = NULL; + } + } + + static bool spi_engine_write_cmd_fifo(struct spi_engine *spi_engine) + { + void __iomem *addr = spi_engine->base + SPI_ENGINE_REG_CMD_FIFO; ++ struct spi_engine_message_state *st = spi_engine->msg->state; + unsigned int n, m, i; + const uint16_t *buf; + + n = readl_relaxed(spi_engine->base + SPI_ENGINE_REG_CMD_FIFO_ROOM); +- while (n && spi_engine->cmd_length) { +- m = min(n, spi_engine->cmd_length); +- buf = spi_engine->cmd_buf; ++ 
while (n && st->cmd_length) { ++ m = min(n, st->cmd_length); ++ buf = st->cmd_buf; + for (i = 0; i < m; i++) + writel_relaxed(buf[i], addr); +- spi_engine->cmd_buf += m; +- spi_engine->cmd_length -= m; ++ st->cmd_buf += m; ++ st->cmd_length -= m; + n -= m; + } + +- return spi_engine->cmd_length != 0; ++ return st->cmd_length != 0; + } + + static bool spi_engine_write_tx_fifo(struct spi_engine *spi_engine) + { + void __iomem *addr = spi_engine->base + SPI_ENGINE_REG_SDO_DATA_FIFO; ++ struct spi_engine_message_state *st = spi_engine->msg->state; + unsigned int n, m, i; + const uint8_t *buf; + + n = readl_relaxed(spi_engine->base + SPI_ENGINE_REG_SDO_FIFO_ROOM); +- while (n && spi_engine->tx_length) { +- m = min(n, spi_engine->tx_length); +- buf = spi_engine->tx_buf; ++ while (n && st->tx_length) { ++ m = min(n, st->tx_length); ++ buf = st->tx_buf; + for (i = 0; i < m; i++) + writel_relaxed(buf[i], addr); +- spi_engine->tx_buf += m; +- spi_engine->tx_length -= m; ++ st->tx_buf += m; ++ st->tx_length -= m; + n -= m; +- if (spi_engine->tx_length == 0) ++ if (st->tx_length == 0) + spi_engine_tx_next(spi_engine); + } + +- return spi_engine->tx_length != 0; ++ return st->tx_length != 0; + } + + static bool spi_engine_read_rx_fifo(struct spi_engine *spi_engine) + { + void __iomem *addr = spi_engine->base + SPI_ENGINE_REG_SDI_DATA_FIFO; ++ struct spi_engine_message_state *st = spi_engine->msg->state; + unsigned int n, m, i; + uint8_t *buf; + + n = readl_relaxed(spi_engine->base + SPI_ENGINE_REG_SDI_FIFO_LEVEL); +- while (n && spi_engine->rx_length) { +- m = min(n, spi_engine->rx_length); +- buf = spi_engine->rx_buf; ++ while (n && st->rx_length) { ++ m = min(n, st->rx_length); ++ buf = st->rx_buf; + for (i = 0; i < m; i++) + buf[i] = readl_relaxed(addr); +- spi_engine->rx_buf += m; +- spi_engine->rx_length -= m; ++ st->rx_buf += m; ++ st->rx_length -= m; + n -= m; +- if (spi_engine->rx_length == 0) ++ if (st->rx_length == 0) + spi_engine_rx_next(spi_engine); + } + +- return 
spi_engine->rx_length != 0; ++ return st->rx_length != 0; + } + + static irqreturn_t spi_engine_irq(int irq, void *devid) +@@ -387,12 +402,16 @@ static irqreturn_t spi_engine_irq(int irq, void *devid) + disable_int |= SPI_ENGINE_INT_SDI_ALMOST_FULL; + } + +- if (pending & SPI_ENGINE_INT_SYNC) { +- if (spi_engine->msg && +- spi_engine->completed_id == spi_engine->sync_id) { ++ if (pending & SPI_ENGINE_INT_SYNC && spi_engine->msg) { ++ struct spi_engine_message_state *st = spi_engine->msg->state; ++ ++ if (spi_engine->completed_id == st->sync_id) { + struct spi_message *msg = spi_engine->msg; ++ struct spi_engine_message_state *st = msg->state; + +- kfree(spi_engine->p); ++ ida_free(&spi_engine->sync_ida, st->sync_id); ++ kfree(st->p); ++ kfree(st); + msg->status = 0; + msg->actual_length = msg->frame_length; + spi_engine->msg = NULL; +@@ -417,29 +436,46 @@ static int spi_engine_transfer_one_message(struct spi_controller *host, + { + struct spi_engine_program p_dry, *p; + struct spi_engine *spi_engine = spi_controller_get_devdata(host); ++ struct spi_engine_message_state *st; + unsigned int int_enable = 0; + unsigned long flags; + size_t size; ++ int ret; ++ ++ st = kzalloc(sizeof(*st), GFP_KERNEL); ++ if (!st) ++ return -ENOMEM; + + p_dry.length = 0; + spi_engine_compile_message(spi_engine, msg, true, &p_dry); + + size = sizeof(*p->instructions) * (p_dry.length + 1); + p = kzalloc(sizeof(*p) + size, GFP_KERNEL); +- if (!p) ++ if (!p) { ++ kfree(st); + return -ENOMEM; ++ } ++ ++ ret = ida_alloc_range(&spi_engine->sync_ida, 0, U8_MAX, GFP_KERNEL); ++ if (ret < 0) { ++ kfree(p); ++ kfree(st); ++ return ret; ++ } ++ ++ st->sync_id = ret; ++ + spi_engine_compile_message(spi_engine, msg, false, p); + + spin_lock_irqsave(&spi_engine->lock, flags); +- spi_engine->sync_id = (spi_engine->sync_id + 1) & 0xff; +- spi_engine_program_add_cmd(p, false, +- SPI_ENGINE_CMD_SYNC(spi_engine->sync_id)); ++ spi_engine_program_add_cmd(p, false, SPI_ENGINE_CMD_SYNC(st->sync_id)); + ++ 
msg->state = st; + spi_engine->msg = msg; +- spi_engine->p = p; ++ st->p = p; + +- spi_engine->cmd_buf = p->instructions; +- spi_engine->cmd_length = p->length; ++ st->cmd_buf = p->instructions; ++ st->cmd_length = p->length; + if (spi_engine_write_cmd_fifo(spi_engine)) + int_enable |= SPI_ENGINE_INT_CMD_ALMOST_EMPTY; + +@@ -448,7 +484,7 @@ static int spi_engine_transfer_one_message(struct spi_controller *host, + int_enable |= SPI_ENGINE_INT_SDO_ALMOST_EMPTY; + + spi_engine_rx_next(spi_engine); +- if (spi_engine->rx_length != 0) ++ if (st->rx_length != 0) + int_enable |= SPI_ENGINE_INT_SDI_ALMOST_FULL; + + int_enable |= SPI_ENGINE_INT_SYNC; +@@ -473,52 +509,34 @@ static int spi_engine_probe(struct platform_device *pdev) + if (irq < 0) + return irq; + +- spi_engine = devm_kzalloc(&pdev->dev, sizeof(*spi_engine), GFP_KERNEL); +- if (!spi_engine) +- return -ENOMEM; +- +- host = spi_alloc_host(&pdev->dev, 0); ++ host = devm_spi_alloc_host(&pdev->dev, sizeof(*spi_engine)); + if (!host) + return -ENOMEM; + +- spi_controller_set_devdata(host, spi_engine); ++ spi_engine = spi_controller_get_devdata(host); + + spin_lock_init(&spi_engine->lock); ++ ida_init(&spi_engine->sync_ida); + +- spi_engine->clk = devm_clk_get(&pdev->dev, "s_axi_aclk"); +- if (IS_ERR(spi_engine->clk)) { +- ret = PTR_ERR(spi_engine->clk); +- goto err_put_host; +- } +- +- spi_engine->ref_clk = devm_clk_get(&pdev->dev, "spi_clk"); +- if (IS_ERR(spi_engine->ref_clk)) { +- ret = PTR_ERR(spi_engine->ref_clk); +- goto err_put_host; +- } ++ spi_engine->clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk"); ++ if (IS_ERR(spi_engine->clk)) ++ return PTR_ERR(spi_engine->clk); + +- ret = clk_prepare_enable(spi_engine->clk); +- if (ret) +- goto err_put_host; +- +- ret = clk_prepare_enable(spi_engine->ref_clk); +- if (ret) +- goto err_clk_disable; ++ spi_engine->ref_clk = devm_clk_get_enabled(&pdev->dev, "spi_clk"); ++ if (IS_ERR(spi_engine->ref_clk)) ++ return PTR_ERR(spi_engine->ref_clk); + + spi_engine->base = 
devm_platform_ioremap_resource(pdev, 0); +- if (IS_ERR(spi_engine->base)) { +- ret = PTR_ERR(spi_engine->base); +- goto err_ref_clk_disable; +- } +- +- version = readl(spi_engine->base + SPI_ENGINE_REG_VERSION); +- if (SPI_ENGINE_VERSION_MAJOR(version) != 1) { +- dev_err(&pdev->dev, "Unsupported peripheral version %u.%u.%c\n", +- SPI_ENGINE_VERSION_MAJOR(version), +- SPI_ENGINE_VERSION_MINOR(version), +- SPI_ENGINE_VERSION_PATCH(version)); +- ret = -ENODEV; +- goto err_ref_clk_disable; ++ if (IS_ERR(spi_engine->base)) ++ return PTR_ERR(spi_engine->base); ++ ++ version = readl(spi_engine->base + ADI_AXI_REG_VERSION); ++ if (ADI_AXI_PCORE_VER_MAJOR(version) != 1) { ++ dev_err(&pdev->dev, "Unsupported peripheral version %u.%u.%u\n", ++ ADI_AXI_PCORE_VER_MAJOR(version), ++ ADI_AXI_PCORE_VER_MINOR(version), ++ ADI_AXI_PCORE_VER_PATCH(version)); ++ return -ENODEV; + } + + writel_relaxed(0x00, spi_engine->base + SPI_ENGINE_REG_RESET); +@@ -527,7 +545,7 @@ static int spi_engine_probe(struct platform_device *pdev) + + ret = request_irq(irq, spi_engine_irq, 0, pdev->name, host); + if (ret) +- goto err_ref_clk_disable; ++ return ret; + + host->dev.of_node = pdev->dev.of_node; + host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_3WIRE; +@@ -545,18 +563,12 @@ static int spi_engine_probe(struct platform_device *pdev) + return 0; + err_free_irq: + free_irq(irq, host); +-err_ref_clk_disable: +- clk_disable_unprepare(spi_engine->ref_clk); +-err_clk_disable: +- clk_disable_unprepare(spi_engine->clk); +-err_put_host: +- spi_controller_put(host); + return ret; + } + + static void spi_engine_remove(struct platform_device *pdev) + { +- struct spi_controller *host = spi_controller_get(platform_get_drvdata(pdev)); ++ struct spi_controller *host = platform_get_drvdata(pdev); + struct spi_engine *spi_engine = spi_controller_get_devdata(host); + int irq = platform_get_irq(pdev, 0); + +@@ -564,14 +576,9 @@ static void spi_engine_remove(struct platform_device *pdev) + + free_irq(irq, host); + +- 
spi_controller_put(host); +- + writel_relaxed(0xff, spi_engine->base + SPI_ENGINE_REG_INT_PENDING); + writel_relaxed(0x00, spi_engine->base + SPI_ENGINE_REG_INT_ENABLE); + writel_relaxed(0x01, spi_engine->base + SPI_ENGINE_REG_RESET); +- +- clk_disable_unprepare(spi_engine->ref_clk); +- clk_disable_unprepare(spi_engine->clk); + } + + static const struct of_device_id spi_engine_match_table[] = { +diff --git a/drivers/spi/spi-hisi-kunpeng.c b/drivers/spi/spi-hisi-kunpeng.c +index 35ef5e8e2ffd2..77e9738e42f60 100644 +--- a/drivers/spi/spi-hisi-kunpeng.c ++++ b/drivers/spi/spi-hisi-kunpeng.c +@@ -151,8 +151,6 @@ static const struct debugfs_reg32 hisi_spi_regs[] = { + HISI_SPI_DBGFS_REG("ENR", HISI_SPI_ENR), + HISI_SPI_DBGFS_REG("FIFOC", HISI_SPI_FIFOC), + HISI_SPI_DBGFS_REG("IMR", HISI_SPI_IMR), +- HISI_SPI_DBGFS_REG("DIN", HISI_SPI_DIN), +- HISI_SPI_DBGFS_REG("DOUT", HISI_SPI_DOUT), + HISI_SPI_DBGFS_REG("SR", HISI_SPI_SR), + HISI_SPI_DBGFS_REG("RISR", HISI_SPI_RISR), + HISI_SPI_DBGFS_REG("ISR", HISI_SPI_ISR), +diff --git a/drivers/spi/spi-microchip-core-qspi.c b/drivers/spi/spi-microchip-core-qspi.c +index 4f76ddf97b10f..32a0fa4ba50f7 100644 +--- a/drivers/spi/spi-microchip-core-qspi.c ++++ b/drivers/spi/spi-microchip-core-qspi.c +@@ -283,6 +283,7 @@ static int mchp_coreqspi_setup_clock(struct mchp_coreqspi *qspi, struct spi_devi + } + + control = readl_relaxed(qspi->regs + REG_CONTROL); ++ control &= ~CONTROL_CLKRATE_MASK; + control |= baud_rate_val << CONTROL_CLKRATE_SHIFT; + writel_relaxed(control, qspi->regs + REG_CONTROL); + control = readl_relaxed(qspi->regs + REG_CONTROL); +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c +index 1e08cd571d21a..76383ddbd6a6f 100644 +--- a/drivers/spi/spi.c ++++ b/drivers/spi/spi.c +@@ -4261,6 +4261,7 @@ static int __spi_sync(struct spi_device *spi, struct spi_message *message) + wait_for_completion(&done); + status = message->status; + } ++ message->complete = NULL; + message->context = NULL; + + return status; +diff --git 
a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c +index d5860c1c1f469..9a88774836c9d 100644 +--- a/drivers/target/target_core_configfs.c ++++ b/drivers/target/target_core_configfs.c +@@ -3634,6 +3634,8 @@ static int __init target_core_init_configfs(void) + { + struct configfs_subsystem *subsys = &target_core_fabrics; + struct t10_alua_lu_gp *lu_gp; ++ struct cred *kern_cred; ++ const struct cred *old_cred; + int ret; + + pr_debug("TARGET_CORE[0]: Loading Generic Kernel Storage" +@@ -3710,11 +3712,21 @@ static int __init target_core_init_configfs(void) + if (ret < 0) + goto out; + ++ /* We use the kernel credentials to access the target directory */ ++ kern_cred = prepare_kernel_cred(&init_task); ++ if (!kern_cred) { ++ ret = -ENOMEM; ++ goto out; ++ } ++ old_cred = override_creds(kern_cred); + target_init_dbroot(); ++ revert_creds(old_cred); ++ put_cred(kern_cred); + + return 0; + + out: ++ target_xcopy_release_pt(); + configfs_unregister_subsystem(subsys); + core_dev_release_virtual_lun0(); + rd_module_exit(); +diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c +index 0787456c2b892..c873fd8239427 100644 +--- a/drivers/ufs/core/ufs-mcq.c ++++ b/drivers/ufs/core/ufs-mcq.c +@@ -94,7 +94,7 @@ void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds) + + val = ufshcd_readl(hba, REG_UFS_MCQ_CFG); + val &= ~MCQ_CFG_MAC_MASK; +- val |= FIELD_PREP(MCQ_CFG_MAC_MASK, max_active_cmds); ++ val |= FIELD_PREP(MCQ_CFG_MAC_MASK, max_active_cmds - 1); + ufshcd_writel(hba, val, REG_UFS_MCQ_CFG); + } + EXPORT_SYMBOL_GPL(ufshcd_mcq_config_mac); +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c +index ee9119b708f01..344806330be16 100644 +--- a/drivers/ufs/core/ufshcd.c ++++ b/drivers/ufs/core/ufshcd.c +@@ -3064,7 +3064,9 @@ static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba, + + /* MCQ mode */ + if (is_mcq_enabled(hba)) { +- err = ufshcd_clear_cmd(hba, lrbp->task_tag); ++ /* successfully cleared the 
command, retry if needed */ ++ if (ufshcd_clear_cmd(hba, lrbp->task_tag) == 0) ++ err = -EAGAIN; + hba->dev_cmd.complete = NULL; + return err; + } +@@ -9547,7 +9549,10 @@ static int __ufshcd_wl_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op) + + /* UFS device & link must be active before we enter in this function */ + if (!ufshcd_is_ufs_dev_active(hba) || !ufshcd_is_link_active(hba)) { +- ret = -EINVAL; ++ /* Wait err handler finish or trigger err recovery */ ++ if (!ufshcd_eh_in_progress(hba)) ++ ufshcd_force_error_recovery(hba); ++ ret = -EBUSY; + goto enable_scaling; + } + +diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c +index 20d9762331bd7..6be3462b109ff 100644 +--- a/drivers/uio/uio_hv_generic.c ++++ b/drivers/uio/uio_hv_generic.c +@@ -181,12 +181,14 @@ hv_uio_cleanup(struct hv_device *dev, struct hv_uio_private_data *pdata) + { + if (pdata->send_gpadl.gpadl_handle) { + vmbus_teardown_gpadl(dev->channel, &pdata->send_gpadl); +- vfree(pdata->send_buf); ++ if (!pdata->send_gpadl.decrypted) ++ vfree(pdata->send_buf); + } + + if (pdata->recv_gpadl.gpadl_handle) { + vmbus_teardown_gpadl(dev->channel, &pdata->recv_gpadl); +- vfree(pdata->recv_buf); ++ if (!pdata->recv_gpadl.decrypted) ++ vfree(pdata->recv_buf); + } + } + +@@ -295,7 +297,8 @@ hv_uio_probe(struct hv_device *dev, + ret = vmbus_establish_gpadl(channel, pdata->recv_buf, + RECV_BUFFER_SIZE, &pdata->recv_gpadl); + if (ret) { +- vfree(pdata->recv_buf); ++ if (!pdata->recv_gpadl.decrypted) ++ vfree(pdata->recv_buf); + goto fail_close; + } + +@@ -317,7 +320,8 @@ hv_uio_probe(struct hv_device *dev, + ret = vmbus_establish_gpadl(channel, pdata->send_buf, + SEND_BUFFER_SIZE, &pdata->send_gpadl); + if (ret) { +- vfree(pdata->send_buf); ++ if (!pdata->send_gpadl.decrypted) ++ vfree(pdata->send_buf); + goto fail_close; + } + +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 7417972202b8b..1ba3feb5e1900 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c 
+@@ -5077,9 +5077,10 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1, + } + if (usb_endpoint_maxp(&udev->ep0.desc) == i) { + ; /* Initial ep0 maxpacket guess is right */ +- } else if ((udev->speed == USB_SPEED_FULL || ++ } else if (((udev->speed == USB_SPEED_FULL || + udev->speed == USB_SPEED_HIGH) && +- (i == 8 || i == 16 || i == 32 || i == 64)) { ++ (i == 8 || i == 16 || i == 32 || i == 64)) || ++ (udev->speed >= USB_SPEED_SUPER && i > 0)) { + /* Initial guess is wrong; use the descriptor's value */ + if (udev->speed == USB_SPEED_FULL) + dev_dbg(&udev->dev, "ep0 maxpacket = %d\n", i); +diff --git a/drivers/usb/core/port.c b/drivers/usb/core/port.c +index 76e00bfedc25c..5fb3f55ef06db 100644 +--- a/drivers/usb/core/port.c ++++ b/drivers/usb/core/port.c +@@ -50,13 +50,15 @@ static ssize_t disable_show(struct device *dev, + struct usb_port *port_dev = to_usb_port(dev); + struct usb_device *hdev = to_usb_device(dev->parent->parent); + struct usb_hub *hub = usb_hub_to_struct_hub(hdev); +- struct usb_interface *intf = to_usb_interface(hub->intfdev); ++ struct usb_interface *intf = to_usb_interface(dev->parent); + int port1 = port_dev->portnum; + u16 portstatus, unused; + bool disabled; + int rc; + struct kernfs_node *kn; + ++ if (!hub) ++ return -ENODEV; + hub_get(hub); + rc = usb_autopm_get_interface(intf); + if (rc < 0) +@@ -100,12 +102,14 @@ static ssize_t disable_store(struct device *dev, struct device_attribute *attr, + struct usb_port *port_dev = to_usb_port(dev); + struct usb_device *hdev = to_usb_device(dev->parent->parent); + struct usb_hub *hub = usb_hub_to_struct_hub(hdev); +- struct usb_interface *intf = to_usb_interface(hub->intfdev); ++ struct usb_interface *intf = to_usb_interface(dev->parent); + int port1 = port_dev->portnum; + bool disabled; + int rc; + struct kernfs_node *kn; + ++ if (!hub) ++ return -ENODEV; + rc = kstrtobool(buf, &disabled); + if (rc) + return rc; +diff --git a/drivers/usb/dwc3/core.c 
b/drivers/usb/dwc3/core.c +index 7d50c81ce97ee..1f66578c671e7 100644 +--- a/drivers/usb/dwc3/core.c ++++ b/drivers/usb/dwc3/core.c +@@ -104,6 +104,27 @@ static int dwc3_get_dr_mode(struct dwc3 *dwc) + return 0; + } + ++void dwc3_enable_susphy(struct dwc3 *dwc, bool enable) ++{ ++ u32 reg; ++ ++ reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)); ++ if (enable && !dwc->dis_u3_susphy_quirk) ++ reg |= DWC3_GUSB3PIPECTL_SUSPHY; ++ else ++ reg &= ~DWC3_GUSB3PIPECTL_SUSPHY; ++ ++ dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), reg); ++ ++ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); ++ if (enable && !dwc->dis_u2_susphy_quirk) ++ reg |= DWC3_GUSB2PHYCFG_SUSPHY; ++ else ++ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY; ++ ++ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg); ++} ++ + void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode) + { + u32 reg; +@@ -585,11 +606,8 @@ static int dwc3_core_ulpi_init(struct dwc3 *dwc) + */ + static int dwc3_phy_setup(struct dwc3 *dwc) + { +- unsigned int hw_mode; + u32 reg; + +- hw_mode = DWC3_GHWPARAMS0_MODE(dwc->hwparams.hwparams0); +- + reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)); + + /* +@@ -599,21 +617,16 @@ static int dwc3_phy_setup(struct dwc3 *dwc) + reg &= ~DWC3_GUSB3PIPECTL_UX_EXIT_PX; + + /* +- * Above 1.94a, it is recommended to set DWC3_GUSB3PIPECTL_SUSPHY +- * to '0' during coreConsultant configuration. So default value +- * will be '0' when the core is reset. Application needs to set it +- * to '1' after the core initialization is completed. +- */ +- if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) +- reg |= DWC3_GUSB3PIPECTL_SUSPHY; +- +- /* +- * For DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be cleared after +- * power-on reset, and it can be set after core initialization, which is +- * after device soft-reset during initialization. ++ * Above DWC_usb3.0 1.94a, it is recommended to set ++ * DWC3_GUSB3PIPECTL_SUSPHY to '0' during coreConsultant configuration. ++ * So default value will be '0' when the core is reset. 
Application ++ * needs to set it to '1' after the core initialization is completed. ++ * ++ * Similarly for DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be ++ * cleared after power-on reset, and it can be set after core ++ * initialization. + */ +- if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD) +- reg &= ~DWC3_GUSB3PIPECTL_SUSPHY; ++ reg &= ~DWC3_GUSB3PIPECTL_SUSPHY; + + if (dwc->u2ss_inp3_quirk) + reg |= DWC3_GUSB3PIPECTL_U2SSINP3OK; +@@ -639,9 +652,6 @@ static int dwc3_phy_setup(struct dwc3 *dwc) + if (dwc->tx_de_emphasis_quirk) + reg |= DWC3_GUSB3PIPECTL_TX_DEEPH(dwc->tx_de_emphasis); + +- if (dwc->dis_u3_susphy_quirk) +- reg &= ~DWC3_GUSB3PIPECTL_SUSPHY; +- + if (dwc->dis_del_phy_power_chg_quirk) + reg &= ~DWC3_GUSB3PIPECTL_DEPOCHANGE; + +@@ -689,24 +699,15 @@ static int dwc3_phy_setup(struct dwc3 *dwc) + } + + /* +- * Above 1.94a, it is recommended to set DWC3_GUSB2PHYCFG_SUSPHY to +- * '0' during coreConsultant configuration. So default value will +- * be '0' when the core is reset. Application needs to set it to +- * '1' after the core initialization is completed. +- */ +- if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) +- reg |= DWC3_GUSB2PHYCFG_SUSPHY; +- +- /* +- * For DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared after +- * power-on reset, and it can be set after core initialization, which is +- * after device soft-reset during initialization. ++ * Above DWC_usb3.0 1.94a, it is recommended to set ++ * DWC3_GUSB2PHYCFG_SUSPHY to '0' during coreConsultant configuration. ++ * So default value will be '0' when the core is reset. Application ++ * needs to set it to '1' after the core initialization is completed. ++ * ++ * Similarly for DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared ++ * after power-on reset, and it can be set after core initialization. 
+ */ +- if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD) +- reg &= ~DWC3_GUSB2PHYCFG_SUSPHY; +- +- if (dwc->dis_u2_susphy_quirk) +- reg &= ~DWC3_GUSB2PHYCFG_SUSPHY; ++ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY; + + if (dwc->dis_enblslpm_quirk) + reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM; +@@ -1213,21 +1214,6 @@ static int dwc3_core_init(struct dwc3 *dwc) + if (ret) + goto err_exit_phy; + +- if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD && +- !DWC3_VER_IS_WITHIN(DWC3, ANY, 194A)) { +- if (!dwc->dis_u3_susphy_quirk) { +- reg = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)); +- reg |= DWC3_GUSB3PIPECTL_SUSPHY; +- dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), reg); +- } +- +- if (!dwc->dis_u2_susphy_quirk) { +- reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); +- reg |= DWC3_GUSB2PHYCFG_SUSPHY; +- dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg); +- } +- } +- + dwc3_core_setup_global_control(dwc); + dwc3_core_num_eps(dwc); + +diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h +index 5e6ead35dffc2..69b734a7bae1c 100644 +--- a/drivers/usb/dwc3/core.h ++++ b/drivers/usb/dwc3/core.h +@@ -1567,6 +1567,7 @@ int dwc3_event_buffers_setup(struct dwc3 *dwc); + void dwc3_event_buffers_cleanup(struct dwc3 *dwc); + + int dwc3_core_soft_reset(struct dwc3 *dwc); ++void dwc3_enable_susphy(struct dwc3 *dwc, bool enable); + + #if IS_ENABLED(CONFIG_USB_DWC3_HOST) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE) + int dwc3_host_init(struct dwc3 *dwc); +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index 07820b1a88a24..4062a486b9e63 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -2937,6 +2937,7 @@ static int __dwc3_gadget_start(struct dwc3 *dwc) + dwc3_ep0_out_start(dwc); + + dwc3_gadget_enable_irq(dwc); ++ dwc3_enable_susphy(dwc, true); + + return 0; + +@@ -4703,6 +4704,7 @@ void dwc3_gadget_exit(struct dwc3 *dwc) + if (!dwc->gadget) + return; + ++ dwc3_enable_susphy(dwc, false); + usb_del_gadget(dwc->gadget); + dwc3_gadget_free_endpoints(dwc); + 
usb_put_gadget(dwc->gadget); +diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c +index f6a020d77fa18..6c143f7d24103 100644 +--- a/drivers/usb/dwc3/host.c ++++ b/drivers/usb/dwc3/host.c +@@ -10,9 +10,30 @@ + #include + #include + #include ++#include ++#include + ++#include "../host/xhci-plat.h" + #include "core.h" + ++static void dwc3_xhci_plat_start(struct usb_hcd *hcd) ++{ ++ struct platform_device *pdev; ++ struct dwc3 *dwc; ++ ++ if (!usb_hcd_is_primary_hcd(hcd)) ++ return; ++ ++ pdev = to_platform_device(hcd->self.controller); ++ dwc = dev_get_drvdata(pdev->dev.parent); ++ ++ dwc3_enable_susphy(dwc, true); ++} ++ ++static const struct xhci_plat_priv dwc3_xhci_plat_quirk = { ++ .plat_start = dwc3_xhci_plat_start, ++}; ++ + static void dwc3_host_fill_xhci_irq_res(struct dwc3 *dwc, + int irq, char *name) + { +@@ -117,6 +138,11 @@ int dwc3_host_init(struct dwc3 *dwc) + } + } + ++ ret = platform_device_add_data(xhci, &dwc3_xhci_plat_quirk, ++ sizeof(struct xhci_plat_priv)); ++ if (ret) ++ goto err; ++ + ret = platform_device_add(xhci); + if (ret) { + dev_err(dwc->dev, "failed to register xHCI device\n"); +@@ -142,6 +168,7 @@ void dwc3_host_exit(struct dwc3 *dwc) + if (dwc->sys_wakeup) + device_init_wakeup(&dwc->xhci->dev, false); + ++ dwc3_enable_susphy(dwc, false); + platform_device_unregister(dwc->xhci); + dwc->xhci = NULL; + } +diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c +index 0ace45b66a31c..0e151b54aae82 100644 +--- a/drivers/usb/gadget/composite.c ++++ b/drivers/usb/gadget/composite.c +@@ -2112,7 +2112,7 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl) + buf[5] = 0x01; + switch (ctrl->bRequestType & USB_RECIP_MASK) { + case USB_RECIP_DEVICE: +- if (w_index != 0x4 || (w_value >> 8)) ++ if (w_index != 0x4 || (w_value & 0xff)) + break; + buf[6] = w_index; + /* Number of ext compat interfaces */ +@@ -2128,9 +2128,9 @@ composite_setup(struct usb_gadget *gadget, const struct 
usb_ctrlrequest *ctrl) + } + break; + case USB_RECIP_INTERFACE: +- if (w_index != 0x5 || (w_value >> 8)) ++ if (w_index != 0x5 || (w_value & 0xff)) + break; +- interface = w_value & 0xFF; ++ interface = w_value >> 8; + if (interface >= MAX_CONFIG_INTERFACES || + !os_desc_cfg->interface[interface]) + break; +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c +index 6e9ef35a43a7b..fd0f4879f38e8 100644 +--- a/drivers/usb/gadget/function/f_fs.c ++++ b/drivers/usb/gadget/function/f_fs.c +@@ -821,6 +821,7 @@ static void ffs_user_copy_worker(struct work_struct *work) + work); + int ret = io_data->status; + bool kiocb_has_eventfd = io_data->kiocb->ki_flags & IOCB_EVENTFD; ++ unsigned long flags; + + if (io_data->read && ret > 0) { + kthread_use_mm(io_data->mm); +@@ -833,6 +834,11 @@ static void ffs_user_copy_worker(struct work_struct *work) + if (io_data->ffs->ffs_eventfd && !kiocb_has_eventfd) + eventfd_signal(io_data->ffs->ffs_eventfd, 1); + ++ spin_lock_irqsave(&io_data->ffs->eps_lock, flags); ++ usb_ep_free_request(io_data->ep, io_data->req); ++ io_data->req = NULL; ++ spin_unlock_irqrestore(&io_data->ffs->eps_lock, flags); ++ + if (io_data->read) + kfree(io_data->to_free); + ffs_free_buffer(io_data); +@@ -846,7 +852,6 @@ static void ffs_epfile_async_io_complete(struct usb_ep *_ep, + struct ffs_data *ffs = io_data->ffs; + + io_data->status = req->status ? req->status : req->actual; +- usb_ep_free_request(_ep, req); + + INIT_WORK(&io_data->work, ffs_user_copy_worker); + queue_work(ffs->io_completion_wq, &io_data->work); +@@ -3331,7 +3336,7 @@ static int ffs_func_setup(struct usb_function *f, + __ffs_event_add(ffs, FUNCTIONFS_SETUP); + spin_unlock_irqrestore(&ffs->ev.waitq.lock, flags); + +- return creq->wLength == 0 ? USB_GADGET_DELAYED_STATUS : 0; ++ return ffs->ev.setup.wLength == 0 ? 
USB_GADGET_DELAYED_STATUS : 0; + } + + static bool ffs_func_req_match(struct usb_function *f, +diff --git a/drivers/usb/gadget/function/uvc_configfs.c b/drivers/usb/gadget/function/uvc_configfs.c +index 9bf0e985acfab..d16c04d2961b6 100644 +--- a/drivers/usb/gadget/function/uvc_configfs.c ++++ b/drivers/usb/gadget/function/uvc_configfs.c +@@ -92,10 +92,10 @@ static int __uvcg_iter_item_entries(const char *page, size_t len, + + while (pg - page < len) { + i = 0; +- while (i < sizeof(buf) && (pg - page < len) && ++ while (i < bufsize && (pg - page < len) && + *pg != '\0' && *pg != '\n') + buf[i++] = *pg++; +- if (i == sizeof(buf)) { ++ if (i == bufsize) { + ret = -EINVAL; + goto out_free_buf; + } +diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c +index 4f9982ecfb583..5cec7640e913c 100644 +--- a/drivers/usb/host/ohci-hcd.c ++++ b/drivers/usb/host/ohci-hcd.c +@@ -888,6 +888,7 @@ static irqreturn_t ohci_irq (struct usb_hcd *hcd) + /* Check for an all 1's result which is a typical consequence + * of dead, unclocked, or unplugged (CardBus...) 
devices + */ ++again: + if (ints == ~(u32)0) { + ohci->rh_state = OHCI_RH_HALTED; + ohci_dbg (ohci, "device removed!\n"); +@@ -982,6 +983,13 @@ static irqreturn_t ohci_irq (struct usb_hcd *hcd) + } + spin_unlock(&ohci->lock); + ++ /* repeat until all enabled interrupts are handled */ ++ if (ohci->rh_state != OHCI_RH_HALTED) { ++ ints = ohci_readl(ohci, ®s->intrstatus); ++ if (ints && (ints & ohci_readl(ohci, ®s->intrenable))) ++ goto again; ++ } ++ + return IRQ_HANDLED; + } + +diff --git a/drivers/usb/host/xhci-plat.h b/drivers/usb/host/xhci-plat.h +index 2d15386f2c504..6475130eac4b3 100644 +--- a/drivers/usb/host/xhci-plat.h ++++ b/drivers/usb/host/xhci-plat.h +@@ -8,7 +8,9 @@ + #ifndef _XHCI_PLAT_H + #define _XHCI_PLAT_H + +-#include "xhci.h" /* for hcd_to_xhci() */ ++struct device; ++struct platform_device; ++struct usb_hcd; + + struct xhci_plat_priv { + const char *firmware_name; +diff --git a/drivers/usb/host/xhci-rzv2m.c b/drivers/usb/host/xhci-rzv2m.c +index ec65b24eafa86..4f59867d7117c 100644 +--- a/drivers/usb/host/xhci-rzv2m.c ++++ b/drivers/usb/host/xhci-rzv2m.c +@@ -6,6 +6,7 @@ + */ + + #include ++#include "xhci.h" + #include "xhci-plat.h" + #include "xhci-rzv2m.h" + +diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c +index 68e024a8a1f9a..08efd4a6bd1dd 100644 +--- a/drivers/usb/typec/tcpm/tcpm.c ++++ b/drivers/usb/typec/tcpm/tcpm.c +@@ -1488,7 +1488,8 @@ static void svdm_consume_identity(struct tcpm_port *port, const u32 *p, int cnt) + port->partner_ident.cert_stat = p[VDO_INDEX_CSTAT]; + port->partner_ident.product = product; + +- typec_partner_set_identity(port->partner); ++ if (port->partner) ++ typec_partner_set_identity(port->partner); + + tcpm_log(port, "Identity: %04x:%04x.%04x", + PD_IDH_VID(vdo), +@@ -1576,6 +1577,9 @@ static void tcpm_register_partner_altmodes(struct tcpm_port *port) + struct typec_altmode *altmode; + int i; + ++ if (!port->partner) ++ return; ++ + for (i = 0; i < modep->altmodes; i++) { + altmode = 
typec_partner_register_altmode(port->partner, + &modep->altmode_desc[i]); +@@ -2422,7 +2426,7 @@ static int tcpm_register_sink_caps(struct tcpm_port *port) + { + struct usb_power_delivery_desc desc = { port->negotiated_rev }; + struct usb_power_delivery_capabilities_desc caps = { }; +- struct usb_power_delivery_capabilities *cap; ++ struct usb_power_delivery_capabilities *cap = port->partner_source_caps; + + if (!port->partner_pd) + port->partner_pd = usb_power_delivery_register(NULL, &desc); +@@ -2432,6 +2436,9 @@ static int tcpm_register_sink_caps(struct tcpm_port *port) + memcpy(caps.pdo, port->sink_caps, sizeof(u32) * port->nr_sink_caps); + caps.role = TYPEC_SINK; + ++ if (cap) ++ usb_power_delivery_unregister_capabilities(cap); ++ + cap = usb_power_delivery_register_capabilities(port->partner_pd, &caps); + if (IS_ERR(cap)) + return PTR_ERR(cap); +@@ -3571,7 +3578,10 @@ static int tcpm_init_vconn(struct tcpm_port *port) + + static void tcpm_typec_connect(struct tcpm_port *port) + { ++ struct typec_partner *partner; ++ + if (!port->connected) { ++ port->connected = true; + /* Make sure we don't report stale identity information */ + memset(&port->partner_ident, 0, sizeof(port->partner_ident)); + port->partner_desc.usb_pd = port->pd_capable; +@@ -3581,9 +3591,13 @@ static void tcpm_typec_connect(struct tcpm_port *port) + port->partner_desc.accessory = TYPEC_ACCESSORY_AUDIO; + else + port->partner_desc.accessory = TYPEC_ACCESSORY_NONE; +- port->partner = typec_register_partner(port->typec_port, +- &port->partner_desc); +- port->connected = true; ++ partner = typec_register_partner(port->typec_port, &port->partner_desc); ++ if (IS_ERR(partner)) { ++ dev_err(port->dev, "Failed to register partner (%ld)\n", PTR_ERR(partner)); ++ return; ++ } ++ ++ port->partner = partner; + typec_partner_set_usb_power_delivery(port->partner, port->partner_pd); + } + } +@@ -3653,9 +3667,11 @@ static int tcpm_src_attach(struct tcpm_port *port) + static void tcpm_typec_disconnect(struct 
tcpm_port *port) + { + if (port->connected) { +- typec_partner_set_usb_power_delivery(port->partner, NULL); +- typec_unregister_partner(port->partner); +- port->partner = NULL; ++ if (port->partner) { ++ typec_partner_set_usb_power_delivery(port->partner, NULL); ++ typec_unregister_partner(port->partner); ++ port->partner = NULL; ++ } + port->connected = false; + } + } +@@ -3871,6 +3887,9 @@ static enum typec_cc_status tcpm_pwr_opmode_to_rp(enum typec_pwr_opmode opmode) + + static void tcpm_set_initial_svdm_version(struct tcpm_port *port) + { ++ if (!port->partner) ++ return; ++ + switch (port->negotiated_rev) { + case PD_REV30: + break; +@@ -4859,6 +4878,7 @@ static void run_state_machine(struct tcpm_port *port) + break; + case PORT_RESET: + tcpm_reset_port(port); ++ port->pd_events = 0; + if (port->self_powered) + tcpm_set_cc(port, TYPEC_CC_OPEN); + else +diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c +index c4f5ee171cdeb..7d2ca39ad7622 100644 +--- a/drivers/usb/typec/ucsi/ucsi.c ++++ b/drivers/usb/typec/ucsi/ucsi.c +@@ -972,7 +972,7 @@ void ucsi_connector_change(struct ucsi *ucsi, u8 num) + struct ucsi_connector *con = &ucsi->connector[num - 1]; + + if (!(ucsi->ntfy & UCSI_ENABLE_NTFY_CONNECTOR_CHANGE)) { +- dev_dbg(ucsi->dev, "Bogus connector change event\n"); ++ dev_dbg(ucsi->dev, "Early connector change event\n"); + return; + } + +@@ -1403,6 +1403,7 @@ static int ucsi_init(struct ucsi *ucsi) + { + struct ucsi_connector *con, *connector; + u64 command, ntfy; ++ u32 cci; + int ret; + int i; + +@@ -1455,6 +1456,15 @@ static int ucsi_init(struct ucsi *ucsi) + + ucsi->connector = connector; + ucsi->ntfy = ntfy; ++ ++ mutex_lock(&ucsi->ppm_lock); ++ ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci)); ++ mutex_unlock(&ucsi->ppm_lock); ++ if (ret) ++ return ret; ++ if (UCSI_CCI_CONNECTOR(cci)) ++ ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci)); ++ + return 0; + + err_unregister: +diff --git a/drivers/vfio/pci/vfio_pci.c 
b/drivers/vfio/pci/vfio_pci.c +index cb5b7f865d585..e727941f589de 100644 +--- a/drivers/vfio/pci/vfio_pci.c ++++ b/drivers/vfio/pci/vfio_pci.c +@@ -71,6 +71,8 @@ static bool vfio_pci_dev_in_denylist(struct pci_dev *pdev) + case PCI_DEVICE_ID_INTEL_QAT_C62X_VF: + case PCI_DEVICE_ID_INTEL_QAT_DH895XCC: + case PCI_DEVICE_ID_INTEL_QAT_DH895XCC_VF: ++ case PCI_DEVICE_ID_INTEL_DSA_SPR0: ++ case PCI_DEVICE_ID_INTEL_IAX_SPR0: + return true; + default: + return false; +diff --git a/drivers/w1/slaves/w1_ds250x.c b/drivers/w1/slaves/w1_ds250x.c +index 7592c7050d1d7..cb426f7dd23d4 100644 +--- a/drivers/w1/slaves/w1_ds250x.c ++++ b/drivers/w1/slaves/w1_ds250x.c +@@ -168,6 +168,7 @@ static int w1_eprom_add_slave(struct w1_slave *sl) + struct nvmem_device *nvmem; + struct nvmem_config nvmem_cfg = { + .dev = &sl->dev, ++ .add_legacy_fixed_of_cells = true, + .reg_read = w1_nvmem_read, + .type = NVMEM_TYPE_OTP, + .read_only = true, +diff --git a/fs/9p/fid.h b/fs/9p/fid.h +index 29281b7c38870..0d6138bee2a3d 100644 +--- a/fs/9p/fid.h ++++ b/fs/9p/fid.h +@@ -49,9 +49,6 @@ static inline struct p9_fid *v9fs_fid_clone(struct dentry *dentry) + static inline void v9fs_fid_add_modes(struct p9_fid *fid, unsigned int s_flags, + unsigned int s_cache, unsigned int f_flags) + { +- if (fid->qid.type != P9_QTFILE) +- return; +- + if ((!s_cache) || + ((fid->qid.version == 0) && !(s_flags & V9FS_IGNORE_QV)) || + (s_flags & V9FS_DIRECT_IO) || (f_flags & O_DIRECT)) { +diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c +index 11cd8d23f6f23..8566ddad49ad5 100644 +--- a/fs/9p/vfs_file.c ++++ b/fs/9p/vfs_file.c +@@ -591,6 +591,7 @@ const struct file_operations v9fs_file_operations = { + .splice_read = v9fs_file_splice_read, + .splice_write = iter_file_splice_write, + .fsync = v9fs_file_fsync, ++ .setlease = simple_nosetlease, + }; + + const struct file_operations v9fs_file_operations_dotl = { +@@ -605,4 +606,5 @@ const struct file_operations v9fs_file_operations_dotl = { + .splice_read = 
v9fs_file_splice_read, + .splice_write = iter_file_splice_write, + .fsync = v9fs_file_fsync_dotl, ++ .setlease = simple_nosetlease, + }; +diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c +index ea695c4a7a3fb..853c63b836815 100644 +--- a/fs/9p/vfs_inode.c ++++ b/fs/9p/vfs_inode.c +@@ -83,7 +83,7 @@ static int p9mode2perm(struct v9fs_session_info *v9ses, + int res; + int mode = stat->mode; + +- res = mode & S_IALLUGO; ++ res = mode & 0777; /* S_IRWXUGO */ + if (v9fs_proto_dotu(v9ses)) { + if ((mode & P9_DMSETUID) == P9_DMSETUID) + res |= S_ISUID; +@@ -178,6 +178,9 @@ int v9fs_uflags2omode(int uflags, int extended) + break; + } + ++ if (uflags & O_TRUNC) ++ ret |= P9_OTRUNC; ++ + if (extended) { + if (uflags & O_EXCL) + ret |= P9_OEXCL; +diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c +index 73db55c050bf1..958efc8423334 100644 +--- a/fs/9p/vfs_super.c ++++ b/fs/9p/vfs_super.c +@@ -320,6 +320,7 @@ static const struct super_operations v9fs_super_ops = { + .alloc_inode = v9fs_alloc_inode, + .free_inode = v9fs_free_inode, + .statfs = simple_statfs, ++ .drop_inode = v9fs_drop_inode, + .evict_inode = v9fs_evict_inode, + .show_options = v9fs_show_options, + .umount_begin = v9fs_umount_begin, +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 5acb2cb79d4bf..9fbffd84b16c5 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -686,24 +686,14 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array) + unsigned int last = allocated; + + allocated = alloc_pages_bulk_array(GFP_NOFS, nr_pages, page_array); +- +- if (allocated == nr_pages) +- return 0; +- +- /* +- * During this iteration, no page could be allocated, even +- * though alloc_pages_bulk_array() falls back to alloc_page() +- * if it could not bulk-allocate. So we must be out of memory. +- */ +- if (allocated == last) { ++ if (unlikely(allocated == last)) { ++ /* No progress, fail and do cleanup. 
*/ + for (int i = 0; i < allocated; i++) { + __free_page(page_array[i]); + page_array[i] = NULL; + } + return -ENOMEM; + } +- +- memalloc_retry_wait(GFP_NOFS); + } + return 0; + } +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 33d0efa5ed794..5ddee801a8303 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -2510,7 +2510,7 @@ void btrfs_clear_delalloc_extent(struct btrfs_inode *inode, + */ + if (bits & EXTENT_CLEAR_META_RESV && + root != fs_info->tree_root) +- btrfs_delalloc_release_metadata(inode, len, false); ++ btrfs_delalloc_release_metadata(inode, len, true); + + /* For sanity tests. */ + if (btrfs_is_testing(fs_info)) +diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c +index 2b8ff8b53af04..8a3c46cb67f53 100644 +--- a/fs/btrfs/ordered-data.c ++++ b/fs/btrfs/ordered-data.c +@@ -1202,6 +1202,7 @@ struct btrfs_ordered_extent *btrfs_split_ordered_extent( + ordered->disk_bytenr += len; + ordered->num_bytes -= len; + ordered->disk_num_bytes -= len; ++ ordered->ram_bytes -= len; + + if (test_bit(BTRFS_ORDERED_IO_DONE, &ordered->flags)) { + ASSERT(ordered->bytes_left == 0); +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c +index b430e03260fbf..651f0865bb0df 100644 +--- a/fs/btrfs/send.c ++++ b/fs/btrfs/send.c +@@ -8222,8 +8222,8 @@ long btrfs_ioctl_send(struct inode *inode, struct btrfs_ioctl_send_args *arg) + goto out; + } + +- sctx->clone_roots = kvcalloc(sizeof(*sctx->clone_roots), +- arg->clone_sources_count + 1, ++ sctx->clone_roots = kvcalloc(arg->clone_sources_count + 1, ++ sizeof(*sctx->clone_roots), + GFP_KERNEL); + if (!sctx->clone_roots) { + ret = -ENOMEM; +diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c +index 457ec7d02a9ac..0548072c642fb 100644 +--- a/fs/btrfs/transaction.c ++++ b/fs/btrfs/transaction.c +@@ -1449,6 +1449,7 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans) + radix_tree_tag_clear(&fs_info->fs_roots_radix, + (unsigned long)root->root_key.objectid, + BTRFS_ROOT_TRANS_TAG); ++ 
btrfs_qgroup_free_meta_all_pertrans(root); + spin_unlock(&fs_info->fs_roots_radix_lock); + + btrfs_free_log(trans, root); +@@ -1473,7 +1474,6 @@ static noinline int commit_fs_roots(struct btrfs_trans_handle *trans) + if (ret2) + return ret2; + spin_lock(&fs_info->fs_roots_radix_lock); +- btrfs_qgroup_free_meta_all_pertrans(root); + } + } + spin_unlock(&fs_info->fs_roots_radix_lock); +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 453d037987fb3..d2285c9726e7b 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -1224,23 +1224,30 @@ static int open_fs_devices(struct btrfs_fs_devices *fs_devices, + struct btrfs_device *device; + struct btrfs_device *latest_dev = NULL; + struct btrfs_device *tmp_device; ++ int ret = 0; + + list_for_each_entry_safe(device, tmp_device, &fs_devices->devices, + dev_list) { +- int ret; ++ int ret2; + +- ret = btrfs_open_one_device(fs_devices, device, flags, holder); +- if (ret == 0 && ++ ret2 = btrfs_open_one_device(fs_devices, device, flags, holder); ++ if (ret2 == 0 && + (!latest_dev || device->generation > latest_dev->generation)) { + latest_dev = device; +- } else if (ret == -ENODATA) { ++ } else if (ret2 == -ENODATA) { + fs_devices->num_devices--; + list_del(&device->dev_list); + btrfs_free_device(device); + } ++ if (ret == 0 && ret2 != 0) ++ ret = ret2; + } +- if (fs_devices->open_devices == 0) ++ ++ if (fs_devices->open_devices == 0) { ++ if (ret) ++ return ret; + return -EINVAL; ++ } + + fs_devices->opened = 1; + fs_devices->latest_dev = latest_dev; +@@ -3361,6 +3368,7 @@ static int btrfs_relocate_sys_chunks(struct btrfs_fs_info *fs_info) + * alignment and size). 
+ */ + ret = -EUCLEAN; ++ mutex_unlock(&fs_info->reclaim_bgs_lock); + goto error; + } + +diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c +index ef7017fb69512..2b578615607e4 100644 +--- a/fs/gfs2/bmap.c ++++ b/fs/gfs2/bmap.c +@@ -1715,7 +1715,8 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length) + struct buffer_head *dibh, *bh; + struct gfs2_holder rd_gh; + unsigned int bsize_shift = sdp->sd_sb.sb_bsize_shift; +- u64 lblock = (offset + (1 << bsize_shift) - 1) >> bsize_shift; ++ unsigned int bsize = 1 << bsize_shift; ++ u64 lblock = (offset + bsize - 1) >> bsize_shift; + __u16 start_list[GFS2_MAX_META_HEIGHT]; + __u16 __end_list[GFS2_MAX_META_HEIGHT], *end_list = NULL; + unsigned int start_aligned, end_aligned; +@@ -1726,7 +1727,7 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length) + u64 prev_bnr = 0; + __be64 *start, *end; + +- if (offset >= maxsize) { ++ if (offset + bsize - 1 >= maxsize) { + /* + * The starting point lies beyond the allocated metadata; + * there are no blocks to deallocate. 
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c +index 44eca51b28085..4d9249c99989f 100644 +--- a/fs/nfs/client.c ++++ b/fs/nfs/client.c +@@ -73,7 +73,6 @@ const struct rpc_program nfs_program = { + .number = NFS_PROGRAM, + .nrvers = ARRAY_SIZE(nfs_version), + .version = nfs_version, +- .stats = &nfs_rpcstat, + .pipe_dir_name = NFS_PIPE_DIRNAME, + }; + +@@ -502,6 +501,7 @@ int nfs_create_rpc_client(struct nfs_client *clp, + const struct nfs_client_initdata *cl_init, + rpc_authflavor_t flavor) + { ++ struct nfs_net *nn = net_generic(clp->cl_net, nfs_net_id); + struct rpc_clnt *clnt = NULL; + struct rpc_create_args args = { + .net = clp->cl_net, +@@ -513,6 +513,7 @@ int nfs_create_rpc_client(struct nfs_client *clp, + .servername = clp->cl_hostname, + .nodename = cl_init->nodename, + .program = &nfs_program, ++ .stats = &nn->rpcstats, + .version = clp->rpc_ops->version, + .authflavor = flavor, + .cred = cl_init->cred, +@@ -1175,6 +1176,8 @@ void nfs_clients_init(struct net *net) + #endif + spin_lock_init(&nn->nfs_client_lock); + nn->boot_time = ktime_get_real(); ++ memset(&nn->rpcstats, 0, sizeof(nn->rpcstats)); ++ nn->rpcstats.program = &nfs_program; + + nfs_netns_sysfs_setup(nn, net); + } +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index e21c073158e5b..ca76b0b51b779 100644 +--- a/fs/nfs/inode.c ++++ b/fs/nfs/inode.c +@@ -2426,12 +2426,21 @@ EXPORT_SYMBOL_GPL(nfs_net_id); + + static int nfs_net_init(struct net *net) + { ++ struct nfs_net *nn = net_generic(net, nfs_net_id); ++ + nfs_clients_init(net); ++ ++ if (!rpc_proc_register(net, &nn->rpcstats)) { ++ nfs_clients_exit(net); ++ return -ENOMEM; ++ } ++ + return nfs_fs_proc_net_init(net); + } + + static void nfs_net_exit(struct net *net) + { ++ rpc_proc_unregister(net, "nfs"); + nfs_fs_proc_net_exit(net); + nfs_clients_exit(net); + } +@@ -2486,15 +2495,12 @@ static int __init init_nfs_fs(void) + if (err) + goto out1; + +- rpc_proc_register(&init_net, &nfs_rpcstat); +- + err = register_nfs_fs(); + if (err) + goto 
out0; + + return 0; + out0: +- rpc_proc_unregister(&init_net, "nfs"); + nfs_destroy_directcache(); + out1: + nfs_destroy_writepagecache(); +@@ -2524,7 +2530,6 @@ static void __exit exit_nfs_fs(void) + nfs_destroy_inodecache(); + nfs_destroy_nfspagecache(); + unregister_pernet_subsys(&nfs_net_ops); +- rpc_proc_unregister(&init_net, "nfs"); + unregister_nfs_fs(); + nfs_fs_proc_exit(); + nfsiod_stop(); +diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h +index b1fa81c9dff6f..c91bce41931fb 100644 +--- a/fs/nfs/internal.h ++++ b/fs/nfs/internal.h +@@ -449,8 +449,6 @@ int nfs_try_get_tree(struct fs_context *); + int nfs_get_tree_common(struct fs_context *); + void nfs_kill_super(struct super_block *); + +-extern struct rpc_stat nfs_rpcstat; +- + extern int __init register_nfs_fs(void); + extern void __exit unregister_nfs_fs(void); + extern bool nfs_sb_active(struct super_block *sb); +diff --git a/fs/nfs/netns.h b/fs/nfs/netns.h +index c8374f74dce11..a68b21603ea9a 100644 +--- a/fs/nfs/netns.h ++++ b/fs/nfs/netns.h +@@ -9,6 +9,7 @@ + #include + #include + #include ++#include + + struct bl_dev_msg { + int32_t status; +@@ -34,6 +35,7 @@ struct nfs_net { + struct nfs_netns_client *nfs_client; + spinlock_t nfs_client_lock; + ktime_t boot_time; ++ struct rpc_stat rpcstats; + #ifdef CONFIG_PROC_FS + struct proc_dir_entry *proc_nfsfs; + #endif +diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h +index 12a48e1d80c3f..b598c7ed497bb 100644 +--- a/fs/smb/client/cifsglob.h ++++ b/fs/smb/client/cifsglob.h +@@ -1049,6 +1049,7 @@ struct cifs_ses { + and after mount option parsing we fill it */ + char *domainName; + char *password; ++ char *password2; /* When key rotation used, new password may be set before it expires */ + char workstation_name[CIFS_MAX_WORKSTATION_LEN]; + struct session_key auth_key; + struct ntlmssp_auth *ntlmssp; /* ciphertext, flags, server challenge */ +diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c +index c5705de7f9de2..cb3bed8364e07 
100644 +--- a/fs/smb/client/connect.c ++++ b/fs/smb/client/connect.c +@@ -2178,6 +2178,7 @@ cifs_set_cifscreds(struct smb3_fs_context *ctx, struct cifs_ses *ses) + } + + ++delim; ++ /* BB consider adding support for password2 (Key Rotation) for multiuser in future */ + ctx->password = kstrndup(delim, len, GFP_KERNEL); + if (!ctx->password) { + cifs_dbg(FYI, "Unable to allocate %zd bytes for password\n", +@@ -2201,6 +2202,7 @@ cifs_set_cifscreds(struct smb3_fs_context *ctx, struct cifs_ses *ses) + kfree(ctx->username); + ctx->username = NULL; + kfree_sensitive(ctx->password); ++ /* no need to free ctx->password2 since not allocated in this path */ + ctx->password = NULL; + goto out_key_put; + } +@@ -2312,6 +2314,12 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx) + if (!ses->password) + goto get_ses_fail; + } ++ /* ctx->password freed at unmount */ ++ if (ctx->password2) { ++ ses->password2 = kstrdup(ctx->password2, GFP_KERNEL); ++ if (!ses->password2) ++ goto get_ses_fail; ++ } + if (ctx->domainname) { + ses->domainName = kstrdup(ctx->domainname, GFP_KERNEL); + if (!ses->domainName) +diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c +index 103421791bb5d..4d9e57be84dbc 100644 +--- a/fs/smb/client/fs_context.c ++++ b/fs/smb/client/fs_context.c +@@ -161,6 +161,7 @@ const struct fs_parameter_spec smb3_fs_parameters[] = { + fsparam_string("username", Opt_user), + fsparam_string("pass", Opt_pass), + fsparam_string("password", Opt_pass), ++ fsparam_string("password2", Opt_pass2), + fsparam_string("ip", Opt_ip), + fsparam_string("addr", Opt_ip), + fsparam_string("domain", Opt_domain), +@@ -314,6 +315,7 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx + new_ctx->nodename = NULL; + new_ctx->username = NULL; + new_ctx->password = NULL; ++ new_ctx->password2 = NULL; + new_ctx->server_hostname = NULL; + new_ctx->domainname = NULL; + new_ctx->UNC = NULL; +@@ -326,6 +328,7 @@ 
smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx + DUP_CTX_STR(prepath); + DUP_CTX_STR(username); + DUP_CTX_STR(password); ++ DUP_CTX_STR(password2); + DUP_CTX_STR(server_hostname); + DUP_CTX_STR(UNC); + DUP_CTX_STR(source); +@@ -884,6 +887,8 @@ static int smb3_reconfigure(struct fs_context *fc) + else { + kfree_sensitive(ses->password); + ses->password = kstrdup(ctx->password, GFP_KERNEL); ++ kfree_sensitive(ses->password2); ++ ses->password2 = kstrdup(ctx->password2, GFP_KERNEL); + } + STEAL_STRING(cifs_sb, ctx, domainname); + STEAL_STRING(cifs_sb, ctx, nodename); +@@ -1283,6 +1288,18 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + goto cifs_parse_mount_err; + } + break; ++ case Opt_pass2: ++ kfree_sensitive(ctx->password2); ++ ctx->password2 = NULL; ++ if (strlen(param->string) == 0) ++ break; ++ ++ ctx->password2 = kstrdup(param->string, GFP_KERNEL); ++ if (ctx->password2 == NULL) { ++ cifs_errorf(fc, "OOM when copying password2 string\n"); ++ goto cifs_parse_mount_err; ++ } ++ break; + case Opt_ip: + if (strlen(param->string) == 0) { + ctx->got_ip = false; +@@ -1582,6 +1599,8 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + cifs_parse_mount_err: + kfree_sensitive(ctx->password); + ctx->password = NULL; ++ kfree_sensitive(ctx->password2); ++ ctx->password2 = NULL; + return -EINVAL; + } + +@@ -1684,6 +1703,8 @@ smb3_cleanup_fs_context_contents(struct smb3_fs_context *ctx) + ctx->username = NULL; + kfree_sensitive(ctx->password); + ctx->password = NULL; ++ kfree_sensitive(ctx->password2); ++ ctx->password2 = NULL; + kfree(ctx->server_hostname); + ctx->server_hostname = NULL; + kfree(ctx->UNC); +diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h +index 4e409238fe8f7..d7c090dbe75db 100644 +--- a/fs/smb/client/fs_context.h ++++ b/fs/smb/client/fs_context.h +@@ -137,6 +137,7 @@ enum cifs_param { + Opt_source, + Opt_user, + Opt_pass, ++ Opt_pass2, + Opt_ip, + Opt_domain, + 
Opt_srcaddr, +@@ -170,6 +171,7 @@ struct smb3_fs_context { + + char *username; + char *password; ++ char *password2; + char *domainname; + char *source; + char *server_hostname; +diff --git a/fs/smb/client/misc.c b/fs/smb/client/misc.c +index 0d13db80e67c9..d56959d02e36d 100644 +--- a/fs/smb/client/misc.c ++++ b/fs/smb/client/misc.c +@@ -101,6 +101,7 @@ sesInfoFree(struct cifs_ses *buf_to_free) + kfree(buf_to_free->serverDomain); + kfree(buf_to_free->serverNOS); + kfree_sensitive(buf_to_free->password); ++ kfree_sensitive(buf_to_free->password2); + kfree(buf_to_free->user_name); + kfree(buf_to_free->domainName); + kfree_sensitive(buf_to_free->auth_key.response); +diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c +index 94bd4c6d2d682..70530108b9bb9 100644 +--- a/fs/smb/client/smb2pdu.c ++++ b/fs/smb/client/smb2pdu.c +@@ -367,6 +367,17 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon, + } + + rc = cifs_setup_session(0, ses, server, nls_codepage); ++ if ((rc == -EACCES) || (rc == -EKEYEXPIRED) || (rc == -EKEYREVOKED)) { ++ /* ++ * Try alternate password for next reconnect (key rotation ++ * could be enabled on the server e.g.) 
if an alternate ++ * password is available and the current password is expired, ++ * but do not swap on non pwd related errors like host down ++ */ ++ if (ses->password2) ++ swap(ses->password2, ses->password); ++ } ++ + if ((rc == -EACCES) && !tcon->retry) { + mutex_unlock(&ses->session_mutex); + rc = -EHOSTDOWN; +diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c +index 53dfaac425c68..dc729ab980dc0 100644 +--- a/fs/smb/server/oplock.c ++++ b/fs/smb/server/oplock.c +@@ -612,13 +612,23 @@ static int oplock_break_pending(struct oplock_info *opinfo, int req_op_level) + + if (opinfo->op_state == OPLOCK_CLOSING) + return -ENOENT; +- else if (!opinfo->is_lease && opinfo->level <= req_op_level) +- return 1; ++ else if (opinfo->level <= req_op_level) { ++ if (opinfo->is_lease && ++ opinfo->o_lease->state != ++ (SMB2_LEASE_HANDLE_CACHING_LE | ++ SMB2_LEASE_READ_CACHING_LE)) ++ return 1; ++ } + } + +- if (!opinfo->is_lease && opinfo->level <= req_op_level) { +- wake_up_oplock_break(opinfo); +- return 1; ++ if (opinfo->level <= req_op_level) { ++ if (opinfo->is_lease && ++ opinfo->o_lease->state != ++ (SMB2_LEASE_HANDLE_CACHING_LE | ++ SMB2_LEASE_READ_CACHING_LE)) { ++ wake_up_oplock_break(opinfo); ++ return 1; ++ } + } + return 0; + } +@@ -886,7 +896,6 @@ static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level) + struct lease *lease = brk_opinfo->o_lease; + + atomic_inc(&brk_opinfo->breaking_cnt); +- + err = oplock_break_pending(brk_opinfo, req_op_level); + if (err) + return err < 0 ? 
err : 0; +@@ -1199,7 +1208,9 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid, + + /* Only v2 leases handle the directory */ + if (S_ISDIR(file_inode(fp->filp)->i_mode)) { +- if (!lctx || lctx->version != 2) ++ if (!lctx || lctx->version != 2 || ++ (lctx->flags != SMB2_LEASE_FLAG_PARENT_LEASE_KEY_SET_LE && ++ !lctx->epoch)) + return 0; + } + +@@ -1461,8 +1472,9 @@ void create_lease_buf(u8 *rbuf, struct lease *lease) + buf->lcontext.LeaseFlags = lease->flags; + buf->lcontext.Epoch = cpu_to_le16(lease->epoch); + buf->lcontext.LeaseState = lease->state; +- memcpy(buf->lcontext.ParentLeaseKey, lease->parent_lease_key, +- SMB2_LEASE_KEY_SIZE); ++ if (lease->flags == SMB2_LEASE_FLAG_PARENT_LEASE_KEY_SET_LE) ++ memcpy(buf->lcontext.ParentLeaseKey, lease->parent_lease_key, ++ SMB2_LEASE_KEY_SIZE); + buf->ccontext.DataOffset = cpu_to_le16(offsetof + (struct create_lease_v2, lcontext)); + buf->ccontext.DataLength = cpu_to_le32(sizeof(struct lease_context_v2)); +@@ -1527,8 +1539,9 @@ struct lease_ctx_info *parse_lease_state(void *open_req, bool is_dir) + lreq->flags = lc->lcontext.LeaseFlags; + lreq->epoch = lc->lcontext.Epoch; + lreq->duration = lc->lcontext.LeaseDuration; +- memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, +- SMB2_LEASE_KEY_SIZE); ++ if (lreq->flags == SMB2_LEASE_FLAG_PARENT_LEASE_KEY_SET_LE) ++ memcpy(lreq->parent_lease_key, lc->lcontext.ParentLeaseKey, ++ SMB2_LEASE_KEY_SIZE); + lreq->version = 2; + } else { + struct create_lease *lc = (struct create_lease *)cc; +diff --git a/fs/smb/server/transport_tcp.c b/fs/smb/server/transport_tcp.c +index 9d4222154dcc0..0012919309f11 100644 +--- a/fs/smb/server/transport_tcp.c ++++ b/fs/smb/server/transport_tcp.c +@@ -446,6 +446,10 @@ static int create_socket(struct interface *iface) + sin6.sin6_family = PF_INET6; + sin6.sin6_addr = in6addr_any; + sin6.sin6_port = htons(server_conf.tcp_port); ++ ++ lock_sock(ksmbd_socket->sk); ++ ksmbd_socket->sk->sk_ipv6only = false; ++ 
release_sock(ksmbd_socket->sk); + } + + ksmbd_tcp_nodelay(ksmbd_socket); +diff --git a/fs/tracefs/event_inode.c b/fs/tracefs/event_inode.c +index 110e8a2721890..56d1741fe0413 100644 +--- a/fs/tracefs/event_inode.c ++++ b/fs/tracefs/event_inode.c +@@ -164,21 +164,7 @@ static int eventfs_set_attr(struct mnt_idmap *idmap, struct dentry *dentry, + * determined by the parent directory. + */ + if (dentry->d_inode->i_mode & S_IFDIR) { +- /* +- * The events directory dentry is never freed, unless its +- * part of an instance that is deleted. It's attr is the +- * default for its child files and directories. +- * Do not update it. It's not used for its own mode or ownership. +- */ +- if (ei->is_events) { +- /* But it still needs to know if it was modified */ +- if (iattr->ia_valid & ATTR_UID) +- ei->attr.mode |= EVENTFS_SAVE_UID; +- if (iattr->ia_valid & ATTR_GID) +- ei->attr.mode |= EVENTFS_SAVE_GID; +- } else { +- update_attr(&ei->attr, iattr); +- } ++ update_attr(&ei->attr, iattr); + + } else { + name = dentry->d_name.name; +@@ -265,6 +251,35 @@ static const struct file_operations eventfs_file_operations = { + .llseek = generic_file_llseek, + }; + ++/* ++ * On a remount of tracefs, if UID or GID options are set, then ++ * the mount point inode permissions should be used. ++ * Reset the saved permission flags appropriately. 
++ */ ++void eventfs_remount(struct tracefs_inode *ti, bool update_uid, bool update_gid) ++{ ++ struct eventfs_inode *ei = ti->private; ++ ++ if (!ei) ++ return; ++ ++ if (update_uid) ++ ei->attr.mode &= ~EVENTFS_SAVE_UID; ++ ++ if (update_gid) ++ ei->attr.mode &= ~EVENTFS_SAVE_GID; ++ ++ if (!ei->entry_attrs) ++ return; ++ ++ for (int i = 0; i < ei->nr_entries; i++) { ++ if (update_uid) ++ ei->entry_attrs[i].mode &= ~EVENTFS_SAVE_UID; ++ if (update_gid) ++ ei->entry_attrs[i].mode &= ~EVENTFS_SAVE_GID; ++ } ++} ++ + /* Return the evenfs_inode of the "events" directory */ + static struct eventfs_inode *eventfs_find_events(struct dentry *dentry) + { +diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c +index 99caf1c3d0c41..48ed6c900ece8 100644 +--- a/fs/tracefs/inode.c ++++ b/fs/tracefs/inode.c +@@ -30,20 +30,47 @@ static struct vfsmount *tracefs_mount; + static int tracefs_mount_count; + static bool tracefs_registered; + ++/* ++ * Keep track of all tracefs_inodes in order to update their ++ * flags if necessary on a remount. 
++ */ ++static DEFINE_SPINLOCK(tracefs_inode_lock); ++static LIST_HEAD(tracefs_inodes); ++ + static struct inode *tracefs_alloc_inode(struct super_block *sb) + { + struct tracefs_inode *ti; ++ unsigned long flags; + + ti = kmem_cache_alloc(tracefs_inode_cachep, GFP_KERNEL); + if (!ti) + return NULL; + ++ spin_lock_irqsave(&tracefs_inode_lock, flags); ++ list_add_rcu(&ti->list, &tracefs_inodes); ++ spin_unlock_irqrestore(&tracefs_inode_lock, flags); ++ + return &ti->vfs_inode; + } + ++static void tracefs_free_inode_rcu(struct rcu_head *rcu) ++{ ++ struct tracefs_inode *ti; ++ ++ ti = container_of(rcu, struct tracefs_inode, rcu); ++ kmem_cache_free(tracefs_inode_cachep, ti); ++} ++ + static void tracefs_free_inode(struct inode *inode) + { +- kmem_cache_free(tracefs_inode_cachep, get_tracefs(inode)); ++ struct tracefs_inode *ti = get_tracefs(inode); ++ unsigned long flags; ++ ++ spin_lock_irqsave(&tracefs_inode_lock, flags); ++ list_del_rcu(&ti->list); ++ spin_unlock_irqrestore(&tracefs_inode_lock, flags); ++ ++ call_rcu(&ti->rcu, tracefs_free_inode_rcu); + } + + static ssize_t default_read_file(struct file *file, char __user *buf, +@@ -153,16 +180,39 @@ static void set_tracefs_inode_owner(struct inode *inode) + { + struct tracefs_inode *ti = get_tracefs(inode); + struct inode *root_inode = ti->private; ++ kuid_t uid; ++ kgid_t gid; ++ ++ uid = root_inode->i_uid; ++ gid = root_inode->i_gid; ++ ++ /* ++ * If the root is not the mount point, then check the root's ++ * permissions. If it was never set, then default to the ++ * mount point. 
++ */ ++ if (root_inode != d_inode(root_inode->i_sb->s_root)) { ++ struct tracefs_inode *rti; ++ ++ rti = get_tracefs(root_inode); ++ root_inode = d_inode(root_inode->i_sb->s_root); ++ ++ if (!(rti->flags & TRACEFS_UID_PERM_SET)) ++ uid = root_inode->i_uid; ++ ++ if (!(rti->flags & TRACEFS_GID_PERM_SET)) ++ gid = root_inode->i_gid; ++ } + + /* + * If this inode has never been referenced, then update + * the permissions to the superblock. + */ + if (!(ti->flags & TRACEFS_UID_PERM_SET)) +- inode->i_uid = root_inode->i_uid; ++ inode->i_uid = uid; + + if (!(ti->flags & TRACEFS_GID_PERM_SET)) +- inode->i_gid = root_inode->i_gid; ++ inode->i_gid = gid; + } + + static int tracefs_permission(struct mnt_idmap *idmap, +@@ -313,6 +363,8 @@ static int tracefs_apply_options(struct super_block *sb, bool remount) + struct tracefs_fs_info *fsi = sb->s_fs_info; + struct inode *inode = d_inode(sb->s_root); + struct tracefs_mount_opts *opts = &fsi->mount_opts; ++ struct tracefs_inode *ti; ++ bool update_uid, update_gid; + umode_t tmp_mode; + + /* +@@ -332,6 +384,25 @@ static int tracefs_apply_options(struct super_block *sb, bool remount) + if (!remount || opts->opts & BIT(Opt_gid)) + inode->i_gid = opts->gid; + ++ if (remount && (opts->opts & BIT(Opt_uid) || opts->opts & BIT(Opt_gid))) { ++ ++ update_uid = opts->opts & BIT(Opt_uid); ++ update_gid = opts->opts & BIT(Opt_gid); ++ ++ rcu_read_lock(); ++ list_for_each_entry_rcu(ti, &tracefs_inodes, list) { ++ if (update_uid) ++ ti->flags &= ~TRACEFS_UID_PERM_SET; ++ ++ if (update_gid) ++ ti->flags &= ~TRACEFS_GID_PERM_SET; ++ ++ if (ti->flags & TRACEFS_EVENT_INODE) ++ eventfs_remount(ti, update_uid, update_gid); ++ } ++ rcu_read_unlock(); ++ } ++ + return 0; + } + +@@ -398,7 +469,22 @@ static int tracefs_d_revalidate(struct dentry *dentry, unsigned int flags) + return !(ei && ei->is_freed); + } + ++static void tracefs_d_iput(struct dentry *dentry, struct inode *inode) ++{ ++ struct tracefs_inode *ti = get_tracefs(inode); ++ ++ /* ++ * 
This inode is being freed and cannot be used for ++ * eventfs. Clear the flag so that it doesn't call into ++ * eventfs during the remount flag updates. The eventfs_inode ++ * gets freed after an RCU cycle, so the content will still ++ * be safe if the iteration is going on now. ++ */ ++ ti->flags &= ~TRACEFS_EVENT_INODE; ++} ++ + static const struct dentry_operations tracefs_dentry_operations = { ++ .d_iput = tracefs_d_iput, + .d_revalidate = tracefs_d_revalidate, + .d_release = tracefs_d_release, + }; +diff --git a/fs/tracefs/internal.h b/fs/tracefs/internal.h +index beb3dcd0e4342..824cbe83679cc 100644 +--- a/fs/tracefs/internal.h ++++ b/fs/tracefs/internal.h +@@ -11,8 +11,12 @@ enum { + }; + + struct tracefs_inode { +- struct inode vfs_inode; ++ union { ++ struct inode vfs_inode; ++ struct rcu_head rcu; ++ }; + /* The below gets initialized with memset_after(ti, 0, vfs_inode) */ ++ struct list_head list; + unsigned long flags; + void *private; + }; +@@ -75,6 +79,7 @@ struct dentry *tracefs_end_creating(struct dentry *dentry); + struct dentry *tracefs_failed_creating(struct dentry *dentry); + struct inode *tracefs_get_inode(struct super_block *sb); + ++void eventfs_remount(struct tracefs_inode *ti, bool update_uid, bool update_gid); + void eventfs_d_release(struct dentry *dentry); + + #endif /* _TRACEFS_INTERNAL_H */ +diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c +index 56eaae9dac1ab..1f47ff83a9c22 100644 +--- a/fs/userfaultfd.c ++++ b/fs/userfaultfd.c +@@ -921,6 +921,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file) + prev = vma; + continue; + } ++ /* Reset ptes for the whole vma range if wr-protected */ ++ if (userfaultfd_wp(vma)) ++ uffd_wp_range(vma, vma->vm_start, ++ vma->vm_end - vma->vm_start, false); + new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS; + prev = vma_merge(&vmi, mm, prev, vma->vm_start, vma->vm_end, + new_flags, vma->anon_vma, +diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c +index 2307f8037efc3..118dedef8ebe8 
100644 +--- a/fs/vboxsf/file.c ++++ b/fs/vboxsf/file.c +@@ -218,6 +218,7 @@ const struct file_operations vboxsf_reg_fops = { + .release = vboxsf_file_release, + .fsync = noop_fsync, + .splice_read = filemap_splice_read, ++ .setlease = simple_nosetlease, + }; + + const struct inode_operations vboxsf_reg_iops = { +diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h +index b2f9e2c409cf2..0a182f088c897 100644 +--- a/include/linux/compiler_types.h ++++ b/include/linux/compiler_types.h +@@ -280,6 +280,17 @@ struct ftrace_likely_data { + # define __no_kcsan + #endif + ++#ifdef __SANITIZE_MEMORY__ ++/* ++ * Similarly to KASAN and KCSAN, KMSAN loses function attributes of inlined ++ * functions, therefore disabling KMSAN checks also requires disabling inlining. ++ * ++ * __no_sanitize_or_inline effectively prevents KMSAN from reporting errors ++ * within the function and marks all its outputs as initialized. ++ */ ++# define __no_sanitize_or_inline __no_kmsan_checks notrace __maybe_unused ++#endif ++ + #ifndef __no_sanitize_or_inline + #define __no_sanitize_or_inline __always_inline + #endif +diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h +index fd4a823ce3cdb..b3772edca2e6e 100644 +--- a/include/linux/dma-fence.h ++++ b/include/linux/dma-fence.h +@@ -681,11 +681,4 @@ static inline bool dma_fence_is_container(struct dma_fence *fence) + return dma_fence_is_array(fence) || dma_fence_is_chain(fence); + } + +-#define DMA_FENCE_WARN(f, fmt, args...) 
\ +- do { \ +- struct dma_fence *__ff = (f); \ +- pr_warn("f %llu#%llu: " fmt, __ff->context, __ff->seqno,\ +- ##args); \ +- } while (0) +- + #endif /* __LINUX_DMA_FENCE_H */ +diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h +index 6583a58670c57..dfde1e1e321c3 100644 +--- a/include/linux/gfp_types.h ++++ b/include/linux/gfp_types.h +@@ -2,6 +2,8 @@ + #ifndef __LINUX_GFP_TYPES_H + #define __LINUX_GFP_TYPES_H + ++#include ++ + /* The typedef is in types.h but we want the documentation here */ + #if 0 + /** +diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h +index 6ef0557b4bff8..96ceb4095425e 100644 +--- a/include/linux/hyperv.h ++++ b/include/linux/hyperv.h +@@ -832,6 +832,7 @@ struct vmbus_gpadl { + u32 gpadl_handle; + u32 size; + void *buffer; ++ bool decrypted; + }; + + struct vmbus_channel { +diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h +index dae26295e6bed..1b81adebdb8be 100644 +--- a/include/linux/nvmem-provider.h ++++ b/include/linux/nvmem-provider.h +@@ -82,6 +82,7 @@ struct nvmem_cell_info { + * @owner: Pointer to exporter module. Used for refcounting. + * @cells: Optional array of pre-defined NVMEM cells. + * @ncells: Number of elements in cells. ++ * @add_legacy_fixed_of_cells: Read fixed NVMEM cells from old OF syntax. + * @keepout: Optional array of keepout ranges (sorted ascending by start). + * @nkeepout: Number of elements in the keepout array. 
+ * @type: Type of the nvmem storage +@@ -112,6 +113,7 @@ struct nvmem_config { + struct module *owner; + const struct nvmem_cell_info *cells; + int ncells; ++ bool add_legacy_fixed_of_cells; + const struct nvmem_keepout *keepout; + unsigned int nkeepout; + enum nvmem_type type; +diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h +index a6c66ea80194c..0a85ff5c8db3c 100644 +--- a/include/linux/pci_ids.h ++++ b/include/linux/pci_ids.h +@@ -2683,8 +2683,10 @@ + #define PCI_DEVICE_ID_INTEL_I960 0x0960 + #define PCI_DEVICE_ID_INTEL_I960RM 0x0962 + #define PCI_DEVICE_ID_INTEL_HDA_HSW_0 0x0a0c ++#define PCI_DEVICE_ID_INTEL_DSA_SPR0 0x0b25 + #define PCI_DEVICE_ID_INTEL_HDA_HSW_2 0x0c0c + #define PCI_DEVICE_ID_INTEL_CENTERTON_ILB 0x0c60 ++#define PCI_DEVICE_ID_INTEL_IAX_SPR0 0x0cfe + #define PCI_DEVICE_ID_INTEL_HDA_HSW_3 0x0d0c + #define PCI_DEVICE_ID_INTEL_HDA_BYT 0x0f04 + #define PCI_DEVICE_ID_INTEL_SST_BYT 0x0f28 +diff --git a/include/linux/regmap.h b/include/linux/regmap.h +index c9182a47736ef..113261287af28 100644 +--- a/include/linux/regmap.h ++++ b/include/linux/regmap.h +@@ -1225,6 +1225,7 @@ int regmap_multi_reg_write_bypassed(struct regmap *map, + int regmap_raw_write_async(struct regmap *map, unsigned int reg, + const void *val, size_t val_len); + int regmap_read(struct regmap *map, unsigned int reg, unsigned int *val); ++int regmap_read_bypassed(struct regmap *map, unsigned int reg, unsigned int *val); + int regmap_raw_read(struct regmap *map, unsigned int reg, + void *val, size_t val_len); + int regmap_noinc_read(struct regmap *map, unsigned int reg, +@@ -1734,6 +1735,13 @@ static inline int regmap_read(struct regmap *map, unsigned int reg, + return -EINVAL; + } + ++static inline int regmap_read_bypassed(struct regmap *map, unsigned int reg, ++ unsigned int *val) ++{ ++ WARN_ONCE(1, "regmap API is disabled"); ++ return -EINVAL; ++} ++ + static inline int regmap_raw_read(struct regmap *map, unsigned int reg, + void *val, size_t val_len) + { +diff 
--git a/include/linux/regulator/consumer.h b/include/linux/regulator/consumer.h +index 39b666b40ea61..2c526c8d10cc4 100644 +--- a/include/linux/regulator/consumer.h ++++ b/include/linux/regulator/consumer.h +@@ -365,13 +365,13 @@ devm_regulator_get_exclusive(struct device *dev, const char *id) + + static inline int devm_regulator_get_enable(struct device *dev, const char *id) + { +- return -ENODEV; ++ return 0; + } + + static inline int devm_regulator_get_enable_optional(struct device *dev, + const char *id) + { +- return -ENODEV; ++ return 0; + } + + static inline struct regulator *__must_check +diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h +index 7d54808a1e8f9..5f11f98733419 100644 +--- a/include/linux/skbuff.h ++++ b/include/linux/skbuff.h +@@ -2962,6 +2962,21 @@ static inline void skb_mac_header_rebuild(struct sk_buff *skb) + } + } + ++/* Move the full mac header up to current network_header. ++ * Leaves skb->data pointing at offset skb->mac_len into the mac_header. ++ * Must be provided the complete mac header length. 
++ */ ++static inline void skb_mac_header_rebuild_full(struct sk_buff *skb, u32 full_mac_len) ++{ ++ if (skb_mac_header_was_set(skb)) { ++ const unsigned char *old_mac = skb_mac_header(skb); ++ ++ skb_set_mac_header(skb, -full_mac_len); ++ memmove(skb_mac_header(skb), old_mac, full_mac_len); ++ __skb_push(skb, full_mac_len - skb->mac_len); ++ } ++} ++ + static inline int skb_checksum_start_offset(const struct sk_buff *skb) + { + return skb->csum_start - skb_headroom(skb); +diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h +index bd4418377bacf..062fe440f5d09 100644 +--- a/include/linux/skmsg.h ++++ b/include/linux/skmsg.h +@@ -456,10 +456,12 @@ static inline void sk_psock_put(struct sock *sk, struct sk_psock *psock) + + static inline void sk_psock_data_ready(struct sock *sk, struct sk_psock *psock) + { ++ read_lock_bh(&sk->sk_callback_lock); + if (psock->saved_data_ready) + psock->saved_data_ready(sk); + else + sk->sk_data_ready(sk); ++ read_unlock_bh(&sk->sk_callback_lock); + } + + static inline void psock_set_prog(struct bpf_prog **pprog, +diff --git a/include/linux/slab.h b/include/linux/slab.h +index 8228d1276a2f6..5a67dab27f831 100644 +--- a/include/linux/slab.h ++++ b/include/linux/slab.h +@@ -228,7 +228,7 @@ void kfree(const void *objp); + void kfree_sensitive(const void *objp); + size_t __ksize(const void *objp); + +-DEFINE_FREE(kfree, void *, if (_T) kfree(_T)) ++DEFINE_FREE(kfree, void *, if (!IS_ERR_OR_NULL(_T)) kfree(_T)) + + /** + * ksize - Report actual allocation size of associated object +diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h +index e9d4377d03c6e..17d84b3ee8a01 100644 +--- a/include/linux/sunrpc/clnt.h ++++ b/include/linux/sunrpc/clnt.h +@@ -139,6 +139,7 @@ struct rpc_create_args { + const char *servername; + const char *nodename; + const struct rpc_program *program; ++ struct rpc_stat *stats; + u32 prognumber; /* overrides program->number */ + u32 version; + rpc_authflavor_t authflavor; +diff --git 
a/include/net/gro.h b/include/net/gro.h +index 88644b3ca6600..018343254c90a 100644 +--- a/include/net/gro.h ++++ b/include/net/gro.h +@@ -86,6 +86,15 @@ struct napi_gro_cb { + + /* used to support CHECKSUM_COMPLETE for tunneling protocols */ + __wsum csum; ++ ++ /* L3 offsets */ ++ union { ++ struct { ++ u16 network_offset; ++ u16 inner_network_offset; ++ }; ++ u16 network_offsets[2]; ++ }; + }; + + #define NAPI_GRO_CB(skb) ((struct napi_gro_cb *)(skb)->cb) +diff --git a/include/net/xfrm.h b/include/net/xfrm.h +index 363c7d5105542..a3fd2cfed5e33 100644 +--- a/include/net/xfrm.h ++++ b/include/net/xfrm.h +@@ -1047,6 +1047,9 @@ struct xfrm_offload { + #define CRYPTO_INVALID_PACKET_SYNTAX 64 + #define CRYPTO_INVALID_PROTOCOL 128 + ++ /* Used to keep whole l2 header for transport mode GRO */ ++ __u32 orig_mac_len; ++ + __u8 proto; + __u8 inner_ipproto; + }; +diff --git a/include/sound/emu10k1.h b/include/sound/emu10k1.h +index 1af9e68193920..9cc10fab01a8c 100644 +--- a/include/sound/emu10k1.h ++++ b/include/sound/emu10k1.h +@@ -1684,8 +1684,7 @@ struct snd_emu1010 { + unsigned int clock_fallback; + unsigned int optical_in; /* 0:SPDIF, 1:ADAT */ + unsigned int optical_out; /* 0:SPDIF, 1:ADAT */ +- struct work_struct firmware_work; +- struct work_struct clock_work; ++ struct work_struct work; + }; + + struct snd_emu10k1 { +diff --git a/include/sound/sof.h b/include/sound/sof.h +index 51294f2ba302c..31121c6df0272 100644 +--- a/include/sound/sof.h ++++ b/include/sound/sof.h +@@ -52,11 +52,14 @@ enum sof_dsp_power_states { + + /* Definitions for multiple IPCs */ + enum sof_ipc_type { +- SOF_IPC, +- SOF_INTEL_IPC4, ++ SOF_IPC_TYPE_3, ++ SOF_IPC_TYPE_4, + SOF_IPC_TYPE_COUNT + }; + ++#define SOF_IPC SOF_IPC_TYPE_3 ++#define SOF_INTEL_IPC4 SOF_IPC_TYPE_4 ++ + /* + * SOF Platform data. 
+ */ +diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h +index 0dd4a21d172da..3322fb93a260b 100644 +--- a/include/trace/events/rxrpc.h ++++ b/include/trace/events/rxrpc.h +@@ -83,7 +83,7 @@ + EM(rxrpc_badmsg_bad_abort, "bad-abort") \ + EM(rxrpc_badmsg_bad_jumbo, "bad-jumbo") \ + EM(rxrpc_badmsg_short_ack, "short-ack") \ +- EM(rxrpc_badmsg_short_ack_info, "short-ack-info") \ ++ EM(rxrpc_badmsg_short_ack_trailer, "short-ack-trailer") \ + EM(rxrpc_badmsg_short_hdr, "short-hdr") \ + EM(rxrpc_badmsg_unsupported_packet, "unsup-pkt") \ + EM(rxrpc_badmsg_zero_call, "zero-call") \ +diff --git a/include/uapi/linux/kfd_ioctl.h b/include/uapi/linux/kfd_ioctl.h +index eeb2fdcbdcb70..cd924c959d732 100644 +--- a/include/uapi/linux/kfd_ioctl.h ++++ b/include/uapi/linux/kfd_ioctl.h +@@ -909,14 +909,25 @@ enum kfd_dbg_trap_exception_code { + KFD_EC_MASK(EC_DEVICE_NEW)) + #define KFD_EC_MASK_PROCESS (KFD_EC_MASK(EC_PROCESS_RUNTIME) | \ + KFD_EC_MASK(EC_PROCESS_DEVICE_REMOVE)) ++#define KFD_EC_MASK_PACKET (KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_DIM_INVALID) | \ ++ KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_GROUP_SEGMENT_SIZE_INVALID) | \ ++ KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_CODE_INVALID) | \ ++ KFD_EC_MASK(EC_QUEUE_PACKET_RESERVED) | \ ++ KFD_EC_MASK(EC_QUEUE_PACKET_UNSUPPORTED) | \ ++ KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_WORK_GROUP_SIZE_INVALID) | \ ++ KFD_EC_MASK(EC_QUEUE_PACKET_DISPATCH_REGISTER_INVALID) | \ ++ KFD_EC_MASK(EC_QUEUE_PACKET_VENDOR_UNSUPPORTED)) + + /* Checks for exception code types for KFD search */ ++#define KFD_DBG_EC_IS_VALID(ecode) (ecode > EC_NONE && ecode < EC_MAX) + #define KFD_DBG_EC_TYPE_IS_QUEUE(ecode) \ +- (!!(KFD_EC_MASK(ecode) & KFD_EC_MASK_QUEUE)) ++ (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_QUEUE)) + #define KFD_DBG_EC_TYPE_IS_DEVICE(ecode) \ +- (!!(KFD_EC_MASK(ecode) & KFD_EC_MASK_DEVICE)) ++ (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_DEVICE)) + #define 
KFD_DBG_EC_TYPE_IS_PROCESS(ecode) \ +- (!!(KFD_EC_MASK(ecode) & KFD_EC_MASK_PROCESS)) ++ (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_PROCESS)) ++#define KFD_DBG_EC_TYPE_IS_PACKET(ecode) \ ++ (KFD_DBG_EC_IS_VALID(ecode) && !!(KFD_EC_MASK(ecode) & KFD_EC_MASK_PACKET)) + + + /* Runtime enable states */ +diff --git a/include/uapi/scsi/scsi_bsg_mpi3mr.h b/include/uapi/scsi/scsi_bsg_mpi3mr.h +index 907d345f04f93..353183e863e47 100644 +--- a/include/uapi/scsi/scsi_bsg_mpi3mr.h ++++ b/include/uapi/scsi/scsi_bsg_mpi3mr.h +@@ -382,7 +382,7 @@ struct mpi3mr_bsg_in_reply_buf { + __u8 mpi_reply_type; + __u8 rsvd1; + __u16 rsvd2; +- __u8 reply_buf[1]; ++ __u8 reply_buf[]; + }; + + /** +diff --git a/kernel/bpf/bloom_filter.c b/kernel/bpf/bloom_filter.c +index addf3dd57b59b..35e1ddca74d21 100644 +--- a/kernel/bpf/bloom_filter.c ++++ b/kernel/bpf/bloom_filter.c +@@ -80,6 +80,18 @@ static int bloom_map_get_next_key(struct bpf_map *map, void *key, void *next_key + return -EOPNOTSUPP; + } + ++/* Called from syscall */ ++static int bloom_map_alloc_check(union bpf_attr *attr) ++{ ++ if (attr->value_size > KMALLOC_MAX_SIZE) ++ /* if value_size is bigger, the user space won't be able to ++ * access the elements. 
++ */ ++ return -E2BIG; ++ ++ return 0; ++} ++ + static struct bpf_map *bloom_map_alloc(union bpf_attr *attr) + { + u32 bitset_bytes, bitset_mask, nr_hash_funcs, nr_bits; +@@ -191,6 +203,7 @@ static u64 bloom_map_mem_usage(const struct bpf_map *map) + BTF_ID_LIST_SINGLE(bpf_bloom_map_btf_ids, struct, bpf_bloom_filter) + const struct bpf_map_ops bloom_filter_map_ops = { + .map_meta_equal = bpf_map_meta_equal, ++ .map_alloc_check = bloom_map_alloc_check, + .map_alloc = bloom_map_alloc, + .map_free = bloom_map_free, + .map_get_next_key = bloom_map_get_next_key, +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index c9fc734989c68..818bac019d0d3 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -17655,8 +17655,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env) + f = fdget(fd); + map = __bpf_map_get(f); + if (IS_ERR(map)) { +- verbose(env, "fd %d is not pointing to valid bpf_map\n", +- insn[0].imm); ++ verbose(env, "fd %d is not pointing to valid bpf_map\n", fd); + return PTR_ERR(map); + } + +diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c +index 675ae318f74f8..a7d5fb473b324 100644 +--- a/kernel/dma/swiotlb.c ++++ b/kernel/dma/swiotlb.c +@@ -1692,6 +1692,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem, + mem->for_alloc = true; + #ifdef CONFIG_SWIOTLB_DYNAMIC + spin_lock_init(&mem->lock); ++ INIT_LIST_HEAD_RCU(&mem->pools); + #endif + add_mem_pool(mem, pool); + +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index fd7b84b06d926..8c7bafbee1b13 100644 +--- a/kernel/workqueue.c ++++ b/kernel/workqueue.c +@@ -1135,8 +1135,12 @@ static bool kick_pool(struct worker_pool *pool) + !cpumask_test_cpu(p->wake_cpu, pool->attrs->__pod_cpumask)) { + struct work_struct *work = list_first_entry(&pool->worklist, + struct work_struct, entry); +- p->wake_cpu = cpumask_any_distribute(pool->attrs->__pod_cpumask); +- get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++; ++ int wake_cpu = 
cpumask_any_and_distribute(pool->attrs->__pod_cpumask, ++ cpu_online_mask); ++ if (wake_cpu < nr_cpu_ids) { ++ p->wake_cpu = wake_cpu; ++ get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++; ++ } + } + #endif + wake_up_process(p); +diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug +index d2f73bb4121b0..da5513cfc1258 100644 +--- a/lib/Kconfig.debug ++++ b/lib/Kconfig.debug +@@ -373,7 +373,7 @@ config DEBUG_INFO_SPLIT + Incompatible with older versions of ccache. + + config DEBUG_INFO_BTF +- bool "Generate BTF typeinfo" ++ bool "Generate BTF type information" + depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED + depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST + depends on BPF_SYSCALL +@@ -404,7 +404,8 @@ config PAHOLE_HAS_LANG_EXCLUDE + using DEBUG_INFO_BTF_MODULES. + + config DEBUG_INFO_BTF_MODULES +- def_bool y ++ bool "Generate BTF type information for kernel modules" ++ default y + depends on DEBUG_INFO_BTF && MODULES && PAHOLE_HAS_SPLIT_BTF + help + Generate compact split BTF type information for kernel modules. +diff --git a/lib/dynamic_debug.c b/lib/dynamic_debug.c +index 6fba6423cc10b..a5a687e1c9192 100644 +--- a/lib/dynamic_debug.c ++++ b/lib/dynamic_debug.c +@@ -302,7 +302,11 @@ static int ddebug_tokenize(char *buf, char *words[], int maxwords) + } else { + for (end = buf; *end && !isspace(*end); end++) + ; +- BUG_ON(end == buf); ++ if (end == buf) { ++ pr_err("parse err after word:%d=%s\n", nwords, ++ nwords ? 
words[nwords - 1] : ""); ++ return -EINVAL; ++ } + } + + /* `buf' is start of word, `end' is one past its end */ +diff --git a/lib/maple_tree.c b/lib/maple_tree.c +index 684689457d77f..41ef91590761b 100644 +--- a/lib/maple_tree.c ++++ b/lib/maple_tree.c +@@ -5085,18 +5085,18 @@ int mas_empty_area_rev(struct ma_state *mas, unsigned long min, + if (size == 0 || max - min < size - 1) + return -EINVAL; + +- if (mas_is_start(mas)) { ++ if (mas_is_start(mas)) + mas_start(mas); +- mas->offset = mas_data_end(mas); +- } else if (mas->offset >= 2) { +- mas->offset -= 2; +- } else if (!mas_rewind_node(mas)) { ++ else if ((mas->offset < 2) && (!mas_rewind_node(mas))) + return -EBUSY; +- } + +- /* Empty set. */ +- if (mas_is_none(mas) || mas_is_ptr(mas)) ++ if (unlikely(mas_is_none(mas) || mas_is_ptr(mas))) + return mas_sparse_area(mas, min, max, size, false); ++ else if (mas->offset >= 2) ++ mas->offset -= 2; ++ else ++ mas->offset = mas_data_end(mas); ++ + + /* The start of the window can only be within these values. 
*/ + mas->index = min; +diff --git a/lib/scatterlist.c b/lib/scatterlist.c +index 68b45c82c37a6..7bc2220fea805 100644 +--- a/lib/scatterlist.c ++++ b/lib/scatterlist.c +@@ -1124,7 +1124,7 @@ static ssize_t extract_user_to_sg(struct iov_iter *iter, + do { + res = iov_iter_extract_pages(iter, &pages, maxsize, sg_max, + extraction_flags, &off); +- if (res < 0) ++ if (res <= 0) + goto failed; + + len = res; +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 555cf1a80eaed..c2047608800fb 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -1747,8 +1747,6 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio, + static void __update_and_free_hugetlb_folio(struct hstate *h, + struct folio *folio) + { +- bool clear_dtor = folio_test_hugetlb_vmemmap_optimized(folio); +- + if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) + return; + +@@ -1782,7 +1780,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h, + * If vmemmap pages were allocated above, then we need to clear the + * hugetlb destructor under the hugetlb lock. 
+ */ +- if (clear_dtor) { ++ if (folio_test_hugetlb(folio)) { + spin_lock_irq(&hugetlb_lock); + __clear_hugetlb_destructor(h, folio); + spin_unlock_irq(&hugetlb_lock); +diff --git a/mm/readahead.c b/mm/readahead.c +index 1d1a84deb5bc5..7c0449f8bec7f 100644 +--- a/mm/readahead.c ++++ b/mm/readahead.c +@@ -490,6 +490,7 @@ void page_cache_ra_order(struct readahead_control *ractl, + pgoff_t index = readahead_index(ractl); + pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT; + pgoff_t mark = index + ra->size - ra->async_size; ++ unsigned int nofs; + int err = 0; + gfp_t gfp = readahead_gfp_mask(mapping); + +@@ -506,6 +507,8 @@ void page_cache_ra_order(struct readahead_control *ractl, + new_order--; + } + ++ /* See comment in page_cache_ra_unbounded() */ ++ nofs = memalloc_nofs_save(); + filemap_invalidate_lock_shared(mapping); + while (index <= limit) { + unsigned int order = new_order; +@@ -534,6 +537,7 @@ void page_cache_ra_order(struct readahead_control *ractl, + + read_pages(ractl); + filemap_invalidate_unlock_shared(mapping); ++ memalloc_nofs_restore(nofs); + + /* + * If there were already pages in the page cache, then we may have +diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c +index f001582345052..9404dd551dfd2 100644 +--- a/net/8021q/vlan_core.c ++++ b/net/8021q/vlan_core.c +@@ -478,6 +478,8 @@ static struct sk_buff *vlan_gro_receive(struct list_head *head, + if (unlikely(!vhdr)) + goto out; + ++ NAPI_GRO_CB(skb)->network_offsets[NAPI_GRO_CB(skb)->encap_mark] = hlen; ++ + type = vhdr->h_vlan_encapsulated_proto; + + ptype = gro_find_receive_by_type(type); +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c +index 0592369579ab2..befe645d3f9bf 100644 +--- a/net/bluetooth/hci_core.c ++++ b/net/bluetooth/hci_core.c +@@ -2736,8 +2736,6 @@ void hci_unregister_dev(struct hci_dev *hdev) + + hci_unregister_suspend_notifier(hdev); + +- msft_unregister(hdev); +- + hci_dev_do_close(hdev); + + if (!test_bit(HCI_INIT, &hdev->flags) && +@@ 
-2791,6 +2789,7 @@ void hci_release_dev(struct hci_dev *hdev) + hci_discovery_filter_clear(hdev); + hci_blocked_keys_clear(hdev); + hci_codec_list_clear(&hdev->local_codecs); ++ msft_release(hdev); + hci_dev_unlock(hdev); + + ida_destroy(&hdev->unset_handle_ida); +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index 1b4abf8e90f6b..9274d32550493 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -7200,6 +7200,8 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data, + u16 handle = le16_to_cpu(ev->bis[i]); + + bis = hci_conn_hash_lookup_handle(hdev, handle); ++ if (!bis) ++ continue; + + set_bit(HCI_CONN_BIG_SYNC_FAILED, &bis->flags); + hci_connect_cfm(bis, ev->status); +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index 706d2478ddb33..1e961cfaa07b3 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -415,6 +415,9 @@ static void l2cap_chan_timeout(struct work_struct *work) + + BT_DBG("chan %p state %s", chan, state_to_string(chan->state)); + ++ if (!conn) ++ return; ++ + mutex_lock(&conn->chan_lock); + /* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling + * this work. No need to call l2cap_chan_hold(chan) here again. 
+diff --git a/net/bluetooth/msft.c b/net/bluetooth/msft.c +index 9612c5d1b13f6..d039683d3bdd4 100644 +--- a/net/bluetooth/msft.c ++++ b/net/bluetooth/msft.c +@@ -769,7 +769,7 @@ void msft_register(struct hci_dev *hdev) + mutex_init(&msft->filter_lock); + } + +-void msft_unregister(struct hci_dev *hdev) ++void msft_release(struct hci_dev *hdev) + { + struct msft_data *msft = hdev->msft_data; + +diff --git a/net/bluetooth/msft.h b/net/bluetooth/msft.h +index 2a63205b377b7..fe538e9c91c01 100644 +--- a/net/bluetooth/msft.h ++++ b/net/bluetooth/msft.h +@@ -14,7 +14,7 @@ + + bool msft_monitor_supported(struct hci_dev *hdev); + void msft_register(struct hci_dev *hdev); +-void msft_unregister(struct hci_dev *hdev); ++void msft_release(struct hci_dev *hdev); + void msft_do_open(struct hci_dev *hdev); + void msft_do_close(struct hci_dev *hdev); + void msft_vendor_evt(struct hci_dev *hdev, void *data, struct sk_buff *skb); +@@ -35,7 +35,7 @@ static inline bool msft_monitor_supported(struct hci_dev *hdev) + } + + static inline void msft_register(struct hci_dev *hdev) {} +-static inline void msft_unregister(struct hci_dev *hdev) {} ++static inline void msft_release(struct hci_dev *hdev) {} + static inline void msft_do_open(struct hci_dev *hdev) {} + static inline void msft_do_close(struct hci_dev *hdev) {} + static inline void msft_vendor_evt(struct hci_dev *hdev, void *data, +diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c +index 3cc9fab8e8384..ede7391f3aa98 100644 +--- a/net/bluetooth/sco.c ++++ b/net/bluetooth/sco.c +@@ -83,6 +83,10 @@ static void sco_sock_timeout(struct work_struct *work) + struct sock *sk; + + sco_conn_lock(conn); ++ if (!conn->hcon) { ++ sco_conn_unlock(conn); ++ return; ++ } + sk = conn->sk; + if (sk) + sock_hold(sk); +diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c +index 7431f89e897b9..d97064d460dc7 100644 +--- a/net/bridge/br_forward.c ++++ b/net/bridge/br_forward.c +@@ -258,6 +258,7 @@ static void maybe_deliver_addr(struct 
net_bridge_port *p, struct sk_buff *skb, + { + struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev; + const unsigned char *src = eth_hdr(skb)->h_source; ++ struct sk_buff *nskb; + + if (!should_deliver(p, skb)) + return; +@@ -266,12 +267,16 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb, + if (skb->dev == p->dev && ether_addr_equal(src, addr)) + return; + +- skb = skb_copy(skb, GFP_ATOMIC); +- if (!skb) { ++ __skb_push(skb, ETH_HLEN); ++ nskb = pskb_copy(skb, GFP_ATOMIC); ++ __skb_pull(skb, ETH_HLEN); ++ if (!nskb) { + DEV_STATS_INC(dev, tx_dropped); + return; + } + ++ skb = nskb; ++ __skb_pull(skb, ETH_HLEN); + if (!is_broadcast_ether_addr(addr)) + memcpy(eth_hdr(skb)->h_dest, addr, ETH_ALEN); + +diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c +index 65e9ed3851425..4488faf059a36 100644 +--- a/net/bridge/br_netlink.c ++++ b/net/bridge/br_netlink.c +@@ -455,7 +455,8 @@ static int br_fill_ifinfo(struct sk_buff *skb, + u32 filter_mask, const struct net_device *dev, + bool getlink) + { +- u8 operstate = netif_running(dev) ? dev->operstate : IF_OPER_DOWN; ++ u8 operstate = netif_running(dev) ? 
READ_ONCE(dev->operstate) : ++ IF_OPER_DOWN; + struct nlattr *af = NULL; + struct net_bridge *br; + struct ifinfomsg *hdr; +diff --git a/net/core/filter.c b/net/core/filter.c +index 24f23a30c945e..df4578219e82c 100644 +--- a/net/core/filter.c ++++ b/net/core/filter.c +@@ -4334,10 +4334,12 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri, + enum bpf_map_type map_type = ri->map_type; + void *fwd = ri->tgt_value; + u32 map_id = ri->map_id; ++ u32 flags = ri->flags; + struct bpf_map *map; + int err; + + ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */ ++ ri->flags = 0; + ri->map_type = BPF_MAP_TYPE_UNSPEC; + + if (unlikely(!xdpf)) { +@@ -4349,11 +4351,20 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri, + case BPF_MAP_TYPE_DEVMAP: + fallthrough; + case BPF_MAP_TYPE_DEVMAP_HASH: +- map = READ_ONCE(ri->map); +- if (unlikely(map)) { ++ if (unlikely(flags & BPF_F_BROADCAST)) { ++ map = READ_ONCE(ri->map); ++ ++ /* The map pointer is cleared when the map is being torn ++ * down by bpf_clear_redirect_map() ++ */ ++ if (unlikely(!map)) { ++ err = -ENOENT; ++ break; ++ } ++ + WRITE_ONCE(ri->map, NULL); + err = dev_map_enqueue_multi(xdpf, dev, map, +- ri->flags & BPF_F_EXCLUDE_INGRESS); ++ flags & BPF_F_EXCLUDE_INGRESS); + } else { + err = dev_map_enqueue(fwd, xdpf, dev); + } +@@ -4416,9 +4427,9 @@ EXPORT_SYMBOL_GPL(xdp_do_redirect_frame); + static int xdp_do_generic_redirect_map(struct net_device *dev, + struct sk_buff *skb, + struct xdp_buff *xdp, +- struct bpf_prog *xdp_prog, +- void *fwd, +- enum bpf_map_type map_type, u32 map_id) ++ struct bpf_prog *xdp_prog, void *fwd, ++ enum bpf_map_type map_type, u32 map_id, ++ u32 flags) + { + struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); + struct bpf_map *map; +@@ -4428,11 +4439,20 @@ static int xdp_do_generic_redirect_map(struct net_device *dev, + case BPF_MAP_TYPE_DEVMAP: + fallthrough; + case BPF_MAP_TYPE_DEVMAP_HASH: +- map = 
READ_ONCE(ri->map); +- if (unlikely(map)) { ++ if (unlikely(flags & BPF_F_BROADCAST)) { ++ map = READ_ONCE(ri->map); ++ ++ /* The map pointer is cleared when the map is being torn ++ * down by bpf_clear_redirect_map() ++ */ ++ if (unlikely(!map)) { ++ err = -ENOENT; ++ break; ++ } ++ + WRITE_ONCE(ri->map, NULL); + err = dev_map_redirect_multi(dev, skb, xdp_prog, map, +- ri->flags & BPF_F_EXCLUDE_INGRESS); ++ flags & BPF_F_EXCLUDE_INGRESS); + } else { + err = dev_map_generic_redirect(fwd, skb, xdp_prog); + } +@@ -4469,9 +4489,11 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, + enum bpf_map_type map_type = ri->map_type; + void *fwd = ri->tgt_value; + u32 map_id = ri->map_id; ++ u32 flags = ri->flags; + int err; + + ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */ ++ ri->flags = 0; + ri->map_type = BPF_MAP_TYPE_UNSPEC; + + if (map_type == BPF_MAP_TYPE_UNSPEC && map_id == INT_MAX) { +@@ -4491,7 +4513,7 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, + return 0; + } + +- return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, map_type, map_id); ++ return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, map_type, map_id, flags); + err: + _trace_xdp_redirect_err(dev, xdp_prog, ri->tgt_index, err); + return err; +diff --git a/net/core/gro.c b/net/core/gro.c +index cefddf65f7db0..31e40f25fdf10 100644 +--- a/net/core/gro.c ++++ b/net/core/gro.c +@@ -373,6 +373,7 @@ static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff) + const struct skb_shared_info *pinfo = skb_shinfo(skb); + const skb_frag_t *frag0 = &pinfo->frags[0]; + ++ NAPI_GRO_CB(skb)->network_offset = 0; + NAPI_GRO_CB(skb)->data_offset = 0; + NAPI_GRO_CB(skb)->frag0 = NULL; + NAPI_GRO_CB(skb)->frag0_len = 0; +diff --git a/net/core/link_watch.c b/net/core/link_watch.c +index c469d1c4db5d7..cb43f5aebfbcc 100644 +--- a/net/core/link_watch.c ++++ b/net/core/link_watch.c +@@ -67,7 +67,7 @@ static void rfc2863_policy(struct 
net_device *dev) + { + unsigned char operstate = default_operstate(dev); + +- if (operstate == dev->operstate) ++ if (operstate == READ_ONCE(dev->operstate)) + return; + + write_lock(&dev_base_lock); +@@ -87,7 +87,7 @@ static void rfc2863_policy(struct net_device *dev) + break; + } + +- dev->operstate = operstate; ++ WRITE_ONCE(dev->operstate, operstate); + + write_unlock(&dev_base_lock); + } +diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c +index fccaa5bac0ed0..5a9487af44e00 100644 +--- a/net/core/net-sysfs.c ++++ b/net/core/net-sysfs.c +@@ -307,11 +307,9 @@ static ssize_t operstate_show(struct device *dev, + const struct net_device *netdev = to_net_dev(dev); + unsigned char operstate; + +- read_lock(&dev_base_lock); +- operstate = netdev->operstate; ++ operstate = READ_ONCE(netdev->operstate); + if (!netif_running(netdev)) + operstate = IF_OPER_DOWN; +- read_unlock(&dev_base_lock); + + if (operstate >= ARRAY_SIZE(operstates)) + return -EINVAL; /* should not happen */ +diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c +index f4183c4c1ec82..ebea4fa69976a 100644 +--- a/net/core/net_namespace.c ++++ b/net/core/net_namespace.c +@@ -69,12 +69,15 @@ DEFINE_COOKIE(net_cookie); + + static struct net_generic *net_alloc_generic(void) + { ++ unsigned int gen_ptrs = READ_ONCE(max_gen_ptrs); ++ unsigned int generic_size; + struct net_generic *ng; +- unsigned int generic_size = offsetof(struct net_generic, ptr[max_gen_ptrs]); ++ ++ generic_size = offsetof(struct net_generic, ptr[gen_ptrs]); + + ng = kzalloc(generic_size, GFP_KERNEL); + if (ng) +- ng->s.len = max_gen_ptrs; ++ ng->s.len = gen_ptrs; + + return ng; + } +@@ -1229,7 +1232,11 @@ static int register_pernet_operations(struct list_head *list, + if (error < 0) + return error; + *ops->id = error; +- max_gen_ptrs = max(max_gen_ptrs, *ops->id + 1); ++ /* This does not require READ_ONCE as writers already hold ++ * pernet_ops_rwsem. But WRITE_ONCE is needed to protect ++ * net_alloc_generic. 
++ */ ++ WRITE_ONCE(max_gen_ptrs, max(max_gen_ptrs, *ops->id + 1)); + } + error = __register_pernet_operations(list, ops); + if (error) { +diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c +index e8bf481e80f72..7ea66de1442cc 100644 +--- a/net/core/rtnetlink.c ++++ b/net/core/rtnetlink.c +@@ -880,9 +880,9 @@ static void set_operstate(struct net_device *dev, unsigned char transition) + break; + } + +- if (dev->operstate != operstate) { ++ if (READ_ONCE(dev->operstate) != operstate) { + write_lock(&dev_base_lock); +- dev->operstate = operstate; ++ WRITE_ONCE(dev->operstate, operstate); + write_unlock(&dev_base_lock); + netdev_state_change(dev); + } +@@ -2519,7 +2519,7 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb) + + nla_for_each_nested(attr, tb[IFLA_VF_VLAN_LIST], rem) { + if (nla_type(attr) != IFLA_VF_VLAN_INFO || +- nla_len(attr) < NLA_HDRLEN) { ++ nla_len(attr) < sizeof(struct ifla_vf_vlan_info)) { + return -EINVAL; + } + if (len >= MAX_VLAN_LIST_LEN) +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index 60876262b3fb3..f0a9ef1aeaa29 100644 +--- a/net/core/skbuff.c ++++ b/net/core/skbuff.c +@@ -1971,11 +1971,17 @@ static inline int skb_alloc_rx_flag(const struct sk_buff *skb) + + struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask) + { +- int headerlen = skb_headroom(skb); +- unsigned int size = skb_end_offset(skb) + skb->data_len; +- struct sk_buff *n = __alloc_skb(size, gfp_mask, +- skb_alloc_rx_flag(skb), NUMA_NO_NODE); ++ struct sk_buff *n; ++ unsigned int size; ++ int headerlen; ++ ++ if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)) ++ return NULL; + ++ headerlen = skb_headroom(skb); ++ size = skb_end_offset(skb) + skb->data_len; ++ n = __alloc_skb(size, gfp_mask, ++ skb_alloc_rx_flag(skb), NUMA_NO_NODE); + if (!n) + return NULL; + +@@ -2303,12 +2309,17 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb, + /* + * Allocate the copy buffer + */ +- struct sk_buff *n = 
__alloc_skb(newheadroom + skb->len + newtailroom, +- gfp_mask, skb_alloc_rx_flag(skb), +- NUMA_NO_NODE); +- int oldheadroom = skb_headroom(skb); + int head_copy_len, head_copy_off; ++ struct sk_buff *n; ++ int oldheadroom; ++ ++ if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)) ++ return NULL; + ++ oldheadroom = skb_headroom(skb); ++ n = __alloc_skb(newheadroom + skb->len + newtailroom, ++ gfp_mask, skb_alloc_rx_flag(skb), ++ NUMA_NO_NODE); + if (!n) + return NULL; + +diff --git a/net/core/skmsg.c b/net/core/skmsg.c +index 4d75ef9d24bfa..fd20aae30be23 100644 +--- a/net/core/skmsg.c ++++ b/net/core/skmsg.c +@@ -1226,11 +1226,8 @@ static void sk_psock_verdict_data_ready(struct sock *sk) + + rcu_read_lock(); + psock = sk_psock(sk); +- if (psock) { +- read_lock_bh(&sk->sk_callback_lock); ++ if (psock) + sk_psock_data_ready(sk, psock); +- read_unlock_bh(&sk->sk_callback_lock); +- } + rcu_read_unlock(); + } + } +diff --git a/net/core/sock.c b/net/core/sock.c +index 1471c0a862b36..7f64a7b95cfb2 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -486,7 +486,7 @@ int __sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb) + unsigned long flags; + struct sk_buff_head *list = &sk->sk_receive_queue; + +- if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf) { ++ if (atomic_read(&sk->sk_rmem_alloc) >= READ_ONCE(sk->sk_rcvbuf)) { + atomic_inc(&sk->sk_drops); + trace_sock_rcvqueue_full(sk, skb); + return -ENOMEM; +@@ -556,7 +556,7 @@ int __sk_receive_skb(struct sock *sk, struct sk_buff *skb, + + skb->dev = NULL; + +- if (sk_rcvqueues_full(sk, sk->sk_rcvbuf)) { ++ if (sk_rcvqueues_full(sk, READ_ONCE(sk->sk_rcvbuf))) { + atomic_inc(&sk->sk_drops); + goto discard_and_relse; + } +diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c +index dd4b5f0aa1318..c5f7bd01379ce 100644 +--- a/net/hsr/hsr_device.c ++++ b/net/hsr/hsr_device.c +@@ -31,8 +31,8 @@ static bool is_slave_up(struct net_device *dev) + static void __hsr_set_operstate(struct net_device *dev, int 
transition) + { + write_lock(&dev_base_lock); +- if (dev->operstate != transition) { +- dev->operstate = transition; ++ if (READ_ONCE(dev->operstate) != transition) { ++ WRITE_ONCE(dev->operstate, transition); + write_unlock(&dev_base_lock); + netdev_state_change(dev); + } else { +@@ -71,39 +71,36 @@ static bool hsr_check_carrier(struct hsr_port *master) + return false; + } + +-static void hsr_check_announce(struct net_device *hsr_dev, +- unsigned char old_operstate) ++static void hsr_check_announce(struct net_device *hsr_dev) + { + struct hsr_priv *hsr; + + hsr = netdev_priv(hsr_dev); +- +- if (hsr_dev->operstate == IF_OPER_UP && old_operstate != IF_OPER_UP) { +- /* Went up */ +- hsr->announce_count = 0; +- mod_timer(&hsr->announce_timer, +- jiffies + msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL)); ++ if (netif_running(hsr_dev) && netif_oper_up(hsr_dev)) { ++ /* Enable announce timer and start sending supervisory frames */ ++ if (!timer_pending(&hsr->announce_timer)) { ++ hsr->announce_count = 0; ++ mod_timer(&hsr->announce_timer, jiffies + ++ msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL)); ++ } ++ } else { ++ /* Deactivate the announce timer */ ++ timer_delete(&hsr->announce_timer); + } +- +- if (hsr_dev->operstate != IF_OPER_UP && old_operstate == IF_OPER_UP) +- /* Went down */ +- del_timer(&hsr->announce_timer); + } + + void hsr_check_carrier_and_operstate(struct hsr_priv *hsr) + { + struct hsr_port *master; +- unsigned char old_operstate; + bool has_carrier; + + master = hsr_port_get_hsr(hsr, HSR_PT_MASTER); + /* netif_stacked_transfer_operstate() cannot be used here since + * it doesn't set IF_OPER_LOWERLAYERDOWN (?) 
+ */ +- old_operstate = master->dev->operstate; + has_carrier = hsr_check_carrier(master); + hsr_set_operstate(master, has_carrier); +- hsr_check_announce(master->dev, old_operstate); ++ hsr_check_announce(master->dev); + } + + int hsr_get_max_mtu(struct hsr_priv *hsr) +diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c +index e59962f34caa6..b50308105551f 100644 +--- a/net/ipv4/af_inet.c ++++ b/net/ipv4/af_inet.c +@@ -1571,6 +1571,7 @@ struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb) + /* The above will be needed by the transport layer if there is one + * immediately following this IP hdr. + */ ++ NAPI_GRO_CB(skb)->inner_network_offset = off; + + /* Note : No need to call skb_gro_postpull_rcsum() here, + * as we already checked checksum over ipv4 header was 0 +diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c +index 196495ff3977b..2458461e24874 100644 +--- a/net/ipv4/ip_output.c ++++ b/net/ipv4/ip_output.c +@@ -1469,7 +1469,7 @@ struct sk_buff *__ip_make_skb(struct sock *sk, + * by icmp_hdr(skb)->type. + */ + if (sk->sk_type == SOCK_RAW && +- !inet_test_bit(HDRINCL, sk)) ++ !(fl4->flowi4_flags & FLOWI_FLAG_KNOWN_NH)) + icmp_type = fl4->fl4_icmp_type; + else + icmp_type = icmp_hdr(skb)->type; +diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c +index ee55d0cbe4e63..39834b95ee59a 100644 +--- a/net/ipv4/raw.c ++++ b/net/ipv4/raw.c +@@ -604,6 +604,9 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) + (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0), + daddr, saddr, 0, 0, sk->sk_uid); + ++ fl4.fl4_icmp_type = 0; ++ fl4.fl4_icmp_code = 0; ++ + if (!hdrincl) { + rfv.msg = msg; + rfv.hlen = 0; +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index f8df35f7352a5..7bf774bdb9386 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -2710,7 +2710,7 @@ void tcp_shutdown(struct sock *sk, int how) + /* If we've already sent a FIN, or it's a closed state, skip this. 
*/ + if ((1 << sk->sk_state) & + (TCPF_ESTABLISHED | TCPF_SYN_SENT | +- TCPF_SYN_RECV | TCPF_CLOSE_WAIT)) { ++ TCPF_CLOSE_WAIT)) { + /* Clear out any half completed packets. FIN if needed. */ + if (tcp_close_state(sk)) + tcp_send_fin(sk); +@@ -2819,7 +2819,7 @@ void __tcp_close(struct sock *sk, long timeout) + * machine. State transitions: + * + * TCP_ESTABLISHED -> TCP_FIN_WAIT1 +- * TCP_SYN_RECV -> TCP_FIN_WAIT1 (forget it, it's impossible) ++ * TCP_SYN_RECV -> TCP_FIN_WAIT1 (it is difficult) + * TCP_CLOSE_WAIT -> TCP_LAST_ACK + * + * are legal only when FIN has been sent (i.e. in window), +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c +index e6c4929549428..f938442b202d7 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -6627,6 +6627,8 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb) + + tcp_initialize_rcv_mss(sk); + tcp_fast_path_on(tp); ++ if (sk->sk_shutdown & SEND_SHUTDOWN) ++ tcp_shutdown(sk, SEND_SHUTDOWN); + break; + + case TCP_FIN_WAIT1: { +diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c +index c7ffab37a34cd..c464ced7137ee 100644 +--- a/net/ipv4/tcp_ipv4.c ++++ b/net/ipv4/tcp_ipv4.c +@@ -154,6 +154,12 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) + if (tcptw->tw_ts_recent_stamp && + (!twp || (reuse && time_after32(ktime_get_seconds(), + tcptw->tw_ts_recent_stamp)))) { ++ /* inet_twsk_hashdance() sets sk_refcnt after putting twsk ++ * and releasing the bucket lock. 
++ */ ++ if (unlikely(!refcount_inc_not_zero(&sktw->sk_refcnt))) ++ return 0; ++ + /* In case of repair and re-using TIME-WAIT sockets we still + * want to be sure that it is safe as above but honor the + * sequence numbers and time stamps set as part of the repair +@@ -174,7 +180,7 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) + tp->rx_opt.ts_recent = tcptw->tw_ts_recent; + tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp; + } +- sock_hold(sktw); ++ + return 1; + } + +diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c +index ab3b7b4b4429b..5631041ae12cb 100644 +--- a/net/ipv4/tcp_output.c ++++ b/net/ipv4/tcp_output.c +@@ -3533,7 +3533,9 @@ void tcp_send_fin(struct sock *sk) + return; + } + } else { +- skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation); ++ skb = alloc_skb_fclone(MAX_TCP_HEADER, ++ sk_gfp_mask(sk, GFP_ATOMIC | ++ __GFP_NOWARN)); + if (unlikely(!skb)) + return; + +diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c +index 5e9219623c0a6..ca576587f6d21 100644 +--- a/net/ipv4/udp.c ++++ b/net/ipv4/udp.c +@@ -534,7 +534,8 @@ static inline struct sock *__udp4_lib_lookup_skb(struct sk_buff *skb, + struct sock *udp4_lib_lookup_skb(const struct sk_buff *skb, + __be16 sport, __be16 dport) + { +- const struct iphdr *iph = ip_hdr(skb); ++ const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; ++ const struct iphdr *iph = (struct iphdr *)(skb->data + offset); + struct net *net = dev_net(skb->dev); + int iif, sdif; + +diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c +index c3d67423ae189..e5971890d637d 100644 +--- a/net/ipv4/udp_offload.c ++++ b/net/ipv4/udp_offload.c +@@ -471,6 +471,7 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head, + struct sk_buff *p; + unsigned int ulen; + int ret = 0; ++ int flush; + + /* requires non zero csum, for symmetry with GSO */ + if (!uh->check) { +@@ -504,13 +505,22 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head, + 
return p; + } + ++ flush = NAPI_GRO_CB(p)->flush; ++ ++ if (NAPI_GRO_CB(p)->flush_id != 1 || ++ NAPI_GRO_CB(p)->count != 1 || ++ !NAPI_GRO_CB(p)->is_atomic) ++ flush |= NAPI_GRO_CB(p)->flush_id; ++ else ++ NAPI_GRO_CB(p)->is_atomic = false; ++ + /* Terminate the flow on len mismatch or if it grow "too much". + * Under small packet flood GRO count could elsewhere grow a lot + * leading to excessive truesize values. + * On len mismatch merge the first packet shorter than gso_size, + * otherwise complete the GRO packet. + */ +- if (ulen > ntohs(uh2->len)) { ++ if (ulen > ntohs(uh2->len) || flush) { + pp = p; + } else { + if (NAPI_GRO_CB(skb)->is_flist) { +@@ -718,7 +728,8 @@ EXPORT_SYMBOL(udp_gro_complete); + + INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff) + { +- const struct iphdr *iph = ip_hdr(skb); ++ const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; ++ const struct iphdr *iph = (struct iphdr *)(skb->data + offset); + struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); + + /* do fraglist only if there is no outer UDP encap (or we already processed it) */ +diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c +index 183f6dc372429..f6e90ba50b639 100644 +--- a/net/ipv4/xfrm4_input.c ++++ b/net/ipv4/xfrm4_input.c +@@ -61,7 +61,11 @@ int xfrm4_transport_finish(struct sk_buff *skb, int async) + ip_send_check(iph); + + if (xo && (xo->flags & XFRM_GRO)) { +- skb_mac_header_rebuild(skb); ++ /* The full l2 header needs to be preserved so that re-injecting the packet at l2 ++ * works correctly in the presence of vlan tags. 
++ */ ++ skb_mac_header_rebuild_full(skb, xo->orig_mac_len); ++ skb_reset_network_header(skb); + skb_reset_transport_header(skb); + return 0; + } +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c +index d1806eee1687d..9dfbda164e8c1 100644 +--- a/net/ipv6/addrconf.c ++++ b/net/ipv6/addrconf.c +@@ -4160,7 +4160,7 @@ static void addrconf_dad_work(struct work_struct *w) + if (!ipv6_generate_eui64(addr.s6_addr + 8, idev->dev) && + ipv6_addr_equal(&ifp->addr, &addr)) { + /* DAD failed for link-local based on MAC */ +- idev->cnf.disable_ipv6 = 1; ++ WRITE_ONCE(idev->cnf.disable_ipv6, 1); + + pr_info("%s: IPv6 being disabled!\n", + ifp->idev->dev->name); +@@ -6011,7 +6011,7 @@ static int inet6_fill_ifinfo(struct sk_buff *skb, struct inet6_dev *idev, + (dev->ifindex != dev_get_iflink(dev) && + nla_put_u32(skb, IFLA_LINK, dev_get_iflink(dev))) || + nla_put_u8(skb, IFLA_OPERSTATE, +- netif_running(dev) ? dev->operstate : IF_OPER_DOWN)) ++ netif_running(dev) ? READ_ONCE(dev->operstate) : IF_OPER_DOWN)) + goto nla_put_failure; + protoinfo = nla_nest_start_noflag(skb, IFLA_PROTINFO); + if (!protoinfo) +@@ -6321,7 +6321,8 @@ static void addrconf_disable_change(struct net *net, __s32 newf) + idev = __in6_dev_get(dev); + if (idev) { + int changed = (!idev->cnf.disable_ipv6) ^ (!newf); +- idev->cnf.disable_ipv6 = newf; ++ ++ WRITE_ONCE(idev->cnf.disable_ipv6, newf); + if (changed) + dev_disable_change(idev); + } +@@ -6338,7 +6339,7 @@ static int addrconf_disable_ipv6(struct ctl_table *table, int *p, int newf) + + net = (struct net *)table->extra2; + old = *p; +- *p = newf; ++ WRITE_ONCE(*p, newf); + + if (p == &net->ipv6.devconf_dflt->disable_ipv6) { + rtnl_unlock(); +@@ -6346,7 +6347,7 @@ static int addrconf_disable_ipv6(struct ctl_table *table, int *p, int newf) + } + + if (p == &net->ipv6.devconf_all->disable_ipv6) { +- net->ipv6.devconf_dflt->disable_ipv6 = newf; ++ WRITE_ONCE(net->ipv6.devconf_dflt->disable_ipv6, newf); + addrconf_disable_change(net, newf); + } else if 
((!newf) ^ (!old)) + dev_disable_change((struct inet6_dev *)table->extra1); +diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c +index be52b18e08a6b..6eeab21512ba9 100644 +--- a/net/ipv6/fib6_rules.c ++++ b/net/ipv6/fib6_rules.c +@@ -233,8 +233,12 @@ static int __fib6_rule_action(struct fib_rule *rule, struct flowi *flp, + rt = pol_lookup_func(lookup, + net, table, flp6, arg->lookup_data, flags); + if (rt != net->ipv6.ip6_null_entry) { ++ struct inet6_dev *idev = ip6_dst_idev(&rt->dst); ++ ++ if (!idev) ++ goto again; + err = fib6_rule_saddr(net, rule, flags, flp6, +- ip6_dst_idev(&rt->dst)->dev); ++ idev->dev); + + if (err == -EAGAIN) + goto again; +diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c +index b8378814532ce..1ba97933c74fb 100644 +--- a/net/ipv6/ip6_input.c ++++ b/net/ipv6/ip6_input.c +@@ -168,9 +168,9 @@ static struct sk_buff *ip6_rcv_core(struct sk_buff *skb, struct net_device *dev, + + SKB_DR_SET(reason, NOT_SPECIFIED); + if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL || +- !idev || unlikely(idev->cnf.disable_ipv6)) { ++ !idev || unlikely(READ_ONCE(idev->cnf.disable_ipv6))) { + __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS); +- if (idev && unlikely(idev->cnf.disable_ipv6)) ++ if (idev && unlikely(READ_ONCE(idev->cnf.disable_ipv6))) + SKB_DR_SET(reason, IPV6DISABLED); + goto drop; + } +diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c +index d6314287338da..7f014a8969fb2 100644 +--- a/net/ipv6/ip6_offload.c ++++ b/net/ipv6/ip6_offload.c +@@ -37,6 +37,40 @@ + INDIRECT_CALL_L4(cb, f2, f1, head, skb); \ + }) + ++static int ipv6_gro_pull_exthdrs(struct sk_buff *skb, int off, int proto) ++{ ++ const struct net_offload *ops = NULL; ++ struct ipv6_opt_hdr *opth; ++ ++ for (;;) { ++ int len; ++ ++ ops = rcu_dereference(inet6_offloads[proto]); ++ ++ if (unlikely(!ops)) ++ break; ++ ++ if (!(ops->flags & INET6_PROTO_GSO_EXTHDR)) ++ break; ++ ++ opth = skb_gro_header(skb, off + sizeof(*opth), off); ++ if (unlikely(!opth)) ++ 
break; ++ ++ len = ipv6_optlen(opth); ++ ++ opth = skb_gro_header(skb, off + len, off); ++ if (unlikely(!opth)) ++ break; ++ proto = opth->nexthdr; ++ ++ off += len; ++ } ++ ++ skb_gro_pull(skb, off - skb_network_offset(skb)); ++ return proto; ++} ++ + static int ipv6_gso_pull_exthdrs(struct sk_buff *skb, int proto) + { + const struct net_offload *ops = NULL; +@@ -206,28 +240,26 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head, + goto out; + + skb_set_network_header(skb, off); +- skb_gro_pull(skb, sizeof(*iph)); +- skb_set_transport_header(skb, skb_gro_offset(skb)); ++ NAPI_GRO_CB(skb)->inner_network_offset = off; + +- flush += ntohs(iph->payload_len) != skb_gro_len(skb); ++ flush += ntohs(iph->payload_len) != skb->len - hlen; + + proto = iph->nexthdr; + ops = rcu_dereference(inet6_offloads[proto]); + if (!ops || !ops->callbacks.gro_receive) { +- pskb_pull(skb, skb_gro_offset(skb)); +- skb_gro_frag0_invalidate(skb); +- proto = ipv6_gso_pull_exthdrs(skb, proto); +- skb_gro_pull(skb, -skb_transport_offset(skb)); +- skb_reset_transport_header(skb); +- __skb_push(skb, skb_gro_offset(skb)); ++ proto = ipv6_gro_pull_exthdrs(skb, hlen, proto); + + ops = rcu_dereference(inet6_offloads[proto]); + if (!ops || !ops->callbacks.gro_receive) + goto out; + +- iph = ipv6_hdr(skb); ++ iph = skb_gro_network_header(skb); ++ } else { ++ skb_gro_pull(skb, sizeof(*iph)); + } + ++ skb_set_transport_header(skb, skb_gro_offset(skb)); ++ + NAPI_GRO_CB(skb)->proto = proto; + + flush--; +diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c +index 53fe1375b147c..f97cb368e5a81 100644 +--- a/net/ipv6/ip6_output.c ++++ b/net/ipv6/ip6_output.c +@@ -227,7 +227,7 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb) + skb->protocol = htons(ETH_P_IPV6); + skb->dev = dev; + +- if (unlikely(idev->cnf.disable_ipv6)) { ++ if (unlikely(!idev || READ_ONCE(idev->cnf.disable_ipv6))) { + IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS); + 
kfree_skb_reason(skb, SKB_DROP_REASON_IPV6DISABLED); + return 0; +@@ -2003,7 +2003,7 @@ struct sk_buff *__ip6_make_skb(struct sock *sk, + u8 icmp6_type; + + if (sk->sk_socket->type == SOCK_RAW && +- !inet_test_bit(HDRINCL, sk)) ++ !(fl6->flowi6_flags & FLOWI_FLAG_KNOWN_NH)) + icmp6_type = fl6->fl6_icmp_type; + else + icmp6_type = icmp6_hdr(skb)->icmp6_type; +diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c +index a05c83cfdde97..124cf2bb2a6d7 100644 +--- a/net/ipv6/udp.c ++++ b/net/ipv6/udp.c +@@ -275,7 +275,8 @@ static struct sock *__udp6_lib_lookup_skb(struct sk_buff *skb, + struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb, + __be16 sport, __be16 dport) + { +- const struct ipv6hdr *iph = ipv6_hdr(skb); ++ const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; ++ const struct ipv6hdr *iph = (struct ipv6hdr *)(skb->data + offset); + struct net *net = dev_net(skb->dev); + int iif, sdif; + +diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c +index 626d7b362dc7b..639a4b506f9b5 100644 +--- a/net/ipv6/udp_offload.c ++++ b/net/ipv6/udp_offload.c +@@ -164,7 +164,8 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb) + + INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff) + { +- const struct ipv6hdr *ipv6h = ipv6_hdr(skb); ++ const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; ++ const struct ipv6hdr *ipv6h = (struct ipv6hdr *)(skb->data + offset); + struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); + + /* do fraglist only if there is no outer UDP encap (or we already processed it) */ +diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c +index 4156387248e40..8432b50d9ce4c 100644 +--- a/net/ipv6/xfrm6_input.c ++++ b/net/ipv6/xfrm6_input.c +@@ -56,7 +56,11 @@ int xfrm6_transport_finish(struct sk_buff *skb, int async) + skb_postpush_rcsum(skb, skb_network_header(skb), nhlen); + + if (xo && (xo->flags & XFRM_GRO)) { +- skb_mac_header_rebuild(skb); ++ 
/* The full l2 header needs to be preserved so that re-injecting the packet at l2 ++ * works correctly in the presence of vlan tags. ++ */ ++ skb_mac_header_rebuild_full(skb, xo->orig_mac_len); ++ skb_reset_network_header(skb); + skb_reset_transport_header(skb); + return 0; + } +diff --git a/net/l2tp/l2tp_eth.c b/net/l2tp/l2tp_eth.c +index f2ae03c404736..1f41d2f3b8c4e 100644 +--- a/net/l2tp/l2tp_eth.c ++++ b/net/l2tp/l2tp_eth.c +@@ -136,6 +136,9 @@ static void l2tp_eth_dev_recv(struct l2tp_session *session, struct sk_buff *skb, + /* checksums verified by L2TP */ + skb->ip_summed = CHECKSUM_NONE; + ++ /* drop outer flow-hash */ ++ skb_clear_hash(skb); ++ + skb_dst_drop(skb); + nf_reset_ct(skb); + +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h +index 07beb72ddd25a..fefaa9e902a2f 100644 +--- a/net/mac80211/ieee80211_i.h ++++ b/net/mac80211/ieee80211_i.h +@@ -122,7 +122,7 @@ struct ieee80211_bss { + }; + + /** +- * enum ieee80211_corrupt_data_flags - BSS data corruption flags ++ * enum ieee80211_bss_corrupt_data_flags - BSS data corruption flags + * @IEEE80211_BSS_CORRUPT_BEACON: last beacon frame received was corrupted + * @IEEE80211_BSS_CORRUPT_PROBE_RESP: last probe response received was corrupted + * +@@ -135,7 +135,7 @@ enum ieee80211_bss_corrupt_data_flags { + }; + + /** +- * enum ieee80211_valid_data_flags - BSS valid data flags ++ * enum ieee80211_bss_valid_data_flags - BSS valid data flags + * @IEEE80211_BSS_VALID_WMM: WMM/UAPSD data was gathered from non-corrupt IE + * @IEEE80211_BSS_VALID_RATES: Supported rates were gathered from non-corrupt IE + * @IEEE80211_BSS_VALID_ERP: ERP flag was gathered from non-corrupt IE +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c +index 6e574e2adc22e..cf01f1f298a3b 100644 +--- a/net/mac80211/mlme.c ++++ b/net/mac80211/mlme.c +@@ -7080,7 +7080,7 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, + sdata_info(sdata, + "failed to insert STA entry for the AP (error %d)\n", 
+ err); +- goto out_err; ++ goto out_release_chan; + } + } else + WARN_ON_ONCE(!ether_addr_equal(link->u.mgd.bssid, cbss->bssid)); +@@ -7091,8 +7091,9 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata, + + return 0; + ++out_release_chan: ++ ieee80211_link_release_channel(link); + out_err: +- ieee80211_link_release_channel(&sdata->deflink); + ieee80211_vif_set_links(sdata, 0, 0); + return err; + } +diff --git a/net/mptcp/ctrl.c b/net/mptcp/ctrl.c +index e72b518c5d026..de75df904a003 100644 +--- a/net/mptcp/ctrl.c ++++ b/net/mptcp/ctrl.c +@@ -87,6 +87,43 @@ static void mptcp_pernet_set_defaults(struct mptcp_pernet *pernet) + } + + #ifdef CONFIG_SYSCTL ++static int mptcp_set_scheduler(const struct net *net, const char *name) ++{ ++ struct mptcp_pernet *pernet = mptcp_get_pernet(net); ++ struct mptcp_sched_ops *sched; ++ int ret = 0; ++ ++ rcu_read_lock(); ++ sched = mptcp_sched_find(name); ++ if (sched) ++ strscpy(pernet->scheduler, name, MPTCP_SCHED_NAME_MAX); ++ else ++ ret = -ENOENT; ++ rcu_read_unlock(); ++ ++ return ret; ++} ++ ++static int proc_scheduler(struct ctl_table *ctl, int write, ++ void *buffer, size_t *lenp, loff_t *ppos) ++{ ++ const struct net *net = current->nsproxy->net_ns; ++ char val[MPTCP_SCHED_NAME_MAX]; ++ struct ctl_table tbl = { ++ .data = val, ++ .maxlen = MPTCP_SCHED_NAME_MAX, ++ }; ++ int ret; ++ ++ strscpy(val, mptcp_get_scheduler(net), MPTCP_SCHED_NAME_MAX); ++ ++ ret = proc_dostring(&tbl, write, buffer, lenp, ppos); ++ if (write && ret == 0) ++ ret = mptcp_set_scheduler(net, val); ++ ++ return ret; ++} ++ + static struct ctl_table mptcp_sysctl_table[] = { + { + .procname = "enabled", +@@ -139,7 +176,7 @@ static struct ctl_table mptcp_sysctl_table[] = { + .procname = "scheduler", + .maxlen = MPTCP_SCHED_NAME_MAX, + .mode = 0644, +- .proc_handler = proc_dostring, ++ .proc_handler = proc_scheduler, + }, + {} + }; +diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c +index 01ac690af7799..618d80112d1e2 100644 
+--- a/net/mptcp/protocol.c ++++ b/net/mptcp/protocol.c +@@ -3698,6 +3698,9 @@ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_TOKENFALLBACKINIT); + mptcp_subflow_early_fallback(msk, subflow); + } ++ ++ WRITE_ONCE(msk->write_seq, subflow->idsn); ++ WRITE_ONCE(msk->snd_nxt, subflow->idsn); + if (likely(!__mptcp_check_fallback(msk))) + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVE); + +diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c +index 772ddb5824d9e..5d708af0fcfd3 100644 +--- a/net/nfc/nci/core.c ++++ b/net/nfc/nci/core.c +@@ -1518,6 +1518,7 @@ static void nci_rx_work(struct work_struct *work) + + if (!nci_plen(skb->data)) { + kfree_skb(skb); ++ kcov_remote_stop(); + break; + } + +diff --git a/net/nsh/nsh.c b/net/nsh/nsh.c +index f4a38bd6a7e04..bfb7758063f31 100644 +--- a/net/nsh/nsh.c ++++ b/net/nsh/nsh.c +@@ -77,13 +77,15 @@ EXPORT_SYMBOL_GPL(nsh_pop); + static struct sk_buff *nsh_gso_segment(struct sk_buff *skb, + netdev_features_t features) + { ++ unsigned int outer_hlen, mac_len, nsh_len; + struct sk_buff *segs = ERR_PTR(-EINVAL); + u16 mac_offset = skb->mac_header; +- unsigned int nsh_len, mac_len; +- __be16 proto; ++ __be16 outer_proto, proto; + + skb_reset_network_header(skb); + ++ outer_proto = skb->protocol; ++ outer_hlen = skb_mac_header_len(skb); + mac_len = skb->mac_len; + + if (unlikely(!pskb_may_pull(skb, NSH_BASE_HDR_LEN))) +@@ -113,10 +115,10 @@ static struct sk_buff *nsh_gso_segment(struct sk_buff *skb, + } + + for (skb = segs; skb; skb = skb->next) { +- skb->protocol = htons(ETH_P_NSH); +- __skb_push(skb, nsh_len); +- skb->mac_header = mac_offset; +- skb->network_header = skb->mac_header + mac_len; ++ skb->protocol = outer_proto; ++ __skb_push(skb, nsh_len + outer_hlen); ++ skb_reset_mac_header(skb); ++ skb_set_network_header(skb, outer_hlen); + skb->mac_len = mac_len; + } + +diff --git a/net/phonet/pn_netlink.c b/net/phonet/pn_netlink.c +index 
59aebe2968907..dd4c7e9a634fb 100644 +--- a/net/phonet/pn_netlink.c ++++ b/net/phonet/pn_netlink.c +@@ -193,7 +193,7 @@ void rtm_phonet_notify(int event, struct net_device *dev, u8 dst) + struct sk_buff *skb; + int err = -ENOBUFS; + +- skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct ifaddrmsg)) + ++ skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct rtmsg)) + + nla_total_size(1) + nla_total_size(4), GFP_KERNEL); + if (skb == NULL) + goto errout; +diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h +index bda3f6690b321..d431376bb120a 100644 +--- a/net/rxrpc/ar-internal.h ++++ b/net/rxrpc/ar-internal.h +@@ -688,7 +688,7 @@ struct rxrpc_call { + * packets) rather than bytes. + */ + #define RXRPC_TX_SMSS RXRPC_JUMBO_DATALEN +-#define RXRPC_MIN_CWND (RXRPC_TX_SMSS > 2190 ? 2 : RXRPC_TX_SMSS > 1095 ? 3 : 4) ++#define RXRPC_MIN_CWND 4 + u8 cong_cwnd; /* Congestion window size */ + u8 cong_extra; /* Extra to send for congestion management */ + u8 cong_ssthresh; /* Slow-start threshold */ +diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c +index 0a50341d920af..29385908099ef 100644 +--- a/net/rxrpc/call_object.c ++++ b/net/rxrpc/call_object.c +@@ -175,12 +175,7 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, + call->rx_winsize = rxrpc_rx_window_size; + call->tx_winsize = 16; + +- if (RXRPC_TX_SMSS > 2190) +- call->cong_cwnd = 2; +- else if (RXRPC_TX_SMSS > 1095) +- call->cong_cwnd = 3; +- else +- call->cong_cwnd = 4; ++ call->cong_cwnd = RXRPC_MIN_CWND; + call->cong_ssthresh = RXRPC_TX_MAX_WINDOW; + + call->rxnet = rxnet; +diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c +index 1f251d758cb9d..598b4ee389fc1 100644 +--- a/net/rxrpc/conn_event.c ++++ b/net/rxrpc/conn_event.c +@@ -88,7 +88,7 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, + struct rxrpc_ackpacket ack; + }; + } __attribute__((packed)) pkt; +- struct rxrpc_ackinfo ack_info; ++ struct rxrpc_acktrailer trailer; + size_t len; + int ret, ioc; + u32 serial, 
mtu, call_id, padding; +@@ -122,8 +122,8 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, + iov[0].iov_len = sizeof(pkt.whdr); + iov[1].iov_base = &padding; + iov[1].iov_len = 3; +- iov[2].iov_base = &ack_info; +- iov[2].iov_len = sizeof(ack_info); ++ iov[2].iov_base = &trailer; ++ iov[2].iov_len = sizeof(trailer); + + serial = rxrpc_get_next_serial(conn); + +@@ -158,14 +158,14 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, + pkt.ack.serial = htonl(skb ? sp->hdr.serial : 0); + pkt.ack.reason = skb ? RXRPC_ACK_DUPLICATE : RXRPC_ACK_IDLE; + pkt.ack.nAcks = 0; +- ack_info.rxMTU = htonl(rxrpc_rx_mtu); +- ack_info.maxMTU = htonl(mtu); +- ack_info.rwind = htonl(rxrpc_rx_window_size); +- ack_info.jumbo_max = htonl(rxrpc_rx_jumbo_max); ++ trailer.maxMTU = htonl(rxrpc_rx_mtu); ++ trailer.ifMTU = htonl(mtu); ++ trailer.rwind = htonl(rxrpc_rx_window_size); ++ trailer.jumbo_max = htonl(rxrpc_rx_jumbo_max); + pkt.whdr.flags |= RXRPC_SLOW_START_OK; + padding = 0; + iov[0].iov_len += sizeof(pkt.ack); +- len += sizeof(pkt.ack) + 3 + sizeof(ack_info); ++ len += sizeof(pkt.ack) + 3 + sizeof(trailer); + ioc = 3; + + trace_rxrpc_tx_ack(chan->call_debug_id, serial, +diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c +index df8a271948a1c..7aa58129ae455 100644 +--- a/net/rxrpc/conn_object.c ++++ b/net/rxrpc/conn_object.c +@@ -118,18 +118,13 @@ struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *lo + switch (srx->transport.family) { + case AF_INET: + if (peer->srx.transport.sin.sin_port != +- srx->transport.sin.sin_port || +- peer->srx.transport.sin.sin_addr.s_addr != +- srx->transport.sin.sin_addr.s_addr) ++ srx->transport.sin.sin_port) + goto not_found; + break; + #ifdef CONFIG_AF_RXRPC_IPV6 + case AF_INET6: + if (peer->srx.transport.sin6.sin6_port != +- srx->transport.sin6.sin6_port || +- memcmp(&peer->srx.transport.sin6.sin6_addr, +- &srx->transport.sin6.sin6_addr, +- sizeof(struct in6_addr)) != 0) ++ 
srx->transport.sin6.sin6_port) + goto not_found; + break; + #endif +diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c +index 9691de00ade75..5dfda1ac51dda 100644 +--- a/net/rxrpc/input.c ++++ b/net/rxrpc/input.c +@@ -9,6 +9,17 @@ + + #include "ar-internal.h" + ++/* Override priority when generating ACKs for received DATA */ ++static const u8 rxrpc_ack_priority[RXRPC_ACK__INVALID] = { ++ [RXRPC_ACK_IDLE] = 1, ++ [RXRPC_ACK_DELAY] = 2, ++ [RXRPC_ACK_REQUESTED] = 3, ++ [RXRPC_ACK_DUPLICATE] = 4, ++ [RXRPC_ACK_EXCEEDS_WINDOW] = 5, ++ [RXRPC_ACK_NOSPACE] = 6, ++ [RXRPC_ACK_OUT_OF_SEQUENCE] = 7, ++}; ++ + static void rxrpc_proto_abort(struct rxrpc_call *call, rxrpc_seq_t seq, + enum rxrpc_abort_reason why) + { +@@ -366,7 +377,7 @@ static void rxrpc_input_queue_data(struct rxrpc_call *call, struct sk_buff *skb, + * Process a DATA packet. + */ + static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, +- bool *_notify) ++ bool *_notify, rxrpc_serial_t *_ack_serial, int *_ack_reason) + { + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + struct sk_buff *oos; +@@ -419,8 +430,6 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, + /* Send an immediate ACK if we fill in a hole */ + else if (!skb_queue_empty(&call->rx_oos_queue)) + ack_reason = RXRPC_ACK_DELAY; +- else +- call->ackr_nr_unacked++; + + window++; + if (after(window, wtop)) { +@@ -498,12 +507,16 @@ static void rxrpc_input_data_one(struct rxrpc_call *call, struct sk_buff *skb, + } + + send_ack: +- if (ack_reason >= 0) +- rxrpc_send_ACK(call, ack_reason, serial, +- rxrpc_propose_ack_input_data); +- else +- rxrpc_propose_delay_ACK(call, serial, +- rxrpc_propose_ack_input_data); ++ if (ack_reason >= 0) { ++ if (rxrpc_ack_priority[ack_reason] > rxrpc_ack_priority[*_ack_reason]) { ++ *_ack_serial = serial; ++ *_ack_reason = ack_reason; ++ } else if (rxrpc_ack_priority[ack_reason] == rxrpc_ack_priority[*_ack_reason] && ++ ack_reason == RXRPC_ACK_REQUESTED) { ++ *_ack_serial 
= serial; ++ *_ack_reason = ack_reason; ++ } ++ } + } + + /* +@@ -514,9 +527,11 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb + struct rxrpc_jumbo_header jhdr; + struct rxrpc_skb_priv *sp = rxrpc_skb(skb), *jsp; + struct sk_buff *jskb; ++ rxrpc_serial_t ack_serial = 0; + unsigned int offset = sizeof(struct rxrpc_wire_header); + unsigned int len = skb->len - offset; + bool notify = false; ++ int ack_reason = 0; + + while (sp->hdr.flags & RXRPC_JUMBO_PACKET) { + if (len < RXRPC_JUMBO_SUBPKTLEN) +@@ -536,7 +551,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb + jsp = rxrpc_skb(jskb); + jsp->offset = offset; + jsp->len = RXRPC_JUMBO_DATALEN; +- rxrpc_input_data_one(call, jskb, ¬ify); ++ rxrpc_input_data_one(call, jskb, ¬ify, &ack_serial, &ack_reason); + rxrpc_free_skb(jskb, rxrpc_skb_put_jumbo_subpacket); + + sp->hdr.flags = jhdr.flags; +@@ -549,7 +564,16 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb + + sp->offset = offset; + sp->len = len; +- rxrpc_input_data_one(call, skb, ¬ify); ++ rxrpc_input_data_one(call, skb, ¬ify, &ack_serial, &ack_reason); ++ ++ if (ack_reason > 0) { ++ rxrpc_send_ACK(call, ack_reason, ack_serial, ++ rxrpc_propose_ack_input_data); ++ } else { ++ call->ackr_nr_unacked++; ++ rxrpc_propose_delay_ACK(call, sp->hdr.serial, ++ rxrpc_propose_ack_input_data); ++ } + if (notify) { + trace_rxrpc_notify_socket(call->debug_id, sp->hdr.serial); + rxrpc_notify_socket(call); +@@ -670,14 +694,14 @@ static void rxrpc_complete_rtt_probe(struct rxrpc_call *call, + /* + * Process the extra information that may be appended to an ACK packet + */ +-static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb, +- struct rxrpc_ackinfo *ackinfo) ++static void rxrpc_input_ack_trailer(struct rxrpc_call *call, struct sk_buff *skb, ++ struct rxrpc_acktrailer *trailer) + { + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + struct rxrpc_peer *peer; + 
unsigned int mtu; + bool wake = false; +- u32 rwind = ntohl(ackinfo->rwind); ++ u32 rwind = ntohl(trailer->rwind); + + if (rwind > RXRPC_TX_MAX_WINDOW) + rwind = RXRPC_TX_MAX_WINDOW; +@@ -688,10 +712,7 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb, + call->tx_winsize = rwind; + } + +- if (call->cong_ssthresh > rwind) +- call->cong_ssthresh = rwind; +- +- mtu = min(ntohl(ackinfo->rxMTU), ntohl(ackinfo->maxMTU)); ++ mtu = min(ntohl(trailer->maxMTU), ntohl(trailer->ifMTU)); + + peer = call->peer; + if (mtu < peer->maxdata) { +@@ -837,7 +858,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) + struct rxrpc_ack_summary summary = { 0 }; + struct rxrpc_ackpacket ack; + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); +- struct rxrpc_ackinfo info; ++ struct rxrpc_acktrailer trailer; + rxrpc_serial_t ack_serial, acked_serial; + rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt, since; + int nr_acks, offset, ioffset; +@@ -917,11 +938,11 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) + goto send_response; + } + +- info.rxMTU = 0; ++ trailer.maxMTU = 0; + ioffset = offset + nr_acks + 3; +- if (skb->len >= ioffset + sizeof(info) && +- skb_copy_bits(skb, ioffset, &info, sizeof(info)) < 0) +- return rxrpc_proto_abort(call, 0, rxrpc_badmsg_short_ack_info); ++ if (skb->len >= ioffset + sizeof(trailer) && ++ skb_copy_bits(skb, ioffset, &trailer, sizeof(trailer)) < 0) ++ return rxrpc_proto_abort(call, 0, rxrpc_badmsg_short_ack_trailer); + + if (nr_acks > 0) + skb_condense(skb); +@@ -950,8 +971,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) + } + + /* Parse rwind and mtu sizes if provided. 
*/ +- if (info.rxMTU) +- rxrpc_input_ackinfo(call, skb, &info); ++ if (trailer.maxMTU) ++ rxrpc_input_ack_trailer(call, skb, &trailer); + + if (first_soft_ack == 0) + return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_zero); +diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c +index 4a292f860ae37..cad6a7d18e040 100644 +--- a/net/rxrpc/output.c ++++ b/net/rxrpc/output.c +@@ -83,7 +83,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, + struct rxrpc_txbuf *txb, + u16 *_rwind) + { +- struct rxrpc_ackinfo ackinfo; ++ struct rxrpc_acktrailer trailer; + unsigned int qsize, sack, wrap, to; + rxrpc_seq_t window, wtop; + int rsize; +@@ -126,16 +126,16 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, + qsize = (window - 1) - call->rx_consumed; + rsize = max_t(int, call->rx_winsize - qsize, 0); + *_rwind = rsize; +- ackinfo.rxMTU = htonl(rxrpc_rx_mtu); +- ackinfo.maxMTU = htonl(mtu); +- ackinfo.rwind = htonl(rsize); +- ackinfo.jumbo_max = htonl(jmax); ++ trailer.maxMTU = htonl(rxrpc_rx_mtu); ++ trailer.ifMTU = htonl(mtu); ++ trailer.rwind = htonl(rsize); ++ trailer.jumbo_max = htonl(jmax); + + *ackp++ = 0; + *ackp++ = 0; + *ackp++ = 0; +- memcpy(ackp, &ackinfo, sizeof(ackinfo)); +- return txb->ack.nAcks + 3 + sizeof(ackinfo); ++ memcpy(ackp, &trailer, sizeof(trailer)); ++ return txb->ack.nAcks + 3 + sizeof(trailer); + } + + /* +diff --git a/net/rxrpc/protocol.h b/net/rxrpc/protocol.h +index e8ee4af43ca89..4fe6b4d20ada9 100644 +--- a/net/rxrpc/protocol.h ++++ b/net/rxrpc/protocol.h +@@ -135,9 +135,9 @@ struct rxrpc_ackpacket { + /* + * ACK packets can have a further piece of information tagged on the end + */ +-struct rxrpc_ackinfo { +- __be32 rxMTU; /* maximum Rx MTU size (bytes) [AFS 3.3] */ +- __be32 maxMTU; /* maximum interface MTU size (bytes) [AFS 3.3] */ ++struct rxrpc_acktrailer { ++ __be32 maxMTU; /* maximum Rx MTU size (bytes) [AFS 3.3] */ ++ __be32 ifMTU; /* maximum interface MTU size (bytes) [AFS 3.3] */ + __be32 rwind; /* Rx 
window size (packets) [AFS 3.4] */ + __be32 jumbo_max; /* max packets to stick into a jumbo packet [AFS 3.5] */ + }; +diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c +index 89981dbe46c94..598ac9ead64b7 100644 +--- a/net/smc/smc_ib.c ++++ b/net/smc/smc_ib.c +@@ -209,13 +209,18 @@ int smc_ib_find_route(struct net *net, __be32 saddr, __be32 daddr, + if (IS_ERR(rt)) + goto out; + if (rt->rt_uses_gateway && rt->rt_gw_family != AF_INET) +- goto out; +- neigh = rt->dst.ops->neigh_lookup(&rt->dst, NULL, &fl4.daddr); +- if (neigh) { +- memcpy(nexthop_mac, neigh->ha, ETH_ALEN); +- *uses_gateway = rt->rt_uses_gateway; +- return 0; +- } ++ goto out_rt; ++ neigh = dst_neigh_lookup(&rt->dst, &fl4.daddr); ++ if (!neigh) ++ goto out_rt; ++ memcpy(nexthop_mac, neigh->ha, ETH_ALEN); ++ *uses_gateway = rt->rt_uses_gateway; ++ neigh_release(neigh); ++ ip_rt_put(rt); ++ return 0; ++ ++out_rt: ++ ip_rt_put(rt); + out: + return -ENOENT; + } +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c +index 339dfc5b92246..f4d32cf2cd16a 100644 +--- a/net/sunrpc/clnt.c ++++ b/net/sunrpc/clnt.c +@@ -399,7 +399,7 @@ static struct rpc_clnt * rpc_new_client(const struct rpc_create_args *args, + clnt->cl_maxproc = version->nrprocs; + clnt->cl_prog = args->prognumber ? : program->number; + clnt->cl_vers = version->number; +- clnt->cl_stats = program->stats; ++ clnt->cl_stats = args->stats ? 
: program->stats; + clnt->cl_metrics = rpc_alloc_iostats(clnt); + rpc_init_pipe_dir_head(&clnt->cl_pipedir_objects); + err = -ENOMEM; +@@ -685,6 +685,7 @@ struct rpc_clnt *rpc_clone_client(struct rpc_clnt *clnt) + .version = clnt->cl_vers, + .authflavor = clnt->cl_auth->au_flavor, + .cred = clnt->cl_cred, ++ .stats = clnt->cl_stats, + }; + return __rpc_clone_client(&args, clnt); + } +@@ -707,6 +708,7 @@ rpc_clone_client_set_auth(struct rpc_clnt *clnt, rpc_authflavor_t flavor) + .version = clnt->cl_vers, + .authflavor = flavor, + .cred = clnt->cl_cred, ++ .stats = clnt->cl_stats, + }; + return __rpc_clone_client(&args, clnt); + } +@@ -1053,6 +1055,7 @@ struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *old, + .version = vers, + .authflavor = old->cl_auth->au_flavor, + .cred = old->cl_cred, ++ .stats = old->cl_stats, + }; + struct rpc_clnt *clnt; + int err; +diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c +index a15bf2ede89bf..c3007f3e16f8c 100644 +--- a/net/sunrpc/xprtsock.c ++++ b/net/sunrpc/xprtsock.c +@@ -2644,6 +2644,7 @@ static void xs_tcp_tls_setup_socket(struct work_struct *work) + .xprtsec = { + .policy = RPC_XPRTSEC_NONE, + }, ++ .stats = upper_clnt->cl_stats, + }; + unsigned int pflags = current->flags; + struct rpc_clnt *lower_clnt; +diff --git a/net/tipc/msg.c b/net/tipc/msg.c +index 5c9fd4791c4ba..76284fc538ebd 100644 +--- a/net/tipc/msg.c ++++ b/net/tipc/msg.c +@@ -142,9 +142,9 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) + if (fragid == FIRST_FRAGMENT) { + if (unlikely(head)) + goto err; +- *buf = NULL; + if (skb_has_frag_list(frag) && __skb_linearize(frag)) + goto err; ++ *buf = NULL; + frag = skb_unshare(frag, GFP_ATOMIC); + if (unlikely(!frag)) + goto err; +@@ -156,6 +156,11 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) + if (!head) + goto err; + ++ /* Either the input skb ownership is transferred to headskb ++ * or the input skb is freed, clear the reference to avoid ++ * bad access 
on error path. ++ */ ++ *buf = NULL; + if (skb_try_coalesce(head, frag, &headstolen, &delta)) { + kfree_skb_partial(frag, headstolen); + } else { +@@ -179,7 +184,6 @@ int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf) + *headbuf = NULL; + return 1; + } +- *buf = NULL; + return 0; + err: + kfree_skb(*buf); +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index 9f6d8bcecfebe..c4f08f7eb741d 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -14052,6 +14052,8 @@ static int nl80211_set_coalesce(struct sk_buff *skb, struct genl_info *info) + error: + for (i = 0; i < new_coalesce.n_rules; i++) { + tmp_rule = &new_coalesce.rules[i]; ++ if (!tmp_rule) ++ continue; + for (j = 0; j < tmp_rule->n_patterns; j++) + kfree(tmp_rule->patterns[j].mask); + kfree(tmp_rule->patterns); +diff --git a/net/wireless/trace.h b/net/wireless/trace.h +index 617c0d0dfa963..e89443173c7b4 100644 +--- a/net/wireless/trace.h ++++ b/net/wireless/trace.h +@@ -1015,7 +1015,7 @@ TRACE_EVENT(rdev_get_mpp, + TRACE_EVENT(rdev_dump_mpp, + TP_PROTO(struct wiphy *wiphy, struct net_device *netdev, int _idx, + u8 *dst, u8 *mpp), +- TP_ARGS(wiphy, netdev, _idx, mpp, dst), ++ TP_ARGS(wiphy, netdev, _idx, dst, mpp), + TP_STRUCT__entry( + WIPHY_ENTRY + NETDEV_ENTRY +diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c +index d5ee96789d4bf..0c08bac3ed269 100644 +--- a/net/xfrm/xfrm_input.c ++++ b/net/xfrm/xfrm_input.c +@@ -388,11 +388,15 @@ static int xfrm_prepare_input(struct xfrm_state *x, struct sk_buff *skb) + */ + static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb) + { ++ struct xfrm_offload *xo = xfrm_offload(skb); + int ihl = skb->data - skb_transport_header(skb); + + if (skb->transport_header != skb->network_header) { + memmove(skb_transport_header(skb), + skb_network_header(skb), ihl); ++ if (xo) ++ xo->orig_mac_len = ++ skb_mac_header_was_set(skb) ? 
skb_mac_header_len(skb) : 0; + skb->network_header = skb->transport_header; + } + ip_hdr(skb)->tot_len = htons(skb->len + ihl); +@@ -403,11 +407,15 @@ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb) + static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb) + { + #if IS_ENABLED(CONFIG_IPV6) ++ struct xfrm_offload *xo = xfrm_offload(skb); + int ihl = skb->data - skb_transport_header(skb); + + if (skb->transport_header != skb->network_header) { + memmove(skb_transport_header(skb), + skb_network_header(skb), ihl); ++ if (xo) ++ xo->orig_mac_len = ++ skb_mac_header_was_set(skb) ? skb_mac_header_len(skb) : 0; + skb->network_header = skb->transport_header; + } + ipv6_hdr(skb)->payload_len = htons(skb->len + ihl - +diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs +index e8811700239aa..de54d5fede6f8 100644 +--- a/rust/kernel/lib.rs ++++ b/rust/kernel/lib.rs +@@ -60,7 +60,7 @@ + /// The top level entrypoint to implementing a kernel module. + /// + /// For any teardown or cleanup operations, your type may implement [`Drop`]. +-pub trait Module: Sized + Sync { ++pub trait Module: Sized + Sync + Send { + /// Called at module initialization time. + /// + /// Use this method to perform whatever setup or registration your module +diff --git a/rust/macros/module.rs b/rust/macros/module.rs +index d62d8710d77ab..acd0393b50957 100644 +--- a/rust/macros/module.rs ++++ b/rust/macros/module.rs +@@ -199,17 +199,6 @@ pub(crate) fn module(ts: TokenStream) -> TokenStream { + /// Used by the printing macros, e.g. [`info!`]. + const __LOG_PREFIX: &[u8] = b\"{name}\\0\"; + +- /// The \"Rust loadable module\" mark. +- // +- // This may be best done another way later on, e.g. as a new modinfo +- // key or a new section. For the moment, keep it simple. 
+- #[cfg(MODULE)] +- #[doc(hidden)] +- #[used] +- static __IS_RUST_MODULE: () = (); +- +- static mut __MOD: Option<{type_}> = None; +- + // SAFETY: `__this_module` is constructed by the kernel at load time and will not be + // freed until the module is unloaded. + #[cfg(MODULE)] +@@ -221,76 +210,132 @@ pub(crate) fn module(ts: TokenStream) -> TokenStream { + kernel::ThisModule::from_ptr(core::ptr::null_mut()) + }}; + +- // Loadable modules need to export the `{{init,cleanup}}_module` identifiers. +- #[cfg(MODULE)] +- #[doc(hidden)] +- #[no_mangle] +- pub extern \"C\" fn init_module() -> core::ffi::c_int {{ +- __init() +- }} +- +- #[cfg(MODULE)] +- #[doc(hidden)] +- #[no_mangle] +- pub extern \"C\" fn cleanup_module() {{ +- __exit() +- }} ++ // Double nested modules, since then nobody can access the public items inside. ++ mod __module_init {{ ++ mod __module_init {{ ++ use super::super::{type_}; ++ ++ /// The \"Rust loadable module\" mark. ++ // ++ // This may be best done another way later on, e.g. as a new modinfo ++ // key or a new section. For the moment, keep it simple. ++ #[cfg(MODULE)] ++ #[doc(hidden)] ++ #[used] ++ static __IS_RUST_MODULE: () = (); ++ ++ static mut __MOD: Option<{type_}> = None; ++ ++ // Loadable modules need to export the `{{init,cleanup}}_module` identifiers. ++ /// # Safety ++ /// ++ /// This function must not be called after module initialization, because it may be ++ /// freed after that completes. ++ #[cfg(MODULE)] ++ #[doc(hidden)] ++ #[no_mangle] ++ #[link_section = \".init.text\"] ++ pub unsafe extern \"C\" fn init_module() -> core::ffi::c_int {{ ++ // SAFETY: This function is inaccessible to the outside due to the double ++ // module wrapping it. It is called exactly once by the C side via its ++ // unique name. ++ unsafe {{ __init() }} ++ }} + +- // Built-in modules are initialized through an initcall pointer +- // and the identifiers need to be unique. 
+- #[cfg(not(MODULE))] +- #[cfg(not(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS))] +- #[doc(hidden)] +- #[link_section = \"{initcall_section}\"] +- #[used] +- pub static __{name}_initcall: extern \"C\" fn() -> core::ffi::c_int = __{name}_init; ++ #[cfg(MODULE)] ++ #[doc(hidden)] ++ #[no_mangle] ++ pub extern \"C\" fn cleanup_module() {{ ++ // SAFETY: ++ // - This function is inaccessible to the outside due to the double ++ // module wrapping it. It is called exactly once by the C side via its ++ // unique name, ++ // - furthermore it is only called after `init_module` has returned `0` ++ // (which delegates to `__init`). ++ unsafe {{ __exit() }} ++ }} + +- #[cfg(not(MODULE))] +- #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)] +- core::arch::global_asm!( +- r#\".section \"{initcall_section}\", \"a\" +- __{name}_initcall: +- .long __{name}_init - . +- .previous +- \"# +- ); ++ // Built-in modules are initialized through an initcall pointer ++ // and the identifiers need to be unique. ++ #[cfg(not(MODULE))] ++ #[cfg(not(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS))] ++ #[doc(hidden)] ++ #[link_section = \"{initcall_section}\"] ++ #[used] ++ pub static __{name}_initcall: extern \"C\" fn() -> core::ffi::c_int = __{name}_init; ++ ++ #[cfg(not(MODULE))] ++ #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)] ++ core::arch::global_asm!( ++ r#\".section \"{initcall_section}\", \"a\" ++ __{name}_initcall: ++ .long __{name}_init - . ++ .previous ++ \"# ++ ); ++ ++ #[cfg(not(MODULE))] ++ #[doc(hidden)] ++ #[no_mangle] ++ pub extern \"C\" fn __{name}_init() -> core::ffi::c_int {{ ++ // SAFETY: This function is inaccessible to the outside due to the double ++ // module wrapping it. It is called exactly once by the C side via its ++ // placement above in the initcall section. 
++ unsafe {{ __init() }} ++ }} + +- #[cfg(not(MODULE))] +- #[doc(hidden)] +- #[no_mangle] +- pub extern \"C\" fn __{name}_init() -> core::ffi::c_int {{ +- __init() +- }} ++ #[cfg(not(MODULE))] ++ #[doc(hidden)] ++ #[no_mangle] ++ pub extern \"C\" fn __{name}_exit() {{ ++ // SAFETY: ++ // - This function is inaccessible to the outside due to the double ++ // module wrapping it. It is called exactly once by the C side via its ++ // unique name, ++ // - furthermore it is only called after `__{name}_init` has returned `0` ++ // (which delegates to `__init`). ++ unsafe {{ __exit() }} ++ }} + +- #[cfg(not(MODULE))] +- #[doc(hidden)] +- #[no_mangle] +- pub extern \"C\" fn __{name}_exit() {{ +- __exit() +- }} ++ /// # Safety ++ /// ++ /// This function must only be called once. ++ unsafe fn __init() -> core::ffi::c_int {{ ++ match <{type_} as kernel::Module>::init(&super::super::THIS_MODULE) {{ ++ Ok(m) => {{ ++ // SAFETY: No data race, since `__MOD` can only be accessed by this ++ // module and there only `__init` and `__exit` access it. These ++ // functions are only called once and `__exit` cannot be called ++ // before or during `__init`. ++ unsafe {{ ++ __MOD = Some(m); ++ }} ++ return 0; ++ }} ++ Err(e) => {{ ++ return e.to_errno(); ++ }} ++ }} ++ }} + +- fn __init() -> core::ffi::c_int {{ +- match <{type_} as kernel::Module>::init(&THIS_MODULE) {{ +- Ok(m) => {{ ++ /// # Safety ++ /// ++ /// This function must ++ /// - only be called once, ++ /// - be called after `__init` has been called and returned `0`. ++ unsafe fn __exit() {{ ++ // SAFETY: No data race, since `__MOD` can only be accessed by this module ++ // and there only `__init` and `__exit` access it. These functions are only ++ // called once and `__init` was already called. + unsafe {{ +- __MOD = Some(m); ++ // Invokes `drop()` on `__MOD`, which should be used for cleanup. 
++ __MOD = None; + }} +- return 0; +- }} +- Err(e) => {{ +- return e.to_errno(); + }} +- }} +- }} + +- fn __exit() {{ +- unsafe {{ +- // Invokes `drop()` on `__MOD`, which should be used for cleanup. +- __MOD = None; ++ {modinfo} + }} + }} +- +- {modinfo} + ", + type_ = info.type_, + name = info.name, +diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal +index b3a6aa8fbe8cb..1979913aff682 100644 +--- a/scripts/Makefile.modfinal ++++ b/scripts/Makefile.modfinal +@@ -23,7 +23,7 @@ modname = $(notdir $(@:.mod.o=)) + part-of-module = y + + quiet_cmd_cc_o_c = CC [M] $@ +- cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI) $(CFLAGS_GCOV), $(c_flags)) -c -o $@ $< ++ cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI) $(CFLAGS_GCOV) $(CFLAGS_KCSAN), $(c_flags)) -c -o $@ $< + + %.mod.o: %.mod.c FORCE + $(call if_changed_dep,cc_o_c) +diff --git a/security/keys/key.c b/security/keys/key.c +index 5f103b2713c64..35db23d05302e 100644 +--- a/security/keys/key.c ++++ b/security/keys/key.c +@@ -464,7 +464,8 @@ static int __key_instantiate_and_link(struct key *key, + if (authkey) + key_invalidate(authkey); + +- key_set_expiry(key, prep->expiry); ++ if (prep->expiry != TIME64_MAX) ++ key_set_expiry(key, prep->expiry); + } + } + +diff --git a/sound/hda/intel-sdw-acpi.c b/sound/hda/intel-sdw-acpi.c +index b57d72ea4503f..4e376994bf78b 100644 +--- a/sound/hda/intel-sdw-acpi.c ++++ b/sound/hda/intel-sdw-acpi.c +@@ -41,6 +41,8 @@ static bool is_link_enabled(struct fwnode_handle *fw_node, u8 idx) + "intel-quirk-mask", + &quirk_mask); + ++ fwnode_handle_put(link); ++ + if (quirk_mask & SDW_INTEL_QUIRK_MASK_BUS_DISABLE) + return false; + +diff --git a/sound/pci/emu10k1/emu10k1.c b/sound/pci/emu10k1/emu10k1.c +index fe72e7d772412..dadeda7758cee 100644 +--- a/sound/pci/emu10k1/emu10k1.c ++++ b/sound/pci/emu10k1/emu10k1.c +@@ -189,8 +189,7 @@ static int snd_emu10k1_suspend(struct device *dev) + + emu->suspend = 1; + +- cancel_work_sync(&emu->emu1010.firmware_work); +- 
cancel_work_sync(&emu->emu1010.clock_work); ++ cancel_work_sync(&emu->emu1010.work); + + snd_ac97_suspend(emu->ac97); + +diff --git a/sound/pci/emu10k1/emu10k1_main.c b/sound/pci/emu10k1/emu10k1_main.c +index de5c41e578e1f..ade90c7ecd922 100644 +--- a/sound/pci/emu10k1/emu10k1_main.c ++++ b/sound/pci/emu10k1/emu10k1_main.c +@@ -732,69 +732,67 @@ static int snd_emu1010_load_firmware(struct snd_emu10k1 *emu, int dock, + return snd_emu1010_load_firmware_entry(emu, *fw); + } + +-static void emu1010_firmware_work(struct work_struct *work) ++static void snd_emu1010_load_dock_firmware(struct snd_emu10k1 *emu) + { +- struct snd_emu10k1 *emu; +- u32 tmp, tmp2, reg; ++ u32 tmp, tmp2; + int err; + +- emu = container_of(work, struct snd_emu10k1, +- emu1010.firmware_work); +- if (emu->card->shutdown) ++ // The docking events clearly arrive prematurely - while the ++ // Dock's FPGA seems to be successfully programmed, the Dock ++ // fails to initialize subsequently if we don't give it some ++ // time to "warm up" here. 
++ msleep(200); ++ ++ dev_info(emu->card->dev, "emu1010: Loading Audio Dock Firmware\n"); ++ /* Return to Audio Dock programming mode */ ++ snd_emu1010_fpga_write(emu, EMU_HANA_FPGA_CONFIG, ++ EMU_HANA_FPGA_CONFIG_AUDIODOCK); ++ err = snd_emu1010_load_firmware(emu, 1, &emu->dock_fw); ++ if (err < 0) + return; +-#ifdef CONFIG_PM_SLEEP +- if (emu->suspend) ++ snd_emu1010_fpga_write(emu, EMU_HANA_FPGA_CONFIG, 0); ++ ++ snd_emu1010_fpga_read(emu, EMU_HANA_ID, &tmp); ++ dev_dbg(emu->card->dev, "emu1010: EMU_HANA+DOCK_ID = 0x%x\n", tmp); ++ if ((tmp & 0x1f) != 0x15) { ++ /* FPGA failed to be programmed */ ++ dev_err(emu->card->dev, ++ "emu1010: Loading Audio Dock Firmware failed, reg = 0x%x\n", ++ tmp); + return; +-#endif ++ } ++ dev_info(emu->card->dev, "emu1010: Audio Dock Firmware loaded\n"); ++ ++ snd_emu1010_fpga_read(emu, EMU_DOCK_MAJOR_REV, &tmp); ++ snd_emu1010_fpga_read(emu, EMU_DOCK_MINOR_REV, &tmp2); ++ dev_info(emu->card->dev, "Audio Dock ver: %u.%u\n", tmp, tmp2); ++ ++ /* Allow DLL to settle, to sync clocking between 1010 and Dock */ ++ msleep(10); ++} ++ ++static void emu1010_dock_event(struct snd_emu10k1 *emu) ++{ ++ u32 reg; ++ + snd_emu1010_fpga_read(emu, EMU_HANA_OPTION_CARDS, ®); /* OPTIONS: Which cards are attached to the EMU */ + if (reg & EMU_HANA_OPTION_DOCK_OFFLINE) { + /* Audio Dock attached */ +- /* Return to Audio Dock programming mode */ +- dev_info(emu->card->dev, +- "emu1010: Loading Audio Dock Firmware\n"); +- snd_emu1010_fpga_write(emu, EMU_HANA_FPGA_CONFIG, +- EMU_HANA_FPGA_CONFIG_AUDIODOCK); +- err = snd_emu1010_load_firmware(emu, 1, &emu->dock_fw); +- if (err < 0) +- return; +- snd_emu1010_fpga_write(emu, EMU_HANA_FPGA_CONFIG, 0); +- snd_emu1010_fpga_read(emu, EMU_HANA_ID, &tmp); +- dev_info(emu->card->dev, +- "emu1010: EMU_HANA+DOCK_ID = 0x%x\n", tmp); +- if ((tmp & 0x1f) != 0x15) { +- /* FPGA failed to be programmed */ +- dev_info(emu->card->dev, +- "emu1010: Loading Audio Dock Firmware file failed, reg = 0x%x\n", +- tmp); +- return; 
+- } +- dev_info(emu->card->dev, +- "emu1010: Audio Dock Firmware loaded\n"); +- snd_emu1010_fpga_read(emu, EMU_DOCK_MAJOR_REV, &tmp); +- snd_emu1010_fpga_read(emu, EMU_DOCK_MINOR_REV, &tmp2); +- dev_info(emu->card->dev, "Audio Dock ver: %u.%u\n", tmp, tmp2); +- /* Sync clocking between 1010 and Dock */ +- /* Allow DLL to settle */ +- msleep(10); ++ snd_emu1010_load_dock_firmware(emu); + /* Unmute all. Default is muted after a firmware load */ + snd_emu1010_fpga_write(emu, EMU_HANA_UNMUTE, EMU_UNMUTE); ++ } else if (!(reg & EMU_HANA_OPTION_DOCK_ONLINE)) { ++ /* Audio Dock removed */ ++ dev_info(emu->card->dev, "emu1010: Audio Dock detached\n"); ++ /* The hardware auto-mutes all, so we unmute again */ ++ snd_emu1010_fpga_write(emu, EMU_HANA_UNMUTE, EMU_UNMUTE); + } + } + +-static void emu1010_clock_work(struct work_struct *work) ++static void emu1010_clock_event(struct snd_emu10k1 *emu) + { +- struct snd_emu10k1 *emu; + struct snd_ctl_elem_id id; + +- emu = container_of(work, struct snd_emu10k1, +- emu1010.clock_work); +- if (emu->card->shutdown) +- return; +-#ifdef CONFIG_PM_SLEEP +- if (emu->suspend) +- return; +-#endif +- + spin_lock_irq(&emu->reg_lock); + // This is the only thing that can actually happen. 
+ emu->emu1010.clock_source = emu->emu1010.clock_fallback; +@@ -805,21 +803,40 @@ static void emu1010_clock_work(struct work_struct *work) + snd_ctl_notify(emu->card, SNDRV_CTL_EVENT_MASK_VALUE, &id); + } + +-static void emu1010_interrupt(struct snd_emu10k1 *emu) ++static void emu1010_work(struct work_struct *work) + { ++ struct snd_emu10k1 *emu; + u32 sts; + ++ emu = container_of(work, struct snd_emu10k1, emu1010.work); ++ if (emu->card->shutdown) ++ return; ++#ifdef CONFIG_PM_SLEEP ++ if (emu->suspend) ++ return; ++#endif ++ + snd_emu1010_fpga_read(emu, EMU_HANA_IRQ_STATUS, &sts); +- if (sts & EMU_HANA_IRQ_DOCK_LOST) { +- /* Audio Dock removed */ +- dev_info(emu->card->dev, "emu1010: Audio Dock detached\n"); +- /* The hardware auto-mutes all, so we unmute again */ +- snd_emu1010_fpga_write(emu, EMU_HANA_UNMUTE, EMU_UNMUTE); +- } else if (sts & EMU_HANA_IRQ_DOCK) { +- schedule_work(&emu->emu1010.firmware_work); +- } ++ ++ // The distinction of the IRQ status bits is unreliable, ++ // so we dispatch later based on option card status. ++ if (sts & (EMU_HANA_IRQ_DOCK | EMU_HANA_IRQ_DOCK_LOST)) ++ emu1010_dock_event(emu); ++ + if (sts & EMU_HANA_IRQ_WCLK_CHANGED) +- schedule_work(&emu->emu1010.clock_work); ++ emu1010_clock_event(emu); ++} ++ ++static void emu1010_interrupt(struct snd_emu10k1 *emu) ++{ ++ // We get an interrupt on each GPIO input pin change, but we ++ // care only about the ones triggered by the dedicated pin. ++ u16 sts = inw(emu->port + A_GPIO); ++ u16 bit = emu->card_capabilities->ca0108_chip ? 
0x2000 : 0x8000; ++ if (!(sts & bit)) ++ return; ++ ++ schedule_work(&emu->emu1010.work); + } + + /* +@@ -889,7 +906,7 @@ static int snd_emu10k1_emu1010_init(struct snd_emu10k1 *emu) + snd_emu1010_fpga_read(emu, EMU_HANA_OPTION_CARDS, ®); + dev_info(emu->card->dev, "emu1010: Card options = 0x%x\n", reg); + if (reg & EMU_HANA_OPTION_DOCK_OFFLINE) +- schedule_work(&emu->emu1010.firmware_work); ++ snd_emu1010_load_dock_firmware(emu); + if (emu->card_capabilities->no_adat) { + emu->emu1010.optical_in = 0; /* IN_SPDIF */ + emu->emu1010.optical_out = 0; /* OUT_SPDIF */ +@@ -960,8 +977,7 @@ static void snd_emu10k1_free(struct snd_card *card) + /* Disable 48Volt power to Audio Dock */ + snd_emu1010_fpga_write(emu, EMU_HANA_DOCK_PWR, 0); + } +- cancel_work_sync(&emu->emu1010.firmware_work); +- cancel_work_sync(&emu->emu1010.clock_work); ++ cancel_work_sync(&emu->emu1010.work); + release_firmware(emu->firmware); + release_firmware(emu->dock_fw); + snd_util_memhdr_free(emu->memhdr); +@@ -1540,8 +1556,7 @@ int snd_emu10k1_create(struct snd_card *card, + emu->irq = -1; + emu->synth = NULL; + emu->get_synth_voice = NULL; +- INIT_WORK(&emu->emu1010.firmware_work, emu1010_firmware_work); +- INIT_WORK(&emu->emu1010.clock_work, emu1010_clock_work); ++ INIT_WORK(&emu->emu1010.work, emu1010_work); + /* read revision & serial */ + emu->revision = pci->revision; + pci_read_config_dword(pci, PCI_SUBSYSTEM_VENDOR_ID, &emu->serial); +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 057c207237cc2..47e404bde4241 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -9826,6 +9826,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), + SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT), + SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED), ++ SND_PCI_QUIRK(0x103c, 0x86c1, "HP Laptop 15-da3001TU", 
ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), + SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO), + SND_PCI_QUIRK(0x103c, 0x86e7, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), + SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), +diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c +index 3c025dabaf7a4..1253695bebd86 100644 +--- a/sound/soc/codecs/wsa881x.c ++++ b/sound/soc/codecs/wsa881x.c +@@ -1155,6 +1155,7 @@ static int wsa881x_probe(struct sdw_slave *pdev, + pdev->prop.sink_ports = GENMASK(WSA881X_MAX_SWR_PORTS, 0); + pdev->prop.sink_dpn_prop = wsa_sink_dpn_prop; + pdev->prop.scp_int1_mask = SDW_SCP_INT1_BUS_CLASH | SDW_SCP_INT1_PARITY; ++ pdev->prop.clk_stop_mode1 = true; + gpiod_direction_output(wsa881x->sd_n, !wsa881x->sd_n_val); + + wsa881x->regmap = devm_regmap_init_sdw(pdev, &wsa881x_regmap_config); +diff --git a/sound/soc/intel/avs/topology.c b/sound/soc/intel/avs/topology.c +index 45d0eb2a8e710..141255420c12b 100644 +--- a/sound/soc/intel/avs/topology.c ++++ b/sound/soc/intel/avs/topology.c +@@ -1412,6 +1412,8 @@ static int avs_widget_load(struct snd_soc_component *comp, int index, + if (!le32_to_cpu(dw->priv.size)) + return 0; + ++ w->no_wname_in_kcontrol_name = true; ++ + if (w->ignore_suspend && !AVS_S0IX_SUPPORTED) { + dev_info_once(comp->dev, "Device does not support S0IX, check BIOS settings\n"); + w->ignore_suspend = false; +diff --git a/sound/soc/meson/Kconfig b/sound/soc/meson/Kconfig +index b93ea33739f29..6458d5dc4902f 100644 +--- a/sound/soc/meson/Kconfig ++++ b/sound/soc/meson/Kconfig +@@ -99,6 +99,7 @@ config SND_MESON_AXG_PDM + + config SND_MESON_CARD_UTILS + tristate ++ select SND_DYNAMIC_MINORS + + config SND_MESON_CODEC_GLUE + tristate +diff --git a/sound/soc/meson/axg-card.c b/sound/soc/meson/axg-card.c +index f10c0c17863eb..b6f5b4572012d 100644 +--- a/sound/soc/meson/axg-card.c ++++ b/sound/soc/meson/axg-card.c +@@ -318,6 +318,7 @@ static int 
axg_card_add_link(struct snd_soc_card *card, struct device_node *np, + + dai_link->cpus = cpu; + dai_link->num_cpus = 1; ++ dai_link->nonatomic = true; + + ret = meson_card_parse_dai(card, np, dai_link->cpus); + if (ret) +diff --git a/sound/soc/meson/axg-fifo.c b/sound/soc/meson/axg-fifo.c +index bccfb770b3391..94b169a5493b5 100644 +--- a/sound/soc/meson/axg-fifo.c ++++ b/sound/soc/meson/axg-fifo.c +@@ -3,6 +3,7 @@ + // Copyright (c) 2018 BayLibre, SAS. + // Author: Jerome Brunet + ++#include + #include + #include + #include +@@ -145,8 +146,8 @@ int axg_fifo_pcm_hw_params(struct snd_soc_component *component, + /* Enable irq if necessary */ + irq_en = runtime->no_period_wakeup ? 0 : FIFO_INT_COUNT_REPEAT; + regmap_update_bits(fifo->map, FIFO_CTRL0, +- CTRL0_INT_EN(FIFO_INT_COUNT_REPEAT), +- CTRL0_INT_EN(irq_en)); ++ CTRL0_INT_EN, ++ FIELD_PREP(CTRL0_INT_EN, irq_en)); + + return 0; + } +@@ -176,9 +177,9 @@ int axg_fifo_pcm_hw_free(struct snd_soc_component *component, + { + struct axg_fifo *fifo = axg_fifo_data(ss); + +- /* Disable the block count irq */ ++ /* Disable irqs */ + regmap_update_bits(fifo->map, FIFO_CTRL0, +- CTRL0_INT_EN(FIFO_INT_COUNT_REPEAT), 0); ++ CTRL0_INT_EN, 0); + + return 0; + } +@@ -187,13 +188,13 @@ EXPORT_SYMBOL_GPL(axg_fifo_pcm_hw_free); + static void axg_fifo_ack_irq(struct axg_fifo *fifo, u8 mask) + { + regmap_update_bits(fifo->map, FIFO_CTRL1, +- CTRL1_INT_CLR(FIFO_INT_MASK), +- CTRL1_INT_CLR(mask)); ++ CTRL1_INT_CLR, ++ FIELD_PREP(CTRL1_INT_CLR, mask)); + + /* Clear must also be cleared */ + regmap_update_bits(fifo->map, FIFO_CTRL1, +- CTRL1_INT_CLR(FIFO_INT_MASK), +- 0); ++ CTRL1_INT_CLR, ++ FIELD_PREP(CTRL1_INT_CLR, 0)); + } + + static irqreturn_t axg_fifo_pcm_irq_block(int irq, void *dev_id) +@@ -203,18 +204,26 @@ static irqreturn_t axg_fifo_pcm_irq_block(int irq, void *dev_id) + unsigned int status; + + regmap_read(fifo->map, FIFO_STATUS1, &status); ++ status = FIELD_GET(STATUS1_INT_STS, status); ++ axg_fifo_ack_irq(fifo, status); + 
+- status = STATUS1_INT_STS(status) & FIFO_INT_MASK; ++ /* Use the thread to call period elapsed on nonatomic links */ + if (status & FIFO_INT_COUNT_REPEAT) +- snd_pcm_period_elapsed(ss); +- else +- dev_dbg(axg_fifo_dev(ss), "unexpected irq - STS 0x%02x\n", +- status); ++ return IRQ_WAKE_THREAD; + +- /* Ack irqs */ +- axg_fifo_ack_irq(fifo, status); ++ dev_dbg(axg_fifo_dev(ss), "unexpected irq - STS 0x%02x\n", ++ status); ++ ++ return IRQ_NONE; ++} ++ ++static irqreturn_t axg_fifo_pcm_irq_block_thread(int irq, void *dev_id) ++{ ++ struct snd_pcm_substream *ss = dev_id; ++ ++ snd_pcm_period_elapsed(ss); + +- return IRQ_RETVAL(status); ++ return IRQ_HANDLED; + } + + int axg_fifo_pcm_open(struct snd_soc_component *component, +@@ -242,8 +251,9 @@ int axg_fifo_pcm_open(struct snd_soc_component *component, + if (ret) + return ret; + +- ret = request_irq(fifo->irq, axg_fifo_pcm_irq_block, 0, +- dev_name(dev), ss); ++ ret = request_threaded_irq(fifo->irq, axg_fifo_pcm_irq_block, ++ axg_fifo_pcm_irq_block_thread, ++ IRQF_ONESHOT, dev_name(dev), ss); + if (ret) + return ret; + +@@ -254,15 +264,15 @@ int axg_fifo_pcm_open(struct snd_soc_component *component, + + /* Setup status2 so it reports the memory pointer */ + regmap_update_bits(fifo->map, FIFO_CTRL1, +- CTRL1_STATUS2_SEL_MASK, +- CTRL1_STATUS2_SEL(STATUS2_SEL_DDR_READ)); ++ CTRL1_STATUS2_SEL, ++ FIELD_PREP(CTRL1_STATUS2_SEL, STATUS2_SEL_DDR_READ)); + + /* Make sure the dma is initially disabled */ + __dma_enable(fifo, false); + + /* Disable irqs until params are ready */ + regmap_update_bits(fifo->map, FIFO_CTRL0, +- CTRL0_INT_EN(FIFO_INT_MASK), 0); ++ CTRL0_INT_EN, 0); + + /* Clear any pending interrupt */ + axg_fifo_ack_irq(fifo, FIFO_INT_MASK); +diff --git a/sound/soc/meson/axg-fifo.h b/sound/soc/meson/axg-fifo.h +index b63acd723c870..5b7d32c37991b 100644 +--- a/sound/soc/meson/axg-fifo.h ++++ b/sound/soc/meson/axg-fifo.h +@@ -42,21 +42,19 @@ struct snd_soc_pcm_runtime; + + #define FIFO_CTRL0 0x00 + #define 
CTRL0_DMA_EN BIT(31) +-#define CTRL0_INT_EN(x) ((x) << 16) ++#define CTRL0_INT_EN GENMASK(23, 16) + #define CTRL0_SEL_MASK GENMASK(2, 0) + #define CTRL0_SEL_SHIFT 0 + #define FIFO_CTRL1 0x04 +-#define CTRL1_INT_CLR(x) ((x) << 0) +-#define CTRL1_STATUS2_SEL_MASK GENMASK(11, 8) +-#define CTRL1_STATUS2_SEL(x) ((x) << 8) ++#define CTRL1_INT_CLR GENMASK(7, 0) ++#define CTRL1_STATUS2_SEL GENMASK(11, 8) + #define STATUS2_SEL_DDR_READ 0 +-#define CTRL1_FRDDR_DEPTH_MASK GENMASK(31, 24) +-#define CTRL1_FRDDR_DEPTH(x) ((x) << 24) ++#define CTRL1_FRDDR_DEPTH GENMASK(31, 24) + #define FIFO_START_ADDR 0x08 + #define FIFO_FINISH_ADDR 0x0c + #define FIFO_INT_ADDR 0x10 + #define FIFO_STATUS1 0x14 +-#define STATUS1_INT_STS(x) ((x) << 0) ++#define STATUS1_INT_STS GENMASK(7, 0) + #define FIFO_STATUS2 0x18 + #define FIFO_INIT_ADDR 0x24 + #define FIFO_CTRL2 0x28 +diff --git a/sound/soc/meson/axg-frddr.c b/sound/soc/meson/axg-frddr.c +index 8c166a5f338ce..747a900c0bb22 100644 +--- a/sound/soc/meson/axg-frddr.c ++++ b/sound/soc/meson/axg-frddr.c +@@ -7,6 +7,7 @@ + * This driver implements the frontend playback DAI of AXG and G12A based SoCs + */ + ++#include + #include + #include + #include +@@ -59,8 +60,8 @@ static int axg_frddr_dai_hw_params(struct snd_pcm_substream *substream, + /* Trim the FIFO depth if the period is small to improve latency */ + depth = min(period, fifo->depth); + val = (depth / AXG_FIFO_BURST) - 1; +- regmap_update_bits(fifo->map, FIFO_CTRL1, CTRL1_FRDDR_DEPTH_MASK, +- CTRL1_FRDDR_DEPTH(val)); ++ regmap_update_bits(fifo->map, FIFO_CTRL1, CTRL1_FRDDR_DEPTH, ++ FIELD_PREP(CTRL1_FRDDR_DEPTH, val)); + + return 0; + } +diff --git a/sound/soc/meson/axg-tdm-interface.c b/sound/soc/meson/axg-tdm-interface.c +index 2cedbce738373..a71790908e178 100644 +--- a/sound/soc/meson/axg-tdm-interface.c ++++ b/sound/soc/meson/axg-tdm-interface.c +@@ -349,26 +349,31 @@ static int axg_tdm_iface_hw_params(struct snd_pcm_substream *substream, + return 0; + } + +-static int 
axg_tdm_iface_hw_free(struct snd_pcm_substream *substream, ++static int axg_tdm_iface_trigger(struct snd_pcm_substream *substream, ++ int cmd, + struct snd_soc_dai *dai) + { +- struct axg_tdm_stream *ts = snd_soc_dai_get_dma_data(dai, substream); ++ struct axg_tdm_stream *ts = ++ snd_soc_dai_get_dma_data(dai, substream); + +- /* Stop all attached formatters */ +- axg_tdm_stream_stop(ts); ++ switch (cmd) { ++ case SNDRV_PCM_TRIGGER_START: ++ case SNDRV_PCM_TRIGGER_RESUME: ++ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: ++ axg_tdm_stream_start(ts); ++ break; ++ case SNDRV_PCM_TRIGGER_SUSPEND: ++ case SNDRV_PCM_TRIGGER_PAUSE_PUSH: ++ case SNDRV_PCM_TRIGGER_STOP: ++ axg_tdm_stream_stop(ts); ++ break; ++ default: ++ return -EINVAL; ++ } + + return 0; + } + +-static int axg_tdm_iface_prepare(struct snd_pcm_substream *substream, +- struct snd_soc_dai *dai) +-{ +- struct axg_tdm_stream *ts = snd_soc_dai_get_dma_data(dai, substream); +- +- /* Force all attached formatters to update */ +- return axg_tdm_stream_reset(ts); +-} +- + static int axg_tdm_iface_remove_dai(struct snd_soc_dai *dai) + { + int stream; +@@ -412,8 +417,7 @@ static const struct snd_soc_dai_ops axg_tdm_iface_ops = { + .set_fmt = axg_tdm_iface_set_fmt, + .startup = axg_tdm_iface_startup, + .hw_params = axg_tdm_iface_hw_params, +- .prepare = axg_tdm_iface_prepare, +- .hw_free = axg_tdm_iface_hw_free, ++ .trigger = axg_tdm_iface_trigger, + }; + + /* TDM Backend DAIs */ +diff --git a/sound/soc/meson/axg-toddr.c b/sound/soc/meson/axg-toddr.c +index 1a0be177b8fe7..972ad99f31be2 100644 +--- a/sound/soc/meson/axg-toddr.c ++++ b/sound/soc/meson/axg-toddr.c +@@ -5,6 +5,7 @@ + + /* This driver implements the frontend capture DAI of AXG based SoCs */ + ++#include + #include + #include + #include +@@ -19,12 +20,9 @@ + #define CTRL0_TODDR_EXT_SIGNED BIT(29) + #define CTRL0_TODDR_PP_MODE BIT(28) + #define CTRL0_TODDR_SYNC_CH BIT(27) +-#define CTRL0_TODDR_TYPE_MASK GENMASK(15, 13) +-#define CTRL0_TODDR_TYPE(x) ((x) << 13) 
+-#define CTRL0_TODDR_MSB_POS_MASK GENMASK(12, 8) +-#define CTRL0_TODDR_MSB_POS(x) ((x) << 8) +-#define CTRL0_TODDR_LSB_POS_MASK GENMASK(7, 3) +-#define CTRL0_TODDR_LSB_POS(x) ((x) << 3) ++#define CTRL0_TODDR_TYPE GENMASK(15, 13) ++#define CTRL0_TODDR_MSB_POS GENMASK(12, 8) ++#define CTRL0_TODDR_LSB_POS GENMASK(7, 3) + #define CTRL1_TODDR_FORCE_FINISH BIT(25) + #define CTRL1_SEL_SHIFT 28 + +@@ -76,12 +74,12 @@ static int axg_toddr_dai_hw_params(struct snd_pcm_substream *substream, + width = params_width(params); + + regmap_update_bits(fifo->map, FIFO_CTRL0, +- CTRL0_TODDR_TYPE_MASK | +- CTRL0_TODDR_MSB_POS_MASK | +- CTRL0_TODDR_LSB_POS_MASK, +- CTRL0_TODDR_TYPE(type) | +- CTRL0_TODDR_MSB_POS(TODDR_MSB_POS) | +- CTRL0_TODDR_LSB_POS(TODDR_MSB_POS - (width - 1))); ++ CTRL0_TODDR_TYPE | ++ CTRL0_TODDR_MSB_POS | ++ CTRL0_TODDR_LSB_POS, ++ FIELD_PREP(CTRL0_TODDR_TYPE, type) | ++ FIELD_PREP(CTRL0_TODDR_MSB_POS, TODDR_MSB_POS) | ++ FIELD_PREP(CTRL0_TODDR_LSB_POS, TODDR_MSB_POS - (width - 1))); + + return 0; + } +diff --git a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c +index 44f39a520bb39..e80a2a5ec56a1 100644 +--- a/sound/soc/sof/intel/hda-dsp.c ++++ b/sound/soc/sof/intel/hda-dsp.c +@@ -681,17 +681,27 @@ static int hda_suspend(struct snd_sof_dev *sdev, bool runtime_suspend) + struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata; + const struct sof_intel_dsp_desc *chip = hda->desc; + struct hdac_bus *bus = sof_to_bus(sdev); ++ bool imr_lost = false; + int ret, j; + + /* +- * The memory used for IMR boot loses its content in deeper than S3 state +- * We must not try IMR boot on next power up (as it will fail). +- * ++ * The memory used for IMR boot loses its content in deeper than S3 ++ * state on CAVS platforms. ++ * On ACE platforms due to the system architecture the IMR content is ++ * lost at S3 state already, they are tailored for s2idle use. ++ * We must not try IMR boot on next power up in these cases as it will ++ * fail. 
++ */ ++ if (sdev->system_suspend_target > SOF_SUSPEND_S3 || ++ (chip->hw_ip_version >= SOF_INTEL_ACE_1_0 && ++ sdev->system_suspend_target == SOF_SUSPEND_S3)) ++ imr_lost = true; ++ ++ /* + * In case of firmware crash or boot failure set the skip_imr_boot to true + * as well in order to try to re-load the firmware to do a 'cold' boot. + */ +- if (sdev->system_suspend_target > SOF_SUSPEND_S3 || +- sdev->fw_state == SOF_FW_CRASHED || ++ if (imr_lost || sdev->fw_state == SOF_FW_CRASHED || + sdev->fw_state == SOF_FW_BOOT_FAILED) + hda->skip_imr_boot = true; + +diff --git a/sound/soc/sof/intel/pci-lnl.c b/sound/soc/sof/intel/pci-lnl.c +index 1b12c280edb46..7ad7aa3c3461b 100644 +--- a/sound/soc/sof/intel/pci-lnl.c ++++ b/sound/soc/sof/intel/pci-lnl.c +@@ -35,6 +35,9 @@ static const struct sof_dev_desc lnl_desc = { + .default_fw_path = { + [SOF_INTEL_IPC4] = "intel/sof-ipc4/lnl", + }, ++ .default_lib_path = { ++ [SOF_IPC_TYPE_4] = "intel/sof-ipc4-lib/lnl", ++ }, + .default_tplg_path = { + [SOF_INTEL_IPC4] = "intel/sof-ace-tplg", + }, +diff --git a/sound/soc/tegra/tegra186_dspk.c b/sound/soc/tegra/tegra186_dspk.c +index a0ce7eb11de96..95bff466e8bdb 100644 +--- a/sound/soc/tegra/tegra186_dspk.c ++++ b/sound/soc/tegra/tegra186_dspk.c +@@ -1,8 +1,7 @@ + // SPDX-License-Identifier: GPL-2.0-only ++// SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved. + // + // tegra186_dspk.c - Tegra186 DSPK driver +-// +-// Copyright (c) 2020 NVIDIA CORPORATION. All rights reserved. 
+ + #include + #include +@@ -241,14 +240,14 @@ static int tegra186_dspk_hw_params(struct snd_pcm_substream *substream, + return -EINVAL; + } + +- cif_conf.client_bits = TEGRA_ACIF_BITS_24; +- + switch (params_format(params)) { + case SNDRV_PCM_FORMAT_S16_LE: + cif_conf.audio_bits = TEGRA_ACIF_BITS_16; ++ cif_conf.client_bits = TEGRA_ACIF_BITS_16; + break; + case SNDRV_PCM_FORMAT_S32_LE: + cif_conf.audio_bits = TEGRA_ACIF_BITS_32; ++ cif_conf.client_bits = TEGRA_ACIF_BITS_24; + break; + default: + dev_err(dev, "unsupported format!\n"); +diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c +index 7e7d665a5504a..a5c2cca38d01a 100644 +--- a/sound/soc/ti/davinci-mcasp.c ++++ b/sound/soc/ti/davinci-mcasp.c +@@ -2418,12 +2418,6 @@ static int davinci_mcasp_probe(struct platform_device *pdev) + + mcasp_reparent_fck(pdev); + +- ret = devm_snd_soc_register_component(&pdev->dev, &davinci_mcasp_component, +- &davinci_mcasp_dai[mcasp->op_mode], 1); +- +- if (ret != 0) +- goto err; +- + ret = davinci_mcasp_get_dma_type(mcasp); + switch (ret) { + case PCM_EDMA: +@@ -2450,6 +2444,12 @@ static int davinci_mcasp_probe(struct platform_device *pdev) + goto err; + } + ++ ret = devm_snd_soc_register_component(&pdev->dev, &davinci_mcasp_component, ++ &davinci_mcasp_dai[mcasp->op_mode], 1); ++ ++ if (ret != 0) ++ goto err; ++ + no_audio: + ret = davinci_mcasp_init_gpiochip(mcasp); + if (ret) { +diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c +index b67617b68e509..f4437015d43a7 100644 +--- a/sound/usb/line6/driver.c ++++ b/sound/usb/line6/driver.c +@@ -202,7 +202,7 @@ int line6_send_raw_message_async(struct usb_line6 *line6, const char *buffer, + struct urb *urb; + + /* create message: */ +- msg = kmalloc(sizeof(struct message), GFP_ATOMIC); ++ msg = kzalloc(sizeof(struct message), GFP_ATOMIC); + if (msg == NULL) + return -ENOMEM; + +@@ -688,7 +688,7 @@ static int line6_init_cap_control(struct usb_line6 *line6) + int ret; + + /* initialize USB buffers: 
*/ +- line6->buffer_listen = kmalloc(LINE6_BUFSIZE_LISTEN, GFP_KERNEL); ++ line6->buffer_listen = kzalloc(LINE6_BUFSIZE_LISTEN, GFP_KERNEL); + if (!line6->buffer_listen) + return -ENOMEM; + +@@ -697,7 +697,7 @@ static int line6_init_cap_control(struct usb_line6 *line6) + return -ENOMEM; + + if (line6->properties->capabilities & LINE6_CAP_CONTROL_MIDI) { +- line6->buffer_message = kmalloc(LINE6_MIDI_MESSAGE_MAXLEN, GFP_KERNEL); ++ line6->buffer_message = kzalloc(LINE6_MIDI_MESSAGE_MAXLEN, GFP_KERNEL); + if (!line6->buffer_message) + return -ENOMEM; + +diff --git a/tools/include/linux/kernel.h b/tools/include/linux/kernel.h +index 4b0673bf52c2e..07cfad817d539 100644 +--- a/tools/include/linux/kernel.h ++++ b/tools/include/linux/kernel.h +@@ -8,6 +8,7 @@ + #include + #include + #include ++#include + #include + #include + +diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h +index f3c82ab5b14cd..7d73da0980473 100644 +--- a/tools/include/linux/mm.h ++++ b/tools/include/linux/mm.h +@@ -37,4 +37,9 @@ static inline void totalram_pages_add(long count) + { + } + ++static inline int early_pfn_to_nid(unsigned long pfn) ++{ ++ return 0; ++} ++ + #endif +diff --git a/tools/include/linux/panic.h b/tools/include/linux/panic.h +new file mode 100644 +index 0000000000000..9c8f17a41ce8e +--- /dev/null ++++ b/tools/include/linux/panic.h +@@ -0,0 +1,19 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++#ifndef _TOOLS_LINUX_PANIC_H ++#define _TOOLS_LINUX_PANIC_H ++ ++#include ++#include ++#include ++ ++static inline void panic(const char *fmt, ...) ++{ ++ va_list argp; ++ ++ va_start(argp, fmt); ++ vfprintf(stderr, fmt, argp); ++ va_end(argp); ++ exit(-1); ++} ++ ++#endif +diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8 +index 8f08c3fd498d5..1ba6340d3b3da 100644 +--- a/tools/power/x86/turbostat/turbostat.8 ++++ b/tools/power/x86/turbostat/turbostat.8 +@@ -370,7 +370,7 @@ below the processor's base frequency. 
+ + Busy% = MPERF_delta/TSC_delta + +-Bzy_MHz = TSC_delta/APERF_delta/MPERF_delta/measurement_interval ++Bzy_MHz = TSC_delta*APERF_delta/MPERF_delta/measurement_interval + + Note that these calculations depend on TSC_delta, so they + are not reliable during intervals when TSC_MHz is not running at the base frequency. +diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c +index 785de89077de0..ffa964ddbacef 100644 +--- a/tools/power/x86/turbostat/turbostat.c ++++ b/tools/power/x86/turbostat/turbostat.c +@@ -53,6 +53,8 @@ + #define NAME_BYTES 20 + #define PATH_BYTES 128 + ++#define MAX_NOFILE 0x8000 ++ + enum counter_scope { SCOPE_CPU, SCOPE_CORE, SCOPE_PACKAGE }; + enum counter_type { COUNTER_ITEMS, COUNTER_CYCLES, COUNTER_SECONDS, COUNTER_USEC }; + enum counter_format { FORMAT_RAW, FORMAT_DELTA, FORMAT_PERCENT }; +@@ -1811,9 +1813,10 @@ int sum_counters(struct thread_data *t, struct core_data *c, struct pkg_data *p) + average.packages.rapl_dram_perf_status += p->rapl_dram_perf_status; + + for (i = 0, mp = sys.pp; mp; i++, mp = mp->next) { +- if (mp->format == FORMAT_RAW) +- continue; +- average.packages.counter[i] += p->counter[i]; ++ if ((mp->format == FORMAT_RAW) && (topo.num_packages == 0)) ++ average.packages.counter[i] = p->counter[i]; ++ else ++ average.packages.counter[i] += p->counter[i]; + } + return 0; + } +@@ -1966,7 +1969,7 @@ unsigned long long get_uncore_mhz(int package, int die) + { + char path[128]; + +- sprintf(path, "/sys/devices/system/cpu/intel_uncore_frequency/package_0%d_die_0%d/current_freq_khz", package, ++ sprintf(path, "/sys/devices/system/cpu/intel_uncore_frequency/package_%02d_die_%02d/current_freq_khz", package, + die); + + return (snapshot_sysfs_counter(path) / 1000); +@@ -5476,7 +5479,8 @@ void print_dev_latency(void) + + fd = open(path, O_RDONLY); + if (fd < 0) { +- warnx("capget(CAP_SYS_ADMIN) failed, try \"# setcap cap_sys_admin=ep %s\"", progname); ++ if (debug) ++ warnx("Read %s failed", 
path); + return; + } + +@@ -5515,6 +5519,7 @@ void process_cpuid() + unsigned int eax, ebx, ecx, edx; + unsigned int fms, family, model, stepping, ecx_flags, edx_flags; + unsigned long long ucode_patch = 0; ++ bool ucode_patch_valid = false; + + eax = ebx = ecx = edx = 0; + +@@ -5544,6 +5549,8 @@ void process_cpuid() + + if (get_msr(sched_getcpu(), MSR_IA32_UCODE_REV, &ucode_patch)) + warnx("get_msr(UCODE)"); ++ else ++ ucode_patch_valid = true; + + /* + * check max extended function levels of CPUID. +@@ -5554,9 +5561,12 @@ void process_cpuid() + __cpuid(0x80000000, max_extended_level, ebx, ecx, edx); + + if (!quiet) { +- fprintf(outf, "CPUID(1): family:model:stepping 0x%x:%x:%x (%d:%d:%d) microcode 0x%x\n", +- family, model, stepping, family, model, stepping, +- (unsigned int)((ucode_patch >> 32) & 0xFFFFFFFF)); ++ fprintf(outf, "CPUID(1): family:model:stepping 0x%x:%x:%x (%d:%d:%d)", ++ family, model, stepping, family, model, stepping); ++ if (ucode_patch_valid) ++ fprintf(outf, " microcode 0x%x", (unsigned int)((ucode_patch >> 32) & 0xFFFFFFFF)); ++ fputc('\n', outf); ++ + fprintf(outf, "CPUID(0x80000000): max_extended_levels: 0x%x\n", max_extended_level); + fprintf(outf, "CPUID(1): %s %s %s %s %s %s %s %s %s %s\n", + ecx_flags & (1 << 0) ? 
"SSE3" : "-", +@@ -6718,6 +6728,22 @@ void cmdline(int argc, char **argv) + } + } + ++void set_rlimit(void) ++{ ++ struct rlimit limit; ++ ++ if (getrlimit(RLIMIT_NOFILE, &limit) < 0) ++ err(1, "Failed to get rlimit"); ++ ++ if (limit.rlim_max < MAX_NOFILE) ++ limit.rlim_max = MAX_NOFILE; ++ if (limit.rlim_cur < MAX_NOFILE) ++ limit.rlim_cur = MAX_NOFILE; ++ ++ if (setrlimit(RLIMIT_NOFILE, &limit) < 0) ++ err(1, "Failed to set rlimit"); ++} ++ + int main(int argc, char **argv) + { + outf = stderr; +@@ -6730,6 +6756,9 @@ int main(int argc, char **argv) + + probe_sysfs(); + ++ if (!getuid()) ++ set_rlimit(); ++ + turbostat_init(); + + msr_sum_record(); +diff --git a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c +index d2d9e965eba59..f79815b7e951b 100644 +--- a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c ++++ b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c +@@ -2,6 +2,7 @@ + /* Copyright (c) 2021 Facebook */ + + #include ++#include + #include + #include "bloom_filter_map.skel.h" + +@@ -21,6 +22,11 @@ static void test_fail_cases(void) + if (!ASSERT_LT(fd, 0, "bpf_map_create bloom filter invalid value size 0")) + close(fd); + ++ /* Invalid value size: too big */ ++ fd = bpf_map_create(BPF_MAP_TYPE_BLOOM_FILTER, NULL, 0, INT32_MAX, 100, NULL); ++ if (!ASSERT_LT(fd, 0, "bpf_map_create bloom filter invalid value too large")) ++ close(fd); ++ + /* Invalid max entries size */ + fd = bpf_map_create(BPF_MAP_TYPE_BLOOM_FILTER, NULL, 0, sizeof(value), 0, NULL); + if (!ASSERT_LT(fd, 0, "bpf_map_create bloom filter invalid max entries size")) +diff --git a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c +index dbe06aeaa2b27..af3c31f82a8ae 100644 +--- a/tools/testing/selftests/bpf/prog_tests/tc_redirect.c ++++ b/tools/testing/selftests/bpf/prog_tests/tc_redirect.c +@@ -24,7 +24,6 @@ + + #include "test_progs.h" + #include 
"network_helpers.h" +-#include "netlink_helpers.h" + #include "test_tc_neigh_fib.skel.h" + #include "test_tc_neigh.skel.h" + #include "test_tc_peer.skel.h" +@@ -113,7 +112,6 @@ static void netns_setup_namespaces_nofail(const char *verb) + + enum dev_mode { + MODE_VETH, +- MODE_NETKIT, + }; + + struct netns_setup_result { +@@ -144,52 +142,11 @@ static int get_ifaddr(const char *name, char *ifaddr) + return 0; + } + +-static int create_netkit(int mode, char *prim, char *peer) +-{ +- struct rtattr *linkinfo, *data, *peer_info; +- struct rtnl_handle rth = { .fd = -1 }; +- const char *type = "netkit"; +- struct { +- struct nlmsghdr n; +- struct ifinfomsg i; +- char buf[1024]; +- } req = {}; +- int err; +- +- err = rtnl_open(&rth, 0); +- if (!ASSERT_OK(err, "open_rtnetlink")) +- return err; +- +- memset(&req, 0, sizeof(req)); +- req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg)); +- req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL; +- req.n.nlmsg_type = RTM_NEWLINK; +- req.i.ifi_family = AF_UNSPEC; +- +- addattr_l(&req.n, sizeof(req), IFLA_IFNAME, prim, strlen(prim)); +- linkinfo = addattr_nest(&req.n, sizeof(req), IFLA_LINKINFO); +- addattr_l(&req.n, sizeof(req), IFLA_INFO_KIND, type, strlen(type)); +- data = addattr_nest(&req.n, sizeof(req), IFLA_INFO_DATA); +- addattr32(&req.n, sizeof(req), IFLA_NETKIT_MODE, mode); +- peer_info = addattr_nest(&req.n, sizeof(req), IFLA_NETKIT_PEER_INFO); +- req.n.nlmsg_len += sizeof(struct ifinfomsg); +- addattr_l(&req.n, sizeof(req), IFLA_IFNAME, peer, strlen(peer)); +- addattr_nest_end(&req.n, peer_info); +- addattr_nest_end(&req.n, data); +- addattr_nest_end(&req.n, linkinfo); +- +- err = rtnl_talk(&rth, &req.n, NULL); +- ASSERT_OK(err, "talk_rtnetlink"); +- rtnl_close(&rth); +- return err; +-} +- + static int netns_setup_links_and_routes(struct netns_setup_result *result) + { + struct nstoken *nstoken = NULL; + char src_fwd_addr[IFADDR_STR_LEN+1] = {}; + char src_addr[IFADDR_STR_LEN + 1] = {}; +- int err; + + if 
(result->dev_mode == MODE_VETH) { + SYS(fail, "ip link add src type veth peer name src_fwd"); +@@ -197,13 +154,6 @@ static int netns_setup_links_and_routes(struct netns_setup_result *result) + + SYS(fail, "ip link set dst_fwd address " MAC_DST_FWD); + SYS(fail, "ip link set dst address " MAC_DST); +- } else if (result->dev_mode == MODE_NETKIT) { +- err = create_netkit(NETKIT_L3, "src", "src_fwd"); +- if (!ASSERT_OK(err, "create_ifindex_src")) +- goto fail; +- err = create_netkit(NETKIT_L3, "dst", "dst_fwd"); +- if (!ASSERT_OK(err, "create_ifindex_dst")) +- goto fail; + } + + if (get_ifaddr("src_fwd", src_fwd_addr)) +@@ -1266,9 +1216,7 @@ static void *test_tc_redirect_run_tests(void *arg) + netns_setup_namespaces_nofail("delete"); + + RUN_TEST(tc_redirect_peer, MODE_VETH); +- RUN_TEST(tc_redirect_peer, MODE_NETKIT); + RUN_TEST(tc_redirect_peer_l3, MODE_VETH); +- RUN_TEST(tc_redirect_peer_l3, MODE_NETKIT); + RUN_TEST(tc_redirect_neigh, MODE_VETH); + RUN_TEST(tc_redirect_neigh_fib, MODE_VETH); + RUN_TEST(tc_redirect_dtime, MODE_VETH); +diff --git a/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc b/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc +index 2de7c61d1ae30..3f74c09c56b62 100644 +--- a/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc ++++ b/tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc +@@ -24,7 +24,7 @@ echo 0 > events/enable + echo "Get the most frequently calling function" + sample_events + +-target_func=`cut -d: -f3 trace | sed 's/call_site=\([^+]*\)+0x.*/\1/' | sort | uniq -c | sort | tail -n 1 | sed 's/^[ 0-9]*//'` ++target_func=`cat trace | grep -o 'call_site=\([^+]*\)' | sed 's/call_site=//' | sort | uniq -c | sort | tail -n 1 | sed 's/^[ 0-9]*//'` + if [ -z "$target_func" ]; then + exit_fail + fi +diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile +index 6a9fc5693145f..292359a542429 100644 +--- 
a/tools/testing/selftests/mm/Makefile ++++ b/tools/testing/selftests/mm/Makefile +@@ -12,7 +12,7 @@ uname_M := $(shell uname -m 2>/dev/null || echo not) + else + uname_M := $(shell echo $(CROSS_COMPILE) | grep -o '^[a-z0-9]\+') + endif +-ARCH ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/ppc64/') ++ARCH ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/powerpc/') + endif + + # Without this, failed build products remain, with up-to-date timestamps, +@@ -95,13 +95,13 @@ TEST_GEN_FILES += $(BINARIES_64) + endif + else + +-ifneq (,$(findstring $(ARCH),ppc64)) ++ifneq (,$(findstring $(ARCH),powerpc)) + TEST_GEN_FILES += protection_keys + endif + + endif + +-ifneq (,$(filter $(ARCH),arm64 ia64 mips64 parisc64 ppc64 riscv64 s390x sparc64 x86_64)) ++ifneq (,$(filter $(ARCH),arm64 ia64 mips64 parisc64 powerpc riscv64 s390x sparc64 x86_64)) + TEST_GEN_FILES += va_high_addr_switch + TEST_GEN_FILES += virtual_address_range + TEST_GEN_FILES += write_to_hugetlbfs +diff --git a/tools/testing/selftests/net/test_bridge_neigh_suppress.sh b/tools/testing/selftests/net/test_bridge_neigh_suppress.sh +index d80f2cd87614c..02b986c9c247d 100755 +--- a/tools/testing/selftests/net/test_bridge_neigh_suppress.sh ++++ b/tools/testing/selftests/net/test_bridge_neigh_suppress.sh +@@ -45,9 +45,8 @@ + # | sw1 | | sw2 | + # +------------------------------------+ +------------------------------------+ + ++source lib.sh + ret=0 +-# Kselftest framework requirement - SKIP code is 4. +-ksft_skip=4 + + # All tests in this script. Can be overridden with -t option. 
+ TESTS=" +@@ -140,9 +139,6 @@ setup_topo_ns() + { + local ns=$1; shift + +- ip netns add $ns +- ip -n $ns link set dev lo up +- + ip netns exec $ns sysctl -qw net.ipv6.conf.all.keep_addr_on_down=1 + ip netns exec $ns sysctl -qw net.ipv6.conf.default.ignore_routes_with_linkdown=1 + ip netns exec $ns sysctl -qw net.ipv6.conf.all.accept_dad=0 +@@ -153,21 +149,14 @@ setup_topo() + { + local ns + +- for ns in h1 h2 sw1 sw2; do ++ setup_ns h1 h2 sw1 sw2 ++ for ns in $h1 $h2 $sw1 $sw2; do + setup_topo_ns $ns + done + +- ip link add name veth0 type veth peer name veth1 +- ip link set dev veth0 netns h1 name eth0 +- ip link set dev veth1 netns sw1 name swp1 +- +- ip link add name veth0 type veth peer name veth1 +- ip link set dev veth0 netns sw1 name veth0 +- ip link set dev veth1 netns sw2 name veth0 +- +- ip link add name veth0 type veth peer name veth1 +- ip link set dev veth0 netns h2 name eth0 +- ip link set dev veth1 netns sw2 name swp1 ++ ip -n $h1 link add name eth0 type veth peer name swp1 netns $sw1 ++ ip -n $sw1 link add name veth0 type veth peer name veth0 netns $sw2 ++ ip -n $h2 link add name eth0 type veth peer name swp1 netns $sw2 + } + + setup_host_common() +@@ -190,7 +179,7 @@ setup_host_common() + + setup_h1() + { +- local ns=h1 ++ local ns=$h1 + local v4addr1=192.0.2.1/28 + local v4addr2=192.0.2.17/28 + local v6addr1=2001:db8:1::1/64 +@@ -201,7 +190,7 @@ setup_h1() + + setup_h2() + { +- local ns=h2 ++ local ns=$h2 + local v4addr1=192.0.2.2/28 + local v4addr2=192.0.2.18/28 + local v6addr1=2001:db8:1::2/64 +@@ -254,7 +243,7 @@ setup_sw_common() + + setup_sw1() + { +- local ns=sw1 ++ local ns=$sw1 + local local_addr=192.0.2.33 + local remote_addr=192.0.2.34 + local veth_addr=192.0.2.49 +@@ -265,7 +254,7 @@ setup_sw1() + + setup_sw2() + { +- local ns=sw2 ++ local ns=$sw2 + local local_addr=192.0.2.34 + local remote_addr=192.0.2.33 + local veth_addr=192.0.2.50 +@@ -291,11 +280,7 @@ setup() + + cleanup() + { +- local ns +- +- for ns in h1 h2 sw1 sw2; do +- ip 
netns del $ns &> /dev/null +- done ++ cleanup_ns $h1 $h2 $sw1 $sw2 + } + + ################################################################################ +@@ -312,80 +297,80 @@ neigh_suppress_arp_common() + echo "Per-port ARP suppression - VLAN $vid" + echo "----------------------------------" + +- run_cmd "tc -n sw1 qdisc replace dev vx0 clsact" +- run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 101 proto 0x0806 flower indev swp1 arp_tip $tip arp_sip $sip arp_op request action pass" ++ run_cmd "tc -n $sw1 qdisc replace dev vx0 clsact" ++ run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 101 proto 0x0806 flower indev swp1 arp_tip $tip arp_sip $sip arp_op request action pass" + + # Initial state - check that ARP requests are not suppressed and that + # ARP replies are received. +- run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" ++ run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" + log_test $? 0 "arping" +- tc_check_packets sw1 "dev vx0 egress" 101 1 ++ tc_check_packets $sw1 "dev vx0 egress" 101 1 + log_test $? 0 "ARP suppression" + + # Enable neighbor suppression and check that nothing changes compared + # to the initial state. +- run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress on" +- run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress on\"" ++ run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress on" ++ run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress on\"" + log_test $? 0 "\"neigh_suppress\" is on" + +- run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" ++ run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" + log_test $? 0 "arping" +- tc_check_packets sw1 "dev vx0 egress" 101 2 ++ tc_check_packets $sw1 "dev vx0 egress" 101 2 + log_test $? 0 "ARP suppression" + + # Install an FDB entry for the remote host and check that nothing + # changes compared to the initial state. 
+- h2_mac=$(ip -n h2 -j -p link show eth0.$vid | jq -r '.[]["address"]') +- run_cmd "bridge -n sw1 fdb replace $h2_mac dev vx0 master static vlan $vid" ++ h2_mac=$(ip -n $h2 -j -p link show eth0.$vid | jq -r '.[]["address"]') ++ run_cmd "bridge -n $sw1 fdb replace $h2_mac dev vx0 master static vlan $vid" + log_test $? 0 "FDB entry installation" + +- run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" ++ run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" + log_test $? 0 "arping" +- tc_check_packets sw1 "dev vx0 egress" 101 3 ++ tc_check_packets $sw1 "dev vx0 egress" 101 3 + log_test $? 0 "ARP suppression" + + # Install a neighbor on the matching SVI interface and check that ARP + # requests are suppressed. +- run_cmd "ip -n sw1 neigh replace $tip lladdr $h2_mac nud permanent dev br0.$vid" ++ run_cmd "ip -n $sw1 neigh replace $tip lladdr $h2_mac nud permanent dev br0.$vid" + log_test $? 0 "Neighbor entry installation" + +- run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" ++ run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" + log_test $? 0 "arping" +- tc_check_packets sw1 "dev vx0 egress" 101 3 ++ tc_check_packets $sw1 "dev vx0 egress" 101 3 + log_test $? 0 "ARP suppression" + + # Take the second host down and check that ARP requests are suppressed + # and that ARP replies are received. +- run_cmd "ip -n h2 link set dev eth0.$vid down" ++ run_cmd "ip -n $h2 link set dev eth0.$vid down" + log_test $? 0 "H2 down" + +- run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" ++ run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" + log_test $? 0 "arping" +- tc_check_packets sw1 "dev vx0 egress" 101 3 ++ tc_check_packets $sw1 "dev vx0 egress" 101 3 + log_test $? 0 "ARP suppression" + +- run_cmd "ip -n h2 link set dev eth0.$vid up" ++ run_cmd "ip -n $h2 link set dev eth0.$vid up" + log_test $? 
0 "H2 up" + + # Disable neighbor suppression and check that ARP requests are no + # longer suppressed. +- run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress off" +- run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress off\"" ++ run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress off" ++ run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress off\"" + log_test $? 0 "\"neigh_suppress\" is off" + +- run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" ++ run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" + log_test $? 0 "arping" +- tc_check_packets sw1 "dev vx0 egress" 101 4 ++ tc_check_packets $sw1 "dev vx0 egress" 101 4 + log_test $? 0 "ARP suppression" + + # Take the second host down and check that ARP requests are not + # suppressed and that ARP replies are not received. +- run_cmd "ip -n h2 link set dev eth0.$vid down" ++ run_cmd "ip -n $h2 link set dev eth0.$vid down" + log_test $? 0 "H2 down" + +- run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" ++ run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip -I eth0.$vid $tip" + log_test $? 1 "arping" +- tc_check_packets sw1 "dev vx0 egress" 101 5 ++ tc_check_packets $sw1 "dev vx0 egress" 101 5 + log_test $? 
0 "ARP suppression" + } + +@@ -415,80 +400,80 @@ neigh_suppress_ns_common() + echo "Per-port NS suppression - VLAN $vid" + echo "---------------------------------" + +- run_cmd "tc -n sw1 qdisc replace dev vx0 clsact" +- run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 101 proto ipv6 flower indev swp1 ip_proto icmpv6 dst_ip $maddr src_ip $saddr type 135 code 0 action pass" ++ run_cmd "tc -n $sw1 qdisc replace dev vx0 clsact" ++ run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 101 proto ipv6 flower indev swp1 ip_proto icmpv6 dst_ip $maddr src_ip $saddr type 135 code 0 action pass" + + # Initial state - check that NS messages are not suppressed and that ND + # messages are received. +- run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid" ++ run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid" + log_test $? 0 "ndisc6" +- tc_check_packets sw1 "dev vx0 egress" 101 1 ++ tc_check_packets $sw1 "dev vx0 egress" 101 1 + log_test $? 0 "NS suppression" + + # Enable neighbor suppression and check that nothing changes compared + # to the initial state. +- run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress on" +- run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress on\"" ++ run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress on" ++ run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress on\"" + log_test $? 0 "\"neigh_suppress\" is on" + +- run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid" ++ run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid" + log_test $? 0 "ndisc6" +- tc_check_packets sw1 "dev vx0 egress" 101 2 ++ tc_check_packets $sw1 "dev vx0 egress" 101 2 + log_test $? 0 "NS suppression" + + # Install an FDB entry for the remote host and check that nothing + # changes compared to the initial state. 
+-	h2_mac=$(ip -n h2 -j -p link show eth0.$vid | jq -r '.[]["address"]')
+-	run_cmd "bridge -n sw1 fdb replace $h2_mac dev vx0 master static vlan $vid"
++	h2_mac=$(ip -n $h2 -j -p link show eth0.$vid | jq -r '.[]["address"]')
++	run_cmd "bridge -n $sw1 fdb replace $h2_mac dev vx0 master static vlan $vid"
+ 	log_test $? 0 "FDB entry installation"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
+ 	log_test $? 0 "ndisc6"
+-	tc_check_packets sw1 "dev vx0 egress" 101 3
++	tc_check_packets $sw1 "dev vx0 egress" 101 3
+ 	log_test $? 0 "NS suppression"
+
+ 	# Install a neighbor on the matching SVI interface and check that NS
+ 	# messages are suppressed.
+-	run_cmd "ip -n sw1 neigh replace $daddr lladdr $h2_mac nud permanent dev br0.$vid"
++	run_cmd "ip -n $sw1 neigh replace $daddr lladdr $h2_mac nud permanent dev br0.$vid"
+ 	log_test $? 0 "Neighbor entry installation"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
+ 	log_test $? 0 "ndisc6"
+-	tc_check_packets sw1 "dev vx0 egress" 101 3
++	tc_check_packets $sw1 "dev vx0 egress" 101 3
+ 	log_test $? 0 "NS suppression"
+
+ 	# Take the second host down and check that NS messages are suppressed
+ 	# and that ND messages are received.
+-	run_cmd "ip -n h2 link set dev eth0.$vid down"
++	run_cmd "ip -n $h2 link set dev eth0.$vid down"
+ 	log_test $? 0 "H2 down"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
+ 	log_test $? 0 "ndisc6"
+-	tc_check_packets sw1 "dev vx0 egress" 101 3
++	tc_check_packets $sw1 "dev vx0 egress" 101 3
+ 	log_test $? 0 "NS suppression"
+
+-	run_cmd "ip -n h2 link set dev eth0.$vid up"
++	run_cmd "ip -n $h2 link set dev eth0.$vid up"
+ 	log_test $? 0 "H2 up"
+
+ 	# Disable neighbor suppression and check that NS messages are no longer
+ 	# suppressed.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress off"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress off\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress off"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress off\""
+ 	log_test $? 0 "\"neigh_suppress\" is off"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
+ 	log_test $? 0 "ndisc6"
+-	tc_check_packets sw1 "dev vx0 egress" 101 4
++	tc_check_packets $sw1 "dev vx0 egress" 101 4
+ 	log_test $? 0 "NS suppression"
+
+ 	# Take the second host down and check that NS messages are not
+ 	# suppressed and that ND messages are not received.
+-	run_cmd "ip -n h2 link set dev eth0.$vid down"
++	run_cmd "ip -n $h2 link set dev eth0.$vid down"
+ 	log_test $? 0 "H2 down"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr -w 5000 $daddr eth0.$vid"
+ 	log_test $? 2 "ndisc6"
+-	tc_check_packets sw1 "dev vx0 egress" 101 5
++	tc_check_packets $sw1 "dev vx0 egress" 101 5
+ 	log_test $? 0 "NS suppression"
+ }
+
+@@ -524,118 +509,118 @@ neigh_vlan_suppress_arp()
+ 	echo "Per-{Port, VLAN} ARP suppression"
+ 	echo "--------------------------------"
+
+-	run_cmd "tc -n sw1 qdisc replace dev vx0 clsact"
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 101 proto 0x0806 flower indev swp1 arp_tip $tip1 arp_sip $sip1 arp_op request action pass"
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 102 proto 0x0806 flower indev swp1 arp_tip $tip2 arp_sip $sip2 arp_op request action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev vx0 clsact"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 101 proto 0x0806 flower indev swp1 arp_tip $tip1 arp_sip $sip1 arp_op request action pass"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 102 proto 0x0806 flower indev swp1 arp_tip $tip2 arp_sip $sip2 arp_op request action pass"
+
+-	h2_mac1=$(ip -n h2 -j -p link show eth0.$vid1 | jq -r '.[]["address"]')
+-	h2_mac2=$(ip -n h2 -j -p link show eth0.$vid2 | jq -r '.[]["address"]')
+-	run_cmd "bridge -n sw1 fdb replace $h2_mac1 dev vx0 master static vlan $vid1"
+-	run_cmd "bridge -n sw1 fdb replace $h2_mac2 dev vx0 master static vlan $vid2"
+-	run_cmd "ip -n sw1 neigh replace $tip1 lladdr $h2_mac1 nud permanent dev br0.$vid1"
+-	run_cmd "ip -n sw1 neigh replace $tip2 lladdr $h2_mac2 nud permanent dev br0.$vid2"
++	h2_mac1=$(ip -n $h2 -j -p link show eth0.$vid1 | jq -r '.[]["address"]')
++	h2_mac2=$(ip -n $h2 -j -p link show eth0.$vid2 | jq -r '.[]["address"]')
++	run_cmd "bridge -n $sw1 fdb replace $h2_mac1 dev vx0 master static vlan $vid1"
++	run_cmd "bridge -n $sw1 fdb replace $h2_mac2 dev vx0 master static vlan $vid2"
++	run_cmd "ip -n $sw1 neigh replace $tip1 lladdr $h2_mac1 nud permanent dev br0.$vid1"
++	run_cmd "ip -n $sw1 neigh replace $tip2 lladdr $h2_mac2 nud permanent dev br0.$vid2"
+
+ 	# Enable per-{Port, VLAN} neighbor suppression and check that ARP
+ 	# requests are not suppressed and that ARP replies are received.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_vlan_suppress on"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress on\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_vlan_suppress on"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress on\""
+ 	log_test $? 0 "\"neigh_vlan_suppress\" is on"
+
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
+ 	log_test $? 0 "arping (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
+ 	log_test $? 0 "arping (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "ARP suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 1
++	tc_check_packets $sw1 "dev vx0 egress" 102 1
+ 	log_test $? 0 "ARP suppression (VLAN $vid2)"
+
+ 	# Enable neighbor suppression on VLAN 10 and check that only on this
+ 	# VLAN ARP requests are suppressed.
+-	run_cmd "bridge -n sw1 vlan set vid $vid1 dev vx0 neigh_suppress on"
+-	run_cmd "bridge -n sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress on\""
++	run_cmd "bridge -n $sw1 vlan set vid $vid1 dev vx0 neigh_suppress on"
++	run_cmd "bridge -n $sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress on\""
+ 	log_test $? 0 "\"neigh_suppress\" is on (VLAN $vid1)"
+-	run_cmd "bridge -n sw1 -d vlan show dev vx0 vid $vid2 | grep \"neigh_suppress off\""
++	run_cmd "bridge -n $sw1 -d vlan show dev vx0 vid $vid2 | grep \"neigh_suppress off\""
+ 	log_test $? 0 "\"neigh_suppress\" is off (VLAN $vid2)"
+
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
+ 	log_test $? 0 "arping (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
+ 	log_test $? 0 "arping (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "ARP suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 2
++	tc_check_packets $sw1 "dev vx0 egress" 102 2
+ 	log_test $? 0 "ARP suppression (VLAN $vid2)"
+
+ 	# Enable neighbor suppression on the port and check that it has no
+ 	# effect compared to previous state.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress on"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress on"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
+ 	log_test $? 0 "\"neigh_suppress\" is on"
+
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
+ 	log_test $? 0 "arping (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
+ 	log_test $? 0 "arping (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "ARP suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 3
++	tc_check_packets $sw1 "dev vx0 egress" 102 3
+ 	log_test $? 0 "ARP suppression (VLAN $vid2)"
+
+ 	# Disable neighbor suppression on the port and check that it has no
+ 	# effect compared to previous state.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress off"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress off\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress off"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress off\""
+ 	log_test $? 0 "\"neigh_suppress\" is off"
+
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
+ 	log_test $? 0 "arping (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
+ 	log_test $? 0 "arping (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "ARP suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 4
++	tc_check_packets $sw1 "dev vx0 egress" 102 4
+ 	log_test $? 0 "ARP suppression (VLAN $vid2)"
+
+ 	# Disable neighbor suppression on VLAN 10 and check that ARP requests
+ 	# are no longer suppressed on this VLAN.
+-	run_cmd "bridge -n sw1 vlan set vid $vid1 dev vx0 neigh_suppress off"
+-	run_cmd "bridge -n sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress off\""
++	run_cmd "bridge -n $sw1 vlan set vid $vid1 dev vx0 neigh_suppress off"
++	run_cmd "bridge -n $sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress off\""
+ 	log_test $? 0 "\"neigh_suppress\" is off (VLAN $vid1)"
+
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
+ 	log_test $? 0 "arping (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
+ 	log_test $? 0 "arping (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "ARP suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 5
++	tc_check_packets $sw1 "dev vx0 egress" 102 5
+ 	log_test $? 0 "ARP suppression (VLAN $vid2)"
+
+ 	# Disable per-{Port, VLAN} neighbor suppression, enable neighbor
+ 	# suppression on the port and check that on both VLANs ARP requests are
+ 	# suppressed.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_vlan_suppress off"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress off\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_vlan_suppress off"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress off\""
+ 	log_test $? 0 "\"neigh_vlan_suppress\" is off"
+
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress on"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress on"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
+ 	log_test $? 0 "\"neigh_suppress\" is on"
+
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip1 -I eth0.$vid1 $tip1"
+ 	log_test $? 0 "arping (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
++	run_cmd "ip netns exec $h1 arping -q -b -c 1 -w 5 -s $sip2 -I eth0.$vid2 $tip2"
+ 	log_test $? 0 "arping (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "ARP suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 5
++	tc_check_packets $sw1 "dev vx0 egress" 102 5
+ 	log_test $? 0 "ARP suppression (VLAN $vid2)"
+ }
+
+@@ -655,118 +640,118 @@ neigh_vlan_suppress_ns()
+ 	echo "Per-{Port, VLAN} NS suppression"
+ 	echo "-------------------------------"
+
+-	run_cmd "tc -n sw1 qdisc replace dev vx0 clsact"
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 101 proto ipv6 flower indev swp1 ip_proto icmpv6 dst_ip $maddr src_ip $saddr1 type 135 code 0 action pass"
+-	run_cmd "tc -n sw1 filter replace dev vx0 egress pref 1 handle 102 proto ipv6 flower indev swp1 ip_proto icmpv6 dst_ip $maddr src_ip $saddr2 type 135 code 0 action pass"
++	run_cmd "tc -n $sw1 qdisc replace dev vx0 clsact"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 101 proto ipv6 flower indev swp1 ip_proto icmpv6 dst_ip $maddr src_ip $saddr1 type 135 code 0 action pass"
++	run_cmd "tc -n $sw1 filter replace dev vx0 egress pref 1 handle 102 proto ipv6 flower indev swp1 ip_proto icmpv6 dst_ip $maddr src_ip $saddr2 type 135 code 0 action pass"
+
+-	h2_mac1=$(ip -n h2 -j -p link show eth0.$vid1 | jq -r '.[]["address"]')
+-	h2_mac2=$(ip -n h2 -j -p link show eth0.$vid2 | jq -r '.[]["address"]')
+-	run_cmd "bridge -n sw1 fdb replace $h2_mac1 dev vx0 master static vlan $vid1"
+-	run_cmd "bridge -n sw1 fdb replace $h2_mac2 dev vx0 master static vlan $vid2"
+-	run_cmd "ip -n sw1 neigh replace $daddr1 lladdr $h2_mac1 nud permanent dev br0.$vid1"
+-	run_cmd "ip -n sw1 neigh replace $daddr2 lladdr $h2_mac2 nud permanent dev br0.$vid2"
++	h2_mac1=$(ip -n $h2 -j -p link show eth0.$vid1 | jq -r '.[]["address"]')
++	h2_mac2=$(ip -n $h2 -j -p link show eth0.$vid2 | jq -r '.[]["address"]')
++	run_cmd "bridge -n $sw1 fdb replace $h2_mac1 dev vx0 master static vlan $vid1"
++	run_cmd "bridge -n $sw1 fdb replace $h2_mac2 dev vx0 master static vlan $vid2"
++	run_cmd "ip -n $sw1 neigh replace $daddr1 lladdr $h2_mac1 nud permanent dev br0.$vid1"
++	run_cmd "ip -n $sw1 neigh replace $daddr2 lladdr $h2_mac2 nud permanent dev br0.$vid2"
+
+ 	# Enable per-{Port, VLAN} neighbor suppression and check that NS
+ 	# messages are not suppressed and that ND messages are received.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_vlan_suppress on"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress on\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_vlan_suppress on"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress on\""
+ 	log_test $? 0 "\"neigh_vlan_suppress\" is on"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
+ 	log_test $? 0 "ndisc6 (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
+ 	log_test $? 0 "ndisc6 (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "NS suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 1
++	tc_check_packets $sw1 "dev vx0 egress" 102 1
+ 	log_test $? 0 "NS suppression (VLAN $vid2)"
+
+ 	# Enable neighbor suppression on VLAN 10 and check that only on this
+ 	# VLAN NS messages are suppressed.
+-	run_cmd "bridge -n sw1 vlan set vid $vid1 dev vx0 neigh_suppress on"
+-	run_cmd "bridge -n sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress on\""
++	run_cmd "bridge -n $sw1 vlan set vid $vid1 dev vx0 neigh_suppress on"
++	run_cmd "bridge -n $sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress on\""
+ 	log_test $? 0 "\"neigh_suppress\" is on (VLAN $vid1)"
+-	run_cmd "bridge -n sw1 -d vlan show dev vx0 vid $vid2 | grep \"neigh_suppress off\""
++	run_cmd "bridge -n $sw1 -d vlan show dev vx0 vid $vid2 | grep \"neigh_suppress off\""
+ 	log_test $? 0 "\"neigh_suppress\" is off (VLAN $vid2)"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
+ 	log_test $? 0 "ndisc6 (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
+ 	log_test $? 0 "ndisc6 (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "NS suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 2
++	tc_check_packets $sw1 "dev vx0 egress" 102 2
+ 	log_test $? 0 "NS suppression (VLAN $vid2)"
+
+ 	# Enable neighbor suppression on the port and check that it has no
+ 	# effect compared to previous state.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress on"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress on"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
+ 	log_test $? 0 "\"neigh_suppress\" is on"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
+ 	log_test $? 0 "ndisc6 (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
+ 	log_test $? 0 "ndisc6 (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "NS suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 3
++	tc_check_packets $sw1 "dev vx0 egress" 102 3
+ 	log_test $? 0 "NS suppression (VLAN $vid2)"
+
+ 	# Disable neighbor suppression on the port and check that it has no
+ 	# effect compared to previous state.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress off"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress off\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress off"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress off\""
+ 	log_test $? 0 "\"neigh_suppress\" is off"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
+ 	log_test $? 0 "ndisc6 (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
+ 	log_test $? 0 "ndisc6 (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 1
++	tc_check_packets $sw1 "dev vx0 egress" 101 1
+ 	log_test $? 0 "NS suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 4
++	tc_check_packets $sw1 "dev vx0 egress" 102 4
+ 	log_test $? 0 "NS suppression (VLAN $vid2)"
+
+ 	# Disable neighbor suppression on VLAN 10 and check that NS messages
+ 	# are no longer suppressed on this VLAN.
+-	run_cmd "bridge -n sw1 vlan set vid $vid1 dev vx0 neigh_suppress off"
+-	run_cmd "bridge -n sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress off\""
++	run_cmd "bridge -n $sw1 vlan set vid $vid1 dev vx0 neigh_suppress off"
++	run_cmd "bridge -n $sw1 -d vlan show dev vx0 vid $vid1 | grep \"neigh_suppress off\""
+ 	log_test $? 0 "\"neigh_suppress\" is off (VLAN $vid1)"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
+ 	log_test $? 0 "ndisc6 (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
+ 	log_test $? 0 "ndisc6 (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "NS suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 5
++	tc_check_packets $sw1 "dev vx0 egress" 102 5
+ 	log_test $? 0 "NS suppression (VLAN $vid2)"
+
+ 	# Disable per-{Port, VLAN} neighbor suppression, enable neighbor
+ 	# suppression on the port and check that on both VLANs NS messages are
+ 	# suppressed.
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_vlan_suppress off"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress off\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_vlan_suppress off"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_vlan_suppress off\""
+ 	log_test $? 0 "\"neigh_vlan_suppress\" is off"
+
+-	run_cmd "bridge -n sw1 link set dev vx0 neigh_suppress on"
+-	run_cmd "bridge -n sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
++	run_cmd "bridge -n $sw1 link set dev vx0 neigh_suppress on"
++	run_cmd "bridge -n $sw1 -d link show dev vx0 | grep \"neigh_suppress on\""
+ 	log_test $? 0 "\"neigh_suppress\" is on"
+
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr1 -w 5000 $daddr1 eth0.$vid1"
+ 	log_test $? 0 "ndisc6 (VLAN $vid1)"
+-	run_cmd "ip netns exec h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
++	run_cmd "ip netns exec $h1 ndisc6 -q -r 1 -s $saddr2 -w 5000 $daddr2 eth0.$vid2"
+ 	log_test $? 0 "ndisc6 (VLAN $vid2)"
+
+-	tc_check_packets sw1 "dev vx0 egress" 101 2
++	tc_check_packets $sw1 "dev vx0 egress" 101 2
+ 	log_test $? 0 "NS suppression (VLAN $vid1)"
+-	tc_check_packets sw1 "dev vx0 egress" 102 5
++	tc_check_packets $sw1 "dev vx0 egress" 102 5
+ 	log_test $? 0 "NS suppression (VLAN $vid2)"
+ }
+
+diff --git a/tools/testing/selftests/timers/valid-adjtimex.c b/tools/testing/selftests/timers/valid-adjtimex.c
+index 48b9a803235a8..d13ebde203221 100644
+--- a/tools/testing/selftests/timers/valid-adjtimex.c
++++ b/tools/testing/selftests/timers/valid-adjtimex.c
+@@ -21,9 +21,6 @@
+  *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+  *   GNU General Public License for more details.
+  */
+-
+-
+-
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <time.h>
+@@ -62,45 +59,47 @@ int clear_time_state(void)
+ #define NUM_FREQ_OUTOFRANGE 4
+ #define NUM_FREQ_INVALID 2
+
++#define SHIFTED_PPM (1 << 16)
++
+ long valid_freq[NUM_FREQ_VALID] = {
+-	-499<<16,
+-	-450<<16,
+-	-400<<16,
+-	-350<<16,
+-	-300<<16,
+-	-250<<16,
+-	-200<<16,
+-	-150<<16,
+-	-100<<16,
+-	-75<<16,
+-	-50<<16,
+-	-25<<16,
+-	-10<<16,
+-	-5<<16,
+-	-1<<16,
++	-499 * SHIFTED_PPM,
++	-450 * SHIFTED_PPM,
++	-400 * SHIFTED_PPM,
++	-350 * SHIFTED_PPM,
++	-300 * SHIFTED_PPM,
++	-250 * SHIFTED_PPM,
++	-200 * SHIFTED_PPM,
++	-150 * SHIFTED_PPM,
++	-100 * SHIFTED_PPM,
++	-75 * SHIFTED_PPM,
++	-50 * SHIFTED_PPM,
++	-25 * SHIFTED_PPM,
++	-10 * SHIFTED_PPM,
++	-5 * SHIFTED_PPM,
++	-1 * SHIFTED_PPM,
+ 	-1000,
+-	1<<16,
+-	5<<16,
+-	10<<16,
+-	25<<16,
+-	50<<16,
+-	75<<16,
+-	100<<16,
+-	150<<16,
+-	200<<16,
+-	250<<16,
+-	300<<16,
+-	350<<16,
+-	400<<16,
+-	450<<16,
+-	499<<16,
++	1 * SHIFTED_PPM,
++	5 * SHIFTED_PPM,
++	10 * SHIFTED_PPM,
++	25 * SHIFTED_PPM,
++	50 * SHIFTED_PPM,
++	75 * SHIFTED_PPM,
++	100 * SHIFTED_PPM,
++	150 * SHIFTED_PPM,
++	200 * SHIFTED_PPM,
++	250 * SHIFTED_PPM,
++	300 * SHIFTED_PPM,
++	350 * SHIFTED_PPM,
++	400 * SHIFTED_PPM,
++	450 * SHIFTED_PPM,
++	499 * SHIFTED_PPM,
+ };
+
+ long outofrange_freq[NUM_FREQ_OUTOFRANGE] = {
+-	-1000<<16,
+-	-550<<16,
+-	550<<16,
+-	1000<<16,
++	-1000 * SHIFTED_PPM,
++	-550 * SHIFTED_PPM,
++	550 * SHIFTED_PPM,
++	1000 * SHIFTED_PPM,
+ };
+
+ #define LONG_MAX (~0UL>>1)