From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 3615D1581EE for ; Sun, 23 Mar 2025 11:28:59 +0000 (UTC) Received: from lists.gentoo.org (bobolink.gentoo.org [140.211.166.189]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) (Authenticated sender: relay-lists.gentoo.org@gentoo.org) by smtp.gentoo.org (Postfix) with ESMTPSA id 200E334379A for ; Sun, 23 Mar 2025 11:28:59 +0000 (UTC) Received: from bobolink.gentoo.org (localhost [127.0.0.1]) by bobolink.gentoo.org (Postfix) with ESMTP id BD9821103B8; Sun, 23 Mar 2025 11:28:56 +0000 (UTC) Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by bobolink.gentoo.org (Postfix) with ESMTPS id B13991103B8 for ; Sun, 23 Mar 2025 11:28:56 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 9ED88343168 for ; Sun, 23 Mar 2025 11:28:55 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 3DADA1D72 for ; Sun, 23 Mar 2025 11:28:54 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1742729315.a38164d3c4a28c852ce948cab10e89a49b72f35a.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:6.13 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1007_linux-6.13.8.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: a38164d3c4a28c852ce948cab10e89a49b72f35a X-VCS-Branch: 6.13 Date: Sun, 23 Mar 2025 11:28:54 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 1bf41ab9-736f-4be3-9557-d27132e5c661 X-Archives-Hash: abfb5a884c5e37bbb7ee9dab0ab0f5a3 commit: a38164d3c4a28c852ce948cab10e89a49b72f35a Author: Mike Pagano gentoo org> AuthorDate: Sun Mar 23 11:28:35 2025 +0000 Commit: Mike Pagano gentoo org> CommitDate: Sun Mar 23 11:28:35 2025 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a38164d3 Linux patch 6.13.8 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1007_linux-6.13.8.patch | 10483 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 10487 insertions(+) diff --git a/0000_README b/0000_README index e5c285ae..8e9fc0da 100644 --- a/0000_README +++ b/0000_README @@ -71,6 +71,10 @@ Patch: 1006_linux-6.13.7.patch From: https://www.kernel.org Desc: Linux 6.13.7 +Patch: 1007_linux-6.13.8.patch +From: https://www.kernel.org +Desc: Linux 6.13.8 + Patch: 1510_fs-enable-link-security-restrictions-by-default.patch From: 
http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/ Desc: Enable link security restrictions by default. diff --git a/1007_linux-6.13.8.patch b/1007_linux-6.13.8.patch new file mode 100644 index 00000000..8ecd3ef6 --- /dev/null +++ b/1007_linux-6.13.8.patch @@ -0,0 +1,10483 @@ +diff --git a/Documentation/rust/quick-start.rst b/Documentation/rust/quick-start.rst +index 4aa50e5fcb8c05..6d2607870ba44c 100644 +--- a/Documentation/rust/quick-start.rst ++++ b/Documentation/rust/quick-start.rst +@@ -145,7 +145,7 @@ Rust standard library source + **************************** + + The Rust standard library source is required because the build system will +-cross-compile ``core`` and ``alloc``. ++cross-compile ``core``. + + If ``rustup`` is being used, run:: + +diff --git a/Makefile b/Makefile +index 3363645fa6242b..44d99cd682ba69 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 13 +-SUBLEVEL = 7 ++SUBLEVEL = 8 + EXTRAVERSION = + NAME = Baby Opossum Posse + +diff --git a/arch/alpha/include/asm/elf.h b/arch/alpha/include/asm/elf.h +index 4d7c46f50382e3..50c82187e60ec9 100644 +--- a/arch/alpha/include/asm/elf.h ++++ b/arch/alpha/include/asm/elf.h +@@ -74,7 +74,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG]; + /* + * This is used to ensure we don't load something for the wrong architecture. + */ +-#define elf_check_arch(x) ((x)->e_machine == EM_ALPHA) ++#define elf_check_arch(x) (((x)->e_machine == EM_ALPHA) && !((x)->e_flags & EF_ALPHA_32BIT)) + + /* + * These are used to set parameters in the core dumps. +@@ -137,10 +137,6 @@ extern int dump_elf_task(elf_greg_t *dest, struct task_struct *task); + : amask (AMASK_CIX) ? "ev6" : "ev67"); \ + }) + +-#define SET_PERSONALITY(EX) \ +- set_personality(((EX).e_flags & EF_ALPHA_32BIT) \ +- ? PER_LINUX_32BIT : PER_LINUX) +- + extern int alpha_l1i_cacheshape; + extern int alpha_l1d_cacheshape; + extern int alpha_l2_cacheshape; +diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h +index 635f0a5f5bbdeb..02e8817a89212c 100644 +--- a/arch/alpha/include/asm/pgtable.h ++++ b/arch/alpha/include/asm/pgtable.h +@@ -360,7 +360,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte) + + extern void paging_init(void); + +-/* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT. */ ++/* We have our own get_unmapped_area */ + #define HAVE_ARCH_UNMAPPED_AREA + + #endif /* _ALPHA_PGTABLE_H */ +diff --git a/arch/alpha/include/asm/processor.h b/arch/alpha/include/asm/processor.h +index 55bb1c09fd39d5..5dce5518a21119 100644 +--- a/arch/alpha/include/asm/processor.h ++++ b/arch/alpha/include/asm/processor.h +@@ -8,23 +8,19 @@ + #ifndef __ASM_ALPHA_PROCESSOR_H + #define __ASM_ALPHA_PROCESSOR_H + +-#include /* for ADDR_LIMIT_32BIT */ +- + /* + * We have a 42-bit user address space: 4TB user VM... + */ + #define TASK_SIZE (0x40000000000UL) + +-#define STACK_TOP \ +- (current->personality & ADDR_LIMIT_32BIT ? 0x80000000 : 0x00120000000UL) ++#define STACK_TOP (0x00120000000UL) + + #define STACK_TOP_MAX 0x00120000000UL + + /* This decides where the kernel will search for a free chunk of vm + * space during mmap's. + */ +-#define TASK_UNMAPPED_BASE \ +- ((current->personality & ADDR_LIMIT_32BIT) ? 0x40000000 : TASK_SIZE / 2) ++#define TASK_UNMAPPED_BASE (TASK_SIZE / 2) + + /* This is dead. Everything has been moved to thread_info. 
*/ + struct thread_struct { }; +diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c +index 86185021f75ae3..a08e8edef1a44b 100644 +--- a/arch/alpha/kernel/osf_sys.c ++++ b/arch/alpha/kernel/osf_sys.c +@@ -1210,8 +1210,7 @@ SYSCALL_DEFINE1(old_adjtimex, struct timex32 __user *, txc_p) + return ret; + } + +-/* Get an address range which is currently unmapped. Similar to the +- generic version except that we know how to honor ADDR_LIMIT_32BIT. */ ++/* Get an address range which is currently unmapped. */ + + static unsigned long + arch_get_unmapped_area_1(unsigned long addr, unsigned long len, +@@ -1230,13 +1229,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, + unsigned long len, unsigned long pgoff, + unsigned long flags, vm_flags_t vm_flags) + { +- unsigned long limit; +- +- /* "32 bit" actually means 31 bit, since pointers sign extend. */ +- if (current->personality & ADDR_LIMIT_32BIT) +- limit = 0x80000000; +- else +- limit = TASK_SIZE; ++ unsigned long limit = TASK_SIZE; + + if (len > limit) + return -ENOMEM; +diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h +index bc94e036a26b99..8104aee4f9a082 100644 +--- a/arch/arm64/include/asm/tlbflush.h ++++ b/arch/arm64/include/asm/tlbflush.h +@@ -396,33 +396,35 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch) + #define __flush_tlb_range_op(op, start, pages, stride, \ + asid, tlb_level, tlbi_user, lpa2) \ + do { \ ++ typeof(start) __flush_start = start; \ ++ typeof(pages) __flush_pages = pages; \ + int num = 0; \ + int scale = 3; \ + int shift = lpa2 ? 16 : PAGE_SHIFT; \ + unsigned long addr; \ + \ +- while (pages > 0) { \ ++ while (__flush_pages > 0) { \ + if (!system_supports_tlb_range() || \ +- pages == 1 || \ +- (lpa2 && start != ALIGN(start, SZ_64K))) { \ +- addr = __TLBI_VADDR(start, asid); \ ++ __flush_pages == 1 || \ ++ (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \ ++ addr = __TLBI_VADDR(__flush_start, asid); \ + __tlbi_level(op, addr, tlb_level); \ + if (tlbi_user) \ + __tlbi_user_level(op, addr, tlb_level); \ +- start += stride; \ +- pages -= stride >> PAGE_SHIFT; \ ++ __flush_start += stride; \ ++ __flush_pages -= stride >> PAGE_SHIFT; \ + continue; \ + } \ + \ +- num = __TLBI_RANGE_NUM(pages, scale); \ ++ num = __TLBI_RANGE_NUM(__flush_pages, scale); \ + if (num >= 0) { \ +- addr = __TLBI_VADDR_RANGE(start >> shift, asid, \ ++ addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \ + scale, num, tlb_level); \ + __tlbi(r##op, addr); \ + if (tlbi_user) \ + __tlbi_user(r##op, addr); \ +- start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \ +- pages -= __TLBI_RANGE_PAGES(num, scale); \ ++ __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \ ++ __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\ + } \ + scale--; \ + } \ +diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c +index 1a2c72f3e7f80e..cb180684d10d5b 100644 +--- a/arch/arm64/kernel/topology.c ++++ b/arch/arm64/kernel/topology.c +@@ -194,12 +194,19 @@ static void amu_fie_setup(const struct cpumask *cpus) + int cpu; + + /* We are already set since the last insmod of cpufreq driver */ +- if (unlikely(cpumask_subset(cpus, amu_fie_cpus))) ++ if (cpumask_available(amu_fie_cpus) && ++ unlikely(cpumask_subset(cpus, amu_fie_cpus))) + return; + +- for_each_cpu(cpu, cpus) { ++ for_each_cpu(cpu, cpus) + if (!freq_counters_valid(cpu)) + return; ++ ++ if (!cpumask_available(amu_fie_cpus) && ++ !zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) { 
++ WARN_ONCE(1, "Failed to allocate FIE cpumask for CPUs[%*pbl]\n", ++ cpumask_pr_args(cpus)); ++ return; + } + + cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus); +@@ -237,17 +244,8 @@ static struct notifier_block init_amu_fie_notifier = { + + static int __init init_amu_fie(void) + { +- int ret; +- +- if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) +- return -ENOMEM; +- +- ret = cpufreq_register_notifier(&init_amu_fie_notifier, ++ return cpufreq_register_notifier(&init_amu_fie_notifier, + CPUFREQ_POLICY_NOTIFIER); +- if (ret) +- free_cpumask_var(amu_fie_cpus); +- +- return ret; + } + core_initcall(init_amu_fie); + +diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c +index e2739b69e11bd8..d1a4646f13a909 100644 +--- a/arch/arm64/mm/mmu.c ++++ b/arch/arm64/mm/mmu.c +@@ -1176,8 +1176,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, + struct vmem_altmap *altmap) + { + WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END)); ++ /* [start, end] should be within one section */ ++ WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page)); + +- if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES)) ++ if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || ++ (end - start < PAGES_PER_SECTION * sizeof(struct page))) + return vmemmap_populate_basepages(start, end, node, altmap); + else + return vmemmap_populate_hugepages(start, end, node, altmap); +diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S +index 0c292f81849277..1be185e9480723 100644 +--- a/arch/loongarch/kvm/switch.S ++++ b/arch/loongarch/kvm/switch.S +@@ -85,7 +85,7 @@ + * Guest CRMD comes from separate GCSR_CRMD register + */ + ori t0, zero, CSR_PRMD_PIE +- csrxchg t0, t0, LOONGARCH_CSR_PRMD ++ csrwr t0, LOONGARCH_CSR_PRMD + + /* Set PVM bit to setup ertn to guest context */ + ori t0, zero, CSR_GSTAT_PVM +diff --git a/arch/loongarch/mm/pageattr.c b/arch/loongarch/mm/pageattr.c +index bf867824844400..99165903908a4e 100644 +--- a/arch/loongarch/mm/pageattr.c ++++ b/arch/loongarch/mm/pageattr.c +@@ -3,6 +3,7 @@ + * Copyright (C) 2024 Loongson Technology Corporation Limited + */ + ++#include + #include + #include + #include +@@ -167,7 +168,7 @@ bool kernel_page_present(struct page *page) + unsigned long addr = (unsigned long)page_address(page); + + if (addr < vm_map_base) +- return true; ++ return memblock_is_memory(__pa(addr)); + + pgd = pgd_offset_k(addr); + if (pgd_none(pgdp_get(pgd))) +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c +index 2bba1d934efb0d..6651360d6a5259 100644 +--- a/arch/x86/events/intel/core.c ++++ b/arch/x86/events/intel/core.c +@@ -3949,6 +3949,85 @@ static inline bool intel_pmu_has_cap(struct perf_event *event, int idx) + return test_bit(idx, (unsigned long *)&intel_cap->capabilities); + } + ++static u64 intel_pmu_freq_start_period(struct perf_event *event) ++{ ++ int type = event->attr.type; ++ u64 config, factor; ++ s64 start; ++ ++ /* ++ * The 127 is the lowest possible recommended SAV (sample after value) ++ * for a 4000 freq (default freq), according to the event list JSON file. ++ * Also, assume the workload is idle 50% time. ++ */ ++ factor = 64 * 4000; ++ if (type != PERF_TYPE_HARDWARE && type != PERF_TYPE_HW_CACHE) ++ goto end; ++ ++ /* ++ * The estimation of the start period in the freq mode is ++ * based on the below assumption. ++ * ++ * For a cycles or an instructions event, 1GHZ of the ++ * underlying platform, 1 IPC. The workload is idle 50% time. ++ * The start period = 1,000,000,000 * 1 / freq / 2. 
++ * = 500,000,000 / freq ++ * ++ * Usually, the branch-related events occur less than the ++ * instructions event. According to the Intel event list JSON ++ * file, the SAV (sample after value) of a branch-related event ++ * is usually 1/4 of an instruction event. ++ * The start period of branch-related events = 125,000,000 / freq. ++ * ++ * The cache-related events occurs even less. The SAV is usually ++ * 1/20 of an instruction event. ++ * The start period of cache-related events = 25,000,000 / freq. ++ */ ++ config = event->attr.config & PERF_HW_EVENT_MASK; ++ if (type == PERF_TYPE_HARDWARE) { ++ switch (config) { ++ case PERF_COUNT_HW_CPU_CYCLES: ++ case PERF_COUNT_HW_INSTRUCTIONS: ++ case PERF_COUNT_HW_BUS_CYCLES: ++ case PERF_COUNT_HW_STALLED_CYCLES_FRONTEND: ++ case PERF_COUNT_HW_STALLED_CYCLES_BACKEND: ++ case PERF_COUNT_HW_REF_CPU_CYCLES: ++ factor = 500000000; ++ break; ++ case PERF_COUNT_HW_BRANCH_INSTRUCTIONS: ++ case PERF_COUNT_HW_BRANCH_MISSES: ++ factor = 125000000; ++ break; ++ case PERF_COUNT_HW_CACHE_REFERENCES: ++ case PERF_COUNT_HW_CACHE_MISSES: ++ factor = 25000000; ++ break; ++ default: ++ goto end; ++ } ++ } ++ ++ if (type == PERF_TYPE_HW_CACHE) ++ factor = 25000000; ++end: ++ /* ++ * Usually, a prime or a number with less factors (close to prime) ++ * is chosen as an SAV, which makes it less likely that the sampling ++ * period synchronizes with some periodic event in the workload. ++ * Minus 1 to make it at least avoiding values near power of twos ++ * for the default freq. ++ */ ++ start = DIV_ROUND_UP_ULL(factor, event->attr.sample_freq) - 1; ++ ++ if (start > x86_pmu.max_period) ++ start = x86_pmu.max_period; ++ ++ if (x86_pmu.limit_period) ++ x86_pmu.limit_period(event, &start); ++ ++ return start; ++} ++ + static int intel_pmu_hw_config(struct perf_event *event) + { + int ret = x86_pmu_hw_config(event); +@@ -3960,6 +4039,12 @@ static int intel_pmu_hw_config(struct perf_event *event) + if (ret) + return ret; + ++ if (event->attr.freq && event->attr.sample_freq) { ++ event->hw.sample_period = intel_pmu_freq_start_period(event); ++ event->hw.last_period = event->hw.sample_period; ++ local64_set(&event->hw.period_left, event->hw.sample_period); ++ } ++ + if (event->attr.precise_ip) { + if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT) + return -EINVAL; +diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c +index a8defc813c3696..7e493963d05767 100644 +--- a/arch/x86/events/rapl.c ++++ b/arch/x86/events/rapl.c +@@ -796,6 +796,7 @@ static const struct x86_cpu_id rapl_model_match[] __initconst = { + X86_MATCH_VFM(INTEL_METEORLAKE_L, &model_skl), + X86_MATCH_VFM(INTEL_ARROWLAKE_H, &model_skl), + X86_MATCH_VFM(INTEL_ARROWLAKE, &model_skl), ++ X86_MATCH_VFM(INTEL_ARROWLAKE_U, &model_skl), + X86_MATCH_VFM(INTEL_LUNARLAKE_M, &model_skl), + {}, + }; +diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h +index 91f08af3107853..82d9250aac34dc 100644 +--- a/arch/x86/include/asm/sev.h ++++ b/arch/x86/include/asm/sev.h +@@ -185,6 +185,9 @@ struct snp_guest_req { + unsigned int vmpck_id; + u8 msg_version; + u8 msg_type; ++ ++ struct snp_req_data input; ++ void *certs_data; + }; + + /* +@@ -245,9 +248,6 @@ struct snp_msg_desc { + struct snp_guest_msg secret_request, secret_response; + + struct snp_secrets_page *secrets; +- struct snp_req_data input; +- +- void *certs_data; + + struct aesgcm_ctx *ctx; + +diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c +index def6a2854a4b7c..07fc145f353103 
100644 +--- a/arch/x86/kernel/cpu/microcode/amd.c ++++ b/arch/x86/kernel/cpu/microcode/amd.c +@@ -1075,7 +1075,7 @@ static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t siz + if (ret != UCODE_OK) + return ret; + +- for_each_node(nid) { ++ for_each_node_with_cpus(nid) { + cpu = cpumask_first(cpumask_of_node(nid)); + c = &cpu_data(cpu); + +diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c +index 00189cdeb775f0..cb3f900c46fcc1 100644 +--- a/arch/x86/kernel/cpu/vmware.c ++++ b/arch/x86/kernel/cpu/vmware.c +@@ -26,6 +26,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -429,6 +430,9 @@ static void __init vmware_platform_setup(void) + pr_warn("Failed to get TSC freq from the hypervisor\n"); + } + ++ if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !efi_enabled(EFI_BOOT)) ++ x86_init.mpparse.find_mptable = mpparse_find_mptable; ++ + vmware_paravirt_ops_setup(); + + #ifdef CONFIG_X86_IO_APIC +diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c +index 59d23cdf4ed0fa..dd8748c45529a8 100644 +--- a/arch/x86/kernel/devicetree.c ++++ b/arch/x86/kernel/devicetree.c +@@ -2,6 +2,7 @@ + /* + * Architecture specific OF callbacks. + */ ++#include + #include + #include + #include +@@ -313,6 +314,6 @@ void __init x86_flattree_get_config(void) + if (initial_dtb) + early_memunmap(dt, map_len); + #endif +- if (of_have_populated_dt()) ++ if (acpi_disabled && of_have_populated_dt()) + x86_init.mpparse.parse_smp_cfg = x86_dtb_parse_smp_config; + } +diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c +index 385e3a5fc30458..feca4f20b06aaa 100644 +--- a/arch/x86/kernel/irq.c ++++ b/arch/x86/kernel/irq.c +@@ -25,8 +25,10 @@ + #include + #include + ++#if defined(CONFIG_X86_LOCAL_APIC) || defined(CONFIG_X86_THERMAL_VECTOR) + #define CREATE_TRACE_POINTS + #include ++#endif + + DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat); + EXPORT_PER_CPU_SYMBOL(irq_stat); +diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c +index e102505735a7bc..0e6bf24093f759 100644 +--- a/arch/x86/kvm/mmu/mmu.c ++++ b/arch/x86/kvm/mmu/mmu.c +@@ -7435,7 +7435,7 @@ static void kvm_mmu_start_lpage_recovery(struct once *once) + kvm_nx_huge_page_recovery_worker_kill, + kvm, "kvm-nx-lpage-recovery"); + +- if (!nx_thread) ++ if (IS_ERR(nx_thread)) + return; + + vhost_task_start(nx_thread); +diff --git a/block/bio.c b/block/bio.c +index d5bdc31d88d327..f54e8572affc17 100644 +--- a/block/bio.c ++++ b/block/bio.c +@@ -77,7 +77,7 @@ struct bio_slab { + struct kmem_cache *slab; + unsigned int slab_ref; + unsigned int slab_size; +- char name[8]; ++ char name[12]; + }; + static DEFINE_MUTEX(bio_slab_lock); + static DEFINE_XARRAY(bio_slabs); +diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c +index 90aaec923889cf..b4cd14e7fa76cc 100644 +--- a/drivers/acpi/resource.c ++++ b/drivers/acpi/resource.c +@@ -563,6 +563,12 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = { + DMI_MATCH(DMI_BOARD_NAME, "RP-15"), + }, + }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Eluktronics Inc."), ++ DMI_MATCH(DMI_BOARD_NAME, "MECH-17"), ++ }, ++ }, + { + /* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */ + .matches = { +diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c +index 3c3d8d200abba5..f10369ad90f768 100644 +--- a/drivers/block/null_blk/main.c ++++ b/drivers/block/null_blk/main.c +@@ -1541,8 +1541,8 @@ static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob) + cmd = 
blk_mq_rq_to_pdu(req); + cmd->error = null_process_cmd(cmd, req_op(req), blk_rq_pos(req), + blk_rq_sectors(req)); +- if (!blk_mq_add_to_batch(req, iob, (__force int) cmd->error, +- blk_mq_end_request_batch)) ++ if (!blk_mq_add_to_batch(req, iob, cmd->error != BLK_STS_OK, ++ blk_mq_end_request_batch)) + blk_mq_end_request(req, cmd->error); + nr++; + } +diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c +index 5afc8aac62ab9b..9515e982921618 100644 +--- a/drivers/block/virtio_blk.c ++++ b/drivers/block/virtio_blk.c +@@ -1207,11 +1207,12 @@ static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob) + + while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) { + struct request *req = blk_mq_rq_from_pdu(vbr); ++ u8 status = virtblk_vbr_status(vbr); + + found++; + if (!blk_mq_complete_request_remote(req) && +- !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr), +- virtblk_complete_batch)) ++ !blk_mq_add_to_batch(req, iob, status != VIRTIO_BLK_S_OK, ++ virtblk_complete_batch)) + virtblk_request_done(req); + } + +diff --git a/drivers/clk/samsung/clk-gs101.c b/drivers/clk/samsung/clk-gs101.c +index 86b39edba12276..08b867ae3ed9df 100644 +--- a/drivers/clk/samsung/clk-gs101.c ++++ b/drivers/clk/samsung/clk-gs101.c +@@ -382,17 +382,9 @@ static const unsigned long cmu_top_clk_regs[] __initconst = { + EARLY_WAKEUP_DPU_DEST, + EARLY_WAKEUP_CSIS_DEST, + EARLY_WAKEUP_SW_TRIG_APM, +- EARLY_WAKEUP_SW_TRIG_APM_SET, +- EARLY_WAKEUP_SW_TRIG_APM_CLEAR, + EARLY_WAKEUP_SW_TRIG_CLUSTER0, +- EARLY_WAKEUP_SW_TRIG_CLUSTER0_SET, +- EARLY_WAKEUP_SW_TRIG_CLUSTER0_CLEAR, + EARLY_WAKEUP_SW_TRIG_DPU, +- EARLY_WAKEUP_SW_TRIG_DPU_SET, +- EARLY_WAKEUP_SW_TRIG_DPU_CLEAR, + EARLY_WAKEUP_SW_TRIG_CSIS, +- EARLY_WAKEUP_SW_TRIG_CSIS_SET, +- EARLY_WAKEUP_SW_TRIG_CSIS_CLEAR, + CLK_CON_MUX_MUX_CLKCMU_BO_BUS, + CLK_CON_MUX_MUX_CLKCMU_BUS0_BUS, + CLK_CON_MUX_MUX_CLKCMU_BUS1_BUS, +diff --git a/drivers/clk/samsung/clk-pll.c b/drivers/clk/samsung/clk-pll.c +index be6b516949193f..8f79807036a80d 100644 +--- a/drivers/clk/samsung/clk-pll.c ++++ b/drivers/clk/samsung/clk-pll.c +@@ -206,6 +206,7 @@ static const struct clk_ops samsung_pll3000_clk_ops = { + */ + /* Maximum lock time can be 270 * PDIV cycles */ + #define PLL35XX_LOCK_FACTOR (270) ++#define PLL142XX_LOCK_FACTOR (150) + + #define PLL35XX_MDIV_MASK (0x3FF) + #define PLL35XX_PDIV_MASK (0x3F) +@@ -272,7 +273,11 @@ static int samsung_pll35xx_set_rate(struct clk_hw *hw, unsigned long drate, + } + + /* Set PLL lock time. 
*/ +- writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR, ++ if (pll->type == pll_142xx) ++ writel_relaxed(rate->pdiv * PLL142XX_LOCK_FACTOR, ++ pll->lock_reg); ++ else ++ writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR, + pll->lock_reg); + + /* Change PLL PMS values */ +diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c +index 6e9788324fea55..371f24569b3b22 100644 +--- a/drivers/firmware/iscsi_ibft.c ++++ b/drivers/firmware/iscsi_ibft.c +@@ -310,7 +310,10 @@ static ssize_t ibft_attr_show_nic(void *data, int type, char *buf) + str += sprintf_ipaddr(str, nic->ip_addr); + break; + case ISCSI_BOOT_ETH_SUBNET_MASK: +- val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1)); ++ if (nic->subnet_mask_prefix > 32) ++ val = cpu_to_be32(~0); ++ else ++ val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1)); + str += sprintf(str, "%pI4", &val); + break; + case ISCSI_BOOT_ETH_PREFIX_LEN: +diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c +index 40f76a90fd7db9..107d75558b5a80 100644 +--- a/drivers/gpio/gpiolib-cdev.c ++++ b/drivers/gpio/gpiolib-cdev.c +@@ -2729,8 +2729,9 @@ static int gpio_chrdev_open(struct inode *inode, struct file *file) + cdev->gdev = gpio_device_get(gdev); + + cdev->lineinfo_changed_nb.notifier_call = lineinfo_changed_notify; +- ret = atomic_notifier_chain_register(&gdev->line_state_notifier, +- &cdev->lineinfo_changed_nb); ++ scoped_guard(write_lock_irqsave, &gdev->line_state_lock) ++ ret = raw_notifier_chain_register(&gdev->line_state_notifier, ++ &cdev->lineinfo_changed_nb); + if (ret) + goto out_free_bitmap; + +@@ -2754,8 +2755,9 @@ static int gpio_chrdev_open(struct inode *inode, struct file *file) + blocking_notifier_chain_unregister(&gdev->device_notifier, + &cdev->device_unregistered_nb); + out_unregister_line_notifier: +- atomic_notifier_chain_unregister(&gdev->line_state_notifier, +- &cdev->lineinfo_changed_nb); ++ scoped_guard(write_lock_irqsave, &gdev->line_state_lock) ++ raw_notifier_chain_unregister(&gdev->line_state_notifier, ++ &cdev->lineinfo_changed_nb); + out_free_bitmap: + gpio_device_put(gdev); + bitmap_free(cdev->watched_lines); +@@ -2779,8 +2781,9 @@ static int gpio_chrdev_release(struct inode *inode, struct file *file) + + blocking_notifier_chain_unregister(&gdev->device_notifier, + &cdev->device_unregistered_nb); +- atomic_notifier_chain_unregister(&gdev->line_state_notifier, +- &cdev->lineinfo_changed_nb); ++ scoped_guard(write_lock_irqsave, &gdev->line_state_lock) ++ raw_notifier_chain_unregister(&gdev->line_state_notifier, ++ &cdev->lineinfo_changed_nb); + bitmap_free(cdev->watched_lines); + gpio_device_put(gdev); + kfree(cdev); +diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c +index 19878bc75e94ca..2509b723b34db9 100644 +--- a/drivers/gpio/gpiolib.c ++++ b/drivers/gpio/gpiolib.c +@@ -1025,7 +1025,8 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data, + } + } + +- ATOMIC_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); ++ rwlock_init(&gdev->line_state_lock); ++ RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); + BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); + + ret = init_srcu_struct(&gdev->srcu); +@@ -4171,8 +4172,9 @@ EXPORT_SYMBOL_GPL(gpiod_set_array_value_cansleep); + + void gpiod_line_state_notify(struct gpio_desc *desc, unsigned long action) + { +- atomic_notifier_call_chain(&desc->gdev->line_state_notifier, +- action, desc); ++ guard(read_lock_irqsave)(&desc->gdev->line_state_lock); ++ ++ raw_notifier_call_chain(&desc->gdev->line_state_notifier, action, desc); 
+ } + + /** +diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h +index 147156ec502b29..c129a03e204087 100644 +--- a/drivers/gpio/gpiolib.h ++++ b/drivers/gpio/gpiolib.h +@@ -16,6 +16,7 @@ + #include + #include + #include ++#include + #include + #include + +@@ -45,6 +46,7 @@ + * @list: links gpio_device:s together for traversal + * @line_state_notifier: used to notify subscribers about lines being + * requested, released or reconfigured ++ * @line_state_lock: RW-spinlock protecting the line state notifier + * @line_state_wq: used to emit line state events from a separate thread in + * process context + * @device_notifier: used to notify character device wait queues about the GPIO +@@ -72,7 +74,8 @@ struct gpio_device { + const char *label; + void *data; + struct list_head list; +- struct atomic_notifier_head line_state_notifier; ++ struct raw_notifier_head line_state_notifier; ++ rwlock_t line_state_lock; + struct workqueue_struct *line_state_wq; + struct blocking_notifier_head device_notifier; + struct srcu_struct srcu; +diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c +index d22b027fd0bb8f..69f1aaeac99dd0 100644 +--- a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c +@@ -528,8 +528,9 @@ static void gmc_v12_0_get_vm_pte(struct amdgpu_device *adev, + + bo_adev = amdgpu_ttm_adev(bo->tbo.bdev); + coherent = bo->flags & AMDGPU_GEM_CREATE_COHERENT; +- is_system = (bo->tbo.resource->mem_type == TTM_PL_TT) || +- (bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT); ++ is_system = bo->tbo.resource && ++ (bo->tbo.resource->mem_type == TTM_PL_TT || ++ bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT); + + if (bo && bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC) + *flags |= AMDGPU_PTE_DCC; +diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c +index c1ed91b3941543..0ed91e6dd122e5 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c +@@ -284,7 +284,7 @@ static int vce_v2_0_stop(struct amdgpu_device *adev) + return 0; + } + +- ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_VCN); ++ ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_VCE); + if (!ip_block) + return -EINVAL; + +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c +index 6f309948805354..8ab83dcb1a3c29 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c +@@ -1230,11 +1230,13 @@ static int evict_process_queues_cpsch(struct device_queue_manager *dqm, + decrement_queue_count(dqm, qpd, q); + + if (dqm->dev->kfd->shared_resources.enable_mes) { +- retval = remove_queue_mes(dqm, q, qpd); +- if (retval) { ++ int err; ++ ++ err = remove_queue_mes(dqm, q, qpd); ++ if (err) { + dev_err(dev, "Failed to evict queue %d\n", + q->properties.queue_id); +- goto out; ++ retval = err; + } + } + } +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index ca6b9a585aba90..56a30c2d0abe22 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -244,6 +244,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, + static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector); + static void handle_hpd_rx_irq(void *param); + ++static void amdgpu_dm_backlight_set_level(struct 
amdgpu_display_manager *dm, ++ int bl_idx, ++ u32 user_brightness); ++ + static bool + is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state, + struct drm_crtc_state *new_crtc_state); +@@ -3318,6 +3322,12 @@ static int dm_resume(struct amdgpu_ip_block *ip_block) + + mutex_unlock(&dm->dc_lock); + ++ /* set the backlight after a reset */ ++ for (i = 0; i < dm->num_of_edps; i++) { ++ if (dm->backlight_dev[i]) ++ amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]); ++ } ++ + return 0; + } + /* Recreate dc_state - DC invalidates it when setting power state to S3. */ +@@ -4848,6 +4858,7 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector) + dm->backlight_dev[aconnector->bl_idx] = + backlight_device_register(bl_name, aconnector->base.kdev, dm, + &amdgpu_dm_backlight_ops, &props); ++ dm->brightness[aconnector->bl_idx] = props.brightness; + + if (IS_ERR(dm->backlight_dev[aconnector->bl_idx])) { + DRM_ERROR("DM: Backlight registration failed!\n"); +@@ -4915,7 +4926,6 @@ static void setup_backlight_device(struct amdgpu_display_manager *dm, + aconnector->bl_idx = bl_idx; + + amdgpu_dm_update_backlight_caps(dm, bl_idx); +- dm->brightness[bl_idx] = AMDGPU_MAX_BL_LEVEL; + dm->backlight_link[bl_idx] = link; + dm->num_of_edps++; + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c +index e339c7a8d541c9..c0dc2324404908 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c +@@ -455,6 +455,7 @@ void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *hdcp_work) + for (i = 0; i < hdcp_work->max_link; i++) { + cancel_delayed_work_sync(&hdcp_work[i].callback_dwork); + cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork); ++ cancel_delayed_work_sync(&hdcp_work[i].property_validate_dwork); + } + + sysfs_remove_bin_file(kobj, &hdcp_work[0].attr); +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c +index c4a7fd453e5fc0..a215234151ac31 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c +@@ -894,8 +894,16 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev) + struct drm_device *dev = adev_to_drm(adev); + struct drm_connector *connector; + struct drm_connector_list_iter iter; ++ int irq_type; + int i; + ++ /* First, clear all hpd and hpdrx interrupts */ ++ for (i = DC_IRQ_SOURCE_HPD1; i <= DC_IRQ_SOURCE_HPD6RX; i++) { ++ if (!dc_interrupt_set(adev->dm.dc, i, false)) ++ drm_err(dev, "Failed to clear hpd(rx) source=%d on init\n", ++ i); ++ } ++ + drm_connector_list_iter_begin(dev, &iter); + drm_for_each_connector_iter(connector, &iter) { + struct amdgpu_dm_connector *amdgpu_dm_connector; +@@ -908,10 +916,31 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev) + + dc_link = amdgpu_dm_connector->dc_link; + ++ /* ++ * Get a base driver irq reference for hpd ints for the lifetime ++ * of dm. Note that only hpd interrupt types are registered with ++ * base driver; hpd_rx types aren't. IOW, amdgpu_irq_get/put on ++ * hpd_rx isn't available. 
DM currently controls hpd_rx ++ * explicitly with dc_interrupt_set() ++ */ + if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) { +- dc_interrupt_set(adev->dm.dc, +- dc_link->irq_source_hpd, +- true); ++ irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1; ++ /* ++ * TODO: There's a mismatch between mode_info.num_hpd ++ * and what bios reports as the # of connectors with hpd ++ * sources. Since the # of hpd source types registered ++ * with base driver == mode_info.num_hpd, we have to ++ * fallback to dc_interrupt_set for the remaining types. ++ */ ++ if (irq_type < adev->mode_info.num_hpd) { ++ if (amdgpu_irq_get(adev, &adev->hpd_irq, irq_type)) ++ drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", ++ dc_link->irq_source_hpd); ++ } else { ++ dc_interrupt_set(adev->dm.dc, ++ dc_link->irq_source_hpd, ++ true); ++ } + } + + if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) { +@@ -921,12 +950,6 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev) + } + } + drm_connector_list_iter_end(&iter); +- +- /* Update reference counts for HPDs */ +- for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) { +- if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) +- drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i); +- } + } + + /** +@@ -942,7 +965,7 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev) + struct drm_device *dev = adev_to_drm(adev); + struct drm_connector *connector; + struct drm_connector_list_iter iter; +- int i; ++ int irq_type; + + drm_connector_list_iter_begin(dev, &iter); + drm_for_each_connector_iter(connector, &iter) { +@@ -956,9 +979,18 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev) + dc_link = amdgpu_dm_connector->dc_link; + + if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) { +- dc_interrupt_set(adev->dm.dc, +- dc_link->irq_source_hpd, +- false); ++ irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1; ++ ++ /* TODO: See same TODO in amdgpu_dm_hpd_init() */ ++ if (irq_type < adev->mode_info.num_hpd) { ++ if (amdgpu_irq_put(adev, &adev->hpd_irq, irq_type)) ++ drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", ++ dc_link->irq_source_hpd); ++ } else { ++ dc_interrupt_set(adev->dm.dc, ++ dc_link->irq_source_hpd, ++ false); ++ } + } + + if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) { +@@ -968,10 +1000,4 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev) + } + } + drm_connector_list_iter_end(&iter); +- +- /* Update reference counts for HPDs */ +- for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) { +- if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) +- drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i); +- } + } +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c +index 83c7c8853edeca..62e30942f735d4 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c +@@ -275,8 +275,11 @@ static int amdgpu_dm_plane_validate_dcc(struct amdgpu_device *adev, + if (!dcc->enable) + return 0; + +- if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN || +- !dc->cap_funcs.get_dcc_compression_cap) ++ if (adev->family < AMDGPU_FAMILY_GC_12_0_0 && ++ format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) ++ return -EINVAL; ++ ++ if (!dc->cap_funcs.get_dcc_compression_cap) + return -EINVAL; + + input.format = format; +diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +index 
568f5b215ce2e3..68722a74ec4359 100644 +--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c +@@ -3389,10 +3389,13 @@ static int get_norm_pix_clk(const struct dc_crtc_timing *timing) + break; + case COLOR_DEPTH_121212: + normalized_pix_clk = (pix_clk * 36) / 24; +- break; ++ break; ++ case COLOR_DEPTH_141414: ++ normalized_pix_clk = (pix_clk * 42) / 24; ++ break; + case COLOR_DEPTH_161616: + normalized_pix_clk = (pix_clk * 48) / 24; +- break; ++ break; + default: + ASSERT(0); + break; +diff --git a/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c b/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c +index e5fb0e8333e43f..e691a1cf33567d 100644 +--- a/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c ++++ b/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c +@@ -239,6 +239,7 @@ static const struct timing_generator_funcs dce60_tg_funcs = { + dce60_timing_generator_enable_advanced_request, + .configure_crc = dce60_configure_crc, + .get_crc = dce110_get_crc, ++ .is_two_pixels_per_container = dce110_is_two_pixels_per_container, + }; + + void dce60_timing_generator_construct( +diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c +index c6a5a861467976..de2b6e954fbd25 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c ++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c +@@ -1010,7 +1010,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s + if (disp_cfg_stream_location < 0) + disp_cfg_stream_location = dml_dispcfg->num_streams++; + +- ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__); ++ ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__); + populate_dml21_timing_config_from_stream_state(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].timing, context->streams[stream_index], dml_ctx); + adjust_dml21_hblank_timing_config_from_pipe_ctx(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].timing, &context->res_ctx.pipe_ctx[stream_index]); + populate_dml21_output_config_from_stream_state(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].output, context->streams[stream_index], &context->res_ctx.pipe_ctx[stream_index]); +@@ -1035,7 +1035,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s + if (disp_cfg_plane_location < 0) + disp_cfg_plane_location = dml_dispcfg->num_planes++; + +- ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__); ++ ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__); + + populate_dml21_surface_config_from_plane_state(in_dc, &dml_dispcfg->plane_descriptors[disp_cfg_plane_location].surface, context->stream_status[stream_index].plane_states[plane_index]); + populate_dml21_plane_config_from_plane_state(dml_ctx, &dml_dispcfg->plane_descriptors[disp_cfg_plane_location], context->stream_status[stream_index].plane_states[plane_index], context, stream_index); +diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c +index bde4250853b10c..81ba8809a3b4c5 100644 +--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c ++++ 
b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c +@@ -746,7 +746,7 @@ static void populate_dml_output_cfg_from_stream_state(struct dml_output_cfg_st * + case SIGNAL_TYPE_DISPLAY_PORT_MST: + case SIGNAL_TYPE_DISPLAY_PORT: + out->OutputEncoder[location] = dml_dp; +- if (dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location] != -1) ++ if (location < MAX_HPO_DP2_ENCODERS && dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location] != -1) + out->OutputEncoder[dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location]] = dml_dp2p0; + break; + case SIGNAL_TYPE_EDP: +@@ -1303,7 +1303,7 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat + if (disp_cfg_stream_location < 0) + disp_cfg_stream_location = dml_dispcfg->num_timings++; + +- ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__); ++ ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__); + + populate_dml_timing_cfg_from_stream_state(&dml_dispcfg->timing, disp_cfg_stream_location, context->streams[i]); + populate_dml_output_cfg_from_stream_state(&dml_dispcfg->output, disp_cfg_stream_location, context->streams[i], current_pipe_context, dml2); +@@ -1343,7 +1343,7 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat + if (disp_cfg_plane_location < 0) + disp_cfg_plane_location = dml_dispcfg->num_surfaces++; + +- ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__); ++ ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__); + + populate_dml_surface_cfg_from_plane_state(dml2->v20.dml_core_ctx.project, &dml_dispcfg->surface, disp_cfg_plane_location, context->stream_status[i].plane_states[j]); + populate_dml_plane_cfg_from_plane_state( +diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c +index dc4446d589e778..acbcfc291f728b 100644 +--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c +@@ -4035,6 +4035,22 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr) + return 0; + } + ++static bool primary_mstb_probing_is_done(struct drm_dp_mst_topology_mgr *mgr) ++{ ++ bool probing_done = false; ++ ++ mutex_lock(&mgr->lock); ++ ++ if (mgr->mst_primary && drm_dp_mst_topology_try_get_mstb(mgr->mst_primary)) { ++ probing_done = mgr->mst_primary->link_address_sent; ++ drm_dp_mst_topology_put_mstb(mgr->mst_primary); ++ } ++ ++ mutex_unlock(&mgr->lock); ++ ++ return probing_done; ++} ++ + static inline bool + drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr, + struct drm_dp_pending_up_req *up_req) +@@ -4065,8 +4081,12 @@ drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr, + + /* TODO: Add missing handler for DP_RESOURCE_STATUS_NOTIFY events */ + if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY) { +- dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); +- hotplug = true; ++ if (!primary_mstb_probing_is_done(mgr)) { ++ drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. 
Skip it.\n"); ++ } else { ++ dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); ++ hotplug = true; ++ } + } + + drm_dp_mst_topology_put_mstb(mstb); +@@ -4148,10 +4168,11 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + drm_dp_send_up_ack_reply(mgr, mst_primary, up_req->msg.req_type, + false); + ++ drm_dp_mst_topology_put_mstb(mst_primary); ++ + if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) { + const struct drm_dp_connection_status_notify *conn_stat = + &up_req->msg.u.conn_stat; +- bool handle_csn; + + drm_dbg_kms(mgr->dev, "Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", + conn_stat->port_number, +@@ -4160,16 +4181,6 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + conn_stat->message_capability_status, + conn_stat->input_port, + conn_stat->peer_device_type); +- +- mutex_lock(&mgr->probe_lock); +- handle_csn = mst_primary->link_address_sent; +- mutex_unlock(&mgr->probe_lock); +- +- if (!handle_csn) { +- drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it."); +- kfree(up_req); +- goto out_put_primary; +- } + } else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) { + const struct drm_dp_resource_status_notify *res_stat = + &up_req->msg.u.resource_stat; +@@ -4184,9 +4195,6 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + list_add_tail(&up_req->next, &mgr->up_req_list); + mutex_unlock(&mgr->up_req_lock); + queue_work(system_long_wq, &mgr->up_req_work); +- +-out_put_primary: +- drm_dp_mst_topology_put_mstb(mst_primary); + out_clear_reply: + reset_msg_rx_state(&mgr->up_req_recv); + return ret; +diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c +index 370dc676e3aa54..fd36b8fd54e9e1 100644 +--- a/drivers/gpu/drm/drm_atomic_uapi.c ++++ b/drivers/gpu/drm/drm_atomic_uapi.c +@@ -956,6 +956,10 @@ int drm_atomic_connector_commit_dpms(struct drm_atomic_state *state, + + if (mode != DRM_MODE_DPMS_ON) + mode = DRM_MODE_DPMS_OFF; ++ ++ if (connector->dpms == mode) ++ goto out; ++ + connector->dpms = mode; + + crtc = connector->state->crtc; +diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c +index 0e6021235a9304..994afa5a0ffb52 100644 +--- a/drivers/gpu/drm/drm_connector.c ++++ b/drivers/gpu/drm/drm_connector.c +@@ -1308,6 +1308,10 @@ EXPORT_SYMBOL(drm_hdmi_connector_get_output_format_name); + * callback. For atomic drivers the remapping to the "ACTIVE" property is + * implemented in the DRM core. + * ++ * On atomic drivers any DPMS setproperty ioctl where the value does not ++ * change is completely skipped, otherwise a full atomic commit will occur. ++ * On legacy drivers the exact behavior is driver specific. ++ * + * Note that this property cannot be set through the MODE_ATOMIC ioctl, + * userspace must use "ACTIVE" on the CRTC instead. 
+ * +diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs +index bcf248f69252c2..6903e2010cb98b 100644 +--- a/drivers/gpu/drm/drm_panic_qr.rs ++++ b/drivers/gpu/drm/drm_panic_qr.rs +@@ -545,7 +545,7 @@ fn add_segments(&mut self, segments: &[&Segment<'_>]) { + } + self.push(&mut offset, (MODE_STOP, 4)); + +- let pad_offset = (offset + 7) / 8; ++ let pad_offset = offset.div_ceil(8); + for i in pad_offset..self.version.max_data() { + self.data[i] = PADDING[(i & 1) ^ (pad_offset & 1)]; + } +@@ -659,7 +659,7 @@ struct QrImage<'a> { + impl QrImage<'_> { + fn new<'a, 'b>(em: &'b EncodedMsg<'b>, qrdata: &'a mut [u8]) -> QrImage<'a> { + let width = em.version.width(); +- let stride = (width + 7) / 8; ++ let stride = width.div_ceil(8); + let data = qrdata; + + let mut qr_image = QrImage { +@@ -911,16 +911,16 @@ fn draw_all(&mut self, data: impl Iterator) { + /// + /// * `url`: The base URL of the QR code. It will be encoded as Binary segment. + /// * `data`: A pointer to the binary data, to be encoded. if URL is NULL, it +-/// will be encoded as binary segment, otherwise it will be encoded +-/// efficiently as a numeric segment, and appended to the URL. ++/// will be encoded as binary segment, otherwise it will be encoded ++/// efficiently as a numeric segment, and appended to the URL. + /// * `data_len`: Length of the data, that needs to be encoded, must be less +-/// than data_size. ++/// than data_size. + /// * `data_size`: Size of data buffer, it should be at least 4071 bytes to hold +-/// a V40 QR code. It will then be overwritten with the QR code image. ++/// a V40 QR code. It will then be overwritten with the QR code image. + /// * `tmp`: A temporary buffer that the QR code encoder will use, to write the +-/// segments and ECC. ++/// segments and ECC. + /// * `tmp_size`: Size of the temporary buffer, it must be at least 3706 bytes +-/// long for V40. ++/// long for V40. 
+ /// + /// # Safety + /// +diff --git a/drivers/gpu/drm/gma500/mid_bios.c b/drivers/gpu/drm/gma500/mid_bios.c +index 7e76790c6a81fa..cba97d7db131d8 100644 +--- a/drivers/gpu/drm/gma500/mid_bios.c ++++ b/drivers/gpu/drm/gma500/mid_bios.c +@@ -279,6 +279,11 @@ static void mid_get_vbt_data(struct drm_psb_private *dev_priv) + 0, PCI_DEVFN(2, 0)); + int ret = -1; + ++ if (pci_gfx_root == NULL) { ++ WARN_ON(1); ++ return; ++ } ++ + /* Get the address of the platform config vbt */ + pci_read_config_dword(pci_gfx_root, 0xFC, &addr); + pci_dev_put(pci_gfx_root); +diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c +index e0953777a2066b..b491827941f198 100644 +--- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c ++++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c +@@ -156,6 +156,7 @@ static int hyperv_vmbus_probe(struct hv_device *hdev, + return 0; + + err_free_mmio: ++ iounmap(hv->vram); + vmbus_free_mmio(hv->mem->start, hv->fb_size); + err_vmbus_close: + vmbus_close(hdev->channel); +@@ -174,6 +175,7 @@ static void hyperv_vmbus_remove(struct hv_device *hdev) + vmbus_close(hdev->channel); + hv_set_drvdata(hdev, NULL); + ++ iounmap(hv->vram); + vmbus_free_mmio(hv->mem->start, hv->fb_size); + } + +diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c +index 9d9fe11dd0557a..d188ea929080f8 100644 +--- a/drivers/gpu/drm/i915/display/intel_display.c ++++ b/drivers/gpu/drm/i915/display/intel_display.c +@@ -7840,9 +7840,6 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) + /* Now enable the clocks, plane, pipe, and connectors that we set up. */ + dev_priv->display.funcs.display->commit_modeset_enables(state); + +- if (state->modeset) +- intel_set_cdclk_post_plane_update(state); +- + intel_wait_for_vblank_workers(state); + + /* FIXME: We should call drm_atomic_helper_commit_hw_done() here +@@ -7916,6 +7913,8 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) + intel_verify_planes(state); + + intel_sagv_post_plane_update(state); ++ if (state->modeset) ++ intel_set_cdclk_post_plane_update(state); + intel_pmdemand_post_plane_update(state); + + drm_atomic_helper_commit_hw_done(&state->base); +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c +index 21274aa9bdddc1..c3dabb85796052 100644 +--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c ++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c +@@ -164,6 +164,9 @@ static unsigned int tile_row_pages(const struct drm_i915_gem_object *obj) + * 4 - Support multiple fault handlers per object depending on object's + * backing storage (a.k.a. MMAP_OFFSET). + * ++ * 5 - Support multiple partial mmaps(mmap part of BO + unmap a offset, multiple ++ * times with different size and offset). ++ * + * Restrictions: + * + * * snoopable objects cannot be accessed via the GTT. 
It can cause machine +@@ -191,7 +194,7 @@ static unsigned int tile_row_pages(const struct drm_i915_gem_object *obj) + */ + int i915_gem_mmap_gtt_version(void) + { +- return 4; ++ return 5; + } + + static inline struct i915_gtt_view +diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c +index 8d5c9c74cbb909..eac0d1d2dbda2f 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_connector.c ++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c +@@ -775,7 +775,6 @@ nouveau_connector_force(struct drm_connector *connector) + if (!nv_encoder) { + NV_ERROR(drm, "can't find encoder to force %s on!\n", + connector->name); +- connector->status = connector_status_disconnected; + return; + } + +diff --git a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c +index cbd9584af32995..383fbe128348ea 100644 +--- a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c ++++ b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c +@@ -258,15 +258,16 @@ static void drm_test_check_broadcast_rgb_crtc_mode_changed(struct kunit *test) + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; ++ + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -321,15 +322,16 @@ static void drm_test_check_broadcast_rgb_crtc_mode_not_changed(struct kunit *tes + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; ++ + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -384,18 +386,18 @@ static void drm_test_check_broadcast_rgb_auto_cea_mode(struct kunit *test) + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + KUNIT_ASSERT_TRUE(test, conn->display_info.is_hdmi); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_NE(test, drm_match_cea_mode(preferred), 1); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -450,7 +452,6 @@ static void drm_test_check_broadcast_rgb_auto_cea_mode_vic_1(struct kunit *test) + mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1); + KUNIT_ASSERT_NOT_NULL(test, mode); + +- drm = &priv->drm; + crtc = priv->crtc; + ret = light_up_connector(test, drm, crtc, conn, mode, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); +@@ -496,18 +497,18 @@ static void drm_test_check_broadcast_rgb_full_cea_mode(struct kunit *test) + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = 
&priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + KUNIT_ASSERT_TRUE(test, conn->display_info.is_hdmi); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_NE(test, drm_match_cea_mode(preferred), 1); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -564,7 +565,6 @@ static void drm_test_check_broadcast_rgb_full_cea_mode_vic_1(struct kunit *test) + mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1); + KUNIT_ASSERT_NOT_NULL(test, mode); + +- drm = &priv->drm; + crtc = priv->crtc; + ret = light_up_connector(test, drm, crtc, conn, mode, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); +@@ -612,18 +612,18 @@ static void drm_test_check_broadcast_rgb_limited_cea_mode(struct kunit *test) + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + KUNIT_ASSERT_TRUE(test, conn->display_info.is_hdmi); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_NE(test, drm_match_cea_mode(preferred), 1); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -680,7 +680,6 @@ static void drm_test_check_broadcast_rgb_limited_cea_mode_vic_1(struct kunit *te + mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1); + KUNIT_ASSERT_NOT_NULL(test, mode); + +- drm = &priv->drm; + crtc = priv->crtc; + ret = light_up_connector(test, drm, crtc, conn, mode, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); +@@ -730,20 +729,20 @@ static void drm_test_check_output_bpc_crtc_mode_changed(struct kunit *test) + 10); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz, + ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz)); + KUNIT_ASSERT_GT(test, ret, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -804,20 +803,20 @@ static void drm_test_check_output_bpc_crtc_mode_not_changed(struct kunit *test) + 10); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz, + ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz)); + KUNIT_ASSERT_GT(test, ret, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ 
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -875,6 +874,8 @@ static void drm_test_check_output_bpc_dvi(struct kunit *test) + 12); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_dvi_1080p, +@@ -884,14 +885,12 @@ static void drm_test_check_output_bpc_dvi(struct kunit *test) + info = &conn->display_info; + KUNIT_ASSERT_FALSE(test, info->is_hdmi); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -922,21 +921,21 @@ static void drm_test_check_tmds_char_rate_rgb_8bpc(struct kunit *test) + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_max_200mhz, + ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz)); + KUNIT_ASSERT_GT(test, ret, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -969,21 +968,21 @@ static void drm_test_check_tmds_char_rate_rgb_10bpc(struct kunit *test) + 10); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz, + ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz)); + KUNIT_ASSERT_GT(test, ret, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -1016,21 +1015,21 @@ static void drm_test_check_tmds_char_rate_rgb_12bpc(struct kunit *test) + 12); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz, + ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz)); + KUNIT_ASSERT_GT(test, ret, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = 
light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -1067,15 +1066,16 @@ static void drm_test_check_hdmi_funcs_reject_rate(struct kunit *test) + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; ++ + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_ASSERT_EQ(test, ret, 0); + +@@ -1123,6 +1123,8 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test) + 12); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz, +@@ -1133,9 +1135,6 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test) + KUNIT_ASSERT_TRUE(test, info->is_hdmi); + KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK); +@@ -1146,8 +1145,9 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test) + rate = drm_hdmi_compute_mode_clock(preferred, 10, HDMI_COLORSPACE_RGB); + KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_EXPECT_EQ(test, ret, 0); + +@@ -1192,6 +1192,8 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test) + 12); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz, +@@ -1202,9 +1204,6 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test) + KUNIT_ASSERT_TRUE(test, info->is_hdmi); + KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK); +@@ -1218,8 +1217,9 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test) + rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_YUV422); + KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_EXPECT_EQ(test, ret, 0); + +@@ -1266,9 +1266,6 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test) + KUNIT_ASSERT_TRUE(test, info->is_hdmi); + KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1); + KUNIT_ASSERT_NOT_NULL(test, 
mode); + +@@ -1282,7 +1279,9 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test) + rate = mode->clock * 1500; + KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000); + +- drm = &priv->drm; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + crtc = priv->crtc; + ret = light_up_connector(test, drm, crtc, conn, mode, ctx); + KUNIT_EXPECT_EQ(test, ret, 0); +@@ -1316,6 +1315,8 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test) + 12); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz, +@@ -1326,9 +1327,6 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test) + KUNIT_ASSERT_TRUE(test, info->is_hdmi); + KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +@@ -1347,8 +1345,9 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test) + rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_YUV422); + KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_EXPECT_EQ(test, ret, 0); + +@@ -1383,6 +1382,8 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test + 12); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_max_200mhz, +@@ -1393,9 +1394,6 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test + KUNIT_ASSERT_TRUE(test, info->is_hdmi); + KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +@@ -1414,8 +1412,9 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test + rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_YUV422); + KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_EXPECT_EQ(test, ret, 0); + +@@ -1449,6 +1448,8 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test + 8); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz, +@@ -1459,9 +1460,6 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test + KUNIT_ASSERT_TRUE(test, info->is_hdmi); + KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +@@ -1472,8 +1470,9 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test 
+ rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_RGB); + KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_EXPECT_EQ(test, ret, 0); + +@@ -1509,6 +1508,8 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes + 12); + KUNIT_ASSERT_NOT_NULL(test, priv); + ++ drm = &priv->drm; ++ crtc = priv->crtc; + conn = &priv->connector; + ret = set_connector_edid(test, conn, + test_edid_hdmi_1080p_rgb_max_340mhz, +@@ -1519,9 +1520,6 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes + KUNIT_ASSERT_TRUE(test, info->is_hdmi); + KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0); + +- ctx = drm_kunit_helper_acquire_ctx_alloc(test); +- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); +- + preferred = find_preferred_mode(conn); + KUNIT_ASSERT_NOT_NULL(test, preferred); + +@@ -1532,8 +1530,9 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes + rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_RGB); + KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000); + +- drm = &priv->drm; +- crtc = priv->crtc; ++ ctx = drm_kunit_helper_acquire_ctx_alloc(test); ++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx); ++ + ret = light_up_connector(test, drm, crtc, conn, preferred, ctx); + KUNIT_EXPECT_EQ(test, ret, 0); + +diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c +index 3f0977d746beeb..87dfb9f69ee5bd 100644 +--- a/drivers/gpu/drm/vkms/vkms_composer.c ++++ b/drivers/gpu/drm/vkms/vkms_composer.c +@@ -98,7 +98,7 @@ static u16 lerp_u16(u16 a, u16 b, s64 t) + + s64 delta = drm_fixp_mul(b_fp - a_fp, t); + +- return drm_fixp2int(a_fp + delta); ++ return drm_fixp2int_round(a_fp + delta); + } + + static s64 get_lut_index(const struct vkms_color_lut *lut, u16 channel_value) +diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c +index 6eabf7a9d3b072..fd9734b0c19fa3 100644 +--- a/drivers/gpu/drm/xe/xe_guc_ct.c ++++ b/drivers/gpu/drm/xe/xe_guc_ct.c +@@ -1699,9 +1699,11 @@ void xe_guc_ct_snapshot_print(struct xe_guc_ct_snapshot *snapshot, + drm_printf(p, "\tg2h outstanding: %d\n", + snapshot->g2h_outstanding); + +- if (snapshot->ctb) +- xe_print_blob_ascii85(p, "CTB data", '\n', ++ if (snapshot->ctb) { ++ drm_printf(p, "[CTB].length: 0x%zx\n", snapshot->ctb_size); ++ xe_print_blob_ascii85(p, "[CTB].data", '\n', + snapshot->ctb, 0, snapshot->ctb_size); ++ } + } else { + drm_puts(p, "CT disabled\n"); + } +diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c +index 2baa4d95571fbf..0ca3056d8bd3fa 100644 +--- a/drivers/gpu/drm/xe/xe_guc_log.c ++++ b/drivers/gpu/drm/xe/xe_guc_log.c +@@ -208,10 +208,11 @@ void xe_guc_log_snapshot_print(struct xe_guc_log_snapshot *snapshot, struct drm_ + drm_printf(p, "GuC timestamp: 0x%08llX [%llu]\n", snapshot->stamp, snapshot->stamp); + drm_printf(p, "Log level: %u\n", snapshot->level); + ++ drm_printf(p, "[LOG].length: 0x%zx\n", snapshot->size); + remain = snapshot->size; + for (i = 0; i < snapshot->num_chunks; i++) { + size_t size = min(GUC_LOG_CHUNK_SIZE, remain); +- const char *prefix = i ? NULL : "Log data"; ++ const char *prefix = i ? NULL : "[LOG].data"; + char suffix = i == snapshot->num_chunks - 1 ? 
'\n' : 0; + + xe_print_blob_ascii85(p, prefix, suffix, snapshot->copy[i], 0, size); +diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c +index 6f4a9812b4f4a7..d2b1fce121a845 100644 +--- a/drivers/gpu/drm/xe/xe_guc_submit.c ++++ b/drivers/gpu/drm/xe/xe_guc_submit.c +@@ -1236,9 +1236,11 @@ static void __guc_exec_queue_fini_async(struct work_struct *w) + xe_pm_runtime_get(guc_to_xe(guc)); + trace_xe_exec_queue_destroy(q); + ++ release_guc_id(guc, q); + if (xe_exec_queue_is_lr(q)) + cancel_work_sync(&ge->lr_tdr); +- release_guc_id(guc, q); ++ /* Confirm no work left behind accessing device structures */ ++ cancel_delayed_work_sync(&ge->sched.base.work_tdr); + xe_sched_entity_fini(&ge->entity); + xe_sched_fini(&ge->sched); + +diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c +index d7a9408b3a97c8..f6bc4f29d7538e 100644 +--- a/drivers/gpu/drm/xe/xe_hmm.c ++++ b/drivers/gpu/drm/xe/xe_hmm.c +@@ -138,13 +138,17 @@ static int xe_build_sg(struct xe_device *xe, struct hmm_range *range, + i += size; + + if (unlikely(j == st->nents - 1)) { ++ xe_assert(xe, i >= npages); + if (i > npages) + size -= (i - npages); ++ + sg_mark_end(sgl); ++ } else { ++ xe_assert(xe, i < npages); + } ++ + sg_set_page(sgl, page, size << PAGE_SHIFT, 0); + } +- xe_assert(xe, i == npages); + + return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, + DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING); +diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c +index 40f7c844ed44ae..f13bccfa09e2ca 100644 +--- a/drivers/gpu/drm/xe/xe_pm.c ++++ b/drivers/gpu/drm/xe/xe_pm.c +@@ -266,6 +266,15 @@ int xe_pm_init_early(struct xe_device *xe) + } + ALLOW_ERROR_INJECTION(xe_pm_init_early, ERRNO); /* See xe_pci_probe() */ + ++static u32 vram_threshold_value(struct xe_device *xe) ++{ ++ /* FIXME: D3Cold temporarily disabled by default on BMG */ ++ if (xe->info.platform == XE_BATTLEMAGE) ++ return 0; ++ ++ return DEFAULT_VRAM_THRESHOLD; ++} ++ + /** + * xe_pm_init - Initialize Xe Power Management + * @xe: xe device instance +@@ -276,6 +285,7 @@ ALLOW_ERROR_INJECTION(xe_pm_init_early, ERRNO); /* See xe_pci_probe() */ + */ + int xe_pm_init(struct xe_device *xe) + { ++ u32 vram_threshold; + int err; + + /* For now suspend/resume is only allowed with GuC */ +@@ -289,7 +299,8 @@ int xe_pm_init(struct xe_device *xe) + if (err) + return err; + +- err = xe_pm_set_vram_threshold(xe, DEFAULT_VRAM_THRESHOLD); ++ vram_threshold = vram_threshold_value(xe); ++ err = xe_pm_set_vram_threshold(xe, vram_threshold); + if (err) + return err; + } +diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig +index 4d2a89d65b6584..363c860835d359 100644 +--- a/drivers/hid/Kconfig ++++ b/drivers/hid/Kconfig +@@ -1167,7 +1167,8 @@ config HID_TOPRE + tristate "Topre REALFORCE keyboards" + depends on HID + help +- Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key keyboards. ++ Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key and ++ Topre REALFORCE R3S 87 key keyboards. 
+ + config HID_THINGM + tristate "ThingM blink(1) USB RGB LED" +diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c +index 7e1ae2a2bcc247..d900dd05c335c3 100644 +--- a/drivers/hid/hid-apple.c ++++ b/drivers/hid/hid-apple.c +@@ -378,6 +378,12 @@ static bool apple_is_non_apple_keyboard(struct hid_device *hdev) + return false; + } + ++static bool apple_is_omoton_kb066(struct hid_device *hdev) ++{ ++ return hdev->product == USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI && ++ strcmp(hdev->name, "Bluetooth Keyboard") == 0; ++} ++ + static inline void apple_setup_key_translation(struct input_dev *input, + const struct apple_key_translation *table) + { +@@ -474,6 +480,7 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input, + hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2015) + table = magic_keyboard_2015_fn_keys; + else if (hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021 || ++ hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024 || + hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021 || + hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021) + table = apple2021_fn_keys; +@@ -724,7 +731,7 @@ static int apple_input_configured(struct hid_device *hdev, + { + struct apple_sc *asc = hid_get_drvdata(hdev); + +- if ((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) { ++ if (((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) || apple_is_omoton_kb066(hdev)) { + hid_info(hdev, "Fn key not found (Apple Wireless Keyboard clone?), disabling Fn key handling\n"); + asc->quirks &= ~APPLE_HAS_FN; + } +@@ -1150,6 +1157,10 @@ static const struct hid_device_id apple_devices[] = { + .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY }, + { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021), + .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024), ++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY }, ++ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024), ++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK }, + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021), + .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY }, + { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021), +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index d1d479ca50a214..c4ca7a579e3d4d 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -184,6 +184,7 @@ + #define USB_DEVICE_ID_APPLE_IRCONTROL4 0x8242 + #define USB_DEVICE_ID_APPLE_IRCONTROL5 0x8243 + #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021 0x029c ++#define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024 0x0320 + #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021 0x029a + #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021 0x029f + #define USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT 0x8102 +@@ -1093,6 +1094,7 @@ + #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001 0x3001 + #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003 0x3003 + #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008 0x3008 ++#define USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473 0x5473 + + #define I2C_VENDOR_ID_RAYDIUM 0x2386 + #define I2C_PRODUCT_ID_RAYDIUM_4B33 0x4b33 +@@ -1299,6 +1301,7 @@ + #define USB_VENDOR_ID_TOPRE 0x0853 + #define USB_DEVICE_ID_TOPRE_REALFORCE_R2_108 0x0148 + #define USB_DEVICE_ID_TOPRE_REALFORCE_R2_87 0x0146 
++#define USB_DEVICE_ID_TOPRE_REALFORCE_R3S_87 0x0313 + + #define USB_VENDOR_ID_TOPSEED 0x0766 + #define USB_DEVICE_ID_TOPSEED_CYBERLINK 0x0204 +diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c +index e0bbf0c6345d68..5d7a418ccdbecf 100644 +--- a/drivers/hid/hid-quirks.c ++++ b/drivers/hid/hid-quirks.c +@@ -891,6 +891,7 @@ static const struct hid_device_id hid_ignore_list[] = { + { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DPAD) }, + #endif + { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) }, + { } + }; + +diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c +index 03e57d8acdadf4..10460b7bde1a27 100644 +--- a/drivers/hid/hid-steam.c ++++ b/drivers/hid/hid-steam.c +@@ -1051,10 +1051,10 @@ static void steam_mode_switch_cb(struct work_struct *work) + struct steam_device, mode_switch); + unsigned long flags; + bool client_opened; +- steam->gamepad_mode = !steam->gamepad_mode; + if (!lizard_mode) + return; + ++ steam->gamepad_mode = !steam->gamepad_mode; + if (steam->gamepad_mode) + steam_set_lizard_mode(steam, false); + else { +@@ -1623,7 +1623,7 @@ static void steam_do_deck_input_event(struct steam_device *steam, + schedule_delayed_work(&steam->mode_switch, 45 * HZ / 100); + } + +- if (!steam->gamepad_mode) ++ if (!steam->gamepad_mode && lizard_mode) + return; + + lpad_touched = b10 & BIT(3); +@@ -1693,7 +1693,7 @@ static void steam_do_deck_sensors_event(struct steam_device *steam, + */ + steam->sensor_timestamp_us += 4000; + +- if (!steam->gamepad_mode) ++ if (!steam->gamepad_mode && lizard_mode) + return; + + input_event(sensors, EV_MSC, MSC_TIMESTAMP, steam->sensor_timestamp_us); +diff --git a/drivers/hid/hid-topre.c b/drivers/hid/hid-topre.c +index 848361f6225df1..ccedf8721722ec 100644 +--- a/drivers/hid/hid-topre.c ++++ b/drivers/hid/hid-topre.c +@@ -29,6 +29,11 @@ static const __u8 *topre_report_fixup(struct hid_device *hdev, __u8 *rdesc, + hid_info(hdev, + "fixing up Topre REALFORCE keyboard report descriptor\n"); + rdesc[72] = 0x02; ++ } else if (*rsize >= 106 && rdesc[28] == 0x29 && rdesc[29] == 0xe7 && ++ rdesc[30] == 0x81 && rdesc[31] == 0x00) { ++ hid_info(hdev, ++ "fixing up Topre REALFORCE keyboard report descriptor\n"); ++ rdesc[31] = 0x02; + } + return rdesc; + } +@@ -38,6 +43,8 @@ static const struct hid_device_id topre_id_table[] = { + USB_DEVICE_ID_TOPRE_REALFORCE_R2_108) }, + { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE, + USB_DEVICE_ID_TOPRE_REALFORCE_R2_87) }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE, ++ USB_DEVICE_ID_TOPRE_REALFORCE_R3S_87) }, + { } + }; + MODULE_DEVICE_TABLE(hid, topre_id_table); +diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h b/drivers/hid/intel-ish-hid/ipc/hw-ish.h +index cdd80c653918b2..07e90d51f073cc 100644 +--- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h ++++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h +@@ -36,6 +36,8 @@ + #define PCI_DEVICE_ID_INTEL_ISH_ARL_H 0x7745 + #define PCI_DEVICE_ID_INTEL_ISH_ARL_S 0x7F78 + #define PCI_DEVICE_ID_INTEL_ISH_LNL_M 0xA845 ++#define PCI_DEVICE_ID_INTEL_ISH_PTL_H 0xE345 ++#define PCI_DEVICE_ID_INTEL_ISH_PTL_P 0xE445 + + #define REVISION_ID_CHT_A0 0x6 + #define REVISION_ID_CHT_Ax_SI 0x0 +diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c +index 3cd53fc80634a6..4c861119e97aa0 100644 +--- a/drivers/hid/intel-ish-hid/ipc/ipc.c ++++ b/drivers/hid/intel-ish-hid/ipc/ipc.c +@@ -517,6 +517,10 @@ static int ish_fw_reset_handler(struct 
ishtp_device *dev) + /* ISH FW is dead */ + if (!ish_is_input_ready(dev)) + return -EPIPE; ++ ++ /* Send clock sync at once after reset */ ++ ishtp_dev->prev_sync = 0; ++ + /* + * Set HOST2ISH.ILUP. Apparently we need this BEFORE sending + * RESET_NOTIFY_ACK - FW will be checking for it +@@ -577,15 +581,14 @@ static void fw_reset_work_fn(struct work_struct *work) + */ + static void _ish_sync_fw_clock(struct ishtp_device *dev) + { +- static unsigned long prev_sync; +- uint64_t usec; ++ struct ipc_time_update_msg time = {}; + +- if (prev_sync && time_before(jiffies, prev_sync + 20 * HZ)) ++ if (dev->prev_sync && time_before(jiffies, dev->prev_sync + 20 * HZ)) + return; + +- prev_sync = jiffies; +- usec = ktime_to_us(ktime_get_boottime()); +- ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &usec, sizeof(uint64_t)); ++ dev->prev_sync = jiffies; ++ /* The fields of time would be updated while sending message */ ++ ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &time, sizeof(time)); + } + + /** +diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c +index 9e2401291a2f67..ff0fc801007284 100644 +--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c ++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c +@@ -26,9 +26,11 @@ + enum ishtp_driver_data_index { + ISHTP_DRIVER_DATA_NONE, + ISHTP_DRIVER_DATA_LNL_M, ++ ISHTP_DRIVER_DATA_PTL, + }; + + #define ISH_FW_GEN_LNL_M "lnlm" ++#define ISH_FW_GEN_PTL "ptl" + + #define ISH_FIRMWARE_PATH(gen) "intel/ish/ish_" gen ".bin" + #define ISH_FIRMWARE_PATH_ALL "intel/ish/ish_*.bin" +@@ -37,6 +39,9 @@ static struct ishtp_driver_data ishtp_driver_data[] = { + [ISHTP_DRIVER_DATA_LNL_M] = { + .fw_generation = ISH_FW_GEN_LNL_M, + }, ++ [ISHTP_DRIVER_DATA_PTL] = { ++ .fw_generation = ISH_FW_GEN_PTL, ++ }, + }; + + static const struct pci_device_id ish_pci_tbl[] = { +@@ -63,6 +68,8 @@ static const struct pci_device_id ish_pci_tbl[] = { + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_H)}, + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_S)}, + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_LNL_M), .driver_data = ISHTP_DRIVER_DATA_LNL_M}, ++ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_H), .driver_data = ISHTP_DRIVER_DATA_PTL}, ++ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_P), .driver_data = ISHTP_DRIVER_DATA_PTL}, + {} + }; + MODULE_DEVICE_TABLE(pci, ish_pci_tbl); +diff --git a/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h b/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h +index effbb442c72776..dfc8cfd393532d 100644 +--- a/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h ++++ b/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h +@@ -254,6 +254,8 @@ struct ishtp_device { + unsigned int ipc_tx_cnt; + unsigned long long ipc_tx_bytes_cnt; + ++ /* Time of the last clock sync */ ++ unsigned long prev_sync; + const struct ishtp_hw_ops *ops; + size_t mtu; + uint32_t ishtp_msg_hdr; +diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c +index 0f6cd44fff2921..6e55a1a2613d31 100644 +--- a/drivers/hv/vmbus_drv.c ++++ b/drivers/hv/vmbus_drv.c +@@ -2262,12 +2262,25 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size) + struct resource *iter; + + mutex_lock(&hyperv_mmio_lock); ++ ++ /* ++ * If all bytes of the MMIO range to be released are within the ++ * special case fb_mmio shadow region, skip releasing the shadow ++ * region since no corresponding __request_region() was done ++ * in vmbus_allocate_mmio(). 
++ */ ++ if (fb_mmio && start >= fb_mmio->start && ++ (start + size - 1 <= fb_mmio->end)) ++ goto skip_shadow_release; ++ + for (iter = hyperv_mmio; iter; iter = iter->sibling) { + if ((iter->start >= start + size) || (iter->end <= start)) + continue; + + __release_region(iter, start, size); + } ++ ++skip_shadow_release: + release_mem_region(start, size); + mutex_unlock(&hyperv_mmio_lock); + +diff --git a/drivers/i2c/busses/i2c-ali1535.c b/drivers/i2c/busses/i2c-ali1535.c +index 544c94e86b8967..1eac3583804058 100644 +--- a/drivers/i2c/busses/i2c-ali1535.c ++++ b/drivers/i2c/busses/i2c-ali1535.c +@@ -485,6 +485,8 @@ MODULE_DEVICE_TABLE(pci, ali1535_ids); + + static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id) + { ++ int ret; ++ + if (ali1535_setup(dev)) { + dev_warn(&dev->dev, + "ALI1535 not detected, module not inserted.\n"); +@@ -496,7 +498,15 @@ static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id) + + snprintf(ali1535_adapter.name, sizeof(ali1535_adapter.name), + "SMBus ALI1535 adapter at %04x", ali1535_offset); +- return i2c_add_adapter(&ali1535_adapter); ++ ret = i2c_add_adapter(&ali1535_adapter); ++ if (ret) ++ goto release_region; ++ ++ return 0; ++ ++release_region: ++ release_region(ali1535_smba, ALI1535_SMB_IOSIZE); ++ return ret; + } + + static void ali1535_remove(struct pci_dev *dev) +diff --git a/drivers/i2c/busses/i2c-ali15x3.c b/drivers/i2c/busses/i2c-ali15x3.c +index 4761c720810227..418d11266671e3 100644 +--- a/drivers/i2c/busses/i2c-ali15x3.c ++++ b/drivers/i2c/busses/i2c-ali15x3.c +@@ -472,6 +472,8 @@ MODULE_DEVICE_TABLE (pci, ali15x3_ids); + + static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id) + { ++ int ret; ++ + if (ali15x3_setup(dev)) { + dev_err(&dev->dev, + "ALI15X3 not detected, module not inserted.\n"); +@@ -483,7 +485,15 @@ static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id) + + snprintf(ali15x3_adapter.name, sizeof(ali15x3_adapter.name), + "SMBus ALI15X3 adapter at %04x", ali15x3_smba); +- return i2c_add_adapter(&ali15x3_adapter); ++ ret = i2c_add_adapter(&ali15x3_adapter); ++ if (ret) ++ goto release_region; ++ ++ return 0; ++ ++release_region: ++ release_region(ali15x3_smba, ALI15X3_SMB_IOSIZE); ++ return ret; + } + + static void ali15x3_remove(struct pci_dev *dev) +diff --git a/drivers/i2c/busses/i2c-sis630.c b/drivers/i2c/busses/i2c-sis630.c +index 3505cf29cedda3..a19c3d251804d5 100644 +--- a/drivers/i2c/busses/i2c-sis630.c ++++ b/drivers/i2c/busses/i2c-sis630.c +@@ -509,6 +509,8 @@ MODULE_DEVICE_TABLE(pci, sis630_ids); + + static int sis630_probe(struct pci_dev *dev, const struct pci_device_id *id) + { ++ int ret; ++ + if (sis630_setup(dev)) { + dev_err(&dev->dev, + "SIS630 compatible bus not detected, " +@@ -522,7 +524,15 @@ static int sis630_probe(struct pci_dev *dev, const struct pci_device_id *id) + snprintf(sis630_adapter.name, sizeof(sis630_adapter.name), + "SMBus SIS630 adapter at %04x", smbus_base + SMB_STS); + +- return i2c_add_adapter(&sis630_adapter); ++ ret = i2c_add_adapter(&sis630_adapter); ++ if (ret) ++ goto release_region; ++ ++ return 0; ++ ++release_region: ++ release_region(smbus_base + SMB_STS, SIS630_SMB_IOREGION); ++ return ret; + } + + static void sis630_remove(struct pci_dev *dev) +diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c +index 8fe2a51df649ed..c33e6f33265ba0 100644 +--- a/drivers/input/joystick/xpad.c ++++ b/drivers/input/joystick/xpad.c +@@ -140,6 +140,7 @@ static const struct xpad_device { + { 
0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX }, + { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX }, + { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX }, ++ { 0x044f, 0xd01e, "ThrustMaster, Inc. ESWAP X 2 ELDEN RING EDITION", 0, XTYPE_XBOXONE }, + { 0x044f, 0x0f10, "Thrustmaster Modena GT Wheel", 0, XTYPE_XBOX }, + { 0x044f, 0xb326, "Thrustmaster Gamepad GP XID", 0, XTYPE_XBOX360 }, + { 0x045e, 0x0202, "Microsoft X-Box pad v1 (US)", 0, XTYPE_XBOX }, +@@ -177,6 +178,7 @@ static const struct xpad_device { + { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX }, + { 0x06a3, 0x0201, "Saitek Adrenalin", 0, XTYPE_XBOX }, + { 0x06a3, 0xf51a, "Saitek P3600", 0, XTYPE_XBOX360 }, ++ { 0x0738, 0x4503, "Mad Catz Racing Wheel", 0, XTYPE_XBOXONE }, + { 0x0738, 0x4506, "Mad Catz 4506 Wireless Controller", 0, XTYPE_XBOX }, + { 0x0738, 0x4516, "Mad Catz Control Pad", 0, XTYPE_XBOX }, + { 0x0738, 0x4520, "Mad Catz Control Pad Pro", 0, XTYPE_XBOX }, +@@ -238,6 +240,7 @@ static const struct xpad_device { + { 0x0e6f, 0x0146, "Rock Candy Wired Controller for Xbox One", 0, XTYPE_XBOXONE }, + { 0x0e6f, 0x0147, "PDP Marvel Xbox One Controller", 0, XTYPE_XBOXONE }, + { 0x0e6f, 0x015c, "PDP Xbox One Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", XTYPE_XBOXONE }, + { 0x0e6f, 0x0161, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, + { 0x0e6f, 0x0162, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, + { 0x0e6f, 0x0163, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, +@@ -276,12 +279,15 @@ static const struct xpad_device { + { 0x0f0d, 0x0078, "Hori Real Arcade Pro V Kai Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, + { 0x0f0d, 0x00c5, "Hori Fighting Commander ONE", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, + { 0x0f0d, 0x00dc, "HORIPAD FPS for Nintendo Switch", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, ++ { 0x0f0d, 0x0151, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE }, ++ { 0x0f0d, 0x0152, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE }, + { 0x0f30, 0x010b, "Philips Recoil", 0, XTYPE_XBOX }, + { 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX }, + { 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX }, + { 0x102c, 0xff0c, "Joytech Wireless Advanced Controller", 0, XTYPE_XBOX }, + { 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, + { 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, ++ { 0x10f5, 0x7005, "Turtle Beach Recon Controller", 0, XTYPE_XBOXONE }, + { 0x11c9, 0x55f0, "Nacon GC-100XF", 0, XTYPE_XBOX360 }, + { 0x11ff, 0x0511, "PXN V900", 0, XTYPE_XBOX360 }, + { 0x1209, 0x2882, "Ardwiino Controller", 0, XTYPE_XBOX360 }, +@@ -306,7 +312,7 @@ static const struct xpad_device { + { 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 }, + { 0x17ef, 0x6182, "Lenovo Legion Controller for Windows", 0, XTYPE_XBOX360 }, + { 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 }, +- { 0x1a86, 0xe310, "QH Electronics Controller", 0, XTYPE_XBOX360 }, ++ { 0x1a86, 0xe310, "Legion Go S", 0, XTYPE_XBOX360 }, + { 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 }, + { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, + { 0x1bad, 0x0130, "Ion Drum Rocker", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, +@@ -343,6 +349,7 @@ static const struct xpad_device { + { 0x1bad, 0xfa01, "MadCatz GamePad", 0, XTYPE_XBOX360 }, + { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 }, + { 
0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 }, ++ { 0x1ee9, 0x1590, "ZOTAC Gaming Zone", 0, XTYPE_XBOX360 }, + { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE }, + { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE }, + { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 }, +@@ -366,6 +373,7 @@ static const struct xpad_device { + { 0x24c6, 0x5510, "Hori Fighting Commander ONE (Xbox 360/PC Mode)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, + { 0x24c6, 0x551a, "PowerA FUSION Pro Controller", 0, XTYPE_XBOXONE }, + { 0x24c6, 0x561a, "PowerA FUSION Controller", 0, XTYPE_XBOXONE }, ++ { 0x24c6, 0x581a, "ThrustMaster XB1 Classic Controller", 0, XTYPE_XBOXONE }, + { 0x24c6, 0x5b00, "ThrustMaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, + { 0x24c6, 0x5b02, "Thrustmaster, Inc. GPX Controller", 0, XTYPE_XBOX360 }, + { 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, +@@ -374,10 +382,15 @@ static const struct xpad_device { + { 0x2563, 0x058d, "OneXPlayer Gamepad", 0, XTYPE_XBOX360 }, + { 0x294b, 0x3303, "Snakebyte GAMEPAD BASE X", 0, XTYPE_XBOXONE }, + { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE }, ++ { 0x2993, 0x2001, "TECNO Pocket Go", 0, XTYPE_XBOX360 }, + { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE }, + { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 }, ++ { 0x2dc8, 0x3109, "8BitDo Ultimate Wireless Bluetooth", 0, XTYPE_XBOX360 }, + { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 }, ++ { 0x2dc8, 0x6001, "8BitDo SN30 Pro", 0, XTYPE_XBOX360 }, + { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE }, ++ { 0x2e24, 0x1688, "Hyperkin X91 X-Box One pad", 0, XTYPE_XBOXONE }, ++ { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE }, + { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 }, + { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 }, + { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 }, +@@ -385,11 +398,16 @@ static const struct xpad_device { + { 0x31e3, 0x1230, "Wooting Two HE (ARM)", 0, XTYPE_XBOX360 }, + { 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 }, + { 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 }, ++ { 0x3285, 0x0603, "Nacon Pro Compact controller for Xbox", 0, XTYPE_XBOXONE }, + { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 }, ++ { 0x3285, 0x0614, "Nacon Pro Compact", 0, XTYPE_XBOXONE }, + { 0x3285, 0x0646, "Nacon Pro Compact", 0, XTYPE_XBOXONE }, ++ { 0x3285, 0x0662, "Nacon Revolution5 Pro", 0, XTYPE_XBOX360 }, + { 0x3285, 0x0663, "Nacon Evol-X", 0, XTYPE_XBOXONE }, + { 0x3537, 0x1004, "GameSir T4 Kaleid", 0, XTYPE_XBOX360 }, ++ { 0x3537, 0x1010, "GameSir G7 SE", 0, XTYPE_XBOXONE }, + { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX }, ++ { 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", 0, XTYPE_XBOX360 }, + { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX }, + { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN } + }; +@@ -488,6 +506,7 @@ static const struct usb_device_id xpad_table[] = { + XPAD_XBOX360_VENDOR(0x03f0), /* HP HyperX Xbox 360 controllers */ + XPAD_XBOXONE_VENDOR(0x03f0), /* HP HyperX Xbox One controllers */ + XPAD_XBOX360_VENDOR(0x044f), /* Thrustmaster Xbox 360 controllers */ ++ XPAD_XBOXONE_VENDOR(0x044f), /* Thrustmaster Xbox One controllers */ + XPAD_XBOX360_VENDOR(0x045e), /* Microsoft Xbox 360 controllers */ + 
XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft Xbox One controllers */ + XPAD_XBOX360_VENDOR(0x046d), /* Logitech Xbox 360-style controllers */ +@@ -519,8 +538,9 @@ static const struct usb_device_id xpad_table[] = { + XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */ + XPAD_XBOX360_VENDOR(0x17ef), /* Lenovo */ + XPAD_XBOX360_VENDOR(0x1949), /* Amazon controllers */ +- XPAD_XBOX360_VENDOR(0x1a86), /* QH Electronics */ ++ XPAD_XBOX360_VENDOR(0x1a86), /* Nanjing Qinheng Microelectronics (WCH) */ + XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band guitar and drums */ ++ XPAD_XBOX360_VENDOR(0x1ee9), /* ZOTAC Technology Limited */ + XPAD_XBOX360_VENDOR(0x20d6), /* PowerA controllers */ + XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA controllers */ + XPAD_XBOX360_VENDOR(0x2345), /* Machenike Controllers */ +@@ -528,17 +548,20 @@ static const struct usb_device_id xpad_table[] = { + XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA controllers */ + XPAD_XBOX360_VENDOR(0x2563), /* OneXPlayer Gamepad */ + XPAD_XBOX360_VENDOR(0x260d), /* Dareu H101 */ +- XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */ ++ XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */ ++ XPAD_XBOX360_VENDOR(0x2993), /* TECNO Mobile */ + XPAD_XBOX360_VENDOR(0x2c22), /* Qanba Controllers */ +- XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller */ +- XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller for Xbox */ +- XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke Xbox One pad */ +- XPAD_XBOX360_VENDOR(0x2f24), /* GameSir controllers */ ++ XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Controllers */ ++ XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Controllers */ ++ XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Controllers */ ++ XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */ ++ XPAD_XBOXONE_VENDOR(0x2e95), /* SCUF Gaming Controller */ + XPAD_XBOX360_VENDOR(0x31e3), /* Wooting Keyboards */ + XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */ + XPAD_XBOXONE_VENDOR(0x3285), /* Nacon Evol-X */ + XPAD_XBOX360_VENDOR(0x3537), /* GameSir Controllers */ + XPAD_XBOXONE_VENDOR(0x3537), /* GameSir Controllers */ ++ XPAD_XBOX360_VENDOR(0x413d), /* Black Shark Green Ghost Controller */ + { } + }; + +@@ -691,7 +714,9 @@ static const struct xboxone_init_packet xboxone_init_packets[] = { + XBOXONE_INIT_PKT(0x045e, 0x0b00, xboxone_s_init), + XBOXONE_INIT_PKT(0x045e, 0x0b00, extra_input_packet_init), + XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_led_on), ++ XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_led_on), + XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_auth), ++ XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_auth), + XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init), + XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init), + XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init), +diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c +index 22022d11470dbe..80b917944b51e7 100644 +--- a/drivers/input/misc/iqs7222.c ++++ b/drivers/input/misc/iqs7222.c +@@ -100,11 +100,11 @@ enum iqs7222_reg_key_id { + + enum iqs7222_reg_grp_id { + IQS7222_REG_GRP_STAT, +- IQS7222_REG_GRP_FILT, + IQS7222_REG_GRP_CYCLE, + IQS7222_REG_GRP_GLBL, + IQS7222_REG_GRP_BTN, + IQS7222_REG_GRP_CHAN, ++ IQS7222_REG_GRP_FILT, + IQS7222_REG_GRP_SLDR, + IQS7222_REG_GRP_TPAD, + IQS7222_REG_GRP_GPIO, +@@ -286,6 +286,7 @@ static const struct iqs7222_event_desc iqs7222_tp_events[] = { + + struct iqs7222_reg_grp_desc { + u16 base; ++ u16 val_len; + int num_row; + int num_col; + }; +@@ -342,6 +343,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + 
[IQS7222_REG_GRP_FILT] = { + .base = 0xAC00, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -400,6 +402,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xAC00, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -454,6 +457,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xC400, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -496,6 +500,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xC400, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -543,6 +548,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xAA00, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -600,6 +606,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xAA00, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -656,6 +663,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xAE00, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -712,6 +720,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xAE00, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -768,6 +777,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = { + }, + [IQS7222_REG_GRP_FILT] = { + .base = 0xAE00, ++ .val_len = 3, + .num_row = 1, + .num_col = 2, + }, +@@ -1604,7 +1614,7 @@ static int iqs7222_force_comms(struct iqs7222_private *iqs7222) + } + + static int iqs7222_read_burst(struct iqs7222_private *iqs7222, +- u16 reg, void *val, u16 num_val) ++ u16 reg, void *val, u16 val_len) + { + u8 reg_buf[sizeof(__be16)]; + int ret, i; +@@ -1619,7 +1629,7 @@ static int iqs7222_read_burst(struct iqs7222_private *iqs7222, + { + .addr = client->addr, + .flags = I2C_M_RD, +- .len = num_val * sizeof(__le16), ++ .len = val_len, + .buf = (u8 *)val, + }, + }; +@@ -1675,7 +1685,7 @@ static int iqs7222_read_word(struct iqs7222_private *iqs7222, u16 reg, u16 *val) + __le16 val_buf; + int error; + +- error = iqs7222_read_burst(iqs7222, reg, &val_buf, 1); ++ error = iqs7222_read_burst(iqs7222, reg, &val_buf, sizeof(val_buf)); + if (error) + return error; + +@@ -1685,10 +1695,9 @@ static int iqs7222_read_word(struct iqs7222_private *iqs7222, u16 reg, u16 *val) + } + + static int iqs7222_write_burst(struct iqs7222_private *iqs7222, +- u16 reg, const void *val, u16 num_val) ++ u16 reg, const void *val, u16 val_len) + { + int reg_len = reg > U8_MAX ? sizeof(reg) : sizeof(u8); +- int val_len = num_val * sizeof(__le16); + int msg_len = reg_len + val_len; + int ret, i; + struct i2c_client *client = iqs7222->client; +@@ -1747,7 +1756,7 @@ static int iqs7222_write_word(struct iqs7222_private *iqs7222, u16 reg, u16 val) + { + __le16 val_buf = cpu_to_le16(val); + +- return iqs7222_write_burst(iqs7222, reg, &val_buf, 1); ++ return iqs7222_write_burst(iqs7222, reg, &val_buf, sizeof(val_buf)); + } + + static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222) +@@ -1831,30 +1840,14 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir) + + /* + * Acknowledge reset before writing any registers in case the device +- * suffers a spurious reset during initialization. Because this step +- * may change the reserved fields of the second filter beta register, +- * its cache must be updated. 
+- * +- * Writing the second filter beta register, in turn, may clobber the +- * system status register. As such, the filter beta register pair is +- * written first to protect against this hazard. ++ * suffers a spurious reset during initialization. + */ + if (dir == WRITE) { +- u16 reg = dev_desc->reg_grps[IQS7222_REG_GRP_FILT].base + 1; +- u16 filt_setup; +- + error = iqs7222_write_word(iqs7222, IQS7222_SYS_SETUP, + iqs7222->sys_setup[0] | + IQS7222_SYS_SETUP_ACK_RESET); + if (error) + return error; +- +- error = iqs7222_read_word(iqs7222, reg, &filt_setup); +- if (error) +- return error; +- +- iqs7222->filt_setup[1] &= GENMASK(7, 0); +- iqs7222->filt_setup[1] |= (filt_setup & ~GENMASK(7, 0)); + } + + /* +@@ -1883,6 +1876,7 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir) + int num_col = dev_desc->reg_grps[i].num_col; + u16 reg = dev_desc->reg_grps[i].base; + __le16 *val_buf; ++ u16 val_len = dev_desc->reg_grps[i].val_len ? : num_col * sizeof(*val_buf); + u16 *val; + + if (!num_col) +@@ -1900,7 +1894,7 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir) + switch (dir) { + case READ: + error = iqs7222_read_burst(iqs7222, reg, +- val_buf, num_col); ++ val_buf, val_len); + for (k = 0; k < num_col; k++) + val[k] = le16_to_cpu(val_buf[k]); + break; +@@ -1909,7 +1903,7 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir) + for (k = 0; k < num_col; k++) + val_buf[k] = cpu_to_le16(val[k]); + error = iqs7222_write_burst(iqs7222, reg, +- val_buf, num_col); ++ val_buf, val_len); + break; + + default: +@@ -1962,7 +1956,7 @@ static int iqs7222_dev_info(struct iqs7222_private *iqs7222) + int error, i; + + error = iqs7222_read_burst(iqs7222, IQS7222_PROD_NUM, dev_id, +- ARRAY_SIZE(dev_id)); ++ sizeof(dev_id)); + if (error) + return error; + +@@ -2915,7 +2909,7 @@ static int iqs7222_report(struct iqs7222_private *iqs7222) + __le16 status[IQS7222_MAX_COLS_STAT]; + + error = iqs7222_read_burst(iqs7222, IQS7222_SYS_STATUS, status, +- num_stat); ++ num_stat * sizeof(*status)); + if (error) + return error; + +diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h +index 127cfdc8668a09..6ed9fc34948cbe 100644 +--- a/drivers/input/serio/i8042-acpipnpio.h ++++ b/drivers/input/serio/i8042-acpipnpio.h +@@ -1080,16 +1080,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"), + DMI_MATCH(DMI_BOARD_NAME, "AURA1501"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"), + DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* Mivvy M310 */ +@@ -1159,9 +1157,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + }, + /* + * A lot of modern Clevo barebones have touchpad and/or keyboard issues +- * after suspend fixable with nomux + reset + noloop + nopnp. Luckily, +- * none of them have an external PS/2 port so this can safely be set for +- * all of them. ++ * after suspend fixable with the forcenorestore quirk. 
+ * Clevo barebones come with board_vendor and/or system_vendor set to + * either the very generic string "Notebook" and/or a different value + * for each individual reseller. The only somewhat universal way to +@@ -1171,29 +1167,25 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "LAPQC71A"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "LAPQC71B"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "N140CU"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "N141CU"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { +@@ -1205,29 +1197,19 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { +- /* +- * Setting SERIO_QUIRK_NOMUX or SERIO_QUIRK_RESET_ALWAYS makes +- * the keyboard very laggy for ~5 seconds after boot and +- * sometimes also after resume. +- * However both are required for the keyboard to not fail +- * completely sometimes after boot or resume. 
+- */ + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "NHxxRZQ"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + /* + * At least one modern Clevo barebone has the touchpad connected both +@@ -1243,17 +1225,15 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "NS50MU"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX | +- SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP | +- SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_NOAUX | ++ SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "NS50_70MU"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX | +- SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP | +- SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_NOAUX | ++ SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { +@@ -1265,8 +1245,13 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "P640RE"), ++ }, ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* +@@ -1277,16 +1262,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_PRODUCT_NAME, "P65xH"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* Clevo P650RS, 650RP6, Sager NP8152-S, and others */ + .matches = { + DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* +@@ -1297,8 +1280,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_PRODUCT_NAME, "P65_P67H"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* +@@ -1309,8 +1291,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RP"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* +@@ -1321,8 +1302,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RS"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* +@@ -1333,22 +1313,43 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_PRODUCT_NAME, 
"P67xRP"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "PB50_70DFx,DDx"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "PB51RF"), ++ }, ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "PB71RD"), ++ }, ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "PC70DR"), ++ }, ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "PCX0DX_GN20"), ++ }, ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + /* See comment on TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU above */ + { +@@ -1361,15 +1362,13 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "X170SM"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "X170KM-G"), + }, +- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | +- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) ++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) + }, + { + /* +diff --git a/drivers/input/touchscreen/ads7846.c b/drivers/input/touchscreen/ads7846.c +index 066dc04003fa83..67264c5b49cb4a 100644 +--- a/drivers/input/touchscreen/ads7846.c ++++ b/drivers/input/touchscreen/ads7846.c +@@ -1021,7 +1021,7 @@ static int ads7846_setup_pendown(struct spi_device *spi, + if (pdata->get_pendown_state) { + ts->get_pendown_state = pdata->get_pendown_state; + } else { +- ts->gpio_pendown = gpiod_get(&spi->dev, "pendown", GPIOD_IN); ++ ts->gpio_pendown = devm_gpiod_get(&spi->dev, "pendown", GPIOD_IN); + if (IS_ERR(ts->gpio_pendown)) { + dev_err(&spi->dev, "failed to request pendown GPIO\n"); + return PTR_ERR(ts->gpio_pendown); +diff --git a/drivers/input/touchscreen/goodix_berlin_core.c b/drivers/input/touchscreen/goodix_berlin_core.c +index 3fc03cf0ca23fd..141c64675997db 100644 +--- a/drivers/input/touchscreen/goodix_berlin_core.c ++++ b/drivers/input/touchscreen/goodix_berlin_core.c +@@ -165,7 +165,7 @@ struct goodix_berlin_core { + struct device *dev; + struct regmap *regmap; + struct regulator *avdd; +- struct regulator *iovdd; ++ struct regulator *vddio; + struct gpio_desc *reset_gpio; + struct touchscreen_properties props; + struct goodix_berlin_fw_version fw_version; +@@ -248,19 +248,19 @@ static int goodix_berlin_power_on(struct goodix_berlin_core *cd) + { + int error; + +- error = regulator_enable(cd->iovdd); ++ error = regulator_enable(cd->vddio); + if (error) { +- dev_err(cd->dev, "Failed to enable iovdd: %d\n", error); ++ dev_err(cd->dev, "Failed to enable vddio: %d\n", error); + return error; + } + +- /* Vendor waits 3ms for IOVDD to settle */ ++ /* Vendor waits 3ms for VDDIO to settle */ + usleep_range(3000, 3100); + + error = 
regulator_enable(cd->avdd); + if (error) { + dev_err(cd->dev, "Failed to enable avdd: %d\n", error); +- goto err_iovdd_disable; ++ goto err_vddio_disable; + } + + /* Vendor waits 15ms for IOVDD to settle */ +@@ -283,8 +283,8 @@ static int goodix_berlin_power_on(struct goodix_berlin_core *cd) + err_dev_reset: + gpiod_set_value_cansleep(cd->reset_gpio, 1); + regulator_disable(cd->avdd); +-err_iovdd_disable: +- regulator_disable(cd->iovdd); ++err_vddio_disable: ++ regulator_disable(cd->vddio); + return error; + } + +@@ -292,7 +292,7 @@ static void goodix_berlin_power_off(struct goodix_berlin_core *cd) + { + gpiod_set_value_cansleep(cd->reset_gpio, 1); + regulator_disable(cd->avdd); +- regulator_disable(cd->iovdd); ++ regulator_disable(cd->vddio); + } + + static int goodix_berlin_read_version(struct goodix_berlin_core *cd) +@@ -744,10 +744,10 @@ int goodix_berlin_probe(struct device *dev, int irq, const struct input_id *id, + return dev_err_probe(dev, PTR_ERR(cd->avdd), + "Failed to request avdd regulator\n"); + +- cd->iovdd = devm_regulator_get(dev, "iovdd"); +- if (IS_ERR(cd->iovdd)) +- return dev_err_probe(dev, PTR_ERR(cd->iovdd), +- "Failed to request iovdd regulator\n"); ++ cd->vddio = devm_regulator_get(dev, "vddio"); ++ if (IS_ERR(cd->vddio)) ++ return dev_err_probe(dev, PTR_ERR(cd->vddio), ++ "Failed to request vddio regulator\n"); + + error = goodix_berlin_power_on(cd); + if (error) { +diff --git a/drivers/irqchip/irq-riscv-imsic-early.c b/drivers/irqchip/irq-riscv-imsic-early.c +index c5c2e6929a2f5a..275df500570576 100644 +--- a/drivers/irqchip/irq-riscv-imsic-early.c ++++ b/drivers/irqchip/irq-riscv-imsic-early.c +@@ -27,7 +27,7 @@ static void imsic_ipi_send(unsigned int cpu) + { + struct imsic_local_config *local = per_cpu_ptr(imsic->global.local, cpu); + +- writel_relaxed(IMSIC_IPI_ID, local->msi_va); ++ writel(IMSIC_IPI_ID, local->msi_va); + } + + static void imsic_ipi_starting_cpu(void) +diff --git a/drivers/irqchip/irq-thead-c900-aclint-sswi.c b/drivers/irqchip/irq-thead-c900-aclint-sswi.c +index b0e366ade4271e..8ff6e7a1363bd2 100644 +--- a/drivers/irqchip/irq-thead-c900-aclint-sswi.c ++++ b/drivers/irqchip/irq-thead-c900-aclint-sswi.c +@@ -31,7 +31,7 @@ static DEFINE_PER_CPU(void __iomem *, sswi_cpu_regs); + + static void thead_aclint_sswi_ipi_send(unsigned int cpu) + { +- writel_relaxed(0x1, per_cpu(sswi_cpu_regs, cpu)); ++ writel(0x1, per_cpu(sswi_cpu_regs, cpu)); + } + + static void thead_aclint_sswi_ipi_clear(void) +diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c +index 731467d4ed101c..b690905ab89ffb 100644 +--- a/drivers/md/dm-flakey.c ++++ b/drivers/md/dm-flakey.c +@@ -426,7 +426,7 @@ static struct bio *clone_bio(struct dm_target *ti, struct flakey_c *fc, struct b + if (!clone) + return NULL; + +- bio_init(clone, fc->dev->bdev, bio->bi_inline_vecs, nr_iovecs, bio->bi_opf); ++ bio_init(clone, fc->dev->bdev, clone->bi_inline_vecs, nr_iovecs, bio->bi_opf); + + clone->bi_iter.bi_sector = flakey_map_sector(ti, bio->bi_iter.bi_sector); + clone->bi_private = bio; +diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c +index 327b6ecdc77e00..d1b095af253bdc 100644 +--- a/drivers/net/bonding/bond_options.c ++++ b/drivers/net/bonding/bond_options.c +@@ -1242,10 +1242,28 @@ static bool slave_can_set_ns_maddr(const struct bonding *bond, struct slave *sla + slave->dev->flags & IFF_MULTICAST; + } + ++/** ++ * slave_set_ns_maddrs - add/del all NS mac addresses for slave ++ * @bond: bond device ++ * @slave: slave device ++ * @add: add or remove 
all the NS mac addresses ++ * ++ * This function tries to add or delete all the NS mac addresses on the slave ++ * ++ * Note, the IPv6 NS target address is the unicast address in Neighbor ++ * Solicitation (NS) message. The dest address of NS message should be ++ * solicited-node multicast address of the target. The dest mac of NS message ++ * is converted from the solicited-node multicast address. ++ * ++ * This function is called when ++ * * arp_validate changes ++ * * enslaving, releasing new slaves ++ */ + static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add) + { + struct in6_addr *targets = bond->params.ns_targets; + char slot_maddr[MAX_ADDR_LEN]; ++ struct in6_addr mcaddr; + int i; + + if (!slave_can_set_ns_maddr(bond, slave)) +@@ -1255,7 +1273,8 @@ static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool + if (ipv6_addr_any(&targets[i])) + break; + +- if (!ndisc_mc_map(&targets[i], slot_maddr, slave->dev, 0)) { ++ addrconf_addr_solict_mult(&targets[i], &mcaddr); ++ if (!ndisc_mc_map(&mcaddr, slot_maddr, slave->dev, 0)) { + if (add) + dev_mc_add(slave->dev, slot_maddr); + else +@@ -1278,23 +1297,43 @@ void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave) + slave_set_ns_maddrs(bond, slave, false); + } + ++/** ++ * slave_set_ns_maddr - set new NS mac address for slave ++ * @bond: bond device ++ * @slave: slave device ++ * @target: the new IPv6 target ++ * @slot: the old IPv6 target in the slot ++ * ++ * This function tries to replace the old mac address to new one on the slave. ++ * ++ * Note, the target/slot IPv6 address is the unicast address in Neighbor ++ * Solicitation (NS) message. The dest address of NS message should be ++ * solicited-node multicast address of the target. The dest mac of NS message ++ * is converted from the solicited-node multicast address. ++ * ++ * This function is called when ++ * * An IPv6 NS target is added or removed. 
++ */ + static void slave_set_ns_maddr(struct bonding *bond, struct slave *slave, + struct in6_addr *target, struct in6_addr *slot) + { +- char target_maddr[MAX_ADDR_LEN], slot_maddr[MAX_ADDR_LEN]; ++ char mac_addr[MAX_ADDR_LEN]; ++ struct in6_addr mcast_addr; + + if (!bond->params.arp_validate || !slave_can_set_ns_maddr(bond, slave)) + return; + +- /* remove the previous maddr from slave */ ++ /* remove the previous mac addr from slave */ ++ addrconf_addr_solict_mult(slot, &mcast_addr); + if (!ipv6_addr_any(slot) && +- !ndisc_mc_map(slot, slot_maddr, slave->dev, 0)) +- dev_mc_del(slave->dev, slot_maddr); ++ !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0)) ++ dev_mc_del(slave->dev, mac_addr); + +- /* add new maddr on slave if target is set */ ++ /* add new mac addr on slave if target is set */ ++ addrconf_addr_solict_mult(target, &mcast_addr); + if (!ipv6_addr_any(target) && +- !ndisc_mc_map(target, target_maddr, slave->dev, 0)) +- dev_mc_add(slave->dev, target_maddr); ++ !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0)) ++ dev_mc_add(slave->dev, mac_addr); + } + + static void _bond_options_ns_ip6_target_set(struct bonding *bond, int slot, +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c +index 3a792f79270d99..d6e8398c07608f 100644 +--- a/drivers/net/dsa/mv88e6xxx/chip.c ++++ b/drivers/net/dsa/mv88e6xxx/chip.c +@@ -2224,13 +2224,11 @@ mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port, + return err; + } + +-static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, +- const unsigned char *addr, u16 vid, +- u8 state) ++static int mv88e6xxx_port_db_get(struct mv88e6xxx_chip *chip, ++ const unsigned char *addr, u16 vid, ++ u16 *fid, struct mv88e6xxx_atu_entry *entry) + { +- struct mv88e6xxx_atu_entry entry; + struct mv88e6xxx_vtu_entry vlan; +- u16 fid; + int err; + + /* Ports have two private address databases: one for when the port is +@@ -2241,7 +2239,7 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, + * VLAN ID into the port's database used for VLAN-unaware bridging. 
+ */ + if (vid == 0) { +- fid = MV88E6XXX_FID_BRIDGED; ++ *fid = MV88E6XXX_FID_BRIDGED; + } else { + err = mv88e6xxx_vtu_get(chip, vid, &vlan); + if (err) +@@ -2251,14 +2249,39 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, + if (!vlan.valid) + return -EOPNOTSUPP; + +- fid = vlan.fid; ++ *fid = vlan.fid; + } + +- entry.state = 0; +- ether_addr_copy(entry.mac, addr); +- eth_addr_dec(entry.mac); ++ entry->state = 0; ++ ether_addr_copy(entry->mac, addr); ++ eth_addr_dec(entry->mac); ++ ++ return mv88e6xxx_g1_atu_getnext(chip, *fid, entry); ++} ++ ++static bool mv88e6xxx_port_db_find(struct mv88e6xxx_chip *chip, ++ const unsigned char *addr, u16 vid) ++{ ++ struct mv88e6xxx_atu_entry entry; ++ u16 fid; ++ int err; + +- err = mv88e6xxx_g1_atu_getnext(chip, fid, &entry); ++ err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry); ++ if (err) ++ return false; ++ ++ return entry.state && ether_addr_equal(entry.mac, addr); ++} ++ ++static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, ++ const unsigned char *addr, u16 vid, ++ u8 state) ++{ ++ struct mv88e6xxx_atu_entry entry; ++ u16 fid; ++ int err; ++ ++ err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry); + if (err) + return err; + +@@ -2862,6 +2885,13 @@ static int mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port, + mv88e6xxx_reg_lock(chip); + err = mv88e6xxx_port_db_load_purge(chip, port, addr, vid, + MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC); ++ if (err) ++ goto out; ++ ++ if (!mv88e6xxx_port_db_find(chip, addr, vid)) ++ err = -ENOSPC; ++ ++out: + mv88e6xxx_reg_unlock(chip); + + return err; +@@ -6596,6 +6626,13 @@ static int mv88e6xxx_port_mdb_add(struct dsa_switch *ds, int port, + mv88e6xxx_reg_lock(chip); + err = mv88e6xxx_port_db_load_purge(chip, port, mdb->addr, mdb->vid, + MV88E6XXX_G1_ATU_DATA_STATE_MC_STATIC); ++ if (err) ++ goto out; ++ ++ if (!mv88e6xxx_port_db_find(chip, mdb->addr, mdb->vid)) ++ err = -ENOSPC; ++ ++out: + mv88e6xxx_reg_unlock(chip); + + return err; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index b6f844cac80eb8..a79c78061d1d08 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -2039,6 +2039,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, + struct rx_cmp_ext *rxcmp1; + u32 tmp_raw_cons = *raw_cons; + u16 cons, prod, cp_cons = RING_CMP(tmp_raw_cons); ++ struct skb_shared_info *sinfo; + struct bnxt_sw_rx_bd *rx_buf; + unsigned int len; + u8 *data_ptr, agg_bufs, cmp_type; +@@ -2165,6 +2166,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, + false); + if (!frag_len) + goto oom_next_rx; ++ + } + xdp_active = true; + } +@@ -2174,6 +2176,12 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, + rc = 1; + goto next_rx; + } ++ if (xdp_buff_has_frags(&xdp)) { ++ sinfo = xdp_get_shared_info_from_buff(&xdp); ++ agg_bufs = sinfo->nr_frags; ++ } else { ++ agg_bufs = 0; ++ } + } + + if (len <= bp->rx_copy_thresh) { +@@ -2211,7 +2219,8 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, + if (!skb) + goto oom_next_rx; + } else { +- skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, rxr->page_pool, &xdp, rxcmp1); ++ skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, ++ rxr->page_pool, &xdp); + if (!skb) { + /* we should be able to free the old skb here */ + bnxt_xdp_buff_frags_free(rxr, &xdp); +@@ -15322,6 +15331,9 @@ static void bnxt_get_queue_stats_rx(struct 
net_device *dev, int i, + struct bnxt_cp_ring_info *cpr; + u64 *sw; + ++ if (!bp->bnapi) ++ return; ++ + cpr = &bp->bnapi[i]->cp_ring; + sw = cpr->stats.sw_stats; + +@@ -15345,6 +15357,9 @@ static void bnxt_get_queue_stats_tx(struct net_device *dev, int i, + struct bnxt_napi *bnapi; + u64 *sw; + ++ if (!bp->tx_ring) ++ return; ++ + bnapi = bp->tx_ring[bp->tx_ring_map[i]].bnapi; + sw = bnapi->cp_ring.stats.sw_stats; + +@@ -15386,6 +15401,9 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx) + struct bnxt_ring_struct *ring; + int rc; + ++ if (!bp->rx_ring) ++ return -ENETDOWN; ++ + rxr = &bp->rx_ring[idx]; + clone = qmem; + memcpy(clone, rxr, sizeof(*rxr)); +@@ -15468,6 +15486,7 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem) + struct bnxt_ring_struct *ring; + + bnxt_free_one_rx_ring_skbs(bp, rxr); ++ bnxt_free_one_tpa_info(bp, rxr); + + xdp_rxq_info_unreg(&rxr->xdp_rxq); + +@@ -15579,7 +15598,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx) + cpr = &rxr->bnapi->cp_ring; + cpr->sw_stats->rx.rx_resets++; + +- for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) { ++ for (i = 0; i <= bp->nr_vnics; i++) { + vnic = &bp->vnic_info[i]; + + rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true); +@@ -15607,7 +15626,7 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx) + struct bnxt_vnic_info *vnic; + int i; + +- for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) { ++ for (i = 0; i <= bp->nr_vnics; i++) { + vnic = &bp->vnic_info[i]; + vnic->mru = 0; + bnxt_hwrm_vnic_update(bp, vnic, +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +index dc51dce209d5f0..8726657f5cb9e0 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +@@ -456,23 +456,16 @@ int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp) + + struct sk_buff * + bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags, +- struct page_pool *pool, struct xdp_buff *xdp, +- struct rx_cmp_ext *rxcmp1) ++ struct page_pool *pool, struct xdp_buff *xdp) + { + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + + if (!skb) + return NULL; +- skb_checksum_none_assert(skb); +- if (RX_CMP_L4_CS_OK(rxcmp1)) { +- if (bp->dev->features & NETIF_F_RXCSUM) { +- skb->ip_summed = CHECKSUM_UNNECESSARY; +- skb->csum_level = RX_CMP_ENCAP(rxcmp1); +- } +- } ++ + xdp_update_skb_shared_info(skb, num_frags, + sinfo->xdp_frags_size, +- BNXT_RX_PAGE_SIZE * sinfo->nr_frags, ++ BNXT_RX_PAGE_SIZE * num_frags, + xdp_buff_is_frag_pfmemalloc(xdp)); + return skb; + } +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h +index 0122782400b8a2..220285e190fcd1 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h +@@ -33,6 +33,5 @@ void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr, + struct xdp_buff *xdp); + struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, + u8 num_frags, struct page_pool *pool, +- struct xdp_buff *xdp, +- struct rx_cmp_ext *rxcmp1); ++ struct xdp_buff *xdp); + #endif +diff --git a/drivers/net/ethernet/intel/ice/ice_arfs.c b/drivers/net/ethernet/intel/ice/ice_arfs.c +index 7cee365cc7d167..405ddd17de1bff 100644 +--- a/drivers/net/ethernet/intel/ice/ice_arfs.c ++++ b/drivers/net/ethernet/intel/ice/ice_arfs.c +@@ -511,7 +511,7 @@ void ice_init_arfs(struct ice_vsi *vsi) + struct hlist_head 
*arfs_fltr_list; + unsigned int i; + +- if (!vsi || vsi->type != ICE_VSI_PF) ++ if (!vsi || vsi->type != ICE_VSI_PF || ice_is_arfs_active(vsi)) + return; + + arfs_fltr_list = kcalloc(ICE_MAX_ARFS_LIST, sizeof(*arfs_fltr_list), +diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c +index d649c197cf673f..ed21d7f55ac11b 100644 +--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c ++++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c +@@ -49,9 +49,6 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) + if (vlan_ops->dis_rx_filtering(uplink_vsi)) + goto err_vlan_filtering; + +- if (ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_set_allow_override)) +- goto err_override_uplink; +- + if (ice_vsi_update_local_lb(uplink_vsi, true)) + goto err_override_local_lb; + +@@ -63,8 +60,6 @@ static int ice_eswitch_setup_env(struct ice_pf *pf) + err_up: + ice_vsi_update_local_lb(uplink_vsi, false); + err_override_local_lb: +- ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override); +-err_override_uplink: + vlan_ops->ena_rx_filtering(uplink_vsi); + err_vlan_filtering: + ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, false, +@@ -275,7 +270,6 @@ static void ice_eswitch_release_env(struct ice_pf *pf) + vlan_ops = ice_get_compat_vsi_vlan_ops(uplink_vsi); + + ice_vsi_update_local_lb(uplink_vsi, false); +- ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override); + vlan_ops->ena_rx_filtering(uplink_vsi); + ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, false, + ICE_FLTR_TX); +diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c +index 1ccb572ce285df..22371011c24928 100644 +--- a/drivers/net/ethernet/intel/ice/ice_lag.c ++++ b/drivers/net/ethernet/intel/ice/ice_lag.c +@@ -1000,6 +1000,28 @@ static void ice_lag_link(struct ice_lag *lag) + netdev_info(lag->netdev, "Shared SR-IOV resources in bond are active\n"); + } + ++/** ++ * ice_lag_config_eswitch - configure eswitch to work with LAG ++ * @lag: lag info struct ++ * @netdev: active network interface device struct ++ * ++ * Updates all port representors in eswitch to use @netdev for Tx. ++ * ++ * Configures the netdev to keep dst metadata (also used in representor Tx). ++ * This is required for an uplink without switchdev mode configured. ++ */ ++static void ice_lag_config_eswitch(struct ice_lag *lag, ++ struct net_device *netdev) ++{ ++ struct ice_repr *repr; ++ unsigned long id; ++ ++ xa_for_each(&lag->pf->eswitch.reprs, id, repr) ++ repr->dst->u.port_info.lower_dev = netdev; ++ ++ netif_keep_dst(netdev); ++} ++ + /** + * ice_lag_unlink - handle unlink event + * @lag: LAG info struct +@@ -1021,6 +1043,9 @@ static void ice_lag_unlink(struct ice_lag *lag) + ice_lag_move_vf_nodes(lag, act_port, pri_port); + lag->primary = false; + lag->active_port = ICE_LAG_INVALID_PORT; ++ ++ /* Config primary's eswitch back to normal operation. 
*/ ++ ice_lag_config_eswitch(lag, lag->netdev); + } else { + struct ice_lag *primary_lag; + +@@ -1419,6 +1444,7 @@ static void ice_lag_monitor_active(struct ice_lag *lag, void *ptr) + ice_lag_move_vf_nodes(lag, prim_port, + event_port); + lag->active_port = event_port; ++ ice_lag_config_eswitch(lag, event_netdev); + return; + } + +@@ -1428,6 +1454,7 @@ static void ice_lag_monitor_active(struct ice_lag *lag, void *ptr) + /* new active port */ + ice_lag_move_vf_nodes(lag, lag->active_port, event_port); + lag->active_port = event_port; ++ ice_lag_config_eswitch(lag, event_netdev); + } else { + /* port not set as currently active (e.g. new active port + * has already claimed the nodes and filters +diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c +index a7d45a8ce7ac00..e07fc8851e1dc0 100644 +--- a/drivers/net/ethernet/intel/ice/ice_lib.c ++++ b/drivers/net/ethernet/intel/ice/ice_lib.c +@@ -3930,24 +3930,6 @@ void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx) + ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S); + } + +-/** +- * ice_vsi_ctx_set_allow_override - allow destination override on VSI +- * @ctx: pointer to VSI ctx structure +- */ +-void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx) +-{ +- ctx->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD; +-} +- +-/** +- * ice_vsi_ctx_clear_allow_override - turn off destination override on VSI +- * @ctx: pointer to VSI ctx structure +- */ +-void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx) +-{ +- ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD; +-} +- + /** + * ice_vsi_update_local_lb - update sw block in VSI with local loopback bit + * @vsi: pointer to VSI structure +diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h +index 10d6fc479a3211..6085039bac952b 100644 +--- a/drivers/net/ethernet/intel/ice/ice_lib.h ++++ b/drivers/net/ethernet/intel/ice/ice_lib.h +@@ -104,10 +104,6 @@ ice_vsi_update_security(struct ice_vsi *vsi, void (*fill)(struct ice_vsi_ctx *)) + void ice_vsi_ctx_set_antispoof(struct ice_vsi_ctx *ctx); + + void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx); +- +-void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); +- +-void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); + int ice_vsi_update_local_lb(struct ice_vsi *vsi, bool set); + int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); + int ice_vsi_del_vlan_zero(struct ice_vsi *vsi); +diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c +index 9c9ea4c1b93b7f..380ba1e8b3b2c8 100644 +--- a/drivers/net/ethernet/intel/ice/ice_txrx.c ++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c +@@ -2424,7 +2424,9 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring) + ICE_TXD_CTX_QW1_CMD_S); + + ice_tstamp(tx_ring, skb, first, &offload); +- if (ice_is_switchdev_running(vsi->back) && vsi->type != ICE_VSI_SF) ++ if ((ice_is_switchdev_running(vsi->back) || ++ ice_lag_is_switchdev_running(vsi->back)) && ++ vsi->type != ICE_VSI_SF) + ice_eswitch_set_target_vsi(skb, &offload); + + if (offload.cd_qw1 & ICE_TX_DESC_DTYPE_CTX) { +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c +index 98d4306929f3ed..a2cf3e79693dd8 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c +@@ -46,6 +46,9 @@ mlx5_devlink_info_get(struct devlink *devlink, struct devlink_info_req *req, + u32 
running_fw, stored_fw; + int err; + ++ if (!mlx5_core_is_pf(dev)) ++ return 0; ++ + err = devlink_info_version_fixed_put(req, "fw.psid", dev->board_id); + if (err) + return err; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +index 5d128c5b4529af..0f5d7ea8956f72 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +@@ -48,15 +48,10 @@ mlx5_esw_bridge_lag_rep_get(struct net_device *dev, struct mlx5_eswitch *esw) + struct list_head *iter; + + netdev_for_each_lower_dev(dev, lower, iter) { +- struct mlx5_core_dev *mdev; +- struct mlx5e_priv *priv; +- + if (!mlx5e_eswitch_rep(lower)) + continue; + +- priv = netdev_priv(lower); +- mdev = priv->mdev; +- if (mlx5_lag_is_shared_fdb(mdev) && mlx5_esw_bridge_dev_same_esw(lower, esw)) ++ if (mlx5_esw_bridge_dev_same_esw(lower, esw)) + return lower; + } + +@@ -125,7 +120,7 @@ static bool mlx5_esw_bridge_is_local(struct net_device *dev, struct net_device * + priv = netdev_priv(rep); + mdev = priv->mdev; + if (netif_is_lag_master(dev)) +- return mlx5_lag_is_shared_fdb(mdev) && mlx5_lag_is_master(mdev); ++ return mlx5_lag_is_master(mdev); + return true; + } + +@@ -455,6 +450,9 @@ static int mlx5_esw_bridge_switchdev_event(struct notifier_block *nb, + if (!rep) + return NOTIFY_DONE; + ++ if (netif_is_lag_master(dev) && !mlx5_lag_is_shared_fdb(esw->dev)) ++ return NOTIFY_DONE; ++ + switch (event) { + case SWITCHDEV_FDB_ADD_TO_BRIDGE: + fdb_info = container_of(info, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index cb93f46eaa7c39..e02bb589104a53 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -5131,11 +5131,9 @@ static int mlx5e_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, + struct mlx5e_priv *priv = netdev_priv(dev); + struct mlx5_core_dev *mdev = priv->mdev; + u8 mode, setting; +- int err; + +- err = mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting); +- if (err) +- return err; ++ if (mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting)) ++ return -EOPNOTSUPP; + mode = setting ? 
BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB; + return ndo_dflt_bridge_getlink(skb, pid, seq, dev, + mode, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c +index 2b229b6226c6a7..dfb079e59d8588 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c +@@ -871,8 +871,8 @@ static void comp_irq_release_sf(struct mlx5_core_dev *dev, u16 vecidx) + + static int comp_irq_request_sf(struct mlx5_core_dev *dev, u16 vecidx) + { ++ struct mlx5_irq_pool *pool = mlx5_irq_table_get_comp_irq_pool(dev); + struct mlx5_eq_table *table = dev->priv.eq_table; +- struct mlx5_irq_pool *pool = mlx5_irq_pool_get(dev); + struct irq_affinity_desc af_desc = {}; + struct mlx5_irq *irq; + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c +index 1477db7f5307e0..2691d88cdee1f7 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c +@@ -175,7 +175,7 @@ mlx5_irq_affinity_request(struct mlx5_core_dev *dev, struct mlx5_irq_pool *pool, + + void mlx5_irq_affinity_irq_release(struct mlx5_core_dev *dev, struct mlx5_irq *irq) + { +- struct mlx5_irq_pool *pool = mlx5_irq_pool_get(dev); ++ struct mlx5_irq_pool *pool = mlx5_irq_get_pool(irq); + int cpu; + + cpu = cpumask_first(mlx5_irq_get_affinity_mask(irq)); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c +index 7f68468c2e7598..4b3da7ebd6310e 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c +@@ -859,7 +859,7 @@ void mlx5_disable_lag(struct mlx5_lag *ldev) + mlx5_eswitch_reload_ib_reps(ldev->pf[i].dev->priv.eswitch); + } + +-static bool mlx5_shared_fdb_supported(struct mlx5_lag *ldev) ++bool mlx5_lag_shared_fdb_supported(struct mlx5_lag *ldev) + { + struct mlx5_core_dev *dev; + int i; +@@ -937,7 +937,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev) + } + + if (do_bond && !__mlx5_lag_is_active(ldev)) { +- bool shared_fdb = mlx5_shared_fdb_supported(ldev); ++ bool shared_fdb = mlx5_lag_shared_fdb_supported(ldev); + + roce_lag = mlx5_lag_is_roce_lag(ldev); + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h +index 50fcb1eee57483..48a5f3e7b91a85 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h +@@ -92,6 +92,7 @@ mlx5_lag_is_ready(struct mlx5_lag *ldev) + return test_bit(MLX5_LAG_FLAG_NDEVS_READY, &ldev->state_flags); + } + ++bool mlx5_lag_shared_fdb_supported(struct mlx5_lag *ldev); + bool mlx5_lag_check_prereq(struct mlx5_lag *ldev); + void mlx5_modify_lag(struct mlx5_lag *ldev, + struct lag_tracker *tracker); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c +index 571ea26edd0cab..2381a0eec19006 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c +@@ -81,7 +81,8 @@ static int enable_mpesw(struct mlx5_lag *ldev) + if (mlx5_eswitch_mode(dev0) != MLX5_ESWITCH_OFFLOADS || + !MLX5_CAP_PORT_SELECTION(dev0, port_select_flow_table) || + !MLX5_CAP_GEN(dev0, create_lag_when_not_master_up) || +- !mlx5_lag_check_prereq(ldev)) ++ !mlx5_lag_check_prereq(ldev) || ++ !mlx5_lag_shared_fdb_supported(ldev)) + return -EOPNOTSUPP; + + err = 
mlx5_mpesw_metadata_set(ldev); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +index a80ecb672f33dd..711d14dea2485f 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c +@@ -196,6 +196,11 @@ mlx5_chains_create_table(struct mlx5_fs_chains *chains, + ns = mlx5_get_flow_namespace(chains->dev, chains->ns); + } + ++ if (!ns) { ++ mlx5_core_warn(chains->dev, "Failed to get flow namespace\n"); ++ return ERR_PTR(-EOPNOTSUPP); ++ } ++ + ft_attr.autogroup.num_reserved_entries = 2; + ft_attr.autogroup.max_num_groups = chains->group_num; + ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h +index 0881e961d8b177..586688da9940ee 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h +@@ -10,12 +10,15 @@ + + struct mlx5_irq; + struct cpu_rmap; ++struct mlx5_irq_pool; + + int mlx5_irq_table_init(struct mlx5_core_dev *dev); + void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev); + int mlx5_irq_table_create(struct mlx5_core_dev *dev); + void mlx5_irq_table_destroy(struct mlx5_core_dev *dev); + void mlx5_irq_table_free_irqs(struct mlx5_core_dev *dev); ++struct mlx5_irq_pool * ++mlx5_irq_table_get_comp_irq_pool(struct mlx5_core_dev *dev); + int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table); + int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table); + struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev); +@@ -38,7 +41,6 @@ struct cpumask *mlx5_irq_get_affinity_mask(struct mlx5_irq *irq); + int mlx5_irq_get_index(struct mlx5_irq *irq); + int mlx5_irq_get_irq(const struct mlx5_irq *irq); + +-struct mlx5_irq_pool; + #ifdef CONFIG_MLX5_SF + struct mlx5_irq *mlx5_irq_affinity_irq_request_auto(struct mlx5_core_dev *dev, + struct cpumask *used_cpus, u16 vecidx); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c +index d9362eabc6a1ca..2c5f850c31f683 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c +@@ -378,6 +378,11 @@ int mlx5_irq_get_index(struct mlx5_irq *irq) + return irq->map.index; + } + ++struct mlx5_irq_pool *mlx5_irq_get_pool(struct mlx5_irq *irq) ++{ ++ return irq->pool; ++} ++ + /* irq_pool API */ + + /* requesting an irq from a given pool according to given index */ +@@ -405,18 +410,20 @@ static struct mlx5_irq_pool *sf_ctrl_irq_pool_get(struct mlx5_irq_table *irq_tab + return irq_table->sf_ctrl_pool; + } + +-static struct mlx5_irq_pool *sf_irq_pool_get(struct mlx5_irq_table *irq_table) ++static struct mlx5_irq_pool * ++sf_comp_irq_pool_get(struct mlx5_irq_table *irq_table) + { + return irq_table->sf_comp_pool; + } + +-struct mlx5_irq_pool *mlx5_irq_pool_get(struct mlx5_core_dev *dev) ++struct mlx5_irq_pool * ++mlx5_irq_table_get_comp_irq_pool(struct mlx5_core_dev *dev) + { + struct mlx5_irq_table *irq_table = mlx5_irq_table_get(dev); + struct mlx5_irq_pool *pool = NULL; + + if (mlx5_core_is_sf(dev)) +- pool = sf_irq_pool_get(irq_table); ++ pool = sf_comp_irq_pool_get(irq_table); + + /* In some configs, there won't be a pool of SFs IRQs. Hence, returning + * the PF IRQs pool in case the SF pool doesn't exist. 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h +index c4d377f8df3089..cc064425fe1608 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h +@@ -28,7 +28,6 @@ struct mlx5_irq_pool { + struct mlx5_core_dev *dev; + }; + +-struct mlx5_irq_pool *mlx5_irq_pool_get(struct mlx5_core_dev *dev); + static inline bool mlx5_irq_pool_is_sf_pool(struct mlx5_irq_pool *pool) + { + return !strncmp("mlx5_sf", pool->name, strlen("mlx5_sf")); +@@ -40,5 +39,6 @@ struct mlx5_irq *mlx5_irq_alloc(struct mlx5_irq_pool *pool, int i, + int mlx5_irq_get_locked(struct mlx5_irq *irq); + int mlx5_irq_read_locked(struct mlx5_irq *irq); + int mlx5_irq_put(struct mlx5_irq *irq); ++struct mlx5_irq_pool *mlx5_irq_get_pool(struct mlx5_irq *irq); + + #endif /* __PCI_IRQ_H__ */ +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h +index 655fa7a22d84f6..7c00740f1d1305 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h +@@ -16,8 +16,8 @@ struct mlx5hws_bwc_matcher { + struct mlx5hws_matcher *matcher; + struct mlx5hws_match_template *mt; + struct mlx5hws_action_template *at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM]; ++ u32 priority; + u8 num_of_at; +- u16 priority; + u8 size_log; + atomic_t num_of_rules; + struct list_head *rules; +diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c +index be95336ce089aa..11457b6296cc01 100644 +--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c ++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c +@@ -1547,6 +1547,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + * adapter-MTU file and apc->mana_pci_debugfs folder. + */ + debugfs_remove_recursive(gc->mana_pci_debugfs); ++ gc->mana_pci_debugfs = NULL; + pci_iounmap(pdev, bar0_va); + free_gc: + pci_set_drvdata(pdev, NULL); +@@ -1569,6 +1570,8 @@ static void mana_gd_remove(struct pci_dev *pdev) + + debugfs_remove_recursive(gc->mana_pci_debugfs); + ++ gc->mana_pci_debugfs = NULL; ++ + pci_iounmap(pdev, gc->bar0_va); + + vfree(gc); +@@ -1622,6 +1625,8 @@ static void mana_gd_shutdown(struct pci_dev *pdev) + + debugfs_remove_recursive(gc->mana_pci_debugfs); + ++ gc->mana_pci_debugfs = NULL; ++ + pci_disable_device(pdev); + } + +@@ -1648,8 +1653,10 @@ static int __init mana_driver_init(void) + mana_debugfs_root = debugfs_create_dir("mana", NULL); + + err = pci_register_driver(&mana_driver); +- if (err) ++ if (err) { + debugfs_remove(mana_debugfs_root); ++ mana_debugfs_root = NULL; ++ } + + return err; + } +@@ -1659,6 +1666,8 @@ static void __exit mana_driver_exit(void) + pci_unregister_driver(&mana_driver); + + debugfs_remove(mana_debugfs_root); ++ ++ mana_debugfs_root = NULL; + } + + module_init(mana_driver_init); +diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c +index aa1e47233fe50c..ae76ecc7a5d36c 100644 +--- a/drivers/net/ethernet/microsoft/mana/mana_en.c ++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c +@@ -738,12 +738,11 @@ static const struct net_device_ops mana_devops = { + static void mana_cleanup_port_context(struct mana_port_context *apc) + { + /* +- * at this point all dir/files under the vport directory +- * are already cleaned up. 
+- * We are sure the apc->mana_port_debugfs remove will not +- * cause any freed memory access issues ++ * make sure subsequent cleanup attempts don't end up removing already ++ * cleaned dentry pointer + */ + debugfs_remove(apc->mana_port_debugfs); ++ apc->mana_port_debugfs = NULL; + kfree(apc->rxqs); + apc->rxqs = NULL; + } +@@ -1254,6 +1253,7 @@ static void mana_destroy_eq(struct mana_context *ac) + return; + + debugfs_remove_recursive(ac->mana_eqs_debugfs); ++ ac->mana_eqs_debugfs = NULL; + + for (i = 0; i < gc->max_num_queues; i++) { + eq = ac->eqs[i].eq; +@@ -1914,6 +1914,7 @@ static void mana_destroy_txq(struct mana_port_context *apc) + + for (i = 0; i < apc->num_queues; i++) { + debugfs_remove_recursive(apc->tx_qp[i].mana_tx_debugfs); ++ apc->tx_qp[i].mana_tx_debugfs = NULL; + + napi = &apc->tx_qp[i].tx_cq.napi; + if (apc->tx_qp[i].txq.napi_initialized) { +@@ -2099,6 +2100,7 @@ static void mana_destroy_rxq(struct mana_port_context *apc, + return; + + debugfs_remove_recursive(rxq->mana_rx_debugfs); ++ rxq->mana_rx_debugfs = NULL; + + napi = &rxq->rx_cq.napi; + +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c +index f9dd50152b1e3e..28d24d59efb84f 100644 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c +@@ -454,8 +454,10 @@ static int qlcnic_sriov_set_guest_vlan_mode(struct qlcnic_adapter *adapter, + + num_vlans = sriov->num_allowed_vlans; + sriov->allowed_vlans = kcalloc(num_vlans, sizeof(u16), GFP_KERNEL); +- if (!sriov->allowed_vlans) ++ if (!sriov->allowed_vlans) { ++ qlcnic_sriov_free_vlans(adapter); + return -ENOMEM; ++ } + + vlans = (u16 *)&cmd->rsp.arg[3]; + for (i = 0; i < num_vlans; i++) +@@ -2167,8 +2169,10 @@ int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter) + vf = &sriov->vf_info[i]; + vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans, + sizeof(*vf->sriov_vlans), GFP_KERNEL); +- if (!vf->sriov_vlans) ++ if (!vf->sriov_vlans) { ++ qlcnic_sriov_free_vlans(adapter); + return -ENOMEM; ++ } + } + + return 0; +diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c +index c42c0516656b8c..bb8f1bc215cdd4 100644 +--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c ++++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c +@@ -1501,7 +1501,10 @@ static void rtase_wait_for_quiescence(const struct net_device *dev) + static void rtase_sw_reset(struct net_device *dev) + { + struct rtase_private *tp = netdev_priv(dev); ++ struct rtase_ring *ring, *tmp; ++ struct rtase_int_vector *ivec; + int ret; ++ u32 i; + + netif_stop_queue(dev); + netif_carrier_off(dev); +@@ -1512,6 +1515,13 @@ static void rtase_sw_reset(struct net_device *dev) + rtase_tx_clear(tp); + rtase_rx_clear(tp); + ++ for (i = 0; i < tp->int_nums; i++) { ++ ivec = &tp->int_vector[i]; ++ list_for_each_entry_safe(ring, tmp, &ivec->ring_list, ++ ring_entry) ++ list_del(&ring->ring_entry); ++ } ++ + ret = rtase_init_ring(dev); + if (ret) { + netdev_err(dev, "unable to init ring\n"); +diff --git a/drivers/net/mctp/mctp-i2c.c b/drivers/net/mctp/mctp-i2c.c +index d2b3f5a5914188..a0dba912aeceac 100644 +--- a/drivers/net/mctp/mctp-i2c.c ++++ b/drivers/net/mctp/mctp-i2c.c +@@ -584,6 +584,7 @@ static int mctp_i2c_header_create(struct sk_buff *skb, struct net_device *dev, + struct mctp_i2c_hdr *hdr; + struct mctp_hdr *mhdr; + u8 lldst, llsrc; ++ int rc; + + if (len > MCTP_I2C_MAXMTU) + return -EMSGSIZE; +@@ -594,6 
+595,10 @@ static int mctp_i2c_header_create(struct sk_buff *skb, struct net_device *dev, + lldst = *((u8 *)daddr); + llsrc = *((u8 *)saddr); + ++ rc = skb_cow_head(skb, sizeof(struct mctp_i2c_hdr)); ++ if (rc) ++ return rc; ++ + skb_push(skb, sizeof(struct mctp_i2c_hdr)); + skb_reset_mac_header(skb); + hdr = (void *)skb_mac_header(skb); +diff --git a/drivers/net/mctp/mctp-i3c.c b/drivers/net/mctp/mctp-i3c.c +index c1e72253063b54..c678f79aa35611 100644 +--- a/drivers/net/mctp/mctp-i3c.c ++++ b/drivers/net/mctp/mctp-i3c.c +@@ -506,10 +506,15 @@ static int mctp_i3c_header_create(struct sk_buff *skb, struct net_device *dev, + const void *saddr, unsigned int len) + { + struct mctp_i3c_internal_hdr *ihdr; ++ int rc; + + if (!daddr || !saddr) + return -EINVAL; + ++ rc = skb_cow_head(skb, sizeof(struct mctp_i3c_internal_hdr)); ++ if (rc) ++ return rc; ++ + skb_push(skb, sizeof(struct mctp_i3c_internal_hdr)); + skb_reset_mac_header(skb); + ihdr = (void *)skb_mac_header(skb); +diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c +index 872e582b7e8387..0fa540dbff4c47 100644 +--- a/drivers/net/phy/nxp-c45-tja11xx.c ++++ b/drivers/net/phy/nxp-c45-tja11xx.c +@@ -22,6 +22,11 @@ + #define PHY_ID_TJA_1103 0x001BB010 + #define PHY_ID_TJA_1120 0x001BB031 + ++#define VEND1_DEVICE_ID3 0x0004 ++#define TJA1120_DEV_ID3_SILICON_VERSION GENMASK(15, 12) ++#define TJA1120_DEV_ID3_SAMPLE_TYPE GENMASK(11, 8) ++#define DEVICE_ID3_SAMPLE_TYPE_R 0x9 ++ + #define VEND1_DEVICE_CONTROL 0x0040 + #define DEVICE_CONTROL_RESET BIT(15) + #define DEVICE_CONTROL_CONFIG_GLOBAL_EN BIT(14) +@@ -109,6 +114,9 @@ + #define MII_BASIC_CONFIG_RMII 0x5 + #define MII_BASIC_CONFIG_MII 0x4 + ++#define VEND1_SGMII_BASIC_CONTROL 0xB000 ++#define SGMII_LPM BIT(11) ++ + #define VEND1_SYMBOL_ERROR_CNT_XTD 0x8351 + #define EXTENDED_CNT_EN BIT(15) + #define VEND1_MONITOR_STATUS 0xAC80 +@@ -1593,6 +1601,63 @@ static int nxp_c45_set_phy_mode(struct phy_device *phydev) + return 0; + } + ++/* Errata: ES_TJA1120 and ES_TJA1121 Rev. 1.0 — 28 November 2024 Section 3.1 & 3.2 */ ++static void nxp_c45_tja1120_errata(struct phy_device *phydev) ++{ ++ bool macsec_ability, sgmii_ability; ++ int silicon_version, sample_type; ++ int phy_abilities; ++ int ret = 0; ++ ++ ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, VEND1_DEVICE_ID3); ++ if (ret < 0) ++ return; ++ ++ sample_type = FIELD_GET(TJA1120_DEV_ID3_SAMPLE_TYPE, ret); ++ if (sample_type != DEVICE_ID3_SAMPLE_TYPE_R) ++ return; ++ ++ silicon_version = FIELD_GET(TJA1120_DEV_ID3_SILICON_VERSION, ret); ++ ++ phy_abilities = phy_read_mmd(phydev, MDIO_MMD_VEND1, ++ VEND1_PORT_ABILITIES); ++ macsec_ability = !!(phy_abilities & MACSEC_ABILITY); ++ sgmii_ability = !!(phy_abilities & SGMII_ABILITY); ++ if ((!macsec_ability && silicon_version == 2) || ++ (macsec_ability && silicon_version == 1)) { ++ /* TJA1120/TJA1121 PHY configuration errata workaround. ++ * Apply PHY writes sequence before link up. 
++ */ ++ if (!macsec_ability) { ++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x4b95); ++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0xf3cd); ++ } else { ++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x89c7); ++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0x0893); ++ } ++ ++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x0476, 0x58a0); ++ ++ phy_write_mmd(phydev, MDIO_MMD_PMAPMD, 0x8921, 0xa3a); ++ phy_write_mmd(phydev, MDIO_MMD_PMAPMD, 0x89F1, 0x16c1); ++ ++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x0); ++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0x0); ++ ++ if (sgmii_ability) { ++ /* TJA1120B/TJA1121B SGMII PCS restart errata workaround. ++ * Put SGMII PCS into power down mode and back up. ++ */ ++ phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, ++ VEND1_SGMII_BASIC_CONTROL, ++ SGMII_LPM); ++ phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, ++ VEND1_SGMII_BASIC_CONTROL, ++ SGMII_LPM); ++ } ++ } ++} ++ + static int nxp_c45_config_init(struct phy_device *phydev) + { + int ret; +@@ -1609,6 +1674,9 @@ static int nxp_c45_config_init(struct phy_device *phydev) + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 1); + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 2); + ++ if (phy_id_compare(phydev->phy_id, PHY_ID_TJA_1120, GENMASK(31, 4))) ++ nxp_c45_tja1120_errata(phydev); ++ + phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, VEND1_PHY_CONFIG, + PHY_CONFIG_AUTO); + +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +index 5ea684802ad174..d4bc1e85b93055 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +@@ -1,6 +1,6 @@ + // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause + /* +- * Copyright (C) 2012-2014, 2018-2024 Intel Corporation ++ * Copyright (C) 2012-2014, 2018-2025 Intel Corporation + * Copyright (C) 2013-2015 Intel Mobile Communications GmbH + * Copyright (C) 2016-2017 Intel Deutschland GmbH + */ +@@ -422,6 +422,8 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, + /* if reached this point, Alive notification was received */ + iwl_mei_alive_notif(true); + ++ iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr); ++ + ret = iwl_pnvm_load(mvm->trans, &mvm->notif_wait, + &mvm->fw->ucode_capa); + if (ret) { +@@ -430,8 +432,6 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm, + return ret; + } + +- iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr); +- + /* + * Note: all the queues are enabled as part of the interface + * initialization, but in firmware restart scenarios they +diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c +index d5a9360323d29d..8755c5e6a65b30 100644 +--- a/drivers/net/wwan/mhi_wwan_mbim.c ++++ b/drivers/net/wwan/mhi_wwan_mbim.c +@@ -220,7 +220,7 @@ static int mbim_rx_verify_nth16(struct mhi_mbim_context *mbim, struct sk_buff *s + if (mbim->rx_seq + 1 != le16_to_cpu(nth16->wSequence) && + (mbim->rx_seq || le16_to_cpu(nth16->wSequence)) && + !(mbim->rx_seq == 0xffff && !le16_to_cpu(nth16->wSequence))) { +- net_err_ratelimited("sequence number glitch prev=%d curr=%d\n", ++ net_dbg_ratelimited("sequence number glitch prev=%d curr=%d\n", + mbim->rx_seq, le16_to_cpu(nth16->wSequence)); + } + mbim->rx_seq = le16_to_cpu(nth16->wSequence); +diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c +index 4319ab50c10d11..05cd4c4f1ed651 100644 +--- a/drivers/nvme/host/apple.c ++++ b/drivers/nvme/host/apple.c +@@ -599,7 +599,8 @@ static inline void apple_nvme_handle_cqe(struct apple_nvme_queue *q, + } + + 
if (!nvme_try_complete_req(req, cqe->status, cqe->result) && +- !blk_mq_add_to_batch(req, iob, nvme_req(req)->status, ++ !blk_mq_add_to_batch(req, iob, ++ nvme_req(req)->status != NVME_SC_SUCCESS, + apple_nvme_complete_batch)) + apple_nvme_complete_rq(req); + } +@@ -1518,6 +1519,7 @@ static struct apple_nvme *apple_nvme_alloc(struct platform_device *pdev) + + return anv; + put_dev: ++ apple_nvme_detach_genpd(anv); + put_device(anv->dev); + return ERR_PTR(ret); + } +@@ -1551,6 +1553,7 @@ static int apple_nvme_probe(struct platform_device *pdev) + nvme_uninit_ctrl(&anv->ctrl); + out_put_ctrl: + nvme_put_ctrl(&anv->ctrl); ++ apple_nvme_detach_genpd(anv); + return ret; + } + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 46e04b30f6425c..e4034cec59237a 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -431,6 +431,12 @@ static inline void nvme_end_req_zoned(struct request *req) + + static inline void __nvme_end_req(struct request *req) + { ++ if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) { ++ if (blk_rq_is_passthrough(req)) ++ nvme_log_err_passthru(req); ++ else ++ nvme_log_error(req); ++ } + nvme_end_req_zoned(req); + nvme_trace_bio_complete(req); + if (req->cmd_flags & REQ_NVME_MPATH) +@@ -441,12 +447,6 @@ void nvme_end_req(struct request *req) + { + blk_status_t status = nvme_error_status(nvme_req(req)->status); + +- if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) { +- if (blk_rq_is_passthrough(req)) +- nvme_log_err_passthru(req); +- else +- nvme_log_error(req); +- } + __nvme_end_req(req); + blk_mq_end_request(req, status); + } +@@ -564,8 +564,6 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl, + switch (new_state) { + case NVME_CTRL_LIVE: + switch (old_state) { +- case NVME_CTRL_NEW: +- case NVME_CTRL_RESETTING: + case NVME_CTRL_CONNECTING: + changed = true; + fallthrough; +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 682234da2fabe0..7c13a400071e65 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -786,49 +786,8 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl) + "NVME-FC{%d}: controller connectivity lost. Awaiting " + "Reconnect", ctrl->cnum); + +- switch (nvme_ctrl_state(&ctrl->ctrl)) { +- case NVME_CTRL_NEW: +- case NVME_CTRL_LIVE: +- /* +- * Schedule a controller reset. The reset will terminate the +- * association and schedule the reconnect timer. Reconnects +- * will be attempted until either the ctlr_loss_tmo +- * (max_retries * connect_delay) expires or the remoteport's +- * dev_loss_tmo expires. +- */ +- if (nvme_reset_ctrl(&ctrl->ctrl)) { +- dev_warn(ctrl->ctrl.device, +- "NVME-FC{%d}: Couldn't schedule reset.\n", +- ctrl->cnum); +- nvme_delete_ctrl(&ctrl->ctrl); +- } +- break; +- +- case NVME_CTRL_CONNECTING: +- /* +- * The association has already been terminated and the +- * controller is attempting reconnects. No need to do anything +- * futher. Reconnects will be attempted until either the +- * ctlr_loss_tmo (max_retries * connect_delay) expires or the +- * remoteport's dev_loss_tmo expires. +- */ +- break; +- +- case NVME_CTRL_RESETTING: +- /* +- * Controller is already in the process of terminating the +- * association. No need to do anything further. The reconnect +- * step will kick in naturally after the association is +- * terminated. 
+- */ +- break; +- +- case NVME_CTRL_DELETING: +- case NVME_CTRL_DELETING_NOIO: +- default: +- /* no action to take - let it delete */ +- break; +- } ++ set_bit(ASSOC_FAILED, &ctrl->flags); ++ nvme_reset_ctrl(&ctrl->ctrl); + } + + /** +@@ -2546,7 +2505,6 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg) + */ + if (state == NVME_CTRL_CONNECTING) { + __nvme_fc_abort_outstanding_ios(ctrl, true); +- set_bit(ASSOC_FAILED, &ctrl->flags); + dev_warn(ctrl->ctrl.device, + "NVME-FC{%d}: transport error during (re)connect\n", + ctrl->cnum); +@@ -3065,7 +3023,6 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl) + struct nvmefc_ls_rcv_op *disls = NULL; + unsigned long flags; + int ret; +- bool changed; + + ++ctrl->ctrl.nr_reconnects; + +@@ -3176,12 +3133,13 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl) + if (ret) + goto out_term_aen_ops; + +- changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE); ++ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE)) { ++ ret = -EIO; ++ goto out_term_aen_ops; ++ } + + ctrl->ctrl.nr_reconnects = 0; +- +- if (changed) +- nvme_start_ctrl(&ctrl->ctrl); ++ nvme_start_ctrl(&ctrl->ctrl); + + return 0; /* Success */ + +@@ -3582,8 +3540,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, + list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list); + spin_unlock_irqrestore(&rport->lock, flags); + +- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING) || +- !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { ++ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { + dev_err(ctrl->ctrl.device, + "NVME-FC{%d}: failed to init ctrl state\n", ctrl->cnum); + goto fail_ctrl; +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index 99c2983dbe6c87..0b4ca8e8f9b465 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -1131,8 +1131,9 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, + + trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail); + if (!nvme_try_complete_req(req, cqe->status, cqe->result) && +- !blk_mq_add_to_batch(req, iob, nvme_req(req)->status, +- nvme_pci_complete_batch)) ++ !blk_mq_add_to_batch(req, iob, ++ nvme_req(req)->status != NVME_SC_SUCCESS, ++ nvme_pci_complete_batch)) + nvme_pci_complete_rq(req); + } + +@@ -3715,6 +3716,8 @@ static const struct pci_device_id nvme_id_table[] = { + .driver_data = NVME_QUIRK_BOGUS_NID, }, + { PCI_DEVICE(0x1cc1, 0x5350), /* ADATA XPG GAMMIX S50 */ + .driver_data = NVME_QUIRK_BOGUS_NID, }, ++ { PCI_DEVICE(0x1dbe, 0x5216), /* Acer/INNOGRIT FA100/5216 NVMe SSD */ ++ .driver_data = NVME_QUIRK_BOGUS_NID, }, + { PCI_DEVICE(0x1dbe, 0x5236), /* ADATA XPG GAMMIX S70 */ + .driver_data = NVME_QUIRK_BOGUS_NID, }, + { PCI_DEVICE(0x1e49, 0x0021), /* ZHITAI TiPro5000 NVMe SSD */ +diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c +index 1afd93026f9bf0..2a4536ef618487 100644 +--- a/drivers/nvme/target/rdma.c ++++ b/drivers/nvme/target/rdma.c +@@ -996,6 +996,27 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue, + nvmet_req_complete(&cmd->req, status); + } + ++static bool nvmet_rdma_recv_not_live(struct nvmet_rdma_queue *queue, ++ struct nvmet_rdma_rsp *rsp) ++{ ++ unsigned long flags; ++ bool ret = true; ++ ++ spin_lock_irqsave(&queue->state_lock, flags); ++ /* ++ * recheck queue state is not live to prevent a race condition ++ * with RDMA_CM_EVENT_ESTABLISHED handler. 
++ */ ++ if (queue->state == NVMET_RDMA_Q_LIVE) ++ ret = false; ++ else if (queue->state == NVMET_RDMA_Q_CONNECTING) ++ list_add_tail(&rsp->wait_list, &queue->rsp_wait_list); ++ else ++ nvmet_rdma_put_rsp(rsp); ++ spin_unlock_irqrestore(&queue->state_lock, flags); ++ return ret; ++} ++ + static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) + { + struct nvmet_rdma_cmd *cmd = +@@ -1038,17 +1059,9 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) + rsp->n_rdma = 0; + rsp->invalidate_rkey = 0; + +- if (unlikely(queue->state != NVMET_RDMA_Q_LIVE)) { +- unsigned long flags; +- +- spin_lock_irqsave(&queue->state_lock, flags); +- if (queue->state == NVMET_RDMA_Q_CONNECTING) +- list_add_tail(&rsp->wait_list, &queue->rsp_wait_list); +- else +- nvmet_rdma_put_rsp(rsp); +- spin_unlock_irqrestore(&queue->state_lock, flags); ++ if (unlikely(queue->state != NVMET_RDMA_Q_LIVE) && ++ nvmet_rdma_recv_not_live(queue, rsp)) + return; +- } + + nvmet_rdma_handle_command(queue, rsp); + } +diff --git a/drivers/phy/ti/phy-gmii-sel.c b/drivers/phy/ti/phy-gmii-sel.c +index e0ca59ae315314..ff5d5e29629fab 100644 +--- a/drivers/phy/ti/phy-gmii-sel.c ++++ b/drivers/phy/ti/phy-gmii-sel.c +@@ -424,6 +424,12 @@ static int phy_gmii_sel_init_ports(struct phy_gmii_sel_priv *priv) + return 0; + } + ++static const struct regmap_config phy_gmii_sel_regmap_cfg = { ++ .reg_bits = 32, ++ .val_bits = 32, ++ .reg_stride = 4, ++}; ++ + static int phy_gmii_sel_probe(struct platform_device *pdev) + { + struct device *dev = &pdev->dev; +@@ -468,7 +474,14 @@ static int phy_gmii_sel_probe(struct platform_device *pdev) + + priv->regmap = syscon_node_to_regmap(node->parent); + if (IS_ERR(priv->regmap)) { +- priv->regmap = device_node_to_regmap(node); ++ void __iomem *base; ++ ++ base = devm_platform_ioremap_resource(pdev, 0); ++ if (IS_ERR(base)) ++ return dev_err_probe(dev, PTR_ERR(base), ++ "failed to get base memory resource\n"); ++ ++ priv->regmap = regmap_init_mmio(dev, base, &phy_gmii_sel_regmap_cfg); + if (IS_ERR(priv->regmap)) + return dev_err_probe(dev, PTR_ERR(priv->regmap), + "Failed to get syscon\n"); +diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c +index 73dbf29c002f39..cf6efa9c0364a1 100644 +--- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c ++++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c +@@ -974,7 +974,7 @@ static const struct regmap_config bcm281xx_pinctrl_regmap_config = { + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, +- .max_register = BCM281XX_PIN_VC_CAM3_SDA, ++ .max_register = BCM281XX_PIN_VC_CAM3_SDA * 4, + }; + + static int bcm281xx_pinctrl_get_groups_count(struct pinctrl_dev *pctldev) +diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c +index 471f644c5eef2c..d09a5e9b2eca53 100644 +--- a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c ++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c +@@ -2374,6 +2374,9 @@ static int npcm8xx_gpio_fw(struct npcm8xx_pinctrl *pctrl) + pctrl->gpio_bank[id].gc.parent = dev; + pctrl->gpio_bank[id].gc.fwnode = child; + pctrl->gpio_bank[id].gc.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", child); ++ if (pctrl->gpio_bank[id].gc.label == NULL) ++ return -ENOMEM; ++ + pctrl->gpio_bank[id].gc.dbg_show = npcmgpio_dbg_show; + pctrl->gpio_bank[id].direction_input = pctrl->gpio_bank[id].gc.direction_input; + pctrl->gpio_bank[id].gc.direction_input = npcmgpio_direction_input; +diff --git a/drivers/platform/x86/intel/int3472/discrete.c 
b/drivers/platform/x86/intel/int3472/discrete.c +index 09fff213b0911c..a80c981caa340e 100644 +--- a/drivers/platform/x86/intel/int3472/discrete.c ++++ b/drivers/platform/x86/intel/int3472/discrete.c +@@ -2,6 +2,7 @@ + /* Author: Dan Scally */ + + #include ++#include + #include + #include + #include +@@ -55,7 +56,7 @@ static void skl_int3472_log_sensor_module_name(struct int3472_discrete_device *i + + static int skl_int3472_fill_gpiod_lookup(struct gpiod_lookup *table_entry, + struct acpi_resource_gpio *agpio, +- const char *func, u32 polarity) ++ const char *func, unsigned long gpio_flags) + { + char *path = agpio->resource_source.string_ptr; + struct acpi_device *adev; +@@ -70,14 +71,14 @@ static int skl_int3472_fill_gpiod_lookup(struct gpiod_lookup *table_entry, + if (!adev) + return -ENODEV; + +- *table_entry = GPIO_LOOKUP(acpi_dev_name(adev), agpio->pin_table[0], func, polarity); ++ *table_entry = GPIO_LOOKUP(acpi_dev_name(adev), agpio->pin_table[0], func, gpio_flags); + + return 0; + } + + static int skl_int3472_map_gpio_to_sensor(struct int3472_discrete_device *int3472, + struct acpi_resource_gpio *agpio, +- const char *func, u32 polarity) ++ const char *func, unsigned long gpio_flags) + { + int ret; + +@@ -87,7 +88,7 @@ static int skl_int3472_map_gpio_to_sensor(struct int3472_discrete_device *int347 + } + + ret = skl_int3472_fill_gpiod_lookup(&int3472->gpios.table[int3472->n_sensor_gpios], +- agpio, func, polarity); ++ agpio, func, gpio_flags); + if (ret) + return ret; + +@@ -100,7 +101,7 @@ static int skl_int3472_map_gpio_to_sensor(struct int3472_discrete_device *int347 + static struct gpio_desc * + skl_int3472_gpiod_get_from_temp_lookup(struct int3472_discrete_device *int3472, + struct acpi_resource_gpio *agpio, +- const char *func, u32 polarity) ++ const char *func, unsigned long gpio_flags) + { + struct gpio_desc *desc; + int ret; +@@ -111,7 +112,7 @@ skl_int3472_gpiod_get_from_temp_lookup(struct int3472_discrete_device *int3472, + return ERR_PTR(-ENOMEM); + + lookup->dev_id = dev_name(int3472->dev); +- ret = skl_int3472_fill_gpiod_lookup(&lookup->table[0], agpio, func, polarity); ++ ret = skl_int3472_fill_gpiod_lookup(&lookup->table[0], agpio, func, gpio_flags); + if (ret) + return ERR_PTR(ret); + +@@ -122,32 +123,76 @@ skl_int3472_gpiod_get_from_temp_lookup(struct int3472_discrete_device *int3472, + return desc; + } + +-static void int3472_get_func_and_polarity(u8 type, const char **func, u32 *polarity) ++/** ++ * struct int3472_gpio_map - Map GPIOs to whatever is expected by the ++ * sensor driver (as in DT bindings) ++ * @hid: The ACPI HID of the device without the instance number e.g. INT347E ++ * @type_from: The GPIO type from ACPI ?SDT ++ * @type_to: The assigned GPIO type, typically same as @type_from ++ * @func: The function, e.g. "enable" ++ * @polarity_low: GPIO_ACTIVE_LOW true if the @polarity_low is true, ++ * GPIO_ACTIVE_HIGH otherwise ++ */ ++struct int3472_gpio_map { ++ const char *hid; ++ u8 type_from; ++ u8 type_to; ++ bool polarity_low; ++ const char *func; ++}; ++ ++static const struct int3472_gpio_map int3472_gpio_map[] = { ++ { "INT347E", INT3472_GPIO_TYPE_RESET, INT3472_GPIO_TYPE_RESET, false, "enable" }, ++}; ++ ++static void int3472_get_func_and_polarity(struct acpi_device *adev, u8 *type, ++ const char **func, unsigned long *gpio_flags) + { +- switch (type) { ++ unsigned int i; ++ ++ for (i = 0; i < ARRAY_SIZE(int3472_gpio_map); i++) { ++ /* ++ * Map the firmware-provided GPIO to whatever a driver expects ++ * (as in DT bindings). 
First check if the type matches with the ++ * GPIO map, then further check that the device _HID matches. ++ */ ++ if (*type != int3472_gpio_map[i].type_from) ++ continue; ++ ++ if (!acpi_dev_hid_uid_match(adev, int3472_gpio_map[i].hid, NULL)) ++ continue; ++ ++ *type = int3472_gpio_map[i].type_to; ++ *gpio_flags = int3472_gpio_map[i].polarity_low ? ++ GPIO_ACTIVE_LOW : GPIO_ACTIVE_HIGH; ++ *func = int3472_gpio_map[i].func; ++ return; ++ } ++ ++ switch (*type) { + case INT3472_GPIO_TYPE_RESET: + *func = "reset"; +- *polarity = GPIO_ACTIVE_LOW; ++ *gpio_flags = GPIO_ACTIVE_LOW; + break; + case INT3472_GPIO_TYPE_POWERDOWN: + *func = "powerdown"; +- *polarity = GPIO_ACTIVE_LOW; ++ *gpio_flags = GPIO_ACTIVE_LOW; + break; + case INT3472_GPIO_TYPE_CLK_ENABLE: + *func = "clk-enable"; +- *polarity = GPIO_ACTIVE_HIGH; ++ *gpio_flags = GPIO_ACTIVE_HIGH; + break; + case INT3472_GPIO_TYPE_PRIVACY_LED: + *func = "privacy-led"; +- *polarity = GPIO_ACTIVE_HIGH; ++ *gpio_flags = GPIO_ACTIVE_HIGH; + break; + case INT3472_GPIO_TYPE_POWER_ENABLE: + *func = "power-enable"; +- *polarity = GPIO_ACTIVE_HIGH; ++ *gpio_flags = GPIO_ACTIVE_HIGH; + break; + default: + *func = "unknown"; +- *polarity = GPIO_ACTIVE_HIGH; ++ *gpio_flags = GPIO_ACTIVE_HIGH; + break; + } + } +@@ -194,7 +239,7 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares, + struct gpio_desc *gpio; + const char *err_msg; + const char *func; +- u32 polarity; ++ unsigned long gpio_flags; + int ret; + + if (!acpi_gpio_get_io_resource(ares, &agpio)) +@@ -217,7 +262,7 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares, + + type = FIELD_GET(INT3472_GPIO_DSM_TYPE, obj->integer.value); + +- int3472_get_func_and_polarity(type, &func, &polarity); ++ int3472_get_func_and_polarity(int3472->sensor, &type, &func, &gpio_flags); + + pin = FIELD_GET(INT3472_GPIO_DSM_PIN, obj->integer.value); + if (pin != agpio->pin_table[0]) +@@ -227,16 +272,16 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares, + + active_value = FIELD_GET(INT3472_GPIO_DSM_SENSOR_ON_VAL, obj->integer.value); + if (!active_value) +- polarity ^= GPIO_ACTIVE_LOW; ++ gpio_flags ^= GPIO_ACTIVE_LOW; + + dev_dbg(int3472->dev, "%s %s pin %d active-%s\n", func, + agpio->resource_source.string_ptr, agpio->pin_table[0], +- str_high_low(polarity == GPIO_ACTIVE_HIGH)); ++ str_high_low(gpio_flags == GPIO_ACTIVE_HIGH)); + + switch (type) { + case INT3472_GPIO_TYPE_RESET: + case INT3472_GPIO_TYPE_POWERDOWN: +- ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, func, polarity); ++ ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, func, gpio_flags); + if (ret) + err_msg = "Failed to map GPIO pin to sensor\n"; + +@@ -244,7 +289,7 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares, + case INT3472_GPIO_TYPE_CLK_ENABLE: + case INT3472_GPIO_TYPE_PRIVACY_LED: + case INT3472_GPIO_TYPE_POWER_ENABLE: +- gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, func, polarity); ++ gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, func, gpio_flags); + if (IS_ERR(gpio)) { + ret = PTR_ERR(gpio); + err_msg = "Failed to get GPIO\n"; +diff --git a/drivers/platform/x86/intel/pmc/core.c b/drivers/platform/x86/intel/pmc/core.c +index 3e7f99ac8c941e..c5dae640a0b2b0 100644 +--- a/drivers/platform/x86/intel/pmc/core.c ++++ b/drivers/platform/x86/intel/pmc/core.c +@@ -625,8 +625,8 @@ static u32 convert_ltr_scale(u32 val) + static int pmc_core_ltr_show(struct seq_file *s, void *unused) + { + struct pmc_dev *pmcdev = 
s->private; +- u64 decoded_snoop_ltr, decoded_non_snoop_ltr; +- u32 ltr_raw_data, scale, val; ++ u64 decoded_snoop_ltr, decoded_non_snoop_ltr, val; ++ u32 ltr_raw_data, scale; + u16 snoop_ltr, nonsnoop_ltr; + unsigned int i, index, ltr_index = 0; + +diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c +index 84dcd7da7319e3..a3c73abb00f21e 100644 +--- a/drivers/platform/x86/thinkpad_acpi.c ++++ b/drivers/platform/x86/thinkpad_acpi.c +@@ -7883,6 +7883,7 @@ static struct ibm_struct volume_driver_data = { + + #define FAN_NS_CTRL_STATUS BIT(2) /* Bit which determines control is enabled or not */ + #define FAN_NS_CTRL BIT(4) /* Bit which determines control is by host or EC */ ++#define FAN_CLOCK_TPM (22500*60) /* Ticks per minute for a 22.5 kHz clock */ + + enum { /* Fan control constants */ + fan_status_offset = 0x2f, /* EC register 0x2f */ +@@ -7938,6 +7939,7 @@ static int fan_watchdog_maxinterval; + + static bool fan_with_ns_addr; + static bool ecfw_with_fan_dec_rpm; ++static bool fan_speed_in_tpr; + + static struct mutex fan_mutex; + +@@ -8140,8 +8142,11 @@ static int fan_get_speed(unsigned int *speed) + !acpi_ec_read(fan_rpm_offset + 1, &hi))) + return -EIO; + +- if (likely(speed)) ++ if (likely(speed)) { + *speed = (hi << 8) | lo; ++ if (fan_speed_in_tpr && *speed != 0) ++ *speed = FAN_CLOCK_TPM / *speed; ++ } + break; + case TPACPI_FAN_RD_TPEC_NS: + if (!acpi_ec_read(fan_rpm_status_ns, &lo)) +@@ -8174,8 +8179,11 @@ static int fan2_get_speed(unsigned int *speed) + if (rc) + return -EIO; + +- if (likely(speed)) ++ if (likely(speed)) { + *speed = (hi << 8) | lo; ++ if (fan_speed_in_tpr && *speed != 0) ++ *speed = FAN_CLOCK_TPM / *speed; ++ } + break; + + case TPACPI_FAN_RD_TPEC_NS: +@@ -8786,6 +8794,7 @@ static const struct attribute_group fan_driver_attr_group = { + #define TPACPI_FAN_NOFAN 0x0008 /* no fan available */ + #define TPACPI_FAN_NS 0x0010 /* For EC with non-Standard register addresses */ + #define TPACPI_FAN_DECRPM 0x0020 /* For ECFW's with RPM in register as decimal */ ++#define TPACPI_FAN_TPR 0x0040 /* Fan speed is in Ticks Per Revolution */ + + static const struct tpacpi_quirk fan_quirk_table[] __initconst = { + TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1), +@@ -8815,6 +8824,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = { + TPACPI_Q_LNV3('R', '0', 'V', TPACPI_FAN_NS), /* 11e Gen5 KL-Y */ + TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */ + TPACPI_Q_LNV3('R', '0', 'Q', TPACPI_FAN_DECRPM),/* L480 */ ++ TPACPI_Q_LNV('8', 'F', TPACPI_FAN_TPR), /* ThinkPad x120e */ + }; + + static int __init fan_init(struct ibm_init_struct *iibm) +@@ -8885,6 +8895,8 @@ static int __init fan_init(struct ibm_init_struct *iibm) + + if (quirks & TPACPI_FAN_Q1) + fan_quirk1_setup(); ++ if (quirks & TPACPI_FAN_TPR) ++ fan_speed_in_tpr = true; + /* Try and probe the 2nd fan */ + tp_features.second_fan = 1; /* needed for get_speed to work */ + res = fan2_get_speed(&speed); +@@ -10318,6 +10330,10 @@ static struct ibm_struct proxsensor_driver_data = { + #define DYTC_MODE_PSC_BALANCE 5 /* Default mode aka balanced */ + #define DYTC_MODE_PSC_PERFORM 7 /* High power mode aka performance */ + ++#define DYTC_MODE_PSCV9_LOWPOWER 1 /* Low power mode */ ++#define DYTC_MODE_PSCV9_BALANCE 3 /* Default mode aka balanced */ ++#define DYTC_MODE_PSCV9_PERFORM 4 /* High power mode aka performance */ ++ + #define DYTC_ERR_MASK 0xF /* Bits 0-3 in cmd result are the error result */ + #define DYTC_ERR_SUCCESS 1 /* CMD completed successful */ + +@@ 
-10338,6 +10354,10 @@ static int dytc_capabilities; + static bool dytc_mmc_get_available; + static int profile_force; + ++static int platform_psc_profile_lowpower = DYTC_MODE_PSC_LOWPOWER; ++static int platform_psc_profile_balanced = DYTC_MODE_PSC_BALANCE; ++static int platform_psc_profile_performance = DYTC_MODE_PSC_PERFORM; ++ + static int convert_dytc_to_profile(int funcmode, int dytcmode, + enum platform_profile_option *profile) + { +@@ -10359,19 +10379,15 @@ static int convert_dytc_to_profile(int funcmode, int dytcmode, + } + return 0; + case DYTC_FUNCTION_PSC: +- switch (dytcmode) { +- case DYTC_MODE_PSC_LOWPOWER: ++ if (dytcmode == platform_psc_profile_lowpower) + *profile = PLATFORM_PROFILE_LOW_POWER; +- break; +- case DYTC_MODE_PSC_BALANCE: ++ else if (dytcmode == platform_psc_profile_balanced) + *profile = PLATFORM_PROFILE_BALANCED; +- break; +- case DYTC_MODE_PSC_PERFORM: ++ else if (dytcmode == platform_psc_profile_performance) + *profile = PLATFORM_PROFILE_PERFORMANCE; +- break; +- default: /* Unknown mode */ ++ else + return -EINVAL; +- } ++ + return 0; + case DYTC_FUNCTION_AMT: + /* For now return balanced. It's the closest we have to 'auto' */ +@@ -10392,19 +10408,19 @@ static int convert_profile_to_dytc(enum platform_profile_option profile, int *pe + if (dytc_capabilities & BIT(DYTC_FC_MMC)) + *perfmode = DYTC_MODE_MMC_LOWPOWER; + else if (dytc_capabilities & BIT(DYTC_FC_PSC)) +- *perfmode = DYTC_MODE_PSC_LOWPOWER; ++ *perfmode = platform_psc_profile_lowpower; + break; + case PLATFORM_PROFILE_BALANCED: + if (dytc_capabilities & BIT(DYTC_FC_MMC)) + *perfmode = DYTC_MODE_MMC_BALANCE; + else if (dytc_capabilities & BIT(DYTC_FC_PSC)) +- *perfmode = DYTC_MODE_PSC_BALANCE; ++ *perfmode = platform_psc_profile_balanced; + break; + case PLATFORM_PROFILE_PERFORMANCE: + if (dytc_capabilities & BIT(DYTC_FC_MMC)) + *perfmode = DYTC_MODE_MMC_PERFORM; + else if (dytc_capabilities & BIT(DYTC_FC_PSC)) +- *perfmode = DYTC_MODE_PSC_PERFORM; ++ *perfmode = platform_psc_profile_performance; + break; + default: /* Unknown profile */ + return -EOPNOTSUPP; +@@ -10593,6 +10609,7 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm) + if (output & BIT(DYTC_QUERY_ENABLE_BIT)) + dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF; + ++ dbg_printk(TPACPI_DBG_INIT, "DYTC version %d\n", dytc_version); + /* Check DYTC is enabled and supports mode setting */ + if (dytc_version < 5) + return -ENODEV; +@@ -10631,6 +10648,11 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm) + } + } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */ + pr_debug("PSC is supported\n"); ++ if (dytc_version >= 9) { /* update profiles for DYTC 9 and up */ ++ platform_psc_profile_lowpower = DYTC_MODE_PSCV9_LOWPOWER; ++ platform_psc_profile_balanced = DYTC_MODE_PSCV9_BALANCE; ++ platform_psc_profile_performance = DYTC_MODE_PSCV9_PERFORM; ++ } + } else { + dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n"); + return -ENODEV; +diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c +index 52c32dcbf7d846..4112a009733826 100644 +--- a/drivers/powercap/powercap_sys.c ++++ b/drivers/powercap/powercap_sys.c +@@ -627,8 +627,7 @@ struct powercap_control_type *powercap_register_control_type( + dev_set_name(&control_type->dev, "%s", name); + result = device_register(&control_type->dev); + if (result) { +- if (control_type->allocated) +- kfree(control_type); ++ put_device(&control_type->dev); + return ERR_PTR(result); + } + idr_init(&control_type->idr); +diff --git 
a/drivers/s390/cio/chp.c b/drivers/s390/cio/chp.c +index cba2d048a96b3c..7855d88a49d857 100644 +--- a/drivers/s390/cio/chp.c ++++ b/drivers/s390/cio/chp.c +@@ -695,7 +695,8 @@ static int info_update(void) + if (time_after(jiffies, chp_info_expires)) { + /* Data is too old, update. */ + rc = sclp_chp_read_info(&chp_info); +- chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL ; ++ if (!rc) ++ chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL; + } + mutex_unlock(&info_lock); + +diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c +index 8958547ac111ac..fed07b1460702a 100644 +--- a/drivers/scsi/qla1280.c ++++ b/drivers/scsi/qla1280.c +@@ -2867,7 +2867,7 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp) + dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n", + cpu_to_le32(upper_32_bits(dma_handle)), + cpu_to_le32(lower_32_bits(dma_handle)), +- cpu_to_le32(sg_dma_len(sg_next(s)))); ++ cpu_to_le32(sg_dma_len(s))); + remseg--; + } + dprintk(5, "qla1280_64bit_start_scsi: Scatter/gather " +diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c +index 042329b74c6e68..fe08af4dcb67cf 100644 +--- a/drivers/scsi/scsi_scan.c ++++ b/drivers/scsi/scsi_scan.c +@@ -245,7 +245,7 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev, + } + ret = sbitmap_init_node(&sdev->budget_map, + scsi_device_max_queue_depth(sdev), +- new_shift, GFP_KERNEL, ++ new_shift, GFP_NOIO, + sdev->request_queue->node, false, true); + if (!ret) + sbitmap_resize(&sdev->budget_map, depth); +diff --git a/drivers/spi/spi-microchip-core.c b/drivers/spi/spi-microchip-core.c +index 5b6af55855efc4..62ba0bd9cbb7e7 100644 +--- a/drivers/spi/spi-microchip-core.c ++++ b/drivers/spi/spi-microchip-core.c +@@ -70,8 +70,7 @@ + #define INT_RX_CHANNEL_OVERFLOW BIT(2) + #define INT_TX_CHANNEL_UNDERRUN BIT(3) + +-#define INT_ENABLE_MASK (CONTROL_RX_DATA_INT | CONTROL_TX_DATA_INT | \ +- CONTROL_RX_OVER_INT | CONTROL_TX_UNDER_INT) ++#define INT_ENABLE_MASK (CONTROL_RX_OVER_INT | CONTROL_TX_UNDER_INT) + + #define REG_CONTROL (0x00) + #define REG_FRAME_SIZE (0x04) +@@ -133,10 +132,15 @@ static inline void mchp_corespi_disable(struct mchp_corespi *spi) + mchp_corespi_write(spi, REG_CONTROL, control); + } + +-static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi) ++static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi, int fifo_max) + { +- while (spi->rx_len >= spi->n_bytes && !(mchp_corespi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY)) { +- u32 data = mchp_corespi_read(spi, REG_RX_DATA); ++ for (int i = 0; i < fifo_max; i++) { ++ u32 data; ++ ++ while (mchp_corespi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY) ++ ; ++ ++ data = mchp_corespi_read(spi, REG_RX_DATA); + + spi->rx_len -= spi->n_bytes; + +@@ -211,11 +215,10 @@ static inline void mchp_corespi_set_xfer_size(struct mchp_corespi *spi, int len) + mchp_corespi_write(spi, REG_FRAMESUP, len); + } + +-static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi) ++static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi, int fifo_max) + { +- int fifo_max, i = 0; ++ int i = 0; + +- fifo_max = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH), spi->n_bytes); + mchp_corespi_set_xfer_size(spi, fifo_max); + + while ((i < fifo_max) && !(mchp_corespi_read(spi, REG_STATUS) & STATUS_TXFIFO_FULL)) { +@@ -413,19 +416,6 @@ static irqreturn_t mchp_corespi_interrupt(int irq, void *dev_id) + if (intfield == 0) + return IRQ_NONE; + +- if (intfield & INT_TXDONE) +- mchp_corespi_write(spi, REG_INT_CLEAR, INT_TXDONE); +- +- if 
(intfield & INT_RXRDY) { +- mchp_corespi_write(spi, REG_INT_CLEAR, INT_RXRDY); +- +- if (spi->rx_len) +- mchp_corespi_read_fifo(spi); +- } +- +- if (!spi->rx_len && !spi->tx_len) +- finalise = true; +- + if (intfield & INT_RX_CHANNEL_OVERFLOW) { + mchp_corespi_write(spi, REG_INT_CLEAR, INT_RX_CHANNEL_OVERFLOW); + finalise = true; +@@ -512,9 +502,14 @@ static int mchp_corespi_transfer_one(struct spi_controller *host, + + mchp_corespi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select); + +- while (spi->tx_len) +- mchp_corespi_write_fifo(spi); ++ while (spi->tx_len) { ++ int fifo_max = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH), spi->n_bytes); ++ ++ mchp_corespi_write_fifo(spi, fifo_max); ++ mchp_corespi_read_fifo(spi, fifo_max); ++ } + ++ spi_finalize_current_transfer(host); + return 1; + } + +diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c +index 280071be30b157..6b7ab1814c12df 100644 +--- a/drivers/thermal/cpufreq_cooling.c ++++ b/drivers/thermal/cpufreq_cooling.c +@@ -57,8 +57,6 @@ struct time_in_idle { + * @max_level: maximum cooling level. One less than total number of valid + * cpufreq frequencies. + * @em: Reference on the Energy Model of the device +- * @cdev: thermal_cooling_device pointer to keep track of the +- * registered cooling device. + * @policy: cpufreq policy. + * @cooling_ops: cpufreq callbacks to thermal cooling device ops + * @idle_time: idle time stats +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c +index a5bb6ea96460cc..96cb294e2ec309 100644 +--- a/drivers/ufs/core/ufshcd.c ++++ b/drivers/ufs/core/ufshcd.c +@@ -3119,8 +3119,13 @@ ufshcd_dev_cmd_completion(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) + case UPIU_TRANSACTION_QUERY_RSP: { + u8 response = lrbp->ucd_rsp_ptr->header.response; + +- if (response == 0) ++ if (response == 0) { + err = ufshcd_copy_query_response(hba, lrbp); ++ } else { ++ err = -EINVAL; ++ dev_err(hba->dev, "%s: unexpected response in Query RSP: %x\n", ++ __func__, response); ++ } + break; + } + case UPIU_TRANSACTION_REJECT_UPIU: +diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c +index 6c3ececf913752..8423be59ec0ffb 100644 +--- a/drivers/usb/phy/phy-generic.c ++++ b/drivers/usb/phy/phy-generic.c +@@ -212,7 +212,7 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop) + if (of_property_read_u32(node, "clock-frequency", &clk_rate)) + clk_rate = 0; + +- needs_clk = of_property_read_bool(node, "clocks"); ++ needs_clk = of_property_present(node, "clocks"); + } + nop->gpiod_reset = devm_gpiod_get_optional(dev, "reset", + GPIOD_ASIS); +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index e07c5e3eb18c0d..9b34e23b70919f 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -1079,6 +1079,20 @@ static const struct usb_device_id id_table_combined[] = { + .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, + /* GMC devices */ + { USB_DEVICE(GMC_VID, GMC_Z216C_PID) }, ++ /* Altera USB Blaster 3 */ ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6022_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6025_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 3) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6029_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 3) }, 
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602C_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) }, ++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) }, + { } /* Terminating entry */ + }; + +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h +index 5ee60ba2a73cdb..52be47d684ea66 100644 +--- a/drivers/usb/serial/ftdi_sio_ids.h ++++ b/drivers/usb/serial/ftdi_sio_ids.h +@@ -1612,3 +1612,16 @@ + */ + #define GMC_VID 0x1cd7 + #define GMC_Z216C_PID 0x0217 /* GMC Z216C Adapter IR-USB */ ++ ++/* ++ * Altera USB Blaster 3 (http://www.altera.com). ++ */ ++#define ALTERA_VID 0x09fb ++#define ALTERA_UB3_6022_PID 0x6022 ++#define ALTERA_UB3_6025_PID 0x6025 ++#define ALTERA_UB3_6026_PID 0x6026 ++#define ALTERA_UB3_6029_PID 0x6029 ++#define ALTERA_UB3_602A_PID 0x602a ++#define ALTERA_UB3_602C_PID 0x602c ++#define ALTERA_UB3_602D_PID 0x602d ++#define ALTERA_UB3_602E_PID 0x602e +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 58bd54e8c483a2..5cd26dac2069fa 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1368,13 +1368,13 @@ static const struct usb_device_id option_ids[] = { + .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */ + .driver_info = RSVD(0) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990A (rmnet) */ + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990A (MBIM) */ + .driver_info = NCTRL(0) | RSVD(1) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990A (RNDIS) */ + .driver_info = NCTRL(2) | RSVD(3) }, +- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */ ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990A (ECM) */ + .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(3) }, +@@ -1388,28 +1388,44 @@ static const struct usb_device_id option_ids[] = { + .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */ + .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x30), /* Telit FE990B (rmnet) */ ++ .driver_info = NCTRL(5) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x30), /* Telit FE990B (MBIM) */ ++ .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 
0xff, 0x30), /* Telit FE990B (RNDIS) */ ++ .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x30), /* Telit FE990B (ECM) */ ++ .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x60) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */ + .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x60) }, /* Telit FN990B (rmnet) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x30), /* Telit FN990B (rmnet) */ + .driver_info = NCTRL(5) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x60) }, /* Telit FN990B (MBIM) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30), /* Telit FN990B (MBIM) */ + .driver_info = NCTRL(6) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x60) }, /* Telit FN990B (RNDIS) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x30), /* Telit FN990B (RNDIS) */ + .driver_info = NCTRL(6) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x60) }, /* Telit FN990B (ECM) */ +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x40) }, +- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x30), ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x60) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x30), /* Telit FN990B (ECM) */ + .driver_info = NCTRL(6) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x40) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x60) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), + .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), +diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c +index 061c04efffa765..ea700d53d67631 100644 +--- a/drivers/usb/typec/tcpm/tcpm.c ++++ b/drivers/usb/typec/tcpm/tcpm.c +@@ -5067,16 +5067,16 @@ static void run_state_machine(struct tcpm_port *port) + */ + if (port->vbus_never_low) { + port->vbus_never_low = 
false; +- tcpm_set_state(port, SNK_SOFT_RESET, +- port->timings.sink_wait_cap_time); ++ upcoming_state = SNK_SOFT_RESET; + } else { + if (!port->self_powered) + upcoming_state = SNK_WAIT_CAPABILITIES_TIMEOUT; + else + upcoming_state = hard_reset_state(port); +- tcpm_set_state(port, SNK_WAIT_CAPABILITIES_TIMEOUT, +- port->timings.sink_wait_cap_time); + } ++ ++ tcpm_set_state(port, upcoming_state, ++ port->timings.sink_wait_cap_time); + break; + case SNK_WAIT_CAPABILITIES_TIMEOUT: + /* +diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c +index 9ac25d08f473e8..63612faeab7271 100644 +--- a/drivers/vhost/vhost.c ++++ b/drivers/vhost/vhost.c +@@ -666,7 +666,7 @@ static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev) + + vtsk = vhost_task_create(vhost_run_work_list, vhost_worker_killed, + worker, name); +- if (!vtsk) ++ if (IS_ERR(vtsk)) + goto free_worker; + + mutex_init(&worker->mutex); +diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c +index 7fdb5edd7e2e8d..75338ffc703fb5 100644 +--- a/drivers/video/fbdev/hyperv_fb.c ++++ b/drivers/video/fbdev/hyperv_fb.c +@@ -282,6 +282,8 @@ static uint screen_depth; + static uint screen_fb_size; + static uint dio_fb_size; /* FB size for deferred IO */ + ++static void hvfb_putmem(struct fb_info *info); ++ + /* Send message to Hyper-V host */ + static inline int synthvid_send(struct hv_device *hdev, + struct synthvid_msg *msg) +@@ -862,6 +864,17 @@ static void hvfb_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width, + hvfb_ondemand_refresh_throttle(par, x, y, width, height); + } + ++/* ++ * fb_ops.fb_destroy is called by the last put_fb_info() call at the end ++ * of unregister_framebuffer() or fb_release(). Do any cleanup related to ++ * framebuffer here. ++ */ ++static void hvfb_destroy(struct fb_info *info) ++{ ++ hvfb_putmem(info); ++ framebuffer_release(info); ++} ++ + /* + * TODO: GEN1 codepaths allocate from system or DMA-able memory. Fix the + * driver to use the _SYSMEM_ or _DMAMEM_ helpers in these cases. +@@ -877,6 +890,7 @@ static const struct fb_ops hvfb_ops = { + .fb_set_par = hvfb_set_par, + .fb_setcolreg = hvfb_setcolreg, + .fb_blank = hvfb_blank, ++ .fb_destroy = hvfb_destroy, + }; + + /* Get options from kernel paramenter "video=" */ +@@ -952,7 +966,7 @@ static phys_addr_t hvfb_get_phymem(struct hv_device *hdev, + } + + /* Release contiguous physical memory */ +-static void hvfb_release_phymem(struct hv_device *hdev, ++static void hvfb_release_phymem(struct device *device, + phys_addr_t paddr, unsigned int size) + { + unsigned int order = get_order(size); +@@ -960,7 +974,7 @@ static void hvfb_release_phymem(struct hv_device *hdev, + if (order <= MAX_PAGE_ORDER) + __free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order); + else +- dma_free_coherent(&hdev->device, ++ dma_free_coherent(device, + round_up(size, PAGE_SIZE), + phys_to_virt(paddr), + paddr); +@@ -989,6 +1003,7 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info) + + base = pci_resource_start(pdev, 0); + size = pci_resource_len(pdev, 0); ++ aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME); + + /* + * For Gen 1 VM, we can directly use the contiguous memory +@@ -1010,11 +1025,21 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info) + goto getmem_done; + } + pr_info("Unable to allocate enough contiguous physical memory on Gen 1 VM. 
Using MMIO instead.\n"); ++ } else { ++ aperture_remove_all_conflicting_devices(KBUILD_MODNAME); + } + + /* +- * Cannot use the contiguous physical memory. +- * Allocate mmio space for framebuffer. ++ * Cannot use contiguous physical memory, so allocate MMIO space for ++ * the framebuffer. At this point in the function, conflicting devices ++ * that might have claimed the framebuffer MMIO space based on ++ * screen_info.lfb_base must have already been removed so that ++ * vmbus_allocate_mmio() does not allocate different MMIO space. If the ++ * kdump image were to be loaded using kexec_file_load(), the ++ * framebuffer location in the kdump image would be set from ++ * screen_info.lfb_base at the time that kdump is enabled. If the ++ * framebuffer has moved elsewhere, this could be the wrong location, ++ * causing kdump to hang when efifb (for example) loads. + */ + dio_fb_size = + screen_width * screen_height * screen_depth / 8; +@@ -1051,11 +1076,6 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info) + info->screen_size = dio_fb_size; + + getmem_done: +- if (base && size) +- aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME); +- else +- aperture_remove_all_conflicting_devices(KBUILD_MODNAME); +- + if (!gen2vm) + pci_dev_put(pdev); + +@@ -1074,16 +1094,16 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info) + } + + /* Release the framebuffer */ +-static void hvfb_putmem(struct hv_device *hdev, struct fb_info *info) ++static void hvfb_putmem(struct fb_info *info) + { + struct hvfb_par *par = info->par; + + if (par->need_docopy) { + vfree(par->dio_vp); +- iounmap(info->screen_base); ++ iounmap(par->mmio_vp); + vmbus_free_mmio(par->mem->start, screen_fb_size); + } else { +- hvfb_release_phymem(hdev, info->fix.smem_start, ++ hvfb_release_phymem(info->device, info->fix.smem_start, + screen_fb_size); + } + +@@ -1172,7 +1192,7 @@ static int hvfb_probe(struct hv_device *hdev, + if (ret) + goto error; + +- ret = register_framebuffer(info); ++ ret = devm_register_framebuffer(&hdev->device, info); + if (ret) { + pr_err("Unable to register framebuffer\n"); + goto error; +@@ -1197,7 +1217,7 @@ static int hvfb_probe(struct hv_device *hdev, + + error: + fb_deferred_io_cleanup(info); +- hvfb_putmem(hdev, info); ++ hvfb_putmem(info); + error2: + vmbus_close(hdev->channel); + error1: +@@ -1220,14 +1240,10 @@ static void hvfb_remove(struct hv_device *hdev) + + fb_deferred_io_cleanup(info); + +- unregister_framebuffer(info); + cancel_delayed_work_sync(&par->dwork); + + vmbus_close(hdev->channel); + hv_set_drvdata(hdev, NULL); +- +- hvfb_putmem(hdev, info); +- framebuffer_release(info); + } + + static int hvfb_suspend(struct hv_device *hdev) +diff --git a/drivers/virt/coco/sev-guest/sev-guest.c b/drivers/virt/coco/sev-guest/sev-guest.c +index af64e6191f74d1..48015960643417 100644 +--- a/drivers/virt/coco/sev-guest/sev-guest.c ++++ b/drivers/virt/coco/sev-guest/sev-guest.c +@@ -249,7 +249,7 @@ static int __handle_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_r + * sequence number must be incremented or the VMPCK must be deleted to + * prevent reuse of the IV. + */ +- rc = snp_issue_guest_request(req, &mdesc->input, rio); ++ rc = snp_issue_guest_request(req, &req->input, rio); + switch (rc) { + case -ENOSPC: + /* +@@ -259,7 +259,7 @@ static int __handle_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_r + * order to increment the sequence number and thus avoid + * IV reuse. 
+ */ +- override_npages = mdesc->input.data_npages; ++ override_npages = req->input.data_npages; + req->exit_code = SVM_VMGEXIT_GUEST_REQUEST; + + /* +@@ -315,7 +315,7 @@ static int __handle_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_r + } + + if (override_npages) +- mdesc->input.data_npages = override_npages; ++ req->input.data_npages = override_npages; + + return rc; + } +@@ -354,6 +354,11 @@ static int snp_send_guest_request(struct snp_msg_desc *mdesc, struct snp_guest_r + memcpy(mdesc->request, &mdesc->secret_request, + sizeof(mdesc->secret_request)); + ++ /* initialize the input addresses for the guest request */ ++ req->input.req_gpa = __pa(mdesc->request); ++ req->input.resp_gpa = __pa(mdesc->response); ++ req->input.data_gpa = req->certs_data ? __pa(req->certs_data) : 0; ++ + rc = __handle_guest_request(mdesc, req, rio); + if (rc) { + if (rc == -EIO && +@@ -495,6 +500,7 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques + struct snp_guest_req req = {}; + int ret, npages = 0, resp_len; + sockptr_t certs_address; ++ struct page *page; + + if (sockptr_is_null(io->req_data) || sockptr_is_null(io->resp_data)) + return -EINVAL; +@@ -528,8 +534,20 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques + * the host. If host does not supply any certs in it, then copy + * zeros to indicate that certificate data was not provided. + */ +- memset(mdesc->certs_data, 0, report_req->certs_len); + npages = report_req->certs_len >> PAGE_SHIFT; ++ page = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, ++ get_order(report_req->certs_len)); ++ if (!page) ++ return -ENOMEM; ++ ++ req.certs_data = page_address(page); ++ ret = set_memory_decrypted((unsigned long)req.certs_data, npages); ++ if (ret) { ++ pr_err("failed to mark page shared, ret=%d\n", ret); ++ __free_pages(page, get_order(report_req->certs_len)); ++ return -EFAULT; ++ } ++ + cmd: + /* + * The intermediate response buffer is used while decrypting the +@@ -538,10 +556,12 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques + */ + resp_len = sizeof(report_resp->data) + mdesc->ctx->authsize; + report_resp = kzalloc(resp_len, GFP_KERNEL_ACCOUNT); +- if (!report_resp) +- return -ENOMEM; ++ if (!report_resp) { ++ ret = -ENOMEM; ++ goto e_free_data; ++ } + +- mdesc->input.data_npages = npages; ++ req.input.data_npages = npages; + + req.msg_version = arg->msg_version; + req.msg_type = SNP_MSG_REPORT_REQ; +@@ -556,7 +576,7 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques + + /* If certs length is invalid then copy the returned length */ + if (arg->vmm_error == SNP_GUEST_VMM_ERR_INVALID_LEN) { +- report_req->certs_len = mdesc->input.data_npages << PAGE_SHIFT; ++ report_req->certs_len = req.input.data_npages << PAGE_SHIFT; + + if (copy_to_sockptr(io->req_data, report_req, sizeof(*report_req))) + ret = -EFAULT; +@@ -565,7 +585,7 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques + if (ret) + goto e_free; + +- if (npages && copy_to_sockptr(certs_address, mdesc->certs_data, report_req->certs_len)) { ++ if (npages && copy_to_sockptr(certs_address, req.certs_data, report_req->certs_len)) { + ret = -EFAULT; + goto e_free; + } +@@ -575,6 +595,13 @@ static int get_ext_report(struct snp_guest_dev *snp_dev, struct snp_guest_reques + + e_free: + kfree(report_resp); ++e_free_data: ++ if (npages) { ++ if (set_memory_encrypted((unsigned long)req.certs_data, npages)) ++ WARN_ONCE(ret, "failed to restore 
encryption mask (leak it)\n"); ++ else ++ __free_pages(page, get_order(report_req->certs_len)); ++ } + return ret; + } + +@@ -1048,35 +1075,26 @@ static int __init sev_guest_probe(struct platform_device *pdev) + if (!mdesc->response) + goto e_free_request; + +- mdesc->certs_data = alloc_shared_pages(dev, SEV_FW_BLOB_MAX_SIZE); +- if (!mdesc->certs_data) +- goto e_free_response; +- + ret = -EIO; + mdesc->ctx = snp_init_crypto(mdesc->vmpck, VMPCK_KEY_LEN); + if (!mdesc->ctx) +- goto e_free_cert_data; ++ goto e_free_response; + + misc = &snp_dev->misc; + misc->minor = MISC_DYNAMIC_MINOR; + misc->name = DEVICE_NAME; + misc->fops = &snp_guest_fops; + +- /* Initialize the input addresses for guest request */ +- mdesc->input.req_gpa = __pa(mdesc->request); +- mdesc->input.resp_gpa = __pa(mdesc->response); +- mdesc->input.data_gpa = __pa(mdesc->certs_data); +- + /* Set the privlevel_floor attribute based on the vmpck_id */ + sev_tsm_ops.privlevel_floor = vmpck_id; + + ret = tsm_register(&sev_tsm_ops, snp_dev); + if (ret) +- goto e_free_cert_data; ++ goto e_free_response; + + ret = devm_add_action_or_reset(&pdev->dev, unregister_sev_tsm, NULL); + if (ret) +- goto e_free_cert_data; ++ goto e_free_response; + + ret = misc_register(misc); + if (ret) +@@ -1088,8 +1106,6 @@ static int __init sev_guest_probe(struct platform_device *pdev) + + e_free_ctx: + kfree(mdesc->ctx); +-e_free_cert_data: +- free_shared_pages(mdesc->certs_data, SEV_FW_BLOB_MAX_SIZE); + e_free_response: + free_shared_pages(mdesc->response, sizeof(struct snp_guest_msg)); + e_free_request: +@@ -1104,7 +1120,6 @@ static void __exit sev_guest_remove(struct platform_device *pdev) + struct snp_guest_dev *snp_dev = platform_get_drvdata(pdev); + struct snp_msg_desc *mdesc = snp_dev->msg_desc; + +- free_shared_pages(mdesc->certs_data, SEV_FW_BLOB_MAX_SIZE); + free_shared_pages(mdesc->response, sizeof(struct snp_guest_msg)); + free_shared_pages(mdesc->request, sizeof(struct snp_guest_msg)); + kfree(mdesc->ctx); +diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c +index 26c62e0d34e98b..1f65795cf5d7a2 100644 +--- a/drivers/xen/swiotlb-xen.c ++++ b/drivers/xen/swiotlb-xen.c +@@ -113,7 +113,7 @@ static struct io_tlb_pool *xen_swiotlb_find_pool(struct device *dev, + } + + #ifdef CONFIG_X86 +-int xen_swiotlb_fixup(void *buf, unsigned long nslabs) ++int __init xen_swiotlb_fixup(void *buf, unsigned long nslabs) + { + int rc; + unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT); +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 0dd24d12898638..95d3cf2e15c847 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -526,8 +526,6 @@ static void end_bbio_data_read(struct btrfs_bio *bbio) + u64 end; + u32 len; + +- /* For now only order 0 folios are supported for data. */ +- ASSERT(folio_order(folio) == 0); + btrfs_debug(fs_info, + "%s: bi_sector=%llu, err=%d, mirror=%u", + __func__, bio->bi_iter.bi_sector, bio->bi_status, +@@ -555,7 +553,6 @@ static void end_bbio_data_read(struct btrfs_bio *bbio) + + if (likely(uptodate)) { + loff_t i_size = i_size_read(inode); +- pgoff_t end_index = i_size >> folio_shift(folio); + + /* + * Zero out the remaining part if this range straddles +@@ -564,9 +561,11 @@ static void end_bbio_data_read(struct btrfs_bio *bbio) + * Here we should only zero the range inside the folio, + * not touch anything else. + * +- * NOTE: i_size is exclusive while end is inclusive. ++ * NOTE: i_size is exclusive while end is inclusive and ++ * folio_contains() takes PAGE_SIZE units. 
+ */ +- if (folio_index(folio) == end_index && i_size <= end) { ++ if (folio_contains(folio, i_size >> PAGE_SHIFT) && ++ i_size <= end) { + u32 zero_start = max(offset_in_folio(folio, i_size), + offset_in_folio(folio, start)); + u32 zero_len = offset_in_folio(folio, end) + 1 - +@@ -960,7 +959,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached, + return ret; + } + +- if (folio->index == last_byte >> folio_shift(folio)) { ++ if (folio_contains(folio, last_byte >> PAGE_SHIFT)) { + size_t zero_offset = offset_in_folio(folio, last_byte); + + if (zero_offset) { +diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c +index 5ab51781d0e4f6..20b025f6ac3cc0 100644 +--- a/fs/btrfs/qgroup.c ++++ b/fs/btrfs/qgroup.c +@@ -1898,11 +1898,7 @@ int btrfs_qgroup_cleanup_dropped_subvolume(struct btrfs_fs_info *fs_info, u64 su + * Commit current transaction to make sure all the rfer/excl numbers + * get updated. + */ +- trans = btrfs_start_transaction(fs_info->quota_root, 0); +- if (IS_ERR(trans)) +- return PTR_ERR(trans); +- +- ret = btrfs_commit_transaction(trans); ++ ret = btrfs_commit_current_transaction(fs_info->quota_root); + if (ret < 0) + return ret; + +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c +index e540d05549fff8..b7944d8bfb171c 100644 +--- a/fs/fuse/dir.c ++++ b/fs/fuse/dir.c +@@ -1633,7 +1633,7 @@ static const char *fuse_get_link(struct dentry *dentry, struct inode *inode, + goto out_err; + + if (fc->cache_symlinks) +- return page_get_link(dentry, inode, callback); ++ return page_get_link_raw(dentry, inode, callback); + + err = -ECHILD; + if (!dentry) +diff --git a/fs/namei.c b/fs/namei.c +index 9d30c7aa9aa6e7..553729a29095d8 100644 +--- a/fs/namei.c ++++ b/fs/namei.c +@@ -5354,10 +5354,9 @@ const char *vfs_get_link(struct dentry *dentry, struct delayed_call *done) + EXPORT_SYMBOL(vfs_get_link); + + /* get the link contents into pagecache */ +-const char *page_get_link(struct dentry *dentry, struct inode *inode, +- struct delayed_call *callback) ++static char *__page_get_link(struct dentry *dentry, struct inode *inode, ++ struct delayed_call *callback) + { +- char *kaddr; + struct page *page; + struct address_space *mapping = inode->i_mapping; + +@@ -5376,8 +5375,23 @@ const char *page_get_link(struct dentry *dentry, struct inode *inode, + } + set_delayed_call(callback, page_put_link, page); + BUG_ON(mapping_gfp_mask(mapping) & __GFP_HIGHMEM); +- kaddr = page_address(page); +- nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1); ++ return page_address(page); ++} ++ ++const char *page_get_link_raw(struct dentry *dentry, struct inode *inode, ++ struct delayed_call *callback) ++{ ++ return __page_get_link(dentry, inode, callback); ++} ++EXPORT_SYMBOL_GPL(page_get_link_raw); ++ ++const char *page_get_link(struct dentry *dentry, struct inode *inode, ++ struct delayed_call *callback) ++{ ++ char *kaddr = __page_get_link(dentry, inode, callback); ++ ++ if (!IS_ERR(kaddr)) ++ nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1); + return kaddr; + } + +diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c +index 8878b46589ff20..cafadfe8e85879 100644 +--- a/fs/netfs/read_collect.c ++++ b/fs/netfs/read_collect.c +@@ -284,7 +284,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was + netfs_trace_donate_to_deferred_next); + } else { + next = list_next_entry(subreq, rreq_link); +- WRITE_ONCE(next->prev_donated, excess); ++ WRITE_ONCE(next->prev_donated, next->prev_donated + excess); + trace_netfs_donate(rreq, subreq, next, excess, + 
netfs_trace_donate_to_next); + } +diff --git a/fs/smb/client/asn1.c b/fs/smb/client/asn1.c +index b5724ef9f182f4..214a44509e7b99 100644 +--- a/fs/smb/client/asn1.c ++++ b/fs/smb/client/asn1.c +@@ -52,6 +52,8 @@ int cifs_neg_token_init_mech_type(void *context, size_t hdrlen, + server->sec_kerberos = true; + else if (oid == OID_ntlmssp) + server->sec_ntlmssp = true; ++ else if (oid == OID_IAKerb) ++ server->sec_iakerb = true; + else { + char buf[50]; + +diff --git a/fs/smb/client/cifs_spnego.c b/fs/smb/client/cifs_spnego.c +index 28f568b5fc2771..bc1c1e9b288add 100644 +--- a/fs/smb/client/cifs_spnego.c ++++ b/fs/smb/client/cifs_spnego.c +@@ -138,11 +138,13 @@ cifs_get_spnego_key(struct cifs_ses *sesInfo, + + dp = description + strlen(description); + +- /* for now, only sec=krb5 and sec=mskrb5 are valid */ ++ /* for now, only sec=krb5 and sec=mskrb5 and iakerb are valid */ + if (server->sec_kerberos) + sprintf(dp, ";sec=krb5"); + else if (server->sec_mskerberos) + sprintf(dp, ";sec=mskrb5"); ++ else if (server->sec_iakerb) ++ sprintf(dp, ";sec=iakerb"); + else { + cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n"); + sprintf(dp, ";sec=krb5"); +diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h +index 37b7d84e269138..774d47318b0d30 100644 +--- a/fs/smb/client/cifsglob.h ++++ b/fs/smb/client/cifsglob.h +@@ -151,6 +151,7 @@ enum securityEnum { + NTLMv2, /* Legacy NTLM auth with NTLMv2 hash */ + RawNTLMSSP, /* NTLMSSP without SPNEGO, NTLMv2 hash */ + Kerberos, /* Kerberos via SPNEGO */ ++ IAKerb, /* Kerberos proxy */ + }; + + enum upcall_target_enum { +@@ -749,6 +750,7 @@ struct TCP_Server_Info { + bool sec_kerberosu2u; /* supports U2U Kerberos */ + bool sec_kerberos; /* supports plain Kerberos */ + bool sec_mskerberos; /* supports legacy MS Kerberos */ ++ bool sec_iakerb; /* supports pass-through auth for Kerberos (krb5 proxy) */ + bool large_buf; /* is current buffer large? 
*/ + /* use SMBD connection instead of socket */ + bool rdma; +@@ -2122,6 +2124,8 @@ static inline char *get_security_type_str(enum securityEnum sectype) + return "Kerberos"; + case NTLMv2: + return "NTLMv2"; ++ case IAKerb: ++ return "IAKerb"; + default: + return "Unknown"; + } +diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c +index eaa6be4456d0ee..a5d443c42996c5 100644 +--- a/fs/smb/client/connect.c ++++ b/fs/smb/client/connect.c +@@ -1873,9 +1873,8 @@ static int match_session(struct cifs_ses *ses, + struct smb3_fs_context *ctx, + bool match_super) + { +- if (ctx->sectype != Unspecified && +- ctx->sectype != ses->sectype) +- return 0; ++ struct TCP_Server_Info *server = ses->server; ++ enum securityEnum ctx_sec, ses_sec; + + if (!match_super && ctx->dfs_root_ses != ses->dfs_root_ses) + return 0; +@@ -1887,11 +1886,20 @@ static int match_session(struct cifs_ses *ses, + if (ses->chan_max < ctx->max_channels) + return 0; + +- switch (ses->sectype) { ++ ctx_sec = server->ops->select_sectype(server, ctx->sectype); ++ ses_sec = server->ops->select_sectype(server, ses->sectype); ++ ++ if (ctx_sec != ses_sec) ++ return 0; ++ ++ switch (ctx_sec) { ++ case IAKerb: + case Kerberos: + if (!uid_eq(ctx->cred_uid, ses->cred_uid)) + return 0; + break; ++ case NTLMv2: ++ case RawNTLMSSP: + default: + /* NULL username means anonymous session */ + if (ses->user_name == NULL) { +diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c +index 49123f458d0c72..c85285bf730e90 100644 +--- a/fs/smb/client/fs_context.c ++++ b/fs/smb/client/fs_context.c +@@ -170,6 +170,7 @@ const struct fs_parameter_spec smb3_fs_parameters[] = { + fsparam_string("username", Opt_user), + fsparam_string("pass", Opt_pass), + fsparam_string("password", Opt_pass), ++ fsparam_string("pass2", Opt_pass2), + fsparam_string("password2", Opt_pass2), + fsparam_string("ip", Opt_ip), + fsparam_string("addr", Opt_ip), +@@ -1071,6 +1072,9 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + } else if (!strcmp("user", param->key) || !strcmp("username", param->key)) { + skip_parsing = true; + opt = Opt_user; ++ } else if (!strcmp("pass2", param->key) || !strcmp("password2", param->key)) { ++ skip_parsing = true; ++ opt = Opt_pass2; + } + } + +@@ -1280,21 +1284,21 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + } + break; + case Opt_acregmax: +- ctx->acregmax = HZ * result.uint_32; +- if (ctx->acregmax > CIFS_MAX_ACTIMEO) { ++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { + cifs_errorf(fc, "acregmax too large\n"); + goto cifs_parse_mount_err; + } ++ ctx->acregmax = HZ * result.uint_32; + break; + case Opt_acdirmax: +- ctx->acdirmax = HZ * result.uint_32; +- if (ctx->acdirmax > CIFS_MAX_ACTIMEO) { ++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { + cifs_errorf(fc, "acdirmax too large\n"); + goto cifs_parse_mount_err; + } ++ ctx->acdirmax = HZ * result.uint_32; + break; + case Opt_actimeo: +- if (HZ * result.uint_32 > CIFS_MAX_ACTIMEO) { ++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { + cifs_errorf(fc, "timeout too large\n"); + goto cifs_parse_mount_err; + } +@@ -1306,11 +1310,11 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + ctx->acdirmax = ctx->acregmax = HZ * result.uint_32; + break; + case Opt_closetimeo: +- ctx->closetimeo = HZ * result.uint_32; +- if (ctx->closetimeo > SMB3_MAX_DCLOSETIMEO) { ++ if (result.uint_32 > SMB3_MAX_DCLOSETIMEO / HZ) { + cifs_errorf(fc, "closetimeo too large\n"); + goto cifs_parse_mount_err; + } ++ ctx->closetimeo = HZ * result.uint_32; + 
break; + case Opt_echo_interval: + ctx->echo_interval = result.uint_32; +diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c +index 4d8effe78be57d..a1b06ca07fcc74 100644 +--- a/fs/smb/client/inode.c ++++ b/fs/smb/client/inode.c +@@ -1215,6 +1215,19 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data, + rc = server->ops->parse_reparse_point(cifs_sb, + full_path, + iov, data); ++ /* ++ * If the reparse point was not handled but it is the ++ * name surrogate which points to directory, then treat ++ * is as a new mount point. Name surrogate reparse point ++ * represents another named entity in the system. ++ */ ++ if (rc == -EOPNOTSUPP && ++ IS_REPARSE_TAG_NAME_SURROGATE(data->reparse.tag) && ++ (le32_to_cpu(data->fi.Attributes) & ATTR_DIRECTORY)) { ++ rc = 0; ++ cifs_create_junction_fattr(fattr, sb); ++ goto out; ++ } + } + break; + } +diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c +index b387dfbaf16b05..a93127c94aff91 100644 +--- a/fs/smb/client/reparse.c ++++ b/fs/smb/client/reparse.c +@@ -724,13 +724,12 @@ int parse_reparse_point(struct reparse_data_buffer *buf, + le32_to_cpu(buf->ReparseTag)); + return -EIO; + } +- break; ++ return 0; + default: + cifs_tcon_dbg(VFS | ONCE, "unhandled reparse tag: 0x%08x\n", + le32_to_cpu(buf->ReparseTag)); +- break; ++ return -EOPNOTSUPP; + } +- return 0; + } + + int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, +diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c +index 91d4d409cb1dc6..faa80e7d54a6e8 100644 +--- a/fs/smb/client/sess.c ++++ b/fs/smb/client/sess.c +@@ -1235,12 +1235,13 @@ cifs_select_sectype(struct TCP_Server_Info *server, enum securityEnum requested) + switch (requested) { + case Kerberos: + case RawNTLMSSP: ++ case IAKerb: + return requested; + case Unspecified: + if (server->sec_ntlmssp && + (global_secflags & CIFSSEC_MAY_NTLMSSP)) + return RawNTLMSSP; +- if ((server->sec_kerberos || server->sec_mskerberos) && ++ if ((server->sec_kerberos || server->sec_mskerberos || server->sec_iakerb) && + (global_secflags & CIFSSEC_MAY_KRB5)) + return Kerberos; + fallthrough; +diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c +index 10eca164c8386f..7ece98c742bdbb 100644 +--- a/fs/smb/client/smb2pdu.c ++++ b/fs/smb/client/smb2pdu.c +@@ -1429,7 +1429,7 @@ smb2_select_sectype(struct TCP_Server_Info *server, enum securityEnum requested) + if (server->sec_ntlmssp && + (global_secflags & CIFSSEC_MAY_NTLMSSP)) + return RawNTLMSSP; +- if ((server->sec_kerberos || server->sec_mskerberos) && ++ if ((server->sec_kerberos || server->sec_mskerberos || server->sec_iakerb) && + (global_secflags & CIFSSEC_MAY_KRB5)) + return Kerberos; + fallthrough; +@@ -2169,7 +2169,7 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree, + + tcon_error_exit: + if (rsp && rsp->hdr.Status == STATUS_BAD_NETWORK_NAME) +- cifs_tcon_dbg(VFS, "BAD_NETWORK_NAME: %s\n", tree); ++ cifs_dbg(VFS | ONCE, "BAD_NETWORK_NAME: %s\n", tree); + goto tcon_exit; + } + +diff --git a/fs/smb/common/smbfsctl.h b/fs/smb/common/smbfsctl.h +index 4b379e84c46b94..3253a18ecb5cbc 100644 +--- a/fs/smb/common/smbfsctl.h ++++ b/fs/smb/common/smbfsctl.h +@@ -159,6 +159,9 @@ + #define IO_REPARSE_TAG_LX_CHR 0x80000025 + #define IO_REPARSE_TAG_LX_BLK 0x80000026 + ++/* If Name Surrogate Bit is set, the file or directory represents another named entity in the system. 
*/ ++#define IS_REPARSE_TAG_NAME_SURROGATE(tag) (!!((tag) & 0x20000000)) ++ + /* fsctl flags */ + /* If Flags is set to this value, the request is an FSCTL not ioctl request */ + #define SMB2_0_IOCTL_IS_FSCTL 0x00000001 +diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c +index f8a40f65db6aee..c1f22c12911179 100644 +--- a/fs/smb/server/connection.c ++++ b/fs/smb/server/connection.c +@@ -433,6 +433,26 @@ void ksmbd_conn_init_server_callbacks(struct ksmbd_conn_ops *ops) + default_conn_ops.terminate_fn = ops->terminate_fn; + } + ++void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn) ++{ ++ atomic_inc(&conn->r_count); ++} ++ ++void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn) ++{ ++ /* ++ * Check the waitqueue to drop pending requests on ++ * disconnection. waitqueue_active() is safe because it ++ * uses an atomic operation for the condition. ++ */ ++ atomic_inc(&conn->refcnt); ++ if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) ++ wake_up(&conn->r_count_q); ++ ++ if (atomic_dec_and_test(&conn->refcnt)) ++ kfree(conn); ++} ++ + int ksmbd_conn_transport_init(void) + { + int ret; +diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h +index b379ae4fdcdffa..91c2318639e766 100644 +--- a/fs/smb/server/connection.h ++++ b/fs/smb/server/connection.h +@@ -168,6 +168,8 @@ int ksmbd_conn_transport_init(void); + void ksmbd_conn_transport_destroy(void); + void ksmbd_conn_lock(struct ksmbd_conn *conn); + void ksmbd_conn_unlock(struct ksmbd_conn *conn); ++void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn); ++void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn); + + /* + * WARNING +diff --git a/fs/smb/server/ksmbd_work.c b/fs/smb/server/ksmbd_work.c +index 4af2e6007c29d3..72b00ca6e45517 100644 +--- a/fs/smb/server/ksmbd_work.c ++++ b/fs/smb/server/ksmbd_work.c +@@ -26,7 +26,6 @@ struct ksmbd_work *ksmbd_alloc_work_struct(void) + INIT_LIST_HEAD(&work->request_entry); + INIT_LIST_HEAD(&work->async_request_entry); + INIT_LIST_HEAD(&work->fp_entry); +- INIT_LIST_HEAD(&work->interim_entry); + INIT_LIST_HEAD(&work->aux_read_list); + work->iov_alloc_cnt = 4; + work->iov = kcalloc(work->iov_alloc_cnt, sizeof(struct kvec), +@@ -56,8 +55,6 @@ void ksmbd_free_work_struct(struct ksmbd_work *work) + kfree(work->tr_buf); + kvfree(work->request_buf); + kfree(work->iov); +- if (!list_empty(&work->interim_entry)) +- list_del(&work->interim_entry); + + if (work->async_id) + ksmbd_release_id(&work->conn->async_ida, work->async_id); +diff --git a/fs/smb/server/ksmbd_work.h b/fs/smb/server/ksmbd_work.h +index 8ca2c813246e61..d36393ff8310cd 100644 +--- a/fs/smb/server/ksmbd_work.h ++++ b/fs/smb/server/ksmbd_work.h +@@ -89,7 +89,6 @@ struct ksmbd_work { + /* List head at conn->async_requests */ + struct list_head async_request_entry; + struct list_head fp_entry; +- struct list_head interim_entry; + }; + + /** +diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c +index 3a3fe4afbdf0df..28886ff1ee5776 100644 +--- a/fs/smb/server/oplock.c ++++ b/fs/smb/server/oplock.c +@@ -46,7 +46,6 @@ static struct oplock_info *alloc_opinfo(struct ksmbd_work *work, + opinfo->fid = id; + opinfo->Tid = Tid; + INIT_LIST_HEAD(&opinfo->op_entry); +- INIT_LIST_HEAD(&opinfo->interim_list); + init_waitqueue_head(&opinfo->oplock_q); + init_waitqueue_head(&opinfo->oplock_brk); + atomic_set(&opinfo->refcount, 1); +@@ -635,6 +634,7 @@ static void __smb2_oplock_break_noti(struct work_struct *wk) + { + struct smb2_oplock_break *rsp = NULL; + struct ksmbd_work *work = container_of(wk, struct 
ksmbd_work, work); ++ struct ksmbd_conn *conn = work->conn; + struct oplock_break_info *br_info = work->request_buf; + struct smb2_hdr *rsp_hdr; + struct ksmbd_file *fp; +@@ -690,6 +690,7 @@ static void __smb2_oplock_break_noti(struct work_struct *wk) + + out: + ksmbd_free_work_struct(work); ++ ksmbd_conn_r_count_dec(conn); + } + + /** +@@ -724,6 +725,7 @@ static int smb2_oplock_break_noti(struct oplock_info *opinfo) + work->sess = opinfo->sess; + + if (opinfo->op_state == OPLOCK_ACK_WAIT) { ++ ksmbd_conn_r_count_inc(conn); + INIT_WORK(&work->work, __smb2_oplock_break_noti); + ksmbd_queue_work(work); + +@@ -745,6 +747,7 @@ static void __smb2_lease_break_noti(struct work_struct *wk) + { + struct smb2_lease_break *rsp = NULL; + struct ksmbd_work *work = container_of(wk, struct ksmbd_work, work); ++ struct ksmbd_conn *conn = work->conn; + struct lease_break_info *br_info = work->request_buf; + struct smb2_hdr *rsp_hdr; + +@@ -791,6 +794,7 @@ static void __smb2_lease_break_noti(struct work_struct *wk) + + out: + ksmbd_free_work_struct(work); ++ ksmbd_conn_r_count_dec(conn); + } + + /** +@@ -803,7 +807,6 @@ static void __smb2_lease_break_noti(struct work_struct *wk) + static int smb2_lease_break_noti(struct oplock_info *opinfo) + { + struct ksmbd_conn *conn = opinfo->conn; +- struct list_head *tmp, *t; + struct ksmbd_work *work; + struct lease_break_info *br_info; + struct lease *lease = opinfo->o_lease; +@@ -831,16 +834,7 @@ static int smb2_lease_break_noti(struct oplock_info *opinfo) + work->sess = opinfo->sess; + + if (opinfo->op_state == OPLOCK_ACK_WAIT) { +- list_for_each_safe(tmp, t, &opinfo->interim_list) { +- struct ksmbd_work *in_work; +- +- in_work = list_entry(tmp, struct ksmbd_work, +- interim_entry); +- setup_async_work(in_work, NULL, NULL); +- smb2_send_interim_resp(in_work, STATUS_PENDING); +- list_del_init(&in_work->interim_entry); +- release_async_work(in_work); +- } ++ ksmbd_conn_r_count_inc(conn); + INIT_WORK(&work->work, __smb2_lease_break_noti); + ksmbd_queue_work(work); + wait_for_break_ack(opinfo); +@@ -871,7 +865,8 @@ static void wait_lease_breaking(struct oplock_info *opinfo) + } + } + +-static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level) ++static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level, ++ struct ksmbd_work *in_work) + { + int err = 0; + +@@ -914,9 +909,15 @@ static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level) + } + + if (lease->state & (SMB2_LEASE_WRITE_CACHING_LE | +- SMB2_LEASE_HANDLE_CACHING_LE)) ++ SMB2_LEASE_HANDLE_CACHING_LE)) { ++ if (in_work) { ++ setup_async_work(in_work, NULL, NULL); ++ smb2_send_interim_resp(in_work, STATUS_PENDING); ++ release_async_work(in_work); ++ } ++ + brk_opinfo->op_state = OPLOCK_ACK_WAIT; +- else ++ } else + atomic_dec(&brk_opinfo->breaking_cnt); + } else { + err = oplock_break_pending(brk_opinfo, req_op_level); +@@ -1116,7 +1117,7 @@ void smb_send_parent_lease_break_noti(struct ksmbd_file *fp, + if (ksmbd_conn_releasing(opinfo->conn)) + continue; + +- oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE); ++ oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); + opinfo_put(opinfo); + } + } +@@ -1152,7 +1153,7 @@ void smb_lazy_parent_lease_break_close(struct ksmbd_file *fp) + + if (ksmbd_conn_releasing(opinfo->conn)) + continue; +- oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE); ++ oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); + opinfo_put(opinfo); + } + } +@@ -1252,8 +1253,7 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid, + goto 
op_break_not_needed; + } + +- list_add(&work->interim_entry, &prev_opinfo->interim_list); +- err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II); ++ err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II, work); + opinfo_put(prev_opinfo); + if (err == -ENOENT) + goto set_lev; +@@ -1322,8 +1322,7 @@ static void smb_break_all_write_oplock(struct ksmbd_work *work, + } + + brk_opinfo->open_trunc = is_trunc; +- list_add(&work->interim_entry, &brk_opinfo->interim_list); +- oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II); ++ oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II, work); + opinfo_put(brk_opinfo); + } + +@@ -1386,7 +1385,7 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp, + SMB2_LEASE_KEY_SIZE)) + goto next; + brk_op->open_trunc = is_trunc; +- oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE); ++ oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE, NULL); + next: + opinfo_put(brk_op); + rcu_read_lock(); +diff --git a/fs/smb/server/oplock.h b/fs/smb/server/oplock.h +index 72bc88a63a4082..3f64f07872638e 100644 +--- a/fs/smb/server/oplock.h ++++ b/fs/smb/server/oplock.h +@@ -67,7 +67,6 @@ struct oplock_info { + bool is_lease; + bool open_trunc; /* truncate on open */ + struct lease *o_lease; +- struct list_head interim_list; + struct list_head op_entry; + struct list_head lease_entry; + wait_queue_head_t oplock_q; /* Other server threads */ +diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c +index 601e7fcbcf1e68..ab533c6029879f 100644 +--- a/fs/smb/server/server.c ++++ b/fs/smb/server/server.c +@@ -270,17 +270,7 @@ static void handle_ksmbd_work(struct work_struct *wk) + + ksmbd_conn_try_dequeue_request(work); + ksmbd_free_work_struct(work); +- /* +- * Checking waitqueue to dropping pending requests on +- * disconnection. waitqueue_active is safe because it +- * uses atomic operation for condition. 
+- */ +- atomic_inc(&conn->refcnt); +- if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) +- wake_up(&conn->r_count_q); +- +- if (atomic_dec_and_test(&conn->refcnt)) +- kfree(conn); ++ ksmbd_conn_r_count_dec(conn); + } + + /** +@@ -310,7 +300,7 @@ static int queue_ksmbd_work(struct ksmbd_conn *conn) + conn->request_buf = NULL; + + ksmbd_conn_enqueue_request(work); +- atomic_inc(&conn->r_count); ++ ksmbd_conn_r_count_inc(conn); + /* update activity on connection */ + conn->last_active = jiffies; + INIT_WORK(&work->work, handle_ksmbd_work); +diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c +index e95b8a48d8a02d..1d94bb7841081d 100644 +--- a/fs/vboxsf/super.c ++++ b/fs/vboxsf/super.c +@@ -21,7 +21,8 @@ + + #define VBOXSF_SUPER_MAGIC 0x786f4256 /* 'VBox' little endian */ + +-static const unsigned char VBSF_MOUNT_SIGNATURE[4] = "\000\377\376\375"; ++static const unsigned char VBSF_MOUNT_SIGNATURE[4] = { '\000', '\377', '\376', ++ '\375' }; + + static int follow_symlinks; + module_param(follow_symlinks, int, 0444); +diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h +index 337d3336e17565..0d5b186abee86d 100644 +--- a/include/asm-generic/vmlinux.lds.h ++++ b/include/asm-generic/vmlinux.lds.h +@@ -1038,6 +1038,7 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG) + *(.discard) \ + *(.discard.*) \ + *(.export_symbol) \ ++ *(.no_trim_symbol) \ + *(.modinfo) \ + /* ld.bfd warns about .gnu.version* even when not emitted */ \ + *(.gnu.version*) \ +diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h +index 7b19b83349cf85..07af26550a5a96 100644 +--- a/include/linux/blk-mq.h ++++ b/include/linux/blk-mq.h +@@ -863,12 +863,20 @@ static inline bool blk_mq_is_reserved_rq(struct request *rq) + return rq->rq_flags & RQF_RESV; + } + +-/* ++/** ++ * blk_mq_add_to_batch() - add a request to the completion batch ++ * @req: The request to add to the batch ++ * @iob: The batch to add the request to ++ * @is_error: Specify true if the request failed with an error ++ * @complete: The completion handler for the request ++ * + * Batched completions only work when there is no I/O error and no special + * ->end_io handler.
++ * ++ * Return: true when the request was added to the batch, otherwise false + */ + static inline bool blk_mq_add_to_batch(struct request *req, +- struct io_comp_batch *iob, int ioerror, ++ struct io_comp_batch *iob, bool is_error, + void (*complete)(struct io_comp_batch *)) + { + /* +@@ -876,7 +884,7 @@ static inline bool blk_mq_add_to_batch(struct request *req, + * 1) No batch container + * 2) Has scheduler data attached + * 3) Not a passthrough request and end_io set +- * 4) Not a passthrough request and an ioerror ++ * 4) Not a passthrough request and failed with an error + */ + if (!iob) + return false; +@@ -885,7 +893,7 @@ static inline bool blk_mq_add_to_batch(struct request *req, + if (!blk_rq_is_passthrough(req)) { + if (req->end_io) + return false; +- if (ioerror < 0) ++ if (is_error) + return false; + } + +diff --git a/include/linux/fs.h b/include/linux/fs.h +index f7efc6866ebc9a..1f07469f3d6e15 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -3353,6 +3353,8 @@ extern const struct file_operations generic_ro_fops; + + extern int readlink_copy(char __user *, int, const char *); + extern int page_readlink(struct dentry *, char __user *, int); ++extern const char *page_get_link_raw(struct dentry *, struct inode *, ++ struct delayed_call *); + extern const char *page_get_link(struct dentry *, struct inode *, + struct delayed_call *); + extern void page_put_link(void *); +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h +index 0385103dffcd97..39fa83a0f478fe 100644 +--- a/include/linux/hugetlb.h ++++ b/include/linux/hugetlb.h +@@ -681,6 +681,7 @@ struct huge_bootmem_page { + }; + + int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list); ++void wait_for_freed_hugetlb_folios(void); + struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma, + unsigned long addr, int avoid_reserve); + struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid, +@@ -1061,6 +1062,10 @@ static inline int isolate_or_dissolve_huge_page(struct page *page, + return -ENOMEM; + } + ++static inline void wait_for_freed_hugetlb_folios(void) ++{ ++} ++ + static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma, + unsigned long addr, + int avoid_reserve) +diff --git a/include/linux/module.h b/include/linux/module.h +index b3a64343535798..541e4357ea2f1f 100644 +--- a/include/linux/module.h ++++ b/include/linux/module.h +@@ -306,7 +306,10 @@ extern int modules_disabled; /* for sysctl */ + /* Get/put a kernel symbol (calls must be symmetric) */ + void *__symbol_get(const char *symbol); + void *__symbol_get_gpl(const char *symbol); +-#define symbol_get(x) ((typeof(&x))(__symbol_get(__stringify(x)))) ++#define symbol_get(x) ({ \ ++ static const char __notrim[] \ ++ __used __section(".no_trim_symbol") = __stringify(x); \ ++ (typeof(&x))(__symbol_get(__stringify(x))); }) + + /* modules using other modules: kdb wants to see this. 
*/ + struct module_use { +diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h +index de5deb1a0118fc..1a2594a38199fe 100644 +--- a/include/linux/pci_ids.h ++++ b/include/linux/pci_ids.h +@@ -3134,6 +3134,7 @@ + #define PCI_DEVICE_ID_INTEL_HDA_LNL_P 0xa828 + #define PCI_DEVICE_ID_INTEL_S21152BB 0xb152 + #define PCI_DEVICE_ID_INTEL_HDA_BMG 0xe2f7 ++#define PCI_DEVICE_ID_INTEL_HDA_PTL_H 0xe328 + #define PCI_DEVICE_ID_INTEL_HDA_PTL 0xe428 + #define PCI_DEVICE_ID_INTEL_HDA_CML_R 0xf0c8 + #define PCI_DEVICE_ID_INTEL_HDA_RKL_S 0xf1c8 +diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h +index ca22ead85dbe01..ea798f07c5a2d6 100644 +--- a/include/net/bluetooth/hci_core.h ++++ b/include/net/bluetooth/hci_core.h +@@ -804,6 +804,7 @@ struct hci_conn_params { + extern struct list_head hci_dev_list; + extern struct list_head hci_cb_list; + extern rwlock_t hci_dev_list_lock; ++extern struct mutex hci_cb_list_lock; + + #define hci_dev_set_flag(hdev, nr) set_bit((nr), (hdev)->dev_flags) + #define hci_dev_clear_flag(hdev, nr) clear_bit((nr), (hdev)->dev_flags) +@@ -2016,47 +2017,24 @@ struct hci_cb { + + char *name; + +- bool (*match) (struct hci_conn *conn); + void (*connect_cfm) (struct hci_conn *conn, __u8 status); + void (*disconn_cfm) (struct hci_conn *conn, __u8 status); + void (*security_cfm) (struct hci_conn *conn, __u8 status, +- __u8 encrypt); ++ __u8 encrypt); + void (*key_change_cfm) (struct hci_conn *conn, __u8 status); + void (*role_switch_cfm) (struct hci_conn *conn, __u8 status, __u8 role); + }; + +-static inline void hci_cb_lookup(struct hci_conn *conn, struct list_head *list) +-{ +- struct hci_cb *cb, *cpy; +- +- rcu_read_lock(); +- list_for_each_entry_rcu(cb, &hci_cb_list, list) { +- if (cb->match && cb->match(conn)) { +- cpy = kmalloc(sizeof(*cpy), GFP_ATOMIC); +- if (!cpy) +- break; +- +- *cpy = *cb; +- INIT_LIST_HEAD(&cpy->list); +- list_add_rcu(&cpy->list, list); +- } +- } +- rcu_read_unlock(); +-} +- + static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status) + { +- struct list_head list; +- struct hci_cb *cb, *tmp; +- +- INIT_LIST_HEAD(&list); +- hci_cb_lookup(conn, &list); ++ struct hci_cb *cb; + +- list_for_each_entry_safe(cb, tmp, &list, list) { ++ mutex_lock(&hci_cb_list_lock); ++ list_for_each_entry(cb, &hci_cb_list, list) { + if (cb->connect_cfm) + cb->connect_cfm(conn, status); +- kfree(cb); + } ++ mutex_unlock(&hci_cb_list_lock); + + if (conn->connect_cfm_cb) + conn->connect_cfm_cb(conn, status); +@@ -2064,43 +2042,22 @@ static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status) + + static inline void hci_disconn_cfm(struct hci_conn *conn, __u8 reason) + { +- struct list_head list; +- struct hci_cb *cb, *tmp; +- +- INIT_LIST_HEAD(&list); +- hci_cb_lookup(conn, &list); ++ struct hci_cb *cb; + +- list_for_each_entry_safe(cb, tmp, &list, list) { ++ mutex_lock(&hci_cb_list_lock); ++ list_for_each_entry(cb, &hci_cb_list, list) { + if (cb->disconn_cfm) + cb->disconn_cfm(conn, reason); +- kfree(cb); + } ++ mutex_unlock(&hci_cb_list_lock); + + if (conn->disconn_cfm_cb) + conn->disconn_cfm_cb(conn, reason); + } + +-static inline void hci_security_cfm(struct hci_conn *conn, __u8 status, +- __u8 encrypt) +-{ +- struct list_head list; +- struct hci_cb *cb, *tmp; +- +- INIT_LIST_HEAD(&list); +- hci_cb_lookup(conn, &list); +- +- list_for_each_entry_safe(cb, tmp, &list, list) { +- if (cb->security_cfm) +- cb->security_cfm(conn, status, encrypt); +- kfree(cb); +- } +- +- if (conn->security_cfm_cb) +- conn->security_cfm_cb(conn, 
status); +-} +- + static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status) + { ++ struct hci_cb *cb; + __u8 encrypt; + + if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags)) +@@ -2108,11 +2065,20 @@ static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status) + + encrypt = test_bit(HCI_CONN_ENCRYPT, &conn->flags) ? 0x01 : 0x00; + +- hci_security_cfm(conn, status, encrypt); ++ mutex_lock(&hci_cb_list_lock); ++ list_for_each_entry(cb, &hci_cb_list, list) { ++ if (cb->security_cfm) ++ cb->security_cfm(conn, status, encrypt); ++ } ++ mutex_unlock(&hci_cb_list_lock); ++ ++ if (conn->security_cfm_cb) ++ conn->security_cfm_cb(conn, status); + } + + static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status) + { ++ struct hci_cb *cb; + __u8 encrypt; + + if (conn->state == BT_CONFIG) { +@@ -2139,38 +2105,40 @@ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status) + conn->sec_level = conn->pending_sec_level; + } + +- hci_security_cfm(conn, status, encrypt); ++ mutex_lock(&hci_cb_list_lock); ++ list_for_each_entry(cb, &hci_cb_list, list) { ++ if (cb->security_cfm) ++ cb->security_cfm(conn, status, encrypt); ++ } ++ mutex_unlock(&hci_cb_list_lock); ++ ++ if (conn->security_cfm_cb) ++ conn->security_cfm_cb(conn, status); + } + + static inline void hci_key_change_cfm(struct hci_conn *conn, __u8 status) + { +- struct list_head list; +- struct hci_cb *cb, *tmp; +- +- INIT_LIST_HEAD(&list); +- hci_cb_lookup(conn, &list); ++ struct hci_cb *cb; + +- list_for_each_entry_safe(cb, tmp, &list, list) { ++ mutex_lock(&hci_cb_list_lock); ++ list_for_each_entry(cb, &hci_cb_list, list) { + if (cb->key_change_cfm) + cb->key_change_cfm(conn, status); +- kfree(cb); + } ++ mutex_unlock(&hci_cb_list_lock); + } + + static inline void hci_role_switch_cfm(struct hci_conn *conn, __u8 status, + __u8 role) + { +- struct list_head list; +- struct hci_cb *cb, *tmp; +- +- INIT_LIST_HEAD(&list); +- hci_cb_lookup(conn, &list); ++ struct hci_cb *cb; + +- list_for_each_entry_safe(cb, tmp, &list, list) { ++ mutex_lock(&hci_cb_list_lock); ++ list_for_each_entry(cb, &hci_cb_list, list) { + if (cb->role_switch_cfm) + cb->role_switch_cfm(conn, status, role); +- kfree(cb); + } ++ mutex_unlock(&hci_cb_list_lock); + } + + static inline bool hci_bdaddr_is_rpa(bdaddr_t *bdaddr, u8 addr_type) +diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h +index d9c767cf773de9..9189354c568f44 100644 +--- a/include/net/bluetooth/l2cap.h ++++ b/include/net/bluetooth/l2cap.h +@@ -668,7 +668,7 @@ struct l2cap_conn { + struct l2cap_chan *smp; + + struct list_head chan_l; +- struct mutex chan_lock; ++ struct mutex lock; + struct kref ref; + struct list_head users; + }; +@@ -970,6 +970,7 @@ void l2cap_chan_del(struct l2cap_chan *chan, int err); + void l2cap_send_conn_req(struct l2cap_chan *chan); + + struct l2cap_conn *l2cap_conn_get(struct l2cap_conn *conn); ++struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *conn); + void l2cap_conn_put(struct l2cap_conn *conn); + + int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user); +diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h +index f6958118986ac4..ea0236c938d8f8 100644 +--- a/include/net/netfilter/nf_tables.h ++++ b/include/net/netfilter/nf_tables.h +@@ -1891,7 +1891,7 @@ void nft_chain_filter_fini(void); + void __init nft_chain_route_init(void); + void nft_chain_route_fini(void); + +-void nf_tables_trans_destroy_flush_work(void); ++void nf_tables_trans_destroy_flush_work(struct net 
*net); + + int nf_msecs_to_jiffies64(const struct nlattr *nla, u64 *result); + __be64 nf_jiffies64_to_msecs(u64 input); +@@ -1905,6 +1905,7 @@ static inline int nft_request_module(struct net *net, const char *fmt, ...) { re + struct nftables_pernet { + struct list_head tables; + struct list_head commit_list; ++ struct list_head destroy_list; + struct list_head commit_set_list; + struct list_head binding_list; + struct list_head module_list; +@@ -1915,6 +1916,7 @@ struct nftables_pernet { + unsigned int base_seq; + unsigned int gc_seq; + u8 validate_state; ++ struct work_struct destroy_work; + }; + + extern unsigned int nf_tables_net_id; +diff --git a/include/sound/soc.h b/include/sound/soc.h +index 4f5d411e3823f6..355575c63a130e 100644 +--- a/include/sound/soc.h ++++ b/include/sound/soc.h +@@ -1251,7 +1251,10 @@ void snd_soc_close_delayed_work(struct snd_soc_pcm_runtime *rtd); + + /* mixer control */ + struct soc_mixer_control { +- int min, max, platform_max; ++ /* Minimum and maximum specified as written to the hardware */ ++ int min, max; ++ /* Limited maximum value specified as presented through the control */ ++ int platform_max; + int reg, rreg; + unsigned int shift, rshift; + unsigned int sign_bit; +diff --git a/init/Kconfig b/init/Kconfig +index a20e6efd3f0fbd..4c88cb58c261cc 100644 +--- a/init/Kconfig ++++ b/init/Kconfig +@@ -1959,7 +1959,7 @@ config RUST + depends on !MODVERSIONS + depends on !GCC_PLUGIN_RANDSTRUCT + depends on !RANDSTRUCT +- depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE ++ depends on !DEBUG_INFO_BTF || (PAHOLE_HAS_LANG_EXCLUDE && !LTO) + depends on !CFI_CLANG || HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC + select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG + depends on !CALL_PADDING || RUSTC_VERSION >= 108100 +diff --git a/io_uring/futex.c b/io_uring/futex.c +index e29662f039e1a1..f108da4ff863c8 100644 +--- a/io_uring/futex.c ++++ b/io_uring/futex.c +@@ -349,7 +349,7 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags) + hlist_add_head(&req->hash_node, &ctx->futex_list); + io_ring_submit_unlock(ctx, issue_flags); + +- futex_queue(&ifd->q, hb); ++ futex_queue(&ifd->q, hb, NULL); + return IOU_ISSUE_SKIP_COMPLETE; + } + +diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c +index a38f36b6806041..a2d577b099308e 100644 +--- a/io_uring/io-wq.c ++++ b/io_uring/io-wq.c +@@ -64,7 +64,7 @@ struct io_worker { + + union { + struct rcu_head rcu; +- struct work_struct work; ++ struct delayed_work work; + }; + }; + +@@ -770,6 +770,18 @@ static inline bool io_should_retry_thread(struct io_worker *worker, long err) + } + } + ++static void queue_create_worker_retry(struct io_worker *worker) ++{ ++ /* ++ * We only bother retrying because there's a chance that the ++ * failure to create a worker is due to some temporary condition ++ * in the forking task (e.g. outstanding signal); give the task ++ * some time to clear that condition. 
++ */ ++ schedule_delayed_work(&worker->work, ++ msecs_to_jiffies(worker->init_retries * 5)); ++} ++ + static void create_worker_cont(struct callback_head *cb) + { + struct io_worker *worker; +@@ -809,12 +821,13 @@ static void create_worker_cont(struct callback_head *cb) + + /* re-create attempts grab a new worker ref, drop the existing one */ + io_worker_release(worker); +- schedule_work(&worker->work); ++ queue_create_worker_retry(worker); + } + + static void io_workqueue_create(struct work_struct *work) + { +- struct io_worker *worker = container_of(work, struct io_worker, work); ++ struct io_worker *worker = container_of(work, struct io_worker, ++ work.work); + struct io_wq_acct *acct = io_wq_get_acct(worker); + + if (!io_queue_worker_create(worker, acct, create_worker_cont)) +@@ -855,8 +868,8 @@ static bool create_io_worker(struct io_wq *wq, int index) + kfree(worker); + goto fail; + } else { +- INIT_WORK(&worker->work, io_workqueue_create); +- schedule_work(&worker->work); ++ INIT_DELAYED_WORK(&worker->work, io_workqueue_create); ++ queue_create_worker_retry(worker); + } + + return true; +diff --git a/kernel/futex/core.c b/kernel/futex/core.c +index ebdd76b4ecbbac..3db8567f5a44e7 100644 +--- a/kernel/futex/core.c ++++ b/kernel/futex/core.c +@@ -532,7 +532,8 @@ void futex_q_unlock(struct futex_hash_bucket *hb) + futex_hb_waiters_dec(hb); + } + +-void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb) ++void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb, ++ struct task_struct *task) + { + int prio; + +@@ -548,7 +549,7 @@ void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb) + + plist_node_init(&q->list, prio); + plist_add(&q->list, &hb->chain); +- q->task = current; ++ q->task = task; + } + + /** +diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h +index 99b32e728c4ad2..6b2f4c7eb720fd 100644 +--- a/kernel/futex/futex.h ++++ b/kernel/futex/futex.h +@@ -285,13 +285,15 @@ static inline int futex_get_value_locked(u32 *dest, u32 __user *from) + } + + extern void __futex_unqueue(struct futex_q *q); +-extern void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb); ++extern void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb, ++ struct task_struct *task); + extern int futex_unqueue(struct futex_q *q); + + /** + * futex_queue() - Enqueue the futex_q on the futex_hash_bucket + * @q: The futex_q to enqueue + * @hb: The destination hash bucket ++ * @task: Task queueing this futex + * + * The hb->lock must be held by the caller, and is released here. A call to + * futex_queue() is typically paired with exactly one call to futex_unqueue(). The +@@ -299,11 +301,14 @@ extern int futex_unqueue(struct futex_q *q); + * or nothing if the unqueue is done as part of the wake process and the unqueue + * state is implicit in the state of woken task (see futex_wait_requeue_pi() for + * an example). ++ * ++ * Note that @task may be NULL, for async usage of futexes. 
+ */ +-static inline void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb) ++static inline void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb, ++ struct task_struct *task) + __releases(&hb->lock) + { +- __futex_queue(q, hb); ++ __futex_queue(q, hb, task); + spin_unlock(&hb->lock); + } + +diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c +index d62cca5ed8f4c8..635c7d5d42220b 100644 +--- a/kernel/futex/pi.c ++++ b/kernel/futex/pi.c +@@ -982,7 +982,7 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl + /* + * Only actually queue now that the atomic ops are done: + */ +- __futex_queue(&q, hb); ++ __futex_queue(&q, hb, current); + + if (trylock) { + ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex); +diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c +index 3a10375d952186..a9056acb75eef9 100644 +--- a/kernel/futex/waitwake.c ++++ b/kernel/futex/waitwake.c +@@ -350,7 +350,7 @@ void futex_wait_queue(struct futex_hash_bucket *hb, struct futex_q *q, + * access to the hash list and forcing another memory barrier. + */ + set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE); +- futex_queue(q, hb); ++ futex_queue(q, hb, current); + + /* Arm the timer */ + if (timeout) +@@ -461,7 +461,7 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken) + * next futex. Queue each futex at this moment so hb can + * be unlocked. + */ +- futex_queue(q, hb); ++ futex_queue(q, hb, current); + continue; + } + +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index ff98233d4aa59f..4703b08fb8823b 100644 +--- a/kernel/rcu/tree.c ++++ b/kernel/rcu/tree.c +@@ -3191,6 +3191,8 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func) + } + EXPORT_SYMBOL_GPL(call_rcu); + ++static struct workqueue_struct *rcu_reclaim_wq; ++ + /* Maximum number of jiffies to wait before draining a batch. */ + #define KFREE_DRAIN_JIFFIES (5 * HZ) + #define KFREE_N_BATCHES 2 +@@ -3519,10 +3521,10 @@ __schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) + if (delayed_work_pending(&krcp->monitor_work)) { + delay_left = krcp->monitor_work.timer.expires - jiffies; + if (delay < delay_left) +- mod_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); ++ mod_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay); + return; + } +- queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); ++ queue_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay); + } + + static void +@@ -3620,7 +3622,7 @@ kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp) + // "free channels", the batch can handle. Break + // the loop since it is done with this CPU thus + // queuing an RCU work is _always_ success here. +- queued = queue_rcu_work(system_unbound_wq, &krwp->rcu_work); ++ queued = queue_rcu_work(rcu_reclaim_wq, &krwp->rcu_work); + WARN_ON_ONCE(!queued); + break; + } +@@ -3708,7 +3710,7 @@ run_page_cache_worker(struct kfree_rcu_cpu *krcp) + if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING && + !atomic_xchg(&krcp->work_in_progress, 1)) { + if (atomic_read(&krcp->backoff_page_cache_fill)) { +- queue_delayed_work(system_unbound_wq, ++ queue_delayed_work(rcu_reclaim_wq, + &krcp->page_cache_work, + msecs_to_jiffies(rcu_delay_page_cache_fill_msec)); + } else { +@@ -5654,6 +5656,10 @@ static void __init kfree_rcu_batch_init(void) + int i, j; + struct shrinker *kfree_rcu_shrinker; + ++ rcu_reclaim_wq = alloc_workqueue("kvfree_rcu_reclaim", ++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0); ++ WARN_ON(!rcu_reclaim_wq); ++ + /* Clamp it to [0:100] seconds interval. 
*/ + if (rcu_delay_page_cache_fill_msec < 0 || + rcu_delay_page_cache_fill_msec > 100 * MSEC_PER_SEC) { +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 86cb6db0816804..4b5878b2fdd023 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -1057,9 +1057,10 @@ void wake_up_q(struct wake_q_head *head) + struct task_struct *task; + + task = container_of(node, struct task_struct, wake_q); +- /* Task can safely be re-inserted now: */ + node = node->next; +- task->wake_q.next = NULL; ++ /* pairs with cmpxchg_relaxed() in __wake_q_add() */ ++ WRITE_ONCE(task->wake_q.next, NULL); ++ /* Task can safely be re-inserted now. */ + + /* + * wake_up_process() executes a full barrier, which pairs with +diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c +index a1be00a988bf6f..5b32d3cc393bfa 100644 +--- a/kernel/sched/debug.c ++++ b/kernel/sched/debug.c +@@ -1265,6 +1265,8 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns, + if (task_has_dl_policy(p)) { + P(dl.runtime); + P(dl.deadline); ++ } else if (fair_policy(p->policy)) { ++ P(se.slice); + } + #ifdef CONFIG_SCHED_CLASS_EXT + __PS("ext.enabled", task_on_scx(p)); +diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c +index 5ccd46124ff077..b08d8691b26b63 100644 +--- a/kernel/sched/ext.c ++++ b/kernel/sched/ext.c +@@ -6376,6 +6376,9 @@ __bpf_kfunc_start_defs(); + __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, + u64 wake_flags, bool *is_idle) + { ++ if (!ops_cpu_valid(prev_cpu, NULL)) ++ goto prev_cpu; ++ + if (!static_branch_likely(&scx_builtin_idle_enabled)) { + scx_ops_error("built-in idle tracking is disabled"); + goto prev_cpu; +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index 36dd2f5c30da1e..3e7554597be248 100644 +--- a/kernel/time/hrtimer.c ++++ b/kernel/time/hrtimer.c +@@ -156,11 +156,6 @@ static struct hrtimer_cpu_base migration_cpu_base = { + + #define migration_base migration_cpu_base.clock_base[0] + +-static inline bool is_migration_base(struct hrtimer_clock_base *base) +-{ +- return base == &migration_base; +-} +- + /* + * We are using hashed locking: holding per_cpu(hrtimer_bases)[n].lock + * means that all timers which are tied to this base via timer->base are +@@ -312,11 +307,6 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base, + + #else /* CONFIG_SMP */ + +-static inline bool is_migration_base(struct hrtimer_clock_base *base) +-{ +- return false; +-} +- + static inline struct hrtimer_clock_base * + lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags) + __acquires(&timer->base->cpu_base->lock) +@@ -1432,6 +1422,18 @@ static void hrtimer_sync_wait_running(struct hrtimer_cpu_base *cpu_base, + } + } + ++#ifdef CONFIG_SMP ++static __always_inline bool is_migration_base(struct hrtimer_clock_base *base) ++{ ++ return base == &migration_base; ++} ++#else ++static __always_inline bool is_migration_base(struct hrtimer_clock_base *base) ++{ ++ return false; ++} ++#endif ++ + /* + * This function is called on PREEMPT_RT kernels when the fast path + * deletion of a timer failed because the timer callback function was +diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c +index 8800f5acc00717..2ef2e1b8009165 100644 +--- a/kernel/vhost_task.c ++++ b/kernel/vhost_task.c +@@ -133,7 +133,7 @@ struct vhost_task *vhost_task_create(bool (*fn)(void *), + + vtsk = kzalloc(sizeof(*vtsk), GFP_KERNEL); + if (!vtsk) +- return NULL; ++ return ERR_PTR(-ENOMEM); + init_completion(&vtsk->exited); + 
mutex_init(&vtsk->exit_mutex); + vtsk->data = arg; +@@ -145,7 +145,7 @@ struct vhost_task *vhost_task_create(bool (*fn)(void *), + tsk = copy_process(NULL, 0, NUMA_NO_NODE, &args); + if (IS_ERR(tsk)) { + kfree(vtsk); +- return NULL; ++ return ERR_PTR(PTR_ERR(tsk)); + } + + vtsk->task = tsk; +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 15ae955c7cbc01..6c7607d46f97c2 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -2956,6 +2956,14 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list) + return ret; + } + ++void wait_for_freed_hugetlb_folios(void) ++{ ++ if (llist_empty(&hpage_freelist)) ++ return; ++ ++ flush_work(&free_hpage_work); ++} ++ + struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma, + unsigned long addr, int avoid_reserve) + { +diff --git a/mm/page_isolation.c b/mm/page_isolation.c +index 7e04047977cfea..6989c5ffd47417 100644 +--- a/mm/page_isolation.c ++++ b/mm/page_isolation.c +@@ -611,6 +611,16 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn, + struct zone *zone; + int ret; + ++ /* ++ * Due to the deferred freeing of hugetlb folios, the hugepage folios may ++ * not immediately release to the buddy system. This can cause PageBuddy() ++ * to fail in __test_page_isolated_in_pageblock(). To ensure that the ++ * hugetlb folios are properly released back to the buddy system, we ++ * invoke the wait_for_freed_hugetlb_folios() function to wait for the ++ * release to complete. ++ */ ++ wait_for_freed_hugetlb_folios(); ++ + /* + * Note: pageblock_nr_pages != MAX_PAGE_ORDER. Then, chunks of free + * pages are not aligned to pageblock_nr_pages. +diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c +index 64e998f7d57ce4..65f81766678979 100644 +--- a/mm/userfaultfd.c ++++ b/mm/userfaultfd.c +@@ -18,6 +18,7 @@ + #include + #include + #include "internal.h" ++#include "swap.h" + + static __always_inline + bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end) +@@ -1067,15 +1068,13 @@ static int move_present_pte(struct mm_struct *mm, + return err; + } + +-static int move_swap_pte(struct mm_struct *mm, ++static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma, + unsigned long dst_addr, unsigned long src_addr, + pte_t *dst_pte, pte_t *src_pte, + pte_t orig_dst_pte, pte_t orig_src_pte, +- spinlock_t *dst_ptl, spinlock_t *src_ptl) ++ spinlock_t *dst_ptl, spinlock_t *src_ptl, ++ struct folio *src_folio) + { +- if (!pte_swp_exclusive(orig_src_pte)) +- return -EBUSY; +- + double_pt_lock(dst_ptl, src_ptl); + + if (!pte_same(ptep_get(src_pte), orig_src_pte) || +@@ -1084,6 +1083,16 @@ static int move_swap_pte(struct mm_struct *mm, + return -EAGAIN; + } + ++ /* ++ * The src_folio resides in the swapcache, requiring an update to its ++ * index and mapping to align with the dst_vma, where a swap-in may ++ * occur and hit the swapcache after moving the PTE. 
++ */ ++ if (src_folio) { ++ folio_move_anon_rmap(src_folio, dst_vma); ++ src_folio->index = linear_page_index(dst_vma, dst_addr); ++ } ++ + orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte); + set_pte_at(mm, dst_addr, dst_pte, orig_src_pte); + double_pt_unlock(dst_ptl, src_ptl); +@@ -1130,6 +1139,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, + __u64 mode) + { + swp_entry_t entry; ++ struct swap_info_struct *si = NULL; + pte_t orig_src_pte, orig_dst_pte; + pte_t src_folio_pte; + spinlock_t *src_ptl, *dst_ptl; +@@ -1264,8 +1274,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, + spin_unlock(src_ptl); + + if (!locked) { +- pte_unmap(&orig_src_pte); +- pte_unmap(&orig_dst_pte); ++ pte_unmap(src_pte); ++ pte_unmap(dst_pte); + src_pte = dst_pte = NULL; + /* now we can block and wait */ + folio_lock(src_folio); +@@ -1281,8 +1291,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, + /* at this point we have src_folio locked */ + if (folio_test_large(src_folio)) { + /* split_folio() can block */ +- pte_unmap(&orig_src_pte); +- pte_unmap(&orig_dst_pte); ++ pte_unmap(src_pte); ++ pte_unmap(dst_pte); + src_pte = dst_pte = NULL; + err = split_folio(src_folio); + if (err) +@@ -1307,8 +1317,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, + goto out; + } + if (!anon_vma_trylock_write(src_anon_vma)) { +- pte_unmap(&orig_src_pte); +- pte_unmap(&orig_dst_pte); ++ pte_unmap(src_pte); ++ pte_unmap(dst_pte); + src_pte = dst_pte = NULL; + /* now we can block and wait */ + anon_vma_lock_write(src_anon_vma); +@@ -1321,11 +1331,13 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, + orig_dst_pte, orig_src_pte, + dst_ptl, src_ptl, src_folio); + } else { ++ struct folio *folio = NULL; ++ + entry = pte_to_swp_entry(orig_src_pte); + if (non_swap_entry(entry)) { + if (is_migration_entry(entry)) { +- pte_unmap(&orig_src_pte); +- pte_unmap(&orig_dst_pte); ++ pte_unmap(src_pte); ++ pte_unmap(dst_pte); + src_pte = dst_pte = NULL; + migration_entry_wait(mm, src_pmd, src_addr); + err = -EAGAIN; +@@ -1334,10 +1346,53 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, + goto out; + } + +- err = move_swap_pte(mm, dst_addr, src_addr, +- dst_pte, src_pte, +- orig_dst_pte, orig_src_pte, +- dst_ptl, src_ptl); ++ if (!pte_swp_exclusive(orig_src_pte)) { ++ err = -EBUSY; ++ goto out; ++ } ++ ++ si = get_swap_device(entry); ++ if (unlikely(!si)) { ++ err = -EAGAIN; ++ goto out; ++ } ++ /* ++ * Verify the existence of the swapcache. If present, the folio's ++ * index and mapping must be updated even when the PTE is a swap ++ * entry. The anon_vma lock is not taken during this process since ++ * the folio has already been unmapped, and the swap entry is ++ * exclusive, preventing rmap walks. ++ * ++ * For large folios, return -EBUSY immediately, as split_folio() ++ * also returns -EBUSY when attempting to split unmapped large ++ * folios in the swapcache. This issue needs to be resolved ++ * separately to allow proper handling. 
++ */ ++ if (!src_folio) ++ folio = filemap_get_folio(swap_address_space(entry), ++ swap_cache_index(entry)); ++ if (!IS_ERR_OR_NULL(folio)) { ++ if (folio_test_large(folio)) { ++ err = -EBUSY; ++ folio_put(folio); ++ goto out; ++ } ++ src_folio = folio; ++ src_folio_pte = orig_src_pte; ++ if (!folio_trylock(src_folio)) { ++ pte_unmap(src_pte); ++ pte_unmap(dst_pte); ++ src_pte = dst_pte = NULL; ++ put_swap_device(si); ++ si = NULL; ++ /* now we can block and wait */ ++ folio_lock(src_folio); ++ goto retry; ++ } ++ } ++ err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte, ++ orig_dst_pte, orig_src_pte, ++ dst_ptl, src_ptl, src_folio); + } + + out: +@@ -1354,6 +1409,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, + if (src_pte) + pte_unmap(src_pte); + mmu_notifier_invalidate_range_end(&range); ++ if (si) ++ put_swap_device(si); + + return err; + } +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c +index 18ab5628f85adc..f9e19f9cb5a386 100644 +--- a/net/bluetooth/hci_core.c ++++ b/net/bluetooth/hci_core.c +@@ -57,6 +57,7 @@ DEFINE_RWLOCK(hci_dev_list_lock); + + /* HCI callback list */ + LIST_HEAD(hci_cb_list); ++DEFINE_MUTEX(hci_cb_list_lock); + + /* HCI ID Numbering */ + static DEFINE_IDA(hci_index_ida); +@@ -2992,7 +2993,9 @@ int hci_register_cb(struct hci_cb *cb) + { + BT_DBG("%p name %s", cb, cb->name); + +- list_add_tail_rcu(&cb->list, &hci_cb_list); ++ mutex_lock(&hci_cb_list_lock); ++ list_add_tail(&cb->list, &hci_cb_list); ++ mutex_unlock(&hci_cb_list_lock); + + return 0; + } +@@ -3002,8 +3005,9 @@ int hci_unregister_cb(struct hci_cb *cb) + { + BT_DBG("%p name %s", cb, cb->name); + +- list_del_rcu(&cb->list); +- synchronize_rcu(); ++ mutex_lock(&hci_cb_list_lock); ++ list_del(&cb->list); ++ mutex_unlock(&hci_cb_list_lock); + + return 0; + } +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index 2cc7a93063501c..903b0b52692aa6 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -3391,23 +3391,30 @@ static void hci_disconn_complete_evt(struct hci_dev *hdev, void *data, + hci_update_scan(hdev); + } + +- params = hci_conn_params_lookup(hdev, &conn->dst, conn->dst_type); +- if (params) { +- switch (params->auto_connect) { +- case HCI_AUTO_CONN_LINK_LOSS: +- if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT) ++ /* Re-enable passive scanning if disconnected device is marked ++ * as auto-connectable. 
++ */ ++ if (conn->type == LE_LINK) { ++ params = hci_conn_params_lookup(hdev, &conn->dst, ++ conn->dst_type); ++ if (params) { ++ switch (params->auto_connect) { ++ case HCI_AUTO_CONN_LINK_LOSS: ++ if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT) ++ break; ++ fallthrough; ++ ++ case HCI_AUTO_CONN_DIRECT: ++ case HCI_AUTO_CONN_ALWAYS: ++ hci_pend_le_list_del_init(params); ++ hci_pend_le_list_add(params, ++ &hdev->pend_le_conns); ++ hci_update_passive_scan(hdev); + break; +- fallthrough; + +- case HCI_AUTO_CONN_DIRECT: +- case HCI_AUTO_CONN_ALWAYS: +- hci_pend_le_list_del_init(params); +- hci_pend_le_list_add(params, &hdev->pend_le_conns); +- hci_update_passive_scan(hdev); +- break; +- +- default: +- break; ++ default: ++ break; ++ } + } + } + +diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c +index 43d0ebe1110006..73724439fe7183 100644 +--- a/net/bluetooth/iso.c ++++ b/net/bluetooth/iso.c +@@ -2151,11 +2151,6 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags) + return HCI_LM_ACCEPT; + } + +-static bool iso_match(struct hci_conn *hcon) +-{ +- return hcon->type == ISO_LINK || hcon->type == LE_LINK; +-} +- + static void iso_connect_cfm(struct hci_conn *hcon, __u8 status) + { + if (hcon->type != ISO_LINK) { +@@ -2337,7 +2332,6 @@ void iso_recv(struct hci_conn *hcon, struct sk_buff *skb, u16 flags) + + static struct hci_cb iso_cb = { + .name = "ISO", +- .match = iso_match, + .connect_cfm = iso_connect_cfm, + .disconn_cfm = iso_disconn_cfm, + }; +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index 728a5ce9b50587..c27ea70f71e1e1 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -119,7 +119,6 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn, + { + struct l2cap_chan *c; + +- mutex_lock(&conn->chan_lock); + c = __l2cap_get_chan_by_scid(conn, cid); + if (c) { + /* Only lock if chan reference is not 0 */ +@@ -127,7 +126,6 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn, + if (c) + l2cap_chan_lock(c); + } +- mutex_unlock(&conn->chan_lock); + + return c; + } +@@ -140,7 +138,6 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn, + { + struct l2cap_chan *c; + +- mutex_lock(&conn->chan_lock); + c = __l2cap_get_chan_by_dcid(conn, cid); + if (c) { + /* Only lock if chan reference is not 0 */ +@@ -148,7 +145,6 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn, + if (c) + l2cap_chan_lock(c); + } +- mutex_unlock(&conn->chan_lock); + + return c; + } +@@ -418,7 +414,7 @@ static void l2cap_chan_timeout(struct work_struct *work) + if (!conn) + return; + +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + /* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling + * this work. No need to call l2cap_chan_hold(chan) here again. 
+ */ +@@ -439,7 +435,7 @@ static void l2cap_chan_timeout(struct work_struct *work) + l2cap_chan_unlock(chan); + l2cap_chan_put(chan); + +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + } + + struct l2cap_chan *l2cap_chan_create(void) +@@ -642,9 +638,9 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) + + void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) + { +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + __l2cap_chan_add(conn, chan); +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + } + + void l2cap_chan_del(struct l2cap_chan *chan, int err) +@@ -732,9 +728,9 @@ void l2cap_chan_list(struct l2cap_conn *conn, l2cap_chan_func_t func, + if (!conn) + return; + +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + __l2cap_chan_list(conn, func, data); +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + } + + EXPORT_SYMBOL_GPL(l2cap_chan_list); +@@ -746,7 +742,7 @@ static void l2cap_conn_update_id_addr(struct work_struct *work) + struct hci_conn *hcon = conn->hcon; + struct l2cap_chan *chan; + +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + + list_for_each_entry(chan, &conn->chan_l, list) { + l2cap_chan_lock(chan); +@@ -755,7 +751,7 @@ static void l2cap_conn_update_id_addr(struct work_struct *work) + l2cap_chan_unlock(chan); + } + +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + } + + static void l2cap_chan_le_connect_reject(struct l2cap_chan *chan) +@@ -949,6 +945,16 @@ static u8 l2cap_get_ident(struct l2cap_conn *conn) + return id; + } + ++static void l2cap_send_acl(struct l2cap_conn *conn, struct sk_buff *skb, ++ u8 flags) ++{ ++ /* Check if the hcon still valid before attempting to send */ ++ if (hci_conn_valid(conn->hcon->hdev, conn->hcon)) ++ hci_send_acl(conn->hchan, skb, flags); ++ else ++ kfree_skb(skb); ++} ++ + static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len, + void *data) + { +@@ -971,7 +977,7 @@ static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len, + bt_cb(skb)->force_active = BT_POWER_FORCE_ACTIVE_ON; + skb->priority = HCI_PRIO_MAX; + +- hci_send_acl(conn->hchan, skb, flags); ++ l2cap_send_acl(conn, skb, flags); + } + + static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb) +@@ -1498,8 +1504,6 @@ static void l2cap_conn_start(struct l2cap_conn *conn) + + BT_DBG("conn %p", conn); + +- mutex_lock(&conn->chan_lock); +- + list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) { + l2cap_chan_lock(chan); + +@@ -1568,8 +1572,6 @@ static void l2cap_conn_start(struct l2cap_conn *conn) + + l2cap_chan_unlock(chan); + } +- +- mutex_unlock(&conn->chan_lock); + } + + static void l2cap_le_conn_ready(struct l2cap_conn *conn) +@@ -1615,7 +1617,7 @@ static void l2cap_conn_ready(struct l2cap_conn *conn) + if (hcon->type == ACL_LINK) + l2cap_request_info(conn); + +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + + list_for_each_entry(chan, &conn->chan_l, list) { + +@@ -1633,7 +1635,7 @@ static void l2cap_conn_ready(struct l2cap_conn *conn) + l2cap_chan_unlock(chan); + } + +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + + if (hcon->type == LE_LINK) + l2cap_le_conn_ready(conn); +@@ -1648,14 +1650,10 @@ static void l2cap_conn_unreliable(struct l2cap_conn *conn, int err) + + BT_DBG("conn %p", conn); + +- mutex_lock(&conn->chan_lock); +- + list_for_each_entry(chan, &conn->chan_l, list) { + if (test_bit(FLAG_FORCE_RELIABLE, &chan->flags)) + 
l2cap_chan_set_err(chan, err); + } +- +- mutex_unlock(&conn->chan_lock); + } + + static void l2cap_info_timeout(struct work_struct *work) +@@ -1666,7 +1664,9 @@ static void l2cap_info_timeout(struct work_struct *work) + conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_DONE; + conn->info_ident = 0; + ++ mutex_lock(&conn->lock); + l2cap_conn_start(conn); ++ mutex_unlock(&conn->lock); + } + + /* +@@ -1758,6 +1758,8 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err) + + BT_DBG("hcon %p conn %p, err %d", hcon, conn, err); + ++ mutex_lock(&conn->lock); ++ + kfree_skb(conn->rx_skb); + + skb_queue_purge(&conn->pending_rx); +@@ -1776,8 +1778,6 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err) + /* Force the connection to be immediately dropped */ + hcon->disc_timeout = 0; + +- mutex_lock(&conn->chan_lock); +- + /* Kill channels */ + list_for_each_entry_safe(chan, l, &conn->chan_l, list) { + l2cap_chan_hold(chan); +@@ -1791,15 +1791,14 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err) + l2cap_chan_put(chan); + } + +- mutex_unlock(&conn->chan_lock); +- +- hci_chan_del(conn->hchan); +- + if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT) + cancel_delayed_work_sync(&conn->info_timer); + +- hcon->l2cap_data = NULL; ++ hci_chan_del(conn->hchan); + conn->hchan = NULL; ++ ++ hcon->l2cap_data = NULL; ++ mutex_unlock(&conn->lock); + l2cap_conn_put(conn); + } + +@@ -2917,8 +2916,6 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb) + + BT_DBG("conn %p", conn); + +- mutex_lock(&conn->chan_lock); +- + list_for_each_entry(chan, &conn->chan_l, list) { + if (chan->chan_type != L2CAP_CHAN_RAW) + continue; +@@ -2933,8 +2930,6 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb) + if (chan->ops->recv(chan, nskb)) + kfree_skb(nskb); + } +- +- mutex_unlock(&conn->chan_lock); + } + + /* ---- L2CAP signalling commands ---- */ +@@ -3957,7 +3952,6 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, + goto response; + } + +- mutex_lock(&conn->chan_lock); + l2cap_chan_lock(pchan); + + /* Check if the ACL is secure enough (if not SDP) */ +@@ -4064,7 +4058,6 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd, + } + + l2cap_chan_unlock(pchan); +- mutex_unlock(&conn->chan_lock); + l2cap_chan_put(pchan); + } + +@@ -4103,27 +4096,19 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn, + BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x", + dcid, scid, result, status); + +- mutex_lock(&conn->chan_lock); +- + if (scid) { + chan = __l2cap_get_chan_by_scid(conn, scid); +- if (!chan) { +- err = -EBADSLT; +- goto unlock; +- } ++ if (!chan) ++ return -EBADSLT; + } else { + chan = __l2cap_get_chan_by_ident(conn, cmd->ident); +- if (!chan) { +- err = -EBADSLT; +- goto unlock; +- } ++ if (!chan) ++ return -EBADSLT; + } + + chan = l2cap_chan_hold_unless_zero(chan); +- if (!chan) { +- err = -EBADSLT; +- goto unlock; +- } ++ if (!chan) ++ return -EBADSLT; + + err = 0; + +@@ -4161,9 +4146,6 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn, + l2cap_chan_unlock(chan); + l2cap_chan_put(chan); + +-unlock: +- mutex_unlock(&conn->chan_lock); +- + return err; + } + +@@ -4451,11 +4433,7 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn, + + chan->ops->set_shutdown(chan); + +- l2cap_chan_unlock(chan); +- mutex_lock(&conn->chan_lock); +- l2cap_chan_lock(chan); + l2cap_chan_del(chan, ECONNRESET); +- mutex_unlock(&conn->chan_lock); + + chan->ops->close(chan); + +@@ 
-4492,11 +4470,7 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, + return 0; + } + +- l2cap_chan_unlock(chan); +- mutex_lock(&conn->chan_lock); +- l2cap_chan_lock(chan); + l2cap_chan_del(chan, 0); +- mutex_unlock(&conn->chan_lock); + + chan->ops->close(chan); + +@@ -4694,13 +4668,9 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn, + BT_DBG("dcid 0x%4.4x mtu %u mps %u credits %u result 0x%2.2x", + dcid, mtu, mps, credits, result); + +- mutex_lock(&conn->chan_lock); +- + chan = __l2cap_get_chan_by_ident(conn, cmd->ident); +- if (!chan) { +- err = -EBADSLT; +- goto unlock; +- } ++ if (!chan) ++ return -EBADSLT; + + err = 0; + +@@ -4748,9 +4718,6 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn, + + l2cap_chan_unlock(chan); + +-unlock: +- mutex_unlock(&conn->chan_lock); +- + return err; + } + +@@ -4862,7 +4829,6 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn, + goto response; + } + +- mutex_lock(&conn->chan_lock); + l2cap_chan_lock(pchan); + + if (!smp_sufficient_security(conn->hcon, pchan->sec_level, +@@ -4928,7 +4894,6 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn, + + response_unlock: + l2cap_chan_unlock(pchan); +- mutex_unlock(&conn->chan_lock); + l2cap_chan_put(pchan); + + if (result == L2CAP_CR_PEND) +@@ -5062,7 +5027,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn, + goto response; + } + +- mutex_lock(&conn->chan_lock); + l2cap_chan_lock(pchan); + + if (!smp_sufficient_security(conn->hcon, pchan->sec_level, +@@ -5137,7 +5101,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn, + + unlock: + l2cap_chan_unlock(pchan); +- mutex_unlock(&conn->chan_lock); + l2cap_chan_put(pchan); + + response: +@@ -5174,8 +5137,6 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn, + BT_DBG("mtu %u mps %u credits %u result 0x%4.4x", mtu, mps, credits, + result); + +- mutex_lock(&conn->chan_lock); +- + cmd_len -= sizeof(*rsp); + + list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) { +@@ -5261,8 +5222,6 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn, + l2cap_chan_unlock(chan); + } + +- mutex_unlock(&conn->chan_lock); +- + return err; + } + +@@ -5375,8 +5334,6 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn, + if (cmd_len < sizeof(*rej)) + return -EPROTO; + +- mutex_lock(&conn->chan_lock); +- + chan = __l2cap_get_chan_by_ident(conn, cmd->ident); + if (!chan) + goto done; +@@ -5391,7 +5348,6 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn, + l2cap_chan_put(chan); + + done: +- mutex_unlock(&conn->chan_lock); + return 0; + } + +@@ -6846,8 +6802,12 @@ static void process_pending_rx(struct work_struct *work) + + BT_DBG(""); + ++ mutex_lock(&conn->lock); ++ + while ((skb = skb_dequeue(&conn->pending_rx))) + l2cap_recv_frame(conn, skb); ++ ++ mutex_unlock(&conn->lock); + } + + static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon) +@@ -6886,7 +6846,7 @@ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon) + conn->local_fixed_chan |= L2CAP_FC_SMP_BREDR; + + mutex_init(&conn->ident_lock); +- mutex_init(&conn->chan_lock); ++ mutex_init(&conn->lock); + + INIT_LIST_HEAD(&conn->chan_l); + INIT_LIST_HEAD(&conn->users); +@@ -7077,7 +7037,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid, + } + } + +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + l2cap_chan_lock(chan); + + if (cid && __l2cap_get_chan_by_dcid(conn, cid)) { +@@ -7118,7 +7078,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 
psm, u16 cid, + + chan_unlock: + l2cap_chan_unlock(chan); +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + done: + hci_dev_unlock(hdev); + hci_dev_put(hdev); +@@ -7222,11 +7182,6 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c, + return NULL; + } + +-static bool l2cap_match(struct hci_conn *hcon) +-{ +- return hcon->type == ACL_LINK || hcon->type == LE_LINK; +-} +- + static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status) + { + struct hci_dev *hdev = hcon->hdev; +@@ -7234,6 +7189,9 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status) + struct l2cap_chan *pchan; + u8 dst_type; + ++ if (hcon->type != ACL_LINK && hcon->type != LE_LINK) ++ return; ++ + BT_DBG("hcon %p bdaddr %pMR status %d", hcon, &hcon->dst, status); + + if (status) { +@@ -7298,6 +7256,9 @@ int l2cap_disconn_ind(struct hci_conn *hcon) + + static void l2cap_disconn_cfm(struct hci_conn *hcon, u8 reason) + { ++ if (hcon->type != ACL_LINK && hcon->type != LE_LINK) ++ return; ++ + BT_DBG("hcon %p reason %d", hcon, reason); + + l2cap_conn_del(hcon, bt_to_errno(reason)); +@@ -7330,7 +7291,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) + + BT_DBG("conn %p status 0x%2.2x encrypt %u", conn, status, encrypt); + +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + + list_for_each_entry(chan, &conn->chan_l, list) { + l2cap_chan_lock(chan); +@@ -7404,7 +7365,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) + l2cap_chan_unlock(chan); + } + +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + } + + /* Append fragment into frame respecting the maximum len of rx_skb */ +@@ -7471,19 +7432,45 @@ static void l2cap_recv_reset(struct l2cap_conn *conn) + conn->rx_len = 0; + } + ++struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *c) ++{ ++ if (!c) ++ return NULL; ++ ++ BT_DBG("conn %p orig refcnt %u", c, kref_read(&c->ref)); ++ ++ if (!kref_get_unless_zero(&c->ref)) ++ return NULL; ++ ++ return c; ++} ++ + void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags) + { +- struct l2cap_conn *conn = hcon->l2cap_data; ++ struct l2cap_conn *conn; + int len; + ++ /* Lock hdev to access l2cap_data to avoid race with l2cap_conn_del */ ++ hci_dev_lock(hcon->hdev); ++ ++ conn = hcon->l2cap_data; ++ + if (!conn) + conn = l2cap_conn_add(hcon); + +- if (!conn) +- goto drop; ++ conn = l2cap_conn_hold_unless_zero(conn); ++ ++ hci_dev_unlock(hcon->hdev); ++ ++ if (!conn) { ++ kfree_skb(skb); ++ return; ++ } + + BT_DBG("conn %p len %u flags 0x%x", conn, skb->len, flags); + ++ mutex_lock(&conn->lock); ++ + switch (flags) { + case ACL_START: + case ACL_START_NO_FLUSH: +@@ -7508,7 +7495,7 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags) + if (len == skb->len) { + /* Complete frame received */ + l2cap_recv_frame(conn, skb); +- return; ++ goto unlock; + } + + BT_DBG("Start: total len %d, frag len %u", len, skb->len); +@@ -7572,11 +7559,13 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags) + + drop: + kfree_skb(skb); ++unlock: ++ mutex_unlock(&conn->lock); ++ l2cap_conn_put(conn); + } + + static struct hci_cb l2cap_cb = { + .name = "L2CAP", +- .match = l2cap_match, + .connect_cfm = l2cap_connect_cfm, + .disconn_cfm = l2cap_disconn_cfm, + .security_cfm = l2cap_security_cfm, +diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c +index 46ea0bee2259f8..acd11b268b98ad 100644 +--- a/net/bluetooth/l2cap_sock.c 
++++ b/net/bluetooth/l2cap_sock.c +@@ -1326,9 +1326,10 @@ static int l2cap_sock_shutdown(struct socket *sock, int how) + /* prevent sk structure from being freed whilst unlocked */ + sock_hold(sk); + +- chan = l2cap_pi(sk)->chan; + /* prevent chan structure from being freed whilst unlocked */ +- l2cap_chan_hold(chan); ++ chan = l2cap_chan_hold_unless_zero(l2cap_pi(sk)->chan); ++ if (!chan) ++ goto shutdown_already; + + BT_DBG("chan %p state %s", chan, state_to_string(chan->state)); + +@@ -1358,22 +1359,20 @@ static int l2cap_sock_shutdown(struct socket *sock, int how) + release_sock(sk); + + l2cap_chan_lock(chan); +- conn = chan->conn; +- if (conn) +- /* prevent conn structure from being freed */ +- l2cap_conn_get(conn); ++ /* prevent conn structure from being freed */ ++ conn = l2cap_conn_hold_unless_zero(chan->conn); + l2cap_chan_unlock(chan); + + if (conn) + /* mutex lock must be taken before l2cap_chan_lock() */ +- mutex_lock(&conn->chan_lock); ++ mutex_lock(&conn->lock); + + l2cap_chan_lock(chan); + l2cap_chan_close(chan, 0); + l2cap_chan_unlock(chan); + + if (conn) { +- mutex_unlock(&conn->chan_lock); ++ mutex_unlock(&conn->lock); + l2cap_conn_put(conn); + } + +diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c +index 4c56ca5a216c6f..ad5177e3a69b77 100644 +--- a/net/bluetooth/rfcomm/core.c ++++ b/net/bluetooth/rfcomm/core.c +@@ -2134,11 +2134,6 @@ static int rfcomm_run(void *unused) + return 0; + } + +-static bool rfcomm_match(struct hci_conn *hcon) +-{ +- return hcon->type == ACL_LINK; +-} +- + static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt) + { + struct rfcomm_session *s; +@@ -2185,7 +2180,6 @@ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt) + + static struct hci_cb rfcomm_cb = { + .name = "RFCOMM", +- .match = rfcomm_match, + .security_cfm = rfcomm_security_cfm + }; + +diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c +index aa7bfe26cb40f2..5d1bc0d6aee031 100644 +--- a/net/bluetooth/sco.c ++++ b/net/bluetooth/sco.c +@@ -107,6 +107,14 @@ static void sco_conn_put(struct sco_conn *conn) + kref_put(&conn->ref, sco_conn_free); + } + ++static struct sco_conn *sco_conn_hold(struct sco_conn *conn) ++{ ++ BT_DBG("conn %p refcnt %u", conn, kref_read(&conn->ref)); ++ ++ kref_get(&conn->ref); ++ return conn; ++} ++ + static struct sco_conn *sco_conn_hold_unless_zero(struct sco_conn *conn) + { + if (!conn) +@@ -1353,6 +1361,7 @@ static void sco_conn_ready(struct sco_conn *conn) + bacpy(&sco_pi(sk)->src, &conn->hcon->src); + bacpy(&sco_pi(sk)->dst, &conn->hcon->dst); + ++ sco_conn_hold(conn); + hci_conn_hold(conn->hcon); + __sco_chan_add(conn, sk, parent); + +@@ -1398,27 +1407,30 @@ int sco_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags) + return lm; + } + +-static bool sco_match(struct hci_conn *hcon) +-{ +- return hcon->type == SCO_LINK || hcon->type == ESCO_LINK; +-} +- + static void sco_connect_cfm(struct hci_conn *hcon, __u8 status) + { ++ if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK) ++ return; ++ + BT_DBG("hcon %p bdaddr %pMR status %u", hcon, &hcon->dst, status); + + if (!status) { + struct sco_conn *conn; + + conn = sco_conn_add(hcon); +- if (conn) ++ if (conn) { + sco_conn_ready(conn); ++ sco_conn_put(conn); ++ } + } else + sco_conn_del(hcon, bt_to_errno(status)); + } + + static void sco_disconn_cfm(struct hci_conn *hcon, __u8 reason) + { ++ if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK) ++ return; ++ + BT_DBG("hcon %p reason %d", hcon, reason); + + sco_conn_del(hcon, 
bt_to_errno(reason)); +@@ -1444,7 +1456,6 @@ void sco_recv_scodata(struct hci_conn *hcon, struct sk_buff *skb) + + static struct hci_cb sco_cb = { + .name = "SCO", +- .match = sco_match, + .connect_cfm = sco_connect_cfm, + .disconn_cfm = sco_disconn_cfm, + }; +diff --git a/net/core/dev.c b/net/core/dev.c +index 2b09714761c62a..b180d175c37f99 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -3725,6 +3725,9 @@ static struct sk_buff *validate_xmit_skb(struct sk_buff *skb, struct net_device + { + netdev_features_t features; + ++ if (!skb_frags_readable(skb)) ++ goto out_kfree_skb; ++ + features = netif_skb_features(skb); + skb = validate_xmit_vlan(skb, features); + if (unlikely(!skb)) +@@ -4610,7 +4613,7 @@ static inline void ____napi_schedule(struct softnet_data *sd, + * we have to raise NET_RX_SOFTIRQ. + */ + if (!sd->in_net_rx_action) +- __raise_softirq_irqoff(NET_RX_SOFTIRQ); ++ raise_softirq_irqoff(NET_RX_SOFTIRQ); + } + + #ifdef CONFIG_RPS +diff --git a/net/core/netpoll.c b/net/core/netpoll.c +index 96a6ed37d4ccbf..7ee754cd2e2f02 100644 +--- a/net/core/netpoll.c ++++ b/net/core/netpoll.c +@@ -319,6 +319,7 @@ static int netpoll_owner_active(struct net_device *dev) + static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) + { + netdev_tx_t status = NETDEV_TX_BUSY; ++ netdev_tx_t ret = NET_XMIT_DROP; + struct net_device *dev; + unsigned long tries; + /* It is up to the caller to keep npinfo alive. */ +@@ -327,11 +328,12 @@ static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) + lockdep_assert_irqs_disabled(); + + dev = np->dev; ++ rcu_read_lock(); + npinfo = rcu_dereference_bh(dev->npinfo); + + if (!npinfo || !netif_running(dev) || !netif_device_present(dev)) { + dev_kfree_skb_irq(skb); +- return NET_XMIT_DROP; ++ goto out; + } + + /* don't get messages out of order, and no recursion */ +@@ -370,7 +372,10 @@ static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) + skb_queue_tail(&npinfo->txq, skb); + schedule_delayed_work(&npinfo->tx_work,0); + } +- return NETDEV_TX_OK; ++ ret = NETDEV_TX_OK; ++out: ++ rcu_read_unlock(); ++ return ret; + } + + netdev_tx_t netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c +index 0e765466d7f79e..bdf39388e514bc 100644 +--- a/net/ipv6/addrconf.c ++++ b/net/ipv6/addrconf.c +@@ -3237,16 +3237,13 @@ static void add_v4_addrs(struct inet6_dev *idev) + struct in6_addr addr; + struct net_device *dev; + struct net *net = dev_net(idev->dev); +- int scope, plen, offset = 0; ++ int scope, plen; + u32 pflags = 0; + + ASSERT_RTNL(); + + memset(&addr, 0, sizeof(struct in6_addr)); +- /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */ +- if (idev->dev->addr_len == sizeof(struct in6_addr)) +- offset = sizeof(struct in6_addr) - 4; +- memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4); ++ memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4); + + if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) { + scope = IPV6_ADDR_COMPATv4; +@@ -3557,7 +3554,13 @@ static void addrconf_gre_config(struct net_device *dev) + return; + } + +- if (dev->type == ARPHRD_ETHER) { ++ /* Generate the IPv6 link-local address using addrconf_addr_gen(), ++ * unless we have an IPv4 GRE device not bound to an IP address and ++ * which is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this ++ * case). Such devices fall back to add_v4_addrs() instead. 
++ */ ++ if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 && ++ idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) { + addrconf_addr_gen(idev, true); + return; + } +diff --git a/net/mac80211/eht.c b/net/mac80211/eht.c +index 7a3116c36df9f4..fd41046e3b681b 100644 +--- a/net/mac80211/eht.c ++++ b/net/mac80211/eht.c +@@ -2,7 +2,7 @@ + /* + * EHT handling + * +- * Copyright(c) 2021-2024 Intel Corporation ++ * Copyright(c) 2021-2025 Intel Corporation + */ + + #include "ieee80211_i.h" +@@ -76,6 +76,13 @@ ieee80211_eht_cap_ie_to_sta_eht_cap(struct ieee80211_sub_if_data *sdata, + link_sta->cur_max_bandwidth = ieee80211_sta_cap_rx_bw(link_sta); + link_sta->pub->bandwidth = ieee80211_sta_cur_vht_bw(link_sta); + ++ /* ++ * The MPDU length bits are reserved on all but 2.4 GHz and get set via ++ * VHT (5 GHz) or HE (6 GHz) capabilities. ++ */ ++ if (sband->band != NL80211_BAND_2GHZ) ++ return; ++ + switch (u8_get_bits(eht_cap->eht_cap_elem.mac_cap_info[0], + IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK)) { + case IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_11454: +diff --git a/net/mac80211/util.c b/net/mac80211/util.c +index dc0b74443c8d1b..5ee7fc81ff8cfa 100644 +--- a/net/mac80211/util.c ++++ b/net/mac80211/util.c +@@ -6,7 +6,7 @@ + * Copyright 2007 Johannes Berg + * Copyright 2013-2014 Intel Mobile Communications GmbH + * Copyright (C) 2015-2017 Intel Deutschland GmbH +- * Copyright (C) 2018-2024 Intel Corporation ++ * Copyright (C) 2018-2025 Intel Corporation + * + * utilities for mac80211 + */ +@@ -2190,8 +2190,10 @@ int ieee80211_reconfig(struct ieee80211_local *local) + ieee80211_reconfig_roc(local); + + /* Requeue all works */ +- list_for_each_entry(sdata, &local->interfaces, list) +- wiphy_work_queue(local->hw.wiphy, &sdata->work); ++ list_for_each_entry(sdata, &local->interfaces, list) { ++ if (ieee80211_sdata_running(sdata)) ++ wiphy_work_queue(local->hw.wiphy, &sdata->work); ++ } + } + + ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP, +diff --git a/net/mctp/route.c b/net/mctp/route.c +index 3f2bd65ff5e3c9..4c460160914f01 100644 +--- a/net/mctp/route.c ++++ b/net/mctp/route.c +@@ -332,8 +332,14 @@ static int mctp_frag_queue(struct mctp_sk_key *key, struct sk_buff *skb) + & MCTP_HDR_SEQ_MASK; + + if (!key->reasm_head) { +- key->reasm_head = skb; +- key->reasm_tailp = &(skb_shinfo(skb)->frag_list); ++ /* Since we're manipulating the shared frag_list, ensure it isn't ++ * shared with any other SKBs. ++ */ ++ key->reasm_head = skb_unshare(skb, GFP_ATOMIC); ++ if (!key->reasm_head) ++ return -ENOMEM; ++ ++ key->reasm_tailp = &(skb_shinfo(key->reasm_head)->frag_list); + key->last_seq = this_seq; + return 0; + } +diff --git a/net/mctp/test/route-test.c b/net/mctp/test/route-test.c +index 17165b86ce22d4..06c1897b685a8b 100644 +--- a/net/mctp/test/route-test.c ++++ b/net/mctp/test/route-test.c +@@ -921,6 +921,114 @@ static void mctp_test_route_input_sk_fail_frag(struct kunit *test) + __mctp_route_test_fini(test, dev, rt, sock); + } + ++/* Input route to socket, using a fragmented message created from clones. 
++ */ ++static void mctp_test_route_input_cloned_frag(struct kunit *test) ++{ ++ /* 5 packet fragments, forming 2 complete messages */ ++ const struct mctp_hdr hdrs[5] = { ++ RX_FRAG(FL_S, 0), ++ RX_FRAG(0, 1), ++ RX_FRAG(FL_E, 2), ++ RX_FRAG(FL_S, 0), ++ RX_FRAG(FL_E, 1), ++ }; ++ struct mctp_test_route *rt; ++ struct mctp_test_dev *dev; ++ struct sk_buff *skb[5]; ++ struct sk_buff *rx_skb; ++ struct socket *sock; ++ size_t data_len; ++ u8 compare[100]; ++ u8 flat[100]; ++ size_t total; ++ void *p; ++ int rc; ++ ++ /* Arbitrary length */ ++ data_len = 3; ++ total = data_len + sizeof(struct mctp_hdr); ++ ++ __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY); ++ ++ /* Create a single skb initially with concatenated packets */ ++ skb[0] = mctp_test_create_skb(&hdrs[0], 5 * total); ++ mctp_test_skb_set_dev(skb[0], dev); ++ memset(skb[0]->data, 0 * 0x11, skb[0]->len); ++ memcpy(skb[0]->data, &hdrs[0], sizeof(struct mctp_hdr)); ++ ++ /* Extract and populate packets */ ++ for (int i = 1; i < 5; i++) { ++ skb[i] = skb_clone(skb[i - 1], GFP_ATOMIC); ++ KUNIT_ASSERT_TRUE(test, skb[i]); ++ p = skb_pull(skb[i], total); ++ KUNIT_ASSERT_TRUE(test, p); ++ skb_reset_network_header(skb[i]); ++ memcpy(skb[i]->data, &hdrs[i], sizeof(struct mctp_hdr)); ++ memset(&skb[i]->data[sizeof(struct mctp_hdr)], i * 0x11, data_len); ++ } ++ for (int i = 0; i < 5; i++) ++ skb_trim(skb[i], total); ++ ++ /* SOM packets have a type byte to match the socket */ ++ skb[0]->data[4] = 0; ++ skb[3]->data[4] = 0; ++ ++ skb_dump("pkt1 ", skb[0], false); ++ skb_dump("pkt2 ", skb[1], false); ++ skb_dump("pkt3 ", skb[2], false); ++ skb_dump("pkt4 ", skb[3], false); ++ skb_dump("pkt5 ", skb[4], false); ++ ++ for (int i = 0; i < 5; i++) { ++ KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1); ++ /* Take a reference so we can check refcounts at the end */ ++ skb_get(skb[i]); ++ } ++ ++ /* Feed the fragments into MCTP core */ ++ for (int i = 0; i < 5; i++) { ++ rc = mctp_route_input(&rt->rt, skb[i]); ++ KUNIT_EXPECT_EQ(test, rc, 0); ++ } ++ ++ /* Receive first reassembled message */ ++ rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc); ++ KUNIT_EXPECT_EQ(test, rc, 0); ++ KUNIT_EXPECT_EQ(test, rx_skb->len, 3 * data_len); ++ rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len); ++ for (int i = 0; i < rx_skb->len; i++) ++ compare[i] = (i / data_len) * 0x11; ++ /* Set type byte */ ++ compare[0] = 0; ++ ++ KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len); ++ KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1); ++ kfree_skb(rx_skb); ++ ++ /* Receive second reassembled message */ ++ rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc); ++ KUNIT_EXPECT_EQ(test, rc, 0); ++ KUNIT_EXPECT_EQ(test, rx_skb->len, 2 * data_len); ++ rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len); ++ for (int i = 0; i < rx_skb->len; i++) ++ compare[i] = (i / data_len + 3) * 0x11; ++ /* Set type byte */ ++ compare[0] = 0; ++ ++ KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len); ++ KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1); ++ kfree_skb(rx_skb); ++ ++ /* Check input skb refcounts */ ++ for (int i = 0; i < 5; i++) { ++ KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1); ++ kfree_skb(skb[i]); ++ } ++ ++ __mctp_route_test_fini(test, dev, rt, sock); ++} ++ + #if IS_ENABLED(CONFIG_MCTP_FLOWS) + + static void mctp_test_flow_init(struct kunit *test, +@@ -1144,6 +1252,7 @@ static struct kunit_case mctp_test_cases[] = { + KUNIT_CASE(mctp_test_packet_flow), + KUNIT_CASE(mctp_test_fragment_flow), + 
KUNIT_CASE(mctp_test_route_output_key_create), ++ KUNIT_CASE(mctp_test_route_input_cloned_frag), + {} + }; + +diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h +index b70a303e082878..7e2f70f22b05b6 100644 +--- a/net/mptcp/protocol.h ++++ b/net/mptcp/protocol.h +@@ -1194,6 +1194,8 @@ static inline void __mptcp_do_fallback(struct mptcp_sock *msk) + pr_debug("TCP fallback already done (msk=%p)\n", msk); + return; + } ++ if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback))) ++ return; + set_bit(MPTCP_FALLBACK_DONE, &msk->flags); + } + +diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c +index 7d13110ce18827..0633276d96bfbb 100644 +--- a/net/netfilter/ipvs/ip_vs_ctl.c ++++ b/net/netfilter/ipvs/ip_vs_ctl.c +@@ -3091,12 +3091,12 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len) + case IP_VS_SO_GET_SERVICES: + { + struct ip_vs_get_services *get; +- int size; ++ size_t size; + + get = (struct ip_vs_get_services *)arg; + size = struct_size(get, entrytable, get->num_services); + if (*len != size) { +- pr_err("length: %u != %u\n", *len, size); ++ pr_err("length: %u != %zu\n", *len, size); + ret = -EINVAL; + goto out; + } +@@ -3132,12 +3132,12 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len) + case IP_VS_SO_GET_DESTS: + { + struct ip_vs_get_dests *get; +- int size; ++ size_t size; + + get = (struct ip_vs_get_dests *)arg; + size = struct_size(get, entrytable, get->num_dests); + if (*len != size) { +- pr_err("length: %u != %u\n", *len, size); ++ pr_err("length: %u != %zu\n", *len, size); + ret = -EINVAL; + goto out; + } +diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c +index 4890af4dc263fd..913ede2f57f9a9 100644 +--- a/net/netfilter/nf_conncount.c ++++ b/net/netfilter/nf_conncount.c +@@ -132,7 +132,7 @@ static int __nf_conncount_add(struct net *net, + struct nf_conn *found_ct; + unsigned int collect = 0; + +- if (time_is_after_eq_jiffies((unsigned long)list->last_gc)) ++ if ((u32)jiffies == list->last_gc) + goto add_new_node; + + /* check the saved connections */ +@@ -234,7 +234,7 @@ bool nf_conncount_gc_list(struct net *net, + bool ret = false; + + /* don't bother if we just did GC */ +- if (time_is_after_eq_jiffies((unsigned long)READ_ONCE(list->last_gc))) ++ if ((u32)jiffies == READ_ONCE(list->last_gc)) + return false; + + /* don't bother if other cpu is already doing GC */ +@@ -377,6 +377,8 @@ insert_tree(struct net *net, + + conn->tuple = *tuple; + conn->zone = *zone; ++ conn->cpu = raw_smp_processor_id(); ++ conn->jiffies32 = (u32)jiffies; + memcpy(rbconn->key, key, sizeof(u32) * data->keylen); + + nf_conncount_list_init(&rbconn->list); +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 73e37861ff11fc..99f6b2530b96ee 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -34,7 +34,6 @@ unsigned int nf_tables_net_id __read_mostly; + static LIST_HEAD(nf_tables_expressions); + static LIST_HEAD(nf_tables_objects); + static LIST_HEAD(nf_tables_flowtables); +-static LIST_HEAD(nf_tables_destroy_list); + static LIST_HEAD(nf_tables_gc_list); + static DEFINE_SPINLOCK(nf_tables_destroy_list_lock); + static DEFINE_SPINLOCK(nf_tables_gc_list_lock); +@@ -125,7 +124,6 @@ static void nft_validate_state_update(struct nft_table *table, u8 new_validate_s + table->validate_state = new_validate_state; + } + static void nf_tables_trans_destroy_work(struct work_struct *w); +-static DECLARE_WORK(trans_destroy_work, nf_tables_trans_destroy_work); + 
+ static void nft_trans_gc_work(struct work_struct *work); + static DECLARE_WORK(trans_gc_work, nft_trans_gc_work); +@@ -10004,11 +10002,12 @@ static void nft_commit_release(struct nft_trans *trans) + + static void nf_tables_trans_destroy_work(struct work_struct *w) + { ++ struct nftables_pernet *nft_net = container_of(w, struct nftables_pernet, destroy_work); + struct nft_trans *trans, *next; + LIST_HEAD(head); + + spin_lock(&nf_tables_destroy_list_lock); +- list_splice_init(&nf_tables_destroy_list, &head); ++ list_splice_init(&nft_net->destroy_list, &head); + spin_unlock(&nf_tables_destroy_list_lock); + + if (list_empty(&head)) +@@ -10022,9 +10021,11 @@ static void nf_tables_trans_destroy_work(struct work_struct *w) + } + } + +-void nf_tables_trans_destroy_flush_work(void) ++void nf_tables_trans_destroy_flush_work(struct net *net) + { +- flush_work(&trans_destroy_work); ++ struct nftables_pernet *nft_net = nft_pernet(net); ++ ++ flush_work(&nft_net->destroy_work); + } + EXPORT_SYMBOL_GPL(nf_tables_trans_destroy_flush_work); + +@@ -10482,11 +10483,11 @@ static void nf_tables_commit_release(struct net *net) + + trans->put_net = true; + spin_lock(&nf_tables_destroy_list_lock); +- list_splice_tail_init(&nft_net->commit_list, &nf_tables_destroy_list); ++ list_splice_tail_init(&nft_net->commit_list, &nft_net->destroy_list); + spin_unlock(&nf_tables_destroy_list_lock); + + nf_tables_module_autoload_cleanup(net); +- schedule_work(&trans_destroy_work); ++ schedule_work(&nft_net->destroy_work); + + mutex_unlock(&nft_net->commit_mutex); + } +@@ -11892,7 +11893,7 @@ static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event, + + gc_seq = nft_gc_seq_begin(nft_net); + +- nf_tables_trans_destroy_flush_work(); ++ nf_tables_trans_destroy_flush_work(net); + again: + list_for_each_entry(table, &nft_net->tables, list) { + if (nft_table_has_owner(table) && +@@ -11934,6 +11935,7 @@ static int __net_init nf_tables_init_net(struct net *net) + + INIT_LIST_HEAD(&nft_net->tables); + INIT_LIST_HEAD(&nft_net->commit_list); ++ INIT_LIST_HEAD(&nft_net->destroy_list); + INIT_LIST_HEAD(&nft_net->commit_set_list); + INIT_LIST_HEAD(&nft_net->binding_list); + INIT_LIST_HEAD(&nft_net->module_list); +@@ -11942,6 +11944,7 @@ static int __net_init nf_tables_init_net(struct net *net) + nft_net->base_seq = 1; + nft_net->gc_seq = 0; + nft_net->validate_state = NFT_VALIDATE_SKIP; ++ INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work); + + return 0; + } +@@ -11970,14 +11973,17 @@ static void __net_exit nf_tables_exit_net(struct net *net) + if (!list_empty(&nft_net->module_list)) + nf_tables_module_autoload_cleanup(net); + ++ cancel_work_sync(&nft_net->destroy_work); + __nft_release_tables(net); + + nft_gc_seq_end(nft_net, gc_seq); + + mutex_unlock(&nft_net->commit_mutex); ++ + WARN_ON_ONCE(!list_empty(&nft_net->tables)); + WARN_ON_ONCE(!list_empty(&nft_net->module_list)); + WARN_ON_ONCE(!list_empty(&nft_net->notify_list)); ++ WARN_ON_ONCE(!list_empty(&nft_net->destroy_list)); + } + + static void nf_tables_exit_batch(struct list_head *net_exit_list) +@@ -12068,10 +12074,8 @@ static void __exit nf_tables_module_exit(void) + unregister_netdevice_notifier(&nf_tables_flowtable_notifier); + nft_chain_filter_fini(); + nft_chain_route_fini(); +- nf_tables_trans_destroy_flush_work(); + unregister_pernet_subsys(&nf_tables_net_ops); + cancel_work_sync(&trans_gc_work); +- cancel_work_sync(&trans_destroy_work); + rcu_barrier(); + rhltable_destroy(&nft_objname_ht); + nf_tables_core_module_exit(); +diff --git 
a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c +index 7ca4f0d21fe2a2..72711d62fddfa4 100644 +--- a/net/netfilter/nft_compat.c ++++ b/net/netfilter/nft_compat.c +@@ -228,7 +228,7 @@ static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv) + return 0; + } + +-static void nft_compat_wait_for_destructors(void) ++static void nft_compat_wait_for_destructors(struct net *net) + { + /* xtables matches or targets can have side effects, e.g. + * creation/destruction of /proc files. +@@ -236,7 +236,7 @@ static void nft_compat_wait_for_destructors(void) + * work queue. If we have pending invocations we thus + * need to wait for those to finish. + */ +- nf_tables_trans_destroy_flush_work(); ++ nf_tables_trans_destroy_flush_work(net); + } + + static int +@@ -262,7 +262,7 @@ nft_target_init(const struct nft_ctx *ctx, const struct nft_expr *expr, + + nft_target_set_tgchk_param(&par, ctx, target, info, &e, proto, inv); + +- nft_compat_wait_for_destructors(); ++ nft_compat_wait_for_destructors(ctx->net); + + ret = xt_check_target(&par, size, proto, inv); + if (ret < 0) { +@@ -515,7 +515,7 @@ __nft_match_init(const struct nft_ctx *ctx, const struct nft_expr *expr, + + nft_match_set_mtchk_param(&par, ctx, match, info, &e, proto, inv); + +- nft_compat_wait_for_destructors(); ++ nft_compat_wait_for_destructors(ctx->net); + + return xt_check_match(&par, size, proto, inv); + } +diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c +index 67a41cd2baaff0..a1b373b99f7b84 100644 +--- a/net/netfilter/nft_ct.c ++++ b/net/netfilter/nft_ct.c +@@ -230,6 +230,7 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr, + enum ip_conntrack_info ctinfo; + u16 value = nft_reg_load16(®s->data[priv->sreg]); + struct nf_conn *ct; ++ int oldcnt; + + ct = nf_ct_get(skb, &ctinfo); + if (ct) /* already tracked */ +@@ -250,10 +251,11 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr, + + ct = this_cpu_read(nft_ct_pcpu_template); + +- if (likely(refcount_read(&ct->ct_general.use) == 1)) { +- refcount_inc(&ct->ct_general.use); ++ __refcount_inc(&ct->ct_general.use, &oldcnt); ++ if (likely(oldcnt == 1)) { + nf_ct_zone_add(ct, &zone); + } else { ++ refcount_dec(&ct->ct_general.use); + /* previous skb got queued to userspace, allocate temporary + * one until percpu template can be reused. + */ +diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c +index b8d03364566c1f..c74012c9912554 100644 +--- a/net/netfilter/nft_exthdr.c ++++ b/net/netfilter/nft_exthdr.c +@@ -85,7 +85,6 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + unsigned char optbuf[sizeof(struct ip_options) + 40]; + struct ip_options *opt = (struct ip_options *)optbuf; + struct iphdr *iph, _iph; +- unsigned int start; + bool found = false; + __be32 info; + int optlen; +@@ -93,7 +92,6 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + iph = skb_header_pointer(skb, 0, sizeof(_iph), &_iph); + if (!iph) + return -EBADMSG; +- start = sizeof(struct iphdr); + + optlen = iph->ihl * 4 - (int)sizeof(struct iphdr); + if (optlen <= 0) +@@ -103,7 +101,7 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + /* Copy the options since __ip_options_compile() modifies + * the options. 
+ */ +- if (skb_copy_bits(skb, start, opt->__data, optlen)) ++ if (skb_copy_bits(skb, sizeof(struct iphdr), opt->__data, optlen)) + return -EBADMSG; + opt->optlen = optlen; + +@@ -118,18 +116,18 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb, + found = target == IPOPT_SSRR ? opt->is_strictroute : + !opt->is_strictroute; + if (found) +- *offset = opt->srr + start; ++ *offset = opt->srr; + break; + case IPOPT_RR: + if (!opt->rr) + break; +- *offset = opt->rr + start; ++ *offset = opt->rr; + found = true; + break; + case IPOPT_RA: + if (!opt->router_alert) + break; +- *offset = opt->router_alert + start; ++ *offset = opt->router_alert; + found = true; + break; + default: +diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c +index 3bb4810234aac2..e573e92213029c 100644 +--- a/net/openvswitch/conntrack.c ++++ b/net/openvswitch/conntrack.c +@@ -1368,8 +1368,11 @@ bool ovs_ct_verify(struct net *net, enum ovs_key_attr attr) + attr == OVS_KEY_ATTR_CT_MARK) + return true; + if (IS_ENABLED(CONFIG_NF_CONNTRACK_LABELS) && +- attr == OVS_KEY_ATTR_CT_LABELS) +- return true; ++ attr == OVS_KEY_ATTR_CT_LABELS) { ++ struct ovs_net *ovs_net = net_generic(net, ovs_net_id); ++ ++ return ovs_net->xt_label; ++ } + + return false; + } +@@ -1378,7 +1381,6 @@ int ovs_ct_copy_action(struct net *net, const struct nlattr *attr, + const struct sw_flow_key *key, + struct sw_flow_actions **sfa, bool log) + { +- unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE; + struct ovs_conntrack_info ct_info; + const char *helper = NULL; + u16 family; +@@ -1407,12 +1409,6 @@ int ovs_ct_copy_action(struct net *net, const struct nlattr *attr, + return -ENOMEM; + } + +- if (nf_connlabels_get(net, n_bits - 1)) { +- nf_ct_tmpl_free(ct_info.ct); +- OVS_NLERR(log, "Failed to set connlabel length"); +- return -EOPNOTSUPP; +- } +- + if (ct_info.timeout[0]) { + if (nf_ct_set_timeout(net, ct_info.ct, family, key->ip.proto, + ct_info.timeout)) +@@ -1581,7 +1577,6 @@ static void __ovs_ct_free_action(struct ovs_conntrack_info *ct_info) + if (ct_info->ct) { + if (ct_info->timeout[0]) + nf_ct_destroy_timeout(ct_info->ct); +- nf_connlabels_put(nf_ct_net(ct_info->ct)); + nf_ct_tmpl_free(ct_info->ct); + } + } +@@ -2006,9 +2001,17 @@ struct genl_family dp_ct_limit_genl_family __ro_after_init = { + + int ovs_ct_init(struct net *net) + { +-#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) ++ unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE; + struct ovs_net *ovs_net = net_generic(net, ovs_net_id); + ++ if (nf_connlabels_get(net, n_bits - 1)) { ++ ovs_net->xt_label = false; ++ OVS_NLERR(true, "Failed to set connlabel length"); ++ } else { ++ ovs_net->xt_label = true; ++ } ++ ++#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) + return ovs_ct_limit_init(net, ovs_net); + #else + return 0; +@@ -2017,9 +2020,12 @@ int ovs_ct_init(struct net *net) + + void ovs_ct_exit(struct net *net) + { +-#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) + struct ovs_net *ovs_net = net_generic(net, ovs_net_id); + ++#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) + ovs_ct_limit_exit(net, ovs_net); + #endif ++ ++ if (ovs_net->xt_label) ++ nf_connlabels_put(net); + } +diff --git a/net/openvswitch/datapath.h b/net/openvswitch/datapath.h +index 365b9bb7f546e8..9ca6231ea64703 100644 +--- a/net/openvswitch/datapath.h ++++ b/net/openvswitch/datapath.h +@@ -160,6 +160,9 @@ struct ovs_net { + #if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) + struct ovs_ct_limit_info *ct_limit_info; + #endif ++ ++ /* Module reference for configuring 
conntrack. */ ++ bool xt_label; + }; + + /** +diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c +index 881ddd3696d540..95e0dd14dc1a32 100644 +--- a/net/openvswitch/flow_netlink.c ++++ b/net/openvswitch/flow_netlink.c +@@ -2317,14 +2317,10 @@ int ovs_nla_put_mask(const struct sw_flow *flow, struct sk_buff *skb) + OVS_FLOW_ATTR_MASK, true, skb); + } + +-#define MAX_ACTIONS_BUFSIZE (32 * 1024) +- + static struct sw_flow_actions *nla_alloc_flow_actions(int size) + { + struct sw_flow_actions *sfa; + +- WARN_ON_ONCE(size > MAX_ACTIONS_BUFSIZE); +- + sfa = kmalloc(kmalloc_size_roundup(sizeof(*sfa) + size), GFP_KERNEL); + if (!sfa) + return ERR_PTR(-ENOMEM); +@@ -2480,15 +2476,6 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa, + + new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2); + +- if (new_acts_size > MAX_ACTIONS_BUFSIZE) { +- if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) { +- OVS_NLERR(log, "Flow action size exceeds max %u", +- MAX_ACTIONS_BUFSIZE); +- return ERR_PTR(-EMSGSIZE); +- } +- new_acts_size = MAX_ACTIONS_BUFSIZE; +- } +- + acts = nla_alloc_flow_actions(new_acts_size); + if (IS_ERR(acts)) + return ERR_CAST(acts); +@@ -3545,7 +3532,7 @@ int ovs_nla_copy_actions(struct net *net, const struct nlattr *attr, + int err; + u32 mpls_label_count = 0; + +- *sfa = nla_alloc_flow_actions(min(nla_len(attr), MAX_ACTIONS_BUFSIZE)); ++ *sfa = nla_alloc_flow_actions(nla_len(attr)); + if (IS_ERR(*sfa)) + return PTR_ERR(*sfa); + +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c +index fac9c946a4c75a..778cf54df69b07 100644 +--- a/net/sched/sch_api.c ++++ b/net/sched/sch_api.c +@@ -2254,6 +2254,12 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n, + return -EOPNOTSUPP; + } + ++ /* Prevent creation of traffic classes with classid TC_H_ROOT */ ++ if (clid == TC_H_ROOT) { ++ NL_SET_ERR_MSG(extack, "Cannot create traffic class with classid TC_H_ROOT"); ++ return -EINVAL; ++ } ++ + new_cl = cl; + err = -EOPNOTSUPP; + if (cops->change) +diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c +index 7d2151c62c4a14..85a32b3f6585e2 100644 +--- a/net/sched/sch_gred.c ++++ b/net/sched/sch_gred.c +@@ -913,7 +913,8 @@ static void gred_destroy(struct Qdisc *sch) + for (i = 0; i < table->DPs; i++) + gred_destroy_vq(table->tab[i]); + +- gred_offload(sch, TC_GRED_DESTROY); ++ if (table->opt) ++ gred_offload(sch, TC_GRED_DESTROY); + kfree(table->opt); + } + +diff --git a/net/sctp/stream.c b/net/sctp/stream.c +index c241cc552e8d58..bfcff6d6a43866 100644 +--- a/net/sctp/stream.c ++++ b/net/sctp/stream.c +@@ -735,7 +735,7 @@ struct sctp_chunk *sctp_process_strreset_tsnreq( + * value SHOULD be the smallest TSN not acknowledged by the + * receiver of the request plus 2^31. 
+ */ +- init_tsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map) + (1 << 31); ++ init_tsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map) + (1U << 31); + sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL, + init_tsn, GFP_ATOMIC); + +diff --git a/net/switchdev/switchdev.c b/net/switchdev/switchdev.c +index 6488ead9e46459..4d5fbacef496fd 100644 +--- a/net/switchdev/switchdev.c ++++ b/net/switchdev/switchdev.c +@@ -472,7 +472,7 @@ bool switchdev_port_obj_act_is_deferred(struct net_device *dev, + EXPORT_SYMBOL_GPL(switchdev_port_obj_act_is_deferred); + + static ATOMIC_NOTIFIER_HEAD(switchdev_notif_chain); +-static BLOCKING_NOTIFIER_HEAD(switchdev_blocking_notif_chain); ++static RAW_NOTIFIER_HEAD(switchdev_blocking_notif_chain); + + /** + * register_switchdev_notifier - Register notifier +@@ -518,17 +518,27 @@ EXPORT_SYMBOL_GPL(call_switchdev_notifiers); + + int register_switchdev_blocking_notifier(struct notifier_block *nb) + { +- struct blocking_notifier_head *chain = &switchdev_blocking_notif_chain; ++ struct raw_notifier_head *chain = &switchdev_blocking_notif_chain; ++ int err; ++ ++ rtnl_lock(); ++ err = raw_notifier_chain_register(chain, nb); ++ rtnl_unlock(); + +- return blocking_notifier_chain_register(chain, nb); ++ return err; + } + EXPORT_SYMBOL_GPL(register_switchdev_blocking_notifier); + + int unregister_switchdev_blocking_notifier(struct notifier_block *nb) + { +- struct blocking_notifier_head *chain = &switchdev_blocking_notif_chain; ++ struct raw_notifier_head *chain = &switchdev_blocking_notif_chain; ++ int err; + +- return blocking_notifier_chain_unregister(chain, nb); ++ rtnl_lock(); ++ err = raw_notifier_chain_unregister(chain, nb); ++ rtnl_unlock(); ++ ++ return err; + } + EXPORT_SYMBOL_GPL(unregister_switchdev_blocking_notifier); + +@@ -536,10 +546,11 @@ int call_switchdev_blocking_notifiers(unsigned long val, struct net_device *dev, + struct switchdev_notifier_info *info, + struct netlink_ext_ack *extack) + { ++ ASSERT_RTNL(); + info->dev = dev; + info->extack = extack; +- return blocking_notifier_call_chain(&switchdev_blocking_notif_chain, +- val, info); ++ return raw_notifier_call_chain(&switchdev_blocking_notif_chain, ++ val, info); + } + EXPORT_SYMBOL_GPL(call_switchdev_blocking_notifiers); + +diff --git a/net/wireless/core.c b/net/wireless/core.c +index afbdc549fb4a51..8baf22758ac12a 100644 +--- a/net/wireless/core.c ++++ b/net/wireless/core.c +@@ -1198,6 +1198,13 @@ void cfg80211_dev_free(struct cfg80211_registered_device *rdev) + { + struct cfg80211_internal_bss *scan, *tmp; + struct cfg80211_beacon_registration *reg, *treg; ++ unsigned long flags; ++ ++ spin_lock_irqsave(&rdev->wiphy_work_lock, flags); ++ WARN_ON(!list_empty(&rdev->wiphy_work_list)); ++ spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); ++ cancel_work_sync(&rdev->wiphy_work); ++ + rfkill_destroy(rdev->wiphy.rfkill); + list_for_each_entry_safe(reg, treg, &rdev->beacon_registrations, list) { + list_del(®->list); +diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs +index e3240d16040bd9..c37d4c0c64e9f9 100644 +--- a/rust/kernel/alloc/allocator_test.rs ++++ b/rust/kernel/alloc/allocator_test.rs +@@ -62,6 +62,24 @@ unsafe fn realloc( + )); + } + ++ // ISO C (ISO/IEC 9899:2011) defines `aligned_alloc`: ++ // ++ // > The value of alignment shall be a valid alignment supported by the implementation ++ // [...]. 
++        //
++        // As an example of the "supported by the implementation" requirement, POSIX.1-2001 (IEEE
++        // 1003.1-2001) defines `posix_memalign`:
++        //
++        // > The value of alignment shall be a power of two multiple of sizeof (void *).
++        //
++        // and POSIX-based implementations of `aligned_alloc` inherit this requirement. At the time
++        // of writing, this is known to be the case on macOS (but not in glibc).
++        //
++        // Satisfy the stricter requirement to avoid spurious test failures on some platforms.
++        let min_align = core::mem::size_of::<*const crate::ffi::c_void>();
++        let layout = layout.align_to(min_align).map_err(|_| AllocError)?;
++        let layout = layout.pad_to_align();
++
+         // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or
+         // exceeds the given size and alignment requirements.
+         let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
+diff --git a/rust/kernel/error.rs b/rust/kernel/error.rs
+index 5fece574ec023b..4911b294bfe662 100644
+--- a/rust/kernel/error.rs
++++ b/rust/kernel/error.rs
+@@ -104,7 +104,7 @@ pub fn from_errno(errno: crate::ffi::c_int) -> Error {
+     if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 {
+         // TODO: Make it a `WARN_ONCE` once available.
+         crate::pr_warn!(
+-            "attempted to create `Error` with out of range `errno`: {}",
++            "attempted to create `Error` with out of range `errno`: {}\n",
+             errno
+         );
+         return code::EINVAL;
+diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
+index c962029f96e1f1..90bfb5cb26cd7a 100644
+--- a/rust/kernel/init.rs
++++ b/rust/kernel/init.rs
+@@ -259,7 +259,7 @@
+ ///     },
+ /// }));
+ /// let foo: Pin<&mut Foo> = foo;
+-/// pr_info!("a: {}", &*foo.a.lock());
++/// pr_info!("a: {}\n", &*foo.a.lock());
+ /// ```
+ ///
+ /// # Syntax
+@@ -311,7 +311,7 @@ macro_rules! stack_pin_init {
+ ///     }, GFP_KERNEL)?,
+ /// }));
+ /// let foo = foo.unwrap();
+-/// pr_info!("a: {}", &*foo.a.lock());
++/// pr_info!("a: {}\n", &*foo.a.lock());
+ /// ```
+ ///
+ /// ```rust,ignore
+@@ -336,7 +336,7 @@ macro_rules! stack_pin_init {
+ ///         x: 64,
+ ///     }, GFP_KERNEL)?,
+ /// }));
+-/// pr_info!("a: {}", &*foo.a.lock());
++/// pr_info!("a: {}\n", &*foo.a.lock());
+ /// # Ok::<_, AllocError>(())
+ /// ```
+ ///
+@@ -866,7 +866,7 @@ pub unsafe trait PinInit<T: ?Sized, E = Infallible>: Sized {
+     ///
+     /// impl Foo {
+     ///     fn setup(self: Pin<&mut Self>) {
+-    ///         pr_info!("Setting up foo");
++    ///         pr_info!("Setting up foo\n");
+     ///     }
+     /// }
+     ///
+@@ -970,7 +970,7 @@ pub unsafe trait Init<T: ?Sized, E = Infallible>: PinInit<T, E> {
+     ///
+     /// impl Foo {
+     ///     fn setup(&mut self) {
+-    ///         pr_info!("Setting up foo");
++    ///         pr_info!("Setting up foo\n");
+     ///     }
+     /// }
+     ///
+@@ -1318,7 +1318,7 @@ fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> {
+ /// #[pinned_drop]
+ /// impl PinnedDrop for Foo {
+ ///     fn drop(self: Pin<&mut Self>) {
+-///         pr_info!("Foo is being dropped!");
++///         pr_info!("Foo is being dropped!\n");
+ ///     }
+ /// }
+ /// ```
+@@ -1400,17 +1400,14 @@ macro_rules! impl_zeroable {
+     // SAFETY: `T: Zeroable` and `UnsafeCell` is `repr(transparent)`.
+     {<T: ?Sized + Zeroable>} UnsafeCell<T>,
+ 
+-    // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee).
++    // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee:
++    // https://doc.rust-lang.org/stable/std/option/index.html#representation).
+     Option<NonZeroU8>, Option<NonZeroU16>, Option<NonZeroU32>, Option<NonZeroU64>,
+     Option<NonZeroU128>, Option<NonZeroUsize>,
+     Option<NonZeroI8>, Option<NonZeroI16>, Option<NonZeroI32>, Option<NonZeroI64>,
+     Option<NonZeroI128>, Option<NonZeroIsize>,
+-
+-    // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee).
+-    //
+-    // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant.
+-    {<T: ?Sized>} Option<NonNull<T>>,
+-    {<T: ?Sized>} Option<KBox<T>>,
++    {<T>} Option<NonNull<T>>,
++    {<T>} Option<KBox<T>>,
+ 
+     // SAFETY: `null` pointer is valid.
+     //
+diff --git a/rust/kernel/init/macros.rs b/rust/kernel/init/macros.rs
+index 1fd146a8324165..b7213962a6a5ac 100644
+--- a/rust/kernel/init/macros.rs
++++ b/rust/kernel/init/macros.rs
+@@ -45,7 +45,7 @@
+ //! #[pinned_drop]
+ //! impl PinnedDrop for Foo {
+ //!     fn drop(self: Pin<&mut Self>) {
+-//!         pr_info!("{self:p} is getting dropped.");
++//!         pr_info!("{self:p} is getting dropped.\n");
+ //!     }
+ //! }
+ //!
+@@ -412,7 +412,7 @@
+ //! #[pinned_drop]
+ //! impl PinnedDrop for Foo {
+ //!     fn drop(self: Pin<&mut Self>) {
+-//!         pr_info!("{self:p} is getting dropped.");
++//!         pr_info!("{self:p} is getting dropped.\n");
+ //!     }
+ //! }
+ //! ```
+@@ -423,7 +423,7 @@
+ //! // `unsafe`, full path and the token parameter are added, everything else stays the same.
+ //! unsafe impl ::kernel::init::PinnedDrop for Foo {
+ //!     fn drop(self: Pin<&mut Self>, _: ::kernel::init::__internal::OnlyCallFromDrop) {
+-//!         pr_info!("{self:p} is getting dropped.");
++//!         pr_info!("{self:p} is getting dropped.\n");
+ //!     }
+ //! }
+ //! ```
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index e1065a7551a39e..fb7837bafab324 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -6,7 +6,7 @@
+ //! usage by Rust code in the kernel and is shared by all of them.
+ //!
+ //! In other words, all the rest of the Rust code in the kernel (e.g. kernel
+-//! modules written in Rust) depends on [`core`], [`alloc`] and this crate.
++//! modules written in Rust) depends on [`core`] and this crate.
+ //!
+ //! If you need a kernel C API that is not ported or wrapped yet here, then
+ //! do so first instead of bypassing this crate.
+diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
+index 1eab7ebf25fd39..cb92f2c323e57e 100644
+--- a/rust/kernel/sync.rs
++++ b/rust/kernel/sync.rs
+@@ -29,28 +29,20 @@
+ unsafe impl Sync for LockClassKey {}
+ 
+ impl LockClassKey {
+-    /// Creates a new lock class key.
+-    pub const fn new() -> Self {
+-        Self(Opaque::uninit())
+-    }
+-
+     pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
+         self.0.get()
+     }
+ }
+ 
+-impl Default for LockClassKey {
+-    fn default() -> Self {
+-        Self::new()
+-    }
+-}
+-
+ /// Defines a new static lock class and returns a pointer to it.
+ #[doc(hidden)]
+ #[macro_export]
+ macro_rules! static_lock_class {
+     () => {{
+-        static CLASS: $crate::sync::LockClassKey = $crate::sync::LockClassKey::new();
++        static CLASS: $crate::sync::LockClassKey =
++            // SAFETY: lockdep expects uninitialized memory when it's handed a statically allocated
++            // lock_class_key
++            unsafe { ::core::mem::MaybeUninit::uninit().assume_init() };
+         &CLASS
+     }};
+ }
+diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py
+index 09e1d166d8d236..d1f5adbf33f91c 100755
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -49,14 +49,26 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+             }
+         })
+ 
+-    # First, the ones in `rust/` since they are a bit special.
+- append_crate( +- "core", +- sysroot_src / "core" / "src" / "lib.rs", +- [], +- cfg=crates_cfgs.get("core", []), +- is_workspace_member=False, +- ) ++ def append_sysroot_crate( ++ display_name, ++ deps, ++ cfg=[], ++ ): ++ append_crate( ++ display_name, ++ sysroot_src / display_name / "src" / "lib.rs", ++ deps, ++ cfg, ++ is_workspace_member=False, ++ ) ++ ++ # NB: sysroot crates reexport items from one another so setting up our transitive dependencies ++ # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth ++ # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`. ++ append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", [])) ++ append_sysroot_crate("alloc", ["core"]) ++ append_sysroot_crate("std", ["alloc", "core"]) ++ append_sysroot_crate("proc_macro", ["core", "std"]) + + append_crate( + "compiler_builtins", +@@ -67,7 +79,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs): + append_crate( + "macros", + srctree / "rust" / "macros" / "lib.rs", +- [], ++ ["std", "proc_macro"], + is_proc_macro=True, + ) + crates[-1]["proc_macro_dylib_path"] = f"{objtree}/rust/libmacros.so" +@@ -78,27 +90,28 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs): + ["core", "compiler_builtins"], + ) + +- append_crate( +- "bindings", +- srctree / "rust"/ "bindings" / "lib.rs", +- ["core"], +- cfg=cfg, +- ) +- crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True)) +- +- append_crate( +- "kernel", +- srctree / "rust" / "kernel" / "lib.rs", +- ["core", "macros", "build_error", "bindings"], +- cfg=cfg, +- ) +- crates[-1]["source"] = { +- "include_dirs": [ +- str(srctree / "rust" / "kernel"), +- str(objtree / "rust") +- ], +- "exclude_dirs": [], +- } ++ def append_crate_with_generated( ++ display_name, ++ deps, ++ ): ++ append_crate( ++ display_name, ++ srctree / "rust"/ display_name / "lib.rs", ++ deps, ++ cfg=cfg, ++ ) ++ crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True)) ++ crates[-1]["source"] = { ++ "include_dirs": [ ++ str(srctree / "rust" / display_name), ++ str(objtree / "rust") ++ ], ++ "exclude_dirs": [], ++ } ++ ++ append_crate_with_generated("bindings", ["core"]) ++ append_crate_with_generated("uapi", ["core"]) ++ append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"]) + + def is_root_crate(build_file, target): + try: +diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c +index 7ea59dc4926b38..967d698e0c9242 100644 +--- a/scripts/mod/modpost.c ++++ b/scripts/mod/modpost.c +@@ -503,6 +503,9 @@ static int parse_elf(struct elf_info *info, const char *filename) + info->modinfo_len = sechdrs[i].sh_size; + } else if (!strcmp(secname, ".export_symbol")) { + info->export_symbol_secndx = i; ++ } else if (!strcmp(secname, ".no_trim_symbol")) { ++ info->no_trim_symbol = (void *)hdr + sechdrs[i].sh_offset; ++ info->no_trim_symbol_len = sechdrs[i].sh_size; + } + + if (sechdrs[i].sh_type == SHT_SYMTAB) { +@@ -1562,6 +1565,14 @@ static void read_symbols(const char *modname) + /* strip trailing .o */ + mod = new_module(modname, strlen(modname) - strlen(".o")); + ++ /* save .no_trim_symbol section for later use */ ++ if (info.no_trim_symbol_len) { ++ mod->no_trim_symbol = xmalloc(info.no_trim_symbol_len); ++ memcpy(mod->no_trim_symbol, info.no_trim_symbol, ++ info.no_trim_symbol_len); ++ mod->no_trim_symbol_len = info.no_trim_symbol_len; ++ } ++ + if (!mod->is_vmlinux) { + license = get_modinfo(&info, "license"); + if (!license) +@@ 
-1724,6 +1735,28 @@ static void handle_white_list_exports(const char *white_list) + free(buf); + } + ++/* ++ * Keep symbols recorded in the .no_trim_symbol section. This is necessary to ++ * prevent CONFIG_TRIM_UNUSED_KSYMS from dropping EXPORT_SYMBOL because ++ * symbol_get() relies on the symbol being present in the ksymtab for lookups. ++ */ ++static void keep_no_trim_symbols(struct module *mod) ++{ ++ unsigned long size = mod->no_trim_symbol_len; ++ ++ for (char *s = mod->no_trim_symbol; s; s = next_string(s , &size)) { ++ struct symbol *sym; ++ ++ /* ++ * If find_symbol() returns NULL, this symbol is not provided ++ * by any module, and symbol_get() will fail. ++ */ ++ sym = find_symbol(s); ++ if (sym) ++ sym->used = true; ++ } ++} ++ + static void check_modname_len(struct module *mod) + { + const char *mod_name; +@@ -2195,6 +2228,8 @@ int main(int argc, char **argv) + read_symbols_from_files(files_source); + + list_for_each_entry(mod, &modules, list) { ++ keep_no_trim_symbols(mod); ++ + if (mod->dump_file || mod->is_vmlinux) + continue; + +diff --git a/scripts/mod/modpost.h b/scripts/mod/modpost.h +index ffd0a52a606efa..59366f456b7659 100644 +--- a/scripts/mod/modpost.h ++++ b/scripts/mod/modpost.h +@@ -111,6 +111,8 @@ struct module_alias { + * + * @dump_file: path to the .symvers file if loaded from a file + * @aliases: list head for module_aliases ++ * @no_trim_symbol: .no_trim_symbol section data ++ * @no_trim_symbol_len: length of the .no_trim_symbol section + */ + struct module { + struct list_head list; +@@ -128,6 +130,8 @@ struct module { + // Actual imported namespaces + struct list_head imported_namespaces; + struct list_head aliases; ++ char *no_trim_symbol; ++ unsigned int no_trim_symbol_len; + char name[]; + }; + +@@ -141,6 +145,8 @@ struct elf_info { + char *strtab; + char *modinfo; + unsigned int modinfo_len; ++ char *no_trim_symbol; ++ unsigned int no_trim_symbol_len; + + /* support for 32bit section numbers */ + +diff --git a/scripts/module.lds.S b/scripts/module.lds.S +index c2f80f9141d407..450f1088d5fd36 100644 +--- a/scripts/module.lds.S ++++ b/scripts/module.lds.S +@@ -16,6 +16,7 @@ SECTIONS { + *(.discard) + *(.discard.*) + *(.export_symbol) ++ *(.no_trim_symbol) + } + + __ksymtab 0 : ALIGN(8) { *(SORT(___ksymtab+*)) } +diff --git a/scripts/rustdoc_test_gen.rs b/scripts/rustdoc_test_gen.rs +index 5ebd42ae4a3fd1..76aaa8329413d8 100644 +--- a/scripts/rustdoc_test_gen.rs ++++ b/scripts/rustdoc_test_gen.rs +@@ -15,8 +15,8 @@ + //! - Test code should be able to define functions and call them, without having to carry + //! the context. + //! +-//! - Later on, we may want to be able to test non-kernel code (e.g. `core`, `alloc` or +-//! third-party crates) which likely use the standard library `assert*!` macros. ++//! - Later on, we may want to be able to test non-kernel code (e.g. `core` or third-party ++//! crates) which likely use the standard library `assert*!` macros. + //! + //! For this reason, instead of the passed context, `kunit_get_current_test()` is used instead + //! (i.e. `current->kunit_test`). 
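The .no_trim_symbol machinery above exists for modules that resolve an export by name at run time instead of linking against it. A minimal sketch of that consumer pattern in kernel C, assuming only the standard symbol_get()/symbol_put() helpers from <linux/module.h>; the exported function name here is hypothetical:

#include <linux/errno.h>
#include <linux/module.h>

/* Hypothetical export provided by some other module. */
extern int example_exported_helper(void);

static int use_optional_helper(void)
{
	int (*helper)(void);
	int ret;

	/*
	 * symbol_get() looks the export up in the ksymtab at run time and
	 * returns NULL if it is absent -- which is what would happen if
	 * CONFIG_TRIM_UNUSED_KSYMS trimmed the entry, since no relocation
	 * in this module tells the trimming pass the symbol is needed.
	 */
	helper = symbol_get(example_exported_helper);
	if (!helper)
		return -ENOENT;

	ret = helper();
	symbol_put(example_exported_helper);

	return ret;
}

Recording such symbols in the .no_trim_symbol section lets keep_no_trim_symbols() mark them used, so the ksymtab entry survives trimming and the run-time lookup above keeps working.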
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c +index f564ec7af19405..ce3ae2cba6607a 100644 +--- a/sound/hda/intel-dsp-config.c ++++ b/sound/hda/intel-dsp-config.c +@@ -539,6 +539,11 @@ static const struct config_entry config_table[] = { + .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, + .device = PCI_DEVICE_ID_INTEL_HDA_PTL, + }, ++ { ++ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, ++ .device = PCI_DEVICE_ID_INTEL_HDA_PTL_H, ++ }, ++ + #endif + + }; +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 188b64c0ed75f4..c050539d6057b7 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -2498,6 +2498,8 @@ static const struct pci_device_id azx_ids[] = { + { PCI_DEVICE_DATA(INTEL, HDA_ARL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE) }, + /* Panther Lake */ + { PCI_DEVICE_DATA(INTEL, HDA_PTL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_LNL) }, ++ /* Panther Lake-H */ ++ { PCI_DEVICE_DATA(INTEL, HDA_PTL_H, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_LNL) }, + /* Apollolake (Broxton-P) */ + { PCI_DEVICE_DATA(INTEL, HDA_APL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON) }, + /* Gemini-Lake */ +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 10e9ec74104d26..f952918191ef27 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -11096,6 +11096,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC), + SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC), + SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2), ++ SND_PCI_QUIRK(0x2014, 0x800a, "Positivo ARN50", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), + SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), + SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), + SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO), +diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c +index b16587d8f97a89..a7637056972aab 100644 +--- a/sound/soc/amd/yc/acp6x-mach.c ++++ b/sound/soc/amd/yc/acp6x-mach.c +@@ -248,6 +248,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "21M5"), + } + }, ++ { ++ .driver_data = &acp6x_card, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "21M6"), ++ } ++ }, + { + .driver_data = &acp6x_card, + .matches = { +diff --git a/sound/soc/codecs/arizona.c b/sound/soc/codecs/arizona.c +index 402b9a2ff02406..68cdb1027d0c05 100644 +--- a/sound/soc/codecs/arizona.c ++++ b/sound/soc/codecs/arizona.c +@@ -967,7 +967,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + case ARIZONA_OUT3L_ENA_SHIFT: + case ARIZONA_OUT3R_ENA_SHIFT: + priv->out_up_pending++; +- priv->out_up_delay += 17; ++ priv->out_up_delay += 17000; + break; + case ARIZONA_OUT4L_ENA_SHIFT: + case ARIZONA_OUT4R_ENA_SHIFT: +@@ -977,7 +977,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + case WM8997: + break; + default: +- priv->out_up_delay += 10; ++ priv->out_up_delay += 10000; + break; + } + break; +@@ -999,7 +999,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + if (!priv->out_up_pending && priv->out_up_delay) { + dev_dbg(component->dev, "Power up delay: %d\n", + priv->out_up_delay); +- msleep(priv->out_up_delay); ++ fsleep(priv->out_up_delay); + priv->out_up_delay = 0; + } + break; +@@ -1017,7 +1017,7 @@ int 
arizona_out_ev(struct snd_soc_dapm_widget *w, + case ARIZONA_OUT3L_ENA_SHIFT: + case ARIZONA_OUT3R_ENA_SHIFT: + priv->out_down_pending++; +- priv->out_down_delay++; ++ priv->out_down_delay += 1000; + break; + case ARIZONA_OUT4L_ENA_SHIFT: + case ARIZONA_OUT4R_ENA_SHIFT: +@@ -1028,10 +1028,10 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + break; + case WM8998: + case WM1814: +- priv->out_down_delay += 5; ++ priv->out_down_delay += 5000; + break; + default: +- priv->out_down_delay++; ++ priv->out_down_delay += 1000; + break; + } + break; +@@ -1053,7 +1053,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w, + if (!priv->out_down_pending && priv->out_down_delay) { + dev_dbg(component->dev, "Power down delay: %d\n", + priv->out_down_delay); +- msleep(priv->out_down_delay); ++ fsleep(priv->out_down_delay); + priv->out_down_delay = 0; + } + break; +diff --git a/sound/soc/codecs/cs42l43.c b/sound/soc/codecs/cs42l43.c +index 83c21c17fb80b4..b1969536862ba8 100644 +--- a/sound/soc/codecs/cs42l43.c ++++ b/sound/soc/codecs/cs42l43.c +@@ -1146,7 +1146,7 @@ static const struct snd_kcontrol_new cs42l43_controls[] = { + + SOC_DOUBLE_R_SX_TLV("ADC Volume", CS42L43_ADC_B_CTRL1, CS42L43_ADC_B_CTRL2, + CS42L43_ADC_PGA_GAIN_SHIFT, +- 0xF, 5, cs42l43_adc_tlv), ++ 0xF, 4, cs42l43_adc_tlv), + + SOC_DOUBLE("PDM1 Invert Switch", CS42L43_DMIC_PDM_CTRL, + CS42L43_PDM1L_INV_SHIFT, CS42L43_PDM1R_INV_SHIFT, 1, 0), +diff --git a/sound/soc/codecs/madera.c b/sound/soc/codecs/madera.c +index b24d6472ad5fc9..fbfd7fb7f1685c 100644 +--- a/sound/soc/codecs/madera.c ++++ b/sound/soc/codecs/madera.c +@@ -2322,10 +2322,10 @@ int madera_out_ev(struct snd_soc_dapm_widget *w, + case CS42L92: + case CS47L92: + case CS47L93: +- out_up_delay = 6; ++ out_up_delay = 6000; + break; + default: +- out_up_delay = 17; ++ out_up_delay = 17000; + break; + } + +@@ -2356,7 +2356,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w, + case MADERA_OUT3R_ENA_SHIFT: + priv->out_up_pending--; + if (!priv->out_up_pending) { +- msleep(priv->out_up_delay); ++ fsleep(priv->out_up_delay); + priv->out_up_delay = 0; + } + break; +@@ -2375,7 +2375,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w, + case MADERA_OUT3L_ENA_SHIFT: + case MADERA_OUT3R_ENA_SHIFT: + priv->out_down_pending++; +- priv->out_down_delay++; ++ priv->out_down_delay += 1000; + break; + default: + break; +@@ -2392,7 +2392,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w, + case MADERA_OUT3R_ENA_SHIFT: + priv->out_down_pending--; + if (!priv->out_down_pending) { +- msleep(priv->out_down_delay); ++ fsleep(priv->out_down_delay); + priv->out_down_delay = 0; + } + break; +diff --git a/sound/soc/codecs/rt722-sdca-sdw.c b/sound/soc/codecs/rt722-sdca-sdw.c +index 25fc13687bc83c..4d3043627bd04e 100644 +--- a/sound/soc/codecs/rt722-sdca-sdw.c ++++ b/sound/soc/codecs/rt722-sdca-sdw.c +@@ -86,6 +86,10 @@ static bool rt722_sdca_mbq_readable_register(struct device *dev, unsigned int re + case 0x6100067: + case 0x6100070 ... 0x610007c: + case 0x6100080: ++ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN, ++ CH_01) ... 
++ SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN, ++ CH_04): + case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME, + CH_01): + case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME, +diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c +index d482cd194c08c5..58315eab492a16 100644 +--- a/sound/soc/codecs/tas2764.c ++++ b/sound/soc/codecs/tas2764.c +@@ -365,7 +365,7 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) + { + struct snd_soc_component *component = dai->component; + struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component); +- u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0; ++ u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0, asi_cfg_4 = 0; + int ret; + + switch (fmt & SND_SOC_DAIFMT_INV_MASK) { +@@ -374,12 +374,14 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) + fallthrough; + case SND_SOC_DAIFMT_NB_NF: + asi_cfg_1 = TAS2764_TDM_CFG1_RX_RISING; ++ asi_cfg_4 = TAS2764_TDM_CFG4_TX_FALLING; + break; + case SND_SOC_DAIFMT_IB_IF: + asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START; + fallthrough; + case SND_SOC_DAIFMT_IB_NF: + asi_cfg_1 = TAS2764_TDM_CFG1_RX_FALLING; ++ asi_cfg_4 = TAS2764_TDM_CFG4_TX_RISING; + break; + } + +@@ -389,6 +391,12 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) + if (ret < 0) + return ret; + ++ ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG4, ++ TAS2764_TDM_CFG4_TX_MASK, ++ asi_cfg_4); ++ if (ret < 0) ++ return ret; ++ + switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { + case SND_SOC_DAIFMT_I2S: + asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START; +diff --git a/sound/soc/codecs/tas2764.h b/sound/soc/codecs/tas2764.h +index 168af772a898ff..9490f2686e3891 100644 +--- a/sound/soc/codecs/tas2764.h ++++ b/sound/soc/codecs/tas2764.h +@@ -25,7 +25,7 @@ + + /* Power Control */ + #define TAS2764_PWR_CTRL TAS2764_REG(0X0, 0x02) +-#define TAS2764_PWR_CTRL_MASK GENMASK(1, 0) ++#define TAS2764_PWR_CTRL_MASK GENMASK(2, 0) + #define TAS2764_PWR_CTRL_ACTIVE 0x0 + #define TAS2764_PWR_CTRL_MUTE BIT(0) + #define TAS2764_PWR_CTRL_SHUTDOWN BIT(1) +@@ -79,6 +79,12 @@ + #define TAS2764_TDM_CFG3_RXS_SHIFT 0x4 + #define TAS2764_TDM_CFG3_MASK GENMASK(3, 0) + ++/* TDM Configuration Reg4 */ ++#define TAS2764_TDM_CFG4 TAS2764_REG(0X0, 0x0d) ++#define TAS2764_TDM_CFG4_TX_MASK BIT(0) ++#define TAS2764_TDM_CFG4_TX_RISING 0x0 ++#define TAS2764_TDM_CFG4_TX_FALLING BIT(0) ++ + /* TDM Configuration Reg5 */ + #define TAS2764_TDM_CFG5 TAS2764_REG(0X0, 0x0e) + #define TAS2764_TDM_CFG5_VSNS_MASK BIT(6) +diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c +index 9f93b230652a5d..863c3f672ba98d 100644 +--- a/sound/soc/codecs/tas2770.c ++++ b/sound/soc/codecs/tas2770.c +@@ -506,7 +506,7 @@ static int tas2770_codec_probe(struct snd_soc_component *component) + } + + static DECLARE_TLV_DB_SCALE(tas2770_digital_tlv, 1100, 50, 0); +-static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -12750, 50, 0); ++static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -10050, 50, 0); + + static const struct snd_kcontrol_new tas2770_snd_controls[] = { + SOC_SINGLE_TLV("Speaker Playback Volume", TAS2770_PLAY_CFG_REG2, +diff --git a/sound/soc/codecs/wm0010.c b/sound/soc/codecs/wm0010.c +index edd2cb185c42cf..9e67fbfc2ccaf8 100644 +--- a/sound/soc/codecs/wm0010.c ++++ b/sound/soc/codecs/wm0010.c +@@ -920,7 +920,7 @@ static int wm0010_spi_probe(struct spi_device *spi) + if (ret) { + 
dev_err(wm0010->dev, "Failed to set IRQ %d as wake source: %d\n", + irq, ret); +- return ret; ++ goto free_irq; + } + + if (spi->max_speed_hz) +@@ -932,9 +932,18 @@ static int wm0010_spi_probe(struct spi_device *spi) + &soc_component_dev_wm0010, wm0010_dai, + ARRAY_SIZE(wm0010_dai)); + if (ret < 0) +- return ret; ++ goto disable_irq_wake; + + return 0; ++ ++disable_irq_wake: ++ irq_set_irq_wake(wm0010->irq, 0); ++ ++free_irq: ++ if (wm0010->irq) ++ free_irq(wm0010->irq, wm0010); ++ ++ return ret; + } + + static void wm0010_spi_remove(struct spi_device *spi) +diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c +index 502196253d42a9..64eee0d2347da1 100644 +--- a/sound/soc/codecs/wm5110.c ++++ b/sound/soc/codecs/wm5110.c +@@ -302,7 +302,7 @@ static int wm5110_hp_pre_enable(struct snd_soc_dapm_widget *w) + } else { + wseq = wm5110_no_dre_left_enable; + nregs = ARRAY_SIZE(wm5110_no_dre_left_enable); +- priv->out_up_delay += 10; ++ priv->out_up_delay += 10000; + } + break; + case ARIZONA_OUT1R_ENA_SHIFT: +@@ -312,7 +312,7 @@ static int wm5110_hp_pre_enable(struct snd_soc_dapm_widget *w) + } else { + wseq = wm5110_no_dre_right_enable; + nregs = ARRAY_SIZE(wm5110_no_dre_right_enable); +- priv->out_up_delay += 10; ++ priv->out_up_delay += 10000; + } + break; + default: +@@ -338,7 +338,7 @@ static int wm5110_hp_pre_disable(struct snd_soc_dapm_widget *w) + snd_soc_component_update_bits(component, + ARIZONA_SPARE_TRIGGERS, + ARIZONA_WS_TRG1, 0); +- priv->out_down_delay += 27; ++ priv->out_down_delay += 27000; + } + break; + case ARIZONA_OUT1R_ENA_SHIFT: +@@ -350,7 +350,7 @@ static int wm5110_hp_pre_disable(struct snd_soc_dapm_widget *w) + snd_soc_component_update_bits(component, + ARIZONA_SPARE_TRIGGERS, + ARIZONA_WS_TRG2, 0); +- priv->out_down_delay += 27; ++ priv->out_down_delay += 27000; + } + break; + default: +diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c +index 24b371f3206632..5b6c2f7f288a0c 100644 +--- a/sound/soc/generic/simple-card-utils.c ++++ b/sound/soc/generic/simple-card-utils.c +@@ -1103,6 +1103,7 @@ int graph_util_parse_dai(struct device *dev, struct device_node *ep, + args.np = ep; + dai = snd_soc_get_dai_via_args(&args); + if (dai) { ++ dlc->of_node = node; + dlc->dai_name = snd_soc_dai_name_get(dai); + dlc->dai_args = snd_soc_copy_dai_args(dev, &args); + if (!dlc->dai_args) +diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c +index 65e55c46fb0645..f61af1b924b1bf 100644 +--- a/sound/soc/intel/boards/sof_sdw.c ++++ b/sound/soc/intel/boards/sof_sdw.c +@@ -13,6 +13,7 @@ + #include + #include + #include ++#include + #include + #include "sof_sdw_common.h" + #include "../../codecs/rt711.h" +@@ -746,9 +747,36 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = { + }, + .driver_data = (void *)(SOC_SDW_PCH_DMIC), + }, ++ { ++ .callback = sof_sdw_quirk_cb, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Google"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Fatcat"), ++ }, ++ .driver_data = (void *)(SOC_SDW_PCH_DMIC | ++ SOF_BT_OFFLOAD_SSP(2) | ++ SOF_SSP_BT_OFFLOAD_PRESENT), ++ }, ++ {} ++}; ++ ++static const struct snd_pci_quirk sof_sdw_ssid_quirk_table[] = { ++ SND_PCI_QUIRK(0x1043, 0x1e13, "ASUS Zenbook S14", SOC_SDW_CODEC_MIC), + {} + }; + ++static void sof_sdw_check_ssid_quirk(const struct snd_soc_acpi_mach *mach) ++{ ++ const struct snd_pci_quirk *quirk_entry; ++ ++ quirk_entry = snd_pci_quirk_lookup_id(mach->mach_params.subsystem_vendor, ++ mach->mach_params.subsystem_device, ++ 
sof_sdw_ssid_quirk_table); ++ ++ if (quirk_entry) ++ sof_sdw_quirk = quirk_entry->value; ++} ++ + static struct snd_soc_dai_link_component platform_component[] = { + { + /* name might be overridden during probe */ +@@ -917,7 +945,7 @@ static int create_sdw_dailinks(struct snd_soc_card *card, + + /* generate DAI links by each sdw link */ + while (sof_dais->initialised) { +- int current_be_id; ++ int current_be_id = 0; + + ret = create_sdw_dailink(card, sof_dais, dai_links, + ¤t_be_id, codec_conf); +@@ -1276,6 +1304,13 @@ static int mc_probe(struct platform_device *pdev) + + snd_soc_card_set_drvdata(card, ctx); + ++ if (mach->mach_params.subsystem_id_set) { ++ snd_soc_card_set_pci_ssid(card, ++ mach->mach_params.subsystem_vendor, ++ mach->mach_params.subsystem_device); ++ sof_sdw_check_ssid_quirk(mach); ++ } ++ + dmi_check_system(sof_sdw_quirk_table); + + if (quirk_override != -1) { +@@ -1291,12 +1326,6 @@ static int mc_probe(struct platform_device *pdev) + for (i = 0; i < ctx->codec_info_list_count; i++) + codec_info_list[i].amp_num = 0; + +- if (mach->mach_params.subsystem_id_set) { +- snd_soc_card_set_pci_ssid(card, +- mach->mach_params.subsystem_vendor, +- mach->mach_params.subsystem_device); +- } +- + ret = sof_card_dai_links_create(card); + if (ret < 0) + return ret; +diff --git a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c +index 03fc5a1870123f..9b9098485b8c3b 100644 +--- a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c ++++ b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c +@@ -330,7 +330,7 @@ static const struct snd_soc_acpi_adr_device rt1316_3_single_adr[] = { + + static const struct snd_soc_acpi_adr_device rt1318_1_single_adr[] = { + { +- .adr = 0x000130025D131801, ++ .adr = 0x000130025D131801ull, + .num_endpoints = 1, + .endpoints = &single_endpoint, + .name_prefix = "rt1318-1" +diff --git a/sound/soc/renesas/rcar/core.c b/sound/soc/renesas/rcar/core.c +index d3709fd0409e43..f3f0c3f0bb9f5c 100644 +--- a/sound/soc/renesas/rcar/core.c ++++ b/sound/soc/renesas/rcar/core.c +@@ -1770,20 +1770,6 @@ int rsnd_kctrl_accept_anytime(struct rsnd_dai_stream *io) + return 1; + } + +-int rsnd_kctrl_accept_runtime(struct rsnd_dai_stream *io) +-{ +- struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io); +- struct rsnd_priv *priv = rsnd_io_to_priv(io); +- struct device *dev = rsnd_priv_to_dev(priv); +- +- if (!runtime) { +- dev_warn(dev, "Can't update kctrl when idle\n"); +- return 0; +- } +- +- return 1; +-} +- + struct rsnd_kctrl_cfg *rsnd_kctrl_init_m(struct rsnd_kctrl_cfg_m *cfg) + { + cfg->cfg.val = cfg->val; +diff --git a/sound/soc/renesas/rcar/rsnd.h b/sound/soc/renesas/rcar/rsnd.h +index a5f54b65313c40..04c70690f7a258 100644 +--- a/sound/soc/renesas/rcar/rsnd.h ++++ b/sound/soc/renesas/rcar/rsnd.h +@@ -742,7 +742,6 @@ struct rsnd_kctrl_cfg_s { + #define rsnd_kctrl_vals(x) ((x).val) /* = (x).cfg.val[0] */ + + int rsnd_kctrl_accept_anytime(struct rsnd_dai_stream *io); +-int rsnd_kctrl_accept_runtime(struct rsnd_dai_stream *io); + struct rsnd_kctrl_cfg *rsnd_kctrl_init_m(struct rsnd_kctrl_cfg_m *cfg); + struct rsnd_kctrl_cfg *rsnd_kctrl_init_s(struct rsnd_kctrl_cfg_s *cfg); + int rsnd_kctrl_new(struct rsnd_mod *mod, +diff --git a/sound/soc/renesas/rcar/src.c b/sound/soc/renesas/rcar/src.c +index e7f86db0d94c3c..7d73b183bda685 100644 +--- a/sound/soc/renesas/rcar/src.c ++++ b/sound/soc/renesas/rcar/src.c +@@ -35,6 +35,7 @@ struct rsnd_src { + struct rsnd_mod *dma; + struct rsnd_kctrl_cfg_s sen; /* sync convert enable */ + struct 
rsnd_kctrl_cfg_s sync; /* sync convert */ ++ u32 current_sync_rate; + int irq; + }; + +@@ -100,7 +101,7 @@ static u32 rsnd_src_convert_rate(struct rsnd_dai_stream *io, + if (!rsnd_src_sync_is_enabled(mod)) + return rsnd_io_converted_rate(io); + +- convert_rate = src->sync.val; ++ convert_rate = src->current_sync_rate; + + if (!convert_rate) + convert_rate = rsnd_io_converted_rate(io); +@@ -201,13 +202,73 @@ static const u32 chan222222[] = { + static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io, + struct rsnd_mod *mod) + { ++ struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io); + struct rsnd_priv *priv = rsnd_mod_to_priv(mod); +- struct device *dev = rsnd_priv_to_dev(priv); ++ struct rsnd_src *src = rsnd_mod_to_src(mod); ++ u32 fin, fout, new_rate; ++ int inc, cnt, rate; ++ u64 base, val; ++ ++ if (!runtime) ++ return; ++ ++ if (!rsnd_src_sync_is_enabled(mod)) ++ return; ++ ++ fin = rsnd_src_get_in_rate(priv, io); ++ fout = rsnd_src_get_out_rate(priv, io); ++ ++ new_rate = src->sync.val; ++ ++ if (!new_rate) ++ new_rate = fout; ++ ++ /* Do nothing if no diff */ ++ if (new_rate == src->current_sync_rate) ++ return; ++ ++ /* ++ * SRCm_IFSVR::INTIFS can change within 1% ++ * see ++ * SRCm_IFSVR::INTIFS Note ++ */ ++ inc = fout / 100; ++ cnt = abs(new_rate - fout) / inc; ++ if (fout > new_rate) ++ inc *= -1; ++ ++ /* ++ * After SRC starts running, we can update only SRC_IFSVR ++ * for Synchronous Mode ++ */ ++ base = (u64)0x0400000 * fin; ++ rate = fout; ++ for (int i = 0; i < cnt; i++) { ++ val = base; ++ rate += inc; ++ do_div(val, rate); ++ ++ rsnd_mod_write(mod, SRC_IFSVR, val); ++ } ++ val = base; ++ do_div(val, new_rate); ++ ++ rsnd_mod_write(mod, SRC_IFSVR, val); ++ ++ /* update current_sync_rate */ ++ src->current_sync_rate = new_rate; ++} ++ ++static void rsnd_src_init_convert_rate(struct rsnd_dai_stream *io, ++ struct rsnd_mod *mod) ++{ + struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io); ++ struct rsnd_priv *priv = rsnd_mod_to_priv(mod); ++ struct device *dev = rsnd_priv_to_dev(priv); + int is_play = rsnd_io_is_play(io); + int use_src = 0; + u32 fin, fout; +- u32 ifscr, fsrate, adinr; ++ u32 ifscr, adinr; + u32 cr, route; + u32 i_busif, o_busif, tmp; + const u32 *bsdsr_table; +@@ -245,26 +306,15 @@ static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io, + adinr = rsnd_get_adinr_bit(mod, io) | chan; + + /* +- * SRC_IFSCR / SRC_IFSVR +- */ +- ifscr = 0; +- fsrate = 0; +- if (use_src) { +- u64 n; +- +- ifscr = 1; +- n = (u64)0x0400000 * fin; +- do_div(n, fout); +- fsrate = n; +- } +- +- /* ++ * SRC_IFSCR + * SRC_SRCCR / SRC_ROUTE_MODE0 + */ ++ ifscr = 0; + cr = 0x00011110; + route = 0x0; + if (use_src) { + route = 0x1; ++ ifscr = 0x1; + + if (rsnd_src_sync_is_enabled(mod)) { + cr |= 0x1; +@@ -335,7 +385,6 @@ static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io, + rsnd_mod_write(mod, SRC_SRCIR, 1); /* initialize */ + rsnd_mod_write(mod, SRC_ADINR, adinr); + rsnd_mod_write(mod, SRC_IFSCR, ifscr); +- rsnd_mod_write(mod, SRC_IFSVR, fsrate); + rsnd_mod_write(mod, SRC_SRCCR, cr); + rsnd_mod_write(mod, SRC_BSDSR, bsdsr_table[idx]); + rsnd_mod_write(mod, SRC_BSISR, bsisr_table[idx]); +@@ -348,6 +397,9 @@ static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io, + + rsnd_adg_set_src_timesel_gen2(mod, io, fin, fout); + ++ /* update SRC_IFSVR */ ++ rsnd_src_set_convert_rate(io, mod); ++ + return; + + convert_rate_err: +@@ -467,7 +519,8 @@ static int rsnd_src_init(struct rsnd_mod *mod, + int ret; + + /* reset sync convert_rate */ +-
src->sync.val = 0; ++ src->sync.val = ++ src->current_sync_rate = 0; + + ret = rsnd_mod_power_on(mod); + if (ret < 0) +@@ -475,7 +528,7 @@ static int rsnd_src_init(struct rsnd_mod *mod, + + rsnd_src_activation(mod); + +- rsnd_src_set_convert_rate(io, mod); ++ rsnd_src_init_convert_rate(io, mod); + + rsnd_src_status_clear(mod); + +@@ -493,7 +546,8 @@ static int rsnd_src_quit(struct rsnd_mod *mod, + rsnd_mod_power_off(mod); + + /* reset sync convert_rate */ +- src->sync.val = 0; ++ src->sync.val = ++ src->current_sync_rate = 0; + + return 0; + } +@@ -531,6 +585,22 @@ static irqreturn_t rsnd_src_interrupt(int irq, void *data) + return IRQ_HANDLED; + } + ++static int rsnd_src_kctrl_accept_runtime(struct rsnd_dai_stream *io) ++{ ++ struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io); ++ ++ if (!runtime) { ++ struct rsnd_priv *priv = rsnd_io_to_priv(io); ++ struct device *dev = rsnd_priv_to_dev(priv); ++ ++ dev_warn(dev, "\"SRC Out Rate\" can only be used while running\n"); ++ ++ return 0; ++ } ++ ++ return 1; ++} ++ + static int rsnd_src_probe_(struct rsnd_mod *mod, + struct rsnd_dai_stream *io, + struct rsnd_priv *priv) +@@ -585,7 +655,7 @@ static int rsnd_src_pcm_new(struct rsnd_mod *mod, + "SRC Out Rate Switch" : + "SRC In Rate Switch", + rsnd_kctrl_accept_anytime, +- rsnd_src_set_convert_rate, ++ rsnd_src_init_convert_rate, + &src->sen, 1); + if (ret < 0) + return ret; +@@ -594,7 +664,7 @@ static int rsnd_src_pcm_new(struct rsnd_mod *mod, + rsnd_io_is_play(io) ? + "SRC Out Rate" : + "SRC In Rate", +- rsnd_kctrl_accept_runtime, ++ rsnd_src_kctrl_accept_runtime, + rsnd_src_set_convert_rate, + &src->sync, 192000); + +diff --git a/sound/soc/renesas/rcar/ssi.c b/sound/soc/renesas/rcar/ssi.c +index b3d4e8ae07eff8..0c6424a1fcac04 100644 +--- a/sound/soc/renesas/rcar/ssi.c ++++ b/sound/soc/renesas/rcar/ssi.c +@@ -336,7 +336,8 @@ static int rsnd_ssi_master_clk_start(struct rsnd_mod *mod, + return 0; + + rate_err: +- dev_err(dev, "unsupported clock rate\n"); ++ dev_err(dev, "unsupported clock rate (%d)\n", rate); ++ + return ret; + } + +diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c +index 19928f098d8dcb..b0e4e4168f38d5 100644 +--- a/sound/soc/soc-ops.c ++++ b/sound/soc/soc-ops.c +@@ -337,7 +337,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol, + if (ucontrol->value.integer.value[0] < 0) + return -EINVAL; + val = ucontrol->value.integer.value[0]; +- if (mc->platform_max && ((int)val + min) > mc->platform_max) ++ if (mc->platform_max && val > mc->platform_max) + return -EINVAL; + if (val > max - min) + return -EINVAL; +@@ -350,7 +350,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol, + if (ucontrol->value.integer.value[1] < 0) + return -EINVAL; + val2 = ucontrol->value.integer.value[1]; +- if (mc->platform_max && ((int)val2 + min) > mc->platform_max) ++ if (mc->platform_max && val2 > mc->platform_max) + return -EINVAL; + if (val2 > max - min) + return -EINVAL; +@@ -503,17 +503,16 @@ int snd_soc_info_volsw_range(struct snd_kcontrol *kcontrol, + { + struct soc_mixer_control *mc = + (struct soc_mixer_control *)kcontrol->private_value; +- int platform_max; +- int min = mc->min; ++ int max; + +- if (!mc->platform_max) +- mc->platform_max = mc->max; +- platform_max = mc->platform_max; ++ max = mc->max - mc->min; ++ if (mc->platform_max && mc->platform_max < max) ++ max = mc->platform_max; + + uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER; + uinfo->count = snd_soc_volsw_is_stereo(mc) ?
2 : 1; + uinfo->value.integer.min = 0; +- uinfo->value.integer.max = platform_max - min; ++ uinfo->value.integer.max = max; + + return 0; + } +diff --git a/sound/soc/sof/amd/acp-ipc.c b/sound/soc/sof/amd/acp-ipc.c +index 5f371d9263f3ba..12caefd0878859 100644 +--- a/sound/soc/sof/amd/acp-ipc.c ++++ b/sound/soc/sof/amd/acp-ipc.c +@@ -167,6 +167,7 @@ irqreturn_t acp_sof_ipc_irq_thread(int irq, void *context) + + if (sdev->first_boot && sdev->fw_state != SOF_FW_BOOT_COMPLETE) { + acp_mailbox_read(sdev, sdev->dsp_box.offset, &status, sizeof(status)); ++ + if ((status & SOF_IPC_PANIC_MAGIC_MASK) == SOF_IPC_PANIC_MAGIC) { + snd_sof_dsp_panic(sdev, sdev->dsp_box.offset + sizeof(status), + true); +@@ -188,13 +189,21 @@ irqreturn_t acp_sof_ipc_irq_thread(int irq, void *context) + + dsp_ack = snd_sof_dsp_read(sdev, ACP_DSP_BAR, ACP_SCRATCH_REG_0 + dsp_ack_write); + if (dsp_ack) { +- spin_lock_irq(&sdev->ipc_lock); +- /* handle immediate reply from DSP core */ +- acp_dsp_ipc_get_reply(sdev); +- snd_sof_ipc_reply(sdev, 0); +- /* set the done bit */ +- acp_dsp_ipc_dsp_done(sdev); +- spin_unlock_irq(&sdev->ipc_lock); ++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) { ++ spin_lock_irq(&sdev->ipc_lock); ++ ++ /* handle immediate reply from DSP core */ ++ acp_dsp_ipc_get_reply(sdev); ++ snd_sof_ipc_reply(sdev, 0); ++ /* set the done bit */ ++ acp_dsp_ipc_dsp_done(sdev); ++ ++ spin_unlock_irq(&sdev->ipc_lock); ++ } else { ++ dev_dbg_ratelimited(sdev->dev, "IPC reply before FW_BOOT_COMPLETE: %#x\n", ++ dsp_ack); ++ } ++ + ipc_irq = true; + } + +diff --git a/sound/soc/sof/amd/acp.c b/sound/soc/sof/amd/acp.c +index 33648ff8b83365..9e13c96528be33 100644 +--- a/sound/soc/sof/amd/acp.c ++++ b/sound/soc/sof/amd/acp.c +@@ -27,6 +27,7 @@ MODULE_PARM_DESC(enable_fw_debug, "Enable Firmware debug"); + static struct acp_quirk_entry quirk_valve_galileo = { + .signed_fw_image = true, + .skip_iram_dram_size_mod = true, ++ .post_fw_run_delay = true, + }; + + const struct dmi_system_id acp_sof_quirk_table[] = { +diff --git a/sound/soc/sof/amd/acp.h b/sound/soc/sof/amd/acp.h +index 800594440f7391..2a19d82d620022 100644 +--- a/sound/soc/sof/amd/acp.h ++++ b/sound/soc/sof/amd/acp.h +@@ -220,6 +220,7 @@ struct sof_amd_acp_desc { + struct acp_quirk_entry { + bool signed_fw_image; + bool skip_iram_dram_size_mod; ++ bool post_fw_run_delay; + }; + + /* Common device data struct for ACP devices */ +diff --git a/sound/soc/sof/amd/vangogh.c b/sound/soc/sof/amd/vangogh.c +index 8e2672106ac60a..d5f1dddd43e72d 100644 +--- a/sound/soc/sof/amd/vangogh.c ++++ b/sound/soc/sof/amd/vangogh.c +@@ -11,6 +11,7 @@ + * Hardware interface for Audio DSP on Vangogh platform + */ + ++#include <linux/delay.h> + #include + #include + +@@ -136,6 +137,20 @@ static struct snd_soc_dai_driver vangogh_sof_dai[] = { + }, + }; + ++static int sof_vangogh_post_fw_run_delay(struct snd_sof_dev *sdev) ++{ ++ /* ++ * Resuming from suspend in some cases may cause the DSP firmware ++ * to enter an unrecoverable faulty state. Briefly delaying any host ++ * to DSP transmission right after firmware boot completion seems ++ * to resolve the issue.
++ */ ++ if (!sdev->first_boot) ++ usleep_range(100, 150); ++ ++ return 0; ++} ++ + /* Vangogh ops */ + struct snd_sof_dsp_ops sof_vangogh_ops; + EXPORT_SYMBOL_NS(sof_vangogh_ops, "SND_SOC_SOF_AMD_COMMON"); +@@ -157,6 +172,9 @@ int sof_vangogh_ops_init(struct snd_sof_dev *sdev) + + if (quirks->signed_fw_image) + sof_vangogh_ops.load_firmware = acp_sof_load_signed_firmware; ++ ++ if (quirks->post_fw_run_delay) ++ sof_vangogh_ops.post_fw_run = sof_vangogh_post_fw_run_delay; + } + + return 0; +diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c +index 568f3dfe822f57..2f9925830d1d53 100644 +--- a/sound/soc/sof/intel/hda-codec.c ++++ b/sound/soc/sof/intel/hda-codec.c +@@ -454,6 +454,7 @@ int hda_codec_i915_exit(struct snd_sof_dev *sdev) + } + EXPORT_SYMBOL_NS_GPL(hda_codec_i915_exit, "SND_SOC_SOF_HDA_AUDIO_CODEC_I915"); + ++MODULE_SOFTDEP("pre: snd-hda-codec-hdmi"); + #endif + + MODULE_LICENSE("Dual BSD/GPL"); +diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c +index be689f6e10c81e..a1ccd95da8bb73 100644 +--- a/sound/soc/sof/intel/hda.c ++++ b/sound/soc/sof/intel/hda.c +@@ -1312,22 +1312,8 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev) + /* report to machine driver if any DMICs are found */ + mach->mach_params.dmic_num = check_dmic_num(sdev); + +- if (sdw_mach_found) { +- /* +- * DMICs use up to 4 pins and are typically pin-muxed with SoundWire +- * link 2 and 3, or link 1 and 2, thus we only try to enable dmics +- * if all conditions are true: +- * a) 2 or fewer links are used by SoundWire +- * b) the NHLT table reports the presence of microphones +- */ +- if (hweight_long(mach->link_mask) <= 2) +- dmic_fixup = true; +- else +- mach->mach_params.dmic_num = 0; +- } else { +- if (mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER) +- dmic_fixup = true; +- } ++ if (sdw_mach_found || mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER) ++ dmic_fixup = true; + + if (tplg_fixup && + dmic_fixup && +diff --git a/sound/soc/sof/intel/pci-ptl.c b/sound/soc/sof/intel/pci-ptl.c +index 0aacdfac9fb431..c4fb6a2441b765 100644 +--- a/sound/soc/sof/intel/pci-ptl.c ++++ b/sound/soc/sof/intel/pci-ptl.c +@@ -50,6 +50,7 @@ static const struct sof_dev_desc ptl_desc = { + /* PCI IDs */ + static const struct pci_device_id sof_pci_ids[] = { + { PCI_DEVICE_DATA(INTEL, HDA_PTL, &ptl_desc) }, /* PTL */ ++ { PCI_DEVICE_DATA(INTEL, HDA_PTL_H, &ptl_desc) }, /* PTL-H */ + { 0, } + }; + MODULE_DEVICE_TABLE(pci, sof_pci_ids); +diff --git a/sound/soc/tegra/tegra210_adx.c b/sound/soc/tegra/tegra210_adx.c +index 3e6e8f51f380ba..0aa93b948378f3 100644 +--- a/sound/soc/tegra/tegra210_adx.c ++++ b/sound/soc/tegra/tegra210_adx.c +@@ -264,7 +264,7 @@ static const struct snd_soc_dai_ops tegra210_adx_out_dai_ops = { + .rates = SNDRV_PCM_RATE_8000_192000, \ + .formats = SNDRV_PCM_FMTBIT_S8 | \ + SNDRV_PCM_FMTBIT_S16_LE | \ +- SNDRV_PCM_FMTBIT_S16_LE | \ ++ SNDRV_PCM_FMTBIT_S24_LE | \ + SNDRV_PCM_FMTBIT_S32_LE, \ + }, \ + .capture = { \ +@@ -274,7 +274,7 @@ static const struct snd_soc_dai_ops tegra210_adx_out_dai_ops = { + .rates = SNDRV_PCM_RATE_8000_192000, \ + .formats = SNDRV_PCM_FMTBIT_S8 | \ + SNDRV_PCM_FMTBIT_S16_LE | \ +- SNDRV_PCM_FMTBIT_S16_LE | \ ++ SNDRV_PCM_FMTBIT_S24_LE | \ + SNDRV_PCM_FMTBIT_S32_LE, \ + }, \ + .ops = &tegra210_adx_out_dai_ops, \ +diff --git a/tools/objtool/check.c b/tools/objtool/check.c +index 6691bd106e4b6e..6f3f408d0a0198 100644 +--- a/tools/objtool/check.c ++++ b/tools/objtool/check.c +@@ -2061,6 +2061,14 @@ static int 
add_jump_table(struct objtool_file *file, struct instruction *insn, + reloc_addend(reloc) == pfunc->offset) + break; + ++ /* ++ * Clang sometimes leaves dangling unused jump table entries ++ * which point to the end of the function. Ignore them. ++ */ ++ if (reloc->sym->sec == pfunc->sec && ++ reloc_addend(reloc) == pfunc->offset + pfunc->len) ++ goto next; ++ + dest_insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc)); + if (!dest_insn) + break; +@@ -2078,6 +2086,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn, + alt->insn = dest_insn; + alt->next = insn->alts; + insn->alts = alt; ++next: + prev_offset = reloc_offset(reloc); + } + +diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h +index 9fa64b053ba916..eb0f7c26453f35 100644 +--- a/tools/sched_ext/include/scx/common.bpf.h ++++ b/tools/sched_ext/include/scx/common.bpf.h +@@ -333,6 +333,17 @@ static __always_inline const struct cpumask *cast_mask(struct bpf_cpumask *mask) + return (const struct cpumask *)mask; + } + ++/* ++ * Return true if task @p cannot migrate to a different CPU, false ++ * otherwise. ++ */ ++static inline bool is_migration_disabled(const struct task_struct *p) ++{ ++ if (bpf_core_field_exists(p->migration_disabled)) ++ return p->migration_disabled; ++ return false; ++} ++ + /* rcu */ + void bpf_rcu_read_lock(void) __ksym; + void bpf_rcu_read_unlock(void) __ksym; +diff --git a/tools/sound/dapm-graph b/tools/sound/dapm-graph +index f14bdfedee8f11..b6196ee5065a4e 100755 +--- a/tools/sound/dapm-graph ++++ b/tools/sound/dapm-graph +@@ -10,7 +10,7 @@ set -eu + + STYLE_COMPONENT_ON="color=dodgerblue;style=bold" + STYLE_COMPONENT_OFF="color=gray40;style=filled;fillcolor=gray90" +-STYLE_NODE_ON="shape=box,style=bold,color=green4" ++STYLE_NODE_ON="shape=box,style=bold,color=green4,fillcolor=white" + STYLE_NODE_OFF="shape=box,style=filled,color=gray30,fillcolor=gray95" + + # Print usage and exit +diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +index 884ad87783d59e..0c51b7288978e3 100644 +--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c ++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +@@ -522,8 +522,8 @@ static void test_sockmap_skb_verdict_shutdown(void) + if (!ASSERT_EQ(err, 1, "epoll_wait(fd)")) + goto out_close; + +- n = recv(c1, &b, 1, SOCK_NONBLOCK); +- ASSERT_EQ(n, 0, "recv_timeout(fin)"); ++ n = recv(c1, &b, 1, MSG_DONTWAIT); ++ ASSERT_EQ(n, 0, "recv(fin)"); + out_close: + close(c1); + close(p1); +@@ -628,7 +628,7 @@ static void test_sockmap_skb_verdict_fionread(bool pass_prog) + ASSERT_EQ(avail, expected, "ioctl(FIONREAD)"); + /* On DROP test there will be no data to read */ + if (pass_prog) { +- recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); ++ recvd = recv_timeout(c1, &buf, sizeof(buf), MSG_DONTWAIT, IO_TIMEOUT_SEC); + ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); + } + +diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c +index c7f74f068e7883..df27535995af8d 100644 +--- a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c ++++ b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c +@@ -52,10 +52,10 @@ static void test_xdp_with_cpumap_helpers(void) + ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to cpumap entry prog_id"); + + /* send a packet to trigger any potential bugs in there */ +- char 
data[10] = {}; ++ char data[ETH_HLEN] = {}; + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, + .data_in = &data, +- .data_size_in = 10, ++ .data_size_in = sizeof(data), + .flags = BPF_F_TEST_XDP_LIVE_FRAMES, + .repeat = 1, + ); +diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c +index 27ffed17d4be33..461ab18705d5c0 100644 +--- a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c ++++ b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c +@@ -23,7 +23,7 @@ static void test_xdp_with_devmap_helpers(void) + __u32 len = sizeof(info); + int err, dm_fd, dm_fd_redir, map_fd; + struct nstoken *nstoken = NULL; +- char data[10] = {}; ++ char data[ETH_HLEN] = {}; + __u32 idx = 0; + + SYS(out_close, "ip netns add %s", TEST_NS); +@@ -58,7 +58,7 @@ static void test_xdp_with_devmap_helpers(void) + /* send a packet to trigger any potential bugs in there */ + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, + .data_in = &data, +- .data_size_in = 10, ++ .data_size_in = sizeof(data), + .flags = BPF_F_TEST_XDP_LIVE_FRAMES, + .repeat = 1, + ); +@@ -158,7 +158,7 @@ static void test_xdp_with_devmap_helpers_veth(void) + struct nstoken *nstoken = NULL; + __u32 len = sizeof(info); + int err, dm_fd, dm_fd_redir, map_fd, ifindex_dst; +- char data[10] = {}; ++ char data[ETH_HLEN] = {}; + __u32 idx = 0; + + SYS(out_close, "ip netns add %s", TEST_NS); +@@ -208,7 +208,7 @@ static void test_xdp_with_devmap_helpers_veth(void) + /* send a packet to trigger any potential bugs in there */ + DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, + .data_in = &data, +- .data_size_in = 10, ++ .data_size_in = sizeof(data), + .flags = BPF_F_TEST_XDP_LIVE_FRAMES, + .repeat = 1, + ); +diff --git a/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh b/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh +index 3f45512fb512eb..7406c24be1ac99 100755 +--- a/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh ++++ b/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh +@@ -1,4 +1,4 @@ +-#!/bin/sh ++#!/bin/bash + # SPDX-License-Identifier: GPL-2.0 + # + # Test the special cpuset v1 hotplug case where a cpuset become empty of +diff --git a/tools/testing/selftests/drivers/net/bonding/bond_options.sh b/tools/testing/selftests/drivers/net/bonding/bond_options.sh +index edc56e2cc60690..7bc148889ca729 100755 +--- a/tools/testing/selftests/drivers/net/bonding/bond_options.sh ++++ b/tools/testing/selftests/drivers/net/bonding/bond_options.sh +@@ -11,8 +11,8 @@ ALL_TESTS=" + + lib_dir=$(dirname "$0") + source ${lib_dir}/bond_topo_3d1c.sh +-c_maddr="33:33:00:00:00:10" +-g_maddr="33:33:00:00:02:54" ++c_maddr="33:33:ff:00:00:10" ++g_maddr="33:33:ff:00:02:54" + + skip_prio() + { +diff --git a/tools/testing/selftests/filesystems/statmount/statmount_test.c b/tools/testing/selftests/filesystems/statmount/statmount_test.c +index 8eb6aa606a0d5e..46d289611ce86d 100644 +--- a/tools/testing/selftests/filesystems/statmount/statmount_test.c ++++ b/tools/testing/selftests/filesystems/statmount/statmount_test.c +@@ -383,6 +383,10 @@ static void test_statmount_mnt_point(void) + return; + } + ++ if (!(sm->mask & STATMOUNT_MNT_POINT)) { ++ ksft_test_result_fail("missing STATMOUNT_MNT_POINT in mask\n"); ++ return; ++ } + if (strcmp(sm->str + sm->mnt_point, "/") != 0) { + ksft_test_result_fail("unexpected mount point: '%s' != '/'\n", + sm->str + sm->mnt_point); +@@ -408,6 +412,10 @@ static void test_statmount_mnt_root(void) + strerror(errno)); + return; + } ++ if (!(sm->mask & 
STATMOUNT_MNT_ROOT)) { ++ ksft_test_result_fail("missing STATMOUNT_MNT_ROOT in mask\n"); ++ return; ++ } + mnt_root = sm->str + sm->mnt_root; + last_root = strrchr(mnt_root, '/'); + if (last_root) +@@ -437,6 +445,10 @@ static void test_statmount_fs_type(void) + strerror(errno)); + return; + } ++ if (!(sm->mask & STATMOUNT_FS_TYPE)) { ++ ksft_test_result_fail("missing STATMOUNT_FS_TYPE in mask\n"); ++ return; ++ } + fs_type = sm->str + sm->fs_type; + for (s = known_fs; s != NULL; s++) { + if (strcmp(fs_type, *s) == 0) +@@ -464,6 +476,11 @@ static void test_statmount_mnt_opts(void) + return; + } + ++ if (!(sm->mask & STATMOUNT_MNT_BASIC)) { ++ ksft_test_result_fail("missing STATMOUNT_MNT_BASIC in mask\n"); ++ return; ++ } ++ + while (getline(&line, &len, f_mountinfo) != -1) { + int i; + char *p, *p2; +@@ -514,7 +531,10 @@ static void test_statmount_mnt_opts(void) + if (p2) + *p2 = '\0'; + +- statmount_opts = sm->str + sm->mnt_opts; ++ if (sm->mask & STATMOUNT_MNT_OPTS) ++ statmount_opts = sm->str + sm->mnt_opts; ++ else ++ statmount_opts = ""; + if (strcmp(statmount_opts, p) != 0) + ksft_test_result_fail( + "unexpected mount options: '%s' != '%s'\n", +diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c +index fbda6bf5467128..c02b2aa6fc641e 100644 +--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c ++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c +@@ -43,7 +43,7 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev) + if (!p) + return; + +- if (p->nr_cpus_allowed == nr_cpus) ++ if (p->nr_cpus_allowed == nr_cpus && !is_migration_disabled(p)) + target = bpf_get_prandom_u32() % nr_cpus; + else + target = scx_bpf_task_cpu(p);