From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (2048 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id D6AF715806E
	for ; Tue, 30 May 2023 16:50:44 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 16510E09C9;
	Tue, 30 May 2023 16:50:44 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id BAECBE09C9
	for ; Tue, 30 May 2023 16:50:43 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 621BE340F8B
	for ; Tue, 30 May 2023 16:50:42 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id AC92FA67
	for ; Tue, 30 May 2023 16:50:40 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1685465424.6b32c5adb2f3c6f9724b3d905802259a1f9b372c.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.3 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1004_linux-6.3.5.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 6b32c5adb2f3c6f9724b3d905802259a1f9b372c
X-VCS-Branch: 6.3
Date: Tue, 30 May 2023 16:50:40 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: c5511a73-ac77-4853-9c99-46a95a125134
X-Archives-Hash: 1db05d97675321f6375680fe556c21b3

commit:     6b32c5adb2f3c6f9724b3d905802259a1f9b372c
Author:     Mike Pagano gentoo org>
AuthorDate: Tue May 30 16:50:24 2023 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Tue May 30 16:50:24 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6b32c5ad

Linux patch 6.3.5

Signed-off-by: Mike Pagano gentoo.org>

 0000_README            |    4 +
 1004_linux-6.3.5.patch | 4973 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4977 insertions(+)

diff --git a/0000_README b/0000_README
index 447571a4..e0952135 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-6.3.4.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.3.4
 
+Patch:  1004_linux-6.3.5.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.3.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-6.3.5.patch b/1004_linux-6.3.5.patch new file mode 100644 index 00000000..bdea91f3 --- /dev/null +++ b/1004_linux-6.3.5.patch @@ -0,0 +1,4973 @@ +diff --git a/Documentation/devicetree/bindings/usb/cdns,usb3.yaml b/Documentation/devicetree/bindings/usb/cdns,usb3.yaml +index cae46c4982adf..69a93a0722f07 100644 +--- a/Documentation/devicetree/bindings/usb/cdns,usb3.yaml ++++ b/Documentation/devicetree/bindings/usb/cdns,usb3.yaml +@@ -64,7 +64,7 @@ properties: + description: + size of memory intended as internal memory for endpoints + buffers expressed in KB +- $ref: /schemas/types.yaml#/definitions/uint32 ++ $ref: /schemas/types.yaml#/definitions/uint16 + + cdns,phyrst-a-enable: + description: Enable resetting of PHY if Rx fail is detected +diff --git a/Makefile b/Makefile +index 3c5b606690182..d710ff6a3d566 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 3 +-SUBLEVEL = 4 ++SUBLEVEL = 5 + EXTRAVERSION = + NAME = Hurr durr I'ma ninja sloth + +diff --git a/arch/arm/boot/dts/imx6qdl-mba6.dtsi b/arch/arm/boot/dts/imx6qdl-mba6.dtsi +index 78555a6188510..7b7e6c2ad190c 100644 +--- a/arch/arm/boot/dts/imx6qdl-mba6.dtsi ++++ b/arch/arm/boot/dts/imx6qdl-mba6.dtsi +@@ -209,6 +209,7 @@ + pinctrl-names = "default"; + pinctrl-0 = <&pinctrl_pcie>; + reset-gpio = <&gpio6 7 GPIO_ACTIVE_LOW>; ++ vpcie-supply = <®_pcie>; + status = "okay"; + }; + +diff --git a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi +index 67072e6c77d5f..cbd9d124c80d0 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi +@@ -98,11 +98,17 @@ + #address-cells = <1>; + #size-cells = <0>; + +- ethphy: ethernet-phy@4 { ++ ethphy: ethernet-phy@4 { /* AR8033 or ADIN1300 */ + compatible = "ethernet-phy-ieee802.3-c22"; + reg = <4>; + reset-gpios = <&gpio1 9 GPIO_ACTIVE_LOW>; + reset-assert-us = <10000>; ++ /* ++ * Deassert delay: ++ * ADIN1300 requires 5ms. ++ * AR8033 requires 1ms. 
++ */ ++ reset-deassert-us = <20000>; + }; + }; + }; +diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +index 3f9d67341484b..a237275ee0179 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +@@ -1151,7 +1151,7 @@ + + media_blk_ctrl: blk-ctrl@32ec0000 { + compatible = "fsl,imx8mp-media-blk-ctrl", +- "syscon"; ++ "simple-bus", "syscon"; + reg = <0x32ec0000 0x10000>; + #address-cells = <1>; + #size-cells = <1>; +diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c +index b9f6908a31bc3..ba468b5f3f0b6 100644 +--- a/arch/m68k/kernel/signal.c ++++ b/arch/m68k/kernel/signal.c +@@ -858,11 +858,17 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs * + } + + static inline void __user * +-get_sigframe(struct ksignal *ksig, size_t frame_size) ++get_sigframe(struct ksignal *ksig, struct pt_regs *tregs, size_t frame_size) + { + unsigned long usp = sigsp(rdusp(), ksig); ++ unsigned long gap = 0; + +- return (void __user *)((usp - frame_size) & -8UL); ++ if (CPU_IS_020_OR_030 && tregs->format == 0xb) { ++ /* USP is unreliable so use worst-case value */ ++ gap = 256; ++ } ++ ++ return (void __user *)((usp - gap - frame_size) & -8UL); + } + + static int setup_frame(struct ksignal *ksig, sigset_t *set, +@@ -880,7 +886,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set, + return -EFAULT; + } + +- frame = get_sigframe(ksig, sizeof(*frame) + fsize); ++ frame = get_sigframe(ksig, tregs, sizeof(*frame) + fsize); + + if (fsize) + err |= copy_to_user (frame + 1, regs + 1, fsize); +@@ -952,7 +958,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set, + return -EFAULT; + } + +- frame = get_sigframe(ksig, sizeof(*frame)); ++ frame = get_sigframe(ksig, tregs, sizeof(*frame)); + + if (fsize) + err |= copy_to_user (&frame->uc.uc_extra, regs + 1, fsize); +diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig +index a98940e642432..67c26e81e2150 100644 +--- a/arch/parisc/Kconfig ++++ b/arch/parisc/Kconfig +@@ -129,6 +129,10 @@ config PM + config STACKTRACE_SUPPORT + def_bool y + ++config LOCKDEP_SUPPORT ++ bool ++ default y ++ + config ISA_DMA_API + bool + +diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h +index 0bdee67241320..c8b6928cee1ee 100644 +--- a/arch/parisc/include/asm/cacheflush.h ++++ b/arch/parisc/include/asm/cacheflush.h +@@ -48,6 +48,10 @@ void flush_dcache_page(struct page *page); + + #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages) + #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages) ++#define flush_dcache_mmap_lock_irqsave(mapping, flags) \ ++ xa_lock_irqsave(&mapping->i_pages, flags) ++#define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \ ++ xa_unlock_irqrestore(&mapping->i_pages, flags) + + #define flush_icache_page(vma,page) do { \ + flush_kernel_dcache_page_addr(page_address(page)); \ +diff --git a/arch/parisc/kernel/alternative.c b/arch/parisc/kernel/alternative.c +index 66f5672c70bd4..25c4d6c3375db 100644 +--- a/arch/parisc/kernel/alternative.c ++++ b/arch/parisc/kernel/alternative.c +@@ -25,7 +25,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start, + { + struct alt_instr *entry; + int index = 0, applied = 0; +- int num_cpus = num_online_cpus(); ++ int num_cpus = num_present_cpus(); + u16 cond_check; + + cond_check = ALT_COND_ALWAYS | +diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c +index 
1d3b8bc8a6233..ca4a302d4365f 100644 +--- a/arch/parisc/kernel/cache.c ++++ b/arch/parisc/kernel/cache.c +@@ -399,6 +399,7 @@ void flush_dcache_page(struct page *page) + unsigned long offset; + unsigned long addr, old_addr = 0; + unsigned long count = 0; ++ unsigned long flags; + pgoff_t pgoff; + + if (mapping && !mapping_mapped(mapping)) { +@@ -420,7 +421,7 @@ void flush_dcache_page(struct page *page) + * to flush one address here for them all to become coherent + * on machines that support equivalent aliasing + */ +- flush_dcache_mmap_lock(mapping); ++ flush_dcache_mmap_lock_irqsave(mapping, flags); + vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { + offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; + addr = mpnt->vm_start + offset; +@@ -460,7 +461,7 @@ void flush_dcache_page(struct page *page) + } + WARN_ON(++count == 4096); + } +- flush_dcache_mmap_unlock(mapping); ++ flush_dcache_mmap_unlock_irqrestore(mapping, flags); + } + EXPORT_SYMBOL(flush_dcache_page); + +diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c +index c064719b49b09..ec48850b9273c 100644 +--- a/arch/parisc/kernel/process.c ++++ b/arch/parisc/kernel/process.c +@@ -122,13 +122,18 @@ void machine_power_off(void) + /* It seems we have no way to power the system off via + * software. The user has to press the button himself. */ + +- printk(KERN_EMERG "System shut down completed.\n" +- "Please power this system off now."); ++ printk("Power off or press RETURN to reboot.\n"); + + /* prevent soft lockup/stalled CPU messages for endless loop. */ + rcu_sysrq_start(); + lockup_detector_soft_poweroff(); +- for (;;); ++ while (1) { ++ /* reboot if user presses RETURN key */ ++ if (pdc_iodc_getc() == 13) { ++ printk("Rebooting...\n"); ++ machine_restart(NULL); ++ } ++ } + } + + void (*pm_power_off)(void); +diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c +index f9696fbf646c4..67b51841dc8b4 100644 +--- a/arch/parisc/kernel/traps.c ++++ b/arch/parisc/kernel/traps.c +@@ -291,19 +291,19 @@ static void handle_break(struct pt_regs *regs) + } + + #ifdef CONFIG_KPROBES +- if (unlikely(iir == PARISC_KPROBES_BREAK_INSN)) { ++ if (unlikely(iir == PARISC_KPROBES_BREAK_INSN && !user_mode(regs))) { + parisc_kprobe_break_handler(regs); + return; + } +- if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2)) { ++ if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2 && !user_mode(regs))) { + parisc_kprobe_ss_handler(regs); + return; + } + #endif + + #ifdef CONFIG_KGDB +- if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN || +- iir == PARISC_KGDB_BREAK_INSN)) { ++ if (unlikely((iir == PARISC_KGDB_COMPILED_BREAK_INSN || ++ iir == PARISC_KGDB_BREAK_INSN)) && !user_mode(regs)) { + kgdb_handle_exception(9, SIGTRAP, 0, regs); + return; + } +diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c +index 7d1199554fe36..54abd93828bfc 100644 +--- a/arch/x86/events/intel/uncore_snbep.c ++++ b/arch/x86/events/intel/uncore_snbep.c +@@ -6138,6 +6138,7 @@ static struct intel_uncore_type spr_uncore_mdf = { + }; + + #define UNCORE_SPR_NUM_UNCORE_TYPES 12 ++#define UNCORE_SPR_CHA 0 + #define UNCORE_SPR_IIO 1 + #define UNCORE_SPR_IMC 6 + #define UNCORE_SPR_UPI 8 +@@ -6448,12 +6449,22 @@ static int uncore_type_max_boxes(struct intel_uncore_type **types, + return max + 1; + } + ++#define SPR_MSR_UNC_CBO_CONFIG 0x2FFE ++ + void spr_uncore_cpu_init(void) + { ++ struct intel_uncore_type *type; ++ u64 num_cbo; ++ + uncore_msr_uncores = uncore_get_uncores(UNCORE_ACCESS_MSR, + 
UNCORE_SPR_MSR_EXTRA_UNCORES, + spr_msr_uncores); + ++ type = uncore_find_type_by_id(uncore_msr_uncores, UNCORE_SPR_CHA); ++ if (type) { ++ rdmsrl(SPR_MSR_UNC_CBO_CONFIG, num_cbo); ++ type->num_boxes = num_cbo; ++ } + spr_uncore_iio_free_running.num_boxes = uncore_type_max_boxes(uncore_msr_uncores, UNCORE_SPR_IIO); + } + +diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c +index 5e868b62a7c4e..0270925fe013b 100644 +--- a/arch/x86/kernel/cpu/topology.c ++++ b/arch/x86/kernel/cpu/topology.c +@@ -79,7 +79,7 @@ int detect_extended_topology_early(struct cpuinfo_x86 *c) + * initial apic id, which also represents 32-bit extended x2apic id. + */ + c->initial_apicid = edx; +- smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx); ++ smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx)); + #endif + return 0; + } +@@ -109,7 +109,8 @@ int detect_extended_topology(struct cpuinfo_x86 *c) + */ + cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx); + c->initial_apicid = edx; +- core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx); ++ core_level_siblings = LEVEL_MAX_SIBLINGS(ebx); ++ smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx)); + core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax); + die_level_siblings = LEVEL_MAX_SIBLINGS(ebx); + pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax); +diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c +index 0bf6779187dda..f18ca44c904b7 100644 +--- a/arch/x86/kernel/dumpstack.c ++++ b/arch/x86/kernel/dumpstack.c +@@ -195,7 +195,6 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, + printk("%sCall Trace:\n", log_lvl); + + unwind_start(&state, task, regs, stack); +- stack = stack ? : get_stack_pointer(task, regs); + regs = unwind_get_entry_regs(&state, &partial); + + /* +@@ -214,9 +213,13 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, + * - hardirq stack + * - entry stack + */ +- for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) { ++ for (stack = stack ?: get_stack_pointer(task, regs); ++ stack; ++ stack = stack_info.next_sp) { + const char *stack_name; + ++ stack = PTR_ALIGN(stack, sizeof(long)); ++ + if (get_stack_info(stack, task, &stack_info, &visit_mask)) { + /* + * We weren't on a valid stack. It's possible that +diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c +index cb258f58fdc87..913287b9340c9 100644 +--- a/arch/x86/mm/init.c ++++ b/arch/x86/mm/init.c +@@ -9,6 +9,7 @@ + #include + + #include ++#include + #include + #include + #include +@@ -261,6 +262,24 @@ static void __init probe_page_size_mask(void) + } + } + ++#define INTEL_MATCH(_model) { .vendor = X86_VENDOR_INTEL, \ ++ .family = 6, \ ++ .model = _model, \ ++ } ++/* ++ * INVLPG may not properly flush Global entries ++ * on these CPUs when PCIDs are enabled. 
++ */ ++static const struct x86_cpu_id invlpg_miss_ids[] = { ++ INTEL_MATCH(INTEL_FAM6_ALDERLAKE ), ++ INTEL_MATCH(INTEL_FAM6_ALDERLAKE_L ), ++ INTEL_MATCH(INTEL_FAM6_ALDERLAKE_N ), ++ INTEL_MATCH(INTEL_FAM6_RAPTORLAKE ), ++ INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_P), ++ INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_S), ++ {} ++}; ++ + static void setup_pcid(void) + { + if (!IS_ENABLED(CONFIG_X86_64)) +@@ -269,6 +288,12 @@ static void setup_pcid(void) + if (!boot_cpu_has(X86_FEATURE_PCID)) + return; + ++ if (x86_match_cpu(invlpg_miss_ids)) { ++ pr_info("Incomplete global flushes, disabling PCID"); ++ setup_clear_cpu_cap(X86_FEATURE_PCID); ++ return; ++ } ++ + if (boot_cpu_has(X86_FEATURE_PGE)) { + /* + * This can't be cr4_set_bits_and_update_boot() -- the +diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c +index 8babce71915fe..014c508e914d9 100644 +--- a/arch/x86/pci/xen.c ++++ b/arch/x86/pci/xen.c +@@ -198,7 +198,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) + i++; + } + kfree(v); +- return 0; ++ return msi_device_populate_sysfs(&dev->dev); + + error: + if (ret == -ENOSYS) +@@ -254,7 +254,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) + dev_dbg(&dev->dev, + "xen: msi --> pirq=%d --> irq=%d\n", pirq, irq); + } +- return 0; ++ return msi_device_populate_sysfs(&dev->dev); + + error: + dev_err(&dev->dev, "Failed to create MSI%s! ret=%d!\n", +@@ -346,7 +346,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) + if (ret < 0) + goto out; + } +- ret = 0; ++ ret = msi_device_populate_sysfs(&dev->dev); + out: + return ret; + } +@@ -394,6 +394,8 @@ static void xen_teardown_msi_irqs(struct pci_dev *dev) + xen_destroy_irq(msidesc->irq + i); + msidesc->irq = 0; + } ++ ++ msi_device_destroy_sysfs(&dev->dev); + } + + static void xen_pv_teardown_msi_irqs(struct pci_dev *dev) +diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c +index 876d5df157ed9..5c01d7e70d90d 100644 +--- a/arch/xtensa/kernel/signal.c ++++ b/arch/xtensa/kernel/signal.c +@@ -343,7 +343,19 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set, + struct rt_sigframe *frame; + int err = 0, sig = ksig->sig; + unsigned long sp, ra, tp, ps; ++ unsigned long handler = (unsigned long)ksig->ka.sa.sa_handler; ++ unsigned long handler_fdpic_GOT = 0; + unsigned int base; ++ bool fdpic = IS_ENABLED(CONFIG_BINFMT_ELF_FDPIC) && ++ (current->personality & FDPIC_FUNCPTRS); ++ ++ if (fdpic) { ++ unsigned long __user *fdpic_func_desc = ++ (unsigned long __user *)handler; ++ if (__get_user(handler, &fdpic_func_desc[0]) || ++ __get_user(handler_fdpic_GOT, &fdpic_func_desc[1])) ++ return -EFAULT; ++ } + + sp = regs->areg[1]; + +@@ -373,20 +385,26 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set, + err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)); + + if (ksig->ka.sa.sa_flags & SA_RESTORER) { +- ra = (unsigned long)ksig->ka.sa.sa_restorer; ++ if (fdpic) { ++ unsigned long __user *fdpic_func_desc = ++ (unsigned long __user *)ksig->ka.sa.sa_restorer; ++ ++ err |= __get_user(ra, fdpic_func_desc); ++ } else { ++ ra = (unsigned long)ksig->ka.sa.sa_restorer; ++ } + } else { + + /* Create sys_rt_sigreturn syscall in stack frame */ + + err |= gen_return_code(frame->retcode); +- +- if (err) { +- return -EFAULT; +- } + ra = (unsigned long) frame->retcode; + } + +- /* ++ if (err) ++ return -EFAULT; ++ ++ /* + * Create signal handler execution context. + * Return context not modified until this point. 
+ */ +@@ -394,8 +412,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set, + /* Set up registers for signal handler; preserve the threadptr */ + tp = regs->threadptr; + ps = regs->ps; +- start_thread(regs, (unsigned long) ksig->ka.sa.sa_handler, +- (unsigned long) frame); ++ start_thread(regs, handler, (unsigned long)frame); + + /* Set up a stack frame for a call4 if userspace uses windowed ABI */ + if (ps & PS_WOE_MASK) { +@@ -413,6 +430,8 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set, + regs->areg[base + 4] = (unsigned long) &frame->uc; + regs->threadptr = tp; + regs->ps = ps; ++ if (fdpic) ++ regs->areg[base + 11] = handler_fdpic_GOT; + + pr_debug("SIG rt deliver (%s:%d): signal=%d sp=%p pc=%08lx\n", + current->comm, current->pid, sig, frame, regs->pc); +diff --git a/arch/xtensa/kernel/xtensa_ksyms.c b/arch/xtensa/kernel/xtensa_ksyms.c +index 2a31b1ab0c9f2..17a7ef86fd0dd 100644 +--- a/arch/xtensa/kernel/xtensa_ksyms.c ++++ b/arch/xtensa/kernel/xtensa_ksyms.c +@@ -56,6 +56,8 @@ EXPORT_SYMBOL(empty_zero_page); + */ + extern long long __ashrdi3(long long, int); + extern long long __ashldi3(long long, int); ++extern long long __bswapdi2(long long); ++extern int __bswapsi2(int); + extern long long __lshrdi3(long long, int); + extern int __divsi3(int, int); + extern int __modsi3(int, int); +@@ -66,6 +68,8 @@ extern unsigned long long __umulsidi3(unsigned int, unsigned int); + + EXPORT_SYMBOL(__ashldi3); + EXPORT_SYMBOL(__ashrdi3); ++EXPORT_SYMBOL(__bswapdi2); ++EXPORT_SYMBOL(__bswapsi2); + EXPORT_SYMBOL(__lshrdi3); + EXPORT_SYMBOL(__divsi3); + EXPORT_SYMBOL(__modsi3); +diff --git a/arch/xtensa/lib/Makefile b/arch/xtensa/lib/Makefile +index 7ecef0519a27c..c9c2614188f74 100644 +--- a/arch/xtensa/lib/Makefile ++++ b/arch/xtensa/lib/Makefile +@@ -4,7 +4,7 @@ + # + + lib-y += memcopy.o memset.o checksum.o \ +- ashldi3.o ashrdi3.o lshrdi3.o \ ++ ashldi3.o ashrdi3.o bswapdi2.o bswapsi2.o lshrdi3.o \ + divsi3.o udivsi3.o modsi3.o umodsi3.o mulsi3.o umulsidi3.o \ + usercopy.o strncpy_user.o strnlen_user.o + lib-$(CONFIG_PCI) += pci-auto.o +diff --git a/arch/xtensa/lib/bswapdi2.S b/arch/xtensa/lib/bswapdi2.S +new file mode 100644 +index 0000000000000..d8e52e05eba66 +--- /dev/null ++++ b/arch/xtensa/lib/bswapdi2.S +@@ -0,0 +1,21 @@ ++/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */ ++#include ++#include ++#include ++ ++ENTRY(__bswapdi2) ++ ++ abi_entry_default ++ ssai 8 ++ srli a4, a2, 16 ++ src a4, a4, a2 ++ src a4, a4, a4 ++ src a4, a2, a4 ++ srli a2, a3, 16 ++ src a2, a2, a3 ++ src a2, a2, a2 ++ src a2, a3, a2 ++ mov a3, a4 ++ abi_ret_default ++ ++ENDPROC(__bswapdi2) +diff --git a/arch/xtensa/lib/bswapsi2.S b/arch/xtensa/lib/bswapsi2.S +new file mode 100644 +index 0000000000000..9c1de1344f79a +--- /dev/null ++++ b/arch/xtensa/lib/bswapsi2.S +@@ -0,0 +1,16 @@ ++/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */ ++#include ++#include ++#include ++ ++ENTRY(__bswapsi2) ++ ++ abi_entry_default ++ ssai 8 ++ srli a3, a2, 16 ++ src a3, a3, a2 ++ src a3, a3, a3 ++ src a2, a2, a3 ++ abi_ret_default ++ ++ENDPROC(__bswapsi2) +diff --git a/block/blk-map.c b/block/blk-map.c +index 9137d16cecdc3..9c03e641d32c9 100644 +--- a/block/blk-map.c ++++ b/block/blk-map.c +@@ -247,7 +247,7 @@ static struct bio *blk_rq_map_bio_alloc(struct request *rq, + { + struct bio *bio; + +- if (rq->cmd_flags & REQ_ALLOC_CACHE) { ++ if (rq->cmd_flags & REQ_ALLOC_CACHE && (nr_vecs <= BIO_INLINE_VECS)) { + bio = bio_alloc_bioset(NULL, nr_vecs, rq->cmd_flags, gfp_mask, + 
&fs_bio_set); + if (!bio) +diff --git a/drivers/android/binder.c b/drivers/android/binder.c +index fb56bfc45096d..8fb7672021ee2 100644 +--- a/drivers/android/binder.c ++++ b/drivers/android/binder.c +@@ -1934,24 +1934,23 @@ static void binder_deferred_fd_close(int fd) + static void binder_transaction_buffer_release(struct binder_proc *proc, + struct binder_thread *thread, + struct binder_buffer *buffer, +- binder_size_t failed_at, ++ binder_size_t off_end_offset, + bool is_failure) + { + int debug_id = buffer->debug_id; +- binder_size_t off_start_offset, buffer_offset, off_end_offset; ++ binder_size_t off_start_offset, buffer_offset; + + binder_debug(BINDER_DEBUG_TRANSACTION, + "%d buffer release %d, size %zd-%zd, failed at %llx\n", + proc->pid, buffer->debug_id, + buffer->data_size, buffer->offsets_size, +- (unsigned long long)failed_at); ++ (unsigned long long)off_end_offset); + + if (buffer->target_node) + binder_dec_node(buffer->target_node, 1, 0); + + off_start_offset = ALIGN(buffer->data_size, sizeof(void *)); +- off_end_offset = is_failure && failed_at ? failed_at : +- off_start_offset + buffer->offsets_size; ++ + for (buffer_offset = off_start_offset; buffer_offset < off_end_offset; + buffer_offset += sizeof(binder_size_t)) { + struct binder_object_header *hdr; +@@ -2111,6 +2110,21 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, + } + } + ++/* Clean up all the objects in the buffer */ ++static inline void binder_release_entire_buffer(struct binder_proc *proc, ++ struct binder_thread *thread, ++ struct binder_buffer *buffer, ++ bool is_failure) ++{ ++ binder_size_t off_end_offset; ++ ++ off_end_offset = ALIGN(buffer->data_size, sizeof(void *)); ++ off_end_offset += buffer->offsets_size; ++ ++ binder_transaction_buffer_release(proc, thread, buffer, ++ off_end_offset, is_failure); ++} ++ + static int binder_translate_binder(struct flat_binder_object *fp, + struct binder_transaction *t, + struct binder_thread *thread) +@@ -2806,7 +2820,7 @@ static int binder_proc_transaction(struct binder_transaction *t, + t_outdated->buffer = NULL; + buffer->transaction = NULL; + trace_binder_transaction_update_buffer_release(buffer); +- binder_transaction_buffer_release(proc, NULL, buffer, 0, 0); ++ binder_release_entire_buffer(proc, NULL, buffer, false); + binder_alloc_free_buf(&proc->alloc, buffer); + kfree(t_outdated); + binder_stats_deleted(BINDER_STAT_TRANSACTION); +@@ -3775,7 +3789,7 @@ binder_free_buf(struct binder_proc *proc, + binder_node_inner_unlock(buf_node); + } + trace_binder_transaction_buffer_release(buffer); +- binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure); ++ binder_release_entire_buffer(proc, thread, buffer, is_failure); + binder_alloc_free_buf(&proc->alloc, buffer); + } + +diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c +index 55a3c3c2409f0..662a2a2e2e84a 100644 +--- a/drivers/android/binder_alloc.c ++++ b/drivers/android/binder_alloc.c +@@ -212,8 +212,8 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, + mm = alloc->mm; + + if (mm) { +- mmap_read_lock(mm); +- vma = vma_lookup(mm, alloc->vma_addr); ++ mmap_write_lock(mm); ++ vma = alloc->vma; + } + + if (!vma && need_mm) { +@@ -270,7 +270,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, + trace_binder_alloc_page_end(alloc, index); + } + if (mm) { +- mmap_read_unlock(mm); ++ mmap_write_unlock(mm); + mmput(mm); + } + return 0; +@@ -303,21 +303,24 @@ err_page_ptr_cleared: + } + err_no_vma: 
+ if (mm) { +- mmap_read_unlock(mm); ++ mmap_write_unlock(mm); + mmput(mm); + } + return vma ? -ENOMEM : -ESRCH; + } + ++static inline void binder_alloc_set_vma(struct binder_alloc *alloc, ++ struct vm_area_struct *vma) ++{ ++ /* pairs with smp_load_acquire in binder_alloc_get_vma() */ ++ smp_store_release(&alloc->vma, vma); ++} ++ + static inline struct vm_area_struct *binder_alloc_get_vma( + struct binder_alloc *alloc) + { +- struct vm_area_struct *vma = NULL; +- +- if (alloc->vma_addr) +- vma = vma_lookup(alloc->mm, alloc->vma_addr); +- +- return vma; ++ /* pairs with smp_store_release in binder_alloc_set_vma() */ ++ return smp_load_acquire(&alloc->vma); + } + + static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid) +@@ -380,15 +383,13 @@ static struct binder_buffer *binder_alloc_new_buf_locked( + size_t size, data_offsets_size; + int ret; + +- mmap_read_lock(alloc->mm); ++ /* Check binder_alloc is fully initialized */ + if (!binder_alloc_get_vma(alloc)) { +- mmap_read_unlock(alloc->mm); + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, + "%d: binder_alloc_buf, no vma\n", + alloc->pid); + return ERR_PTR(-ESRCH); + } +- mmap_read_unlock(alloc->mm); + + data_offsets_size = ALIGN(data_size, sizeof(void *)) + + ALIGN(offsets_size, sizeof(void *)); +@@ -778,7 +779,9 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc, + buffer->free = 1; + binder_insert_free_buffer(alloc, buffer); + alloc->free_async_space = alloc->buffer_size / 2; +- alloc->vma_addr = vma->vm_start; ++ ++ /* Signal binder_alloc is fully initialized */ ++ binder_alloc_set_vma(alloc, vma); + + return 0; + +@@ -808,8 +811,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc) + + buffers = 0; + mutex_lock(&alloc->mutex); +- BUG_ON(alloc->vma_addr && +- vma_lookup(alloc->mm, alloc->vma_addr)); ++ BUG_ON(alloc->vma); + + while ((n = rb_first(&alloc->allocated_buffers))) { + buffer = rb_entry(n, struct binder_buffer, rb_node); +@@ -916,25 +918,17 @@ void binder_alloc_print_pages(struct seq_file *m, + * Make sure the binder_alloc is fully initialized, otherwise we might + * read inconsistent state. 
+ */ +- +- mmap_read_lock(alloc->mm); +- if (binder_alloc_get_vma(alloc) == NULL) { +- mmap_read_unlock(alloc->mm); +- goto uninitialized; +- } +- +- mmap_read_unlock(alloc->mm); +- for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) { +- page = &alloc->pages[i]; +- if (!page->page_ptr) +- free++; +- else if (list_empty(&page->lru)) +- active++; +- else +- lru++; ++ if (binder_alloc_get_vma(alloc) != NULL) { ++ for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) { ++ page = &alloc->pages[i]; ++ if (!page->page_ptr) ++ free++; ++ else if (list_empty(&page->lru)) ++ active++; ++ else ++ lru++; ++ } + } +- +-uninitialized: + mutex_unlock(&alloc->mutex); + seq_printf(m, " pages: %d:%d:%d\n", active, lru, free); + seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high); +@@ -969,7 +963,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc) + */ + void binder_alloc_vma_close(struct binder_alloc *alloc) + { +- alloc->vma_addr = 0; ++ binder_alloc_set_vma(alloc, NULL); + } + + /** +diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h +index 0f811ac4bcffd..138d1d5af9ce3 100644 +--- a/drivers/android/binder_alloc.h ++++ b/drivers/android/binder_alloc.h +@@ -75,7 +75,7 @@ struct binder_lru_page { + /** + * struct binder_alloc - per-binder proc state for binder allocator + * @mutex: protects binder_alloc fields +- * @vma_addr: vm_area_struct->vm_start passed to mmap_handler ++ * @vma: vm_area_struct passed to mmap_handler + * (invariant after mmap) + * @mm: copy of task->mm (invariant after open) + * @buffer: base of per-proc address space mapped via mmap +@@ -99,7 +99,7 @@ struct binder_lru_page { + */ + struct binder_alloc { + struct mutex mutex; +- unsigned long vma_addr; ++ struct vm_area_struct *vma; + struct mm_struct *mm; + void __user *buffer; + struct list_head buffers; +diff --git a/drivers/android/binder_alloc_selftest.c b/drivers/android/binder_alloc_selftest.c +index 43a881073a428..c2b323bc3b3a5 100644 +--- a/drivers/android/binder_alloc_selftest.c ++++ b/drivers/android/binder_alloc_selftest.c +@@ -287,7 +287,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc) + if (!binder_selftest_run) + return; + mutex_lock(&binder_selftest_lock); +- if (!binder_selftest_run || !alloc->vma_addr) ++ if (!binder_selftest_run || !alloc->vma) + goto done; + pr_info("STARTED\n"); + binder_selftest_alloc_offset(alloc, end_offset, 0); +diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c +index 2a05d8cc0e795..5be91591cb3b2 100644 +--- a/drivers/char/tpm/tpm-chip.c ++++ b/drivers/char/tpm/tpm-chip.c +@@ -572,6 +572,10 @@ static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait) + { + struct tpm_chip *chip = container_of(rng, struct tpm_chip, hwrng); + ++ /* Give back zero bytes, as TPM chip has not yet fully resumed: */ ++ if (chip->flags & TPM_CHIP_FLAG_SUSPENDED) ++ return 0; ++ + return tpm_get_random(chip, data, max); + } + +@@ -605,6 +609,42 @@ static int tpm_get_pcr_allocation(struct tpm_chip *chip) + return rc; + } + ++/* ++ * tpm_chip_bootstrap() - Boostrap TPM chip after power on ++ * @chip: TPM chip to use. ++ * ++ * Initialize TPM chip after power on. This a one-shot function: subsequent ++ * calls will have no effect. 
++ */ ++int tpm_chip_bootstrap(struct tpm_chip *chip) ++{ ++ int rc; ++ ++ if (chip->flags & TPM_CHIP_FLAG_BOOTSTRAPPED) ++ return 0; ++ ++ rc = tpm_chip_start(chip); ++ if (rc) ++ return rc; ++ ++ rc = tpm_auto_startup(chip); ++ if (rc) ++ goto stop; ++ ++ rc = tpm_get_pcr_allocation(chip); ++stop: ++ tpm_chip_stop(chip); ++ ++ /* ++ * Unconditionally set, as driver initialization should cease, when the ++ * boostrapping process fails. ++ */ ++ chip->flags |= TPM_CHIP_FLAG_BOOTSTRAPPED; ++ ++ return rc; ++} ++EXPORT_SYMBOL_GPL(tpm_chip_bootstrap); ++ + /* + * tpm_chip_register() - create a character device for the TPM chip + * @chip: TPM chip to use. +@@ -620,17 +660,7 @@ int tpm_chip_register(struct tpm_chip *chip) + { + int rc; + +- rc = tpm_chip_start(chip); +- if (rc) +- return rc; +- rc = tpm_auto_startup(chip); +- if (rc) { +- tpm_chip_stop(chip); +- return rc; +- } +- +- rc = tpm_get_pcr_allocation(chip); +- tpm_chip_stop(chip); ++ rc = tpm_chip_bootstrap(chip); + if (rc) + return rc; + +diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c +index 7e513b7718320..0f941cb32eb17 100644 +--- a/drivers/char/tpm/tpm-interface.c ++++ b/drivers/char/tpm/tpm-interface.c +@@ -412,6 +412,8 @@ int tpm_pm_suspend(struct device *dev) + } + + suspended: ++ chip->flags |= TPM_CHIP_FLAG_SUSPENDED; ++ + if (rc) + dev_err(dev, "Ignoring error %d while suspending\n", rc); + return 0; +@@ -429,6 +431,14 @@ int tpm_pm_resume(struct device *dev) + if (chip == NULL) + return -ENODEV; + ++ chip->flags &= ~TPM_CHIP_FLAG_SUSPENDED; ++ ++ /* ++ * Guarantee that SUSPENDED is written last, so that hwrng does not ++ * activate before the chip has been fully resumed. ++ */ ++ wmb(); ++ + return 0; + } + EXPORT_SYMBOL_GPL(tpm_pm_resume); +diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h +index 830014a266090..f6c99b3f00458 100644 +--- a/drivers/char/tpm/tpm.h ++++ b/drivers/char/tpm/tpm.h +@@ -263,6 +263,7 @@ static inline void tpm_msleep(unsigned int delay_msec) + delay_msec * 1000); + }; + ++int tpm_chip_bootstrap(struct tpm_chip *chip); + int tpm_chip_start(struct tpm_chip *chip); + void tpm_chip_stop(struct tpm_chip *chip); + struct tpm_chip *tpm_find_get_ops(struct tpm_chip *chip); +diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c +index 4be19d8f3ca95..0d084d6652c41 100644 +--- a/drivers/char/tpm/tpm_tis.c ++++ b/drivers/char/tpm/tpm_tis.c +@@ -243,7 +243,7 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info) + irq = tpm_info->irq; + + if (itpm || is_itpm(ACPI_COMPANION(dev))) +- phy->priv.flags |= TPM_TIS_ITPM_WORKAROUND; ++ set_bit(TPM_TIS_ITPM_WORKAROUND, &phy->priv.flags); + + return tpm_tis_core_init(dev, &phy->priv, irq, &tpm_tcg, + ACPI_HANDLE(dev)); +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c +index eecfbd7e97867..f02b583005a53 100644 +--- a/drivers/char/tpm/tpm_tis_core.c ++++ b/drivers/char/tpm/tpm_tis_core.c +@@ -53,41 +53,63 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask, + long rc; + u8 status; + bool canceled = false; ++ u8 sts_mask = 0; ++ int ret = 0; + + /* check current status */ + status = chip->ops->status(chip); + if ((status & mask) == mask) + return 0; + +- stop = jiffies + timeout; ++ /* check what status changes can be handled by irqs */ ++ if (priv->int_mask & TPM_INTF_STS_VALID_INT) ++ sts_mask |= TPM_STS_VALID; + +- if (chip->flags & TPM_CHIP_FLAG_IRQ) { ++ if (priv->int_mask & TPM_INTF_DATA_AVAIL_INT) ++ sts_mask |= TPM_STS_DATA_AVAIL; ++ ++ if 
(priv->int_mask & TPM_INTF_CMD_READY_INT) ++ sts_mask |= TPM_STS_COMMAND_READY; ++ ++ sts_mask &= mask; ++ ++ stop = jiffies + timeout; ++ /* process status changes with irq support */ ++ if (sts_mask) { ++ ret = -ETIME; + again: + timeout = stop - jiffies; + if ((long)timeout <= 0) + return -ETIME; + rc = wait_event_interruptible_timeout(*queue, +- wait_for_tpm_stat_cond(chip, mask, check_cancel, ++ wait_for_tpm_stat_cond(chip, sts_mask, check_cancel, + &canceled), + timeout); + if (rc > 0) { + if (canceled) + return -ECANCELED; +- return 0; ++ ret = 0; + } + if (rc == -ERESTARTSYS && freezing(current)) { + clear_thread_flag(TIF_SIGPENDING); + goto again; + } +- } else { +- do { +- usleep_range(priv->timeout_min, +- priv->timeout_max); +- status = chip->ops->status(chip); +- if ((status & mask) == mask) +- return 0; +- } while (time_before(jiffies, stop)); + } ++ ++ if (ret) ++ return ret; ++ ++ mask &= ~sts_mask; ++ if (!mask) /* all done */ ++ return 0; ++ /* process status changes without irq support */ ++ do { ++ status = chip->ops->status(chip); ++ if ((status & mask) == mask) ++ return 0; ++ usleep_range(priv->timeout_min, ++ priv->timeout_max); ++ } while (time_before(jiffies, stop)); + return -ETIME; + } + +@@ -376,7 +398,7 @@ static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len) + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); + int rc, status, burstcnt; + size_t count = 0; +- bool itpm = priv->flags & TPM_TIS_ITPM_WORKAROUND; ++ bool itpm = test_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags); + + status = tpm_tis_status(chip); + if ((status & TPM_STS_COMMAND_READY) == 0) { +@@ -509,7 +531,8 @@ static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len) + int rc, irq; + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); + +- if (!(chip->flags & TPM_CHIP_FLAG_IRQ) || priv->irq_tested) ++ if (!(chip->flags & TPM_CHIP_FLAG_IRQ) || ++ test_bit(TPM_TIS_IRQ_TESTED, &priv->flags)) + return tpm_tis_send_main(chip, buf, len); + + /* Verify receipt of the expected IRQ */ +@@ -519,11 +542,11 @@ static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len) + rc = tpm_tis_send_main(chip, buf, len); + priv->irq = irq; + chip->flags |= TPM_CHIP_FLAG_IRQ; +- if (!priv->irq_tested) ++ if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags)) + tpm_msleep(1); +- if (!priv->irq_tested) ++ if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags)) + disable_interrupts(chip); +- priv->irq_tested = true; ++ set_bit(TPM_TIS_IRQ_TESTED, &priv->flags); + return rc; + } + +@@ -666,7 +689,7 @@ static int probe_itpm(struct tpm_chip *chip) + size_t len = sizeof(cmd_getticks); + u16 vendor; + +- if (priv->flags & TPM_TIS_ITPM_WORKAROUND) ++ if (test_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags)) + return 0; + + rc = tpm_tis_read16(priv, TPM_DID_VID(0), &vendor); +@@ -686,13 +709,13 @@ static int probe_itpm(struct tpm_chip *chip) + + tpm_tis_ready(chip); + +- priv->flags |= TPM_TIS_ITPM_WORKAROUND; ++ set_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags); + + rc = tpm_tis_send_data(chip, cmd_getticks, len); + if (rc == 0) + dev_info(&chip->dev, "Detected an iTPM.\n"); + else { +- priv->flags &= ~TPM_TIS_ITPM_WORKAROUND; ++ clear_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags); + rc = -EFAULT; + } + +@@ -736,7 +759,7 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id) + if (interrupt == 0) + return IRQ_NONE; + +- priv->irq_tested = true; ++ set_bit(TPM_TIS_IRQ_TESTED, &priv->flags); + if (interrupt & TPM_INTF_DATA_AVAIL_INT) + wake_up_interruptible(&priv->read_queue); + if (interrupt 
& TPM_INTF_LOCALITY_CHANGE_INT) +@@ -819,7 +842,7 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask, + if (rc < 0) + goto restore_irqs; + +- priv->irq_tested = false; ++ clear_bit(TPM_TIS_IRQ_TESTED, &priv->flags); + + /* Generate an interrupt by having the core call through to + * tpm_tis_send +@@ -1031,8 +1054,40 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, + if (rc < 0) + goto out_err; + +- intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT | +- TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT; ++ /* Figure out the capabilities */ ++ rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps); ++ if (rc < 0) ++ goto out_err; ++ ++ dev_dbg(dev, "TPM interface capabilities (0x%x):\n", ++ intfcaps); ++ if (intfcaps & TPM_INTF_BURST_COUNT_STATIC) ++ dev_dbg(dev, "\tBurst Count Static\n"); ++ if (intfcaps & TPM_INTF_CMD_READY_INT) { ++ intmask |= TPM_INTF_CMD_READY_INT; ++ dev_dbg(dev, "\tCommand Ready Int Support\n"); ++ } ++ if (intfcaps & TPM_INTF_INT_EDGE_FALLING) ++ dev_dbg(dev, "\tInterrupt Edge Falling\n"); ++ if (intfcaps & TPM_INTF_INT_EDGE_RISING) ++ dev_dbg(dev, "\tInterrupt Edge Rising\n"); ++ if (intfcaps & TPM_INTF_INT_LEVEL_LOW) ++ dev_dbg(dev, "\tInterrupt Level Low\n"); ++ if (intfcaps & TPM_INTF_INT_LEVEL_HIGH) ++ dev_dbg(dev, "\tInterrupt Level High\n"); ++ if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT) { ++ intmask |= TPM_INTF_LOCALITY_CHANGE_INT; ++ dev_dbg(dev, "\tLocality Change Int Support\n"); ++ } ++ if (intfcaps & TPM_INTF_STS_VALID_INT) { ++ intmask |= TPM_INTF_STS_VALID_INT; ++ dev_dbg(dev, "\tSts Valid Int Support\n"); ++ } ++ if (intfcaps & TPM_INTF_DATA_AVAIL_INT) { ++ intmask |= TPM_INTF_DATA_AVAIL_INT; ++ dev_dbg(dev, "\tData Avail Int Support\n"); ++ } ++ + intmask &= ~TPM_GLOBAL_INT_ENABLE; + + rc = tpm_tis_request_locality(chip, 0); +@@ -1066,35 +1121,14 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, + goto out_err; + } + +- /* Figure out the capabilities */ +- rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps); +- if (rc < 0) +- goto out_err; +- +- dev_dbg(dev, "TPM interface capabilities (0x%x):\n", +- intfcaps); +- if (intfcaps & TPM_INTF_BURST_COUNT_STATIC) +- dev_dbg(dev, "\tBurst Count Static\n"); +- if (intfcaps & TPM_INTF_CMD_READY_INT) +- dev_dbg(dev, "\tCommand Ready Int Support\n"); +- if (intfcaps & TPM_INTF_INT_EDGE_FALLING) +- dev_dbg(dev, "\tInterrupt Edge Falling\n"); +- if (intfcaps & TPM_INTF_INT_EDGE_RISING) +- dev_dbg(dev, "\tInterrupt Edge Rising\n"); +- if (intfcaps & TPM_INTF_INT_LEVEL_LOW) +- dev_dbg(dev, "\tInterrupt Level Low\n"); +- if (intfcaps & TPM_INTF_INT_LEVEL_HIGH) +- dev_dbg(dev, "\tInterrupt Level High\n"); +- if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT) +- dev_dbg(dev, "\tLocality Change Int Support\n"); +- if (intfcaps & TPM_INTF_STS_VALID_INT) +- dev_dbg(dev, "\tSts Valid Int Support\n"); +- if (intfcaps & TPM_INTF_DATA_AVAIL_INT) +- dev_dbg(dev, "\tData Avail Int Support\n"); +- + /* INTERRUPT Setup */ + init_waitqueue_head(&priv->read_queue); + init_waitqueue_head(&priv->int_queue); ++ ++ rc = tpm_chip_bootstrap(chip); ++ if (rc) ++ goto out_err; ++ + if (irq != -1) { + /* + * Before doing irq testing issue a command to the TPM in polling mode +@@ -1122,7 +1156,9 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, + else + tpm_tis_probe_irq(chip, intmask); + +- if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) { ++ if (chip->flags & TPM_CHIP_FLAG_IRQ) { ++ 
priv->int_mask = intmask; ++ } else { + dev_err(&chip->dev, FW_BUG + "TPM interrupt not working, polling instead\n"); + +@@ -1159,31 +1195,20 @@ static void tpm_tis_reenable_interrupts(struct tpm_chip *chip) + u32 intmask; + int rc; + +- if (chip->ops->clk_enable != NULL) +- chip->ops->clk_enable(chip, true); +- +- /* reenable interrupts that device may have lost or +- * BIOS/firmware may have disabled ++ /* ++ * Re-enable interrupts that device may have lost or BIOS/firmware may ++ * have disabled. + */ + rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), priv->irq); +- if (rc < 0) +- goto out; ++ if (rc < 0) { ++ dev_err(&chip->dev, "Setting IRQ failed.\n"); ++ return; ++ } + +- rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask); ++ intmask = priv->int_mask | TPM_GLOBAL_INT_ENABLE; ++ rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); + if (rc < 0) +- goto out; +- +- intmask |= TPM_INTF_CMD_READY_INT +- | TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT +- | TPM_INTF_STS_VALID_INT | TPM_GLOBAL_INT_ENABLE; +- +- tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); +- +-out: +- if (chip->ops->clk_enable != NULL) +- chip->ops->clk_enable(chip, false); +- +- return; ++ dev_err(&chip->dev, "Enabling interrupts failed.\n"); + } + + int tpm_tis_resume(struct device *dev) +@@ -1191,27 +1216,27 @@ int tpm_tis_resume(struct device *dev) + struct tpm_chip *chip = dev_get_drvdata(dev); + int ret; + +- ret = tpm_tis_request_locality(chip, 0); +- if (ret < 0) ++ ret = tpm_chip_start(chip); ++ if (ret) + return ret; + + if (chip->flags & TPM_CHIP_FLAG_IRQ) + tpm_tis_reenable_interrupts(chip); + +- ret = tpm_pm_resume(dev); +- if (ret) +- goto out; +- + /* + * TPM 1.2 requires self-test on resume. This function actually returns + * an error code but for unknown reason it isn't handled. + */ + if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) + tpm1_do_selftest(chip); +-out: +- tpm_tis_relinquish_locality(chip, 0); + +- return ret; ++ tpm_chip_stop(chip); ++ ++ ret = tpm_pm_resume(dev); ++ if (ret) ++ return ret; ++ ++ return 0; + } + EXPORT_SYMBOL_GPL(tpm_tis_resume); + #endif +diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h +index 1d51d5168fb6e..e978f457fd4d4 100644 +--- a/drivers/char/tpm/tpm_tis_core.h ++++ b/drivers/char/tpm/tpm_tis_core.h +@@ -87,6 +87,7 @@ enum tpm_tis_flags { + TPM_TIS_ITPM_WORKAROUND = BIT(0), + TPM_TIS_INVALID_STATUS = BIT(1), + TPM_TIS_DEFAULT_CANCELLATION = BIT(2), ++ TPM_TIS_IRQ_TESTED = BIT(3), + }; + + struct tpm_tis_data { +@@ -95,7 +96,7 @@ struct tpm_tis_data { + unsigned int locality_count; + int locality; + int irq; +- bool irq_tested; ++ unsigned int int_mask; + unsigned long flags; + void __iomem *ilb_base_addr; + u16 clkrun_enabled; +diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c +index f2addb4571723..114e15d02bdee 100644 +--- a/drivers/cxl/core/mbox.c ++++ b/drivers/cxl/core/mbox.c +@@ -984,7 +984,7 @@ static int cxl_mem_get_partition_info(struct cxl_dev_state *cxlds) + * cxl_dev_state_identify() - Send the IDENTIFY command to the device. + * @cxlds: The device data for the operation + * +- * Return: 0 if identify was executed successfully. ++ * Return: 0 if identify was executed successfully or media not ready. + * + * This will dispatch the identify command to the device and on success populate + * structures to be exported to sysfs. 
+@@ -996,6 +996,9 @@ int cxl_dev_state_identify(struct cxl_dev_state *cxlds) + struct cxl_mbox_cmd mbox_cmd; + int rc; + ++ if (!cxlds->media_ready) ++ return 0; ++ + mbox_cmd = (struct cxl_mbox_cmd) { + .opcode = CXL_MBOX_OP_IDENTIFY, + .size_out = sizeof(id), +@@ -1065,10 +1068,12 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds) + cxlds->persistent_only_bytes, "pmem"); + } + +- rc = cxl_mem_get_partition_info(cxlds); +- if (rc) { +- dev_err(dev, "Failed to query partition information\n"); +- return rc; ++ if (cxlds->media_ready) { ++ rc = cxl_mem_get_partition_info(cxlds); ++ if (rc) { ++ dev_err(dev, "Failed to query partition information\n"); ++ return rc; ++ } + } + + rc = add_dpa_res(dev, &cxlds->dpa_res, &cxlds->ram_res, 0, +diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c +index 523d5b9fd7fcf..2055d0b9d4af1 100644 +--- a/drivers/cxl/core/pci.c ++++ b/drivers/cxl/core/pci.c +@@ -101,23 +101,57 @@ int devm_cxl_port_enumerate_dports(struct cxl_port *port) + } + EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, CXL); + +-/* +- * Wait up to @media_ready_timeout for the device to report memory +- * active. +- */ +-int cxl_await_media_ready(struct cxl_dev_state *cxlds) ++static int cxl_dvsec_mem_range_valid(struct cxl_dev_state *cxlds, int id) ++{ ++ struct pci_dev *pdev = to_pci_dev(cxlds->dev); ++ int d = cxlds->cxl_dvsec; ++ bool valid = false; ++ int rc, i; ++ u32 temp; ++ ++ if (id > CXL_DVSEC_RANGE_MAX) ++ return -EINVAL; ++ ++ /* Check MEM INFO VALID bit first, give up after 1s */ ++ i = 1; ++ do { ++ rc = pci_read_config_dword(pdev, ++ d + CXL_DVSEC_RANGE_SIZE_LOW(id), ++ &temp); ++ if (rc) ++ return rc; ++ ++ valid = FIELD_GET(CXL_DVSEC_MEM_INFO_VALID, temp); ++ if (valid) ++ break; ++ msleep(1000); ++ } while (i--); ++ ++ if (!valid) { ++ dev_err(&pdev->dev, ++ "Timeout awaiting memory range %d valid after 1s.\n", ++ id); ++ return -ETIMEDOUT; ++ } ++ ++ return 0; ++} ++ ++static int cxl_dvsec_mem_range_active(struct cxl_dev_state *cxlds, int id) + { + struct pci_dev *pdev = to_pci_dev(cxlds->dev); + int d = cxlds->cxl_dvsec; + bool active = false; +- u64 md_status; + int rc, i; ++ u32 temp; + +- for (i = media_ready_timeout; i; i--) { +- u32 temp; ++ if (id > CXL_DVSEC_RANGE_MAX) ++ return -EINVAL; + ++ /* Check MEM ACTIVE bit, up to 60s timeout by default */ ++ for (i = media_ready_timeout; i; i--) { + rc = pci_read_config_dword( +- pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &temp); ++ pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(id), &temp); + if (rc) + return rc; + +@@ -134,6 +168,39 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds) + return -ETIMEDOUT; + } + ++ return 0; ++} ++ ++/* ++ * Wait up to @media_ready_timeout for the device to report memory ++ * active. 
++ */ ++int cxl_await_media_ready(struct cxl_dev_state *cxlds) ++{ ++ struct pci_dev *pdev = to_pci_dev(cxlds->dev); ++ int d = cxlds->cxl_dvsec; ++ int rc, i, hdm_count; ++ u64 md_status; ++ u16 cap; ++ ++ rc = pci_read_config_word(pdev, ++ d + CXL_DVSEC_CAP_OFFSET, &cap); ++ if (rc) ++ return rc; ++ ++ hdm_count = FIELD_GET(CXL_DVSEC_HDM_COUNT_MASK, cap); ++ for (i = 0; i < hdm_count; i++) { ++ rc = cxl_dvsec_mem_range_valid(cxlds, i); ++ if (rc) ++ return rc; ++ } ++ ++ for (i = 0; i < hdm_count; i++) { ++ rc = cxl_dvsec_mem_range_active(cxlds, i); ++ if (rc) ++ return rc; ++ } ++ + md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET); + if (!CXLMDEV_READY(md_status)) + return -EIO; +@@ -241,17 +308,36 @@ static void disable_hdm(void *_cxlhdm) + hdm + CXL_HDM_DECODER_CTRL_OFFSET); + } + +-static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm) ++int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm) + { +- void __iomem *hdm = cxlhdm->regs.hdm_decoder; ++ void __iomem *hdm; + u32 global_ctrl; + ++ /* ++ * If the hdm capability was not mapped there is nothing to enable and ++ * the caller is responsible for what happens next. For example, ++ * emulate a passthrough decoder. ++ */ ++ if (IS_ERR(cxlhdm)) ++ return 0; ++ ++ hdm = cxlhdm->regs.hdm_decoder; + global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); ++ ++ /* ++ * If the HDM decoder capability was enabled on entry, skip ++ * registering disable_hdm() since this decode capability may be ++ * owned by platform firmware. ++ */ ++ if (global_ctrl & CXL_HDM_DECODER_ENABLE) ++ return 0; ++ + writel(global_ctrl | CXL_HDM_DECODER_ENABLE, + hdm + CXL_HDM_DECODER_CTRL_OFFSET); + +- return devm_add_action_or_reset(host, disable_hdm, cxlhdm); ++ return devm_add_action_or_reset(&port->dev, disable_hdm, cxlhdm); + } ++EXPORT_SYMBOL_NS_GPL(devm_cxl_enable_hdm, CXL); + + int cxl_dvsec_rr_decode(struct device *dev, int d, + struct cxl_endpoint_dvsec_info *info) +@@ -425,7 +511,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm, + if (info->mem_enabled) + return 0; + +- rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); ++ rc = devm_cxl_enable_hdm(port, cxlhdm); + if (rc) + return rc; + +diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h +index 044a92d9813e2..f93a285389621 100644 +--- a/drivers/cxl/cxl.h ++++ b/drivers/cxl/cxl.h +@@ -710,6 +710,7 @@ struct cxl_endpoint_dvsec_info { + struct cxl_hdm; + struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port, + struct cxl_endpoint_dvsec_info *info); ++int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm); + int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, + struct cxl_endpoint_dvsec_info *info); + int devm_cxl_add_passthrough_decoder(struct cxl_port *port); +diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h +index 090acebba4fab..2c97acfa84162 100644 +--- a/drivers/cxl/cxlmem.h ++++ b/drivers/cxl/cxlmem.h +@@ -227,6 +227,7 @@ struct cxl_event_state { + * @regs: Parsed register blocks + * @cxl_dvsec: Offset to the PCIe device DVSEC + * @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH) ++ * @media_ready: Indicate whether the device media is usable + * @payload_size: Size of space for payload + * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) + * @lsa_size: Size of Label Storage Area +@@ -264,6 +265,7 @@ struct cxl_dev_state { + int cxl_dvsec; + + bool rcd; ++ bool media_ready; + size_t payload_size; + size_t lsa_size; + struct mutex mbox_mutex; /* Protects device mailbox 
and firmware */ +diff --git a/drivers/cxl/cxlpci.h b/drivers/cxl/cxlpci.h +index 0465ef963cd6a..7c02e55b80429 100644 +--- a/drivers/cxl/cxlpci.h ++++ b/drivers/cxl/cxlpci.h +@@ -31,6 +31,8 @@ + #define CXL_DVSEC_RANGE_BASE_LOW(i) (0x24 + (i * 0x10)) + #define CXL_DVSEC_MEM_BASE_LOW_MASK GENMASK(31, 28) + ++#define CXL_DVSEC_RANGE_MAX 2 ++ + /* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */ + #define CXL_DVSEC_FUNCTION_MAP 2 + +diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c +index 39c4b54f07152..71d7eafb6c94c 100644 +--- a/drivers/cxl/mem.c ++++ b/drivers/cxl/mem.c +@@ -104,6 +104,9 @@ static int cxl_mem_probe(struct device *dev) + struct dentry *dentry; + int rc; + ++ if (!cxlds->media_ready) ++ return -EBUSY; ++ + /* + * Someone is trying to reattach this device after it lost its port + * connection (an endpoint port previously registered by this memdev was +diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c +index 60b23624d167f..24a2ad5caeb7f 100644 +--- a/drivers/cxl/pci.c ++++ b/drivers/cxl/pci.c +@@ -757,6 +757,12 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) + if (rc) + dev_dbg(&pdev->dev, "Failed to map RAS capability.\n"); + ++ rc = cxl_await_media_ready(cxlds); ++ if (rc == 0) ++ cxlds->media_ready = true; ++ else ++ dev_warn(&pdev->dev, "Media not active (%d)\n", rc); ++ + rc = cxl_pci_setup_mailbox(cxlds); + if (rc) + return rc; +diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c +index eb57324c4ad4a..c23b6164e1c0f 100644 +--- a/drivers/cxl/port.c ++++ b/drivers/cxl/port.c +@@ -60,13 +60,17 @@ static int discover_region(struct device *dev, void *root) + static int cxl_switch_port_probe(struct cxl_port *port) + { + struct cxl_hdm *cxlhdm; +- int rc; ++ int rc, nr_dports; + +- rc = devm_cxl_port_enumerate_dports(port); +- if (rc < 0) +- return rc; ++ nr_dports = devm_cxl_port_enumerate_dports(port); ++ if (nr_dports < 0) ++ return nr_dports; + + cxlhdm = devm_cxl_setup_hdm(port, NULL); ++ rc = devm_cxl_enable_hdm(port, cxlhdm); ++ if (rc) ++ return rc; ++ + if (!IS_ERR(cxlhdm)) + return devm_cxl_enumerate_decoders(cxlhdm, NULL); + +@@ -75,7 +79,7 @@ static int cxl_switch_port_probe(struct cxl_port *port) + return PTR_ERR(cxlhdm); + } + +- if (rc == 1) { ++ if (nr_dports == 1) { + dev_dbg(&port->dev, "Fallback to passthrough decoder\n"); + return devm_cxl_add_passthrough_decoder(port); + } +@@ -113,12 +117,6 @@ static int cxl_endpoint_port_probe(struct cxl_port *port) + if (rc) + return rc; + +- rc = cxl_await_media_ready(cxlds); +- if (rc) { +- dev_err(&port->dev, "Media not active (%d)\n", rc); +- return rc; +- } +- + rc = devm_cxl_enumerate_decoders(cxlhdm, &info); + if (rc) + return rc; +diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c +index f29d77ecf72db..2b8bfcd010f5f 100644 +--- a/drivers/firmware/arm_ffa/bus.c ++++ b/drivers/firmware/arm_ffa/bus.c +@@ -15,6 +15,8 @@ + + #include "common.h" + ++static DEFINE_IDA(ffa_bus_id); ++ + static int ffa_device_match(struct device *dev, struct device_driver *drv) + { + const struct ffa_device_id *id_table; +@@ -53,7 +55,8 @@ static void ffa_device_remove(struct device *dev) + { + struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver); + +- ffa_drv->remove(to_ffa_dev(dev)); ++ if (ffa_drv->remove) ++ ffa_drv->remove(to_ffa_dev(dev)); + } + + static int ffa_device_uevent(const struct device *dev, struct kobj_uevent_env *env) +@@ -130,6 +133,7 @@ static void ffa_release_device(struct device *dev) + { + struct ffa_device *ffa_dev = to_ffa_dev(dev); + ++ 
ida_free(&ffa_bus_id, ffa_dev->id); + kfree(ffa_dev); + } + +@@ -170,18 +174,24 @@ bool ffa_device_is_valid(struct ffa_device *ffa_dev) + struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id, + const struct ffa_ops *ops) + { +- int ret; ++ int id, ret; + struct device *dev; + struct ffa_device *ffa_dev; + ++ id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL); ++ if (id < 0) ++ return NULL; ++ + ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL); +- if (!ffa_dev) ++ if (!ffa_dev) { ++ ida_free(&ffa_bus_id, id); + return NULL; ++ } + + dev = &ffa_dev->dev; + dev->bus = &ffa_bus_type; + dev->release = ffa_release_device; +- dev_set_name(&ffa_dev->dev, "arm-ffa-%04x", vm_id); ++ dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id); + + ffa_dev->vm_id = vm_id; + ffa_dev->ops = ops; +@@ -217,4 +227,5 @@ void arm_ffa_bus_exit(void) + { + ffa_devices_unregister(); + bus_unregister(&ffa_bus_type); ++ ida_destroy(&ffa_bus_id); + } +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index fa85c64d3dede..02774baa90078 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -420,12 +420,17 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize, + ep_mem_access->receiver = args->attrs[idx].receiver; + ep_mem_access->attrs = args->attrs[idx].attrs; + ep_mem_access->composite_off = COMPOSITE_OFFSET(args->nattrs); ++ ep_mem_access->flag = 0; ++ ep_mem_access->reserved = 0; + } ++ mem_region->reserved_0 = 0; ++ mem_region->reserved_1 = 0; + mem_region->ep_count = args->nattrs; + + composite = buffer + COMPOSITE_OFFSET(args->nattrs); + composite->total_pg_cnt = ffa_get_num_pages_sg(args->sg); + composite->addr_range_cnt = num_entries; ++ composite->reserved = 0; + + length = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, num_entries); + frag_len = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, 0); +@@ -460,6 +465,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize, + + constituents->address = sg_phys(args->sg); + constituents->pg_cnt = args->sg->length / FFA_PAGE_SIZE; ++ constituents->reserved = 0; + constituents++; + frag_len += sizeof(struct ffa_mem_region_addr_range); + } while ((args->sg = sg_next(args->sg))); +diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c +index e6a7049bef641..b32063ac845a5 100644 +--- a/drivers/gpio/gpio-mockup.c ++++ b/drivers/gpio/gpio-mockup.c +@@ -369,7 +369,7 @@ static void gpio_mockup_debugfs_setup(struct device *dev, + priv->offset = i; + priv->desc = gpiochip_get_desc(gc, i); + +- debugfs_create_file(name, 0200, chip->dbg_dir, priv, ++ debugfs_create_file(name, 0600, chip->dbg_dir, priv, + &gpio_mockup_debugfs_ops); + } + } +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +index 7e8b7171068dc..bebd136ed5444 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c +@@ -1328,12 +1328,9 @@ int amdgpu_mes_self_test(struct amdgpu_device *adev) + struct amdgpu_mes_ctx_data ctx_data = {0}; + struct amdgpu_ring *added_rings[AMDGPU_MES_CTX_MAX_RINGS] = { NULL }; + int gang_ids[3] = {0}; +- int queue_types[][2] = { { AMDGPU_RING_TYPE_GFX, +- AMDGPU_MES_CTX_MAX_GFX_RINGS}, +- { AMDGPU_RING_TYPE_COMPUTE, +- AMDGPU_MES_CTX_MAX_COMPUTE_RINGS}, +- { AMDGPU_RING_TYPE_SDMA, +- AMDGPU_MES_CTX_MAX_SDMA_RINGS } }; ++ int queue_types[][2] = { { AMDGPU_RING_TYPE_GFX, 1 }, ++ { AMDGPU_RING_TYPE_COMPUTE, 1 }, ++ { AMDGPU_RING_TYPE_SDMA, 1} }; + int i, r, pasid, k = 0; + + pasid = 
amdgpu_pasid_alloc(16); +diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c +index e1b7fca096660..5f10883da6a23 100644 +--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c +@@ -57,7 +57,13 @@ static int psp_v10_0_init_microcode(struct psp_context *psp) + if (err) + return err; + +- return psp_init_ta_microcode(psp, ucode_prefix); ++ err = psp_init_ta_microcode(psp, ucode_prefix); ++ if ((adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 1, 0)) && ++ (adev->pdev->revision == 0xa1) && ++ (psp->securedisplay_context.context.bin_desc.fw_version >= 0x27000008)) { ++ adev->psp.securedisplay_context.context.bin_desc.size_bytes = 0; ++ } ++ return err; + } + + static int psp_v10_0_ring_create(struct psp_context *psp, +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index f54d670ab3abc..0695c7c3d489d 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -2813,7 +2813,7 @@ static int dm_resume(void *handle) + * this is the case when traversing through already created + * MST connectors, should be skipped + */ +- if (aconnector->dc_link->type == dc_connection_mst_branch) ++ if (aconnector && aconnector->mst_root) + continue; + + mutex_lock(&aconnector->hpd_lock); +@@ -6717,7 +6717,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder, + int clock, bpp = 0; + bool is_y420 = false; + +- if (!aconnector->mst_output_port || !aconnector->dc_sink) ++ if (!aconnector->mst_output_port) + return 0; + + mst_port = aconnector->mst_output_port; +diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c +index bf6d63673b5aa..5da7236ca203f 100644 +--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c ++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c +@@ -869,13 +869,11 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev, + } + if (ret == -ENOENT) { + size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf); +- if (size > 0) { +- size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size); +- size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size); +- size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size); +- size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size); +- size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size); +- } ++ size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size); ++ size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size); ++ size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size); ++ size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size); ++ size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size); + } + + if (size == 0) +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c +index ca0fca7da29e0..268c697735f34 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c +@@ -125,6 +125,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_7_message_map[SMU_MSG_MAX_COUNT] = + MSG_MAP(ArmD3, PPSMC_MSG_ArmD3, 0), + MSG_MAP(AllowGpo, PPSMC_MSG_SetGpoAllow, 0), + MSG_MAP(GetPptLimit, PPSMC_MSG_GetPptLimit, 0), ++ MSG_MAP(NotifyPowerSource, PPSMC_MSG_NotifyPowerSource, 0), + }; + + static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = { +diff --git a/drivers/gpu/drm/drm_managed.c 
b/drivers/gpu/drm/drm_managed.c +index 4cf214de50c40..c21c3f6230335 100644 +--- a/drivers/gpu/drm/drm_managed.c ++++ b/drivers/gpu/drm/drm_managed.c +@@ -264,28 +264,10 @@ void drmm_kfree(struct drm_device *dev, void *data) + } + EXPORT_SYMBOL(drmm_kfree); + +-static void drmm_mutex_release(struct drm_device *dev, void *res) ++void __drmm_mutex_release(struct drm_device *dev, void *res) + { + struct mutex *lock = res; + + mutex_destroy(lock); + } +- +-/** +- * drmm_mutex_init - &drm_device-managed mutex_init() +- * @dev: DRM device +- * @lock: lock to be initialized +- * +- * Returns: +- * 0 on success, or a negative errno code otherwise. +- * +- * This is a &drm_device-managed version of mutex_init(). The initialized +- * lock is automatically destroyed on the final drm_dev_put(). +- */ +-int drmm_mutex_init(struct drm_device *dev, struct mutex *lock) +-{ +- mutex_init(lock); +- +- return drmm_add_action_or_reset(dev, drmm_mutex_release, lock); +-} +-EXPORT_SYMBOL(drmm_mutex_init); ++EXPORT_SYMBOL(__drmm_mutex_release); +diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c +index 0a5aaf78172a6..576c4c838a331 100644 +--- a/drivers/gpu/drm/mgag200/mgag200_mode.c ++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c +@@ -640,6 +640,11 @@ void mgag200_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_ + if (funcs->pixpllc_atomic_update) + funcs->pixpllc_atomic_update(crtc, old_state); + ++ if (crtc_state->gamma_lut) ++ mgag200_crtc_set_gamma(mdev, format, crtc_state->gamma_lut->data); ++ else ++ mgag200_crtc_set_gamma_linear(mdev, format); ++ + mgag200_enable_display(mdev); + + if (funcs->enable_vidrst) +diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c +index 3377fbc71f654..c4dda908666cf 100644 +--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c ++++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c +@@ -99,6 +99,16 @@ static void radeon_hotplug_work_func(struct work_struct *work) + + static void radeon_dp_work_func(struct work_struct *work) + { ++ struct radeon_device *rdev = container_of(work, struct radeon_device, ++ dp_work); ++ struct drm_device *dev = rdev->ddev; ++ struct drm_mode_config *mode_config = &dev->mode_config; ++ struct drm_connector *connector; ++ ++ mutex_lock(&mode_config->mutex); ++ list_for_each_entry(connector, &mode_config->connector_list, head) ++ radeon_connector_hotplug(connector); ++ mutex_unlock(&mode_config->mutex); + } + + /** +diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c +index 918d461fcf4a6..eaa296ced1678 100644 +--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c ++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c +@@ -942,7 +942,7 @@ tmc_etr_buf_insert_barrier_packet(struct etr_buf *etr_buf, u64 offset) + + len = tmc_etr_buf_get_data(etr_buf, offset, + CORESIGHT_BARRIER_PKT_SIZE, &bufp); +- if (WARN_ON(len < CORESIGHT_BARRIER_PKT_SIZE)) ++ if (WARN_ON(len < 0 || len < CORESIGHT_BARRIER_PKT_SIZE)) + return -EINVAL; + coresight_insert_barrier_packet(bufp); + return offset + CORESIGHT_BARRIER_PKT_SIZE; +diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c +index 1a6a7a672ad77..e23935099b830 100644 +--- a/drivers/irqchip/irq-mips-gic.c ++++ b/drivers/irqchip/irq-mips-gic.c +@@ -50,7 +50,7 @@ void __iomem *mips_gic_base; + + static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks); + +-static DEFINE_SPINLOCK(gic_lock); ++static DEFINE_RAW_SPINLOCK(gic_lock); + static 
struct irq_domain *gic_irq_domain; + static int gic_shared_intrs; + static unsigned int gic_cpu_pin; +@@ -211,7 +211,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type) + + irq = GIC_HWIRQ_TO_SHARED(d->hwirq); + +- spin_lock_irqsave(&gic_lock, flags); ++ raw_spin_lock_irqsave(&gic_lock, flags); + switch (type & IRQ_TYPE_SENSE_MASK) { + case IRQ_TYPE_EDGE_FALLING: + pol = GIC_POL_FALLING_EDGE; +@@ -251,7 +251,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type) + else + irq_set_chip_handler_name_locked(d, &gic_level_irq_controller, + handle_level_irq, NULL); +- spin_unlock_irqrestore(&gic_lock, flags); ++ raw_spin_unlock_irqrestore(&gic_lock, flags); + + return 0; + } +@@ -269,7 +269,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask, + return -EINVAL; + + /* Assumption : cpumask refers to a single CPU */ +- spin_lock_irqsave(&gic_lock, flags); ++ raw_spin_lock_irqsave(&gic_lock, flags); + + /* Re-route this IRQ */ + write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu))); +@@ -280,7 +280,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask, + set_bit(irq, per_cpu_ptr(pcpu_masks, cpu)); + + irq_data_update_effective_affinity(d, cpumask_of(cpu)); +- spin_unlock_irqrestore(&gic_lock, flags); ++ raw_spin_unlock_irqrestore(&gic_lock, flags); + + return IRQ_SET_MASK_OK; + } +@@ -358,12 +358,12 @@ static void gic_mask_local_irq_all_vpes(struct irq_data *d) + cd = irq_data_get_irq_chip_data(d); + cd->mask = false; + +- spin_lock_irqsave(&gic_lock, flags); ++ raw_spin_lock_irqsave(&gic_lock, flags); + for_each_online_cpu(cpu) { + write_gic_vl_other(mips_cm_vp_id(cpu)); + write_gic_vo_rmask(BIT(intr)); + } +- spin_unlock_irqrestore(&gic_lock, flags); ++ raw_spin_unlock_irqrestore(&gic_lock, flags); + } + + static void gic_unmask_local_irq_all_vpes(struct irq_data *d) +@@ -376,12 +376,12 @@ static void gic_unmask_local_irq_all_vpes(struct irq_data *d) + cd = irq_data_get_irq_chip_data(d); + cd->mask = true; + +- spin_lock_irqsave(&gic_lock, flags); ++ raw_spin_lock_irqsave(&gic_lock, flags); + for_each_online_cpu(cpu) { + write_gic_vl_other(mips_cm_vp_id(cpu)); + write_gic_vo_smask(BIT(intr)); + } +- spin_unlock_irqrestore(&gic_lock, flags); ++ raw_spin_unlock_irqrestore(&gic_lock, flags); + } + + static void gic_all_vpes_irq_cpu_online(void) +@@ -394,19 +394,21 @@ static void gic_all_vpes_irq_cpu_online(void) + unsigned long flags; + int i; + +- spin_lock_irqsave(&gic_lock, flags); ++ raw_spin_lock_irqsave(&gic_lock, flags); + + for (i = 0; i < ARRAY_SIZE(local_intrs); i++) { + unsigned int intr = local_intrs[i]; + struct gic_all_vpes_chip_data *cd; + ++ if (!gic_local_irq_is_routable(intr)) ++ continue; + cd = &gic_all_vpes_chip_data[intr]; + write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map); + if (cd->mask) + write_gic_vl_smask(BIT(intr)); + } + +- spin_unlock_irqrestore(&gic_lock, flags); ++ raw_spin_unlock_irqrestore(&gic_lock, flags); + } + + static struct irq_chip gic_all_vpes_local_irq_controller = { +@@ -436,11 +438,11 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq, + + data = irq_get_irq_data(virq); + +- spin_lock_irqsave(&gic_lock, flags); ++ raw_spin_lock_irqsave(&gic_lock, flags); + write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin); + write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu))); + irq_data_update_effective_affinity(data, cpumask_of(cpu)); +- spin_unlock_irqrestore(&gic_lock, flags); ++ raw_spin_unlock_irqrestore(&gic_lock, flags); + + return 0; + } +@@ 
-535,12 +537,12 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq, + if (!gic_local_irq_is_routable(intr)) + return -EPERM; + +- spin_lock_irqsave(&gic_lock, flags); ++ raw_spin_lock_irqsave(&gic_lock, flags); + for_each_online_cpu(cpu) { + write_gic_vl_other(mips_cm_vp_id(cpu)); + write_gic_vo_map(mips_gic_vx_map_reg(intr), map); + } +- spin_unlock_irqrestore(&gic_lock, flags); ++ raw_spin_unlock_irqrestore(&gic_lock, flags); + + return 0; + } +diff --git a/drivers/media/radio/radio-shark.c b/drivers/media/radio/radio-shark.c +index 8230da828d0ee..127a3be0e0f07 100644 +--- a/drivers/media/radio/radio-shark.c ++++ b/drivers/media/radio/radio-shark.c +@@ -316,6 +316,16 @@ static int usb_shark_probe(struct usb_interface *intf, + { + struct shark_device *shark; + int retval = -ENOMEM; ++ static const u8 ep_addresses[] = { ++ SHARK_IN_EP | USB_DIR_IN, ++ SHARK_OUT_EP | USB_DIR_OUT, ++ 0}; ++ ++ /* Are the expected endpoints present? */ ++ if (!usb_check_int_endpoints(intf, ep_addresses)) { ++ dev_err(&intf->dev, "Invalid radioSHARK device\n"); ++ return -EINVAL; ++ } + + shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL); + if (!shark) +diff --git a/drivers/media/radio/radio-shark2.c b/drivers/media/radio/radio-shark2.c +index d150f12382c60..f1c5c0a6a335c 100644 +--- a/drivers/media/radio/radio-shark2.c ++++ b/drivers/media/radio/radio-shark2.c +@@ -282,6 +282,16 @@ static int usb_shark_probe(struct usb_interface *intf, + { + struct shark_device *shark; + int retval = -ENOMEM; ++ static const u8 ep_addresses[] = { ++ SHARK_IN_EP | USB_DIR_IN, ++ SHARK_OUT_EP | USB_DIR_OUT, ++ 0}; ++ ++ /* Are the expected endpoints present? */ ++ if (!usb_check_int_endpoints(intf, ep_addresses)) { ++ dev_err(&intf->dev, "Invalid radioSHARK2 device\n"); ++ return -EINVAL; ++ } + + shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL); + if (!shark) +diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c +index 672ab90c4b2d9..0ff294f074659 100644 +--- a/drivers/mmc/core/block.c ++++ b/drivers/mmc/core/block.c +@@ -266,6 +266,7 @@ static ssize_t power_ro_lock_store(struct device *dev, + goto out_put; + } + req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_BOOT_WP; ++ req_to_mmc_queue_req(req)->drv_op_result = -EIO; + blk_execute_rq(req, false); + ret = req_to_mmc_queue_req(req)->drv_op_result; + blk_mq_free_request(req); +@@ -653,6 +654,7 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md, + idatas[0] = idata; + req_to_mmc_queue_req(req)->drv_op = + rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL; ++ req_to_mmc_queue_req(req)->drv_op_result = -EIO; + req_to_mmc_queue_req(req)->drv_op_data = idatas; + req_to_mmc_queue_req(req)->ioc_count = 1; + blk_execute_rq(req, false); +@@ -724,6 +726,7 @@ static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md, + } + req_to_mmc_queue_req(req)->drv_op = + rpmb ? 
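Editorial note on the irq-mips-gic hunks above: gic_lock is converted from spinlock_t to raw_spinlock_t because it is taken from irqchip callbacks that must not sleep, and on PREEMPT_RT an ordinary spinlock_t becomes a sleeping lock. A minimal sketch of the raw-spinlock API shape, with a hypothetical lock and handler (the register writes are placeholders):

/* Sketch only: raw_spin_lock_irqsave() never sleeps, even on PREEMPT_RT. */
#include <linux/spinlock.h>
#include <linux/printk.h>

static DEFINE_RAW_SPINLOCK(hw_lock);

static void route_irq_to_cpu(unsigned int irq, unsigned int cpu)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&hw_lock, flags);
	/* hardware route/mask registers would be written here */
	pr_debug("routing irq %u to cpu %u\n", irq, cpu);
	raw_spin_unlock_irqrestore(&hw_lock, flags);
}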
MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL; ++ req_to_mmc_queue_req(req)->drv_op_result = -EIO; + req_to_mmc_queue_req(req)->drv_op_data = idata; + req_to_mmc_queue_req(req)->ioc_count = n; + blk_execute_rq(req, false); +@@ -2808,6 +2811,7 @@ static int mmc_dbg_card_status_get(void *data, u64 *val) + if (IS_ERR(req)) + return PTR_ERR(req); + req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS; ++ req_to_mmc_queue_req(req)->drv_op_result = -EIO; + blk_execute_rq(req, false); + ret = req_to_mmc_queue_req(req)->drv_op_result; + if (ret >= 0) { +@@ -2846,6 +2850,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp) + goto out_free; + } + req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_EXT_CSD; ++ req_to_mmc_queue_req(req)->drv_op_result = -EIO; + req_to_mmc_queue_req(req)->drv_op_data = &ext_csd; + blk_execute_rq(req, false); + err = req_to_mmc_queue_req(req)->drv_op_result; +diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c +index 58f042fdd4f43..03fe21a89021d 100644 +--- a/drivers/mmc/host/sdhci-esdhc-imx.c ++++ b/drivers/mmc/host/sdhci-esdhc-imx.c +@@ -1634,6 +1634,10 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev, + if (ret) + return ret; + ++ /* HS400/HS400ES require 8 bit bus */ ++ if (!(host->mmc->caps & MMC_CAP_8_BIT_DATA)) ++ host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES); ++ + if (mmc_gpio_get_cd(host->mmc) >= 0) + host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION; + +@@ -1724,10 +1728,6 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev) + host->mmc_host_ops.init_card = usdhc_init_card; + } + +- err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data); +- if (err) +- goto disable_ahb_clk; +- + if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING) + sdhci_esdhc_ops.platform_execute_tuning = + esdhc_executing_tuning; +@@ -1735,15 +1735,13 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev) + if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536) + host->quirks |= SDHCI_QUIRK_BROKEN_ADMA; + +- if (host->mmc->caps & MMC_CAP_8_BIT_DATA && +- imx_data->socdata->flags & ESDHC_FLAG_HS400) ++ if (imx_data->socdata->flags & ESDHC_FLAG_HS400) + host->mmc->caps2 |= MMC_CAP2_HS400; + + if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23) + host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN; + +- if (host->mmc->caps & MMC_CAP_8_BIT_DATA && +- imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) { ++ if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) { + host->mmc->caps2 |= MMC_CAP2_HS400_ES; + host->mmc_host_ops.hs400_enhanced_strobe = + esdhc_hs400_enhanced_strobe; +@@ -1765,6 +1763,10 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev) + goto disable_ahb_clk; + } + ++ err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data); ++ if (err) ++ goto disable_ahb_clk; ++ + sdhci_esdhc_imx_hwinit(host); + + err = sdhci_add_host(host); +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index 7a7d584f378a5..806d33d9f7124 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -3924,7 +3924,11 @@ static int bond_slave_netdev_event(unsigned long event, + unblock_netpoll_tx(); + break; + case NETDEV_FEAT_CHANGE: +- bond_compute_features(bond); ++ if (!bond->notifier_ctx) { ++ bond->notifier_ctx = true; ++ bond_compute_features(bond); ++ bond->notifier_ctx = false; ++ } + break; + case NETDEV_RESEND_IGMP: + /* Propagate to master device */ +@@ -6283,6 +6287,8 @@ static int bond_init(struct net_device *bond_dev) + if (!bond->wq) + 
return -ENOMEM; + ++ bond->notifier_ctx = false; ++ + spin_lock_init(&bond->stats_lock); + netdev_lockdep_set_classes(bond_dev); + +diff --git a/drivers/net/ethernet/3com/3c589_cs.c b/drivers/net/ethernet/3com/3c589_cs.c +index 82f94b1635bf8..5267e9dcd87ef 100644 +--- a/drivers/net/ethernet/3com/3c589_cs.c ++++ b/drivers/net/ethernet/3com/3c589_cs.c +@@ -195,6 +195,7 @@ static int tc589_probe(struct pcmcia_device *link) + { + struct el3_private *lp; + struct net_device *dev; ++ int ret; + + dev_dbg(&link->dev, "3c589_attach()\n"); + +@@ -218,7 +219,15 @@ static int tc589_probe(struct pcmcia_device *link) + + dev->ethtool_ops = &netdev_ethtool_ops; + +- return tc589_config(link); ++ ret = tc589_config(link); ++ if (ret) ++ goto err_free_netdev; ++ ++ return 0; ++ ++err_free_netdev: ++ free_netdev(dev); ++ return ret; + } + + static void tc589_detach(struct pcmcia_device *link) +diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +index 7045fedfd73a0..7af223b0a37f5 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +@@ -652,9 +652,7 @@ static void otx2_sqe_add_ext(struct otx2_nic *pfvf, struct otx2_snd_queue *sq, + htons(ext->lso_sb - skb_network_offset(skb)); + } else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) { + ext->lso_format = pfvf->hw.lso_tsov6_idx; +- +- ipv6_hdr(skb)->payload_len = +- htons(ext->lso_sb - skb_network_offset(skb)); ++ ipv6_hdr(skb)->payload_len = htons(tcp_hdrlen(skb)); + } else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) { + __be16 l3_proto = vlan_get_protocol(skb); + struct udphdr *udph = udp_hdr(skb); +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index c9fb1d7084d57..55d9a2c421a19 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -3272,18 +3272,14 @@ static int mtk_open(struct net_device *dev) + eth->dsa_meta[i] = md_dst; + } + } else { +- /* Hardware special tag parsing needs to be disabled if at least +- * one MAC does not use DSA. ++ /* Hardware DSA untagging and VLAN RX offloading need to be ++ * disabled if at least one MAC does not use DSA. 
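Editorial note on the bonding hunks above: bond_compute_features() is now guarded by a notifier_ctx flag so that a NETDEV_FEAT_CHANGE event generated while recomputing features cannot recurse back into the same path (the team driver receives the identical fix further down in this patch). A sketch of that re-entrancy guard with a hypothetical aggregate structure and helper:

/* Sketch only: boolean re-entrancy guard in a netdev notifier. */
#include <linux/netdevice.h>
#include <linux/notifier.h>

struct agg {
	bool notifier_ctx;
	/* ... */
};

static void agg_compute_features(struct agg *agg)
{
	/* May itself trigger another NETDEV_FEAT_CHANGE notification
	 * for the upper device, re-entering the notifier chain.
	 */
}

static int agg_netdev_event(struct agg *agg, unsigned long event)
{
	switch (event) {
	case NETDEV_FEAT_CHANGE:
		if (!agg->notifier_ctx) {	/* break the recursion */
			agg->notifier_ctx = true;
			agg_compute_features(agg);
			agg->notifier_ctx = false;
		}
		break;
	}
	return NOTIFY_DONE;
}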
+ */ + u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL); + + val &= ~MTK_CDMP_STAG_EN; + mtk_w32(eth, val, MTK_CDMP_IG_CTRL); + +- val = mtk_r32(eth, MTK_CDMQ_IG_CTRL); +- val &= ~MTK_CDMQ_STAG_EN; +- mtk_w32(eth, val, MTK_CDMQ_IG_CTRL); +- + mtk_w32(eth, 0, MTK_CDMP_EG_CTRL); + } + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +index b00e33ed05e91..4d6a94ab1f414 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +@@ -1920,9 +1920,10 @@ static void mlx5_cmd_err_trace(struct mlx5_core_dev *dev, u16 opcode, u16 op_mod + static void cmd_status_log(struct mlx5_core_dev *dev, u16 opcode, u8 status, + u32 syndrome, int err) + { ++ const char *namep = mlx5_command_str(opcode); + struct mlx5_cmd_stats *stats; + +- if (!err) ++ if (!err || !(strcmp(namep, "unknown command opcode"))) + return; + + stats = &dev->cmd.stats[opcode]; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c +index eb5aeba3addf4..8ba606a470c8d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c +@@ -175,6 +175,8 @@ static bool mlx5e_ptp_poll_ts_cq(struct mlx5e_cq *cq, int budget) + /* ensure cq space is freed before enabling more cqes */ + wmb(); + ++ mlx5e_txqsq_wake(&ptpsq->txqsq); ++ + return work_done == budget; + } + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c +index 780224fd67a1d..fbb392d54fa51 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c +@@ -1338,11 +1338,13 @@ static void mlx5e_invalidate_encap(struct mlx5e_priv *priv, + struct mlx5e_tc_flow *flow; + + list_for_each_entry(flow, encap_flows, tmp_list) { +- struct mlx5_flow_attr *attr = flow->attr; + struct mlx5_esw_flow_attr *esw_attr; ++ struct mlx5_flow_attr *attr; + + if (!mlx5e_is_offloaded_flow(flow)) + continue; ++ ++ attr = mlx5e_tc_get_encap_attr(flow); + esw_attr = attr->esw_attr; + + if (flow_flag_test(flow, SLOW)) +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +index b9c2f67d37941..23aa18f050555 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +@@ -182,6 +182,8 @@ static inline u16 mlx5e_txqsq_get_next_pi(struct mlx5e_txqsq *sq, u16 size) + return pi; + } + ++void mlx5e_txqsq_wake(struct mlx5e_txqsq *sq); ++ + static inline u16 mlx5e_shampo_get_cqe_header_index(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe) + { + return be16_to_cpu(cqe->shampo.header_entry_index) & (rq->mpwqe.shampo->hd_per_wq - 1); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +index 87a2850b32d09..2b1094e5b0c9d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +@@ -1692,11 +1692,9 @@ bool mlx5e_tc_is_vf_tunnel(struct net_device *out_dev, struct net_device *route_ + int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *route_dev, u16 *vport) + { + struct mlx5e_priv *out_priv, *route_priv; +- struct mlx5_devcom *devcom = NULL; + struct mlx5_core_dev *route_mdev; + struct mlx5_eswitch *esw; + u16 vhca_id; +- int err; + + out_priv = netdev_priv(out_dev); + esw = 
out_priv->mdev->priv.eswitch; +@@ -1705,6 +1703,9 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro + + vhca_id = MLX5_CAP_GEN(route_mdev, vhca_id); + if (mlx5_lag_is_active(out_priv->mdev)) { ++ struct mlx5_devcom *devcom; ++ int err; ++ + /* In lag case we may get devices from different eswitch instances. + * If we failed to get vport num, it means, mostly, that we on the wrong + * eswitch. +@@ -1713,16 +1714,16 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro + if (err != -ENOENT) + return err; + ++ rcu_read_lock(); + devcom = out_priv->mdev->priv.devcom; +- esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); +- if (!esw) +- return -ENODEV; ++ esw = mlx5_devcom_get_peer_data_rcu(devcom, MLX5_DEVCOM_ESW_OFFLOADS); ++ err = esw ? mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport) : -ENODEV; ++ rcu_read_unlock(); ++ ++ return err; + } + +- err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport); +- if (devcom) +- mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS); +- return err; ++ return mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport); + } + + static int +@@ -5448,6 +5449,8 @@ int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv) + goto err_action_counter; + } + ++ mlx5_esw_offloads_devcom_init(esw); ++ + return 0; + + err_action_counter: +@@ -5476,7 +5479,7 @@ void mlx5e_tc_esw_cleanup(struct mlx5_rep_uplink_priv *uplink_priv) + priv = netdev_priv(rpriv->netdev); + esw = priv->mdev->priv.eswitch; + +- mlx5e_tc_clean_fdb_peer_flows(esw); ++ mlx5_esw_offloads_devcom_cleanup(esw); + + mlx5e_tc_tun_cleanup(uplink_priv->encap); + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +index df5e780e8e6a6..c7eb6b238c2ba 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +@@ -762,6 +762,17 @@ static void mlx5e_tx_wi_consume_fifo_skbs(struct mlx5e_txqsq *sq, struct mlx5e_t + } + } + ++void mlx5e_txqsq_wake(struct mlx5e_txqsq *sq) ++{ ++ if (netif_tx_queue_stopped(sq->txq) && ++ mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) && ++ mlx5e_ptpsq_fifo_has_room(sq) && ++ !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) { ++ netif_tx_wake_queue(sq->txq); ++ sq->stats->wake++; ++ } ++} ++ + bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget) + { + struct mlx5e_sq_stats *stats; +@@ -861,13 +872,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget) + + netdev_tx_completed_queue(sq->txq, npkts, nbytes); + +- if (netif_tx_queue_stopped(sq->txq) && +- mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) && +- mlx5e_ptpsq_fifo_has_room(sq) && +- !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) { +- netif_tx_wake_queue(sq->txq); +- stats->wake++; +- } ++ mlx5e_txqsq_wake(sq); + + return (i == MLX5E_TX_CQ_POLL_BUDGET); + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c +index 9a458a5d98539..44547b22a536f 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c +@@ -161,20 +161,22 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget) + } + } + ++ /* budget=0 means we may be in IRQ context, do as little as possible */ ++ if (unlikely(!budget)) ++ goto out; ++ + busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq); + + if (c->xdp) + busy |= mlx5e_poll_xdpsq_cq(&c->rq_xdpsq.cq); + +- if (likely(budget)) { /* 
budget=0 means: don't poll rx rings */ +- if (xsk_open) +- work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget); ++ if (xsk_open) ++ work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget); + +- if (likely(budget - work_done)) +- work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done); ++ if (likely(budget - work_done)) ++ work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done); + +- busy |= work_done == budget; +- } ++ busy |= work_done == budget; + + mlx5e_poll_ico_cq(&c->icosq.cq); + if (mlx5e_poll_ico_cq(&c->async_icosq.cq)) +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +index 9d5a5756a15a9..c8c12d1672f99 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h +@@ -371,6 +371,8 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs); + void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf); + void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw); + void mlx5_eswitch_disable(struct mlx5_eswitch *esw); ++void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw); ++void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw); + int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw, + u16 vport, const u8 *mac); + int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, +@@ -768,6 +770,8 @@ static inline void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) {} + static inline int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) { return 0; } + static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf) {} + static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw) {} ++static inline void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw) {} ++static inline void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) {} + static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; } + static inline + int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, u16 vport, int link_state) { return 0; } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +index c99d208722f58..590df9bf39a56 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c +@@ -2781,7 +2781,7 @@ err_out: + return err; + } + +-static void esw_offloads_devcom_init(struct mlx5_eswitch *esw) ++void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw) + { + struct mlx5_devcom *devcom = esw->dev->priv.devcom; + +@@ -2804,7 +2804,7 @@ static void esw_offloads_devcom_init(struct mlx5_eswitch *esw) + ESW_OFFLOADS_DEVCOM_PAIR, esw); + } + +-static void esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) ++void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) + { + struct mlx5_devcom *devcom = esw->dev->priv.devcom; + +@@ -3274,8 +3274,6 @@ int esw_offloads_enable(struct mlx5_eswitch *esw) + if (err) + goto err_vports; + +- esw_offloads_devcom_init(esw); +- + return 0; + + err_vports: +@@ -3316,7 +3314,6 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw, + + void esw_offloads_disable(struct mlx5_eswitch *esw) + { +- esw_offloads_devcom_cleanup(esw); + mlx5_eswitch_disable_pf_vf_vports(esw); + esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK); + esw_set_passing_vport_metadata(esw, false); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c 
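Editorial note on the en_txrx.c hunks above: the mlx5e NAPI poll now honours the budget==0 convention up front. A zero budget means the poll may be running from hard-IRQ context (netpoll), so only TX completion work is done and XDP/RX processing is skipped. A sketch of that convention with hypothetical helpers (my_clean_tx, my_poll_rx, struct my_channel):

/* Sketch only: NAPI poll that treats budget==0 as "TX cleanup only". */
#include <linux/netdevice.h>

struct my_channel {
	struct napi_struct napi;
	/* ... queues ... */
};

static bool my_clean_tx(struct my_channel *ch)
{
	/* reap TX completions; cheap and safe in IRQ context */
	return false;
}

static int my_poll_rx(struct my_channel *ch, int budget)
{
	/* process up to 'budget' RX descriptors */
	return 0;
}

static int my_napi_poll(struct napi_struct *napi, int budget)
{
	struct my_channel *ch = container_of(napi, struct my_channel, napi);
	int work_done = 0;
	bool busy;

	busy = my_clean_tx(ch);

	if (unlikely(!budget))		/* possibly IRQ context: stop here */
		return 0;

	work_done = my_poll_rx(ch, budget);
	busy |= (work_done == budget);

	if (!busy && napi_complete_done(napi, work_done)) {
		/* re-arm device interrupts here */
	}
	return busy ? budget : work_done;
}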
b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c +index adefde3ea9410..b7d779d08d837 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c +@@ -3,6 +3,7 @@ + + #include + #include "lib/devcom.h" ++#include "mlx5_core.h" + + static LIST_HEAD(devcom_list); + +@@ -13,7 +14,7 @@ static LIST_HEAD(devcom_list); + + struct mlx5_devcom_component { + struct { +- void *data; ++ void __rcu *data; + } device[MLX5_DEVCOM_PORTS_SUPPORTED]; + + mlx5_devcom_event_handler_t handler; +@@ -77,6 +78,7 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev) + if (MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_DEVCOM_PORTS_SUPPORTED) + return NULL; + ++ mlx5_dev_list_lock(); + sguid0 = mlx5_query_nic_system_image_guid(dev); + list_for_each_entry(iter, &devcom_list, list) { + struct mlx5_core_dev *tmp_dev = NULL; +@@ -102,8 +104,10 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev) + + if (!priv) { + priv = mlx5_devcom_list_alloc(); +- if (!priv) +- return ERR_PTR(-ENOMEM); ++ if (!priv) { ++ devcom = ERR_PTR(-ENOMEM); ++ goto out; ++ } + + idx = 0; + new_priv = true; +@@ -112,13 +116,16 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev) + priv->devs[idx] = dev; + devcom = mlx5_devcom_alloc(priv, idx); + if (!devcom) { +- kfree(priv); +- return ERR_PTR(-ENOMEM); ++ if (new_priv) ++ kfree(priv); ++ devcom = ERR_PTR(-ENOMEM); ++ goto out; + } + + if (new_priv) + list_add(&priv->list, &devcom_list); +- ++out: ++ mlx5_dev_list_unlock(); + return devcom; + } + +@@ -131,6 +138,7 @@ void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom) + if (IS_ERR_OR_NULL(devcom)) + return; + ++ mlx5_dev_list_lock(); + priv = devcom->priv; + priv->devs[devcom->idx] = NULL; + +@@ -141,10 +149,12 @@ void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom) + break; + + if (i != MLX5_DEVCOM_PORTS_SUPPORTED) +- return; ++ goto out; + + list_del(&priv->list); + kfree(priv); ++out: ++ mlx5_dev_list_unlock(); + } + + void mlx5_devcom_register_component(struct mlx5_devcom *devcom, +@@ -162,7 +172,7 @@ void mlx5_devcom_register_component(struct mlx5_devcom *devcom, + comp = &devcom->priv->components[id]; + down_write(&comp->sem); + comp->handler = handler; +- comp->device[devcom->idx].data = data; ++ rcu_assign_pointer(comp->device[devcom->idx].data, data); + up_write(&comp->sem); + } + +@@ -176,8 +186,9 @@ void mlx5_devcom_unregister_component(struct mlx5_devcom *devcom, + + comp = &devcom->priv->components[id]; + down_write(&comp->sem); +- comp->device[devcom->idx].data = NULL; ++ RCU_INIT_POINTER(comp->device[devcom->idx].data, NULL); + up_write(&comp->sem); ++ synchronize_rcu(); + } + + int mlx5_devcom_send_event(struct mlx5_devcom *devcom, +@@ -193,12 +204,15 @@ int mlx5_devcom_send_event(struct mlx5_devcom *devcom, + + comp = &devcom->priv->components[id]; + down_write(&comp->sem); +- for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) +- if (i != devcom->idx && comp->device[i].data) { +- err = comp->handler(event, comp->device[i].data, +- event_data); ++ for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) { ++ void *data = rcu_dereference_protected(comp->device[i].data, ++ lockdep_is_held(&comp->sem)); ++ ++ if (i != devcom->idx && data) { ++ err = comp->handler(event, data, event_data); + break; + } ++ } + + up_write(&comp->sem); + return err; +@@ -213,7 +227,7 @@ void mlx5_devcom_set_paired(struct mlx5_devcom *devcom, + comp = &devcom->priv->components[id]; + 
WARN_ON(!rwsem_is_locked(&comp->sem)); + +- comp->paired = paired; ++ WRITE_ONCE(comp->paired, paired); + } + + bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom, +@@ -222,7 +236,7 @@ bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom, + if (IS_ERR_OR_NULL(devcom)) + return false; + +- return devcom->priv->components[id].paired; ++ return READ_ONCE(devcom->priv->components[id].paired); + } + + void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom, +@@ -236,7 +250,7 @@ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom, + + comp = &devcom->priv->components[id]; + down_read(&comp->sem); +- if (!comp->paired) { ++ if (!READ_ONCE(comp->paired)) { + up_read(&comp->sem); + return NULL; + } +@@ -245,7 +259,29 @@ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom, + if (i != devcom->idx) + break; + +- return comp->device[i].data; ++ return rcu_dereference_protected(comp->device[i].data, lockdep_is_held(&comp->sem)); ++} ++ ++void *mlx5_devcom_get_peer_data_rcu(struct mlx5_devcom *devcom, enum mlx5_devcom_components id) ++{ ++ struct mlx5_devcom_component *comp; ++ int i; ++ ++ if (IS_ERR_OR_NULL(devcom)) ++ return NULL; ++ ++ for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) ++ if (i != devcom->idx) ++ break; ++ ++ comp = &devcom->priv->components[id]; ++ /* This can change concurrently, however 'data' pointer will remain ++ * valid for the duration of RCU read section. ++ */ ++ if (!READ_ONCE(comp->paired)) ++ return NULL; ++ ++ return rcu_dereference(comp->device[i].data); + } + + void mlx5_devcom_release_peer_data(struct mlx5_devcom *devcom, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h +index 94313c18bb647..9a496f4722dad 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h +@@ -41,6 +41,7 @@ bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom, + + void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id); ++void *mlx5_devcom_get_peer_data_rcu(struct mlx5_devcom *devcom, enum mlx5_devcom_components id); + void mlx5_devcom_release_peer_data(struct mlx5_devcom *devcom, + enum mlx5_devcom_components id); + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c +index ad90bf125e94f..62327b52f1acf 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c +@@ -1041,7 +1041,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev) + + dev->dm = mlx5_dm_create(dev); + if (IS_ERR(dev->dm)) +- mlx5_core_warn(dev, "Failed to init device memory%d\n", err); ++ mlx5_core_warn(dev, "Failed to init device memory %ld\n", PTR_ERR(dev->dm)); + + dev->tracer = mlx5_fw_tracer_create(dev); + dev->hv_vhca = mlx5_hv_vhca_create(dev); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c +index 07b6a6dcb92f0..7fbad87f475df 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_cmd.c +@@ -117,6 +117,8 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev, + caps->gvmi = MLX5_CAP_GEN(mdev, vhca_id); + caps->flex_protocols = MLX5_CAP_GEN(mdev, flex_parser_protocols); + caps->sw_format_ver = MLX5_CAP_GEN(mdev, steering_format_version); ++ caps->roce_caps.fl_rc_qp_when_roce_disabled = ++ MLX5_CAP_GEN(mdev, fl_rc_qp_when_roce_disabled); + + if 
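Editorial note on the lib/devcom.c hunks above: the per-port data pointer becomes an __rcu pointer so that mlx5e_tc_query_route_vport() can read the peer eswitch locklessly through the new mlx5_devcom_get_peer_data_rcu(). A minimal sketch of the general RCU publish/read/unpublish pattern follows; 'peer', peer_publish() and the other names are hypothetical, and the real code additionally serializes writers and slow-path readers with the component semaphore.

/* Sketch only: RCU-protected pointer, lock-free readers. */
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct peer_data {
	int vport;
};

static struct peer_data __rcu *peer;

static void peer_publish(struct peer_data *p)
{
	rcu_assign_pointer(peer, p);	/* publish with release semantics */
}

static void peer_unpublish(void)
{
	struct peer_data *old;

	old = rcu_replace_pointer(peer, NULL, true);
	synchronize_rcu();		/* wait out existing readers */
	kfree(old);
}

static int peer_read_vport(void)
{
	struct peer_data *p;
	int vport = -1;

	rcu_read_lock();
	p = rcu_dereference(peer);	/* valid until rcu_read_unlock() */
	if (p)
		vport = p->vport;
	rcu_read_unlock();

	return vport;
}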
(MLX5_CAP_GEN(mdev, roce)) { + err = dr_cmd_query_nic_vport_roce_en(mdev, 0, &roce_en); +@@ -124,7 +126,7 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev, + return err; + + caps->roce_caps.roce_en = roce_en; +- caps->roce_caps.fl_rc_qp_when_roce_disabled = ++ caps->roce_caps.fl_rc_qp_when_roce_disabled |= + MLX5_CAP_ROCE(mdev, fl_rc_qp_when_roce_disabled); + caps->roce_caps.fl_rc_qp_when_roce_enabled = + MLX5_CAP_ROCE(mdev, fl_rc_qp_when_roce_enabled); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c +index 1e15f605df6ef..7d7973090a6b9 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste.c +@@ -15,7 +15,8 @@ static u32 dr_ste_crc32_calc(const void *input_data, size_t length) + { + u32 crc = crc32(0, input_data, length); + +- return (__force u32)htonl(crc); ++ return (__force u32)((crc >> 24) & 0xff) | ((crc << 8) & 0xff0000) | ++ ((crc >> 8) & 0xff00) | ((crc << 24) & 0xff000000); + } + + bool mlx5dr_ste_supp_ttl_cs_recalc(struct mlx5dr_cmd_caps *caps) +diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c +index 685e8cd7658c9..d2e5c9b7ec974 100644 +--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c ++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c +@@ -1013,6 +1013,16 @@ static int lan966x_reset_switch(struct lan966x *lan966x) + + reset_control_reset(switch_reset); + ++ /* Don't reinitialize the switch core, if it is already initialized. In ++ * case it is initialized twice, some pointers inside the queue system ++ * in HW will get corrupted and then after a while the queue system gets ++ * full and no traffic is passing through the switch. The issue is seen ++ * when loading and unloading the driver and sending traffic through the ++ * switch. 
++ */ ++ if (lan_rd(lan966x, SYS_RESET_CFG) & SYS_RESET_CFG_CORE_ENA) ++ return 0; ++ + lan_wr(SYS_RESET_CFG_CORE_ENA_SET(0), lan966x, SYS_RESET_CFG); + lan_wr(SYS_RAM_INIT_RAM_INIT_SET(1), lan966x, SYS_RAM_INIT); + ret = readx_poll_timeout(lan966x_ram_init, lan966x, +diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c +index 0605d1ee490dd..7a549b834e970 100644 +--- a/drivers/net/ethernet/nvidia/forcedeth.c ++++ b/drivers/net/ethernet/nvidia/forcedeth.c +@@ -6138,6 +6138,7 @@ static int nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id) + return 0; + + out_error: ++ nv_mgmt_release_sema(dev); + if (phystate_orig) + writel(phystate|NVREG_ADAPTCTL_RUNNING, base + NvRegAdapterControl); + out_freering: +diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c +index 62bf99e45af16..bd81a4b041e52 100644 +--- a/drivers/net/phy/mscc/mscc_main.c ++++ b/drivers/net/phy/mscc/mscc_main.c +@@ -2656,6 +2656,7 @@ static struct phy_driver vsc85xx_driver[] = { + module_phy_driver(vsc85xx_driver); + + static struct mdio_device_id __maybe_unused vsc85xx_tbl[] = { ++ { PHY_ID_VSC8502, 0xfffffff0, }, + { PHY_ID_VSC8504, 0xfffffff0, }, + { PHY_ID_VSC8514, 0xfffffff0, }, + { PHY_ID_VSC8530, 0xfffffff0, }, +diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c +index d10606f257c43..555b0b1e9a789 100644 +--- a/drivers/net/team/team.c ++++ b/drivers/net/team/team.c +@@ -1629,6 +1629,7 @@ static int team_init(struct net_device *dev) + + team->dev = dev; + team_set_no_mode(team); ++ team->notifier_ctx = false; + + team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats); + if (!team->pcpu_stats) +@@ -3022,7 +3023,11 @@ static int team_device_event(struct notifier_block *unused, + team_del_slave(port->team->dev, dev); + break; + case NETDEV_FEAT_CHANGE: +- team_compute_features(port->team); ++ if (!port->team->notifier_ctx) { ++ port->team->notifier_ctx = true; ++ team_compute_features(port->team); ++ port->team->notifier_ctx = false; ++ } + break; + case NETDEV_PRECHANGEMTU: + /* Forbid to change mtu of underlaying device */ +diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c +index 6ce8f4f0c70e8..db05622f1f703 100644 +--- a/drivers/net/usb/cdc_ncm.c ++++ b/drivers/net/usb/cdc_ncm.c +@@ -181,9 +181,12 @@ static u32 cdc_ncm_check_tx_max(struct usbnet *dev, u32 new_tx) + else + min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32); + +- max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize)); +- if (max == 0) ++ if (le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize) == 0) + max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */ ++ else ++ max = clamp_t(u32, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize), ++ USB_CDC_NCM_NTB_MIN_OUT_SIZE, ++ CDC_NCM_NTB_MAX_SIZE_TX); + + /* some devices set dwNtbOutMaxSize too low for the above default */ + min = min(min, max); +@@ -1244,6 +1247,9 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign) + * further. + */ + if (skb_out == NULL) { ++ /* If even the smallest allocation fails, abort. 
*/ ++ if (ctx->tx_curr_size == USB_CDC_NCM_NTB_MIN_OUT_SIZE) ++ goto alloc_failed; + ctx->tx_low_mem_max_cnt = min(ctx->tx_low_mem_max_cnt + 1, + (unsigned)CDC_NCM_LOW_MEM_MAX_CNT); + ctx->tx_low_mem_val = ctx->tx_low_mem_max_cnt; +@@ -1262,13 +1268,8 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign) + skb_out = alloc_skb(ctx->tx_curr_size, GFP_ATOMIC); + + /* No allocation possible so we will abort */ +- if (skb_out == NULL) { +- if (skb != NULL) { +- dev_kfree_skb_any(skb); +- dev->net->stats.tx_dropped++; +- } +- goto exit_no_skb; +- } ++ if (!skb_out) ++ goto alloc_failed; + ctx->tx_low_mem_val--; + } + if (ctx->is_ndp16) { +@@ -1461,6 +1462,11 @@ cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign) + + return skb_out; + ++alloc_failed: ++ if (skb) { ++ dev_kfree_skb_any(skb); ++ dev->net->stats.tx_dropped++; ++ } + exit_no_skb: + /* Start timer, if there is a remaining non-empty skb */ + if (ctx->tx_curr_skb != NULL && n > 0) +diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c +index 2e2a2b6eab09d..d0cafe813cdb4 100644 +--- a/drivers/net/wireless/realtek/rtw89/mac.c ++++ b/drivers/net/wireless/realtek/rtw89/mac.c +@@ -1425,6 +1425,8 @@ const struct rtw89_mac_size_set rtw89_mac_size = { + .wde_size4 = {RTW89_WDE_PG_64, 0, 4096,}, + /* PCIE 64 */ + .wde_size6 = {RTW89_WDE_PG_64, 512, 0,}, ++ /* 8852B PCIE SCC */ ++ .wde_size7 = {RTW89_WDE_PG_64, 510, 2,}, + /* DLFW */ + .wde_size9 = {RTW89_WDE_PG_64, 0, 1024,}, + /* 8852C DLFW */ +@@ -1449,6 +1451,8 @@ const struct rtw89_mac_size_set rtw89_mac_size = { + .wde_qt4 = {0, 0, 0, 0,}, + /* PCIE 64 */ + .wde_qt6 = {448, 48, 0, 16,}, ++ /* 8852B PCIE SCC */ ++ .wde_qt7 = {446, 48, 0, 16,}, + /* 8852C DLFW */ + .wde_qt17 = {0, 0, 0, 0,}, + /* 8852C PCIE SCC */ +diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h +index 8064d3953d7f2..85c02c1f36bd7 100644 +--- a/drivers/net/wireless/realtek/rtw89/mac.h ++++ b/drivers/net/wireless/realtek/rtw89/mac.h +@@ -791,6 +791,7 @@ struct rtw89_mac_size_set { + const struct rtw89_dle_size wde_size0; + const struct rtw89_dle_size wde_size4; + const struct rtw89_dle_size wde_size6; ++ const struct rtw89_dle_size wde_size7; + const struct rtw89_dle_size wde_size9; + const struct rtw89_dle_size wde_size18; + const struct rtw89_dle_size wde_size19; +@@ -803,6 +804,7 @@ struct rtw89_mac_size_set { + const struct rtw89_wde_quota wde_qt0; + const struct rtw89_wde_quota wde_qt4; + const struct rtw89_wde_quota wde_qt6; ++ const struct rtw89_wde_quota wde_qt7; + const struct rtw89_wde_quota wde_qt17; + const struct rtw89_wde_quota wde_qt18; + const struct rtw89_ple_quota ple_qt4; +diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c +index 45c374d025cbd..355e515364611 100644 +--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c ++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c +@@ -13,25 +13,25 @@ + #include "txrx.h" + + static const struct rtw89_hfc_ch_cfg rtw8852b_hfc_chcfg_pcie[] = { +- {5, 343, grp_0}, /* ACH 0 */ +- {5, 343, grp_0}, /* ACH 1 */ +- {5, 343, grp_0}, /* ACH 2 */ +- {5, 343, grp_0}, /* ACH 3 */ ++ {5, 341, grp_0}, /* ACH 0 */ ++ {5, 341, grp_0}, /* ACH 1 */ ++ {4, 342, grp_0}, /* ACH 2 */ ++ {4, 342, grp_0}, /* ACH 3 */ + {0, 0, grp_0}, /* ACH 4 */ + {0, 0, grp_0}, /* ACH 5 */ + {0, 0, grp_0}, /* ACH 6 */ + {0, 0, grp_0}, /* ACH 7 */ +- {4, 344, grp_0}, /* B0MGQ */ +- {4, 344, grp_0}, /* B0HIQ */ ++ {4, 
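Editorial note on the cdc_ncm hunks above: the driver stops trusting the device-advertised dwNtbOutMaxSize, clamping it into [USB_CDC_NCM_NTB_MIN_OUT_SIZE, CDC_NCM_NTB_MAX_SIZE_TX] (with a fallback to the maximum when the field is zero), and aborts transmission once even a minimum-sized allocation fails. A sketch of the defensive clamping with hypothetical bounds and field names:

/* Sketch only: clamp a device-reported size into a sane window. */
#include <linux/minmax.h>
#include <linux/types.h>

#define MY_TX_MIN	2048U	/* hypothetical floor */
#define MY_TX_MAX	32768U	/* hypothetical ceiling */

static u32 pick_tx_size(u32 reported)
{
	if (reported == 0)		/* device left the field unset */
		return MY_TX_MAX;

	/* Never trust the device blindly: keep it within [min, max]. */
	return clamp_t(u32, reported, MY_TX_MIN, MY_TX_MAX);
}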
342, grp_0}, /* B0MGQ */ ++ {4, 342, grp_0}, /* B0HIQ */ + {0, 0, grp_0}, /* B1MGQ */ + {0, 0, grp_0}, /* B1HIQ */ + {40, 0, 0} /* FWCMDQ */ + }; + + static const struct rtw89_hfc_pub_cfg rtw8852b_hfc_pubcfg_pcie = { +- 448, /* Group 0 */ ++ 446, /* Group 0 */ + 0, /* Group 1 */ +- 448, /* Public Max */ ++ 446, /* Public Max */ + 0 /* WP threshold */ + }; + +@@ -44,9 +44,9 @@ static const struct rtw89_hfc_param_ini rtw8852b_hfc_param_ini_pcie[] = { + }; + + static const struct rtw89_dle_mem rtw8852b_dle_mem_pcie[] = { +- [RTW89_QTA_SCC] = {RTW89_QTA_SCC, &rtw89_mac_size.wde_size6, +- &rtw89_mac_size.ple_size6, &rtw89_mac_size.wde_qt6, +- &rtw89_mac_size.wde_qt6, &rtw89_mac_size.ple_qt18, ++ [RTW89_QTA_SCC] = {RTW89_QTA_SCC, &rtw89_mac_size.wde_size7, ++ &rtw89_mac_size.ple_size6, &rtw89_mac_size.wde_qt7, ++ &rtw89_mac_size.wde_qt7, &rtw89_mac_size.ple_qt18, + &rtw89_mac_size.ple_qt58}, + [RTW89_QTA_DLFW] = {RTW89_QTA_DLFW, &rtw89_mac_size.wde_size9, + &rtw89_mac_size.ple_size8, &rtw89_mac_size.wde_qt4, +diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c +index c2c9b0d3244cb..be967d797c28e 100644 +--- a/drivers/platform/mellanox/mlxbf-pmc.c ++++ b/drivers/platform/mellanox/mlxbf-pmc.c +@@ -1348,9 +1348,8 @@ static int mlxbf_pmc_map_counters(struct device *dev) + + for (i = 0; i < pmc->total_blocks; ++i) { + if (strstr(pmc->block_name[i], "tile")) { +- ret = sscanf(pmc->block_name[i], "tile%d", &tile_num); +- if (ret < 0) +- return ret; ++ if (sscanf(pmc->block_name[i], "tile%d", &tile_num) != 1) ++ return -EINVAL; + + if (tile_num >= pmc->tile_count) + continue; +diff --git a/drivers/platform/x86/intel/ifs/load.c b/drivers/platform/x86/intel/ifs/load.c +index c5c24e6fdc436..132ec822ab55d 100644 +--- a/drivers/platform/x86/intel/ifs/load.c ++++ b/drivers/platform/x86/intel/ifs/load.c +@@ -208,7 +208,7 @@ static int scan_chunks_sanity_check(struct device *dev) + continue; + reinit_completion(&ifs_done); + local_work.dev = dev; +- INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks); ++ INIT_WORK_ONSTACK(&local_work.w, copy_hashes_authenticate_chunks); + schedule_work_on(cpu, &local_work.w); + wait_for_completion(&ifs_done); + if (ifsd->loading_error) { +diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +index 0954a04623edf..aa715d7d62375 100644 +--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c ++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c +@@ -295,14 +295,13 @@ struct isst_if_pkg_info { + static struct isst_if_cpu_info *isst_cpu_info; + static struct isst_if_pkg_info *isst_pkg_info; + +-#define ISST_MAX_PCI_DOMAINS 8 +- + static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn) + { + struct pci_dev *matched_pci_dev = NULL; + struct pci_dev *pci_dev = NULL; ++ struct pci_dev *_pci_dev = NULL; + int no_matches = 0, pkg_id; +- int i, bus_number; ++ int bus_number; + + if (bus_no < 0 || bus_no >= ISST_MAX_BUS_NUMBER || cpu < 0 || + cpu >= nr_cpu_ids || cpu >= num_possible_cpus()) +@@ -314,12 +313,11 @@ static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn + if (bus_number < 0) + return NULL; + +- for (i = 0; i < ISST_MAX_PCI_DOMAINS; ++i) { +- struct pci_dev *_pci_dev; ++ for_each_pci_dev(_pci_dev) { + int node; + +- _pci_dev = pci_get_domain_bus_and_slot(i, bus_number, PCI_DEVFN(dev, fn)); +- if (!_pci_dev) ++ if (_pci_dev->bus->number != bus_number || ++ _pci_dev->devfn != 
PCI_DEVFN(dev, fn)) + continue; + + ++no_matches; +diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c +index 05f4131784629..3be6f3b10ea42 100644 +--- a/drivers/power/supply/axp288_fuel_gauge.c ++++ b/drivers/power/supply/axp288_fuel_gauge.c +@@ -507,7 +507,7 @@ static void fuel_gauge_external_power_changed(struct power_supply *psy) + mutex_lock(&info->lock); + info->valid = 0; /* Force updating of the cached registers */ + mutex_unlock(&info->lock); +- power_supply_changed(info->bat); ++ power_supply_changed(psy); + } + + static struct power_supply_desc fuel_gauge_desc = { +diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c +index de67b985f0a91..dc33f00fcc068 100644 +--- a/drivers/power/supply/bq24190_charger.c ++++ b/drivers/power/supply/bq24190_charger.c +@@ -1262,6 +1262,7 @@ static void bq24190_input_current_limit_work(struct work_struct *work) + bq24190_charger_set_property(bdi->charger, + POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT, + &val); ++ power_supply_changed(bdi->charger); + } + + /* Sync the input-current-limit with our parent supply (if we have one) */ +diff --git a/drivers/power/supply/bq25890_charger.c b/drivers/power/supply/bq25890_charger.c +index bfe08d7bfaf30..75cf2449abd97 100644 +--- a/drivers/power/supply/bq25890_charger.c ++++ b/drivers/power/supply/bq25890_charger.c +@@ -750,7 +750,7 @@ static void bq25890_charger_external_power_changed(struct power_supply *psy) + if (bq->chip_version != BQ25892) + return; + +- ret = power_supply_get_property_from_supplier(bq->charger, ++ ret = power_supply_get_property_from_supplier(psy, + POWER_SUPPLY_PROP_USB_TYPE, + &val); + if (ret) +@@ -775,6 +775,7 @@ static void bq25890_charger_external_power_changed(struct power_supply *psy) + } + + bq25890_field_write(bq, F_IINLIM, input_current_limit); ++ power_supply_changed(psy); + } + + static int bq25890_get_chip_state(struct bq25890_device *bq, +@@ -1106,6 +1107,8 @@ static void bq25890_pump_express_work(struct work_struct *data) + dev_info(bq->dev, "Hi-voltage charging requested, input voltage is %d mV\n", + voltage); + ++ power_supply_changed(bq->charger); ++ + return; + error_print: + bq25890_field_write(bq, F_PUMPX_EN, 0); +diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c +index 5ff6f44fd47b2..929e813b9c443 100644 +--- a/drivers/power/supply/bq27xxx_battery.c ++++ b/drivers/power/supply/bq27xxx_battery.c +@@ -1761,60 +1761,6 @@ static int bq27xxx_battery_read_health(struct bq27xxx_device_info *di) + return POWER_SUPPLY_HEALTH_GOOD; + } + +-void bq27xxx_battery_update(struct bq27xxx_device_info *di) +-{ +- struct bq27xxx_reg_cache cache = {0, }; +- bool has_singe_flag = di->opts & BQ27XXX_O_ZERO; +- +- cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag); +- if ((cache.flags & 0xff) == 0xff) +- cache.flags = -1; /* read error */ +- if (cache.flags >= 0) { +- cache.temperature = bq27xxx_battery_read_temperature(di); +- if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR) +- cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE); +- if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR) +- cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP); +- if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR) +- cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF); +- +- cache.charge_full = bq27xxx_battery_read_fcc(di); +- cache.capacity = bq27xxx_battery_read_soc(di); +- if (di->regs[BQ27XXX_REG_AE] != 
INVALID_REG_ADDR) +- cache.energy = bq27xxx_battery_read_energy(di); +- di->cache.flags = cache.flags; +- cache.health = bq27xxx_battery_read_health(di); +- if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR) +- cache.cycle_count = bq27xxx_battery_read_cyct(di); +- +- /* We only have to read charge design full once */ +- if (di->charge_design_full <= 0) +- di->charge_design_full = bq27xxx_battery_read_dcap(di); +- } +- +- if ((di->cache.capacity != cache.capacity) || +- (di->cache.flags != cache.flags)) +- power_supply_changed(di->bat); +- +- if (memcmp(&di->cache, &cache, sizeof(cache)) != 0) +- di->cache = cache; +- +- di->last_update = jiffies; +-} +-EXPORT_SYMBOL_GPL(bq27xxx_battery_update); +- +-static void bq27xxx_battery_poll(struct work_struct *work) +-{ +- struct bq27xxx_device_info *di = +- container_of(work, struct bq27xxx_device_info, +- work.work); +- +- bq27xxx_battery_update(di); +- +- if (poll_interval > 0) +- schedule_delayed_work(&di->work, poll_interval * HZ); +-} +- + static bool bq27xxx_battery_is_full(struct bq27xxx_device_info *di, int flags) + { + if (di->opts & BQ27XXX_O_ZERO) +@@ -1833,7 +1779,8 @@ static bool bq27xxx_battery_is_full(struct bq27xxx_device_info *di, int flags) + static int bq27xxx_battery_current_and_status( + struct bq27xxx_device_info *di, + union power_supply_propval *val_curr, +- union power_supply_propval *val_status) ++ union power_supply_propval *val_status, ++ struct bq27xxx_reg_cache *cache) + { + bool single_flags = (di->opts & BQ27XXX_O_ZERO); + int curr; +@@ -1845,10 +1792,14 @@ static int bq27xxx_battery_current_and_status( + return curr; + } + +- flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, single_flags); +- if (flags < 0) { +- dev_err(di->dev, "error reading flags\n"); +- return flags; ++ if (cache) { ++ flags = cache->flags; ++ } else { ++ flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, single_flags); ++ if (flags < 0) { ++ dev_err(di->dev, "error reading flags\n"); ++ return flags; ++ } + } + + if (di->opts & BQ27XXX_O_ZERO) { +@@ -1883,6 +1834,78 @@ static int bq27xxx_battery_current_and_status( + return 0; + } + ++static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di) ++{ ++ union power_supply_propval status = di->last_status; ++ struct bq27xxx_reg_cache cache = {0, }; ++ bool has_singe_flag = di->opts & BQ27XXX_O_ZERO; ++ ++ cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag); ++ if ((cache.flags & 0xff) == 0xff) ++ cache.flags = -1; /* read error */ ++ if (cache.flags >= 0) { ++ cache.temperature = bq27xxx_battery_read_temperature(di); ++ if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR) ++ cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE); ++ if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR) ++ cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP); ++ if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR) ++ cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF); ++ ++ cache.charge_full = bq27xxx_battery_read_fcc(di); ++ cache.capacity = bq27xxx_battery_read_soc(di); ++ if (di->regs[BQ27XXX_REG_AE] != INVALID_REG_ADDR) ++ cache.energy = bq27xxx_battery_read_energy(di); ++ di->cache.flags = cache.flags; ++ cache.health = bq27xxx_battery_read_health(di); ++ if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR) ++ cache.cycle_count = bq27xxx_battery_read_cyct(di); ++ ++ /* ++ * On gauges with signed current reporting the current must be ++ * checked to detect charging <-> discharging status changes. 
++ */ ++ if (!(di->opts & BQ27XXX_O_ZERO)) ++ bq27xxx_battery_current_and_status(di, NULL, &status, &cache); ++ ++ /* We only have to read charge design full once */ ++ if (di->charge_design_full <= 0) ++ di->charge_design_full = bq27xxx_battery_read_dcap(di); ++ } ++ ++ if ((di->cache.capacity != cache.capacity) || ++ (di->cache.flags != cache.flags) || ++ (di->last_status.intval != status.intval)) { ++ di->last_status.intval = status.intval; ++ power_supply_changed(di->bat); ++ } ++ ++ if (memcmp(&di->cache, &cache, sizeof(cache)) != 0) ++ di->cache = cache; ++ ++ di->last_update = jiffies; ++ ++ if (!di->removed && poll_interval > 0) ++ mod_delayed_work(system_wq, &di->work, poll_interval * HZ); ++} ++ ++void bq27xxx_battery_update(struct bq27xxx_device_info *di) ++{ ++ mutex_lock(&di->lock); ++ bq27xxx_battery_update_unlocked(di); ++ mutex_unlock(&di->lock); ++} ++EXPORT_SYMBOL_GPL(bq27xxx_battery_update); ++ ++static void bq27xxx_battery_poll(struct work_struct *work) ++{ ++ struct bq27xxx_device_info *di = ++ container_of(work, struct bq27xxx_device_info, ++ work.work); ++ ++ bq27xxx_battery_update(di); ++} ++ + /* + * Get the average power in µW + * Return < 0 if something fails. +@@ -1985,10 +2008,8 @@ static int bq27xxx_battery_get_property(struct power_supply *psy, + struct bq27xxx_device_info *di = power_supply_get_drvdata(psy); + + mutex_lock(&di->lock); +- if (time_is_before_jiffies(di->last_update + 5 * HZ)) { +- cancel_delayed_work_sync(&di->work); +- bq27xxx_battery_poll(&di->work.work); +- } ++ if (time_is_before_jiffies(di->last_update + 5 * HZ)) ++ bq27xxx_battery_update_unlocked(di); + mutex_unlock(&di->lock); + + if (psp != POWER_SUPPLY_PROP_PRESENT && di->cache.flags < 0) +@@ -1996,7 +2017,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy, + + switch (psp) { + case POWER_SUPPLY_PROP_STATUS: +- ret = bq27xxx_battery_current_and_status(di, NULL, val); ++ ret = bq27xxx_battery_current_and_status(di, NULL, val, NULL); + break; + case POWER_SUPPLY_PROP_VOLTAGE_NOW: + ret = bq27xxx_battery_voltage(di, val); +@@ -2005,7 +2026,7 @@ static int bq27xxx_battery_get_property(struct power_supply *psy, + val->intval = di->cache.flags < 0 ? 0 : 1; + break; + case POWER_SUPPLY_PROP_CURRENT_NOW: +- ret = bq27xxx_battery_current_and_status(di, val, NULL); ++ ret = bq27xxx_battery_current_and_status(di, val, NULL, NULL); + break; + case POWER_SUPPLY_PROP_CAPACITY: + ret = bq27xxx_simple_value(di->cache.capacity, val); +@@ -2078,8 +2099,8 @@ static void bq27xxx_external_power_changed(struct power_supply *psy) + { + struct bq27xxx_device_info *di = power_supply_get_drvdata(psy); + +- cancel_delayed_work_sync(&di->work); +- schedule_delayed_work(&di->work, 0); ++ /* After charger plug in/out wait 0.5s for things to stabilize */ ++ mod_delayed_work(system_wq, &di->work, HZ / 2); + } + + int bq27xxx_battery_setup(struct bq27xxx_device_info *di) +@@ -2127,22 +2148,18 @@ EXPORT_SYMBOL_GPL(bq27xxx_battery_setup); + + void bq27xxx_battery_teardown(struct bq27xxx_device_info *di) + { +- /* +- * power_supply_unregister call bq27xxx_battery_get_property which +- * call bq27xxx_battery_poll. +- * Make sure that bq27xxx_battery_poll will not call +- * schedule_delayed_work again after unregister (which cause OOPS). 
+- */ +- poll_interval = 0; +- +- cancel_delayed_work_sync(&di->work); +- +- power_supply_unregister(di->bat); +- + mutex_lock(&bq27xxx_list_lock); + list_del(&di->list); + mutex_unlock(&bq27xxx_list_lock); + ++ /* Set removed to avoid bq27xxx_battery_update() re-queuing the work */ ++ mutex_lock(&di->lock); ++ di->removed = true; ++ mutex_unlock(&di->lock); ++ ++ cancel_delayed_work_sync(&di->work); ++ ++ power_supply_unregister(di->bat); + mutex_destroy(&di->lock); + } + EXPORT_SYMBOL_GPL(bq27xxx_battery_teardown); +diff --git a/drivers/power/supply/bq27xxx_battery_i2c.c b/drivers/power/supply/bq27xxx_battery_i2c.c +index f8768997333bc..6d3c748763390 100644 +--- a/drivers/power/supply/bq27xxx_battery_i2c.c ++++ b/drivers/power/supply/bq27xxx_battery_i2c.c +@@ -179,7 +179,7 @@ static int bq27xxx_battery_i2c_probe(struct i2c_client *client) + i2c_set_clientdata(client, di); + + if (client->irq) { +- ret = devm_request_threaded_irq(&client->dev, client->irq, ++ ret = request_threaded_irq(client->irq, + NULL, bq27xxx_battery_irq_handler_thread, + IRQF_ONESHOT, + di->name, di); +@@ -209,6 +209,7 @@ static void bq27xxx_battery_i2c_remove(struct i2c_client *client) + { + struct bq27xxx_device_info *di = i2c_get_clientdata(client); + ++ free_irq(client->irq, di); + bq27xxx_battery_teardown(di); + + mutex_lock(&battery_mutex); +diff --git a/drivers/power/supply/mt6360_charger.c b/drivers/power/supply/mt6360_charger.c +index 92e48e3a48536..1305cba61edd4 100644 +--- a/drivers/power/supply/mt6360_charger.c ++++ b/drivers/power/supply/mt6360_charger.c +@@ -796,7 +796,9 @@ static int mt6360_charger_probe(struct platform_device *pdev) + mci->vinovp = 6500000; + mutex_init(&mci->chgdet_lock); + platform_set_drvdata(pdev, mci); +- devm_work_autocancel(&pdev->dev, &mci->chrdet_work, mt6360_chrdet_work); ++ ret = devm_work_autocancel(&pdev->dev, &mci->chrdet_work, mt6360_chrdet_work); ++ if (ret) ++ return dev_err_probe(&pdev->dev, ret, "Failed to set delayed work\n"); + + ret = device_property_read_u32(&pdev->dev, "richtek,vinovp-microvolt", &mci->vinovp); + if (ret) +diff --git a/drivers/power/supply/power_supply_leds.c b/drivers/power/supply/power_supply_leds.c +index 702bf83f6e6d2..0674483279d77 100644 +--- a/drivers/power/supply/power_supply_leds.c ++++ b/drivers/power/supply/power_supply_leds.c +@@ -35,8 +35,9 @@ static void power_supply_update_bat_leds(struct power_supply *psy) + led_trigger_event(psy->charging_full_trig, LED_FULL); + led_trigger_event(psy->charging_trig, LED_OFF); + led_trigger_event(psy->full_trig, LED_FULL); +- led_trigger_event(psy->charging_blink_full_solid_trig, +- LED_FULL); ++ /* Going from blink to LED on requires a LED_OFF event to stop blink */ ++ led_trigger_event(psy->charging_blink_full_solid_trig, LED_OFF); ++ led_trigger_event(psy->charging_blink_full_solid_trig, LED_FULL); + break; + case POWER_SUPPLY_STATUS_CHARGING: + led_trigger_event(psy->charging_full_trig, LED_FULL); +diff --git a/drivers/power/supply/sbs-charger.c b/drivers/power/supply/sbs-charger.c +index 75ebcbf0a7883..a14e89ac0369c 100644 +--- a/drivers/power/supply/sbs-charger.c ++++ b/drivers/power/supply/sbs-charger.c +@@ -24,7 +24,7 @@ + #define SBS_CHARGER_REG_STATUS 0x13 + #define SBS_CHARGER_REG_ALARM_WARNING 0x16 + +-#define SBS_CHARGER_STATUS_CHARGE_INHIBITED BIT(1) ++#define SBS_CHARGER_STATUS_CHARGE_INHIBITED BIT(0) + #define SBS_CHARGER_STATUS_RES_COLD BIT(9) + #define SBS_CHARGER_STATUS_RES_HOT BIT(10) + #define SBS_CHARGER_STATUS_BATTERY_PRESENT BIT(14) +diff --git 
a/drivers/regulator/mt6359-regulator.c b/drivers/regulator/mt6359-regulator.c +index de3b0462832cd..f94f87c5407ae 100644 +--- a/drivers/regulator/mt6359-regulator.c ++++ b/drivers/regulator/mt6359-regulator.c +@@ -951,9 +951,12 @@ static int mt6359_regulator_probe(struct platform_device *pdev) + struct regulator_config config = {}; + struct regulator_dev *rdev; + struct mt6359_regulator_info *mt6359_info; +- int i, hw_ver; ++ int i, hw_ver, ret; ++ ++ ret = regmap_read(mt6397->regmap, MT6359P_HWCID, &hw_ver); ++ if (ret) ++ return ret; + +- regmap_read(mt6397->regmap, MT6359P_HWCID, &hw_ver); + if (hw_ver >= MT6359P_CHIP_VER) + mt6359_info = mt6359p_regulators; + else +diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c +index c6351fac9f4d9..aeee2654ad8bb 100644 +--- a/drivers/regulator/pca9450-regulator.c ++++ b/drivers/regulator/pca9450-regulator.c +@@ -264,7 +264,7 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = { + .vsel_reg = PCA9450_REG_BUCK2OUT_DVS0, + .vsel_mask = BUCK2OUT_DVS0_MASK, + .enable_reg = PCA9450_REG_BUCK2CTRL, +- .enable_mask = BUCK1_ENMODE_MASK, ++ .enable_mask = BUCK2_ENMODE_MASK, + .ramp_reg = PCA9450_REG_BUCK2CTRL, + .ramp_mask = BUCK2_RAMP_MASK, + .ramp_delay_table = pca9450_dvs_buck_ramp_table, +@@ -502,7 +502,7 @@ static const struct pca9450_regulator_desc pca9450bc_regulators[] = { + .vsel_reg = PCA9450_REG_BUCK2OUT_DVS0, + .vsel_mask = BUCK2OUT_DVS0_MASK, + .enable_reg = PCA9450_REG_BUCK2CTRL, +- .enable_mask = BUCK1_ENMODE_MASK, ++ .enable_mask = BUCK2_ENMODE_MASK, + .ramp_reg = PCA9450_REG_BUCK2CTRL, + .ramp_mask = BUCK2_RAMP_MASK, + .ramp_delay_table = pca9450_dvs_buck_ramp_table, +diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c +index a1c1fa1a9c28a..e6e0428f8e7be 100644 +--- a/drivers/tee/optee/smc_abi.c ++++ b/drivers/tee/optee/smc_abi.c +@@ -984,8 +984,10 @@ static u32 get_async_notif_value(optee_invoke_fn *invoke_fn, bool *value_valid, + + invoke_fn(OPTEE_SMC_GET_ASYNC_NOTIF_VALUE, 0, 0, 0, 0, 0, 0, 0, &res); + +- if (res.a0) ++ if (res.a0) { ++ *value_valid = false; + return 0; ++ } + *value_valid = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_VALID); + *value_pending = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_PENDING); + return res.a1; +diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c +index d0295123cc3e4..33429cdaf7a3c 100644 +--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c ++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c +@@ -131,7 +131,7 @@ static ssize_t available_uuids_show(struct device *dev, + + for (i = 0; i < INT3400_THERMAL_MAXIMUM_UUID; i++) { + if (priv->uuid_bitmap & (1 << i)) +- length += sysfs_emit_at(buf, length, int3400_thermal_uuids[i]); ++ length += sysfs_emit_at(buf, length, "%s\n", int3400_thermal_uuids[i]); + } + + return length; +@@ -149,7 +149,7 @@ static ssize_t current_uuid_show(struct device *dev, + + for (i = 0; i <= INT3400_THERMAL_CRITICAL; i++) { + if (priv->os_uuid_mask & BIT(i)) +- length += sysfs_emit_at(buf, length, int3400_thermal_uuids[i]); ++ length += sysfs_emit_at(buf, length, "%s\n", int3400_thermal_uuids[i]); + } + + if (length) +diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c +index 34742fbbd84d3..901ec732321c5 100644 +--- a/drivers/usb/core/usb.c ++++ b/drivers/usb/core/usb.c +@@ -206,6 +206,82 @@ int usb_find_common_endpoints_reverse(struct usb_host_interface *alt, + } + 
EXPORT_SYMBOL_GPL(usb_find_common_endpoints_reverse); + ++/** ++ * usb_find_endpoint() - Given an endpoint address, search for the endpoint's ++ * usb_host_endpoint structure in an interface's current altsetting. ++ * @intf: the interface whose current altsetting should be searched ++ * @ep_addr: the endpoint address (number and direction) to find ++ * ++ * Search the altsetting's list of endpoints for one with the specified address. ++ * ++ * Return: Pointer to the usb_host_endpoint if found, %NULL otherwise. ++ */ ++static const struct usb_host_endpoint *usb_find_endpoint( ++ const struct usb_interface *intf, unsigned int ep_addr) ++{ ++ int n; ++ const struct usb_host_endpoint *ep; ++ ++ n = intf->cur_altsetting->desc.bNumEndpoints; ++ ep = intf->cur_altsetting->endpoint; ++ for (; n > 0; (--n, ++ep)) { ++ if (ep->desc.bEndpointAddress == ep_addr) ++ return ep; ++ } ++ return NULL; ++} ++ ++/** ++ * usb_check_bulk_endpoints - Check whether an interface's current altsetting ++ * contains a set of bulk endpoints with the given addresses. ++ * @intf: the interface whose current altsetting should be searched ++ * @ep_addrs: 0-terminated array of the endpoint addresses (number and ++ * direction) to look for ++ * ++ * Search for endpoints with the specified addresses and check their types. ++ * ++ * Return: %true if all the endpoints are found and are bulk, %false otherwise. ++ */ ++bool usb_check_bulk_endpoints( ++ const struct usb_interface *intf, const u8 *ep_addrs) ++{ ++ const struct usb_host_endpoint *ep; ++ ++ for (; *ep_addrs; ++ep_addrs) { ++ ep = usb_find_endpoint(intf, *ep_addrs); ++ if (!ep || !usb_endpoint_xfer_bulk(&ep->desc)) ++ return false; ++ } ++ return true; ++} ++EXPORT_SYMBOL_GPL(usb_check_bulk_endpoints); ++ ++/** ++ * usb_check_int_endpoints - Check whether an interface's current altsetting ++ * contains a set of interrupt endpoints with the given addresses. ++ * @intf: the interface whose current altsetting should be searched ++ * @ep_addrs: 0-terminated array of the endpoint addresses (number and ++ * direction) to look for ++ * ++ * Search for endpoints with the specified addresses and check their types. ++ * ++ * Return: %true if all the endpoints are found and are interrupt, ++ * %false otherwise. ++ */ ++bool usb_check_int_endpoints( ++ const struct usb_interface *intf, const u8 *ep_addrs) ++{ ++ const struct usb_host_endpoint *ep; ++ ++ for (; *ep_addrs; ++ep_addrs) { ++ ep = usb_find_endpoint(intf, *ep_addrs); ++ if (!ep || !usb_endpoint_xfer_int(&ep->desc)) ++ return false; ++ } ++ return true; ++} ++EXPORT_SYMBOL_GPL(usb_check_int_endpoints); ++ + /** + * usb_find_alt_setting() - Given a configuration, find the alternate setting + * for the given interface. +diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h +index 4743e918dcafa..54ce9907a408c 100644 +--- a/drivers/usb/dwc3/core.h ++++ b/drivers/usb/dwc3/core.h +@@ -1110,6 +1110,7 @@ struct dwc3_scratchpad_array { + * 3 - Reserved + * @dis_metastability_quirk: set to disable metastability quirk. + * @dis_split_quirk: set to disable split boundary. ++ * @suspended: set to track suspend event due to U3/L2. + * @imod_interval: set the interrupt moderation interval in 250ns + * increments or 0 to disable. + * @max_cfg_eps: current max number of IN eps used across all USB configs. 
+@@ -1327,6 +1328,7 @@ struct dwc3 { + + unsigned dis_split_quirk:1; + unsigned async_callbacks:1; ++ unsigned suspended:1; + + u16 imod_interval; + +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index 80ae308448451..1cf352fbe7039 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -3838,6 +3838,8 @@ static void dwc3_gadget_disconnect_interrupt(struct dwc3 *dwc) + { + int reg; + ++ dwc->suspended = false; ++ + dwc3_gadget_set_link_state(dwc, DWC3_LINK_STATE_RX_DET); + + reg = dwc3_readl(dwc->regs, DWC3_DCTL); +@@ -3869,6 +3871,8 @@ static void dwc3_gadget_reset_interrupt(struct dwc3 *dwc) + { + u32 reg; + ++ dwc->suspended = false; ++ + /* + * Ideally, dwc3_reset_gadget() would trigger the function + * drivers to stop any active transfers through ep disable. +@@ -4098,6 +4102,8 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc) + + static void dwc3_gadget_wakeup_interrupt(struct dwc3 *dwc) + { ++ dwc->suspended = false; ++ + /* + * TODO take core out of low power mode when that's + * implemented. +@@ -4213,8 +4219,10 @@ static void dwc3_gadget_suspend_interrupt(struct dwc3 *dwc, + { + enum dwc3_link_state next = evtinfo & DWC3_LINK_STATE_MASK; + +- if (dwc->link_state != next && next == DWC3_LINK_STATE_U3) ++ if (!dwc->suspended && next == DWC3_LINK_STATE_U3) { ++ dwc->suspended = true; + dwc3_suspend_gadget(dwc); ++ } + + dwc->link_state = next; + } +diff --git a/drivers/usb/misc/sisusbvga/sisusbvga.c b/drivers/usb/misc/sisusbvga/sisusbvga.c +index 654a79fd3231e..febf34f9f0499 100644 +--- a/drivers/usb/misc/sisusbvga/sisusbvga.c ++++ b/drivers/usb/misc/sisusbvga/sisusbvga.c +@@ -2778,6 +2778,20 @@ static int sisusb_probe(struct usb_interface *intf, + struct usb_device *dev = interface_to_usbdev(intf); + struct sisusb_usb_data *sisusb; + int retval = 0, i; ++ static const u8 ep_addresses[] = { ++ SISUSB_EP_GFX_IN | USB_DIR_IN, ++ SISUSB_EP_GFX_OUT | USB_DIR_OUT, ++ SISUSB_EP_GFX_BULK_OUT | USB_DIR_OUT, ++ SISUSB_EP_GFX_LBULK_OUT | USB_DIR_OUT, ++ SISUSB_EP_BRIDGE_IN | USB_DIR_IN, ++ SISUSB_EP_BRIDGE_OUT | USB_DIR_OUT, ++ 0}; ++ ++ /* Are the expected endpoints present? */ ++ if (!usb_check_bulk_endpoints(intf, ep_addresses)) { ++ dev_err(&intf->dev, "Invalid USB2VGA device\n"); ++ return -EINVAL; ++ } + + dev_info(&dev->dev, "USB2VGA dongle found at address %d\n", + dev->devnum); +diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c +index 216d49c9d47e5..256d9b61f4eaa 100644 +--- a/drivers/video/fbdev/udlfb.c ++++ b/drivers/video/fbdev/udlfb.c +@@ -27,6 +27,8 @@ + #include