From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org>
Message-ID: <1718548240.88e6914e3a0bd49ede202c2921b807ad344c647b.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:6.9 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1004_linux-6.9.5.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 88e6914e3a0bd49ede202c2921b807ad344c647b
X-VCS-Branch: 6.9
Date: Sun, 16 Jun 2024 14:31:28 +0000 (UTC)

commit:     88e6914e3a0bd49ede202c2921b807ad344c647b
Author:     Mike Pagano <mpagano@gentoo.org>
AuthorDate: Sun Jun 16 14:30:40 2024 +0000
Commit:     Mike Pagano <mpagano@gentoo.org>
CommitDate: Sun Jun 16 14:30:40 2024 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=88e6914e

Linux patch 6.9.5

Signed-off-by: Mike Pagano <mpagano@gentoo.org>

 0000_README            |    4 +
 1004_linux-6.9.5.patch | 5592 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5596 insertions(+)

diff --git a/0000_README b/0000_README
index f824f87c..bb6274e1 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-6.9.4.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.9.4
 
+Patch:  1004_linux-6.9.5.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.9.5
+
 Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
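Each 0000_README entry above is a three-line Patch:/From:/Desc: record, and this commit simply appends one such record for 1004_linux-6.9.5.patch. As a rough, hypothetical illustration of that format (not part of this commit or of genpatches), a minimal C program that lists the records from a checked-out 0000_README could look like the sketch below; only the file name and the three field prefixes come from the diff above, everything else is assumed:

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* 0000_README sits at the top of the linux-patches checkout. */
	FILE *f = fopen("0000_README", "r");
	char line[512];

	if (!f) {
		perror("0000_README");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* Entries are a "Patch:" line followed by "From:" and "Desc:" lines. */
		if (!strncmp(line, "Patch:", 6) ||
		    !strncmp(line, "From:", 5) ||
		    !strncmp(line, "Desc:", 5))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

Run against the tree after this commit, it would print the new 1004_linux-6.9.5.patch record alongside the existing ones.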
diff --git a/1004_linux-6.9.5.patch b/1004_linux-6.9.5.patch new file mode 100644 index 00000000..82e93e9f --- /dev/null +++ b/1004_linux-6.9.5.patch @@ -0,0 +1,5592 @@ +diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst +index 2466d3363af79..ad50ca6f495eb 100644 +--- a/Documentation/mm/arch_pgtable_helpers.rst ++++ b/Documentation/mm/arch_pgtable_helpers.rst +@@ -140,7 +140,8 @@ PMD Page Table Helpers + +---------------------------+--------------------------------------------------+ + | pmd_swp_clear_soft_dirty | Clears a soft dirty swapped PMD | + +---------------------------+--------------------------------------------------+ +-| pmd_mkinvalid | Invalidates a mapped PMD [1] | ++| pmd_mkinvalid | Invalidates a present PMD; do not call for | ++| | non-present PMD [1] | + +---------------------------+--------------------------------------------------+ + | pmd_set_huge | Creates a PMD huge mapping | + +---------------------------+--------------------------------------------------+ +@@ -196,7 +197,8 @@ PUD Page Table Helpers + +---------------------------+--------------------------------------------------+ + | pud_mkdevmap | Creates a ZONE_DEVICE mapped PUD | + +---------------------------+--------------------------------------------------+ +-| pud_mkinvalid | Invalidates a mapped PUD [1] | ++| pud_mkinvalid | Invalidates a present PUD; do not call for | ++| | non-present PUD [1] | + +---------------------------+--------------------------------------------------+ + | pud_set_huge | Creates a PUD huge mapping | + +---------------------------+--------------------------------------------------+ +diff --git a/Documentation/networking/af_xdp.rst b/Documentation/networking/af_xdp.rst +index 72da7057e4cf9..dceeb0d763aa2 100644 +--- a/Documentation/networking/af_xdp.rst ++++ b/Documentation/networking/af_xdp.rst +@@ -329,24 +329,23 @@ XDP_SHARED_UMEM option and provide the initial socket's fd in the + sxdp_shared_umem_fd field as you registered the UMEM on that + socket. These two sockets will now share one and the same UMEM. + +-In this case, it is possible to use the NIC's packet steering +-capabilities to steer the packets to the right queue. This is not +-possible in the previous example as there is only one queue shared +-among sockets, so the NIC cannot do this steering as it can only steer +-between queues. +- +-In libxdp (or libbpf prior to version 1.0), you need to use the +-xsk_socket__create_shared() API as it takes a reference to a FILL ring +-and a COMPLETION ring that will be created for you and bound to the +-shared UMEM. You can use this function for all the sockets you create, +-or you can use it for the second and following ones and use +-xsk_socket__create() for the first one. Both methods yield the same +-result. ++There is no need to supply an XDP program like the one in the previous ++case where sockets were bound to the same queue id and ++device. Instead, use the NIC's packet steering capabilities to steer ++the packets to the right queue. In the previous example, there is only ++one queue shared among sockets, so the NIC cannot do this steering. It ++can only steer between queues. ++ ++In libbpf, you need to use the xsk_socket__create_shared() API as it ++takes a reference to a FILL ring and a COMPLETION ring that will be ++created for you and bound to the shared UMEM. 
You can use this ++function for all the sockets you create, or you can use it for the ++second and following ones and use xsk_socket__create() for the first ++one. Both methods yield the same result. + + Note that a UMEM can be shared between sockets on the same queue id + and device, as well as between queues on the same device and between +-devices at the same time. It is also possible to redirect to any +-socket as long as it is bound to the same umem with XDP_SHARED_UMEM. ++devices at the same time. + + XDP_USE_NEED_WAKEUP bind flag + ----------------------------- +@@ -823,10 +822,6 @@ A: The short answer is no, that is not supported at the moment. The + switch, or other distribution mechanism, in your NIC to direct + traffic to the correct queue id and socket. + +- Note that if you are using the XDP_SHARED_UMEM option, it is +- possible to switch traffic between any socket bound to the same +- umem. +- + Q: My packets are sometimes corrupted. What is wrong? + + A: Care has to be taken not to feed the same buffer in the UMEM into +diff --git a/Makefile b/Makefile +index 91f1d4d34e809..d5062a593ef7e 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 9 +-SUBLEVEL = 4 ++SUBLEVEL = 5 + EXTRAVERSION = + NAME = Hurr durr I'ma ninja sloth + +@@ -942,7 +942,6 @@ endif + ifdef CONFIG_LTO_CLANG + ifdef CONFIG_LTO_CLANG_THIN + CC_FLAGS_LTO := -flto=thin -fsplit-lto-unit +-KBUILD_LDFLAGS += --thinlto-cache-dir=$(extmod_prefix).thinlto-cache + else + CC_FLAGS_LTO := -flto + endif +@@ -1477,7 +1476,7 @@ endif # CONFIG_MODULES + # Directories & files removed with 'make clean' + CLEAN_FILES += vmlinux.symvers modules-only.symvers \ + modules.builtin modules.builtin.modinfo modules.nsdeps \ +- compile_commands.json .thinlto-cache rust/test \ ++ compile_commands.json rust/test \ + rust-project.json .vmlinux.objs .vmlinux.export.c + + # Directories & files removed with 'make mrproper' +@@ -1783,7 +1782,7 @@ PHONY += compile_commands.json + + clean-dirs := $(KBUILD_EXTMOD) + clean: rm-files := $(KBUILD_EXTMOD)/Module.symvers $(KBUILD_EXTMOD)/modules.nsdeps \ +- $(KBUILD_EXTMOD)/compile_commands.json $(KBUILD_EXTMOD)/.thinlto-cache ++ $(KBUILD_EXTMOD)/compile_commands.json + + PHONY += prepare + # now expand this into a simple variable to reduce the cost of shell evaluations +diff --git a/arch/arm/boot/dts/samsung/exynos4210-smdkv310.dts b/arch/arm/boot/dts/samsung/exynos4210-smdkv310.dts +index b566f878ed84f..18f4f494093ba 100644 +--- a/arch/arm/boot/dts/samsung/exynos4210-smdkv310.dts ++++ b/arch/arm/boot/dts/samsung/exynos4210-smdkv310.dts +@@ -88,7 +88,7 @@ eeprom@52 { + &keypad { + samsung,keypad-num-rows = <2>; + samsung,keypad-num-columns = <8>; +- linux,keypad-no-autorepeat; ++ linux,input-no-autorepeat; + wakeup-source; + pinctrl-names = "default"; + pinctrl-0 = <&keypad_rows &keypad_cols>; +diff --git a/arch/arm/boot/dts/samsung/exynos4412-origen.dts b/arch/arm/boot/dts/samsung/exynos4412-origen.dts +index 23b151645d668..10ab7bc90f502 100644 +--- a/arch/arm/boot/dts/samsung/exynos4412-origen.dts ++++ b/arch/arm/boot/dts/samsung/exynos4412-origen.dts +@@ -453,7 +453,7 @@ buck9_reg: BUCK9 { + &keypad { + samsung,keypad-num-rows = <3>; + samsung,keypad-num-columns = <2>; +- linux,keypad-no-autorepeat; ++ linux,input-no-autorepeat; + wakeup-source; + pinctrl-0 = <&keypad_rows &keypad_cols>; + pinctrl-names = "default"; +diff --git a/arch/arm/boot/dts/samsung/exynos4412-smdk4412.dts 
b/arch/arm/boot/dts/samsung/exynos4412-smdk4412.dts +index 715dfcba14174..e16df9e75fcb0 100644 +--- a/arch/arm/boot/dts/samsung/exynos4412-smdk4412.dts ++++ b/arch/arm/boot/dts/samsung/exynos4412-smdk4412.dts +@@ -69,7 +69,7 @@ cooling_map1: map1 { + &keypad { + samsung,keypad-num-rows = <3>; + samsung,keypad-num-columns = <8>; +- linux,keypad-no-autorepeat; ++ linux,input-no-autorepeat; + wakeup-source; + pinctrl-0 = <&keypad_rows &keypad_cols>; + pinctrl-names = "default"; +diff --git a/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi b/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi +index ed1b5a7a60678..d01023401d7e3 100644 +--- a/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi ++++ b/arch/arm64/boot/dts/hisilicon/hi3798cv200.dtsi +@@ -58,7 +58,7 @@ cpu@3 { + gic: interrupt-controller@f1001000 { + compatible = "arm,gic-400"; + reg = <0x0 0xf1001000 0x0 0x1000>, /* GICD */ +- <0x0 0xf1002000 0x0 0x100>; /* GICC */ ++ <0x0 0xf1002000 0x0 0x2000>; /* GICC */ + #address-cells = <0>; + #interrupt-cells = <3>; + interrupt-controller; +diff --git a/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts b/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts +index 14d58859bb55c..683ac124523b3 100644 +--- a/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts ++++ b/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts +@@ -9,8 +9,8 @@ / { + compatible = "nvidia,norrin", "nvidia,tegra132", "nvidia,tegra124"; + + aliases { +- rtc0 = "/i2c@7000d000/as3722@40"; +- rtc1 = "/rtc@7000e000"; ++ rtc0 = &as3722; ++ rtc1 = &tegra_rtc; + serial0 = &uarta; + }; + +diff --git a/arch/arm64/boot/dts/nvidia/tegra132.dtsi b/arch/arm64/boot/dts/nvidia/tegra132.dtsi +index 7e24a212c7e44..5bcccfef3f7f8 100644 +--- a/arch/arm64/boot/dts/nvidia/tegra132.dtsi ++++ b/arch/arm64/boot/dts/nvidia/tegra132.dtsi +@@ -572,7 +572,7 @@ spi@7000de00 { + status = "disabled"; + }; + +- rtc@7000e000 { ++ tegra_rtc: rtc@7000e000 { + compatible = "nvidia,tegra124-rtc", "nvidia,tegra20-rtc"; + reg = <0x0 0x7000e000 0x0 0x100>; + interrupts = ; +diff --git a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi +index 10655401528e4..a22b4501ce1ef 100644 +--- a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi ++++ b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi +@@ -62,7 +62,7 @@ bluetooth { + vddrf-supply = <&vreg_l1_1p3>; + vddch0-supply = <&vdd_ch0_3p3>; + +- local-bd-address = [ 02 00 00 00 5a ad ]; ++ local-bd-address = [ 00 00 00 00 00 00 ]; + + max-speed = <3200000>; + }; +diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi +index d0f82e12289e1..91871a6ee3b6b 100644 +--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi ++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi +@@ -1799,6 +1799,7 @@ pcie4_phy: phy@1c06000 { + assigned-clock-rates = <100000000>; + + power-domains = <&gcc PCIE_4_GDSC>; ++ required-opps = <&rpmhpd_opp_nom>; + + resets = <&gcc GCC_PCIE_4_PHY_BCR>; + reset-names = "phy"; +@@ -1898,6 +1899,7 @@ pcie3b_phy: phy@1c0e000 { + assigned-clock-rates = <100000000>; + + power-domains = <&gcc PCIE_3B_GDSC>; ++ required-opps = <&rpmhpd_opp_nom>; + + resets = <&gcc GCC_PCIE_3B_PHY_BCR>; + reset-names = "phy"; +@@ -1998,6 +2000,7 @@ pcie3a_phy: phy@1c14000 { + assigned-clock-rates = <100000000>; + + power-domains = <&gcc PCIE_3A_GDSC>; ++ required-opps = <&rpmhpd_opp_nom>; + + resets = <&gcc GCC_PCIE_3A_PHY_BCR>; + reset-names = "phy"; +@@ -2099,6 +2102,7 @@ pcie2b_phy: phy@1c1e000 { + assigned-clock-rates = <100000000>; + + power-domains = <&gcc PCIE_2B_GDSC>; ++ required-opps = <&rpmhpd_opp_nom>; + + 
resets = <&gcc GCC_PCIE_2B_PHY_BCR>; + reset-names = "phy"; +@@ -2199,6 +2203,7 @@ pcie2a_phy: phy@1c24000 { + assigned-clock-rates = <100000000>; + + power-domains = <&gcc PCIE_2A_GDSC>; ++ required-opps = <&rpmhpd_opp_nom>; + + resets = <&gcc GCC_PCIE_2A_PHY_BCR>; + reset-names = "phy"; +diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi +index e8d8857ad51ff..8c837467069b0 100644 +--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi ++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi +@@ -76,7 +76,7 @@ verdin_key_wakeup: key-wakeup { + + memory@80000000 { + device_type = "memory"; +- reg = <0x00000000 0x80000000 0x00000000 0x40000000>; /* 1G RAM */ ++ reg = <0x00000000 0x80000000 0x00000000 0x80000000>; /* 2G RAM */ + }; + + opp-table { +diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c +index e2f762d959bb3..11098eb7eb44a 100644 +--- a/arch/arm64/kvm/guest.c ++++ b/arch/arm64/kvm/guest.c +@@ -251,6 +251,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) + case PSR_AA32_MODE_SVC: + case PSR_AA32_MODE_ABT: + case PSR_AA32_MODE_UND: ++ case PSR_AA32_MODE_SYS: + if (!vcpu_el1_is_32bit(vcpu)) + return -EINVAL; + break; +@@ -276,7 +277,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) + if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) { + int i, nr_reg; + +- switch (*vcpu_cpsr(vcpu)) { ++ switch (*vcpu_cpsr(vcpu) & PSR_AA32_MODE_MASK) { + /* + * Either we are dealing with user mode, and only the + * first 15 registers (+ PC) must be narrowed to 32bit. +diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c +index 8d9670e6615dc..449fa58cf3b63 100644 +--- a/arch/arm64/kvm/hyp/aarch32.c ++++ b/arch/arm64/kvm/hyp/aarch32.c +@@ -50,9 +50,23 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) + u32 cpsr_cond; + int cond; + +- /* Top two bits non-zero? Unconditional. */ +- if (kvm_vcpu_get_esr(vcpu) >> 30) ++ /* ++ * These are the exception classes that could fire with a ++ * conditional instruction. ++ */ ++ switch (kvm_vcpu_trap_get_class(vcpu)) { ++ case ESR_ELx_EC_CP15_32: ++ case ESR_ELx_EC_CP15_64: ++ case ESR_ELx_EC_CP14_MR: ++ case ESR_ELx_EC_CP14_LS: ++ case ESR_ELx_EC_FP_ASIMD: ++ case ESR_ELx_EC_CP10_ID: ++ case ESR_ELx_EC_CP14_64: ++ case ESR_ELx_EC_SVC32: ++ break; ++ default: + return true; ++ } + + /* Is condition field valid? 
*/ + cond = kvm_vcpu_get_condition(vcpu); +diff --git a/arch/loongarch/include/asm/numa.h b/arch/loongarch/include/asm/numa.h +index 27f319b498625..b5f9de9f102e4 100644 +--- a/arch/loongarch/include/asm/numa.h ++++ b/arch/loongarch/include/asm/numa.h +@@ -56,6 +56,7 @@ extern int early_cpu_to_node(int cpu); + static inline void early_numa_add_cpu(int cpuid, s16 node) { } + static inline void numa_add_cpu(unsigned int cpu) { } + static inline void numa_remove_cpu(unsigned int cpu) { } ++static inline void set_cpuid_to_node(int cpuid, s16 node) { } + + static inline int early_cpu_to_node(int cpu) + { +diff --git a/arch/loongarch/include/asm/stackframe.h b/arch/loongarch/include/asm/stackframe.h +index 45b507a7b06fc..d9eafd3ee3d1e 100644 +--- a/arch/loongarch/include/asm/stackframe.h ++++ b/arch/loongarch/include/asm/stackframe.h +@@ -42,7 +42,7 @@ + .macro JUMP_VIRT_ADDR temp1 temp2 + li.d \temp1, CACHE_BASE + pcaddi \temp2, 0 +- or \temp1, \temp1, \temp2 ++ bstrins.d \temp1, \temp2, (DMW_PABITS - 1), 0 + jirl zero, \temp1, 0xc + .endm + +diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S +index c4f7de2e28054..4677ea8fa8e98 100644 +--- a/arch/loongarch/kernel/head.S ++++ b/arch/loongarch/kernel/head.S +@@ -22,7 +22,7 @@ + _head: + .word MZ_MAGIC /* "MZ", MS-DOS header */ + .org 0x8 +- .dword kernel_entry /* Kernel entry point */ ++ .dword _kernel_entry /* Kernel entry point (physical address) */ + .dword _kernel_asize /* Kernel image effective size */ + .quad PHYS_LINK_KADDR /* Kernel image load offset from start of RAM */ + .org 0x38 /* 0x20 ~ 0x37 reserved */ +diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c +index 60e0fe97f61a3..e8f7a54ff67cc 100644 +--- a/arch/loongarch/kernel/setup.c ++++ b/arch/loongarch/kernel/setup.c +@@ -282,7 +282,7 @@ static void __init fdt_setup(void) + return; + + /* Prefer to use built-in dtb, checking its legality first. */ +- if (!fdt_check_header(__dtb_start)) ++ if (IS_ENABLED(CONFIG_BUILTIN_DTB) && !fdt_check_header(__dtb_start)) + fdt_pointer = __dtb_start; + else + fdt_pointer = efi_fdt_pointer(); /* Fallback to firmware dtb */ +diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c +index aabee0b280fe5..fd22a32755a58 100644 +--- a/arch/loongarch/kernel/smp.c ++++ b/arch/loongarch/kernel/smp.c +@@ -262,7 +262,6 @@ static void __init fdt_smp_setup(void) + + if (cpuid == loongson_sysconf.boot_cpu_id) { + cpu = 0; +- numa_add_cpu(cpu); + } else { + cpu = cpumask_next_zero(-1, cpu_present_mask); + } +@@ -272,6 +271,9 @@ static void __init fdt_smp_setup(void) + set_cpu_present(cpu, true); + __cpu_number_map[cpuid] = cpu; + __cpu_logical_map[cpu] = cpuid; ++ ++ early_numa_add_cpu(cpu, 0); ++ set_cpuid_to_node(cpuid, 0); + } + + loongson_sysconf.nr_cpus = num_processors; +@@ -456,6 +458,7 @@ void smp_prepare_boot_cpu(void) + set_cpu_possible(0, true); + set_cpu_online(0, true); + set_my_cpu_offset(per_cpu_offset(0)); ++ numa_add_cpu(0); + + rr_node = first_node(node_online_map); + for_each_possible_cpu(cpu) { +diff --git a/arch/loongarch/kernel/vmlinux.lds.S b/arch/loongarch/kernel/vmlinux.lds.S +index e8e97dbf9ca40..3c7595342730e 100644 +--- a/arch/loongarch/kernel/vmlinux.lds.S ++++ b/arch/loongarch/kernel/vmlinux.lds.S +@@ -6,6 +6,7 @@ + + #define PAGE_SIZE _PAGE_SIZE + #define RO_EXCEPTION_TABLE_ALIGN 4 ++#define PHYSADDR_MASK 0xffffffffffff /* 48-bit */ + + /* + * Put .bss..swapper_pg_dir as the first thing in .bss. 
This will +@@ -142,10 +143,11 @@ SECTIONS + + #ifdef CONFIG_EFI_STUB + /* header symbols */ +- _kernel_asize = _end - _text; +- _kernel_fsize = _edata - _text; +- _kernel_vsize = _end - __initdata_begin; +- _kernel_rsize = _edata - __initdata_begin; ++ _kernel_entry = ABSOLUTE(kernel_entry & PHYSADDR_MASK); ++ _kernel_asize = ABSOLUTE(_end - _text); ++ _kernel_fsize = ABSOLUTE(_edata - _text); ++ _kernel_vsize = ABSOLUTE(_end - __initdata_begin); ++ _kernel_rsize = ABSOLUTE(_edata - __initdata_begin); + #endif + + .gptab.sdata : { +diff --git a/arch/parisc/include/asm/page.h b/arch/parisc/include/asm/page.h +index ad4e15d12ed1c..4bea2e95798f0 100644 +--- a/arch/parisc/include/asm/page.h ++++ b/arch/parisc/include/asm/page.h +@@ -8,6 +8,7 @@ + #define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT) + #define PAGE_MASK (~(PAGE_SIZE-1)) + ++#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA + + #ifndef __ASSEMBLY__ + +diff --git a/arch/parisc/include/asm/signal.h b/arch/parisc/include/asm/signal.h +index 715c96ba2ec81..e84883c6b4c7a 100644 +--- a/arch/parisc/include/asm/signal.h ++++ b/arch/parisc/include/asm/signal.h +@@ -4,23 +4,11 @@ + + #include + +-#define _NSIG 64 +-/* bits-per-word, where word apparently means 'long' not 'int' */ +-#define _NSIG_BPW BITS_PER_LONG +-#define _NSIG_WORDS (_NSIG / _NSIG_BPW) +- + # ifndef __ASSEMBLY__ + + /* Most things should be clean enough to redefine this at will, if care + is taken to make libc match. */ + +-typedef unsigned long old_sigset_t; /* at least 32 bits */ +- +-typedef struct { +- /* next_signal() assumes this is a long - no choice */ +- unsigned long sig[_NSIG_WORDS]; +-} sigset_t; +- + #include + + #endif /* !__ASSEMBLY */ +diff --git a/arch/parisc/include/uapi/asm/signal.h b/arch/parisc/include/uapi/asm/signal.h +index 8e4895c5ea5d3..40d7a574c5dd1 100644 +--- a/arch/parisc/include/uapi/asm/signal.h ++++ b/arch/parisc/include/uapi/asm/signal.h +@@ -57,10 +57,20 @@ + + #include + ++#define _NSIG 64 ++#define _NSIG_BPW (sizeof(unsigned long) * 8) ++#define _NSIG_WORDS (_NSIG / _NSIG_BPW) ++ + # ifndef __ASSEMBLY__ + + # include + ++typedef unsigned long old_sigset_t; /* at least 32 bits */ ++ ++typedef struct { ++ unsigned long sig[_NSIG_WORDS]; ++} sigset_t; ++ + /* Avoid too many header ordering problems. */ + struct siginfo; + +diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c +index 83823db3488b9..2975ea0841ba4 100644 +--- a/arch/powerpc/mm/book3s64/pgtable.c ++++ b/arch/powerpc/mm/book3s64/pgtable.c +@@ -170,6 +170,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, + { + unsigned long old_pmd; + ++ VM_WARN_ON_ONCE(!pmd_present(*pmdp)); + old_pmd = pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, _PAGE_INVALID); + flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE); + return __pmd(old_pmd); +diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c +index 43b97032a91c0..a0c4f1bde83e8 100644 +--- a/arch/powerpc/net/bpf_jit_comp32.c ++++ b/arch/powerpc/net/bpf_jit_comp32.c +@@ -900,6 +900,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code + + /* Get offset into TMP_REG */ + EMIT(PPC_RAW_LI(tmp_reg, off)); ++ /* ++ * Enforce full ordering for operations with BPF_FETCH by emitting a 'sync' ++ * before and after the operation. ++ * ++ * This is a requirement in the Linux Kernel Memory Model. ++ * See __cmpxchg_u32() in asm/cmpxchg.h as an example. 
++ */ ++ if ((imm & BPF_FETCH) && IS_ENABLED(CONFIG_SMP)) ++ EMIT(PPC_RAW_SYNC()); + tmp_idx = ctx->idx * 4; + /* load value from memory into r0 */ + EMIT(PPC_RAW_LWARX(_R0, tmp_reg, dst_reg, 0)); +@@ -953,6 +962,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code + + /* For the BPF_FETCH variant, get old data into src_reg */ + if (imm & BPF_FETCH) { ++ /* Emit 'sync' to enforce full ordering */ ++ if (IS_ENABLED(CONFIG_SMP)) ++ EMIT(PPC_RAW_SYNC()); + EMIT(PPC_RAW_MR(ret_reg, ax_reg)); + if (!fp->aux->verifier_zext) + EMIT(PPC_RAW_LI(ret_reg - 1, 0)); /* higher 32-bit */ +diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c +index 79f23974a3204..58522de615ff2 100644 +--- a/arch/powerpc/net/bpf_jit_comp64.c ++++ b/arch/powerpc/net/bpf_jit_comp64.c +@@ -202,7 +202,8 @@ void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx) + EMIT(PPC_RAW_BLR()); + } + +-static int bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx, u64 func) ++static int ++bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func) + { + unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0; + long reladdr; +@@ -211,19 +212,20 @@ static int bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx, u + return -EINVAL; + + if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) { +- reladdr = func_addr - CTX_NIA(ctx); ++ reladdr = func_addr - local_paca->kernelbase; + + if (reladdr >= (long)SZ_8G || reladdr < -(long)SZ_8G) { +- pr_err("eBPF: address of %ps out of range of pcrel address.\n", +- (void *)func); ++ pr_err("eBPF: address of %ps out of range of 34-bit relative address.\n", ++ (void *)func); + return -ERANGE; + } +- /* pla r12,addr */ +- EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(1) | IMM_H18(reladdr)); +- EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | IMM_L(reladdr)); +- EMIT(PPC_RAW_MTCTR(_R12)); +- EMIT(PPC_RAW_BCTR()); +- ++ EMIT(PPC_RAW_LD(_R12, _R13, offsetof(struct paca_struct, kernelbase))); ++ /* Align for subsequent prefix instruction */ ++ if (!IS_ALIGNED((unsigned long)fimage + CTX_NIA(ctx), 8)) ++ EMIT(PPC_RAW_NOP()); ++ /* paddi r12,r12,addr */ ++ EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(0) | IMM_H18(reladdr)); ++ EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | ___PPC_RA(_R12) | IMM_L(reladdr)); + } else { + reladdr = func_addr - kernel_toc_addr(); + if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) { +@@ -233,9 +235,9 @@ static int bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx, u + + EMIT(PPC_RAW_ADDIS(_R12, _R2, PPC_HA(reladdr))); + EMIT(PPC_RAW_ADDI(_R12, _R12, PPC_LO(reladdr))); +- EMIT(PPC_RAW_MTCTR(_R12)); +- EMIT(PPC_RAW_BCTRL()); + } ++ EMIT(PPC_RAW_MTCTR(_R12)); ++ EMIT(PPC_RAW_BCTRL()); + + return 0; + } +@@ -285,7 +287,7 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o + int b2p_index = bpf_to_ppc(BPF_REG_3); + int bpf_tailcall_prologue_size = 8; + +- if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2)) ++ if (!IS_ENABLED(CONFIG_PPC_KERNEL_PCREL) && IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2)) + bpf_tailcall_prologue_size += 4; /* skip past the toc load */ + + /* +@@ -803,6 +805,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code + + /* Get offset into TMP_REG_1 */ + EMIT(PPC_RAW_LI(tmp1_reg, off)); ++ /* ++ * Enforce full ordering for operations with BPF_FETCH by emitting a 'sync' ++ * before and after the operation. ++ * ++ * This is a requirement in the Linux Kernel Memory Model. 
++ * See __cmpxchg_u64() in asm/cmpxchg.h as an example. ++ */ ++ if ((imm & BPF_FETCH) && IS_ENABLED(CONFIG_SMP)) ++ EMIT(PPC_RAW_SYNC()); + tmp_idx = ctx->idx * 4; + /* load value from memory into TMP_REG_2 */ + if (size == BPF_DW) +@@ -865,6 +876,9 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code + PPC_BCC_SHORT(COND_NE, tmp_idx); + + if (imm & BPF_FETCH) { ++ /* Emit 'sync' to enforce full ordering */ ++ if (IS_ENABLED(CONFIG_SMP)) ++ EMIT(PPC_RAW_SYNC()); + EMIT(PPC_RAW_MR(ret_reg, _R0)); + /* + * Skip unnecessary zero-extension for 32-bit cmpxchg. +@@ -993,7 +1007,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code + return ret; + + if (func_addr_fixed) +- ret = bpf_jit_emit_func_call_hlp(image, ctx, func_addr); ++ ret = bpf_jit_emit_func_call_hlp(image, fimage, ctx, func_addr); + else + ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr); + +diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig +index be09c8836d56b..8025ca7ef7798 100644 +--- a/arch/riscv/Kconfig ++++ b/arch/riscv/Kconfig +@@ -103,7 +103,7 @@ config RISCV + select HAS_IOPORT if MMU + select HAVE_ARCH_AUDITSYSCALL + select HAVE_ARCH_HUGE_VMALLOC if HAVE_ARCH_HUGE_VMAP +- select HAVE_ARCH_HUGE_VMAP if MMU && 64BIT && !XIP_KERNEL ++ select HAVE_ARCH_HUGE_VMAP if MMU && 64BIT + select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL + select HAVE_ARCH_JUMP_LABEL_RELATIVE if !XIP_KERNEL + select HAVE_ARCH_KASAN if MMU && 64BIT +diff --git a/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi b/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi +index 2b3e952513e44..ccd0ce55aa530 100644 +--- a/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi ++++ b/arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi +@@ -238,7 +238,6 @@ &i2c5 { + axp15060: pmic@36 { + compatible = "x-powers,axp15060"; + reg = <0x36>; +- interrupts = <0>; + interrupt-controller; + #interrupt-cells = <1>; + +diff --git a/arch/s390/include/asm/cpacf.h b/arch/s390/include/asm/cpacf.h +index b378e2b57ad87..c786538e397c0 100644 +--- a/arch/s390/include/asm/cpacf.h ++++ b/arch/s390/include/asm/cpacf.h +@@ -166,28 +166,86 @@ + + typedef struct { unsigned char bytes[16]; } cpacf_mask_t; + +-/** +- * cpacf_query() - check if a specific CPACF function is available +- * @opcode: the opcode of the crypto instruction +- * @func: the function code to test for +- * +- * Executes the query function for the given crypto instruction @opcode +- * and checks if @func is available +- * +- * Returns 1 if @func is available for @opcode, 0 otherwise ++/* ++ * Prototype for a not existing function to produce a link ++ * error if __cpacf_query() or __cpacf_check_opcode() is used ++ * with an invalid compile time const opcode. 
+ */ +-static __always_inline void __cpacf_query(unsigned int opcode, cpacf_mask_t *mask) ++void __cpacf_bad_opcode(void); ++ ++static __always_inline void __cpacf_query_rre(u32 opc, u8 r1, u8 r2, ++ cpacf_mask_t *mask) + { + asm volatile( +- " lghi 0,0\n" /* query function */ +- " lgr 1,%[mask]\n" +- " spm 0\n" /* pckmo doesn't change the cc */ +- /* Parameter regs are ignored, but must be nonzero and unique */ +- "0: .insn rrf,%[opc] << 16,2,4,6,0\n" +- " brc 1,0b\n" /* handle partial completion */ +- : "=m" (*mask) +- : [mask] "d" ((unsigned long)mask), [opc] "i" (opcode) +- : "cc", "0", "1"); ++ " la %%r1,%[mask]\n" ++ " xgr %%r0,%%r0\n" ++ " .insn rre,%[opc] << 16,%[r1],%[r2]\n" ++ : [mask] "=R" (*mask) ++ : [opc] "i" (opc), ++ [r1] "i" (r1), [r2] "i" (r2) ++ : "cc", "r0", "r1"); ++} ++ ++static __always_inline void __cpacf_query_rrf(u32 opc, ++ u8 r1, u8 r2, u8 r3, u8 m4, ++ cpacf_mask_t *mask) ++{ ++ asm volatile( ++ " la %%r1,%[mask]\n" ++ " xgr %%r0,%%r0\n" ++ " .insn rrf,%[opc] << 16,%[r1],%[r2],%[r3],%[m4]\n" ++ : [mask] "=R" (*mask) ++ : [opc] "i" (opc), [r1] "i" (r1), [r2] "i" (r2), ++ [r3] "i" (r3), [m4] "i" (m4) ++ : "cc", "r0", "r1"); ++} ++ ++static __always_inline void __cpacf_query(unsigned int opcode, ++ cpacf_mask_t *mask) ++{ ++ switch (opcode) { ++ case CPACF_KDSA: ++ __cpacf_query_rre(CPACF_KDSA, 0, 2, mask); ++ break; ++ case CPACF_KIMD: ++ __cpacf_query_rre(CPACF_KIMD, 0, 2, mask); ++ break; ++ case CPACF_KLMD: ++ __cpacf_query_rre(CPACF_KLMD, 0, 2, mask); ++ break; ++ case CPACF_KM: ++ __cpacf_query_rre(CPACF_KM, 2, 4, mask); ++ break; ++ case CPACF_KMA: ++ __cpacf_query_rrf(CPACF_KMA, 2, 4, 6, 0, mask); ++ break; ++ case CPACF_KMAC: ++ __cpacf_query_rre(CPACF_KMAC, 0, 2, mask); ++ break; ++ case CPACF_KMC: ++ __cpacf_query_rre(CPACF_KMC, 2, 4, mask); ++ break; ++ case CPACF_KMCTR: ++ __cpacf_query_rrf(CPACF_KMCTR, 2, 4, 6, 0, mask); ++ break; ++ case CPACF_KMF: ++ __cpacf_query_rre(CPACF_KMF, 2, 4, mask); ++ break; ++ case CPACF_KMO: ++ __cpacf_query_rre(CPACF_KMO, 2, 4, mask); ++ break; ++ case CPACF_PCC: ++ __cpacf_query_rre(CPACF_PCC, 0, 0, mask); ++ break; ++ case CPACF_PCKMO: ++ __cpacf_query_rre(CPACF_PCKMO, 0, 0, mask); ++ break; ++ case CPACF_PRNO: ++ __cpacf_query_rre(CPACF_PRNO, 2, 4, mask); ++ break; ++ default: ++ __cpacf_bad_opcode(); ++ } + } + + static __always_inline int __cpacf_check_opcode(unsigned int opcode) +@@ -211,10 +269,21 @@ static __always_inline int __cpacf_check_opcode(unsigned int opcode) + case CPACF_KMA: + return test_facility(146); /* check for MSA8 */ + default: +- BUG(); ++ __cpacf_bad_opcode(); ++ return 0; + } + } + ++/** ++ * cpacf_query() - check if a specific CPACF function is available ++ * @opcode: the opcode of the crypto instruction ++ * @func: the function code to test for ++ * ++ * Executes the query function for the given crypto instruction @opcode ++ * and checks if @func is available ++ * ++ * Returns 1 if @func is available for @opcode, 0 otherwise ++ */ + static __always_inline int cpacf_query(unsigned int opcode, cpacf_mask_t *mask) + { + if (__cpacf_check_opcode(opcode)) { +diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h +index 259c2439c2517..1077c1daec851 100644 +--- a/arch/s390/include/asm/pgtable.h ++++ b/arch/s390/include/asm/pgtable.h +@@ -1778,8 +1778,10 @@ static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, + static inline pmd_t pmdp_invalidate(struct vm_area_struct *vma, + unsigned long addr, pmd_t *pmdp) + { +- pmd_t pmd = __pmd(pmd_val(*pmdp) | 
_SEGMENT_ENTRY_INVALID); ++ pmd_t pmd; + ++ VM_WARN_ON_ONCE(!pmd_present(*pmdp)); ++ pmd = __pmd(pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID); + return pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd); + } + +diff --git a/arch/sparc/include/asm/smp_64.h b/arch/sparc/include/asm/smp_64.h +index 505b6700805dd..0964fede0b2cc 100644 +--- a/arch/sparc/include/asm/smp_64.h ++++ b/arch/sparc/include/asm/smp_64.h +@@ -47,7 +47,6 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask); + int hard_smp_processor_id(void); + #define raw_smp_processor_id() (current_thread_info()->cpu) + +-void smp_fill_in_cpu_possible_map(void); + void smp_fill_in_sib_core_maps(void); + void __noreturn cpu_play_dead(void); + +@@ -77,7 +76,6 @@ void __cpu_die(unsigned int cpu); + #define smp_fill_in_sib_core_maps() do { } while (0) + #define smp_fetch_global_regs() do { } while (0) + #define smp_fetch_global_pmu() do { } while (0) +-#define smp_fill_in_cpu_possible_map() do { } while (0) + #define smp_init_cpu_poke() do { } while (0) + #define scheduler_poke() do { } while (0) + +diff --git a/arch/sparc/include/uapi/asm/termbits.h b/arch/sparc/include/uapi/asm/termbits.h +index 4321322701fcf..0da2b1adc0f52 100644 +--- a/arch/sparc/include/uapi/asm/termbits.h ++++ b/arch/sparc/include/uapi/asm/termbits.h +@@ -10,16 +10,6 @@ typedef unsigned int tcflag_t; + typedef unsigned long tcflag_t; + #endif + +-#define NCC 8 +-struct termio { +- unsigned short c_iflag; /* input mode flags */ +- unsigned short c_oflag; /* output mode flags */ +- unsigned short c_cflag; /* control mode flags */ +- unsigned short c_lflag; /* local mode flags */ +- unsigned char c_line; /* line discipline */ +- unsigned char c_cc[NCC]; /* control characters */ +-}; +- + #define NCCS 17 + struct termios { + tcflag_t c_iflag; /* input mode flags */ +diff --git a/arch/sparc/include/uapi/asm/termios.h b/arch/sparc/include/uapi/asm/termios.h +index ee86f4093d83e..cceb32260881e 100644 +--- a/arch/sparc/include/uapi/asm/termios.h ++++ b/arch/sparc/include/uapi/asm/termios.h +@@ -40,5 +40,14 @@ struct winsize { + unsigned short ws_ypixel; + }; + ++#define NCC 8 ++struct termio { ++ unsigned short c_iflag; /* input mode flags */ ++ unsigned short c_oflag; /* output mode flags */ ++ unsigned short c_cflag; /* control mode flags */ ++ unsigned short c_lflag; /* local mode flags */ ++ unsigned char c_line; /* line discipline */ ++ unsigned char c_cc[NCC]; /* control characters */ ++}; + + #endif /* _UAPI_SPARC_TERMIOS_H */ +diff --git a/arch/sparc/kernel/prom_64.c b/arch/sparc/kernel/prom_64.c +index 998aa693d4912..ba82884cb92aa 100644 +--- a/arch/sparc/kernel/prom_64.c ++++ b/arch/sparc/kernel/prom_64.c +@@ -483,7 +483,9 @@ static void *record_one_cpu(struct device_node *dp, int cpuid, int arg) + ncpus_probed++; + #ifdef CONFIG_SMP + set_cpu_present(cpuid, true); +- set_cpu_possible(cpuid, true); ++ ++ if (num_possible_cpus() < nr_cpu_ids) ++ set_cpu_possible(cpuid, true); + #endif + return NULL; + } +diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c +index 6a4797dec34b4..6bbe8e394ad3f 100644 +--- a/arch/sparc/kernel/setup_64.c ++++ b/arch/sparc/kernel/setup_64.c +@@ -671,7 +671,6 @@ void __init setup_arch(char **cmdline_p) + + paging_init(); + init_sparc64_elf_hwcap(); +- smp_fill_in_cpu_possible_map(); + /* + * Once the OF device tree and MDESC have been setup and nr_cpus has + * been parsed, we know the list of possible cpus. 
Therefore we can +diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c +index a0cc9bb41a921..e40c395db2026 100644 +--- a/arch/sparc/kernel/smp_64.c ++++ b/arch/sparc/kernel/smp_64.c +@@ -1216,20 +1216,6 @@ void __init smp_setup_processor_id(void) + xcall_deliver_impl = hypervisor_xcall_deliver; + } + +-void __init smp_fill_in_cpu_possible_map(void) +-{ +- int possible_cpus = num_possible_cpus(); +- int i; +- +- if (possible_cpus > nr_cpu_ids) +- possible_cpus = nr_cpu_ids; +- +- for (i = 0; i < possible_cpus; i++) +- set_cpu_possible(i, true); +- for (; i < NR_CPUS; i++) +- set_cpu_possible(i, false); +-} +- + void smp_fill_in_sib_core_maps(void) + { + unsigned int i; +diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c +index b44d79d778c71..ef69127d7e5e8 100644 +--- a/arch/sparc/mm/tlb.c ++++ b/arch/sparc/mm/tlb.c +@@ -249,6 +249,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, + { + pmd_t old, entry; + ++ VM_WARN_ON_ONCE(!pmd_present(*pmdp)); + entry = __pmd(pmd_val(*pmdp) & ~_PAGE_VALID); + old = pmdp_establish(vma, address, pmdp, entry); + flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE); +diff --git a/arch/x86/kernel/cpu/topology_amd.c b/arch/x86/kernel/cpu/topology_amd.c +index ce2d507c3b076..5ee6373d4d926 100644 +--- a/arch/x86/kernel/cpu/topology_amd.c ++++ b/arch/x86/kernel/cpu/topology_amd.c +@@ -84,9 +84,9 @@ static bool parse_8000_001e(struct topo_scan *tscan, bool has_0xb) + + /* + * If leaf 0xb is available, then the domain shifts are set +- * already and nothing to do here. ++ * already and nothing to do here. Only valid for family >= 0x17. + */ +- if (!has_0xb) { ++ if (!has_0xb && tscan->c->x86 >= 0x17) { + /* + * Leaf 0x80000008 set the CORE domain shift already. + * Update the SMT domain, but do not propagate it. +diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c +index 9aaf83c8d57df..308416b50b036 100644 +--- a/arch/x86/kvm/svm/svm.c ++++ b/arch/x86/kvm/svm/svm.c +@@ -3843,16 +3843,27 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu) + struct vcpu_svm *svm = to_svm(vcpu); + + /* +- * KVM should never request an NMI window when vNMI is enabled, as KVM +- * allows at most one to-be-injected NMI and one pending NMI, i.e. if +- * two NMIs arrive simultaneously, KVM will inject one and set +- * V_NMI_PENDING for the other. WARN, but continue with the standard +- * single-step approach to try and salvage the pending NMI. ++ * If NMIs are outright masked, i.e. the vCPU is already handling an ++ * NMI, and KVM has not yet intercepted an IRET, then there is nothing ++ * more to do at this time as KVM has already enabled IRET intercepts. ++ * If KVM has already intercepted IRET, then single-step over the IRET, ++ * as NMIs aren't architecturally unmasked until the IRET completes. ++ * ++ * If vNMI is enabled, KVM should never request an NMI window if NMIs ++ * are masked, as KVM allows at most one to-be-injected NMI and one ++ * pending NMI. If two NMIs arrive simultaneously, KVM will inject one ++ * NMI and set V_NMI_PENDING for the other, but if and only if NMIs are ++ * unmasked. KVM _will_ request an NMI window in some situations, e.g. ++ * if the vCPU is in an STI shadow or if GIF=0, KVM can't immediately ++ * inject the NMI. In those situations, KVM needs to single-step over ++ * the STI shadow or intercept STGI. 
+ */ +- WARN_ON_ONCE(is_vnmi_enabled(svm)); ++ if (svm_get_nmi_mask(vcpu)) { ++ WARN_ON_ONCE(is_vnmi_enabled(svm)); + +- if (svm_get_nmi_mask(vcpu) && !svm->awaiting_iret_completion) +- return; /* IRET will cause a vm exit */ ++ if (!svm->awaiting_iret_completion) ++ return; /* IRET will cause a vm exit */ ++ } + + /* + * SEV-ES guests are responsible for signaling when a vCPU is ready to +diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c +index d007591b80597..103cbccf1d7dd 100644 +--- a/arch/x86/mm/pgtable.c ++++ b/arch/x86/mm/pgtable.c +@@ -631,6 +631,8 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma, + pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address, + pmd_t *pmdp) + { ++ VM_WARN_ON_ONCE(!pmd_present(*pmdp)); ++ + /* + * No flush is necessary. Once an invalid PTE is established, the PTE's + * access and dirty bits cannot be updated. +diff --git a/crypto/ecdsa.c b/crypto/ecdsa.c +index fbd76498aba83..3f9ec273a121f 100644 +--- a/crypto/ecdsa.c ++++ b/crypto/ecdsa.c +@@ -373,4 +373,7 @@ module_exit(ecdsa_exit); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Stefan Berger "); + MODULE_DESCRIPTION("ECDSA generic algorithm"); ++MODULE_ALIAS_CRYPTO("ecdsa-nist-p192"); ++MODULE_ALIAS_CRYPTO("ecdsa-nist-p256"); ++MODULE_ALIAS_CRYPTO("ecdsa-nist-p384"); + MODULE_ALIAS_CRYPTO("ecdsa-generic"); +diff --git a/crypto/ecrdsa.c b/crypto/ecrdsa.c +index f3c6b5e15e75b..3811f3805b5d8 100644 +--- a/crypto/ecrdsa.c ++++ b/crypto/ecrdsa.c +@@ -294,4 +294,5 @@ module_exit(ecrdsa_mod_fini); + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Vitaly Chikunov "); + MODULE_DESCRIPTION("EC-RDSA generic algorithm"); ++MODULE_ALIAS_CRYPTO("ecrdsa"); + MODULE_ALIAS_CRYPTO("ecrdsa-generic"); +diff --git a/drivers/acpi/apei/einj-core.c b/drivers/acpi/apei/einj-core.c +index 01faca3a238a3..bb9f8475ce594 100644 +--- a/drivers/acpi/apei/einj-core.c ++++ b/drivers/acpi/apei/einj-core.c +@@ -903,7 +903,7 @@ static void __exit einj_exit(void) + if (einj_initialized) + platform_driver_unregister(&einj_driver); + +- platform_device_del(einj_dev); ++ platform_device_unregister(einj_dev); + } + + module_init(einj_init); +diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c +index 59423fe9d0f29..6cc8572759a3d 100644 +--- a/drivers/acpi/resource.c ++++ b/drivers/acpi/resource.c +@@ -630,6 +630,18 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = { + DMI_MATCH(DMI_BOARD_NAME, "X565"), + }, + }, ++ { ++ /* TongFang GXxHRXx/TUXEDO InfinityBook Pro Gen9 AMD */ ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "GXxHRXx"), ++ }, ++ }, ++ { ++ /* TongFang GMxHGxx/TUXEDO Stellaris Slim Gen1 AMD */ ++ .matches = { ++ DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"), ++ }, ++ }, + { } + }; + +diff --git a/drivers/ata/pata_legacy.c b/drivers/ata/pata_legacy.c +index 448a511cbc179..e7ac142c2423d 100644 +--- a/drivers/ata/pata_legacy.c ++++ b/drivers/ata/pata_legacy.c +@@ -173,8 +173,6 @@ static int legacy_port[NR_HOST] = { 0x1f0, 0x170, 0x1e8, 0x168, 0x1e0, 0x160 }; + static struct legacy_probe probe_list[NR_HOST]; + static struct legacy_data legacy_data[NR_HOST]; + static struct ata_host *legacy_host[NR_HOST]; +-static int nr_legacy_host; +- + + /** + * legacy_probe_add - Add interface to probe list +@@ -1276,9 +1274,11 @@ static __exit void legacy_exit(void) + { + int i; + +- for (i = 0; i < nr_legacy_host; i++) { ++ for (i = 0; i < NR_HOST; i++) { + struct legacy_data *ld = &legacy_data[i]; +- ata_host_detach(legacy_host[i]); ++ ++ if (legacy_host[i]) ++ ata_host_detach(legacy_host[i]); + 
platform_device_unregister(ld->platform_dev); + } + } +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c +index 714070ebb6e7a..0c20fbc089dae 100644 +--- a/drivers/char/tpm/tpm_tis_core.c ++++ b/drivers/char/tpm/tpm_tis_core.c +@@ -1020,7 +1020,8 @@ void tpm_tis_remove(struct tpm_chip *chip) + interrupt = 0; + + tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt); +- flush_work(&priv->free_irq_work); ++ if (priv->free_irq_work.func) ++ flush_work(&priv->free_irq_work); + + tpm_tis_clkrun_enable(chip, false); + +diff --git a/drivers/clk/bcm/clk-bcm2711-dvp.c b/drivers/clk/bcm/clk-bcm2711-dvp.c +index e4fbbf3c40fe2..3cb235df9d379 100644 +--- a/drivers/clk/bcm/clk-bcm2711-dvp.c ++++ b/drivers/clk/bcm/clk-bcm2711-dvp.c +@@ -56,6 +56,8 @@ static int clk_dvp_probe(struct platform_device *pdev) + if (ret) + return ret; + ++ data->num = NR_CLOCKS; ++ + data->hws[0] = clk_hw_register_gate_parent_data(&pdev->dev, + "hdmi0-108MHz", + &clk_dvp_parent, 0, +@@ -76,7 +78,6 @@ static int clk_dvp_probe(struct platform_device *pdev) + goto unregister_clk0; + } + +- data->num = NR_CLOCKS; + ret = of_clk_add_hw_provider(pdev->dev.of_node, of_clk_hw_onecell_get, + data); + if (ret) +diff --git a/drivers/clk/bcm/clk-raspberrypi.c b/drivers/clk/bcm/clk-raspberrypi.c +index 829406dc44a20..4d411408e4afe 100644 +--- a/drivers/clk/bcm/clk-raspberrypi.c ++++ b/drivers/clk/bcm/clk-raspberrypi.c +@@ -371,8 +371,8 @@ static int raspberrypi_discover_clocks(struct raspberrypi_clk *rpi, + if (IS_ERR(hw)) + return PTR_ERR(hw); + +- data->hws[clks->id] = hw; + data->num = clks->id + 1; ++ data->hws[clks->id] = hw; + } + + clks++; +diff --git a/drivers/clk/qcom/apss-ipq-pll.c b/drivers/clk/qcom/apss-ipq-pll.c +index 5e3da5558f4e0..d7ab5bd5d4b41 100644 +--- a/drivers/clk/qcom/apss-ipq-pll.c ++++ b/drivers/clk/qcom/apss-ipq-pll.c +@@ -55,6 +55,29 @@ static struct clk_alpha_pll ipq_pll_huayra = { + }, + }; + ++static struct clk_alpha_pll ipq_pll_stromer = { ++ .offset = 0x0, ++ /* ++ * Reuse CLK_ALPHA_PLL_TYPE_STROMER_PLUS register offsets. ++ * Although this is a bit confusing, but the offset values ++ * are correct nevertheless. 
++ */ ++ .regs = ipq_pll_offsets[CLK_ALPHA_PLL_TYPE_STROMER_PLUS], ++ .flags = SUPPORTS_DYNAMIC_UPDATE, ++ .clkr = { ++ .enable_reg = 0x0, ++ .enable_mask = BIT(0), ++ .hw.init = &(const struct clk_init_data) { ++ .name = "a53pll", ++ .parent_data = &(const struct clk_parent_data) { ++ .fw_name = "xo", ++ }, ++ .num_parents = 1, ++ .ops = &clk_alpha_pll_stromer_ops, ++ }, ++ }, ++}; ++ + static struct clk_alpha_pll ipq_pll_stromer_plus = { + .offset = 0x0, + .regs = ipq_pll_offsets[CLK_ALPHA_PLL_TYPE_STROMER_PLUS], +@@ -145,8 +168,8 @@ struct apss_pll_data { + }; + + static const struct apss_pll_data ipq5018_pll_data = { +- .pll_type = CLK_ALPHA_PLL_TYPE_STROMER_PLUS, +- .pll = &ipq_pll_stromer_plus, ++ .pll_type = CLK_ALPHA_PLL_TYPE_STROMER, ++ .pll = &ipq_pll_stromer, + .pll_config = &ipq5018_pll_config, + }; + +@@ -204,7 +227,8 @@ static int apss_ipq_pll_probe(struct platform_device *pdev) + + if (data->pll_type == CLK_ALPHA_PLL_TYPE_HUAYRA) + clk_alpha_pll_configure(data->pll, regmap, data->pll_config); +- else if (data->pll_type == CLK_ALPHA_PLL_TYPE_STROMER_PLUS) ++ else if (data->pll_type == CLK_ALPHA_PLL_TYPE_STROMER || ++ data->pll_type == CLK_ALPHA_PLL_TYPE_STROMER_PLUS) + clk_stromer_pll_configure(data->pll, regmap, data->pll_config); + + ret = devm_clk_register_regmap(dev, &data->pll->clkr); +diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c +index 734a73f322b3a..be18ff983d35c 100644 +--- a/drivers/clk/qcom/clk-alpha-pll.c ++++ b/drivers/clk/qcom/clk-alpha-pll.c +@@ -2489,6 +2489,8 @@ static int clk_alpha_pll_stromer_set_rate(struct clk_hw *hw, unsigned long rate, + rate = alpha_pll_round_rate(rate, prate, &l, &a, ALPHA_REG_BITWIDTH); + + regmap_write(pll->clkr.regmap, PLL_L_VAL(pll), l); ++ ++ a <<= ALPHA_REG_BITWIDTH - ALPHA_BITWIDTH; + regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), a); + regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL_U(pll), + a >> ALPHA_BITWIDTH); +diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c +index 28166df81cf8d..e263db0385ab7 100644 +--- a/drivers/cpufreq/amd-pstate.c ++++ b/drivers/cpufreq/amd-pstate.c +@@ -705,7 +705,7 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state) + if (state) + policy->cpuinfo.max_freq = cpudata->max_freq; + else +- policy->cpuinfo.max_freq = cpudata->nominal_freq; ++ policy->cpuinfo.max_freq = cpudata->nominal_freq * 1000; + + policy->max = policy->cpuinfo.max_freq; + +diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c +index 9da2278bd5b7d..04260f61d0429 100644 +--- a/drivers/crypto/intel/qat/qat_common/adf_aer.c ++++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c +@@ -130,8 +130,7 @@ static void adf_device_reset_worker(struct work_struct *work) + if (adf_dev_restart(accel_dev)) { + /* The device hanged and we can't restart it so stop here */ + dev_err(&GET_DEV(accel_dev), "Restart device failed\n"); +- if (reset_data->mode == ADF_DEV_RESET_ASYNC || +- completion_done(&reset_data->compl)) ++ if (reset_data->mode == ADF_DEV_RESET_ASYNC) + kfree(reset_data); + WARN(1, "QAT: device restart failed. Device is unusable\n"); + return; +@@ -147,16 +146,8 @@ static void adf_device_reset_worker(struct work_struct *work) + adf_dev_restarted_notify(accel_dev); + clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status); + +- /* +- * The dev is back alive. 
Notify the caller if in sync mode +- * +- * If device restart will take a more time than expected, +- * the schedule_reset() function can timeout and exit. This can be +- * detected by calling the completion_done() function. In this case +- * the reset_data structure needs to be freed here. +- */ +- if (reset_data->mode == ADF_DEV_RESET_ASYNC || +- completion_done(&reset_data->compl)) ++ /* The dev is back alive. Notify the caller if in sync mode */ ++ if (reset_data->mode == ADF_DEV_RESET_ASYNC) + kfree(reset_data); + else + complete(&reset_data->compl); +@@ -191,10 +182,10 @@ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev, + if (!timeout) { + dev_err(&GET_DEV(accel_dev), + "Reset device timeout expired\n"); ++ cancel_work_sync(&reset_data->reset_work); + ret = -EFAULT; +- } else { +- kfree(reset_data); + } ++ kfree(reset_data); + return ret; + } + return 0; +diff --git a/drivers/crypto/starfive/jh7110-rsa.c b/drivers/crypto/starfive/jh7110-rsa.c +index cf8bda7f0855d..7ec14b5b84905 100644 +--- a/drivers/crypto/starfive/jh7110-rsa.c ++++ b/drivers/crypto/starfive/jh7110-rsa.c +@@ -273,7 +273,6 @@ static int starfive_rsa_enc_core(struct starfive_cryp_ctx *ctx, int enc) + + err_rsa_crypt: + writel(STARFIVE_RSA_RESET, cryp->base + STARFIVE_PKA_CACR_OFFSET); +- kfree(rctx->rsa_data); + return ret; + } + +diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c +index 1f3520d768613..a17f3c0cdfa60 100644 +--- a/drivers/edac/amd64_edac.c ++++ b/drivers/edac/amd64_edac.c +@@ -81,7 +81,7 @@ int __amd64_read_pci_cfg_dword(struct pci_dev *pdev, int offset, + amd64_warn("%s: error reading F%dx%03x.\n", + func, PCI_FUNC(pdev->devfn), offset); + +- return err; ++ return pcibios_err_to_errno(err); + } + + int __amd64_write_pci_cfg_dword(struct pci_dev *pdev, int offset, +@@ -94,7 +94,7 @@ int __amd64_write_pci_cfg_dword(struct pci_dev *pdev, int offset, + amd64_warn("%s: error writing to F%dx%03x.\n", + func, PCI_FUNC(pdev->devfn), offset); + +- return err; ++ return pcibios_err_to_errno(err); + } + + /* +@@ -1025,8 +1025,10 @@ static int gpu_get_node_map(struct amd64_pvt *pvt) + } + + ret = pci_read_config_dword(pdev, REG_LOCAL_NODE_TYPE_MAP, &tmp); +- if (ret) ++ if (ret) { ++ ret = pcibios_err_to_errno(ret); + goto out; ++ } + + gpu_node_map.node_count = FIELD_GET(LNTM_NODE_COUNT, tmp); + gpu_node_map.base_node_id = FIELD_GET(LNTM_BASE_NODE_ID, tmp); +diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c +index cdd8480e73687..dbe9fe5f2ca6c 100644 +--- a/drivers/edac/igen6_edac.c ++++ b/drivers/edac/igen6_edac.c +@@ -800,7 +800,7 @@ static int errcmd_enable_error_reporting(bool enable) + + rc = pci_read_config_word(imc->pdev, ERRCMD_OFFSET, &errcmd); + if (rc) +- return rc; ++ return pcibios_err_to_errno(rc); + + if (enable) + errcmd |= ERRCMD_CE | ERRSTS_UE; +@@ -809,7 +809,7 @@ static int errcmd_enable_error_reporting(bool enable) + + rc = pci_write_config_word(imc->pdev, ERRCMD_OFFSET, errcmd); + if (rc) +- return rc; ++ return pcibios_err_to_errno(rc); + + return 0; + } +diff --git a/drivers/firmware/efi/libstub/loongarch.c b/drivers/firmware/efi/libstub/loongarch.c +index 684c9354637c6..d0ef93551c44f 100644 +--- a/drivers/firmware/efi/libstub/loongarch.c ++++ b/drivers/firmware/efi/libstub/loongarch.c +@@ -41,7 +41,7 @@ static efi_status_t exit_boot_func(struct efi_boot_memmap *map, void *priv) + unsigned long __weak kernel_entry_address(unsigned long kernel_addr, + efi_loaded_image_t *image) + { +- return *(unsigned long *)(kernel_addr + 8) - 
VMLINUX_LOAD_ADDRESS + kernel_addr; ++ return *(unsigned long *)(kernel_addr + 8) - PHYSADDR(VMLINUX_LOAD_ADDRESS) + kernel_addr; + } + + efi_status_t efi_boot_kernel(void *handle, efi_loaded_image_t *image, +diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c +index 29c24578ad2bf..2ad85052b3f60 100644 +--- a/drivers/firmware/qcom/qcom_scm.c ++++ b/drivers/firmware/qcom/qcom_scm.c +@@ -569,13 +569,14 @@ int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size, + + ret = qcom_scm_bw_enable(); + if (ret) +- return ret; ++ goto disable_clk; + + desc.args[1] = mdata_phys; + + ret = qcom_scm_call(__scm->dev, &desc, &res); +- + qcom_scm_bw_disable(); ++ ++disable_clk: + qcom_scm_clk_disable(); + + out: +@@ -637,10 +638,12 @@ int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, phys_addr_t size) + + ret = qcom_scm_bw_enable(); + if (ret) +- return ret; ++ goto disable_clk; + + ret = qcom_scm_call(__scm->dev, &desc, &res); + qcom_scm_bw_disable(); ++ ++disable_clk: + qcom_scm_clk_disable(); + + return ret ? : res.result[0]; +@@ -672,10 +675,12 @@ int qcom_scm_pas_auth_and_reset(u32 peripheral) + + ret = qcom_scm_bw_enable(); + if (ret) +- return ret; ++ goto disable_clk; + + ret = qcom_scm_call(__scm->dev, &desc, &res); + qcom_scm_bw_disable(); ++ ++disable_clk: + qcom_scm_clk_disable(); + + return ret ? : res.result[0]; +@@ -706,11 +711,12 @@ int qcom_scm_pas_shutdown(u32 peripheral) + + ret = qcom_scm_bw_enable(); + if (ret) +- return ret; ++ goto disable_clk; + + ret = qcom_scm_call(__scm->dev, &desc, &res); +- + qcom_scm_bw_disable(); ++ ++disable_clk: + qcom_scm_clk_disable(); + + return ret ? : res.result[0]; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +index e4d4e55c08ad5..0535b07987d9d 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +@@ -1188,7 +1188,8 @@ static int reserve_bo_and_cond_vms(struct kgd_mem *mem, + int ret; + + ctx->sync = &mem->sync; +- drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0); ++ drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT | ++ DRM_EXEC_IGNORE_DUPLICATES, 0); + drm_exec_until_all_locked(&ctx->exec) { + ctx->n_vms = 0; + list_for_each_entry(entry, &mem->attachments, list) { +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c +index 6857c586ded71..c8c23dbb90916 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c +@@ -211,6 +211,7 @@ union igp_info { + struct atom_integrated_system_info_v1_11 v11; + struct atom_integrated_system_info_v1_12 v12; + struct atom_integrated_system_info_v2_1 v21; ++ struct atom_integrated_system_info_v2_3 v23; + }; + + union umc_info { +@@ -359,6 +360,20 @@ amdgpu_atomfirmware_get_vram_info(struct amdgpu_device *adev, + if (vram_type) + *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type); + break; ++ case 3: ++ mem_channel_number = igp_info->v23.umachannelnumber; ++ if (!mem_channel_number) ++ mem_channel_number = 1; ++ mem_type = igp_info->v23.memorytype; ++ if (mem_type == LpDdr5MemType) ++ mem_channel_width = 32; ++ else ++ mem_channel_width = 64; ++ if (vram_width) ++ *vram_width = mem_channel_number * mem_channel_width; ++ if (vram_type) ++ *vram_type = convert_atom_mem_type_to_vram_type(adev, mem_type); ++ break; + default: + return -EINVAL; + } +diff --git 
a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +index 43775cb67ff5f..8f231766756a2 100644 +--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +@@ -2021,6 +2021,9 @@ static int sdma_v4_0_process_trap_irq(struct amdgpu_device *adev, + + DRM_DEBUG("IH: SDMA trap\n"); + instance = sdma_v4_0_irq_id_to_seq(entry->client_id); ++ if (instance < 0) ++ return instance; ++ + switch (entry->ring_id) { + case 0: + amdgpu_fence_process(&adev->sdma.instance[instance].ring); +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c +index 719d6d365e150..ff01610fbce3b 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c +@@ -408,15 +408,8 @@ struct kfd_dev *kgd2kfd_probe(struct amdgpu_device *adev, bool vf) + f2g = &gfx_v11_kfd2kgd; + break; + case IP_VERSION(11, 0, 3): +- if ((adev->pdev->device == 0x7460 && +- adev->pdev->revision == 0x00) || +- (adev->pdev->device == 0x7461 && +- adev->pdev->revision == 0x00)) +- /* Note: Compiler version is 11.0.5 while HW version is 11.0.3 */ +- gfx_target_version = 110005; +- else +- /* Note: Compiler version is 11.0.1 while HW version is 11.0.3 */ +- gfx_target_version = 110001; ++ /* Note: Compiler version is 11.0.1 while HW version is 11.0.3 */ ++ gfx_target_version = 110001; + f2g = &gfx_v11_kfd2kgd; + break; + case IP_VERSION(11, 5, 0): +diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h b/drivers/gpu/drm/amd/include/atomfirmware.h +index af3eebb4c9bcb..1acb2d2c5597b 100644 +--- a/drivers/gpu/drm/amd/include/atomfirmware.h ++++ b/drivers/gpu/drm/amd/include/atomfirmware.h +@@ -1657,6 +1657,49 @@ struct atom_integrated_system_info_v2_2 + uint32_t reserved4[189]; + }; + ++struct uma_carveout_option { ++ char optionName[29]; //max length of string is 28chars + '\0'. Current design is for "minimum", "Medium", "High". This makes entire struct size 64bits ++ uint8_t memoryCarvedGb; //memory carved out with setting ++ uint8_t memoryRemainingGb; //memory remaining on system ++ union { ++ struct _flags { ++ uint8_t Auto : 1; ++ uint8_t Custom : 1; ++ uint8_t Reserved : 6; ++ } flags; ++ uint8_t all8; ++ } uma_carveout_option_flags; ++}; ++ ++struct atom_integrated_system_info_v2_3 { ++ struct atom_common_table_header table_header; ++ uint32_t vbios_misc; // enum of atom_system_vbiosmisc_def ++ uint32_t gpucapinfo; // enum of atom_system_gpucapinf_def ++ uint32_t system_config; ++ uint32_t cpucapinfo; ++ uint16_t gpuclk_ss_percentage; // unit of 0.001%, 1000 mean 1% ++ uint16_t gpuclk_ss_type; ++ uint16_t dpphy_override; // bit vector, enum of atom_sysinfo_dpphy_override_def ++ uint8_t memorytype; // enum of atom_dmi_t17_mem_type_def, APU memory type indication. 
++ uint8_t umachannelnumber; // number of memory channels ++ uint8_t htc_hyst_limit; ++ uint8_t htc_tmp_limit; ++ uint8_t reserved1; // dp_ss_control ++ uint8_t gpu_package_id; ++ struct edp_info_table edp1_info; ++ struct edp_info_table edp2_info; ++ uint32_t reserved2[8]; ++ struct atom_external_display_connection_info extdispconninfo; ++ uint8_t UMACarveoutVersion; ++ uint8_t UMACarveoutIndexMax; ++ uint8_t UMACarveoutTypeDefault; ++ uint8_t UMACarveoutIndexDefault; ++ uint8_t UMACarveoutType; //Auto or Custom ++ uint8_t UMACarveoutIndex; ++ struct uma_carveout_option UMASizeControlOption[20]; ++ uint8_t reserved3[110]; ++}; ++ + // system_config + enum atom_system_vbiosmisc_def{ + INTEGRATED_SYSTEM_INFO__GET_EDID_CALLBACK_FUNC_SUPPORT = 0x01, +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c +index 4abfcd32747d3..2fb6c9cb0f143 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c +@@ -226,15 +226,17 @@ static int smu_v13_0_4_system_features_control(struct smu_context *smu, bool en) + struct amdgpu_device *adev = smu->adev; + int ret = 0; + +- if (!en && adev->in_s4) { +- /* Adds a GFX reset as workaround just before sending the +- * MP1_UNLOAD message to prevent GC/RLC/PMFW from entering +- * an invalid state. +- */ +- ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, +- SMU_RESET_MODE_2, NULL); +- if (ret) +- return ret; ++ if (!en && !adev->in_s0ix) { ++ if (adev->in_s4) { ++ /* Adds a GFX reset as workaround just before sending the ++ * MP1_UNLOAD message to prevent GC/RLC/PMFW from entering ++ * an invalid state. ++ */ ++ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, ++ SMU_RESET_MODE_2, NULL); ++ if (ret) ++ return ret; ++ } + + ret = smu_cmn_send_smc_msg(smu, SMU_MSG_PrepareMp1ForUnload, NULL); + } +diff --git a/drivers/gpu/drm/drm_fbdev_generic.c b/drivers/gpu/drm/drm_fbdev_generic.c +index d647d89764cb9..b4659cd6285ab 100644 +--- a/drivers/gpu/drm/drm_fbdev_generic.c ++++ b/drivers/gpu/drm/drm_fbdev_generic.c +@@ -113,7 +113,6 @@ static int drm_fbdev_generic_helper_fb_probe(struct drm_fb_helper *fb_helper, + /* screen */ + info->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST; + info->screen_buffer = screen_buffer; +- info->fix.smem_start = page_to_phys(vmalloc_to_page(info->screen_buffer)); + info->fix.smem_len = screen_size; + + /* deferred I/O */ +diff --git a/drivers/gpu/drm/i915/i915_hwmon.c b/drivers/gpu/drm/i915/i915_hwmon.c +index b758fd110c204..c0662a022f59c 100644 +--- a/drivers/gpu/drm/i915/i915_hwmon.c ++++ b/drivers/gpu/drm/i915/i915_hwmon.c +@@ -793,7 +793,7 @@ void i915_hwmon_register(struct drm_i915_private *i915) + if (!IS_DGFX(i915)) + return; + +- hwmon = devm_kzalloc(dev, sizeof(*hwmon), GFP_KERNEL); ++ hwmon = kzalloc(sizeof(*hwmon), GFP_KERNEL); + if (!hwmon) + return; + +@@ -819,14 +819,12 @@ void i915_hwmon_register(struct drm_i915_private *i915) + hwm_get_preregistration_info(i915); + + /* hwmon_dev points to device hwmon */ +- hwmon_dev = devm_hwmon_device_register_with_info(dev, ddat->name, +- ddat, +- &hwm_chip_info, +- hwm_groups); +- if (IS_ERR(hwmon_dev)) { +- i915->hwmon = NULL; +- return; +- } ++ hwmon_dev = hwmon_device_register_with_info(dev, ddat->name, ++ ddat, ++ &hwm_chip_info, ++ hwm_groups); ++ if (IS_ERR(hwmon_dev)) ++ goto err; + + ddat->hwmon_dev = hwmon_dev; + +@@ -839,16 +837,36 @@ void i915_hwmon_register(struct drm_i915_private *i915) + if 
(!hwm_gt_is_visible(ddat_gt, hwmon_energy, hwmon_energy_input, 0)) + continue; + +- hwmon_dev = devm_hwmon_device_register_with_info(dev, ddat_gt->name, +- ddat_gt, +- &hwm_gt_chip_info, +- NULL); ++ hwmon_dev = hwmon_device_register_with_info(dev, ddat_gt->name, ++ ddat_gt, ++ &hwm_gt_chip_info, ++ NULL); + if (!IS_ERR(hwmon_dev)) + ddat_gt->hwmon_dev = hwmon_dev; + } ++ return; ++err: ++ i915_hwmon_unregister(i915); + } + + void i915_hwmon_unregister(struct drm_i915_private *i915) + { +- fetch_and_zero(&i915->hwmon); ++ struct i915_hwmon *hwmon = i915->hwmon; ++ struct intel_gt *gt; ++ int i; ++ ++ if (!hwmon) ++ return; ++ ++ for_each_gt(gt, i915, i) ++ if (hwmon->ddat_gt[i].hwmon_dev) ++ hwmon_device_unregister(hwmon->ddat_gt[i].hwmon_dev); ++ ++ if (hwmon->ddat.hwmon_dev) ++ hwmon_device_unregister(hwmon->ddat.hwmon_dev); ++ ++ mutex_destroy(&hwmon->hwmon_lock); ++ ++ kfree(i915->hwmon); ++ i915->hwmon = NULL; + } +diff --git a/drivers/gpu/drm/xe/xe_bb.c b/drivers/gpu/drm/xe/xe_bb.c +index 7c124475c4289..a35e0781b7b95 100644 +--- a/drivers/gpu/drm/xe/xe_bb.c ++++ b/drivers/gpu/drm/xe/xe_bb.c +@@ -96,7 +96,8 @@ struct xe_sched_job *xe_bb_create_job(struct xe_exec_queue *q, + { + u64 addr = xe_sa_bo_gpu_addr(bb->bo); + +- xe_gt_assert(q->gt, !(q->vm && q->vm->flags & XE_VM_FLAG_MIGRATION)); ++ xe_gt_assert(q->gt, !xe_sched_job_is_migration(q)); ++ xe_gt_assert(q->gt, q->width == 1); + return __xe_bb_create_job(q, bb, &addr); + } + +diff --git a/drivers/hid/i2c-hid/i2c-hid-of-elan.c b/drivers/hid/i2c-hid/i2c-hid-of-elan.c +index 5b91fb106cfc3..091e37933225a 100644 +--- a/drivers/hid/i2c-hid/i2c-hid-of-elan.c ++++ b/drivers/hid/i2c-hid/i2c-hid-of-elan.c +@@ -31,6 +31,7 @@ struct i2c_hid_of_elan { + struct regulator *vcc33; + struct regulator *vccio; + struct gpio_desc *reset_gpio; ++ bool no_reset_on_power_off; + const struct elan_i2c_hid_chip_data *chip_data; + }; + +@@ -40,17 +41,17 @@ static int elan_i2c_hid_power_up(struct i2chid_ops *ops) + container_of(ops, struct i2c_hid_of_elan, ops); + int ret; + ++ gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); ++ + if (ihid_elan->vcc33) { + ret = regulator_enable(ihid_elan->vcc33); + if (ret) +- return ret; ++ goto err_deassert_reset; + } + + ret = regulator_enable(ihid_elan->vccio); +- if (ret) { +- regulator_disable(ihid_elan->vcc33); +- return ret; +- } ++ if (ret) ++ goto err_disable_vcc33; + + if (ihid_elan->chip_data->post_power_delay_ms) + msleep(ihid_elan->chip_data->post_power_delay_ms); +@@ -60,6 +61,15 @@ static int elan_i2c_hid_power_up(struct i2chid_ops *ops) + msleep(ihid_elan->chip_data->post_gpio_reset_on_delay_ms); + + return 0; ++ ++err_disable_vcc33: ++ if (ihid_elan->vcc33) ++ regulator_disable(ihid_elan->vcc33); ++err_deassert_reset: ++ if (ihid_elan->no_reset_on_power_off) ++ gpiod_set_value_cansleep(ihid_elan->reset_gpio, 0); ++ ++ return ret; + } + + static void elan_i2c_hid_power_down(struct i2chid_ops *ops) +@@ -67,7 +77,14 @@ static void elan_i2c_hid_power_down(struct i2chid_ops *ops) + struct i2c_hid_of_elan *ihid_elan = + container_of(ops, struct i2c_hid_of_elan, ops); + +- gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); ++ /* ++ * Do not assert reset when the hardware allows for it to remain ++ * deasserted regardless of the state of the (shared) power supply to ++ * avoid wasting power when the supply is left on. 
++ */ ++ if (!ihid_elan->no_reset_on_power_off) ++ gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); ++ + if (ihid_elan->chip_data->post_gpio_reset_off_delay_ms) + msleep(ihid_elan->chip_data->post_gpio_reset_off_delay_ms); + +@@ -79,6 +96,7 @@ static void elan_i2c_hid_power_down(struct i2chid_ops *ops) + static int i2c_hid_of_elan_probe(struct i2c_client *client) + { + struct i2c_hid_of_elan *ihid_elan; ++ int ret; + + ihid_elan = devm_kzalloc(&client->dev, sizeof(*ihid_elan), GFP_KERNEL); + if (!ihid_elan) +@@ -93,21 +111,38 @@ static int i2c_hid_of_elan_probe(struct i2c_client *client) + if (IS_ERR(ihid_elan->reset_gpio)) + return PTR_ERR(ihid_elan->reset_gpio); + ++ ihid_elan->no_reset_on_power_off = of_property_read_bool(client->dev.of_node, ++ "no-reset-on-power-off"); ++ + ihid_elan->vccio = devm_regulator_get(&client->dev, "vccio"); +- if (IS_ERR(ihid_elan->vccio)) +- return PTR_ERR(ihid_elan->vccio); ++ if (IS_ERR(ihid_elan->vccio)) { ++ ret = PTR_ERR(ihid_elan->vccio); ++ goto err_deassert_reset; ++ } + + ihid_elan->chip_data = device_get_match_data(&client->dev); + + if (ihid_elan->chip_data->main_supply_name) { + ihid_elan->vcc33 = devm_regulator_get(&client->dev, + ihid_elan->chip_data->main_supply_name); +- if (IS_ERR(ihid_elan->vcc33)) +- return PTR_ERR(ihid_elan->vcc33); ++ if (IS_ERR(ihid_elan->vcc33)) { ++ ret = PTR_ERR(ihid_elan->vcc33); ++ goto err_deassert_reset; ++ } + } + +- return i2c_hid_core_probe(client, &ihid_elan->ops, +- ihid_elan->chip_data->hid_descriptor_address, 0); ++ ret = i2c_hid_core_probe(client, &ihid_elan->ops, ++ ihid_elan->chip_data->hid_descriptor_address, 0); ++ if (ret) ++ goto err_deassert_reset; ++ ++ return 0; ++ ++err_deassert_reset: ++ if (ihid_elan->no_reset_on_power_off) ++ gpiod_set_value_cansleep(ihid_elan->reset_gpio, 0); ++ ++ return ret; + } + + static const struct elan_i2c_hid_chip_data elan_ekth6915_chip_data = { +diff --git a/drivers/hwmon/ltc2992.c b/drivers/hwmon/ltc2992.c +index 001799bc28ed6..b8548105cd67a 100644 +--- a/drivers/hwmon/ltc2992.c ++++ b/drivers/hwmon/ltc2992.c +@@ -876,9 +876,11 @@ static int ltc2992_parse_dt(struct ltc2992_state *st) + + ret = fwnode_property_read_u32(child, "shunt-resistor-micro-ohms", &val); + if (!ret) { +- if (!val) ++ if (!val) { ++ fwnode_handle_put(child); + return dev_err_probe(&st->client->dev, -EINVAL, + "shunt resistor value cannot be zero\n"); ++ } + st->r_sense_uohm[addr] = val; + } + } +diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c +index 147d338c191e7..648893f9e4b67 100644 +--- a/drivers/hwtracing/intel_th/pci.c ++++ b/drivers/hwtracing/intel_th/pci.c +@@ -289,6 +289,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = { + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7e24), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, ++ { ++ /* Meteor Lake-S CPU */ ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xae24), ++ .driver_data = (kernel_ulong_t)&intel_th_2x, ++ }, + { + /* Raptor Lake-S */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7a26), +diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c +index d6037a3286690..14ae0cfc325ef 100644 +--- a/drivers/i2c/i2c-core-acpi.c ++++ b/drivers/i2c/i2c-core-acpi.c +@@ -445,6 +445,11 @@ static struct i2c_client *i2c_acpi_find_client_by_adev(struct acpi_device *adev) + return i2c_find_device_by_fwnode(acpi_fwnode_handle(adev)); + } + ++static struct i2c_adapter *i2c_acpi_find_adapter_by_adev(struct acpi_device *adev) ++{ ++ return i2c_find_adapter_by_fwnode(acpi_fwnode_handle(adev)); ++} ++ + static int 
i2c_acpi_notify(struct notifier_block *nb, unsigned long value, + void *arg) + { +@@ -471,11 +476,17 @@ static int i2c_acpi_notify(struct notifier_block *nb, unsigned long value, + break; + + client = i2c_acpi_find_client_by_adev(adev); +- if (!client) +- break; ++ if (client) { ++ i2c_unregister_device(client); ++ put_device(&client->dev); ++ } ++ ++ adapter = i2c_acpi_find_adapter_by_adev(adev); ++ if (adapter) { ++ acpi_unbind_one(&adapter->dev); ++ put_device(&adapter->dev); ++ } + +- i2c_unregister_device(client); +- put_device(&client->dev); + break; + } + +diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c +index a2298ab460a37..bb299ce02cccb 100644 +--- a/drivers/i3c/master/svc-i3c-master.c ++++ b/drivers/i3c/master/svc-i3c-master.c +@@ -415,6 +415,19 @@ static void svc_i3c_master_ibi_work(struct work_struct *work) + int ret; + + mutex_lock(&master->lock); ++ /* ++ * IBIWON may be set before SVC_I3C_MCTRL_REQUEST_AUTO_IBI, causing ++ * readl_relaxed_poll_timeout() to return immediately. Consequently, ++ * ibitype will be 0 since it was last updated only after the 8th SCL ++ * cycle, leading to missed client IBI handlers. ++ * ++ * A typical scenario is when IBIWON occurs and bus arbitration is lost ++ * at svc_i3c_master_priv_xfers(). ++ * ++ * Clear SVC_I3C_MINT_IBIWON before sending SVC_I3C_MCTRL_REQUEST_AUTO_IBI. ++ */ ++ writel(SVC_I3C_MINT_IBIWON, master->regs + SVC_I3C_MSTATUS); ++ + /* Acknowledge the incoming interrupt with the AUTOIBI mechanism */ + writel(SVC_I3C_MCTRL_REQUEST_AUTO_IBI | + SVC_I3C_MCTRL_IBIRESP_AUTO, +@@ -429,9 +442,6 @@ static void svc_i3c_master_ibi_work(struct work_struct *work) + goto reenable_ibis; + } + +- /* Clear the interrupt status */ +- writel(SVC_I3C_MINT_IBIWON, master->regs + SVC_I3C_MSTATUS); +- + status = readl(master->regs + SVC_I3C_MSTATUS); + ibitype = SVC_I3C_MSTATUS_IBITYPE(status); + ibiaddr = SVC_I3C_MSTATUS_IBIADDR(status); +diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c +index 9e71c44288141..4f3a12383a1e4 100644 +--- a/drivers/irqchip/irq-riscv-intc.c ++++ b/drivers/irqchip/irq-riscv-intc.c +@@ -253,8 +253,9 @@ IRQCHIP_DECLARE(andes, "andestech,cpu-intc", riscv_intc_init); + static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header, + const unsigned long end) + { +- struct fwnode_handle *fn; + struct acpi_madt_rintc *rintc; ++ struct fwnode_handle *fn; ++ int rc; + + rintc = (struct acpi_madt_rintc *)header; + +@@ -273,7 +274,11 @@ static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header, + return -ENOMEM; + } + +- return riscv_intc_init_common(fn, &riscv_intc_chip); ++ rc = riscv_intc_init_common(fn, &riscv_intc_chip); ++ if (rc) ++ irq_domain_free_fwnode(fn); ++ ++ return rc; + } + + IRQCHIP_ACPI_DECLARE(riscv_intc, ACPI_MADT_TYPE_RINTC, NULL, +diff --git a/drivers/md/bcache/bset.c b/drivers/md/bcache/bset.c +index 2bba4d6aaaa28..463eb13bd0b2a 100644 +--- a/drivers/md/bcache/bset.c ++++ b/drivers/md/bcache/bset.c +@@ -54,7 +54,7 @@ void bch_dump_bucket(struct btree_keys *b) + int __bch_count_data(struct btree_keys *b) + { + unsigned int ret = 0; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct bkey *k; + + if (b->ops->is_extents) +@@ -67,7 +67,7 @@ void __bch_check_keys(struct btree_keys *b, const char *fmt, ...) 
+ { + va_list args; + struct bkey *k, *p = NULL; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + const char *err; + + for_each_key(b, k, &iter) { +@@ -879,7 +879,7 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k, + unsigned int status = BTREE_INSERT_STATUS_NO_INSERT; + struct bset *i = bset_tree_last(b)->data; + struct bkey *m, *prev = NULL; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct bkey preceding_key_on_stack = ZERO_KEY; + struct bkey *preceding_key_p = &preceding_key_on_stack; + +@@ -895,9 +895,9 @@ unsigned int bch_btree_insert_key(struct btree_keys *b, struct bkey *k, + else + preceding_key(k, &preceding_key_p); + +- m = bch_btree_iter_init(b, &iter, preceding_key_p); ++ m = bch_btree_iter_stack_init(b, &iter, preceding_key_p); + +- if (b->ops->insert_fixup(b, k, &iter, replace_key)) ++ if (b->ops->insert_fixup(b, k, &iter.iter, replace_key)) + return status; + + status = BTREE_INSERT_STATUS_INSERT; +@@ -1100,33 +1100,33 @@ void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k, + btree_iter_cmp)); + } + +-static struct bkey *__bch_btree_iter_init(struct btree_keys *b, +- struct btree_iter *iter, +- struct bkey *search, +- struct bset_tree *start) ++static struct bkey *__bch_btree_iter_stack_init(struct btree_keys *b, ++ struct btree_iter_stack *iter, ++ struct bkey *search, ++ struct bset_tree *start) + { + struct bkey *ret = NULL; + +- iter->size = ARRAY_SIZE(iter->data); +- iter->used = 0; ++ iter->iter.size = ARRAY_SIZE(iter->stack_data); ++ iter->iter.used = 0; + + #ifdef CONFIG_BCACHE_DEBUG +- iter->b = b; ++ iter->iter.b = b; + #endif + + for (; start <= bset_tree_last(b); start++) { + ret = bch_bset_search(b, start, search); +- bch_btree_iter_push(iter, ret, bset_bkey_last(start->data)); ++ bch_btree_iter_push(&iter->iter, ret, bset_bkey_last(start->data)); + } + + return ret; + } + +-struct bkey *bch_btree_iter_init(struct btree_keys *b, +- struct btree_iter *iter, ++struct bkey *bch_btree_iter_stack_init(struct btree_keys *b, ++ struct btree_iter_stack *iter, + struct bkey *search) + { +- return __bch_btree_iter_init(b, iter, search, b->set); ++ return __bch_btree_iter_stack_init(b, iter, search, b->set); + } + + static inline struct bkey *__bch_btree_iter_next(struct btree_iter *iter, +@@ -1293,10 +1293,10 @@ void bch_btree_sort_partial(struct btree_keys *b, unsigned int start, + struct bset_sort_state *state) + { + size_t order = b->page_order, keys = 0; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + int oldsize = bch_count_data(b); + +- __bch_btree_iter_init(b, &iter, NULL, &b->set[start]); ++ __bch_btree_iter_stack_init(b, &iter, NULL, &b->set[start]); + + if (start) { + unsigned int i; +@@ -1307,7 +1307,7 @@ void bch_btree_sort_partial(struct btree_keys *b, unsigned int start, + order = get_order(__set_bytes(b->set->data, keys)); + } + +- __btree_sort(b, &iter, start, order, false, state); ++ __btree_sort(b, &iter.iter, start, order, false, state); + + EBUG_ON(oldsize >= 0 && bch_count_data(b) != oldsize); + } +@@ -1323,11 +1323,11 @@ void bch_btree_sort_into(struct btree_keys *b, struct btree_keys *new, + struct bset_sort_state *state) + { + uint64_t start_time = local_clock(); +- struct btree_iter iter; ++ struct btree_iter_stack iter; + +- bch_btree_iter_init(b, &iter, NULL); ++ bch_btree_iter_stack_init(b, &iter, NULL); + +- btree_mergesort(b, new->set->data, &iter, false, true); ++ btree_mergesort(b, new->set->data, &iter.iter, false, true); + + 
bch_time_stats_update(&state->time, start_time); + +diff --git a/drivers/md/bcache/bset.h b/drivers/md/bcache/bset.h +index d795c84246b01..011f6062c4c04 100644 +--- a/drivers/md/bcache/bset.h ++++ b/drivers/md/bcache/bset.h +@@ -321,7 +321,14 @@ struct btree_iter { + #endif + struct btree_iter_set { + struct bkey *k, *end; +- } data[MAX_BSETS]; ++ } data[]; ++}; ++ ++/* Fixed-size btree_iter that can be allocated on the stack */ ++ ++struct btree_iter_stack { ++ struct btree_iter iter; ++ struct btree_iter_set stack_data[MAX_BSETS]; + }; + + typedef bool (*ptr_filter_fn)(struct btree_keys *b, const struct bkey *k); +@@ -333,9 +340,9 @@ struct bkey *bch_btree_iter_next_filter(struct btree_iter *iter, + + void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k, + struct bkey *end); +-struct bkey *bch_btree_iter_init(struct btree_keys *b, +- struct btree_iter *iter, +- struct bkey *search); ++struct bkey *bch_btree_iter_stack_init(struct btree_keys *b, ++ struct btree_iter_stack *iter, ++ struct bkey *search); + + struct bkey *__bch_bset_search(struct btree_keys *b, struct bset_tree *t, + const struct bkey *search); +@@ -350,13 +357,14 @@ static inline struct bkey *bch_bset_search(struct btree_keys *b, + return search ? __bch_bset_search(b, t, search) : t->data->start; + } + +-#define for_each_key_filter(b, k, iter, filter) \ +- for (bch_btree_iter_init((b), (iter), NULL); \ +- ((k) = bch_btree_iter_next_filter((iter), (b), filter));) ++#define for_each_key_filter(b, k, stack_iter, filter) \ ++ for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \ ++ ((k) = bch_btree_iter_next_filter(&((stack_iter)->iter), (b), \ ++ filter));) + +-#define for_each_key(b, k, iter) \ +- for (bch_btree_iter_init((b), (iter), NULL); \ +- ((k) = bch_btree_iter_next(iter));) ++#define for_each_key(b, k, stack_iter) \ ++ for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \ ++ ((k) = bch_btree_iter_next(&((stack_iter)->iter)));) + + /* Sorting */ + +diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c +index 196cdacce38f2..d011a7154d330 100644 +--- a/drivers/md/bcache/btree.c ++++ b/drivers/md/bcache/btree.c +@@ -1309,7 +1309,7 @@ static bool btree_gc_mark_node(struct btree *b, struct gc_stat *gc) + uint8_t stale = 0; + unsigned int keys = 0, good_keys = 0; + struct bkey *k; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct bset_tree *t; + + gc->nodes++; +@@ -1570,7 +1570,7 @@ static int btree_gc_rewrite_node(struct btree *b, struct btree_op *op, + static unsigned int btree_gc_count_keys(struct btree *b) + { + struct bkey *k; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + unsigned int ret = 0; + + for_each_key_filter(&b->keys, k, &iter, bch_ptr_bad) +@@ -1611,17 +1611,18 @@ static int btree_gc_recurse(struct btree *b, struct btree_op *op, + int ret = 0; + bool should_rewrite; + struct bkey *k; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct gc_merge_info r[GC_MERGE_NODES]; + struct gc_merge_info *i, *last = r + ARRAY_SIZE(r) - 1; + +- bch_btree_iter_init(&b->keys, &iter, &b->c->gc_done); ++ bch_btree_iter_stack_init(&b->keys, &iter, &b->c->gc_done); + + for (i = r; i < r + ARRAY_SIZE(r); i++) + i->b = ERR_PTR(-EINTR); + + while (1) { +- k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad); ++ k = bch_btree_iter_next_filter(&iter.iter, &b->keys, ++ bch_ptr_bad); + if (k) { + r->b = bch_btree_node_get(b->c, op, k, b->level - 1, + true, b); +@@ -1911,7 +1912,7 @@ static int bch_btree_check_recurse(struct btree *b, struct 
btree_op *op) + { + int ret = 0; + struct bkey *k, *p = NULL; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + + for_each_key_filter(&b->keys, k, &iter, bch_ptr_invalid) + bch_initial_mark_key(b->c, b->level, k); +@@ -1919,10 +1920,10 @@ static int bch_btree_check_recurse(struct btree *b, struct btree_op *op) + bch_initial_mark_key(b->c, b->level + 1, &b->key); + + if (b->level) { +- bch_btree_iter_init(&b->keys, &iter, NULL); ++ bch_btree_iter_stack_init(&b->keys, &iter, NULL); + + do { +- k = bch_btree_iter_next_filter(&iter, &b->keys, ++ k = bch_btree_iter_next_filter(&iter.iter, &b->keys, + bch_ptr_bad); + if (k) { + btree_node_prefetch(b, k); +@@ -1950,7 +1951,7 @@ static int bch_btree_check_thread(void *arg) + struct btree_check_info *info = arg; + struct btree_check_state *check_state = info->state; + struct cache_set *c = check_state->c; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct bkey *k, *p; + int cur_idx, prev_idx, skip_nr; + +@@ -1959,8 +1960,8 @@ static int bch_btree_check_thread(void *arg) + ret = 0; + + /* root node keys are checked before thread created */ +- bch_btree_iter_init(&c->root->keys, &iter, NULL); +- k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad); ++ bch_btree_iter_stack_init(&c->root->keys, &iter, NULL); ++ k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad); + BUG_ON(!k); + + p = k; +@@ -1978,7 +1979,7 @@ static int bch_btree_check_thread(void *arg) + skip_nr = cur_idx - prev_idx; + + while (skip_nr) { +- k = bch_btree_iter_next_filter(&iter, ++ k = bch_btree_iter_next_filter(&iter.iter, + &c->root->keys, + bch_ptr_bad); + if (k) +@@ -2051,7 +2052,7 @@ int bch_btree_check(struct cache_set *c) + int ret = 0; + int i; + struct bkey *k = NULL; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct btree_check_state check_state; + + /* check and mark root node keys */ +@@ -2547,11 +2548,11 @@ static int bch_btree_map_nodes_recurse(struct btree *b, struct btree_op *op, + + if (b->level) { + struct bkey *k; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + +- bch_btree_iter_init(&b->keys, &iter, from); ++ bch_btree_iter_stack_init(&b->keys, &iter, from); + +- while ((k = bch_btree_iter_next_filter(&iter, &b->keys, ++ while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys, + bch_ptr_bad))) { + ret = bcache_btree(map_nodes_recurse, k, b, + op, from, fn, flags); +@@ -2580,11 +2581,12 @@ int bch_btree_map_keys_recurse(struct btree *b, struct btree_op *op, + { + int ret = MAP_CONTINUE; + struct bkey *k; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + +- bch_btree_iter_init(&b->keys, &iter, from); ++ bch_btree_iter_stack_init(&b->keys, &iter, from); + +- while ((k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad))) { ++ while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys, ++ bch_ptr_bad))) { + ret = !b->level + ? 
fn(op, b, k) + : bcache_btree(map_keys_recurse, k, +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index 330bcd9ea4a9c..b0819be0486e6 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -1914,8 +1914,9 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb) + INIT_LIST_HEAD(&c->btree_cache_freed); + INIT_LIST_HEAD(&c->data_buckets); + +- iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size + 1) * +- sizeof(struct btree_iter_set); ++ iter_size = sizeof(struct btree_iter) + ++ ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) * ++ sizeof(struct btree_iter_set); + + c->devices = kcalloc(c->nr_uuids, sizeof(void *), GFP_KERNEL); + if (!c->devices) +diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c +index 6956beb55326f..826b14cae4e58 100644 +--- a/drivers/md/bcache/sysfs.c ++++ b/drivers/md/bcache/sysfs.c +@@ -660,7 +660,7 @@ static unsigned int bch_root_usage(struct cache_set *c) + unsigned int bytes = 0; + struct bkey *k; + struct btree *b; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + + goto lock_root; + +diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c +index 8827a6f130ad7..792e070ccf38b 100644 +--- a/drivers/md/bcache/writeback.c ++++ b/drivers/md/bcache/writeback.c +@@ -908,15 +908,15 @@ static int bch_dirty_init_thread(void *arg) + struct dirty_init_thrd_info *info = arg; + struct bch_dirty_init_state *state = info->state; + struct cache_set *c = state->c; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct bkey *k, *p; + int cur_idx, prev_idx, skip_nr; + + k = p = NULL; + prev_idx = 0; + +- bch_btree_iter_init(&c->root->keys, &iter, NULL); +- k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad); ++ bch_btree_iter_stack_init(&c->root->keys, &iter, NULL); ++ k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad); + BUG_ON(!k); + + p = k; +@@ -930,7 +930,7 @@ static int bch_dirty_init_thread(void *arg) + skip_nr = cur_idx - prev_idx; + + while (skip_nr) { +- k = bch_btree_iter_next_filter(&iter, ++ k = bch_btree_iter_next_filter(&iter.iter, + &c->root->keys, + bch_ptr_bad); + if (k) +@@ -979,7 +979,7 @@ void bch_sectors_dirty_init(struct bcache_device *d) + int i; + struct btree *b = NULL; + struct bkey *k = NULL; +- struct btree_iter iter; ++ struct btree_iter_stack iter; + struct sectors_dirty_init op; + struct cache_set *c = d->c; + struct bch_dirty_init_state state; +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index d874abfc18364..2bd1ce9b39226 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -36,7 +36,6 @@ + */ + + #include +-#include + #include + #include + #include +@@ -6734,6 +6733,9 @@ static void raid5d(struct md_thread *thread) + int batch_size, released; + unsigned int offset; + ++ if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) ++ break; ++ + released = release_stripe_list(conf, conf->temp_inactive_list); + if (released) + clear_bit(R5_DID_ALLOC, &conf->cache_state); +@@ -6770,18 +6772,7 @@ static void raid5d(struct md_thread *thread) + spin_unlock_irq(&conf->device_lock); + md_check_recovery(mddev); + spin_lock_irq(&conf->device_lock); +- +- /* +- * Waiting on MD_SB_CHANGE_PENDING below may deadlock +- * seeing md_check_recovery() is needed to clear +- * the flag when using mdmon. 
+- */ +- continue; + } +- +- wait_event_lock_irq(mddev->sb_wait, +- !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags), +- conf->device_lock); + } + pr_debug("%d stripes handled\n", handled); + +diff --git a/drivers/media/dvb-frontends/lgdt3306a.c b/drivers/media/dvb-frontends/lgdt3306a.c +index 2638875924153..231b45632ad5a 100644 +--- a/drivers/media/dvb-frontends/lgdt3306a.c ++++ b/drivers/media/dvb-frontends/lgdt3306a.c +@@ -2176,6 +2176,11 @@ static int lgdt3306a_probe(struct i2c_client *client) + struct dvb_frontend *fe; + int ret; + ++ if (!client->dev.platform_data) { ++ dev_err(&client->dev, "platform data is mandatory\n"); ++ return -EINVAL; ++ } ++ + config = kmemdup(client->dev.platform_data, + sizeof(struct lgdt3306a_config), GFP_KERNEL); + if (config == NULL) { +diff --git a/drivers/media/dvb-frontends/mxl5xx.c b/drivers/media/dvb-frontends/mxl5xx.c +index 4ebbcf05cc09e..91e9c378397c8 100644 +--- a/drivers/media/dvb-frontends/mxl5xx.c ++++ b/drivers/media/dvb-frontends/mxl5xx.c +@@ -1381,57 +1381,57 @@ static int config_ts(struct mxl *state, enum MXL_HYDRA_DEMOD_ID_E demod_id, + u32 nco_count_min = 0; + u32 clk_type = 0; + +- struct MXL_REG_FIELD_T xpt_sync_polarity[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_sync_polarity[MXL_HYDRA_DEMOD_MAX] = { + {0x90700010, 8, 1}, {0x90700010, 9, 1}, + {0x90700010, 10, 1}, {0x90700010, 11, 1}, + {0x90700010, 12, 1}, {0x90700010, 13, 1}, + {0x90700010, 14, 1}, {0x90700010, 15, 1} }; +- struct MXL_REG_FIELD_T xpt_clock_polarity[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_clock_polarity[MXL_HYDRA_DEMOD_MAX] = { + {0x90700010, 16, 1}, {0x90700010, 17, 1}, + {0x90700010, 18, 1}, {0x90700010, 19, 1}, + {0x90700010, 20, 1}, {0x90700010, 21, 1}, + {0x90700010, 22, 1}, {0x90700010, 23, 1} }; +- struct MXL_REG_FIELD_T xpt_valid_polarity[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_valid_polarity[MXL_HYDRA_DEMOD_MAX] = { + {0x90700014, 0, 1}, {0x90700014, 1, 1}, + {0x90700014, 2, 1}, {0x90700014, 3, 1}, + {0x90700014, 4, 1}, {0x90700014, 5, 1}, + {0x90700014, 6, 1}, {0x90700014, 7, 1} }; +- struct MXL_REG_FIELD_T xpt_ts_clock_phase[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_ts_clock_phase[MXL_HYDRA_DEMOD_MAX] = { + {0x90700018, 0, 3}, {0x90700018, 4, 3}, + {0x90700018, 8, 3}, {0x90700018, 12, 3}, + {0x90700018, 16, 3}, {0x90700018, 20, 3}, + {0x90700018, 24, 3}, {0x90700018, 28, 3} }; +- struct MXL_REG_FIELD_T xpt_lsb_first[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_lsb_first[MXL_HYDRA_DEMOD_MAX] = { + {0x9070000C, 16, 1}, {0x9070000C, 17, 1}, + {0x9070000C, 18, 1}, {0x9070000C, 19, 1}, + {0x9070000C, 20, 1}, {0x9070000C, 21, 1}, + {0x9070000C, 22, 1}, {0x9070000C, 23, 1} }; +- struct MXL_REG_FIELD_T xpt_sync_byte[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_sync_byte[MXL_HYDRA_DEMOD_MAX] = { + {0x90700010, 0, 1}, {0x90700010, 1, 1}, + {0x90700010, 2, 1}, {0x90700010, 3, 1}, + {0x90700010, 4, 1}, {0x90700010, 5, 1}, + {0x90700010, 6, 1}, {0x90700010, 7, 1} }; +- struct MXL_REG_FIELD_T xpt_enable_output[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_enable_output[MXL_HYDRA_DEMOD_MAX] = { + {0x9070000C, 0, 1}, {0x9070000C, 1, 1}, + {0x9070000C, 2, 1}, {0x9070000C, 3, 1}, + {0x9070000C, 4, 1}, {0x9070000C, 5, 1}, + {0x9070000C, 6, 1}, {0x9070000C, 7, 1} }; +- struct MXL_REG_FIELD_T xpt_err_replace_sync[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T 
xpt_err_replace_sync[MXL_HYDRA_DEMOD_MAX] = { + {0x9070000C, 24, 1}, {0x9070000C, 25, 1}, + {0x9070000C, 26, 1}, {0x9070000C, 27, 1}, + {0x9070000C, 28, 1}, {0x9070000C, 29, 1}, + {0x9070000C, 30, 1}, {0x9070000C, 31, 1} }; +- struct MXL_REG_FIELD_T xpt_err_replace_valid[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_err_replace_valid[MXL_HYDRA_DEMOD_MAX] = { + {0x90700014, 8, 1}, {0x90700014, 9, 1}, + {0x90700014, 10, 1}, {0x90700014, 11, 1}, + {0x90700014, 12, 1}, {0x90700014, 13, 1}, + {0x90700014, 14, 1}, {0x90700014, 15, 1} }; +- struct MXL_REG_FIELD_T xpt_continuous_clock[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_continuous_clock[MXL_HYDRA_DEMOD_MAX] = { + {0x907001D4, 0, 1}, {0x907001D4, 1, 1}, + {0x907001D4, 2, 1}, {0x907001D4, 3, 1}, + {0x907001D4, 4, 1}, {0x907001D4, 5, 1}, + {0x907001D4, 6, 1}, {0x907001D4, 7, 1} }; +- struct MXL_REG_FIELD_T xpt_nco_clock_rate[MXL_HYDRA_DEMOD_MAX] = { ++ static const struct MXL_REG_FIELD_T xpt_nco_clock_rate[MXL_HYDRA_DEMOD_MAX] = { + {0x90700044, 16, 80}, {0x90700044, 16, 81}, + {0x90700044, 16, 82}, {0x90700044, 16, 83}, + {0x90700044, 16, 84}, {0x90700044, 16, 85}, +diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c +index 552935ccb4a90..57906df7be4ec 100644 +--- a/drivers/media/i2c/ov2740.c ++++ b/drivers/media/i2c/ov2740.c +@@ -768,14 +768,15 @@ static int ov2740_init_controls(struct ov2740 *ov2740) + cur_mode = ov2740->cur_mode; + size = ARRAY_SIZE(link_freq_menu_items); + +- ov2740->link_freq = v4l2_ctrl_new_int_menu(ctrl_hdlr, &ov2740_ctrl_ops, +- V4L2_CID_LINK_FREQ, +- size - 1, 0, +- link_freq_menu_items); ++ ov2740->link_freq = ++ v4l2_ctrl_new_int_menu(ctrl_hdlr, &ov2740_ctrl_ops, ++ V4L2_CID_LINK_FREQ, size - 1, ++ ov2740->supported_modes->link_freq_index, ++ link_freq_menu_items); + if (ov2740->link_freq) + ov2740->link_freq->flags |= V4L2_CTRL_FLAG_READ_ONLY; + +- pixel_rate = to_pixel_rate(OV2740_LINK_FREQ_360MHZ_INDEX); ++ pixel_rate = to_pixel_rate(ov2740->supported_modes->link_freq_index); + ov2740->pixel_rate = v4l2_ctrl_new_std(ctrl_hdlr, &ov2740_ctrl_ops, + V4L2_CID_PIXEL_RATE, 0, + pixel_rate, 1, pixel_rate); +diff --git a/drivers/media/mc/mc-devnode.c b/drivers/media/mc/mc-devnode.c +index 7f67825c8757f..318e267e798e3 100644 +--- a/drivers/media/mc/mc-devnode.c ++++ b/drivers/media/mc/mc-devnode.c +@@ -245,15 +245,14 @@ int __must_check media_devnode_register(struct media_device *mdev, + kobject_set_name(&devnode->cdev.kobj, "media%d", devnode->minor); + + /* Part 3: Add the media and char device */ ++ set_bit(MEDIA_FLAG_REGISTERED, &devnode->flags); + ret = cdev_device_add(&devnode->cdev, &devnode->dev); + if (ret < 0) { ++ clear_bit(MEDIA_FLAG_REGISTERED, &devnode->flags); + pr_err("%s: cdev_device_add failed\n", __func__); + goto cdev_add_error; + } + +- /* Part 4: Activate this minor. The char device can now be used. 
*/ +- set_bit(MEDIA_FLAG_REGISTERED, &devnode->flags); +- + return 0; + + cdev_add_error: +diff --git a/drivers/media/mc/mc-entity.c b/drivers/media/mc/mc-entity.c +index 0e28b9a7936ef..96dd0f6ccd0d0 100644 +--- a/drivers/media/mc/mc-entity.c ++++ b/drivers/media/mc/mc-entity.c +@@ -619,6 +619,12 @@ static int media_pipeline_explore_next_link(struct media_pipeline *pipe, + link = list_entry(entry->links, typeof(*link), list); + last_link = media_pipeline_walk_pop(walk); + ++ if ((link->flags & MEDIA_LNK_FL_LINK_TYPE) != MEDIA_LNK_FL_DATA_LINK) { ++ dev_dbg(walk->mdev->dev, ++ "media pipeline: skipping link (not data-link)\n"); ++ return 0; ++ } ++ + dev_dbg(walk->mdev->dev, + "media pipeline: exploring link '%s':%u -> '%s':%u\n", + link->source->entity->name, link->source->index, +diff --git a/drivers/media/pci/mgb4/mgb4_core.c b/drivers/media/pci/mgb4/mgb4_core.c +index 9bcf10a77fd38..97c833a8deb3b 100644 +--- a/drivers/media/pci/mgb4/mgb4_core.c ++++ b/drivers/media/pci/mgb4/mgb4_core.c +@@ -642,9 +642,6 @@ static void mgb4_remove(struct pci_dev *pdev) + struct mgb4_dev *mgbdev = pci_get_drvdata(pdev); + int i; + +-#ifdef CONFIG_DEBUG_FS +- debugfs_remove_recursive(mgbdev->debugfs); +-#endif + #if IS_REACHABLE(CONFIG_HWMON) + hwmon_device_unregister(mgbdev->hwmon_dev); + #endif +@@ -659,6 +656,10 @@ static void mgb4_remove(struct pci_dev *pdev) + if (mgbdev->vin[i]) + mgb4_vin_free(mgbdev->vin[i]); + ++#ifdef CONFIG_DEBUG_FS ++ debugfs_remove_recursive(mgbdev->debugfs); ++#endif ++ + device_remove_groups(&mgbdev->pdev->dev, mgb4_pci_groups); + free_spi(mgbdev); + free_i2c(mgbdev); +diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c +index 3ec323bd528b1..4bb073587817c 100644 +--- a/drivers/media/v4l2-core/v4l2-async.c ++++ b/drivers/media/v4l2-core/v4l2-async.c +@@ -563,6 +563,7 @@ void v4l2_async_nf_init(struct v4l2_async_notifier *notifier, + { + INIT_LIST_HEAD(¬ifier->waiting_list); + INIT_LIST_HEAD(¬ifier->done_list); ++ INIT_LIST_HEAD(¬ifier->notifier_entry); + notifier->v4l2_dev = v4l2_dev; + } + EXPORT_SYMBOL(v4l2_async_nf_init); +@@ -572,6 +573,7 @@ void v4l2_async_subdev_nf_init(struct v4l2_async_notifier *notifier, + { + INIT_LIST_HEAD(¬ifier->waiting_list); + INIT_LIST_HEAD(¬ifier->done_list); ++ INIT_LIST_HEAD(¬ifier->notifier_entry); + notifier->sd = sd; + } + EXPORT_SYMBOL_GPL(v4l2_async_subdev_nf_init); +@@ -618,16 +620,10 @@ static int __v4l2_async_nf_register(struct v4l2_async_notifier *notifier) + + int v4l2_async_nf_register(struct v4l2_async_notifier *notifier) + { +- int ret; +- + if (WARN_ON(!notifier->v4l2_dev == !notifier->sd)) + return -EINVAL; + +- ret = __v4l2_async_nf_register(notifier); +- if (ret) +- notifier->v4l2_dev = NULL; +- +- return ret; ++ return __v4l2_async_nf_register(notifier); + } + EXPORT_SYMBOL(v4l2_async_nf_register); + +@@ -639,7 +635,7 @@ __v4l2_async_nf_unregister(struct v4l2_async_notifier *notifier) + + v4l2_async_nf_unbind_all_subdevs(notifier); + +- list_del(¬ifier->notifier_entry); ++ list_del_init(¬ifier->notifier_entry); + } + + void v4l2_async_nf_unregister(struct v4l2_async_notifier *notifier) +diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c +index d13954bd31fd9..bae73b8c52ff5 100644 +--- a/drivers/media/v4l2-core/v4l2-dev.c ++++ b/drivers/media/v4l2-core/v4l2-dev.c +@@ -1036,8 +1036,10 @@ int __video_register_device(struct video_device *vdev, + vdev->dev.devt = MKDEV(VIDEO_MAJOR, vdev->minor); + vdev->dev.parent = vdev->dev_parent; + 
dev_set_name(&vdev->dev, "%s%d", name_base, vdev->num); ++ mutex_lock(&videodev_lock); + ret = device_register(&vdev->dev); + if (ret < 0) { ++ mutex_unlock(&videodev_lock); + pr_err("%s: device_register failed\n", __func__); + goto cleanup; + } +@@ -1057,6 +1059,7 @@ int __video_register_device(struct video_device *vdev, + + /* Part 6: Activate this minor. The char device can now be used. */ + set_bit(V4L2_FL_REGISTERED, &vdev->flags); ++ mutex_unlock(&videodev_lock); + + return 0; + +diff --git a/drivers/mmc/core/slot-gpio.c b/drivers/mmc/core/slot-gpio.c +index 39f45c2b6de8a..8791656e9e20d 100644 +--- a/drivers/mmc/core/slot-gpio.c ++++ b/drivers/mmc/core/slot-gpio.c +@@ -221,6 +221,26 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id, + } + EXPORT_SYMBOL(mmc_gpiod_request_cd); + ++/** ++ * mmc_gpiod_set_cd_config - set config for card-detection GPIO ++ * @host: mmc host ++ * @config: Generic pinconf config (from pinconf_to_config_packed()) ++ * ++ * This can be used by mmc host drivers to fixup a card-detection GPIO's config ++ * (e.g. set PIN_CONFIG_BIAS_PULL_UP) after acquiring the GPIO descriptor ++ * through mmc_gpiod_request_cd(). ++ * ++ * Returns: ++ * 0 on success, or a negative errno value on error. ++ */ ++int mmc_gpiod_set_cd_config(struct mmc_host *host, unsigned long config) ++{ ++ struct mmc_gpio *ctx = host->slot.handler_priv; ++ ++ return gpiod_set_config(ctx->cd_gpio, config); ++} ++EXPORT_SYMBOL(mmc_gpiod_set_cd_config); ++ + bool mmc_can_gpio_cd(struct mmc_host *host) + { + struct mmc_gpio *ctx = host->slot.handler_priv; +diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c +index 8bd938919687b..d7427894e0bc9 100644 +--- a/drivers/mmc/host/davinci_mmc.c ++++ b/drivers/mmc/host/davinci_mmc.c +@@ -1337,7 +1337,7 @@ static int davinci_mmcsd_probe(struct platform_device *pdev) + return ret; + } + +-static void __exit davinci_mmcsd_remove(struct platform_device *pdev) ++static void davinci_mmcsd_remove(struct platform_device *pdev) + { + struct mmc_davinci_host *host = platform_get_drvdata(pdev); + +@@ -1392,7 +1392,7 @@ static struct platform_driver davinci_mmcsd_driver = { + .of_match_table = davinci_mmc_dt_ids, + }, + .probe = davinci_mmcsd_probe, +- .remove_new = __exit_p(davinci_mmcsd_remove), ++ .remove_new = davinci_mmcsd_remove, + .id_table = davinci_mmc_devtype, + }; + +diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c +index acf5fc3ad7e41..eb8f427f9770d 100644 +--- a/drivers/mmc/host/sdhci-acpi.c ++++ b/drivers/mmc/host/sdhci-acpi.c +@@ -10,6 +10,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -80,6 +81,8 @@ struct sdhci_acpi_host { + enum { + DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP = BIT(0), + DMI_QUIRK_SD_NO_WRITE_PROTECT = BIT(1), ++ DMI_QUIRK_SD_CD_ACTIVE_HIGH = BIT(2), ++ DMI_QUIRK_SD_CD_ENABLE_PULL_UP = BIT(3), + }; + + static inline void *sdhci_acpi_priv(struct sdhci_acpi_host *c) +@@ -719,7 +722,28 @@ static const struct acpi_device_id sdhci_acpi_ids[] = { + }; + MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids); + ++/* Please keep this list sorted alphabetically */ + static const struct dmi_system_id sdhci_acpi_quirks[] = { ++ { ++ /* ++ * The Acer Aspire Switch 10 (SW5-012) microSD slot always ++ * reports the card being write-protected even though microSD ++ * cards do not have a write-protect switch at all. 
++ */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"), ++ }, ++ .driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT, ++ }, ++ { ++ /* Asus T100TA, needs pull-up for cd but DSDT GpioInt has NoPull set */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "T100TA"), ++ }, ++ .driver_data = (void *)DMI_QUIRK_SD_CD_ENABLE_PULL_UP, ++ }, + { + /* + * The Lenovo Miix 320-10ICR has a bug in the _PS0 method of +@@ -736,15 +760,23 @@ static const struct dmi_system_id sdhci_acpi_quirks[] = { + }, + { + /* +- * The Acer Aspire Switch 10 (SW5-012) microSD slot always +- * reports the card being write-protected even though microSD +- * cards do not have a write-protect switch at all. ++ * Lenovo Yoga Tablet 2 Pro 1380F/L (13" Android version) this ++ * has broken WP reporting and an inverted CD signal. ++ * Note this has more or less the same BIOS as the Lenovo Yoga ++ * Tablet 2 830F/L or 1050F/L (8" and 10" Android), but unlike ++ * the 830 / 1050 models which share the same mainboard this ++ * model has a different mainboard and the inverted CD and ++ * broken WP are unique to this board. + */ + .matches = { +- DMI_MATCH(DMI_SYS_VENDOR, "Acer"), +- DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"), ++ DMI_MATCH(DMI_SYS_VENDOR, "Intel Corp."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "VALLEYVIEW C0 PLATFORM"), ++ DMI_MATCH(DMI_BOARD_NAME, "BYT-T FFD8"), ++ /* Full match so as to NOT match the 830/1050 BIOS */ ++ DMI_MATCH(DMI_BIOS_VERSION, "BLADE_21.X64.0005.R00.1504101516"), + }, +- .driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT, ++ .driver_data = (void *)(DMI_QUIRK_SD_NO_WRITE_PROTECT | ++ DMI_QUIRK_SD_CD_ACTIVE_HIGH), + }, + { + /* +@@ -757,6 +789,17 @@ static const struct dmi_system_id sdhci_acpi_quirks[] = { + }, + .driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT, + }, ++ { ++ /* ++ * The Toshiba WT10-A's microSD slot always reports the card being ++ * write-protected. 
++ */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "TOSHIBA WT10-A"), ++ }, ++ .driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT, ++ }, + {} /* Terminating entry */ + }; + +@@ -866,12 +909,18 @@ static int sdhci_acpi_probe(struct platform_device *pdev) + if (sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD)) { + bool v = sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL); + ++ if (quirks & DMI_QUIRK_SD_CD_ACTIVE_HIGH) ++ host->mmc->caps2 |= MMC_CAP2_CD_ACTIVE_HIGH; ++ + err = mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0); + if (err) { + if (err == -EPROBE_DEFER) + goto err_free; + dev_warn(dev, "failed to setup card detect gpio\n"); + c->use_runtime_pm = false; ++ } else if (quirks & DMI_QUIRK_SD_CD_ENABLE_PULL_UP) { ++ mmc_gpiod_set_cd_config(host->mmc, ++ PIN_CONF_PACKED(PIN_CONFIG_BIAS_PULL_UP, 20000)); + } + + if (quirks & DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP) +diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c +index c79f73459915d..746f4cf7ab033 100644 +--- a/drivers/mmc/host/sdhci.c ++++ b/drivers/mmc/host/sdhci.c +@@ -3439,12 +3439,18 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask) + host->data->error = -EILSEQ; + if (!mmc_op_tuning(SDHCI_GET_CMD(sdhci_readw(host, SDHCI_COMMAND)))) + sdhci_err_stats_inc(host, DAT_CRC); +- } else if ((intmask & SDHCI_INT_DATA_CRC) && ++ } else if ((intmask & (SDHCI_INT_DATA_CRC | SDHCI_INT_TUNING_ERROR)) && + SDHCI_GET_CMD(sdhci_readw(host, SDHCI_COMMAND)) + != MMC_BUS_TEST_R) { + host->data->error = -EILSEQ; + if (!mmc_op_tuning(SDHCI_GET_CMD(sdhci_readw(host, SDHCI_COMMAND)))) + sdhci_err_stats_inc(host, DAT_CRC); ++ if (intmask & SDHCI_INT_TUNING_ERROR) { ++ u16 ctrl2 = sdhci_readw(host, SDHCI_HOST_CONTROL2); ++ ++ ctrl2 &= ~SDHCI_CTRL_TUNED_CLK; ++ sdhci_writew(host, ctrl2, SDHCI_HOST_CONTROL2); ++ } + } else if (intmask & SDHCI_INT_ADMA_ERROR) { + pr_err("%s: ADMA error: 0x%08x\n", mmc_hostname(host->mmc), + intmask); +@@ -3979,7 +3985,7 @@ bool sdhci_cqe_irq(struct sdhci_host *host, u32 intmask, int *cmd_error, + } else + *cmd_error = 0; + +- if (intmask & (SDHCI_INT_DATA_END_BIT | SDHCI_INT_DATA_CRC)) { ++ if (intmask & (SDHCI_INT_DATA_END_BIT | SDHCI_INT_DATA_CRC | SDHCI_INT_TUNING_ERROR)) { + *data_error = -EILSEQ; + if (!mmc_op_tuning(SDHCI_GET_CMD(sdhci_readw(host, SDHCI_COMMAND)))) + sdhci_err_stats_inc(host, DAT_CRC); +diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h +index a20864fc06412..957c7a917ffb7 100644 +--- a/drivers/mmc/host/sdhci.h ++++ b/drivers/mmc/host/sdhci.h +@@ -158,6 +158,7 @@ + #define SDHCI_INT_BUS_POWER 0x00800000 + #define SDHCI_INT_AUTO_CMD_ERR 0x01000000 + #define SDHCI_INT_ADMA_ERROR 0x02000000 ++#define SDHCI_INT_TUNING_ERROR 0x04000000 + + #define SDHCI_INT_NORMAL_MASK 0x00007FFF + #define SDHCI_INT_ERROR_MASK 0xFFFF8000 +@@ -169,7 +170,7 @@ + SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL | \ + SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_DATA_CRC | \ + SDHCI_INT_DATA_END_BIT | SDHCI_INT_ADMA_ERROR | \ +- SDHCI_INT_BLK_GAP) ++ SDHCI_INT_BLK_GAP | SDHCI_INT_TUNING_ERROR) + #define SDHCI_INT_ALL_MASK ((unsigned int)-1) + + #define SDHCI_CQE_INT_ERR_MASK ( \ +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index 2c5ed0a7cb18c..bceda85f0dcf6 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -6477,16 +6477,16 @@ static int __init bonding_init(void) + if (res) + goto out; + ++ bond_create_debugfs(); ++ + res = register_pernet_subsys(&bond_net_ops); + if (res) +- goto out; ++ 
goto err_net_ops; + + res = bond_netlink_init(); + if (res) + goto err_link; + +- bond_create_debugfs(); +- + for (i = 0; i < max_bonds; i++) { + res = bond_create(&init_net, NULL); + if (res) +@@ -6501,10 +6501,11 @@ static int __init bonding_init(void) + out: + return res; + err: +- bond_destroy_debugfs(); + bond_netlink_fini(); + err_link: + unregister_pernet_subsys(&bond_net_ops); ++err_net_ops: ++ bond_destroy_debugfs(); + goto out; + + } +@@ -6513,11 +6514,11 @@ static void __exit bonding_exit(void) + { + unregister_netdevice_notifier(&bond_netdev_notifier); + +- bond_destroy_debugfs(); +- + bond_netlink_fini(); + unregister_pernet_subsys(&bond_net_ops); + ++ bond_destroy_debugfs(); ++ + #ifdef CONFIG_NET_POLL_CONTROLLER + /* Make sure we don't have an imbalance on our netpoll blocking */ + WARN_ON(atomic_read(&netpoll_block_tx)); +diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c +index 6b64f28a9174d..b2d054f59f30f 100644 +--- a/drivers/net/vxlan/vxlan_core.c ++++ b/drivers/net/vxlan/vxlan_core.c +@@ -1446,6 +1446,10 @@ static bool vxlan_snoop(struct net_device *dev, + struct vxlan_fdb *f; + u32 ifindex = 0; + ++ /* Ignore packets from invalid src-address */ ++ if (!is_valid_ether_addr(src_mac)) ++ return true; ++ + #if IS_ENABLED(CONFIG_IPV6) + if (src_ip->sa.sa_family == AF_INET6 && + (ipv6_addr_type(&src_ip->sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL)) +@@ -1615,10 +1619,6 @@ static bool vxlan_set_mac(struct vxlan_dev *vxlan, + if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr)) + return false; + +- /* Ignore packets from invalid src-address */ +- if (!is_valid_ether_addr(eth_hdr(skb)->h_source)) +- return false; +- + /* Get address from the outer IP header */ + if (vxlan_get_sk_family(vs) == AF_INET) { + saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr; +diff --git a/drivers/net/wireless/ath/ath10k/Kconfig b/drivers/net/wireless/ath/ath10k/Kconfig +index e6ea884cafc19..4f385f4a8cef2 100644 +--- a/drivers/net/wireless/ath/ath10k/Kconfig ++++ b/drivers/net/wireless/ath/ath10k/Kconfig +@@ -45,6 +45,7 @@ config ATH10K_SNOC + depends on ATH10K + depends on ARCH_QCOM || COMPILE_TEST + depends on QCOM_SMEM ++ depends on QCOM_RPROC_COMMON || QCOM_RPROC_COMMON=n + select QCOM_SCM + select QCOM_QMI_HELPERS + help +diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h +index fd92d23c43d91..16d884a3d87df 100644 +--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h ++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h +@@ -122,6 +122,15 @@ enum rtl8xxxu_rx_type { + RX_TYPE_ERROR = -1 + }; + ++enum rtl8xxxu_rx_desc_enc { ++ RX_DESC_ENC_NONE = 0, ++ RX_DESC_ENC_WEP40 = 1, ++ RX_DESC_ENC_TKIP_WO_MIC = 2, ++ RX_DESC_ENC_TKIP_MIC = 3, ++ RX_DESC_ENC_AES = 4, ++ RX_DESC_ENC_WEP104 = 5, ++}; ++ + struct rtl8xxxu_rxdesc16 { + #ifdef __LITTLE_ENDIAN + u32 pktlen:14; +diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c +index 4a49f8f9d80f2..fbbc97161f5df 100644 +--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c ++++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c +@@ -1505,13 +1505,13 @@ rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40) + u8 cck[RTL8723A_MAX_RF_PATHS], ofdm[RTL8723A_MAX_RF_PATHS]; + u8 ofdmbase[RTL8723A_MAX_RF_PATHS], mcsbase[RTL8723A_MAX_RF_PATHS]; + u32 val32, ofdm_a, ofdm_b, mcs_a, mcs_b; +- u8 val8; ++ u8 val8, base; + int group, i; + + group = 
rtl8xxxu_gen1_channel_to_group(channel); + +- cck[0] = priv->cck_tx_power_index_A[group] - 1; +- cck[1] = priv->cck_tx_power_index_B[group] - 1; ++ cck[0] = priv->cck_tx_power_index_A[group]; ++ cck[1] = priv->cck_tx_power_index_B[group]; + + if (priv->hi_pa) { + if (cck[0] > 0x20) +@@ -1522,10 +1522,6 @@ rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40) + + ofdm[0] = priv->ht40_1s_tx_power_index_A[group]; + ofdm[1] = priv->ht40_1s_tx_power_index_B[group]; +- if (ofdm[0]) +- ofdm[0] -= 1; +- if (ofdm[1]) +- ofdm[1] -= 1; + + ofdmbase[0] = ofdm[0] + priv->ofdm_tx_power_index_diff[group].a; + ofdmbase[1] = ofdm[1] + priv->ofdm_tx_power_index_diff[group].b; +@@ -1614,20 +1610,19 @@ rtl8xxxu_gen1_set_tx_power(struct rtl8xxxu_priv *priv, int channel, bool ht40) + + rtl8xxxu_write32(priv, REG_TX_AGC_A_MCS15_MCS12, + mcs_a + power_base->reg_0e1c); ++ val8 = u32_get_bits(mcs_a + power_base->reg_0e1c, 0xff000000); + for (i = 0; i < 3; i++) { +- if (i != 2) +- val8 = (mcsbase[0] > 8) ? (mcsbase[0] - 8) : 0; +- else +- val8 = (mcsbase[0] > 6) ? (mcsbase[0] - 6) : 0; ++ base = i != 2 ? 8 : 6; ++ val8 = max_t(int, val8 - base, 0); + rtl8xxxu_write8(priv, REG_OFDM0_XC_TX_IQ_IMBALANCE + i, val8); + } ++ + rtl8xxxu_write32(priv, REG_TX_AGC_B_MCS15_MCS12, + mcs_b + power_base->reg_0868); ++ val8 = u32_get_bits(mcs_b + power_base->reg_0868, 0xff000000); + for (i = 0; i < 3; i++) { +- if (i != 2) +- val8 = (mcsbase[1] > 8) ? (mcsbase[1] - 8) : 0; +- else +- val8 = (mcsbase[1] > 6) ? (mcsbase[1] - 6) : 0; ++ base = i != 2 ? 8 : 6; ++ val8 = max_t(int, val8 - base, 0); + rtl8xxxu_write8(priv, REG_OFDM0_XD_TX_IQ_IMBALANCE + i, val8); + } + } +@@ -6473,7 +6468,8 @@ int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb) + rx_status->mactime = rx_desc->tsfl; + rx_status->flag |= RX_FLAG_MACTIME_START; + +- if (!rx_desc->swdec) ++ if (!rx_desc->swdec && ++ rx_desc->security != RX_DESC_ENC_NONE) + rx_status->flag |= RX_FLAG_DECRYPTED; + if (rx_desc->crc32) + rx_status->flag |= RX_FLAG_FAILED_FCS_CRC; +@@ -6578,7 +6574,8 @@ int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb) + rx_status->mactime = rx_desc->tsfl; + rx_status->flag |= RX_FLAG_MACTIME_START; + +- if (!rx_desc->swdec) ++ if (!rx_desc->swdec && ++ rx_desc->security != RX_DESC_ENC_NONE) + rx_status->flag |= RX_FLAG_DECRYPTED; + if (rx_desc->crc32) + rx_status->flag |= RX_FLAG_FAILED_FCS_CRC; +@@ -7998,6 +7995,7 @@ static int rtl8xxxu_probe(struct usb_interface *interface, + ieee80211_hw_set(hw, HAS_RATE_CONTROL); + ieee80211_hw_set(hw, SUPPORT_FAST_XMIT); + ieee80211_hw_set(hw, AMPDU_AGGREGATION); ++ ieee80211_hw_set(hw, MFP_CAPABLE); + + wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST); + +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c +index d835a27429f0f..56b5cd032a9ac 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c +@@ -892,8 +892,8 @@ static u8 _rtl92c_phy_get_rightchnlplace(u8 chnl) + u8 place = chnl; + + if (chnl > 14) { +- for (place = 14; place < ARRAY_SIZE(channel5g); place++) { +- if (channel5g[place] == chnl) { ++ for (place = 14; place < ARRAY_SIZE(channel_all); place++) { ++ if (channel_all[place] == chnl) { + place++; + break; + } +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c +index 192982ec8152d..cbc7b4dbea9ad 100644 +--- 
a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c +@@ -35,7 +35,7 @@ static long _rtl92de_translate_todbm(struct ieee80211_hw *hw, + + static void _rtl92de_query_rxphystatus(struct ieee80211_hw *hw, + struct rtl_stats *pstats, +- struct rx_desc_92d *pdesc, ++ __le32 *pdesc, + struct rx_fwinfo_92d *p_drvinfo, + bool packet_match_bssid, + bool packet_toself, +@@ -50,8 +50,10 @@ static void _rtl92de_query_rxphystatus(struct ieee80211_hw *hw, + u8 i, max_spatial_stream; + u32 rssi, total_rssi = 0; + bool is_cck_rate; ++ u8 rxmcs; + +- is_cck_rate = RX_HAL_IS_CCK_RATE(pdesc->rxmcs); ++ rxmcs = get_rx_desc_rxmcs(pdesc); ++ is_cck_rate = rxmcs <= DESC_RATE11M; + pstats->packet_matchbssid = packet_match_bssid; + pstats->packet_toself = packet_toself; + pstats->packet_beacon = packet_beacon; +@@ -157,8 +159,8 @@ static void _rtl92de_query_rxphystatus(struct ieee80211_hw *hw, + pstats->rx_pwdb_all = pwdb_all; + pstats->rxpower = rx_pwr_all; + pstats->recvsignalpower = rx_pwr_all; +- if (pdesc->rxht && pdesc->rxmcs >= DESC_RATEMCS8 && +- pdesc->rxmcs <= DESC_RATEMCS15) ++ if (get_rx_desc_rxht(pdesc) && rxmcs >= DESC_RATEMCS8 && ++ rxmcs <= DESC_RATEMCS15) + max_spatial_stream = 2; + else + max_spatial_stream = 1; +@@ -364,7 +366,7 @@ static void _rtl92de_process_phyinfo(struct ieee80211_hw *hw, + static void _rtl92de_translate_rx_signal_stuff(struct ieee80211_hw *hw, + struct sk_buff *skb, + struct rtl_stats *pstats, +- struct rx_desc_92d *pdesc, ++ __le32 *pdesc, + struct rx_fwinfo_92d *p_drvinfo) + { + struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); +@@ -413,7 +415,8 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw, struct rtl_stats *stats, + stats->icv = (u16)get_rx_desc_icv(pdesc); + stats->crc = (u16)get_rx_desc_crc32(pdesc); + stats->hwerror = (stats->crc | stats->icv); +- stats->decrypted = !get_rx_desc_swdec(pdesc); ++ stats->decrypted = !get_rx_desc_swdec(pdesc) && ++ get_rx_desc_enc_type(pdesc) != RX_DESC_ENC_NONE; + stats->rate = (u8)get_rx_desc_rxmcs(pdesc); + stats->shortpreamble = (u16)get_rx_desc_splcp(pdesc); + stats->isampdu = (bool)(get_rx_desc_paggr(pdesc) == 1); +@@ -426,8 +429,6 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw, struct rtl_stats *stats, + rx_status->band = hw->conf.chandef.chan->band; + if (get_rx_desc_crc32(pdesc)) + rx_status->flag |= RX_FLAG_FAILED_FCS_CRC; +- if (!get_rx_desc_swdec(pdesc)) +- rx_status->flag |= RX_FLAG_DECRYPTED; + if (get_rx_desc_bw(pdesc)) + rx_status->bw = RATE_INFO_BW_40; + if (get_rx_desc_rxht(pdesc)) +@@ -441,9 +442,7 @@ bool rtl92de_rx_query_desc(struct ieee80211_hw *hw, struct rtl_stats *stats, + if (phystatus) { + p_drvinfo = (struct rx_fwinfo_92d *)(skb->data + + stats->rx_bufshift); +- _rtl92de_translate_rx_signal_stuff(hw, +- skb, stats, +- (struct rx_desc_92d *)pdesc, ++ _rtl92de_translate_rx_signal_stuff(hw, skb, stats, pdesc, + p_drvinfo); + } + /*rx_status->qual = stats->signal; */ +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h +index 2992668c156c6..2d4887490f00f 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h +@@ -14,6 +14,15 @@ + #define USB_HWDESC_HEADER_LEN 32 + #define CRCLENGTH 4 + ++enum rtl92d_rx_desc_enc { ++ RX_DESC_ENC_NONE = 0, ++ RX_DESC_ENC_WEP40 = 1, ++ RX_DESC_ENC_TKIP_WO_MIC = 2, ++ RX_DESC_ENC_TKIP_MIC = 3, ++ RX_DESC_ENC_AES = 4, ++ RX_DESC_ENC_WEP104 = 5, ++}; ++ + /* macros to 
read/write various fields in RX or TX descriptors */ + + static inline void set_tx_desc_pkt_size(__le32 *__pdesc, u32 __val) +@@ -246,6 +255,11 @@ static inline u32 get_rx_desc_drv_info_size(__le32 *__pdesc) + return le32_get_bits(*__pdesc, GENMASK(19, 16)); + } + ++static inline u32 get_rx_desc_enc_type(__le32 *__pdesc) ++{ ++ return le32_get_bits(*__pdesc, GENMASK(22, 20)); ++} ++ + static inline u32 get_rx_desc_shift(__le32 *__pdesc) + { + return le32_get_bits(*__pdesc, GENMASK(25, 24)); +@@ -380,10 +394,17 @@ struct rx_fwinfo_92d { + u8 csi_target[2]; + u8 sigevm; + u8 max_ex_pwr; ++#ifdef __LITTLE_ENDIAN + u8 ex_intf_flag:1; + u8 sgi_en:1; + u8 rxsc:2; + u8 reserve:4; ++#else ++ u8 reserve:4; ++ u8 rxsc:2; ++ u8 sgi_en:1; ++ u8 ex_intf_flag:1; ++#endif + } __packed; + + struct tx_desc_92d { +@@ -488,64 +509,6 @@ struct tx_desc_92d { + u32 reserve_pass_pcie_mm_limit[4]; + } __packed; + +-struct rx_desc_92d { +- u32 length:14; +- u32 crc32:1; +- u32 icverror:1; +- u32 drv_infosize:4; +- u32 security:3; +- u32 qos:1; +- u32 shift:2; +- u32 phystatus:1; +- u32 swdec:1; +- u32 lastseg:1; +- u32 firstseg:1; +- u32 eor:1; +- u32 own:1; +- +- u32 macid:5; +- u32 tid:4; +- u32 hwrsvd:5; +- u32 paggr:1; +- u32 faggr:1; +- u32 a1_fit:4; +- u32 a2_fit:4; +- u32 pam:1; +- u32 pwr:1; +- u32 moredata:1; +- u32 morefrag:1; +- u32 type:2; +- u32 mc:1; +- u32 bc:1; +- +- u32 seq:12; +- u32 frag:4; +- u32 nextpktlen:14; +- u32 nextind:1; +- u32 rsvd:1; +- +- u32 rxmcs:6; +- u32 rxht:1; +- u32 amsdu:1; +- u32 splcp:1; +- u32 bandwidth:1; +- u32 htc:1; +- u32 tcpchk_rpt:1; +- u32 ipcchk_rpt:1; +- u32 tcpchk_valid:1; +- u32 hwpcerr:1; +- u32 hwpcind:1; +- u32 iv0:16; +- +- u32 iv1; +- +- u32 tsfl; +- +- u32 bufferaddress; +- u32 bufferaddress64; +- +-} __packed; +- + void rtl92de_tx_fill_desc(struct ieee80211_hw *hw, + struct ieee80211_hdr *hdr, u8 *pdesc, + u8 *pbd_desc_tx, struct ieee80211_tx_info *info, +diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c +index 31d1ffb16e83e..c4707a036b005 100644 +--- a/drivers/net/wireless/realtek/rtw89/mac80211.c ++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c +@@ -318,7 +318,7 @@ static u8 rtw89_aifsn_to_aifs(struct rtw89_dev *rtwdev, + u8 sifs; + + slot_time = vif->bss_conf.use_short_slot ? 9 : 20; +- sifs = chan->band_type == RTW89_BAND_5G ? 16 : 10; ++ sifs = chan->band_type == RTW89_BAND_2G ? 10 : 16; + + return aifsn * slot_time + sifs; + } +diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c +index 19001130ad943..0afe22e568f43 100644 +--- a/drivers/net/wireless/realtek/rtw89/pci.c ++++ b/drivers/net/wireless/realtek/rtw89/pci.c +@@ -1089,7 +1089,8 @@ u32 __rtw89_pci_check_and_reclaim_tx_resource_noio(struct rtw89_dev *rtwdev, + + spin_lock_bh(&rtwpci->trx_lock); + cnt = rtw89_pci_get_avail_txbd_num(tx_ring); +- cnt = min(cnt, wd_ring->curr_num); ++ if (txch != RTW89_TXCH_CH12) ++ cnt = min(cnt, wd_ring->curr_num); + spin_unlock_bh(&rtwpci->trx_lock); + + return cnt; +diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c +index badc68bbae8cc..47d19f7e295a7 100644 +--- a/drivers/platform/chrome/cros_ec.c ++++ b/drivers/platform/chrome/cros_ec.c +@@ -432,6 +432,12 @@ static void cros_ec_send_resume_event(struct cros_ec_device *ec_dev) + void cros_ec_resume_complete(struct cros_ec_device *ec_dev) + { + cros_ec_send_resume_event(ec_dev); ++ ++ /* ++ * Let the mfd devices know about events that occur during ++ * suspend. 
This way the clients know what to do with them. ++ */ ++ cros_ec_report_events_during_suspend(ec_dev); + } + EXPORT_SYMBOL(cros_ec_resume_complete); + +@@ -442,12 +448,6 @@ static void cros_ec_enable_irq(struct cros_ec_device *ec_dev) + + if (ec_dev->wake_enabled) + disable_irq_wake(ec_dev->irq); +- +- /* +- * Let the mfd devices know about events that occur during +- * suspend. This way the clients know what to do with them. +- */ +- cros_ec_report_events_during_suspend(ec_dev); + } + + /** +@@ -475,8 +475,8 @@ EXPORT_SYMBOL(cros_ec_resume_early); + */ + int cros_ec_resume(struct cros_ec_device *ec_dev) + { +- cros_ec_enable_irq(ec_dev); +- cros_ec_send_resume_event(ec_dev); ++ cros_ec_resume_early(ec_dev); ++ cros_ec_resume_complete(ec_dev); + return 0; + } + EXPORT_SYMBOL(cros_ec_resume); +diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c +index f13837907bd5e..ca7e626a97a4b 100644 +--- a/drivers/s390/crypto/ap_bus.c ++++ b/drivers/s390/crypto/ap_bus.c +@@ -1129,7 +1129,7 @@ static int hex2bitmap(const char *str, unsigned long *bitmap, int bits) + */ + static int modify_bitmap(const char *str, unsigned long *bitmap, int bits) + { +- int a, i, z; ++ unsigned long a, i, z; + char *np, sign; + + /* bits needs to be a multiple of 8 */ +diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c +index 3e0c0381277ac..f0464db3f9de9 100644 +--- a/drivers/scsi/scsi.c ++++ b/drivers/scsi/scsi.c +@@ -350,6 +350,13 @@ static int scsi_get_vpd_size(struct scsi_device *sdev, u8 page) + if (result < SCSI_VPD_HEADER_SIZE) + return 0; + ++ if (result > sizeof(vpd)) { ++ dev_warn_once(&sdev->sdev_gendev, ++ "%s: long VPD page 0 length: %d bytes\n", ++ __func__, result); ++ result = sizeof(vpd); ++ } ++ + result -= SCSI_VPD_HEADER_SIZE; + if (!memchr(&vpd[SCSI_VPD_HEADER_SIZE], page, result)) + return 0; +diff --git a/drivers/soc/qcom/cmd-db.c b/drivers/soc/qcom/cmd-db.c +index a5fd68411bed5..86fb2cd4f484d 100644 +--- a/drivers/soc/qcom/cmd-db.c ++++ b/drivers/soc/qcom/cmd-db.c +@@ -1,6 +1,10 @@ + /* SPDX-License-Identifier: GPL-2.0 */ +-/* Copyright (c) 2016-2018, 2020, The Linux Foundation. All rights reserved. */ ++/* ++ * Copyright (c) 2016-2018, 2020, The Linux Foundation. All rights reserved. ++ * Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved. ++ */ + ++#include + #include + #include + #include +@@ -17,6 +21,8 @@ + #define MAX_SLV_ID 8 + #define SLAVE_ID_MASK 0x7 + #define SLAVE_ID_SHIFT 16 ++#define SLAVE_ID(addr) FIELD_GET(GENMASK(19, 16), addr) ++#define VRM_ADDR(addr) FIELD_GET(GENMASK(19, 4), addr) + + /** + * struct entry_header: header for each entry in cmddb +@@ -220,6 +226,30 @@ const void *cmd_db_read_aux_data(const char *id, size_t *len) + } + EXPORT_SYMBOL_GPL(cmd_db_read_aux_data); + ++/** ++ * cmd_db_match_resource_addr() - Compare if both Resource addresses are same ++ * ++ * @addr1: Resource address to compare ++ * @addr2: Resource address to compare ++ * ++ * Return: true if two addresses refer to the same resource, false otherwise ++ */ ++bool cmd_db_match_resource_addr(u32 addr1, u32 addr2) ++{ ++ /* ++ * Each RPMh VRM accelerator resource has 3 or 4 contiguous 4-byte ++ * aligned addresses associated with it. Ignore the offset to check ++ * for VRM requests. 
++ */ ++ if (addr1 == addr2) ++ return true; ++ else if (SLAVE_ID(addr1) == CMD_DB_HW_VRM && VRM_ADDR(addr1) == VRM_ADDR(addr2)) ++ return true; ++ ++ return false; ++} ++EXPORT_SYMBOL_GPL(cmd_db_match_resource_addr); ++ + /** + * cmd_db_read_slave_id - Get the slave ID for a given resource address + * +diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c +index a021dc71807be..daf64be966fe1 100644 +--- a/drivers/soc/qcom/rpmh-rsc.c ++++ b/drivers/soc/qcom/rpmh-rsc.c +@@ -1,6 +1,7 @@ + // SPDX-License-Identifier: GPL-2.0 + /* + * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. ++ * Copyright (c) 2023-2024, Qualcomm Innovation Center, Inc. All rights reserved. + */ + + #define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME +@@ -557,7 +558,7 @@ static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs, + for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) { + addr = read_tcs_cmd(drv, drv->regs[RSC_DRV_CMD_ADDR], i, j); + for (k = 0; k < msg->num_cmds; k++) { +- if (addr == msg->cmds[k].addr) ++ if (cmd_db_match_resource_addr(msg->cmds[k].addr, addr)) + return -EBUSY; + } + } +diff --git a/drivers/thermal/qcom/lmh.c b/drivers/thermal/qcom/lmh.c +index f6edb12ec0041..5225b3621a56c 100644 +--- a/drivers/thermal/qcom/lmh.c ++++ b/drivers/thermal/qcom/lmh.c +@@ -95,6 +95,9 @@ static int lmh_probe(struct platform_device *pdev) + unsigned int enable_alg; + u32 node_id; + ++ if (!qcom_scm_is_available()) ++ return -EPROBE_DEFER; ++ + lmh_data = devm_kzalloc(dev, sizeof(*lmh_data), GFP_KERNEL); + if (!lmh_data) + return -ENOMEM; +diff --git a/drivers/video/fbdev/savage/savagefb_driver.c b/drivers/video/fbdev/savage/savagefb_driver.c +index ebc9aeffdde7c..ac41f8f37589f 100644 +--- a/drivers/video/fbdev/savage/savagefb_driver.c ++++ b/drivers/video/fbdev/savage/savagefb_driver.c +@@ -2276,7 +2276,10 @@ static int savagefb_probe(struct pci_dev *dev, const struct pci_device_id *id) + if (info->var.xres_virtual > 0x1000) + info->var.xres_virtual = 0x1000; + #endif +- savagefb_check_var(&info->var, info); ++ err = savagefb_check_var(&info->var, info); ++ if (err) ++ goto failed; ++ + savagefb_set_fix(info); + + /* +diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c +index 9215793a1c814..4895a69015a8e 100644 +--- a/drivers/watchdog/rti_wdt.c ++++ b/drivers/watchdog/rti_wdt.c +@@ -59,6 +59,8 @@ + #define PON_REASON_EOF_NUM 0xCCCCBBBB + #define RESERVED_MEM_MIN_SIZE 12 + ++#define MAX_HW_ERROR 250 ++ + static int heartbeat = DEFAULT_HEARTBEAT; + + /* +@@ -97,7 +99,7 @@ static int rti_wdt_start(struct watchdog_device *wdd) + * to be 50% or less than that; we obviouly want to configure the open + * window as large as possible so we select the 50% option. + */ +- wdd->min_hw_heartbeat_ms = 500 * wdd->timeout; ++ wdd->min_hw_heartbeat_ms = 520 * wdd->timeout + MAX_HW_ERROR; + + /* Generate NMI when wdt expires */ + writel_relaxed(RTIWWDRX_NMI, wdt->base + RTIWWDRXCTRL); +@@ -131,31 +133,33 @@ static int rti_wdt_setup_hw_hb(struct watchdog_device *wdd, u32 wsize) + * be petted during the open window; not too early or not too late. + * The HW configuration options only allow for the open window size + * to be 50% or less than that. ++ * To avoid any glitches, we accommodate 2% + max hardware error ++ * safety margin. 
+ */ + switch (wsize) { + case RTIWWDSIZE_50P: +- /* 50% open window => 50% min heartbeat */ +- wdd->min_hw_heartbeat_ms = 500 * heartbeat; ++ /* 50% open window => 52% min heartbeat */ ++ wdd->min_hw_heartbeat_ms = 520 * heartbeat + MAX_HW_ERROR; + break; + + case RTIWWDSIZE_25P: +- /* 25% open window => 75% min heartbeat */ +- wdd->min_hw_heartbeat_ms = 750 * heartbeat; ++ /* 25% open window => 77% min heartbeat */ ++ wdd->min_hw_heartbeat_ms = 770 * heartbeat + MAX_HW_ERROR; + break; + + case RTIWWDSIZE_12P5: +- /* 12.5% open window => 87.5% min heartbeat */ +- wdd->min_hw_heartbeat_ms = 875 * heartbeat; ++ /* 12.5% open window => 89.5% min heartbeat */ ++ wdd->min_hw_heartbeat_ms = 895 * heartbeat + MAX_HW_ERROR; + break; + + case RTIWWDSIZE_6P25: +- /* 6.5% open window => 93.5% min heartbeat */ +- wdd->min_hw_heartbeat_ms = 935 * heartbeat; ++ /* 6.5% open window => 95.5% min heartbeat */ ++ wdd->min_hw_heartbeat_ms = 955 * heartbeat + MAX_HW_ERROR; + break; + + case RTIWWDSIZE_3P125: +- /* 3.125% open window => 96.9% min heartbeat */ +- wdd->min_hw_heartbeat_ms = 969 * heartbeat; ++ /* 3.125% open window => 98.9% min heartbeat */ ++ wdd->min_hw_heartbeat_ms = 989 * heartbeat + MAX_HW_ERROR; + break; + + default: +@@ -233,14 +237,6 @@ static int rti_wdt_probe(struct platform_device *pdev) + return -EINVAL; + } + +- /* +- * If watchdog is running at 32k clock, it is not accurate. +- * Adjust frequency down in this case so that we don't pet +- * the watchdog too often. +- */ +- if (wdt->freq < 32768) +- wdt->freq = wdt->freq * 9 / 10; +- + pm_runtime_enable(dev); + ret = pm_runtime_resume_and_get(dev); + if (ret < 0) { +diff --git a/fs/9p/vfs_dentry.c b/fs/9p/vfs_dentry.c +index f16f735816349..01338d4c2d9e6 100644 +--- a/fs/9p/vfs_dentry.c ++++ b/fs/9p/vfs_dentry.c +@@ -48,12 +48,17 @@ static int v9fs_cached_dentry_delete(const struct dentry *dentry) + static void v9fs_dentry_release(struct dentry *dentry) + { + struct hlist_node *p, *n; ++ struct hlist_head head; + + p9_debug(P9_DEBUG_VFS, " dentry: %pd (%p)\n", + dentry, dentry); +- hlist_for_each_safe(p, n, (struct hlist_head *)&dentry->d_fsdata) ++ ++ spin_lock(&dentry->d_lock); ++ hlist_move_list((struct hlist_head *)&dentry->d_fsdata, &head); ++ spin_unlock(&dentry->d_lock); ++ ++ hlist_for_each_safe(p, n, &head) + p9_fid_put(hlist_entry(p, struct p9_fid, dlist)); +- dentry->d_fsdata = NULL; + } + + static int v9fs_lookup_revalidate(struct dentry *dentry, unsigned int flags) +diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c +index 97f50e9fd9eb0..297487ee83231 100644 +--- a/fs/afs/mntpt.c ++++ b/fs/afs/mntpt.c +@@ -140,6 +140,11 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt) + put_page(page); + if (ret < 0) + return ret; ++ ++ /* Don't cross a backup volume mountpoint from a backup volume */ ++ if (src_as->volume && src_as->volume->type == AFSVL_BACKVOL && ++ ctx->type == AFSVL_BACKVOL) ++ return -ENODEV; + } + + return 0; +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index 3df5477d48a81..3a47eec87b263 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -4544,18 +4544,10 @@ static void btrfs_destroy_delayed_refs(struct btrfs_transaction *trans, + struct btrfs_fs_info *fs_info) + { + struct rb_node *node; +- struct btrfs_delayed_ref_root *delayed_refs; ++ struct btrfs_delayed_ref_root *delayed_refs = &trans->delayed_refs; + struct btrfs_delayed_ref_node *ref; + +- delayed_refs = &trans->delayed_refs; +- + spin_lock(&delayed_refs->lock); +- if (atomic_read(&delayed_refs->num_entries) 
== 0) { +- spin_unlock(&delayed_refs->lock); +- btrfs_debug(fs_info, "delayed_refs has NO entry"); +- return; +- } +- + while ((node = rb_first_cached(&delayed_refs->href_root)) != NULL) { + struct btrfs_delayed_ref_head *head; + struct rb_node *n; +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 87f487b116577..41173701f1bef 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -3662,6 +3662,8 @@ static struct extent_buffer *grab_extent_buffer( + struct folio *folio = page_folio(page); + struct extent_buffer *exists; + ++ lockdep_assert_held(&page->mapping->i_private_lock); ++ + /* + * For subpage case, we completely rely on radix tree to ensure we + * don't try to insert two ebs for the same bytenr. So here we always +@@ -3729,13 +3731,14 @@ static int check_eb_alignment(struct btrfs_fs_info *fs_info, u64 start) + * The caller needs to free the existing folios and retry using the same order. + */ + static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, ++ struct btrfs_subpage *prealloc, + struct extent_buffer **found_eb_ret) + { + + struct btrfs_fs_info *fs_info = eb->fs_info; + struct address_space *mapping = fs_info->btree_inode->i_mapping; + const unsigned long index = eb->start >> PAGE_SHIFT; +- struct folio *existing_folio; ++ struct folio *existing_folio = NULL; + int ret; + + ASSERT(found_eb_ret); +@@ -3747,12 +3750,14 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, + ret = filemap_add_folio(mapping, eb->folios[i], index + i, + GFP_NOFS | __GFP_NOFAIL); + if (!ret) +- return 0; ++ goto finish; + + existing_folio = filemap_lock_folio(mapping, index + i); + /* The page cache only exists for a very short time, just retry. */ +- if (IS_ERR(existing_folio)) ++ if (IS_ERR(existing_folio)) { ++ existing_folio = NULL; + goto retry; ++ } + + /* For now, we should only have single-page folios for btree inode. */ + ASSERT(folio_nr_pages(existing_folio) == 1); +@@ -3763,14 +3768,13 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, + return -EAGAIN; + } + +- if (fs_info->nodesize < PAGE_SIZE) { +- /* +- * We're going to reuse the existing page, can drop our page +- * and subpage structure now. +- */ ++finish: ++ spin_lock(&mapping->i_private_lock); ++ if (existing_folio && fs_info->nodesize < PAGE_SIZE) { ++ /* We're going to reuse the existing page, can drop our folio now. */ + __free_page(folio_page(eb->folios[i], 0)); + eb->folios[i] = existing_folio; +- } else { ++ } else if (existing_folio) { + struct extent_buffer *existing_eb; + + existing_eb = grab_extent_buffer(fs_info, +@@ -3778,6 +3782,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, + if (existing_eb) { + /* The extent buffer still exists, we can use it directly. */ + *found_eb_ret = existing_eb; ++ spin_unlock(&mapping->i_private_lock); + folio_unlock(existing_folio); + folio_put(existing_folio); + return 1; +@@ -3786,6 +3791,22 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, + __free_page(folio_page(eb->folios[i], 0)); + eb->folios[i] = existing_folio; + } ++ eb->folio_size = folio_size(eb->folios[i]); ++ eb->folio_shift = folio_shift(eb->folios[i]); ++ /* Should not fail, as we have preallocated the memory. */ ++ ret = attach_extent_buffer_folio(eb, eb->folios[i], prealloc); ++ ASSERT(!ret); ++ /* ++ * To inform we have an extra eb under allocation, so that ++ * detach_extent_buffer_page() won't release the folio private when the ++ * eb hasn't been inserted into radix tree yet. 
++ * ++ * The ref will be decreased when the eb releases the page, in ++ * detach_extent_buffer_page(). Thus needs no special handling in the ++ * error path. ++ */ ++ btrfs_folio_inc_eb_refs(fs_info, eb->folios[i]); ++ spin_unlock(&mapping->i_private_lock); + return 0; + } + +@@ -3797,7 +3818,6 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + int attached = 0; + struct extent_buffer *eb; + struct extent_buffer *existing_eb = NULL; +- struct address_space *mapping = fs_info->btree_inode->i_mapping; + struct btrfs_subpage *prealloc = NULL; + u64 lockdep_owner = owner_root; + bool page_contig = true; +@@ -3863,7 +3883,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + for (int i = 0; i < num_folios; i++) { + struct folio *folio; + +- ret = attach_eb_folio_to_filemap(eb, i, &existing_eb); ++ ret = attach_eb_folio_to_filemap(eb, i, prealloc, &existing_eb); + if (ret > 0) { + ASSERT(existing_eb); + goto out; +@@ -3900,24 +3920,6 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info, + * and free the allocated page. + */ + folio = eb->folios[i]; +- eb->folio_size = folio_size(folio); +- eb->folio_shift = folio_shift(folio); +- spin_lock(&mapping->i_private_lock); +- /* Should not fail, as we have preallocated the memory */ +- ret = attach_extent_buffer_folio(eb, folio, prealloc); +- ASSERT(!ret); +- /* +- * To inform we have extra eb under allocation, so that +- * detach_extent_buffer_page() won't release the folio private +- * when the eb hasn't yet been inserted into radix tree. +- * +- * The ref will be decreased when the eb released the page, in +- * detach_extent_buffer_page(). +- * Thus needs no special handling in error path. +- */ +- btrfs_folio_inc_eb_refs(fs_info, folio); +- spin_unlock(&mapping->i_private_lock); +- + WARN_ON(btrfs_folio_test_dirty(fs_info, folio, eb->start, eb->len)); + + /* +diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c +index 40e5f7f2fcb7f..1167899a16d05 100644 +--- a/fs/btrfs/qgroup.c ++++ b/fs/btrfs/qgroup.c +@@ -468,6 +468,7 @@ int btrfs_read_qgroup_config(struct btrfs_fs_info *fs_info) + } + if (!qgroup) { + struct btrfs_qgroup *prealloc; ++ struct btrfs_root *tree_root = fs_info->tree_root; + + prealloc = kzalloc(sizeof(*prealloc), GFP_KERNEL); + if (!prealloc) { +@@ -475,6 +476,25 @@ int btrfs_read_qgroup_config(struct btrfs_fs_info *fs_info) + goto out; + } + qgroup = add_qgroup_rb(fs_info, prealloc, found_key.offset); ++ /* ++ * If a qgroup exists for a subvolume ID, it is possible ++ * that subvolume has been deleted, in which case ++ * re-using that ID would lead to incorrect accounting. ++ * ++ * Ensure that we skip any such subvol ids. ++ * ++ * We don't need to lock because this is only called ++ * during mount before we start doing things like creating ++ * subvolumes. ++ */ ++ if (is_fstree(qgroup->qgroupid) && ++ qgroup->qgroupid > tree_root->free_objectid) ++ /* ++ * Don't need to check against BTRFS_LAST_FREE_OBJECTID, ++ * as it will get checked on the next call to ++ * btrfs_get_free_objectid. 
++ */ ++ tree_root->free_objectid = qgroup->qgroupid + 1; + } + ret = btrfs_sysfs_add_one_qgroup(fs_info, qgroup); + if (ret < 0) +@@ -3129,7 +3149,7 @@ static int qgroup_auto_inherit(struct btrfs_fs_info *fs_info, + qgids = res->qgroups; + + list_for_each_entry(qg_list, &inode_qg->groups, next_group) +- qgids[i] = qg_list->group->qgroupid; ++ qgids[i++] = qg_list->group->qgroupid; + + *inherit = res; + return 0; +@@ -3826,14 +3846,14 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid, + /* we're resuming qgroup rescan at mount time */ + if (!(fs_info->qgroup_flags & + BTRFS_QGROUP_STATUS_FLAG_RESCAN)) { +- btrfs_warn(fs_info, ++ btrfs_debug(fs_info, + "qgroup rescan init failed, qgroup rescan is not queued"); + ret = -EINVAL; + } else if (!(fs_info->qgroup_flags & + BTRFS_QGROUP_STATUS_FLAG_ON)) { +- btrfs_warn(fs_info, ++ btrfs_debug(fs_info, + "qgroup rescan init failed, qgroup is not enabled"); +- ret = -EINVAL; ++ ret = -ENOTCONN; + } + + if (ret) +@@ -3844,14 +3864,12 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid, + + if (init_flags) { + if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) { +- btrfs_warn(fs_info, +- "qgroup rescan is already in progress"); + ret = -EINPROGRESS; + } else if (!(fs_info->qgroup_flags & + BTRFS_QGROUP_STATUS_FLAG_ON)) { +- btrfs_warn(fs_info, ++ btrfs_debug(fs_info, + "qgroup rescan init failed, qgroup is not enabled"); +- ret = -EINVAL; ++ ret = -ENOTCONN; + } else if (btrfs_qgroup_mode(fs_info) == BTRFS_QGROUP_MODE_DISABLED) { + /* Quota disable is in progress */ + ret = -EBUSY; +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index 7e44ccaf348f2..fa6964de34379 100644 +--- a/fs/btrfs/super.c ++++ b/fs/btrfs/super.c +@@ -119,6 +119,7 @@ enum { + Opt_thread_pool, + Opt_treelog, + Opt_user_subvol_rm_allowed, ++ Opt_norecovery, + + /* Rescue options */ + Opt_rescue, +@@ -245,6 +246,8 @@ static const struct fs_parameter_spec btrfs_fs_parameters[] = { + __fsparam(NULL, "nologreplay", Opt_nologreplay, fs_param_deprecated, NULL), + /* Deprecated, with alias rescue=usebackuproot */ + __fsparam(NULL, "usebackuproot", Opt_usebackuproot, fs_param_deprecated, NULL), ++ /* For compatibility only, alias for "rescue=nologreplay". */ ++ fsparam_flag("norecovery", Opt_norecovery), + + /* Debugging options. */ + fsparam_flag_no("enospc_debug", Opt_enospc_debug), +@@ -438,6 +441,11 @@ static int btrfs_parse_param(struct fs_context *fc, struct fs_parameter *param) + "'nologreplay' is deprecated, use 'rescue=nologreplay' instead"); + btrfs_set_opt(ctx->mount_opt, NOLOGREPLAY); + break; ++ case Opt_norecovery: ++ btrfs_info(NULL, ++"'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'"); ++ btrfs_set_opt(ctx->mount_opt, NOLOGREPLAY); ++ break; + case Opt_flushoncommit: + if (result.negated) + btrfs_clear_opt(ctx->mount_opt, FLUSHONCOMMIT); +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 472918a5bc73a..d4fc5fedd8ee5 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -4856,18 +4856,23 @@ static int btrfs_log_prealloc_extents(struct btrfs_trans_handle *trans, + path->slots[0]++; + continue; + } +- if (!dropped_extents) { +- /* +- * Avoid logging extent items logged in past fsync calls +- * and leading to duplicate keys in the log tree. +- */ ++ /* ++ * Avoid overlapping items in the log tree. The first time we ++ * get here, get rid of everything from a past fsync. 
After ++ * that, if the current extent starts before the end of the last ++ * extent we copied, truncate the last one. This can happen if ++ * an ordered extent completion modifies the subvolume tree ++ * while btrfs_next_leaf() has the tree unlocked. ++ */ ++ if (!dropped_extents || key.offset < truncate_offset) { + ret = truncate_inode_items(trans, root->log_root, inode, +- truncate_offset, ++ min(key.offset, truncate_offset), + BTRFS_EXTENT_DATA_KEY); + if (ret) + goto out; + dropped_extents = true; + } ++ truncate_offset = btrfs_file_extent_end(path); + if (ins_nr == 0) + start_slot = slot; + ins_nr++; +diff --git a/fs/erofs/decompressor_deflate.c b/fs/erofs/decompressor_deflate.c +index 81e65c453ef05..3a3461561a3c9 100644 +--- a/fs/erofs/decompressor_deflate.c ++++ b/fs/erofs/decompressor_deflate.c +@@ -46,39 +46,15 @@ int __init z_erofs_deflate_init(void) + /* by default, use # of possible CPUs instead */ + if (!z_erofs_deflate_nstrms) + z_erofs_deflate_nstrms = num_possible_cpus(); +- +- for (; z_erofs_deflate_avail_strms < z_erofs_deflate_nstrms; +- ++z_erofs_deflate_avail_strms) { +- struct z_erofs_deflate *strm; +- +- strm = kzalloc(sizeof(*strm), GFP_KERNEL); +- if (!strm) +- goto out_failed; +- +- /* XXX: in-kernel zlib cannot shrink windowbits currently */ +- strm->z.workspace = vmalloc(zlib_inflate_workspacesize()); +- if (!strm->z.workspace) { +- kfree(strm); +- goto out_failed; +- } +- +- spin_lock(&z_erofs_deflate_lock); +- strm->next = z_erofs_deflate_head; +- z_erofs_deflate_head = strm; +- spin_unlock(&z_erofs_deflate_lock); +- } + return 0; +- +-out_failed: +- erofs_err(NULL, "failed to allocate zlib workspace"); +- z_erofs_deflate_exit(); +- return -ENOMEM; + } + + int z_erofs_load_deflate_config(struct super_block *sb, + struct erofs_super_block *dsb, void *data, int size) + { + struct z_erofs_deflate_cfgs *dfl = data; ++ static DEFINE_MUTEX(deflate_resize_mutex); ++ static bool inited; + + if (!dfl || size < sizeof(struct z_erofs_deflate_cfgs)) { + erofs_err(sb, "invalid deflate cfgs, size=%u", size); +@@ -89,9 +65,36 @@ int z_erofs_load_deflate_config(struct super_block *sb, + erofs_err(sb, "unsupported windowbits %u", dfl->windowbits); + return -EOPNOTSUPP; + } ++ mutex_lock(&deflate_resize_mutex); ++ if (!inited) { ++ for (; z_erofs_deflate_avail_strms < z_erofs_deflate_nstrms; ++ ++z_erofs_deflate_avail_strms) { ++ struct z_erofs_deflate *strm; ++ ++ strm = kzalloc(sizeof(*strm), GFP_KERNEL); ++ if (!strm) ++ goto failed; ++ /* XXX: in-kernel zlib cannot customize windowbits */ ++ strm->z.workspace = vmalloc(zlib_inflate_workspacesize()); ++ if (!strm->z.workspace) { ++ kfree(strm); ++ goto failed; ++ } + ++ spin_lock(&z_erofs_deflate_lock); ++ strm->next = z_erofs_deflate_head; ++ z_erofs_deflate_head = strm; ++ spin_unlock(&z_erofs_deflate_lock); ++ } ++ inited = true; ++ } ++ mutex_unlock(&deflate_resize_mutex); + erofs_info(sb, "EXPERIMENTAL DEFLATE feature in use. 
Use at your own risk!"); + return 0; ++failed: ++ mutex_unlock(&deflate_resize_mutex); ++ z_erofs_deflate_exit(); ++ return -ENOMEM; + } + + int z_erofs_deflate_decompress(struct z_erofs_decompress_req *rq, +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c +index 6de6bf57699be..30e824866274e 100644 +--- a/fs/ext4/inode.c ++++ b/fs/ext4/inode.c +@@ -2334,7 +2334,7 @@ static int mpage_journal_page_buffers(handle_t *handle, + + if (folio_pos(folio) + len > size && + !ext4_verity_in_progress(inode)) +- len = size - folio_pos(folio); ++ len = size & (len - 1); + + return ext4_journal_folio_buffers(handle, folio, len); + } +diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h +index 56938532b4ce2..7bfc5fb5a1285 100644 +--- a/fs/ext4/mballoc.h ++++ b/fs/ext4/mballoc.h +@@ -193,8 +193,8 @@ struct ext4_allocation_context { + ext4_grpblk_t ac_orig_goal_len; + + __u32 ac_flags; /* allocation hints */ ++ __u32 ac_groups_linear_remaining; + __u16 ac_groups_scanned; +- __u16 ac_groups_linear_remaining; + __u16 ac_found; + __u16 ac_cX_found[EXT4_MB_NUM_CRS]; + __u16 ac_tail; +diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c +index b67a176bfcf9f..9fdd134220736 100644 +--- a/fs/ext4/xattr.c ++++ b/fs/ext4/xattr.c +@@ -3113,8 +3113,10 @@ ext4_xattr_block_cache_find(struct inode *inode, + + bh = ext4_sb_bread(inode->i_sb, ce->e_value, REQ_PRIO); + if (IS_ERR(bh)) { +- if (PTR_ERR(bh) == -ENOMEM) ++ if (PTR_ERR(bh) == -ENOMEM) { ++ mb_cache_entry_put(ea_block_cache, ce); + return NULL; ++ } + bh = NULL; + EXT4_ERROR_INODE(inode, "block %lu read error", + (unsigned long)ce->e_value); +diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c +index c26effdce9aa7..2af50bc34fcd7 100644 +--- a/fs/f2fs/inode.c ++++ b/fs/f2fs/inode.c +@@ -361,6 +361,12 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) + return false; + } + ++ if (fi->i_xattr_nid && f2fs_check_nid_range(sbi, fi->i_xattr_nid)) { ++ f2fs_warn(sbi, "%s: inode (ino=%lx) has corrupted i_xattr_nid: %u, run fsck to fix.", ++ __func__, inode->i_ino, fi->i_xattr_nid); ++ return false; ++ } ++ + return true; + } + +diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c +index 4e8e41c8b3c0e..4ac6c8c403c26 100644 +--- a/fs/iomap/buffered-io.c ++++ b/fs/iomap/buffered-io.c +@@ -909,11 +909,11 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, + static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) + { + loff_t length = iomap_length(iter); +- size_t chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER; + loff_t pos = iter->pos; + ssize_t written = 0; + long status = 0; + struct address_space *mapping = iter->inode->i_mapping; ++ size_t chunk = mapping_max_folio_size(mapping); + unsigned int bdp_flags = (iter->flags & IOMAP_NOWAIT) ? 
BDP_ASYNC : 0; + + do { +diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h +index 06253695fe53f..2ff0c35d80e0d 100644 +--- a/fs/nfs/internal.h ++++ b/fs/nfs/internal.h +@@ -710,9 +710,9 @@ unsigned long nfs_block_bits(unsigned long bsize, unsigned char *nrbitsp) + if ((bsize & (bsize - 1)) || nrbitsp) { + unsigned char nrbits; + +- for (nrbits = 31; nrbits && !(bsize & (1 << nrbits)); nrbits--) ++ for (nrbits = 31; nrbits && !(bsize & (1UL << nrbits)); nrbits--) + ; +- bsize = 1 << nrbits; ++ bsize = 1UL << nrbits; + if (nrbitsp) + *nrbitsp = nrbits; + } +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index ea390db94b622..c93c12063b3af 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -5456,7 +5456,7 @@ static bool nfs4_read_plus_not_supported(struct rpc_task *task, + struct rpc_message *msg = &task->tk_msg; + + if (msg->rpc_proc == &nfs4_procedures[NFSPROC4_CLNT_READ_PLUS] && +- server->caps & NFS_CAP_READ_PLUS && task->tk_status == -ENOTSUPP) { ++ task->tk_status == -ENOTSUPP) { + server->caps &= ~NFS_CAP_READ_PLUS; + msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ]; + rpc_restart_call_prepare(task); +diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c +index aee40db7a036f..35e6c55a0d231 100644 +--- a/fs/nilfs2/dir.c ++++ b/fs/nilfs2/dir.c +@@ -608,7 +608,7 @@ int nilfs_empty_dir(struct inode *inode) + + kaddr = nilfs_get_folio(inode, i, &folio); + if (IS_ERR(kaddr)) +- continue; ++ return 0; + + de = (struct nilfs_dir_entry *)kaddr; + kaddr += nilfs_last_byte(inode, i) - NILFS_DIR_REC_LEN(1); +diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c +index 3c1e4e9eafa31..a1b011cb4b471 100644 +--- a/fs/nilfs2/segment.c ++++ b/fs/nilfs2/segment.c +@@ -1652,6 +1652,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci) + if (bh->b_folio != bd_folio) { + if (bd_folio) { + folio_lock(bd_folio); ++ folio_wait_writeback(bd_folio); + folio_clear_dirty_for_io(bd_folio); + folio_start_writeback(bd_folio); + folio_unlock(bd_folio); +@@ -1665,6 +1666,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci) + if (bh == segbuf->sb_super_root) { + if (bh->b_folio != bd_folio) { + folio_lock(bd_folio); ++ folio_wait_writeback(bd_folio); + folio_clear_dirty_for_io(bd_folio); + folio_start_writeback(bd_folio); + folio_unlock(bd_folio); +@@ -1681,6 +1683,7 @@ static void nilfs_segctor_prepare_write(struct nilfs_sc_info *sci) + } + if (bd_folio) { + folio_lock(bd_folio); ++ folio_wait_writeback(bd_folio); + folio_clear_dirty_for_io(bd_folio); + folio_start_writeback(bd_folio); + folio_unlock(bd_folio); +diff --git a/fs/proc/base.c b/fs/proc/base.c +index 18550c071d71c..72a1acd03675c 100644 +--- a/fs/proc/base.c ++++ b/fs/proc/base.c +@@ -3214,7 +3214,7 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns, + mm = get_task_mm(task); + if (mm) { + seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items); +- seq_printf(m, "ksm_zero_pages %lu\n", mm->ksm_zero_pages); ++ seq_printf(m, "ksm_zero_pages %ld\n", mm_ksm_zero_pages(mm)); + seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages); + seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm)); + mmput(mm); +diff --git a/fs/proc/fd.c b/fs/proc/fd.c +index 6e72e5ad42bc7..f4b1c8b42a511 100644 +--- a/fs/proc/fd.c ++++ b/fs/proc/fd.c +@@ -74,7 +74,18 @@ static int seq_show(struct seq_file *m, void *v) + return 0; + } + +-static int proc_fdinfo_access_allowed(struct inode *inode) ++static int seq_fdinfo_open(struct inode *inode, struct file *file) ++{ ++ return 
single_open(file, seq_show, inode); ++} ++ ++/** ++ * Shared /proc/pid/fdinfo and /proc/pid/fdinfo/fd permission helper to ensure ++ * that the current task has PTRACE_MODE_READ in addition to the normal ++ * POSIX-like checks. ++ */ ++static int proc_fdinfo_permission(struct mnt_idmap *idmap, struct inode *inode, ++ int mask) + { + bool allowed = false; + struct task_struct *task = get_proc_task(inode); +@@ -88,18 +99,13 @@ static int proc_fdinfo_access_allowed(struct inode *inode) + if (!allowed) + return -EACCES; + +- return 0; ++ return generic_permission(idmap, inode, mask); + } + +-static int seq_fdinfo_open(struct inode *inode, struct file *file) +-{ +- int ret = proc_fdinfo_access_allowed(inode); +- +- if (ret) +- return ret; +- +- return single_open(file, seq_show, inode); +-} ++static const struct inode_operations proc_fdinfo_file_inode_operations = { ++ .permission = proc_fdinfo_permission, ++ .setattr = proc_setattr, ++}; + + static const struct file_operations proc_fdinfo_file_operations = { + .open = seq_fdinfo_open, +@@ -388,6 +394,8 @@ static struct dentry *proc_fdinfo_instantiate(struct dentry *dentry, + ei = PROC_I(inode); + ei->fd = data->fd; + ++ inode->i_op = &proc_fdinfo_file_inode_operations; ++ + inode->i_fop = &proc_fdinfo_file_operations; + tid_fd_update_inode(task, inode, 0); + +@@ -407,23 +415,13 @@ static int proc_readfdinfo(struct file *file, struct dir_context *ctx) + proc_fdinfo_instantiate); + } + +-static int proc_open_fdinfo(struct inode *inode, struct file *file) +-{ +- int ret = proc_fdinfo_access_allowed(inode); +- +- if (ret) +- return ret; +- +- return 0; +-} +- + const struct inode_operations proc_fdinfo_inode_operations = { + .lookup = proc_lookupfdinfo, ++ .permission = proc_fdinfo_permission, + .setattr = proc_setattr, + }; + + const struct file_operations proc_fdinfo_operations = { +- .open = proc_open_fdinfo, + .read = generic_read_dir, + .iterate_shared = proc_readfdinfo, + .llseek = generic_file_llseek, +diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c +index 102f48668c352..a097c9a65234b 100644 +--- a/fs/proc/task_mmu.c ++++ b/fs/proc/task_mmu.c +@@ -965,12 +965,17 @@ static int show_smaps_rollup(struct seq_file *m, void *v) + break; + + /* Case 1 and 2 above */ +- if (vma->vm_start >= last_vma_end) ++ if (vma->vm_start >= last_vma_end) { ++ smap_gather_stats(vma, &mss, 0); ++ last_vma_end = vma->vm_end; + continue; ++ } + + /* Case 4 above */ +- if (vma->vm_end > last_vma_end) ++ if (vma->vm_end > last_vma_end) { + smap_gather_stats(vma, &mss, last_vma_end); ++ last_vma_end = vma->vm_end; ++ } + } + } for_each_vma(vmi, vma); + +diff --git a/fs/smb/client/cifspdu.h b/fs/smb/client/cifspdu.h +index c46d418c1c0c3..a2072ab9e586d 100644 +--- a/fs/smb/client/cifspdu.h ++++ b/fs/smb/client/cifspdu.h +@@ -2574,7 +2574,7 @@ typedef struct { + + + struct win_dev { +- unsigned char type[8]; /* IntxCHR or IntxBLK or LnxFIFO*/ ++ unsigned char type[8]; /* IntxCHR or IntxBLK or LnxFIFO or LnxSOCK */ + __le64 major; + __le64 minor; + } __attribute__((packed)); +diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c +index 60afab5c83d41..d0e69591332d8 100644 +--- a/fs/smb/client/inode.c ++++ b/fs/smb/client/inode.c +@@ -591,6 +591,10 @@ cifs_sfu_type(struct cifs_fattr *fattr, const char *path, + mnr = le64_to_cpu(*(__le64 *)(pbuf+16)); + fattr->cf_rdev = MKDEV(mjr, mnr); + } ++ } else if (memcmp("LnxSOCK", pbuf, 8) == 0) { ++ cifs_dbg(FYI, "Socket\n"); ++ fattr->cf_mode |= S_IFSOCK; ++ fattr->cf_dtype = DT_SOCK; + } else if (memcmp("IntxLNK", pbuf, 
7) == 0) { + cifs_dbg(FYI, "Symlink\n"); + fattr->cf_mode |= S_IFLNK; +diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c +index 6fea0aed43461..d35c45f3a25a0 100644 +--- a/fs/smb/client/smb2ops.c ++++ b/fs/smb/client/smb2ops.c +@@ -4996,6 +4996,9 @@ static int __cifs_sfu_make_node(unsigned int xid, struct inode *inode, + pdev.major = cpu_to_le64(MAJOR(dev)); + pdev.minor = cpu_to_le64(MINOR(dev)); + break; ++ case S_IFSOCK: ++ strscpy(pdev.type, "LnxSOCK"); ++ break; + case S_IFIFO: + strscpy(pdev.type, "LnxFIFO"); + break; +diff --git a/fs/smb/client/smb2transport.c b/fs/smb/client/smb2transport.c +index 02135a6053051..1476c445cadcf 100644 +--- a/fs/smb/client/smb2transport.c ++++ b/fs/smb/client/smb2transport.c +@@ -216,8 +216,8 @@ smb2_find_smb_tcon(struct TCP_Server_Info *server, __u64 ses_id, __u32 tid) + } + tcon = smb2_find_smb_sess_tcon_unlocked(ses, tid); + if (!tcon) { +- cifs_put_smb_ses(ses); + spin_unlock(&cifs_tcp_ses_lock); ++ cifs_put_smb_ses(ses); + return NULL; + } + spin_unlock(&cifs_tcp_ses_lock); +diff --git a/fs/tracefs/event_inode.c b/fs/tracefs/event_inode.c +index a878cea70f4c9..55a40a730b10c 100644 +--- a/fs/tracefs/event_inode.c ++++ b/fs/tracefs/event_inode.c +@@ -50,8 +50,12 @@ static struct eventfs_root_inode *get_root_inode(struct eventfs_inode *ei) + /* Just try to make something consistent and unique */ + static int eventfs_dir_ino(struct eventfs_inode *ei) + { +- if (!ei->ino) ++ if (!ei->ino) { + ei->ino = get_next_ino(); ++ /* Must not have the file inode number */ ++ if (ei->ino == EVENTFS_FILE_INODE_INO) ++ ei->ino = get_next_ino(); ++ } + + return ei->ino; + } +@@ -345,10 +349,9 @@ static struct eventfs_inode *eventfs_find_events(struct dentry *dentry) + * If the ei is being freed, the ownership of the children + * doesn't matter. + */ +- if (ei->is_freed) { +- ei = NULL; +- break; +- } ++ if (ei->is_freed) ++ return NULL; ++ + // Walk upwards until you find the events inode + } while (!ei->is_events); + +diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c +index 417c840e64033..d8c60c2b76e23 100644 +--- a/fs/tracefs/inode.c ++++ b/fs/tracefs/inode.c +@@ -439,10 +439,26 @@ static int tracefs_show_options(struct seq_file *m, struct dentry *root) + return 0; + } + ++static int tracefs_drop_inode(struct inode *inode) ++{ ++ struct tracefs_inode *ti = get_tracefs(inode); ++ ++ /* ++ * This inode is being freed and cannot be used for ++ * eventfs. Clear the flag so that it doesn't call into ++ * eventfs during the remount flag updates. The eventfs_inode ++ * gets freed after an RCU cycle, so the content will still ++ * be safe if the iteration is going on now. ++ */ ++ ti->flags &= ~TRACEFS_EVENT_INODE; ++ ++ return 1; ++} ++ + static const struct super_operations tracefs_super_operations = { + .alloc_inode = tracefs_alloc_inode, + .free_inode = tracefs_free_inode, +- .drop_inode = generic_delete_inode, ++ .drop_inode = tracefs_drop_inode, + .statfs = simple_statfs, + .remount_fs = tracefs_remount, + .show_options = tracefs_show_options, +@@ -469,22 +485,7 @@ static int tracefs_d_revalidate(struct dentry *dentry, unsigned int flags) + return !(ei && ei->is_freed); + } + +-static void tracefs_d_iput(struct dentry *dentry, struct inode *inode) +-{ +- struct tracefs_inode *ti = get_tracefs(inode); +- +- /* +- * This inode is being freed and cannot be used for +- * eventfs. Clear the flag so that it doesn't call into +- * eventfs during the remount flag updates. 
The eventfs_inode +- * gets freed after an RCU cycle, so the content will still +- * be safe if the iteration is going on now. +- */ +- ti->flags &= ~TRACEFS_EVENT_INODE; +-} +- + static const struct dentry_operations tracefs_dentry_operations = { +- .d_iput = tracefs_d_iput, + .d_revalidate = tracefs_d_revalidate, + .d_release = tracefs_d_release, + }; +diff --git a/fs/verity/init.c b/fs/verity/init.c +index cb2c9aac61ed0..f440f0e61e3e6 100644 +--- a/fs/verity/init.c ++++ b/fs/verity/init.c +@@ -10,8 +10,6 @@ + #include + + #ifdef CONFIG_SYSCTL +-static struct ctl_table_header *fsverity_sysctl_header; +- + static struct ctl_table fsverity_sysctl_table[] = { + #ifdef CONFIG_FS_VERITY_BUILTIN_SIGNATURES + { +@@ -28,10 +26,7 @@ static struct ctl_table fsverity_sysctl_table[] = { + + static void __init fsverity_init_sysctl(void) + { +- fsverity_sysctl_header = register_sysctl("fs/verity", +- fsverity_sysctl_table); +- if (!fsverity_sysctl_header) +- panic("fsverity sysctl registration failed"); ++ register_sysctl_init("fs/verity", fsverity_sysctl_table); + } + #else /* CONFIG_SYSCTL */ + static inline void fsverity_init_sysctl(void) +diff --git a/include/linux/ksm.h b/include/linux/ksm.h +index 7e2b1de3996ac..f4692ec361e1b 100644 +--- a/include/linux/ksm.h ++++ b/include/linux/ksm.h +@@ -33,16 +33,27 @@ void __ksm_exit(struct mm_struct *mm); + */ + #define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte)) + +-extern unsigned long ksm_zero_pages; ++extern atomic_long_t ksm_zero_pages; ++ ++static inline void ksm_map_zero_page(struct mm_struct *mm) ++{ ++ atomic_long_inc(&ksm_zero_pages); ++ atomic_long_inc(&mm->ksm_zero_pages); ++} + + static inline void ksm_might_unmap_zero_page(struct mm_struct *mm, pte_t pte) + { + if (is_ksm_zero_pte(pte)) { +- ksm_zero_pages--; +- mm->ksm_zero_pages--; ++ atomic_long_dec(&ksm_zero_pages); ++ atomic_long_dec(&mm->ksm_zero_pages); + } + } + ++static inline long mm_ksm_zero_pages(struct mm_struct *mm) ++{ ++ return atomic_long_read(&mm->ksm_zero_pages); ++} ++ + static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm) + { + int ret; +diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h +index 5240bd7bca338..962e7e05e4f2f 100644 +--- a/include/linux/mm_types.h ++++ b/include/linux/mm_types.h +@@ -988,7 +988,7 @@ struct mm_struct { + * Represent how many empty pages are merged with kernel zero + * pages when enabling KSM use_zero_pages. 
+ */ +- unsigned long ksm_zero_pages; ++ atomic_long_t ksm_zero_pages; + #endif /* CONFIG_KSM */ + #ifdef CONFIG_LRU_GEN_WALKS_MMU + struct { +diff --git a/include/linux/mmc/slot-gpio.h b/include/linux/mmc/slot-gpio.h +index 5d3d15e97868a..66272fdce43d8 100644 +--- a/include/linux/mmc/slot-gpio.h ++++ b/include/linux/mmc/slot-gpio.h +@@ -21,6 +21,7 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id, + unsigned int debounce); + int mmc_gpiod_request_ro(struct mmc_host *host, const char *con_id, + unsigned int idx, unsigned int debounce); ++int mmc_gpiod_set_cd_config(struct mmc_host *host, unsigned long config); + void mmc_gpio_set_cd_isr(struct mmc_host *host, + irqreturn_t (*isr)(int irq, void *dev_id)); + int mmc_gpio_set_cd_wake(struct mmc_host *host, bool on); +diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h +index 2df35e65557d2..5283b6193e7dc 100644 +--- a/include/linux/pagemap.h ++++ b/include/linux/pagemap.h +@@ -344,6 +344,19 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask) + m->gfp_mask = mask; + } + ++/* ++ * There are some parts of the kernel which assume that PMD entries ++ * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then, ++ * limit the maximum allocation order to PMD size. I'm not aware of any ++ * assumptions about maximum order if THP are disabled, but 8 seems like ++ * a good order (that's 1MB if you're using 4kB pages) ++ */ ++#ifdef CONFIG_TRANSPARENT_HUGEPAGE ++#define MAX_PAGECACHE_ORDER HPAGE_PMD_ORDER ++#else ++#define MAX_PAGECACHE_ORDER 8 ++#endif ++ + /** + * mapping_set_large_folios() - Indicate the file supports large folios. + * @mapping: The file. +@@ -370,6 +383,14 @@ static inline bool mapping_large_folio_support(struct address_space *mapping) + test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags); + } + ++/* Return the maximum folio size for this pagecache mapping, in bytes. */ ++static inline size_t mapping_max_folio_size(struct address_space *mapping) ++{ ++ if (mapping_large_folio_support(mapping)) ++ return PAGE_SIZE << MAX_PAGECACHE_ORDER; ++ return PAGE_SIZE; ++} ++ + static inline int filemap_nr_thps(struct address_space *mapping) + { + #ifdef CONFIG_READ_ONLY_THP_FOR_FS +@@ -528,19 +549,6 @@ static inline void *detach_page_private(struct page *page) + return folio_detach_private(page_folio(page)); + } + +-/* +- * There are some parts of the kernel which assume that PMD entries +- * are exactly HPAGE_PMD_ORDER. Those should be fixed, but until then, +- * limit the maximum allocation order to PMD size. I'm not aware of any +- * assumptions about maximum order if THP are disabled, but 8 seems like +- * a good order (that's 1MB if you're using 4kB pages) +- */ +-#ifdef CONFIG_TRANSPARENT_HUGEPAGE +-#define MAX_PAGECACHE_ORDER HPAGE_PMD_ORDER +-#else +-#define MAX_PAGECACHE_ORDER 8 +-#endif +- + #ifdef CONFIG_NUMA + struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order); + #else +diff --git a/include/net/tcp_ao.h b/include/net/tcp_ao.h +index 471e177362b4c..5d8e9ed2c0056 100644 +--- a/include/net/tcp_ao.h ++++ b/include/net/tcp_ao.h +@@ -86,7 +86,8 @@ static inline int tcp_ao_sizeof_key(const struct tcp_ao_key *key) + struct tcp_ao_info { + /* List of tcp_ao_key's */ + struct hlist_head head; +- /* current_key and rnext_key aren't maintained on listen sockets. ++ /* current_key and rnext_key are maintained on sockets ++ * in TCP_AO_ESTABLISHED states. + * Their purpose is to cache keys on established connections, + * saving needless lookups. 
Never dereference any of them from + * listen sockets. +@@ -201,9 +202,9 @@ struct tcp6_ao_context { + }; + + struct tcp_sigpool; ++/* Established states are fast-path and there always is current_key/rnext_key */ + #define TCP_AO_ESTABLISHED (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | \ +- TCPF_CLOSE | TCPF_CLOSE_WAIT | \ +- TCPF_LAST_ACK | TCPF_CLOSING) ++ TCPF_CLOSE_WAIT | TCPF_LAST_ACK | TCPF_CLOSING) + + int tcp_ao_transmit_skb(struct sock *sk, struct sk_buff *skb, + struct tcp_ao_key *key, struct tcphdr *th, +diff --git a/include/soc/qcom/cmd-db.h b/include/soc/qcom/cmd-db.h +index c8bb56e6852a8..47a6cab75e630 100644 +--- a/include/soc/qcom/cmd-db.h ++++ b/include/soc/qcom/cmd-db.h +@@ -1,5 +1,8 @@ + /* SPDX-License-Identifier: GPL-2.0 */ +-/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. */ ++/* ++ * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. ++ * Copyright (c) 2024, Qualcomm Innovation Center, Inc. All rights reserved. ++ */ + + #ifndef __QCOM_COMMAND_DB_H__ + #define __QCOM_COMMAND_DB_H__ +@@ -21,6 +24,8 @@ u32 cmd_db_read_addr(const char *resource_id); + + const void *cmd_db_read_aux_data(const char *resource_id, size_t *len); + ++bool cmd_db_match_resource_addr(u32 addr1, u32 addr2); ++ + enum cmd_db_hw_type cmd_db_read_slave_id(const char *resource_id); + + int cmd_db_ready(void); +@@ -31,6 +36,9 @@ static inline u32 cmd_db_read_addr(const char *resource_id) + static inline const void *cmd_db_read_aux_data(const char *resource_id, size_t *len) + { return ERR_PTR(-ENODEV); } + ++static inline bool cmd_db_match_resource_addr(u32 addr1, u32 addr2) ++{ return false; } ++ + static inline enum cmd_db_hw_type cmd_db_read_slave_id(const char *resource_id) + { return -ENODEV; } + +diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h +index 03e52094630ac..bb6492b667511 100644 +--- a/io_uring/io_uring.h ++++ b/io_uring/io_uring.h +@@ -442,7 +442,7 @@ static inline bool io_file_can_poll(struct io_kiocb *req) + { + if (req->flags & REQ_F_CAN_POLL) + return true; +- if (file_can_poll(req->file)) { ++ if (req->file && file_can_poll(req->file)) { + req->flags |= REQ_F_CAN_POLL; + return true; + } +diff --git a/io_uring/napi.c b/io_uring/napi.c +index 883a1a6659075..8c18ede595c41 100644 +--- a/io_uring/napi.c ++++ b/io_uring/napi.c +@@ -261,12 +261,14 @@ int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg) + } + + /* +- * __io_napi_adjust_timeout() - Add napi id to the busy poll list ++ * __io_napi_adjust_timeout() - adjust busy loop timeout + * @ctx: pointer to io-uring context structure + * @iowq: pointer to io wait queue + * @ts: pointer to timespec or NULL + * + * Adjust the busy loop timeout according to timespec and busy poll timeout. ++ * If the specified NAPI timeout is bigger than the wait timeout, then adjust ++ * the NAPI timeout accordingly. 
+ */ + void __io_napi_adjust_timeout(struct io_ring_ctx *ctx, struct io_wait_queue *iowq, + struct timespec64 *ts) +@@ -274,16 +276,16 @@ void __io_napi_adjust_timeout(struct io_ring_ctx *ctx, struct io_wait_queue *iow + unsigned int poll_to = READ_ONCE(ctx->napi_busy_poll_to); + + if (ts) { +- struct timespec64 poll_to_ts = ns_to_timespec64(1000 * (s64)poll_to); +- +- if (timespec64_compare(ts, &poll_to_ts) > 0) { +- *ts = timespec64_sub(*ts, poll_to_ts); +- } else { +- u64 to = timespec64_to_ns(ts); +- +- do_div(to, 1000); +- ts->tv_sec = 0; +- ts->tv_nsec = 0; ++ struct timespec64 poll_to_ts; ++ ++ poll_to_ts = ns_to_timespec64(1000 * (s64)poll_to); ++ if (timespec64_compare(ts, &poll_to_ts) < 0) { ++ s64 poll_to_ns = timespec64_to_ns(ts); ++ if (poll_to_ns > 0) { ++ u64 val = poll_to_ns + 999; ++ do_div(val, (s64) 1000); ++ poll_to = val; ++ } + } + } + +diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c +index 9443bc63c5a24..2aeaf9765b248 100644 +--- a/kernel/debug/kdb/kdb_io.c ++++ b/kernel/debug/kdb/kdb_io.c +@@ -184,6 +184,33 @@ char kdb_getchar(void) + unreachable(); + } + ++/** ++ * kdb_position_cursor() - Place cursor in the correct horizontal position ++ * @prompt: Nil-terminated string containing the prompt string ++ * @buffer: Nil-terminated string containing the entire command line ++ * @cp: Cursor position, pointer the character in buffer where the cursor ++ * should be positioned. ++ * ++ * The cursor is positioned by sending a carriage-return and then printing ++ * the content of the line until we reach the correct cursor position. ++ * ++ * There is some additional fine detail here. ++ * ++ * Firstly, even though kdb_printf() will correctly format zero-width fields ++ * we want the second call to kdb_printf() to be conditional. That keeps things ++ * a little cleaner when LOGGING=1. ++ * ++ * Secondly, we can't combine everything into one call to kdb_printf() since ++ * that renders into a fixed length buffer and the combined print could result ++ * in unwanted truncation. 
++ */ ++static void kdb_position_cursor(char *prompt, char *buffer, char *cp) ++{ ++ kdb_printf("\r%s", kdb_prompt_str); ++ if (cp > buffer) ++ kdb_printf("%.*s", (int)(cp - buffer), buffer); ++} ++ + /* + * kdb_read + * +@@ -212,7 +239,6 @@ static char *kdb_read(char *buffer, size_t bufsize) + * and null byte */ + char *lastchar; + char *p_tmp; +- char tmp; + static char tmpbuffer[CMD_BUFLEN]; + int len = strlen(buffer); + int len_tmp; +@@ -249,12 +275,8 @@ static char *kdb_read(char *buffer, size_t bufsize) + } + *(--lastchar) = '\0'; + --cp; +- kdb_printf("\b%s \r", cp); +- tmp = *cp; +- *cp = '\0'; +- kdb_printf(kdb_prompt_str); +- kdb_printf("%s", buffer); +- *cp = tmp; ++ kdb_printf("\b%s ", cp); ++ kdb_position_cursor(kdb_prompt_str, buffer, cp); + } + break; + case 10: /* linefeed */ +@@ -272,19 +294,14 @@ static char *kdb_read(char *buffer, size_t bufsize) + memcpy(tmpbuffer, cp+1, lastchar - cp - 1); + memcpy(cp, tmpbuffer, lastchar - cp - 1); + *(--lastchar) = '\0'; +- kdb_printf("%s \r", cp); +- tmp = *cp; +- *cp = '\0'; +- kdb_printf(kdb_prompt_str); +- kdb_printf("%s", buffer); +- *cp = tmp; ++ kdb_printf("%s ", cp); ++ kdb_position_cursor(kdb_prompt_str, buffer, cp); + } + break; + case 1: /* Home */ + if (cp > buffer) { +- kdb_printf("\r"); +- kdb_printf(kdb_prompt_str); + cp = buffer; ++ kdb_position_cursor(kdb_prompt_str, buffer, cp); + } + break; + case 5: /* End */ +@@ -300,11 +317,10 @@ static char *kdb_read(char *buffer, size_t bufsize) + } + break; + case 14: /* Down */ +- memset(tmpbuffer, ' ', +- strlen(kdb_prompt_str) + (lastchar-buffer)); +- *(tmpbuffer+strlen(kdb_prompt_str) + +- (lastchar-buffer)) = '\0'; +- kdb_printf("\r%s\r", tmpbuffer); ++ case 16: /* Up */ ++ kdb_printf("\r%*c\r", ++ (int)(strlen(kdb_prompt_str) + (lastchar - buffer)), ++ ' '); + *lastchar = (char)key; + *(lastchar+1) = '\0'; + return lastchar; +@@ -314,15 +330,6 @@ static char *kdb_read(char *buffer, size_t bufsize) + ++cp; + } + break; +- case 16: /* Up */ +- memset(tmpbuffer, ' ', +- strlen(kdb_prompt_str) + (lastchar-buffer)); +- *(tmpbuffer+strlen(kdb_prompt_str) + +- (lastchar-buffer)) = '\0'; +- kdb_printf("\r%s\r", tmpbuffer); +- *lastchar = (char)key; +- *(lastchar+1) = '\0'; +- return lastchar; + case 9: /* Tab */ + if (tab < 2) + ++tab; +@@ -366,15 +373,25 @@ static char *kdb_read(char *buffer, size_t bufsize) + kdb_printf("\n"); + kdb_printf(kdb_prompt_str); + kdb_printf("%s", buffer); ++ if (cp != lastchar) ++ kdb_position_cursor(kdb_prompt_str, buffer, cp); + } else if (tab != 2 && count > 0) { +- len_tmp = strlen(p_tmp); +- strncpy(p_tmp+len_tmp, cp, lastchar-cp+1); +- len_tmp = strlen(p_tmp); +- strncpy(cp, p_tmp+len, len_tmp-len + 1); +- len = len_tmp - len; +- kdb_printf("%s", cp); +- cp += len; +- lastchar += len; ++ /* How many new characters do we want from tmpbuffer? 
*/ ++ len_tmp = strlen(p_tmp) - len; ++ if (lastchar + len_tmp >= bufend) ++ len_tmp = bufend - lastchar; ++ ++ if (len_tmp) { ++ /* + 1 ensures the '\0' is memmove'd */ ++ memmove(cp+len_tmp, cp, (lastchar-cp) + 1); ++ memcpy(cp, p_tmp+len, len_tmp); ++ kdb_printf("%s", cp); ++ cp += len_tmp; ++ lastchar += len_tmp; ++ if (cp != lastchar) ++ kdb_position_cursor(kdb_prompt_str, ++ buffer, cp); ++ } + } + kdb_nextline = 1; /* reset output line number */ + break; +@@ -385,13 +402,9 @@ static char *kdb_read(char *buffer, size_t bufsize) + memcpy(cp+1, tmpbuffer, lastchar - cp); + *++lastchar = '\0'; + *cp = key; +- kdb_printf("%s\r", cp); ++ kdb_printf("%s", cp); + ++cp; +- tmp = *cp; +- *cp = '\0'; +- kdb_printf(kdb_prompt_str); +- kdb_printf("%s", buffer); +- *cp = tmp; ++ kdb_position_cursor(kdb_prompt_str, buffer, cp); + } else { + *++lastchar = '\0'; + *cp++ = key; +diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c +index 4c6b32318ce35..7bf9f66ca63a2 100644 +--- a/kernel/irq/irqdesc.c ++++ b/kernel/irq/irqdesc.c +@@ -160,7 +160,10 @@ static int irq_find_free_area(unsigned int from, unsigned int cnt) + static unsigned int irq_find_at_or_after(unsigned int offset) + { + unsigned long index = offset; +- struct irq_desc *desc = mt_find(&sparse_irqs, &index, nr_irqs); ++ struct irq_desc *desc; ++ ++ guard(rcu)(); ++ desc = mt_find(&sparse_irqs, &index, nr_irqs); + + return desc ? irq_desc_get_irq(desc) : nr_irqs; + } +diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c +index 9dc605f08a231..5d8f918c9825f 100644 +--- a/kernel/trace/bpf_trace.c ++++ b/kernel/trace/bpf_trace.c +@@ -3260,7 +3260,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe, + struct bpf_run_ctx *old_run_ctx; + int err = 0; + +- if (link->task && current != link->task) ++ if (link->task && current->mm != link->task->mm) + return 0; + + if (sleepable) +@@ -3361,8 +3361,9 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr + upath = u64_to_user_ptr(attr->link_create.uprobe_multi.path); + uoffsets = u64_to_user_ptr(attr->link_create.uprobe_multi.offsets); + cnt = attr->link_create.uprobe_multi.cnt; ++ pid = attr->link_create.uprobe_multi.pid; + +- if (!upath || !uoffsets || !cnt) ++ if (!upath || !uoffsets || !cnt || pid < 0) + return -EINVAL; + if (cnt > MAX_UPROBE_MULTI_CNT) + return -E2BIG; +@@ -3386,10 +3387,9 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr + goto error_path_put; + } + +- pid = attr->link_create.uprobe_multi.pid; + if (pid) { + rcu_read_lock(); +- task = get_pid_task(find_vpid(pid), PIDTYPE_PID); ++ task = get_pid_task(find_vpid(pid), PIDTYPE_TGID); + rcu_read_unlock(); + if (!task) { + err = -ESRCH; +diff --git a/mm/cma.c b/mm/cma.c +index 01f5a8f71ddfa..3e9724716bad8 100644 +--- a/mm/cma.c ++++ b/mm/cma.c +@@ -182,10 +182,6 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, + if (!size || !memblock_is_region_reserved(base, size)) + return -EINVAL; + +- /* alignment should be aligned with order_per_bit */ +- if (!IS_ALIGNED(CMA_MIN_ALIGNMENT_PAGES, 1 << order_per_bit)) +- return -EINVAL; +- + /* ensure minimal alignment required by mm core */ + if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES)) + return -EINVAL; +diff --git a/mm/huge_memory.c b/mm/huge_memory.c +index 89f58c7603b25..dd1fc105f70bd 100644 +--- a/mm/huge_memory.c ++++ b/mm/huge_memory.c +@@ -2493,32 +2493,11 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, + return 
__split_huge_zero_page_pmd(vma, haddr, pmd); + } + +- /* +- * Up to this point the pmd is present and huge and userland has the +- * whole access to the hugepage during the split (which happens in +- * place). If we overwrite the pmd with the not-huge version pointing +- * to the pte here (which of course we could if all CPUs were bug +- * free), userland could trigger a small page size TLB miss on the +- * small sized TLB while the hugepage TLB entry is still established in +- * the huge TLB. Some CPU doesn't like that. +- * See http://support.amd.com/TechDocs/41322_10h_Rev_Gd.pdf, Erratum +- * 383 on page 105. Intel should be safe but is also warns that it's +- * only safe if the permission and cache attributes of the two entries +- * loaded in the two TLB is identical (which should be the case here). +- * But it is generally safer to never allow small and huge TLB entries +- * for the same virtual address to be loaded simultaneously. So instead +- * of doing "pmd_populate(); flush_pmd_tlb_range();" we first mark the +- * current pmd notpresent (atomically because here the pmd_trans_huge +- * must remain set at all times on the pmd until the split is complete +- * for this pmd), then we flush the SMP TLB and finally we write the +- * non-huge version of the pmd entry with pmd_populate. +- */ +- old_pmd = pmdp_invalidate(vma, haddr, pmd); +- +- pmd_migration = is_pmd_migration_entry(old_pmd); ++ pmd_migration = is_pmd_migration_entry(*pmd); + if (unlikely(pmd_migration)) { + swp_entry_t entry; + ++ old_pmd = *pmd; + entry = pmd_to_swp_entry(old_pmd); + page = pfn_swap_entry_to_page(entry); + write = is_writable_migration_entry(entry); +@@ -2529,6 +2508,30 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, + soft_dirty = pmd_swp_soft_dirty(old_pmd); + uffd_wp = pmd_swp_uffd_wp(old_pmd); + } else { ++ /* ++ * Up to this point the pmd is present and huge and userland has ++ * the whole access to the hugepage during the split (which ++ * happens in place). If we overwrite the pmd with the not-huge ++ * version pointing to the pte here (which of course we could if ++ * all CPUs were bug free), userland could trigger a small page ++ * size TLB miss on the small sized TLB while the hugepage TLB ++ * entry is still established in the huge TLB. Some CPU doesn't ++ * like that. See ++ * http://support.amd.com/TechDocs/41322_10h_Rev_Gd.pdf, Erratum ++ * 383 on page 105. Intel should be safe but is also warns that ++ * it's only safe if the permission and cache attributes of the ++ * two entries loaded in the two TLB is identical (which should ++ * be the case here). But it is generally safer to never allow ++ * small and huge TLB entries for the same virtual address to be ++ * loaded simultaneously. So instead of doing "pmd_populate(); ++ * flush_pmd_tlb_range();" we first mark the current pmd ++ * notpresent (atomically because here the pmd_trans_huge must ++ * remain set at all times on the pmd until the split is ++ * complete for this pmd), then we flush the SMP TLB and finally ++ * we write the non-huge version of the pmd entry with ++ * pmd_populate. 
++ */
++ old_pmd = pmdp_invalidate(vma, haddr, pmd);
+ page = pmd_page(old_pmd);
+ folio = page_folio(page);
+ if (pmd_dirty(old_pmd)) {
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index ce7be5c244429..c445e6fd85799 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5774,8 +5774,20 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ * do_exit() will not see it, and will keep the reservation
+ * forever.
+ */
+- if (adjust_reservation && vma_needs_reservation(h, vma, address))
+- vma_add_reservation(h, vma, address);
++ if (adjust_reservation) {
++ int rc = vma_needs_reservation(h, vma, address);
++
++ if (rc < 0)
++ /* Presumably allocate_file_region_entries failed
++ * to allocate a file_region struct. Clear
++ * hugetlb_restore_reserve so that global reserve
++ * count will not be incremented by free_huge_folio.
++ * Act as if we consumed the reservation.
++ */
++ folio_clear_hugetlb_restore_reserve(page_folio(page));
++ else if (rc)
++ vma_add_reservation(h, vma, address);
++ }
+
+ tlb_remove_page_size(tlb, page, huge_page_size(h));
+ /*
+@@ -7867,9 +7879,9 @@ void __init hugetlb_cma_reserve(int order)
+ * huge page demotion.
+ */
+ res = cma_declare_contiguous_nid(0, size, 0,
+- PAGE_SIZE << HUGETLB_PAGE_ORDER,
+- 0, false, name,
+- &hugetlb_cma[nid], nid);
++ PAGE_SIZE << HUGETLB_PAGE_ORDER,
++ HUGETLB_PAGE_ORDER, false, name,
++ &hugetlb_cma[nid], nid);
+ if (res) {
+ pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
+ res, nid);
+diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
+index cf2d70e9c9a5f..95f859e38c533 100644
+--- a/mm/kmsan/core.c
++++ b/mm/kmsan/core.c
+@@ -196,8 +196,7 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ u32 origin, bool checked)
+ {
+ u64 address = (u64)addr;
+- void *shadow_start;
+- u32 *origin_start;
++ u32 *shadow_start, *origin_start;
+ size_t pad = 0;
+
+ KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size));
+@@ -225,8 +224,16 @@ void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b,
+ origin_start =
+ (u32 *)kmsan_get_metadata((void *)address, KMSAN_META_ORIGIN);
+
+- for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++)
+- origin_start[i] = origin;
++ /*
++ * If the new origin is non-zero, assume that the shadow byte is also non-zero,
++ * and unconditionally overwrite the old origin slot.
++ * If the new origin is zero, overwrite the old origin slot iff the
++ * corresponding shadow slot is zero.
++ */
++ for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) {
++ if (origin || !shadow_start[i])
++ origin_start[i] = origin;
++ }
+ }
+
+ struct page *kmsan_vmalloc_to_page_or_null(void *vaddr)
+diff --git a/mm/ksm.c b/mm/ksm.c
+index 8c001819cf10f..ddd482c71886f 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -296,7 +296,7 @@ static bool ksm_use_zero_pages __read_mostly;
+ static bool ksm_smart_scan = true;
+
+ /* The number of zero pages which is placed by KSM */
+-unsigned long ksm_zero_pages;
++atomic_long_t ksm_zero_pages = ATOMIC_LONG_INIT(0);
+
+ /* The number of pages that have been skipped due to "smart scanning" */
+ static unsigned long ksm_pages_skipped;
+@@ -1428,8 +1428,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
+ * the dirty bit in zero page's PTE is set.
+ */
+ newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)));
+- ksm_zero_pages++;
+- mm->ksm_zero_pages++;
++ ksm_map_zero_page(mm);
+ /*
+ * We're replacing an anonymous page with a zero page, which is
+ * not anonymous.
We need to do proper accounting otherwise we +@@ -2747,18 +2746,16 @@ static void ksm_do_scan(unsigned int scan_npages) + { + struct ksm_rmap_item *rmap_item; + struct page *page; +- unsigned int npages = scan_npages; + +- while (npages-- && likely(!freezing(current))) { ++ while (scan_npages-- && likely(!freezing(current))) { + cond_resched(); + rmap_item = scan_get_next_rmap_item(&page); + if (!rmap_item) + return; + cmp_and_merge_page(page, rmap_item); + put_page(page); ++ ksm_pages_scanned++; + } +- +- ksm_pages_scanned += scan_npages - npages; + } + + static int ksmd_should_run(void) +@@ -3370,7 +3367,7 @@ static void wait_while_offlining(void) + #ifdef CONFIG_PROC_FS + long ksm_process_profit(struct mm_struct *mm) + { +- return (long)(mm->ksm_merging_pages + mm->ksm_zero_pages) * PAGE_SIZE - ++ return (long)(mm->ksm_merging_pages + mm_ksm_zero_pages(mm)) * PAGE_SIZE - + mm->ksm_rmap_items * sizeof(struct ksm_rmap_item); + } + #endif /* CONFIG_PROC_FS */ +@@ -3659,7 +3656,7 @@ KSM_ATTR_RO(pages_skipped); + static ssize_t ksm_zero_pages_show(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) + { +- return sysfs_emit(buf, "%ld\n", ksm_zero_pages); ++ return sysfs_emit(buf, "%ld\n", atomic_long_read(&ksm_zero_pages)); + } + KSM_ATTR_RO(ksm_zero_pages); + +@@ -3668,7 +3665,7 @@ static ssize_t general_profit_show(struct kobject *kobj, + { + long general_profit; + +- general_profit = (ksm_pages_sharing + ksm_zero_pages) * PAGE_SIZE - ++ general_profit = (ksm_pages_sharing + atomic_long_read(&ksm_zero_pages)) * PAGE_SIZE - + ksm_rmap_items * sizeof(struct ksm_rmap_item); + + return sysfs_emit(buf, "%ld\n", general_profit); +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index 9e62a00b46dde..18da1c2b08c3d 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1218,7 +1218,7 @@ static int me_huge_page(struct page_state *ps, struct page *p) + * subpages. 
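+ * Note: __page_handle_poison() returns 0 when the hugepage was
+ * dissolved but the raw error page could not be taken off the
+ * buddy list, so only a strictly positive return below means the
+ * page is really isolated and may be counted as recovered.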
+ */ + folio_put(folio); +- if (__page_handle_poison(p) >= 0) { ++ if (__page_handle_poison(p) > 0) { + page_ref_inc(p); + res = MF_RECOVERED; + } else { +@@ -2097,7 +2097,7 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb + */ + if (res == 0) { + folio_unlock(folio); +- if (__page_handle_poison(p) >= 0) { ++ if (__page_handle_poison(p) > 0) { + page_ref_inc(p); + res = MF_RECOVERED; + } else { +diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c +index 4fcd959dcc4d0..a78a4adf711ac 100644 +--- a/mm/pgtable-generic.c ++++ b/mm/pgtable-generic.c +@@ -198,6 +198,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp) + pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, + pmd_t *pmdp) + { ++ VM_WARN_ON_ONCE(!pmd_present(*pmdp)); + pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp)); + flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE); + return old; +@@ -208,6 +209,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address, + pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address, + pmd_t *pmdp) + { ++ VM_WARN_ON_ONCE(!pmd_present(*pmdp)); + return pmdp_invalidate(vma, address, pmdp); + } + #endif +diff --git a/mm/vmalloc.c b/mm/vmalloc.c +index 125427cbdb87b..109272b8ee2e9 100644 +--- a/mm/vmalloc.c ++++ b/mm/vmalloc.c +@@ -3492,7 +3492,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid, + { + unsigned int nr_allocated = 0; + gfp_t alloc_gfp = gfp; +- bool nofail = false; ++ bool nofail = gfp & __GFP_NOFAIL; + struct page *page; + int i; + +@@ -3549,12 +3549,11 @@ vm_area_alloc_pages(gfp_t gfp, int nid, + * and compaction etc. + */ + alloc_gfp &= ~__GFP_NOFAIL; +- nofail = true; + } + + /* High-order pages or fallback path if "bulk" fails. */ + while (nr_allocated < nr_pages) { +- if (fatal_signal_pending(current)) ++ if (!nofail && fatal_signal_pending(current)) + break; + + if (nid == NUMA_NO_NODE) +diff --git a/net/9p/client.c b/net/9p/client.c +index f7e90b4769bba..b05f73c291b4b 100644 +--- a/net/9p/client.c ++++ b/net/9p/client.c +@@ -235,6 +235,8 @@ static int p9_fcall_init(struct p9_client *c, struct p9_fcall *fc, + if (!fc->sdata) + return -ENOMEM; + fc->capacity = alloc_msize; ++ fc->id = 0; ++ fc->tag = P9_NOTAG; + return 0; + } + +diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c +index 781b67a525719..37c42b63ff993 100644 +--- a/net/ipv4/tcp_ao.c ++++ b/net/ipv4/tcp_ao.c +@@ -933,6 +933,7 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb, + struct tcp_ao_key *key; + __be32 sisn, disn; + u8 *traffic_key; ++ int state; + u32 sne = 0; + + info = rcu_dereference(tcp_sk(sk)->ao_info); +@@ -948,8 +949,9 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb, + disn = 0; + } + ++ state = READ_ONCE(sk->sk_state); + /* Fast-path */ +- if (likely((1 << sk->sk_state) & TCP_AO_ESTABLISHED)) { ++ if (likely((1 << state) & TCP_AO_ESTABLISHED)) { + enum skb_drop_reason err; + struct tcp_ao_key *current_key; + +@@ -988,6 +990,9 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb, + return SKB_NOT_DROPPED_YET; + } + ++ if (unlikely(state == TCP_CLOSE)) ++ return SKB_DROP_REASON_TCP_CLOSE; ++ + /* Lookup key based on peer address and keyid. 
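+ * The socket state was sampled once with READ_ONCE() above, and a
+ * closed socket has already been rejected, so the checks below
+ * cannot race with the state changing underneath us.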
+ * current_key and rnext_key must not be used on tcp listen + * sockets as otherwise: +@@ -1001,7 +1006,7 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb, + if (th->syn && !th->ack) + goto verify_hash; + +- if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV)) { ++ if ((1 << state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV)) { + /* Make the initial syn the likely case here */ + if (unlikely(req)) { + sne = tcp_ao_compute_sne(0, tcp_rsk(req)->rcv_isn, +@@ -1018,14 +1023,14 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb, + /* no way to figure out initial sisn/disn - drop */ + return SKB_DROP_REASON_TCP_FLAGS; + } +- } else if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) { ++ } else if ((1 << state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) { + disn = info->lisn; + if (th->syn || th->rst) + sisn = th->seq; + else + sisn = info->risn; + } else { +- WARN_ONCE(1, "TCP-AO: Unexpected sk_state %d", sk->sk_state); ++ WARN_ONCE(1, "TCP-AO: Unexpected sk_state %d", state); + return SKB_DROP_REASON_TCP_AOFAILURE; + } + verify_hash: +diff --git a/net/ipv6/route.c b/net/ipv6/route.c +index 8d5257c3f0842..f090e7bcb784f 100644 +--- a/net/ipv6/route.c ++++ b/net/ipv6/route.c +@@ -4446,7 +4446,7 @@ static void rtmsg_to_fib6_config(struct net *net, + .fc_table = l3mdev_fib_table_by_index(net, rtmsg->rtmsg_ifindex) ? + : RT6_TABLE_MAIN, + .fc_ifindex = rtmsg->rtmsg_ifindex, +- .fc_metric = rtmsg->rtmsg_metric ? : IP6_RT_PRIO_USER, ++ .fc_metric = rtmsg->rtmsg_metric, + .fc_expires = rtmsg->rtmsg_info, + .fc_dst_len = rtmsg->rtmsg_dst_len, + .fc_src_len = rtmsg->rtmsg_src_len, +@@ -4476,6 +4476,9 @@ int ipv6_route_ioctl(struct net *net, unsigned int cmd, struct in6_rtmsg *rtmsg) + rtnl_lock(); + switch (cmd) { + case SIOCADDRT: ++ /* Only do the default setting of fc_metric in route adding */ ++ if (cfg.fc_metric == 0) ++ cfg.fc_metric = IP6_RT_PRIO_USER; + err = ip6_route_add(&cfg, GFP_KERNEL, NULL); + break; + case SIOCDELRT: +diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c +index 727aa20be4bde..7d1c0986f9bb3 100644 +--- a/net/xdp/xsk.c ++++ b/net/xdp/xsk.c +@@ -313,13 +313,10 @@ static bool xsk_is_bound(struct xdp_sock *xs) + + static int xsk_rcv_check(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len) + { +- struct net_device *dev = xdp->rxq->dev; +- u32 qid = xdp->rxq->queue_index; +- + if (!xsk_is_bound(xs)) + return -ENXIO; + +- if (!dev->_rx[qid].pool || xs->umem != dev->_rx[qid].pool->umem) ++ if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index) + return -EINVAL; + + if (len > xsk_pool_get_rx_frame_size(xs->pool) && !xs->sg) { +diff --git a/sound/core/seq/seq_ump_convert.c b/sound/core/seq/seq_ump_convert.c +index 9bfba69b2a709..171fb75267afa 100644 +--- a/sound/core/seq/seq_ump_convert.c ++++ b/sound/core/seq/seq_ump_convert.c +@@ -740,6 +740,7 @@ static int system_1p_ev_to_ump_midi1(const struct snd_seq_event *event, + union snd_ump_midi1_msg *data, + unsigned char status) + { ++ data->system.type = UMP_MSG_TYPE_SYSTEM; // override + data->system.status = status; + data->system.parm1 = event->data.control.value & 0x7f; + return 1; +@@ -751,6 +752,7 @@ static int system_2p_ev_to_ump_midi1(const struct snd_seq_event *event, + union snd_ump_midi1_msg *data, + unsigned char status) + { ++ data->system.type = UMP_MSG_TYPE_SYSTEM; // override + data->system.status = status; + data->system.parm1 = event->data.control.value & 0x7f; + data->system.parm2 = (event->data.control.value >> 7) & 0x7f; +diff --git a/sound/core/ump.c b/sound/core/ump.c +index 
fd6a68a542788..117c7ecc48563 100644 +--- a/sound/core/ump.c ++++ b/sound/core/ump.c +@@ -685,10 +685,17 @@ static void seq_notify_protocol(struct snd_ump_endpoint *ump) + */ + int snd_ump_switch_protocol(struct snd_ump_endpoint *ump, unsigned int protocol) + { ++ unsigned int type; ++ + protocol &= ump->info.protocol_caps; + if (protocol == ump->info.protocol) + return 0; + ++ type = protocol & SNDRV_UMP_EP_INFO_PROTO_MIDI_MASK; ++ if (type != SNDRV_UMP_EP_INFO_PROTO_MIDI1 && ++ type != SNDRV_UMP_EP_INFO_PROTO_MIDI2) ++ return 0; ++ + ump->info.protocol = protocol; + ump_dbg(ump, "New protocol = %x (caps = %x)\n", + protocol, ump->info.protocol_caps); +diff --git a/sound/core/ump_convert.c b/sound/core/ump_convert.c +index de04799fdb69a..f67c44c83fde4 100644 +--- a/sound/core/ump_convert.c ++++ b/sound/core/ump_convert.c +@@ -404,7 +404,6 @@ static int cvt_legacy_cmd_to_ump(struct ump_cvt_to_ump *cvt, + midi2->pg.bank_msb = cc->cc_bank_msb; + midi2->pg.bank_lsb = cc->cc_bank_lsb; + cc->bank_set = 0; +- cc->cc_bank_msb = cc->cc_bank_lsb = 0; + } + break; + case UMP_MSG_STATUS_CHANNEL_PRESSURE: +diff --git a/sound/soc/sof/ipc4-topology.c b/sound/soc/sof/ipc4-topology.c +index 5cca058421260..35941027b78c1 100644 +--- a/sound/soc/sof/ipc4-topology.c ++++ b/sound/soc/sof/ipc4-topology.c +@@ -217,6 +217,14 @@ sof_ipc4_get_input_pin_audio_fmt(struct snd_sof_widget *swidget, int pin_index) + } + + process = swidget->private; ++ ++ /* ++ * For process modules without base config extension, base module config ++ * format is used for all input pins ++ */ ++ if (process->init_config != SOF_IPC4_MODULE_INIT_CONFIG_TYPE_BASE_CFG_WITH_EXT) ++ return &process->base_config.audio_fmt; ++ + base_cfg_ext = process->base_config_ext; + + /* +diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c +index 6aeae398ec289..45c755fb50eef 100644 +--- a/tools/perf/builtin-record.c ++++ b/tools/perf/builtin-record.c +@@ -1956,8 +1956,7 @@ static void record__read_lost_samples(struct record *rec) + + if (count.lost) { + if (!lost) { +- lost = zalloc(sizeof(*lost) + +- session->machines.host.id_hdr_size); ++ lost = zalloc(PERF_SAMPLE_MAX_SIZE); + if (!lost) { + pr_debug("Memory allocation failed\n"); + return; +@@ -1973,8 +1972,7 @@ static void record__read_lost_samples(struct record *rec) + lost_count = perf_bpf_filter__lost_count(evsel); + if (lost_count) { + if (!lost) { +- lost = zalloc(sizeof(*lost) + +- session->machines.host.id_hdr_size); ++ lost = zalloc(PERF_SAMPLE_MAX_SIZE); + if (!lost) { + pr_debug("Memory allocation failed\n"); + return; +diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +index 8269cdee33ae9..38fda42fd70f9 100644 +--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c ++++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c +@@ -397,7 +397,7 @@ static void test_attach_api_fails(void) + link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &opts); + if (!ASSERT_ERR(link_fd, "link_fd")) + goto cleanup; +- ASSERT_EQ(link_fd, -ESRCH, "pid_is_wrong"); ++ ASSERT_EQ(link_fd, -EINVAL, "pid_is_wrong"); + + cleanup: + if (link_fd >= 0) +diff --git a/tools/testing/selftests/mm/compaction_test.c b/tools/testing/selftests/mm/compaction_test.c +index 533999b6c2844..326a893647bab 100644 +--- a/tools/testing/selftests/mm/compaction_test.c ++++ b/tools/testing/selftests/mm/compaction_test.c +@@ -82,12 +82,13 @@ int prereq(void) + return -1; + } + +-int check_compaction(unsigned long 
mem_free, unsigned int hugepage_size) ++int check_compaction(unsigned long mem_free, unsigned long hugepage_size) + { ++ unsigned long nr_hugepages_ul; + int fd, ret = -1; + int compaction_index = 0; +- char initial_nr_hugepages[10] = {0}; +- char nr_hugepages[10] = {0}; ++ char initial_nr_hugepages[20] = {0}; ++ char nr_hugepages[20] = {0}; + + /* We want to test with 80% of available memory. Else, OOM killer comes + in to play */ +@@ -107,6 +108,8 @@ int check_compaction(unsigned long mem_free, unsigned int hugepage_size) + goto close_fd; + } + ++ lseek(fd, 0, SEEK_SET); ++ + /* Start with the initial condition of 0 huge pages*/ + if (write(fd, "0", sizeof(char)) != sizeof(char)) { + ksft_print_msg("Failed to write 0 to /proc/sys/vm/nr_hugepages: %s\n", +@@ -134,7 +137,12 @@ int check_compaction(unsigned long mem_free, unsigned int hugepage_size) + + /* We should have been able to request at least 1/3 rd of the memory in + huge pages */ +- compaction_index = mem_free/(atoi(nr_hugepages) * hugepage_size); ++ nr_hugepages_ul = strtoul(nr_hugepages, NULL, 10); ++ if (!nr_hugepages_ul) { ++ ksft_print_msg("ERROR: No memory is available as huge pages\n"); ++ goto close_fd; ++ } ++ compaction_index = mem_free/(nr_hugepages_ul * hugepage_size); + + lseek(fd, 0, SEEK_SET); + +@@ -145,11 +153,11 @@ int check_compaction(unsigned long mem_free, unsigned int hugepage_size) + goto close_fd; + } + +- ksft_print_msg("Number of huge pages allocated = %d\n", +- atoi(nr_hugepages)); ++ ksft_print_msg("Number of huge pages allocated = %lu\n", ++ nr_hugepages_ul); + + if (compaction_index > 3) { +- ksft_print_msg("ERROR: Less that 1/%d of memory is available\n" ++ ksft_print_msg("ERROR: Less than 1/%d of memory is available\n" + "as huge pages\n", compaction_index); + goto close_fd; + } +diff --git a/tools/testing/selftests/mm/gup_test.c b/tools/testing/selftests/mm/gup_test.c +index 18a49c70d4c63..7821cf45c323b 100644 +--- a/tools/testing/selftests/mm/gup_test.c ++++ b/tools/testing/selftests/mm/gup_test.c +@@ -1,3 +1,4 @@ ++#define __SANE_USERSPACE_TYPES__ // Use ll64 + #include + #include + #include +diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h +index cc5629c3d2aa1..a70ae10b5f620 100644 +--- a/tools/testing/selftests/mm/uffd-common.h ++++ b/tools/testing/selftests/mm/uffd-common.h +@@ -8,6 +8,7 @@ + #define __UFFD_COMMON_H__ + + #define _GNU_SOURCE ++#define __SANE_USERSPACE_TYPES__ // Use ll64 + #include + #include + #include +diff --git a/tools/testing/selftests/net/lib.sh b/tools/testing/selftests/net/lib.sh +index cf7fe4d550dde..16372ca167ba8 100644 +--- a/tools/testing/selftests/net/lib.sh ++++ b/tools/testing/selftests/net/lib.sh +@@ -10,7 +10,7 @@ BUSYWAIT_TIMEOUT=$((WAIT_TIMEOUT * 1000)) # ms + # Kselftest framework requirement - SKIP code is 4. + ksft_skip=4 + # namespace list created by setup_ns +-NS_LIST="" ++NS_LIST=() + + ############################################################################## + # Helpers +@@ -63,9 +63,7 @@ loopy_wait() + while true + do + local out +- out=$("$@") +- local ret=$? +- if ((!ret)); then ++ if out=$("$@"); then + echo -n "$out" + return 0 + fi +@@ -135,6 +133,7 @@ cleanup_ns() + fi + + for ns in "$@"; do ++ [ -z "${ns}" ] && continue + ip netns delete "${ns}" &> /dev/null + if ! 
busywait $BUSYWAIT_TIMEOUT ip netns list \| grep -vq "^$ns$" &> /dev/null; then + echo "Warn: Failed to remove namespace $ns" +@@ -148,7 +147,7 @@ cleanup_ns() + + cleanup_all_ns() + { +- cleanup_ns $NS_LIST ++ cleanup_ns "${NS_LIST[@]}" + } + + # setup netns with given names as prefix. e.g +@@ -157,7 +156,7 @@ setup_ns() + { + local ns="" + local ns_name="" +- local ns_list="" ++ local ns_list=() + local ns_exist= + for ns_name in "$@"; do + # Some test may setup/remove same netns multi times +@@ -173,13 +172,13 @@ setup_ns() + + if ! ip netns add "$ns"; then + echo "Failed to create namespace $ns_name" +- cleanup_ns "$ns_list" ++ cleanup_ns "${ns_list[@]}" + return $ksft_skip + fi + ip -n "$ns" link set lo up +- ! $ns_exist && ns_list="$ns_list $ns" ++ ! $ns_exist && ns_list+=("$ns") + done +- NS_LIST="$NS_LIST $ns_list" ++ NS_LIST+=("${ns_list[@]}") + } + + tc_rule_stats_get() +diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c +index 8bd51aab6513a..5b869caed10d0 100644 +--- a/tools/tracing/rtla/src/timerlat_hist.c ++++ b/tools/tracing/rtla/src/timerlat_hist.c +@@ -324,17 +324,29 @@ timerlat_print_summary(struct timerlat_hist_params *params, + if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) + continue; + +- if (!params->no_irq) +- trace_seq_printf(trace->seq, "%9llu ", +- data->hist[cpu].min_irq); ++ if (!params->no_irq) { ++ if (data->hist[cpu].irq_count) ++ trace_seq_printf(trace->seq, "%9llu ", ++ data->hist[cpu].min_irq); ++ else ++ trace_seq_printf(trace->seq, " - "); ++ } + +- if (!params->no_thread) +- trace_seq_printf(trace->seq, "%9llu ", +- data->hist[cpu].min_thread); ++ if (!params->no_thread) { ++ if (data->hist[cpu].thread_count) ++ trace_seq_printf(trace->seq, "%9llu ", ++ data->hist[cpu].min_thread); ++ else ++ trace_seq_printf(trace->seq, " - "); ++ } + +- if (params->user_hist) +- trace_seq_printf(trace->seq, "%9llu ", +- data->hist[cpu].min_user); ++ if (params->user_hist) { ++ if (data->hist[cpu].user_count) ++ trace_seq_printf(trace->seq, "%9llu ", ++ data->hist[cpu].min_user); ++ else ++ trace_seq_printf(trace->seq, " - "); ++ } + } + trace_seq_printf(trace->seq, "\n"); + +@@ -384,17 +396,29 @@ timerlat_print_summary(struct timerlat_hist_params *params, + if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) + continue; + +- if (!params->no_irq) +- trace_seq_printf(trace->seq, "%9llu ", +- data->hist[cpu].max_irq); ++ if (!params->no_irq) { ++ if (data->hist[cpu].irq_count) ++ trace_seq_printf(trace->seq, "%9llu ", ++ data->hist[cpu].max_irq); ++ else ++ trace_seq_printf(trace->seq, " - "); ++ } + +- if (!params->no_thread) +- trace_seq_printf(trace->seq, "%9llu ", +- data->hist[cpu].max_thread); ++ if (!params->no_thread) { ++ if (data->hist[cpu].thread_count) ++ trace_seq_printf(trace->seq, "%9llu ", ++ data->hist[cpu].max_thread); ++ else ++ trace_seq_printf(trace->seq, " - "); ++ } + +- if (params->user_hist) +- trace_seq_printf(trace->seq, "%9llu ", +- data->hist[cpu].max_user); ++ if (params->user_hist) { ++ if (data->hist[cpu].user_count) ++ trace_seq_printf(trace->seq, "%9llu ", ++ data->hist[cpu].max_user); ++ else ++ trace_seq_printf(trace->seq, " - "); ++ } + } + trace_seq_printf(trace->seq, "\n"); + trace_seq_do_printf(trace->seq);
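+ /*
+ * Note: a plain "-" is printed above for any CPU that recorded no
+ * samples of a given class, instead of a meaningless zero value.
+ */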