From: "Arisu Tachibana" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.16 commit in: /
Date: Sat, 20 Sep 2025 05:25:34 +0000 (UTC)
Message-ID: <1758345920.9ffd46d3a792ef5eb448d3a3fcf684e769b965d5.alicef@gentoo>
commit: 9ffd46d3a792ef5eb448d3a3fcf684e769b965d5
Author: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 20 05:25:20 2025 +0000
Commit: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Sep 20 05:25:20 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9ffd46d3
Linux patch 6.16.8
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>
0000_README | 4 +
1007_linux-6.16.8.patch | 11381 ++++++++++++++++++++++++++++++++++
2991_libbpf_add_WERROR_option.patch | 11 -
3 files changed, 11385 insertions(+), 11 deletions(-)
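For context on the genpatches layout shown below: 0000_README indexes every patch in the set as a Patch:/From:/Desc: stanza, and the patch files apply in ascending numeric order, so 1007_linux-6.16.8.patch follows 1006_linux-6.16.7.patch. A minimal illustrative sketch of reading that index — the stanza format and filename come from the diff below, while the parser itself and its default path are assumptions, not part of this commit:

import re

def read_genpatches_index(path="0000_README"):
    """Return the Patch/From/Desc stanzas in the order the README lists them."""
    entries, cur = [], {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = re.match(r"(Patch|From|Desc):\s+(\S.*)", line)
            if not m:
                continue  # skip prose lines between stanzas
            key, val = m.group(1), m.group(2).strip()
            if key == "Patch" and cur:  # a new stanza starts; flush the previous one
                entries.append(cur)
                cur = {}
            cur[key] = val
    if cur:
        entries.append(cur)
    return entries

# Example: print the ordered patch list
for entry in read_genpatches_index():
    print(entry.get("Patch"), "-", entry.get("Desc", ""))

Run against this tree after the commit applies, the listing would include 1007_linux-6.16.8.patch with Desc "Linux 6.16.8", matching the 0000_README hunk below.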
diff --git a/0000_README b/0000_README
index 33049ae5..fb6f961a 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-6.16.7.patch
From: https://www.kernel.org
Desc: Linux 6.16.7
+Patch: 1007_linux-6.16.8.patch
+From: https://www.kernel.org
+Desc: Linux 6.16.8
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1007_linux-6.16.8.patch b/1007_linux-6.16.8.patch
new file mode 100644
index 00000000..81ef3369
--- /dev/null
+++ b/1007_linux-6.16.8.patch
@@ -0,0 +1,11381 @@
+diff --git a/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml b/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
+index 89c462653e2d33..8cc848ae11cb73 100644
+--- a/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
++++ b/Documentation/devicetree/bindings/serial/brcm,bcm7271-uart.yaml
+@@ -41,7 +41,7 @@ properties:
+ - const: dma_intr2
+
+ clocks:
+- minItems: 1
++ maxItems: 1
+
+ clock-names:
+ const: sw_baud
+diff --git a/Documentation/netlink/specs/mptcp_pm.yaml b/Documentation/netlink/specs/mptcp_pm.yaml
+index fb57860fe778c6..ecfe5ee33de2d8 100644
+--- a/Documentation/netlink/specs/mptcp_pm.yaml
++++ b/Documentation/netlink/specs/mptcp_pm.yaml
+@@ -256,7 +256,7 @@ attribute-sets:
+ type: u32
+ -
+ name: if-idx
+- type: u32
++ type: s32
+ -
+ name: reset-reason
+ type: u32
+diff --git a/Documentation/networking/can.rst b/Documentation/networking/can.rst
+index b018ce34639265..515a3876f58cfd 100644
+--- a/Documentation/networking/can.rst
++++ b/Documentation/networking/can.rst
+@@ -742,7 +742,7 @@ The broadcast manager sends responses to user space in the same form:
+ struct timeval ival1, ival2; /* count and subsequent interval */
+ canid_t can_id; /* unique can_id for task */
+ __u32 nframes; /* number of can_frames following */
+- struct can_frame frames[0];
++ struct can_frame frames[];
+ };
+
+ The aligned payload 'frames' uses the same basic CAN frame structure defined
+diff --git a/Documentation/networking/mptcp.rst b/Documentation/networking/mptcp.rst
+index 17f2bab6116447..2e31038d646205 100644
+--- a/Documentation/networking/mptcp.rst
++++ b/Documentation/networking/mptcp.rst
+@@ -60,10 +60,10 @@ address announcements. Typically, it is the client side that initiates subflows,
+ and the server side that announces additional addresses via the ``ADD_ADDR`` and
+ ``REMOVE_ADDR`` options.
+
+-Path managers are controlled by the ``net.mptcp.pm_type`` sysctl knob -- see
+-mptcp-sysctl.rst. There are two types: the in-kernel one (type ``0``) where the
+-same rules are applied for all the connections (see: ``ip mptcp``) ; and the
+-userspace one (type ``1``), controlled by a userspace daemon (i.e. `mptcpd
++Path managers are controlled by the ``net.mptcp.path_manager`` sysctl knob --
++see mptcp-sysctl.rst. There are two types: the in-kernel one (``kernel``) where
++the same rules are applied for all the connections (see: ``ip mptcp``) ; and the
++userspace one (``userspace``), controlled by a userspace daemon (i.e. `mptcpd
+ <https://mptcpd.mptcp.dev/>`_) where different rules can be applied for each
+ connection. The path managers can be controlled via a Netlink API; see
+ netlink_spec/mptcp_pm.rst.
+diff --git a/Makefile b/Makefile
+index 86359283ccc9a9..7594f35cbc2a5a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 16
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
+index af1ca875c52ce2..410060ebd86dfd 100644
+--- a/arch/arm64/kernel/machine_kexec_file.c
++++ b/arch/arm64/kernel/machine_kexec_file.c
+@@ -94,7 +94,7 @@ int load_other_segments(struct kimage *image,
+ char *initrd, unsigned long initrd_len,
+ char *cmdline)
+ {
+- struct kexec_buf kbuf;
++ struct kexec_buf kbuf = {};
+ void *dtb = NULL;
+ unsigned long initrd_load_addr = 0, dtb_len,
+ orig_segments = image->nr_segments;
+diff --git a/arch/s390/kernel/kexec_elf.c b/arch/s390/kernel/kexec_elf.c
+index 4d364de4379921..143e34a4eca57c 100644
+--- a/arch/s390/kernel/kexec_elf.c
++++ b/arch/s390/kernel/kexec_elf.c
+@@ -16,7 +16,7 @@
+ static int kexec_file_add_kernel_elf(struct kimage *image,
+ struct s390_load_data *data)
+ {
+- struct kexec_buf buf;
++ struct kexec_buf buf = {};
+ const Elf_Ehdr *ehdr;
+ const Elf_Phdr *phdr;
+ Elf_Addr entry;
+diff --git a/arch/s390/kernel/kexec_image.c b/arch/s390/kernel/kexec_image.c
+index a32ce8bea745cf..9a439175723cad 100644
+--- a/arch/s390/kernel/kexec_image.c
++++ b/arch/s390/kernel/kexec_image.c
+@@ -16,7 +16,7 @@
+ static int kexec_file_add_kernel_image(struct kimage *image,
+ struct s390_load_data *data)
+ {
+- struct kexec_buf buf;
++ struct kexec_buf buf = {};
+
+ buf.image = image;
+
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index c2bac14dd668ae..a36d7311c6683b 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -129,7 +129,7 @@ static int kexec_file_update_purgatory(struct kimage *image,
+ static int kexec_file_add_purgatory(struct kimage *image,
+ struct s390_load_data *data)
+ {
+- struct kexec_buf buf;
++ struct kexec_buf buf = {};
+ int ret;
+
+ buf.image = image;
+@@ -152,7 +152,7 @@ static int kexec_file_add_purgatory(struct kimage *image,
+ static int kexec_file_add_initrd(struct kimage *image,
+ struct s390_load_data *data)
+ {
+- struct kexec_buf buf;
++ struct kexec_buf buf = {};
+ int ret;
+
+ buf.image = image;
+@@ -184,7 +184,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
+ {
+ __u32 *lc_ipl_parmblock_ptr;
+ unsigned int len, ncerts;
+- struct kexec_buf buf;
++ struct kexec_buf buf = {};
+ unsigned long addr;
+ void *ptr, *end;
+ int ret;
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index 6a262e198e35ec..952cc8d103693f 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -761,8 +761,6 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type)
+ break;
+
+ case PERF_TYPE_HARDWARE:
+- if (is_sampling_event(event)) /* No sampling support */
+- return -ENOENT;
+ ev = attr->config;
+ if (!attr->exclude_user && attr->exclude_kernel) {
+ /*
+@@ -860,6 +858,8 @@ static int cpumf_pmu_event_init(struct perf_event *event)
+ unsigned int type = event->attr.type;
+ int err = -ENOENT;
+
++ if (is_sampling_event(event)) /* No sampling support */
++ return err;
+ if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
+ err = __hw_perf_event_init(event, type);
+ else if (event->pmu->type == type)
+diff --git a/arch/s390/kernel/perf_pai_crypto.c b/arch/s390/kernel/perf_pai_crypto.c
+index 63875270941bc4..01cc6493367a46 100644
+--- a/arch/s390/kernel/perf_pai_crypto.c
++++ b/arch/s390/kernel/perf_pai_crypto.c
+@@ -286,10 +286,10 @@ static int paicrypt_event_init(struct perf_event *event)
+ /* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */
+ if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type)
+ return -ENOENT;
+- /* PAI crypto event must be in valid range */
++ /* PAI crypto event must be in valid range, try others if not */
+ if (a->config < PAI_CRYPTO_BASE ||
+ a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
+- return -EINVAL;
++ return -ENOENT;
+ /* Allow only CRYPTO_ALL for sampling */
+ if (a->sample_period && a->config != PAI_CRYPTO_BASE)
+ return -EINVAL;
+diff --git a/arch/s390/kernel/perf_pai_ext.c b/arch/s390/kernel/perf_pai_ext.c
+index fd14d5ebccbca0..d65a9730753c55 100644
+--- a/arch/s390/kernel/perf_pai_ext.c
++++ b/arch/s390/kernel/perf_pai_ext.c
+@@ -266,7 +266,7 @@ static int paiext_event_valid(struct perf_event *event)
+ event->hw.config_base = offsetof(struct paiext_cb, acc);
+ return 0;
+ }
+- return -EINVAL;
++ return -ENOENT;
+ }
+
+ /* Might be called on different CPU than the one the event is intended for. */
+diff --git a/arch/x86/kernel/cpu/topology_amd.c b/arch/x86/kernel/cpu/topology_amd.c
+index 827dd0dbb6e9d2..c79ebbb639cbff 100644
+--- a/arch/x86/kernel/cpu/topology_amd.c
++++ b/arch/x86/kernel/cpu/topology_amd.c
+@@ -175,27 +175,30 @@ static void topoext_fixup(struct topo_scan *tscan)
+
+ static void parse_topology_amd(struct topo_scan *tscan)
+ {
+- bool has_topoext = false;
+-
+ /*
+- * If the extended topology leaf 0x8000_001e is available
+- * try to get SMT, CORE, TILE, and DIE shifts from extended
++ * Try to get SMT, CORE, TILE, and DIE shifts from extended
+ * CPUID leaf 0x8000_0026 on supported processors first. If
+ * extended CPUID leaf 0x8000_0026 is not supported, try to
+- * get SMT and CORE shift from leaf 0xb first, then try to
+- * get the CORE shift from leaf 0x8000_0008.
++ * get SMT and CORE shift from leaf 0xb. If either leaf is
++ * available, cpu_parse_topology_ext() will return true.
+ */
+- if (cpu_feature_enabled(X86_FEATURE_TOPOEXT))
+- has_topoext = cpu_parse_topology_ext(tscan);
++ bool has_xtopology = cpu_parse_topology_ext(tscan);
+
+ if (cpu_feature_enabled(X86_FEATURE_AMD_HTR_CORES))
+ tscan->c->topo.cpu_type = cpuid_ebx(0x80000026);
+
+- if (!has_topoext && !parse_8000_0008(tscan))
++ /*
++ * If XTOPOLOGY leaves (0x26/0xb) are not available, try to
++ * get the CORE shift from leaf 0x8000_0008 first.
++ */
++ if (!has_xtopology && !parse_8000_0008(tscan))
+ return;
+
+- /* Prefer leaf 0x8000001e if available */
+- if (parse_8000_001e(tscan, has_topoext))
++ /*
++ * Prefer leaf 0x8000001e if available to get the SMT shift and
++ * the initial APIC ID if XTOPOLOGY leaves are not available.
++ */
++ if (parse_8000_001e(tscan, has_xtopology))
+ return;
+
+ /* Try the NODEID MSR */
+diff --git a/block/fops.c b/block/fops.c
+index 1309861d4c2c4b..d62fbefb2e6712 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -7,6 +7,7 @@
+ #include <linux/init.h>
+ #include <linux/mm.h>
+ #include <linux/blkdev.h>
++#include <linux/blk-integrity.h>
+ #include <linux/buffer_head.h>
+ #include <linux/mpage.h>
+ #include <linux/uio.h>
+@@ -54,7 +55,6 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
+ struct bio bio;
+ ssize_t ret;
+
+- WARN_ON_ONCE(iocb->ki_flags & IOCB_HAS_METADATA);
+ if (nr_pages <= DIO_INLINE_BIO_VECS)
+ vecs = inline_vecs;
+ else {
+@@ -131,7 +131,7 @@ static void blkdev_bio_end_io(struct bio *bio)
+ if (bio->bi_status && !dio->bio.bi_status)
+ dio->bio.bi_status = bio->bi_status;
+
+- if (!is_sync && (dio->iocb->ki_flags & IOCB_HAS_METADATA))
++ if (bio_integrity(bio))
+ bio_integrity_unmap_user(bio);
+
+ if (atomic_dec_and_test(&dio->ref)) {
+@@ -233,7 +233,7 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
+ }
+ bio->bi_opf |= REQ_NOWAIT;
+ }
+- if (!is_sync && (iocb->ki_flags & IOCB_HAS_METADATA)) {
++ if (iocb->ki_flags & IOCB_HAS_METADATA) {
+ ret = bio_integrity_map_iter(bio, iocb->private);
+ if (unlikely(ret))
+ goto fail;
+@@ -301,7 +301,7 @@ static void blkdev_bio_end_io_async(struct bio *bio)
+ ret = blk_status_to_errno(bio->bi_status);
+ }
+
+- if (iocb->ki_flags & IOCB_HAS_METADATA)
++ if (bio_integrity(bio))
+ bio_integrity_unmap_user(bio);
+
+ iocb->ki_complete(iocb, ret);
+@@ -422,7 +422,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ }
+
+ nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
+- if (likely(nr_pages <= BIO_MAX_VECS)) {
++ if (likely(nr_pages <= BIO_MAX_VECS &&
++ !(iocb->ki_flags & IOCB_HAS_METADATA))) {
+ if (is_sync_kiocb(iocb))
+ return __blkdev_direct_IO_simple(iocb, iter, bdev,
+ nr_pages);
+@@ -672,6 +673,8 @@ static int blkdev_open(struct inode *inode, struct file *filp)
+
+ if (bdev_can_atomic_write(bdev))
+ filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
++ if (blk_get_integrity(bdev->bd_disk))
++ filp->f_mode |= FMODE_HAS_METADATA;
+
+ ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
+ if (ret)
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index f3477ab377425f..e9aaf72502e51a 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -1547,13 +1547,15 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+ pr_debug("CPU %d exiting\n", policy->cpu);
+ }
+
+-static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
++static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy, bool policy_change)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+ union perf_cached perf;
+ u8 epp;
+
+- if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
++ if (policy_change ||
++ policy->min != cpudata->min_limit_freq ||
++ policy->max != cpudata->max_limit_freq)
+ amd_pstate_update_min_max_limit(policy);
+
+ if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+@@ -1577,7 +1579,7 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+
+ cpudata->policy = policy->policy;
+
+- ret = amd_pstate_epp_update_limit(policy);
++ ret = amd_pstate_epp_update_limit(policy, true);
+ if (ret)
+ return ret;
+
+@@ -1619,13 +1621,14 @@ static int amd_pstate_suspend(struct cpufreq_policy *policy)
+ * min_perf value across kexec reboots. If this CPU is just resumed back without kexec,
+ * the limits, epp and desired perf will get reset to the cached values in cpudata struct
+ */
+- ret = amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);
++ ret = amd_pstate_update_perf(policy, perf.bios_min_perf,
++ FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),
++ FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),
++ FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),
++ false);
+ if (ret)
+ return ret;
+
+- /* invalidate to ensure it's rewritten during resume */
+- cpudata->cppc_req_cached = 0;
+-
+ /* set this flag to avoid setting core offline*/
+ cpudata->suspended = true;
+
+@@ -1651,7 +1654,7 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
+ int ret;
+
+ /* enable amd pstate from suspend state*/
+- ret = amd_pstate_epp_update_limit(policy);
++ ret = amd_pstate_epp_update_limit(policy, false);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 06a1c7dd081ffb..9a85c58922a0c8 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1034,8 +1034,8 @@ static bool hybrid_register_perf_domain(unsigned int cpu)
+ if (!cpu_dev)
+ return false;
+
+- if (em_dev_register_perf_domain(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
+- cpumask_of(cpu), false))
++ if (em_dev_register_pd_no_update(cpu_dev, HYBRID_EM_STATE_COUNT, &cb,
++ cpumask_of(cpu), false))
+ return false;
+
+ cpudata->pd_registered = true;
+diff --git a/drivers/dma/dw/rzn1-dmamux.c b/drivers/dma/dw/rzn1-dmamux.c
+index 4fb8508419dbd8..deadf135681b67 100644
+--- a/drivers/dma/dw/rzn1-dmamux.c
++++ b/drivers/dma/dw/rzn1-dmamux.c
+@@ -48,12 +48,16 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ u32 mask;
+ int ret;
+
+- if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS)
+- return ERR_PTR(-EINVAL);
++ if (dma_spec->args_count != RNZ1_DMAMUX_NCELLS) {
++ ret = -EINVAL;
++ goto put_device;
++ }
+
+ map = kzalloc(sizeof(*map), GFP_KERNEL);
+- if (!map)
+- return ERR_PTR(-ENOMEM);
++ if (!map) {
++ ret = -ENOMEM;
++ goto put_device;
++ }
+
+ chan = dma_spec->args[0];
+ map->req_idx = dma_spec->args[4];
+@@ -94,12 +98,15 @@ static void *rzn1_dmamux_route_allocate(struct of_phandle_args *dma_spec,
+ if (ret)
+ goto clear_bitmap;
+
++ put_device(&pdev->dev);
+ return map;
+
+ clear_bitmap:
+ clear_bit(map->req_idx, dmamux->used_chans);
+ free_map:
+ kfree(map);
++put_device:
++ put_device(&pdev->dev);
+
+ return ERR_PTR(ret);
+ }
+diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
+index 80355d03004dbd..b559b0e18809e4 100644
+--- a/drivers/dma/idxd/init.c
++++ b/drivers/dma/idxd/init.c
+@@ -189,27 +189,30 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ idxd->wq_enable_map = bitmap_zalloc_node(idxd->max_wqs, GFP_KERNEL, dev_to_node(dev));
+ if (!idxd->wq_enable_map) {
+ rc = -ENOMEM;
+- goto err_bitmap;
++ goto err_free_wqs;
+ }
+
+ for (i = 0; i < idxd->max_wqs; i++) {
+ wq = kzalloc_node(sizeof(*wq), GFP_KERNEL, dev_to_node(dev));
+ if (!wq) {
+ rc = -ENOMEM;
+- goto err;
++ goto err_unwind;
+ }
+
+ idxd_dev_set_type(&wq->idxd_dev, IDXD_DEV_WQ);
+ conf_dev = wq_confdev(wq);
+ wq->id = i;
+ wq->idxd = idxd;
+- device_initialize(wq_confdev(wq));
++ device_initialize(conf_dev);
+ conf_dev->parent = idxd_confdev(idxd);
+ conf_dev->bus = &dsa_bus_type;
+ conf_dev->type = &idxd_wq_device_type;
+ rc = dev_set_name(conf_dev, "wq%d.%d", idxd->id, wq->id);
+- if (rc < 0)
+- goto err;
++ if (rc < 0) {
++ put_device(conf_dev);
++ kfree(wq);
++ goto err_unwind;
++ }
+
+ mutex_init(&wq->wq_lock);
+ init_waitqueue_head(&wq->err_queue);
+@@ -220,15 +223,20 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ wq->enqcmds_retries = IDXD_ENQCMDS_RETRIES;
+ wq->wqcfg = kzalloc_node(idxd->wqcfg_size, GFP_KERNEL, dev_to_node(dev));
+ if (!wq->wqcfg) {
++ put_device(conf_dev);
++ kfree(wq);
+ rc = -ENOMEM;
+- goto err;
++ goto err_unwind;
+ }
+
+ if (idxd->hw.wq_cap.op_config) {
+ wq->opcap_bmap = bitmap_zalloc(IDXD_MAX_OPCAP_BITS, GFP_KERNEL);
+ if (!wq->opcap_bmap) {
++ kfree(wq->wqcfg);
++ put_device(conf_dev);
++ kfree(wq);
+ rc = -ENOMEM;
+- goto err_opcap_bmap;
++ goto err_unwind;
+ }
+ bitmap_copy(wq->opcap_bmap, idxd->opcap_bmap, IDXD_MAX_OPCAP_BITS);
+ }
+@@ -239,13 +247,7 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+
+ return 0;
+
+-err_opcap_bmap:
+- kfree(wq->wqcfg);
+-
+-err:
+- put_device(conf_dev);
+- kfree(wq);
+-
++err_unwind:
+ while (--i >= 0) {
+ wq = idxd->wqs[i];
+ if (idxd->hw.wq_cap.op_config)
+@@ -254,11 +256,10 @@ static int idxd_setup_wqs(struct idxd_device *idxd)
+ conf_dev = wq_confdev(wq);
+ put_device(conf_dev);
+ kfree(wq);
+-
+ }
+ bitmap_free(idxd->wq_enable_map);
+
+-err_bitmap:
++err_free_wqs:
+ kfree(idxd->wqs);
+
+ return rc;
+@@ -1292,10 +1293,12 @@ static void idxd_remove(struct pci_dev *pdev)
+ device_unregister(idxd_confdev(idxd));
+ idxd_shutdown(pdev);
+ idxd_device_remove_debugfs(idxd);
+- idxd_cleanup(idxd);
++ perfmon_pmu_remove(idxd);
++ idxd_cleanup_interrupts(idxd);
++ if (device_pasid_enabled(idxd))
++ idxd_disable_system_pasid(idxd);
+ pci_iounmap(pdev, idxd->reg_base);
+ put_device(idxd_confdev(idxd));
+- idxd_free(idxd);
+ pci_disable_device(pdev);
+ }
+
+diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
+index bbc3276992bb01..2cf060174795fe 100644
+--- a/drivers/dma/qcom/bam_dma.c
++++ b/drivers/dma/qcom/bam_dma.c
+@@ -1283,13 +1283,17 @@ static int bam_dma_probe(struct platform_device *pdev)
+ if (!bdev->bamclk) {
+ ret = of_property_read_u32(pdev->dev.of_node, "num-channels",
+ &bdev->num_channels);
+- if (ret)
++ if (ret) {
+ dev_err(bdev->dev, "num-channels unspecified in dt\n");
++ return ret;
++ }
+
+ ret = of_property_read_u32(pdev->dev.of_node, "qcom,num-ees",
+ &bdev->num_ees);
+- if (ret)
++ if (ret) {
+ dev_err(bdev->dev, "num-ees unspecified in dt\n");
++ return ret;
++ }
+ }
+
+ ret = clk_prepare_enable(bdev->bamclk);
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index 3ed406f08c442e..552be71db6c47b 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -2064,8 +2064,8 @@ static int edma_setup_from_hw(struct device *dev, struct edma_soc_info *pdata,
+ * priority. So Q0 is the highest priority queue and the last queue has
+ * the lowest priority.
+ */
+- queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1, sizeof(s8),
+- GFP_KERNEL);
++ queue_priority_map = devm_kcalloc(dev, ecc->num_tc + 1,
++ sizeof(*queue_priority_map), GFP_KERNEL);
+ if (!queue_priority_map)
+ return -ENOMEM;
+
+diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
+index cae52c654a15c6..7685a8550d4b1f 100644
+--- a/drivers/edac/altera_edac.c
++++ b/drivers/edac/altera_edac.c
+@@ -128,7 +128,6 @@ static ssize_t altr_sdr_mc_err_inject_write(struct file *file,
+
+ ptemp = dma_alloc_coherent(mci->pdev, 16, &dma_handle, GFP_KERNEL);
+ if (!ptemp) {
+- dma_free_coherent(mci->pdev, 16, ptemp, dma_handle);
+ edac_printk(KERN_ERR, EDAC_MC,
+ "Inject: Buffer Allocation error\n");
+ return -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index f9ceda7861f1b1..cdafce9781ed32 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -596,10 +596,6 @@ int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
+ udelay(1);
+ }
+
+- dev_err(adev->dev,
+- "psp reg (0x%x) wait timed out, mask: %x, read: %x exp: %x",
+- reg_index, mask, val, reg_val);
+-
+ return -ETIME;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index a4a00855d0b238..428adc7f741de3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -51,17 +51,6 @@
+ #define C2PMSG_CMD_SPI_GET_ROM_IMAGE_ADDR_HI 0x10
+ #define C2PMSG_CMD_SPI_GET_FLASH_IMAGE 0x11
+
+-/* Command register bit 31 set to indicate readiness */
+-#define MBOX_TOS_READY_FLAG (GFX_FLAG_RESPONSE)
+-#define MBOX_TOS_READY_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
+-
+-/* Values to check for a successful GFX_CMD response wait. Check against
+- * both status bits and response state - helps to detect a command failure
+- * or other unexpected cases like a device drop reading all 0xFFs
+- */
+-#define MBOX_TOS_RESP_FLAG (GFX_FLAG_RESPONSE)
+-#define MBOX_TOS_RESP_MASK (GFX_CMD_RESPONSE_MASK | GFX_CMD_STATUS_MASK)
+-
+ extern const struct attribute_group amdgpu_flash_attr_group;
+
+ enum psp_shared_mem_size {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index 7c5584742471e9..a0b7ac7486dc55 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -389,8 +389,6 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
+ dma_fence_put(ring->vmid_wait);
+ ring->vmid_wait = NULL;
+ ring->me = 0;
+-
+- ring->adev->rings[ring->idx] = NULL;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c b/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
+index 574880d6700995..2ab6fa4fcf20b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
++++ b/drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
+@@ -29,6 +29,8 @@
+ #include "amdgpu.h"
+ #include "isp_v4_1_1.h"
+
++MODULE_FIRMWARE("amdgpu/isp_4_1_1.bin");
++
+ static const unsigned int isp_4_1_1_int_srcid[MAX_ISP411_INT_SRC] = {
+ ISP_4_1__SRCID__ISP_RINGBUFFER_WPT9,
+ ISP_4_1__SRCID__ISP_RINGBUFFER_WPT10,
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+index 2c4ebd98927ff3..145186a1e48f6b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
+@@ -94,7 +94,7 @@ static int psp_v10_0_ring_create(struct psp_context *psp,
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ 0x80000000, 0x8000FFFF, false);
+
+ return ret;
+ }
+@@ -115,7 +115,7 @@ static int psp_v10_0_ring_stop(struct psp_context *psp,
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ 0x80000000, 0x80000000, false);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+index 1a4a26e6ffd24c..215543575f477c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
+@@ -277,13 +277,11 @@ static int psp_v11_0_ring_stop(struct psp_context *psp,
+
+ /* Wait for response flag (bit 31) */
+ if (amdgpu_sriov_vf(adev))
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x80000000, false);
+ else
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+
+ return ret;
+ }
+@@ -319,15 +317,13 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_101 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x8000FFFF, false);
+
+ } else {
+ /* Wait for sOS ready for ring creation */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ if (ret) {
+ DRM_ERROR("Failed to wait for sOS ready for ring creation\n");
+ return ret;
+@@ -351,9 +347,8 @@ static int psp_v11_0_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x8000FFFF, false);
+ }
+
+ return ret;
+@@ -386,8 +381,7 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+
+ offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+
+- ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
+- MBOX_TOS_READY_MASK, false);
++ ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
+
+ if (ret) {
+ DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -401,8 +395,7 @@ static int psp_v11_0_mode1_reset(struct psp_context *psp)
+
+ offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+
+- ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
+- false);
++ ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
+
+ if (ret) {
+ DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+index 338d015c0f2ee2..5697760a819bc7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v11_0_8.c
+@@ -41,9 +41,8 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x80000000, false);
+ } else {
+ /* Write the ring destroy command*/
+ WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_64,
+@@ -51,9 +50,8 @@ static int psp_v11_0_8_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ }
+
+ return ret;
+@@ -89,15 +87,13 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_101 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x8000FFFF, false);
+
+ } else {
+ /* Wait for sOS ready for ring creation */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ if (ret) {
+ DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ return ret;
+@@ -121,9 +117,8 @@ static int psp_v11_0_8_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x8000FFFF, false);
+ }
+
+ return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+index d54b3e0fabaf40..80153f8374704a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v12_0.c
+@@ -163,7 +163,7 @@ static int psp_v12_0_ring_create(struct psp_context *psp,
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ 0x80000000, 0x8000FFFF, false);
+
+ return ret;
+ }
+@@ -184,13 +184,11 @@ static int psp_v12_0_ring_stop(struct psp_context *psp,
+
+ /* Wait for response flag (bit 31) */
+ if (amdgpu_sriov_vf(adev))
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x80000000, false);
+ else
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+
+ return ret;
+ }
+@@ -221,8 +219,7 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+
+ offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64);
+
+- ret = psp_wait_for(psp, offset, MBOX_TOS_READY_FLAG,
+- MBOX_TOS_READY_MASK, false);
++ ret = psp_wait_for(psp, offset, 0x80000000, 0x8000FFFF, false);
+
+ if (ret) {
+ DRM_INFO("psp is not working correctly before mode1 reset!\n");
+@@ -236,8 +233,7 @@ static int psp_v12_0_mode1_reset(struct psp_context *psp)
+
+ offset = SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_33);
+
+- ret = psp_wait_for(psp, offset, MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK,
+- false);
++ ret = psp_wait_for(psp, offset, 0x80000000, 0x80000000, false);
+
+ if (ret) {
+ DRM_INFO("psp mode 1 reset failed!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+index 58b6b64dcd683b..ead616c117057f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
+@@ -384,9 +384,8 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x80000000, false);
+ } else {
+ /* Write the ring destroy command*/
+ WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -394,9 +393,8 @@ static int psp_v13_0_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ }
+
+ return ret;
+@@ -432,15 +430,13 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_101 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x8000FFFF, false);
+
+ } else {
+ /* Wait for sOS ready for ring creation */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+- MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ if (ret) {
+ DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ return ret;
+@@ -464,9 +460,8 @@ static int psp_v13_0_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x8000FFFF, false);
+ }
+
+ return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+index f65af52c1c1939..eaa5512a21dacd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0_4.c
+@@ -204,9 +204,8 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x80000000, false);
+ } else {
+ /* Write the ring destroy command*/
+ WREG32_SOC15(MP0, 0, regMP0_SMN_C2PMSG_64,
+@@ -214,9 +213,8 @@ static int psp_v13_0_4_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ }
+
+ return ret;
+@@ -252,15 +250,13 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_101 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_101),
++ 0x80000000, 0x8000FFFF, false);
+
+ } else {
+ /* Wait for sOS ready for ring creation */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+- MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ if (ret) {
+ DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ return ret;
+@@ -284,9 +280,8 @@ static int psp_v13_0_4_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMP0_SMN_C2PMSG_64),
++ 0x80000000, 0x8000FFFF, false);
+ }
+
+ return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+index b029f301aaccaf..30d8eecc567481 100644
+--- a/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/psp_v14_0.c
+@@ -250,9 +250,8 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++ 0x80000000, 0x80000000, false);
+ } else {
+ /* Write the ring destroy command*/
+ WREG32_SOC15(MP0, 0, regMPASP_SMN_C2PMSG_64,
+@@ -260,9 +259,8 @@ static int psp_v14_0_ring_stop(struct psp_context *psp,
+ /* there might be handshake issue with hardware which needs delay */
+ mdelay(20);
+ /* Wait for response flag (bit 31) */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ }
+
+ return ret;
+@@ -298,15 +296,13 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_101 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_101),
++ 0x80000000, 0x8000FFFF, false);
+
+ } else {
+ /* Wait for sOS ready for ring creation */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+- MBOX_TOS_READY_FLAG, MBOX_TOS_READY_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++ 0x80000000, 0x80000000, false);
+ if (ret) {
+ DRM_ERROR("Failed to wait for trust OS ready for ring creation\n");
+ return ret;
+@@ -330,9 +326,8 @@ static int psp_v14_0_ring_create(struct psp_context *psp,
+ mdelay(20);
+
+ /* Wait for response flag (bit 31) in C2PMSG_64 */
+- ret = psp_wait_for(
+- psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
+- MBOX_TOS_RESP_FLAG, MBOX_TOS_RESP_MASK, false);
++ ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, regMPASP_SMN_C2PMSG_64),
++ 0x80000000, 0x8000FFFF, false);
+ }
+
+ return ret;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index 9fb0d53805892d..614e0886556271 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -1875,15 +1875,19 @@ static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
+ struct amdgpu_job *job)
+ {
+ struct drm_gpu_scheduler **scheds;
+-
+- /* The create msg must be in the first IB submitted */
+- if (atomic_read(&job->base.entity->fence_seq))
+- return -EINVAL;
++ struct dma_fence *fence;
+
+ /* if VCN0 is harvested, we can't support AV1 */
+ if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+ return -EINVAL;
+
++ /* wait for all jobs to finish before switching to instance 0 */
++ fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
++ if (fence) {
++ dma_fence_wait(fence, false);
++ dma_fence_put(fence);
++ }
++
+ scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_DEC]
+ [AMDGPU_RING_PRIO_DEFAULT].sched;
+ drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+index 46c329a1b2f5f0..e77f2df1beb773 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+@@ -1807,15 +1807,19 @@ static int vcn_v4_0_limit_sched(struct amdgpu_cs_parser *p,
+ struct amdgpu_job *job)
+ {
+ struct drm_gpu_scheduler **scheds;
+-
+- /* The create msg must be in the first IB submitted */
+- if (atomic_read(&job->base.entity->fence_seq))
+- return -EINVAL;
++ struct dma_fence *fence;
+
+ /* if VCN0 is harvested, we can't support AV1 */
+ if (p->adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0)
+ return -EINVAL;
+
++ /* wait for all jobs to finish before switching to instance 0 */
++ fence = amdgpu_ctx_get_fence(p->ctx, job->base.entity, ~0ull);
++ if (fence) {
++ dma_fence_wait(fence, false);
++ dma_fence_put(fence);
++ }
++
+ scheds = p->adev->gpu_sched[AMDGPU_HW_IP_VCN_ENC]
+ [AMDGPU_RING_PRIO_0].sched;
+ drm_sched_entity_modify_sched(job->base.entity, scheds, 1);
+@@ -1906,22 +1910,16 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
+
+ #define RADEON_VCN_ENGINE_TYPE_ENCODE (0x00000002)
+ #define RADEON_VCN_ENGINE_TYPE_DECODE (0x00000003)
+-
+ #define RADEON_VCN_ENGINE_INFO (0x30000001)
+-#define RADEON_VCN_ENGINE_INFO_MAX_OFFSET 16
+-
+ #define RENCODE_ENCODE_STANDARD_AV1 2
+ #define RENCODE_IB_PARAM_SESSION_INIT 0x00000003
+-#define RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET 64
+
+-/* return the offset in ib if id is found, -1 otherwise
+- * to speed up the searching we only search upto max_offset
+- */
+-static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int max_offset)
++/* return the offset in ib if id is found, -1 otherwise */
++static int vcn_v4_0_enc_find_ib_param(struct amdgpu_ib *ib, uint32_t id, int start)
+ {
+ int i;
+
+- for (i = 0; i < ib->length_dw && i < max_offset && ib->ptr[i] >= 8; i += ib->ptr[i]/4) {
++ for (i = start; i < ib->length_dw && ib->ptr[i] >= 8; i += ib->ptr[i] / 4) {
+ if (ib->ptr[i + 1] == id)
+ return i;
+ }
+@@ -1936,33 +1934,29 @@ static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+ struct amdgpu_vcn_decode_buffer *decode_buffer;
+ uint64_t addr;
+ uint32_t val;
+- int idx;
++ int idx = 0, sidx;
+
+ /* The first instance can decode anything */
+ if (!ring->me)
+ return 0;
+
+- /* RADEON_VCN_ENGINE_INFO is at the top of ib block */
+- idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO,
+- RADEON_VCN_ENGINE_INFO_MAX_OFFSET);
+- if (idx < 0) /* engine info is missing */
+- return 0;
+-
+- val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
+- if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
+- decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
+-
+- if (!(decode_buffer->valid_buf_flag & 0x1))
+- return 0;
+-
+- addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
+- decode_buffer->msg_buffer_address_lo;
+- return vcn_v4_0_dec_msg(p, job, addr);
+- } else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
+- idx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT,
+- RENCODE_IB_PARAM_SESSION_INIT_MAX_OFFSET);
+- if (idx >= 0 && ib->ptr[idx + 2] == RENCODE_ENCODE_STANDARD_AV1)
+- return vcn_v4_0_limit_sched(p, job);
++ while ((idx = vcn_v4_0_enc_find_ib_param(ib, RADEON_VCN_ENGINE_INFO, idx)) >= 0) {
++ val = amdgpu_ib_get_value(ib, idx + 2); /* RADEON_VCN_ENGINE_TYPE */
++ if (val == RADEON_VCN_ENGINE_TYPE_DECODE) {
++ decode_buffer = (struct amdgpu_vcn_decode_buffer *)&ib->ptr[idx + 6];
++
++ if (!(decode_buffer->valid_buf_flag & 0x1))
++ return 0;
++
++ addr = ((u64)decode_buffer->msg_buffer_address_hi) << 32 |
++ decode_buffer->msg_buffer_address_lo;
++ return vcn_v4_0_dec_msg(p, job, addr);
++ } else if (val == RADEON_VCN_ENGINE_TYPE_ENCODE) {
++ sidx = vcn_v4_0_enc_find_ib_param(ib, RENCODE_IB_PARAM_SESSION_INIT, idx);
++ if (sidx >= 0 && ib->ptr[sidx + 2] == RENCODE_ENCODE_STANDARD_AV1)
++ return vcn_v4_0_limit_sched(p, job);
++ }
++ idx += ib->ptr[idx] / 4;
+ }
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 2d94fec5b545d7..312f6075e39d11 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2910,6 +2910,17 @@ static int dm_oem_i2c_hw_init(struct amdgpu_device *adev)
+ return 0;
+ }
+
++static void dm_oem_i2c_hw_fini(struct amdgpu_device *adev)
++{
++ struct amdgpu_display_manager *dm = &adev->dm;
++
++ if (dm->oem_i2c) {
++ i2c_del_adapter(&dm->oem_i2c->base);
++ kfree(dm->oem_i2c);
++ dm->oem_i2c = NULL;
++ }
++}
++
+ /**
+ * dm_hw_init() - Initialize DC device
+ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance.
+@@ -2960,7 +2971,7 @@ static int dm_hw_fini(struct amdgpu_ip_block *ip_block)
+ {
+ struct amdgpu_device *adev = ip_block->adev;
+
+- kfree(adev->dm.oem_i2c);
++ dm_oem_i2c_hw_fini(adev);
+
+ amdgpu_dm_hpd_fini(adev);
+
+@@ -3073,16 +3084,55 @@ static int dm_cache_state(struct amdgpu_device *adev)
+ return adev->dm.cached_state ? 0 : r;
+ }
+
+-static int dm_prepare_suspend(struct amdgpu_ip_block *ip_block)
++static void dm_destroy_cached_state(struct amdgpu_device *adev)
+ {
+- struct amdgpu_device *adev = ip_block->adev;
++ struct amdgpu_display_manager *dm = &adev->dm;
++ struct drm_device *ddev = adev_to_drm(adev);
++ struct dm_plane_state *dm_new_plane_state;
++ struct drm_plane_state *new_plane_state;
++ struct dm_crtc_state *dm_new_crtc_state;
++ struct drm_crtc_state *new_crtc_state;
++ struct drm_plane *plane;
++ struct drm_crtc *crtc;
++ int i;
+
+- if (amdgpu_in_reset(adev))
+- return 0;
++ if (!dm->cached_state)
++ return;
++
++ /* Force mode set in atomic commit */
++ for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
++ new_crtc_state->active_changed = true;
++ dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++ reset_freesync_config_for_crtc(dm_new_crtc_state);
++ }
++
++ /*
++ * atomic_check is expected to create the dc states. We need to release
++ * them here, since they were duplicated as part of the suspend
++ * procedure.
++ */
++ for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
++ dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
++ if (dm_new_crtc_state->stream) {
++ WARN_ON(kref_read(&dm_new_crtc_state->stream->refcount) > 1);
++ dc_stream_release(dm_new_crtc_state->stream);
++ dm_new_crtc_state->stream = NULL;
++ }
++ dm_new_crtc_state->base.color_mgmt_changed = true;
++ }
++
++ for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
++ dm_new_plane_state = to_dm_plane_state(new_plane_state);
++ if (dm_new_plane_state->dc_state) {
++ WARN_ON(kref_read(&dm_new_plane_state->dc_state->refcount) > 1);
++ dc_plane_state_release(dm_new_plane_state->dc_state);
++ dm_new_plane_state->dc_state = NULL;
++ }
++ }
+
+- WARN_ON(adev->dm.cached_state);
++ drm_atomic_helper_resume(ddev, dm->cached_state);
+
+- return dm_cache_state(adev);
++ dm->cached_state = NULL;
+ }
+
+ static int dm_suspend(struct amdgpu_ip_block *ip_block)
+@@ -3306,12 +3356,6 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ struct amdgpu_dm_connector *aconnector;
+ struct drm_connector *connector;
+ struct drm_connector_list_iter iter;
+- struct drm_crtc *crtc;
+- struct drm_crtc_state *new_crtc_state;
+- struct dm_crtc_state *dm_new_crtc_state;
+- struct drm_plane *plane;
+- struct drm_plane_state *new_plane_state;
+- struct dm_plane_state *dm_new_plane_state;
+ struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state);
+ enum dc_connection_type new_connection_type = dc_connection_none;
+ struct dc_state *dc_state;
+@@ -3470,40 +3514,7 @@ static int dm_resume(struct amdgpu_ip_block *ip_block)
+ }
+ drm_connector_list_iter_end(&iter);
+
+- /* Force mode set in atomic commit */
+- for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
+- new_crtc_state->active_changed = true;
+- dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+- reset_freesync_config_for_crtc(dm_new_crtc_state);
+- }
+-
+- /*
+- * atomic_check is expected to create the dc states. We need to release
+- * them here, since they were duplicated as part of the suspend
+- * procedure.
+- */
+- for_each_new_crtc_in_state(dm->cached_state, crtc, new_crtc_state, i) {
+- dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+- if (dm_new_crtc_state->stream) {
+- WARN_ON(kref_read(&dm_new_crtc_state->stream->refcount) > 1);
+- dc_stream_release(dm_new_crtc_state->stream);
+- dm_new_crtc_state->stream = NULL;
+- }
+- dm_new_crtc_state->base.color_mgmt_changed = true;
+- }
+-
+- for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
+- dm_new_plane_state = to_dm_plane_state(new_plane_state);
+- if (dm_new_plane_state->dc_state) {
+- WARN_ON(kref_read(&dm_new_plane_state->dc_state->refcount) > 1);
+- dc_plane_state_release(dm_new_plane_state->dc_state);
+- dm_new_plane_state->dc_state = NULL;
+- }
+- }
+-
+- drm_atomic_helper_resume(ddev, dm->cached_state);
+-
+- dm->cached_state = NULL;
++ dm_destroy_cached_state(adev);
+
+ /* Do mst topology probing after resuming cached state*/
+ drm_connector_list_iter_begin(ddev, &iter);
+@@ -3549,7 +3560,6 @@ static const struct amd_ip_funcs amdgpu_dm_funcs = {
+ .early_fini = amdgpu_dm_early_fini,
+ .hw_init = dm_hw_init,
+ .hw_fini = dm_hw_fini,
+- .prepare_suspend = dm_prepare_suspend,
+ .suspend = dm_suspend,
+ .resume = dm_resume,
+ .is_idle = dm_is_idle,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 25e8befbcc479a..99fd064324baa6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -809,6 +809,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
+ drm_dp_aux_init(&aconnector->dm_dp_aux.aux);
+ drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux,
+ &aconnector->base);
++ drm_dp_dpcd_set_probe(&aconnector->dm_dp_aux.aux, false);
+
+ if (aconnector->base.connector_type == DRM_MODE_CONNECTOR_eDP)
+ return;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index f41073c0147e23..7dfbfb18593c12 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1095,6 +1095,7 @@ struct dc_debug_options {
+ bool enable_hblank_borrow;
+ bool force_subvp_df_throttle;
+ uint32_t acpi_transition_bitmasks[MAX_PIPES];
++ bool enable_pg_cntl_debug_logs;
+ };
+
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+index 58c84f555c0fb8..0ce9489ac6b728 100644
+--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+@@ -133,30 +133,34 @@ enum dsc_clk_source {
+ };
+
+
+-static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool enable)
++static void dccg35_set_dsc_clk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
+ {
+ struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+- if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && enable)
++ if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dsc && allow_rcg)
+ return;
+
+ switch (inst) {
+ case 0:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ case 1:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ case 2:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ case 3:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ default:
+ BREAK_TO_DEBUGGER();
+ return;
+ }
++
++ /* Wait for clock to ramp */
++ if (!allow_rcg)
++ udelay(10);
+ }
+
+ static void dccg35_set_symclk32_se_rcg(
+@@ -385,35 +389,34 @@ static void dccg35_set_dtbclk_p_rcg(struct dccg *dccg, int inst, bool enable)
+ }
+ }
+
+-static void dccg35_set_dppclk_rcg(struct dccg *dccg,
+- int inst, bool enable)
++static void dccg35_set_dppclk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
+ {
+-
+ struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+-
+- if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && enable)
++ if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && allow_rcg)
+ return;
+
+ switch (inst) {
+ case 0:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ case 1:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ case 2:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ case 3:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable ? 0 : 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, allow_rcg ? 0 : 1);
+ break;
+ default:
+ BREAK_TO_DEBUGGER();
+ break;
+ }
+- //DC_LOG_DEBUG("%s: inst(%d) DPPCLK rcg_disable: %d\n", __func__, inst, enable ? 0 : 1);
+
++ /* Wait for clock to ramp */
++ if (!allow_rcg)
++ udelay(10);
+ }
+
+ static void dccg35_set_dpstreamclk_rcg(
+@@ -1177,32 +1180,34 @@ static void dccg35_update_dpp_dto(struct dccg *dccg, int dpp_inst,
+ }
+
+ static void dccg35_set_dppclk_root_clock_gating(struct dccg *dccg,
+- uint32_t dpp_inst, uint32_t enable)
++ uint32_t dpp_inst, uint32_t disallow_rcg)
+ {
+ struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);
+
+- if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp)
++ if (!dccg->ctx->dc->debug.root_clock_optimization.bits.dpp && !disallow_rcg)
+ return;
+
+
+ switch (dpp_inst) {
+ case 0:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, enable);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK0_ROOT_GATE_DISABLE, disallow_rcg);
+ break;
+ case 1:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, enable);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK1_ROOT_GATE_DISABLE, disallow_rcg);
+ break;
+ case 2:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, enable);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK2_ROOT_GATE_DISABLE, disallow_rcg);
+ break;
+ case 3:
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, enable);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DPPCLK3_ROOT_GATE_DISABLE, disallow_rcg);
+ break;
+ default:
+ break;
+ }
+- //DC_LOG_DEBUG("%s: dpp_inst(%d) rcg: %d\n", __func__, dpp_inst, enable);
+
++ /* Wait for clock to ramp */
++ if (disallow_rcg)
++ udelay(10);
+ }
+
+ static void dccg35_get_pixel_rate_div(
+@@ -1782,8 +1787,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ //Disable DTO
+ switch (inst) {
+ case 0:
+- if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK0_ROOT_GATE_DISABLE, 1);
+
+ REG_UPDATE_2(DSCCLK0_DTO_PARAM,
+ DSCCLK0_DTO_PHASE, 0,
+@@ -1791,8 +1795,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK0_EN, 1);
+ break;
+ case 1:
+- if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK1_ROOT_GATE_DISABLE, 1);
+
+ REG_UPDATE_2(DSCCLK1_DTO_PARAM,
+ DSCCLK1_DTO_PHASE, 0,
+@@ -1800,8 +1803,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK1_EN, 1);
+ break;
+ case 2:
+- if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK2_ROOT_GATE_DISABLE, 1);
+
+ REG_UPDATE_2(DSCCLK2_DTO_PARAM,
+ DSCCLK2_DTO_PHASE, 0,
+@@ -1809,8 +1811,7 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ REG_UPDATE(DSCCLK_DTO_CTRL, DSCCLK2_EN, 1);
+ break;
+ case 3:
+- if (dccg->ctx->dc->debug.root_clock_optimization.bits.dsc)
+- REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
++ REG_UPDATE(DCCG_GATE_DISABLE_CNTL6, DSCCLK3_ROOT_GATE_DISABLE, 1);
+
+ REG_UPDATE_2(DSCCLK3_DTO_PARAM,
+ DSCCLK3_DTO_PHASE, 0,
+@@ -1821,6 +1822,9 @@ static void dccg35_enable_dscclk(struct dccg *dccg, int inst)
+ BREAK_TO_DEBUGGER();
+ return;
+ }
++
++ /* Wait for clock to ramp */
++ udelay(10);
+ }
+
+ static void dccg35_disable_dscclk(struct dccg *dccg,
+@@ -1864,6 +1868,9 @@ static void dccg35_disable_dscclk(struct dccg *dccg,
+ default:
+ return;
+ }
++
++ /* Wait for clock ramp */
++ udelay(10);
+ }
+
+ static void dccg35_enable_symclk_se(struct dccg *dccg, uint32_t stream_enc_inst, uint32_t link_enc_inst)
+@@ -2349,10 +2356,7 @@ static void dccg35_disable_symclk_se_cb(
+
+ void dccg35_root_gate_disable_control(struct dccg *dccg, uint32_t pipe_idx, uint32_t disable_clock_gating)
+ {
+-
+- if (dccg->ctx->dc->debug.root_clock_optimization.bits.dpp) {
+- dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
+- }
++ dccg35_set_dppclk_root_clock_gating(dccg, pipe_idx, disable_clock_gating);
+ }
+
+ static const struct dccg_funcs dccg35_funcs_new = {
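
The dccg35 hunks above rename the boolean from "enable" to "allow_rcg"/"disallow_rcg" because DPPCLK*_ROOT_GATE_DISABLE is a gate *disable* field: allowing root clock gating (RCG) writes 0, blocking it writes 1. The added udelay(10) fires only on the ungating path, so the root clock has time to ramp before the block is used. A minimal sketch of the convention, where reg_update() and DPPCLK_ROOT_GATE_DISABLE() are stand-ins for the driver's REG_UPDATE machinery:

    /* Sketch only: mirrors the allow_rcg polarity used in the hunks above. */
    #include <linux/delay.h>

    static void set_dppclk_rcg(struct dccg *dccg, int inst, bool allow_rcg)
    {
            /* Field is a gate *disable*: 0 = gating allowed, 1 = clock forced on */
            reg_update(dccg, DPPCLK_ROOT_GATE_DISABLE(inst), allow_rcg ? 0 : 1);

            /* Only the ungating (clock forced on) path waits for the ramp */
            if (!allow_rcg)
                    udelay(10);
    }
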
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index cdb8685ae7d719..454e362ff096aa 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -955,7 +955,7 @@ enum dc_status dcn20_enable_stream_timing(
+ return DC_ERROR_UNEXPECTED;
+ }
+
+- fsleep(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
++ udelay(stream->timing.v_total * (stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz));
+
+ params.vertical_total_min = stream->adjust.v_total_min;
+ params.vertical_total_max = stream->adjust.v_total_max;
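
The dcn20 change swaps fsleep() for udelay() while waiting out one frame: fsleep() may sleep and is therefore only valid in process context, whereas udelay() busy-waits and is safe if this sequence runs with locks held or interrupts off. The argument is one frame time in microseconds, since h_total * 10000 / pix_clk_100hz is the line time in us (pix_clk_100hz being the pixel clock in 100 Hz units) and v_total scales it to a full frame. The arithmetic as plain runnable C, with example 1080p60 timings:

    /* Frame-time arithmetic from the hunk above; the values are examples. */
    #include <stdio.h>

    int main(void)
    {
            unsigned int h_total = 2200, v_total = 1125;  /* 1080p total timing */
            unsigned int pix_clk_100hz = 1485000;         /* 148.5 MHz in 100 Hz units */

            /* integer division truncates the line time, so the wait is approximate */
            unsigned int line_us  = h_total * 10000u / pix_clk_100hz;
            unsigned int frame_us = v_total * line_us;

            printf("line ~%u us, frame ~%u us\n", line_us, frame_us);
            return 0;
    }
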
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index a267f574b61937..764eff6a4ec6b7 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -113,6 +113,14 @@ static void enable_memory_low_power(struct dc *dc)
+ }
+ #endif
+
++static void print_pg_status(struct dc *dc, const char *debug_func, const char *debug_log)
++{
++ if (dc->debug.enable_pg_cntl_debug_logs && dc->res_pool->pg_cntl) {
++ if (dc->res_pool->pg_cntl->funcs->print_pg_status)
++ dc->res_pool->pg_cntl->funcs->print_pg_status(dc->res_pool->pg_cntl, debug_func, debug_log);
++ }
++}
++
+ void dcn35_set_dmu_fgcg(struct dce_hwseq *hws, bool enable)
+ {
+ REG_UPDATE_3(DMU_CLK_CNTL,
+@@ -137,6 +145,8 @@ void dcn35_init_hw(struct dc *dc)
+ uint32_t user_level = MAX_BACKLIGHT_LEVEL;
+ int i;
+
++ print_pg_status(dc, __func__, ": start");
++
+ if (dc->clk_mgr && dc->clk_mgr->funcs->init_clocks)
+ dc->clk_mgr->funcs->init_clocks(dc->clk_mgr);
+
+@@ -200,10 +210,7 @@ void dcn35_init_hw(struct dc *dc)
+
+ /* we want to turn off all dp displays before doing detection */
+ dc->link_srv->blank_all_dp_displays(dc);
+-/*
+- if (hws->funcs.enable_power_gating_plane)
+- hws->funcs.enable_power_gating_plane(dc->hwseq, true);
+-*/
++
+ if (res_pool->hubbub && res_pool->hubbub->funcs->dchubbub_init)
+ res_pool->hubbub->funcs->dchubbub_init(dc->res_pool->hubbub);
+ /* If taking control over from VBIOS, we may want to optimize our first
+@@ -236,6 +243,8 @@ void dcn35_init_hw(struct dc *dc)
+ }
+
+ hws->funcs.init_pipes(dc, dc->current_state);
++ print_pg_status(dc, __func__, ": after init_pipes");
++
+ if (dc->res_pool->hubbub->funcs->allow_self_refresh_control &&
+ !dc->res_pool->hubbub->ctx->dc->debug.disable_stutter)
+ dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
+@@ -312,6 +321,7 @@ void dcn35_init_hw(struct dc *dc)
+ if (dc->res_pool->pg_cntl->funcs->init_pg_status)
+ dc->res_pool->pg_cntl->funcs->init_pg_status(dc->res_pool->pg_cntl);
+ }
++ print_pg_status(dc, __func__, ": after init_pg_status");
+ }
+
+ static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable)
+@@ -500,97 +510,6 @@ void dcn35_physymclk_root_clock_control(struct dce_hwseq *hws, unsigned int phy_
+ }
+ }
+
+-void dcn35_dsc_pg_control(
+- struct dce_hwseq *hws,
+- unsigned int dsc_inst,
+- bool power_on)
+-{
+- uint32_t power_gate = power_on ? 0 : 1;
+- uint32_t pwr_status = power_on ? 0 : 2;
+- uint32_t org_ip_request_cntl = 0;
+-
+- if (hws->ctx->dc->debug.disable_dsc_power_gate)
+- return;
+- if (hws->ctx->dc->debug.ignore_pg)
+- return;
+- REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+- if (org_ip_request_cntl == 0)
+- REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
+-
+- switch (dsc_inst) {
+- case 0: /* DSC0 */
+- REG_UPDATE(DOMAIN16_PG_CONFIG,
+- DOMAIN_POWER_GATE, power_gate);
+-
+- REG_WAIT(DOMAIN16_PG_STATUS,
+- DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
+- break;
+- case 1: /* DSC1 */
+- REG_UPDATE(DOMAIN17_PG_CONFIG,
+- DOMAIN_POWER_GATE, power_gate);
+-
+- REG_WAIT(DOMAIN17_PG_STATUS,
+- DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
+- break;
+- case 2: /* DSC2 */
+- REG_UPDATE(DOMAIN18_PG_CONFIG,
+- DOMAIN_POWER_GATE, power_gate);
+-
+- REG_WAIT(DOMAIN18_PG_STATUS,
+- DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
+- break;
+- case 3: /* DSC3 */
+- REG_UPDATE(DOMAIN19_PG_CONFIG,
+- DOMAIN_POWER_GATE, power_gate);
+-
+- REG_WAIT(DOMAIN19_PG_STATUS,
+- DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
+- break;
+- default:
+- BREAK_TO_DEBUGGER();
+- break;
+- }
+-
+- if (org_ip_request_cntl == 0)
+- REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 0);
+-}
+-
+-void dcn35_enable_power_gating_plane(struct dce_hwseq *hws, bool enable)
+-{
+- bool force_on = true; /* disable power gating */
+- uint32_t org_ip_request_cntl = 0;
+-
+- if (hws->ctx->dc->debug.disable_hubp_power_gate)
+- return;
+- if (hws->ctx->dc->debug.ignore_pg)
+- return;
+- REG_GET(DC_IP_REQUEST_CNTL, IP_REQUEST_EN, &org_ip_request_cntl);
+- if (org_ip_request_cntl == 0)
+- REG_SET(DC_IP_REQUEST_CNTL, 0, IP_REQUEST_EN, 1);
+- /* DCHUBP0/1/2/3/4/5 */
+- REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+- REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+- /* DPP0/1/2/3/4/5 */
+- REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+- REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-
+- force_on = true; /* disable power gating */
+- if (enable && !hws->ctx->dc->debug.disable_dsc_power_gate)
+- force_on = false;
+-
+- /* DCS0/1/2/3/4 */
+- REG_UPDATE(DOMAIN16_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+- REG_UPDATE(DOMAIN17_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+- REG_UPDATE(DOMAIN18_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+- REG_UPDATE(DOMAIN19_PG_CONFIG, DOMAIN_POWER_FORCEON, force_on);
+-
+-
+-}
+-
+ /* In headless boot cases, DIG may be turned
+ * on which causes HW/SW discrepancies.
+ * To avoid this, power down hardware on boot
+@@ -1453,6 +1372,8 @@ void dcn35_prepare_bandwidth(
+ }
+
+ dcn20_prepare_bandwidth(dc, context);
++
++ print_pg_status(dc, __func__, ": after rcg and power up");
+ }
+
+ void dcn35_optimize_bandwidth(
+@@ -1461,6 +1382,8 @@ void dcn35_optimize_bandwidth(
+ {
+ struct pg_block_update pg_update_state;
+
++ print_pg_status(dc, __func__, ": before rcg and power up");
++
+ dcn20_optimize_bandwidth(dc, context);
+
+ if (dc->hwss.calc_blocks_to_gate) {
+@@ -1472,6 +1395,8 @@ void dcn35_optimize_bandwidth(
+ if (dc->hwss.root_clock_control)
+ dc->hwss.root_clock_control(dc, &pg_update_state, false);
+ }
++
++ print_pg_status(dc, __func__, ": after rcg and power up");
+ }
+
+ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
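
print_pg_status() above is the usual guarded-optional-hook idiom: the callback only exists on hardware with a PG controller and only fires when the enable_pg_cntl_debug_logs debug bit is set, which keeps every call site a cheap one-liner. The shape of the idiom, with hypothetical names:

    /* Sketch: optional debug hook behind a feature flag and NULL checks. */
    struct dbg_ops {
            void (*dump)(void *ctx, const char *func, const char *msg);
    };

    static void maybe_dump(const struct dbg_ops *ops, void *ctx, bool debug_on,
                           const char *func, const char *msg)
    {
            if (debug_on && ops && ops->dump)
                    ops->dump(ctx, func, msg);      /* no-op unless wired up */
    }
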
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+index a3ccf805bd16ae..aefb7c47374158 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+@@ -115,7 +115,6 @@ static const struct hw_sequencer_funcs dcn35_funcs = {
+ .exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+ .update_visual_confirm_color = dcn10_update_visual_confirm_color,
+ .apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
+- .update_dsc_pg = dcn32_update_dsc_pg,
+ .calc_blocks_to_gate = dcn35_calc_blocks_to_gate,
+ .calc_blocks_to_ungate = dcn35_calc_blocks_to_ungate,
+ .hw_block_power_up = dcn35_hw_block_power_up,
+@@ -150,7 +149,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
+ .plane_atomic_disable = dcn35_plane_atomic_disable,
+ //.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
+ //.hubp_pg_control = dcn35_hubp_pg_control,
+- .enable_power_gating_plane = dcn35_enable_power_gating_plane,
+ .dpp_root_clock_control = dcn35_dpp_root_clock_control,
+ .dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
+ .physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
+@@ -165,7 +163,6 @@ static const struct hwseq_private_funcs dcn35_private_funcs = {
+ .calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
+ .resync_fifo_dccg_dio = dcn314_resync_fifo_dccg_dio,
+ .is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
+- .dsc_pg_control = dcn35_dsc_pg_control,
+ .dsc_pg_status = dcn32_dsc_pg_status,
+ .enable_plane = dcn35_enable_plane,
+ .wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+index 58f2be2a326b89..a580a55695c3b0 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+@@ -114,7 +114,6 @@ static const struct hw_sequencer_funcs dcn351_funcs = {
+ .exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+ .update_visual_confirm_color = dcn10_update_visual_confirm_color,
+ .apply_idle_power_optimizations = dcn35_apply_idle_power_optimizations,
+- .update_dsc_pg = dcn32_update_dsc_pg,
+ .calc_blocks_to_gate = dcn351_calc_blocks_to_gate,
+ .calc_blocks_to_ungate = dcn351_calc_blocks_to_ungate,
+ .hw_block_power_up = dcn351_hw_block_power_up,
+@@ -145,7 +144,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
+ .plane_atomic_disable = dcn35_plane_atomic_disable,
+ //.plane_atomic_disable = dcn20_plane_atomic_disable,/*todo*/
+ //.hubp_pg_control = dcn35_hubp_pg_control,
+- .enable_power_gating_plane = dcn35_enable_power_gating_plane,
+ .dpp_root_clock_control = dcn35_dpp_root_clock_control,
+ .dpstream_root_clock_control = dcn35_dpstream_root_clock_control,
+ .physymclk_root_clock_control = dcn35_physymclk_root_clock_control,
+@@ -159,7 +157,6 @@ static const struct hwseq_private_funcs dcn351_private_funcs = {
+ .setup_hpo_hw_control = dcn35_setup_hpo_hw_control,
+ .calculate_dccg_k1_k2_values = dcn32_calculate_dccg_k1_k2_values,
+ .is_dp_dig_pixel_rate_div_policy = dcn35_is_dp_dig_pixel_rate_div_policy,
+- .dsc_pg_control = dcn35_dsc_pg_control,
+ .dsc_pg_status = dcn32_dsc_pg_status,
+ .enable_plane = dcn35_enable_plane,
+ .wait_for_pipe_update_if_needed = dcn10_wait_for_pipe_update_if_needed,
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h b/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
+index 00ea3864dd4df4..bcd0b0dd9c429a 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/pg_cntl.h
+@@ -47,6 +47,7 @@ struct pg_cntl_funcs {
+ void (*optc_pg_control)(struct pg_cntl *pg_cntl, unsigned int optc_inst, bool power_on);
+ void (*dwb_pg_control)(struct pg_cntl *pg_cntl, bool power_on);
+ void (*init_pg_status)(struct pg_cntl *pg_cntl);
++ void (*print_pg_status)(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log);
+ };
+
+ #endif //__DC_PG_CNTL_H__
+diff --git a/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c b/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
+index af21c0a27f8657..72bd43f9bbe288 100644
+--- a/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
++++ b/drivers/gpu/drm/amd/display/dc/pg/dcn35/dcn35_pg_cntl.c
+@@ -79,16 +79,12 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+ uint32_t power_gate = power_on ? 0 : 1;
+ uint32_t pwr_status = power_on ? 0 : 2;
+ uint32_t org_ip_request_cntl = 0;
+- bool block_enabled;
+-
+- /*need to enable dscclk regardless DSC_PG*/
+- if (pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc && power_on)
+- pg_cntl->ctx->dc->res_pool->dccg->funcs->enable_dsc(
+- pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
++ bool block_enabled = false;
++ bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
++ pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
++ pg_cntl->ctx->dc->idle_optimizations_allowed;
+
+- if (pg_cntl->ctx->dc->debug.ignore_pg ||
+- pg_cntl->ctx->dc->debug.disable_dsc_power_gate ||
+- pg_cntl->ctx->dc->idle_optimizations_allowed)
++ if (skip_pg && !power_on)
+ return;
+
+ block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, dsc_inst);
+@@ -111,7 +107,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+
+ REG_WAIT(DOMAIN16_PG_STATUS,
+ DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
++ 1, 10000);
+ break;
+ case 1: /* DSC1 */
+ REG_UPDATE(DOMAIN17_PG_CONFIG,
+@@ -119,7 +115,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+
+ REG_WAIT(DOMAIN17_PG_STATUS,
+ DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
++ 1, 10000);
+ break;
+ case 2: /* DSC2 */
+ REG_UPDATE(DOMAIN18_PG_CONFIG,
+@@ -127,7 +123,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+
+ REG_WAIT(DOMAIN18_PG_STATUS,
+ DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
++ 1, 10000);
+ break;
+ case 3: /* DSC3 */
+ REG_UPDATE(DOMAIN19_PG_CONFIG,
+@@ -135,7 +131,7 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+
+ REG_WAIT(DOMAIN19_PG_STATUS,
+ DOMAIN_PGFSM_PWR_STATUS, pwr_status,
+- 1, 1000);
++ 1, 10000);
+ break;
+ default:
+ BREAK_TO_DEBUGGER();
+@@ -144,12 +140,6 @@ void pg_cntl35_dsc_pg_control(struct pg_cntl *pg_cntl, unsigned int dsc_inst, bo
+
+ if (dsc_inst < MAX_PIPES)
+ pg_cntl->pg_pipe_res_enable[PG_DSC][dsc_inst] = power_on;
+-
+- if (pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc && !power_on) {
+- /*this is to disable dscclk*/
+- pg_cntl->ctx->dc->res_pool->dccg->funcs->disable_dsc(
+- pg_cntl->ctx->dc->res_pool->dccg, dsc_inst);
+- }
+ }
+
+ static bool pg_cntl35_hubp_dpp_pg_status(struct pg_cntl *pg_cntl, unsigned int hubp_dpp_inst)
+@@ -189,11 +179,12 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
+ uint32_t pwr_status = power_on ? 0 : 2;
+ uint32_t org_ip_request_cntl;
+ bool block_enabled;
++ bool skip_pg = pg_cntl->ctx->dc->debug.ignore_pg ||
++ pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
++ pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
++ pg_cntl->ctx->dc->idle_optimizations_allowed;
+
+- if (pg_cntl->ctx->dc->debug.ignore_pg ||
+- pg_cntl->ctx->dc->debug.disable_hubp_power_gate ||
+- pg_cntl->ctx->dc->debug.disable_dpp_power_gate ||
+- pg_cntl->ctx->dc->idle_optimizations_allowed)
++ if (skip_pg && !power_on)
+ return;
+
+ block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, hubp_dpp_inst);
+@@ -213,22 +204,22 @@ void pg_cntl35_hubp_dpp_pg_control(struct pg_cntl *pg_cntl, unsigned int hubp_dp
+ case 0:
+ /* DPP0 & HUBP0 */
+ REG_UPDATE(DOMAIN0_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+- REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++ REG_WAIT(DOMAIN0_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ break;
+ case 1:
+ /* DPP1 & HUBP1 */
+ REG_UPDATE(DOMAIN1_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+- REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++ REG_WAIT(DOMAIN1_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ break;
+ case 2:
+ /* DPP2 & HUBP2 */
+ REG_UPDATE(DOMAIN2_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+- REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++ REG_WAIT(DOMAIN2_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ break;
+ case 3:
+ /* DPP3 & HUBP3 */
+ REG_UPDATE(DOMAIN3_PG_CONFIG, DOMAIN_POWER_GATE, power_gate);
+- REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 1000);
++ REG_WAIT(DOMAIN3_PG_STATUS, DOMAIN_PGFSM_PWR_STATUS, pwr_status, 1, 10000);
+ break;
+ default:
+ BREAK_TO_DEBUGGER();
+@@ -501,6 +492,36 @@ void pg_cntl35_init_pg_status(struct pg_cntl *pg_cntl)
+ pg_cntl->pg_res_enable[PG_DWB] = block_enabled;
+ }
+
++static void pg_cntl35_print_pg_status(struct pg_cntl *pg_cntl, const char *debug_func, const char *debug_log)
++{
++ int i = 0;
++ bool block_enabled = false;
++
++ DC_LOG_DEBUG("%s: %s", debug_func, debug_log);
++
++ DC_LOG_DEBUG("PG_CNTL status:\n");
++
++ block_enabled = pg_cntl35_io_clk_status(pg_cntl);
++ DC_LOG_DEBUG("ONO0=%d (DCCG, DIO, DCIO)\n", block_enabled ? 1 : 0);
++
++ block_enabled = pg_cntl35_mem_status(pg_cntl);
++ DC_LOG_DEBUG("ONO1=%d (DCHUBBUB, DCHVM, DCHUBBUBMEM)\n", block_enabled ? 1 : 0);
++
++ block_enabled = pg_cntl35_plane_otg_status(pg_cntl);
++ DC_LOG_DEBUG("ONO2=%d (MPC, OPP, OPTC, DWB)\n", block_enabled ? 1 : 0);
++
++ block_enabled = pg_cntl35_hpo_pg_status(pg_cntl);
++ DC_LOG_DEBUG("ONO3=%d (HPO)\n", block_enabled ? 1 : 0);
++
++ for (i = 0; i < pg_cntl->ctx->dc->res_pool->pipe_count; i++) {
++ block_enabled = pg_cntl35_hubp_dpp_pg_status(pg_cntl, i);
++ DC_LOG_DEBUG("ONO%d=%d (DCHUBP%d, DPP%d)\n", 4 + i * 2, block_enabled ? 1 : 0, i, i);
++
++ block_enabled = pg_cntl35_dsc_pg_status(pg_cntl, i);
++ DC_LOG_DEBUG("ONO%d=%d (DSC%d)\n", 5 + i * 2, block_enabled ? 1 : 0, i);
++ }
++}
++
+ static const struct pg_cntl_funcs pg_cntl35_funcs = {
+ .init_pg_status = pg_cntl35_init_pg_status,
+ .dsc_pg_control = pg_cntl35_dsc_pg_control,
+@@ -511,7 +532,8 @@ static const struct pg_cntl_funcs pg_cntl35_funcs = {
+ .mpcc_pg_control = pg_cntl35_mpcc_pg_control,
+ .opp_pg_control = pg_cntl35_opp_pg_control,
+ .optc_pg_control = pg_cntl35_optc_pg_control,
+- .dwb_pg_control = pg_cntl35_dwb_pg_control
++ .dwb_pg_control = pg_cntl35_dwb_pg_control,
++ .print_pg_status = pg_cntl35_print_pg_status
+ };
+
+ struct pg_cntl *pg_cntl35_create(
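
Two things change in the pg_cntl35 hunks: the debug overrides (ignore_pg, disable_*_power_gate, idle_optimizations_allowed) now only skip the power-*down* direction, so a power-up request is always honoured, and every REG_WAIT bound grows from 1000 to 10000 polls. Reading REG_WAIT(reg, field, value, delay_us, max_try) as "poll every delay_us microseconds, up to max_try times" (the DC helper behind the macro is generic_reg_wait(), to the best of my knowledge), the PGFSM status wait grows from roughly 1 ms to roughly 10 ms before giving up. The generic shape in plain C:

    /* Poll-until-match-or-timeout, the shape REG_WAIT expands to. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    static bool wait_field(volatile uint32_t *reg, uint32_t mask,
                           uint32_t want, unsigned delay_us, unsigned max_try)
    {
            for (unsigned i = 0; i < max_try; i++) {
                    if ((*reg & mask) == want)
                            return true;
                    usleep(delay_us);       /* stand-in for the MMIO-side delay */
            }
            return false;                   /* caller decides if this is fatal */
    }
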
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index ea78c6c8ca7a63..2d4e9368394641 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -691,6 +691,34 @@ void drm_dp_dpcd_set_powered(struct drm_dp_aux *aux, bool powered)
+ }
+ EXPORT_SYMBOL(drm_dp_dpcd_set_powered);
+
++/**
++ * drm_dp_dpcd_set_probe() - Set whether a probing before DPCD access is done
++ * @aux: DisplayPort AUX channel
++ * @enable: Enable the probing if required
++ */
++void drm_dp_dpcd_set_probe(struct drm_dp_aux *aux, bool enable)
++{
++ WRITE_ONCE(aux->dpcd_probe_disabled, !enable);
++}
++EXPORT_SYMBOL(drm_dp_dpcd_set_probe);
++
++static bool dpcd_access_needs_probe(struct drm_dp_aux *aux)
++{
++ /*
++ * HP ZR24w corrupts the first DPCD access after entering power save
++ * mode. Eg. on a read, the entire buffer will be filled with the same
++ * byte. Do a throw away read to avoid corrupting anything we care
++ * about. Afterwards things will work correctly until the monitor
++ * gets woken up and subsequently re-enters power save mode.
++ *
++ * The user pressing any button on the monitor is enough to wake it
++ * up, so there is no particularly good place to do the workaround.
++ * We just have to do it before any DPCD access and hope that the
++ * monitor doesn't power down exactly after the throw away read.
++ */
++ return !aux->is_remote && !READ_ONCE(aux->dpcd_probe_disabled);
++}
++
+ /**
+ * drm_dp_dpcd_read() - read a series of bytes from the DPCD
+ * @aux: DisplayPort AUX channel (SST or MST)
+@@ -712,19 +740,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ {
+ int ret;
+
+- /*
+- * HP ZR24w corrupts the first DPCD access after entering power save
+- * mode. Eg. on a read, the entire buffer will be filled with the same
+- * byte. Do a throw away read to avoid corrupting anything we care
+- * about. Afterwards things will work correctly until the monitor
+- * gets woken up and subsequently re-enters power save mode.
+- *
+- * The user pressing any button on the monitor is enough to wake it
+- * up, so there is no particularly good place to do the workaround.
+- * We just have to do it before any DPCD access and hope that the
+- * monitor doesn't power down exactly after the throw away read.
+- */
+- if (!aux->is_remote) {
++ if (dpcd_access_needs_probe(aux)) {
+ ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET);
+ if (ret < 0)
+ return ret;
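
With this hunk the HP ZR24w throw-away-read workaround becomes opt-out: drm_dp_dpcd_probe() still runs by default before DPCD reads on non-remote (non-MST) AUX channels, but a driver can now switch it off per channel once it knows the sink is unaffected (the matching EDID quirk is added in the drm_edid.c hunks below). A sketch of the opt-out, where only drm_dp_dpcd_set_probe() is real and the surrounding driver function is hypothetical:

    /* Sketch: disable the defensive DPCD probe for a known-good sink. */
    static void my_driver_tune_aux(struct drm_dp_aux *aux, bool sink_needs_probe)
    {
            /* false = skip the throw-away read before each DPCD access */
            drm_dp_dpcd_set_probe(aux, sink_needs_probe);
    }
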
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 74e77742b2bd4f..9c8822b337e2e4 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -66,34 +66,36 @@ static int oui(u8 first, u8 second, u8 third)
+ * on as many displays as possible).
+ */
+
+-/* First detailed mode wrong, use largest 60Hz mode */
+-#define EDID_QUIRK_PREFER_LARGE_60 (1 << 0)
+-/* Reported 135MHz pixel clock is too high, needs adjustment */
+-#define EDID_QUIRK_135_CLOCK_TOO_HIGH (1 << 1)
+-/* Prefer the largest mode at 75 Hz */
+-#define EDID_QUIRK_PREFER_LARGE_75 (1 << 2)
+-/* Detail timing is in cm not mm */
+-#define EDID_QUIRK_DETAILED_IN_CM (1 << 3)
+-/* Detailed timing descriptors have bogus size values, so just take the
+- * maximum size and use that.
+- */
+-#define EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE (1 << 4)
+-/* use +hsync +vsync for detailed mode */
+-#define EDID_QUIRK_DETAILED_SYNC_PP (1 << 6)
+-/* Force reduced-blanking timings for detailed modes */
+-#define EDID_QUIRK_FORCE_REDUCED_BLANKING (1 << 7)
+-/* Force 8bpc */
+-#define EDID_QUIRK_FORCE_8BPC (1 << 8)
+-/* Force 12bpc */
+-#define EDID_QUIRK_FORCE_12BPC (1 << 9)
+-/* Force 6bpc */
+-#define EDID_QUIRK_FORCE_6BPC (1 << 10)
+-/* Force 10bpc */
+-#define EDID_QUIRK_FORCE_10BPC (1 << 11)
+-/* Non desktop display (i.e. HMD) */
+-#define EDID_QUIRK_NON_DESKTOP (1 << 12)
+-/* Cap the DSC target bitrate to 15bpp */
+-#define EDID_QUIRK_CAP_DSC_15BPP (1 << 13)
++enum drm_edid_internal_quirk {
++ /* First detailed mode wrong, use largest 60Hz mode */
++ EDID_QUIRK_PREFER_LARGE_60 = DRM_EDID_QUIRK_NUM,
++ /* Reported 135MHz pixel clock is too high, needs adjustment */
++ EDID_QUIRK_135_CLOCK_TOO_HIGH,
++ /* Prefer the largest mode at 75 Hz */
++ EDID_QUIRK_PREFER_LARGE_75,
++ /* Detail timing is in cm not mm */
++ EDID_QUIRK_DETAILED_IN_CM,
++ /* Detailed timing descriptors have bogus size values, so just take the
++ * maximum size and use that.
++ */
++ EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE,
++ /* use +hsync +vsync for detailed mode */
++ EDID_QUIRK_DETAILED_SYNC_PP,
++ /* Force reduced-blanking timings for detailed modes */
++ EDID_QUIRK_FORCE_REDUCED_BLANKING,
++ /* Force 8bpc */
++ EDID_QUIRK_FORCE_8BPC,
++ /* Force 12bpc */
++ EDID_QUIRK_FORCE_12BPC,
++ /* Force 6bpc */
++ EDID_QUIRK_FORCE_6BPC,
++ /* Force 10bpc */
++ EDID_QUIRK_FORCE_10BPC,
++ /* Non desktop display (i.e. HMD) */
++ EDID_QUIRK_NON_DESKTOP,
++ /* Cap the DSC target bitrate to 15bpp */
++ EDID_QUIRK_CAP_DSC_15BPP,
++};
+
+ #define MICROSOFT_IEEE_OUI 0xca125c
+
+@@ -128,124 +130,132 @@ static const struct edid_quirk {
+ u32 quirks;
+ } edid_quirk_list[] = {
+ /* Acer AL1706 */
+- EDID_QUIRK('A', 'C', 'R', 44358, EDID_QUIRK_PREFER_LARGE_60),
++ EDID_QUIRK('A', 'C', 'R', 44358, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+ /* Acer F51 */
+- EDID_QUIRK('A', 'P', 'I', 0x7602, EDID_QUIRK_PREFER_LARGE_60),
++ EDID_QUIRK('A', 'P', 'I', 0x7602, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+
+ /* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
+- EDID_QUIRK('A', 'E', 'O', 0, EDID_QUIRK_FORCE_6BPC),
++ EDID_QUIRK('A', 'E', 'O', 0, BIT(EDID_QUIRK_FORCE_6BPC)),
+
+ /* BenQ GW2765 */
+- EDID_QUIRK('B', 'N', 'Q', 0x78d6, EDID_QUIRK_FORCE_8BPC),
++ EDID_QUIRK('B', 'N', 'Q', 0x78d6, BIT(EDID_QUIRK_FORCE_8BPC)),
+
+ /* BOE model on HP Pavilion 15-n233sl reports 8 bpc, but is a 6 bpc panel */
+- EDID_QUIRK('B', 'O', 'E', 0x78b, EDID_QUIRK_FORCE_6BPC),
++ EDID_QUIRK('B', 'O', 'E', 0x78b, BIT(EDID_QUIRK_FORCE_6BPC)),
+
+ /* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
+- EDID_QUIRK('C', 'P', 'T', 0x17df, EDID_QUIRK_FORCE_6BPC),
++ EDID_QUIRK('C', 'P', 'T', 0x17df, BIT(EDID_QUIRK_FORCE_6BPC)),
+
+ /* SDC panel of Lenovo B50-80 reports 8 bpc, but is a 6 bpc panel */
+- EDID_QUIRK('S', 'D', 'C', 0x3652, EDID_QUIRK_FORCE_6BPC),
++ EDID_QUIRK('S', 'D', 'C', 0x3652, BIT(EDID_QUIRK_FORCE_6BPC)),
+
+ /* BOE model 0x0771 reports 8 bpc, but is a 6 bpc panel */
+- EDID_QUIRK('B', 'O', 'E', 0x0771, EDID_QUIRK_FORCE_6BPC),
++ EDID_QUIRK('B', 'O', 'E', 0x0771, BIT(EDID_QUIRK_FORCE_6BPC)),
+
+ /* Belinea 10 15 55 */
+- EDID_QUIRK('M', 'A', 'X', 1516, EDID_QUIRK_PREFER_LARGE_60),
+- EDID_QUIRK('M', 'A', 'X', 0x77e, EDID_QUIRK_PREFER_LARGE_60),
++ EDID_QUIRK('M', 'A', 'X', 1516, BIT(EDID_QUIRK_PREFER_LARGE_60)),
++ EDID_QUIRK('M', 'A', 'X', 0x77e, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+
+ /* Envision Peripherals, Inc. EN-7100e */
+- EDID_QUIRK('E', 'P', 'I', 59264, EDID_QUIRK_135_CLOCK_TOO_HIGH),
++ EDID_QUIRK('E', 'P', 'I', 59264, BIT(EDID_QUIRK_135_CLOCK_TOO_HIGH)),
+ /* Envision EN2028 */
+- EDID_QUIRK('E', 'P', 'I', 8232, EDID_QUIRK_PREFER_LARGE_60),
++ EDID_QUIRK('E', 'P', 'I', 8232, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+
+ /* Funai Electronics PM36B */
+- EDID_QUIRK('F', 'C', 'M', 13600, EDID_QUIRK_PREFER_LARGE_75 |
+- EDID_QUIRK_DETAILED_IN_CM),
++ EDID_QUIRK('F', 'C', 'M', 13600, BIT(EDID_QUIRK_PREFER_LARGE_75) |
++ BIT(EDID_QUIRK_DETAILED_IN_CM)),
+
+ /* LG 27GP950 */
+- EDID_QUIRK('G', 'S', 'M', 0x5bbf, EDID_QUIRK_CAP_DSC_15BPP),
++ EDID_QUIRK('G', 'S', 'M', 0x5bbf, BIT(EDID_QUIRK_CAP_DSC_15BPP)),
+
+ /* LG 27GN950 */
+- EDID_QUIRK('G', 'S', 'M', 0x5b9a, EDID_QUIRK_CAP_DSC_15BPP),
++ EDID_QUIRK('G', 'S', 'M', 0x5b9a, BIT(EDID_QUIRK_CAP_DSC_15BPP)),
+
+ /* LGD panel of HP zBook 17 G2, eDP 10 bpc, but reports unknown bpc */
+- EDID_QUIRK('L', 'G', 'D', 764, EDID_QUIRK_FORCE_10BPC),
++ EDID_QUIRK('L', 'G', 'D', 764, BIT(EDID_QUIRK_FORCE_10BPC)),
+
+ /* LG Philips LCD LP154W01-A5 */
+- EDID_QUIRK('L', 'P', 'L', 0, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE),
+- EDID_QUIRK('L', 'P', 'L', 0x2a00, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE),
++ EDID_QUIRK('L', 'P', 'L', 0, BIT(EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)),
++ EDID_QUIRK('L', 'P', 'L', 0x2a00, BIT(EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)),
+
+ /* Samsung SyncMaster 205BW. Note: irony */
+- EDID_QUIRK('S', 'A', 'M', 541, EDID_QUIRK_DETAILED_SYNC_PP),
++ EDID_QUIRK('S', 'A', 'M', 541, BIT(EDID_QUIRK_DETAILED_SYNC_PP)),
+ /* Samsung SyncMaster 22[5-6]BW */
+- EDID_QUIRK('S', 'A', 'M', 596, EDID_QUIRK_PREFER_LARGE_60),
+- EDID_QUIRK('S', 'A', 'M', 638, EDID_QUIRK_PREFER_LARGE_60),
++ EDID_QUIRK('S', 'A', 'M', 596, BIT(EDID_QUIRK_PREFER_LARGE_60)),
++ EDID_QUIRK('S', 'A', 'M', 638, BIT(EDID_QUIRK_PREFER_LARGE_60)),
+
+ /* Sony PVM-2541A does up to 12 bpc, but only reports max 8 bpc */
+- EDID_QUIRK('S', 'N', 'Y', 0x2541, EDID_QUIRK_FORCE_12BPC),
++ EDID_QUIRK('S', 'N', 'Y', 0x2541, BIT(EDID_QUIRK_FORCE_12BPC)),
+
+ /* ViewSonic VA2026w */
+- EDID_QUIRK('V', 'S', 'C', 5020, EDID_QUIRK_FORCE_REDUCED_BLANKING),
++ EDID_QUIRK('V', 'S', 'C', 5020, BIT(EDID_QUIRK_FORCE_REDUCED_BLANKING)),
+
+ /* Medion MD 30217 PG */
+- EDID_QUIRK('M', 'E', 'D', 0x7b8, EDID_QUIRK_PREFER_LARGE_75),
++ EDID_QUIRK('M', 'E', 'D', 0x7b8, BIT(EDID_QUIRK_PREFER_LARGE_75)),
+
+ /* Lenovo G50 */
+- EDID_QUIRK('S', 'D', 'C', 18514, EDID_QUIRK_FORCE_6BPC),
++ EDID_QUIRK('S', 'D', 'C', 18514, BIT(EDID_QUIRK_FORCE_6BPC)),
+
+ /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
+- EDID_QUIRK('S', 'E', 'C', 0xd033, EDID_QUIRK_FORCE_8BPC),
++ EDID_QUIRK('S', 'E', 'C', 0xd033, BIT(EDID_QUIRK_FORCE_8BPC)),
+
+ /* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
+- EDID_QUIRK('E', 'T', 'R', 13896, EDID_QUIRK_FORCE_8BPC),
++ EDID_QUIRK('E', 'T', 'R', 13896, BIT(EDID_QUIRK_FORCE_8BPC)),
+
+ /* Valve Index Headset */
+- EDID_QUIRK('V', 'L', 'V', 0x91a8, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b0, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b1, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b2, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b3, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b4, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b5, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b6, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b7, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b8, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91b9, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91ba, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91bb, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91bc, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91bd, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91be, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('V', 'L', 'V', 0x91bf, EDID_QUIRK_NON_DESKTOP),
++ EDID_QUIRK('V', 'L', 'V', 0x91a8, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b0, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b1, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b2, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b3, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b4, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b5, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b6, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b7, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b8, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91b9, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91ba, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91bb, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91bc, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91bd, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91be, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('V', 'L', 'V', 0x91bf, BIT(EDID_QUIRK_NON_DESKTOP)),
+
+ /* HTC Vive and Vive Pro VR Headsets */
+- EDID_QUIRK('H', 'V', 'R', 0xaa01, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('H', 'V', 'R', 0xaa02, EDID_QUIRK_NON_DESKTOP),
++ EDID_QUIRK('H', 'V', 'R', 0xaa01, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('H', 'V', 'R', 0xaa02, BIT(EDID_QUIRK_NON_DESKTOP)),
+
+ /* Oculus Rift DK1, DK2, CV1 and Rift S VR Headsets */
+- EDID_QUIRK('O', 'V', 'R', 0x0001, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('O', 'V', 'R', 0x0003, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('O', 'V', 'R', 0x0004, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('O', 'V', 'R', 0x0012, EDID_QUIRK_NON_DESKTOP),
++ EDID_QUIRK('O', 'V', 'R', 0x0001, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('O', 'V', 'R', 0x0003, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('O', 'V', 'R', 0x0004, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('O', 'V', 'R', 0x0012, BIT(EDID_QUIRK_NON_DESKTOP)),
+
+ /* Windows Mixed Reality Headsets */
+- EDID_QUIRK('A', 'C', 'R', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('L', 'E', 'N', 0x0408, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('F', 'U', 'J', 0x1970, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('D', 'E', 'L', 0x7fce, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('S', 'E', 'C', 0x144a, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('A', 'U', 'S', 0xc102, EDID_QUIRK_NON_DESKTOP),
++ EDID_QUIRK('A', 'C', 'R', 0x7fce, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('L', 'E', 'N', 0x0408, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('F', 'U', 'J', 0x1970, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('D', 'E', 'L', 0x7fce, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('S', 'E', 'C', 0x144a, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('A', 'U', 'S', 0xc102, BIT(EDID_QUIRK_NON_DESKTOP)),
+
+ /* Sony PlayStation VR Headset */
+- EDID_QUIRK('S', 'N', 'Y', 0x0704, EDID_QUIRK_NON_DESKTOP),
++ EDID_QUIRK('S', 'N', 'Y', 0x0704, BIT(EDID_QUIRK_NON_DESKTOP)),
+
+ /* Sensics VR Headsets */
+- EDID_QUIRK('S', 'E', 'N', 0x1019, EDID_QUIRK_NON_DESKTOP),
++ EDID_QUIRK('S', 'E', 'N', 0x1019, BIT(EDID_QUIRK_NON_DESKTOP)),
+
+ /* OSVR HDK and HDK2 VR Headsets */
+- EDID_QUIRK('S', 'V', 'R', 0x1019, EDID_QUIRK_NON_DESKTOP),
+- EDID_QUIRK('A', 'U', 'O', 0x1111, EDID_QUIRK_NON_DESKTOP),
++ EDID_QUIRK('S', 'V', 'R', 0x1019, BIT(EDID_QUIRK_NON_DESKTOP)),
++ EDID_QUIRK('A', 'U', 'O', 0x1111, BIT(EDID_QUIRK_NON_DESKTOP)),
++
++ /*
++ * @drm_edid_internal_quirk entries end here, following with the
++ * @drm_edid_quirk entries.
++ */
++
++ /* HP ZR24w DP AUX DPCD access requires probing to prevent corruption. */
++ EDID_QUIRK('H', 'W', 'P', 0x2869, BIT(DRM_EDID_QUIRK_DP_DPCD_PROBE)),
+ };
+
+ /*
+@@ -2951,6 +2961,18 @@ static u32 edid_get_quirks(const struct drm_edid *drm_edid)
+ return 0;
+ }
+
++static bool drm_edid_has_internal_quirk(struct drm_connector *connector,
++ enum drm_edid_internal_quirk quirk)
++{
++ return connector->display_info.quirks & BIT(quirk);
++}
++
++bool drm_edid_has_quirk(struct drm_connector *connector, enum drm_edid_quirk quirk)
++{
++ return connector->display_info.quirks & BIT(quirk);
++}
++EXPORT_SYMBOL(drm_edid_has_quirk);
++
+ #define MODE_SIZE(m) ((m)->hdisplay * (m)->vdisplay)
+ #define MODE_REFRESH_DIFF(c,t) (abs((c) - (t)))
+
+@@ -2960,7 +2982,6 @@ static u32 edid_get_quirks(const struct drm_edid *drm_edid)
+ */
+ static void edid_fixup_preferred(struct drm_connector *connector)
+ {
+- const struct drm_display_info *info = &connector->display_info;
+ struct drm_display_mode *t, *cur_mode, *preferred_mode;
+ int target_refresh = 0;
+ int cur_vrefresh, preferred_vrefresh;
+@@ -2968,9 +2989,9 @@ static void edid_fixup_preferred(struct drm_connector *connector)
+ if (list_empty(&connector->probed_modes))
+ return;
+
+- if (info->quirks & EDID_QUIRK_PREFER_LARGE_60)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_60))
+ target_refresh = 60;
+- if (info->quirks & EDID_QUIRK_PREFER_LARGE_75)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_75))
+ target_refresh = 75;
+
+ preferred_mode = list_first_entry(&connector->probed_modes,
+@@ -3474,7 +3495,6 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ const struct drm_edid *drm_edid,
+ const struct detailed_timing *timing)
+ {
+- const struct drm_display_info *info = &connector->display_info;
+ struct drm_device *dev = connector->dev;
+ struct drm_display_mode *mode;
+ const struct detailed_pixel_timing *pt = &timing->data.pixel_data;
+@@ -3508,7 +3528,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ return NULL;
+ }
+
+- if (info->quirks & EDID_QUIRK_FORCE_REDUCED_BLANKING) {
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_REDUCED_BLANKING)) {
+ mode = drm_cvt_mode(dev, hactive, vactive, 60, true, false, false);
+ if (!mode)
+ return NULL;
+@@ -3520,7 +3540,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ if (!mode)
+ return NULL;
+
+- if (info->quirks & EDID_QUIRK_135_CLOCK_TOO_HIGH)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_135_CLOCK_TOO_HIGH))
+ mode->clock = 1088 * 10;
+ else
+ mode->clock = le16_to_cpu(timing->pixel_clock) * 10;
+@@ -3551,7 +3571,7 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+
+ drm_mode_do_interlace_quirk(mode, pt);
+
+- if (info->quirks & EDID_QUIRK_DETAILED_SYNC_PP) {
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_SYNC_PP)) {
+ mode->flags |= DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC;
+ } else {
+ mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ?
+@@ -3564,12 +3584,12 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_connector *connecto
+ mode->width_mm = pt->width_mm_lo | (pt->width_height_mm_hi & 0xf0) << 4;
+ mode->height_mm = pt->height_mm_lo | (pt->width_height_mm_hi & 0xf) << 8;
+
+- if (info->quirks & EDID_QUIRK_DETAILED_IN_CM) {
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_IN_CM)) {
+ mode->width_mm *= 10;
+ mode->height_mm *= 10;
+ }
+
+- if (info->quirks & EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE) {
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_DETAILED_USE_MAXIMUM_SIZE)) {
+ mode->width_mm = drm_edid->edid->width_cm * 10;
+ mode->height_mm = drm_edid->edid->height_cm * 10;
+ }
+@@ -6734,26 +6754,26 @@ static void update_display_info(struct drm_connector *connector,
+ drm_update_mso(connector, drm_edid);
+
+ out:
+- if (info->quirks & EDID_QUIRK_NON_DESKTOP) {
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_NON_DESKTOP)) {
+ drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] Non-desktop display%s\n",
+ connector->base.id, connector->name,
+ info->non_desktop ? " (redundant quirk)" : "");
+ info->non_desktop = true;
+ }
+
+- if (info->quirks & EDID_QUIRK_CAP_DSC_15BPP)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_CAP_DSC_15BPP))
+ info->max_dsc_bpp = 15;
+
+- if (info->quirks & EDID_QUIRK_FORCE_6BPC)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_6BPC))
+ info->bpc = 6;
+
+- if (info->quirks & EDID_QUIRK_FORCE_8BPC)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_8BPC))
+ info->bpc = 8;
+
+- if (info->quirks & EDID_QUIRK_FORCE_10BPC)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_10BPC))
+ info->bpc = 10;
+
+- if (info->quirks & EDID_QUIRK_FORCE_12BPC)
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_FORCE_12BPC))
+ info->bpc = 12;
+
+ /* Depends on info->cea_rev set by drm_parse_cea_ext() above */
+@@ -6918,7 +6938,6 @@ static int add_displayid_detailed_modes(struct drm_connector *connector,
+ static int _drm_edid_connector_add_modes(struct drm_connector *connector,
+ const struct drm_edid *drm_edid)
+ {
+- const struct drm_display_info *info = &connector->display_info;
+ int num_modes = 0;
+
+ if (!drm_edid)
+@@ -6948,7 +6967,8 @@ static int _drm_edid_connector_add_modes(struct drm_connector *connector,
+ if (drm_edid->edid->features & DRM_EDID_FEATURE_CONTINUOUS_FREQ)
+ num_modes += add_inferred_modes(connector, drm_edid);
+
+- if (info->quirks & (EDID_QUIRK_PREFER_LARGE_60 | EDID_QUIRK_PREFER_LARGE_75))
++ if (drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_60) ||
++ drm_edid_has_internal_quirk(connector, EDID_QUIRK_PREFER_LARGE_75))
+ edid_fixup_preferred(connector);
+
+ return num_modes;
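
The quirk rework turns the old bit-shift #defines into an enum whose internal entries start at DRM_EDID_QUIRK_NUM, so public quirks (such as DRM_EDID_QUIRK_DP_DPCD_PROBE) and internal ones share the same u32 display_info.quirks word, with every table entry now storing BIT(quirk) rather than a raw mask; the combined enum therefore has to stay under 32 entries. A plausible consumer tying this to the DP helper change above (both drm_edid_has_quirk() and drm_dp_dpcd_set_probe() exist in this patch, the wrapper itself is hypothetical):

    /* Sketch: let the per-sink EDID quirk drive the DPCD probe behaviour. */
    static void my_driver_apply_edid_quirks(struct drm_connector *connector,
                                            struct drm_dp_aux *aux)
    {
            bool probe = drm_edid_has_quirk(connector,
                                            DRM_EDID_QUIRK_DP_DPCD_PROBE);

            /* only quirked sinks (the HP ZR24w) keep the throw-away read */
            drm_dp_dpcd_set_probe(aux, probe);
    }
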
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index 16356523816fb8..068ed911e12458 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -1169,7 +1169,7 @@ static void icl_mbus_init(struct intel_display *display)
+ if (DISPLAY_VER(display) == 12)
+ abox_regs |= BIT(0);
+
+- for_each_set_bit(i, &abox_regs, sizeof(abox_regs))
++ for_each_set_bit(i, &abox_regs, BITS_PER_TYPE(abox_regs))
+ intel_de_rmw(display, MBUS_ABOX_CTL(i), mask, val);
+ }
+
+@@ -1630,11 +1630,11 @@ static void tgl_bw_buddy_init(struct intel_display *display)
+ if (table[config].page_mask == 0) {
+ drm_dbg_kms(display->drm,
+ "Unknown memory configuration; disabling address buddy logic.\n");
+- for_each_set_bit(i, &abox_mask, sizeof(abox_mask))
++ for_each_set_bit(i, &abox_mask, BITS_PER_TYPE(abox_mask))
+ intel_de_write(display, BW_BUDDY_CTL(i),
+ BW_BUDDY_DISABLE);
+ } else {
+- for_each_set_bit(i, &abox_mask, sizeof(abox_mask)) {
++ for_each_set_bit(i, &abox_mask, BITS_PER_TYPE(abox_mask)) {
+ intel_de_write(display, BW_BUDDY_PAGE_MASK(i),
+ table[config].page_mask);
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 34131ae2c207df..3b02ed0a16dab1 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -388,11 +388,11 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+
+ of_id = of_match_node(mtk_drm_of_ids, node);
+ if (!of_id)
+- goto next_put_node;
++ continue;
+
+ pdev = of_find_device_by_node(node);
+ if (!pdev)
+- goto next_put_node;
++ continue;
+
+ drm_dev = device_find_child(&pdev->dev, NULL, mtk_drm_match);
+ if (!drm_dev)
+@@ -418,11 +418,10 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ next_put_device_pdev_dev:
+ put_device(&pdev->dev);
+
+-next_put_node:
+- of_node_put(node);
+-
+- if (cnt == MAX_CRTC)
++ if (cnt == MAX_CRTC) {
++ of_node_put(node);
+ break;
++ }
+ }
+
+ if (drm_priv->data->mmsys_dev_num == cnt) {
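
The mediatek fix is about iterator-owned references. Assuming the surrounding loop is a for_each_child_of_node()-style iterator (its header is outside this hunk), the iterator drops the reference on `node` itself when it advances, so the old unconditional of_node_put() on the skip paths dropped each reference twice; only an early break escapes the iterator and needs a manual put. The reference rule in sketch form (matches() and enough() are placeholders):

    /* Sketch of the refcount rule for for_each_child_of_node(). */
    for_each_child_of_node(parent, child) {
            if (!matches(child))
                    continue;               /* loop macro puts 'child' for us */

            if (enough(++cnt)) {
                    of_node_put(child);     /* breaking out: put it ourselves */
                    break;
            }
    }
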
+diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
+index 6200cad22563a3..0f4ab9e5ef95cb 100644
+--- a/drivers/gpu/drm/panthor/panthor_drv.c
++++ b/drivers/gpu/drm/panthor/panthor_drv.c
+@@ -1093,7 +1093,7 @@ static int panthor_ioctl_group_create(struct drm_device *ddev, void *data,
+ struct drm_panthor_queue_create *queue_args;
+ int ret;
+
+- if (!args->queues.count)
++ if (!args->queues.count || args->queues.count > MAX_CS_PER_CSG)
+ return -EINVAL;
+
+ ret = PANTHOR_UOBJ_GET_ARRAY(queue_args, &args->queues);
+diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
+index 378dcd0fb41493..a34d1e2597b79f 100644
+--- a/drivers/gpu/drm/xe/tests/xe_bo.c
++++ b/drivers/gpu/drm/xe/tests/xe_bo.c
+@@ -236,7 +236,7 @@ static int evict_test_run_tile(struct xe_device *xe, struct xe_tile *tile, struc
+ }
+
+ xe_bo_lock(external, false);
+- err = xe_bo_pin_external(external);
++ err = xe_bo_pin_external(external, false);
+ xe_bo_unlock(external);
+ if (err) {
+ KUNIT_FAIL(test, "external bo pin err=%pe\n",
+diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+index c53f67ce4b0aa2..121f17c112ec6a 100644
+--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+@@ -89,15 +89,7 @@ static void check_residency(struct kunit *test, struct xe_bo *exported,
+ return;
+ }
+
+- /*
+- * If on different devices, the exporter is kept in system if
+- * possible, saving a migration step as the transfer is just
+- * likely as fast from system memory.
+- */
+- if (params->mem_mask & XE_BO_FLAG_SYSTEM)
+- KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, XE_PL_TT));
+- else
+- KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
++ KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
+
+ if (params->force_different_devices)
+ KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(imported, XE_PL_TT));
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 50326e756f8975..5390f535394695 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -184,6 +184,8 @@ static void try_add_system(struct xe_device *xe, struct xe_bo *bo,
+
+ bo->placements[*c] = (struct ttm_place) {
+ .mem_type = XE_PL_TT,
++ .flags = (bo_flags & XE_BO_FLAG_VRAM_MASK) ?
++ TTM_PL_FLAG_FALLBACK : 0,
+ };
+ *c += 1;
+ }
+@@ -2266,6 +2268,7 @@ uint64_t vram_region_gpu_offset(struct ttm_resource *res)
+ /**
+ * xe_bo_pin_external - pin an external BO
+ * @bo: buffer object to be pinned
++ * @in_place: Pin in current placement, don't attempt to migrate.
+ *
+ * Pin an external (not tied to a VM, can be exported via dma-buf / prime FD)
+ * BO. Unique call compared to xe_bo_pin as this function has it own set of
+@@ -2273,7 +2276,7 @@ uint64_t vram_region_gpu_offset(struct ttm_resource *res)
+ *
+ * Returns 0 for success, negative error code otherwise.
+ */
+-int xe_bo_pin_external(struct xe_bo *bo)
++int xe_bo_pin_external(struct xe_bo *bo, bool in_place)
+ {
+ struct xe_device *xe = xe_bo_device(bo);
+ int err;
+@@ -2282,9 +2285,11 @@ int xe_bo_pin_external(struct xe_bo *bo)
+ xe_assert(xe, xe_bo_is_user(bo));
+
+ if (!xe_bo_is_pinned(bo)) {
+- err = xe_bo_validate(bo, NULL, false);
+- if (err)
+- return err;
++ if (!in_place) {
++ err = xe_bo_validate(bo, NULL, false);
++ if (err)
++ return err;
++ }
+
+ spin_lock(&xe->pinned.lock);
+ list_add_tail(&bo->pinned_link, &xe->pinned.late.external);
+@@ -2437,6 +2442,9 @@ int xe_bo_validate(struct xe_bo *bo, struct xe_vm *vm, bool allow_res_evict)
+ };
+ int ret;
+
++ if (xe_bo_is_pinned(bo))
++ return 0;
++
+ if (vm) {
+ lockdep_assert_held(&vm->lock);
+ xe_vm_assert_held(vm);
+diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
+index 02ada1fb8a2359..bf0432c360bbba 100644
+--- a/drivers/gpu/drm/xe/xe_bo.h
++++ b/drivers/gpu/drm/xe/xe_bo.h
+@@ -201,7 +201,7 @@ static inline void xe_bo_unlock_vm_held(struct xe_bo *bo)
+ }
+ }
+
+-int xe_bo_pin_external(struct xe_bo *bo);
++int xe_bo_pin_external(struct xe_bo *bo, bool in_place);
+ int xe_bo_pin(struct xe_bo *bo);
+ void xe_bo_unpin_external(struct xe_bo *bo);
+ void xe_bo_unpin(struct xe_bo *bo);
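
The new in_place flag separates the two pin flavours: test code keeps the validate-then-pin path (in_place=false, which may first migrate the BO into a preferred placement), while the dma-buf pin path changed below passes true so an exported BO is pinned exactly where it sits rather than being moved while an importer may hold mappings. Together with the TTM_PL_FLAG_FALLBACK system placement and xe_bo_validate() returning early for already-pinned BOs, a pinned export can no longer bounce between placements behind the importer's back (my reading of these hunks). In sketch form:

    /* Sketch: intent of the two kinds of caller seen in this patch. */
    static int pin_for_dma_buf(struct xe_bo *bo)
    {
            return xe_bo_pin_external(bo, true);    /* freeze in place */
    }

    static int pin_after_validate(struct xe_bo *bo)
    {
            return xe_bo_pin_external(bo, false);   /* may migrate first */
    }
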
+diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
+index 6383a1c0d47847..1db2aba4738c23 100644
+--- a/drivers/gpu/drm/xe/xe_device_types.h
++++ b/drivers/gpu/drm/xe/xe_device_types.h
+@@ -529,6 +529,12 @@ struct xe_device {
+
+ /** @pm_notifier: Our PM notifier to perform actions in response to various PM events. */
+ struct notifier_block pm_notifier;
++ /** @pm_block: Completion to block validating tasks on suspend / hibernate prepare */
++ struct completion pm_block;
++ /** @rebind_resume_list: List of wq items to kick on resume. */
++ struct list_head rebind_resume_list;
++ /** @rebind_resume_lock: Lock to protect the rebind_resume_list */
++ struct mutex rebind_resume_lock;
+
+ /** @pmt: Support the PMT driver callback interface */
+ struct {
+diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
+index 346f857f38374f..af64baf872ef7b 100644
+--- a/drivers/gpu/drm/xe/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/xe_dma_buf.c
+@@ -72,7 +72,7 @@ static int xe_dma_buf_pin(struct dma_buf_attachment *attach)
+ return ret;
+ }
+
+- ret = xe_bo_pin_external(bo);
++ ret = xe_bo_pin_external(bo, true);
+ xe_assert(xe, !ret);
+
+ return 0;
+diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
+index 44364c042ad72d..374c831e691b2b 100644
+--- a/drivers/gpu/drm/xe/xe_exec.c
++++ b/drivers/gpu/drm/xe/xe_exec.c
+@@ -237,6 +237,15 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ goto err_unlock_list;
+ }
+
++ /*
++ * It's OK to block interruptible here with the vm lock held, since
++ * on task freezing during suspend / hibernate, the call will
++ * return -ERESTARTSYS and the IOCTL will be rerun.
++ */
++ err = wait_for_completion_interruptible(&xe->pm_block);
++ if (err)
++ goto err_unlock_list;
++
+ vm_exec.vm = &vm->gpuvm;
+ vm_exec.flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
+ if (xe_vm_in_lr_mode(vm)) {
+diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
+index ad263de44111d4..375a197a86089b 100644
+--- a/drivers/gpu/drm/xe/xe_pm.c
++++ b/drivers/gpu/drm/xe/xe_pm.c
+@@ -23,6 +23,7 @@
+ #include "xe_pcode.h"
+ #include "xe_pxp.h"
+ #include "xe_trace.h"
++#include "xe_vm.h"
+ #include "xe_wa.h"
+
+ /**
+@@ -285,6 +286,19 @@ static u32 vram_threshold_value(struct xe_device *xe)
+ return DEFAULT_VRAM_THRESHOLD;
+ }
+
++static void xe_pm_wake_rebind_workers(struct xe_device *xe)
++{
++ struct xe_vm *vm, *next;
++
++ mutex_lock(&xe->rebind_resume_lock);
++ list_for_each_entry_safe(vm, next, &xe->rebind_resume_list,
++ preempt.pm_activate_link) {
++ list_del_init(&vm->preempt.pm_activate_link);
++ xe_vm_resume_rebind_worker(vm);
++ }
++ mutex_unlock(&xe->rebind_resume_lock);
++}
++
+ static int xe_pm_notifier_callback(struct notifier_block *nb,
+ unsigned long action, void *data)
+ {
+@@ -294,30 +308,30 @@ static int xe_pm_notifier_callback(struct notifier_block *nb,
+ switch (action) {
+ case PM_HIBERNATION_PREPARE:
+ case PM_SUSPEND_PREPARE:
++ reinit_completion(&xe->pm_block);
+ xe_pm_runtime_get(xe);
+ err = xe_bo_evict_all_user(xe);
+- if (err) {
++ if (err)
+ drm_dbg(&xe->drm, "Notifier evict user failed (%d)\n", err);
+- xe_pm_runtime_put(xe);
+- break;
+- }
+
+ err = xe_bo_notifier_prepare_all_pinned(xe);
+- if (err) {
++ if (err)
+ drm_dbg(&xe->drm, "Notifier prepare pin failed (%d)\n", err);
+- xe_pm_runtime_put(xe);
+- }
++ /*
++ * Keep the runtime pm reference until post hibernation / post suspend to
++ * avoid a runtime suspend interfering with evicted objects or backup
++ * allocations.
++ */
+ break;
+ case PM_POST_HIBERNATION:
+ case PM_POST_SUSPEND:
++ complete_all(&xe->pm_block);
++ xe_pm_wake_rebind_workers(xe);
+ xe_bo_notifier_unprepare_all_pinned(xe);
+ xe_pm_runtime_put(xe);
+ break;
+ }
+
+- if (err)
+- return NOTIFY_BAD;
+-
+ return NOTIFY_DONE;
+ }
+
+@@ -339,6 +353,14 @@ int xe_pm_init(struct xe_device *xe)
+ if (err)
+ return err;
+
++ err = drmm_mutex_init(&xe->drm, &xe->rebind_resume_lock);
++ if (err)
++ goto err_unregister;
++
++ init_completion(&xe->pm_block);
++ complete_all(&xe->pm_block);
++ INIT_LIST_HEAD(&xe->rebind_resume_list);
++
+ /* For now suspend/resume is only allowed with GuC */
+ if (!xe_device_uc_enabled(xe))
+ return 0;
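
The xe_pm hunks use a struct completion as a suspend gate: pm_block starts out completed (gate open), reinit_completion() closes it on PM_SUSPEND_PREPARE / PM_HIBERNATION_PREPARE, and complete_all() reopens it after resume. Paths that may sleep (the exec IOCTL) wait interruptibly, so a task being frozen gets -ERESTARTSYS and reruns the ioctl; paths that must not block (validation, the rebind worker below) use try_wait_for_completion(), which on a complete_all()'d completion reports true without consuming it. The pattern with the xe specifics stripped out:

    /* Completion as a gate: open by default, closed across suspend. */
    #include <linux/completion.h>

    struct gate { struct completion open; };

    static void gate_init(struct gate *g)
    {
            init_completion(&g->open);
            complete_all(&g->open);                 /* start open */
    }

    static void gate_close(struct gate *g)  { reinit_completion(&g->open); }
    static void gate_reopen(struct gate *g) { complete_all(&g->open); }

    /* sleeping callers: -ERESTARTSYS when interrupted (e.g. by the freezer) */
    static int gate_wait(struct gate *g)
    {
            return wait_for_completion_interruptible(&g->open);
    }

    /* non-blocking callers: peek and back off if the gate is closed */
    static bool gate_is_open(struct gate *g)
    {
            return try_wait_for_completion(&g->open);
    }
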
+diff --git a/drivers/gpu/drm/xe/xe_survivability_mode.c b/drivers/gpu/drm/xe/xe_survivability_mode.c
+index 1f710b3fc599b5..5ae3d70e45167f 100644
+--- a/drivers/gpu/drm/xe/xe_survivability_mode.c
++++ b/drivers/gpu/drm/xe/xe_survivability_mode.c
+@@ -40,6 +40,8 @@
+ *
+ * # echo 1 > /sys/kernel/config/xe/0000:03:00.0/survivability_mode
+ *
++ * It is the responsibility of the user to clear the mode once firmware flash is complete.
++ *
+ * Refer :ref:`xe_configfs` for more details on how to use configfs
+ *
+ * Survivability mode is indicated by the below admin-only readable sysfs which provides additional
+@@ -146,7 +148,6 @@ static void xe_survivability_mode_fini(void *arg)
+ struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
+ struct device *dev = &pdev->dev;
+
+- xe_configfs_clear_survivability_mode(pdev);
+ sysfs_remove_file(&dev->kobj, &dev_attr_survivability_mode.attr);
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index e278aad1a6eb29..84052b98002d14 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -393,6 +393,9 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+ list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
+ &vm->rebind_list);
+
++ if (!try_wait_for_completion(&vm->xe->pm_block))
++ return -EAGAIN;
++
+ ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false);
+ if (ret)
+ return ret;
+@@ -479,6 +482,33 @@ static int xe_preempt_work_begin(struct drm_exec *exec, struct xe_vm *vm,
+ return xe_vm_validate_rebind(vm, exec, vm->preempt.num_exec_queues);
+ }
+
++static bool vm_suspend_rebind_worker(struct xe_vm *vm)
++{
++ struct xe_device *xe = vm->xe;
++ bool ret = false;
++
++ mutex_lock(&xe->rebind_resume_lock);
++ if (!try_wait_for_completion(&vm->xe->pm_block)) {
++ ret = true;
++ list_move_tail(&vm->preempt.pm_activate_link, &xe->rebind_resume_list);
++ }
++ mutex_unlock(&xe->rebind_resume_lock);
++
++ return ret;
++}
++
++/**
++ * xe_vm_resume_rebind_worker() - Resume the rebind worker.
++ * @vm: The vm whose preempt worker to resume.
++ *
++ * Resume a preempt worker that was previously suspended by
++ * vm_suspend_rebind_worker().
++ */
++void xe_vm_resume_rebind_worker(struct xe_vm *vm)
++{
++ queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
++}
++
+ static void preempt_rebind_work_func(struct work_struct *w)
+ {
+ struct xe_vm *vm = container_of(w, struct xe_vm, preempt.rebind_work);
+@@ -502,6 +532,11 @@ static void preempt_rebind_work_func(struct work_struct *w)
+ }
+
+ retry:
++ if (!try_wait_for_completion(&vm->xe->pm_block) && vm_suspend_rebind_worker(vm)) {
++ up_write(&vm->lock);
++ return;
++ }
++
+ if (xe_vm_userptr_check_repin(vm)) {
+ err = xe_vm_userptr_pin(vm);
+ if (err)
+@@ -1686,6 +1721,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
+ if (flags & XE_VM_FLAG_LR_MODE) {
+ INIT_WORK(&vm->preempt.rebind_work, preempt_rebind_work_func);
+ xe_pm_runtime_get_noresume(xe);
++ INIT_LIST_HEAD(&vm->preempt.pm_activate_link);
+ }
+
+ if (flags & XE_VM_FLAG_FAULT_MODE) {
+@@ -1867,8 +1903,12 @@ void xe_vm_close_and_put(struct xe_vm *vm)
+ xe_assert(xe, !vm->preempt.num_exec_queues);
+
+ xe_vm_close(vm);
+- if (xe_vm_in_preempt_fence_mode(vm))
++ if (xe_vm_in_preempt_fence_mode(vm)) {
++ mutex_lock(&xe->rebind_resume_lock);
++ list_del_init(&vm->preempt.pm_activate_link);
++ mutex_unlock(&xe->rebind_resume_lock);
+ flush_work(&vm->preempt.rebind_work);
++ }
+ if (xe_vm_in_fault_mode(vm))
+ xe_svm_close(vm);
+
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index e54ca835b58282..e493a17e0f19d9 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -268,6 +268,8 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
+ struct xe_exec_queue *q, u64 addr,
+ enum xe_cache_level cache_lvl);
+
++void xe_vm_resume_rebind_worker(struct xe_vm *vm);
++
+ /**
+ * xe_vm_resv() - Return's the vm's reservation object
+ * @vm: The vm
+diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
+index 1979e9bdbdf36b..4ebd3dc53f3c1a 100644
+--- a/drivers/gpu/drm/xe/xe_vm_types.h
++++ b/drivers/gpu/drm/xe/xe_vm_types.h
+@@ -286,6 +286,11 @@ struct xe_vm {
+ * BOs
+ */
+ struct work_struct rebind_work;
++ /**
++ * @preempt.pm_activate_link: Link to list of rebind workers to be
++ * kicked on resume.
++ */
++ struct list_head pm_activate_link;
+ } preempt;
+
+ /** @um: unified memory state */
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index a7f89946dad418..e94ac746a741af 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1052,7 +1052,7 @@ static const struct pci_device_id i801_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_SOC_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+- { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
++ { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5) },
+ { PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+diff --git a/drivers/i2c/busses/i2c-rtl9300.c b/drivers/i2c/busses/i2c-rtl9300.c
+index cfafe089102aa2..9e1f71fed0feac 100644
+--- a/drivers/i2c/busses/i2c-rtl9300.c
++++ b/drivers/i2c/busses/i2c-rtl9300.c
+@@ -99,6 +99,9 @@ static int rtl9300_i2c_config_xfer(struct rtl9300_i2c *i2c, struct rtl9300_i2c_c
+ {
+ u32 val, mask;
+
++ if (len < 1 || len > 16)
++ return -EINVAL;
++
+ val = chan->bus_freq << RTL9300_I2C_MST_CTRL2_SCL_FREQ_OFS;
+ mask = RTL9300_I2C_MST_CTRL2_SCL_FREQ_MASK;
+
+@@ -222,15 +225,6 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
+ }
+
+ switch (size) {
+- case I2C_SMBUS_QUICK:
+- ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
+- if (ret)
+- goto out_unlock;
+- ret = rtl9300_i2c_reg_addr_set(i2c, 0, 0);
+- if (ret)
+- goto out_unlock;
+- break;
+-
+ case I2C_SMBUS_BYTE:
+ if (read_write == I2C_SMBUS_WRITE) {
+ ret = rtl9300_i2c_config_xfer(i2c, chan, addr, 0);
+@@ -312,9 +306,9 @@ static int rtl9300_i2c_smbus_xfer(struct i2c_adapter *adap, u16 addr, unsigned s
+
+ static u32 rtl9300_i2c_func(struct i2c_adapter *a)
+ {
+- return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+- I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
+- I2C_FUNC_SMBUS_BLOCK_DATA;
++ return I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_BYTE_DATA |
++ I2C_FUNC_SMBUS_WORD_DATA | I2C_FUNC_SMBUS_BLOCK_DATA |
++ I2C_FUNC_SMBUS_I2C_BLOCK;
+ }
+
+ static const struct i2c_algorithm rtl9300_i2c_algo = {
+@@ -323,7 +317,7 @@ static const struct i2c_algorithm rtl9300_i2c_algo = {
+ };
+
+ static struct i2c_adapter_quirks rtl9300_i2c_quirks = {
+- .flags = I2C_AQ_NO_CLK_STRETCH,
++ .flags = I2C_AQ_NO_CLK_STRETCH | I2C_AQ_NO_ZERO_LEN,
+ .max_read_len = 16,
+ .max_write_len = 16,
+ };
+@@ -353,7 +347,7 @@ static int rtl9300_i2c_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, i2c);
+
+- if (device_get_child_node_count(dev) >= RTL9300_I2C_MUX_NCHAN)
++ if (device_get_child_node_count(dev) > RTL9300_I2C_MUX_NCHAN)
+ return dev_err_probe(dev, -EINVAL, "Too many channels\n");
+
+ device_for_each_child_node(dev, child) {
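
The rtl9300 changes are all input validation: the transfer length is clamped to the 1..16 bytes the controller supports, zero-length transfers (including the half-working I2C_SMBUS_QUICK) are rejected via I2C_AQ_NO_ZERO_LEN instead of being attempted, and the channel-count check becomes `>` so a mux with exactly RTL9300_I2C_MUX_NCHAN children is accepted. With the quirk flags set, the I2C core filters bad requests before the driver sees them; the declaration pattern, sketched:

    /* Sketch: adapter quirks let the I2C core reject what the HW can't do. */
    static const struct i2c_adapter_quirks my_quirks = {
            .flags         = I2C_AQ_NO_CLK_STRETCH | I2C_AQ_NO_ZERO_LEN,
            .max_read_len  = 16,
            .max_write_len = 16,
    };
    /* at probe time: adap->quirks = &my_quirks; */
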
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 1d8c579b54331e..4ee9e403d38501 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -422,6 +422,7 @@ static const struct xpad_device {
+ { 0x3537, 0x1010, "GameSir G7 SE", 0, XTYPE_XBOXONE },
+ { 0x366c, 0x0005, "ByoWave Proteus Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE, FLAG_DELAY_INIT },
+ { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
++ { 0x37d7, 0x2501, "Flydigi Apex 5", 0, XTYPE_XBOX360 },
+ { 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", 0, XTYPE_XBOX360 },
+ { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+ { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN }
+@@ -578,6 +579,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x3537), /* GameSir Controllers */
+ XPAD_XBOXONE_VENDOR(0x3537), /* GameSir Controllers */
+ XPAD_XBOXONE_VENDOR(0x366c), /* ByoWave controllers */
++ XPAD_XBOX360_VENDOR(0x37d7), /* Flydigi Controllers */
+ XPAD_XBOX360_VENDOR(0x413d), /* Black Shark Green Ghost Controller */
+ { }
+ };
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index 6fac31c0d99f2b..ff23219a582ab8 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -2427,6 +2427,9 @@ static int iqs7222_parse_chan(struct iqs7222_private *iqs7222,
+ if (error)
+ return error;
+
++ if (!iqs7222->kp_type[chan_index][i])
++ continue;
++
+ if (!dev_desc->event_offset)
+ continue;
+
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 6ed9fc34948cbe..1caa6c4ca435c7 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1155,6 +1155,20 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "XxHP4NAx"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "XxKK4NAx_XxSP4NAx"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
++ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ },
+ /*
+ * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+ * after suspend fixable with the forcenorestore quirk.
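
Editorial aside (minimal sketch, not from the patch): the two new entries above match purely on DMI_BOARD_NAME. The same dmi_system_id shape in miniature, with a placeholder board name and flags rather than real quirks:

    #include <linux/dmi.h>

    static const struct dmi_system_id example_quirks[] __initconst = {
            {
                    .matches = {
                            DMI_MATCH(DMI_BOARD_NAME, "EXAMPLE-BOARD"), /* hypothetical */
                    },
                    .driver_data = (void *)(0UL), /* placeholder flags */
            },
            { } /* terminator */
    };
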
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index c8b79de84d3fb9..071f78e67fcba0 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -370,7 +370,7 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
+ struct intel_iommu *iommu = tag->iommu;
+ u64 type = DMA_TLB_PSI_FLUSH;
+
+- if (domain->use_first_level) {
++ if (intel_domain_is_fs_paging(domain)) {
+ qi_batch_add_piotlb(iommu, tag->domain_id, tag->pasid, addr,
+ pages, ih, domain->qi_batch);
+ return;
+@@ -529,7 +529,8 @@ void cache_tag_flush_range_np(struct dmar_domain *domain, unsigned long start,
+ qi_batch_flush_descs(iommu, domain->qi_batch);
+ iommu = tag->iommu;
+
+- if (!cap_caching_mode(iommu->cap) || domain->use_first_level) {
++ if (!cap_caching_mode(iommu->cap) ||
++ intel_domain_is_fs_paging(domain)) {
+ iommu_flush_write_buffer(iommu);
+ continue;
+ }
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index c239e280e43d91..34dd175a331dc7 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -57,6 +57,8 @@
+ static void __init check_tylersburg_isoch(void);
+ static int rwbf_quirk;
+
++#define rwbf_required(iommu) (rwbf_quirk || cap_rwbf((iommu)->cap))
++
+ /*
+ * set to 1 to panic kernel if can't successfully enable VT-d
+ * (used when kernel is launched w/ TXT)
+@@ -1479,6 +1481,9 @@ static int domain_context_mapping_one(struct dmar_domain *domain,
+ struct context_entry *context;
+ int ret;
+
++ if (WARN_ON(!intel_domain_is_ss_paging(domain)))
++ return -EINVAL;
++
+ pr_debug("Set context mapping for %02x:%02x.%d\n",
+ bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+@@ -1795,18 +1800,6 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ (pgd_t *)pgd, flags, old);
+ }
+
+-static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
+- struct intel_iommu *iommu)
+-{
+- if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
+- return true;
+-
+- if (rwbf_quirk || cap_rwbf(iommu->cap))
+- return true;
+-
+- return false;
+-}
+-
+ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ struct device *dev)
+ {
+@@ -1830,12 +1823,14 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+
+ if (!sm_supported(iommu))
+ ret = domain_context_mapping(domain, dev);
+- else if (domain->use_first_level)
++ else if (intel_domain_is_fs_paging(domain))
+ ret = domain_setup_first_level(iommu, domain, dev,
+ IOMMU_NO_PASID, NULL);
+- else
++ else if (intel_domain_is_ss_paging(domain))
+ ret = domain_setup_second_level(iommu, domain, dev,
+ IOMMU_NO_PASID, NULL);
++ else if (WARN_ON(true))
++ ret = -EINVAL;
+
+ if (ret)
+ goto out_block_translation;
+@@ -1844,8 +1839,6 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
+ if (ret)
+ goto out_block_translation;
+
+- domain->iotlb_sync_map |= domain_need_iotlb_sync_map(domain, iommu);
+-
+ return 0;
+
+ out_block_translation:
+@@ -3299,10 +3292,14 @@ static struct dmar_domain *paging_domain_alloc(struct device *dev, bool first_st
+ spin_lock_init(&domain->lock);
+ spin_lock_init(&domain->cache_lock);
+ xa_init(&domain->iommu_array);
++ INIT_LIST_HEAD(&domain->s1_domains);
++ spin_lock_init(&domain->s1_lock);
+
+ domain->nid = dev_to_node(dev);
+ domain->use_first_level = first_stage;
+
++ domain->domain.type = IOMMU_DOMAIN_UNMANAGED;
++
+ /* calculate the address width */
+ addr_width = agaw_to_width(iommu->agaw);
+ if (addr_width > cap_mgaw(iommu->cap))
+@@ -3344,62 +3341,92 @@ static struct dmar_domain *paging_domain_alloc(struct device *dev, bool first_st
+ }
+
+ static struct iommu_domain *
+-intel_iommu_domain_alloc_paging_flags(struct device *dev, u32 flags,
+- const struct iommu_user_data *user_data)
++intel_iommu_domain_alloc_first_stage(struct device *dev,
++ struct intel_iommu *iommu, u32 flags)
++{
++ struct dmar_domain *dmar_domain;
++
++ if (flags & ~IOMMU_HWPT_ALLOC_PASID)
++ return ERR_PTR(-EOPNOTSUPP);
++
++ /* Only SL is available in legacy mode */
++ if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++ return ERR_PTR(-EOPNOTSUPP);
++
++ dmar_domain = paging_domain_alloc(dev, true);
++ if (IS_ERR(dmar_domain))
++ return ERR_CAST(dmar_domain);
++
++ dmar_domain->domain.ops = &intel_fs_paging_domain_ops;
++ /*
++ * iotlb sync for map is only needed for legacy implementations that
++ * explicitly require flushing internal write buffers to ensure memory
++ * coherence.
++ */
++ if (rwbf_required(iommu))
++ dmar_domain->iotlb_sync_map = true;
++
++ return &dmar_domain->domain;
++}
++
++static struct iommu_domain *
++intel_iommu_domain_alloc_second_stage(struct device *dev,
++ struct intel_iommu *iommu, u32 flags)
+ {
+- struct device_domain_info *info = dev_iommu_priv_get(dev);
+- bool dirty_tracking = flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING;
+- bool nested_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
+- struct intel_iommu *iommu = info->iommu;
+ struct dmar_domain *dmar_domain;
+- struct iommu_domain *domain;
+- bool first_stage;
+
+ if (flags &
+ (~(IOMMU_HWPT_ALLOC_NEST_PARENT | IOMMU_HWPT_ALLOC_DIRTY_TRACKING |
+ IOMMU_HWPT_ALLOC_PASID)))
+ return ERR_PTR(-EOPNOTSUPP);
+- if (nested_parent && !nested_supported(iommu))
++
++ if (((flags & IOMMU_HWPT_ALLOC_NEST_PARENT) &&
++ !nested_supported(iommu)) ||
++ ((flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING) &&
++ !ssads_supported(iommu)))
+ return ERR_PTR(-EOPNOTSUPP);
+- if (user_data || (dirty_tracking && !ssads_supported(iommu)))
++
++ /* Legacy mode always supports second stage */
++ if (sm_supported(iommu) && !ecap_slts(iommu->ecap))
+ return ERR_PTR(-EOPNOTSUPP);
+
++ dmar_domain = paging_domain_alloc(dev, false);
++ if (IS_ERR(dmar_domain))
++ return ERR_CAST(dmar_domain);
++
++ dmar_domain->domain.ops = &intel_ss_paging_domain_ops;
++ dmar_domain->nested_parent = flags & IOMMU_HWPT_ALLOC_NEST_PARENT;
++
++ if (flags & IOMMU_HWPT_ALLOC_DIRTY_TRACKING)
++ dmar_domain->domain.dirty_ops = &intel_dirty_ops;
++
+ /*
+- * Always allocate the guest compatible page table unless
+- * IOMMU_HWPT_ALLOC_NEST_PARENT or IOMMU_HWPT_ALLOC_DIRTY_TRACKING
+- * is specified.
++ * Besides the internal write buffer flush, the caching mode used for
++ * legacy nested translation (which utilizes shadowing page tables)
++ * also requires iotlb sync on map.
+ */
+- if (nested_parent || dirty_tracking) {
+- if (!sm_supported(iommu) || !ecap_slts(iommu->ecap))
+- return ERR_PTR(-EOPNOTSUPP);
+- first_stage = false;
+- } else {
+- first_stage = first_level_by_default(iommu);
+- }
++ if (rwbf_required(iommu) || cap_caching_mode(iommu->cap))
++ dmar_domain->iotlb_sync_map = true;
+
+- dmar_domain = paging_domain_alloc(dev, first_stage);
+- if (IS_ERR(dmar_domain))
+- return ERR_CAST(dmar_domain);
+- domain = &dmar_domain->domain;
+- domain->type = IOMMU_DOMAIN_UNMANAGED;
+- domain->owner = &intel_iommu_ops;
+- domain->ops = intel_iommu_ops.default_domain_ops;
+-
+- if (nested_parent) {
+- dmar_domain->nested_parent = true;
+- INIT_LIST_HEAD(&dmar_domain->s1_domains);
+- spin_lock_init(&dmar_domain->s1_lock);
+- }
++ return &dmar_domain->domain;
++}
+
+- if (dirty_tracking) {
+- if (dmar_domain->use_first_level) {
+- iommu_domain_free(domain);
+- return ERR_PTR(-EOPNOTSUPP);
+- }
+- domain->dirty_ops = &intel_dirty_ops;
+- }
++static struct iommu_domain *
++intel_iommu_domain_alloc_paging_flags(struct device *dev, u32 flags,
++ const struct iommu_user_data *user_data)
++{
++ struct device_domain_info *info = dev_iommu_priv_get(dev);
++ struct intel_iommu *iommu = info->iommu;
++ struct iommu_domain *domain;
+
+- return domain;
++ if (user_data)
++ return ERR_PTR(-EOPNOTSUPP);
++
++ /* Prefer first stage if possible by default. */
++ domain = intel_iommu_domain_alloc_first_stage(dev, iommu, flags);
++ if (domain != ERR_PTR(-EOPNOTSUPP))
++ return domain;
++ return intel_iommu_domain_alloc_second_stage(dev, iommu, flags);
+ }
+
+ static void intel_iommu_domain_free(struct iommu_domain *domain)
+@@ -3411,33 +3438,86 @@ static void intel_iommu_domain_free(struct iommu_domain *domain)
+ domain_exit(dmar_domain);
+ }
+
++static int paging_domain_compatible_first_stage(struct dmar_domain *dmar_domain,
++ struct intel_iommu *iommu)
++{
++ if (WARN_ON(dmar_domain->domain.dirty_ops ||
++ dmar_domain->nested_parent))
++ return -EINVAL;
++
++ /* Only SL is available in legacy mode */
++ if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++ return -EINVAL;
++
++ /* Same page size support */
++ if (!cap_fl1gp_support(iommu->cap) &&
++ (dmar_domain->domain.pgsize_bitmap & SZ_1G))
++ return -EINVAL;
++
++ /* iotlb sync on map requirement */
++ if ((rwbf_required(iommu)) && !dmar_domain->iotlb_sync_map)
++ return -EINVAL;
++
++ return 0;
++}
++
++static int
++paging_domain_compatible_second_stage(struct dmar_domain *dmar_domain,
++ struct intel_iommu *iommu)
++{
++ unsigned int sslps = cap_super_page_val(iommu->cap);
++
++ if (dmar_domain->domain.dirty_ops && !ssads_supported(iommu))
++ return -EINVAL;
++ if (dmar_domain->nested_parent && !nested_supported(iommu))
++ return -EINVAL;
++
++ /* Legacy mode always supports second stage */
++ if (sm_supported(iommu) && !ecap_slts(iommu->ecap))
++ return -EINVAL;
++
++ /* Same page size support */
++ if (!(sslps & BIT(0)) && (dmar_domain->domain.pgsize_bitmap & SZ_2M))
++ return -EINVAL;
++ if (!(sslps & BIT(1)) && (dmar_domain->domain.pgsize_bitmap & SZ_1G))
++ return -EINVAL;
++
++ /* iotlb sync on map requirement */
++ if ((rwbf_required(iommu) || cap_caching_mode(iommu->cap)) &&
++ !dmar_domain->iotlb_sync_map)
++ return -EINVAL;
++
++ return 0;
++}
++
+ int paging_domain_compatible(struct iommu_domain *domain, struct device *dev)
+ {
+ struct device_domain_info *info = dev_iommu_priv_get(dev);
+ struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+ struct intel_iommu *iommu = info->iommu;
++ int ret = -EINVAL;
+ int addr_width;
+
+- if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING)))
+- return -EPERM;
++ if (intel_domain_is_fs_paging(dmar_domain))
++ ret = paging_domain_compatible_first_stage(dmar_domain, iommu);
++ else if (intel_domain_is_ss_paging(dmar_domain))
++ ret = paging_domain_compatible_second_stage(dmar_domain, iommu);
++ else if (WARN_ON(true))
++ ret = -EINVAL;
++ if (ret)
++ return ret;
+
++ /*
++ * FIXME this is locked wrong, it needs to be under the
++ * dmar_domain->lock
++ */
+ if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap))
+ return -EINVAL;
+
+- if (domain->dirty_ops && !ssads_supported(iommu))
+- return -EINVAL;
+-
+ if (dmar_domain->iommu_coherency !=
+ iommu_paging_structure_coherency(iommu))
+ return -EINVAL;
+
+- if (dmar_domain->iommu_superpage !=
+- iommu_superpage_capability(iommu, dmar_domain->use_first_level))
+- return -EINVAL;
+-
+- if (dmar_domain->use_first_level &&
+- (!sm_supported(iommu) || !ecap_flts(iommu->ecap)))
+- return -EINVAL;
+
+ /* check if this iommu agaw is sufficient for max mapped address */
+ addr_width = agaw_to_width(iommu->agaw);
+@@ -4094,12 +4174,15 @@ static int intel_iommu_set_dev_pasid(struct iommu_domain *domain,
+ if (ret)
+ goto out_remove_dev_pasid;
+
+- if (dmar_domain->use_first_level)
++ if (intel_domain_is_fs_paging(dmar_domain))
+ ret = domain_setup_first_level(iommu, dmar_domain,
+ dev, pasid, old);
+- else
++ else if (intel_domain_is_ss_paging(dmar_domain))
+ ret = domain_setup_second_level(iommu, dmar_domain,
+ dev, pasid, old);
++ else if (WARN_ON(true))
++ ret = -EINVAL;
++
+ if (ret)
+ goto out_unwind_iopf;
+
+@@ -4374,6 +4457,32 @@ static struct iommu_domain identity_domain = {
+ },
+ };
+
++const struct iommu_domain_ops intel_fs_paging_domain_ops = {
++ .attach_dev = intel_iommu_attach_device,
++ .set_dev_pasid = intel_iommu_set_dev_pasid,
++ .map_pages = intel_iommu_map_pages,
++ .unmap_pages = intel_iommu_unmap_pages,
++ .iotlb_sync_map = intel_iommu_iotlb_sync_map,
++ .flush_iotlb_all = intel_flush_iotlb_all,
++ .iotlb_sync = intel_iommu_tlb_sync,
++ .iova_to_phys = intel_iommu_iova_to_phys,
++ .free = intel_iommu_domain_free,
++ .enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
++};
++
++const struct iommu_domain_ops intel_ss_paging_domain_ops = {
++ .attach_dev = intel_iommu_attach_device,
++ .set_dev_pasid = intel_iommu_set_dev_pasid,
++ .map_pages = intel_iommu_map_pages,
++ .unmap_pages = intel_iommu_unmap_pages,
++ .iotlb_sync_map = intel_iommu_iotlb_sync_map,
++ .flush_iotlb_all = intel_flush_iotlb_all,
++ .iotlb_sync = intel_iommu_tlb_sync,
++ .iova_to_phys = intel_iommu_iova_to_phys,
++ .free = intel_iommu_domain_free,
++ .enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
++};
++
+ const struct iommu_ops intel_iommu_ops = {
+ .blocked_domain = &blocking_domain,
+ .release_domain = &blocking_domain,
+@@ -4391,18 +4500,6 @@ const struct iommu_ops intel_iommu_ops = {
+ .is_attach_deferred = intel_iommu_is_attach_deferred,
+ .def_domain_type = device_def_domain_type,
+ .page_response = intel_iommu_page_response,
+- .default_domain_ops = &(const struct iommu_domain_ops) {
+- .attach_dev = intel_iommu_attach_device,
+- .set_dev_pasid = intel_iommu_set_dev_pasid,
+- .map_pages = intel_iommu_map_pages,
+- .unmap_pages = intel_iommu_unmap_pages,
+- .iotlb_sync_map = intel_iommu_iotlb_sync_map,
+- .flush_iotlb_all = intel_flush_iotlb_all,
+- .iotlb_sync = intel_iommu_tlb_sync,
+- .iova_to_phys = intel_iommu_iova_to_phys,
+- .free = intel_iommu_domain_free,
+- .enforce_cache_coherency = intel_iommu_enforce_cache_coherency,
+- }
+ };
+
+ static void quirk_iommu_igfx(struct pci_dev *dev)
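
Editorial aside (sketch of the control flow introduced above, with hypothetical function names): only -EOPNOTSUPP from the first-stage allocation falls through to the second-stage path, so hard errors are never masked by the fallback:

    static struct iommu_domain *example_alloc(struct device *dev, u32 flags)
    {
            struct iommu_domain *d;

            d = example_alloc_first_stage(dev, flags);      /* hypothetical */
            if (d != ERR_PTR(-EOPNOTSUPP))
                    return d;       /* success, or a hard error to propagate */
            return example_alloc_second_stage(dev, flags);  /* hypothetical */
    }
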
+diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
+index 61f42802fe9e95..c699ed8810f23c 100644
+--- a/drivers/iommu/intel/iommu.h
++++ b/drivers/iommu/intel/iommu.h
+@@ -1381,6 +1381,18 @@ struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
+ u8 devfn, int alloc);
+
+ extern const struct iommu_ops intel_iommu_ops;
++extern const struct iommu_domain_ops intel_fs_paging_domain_ops;
++extern const struct iommu_domain_ops intel_ss_paging_domain_ops;
++
++static inline bool intel_domain_is_fs_paging(struct dmar_domain *domain)
++{
++ return domain->domain.ops == &intel_fs_paging_domain_ops;
++}
++
++static inline bool intel_domain_is_ss_paging(struct dmar_domain *domain)
++{
++ return domain->domain.ops == &intel_ss_paging_domain_ops;
++}
+
+ #ifdef CONFIG_INTEL_IOMMU
+ extern int intel_iommu_sm;
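
Editorial aside (sketch, not part of the patch): the new helpers tag a domain's type by the identity of its ops table instead of a boolean flag, which is why the patch can drop use_first_level at several call sites. The idiom in miniature, with a hypothetical ops name:

    static const struct iommu_domain_ops example_fs_ops; /* hypothetical */

    static bool example_is_fs(struct dmar_domain *domain)
    {
            /* The ops pointer doubles as the domain's type tag. */
            return domain->domain.ops == &example_fs_ops;
    }
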
+diff --git a/drivers/iommu/intel/nested.c b/drivers/iommu/intel/nested.c
+index fc312f649f9ef0..1b6ad9c900a5ad 100644
+--- a/drivers/iommu/intel/nested.c
++++ b/drivers/iommu/intel/nested.c
+@@ -216,8 +216,7 @@ intel_iommu_domain_alloc_nested(struct device *dev, struct iommu_domain *parent,
+ /* Must be nested domain */
+ if (user_data->type != IOMMU_HWPT_DATA_VTD_S1)
+ return ERR_PTR(-EOPNOTSUPP);
+- if (parent->ops != intel_iommu_ops.default_domain_ops ||
+- !s2_domain->nested_parent)
++ if (!intel_domain_is_ss_paging(s2_domain) || !s2_domain->nested_parent)
+ return ERR_PTR(-EINVAL);
+
+ ret = iommu_copy_struct_from_user(&vtd, user_data,
+@@ -229,7 +228,6 @@ intel_iommu_domain_alloc_nested(struct device *dev, struct iommu_domain *parent,
+ if (!domain)
+ return ERR_PTR(-ENOMEM);
+
+- domain->use_first_level = true;
+ domain->s2_domain = s2_domain;
+ domain->s1_cfg = vtd;
+ domain->domain.ops = &intel_nested_domain_ops;
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index f3da596410b5e5..3994521f6ea488 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -214,7 +214,6 @@ struct iommu_domain *intel_svm_domain_alloc(struct device *dev,
+ return ERR_PTR(-ENOMEM);
+
+ domain->domain.ops = &intel_svm_domain_ops;
+- domain->use_first_level = true;
+ INIT_LIST_HEAD(&domain->dev_pasids);
+ INIT_LIST_HEAD(&domain->cache_tags);
+ spin_lock_init(&domain->cache_lock);
+diff --git a/drivers/irqchip/irq-mvebu-gicp.c b/drivers/irqchip/irq-mvebu-gicp.c
+index 54833717f8a70f..667bde3c651ff2 100644
+--- a/drivers/irqchip/irq-mvebu-gicp.c
++++ b/drivers/irqchip/irq-mvebu-gicp.c
+@@ -238,7 +238,7 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
+ }
+
+ base = ioremap(gicp->res->start, resource_size(gicp->res));
+- if (IS_ERR(base)) {
++ if (!base) {
+ dev_err(&pdev->dev, "ioremap() failed. Unable to clear pending interrupts.\n");
+ } else {
+ for (i = 0; i < 64; i++)
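
Editorial aside (minimal sketch): the gicp fix corrects a classic API mix-up. Plain ioremap() reports failure with NULL, while IS_ERR() only matches ERR_PTR() values from helpers such as devm_platform_ioremap_resource(), so the old check could never trigger:

    #include <linux/io.h>

    static void __iomem *example_map(struct resource *res)
    {
            void __iomem *base = ioremap(res->start, resource_size(res));

            if (!base)      /* correct test; IS_ERR(base) is never true here */
                    return NULL;
            return base;
    }
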
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 3f355bb85797f8..0f41573fa9f5ec 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -1406,7 +1406,7 @@ static int super_90_validate(struct mddev *mddev, struct md_rdev *freshest, stru
+ else {
+ if (sb->events_hi == sb->cp_events_hi &&
+ sb->events_lo == sb->cp_events_lo) {
+- mddev->resync_offset = sb->resync_offset;
++ mddev->resync_offset = sb->recovery_cp;
+ } else
+ mddev->resync_offset = 0;
+ }
+@@ -1534,13 +1534,13 @@ static void super_90_sync(struct mddev *mddev, struct md_rdev *rdev)
+ mddev->minor_version = sb->minor_version;
+ if (mddev->in_sync)
+ {
+- sb->resync_offset = mddev->resync_offset;
++ sb->recovery_cp = mddev->resync_offset;
+ sb->cp_events_hi = (mddev->events>>32);
+ sb->cp_events_lo = (u32)mddev->events;
+ if (mddev->resync_offset == MaxSector)
+ sb->state = (1<< MD_SB_CLEAN);
+ } else
+- sb->resync_offset = 0;
++ sb->recovery_cp = 0;
+
+ sb->layout = mddev->layout;
+ sb->chunk_size = mddev->chunk_sectors << 9;
+diff --git a/drivers/mtd/nand/raw/atmel/nand-controller.c b/drivers/mtd/nand/raw/atmel/nand-controller.c
+index 84ab4a83cbd686..db94d14a3807f5 100644
+--- a/drivers/mtd/nand/raw/atmel/nand-controller.c
++++ b/drivers/mtd/nand/raw/atmel/nand-controller.c
+@@ -1377,14 +1377,24 @@ static int atmel_smc_nand_prepare_smcconf(struct atmel_nand *nand,
+ if (ret)
+ return ret;
+
++ /*
++ * Read setup timing depends on the operation done on the NAND:
++ *
++ * NRD_SETUP = max(tAR, tCLR)
++ */
++ timeps = max(conf->timings.sdr.tAR_min, conf->timings.sdr.tCLR_min);
++ ncycles = DIV_ROUND_UP(timeps, mckperiodps);
++ totalcycles += ncycles;
++ ret = atmel_smc_cs_conf_set_setup(smcconf, ATMEL_SMC_NRD_SHIFT, ncycles);
++ if (ret)
++ return ret;
++
+ /*
+ * The read cycle timing is directly matching tRC, but is also
+ * dependent on the setup and hold timings we calculated earlier,
+ * which gives:
+ *
+- * NRD_CYCLE = max(tRC, NRD_PULSE + NRD_HOLD)
+- *
+- * NRD_SETUP is always 0.
++ * NRD_CYCLE = max(tRC, NRD_SETUP + NRD_PULSE + NRD_HOLD)
+ */
+ ncycles = DIV_ROUND_UP(conf->timings.sdr.tRC_min, mckperiodps);
+ ncycles = max(totalcycles, ncycles);
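
Editorial aside (worked sketch of the conversion above): with MCK at 133 MHz (one cycle is roughly 7519 ps), a 10 ns max(tAR, tCLR) gives DIV_ROUND_UP(10000, 7519) = 2 cycles; rounding up guarantees the programmed setup time never undershoots the datasheet minimum:

    static unsigned int example_ps_to_cycles(unsigned int timeps,
                                             unsigned int mckperiodps)
    {
            /* Round up so the timing never undershoots the NAND spec. */
            return DIV_ROUND_UP(timeps, mckperiodps);
    }
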
+diff --git a/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c b/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
+index c23b537948d5e6..1a285cd8fad62a 100644
+--- a/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
++++ b/drivers/mtd/nand/raw/nuvoton-ma35d1-nand-controller.c
+@@ -935,10 +935,10 @@ static void ma35_chips_cleanup(struct ma35_nand_info *nand)
+
+ static int ma35_nand_chips_init(struct device *dev, struct ma35_nand_info *nand)
+ {
+- struct device_node *np = dev->of_node, *nand_np;
++ struct device_node *np = dev->of_node;
+ int ret;
+
+- for_each_child_of_node(np, nand_np) {
++ for_each_child_of_node_scoped(np, nand_np) {
+ ret = ma35_nand_chip_init(dev, nand, nand_np);
+ if (ret) {
+ ma35_chips_cleanup(nand);
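
Editorial aside (sketch): for_each_child_of_node_scoped() drops the child node's reference automatically on every exit path, which is why the early return above no longer leaks a refcount. A minimal usage sketch:

    #include <linux/of.h>

    static int example_walk(struct device_node *np)
    {
            for_each_child_of_node_scoped(np, child) {
                    if (!of_device_is_available(child))
                            return -ENODEV; /* no of_node_put() needed */
            }
            return 0;
    }
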
+diff --git a/drivers/mtd/nand/raw/stm32_fmc2_nand.c b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+index a960403081f110..d957327fb4fa04 100644
+--- a/drivers/mtd/nand/raw/stm32_fmc2_nand.c
++++ b/drivers/mtd/nand/raw/stm32_fmc2_nand.c
+@@ -272,6 +272,7 @@ struct stm32_fmc2_nfc {
+ struct sg_table dma_data_sg;
+ struct sg_table dma_ecc_sg;
+ u8 *ecc_buf;
++ dma_addr_t dma_ecc_addr;
+ int dma_ecc_len;
+ u32 tx_dma_max_burst;
+ u32 rx_dma_max_burst;
+@@ -902,17 +903,10 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+
+ if (!write_data && !raw) {
+ /* Configure DMA ECC status */
+- p = nfc->ecc_buf;
+ for_each_sg(nfc->dma_ecc_sg.sgl, sg, eccsteps, s) {
+- sg_set_buf(sg, p, nfc->dma_ecc_len);
+- p += nfc->dma_ecc_len;
+- }
+-
+- ret = dma_map_sg(nfc->dev, nfc->dma_ecc_sg.sgl,
+- eccsteps, dma_data_dir);
+- if (!ret) {
+- ret = -EIO;
+- goto err_unmap_data;
++ sg_dma_address(sg) = nfc->dma_ecc_addr +
++ s * nfc->dma_ecc_len;
++ sg_dma_len(sg) = nfc->dma_ecc_len;
+ }
+
+ desc_ecc = dmaengine_prep_slave_sg(nfc->dma_ecc_ch,
+@@ -921,7 +915,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ DMA_PREP_INTERRUPT);
+ if (!desc_ecc) {
+ ret = -ENOMEM;
+- goto err_unmap_ecc;
++ goto err_unmap_data;
+ }
+
+ reinit_completion(&nfc->dma_ecc_complete);
+@@ -929,7 +923,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ desc_ecc->callback_param = &nfc->dma_ecc_complete;
+ ret = dma_submit_error(dmaengine_submit(desc_ecc));
+ if (ret)
+- goto err_unmap_ecc;
++ goto err_unmap_data;
+
+ dma_async_issue_pending(nfc->dma_ecc_ch);
+ }
+@@ -949,7 +943,7 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ if (!write_data && !raw)
+ dmaengine_terminate_all(nfc->dma_ecc_ch);
+ ret = -ETIMEDOUT;
+- goto err_unmap_ecc;
++ goto err_unmap_data;
+ }
+
+ /* Wait DMA data transfer completion */
+@@ -969,11 +963,6 @@ static int stm32_fmc2_nfc_xfer(struct nand_chip *chip, const u8 *buf,
+ }
+ }
+
+-err_unmap_ecc:
+- if (!write_data && !raw)
+- dma_unmap_sg(nfc->dev, nfc->dma_ecc_sg.sgl,
+- eccsteps, dma_data_dir);
+-
+ err_unmap_data:
+ dma_unmap_sg(nfc->dev, nfc->dma_data_sg.sgl, eccsteps, dma_data_dir);
+
+@@ -996,9 +985,21 @@ static int stm32_fmc2_nfc_seq_write(struct nand_chip *chip, const u8 *buf,
+
+ /* Write oob */
+ if (oob_required) {
+- ret = nand_change_write_column_op(chip, mtd->writesize,
+- chip->oob_poi, mtd->oobsize,
+- false);
++ unsigned int offset_in_page = mtd->writesize;
++ const void *buf = chip->oob_poi;
++ unsigned int len = mtd->oobsize;
++
++ if (!raw) {
++ struct mtd_oob_region oob_free;
++
++ mtd_ooblayout_free(mtd, 0, &oob_free);
++ offset_in_page += oob_free.offset;
++ buf += oob_free.offset;
++ len = oob_free.length;
++ }
++
++ ret = nand_change_write_column_op(chip, offset_in_page,
++ buf, len, false);
+ if (ret)
+ return ret;
+ }
+@@ -1610,7 +1611,8 @@ static int stm32_fmc2_nfc_dma_setup(struct stm32_fmc2_nfc *nfc)
+ return ret;
+
+ /* Allocate a buffer to store ECC status registers */
+- nfc->ecc_buf = devm_kzalloc(nfc->dev, FMC2_MAX_ECC_BUF_LEN, GFP_KERNEL);
++ nfc->ecc_buf = dmam_alloc_coherent(nfc->dev, FMC2_MAX_ECC_BUF_LEN,
++ &nfc->dma_ecc_addr, GFP_KERNEL);
+ if (!nfc->ecc_buf)
+ return -ENOMEM;
+
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index b90f15c986a317..aa6fb862451aa4 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -20,7 +20,7 @@
+ #include <linux/spi/spi.h>
+ #include <linux/spi/spi-mem.h>
+
+-static int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val)
++int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val)
+ {
+ struct spi_mem_op op = SPINAND_GET_FEATURE_1S_1S_1S_OP(reg,
+ spinand->scratchbuf);
+@@ -1253,8 +1253,19 @@ static int spinand_id_detect(struct spinand_device *spinand)
+
+ static int spinand_manufacturer_init(struct spinand_device *spinand)
+ {
+- if (spinand->manufacturer->ops->init)
+- return spinand->manufacturer->ops->init(spinand);
++ int ret;
++
++ if (spinand->manufacturer->ops->init) {
++ ret = spinand->manufacturer->ops->init(spinand);
++ if (ret)
++ return ret;
++ }
++
++ if (spinand->configure_chip) {
++ ret = spinand->configure_chip(spinand);
++ if (ret)
++ return ret;
++ }
+
+ return 0;
+ }
+@@ -1349,6 +1360,7 @@ int spinand_match_and_init(struct spinand_device *spinand,
+ spinand->flags = table[i].flags;
+ spinand->id.len = 1 + table[i].devid.len;
+ spinand->select_target = table[i].select_target;
++ spinand->configure_chip = table[i].configure_chip;
+ spinand->set_cont_read = table[i].set_cont_read;
+ spinand->fact_otp = &table[i].fact_otp;
+ spinand->user_otp = &table[i].user_otp;
+diff --git a/drivers/mtd/nand/spi/winbond.c b/drivers/mtd/nand/spi/winbond.c
+index b7a28f001a387b..116ac17591a86b 100644
+--- a/drivers/mtd/nand/spi/winbond.c
++++ b/drivers/mtd/nand/spi/winbond.c
+@@ -18,6 +18,9 @@
+
+ #define W25N04KV_STATUS_ECC_5_8_BITFLIPS (3 << 4)
+
++#define W25N0XJW_SR4 0xD0
++#define W25N0XJW_SR4_HS BIT(2)
++
+ /*
+ * "X2" in the core is equivalent to "dual output" in the datasheets,
+ * "X4" in the core is equivalent to "quad output" in the datasheets.
+@@ -42,10 +45,12 @@ static SPINAND_OP_VARIANTS(update_cache_octal_variants,
+ static SPINAND_OP_VARIANTS(read_cache_dual_quad_dtr_variants,
+ SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++ SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 4, NULL, 0, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 104 * HZ_PER_MHZ),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
++ SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 2, NULL, 0, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 104 * HZ_PER_MHZ),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
+ SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
+@@ -157,6 +162,36 @@ static const struct mtd_ooblayout_ops w25n02kv_ooblayout = {
+ .free = w25n02kv_ooblayout_free,
+ };
+
++static int w25n01jw_ooblayout_ecc(struct mtd_info *mtd, int section,
++ struct mtd_oob_region *region)
++{
++ if (section > 3)
++ return -ERANGE;
++
++ region->offset = (16 * section) + 12;
++ region->length = 4;
++
++ return 0;
++}
++
++static int w25n01jw_ooblayout_free(struct mtd_info *mtd, int section,
++ struct mtd_oob_region *region)
++{
++ if (section > 3)
++ return -ERANGE;
++
++ region->offset = (16 * section);
++ region->length = 12;
++
++ /* Extract BBM */
++ if (!section) {
++ region->offset += 2;
++ region->length -= 2;
++ }
++
++ return 0;
++}
++
+ static int w35n01jw_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *region)
+ {
+@@ -187,6 +222,11 @@ static int w35n01jw_ooblayout_free(struct mtd_info *mtd, int section,
+ return 0;
+ }
+
++static const struct mtd_ooblayout_ops w25n01jw_ooblayout = {
++ .ecc = w25n01jw_ooblayout_ecc,
++ .free = w25n01jw_ooblayout_free,
++};
++
+ static const struct mtd_ooblayout_ops w35n01jw_ooblayout = {
+ .ecc = w35n01jw_ooblayout_ecc,
+ .free = w35n01jw_ooblayout_free,
+@@ -230,6 +270,40 @@ static int w25n02kv_ecc_get_status(struct spinand_device *spinand,
+ return -EINVAL;
+ }
+
++static int w25n0xjw_hs_cfg(struct spinand_device *spinand)
++{
++ const struct spi_mem_op *op;
++ bool hs;
++ u8 sr4;
++ int ret;
++
++ op = spinand->op_templates.read_cache;
++ if (op->cmd.dtr || op->addr.dtr || op->dummy.dtr || op->data.dtr)
++ hs = false;
++ else if (op->cmd.buswidth == 1 && op->addr.buswidth == 1 &&
++ op->dummy.buswidth == 1 && op->data.buswidth == 1)
++ hs = false;
++ else if (!op->max_freq)
++ hs = true;
++ else
++ hs = false;
++
++ ret = spinand_read_reg_op(spinand, W25N0XJW_SR4, &sr4);
++ if (ret)
++ return ret;
++
++ if (hs)
++ sr4 |= W25N0XJW_SR4_HS;
++ else
++ sr4 &= ~W25N0XJW_SR4_HS;
++
++ ret = spinand_write_reg_op(spinand, W25N0XJW_SR4, sr4);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
+ static const struct spinand_info winbond_spinand_table[] = {
+ /* 512M-bit densities */
+ SPINAND_INFO("W25N512GW", /* 1.8V */
+@@ -268,7 +342,8 @@ static const struct spinand_info winbond_spinand_table[] = {
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
++ SPINAND_ECCINFO(&w25n01jw_ooblayout, NULL),
++ SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
+ SPINAND_INFO("W25N01KV", /* 3.3V */
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xae, 0x21),
+ NAND_MEMORG(1, 2048, 96, 64, 1024, 20, 1, 1, 1),
+@@ -324,7 +399,8 @@ static const struct spinand_info winbond_spinand_table[] = {
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL),
++ SPINAND_CONFIGURE_CHIP(w25n0xjw_hs_cfg)),
+ SPINAND_INFO("W25N02KV", /* 3.3V */
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xaa, 0x22),
+ NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index 3f2e378199abba..5abe4af61655cd 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -690,14 +690,6 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
+ dlc |= XCAN_DLCR_EDL_MASK;
+ }
+
+- if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
+- (priv->devtype.flags & XCAN_FLAG_TXFEMP))
+- can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
+- else
+- can_put_echo_skb(skb, ndev, 0, 0);
+-
+- priv->tx_head++;
+-
+ priv->write_reg(priv, XCAN_FRAME_ID_OFFSET(frame_offset), id);
+ /* If the CAN frame is RTR frame this write triggers transmission
+ * (not on CAN FD)
+@@ -730,6 +722,14 @@ static void xcan_write_frame(struct net_device *ndev, struct sk_buff *skb,
+ data[1]);
+ }
+ }
++
++ if (!(priv->devtype.flags & XCAN_FLAG_TX_MAILBOXES) &&
++ (priv->devtype.flags & XCAN_FLAG_TXFEMP))
++ can_put_echo_skb(skb, ndev, priv->tx_head % priv->tx_max, 0);
++ else
++ can_put_echo_skb(skb, ndev, 0, 0);
++
++ priv->tx_head++;
+ }
+
+ /**
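
Editorial aside (not from the patch): can_put_echo_skb() hands the skb over to the CAN echo machinery, so the driver must finish reading frame data from it first; the reorder above moves the handover after the last register write. The required ordering, with a hypothetical helper:

    static void example_xmit(struct net_device *ndev, struct sk_buff *skb)
    {
            struct can_frame *cf = (struct can_frame *)skb->data;

            example_write_frame_to_hw(cf);          /* hypothetical: last read of skb data */
            can_put_echo_skb(skb, ndev, 0, 0);      /* skb must not be touched after this */
    }
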
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index d15d912690c40e..073d20241a4c9c 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1229,9 +1229,15 @@ static int b53_setup(struct dsa_switch *ds)
+ */
+ ds->untag_vlan_aware_bridge_pvid = true;
+
+- /* Ageing time is set in seconds */
+- ds->ageing_time_min = 1 * 1000;
+- ds->ageing_time_max = AGE_TIME_MAX * 1000;
++ if (dev->chip_id == BCM53101_DEVICE_ID) {
++ /* BCM53101 uses 0.5 second increments */
++ ds->ageing_time_min = 1 * 500;
++ ds->ageing_time_max = AGE_TIME_MAX * 500;
++ } else {
++ /* Everything else uses 1 second increments */
++ ds->ageing_time_min = 1 * 1000;
++ ds->ageing_time_max = AGE_TIME_MAX * 1000;
++ }
+
+ ret = b53_reset_switch(dev);
+ if (ret) {
+@@ -2448,7 +2454,10 @@ int b53_set_ageing_time(struct dsa_switch *ds, unsigned int msecs)
+ else
+ reg = B53_AGING_TIME_CONTROL;
+
+- atc = DIV_ROUND_CLOSEST(msecs, 1000);
++ if (dev->chip_id == BCM53101_DEVICE_ID)
++ atc = DIV_ROUND_CLOSEST(msecs, 500);
++ else
++ atc = DIV_ROUND_CLOSEST(msecs, 1000);
+
+ if (!is5325(dev) && !is5365(dev))
+ atc |= AGE_CHANGE;
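
Editorial aside (worked sketch of the new unit handling): a 5-minute ageing time (300000 ms) programs DIV_ROUND_CLOSEST(300000, 500) = 600 ticks on BCM53101's 0.5-second granularity, versus 300 ticks on the 1-second chips:

    static u32 example_ageing_ticks(unsigned int msecs, bool is_bcm53101)
    {
            /* BCM53101 counts in 0.5 s steps, everything else in 1 s steps. */
            return DIV_ROUND_CLOSEST(msecs, is_bcm53101 ? 500 : 1000);
    }
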
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 651b73163b6ee9..5f15f42070c539 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2358,7 +2358,8 @@ static void fec_enet_phy_reset_after_clk_enable(struct net_device *ndev)
+ */
+ phy_dev = of_phy_find_device(fep->phy_node);
+ phy_reset_after_clk_enable(phy_dev);
+- put_device(&phy_dev->mdio.dev);
++ if (phy_dev)
++ put_device(&phy_dev->mdio.dev);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index f1c9e575703eaa..26dcdceae741e4 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -4182,7 +4182,7 @@ static int i40e_vsi_request_irq_msix(struct i40e_vsi *vsi, char *basename)
+ irq_num = pf->msix_entries[base + vector].vector;
+ irq_set_affinity_notifier(irq_num, NULL);
+ irq_update_affinity_hint(irq_num, NULL);
+- free_irq(irq_num, &vsi->q_vectors[vector]);
++ free_irq(irq_num, vsi->q_vectors[vector]);
+ }
+ return err;
+ }
+diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+index ca6ccbc139548b..6412c84e2d17db 100644
+--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
++++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
+@@ -2081,11 +2081,8 @@ static void igb_diag_test(struct net_device *netdev,
+ } else {
+ dev_info(&adapter->pdev->dev, "online testing starting\n");
+
+- /* PHY is powered down when interface is down */
+- if (if_running && igb_link_test(adapter, &data[TEST_LINK]))
++ if (igb_link_test(adapter, &data[TEST_LINK]))
+ eth_test->flags |= ETH_TEST_FL_FAILED;
+- else
+- data[TEST_LINK] = 0;
+
+ /* Online tests aren't run; pass by default */
+ data[TEST_REG] = 0;
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index b76a154e635e00..d87438bef6fba5 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4451,8 +4451,7 @@ int igb_setup_rx_resources(struct igb_ring *rx_ring)
+ if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
+ xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
+ res = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
+- rx_ring->queue_index,
+- rx_ring->q_vector->napi.napi_id);
++ rx_ring->queue_index, 0);
+ if (res < 0) {
+ dev_err(dev, "Failed to register xdp_rxq index %u\n",
+ rx_ring->queue_index);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index f436d7cf565a14..1a9cc8206430b2 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -691,7 +691,7 @@ static void icssg_prueth_hsr_fdb_add_del(struct prueth_emac *emac,
+
+ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+ {
+- struct net_device *real_dev;
++ struct net_device *real_dev, *port_dev;
+ struct prueth_emac *emac;
+ u8 vlan_id, i;
+
+@@ -700,11 +700,15 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+
+ if (is_hsr_master(real_dev)) {
+ for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
+- emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
+- if (!emac)
++ port_dev = hsr_get_port_ndev(real_dev, i);
++ emac = netdev_priv(port_dev);
++ if (!emac) {
++ dev_put(port_dev);
+ return -EINVAL;
++ }
+ icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
+ true);
++ dev_put(port_dev);
+ }
+ } else {
+ emac = netdev_priv(real_dev);
+@@ -716,7 +720,7 @@ static int icssg_prueth_hsr_add_mcast(struct net_device *ndev, const u8 *addr)
+
+ static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
+ {
+- struct net_device *real_dev;
++ struct net_device *real_dev, *port_dev;
+ struct prueth_emac *emac;
+ u8 vlan_id, i;
+
+@@ -725,11 +729,15 @@ static int icssg_prueth_hsr_del_mcast(struct net_device *ndev, const u8 *addr)
+
+ if (is_hsr_master(real_dev)) {
+ for (i = HSR_PT_SLAVE_A; i < HSR_PT_INTERLINK; i++) {
+- emac = netdev_priv(hsr_get_port_ndev(real_dev, i));
+- if (!emac)
++ port_dev = hsr_get_port_ndev(real_dev, i);
++ emac = netdev_priv(port_dev);
++ if (!emac) {
++ dev_put(port_dev);
+ return -EINVAL;
++ }
+ icssg_prueth_hsr_fdb_add_del(emac, addr, vlan_id,
+ false);
++ dev_put(port_dev);
+ }
+ } else {
+ emac = netdev_priv(real_dev);
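
Editorial aside (sketch): hsr_get_port_ndev() returns the port device with a reference held, so every path out of the loop above must now dev_put() it, including the error path. The shape of the pattern, with a hypothetical check:

    struct net_device *port_dev = hsr_get_port_ndev(real_dev, i);
    struct prueth_emac *emac = netdev_priv(port_dev);

    if (!example_ok(emac)) {        /* hypothetical validity check */
            dev_put(port_dev);      /* drop the ref on the error path too */
            return -EINVAL;
    }
    /* ... use emac ... */
    dev_put(port_dev);
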
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+index f0823aa1ede607..bb1dcdf5fd0d58 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+@@ -2071,10 +2071,6 @@ static void wx_setup_mrqc(struct wx *wx)
+ {
+ u32 rss_field = 0;
+
+- /* VT, and RSS do not coexist at the same time */
+- if (test_bit(WX_FLAG_VMDQ_ENABLED, wx->flags))
+- return;
+-
+ /* Disable indicating checksum in descriptor, enables RSS hash */
+ wr32m(wx, WX_PSR_CTL, WX_PSR_CTL_PCSD, WX_PSR_CTL_PCSD);
+
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 01329fe7451a12..0eca96eeed58ab 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -4286,6 +4286,7 @@ static int macsec_newlink(struct net_device *dev,
+ if (err < 0)
+ goto del_dev;
+
++ netdev_update_features(dev);
+ netif_stacked_transfer_operstate(real_dev, dev);
+ linkwatch_fire_event(dev);
+
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 13df28445f0201..c02da57a4da5e3 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -1065,23 +1065,19 @@ EXPORT_SYMBOL_GPL(phy_inband_caps);
+ */
+ int phy_config_inband(struct phy_device *phydev, unsigned int modes)
+ {
+- int err;
++ lockdep_assert_held(&phydev->lock);
+
+ if (!!(modes & LINK_INBAND_DISABLE) +
+ !!(modes & LINK_INBAND_ENABLE) +
+ !!(modes & LINK_INBAND_BYPASS) != 1)
+ return -EINVAL;
+
+- mutex_lock(&phydev->lock);
+ if (!phydev->drv)
+- err = -EIO;
++ return -EIO;
+ else if (!phydev->drv->config_inband)
+- err = -EOPNOTSUPP;
+- else
+- err = phydev->drv->config_inband(phydev, modes);
+- mutex_unlock(&phydev->lock);
++ return -EOPNOTSUPP;
+
+- return err;
++ return phydev->drv->config_inband(phydev, modes);
+ }
+ EXPORT_SYMBOL(phy_config_inband);
+
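Editorial aside (sketch of the idiom): the locking moves to the caller here; phy_config_inband() now documents the contract with lockdep_assert_held() instead of taking phydev->lock itself, which lets phylink hold the lock across a larger critical section:

    static int example_op_locked(struct phy_device *phydev)
    {
            lockdep_assert_held(&phydev->lock);     /* caller owns the lock */
            /* ... act on phydev state ... */
            return 0;
    }
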
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 0faa3d97e06b94..229a503d601eed 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -67,6 +67,8 @@ struct phylink {
+ struct timer_list link_poll;
+
+ struct mutex state_mutex;
++ /* Serialize updates to pl->phydev with phylink_resolve() */
++ struct mutex phydev_mutex;
+ struct phylink_link_state phy_state;
+ unsigned int phy_ib_mode;
+ struct work_struct resolve;
+@@ -1409,6 +1411,7 @@ static void phylink_get_fixed_state(struct phylink *pl,
+ static void phylink_mac_initial_config(struct phylink *pl, bool force_restart)
+ {
+ struct phylink_link_state link_state;
++ struct phy_device *phy = pl->phydev;
+
+ switch (pl->req_link_an_mode) {
+ case MLO_AN_PHY:
+@@ -1432,7 +1435,11 @@ static void phylink_mac_initial_config(struct phylink *pl, bool force_restart)
+ link_state.link = false;
+
+ phylink_apply_manual_flow(pl, &link_state);
++ if (phy)
++ mutex_lock(&phy->lock);
+ phylink_major_config(pl, force_restart, &link_state);
++ if (phy)
++ mutex_unlock(&phy->lock);
+ }
+
+ static const char *phylink_pause_to_str(int pause)
+@@ -1568,8 +1575,13 @@ static void phylink_resolve(struct work_struct *w)
+ struct phylink_link_state link_state;
+ bool mac_config = false;
+ bool retrigger = false;
++ struct phy_device *phy;
+ bool cur_link_state;
+
++ mutex_lock(&pl->phydev_mutex);
++ phy = pl->phydev;
++ if (phy)
++ mutex_lock(&phy->lock);
+ mutex_lock(&pl->state_mutex);
+ cur_link_state = phylink_link_is_up(pl);
+
+@@ -1603,11 +1615,11 @@ static void phylink_resolve(struct work_struct *w)
+ /* If we have a phy, the "up" state is the union of both the
+ * PHY and the MAC
+ */
+- if (pl->phydev)
++ if (phy)
+ link_state.link &= pl->phy_state.link;
+
+ /* Only update if the PHY link is up */
+- if (pl->phydev && pl->phy_state.link) {
++ if (phy && pl->phy_state.link) {
+ /* If the interface has changed, force a link down
+ * event if the link isn't already down, and re-resolve.
+ */
+@@ -1671,6 +1683,9 @@ static void phylink_resolve(struct work_struct *w)
+ queue_work(system_power_efficient_wq, &pl->resolve);
+ }
+ mutex_unlock(&pl->state_mutex);
++ if (phy)
++ mutex_unlock(&phy->lock);
++ mutex_unlock(&pl->phydev_mutex);
+ }
+
+ static void phylink_run_resolve(struct phylink *pl)
+@@ -1806,6 +1821,7 @@ struct phylink *phylink_create(struct phylink_config *config,
+ if (!pl)
+ return ERR_PTR(-ENOMEM);
+
++ mutex_init(&pl->phydev_mutex);
+ mutex_init(&pl->state_mutex);
+ INIT_WORK(&pl->resolve, phylink_resolve);
+
+@@ -2066,6 +2082,7 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
+ dev_name(&phy->mdio.dev), phy->drv->name, irq_str);
+ kfree(irq_str);
+
++ mutex_lock(&pl->phydev_mutex);
+ mutex_lock(&phy->lock);
+ mutex_lock(&pl->state_mutex);
+ pl->phydev = phy;
+@@ -2111,6 +2128,7 @@ static int phylink_bringup_phy(struct phylink *pl, struct phy_device *phy,
+
+ mutex_unlock(&pl->state_mutex);
+ mutex_unlock(&phy->lock);
++ mutex_unlock(&pl->phydev_mutex);
+
+ phylink_dbg(pl,
+ "phy: %s setting supported %*pb advertising %*pb\n",
+@@ -2289,6 +2307,7 @@ void phylink_disconnect_phy(struct phylink *pl)
+
+ ASSERT_RTNL();
+
++ mutex_lock(&pl->phydev_mutex);
+ phy = pl->phydev;
+ if (phy) {
+ mutex_lock(&phy->lock);
+@@ -2298,8 +2317,11 @@ void phylink_disconnect_phy(struct phylink *pl)
+ pl->mac_tx_clk_stop = false;
+ mutex_unlock(&pl->state_mutex);
+ mutex_unlock(&phy->lock);
+- flush_work(&pl->resolve);
++ }
++ mutex_unlock(&pl->phydev_mutex);
+
++ if (phy) {
++ flush_work(&pl->resolve);
+ phy_disconnect(phy);
+ }
+ }
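
Editorial aside (grounded in the hunks above): the patch establishes a single lock order throughout phylink, pl->phydev_mutex, then phy->lock, then pl->state_mutex; taking them in any other order would risk an ABBA deadlock. The acquisition sequence used by phylink_resolve():

    mutex_lock(&pl->phydev_mutex);
    phy = pl->phydev;               /* stable while phydev_mutex is held */
    if (phy)
            mutex_lock(&phy->lock);
    mutex_lock(&pl->state_mutex);
    /* ... resolve link state ... */
    mutex_unlock(&pl->state_mutex);
    if (phy)
            mutex_unlock(&phy->lock);
    mutex_unlock(&pl->phydev_mutex);
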
+diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
+index 4bd286296da794..cebdf62ce3db9b 100644
+--- a/drivers/net/wireless/ath/ath12k/core.h
++++ b/drivers/net/wireless/ath/ath12k/core.h
+@@ -116,6 +116,7 @@ static inline u64 ath12k_le32hilo_to_u64(__le32 hi, __le32 lo)
+ enum ath12k_skb_flags {
+ ATH12K_SKB_HW_80211_ENCAP = BIT(0),
+ ATH12K_SKB_CIPHER_SET = BIT(1),
++ ATH12K_SKB_MLO_STA = BIT(2),
+ };
+
+ struct ath12k_skb_cb {
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 91f4e3aff74c38..6a0915a0c7aae3 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -3610,7 +3610,6 @@ ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
+ struct hal_rx_mon_ppdu_info *ppdu_info,
+ u32 uid)
+ {
+- struct ath12k_sta *ahsta;
+ struct ath12k_link_sta *arsta;
+ struct ath12k_rx_peer_stats *rx_stats = NULL;
+ struct hal_rx_user_status *user_stats = &ppdu_info->userstats[uid];
+@@ -3628,8 +3627,13 @@ ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
+ return;
+ }
+
+- ahsta = ath12k_sta_to_ahsta(peer->sta);
+- arsta = &ahsta->deflink;
++ arsta = ath12k_peer_get_link_sta(ar->ab, peer);
++ if (!arsta) {
++ ath12k_warn(ar->ab, "link sta not found on peer %pM id %d\n",
++ peer->addr, peer->peer_id);
++ return;
++ }
++
+ arsta->rssi_comb = ppdu_info->rssi_comb;
+ ewma_avg_rssi_add(&arsta->avg_rssi, ppdu_info->rssi_comb);
+ rx_stats = arsta->rx_stats;
+@@ -3742,7 +3746,6 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ struct dp_srng *mon_dst_ring;
+ struct hal_srng *srng;
+ struct dp_rxdma_mon_ring *buf_ring;
+- struct ath12k_sta *ahsta = NULL;
+ struct ath12k_link_sta *arsta;
+ struct ath12k_peer *peer;
+ struct sk_buff_head skb_list;
+@@ -3868,8 +3871,15 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int *budget,
+ }
+
+ if (ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_SU) {
+- ahsta = ath12k_sta_to_ahsta(peer->sta);
+- arsta = &ahsta->deflink;
++ arsta = ath12k_peer_get_link_sta(ar->ab, peer);
++ if (!arsta) {
++ ath12k_warn(ar->ab, "link sta not found on peer %pM id %d\n",
++ peer->addr, peer->peer_id);
++ spin_unlock_bh(&ab->base_lock);
++ rcu_read_unlock();
++ dev_kfree_skb_any(skb);
++ continue;
++ }
+ ath12k_dp_mon_rx_update_peer_su_stats(ar, arsta,
+ ppdu_info);
+ } else if ((ppdu_info->fc_valid) &&
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index bd95dc88f9b21f..e9137ffeb5ab48 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -1418,8 +1418,6 @@ ath12k_update_per_peer_tx_stats(struct ath12k *ar,
+ {
+ struct ath12k_base *ab = ar->ab;
+ struct ath12k_peer *peer;
+- struct ieee80211_sta *sta;
+- struct ath12k_sta *ahsta;
+ struct ath12k_link_sta *arsta;
+ struct htt_ppdu_stats_user_rate *user_rate;
+ struct ath12k_per_peer_tx_stats *peer_stats = &ar->peer_tx_stats;
+@@ -1500,9 +1498,12 @@ ath12k_update_per_peer_tx_stats(struct ath12k *ar,
+ return;
+ }
+
+- sta = peer->sta;
+- ahsta = ath12k_sta_to_ahsta(sta);
+- arsta = &ahsta->deflink;
++ arsta = ath12k_peer_get_link_sta(ab, peer);
++ if (!arsta) {
++ spin_unlock_bh(&ab->base_lock);
++ rcu_read_unlock();
++ return;
++ }
+
+ memset(&arsta->txrate, 0, sizeof(arsta->txrate));
+
+diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
+index ec77ad498b33a2..6791ae1d64e50f 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.c
++++ b/drivers/net/wireless/ath/ath12k/hw.c
+@@ -14,6 +14,7 @@
+ #include "hw.h"
+ #include "mhi.h"
+ #include "dp_rx.h"
++#include "peer.h"
+
+ static const guid_t wcn7850_uuid = GUID_INIT(0xf634f534, 0x6147, 0x11ec,
+ 0x90, 0xd6, 0x02, 0x42,
+@@ -49,6 +50,12 @@ static bool ath12k_dp_srng_is_comp_ring_qcn9274(int ring_num)
+ return false;
+ }
+
++static bool ath12k_is_frame_link_agnostic_qcn9274(struct ath12k_link_vif *arvif,
++ struct ieee80211_mgmt *mgmt)
++{
++ return ieee80211_is_action(mgmt->frame_control);
++}
++
+ static int ath12k_hw_mac_id_to_pdev_id_wcn7850(const struct ath12k_hw_params *hw,
+ int mac_id)
+ {
+@@ -74,6 +81,52 @@ static bool ath12k_dp_srng_is_comp_ring_wcn7850(int ring_num)
+ return false;
+ }
+
++static bool ath12k_is_addba_resp_action_code(struct ieee80211_mgmt *mgmt)
++{
++ if (!ieee80211_is_action(mgmt->frame_control))
++ return false;
++
++ if (mgmt->u.action.category != WLAN_CATEGORY_BACK)
++ return false;
++
++ if (mgmt->u.action.u.addba_resp.action_code != WLAN_ACTION_ADDBA_RESP)
++ return false;
++
++ return true;
++}
++
++static bool ath12k_is_frame_link_agnostic_wcn7850(struct ath12k_link_vif *arvif,
++ struct ieee80211_mgmt *mgmt)
++{
++ struct ieee80211_vif *vif = ath12k_ahvif_to_vif(arvif->ahvif);
++ struct ath12k_hw *ah = ath12k_ar_to_ah(arvif->ar);
++ struct ath12k_base *ab = arvif->ar->ab;
++ __le16 fc = mgmt->frame_control;
++
++ spin_lock_bh(&ab->base_lock);
++ if (!ath12k_peer_find_by_addr(ab, mgmt->da) &&
++ !ath12k_peer_ml_find(ah, mgmt->da)) {
++ spin_unlock_bh(&ab->base_lock);
++ return false;
++ }
++ spin_unlock_bh(&ab->base_lock);
++
++ if (vif->type == NL80211_IFTYPE_STATION)
++ return arvif->is_up &&
++ (vif->valid_links == vif->active_links) &&
++ !ieee80211_is_probe_req(fc) &&
++ !ieee80211_is_auth(fc) &&
++ !ieee80211_is_deauth(fc) &&
++ !ath12k_is_addba_resp_action_code(mgmt);
++
++ if (vif->type == NL80211_IFTYPE_AP)
++ return !(ieee80211_is_probe_resp(fc) || ieee80211_is_auth(fc) ||
++ ieee80211_is_assoc_resp(fc) || ieee80211_is_reassoc_resp(fc) ||
++ ath12k_is_addba_resp_action_code(mgmt));
++
++ return false;
++}
++
+ static const struct ath12k_hw_ops qcn9274_ops = {
+ .get_hw_mac_from_pdev_id = ath12k_hw_qcn9274_mac_from_pdev_id,
+ .mac_id_to_pdev_id = ath12k_hw_mac_id_to_pdev_id_qcn9274,
+@@ -81,6 +134,7 @@ static const struct ath12k_hw_ops qcn9274_ops = {
+ .rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_qcn9274,
+ .get_ring_selector = ath12k_hw_get_ring_selector_qcn9274,
+ .dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_qcn9274,
++ .is_frame_link_agnostic = ath12k_is_frame_link_agnostic_qcn9274,
+ };
+
+ static const struct ath12k_hw_ops wcn7850_ops = {
+@@ -90,6 +144,7 @@ static const struct ath12k_hw_ops wcn7850_ops = {
+ .rxdma_ring_sel_config = ath12k_dp_rxdma_ring_sel_config_wcn7850,
+ .get_ring_selector = ath12k_hw_get_ring_selector_wcn7850,
+ .dp_srng_is_tx_comp_ring = ath12k_dp_srng_is_comp_ring_wcn7850,
++ .is_frame_link_agnostic = ath12k_is_frame_link_agnostic_wcn7850,
+ };
+
+ #define ATH12K_TX_RING_MASK_0 0x1
+diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
+index 0a75bc5abfa241..9c69dd5a22afa4 100644
+--- a/drivers/net/wireless/ath/ath12k/hw.h
++++ b/drivers/net/wireless/ath/ath12k/hw.h
+@@ -246,6 +246,8 @@ struct ath12k_hw_ops {
+ int (*rxdma_ring_sel_config)(struct ath12k_base *ab);
+ u8 (*get_ring_selector)(struct sk_buff *skb);
+ bool (*dp_srng_is_tx_comp_ring)(int ring_num);
++ bool (*is_frame_link_agnostic)(struct ath12k_link_vif *arvif,
++ struct ieee80211_mgmt *mgmt);
+ };
+
+ static inline
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index a885dd168a372a..708dc3dd4347ad 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -3650,12 +3650,68 @@ static int ath12k_mac_fils_discovery(struct ath12k_link_vif *arvif,
+ return ret;
+ }
+
++static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
++{
++ struct ath12k *ar = arvif->ar;
++ struct ieee80211_vif *vif = arvif->ahvif->vif;
++ struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
++ enum wmi_sta_powersave_param param;
++ struct ieee80211_bss_conf *info;
++ enum wmi_sta_ps_mode psmode;
++ int ret;
++ int timeout;
++ bool enable_ps;
++
++ lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
++
++ if (vif->type != NL80211_IFTYPE_STATION)
++ return;
++
++ enable_ps = arvif->ahvif->ps;
++ if (enable_ps) {
++ psmode = WMI_STA_PS_MODE_ENABLED;
++ param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
++
++ timeout = conf->dynamic_ps_timeout;
++ if (timeout == 0) {
++ info = ath12k_mac_get_link_bss_conf(arvif);
++ if (!info) {
++ ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
++ vif->addr, arvif->link_id);
++ return;
++ }
++
++ /* firmware doesn't like 0 */
++ timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
++ }
++
++ ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
++ timeout);
++ if (ret) {
++ ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
++ arvif->vdev_id, ret);
++ return;
++ }
++ } else {
++ psmode = WMI_STA_PS_MODE_DISABLED;
++ }
++
++ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
++ arvif->vdev_id, psmode ? "enable" : "disable");
++
++ ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
++ if (ret)
++ ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
++ psmode, arvif->vdev_id, ret);
++}
++
+ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ u64 changed)
+ {
+ struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif);
+ unsigned long links = ahvif->links_map;
++ struct ieee80211_vif_cfg *vif_cfg;
+ struct ieee80211_bss_conf *info;
+ struct ath12k_link_vif *arvif;
+ struct ieee80211_sta *sta;
+@@ -3719,61 +3775,24 @@ static void ath12k_mac_op_vif_cfg_changed(struct ieee80211_hw *hw,
+ }
+ }
+ }
+-}
+
+-static void ath12k_mac_vif_setup_ps(struct ath12k_link_vif *arvif)
+-{
+- struct ath12k *ar = arvif->ar;
+- struct ieee80211_vif *vif = arvif->ahvif->vif;
+- struct ieee80211_conf *conf = &ath12k_ar_to_hw(ar)->conf;
+- enum wmi_sta_powersave_param param;
+- struct ieee80211_bss_conf *info;
+- enum wmi_sta_ps_mode psmode;
+- int ret;
+- int timeout;
+- bool enable_ps;
++ if (changed & BSS_CHANGED_PS) {
++ links = ahvif->links_map;
++ vif_cfg = &vif->cfg;
+
+- lockdep_assert_wiphy(ath12k_ar_to_hw(ar)->wiphy);
+-
+- if (vif->type != NL80211_IFTYPE_STATION)
+- return;
++ for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) {
++ arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]);
++ if (!arvif || !arvif->ar)
++ continue;
+
+- enable_ps = arvif->ahvif->ps;
+- if (enable_ps) {
+- psmode = WMI_STA_PS_MODE_ENABLED;
+- param = WMI_STA_PS_PARAM_INACTIVITY_TIME;
++ ar = arvif->ar;
+
+- timeout = conf->dynamic_ps_timeout;
+- if (timeout == 0) {
+- info = ath12k_mac_get_link_bss_conf(arvif);
+- if (!info) {
+- ath12k_warn(ar->ab, "unable to access bss link conf in setup ps for vif %pM link %u\n",
+- vif->addr, arvif->link_id);
+- return;
++ if (ar->ab->hw_params->supports_sta_ps) {
++ ahvif->ps = vif_cfg->ps;
++ ath12k_mac_vif_setup_ps(arvif);
+ }
+-
+- /* firmware doesn't like 0 */
+- timeout = ieee80211_tu_to_usec(info->beacon_int) / 1000;
+- }
+-
+- ret = ath12k_wmi_set_sta_ps_param(ar, arvif->vdev_id, param,
+- timeout);
+- if (ret) {
+- ath12k_warn(ar->ab, "failed to set inactivity time for vdev %d: %i\n",
+- arvif->vdev_id, ret);
+- return;
+ }
+- } else {
+- psmode = WMI_STA_PS_MODE_DISABLED;
+ }
+-
+- ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac vdev %d psmode %s\n",
+- arvif->vdev_id, psmode ? "enable" : "disable");
+-
+- ret = ath12k_wmi_pdev_set_ps_mode(ar, arvif->vdev_id, psmode);
+- if (ret)
+- ath12k_warn(ar->ab, "failed to set sta power save mode %d for vdev %d: %d\n",
+- psmode, arvif->vdev_id, ret);
+ }
+
+ static bool ath12k_mac_supports_station_tpc(struct ath12k *ar,
+@@ -3795,7 +3814,6 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ {
+ struct ath12k_vif *ahvif = arvif->ahvif;
+ struct ieee80211_vif *vif = ath12k_ahvif_to_vif(ahvif);
+- struct ieee80211_vif_cfg *vif_cfg = &vif->cfg;
+ struct cfg80211_chan_def def;
+ u32 param_id, param_value;
+ enum nl80211_band band;
+@@ -4069,12 +4087,6 @@ static void ath12k_mac_bss_info_changed(struct ath12k *ar,
+ }
+
+ ath12k_mac_fils_discovery(arvif, info);
+-
+- if (changed & BSS_CHANGED_PS &&
+- ar->ab->hw_params->supports_sta_ps) {
+- ahvif->ps = vif_cfg->ps;
+- ath12k_mac_vif_setup_ps(arvif);
+- }
+ }
+
+ static struct ath12k_vif_cache *ath12k_ahvif_get_link_cache(struct ath12k_vif *ahvif,
+@@ -7673,7 +7685,7 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_link_vif *arv
+
+ skb_cb->paddr = paddr;
+
+- ret = ath12k_wmi_mgmt_send(ar, arvif->vdev_id, buf_id, skb);
++ ret = ath12k_wmi_mgmt_send(arvif, buf_id, skb);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to send mgmt frame: %d\n", ret);
+ goto err_unmap_buf;
+@@ -7985,6 +7997,9 @@ static void ath12k_mac_op_tx(struct ieee80211_hw *hw,
+
+ skb_cb->flags |= ATH12K_SKB_HW_80211_ENCAP;
+ } else if (ieee80211_is_mgmt(hdr->frame_control)) {
++ if (sta && sta->mlo)
++ skb_cb->flags |= ATH12K_SKB_MLO_STA;
++
+ ret = ath12k_mac_mgmt_tx(ar, skb, is_prb_rsp);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to queue management frame %d\n",
+diff --git a/drivers/net/wireless/ath/ath12k/peer.c b/drivers/net/wireless/ath/ath12k/peer.c
+index ec7236bbccc0fe..eb7aeff0149038 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.c
++++ b/drivers/net/wireless/ath/ath12k/peer.c
+@@ -8,7 +8,7 @@
+ #include "peer.h"
+ #include "debug.h"
+
+-static struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah, const u8 *addr)
++struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah, const u8 *addr)
+ {
+ struct ath12k_ml_peer *ml_peer;
+
+diff --git a/drivers/net/wireless/ath/ath12k/peer.h b/drivers/net/wireless/ath/ath12k/peer.h
+index f3a5e054d2b556..44afc0b7dd53ea 100644
+--- a/drivers/net/wireless/ath/ath12k/peer.h
++++ b/drivers/net/wireless/ath/ath12k/peer.h
+@@ -91,5 +91,33 @@ struct ath12k_peer *ath12k_peer_find_by_ast(struct ath12k_base *ab, int ast_hash
+ int ath12k_peer_ml_create(struct ath12k_hw *ah, struct ieee80211_sta *sta);
+ int ath12k_peer_ml_delete(struct ath12k_hw *ah, struct ieee80211_sta *sta);
+ int ath12k_peer_mlo_link_peers_delete(struct ath12k_vif *ahvif, struct ath12k_sta *ahsta);
++struct ath12k_ml_peer *ath12k_peer_ml_find(struct ath12k_hw *ah,
++ const u8 *addr);
++static inline
++struct ath12k_link_sta *ath12k_peer_get_link_sta(struct ath12k_base *ab,
++ struct ath12k_peer *peer)
++{
++ struct ath12k_sta *ahsta;
++ struct ath12k_link_sta *arsta;
++
++ if (!peer->sta)
++ return NULL;
++
++ ahsta = ath12k_sta_to_ahsta(peer->sta);
++ if (peer->ml_id & ATH12K_PEER_ML_ID_VALID) {
++ if (!(ahsta->links_map & BIT(peer->link_id))) {
++ ath12k_warn(ab, "peer %pM id %d link_id %d can't found in STA link_map 0x%x\n",
++ peer->addr, peer->peer_id, peer->link_id,
++ ahsta->links_map);
++ return NULL;
++ }
++ arsta = rcu_dereference(ahsta->link[peer->link_id]);
++ if (!arsta)
++ return NULL;
++ } else {
++ arsta = &ahsta->deflink;
++ }
++ return arsta;
++}
+
+ #endif /* _PEER_H_ */
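
The new ath12k_peer_get_link_sta() helper dereferences the per-link station pointer only after confirming the peer's link_id is set in the station's links_map. A minimal userspace sketch of that bitmap guard follows; BIT() is modeled locally and the names are illustrative, not kernel API:

    #include <stdio.h>

    /* BIT() modeled locally; names illustrative, not kernel API. */
    #define BIT(n) (1UL << (n))

    static int link_is_valid(unsigned long links_map, unsigned int link_id)
    {
        return (links_map & BIT(link_id)) != 0;
    }

    int main(void)
    {
        unsigned long links_map = BIT(0) | BIT(2);  /* links 0 and 2 active */

        printf("link 2 valid: %d\n", link_is_valid(links_map, 2)); /* 1 */
        printf("link 1 valid: %d\n", link_is_valid(links_map, 1)); /* 0 */
        return 0;
    }
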
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index eac5d48cade663..d740326079e1d7 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -782,20 +782,46 @@ struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_ab, u32 len)
+ return skb;
+ }
+
+-int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
++int ath12k_wmi_mgmt_send(struct ath12k_link_vif *arvif, u32 buf_id,
+ struct sk_buff *frame)
+ {
++ struct ath12k *ar = arvif->ar;
+ struct ath12k_wmi_pdev *wmi = ar->wmi;
+ struct wmi_mgmt_send_cmd *cmd;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(frame);
+- struct wmi_tlv *frame_tlv;
++ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)frame->data;
++ struct ieee80211_vif *vif = ath12k_ahvif_to_vif(arvif->ahvif);
++ int cmd_len = sizeof(struct ath12k_wmi_mgmt_send_tx_params);
++ struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)hdr;
++ struct ath12k_wmi_mlo_mgmt_send_params *ml_params;
++ struct ath12k_base *ab = ar->ab;
++ struct wmi_tlv *frame_tlv, *tlv;
++ struct ath12k_skb_cb *skb_cb;
++ u32 buf_len, buf_len_aligned;
++ u32 vdev_id = arvif->vdev_id;
++ bool link_agnostic = false;
+ struct sk_buff *skb;
+- u32 buf_len;
+ int ret, len;
++ void *ptr;
+
+ buf_len = min_t(int, frame->len, WMI_MGMT_SEND_DOWNLD_LEN);
+
+- len = sizeof(*cmd) + sizeof(*frame_tlv) + roundup(buf_len, 4);
++ buf_len_aligned = roundup(buf_len, sizeof(u32));
++
++ len = sizeof(*cmd) + sizeof(*frame_tlv) + buf_len_aligned;
++
++ if (ieee80211_vif_is_mld(vif)) {
++ skb_cb = ATH12K_SKB_CB(frame);
++ if ((skb_cb->flags & ATH12K_SKB_MLO_STA) &&
++ ab->hw_params->hw_ops->is_frame_link_agnostic &&
++ ab->hw_params->hw_ops->is_frame_link_agnostic(arvif, mgmt)) {
++ len += cmd_len + TLV_HDR_SIZE + sizeof(*ml_params);
++ ath12k_generic_dbg(ATH12K_DBG_MGMT,
++ "Sending Mgmt Frame fc 0x%0x as link agnostic",
++ mgmt->frame_control);
++ link_agnostic = true;
++ }
++ }
+
+ skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
+ if (!skb)
+@@ -814,10 +840,32 @@ int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
+ cmd->tx_params_valid = 0;
+
+ frame_tlv = (struct wmi_tlv *)(skb->data + sizeof(*cmd));
+- frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len);
++ frame_tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, buf_len_aligned);
+
+ memcpy(frame_tlv->value, frame->data, buf_len);
+
++ if (!link_agnostic)
++ goto send;
++
++ ptr = skb->data + sizeof(*cmd) + sizeof(*frame_tlv) + buf_len_aligned;
++
++ tlv = ptr;
++
++ /* Tx params not used currently */
++ tlv->header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_TX_SEND_PARAMS, cmd_len);
++ ptr += cmd_len;
++
++ tlv = ptr;
++ tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, sizeof(*ml_params));
++ ptr += TLV_HDR_SIZE;
++
++ ml_params = ptr;
++ ml_params->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_MLO_TX_SEND_PARAMS,
++ sizeof(*ml_params));
++
++ ml_params->hw_link_id = cpu_to_le32(WMI_MGMT_LINK_AGNOSTIC_ID);
++
++send:
+ ret = ath12k_wmi_cmd_send(wmi, skb, WMI_MGMT_TX_SEND_CMDID);
+ if (ret) {
+ ath12k_warn(ar->ab,
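
The reworked mgmt send sizes the frame TLV with buf_len_aligned, the payload length rounded up to 4 bytes, so the optional TX-params and MLO TLVs appended for link-agnostic transmission start on a 32-bit boundary. A small sketch of that alignment arithmetic, assuming the usual roundup() definition:

    #include <stdint.h>
    #include <stdio.h>

    /* roundup() as in include/linux/math.h; exact for these operands */
    #define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))

    int main(void)
    {
        uint32_t buf_len = 61;                  /* odd frame length */
        uint32_t buf_len_aligned = roundup(buf_len, (uint32_t)sizeof(uint32_t));

        /* The skb is sized and the TLV header written with the aligned
         * length so any trailing TLVs start on a 4-byte boundary. */
        printf("buf_len=%u aligned=%u\n", buf_len, buf_len_aligned); /* 61 -> 64 */
        return 0;
    }
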
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 8627154f1680fa..6dbcedcf081759 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3963,6 +3963,7 @@ struct wmi_scan_chan_list_cmd {
+ } __packed;
+
+ #define WMI_MGMT_SEND_DOWNLD_LEN 64
++#define WMI_MGMT_LINK_AGNOSTIC_ID 0xFFFFFFFF
+
+ #define WMI_TX_PARAMS_DWORD0_POWER GENMASK(7, 0)
+ #define WMI_TX_PARAMS_DWORD0_MCS_MASK GENMASK(19, 8)
+@@ -3988,7 +3989,18 @@ struct wmi_mgmt_send_cmd {
+
+ /* This TLV is followed by struct wmi_mgmt_frame */
+
+- /* Followed by struct wmi_mgmt_send_params */
++ /* Followed by struct ath12k_wmi_mlo_mgmt_send_params */
++} __packed;
++
++struct ath12k_wmi_mlo_mgmt_send_params {
++ __le32 tlv_header;
++ __le32 hw_link_id;
++} __packed;
++
++struct ath12k_wmi_mgmt_send_tx_params {
++ __le32 tlv_header;
++ __le32 tx_param_dword0;
++ __le32 tx_param_dword1;
+ } __packed;
+
+ struct wmi_sta_powersave_mode_cmd {
+@@ -6183,7 +6195,7 @@ void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
+ int ath12k_wmi_cmd_send(struct ath12k_wmi_pdev *wmi, struct sk_buff *skb,
+ u32 cmd_id);
+ struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_sc, u32 len);
+-int ath12k_wmi_mgmt_send(struct ath12k *ar, u32 vdev_id, u32 buf_id,
++int ath12k_wmi_mgmt_send(struct ath12k_link_vif *arvif, u32 buf_id,
+ struct sk_buff *frame);
+ int ath12k_wmi_p2p_go_bcn_ie(struct ath12k *ar, u32 vdev_id,
+ const u8 *p2p_ie);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 4e47ccb43bd86c..edd99d71016cb1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -124,13 +124,13 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x0082, 0x1304, iwl6005_mac_cfg)},/* low 5GHz active */
+ {IWL_PCI_DEVICE(0x0082, 0x1305, iwl6005_mac_cfg)},/* high 5GHz active */
+
+-/* 6x30 Series */
+- {IWL_PCI_DEVICE(0x008A, 0x5305, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x008A, 0x5307, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x008A, 0x5325, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x008A, 0x5327, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x008B, 0x5315, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x008B, 0x5317, iwl1000_mac_cfg)},
++/* 1030/6x30 Series */
++ {IWL_PCI_DEVICE(0x008A, 0x5305, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x008A, 0x5307, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x008A, 0x5325, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x008A, 0x5327, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x008B, 0x5315, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x008B, 0x5317, iwl6030_mac_cfg)},
+ {IWL_PCI_DEVICE(0x0090, 0x5211, iwl6030_mac_cfg)},
+ {IWL_PCI_DEVICE(0x0090, 0x5215, iwl6030_mac_cfg)},
+ {IWL_PCI_DEVICE(0x0090, 0x5216, iwl6030_mac_cfg)},
+@@ -181,12 +181,12 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0x08AE, 0x1027, iwl1000_mac_cfg)},
+
+ /* 130 Series WiFi */
+- {IWL_PCI_DEVICE(0x0896, 0x5005, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x0896, 0x5007, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x0897, 0x5015, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x0897, 0x5017, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x0896, 0x5025, iwl1000_mac_cfg)},
+- {IWL_PCI_DEVICE(0x0896, 0x5027, iwl1000_mac_cfg)},
++ {IWL_PCI_DEVICE(0x0896, 0x5005, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x0896, 0x5007, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x0897, 0x5015, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x0897, 0x5017, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x0896, 0x5025, iwl6030_mac_cfg)},
++ {IWL_PCI_DEVICE(0x0896, 0x5027, iwl6030_mac_cfg)},
+
+ /* 2x00 Series */
+ {IWL_PCI_DEVICE(0x0890, 0x4022, iwl2000_mac_cfg)},
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index a4a2bac4f4b279..2f8d0223c1a6de 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -1168,12 +1168,6 @@ static void __iomem *mvebu_pcie_map_registers(struct platform_device *pdev,
+ return devm_ioremap_resource(&pdev->dev, &port->regs);
+ }
+
+-#define DT_FLAGS_TO_TYPE(flags) (((flags) >> 24) & 0x03)
+-#define DT_TYPE_IO 0x1
+-#define DT_TYPE_MEM32 0x2
+-#define DT_CPUADDR_TO_TARGET(cpuaddr) (((cpuaddr) >> 56) & 0xFF)
+-#define DT_CPUADDR_TO_ATTR(cpuaddr) (((cpuaddr) >> 48) & 0xFF)
+-
+ static int mvebu_get_tgt_attr(struct device_node *np, int devfn,
+ unsigned long type,
+ unsigned int *tgt,
+@@ -1189,19 +1183,12 @@ static int mvebu_get_tgt_attr(struct device_node *np, int devfn,
+ return -EINVAL;
+
+ for_each_of_range(&parser, &range) {
+- unsigned long rtype;
+ u32 slot = upper_32_bits(range.bus_addr);
+
+- if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_IO)
+- rtype = IORESOURCE_IO;
+- else if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_MEM32)
+- rtype = IORESOURCE_MEM;
+- else
+- continue;
+-
+- if (slot == PCI_SLOT(devfn) && type == rtype) {
+- *tgt = DT_CPUADDR_TO_TARGET(range.cpu_addr);
+- *attr = DT_CPUADDR_TO_ATTR(range.cpu_addr);
++ if (slot == PCI_SLOT(devfn) &&
++ type == (range.flags & IORESOURCE_TYPE_BITS)) {
++ *tgt = (range.parent_bus_addr >> 56) & 0xFF;
++ *attr = (range.parent_bus_addr >> 48) & 0xFF;
+ return 0;
+ }
+ }
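
The mvebu change drops the hand-rolled DT_* macros and instead matches range.flags against IORESOURCE_TYPE_BITS, pulling the MBus target and attribute out of the top bytes of parent_bus_addr. A hedged sketch of that decode, using a made-up example address:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode MBus target/attribute from the upper bytes of a parent bus
     * address, as mvebu_get_tgt_attr() now does. Example value is made up. */
    static void decode_mbus_window(uint64_t parent_bus_addr,
                                   unsigned int *tgt, unsigned int *attr)
    {
        *tgt  = (parent_bus_addr >> 56) & 0xFF;
        *attr = (parent_bus_addr >> 48) & 0xFF;
    }

    int main(void)
    {
        unsigned int tgt, attr;

        decode_mbus_window(0x04e0000000000000ULL, &tgt, &attr);
        printf("target=0x%02x attr=0x%02x\n", tgt, attr); /* 0x04 0xe0 */
        return 0;
    }
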
+diff --git a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+index d7493c2294ef23..3709fba42ebd85 100644
+--- a/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
++++ b/drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
+@@ -127,13 +127,13 @@ static int eusb2_repeater_init(struct phy *phy)
+ rptr->cfg->init_tbl[i].value);
+
+ /* Override registers from devicetree values */
+- if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
++ if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
+ regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, val);
+
+ if (!of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &val))
+ regmap_write(regmap, base + EUSB2_TUNE_HSDISC, val);
+
+- if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &val))
++ if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &val))
+ regmap_write(regmap, base + EUSB2_TUNE_IUSB2, val);
+
+ /* Wait for status OK */
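
The repeater fix restores the intended property-to-register pairing: the preemphasis property programs EUSB2_TUNE_USB2_PREEM and the amplitude property programs EUSB2_TUNE_IUSB2, which the old code had swapped. A tiny table-driven sketch of the corrected mapping, with the registers reduced to illustrative enum values:

    #include <stdio.h>

    /* Illustrative register identifiers; real offsets live in the driver. */
    enum tune_reg { TUNE_IUSB2, TUNE_HSDISC, TUNE_USB2_PREEM };

    struct tune_map {
        const char *prop;
        enum tune_reg reg;
    };

    /* Corrected mapping: each DT property lands in its matching register. */
    static const struct tune_map map[] = {
        { "qcom,tune-usb2-amplitude",  TUNE_IUSB2 },
        { "qcom,tune-usb2-disc-thres", TUNE_HSDISC },
        { "qcom,tune-usb2-preem",      TUNE_USB2_PREEM },
    };

    int main(void)
    {
        for (unsigned int i = 0; i < sizeof(map) / sizeof(map[0]); i++)
            printf("%-28s -> reg %d\n", map[i].prop, (int)map[i].reg);
        return 0;
    }
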
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+index 461b9e0af610a1..498f23c43aa139 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-pcie.c
+@@ -3064,6 +3064,14 @@ struct qmp_pcie {
+ struct clk_fixed_rate aux_clk_fixed;
+ };
+
++static bool qphy_checkbits(const void __iomem *base, u32 offset, u32 val)
++{
++ u32 reg;
++
++ reg = readl(base + offset);
++ return (reg & val) == val;
++}
++
+ static inline void qphy_setbits(void __iomem *base, u32 offset, u32 val)
+ {
+ u32 reg;
+@@ -4332,16 +4340,21 @@ static int qmp_pcie_init(struct phy *phy)
+ struct qmp_pcie *qmp = phy_get_drvdata(phy);
+ const struct qmp_phy_cfg *cfg = qmp->cfg;
+ void __iomem *pcs = qmp->pcs;
+- bool phy_initialized = !!(readl(pcs + cfg->regs[QPHY_START_CTRL]));
+ int ret;
+
+- qmp->skip_init = qmp->nocsr_reset && phy_initialized;
+ /*
+- * We need to check the existence of init sequences in two cases:
+- * 1. The PHY doesn't support no_csr reset.
+- * 2. The PHY supports no_csr reset but isn't initialized by bootloader.
+- * As we can't skip init in these two cases.
++ * We can skip PHY initialization if all of the following conditions
++ * are met:
++ * 1. The PHY supports the nocsr_reset that preserves the PHY config.
++ * 2. The PHY was started (and not powered down again) by the
++ * bootloader, with all of the expected bits set correctly.
++ * In this case, we can continue without having the init sequence
++ * defined in the driver.
+ */
++ qmp->skip_init = qmp->nocsr_reset &&
++ qphy_checkbits(pcs, cfg->regs[QPHY_START_CTRL], SERDES_START | PCS_START) &&
++ qphy_checkbits(pcs, cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL], cfg->pwrdn_ctrl);
++
+ if (!qmp->skip_init && !cfg->tbls.serdes_num) {
+ dev_err(qmp->dev, "Init sequence not available\n");
+ return -ENODATA;
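
qphy_checkbits() succeeds only when every bit in the mask is set, so skip_init now additionally requires that the bootloader left both the start bits and the power-down control in their expected state. A userspace model of the all-bits-set test; the bit values are placeholders, not the real register layout:

    #include <stdint.h>
    #include <stdio.h>

    /* All-bits-set test as in qphy_checkbits(); bit values are placeholders. */
    static int checkbits(uint32_t reg, uint32_t mask)
    {
        return (reg & mask) == mask;
    }

    int main(void)
    {
        uint32_t start_mask = 0x1 | 0x2;   /* stand-ins for SERDES/PCS start */

        printf("%d\n", checkbits(0x3, start_mask)); /* 1: both bits set */
        printf("%d\n", checkbits(0x1, start_mask)); /* 0: one bit missing */
        return 0;
    }
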
+diff --git a/drivers/phy/tegra/xusb-tegra210.c b/drivers/phy/tegra/xusb-tegra210.c
+index ebc8a7e21a3181..3409924498e9cf 100644
+--- a/drivers/phy/tegra/xusb-tegra210.c
++++ b/drivers/phy/tegra/xusb-tegra210.c
+@@ -3164,18 +3164,22 @@ tegra210_xusb_padctl_probe(struct device *dev,
+ }
+
+ pdev = of_find_device_by_node(np);
++ of_node_put(np);
+ if (!pdev) {
+ dev_warn(dev, "PMC device is not available\n");
+ goto out;
+ }
+
+- if (!platform_get_drvdata(pdev))
++ if (!platform_get_drvdata(pdev)) {
++ put_device(&pdev->dev);
+ return ERR_PTR(-EPROBE_DEFER);
++ }
+
+ padctl->regmap = dev_get_regmap(&pdev->dev, "usb_sleepwalk");
+ if (!padctl->regmap)
+ dev_info(dev, "failed to find PMC regmap\n");
+
++ put_device(&pdev->dev);
+ out:
+ return &padctl->base;
+ }
+diff --git a/drivers/phy/ti/phy-omap-usb2.c b/drivers/phy/ti/phy-omap-usb2.c
+index c1a0ef979142ce..c444bb2530ca29 100644
+--- a/drivers/phy/ti/phy-omap-usb2.c
++++ b/drivers/phy/ti/phy-omap-usb2.c
+@@ -363,6 +363,13 @@ static void omap_usb2_init_errata(struct omap_usb *phy)
+ phy->flags |= OMAP_USB2_DISABLE_CHRG_DET;
+ }
+
++static void omap_usb2_put_device(void *_dev)
++{
++ struct device *dev = _dev;
++
++ put_device(dev);
++}
++
+ static int omap_usb2_probe(struct platform_device *pdev)
+ {
+ struct omap_usb *phy;
+@@ -373,6 +380,7 @@ static int omap_usb2_probe(struct platform_device *pdev)
+ struct device_node *control_node;
+ struct platform_device *control_pdev;
+ const struct usb_phy_data *phy_data;
++ int ret;
+
+ phy_data = device_get_match_data(&pdev->dev);
+ if (!phy_data)
+@@ -423,6 +431,11 @@ static int omap_usb2_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+ phy->control_dev = &control_pdev->dev;
++
++ ret = devm_add_action_or_reset(&pdev->dev, omap_usb2_put_device,
++ phy->control_dev);
++ if (ret)
++ return ret;
+ } else {
+ if (of_property_read_u32_index(node,
+ "syscon-phy-power", 1,
+diff --git a/drivers/phy/ti/phy-ti-pipe3.c b/drivers/phy/ti/phy-ti-pipe3.c
+index da2cbacb982c6b..ae764d6524c99a 100644
+--- a/drivers/phy/ti/phy-ti-pipe3.c
++++ b/drivers/phy/ti/phy-ti-pipe3.c
+@@ -667,12 +667,20 @@ static int ti_pipe3_get_clk(struct ti_pipe3 *phy)
+ return 0;
+ }
+
++static void ti_pipe3_put_device(void *_dev)
++{
++ struct device *dev = _dev;
++
++ put_device(dev);
++}
++
+ static int ti_pipe3_get_sysctrl(struct ti_pipe3 *phy)
+ {
+ struct device *dev = phy->dev;
+ struct device_node *node = dev->of_node;
+ struct device_node *control_node;
+ struct platform_device *control_pdev;
++ int ret;
+
+ phy->phy_power_syscon = syscon_regmap_lookup_by_phandle(node,
+ "syscon-phy-power");
+@@ -704,6 +712,11 @@ static int ti_pipe3_get_sysctrl(struct ti_pipe3 *phy)
+ }
+
+ phy->control_dev = &control_pdev->dev;
++
++ ret = devm_add_action_or_reset(dev, ti_pipe3_put_device,
++ phy->control_dev);
++ if (ret)
++ return ret;
+ }
+
+ if (phy->mode == PIPE3_MODE_PCIE) {
+diff --git a/drivers/regulator/sy7636a-regulator.c b/drivers/regulator/sy7636a-regulator.c
+index d1e7ba1fb3e1af..27e3d939b7bb9e 100644
+--- a/drivers/regulator/sy7636a-regulator.c
++++ b/drivers/regulator/sy7636a-regulator.c
+@@ -83,9 +83,11 @@ static int sy7636a_regulator_probe(struct platform_device *pdev)
+ if (!regmap)
+ return -EPROBE_DEFER;
+
+- gdp = devm_gpiod_get(pdev->dev.parent, "epd-pwr-good", GPIOD_IN);
++ device_set_of_node_from_dev(&pdev->dev, pdev->dev.parent);
++
++ gdp = devm_gpiod_get(&pdev->dev, "epd-pwr-good", GPIOD_IN);
+ if (IS_ERR(gdp)) {
+- dev_err(pdev->dev.parent, "Power good GPIO fault %ld\n", PTR_ERR(gdp));
++ dev_err(&pdev->dev, "Power good GPIO fault %ld\n", PTR_ERR(gdp));
+ return PTR_ERR(gdp);
+ }
+
+@@ -105,7 +107,6 @@ static int sy7636a_regulator_probe(struct platform_device *pdev)
+ }
+
+ config.dev = &pdev->dev;
+- config.dev->of_node = pdev->dev.parent->of_node;
+ config.regmap = regmap;
+
+ rdev = devm_regulator_register(&pdev->dev, &desc, &config);
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index cd1f657f782df2..13c663a154c4e8 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -543,10 +543,10 @@ static ssize_t hvc_write(struct tty_struct *tty, const u8 *buf, size_t count)
+ }
+
+ /*
+- * Racy, but harmless, kick thread if there is still pending data.
++ * Kick thread to flush if there's still pending data
++ * or to wake up the write queue.
+ */
+- if (hp->n_outbuf)
+- hvc_kick();
++ hvc_kick();
+
+ return written;
+ }
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 5ea8aadb6e69c1..9056cb82456f14 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1177,17 +1177,6 @@ static int sc16is7xx_startup(struct uart_port *port)
+ sc16is7xx_port_write(port, SC16IS7XX_FCR_REG,
+ SC16IS7XX_FCR_FIFO_BIT);
+
+- /* Enable EFR */
+- sc16is7xx_port_write(port, SC16IS7XX_LCR_REG,
+- SC16IS7XX_LCR_CONF_MODE_B);
+-
+- regcache_cache_bypass(one->regmap, true);
+-
+- /* Enable write access to enhanced features and internal clock div */
+- sc16is7xx_port_update(port, SC16IS7XX_EFR_REG,
+- SC16IS7XX_EFR_ENABLE_BIT,
+- SC16IS7XX_EFR_ENABLE_BIT);
+-
+ /* Enable TCR/TLR */
+ sc16is7xx_port_update(port, SC16IS7XX_MCR_REG,
+ SC16IS7XX_MCR_TCRTLR_BIT,
+@@ -1199,7 +1188,8 @@ static int sc16is7xx_startup(struct uart_port *port)
+ SC16IS7XX_TCR_RX_RESUME(24) |
+ SC16IS7XX_TCR_RX_HALT(48));
+
+- regcache_cache_bypass(one->regmap, false);
++ /* Disable TCR/TLR access */
++ sc16is7xx_port_update(port, SC16IS7XX_MCR_REG, SC16IS7XX_MCR_TCRTLR_BIT, 0);
+
+ /* Now, initialize the UART */
+ sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, SC16IS7XX_LCR_WORD_LEN_8);
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index 0a800ba53816a8..de16b02d857e07 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -1599,6 +1599,7 @@ static int f_midi2_create_card(struct f_midi2 *midi2)
+ strscpy(fb->info.name, ump_fb_name(b),
+ sizeof(fb->info.name));
+ }
++ snd_ump_update_group_attrs(ump);
+ }
+
+ for (i = 0; i < midi2->num_eps; i++) {
+@@ -1736,9 +1737,12 @@ static int f_midi2_create_usb_configs(struct f_midi2 *midi2,
+ case USB_SPEED_HIGH:
+ midi2_midi1_ep_out_desc.wMaxPacketSize = cpu_to_le16(512);
+ midi2_midi1_ep_in_desc.wMaxPacketSize = cpu_to_le16(512);
+- for (i = 0; i < midi2->num_eps; i++)
++ for (i = 0; i < midi2->num_eps; i++) {
+ midi2_midi2_ep_out_desc[i].wMaxPacketSize =
+ cpu_to_le16(512);
++ midi2_midi2_ep_in_desc[i].wMaxPacketSize =
++ cpu_to_le16(512);
++ }
+ fallthrough;
+ case USB_SPEED_FULL:
+ midi1_in_eps = midi2_midi1_ep_in_descs;
+@@ -1747,9 +1751,12 @@ static int f_midi2_create_usb_configs(struct f_midi2 *midi2,
+ case USB_SPEED_SUPER:
+ midi2_midi1_ep_out_desc.wMaxPacketSize = cpu_to_le16(1024);
+ midi2_midi1_ep_in_desc.wMaxPacketSize = cpu_to_le16(1024);
+- for (i = 0; i < midi2->num_eps; i++)
++ for (i = 0; i < midi2->num_eps; i++) {
+ midi2_midi2_ep_out_desc[i].wMaxPacketSize =
+ cpu_to_le16(1024);
++ midi2_midi2_ep_in_desc[i].wMaxPacketSize =
++ cpu_to_le16(1024);
++ }
+ midi1_in_eps = midi2_midi1_ep_in_ss_descs;
+ midi1_out_eps = midi2_midi1_ep_out_ss_descs;
+ break;
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index 27c9699365ab95..18cd4b925e5e63 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -765,8 +765,7 @@ static int dummy_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ if (!dum->driver)
+ return -ESHUTDOWN;
+
+- local_irq_save(flags);
+- spin_lock(&dum->lock);
++ spin_lock_irqsave(&dum->lock, flags);
+ list_for_each_entry(iter, &ep->queue, queue) {
+ if (&iter->req != _req)
+ continue;
+@@ -776,15 +775,16 @@ static int dummy_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ retval = 0;
+ break;
+ }
+- spin_unlock(&dum->lock);
+
+ if (retval == 0) {
+ dev_dbg(udc_dev(dum),
+ "dequeued req %p from %s, len %d buf %p\n",
+ req, _ep->name, _req->length, _req->buf);
++ spin_unlock(&dum->lock);
+ usb_gadget_giveback_request(_ep, _req);
++ spin_lock(&dum->lock);
+ }
+- local_irq_restore(flags);
++ spin_unlock_irqrestore(&dum->lock, flags);
+ return retval;
+ }
+
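
Besides converting to spin_lock_irqsave(), the dequeue path now drops the lock around usb_gadget_giveback_request(), since a gadget driver's completion handler is allowed to re-enter the UDC and take the same lock. A pthread-based sketch of why calling out with the lock held would self-deadlock; this is illustrative only, the kernel code uses a spinlock:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Completion callback that may take the lock again, as gadget drivers
     * are allowed to do from their complete() handler. */
    static void giveback(void)
    {
        pthread_mutex_lock(&lock);
        puts("callback ran with the lock available");
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        pthread_mutex_lock(&lock);
        /* ... unlink the request from the queue under the lock ... */
        pthread_mutex_unlock(&lock);   /* drop before calling out */
        giveback();                    /* would self-deadlock if still held */
        pthread_mutex_lock(&lock);
        /* ... finish up under the lock, then release it ... */
        pthread_mutex_unlock(&lock);
        return 0;
    }
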
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 06a2edb9e86ef7..63edf2d8f24501 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -101,13 +101,34 @@ static u32 xhci_dbc_populate_strings(struct dbc_str_descs *strings)
+ return string_length;
+ }
+
++static void xhci_dbc_init_ep_contexts(struct xhci_dbc *dbc)
++{
++ struct xhci_ep_ctx *ep_ctx;
++ unsigned int max_burst;
++ dma_addr_t deq;
++
++ max_burst = DBC_CTRL_MAXBURST(readl(&dbc->regs->control));
++
++ /* Populate bulk out endpoint context: */
++ ep_ctx = dbc_bulkout_ctx(dbc);
++ deq = dbc_bulkout_enq(dbc);
++ ep_ctx->ep_info = 0;
++ ep_ctx->ep_info2 = dbc_epctx_info2(BULK_OUT_EP, 1024, max_burst);
++ ep_ctx->deq = cpu_to_le64(deq | dbc->ring_out->cycle_state);
++
++ /* Populate bulk in endpoint context: */
++ ep_ctx = dbc_bulkin_ctx(dbc);
++ deq = dbc_bulkin_enq(dbc);
++ ep_ctx->ep_info = 0;
++ ep_ctx->ep_info2 = dbc_epctx_info2(BULK_IN_EP, 1024, max_burst);
++ ep_ctx->deq = cpu_to_le64(deq | dbc->ring_in->cycle_state);
++}
++
+ static void xhci_dbc_init_contexts(struct xhci_dbc *dbc, u32 string_length)
+ {
+ struct dbc_info_context *info;
+- struct xhci_ep_ctx *ep_ctx;
+ u32 dev_info;
+- dma_addr_t deq, dma;
+- unsigned int max_burst;
++ dma_addr_t dma;
+
+ if (!dbc)
+ return;
+@@ -121,20 +142,8 @@ static void xhci_dbc_init_contexts(struct xhci_dbc *dbc, u32 string_length)
+ info->serial = cpu_to_le64(dma + DBC_MAX_STRING_LENGTH * 3);
+ info->length = cpu_to_le32(string_length);
+
+- /* Populate bulk out endpoint context: */
+- ep_ctx = dbc_bulkout_ctx(dbc);
+- max_burst = DBC_CTRL_MAXBURST(readl(&dbc->regs->control));
+- deq = dbc_bulkout_enq(dbc);
+- ep_ctx->ep_info = 0;
+- ep_ctx->ep_info2 = dbc_epctx_info2(BULK_OUT_EP, 1024, max_burst);
+- ep_ctx->deq = cpu_to_le64(deq | dbc->ring_out->cycle_state);
+-
+- /* Populate bulk in endpoint context: */
+- ep_ctx = dbc_bulkin_ctx(dbc);
+- deq = dbc_bulkin_enq(dbc);
+- ep_ctx->ep_info = 0;
+- ep_ctx->ep_info2 = dbc_epctx_info2(BULK_IN_EP, 1024, max_burst);
+- ep_ctx->deq = cpu_to_le64(deq | dbc->ring_in->cycle_state);
++ /* Populate bulk in and out endpoint contexts: */
++ xhci_dbc_init_ep_contexts(dbc);
+
+ /* Set DbC context and info registers: */
+ lo_hi_writeq(dbc->ctx->dma, &dbc->regs->dccp);
+@@ -436,6 +445,42 @@ dbc_alloc_ctx(struct device *dev, gfp_t flags)
+ return ctx;
+ }
+
++static void xhci_dbc_ring_init(struct xhci_ring *ring)
++{
++ struct xhci_segment *seg = ring->first_seg;
++
++ /* clear all trbs on ring in case of old ring */
++ memset(seg->trbs, 0, TRB_SEGMENT_SIZE);
++
++ /* Only event ring does not use link TRB */
++ if (ring->type != TYPE_EVENT) {
++ union xhci_trb *trb = &seg->trbs[TRBS_PER_SEGMENT - 1];
++
++ trb->link.segment_ptr = cpu_to_le64(ring->first_seg->dma);
++ trb->link.control = cpu_to_le32(LINK_TOGGLE | TRB_TYPE(TRB_LINK));
++ }
++ xhci_initialize_ring_info(ring);
++}
++
++static int xhci_dbc_reinit_ep_rings(struct xhci_dbc *dbc)
++{
++ struct xhci_ring *in_ring = dbc->eps[BULK_IN].ring;
++ struct xhci_ring *out_ring = dbc->eps[BULK_OUT].ring;
++
++ if (!in_ring || !out_ring || !dbc->ctx) {
++ dev_warn(dbc->dev, "Can't re-init unallocated endpoints\n");
++ return -ENODEV;
++ }
++
++ xhci_dbc_ring_init(in_ring);
++ xhci_dbc_ring_init(out_ring);
++
++ /* set ep context enqueue, dequeue, and cycle to initial values */
++ xhci_dbc_init_ep_contexts(dbc);
++
++ return 0;
++}
++
+ static struct xhci_ring *
+ xhci_dbc_ring_alloc(struct device *dev, enum xhci_ring_type type, gfp_t flags)
+ {
+@@ -464,15 +509,10 @@ xhci_dbc_ring_alloc(struct device *dev, enum xhci_ring_type type, gfp_t flags)
+
+ seg->dma = dma;
+
+- /* Only event ring does not use link TRB */
+- if (type != TYPE_EVENT) {
+- union xhci_trb *trb = &seg->trbs[TRBS_PER_SEGMENT - 1];
+-
+- trb->link.segment_ptr = cpu_to_le64(dma);
+- trb->link.control = cpu_to_le32(LINK_TOGGLE | TRB_TYPE(TRB_LINK));
+- }
+ INIT_LIST_HEAD(&ring->td_list);
+- xhci_initialize_ring_info(ring);
++
++ xhci_dbc_ring_init(ring);
++
+ return ring;
+ dma_fail:
+ kfree(seg);
+@@ -864,7 +904,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ dev_info(dbc->dev, "DbC cable unplugged\n");
+ dbc->state = DS_ENABLED;
+ xhci_dbc_flush_requests(dbc);
+-
++ xhci_dbc_reinit_ep_rings(dbc);
+ return EVT_DISC;
+ }
+
+@@ -874,7 +914,7 @@ static enum evtreturn xhci_dbc_do_handle_events(struct xhci_dbc *dbc)
+ writel(portsc, &dbc->regs->portsc);
+ dbc->state = DS_ENABLED;
+ xhci_dbc_flush_requests(dbc);
+-
++ xhci_dbc_reinit_ep_rings(dbc);
+ return EVT_DISC;
+ }
+
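
On DbC disconnect the transfer rings are now wiped and re-linked so the enqueue/dequeue pointers and cycle state agree with the endpoint contexts rewritten by xhci_dbc_init_ep_contexts(). A toy model of that reset, with a flat array standing in for the TRB segment:

    #include <stdio.h>
    #include <string.h>

    /* Toy model of the DbC ring reset: wipe the TRBs, restore the link
     * entry, and return enqueue/dequeue/cycle to their initial values. */
    struct ring {
        unsigned int trbs[8];
        unsigned int enq, deq, cycle;
    };

    static void ring_reinit(struct ring *r)
    {
        memset(r->trbs, 0, sizeof(r->trbs));
        r->trbs[7] = 0xffffffffu;   /* stand-in for the link TRB */
        r->enq = 0;
        r->deq = 0;
        r->cycle = 1;
    }

    int main(void)
    {
        struct ring in = { .enq = 5, .deq = 3, .cycle = 0 };

        ring_reinit(&in);
        printf("enq=%u deq=%u cycle=%u\n", in.enq, in.deq, in.cycle); /* 0 0 1 */
        return 0;
    }
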
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 81eaad87a3d9d0..c4a6544aa10751 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -962,7 +962,7 @@ static void xhci_free_virt_devices_depth_first(struct xhci_hcd *xhci, int slot_i
+ out:
+ /* we are now at a leaf device */
+ xhci_debugfs_remove_slot(xhci, slot_id);
+- xhci_free_virt_device(xhci, vdev, slot_id);
++ xhci_free_virt_device(xhci, xhci->devs[slot_id], slot_id);
+ }
+
+ int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
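
The one-line xhci-mem change re-reads xhci->devs[slot_id] rather than using the vdev pointer captured earlier, because freeing child devices first can replace the slot's occupant. A userspace mock of the stale-pointer hazard it closes; all names here are hypothetical:

    #include <stdio.h>
    #include <stdlib.h>

    /* Mock of the stale-pointer hazard: a helper may free and replace the
     * slot's occupant, so tear-down must act on the current slot value. */
    struct vdev { int id; };

    static struct vdev *devs[4];

    static void helper_that_may_replace(int slot)
    {
        free(devs[slot]);
        devs[slot] = malloc(sizeof(struct vdev));
        devs[slot]->id = 100 + slot;
    }

    int main(void)
    {
        struct vdev *stale;

        devs[1] = malloc(sizeof(struct vdev));
        devs[1]->id = 1;

        stale = devs[1];                /* captured before the call */
        helper_that_may_replace(1);

        /* Wrong: free(stale) would be a double free / use-after-free.
         * Right: free whatever currently occupies the slot. */
        free(devs[1]);
        devs[1] = NULL;
        (void)stale;
        puts("freed the current slot occupant");
        return 0;
    }
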
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index e5cd3309342364..fc869b7f803f04 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1322,7 +1322,18 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(0) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff), /* Telit LE910C1-EUX (ECM) */
+ .driver_info = NCTRL(0) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1034, 0xff), /* Telit LE910C4-WWX (rmnet) */
++ .driver_info = RSVD(2) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1035, 0xff) }, /* Telit LE910C4-WWX (ECM) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1036, 0xff) }, /* Telit LE910C4-WWX */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1037, 0xff), /* Telit LE910C4-WWX (rmnet) */
++ .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1038, 0xff), /* Telit LE910C4-WWX (rmnet) */
++ .driver_info = NCTRL(0) | RSVD(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x103b, 0xff), /* Telit LE910C4-WWX */
++ .driver_info = NCTRL(0) | NCTRL(1) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x103c, 0xff), /* Telit LE910C4-WWX */
++ .driver_info = NCTRL(0) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
+@@ -1369,6 +1380,12 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */
+ .driver_info = RSVD(0) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1077, 0xff), /* Telit FN990A (rmnet + audio) */
++ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1078, 0xff), /* Telit FN990A (MBIM + audio) */
++ .driver_info = NCTRL(0) | RSVD(1) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1079, 0xff), /* Telit FN990A (RNDIS + audio) */
++ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990A (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990A (MBIM) */
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 1f6fdfaa34bf12..b2a568a5bc9b0b 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2426,17 +2426,21 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ case ADEV_NONE:
+ break;
+ case ADEV_NOTIFY_USB_AND_QUEUE_VDM:
+- WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, NULL));
+- typec_altmode_vdm(adev, p[0], &p[1], cnt);
++ if (rx_sop_type == TCPC_TX_SOP_PRIME) {
++ typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P, p[0], &p[1], cnt);
++ } else {
++ WARN_ON(typec_altmode_notify(adev, TYPEC_STATE_USB, NULL));
++ typec_altmode_vdm(adev, p[0], &p[1], cnt);
++ }
+ break;
+ case ADEV_QUEUE_VDM:
+- if (response_tx_sop_type == TCPC_TX_SOP_PRIME)
++ if (rx_sop_type == TCPC_TX_SOP_PRIME)
+ typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P, p[0], &p[1], cnt);
+ else
+ typec_altmode_vdm(adev, p[0], &p[1], cnt);
+ break;
+ case ADEV_QUEUE_VDM_SEND_EXIT_MODE_ON_FAIL:
+- if (response_tx_sop_type == TCPC_TX_SOP_PRIME) {
++ if (rx_sop_type == TCPC_TX_SOP_PRIME) {
+ if (typec_cable_altmode_vdm(adev, TYPEC_PLUG_SOP_P,
+ p[0], &p[1], cnt)) {
+ int svdm_version = typec_get_cable_svdm_version(
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index fac4000a5bcaef..b843db855f402f 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -110,6 +110,25 @@ struct btrfs_bio_ctrl {
+ * This is to avoid touching ranges covered by compression/inline.
+ */
+ unsigned long submit_bitmap;
++ struct readahead_control *ractl;
++
++ /*
++ * The start offset of the last used extent map by a read operation.
++ *
++ * This is for proper compressed read merge.
++ * U64_MAX means we are starting the read and have made no progress yet.
++ *
++ * The current btrfs_bio_is_contig() only uses disk_bytenr as
++ * the condition to check if the read can be merged with previous
++ * bio, which is not correct. E.g. two file extents pointing to the
++ * same extent but with different offset.
++ *
++ * So here we need to do extra checks to only merge reads that are
++ * covered by the same extent map.
++ * Just extent_map::start will be enough, as they are unique
++ * inside the same inode.
++ */
++ u64 last_em_start;
+ };
+
+ static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
+@@ -882,6 +901,25 @@ static struct extent_map *get_extent_map(struct btrfs_inode *inode,
+
+ return em;
+ }
++
++static void btrfs_readahead_expand(struct readahead_control *ractl,
++ const struct extent_map *em)
++{
++ const u64 ra_pos = readahead_pos(ractl);
++ const u64 ra_end = ra_pos + readahead_length(ractl);
++ const u64 em_end = em->start + em->ram_bytes;
++
++ /* No expansion for holes and inline extents. */
++ if (em->disk_bytenr > EXTENT_MAP_LAST_BYTE)
++ return;
++
++ ASSERT(em_end >= ra_pos,
++ "extent_map %llu %llu ends before current readahead position %llu",
++ em->start, em->len, ra_pos);
++ if (em_end > ra_end)
++ readahead_expand(ractl, ra_pos, em_end - ra_pos);
++}
++
+ /*
+ * basic readpage implementation. Locked extent state structs are inserted
+ * into the tree that are removed when the IO is done (by the end_io
+@@ -890,7 +928,7 @@ static struct extent_map *get_extent_map(struct btrfs_inode *inode,
+ * return 0 on success, otherwise return error
+ */
+ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+- struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
++ struct btrfs_bio_ctrl *bio_ctrl)
+ {
+ struct inode *inode = folio->mapping->host;
+ struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
+@@ -945,6 +983,16 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+
+ compress_type = btrfs_extent_map_compression(em);
+
++ /*
++ * Only expand readahead for extents which are already creating
++ * the pages anyway in add_ra_bio_pages, which is compressed
++ * extents in the non subpage case.
++ */
++ if (bio_ctrl->ractl &&
++ !btrfs_is_subpage(fs_info, folio) &&
++ compress_type != BTRFS_COMPRESS_NONE)
++ btrfs_readahead_expand(bio_ctrl->ractl, em);
++
+ if (compress_type != BTRFS_COMPRESS_NONE)
+ disk_bytenr = em->disk_bytenr;
+ else
+@@ -990,12 +1038,11 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ * non-optimal behavior (submitting 2 bios for the same extent).
+ */
+ if (compress_type != BTRFS_COMPRESS_NONE &&
+- prev_em_start && *prev_em_start != (u64)-1 &&
+- *prev_em_start != em->start)
++ bio_ctrl->last_em_start != U64_MAX &&
++ bio_ctrl->last_em_start != em->start)
+ force_bio_submit = true;
+
+- if (prev_em_start)
+- *prev_em_start = em->start;
++ bio_ctrl->last_em_start = em->start;
+
+ btrfs_free_extent_map(em);
+ em = NULL;
+@@ -1209,12 +1256,15 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
+ const u64 start = folio_pos(folio);
+ const u64 end = start + folio_size(folio) - 1;
+ struct extent_state *cached_state = NULL;
+- struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
++ struct btrfs_bio_ctrl bio_ctrl = {
++ .opf = REQ_OP_READ,
++ .last_em_start = U64_MAX,
++ };
+ struct extent_map *em_cached = NULL;
+ int ret;
+
+ lock_extents_for_read(inode, start, end, &cached_state);
+- ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
++ ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
+ btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
+
+ btrfs_free_extent_map(em_cached);
+@@ -2550,19 +2600,22 @@ int btrfs_writepages(struct address_space *mapping, struct writeback_control *wb
+
+ void btrfs_readahead(struct readahead_control *rac)
+ {
+- struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ | REQ_RAHEAD };
++ struct btrfs_bio_ctrl bio_ctrl = {
++ .opf = REQ_OP_READ | REQ_RAHEAD,
++ .ractl = rac,
++ .last_em_start = U64_MAX,
++ };
+ struct folio *folio;
+ struct btrfs_inode *inode = BTRFS_I(rac->mapping->host);
+ const u64 start = readahead_pos(rac);
+ const u64 end = start + readahead_length(rac) - 1;
+ struct extent_state *cached_state = NULL;
+ struct extent_map *em_cached = NULL;
+- u64 prev_em_start = (u64)-1;
+
+ lock_extents_for_read(inode, start, end, &cached_state);
+
+ while ((folio = readahead_folio(rac)) != NULL)
+- btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
++ btrfs_do_readpage(folio, &em_cached, &bio_ctrl);
+
+ btrfs_unlock_extent(&inode->io_tree, start, end, &cached_state);
+
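
Moving prev_em_start into the bio control structure as last_em_start keeps the compressed-read merge rule in one place: a read may merge into the current bio only while it stays within the same extent map. A small model of the force-submit condition:

    #include <stdint.h>
    #include <stdio.h>

    #define U64_MAX ((uint64_t)-1)

    /* Merge rule for compressed reads: only merge while the read stays
     * within one extent map, identified by its unique start offset. */
    static int must_force_submit(uint64_t last_em_start, uint64_t em_start,
                                 int compressed)
    {
        return compressed && last_em_start != U64_MAX &&
               last_em_start != em_start;
    }

    int main(void)
    {
        uint64_t last = U64_MAX;                          /* fresh bio_ctrl */

        printf("%d\n", must_force_submit(last, 4096, 1)); /* 0: first read */
        last = 4096;
        printf("%d\n", must_force_submit(last, 4096, 1)); /* 0: same extent */
        printf("%d\n", must_force_submit(last, 8192, 1)); /* 1: new extent */
        return 0;
    }
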
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index ffa5d6c1594050..e266a229484852 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5685,7 +5685,17 @@ static void btrfs_del_inode_from_root(struct btrfs_inode *inode)
+ bool empty = false;
+
+ xa_lock(&root->inodes);
+- entry = __xa_erase(&root->inodes, btrfs_ino(inode));
++ /*
++ * This btrfs_inode is being freed and has already been unhashed at this
++ * point. It's possible that another btrfs_inode has already been
++ * allocated for the same inode and inserted itself into the root, so
++ * don't delete it in that case.
++ *
++ * Note that this shouldn't need to allocate memory, so the gfp flags
++ * don't really matter.
++ */
++ entry = __xa_cmpxchg(&root->inodes, btrfs_ino(inode), inode, NULL,
++ GFP_ATOMIC);
+ if (entry == inode)
+ empty = xa_empty(&root->inodes);
+ xa_unlock(&root->inodes);
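
Using __xa_cmpxchg() instead of __xa_erase() means a dying inode clears its xarray slot only if the slot still points at itself; if a newer btrfs_inode for the same inode number has already claimed the index, it is left untouched. A minimal single-threaded analog of that conditional clear:

    #include <stdio.h>

    /* Single-threaded analog of __xa_cmpxchg(..., inode, NULL, ...): clear
     * the slot only if it still holds the inode being torn down. */
    static void *slot;

    static void *cmpxchg_slot(void *old, void *new_val)
    {
        void *cur = slot;

        if (cur == old)
            slot = new_val;
        return cur;
    }

    int main(void)
    {
        int dying_inode, newer_inode;

        slot = &newer_inode;    /* a newer inode already claimed the index */

        cmpxchg_slot(&dying_inode, NULL);
        printf("slot survived: %d\n", slot == &newer_inode); /* 1 */

        slot = &dying_inode;    /* normal case: slot still points at us */
        cmpxchg_slot(&dying_inode, NULL);
        printf("slot cleared: %d\n", slot == NULL);          /* 1 */
        return 0;
    }
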
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 68cbb2b1e3df8e..1fc99ba185164b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1483,6 +1483,7 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ struct btrfs_qgroup *qgroup;
+ LIST_HEAD(qgroup_list);
+ u64 num_bytes = src->excl;
++ u64 num_bytes_cmpr = src->excl_cmpr;
+ int ret = 0;
+
+ qgroup = find_qgroup_rb(fs_info, ref_root);
+@@ -1494,11 +1495,12 @@ static int __qgroup_excl_accounting(struct btrfs_fs_info *fs_info, u64 ref_root,
+ struct btrfs_qgroup_list *glist;
+
+ qgroup->rfer += sign * num_bytes;
+- qgroup->rfer_cmpr += sign * num_bytes;
++ qgroup->rfer_cmpr += sign * num_bytes_cmpr;
+
+ WARN_ON(sign < 0 && qgroup->excl < num_bytes);
++ WARN_ON(sign < 0 && qgroup->excl_cmpr < num_bytes_cmpr);
+ qgroup->excl += sign * num_bytes;
+- qgroup->excl_cmpr += sign * num_bytes;
++ qgroup->excl_cmpr += sign * num_bytes_cmpr;
+
+ if (sign > 0)
+ qgroup_rsv_add_by_qgroup(fs_info, qgroup, src);
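
The qgroup fix splits the delta into num_bytes (uncompressed) and num_bytes_cmpr (on-disk), so the *_cmpr counters no longer drift by the uncompressed size. A compact sketch of the corrected accounting:

    #include <stdint.h>
    #include <stdio.h>

    /* Corrected accounting: referenced/exclusive and their *_cmpr twins
     * each move by their own delta. */
    struct qgroup { int64_t rfer, rfer_cmpr, excl, excl_cmpr; };

    static void account(struct qgroup *qg, int sign,
                        int64_t bytes, int64_t bytes_cmpr)
    {
        qg->rfer      += sign * bytes;
        qg->rfer_cmpr += sign * bytes_cmpr;
        qg->excl      += sign * bytes;
        qg->excl_cmpr += sign * bytes_cmpr;
    }

    int main(void)
    {
        struct qgroup qg = { 0, 0, 0, 0 };

        account(&qg, 1, 16384, 4096);   /* 16K of data, 4K on disk */
        printf("excl=%lld excl_cmpr=%lld\n",
               (long long)qg.excl, (long long)qg.excl_cmpr); /* 16384 4096 */
        return 0;
    }
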
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 60a621b00c656d..1777d707fd7561 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -1264,7 +1264,9 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
+ 0,
+ gfp_flags);
+ if (IS_ERR(pages[index])) {
+- if (PTR_ERR(pages[index]) == -EINVAL) {
++ int err = PTR_ERR(pages[index]);
++
++ if (err == -EINVAL) {
+ pr_err_client(cl, "inode->i_blkbits=%hhu\n",
+ inode->i_blkbits);
+ }
+@@ -1273,7 +1275,7 @@ static inline int move_dirty_folio_in_page_array(struct address_space *mapping,
+ BUG_ON(ceph_wbc->locked_pages == 0);
+
+ pages[index] = NULL;
+- return PTR_ERR(pages[index]);
++ return err;
+ }
+ } else {
+ pages[index] = &folio->page;
+@@ -1687,6 +1689,7 @@ static int ceph_writepages_start(struct address_space *mapping,
+
+ process_folio_batch:
+ rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
++ ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
+ if (rc)
+ goto release_folios;
+
+@@ -1695,8 +1698,6 @@ static int ceph_writepages_start(struct address_space *mapping,
+ goto release_folios;
+
+ if (ceph_wbc.processed_in_fbatch) {
+- ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
+-
+ if (folio_batch_count(&ceph_wbc.fbatch) == 0 &&
+ ceph_wbc.locked_pages < ceph_wbc.max_pages) {
+ doutc(cl, "reached end fbatch, trying for more\n");
+diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c
+index fdd404fc81124d..f3fe786b4143d4 100644
+--- a/fs/ceph/debugfs.c
++++ b/fs/ceph/debugfs.c
+@@ -55,8 +55,6 @@ static int mdsc_show(struct seq_file *s, void *p)
+ struct ceph_mds_client *mdsc = fsc->mdsc;
+ struct ceph_mds_request *req;
+ struct rb_node *rp;
+- int pathlen = 0;
+- u64 pathbase;
+ char *path;
+
+ mutex_lock(&mdsc->mutex);
+@@ -81,8 +79,8 @@ static int mdsc_show(struct seq_file *s, void *p)
+ if (req->r_inode) {
+ seq_printf(s, " #%llx", ceph_ino(req->r_inode));
+ } else if (req->r_dentry) {
+- path = ceph_mdsc_build_path(mdsc, req->r_dentry, &pathlen,
+- &pathbase, 0);
++ struct ceph_path_info path_info;
++ path = ceph_mdsc_build_path(mdsc, req->r_dentry, &path_info, 0);
+ if (IS_ERR(path))
+ path = NULL;
+ spin_lock(&req->r_dentry->d_lock);
+@@ -91,7 +89,7 @@ static int mdsc_show(struct seq_file *s, void *p)
+ req->r_dentry,
+ path ? path : "");
+ spin_unlock(&req->r_dentry->d_lock);
+- ceph_mdsc_free_path(path, pathlen);
++ ceph_mdsc_free_path_info(&path_info);
+ } else if (req->r_path1) {
+ seq_printf(s, " #%llx/%s", req->r_ino1.ino,
+ req->r_path1);
+@@ -100,8 +98,8 @@ static int mdsc_show(struct seq_file *s, void *p)
+ }
+
+ if (req->r_old_dentry) {
+- path = ceph_mdsc_build_path(mdsc, req->r_old_dentry, &pathlen,
+- &pathbase, 0);
++ struct ceph_path_info path_info;
++ path = ceph_mdsc_build_path(mdsc, req->r_old_dentry, &path_info, 0);
+ if (IS_ERR(path))
+ path = NULL;
+ spin_lock(&req->r_old_dentry->d_lock);
+@@ -111,7 +109,7 @@ static int mdsc_show(struct seq_file *s, void *p)
+ req->r_old_dentry,
+ path ? path : "");
+ spin_unlock(&req->r_old_dentry->d_lock);
+- ceph_mdsc_free_path(path, pathlen);
++ ceph_mdsc_free_path_info(&path_info);
+ } else if (req->r_path2 && req->r_op != CEPH_MDS_OP_SYMLINK) {
+ if (req->r_ino2.ino)
+ seq_printf(s, " #%llx/%s", req->r_ino2.ino,
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index a321aa6d0ed226..e7af63bf77b469 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -1272,10 +1272,8 @@ static void ceph_async_unlink_cb(struct ceph_mds_client *mdsc,
+
+ /* If op failed, mark everyone involved for errors */
+ if (result) {
+- int pathlen = 0;
+- u64 base = 0;
+- char *path = ceph_mdsc_build_path(mdsc, dentry, &pathlen,
+- &base, 0);
++ struct ceph_path_info path_info = {0};
++ char *path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+
+ /* mark error on parent + clear complete */
+ mapping_set_error(req->r_parent->i_mapping, result);
+@@ -1289,8 +1287,8 @@ static void ceph_async_unlink_cb(struct ceph_mds_client *mdsc,
+ mapping_set_error(req->r_old_inode->i_mapping, result);
+
+ pr_warn_client(cl, "failure path=(%llx)%s result=%d!\n",
+- base, IS_ERR(path) ? "<<bad>>" : path, result);
+- ceph_mdsc_free_path(path, pathlen);
++ path_info.vino.ino, IS_ERR(path) ? "<<bad>>" : path, result);
++ ceph_mdsc_free_path_info(&path_info);
+ }
+ out:
+ iput(req->r_old_inode);
+@@ -1348,8 +1346,6 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
+ int err = -EROFS;
+ int op;
+ char *path;
+- int pathlen;
+- u64 pathbase;
+
+ if (ceph_snap(dir) == CEPH_SNAPDIR) {
+ /* rmdir .snap/foo is RMSNAP */
+@@ -1368,14 +1364,15 @@ static int ceph_unlink(struct inode *dir, struct dentry *dentry)
+ if (!dn) {
+ try_async = false;
+ } else {
+- path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
++ struct ceph_path_info path_info;
++ path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);
+ if (IS_ERR(path)) {
+ try_async = false;
+ err = 0;
+ } else {
+ err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+ }
+- ceph_mdsc_free_path(path, pathlen);
++ ceph_mdsc_free_path_info(&path_info);
+ dput(dn);
+
+ /* For none EACCES cases will let the MDS do the mds auth check */
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index a7254cab44cc2e..6587c2d5af1e08 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -368,8 +368,6 @@ int ceph_open(struct inode *inode, struct file *file)
+ int flags, fmode, wanted;
+ struct dentry *dentry;
+ char *path;
+- int pathlen;
+- u64 pathbase;
+ bool do_sync = false;
+ int mask = MAY_READ;
+
+@@ -399,14 +397,15 @@ int ceph_open(struct inode *inode, struct file *file)
+ if (!dentry) {
+ do_sync = true;
+ } else {
+- path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
++ struct ceph_path_info path_info;
++ path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ if (IS_ERR(path)) {
+ do_sync = true;
+ err = 0;
+ } else {
+ err = ceph_mds_check_access(mdsc, path, mask);
+ }
+- ceph_mdsc_free_path(path, pathlen);
++ ceph_mdsc_free_path_info(&path_info);
+ dput(dentry);
+
+ /* For none EACCES cases will let the MDS do the mds auth check */
+@@ -614,15 +613,13 @@ static void ceph_async_create_cb(struct ceph_mds_client *mdsc,
+ mapping_set_error(req->r_parent->i_mapping, result);
+
+ if (result) {
+- int pathlen = 0;
+- u64 base = 0;
+- char *path = ceph_mdsc_build_path(mdsc, req->r_dentry, &pathlen,
+- &base, 0);
++ struct ceph_path_info path_info = {0};
++ char *path = ceph_mdsc_build_path(mdsc, req->r_dentry, &path_info, 0);
+
+ pr_warn_client(cl,
+ "async create failure path=(%llx)%s result=%d!\n",
+- base, IS_ERR(path) ? "<<bad>>" : path, result);
+- ceph_mdsc_free_path(path, pathlen);
++ path_info.vino.ino, IS_ERR(path) ? "<<bad>>" : path, result);
++ ceph_mdsc_free_path_info(&path_info);
+
+ ceph_dir_clear_complete(req->r_parent);
+ if (!d_unhashed(dentry))
+@@ -791,8 +788,6 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ int mask;
+ int err;
+ char *path;
+- int pathlen;
+- u64 pathbase;
+
+ doutc(cl, "%p %llx.%llx dentry %p '%pd' %s flags %d mode 0%o\n",
+ dir, ceph_vinop(dir), dentry, dentry,
+@@ -814,7 +809,8 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ if (!dn) {
+ try_async = false;
+ } else {
+- path = ceph_mdsc_build_path(mdsc, dn, &pathlen, &pathbase, 0);
++ struct ceph_path_info path_info;
++ path = ceph_mdsc_build_path(mdsc, dn, &path_info, 0);
+ if (IS_ERR(path)) {
+ try_async = false;
+ err = 0;
+@@ -826,7 +822,7 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
+ mask |= MAY_WRITE;
+ err = ceph_mds_check_access(mdsc, path, mask);
+ }
+- ceph_mdsc_free_path(path, pathlen);
++ ceph_mdsc_free_path_info(&path_info);
+ dput(dn);
+
+ /* For none EACCES cases will let the MDS do the mds auth check */
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 06cd2963e41ee0..14e59c15dd68dd 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -55,6 +55,52 @@ static int ceph_set_ino_cb(struct inode *inode, void *data)
+ return 0;
+ }
+
++/*
++ * Check if the parent inode matches the vino from directory reply info
++ */
++static inline bool ceph_vino_matches_parent(struct inode *parent,
++ struct ceph_vino vino)
++{
++ return ceph_ino(parent) == vino.ino && ceph_snap(parent) == vino.snap;
++}
++
++/*
++ * Validate that the directory inode referenced by @req->r_parent matches the
++ * inode number and snapshot id contained in the reply's directory record. If
++ * they do not match – which can theoretically happen if the parent dentry was
++ * moved between the time the request was issued and the reply arrived – fall
++ * back to looking up the correct inode in the inode cache.
++ *
++ * A reference is *always* returned. Callers that receive a different inode
++ * than the original @parent are responsible for dropping the extra reference
++ * once the reply has been processed.
++ */
++static struct inode *ceph_get_reply_dir(struct super_block *sb,
++ struct inode *parent,
++ struct ceph_mds_reply_info_parsed *rinfo)
++{
++ struct ceph_vino vino;
++
++ if (unlikely(!rinfo->diri.in))
++ return parent; /* nothing to compare against */
++
++ /* If we didn't have a cached parent inode to begin with, just bail out. */
++ if (!parent)
++ return NULL;
++
++ vino.ino = le64_to_cpu(rinfo->diri.in->ino);
++ vino.snap = le64_to_cpu(rinfo->diri.in->snapid);
++
++ if (likely(ceph_vino_matches_parent(parent, vino)))
++ return parent; /* matches – use the original reference */
++
++ /* Mismatch – this should be rare. Emit a WARN and obtain the correct inode. */
++ WARN_ONCE(1, "ceph: reply dir mismatch (parent valid %llx.%llx reply %llx.%llx)\n",
++ ceph_ino(parent), ceph_snap(parent), vino.ino, vino.snap);
++
++ return ceph_get_inode(sb, vino, NULL);
++}
++
+ /**
+ * ceph_new_inode - allocate a new inode in advance of an expected create
+ * @dir: parent directory for new inode
+@@ -1523,6 +1569,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ struct ceph_vino tvino, dvino;
+ struct ceph_fs_client *fsc = ceph_sb_to_fs_client(sb);
+ struct ceph_client *cl = fsc->client;
++ struct inode *parent_dir = NULL;
+ int err = 0;
+
+ doutc(cl, "%p is_dentry %d is_target %d\n", req,
+@@ -1536,10 +1583,17 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ }
+
+ if (rinfo->head->is_dentry) {
+- struct inode *dir = req->r_parent;
+-
+- if (dir) {
+- err = ceph_fill_inode(dir, NULL, &rinfo->diri,
++ /*
++ * r_parent may be stale, in cases when R_PARENT_LOCKED is not set,
++ * so we need to get the correct inode
++ */
++ parent_dir = ceph_get_reply_dir(sb, req->r_parent, rinfo);
++ if (unlikely(IS_ERR(parent_dir))) {
++ err = PTR_ERR(parent_dir);
++ goto done;
++ }
++ if (parent_dir) {
++ err = ceph_fill_inode(parent_dir, NULL, &rinfo->diri,
+ rinfo->dirfrag, session, -1,
+ &req->r_caps_reservation);
+ if (err < 0)
+@@ -1548,14 +1602,14 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ WARN_ON_ONCE(1);
+ }
+
+- if (dir && req->r_op == CEPH_MDS_OP_LOOKUPNAME &&
++ if (parent_dir && req->r_op == CEPH_MDS_OP_LOOKUPNAME &&
+ test_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags) &&
+ !test_bit(CEPH_MDS_R_ABORTED, &req->r_req_flags)) {
+ bool is_nokey = false;
+ struct qstr dname;
+ struct dentry *dn, *parent;
+ struct fscrypt_str oname = FSTR_INIT(NULL, 0);
+- struct ceph_fname fname = { .dir = dir,
++ struct ceph_fname fname = { .dir = parent_dir,
+ .name = rinfo->dname,
+ .ctext = rinfo->altname,
+ .name_len = rinfo->dname_len,
+@@ -1564,10 +1618,10 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ BUG_ON(!rinfo->head->is_target);
+ BUG_ON(req->r_dentry);
+
+- parent = d_find_any_alias(dir);
++ parent = d_find_any_alias(parent_dir);
+ BUG_ON(!parent);
+
+- err = ceph_fname_alloc_buffer(dir, &oname);
++ err = ceph_fname_alloc_buffer(parent_dir, &oname);
+ if (err < 0) {
+ dput(parent);
+ goto done;
+@@ -1576,7 +1630,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ err = ceph_fname_to_usr(&fname, NULL, &oname, &is_nokey);
+ if (err < 0) {
+ dput(parent);
+- ceph_fname_free_buffer(dir, &oname);
++ ceph_fname_free_buffer(parent_dir, &oname);
+ goto done;
+ }
+ dname.name = oname.name;
+@@ -1595,7 +1649,7 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ dname.len, dname.name, dn);
+ if (!dn) {
+ dput(parent);
+- ceph_fname_free_buffer(dir, &oname);
++ ceph_fname_free_buffer(parent_dir, &oname);
+ err = -ENOMEM;
+ goto done;
+ }
+@@ -1610,12 +1664,12 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ ceph_snap(d_inode(dn)) != tvino.snap)) {
+ doutc(cl, " dn %p points to wrong inode %p\n",
+ dn, d_inode(dn));
+- ceph_dir_clear_ordered(dir);
++ ceph_dir_clear_ordered(parent_dir);
+ d_delete(dn);
+ dput(dn);
+ goto retry_lookup;
+ }
+- ceph_fname_free_buffer(dir, &oname);
++ ceph_fname_free_buffer(parent_dir, &oname);
+
+ req->r_dentry = dn;
+ dput(parent);
+@@ -1794,6 +1848,9 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
+ &dvino, ptvino);
+ }
+ done:
++ /* Drop extra ref from ceph_get_reply_dir() if it returned a new inode */
++ if (unlikely(!IS_ERR_OR_NULL(parent_dir) && parent_dir != req->r_parent))
++ iput(parent_dir);
+ doutc(cl, "done err=%d\n", err);
+ return err;
+ }
+@@ -2488,22 +2545,21 @@ int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
+ int truncate_retry = 20; /* The RMW will take around 50ms */
+ struct dentry *dentry;
+ char *path;
+- int pathlen;
+- u64 pathbase;
+ bool do_sync = false;
+
+ dentry = d_find_alias(inode);
+ if (!dentry) {
+ do_sync = true;
+ } else {
+- path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase, 0);
++ struct ceph_path_info path_info;
++ path = ceph_mdsc_build_path(mdsc, dentry, &path_info, 0);
+ if (IS_ERR(path)) {
+ do_sync = true;
+ err = 0;
+ } else {
+ err = ceph_mds_check_access(mdsc, path, MAY_WRITE);
+ }
+- ceph_mdsc_free_path(path, pathlen);
++ ceph_mdsc_free_path_info(&path_info);
+ dput(dentry);
+
+ /* For none EACCES cases will let the MDS do the mds auth check */
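
ceph_get_reply_dir() trusts the cached req->r_parent only when its inode number and snapshot id match the vino in the MDS reply's directory record, falling back to an inode-cache lookup otherwise. A sketch of that comparison with illustrative values:

    #include <stdint.h>
    #include <stdio.h>

    /* Reply-dir validation: use the cached parent only if its ino/snap
     * match what the MDS reply reports; values are illustrative. */
    struct vino { uint64_t ino; uint64_t snap; };

    static int vino_matches_parent(struct vino parent, struct vino reply)
    {
        return parent.ino == reply.ino && parent.snap == reply.snap;
    }

    int main(void)
    {
        struct vino parent = { 0x1000, 0 };
        struct vino reply  = { 0x1000, 0 };

        printf("match: %d\n", vino_matches_parent(parent, reply)); /* 1 */
        reply.ino = 0x2000;                    /* dentry moved meanwhile */
        printf("match: %d\n", vino_matches_parent(parent, reply)); /* 0 */
        return 0;
    }
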
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 230e0c3f341f71..94f109ca853eba 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -2681,8 +2681,7 @@ static u8 *get_fscrypt_altname(const struct ceph_mds_request *req, u32 *plen)
+ * ceph_mdsc_build_path - build a path string to a given dentry
+ * @mdsc: mds client
+ * @dentry: dentry to which path should be built
+- * @plen: returned length of string
+- * @pbase: returned base inode number
++ * @path_info: output path, length, base ino+snap, and freepath ownership flag
+ * @for_wire: is this path going to be sent to the MDS?
+ *
+ * Build a string that represents the path to the dentry. This is mostly called
+@@ -2700,7 +2699,7 @@ static u8 *get_fscrypt_altname(const struct ceph_mds_request *req, u32 *plen)
+ * foo/.snap/bar -> foo//bar
+ */
+ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+- int *plen, u64 *pbase, int for_wire)
++ struct ceph_path_info *path_info, int for_wire)
+ {
+ struct ceph_client *cl = mdsc->fsc->client;
+ struct dentry *cur;
+@@ -2810,16 +2809,28 @@ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+ return ERR_PTR(-ENAMETOOLONG);
+ }
+
+- *pbase = base;
+- *plen = PATH_MAX - 1 - pos;
++ /* Initialize the output structure */
++ memset(path_info, 0, sizeof(*path_info));
++
++ path_info->vino.ino = base;
++ path_info->pathlen = PATH_MAX - 1 - pos;
++ path_info->path = path + pos;
++ path_info->freepath = true;
++
++ /* Set snap from dentry if available */
++ if (d_inode(dentry))
++ path_info->vino.snap = ceph_snap(d_inode(dentry));
++ else
++ path_info->vino.snap = CEPH_NOSNAP;
++
+ doutc(cl, "on %p %d built %llx '%.*s'\n", dentry, d_count(dentry),
+- base, *plen, path + pos);
++ base, PATH_MAX - 1 - pos, path + pos);
+ return path + pos;
+ }
+
+ static int build_dentry_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+- struct inode *dir, const char **ppath, int *ppathlen,
+- u64 *pino, bool *pfreepath, bool parent_locked)
++ struct inode *dir, struct ceph_path_info *path_info,
++ bool parent_locked)
+ {
+ char *path;
+
+@@ -2828,41 +2839,47 @@ static int build_dentry_path(struct ceph_mds_client *mdsc, struct dentry *dentry
+ dir = d_inode_rcu(dentry->d_parent);
+ if (dir && parent_locked && ceph_snap(dir) == CEPH_NOSNAP &&
+ !IS_ENCRYPTED(dir)) {
+- *pino = ceph_ino(dir);
++ path_info->vino.ino = ceph_ino(dir);
++ path_info->vino.snap = ceph_snap(dir);
+ rcu_read_unlock();
+- *ppath = dentry->d_name.name;
+- *ppathlen = dentry->d_name.len;
++ path_info->path = dentry->d_name.name;
++ path_info->pathlen = dentry->d_name.len;
++ path_info->freepath = false;
+ return 0;
+ }
+ rcu_read_unlock();
+- path = ceph_mdsc_build_path(mdsc, dentry, ppathlen, pino, 1);
++ path = ceph_mdsc_build_path(mdsc, dentry, path_info, 1);
+ if (IS_ERR(path))
+ return PTR_ERR(path);
+- *ppath = path;
+- *pfreepath = true;
++ /*
++ * ceph_mdsc_build_path already fills path_info, including snap handling.
++ */
+ return 0;
+ }
+
+-static int build_inode_path(struct inode *inode,
+- const char **ppath, int *ppathlen, u64 *pino,
+- bool *pfreepath)
++static int build_inode_path(struct inode *inode, struct ceph_path_info *path_info)
+ {
+ struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
+ struct dentry *dentry;
+ char *path;
+
+ if (ceph_snap(inode) == CEPH_NOSNAP) {
+- *pino = ceph_ino(inode);
+- *ppathlen = 0;
++ path_info->vino.ino = ceph_ino(inode);
++ path_info->vino.snap = ceph_snap(inode);
++ path_info->pathlen = 0;
++ path_info->freepath = false;
+ return 0;
+ }
+ dentry = d_find_alias(inode);
+- path = ceph_mdsc_build_path(mdsc, dentry, ppathlen, pino, 1);
++ path = ceph_mdsc_build_path(mdsc, dentry, path_info, 1);
+ dput(dentry);
+ if (IS_ERR(path))
+ return PTR_ERR(path);
+- *ppath = path;
+- *pfreepath = true;
++ /*
++ * ceph_mdsc_build_path already fills path_info, including snap from dentry.
++ * Override with inode's snap since that's what this function is for.
++ */
++ path_info->vino.snap = ceph_snap(inode);
+ return 0;
+ }
+
+@@ -2872,26 +2889,32 @@ static int build_inode_path(struct inode *inode,
+ */
+ static int set_request_path_attr(struct ceph_mds_client *mdsc, struct inode *rinode,
+ struct dentry *rdentry, struct inode *rdiri,
+- const char *rpath, u64 rino, const char **ppath,
+- int *pathlen, u64 *ino, bool *freepath,
++ const char *rpath, u64 rino,
++ struct ceph_path_info *path_info,
+ bool parent_locked)
+ {
+ struct ceph_client *cl = mdsc->fsc->client;
+ int r = 0;
+
++ /* Initialize the output structure */
++ memset(path_info, 0, sizeof(*path_info));
++
+ if (rinode) {
+- r = build_inode_path(rinode, ppath, pathlen, ino, freepath);
++ r = build_inode_path(rinode, path_info);
+ doutc(cl, " inode %p %llx.%llx\n", rinode, ceph_ino(rinode),
+ ceph_snap(rinode));
+ } else if (rdentry) {
+- r = build_dentry_path(mdsc, rdentry, rdiri, ppath, pathlen, ino,
+- freepath, parent_locked);
+- doutc(cl, " dentry %p %llx/%.*s\n", rdentry, *ino, *pathlen, *ppath);
++ r = build_dentry_path(mdsc, rdentry, rdiri, path_info, parent_locked);
++ doutc(cl, " dentry %p %llx/%.*s\n", rdentry, path_info->vino.ino,
++ path_info->pathlen, path_info->path);
+ } else if (rpath || rino) {
+- *ino = rino;
+- *ppath = rpath;
+- *pathlen = rpath ? strlen(rpath) : 0;
+- doutc(cl, " path %.*s\n", *pathlen, rpath);
++ path_info->vino.ino = rino;
++ path_info->vino.snap = CEPH_NOSNAP;
++ path_info->path = rpath;
++ path_info->pathlen = rpath ? strlen(rpath) : 0;
++ path_info->freepath = false;
++
++ doutc(cl, " path %.*s\n", path_info->pathlen, rpath);
+ }
+
+ return r;
+@@ -2968,11 +2991,8 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ struct ceph_client *cl = mdsc->fsc->client;
+ struct ceph_msg *msg;
+ struct ceph_mds_request_head_legacy *lhead;
+- const char *path1 = NULL;
+- const char *path2 = NULL;
+- u64 ino1 = 0, ino2 = 0;
+- int pathlen1 = 0, pathlen2 = 0;
+- bool freepath1 = false, freepath2 = false;
++ struct ceph_path_info path_info1 = {0};
++ struct ceph_path_info path_info2 = {0};
+ struct dentry *old_dentry = NULL;
+ int len;
+ u16 releases;
+@@ -2982,25 +3002,49 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ u16 request_head_version = mds_supported_head_version(session);
+ kuid_t caller_fsuid = req->r_cred->fsuid;
+ kgid_t caller_fsgid = req->r_cred->fsgid;
++ bool parent_locked = test_bit(CEPH_MDS_R_PARENT_LOCKED, &req->r_req_flags);
+
+ ret = set_request_path_attr(mdsc, req->r_inode, req->r_dentry,
+- req->r_parent, req->r_path1, req->r_ino1.ino,
+- &path1, &pathlen1, &ino1, &freepath1,
+- test_bit(CEPH_MDS_R_PARENT_LOCKED,
+- &req->r_req_flags));
++ req->r_parent, req->r_path1, req->r_ino1.ino,
++ &path_info1, parent_locked);
+ if (ret < 0) {
+ msg = ERR_PTR(ret);
+ goto out;
+ }
+
++ /*
++ * When the parent directory's i_rwsem is *not* locked, req->r_parent may
++ * have become stale (e.g. after a concurrent rename) between the time the
++ * dentry was looked up and now. If we detect that the stored r_parent
++ * does not match the inode number we just encoded for the request, switch
++ * to the correct inode so that the MDS receives a valid parent reference.
++ */
++ if (!parent_locked && req->r_parent && path_info1.vino.ino &&
++ ceph_ino(req->r_parent) != path_info1.vino.ino) {
++ struct inode *old_parent = req->r_parent;
++ struct inode *correct_dir = ceph_get_inode(mdsc->fsc->sb, path_info1.vino, NULL);
++ if (!IS_ERR(correct_dir)) {
++ WARN_ONCE(1, "ceph: r_parent mismatch (had %llx wanted %llx) - updating\n",
++ ceph_ino(old_parent), path_info1.vino.ino);
++ /*
++ * Transfer CEPH_CAP_PIN from the old parent to the new one.
++ * The pin was taken earlier in ceph_mdsc_submit_request().
++ */
++ ceph_put_cap_refs(ceph_inode(old_parent), CEPH_CAP_PIN);
++ iput(old_parent);
++ req->r_parent = correct_dir;
++ ceph_get_cap_refs(ceph_inode(req->r_parent), CEPH_CAP_PIN);
++ }
++ }
++
+ /* If r_old_dentry is set, then assume that its parent is locked */
+ if (req->r_old_dentry &&
+ !(req->r_old_dentry->d_flags & DCACHE_DISCONNECTED))
+ old_dentry = req->r_old_dentry;
+ ret = set_request_path_attr(mdsc, NULL, old_dentry,
+- req->r_old_dentry_dir,
+- req->r_path2, req->r_ino2.ino,
+- &path2, &pathlen2, &ino2, &freepath2, true);
++ req->r_old_dentry_dir,
++ req->r_path2, req->r_ino2.ino,
++ &path_info2, true);
+ if (ret < 0) {
+ msg = ERR_PTR(ret);
+ goto out_free1;
+@@ -3031,7 +3075,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+
+ /* filepaths */
+ len += 2 * (1 + sizeof(u32) + sizeof(u64));
+- len += pathlen1 + pathlen2;
++ len += path_info1.pathlen + path_info2.pathlen;
+
+ /* cap releases */
+ len += sizeof(struct ceph_mds_request_release) *
+@@ -3039,9 +3083,9 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ !!req->r_old_inode_drop + !!req->r_old_dentry_drop);
+
+ if (req->r_dentry_drop)
+- len += pathlen1;
++ len += path_info1.pathlen;
+ if (req->r_old_dentry_drop)
+- len += pathlen2;
++ len += path_info2.pathlen;
+
+ /* MClientRequest tail */
+
+@@ -3154,8 +3198,8 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ lhead->ino = cpu_to_le64(req->r_deleg_ino);
+ lhead->args = req->r_args;
+
+- ceph_encode_filepath(&p, end, ino1, path1);
+- ceph_encode_filepath(&p, end, ino2, path2);
++ ceph_encode_filepath(&p, end, path_info1.vino.ino, path_info1.path);
++ ceph_encode_filepath(&p, end, path_info2.vino.ino, path_info2.path);
+
+ /* make note of release offset, in case we need to replay */
+ req->r_request_release_offset = p - msg->front.iov_base;
+@@ -3218,11 +3262,9 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
+ msg->hdr.data_off = cpu_to_le16(0);
+
+ out_free2:
+- if (freepath2)
+- ceph_mdsc_free_path((char *)path2, pathlen2);
++ ceph_mdsc_free_path_info(&path_info2);
+ out_free1:
+- if (freepath1)
+- ceph_mdsc_free_path((char *)path1, pathlen1);
++ ceph_mdsc_free_path_info(&path_info1);
+ out:
+ return msg;
+ out_err:
+@@ -4579,24 +4621,20 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ struct ceph_pagelist *pagelist = recon_state->pagelist;
+ struct dentry *dentry;
+ struct ceph_cap *cap;
+- char *path;
+- int pathlen = 0, err;
+- u64 pathbase;
++ struct ceph_path_info path_info = {0};
++ int err;
+ u64 snap_follows;
+
+ dentry = d_find_primary(inode);
+ if (dentry) {
+ /* set pathbase to parent dir when msg_version >= 2 */
+- path = ceph_mdsc_build_path(mdsc, dentry, &pathlen, &pathbase,
++ char *path = ceph_mdsc_build_path(mdsc, dentry, &path_info,
+ recon_state->msg_version >= 2);
+ dput(dentry);
+ if (IS_ERR(path)) {
+ err = PTR_ERR(path);
+ goto out_err;
+ }
+- } else {
+- path = NULL;
+- pathbase = 0;
+ }
+
+ spin_lock(&ci->i_ceph_lock);
+@@ -4629,7 +4667,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ rec.v2.wanted = cpu_to_le32(__ceph_caps_wanted(ci));
+ rec.v2.issued = cpu_to_le32(cap->issued);
+ rec.v2.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+- rec.v2.pathbase = cpu_to_le64(pathbase);
++ rec.v2.pathbase = cpu_to_le64(path_info.vino.ino);
+ rec.v2.flock_len = (__force __le32)
+ ((ci->i_ceph_flags & CEPH_I_ERROR_FILELOCK) ? 0 : 1);
+ } else {
+@@ -4644,7 +4682,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ ts = inode_get_atime(inode);
+ ceph_encode_timespec64(&rec.v1.atime, &ts);
+ rec.v1.snaprealm = cpu_to_le64(ci->i_snap_realm->ino);
+- rec.v1.pathbase = cpu_to_le64(pathbase);
++ rec.v1.pathbase = cpu_to_le64(path_info.vino.ino);
+ }
+
+ if (list_empty(&ci->i_cap_snaps)) {
+@@ -4706,7 +4744,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ sizeof(struct ceph_filelock);
+ rec.v2.flock_len = cpu_to_le32(struct_len);
+
+- struct_len += sizeof(u32) + pathlen + sizeof(rec.v2);
++ struct_len += sizeof(u32) + path_info.pathlen + sizeof(rec.v2);
+
+ if (struct_v >= 2)
+ struct_len += sizeof(u64); /* snap_follows */
+@@ -4730,7 +4768,7 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ ceph_pagelist_encode_8(pagelist, 1);
+ ceph_pagelist_encode_32(pagelist, struct_len);
+ }
+- ceph_pagelist_encode_string(pagelist, path, pathlen);
++ ceph_pagelist_encode_string(pagelist, (char *)path_info.path, path_info.pathlen);
+ ceph_pagelist_append(pagelist, &rec, sizeof(rec.v2));
+ ceph_locks_to_pagelist(flocks, pagelist,
+ num_fcntl_locks, num_flock_locks);
+@@ -4741,17 +4779,17 @@ static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
+ } else {
+ err = ceph_pagelist_reserve(pagelist,
+ sizeof(u64) + sizeof(u32) +
+- pathlen + sizeof(rec.v1));
++ path_info.pathlen + sizeof(rec.v1));
+ if (err)
+ goto out_err;
+
+ ceph_pagelist_encode_64(pagelist, ceph_ino(inode));
+- ceph_pagelist_encode_string(pagelist, path, pathlen);
++ ceph_pagelist_encode_string(pagelist, (char *)path_info.path, path_info.pathlen);
+ ceph_pagelist_append(pagelist, &rec, sizeof(rec.v1));
+ }
+
+ out_err:
+- ceph_mdsc_free_path(path, pathlen);
++ ceph_mdsc_free_path_info(&path_info);
+ if (!err)
+ recon_state->nr_caps++;
+ return err;
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 3e2a6fa7c19aab..0428a5eaf28c65 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -617,14 +617,24 @@ extern int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath,
+
+ extern void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc);
+
+-static inline void ceph_mdsc_free_path(char *path, int len)
++/*
++ * Structure to group path-related output parameters for build_*_path functions
++ */
++struct ceph_path_info {
++ const char *path;
++ int pathlen;
++ struct ceph_vino vino;
++ bool freepath;
++};
++
++static inline void ceph_mdsc_free_path_info(const struct ceph_path_info *path_info)
+ {
+- if (!IS_ERR_OR_NULL(path))
+- __putname(path - (PATH_MAX - 1 - len));
++ if (path_info && path_info->freepath && !IS_ERR_OR_NULL(path_info->path))
++ __putname((char *)path_info->path - (PATH_MAX - 1 - path_info->pathlen));
+ }
+
+ extern char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc,
+- struct dentry *dentry, int *plen, u64 *base,
++ struct dentry *dentry, struct ceph_path_info *path_info,
+ int for_wire);
+
+ extern void __ceph_mdsc_drop_dentry_lease(struct dentry *dentry);
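
As an aside, here is a minimal user-space sketch (not part of the patch) of why
ceph_mdsc_free_path_info() rewinds the pointer before freeing: ceph_mdsc_build_path()
fills a PATH_MAX name buffer from the tail backwards and returns a pointer to the
first used byte, so the allocation base must be recomputed from the stored length.
The helper names are hypothetical; malloc()/free() stand in for __getname()/__putname().

#include <stdlib.h>
#include <string.h>

#define PATH_MAX 4096

static char *build_path_tail(const char *leaf, int *pathlen)
{
	char *buf = malloc(PATH_MAX);
	size_t len = strlen(leaf);
	char *p;

	if (!buf)
		return NULL;
	p = buf + PATH_MAX - 1 - len;	/* tail-aligned, as in the kernel */
	memcpy(p, leaf, len + 1);	/* the NUL lands on the last byte */
	*pathlen = len;
	return p;
}

static void free_path_tail(char *path, int pathlen)
{
	free(path - (PATH_MAX - 1 - pathlen));	/* same arithmetic as the patch */
}
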
+diff --git a/fs/coredump.c b/fs/coredump.c
+index f217ebf2b3b68f..012915262d11b7 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -1263,11 +1263,15 @@ static int proc_dostring_coredump(const struct ctl_table *table, int write,
+ ssize_t retval;
+ char old_core_pattern[CORENAME_MAX_SIZE];
+
++ if (write)
++ return proc_dostring(table, write, buffer, lenp, ppos);
++
+ retval = strscpy(old_core_pattern, core_pattern, CORENAME_MAX_SIZE);
+
+ error = proc_dostring(table, write, buffer, lenp, ppos);
+ if (error)
+ return error;
++
+ if (!check_coredump_socket()) {
+ strscpy(core_pattern, old_core_pattern, retval + 1);
+ return -EINVAL;
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index 16e4a6bd9b9737..dd7d86809c1881 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -65,10 +65,10 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb)
+ }
+
+ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
+- erofs_off_t offset, bool need_kmap)
++ erofs_off_t offset)
+ {
+ erofs_init_metabuf(buf, sb);
+- return erofs_bread(buf, offset, need_kmap);
++ return erofs_bread(buf, offset, true);
+ }
+
+ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+@@ -118,7 +118,7 @@ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+ pos = ALIGN(erofs_iloc(inode) + vi->inode_isize +
+ vi->xattr_isize, unit) + unit * chunknr;
+
+- idx = erofs_read_metabuf(&buf, sb, pos, true);
++ idx = erofs_read_metabuf(&buf, sb, pos);
+ if (IS_ERR(idx)) {
+ err = PTR_ERR(idx);
+ goto out;
+@@ -299,7 +299,7 @@ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
+
+ iomap->type = IOMAP_INLINE;
+- ptr = erofs_read_metabuf(&buf, sb, mdev.m_pa, true);
++ ptr = erofs_read_metabuf(&buf, sb, mdev.m_pa);
+ if (IS_ERR(ptr))
+ return PTR_ERR(ptr);
+ iomap->inline_data = ptr;
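
A quick sketch (not part of the patch, kernel context assumed) of the calling
convention after this change: erofs_read_metabuf() now always kmaps, and the rare
caller that only needs the backing folio open-codes the init/bread pair with
need_kmap set to false, exactly as the z_erofs_pcluster_begin() hunk further below does.

static void *read_meta_unmapped(struct erofs_buf *buf,
				struct super_block *sb, erofs_off_t pos)
{
	erofs_init_metabuf(buf, sb);
	return erofs_bread(buf, pos, false);	/* skip kmap; folio only */
}
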
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 91781718199e2a..3ee082476c8c53 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -115,7 +115,7 @@ static int erofs_fileio_scan_folio(struct erofs_fileio *io, struct folio *folio)
+ void *src;
+
+ src = erofs_read_metabuf(&buf, inode->i_sb,
+- map->m_pa + ofs, true);
++ map->m_pa + ofs);
+ if (IS_ERR(src)) {
+ err = PTR_ERR(src);
+ break;
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index 34517ca9df9157..9a8ee646e51d9d 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -274,7 +274,7 @@ static int erofs_fscache_data_read_slice(struct erofs_fscache_rq *req)
+ size_t size = map.m_llen;
+ void *src;
+
+- src = erofs_read_metabuf(&buf, sb, map.m_pa, true);
++ src = erofs_read_metabuf(&buf, sb, map.m_pa);
+ if (IS_ERR(src))
+ return PTR_ERR(src);
+
+diff --git a/fs/erofs/inode.c b/fs/erofs/inode.c
+index a0ae0b4f7b012a..47215c5e33855b 100644
+--- a/fs/erofs/inode.c
++++ b/fs/erofs/inode.c
+@@ -39,10 +39,10 @@ static int erofs_read_inode(struct inode *inode)
+ void *ptr;
+ int err = 0;
+
+- ptr = erofs_read_metabuf(&buf, sb, erofs_pos(sb, blkaddr), true);
++ ptr = erofs_read_metabuf(&buf, sb, erofs_pos(sb, blkaddr));
+ if (IS_ERR(ptr)) {
+ err = PTR_ERR(ptr);
+- erofs_err(sb, "failed to get inode (nid: %llu) page, err %d",
++ erofs_err(sb, "failed to read inode meta block (nid: %llu): %d",
+ vi->nid, err);
+ goto err_out;
+ }
+@@ -78,10 +78,10 @@ static int erofs_read_inode(struct inode *inode)
+
+ memcpy(&copied, dic, gotten);
+ ptr = erofs_read_metabuf(&buf, sb,
+- erofs_pos(sb, blkaddr + 1), true);
++ erofs_pos(sb, blkaddr + 1));
+ if (IS_ERR(ptr)) {
+ err = PTR_ERR(ptr);
+- erofs_err(sb, "failed to get inode payload block (nid: %llu), err %d",
++ erofs_err(sb, "failed to read inode payload block (nid: %llu): %d",
+ vi->nid, err);
+ goto err_out;
+ }
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 06b867d2fc3b7c..a7699114f6fe6d 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -385,7 +385,7 @@ void erofs_put_metabuf(struct erofs_buf *buf);
+ void *erofs_bread(struct erofs_buf *buf, erofs_off_t offset, bool need_kmap);
+ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb);
+ void *erofs_read_metabuf(struct erofs_buf *buf, struct super_block *sb,
+- erofs_off_t offset, bool need_kmap);
++ erofs_off_t offset);
+ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *dev);
+ int erofs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len);
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index cad87e4d669432..06c8981eea7f8c 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -141,7 +141,7 @@ static int erofs_init_device(struct erofs_buf *buf, struct super_block *sb,
+ struct erofs_deviceslot *dis;
+ struct file *file;
+
+- dis = erofs_read_metabuf(buf, sb, *pos, true);
++ dis = erofs_read_metabuf(buf, sb, *pos);
+ if (IS_ERR(dis))
+ return PTR_ERR(dis);
+
+@@ -268,7 +268,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ void *data;
+ int ret;
+
+- data = erofs_read_metabuf(&buf, sb, 0, true);
++ data = erofs_read_metabuf(&buf, sb, 0);
+ if (IS_ERR(data)) {
+ erofs_err(sb, "cannot read erofs superblock");
+ return PTR_ERR(data);
+@@ -999,10 +999,22 @@ static int erofs_show_options(struct seq_file *seq, struct dentry *root)
+ return 0;
+ }
+
++static void erofs_evict_inode(struct inode *inode)
++{
++#ifdef CONFIG_FS_DAX
++ if (IS_DAX(inode))
++ dax_break_layout_final(inode);
++#endif
++
++ truncate_inode_pages_final(&inode->i_data);
++ clear_inode(inode);
++}
++
+ const struct super_operations erofs_sops = {
+ .put_super = erofs_put_super,
+ .alloc_inode = erofs_alloc_inode,
+ .free_inode = erofs_free_inode,
++ .evict_inode = erofs_evict_inode,
+ .statfs = erofs_statfs,
+ .show_options = erofs_show_options,
+ };
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 9bb53f00c2c629..e8f30eee29b441 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -805,6 +805,7 @@ static int z_erofs_pcluster_begin(struct z_erofs_frontend *fe)
+ struct erofs_map_blocks *map = &fe->map;
+ struct super_block *sb = fe->inode->i_sb;
+ struct z_erofs_pcluster *pcl = NULL;
++ void *ptr;
+ int ret;
+
+ DBG_BUGON(fe->pcl);
+@@ -854,15 +855,14 @@ static int z_erofs_pcluster_begin(struct z_erofs_frontend *fe)
+ /* bind cache first when cached decompression is preferred */
+ z_erofs_bind_cache(fe);
+ } else {
+- void *mptr;
+-
+- mptr = erofs_read_metabuf(&map->buf, sb, map->m_pa, false);
+- if (IS_ERR(mptr)) {
+- ret = PTR_ERR(mptr);
+- erofs_err(sb, "failed to get inline data %d", ret);
++ erofs_init_metabuf(&map->buf, sb);
++ ptr = erofs_bread(&map->buf, map->m_pa, false);
++ if (IS_ERR(ptr)) {
++ ret = PTR_ERR(ptr);
++ erofs_err(sb, "failed to get inline folio %d", ret);
+ return ret;
+ }
+- get_page(map->buf.page);
++ folio_get(page_folio(map->buf.page));
+ WRITE_ONCE(fe->pcl->compressed_bvecs[0].page, map->buf.page);
+ fe->pcl->pageofs_in = map->m_pa & ~PAGE_MASK;
+ fe->mode = Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE;
+@@ -1325,9 +1325,8 @@ static int z_erofs_decompress_pcluster(struct z_erofs_backend *be, int err)
+
+ /* must handle all compressed pages before actual file pages */
+ if (pcl->from_meta) {
+- page = pcl->compressed_bvecs[0].page;
++ folio_put(page_folio(pcl->compressed_bvecs[0].page));
+ WRITE_ONCE(pcl->compressed_bvecs[0].page, NULL);
+- put_page(page);
+ } else {
+ /* managed folios are still left in compressed_bvecs[] */
+ for (i = 0; i < pclusterpages; ++i) {
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index f1a15ff22147ba..14d01474ad9dda 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -31,7 +31,7 @@ static int z_erofs_load_full_lcluster(struct z_erofs_maprecorder *m,
+ struct z_erofs_lcluster_index *di;
+ unsigned int advise;
+
+- di = erofs_read_metabuf(&m->map->buf, inode->i_sb, pos, true);
++ di = erofs_read_metabuf(&m->map->buf, inode->i_sb, pos);
+ if (IS_ERR(di))
+ return PTR_ERR(di);
+ m->lcn = lcn;
+@@ -146,7 +146,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m,
+ else
+ return -EOPNOTSUPP;
+
+- in = erofs_read_metabuf(&m->map->buf, m->inode->i_sb, pos, true);
++ in = erofs_read_metabuf(&m->map->buf, m->inode->i_sb, pos);
+ if (IS_ERR(in))
+ return PTR_ERR(in);
+
+@@ -403,10 +403,10 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ .inode = inode,
+ .map = map,
+ };
+- int err = 0;
+- unsigned int endoff, afmt;
++ unsigned int endoff;
+ unsigned long initial_lcn;
+ unsigned long long ofs, end;
++ int err;
+
+ ofs = flags & EROFS_GET_BLOCKS_FINDTAIL ? inode->i_size - 1 : map->m_la;
+ if (fragment && !(flags & EROFS_GET_BLOCKS_FINDTAIL) &&
+@@ -502,20 +502,15 @@ static int z_erofs_map_blocks_fo(struct inode *inode,
+ err = -EFSCORRUPTED;
+ goto unmap_out;
+ }
+- afmt = vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER ?
+- Z_EROFS_COMPRESSION_INTERLACED :
+- Z_EROFS_COMPRESSION_SHIFTED;
++ if (vi->z_advise & Z_EROFS_ADVISE_INTERLACED_PCLUSTER)
++ map->m_algorithmformat = Z_EROFS_COMPRESSION_INTERLACED;
++ else
++ map->m_algorithmformat = Z_EROFS_COMPRESSION_SHIFTED;
++ } else if (m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
++ map->m_algorithmformat = vi->z_algorithmtype[1];
+ } else {
+- afmt = m.headtype == Z_EROFS_LCLUSTER_TYPE_HEAD2 ?
+- vi->z_algorithmtype[1] : vi->z_algorithmtype[0];
+- if (!(EROFS_I_SB(inode)->available_compr_algs & (1 << afmt))) {
+- erofs_err(sb, "inconsistent algorithmtype %u for nid %llu",
+- afmt, vi->nid);
+- err = -EFSCORRUPTED;
+- goto unmap_out;
+- }
++ map->m_algorithmformat = vi->z_algorithmtype[0];
+ }
+- map->m_algorithmformat = afmt;
+
+ if ((flags & EROFS_GET_BLOCKS_FIEMAP) ||
+ ((flags & EROFS_GET_BLOCKS_READMORE) &&
+@@ -551,7 +546,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ map->m_flags = 0;
+ if (recsz <= offsetof(struct z_erofs_extent, pstart_hi)) {
+ if (recsz <= offsetof(struct z_erofs_extent, pstart_lo)) {
+- ext = erofs_read_metabuf(&map->buf, sb, pos, true);
++ ext = erofs_read_metabuf(&map->buf, sb, pos);
+ if (IS_ERR(ext))
+ return PTR_ERR(ext);
+ pa = le64_to_cpu(*(__le64 *)ext);
+@@ -564,7 +559,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ }
+
+ for (; lstart <= map->m_la; lstart += 1 << vi->z_lclusterbits) {
+- ext = erofs_read_metabuf(&map->buf, sb, pos, true);
++ ext = erofs_read_metabuf(&map->buf, sb, pos);
+ if (IS_ERR(ext))
+ return PTR_ERR(ext);
+ map->m_plen = le32_to_cpu(ext->plen);
+@@ -584,7 +579,7 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ for (l = 0, r = vi->z_extents; l < r; ) {
+ mid = l + (r - l) / 2;
+ ext = erofs_read_metabuf(&map->buf, sb,
+- pos + mid * recsz, true);
++ pos + mid * recsz);
+ if (IS_ERR(ext))
+ return PTR_ERR(ext);
+
+@@ -641,14 +636,13 @@ static int z_erofs_map_blocks_ext(struct inode *inode,
+ return 0;
+ }
+
+-static int z_erofs_fill_inode_lazy(struct inode *inode)
++static int z_erofs_fill_inode(struct inode *inode, struct erofs_map_blocks *map)
+ {
+ struct erofs_inode *const vi = EROFS_I(inode);
+ struct super_block *const sb = inode->i_sb;
+- int err, headnr;
+- erofs_off_t pos;
+- struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
+ struct z_erofs_map_header *h;
++ erofs_off_t pos;
++ int err = 0;
+
+ if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags)) {
+ /*
+@@ -662,12 +656,11 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ if (wait_on_bit_lock(&vi->flags, EROFS_I_BL_Z_BIT, TASK_KILLABLE))
+ return -ERESTARTSYS;
+
+- err = 0;
+ if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags))
+ goto out_unlock;
+
+ pos = ALIGN(erofs_iloc(inode) + vi->inode_isize + vi->xattr_isize, 8);
+- h = erofs_read_metabuf(&buf, sb, pos, true);
++ h = erofs_read_metabuf(&map->buf, sb, pos);
+ if (IS_ERR(h)) {
+ err = PTR_ERR(h);
+ goto out_unlock;
+@@ -699,22 +692,13 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ else if (vi->z_advise & Z_EROFS_ADVISE_INLINE_PCLUSTER)
+ vi->z_idata_size = le16_to_cpu(h->h_idata_size);
+
+- headnr = 0;
+- if (vi->z_algorithmtype[0] >= Z_EROFS_COMPRESSION_MAX ||
+- vi->z_algorithmtype[++headnr] >= Z_EROFS_COMPRESSION_MAX) {
+- erofs_err(sb, "unknown HEAD%u format %u for nid %llu, please upgrade kernel",
+- headnr + 1, vi->z_algorithmtype[headnr], vi->nid);
+- err = -EOPNOTSUPP;
+- goto out_put_metabuf;
+- }
+-
+ if (!erofs_sb_has_big_pcluster(EROFS_SB(sb)) &&
+ vi->z_advise & (Z_EROFS_ADVISE_BIG_PCLUSTER_1 |
+ Z_EROFS_ADVISE_BIG_PCLUSTER_2)) {
+ erofs_err(sb, "per-inode big pcluster without sb feature for nid %llu",
+ vi->nid);
+ err = -EFSCORRUPTED;
+- goto out_put_metabuf;
++ goto out_unlock;
+ }
+ if (vi->datalayout == EROFS_INODE_COMPRESSED_COMPACT &&
+ !(vi->z_advise & Z_EROFS_ADVISE_BIG_PCLUSTER_1) ^
+@@ -722,32 +706,54 @@ static int z_erofs_fill_inode_lazy(struct inode *inode)
+ erofs_err(sb, "big pcluster head1/2 of compact indexes should be consistent for nid %llu",
+ vi->nid);
+ err = -EFSCORRUPTED;
+- goto out_put_metabuf;
++ goto out_unlock;
+ }
+
+ if (vi->z_idata_size ||
+ (vi->z_advise & Z_EROFS_ADVISE_FRAGMENT_PCLUSTER)) {
+- struct erofs_map_blocks map = {
++ struct erofs_map_blocks tm = {
+ .buf = __EROFS_BUF_INITIALIZER
+ };
+
+- err = z_erofs_map_blocks_fo(inode, &map,
++ err = z_erofs_map_blocks_fo(inode, &tm,
+ EROFS_GET_BLOCKS_FINDTAIL);
+- erofs_put_metabuf(&map.buf);
++ erofs_put_metabuf(&tm.buf);
+ if (err < 0)
+- goto out_put_metabuf;
++ goto out_unlock;
+ }
+ done:
+ /* paired with smp_mb() at the beginning of the function */
+ smp_mb();
+ set_bit(EROFS_I_Z_INITED_BIT, &vi->flags);
+-out_put_metabuf:
+- erofs_put_metabuf(&buf);
+ out_unlock:
+ clear_and_wake_up_bit(EROFS_I_BL_Z_BIT, &vi->flags);
+ return err;
+ }
+
++static int z_erofs_map_sanity_check(struct inode *inode,
++ struct erofs_map_blocks *map)
++{
++ struct erofs_sb_info *sbi = EROFS_I_SB(inode);
++
++ if (!(map->m_flags & EROFS_MAP_ENCODED))
++ return 0;
++ if (unlikely(map->m_algorithmformat >= Z_EROFS_COMPRESSION_RUNTIME_MAX)) {
++ erofs_err(inode->i_sb, "unknown algorithm %d @ pos %llu for nid %llu, please upgrade kernel",
++ map->m_algorithmformat, map->m_la, EROFS_I(inode)->nid);
++ return -EOPNOTSUPP;
++ }
++ if (unlikely(map->m_algorithmformat < Z_EROFS_COMPRESSION_MAX &&
++ !(sbi->available_compr_algs & (1 << map->m_algorithmformat)))) {
++ erofs_err(inode->i_sb, "inconsistent algorithmtype %u for nid %llu",
++ map->m_algorithmformat, EROFS_I(inode)->nid);
++ return -EFSCORRUPTED;
++ }
++ if (unlikely(map->m_plen > Z_EROFS_PCLUSTER_MAX_SIZE ||
++ map->m_llen > Z_EROFS_PCLUSTER_MAX_DSIZE))
++ return -EOPNOTSUPP;
++ return 0;
++}
++
+ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ int flags)
+ {
+@@ -760,7 +766,7 @@ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ map->m_la = inode->i_size;
+ map->m_flags = 0;
+ } else {
+- err = z_erofs_fill_inode_lazy(inode);
++ err = z_erofs_fill_inode(inode, map);
+ if (!err) {
+ if (vi->datalayout == EROFS_INODE_COMPRESSED_FULL &&
+ (vi->z_advise & Z_EROFS_ADVISE_EXTENTS))
+@@ -768,10 +774,8 @@ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ else
+ err = z_erofs_map_blocks_fo(inode, map, flags);
+ }
+- if (!err && (map->m_flags & EROFS_MAP_ENCODED) &&
+- unlikely(map->m_plen > Z_EROFS_PCLUSTER_MAX_SIZE ||
+- map->m_llen > Z_EROFS_PCLUSTER_MAX_DSIZE))
+- err = -EOPNOTSUPP;
++ if (!err)
++ err = z_erofs_map_sanity_check(inode, map);
+ if (err)
+ map->m_llen = 0;
+ }
+diff --git a/fs/exec.c b/fs/exec.c
+index ba400aafd64061..551e1cc5bf1e3e 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -2048,7 +2048,7 @@ static int proc_dointvec_minmax_coredump(const struct ctl_table *table, int writ
+ {
+ int error = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+
+- if (!error)
++ if (!error && !write)
+ validate_coredump_safety();
+ return error;
+ }
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index e21ec857f2abcf..52c72896e1c164 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -202,6 +202,14 @@ static int vfs_dentry_acceptable(void *context, struct dentry *dentry)
+ if (!ctx->flags)
+ return 1;
+
++ /*
++ * Verify that the decoded dentry itself has a valid id mapping.
++ * When the decoded dentry is the mountfd root, this also verifies
++ * that the mountfd inode has a valid id mapping.
++ */
++ if (!privileged_wrt_inode_uidgid(user_ns, idmap, d_inode(dentry)))
++ return 0;
++
+ /*
+ * It's racy as we're not taking rename_lock but we're able to ignore
+ * permissions and we just need an approximation whether we were able
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index e80cd8f2c049f9..5150aa25e64be9 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -1893,7 +1893,7 @@ static int fuse_retrieve(struct fuse_mount *fm, struct inode *inode,
+
+ index = outarg->offset >> PAGE_SHIFT;
+
+- while (num) {
++ while (num && ap->num_folios < num_pages) {
+ struct folio *folio;
+ unsigned int folio_offset;
+ unsigned int nr_bytes;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 47006d0753f1cd..b8dc8ce3e5564a 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -3013,7 +3013,7 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ .nodeid_out = ff_out->nodeid,
+ .fh_out = ff_out->fh,
+ .off_out = pos_out,
+- .len = len,
++ .len = min_t(size_t, len, UINT_MAX & PAGE_MASK),
+ .flags = flags
+ };
+ struct fuse_write_out outarg;
+@@ -3079,6 +3079,9 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
+ fc->no_copy_file_range = 1;
+ err = -EOPNOTSUPP;
+ }
++ if (!err && outarg.size > len)
++ err = -EIO;
++
+ if (err)
+ goto out;
+
+diff --git a/fs/fuse/passthrough.c b/fs/fuse/passthrough.c
+index 607ef735ad4ab3..eb97ac009e75d9 100644
+--- a/fs/fuse/passthrough.c
++++ b/fs/fuse/passthrough.c
+@@ -237,6 +237,11 @@ int fuse_backing_open(struct fuse_conn *fc, struct fuse_backing_map *map)
+ if (!file)
+ goto out;
+
++ /* read/write/splice/mmap passthrough only relevant for regular files */
++ res = d_is_dir(file->f_path.dentry) ? -EISDIR : -EINVAL;
++ if (!d_is_reg(file->f_path.dentry))
++ goto out_fput;
++
+ backing_sb = file_inode(file)->i_sb;
+ res = -ELOOP;
+ if (backing_sb->s_stack_depth >= fc->max_stack_depth)
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index a6c692cac61659..9adf36e6364b7d 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -70,6 +70,24 @@ static struct kernfs_open_node *of_on(struct kernfs_open_file *of)
+ !list_empty(&of->list));
+ }
+
++/* Get active reference to kernfs node for an open file */
++static struct kernfs_open_file *kernfs_get_active_of(struct kernfs_open_file *of)
++{
++ /* Skip if file was already released */
++ if (unlikely(of->released))
++ return NULL;
++
++ if (!kernfs_get_active(of->kn))
++ return NULL;
++
++ return of;
++}
++
++static void kernfs_put_active_of(struct kernfs_open_file *of)
++{
++ kernfs_put_active(of->kn);
++}
++
+ /**
+ * kernfs_deref_open_node_locked - Get kernfs_open_node corresponding to @kn
+ *
+@@ -139,7 +157,7 @@ static void kernfs_seq_stop_active(struct seq_file *sf, void *v)
+
+ if (ops->seq_stop)
+ ops->seq_stop(sf, v);
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ }
+
+ static void *kernfs_seq_start(struct seq_file *sf, loff_t *ppos)
+@@ -152,7 +170,7 @@ static void *kernfs_seq_start(struct seq_file *sf, loff_t *ppos)
+ * the ops aren't called concurrently for the same open file.
+ */
+ mutex_lock(&of->mutex);
+- if (!kernfs_get_active(of->kn))
++ if (!kernfs_get_active_of(of))
+ return ERR_PTR(-ENODEV);
+
+ ops = kernfs_ops(of->kn);
+@@ -238,7 +256,7 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ * the ops aren't called concurrently for the same open file.
+ */
+ mutex_lock(&of->mutex);
+- if (!kernfs_get_active(of->kn)) {
++ if (!kernfs_get_active_of(of)) {
+ len = -ENODEV;
+ mutex_unlock(&of->mutex);
+ goto out_free;
+@@ -252,7 +270,7 @@ static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+ else
+ len = -EINVAL;
+
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ mutex_unlock(&of->mutex);
+
+ if (len < 0)
+@@ -323,7 +341,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ * the ops aren't called concurrently for the same open file.
+ */
+ mutex_lock(&of->mutex);
+- if (!kernfs_get_active(of->kn)) {
++ if (!kernfs_get_active_of(of)) {
+ mutex_unlock(&of->mutex);
+ len = -ENODEV;
+ goto out_free;
+@@ -335,7 +353,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ else
+ len = -EINVAL;
+
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ mutex_unlock(&of->mutex);
+
+ if (len > 0)
+@@ -357,13 +375,13 @@ static void kernfs_vma_open(struct vm_area_struct *vma)
+ if (!of->vm_ops)
+ return;
+
+- if (!kernfs_get_active(of->kn))
++ if (!kernfs_get_active_of(of))
+ return;
+
+ if (of->vm_ops->open)
+ of->vm_ops->open(vma);
+
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ }
+
+ static vm_fault_t kernfs_vma_fault(struct vm_fault *vmf)
+@@ -375,14 +393,14 @@ static vm_fault_t kernfs_vma_fault(struct vm_fault *vmf)
+ if (!of->vm_ops)
+ return VM_FAULT_SIGBUS;
+
+- if (!kernfs_get_active(of->kn))
++ if (!kernfs_get_active_of(of))
+ return VM_FAULT_SIGBUS;
+
+ ret = VM_FAULT_SIGBUS;
+ if (of->vm_ops->fault)
+ ret = of->vm_ops->fault(vmf);
+
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ return ret;
+ }
+
+@@ -395,7 +413,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
+ if (!of->vm_ops)
+ return VM_FAULT_SIGBUS;
+
+- if (!kernfs_get_active(of->kn))
++ if (!kernfs_get_active_of(of))
+ return VM_FAULT_SIGBUS;
+
+ ret = 0;
+@@ -404,7 +422,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
+ else
+ file_update_time(file);
+
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ return ret;
+ }
+
+@@ -418,14 +436,14 @@ static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
+ if (!of->vm_ops)
+ return -EINVAL;
+
+- if (!kernfs_get_active(of->kn))
++ if (!kernfs_get_active_of(of))
+ return -EINVAL;
+
+ ret = -EINVAL;
+ if (of->vm_ops->access)
+ ret = of->vm_ops->access(vma, addr, buf, len, write);
+
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ return ret;
+ }
+
+@@ -455,7 +473,7 @@ static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
+ mutex_lock(&of->mutex);
+
+ rc = -ENODEV;
+- if (!kernfs_get_active(of->kn))
++ if (!kernfs_get_active_of(of))
+ goto out_unlock;
+
+ ops = kernfs_ops(of->kn);
+@@ -490,7 +508,7 @@ static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
+ }
+ vma->vm_ops = &kernfs_vm_ops;
+ out_put:
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ out_unlock:
+ mutex_unlock(&of->mutex);
+
+@@ -852,7 +870,7 @@ static __poll_t kernfs_fop_poll(struct file *filp, poll_table *wait)
+ struct kernfs_node *kn = kernfs_dentry_node(filp->f_path.dentry);
+ __poll_t ret;
+
+- if (!kernfs_get_active(kn))
++ if (!kernfs_get_active_of(of))
+ return DEFAULT_POLLMASK|EPOLLERR|EPOLLPRI;
+
+ if (kn->attr.ops->poll)
+@@ -860,7 +878,7 @@ static __poll_t kernfs_fop_poll(struct file *filp, poll_table *wait)
+ else
+ ret = kernfs_generic_poll(of, wait);
+
+- kernfs_put_active(kn);
++ kernfs_put_active_of(of);
+ return ret;
+ }
+
+@@ -875,7 +893,7 @@ static loff_t kernfs_fop_llseek(struct file *file, loff_t offset, int whence)
+ * the ops aren't called concurrently for the same open file.
+ */
+ mutex_lock(&of->mutex);
+- if (!kernfs_get_active(of->kn)) {
++ if (!kernfs_get_active_of(of)) {
+ mutex_unlock(&of->mutex);
+ return -ENODEV;
+ }
+@@ -886,7 +904,7 @@ static loff_t kernfs_fop_llseek(struct file *file, loff_t offset, int whence)
+ else
+ ret = generic_file_llseek(file, offset, whence);
+
+- kernfs_put_active(of->kn);
++ kernfs_put_active_of(of);
+ mutex_unlock(&of->mutex);
+ return ret;
+ }
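
A condensed sketch (not part of the patch, kernel context assumed) of the pattern
every kernfs file operation now follows: route the active-reference acquisition
through the open file, so an already-released file can never reach the kernfs ops.
The function name is hypothetical.

static int example_kernfs_op(struct kernfs_open_file *of)
{
	int ret;

	if (!kernfs_get_active_of(of))	/* fails once of->released is set */
		return -ENODEV;

	ret = 0;	/* ... invoke kernfs_ops(of->kn) callbacks here ... */

	kernfs_put_active_of(of);
	return ret;
}
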
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 3bcf5c204578c1..97bd9d2a4b0cde 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -890,6 +890,8 @@ static void nfs_server_set_fsinfo(struct nfs_server *server,
+
+ if (fsinfo->xattr_support)
+ server->caps |= NFS_CAP_XATTR;
++ else
++ server->caps &= ~NFS_CAP_XATTR;
+ #endif
+ }
+
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 033feeab8c346e..a16a619fb8c33b 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -437,10 +437,11 @@ static void nfs_invalidate_folio(struct folio *folio, size_t offset,
+ dfprintk(PAGECACHE, "NFS: invalidate_folio(%lu, %zu, %zu)\n",
+ folio->index, offset, length);
+
+- if (offset != 0 || length < folio_size(folio))
+- return;
+ /* Cancel any unstarted writes on this page */
+- nfs_wb_folio_cancel(inode, folio);
++ if (offset != 0 || length < folio_size(folio))
++ nfs_wb_folio(inode, folio);
++ else
++ nfs_wb_folio_cancel(inode, folio);
+ folio_wait_private_2(folio); /* [DEPRECATED] */
+ trace_nfs_invalidate_folio(inode, folio_pos(folio) + offset, length);
+ }
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 8dc921d835388e..9edb5f9b0c4e47 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -293,7 +293,7 @@ ff_lseg_match_mirrors(struct pnfs_layout_segment *l1,
+ struct pnfs_layout_segment *l2)
+ {
+ const struct nfs4_ff_layout_segment *fl1 = FF_LAYOUT_LSEG(l1);
+- const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l1);
++ const struct nfs4_ff_layout_segment *fl2 = FF_LAYOUT_LSEG(l2);
+ u32 i;
+
+ if (fl1->mirror_array_cnt != fl2->mirror_array_cnt)
+@@ -773,8 +773,11 @@ ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
+ continue;
+
+ if (check_device &&
+- nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node))
++ nfs4_test_deviceid_unavailable(&mirror->mirror_ds->id_node)) {
++ // reinitialize the error state in case this is the last iteration
++ ds = ERR_PTR(-EINVAL);
+ continue;
++ }
+
+ *best_idx = idx;
+ break;
+@@ -804,7 +807,7 @@ ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
+ struct nfs4_pnfs_ds *ds;
+
+ ds = ff_layout_choose_valid_ds_for_read(lseg, start_idx, best_idx);
+- if (ds)
++ if (!IS_ERR(ds))
+ return ds;
+ return ff_layout_choose_any_ds_for_read(lseg, start_idx, best_idx);
+ }
+@@ -818,7 +821,7 @@ ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio,
+
+ ds = ff_layout_choose_best_ds_for_read(lseg, pgio->pg_mirror_idx,
+ best_idx);
+- if (ds || !pgio->pg_mirror_idx)
++ if (!IS_ERR(ds) || !pgio->pg_mirror_idx)
+ return ds;
+ return ff_layout_choose_best_ds_for_read(lseg, 0, best_idx);
+ }
+@@ -868,7 +871,7 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ req->wb_nio = 0;
+
+ ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+- if (!ds) {
++ if (IS_ERR(ds)) {
+ if (!ff_layout_no_fallback_to_mds(pgio->pg_lseg))
+ goto out_mds;
+ pnfs_generic_pg_cleanup(pgio);
+@@ -1072,11 +1075,13 @@ static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
+ {
+ u32 idx = hdr->pgio_mirror_idx + 1;
+ u32 new_idx = 0;
++ struct nfs4_pnfs_ds *ds;
+
+- if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx))
+- ff_layout_send_layouterror(hdr->lseg);
+- else
++ ds = ff_layout_choose_any_ds_for_read(hdr->lseg, idx, &new_idx);
++ if (IS_ERR(ds))
+ pnfs_error_mark_layout_for_return(hdr->inode, hdr->lseg);
++ else
++ ff_layout_send_layouterror(hdr->lseg);
+ pnfs_read_resend_pnfs(hdr, new_idx);
+ }
+
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index a2fa6bc4d74e37..a32cc45425e287 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -761,8 +761,10 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ trace_nfs_setattr_enter(inode);
+
+ /* Write all dirty data */
+- if (S_ISREG(inode->i_mode))
++ if (S_ISREG(inode->i_mode)) {
++ nfs_file_block_o_direct(NFS_I(inode));
+ nfs_sync_inode(inode);
++ }
+
+ fattr = nfs_alloc_fattr_with_label(NFS_SERVER(inode));
+ if (fattr == NULL) {
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 9dcbc339649221..0ef0fc6aba3b3c 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -531,6 +531,16 @@ static inline bool nfs_file_io_is_buffered(struct nfs_inode *nfsi)
+ return test_bit(NFS_INO_ODIRECT, &nfsi->flags) == 0;
+ }
+
++/* Must be called with exclusively locked inode->i_rwsem */
++static inline void nfs_file_block_o_direct(struct nfs_inode *nfsi)
++{
++ if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
++ clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
++ inode_dio_wait(&nfsi->vfs_inode);
++ }
++}
++
++
+ /* namespace.c */
+ #define NFS_PATH_CANONICAL 1
+ extern char *nfs_path(char **p, struct dentry *dentry,
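
A minimal sketch (not part of the patch, kernel context assumed) of how the relocated
helper is meant to be used: every path that flushes dirty data while holding i_rwsem
exclusively first drains in-flight O_DIRECT I/O, as the nfs42proc.c and nfs4file.c
hunks below do. The function name is hypothetical.

static int example_nfs_flush(struct inode *inode)
{
	int ret;

	inode_lock(inode);			/* exclusive i_rwsem */
	nfs_file_block_o_direct(NFS_I(inode));	/* wait for O_DIRECT I/O */
	ret = nfs_sync_inode(inode);
	inode_unlock(inode);
	return ret;
}
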
+diff --git a/fs/nfs/io.c b/fs/nfs/io.c
+index 3388faf2acb9f5..d275b0a250bf3b 100644
+--- a/fs/nfs/io.c
++++ b/fs/nfs/io.c
+@@ -14,15 +14,6 @@
+
+ #include "internal.h"
+
+-/* Call with exclusively locked inode->i_rwsem */
+-static void nfs_block_o_direct(struct nfs_inode *nfsi, struct inode *inode)
+-{
+- if (test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
+- clear_bit(NFS_INO_ODIRECT, &nfsi->flags);
+- inode_dio_wait(inode);
+- }
+-}
+-
+ /**
+ * nfs_start_io_read - declare the file is being used for buffered reads
+ * @inode: file inode
+@@ -57,7 +48,7 @@ nfs_start_io_read(struct inode *inode)
+ err = down_write_killable(&inode->i_rwsem);
+ if (err)
+ return err;
+- nfs_block_o_direct(nfsi, inode);
++ nfs_file_block_o_direct(nfsi);
+ downgrade_write(&inode->i_rwsem);
+
+ return 0;
+@@ -90,7 +81,7 @@ nfs_start_io_write(struct inode *inode)
+
+ err = down_write_killable(&inode->i_rwsem);
+ if (!err)
+- nfs_block_o_direct(NFS_I(inode), inode);
++ nfs_file_block_o_direct(NFS_I(inode));
+ return err;
+ }
+
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 510d0a16cfe917..e2213ef18baede 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -453,12 +453,13 @@ static void nfs_local_call_read(struct work_struct *work)
+ nfs_local_iter_init(&iter, iocb, READ);
+
+ status = filp->f_op->read_iter(&iocb->kiocb, &iter);
++
++ revert_creds(save_cred);
++
+ if (status != -EIOCBQUEUED) {
+ nfs_local_read_done(iocb, status);
+ nfs_local_pgio_release(iocb);
+ }
+-
+- revert_creds(save_cred);
+ }
+
+ static int
+@@ -649,14 +650,15 @@ static void nfs_local_call_write(struct work_struct *work)
+ file_start_write(filp);
+ status = filp->f_op->write_iter(&iocb->kiocb, &iter);
+ file_end_write(filp);
++
++ revert_creds(save_cred);
++ current->flags = old_flags;
++
+ if (status != -EIOCBQUEUED) {
+ nfs_local_write_done(iocb, status);
+ nfs_local_vfs_getattr(iocb);
+ nfs_local_pgio_release(iocb);
+ }
+-
+- revert_creds(save_cred);
+- current->flags = old_flags;
+ }
+
+ static int
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 01c01f45358b7c..48ee3d5d89c4ae 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -114,6 +114,7 @@ static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
+ exception.inode = inode;
+ exception.state = lock->open_context->state;
+
++ nfs_file_block_o_direct(NFS_I(inode));
+ err = nfs_sync_inode(inode);
+ if (err)
+ goto out;
+@@ -430,6 +431,7 @@ static ssize_t _nfs42_proc_copy(struct file *src,
+ return status;
+ }
+
++ nfs_file_block_o_direct(NFS_I(dst_inode));
+ status = nfs_sync_inode(dst_inode);
+ if (status)
+ return status;
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 5e9d66f3466c8d..1fa69a0b33ab19 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -291,9 +291,11 @@ static loff_t nfs42_remap_file_range(struct file *src_file, loff_t src_off,
+
+ /* flush all pending writes on both src and dst so that server
+ * has the latest data */
++ nfs_file_block_o_direct(NFS_I(src_inode));
+ ret = nfs_sync_inode(src_inode);
+ if (ret)
+ goto out_unlock;
++ nfs_file_block_o_direct(NFS_I(dst_inode));
+ ret = nfs_sync_inode(dst_inode);
+ if (ret)
+ goto out_unlock;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 7e203857f46687..8d492e3b216312 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -4007,8 +4007,10 @@ static int _nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *f
+ res.attr_bitmask[2];
+ }
+ memcpy(server->attr_bitmask, res.attr_bitmask, sizeof(server->attr_bitmask));
+- server->caps &= ~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS |
+- NFS_CAP_SYMLINKS| NFS_CAP_SECURITY_LABEL);
++ server->caps &=
++ ~(NFS_CAP_ACLS | NFS_CAP_HARDLINKS | NFS_CAP_SYMLINKS |
++ NFS_CAP_SECURITY_LABEL | NFS_CAP_FS_LOCATIONS |
++ NFS_CAP_OPEN_XOR | NFS_CAP_DELEGTIME);
+ server->fattr_valid = NFS_ATTR_FATTR_V4;
+ if (res.attr_bitmask[0] & FATTR4_WORD0_ACL &&
+ res.acl_bitmask & ACL4_SUPPORT_ALLOW_ACL)
+@@ -4082,7 +4084,6 @@ int nfs4_server_capabilities(struct nfs_server *server, struct nfs_fh *fhandle)
+ };
+ int err;
+
+- nfs_server_set_init_caps(server);
+ do {
+ err = nfs4_handle_exception(server,
+ _nfs4_server_capabilities(server, fhandle),
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index ff29335ed85999..08fd1c0d45ec27 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -2045,6 +2045,7 @@ int nfs_wb_folio_cancel(struct inode *inode, struct folio *folio)
+ * release it */
+ nfs_inode_remove_request(req);
+ nfs_unlock_and_release_request(req);
++ folio_cancel_dirty(folio);
+ }
+
+ return ret;
+diff --git a/fs/ocfs2/extent_map.c b/fs/ocfs2/extent_map.c
+index 930150ed5db15f..ef147e8b327126 100644
+--- a/fs/ocfs2/extent_map.c
++++ b/fs/ocfs2/extent_map.c
+@@ -706,6 +706,8 @@ int ocfs2_extent_map_get_blocks(struct inode *inode, u64 v_blkno, u64 *p_blkno,
+ * it not only handles the fiemap for inlined files, but also deals
+ * with the fast symlink, cause they have no difference for extent
+ * mapping per se.
++ *
++ * Must be called with the ip_alloc_sem held for reading.
+ */
+ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ struct fiemap_extent_info *fieinfo,
+@@ -717,6 +719,7 @@ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ u64 phys;
+ u32 flags = FIEMAP_EXTENT_DATA_INLINE|FIEMAP_EXTENT_LAST;
+ struct ocfs2_inode_info *oi = OCFS2_I(inode);
++ lockdep_assert_held_read(&oi->ip_alloc_sem);
+
+ di = (struct ocfs2_dinode *)di_bh->b_data;
+ if (ocfs2_inode_is_fast_symlink(inode))
+@@ -732,8 +735,11 @@ static int ocfs2_fiemap_inline(struct inode *inode, struct buffer_head *di_bh,
+ phys += offsetof(struct ocfs2_dinode,
+ id2.i_data.id_data);
+
++ /* Release the ip_alloc_sem to prevent deadlock on page fault */
++ up_read(&OCFS2_I(inode)->ip_alloc_sem);
+ ret = fiemap_fill_next_extent(fieinfo, 0, phys, id_count,
+ flags);
++ down_read(&OCFS2_I(inode)->ip_alloc_sem);
+ if (ret < 0)
+ return ret;
+ }
+@@ -802,9 +808,11 @@ int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ len_bytes = (u64)le16_to_cpu(rec.e_leaf_clusters) << osb->s_clustersize_bits;
+ phys_bytes = le64_to_cpu(rec.e_blkno) << osb->sb->s_blocksize_bits;
+ virt_bytes = (u64)le32_to_cpu(rec.e_cpos) << osb->s_clustersize_bits;
+-
++ /* Release the ip_alloc_sem to prevent deadlock on page fault */
++ up_read(&OCFS2_I(inode)->ip_alloc_sem);
+ ret = fiemap_fill_next_extent(fieinfo, virt_bytes, phys_bytes,
+ len_bytes, fe_flags);
++ down_read(&OCFS2_I(inode)->ip_alloc_sem);
+ if (ret)
+ break;
+
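
A short sketch (not part of the patch, kernel context assumed) of the deadlock-avoidance
pattern both hunks apply: fiemap_fill_next_extent() copies to a user buffer and may take
a page fault, and faulting while holding ip_alloc_sem can deadlock against the fault path
that acquires it, so the semaphore is dropped around each call. The wrapper name is
hypothetical.

static int fill_extent_unlocked(struct ocfs2_inode_info *oi,
				struct fiemap_extent_info *fieinfo,
				u64 logical, u64 phys, u64 len, u32 flags)
{
	int ret;

	up_read(&oi->ip_alloc_sem);	/* allow the copy_to_user() fault */
	ret = fiemap_fill_next_extent(fieinfo, logical, phys, len, flags);
	down_read(&oi->ip_alloc_sem);
	return ret;
}
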
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index 409bc1d11eca39..8e1e48760ffe05 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -390,7 +390,8 @@ struct proc_dir_entry *proc_register(struct proc_dir_entry *dir,
+ if (proc_alloc_inum(&dp->low_ino))
+ goto out_free_entry;
+
+- pde_set_flags(dp);
++ if (!S_ISDIR(dp->mode))
++ pde_set_flags(dp);
+
+ write_lock(&proc_subdir_lock);
+ dp->parent = dir;
+diff --git a/fs/resctrl/ctrlmondata.c b/fs/resctrl/ctrlmondata.c
+index d98e0d2de09fd0..3c39cfacb25183 100644
+--- a/fs/resctrl/ctrlmondata.c
++++ b/fs/resctrl/ctrlmondata.c
+@@ -625,11 +625,11 @@ int rdtgroup_mondata_show(struct seq_file *m, void *arg)
+ */
+ list_for_each_entry(d, &r->mon_domains, hdr.list) {
+ if (d->ci_id == domid) {
+- rr.ci_id = d->ci_id;
+ cpu = cpumask_any(&d->hdr.cpu_mask);
+ ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+ if (!ci)
+ continue;
++ rr.ci = ci;
+ mon_event_read(&rr, r, NULL, rdtgrp,
+ &ci->shared_cpu_map, evtid, false);
+ goto checkresult;
+diff --git a/fs/resctrl/internal.h b/fs/resctrl/internal.h
+index 0a1eedba2b03ad..9a8cf6f11151d9 100644
+--- a/fs/resctrl/internal.h
++++ b/fs/resctrl/internal.h
+@@ -98,7 +98,7 @@ struct mon_data {
+ * domains in @r sharing L3 @ci.id
+ * @evtid: Which monitor event to read.
+ * @first: Initialize MBM counter when true.
+- * @ci_id: Cacheinfo id for L3. Only set when @d is NULL. Used when summing domains.
++ * @ci: Cacheinfo for L3. Only set when @d is NULL. Used when summing domains.
+ * @err: Error encountered when reading counter.
+ * @val: Returned value of event counter. If @rgrp is a parent resource group,
+ * @val includes the sum of event counts from its child resource groups.
+@@ -112,7 +112,7 @@ struct rmid_read {
+ struct rdt_mon_domain *d;
+ enum resctrl_event_id evtid;
+ bool first;
+- unsigned int ci_id;
++ struct cacheinfo *ci;
+ int err;
+ u64 val;
+ void *arch_mon_ctx;
+diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
+index f5637855c3acac..7326c28a7908f3 100644
+--- a/fs/resctrl/monitor.c
++++ b/fs/resctrl/monitor.c
+@@ -361,7 +361,6 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ {
+ int cpu = smp_processor_id();
+ struct rdt_mon_domain *d;
+- struct cacheinfo *ci;
+ struct mbm_state *m;
+ int err, ret;
+ u64 tval = 0;
+@@ -389,8 +388,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ }
+
+ /* Summing domains that share a cache, must be on a CPU for that cache. */
+- ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
+- if (!ci || ci->id != rr->ci_id)
++ if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map))
+ return -EINVAL;
+
+ /*
+@@ -402,7 +400,7 @@ static int __mon_event_count(u32 closid, u32 rmid, struct rmid_read *rr)
+ */
+ ret = -EINVAL;
+ list_for_each_entry(d, &rr->r->mon_domains, hdr.list) {
+- if (d->ci_id != rr->ci_id)
++ if (d->ci_id != rr->ci->id)
+ continue;
+ err = resctrl_arch_rmid_read(rr->r, d, closid, rmid,
+ rr->evtid, &tval, rr->arch_mon_ctx);
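
A one-line sketch (not part of the patch, kernel context assumed) of what carrying the
full struct cacheinfo in rmid_read buys: verifying that the executing CPU belongs to the
target L3 reduces to a cpumask test instead of a fresh cacheinfo lookup. The helper name
is hypothetical.

static bool on_target_cache(const struct rmid_read *rr)
{
	return cpumask_test_cpu(smp_processor_id(), &rr->ci->shared_cpu_map);
}
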
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 89160bc34d3539..4aaa9e8d9cbeff 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -87,7 +87,7 @@
+ #define SMB_INTERFACE_POLL_INTERVAL 600
+
+ /* maximum number of PDUs in one compound */
+-#define MAX_COMPOUND 7
++#define MAX_COMPOUND 10
+
+ /*
+ * Default number of credits to keep available for SMB3.
+@@ -1877,9 +1877,12 @@ static inline bool is_replayable_error(int error)
+
+
+ /* cifs_get_writable_file() flags */
+-#define FIND_WR_ANY 0
+-#define FIND_WR_FSUID_ONLY 1
+-#define FIND_WR_WITH_DELETE 2
++enum cifs_writable_file_flags {
++ FIND_WR_ANY = 0U,
++ FIND_WR_FSUID_ONLY = (1U << 0),
++ FIND_WR_WITH_DELETE = (1U << 1),
++ FIND_WR_NO_PENDING_DELETE = (1U << 2),
++};
+
+ #define MID_FREE 0
+ #define MID_REQUEST_ALLOCATED 1
+@@ -2339,6 +2342,8 @@ struct smb2_compound_vars {
+ struct kvec qi_iov;
+ struct kvec io_iov[SMB2_IOCTL_IOV_SIZE];
+ struct kvec si_iov[SMB2_SET_INFO_IOV_SIZE];
++ struct kvec unlink_iov[SMB2_SET_INFO_IOV_SIZE];
++ struct kvec rename_iov[SMB2_SET_INFO_IOV_SIZE];
+ struct kvec close_iov;
+ struct smb2_file_rename_info_hdr rename_info;
+ struct smb2_file_link_info_hdr link_info;
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 1421bde045c21d..8b407d2a8516d1 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -998,7 +998,10 @@ int cifs_open(struct inode *inode, struct file *file)
+
+ /* Get the cached handle as SMB2 close is deferred */
+ if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) {
+- rc = cifs_get_writable_path(tcon, full_path, FIND_WR_FSUID_ONLY, &cfile);
++ rc = cifs_get_writable_path(tcon, full_path,
++ FIND_WR_FSUID_ONLY |
++ FIND_WR_NO_PENDING_DELETE,
++ &cfile);
+ } else {
+ rc = cifs_get_readable_path(tcon, full_path, &cfile);
+ }
+@@ -2530,6 +2533,9 @@ cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, int flags,
+ continue;
+ if (with_delete && !(open_file->fid.access & DELETE))
+ continue;
++ if ((flags & FIND_WR_NO_PENDING_DELETE) &&
++ open_file->status_file_deleted)
++ continue;
+ if (OPEN_FMODE(open_file->f_flags) & FMODE_WRITE) {
+ if (!open_file->invalidHandle) {
+ /* found a good writable file */
+@@ -2647,6 +2653,16 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name,
+ spin_unlock(&tcon->open_file_lock);
+ free_dentry_path(page);
+ *ret_file = find_readable_file(cinode, 0);
++ if (*ret_file) {
++ spin_lock(&cinode->open_file_lock);
++ if ((*ret_file)->status_file_deleted) {
++ spin_unlock(&cinode->open_file_lock);
++ cifsFileInfo_put(*ret_file);
++ *ret_file = NULL;
++ } else {
++ spin_unlock(&cinode->open_file_lock);
++ }
++ }
+ return *ret_file ? 0 : -ENOENT;
+ }
+
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index fe453a4b3dc831..11d442e8b3d622 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1931,7 +1931,7 @@ cifs_drop_nlink(struct inode *inode)
+ * but will return the EACCES to the caller. Note that the VFS does not call
+ * unlink on negative dentries currently.
+ */
+-int cifs_unlink(struct inode *dir, struct dentry *dentry)
++static int __cifs_unlink(struct inode *dir, struct dentry *dentry, bool sillyrename)
+ {
+ int rc = 0;
+ unsigned int xid;
+@@ -2003,7 +2003,11 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ goto psx_del_no_retry;
+ }
+
+- rc = server->ops->unlink(xid, tcon, full_path, cifs_sb, dentry);
++ if (sillyrename || (server->vals->protocol_id > SMB10_PROT_ID &&
++ d_is_positive(dentry) && d_count(dentry) > 2))
++ rc = -EBUSY;
++ else
++ rc = server->ops->unlink(xid, tcon, full_path, cifs_sb, dentry);
+
+ psx_del_no_retry:
+ if (!rc) {
+@@ -2071,6 +2075,11 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ return rc;
+ }
+
++int cifs_unlink(struct inode *dir, struct dentry *dentry)
++{
++ return __cifs_unlink(dir, dentry, false);
++}
++
+ static int
+ cifs_mkdir_qinfo(struct inode *parent, struct dentry *dentry, umode_t mode,
+ const char *full_path, struct cifs_sb_info *cifs_sb,
+@@ -2358,14 +2367,16 @@ int cifs_rmdir(struct inode *inode, struct dentry *direntry)
+ rc = server->ops->rmdir(xid, tcon, full_path, cifs_sb);
+ cifs_put_tlink(tlink);
+
++ cifsInode = CIFS_I(d_inode(direntry));
++
+ if (!rc) {
++ set_bit(CIFS_INO_DELETE_PENDING, &cifsInode->flags);
+ spin_lock(&d_inode(direntry)->i_lock);
+ i_size_write(d_inode(direntry), 0);
+ clear_nlink(d_inode(direntry));
+ spin_unlock(&d_inode(direntry)->i_lock);
+ }
+
+- cifsInode = CIFS_I(d_inode(direntry));
+ /* force revalidate to go get info when needed */
+ cifsInode->time = 0;
+
+@@ -2458,8 +2469,11 @@ cifs_do_rename(const unsigned int xid, struct dentry *from_dentry,
+ }
+ #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
+ do_rename_exit:
+- if (rc == 0)
++ if (rc == 0) {
+ d_move(from_dentry, to_dentry);
++ /* Force a new lookup */
++ d_drop(from_dentry);
++ }
+ cifs_put_tlink(tlink);
+ return rc;
+ }
+@@ -2470,6 +2484,7 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ struct dentry *target_dentry, unsigned int flags)
+ {
+ const char *from_name, *to_name;
++ struct TCP_Server_Info *server;
+ void *page1, *page2;
+ struct cifs_sb_info *cifs_sb;
+ struct tcon_link *tlink;
+@@ -2505,6 +2520,7 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ if (IS_ERR(tlink))
+ return PTR_ERR(tlink);
+ tcon = tlink_tcon(tlink);
++ server = tcon->ses->server;
+
+ page1 = alloc_dentry_path();
+ page2 = alloc_dentry_path();
+@@ -2591,19 +2607,53 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+
+ unlink_target:
+ #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */
+-
+- /* Try unlinking the target dentry if it's not negative */
+- if (d_really_is_positive(target_dentry) && (rc == -EACCES || rc == -EEXIST)) {
+- if (d_is_dir(target_dentry))
+- tmprc = cifs_rmdir(target_dir, target_dentry);
+- else
+- tmprc = cifs_unlink(target_dir, target_dentry);
+- if (tmprc)
+- goto cifs_rename_exit;
+- rc = cifs_do_rename(xid, source_dentry, from_name,
+- target_dentry, to_name);
+- if (!rc)
+- rehash = false;
++ if (d_really_is_positive(target_dentry)) {
++ if (!rc) {
++ struct inode *inode = d_inode(target_dentry);
++ /*
++ * Samba and ksmbd servers allow renaming a target
++ * directory that is open, so make sure to update
++ * ->i_nlink and then mark it as delete pending.
++ */
++ if (S_ISDIR(inode->i_mode)) {
++ drop_cached_dir_by_name(xid, tcon, to_name, cifs_sb);
++ spin_lock(&inode->i_lock);
++ i_size_write(inode, 0);
++ clear_nlink(inode);
++ spin_unlock(&inode->i_lock);
++ set_bit(CIFS_INO_DELETE_PENDING, &CIFS_I(inode)->flags);
++ CIFS_I(inode)->time = 0; /* force reval */
++ inode_set_ctime_current(inode);
++ inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
++ }
++ } else if (rc == -EACCES || rc == -EEXIST) {
++ /*
++ * Rename failed, possibly due to a busy target.
++ * Retry it by unlinking the target first.
++ */
++ if (d_is_dir(target_dentry)) {
++ tmprc = cifs_rmdir(target_dir, target_dentry);
++ } else {
++ tmprc = __cifs_unlink(target_dir, target_dentry,
++ server->vals->protocol_id > SMB10_PROT_ID);
++ }
++ if (tmprc) {
++ /*
++ * Some servers will return STATUS_ACCESS_DENIED
++ * or STATUS_DIRECTORY_NOT_EMPTY when failing to
++ * rename a non-empty directory. Make sure to
++ * propagate the appropriate error back to
++ * userspace.
++ */
++ if (tmprc == -EEXIST || tmprc == -ENOTEMPTY)
++ rc = tmprc;
++ goto cifs_rename_exit;
++ }
++ rc = cifs_do_rename(xid, source_dentry, from_name,
++ target_dentry, to_name);
++ if (!rc)
++ rehash = false;
++ }
+ }
+
+ /* force revalidate to go get info when needed */
+@@ -2629,6 +2679,8 @@ cifs_dentry_needs_reval(struct dentry *dentry)
+ struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+ struct cached_fid *cfid = NULL;
+
++ if (test_bit(CIFS_INO_DELETE_PENDING, &cifs_i->flags))
++ return false;
+ if (cifs_i->time == 0)
+ return true;
+
+diff --git a/fs/smb/client/smb2glob.h b/fs/smb/client/smb2glob.h
+index 224495322a05da..e56e4d402f1382 100644
+--- a/fs/smb/client/smb2glob.h
++++ b/fs/smb/client/smb2glob.h
+@@ -30,10 +30,9 @@ enum smb2_compound_ops {
+ SMB2_OP_QUERY_DIR,
+ SMB2_OP_MKDIR,
+ SMB2_OP_RENAME,
+- SMB2_OP_DELETE,
+ SMB2_OP_HARDLINK,
+ SMB2_OP_SET_EOF,
+- SMB2_OP_RMDIR,
++ SMB2_OP_UNLINK,
+ SMB2_OP_POSIX_QUERY_INFO,
+ SMB2_OP_SET_REPARSE,
+ SMB2_OP_GET_REPARSE,
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 8b271bbe41c471..86cad8ee8e6f3b 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -346,9 +346,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ trace_smb3_posix_query_info_compound_enter(xid, tcon->tid,
+ ses->Suid, full_path);
+ break;
+- case SMB2_OP_DELETE:
+- trace_smb3_delete_enter(xid, tcon->tid, ses->Suid, full_path);
+- break;
+ case SMB2_OP_MKDIR:
+ /*
+ * Directories are created through parameters in the
+@@ -356,23 +353,40 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ */
+ trace_smb3_mkdir_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+- case SMB2_OP_RMDIR:
+- rqst[num_rqst].rq_iov = &vars->si_iov[0];
++ case SMB2_OP_UNLINK:
++ rqst[num_rqst].rq_iov = vars->unlink_iov;
+ rqst[num_rqst].rq_nvec = 1;
+
+ size[0] = 1; /* sizeof __u8 See MS-FSCC section 2.4.11 */
+ data[0] = &delete_pending[0];
+
+- rc = SMB2_set_info_init(tcon, server,
+- &rqst[num_rqst], COMPOUND_FID,
+- COMPOUND_FID, current->tgid,
+- FILE_DISPOSITION_INFORMATION,
+- SMB2_O_INFO_FILE, 0, data, size);
+- if (rc)
++ if (cfile) {
++ rc = SMB2_set_info_init(tcon, server,
++ &rqst[num_rqst],
++ cfile->fid.persistent_fid,
++ cfile->fid.volatile_fid,
++ current->tgid,
++ FILE_DISPOSITION_INFORMATION,
++ SMB2_O_INFO_FILE, 0,
++ data, size);
++ } else {
++ rc = SMB2_set_info_init(tcon, server,
++ &rqst[num_rqst],
++ COMPOUND_FID,
++ COMPOUND_FID,
++ current->tgid,
++ FILE_DISPOSITION_INFORMATION,
++ SMB2_O_INFO_FILE, 0,
++ data, size);
++ }
++ if (!rc && (!cfile || num_rqst > 1)) {
++ smb2_set_next_command(tcon, &rqst[num_rqst]);
++ smb2_set_related(&rqst[num_rqst]);
++ } else if (rc) {
+ goto finished;
+- smb2_set_next_command(tcon, &rqst[num_rqst]);
+- smb2_set_related(&rqst[num_rqst++]);
+- trace_smb3_rmdir_enter(xid, tcon->tid, ses->Suid, full_path);
++ }
++ num_rqst++;
++ trace_smb3_unlink_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_SET_EOF:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -442,7 +456,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ ses->Suid, full_path);
+ break;
+ case SMB2_OP_RENAME:
+- rqst[num_rqst].rq_iov = &vars->si_iov[0];
++ rqst[num_rqst].rq_iov = vars->rename_iov;
+ rqst[num_rqst].rq_nvec = 2;
+
+ len = in_iov[i].iov_len;
+@@ -732,19 +746,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ trace_smb3_posix_query_info_compound_done(xid, tcon->tid,
+ ses->Suid);
+ break;
+- case SMB2_OP_DELETE:
+- if (rc)
+- trace_smb3_delete_err(xid, tcon->tid, ses->Suid, rc);
+- else {
+- /*
+- * If dentry (hence, inode) is NULL, lease break is going to
+- * take care of degrading leases on handles for deleted files.
+- */
+- if (inode)
+- cifs_mark_open_handles_for_deleted_file(inode, full_path);
+- trace_smb3_delete_done(xid, tcon->tid, ses->Suid);
+- }
+- break;
+ case SMB2_OP_MKDIR:
+ if (rc)
+ trace_smb3_mkdir_err(xid, tcon->tid, ses->Suid, rc);
+@@ -765,11 +766,11 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ trace_smb3_rename_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+- case SMB2_OP_RMDIR:
+- if (rc)
+- trace_smb3_rmdir_err(xid, tcon->tid, ses->Suid, rc);
++ case SMB2_OP_UNLINK:
++ if (!rc)
++ trace_smb3_unlink_done(xid, tcon->tid, ses->Suid);
+ else
+- trace_smb3_rmdir_done(xid, tcon->tid, ses->Suid);
++ trace_smb3_unlink_err(xid, tcon->tid, ses->Suid, rc);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_SET_EOF:
+@@ -1165,7 +1166,7 @@ smb2_rmdir(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
+ FILE_OPEN, CREATE_NOT_FILE, ACL_NO_MODE);
+ return smb2_compound_op(xid, tcon, cifs_sb,
+ name, &oparms, NULL,
+- &(int){SMB2_OP_RMDIR}, 1,
++ &(int){SMB2_OP_UNLINK}, 1,
+ NULL, NULL, NULL, NULL);
+ }
+
+@@ -1174,20 +1175,29 @@ smb2_unlink(const unsigned int xid, struct cifs_tcon *tcon, const char *name,
+ struct cifs_sb_info *cifs_sb, struct dentry *dentry)
+ {
+ struct cifs_open_parms oparms;
++ struct inode *inode = NULL;
++ int rc;
+
+- oparms = CIFS_OPARMS(cifs_sb, tcon, name,
+- DELETE, FILE_OPEN,
+- CREATE_DELETE_ON_CLOSE | OPEN_REPARSE_POINT,
+- ACL_NO_MODE);
+- int rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
+- NULL, &(int){SMB2_OP_DELETE}, 1,
+- NULL, NULL, NULL, dentry);
++ if (dentry)
++ inode = d_inode(dentry);
++
++ oparms = CIFS_OPARMS(cifs_sb, tcon, name, DELETE,
++ FILE_OPEN, OPEN_REPARSE_POINT, ACL_NO_MODE);
++ rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
++ NULL, &(int){SMB2_OP_UNLINK},
++ 1, NULL, NULL, NULL, dentry);
+ if (rc == -EINVAL) {
+ cifs_dbg(FYI, "invalid lease key, resending request without lease");
+ rc = smb2_compound_op(xid, tcon, cifs_sb, name, &oparms,
+- NULL, &(int){SMB2_OP_DELETE}, 1,
+- NULL, NULL, NULL, NULL);
++ NULL, &(int){SMB2_OP_UNLINK},
++ 1, NULL, NULL, NULL, NULL);
+ }
++ /*
++ * If dentry (hence, inode) is NULL, lease break is going to
++ * take care of degrading leases on handles for deleted files.
++ */
++ if (!rc && inode)
++ cifs_mark_open_handles_for_deleted_file(inode, name);
+ return rc;
+ }
+
+@@ -1441,3 +1451,113 @@ int smb2_query_reparse_point(const unsigned int xid,
+ cifs_free_open_info(&data);
+ return rc;
+ }
++
++static inline __le16 *utf16_smb2_path(struct cifs_sb_info *cifs_sb,
++ const char *name, size_t namelen)
++{
++ int len;
++
++ if (*name == '\\' ||
++ (cifs_sb_master_tlink(cifs_sb) &&
++ cifs_sb_master_tcon(cifs_sb)->posix_extensions && *name == '/'))
++ name++;
++ return cifs_strndup_to_utf16(name, namelen, &len,
++ cifs_sb->local_nls,
++ cifs_remap(cifs_sb));
++}
++
++int smb2_rename_pending_delete(const char *full_path,
++ struct dentry *dentry,
++ const unsigned int xid)
++{
++ struct cifs_sb_info *cifs_sb = CIFS_SB(d_inode(dentry)->i_sb);
++ struct cifsInodeInfo *cinode = CIFS_I(d_inode(dentry));
++ __le16 *utf16_path __free(kfree) = NULL;
++ __u32 co = file_create_options(dentry);
++ int cmds[] = {
++ SMB2_OP_SET_INFO,
++ SMB2_OP_RENAME,
++ SMB2_OP_UNLINK,
++ };
++ const int num_cmds = ARRAY_SIZE(cmds);
++ char *to_name __free(kfree) = NULL;
++ __u32 attrs = cinode->cifsAttrs;
++ struct cifs_open_parms oparms;
++ static atomic_t sillycounter;
++ struct cifsFileInfo *cfile;
++ struct tcon_link *tlink;
++ struct cifs_tcon *tcon;
++ struct kvec iov[2];
++ const char *ppath;
++ void *page;
++ size_t len;
++ int rc;
++
++ tlink = cifs_sb_tlink(cifs_sb);
++ if (IS_ERR(tlink))
++ return PTR_ERR(tlink);
++ tcon = tlink_tcon(tlink);
++
++ page = alloc_dentry_path();
++
++ ppath = build_path_from_dentry(dentry->d_parent, page);
++ if (IS_ERR(ppath)) {
++ rc = PTR_ERR(ppath);
++ goto out;
++ }
++
++ len = strlen(ppath) + strlen("/.__smb1234") + 1;
++ to_name = kmalloc(len, GFP_KERNEL);
++ if (!to_name) {
++ rc = -ENOMEM;
++ goto out;
++ }
++
++ scnprintf(to_name, len, "%s%c.__smb%04X", ppath, CIFS_DIR_SEP(cifs_sb),
++ atomic_inc_return(&sillycounter) & 0xffff);
++
++ utf16_path = utf16_smb2_path(cifs_sb, to_name, len);
++ if (!utf16_path) {
++ rc = -ENOMEM;
++ goto out;
++ }
++
++ drop_cached_dir_by_name(xid, tcon, full_path, cifs_sb);
++ oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
++ DELETE | FILE_WRITE_ATTRIBUTES,
++ FILE_OPEN, co, ACL_NO_MODE);
++
++ attrs &= ~ATTR_READONLY;
++ if (!attrs)
++ attrs = ATTR_NORMAL;
++ if (d_inode(dentry)->i_nlink <= 1)
++ attrs |= ATTR_HIDDEN;
++ iov[0].iov_base = &(FILE_BASIC_INFO) {
++ .Attributes = cpu_to_le32(attrs),
++ };
++ iov[0].iov_len = sizeof(FILE_BASIC_INFO);
++ iov[1].iov_base = utf16_path;
++ iov[1].iov_len = sizeof(*utf16_path) * UniStrlen((wchar_t *)utf16_path);
++
++ cifs_get_writable_path(tcon, full_path, FIND_WR_WITH_DELETE, &cfile);
++ rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms, iov,
++ cmds, num_cmds, cfile, NULL, NULL, dentry);
++ if (rc == -EINVAL) {
++ cifs_dbg(FYI, "invalid lease key, resending request without lease\n");
++ cifs_get_writable_path(tcon, full_path,
++ FIND_WR_WITH_DELETE, &cfile);
++ rc = smb2_compound_op(xid, tcon, cifs_sb, full_path, &oparms, iov,
++ cmds, num_cmds, cfile, NULL, NULL, NULL);
++ }
++ if (!rc) {
++ set_bit(CIFS_INO_DELETE_PENDING, &cinode->flags);
++ } else {
++ cifs_tcon_dbg(FYI, "%s: failed to rename '%s' to '%s': %d\n",
++ __func__, full_path, to_name, rc);
++ rc = -EIO;
++ }
++out:
++ cifs_put_tlink(tlink);
++ free_dentry_path(page);
++ return rc;
++}
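The smb2_rename_pending_delete() helper added above emulates the NFS-style "silly rename": one compound rewrites the file attributes, renames the victim to a hidden .__smbXXXX name in the same directory, and marks the new name delete-pending. A minimal userspace sketch of just the naming scheme (the kernel uses an atomic counter and CIFS_DIR_SEP(); a plain counter and '/' stand in here):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned int sillycounter;   /* atomic_t in the kernel code */

    static char *silly_name(const char *parent)
    {
            size_t len = strlen(parent) + strlen("/.__smb1234") + 1;
            char *name = malloc(len);

            if (name)
                    snprintf(name, len, "%s/.__smb%04X", parent,
                             ++sillycounter & 0xffff);
            return name;
    }

    int main(void)
    {
            char *n = silly_name("/share/dir");

            printf("%s\n", n);  /* e.g. /share/dir/.__smb0001 */
            free(n);
            return 0;
    }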
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index d3e09b10dea476..cd051bb1a9d608 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -2640,13 +2640,35 @@ smb2_set_next_command(struct cifs_tcon *tcon, struct smb_rqst *rqst)
+ }
+
+ /* SMB headers in a compound are 8 byte aligned. */
+- if (!IS_ALIGNED(len, 8)) {
+- num_padding = 8 - (len & 7);
++ if (IS_ALIGNED(len, 8))
++ goto out;
++
++ num_padding = 8 - (len & 7);
++ if (smb3_encryption_required(tcon)) {
++ int i;
++
++ /*
++ * Flatten request into a single buffer with required padding as
++ * the encryption layer can't handle the padding iovs.
++ */
++ for (i = 1; i < rqst->rq_nvec; i++) {
++ memcpy(rqst->rq_iov[0].iov_base +
++ rqst->rq_iov[0].iov_len,
++ rqst->rq_iov[i].iov_base,
++ rqst->rq_iov[i].iov_len);
++ rqst->rq_iov[0].iov_len += rqst->rq_iov[i].iov_len;
++ }
++ memset(rqst->rq_iov[0].iov_base + rqst->rq_iov[0].iov_len,
++ 0, num_padding);
++ rqst->rq_iov[0].iov_len += num_padding;
++ rqst->rq_nvec = 1;
++ } else {
+ rqst->rq_iov[rqst->rq_nvec].iov_base = smb2_padding;
+ rqst->rq_iov[rqst->rq_nvec].iov_len = num_padding;
+ rqst->rq_nvec++;
+- len += num_padding;
+ }
++ len += num_padding;
++out:
+ shdr->NextCommand = cpu_to_le32(len);
+ }
+
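smb2_set_next_command() must keep every request in a compound 8-byte aligned; the hunk above only changes how the padding is attached. Under encryption the padding cannot ride in its own iov (the transform path cannot handle it), so the request is flattened into one buffer; otherwise a dedicated padding iov is appended as before. The alignment math itself is unchanged, sketched standalone:

    #include <stdio.h>

    #define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

    static unsigned int pad_to_8(unsigned int len)
    {
            return IS_ALIGNED(len, 8) ? 0 : 8 - (len & 7);
    }

    int main(void)
    {
            for (unsigned int len = 24; len <= 32; len++)
                    printf("len=%u padding=%u\n", len, pad_to_8(len));
            return 0;
    }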
+@@ -5377,6 +5399,7 @@ struct smb_version_operations smb20_operations = {
+ .llseek = smb3_llseek,
+ .is_status_io_timeout = smb2_is_status_io_timeout,
+ .is_network_name_deleted = smb2_is_network_name_deleted,
++ .rename_pending_delete = smb2_rename_pending_delete,
+ };
+ #endif /* CIFS_ALLOW_INSECURE_LEGACY */
+
+@@ -5482,6 +5505,7 @@ struct smb_version_operations smb21_operations = {
+ .llseek = smb3_llseek,
+ .is_status_io_timeout = smb2_is_status_io_timeout,
+ .is_network_name_deleted = smb2_is_network_name_deleted,
++ .rename_pending_delete = smb2_rename_pending_delete,
+ };
+
+ struct smb_version_operations smb30_operations = {
+@@ -5598,6 +5622,7 @@ struct smb_version_operations smb30_operations = {
+ .llseek = smb3_llseek,
+ .is_status_io_timeout = smb2_is_status_io_timeout,
+ .is_network_name_deleted = smb2_is_network_name_deleted,
++ .rename_pending_delete = smb2_rename_pending_delete,
+ };
+
+ struct smb_version_operations smb311_operations = {
+@@ -5714,6 +5739,7 @@ struct smb_version_operations smb311_operations = {
+ .llseek = smb3_llseek,
+ .is_status_io_timeout = smb2_is_status_io_timeout,
+ .is_network_name_deleted = smb2_is_network_name_deleted,
++ .rename_pending_delete = smb2_rename_pending_delete,
+ };
+
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 035aa16240535c..8d6b42ff38fe68 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -320,5 +320,8 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ int smb2_make_nfs_node(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+ const char *full_path, umode_t mode, dev_t dev);
++int smb2_rename_pending_delete(const char *full_path,
++ struct dentry *dentry,
++ const unsigned int xid);
+
+ #endif /* _SMB2PROTO_H */
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 93e5b2bb9f28a2..a8c6f11699a3b6 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -669,13 +669,12 @@ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(query_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(posix_query_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(hardlink_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(rename_enter);
+-DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(rmdir_enter);
++DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(unlink_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_eof_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_info_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(set_reparse_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(get_reparse_compound_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(query_wsl_ea_compound_enter);
+-DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(delete_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mkdir_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(tdis_enter);
+ DEFINE_SMB3_INF_COMPOUND_ENTER_EVENT(mknod_enter);
+@@ -710,13 +709,12 @@ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(query_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(posix_query_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(hardlink_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(rename_done);
+-DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(rmdir_done);
++DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(unlink_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_eof_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_info_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(set_reparse_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(get_reparse_compound_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(query_wsl_ea_compound_done);
+-DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(delete_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mkdir_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(tdis_done);
+ DEFINE_SMB3_INF_COMPOUND_DONE_EVENT(mknod_done);
+@@ -756,14 +754,13 @@ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(query_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(posix_query_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(hardlink_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(rename_err);
+-DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(rmdir_err);
++DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(unlink_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_eof_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_info_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(set_reparse_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(get_reparse_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(query_wsl_ea_compound_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mkdir_err);
+-DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(delete_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(tdis_err);
+ DEFINE_SMB3_INF_COMPOUND_ERR_EVENT(mknod_err);
+
+diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
+index e4ca35143ff965..3e35a68b2b4122 100644
+--- a/include/drm/display/drm_dp_helper.h
++++ b/include/drm/display/drm_dp_helper.h
+@@ -523,10 +523,16 @@ struct drm_dp_aux {
+ * @no_zero_sized: If the hw can't use zero sized transfers (NVIDIA)
+ */
+ bool no_zero_sized;
++
++ /**
++ * @dpcd_probe_disabled: If probing before a DPCD access is disabled.
++ */
++ bool dpcd_probe_disabled;
+ };
+
+ int drm_dp_dpcd_probe(struct drm_dp_aux *aux, unsigned int offset);
+ void drm_dp_dpcd_set_powered(struct drm_dp_aux *aux, bool powered);
++void drm_dp_dpcd_set_probe(struct drm_dp_aux *aux, bool enable);
+ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
+ void *buffer, size_t size);
+ ssize_t drm_dp_dpcd_write(struct drm_dp_aux *aux, unsigned int offset,
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index f13d597370a30d..da49d520aa3bae 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -843,7 +843,9 @@ struct drm_display_info {
+ int vics_len;
+
+ /**
+- * @quirks: EDID based quirks. Internal to EDID parsing.
++ * @quirks: EDID based quirks. DRM core and drivers can query the
++ * @drm_edid_quirk quirks using drm_edid_has_quirk(), the rest of
++ * the quirks also tracked here are internal to EDID parsing.
+ */
+ u32 quirks;
+
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index b38409670868d8..3d1aecfec9b2a4 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -109,6 +109,13 @@ struct detailed_data_string {
+ #define DRM_EDID_CVT_FLAGS_STANDARD_BLANKING (1 << 3)
+ #define DRM_EDID_CVT_FLAGS_REDUCED_BLANKING (1 << 4)
+
++enum drm_edid_quirk {
++ /* Do a dummy read before DPCD accesses, to prevent corruption. */
++ DRM_EDID_QUIRK_DP_DPCD_PROBE,
++
++ DRM_EDID_QUIRK_NUM,
++};
++
+ struct detailed_data_monitor_range {
+ u8 min_vfreq;
+ u8 max_vfreq;
+@@ -476,5 +483,6 @@ void drm_edid_print_product_id(struct drm_printer *p,
+ u32 drm_edid_get_panel_id(const struct drm_edid *drm_edid);
+ bool drm_edid_match(const struct drm_edid *drm_edid,
+ const struct drm_edid_ident *ident);
++bool drm_edid_has_quirk(struct drm_connector *connector, enum drm_edid_quirk quirk);
+
+ #endif /* __DRM_EDID_H__ */
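These two additions pair up: drm_edid_has_quirk() exposes DRM_EDID_QUIRK_DP_DPCD_PROBE to DRM core and drivers, and drm_dp_dpcd_set_probe() toggles the guarding dummy read on the AUX channel. A hedged sketch of how a driver might connect them; the helper name and call site are illustrative, not taken from this patch:

    static void sync_dpcd_probe(struct drm_connector *connector,
                                struct drm_dp_aux *aux)
    {
            /* keep the safety probe for panels whose EDID carries the quirk */
            drm_dp_dpcd_set_probe(aux,
                                  drm_edid_has_quirk(connector,
                                                     DRM_EDID_QUIRK_DP_DPCD_PROBE));
    }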
+diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
+index 4fc8e26914adfd..f9f36d6af9a710 100644
+--- a/include/linux/compiler-clang.h
++++ b/include/linux/compiler-clang.h
+@@ -18,23 +18,42 @@
+ #define KASAN_ABI_VERSION 5
+
+ /*
++ * Clang 22 added preprocessor macros to match GCC, in hopes of eventually
++ * dropping __has_feature support for sanitizers:
++ * https://github.com/llvm/llvm-project/commit/568c23bbd3303518c5056d7f03444dae4fdc8a9c
++ * Create these macros for older versions of clang so that it is easy to clean
++ * up once the minimum supported version of LLVM for building the kernel always
++ * creates these macros.
++ *
+ * Note: Checking __has_feature(*_sanitizer) is only true if the feature is
+ * enabled. Therefore it is not required to additionally check defined(CONFIG_*)
+ * to avoid adding redundant attributes in other configurations.
+ */
++#if __has_feature(address_sanitizer) && !defined(__SANITIZE_ADDRESS__)
++#define __SANITIZE_ADDRESS__
++#endif
++#if __has_feature(hwaddress_sanitizer) && !defined(__SANITIZE_HWADDRESS__)
++#define __SANITIZE_HWADDRESS__
++#endif
++#if __has_feature(thread_sanitizer) && !defined(__SANITIZE_THREAD__)
++#define __SANITIZE_THREAD__
++#endif
+
+-#if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer)
+-/* Emulate GCC's __SANITIZE_ADDRESS__ flag */
++/*
++ * Treat __SANITIZE_HWADDRESS__ the same as __SANITIZE_ADDRESS__ in the kernel.
++ */
++#ifdef __SANITIZE_HWADDRESS__
+ #define __SANITIZE_ADDRESS__
++#endif
++
++#ifdef __SANITIZE_ADDRESS__
+ #define __no_sanitize_address \
+ __attribute__((no_sanitize("address", "hwaddress")))
+ #else
+ #define __no_sanitize_address
+ #endif
+
+-#if __has_feature(thread_sanitizer)
+-/* emulate gcc's __SANITIZE_THREAD__ flag */
+-#define __SANITIZE_THREAD__
++#ifdef __SANITIZE_THREAD__
+ #define __no_sanitize_thread \
+ __attribute__((no_sanitize("thread")))
+ #else
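After this change, kernel code can key off one canonical __SANITIZE_*__ macro per sanitizer regardless of compiler. The same pattern works in plain userspace C; a small sketch that builds with gcc or clang, with or without -fsanitize=address:

    #ifndef __has_feature
    #define __has_feature(x) 0          /* GCC defines the macros directly */
    #endif

    #if __has_feature(address_sanitizer) && !defined(__SANITIZE_ADDRESS__)
    #define __SANITIZE_ADDRESS__
    #endif

    #ifdef __SANITIZE_ADDRESS__
    #define no_asan __attribute__((no_sanitize("address")))
    #else
    #define no_asan
    #endif

    no_asan static int read_raw(const int *p) { return *p; }

    int main(void)
    {
            int x = 1;
            return read_raw(&x) - 1;
    }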
+diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
+index 7fa1eb3cc82399..61d50571ad88ac 100644
+--- a/include/linux/energy_model.h
++++ b/include/linux/energy_model.h
+@@ -171,6 +171,9 @@ int em_dev_update_perf_domain(struct device *dev,
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ const struct em_data_callback *cb,
+ const cpumask_t *cpus, bool microwatts);
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++ const struct em_data_callback *cb,
++ const cpumask_t *cpus, bool microwatts);
+ void em_dev_unregister_perf_domain(struct device *dev);
+ struct em_perf_table *em_table_alloc(struct em_perf_domain *pd);
+ void em_table_free(struct em_perf_table *table);
+@@ -350,6 +353,13 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ {
+ return -EINVAL;
+ }
++static inline
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++ const struct em_data_callback *cb,
++ const cpumask_t *cpus, bool microwatts)
++{
++ return -EINVAL;
++}
+ static inline void em_dev_unregister_perf_domain(struct device *dev)
+ {
+ }
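em_dev_register_pd_no_update() is the existing registration path minus the CPU-capacity re-check that em_dev_register_perf_domain() now layers on top of it (see kernel/power/energy_model.c later in this patch). A hedged sketch of a caller; the function name and state count are made up:

    static int my_cpufreq_register_em(struct device *cpu_dev,
                                      const struct em_data_callback *cb,
                                      const cpumask_t *cpus)
    {
            /* 5 perf states, power values in microwatts, no capacity update */
            return em_dev_register_pd_no_update(cpu_dev, 5, cb, cpus, true);
    }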
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 040c0036320fdf..d6716ff498a7aa 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -149,7 +149,8 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
+ /* Expect random access pattern */
+ #define FMODE_RANDOM ((__force fmode_t)(1 << 12))
+
+-/* FMODE_* bit 13 */
++/* Supports IOCB_HAS_METADATA */
++#define FMODE_HAS_METADATA ((__force fmode_t)(1 << 13))
+
+ /* File is opened with O_PATH; almost nothing can be done with it */
+ #define FMODE_PATH ((__force fmode_t)(1 << 14))
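Bit 13 is now claimed for FMODE_HAS_METADATA, and io_uring's io_rw_init_file() (changed later in this patch) rejects REQ_F_HAS_METADATA requests with -EINVAL unless the opener set the bit. A hedged sketch of the opener side; the support check is a hypothetical placeholder:

    static int my_open(struct inode *inode, struct file *file)
    {
            if (my_supports_integrity(file))        /* hypothetical check */
                    file->f_mode |= FMODE_HAS_METADATA;
            return 0;
    }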
+diff --git a/include/linux/kasan.h b/include/linux/kasan.h
+index 890011071f2b14..fe5ce9215821db 100644
+--- a/include/linux/kasan.h
++++ b/include/linux/kasan.h
+@@ -562,7 +562,7 @@ static inline void kasan_init_hw_tags(void) { }
+ #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+
+ void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+-int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
++int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask);
+ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ unsigned long free_region_start,
+ unsigned long free_region_end,
+@@ -574,7 +574,7 @@ static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
+ { }
+ static inline int kasan_populate_vmalloc(unsigned long start,
+- unsigned long size)
++ unsigned long size, gfp_t gfp_mask)
+ {
+ return 0;
+ }
+@@ -610,7 +610,7 @@ static __always_inline void kasan_poison_vmalloc(const void *start,
+ static inline void kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size) { }
+ static inline int kasan_populate_vmalloc(unsigned long start,
+- unsigned long size)
++ unsigned long size, gfp_t gfp_mask)
+ {
+ return 0;
+ }
+diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
+index 15eaa09da998ce..d668b6266c34a2 100644
+--- a/include/linux/mtd/spinand.h
++++ b/include/linux/mtd/spinand.h
+@@ -484,6 +484,7 @@ struct spinand_user_otp {
+ * @op_variants.update_cache: variants of the update-cache operation
+ * @select_target: function used to select a target/die. Required only for
+ * multi-die chips
++ * @configure_chip: Align the chip configuration with the core settings
+ * @set_cont_read: enable/disable continuous cached reads
+ * @fact_otp: SPI NAND factory OTP info.
+ * @user_otp: SPI NAND user OTP info.
+@@ -507,6 +508,7 @@ struct spinand_info {
+ } op_variants;
+ int (*select_target)(struct spinand_device *spinand,
+ unsigned int target);
++ int (*configure_chip)(struct spinand_device *spinand);
+ int (*set_cont_read)(struct spinand_device *spinand,
+ bool enable);
+ struct spinand_fact_otp fact_otp;
+@@ -539,6 +541,9 @@ struct spinand_info {
+ #define SPINAND_SELECT_TARGET(__func) \
+ .select_target = __func
+
++#define SPINAND_CONFIGURE_CHIP(__configure_chip) \
++ .configure_chip = __configure_chip
++
+ #define SPINAND_CONT_READ(__set_cont_read) \
+ .set_cont_read = __set_cont_read
+
+@@ -607,6 +612,7 @@ struct spinand_dirmap {
+ * passed in spi_mem_op be DMA-able, so we can't based the bufs on
+ * the stack
+ * @manufacturer: SPI NAND manufacturer information
++ * @configure_chip: Align the chip configuration with the core settings
+ * @cont_read_possible: Field filled by the core once the whole system
+ * configuration is known to tell whether continuous reads are
+ * suitable to use or not in general with this chip/configuration.
+@@ -647,6 +653,7 @@ struct spinand_device {
+ const struct spinand_manufacturer *manufacturer;
+ void *priv;
+
++ int (*configure_chip)(struct spinand_device *spinand);
+ bool cont_read_possible;
+ int (*set_cont_read)(struct spinand_device *spinand,
+ bool enable);
+@@ -723,6 +730,7 @@ int spinand_match_and_init(struct spinand_device *spinand,
+ enum spinand_readid_method rdid_method);
+
+ int spinand_upd_cfg(struct spinand_device *spinand, u8 mask, u8 val);
++int spinand_read_reg_op(struct spinand_device *spinand, u8 reg, u8 *val);
+ int spinand_write_reg_op(struct spinand_device *spinand, u8 reg, u8 val);
+ int spinand_select_target(struct spinand_device *spinand, unsigned int target);
+
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 5e49619ae49c69..16daeac2ac555e 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -459,19 +459,17 @@ struct nft_set_ext;
+ * control plane functions.
+ */
+ struct nft_set_ops {
+- bool (*lookup)(const struct net *net,
++ const struct nft_set_ext * (*lookup)(const struct net *net,
+ const struct nft_set *set,
+- const u32 *key,
+- const struct nft_set_ext **ext);
+- bool (*update)(struct nft_set *set,
++ const u32 *key);
++ const struct nft_set_ext * (*update)(struct nft_set *set,
+ const u32 *key,
+ struct nft_elem_priv *
+ (*new)(struct nft_set *,
+ const struct nft_expr *,
+ struct nft_regs *),
+ const struct nft_expr *expr,
+- struct nft_regs *regs,
+- const struct nft_set_ext **ext);
++ struct nft_regs *regs);
+ bool (*delete)(const struct nft_set *set,
+ const u32 *key);
+
+@@ -1918,7 +1916,6 @@ struct nftables_pernet {
+ struct mutex commit_mutex;
+ u64 table_handle;
+ u64 tstamp;
+- unsigned int base_seq;
+ unsigned int gc_seq;
+ u8 validate_state;
+ struct work_struct destroy_work;
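The backend ->lookup()/->update() hooks switch from returning bool plus a **ext out-parameter to returning the extension pointer itself, with NULL meaning no match, so the hot path moves one value instead of two. In caller terms:

    /* old convention:
     *         if (!set->ops->lookup(net, set, key, &ext))
     *                 return false;
     * new convention:
     */
    const struct nft_set_ext *ext = set->ops->lookup(net, set, key);

    if (!ext)
            return false;           /* no matching element */
    /* matched: element data hangs off ext */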
+diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h
+index 03b6165756fc5d..04699eac5b5243 100644
+--- a/include/net/netfilter/nf_tables_core.h
++++ b/include/net/netfilter/nf_tables_core.h
+@@ -94,34 +94,35 @@ extern const struct nft_set_type nft_set_pipapo_type;
+ extern const struct nft_set_type nft_set_pipapo_avx2_type;
+
+ #ifdef CONFIG_MITIGATION_RETPOLINE
+-bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
+-bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
+-bool nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
+-bool nft_hash_lookup_fast(const struct net *net,
+- const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
+-bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
+-bool nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
+-#else
+-static inline bool
+-nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
+-{
+- return set->ops->lookup(net, set, key, ext);
+-}
++const struct nft_set_ext *
++nft_rhash_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key);
++const struct nft_set_ext *
++nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key);
++const struct nft_set_ext *
++nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key);
++const struct nft_set_ext *
++nft_hash_lookup_fast(const struct net *net, const struct nft_set *set,
++ const u32 *key);
++const struct nft_set_ext *
++nft_hash_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key);
+ #endif
+
++const struct nft_set_ext *
++nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key);
++
+ /* called from nft_pipapo_avx2.c */
+-bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
++const struct nft_set_ext *
++nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key);
+ /* called from nft_set_pipapo.c */
+-bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext);
++const struct nft_set_ext *
++nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key);
+
+ void nft_counter_init_seqcount(void);
+
+diff --git a/include/net/netns/nftables.h b/include/net/netns/nftables.h
+index cc8060c017d5fb..99dd166c5d07c3 100644
+--- a/include/net/netns/nftables.h
++++ b/include/net/netns/nftables.h
+@@ -3,6 +3,7 @@
+ #define _NETNS_NFTABLES_H_
+
+ struct netns_nftables {
++ unsigned int base_seq;
+ u8 gencursor;
+ };
+
+diff --git a/include/uapi/linux/raid/md_p.h b/include/uapi/linux/raid/md_p.h
+index b1394628727758..ac74133a476887 100644
+--- a/include/uapi/linux/raid/md_p.h
++++ b/include/uapi/linux/raid/md_p.h
+@@ -173,7 +173,7 @@ typedef struct mdp_superblock_s {
+ #else
+ #error unspecified endianness
+ #endif
+- __u32 resync_offset; /* 11 resync checkpoint sector count */
++ __u32 recovery_cp; /* 11 resync checkpoint sector count */
+ /* There are only valid for minor_version > 90 */
+ __u64 reshape_position; /* 12,13 next address in array-space for reshape */
+ __u32 new_level; /* 14 new level we are reshaping to */
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 52a5b950b2e5e9..af5a54b5db1233 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -886,6 +886,9 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
+ if (req->flags & REQ_F_HAS_METADATA) {
+ struct io_async_rw *io = req->async_data;
+
++ if (!(file->f_mode & FMODE_HAS_METADATA))
++ return -EINVAL;
++
+ /*
+ * We have a union of meta fields with wpq used for buffered-io
+ * in io_async_rw, so fail it here.
+diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
+index 3a335c50e6e3cb..12ec926ed7114e 100644
+--- a/kernel/bpf/Makefile
++++ b/kernel/bpf/Makefile
+@@ -62,3 +62,4 @@ CFLAGS_REMOVE_bpf_lru_list.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_queue_stack_maps.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_lpm_trie.o = $(CC_FLAGS_FTRACE)
+ CFLAGS_REMOVE_ringbuf.o = $(CC_FLAGS_FTRACE)
++CFLAGS_REMOVE_rqspinlock.o = $(CC_FLAGS_FTRACE)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index d966e971893ab3..829f0792d8d831 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2354,8 +2354,7 @@ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ const struct bpf_insn *insn)
+ {
+ /* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
+- * is not working properly, or interpreter is being used when
+- * prog->jit_requested is not 0, so warn about it!
++ * is not working properly, so warn about it!
+ */
+ WARN_ON_ONCE(1);
+ return 0;
+@@ -2456,8 +2455,9 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
+ return ret;
+ }
+
+-static void bpf_prog_select_func(struct bpf_prog *fp)
++static bool bpf_prog_select_interpreter(struct bpf_prog *fp)
+ {
++ bool select_interpreter = false;
+ #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
+ u32 idx = (round_up(stack_depth, 32) / 32) - 1;
+@@ -2466,15 +2466,16 @@ static void bpf_prog_select_func(struct bpf_prog *fp)
+ * But for non-JITed programs, we don't need bpf_func, so no bounds
+ * check needed.
+ */
+- if (!fp->jit_requested &&
+- !WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters))) {
++ if (idx < ARRAY_SIZE(interpreters)) {
+ fp->bpf_func = interpreters[idx];
++ select_interpreter = true;
+ } else {
+ fp->bpf_func = __bpf_prog_ret0_warn;
+ }
+ #else
+ fp->bpf_func = __bpf_prog_ret0_warn;
+ #endif
++ return select_interpreter;
+ }
+
+ /**
+@@ -2493,7 +2494,7 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ /* In case of BPF to BPF calls, verifier did all the prep
+ * work with regards to JITing, etc.
+ */
+- bool jit_needed = fp->jit_requested;
++ bool jit_needed = false;
+
+ if (fp->bpf_func)
+ goto finalize;
+@@ -2502,7 +2503,8 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
+ bpf_prog_has_kfunc_call(fp))
+ jit_needed = true;
+
+- bpf_prog_select_func(fp);
++ if (!bpf_prog_select_interpreter(fp))
++ jit_needed = true;
+
+ /* eBPF JITs can rewrite the program in case constant
+ * blinding is active. However, in case of error during
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 67e8a2fc1a99de..cfcf7ed57ca0d2 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -186,7 +186,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ struct xdp_buff xdp;
+ int i, nframes = 0;
+
+- xdp_set_return_frame_no_direct();
+ xdp.rxq = &rxq;
+
+ for (i = 0; i < n; i++) {
+@@ -231,7 +230,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
+ }
+ }
+
+- xdp_clear_return_frame_no_direct();
+ stats->pass += nframes;
+
+ return nframes;
+@@ -255,6 +253,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+
+ rcu_read_lock();
+ bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
++ xdp_set_return_frame_no_direct();
+
+ ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
+ if (unlikely(ret->skb_n))
+@@ -264,6 +263,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+ if (stats->redirect)
+ xdp_do_flush();
+
++ xdp_clear_return_frame_no_direct();
+ bpf_net_ctx_clear(bpf_net_ctx);
+ rcu_read_unlock();
+
+diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
+index 94854cd9c4cc32..83c4d9943084b9 100644
+--- a/kernel/bpf/crypto.c
++++ b/kernel/bpf/crypto.c
+@@ -278,7 +278,7 @@ static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
+ siv_len = siv ? __bpf_dynptr_size(siv) : 0;
+ src_len = __bpf_dynptr_size(src);
+ dst_len = __bpf_dynptr_size(dst);
+- if (!src_len || !dst_len)
++ if (!src_len || !dst_len || src_len > dst_len)
+ return -EINVAL;
+
+ if (siv_len != ctx->siv_len)
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index fdf8737542ac45..3abbdebb2d9efc 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1277,8 +1277,11 @@ static int __bpf_async_init(struct bpf_async_kern *async, struct bpf_map *map, u
+ goto out;
+ }
+
+- /* allocate hrtimer via map_kmalloc to use memcg accounting */
+- cb = bpf_map_kmalloc_node(map, size, GFP_ATOMIC, map->numa_node);
++ /* Allocate via bpf_map_kmalloc_node() for memcg accounting. Until
++ * kmalloc_nolock() is available, avoid locking issues by using
++ * __GFP_HIGH (GFP_ATOMIC & ~__GFP_RECLAIM).
++ */
++ cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node);
+ if (!cb) {
+ ret = -ENOMEM;
+ goto out;
+diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
+index 338305c8852cf6..804e619f1e0066 100644
+--- a/kernel/bpf/rqspinlock.c
++++ b/kernel/bpf/rqspinlock.c
+@@ -471,7 +471,7 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
+ * any MCS node. This is not the most elegant solution, but is
+ * simple enough.
+ */
+- if (unlikely(idx >= _Q_MAX_NODES)) {
++ if (unlikely(idx >= _Q_MAX_NODES || in_nmi())) {
+ lockevent_inc(lock_no_node);
+ RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
+ while (!queued_spin_trylock(lock)) {
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index e43c6de2bce4e7..b82399437db031 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -39,6 +39,7 @@ enum {
+ dma_debug_sg,
+ dma_debug_coherent,
+ dma_debug_resource,
++ dma_debug_noncoherent,
+ };
+
+ enum map_err_types {
+@@ -141,6 +142,7 @@ static const char *type2name[] = {
+ [dma_debug_sg] = "scatter-gather",
+ [dma_debug_coherent] = "coherent",
+ [dma_debug_resource] = "resource",
++ [dma_debug_noncoherent] = "noncoherent",
+ };
+
+ static const char *dir2name[] = {
+@@ -993,7 +995,8 @@ static void check_unmap(struct dma_debug_entry *ref)
+ "[mapped as %s] [unmapped as %s]\n",
+ ref->dev_addr, ref->size,
+ type2name[entry->type], type2name[ref->type]);
+- } else if (entry->type == dma_debug_coherent &&
++ } else if ((entry->type == dma_debug_coherent ||
++ entry->type == dma_debug_noncoherent) &&
+ ref->paddr != entry->paddr) {
+ err_printk(ref->dev, entry, "device driver frees "
+ "DMA memory with different CPU address "
+@@ -1581,6 +1584,49 @@ void debug_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+ }
+ }
+
++void debug_dma_alloc_pages(struct device *dev, struct page *page,
++ size_t size, int direction,
++ dma_addr_t dma_addr,
++ unsigned long attrs)
++{
++ struct dma_debug_entry *entry;
++
++ if (unlikely(dma_debug_disabled()))
++ return;
++
++ entry = dma_entry_alloc();
++ if (!entry)
++ return;
++
++ entry->type = dma_debug_noncoherent;
++ entry->dev = dev;
++ entry->paddr = page_to_phys(page);
++ entry->size = size;
++ entry->dev_addr = dma_addr;
++ entry->direction = direction;
++
++ add_dma_entry(entry, attrs);
++}
++
++void debug_dma_free_pages(struct device *dev, struct page *page,
++ size_t size, int direction,
++ dma_addr_t dma_addr)
++{
++ struct dma_debug_entry ref = {
++ .type = dma_debug_noncoherent,
++ .dev = dev,
++ .paddr = page_to_phys(page),
++ .dev_addr = dma_addr,
++ .size = size,
++ .direction = direction,
++ };
++
++ if (unlikely(dma_debug_disabled()))
++ return;
++
++ check_unmap(&ref);
++}
++
+ static int __init dma_debug_driver_setup(char *str)
+ {
+ int i;
+diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
+index f525197d3cae60..48757ca13f3140 100644
+--- a/kernel/dma/debug.h
++++ b/kernel/dma/debug.h
+@@ -54,6 +54,13 @@ extern void debug_dma_sync_sg_for_cpu(struct device *dev,
+ extern void debug_dma_sync_sg_for_device(struct device *dev,
+ struct scatterlist *sg,
+ int nelems, int direction);
++extern void debug_dma_alloc_pages(struct device *dev, struct page *page,
++ size_t size, int direction,
++ dma_addr_t dma_addr,
++ unsigned long attrs);
++extern void debug_dma_free_pages(struct device *dev, struct page *page,
++ size_t size, int direction,
++ dma_addr_t dma_addr);
+ #else /* CONFIG_DMA_API_DEBUG */
+ static inline void debug_dma_map_page(struct device *dev, struct page *page,
+ size_t offset, size_t size,
+@@ -126,5 +133,18 @@ static inline void debug_dma_sync_sg_for_device(struct device *dev,
+ int nelems, int direction)
+ {
+ }
++
++static inline void debug_dma_alloc_pages(struct device *dev, struct page *page,
++ size_t size, int direction,
++ dma_addr_t dma_addr,
++ unsigned long attrs)
++{
++}
++
++static inline void debug_dma_free_pages(struct device *dev, struct page *page,
++ size_t size, int direction,
++ dma_addr_t dma_addr)
++{
++}
+ #endif /* CONFIG_DMA_API_DEBUG */
+ #endif /* _KERNEL_DMA_DEBUG_H */
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index 107e4a4d251df6..56de28a3b1799f 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -712,7 +712,7 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
+ if (page) {
+ trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
+ size, dir, gfp, 0);
+- debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
++ debug_dma_alloc_pages(dev, page, size, dir, *dma_handle, 0);
+ } else {
+ trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
+ }
+@@ -738,7 +738,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
+ dma_addr_t dma_handle, enum dma_data_direction dir)
+ {
+ trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
+- debug_dma_unmap_page(dev, dma_handle, size, dir);
++ debug_dma_free_pages(dev, page, size, dir, dma_handle);
+ __dma_free_pages(dev, size, page, dma_handle, dir);
+ }
+ EXPORT_SYMBOL_GPL(dma_free_pages);
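With the new dma_debug_noncoherent type, dma_alloc_pages() allocations are tracked separately from coherent allocations and streaming mappings, so freeing through the wrong API or with a mismatched CPU address is now flagged. The pairing that dma-debug checks, as a fragment:

    struct page *page;
    dma_addr_t dma;

    page = dma_alloc_pages(dev, size, &dma, DMA_TO_DEVICE, GFP_KERNEL);
    if (!page)
            return -ENOMEM;
    /* ... device I/O ... */
    dma_free_pages(dev, size, page, dma, DMA_TO_DEVICE);    /* same page + dma */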
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 872122e074e5fe..820127536e62b7 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10330,6 +10330,7 @@ static int __perf_event_overflow(struct perf_event *event,
+ ret = 1;
+ event->pending_kill = POLL_HUP;
+ perf_event_disable_inatomic(event);
++ event->pmu->stop(event, 0);
+ }
+
+ if (event->attr.sigtrap) {
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index ea7995a25780f3..8df55397414a12 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -552,6 +552,30 @@ EXPORT_SYMBOL_GPL(em_cpu_get);
+ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ const struct em_data_callback *cb,
+ const cpumask_t *cpus, bool microwatts)
++{
++ int ret = em_dev_register_pd_no_update(dev, nr_states, cb, cpus, microwatts);
++
++ if (_is_cpu_device(dev))
++ em_check_capacity_update();
++
++ return ret;
++}
++EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
++
++/**
++ * em_dev_register_pd_no_update() - Register a perf domain for a device
++ * @dev : Device to register the PD for
++ * @nr_states : Number of performance states in the new PD
++ * @cb : Callback functions for populating the energy model
++ * @cpus : CPUs to include in the new PD (mandatory if @dev is a CPU device)
++ * @microwatts : Whether or not the power values in the EM will be in uW
++ *
++ * Like em_dev_register_perf_domain(), but does not trigger a CPU capacity
++ * update after registering the PD, even if @dev is a CPU device.
++ */
++int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states,
++ const struct em_data_callback *cb,
++ const cpumask_t *cpus, bool microwatts)
+ {
+ struct em_perf_table *em_table;
+ unsigned long cap, prev_cap = 0;
+@@ -636,12 +660,9 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
+ unlock:
+ mutex_unlock(&em_pd_mutex);
+
+- if (_is_cpu_device(dev))
+- em_check_capacity_update();
+-
+ return ret;
+ }
+-EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
++EXPORT_SYMBOL_GPL(em_dev_register_pd_no_update);
+
+ /**
+ * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 9216e3b91d3b3b..c8022a477d3a1c 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -423,6 +423,7 @@ int hibernation_snapshot(int platform_mode)
+ }
+
+ console_suspend_all();
++ pm_restrict_gfp_mask();
+
+ error = dpm_suspend(PMSG_FREEZE);
+
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index 30899a8cc52c0a..e8c479329282f9 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -787,10 +787,10 @@ static void retrigger_next_event(void *arg)
+ * of the next expiring timer is enough. The return from the SMP
+ * function call will take care of the reprogramming in case the
+ * CPU was in a NOHZ idle sleep.
++ *
++ * In periodic low resolution mode, the next softirq expiration
++ * must also be updated.
+ */
+- if (!hrtimer_hres_active(base) && !tick_nohz_active)
+- return;
+-
+ raw_spin_lock(&base->lock);
+ hrtimer_update_base(base);
+ if (hrtimer_hres_active(base))
+@@ -2295,11 +2295,6 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
+ &new_base->clock_base[i]);
+ }
+
+- /*
+- * The migration might have changed the first expiring softirq
+- * timer on this CPU. Update it.
+- */
+- __hrtimer_get_next_event(new_base, HRTIMER_ACTIVE_SOFT);
+ /* Tell the other CPU to retrigger the next event */
+ smp_call_function_single(ncpu, retrigger_next_event, NULL, 0);
+
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index dac2d58f39490b..db40ec5cc9d731 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1393,7 +1393,8 @@ int register_ftrace_graph(struct fgraph_ops *gops)
+ ftrace_graph_active--;
+ gops->saved_func = NULL;
+ fgraph_lru_release_index(i);
+- unregister_pm_notifier(&ftrace_suspend_notifier);
++ if (!ftrace_graph_active)
++ unregister_pm_notifier(&ftrace_suspend_notifier);
+ }
+ return ret;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b91fa02cc54a6a..56f6cebdb22998 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -846,7 +846,10 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ /* copy the current bits to the new max */
+ ret = trace_pid_list_first(filtered_pids, &pid);
+ while (!ret) {
+- trace_pid_list_set(pid_list, pid);
++ ret = trace_pid_list_set(pid_list, pid);
++ if (ret < 0)
++ goto out;
++
+ ret = trace_pid_list_next(filtered_pids, pid + 1, &pid);
+ nr_pids++;
+ }
+@@ -883,6 +886,7 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ trace_parser_clear(&parser);
+ ret = 0;
+ }
++ out:
+ trace_parser_put(&parser);
+
+ if (ret < 0) {
+@@ -7264,7 +7268,7 @@ static ssize_t write_marker_to_buffer(struct trace_array *tr, const char __user
+ entry = ring_buffer_event_data(event);
+ entry->ip = ip;
+
+- len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
++ len = copy_from_user_nofault(&entry->buf, ubuf, cnt);
+ if (len) {
+ memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
+ cnt = FAULTED_SIZE;
+@@ -7361,7 +7365,7 @@ static ssize_t write_raw_marker_to_buffer(struct trace_array *tr,
+
+ entry = ring_buffer_event_data(event);
+
+- len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
++ len = copy_from_user_nofault(&entry->id, ubuf, cnt);
+ if (len) {
+ entry->id = -1;
+ memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index fd259da0aa6456..337bc0eb5d71bf 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2322,6 +2322,9 @@ osnoise_cpus_write(struct file *filp, const char __user *ubuf, size_t count,
+ int running, err;
+ char *buf __free(kfree) = NULL;
+
++ if (count < 1)
++ return 0;
++
+ buf = kmalloc(count, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 8ead13792f0495..d87fbb8c418d00 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -2050,6 +2050,10 @@ static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
+ if (!quota->ms && !quota->sz && list_empty("a->goals))
+ return;
+
++ /* First charge window */
++ if (!quota->total_charged_sz && !quota->charged_from)
++ quota->charged_from = jiffies;
++
+ /* New charge window starts */
+ if (time_after_eq(jiffies, quota->charged_from +
+ msecs_to_jiffies(quota->reset_interval))) {
+diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
+index 4af8fd4a390b66..c2b4f0b0714727 100644
+--- a/mm/damon/lru_sort.c
++++ b/mm/damon/lru_sort.c
+@@ -198,6 +198,11 @@ static int damon_lru_sort_apply_parameters(void)
+ if (err)
+ return err;
+
++ if (!damon_lru_sort_mon_attrs.sample_interval) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs);
+ if (err)
+ goto out;
+diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
+index a675150965e020..ade3ff724b24cd 100644
+--- a/mm/damon/reclaim.c
++++ b/mm/damon/reclaim.c
+@@ -194,6 +194,11 @@ static int damon_reclaim_apply_parameters(void)
+ if (err)
+ return err;
+
++ if (!damon_reclaim_mon_attrs.aggr_interval) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ err = damon_set_attrs(ctx, &damon_reclaim_mon_attrs);
+ if (err)
+ goto out;
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 1af6aff35d84a0..57d4ec256682ce 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -1243,14 +1243,18 @@ static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
+ {
+ struct damon_sysfs_kdamond *kdamond = container_of(kobj,
+ struct damon_sysfs_kdamond, kobj);
+- struct damon_ctx *ctx = kdamond->damon_ctx;
+- bool running;
++ struct damon_ctx *ctx;
++ bool running = false;
+
+- if (!ctx)
+- running = false;
+- else
++ if (!mutex_trylock(&damon_sysfs_lock))
++ return -EBUSY;
++
++ ctx = kdamond->damon_ctx;
++ if (ctx)
+ running = damon_sysfs_ctx_running(ctx);
+
++ mutex_unlock(&damon_sysfs_lock);
++
+ return sysfs_emit(buf, "%s\n", running ?
+ damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_ON] :
+ damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_OFF]);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index a0d285d2099252..eee833f7068157 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5855,7 +5855,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ spinlock_t *ptl;
+ struct hstate *h = hstate_vma(vma);
+ unsigned long sz = huge_page_size(h);
+- bool adjust_reservation = false;
++ bool adjust_reservation;
+ unsigned long last_addr_mask;
+ bool force_flush = false;
+
+@@ -5948,6 +5948,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ sz);
+ hugetlb_count_sub(pages_per_huge_page(h), mm);
+ hugetlb_remove_rmap(folio);
++ spin_unlock(ptl);
+
+ /*
+ * Restore the reservation for anonymous page, otherwise the
+@@ -5955,14 +5956,16 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ * If there we are freeing a surplus, do not set the restore
+ * reservation bit.
+ */
++ adjust_reservation = false;
++
++ spin_lock_irq(&hugetlb_lock);
+ if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
+ folio_test_anon(folio)) {
+ folio_set_hugetlb_restore_reserve(folio);
+ /* Reservation to be adjusted after the spin lock */
+ adjust_reservation = true;
+ }
+-
+- spin_unlock(ptl);
++ spin_unlock_irq(&hugetlb_lock);
+
+ /*
+ * Adjust the reservation for the region that will have the
+diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
+index d2c70cd2afb1de..c7c0be11917370 100644
+--- a/mm/kasan/shadow.c
++++ b/mm/kasan/shadow.c
+@@ -335,13 +335,13 @@ static void ___free_pages_bulk(struct page **pages, int nr_pages)
+ }
+ }
+
+-static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
++static int ___alloc_pages_bulk(struct page **pages, int nr_pages, gfp_t gfp_mask)
+ {
+ unsigned long nr_populated, nr_total = nr_pages;
+ struct page **page_array = pages;
+
+ while (nr_pages) {
+- nr_populated = alloc_pages_bulk(GFP_KERNEL, nr_pages, pages);
++ nr_populated = alloc_pages_bulk(gfp_mask, nr_pages, pages);
+ if (!nr_populated) {
+ ___free_pages_bulk(page_array, nr_total - nr_pages);
+ return -ENOMEM;
+@@ -353,25 +353,42 @@ static int ___alloc_pages_bulk(struct page **pages, int nr_pages)
+ return 0;
+ }
+
+-static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
++static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_t gfp_mask)
+ {
+ unsigned long nr_pages, nr_total = PFN_UP(end - start);
+ struct vmalloc_populate_data data;
++ unsigned int flags;
+ int ret = 0;
+
+- data.pages = (struct page **)__get_free_page(GFP_KERNEL | __GFP_ZERO);
++ data.pages = (struct page **)__get_free_page(gfp_mask | __GFP_ZERO);
+ if (!data.pages)
+ return -ENOMEM;
+
+ while (nr_total) {
+ nr_pages = min(nr_total, PAGE_SIZE / sizeof(data.pages[0]));
+- ret = ___alloc_pages_bulk(data.pages, nr_pages);
++ ret = ___alloc_pages_bulk(data.pages, nr_pages, gfp_mask);
+ if (ret)
+ break;
+
+ data.start = start;
++
++ /*
++ * Page table allocations ignore the external gfp mask, so enforce
++ * it via the scope API.
++ */
++ if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
++ flags = memalloc_nofs_save();
++ else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
++ flags = memalloc_noio_save();
++
+ ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
+ kasan_populate_vmalloc_pte, &data);
++
++ if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
++ memalloc_nofs_restore(flags);
++ else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
++ memalloc_noio_restore(flags);
++
+ ___free_pages_bulk(data.pages, nr_pages);
+ if (ret)
+ break;
+@@ -385,7 +402,7 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end)
+ return ret;
+ }
+
+-int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
++int kasan_populate_vmalloc(unsigned long addr, unsigned long size, gfp_t gfp_mask)
+ {
+ unsigned long shadow_start, shadow_end;
+ int ret;
+@@ -414,7 +431,7 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
+ shadow_start = PAGE_ALIGN_DOWN(shadow_start);
+ shadow_end = PAGE_ALIGN(shadow_end);
+
+- ret = __kasan_populate_vmalloc(shadow_start, shadow_end);
++ ret = __kasan_populate_vmalloc(shadow_start, shadow_end, gfp_mask);
+ if (ret)
+ return ret;
+
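The scope-API dance above deserves a note: apply_to_page_range() allocates page tables with its own gfp flags, so the caller's mask is enforced indirectly, and each restore call is guarded by the same condition as its save, which is why flags is never consumed uninitialized. Distilled:

    unsigned int flags = 0;
    bool nofs = (gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO; /* FS cleared */
    bool noio = !(gfp_mask & (__GFP_FS | __GFP_IO));            /* FS+IO cleared */

    if (nofs)
            flags = memalloc_nofs_save();
    else if (noio)
            flags = memalloc_noio_save();

    /* ... nested allocations now implicitly honour the caller's mask ... */

    if (nofs)
            memalloc_nofs_restore(flags);
    else if (noio)
            memalloc_noio_restore(flags);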
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index 15203ea7d0073d..a0c040336fc59b 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1400,8 +1400,8 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
+ */
+ if (cc->is_khugepaged &&
+ (pte_young(pteval) || folio_test_young(folio) ||
+- folio_test_referenced(folio) || mmu_notifier_test_young(vma->vm_mm,
+- address)))
++ folio_test_referenced(folio) ||
++ mmu_notifier_test_young(vma->vm_mm, _address)))
+ referenced++;
+ }
+ if (!writable) {
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index dd543dd7755fc0..e626b6c93ffeee 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -950,7 +950,7 @@ static const char * const action_page_types[] = {
+ [MF_MSG_BUDDY] = "free buddy page",
+ [MF_MSG_DAX] = "dax page",
+ [MF_MSG_UNSPLIT_THP] = "unsplit thp",
+- [MF_MSG_ALREADY_POISONED] = "already poisoned",
++ [MF_MSG_ALREADY_POISONED] = "already poisoned page",
+ [MF_MSG_UNKNOWN] = "unknown page",
+ };
+
+@@ -1343,9 +1343,10 @@ static int action_result(unsigned long pfn, enum mf_action_page_type type,
+ {
+ trace_memory_failure_event(pfn, type, result);
+
+- num_poisoned_pages_inc(pfn);
+-
+- update_per_node_mf_stats(pfn, result);
++ if (type != MF_MSG_ALREADY_POISONED) {
++ num_poisoned_pages_inc(pfn);
++ update_per_node_mf_stats(pfn, result);
++ }
+
+ pr_err("%#lx: recovery action for %s: %s\n",
+ pfn, action_page_types[type], action_name[result]);
+@@ -2088,12 +2089,11 @@ static int try_memory_failure_hugetlb(unsigned long pfn, int flags, int *hugetlb
+ *hugetlb = 0;
+ return 0;
+ } else if (res == -EHWPOISON) {
+- pr_err("%#lx: already hardware poisoned\n", pfn);
+ if (flags & MF_ACTION_REQUIRED) {
+ folio = page_folio(p);
+ res = kill_accessing_process(current, folio_pfn(folio), flags);
+- action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
+ }
++ action_result(pfn, MF_MSG_ALREADY_POISONED, MF_FAILED);
+ return res;
+ } else if (res == -EBUSY) {
+ if (!(flags & MF_NO_RETRY)) {
+@@ -2279,7 +2279,6 @@ int memory_failure(unsigned long pfn, int flags)
+ goto unlock_mutex;
+
+ if (TestSetPageHWPoison(p)) {
+- pr_err("%#lx: already hardware poisoned\n", pfn);
+ res = -EHWPOISON;
+ if (flags & MF_ACTION_REQUIRED)
+ res = kill_accessing_process(current, pfn, flags);
+@@ -2576,10 +2575,9 @@ int unpoison_memory(unsigned long pfn)
+ static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
+ DEFAULT_RATELIMIT_BURST);
+
+- if (!pfn_valid(pfn))
+- return -ENXIO;
+-
+- p = pfn_to_page(pfn);
++ p = pfn_to_online_page(pfn);
++ if (!p)
++ return -EIO;
+ folio = page_folio(p);
+
+ mutex_lock(&mf_mutex);
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 6dbcdceecae134..5edd536ba9d2a5 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2026,6 +2026,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ if (unlikely(!vmap_initialized))
+ return ERR_PTR(-EBUSY);
+
++ /* Only reclaim behaviour flags are relevant. */
++ gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
+ might_sleep();
+
+ /*
+@@ -2038,8 +2040,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ */
+ va = node_alloc(size, align, vstart, vend, &addr, &vn_id);
+ if (!va) {
+- gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
+-
+ va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
+ if (unlikely(!va))
+ return ERR_PTR(-ENOMEM);
+@@ -2089,7 +2089,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
+ BUG_ON(va->va_start < vstart);
+ BUG_ON(va->va_end > vend);
+
+- ret = kasan_populate_vmalloc(addr, size);
++ ret = kasan_populate_vmalloc(addr, size, gfp_mask);
+ if (ret) {
+ free_vmap_area(va);
+ return ERR_PTR(ret);
+@@ -4826,7 +4826,7 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
+
+ /* populate the kasan shadow space */
+ for (area = 0; area < nr_vms; area++) {
+- if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area]))
++ if (kasan_populate_vmalloc(vas[area]->va_start, sizes[area], GFP_KERNEL))
+ goto err_free_shadow;
+ }
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index ad5574e9a93ee9..ce17e489c67c37 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -829,7 +829,17 @@ static void bis_cleanup(struct hci_conn *conn)
+ /* Check if ISO connection is a BIS and terminate advertising
+ * set and BIG if there are no other connections using it.
+ */
+- bis = hci_conn_hash_lookup_big(hdev, conn->iso_qos.bcast.big);
++ bis = hci_conn_hash_lookup_big_state(hdev,
++ conn->iso_qos.bcast.big,
++ BT_CONNECTED,
++ HCI_ROLE_MASTER);
++ if (bis)
++ return;
++
++ bis = hci_conn_hash_lookup_big_state(hdev,
++ conn->iso_qos.bcast.big,
++ BT_CONNECT,
++ HCI_ROLE_MASTER);
+ if (bis)
+ return;
+
+@@ -2274,7 +2284,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ * the start periodic advertising and create BIG commands have
+ * been queued
+ */
+- hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK,
++ hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,
+ BT_BOUND, &data);
+
+ /* Queue start periodic advertising and create BIG */
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 0ffdbe249f5d3d..090c7ffa515252 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6973,9 +6973,14 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ continue;
+ }
+
+- if (ev->status != 0x42)
++ if (ev->status != 0x42) {
+ /* Mark PA sync as established */
+ set_bit(HCI_CONN_PA_SYNC, &bis->flags);
++ /* Reset cleanup callback of PA Sync so it doesn't
++ * terminate the sync when deleting the connection.
++ */
++ conn->cleanup = NULL;
++ }
+
+ bis->sync_handle = conn->sync_handle;
+ bis->iso_qos.bcast.big = ev->handle;
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 14a4215352d5f1..c21566e1494a99 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -1347,7 +1347,7 @@ static int iso_sock_getname(struct socket *sock, struct sockaddr *addr,
+ bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst);
+ sa->iso_bdaddr_type = iso_pi(sk)->dst_type;
+
+- if (hcon && hcon->type == BIS_LINK) {
++ if (hcon && (hcon->type == BIS_LINK || hcon->type == PA_LINK)) {
+ sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid;
+ sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis;
+ memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis,
+diff --git a/net/bridge/br.c b/net/bridge/br.c
+index 0adeafe11a3651..ad2d8f59fc7bcc 100644
+--- a/net/bridge/br.c
++++ b/net/bridge/br.c
+@@ -324,6 +324,13 @@ int br_boolopt_multi_toggle(struct net_bridge *br,
+ int err = 0;
+ int opt_id;
+
++ opt_id = find_next_bit(&bitmap, BITS_PER_LONG, BR_BOOLOPT_MAX);
++ if (opt_id != BITS_PER_LONG) {
++ NL_SET_ERR_MSG_FMT_MOD(extack, "Unknown boolean option %d",
++ opt_id);
++ return -EINVAL;
++ }
++
+ for_each_set_bit(opt_id, &bitmap, BR_BOOLOPT_MAX) {
+ bool on = !!(bm->optval & BIT(opt_id));
+
+diff --git a/net/can/j1939/bus.c b/net/can/j1939/bus.c
+index 39844f14eed862..797719cb227ec5 100644
+--- a/net/can/j1939/bus.c
++++ b/net/can/j1939/bus.c
+@@ -290,8 +290,11 @@ int j1939_local_ecu_get(struct j1939_priv *priv, name_t name, u8 sa)
+ if (!ecu)
+ ecu = j1939_ecu_create_locked(priv, name);
+ err = PTR_ERR_OR_ZERO(ecu);
+- if (err)
++ if (err) {
++ if (j1939_address_is_unicast(sa))
++ priv->ents[sa].nusers--;
+ goto done;
++ }
+
+ ecu->nusers++;
+ /* TODO: do we care if ecu->addr != sa? */
+diff --git a/net/can/j1939/j1939-priv.h b/net/can/j1939/j1939-priv.h
+index 31a93cae5111b5..81f58924b4acd7 100644
+--- a/net/can/j1939/j1939-priv.h
++++ b/net/can/j1939/j1939-priv.h
+@@ -212,6 +212,7 @@ void j1939_priv_get(struct j1939_priv *priv);
+
+ /* notify/alert all j1939 sockets bound to ifindex */
+ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv);
++void j1939_sk_netdev_event_unregister(struct j1939_priv *priv);
+ int j1939_cancel_active_session(struct j1939_priv *priv, struct sock *sk);
+ void j1939_tp_init(struct j1939_priv *priv);
+
+diff --git a/net/can/j1939/main.c b/net/can/j1939/main.c
+index 7e8a20f2fc42b5..3706a872ecafdb 100644
+--- a/net/can/j1939/main.c
++++ b/net/can/j1939/main.c
+@@ -377,6 +377,9 @@ static int j1939_netdev_notify(struct notifier_block *nb,
+ j1939_sk_netdev_event_netdown(priv);
+ j1939_ecu_unmap_all(priv);
+ break;
++ case NETDEV_UNREGISTER:
++ j1939_sk_netdev_event_unregister(priv);
++ break;
+ }
+
+ j1939_priv_put(priv);
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 6fefe7a6876116..785b883a1319d3 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -520,6 +520,9 @@ static int j1939_sk_bind(struct socket *sock, struct sockaddr *uaddr, int len)
+ ret = j1939_local_ecu_get(priv, jsk->addr.src_name, jsk->addr.sa);
+ if (ret) {
+ j1939_netdev_stop(priv);
++ jsk->priv = NULL;
++ synchronize_rcu();
++ j1939_priv_put(priv);
+ goto out_release_sock;
+ }
+
+@@ -1299,6 +1302,55 @@ void j1939_sk_netdev_event_netdown(struct j1939_priv *priv)
+ read_unlock_bh(&priv->j1939_socks_lock);
+ }
+
++void j1939_sk_netdev_event_unregister(struct j1939_priv *priv)
++{
++ struct sock *sk;
++ struct j1939_sock *jsk;
++ bool wait_rcu = false;
++
++rescan: /* The caller is holding a ref on this "priv" via j1939_priv_get_by_ndev(). */
++ read_lock_bh(&priv->j1939_socks_lock);
++ list_for_each_entry(jsk, &priv->j1939_socks, list) {
++ /* Skip if j1939_jsk_add() is not called on this socket. */
++ if (!(jsk->state & J1939_SOCK_BOUND))
++ continue;
++ sk = &jsk->sk;
++ sock_hold(sk);
++ read_unlock_bh(&priv->j1939_socks_lock);
++ /* Check whether j1939_jsk_del() has not yet been called on this socket
++ * after taking the socket's lock, since both j1939_sk_bind() and
++ * j1939_sk_release() call j1939_jsk_del() with the socket's lock held.
++ */
++ lock_sock(sk);
++ if (jsk->state & J1939_SOCK_BOUND) {
++ /* Neither j1939_sk_bind() nor j1939_sk_release() called j1939_jsk_del().
++ * Make this socket no longer bound, by pretending that j1939_sk_bind()
++ * dropped the old references but did not take new ones.
++ */
++ j1939_jsk_del(priv, jsk);
++ j1939_local_ecu_put(priv, jsk->addr.src_name, jsk->addr.sa);
++ j1939_netdev_stop(priv);
++ /* Call j1939_priv_put() now and prevent j1939_sk_sock_destruct() from
++ * calling the corresponding j1939_priv_put().
++ *
++ * j1939_sk_sock_destruct() is supposed to call j1939_priv_put() after
++ * an RCU grace period. But since the caller is holding a ref on this
++ * "priv", we can defer synchronize_rcu() until immediately before
++ * the caller calls j1939_priv_put().
++ */
++ j1939_priv_put(priv);
++ jsk->priv = NULL;
++ wait_rcu = true;
++ }
++ release_sock(sk);
++ sock_put(sk);
++ goto rescan;
++ }
++ read_unlock_bh(&priv->j1939_socks_lock);
++ if (wait_rcu)
++ synchronize_rcu();
++}
++
+ static int j1939_sk_no_ioctlcmd(struct socket *sock, unsigned int cmd,
+ unsigned long arg)
+ {
+diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
+index d1b5705dc0c648..9f6d860411cbd1 100644
+--- a/net/ceph/messenger.c
++++ b/net/ceph/messenger.c
+@@ -1524,7 +1524,7 @@ static void con_fault_finish(struct ceph_connection *con)
+ * in case we faulted due to authentication, invalidate our
+ * current tickets so that we can get new ones.
+ */
+- if (con->v1.auth_retry) {
++ if (!ceph_msgr2(from_msgr(con->msgr)) && con->v1.auth_retry) {
+ dout("auth_retry %d, invalidating\n", con->v1.auth_retry);
+ if (con->ops->invalidate_authorizer)
+ con->ops->invalidate_authorizer(con);
+@@ -1714,9 +1714,10 @@ static void clear_standby(struct ceph_connection *con)
+ {
+ /* come back from STANDBY? */
+ if (con->state == CEPH_CON_S_STANDBY) {
+- dout("clear_standby %p and ++connect_seq\n", con);
++ dout("clear_standby %p\n", con);
+ con->state = CEPH_CON_S_PREOPEN;
+- con->v1.connect_seq++;
++ if (!ceph_msgr2(from_msgr(con->msgr)))
++ con->v1.connect_seq++;
+ WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_WRITE_PENDING));
+ WARN_ON(ceph_con_flag_test(con, CEPH_CON_F_KEEPALIVE_PENDING));
+ }
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index 616479e7146633..9447065d01afb0 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -464,8 +464,15 @@ int generic_hwtstamp_get_lower(struct net_device *dev,
+ if (!netif_device_present(dev))
+ return -ENODEV;
+
+- if (ops->ndo_hwtstamp_get)
+- return dev_get_hwtstamp_phylib(dev, kernel_cfg);
++ if (ops->ndo_hwtstamp_get) {
++ int err;
++
++ netdev_lock_ops(dev);
++ err = dev_get_hwtstamp_phylib(dev, kernel_cfg);
++ netdev_unlock_ops(dev);
++
++ return err;
++ }
+
+ /* Legacy path: unconverted lower driver */
+ return generic_hwtstamp_ioctl_lower(dev, SIOCGHWTSTAMP, kernel_cfg);
+@@ -481,8 +488,15 @@ int generic_hwtstamp_set_lower(struct net_device *dev,
+ if (!netif_device_present(dev))
+ return -ENODEV;
+
+- if (ops->ndo_hwtstamp_set)
+- return dev_set_hwtstamp_phylib(dev, kernel_cfg, extack);
++ if (ops->ndo_hwtstamp_set) {
++ int err;
++
++ netdev_lock_ops(dev);
++ err = dev_set_hwtstamp_phylib(dev, kernel_cfg, extack);
++ netdev_unlock_ops(dev);
++
++ return err;
++ }
+
+ /* Legacy path: unconverted lower driver */
+ return generic_hwtstamp_ioctl_lower(dev, SIOCSHWTSTAMP, kernel_cfg);
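Both generic hwtstamp paths above now take the netdev instance lock around the phylib helper instead of calling it bare, so ndo_hwtstamp_get/set never run unlocked. The shape of the fix as a generic sketch, with a pthread mutex standing in for netdev_lock_ops()/netdev_unlock_ops():

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

    static int get_hwtstamp_locked(int *cfg)
    {
            *cfg = 1;       /* pretend to read the device configuration */
            return 0;
    }

    static int generic_hwtstamp_get(int *cfg)
    {
            int err;

            pthread_mutex_lock(&dev_lock);          /* netdev_lock_ops(dev) */
            err = get_hwtstamp_locked(cfg);
            pthread_mutex_unlock(&dev_lock);        /* netdev_unlock_ops(dev) */

            return err;
    }

    int main(void)
    {
            int cfg = 0, err = generic_hwtstamp_get(&cfg);

            printf("err=%d cfg=%d\n", err, cfg);
            return 0;
    }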
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index 88657255fec12b..fbbc3ccf9df64b 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -49,7 +49,7 @@ static bool hsr_check_carrier(struct hsr_port *master)
+
+ ASSERT_RTNL();
+
+- hsr_for_each_port(master->hsr, port) {
++ hsr_for_each_port_rtnl(master->hsr, port) {
+ if (port->type != HSR_PT_MASTER && is_slave_up(port->dev)) {
+ netif_carrier_on(master->dev);
+ return true;
+@@ -105,7 +105,7 @@ int hsr_get_max_mtu(struct hsr_priv *hsr)
+ struct hsr_port *port;
+
+ mtu_max = ETH_DATA_LEN;
+- hsr_for_each_port(hsr, port)
++ hsr_for_each_port_rtnl(hsr, port)
+ if (port->type != HSR_PT_MASTER)
+ mtu_max = min(port->dev->mtu, mtu_max);
+
+@@ -139,7 +139,7 @@ static int hsr_dev_open(struct net_device *dev)
+
+ hsr = netdev_priv(dev);
+
+- hsr_for_each_port(hsr, port) {
++ hsr_for_each_port_rtnl(hsr, port) {
+ if (port->type == HSR_PT_MASTER)
+ continue;
+ switch (port->type) {
+@@ -172,7 +172,7 @@ static int hsr_dev_close(struct net_device *dev)
+ struct hsr_priv *hsr;
+
+ hsr = netdev_priv(dev);
+- hsr_for_each_port(hsr, port) {
++ hsr_for_each_port_rtnl(hsr, port) {
+ if (port->type == HSR_PT_MASTER)
+ continue;
+ switch (port->type) {
+@@ -205,7 +205,7 @@ static netdev_features_t hsr_features_recompute(struct hsr_priv *hsr,
+ * may become enabled.
+ */
+ features &= ~NETIF_F_ONE_FOR_ALL;
+- hsr_for_each_port(hsr, port)
++ hsr_for_each_port_rtnl(hsr, port)
+ features = netdev_increment_features(features,
+ port->dev->features,
+ mask);
+@@ -226,6 +226,7 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ struct hsr_priv *hsr = netdev_priv(dev);
+ struct hsr_port *master;
+
++ rcu_read_lock();
+ master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
+ if (master) {
+ skb->dev = master->dev;
+@@ -238,6 +239,8 @@ static netdev_tx_t hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+ dev_core_stats_tx_dropped_inc(dev);
+ dev_kfree_skb_any(skb);
+ }
++ rcu_read_unlock();
++
+ return NETDEV_TX_OK;
+ }
+
+@@ -484,7 +487,7 @@ static void hsr_set_rx_mode(struct net_device *dev)
+
+ hsr = netdev_priv(dev);
+
+- hsr_for_each_port(hsr, port) {
++ hsr_for_each_port_rtnl(hsr, port) {
+ if (port->type == HSR_PT_MASTER)
+ continue;
+ switch (port->type) {
+@@ -506,7 +509,7 @@ static void hsr_change_rx_flags(struct net_device *dev, int change)
+
+ hsr = netdev_priv(dev);
+
+- hsr_for_each_port(hsr, port) {
++ hsr_for_each_port_rtnl(hsr, port) {
+ if (port->type == HSR_PT_MASTER)
+ continue;
+ switch (port->type) {
+@@ -534,7 +537,7 @@ static int hsr_ndo_vlan_rx_add_vid(struct net_device *dev,
+
+ hsr = netdev_priv(dev);
+
+- hsr_for_each_port(hsr, port) {
++ hsr_for_each_port_rtnl(hsr, port) {
+ if (port->type == HSR_PT_MASTER ||
+ port->type == HSR_PT_INTERLINK)
+ continue;
+@@ -580,7 +583,7 @@ static int hsr_ndo_vlan_rx_kill_vid(struct net_device *dev,
+
+ hsr = netdev_priv(dev);
+
+- hsr_for_each_port(hsr, port) {
++ hsr_for_each_port_rtnl(hsr, port) {
+ switch (port->type) {
+ case HSR_PT_SLAVE_A:
+ case HSR_PT_SLAVE_B:
+@@ -672,9 +675,14 @@ struct net_device *hsr_get_port_ndev(struct net_device *ndev,
+ struct hsr_priv *hsr = netdev_priv(ndev);
+ struct hsr_port *port;
+
++ rcu_read_lock();
+ hsr_for_each_port(hsr, port)
+- if (port->type == pt)
++ if (port->type == pt) {
++ dev_hold(port->dev);
++ rcu_read_unlock();
+ return port->dev;
++ }
++ rcu_read_unlock();
+ return NULL;
+ }
+ EXPORT_SYMBOL(hsr_get_port_ndev);
+diff --git a/net/hsr/hsr_main.c b/net/hsr/hsr_main.c
+index 192893c3f2ec73..bc94b07101d80e 100644
+--- a/net/hsr/hsr_main.c
++++ b/net/hsr/hsr_main.c
+@@ -22,7 +22,7 @@ static bool hsr_slave_empty(struct hsr_priv *hsr)
+ {
+ struct hsr_port *port;
+
+- hsr_for_each_port(hsr, port)
++ hsr_for_each_port_rtnl(hsr, port)
+ if (port->type != HSR_PT_MASTER)
+ return false;
+ return true;
+@@ -134,7 +134,7 @@ struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt)
+ {
+ struct hsr_port *port;
+
+- hsr_for_each_port(hsr, port)
++ hsr_for_each_port_rtnl(hsr, port)
+ if (port->type == pt)
+ return port;
+ return NULL;
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index 135ec5fce01967..33b0d2460c9bcd 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -224,6 +224,9 @@ struct hsr_priv {
+ #define hsr_for_each_port(hsr, port) \
+ list_for_each_entry_rcu((port), &(hsr)->ports, port_list)
+
++#define hsr_for_each_port_rtnl(hsr, port) \
++ list_for_each_entry_rcu((port), &(hsr)->ports, port_list, lockdep_rtnl_is_held())
++
+ struct hsr_port *hsr_port_get_hsr(struct hsr_priv *hsr, enum hsr_port_type pt);
+
+ /* Caller must ensure skb is a valid HSR frame */
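hsr_for_each_port_rtnl() is the same RCU list walk as hsr_for_each_port(), but it passes lockdep_rtnl_is_held() as the optional cond argument of list_for_each_entry_rcu(), telling RCU lockdep that holding RTNL is also a legitimate way to protect the walk. A toy cond-checked iteration macro, with assert() standing in for lockdep:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct port {
            struct port *next;
            int type;
    };

    /* the walk is only legal when cond holds; checked once at loop entry */
    #define for_each_port_checked(p, head, cond) \
            for ((assert(cond)), (p) = (head); (p); (p) = (p)->next)

    static bool rcu_held, rtnl_held;

    int main(void)
    {
            struct port b = { NULL, 2 }, a = { &b, 1 };
            struct port *p;

            rtnl_held = true;       /* caller holds the writer-side lock */
            for_each_port_checked(p, &a, rcu_held || rtnl_held)
                    printf("port type %d\n", p->type);
            return 0;
    }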
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index f65d2f7273813b..8392d304a72ebe 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -204,6 +204,9 @@ static int iptunnel_pmtud_build_icmp(struct sk_buff *skb, int mtu)
+ if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr)))
+ return -EINVAL;
+
++ if (skb_is_gso(skb))
++ skb_gso_reset(skb);
++
+ skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN);
+ pskb_pull(skb, ETH_HLEN);
+ skb_reset_network_header(skb);
+@@ -298,6 +301,9 @@ static int iptunnel_pmtud_build_icmpv6(struct sk_buff *skb, int mtu)
+ if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr)))
+ return -EINVAL;
+
++ if (skb_is_gso(skb))
++ skb_gso_reset(skb);
++
+ skb_copy_bits(skb, skb_mac_offset(skb), &eh, ETH_HLEN);
+ pskb_pull(skb, ETH_HLEN);
+ skb_reset_network_header(skb);
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index ba581785adb4b3..a268e1595b22aa 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -408,8 +408,11 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ if (!psock->cork) {
+ psock->cork = kzalloc(sizeof(*psock->cork),
+ GFP_ATOMIC | __GFP_NOWARN);
+- if (!psock->cork)
++ if (!psock->cork) {
++ sk_msg_free(sk, msg);
++ *copied = 0;
+ return -ENOMEM;
++ }
+ }
+ memcpy(psock->cork, msg, sizeof(*msg));
+ return 0;
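The tcp_bpf hunk plugs a leak on the cork allocation failure path: the partially consumed msg must be freed and *copied zeroed before returning -ENOMEM, otherwise the caller assumes the bytes are still queued. The clean-up-on-error shape in miniature, with hypothetical helper names:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct msg { char data[64]; };

    static void msg_free(struct msg *m)
    {
            memset(m, 0, sizeof(*m));       /* stand-in for sk_msg_free() */
    }

    static int cork_data(struct msg **cork, struct msg *m, size_t *copied)
    {
            if (!*cork) {
                    *cork = calloc(1, sizeof(**cork));
                    if (!*cork) {
                            msg_free(m);    /* don't leak what was consumed */
                            *copied = 0;    /* and report nothing as sent */
                            return -ENOMEM;
                    }
            }
            memcpy(*cork, m, sizeof(*m));
            return 0;
    }

    int main(void)
    {
            struct msg m = { "payload" };
            struct msg *cork = NULL;
            size_t copied = sizeof(m.data);

            printf("cork_data: %d (copied=%zu)\n",
                   cork_data(&cork, &m, &copied), copied);
            free(cork);
            return 0;
    }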
+diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
+index 3caa0a9d3b3885..25d2b65653cd40 100644
+--- a/net/mptcp/sockopt.c
++++ b/net/mptcp/sockopt.c
+@@ -1508,13 +1508,12 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ static const unsigned int tx_rx_locks = SOCK_RCVBUF_LOCK | SOCK_SNDBUF_LOCK;
+ struct sock *sk = (struct sock *)msk;
++ bool keep_open;
+
+- if (ssk->sk_prot->keepalive) {
+- if (sock_flag(sk, SOCK_KEEPOPEN))
+- ssk->sk_prot->keepalive(ssk, 1);
+- else
+- ssk->sk_prot->keepalive(ssk, 0);
+- }
++ keep_open = sock_flag(sk, SOCK_KEEPOPEN);
++ if (ssk->sk_prot->keepalive)
++ ssk->sk_prot->keepalive(ssk, keep_open);
++ sock_valbool_flag(ssk, SOCK_KEEPOPEN, keep_open);
+
+ ssk->sk_priority = sk->sk_priority;
+ ssk->sk_bound_dev_if = sk->sk_bound_dev_if;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0e86434ca13b00..cde63e5f18d8f9 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1131,11 +1131,14 @@ nf_tables_chain_type_lookup(struct net *net, const struct nlattr *nla,
+ return ERR_PTR(-ENOENT);
+ }
+
+-static __be16 nft_base_seq(const struct net *net)
++static unsigned int nft_base_seq(const struct net *net)
+ {
+- struct nftables_pernet *nft_net = nft_pernet(net);
++ return READ_ONCE(net->nft.base_seq);
++}
+
+- return htons(nft_net->base_seq & 0xffff);
++static __be16 nft_base_seq_be16(const struct net *net)
++{
++ return htons(nft_base_seq(net) & 0xffff);
+ }
+
+ static const struct nla_policy nft_table_policy[NFTA_TABLE_MAX + 1] = {
+@@ -1153,9 +1156,9 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ {
+ struct nlmsghdr *nlh;
+
+- event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+- nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+- NFNETLINK_V0, nft_base_seq(net));
++ nlh = nfnl_msg_put(skb, portid, seq,
++ nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++ flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ if (!nlh)
+ goto nla_put_failure;
+
+@@ -1165,6 +1168,12 @@ static int nf_tables_fill_table_info(struct sk_buff *skb, struct net *net,
+ NFTA_TABLE_PAD))
+ goto nla_put_failure;
+
++ if (event == NFT_MSG_DELTABLE ||
++ event == NFT_MSG_DESTROYTABLE) {
++ nlmsg_end(skb, nlh);
++ return 0;
++ }
++
+ if (nla_put_be32(skb, NFTA_TABLE_FLAGS,
+ htonl(table->flags & NFT_TABLE_F_MASK)))
+ goto nla_put_failure;
+@@ -1242,7 +1251,7 @@ static int nf_tables_dump_tables(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = READ_ONCE(nft_net->base_seq);
++ cb->seq = nft_base_seq(net);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -2022,9 +2031,9 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ {
+ struct nlmsghdr *nlh;
+
+- event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+- nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+- NFNETLINK_V0, nft_base_seq(net));
++ nlh = nfnl_msg_put(skb, portid, seq,
++ nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++ flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ if (!nlh)
+ goto nla_put_failure;
+
+@@ -2034,6 +2043,13 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net,
+ NFTA_CHAIN_PAD))
+ goto nla_put_failure;
+
++ if (!hook_list &&
++ (event == NFT_MSG_DELCHAIN ||
++ event == NFT_MSG_DESTROYCHAIN)) {
++ nlmsg_end(skb, nlh);
++ return 0;
++ }
++
+ if (nft_is_base_chain(chain)) {
+ const struct nft_base_chain *basechain = nft_base_chain(chain);
+ struct nft_stats __percpu *stats;
+@@ -2120,7 +2136,7 @@ static int nf_tables_dump_chains(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = READ_ONCE(nft_net->base_seq);
++ cb->seq = nft_base_seq(net);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -3658,7 +3674,7 @@ static int nf_tables_fill_rule_info(struct sk_buff *skb, struct net *net,
+ u16 type = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+
+ nlh = nfnl_msg_put(skb, portid, seq, type, flags, family, NFNETLINK_V0,
+- nft_base_seq(net));
++ nft_base_seq_be16(net));
+ if (!nlh)
+ goto nla_put_failure;
+
+@@ -3826,7 +3842,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = READ_ONCE(nft_net->base_seq);
++ cb->seq = nft_base_seq(net);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -4037,7 +4053,7 @@ static int nf_tables_getrule_reset(struct sk_buff *skb,
+ buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ nla_len(nla[NFTA_RULE_TABLE]),
+ (char *)nla_data(nla[NFTA_RULE_TABLE]),
+- nft_net->base_seq);
++ nft_base_seq(net));
+ audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ AUDIT_NFT_OP_RULE_RESET, GFP_ATOMIC);
+ kfree(buf);
+@@ -4871,9 +4887,10 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ u32 seq = ctx->seq;
+ int i;
+
+- event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+- nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
+- NFNETLINK_V0, nft_base_seq(ctx->net));
++ nlh = nfnl_msg_put(skb, portid, seq,
++ nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++ flags, ctx->family, NFNETLINK_V0,
++ nft_base_seq_be16(ctx->net));
+ if (!nlh)
+ goto nla_put_failure;
+
+@@ -4885,6 +4902,12 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ NFTA_SET_PAD))
+ goto nla_put_failure;
+
++ if (event == NFT_MSG_DELSET ||
++ event == NFT_MSG_DESTROYSET) {
++ nlmsg_end(skb, nlh);
++ return 0;
++ }
++
+ if (set->flags != 0)
+ if (nla_put_be32(skb, NFTA_SET_FLAGS, htonl(set->flags)))
+ goto nla_put_failure;
+@@ -5012,7 +5035,7 @@ static int nf_tables_dump_sets(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = READ_ONCE(nft_net->base_seq);
++ cb->seq = nft_base_seq(net);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (ctx->family != NFPROTO_UNSPEC &&
+@@ -6189,7 +6212,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = READ_ONCE(nft_net->base_seq);
++ cb->seq = nft_base_seq(net);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (dump_ctx->ctx.family != NFPROTO_UNSPEC &&
+@@ -6218,7 +6241,7 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
+ seq = cb->nlh->nlmsg_seq;
+
+ nlh = nfnl_msg_put(skb, portid, seq, event, NLM_F_MULTI,
+- table->family, NFNETLINK_V0, nft_base_seq(net));
++ table->family, NFNETLINK_V0, nft_base_seq_be16(net));
+ if (!nlh)
+ goto nla_put_failure;
+
+@@ -6311,7 +6334,7 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
+
+ event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+ nlh = nfnl_msg_put(skb, portid, seq, event, flags, ctx->family,
+- NFNETLINK_V0, nft_base_seq(ctx->net));
++ NFNETLINK_V0, nft_base_seq_be16(ctx->net));
+ if (!nlh)
+ goto nla_put_failure;
+
+@@ -6610,7 +6633,7 @@ static int nf_tables_getsetelem_reset(struct sk_buff *skb,
+ }
+ nelems++;
+ }
+- audit_log_nft_set_reset(dump_ctx.ctx.table, nft_net->base_seq, nelems);
++ audit_log_nft_set_reset(dump_ctx.ctx.table, nft_base_seq(info->net), nelems);
+
+ out_unlock:
+ rcu_read_unlock();
+@@ -8359,20 +8382,26 @@ static int nf_tables_fill_obj_info(struct sk_buff *skb, struct net *net,
+ {
+ struct nlmsghdr *nlh;
+
+- event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+- nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+- NFNETLINK_V0, nft_base_seq(net));
++ nlh = nfnl_msg_put(skb, portid, seq,
++ nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++ flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ if (!nlh)
+ goto nla_put_failure;
+
+ if (nla_put_string(skb, NFTA_OBJ_TABLE, table->name) ||
+ nla_put_string(skb, NFTA_OBJ_NAME, obj->key.name) ||
++ nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+ nla_put_be64(skb, NFTA_OBJ_HANDLE, cpu_to_be64(obj->handle),
+ NFTA_OBJ_PAD))
+ goto nla_put_failure;
+
+- if (nla_put_be32(skb, NFTA_OBJ_TYPE, htonl(obj->ops->type->type)) ||
+- nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
++ if (event == NFT_MSG_DELOBJ ||
++ event == NFT_MSG_DESTROYOBJ) {
++ nlmsg_end(skb, nlh);
++ return 0;
++ }
++
++ if (nla_put_be32(skb, NFTA_OBJ_USE, htonl(obj->use)) ||
+ nft_object_dump(skb, NFTA_OBJ_DATA, obj, reset))
+ goto nla_put_failure;
+
+@@ -8420,7 +8449,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = READ_ONCE(nft_net->base_seq);
++ cb->seq = nft_base_seq(net);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -8454,7 +8483,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
+ idx++;
+ }
+ if (ctx->reset && entries)
+- audit_log_obj_reset(table, nft_net->base_seq, entries);
++ audit_log_obj_reset(table, nft_base_seq(net), entries);
+ if (rc < 0)
+ break;
+ }
+@@ -8623,7 +8652,7 @@ static int nf_tables_getobj_reset(struct sk_buff *skb,
+ buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ nla_len(nla[NFTA_OBJ_TABLE]),
+ (char *)nla_data(nla[NFTA_OBJ_TABLE]),
+- nft_net->base_seq);
++ nft_base_seq(net));
+ audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC);
+ kfree(buf);
+@@ -8728,9 +8757,8 @@ void nft_obj_notify(struct net *net, const struct nft_table *table,
+ struct nft_object *obj, u32 portid, u32 seq, int event,
+ u16 flags, int family, int report, gfp_t gfp)
+ {
+- struct nftables_pernet *nft_net = nft_pernet(net);
+ char *buf = kasprintf(gfp, "%s:%u",
+- table->name, nft_net->base_seq);
++ table->name, nft_base_seq(net));
+
+ audit_log_nfcfg(buf,
+ family,
+@@ -9413,9 +9441,9 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ struct nft_hook *hook;
+ struct nlmsghdr *nlh;
+
+- event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event);
+- nlh = nfnl_msg_put(skb, portid, seq, event, flags, family,
+- NFNETLINK_V0, nft_base_seq(net));
++ nlh = nfnl_msg_put(skb, portid, seq,
++ nfnl_msg_type(NFNL_SUBSYS_NFTABLES, event),
++ flags, family, NFNETLINK_V0, nft_base_seq_be16(net));
+ if (!nlh)
+ goto nla_put_failure;
+
+@@ -9425,6 +9453,13 @@ static int nf_tables_fill_flowtable_info(struct sk_buff *skb, struct net *net,
+ NFTA_FLOWTABLE_PAD))
+ goto nla_put_failure;
+
++ if (!hook_list &&
++ (event == NFT_MSG_DELFLOWTABLE ||
++ event == NFT_MSG_DESTROYFLOWTABLE)) {
++ nlmsg_end(skb, nlh);
++ return 0;
++ }
++
+ if (nla_put_be32(skb, NFTA_FLOWTABLE_USE, htonl(flowtable->use)) ||
+ nla_put_be32(skb, NFTA_FLOWTABLE_FLAGS, htonl(flowtable->data.flags)))
+ goto nla_put_failure;
+@@ -9477,7 +9512,7 @@ static int nf_tables_dump_flowtable(struct sk_buff *skb,
+
+ rcu_read_lock();
+ nft_net = nft_pernet(net);
+- cb->seq = READ_ONCE(nft_net->base_seq);
++ cb->seq = nft_base_seq(net);
+
+ list_for_each_entry_rcu(table, &nft_net->tables, list) {
+ if (family != NFPROTO_UNSPEC && family != table->family)
+@@ -9662,17 +9697,16 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+ static int nf_tables_fill_gen_info(struct sk_buff *skb, struct net *net,
+ u32 portid, u32 seq)
+ {
+- struct nftables_pernet *nft_net = nft_pernet(net);
+ struct nlmsghdr *nlh;
+ char buf[TASK_COMM_LEN];
+ int event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWGEN);
+
+ nlh = nfnl_msg_put(skb, portid, seq, event, 0, AF_UNSPEC,
+- NFNETLINK_V0, nft_base_seq(net));
++ NFNETLINK_V0, nft_base_seq_be16(net));
+ if (!nlh)
+ goto nla_put_failure;
+
+- if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_net->base_seq)) ||
++ if (nla_put_be32(skb, NFTA_GEN_ID, htonl(nft_base_seq(net))) ||
+ nla_put_be32(skb, NFTA_GEN_PROC_PID, htonl(task_pid_nr(current))) ||
+ nla_put_string(skb, NFTA_GEN_PROC_NAME, get_task_comm(buf, current)))
+ goto nla_put_failure;
+@@ -10933,11 +10967,12 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ * Bump generation counter, invalidate any dump in progress.
+ * Cannot fail after this point.
+ */
+- base_seq = READ_ONCE(nft_net->base_seq);
++ base_seq = nft_base_seq(net);
+ while (++base_seq == 0)
+ ;
+
+- WRITE_ONCE(nft_net->base_seq, base_seq);
++ /* pairs with smp_load_acquire in nft_lookup_eval */
++ smp_store_release(&net->nft.base_seq, base_seq);
+
+ gc_seq = nft_gc_seq_begin(nft_net);
+
+@@ -11146,7 +11181,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+
+ nft_commit_notify(net, NETLINK_CB(skb).portid);
+ nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
+- nf_tables_commit_audit_log(&adl, nft_net->base_seq);
++ nf_tables_commit_audit_log(&adl, nft_base_seq(net));
+
+ nft_gc_seq_end(nft_net, gc_seq);
+ nft_net->validate_state = NFT_VALIDATE_SKIP;
+@@ -11471,7 +11506,7 @@ static bool nf_tables_valid_genid(struct net *net, u32 genid)
+ mutex_lock(&nft_net->commit_mutex);
+ nft_net->tstamp = get_jiffies_64();
+
+- genid_ok = genid == 0 || nft_net->base_seq == genid;
++ genid_ok = genid == 0 || nft_base_seq(net) == genid;
+ if (!genid_ok)
+ mutex_unlock(&nft_net->commit_mutex);
+
+@@ -12108,7 +12143,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+ INIT_LIST_HEAD(&nft_net->module_list);
+ INIT_LIST_HEAD(&nft_net->notify_list);
+ mutex_init(&nft_net->commit_mutex);
+- nft_net->base_seq = 1;
++ net->nft.base_seq = 1;
+ nft_net->gc_seq = 0;
+ nft_net->validate_state = NFT_VALIDATE_SKIP;
+ INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work);
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index 88922e0e8e8377..e24493d9e77615 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -91,8 +91,9 @@ void nft_dynset_eval(const struct nft_expr *expr,
+ return;
+ }
+
+- if (set->ops->update(set, &regs->data[priv->sreg_key], nft_dynset_new,
+- expr, regs, &ext)) {
++ ext = set->ops->update(set, &regs->data[priv->sreg_key], nft_dynset_new,
++ expr, regs);
++ if (ext) {
+ if (priv->op == NFT_DYNSET_OP_UPDATE &&
+ nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT) &&
+ READ_ONCE(nft_set_ext_timeout(ext)->timeout) != 0) {
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index 63ef832b8aa710..58c5b14889c474 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -24,36 +24,73 @@ struct nft_lookup {
+ struct nft_set_binding binding;
+ };
+
+-#ifdef CONFIG_MITIGATION_RETPOLINE
+-bool nft_set_do_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++static const struct nft_set_ext *
++__nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
++#ifdef CONFIG_MITIGATION_RETPOLINE
+ if (set->ops == &nft_set_hash_fast_type.ops)
+- return nft_hash_lookup_fast(net, set, key, ext);
++ return nft_hash_lookup_fast(net, set, key);
+ if (set->ops == &nft_set_hash_type.ops)
+- return nft_hash_lookup(net, set, key, ext);
++ return nft_hash_lookup(net, set, key);
+
+ if (set->ops == &nft_set_rhash_type.ops)
+- return nft_rhash_lookup(net, set, key, ext);
++ return nft_rhash_lookup(net, set, key);
+
+ if (set->ops == &nft_set_bitmap_type.ops)
+- return nft_bitmap_lookup(net, set, key, ext);
++ return nft_bitmap_lookup(net, set, key);
+
+ if (set->ops == &nft_set_pipapo_type.ops)
+- return nft_pipapo_lookup(net, set, key, ext);
++ return nft_pipapo_lookup(net, set, key);
+ #if defined(CONFIG_X86_64) && !defined(CONFIG_UML)
+ if (set->ops == &nft_set_pipapo_avx2_type.ops)
+- return nft_pipapo_avx2_lookup(net, set, key, ext);
++ return nft_pipapo_avx2_lookup(net, set, key);
+ #endif
+
+ if (set->ops == &nft_set_rbtree_type.ops)
+- return nft_rbtree_lookup(net, set, key, ext);
++ return nft_rbtree_lookup(net, set, key);
+
+ WARN_ON_ONCE(1);
+- return set->ops->lookup(net, set, key, ext);
++#endif
++ return set->ops->lookup(net, set, key);
++}
++
++static unsigned int nft_base_seq(const struct net *net)
++{
++ /* pairs with smp_store_release() in nf_tables_commit() */
++ return smp_load_acquire(&net->nft.base_seq);
++}
++
++static bool nft_lookup_should_retry(const struct net *net, unsigned int seq)
++{
++ return unlikely(seq != nft_base_seq(net));
++}
++
++const struct nft_set_ext *
++nft_set_do_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
++{
++ const struct nft_set_ext *ext;
++ unsigned int base_seq;
++
++ do {
++ base_seq = nft_base_seq(net);
++
++ ext = __nft_set_do_lookup(net, set, key);
++ if (ext)
++ break;
++ /* No match? There is a small chance that lookup was
++ * performed in the old generation, but nf_tables_commit()
++ * already unlinked a (matching) element.
++ *
++ * We need to repeat the lookup to make sure that we didn't
++ * miss a matching element in the new generation.
++ */
++ } while (nft_lookup_should_retry(net, base_seq));
++
++ return ext;
+ }
+ EXPORT_SYMBOL_GPL(nft_set_do_lookup);
+-#endif
+
+ void nft_lookup_eval(const struct nft_expr *expr,
+ struct nft_regs *regs,
+@@ -61,12 +98,12 @@ void nft_lookup_eval(const struct nft_expr *expr,
+ {
+ const struct nft_lookup *priv = nft_expr_priv(expr);
+ const struct nft_set *set = priv->set;
+- const struct nft_set_ext *ext = NULL;
+ const struct net *net = nft_net(pkt);
++ const struct nft_set_ext *ext;
+ bool found;
+
+- found = nft_set_do_lookup(net, set, &regs->data[priv->sreg], &ext) ^
+- priv->invert;
++ ext = nft_set_do_lookup(net, set, &regs->data[priv->sreg]);
++ found = !!ext ^ priv->invert;
+ if (!found) {
+ ext = nft_set_catchall_lookup(net, set);
+ if (!ext) {
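The retry loop above pairs with the nf_tables_api.c change earlier in this patch: the commit path publishes the new generation with smp_store_release(), and a lookup that misses re-reads base_seq with smp_load_acquire() and retries in case it raced with an element being unlinked during commit. The same pairing in portable C11 atomics, reduced to a one-slot toy set:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static _Atomic unsigned int base_seq = 1;
    static _Atomic int slot = 42;           /* toy one-element set */

    static bool lookup_once(int key)
    {
            return atomic_load_explicit(&slot, memory_order_relaxed) == key;
    }

    static bool lookup(int key)
    {
            unsigned int seq;
            bool found;

            do {
                    seq = atomic_load_explicit(&base_seq, memory_order_acquire);
                    found = lookup_once(key);
                    if (found)
                            break;
                    /* miss: retry if a commit raced with this lookup */
            } while (seq != atomic_load_explicit(&base_seq, memory_order_acquire));

            return found;
    }

    static void commit(int newval)
    {
            atomic_store_explicit(&slot, newval, memory_order_relaxed);
            /* publish: pairs with the acquire loads in lookup() */
            atomic_store_explicit(&base_seq, base_seq + 1, memory_order_release);
    }

    int main(void)
    {
            commit(7);
            printf("%d\n", lookup(7));      /* prints 1 */
            return 0;
    }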
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index 09da7a3f9f9677..8ee66a86c3bc75 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -111,10 +111,9 @@ void nft_objref_map_eval(const struct nft_expr *expr,
+ struct net *net = nft_net(pkt);
+ const struct nft_set_ext *ext;
+ struct nft_object *obj;
+- bool found;
+
+- found = nft_set_do_lookup(net, set, &regs->data[priv->sreg], &ext);
+- if (!found) {
++ ext = nft_set_do_lookup(net, set, &regs->data[priv->sreg]);
++ if (!ext) {
+ ext = nft_set_catchall_lookup(net, set);
+ if (!ext) {
+ regs->verdict.code = NFT_BREAK;
+diff --git a/net/netfilter/nft_set_bitmap.c b/net/netfilter/nft_set_bitmap.c
+index 12390d2e994fc6..8d3f040a904a2c 100644
+--- a/net/netfilter/nft_set_bitmap.c
++++ b/net/netfilter/nft_set_bitmap.c
+@@ -75,16 +75,21 @@ nft_bitmap_active(const u8 *bitmap, u32 idx, u32 off, u8 genmask)
+ }
+
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_bitmap_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
+ const struct nft_bitmap *priv = nft_set_priv(set);
++ static const struct nft_set_ext found;
+ u8 genmask = nft_genmask_cur(net);
+ u32 idx, off;
+
+ nft_bitmap_location(set, key, &idx, &off);
+
+- return nft_bitmap_active(priv->bitmap, idx, off, genmask);
++ if (nft_bitmap_active(priv->bitmap, idx, off, genmask))
++ return &found;
++
++ return NULL;
+ }
+
+ static struct nft_bitmap_elem *
+@@ -221,7 +226,8 @@ static void nft_bitmap_walk(const struct nft_ctx *ctx,
+ const struct nft_bitmap *priv = nft_set_priv(set);
+ struct nft_bitmap_elem *be;
+
+- list_for_each_entry_rcu(be, &priv->list, head) {
++ list_for_each_entry_rcu(be, &priv->list, head,
++ lockdep_is_held(&nft_pernet(ctx->net)->commit_mutex)) {
+ if (iter->count < iter->skip)
+ goto cont;
+
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index abb0c8ec637191..9903c737c9f0ad 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -81,8 +81,9 @@ static const struct rhashtable_params nft_rhash_params = {
+ };
+
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_rhash_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
+ struct nft_rhash *priv = nft_set_priv(set);
+ const struct nft_rhash_elem *he;
+@@ -95,9 +96,9 @@ bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
+
+ he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
+ if (he != NULL)
+- *ext = &he->ext;
++ return &he->ext;
+
+- return !!he;
++ return NULL;
+ }
+
+ static struct nft_elem_priv *
+@@ -120,14 +121,11 @@ nft_rhash_get(const struct net *net, const struct nft_set *set,
+ return ERR_PTR(-ENOENT);
+ }
+
+-static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+- struct nft_elem_priv *
+- (*new)(struct nft_set *,
+- const struct nft_expr *,
+- struct nft_regs *regs),
+- const struct nft_expr *expr,
+- struct nft_regs *regs,
+- const struct nft_set_ext **ext)
++static const struct nft_set_ext *
++nft_rhash_update(struct nft_set *set, const u32 *key,
++ struct nft_elem_priv *(*new)(struct nft_set *, const struct nft_expr *,
++ struct nft_regs *regs),
++ const struct nft_expr *expr, struct nft_regs *regs)
+ {
+ struct nft_rhash *priv = nft_set_priv(set);
+ struct nft_rhash_elem *he, *prev;
+@@ -161,14 +159,13 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
+ }
+
+ out:
+- *ext = &he->ext;
+- return true;
++ return &he->ext;
+
+ err2:
+ nft_set_elem_destroy(set, &he->priv, true);
+ atomic_dec(&set->nelems);
+ err1:
+- return false;
++ return NULL;
+ }
+
+ static int nft_rhash_insert(const struct net *net, const struct nft_set *set,
+@@ -507,8 +504,9 @@ struct nft_hash_elem {
+ };
+
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_hash_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
+ struct nft_hash *priv = nft_set_priv(set);
+ u8 genmask = nft_genmask_cur(net);
+@@ -519,12 +517,10 @@ bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
+ hash = reciprocal_scale(hash, priv->buckets);
+ hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
+ if (!memcmp(nft_set_ext_key(&he->ext), key, set->klen) &&
+- nft_set_elem_active(&he->ext, genmask)) {
+- *ext = &he->ext;
+- return true;
+- }
++ nft_set_elem_active(&he->ext, genmask))
++ return &he->ext;
+ }
+- return false;
++ return NULL;
+ }
+
+ static struct nft_elem_priv *
+@@ -547,9 +543,9 @@ nft_hash_get(const struct net *net, const struct nft_set *set,
+ }
+
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_hash_lookup_fast(const struct net *net,
+- const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_hash_lookup_fast(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
+ struct nft_hash *priv = nft_set_priv(set);
+ u8 genmask = nft_genmask_cur(net);
+@@ -562,12 +558,10 @@ bool nft_hash_lookup_fast(const struct net *net,
+ hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
+ k2 = *(u32 *)nft_set_ext_key(&he->ext)->data;
+ if (k1 == k2 &&
+- nft_set_elem_active(&he->ext, genmask)) {
+- *ext = &he->ext;
+- return true;
+- }
++ nft_set_elem_active(&he->ext, genmask))
++ return &he->ext;
+ }
+- return false;
++ return NULL;
+ }
+
+ static u32 nft_jhash(const struct nft_set *set, const struct nft_hash *priv,
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 9e4e25f2458f99..793790d79d1384 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -397,37 +397,38 @@ int pipapo_refill(unsigned long *map, unsigned int len, unsigned int rules,
+ }
+
+ /**
+- * nft_pipapo_lookup() - Lookup function
+- * @net: Network namespace
+- * @set: nftables API set representation
+- * @key: nftables API element representation containing key data
+- * @ext: nftables API extension pointer, filled with matching reference
++ * pipapo_get() - Get matching element reference given key data
++ * @m: storage containing the set elements
++ * @data: Key data to be matched against existing elements
++ * @genmask: If set, check that element is active in given genmask
++ * @tstamp: timestamp to check for expired elements
+ *
+ * For more details, see DOC: Theory of Operation.
+ *
+- * Return: true on match, false otherwise.
++ * This is the main lookup function. It matches key data against either
++ * the working match set or the uncommitted copy, depending on what the
++ * caller passed to us.
++ * nft_pipapo_get (lookup from userspace/control plane) and nft_pipapo_lookup
++ * (datapath lookup) pass the active copy.
++ * The insertion path will pass the uncommitted working copy.
++ *
++ * Return: pointer to &struct nft_pipapo_elem on match, NULL otherwise.
+ */
+-bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++static struct nft_pipapo_elem *pipapo_get(const struct nft_pipapo_match *m,
++ const u8 *data, u8 genmask,
++ u64 tstamp)
+ {
+- struct nft_pipapo *priv = nft_set_priv(set);
+ struct nft_pipapo_scratch *scratch;
+ unsigned long *res_map, *fill_map;
+- u8 genmask = nft_genmask_cur(net);
+- const struct nft_pipapo_match *m;
+ const struct nft_pipapo_field *f;
+- const u8 *rp = (const u8 *)key;
+ bool map_index;
+ int i;
+
+ local_bh_disable();
+
+- m = rcu_dereference(priv->match);
+-
+- if (unlikely(!m || !*raw_cpu_ptr(m->scratch)))
+- goto out;
+-
+ scratch = *raw_cpu_ptr(m->scratch);
++ if (unlikely(!scratch))
++ goto out;
+
+ map_index = scratch->map_index;
+
+@@ -444,12 +445,12 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ * packet bytes value, then AND bucket value
+ */
+ if (likely(f->bb == 8))
+- pipapo_and_field_buckets_8bit(f, res_map, rp);
++ pipapo_and_field_buckets_8bit(f, res_map, data);
+ else
+- pipapo_and_field_buckets_4bit(f, res_map, rp);
++ pipapo_and_field_buckets_4bit(f, res_map, data);
+ NFT_PIPAPO_GROUP_BITS_ARE_8_OR_4;
+
+- rp += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
++ data += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
+
+ /* Now populate the bitmap for the next field, unless this is
+ * the last field, in which case return the matched 'ext'
+@@ -465,13 +466,15 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ scratch->map_index = map_index;
+ local_bh_enable();
+
+- return false;
++ return NULL;
+ }
+
+ if (last) {
+- *ext = &f->mt[b].e->ext;
+- if (unlikely(nft_set_elem_expired(*ext) ||
+- !nft_set_elem_active(*ext, genmask)))
++ struct nft_pipapo_elem *e;
++
++ e = f->mt[b].e;
++ if (unlikely(__nft_set_elem_expired(&e->ext, tstamp) ||
++ !nft_set_elem_active(&e->ext, genmask)))
+ goto next_match;
+
+ /* Last field: we're just returning the key without
+@@ -481,8 +484,7 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ */
+ scratch->map_index = map_index;
+ local_bh_enable();
+-
+- return true;
++ return e;
+ }
+
+ /* Swap bitmap indices: res_map is the initial bitmap for the
+@@ -492,112 +494,54 @@ bool nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
+ map_index = !map_index;
+ swap(res_map, fill_map);
+
+- rp += NFT_PIPAPO_GROUPS_PADDING(f);
++ data += NFT_PIPAPO_GROUPS_PADDING(f);
+ }
+
+ out:
+ local_bh_enable();
+- return false;
++ return NULL;
+ }
+
+ /**
+- * pipapo_get() - Get matching element reference given key data
++ * nft_pipapo_lookup() - Dataplane frontend for the main lookup function
+ * @net: Network namespace
+ * @set: nftables API set representation
+- * @m: storage containing active/existing elements
+- * @data: Key data to be matched against existing elements
+- * @genmask: If set, check that element is active in given genmask
+- * @tstamp: timestamp to check for expired elements
+- * @gfp: the type of memory to allocate (see kmalloc).
++ * @key: pointer to nft registers containing key data
++ *
++ * This function is called from the data path. It will search for
++ * an element matching the given key in the current active copy.
++ * Unlike other set types, this uses NFT_GENMASK_ANY instead of
++ * nft_genmask_cur().
+ *
+- * This is essentially the same as the lookup function, except that it matches
+- * key data against the uncommitted copy and doesn't use preallocated maps for
+- * bitmap results.
++ * This is because new (future) elements are not reachable from
++ * priv->match, they get added to priv->clone instead.
++ * When the commit phase flips the generation bitmask, the
++ * 'now old' entries are skipped but without the 'now current'
++ * elements becoming visible. Using nft_genmask_cur() thus creates
++ * inconsistent state: matching old entries get skipped but the
++ * newly matching entries are unreachable.
+ *
+- * Return: pointer to &struct nft_pipapo_elem on match, error pointer otherwise.
++ * NFT_GENMASK_ANY will still find the 'now old' entries, which ensures a consistent
++ * priv->match view.
++ *
++ * nft_pipapo_commit swaps ->clone and ->match shortly after the
++ * genbit flip. As ->clone doesn't contain the old entries in the first
++ * place, lookup will only find the now-current ones.
++ *
++ * Return: nftables API extension pointer or NULL if no match.
+ */
+-static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+- const struct nft_set *set,
+- const struct nft_pipapo_match *m,
+- const u8 *data, u8 genmask,
+- u64 tstamp, gfp_t gfp)
++const struct nft_set_ext *
++nft_pipapo_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
+- struct nft_pipapo_elem *ret = ERR_PTR(-ENOENT);
+- unsigned long *res_map, *fill_map = NULL;
+- const struct nft_pipapo_field *f;
+- int i;
+-
+- if (m->bsize_max == 0)
+- return ret;
+-
+- res_map = kmalloc_array(m->bsize_max, sizeof(*res_map), gfp);
+- if (!res_map) {
+- ret = ERR_PTR(-ENOMEM);
+- goto out;
+- }
+-
+- fill_map = kcalloc(m->bsize_max, sizeof(*res_map), gfp);
+- if (!fill_map) {
+- ret = ERR_PTR(-ENOMEM);
+- goto out;
+- }
+-
+- pipapo_resmap_init(m, res_map);
+-
+- nft_pipapo_for_each_field(f, i, m) {
+- bool last = i == m->field_count - 1;
+- int b;
+-
+- /* For each bit group: select lookup table bucket depending on
+- * packet bytes value, then AND bucket value
+- */
+- if (f->bb == 8)
+- pipapo_and_field_buckets_8bit(f, res_map, data);
+- else if (f->bb == 4)
+- pipapo_and_field_buckets_4bit(f, res_map, data);
+- else
+- BUG();
+-
+- data += f->groups / NFT_PIPAPO_GROUPS_PER_BYTE(f);
+-
+- /* Now populate the bitmap for the next field, unless this is
+- * the last field, in which case return the matched 'ext'
+- * pointer if any.
+- *
+- * Now res_map contains the matching bitmap, and fill_map is the
+- * bitmap for the next field.
+- */
+-next_match:
+- b = pipapo_refill(res_map, f->bsize, f->rules, fill_map, f->mt,
+- last);
+- if (b < 0)
+- goto out;
+-
+- if (last) {
+- if (__nft_set_elem_expired(&f->mt[b].e->ext, tstamp))
+- goto next_match;
+- if ((genmask &&
+- !nft_set_elem_active(&f->mt[b].e->ext, genmask)))
+- goto next_match;
+-
+- ret = f->mt[b].e;
+- goto out;
+- }
+-
+- data += NFT_PIPAPO_GROUPS_PADDING(f);
++ struct nft_pipapo *priv = nft_set_priv(set);
++ const struct nft_pipapo_match *m;
++ const struct nft_pipapo_elem *e;
+
+- /* Swap bitmap indices: fill_map will be the initial bitmap for
+- * the next field (i.e. the new res_map), and res_map is
+- * guaranteed to be all-zeroes at this point, ready to be filled
+- * according to the next mapping table.
+- */
+- swap(res_map, fill_map);
+- }
++ m = rcu_dereference(priv->match);
++ e = pipapo_get(m, (const u8 *)key, NFT_GENMASK_ANY, get_jiffies_64());
+
+-out:
+- kfree(fill_map);
+- kfree(res_map);
+- return ret;
++ return e ? &e->ext : NULL;
+ }
+
+ /**
+@@ -606,6 +550,11 @@ static struct nft_pipapo_elem *pipapo_get(const struct net *net,
+ * @set: nftables API set representation
+ * @elem: nftables API element representation containing key data
+ * @flags: Unused
++ *
++ * This function is called from the control plane path under
++ * RCU read lock.
++ *
++ * Return: set element private pointer or ERR_PTR(-ENOENT).
+ */
+ static struct nft_elem_priv *
+ nft_pipapo_get(const struct net *net, const struct nft_set *set,
+@@ -615,11 +564,10 @@ nft_pipapo_get(const struct net *net, const struct nft_set *set,
+ struct nft_pipapo_match *m = rcu_dereference(priv->match);
+ struct nft_pipapo_elem *e;
+
+- e = pipapo_get(net, set, m, (const u8 *)elem->key.val.data,
+- nft_genmask_cur(net), get_jiffies_64(),
+- GFP_ATOMIC);
+- if (IS_ERR(e))
+- return ERR_CAST(e);
++ e = pipapo_get(m, (const u8 *)elem->key.val.data,
++ nft_genmask_cur(net), get_jiffies_64());
++ if (!e)
++ return ERR_PTR(-ENOENT);
+
+ return &e->priv;
+ }
+@@ -1344,8 +1292,8 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ else
+ end = start;
+
+- dup = pipapo_get(net, set, m, start, genmask, tstamp, GFP_KERNEL);
+- if (!IS_ERR(dup)) {
++ dup = pipapo_get(m, start, genmask, tstamp);
++ if (dup) {
+ /* Check if we already have the same exact entry */
+ const struct nft_data *dup_key, *dup_end;
+
+@@ -1364,15 +1312,9 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
+ return -ENOTEMPTY;
+ }
+
+- if (PTR_ERR(dup) == -ENOENT) {
+- /* Look for partially overlapping entries */
+- dup = pipapo_get(net, set, m, end, nft_genmask_next(net), tstamp,
+- GFP_KERNEL);
+- }
+-
+- if (PTR_ERR(dup) != -ENOENT) {
+- if (IS_ERR(dup))
+- return PTR_ERR(dup);
++ /* Look for partially overlapping entries */
++ dup = pipapo_get(m, end, nft_genmask_next(net), tstamp);
++ if (dup) {
+ *elem_priv = &dup->priv;
+ return -ENOTEMPTY;
+ }
+@@ -1913,9 +1855,9 @@ nft_pipapo_deactivate(const struct net *net, const struct nft_set *set,
+ if (!m)
+ return NULL;
+
+- e = pipapo_get(net, set, m, (const u8 *)elem->key.val.data,
+- nft_genmask_next(net), nft_net_tstamp(net), GFP_KERNEL);
+- if (IS_ERR(e))
++ e = pipapo_get(m, (const u8 *)elem->key.val.data,
++ nft_genmask_next(net), nft_net_tstamp(net));
++ if (!e)
+ return NULL;
+
+ nft_set_elem_change_active(net, set, &e->ext);
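Across all set backends in this patch the lookup signature changes from returning bool with a **ext out-parameter to returning the extension pointer itself, NULL meaning no match, which drops a store on the hot path and makes misuse harder. The refactor in miniature, with illustrative types:

    #include <stddef.h>
    #include <stdio.h>

    struct ext { int val; };

    static struct ext table[] = { { 1 }, { 2 } };

    /* old style: boolean result plus out-parameter */
    static int lookup_old(size_t key, const struct ext **ext)
    {
            if (key < 2) {
                    *ext = &table[key];
                    return 1;
            }
            return 0;
    }

    /* new style: the returned pointer itself carries hit/miss */
    static const struct ext *lookup_new(size_t key)
    {
            return key < 2 ? &table[key] : NULL;
    }

    int main(void)
    {
            const struct ext *e;

            if (lookup_old(1, &e))
                    printf("old: %d\n", e->val);
            e = lookup_new(1);
            if (e)
                    printf("new: %d\n", e->val);
            return 0;
    }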
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index be7c16c79f711e..39e356c9687a98 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -1146,26 +1146,27 @@ static inline void pipapo_resmap_init_avx2(const struct nft_pipapo_match *m, uns
+ *
+ * Return: true on match, false otherwise.
+ */
+-bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
+ struct nft_pipapo *priv = nft_set_priv(set);
++ const struct nft_set_ext *ext = NULL;
+ struct nft_pipapo_scratch *scratch;
+- u8 genmask = nft_genmask_cur(net);
+ const struct nft_pipapo_match *m;
+ const struct nft_pipapo_field *f;
+ const u8 *rp = (const u8 *)key;
+ unsigned long *res, *fill;
+ bool map_index;
+- int i, ret = 0;
++ int i;
+
+ local_bh_disable();
+
+ if (unlikely(!irq_fpu_usable())) {
+- bool fallback_res = nft_pipapo_lookup(net, set, key, ext);
++ ext = nft_pipapo_lookup(net, set, key);
+
+ local_bh_enable();
+- return fallback_res;
++ return ext;
+ }
+
+ m = rcu_dereference(priv->match);
+@@ -1182,7 +1183,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ if (unlikely(!scratch)) {
+ kernel_fpu_end();
+ local_bh_enable();
+- return false;
++ return NULL;
+ }
+
+ map_index = scratch->map_index;
+@@ -1197,6 +1198,7 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ next_match:
+ nft_pipapo_for_each_field(f, i, m) {
+ bool last = i == m->field_count - 1, first = !i;
++ int ret = 0;
+
+ #define NFT_SET_PIPAPO_AVX2_LOOKUP(b, n) \
+ (ret = nft_pipapo_avx2_lookup_##b##b_##n(res, fill, f, \
+@@ -1244,13 +1246,12 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ goto out;
+
+ if (last) {
+- *ext = &f->mt[ret].e->ext;
+- if (unlikely(nft_set_elem_expired(*ext) ||
+- !nft_set_elem_active(*ext, genmask))) {
+- ret = 0;
++ const struct nft_set_ext *e = &f->mt[ret].e->ext;
++
++ if (unlikely(nft_set_elem_expired(e)))
+ goto next_match;
+- }
+
++ ext = e;
+ goto out;
+ }
+
+@@ -1264,5 +1265,5 @@ bool nft_pipapo_avx2_lookup(const struct net *net, const struct nft_set *set,
+ kernel_fpu_end();
+ local_bh_enable();
+
+- return ret >= 0;
++ return ext;
+ }
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 2e8ef16ff191d4..b1f04168ec9377 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -52,9 +52,9 @@ static bool nft_rbtree_elem_expired(const struct nft_rbtree_elem *rbe)
+ return nft_set_elem_expired(&rbe->ext);
+ }
+
+-static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext,
+- unsigned int seq)
++static const struct nft_set_ext *
++__nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key, unsigned int seq)
+ {
+ struct nft_rbtree *priv = nft_set_priv(set);
+ const struct nft_rbtree_elem *rbe, *interval = NULL;
+@@ -65,7 +65,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ parent = rcu_dereference_raw(priv->root.rb_node);
+ while (parent != NULL) {
+ if (read_seqcount_retry(&priv->count, seq))
+- return false;
++ return NULL;
+
+ rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+
+@@ -77,7 +77,9 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ nft_rbtree_interval_end(rbe) &&
+ nft_rbtree_interval_start(interval))
+ continue;
+- interval = rbe;
++ if (nft_set_elem_active(&rbe->ext, genmask) &&
++ !nft_rbtree_elem_expired(rbe))
++ interval = rbe;
+ } else if (d > 0)
+ parent = rcu_dereference_raw(parent->rb_right);
+ else {
+@@ -87,50 +89,46 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ }
+
+ if (nft_rbtree_elem_expired(rbe))
+- return false;
++ return NULL;
+
+ if (nft_rbtree_interval_end(rbe)) {
+ if (nft_set_is_anonymous(set))
+- return false;
++ return NULL;
+ parent = rcu_dereference_raw(parent->rb_left);
+ interval = NULL;
+ continue;
+ }
+
+- *ext = &rbe->ext;
+- return true;
++ return &rbe->ext;
+ }
+ }
+
+ if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
+- nft_set_elem_active(&interval->ext, genmask) &&
+- !nft_rbtree_elem_expired(interval) &&
+- nft_rbtree_interval_start(interval)) {
+- *ext = &interval->ext;
+- return true;
+- }
++ nft_rbtree_interval_start(interval))
++ return &interval->ext;
+
+- return false;
++ return NULL;
+ }
+
+ INDIRECT_CALLABLE_SCOPE
+-bool nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
+- const u32 *key, const struct nft_set_ext **ext)
++const struct nft_set_ext *
++nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
++ const u32 *key)
+ {
+ struct nft_rbtree *priv = nft_set_priv(set);
+ unsigned int seq = read_seqcount_begin(&priv->count);
+- bool ret;
++ const struct nft_set_ext *ext;
+
+- ret = __nft_rbtree_lookup(net, set, key, ext, seq);
+- if (ret || !read_seqcount_retry(&priv->count, seq))
+- return ret;
++ ext = __nft_rbtree_lookup(net, set, key, seq);
++ if (ext || !read_seqcount_retry(&priv->count, seq))
++ return ext;
+
+ read_lock_bh(&priv->lock);
+ seq = read_seqcount_begin(&priv->count);
+- ret = __nft_rbtree_lookup(net, set, key, ext, seq);
++ ext = __nft_rbtree_lookup(net, set, key, seq);
+ read_unlock_bh(&priv->lock);
+
+- return ret;
++ return ext;
+ }
+
+ static bool __nft_rbtree_get(const struct net *net, const struct nft_set *set,
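nft_rbtree_lookup() keeps its two-phase structure: an optimistic lockless walk validated by a seqcount, then a retry under priv->lock only when the first walk raced with a writer and found nothing. A single-threaded sketch of that optimistic-then-locked fallback (plain counter plus pthread mutex, no real concurrency here):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int seqcount;   /* even = stable, bumped around writes */
    static int value = 7;

    static const int *lookup_walk(int key, unsigned int seq)
    {
            /* a writer may be mid-update: treat torn reads as a miss */
            if (seqcount != seq || (seqcount & 1))
                    return NULL;
            return value == key ? &value : NULL;
    }

    static const int *lookup(int key)
    {
            unsigned int seq = seqcount;
            const int *res = lookup_walk(key, seq);

            if (res || seqcount == seq)     /* hit, or no writer interfered */
                    return res;

            pthread_mutex_lock(&lock);      /* slow path: stable view */
            res = lookup_walk(key, seqcount);
            pthread_mutex_unlock(&lock);
            return res;
    }

    int main(void)
    {
            printf("%s\n", lookup(7) ? "hit" : "miss");
            return 0;
    }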
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 104732d3454348..978c129c609501 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1836,6 +1836,9 @@ static int genl_bind(struct net *net, int group)
+ !ns_capable(net->user_ns, CAP_SYS_ADMIN))
+ ret = -EPERM;
+
++ if (ret)
++ break;
++
+ if (family->bind)
+ family->bind(i);
+
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index 73bc39281ef5f5..9b45fbdc90cabe 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -276,8 +276,6 @@ EXPORT_SYMBOL_GPL(rpc_destroy_wait_queue);
+
+ static int rpc_wait_bit_killable(struct wait_bit_key *key, int mode)
+ {
+- if (unlikely(current->flags & PF_EXITING))
+- return -EINTR;
+ schedule();
+ if (signal_pending_state(mode, current))
+ return -ERESTARTSYS;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index c5f7bbf5775ff8..3aa987e7f0724d 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -407,9 +407,9 @@ xs_sock_recv_cmsg(struct socket *sock, unsigned int *msg_flags, int flags)
+ iov_iter_kvec(&msg.msg_iter, ITER_DEST, &alert_kvec, 1,
+ alert_kvec.iov_len);
+ ret = sock_recvmsg(sock, &msg, flags);
+- if (ret > 0 &&
+- tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT) {
+- iov_iter_revert(&msg.msg_iter, ret);
++ if (ret > 0) {
++ if (tls_get_record_type(sock->sk, &u.cmsg) == TLS_RECORD_TYPE_ALERT)
++ iov_iter_revert(&msg.msg_iter, ret);
+ ret = xs_sock_process_cmsg(sock, &msg, msg_flags, &u.cmsg,
+ -EAGAIN);
+ }
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 72c000c0ae5f57..de331541fdb387 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -36,6 +36,20 @@
+ #define TX_BATCH_SIZE 32
+ #define MAX_PER_SOCKET_BUDGET (TX_BATCH_SIZE)
+
++struct xsk_addr_node {
++ u64 addr;
++ struct list_head addr_node;
++};
++
++struct xsk_addr_head {
++ u32 num_descs;
++ struct list_head addrs_list;
++};
++
++static struct kmem_cache *xsk_tx_generic_cache;
++
++#define XSKCB(skb) ((struct xsk_addr_head *)((skb)->cb))
++
+ void xsk_set_rx_need_wakeup(struct xsk_buff_pool *pool)
+ {
+ if (pool->cached_need_wakeup & XDP_WAKEUP_RX)
+@@ -528,24 +542,43 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
+ return dev->netdev_ops->ndo_xsk_wakeup(dev, xs->queue_id, flags);
+ }
+
+-static int xsk_cq_reserve_addr_locked(struct xsk_buff_pool *pool, u64 addr)
++static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
+ {
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&pool->cq_lock, flags);
+- ret = xskq_prod_reserve_addr(pool->cq, addr);
++ ret = xskq_prod_reserve(pool->cq);
+ spin_unlock_irqrestore(&pool->cq_lock, flags);
+
+ return ret;
+ }
+
+-static void xsk_cq_submit_locked(struct xsk_buff_pool *pool, u32 n)
++static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
++ struct sk_buff *skb)
+ {
++ struct xsk_addr_node *pos, *tmp;
++ u32 descs_processed = 0;
+ unsigned long flags;
++ u32 idx;
+
+ spin_lock_irqsave(&pool->cq_lock, flags);
+- xskq_prod_submit_n(pool->cq, n);
++ idx = xskq_get_prod(pool->cq);
++
++ xskq_prod_write_addr(pool->cq, idx,
++ (u64)(uintptr_t)skb_shinfo(skb)->destructor_arg);
++ descs_processed++;
++
++ if (unlikely(XSKCB(skb)->num_descs > 1)) {
++ list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) {
++ xskq_prod_write_addr(pool->cq, idx + descs_processed,
++ pos->addr);
++ descs_processed++;
++ list_del(&pos->addr_node);
++ kmem_cache_free(xsk_tx_generic_cache, pos);
++ }
++ }
++ xskq_prod_submit_n(pool->cq, descs_processed);
+ spin_unlock_irqrestore(&pool->cq_lock, flags);
+ }
+
+@@ -558,9 +591,14 @@ static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
+ spin_unlock_irqrestore(&pool->cq_lock, flags);
+ }
+
++static void xsk_inc_num_desc(struct sk_buff *skb)
++{
++ XSKCB(skb)->num_descs++;
++}
++
+ static u32 xsk_get_num_desc(struct sk_buff *skb)
+ {
+- return skb ? (long)skb_shinfo(skb)->destructor_arg : 0;
++ return XSKCB(skb)->num_descs;
+ }
+
+ static void xsk_destruct_skb(struct sk_buff *skb)
+@@ -572,23 +610,33 @@ static void xsk_destruct_skb(struct sk_buff *skb)
+ *compl->tx_timestamp = ktime_get_tai_fast_ns();
+ }
+
+- xsk_cq_submit_locked(xdp_sk(skb->sk)->pool, xsk_get_num_desc(skb));
++ xsk_cq_submit_addr_locked(xdp_sk(skb->sk)->pool, skb);
+ sock_wfree(skb);
+ }
+
+-static void xsk_set_destructor_arg(struct sk_buff *skb)
++static void xsk_set_destructor_arg(struct sk_buff *skb, u64 addr)
+ {
+- long num = xsk_get_num_desc(xdp_sk(skb->sk)->skb) + 1;
+-
+- skb_shinfo(skb)->destructor_arg = (void *)num;
++ BUILD_BUG_ON(sizeof(struct xsk_addr_head) > sizeof(skb->cb));
++ INIT_LIST_HEAD(&XSKCB(skb)->addrs_list);
++ XSKCB(skb)->num_descs = 0;
++ skb_shinfo(skb)->destructor_arg = (void *)(uintptr_t)addr;
+ }
+
+ static void xsk_consume_skb(struct sk_buff *skb)
+ {
+ struct xdp_sock *xs = xdp_sk(skb->sk);
++ u32 num_descs = xsk_get_num_desc(skb);
++ struct xsk_addr_node *pos, *tmp;
++
++ if (unlikely(num_descs > 1)) {
++ list_for_each_entry_safe(pos, tmp, &XSKCB(skb)->addrs_list, addr_node) {
++ list_del(&pos->addr_node);
++ kmem_cache_free(xsk_tx_generic_cache, pos);
++ }
++ }
+
+ skb->destructor = sock_wfree;
+- xsk_cq_cancel_locked(xs->pool, xsk_get_num_desc(skb));
++ xsk_cq_cancel_locked(xs->pool, num_descs);
+ /* Free skb without triggering the perf drop trace */
+ consume_skb(skb);
+ xs->skb = NULL;
+@@ -605,6 +653,7 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
+ {
+ struct xsk_buff_pool *pool = xs->pool;
+ u32 hr, len, ts, offset, copy, copied;
++ struct xsk_addr_node *xsk_addr;
+ struct sk_buff *skb = xs->skb;
+ struct page *page;
+ void *buffer;
+@@ -619,6 +668,19 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
+ return ERR_PTR(err);
+
+ skb_reserve(skb, hr);
++
++ xsk_set_destructor_arg(skb, desc->addr);
++ } else {
++ xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL);
++ if (!xsk_addr)
++ return ERR_PTR(-ENOMEM);
++
++ /* in case of the -EOVERFLOW that could happen below,
++ * xsk_consume_skb() will release this node, as the whole skb
++ * would be dropped, which implies freeing all list elements
++ */
++ xsk_addr->addr = desc->addr;
++ list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list);
+ }
+
+ addr = desc->addr;
+@@ -690,8 +752,11 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ err = skb_store_bits(skb, 0, buffer, len);
+ if (unlikely(err))
+ goto free_err;
++
++ xsk_set_destructor_arg(skb, desc->addr);
+ } else {
+ int nr_frags = skb_shinfo(skb)->nr_frags;
++ struct xsk_addr_node *xsk_addr;
+ struct page *page;
+ u8 *vaddr;
+
+@@ -706,12 +771,22 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ goto free_err;
+ }
+
++ xsk_addr = kmem_cache_zalloc(xsk_tx_generic_cache, GFP_KERNEL);
++ if (!xsk_addr) {
++ __free_page(page);
++ err = -ENOMEM;
++ goto free_err;
++ }
++
+ vaddr = kmap_local_page(page);
+ memcpy(vaddr, buffer, len);
+ kunmap_local(vaddr);
+
+ skb_add_rx_frag(skb, nr_frags, page, 0, len, PAGE_SIZE);
+ refcount_add(PAGE_SIZE, &xs->sk.sk_wmem_alloc);
++
++ xsk_addr->addr = desc->addr;
++ list_add_tail(&xsk_addr->addr_node, &XSKCB(skb)->addrs_list);
+ }
+
+ if (first_frag && desc->options & XDP_TX_METADATA) {
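Note the ordering in the copy-mode branch: the tracking node is allocated right after the page and before any data is copied, so the only new failure path simply frees the page and unwinds. A compact model of that acquire-everything-then-commit ordering:

    #include <stdlib.h>

    /* Acquire both resources up front; publish only if both succeed. */
    static int add_frag(void **out_page, void **out_node)
    {
        void *page = malloc(4096);      /* alloc_page() stand-in */
        void *node;

        if (!page)
            return -1;
        node = malloc(32);              /* kmem_cache_zalloc() stand-in */
        if (!node) {
            free(page);                 /* cf. __free_page(); -ENOMEM */
            return -1;
        }
        *out_page = page;
        *out_node = node;
        return 0;
    }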
+@@ -755,7 +830,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ skb->mark = READ_ONCE(xs->sk.sk_mark);
+ skb->destructor = xsk_destruct_skb;
+ xsk_tx_metadata_to_compl(meta, &skb_shinfo(skb)->xsk_meta);
+- xsk_set_destructor_arg(skb);
++ xsk_inc_num_desc(skb);
+
+ return skb;
+
+@@ -765,7 +840,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+
+ if (err == -EOVERFLOW) {
+ /* Drop the packet */
+- xsk_set_destructor_arg(xs->skb);
++ xsk_inc_num_desc(xs->skb);
+ xsk_drop_skb(xs->skb);
+ xskq_cons_release(xs->tx);
+ } else {
+@@ -807,7 +882,7 @@ static int __xsk_generic_xmit(struct sock *sk)
+ * if there is space in it. This avoids having to implement
+ * any buffering in the Tx path.
+ */
+- err = xsk_cq_reserve_addr_locked(xs->pool, desc.addr);
++ err = xsk_cq_reserve_locked(xs->pool);
+ if (err) {
+ err = -EAGAIN;
+ goto out;
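The xmit loop now only reserves room in the completion queue (xsk_cq_reserve_locked) instead of reserving and writing the address in one step; the address is filled in later, at submit time. Reserve-then-write-later in miniature (illustrative struct, not the kernel's xsk_queue):

    #include <stdint.h>

    struct cq {
        uint32_t cached_prod;   /* reserved, not yet published */
        uint32_t cons;
        uint32_t nentries;
    };

    /* Claim a slot without writing anything into it yet. */
    static int cq_reserve(struct cq *q)
    {
        if (q->cached_prod - q->cons == q->nentries)
            return -1;          /* full: caller turns this into -EAGAIN */
        q->cached_prod++;
        return 0;
    }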
+@@ -1795,8 +1870,18 @@ static int __init xsk_init(void)
+ if (err)
+ goto out_pernet;
+
++ xsk_tx_generic_cache = kmem_cache_create("xsk_generic_xmit_cache",
++ sizeof(struct xsk_addr_node),
++ 0, SLAB_HWCACHE_ALIGN, NULL);
++ if (!xsk_tx_generic_cache) {
++ err = -ENOMEM;
++ goto out_unreg_notif;
++ }
++
+ return 0;
+
++out_unreg_notif:
++ unregister_netdevice_notifier(&xsk_netdev_notifier);
+ out_pernet:
+ unregister_pernet_subsys(&xsk_net_ops);
+ out_sk:
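The init hunk extends the usual goto error ladder: the new kmem_cache_create() failure rung unregisters exactly what succeeded before it (the netdev notifier), then falls through to the older rungs. The pattern, stripped to its bones with malloc stand-ins:

    #include <stdlib.h>

    static int subsys_init(void)
    {
        void *pernet, *notif, *cache;

        pernet = malloc(1);             /* register_pernet_subsys() stand-in */
        if (!pernet)
            goto out;
        notif = malloc(1);              /* notifier registration stand-in */
        if (!notif)
            goto out_pernet;
        cache = malloc(1);              /* kmem_cache_create() stand-in */
        if (!cache)
            goto out_notif;             /* the new rung */
        return 0;                       /* registrations stay live */

    out_notif:
        free(notif);
    out_pernet:
        free(pernet);
    out:
        return -1;
    }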
+diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
+index 46d87e961ad6d3..f16f390370dc43 100644
+--- a/net/xdp/xsk_queue.h
++++ b/net/xdp/xsk_queue.h
+@@ -344,6 +344,11 @@ static inline u32 xskq_cons_present_entries(struct xsk_queue *q)
+
+ /* Functions for producers */
+
++static inline u32 xskq_get_prod(struct xsk_queue *q)
++{
++ return READ_ONCE(q->ring->producer);
++}
++
+ static inline u32 xskq_prod_nb_free(struct xsk_queue *q, u32 max)
+ {
+ u32 free_entries = q->nentries - (q->cached_prod - q->cached_cons);
+@@ -390,6 +395,13 @@ static inline int xskq_prod_reserve_addr(struct xsk_queue *q, u64 addr)
+ return 0;
+ }
+
++static inline void xskq_prod_write_addr(struct xsk_queue *q, u32 idx, u64 addr)
++{
++ struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
++
++ ring->desc[idx & q->ring_mask] = addr;
++}
++
+ static inline void xskq_prod_write_addr_batch(struct xsk_queue *q, struct xdp_desc *descs,
+ u32 nb_entries)
+ {
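xskq_get_prod() reads the producer index with READ_ONCE because the ring is shared with user space: the load must happen exactly once and without tearing. The closest portable analogue is a relaxed C11 atomic load:

    #include <stdatomic.h>
    #include <stdint.h>

    struct shared_ring { _Atomic uint32_t producer; };

    static uint32_t get_prod(struct shared_ring *r)
    {
        /* one untorn load, no compiler re-reads -- cf. READ_ONCE() */
        return atomic_load_explicit(&r->producer, memory_order_relaxed);
    }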
+diff --git a/samples/ftrace/ftrace-direct-modify.c b/samples/ftrace/ftrace-direct-modify.c
+index cfea7a38befb05..da3a9f2091f55b 100644
+--- a/samples/ftrace/ftrace-direct-modify.c
++++ b/samples/ftrace/ftrace-direct-modify.c
+@@ -75,8 +75,8 @@ asm (
+ CALL_DEPTH_ACCOUNT
+ " call my_direct_func1\n"
+ " leave\n"
+-" .size my_tramp1, .-my_tramp1\n"
+ ASM_RET
++" .size my_tramp1, .-my_tramp1\n"
+
+ " .type my_tramp2, @function\n"
+ " .globl my_tramp2\n"
+diff --git a/tools/testing/selftests/net/can/config b/tools/testing/selftests/net/can/config
+new file mode 100644
+index 00000000000000..188f7979667097
+--- /dev/null
++++ b/tools/testing/selftests/net/can/config
+@@ -0,0 +1,3 @@
++CONFIG_CAN=m
++CONFIG_CAN_DEV=m
++CONFIG_CAN_VCAN=m
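The new selftest config builds the CAN core, the device layer, and the virtual CAN driver as modules, which is what a self-contained CAN test needs. For context, a minimal raw SocketCAN open against a vcan interface, using the standard API described in Documentation/networking/can.rst ("vcan0" is assumed to have been created and brought up beforehand):

    #include <linux/can.h>
    #include <linux/can/raw.h>
    #include <net/if.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int open_vcan0(void)
    {
        struct sockaddr_can addr = { .can_family = AF_CAN };
        struct ifreq ifr = { 0 };
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

        if (s < 0)
            return -1;
        strcpy(ifr.ifr_name, "vcan0");           /* fits in IFNAMSIZ */
        if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) {  /* name -> ifindex */
            close(s);
            return -1;
        }
        addr.can_ifindex = ifr.ifr_ifindex;
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(s);
            return -1;
        }
        return s;                                /* ready for read()/write() */
    }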
diff --git a/2991_libbpf_add_WERROR_option.patch b/2991_libbpf_add_WERROR_option.patch
index e8649909..39d485f9 100644
--- a/2991_libbpf_add_WERROR_option.patch
+++ b/2991_libbpf_add_WERROR_option.patch
@@ -1,17 +1,6 @@
Subject: [PATCH] tools/libbpf: add WERROR option
-Date: Sat, 5 Jul 2025 11:43:12 +0100
-Message-ID: <7e6c41e47c6a8ab73945e6aac319e0dd53337e1b.1751712192.git.sam@gentoo.org>
-X-Mailer: git-send-email 2.50.0
-Precedence: bulk
-X-Mailing-List: bpf@vger.kernel.org
-List-Id: <bpf.vger.kernel.org>
-List-Subscribe: <mailto:bpf+subscribe@vger.kernel.org>
-List-Unsubscribe: <mailto:bpf+unsubscribe@vger.kernel.org>
-MIME-Version: 1.0
-Content-Transfer-Encoding: 8bit
Check the 'WERROR' variable and suppress adding '-Werror' if WERROR=0.
-
This mirrors what tools/perf and other directories in tools do to handle
-Werror rather than adding it unconditionally.